[
{
"msg_contents": "\n\n\n(This was originally posted on the Admin list where it was suggested it\nshould be posted to pgsql-hackers.)\n\nHi all, I'm the database analyst for PC Week and am comparing\nPostgreSQL with Inprise's InterBase (which will be open sourced later this\nyear). I wrote the features list that is circulating around the list right\nnow, posted by Marc Fournier.\n\nAs part of this project, I'm running a benchmark with a mix of OLTP\nand DSS queries in a set of various mixes and at a variety of user loads\n(up to 100 concurrent users). I'd like to get your tuning suggestions on\nthe engine to make sure I am not missing anything.\n\nMy test server is a departmental-type machine with two Pentium III 450MHz\nCPUs with 512MB of RAM running RedHat Linux 6.1. PGDATA is pointing to a\nRAID 5 array and the database is ~40 MB of data before indexing, and so\nwill fit entirely into the db cache. The OS swapfile is not used at all. I\nwill be using ODBC to query the database. I compiled with -m486 and am\nusing a page size of 2KB instead of 8KB as the benchmark is mostly\nOLTP-type queries.\n\n1. The biggest performance item I've seen in looking through the mailing\nlists is the fsync option. I want to leave this enabled as I don't think a\ntransactional database should ever lose data. My understanding is that\nwith it on PG checkpoints after every commit. Is there a way to let the\nlog grow to a certain size before checkpointing? When fsync is off, how is\ndata loss possible?\n\n2. Can I move the log to a different spindle from the disks the\ndatabase data is on? The manuals seem to indicate the log is actually\npart of the datafile itself, which would imply it can't be moved\nelsewhere.\n\n3. Any other suggestions are much appreciated.\n\nRegards,\nTim Dyck\nSenior Analyst, PC Week Labs\[email protected]\n519-746-4241\n\n\n",
"msg_date": "Mon, 31 Jan 2000 17:29:05 -0500",
"msg_from": "Timothy Dyck <[email protected]>",
"msg_from_op": true,
"msg_subject": "request for tuning suggestions from PC Week Labs"
},
{
"msg_contents": "Timothy Dyck <[email protected]> writes:\n> 1. The biggest performance item I've seen in looking through the mailing\n> lists is the fsync option. I want to leave this enabled as I don't think a\n> transactional database should ever lose data. My understanding is that\n> with it on PG checkpoints after every commit. Is there a way to let the\n> log grow to a certain size before checkpointing? When fsync is off, how is\n> data loss possible?\n\nWith fsync on, pgsql does fsync() after every write, which essentially\nmeans you get zero overlap of computation and I/O. Horribly\ninefficient.\n\nWith fsync off, we don't do the fsync() call. The data is still\npushed out to the Unix kernel at the same times, but the kernel's disk\nscheduler has discretion about what order the disk pages actually get\nsent to disk in. Also, you get fewer physical writes when several\nsuccessive transactions modify the same disk page. On most Unixes this\nmakes for a vast performance improvement.\n\nThe risk scenario here is that the pg_log update saying that your\ntransaction has committed might get physically written out before\nthe data pages that contain the actual tuples do. We write the\npg_log page last, of course, but the kernel might reorder the physical\nwrites.\n\nIf the pg_log update gets written, and some but not all of the updated\ndata pages have been written, and you suffer a system crash, then after\nreboot it appears that some but not all of the changes made by your\ntransaction have \"stuck\". That counts as data corruption for most\napplications.\n\nNote that I'm talking about an actual system crash: power failure,\nhardware failure, or kernel failure. A crash of the Postgres backend\ndoes *not* create this hazard. Also note that a crash does not create\na corruption hazard unless pg_log says that the incomplete transaction\ncommitted.\n\nMy feeling is that if you have a UPS and a reliable kernel, there is\nno meaningful reliability benefit from keeping fsync on --- certainly\nnot enough to justify the performance hit. The above risk analysis\nignores non-software risk issues, when in fact there are big risks\nat the hardware level. One of the more obvious ones is that modern\ndisk drives do a certain amount of traffic reordering themselves.\nIf your drive acts like that, I don't see that fsync buys anything\nat all, except perhaps protection against kernel crashes. My experience\n(on HPUX) is that the kernel's MTBF is little worse than the disk drive's,\nso I don't use fsync. YMMV.\n\n> 2. Can I move the log to a different spindle from the disks the\n> database data is on? The manuals seem to indicate the log is actually\n> part of the datafile itself, which would imply it can't be moved\n> elsewhere.\n\nYou could move pg_log to a different drive but it probably wouldn't buy\nmuch. pg_log only contains a commit/no commit flag for each\ntransaction, not copies of data, so there's not that much traffic there.\n\nPeople have reported seeing wins from moving indexes on big tables to\nseparate drives. We don't currently have any nice neat GUI for that,\nbut you can kluge it with a few symbolic links. Better methods are\nunder discussion...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 Jan 2000 21:37:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] request for tuning suggestions from PC Week Labs "
}
] |
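The symbolic-link kluge Tom Lane mentions for moving an index to a separate drive can be sketched as follows. This is a hypothetical illustration only: the directory and relation file names are made-up stand-ins (real relation files live under $PGDATA/base/<dbname> and the server must be stopped while they are moved).

```shell
# Sketch of the symlink kluge: move an index's relation file to a
# second disk and leave a symlink behind. All paths are illustrative
# stand-ins, created here as throwaway temp directories.
PGDATA=$(mktemp -d)        # stand-in for $PGDATA/base/<dbname>
FASTDISK=$(mktemp -d)      # stand-in for a mount point on another spindle

touch "$PGDATA/big_table_idx"                    # stand-in for the index file
mv "$PGDATA/big_table_idx" "$FASTDISK/"          # relocate the file
ln -s "$FASTDISK/big_table_idx" "$PGDATA/big_table_idx"

# The backend still opens the original path; the I/O lands on the other disk.
readlink "$PGDATA/big_table_idx"
```

The backend is none the wiser because it opens the same path as before; only the kernel follows the link to the other spindle.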
[
{
"msg_contents": "I received the following from the Apache people:\n\n>The financial costs were actually fairly negligible; on\n>the order of US$500, I believe. Most of the effort went\n>into discussions about the structure, the bylaws, and the\n>articles of incorporation. I have raised this topic\n>to our board of directors, as it *does* seem as though\n>a case-study Web page might be useful. I've also asked\n>the treasurer and incorporator to recap the expenses.\n>Hopefully we will get back to you.. :-)\n\nI'm obviously looking forward to further information, but this at least\nsounds hopeful. The costs are not too high, and I presume one could build\non the work they have done on their by-laws and articles, thereby\nshort-circuiting that process.\n\nWhat do other people think?\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 01 Feb 2000 11:53:30 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "The Apache Model (was Re: Copyright)"
}
] |
[
{
"msg_contents": "\nis there any reason why we can't make the permissions on pg_hba.conf 600\nvs 400? the data directory itself is only readable by the 'superuser'...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 31 Jan 2000 23:10:34 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "reduce pg_hba.conf restrictions ..."
},
{
"msg_contents": "\nOn 01-Feb-00 The Hermit Hacker wrote:\n> \n> is there any reason why we can't make the permissions on pg_hba.conf 600\n> vs 400? the data directory itself is only readable by the 'superuser'...\n\nDepends on what you edit with. If you use vi you can override the perms,\nif you use ee (like I do) you swear a lot then change them yourself :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 31 Jan 2000 22:30:34 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] reduce pg_hba.conf restrictions ..."
},
{
"msg_contents": "On Mon, 31 Jan 2000, Vince Vielhaber wrote:\n\n> \n> On 01-Feb-00 The Hermit Hacker wrote:\n> > \n> > is there any reason why we can't make the permissions on pg_hba.conf 600\n> > vs 400? the data directory itself is only readable by the 'superuser'...\n> \n> Depends on what you edit with. If you use vi you can override the perms,\n> if you use ee (like I do) you swear a lot then change them yourself :)\n\nI use vi and \"swear a lot then change them yourself\" :)\n\nbut, why are we bothering to swear instead of just changing them, is my\nquestion :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 31 Jan 2000 23:34:07 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] reduce pg_hba.conf restrictions ..."
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> is there any reason why we can't make the permissions on pg_hba.conf 600\n> vs 400? the data directory itself is only readable by the 'superuser'...\n\nI think the motivation may have been to prevent an attacker who manages\nto connect as superuser from overwriting the pg_hba.conf file with\nsomething more liberal (using backend-side COPY). However, if he's\nalready managed to connect as superuser, it's difficult to see what\nhe needs more-liberal connection privileges for.\n\n600 does seem a lot more convenient for the admin. 400 might save\nthe admin from some simple kinds of human error --- but not if he's\nalready in the habit of overriding the protection whenever he updates\nthe file.\n\nIn short, I agree. Does anyone else see any real security gain from\nmaking it 400?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 Jan 2000 22:43:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] reduce pg_hba.conf restrictions ... "
}
] |
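The change under discussion is a one-bit difference in the file mode: 600 (owner read+write) versus 400 (owner read-only), with the data directory itself staying private. A minimal sketch, using a throwaway temp directory as a stand-in for a real $PGDATA (the `stat -c` flag is GNU coreutils syntax):

```shell
# Sketch of the proposed permission change on pg_hba.conf.
# The directory is a throwaway stand-in for a real $PGDATA.
PGDATA=$(mktemp -d)
touch "$PGDATA/pg_hba.conf"

chmod 400 "$PGDATA/pg_hba.conf"   # current behaviour: owner read-only
chmod 600 "$PGDATA/pg_hba.conf"   # proposed: owner read+write
chmod 700 "$PGDATA"               # the data directory itself stays private

stat -c %a "$PGDATA/pg_hba.conf"  # prints: 600
```

With the directory itself at 700, only the postgres superuser can reach the file either way, which is the point Tom makes about 400 adding little real security.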
[
{
"msg_contents": "Okay, I'm running into two things that I would expect to work. \nI've included a simple test case for both to reproduce the problem. \n\n1) Obviously, the first two work and the third does not. \nAre these bugs?\n\n2) Cannot create index on timestamp column\n\nbasement=> select version();\nversion \n-------------------------------------------------------------------\nPostgreSQL 6.5.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66\n(1 row)\nbasement=> select 'hello' where 1 in (select 1);\n?column?\n--------\nhello \n(1 row)\nbasement=> select 'hello' where 1 in (select 2);\n?column?\n--------\n(0 rows)\nbasement=> select 'hello' where 1 in (select 2 union select 1);\nERROR: parser: parse error at or near \"union\"\nbasement=> \n\n\nAnd then, I find that I cannot create an index on a \ntimestamp column;\n\nbasement=> create table ts (t timestamp);\nCREATE\nbasement=> create index ttt on ts(t); \nERROR: Can't find a default operator class for type 1296.\nbasement=> \n\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n http://www.mobygames.com\n",
"msg_date": "Tue, 1 Feb 2000 01:48:54 -0600",
"msg_from": "Brian Hirt <[email protected]>",
"msg_from_op": true,
"msg_subject": "union in an in clause and timestamp"
},
{
"msg_contents": "Brian Hirt <[email protected]> writes:\n> Okay, I'm running into two things that I would expect to work. \n\n> basement=> select 'hello' where 1 in (select 2 union select 1);\n> ERROR: parser: parse error at or near \"union\"\n\nUNION isn't currently supported in sub-selects. Hopefully we can make\nit work after the long-threatened querytree redesign. But right now,\nthe union code is so crufty that no one wants to touch it...\n\n> And then, I find that I cannot create an index on a \n> timestamp column;\n> basement=> create index ttt on ts(t); \n> ERROR: Can't find a default operator class for type 1296.\n\nFor the moment, use one of the other time-related types instead.\nAfter the dust settles from Thomas' upcoming consolidation of the\ndate/time datatypes, I expect everything that remains will have a\ncomplete set of operators and index support.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Feb 2000 20:17:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] union in an in clause and timestamp "
}
] |
[
{
"msg_contents": "Oops, I think my majordomo problems are caused by me. Ignore previous\nmessage.\n",
"msg_date": "Tue, 01 Feb 2000 19:36:25 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "majordomo"
}
] |
[
{
"msg_contents": "Just downloaded a completely fresh cvs copy. When I\ndo initdb...\n\nThis user will own all the files and must also own the server process.\n\nCreating Postgres database system directory /home/pghack/pgsql/data\n\nCreating Postgres database system directory /home/pghack/pgsql/data/base\n\nCreating template database in /home/pghack/pgsql/data/base/template1\nERROR: Error: unknown type 'oidvector'.\n\nERROR: Error: unknown type 'oidvector'.\n\n syntax error 12 : parse errorinitdb: could not create template\ndatabase\n",
"msg_date": "Tue, 01 Feb 2000 23:14:33 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem in current CVS"
},
{
"msg_contents": "Did you do a 'make clean'?\n\n> Just downloaded a completely fresh cvs copy. When I\n> do initdb...\n> \n> This user will own all the files and must also own the server process.\n> \n> Creating Postgres database system directory /home/pghack/pgsql/data\n> \n> Creating Postgres database system directory /home/pghack/pgsql/data/base\n> \n> Creating template database in /home/pghack/pgsql/data/base/template1\n> ERROR: Error: unknown type 'oidvector'.\n> \n> ERROR: Error: unknown type 'oidvector'.\n> \n> syntax error 12 : parse errorinitdb: could not create template\n> database\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 1 Feb 2000 08:47:43 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem in current CVS"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Did you do a 'make clean'?\n\nOops, my different installations are getting mixed up. My fault.\n\n\n> \n> > Just downloaded a completely fresh cvs copy. When I\n> > do initdb...\n> >\n> > This user will own all the files and must also own the server process.\n> >\n> > Creating Postgres database system directory /home/pghack/pgsql/data\n> >\n> > Creating Postgres database system directory /home/pghack/pgsql/data/base\n> >\n> > Creating template database in /home/pghack/pgsql/data/base/template1\n> > ERROR: Error: unknown type 'oidvector'.\n> >\n> > ERROR: Error: unknown type 'oidvector'.\n> >\n> > syntax error 12 : parse errorinitdb: could not create template\n> > database\n> >\n> > ************\n> >\n> \n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \nChris Bitmead\nmailto:[email protected]\nhttp://www.techphoto.org - Photography News, Stuff that Matters\n",
"msg_date": "Wed, 02 Feb 2000 01:06:50 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Problem in current CVS"
},
{
"msg_contents": "The strangest thing started happening on one of my boxes today running\nPostgre.\n\nI'm using a function to total up values in a table.. The exact same code is\non our development and live server yet on the live server I get :\n\n\nERROR: stat failed on file '${exec_prefix}/lib/plpgsql.so': No such file or\ndirectory\n\n\nAnytime the function is called.\n\nThe plpgsql.so library is in the exact same place on both machine with the\nexact same permissions.\n\nI can't find any reference to the $exec_prefix variable on either machine,\nyet it works on the development server and not on the live server.\n\nThanks for *any* hints, help or ideas.\n\n-Mitch\n\n\n",
"msg_date": "Tue, 1 Feb 2000 10:07:59 -0500",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "plpgsql problem.."
},
{
"msg_contents": "Mitch Vincent wrote:\n\n> ERROR: stat failed on file '${exec_prefix}/lib/plpgsql.so': No such file or\n> directory\n\n> I can't find any reference to the $exec_prefix variable on either machine,\n> yet it works on the development server and not on the live server.\n\nPostgreSQL does not expand environment variables when looking for\nfunction code. Presumably your installer is broken and did not\nsubstitute the variable at install time on the affected system. You\nmight dump your function catalog on both systems to compare - if the\nfunction path on the sane system contains a variable as well, there is\nsome strange magic going on there.\n\nSevo\n",
"msg_date": "Tue, 01 Feb 2000 16:28:33 +0100",
"msg_from": "Sevo Stille <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plpgsql problem.."
},
{
"msg_contents": "This might sound like an ignorant question but how does one dump the\nfunction catalog?\n\n----- Original Message -----\nFrom: Sevo Stille <[email protected]>\nTo: Mitch Vincent <[email protected]>\nCc: Postgres Hackers List <[email protected]>\nSent: Tuesday, February 01, 2000 10:28 AM\nSubject: Re: [HACKERS] plpgsql problem..\n\n\n> Mitch Vincent wrote:\n>\n> > ERROR: stat failed on file '${exec_prefix}/lib/plpgsql.so': No such\nfile or\n> > directory\n>\n> > I can't find any reference to the $exec_prefix variable on either\nmachine,\n> > yet it works on the development server and not on the live server.\n>\n> PostgreSQL does not expand environment variables when looking for\n> function code. Presumably your installer is broken and did not\n> substitute the variable at install time on the affected system. You\n> might dump your function catalog on both systems to compare - if the\n> function path on the sane system contains a variable as well, there is\n> some strange magic going on there.\n>\n> Sevo\n>\n\n",
"msg_date": "Tue, 1 Feb 2000 10:31:08 -0500",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plpgsql problem.."
},
{
"msg_contents": "Just an additional comment.\n\nI re-configured and re-installed Postgre and there is no change...\n\nI'm baffled....\n\n----- Original Message -----\nFrom: Sevo Stille <[email protected]>\nTo: Mitch Vincent <[email protected]>\nCc: Postgres Hackers List <[email protected]>\nSent: Tuesday, February 01, 2000 10:28 AM\nSubject: Re: [HACKERS] plpgsql problem..\n\n\n> Mitch Vincent wrote:\n>\n> > ERROR: stat failed on file '${exec_prefix}/lib/plpgsql.so': No such\nfile or\n> > directory\n>\n> > I can't find any reference to the $exec_prefix variable on either\nmachine,\n> > yet it works on the development server and not on the live server.\n>\n> PostgreSQL does not expand environment variables when looking for\n> function code. Presumably your installer is broken and did not\n> substitute the variable at install time on the affected system. You\n> might dump your function catalog on both systems to compare - if the\n> function path on the sane system contains a variable as well, there is\n> some strange magic going on there.\n>\n> Sevo\n>\n\n",
"msg_date": "Tue, 1 Feb 2000 10:58:50 -0500",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plpgsql problem.."
},
{
"msg_contents": "\"Mitch Vincent\" <[email protected]> writes:\n> This might sound like an ignorant question but how does one dump the\n> function catalog?\n\nTry\n\tselect * from pg_proc where proname = 'functionOfInterest';\n\nI think Sevo has identified the problem though: the CREATE FUNCTION\ncommand for the plpgsql_call_handler function needs to give an exact\npath name. What you are showing looks like the command tried to use an\nenvironment variable and the substitution didn't happen. Better review\nthe procedure you used to install plpgsql. I'd recommend using the\ncreatelang script, btw, not doing it by hand.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Feb 2000 11:08:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plpgsql problem.. "
},
{
"msg_contents": "Mitch Vincent wrote:\n> \n> This might sound like an ignorant question but how does one dump the\n> function catalog?\n\nThe functions are in pg_proc. So generally, it would be \"select * from\npg_proc\". For the given problem, \"select proname,probin from pg_proc;\"\nwould be sufficient. Dump to a importable set of SQL statements, as in\npg_dump, can't be done - restoring a system table would hose the id\nreferences, so exporting to a restorable format is of no use.\n\nSevo\n",
"msg_date": "Tue, 01 Feb 2000 17:21:12 +0100",
"msg_from": "Sevo Stille <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plpgsql problem.."
},
{
"msg_contents": "I used mklang.sql in the plpgsql directory to install the language.. It's\nthe same thing I used on the working devel server..\n\n--\n-- PL/pgSQL language declaration\n--\n-- $Header: /usr/local/cvsroot/pgsql/src/pl/plpgsql/src/mklang.sql.in,v 1.4\n1999/05/11 22:57:50 tgl Exp $\n--\n\ncreate function plpgsql_call_handler() returns opaque\n as '/usr/local/pgsql/plpgsql.so'\n lib/language 'C';\n\ncreate trusted procedural language 'plpgsql'\n handler plpgsql_call_handler\n lancompiler 'PL/pgSQL';\n\nThat's the contents of that file and /usr/local/pgsql/plpgsql.so is exactly\nwhere plpgsql.so is.\n\n\n\n\n----- Original Message -----\nFrom: Tom Lane <[email protected]>\nTo: Mitch Vincent <[email protected]>\nCc: <[email protected]>; <[email protected]>\nSent: Tuesday, February 01, 2000 11:08 AM\nSubject: Re: [HACKERS] plpgsql problem..\n\n\n> \"Mitch Vincent\" <[email protected]> writes:\n> > This might sound like an ignorant question but how does one dump the\n> > function catalog?\n>\n> Try\n> select * from pg_proc where proname = 'functionOfInterest';\n>\n> I think Sevo has identified the problem though: the CREATE FUNCTION\n> command for the plpgsql_call_handler function needs to give an exact\n> path name. What you are showing looks like the command tried to use an\n> environment variable and the substitution didn't happen. Better review\n> the procedure you used to install plpgsql. I'd recommend using the\n> createlang script, btw, not doing it by hand.\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Tue, 1 Feb 2000 11:29:54 -0500",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plpgsql problem.. "
},
{
"msg_contents": "\"Mitch Vincent\" <[email protected]> writes:\n> I used mklang.sql in the plpgsql directory to install the language.. It's\n> the same thing I used on the working devel server..\n\nOdd. So what is in pg_proc for plpgsql_call_handler?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Feb 2000 11:38:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plpgsql problem.. "
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Just an additional comment.\n>\n> I re-configured and re-installed Postgre and there is no change...\n>\n> I'm baffled....\n\n Which version of PostgreSQL and how do you install the\n PL/pgSQL language in the database?\n\n In either case, the support script you're using issues a\n damaged CREATE FUNCTION command for the PL handler. Somehow\n the build/install did not replace it with the actual\n installation path.\n\n After that, initdb again and anything should be fine.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 1 Feb 2000 17:51:01 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plpgsql problem.."
},
{
"msg_contents": "Ok guys, brand new problem!\n\nNow the backend segfaults.\n\n\n\n\nquery: begin transaction\nProcessUtility: begin transaction\nCommitTransactionCommand\nStartTransactionCommand\nquery: insert into\nagencys(agency_id,created,createdby,updated,updatedby,loginallow,memberdate,\nagencycode,agencyowner,agencyname,address1,address2,city,state,postal,countr\ny,fed_taxid,naps_member,email,url,fee_sched,refund_policy,membership_type,up\ns_type,invoice_hard_copy,zerodate,balance) values\n(487,'now',291,'now',291,0,'02-01-2000','TEST-123','Mitch Vincent','Test\nINC','knkln','lknlkn','lknlkn','ky','41101','US','','f','[email protected]','ww\nw.mitch.com','','jnblj','trac','f','f','12-01-1999',0)\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: update agencys set specialization='dfdf' where agency_id=487\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: update agencys set background='fdf' where agency_id=487\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: delete from agencys_phones where agency_id=487\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: insert into agencys_logins (agency_id,month,maxallowed) values\n(487,'02-01-2000',0)\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: insert into accounting(agency_id,month,totaljobs,totalapps) values\n(487,'02-01-2000',0,0)\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: insert into invoice\n(invoice_number,agencycode,invoice_date,fee_membership,fee_logins,fee_conven\ntion,fee_prints_jobs,fee_prints_apps,fee_hotlines,fee_postage,fee_ups,fee_la\nte,fee_other1,other_desc1,fee_other2,other_desc2,fee_other3,other_desc3,fee_\npastdue,amount_paid,paid,total)\nvalues(1,'TEST-123','02-01-2000',193.33333333333,0,0,0,0,0,0,0,0,0,'',0,'',0\n,'',0,0,'f',0)\nProcessQuery\nquery: SELECT $1 + $2 + $3 + $4 + $5 + $6 + $7 + $8 + $9 + $10 + $11 + $12 +\n$13 - $14\nCommitTransactionCommand\nStartTransactionCommand\nquery: select * from invoice where invoice_number=1\nProcessQuery\nCommitTransactionCommand\npq_recvbuf: unexpected EOF on client connection\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 75928 exited with status 0\n\n\nI get that in the log file.... Can anyone see anything that might point to\nthe reason behind the segfault?\n\n\nYou guys have no idea how much I appreciate your help.. Thanks one and all.\n\n-Mitch\n\n\n\n\n----- Original Message -----\nFrom: Tom Lane <[email protected]>\nTo: Mitch Vincent <[email protected]>\nSent: Tuesday, February 01, 2000 11:49 AM\nSubject: Re: [HACKERS] plpgsql problem..\n\n\n> \"Mitch Vincent\" <[email protected]> writes:\n> > plpgsql_call_handler| 1002| 13|f |t |f\n|\n> > 0|f | 0|0 0 0 0 0 0 0 0| 100| 0|\n> > 0| 100|- |/usr/local/pgsql/lib/plpgsql.so\n> > (1 row)\n>\n> That looks like it should work now...\n>\n> > Hmm, get this, now I have another error after droping and re-creating\nthe\n> > language and function.\n>\n> > ERROR: fmgr_info: Cache lookup for language failed 4427969\n>\n> It sounds like your language entry is still pointing at the old\n> function. (There's no interlock to keep you from dropping a function\n> that things still depend on ... probably there should be ...)\n>\n> Try recreating the language entry now that you've remade the function.\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Tue, 1 Feb 2000 12:49:45 -0500",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plpgsql problem.. "
},
{
"msg_contents": "\"Mitch Vincent\" <[email protected]> writes:\n> Ok guys, brand new problem!\n> Now the backend segfaults.\n\n> StartTransactionCommand\n> query: select * from invoice where invoice_number=1\n> ProcessQuery\n> CommitTransactionCommand\n> pq_recvbuf: unexpected EOF on client connection\n> proc_exit(0) [#0]\n> shmem_exit(0) [#0]\n> exit(0)\n> /usr/local/pgsql/bin/postmaster: reaping dead processes...\n> /usr/local/pgsql/bin/postmaster: CleanupProc: pid 75928 exited with status 0\n\n> I get that in the log file.... Can anyone see anything that might point to\n> the reason behind the segfault?\n\nUm, I see no backend segfault there --- I see a backend exiting in a\nperfectly orderly fashion after detecting that the client closed the\nchannel. The client didn't send the expected \"quit\" (X) message,\nwhich might or might not be normal behavior for that client. But if\nyou've got a backend segfault problem, this doesn't document it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Feb 2000 23:09:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plpgsql problem.. "
}
] |
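Mitch's `${exec_prefix}` error is the classic symptom of a build placeholder surviving into the installed SQL script: the CREATE FUNCTION path was never substituted. A sketch of the substitution the install step is supposed to perform (the template line here is a reconstruction for illustration, not the literal contents of mklang.sql.in):

```shell
# Sketch: substituting the exec_prefix placeholder into the CREATE
# FUNCTION path, as the build/install step should have done.
# The template line is a hypothetical reconstruction.
exec_prefix=/usr/local/pgsql
template="as '\${exec_prefix}/lib/plpgsql.so' language 'C';"

substituted=$(printf '%s\n' "$template" | sed "s|\${exec_prefix}|$exec_prefix|")
printf '%s\n' "$substituted"
# prints: as '/usr/local/pgsql/lib/plpgsql.so' language 'C';
```

If `probin` in pg_proc still shows the literal `${exec_prefix}` string, the function was created from an unsubstituted script and must be dropped and recreated with the real path, as the thread goes on to discuss.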
[
{
"msg_contents": "Folks,\n\nI'm trying to build an NT Binary, but the site\nwith Ludovic Lange's IPC package\n\n http://www.multione.capgemini.fr/tools/pack_ipc/ \n\nseems to be down. very crapulent. does anyone\nknow where else i could get the IPC package ?\n\n======================================================\nJeff MacDonald\n\[email protected]\tirc: bignose on EFnet\n======================================================\n\n",
"msg_date": "Tue, 1 Feb 2000 14:59:29 -0400 (AST)",
"msg_from": "\"Jeff MacDonald <[email protected]>\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Win NT Binary"
},
{
"msg_contents": "Hi,\n\nOn Tue, 1 Feb 2000 14:59:29 -0400 (AST)\n\"Jeff MacDonald <[email protected]>\" <[email protected]> wrote:\n\n> Folks,\n> \n> I'm trying to build an NT Binary, but the site\n> with Ludovic Lange's IPC package\n> \n> http://www.multione.capgemini.fr/tools/pack_ipc/ \n> \n> seems to be down. very crapulent. does anyone\n> know where else i could get the IPC package ?\n\nI have a mirror on following URL.\n\nhttp://www.s34.co.jp/~luster/pgsql/require/cygwin32_ipc-1.03.tgz\n\nPrecompiled binary with patch is ready on following URL.\n\nhttp://www.s34.co.jp/~luster/pgsql/cygwin32_ipc-1.03-bin-patched.tar.bz2\n\n-----\nYutaka Tanida<[email protected]>\n",
"msg_date": "Wed, 02 Feb 2000 09:12:20 +0900",
"msg_from": "Yutaka tanida <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Win NT Binary"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]\n> Sent: Sunday, January 30, 2000 1:01 PM\n> To: [email protected]\n> Subject: cvs-commit-digest V1 #856\n> \n> ------------------------------\n> \n> Date: Sat, 29 Jan 2000 11:58:52 -0500 (EST)\n> From: Peter Eisentraut - PostgreSQL <petere>\n> Subject: [COMMITTERS] pgsql/src/interfaces/libpq (fe-misc.c \n> fe-print.c libpq-fe.h)\n> \n> Date: Saturday, January 29, 2000 @ 11:58:51\n> Author: petere\n> \n> Update of /usr/local/cvsroot/pgsql/src/interfaces/libpq\n> from hub.org:/home/tmp/cvs-serv53967/src/interfaces/libpq\n> \n> Modified Files:\n> \tfe-misc.c fe-print.c libpq-fe.h \n> \n> - ----------------------------- Log Message \n> -----------------------------\n> \n> A few minor psql enhancements\n> Initdb help correction\n> Changed end/abort to commit/rollback and changed related notices\n> Commented out way old printing functions in libpq\n> Fixed a typo in alter table / alter column\n>\n\npqbool is removed from libpq-fe.h.\nCouldn't compile interfaces/perl5 now.\n\nIn addition,this seems to change external interface of PQprint().\nIs it OK ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n",
"msg_date": "Wed, 2 Feb 2000 11:31:19 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: cvs-commit-digest V1 #856"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> From: Peter Eisentraut - PostgreSQL <petere>\n>> Commented out way old printing functions in libpq\n\n> pqbool is removed from libpq-fe.h.\n> Couldn't compile interfaces/perl5 now.\n\n> In addition,this seems to change external interface of PQprint().\n> Is it OK ?\n\nNot IMHO. It looks like Peter has removed typedef pqbool (potentially\nbreaking application sources, not just perl5) and changed what were\npqbool == char fields into int fields (thereby breaking binaries that\ndepend on shared libraries of libpq). Not to mention the advertised\nchange of removing documented API entry points.\n\nPeter, you need to have a little more respect for stability of\nlibrary APIs. Gratuitous breaking of backwards compatibility\nis not the done thing around here. It's especially not done\nwithout any discussion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Feb 2000 22:52:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: cvs-commit-digest V1 #856 "
},
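Tom's objection about shared-library breakage is worth unpacking: widening a struct field from char to int shifts the offsets of every member after it, so a binary compiled against the old header reads the wrong bytes from the new library even if nothing else changed. A minimal sketch using Python's ctypes — the field names here are illustrative stand-ins, not libpq's actual PQprintOpt layout:

```python
import ctypes

# Old-style layout: boolean flags stored as single chars (like pqbool).
class OptOld(ctypes.Structure):
    _fields_ = [("header", ctypes.c_char),    # flag as a 1-byte char
                ("align", ctypes.c_char),
                ("fieldSep", ctypes.c_char_p)]

# New-style layout: the same flags silently widened to int.
class OptNew(ctypes.Structure):
    _fields_ = [("header", ctypes.c_int),     # flag now 4 bytes
                ("align", ctypes.c_int),
                ("fieldSep", ctypes.c_char_p)]

# The offset of every field after the first changes, so an old binary
# reading through the new struct definition sees garbage.
print("align offset:", OptOld.align.offset, "->", OptNew.align.offset)
```

Even when the overall struct happens to stay the same size, the shifted member offsets alone are enough to break binary compatibility — which is why such changes to an exported struct need an explicit soname bump or advance warning.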
{
"msg_contents": "At 10:52 PM 2/1/00 -0500, Tom Lane wrote:\n\n>Peter, you need to have a little more respect for stability of\n>library APIs. Gratuitous breaking of backwards compatibility\n>is not the done thing around here. It's especially not done\n>without any discussion.\n\nI thought we went over this a week ago...was I dreaming?\n\nPG is intended to be a PROFESSIONAL product. You don't arbitrarily\nbreak things for the hell of it. \n\nPG has CUSTOMERS. Not in the formal \"we bought it\" sense, but in\nthe moral and professional engineering sense.\n\nYou don't screw your customers without good reason, and when you\ndo you at least provide them cushions and soft mattresses and\nadvance notice. Especially advance notice. And if you do screw\nthem, you do so after you explore alternatives and come to realize\nthat there is no other course open to you. And you offer them\na condom (i.e. an upgrade path).\n\nBecause they depend on you.\n\nIs professionalism so hard to understand?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 01 Feb 2000 21:34:17 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: cvs-commit-digest V1 #856 "
},
{
"msg_contents": "On Tue, 1 Feb 2000, Don Baccus wrote:\n\n> At 10:52 PM 2/1/00 -0500, Tom Lane wrote:\n> \n> >Peter, you need to have a little more respect for stability of\n> >library APIs. Gratuitous breaking of backwards compatibility\n> >is not the done thing around here. It's especially not done\n> >without any discussion.\n> \n> I thought we went over this a week ago...was I dreaming?\n> \n> PG is intended to be a PROFESSIONAL product. You don't arbitrarily\n> break things for the hell of it. \n> \n> PG has CUSTOMERS. Not in the formal \"we bought it\" sense, but in\n> the moral and professional engineering sense.\n> \n> You don't screw your customers without good reason, and when you\n> do you at least provide them cushions and soft mattresses and\n> advance notice. Especially advance notice. And if you do screw\n> them, you do so after you explore alternatives and come to realize\n> that there is no other course open to you. And you offer them\n> a condom (i.e. an upgrade path).\n> \n> Because they depend on you.\n> \n> Is professionalism so hard to understand?\n\nDon ... I try to stay out of stuff like this but ... TONE IT DOWN!\n\nPeter is making mistakes, granted, but he is making them in a *NON-PRODUCTION\nRELEASE* code tree ... if he messes with a -STABLE release in\nthis way, fine, your responses are justified, but, right now, I don't\nthink they are ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 2 Feb 2000 02:07:50 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: cvs-commit-digest V1 #856 "
},
{
"msg_contents": "On Tue, Feb 01, 2000 at 09:34:17PM -0800, Don Baccus wrote:\n> At 10:52 PM 2/1/00 -0500, Tom Lane wrote:\n> \n<deleted to save having to read that again> \n\nAh Don, that's a little harsh, isn't it? Did Peter's actions in checking in\ncode lose you one minute's work, either time? I seem to recall that you're not\ntracking the CVS (which you shouldn't). So you're basically bitching about\n_theoretical_ problems? Tom, as a core developer, is directly affected,\nand has earned the right to chew out Peter. You, on the other hand, are\na kibitzer here, as am I. Your comments in technical discussions lead\nme to believe that you are a professional developer, but you haven't\nstepped up to the plate for postgresql, yet, and submitted code. Let the\ncurrent core developers deal with this: we all know your position!\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Wed, 2 Feb 2000 00:21:21 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: cvs-commit-digest V1 #856"
},
{
"msg_contents": "> > Is professionalism so hard to understand?\n> \n> Don ... I try to stay out of stuff like this but ... TONE IT DOWN!\n> \n> Peter is making mistakes, granted, but he is making them in a *NONE\n> PRODUCTION RELEASE* code tree ... if he messes with a -STABLE release in\n> this way, fine, your responses are justified, but, right now, I don't\n> think they are ...\n\nI also told Peter that 7.0 was a good time to remove routines that were\nno longer needed. Yes, it is a migration problem, but why drag around\nfunctions forever that are useless. Maybe he pulled one too many?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Feb 2000 01:23:32 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: cvs-commit-digest V1 #856"
},
{
"msg_contents": "> On Tue, Feb 01, 2000 at 09:34:17PM -0800, Don Baccus wrote:\n> > At 10:52 PM 2/1/00 -0500, Tom Lane wrote:\n> > \n> <deleted to save having to read that again> \n> \n> Ah Don, that's a little harsh, isn't it? Did Peter's actions in checking in\n> code lose you one minute's work, either time? I seem to recall that you're not\n> tracking the CVS (which you shouldn't). So you're basically bitching about\n> _theoretical_ problems? Tom, as a core developer, is directly affected,\n> and has earned the right to chew out Peter. You, on the other hand, are\n> a kibitzer here, as am I. Your comments in technical discussions lead\n> me to believe that you are a professional developer, but you haven't\n> stepped up to the plate for postgresql, yet, and submitted code. Let the\n> current core developers deal with this: we all know your position!\n\nThis is excellent advice.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Feb 2000 01:31:10 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: cvs-commit-digest V1 #856"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I also told Peter that 7.0 was a good time to remove routines that were\n> no longer needed. Yes, it is a migration problem, but why drag around\n> functions forever that are useless. Maybe he pulled one too many?\n\nWe have in fact talked about removing some of the older-generation\nprint functions (though I was envisioning a slow process of labeling\nthem deprecated for a few releases...). It was the quite unnecessary\nmodification of the exported PQprintOpt struct that got my Irish up.\nI've fought way too many hard-to-debug crashes caused by that sort\nof change.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Feb 2000 01:57:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: cvs-commit-digest V1 #856 "
},
{
"msg_contents": "On Wed, 2 Feb 2000, Hiroshi Inoue wrote:\n\n\n> pqbool is removed from libpq-fe.h.\n> Couldn't compile interfaces/perl5 now.\n> \n> In addition,this seems to change external interface of PQprint().\n> Is it OK ?\n\nDarn, seems like I'm doing everything wrong these days. I gotta take some\ntime off to get my wits together. This is not anyone's fault out there,\nmaybe I just wasn't ready quite yet. I don't want to be the problem\nperson. I'll be back. (psql quoting bug will be fixed.)\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 2 Feb 2000 12:37:48 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: cvs-commit-digest V1 #856"
},
{
"msg_contents": "At 12:21 AM 2/2/00 -0600, Ross J. Reedstrom wrote:\n>On Tue, Feb 01, 2000 at 09:34:17PM -0800, Don Baccus wrote:\n>> At 10:52 PM 2/1/00 -0500, Tom Lane wrote:\n>> \n><deleted to save having to read that again> \n>\n>Ah Don, that's a little harsh, isn't it?\n\nYes, it is, and I apologize. \n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 02 Feb 2000 06:33:18 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: cvs-commit-digest V1 #856"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n> Ah Don, that's a little harsh, isn't it?\n\nYes, it was a little harsh, and Don apologized.\n\n> Did Peter's actions in checking in\n> code lose you one minutes work, either time? I seem to recall that your not\n> tracking the CVS (which you shouldn't).\n\nWrong -- Don is one of the core (or lead) developers porting the\nArsDigita Community System from Oracle to PostgreSQL -- and in order to\ndo this he has indeed been tracking the CVS -- and in fact he is running\nthe pre-beta PostgreSQL 7 right now on a site with the pre-pre-beta ACS\nport to PostgreSQL running on the beta AOLserver 3.0. He also is a\nmajor maintainer of the AOLserver driver for postgresql -- which could\nbe directly impacted by these changes. So, even though Don hasn't been\na heavy contributor here as yet, I believe that he has a right to let\nhis position be known -- although a little more gently, perhaps.\n\nWhy does he need to do this? Two words: Referential Integrity, which is\nheavily used by the ACS.\n\nI also track the current CVS -- but for a totally different reason, as I\nwant to be able to release RPMs of the beta release the same day as the\nbeta release -- thus, I am doing trial builds of RPM's against the CVS. \nHowever, this current issue doesn't impact me in the slightest -- which\nis why I have not and will not say anything about it.\n\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 02 Feb 2000 13:24:07 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: cvs-commit-digest V1 #856"
},
{
"msg_contents": "On Wed, Feb 02, 2000 at 01:24:07PM -0500, Lamar Owen wrote:\n> \"Ross J. Reedstrom\" wrote:\n> > Ah Don, that's a little harsh, isn't it?\n> \n> Yes, it was a little harsh, and Don apologized.\n> \n> > Did Peter's actions in checking in\n> > code lose you one minutes work, either time? I seem to recall that your not\n> > tracking the CVS (which you shouldn't).\n> \n> Wrong -- Don is one of the core (or lead) developers porting the\n\n(details of what Don's up to)\n\nAh, now it's time for me to apologize. As I said, *I'm* the kibitzer\nhere, so I'll shut up now.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Wed, 2 Feb 2000 14:06:19 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: cvs-commit-digest V1 #856"
}
] |
[
{
"msg_contents": "Poking into Oliver's report of \"RelationClearRelation: relation 21645\nmodified while in use\", I find that the culprit is the following\ncode in execMain.c's InitPlan():\n\n foreach(l, parseTree->rowMark)\n {\n rm = lfirst(l);\n relid = rt_fetch(rm->rti, rangeTable)->relid;\n relation = heap_open(relid, RowShareLock);\n if (!(rm->info & ROW_MARK_FOR_UPDATE))\n continue;\n erm = (execRowMark *) palloc(sizeof(execRowMark));\n erm->relation = relation;\n erm->rti = rm->rti;\n sprintf(erm->resname, \"ctid%u\", rm->rti);\n estate->es_rowMark = lappend(estate->es_rowMark, erm);\n }\n\nThat heap_open() call has no corresponding heap_close() anywhere,\nso every SELECT FOR UPDATE leaves the relation's refcount one higher\nthan it was. This didn't use to be a huge problem, other than that the\nrel would be permanently locked into the backend's relcache. (I think\nan attempt to DROP the table later in the session would have caused\ntrouble, though.) However, I just committed changes in the relcache\nthat assume that zero refcount is trustworthy, and it's those changes\nthat are spitting up.\n\nIt's easy enough to add code to EndPlan that goes through the\nestate->es_rowMark list to close the rels that had ROW_MARK_FOR_UPDATE\nset. But if that bit wasn't set, the above code opens the rel and then\nforgets about it completely. Is that a bug? If not, I guess we need\nanother data structure to keep track of the non-ROW_MARK_FOR_UPDATE\nrels through execution. (EndPlan doesn't currently get the parsetree\nas a parameter, so it can't just duplicate the above loop --- though\npassing it the parsetree might be one possible solution.)\n\nI don't understand SELECT FOR UPDATE enough to know what is going on\nhere. But it seems darn peculiar to open a rel and then not keep any\nreference to the rel for later use. Anybody know how this works?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Feb 2000 22:03:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT FOR UPDATE leaks relation refcounts"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> Poking into Oliver's report of \"RelationClearRelation: relation 21645\n> modified while in use\", I find that the culprit is the following\n> code in execMain.c's InitPlan():\n> \n> foreach(l, parseTree->rowMark)\n> {\n> rm = lfirst(l);\n> relid = rt_fetch(rm->rti, rangeTable)->relid;\n> relation = heap_open(relid, RowShareLock);\n> if (!(rm->info & ROW_MARK_FOR_UPDATE))\n> continue;\n> erm = (execRowMark *) palloc(sizeof(execRowMark));\n> erm->relation = relation;\n> erm->rti = rm->rti;\n> sprintf(erm->resname, \"ctid%u\", rm->rti);\n> estate->es_rowMark = lappend(estate->es_rowMark, erm);\n> }\n> \n> That heap_open() call has no corresponding heap_close() anywhere,\n> so every SELECT FOR UPDATE leaves the relation's refcount one higher\n> than it was. This didn't use to be a huge problem, other than that the\n> rel would be permanently locked into the backend's relcache. (I think\n> an attempt to DROP the table later in the session would have caused\n> trouble, though.) However, I just committed changes in the relcache\n> that assume that zero refcount is trustworthy, and it's those changes\n> that are spitting up.\n> \n> It's easy enough to add code to EndPlan that goes through the\n> estate->es_rowMark list to close the rels that had ROW_MARK_FOR_UPDATE\n> set. But if that bit wasn't set, the above code opens the rel and then\n> forgets about it completely. Is that a bug? If not, I guess we need\n\nSeems its a bug though I'm not sure.\nIs there anything wrong with inserting heap_close(relation, NoLock)\nimmediately before 'continue;' ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 3 Feb 2000 09:13:58 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] SELECT FOR UPDATE leaks relation refcounts"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> It's easy enough to add code to EndPlan that goes through the\n>> estate->es_rowMark list to close the rels that had ROW_MARK_FOR_UPDATE\n>> set. But if that bit wasn't set, the above code opens the rel and then\n>> forgets about it completely. Is that a bug? If not, I guess we need\n\n> Seems its a bug though I'm not sure.\n\nI looked over the code that works with rowmarks and decided it is a\nbug. There are just two action flag bits for the executor to worry\nabout, ROW_MARK_FOR_UPDATE and ROW_ACL_FOR_UPDATE. The first makes\nthe execution-time stuff actually happen, while the second causes\na suitable permissions check to be applied before execution. In a\nsimple SELECT FOR UPDATE situation, both bits will be set. The only\nway that the ROW_MARK_FOR_UPDATE bit can get unset is if the SELECT\nFOR UPDATE command references a view --- in that case, the rewriter\nclears the ROW_MARK_FOR_UPDATE bit on the view's rowmark entry,\nand adds rowmark entries with only ROW_MARK_FOR_UPDATE set for the\ntables referenced by the view. As far as I can see, this is correct\nbehavior: the permissions check should be applied to the view, not\nthe referenced tables, but actual execution happens in the referenced\ntables and doesn't touch the view at all. 
Therefore, it's unnecessary\n--- and perhaps actually wrong --- for InitPlan to be grabbing a\nRowShareLock on the view.\n\nSo, I've rearranged the InitPlan code to not open the rel at all when\nROW_MARK_FOR_UPDATE is clear, and I've added code in EndPlan to\ntraverse the estate->es_rowMark list and heap_close the opened rels\n(specifying NoLock, so that the RowShareLock is held till commit).\n\nThis seems to solve Oliver's problem, and the regress tests still pass,\nso I committed it a little while ago.\n\n> Is there anything wrong with inserting heap_close(relation, NoLock)\n> immediately before 'continue;' ?\n\nWe can do that if it turns out my analysis is wrong and RowShareLock\nshould indeed be grabbed on views as well as their underlying tables.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Feb 2000 20:00:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT FOR UPDATE leaks relation refcounts "
},
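The open/close pairing in the committed fix can be sketched abstractly: InitPlan opens only the relations it records in es_rowMark, and EndPlan walks that same list to close them, so refcounts return to zero. This is a toy refcount model in Python, not PostgreSQL's C code; the names (heap_open, es_rowMark, EndPlan) just mirror the discussion:

```python
# Toy model of the relcache refcount bookkeeping discussed above.
# Names mirror the backend discussion, but this is not PostgreSQL code.

class RelCache:
    def __init__(self):
        self.refcounts = {}

    def heap_open(self, relid):
        """Opening a relation bumps its relcache reference count."""
        self.refcounts[relid] = self.refcounts.get(relid, 0) + 1
        return relid

    def heap_close(self, relid):
        self.refcounts[relid] -= 1

def init_plan(cache, row_marks):
    """Open each FOR UPDATE relation and remember it for EndPlan."""
    es_row_mark = []
    for relid, for_update in row_marks:
        if not for_update:
            continue            # per the fix: don't open rels we won't track
        es_row_mark.append(cache.heap_open(relid))
    return es_row_mark

def end_plan(cache, es_row_mark):
    """The fix: walk es_rowMark and close every rel InitPlan opened."""
    for relid in es_row_mark:
        cache.heap_close(relid)

cache = RelCache()
marks = init_plan(cache, [(21645, True), (21646, False)])
end_plan(cache, marks)
print(cache.refcounts)   # every opened rel is back to refcount zero
```

The original bug corresponds to calling init_plan without a matching end_plan (or opening the non-FOR-UPDATE rel and forgetting it): the refcount stays above zero forever, which is exactly what the new relcache code trips over.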
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Thursday, February 03, 2000 10:00 AM\n> \n> This seems to solve Oliver's problem, and the regress tests still pass,\n> so I committed it a little while ago.\n> \n> > Is there anything wrong with inserting heap_close(relation, NoLock)\n> > immediately before 'continue;' ?\n> \n> We can do that if it turns out my analysis is wrong and RowShareLock\n> should indeed be grabbed on views as well as their underlying tables.\n>\n\nI couldn't judge whether the following current behavior has some meaning\nor not.\n\nLet v be a view;\n\nSession-1\nbegin;\nlock table v in exclusive mode; (I don't know what this means)\n\nSession-2\nbegin;\nselect * from v for update;\n(blocked by Session-1)\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 3 Feb 2000 11:19:52 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] SELECT FOR UPDATE leaks relation refcounts "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I couldn't judge whether the following current behavior has some meaning\n> or not.\n\n> Let v be a view;\n\n> lock table v in exclusive mode; (I don't know what this means)\n\nGood question ... but it seems to me that it has to mean grabbing\nexclusive lock on the table(s) referred to by v. Otherwise, if\nclient A locks the view and client B locks the underlying table\ndirectly, they'll both pass the lock and be able to access/modify\nthe underlying table at the same time. That can't be right.\n\nThe rewriter correctly passes SELECT FOR UPDATE locking from the\nview to the referenced tables, but I'm not sure whether it is\nbright enough to do the same for LOCK statements. (Jan?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Feb 2000 22:10:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT FOR UPDATE leaks relation refcounts "
},
{
"msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I couldn't judge whether the following current behavior has some meaning\n> > or not.\n>\n> > Let v be a view;\n>\n> > lock table v in exclusive mode; (I don't know what this means)\n>\n> Good question ... but it seems to me that it has to mean grabbing\n> exclusive lock on the table(s) referred to by v. Otherwise, if\n> client A locks the view and client B locks the underlying table\n> directly, they'll both pass the lock and be able to access/modify\n> the underlying table at the same time. That can't be right.\n>\n> The rewriter correctly passes SELECT FOR UPDATE locking from the\n> view to the referenced tables, but I'm not sure whether it is\n> bright enough to do the same for LOCK statements. (Jan?)\n\n Isn't LOCK TABLE a utility statement? So it doesn't go\n through the rewriter.\n\n The LOCK code would have to do the correct locking of the\n underlying tables. And not to forget cascaded views or\n possible subselects.\n\n Actually LockTableCommand() in command.c doesn't do it. It\n simply locks the view relation, what's definitely wrong.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 3 Feb 2000 11:53:18 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT FOR UPDATE leaks relation refcounts"
},
{
"msg_contents": "> > The rewriter correctly passes SELECT FOR UPDATE locking from the\n> > view to the referenced tables, but I'm not sure whether it is\n> > bright enough to do the same for LOCK statements. (Jan?)\n> \n> Isn't LOCK TABLE a utility statement? So it doesn't go\n> through the rewriter.\n> \n> The LOCK code would have to do the correct locking of the\n> underlying tables. And not to forget cascaded views or\n> possible subselects.\n> \n> Actually LockTableCommand() in command.c doesn't do it. It\n> simply locks the view relation, what's definitely wrong.\n> \nAdded to TODO:\n\n\t* Disallow LOCK on view \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Feb 2000 07:22:11 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT FOR UPDATE leaks relation refcounts"
}
] |
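Jan's point — that locking a view only makes sense if the lock reaches the underlying base tables, including cascaded views — amounts to a recursive expansion of the view's dependencies. A hypothetical sketch, not the backend's implementation:

```python
# Hypothetical sketch: LOCK on a view should lock the base tables it
# (transitively) references, not the view relation itself.

views = {
    "v": ["t1", "v2"],    # v is defined over table t1 and view v2
    "v2": ["t2"],         # cascaded view over table t2
}

def tables_to_lock(rel, views):
    """Expand a relation into the set of base tables a LOCK must cover."""
    if rel not in views:
        return {rel}                      # plain table: lock it directly
    locked = set()
    for dep in views[rel]:
        locked |= tables_to_lock(dep, views)   # recurse through cascades
    return locked

print(sorted(tables_to_lock("v", views)))   # -> ['t1', 't2']
```

Simply locking the view relation (what LockTableCommand did) corresponds to returning {"v"} here — which is why client A locking the view and client B locking t1 directly would never conflict.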
[
{
"msg_contents": "\n> > (In the same spirit it would also be nice to tag NOT_USED \n> sections with a\n> > version number, so it could be yanked two or three releases past.)\n> \n> Why not just yank it period? 'cvs diff' will show what was \n> yanked, and\n> the log message could say just 'yanked NOT_USED code from \n> source tree'...\n\nOnce GetAttributeBy[Num|Name] was yanked, because it was not referenced \ninside the code. It is heavily used in extensions though.\n\nFor me it was relatively easy to find the problem, because it was ifdef'd\nNOT_USED.\nI am not sure if I had found it that easily, if the \"old\" code would have\nonly been in cvs.\n\nAndreas\n",
"msg_date": "Wed, 2 Feb 2000 09:49:25 +0100 ",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] freefuncs.c is never called from anywhere!?"
},
{
"msg_contents": "On Wed, 2 Feb 2000, Zeugswetter Andreas SB wrote:\n\n> \n> > > (In the same spirit it would also be nice to tag NOT_USED \n> > sections with a\n> > > version number, so it could be yanked two or three releases past.)\n> > \n> > Why not just yank it period? 'cvs diff' will show what was \n> > yanked, and\n> > the log message could say just 'yanked NOT_USED code from \n> > source tree'...\n> \n> Once GetAttributeBy[Num|Name] was yanked, because it was not referenced \n> inside the code. It is heavily used in extensions though.\n> \n> For me it was relatively easy to find the problem, because it was ifdef'd\n> NOT_USED.\n> I am not sure if I had found it that easily, if the \"old\" code would have\n> only been in cvs.\n\nMaybe date/release stamp a NOT_USED and if after X releases, yank it as\nnot being relevant ... at least in cases like this ...\n\n\n",
"msg_date": "Wed, 2 Feb 2000 09:11:02 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] freefuncs.c is never called from anywhere!?"
}
] |
[
{
"msg_contents": "> > - can run on a laptop running Windows 95 with 32MB of RAM\n> \n> Why just Win95? How about a real operating system. :-)\n\nWe don't support Win95, only WinNT\n\n\t\tDan\n",
"msg_date": "Wed, 2 Feb 2000 10:43:55 +0100 ",
"msg_from": "=?iso-8859-1?Q?Hor=E1k_Daniel?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] PCLabs Survey, Part VII: Embedded Database / Branch\n\tOffice Support"
}
] |
[
{
"msg_contents": "Ok, i've managed to get all the files i need, dandy.\n\n* side note, i am aware that there is a pre-compiled\nbinary on hub. i'm doing this to see how well Kevin's\ninstructions work for everyone. (from scratch)\n\nlet's start here, first 3 steps\n1. Download ftp://go.cygnus.com/pub/sourceware.cygnus.com/cygwin/latest/full.exe\n\ndone.\n\n2. Run full.exe and install in c:\\Unix\\Root directory. \n\nafaik this means i should have a c:\\Unix\\Root\\Cygwin \ndir ? \n\n3. Run Cygwin, and then run \"mount c:/Unix/Root /\" \nthis command will not work. it gives the error\n\"Device Busy\" , which makes perfect sense, since cygwin\nitself is running out of a sub-dir of this dir.\n\nany thoughts as to what Kevin might have meant ?\n\n\n======================================================\nJeff MacDonald\n\[email protected]\tirc: bignose on EFnet\n======================================================\n\n",
"msg_date": "Wed, 2 Feb 2000 10:02:15 -0400 (AST)",
"msg_from": "\"Jeff MacDonald <[email protected]>\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "WinNT compiling: ongoing"
},
{
"msg_contents": "\"Jeff MacDonald \" wrote:\n\n> Ok, i've managed to get all the files i need, dandy.\n>\n> * side note, i am aware that there is a pre-compiled\n> binary on hub. i'm doing this to see how well kevins\n> instructions work for every one. (from scratch)\n>\n> lets start here , first 3 steps\n> 1.Download ftp://go.cygnus.com/pub/sourceware.cygnus.com/cygwin/latest/full.exe\n>\n> done.\n>\n> 2. Run full.exe and install in c:\\Unix\\Root directory.\n>\n> afaik this means i should have a c:\\Unix\\Root\\Cygwin\n> dir ?\n\nNope. You can install it in any directory :)\n\n> 3.Run Cygwin, and then run \"mount c:/Unix/Root /\"\n> this command will not work. it gives the error\n> \"Device Busy\" , which makes perfect sense, since cygwin\n> is self is running out of a sub-dir of this dir.\n>\n> any thoughts as to what kevin might have meant ?\n\nTry umount /.\n\n> ======================================================\n> Jeff MacDonald\n> [email protected] irc: bignose on EFnet\n> ======================================================\n\n- Kevin\n\n\n",
"msg_date": "Wed, 09 Feb 2000 18:01:55 +0800",
"msg_from": "Kevin Lo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] WinNT compiling: ongoing"
}
] |
[
{
"msg_contents": "> 3.Run Cygwin, and then run \"mount c:/Unix/Root /\" \n> this command will not work. it gives the error\n> \"Device Busy\" , which makes perfect sense, since cygwin\n> is self is running out of a sub-dir of this dir.\n\ntry to do \"umount /\" before doing mount\n\n\t\t\tDan\n",
"msg_date": "Wed, 2 Feb 2000 15:07:38 +0100 ",
"msg_from": "=?iso-8859-1?Q?Hor=E1k_Daniel?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] WinNT compiling: ongoing"
}
] |
[
{
"msg_contents": "Hi,\n\nI've been spending a lot of time lately with gdb and tracing the\nback-end seeing if I can understand it enough to make some changes.\nI'm starting to actually understand a lot of stuff, so in order\nto have some possibility of having my changes accepted, I want to\ndiscuss \nthem here first. Based on that, I'm going to hopefully make an attempt\nat implementation. I have a patch for one of these changes already \nif I get the go ahead.\n\nTHESE CHANGES DON'T AFFECT YOU IF YOU DON'T USE INHERITANCE.\n\nSpeak now about these changes or please, forever hold your peace. Of\ncourse you can comment later if I screw up implementation.\n\nThe proposed changes are....\n\n1) An imaginary field in every tuple that tells you the class it came\nfrom.\nThis is useful when you select from table* and want to know which\nrelation the object actually came from. It wouldn't be stored on disk,\nand like oid it wouldn't be displayed when you do SELECT *. The field\nwould be called classname. So you could have...\nSELECT p.classname, p.name FROM person p;\nperson | Fred\nstudent | Bill\nemployee | Jim\nperson | Chris\n\nIf you want to know the exact behaviour it is as if every table in the\ndatabase had done to it...\nALTER TABLE foo ADD COLUMN classname TEXT;\nUPDATE foo SET classname='foo';\n\nOf course this is not how it would be implemented. It is just\nreference for how it will appear to work. BTW, this idea was also\nin the original berkeley design notes.\n\n2) Changing the sense of the default for getting inherited tuples.\nCurrently you only get inherited tuples if you specify \"tablename*\".\nThis would be changed so that you get all sub-class tuples too by\ndefault unless you specify \"ONLY tablename\". There are several\nrationale for this. Firstly this is what Illustra/Informix have\nimplemented. Secondly, I believe it is more logical from an OO\nperspective as well as giving a more useful default. 
If a politician\nIS a person and I say SELECT * from person, then logically I should\nsee all the politicians because they are people too (so they claim\n:). Thirdly, there are a whole range of SQL statements that should\nprobably be disallowed without including sub-classes. e.g. an ALTER\nTABLE ADD COLUMN that does not include sub-classes is almost certainly\nundesirable. It seems a shame to have to resort to non-standard SQL\nwith the \"*\" syntax in this case when it is really your only\nchoice. Basically, wanting ONLY a classname is a far more unusual\nchoice, and leaving off the \"*\" is a common error. Fourthly, it seems\nout of character for the SQL language to have this single character\noperator. The SQL style is to use wordy descriptions of the operator's\nmeaning. \"ONLY\" fits well here because it describes its own meaning\nperfectly whereas to the uninitiated, \"*\" is harder to guess at. While\nthis change is an incompatibility I hope for those few people using\ninheritance they can accept the need to move forward without the\nburden of backwards compatibility.\n\n3) The ability to return different types of rows from a SELECT. This\nis to allow implementation of ODBMS functionality where a query could\nbe required to instantiate objects of differing types with differing\nattributes.\n\nI would propose that anytime you do a SELECT * from a base table\nthat you would get back the full rows from those sub tables. Since the\ncurrent PQ interface, which doesn't support this notion, would remain\nunchanged, this wouldn't affect current users.\n\nIt's probably also desirable to have a syntax for getting just the\ncolumns of the base table when this is desired. Say perhaps SELECT %\nfrom table. 
SELECT % would be a performance hack for users of libpq and a\nfunctionality difference for users of psql.\n\nThe reason I think the \"*\" syntax should take on the new functionality\nis that it would be more consistent with what the OQL (object query\nlanguage) standard specifies, and also that it seems the more\nuseful default. Also there is no compatibility reason not to do it.\n\nIn addition it would be legal to specify columns that only exist in\nsub-classes. For example, if we had\n\nCREATE TABLE person (name TEXT);\nCREATE TABLE student (studentid TEXT, faculty TEXT) INHERITS (person);\n\nit would be legal to say...\n> SELECT * FROM person;\nNAME\n----\nFred\nBill\n\nNAME | STUDENTID | FACULTY\n--------------------------\nJim | 23455 | Science\nChris| 45666 | Arts\n\n> SELECT *, studentid FROM person;\nNAME\n----\nFred\nBill\n\nNAME | STUDENTID\n----------------\nJim | 23455 \nChris| 45666 \n\n> SELECT *, studentid FROM ONLY person;\nERROR: person does not contain studentid.\n\n> SELECT % FROM person;\nNAME\n----\nFred\nBill\nJim\nChris\n\nAs you can see, it is desirable that psql be modified to be able to\nprint these differing tuple types. Presumably new column headings will\nbe printed when a tuple's type differs from the previous one's. It\nwill often be desirable to do a\nSELECT * FROM person p ORDER BY p.classname;\nin order to have all the tuples of a particular type grouped together.\n\nIn addition some extensions will be done to the PQ interface to support\nthese differing return types. The current PQ interface will be left\nunchanged and backwards compatible for retrieving rows of a single\ntype.\n\nAlso there should be a settable option that specifies that \"*\" should\nalso return the normally ignored columns of oid and classname. This is\nso that OO programs that embed SQL into them also get back the oid and\nclassname, which are required for the behind-the-scenes implementation\nof an ODMG client. 
Something like...\n\nSET SHOW_OID TRUE;\nSET SHOW_CLASSNAME TRUE;\n\nSELECT * FROM person;\n\nOID CLASSNAME NAME\n-------------------\n2344 person Fred\n3445 person Bill\n\nOID CLASSNAME NAME | STUDENTID | FACULTY\n-----------------------------------------\n2355 student Jim | 23455 | Science\n5655 student Chris| 45666 | Arts\n",
"msg_date": "Thu, 03 Feb 2000 12:30:26 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proposed Changes to PostgreSQL"
},
{
"msg_contents": "> THESE CHANGES DON'T AFFECT YOU IF YOU DON'T USE INHERITANCE.\n> \n> Speak now about these changes or please, forever hold your peace. Of\n> course you can comment later if I screw up implementation.\n> \n> The proposed changes are....\n> \n> 1) An imaginary field in every tuple that tells you the class it came\n> from.\n> This is useful when you select from table* and want to know which\n> relation the object actually came from. It wouldn't be stored on disk,\n> and like oid it wouldn't be displayed when you do SELECT *. The field\n> would be called classname. So you could have...\n> SELECT p.classname, p.name FROM person p;\n> person | Fred\n> student | Bill\n> employee | Jim\n> person | Chris\n\nSo the field is created on the fly to show what table it came from.\nSeems like a good idea, though implementing another usually-invisible\ncolumn will be tough. However, because it is not really a column like\nthe oid is a column, it should be ok. Of course, internally it is\nrelid.\n\n\n> 2) Changing the sense of the default for getting inherited tuples.\n> Currently you only get inherited tuples if you specify \"tablename*\".\n> This would be changed so that you get all sub-class tuples too by\n> default unless you specify \"ONLY tablename\". There are several\n> rationale for this. Firstly this is what Illustra/Informix have\n> implemented. Secondly, I believe it is more logical from an OO\n> perspective as well as giving a more useful default. If a politician\n> IS a person and I say SELECT * from person, then logically I should\n> see all the politicians because they are people too (so they claim\n> :). Thirdly, there are a whole range of SQL statements that should\n> probably be disallowed without including sub-classes. e.g. an ALTER\n> TABLE ADD COLUMN that does not include sub-classes is almost certainly\n> undesirable. 
It seems a shame to have to resort to non-standard SQL\n> with the \"*\" syntax in this case when it is really your only\n> choice. Basically, wanting ONLY a classname is a far more unusual\n> choice, and leaving off the \"*\" is a common error. Fourthly, it seems\n> out of character for the SQL language to have this single-character\n> operator. The SQL style is to use wordy descriptions of the operator's\n> meaning. \"ONLY\" fits well here because it describes its own meaning\n> perfectly, whereas to the uninitiated, \"*\" is harder to guess at. While\n> this change is an incompatibility, I hope those few people using\n> inheritance can accept the need to move forward without\n> the burden of backwards compatibility.\n\nSounds fine to me. Just realize you are taking on a long-overdue but\nbig job here.\n\n> \n> 3) The ability to return different types of rows from a SELECT. This\n> is to allow implementation of ODBMS functionality where a query could\n> be required to instantiate objects of differing types with differing\n> attributes.\n\nThis bothers me. We return relational data, showing the same number of\ncolumns and types for every query. I don't think we want to change\nthat, even for OO. How are you going to return that info to the client\nside?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Feb 2000 21:08:46 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> So the field is created on the fly to show what table it came from.\n> Seems like a good idea, though implementing another usually-invisible\n> column will be tough.\n\nWhat problems do you foresee?\n\n> However, because it is not really a column like\n> the oid is a column, it should be ok. Of course, internally it is\n> relid.\n> \n> > 2) Changing the sense of the default for getting inherited tuples.\n> > Currently you only get inherited tuples if you specify \"tablename*\".\n>\n> Sounds fine to me. Just realize you are taking on a long-overdue but\n> big job here.\n\nI already have a patch for this one. The change is a few pretty simple\nchanges to gram.y.\n\n> > 3) The ability to return different types of rows from a SELECT. This\n> > is to allow implementation of ODBMS functionality where a query could\n> > be required to instantiate objects of differing types with differing\n> > attributes.\n> \n> This bothers me. We return relational data, showing the same number of\n> columns and types for every query. I don't think we want to change\n> that, even for OO. \n\nWhat aspects bother you? This is the fundamentally important thing about\nobject databases.\n\nIt's also something that I'm always wanting to do when generating web\npages.\nI have web links like http://foo.com/page?id=123. I want to retrieve\nthe webpage object (which is an inheritance hierarchy) of id=123, which\nmay represent a web page of different types. Then process appropriately\nfor different objects, i.e. typical OO polymorphism.\n\n> How are you going to return that info to the client side?\n\nWell the backend <-> frontend protocol that used to be able to return\ntuples of different types would be put back in.\n\nAlso the Berkeley Postgres docs had other scenarios where different\ntuples could be returned. 
One is that you could have a field of type postquel called,\nsay, EMP.hobbies which had a value of \"retrieve HOBBIES.all where...\", and\nthen \"retrieve EMP.hobbies\" would return tuples of different types of hobbies.\n",
"msg_date": "Thu, 03 Feb 2000 13:45:31 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "[ I trimmed the cc list a bit ]\n\nChris Bitmead <[email protected]> writes:\n> The proposed changes are....\n\n> 1) An imaginary field in every tuple that tells you the class it came\n> from.\n> This is useful when you select from table* and want to know which\n> relation the object actually came from. It wouldn't be stored on disk,\n> and like oid it wouldn't be displayed when you do SELECT *. The field\n> would be called classname. So you could have...\n> SELECT p.classname, p.name FROM person p;\n\nThis is a good idea, but it seems to me that it'd fit into the system\ntraditions better if the pseudo-field gave the OID of the source\nrelation. If you wanted the actual name of the relation, you'd need\nto join against pg_class. You could argue it either way I suppose;\na name would be more convenient for simple interactive uses, but an\nOID would probably be more convenient and efficient for applications\nusing this feature. I tend to lean towards the programmatic convenience\nside --- far more SQL queries are issued by programs than humans.\n\n> 2) Changing the sense of the default for getting inherited tuples.\n> Currently you only get inherited tuples if you specify \"tablename*\".\n> This would be changed so that you get all sub-class tuples too by\n> default unless you specify \"ONLY tablename\". There are several\n> rationale for this. Firstly this is what Illustra/Informix have\n> implemented. Secondly, I believe it is more logical from an OO\n> perspective as well as giving a more useful default.\n\nWell, mumble. That would be the cleanest choice if we were designing\nin a green field, but we aren't. You're talking about breaking every\nsingle extant Postgres application that uses inheritance, and possibly\nsome that don't use it except as a shorthand for making their schemas\nmore compact. (That's not a hypothetical case; I have DBs that use\nschema inheritance but never do SELECT FROM table*.) 
I think that's\na mighty high price to pay for achieving a little more logical\ncleanliness.\n\nThere is also a nontrivial performance penalty that would be paid\nfor reversing this default, because then every ordinary SQL query\nwould suffer the overhead of looking to see whether there are\nchild tables for each table named in the query. That *really*\ndoesn't strike me as a good idea.\n\nIf Illustra were popular enough to have defined an industry standard\nabout inheritance, I might think we should follow their lead --- but\nwho else has followed their lead?\n\nIn short, I vote for leaving well enough alone. It's not so badly\nwrong as to be intolerable, and the pain of changing looks high.\n\n> Thirdly, there are a whole range of SQL statements that should\n> probably be disallowed without including sub-classes. e.g. an ALTER\n> TABLE ADD COLUMN that does not include sub-classes is almost certainly\n> undesirable.\n\nThis is true. We could either silently add *, or reject it (\"hey bozo,\nhave you forgotten that this table has subclasses?\"). The reject\noption would be more conservative, just in case the admin *has*\nforgotten that the table has subclasses --- as a crude analogy,\nUnix \"rm\" doesn't assume \"-r\" by default ;-). I agree that allowing\nan ALTER to make a parent table inconsistent with its children is\nvery bad news and should be prevented. (Dropping an inherited column\nis another example of something we shouldn't allow.)\n\n> I would propose that that anytime you do a SELECT * from a base table\n> that you would get back the full rows from those sub tables.\n\nFrankly: ugh. This doesn't square with *my* ideas of object\ninheritance. When you are dealing with something that ISA person,\nyou do not really want to hear about any additional properties it may\nhave; you are dealing with it as a person and not at any finer grain of\ndetail. 
That goes double for dealing with whole collections of persons.\nIf you want to examine a particular member of the collection and\ndynamically downcast it to some more-specific type, the proposed\nclassname/classoid feature will give you the ability to do that;\nbut I think it's a mistake to assume that this should happen by default.\n\n> Since the current PQ interface which doesn't support this notion would\n> remain unchanged this wouldn't affect current users.\n\nHow would you implement this without actually breaking the current\nPQ interface?\n\n> It's probably also desirable to have a syntax for getting just the\n> columns of the base table when this is desired. Say perhaps SELECT %\n> from table. This would be a performance hack for users of libpq and a\n> functionality difference for users of psql.\n\nAgain, I think you've got the default backwards. I remind you also\nof something we've been beating on Peter about: psql is an application\nscripting tool, so you don't get to redefine its behavior at whim,\nanymore than you can change libpq's API at whim.\n\n\n> In addition it would be legal to specify columns that only exist in\n> sub-classes. For example,\n> it would be legal to say...\n>> SELECT *, studentid FROM person;\n\nYipes. I really, really, really DON'T like that one. At the level\nof table person, studentid is unequivocally an invalid column name.\nIf you do this, you couldn't even guarantee that different subtables\nthat had studentid columns would have compatible datatypes for those\ncolumns.\n\n\n> SELECT * FROM person;\n\n> OID CLASSNAME NAME\n> -------------------\n> 2344 person Fred\n> 3445 person Bill\n\n> OID CLASSNAME NAME | STUDENTID | FACULTY\n> -----------------------------------------\n> 2355 student Jim | 23455 | Science\n> 5655 student Chris| 45666 | Arts\n\nThis is not too hard for a person to make sense of, but I think that\nit'd be mighty unwieldy for a program to deal with. 
What would the\nlibpq-like interface look like, and what would a typical client\nroutine look like?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Feb 2000 21:55:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Proposed Changes to PostgreSQL "
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> > So the field is created on the fly to show what table it came from.\n> > Seems like a good idea, though implementing another usually-invisible\n> > column will be tough.\n> \n> What problems do you foresee?\n\nWell, it is usually pretty strange to carry around a column that doesn't\nexist through all the code and finally construct it at the end. I would\nsuspect something in the rewrite system could do that pretty easily,\nthough. That is the direction I would go with that.\n\n> \n> > However, because it is not really a column like\n> > the oid is a column, it should be ok. Of course, internally it is\n> > relid.\n> > \n> > > 2) Changing the sense of the default for getting inherited tuples.\n> > > Currently you only get inherited tuples if you specify \"tablename*\".\n> >\n> > Sounds fine to me. Just realize you are taking on a long-overdue but\n> > big job here.\n> \n> I already have a patch for this one. The change is a few pretty simple\n> changes\n> to gram.y.\n\nOK, you will have to canvass the general list to make sure this does not\nbreak things for people, though our inheritance system needs an overhaul\nbadly.\n\n> \n> > > 3) The ability to return different types of rows from a SELECT. This\n> > > is to allow implementation of ODBMS functionality where a query could\n> > > be required to instantiate objects of differing types with differing\n> > > attributes.\n> > \n> > This bothers me. We return relational data, showing the same number of\n> > columns and types for every query. I don't think we want to change\n> > that, even for OO. \n> \n> What aspects bother you? This is the fundamentally important thing about\n> object databases.\n\nI fear it is totally against the way our API works. 
How does someone\nsee how many columns are in the returned row?\n\n> > How are you going to return that info to the client side?\n> \n> Well the backend <-> frontend protocol that used to be able to return\n> tuples of different types would be put back in.\n> \n> Also the Berkeley Postgres docs had other scenarios where different\n> tuples\n> could be returned. One is that you could have a field of type postquel\n> called, say, EMP.hobbies which had a value of \"retrieve HOBBIES.all\n> where...\", and then \"retrieve EMP.hobbies\" would return tuples of\n> different types of hobbies.\n\nYikes. Strange. Can we just return nulls for the empty fields?\n\nHow many new API calls are required?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Feb 2000 21:57:48 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "On Wed, 2 Feb 2000, Tom Lane wrote:\n\n> > 2) Changing the sense of the default for getting inherited tuples.\n> > Currently you only get inherited tuples if you specify \"tablename*\".\n> > This would be changed so that you get all sub-class tuples too by\n> > default unless you specify \"ONLY tablename\". There are several\n> > rationale for this. Firstly this is what Illustra/Informix have\n> > implemented. Secondly, I believe it is more logical from an OO\n> > perspective as well as giving a more useful default.\n> \n> Well, mumble. That would be the cleanest choice if we were designing\n> in a green field, but we aren't. You're talking about breaking every\n> single extant Postgres application that uses inheritance, and possibly\n> some that don't use it except as a shorthand for making their schemas\n> more compact. (That's not a hypothetical case; I have DBs that use\n> schema inheritance but never do SELECT FROM table*.) I think that's\n> a mighty high price to pay for achieving a little more logical\n> cleanliness.\n> \n> There is also a nontrivial performance penalty that would be paid\n> for reversing this default, because then every ordinary SQL query\n> would suffer the overhead of looking to see whether there are\n> child tables for each table named in the query. That *really*\n> doesn't strike me as a good idea.\n> \n> If Illustra were popular enough to have defined an industry standard\n> about inheritance, I might think we should follow their lead --- but\n> who else has followed their lead?\n> \n> In short, I vote for leaving well enough alone. It's not so badly\n> wrong as to be intolerable, and the pain of changing looks high.\n\nCould this be implemented/patched in using #ifdef's, so that you could\nconfigure using --old-style-inheritance so that those that require it\nstill have it, giving applications a chance to catch up? \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 2 Feb 2000 23:38:05 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> So the field is created on the fly to show what table it came from.\n>>>> Seems like a good idea, though implementing another usually-invisible\n>>>> column will be tough.\n>> \n>> What problems do you foresee?\n\n> Well, it is usually pretty strange to carry around a column that doesn't\n> exist through all the code and finally construct it at the end. I would\n> suspect something in the rewrite system could do that pretty easily,\n> though. That is the direction I would go with that.\n\nYeah. In fact, since the field is not required except on specific\nuser request (explicit SELECT, or if you like Chris' SET SHOW_CLASSNAME\nidea, that'd still get translated into a SELECT target item at some\npretty early stage), I don't see any need for it to get added to the\nHeapTupleHeader fields. That makes the implementation a *lot* cleaner\nbecause you wouldn't need in-memory HeapTupleHeader to be different from\non-disk headers. I'm visualizing this as a parameterless function (or\nmaybe a new primitive expression node type) that gets evaluated during\nExecProject's construction of the output tuple for a bottom-level\nseqscan or indexscan plan node. The only trick is to persuade the\nplanner to push it down to the bottom level; normally anything that\nisn't a Var gets evaluated at the top of the plan tree.\n\n>>>> This bothers me. We return relational data, showing the same number of\n>>>> columns and types for every query. I don't think we want to change\n>>>> that, even for OO. \n\nMy thought also. If we had a *real* object orientation, then a returned\ncolumn would have an abstract data type that might correspond to an\nobject supertype. 
Of course that just pushes the problem down a level:\nhow does the application know what methods the returned object has?\nHow can it even invoke those methods --- whatever code might exist\nfor them would live on the server, presumably, not get shipped around\nin query results.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Feb 2000 22:39:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Re: [GENERAL] Proposed Changes to PostgreSQL "
},
{
"msg_contents": "Tom Lane wrote:\n\n> > 1) An imaginary field in every tuple that tells you the class it came\n> This is a good idea, but it seems to me that it'd fit into the system\n> traditions better if the pseudo-field gave the OID of the source\n> relation. \n\nThis was my initial thought too, but then it occurred to me that SQL\ndoesn't normally deal in oids. For example you don't do a DROP TABLE\noid;\n\nOTOH, oids are probably programmatically useful for things like ODBMSs.\n\nWhat do you think about having both? I know you can go from one to the\nother by joining with pg_class, but that's too inconvenient, and I can't\nmake up my mind which is the better \"system tradition\" either.\n\nI'm not overly fussed on this point though.\n\n> Well, mumble. That would be the cleanest choice if we were designing\n> in a green field, but we aren't. You're talking about breaking every\n> single extant Postgres application that uses inheritance, and possibly\n> some that don't use it except as a shorthand for making their schemas\n> more compact. (That's not a hypothetical case; I have DBs that use\n> schema inheritance but never do SELECT FROM table*.) I think that's\n> a mighty high price to pay for achieving a little more logical\n> cleanliness.\n\nOk, well compatibility is always a contentious thing. But in your case\nyou are misusing the inheritance feature.\n\nThe question is, are you willing to do the (simple) changes to your\ncode to cater for the common good? I'm wanting to make postgresql into a\nREAL odbms, and this is a stumbling point that will eventually affect\n100x as many users as it does now (I hope :).\n\nWe can also leave the old gram.y for people who want to retain\ncompatibility for longer.\n\n> There is also a nontrivial performance penalty that would be paid\n> for reversing this default, because then every ordinary SQL query\n> would suffer the overhead of looking to see whether there are\n> child tables for each table named in the query. 
That *really*\n> doesn't strike me as a good idea.\n\nI can't comment on what the current performance penalty would be, but \nI'm sure this can be optimised to be a completely trivial overhead.\n \n> If Illustra were popular enough to have defined an industry standard\n> about inheritance, I might think we should follow their lead --- but\n> who else has followed their lead?\n\nWell Informix of course, which is not small potatoes.\n \n> > I would propose that that anytime you do a SELECT * from a base table\n> > that you would get back the full rows from those sub tables.\n> \n> Frankly: ugh. This doesn't square with *my* ideas of object\n> inheritance. When you are dealing with something that ISA person,\n> you do not really want to hear about any additional properties it may\n> have; you are dealing with it as a person and not at any finer grain of\n> detail. That goes double for dealing with whole collections of persons.\n> If you want to examine a particular member of the collection and\n> dynamically downcast it to some more-specific type, the proposed\n> classname/classoid feature will give you the ability to do that;\n> but I think it's a mistake to assume that this should happen by default.\n\nThis would be the case if the database were the whole world. But it is\nnot,\nit is a repository for applications written in other languages. How can\nyou\n\"dynamically downcast to a more specific type\" if the database hasn't\nreturned\nthe columns of the more specific type? 
How can I instantiate a C++\nobject of\ntype \"Student\" if the database has only returned to me the data members\nof type\n\"Person\"?\n\n> > Since the current PQ interface which doesn't support this notion would\n> > remain unchanged this wouldn't affect current users.\n> \n> How would you implement this without actually breaking the current\n> PQ interface?\n\nBy adding new functions for use when you need to access the extra\ncolumns.\n\n> > It's probably also desirable to have a syntax for getting just the\n> > columns of the base table when this is desired. Say perhaps SELECT %\n> > from table. This would be a performance hack for users of libpq and a\n> > functionality difference for users of psql.\n> \n> Again, I think you've got the default backwards. I remind you also\n> of something we've been beating on Peter about: psql is an application\n> scripting tool, so you don't get to redefine its behavior at whim,\n> anymore than you can change libpq's API at whim.\n\nI am less adamant about the default in this scenario than in the \"ONLY\ntable\"\nscenario. I'm a bit concerned about the fact that this would break\ncompatibility with OQL standards, but I can live with this.\n \n> > In addition it would be legal to specify columns that only exist in\n> > sub-classes. For example,\n> > it would be legal to say...\n> >> SELECT *, studentid FROM person;\n> \n> Yipes. I really, really, really DON'T like that one. At the level\n> of table person, studentid is unequivocally an invalid column name.\n\nThe reason for this is you need some kind of compromise between seeing\nevery single column (which overwhelms you in psql) and not seeing any\nsub-type columns at all.\n\n> If you do this, you couldn't even guarantee that different subtables\n> that had studentid columns would have compatible datatypes for those\n> columns.\n\nI think you can because postgres won't let you create sub-types with\ncolumn of the same name with incompatible data types. 
In fact it is\nthis very fact about postgres that makes this feature feasible.\n\n> > SELECT * FROM person;\n> \n> > OID CLASSNAME NAME\n> > -------------------\n> > 2344 person Fred\n> > 3445 person Bill\n> \n> > OID CLASSNAME NAME | STUDENTID | FACULTY\n> > -----------------------------------------\n> > 2355 student Jim | 23455 | Science\n> > 5655 student Chris| 45666 | Arts\n> \n> This is not too hard for a person to make sense of, but I think that\n> it'd be mighty unwieldy for a program to deal with. What would the\n> libpq-like interface look like, and what would a typical client\n> routine look like?\n\nThe PQ interface would have a new function, something like\nPQnfieldsv(PQresult, tuplenum),\nso it returns a different number for each tuple.\n\nBut the real benefit is not writing \"unwieldy\" code in C, but ODBMS-style\ncode where you can go...\nList<Shape> l = query(\"SELECT * FROM shape\");\nShape *s;\nfor (l.begin(); s = l.get(); l.next())\n s->display();\n\nBecause if the dbms returns ALL the columns, a C++ runtime system can\nproperly instantiate subtypes and use polymorphism.\n",
"msg_date": "Thu, 03 Feb 2000 14:41:18 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > I already have a patch for this one. The change is a few pretty simple\n> > changes\n> > to gram.y.\n> \n> OK, you will have to canvas the general list to make sure this does not\n> break things for people, though our inheritance system needs an overhaul\n> badly.\n\nThis is already CCed to the general list.\n\n> I fear it is totally against the way our API works. How does someone\n> see how many columns in the returned row?\n\nA new API PQnfieldsv(PQresult, tupnum) or some such.\n\n> Yikes. Strange. \n\nStrange for C code perhaps. Very useful for constructing real objects in \nOO application code framework.\n\n> Can we just return nulls for the empty fields?\n\nWell, I think we should probably distinguish between a field that is\nnull,\nand a field that simply doesn't exist.\n\n> How many new API calls are required?\n\nPerhaps just the one. (above).\n",
"msg_date": "Thu, 03 Feb 2000 14:48:46 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Tom Lane wrote:\n> >>>> This bothers me. We return relational data, showing the same number of\n> >>>> columns and types for every query. I don't think we want to change\n> >>>> that, even for OO.\n> \n> My thought also. If we had a *real* object orientation, then a returned\n> column would have an abstract data type that might correspond to an\n> object supertype. Of course that just pushes the problem down a level:\n> how does the application know what methods the returned object has?\n> How can it even invoke those methods --- whatever code might exist\n> for them would live on the server, presumably, not get shipped around\n> in query results.\n\nIn (most) ODBMSes, the code for a class does NOT live in the database\nserver. (How\nwould you store a C++ binary in a database?).\n\nWhat happens is when a query returns an object, some magic behind the\nscenes\nchecks the type of the returned object (thus the need for the\n\"classname\" column\nor similar.) The magic behind the scenes then instantiates a C++ object\nof\nthe correct class and populates all the data members from the query\nresults.\n\nThe application code is then free to make polymorphic calls on the\nobject\nbecause ALL the fields are populated, not just those of the base class.\n",
"msg_date": "Thu, 03 Feb 2000 14:56:16 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] Re: [GENERAL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "The Hermit Hacker wrote:\n\n> Could this be implemented/patched in using #ifdef's, so that you could\n> configure using --old-style-inheritance so that those that require it\n> still have it, giving applications a chance to catch up?\n\nSounds like an excellent idea, although I'm not sure how to ifdef a .y\nbison file.\n",
"msg_date": "Thu, 03 Feb 2000 14:57:12 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "> Yeah. In fact, since the field is not required except on specific\n> user request (explicit SELECT, or if you like Chris' SET SHOW_CLASSNAME\n> idea, that'd still get translated into a SELECT target item at some\n> pretty early stage), I don't see any need for it to get added to the\n> HeapTupleHeader fields. That makes the implementation a *lot* cleaner\n> because you wouldn't need in-memory HeapTupleHeader to be different from\n> on-disk headers. I'm visualizing this as a parameterless function (or\n> maybe a new primitive expression node type) that gets evaluated during\n> ExecProject's construction of the output tuple for a a bottom-level\n> seqscan or indexscan plan node. The only trick is to persuade the\n> planner to push it down to the bottom level; normally anything that\n> isn't a Var gets evaluated at the top of the plan tree.\n\nYes, I agree this is a good way to do it.\n\n> >>>> This bothers me. We return relational data, showing the same number of\n> >>>> columns and types for every query. I don't think we want to change\n> >>>> that, even for OO. \n> \n> My thought also. If we had a *real* object orientation, then a returned\n> column would have an abstract data type that might correspond to an\n> object supertype. Of course that just pushes the problem down a level:\n> how does the application know what methods the returned object has?\n> How can it even invoke those methods --- whatever code might exist\n> for them would live on the server, presumably, not get shipped around\n> in query results.\n\nAgreed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Feb 2000 22:58:41 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Re: [GENERAL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "> Tom Lane wrote:\n> \n> > > 1) An imaginary field in every tuple that tells you the class it came\n> > This is a good idea, but it seems to me that it'd fit into the system\n> > traditions better if the pseudo-field gave the OID of the source\n> > relation. \n> \n> This was my initial thought too, but then it occured to me that SQL\n> doesn't normally deal in oids. For example you don't do a DROP TABLE\n> oid;\n> \n> OTOH, oids are probably programmatically useful for things like ODBMSs.\n> \n> What do you think about having both? I know you can go from one to the \n> other by joining with pg_class, but that's too inconvenient, and I can't\n> make up my mind which is the better \"system tradition\" either.\n\nSure, let them have both. Why not, or you could force them to join to\npg_class for the name. That would work too.\n\n> Ok, well compatibility is always a contentious thing. But in your case\n> you are mis-using the inheritance feature.\n> \n> The question is, are you willing to do the (simple) changes to your\n> code to cater for the common good? 
I'm wanting to make postgresql into a\n> REAL odbms, and this is a stumbling point that will eventually affect\n> 100x\n> as many users as it does now (I hope :).\n> \n> We can also leave the old gram.y for people who want to retain\n> compatibility\n> for longer.\n\nI would canvas the list to find out how many people object, and if there\nare few, you may be able to get away with something in config.h.in that\nthey can change if they want the old behavour.\n\n> > > Since the current PQ interface which doesn't support this notion would\n> > > remain unchanged this wouldn't affect current users.\n> > \n> > How would you implement this without actually breaking the current\n> > PQ interface?\n> \n> By adding new functions for use when you need to access the extra\n> columns.\n\nWhatever it is, the API has to be lean and clean.\n\nI saw your PQnfieldsv, and that looks fine to me.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Feb 2000 23:02:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "I can live with this. Thanks.\n\n\n> Bruce Momjian wrote:\n> \n> > > I already have a patch for this one. The change is a few pretty simple\n> > > changes\n> > > to gram.y.\n> > \n> > OK, you will have to canvas the general list to make sure this does not\n> > break things for people, though our inheritance system needs an overhaul\n> > badly.\n> \n> This is already CCed to the general list.\n> \n> > I fear it is totally against the way our API works. How does someone\n> > see how many columns in the returned row?\n> \n> A new API PQnfieldsv(PQresult, tupnum) or some such.\n> \n> > Yikes. Strange. \n> \n> Strange for C code perhaps. Very useful for constructing real objects in \n> OO application code framework.\n> \n> > Can we just return nulls for the empty fields?\n> \n> Well, I think we should probably distinguish between a field that is\n> null,\n> and a field that simply doesn't exist.\n> \n> > How many new API calls are required?\n> \n> Perhaps just the one. (above).\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Feb 2000 23:03:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Again, I think you've got the default backwards. I remind you also\n> of something we've been beating on Peter about: psql is an application\n> scripting tool, so you don't get to redefine its behavior at whim,\n> anymore than you can change libpq's API at whim.\n\nIf this is the only objection, we could make the old behaviour available\nby a SET command, as well as a command-line switch, as well as a \n./configure option.\n\nI hope we can get the best design here possible without over-emphasis\non compatibility.\n",
"msg_date": "Thu, 03 Feb 2000 15:11:01 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "> Tom Lane wrote:\n> \n> > Again, I think you've got the default backwards. I remind you also\n> > of something we've been beating on Peter about: psql is an application\n> > scripting tool, so you don't get to redefine its behavior at whim,\n> > anymore than you can change libpq's API at whim.\n> \n> If this is the only objection, we could make the old behaviour available\n> by a SET command, as well as a command-line switch, as well as a \n> ./configure option.\n> \n> I hope we can get the best design here possible without over-emphasis\n> on compatibility.\n\nSET command is probably the best.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Feb 2000 23:22:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "At 09:55 PM 2/2/00 -0500, Tom Lane wrote:\n\n>There is also a nontrivial performance penalty that would be paid\n>for reversing this default, because then every ordinary SQL query\n>would suffer the overhead of looking to see whether there are\n>child tables for each table named in the query. That *really*\n>doesn't strike me as a good idea.\n\nThank you for pointing this out, because my first reaction to\nthe proposal was \"what's the overhead for SQL users\"?\n\nGiven the stated goals of becoming a fast, efficient, reliable\nSQL engine, this has to be a crucial consideration.\n\nOn the other hand, as someone who once made his living off his \ndesigned and implemented optimizing multi-language, multi-platform\ncompiler technology...is it entirely out of the question to \nconsider more greatly abstracting the language (gram.y/analyze.c)\nand backend (optimizer and executor) interfaces so more than one\nfront-end could exist (even if only in experimental and research\nenvironments)? Along with front-end specific versions of libpq?\n\nThese front-ends wouldn't necessarily need to be supported by\nthe mainstream PG development group, except to support a defined\nand sufficiently abstract interface to the optimization/planning and\nexecuting guts of the system so that folks could mess around to\ntheir heart's content. And bear the burden of doing so if they\npick up users :)\n\nJust a thought...\n\n>> I would propose that that anytime you do a SELECT * from a base table\n>> that you would get back the full rows from those sub tables.\n>\n>Frankly: ugh. This doesn't square with *my* ideas of object\n>inheritance.\n\nNor mine, in fact the stuff I've seen about primitive OO in databases\nmake me thing the folks just don't get it.\n\nNot to mention that I'm not convinced that \"getting it\" is worth it. OO\nfits some paradigms, not others, when programming in the large. 
And \nmost database stuff is really programming in the small (the query parts,\nthe data is often huge, of course). The notion of asking a query, as\nin (say) psql is more related to the notion of typing a few lines at\nBASIC than the notion of writing a few million lines of integrated \ncode. In database design, even more so than in conventional programming,\nit is the data model that reigns supreme and the actual size tends to\nbe manageable, though the models themselves can be very complex.\n\nI offer this as a reason why commercial DB users are more concerned \nwith things like performance, scalability, and the like than with\nreworking of the RDBMS paradigm. Complaints about queries seem to\nplace heavy emphasis on \"why they are slow\", and the OO paradigm\ndoesn't help here. I'm not certain that psuedo-OO features help.\n\nOne reason I raise the issue of possible multiple front-ends (or making\nit easy for folks to make there own by making the parser->optimizer/backend\ninterface more general) is that this whole area would seem to be one \nthat begs for RESEARCH and experimentalism.\n\nThe reality, AFAIK, is that in the crucible of commercial use, real\nOO databases and thinking simply haven't penetrated. \n\nNor is Postgres written in C++ :) (GOOD decision to abandon that\nthought, IMO, though at the moment I'm working on C++ tools for\nmy current client).\n\n\n\n>Again, I think you've got the default backwards. I remind you also\n>of something we've been beating on Peter about: psql is an application\n>scripting tool, so you don't get to redefine its behavior at whim,\n>anymore than you can change libpq's API at whim.\n\nYeah, this is VERY important.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 02 Feb 2000 21:09:52 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL "
},
{
"msg_contents": "Don Baccus wrote:\n\n> Given the stated goals of becoming a fast, efficient, reliable\n> SQL engine, this has to be a crucial consideration.\n\nI'm sure this can be made fast.\n\n> On the other hand, as someone who once made his living off his\n> designed and implemented optimizing multi-language, multi-platform\n> compiler technology...is it entirely out of the question to\n> consider more greatly abstracting the language (gram.y/analyze.c)\n> and backend (optimizer and executor) interfaces so more than one\n> front-end could exist (even if only in experimental and research\n> environments)? Along with front-end specific versions of libpq?\n\nA good thought, but we still need one good front end that supports\nall the features.\n\n> >> I would propose that that anytime you do a SELECT * from a base table\n> >> that you would get back the full rows from those sub tables.\n> >\n> >Frankly: ugh. This doesn't square with *my* ideas of object\n> >inheritance.\n> \n> Nor mine, in fact the stuff I've seen about primitive OO in databases\n> make me thing the folks just don't get it.\n> \n> Not to mention that I'm not convinced that \"getting it\" is worth it. OO\n> fits some paradigms, not others, when programming in the large.\n\nWell, the features I'm talking about don't affect you unless you want\nOO.\n\n> And\n> most database stuff is really programming in the small (the query parts,\n> the data is often huge, of course). The notion of asking a query, as\n> in (say) psql is more related to the notion of typing a few lines at\n> BASIC than the notion of writing a few million lines of integrated\n> code. 
In database design, even more so than in conventional programming,\n> it is the data model that reigns supreme and the actual size tends to\n> be manageable, though the models themselves can be very complex.\n\nAnd as those models become so complex it is crucial that the data-model\nthat\n\"reigns supreme\" is properly integrated with the programming language.\n\nFor example, in an IBM Java project I'm working on there is 15000 lines\nof \ncode that converts about 10 or so SQL tables into Java objects. Insane\nstuff.\n\n> I offer this as a reason why commercial DB users are more concerned\n> with things like performance, scalability, and the like than with\n> reworking of the RDBMS paradigm. \n\nActually developers are very interested in supporting the ODBMS paradigm\nas you can see from the Sun proposed standard for RDBMS interface which\nis an exact copy of the ODMG ODBMS interface standard.\n\nIn fact I think about 90% of \"stuff\" is best solved with an ODBMS\nstyle of interaction. The trouble is that most ODBMS don't do the other\n10% very well (i.e. wierd and wonderful queries), which is where\npostgresql _could_ be the ultimate at solving both.\n\n> Complaints about queries seem to\n> place heavy emphasis on \"why they are slow\", and the OO paradigm\n> doesn't help here.\n\nHuh? The OO paradigm helps heaps here because you can model something\nwith a far smaller number of tables.\n\n> I'm not certain that psuedo-OO features help.\n\nDon't know what a pseudo-OO feature is.\n\n> One reason I raise the issue of possible multiple front-ends (or making\n> it easy for folks to make there own by making the parser->optimizer/backend\n> interface more general) is that this whole area would seem to be one\n> that begs for RESEARCH and experimentalism.\n\nNo research is required. I simply want to implement the ODMG STANDARD\nfor ODBMS databases on PostgreSQL. 
There are no great design issues\nhere,\njust a matter of nailing down the details so that everyone can live \nwith them.\n\n> The reality, AFAIK, is that in the crucible of commercial use, real\n> OO databases and thinking simply haven't penetrated.\n\nNot really true. In certain areas ODBMSes are pervasive. For example\nmany\nTelco companies use ODBMSes for the majority of their stuff. It's\nnecessary\nto get the performance they need. Also of course CAD apps can only use\nan ODBMS.\n\nNo offence, but you havn't actually used one have you?\n",
"msg_date": "Thu, 03 Feb 2000 16:38:08 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Don Baccus wrote:\n> \n> At 09:55 PM 2/2/00 -0500, Tom Lane wrote:\n> \n> >There is also a nontrivial performance penalty that would be paid\n> >for reversing this default, because then every ordinary SQL query\n> >would suffer the overhead of looking to see whether there are\n> >child tables for each table named in the query. That *really*\n> >doesn't strike me as a good idea.\n> \n> Thank you for pointing this out, because my first reaction to\n> the proposal was \"what's the overhead for SQL users\"?\n\n\nI just did a performance check on this. I found that the overhead\nis one tenth of a millisecond on a Sun desktop workstation. Pretty\ntrivial, and I'm sure it can be improved.\n",
"msg_date": "Thu, 03 Feb 2000 17:07:40 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "On Wed, Feb 02, 2000 at 09:57:48PM -0500, Bruce Momjian allegedly wrote:\n> > > > 3) The ability to return different types of rows from a SELECT. This\n> > > > is to allow implementation of ODBMS functionality where a query could\n> > > > be required to instantiate objects of differing types with differing\n> > > > attributes.\n> > > \n> > > This bothers me. We return relational data, showing the same number of\n> > > columns and types for every query. I don't think we want to change\n> > > that, even for OO. \n> > \n> > What aspects bother you? This is the fundamental important thing about\n> > object databases.\n> \n> I fear it is totally against the way our API works. How does someone\n> see how many columns in the returned row?\n\nThis would probably break applications written in PHP and Perl (and\npossibly others) that have their queryresults returned to them in a\nnumerically indexed array (index by offset). If this behaviour could\nbe turned off, than it shouldn't be a problem.\n\nMathijs\n",
"msg_date": "Thu, 3 Feb 2000 07:25:15 +0100",
"msg_from": "Mathijs Brands <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Re: [GENERAL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Mathijs Brands wrote:\n> \n> On Wed, Feb 02, 2000 at 09:57:48PM -0500, Bruce Momjian allegedly wrote:\n> > > > > 3) The ability to return different types of rows from a SELECT. This\n> > > > > is to allow implementation of ODBMS functionality where a query could\n> > > > > be required to instantiate objects of differing types with differing\n> > > > > attributes.\n> > > >\n> > > > This bothers me. We return relational data, showing the same number of\n> > > > columns and types for every query. I don't think we want to change\n> > > > that, even for OO.\n> > >\n> > > What aspects bother you? This is the fundamental important thing about\n> > > object databases.\n> >\n> > I fear it is totally against the way our API works. How does someone\n> > see how many columns in the returned row?\n> \n> This would probably break applications written in PHP and Perl (and\n> possibly others) that have their queryresults returned to them in a\n> numerically indexed array (index by offset). If this behaviour could\n> be turned off, than it shouldn't be a problem.\n\nIt wouldn't affect them because the current APIs would continue to\nreturn\nthe same base-level columns. You would only get access to the extra\ncolumns\nwith a new API.\n",
"msg_date": "Thu, 03 Feb 2000 17:29:38 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] Re: [GENERAL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Tom Lane wrote:\n>>>> 1) An imaginary field in every tuple that tells you the class it came\n>> This is a good idea, but it seems to me that it'd fit into the system\n>> traditions better if the pseudo-field gave the OID of the source\n>> relation. \n\n> What do you think about having both? I know you can go from one to the \n> other by joining with pg_class, but that's too inconvenient, and I can't\n> make up my mind which is the better \"system tradition\" either.\n\nIf we can implement it as I sketched before, there's no reason not to\noffer both, since either one would create zero overhead for any query\nnot using the feature.\n\nI'll comment on the other issues later ... but I will say that I don't\nthink it's acceptable to add *any* overhead to standard-SQL queries\nin order to support inheritance better. The vast majority of our users\nwant SQL performance and don't give a damn about inheritance. We have\nto pay attention to that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Feb 2000 02:00:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL "
},
{
"msg_contents": "Tom Lane wrote:\n\n> I'll comment on the other issues later ... but I will say that I don't\n> think it's acceptable to add *any* overhead to standard-SQL queries\n> in order to support inheritance better. The vast majority of our users\n> want SQL performance and don't give a damn about inheritance. We have\n> to pay attention to that.\n\nWell I see that pg_class has columns like \"relhasindex\". If we added a\n\"relhassubclass\", the overhead should be unmeasureable.\n",
"msg_date": "Thu, 03 Feb 2000 20:17:10 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Don Baccus wrote:\n> \n> \n> The reality, AFAIK, is that in the crucible of commercial use, real\n> OO databases and thinking simply haven't penetrated.\n\nAFAIK Informix integrated most OO features from Illustra into their UDB\nand also latest versions of Oracle have moved a lot in that direction too.\n \n> Nor is Postgres written in C++ :)\n\nwhat does C++ have to do with OO ;)\n\n----------------------\nHannu\n",
"msg_date": "Thu, 03 Feb 2000 12:00:20 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Hello Chris,\n\nOnce, Thursday, February 03, 2000, 6:30:26 AM, you wrote:\n\nCB> 1) An imaginary field in every tuple that tells you the class it came\nCB> from.\nCB> This is useful when you select from table* and want to know which\nCB> relation the object actually came from. It wouldn't be stored on disk,\nCB> and like oid it wouldn't be displayed when you do SELECT *. The field\nCB> would be called classname. So you could have...\nCB> SELECT p.classname, p.name FROM person p;\nCB> person | Fred\nCB> student | Bill\nCB> employee | Jim\nCB> person | Chris\n\nI am voting for this by both hands. Now we forced to use an additional\ncolumn classname in every table and rule to fill this column.\n\nCB> 2) Changing the sense of the default for getting inherited tuples.\nCB> Currently you only get inherited tuples if you specify \"tablename*\".\nCB> This would be changed so that you get all sub-class tuples too by\nCB> default unless you specify \"ONLY tablename\". There are several\nCB> rationale for this. Firstly this is what Illustra/Informix have\nCB> implemented. Secondly, I believe it is more logical from an OO\nCB> perspective as well as giving a more useful default. If a politician\nCB> IS a person and I say SELECT * from person, then logically I should\nCB> see all the politicians because they are people too (so they claim\nCB> :). Thirdly, there are a whole range of SQL statements that should\nCB> probably be disallowed without including sub-classes. e.g. an ALTER\nCB> TABLE ADD COLUMN that does not include sub-classes is almost certainly\nCB> undesirable. It seems ashame to have to resort to non-standard SQL\nCB> with the \"*\" syntax in this case when it is really your only\nCB> choice. Basicly, wanting ONLY a classname is a far more unusual\nCB> choice, and leaving off the \"*\" is a common error. Fourthly, it seems\nCB> out of character for the SQL language to have this single character\nCB> operator. 
The SQL style is to use wordy descriptions of the operators\nCB> meaning. \"ONLY\" fits well here because it describes its own meaning\nCB> perfectly whereas to the unitiated, \"*\" is harder to guess at. While\nCB> this change is an incompatibility I hope for those few people using\nCB> inheritance they can accept the need to move forward without\nCB> over-burden of backwards compatibility.\n\nSounds very logically.\n\n-- \nBest regards,\n Yury ICQ 11831432\n mailto:[email protected]\n\n\n",
"msg_date": "Thu, 3 Feb 2000 15:03:57 +0500",
"msg_from": "Yury Don <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> >\n> > > So the field is created on the fly to show what table it came from.\n> > > Seems like a good idea, though implementing another usually-invisible\n> > > column will be tough.\n> >\n> > What problems do you forsee?\n> \n> Well, it is usually pretty strange to carry around a column that doesn't\n> exist through all the code and finally contruct it at the end. I would\n> suspect something in the rewrite system could do that pretty easily,\n> though. That is the direction I would go with that.\n> \n\nOracle has a ROWNR (IIRC) pseudo-column that is added in th every end of \nquery and is a convienient way to put numbers on report rows (among other \nthings).\n\n------------\nHannu\n",
"msg_date": "Thu, 03 Feb 2000 12:06:09 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> \n> Yikes. Strange. Can we just return nulls for the empty fields?\n\nI think more natural way would be to define a new type (NAF - NotAFiled),\nlike we have NAN for floats (do we ?, at least IEEE has)\n\n-----------------\nHannu\n",
"msg_date": "Thu, 03 Feb 2000 12:10:47 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Chris Bitmead wrote:\n> \n> Tom Lane wrote:\n> \n> > > 1) An imaginary field in every tuple that tells you the class it came\n> > This is a good idea, but it seems to me that it'd fit into the system\n> > traditions better if the pseudo-field gave the OID of the source\n> > relation.\n> \n> This was my initial thought too, but then it occured to me that SQL\n> doesn't normally deal in oids. For example you don't do a DROP TABLE\n> oid;\n\nDROP TABLE (SELECT relname FROM pg_class WHERE oid=the_oid);\n\nwould be cool ;)\n\n> > > I would propose that that anytime you do a SELECT * from a base table\n> > > that you would get back the full rows from those sub tables.\n\nMaybe SELECT ** FROM BASE would be more flexible as it leaves the standard \nSQL with its \"standard\" meaning ?\n\n> > Frankly: ugh. This doesn't square with *my* ideas of object\n> > inheritance. When you are dealing with something that ISA person,\n> > you do not really want to hear about any additional properties it may\n> > have; you are dealing with it as a person and not at any finer grain of\n> > detail. That goes double for dealing with whole collections of persons.\n> > If you want to examine a particular member of the collection and\n> > dynamically downcast it to some more-specific type, the proposed\n> > classname/classoid feature will give you the ability to do that;\n> > but I think it's a mistake to assume that this should happen by default.\n> \n> This would be the case if the database were the whole world. But it is\n> not,\n> it is a repository for applications written in other languages. How can\n> you\n> \"dynamically downcast to a more specific type\" if the database hasn't\n> returned\n> the columns of the more specific type? 
How can I instantiate a C++\n> object of\n> type \"Student\" if the database has only returned to me the data members\n> of type\n> \"Person\"?\n\nYou could do as some DB's (IIRC Oracle) do with large objects - return the \nwhole row if doing a select that has many rows.\n\nreturn just a handle when going over a cursor with FETCH 1 and then have \ncalls to get the rest.\n\nWe will have to change the API sometime not too distant anyway, the current \napi is unable to deal with anything that does not have a nice textual \nrepresentation (like an image or sound) in spite of all the talks about \neasy extensibility - the extensibility is all in the backend, ther is no \neasy way to get new datatypes in/out.\n\n---------------\nHannu\n",
"msg_date": "Thu, 03 Feb 2000 12:31:23 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Hannu Krosing wrote:\n\n> Maybe SELECT ** FROM BASE would be more flexible as it leaves the standard\n> SQL with its \"standard\" meaning ?\n\nThat was my first thought and it's definitely a possibility. My argument\nagainst it is that SQL doesn't have a \"standard meaning\" in the case of\ninheritance, and ** is an incompatibility with OQL.\n\nI suspect we need both. Something like \nSET GET_INHERITED_COLUMNS true; etc. \n \n> We will have to change the API sometime not too distant anyway, the current\n> api is unable to deal with anything that does not have a nice textual\n> representation (like an image or sound) in spite of all the talks about\n> easy extensibility - the extensibility is all in the backend, ther is no\n> easy way to get new datatypes in/out.\n\nWhat about PQbinaryTuples() and friends?\n",
"msg_date": "Thu, 03 Feb 2000 21:46:45 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "> Don Baccus wrote:\n> > \n> > At 09:55 PM 2/2/00 -0500, Tom Lane wrote:\n> > \n> > >There is also a nontrivial performance penalty that would be paid\n> > >for reversing this default, because then every ordinary SQL query\n> > >would suffer the overhead of looking to see whether there are\n> > >child tables for each table named in the query. That *really*\n> > >doesn't strike me as a good idea.\n> > \n> > Thank you for pointing this out, because my first reaction to\n> > the proposal was \"what's the overhead for SQL users\"?\n> \n> \n> I just did a performance check on this. I found that the overhead\n> is one tenth of a millisecond on a Sun desktop workstation. Pretty\n> trivial, and I'm sure it can be improved.\n\nGood point. Has to be non-mearurable performance penalty because most\npeople don't use it. Maybe you will need a system cache entry for this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Feb 2000 07:09:18 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "> Tom Lane wrote:\n> \n> > I'll comment on the other issues later ... but I will say that I don't\n> > think it's acceptable to add *any* overhead to standard-SQL queries\n> > in order to support inheritance better. The vast majority of our users\n> > want SQL performance and don't give a damn about inheritance. We have\n> > to pay attention to that.\n> \n> Well I see that pg_class has columns like \"relhasindex\". If we added a\n> \"relhassubclass\", the overhead should be unmeasureable.\n\nYes, but how do you keep that accurate? If I add indexes, then drop\nthem, does relhasindex go to false. Could you do that for relhassubclass?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Feb 2000 07:13:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > Well I see that pg_class has columns like \"relhasindex\". If we added a\n> > \"relhassubclass\", the overhead should be unmeasureable.\n> \n> Yes, but how do you keep that accurate? If I add indexes, then drop\n> them, does relhasindex go to false. \n\nI don't know. Does it? \n\n>Could you do that for relhassubclass?\n\nIf we made it relnumsubclasses and incremented/decremented on\nCREATE/DROP, it seems easy in theory.\n\n\n-- \nChris Bitmead\nmailto:[email protected]\nhttp://www.techphoto.org - Photography News, Stuff that Matters\n",
"msg_date": "Thu, 03 Feb 2000 23:28:31 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> > > Well I see that pg_class has columns like \"relhasindex\". If we added a\n> > > \"relhassubclass\", the overhead should be unmeasureable.\n> > \n> > Yes, but how do you keep that accurate? If I add indexes, then drop\n> > them, does relhasindex go to false. \n> \n> I don't know. Does it? \n\nOops:\n\t\n\ttest=> create table test(x int);\n\tCREATE\n\ttest=> create index i_test on test(x);\n\tCREATE\n\ttest=> select relhasindex from pg_class where relname = 'test';\n\t relhasindex \n\t-------------\n\t t\n\t(1 row)\n\n\ttest=> drop index i_test;\n\tDROP\n\ttest=> select relhasindex from pg_class where relname = 'test';\n\t relhasindex \n\t-------------\n\t t\n\t(1 row)\n\nLet me add that to the TODO list.\n\n> \n> >Could you do that for relhassubclass?\n> \n> If we made it relnumsubclasses and incremented/decremented on\n> CREATE/DROP, it seems easy in theory.\n\nYes, that would work. Seems hasindex has problems.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Feb 2000 07:37:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> Oops:\n\n> test=> drop index i_test;\n> DROP\n> test=> select relhasindex from pg_class where relname = 'test';\n> relhasindex\n> -------------\n> t\n> (1 row)\n> \n> Let me add that to the TODO list.\n\nWhy not change that to a relnumindexes as well? Easier to maintain and\nmore useful information.\n\n> > >Could you do that for relhassubclass?\n> >\n> > If we made it relnumsubclasses and incremented/decremented on\n> > CREATE/DROP, it seems easy in theory.\n> \n> Yes, that would work. Seems hasindex has problems.\n",
"msg_date": "Fri, 04 Feb 2000 00:05:32 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "> > Let me add that to the TODO list.\n> \n> Why not change that to a relnumindexes as well? Easier to maintain and\n> more useful information.\n\nYes, we probably should do that, but I bet some interfaces us it. \nComments?\n\nActually, looks like only pg_dump uses it, so maybe we would be OK.\nMaybe 7.0 is a good time to fix this.\n\n> \n> > > >Could you do that for relhassubclass?\n> > >\n> > > If we made it relnumsubclasses and incremented/decremented on\n> > > CREATE/DROP, it seems easy in theory.\n> > \n> > Yes, that would work. Seems hasindex has problems.\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Feb 2000 08:26:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "> [ discussion on changing the default to getting subclasses ]\n\nI object.\n\nHow about a set variable?\n\nSET GETSUBCLASSES = true\n\nWith the '*' and ONLY being explicit overrides to the setting\nof the variable. The default would be 'false'. I would not\nobject to a configuration switch that would change the\ndefault.\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Thu, 03 Feb 2000 08:40:24 -0500",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "At 04:38 PM 2/3/00 +1100, Chris Bitmead wrote:\n>Don Baccus wrote:\n\n>> On the other hand, as someone who once made his living off his\n>> designed and implemented optimizing multi-language, multi-platform\n>> compiler technology...is it entirely out of the question to\n>> consider more greatly abstracting the language (gram.y/analyze.c)\n>> and backend (optimizer and executor) interfaces so more than one\n>> front-end could exist (even if only in experimental and research\n>> environments)? Along with front-end specific versions of libpq?\n>\n>A good thought, but we still need one good front end that supports\n>all the features.\n\nI wasn't think in terms of this being mutually exclusive with your\ndesires. Merely raising up the notion that the possibility exists\nof creating a sandbox, so to speak, for people to play in, a tool\nfor the exploration of such concepts.\n\n>> Nor mine, in fact the stuff I've seen about primitive OO in databases\n>> make me thing the folks just don't get it.\n>> \n>> Not to mention that I'm not convinced that \"getting it\" is worth it. OO\n>> fits some paradigms, not others, when programming in the large.\n>\n>Well, the features I'm talking about don't affect you unless you want\n>OO.\n\nNo, and I wasn't arguing that you shouldn't move forward, either. I\nwas just stating my personal opinion regarding the utility of simple\nOO-ish features, that's all.\n\n>> One reason I raise the issue of possible multiple front-ends (or making\n>> it easy for folks to make there own by making the parser->optimizer/backend\n>> interface more general) is that this whole area would seem to be one\n>> that begs for RESEARCH and experimentalism.\n>\n>No research is required. I simply want to implement the ODMG STANDARD\n>for ODBMS databases on PostgreSQL. There are no great design issues\n>here,\n>just a matter of nailing down the details so that everyone can live \n>with them.\n\nWell...that's sorta like saying no research into procedural language\ndesign is needed 'cause now we've got C++.\n\nWhether or not the existing standard for ODBMS is the greatest thing\nsince sliced bread, I find it hard to believe that no research is\nrequired or design issues raised by the fundamental problems of \ndatabase technology.\n\nMaybe I'm wrong, though, maybe the problem's been solved.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 03 Feb 2000 07:39:28 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "At 12:00 PM 2/3/00 +0200, Hannu Krosing wrote:\n>Don Baccus wrote:\n\n>what does C++ have to do with OO ;)\n\nNothing, but don't tell them :) Having worked on C++ compilers,\ndon't get me started on THAT subject!\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 03 Feb 2000 07:42:08 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Chris <[email protected]> writes:\n> Why not change that to a relnumindexes as well? Easier to maintain and\n> more useful information.\n\nMaintaining an accurate count of descendants (or indexes for that\nmatter) would be expensive; in particular, it'd create severe\nconcurrency problems. If one transaction is in the middle of creating\nor dropping a child C of table P, then all other transactions would be\nblocked from creating or dropping any other children of P until the C\ntransaction commits or aborts. They'd have to wait or they wouldn't\nknow what to set relnumchildren to.\n\nFor the purpose at hand, I think it would be OK to have a\n\"relhaschildren\" field that is set true when the first child is created\nand then never changed. If you have a table that once had children but\nhas none at the moment, then you pay the price of looking through\npg_inherits; but the case that we're really concerned about (a pure SQL,\nno-inheritance table) would still win.\n\nNot sure whether we can concurrently create/delete indexes on a rel,\nbut I'd be inclined to leave relhasindexes alone: again its main\nfunction in life is to let you short-circuit looking for indexes on\na table that's never had and never will have any.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Feb 2000 11:26:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL "
},
{
"msg_contents": "While I think that these kinds of changes are a No Go because they'd break\na lot of applications (including mine), IF (big if) you really want to\nmake major changes to the inheritance scheme, I got a few ideas.\n\nFirst let me say that I like the conceptual simplicity of relational\ndatabases. Some or all of the ideas thrown around here break with\nsimplicity and consistency, by suggesting, e.g., that some commands be\nallowed only on entire inheritance structures, while others be allowed on\nindividual tables, and attached to it a discussion which ones those should\nbe. That doesn't strike me as too promising.\n\nA lot of people use inheritance to create \"consistent schemas\", that is,\nthey empty create base tables, such as \"address\" which are inherited by\ntables such as customer, vendor, office, etc. That is probably not what\ninheritance is for, perhaps it should be some sort of a macro-like\nconcept, such as create table vendor (name text,\ncopy_schema_from(address), more fields), expanded by the parser. This is\npretty much what it does now, only this scheme wouldn't have to actually\nstore the (useless) inheritance link.\n\nAnyway, an idea I had would be to reimplement inheritance based on joins,\nsince this is what the \"pure relational\" solution would be anyway. When I\ncreate a table B that is based on A, all the system does is create the\ntable B as usual and store a note \"I inherit from A\". Any row you insert\ninto B also creates a row in A, and the row in B contains an oid pointer\nto it. Thus a select on B performs a join on A.oid and B.row_in_A_pointer.\nA select on A just returns all the rows in A, no extras needed. A delete\non B deletes the row in B and in A. A delete in A would cascade to B. Both\nof this can be gotten for free with foreign keys. Adding a column to A\njust adds the column to A, all other tables get the new column magically\nand in the right order. Same with dropping columns, etc.\n\nIn short, this approach solves all inheritance problems at once and does\nso without adding any extra kludges besides the \"I inherited from\" field,\nwhich is static, plus the necessary transformations necessary in the\nparser. The drawback is of course that a select from an inherited table\nwould always incur a join, perhaps some optimizing could be done in this\ndirection. But the bottom line is that the compatibility issue looms big.\n\n\t-Peter\n\n\nOn Thu, 3 Feb 2000, Chris Bitmead wrote:\n\n> Hi,\n> \n> I've been spending a lot of time lately with gdb and tracing the\n> back-end seeing if I can understand it enough to make some changes.\n> I'm starting to actually understand a lot of stuff, so in order\n> to have some possibility of having my changes accepted, I want to\n> discuss \n> them here first. Based on that, I'm going to hopefully make an attempt\n> at implementation. I have a patch for one of these changes already \n> if I get the go ahead.\n> \n> THESE CHANGES DON'T AFFECT YOU IF YOU DON'T USE INHERITANCE.\n> \n> Speak now about these changes or please, forever hold your peace. Of\n> course you can comment later if I screw up implementation.\n> \n> The proposed changes are....\n> \n> 1) An imaginary field in every tuple that tells you the class it came\n> from.\n> This is useful when you select from table* and want to know which\n> relation the object actually came from. It wouldn't be stored on disk,\n> and like oid it wouldn't be displayed when you do SELECT *. The field\n> would be called classname. So you could have...\n> SELECT p.classname, p.name FROM person p;\n> person | Fred\n> student | Bill\n> employee | Jim\n> person | Chris\n> \n> If you want to know the exact behaviour it is as if every table in the\n> database had done to it...\n> ALTER TABLE foo ADD COLUMN classname TEXT;\n> UPDATE foo SET classname='foo';\n> \n> Of course this is not how it would be implemented. It is just\n> reference for how it will appear to work. BTW, this idea was also\n> in the original berkeley design notes.\n> \n> 2) Changing the sense of the default for getting inherited tuples.\n> Currently you only get inherited tuples if you specify \"tablename*\".\n> This would be changed so that you get all sub-class tuples too by\n> default unless you specify \"ONLY tablename\". There are several\n> rationale for this. Firstly this is what Illustra/Informix have\n> implemented. Secondly, I believe it is more logical from an OO\n> perspective as well as giving a more useful default. If a politician\n> IS a person and I say SELECT * from person, then logically I should\n> see all the politicians because they are people too (so they claim\n> :). Thirdly, there are a whole range of SQL statements that should\n> probably be disallowed without including sub-classes. e.g. an ALTER\n> TABLE ADD COLUMN that does not include sub-classes is almost certainly\n> undesirable. It seems ashame to have to resort to non-standard SQL\n> with the \"*\" syntax in this case when it is really your only\n> choice. Basicly, wanting ONLY a classname is a far more unusual\n> choice, and leaving off the \"*\" is a common error. Fourthly, it seems\n> out of character for the SQL language to have this single character\n> operator. The SQL style is to use wordy descriptions of the operators\n> meaning. \"ONLY\" fits well here because it describes its own meaning\n> perfectly whereas to the unitiated, \"*\" is harder to guess at. While\n> this change is an incompatibility I hope for those few people using\n> inheritance they can accept the need to move forward without\n> over-burden of backwards compatibility.\n> \n> 3) The ability to return different types of rows from a SELECT. This\n> is to allow implementation of ODBMS functionality where a query could\n> be required to instantiate objects of differing types with differing\n> attributes.\n> \n> I would propose that that anytime you do a SELECT * from a base table\n> that you would get back the full rows from those sub tables. Since the\n> current PQ interface which doesn't support this notion would remain\n> unchanged this wouldn't affect current users.\n> \n> It's probably also desirable to have a syntax for getting just the\n> columns of the base table when this is desired. Say perhaps SELECT %\n> from table. This would be a performance hack for users of libpq and a\n> functionality difference for users of psql.\n> \n> The reason I think the \"*\" syntax should take on the new functionality\n> is because it would be more consistent with what the OQL (object query\n> language) standard specifies, and also because it seems the more\n> useful default. Also there is no compatibility reason not to do it.\n> \n> In addition it would be legal to specify columns that only exist in\n> sub-classes. For example, if we had \n> \n> CREATE TABLE person (name TEXT);\n> CREATE TABLE student (studentid TEXT, faculty TEXT) INHERITS (person);\n> \n> it would be legal to say...\n> > SELECT * FROM person;\n> NAME\n> ----\n> Fred\n> Bill\n> \n> NAME | STUDENTID | FACULTY\n> --------------------------\n> Jim | 23455 | Science\n> Chris| 45666 | Arts\n> \n> > SELECT *, studentid FROM person;\n> NAME\n> ----\n> Fred\n> Bill\n> \n> NAME | STUDENTID\n> ----------------\n> Jim | 23455 \n> Chris| 45666 \n> \n> > SELECT *, studentid FROM ONLY person;\n> ERROR: person does not contain studentid.\n> \n> > SELECT % FROM person;\n> NAME\n> ----\n> Fred\n> Bill\n> Jim\n> Chris\n> \n> As you can see, it is desirable that psql be modified to be able to\n> print these differing tuple types. Presumably new column headings will\n> be printed when a tuple is differing to the previous one. Likely it\n> will be often desirable to do a\n> SELECT * FROM person p ORDER BY p.classname;\n> in order to have all the tuples of a particular type grouped together.\n> \n> In addition some extenions will be done to the PQ interface to support\n> these differing return types. The current PQ interface will be left\n> unchanged and backwards compatible for retrieving rows of a single\n> type.\n> \n> Also there should be an settable option that specifies that \"*\" should\n> also return the normally ignored columns of oid and classname. This is\n> so that OO programs that embed SQL into them also get back the oid and\n> classname which are required for the behind the scenes implementation\n> of an ODMG client. Something like...\n> \n> SET SHOW_OID TRUE;\n> SHOW_CLASSNAME TRUE;\n> \n> SELECT * FROM person;\n> \n> OID CLASSNAME NAME\n> -------------------\n> 2344 person Fred\n> 3445 person Bill\n> \n> OID CLASSNAME NAME | STUDENTID | FACULTY\n> -----------------------------------------\n> 2355 student Jim | 23455 | Science\n> 5655 student Chris| 45666 | Arts\n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 3 Feb 2000 17:26:50 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "On Thu, 3 Feb 2000, Tom Lane wrote:\n\n> Maintaining an accurate count of descendants (or indexes for that\n> matter) would be expensive; in particular, it'd create severe\n> concurrency problems.\n\nWhat about fixing these things on VACUUM then?\n\nTaral\n\n",
"msg_date": "Thu, 3 Feb 2000 10:50:30 -0600 (CST)",
"msg_from": "Taral <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL "
},
{
"msg_contents": "\"Mark Hollomon\" <[email protected]> writes:\n> How about a set variable?\n\n> SET GETSUBCLASSES = true\n\n> With the '*' and ONLY being explicit overrides to the setting\n> of the variable. The default would be 'false'.\n\nI like that a lot. Clean, flexible, doesn't break any existing\napplications.\n\nPerhaps the business of whether to fetch extra columns from subclasses\ncould be done similarly. I am beginning to understand why Chris wants\nto do that, and I see that it would support a particular style of\ndatabase programming very nicely. But I really fail to see why it's\nnecessary to change the default behavior to cater to those apps rather\nthan existing ones. Let the new apps use a variant syntax; don't\nexpect people to change existing code in order to avoid getting tripped\nup by a new feature.\n\nNote that \"oh they won't see the extra columns if they're using an\nold API\" doesn't answer my objection. I'm concerned about the\nperformance hit from fetching those columns and transferring them to\nthe client, as well as the memory hit of storing them in query results\non the client side. We should *not* set things up in such a way that\nthat happens by default when the client didn't ask for it and isn't\neven using an API that can support it. That's why it'd be a mistake\nto redefine the existing query syntax to act this way.\n\nThe suggestion of \"SELECT ** FROM ...\" sounds pretty good to me,\nactually. I don't really see any need for changing the behavior of\nanything that looks like a standard SQL query. Applications that\nneed this feature will know that they need it and can issue a query\nthat specifically requests it.\n\n> I would not object to a configuration switch that would change the\n> default.\n\nMmm, I think that would probably not be such a hot idea. That would\nintroduce a pretty fundamental semantics incompatibility between\ndifferent installations, which would hurt script portability, complicate\ndebugging and support, yadda yadda. I think a SET variable is enough...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Feb 2000 11:52:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL "
},
{
"msg_contents": "Taral <[email protected]> writes:\n> On Thu, 3 Feb 2000, Tom Lane wrote:\n>> Maintaining an accurate count of descendants (or indexes for that\n>> matter) would be expensive; in particular, it'd create severe\n>> concurrency problems.\n\n> What about fixing these things on VACUUM then?\n\nCould probably do that ... not sure if it's worth the trouble ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Feb 2000 12:32:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Chris <[email protected]> writes:\n> > Why not change that to a relnumindexes as well? Easier to maintain and\n> > more useful information.\n> \n> Maintaining an accurate count of descendants (or indexes for that\n> matter) would be expensive; in particular, it'd create severe\n> concurrency problems. If one transaction is in the middle of creating\n> or dropping a child C of table P, then all other transactions would be\n> blocked from creating or dropping any other children of P until the C\n> transaction commits or aborts. They'd have to wait or they wouldn't\n> know what to set relnumchildren to.\n> \n> For the purpose at hand, I think it would be OK to have a\n> \"relhaschildren\" field that is set true when the first child is created\n> and then never changed. If you have a table that once had children but\n> has none at the moment, then you pay the price of looking through\n> pg_inherits; but the case that we're really concerned about (a pure SQL,\n> no-inheritance table) would still win.\n> \n> Not sure whether we can concurrently create/delete indexes on a rel,\n> but I'd be inclined to leave relhasindexes alone: again its main\n> function in life is to let you short-circuit looking for indexes on\n> a table that's never had and never will have any.\n> \n\nWOuld it be possible to consider this a 'statistic' and let\nvacuum update it?\n\nIn other words, creating an index (or subtable) sets \nrelhasindex (relhaschild) but vacuum will set it to false\nif it finds no children or indexes. or would this\nrun into concurrency problems as well?\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Thu, 03 Feb 2000 12:47:18 -0500",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "> Chris Bitmead <[email protected]> writes:\n> \n> I'll comment on the other issues later ... but I will say that I don't\n> think it's acceptable to add *any* overhead to standard-SQL queries\n> in order to support inheritance better. The vast majority of our users\n> want SQL performance and don't give a damn about inheritance. We have\n> to pay attention to that.\n> \n\n Well said ! \n\n Actually I'm a little bit uncertain what ORDBMS really improves ? After\nwriting a full mapper and wrapper for PostgreSQL and a Smalltalk dialect\nI see really no usage for these additional inheritance features databases\nlike PostgreSQL offer.\n\n Some points about this:\n\n - all these additional features are very specific to PostgreSQL and\n are not compatible with other databases. Writing an application \n based on these features results in non-portable systems.\n \n - Speed is still a very, very important feature for a database. A\n single query, which uses about 5 seconds because the optimizer\n is not very clever to use several indices to improove the \n query execution is much more worse and can change the structure\n of the whole application program.\n\n - when creating automatic sql-queries through a mapper one can get\n very complicated sql queries which tests the parser very hard and\n the limits of PostgreSQL has been seen very quickly during\n the development of the wrapper above.\n\n What I'm missing from these new database are structural changes to\nthe query system: the possibility to execute complicated \nconcatenated queries on the server .. perhaps with different \nparameters.\n\n Just some ideas about all these nice features\n\n Marten\n\n \n\n \n\n\n",
"msg_date": "Thu, 3 Feb 2000 22:19:06 +0100 (CET)",
"msg_from": "Marten Feldtmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Chris wrote:\n> \n> Hannu Krosing wrote:\n> \n> > Maybe SELECT ** FROM BASE would be more flexible as it leaves the standard\n> > SQL with its \"standard\" meaning ?\n> \n> That was my first thought and it's definitely a possibility. My argument\n> against it is that SQL doesn't have a \"standard meaning\" in the case of\n> inheritance, and ** is an incompatibility with OQL.\n> \n> I suspect we need both. Something like\n> SET GET_INHERITED_COLUMNS true; etc.\n> \n> > We will have to change the API sometime not too distant anyway, the current\n> > api is unable to deal with anything that does not have a nice textual\n> > representation (like an image or sound) in spite of all the talks about\n> > easy extensibility - the extensibility is all in the backend, ther is no\n> > easy way to get new datatypes in/out.\n> \n> What about PQbinaryTuples() and friends?\n\nThey don't help you at all when doing inserts and are by definition in native\nbyte order on queries.\n\nSomething like [ PREPARE query; BIND arguments ; EXEC ] which knows about\nbinary \nformats would be needed here.\n\nOne could use LOs except that the current ineffective implementation.\n\n-------------\nHannu\n",
"msg_date": "Fri, 04 Feb 2000 00:38:28 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Mark Hollomon wrote:\n> \n> > [ discussion on changing the default to getting subclasses ]\n> \n> I object.\n\nTell me why you object. Performance concerns? Compatibility?\n\nA SET might be a good idea, but to decide whether and also a \ndefault, it's good to know what the objections are.\n\n> \n> How about a set variable?\n> \n> SET GETSUBCLASSES = true\n> \n> With the '*' and ONLY being explicit overrides to the setting\n> of the variable. The default would be 'false'. I would not\n> object to a configuration switch that would change the\n> default.\n> --\n> \n> Mark Hollomon\n> [email protected]\n> ESN 451-9008 (302)454-9008\n",
"msg_date": "Fri, 04 Feb 2000 09:43:48 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Don Baccus wrote:\n\n> >No research is required. I simply want to implement the ODMG STANDARD\n> >for ODBMS databases on PostgreSQL. There are no great design issues\n> >here,\n> >just a matter of nailing down the details so that everyone can live\n> >with them.\n> \n> Well...that's sorta like saying no research into procedural language\n> design is needed 'cause now we've got C++.\n> \n> Whether or not the existing standard for ODBMS is the greatest thing\n> since sliced bread, I find it hard to believe that no research is\n> required or design issues raised by the fundamental problems of\n> database technology.\n> \n> Maybe I'm wrong, though, maybe the problem's been solved.\n\nNo research is required _for what I want to do_. (or if there is\nresearch required, I think I've just done it over the last 5 years :).\ni.e. I don't want to explore some new style database, only implement\na current ODMG standard on postgresql. This style of database is\nfairly well understood now for good or bad. Once the RDBMS and ODBMS\nfeatures exist in one database, maybe then research can be done\nto move forward. That's my opinion anyway.\n",
"msg_date": "Fri, 04 Feb 2000 09:53:33 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Don Baccus wrote:\n> \n> >\n> >A good thought, but we still need one good front end that supports\n> >all the features.\n> \n> I wasn't think in terms of this being mutually exclusive with your\n> desires. Merely raising up the notion that the possibility exists\n> of creating a sandbox, so to speak, for people to play in, a tool\n> for the exploration of such concepts.\n\nSo we would be returning to roots. The original Postgres was exactly that -\na tool for the exploration of such concepts.\n\n> No, and I wasn't arguing that you shouldn't move forward, either. I\n> was just stating my personal opinion regarding the utility of simple\n> OO-ish features, that's all.\n\nYes, it needs quite much discussion/design befor going forth, lest we \nwill be in the next level of the current situation where some peoples \nusage of the current limited inheritance is an obstacle to moving \nforward to a more developed one.\n\n> >> One reason I raise the issue of possible multiple front-ends (or making\n> >> it easy for folks to make there own by making the parser->optimizer/backend\n> >> interface more general) is that this whole area would seem to be one\n> >> that begs for RESEARCH and experimentalism.\n> >\n> >No research is required. I simply want to implement the ODMG STANDARD\n> >for ODBMS databases on PostgreSQL. There are no great design issues\n> >here, just a matter of nailing down the details so that everyone can \n> >live with them.\n> \n> Well...that's sorta like saying no research into procedural language\n> design is needed 'cause now we've got C++.\n> \n> Whether or not the existing standard for ODBMS is the greatest thing\n> since sliced bread, I find it hard to believe that no research is\n> required or design issues raised by the fundamental problems of\n> database technology.\n> \n> Maybe I'm wrong, though, maybe the problem's been solved.\n> \n\nMy wife has forbidden me to buy any sliced bread, because the slices are of \nwrong thickness.\n\nHardly the situation can be any better in OODB design.\n\nThe ODMG standard may be a good starting point for discussion, but one can't \nrun any programs on a standard - one needs a real db. \nAnd IIRC the standard is only semi-public (not freely\navailable/distributable).\n\n------------------\nHannu\n",
"msg_date": "Fri, 04 Feb 2000 00:57:45 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> A lot of people use inheritance to create \"consistent schemas\", that is,\n> they empty create base tables, such as \"address\" which are inherited by\n> tables such as customer, vendor, office, etc. \n\nThis is a really bad idea. You could never have both a postal address\nAND\na home address for example. I thought the original postgres supported\nthis\nby having\nCREATE TABLE ADDRESS (...)\nCREATE TABLE PERSON(add ADDRESS).\n\nAnyway, this is what Oracle and others can do these days, and this is\nthe right\nthing.\n\n> Anyway, an idea I had would be to reimplement inheritance based on joins,\n> since this is what the \"pure relational\" solution would be anyway. When I\n> create a table B that is based on A, all the system does is create the\n> table B as usual and store a note \"I inherit from A\". Any row you insert\n> into B also creates a row in A, and the row in B contains an oid pointer\n> to it. \n\nThis is a really stu^H^H^H bad idea. I have hierarchies 5 levels deep\nwith\nmultiple inheritance, and I\ndon't want to do a 10 way join just to retrieve an object.\n\nThis is why RDBMS's performance sucks so incredibly badly on some\napplications.\nan ODBMS can perform 100x as fast in these cases just because of what\nyou\nare proposing.\n\n> Thus a select on B performs a join on A.oid and B.row_in_A_pointer.\n> A select on A just returns all the rows in A, no extras needed. A delete\n> on B deletes the row in B and in A. A delete in A would cascade to B. Both\n> of this can be gotten for free with foreign keys. Adding a column to A\n> just adds the column to A, all other tables get the new column magically\n> and in the right order. Same with dropping columns, etc.\n> \n> In short, this approach solves all inheritance problems at once and does\n> so without adding any extra kludges besides the \"I inherited from\" field,\n> which is static, plus the necessary transformations necessary in the\n> parser. The drawback is of course that a select from an inherited table\n> would always incur a join, perhaps some optimizing could be done in this\n> direction. But the bottom line is that the compatibility issue looms big.\n> \n> -Peter\n> \n> On Thu, 3 Feb 2000, Chris Bitmead wrote:\n> \n> > Hi,\n> >\n> > I've been spending a lot of time lately with gdb and tracing the\n> > back-end seeing if I can understand it enough to make some changes.\n> > I'm starting to actually understand a lot of stuff, so in order\n> > to have some possibility of having my changes accepted, I want to\n> > discuss\n> > them here first. Based on that, I'm going to hopefully make an attempt\n> > at implementation. I have a patch for one of these changes already\n> > if I get the go ahead.\n> >\n> > THESE CHANGES DON'T AFFECT YOU IF YOU DON'T USE INHERITANCE.\n> >\n> > Speak now about these changes or please, forever hold your peace. Of\n> > course you can comment later if I screw up implementation.\n> >\n> > The proposed changes are....\n> >\n> > 1) An imaginary field in every tuple that tells you the class it came\n> > from.\n> > This is useful when you select from table* and want to know which\n> > relation the object actually came from. It wouldn't be stored on disk,\n> > and like oid it wouldn't be displayed when you do SELECT *. The field\n> > would be called classname. So you could have...\n> > SELECT p.classname, p.name FROM person p;\n> > person | Fred\n> > student | Bill\n> > employee | Jim\n> > person | Chris\n> >\n> > If you want to know the exact behaviour it is as if every table in the\n> > database had done to it...\n> > ALTER TABLE foo ADD COLUMN classname TEXT;\n> > UPDATE foo SET classname='foo';\n> >\n> > Of course this is not how it would be implemented. It is just\n> > reference for how it will appear to work. 
BTW, this idea was also\n> > in the original berkeley design notes.\n> >\n> > 2) Changing the sense of the default for getting inherited tuples.\n> > Currently you only get inherited tuples if you specify \"tablename*\".\n> > This would be changed so that you get all sub-class tuples too by\n> > default unless you specify \"ONLY tablename\". There are several\n> > rationale for this. Firstly this is what Illustra/Informix have\n> > implemented. Secondly, I believe it is more logical from an OO\n> > perspective as well as giving a more useful default. If a politician\n> > IS a person and I say SELECT * from person, then logically I should\n> > see all the politicians because they are people too (so they claim\n> > :). Thirdly, there are a whole range of SQL statements that should\n> > probably be disallowed without including sub-classes. e.g. an ALTER\n> > TABLE ADD COLUMN that does not include sub-classes is almost certainly\n> > undesirable. It seems ashame to have to resort to non-standard SQL\n> > with the \"*\" syntax in this case when it is really your only\n> > choice. Basicly, wanting ONLY a classname is a far more unusual\n> > choice, and leaving off the \"*\" is a common error. Fourthly, it seems\n> > out of character for the SQL language to have this single character\n> > operator. The SQL style is to use wordy descriptions of the operators\n> > meaning. \"ONLY\" fits well here because it describes its own meaning\n> > perfectly whereas to the unitiated, \"*\" is harder to guess at. While\n> > this change is an incompatibility I hope for those few people using\n> > inheritance they can accept the need to move forward without\n> > over-burden of backwards compatibility.\n> >\n> > 3) The ability to return different types of rows from a SELECT. 
This\n> > is to allow implementation of ODBMS functionality where a query could\n> > be required to instantiate objects of differing types with differing\n> > attributes.\n> >\n> > I would propose that that anytime you do a SELECT * from a base table\n> > that you would get back the full rows from those sub tables. Since the\n> > current PQ interface which doesn't support this notion would remain\n> > unchanged this wouldn't affect current users.\n> >\n> > It's probably also desirable to have a syntax for getting just the\n> > columns of the base table when this is desired. Say perhaps SELECT %\n> > from table. This would be a performance hack for users of libpq and a\n> > functionality difference for users of psql.\n> >\n> > The reason I think the \"*\" syntax should take on the new functionality\n> > is because it would be more consistent with what the OQL (object query\n> > language) standard specifies, and also because it seems the more\n> > useful default. Also there is no compatibility reason not to do it.\n> >\n> > In addition it would be legal to specify columns that only exist in\n> > sub-classes. For example, if we had\n> >\n> > CREATE TABLE person (name TEXT);\n> > CREATE TABLE student (studentid TEXT, faculty TEXT) INHERITS (person);\n> >\n> > it would be legal to say...\n> > > SELECT * FROM person;\n> > NAME\n> > ----\n> > Fred\n> > Bill\n> >\n> > NAME | STUDENTID | FACULTY\n> > --------------------------\n> > Jim | 23455 | Science\n> > Chris| 45666 | Arts\n> >\n> > > SELECT *, studentid FROM person;\n> > NAME\n> > ----\n> > Fred\n> > Bill\n> >\n> > NAME | STUDENTID\n> > ----------------\n> > Jim | 23455\n> > Chris| 45666\n> >\n> > > SELECT *, studentid FROM ONLY person;\n> > ERROR: person does not contain studentid.\n> >\n> > > SELECT % FROM person;\n> > NAME\n> > ----\n> > Fred\n> > Bill\n> > Jim\n> > Chris\n> >\n> > As you can see, it is desirable that psql be modified to be able to\n> > print these differing tuple types. 
Presumably new column headings will\n> > be printed when a tuple is differing to the previous one. Likely it\n> > will be often desirable to do a\n> > SELECT * FROM person p ORDER BY p.classname;\n> > in order to have all the tuples of a particular type grouped together.\n> >\n> > In addition some extenions will be done to the PQ interface to support\n> > these differing return types. The current PQ interface will be left\n> > unchanged and backwards compatible for retrieving rows of a single\n> > type.\n> >\n> > Also there should be an settable option that specifies that \"*\" should\n> > also return the normally ignored columns of oid and classname. This is\n> > so that OO programs that embed SQL into them also get back the oid and\n> > classname which are required for the behind the scenes implementation\n> > of an ODMG client. Something like...\n> >\n> > SET SHOW_OID TRUE;\n> > SHOW_CLASSNAME TRUE;\n> >\n> > SELECT * FROM person;\n> >\n> > OID CLASSNAME NAME\n> > -------------------\n> > 2344 person Fred\n> > 3445 person Bill\n> >\n> > OID CLASSNAME NAME | STUDENTID | FACULTY\n> > -----------------------------------------\n> > 2355 student Jim | 23455 | Science\n> > 5655 student Chris| 45666 | Arts\n> >\n> > ************\n> >\n> >\n> \n> --\n> Peter Eisentraut Sernanders vaeg 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n",
"msg_date": "Fri, 04 Feb 2000 10:03:14 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Taral wrote:\n> \n> On Thu, 3 Feb 2000, Tom Lane wrote:\n> \n> > Maintaining an accurate count of descendants (or indexes for that\n> > matter) would be expensive; in particular, it'd create severe\n> > concurrency problems.\n> \n> What about fixing these things on VACUUM then?\n\nIt could produce wrong results to queries if the data is wrong.\n",
"msg_date": "Fri, 04 Feb 2000 10:27:37 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "\nTom, I agree with most of what you say. If we want to have ** be the\ndefault\nsyntax for getting sub-columns I can live with that (for suggestion (3))\n\nBut for (2), I do feel very strongly that getting sub-tuples should be\nthe\n\"default default\", and a SET GETSUBCLASSES=true should be the default\nsetting.\n\nI've been using the postgres inheritance for a real system and I can\nsay with certainty that this is a massive source of errors. Not \nwanting sub-class tuples seems rarely needed, and leaving off the \"*\" is\nsomething that too often seems forgotten. I often can trawl through\ncode and realise that some query is missing the \"*\" but it hasn't been\ndiscovered yet. In fact I find that almost all queries require the \"*\"\nwhen you have a proper OO model, and not using \"*\" is usually laziness.\n\nAlso when adding a sub-class where there previously was none, one \nusually has to trawl through the queries and add \"*\" to all of them\nbecause as I said, there are almost never occasions where \"*\" is not\nrequired in real life OO models.\n\nSo I understand the compatibility issue here, but I really feel strongly\nthat this should be changed now before there really are a lot of people\nusing it. Sure, have as many compatibility modes as you like, but I\nthink\nthis is a broken enough design that the default should be changed.\nApparently Illustra/Informix agreed.\n\nTom Lane wrote:\n> \n> \"Mark Hollomon\" <[email protected]> writes:\n> > How about a set variable?\n> \n> > SET GETSUBCLASSES = true\n> \n> > With the '*' and ONLY being explicit overrides to the setting\n> > of the variable. The default would be 'false'.\n> \n> I like that a lot. Clean, flexible, doesn't break any existing\n> applications.\n> \n> Perhaps the business of whether to fetch extra columns from subclasses\n> could be done similarly. 
I am beginning to understand why Chris wants\n> to do that, and I see that it would support a particular style of\n> database programming very nicely. But I really fail to see why it's\n> necessary to change the default behavior to cater to those apps rather\n> than existing ones. Let the new apps use a variant syntax; don't\n> expect people to change existing code in order to avoid getting tripped\n> up by a new feature.\n> \n> Note that \"oh they won't see the extra columns if they're using an\n> old API\" doesn't answer my objection. I'm concerned about the\n> performance hit from fetching those columns and transferring them to\n> the client, as well as the memory hit of storing them in query results\n> on the client side. We should *not* set things up in such a way that\n> that happens by default when the client didn't ask for it and isn't\n> even using an API that can support it. That's why it'd be a mistake\n> to redefine the existing query syntax to act this way.\n> \n> The suggestion of \"SELECT ** FROM ...\" sounds pretty good to me,\n> actually. I don't really see any need for changing the behavior of\n> anything that looks like a standard SQL query. Applications that\n> need this feature will know that they need it and can issue a query\n> that specifically requests it.\n> \n> > I would not object to a configuration switch that would change the\n> > default.\n> \n> Mmm, I think that would probably not be such a hot idea. That would\n> introduce a pretty fundamental semantics incompatibility between\n> different installations, which would hurt script portability, complicate\n> debugging and support, yadda yadda. I think a SET variable is enough...\n> \n> regards, tom lane\n",
"msg_date": "Fri, 04 Feb 2000 10:55:39 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n> \n> > Bruce Momjian wrote:\n> > \n> > > > Well I see that pg_class has columns like \"relhasindex\". If \n> we added a\n> > > > \"relhassubclass\", the overhead should be unmeasureable.\n> > > \n> > > Yes, but how do you keep that accurate? If I add indexes, then drop\n> > > them, does relhasindex go to false. \n> > \n> > I don't know. Does it? \n> \n> Let me add that to the TODO list.\n> \n> > \n> > >Could you do that for relhassubclass?\n> > \n> > If we made it relnumsubclasses and incremented/decremented on\n> > CREATE/DROP, it seems easy in theory.\n> \n> Yes, that would work. Seems hasindex has problems.\n>\n\nThis posting may be off the point,sorry.\n\nIsn't relhasindex a kind of item that we can live without it ?\nI proposed to change the use of this item in [[HACKERS] Index\nrecreation in vacuum]. Though I have heard no clear objection,\nI want to confirm again. My proposal is as follows.\n\n1) DDL commands don't rely on relhasindex.\n2) DML commands don't take indexes into account if\n relhasindex is set to false.\n3) REINDEX command and vacuum with REINDEX option\n sets this flag to false at the beginning and sets it to true\n when recreation of all indexes completed.\n\nComments ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Fri, 4 Feb 2000 09:16:46 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "relhasindex(was RE: [HACKERS] Proposed Changes to PostgreSQL)"
},
{
"msg_contents": "On Fri, Feb 04, 2000 at 10:55:39AM +1100, Chris Bitmead wrote:\n<snipped it all!>\n\nJust wanted to chime in on this thread with the sugestion that Chris\nclearly has been thinking about this a lot, and has some strong opinions\nabout the 'right way to do things'. How about an offical, postgresql.org\nhosted, CVS branch for ORDBMS development? Let Chris and whomever is\ninterested take a crack at doing it however they want, and _prove_\nthat the performance is as good, or much better, and is compatible, etc.\nClearly, details of implementation can be discussed to death, until\nChris gets fed up and goes away: not good. So, what do the core\ndevelopers think? Sound feasable? As to problems of keeping in sync with\nHEAD, etc., that'd be up to Chris and his crew. Does postgresql.org\nhave the extra 20-30 MB of disk?\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Thu, 3 Feb 2000 18:28:47 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Marten Feldtmann wrote:\n\n> Actually I'm a little bit uncertain what ORDBMS really improves ? After\n> writing a full mapper and wrapper for PostgreSQL and a Smalltalk dialect\n> I see really no usage for these additional inheritance features databases\n> like PostgreSQL offer.\n> \n> Some points about this:\n> \n> - all these additional features are very specific to PostgreSQL and\n> are not compatible with other databases. Writing an application\n> based on these features results in non-portable systems.\n\nNot true, because if the wrapper conforms to the ODMG standard, it will \nbe compatible with ObjectStore, Versant, the new Sun RDBS standard,\nGemstone, and many others.\n\n> - Speed is still a very, very important feature for a database. A\n> single query, which uses about 5 seconds because the optimizer\n> is not very clever to use several indices to improove the\n> query execution is much more worse and can change the structure\n> of the whole application program.\n\nThe biggest thing you can do for speed is to have less objects/tuples\nin the database. Inheritance and the array feature of postgresql\ncan improve things here by orders of magnitude. The problem is that\nthese\ntwo features are not viable to use at present. With an ODMG interface,\nand TOAST to allow tuples of unlimited size this will then be a viable\nfeature. In some situations this will improve queries by 100x even\nwith the most brain-dead optimizer. ODBMS doesn't care a great deal\nabout wonderful optimizers because joins are less necessary.\n\n> - when creating automatic sql-queries through a mapper one can get\n> very complicated sql queries which tests the parser very hard and\n> the limits of PostgreSQL has been seen very quickly during\n> the development of the wrapper above.\n\nExactly, so stop mapping things and creating complicated joins. ODBMSes\ndo not do ANY joins to re-create objects. 
That's why mappers suck so\nhard.\n\n> What I'm missing from these new database are structural changes to\n> the query system: the possibility to execute complicated\n> concatenated queries on the server .. perhaps with different\n> parameters.\n\nWhat is a concatenated query? \n\nI'm all in favour of more powerful queries, but that is not what this\nproposal is about. This is about AVOIDING queries. Mappers and so forth\nare great query generators because the database representation is\ndifferent from the in-memory object representation. This proposal\nis all about making the in-memory object representation the same\nas in the database.\n\nIf you still don't get it take an example..\n\nclass CarPart {\n\tint volume;\n}\nclass Wheel : CarPart {\n\tint diameter;\n}\nclass SteeringWheel : Wheel {\n boolean horn;\n}\nclass RoadWheel : Wheel {\n int airpressure;\n}\nclass Car {\n List<CarPart> parts;\n}\n\nNow with an ODBMS, a Car with 4 wheels and a steering wheel we'll have 6\nobjects in the database - 1 Car, 4 RoadWheels and 1 SteeringWheel. With\na relational mapper, depending on how you map it you'll have 21 objects\n- 5 CarPart objects, 5 wheel objects, 4 road wheel, 1 steering wheel, 1\ncar and 5 car_carpart relation entities. And when you join it all\ntogether you'll have to join against 6 tables instead of 3.\n",
"msg_date": "Fri, 04 Feb 2000 14:42:00 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "At 02:42 PM 2/4/00 +1100, Chris Bitmead wrote:\n\n>Not true, because if the wrapper conforms to the ODMG standard, it will \n>be compatible with ObjectStore, Versant, the new Sun RDBS standard,\n>Gemstone, and many others.\n\nWithout prejudice, I'd be interested in some order-of-magnitude \nmarket share for these technologies vs., say, Oracle.\n\n>The biggest thing you can do for speed is to have less objects/tuples\n>in the database. Inheritance and the array feature of postgresql\n>can improve things here by orders of magnitude.\n\nThere's no doubt of this, for applications that can make use\nof the paradigms.\n\n\n>The problem is that\n>these\n>two features are not viable to use at present. With an ODMG interface,\n>and TOAST to allow tuples of unlimited size this will then be a viable\n>feature. In some situations this will improve queries by 100x even\n>with the most brain-dead optimizer. ODBMS doesn't care a great deal\n>about wonderful optimizers because joins are less necessary.\n\nAnd this last statement I really have to wonder about. For restricted\napplication spaces, yeah, no doubt. But in general, no way.\n\n>Exactly, so stop mapping things and creating complicated joins. ODBMSes\n>do not do ANY joins to re-create objects. That's why mappers suck so\n>hard.\n\nIf they don't do joins, then presumably they map many-to-one relations\nby copying data into each of the \"many\" table rows. TANSTAAFL, no?\n\nThough this strategy is a very viable one in today's big-memory, big-disk\nenvironment. 
It's not clear to me that a extremely smart RDBMS system\ncouldn't decide to add redundancy itself and gain much of the efficiency,\nbut, heck, that's just my weak, uncreative compiler-writer mind at work\nagain.\n\n(and clearly, of course, PG isn't on any threshold of doing it, I'm \nthinking in theoretical space here).\n\n\n>Now with an ODBMS, a Car with 4 wheels and a steering wheel we'll have 6\n>objects in the database - 1 Car, 4 RoadWheels and 1 SteeringWheel. With\n>a relational mapper, depending on how you map it you'll have 21 objects\n>- 5 CarPart objects, 5 wheel objects, 4 road wheel, 1 steering wheel, 1\n>car and 5 car_carpart relation entities. And when you join it all\n>together you'll have to join against 6 tables instead of 3.\n\nNot really. You'd probably denormalize and not worry about it, in \npractice.\n\nWould the result be as beautiful? I don't know - do most car designers\nthink that SteeringMechanism and PavementInterface are the same? It's\ntrue for a variety of reasons in today's cars that aren't actually\nrelated, and high-end race cars are exploring joystick control.\n\nSo one could claim that your hierarchy is merely limiting creative\nexpression...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 03 Feb 2000 21:17:10 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "<I trimmed the CC list a bit>\n\nChris Bitmead wrote:\n> \n> Mark Hollomon wrote:\n> >\n> > > [ discussion on changing the default to getting subclasses ]\n> >\n> > I object.\n> \n> Tell me why you object. Performance concerns? Compatibility?\n\nDefinitely compatibility. The load I see (200 - 300 queries a DAY)\nisn't enough for me to be concerned about an extra millisecond\nor two per query. But I certainly understand others concerns in\nthis area.\n\nOne of my responsibilities at work is the maintenance of a homegrown\ndocument indexing and retrieval system. It is about 100K of Perl\nthat calls into a custom Perl wrapper around libpq. The system\nis an escaped 'proof-of-concept'. I wrote it using inheritance\nfeatures of Postgres95.\n\nThe upshot is, that this proposed change would require me to examine\nalmost every line of this system in order to make sure that I put\nONLY in just the right spots. Yes, this would be where ever there\n_isn't_ a '*', but how do I grep for the lack of a asterisk? Since\nit is a \"prototype\", The code feels very free to pass around small\nsnippets of SQL, a disembodied FROM clause, a portion of a VALUES\nclause.\n\nI simply would not be allowed the time to do the rewrite necessary\nto accomodate this change. And if I _did_ have the time, I would\nprobably rewrite it for Oracle because then DB Admin would be someone\n_else's_ job.\n\nNow, one of the days, I will find a good excuse (eg new feature)\nto do a complete rewrite. And _then_ your proposal will actually\nbe a help.\n\nAnd that is why I suggest a SET variable. When I'm ready to\nuse the new feature, I can. But no work is necessary until that\nday arrives.\n\nThanks for listening.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Fri, 04 Feb 2000 08:49:20 -0500",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "> Peter Eisentraut wrote:\n> \n> \n> This is a really stu^H^H^H bad idea. I have hierarchies 5 levels deep\n> with\n> multiple inheritance, and I\n> don't want to do a 10 way join just to retrieve an object.\n> \n> This is why RDBMS's performance sucks so incredibly badly on some\n> applications.\n> an ODBMS can perform 100x as fast in these cases just because of what\n> you\n> are proposing.\n> \n\n Hmm, and yes one may find problems where the pure relational system\nis 100x faster than your ODBMS.\n\n After doing a project with VERSANT and VisualWorks (election projection\nsystem for the first television sender here in Germany) I like the\nidea of OODBMS, but I've also noticed, that they are not the solution\nto all problems.\n\n Clever database desing leeds to good performance on both systems, but\none should consider, that the designs of the database layout will be\ndifferent. There are cases, where a pure relational system is very\nfast and an ODBMS never get it, but there are the examples you\nmentioned.\n\n Joins per se are not that bad .. it depends on when and how they\nare used and how good the analyzer of the database is and how good\nhe uses the indices to get the job done.\n\n One very good point is the query language of the rdbms systems. On\nthe odbms side no standard is really available, which can be seen as\nthe sql of the odbms.\n\n Marten\n\n\n",
"msg_date": "Fri, 4 Feb 2000 19:15:31 +0100 (CET)",
"msg_from": "Marten Feldtmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Re: [HACKERS] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Chris Bitmead wrote:\n> \n> Tom, I agree with most of what you say. If we want to have ** be the\n> default\n> syntax for getting sub-columns I can live with that (for suggestion (3))\n> \n> But for (2), I do feel very strongly that getting sub-tuples should be\n> the\n> \"default default\", and a SET GETSUBCLASSES=true should be the default\n> setting.\n\nThen maybe we need a way to \"break off\" inheritance, i.e. make the inherited \ntable independent but retain the columns as they are at the time of breakage.\n\nAt least it could be given as an option in pg_dump. (--dump_flat_creates or\nsmth.)\n\n> I've been using the postgres inheritance for a real system and I can\n> say with certainty that this is a massive source of errors. Not\n> wanting sub-class tuples seems rarely needed, and leaving off the \"*\" is\n> something that too often seems forgotten. I often can trawl through\n> code and realise that some query is missing the \"*\" but it hasn't been\n> discovered yet. In fact I find that almost all queries require the \"*\"\n> when you have a proper OO model, and not using \"*\" is usually laziness.\n\nTrue. I also think that people who used inheritance as a create table shortcut\ncan most easily ensure compatibility by dumping their not-really-inherited \ntables as independent. They will have to dump-relaod anyway.\n\n> Also when adding a sub-class where there previously was none, one\n> usually has to trawl through the queries and add \"*\" to all of them\n> because as I said, there are almost never occasions where \"*\" is not\n> required in real life OO models.\n> \n> So I understand the compatibility issue here, but I really feel strongly\n> that this should be changed now before there really are a lot of people\n> using it. 
Sure, have as many compatibility modes as you like, but I\n> think\n> this is a broken enough design that the default should be changed.\n> Apparently Illustra/Informix agreed.\n\nAnd they are probably the only external DB we can aim to be compatible with,\nor what does SQL3 say?\n\n----------------------\nHannu\n",
"msg_date": "Sat, 05 Feb 2000 00:37:35 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Don Baccus wrote:\n\n> Without prejudice, I'd be interested in some order-of-magnitude\n> market share for these technologies vs., say, Oracle.\n\nWould you be interested in the market share of Win98 compared to Linux?\n\nNobody uses an ODBMS if they can get it to work with Oracle. They go to\nan ODBMS when they realise that's the only way they can get it to work.\n\nHowever, as I said, Sun is defining for Java a standard interface for\nRDBMS which is exactly the same as ODMG. So expect a lot of people using\nOracle to be writing code that ports to an ODBMS. Maybe when they\nrealise they can slot a real ODBMS under their\napp and greatly increase performance, it might be\ngood for the ODBMS market.\n\n\n> There's no doubt of this, for applications that can > make use of the paradigms.\n\nTo my mind that is like saying OO is useful for programs that can make\nuse of the paradigms. In fact I think nearly all programs can make use\nof OO.\n\n> And this last statement I really have to wonder \n> about. For restricted\n> application spaces, yeah, no doubt. But in general, \n> no way.\n\nIt's only when you need a great deal of ad-hoc queries that you really\nneed a RDBMS. But a very great proportion of apps have only very\nspecific querying needs, and an ODBMS can do those queries MUCH faster.\n\nAnd if postgresql has *both*, then it should be the\nbest of both worlds. I'm not going to go around\nclaiming RDBMS is obsolete, but I do know that ODBMS\nis much more convenient to use for programming. Once\nyou've done your app and you want to spew off a few\nreports, that's when you wish you had RDBMS.\n\n> >Exactly, so stop mapping things and creating complicated joins. ODBMSes\n> >do not do ANY joins to re-create objects. That's why mappers suck so\n> >hard.\n> \n> If they don't do joins, then presumably they map many-to-one relations\n> by copying data into each of the \"many\" table rows. 
TANSTAAFL, no?\n^^^ ?\n\nThey have a similar layout on disk to what you might have in memory. So\nif you store a 1:M in memory as an array of pointers, that's how you\nmight do it on disk too.\n \n> Though this strategy is a very viable one in today's \n> big-memory, big-disk\n> environment. It's not clear to me that a extremely \n> smart RDBMS system\n> couldn't decide to add redundancy itself and gain \n> much of the efficiency,\n> but, heck, that's just my weak, uncreative \n> compiler-writer mind at work again.\n\nDo you mean an RDBMS might try and be smart and store it the same way?\nWell if it did that, we might call it an ODBMS. But the other main\nbenefit of an ODBMS is that retrieving records for many cases\n(non-ad-hoc) is very simple to program for because you don't have to map\nsay a join table into say a C++ List<type>. In \nother words it's not just the performance of ODBMS\nthat is good, but also the interface. Also\nif an RDBMS maps an object to a table and then maps it\nback to an array on disk, well you've done an \nunnecessary conversion.\n\n> >Now with an ODBMS, a Car with 4 wheels and a steering wheel we'll have 6\n> >objects in the database - 1 Car, 4 RoadWheels and 1 SteeringWheel. With\n> >a relational mapper, depending on how you map it you'll have 21 objects\n> >- 5 CarPart objects, 5 wheel objects, 4 road wheel, 1 steering wheel, 1\n> >car and 5 car_carpart relation entities. And when you join it all\n> >together you'll have to join against 6 tables instead of 3.\n> \n> Not really. You'd probably denormalize and not \n> worry about it, in practice.\n\nThen what happens to your RDBMSes wonderful ad-hoc query facility if you\nde-normalise? Will you have to do a UNION with about 5000 clauses to\nretrieve the volume and price of each type of car part?\n\n> Would the result be as beautiful? I don't know - do \n> most car designers\n> think that SteeringMechanism and PavementInterface \n>are the same? 
It's\n> true for a variety of reasons in today's cars that \n> aren't actually\n> related, and high-end race cars are exploring \n> joystick control.\n> \n> So one could claim that your hierarchy is merely \n> limiting creative expression...\n\nMy hierarchy? The point is that you can _have_ a hierarchy. It's well\naccepted that OO hierarchies are\ngood. The good thing here is being able to directly\nstore it in the database.\n",
"msg_date": "Sat, 05 Feb 2000 12:41:34 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Marten Feldtmann wrote:\n\n> Hmm, and yes one may find problems where the pure \n> relational system is 100x faster than your ODBMS.\n> \n> After doing a project with VERSANT and VisualWorks \n> (election projection system for the first television \n> sender here in Germany) I like the idea of OODBMS, \n> but I've also noticed, that they are not the \n> solution to all problems.\n\nGive me a clear application spec and VERSANT, and I will ALWAYS flog\nOracle into the dust. But...\n\nWhere SQL comes into it's own is _conveniently_ doing queries that I\nnever thought of when I first designed my app. Of course many ODBMSes\nhave SQL or similar too.\n\n> Joins per se are not that bad .. it depends on when \n> and how they are used and how good the analyzer of \n> the database is and how good he uses the indices to \n> get the job done.\n\nTake the simple SUPPLIER, PART and SUPPLIER_PART situation. The very\nfact that you've got an extra table here means you've got to touch many\nmore disk pages and transfer more data. An RDBMS just can't win when the\nODBMS data model is designed right.\n\n> One very good point is the query language of the \n> rdbms systems. On the odbms side no standard is \n> really available, which can be seen as the sql of \n> the odbms.\n\nThere is a standard called OQL which is very similar to SQL. It's just\nrather poorly supported.\n\n--\nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Sat, 05 Feb 2000 12:52:16 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Re: [HACKERS] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "At 12:41 PM 2/5/00 +1100, Chris wrote:\n>Don Baccus wrote:\n>\n>> Without prejudice, I'd be interested in some order-of-magnitude\n>> market share for these technologies vs., say, Oracle.\n>\n>Would you be interested in the market share of Win98 compared to Linux?\n\nPostgres isn't in competition with either of those software products. It\nis probably worth pointing out that at least some of the folks in the Linux\ncommunity would like to derail Win98 to some degree.\n\nAnd I, at least, would love to see Postgres derail Oracle to some degree.\n\n...\n\n>> There's no doubt of this, for applications that can > make use of the\nparadigms.\n>\n>To my mind that is like saying OO is useful for programs that can make\n>use of the paradigms. In fact I think nearly all programs can make use\n>of OO.\n\nThis really isn't the place for a religious fight. Personally, I believe\nthe OO paradigm is well-suited to the decomposition of some problems, not\nparticularly well-suited to others. I've only been a professional software\nengineer for 29 years, though, so I don't pretend to have all the answers.\nI'd humbly suggest that OO methodologists don't, either.\n\nBut, that's just my opinion.\n\nFar more important to me is that SQL queries not suffer performance hits\nas a result of whatever changes to OO support make it into the standard\nversion of PG.\n\nLet's just leave it at that, OK? \n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 04 Feb 2000 18:03:57 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Chris wrote:\n> \n> > One very good point is the query language of the\n> > rdbms systems. On the odbms side no standard is\n> > really available, which can be seen as the sql of\n> > the odbms.\n> \n> There is a standard called OQL which is very similar to SQL. It's just\n> rather poorly supported.\n> \n\nI think the operative word here is \"available\". I know that SQL specs \nare'nt freely available either, but due to SQL being already widely \nsupported one can get the general idea from many freely available sources, \nlike the bunch of freely downloadable DB's currently available for linux.\nMost of them have some docs included. \n\nIt is still quite a job to reconstruct SQL92 from them ;)\n\nI know now description (except a BNF syntax available from some ODBMS website)\nthat I could use to get some idea about OQL.\n\n----------------------\nHannu\n",
"msg_date": "Sat, 05 Feb 2000 14:00:52 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Re: [HACKERS] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> \n> It is still quite a job to reconstruct SQL92 from them ;)\n> \n> I know now description (except a BNF syntax available from some ODBMS website)\n\nSHould be \"I know no description ...\"\n\n> that I could use to get some idea about OQL.\n> \n> ----------------------\n> Hannu\n> \n> ************\n",
"msg_date": "Sat, 05 Feb 2000 14:19:31 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Re: [HACKERS] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Hannu Krosing wrote:\n\n> I think the operative word here is \"available\". I know that SQL specs\n> are'nt freely available either, but due to SQL being already widely\n> supported one can get the general idea from many freely available sources,\n> like the bunch of freely downloadable DB's currently available for linux.\n> Most of them have some docs included.\n> \n> It is still quite a job to reconstruct SQL92 from them ;)\n> \n> I know now description (except a BNF syntax available from some ODBMS website)\n> that I could use to get some idea about OQL.\n\nPoet at http://www.poet.com have their doco online including OQL.\n\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Sat, 05 Feb 2000 23:26:46 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Re: [HACKERS] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Chris wrote:\n> \n> Hannu Krosing wrote:\n> \n> > I think the operative word here is \"available\". I know that SQL specs\n> > are'nt freely available either, but due to SQL being already widely\n> > supported one can get the general idea from many freely available sources,\n> > like the bunch of freely downloadable DB's currently available for linux.\n> > Most of them have some docs included.\n> >\n> > It is still quite a job to reconstruct SQL92 from them ;)\n> >\n> > I know now description (except a BNF syntax available from some ODBMS website)\n> > that I could use to get some idea about OQL.\n> \n> Poet at http://www.poet.com have their doco online including OQL.\n> \n\nThanks, I'll check that.\n\nBtw, has anyone compared PostgreSQL's object features with SQL3 (draft)\nfeatures.\n\nFor example they seem to use UNDER instead of INHERITS and no parentheses.\n\nThey also have a special privilege also called UNDER for being able to define \na subtype (as they call it). A lot of other features seem to be considered too\n-\nno wonder it is 2.3 MB text file.\n\navailable at:\n\nftp://ftp.digital.com/pub/standards/sql/\n\nor\n\nhttp://gatekeeper.dec.com/pub/standards/sql/\n\nthe main file is sql-foundation-aug94.txt\n\n------------------------\nHannu\n",
"msg_date": "Sat, 05 Feb 2000 15:33:13 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Re: [HACKERS] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "1) free is freedom, not free bear ;-) also, there are some sites\n has oql draft -- like sql draft. sorry, can not remember.\n2) good books, like \" C++ object databases\" (David Jordan) has\n a lot material.\n3) a lot of OODBM evaluation copy there. Not totally complied, but\n together with 1 and 2, still can see what is going on.\n\nso, no excuse for not knowing oodbm/oql :-) -- I'm waiting for\ntrying them on pg . \n\nOn Sat, 5 Feb 2000, Hannu Krosing wrote:\n\n> Chris wrote:\n> > \n> > > One very good point is the query language of the\n> > > rdbms systems. On the odbms side no standard is\n> > > really available, which can be seen as the sql of\n> > > the odbms.\n> > \n> > There is a standard called OQL which is very similar to SQL. It's just\n> > rather poorly supported.\n> > \n> \n> I think the operative word here is \"available\". I know that SQL specs \n> are'nt freely available either, but due to SQL being already widely \n> supported one can get the general idea from many freely available sources, \n> like the bunch of freely downloadable DB's currently available for linux.\n> Most of them have some docs included. \n> \n> It is still quite a job to reconstruct SQL92 from them ;)\n> \n> I know now description (except a BNF syntax available from some ODBMS website)\n> that I could use to get some idea about OQL.\n> \n> ----------------------\n> Hannu\n> \n> ************\n> \n\n",
"msg_date": "Mon, 7 Feb 2000 00:15:34 -0600 (CST)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Re: [HACKERS] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "> 2) good books, like \" C++ object databases\" (David Jordan) has\n> a lot material.\n\n\nAs an example:\n\n Cattel, \"The Object Database Standard ODMG 2.0\"\n \n Morgan Kaufmann, ISBN 1 - 55860 - 463 -4\n\n\n Marten Feldtmann\n\n\n",
"msg_date": "Mon, 7 Feb 2000 18:06:18 +0100 (CET)",
"msg_from": "Marten Feldtmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Re: [HACKERS] Proposed Changes to PostgreSQL"
}
] |
[
{
"msg_contents": "unsubscribe\n______________________________________________________\nGet Your Private, Free Email at http://www.hotmail.com\n\n",
"msg_date": "Thu, 03 Feb 2000 05:52:33 PST",
"msg_from": "\"O Nubeiro\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "None"
}
] |
[
{
"msg_contents": "> The SQL style is to use wordy descriptions of the operators\n> meaning. \"ONLY\" fits well here because it describes its own meaning\n> perfectly whereas to the unitiated, \"*\" is harder to guess at. While\n> this change is an incompatibility I hope for those few people using\n> inheritance they can accept the need to move forward without\n> over-burden of backwards compatibility.\n\nMight also allow the *, but do nothing with it, or maybe throw a\n\"deprecated\" notice. \n\n> > SELECT *, studentid FROM person;\n> NAME\n> ----\n> Fred\n> Bill\n> \n> NAME | STUDENTID\n> ----------------\n> Jim | 23455 \n> Chris| 45666\n\nThe above is incorrect, since the * already returns studentid, thus the\nresult \nof the above query should be:\n> SELECT *, studentid FROM person;\nNAME\n----\nFred\nBill\n\nNAME | STUDENTID | FACULTY | STUDENTID\n--------------------------\nJim | 23455 | Science | 23455\nChris| 45666 | Arts | 45666\n \n> Also there should be an settable option that specifies that \"*\" should\n> also return the normally ignored columns of oid and classname. This is\n> so that OO programs that embed SQL into them also get back the oid and\n> classname which are required for the behind the scenes implementation\n> of an ODMG client. Something like...\n\nwhy don't they simply always \nselect oid, classname, * from ...\nThe reason I suggest this is, because implementing joins to return the \ncorrect oid, classname seems very complex.\n\nThe rest sounds good to me :-)\n\nAndreas\n",
"msg_date": "Thu, 3 Feb 2000 15:47:07 +0100 ",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Proposed Changes to PostgreSQL"
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n\n> > Also there should be an settable option that specifies that \"*\" should\n> > also return the normally ignored columns of oid and classname. This is\n> > so that OO programs that embed SQL into them also get back the oid and\n> > classname which are required for the behind the scenes implementation\n> > of an ODMG client. Something like...\n> \n> why don't they simply always\n> select oid, classname, * from ...\n> The reason I suggest this is, because implementing joins to return the\n> correct oid, classname seems very complex.\n\nBecause I envisage people using an ODBMS-ish interface and allowing\nuse of SQL queries. This infrastructure wouldn't work without oid and \nclassname. Forcing always to add oid, classname would be\nrepetitive and error prone.\n",
"msg_date": "Fri, 04 Feb 2000 09:48:04 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Proposed Changes to PostgreSQL"
}
] |
[
{
"msg_contents": "\n> > > > I would propose that that anytime you do a SELECT * \n> from a base table\n> > > > that you would get back the full rows from those sub tables.\n> \n> Maybe SELECT ** FROM BASE would be more flexible as it leaves \n> the standard \n> SQL with its \"standard\" meaning ?\n\nI like the idea of not messing with the traditional meaning of the *.\nThe Informix select * from table_with_subclasses also only returns \nthe parent columns.\n\nOf course I would also like that the default select * from table \nreturn all subclass rows.\n\nImho there is no real argument against extra syntax to select\nall columns of subclasses though.\n\nAndreas\n",
"msg_date": "Thu, 3 Feb 2000 16:21:03 +0100 ",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL"
}
] |
[
{
"msg_contents": "> > \n> Added to TODO:\n> \n> \t* Disallow LOCK on view \n> \n\nI really think we should give views a different relkind 'V'.\nAll this mess with a real table + rules is very unclean.\nIt is then clear that such a relkind does'nt have locks.\nThen it would also be easy to readd some of the now lost features\nlike computed columns ...\n\nI would instead add the following TODO:\n\t* create new relkind 'V' for views\n\nAndreas\n",
"msg_date": "Thu, 3 Feb 2000 16:30:32 +0100 ",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] SELECT FOR UPDATE leaks relation refcounts"
}
] |
[
{
"msg_contents": "\nShouldn't this produce something? Was talking with Dave Page today about\nthe lack of a serial type in PgAdmin, and he mentioned that its not a listed\ntype?\n\n\ntemplate1=> SELECT typname FROM pg_type WHERE typrelid = 0 and typname='serial';\n typname \n---------\n(0 rows)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 3 Feb 2000 11:54:20 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "SERIAL type isn't listed...?"
},
{
"msg_contents": "Serial isn't a type. The parser transforms it to int4 plus some default\nand a sequence. There is a TODO item in this direction, but I think no one\nis quite sure how/why/whether to do it.\n\nOn Thu, 3 Feb 2000, The Hermit Hacker wrote:\n\n> \n> Shouldn't this produce something? Was talking with Dave Page today about\n> the lack of a serial type in PgAdmin, and he mentioned that its not a listed\n> type?\n> \n> \n> template1=> SELECT typname FROM pg_type WHERE typrelid = 0 and typname='serial';\n> typname \n> ---------\n> (0 rows)\n> \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n> ************\n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 3 Feb 2000 17:28:07 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SERIAL type isn't listed...?"
},
{
"msg_contents": "> Shouldn't this produce something? Was talking with Dave Page today about\n> the lack of a serial type in PgAdmin, and he mentioned that its not a \n> listed type?\n\nRight. That's because the SERIAL type is a parser kludge rather than a\nfull-fledged type. At the moment, a column defined as SERIAL becomes,\nin the parser backend, a defined sequence (CREATE SEQUENCE ...) and an\nINT4 column with a constraint of DEFAULT ... which refers to the\nsequence just created.\n\nThere are downsides to this: the implicit SEQUENCE is not cleaned up\nif the column is destroyed; explicit reference to SERIAL is lost\nduring dump/restore; the implicit stuff just leads to confusion, etc\netc etc.\n\nPerhaps eventually it should become a type on its own, directly\naccessing the same structures as Vadim's \"sequence\" code. Or perhaps\nit could be done using the SPI interface.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 03 Feb 2000 16:31:47 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SERIAL type isn't listed...?"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> Shouldn't this produce something? Was talking with Dave Page today about\n> the lack of a serial type in PgAdmin, and he mentioned that its not a listed\n> type?\n\nOn 6.5, serial is not a type, but is promoted to \"int4 DEFAULT nextval (\n'\"sequence_name_here\"' ) NOT NULL\", generating the associated sequence\nin the process.\n\nSevo\n",
"msg_date": "Thu, 03 Feb 2000 19:06:23 +0100",
"msg_from": "Sevo Stille <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SERIAL type isn't listed...?"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> Shouldn't this produce something? Was talking with Dave Page today about\n> the lack of a serial type in PgAdmin, and he mentioned that its not a listed\n> type?\n\nDoes PgAdmin support VIEWs ? They don't really exist either.\n\n----------------------\nHannu\n",
"msg_date": "Fri, 04 Feb 2000 01:00:28 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SERIAL type isn't listed...?"
}
] |
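The SERIAL expansion that Thomas and Sevo describe in the thread above can be sketched as DDL. This is an illustrative sketch, not verbatim parser output: the sequence name `t_id_seq` is an assumption (the parser derives the real name from the table and column names).

```sql
-- Sketch: what "CREATE TABLE t (id SERIAL)" roughly becomes in the parser.
-- The sequence name "t_id_seq" is illustrative, not the exact generated name.
CREATE SEQUENCE t_id_seq;
CREATE TABLE t (
    id int4 DEFAULT nextval('t_id_seq') NOT NULL
);
-- Downsides noted in the thread: dropping t does not drop t_id_seq,
-- and a dump/restore preserves this expansion rather than the
-- original SERIAL declaration.
```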
[
{
"msg_contents": "\n\n Hi,\n\n is a plan remove from the contrib tree 'array interator' \n(in contrib/array) to the main tree as standard array operator(s)?\n\n \t\t\t\t\t\tKarel\n\n\n\n----------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n-----------------------------------------------------------------------\n\n",
"msg_date": "Thu, 3 Feb 2000 17:31:04 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "array operators to the main tree"
}
] |
[
{
"msg_contents": "OK, on the road toward \"outer join syntax\"...\n\nI'm implementing the \"column alias\" features of SQL92, as in\n\npostgres=# select b, c from t2 ty (b, c);\n b | c \n---+---\n 1 | 1\n 1 | 2\n 2 | 2\n(3 rows)\n\nwhere the t2 columns are labeled \"j, k\" when created.\n\nI'm running across the behavior that an explicit select as above\nworks, but if I try a wildcard expansion (select *...) instead of the\nexplicit column listing the planner decides it needs to do some wild\nnested join stuff:\n\npostgres=# select * from t2 ty (b, c);\n b | c \n---+---\n 1 | 1\n 1 | 2\n 2 | 2\n 1 | 1\n 1 | 2\n 2 | 2\n 1 | 1\n 1 | 2\n 2 | 2\n(9 rows)\n\n(Darn!)\n\nExplain shows the following for the two cases:\n\npostgres=# explain verbose select b, c from t2 ty (b, c);\nNOTICE: QUERY DUMP:\n\n{ SEQSCAN :cost 43 :rows 1000 :width 8 :state <> :qptargetlist ({\nTARGETENTRY :resdom { RESDOM :resno 1 :restype 23 :restypmod -1\n:resname b :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false }\n:expr { VAR :varno 1 :varattno 1 :vartype 23 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 1}} { TARGETENTRY :resdom {\nRESDOM :resno 2 :restype 23 :restypmod -1 :resname c :reskey 0\n:reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\n:varattno 2 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno 2}}) :qpqual <> :lefttree <> :righttree <> :extprm ()\n:locprm () :initplan <> :nprm 0 :scanrelid 1 }\nNOTICE: QUERY PLAN:\n\nSeq Scan on t2 ty (cost=43.00 rows=1000 width=8)\n\nEXPLAIN\npostgres=# explain verbose select * from t2 ty (b, c);\nNOTICE: QUERY DUMP:\n\n{ NESTLOOP :cost 43043 :rows 1000000 :width 12 :state <> :qptargetlist\n({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 23 :restypmod -1\n:resname b :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false }\n:expr { VAR :varno 65000 :varattno 1 :vartype 23 :vartypmod -1 \n:varlevelsup 0 :varnoold 0 :varoattno 1}} { TARGETENTRY :resdom {\nRESDOM :resno 2 :restype 23 :restypmod -1 
:resname c :reskey 0\n:reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno\n65000 :varattno 2 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold\n0 :varoattno 2}}) :qpqual <> :lefttree { SEQSCAN :cost 43 :rows 1000\n:width 4 :state <> :qptargetlist ({ TARGETENTRY :resdom { RESDOM\n:resno 1 :restype 26 :restypmod -1 :resname <> :reskey 0 :reskeyop 0\n:ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno -2\n:vartype 26 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno -2}})\n:qpqual <> :lefttree <> :righttree <> :extprm () :locprm () :initplan\n<> :nprm 0 :scanrelid 1 } :righttree { SEQSCAN :cost 43 :rows 1000\n:width 8 :state <> :qptargetlist ({ TARGETENTRY :resdom { RESDOM\n:resno 1 :restype 23 :restypmod -1 :resname <> :reskey 0 :reskeyop 0\n:ressortgroupref 0 :resjunk false } :expr { VAR :varno 0 :varattno 1\n:vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 0 :varoattno 1}} {\nTARGETENTRY :resdom { RESDOM :resno 2 :restype 23 :restypmod -1\n:resname <> :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false }\n:expr { VAR :varno 0 :varattno 2 :vartype 23 :vartypmod -1 \n:varlevelsup 0 :varnoold 0 :varoattno 2}}) :qpqual <> :lefttree <>\n:righttree <> :extprm () :locprm () :initplan <> :nprm 0 :scanrelid 0\n} :extprm () :locprm () :initplan <> :nprm 0 }\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=43043.00 rows=1000000 width=12)\n -> Seq Scan on t2 ty (cost=43.00 rows=1000 width=4)\n -> Seq Scan (cost=43.00 rows=1000 width=8)\n\nEXPLAIN\n\n\nI *think* that the transformed parts of the query tree looks similar\nfor the two cases coming out of the parser, but clearly something is\ndifferent. Does anyone (Tom Lane??) know if the planner reaches back\ninto the untransformed nodes of the parse tree to get info? 
The resdom\nnodes in the transformed target list look the same for the two cases,\nbut the planner is generating a bunch of new ones sometime later.\n\nHints would be appreciated, though I'm pretty sure I'll be able to\ntrack it down even without ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 03 Feb 2000 16:43:51 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parser/planner and column aliases"
},
{
"msg_contents": "> I'm running across the behavior that an explicit select as above\n> works, but if I try a wildcard expansion (select *...) instead of the\n> explicit column listing the planner decides it needs to do some wild\n> nested join stuff:\n\nHmm. Wildcarding like this works:\n\npostgres=# select ty.* from t2 ty (b, c);\n b | c \n---+---\n 1 | 1\n 1 | 2\n 2 | 2\n(3 rows)\n\nSo my problems are maybe just within the parser. Will keep looking at\nit...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 03 Feb 2000 17:00:54 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Parser/planner and column aliases"
},
{
"msg_contents": "Ha, got it:\n\npostgres=# select * from t2 ty (b, c);\n b | c \n---+---\n 1 | 1\n 1 | 2\n 2 | 2\n(3 rows)\n\nso never mind...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 03 Feb 2000 17:04:15 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Parser/planner and column aliases"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I'm running across the behavior that an explicit select as above\n> works, but if I try a wildcard expansion (select *...) instead of the\n> explicit column listing the planner decides it needs to do some wild\n> nested join stuff:\n\n> Nested Loop (cost=43043.00 rows=1000000 width=12)\n> -> Seq Scan on t2 ty (cost=43.00 rows=1000 width=4)\n> -> Seq Scan (cost=43.00 rows=1000 width=8)\n\nMan, that's weird-looking. What happened to the table name in the\nsecond Seq Scan line? I think you must be passing a broken rangetable\nlist.\n\nMy guess is that expansion of \"*\" is somehow failing to recognize that\nit should be using the same RTE for all columns, and is causing an\nextra bogus RTE to get added to the list. Put two RTEs in there and\nyou get a join...\n\n\t\t\tregards, tom lane\n\nBTW, this example reminds me once again that un-pretty-printed\nEXPLAIN VERBOSE output is damn near unreadable. Would anyone object\nif it got formatted the same as what goes to the postmaster log?\n(It used to be unsafe to do that, but now that we can cope with\nunlimited-length NOTICE messages I see no real good reason not to\nformat it more nicely.)\n",
"msg_date": "Thu, 03 Feb 2000 12:30:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parser/planner and column aliases "
}
] |
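The nine rows Thomas saw match Tom's diagnosis: a stray second rangetable entry makes the query behave like an unconstrained self-join, multiplying the three base rows. A sketch of that arithmetic, using Python with SQLite purely for illustration (an assumption; the thread concerns the PostgreSQL planner), with the `j`, `k` columns and the three rows from Thomas's output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t2 (j INTEGER, k INTEGER)")
conn.executemany("INSERT INTO t2 VALUES (?, ?)",
                 [(1, 1), (1, 2), (2, 2)])

# One rangetable entry: a plain scan returns the three rows.
plain = conn.execute("SELECT j, k FROM t2").fetchall()

# A duplicate rangetable entry acts like an unconstrained self-join,
# which is what the broken wildcard expansion effectively planned:
joined = conn.execute(
    "SELECT ty.j, ty.k FROM t2 AS ty, t2 AS tz").fetchall()

print(len(plain), len(joined))  # 3 rows vs 3 x 3 = 9 rows
```

The nine joined rows are the three base rows each repeated three times, matching the output Thomas reported for `SELECT * FROM t2 ty (b, c)` before the fix.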
[
{
"msg_contents": "As usual when replying from here, replies prefixed with PM:\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Thursday, February 03, 2000 4:26 PM\nTo: Chris\nCc: Bruce Momjian; [email protected];\[email protected]\nSubject: Re: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL \n\nPM: [snip]\n\nFor the purpose at hand, I think it would be OK to have a\n\"relhaschildren\" field that is set true when the first child is created\nand then never changed. If you have a table that once had children but\nhas none at the moment, then you pay the price of looking through\npg_inherits; but the case that we're really concerned about (a pure SQL,\nno-inheritance table) would still win.\n\nPM: Perhaps get vacuum to check for any children when it's set, and if\nit finds none, it clears the flag?\n",
"msg_date": "Thu, 3 Feb 2000 16:51:33 -0000 ",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Re: [SQL] Proposed Changes to PostgreSQL "
}
] |
[
{
"msg_contents": "This is probably somewhat off topic, but all the people who need to\nknow about this read this list, so, if you'll forgive me:\n\nBackground:\nThe project I'm working on is using ColdFusion as it's middleware,\naccessing a PostgreSQL backend. To date, we've had to run on NT, using\nODBC to the DB on a separate box. Allaire is in the process of releasing\na real live Linux version of their software (a native port, this time)\nwhich I've been beta testing. I was pleased to discover that the unixODBC\ndriver worked (although I had to hand configure it into ColdFusion.)\n\nCurrent news:\nI just pulled the latest beta, and low and behold, it's checking to see\nif PostgreSQL is installed, in order to install examples! It missed on my\nbox, since I run a Debian install, not the RedHat it's expecting, but\nthey're on the right track. Yep, the pgsql datasources are configurable\nfrom within the CF adminstrator pages: excellent!\n\nThis _will_ lead to more commercial type users, I can guarantee.\nEspecially since the examples will be there. Lamar, we should make sure\nthat they detect the RPM install correctly, so that the examples just\nwork, right out of the box. I can image a lot of \"throw together a demo,\nusing a DB backend, oh, here's PostgreSQL, I can use that\" systems ending\nup in production, since it'll just keep working.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n",
"msg_date": "Thu, 3 Feb 2000 12:36:14 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "coming ColdFusion support for PostgreSQL"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n> This _will_ lead to more commercial type users, I can guarantee.\n> Especially since the examples will be there. Lamar, we should make sure\n> that they detect the RPM install correctly, so that the examples just\n> work, right out of the box. I can image a lot of \"throw together a demo,\n> using a DB backend, oh, here's PostgreSQL, I can use that\" systems ending\n> up in production, since it'll just keep working.\n\nIf you, as a beta tester for Cold Fusion, can let me know what they're\nlooking for, then I can oblige them with no problem. :-)\n\nI am going to have to make it easier for third party software to detect\nthe RPM installation -- while things have settled down on where things\nare, I have been considering moving some things around -- in particular,\nthe location of PGDATA is likely to move in 7.0 RPM's unless I hear a\ncry otherwise. Currently, PGDATA is /var/lib/pgsql, I'm considering\nchanging that to /var/lib/pgsql/data, which is more in line with the\nstandard installation. This gives me the whole /var/lib/pgsql tree for\nbackups and other temp data that I need to move out of /usr/lib/pgsql.\n\nI am open to suggestions -- environment variables perhaps?\n\nI am going to look at Olivers Debian packages more closely so that I can\nuse the same envvar names for consistency, if I do use envvars.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 03 Feb 2000 13:59:07 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] coming ColdFusion support for PostgreSQL"
},
{
"msg_contents": "On Thu, Feb 03, 2000 at 01:59:07PM -0500, Lamar Owen wrote:\n> \"Ross J. Reedstrom\" wrote:\n> > This _will_ lead to more commercial type users, I can guarantee.\n> > Especially since the examples will be there. Lamar, we should make sure\n> > that they detect the RPM install correctly, so that the examples just\n> > work, right out of the box. I can image a lot of \"throw together a demo,\n> > using a DB backend, oh, here's PostgreSQL, I can use that\" systems ending\n> > up in production, since it'll just keep working.\n> \n> If you, as a beta tester for Cold Fusion, can let me know what they're\n> looking for, then I can oblige them with no problem. :-)\n\nHmm, looks like it would have found either a RPM or source install,\nbut mis-sets a variable for the source install. I'll send you the\nappropriate snippet from the install script privately. The script wants\nto find createdb and psql to run, and uses the location of postmaster\nas a clue.\n\n> \n> I am going to look at Olivers Debian packages more closely so that I can\n> use the same envvar names for consistency, if I do use envvars.\n> \n\nGood plan. Coordinating the different packages will make everyone's job \neasier.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n\n",
"msg_date": "Thu, 3 Feb 2000 15:15:18 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] coming ColdFusion support for PostgreSQL"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n >Background:\n >The project I'm working on is using ColdFusion as it's middleware,\n >accessing a PostgreSQL backend. To date, we've had to run on NT, using\n >ODBC to the DB on a separate box. Allaire is in the process of releasing\n >a real live Linux version of their software (a native port, this time)\n >which I've been beta testing. I was pleased to discover that the unixODBC\n >driver worked (although I had to hand configure it into ColdFusion.)\n >\n >Current news:\n >I just pulled the latest beta, and low and behold, it's checking to see\n >if PostgreSQL is installed, in order to install examples! It missed on my\n >box, since I run a Debian install, not the RedHat it's expecting, but\n >they're on the right track. Yep, the pgsql datasources are configurable\n >from within the CF adminstrator pages: excellent!\n \nIf you can give me a contact, I would like to help them set it up as a\nDebian package.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"O come, let us worship and bow down; let us kneel \n before the LORD our maker.\" Psalms 95:6 \n\n\n",
"msg_date": "Thu, 03 Feb 2000 21:41:23 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] coming ColdFusion support for PostgreSQL "
}
] |
[
{
"msg_contents": ">the location of PGDATA is likely to move in 7.0 RPM's unless I hear a\n>cry otherwise. Currently, PGDATA is /var/lib/pgsql, I'm considering\n>changing that to /var/lib/pgsql/data, which is more in line with the\n>standard installation. This gives me the whole /var/lib/pgsql tree for\n>backups and other temp data that I need to move out of /usr/lib/pgsql.\n\nMy two cents (sense?).\n\n/var/lib follows Redhat dist install, correct?\n\nI just installed 6.5.3 from the rpms for the first time and was thrown off\njust a little, because I have always compiled & installed from the sources\nto the default of /usr/local/pgsql.\nThis caused me to update some environment variables (PGxxx & PATH), but\nthat's about it. I actually prefer to have the source install and the rpm\ninstall remain in separate places. But I would suggest /var/lib/pgsql/data\njust to be inline with the source install. BTW, the rpm install was a\nbreeze. Much appreciated ;-)\n\nKen\n\n",
"msg_date": "Thu, 03 Feb 2000 11:28:02 -0800",
"msg_from": "\"Ken J. Wright\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] coming ColdFusion support for\n\tPostgreSQL"
}
] |
[
{
"msg_contents": "We were having some trouble doing updates to our database,\na lot of our database sort of works like this:\n\n\ndbfunc(data) \n\tsomedatatype\t*data;\n{\n\tsomedatatype\t*existing_row;\n\n\texisting_row = exists_in_table(data);\n\n\tif (existing_row != NULL) {\n\t\tupdate_table(existing_row, count = count + data->count)\n\t} else\n\t\tinsert_into_table(data);\n\n}\n\nIs there anything built into postgresql to accomplish this without\nthe \"double\" work that goes on here?\n\nsomething like:\n update_row_but_insert_if_it_doesn't_exist(data, \n update = 'count = count + data->count');\n\nMeaning, if a row matching the 'new' data exists, update it, otherwise\nstore our new data as a new record?\n\nIt seems like the database has to do an awful amount of extra work\nfor our application because we haven't figured out how to do this\neffeciently.\n\nAny pointers?\n\nthanks,\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n",
"msg_date": "Thu, 3 Feb 2000 14:54:01 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to deal with sparse/to-be populated tables"
},
{
"msg_contents": "The thing is, in the relational model there isn't a standard\ndefinition of \"already exists\". For example, when you say\n\"already exists\", I presume you mean that a record with the\nsame primary key already exists. But not all tables have\nprimary keys.\n\nThere are two things you can do...\n\n1) remember if a record came out of the database in the first place\nwith a flag. This is what an object database would do.\n\n2) If there is a unique index, instead of checking \nwhether the record exists with exists_in_table,\nattempt to update the record. If you get a database error, THEN\ndo an insert. This is a common programming technique, often used\nwith unix system calls. Try one option and if error try the other.\nDon't try to predict yourself whether an error will occur. This\nwill save one or two database calls depending on whether it exists\nor not.\n\nAlfred Perlstein wrote:\n> \n> We were having some trouble doing updates to our database,\n> a lot of our database sort of works like this:\n> \n> dbfunc(data)\n> somedatatype *data;\n> {\n> somedatatype *existing_row;\n> \n> existing_row = exists_in_table(data);\n> \n> if (existing_row != NULL) {\n> update_table(existing_row, count = count + data->count)\n> } else\n> insert_into_table(data);\n> \n> }\n> \n> Is there anything built into postgresql to accomplish this without\n> the \"double\" work that goes on here?\n> \n> something like:\n> update_row_but_insert_if_it_doesn't_exist(data,\n> update = 'count = count + data->count');\n> \n> Meaning, if a row matching the 'new' data exists, update it, otherwise\n> store our new data as a new record?\n> \n> It seems like the database has to do an awful amount of extra work\n> for our application because we haven't figured out how to do this\n> effeciently.\n> \n> Any pointers?\n> \n> thanks,\n> --\n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \n> ************\n",
"msg_date": "Fri, 04 Feb 2000 11:01:44 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] how to deal with sparse/to-be populated tables"
},
{
"msg_contents": "* Chris Bitmead <[email protected]> [000203 16:32] wrote:\n> The thing is, in the relational model there isn't a standard\n> defininition of \"already exists\". For example, when you say\n> \"already exists\", I presume you mean that a record with the\n> same primary key already exists. But not all tables have\n> primary keys.\n\nI could adopt the tables to use this particular field as a primary\nkey, but see my questions about interpreting errors in response\nto suggestion #2.\n\n> There are two things you can do...\n> \n> 1) remember if a record came out of the database in the first place\n> with a flag. This is what an object database would do.\n\nYou mean implement an LRU cache outside the database. I've thought about\nthis and could actually do it; the thing that bugs me about it is\nthat I'm essentially trying to outsmart a 10+ year (guessing) old\npiece of software with something that I'd have to hack up in a\nmatter of days.\n\n> 2) If there is a unique index, instead of checking \n> whether the record exists with exists_in_table,\n> attempt to update the record. If you get a database error, THEN\n> do an insert. This is a common programming technique, often used\n> with unix system calls. Try one option and if error try the other.\n> Don't try to predict yourself whether an error will occur. This\n> will save 1 or two database calls depending on whether it exists\n> or not.\n\nThis is what I was thinking; the problem then becomes that I'm\nnot aware of a way to determine the error with\nsome degree of accuracy so that I don't mistake:\n insert error because of duplication\nwith:\n insert error because of database connectivity (or other factors)\n\nIs it possible to do that? I guess I could parse the error response\nfrom the backend, but maybe there's an easier/more-correct way?\n\n-Alfred\n\n\n> \n> Alfred Perlstein wrote:\n> > \n> > We were having some trouble doing updates to our database,\n> > a lot of our database sort of works like this:\n> > \n> > dbfunc(data)\n> > somedatatype *data;\n> > {\n> > somedatatype *existing_row;\n> > \n> > existing_row = exists_in_table(data);\n> > \n> > if (existing_row != NULL) {\n> > update_table(existing_row, count = count + data->count)\n> > } else\n> > insert_into_table(data);\n> > \n> > }\n> > \n> > Is there anything built into postgresql to accomplish this without\n> > the \"double\" work that goes on here?\n> > \n> > something like:\n> > update_row_but_insert_if_it_doesn't_exist(data,\n> > update = 'count = count + data->count');\n> > \n> > Meaning, if a row matching the 'new' data exists, update it, otherwise\n> > store our new data as a new record?\n> > \n> > It seems like the database has to do an awful amount of extra work\n> > for our application because we haven't figured out how to do this\n> > effeciently.\n> > \n> > Any pointers?\n> > \n> > thanks,\n> > --\n> > -Alfred Perlstein - [[email protected]|[email protected]]\n> > \n> > ************\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n",
"msg_date": "Thu, 3 Feb 2000 16:54:37 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] how to deal with sparse/to-be populated tables"
},
{
"msg_contents": "Alfred Perlstein wrote:\n> > There are two things you can do...\n> >\n> > 1) remember if a record came out of the database in the first place\n> > with a flag. This is what an object database would do.\n> \n> You mean implement an LRU cache outside the database, I've thought about\n> this and could actually do it, the thing that bugs me about it is\n> that i'm essentially trying to outsmart a 10+ year (guessing) old\n> piece of software with something that I'd have to hack up in a\n> matter of days.\n\nWell, you only gave a small code snippet, I don't know how your app\nworks.\n\nBut often you retrieve tuples from the database and populate a C struct\nor something...\n\nstruct Person {\n\tchar *firstname;\n\tchar *lastname;\n};\n\nWhat I'm saying is, if you are already doing something like this, then\njust add one more boolean to say if it is a new or existing Person.\nIf you are not doing anything like this currently then it's not an\noption.\n\nAlternatively wait for my ODBMS features :-)\n\n> This is what I was thinking, the problem then becomes that I'm\n> not aware of way to determine the error with\n> some degree of accuracy so that I don't mistake:\n> insert error because of duplication\n> with:\n> insert error because of database connectivity (or other factors)\n> \n> Is it possible to do that? I guess I could parse the error responce\n> from the backend, but maybe there's an easier/more-correct way?\n\nHmm. Doesn't PostgreSQL have a big list of error codes? I don't think\nit does, I've never seen one. There should be a way to get error\ncodes without comparing strings. Should this be on the TODO?\n",
"msg_date": "Fri, 04 Feb 2000 12:10:08 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] how to deal with sparse/to-be populated tables"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Hmm. Doesn't PostgreSQL have a big list of error codes? I don't think\n> it does, I've never seen one. There should be a way to get error\n> codes without comparing strings. Should this be on the TODO?\n\nIt doesn't, there should, and it already is ;-)\n\nIn the meantime, looking at the error message string is Alfred's\nonly option for distinguishing duplicate-record from other errors,\nI'm afraid.\n\nA partial answer to his performance concern is to use a rule\n(or possibly a trigger) on the database side to reinterpret\n\"insert into table X\" as \"either insert or update in table Y,\ndepending on whether the key is already there\". This wouldn't\nbuy anything in terms of database cycles, but it would avoid two\nrounds of client-to-backend communication and query parsing.\n\nI've never done that myself, but perhaps someone else on the\nlist has a working example.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Feb 2000 23:31:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] how to deal with sparse/to-be populated tables "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> Chris Bitmead <[email protected]> writes:\n> > Hmm. Doesn't PostgreSQL have a big list of error codes? I don't think\n> > it does, I've never seen one. There should be a way to get error\n> > codes without comparing strings. Should this be on the TODO?\n> \n> It doesn't, there should, and it already is ;-)\n> \n\nDoesn't the following TODO imply it?\n\n* Allow elog() to return error codes, not just messages\n\nMany people have complained about it.\nHowever, it seems not effective without statement-level rollback\nfunctionality. AFAIK, Vadim has planned it together with savepoint\nfunctionality.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 4 Feb 2000 14:01:05 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] how to deal with sparse/to-be populated tables "
},
{
"msg_contents": "\nFrom time to time the old Time Travel postgres functionality is\nmentioned. When it is mentioned, somebody usually says \"Yeah, well, you\ncan implement it just as well with triggers, therefore it's redundant\",\nand the doco says \"New features such as triggers allow one to mimic the\nbehavior of time travel when desired, without incurring the overhead\nwhen it is not needed (for most users, this is most of the time).\"\n\nThis seems to fail to take into account the original design, which was\nto take advantage of a different style of storage manager, one that doesn't\nhave an undo log. Unless I'm missing something, postgres is indeed still\n\"incurring the overhead\" of time travel, but losing the feature.\n\nIn fact, if you have fsync turned on for full safety, the postgres\nperformance is going to be bad compared to a regular-design\nstorage manager.\n\nOn the other hand, the postgres storage manager had the advantage of time\ntravel because it does not update in place.\n\nNow in the documentation it mentioned removing time travel because\nof \"performance impact, storage size, and a pg_time file which\ngrows toward infinite size in a short period of time.\"\n\nNow since I believe the postgres storage manager does not replace\nrecords in place when updated, I can't see how it is different to\nhaving the time travel feature with vacuum configured to remove\nall old records immediately. I don't know what the pg_time file\nis.\n\nHave I missed something about why taking out time travel has\nimproved performance, as opposed to simply making immediate\nvacuum the default? Clearly the performance of triggers as an\nalternative is going to suck very badly, since the postgres\nstorage manager was built specially from the ground up to\nsupport time travel with its non-update semantics, and it\nstill has these characteristics.\n",
"msg_date": "Fri, 04 Feb 2000 16:16:43 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Time travel"
},
{
"msg_contents": "* Tom Lane <[email protected]> [000203 20:58] wrote:\n> Chris Bitmead <[email protected]> writes:\n> > Hmm. Doesn't PostgreSQL have a big list of error codes? I don't think\n> > it does, I've never seen one. There should be a way to get error\n> > codes without comparing strings. Should this be on the TODO?\n> \n> It doesn't, there should, and it already is ;-)\n> \n> In the meantime, looking at the error message string is Alfred's\n> only option for distinguishing duplicate-record from other errors,\n> I'm afraid.\n> \n> A partial answer to his performance concern is to use a rule\n> (or possibly a trigger) on the database side to reinterpret\n> \"insert into table X\" as \"either insert or update in table Y,\n> depending on whether the key is already there\". This wouldn't\n> buy anything in terms of database cycles, but it would avoid two\n> rounds of client-to-backend communication and query parsing.\n> \n> I've never done that myself, but perhaps someone else on the\n> list has a working example.\n\nActually, we have some plpgsql code lying around that does this.\nThe issue isn't ease of implementation, but actually the speed of\nthe implementation. Even parsing the error return isn't as optimal\nas an insert_new|update_existing_with_args single op would be.\n\nOne of the more frustrating aspects is that we could use the field\nthat we merge rows on as a primary index; this would allow us to\ndo an insert, or an update on failed insert...\n\nhowever... if we fail to locate the row on the initial query (to\nsee if it exists) we pay a large penalty because the insert must\nbe validated to be unique. This effectively doubles the search.\nThis is also a problem if we do \"update or insert on fail\"; basically\na double scan is required.\n\n(yes, I just thought about only indexing, and trying the update\nfirst and only on failure doing an insert; however, we really can't\ndetermine if the initial update failed because no record matched (ok),\nor possibly some other error (ouch))\n\nThat's why we can't use this field as a primary index, even though\nit is supposed to be unique.\n\nBasically the database seems to force a _double_ lookup; the only\nway I see around this is to then switch over to a bulk copy, getting\naround the double lookup. However, this will only work for our\nspecial case where there is only a single reader/writer updating\nthe table at any time; otherwise we need special locking to avoid\nraces.\n\nEven if this isn't a TODO item, if there's a wish list out there\nit'd be nice to see this feature request listed.\n\nI think once the dust settles over here and the need to scale goes\nfrom very scalable to insanely scalable I'm going to have an even\ngreater interest in learning postgresql internals. :)\n\nthanks,\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n",
"msg_date": "Thu, 3 Feb 2000 21:32:33 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] how to deal with sparse/to-be populated tables"
},
{
"msg_contents": "* Hiroshi Inoue <[email protected]> [000203 21:34] wrote:\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Tom Lane\n> > \n> > Chris Bitmead <[email protected]> writes:\n> > > Hmm. Doesn't PostgreSQL have a big list of error codes? I don't think\n> > > it does, I've never seen one. There should be a way to get error\n> > > codes without comparing strings. Should this be on the TODO?\n> > \n> > It doesn't, there should, and it already is ;-)\n> > \n> \n> Doens't the following TODO imply it ?\n> \n> * Allow elog() to return error codes, not just messages\n> \n> Many people have complained about it.\n> However,it seems not effective without a functionality of statement\n> level rollback. AFAIK,Vadim has planed it together with savepoint\n> functionality.\n\nIt would help, but it wouldn't avoid the double searches I seem\nto need to do to maintain a unique index.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n",
"msg_date": "Thu, 3 Feb 2000 21:41:16 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] how to deal with sparse/to-be populated tables"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> (yes, I just thought about only indexing, and trying the update\n> first and only on failure doing an insert, however we really can't\n> determine if the initial update failed because no record matched(ok),\n> or possible some other error (ouch))\n\nUh ... why not? \"UPDATE 0\" is a perfectly recognizable result\nsignature, it seems like. (I forget just how that looks at the\nlibpq API level, but if psql can see it so can you.)\n\nAlternatively, if you think the insert is more likely to be the\nright thing, try it first and look to see if you get a \"can't\ninsert duplicate key into unique index\" error.\n\nYou're right that SQL provides no combination statement that would\nallow these sequences to be done with only one index probe. But\nFWIW, I'd think that the amount of wasted I/O would be pretty minimal;\nthe relevant index pages should still be in the buffer cache when\nthe second query gets to the backend.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Feb 2000 01:06:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] how to deal with sparse/to-be populated tables "
},
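Tom's "UPDATE 0" point above is the key to doing this without parsing error strings: an UPDATE that matches no rows is not an error, so the command status alone tells you whether to fall back to an INSERT. A minimal sketch of that pattern, using Python's stdlib sqlite3 purely as a self-contained stand-in for a libpq client (the `counts` table and column names are invented for illustration):

```python
import sqlite3

def upsert_count(conn, key, delta):
    # Try the UPDATE first; an UPDATE that matches zero rows is not
    # an error, so rowcount (the "UPDATE 0" status) distinguishes
    # "row absent" from a real failure without parsing error strings.
    cur = conn.execute(
        "UPDATE counts SET count = count + ? WHERE key = ?", (delta, key))
    if cur.rowcount == 0:  # nothing matched: fall back to INSERT
        conn.execute(
            "INSERT INTO counts (key, count) VALUES (?, ?)", (key, delta))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counts (key TEXT PRIMARY KEY, count INTEGER)")
upsert_count(conn, "a", 1)   # no row yet -> INSERT path
upsert_count(conn, "a", 2)   # row exists -> UPDATE path, count becomes 3
print(conn.execute("SELECT count FROM counts WHERE key = 'a'").fetchone()[0])
# prints 3
```

In libpq the rowcount check would correspond roughly to inspecting the affected-row count of the command result (PQcmdTuples) rather than cursor.rowcount; this sketch only illustrates the control flow.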
{
"msg_contents": "\n>This is what I was thinking, the problem then becomes that I'm\n>not aware of way to determine the error with\n>some degree of accuracy so that I don't mistake:\n> insert error because of duplication\n>with:\n> insert error because of database connectivity (or other factors)\n>\n>Is it possible to do that? I guess I could parse the error responce\n>from the backend, but maybe there's an easier/more-correct way?\n\nNot sure what interface you are using, but for example, Perl will\neasily tell the difference.\n\n========================================================================\n execute\n\n $rv = $sth->execute || die $sth->errstr;\n $rv = $sth->execute(@bind_values) || die $sth->errstr;\n\n Perform whatever processing is necessary to execute\n the prepared statement. An undef is returned if an\n error occurs, a successful execute always returns true\n regardless of the number of rows affected (even if\n it's zero, see below). It is always important to check\n the return status of execute (and most other DBI\n methods) for errors.\n\n For a non-select statement, execute returns the number\n of rows affected (if known). If no rows were affected\n then execute returns \"0E0\" which Perl will treat as 0\n but will regard as true. Note that it is not an error\n for no rows to be affected by a statement. If the\n number of rows affected is not known then execute\n returns -1.\n========================================================================\n\nwhich means the return value will be 0 if the insert is blocked, but\nundef if there is a connectivity error.\n\nIn other words, failing to insert where a unique index prevents the\ninsertion is not an error.\n\nPHP is similar.\n\nOne trick is to insert all tuples into a temporary table. Then do an\nupdate using the natural join. Then do the insert from that same\ntable.\n\nIf you can use a copy to create the temporary table, I think your\nperformance will be best.\n\nTypically I would index the primary key of the temp table so that the\njoin proceeds well, but you may want to bench yourself with and\nwithout the index. I don't think it's needed in the case you\ndescribe.\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n",
"msg_date": "Fri, 4 Feb 2000 09:15:40 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] how to deal with sparse/to-be populated tables"
},
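Karl's temp-table trick can be sketched end-to-end. This is a hypothetical illustration, again using stdlib sqlite3 in place of PostgreSQL (where the load step would be a COPY into the temporary table) and with invented table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE counts (key TEXT PRIMARY KEY, count INTEGER);
    CREATE TEMP TABLE incoming (key TEXT, count INTEGER);
""")
conn.execute("INSERT INTO counts VALUES ('a', 10)")

# Bulk-load the new batch (this is where COPY would go in PostgreSQL).
conn.executemany("INSERT INTO incoming VALUES (?, ?)",
                 [("a", 1), ("b", 2)])

# Pass 1: update every existing row from the staged batch.
conn.execute("""
    UPDATE counts
    SET count = count + (SELECT i.count FROM incoming i
                         WHERE i.key = counts.key)
    WHERE key IN (SELECT key FROM incoming)
""")
# Pass 2: insert only the staged rows that matched nothing.
conn.execute("""
    INSERT INTO counts
    SELECT i.key, i.count FROM incoming i
    WHERE NOT EXISTS (SELECT 1 FROM counts c WHERE c.key = i.key)
""")
print(sorted(conn.execute("SELECT key, count FROM counts").fetchall()))
# prints [('a', 11), ('b', 2)]
```

The point of the trick is that each table is scanned once per pass instead of probing the index once per incoming row from the client.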
{
"msg_contents": "Chris Bitmead wrote:\n> \n> Now in the documentation it mentioned removing time travel because\n> \"performance impact, storage size, and a pg_time file which\n> grows toward infinite size in a short period of time.\".\n\nAt the time this was written, a 200MB disk was a big disk. \n\n> Now since I believe the postgres storage manager does not replace\n> records in place when updated,\n\nYes, it's true at least for 6.5.3 (I've written a small script that \nextracts the old/hidden tuples), and I'm pretty sure for 7.x too.\nPerhaps it is the removal of pg_time (which I think recorded the correspondence \nbetween transaction ids and timestamps) that gives the big performance win.\n\n> I can't see how it is different to\n> having the time travel feature with vacuum configured to remove\n> all old records immediately. I don't know what the pg_time file\n> is.\n\nI guess it could be just an append-only, monotonically growing 'tape'-type file, \nsuitable for being searched using binary search. So really not nearly as \nmuch overhead as would be a regular pg table with two indexes.\n\n> Have I missed something about why taking out time travel has\n> improved performance, as opposed to simply making immediate\n> vacuum the default? Clearly the performance of triggers as an\n> alternative is going to suck very badly, since the postgres\n> storage manager was built specially from the ground up to\n> support time travel with its non-update semantics, and it\n> still has these characteristics.\n\nImplementing time-travel with triggers will actually give us double \ntime-travel, one hidden and one visible ;)\n",
"msg_date": "Sat, 05 Feb 2000 00:15:11 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Time travel"
}
] |
[
{
"msg_contents": "Attached is a tarball that contains:\n\npl-perl.sgml - A start at documentation.\n\tThomas, I didn't know where this belongs. Does\n\tit need to be integrated into the user manual\n\tor should it stand on its own. If it needs\n\tto stand alone, then all the docbook declarations\n\tneed to be added at the front.\n\ncreatelang.sh.diff - a patch to add plperl\n\tto the repertoire of createlang.sh\n-- \nMark Hollomon\[email protected]",
"msg_date": "Thu, 3 Feb 2000 20:51:12 -0500",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "docs and createlang patch for plperl"
},
{
"msg_contents": "> pl-perl.sgml - A start at documentation.\n> Thomas, I didn't know where this belongs. Does\n> it need to be integrated into the user manual\n> or should it stand on its own. If it needs\n> to stand alone, then all the docbook declarations\n> need to be added at the front.\n\nGreat. I haven't had a chance to look, but I would assume that it\nshould be in the User's Guide or more likely in the Programmer's\nGuide.\n\nThanks for the docs!! I'll add them in this weekend, if not before.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 04 Feb 2000 07:47:27 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] docs and createlang patch for plperl"
},
{
"msg_contents": "Mark Hollomon <[email protected]> writes:\n> createlang.sh.diff - a patch to add plperl\n> \tto the repertoire of createlang.sh\n\nGood, but dare I mention that droplang needs to know about it too?\n\nI'm sure you see no possible reason for someone to want to drop\nplperl ;-) ... but it should be there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Feb 2000 11:00:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] docs and createlang patch for plperl "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Mark Hollomon <[email protected]> writes:\n> > createlang.sh.diff - a patch to add plperl\n> > to the repertoire of createlang.sh\n> \n> Good, but dare I mention that droplang needs to know about it too?\n> \n> I'm sure you see no possible reason for someone to want to drop\n> plperl ;-) ... but it should be there.\n\nDoooh! of course. I'll post a patch tonight.\n\nI guess the docs for droplang and createlang need updating as well.\nUnless Thomas has beat me to it.\n\n-- \n\nMark Hollomon\[email protected]\n",
"msg_date": "Tue, 08 Feb 2000 13:59:35 -0500",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] docs and createlang patch for plperl"
},
{
"msg_contents": "> Tom Lane wrote:\n> > \n> > Mark Hollomon <[email protected]> writes:\n> > > createlang.sh.diff - a patch to add plperl\n> > > to the repertoire of createlang.sh\n> > \n> > Good, but dare I mention that droplang needs to know about it too?\n> > \n> > I'm sure you see no possible reason for someone to want to drop\n> > plperl ;-) ... but it should be there.\n> \n> Doooh! of course. I'll post a patch tonight.\n\n.PS\nYes, make it a NOOP. We spent so much time building these\nlanguages, that no one should ever have the right to drop 'em again.\n.PE\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n",
"msg_date": "Tue, 8 Feb 2000 20:35:30 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] docs and createlang patch for plperl"
}
] |
[
{
"msg_contents": "What happens if I run two backends on different ports at the same time on\nthe same database? Is there any kind of locking done?\n\nNo, one backend won't do, since one piece of software wants US-style dates\nand the other wants German-style dates. There should be no concurrent access,\nbut who knows when I will type the wrong port.\n\nmichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Fri, 4 Feb 2000 09:55:25 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Two backends at the same time"
},
{
"msg_contents": "Michael Meskes wrote:\n\n> No, one backend won't do since one software want US style dates and the\n> other wants German style dates.\n\nSET DateStyle can select date styles per session. There is no need to do\nthat at backend level.\n\nSevo\n",
"msg_date": "Fri, 04 Feb 2000 13:01:23 +0100",
"msg_from": "Sevo Stille <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Two backends at the same time"
},
{
"msg_contents": "On Fri, Feb 04, 2000 at 01:01:23PM +0100, Sevo Stille wrote:\n> SET DateStyle can select date styles per session. There is no need to do\n> that at backend level.\n\nOops. I should have thought before asking. Sorry.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Fri, 4 Feb 2000 15:29:50 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Two backends at the same time"
},
{
"msg_contents": "Michael Meskes <[email protected]> writes:\n> What happens if I run two backends on different ports at the same time on\n> the same database? Is there any kind of locking done?\n\nYou mean two postmasters? Death and destruction is what will happen.\nI believe there's an interlock to prevent this mistake in current\nsources, but I don't recall if it was in 6.5.*.\n\n> No, one backend won't do since one software want US style dates and the\n> other wants German style dates. There should be no concurrent access but who\n> knows when I will type the wrong port.\n\nAs Sevo points out, setting DATESTYLE per-session is the right way to do\nthis. You might find that setting environment variable PGDATESTYLE on\nthe client side is a comfortable way to work --- if libpq sees that\nvariable set, it will issue a SET DATESTYLE command for you during\nconnection startup. Works with any libpq-based client, so it's\npretty transparent...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Feb 2000 10:17:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Two backends at the same time "
},
{
"msg_contents": "On Fri, Feb 04, 2000 at 10:17:53AM -0500, Tom Lane wrote:\n> You mean two postmasters? Death and destruction is what will happen.\n> I believe there's an interlock to prevent this mistake in current\n> sources, but I don't recall if it was in 6.5.*.\n\nI don't think there is one in 6.5.* since I was able to start two.\n\n> As Sevo points out, setting DATESTYLE per-session is the right way to do\n> this. You might find that setting environment variable PGDATESTYLE on\n> the client side is a comfortable way to work --- if libpq sees that\n\nYes, that would be a much better way. If only my Perl weren't so rusty,\nsince I have to set that variable in a CGI script.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Fri, 4 Feb 2000 20:45:48 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Two backends at the same time"
},
{
"msg_contents": "> If now my perl wasn't so rusty since I\n> have to set that variable in a CGI script.\n\n$ENV{\"PGDATESTYLE\"} = \"German\";\n\nmight do it, but I'm not recalling with certainty that Perl does the\nRight Thing to make the variable visible to children (pretty sure\nit does).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 05 Feb 2000 01:51:50 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Two backends at the same time"
},
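For what it's worth, the env-inheritance behavior Thomas is half-remembering is easy to check, and the same holds in most scripting languages: mutating the process environment before spawning makes the variable visible to children, which is all libpq needs. A sketch of the equivalent experiment in Python (PGDATESTYLE is only echoed by the child here, not interpreted by anything):

```python
import os
import subprocess
import sys

os.environ["PGDATESTYLE"] = "German"   # analogous to $ENV{"PGDATESTYLE"}

# Spawn a child and have it report what it inherited; a libpq-based
# client started the same way would see the variable and issue
# SET DATESTYLE during connection startup.
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['PGDATESTYLE'])"],
    capture_output=True, text=True)
print(child.stdout.strip())
# prints German
```

The child inherits the parent's environment by default, so no explicit env= argument is needed.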
{
"msg_contents": "On Sat, Feb 05, 2000 at 01:51:50AM +0000, Thomas Lockhart wrote:\n> $ENV{\"PGDATESTYLE\"} = \"German\";\n> \n> might do it, but I'm not recalling with certainty that Perl does the\n> Right Thing in to make the variable visible to children (pretty sure\n> it does).\n\nYes, that's exactly what I tried and it works.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Sun, 6 Feb 2000 11:23:26 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Two backends at the same time"
}
] |
[
{
"msg_contents": "Timothy Dyck wrote:\n> \n> Hi everybody, I'm done my tests of PostgreSQL and Interbase.\n> \n> I concentrated on two tests, an OLTP Single Read Test, where we read a\n> single row out of a 200K row indexed table, and the OLTP Read Mix Test,\n> which is a mix of about 30 queries, about half single table selects and\n> the other half joins of various complexity (up to four way). For both of\n> these tests, InterBase was about 2x to 2.5x as fast as PostgreSQL. In\n> multiuser tests (up to 100 users), the situation was reversed, with\n> PostgreSQL close to 3 times faster at peak throughput (which was at 50\n> concurrent users). The reason why is that InterBase on Linux has a\n> process-per-connection architecture without a shared cache. As such, I had\n> to really limit cache sizes to allow 100 users to connect, and that really\n> hurt InterBase's performance.\n> \n> I ran both PostgreSQL and InterBase with syncs turned off, and used a\n> cache of 65536 4KB pages and 4000K of sort buffer.\n> \n> Here's a list of things about PostgreSQL I had problems with:\n> \n> 1. \"Null\" is not accepted keyword on \"create table\" (\"not null\" is ok)\n\nThere was some discussion of this in the lists in the past:\n\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/1998-12/msg00546.html\n\n: : : Now that we have the syntax problem straightened out:\nI'm still : confused\n: : : about the semantics. Does a \"NULL\" constraint say\nthat the field\n: : : *must* be null, or only that it *can* be null (in\nwhich case NULL is\n: : : just a noise word, since that's the default\ncondition)? I had assumed\n: : : the former, but Bruce seemed to think the latter...\n: : \n: : Can be null. Noise word. At least that is what I\nrememeber Thomas\n: : saying, and because it was noise, we removed it. In\nfact, it doesn't\n: : look like the standard accepts it, but there is no\nreason we can't.\n\n: This NULL clause is not part of constraints it is a\ndefault option and\n: we already support it,\n: there's nothing like: \n: CREATE TABLE table1 (field1 type NULL) in SQL92.\n\n: but the following is SQL92 and it works on PostgreSQL:\n \n: prova=> CREATE TABLE table1 (field1 INTEGER DEFAULT NULL);\n: CREATE\n\n> 2. copy command 'with null as' option not functional\n> 3. try to create an index on a numeric and \"no operator class for\n> 'numeric' data type\" error message results. Numerics not indexable?\n\nThat's fixed in current sources...its too bad you aren't\nreviewing this a couple of months from now -- but I bet you\nhear a lot of that...\n\n> 4. no outer join -- I had to drop one query because of this\n\nThat's always been annoying, although it can be simulated\neasily with:\n\nSELECT t1.x, t2.y \nFROM t1, t2\nWHERE t1.x = t2.x\nUNION\nSELECT t1.x, NULL\nFROM t1 WHERE NOT EXISTS ( SELECT t2.x FROM t2 WHERE t1.x =\nt2.x );\n\n> 5. no alter table add constraint\n> 6. select count(distinct *) from a view gives a parser error on distinct\n> -- distinct keyword not supported here?\n> 7. one query (dss_select_05) has an avg on a numeric field. I got an\n> overflow error (is there a cast to a longer type?). When the avg on\n> numeric field is removed, the query consumes memory rapidly and doesn't\n> terminate. I dropped this query.\n> 8. Can't start postmaster with more than 65536 buffers as I get a \"FATAL\n> 1: couldn't\n> initialize shared buffer pool Hash Tbl\". Variable overflow?\n\nIf you are referring to the -B option of the postmaster,\neach \"buffer\" is 8K in size. So, for example -B 256 would be\n2 megs of buffers. How much RAM was on the test machine? -B\n65536 is a 1/2 gig...\n\n> \n> Thanks for the tuning suggestions I received from various people.\n> \n> Also, is PostgreSQL 7 expected to be SQL-92 compliant? It's pretty close\n> now.\n> \n> I'll be posting complete scripts and C code when the story goes to print\n> on Feb. 14.\n> \n> Regards,\n> Tim Dyck\n> Senior Analyst\n> PC Week Labs\n> \n> ************\n",
"msg_date": "Fri, 04 Feb 2000 04:38:03 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PC Week Labs benchmark results"
},
{
"msg_contents": "\n\n\nHi everybody, I'm done my tests of PostgreSQL and Interbase.\n\nI concentrated on two tests, an OLTP Single Read Test, where we read a\nsingle row out of a 200K row indexed table, and the OLTP Read Mix Test,\nwhich is a mix of about 30 queries, about half single table selects and\nthe other half joins of various complexity (up to four way). For both of\nthese tests, InterBase was about 2x to 2.5x as fast as PostgreSQL. In\nmultiuser tests (up to 100 users), the situation was reversed, with\nPostgreSQL close to 3 times faster at peak throughput (which was at 50\nconcurrent users). The reason why is that InterBase on Linux has a\nprocess-per-connection architecture without a shared cache. As such, I had\nto really limit cache sizes to allow 100 users to connect, and that really\nhurt InterBase's performance.\n\nI ran both PostgreSQL and InterBase with syncs turned off, and used a\ncache of 65536 4KB pages and 4000K of sort buffer.\n\nHere's a list of things about PostgreSQL I had problems with:\n\n1. \"Null\" is not accepted keyword on \"create table\" (\"not null\" is ok)\n2. copy command 'with null as' option not functional\n3. try to create an index on a numeric and \"no operator class for\n'numeric' data type\" error message results. Numerics not indexable?\n4. no outer join -- I had to drop one query because of this\n5. no alter table add constraint\n6. select count(distinct *) from a view gives a parser error on distinct\n-- distinct keyword not supported here?\n7. one query (dss_select_05) has an avg on a numeric field. I got an\noverflow error (is there a cast to a longer type?). When the avg on\nnumeric field is removed, the query consumes memory rapidly and doesn't\nterminate. I dropped this query.\n8. Can't start postmaster with more than 65536 buffers as I get a \"FATAL\n1: couldn't\ninitialize shared buffer pool Hash Tbl\". Variable overflow?\n\nThanks for the tuning suggestions I received from various people.\n\nAlso, is PostgreSQL 7 expected to be SQL-92 compliant? It's pretty close\nnow.\n\nI'll be posting complete scripts and C code when the story goes to print\non Feb. 14.\n\nRegards,\nTim Dyck\nSenior Analyst\nPC Week Labs\n\n\n",
"msg_date": "Fri, 04 Feb 2000 05:46:24 -0500",
"msg_from": "Timothy Dyck <[email protected]>",
"msg_from_op": false,
"msg_subject": "PC Week Labs benchmark results"
},
{
"msg_contents": "Mike Mascari wrote:\n> \n> Timothy Dyck wrote:\n> >\n> > Hi everybody, I'm done my tests of PostgreSQL and Interbase.\n> >\n> > I concentrated on two tests, an OLTP Single Read Test, where we read a\n> > single row out of a 200K row indexed table, and the OLTP Read Mix Test,\n> > which is a mix of about 30 queries, about half single table selects and\n> > the other half joins of various complexity (up to four way). For both of\n> > these tests, InterBase was about 2x to 2.5x as fast as PostgreSQL. In\n> > multiuser tests (up to 100 users), the situation was reversed, with\n> > PostgreSQL close to 3 times faster at peak throughput (which was at 50\n> > concurrent users). The reason why is that InterBase on Linux has a\n> > process-per-connection architecture without a shared cache. As such, I had\n> > to really limit cache sizes to allow 100 users to connect, and that really\n> > hurt InterBase's performance.\n> >\n> > I ran both PostgreSQL and InterBase with syncs turned off, and used a\n> > cache of 65536 4KB pages and 4000K of sort buffer.\n\n> If you are referring to the -B option of the postmaster,\n> each \"buffer\" is 8K in size. So, for example -B 256 would be\n> 2 megs of buffers. How much RAM was on the test machine? -B\n> 65536 is a 1/2 gig...\n\nI should have read your post more carefully. You say you\nused 65536 4KB pages, so I assume you built PostgreSQL with\na BLCKSZ of 4 instead of 8, running with 256M of in-memory\nbuffers...\n\nMike Mascari\n",
"msg_date": "Fri, 04 Feb 2000 05:52:58 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PC Week Labs benchmark results"
},
{
"msg_contents": "> Hi everybody, I'm done my tests of PostgreSQL and Interbase.\n> I concentrated on two tests, an OLTP Single Read Test, where we read a\n> single row out of a 200K row indexed table, and the OLTP Read Mix Test,\n> which is a mix of about 30 queries, about half single table selects and\n> the other half joins of various complexity (up to four way). For both of\n> these tests, InterBase was about 2x to 2.5x as fast as PostgreSQL. In\n> multiuser tests (up to 100 users), the situation was reversed, with\n> PostgreSQL close to 3 times faster at peak throughput (which was at 50\n> concurrent users). The reason why is that InterBase on Linux has a\n> process-per-connection architecture without a shared cache. As such, I had\n> to really limit cache sizes to allow 100 users to connect, and that really\n> hurt InterBase's performance.\n\nSo, we scale better. Nice.\n\n> I ran both PostgreSQL and InterBase with syncs turned off, and used a\n> cache of 65536 4KB pages and 4000K of sort buffer.\n> Here's a list of things about PostgreSQL I had problems with:\n> \n> 1. \"Null\" is not accepted keyword on \"create table\" (\"not null\" is ok)\n\n\"NULL\" is *not* SQL92 standard (well, at least it wasn't in the draft\nstandard available on DEC's web site) presumably since including it\ngenerally leads to parsing problems with a one-token-lookahead parser\nsuch as yacc.\n\nAlso, since it is the default behavior for column creation, it seems\nto be pretty much a useless noise word in this context.\n\nBut it will be in the next release for the typical case; I implemented\nit a month or so ago but have some other developments I've been\nworking on and haven't yet committed this one to the source tree. Will\nlikely be there this weekend.\n\n> 2. copy command 'with null as' option not functional\n\nThis was added 1999/12/14 to the development tree. Will be in the next\nrelease.\n\n> 3. try to create an index on a numeric and \"no operator class for\n> 'numeric' data type\" error message results. Numerics not indexable?\n\nNot yet. Should be there for the next release (1-2 months).\n\n> 4. no outer join -- I had to drop one query because of this\n> 5. no alter table add constraint\n> 6. select count(distinct *) from a view gives a parser error on distinct\n> -- distinct keyword not supported here?\n\nThese are all high on the ToDo list, but I'm not sure they will be in\nthe next release.\n\n> 7. one query (dss_select_05) has an avg on a numeric field. I got an\n> overflow error (is there a cast to a longer type?). When the avg on\n> numeric field is removed, the query consumes memory rapidly and doesn't\n> terminate. I dropped this query.\n> 8. Can't start postmaster with more than 65536 buffers as I get a \"FATAL\n> 1: couldn't\n> initialize shared buffer pool Hash Tbl\". Variable overflow?\n\nJust guessing, but this is more likely a system resource problem. That\nis a lot of buffers!\n\n> Also, is PostgreSQL 7 expected to be SQL-92 compliant? It's pretty close\n> now.\n\nWhat feature do you feel is lacking for Postgres to be SQL92\ncompliant? As you know, SQL92 defines three levels of compliance, and\nalthough virtually all databases claim compliance it is almost always\nto the lowest, most basic level.\n\nThings like outer joins are not required for the basic compliance,\nwhich is how, for example, Oracle gets to claim compliance without\nsupporting SQL92 outer join syntax.\n\n> I'll be posting complete scripts and C code when the story goes to print\n> on Feb. 14.\n\nGreat.\n\nRegards.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 04 Feb 2000 14:53:35 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PC Week Labs benchmark results"
},
{
"msg_contents": "Timothy Dyck <[email protected]> writes:\n> Here's a list of things about PostgreSQL I had problems with:\n\n> 1. \"Null\" is not accepted keyword on \"create table\" (\"not null\" is ok)\n\nAFAICT from the SQL92 spec, NULL is not a legal column constraint.\nI know some DBMSs accept it anyway, but we don't because it creates\ngrammatical ambiguities.\n\n> 2. copy command 'with null as' option not functional\n\nIt looks like this has been added for 7.0 ... I haven't tried it\nbut I see the syntax is there.\n\n> 3. try to create an index on a numeric and \"no operator class for\n> 'numeric' data type\" error message results. Numerics not indexable?\n\nOversight in 6.5.* ... fixed for 7.0.\n\n> 4. no outer join -- I had to drop one query because of this\n\nThomas is working on outer joins, but I'm not sure if it'll be ready\nfor 7.0. 7.1 for sure though; this is our most-requested missing SQL92\nfeature.\n\n> 5. no alter table add constraint\n\nNot there yet (but Peter E. was working on it when last seen...)\n\n> 6. select count(distinct *) from a view gives a parser error on distinct\n> -- distinct keyword not supported here?\n\nNo, but it is for 7.0.\n\n> 7. one query (dss_select_05) has an avg on a numeric field. I got an\n> overflow error (is there a cast to a longer type?). When the avg on\n> numeric field is removed, the query consumes memory rapidly and doesn't\n> terminate. I dropped this query.\n\nBug. I posted a patch for this and a couple of other NUMERIC problems\na few weeks ago; it'll be in 7.0 of course, and you can get the patch\noff the pgsql-patches list archives if you need it to work in 6.5.*.\n\n> 8. Can't start postmaster with more than 65536 buffers as I get a \"FATAL\n> 1: couldn't initialize shared buffer pool Hash Tbl\". Variable overflow?\n\nProbably. Hadn't occurred to me that we need to check for a sane upper\nbound on the number of buffers, but I guess we do. 
(You do realize that\nwould be half a gig of in-memory buffers, right? If you've actually got\nthat much RAM, it's probably better to let the OS use it for general-\npurpose disk buffers instead of dedicating it all to Postgres.)\n\n> Also, is PostgreSQL 7 expected to be SQL-92 compliant? It's pretty close\n> now.\n\nWe're getting closer all the time, but I wouldn't want to promise that\nwe'll ever have everything that's in SQL92.\n\nThanks for the report! I don't suppose you'd be interested in rerunning\nyour tests on current (pre-beta-7.0) sources?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Feb 2000 10:35:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PC Week Labs benchmark results "
},
{
"msg_contents": "At 04:38 AM 2/4/00 -0500, Mike Mascari wrote:\n\n>That's always been annoying, although it can be simulated\n>easily with:\n>\n>SELECT t1.x, t2.y \n>FROM t1, t2\n>WHERE t1.x = t2.x\n>UNION\n>SELECT t1.x, NULL\n>FROM t1 WHERE NOT EXISTS ( SELECT t2.x FROM t2 WHERE t1.x =\n>t2.x );\n\nSOME - but not all - outer joins can be simulated with this\ntrick. Others require subselects in the target list, etc.\n\nAnd the union form gets really messy when a query includes more\nthan one outer join.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 04 Feb 2000 08:26:29 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PC Week Labs benchmark results"
},
{
"msg_contents": "I was disappointed this benchmark did not include database recovery\nand reliability measurements. Benchmarks ought to include the most\nimportant characteristics of an RDBMS, and recovery/reliability is\ncertainly one of them. People tend to try to \"measure up\" against\naccepted benchmarks; as one currently suffering from apparent\nreliability issues, the thought of decreased focus on reliability irks\nme. \n\nCheers,\nEd Loehr\n\n\nTimothy Dyck wrote:\n> \n> Hi everybody, I'm done my tests of PostgreSQL and Interbase.\n",
"msg_date": "Fri, 04 Feb 2000 10:39:05 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PC Week Labs benchmark results"
},
{
"msg_contents": "I wrote:\n> Timothy Dyck <[email protected]> writes:\n>> 8. Can't start postmaster with more than 65536 buffers as I get a \"FATAL\n>> 1: couldn't initialize shared buffer pool Hash Tbl\". Variable overflow?\n\n> Probably. Hadn't occurred to me that we need to check for a sane upper\n> bound on the number of buffers, but I guess we do. (You do realize that\n> would be half a gig of in-memory buffers, right? If you've actually got\n> that much RAM, it's probably better to let the OS use it for general-\n> purpose disk buffers instead of dedicating it all to Postgres.)\n\nJust FYI, this is now fixed for 7.0. Turns out there was a bogus\nhard-wired assumption about the maximum size of the hashtable for\nshared buffers.\n\nI still doubt that anyone really *needs* more than 64K buffers ;-)\n... but it will work if you have the RAM.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 Feb 2000 00:35:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PC Week Labs benchmark results "
}
] |
[
{
"msg_contents": "It is a test...\n\n\n",
"msg_date": "Fri, 4 Feb 2000 16:40:06 -0200",
"msg_from": "\"Douglas Ribas de Mattos\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Test"
},
{
"msg_contents": "In article <[email protected]>, \"Douglas Ribas de \nMattos\" <[email protected]> wrote:\n\n| It is a test...\n\nAnd that is what alt.test is for !!!!!!!!!\n\n-- \nPray to God, But Hammer Away\n - Spanish Proverb\n\nClyde Jones\njjj.trbpvgvrf.pbz/pylqr-wbarf\[email protected]\n",
"msg_date": "Sun, 06 Feb 2000 01:04:29 GMT",
"msg_from": "clyde jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test"
}
] |
[
{
"msg_contents": "I have a mechanism which stores objects inside the database. The primary\nkey is supplied by the object, so when a new object is created I need to\nidentify duplicate primary keys.\n\nOriginally I ran a SELECT against the table to identify if a record with\nthe same key exists, followed by an UPDATE. This approach can fail in a\nrace condition with some other thread getting the INSERT done first.\n\nFor Sybase I have restructured it to perform the INSERT first, and based\non the returned X/Open error code identify if a duplicate key exists.\nPostgresql does not return an X/Open error code, so in the event of a\nduplicate key I need to perform the INSERT to determine whether a\nduplicate key was there, or it was another INSERT error.\n\nWhen I perform an INSERT (which fails) followed by a SELECT on the same\ntable row, the SELECT operation ends with an error, reporting 'No\nresults were returned by the query.'\n\nIs this a known bug and is a fix planned for it in 7.0?\n\narkin\n\n\n-- \n----------------------------------------------------------------------\nAssaf Arkin www.exoffice.com\nCTO, Exoffice Technologies, Inc. www.exolab.org\n",
"msg_date": "Fri, 04 Feb 2000 13:48:38 -0800",
"msg_from": "Assaf Arkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Identifying duplicate key inserts"
}
] |
[
{
"msg_contents": "Attached is a patch for the ONLY inheritance\nfunctionality...\n\n*) It includes a SET compatibility mode.\n*) The overhead for non-inheritance has\nbeen cut down to 30 microseconds (on a pc).\n*) It needs an initdb.\n\nComments welcome.\n\n-- \nChris Bitmead\nmailto:[email protected]",
"msg_date": "Sat, 05 Feb 2000 14:36:43 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Patch attached..."
},
{
"msg_contents": "At 02:36 PM 2/5/00 +1100, Chris wrote:\n\n>*) The overhead for non-inheritance has\n>been cut down to 30 microseconds (on a pc).\n\nWhat kind of PC? I'm getting 4,000 microseconds doing\nsimple selects on a classic P200 (no L2 cache) through\nAOLserver and Tcl scripts, which probably means more like\n2,000 microseconds for PG alone. But without knowing your\nPC, I have no way to scale. For instance, my P500e that\nI just built gets between 3-6x performance over my P200.\n\nWhat's an acceptable level for overhead? I have no personal\ndesire to eat any overhead, in all honesty. 2000/30 < 1%\nbut without knowledge of the actual PC platform (which certainly\nyou must know vary widely in performance) I have no way to\nscale. If your PC platform is closer to my P500e than my\n(classic) P200 (not pro, no L2 cache) then the overhead is\nmore like 2-3%. That's measurable.\n\nAnd if SQL92 compliance is the goal, why must ANY degradation\nof performance be acceptable unless there are very, very strong\nreasons to do so (reasons that impact the target audience).\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 04 Feb 2000 20:38:23 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch attached..."
},
{
"msg_contents": "> >*) The overhead for non-inheritance has\n> >been cut down to 30 microseconds (on a pc).\n\nWe actually have to see how you have implemented it. I am not so\ninterested in timings as in your method. It can be done fast, or it can\nbe done sloppy. I will check the patch.\n\n> And if SQL92 compliance is the goal, why must ANY degradation\n> of performance be acceptable unless there are very, very strong\n> reasons to do so (reasons that impact the target audience).\n\nAgreed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 5 Feb 2000 00:00:32 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch attached..."
},
{
"msg_contents": "> \n> Attached is a patch for the ONLY inheritance\n> functionality...\n> \n> *) It includes a SET compatibility mode.\n> *) The overhead for non-inheritance has\n> been cut down to 30 microseconds (on a pc).\n> *) It needs an initdb.\n> \n> Comments welcome.\n> \n\nOne problem is that you use SearchSysCacheTupleCopy while\nSearchSysCacheTuple is more appropriate. You need Copy only when you\nare going to be using the cache tuple for an extended period. Looks\nlike the rest of that particular function is OK.\n\nHowever, I have received an objection from someone on another issue\nrelated to the patch. There are no documentation changes for that\npatch. That includes the SET manual page, and any other place\nInheritance is mentioned. You also need to update\ninclude/catalog/catversion.h because initdb is required. \n\nI will wait for a new patch that has these changes, and any others\nmentioned by people. Thanks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 5 Feb 2000 00:05:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch attached..."
},
{
"msg_contents": "FYI the SearchSysCacheTupleCopy call that I was objecting to was in\nfunction has_inheritors. Looks like you clearly are on the right track\nwith the patch. There is some tricky code in there, and it looks pretty\ngood.\n\n\n> \n> Attached is a patch for the ONLY inheritance\n> functionality...\n> \n> *) It includes a SET compatibility mode.\n> *) The overhead for non-inheritance has\n> been cut down to 30 microseconds (on a pc).\n> *) It needs an initdb.\n> \n> Comments welcome.\n> \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 5 Feb 2000 00:07:41 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch attached..."
},
{
"msg_contents": "Don Baccus wrote:\n> \n> At 02:36 PM 2/5/00 +1100, Chris wrote:\n> \n> >*) The overhead for non-inheritance has\n> >been cut down to 30 microseconds (on a pc).\n> \n> What kind of PC? \n\nCeleron 400, 64MB, IDE disk. \n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Sat, 05 Feb 2000 16:59:46 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Patch attached..."
},
{
"msg_contents": "Thanks Bruce! That suggestion with SearchSysCacheTuple makes a big\ndifference! I am no longer able to measure ANY performance difference\nbetween inherited and non-inherited while doing one million queries.\n\nAttached is a patch that incorporates your suggestions.",
"msg_date": "Sat, 05 Feb 2000 19:32:56 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "New Improved Patch"
},
{
"msg_contents": "Chris wrote:\n> \n> Don Baccus wrote:\n> >\n> > At 02:36 PM 2/5/00 +1100, Chris wrote:\n> >\n> > >*) The overhead for non-inheritance has\n> > >been cut down to 30 microseconds (on a pc).\n> >\n> > What kind of PC?\n> \n> Cerelon 400, 64MB, IDE disk.\n\nBtw, how did you measure that 30us overhead ?\n\nDoes it involve disk accesses or is it just in-memory code that \nspeed-concious folks could move to assembly like current \nspinlocking code for some architectures?\n\n-------------\nHannu\n",
"msg_date": "Sat, 05 Feb 2000 13:47:50 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch attached..."
},
{
"msg_contents": "Hannu Krosing wrote:\n\n> Btw, how did you measure that 30us overhead ?\n\nI measured it with the test program below. With the latest patch it is\nno longer 30us, but as far as I can measure 0us.\n \n> Does it involve disk accesses or is it just \n> in-memory code that\n> speed-concious folks could move to assembly like current\n> spinlocking code for some architectures?\n\nFor this patch it is an in-memory issue.\n\n-- \nChris Bitmead\nmailto:[email protected]\n\n#include <stdio.h>\n#include <time.h>\n#include \"libpq-fe.h\"\n\n#define rep 1000000\n\n\nmain() {\nint c;\nPGconn *conn;\nPGresult *res;\ntime_t t, t2;\n\nconn = PQsetdb(NULL,NULL,NULL,NULL,\"foo\");\ntime(&t);\nfor (c = 0; c < rep; c++) {\n res = PQexec(conn, \"select * from a*\");\n PQclear(res);\n}\ntime(&t2);\nprintf(\"inh %d\\n\", t2 - t);\ntime(&t);\nfor (c = 0; c < rep; c++) {\n res = PQexec(conn, \"select * from only a\");\n PQclear(res);\n}\ntime(&t2);\nprintf(\"no inh %d\\n\", t2 - t);\n\nPQfinish(conn);\n\n}\n",
"msg_date": "Sat, 05 Feb 2000 22:55:12 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Patch attached..."
},
{
"msg_contents": "Hi\n\nI looked at the patch and for me the name of the variable \nto set to get the old behaviour (SET EXAMINE_SUBCLASS TO 'on';)\nseems confusing.\n\nAt first I thought it was a typo to set it to 'ON' for old behaviour,\nmy internal logic would set it to 'OFF' to not select subclass by default.\n\nI think something like DONT_SELECT_INHERITED or OLD_INHERITED_SELECT_SYNTAX\nwould be much clearer in meaning.\n\nActually the name is not very important, as most people won't use it anyway ;)\n\n----------------\nHannu\n",
"msg_date": "Sat, 05 Feb 2000 14:36:12 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch attached..."
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> Hi\n> \n> I looked at the patch and for me the name of the variable\n> to set to get the old behaviour (SET EXAMINE_SUBCLASS TO 'on';)\n> seems confusing.\n> \n> At first I thought it was a typo to set it to 'ON' for old behaviour,\n> my internal logic would set it to 'OFF' to not select subclass by default.\n\nUmm, but that IS how it works...\n\n$ psql\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\npghack=# select * from a;\n aa \n-----\n aaa\n bbb\n(2 rows)\n\npghack=# set examine_subclass to 'off';\nSET VARIABLE\npghack=# select * from a;\n aa \n-----\n aaa\n(1 row)\n\n> I think something like DONT_SELECT_INHERITED or OLD_INHERITED_SELECT_SYNTAX\n> would be much clearer in meaning.\n\nI'm happy to hear alternative names, but I don't really want \"SELECT\" in\nthe name, because this might apply to UPDATE eventually too.\n\n\n> \n> Actually the name is not very important, as most people won't use it anyway ;)\n> \n> ----------------\n> Hannu\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Sun, 06 Feb 2000 00:12:53 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Patch attached..."
},
{
"msg_contents": "> pghack=# set examine_subclass to 'off';\n> > I think something like DONT_SELECT_INHERITED or \n> > OLD_INHERITED_SELECT_SYNTAX\n> > would be much clearer in meaning.\n> I'm happy to hear alternative names, but I don't really want \"SELECT\" in\n> the name, because this might apply to UPDATE eventually too.\n\nHmm. This uncovers our clunky SET syntax. imho this should be a clean\nand natural option, not something with a bunch of underscores in the\nkeyword and a quoted string for the option.\n\nBut this is awfully close to beta to be even considering making this\nchange in default behavior and syntax for 7.0. We've got a lot of\nother features to babysit through beta, and this one wasn't even on\nthe radar until a few days ago...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 05 Feb 2000 14:46:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch attached..."
},
{
"msg_contents": "Chris,\n This is to let you know that the core list has discussed this patch,\nand we feel that it is not appropriate to apply it at this late stage\nin the 7.0 development cycle. There are several reasons for this:\n\n* It appears that making such a definitional change is still\ncontroversial. (One thing that still needs to be looked at is whether\nSQL 3 defines any comparable features, and if so whether we ought\nto be following their syntax and behavior.)\n\n* The implications of changing this behavior still need to be followed\nthrough in the rest of the system. For example, it doesn't make much\nsense to me to change SELECT to have recursive behavior by default when\nUPDATE and DELETE can't yet do it at all. A user would naturally\nexpect \"UPDATE table\" to scan the same tuples that \"SELECT FROM table\"\ndoes.\n\n* It's awfully late in the 7.0 development cycle to be making such a\nsignificant change. We have only ten days left to scheduled beta,\nwhich is not enough time to find and work out any unexpected problems\nthat may be lurking.\n\nWe encourage you to continue to work on this line of development,\nbut with an eye to merging your code into CVS early in the 7.1 cycle,\nrather than trying to squeeze it into 7.0 at the last minute.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 05 Feb 2000 12:10:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Status of inheritance-changing patch"
},
{
"msg_contents": "Chris wrote:\n> \n> Hannu Krosing wrote:\n> >\n> > Hi\n> >\n> > I looked at the patch and for me the name of the variable\n> > to set to get the old behaviour (SET EXAMINE_SUBCLASS TO 'on';)\n> > seems confusing.\n> >\n> > At first I thought it was a typo to set it to 'ON' for old behaviour,\n> > my internal logic would set it to 'OFF' to not select subclass by default.\n> \n> Umm, but that IS how it works...\n\nI don't contest that ;)\n\n> > I think something like DONT_SELECT_INHERITED or OLD_INHERITED_SELECT_SYNTAX\n> > would be much clearer in meaning.\n> \n> I'm happy to hear alternative names, but I don't really want \"SELECT\" in\n> the name, because this might apply to UPDATE eventually too.\n\nOops, I didn't think of that.\n\nOf course it should actually apply to all four (SELECT, UPDATE, DELETE,\nINSERT)\nas well as DDL statements (ALTER TABLE ADD/DROP xxx, CREATE\nCONSTRAINT/INDEX/RULE/TRIGGER)\n\n------------------\nHannu\n",
"msg_date": "Sat, 05 Feb 2000 21:30:15 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch attached..."
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Chris,\n> This is to let you know that the core list has discussed this patch,\n> and we feel that it is not appropriate to apply it at this late stage\n> in the 7.0 development cycle.\n\nHere you see Chris what happens when you try to force the default \nbehaviour be the \"wrong\" way :-p\n\nBut seriously, we could still warn people about current (mis)use of \ninheritance and that it may be soon be changed/deprecated or \"made \ncompatible with Informix\" whichever seems most PC.\n\n> There are several reasons for this:\n> \n> * It appears that making such a definitional change is still\n> controversial. (One thing that still needs to be looked at is whether\n> SQL 3 defines any comparable features, \n\nIt does define \"comparable\" features, but moves away from out nice clean \nSQL92 worldview quite radically.\n\n> and if so whether we ought\n> to be following their syntax and behavior.)\n\nI agree that some discussion about OQL vs. SQL3 would be in place.\n\n> * The implications of changing this behavior still need to be followed\n> through in the rest of the system. For example, it doesn't make much\n> sense to me to change SELECT to have recursive behavior by default when\n> UPDATE and DELETE can't yet do it at all. A user would naturally\n> expect \"UPDATE table\" to scan the same tuples that \"SELECT FROM table\"\n> does.\n\nThat's true. I would like to see INSERT,UPDATE,DELETE and SELECT be \nupdated together.\n\nFixing ALTER TABLE behaviour is not so important as we are just getting \nmost of it done for plain SQL92 by 7.0. \n\n> * It's awfully late in the 7.0 development cycle to be making such a\n> significant change. We have only ten days left to scheduled beta,\n> which is not enough time to find and work out any unexpected problems\n> that may be lurking.\n\nAlso - fixing object DB behaviours would give us reason to move to 8.x \nfaster ;)\n\n> We encourage you to continue to work on this line of development,\n> but with an eye to merging your code into CVS early in the 7.1 cycle,\n> rather than trying to squeeze it into 7.0 at the last minute.\n\nBut could we then disable the current half-hearted OO for the time being \nto avoid more compatibility problems from people who might err to use it.\n\nIf there is serious attempt to put the O back in ORDBMS we should not let \ncompatibility with non-SQL postgres extensions to be a decisive fact.\n\nBut then again that kind of change is best done at a major number change.\n\n--------------------\nHannu\n",
"msg_date": "Sat, 05 Feb 2000 21:55:05 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status of inheritance-changing patch"
},
{
"msg_contents": "Tom Lane wrote:\n>(One thing that still needs to be looked at is \n> whether SQL 3 defines any comparable features, and \n> if so whether we ought to be following their syntax \n> and behavior.)\n\nI just downloaded the SQL3 document from dec. I can't seem to make head\nor tail of it. Can anybody understand what it's saying?\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Sun, 06 Feb 2000 11:21:52 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Status of inheritance-changing patch"
},
{
"msg_contents": "At 11:21 AM 2/6/00 +1100, Chris wrote:\n>Tom Lane wrote:\n>>(One thing that still needs to be looked at is \n>> whether SQL 3 defines any comparable features, and \n>> if so whether we ought to be following their syntax \n>> and behavior.)\n\n>I just downloaded the SQL3 document from dec. I can't seem to make head\n>or tail of it. Can anybody understand what it's saying?\n\nNo ... a full summary of the private discussion earlier today between\nJan and I regarding referential integrity would indicate that NOBODY can\nunderstand what it's saying! Be glad it was in private, it was bad\nenough that the two of us had to see each other so confused.\n\nDate cheated, his co-author's a ringer who was part of the standards\ncommittee and knows what they meant, rather than what they wrote :)\n\nThe appendix on SQL3 in Date's book talks very briefly about it.\nThere's a CREATE TABLE foo LIKE bar that causes foo to inherit\nfrom bar. He doesn't go into details, though. Talks briefly about\nsub and super tables and how the consequences aren't fully \nunderstood. Then he punts.\n\nIt would still be a good place to start if you have it.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sat, 05 Feb 2000 16:39:00 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Status of inheritance-changing patch"
},
{
"msg_contents": "On Sun, Feb 06, 2000 at 11:21:52AM +1100, Chris wrote:\n> Tom Lane wrote:\n> >(One thing that still needs to be looked at is \n> > whether SQL 3 defines any comparable features, and \n> > if so whether we ought to be following their syntax \n> > and behavior.)\n> \n> I just downloaded the SQL3 document from dec. I can't seem to make head\n> or tail of it. Can anybody understand what it's saying?\n\nI can occasionally twist my brain in the specific way necessary to read\n(and partially understand) standards. Got a URI for what you downloaded?\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Sun, 6 Feb 2000 19:13:39 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Status of inheritance-changing patch"
}
]
[
{
"msg_contents": "\nThanks Bruce! That suggestion with SearchSysCacheTuple makes a big\ndifference! I am no longer able to measure ANY performance difference\nbetween inherited and \nnon-inherited while doing one million queries.\n\nThe patch is too big to email, so it's up for ftp here...\n\nftp://www.tech.com.au/pub/patch.only\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Sat, 05 Feb 2000 19:45:55 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "New improved patch^N"
},
{
"msg_contents": "> \n> Thanks Bruce! That suggestion with SearchSysCacheTuple makes a big\n> difference! I am no longer able to measure ANY performance difference\n> between inherited and \n> non-inherited while doing one million queries.\n> \n> The patch is too big to email, so it's up for ftp here...\n> \n> ftp://www.tech.com.au/pub/patch.only\n> \n\nI am getting a failure trying to retrieve this. It says server not\nfound. Also, are the documentation changes in there?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 5 Feb 2000 04:16:10 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New improved patch^N"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> I am getting a failure trying to retrieve this. It \n> says server not found. \n\nThat's strange. I'll email you the patch in a separate email.\n\n> Also, are the documentation \n> changes in there?\n\nYes.\n\n> \n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \nChris Bitmead\nmailto:[email protected]\nhttp://www.techphoto.org - Photography News, Stuff that Matters\n",
"msg_date": "Sat, 05 Feb 2000 20:21:47 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New improved patch^N"
},
{
"msg_contents": "> \n> Thanks Bruce! That suggestion with SearchSysCacheTuple makes a big\n> difference! I am no longer able to measure ANY performance difference\n> between inherited and \n> non-inherited while doing one million queries.\n> \n> The patch is too big to email, so it's up for ftp here...\n> \n> ftp://www.tech.com.au/pub/patch.only\n> \n\nNever mind. Got it. I am attaching it here for people to review.\nLet's see what people say now. I see documentation changes in there\ntoo. Great.\n\nGee, I didn't know catalog.sgml existed. I wonder if it is up-to-date?\nNo, pg_database doesn't show \"encoding\". Man, this is really old. I\nsee pg_platter, which we have never had. It deals with jukebox platter\ninventory. pg_class shows things like relpreserved, which deals with\ntime travel. I suggest we remove this file and tell people to look in\ninclude/catalog/*.h and use \\dS. Comments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? pgsql/src/config.log\n? pgsql/src/config.cache\n? pgsql/src/config.status\n? pgsql/src/GNUmakefile\n? pgsql/src/Makefile.global\n? pgsql/src/backend/fmgr.h\n? pgsql/src/backend/parse.h\n? pgsql/src/backend/postgres\n? pgsql/src/backend/global1.bki.source\n? pgsql/src/backend/local1_template1.bki.source\n? pgsql/src/backend/global1.description\n? pgsql/src/backend/local1_template1.description\n? pgsql/src/backend/1\n? pgsql/src/backend/catalog/genbki.sh\n? pgsql/src/backend/catalog/global1.bki.source\n? pgsql/src/backend/catalog/global1.description\n? pgsql/src/backend/catalog/local1_template1.bki.source\n? pgsql/src/backend/catalog/local1_template1.description\n? pgsql/src/backend/port/Makefile\n? pgsql/src/backend/utils/Gen_fmgrtab.sh\n? pgsql/src/backend/utils/fmgr.h\n? pgsql/src/backend/utils/fmgrtab.c\n? pgsql/src/bin/initdb/initdb\n? 
pgsql/src/bin/initlocation/initlocation\n? pgsql/src/bin/ipcclean/ipcclean\n? pgsql/src/bin/pg_ctl/pg_ctl\n? pgsql/src/bin/pg_dump/Makefile\n? pgsql/src/bin/pg_dump/pg_dump\n? pgsql/src/bin/pg_id/pg_id\n? pgsql/src/bin/pg_passwd/pg_passwd\n? pgsql/src/bin/pg_version/Makefile\n? pgsql/src/bin/pg_version/pg_version\n? pgsql/src/bin/pgtclsh/mkMakefile.tcldefs.sh\n? pgsql/src/bin/pgtclsh/mkMakefile.tkdefs.sh\n? pgsql/src/bin/psql/Makefile\n? pgsql/src/bin/psql/psql\n? pgsql/src/bin/scripts/createlang\n? pgsql/src/include/version.h\n? pgsql/src/include/config.h\n? pgsql/src/interfaces/ecpg/lib/Makefile\n? pgsql/src/interfaces/ecpg/lib/libecpg.so.3.0.10\n? pgsql/src/interfaces/ecpg/preproc/ecpg\n? pgsql/src/interfaces/libpgeasy/Makefile\n? pgsql/src/interfaces/libpgeasy/libpgeasy.so.2.1\n? pgsql/src/interfaces/libpgtcl/Makefile\n? pgsql/src/interfaces/libpq/Makefile\n? pgsql/src/interfaces/libpq/libpq.so.2.1\n? pgsql/src/interfaces/libpq++/Makefile\n? pgsql/src/interfaces/libpq++/libpq++.so.3.1\n? pgsql/src/interfaces/odbc/GNUmakefile\n? pgsql/src/interfaces/odbc/Makefile.global\n? pgsql/src/pl/plpgsql/src/Makefile\n? pgsql/src/pl/plpgsql/src/mklang.sql\n? pgsql/src/pl/plpgsql/src/libplpgsql.so.1.0\n? pgsql/src/pl/tcl/mkMakefile.tcldefs.sh\n? pgsql/src/test/regress/GNUmakefile\nIndex: pgsql/doc/src/sgml/advanced.sgml\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/doc/src/sgml/advanced.sgml,v\nretrieving revision 1.7\ndiff -c -r1.7 advanced.sgml\n*** pgsql/doc/src/sgml/advanced.sgml\t1999/10/04 15:18:53\t1.7\n--- pgsql/doc/src/sgml/advanced.sgml\t2000/02/05 08:24:35\n***************\n*** 56,93 ****\n </para>\n </note>\n \n! For example, the following query finds\n! all the cities that are situated at an attitude of 500ft or higher:\n! \n! <programlisting>\n! SELECT name, altitude\n! FROM cities\n! 
WHERE altitude > 500;\n \n +----------+----------+\n |name | altitude |\n +----------+----------+\n |Las Vegas | 2174 |\n +----------+----------+\n |Mariposa | 1953 |\n +----------+----------+\n! </programlisting> \n! </para>\n \n! <para>\n! On the other hand, to find the names of all cities,\n! including state capitals, that are located at an altitude \n! over 500ft, the query is:\n! \n! <programlisting>\n! SELECT c.name, c.altitude\n! FROM cities* c\n! WHERE c.altitude > 500;\n! </programlisting>\n \n- which returns:\n- \n- <programlisting>\n +----------+----------+\n |name | altitude |\n +----------+----------+\n--- 56,97 ----\n </para>\n </note>\n \n! <para>\n! For example, the following query finds the names of all cities,\n! including state capitals, that are located at an altitude \n! over 500ft, the query is:\n! \n! <programlisting>\n! SELECT c.name, c.altitude\n! FROM cities c\n! WHERE c.altitude > 500;\n! </programlisting>\n \n+ which returns:\n+ \n+ <programlisting>\n +----------+----------+\n |name | altitude |\n +----------+----------+\n |Las Vegas | 2174 |\n +----------+----------+\n |Mariposa | 1953 |\n+ +----------+----------+\n+ |Madison | 845 |\n +----------+----------+\n! </programlisting>\n! </para>\n \n! <para>\n! On the other hand, the following query finds\n! all the cities, but not capital cities \n! that are situated at an attitude of 500ft or higher:\n! \n! <programlisting>\n! SELECT name, altitude\n! FROM ONLY cities\n! WHERE altitude > 500;\n \n +----------+----------+\n |name | altitude |\n +----------+----------+\n***************\n*** 95,112 ****\n +----------+----------+\n |Mariposa | 1953 |\n +----------+----------+\n! |Madison | 845 |\n! +----------+----------+\n! </programlisting>\n \n! Here the <quote>*</quote> after cities indicates that the query should\n! be run over cities and all classes below cities in the\n! inheritance hierarchy. Many of the commands that we\n! have already discussed (<command>select</command>,\n! 
<command>and>up</command>and> and <command>delete</command>)\n! support this <quote>*</quote> notation, as do others, like\n! <command>alter</command>.\n! </para>\n </sect1>\n \n <sect1>\n--- 99,129 ----\n +----------+----------+\n |Mariposa | 1953 |\n +----------+----------+\n! </programlisting> \n! </para>\n! \n \n! Here the <quote>ONLY</quote> before cities indicates that the query should\n! be run over only cities and not classes below cities in the\n! inheritance hierarchy. Many of the commands that we\n! have already discussed -- <command>SELECT</command>,\n! <command>UPDATE</command> and <command>DELETE</command> --\n! support this <quote>ONLY</quote> notation, as do others, like\n! <command>ALTER TABLE</command>.\n! </para>\n! <para>\n! Deprecated: In previous versions of postgres, the default was not to\n! get access to child classes. By experience this was found to be error\n! prone. Under the old syntax, to get the sub-classes you append \"*\"\n! to the table name. For example\n! <programlisting>\n! SELECT * from cities*;\n! </programlisting> \n! This old behaviour is still available by using a SET command... \n! <programlisting>\n! SET EXAMINE_SUBCLASS TO 'on';\n! </programlisting> \n! 
</para>\n </sect1>\n \n <sect1>\nIndex: pgsql/doc/src/sgml/catalogs.sgml\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/doc/src/sgml/catalogs.sgml,v\nretrieving revision 2.3\ndiff -c -r2.3 catalogs.sgml\n*** pgsql/doc/src/sgml/catalogs.sgml\t2000/01/22 23:50:08\t2.3\n--- pgsql/doc/src/sgml/catalogs.sgml\t2000/02/05 08:24:37\n***************\n*** 192,197 ****\n--- 192,199 ----\n \t\t\t 2=main memory */\n int2vector relkey\t\t/* - unused */\n oidvector relkeyop\t/* - unused */\n+ bool relhassubclass\t/* does the class have a subclass?\n+ \t\t\t\t */\n aclitem relacl[1]\t/* access control lists */\n .fi\n .nf M\nIndex: pgsql/doc/src/sgml/inherit.sgml\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/doc/src/sgml/inherit.sgml,v\nretrieving revision 1.4\ndiff -c -r1.4 inherit.sgml\n*** pgsql/doc/src/sgml/inherit.sgml\t1999/08/08 04:21:33\t1.4\n--- pgsql/doc/src/sgml/inherit.sgml\t2000/02/05 08:24:37\n***************\n*** 37,50 ****\n </para>\n </note>\n \n! For example, the following query finds\n! all the cities that are situated at an attitude of 500ft or higher:\n \n <programlisting>\n! SELECT name, altitude\n! FROM cities\n! WHERE altitude > 500;\n \n +----------+----------+\n |name | altitude |\n +----------+----------+\n--- 37,56 ----\n </para>\n </note>\n \n! <para>\n! For example, the following query finds the names of all cities,\n! including state capitals, that are located at an altitude \n! over 500ft, the query is:\n \n <programlisting>\n! SELECT c.name, c.altitude\n! FROM cities c\n! WHERE c.altitude > 500;\n! </programlisting>\n! \n! which returns:\n \n+ <programlisting>\n +----------+----------+\n |name | altitude |\n +----------+----------+\n***************\n*** 52,92 ****\n +----------+----------+\n |Mariposa | 1953 |\n +----------+----------+\n! </programlisting> \n </para>\n \n <para>\n! 
On the other hand, to find the names of all cities,\n! including state capitals, that are located at an altitude \n! over 500ft, the query is:\n \n <programlisting>\n! SELECT c.name, c.altitude\n! FROM cities* c\n! WHERE c.altitude > 500;\n! </programlisting>\n! \n! which returns:\n \n- <programlisting>\n +----------+----------+\n |name | altitude |\n +----------+----------+\n |Las Vegas | 2174 |\n +----------+----------+\n |Mariposa | 1953 |\n- +----------+----------+\n- |Madison | 845 |\n +----------+----------+\n! </programlisting>\n \n! Here the <quote>*</quote> after cities indicates that the query should\n! be run over cities and all classes below cities in the\n inheritance hierarchy. Many of the commands that we\n have already discussed -- <command>SELECT</command>,\n <command>UPDATE</command> and <command>DELETE</command> --\n! support this <quote>*</quote> notation, as do others, like\n <command>ALTER TABLE</command>.\n </para>\n </chapter>\n \n--- 58,109 ----\n +----------+----------+\n |Mariposa | 1953 |\n +----------+----------+\n! |Madison | 845 |\n! +----------+----------+\n! </programlisting>\n </para>\n \n <para>\n! On the other hand, the following query finds\n! all the cities, but not capital cities \n! that are situated at an attitude of 500ft or higher:\n \n <programlisting>\n! SELECT name, altitude\n! FROM ONLY cities\n! WHERE altitude > 500;\n \n +----------+----------+\n |name | altitude |\n +----------+----------+\n |Las Vegas | 2174 |\n +----------+----------+\n |Mariposa | 1953 |\n +----------+----------+\n! </programlisting> \n! </para>\n \n! \n! Here the <quote>ONLY</quote> before cities indicates that the query should\n! be run over only cities and not classes below cities in the\n inheritance hierarchy. Many of the commands that we\n have already discussed -- <command>SELECT</command>,\n <command>UPDATE</command> and <command>DELETE</command> --\n! 
support this <quote>ONLY</quote> notation, as do others, like\n <command>ALTER TABLE</command>.\n+ </para>\n+ <para>\n+ Deprecated: In previous versions of postgres, the default was not to\n+ get access to child classes. By experience this was found to be error\n+ prone. Under the old syntax, to get the sub-classes you append \"*\"\n+ to the table name. For example\n+ <programlisting>\n+ SELECT * from cities*;\n+ </programlisting> \n+ This old behaviour is still available by using a SET command... \n+ <programlisting>\n+ SET EXAMINE_SUBCLASS TO 'on';\n+ </programlisting> \n </para>\n </chapter>\n \nIndex: pgsql/doc/src/sgml/ref/alter_table.sgml\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/doc/src/sgml/ref/alter_table.sgml,v\nretrieving revision 1.10\ndiff -c -r1.10 alter_table.sgml\n*** pgsql/doc/src/sgml/ref/alter_table.sgml\t2000/01/29 16:58:27\t1.10\n--- pgsql/doc/src/sgml/ref/alter_table.sgml\t2000/02/05 08:24:38\n***************\n*** 23,35 ****\n <date>1999-07-20</date>\n </refsynopsisdivinfo>\n <synopsis>\n! ALTER TABLE <replaceable class=\"PARAMETER\">table</replaceable> [ * ]\n ADD [ COLUMN ] <replaceable class=\"PARAMETER\">column</replaceable> <replaceable\n class=\"PARAMETER\">type</replaceable>\n! ALTER TABLE <replaceable class=\"PARAMETER\">table</replaceable> [ * ]\n ALTER [ COLUMN ] <replaceable class=\"PARAMETER\">column</replaceable> { SET DEFAULT <replaceable\n class=\"PARAMETER\">value</replaceable> | DROP DEFAULT }\n! ALTER TABLE <replaceable class=\"PARAMETER\">table</replaceable> [ * ]\n RENAME [ COLUMN ] <replaceable class=\"PARAMETER\">column</replaceable> TO <replaceable\n class=\"PARAMETER\">newcolumn</replaceable>\n ALTER TABLE <replaceable class=\"PARAMETER\">table</replaceable>\n--- 23,35 ----\n <date>1999-07-20</date>\n </refsynopsisdivinfo>\n <synopsis>\n! 
ALTER TABLE [ ONLY ]<replaceable class=\"PARAMETER\">table</replaceable> [ * ]\n ADD [ COLUMN ] <replaceable class=\"PARAMETER\">column</replaceable> <replaceable\n class=\"PARAMETER\">type</replaceable>\n! ALTER TABLE [ ONLY ]<replaceable class=\"PARAMETER\">table</replaceable> [ * ]\n ALTER [ COLUMN ] <replaceable class=\"PARAMETER\">column</replaceable> { SET DEFAULT <replaceable\n class=\"PARAMETER\">value</replaceable> | DROP DEFAULT }\n! ALTER TABLE [ ONLY ]<replaceable class=\"PARAMETER\">table</replaceable> [ * ]\n RENAME [ COLUMN ] <replaceable class=\"PARAMETER\">column</replaceable> TO <replaceable\n class=\"PARAMETER\">newcolumn</replaceable>\n ALTER TABLE <replaceable class=\"PARAMETER\">table</replaceable>\n***************\n*** 162,178 ****\n </para>\n \n <para>\n! <quote>*</quote> following a name of a table indicates that the statement\n! should be run over that table and all tables below it in the\n inheritance hierarchy;\n! by default, the attribute will not be added to or renamed in any of the subclasses.\n \n! This should always be done when adding or modifying an attribute in a\n! superclass. If it is not, queries on the inheritance hierarchy\n such as\n \n <programlisting>\n! SELECT <replaceable>NewColumn</replaceable> FROM <replaceable>SuperClass</replaceable>*\n </programlisting>\n \n will not work because the subclasses will be missing an attribute\n--- 162,178 ----\n </para>\n \n <para>\n! <quote>ONLY</quote> preceeding the name of a table indicates that the statement\n! should be run over only that table and not tables below it in the\n inheritance hierarchy;\n! by default, the attribute will be added to or renamed in any of the subclasses.\n \n! It is recommended to never use the ONLY feature however.\n! If it is, queries on the inheritance hierarchy\n such as\n \n <programlisting>\n! 
SELECT <replaceable>NewColumn</replaceable> FROM <replaceable>SuperClass</replaceable>\n </programlisting>\n \n will not work because the subclasses will be missing an attribute\nIndex: pgsql/doc/src/sgml/ref/select.sgml\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/doc/src/sgml/ref/select.sgml,v\nretrieving revision 1.24\ndiff -c -r1.24 select.sgml\n*** pgsql/doc/src/sgml/ref/select.sgml\t2000/01/27 18:11:25\t1.24\n--- pgsql/doc/src/sgml/ref/select.sgml\t2000/02/05 08:24:40\n***************\n*** 25,31 ****\n SELECT [ ALL | DISTINCT [ ON ( <replaceable class=\"PARAMETER\">expression</replaceable> [, ...] ) ] ]\n <replaceable class=\"PARAMETER\">expression</replaceable> [ AS <replaceable class=\"PARAMETER\">name</replaceable> ] [, ...]\n [ INTO [ TEMPORARY | TEMP ] [ TABLE ] <replaceable class=\"PARAMETER\">new_table</replaceable> ]\n! [ FROM <replaceable class=\"PARAMETER\">table</replaceable> [ <replaceable class=\"PARAMETER\">alias</replaceable> ] [, ...] ]\n [ WHERE <replaceable class=\"PARAMETER\">condition</replaceable> ]\n [ GROUP BY <replaceable class=\"PARAMETER\">column</replaceable> [, ...] ]\n [ HAVING <replaceable class=\"PARAMETER\">condition</replaceable> [, ...] ]\n--- 25,31 ----\n SELECT [ ALL | DISTINCT [ ON ( <replaceable class=\"PARAMETER\">expression</replaceable> [, ...] ) ] ]\n <replaceable class=\"PARAMETER\">expression</replaceable> [ AS <replaceable class=\"PARAMETER\">name</replaceable> ] [, ...]\n [ INTO [ TEMPORARY | TEMP ] [ TABLE ] <replaceable class=\"PARAMETER\">new_table</replaceable> ]\n! [ FROM [ ONLY ]<replaceable class=\"PARAMETER\">table</replaceable> [ <replaceable class=\"PARAMETER\">alias</replaceable> ] [, ...] ]\n [ WHERE <replaceable class=\"PARAMETER\">condition</replaceable> ]\n [ GROUP BY <replaceable class=\"PARAMETER\">column</replaceable> [, ...] ]\n [ HAVING <replaceable class=\"PARAMETER\">condition</replaceable> [, ...] 
]\n***************\n*** 198,203 ****\n--- 198,210 ----\n Candidates for selection are rows which satisfy the WHERE condition;\n if WHERE is omitted, all rows are candidates.\n (See <xref linkend=\"sql-where\" endterm=\"sql-where-title\">.)\n+ </para>\n+ <para>\n+ <command>ONLY</command> will eliminate rows from subclasses of the table.\n+ This was previously the default result, and getting subclasses was\n+ obtained by appending <command>*</command> to the table name.\n+ The old behaviour is available via the command \n+ <command>SET EXAMINE_SUBCLASS TO 'on';</command>\n </para>\n \n <para>\nIndex: pgsql/doc/src/sgml/ref/set.sgml\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/doc/src/sgml/ref/set.sgml,v\nretrieving revision 1.28\ndiff -c -r1.28 set.sgml\n*** pgsql/doc/src/sgml/ref/set.sgml\t1999/07/22 15:09:15\t1.28\n--- pgsql/doc/src/sgml/ref/set.sgml\t2000/02/05 08:24:41\n***************\n*** 443,448 ****\n--- 443,482 ----\n \t </listitem>\n \t </varlistentry>\n \n+ <varlistentry>\n+ <term>EXAMINE_SUBCLASS</term>\n+ <listitem>\n+ <para>\n+ \tSets the inheritance query syntax to the traditional postgres style.\n+ \t\n+ \t<variablelist>\n+ \t <varlistentry>\n+ \t <term><replaceable class=\"parameter\">OFF</replaceable></term>\n+ \t <listitem>\n+ \t <para>\n+ Changes the behaviour of SELECT so that it no longer automatically\n+ examines sub-classes. (See SELECT). 
By default a SELECT on a table\n+ will also return subclass tuples unless specifying ONLY tablename.\n+ Setting this returns postgres to the traditional behaviour of\n+ only returning subclasses when appending \"*\" to the tablename.\n+ \t </para>\n+ \t </listitem>\n+ \t </varlistentry>\n+ \t \n+ \t <varlistentry>\n+ \t <term>ON</term>\n+ \t <listitem>\n+ \t <para>\n+ Returns SELECT to the behaviour of automatically returning\n+ results from sub-classes.\n+ \t </para>\n+ \t </listitem>\n+ \t </varlistentry>\n+ \t</variablelist>\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n \t <varlistentry>\n \t <term>OFF</term>\n \t <listitem>\nIndex: pgsql/src/backend/commands/creatinh.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/commands/creatinh.c,v\nretrieving revision 1.56\ndiff -c -r1.56 creatinh.c\n*** pgsql/src/backend/commands/creatinh.c\t2000/01/29 16:58:34\t1.56\n--- pgsql/src/backend/commands/creatinh.c\t2000/02/05 08:24:44\n***************\n*** 35,40 ****\n--- 35,43 ----\n const char *attributeType, List *schema);\n static List *MergeAttributes(List *schema, List *supers, List **supconstr);\n static void StoreCatalogInheritance(Oid relationId, List *supers);\n+ static void\n+ setRelhassubclassInRelation(Oid relationId, bool relhassubclass);\n+ \n \n /* ----------------------------------------------------------------\n *\t\tDefineRelation\n***************\n*** 323,328 ****\n--- 326,332 ----\n \t\tTupleConstr *constr;\n \n \t\trelation = heap_openr(name, AccessShareLock);\n+ \t\tsetRelhassubclassInRelation(relation->rd_id, true);\n \t\ttupleDesc = RelationGetDescr(relation);\n \t\tconstr = tupleDesc->constr;\n \n***************\n*** 655,657 ****\n--- 659,698 ----\n \t}\n \treturn false;\n }\n+ \n+ \n+ static void\n+ setRelhassubclassInRelation(Oid relationId, bool relhassubclass)\n+ {\n+ Relation relationRelation;\n+ HeapTuple tuple;\n+ Relation idescs[Num_pg_class_indices];\n+ \n+ /*\n+ * 
Lock a relation given its Oid. Go to the RelationRelation (i.e.\n+ * pg_relation), find the appropriate tuple, and add the specified\n+ * lock to it.\n+ */\n+ relationRelation = heap_openr(RelationRelationName, RowExclusiveLock);\n+ tuple = SearchSysCacheTuple(RELOID,\n+ ObjectIdGetDatum(relationId),\n+ 0, 0, 0)\n+ ;\n+ Assert(HeapTupleIsValid(tuple));\n+ \n+ ((Form_pg_class) GETSTRUCT(tuple))->relhassubclass = relhassubclass;\n+ heap_update(relationRelation, &tuple->t_self, tuple, NULL);\n+ \n+ /* keep the catalog indices up to date */\n+ CatalogOpenIndices(Num_pg_class_indices, Name_pg_class_indices, idescs);\n+ CatalogIndexInsert(idescs, Num_pg_class_indices, relationRelation, tuple\n+ );\n+ CatalogCloseIndices(Num_pg_class_indices, idescs);\n+ \n+ /* heap_freetuple(tuple); */\n+ heap_close(relationRelation, RowExclusiveLock);\n+ }\n+ \n+ \n+ \n+ \nIndex: pgsql/src/backend/commands/variable.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/commands/variable.c,v\nretrieving revision 1.28\ndiff -c -r1.28 variable.c\n*** pgsql/src/backend/commands/variable.c\t2000/01/22 23:50:10\t1.28\n--- pgsql/src/backend/commands/variable.c\t2000/02/05 08:24:45\n***************\n*** 48,53 ****\n--- 48,56 ----\n \n extern bool _use_keyset_query_optimizer;\n \n+ #define examine_subclass_default true\n+ bool examine_subclass = examine_subclass_default;\n+ \n /*\n *\n * Get_Token\n***************\n*** 228,233 ****\n--- 231,274 ----\n \tgeqo_rels = GEQO_RELS;\n \treturn TRUE;\n }\n+ /*\n+ *\n+ * EXAMINE_SUBCLASS\n+ *\n+ */\n+ #define EXAMINE_SUBCLASS \"EXAMINE_SUBCLASS\"\n+ \n+ static bool\n+ parse_examine_subclass(const char *value)\n+ {\n+ if (strcasecmp(value, \"on\") == 0)\n+ examine_subclass = true;\n+ else if (strcasecmp(value, \"off\") == 0)\n+ examine_subclass = false;\n+ else if (strcasecmp(value, \"default\") == 0) \n+ examine_subclass = examine_subclass_default;\n+ \telse\n+ \t\telog(ERROR, \"Bad value 
for %s (%s)\", EXAMINE_SUBCLASS, value);\n+ \treturn TRUE;\n+ }\n+ \n+ static bool\n+ show_examine_subclass()\n+ {\n+ \n+ \tif (examine_subclass)\n+ \t\telog(NOTICE, \"%s is ON\", EXAMINE_SUBCLASS);\n+ \telse\n+ \t\telog(NOTICE, \"%s is OFF\", EXAMINE_SUBCLASS);\n+ \treturn TRUE;\n+ }\n+ \n+ static bool\n+ reset_examine_subclass(void)\n+ {\n+ examine_subclass = examine_subclass_default;\n+ \treturn TRUE;\n+ }\n \n /*\n *\n***************\n*** 600,605 ****\n--- 641,649 ----\n \t{\n \t\t\"pg_options\", parse_pg_options, show_pg_options, reset_pg_options\n \t},\n+ {\n+ \t\tEXAMINE_SUBCLASS, parse_examine_subclass, show_examine_subclass, reset_examine_subclass\n+ },\n \t{\n \t\tNULL, NULL, NULL, NULL\n \t}\nIndex: pgsql/src/backend/optimizer/plan/planner.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/optimizer/plan/planner.c,v\nretrieving revision 1.74\ndiff -c -r1.74 planner.c\n*** pgsql/src/backend/optimizer/plan/planner.c\t2000/01/27 18:11:31\t1.74\n--- pgsql/src/backend/optimizer/plan/planner.c\t2000/02/05 08:24:48\n***************\n*** 35,40 ****\n--- 35,41 ----\n #include \"utils/builtins.h\"\n #include \"utils/lsyscache.h\"\n #include \"utils/syscache.h\"\n+ #include \"parser/parsetree.h\"\n \n static List *make_subplanTargetList(Query *parse, List *tlist,\n \t\t\t\t\t\t\t\t\tAttrNumber **groupColIdx);\n***************\n*** 140,146 ****\n \t\t * to change interface to plan_union_queries to pass that info back!\n \t\t */\n \t}\n! \telse if ((rt_index = first_inherit_rt_entry(rangetable)) != -1)\n \t{\n \t\tList\t *sub_tlist;\n \n--- 141,147 ----\n \t\t * to change interface to plan_union_queries to pass that info back!\n \t\t */\n \t}\n! 
\telse if ((rt_index = first_inherit_rt_entry(rangetable)) != -1 && has_inheritors(rt_fetch(rt_index, parse->rtable)->relid))\n \t{\n \t\tList\t *sub_tlist;\n \nIndex: pgsql/src/backend/optimizer/prep/prepunion.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/optimizer/prep/prepunion.c,v\nretrieving revision 1.43\ndiff -c -r1.43 prepunion.c\n*** pgsql/src/backend/optimizer/prep/prepunion.c\t2000/02/03 06:12:19\t1.43\n--- pgsql/src/backend/optimizer/prep/prepunion.c\t2000/02/05 08:24:49\n***************\n*** 25,30 ****\n--- 25,33 ----\n #include \"parser/parse_clause.h\"\n #include \"parser/parsetree.h\"\n #include \"utils/lsyscache.h\"\n+ #include \"access/heapam.h\"\n+ #include \"catalog/catname.h\"\n+ #include \"utils/syscache.h\"\n \n typedef struct {\n \tIndex\t\trt_index;\n***************\n*** 45,50 ****\n--- 48,54 ----\n static Append *make_append(List *appendplans, List *unionrtables,\n \t\t\t\t\t\t Index rt_index,\n \t\t\t\t\t\t List *inheritrtable, List *tlist);\n+ bool has_inheritors(Oid relationId);\n \n \n /*\n***************\n*** 352,357 ****\n--- 356,386 ----\n \n \t*union_rtentriesPtr = union_rtentries;\n \treturn union_plans;\n+ }\n+ \n+ bool has_inheritors(Oid relationId)\n+ {\n+ bool rtn;\n+ Relation relationRelation;\n+ HeapTuple tuple;\n+ \n+ /*\n+ * Lock a relation given its Oid. 
Go to the RelationRelation (i.e.\n+ * pg_relation), find the appropriate tuple, and add the specified\n+ * lock to it.\n+ */\n+ relationRelation = heap_openr(RelationRelationName, NoLock);\n+ tuple = SearchSysCacheTuple(RELOID,\n+ ObjectIdGetDatum(relationId),\n+ 0, 0, 0)\n+ ;\n+ /* Assert(HeapTupleIsValid(tuple)); */\n+ \n+ rtn = ((Form_pg_class) GETSTRUCT(tuple))->relhassubclass;\n+ \n+ /* heap_freetuple(tuple); */\n+ heap_close(relationRelation, NoLock);\n+ return rtn;\n }\n \n /*\nIndex: pgsql/src/backend/parser/gram.y\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.139\ndiff -c -r2.139 gram.y\n*** pgsql/src/backend/parser/gram.y\t2000/02/04 18:49:33\t2.139\n--- pgsql/src/backend/parser/gram.y\t2000/02/05 08:25:00\n***************\n*** 811,868 ****\n \n AlterTableStmt:\n /* ALTER TABLE <name> ADD [COLUMN] <coldef> */\n ALTER TABLE relation_name opt_inh_star ADD opt_column columnDef\n \t{\n \t\tAlterTableStmt *n = makeNode(AlterTableStmt);\n n->subtype = 'A';\n \t\tn->relname = $3;\n! \t\tn->inh = $4;\n \t\tn->def = $7;\n \t\t$$ = (Node *)n;\n \t}\n /* ALTER TABLE <name> ALTER [COLUMN] <colname> {SET DEFAULT <expr>|DROP DEFAULT} */\n | ALTER TABLE relation_name opt_inh_star ALTER opt_column ColId alter_column_action\n {\n AlterTableStmt *n = makeNode(AlterTableStmt);\n n->subtype = 'T';\n n->relname = $3;\n! n->inh = $4;\n n->name = $7;\n n->def = $8;\n $$ = (Node *)n;\n }\n /* ALTER TABLE <name> DROP [COLUMN] <name> {RESTRICT|CASCADE} */\n | ALTER TABLE relation_name opt_inh_star DROP opt_column ColId drop_behavior\n {\n AlterTableStmt *n = makeNode(AlterTableStmt);\n n->subtype = 'D';\n n->relname = $3;\n! n->inh = $4;\n n->name = $7;\n n->behavior = $8;\n $$ = (Node *)n;\n }\n /* ALTER TABLE <name> ADD CONSTRAINT ... 
*/\n | ALTER TABLE relation_name opt_inh_star ADD TableConstraint\n {\n AlterTableStmt *n = makeNode(AlterTableStmt);\n n->subtype = 'C';\n n->relname = $3;\n! n->inh = $4;\n n->def = $6;\n $$ = (Node *)n;\n }\n /* ALTER TABLE <name> DROP CONSTRAINT <name> {RESTRICT|CASCADE} */\n | ALTER TABLE relation_name opt_inh_star DROP CONSTRAINT name drop_behavior\n {\n AlterTableStmt *n = makeNode(AlterTableStmt);\n n->subtype = 'X';\n n->relname = $3;\n! n->inh = $4;\n n->name = $7;\n n->behavior = $8;\n $$ = (Node *)n;\n }\n ;\n \n alter_column_action:\n--- 811,926 ----\n \n AlterTableStmt:\n /* ALTER TABLE <name> ADD [COLUMN] <coldef> */\n+ /* \"*\" deprecated */\n ALTER TABLE relation_name opt_inh_star ADD opt_column columnDef\n \t{\n+ extern bool examine_subclass;\n \t\tAlterTableStmt *n = makeNode(AlterTableStmt);\n n->subtype = 'A';\n \t\tn->relname = $3;\n! \t\tn->inh = $4 || examine_subclass;\n \t\tn->def = $7;\n \t\t$$ = (Node *)n;\n \t}\n+ | ALTER TABLE ONLY relation_name ADD opt_column columnDef\n+ \t{\n+ \t\tAlterTableStmt *n = makeNode(AlterTableStmt);\n+ n->subtype = 'A';\n+ \t\tn->relname = $4;\n+ \t\tn->inh = FALSE;\n+ \t\tn->def = $7;\n+ \t\t$$ = (Node *)n;\n+ \t}\n /* ALTER TABLE <name> ALTER [COLUMN] <colname> {SET DEFAULT <expr>|DROP DEFAULT} */\n+ /* \"*\" deprecated */\n | ALTER TABLE relation_name opt_inh_star ALTER opt_column ColId alter_column_action\n {\n+ extern bool examine_subclass;\n AlterTableStmt *n = makeNode(AlterTableStmt);\n n->subtype = 'T';\n n->relname = $3;\n! 
n->inh = $4 || examine_subclass;\n n->name = $7;\n n->def = $8;\n $$ = (Node *)n;\n }\n+ | ALTER TABLE ONLY relation_name ALTER opt_column ColId alter_column_action\n+ {\n+ AlterTableStmt *n = makeNode(AlterTableStmt);\n+ n->subtype = 'T';\n+ n->relname = $4;\n+ n->inh = FALSE;\n+ n->name = $7;\n+ n->def = $8;\n+ $$ = (Node *)n;\n+ }\n /* ALTER TABLE <name> DROP [COLUMN] <name> {RESTRICT|CASCADE} */\n+ /* \"*\" deprecated */\n | ALTER TABLE relation_name opt_inh_star DROP opt_column ColId drop_behavior\n {\n+ extern bool examine_subclass;\n AlterTableStmt *n = makeNode(AlterTableStmt);\n n->subtype = 'D';\n n->relname = $3;\n! n->inh = $4 || examine_subclass;\n n->name = $7;\n n->behavior = $8;\n $$ = (Node *)n;\n }\n+ | ALTER TABLE ONLY relation_name DROP opt_column ColId drop_behavior\n+ {\n+ AlterTableStmt *n = makeNode(AlterTableStmt);\n+ n->subtype = 'D';\n+ n->relname = $4;\n+ n->inh = FALSE;\n+ n->name = $7;\n+ n->behavior = $8;\n+ $$ = (Node *)n;\n+ }\n /* ALTER TABLE <name> ADD CONSTRAINT ... */\n+ /* \"*\" deprecated */\n | ALTER TABLE relation_name opt_inh_star ADD TableConstraint\n {\n+ extern bool examine_subclass;\n AlterTableStmt *n = makeNode(AlterTableStmt);\n n->subtype = 'C';\n n->relname = $3;\n! n->inh = $4 || examine_subclass;\n n->def = $6;\n $$ = (Node *)n;\n }\n+ | ALTER TABLE ONLY relation_name ADD TableConstraint\n+ {\n+ AlterTableStmt *n = makeNode(AlterTableStmt);\n+ n->subtype = 'C';\n+ n->relname = $4;\n+ n->inh = FALSE;\n+ n->def = $6;\n+ $$ = (Node *)n;\n+ }\n /* ALTER TABLE <name> DROP CONSTRAINT <name> {RESTRICT|CASCADE} */\n+ /* \"*\" deprecated */\n | ALTER TABLE relation_name opt_inh_star DROP CONSTRAINT name drop_behavior\n {\n+ extern bool examine_subclass;\n AlterTableStmt *n = makeNode(AlterTableStmt);\n n->subtype = 'X';\n n->relname = $3;\n! 
n->inh = $4 || examine_subclass;\n n->name = $7;\n n->behavior = $8;\n $$ = (Node *)n;\n }\n+ | ALTER TABLE ONLY relation_name DROP CONSTRAINT name drop_behavior\n+ {\n+ AlterTableStmt *n = makeNode(AlterTableStmt);\n+ n->subtype = 'X';\n+ n->relname = $4;\n+ n->inh = FALSE;\n+ n->name = $7;\n+ n->behavior = $8;\n+ $$ = (Node *)n;\n+ }\n ;\n \n alter_column_action:\n***************\n*** 2380,2390 ****\n *****************************************************************************/\n \n RenameStmt: ALTER TABLE relation_name opt_inh_star\n \t\t\t\t RENAME opt_column opt_name TO name\n \t\t\t\t{\n \t\t\t\t\tRenameStmt *n = makeNode(RenameStmt);\n \t\t\t\t\tn->relname = $3;\n! \t\t\t\t\tn->inh = $4;\n \t\t\t\t\tn->column = $7;\n \t\t\t\t\tn->newname = $9;\n \t\t\t\t\t$$ = (Node *)n;\n--- 2438,2460 ----\n *****************************************************************************/\n \n RenameStmt: ALTER TABLE relation_name opt_inh_star\n+ /* \"*\" deprecated */\n \t\t\t\t RENAME opt_column opt_name TO name\n \t\t\t\t{\n+ extern bool examine_subclass;\n \t\t\t\t\tRenameStmt *n = makeNode(RenameStmt);\n \t\t\t\t\tn->relname = $3;\n! \t\t\t\t\tn->inh = $4 || examine_subclass;\n! \t\t\t\t\tn->column = $7;\n! \t\t\t\t\tn->newname = $9;\n! \t\t\t\t\t$$ = (Node *)n;\n! \t\t\t\t}\n! | ALTER TABLE ONLY relation_name\n! \t\t\t\t RENAME opt_column opt_name TO name\n! \t\t\t\t{\n! \t\t\t\t\tRenameStmt *n = makeNode(RenameStmt);\n! \t\t\t\t\tn->relname = $4;\n! \t\t\t\t\tn->inh = FALSE;\n \t\t\t\t\tn->column = $7;\n \t\t\t\t\tn->newname = $9;\n \t\t\t\t\t$$ = (Node *)n;\n***************\n*** 3553,3562 ****\n \n relation_expr:\trelation_name\n \t\t\t\t{\n! \t\t\t\t\t/* normal relations */\n \t\t\t\t\t$$ = makeNode(RelExpr);\n \t\t\t\t\t$$->relname = $1;\n! \t\t\t\t\t$$->inh = FALSE;\n \t\t\t\t}\n \t\t| relation_name '*'\t\t\t\t %prec '='\n \t\t\t\t{\n--- 3623,3633 ----\n \n relation_expr:\trelation_name\n \t\t\t\t{\n! \t\t\t\t/* default inheritance */\n! 
extern bool examine_subclass;\n \t\t\t\t\t$$ = makeNode(RelExpr);\n \t\t\t\t\t$$->relname = $1;\n! \t\t\t\t\t$$->inh = examine_subclass;\n \t\t\t\t}\n \t\t| relation_name '*'\t\t\t\t %prec '='\n \t\t\t\t{\n***************\n*** 3565,3570 ****\n--- 3636,3648 ----\n \t\t\t\t\t$$->relname = $1;\n \t\t\t\t\t$$->inh = TRUE;\n \t\t\t\t}\n+ | ONLY relation_name\n+ {\n+ \t\t\t\t\t/* no inheritance */\n+ \t\t\t\t\t$$ = makeNode(RelExpr);\n+ \t\t\t\t\t$$->relname = $2;\n+ \t\t\t\t\t$$->inh = FALSE;\n+ }\n \n opt_array_bounds:\t'[' ']' opt_array_bounds\n \t\t\t\t{ $$ = lcons(makeInteger(-1), $3); }\nIndex: pgsql/src/include/catalog/catversion.h\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/include/catalog/catversion.h,v\nretrieving revision 1.13\ndiff -c -r1.13 catversion.h\n*** pgsql/src/include/catalog/catversion.h\t2000/01/27 18:11:40\t1.13\n--- pgsql/src/include/catalog/catversion.h\t2000/02/05 08:25:05\n***************\n*** 53,58 ****\n */\n \n /* yyyymmddN */\n! #define CATALOG_VERSION_NO 200001271\n \n #endif\n--- 53,58 ----\n */\n \n /* yyyymmddN */\n! #define CATALOG_VERSION_NO 200002050\n \n #endif\nIndex: pgsql/src/include/catalog/pg_attribute.h\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/include/catalog/pg_attribute.h,v\nretrieving revision 1.53\ndiff -c -r1.53 pg_attribute.h\n*** pgsql/src/include/catalog/pg_attribute.h\t2000/01/26 05:57:57\t1.53\n--- pgsql/src/include/catalog/pg_attribute.h\t2000/02/05 08:25:09\n***************\n*** 402,408 ****\n { 1259, {\"relrefs\"},\t 21, 0,\t2, 16, 0, -1, -1, '\\001', 'p', '\\0', 's', '\\0', '\\0' }, \\\n { 1259, {\"relhaspkey\"}, 16, 0,\t1, 17, 0, -1, -1, '\\001', 'p', '\\0', 'c', '\\0', '\\0' }, \\\n { 1259, {\"relhasrules\"}, 16, 0,\t1, 18, 0, -1, -1, '\\001', 'p', '\\0', 'c', '\\0', '\\0' }, \\\n! 
{ 1259, {\"relacl\"},\t\t 1034, 0, -1, 19, 0, -1, -1,\t'\\0', 'p', '\\0', 'i', '\\0', '\\0' }\n \n DATA(insert OID = 0 ( 1259 relname\t\t\t19 0 NAMEDATALEN 1 0 -1 -1 f p f i f f));\n DATA(insert OID = 0 ( 1259 reltype\t\t\t26 0 4 2 0 -1 -1 t p f i f f));\n--- 402,409 ----\n { 1259, {\"relrefs\"},\t 21, 0,\t2, 16, 0, -1, -1, '\\001', 'p', '\\0', 's', '\\0', '\\0' }, \\\n { 1259, {\"relhaspkey\"}, 16, 0,\t1, 17, 0, -1, -1, '\\001', 'p', '\\0', 'c', '\\0', '\\0' }, \\\n { 1259, {\"relhasrules\"}, 16, 0,\t1, 18, 0, -1, -1, '\\001', 'p', '\\0', 'c', '\\0', '\\0' }, \\\n! { 1259, {\"relhassubclass\"},16, 0,\t1, 19, 0, -1, -1, '\\001', 'p', '\\0', 'c', '\\0', '\\0' }, \\\n! { 1259, {\"relacl\"},\t\t 1034, 0, -1, 20, 0, -1, -1,\t'\\0', 'p', '\\0', 'i', '\\0', '\\0' }\n \n DATA(insert OID = 0 ( 1259 relname\t\t\t19 0 NAMEDATALEN 1 0 -1 -1 f p f i f f));\n DATA(insert OID = 0 ( 1259 reltype\t\t\t26 0 4 2 0 -1 -1 t p f i f f));\n***************\n*** 422,428 ****\n DATA(insert OID = 0 ( 1259 relrefs\t\t\t21 0 2 16 0 -1 -1 t p f s f f));\n DATA(insert OID = 0 ( 1259 relhaspkey\t\t16 0 1 17 0 -1 -1 t p f c f f));\n DATA(insert OID = 0 ( 1259 relhasrules\t\t16 0 1 18 0 -1 -1 t p f c f f));\n! DATA(insert OID = 0 ( 1259 relacl\t\t 1034 0 -1 19 0 -1 -1 f p f i f f));\n DATA(insert OID = 0 ( 1259 ctid\t\t\t\t27 0 6 -1 0 -1 -1 f p f i f f));\n DATA(insert OID = 0 ( 1259 oid\t\t\t\t26 0 4 -2 0 -1 -1 t p f i f f));\n DATA(insert OID = 0 ( 1259 xmin\t\t\t\t28 0 4 -3 0 -1 -1 t p f i f f));\n--- 423,430 ----\n DATA(insert OID = 0 ( 1259 relrefs\t\t\t21 0 2 16 0 -1 -1 t p f s f f));\n DATA(insert OID = 0 ( 1259 relhaspkey\t\t16 0 1 17 0 -1 -1 t p f c f f));\n DATA(insert OID = 0 ( 1259 relhasrules\t\t16 0 1 18 0 -1 -1 t p f c f f));\n! DATA(insert OID = 0 ( 1259 relhassubclass\t16 0 1 19 0 -1 -1 t p f c f f));\n! 
DATA(insert OID = 0 ( 1259 relacl\t\t 1034 0 -1 20 0 -1 -1 f p f i f f));\n DATA(insert OID = 0 ( 1259 ctid\t\t\t\t27 0 6 -1 0 -1 -1 f p f i f f));\n DATA(insert OID = 0 ( 1259 oid\t\t\t\t26 0 4 -2 0 -1 -1 t p f i f f));\n DATA(insert OID = 0 ( 1259 xmin\t\t\t\t28 0 4 -3 0 -1 -1 t p f i f f));\nIndex: pgsql/src/include/catalog/pg_class.h\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/include/catalog/pg_class.h,v\nretrieving revision 1.33\ndiff -c -r1.33 pg_class.h\n*** pgsql/src/include/catalog/pg_class.h\t2000/01/26 05:57:57\t1.33\n--- pgsql/src/include/catalog/pg_class.h\t2000/02/05 08:25:09\n***************\n*** 78,88 ****\n \tint2\t\trelrefs;\t\t/* # of references to this relation */\n \tbool\t\trelhaspkey;\t\t/* has PRIMARY KEY */\n \tbool\t\trelhasrules;\n \taclitem\t\trelacl[1];\t\t/* this is here for the catalog */\n } FormData_pg_class;\n \n #define CLASS_TUPLE_SIZE \\\n! \t (offsetof(FormData_pg_class,relhasrules) + sizeof(bool))\n \n /* ----------------\n *\t\tForm_pg_class corresponds to a pointer to a tuple with\n--- 78,89 ----\n \tint2\t\trelrefs;\t\t/* # of references to this relation */\n \tbool\t\trelhaspkey;\t\t/* has PRIMARY KEY */\n \tbool\t\trelhasrules;\n+ \tbool\t\trelhassubclass;\n \taclitem\t\trelacl[1];\t\t/* this is here for the catalog */\n } FormData_pg_class;\n \n #define CLASS_TUPLE_SIZE \\\n! \t (offsetof(FormData_pg_class,relhassubclass) + sizeof(bool))\n \n /* ----------------\n *\t\tForm_pg_class corresponds to a pointer to a tuple with\n***************\n*** 102,109 ****\n *\t\trelacl field.\n * ----------------\n */\n! #define Natts_pg_class_fixed\t\t\t18\n! #define Natts_pg_class\t\t\t\t\t19\n #define Anum_pg_class_relname\t\t\t1\n #define Anum_pg_class_reltype\t\t\t2\n #define Anum_pg_class_relowner\t\t\t3\n--- 103,110 ----\n *\t\trelacl field.\n * ----------------\n */\n! #define Natts_pg_class_fixed\t\t\t19\n! 
#define Natts_pg_class\t\t\t\t\t20\n #define Anum_pg_class_relname\t\t\t1\n #define Anum_pg_class_reltype\t\t\t2\n #define Anum_pg_class_relowner\t\t\t3\n***************\n*** 122,128 ****\n #define Anum_pg_class_relrefs\t\t\t16\n #define Anum_pg_class_relhaspkey\t\t17\n #define Anum_pg_class_relhasrules\t\t18\n! #define Anum_pg_class_relacl\t\t\t19\n \n /* ----------------\n *\t\tinitial contents of pg_class\n--- 123,130 ----\n #define Anum_pg_class_relrefs\t\t\t16\n #define Anum_pg_class_relhaspkey\t\t17\n #define Anum_pg_class_relhasrules\t\t18\n! #define Anum_pg_class_relhassubclass\t\t19\n! #define Anum_pg_class_relacl\t\t\t20\n \n /* ----------------\n *\t\tinitial contents of pg_class\n***************\n*** 135,141 ****\n DESCR(\"\");\n DATA(insert OID = 1255 ( pg_proc 81\t\t PGUID 0 0 0 0 f f r 16 0 0 0 0 0 f f _null_ ));\n DESCR(\"\");\n! DATA(insert OID = 1259 ( pg_class 83\t\t PGUID 0 0 0 0 f f r 19 0 0 0 0 0 f f _null_ ));\n DESCR(\"\");\n DATA(insert OID = 1260 ( pg_shadow 86\t\t PGUID 0 0 0 0 f t r 8 0 0 0 0 0 f f _null_ ));\n DESCR(\"\");\n--- 137,143 ----\n DESCR(\"\");\n DATA(insert OID = 1255 ( pg_proc 81\t\t PGUID 0 0 0 0 f f r 16 0 0 0 0 0 f f _null_ ));\n DESCR(\"\");\n! DATA(insert OID = 1259 ( pg_class 83\t\t PGUID 0 0 0 0 f f r 20 0 0 0 0 0 f f _null_ ));\n DESCR(\"\");\n DATA(insert OID = 1260 ( pg_shadow 86\t\t PGUID 0 0 0 0 f t r 8 0 0 0 0 0 f f _null_ ));\n DESCR(\"\");",
"msg_date": "Sat, 5 Feb 2000 04:31:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New improved patch^N"
}
] |
[
{
"msg_contents": "In the TODO file:\n\n* -Allow transaction commits with rollback with no-fsync performance [fsync](Vadim)\n\nHas this been done in current? I see almost no performance\nimprovement on copying data into a table.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 05 Feb 2000 19:37:19 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "TODO item"
},
{
"msg_contents": "> In the TODO file:\n> \n> * -Allow transaction commits with rollback with no-fsync performance [fsync](Vadim)\n> \n> Has this been done in current? I see almost no performance\n> improvement on copying data into a table.\n\nTODO updated. That was part of MVCC which originally was supposed to be\nin 7.0.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 5 Feb 2000 12:02:46 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "> > In the TODO file:\n> > \n> > * -Allow transaction commits with rollback with no-fsync performance [fsync](Vadim)\n> > \n> > Has this been done in current? I see almost no performance\n> > improvement on copying data into a table.\n> \n> TODO updated. That was part of MVCC which originally was supposed to be\n> in 7.0.\n\nThanks.\n\nBTW, I have worked a little bit on this item. The idea is pretty\nsimple. Instead of doing a real fsync() in pg_fsync(), just marking it\nso that we remember to do fsync() at the commit time. Following\npatches illustrate the idea. An experience shows that it dramatically\nboosts the performance of copy. Unfortunately I see virtually no\ndifference for TPC-B like small many concurrent transactions. Maybe we\nwould need WAL for this. Comments?\n\nIndex: access/transam/xact.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/access/transam/xact.c,v\nretrieving revision 1.60\ndiff -c -r1.60 xact.c\n*** access/transam/xact.c\t2000/01/29 16:58:29\t1.60\n--- access/transam/xact.c\t2000/02/06 06:12:58\n***************\n*** 639,644 ****\n--- 639,646 ----\n \tif (SharedBufferChanged)\n \t{\n \t\tFlushBufferPool();\n+ \t\tpg_fsync_pending();\n+ \n \t\tif (leak)\n \t\t\tResetBufferPool();\n \n***************\n*** 653,658 ****\n--- 655,661 ----\n \t\t */\n \t\tleak = BufferPoolCheckLeak();\n \t\tFlushBufferPool();\n+ \t\tpg_fsync_pending();\n \t}\n \n \tif (leak)\nIndex: storage/file/fd.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/storage/file/fd.c,v\nretrieving revision 1.52\ndiff -c -r1.52 fd.c\n*** storage/file/fd.c\t2000/01/26 05:56:55\t1.52\n--- storage/file/fd.c\t2000/02/06 06:13:01\n***************\n*** 189,202 ****\n static File fileNameOpenFile(FileName fileName, int fileFlags, int fileMode);\n static char *filepath(char *filename);\n static long pg_nofile(void);\n \n /*\n * 
pg_fsync --- same as fsync except does nothing if -F switch was given\n */\n int\n pg_fsync(int fd)\n {\n! \treturn disableFsync ? 0 : fsync(fd);\n }\n \n /*\n--- 189,238 ----\n static File fileNameOpenFile(FileName fileName, int fileFlags, int fileMode);\n static char *filepath(char *filename);\n static long pg_nofile(void);\n+ static void alloc_fsync_info(void);\n \n+ static char *fsync_request;\n+ static int nfds;\n+ \n /*\n * pg_fsync --- same as fsync except does nothing if -F switch was given\n */\n int\n pg_fsync(int fd)\n+ {\n+ \tif (fsync_request == NULL)\n+ \t alloc_fsync_info();\n+ \tfsync_request[fd] = 1;\n+ \treturn 0;\n+ }\n+ \n+ static void alloc_fsync_info(void)\n+ {\n+ nfds = pg_nofile();\n+ fsync_request = malloc(nfds);\n+ if (fsync_request == NULL) {\n+ elog(ERROR, \"alloc_fsync_info: cannot allocate memory\");\n+ return;\n+ }\n+ }\n+ \n+ void\n+ pg_fsync_pending(void)\n {\n! int i;\n! \n! if (disableFsync)\n! return;\n! \n! if (fsync_request == NULL)\n! alloc_fsync_info();\n! \n! for (i=0;i<nfds;i++) {\n! if (fsync_request[i]) {\n! fsync(i);\n! fsync_request[i] = 0;\n! }\n! }\n }\n \n /*\n",
"msg_date": "Sun, 06 Feb 2000 15:40:59 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "> BTW, I have worked a little bit on this item. The idea is pretty\n> simple. Instead of doing a real fsync() in pg_fsync(), just marking it\n> so that we remember to do fsync() at the commit time. Following\n> patches illustrate the idea. An experience shows that it dramatically\n> boosts the performance of copy. Unfortunately I see virtually no\n> difference for TPC-B like small many concurrent transactions. Maybe we\n> would need WAL for this. Comments?\n\n\nCan you be more specific. How does fsync work now vs. your proposed\nchange. I did not see that here. Sorry.\n\n\n> \n> Index: access/transam/xact.c\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/backend/access/transam/xact.c,v\n> retrieving revision 1.60\n> diff -c -r1.60 xact.c\n> *** access/transam/xact.c\t2000/01/29 16:58:29\t1.60\n> --- access/transam/xact.c\t2000/02/06 06:12:58\n> ***************\n> *** 639,644 ****\n> --- 639,646 ----\n> \tif (SharedBufferChanged)\n> \t{\n> \t\tFlushBufferPool();\n> + \t\tpg_fsync_pending();\n> + \n> \t\tif (leak)\n> \t\t\tResetBufferPool();\n> \n> ***************\n> *** 653,658 ****\n> --- 655,661 ----\n> \t\t */\n> \t\tleak = BufferPoolCheckLeak();\n> \t\tFlushBufferPool();\n> + \t\tpg_fsync_pending();\n> \t}\n> \n> \tif (leak)\n> Index: storage/file/fd.c\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/backend/storage/file/fd.c,v\n> retrieving revision 1.52\n> diff -c -r1.52 fd.c\n> *** storage/file/fd.c\t2000/01/26 05:56:55\t1.52\n> --- storage/file/fd.c\t2000/02/06 06:13:01\n> ***************\n> *** 189,202 ****\n> static File fileNameOpenFile(FileName fileName, int fileFlags, int fileMode);\n> static char *filepath(char *filename);\n> static long pg_nofile(void);\n> \n> /*\n> * pg_fsync --- same as fsync except does nothing if -F switch was given\n> */\n> int\n> pg_fsync(int fd)\n> {\n> ! \treturn disableFsync ? 
0 : fsync(fd);\n> }\n> \n> /*\n> --- 189,238 ----\n> static File fileNameOpenFile(FileName fileName, int fileFlags, int fileMode);\n> static char *filepath(char *filename);\n> static long pg_nofile(void);\n> + static void alloc_fsync_info(void);\n> \n> + static char *fsync_request;\n> + static int nfds;\n> + \n> /*\n> * pg_fsync --- same as fsync except does nothing if -F switch was given\n> */\n> int\n> pg_fsync(int fd)\n> + {\n> + \tif (fsync_request == NULL)\n> + \t alloc_fsync_info();\n> + \tfsync_request[fd] = 1;\n> + \treturn 0;\n> + }\n> + \n> + static void alloc_fsync_info(void)\n> + {\n> + nfds = pg_nofile();\n> + fsync_request = malloc(nfds);\n> + if (fsync_request == NULL) {\n> + elog(ERROR, \"alloc_fsync_info: cannot allocate memory\");\n> + return;\n> + }\n> + }\n> + \n> + void\n> + pg_fsync_pending(void)\n> {\n> ! int i;\n> ! \n> ! if (disableFsync)\n> ! return;\n> ! \n> ! if (fsync_request == NULL)\n> ! alloc_fsync_info();\n> ! \n> ! for (i=0;i<nfds;i++) {\n> ! if (fsync_request[i]) {\n> ! fsync(i);\n> ! fsync_request[i] = 0;\n> ! }\n> ! }\n> }\n> \n> /*\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 6 Feb 2000 01:55:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tatsuo Ishii\n>\n> > > In the TODO file:\n> > >\n> > > * -Allow transaction commits with rollback with no-fsync\n> performance [fsync](Vadim)\n> > >\n> > > Has this been done in current? I see almost no performance\n> > > improvement on copying data into a table.\n> >\n> > TODO updated. That was part of MVCC which originally was supposed to be\n> > in 7.0.\n>\n> Thanks.\n>\n> BTW, I have worked a little bit on this item. The idea is pretty\n> simple. Instead of doing a real fsync() in pg_fsync(), just marking it\n> so that we remember to do fsync() at the commit time. Following\n\nThis seems not good, unfortunately.\nNote that the backend which calls pg_fsync() for a relation file may\nbe different from the backend that updated the file's shared buffers.\nThe former backend wouldn't necessarily be committed when the\nlatter backend is committed.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Sun, 6 Feb 2000 17:36:55 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] TODO item"
},
{
"msg_contents": "> > BTW, I have worked a little bit on this item. The idea is pretty\n> > simple. Instead of doing a real fsync() in pg_fsync(), just marking it\n> > so that we remember to do fsync() at the commit time. Following\n> > patches illustrate the idea. An experience shows that it dramatically\n> > boosts the performance of copy. Unfortunately I see virtually no\n> > difference for TPC-B like small many concurrent transactions. Maybe we\n> > would need WAL for this. Comments?\n> \n> \n> Can you be more specific. How does fsync work now vs. your proposed\n> change. I did not see that here. Sorry.\n\nAs already pointed out by many people, the current buffer manager is not\nvery smart about flushing out dirty pages. From TODO.detail/fsync:\n\n>This is the problem of buffer manager, known for very long time:\n>when copy eats all buffers, manager begins write/fsync each\n>durty buffer to free buffer for new data. All updated relations\n>should be fsynced _once_ @ transaction commit. You would get\n>the same results without -F...\n\nWith my changes, pg_fsync would just mark the relation (actually its\nfile descriptor) as needing fsync, instead of calling the real fsync.\nUpon transaction commit, the marks would be checked and the relations\nfsynced if necessary.\n\nBTW, Hiroshi has raised a question about my changes, and I have written\nto him (in Japanese, of course:-) to figure out what I'm missing\nhere. I will let you know the result later.\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 06 Feb 2000 23:04:12 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>>>> BTW, I have worked a little bit on this item. The idea is pretty\n>>>> simple. Instead of doing a real fsync() in pg_fsync(), just marking it\n>>>> so that we remember to do fsync() at the commit time. Following\n>>>> patches illustrate the idea.\n\nIn the form you have shown it, it would be completely useless, for\ntwo reasons:\n\n1. It doesn't guarantee that the right files are fsync'd. It would\nin fact fsync whichever files happen to be using the same kernel\nfile descriptor numbers at the close of the transaction as the ones\nyou really wanted to fsync were using at the time fsync was requested.\n\n2. It doesn't guarantee that the files are fsync'd in the right order.\nPer my discussion a few days ago, the only reason for doing fsync at all\nis to guarantee that the data pages touched by a transaction get flushed\nto disk before the pg_log update claiming that the transaction is done\ngets flushed to disk. A change like this completely destroys that\nordering, since pg_fsync_pending has no idea which fd is pg_log.\n\nYou could possibly fix #1 by logging fsync requests at the vfd level;\nthen, whenever a vfd is closed to free up a kernel fd, check the fsync\nflag and execute the pending fsync before closing the file. You could\npossibly fix #2 by having transaction commit invoke the pg_fsync_pending\nscan before it updates pg_log (and then fsyncing pg_log itself again\nafter).\n\n(Actually, you could probably eliminate the notion of \"fsync request\"\nentirely, and simply have each vfd get marked \"dirty\" automatically when\nwritten to. Both closing a vfd and the scan at xact commit would look\nat the dirty bit to decide to do fsync.)\n\nWhat would still need to be thought about is whether this scheme\npreserves the ordering guarantee when a group of concurrent backends\nis considered, rather than one backend in isolation. 
(I believe that\nfsync() will apply to all dirty kernel buffers for a file, not just\nthose dirtied by the requesting process, so each backend's fsyncs can\naffect the order in which other backends' writes hit the disk.)\nOffhand I do not see any problems there, but it's the kind of thing\nthat requires more than offhand thought...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 Feb 2000 10:47:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item "
},
{
"msg_contents": "> You could possibly fix #1 by logging fsync requests at the vfd level;\n> then, whenever a vfd is closed to free up a kernel fd, check the fsync\n> flag and execute the pending fsync before closing the file. You could\n> possibly fix #2 by having transaction commit invoke the pg_fsync_pending\n> scan before it updates pg_log (and then fsyncing pg_log itself again\n> after).\n> \n> (Actually, you could probably eliminate the notion of \"fsync request\"\n> entirely, and simply have each vfd get marked \"dirty\" automatically when\n> written to. Both closing a vfd and the scan at xact commit would look\n> at the dirty bit to decide to do fsync.)\n> \n> What would still need to be thought about is whether this scheme\n> preserves the ordering guarantee when a group of concurrent backends\n> is considered, rather than one backend in isolation. (I believe that\n> fsync() will apply to all dirty kernel buffers for a file, not just\n> those dirtied by the requesting process, so each backend's fsyncs can\n> affect the order in which other backends' writes hit the disk.)\n> Offhand I do not see any problems there, but it's the kind of thing\n> that requires more than offhand thought...\n\nGlad someone is looking into this. Seems the above concern about order\nis fine, because it is only the marking of pg_log transactions as\ncommitted that is important. You can fsync anything you want; you just\nneed to make sure your current transaction's buffers are fsync'ed before\nyou mark the transaction as complete.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 6 Feb 2000 12:47:32 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Tatsuo Ishii <[email protected]> writes:\n> >>>> BTW, I have worked a little bit on this item. The idea is pretty\n> >>>> simple. Instead of doing a real fsync() in pg_fsync(), just marking it\n> >>>> so that we remember to do fsync() at the commit time. Following\n> >>>> patches illustrate the idea.\n>\n> What would still need to be thought about is whether this scheme\n> preserves the ordering guarantee when a group of concurrent backends\n> is considered, rather than one backend in isolation. (I believe that\n> fsync() will apply to all dirty kernel buffers for a file, not just\n> those dirtied by the requesting process, so each backend's fsyncs can\n> affect the order in which other backends' writes hit the disk.)\n> Offhand I do not see any problems there, but it's the kind of thing\n> that requires more than offhand thought...\n\nThe following is an example of what I first pointed out.\nI am talking about PostgreSQL shared buffers here, not kernel buffers.\n\nSession-1\nbegin;\nupdate A ...;\n\nSession-2\nbegin;\nselect * from B ..;\n    There's no PostgreSQL shared buffer available.\n    This backend has to force the flush of a free buffer\n    page. Unfortunately the page was dirtied by the\n    above operation of Session-1 and calls pg_fsync()\n    for the table A. However fsync() is postponed until\n    commit of this backend.\n\nSession-1\ncommit;\n    There's no dirty buffer page for the table A.\n    So pg_fsync() isn't called for the table A.\n\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n",
"msg_date": "Mon, 07 Feb 2000 09:40:45 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "> 1. It doesn't guarantee that the right files are fsync'd. It would\n> in fact fsync whichever files happen to be using the same kernel\n> file descriptor numbers at the close of the transaction as the ones\n> you really wanted to fsync were using at the time fsync was requested.\n\nRight. If a VFD is reused, the fd would not point to the same file\nanymore.\n\n> You could possibly fix #1 by logging fsync requests at the vfd level;\n> then, whenever a vfd is closed to free up a kernel fd, check the fsync\n> flag and execute the pending fsync before closing the file. You could\n> possibly fix #2 by having transaction commit invoke the pg_fsync_pending\n> scan before it updates pg_log (and then fsyncing pg_log itself again\n> after).\n\nI do not understand #2. I call pg_fsync_pending twice in\nRecordTransactionCommit, one is after FlushBufferPool, and the other\nis after TansactionIdCommit and FlushBufferPool. Or am I missing\nsomething?\n\n> What would still need to be thought about is whether this scheme\n> preserves the ordering guarantee when a group of concurrent backends\n> is considered, rather than one backend in isolation. (I believe that\n> fsync() will apply to all dirty kernel buffers for a file, not just\n> those dirtied by the requesting process, so each backend's fsyncs can\n> affect the order in which other backends' writes hit the disk.)\n> Offhand I do not see any problems there, but it's the kind of thing\n> that requires more than offhand thought...\n\nI thought about that too. If the ordering was that important, a\ndatabase managed by backends with -F on could be seriously\ncorrupted. I've never heard of such disasters caused by -F. So my\nconclusion was that it's safe or I had been so lucky. Note that I'm\nnot talking about pg_log vs. 
relations but the ordering among\nrelations.\n\nBTW, Hiroshi has brought up an excellent point #3:\n\n>Session-1\n>begin;\n>update A ...;\n>\n>Session-2\n>begin;\n>select * from B ..;\n>    There's no PostgreSQL shared buffer available.\n>    This backend has to force the flush of a free buffer\n>    page. Unfortunately the page was dirtied by the\n>    above operation of Session-1 and calls pg_fsync()\n>    for the table A. However fsync() is postponed until\n>    commit of this backend.\n>\n>Session-1\n>commit;\n>    There's no dirty buffer page for the table A.\n>    So pg_fsync() isn't called for the table A.\n\nSeems there's no easy solution for this. Maybe now is the time to give\nup my idea...\n--\nTatsuo Ishii\n\n",
"msg_date": "Mon, 07 Feb 2000 21:28:56 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TODO item "
},
{
"msg_contents": "> BTW, Hiroshi has noticed me an excellent point #3:\n> \n> >Session-1\n> >begin;\n> >update A ...;\n> >\n> >Session-2\n> >begin;\n> >select * fromB ..;\n> > There's no PostgreSQL shared buffer available.\n> > This backend has to force the flush of a free buffer\n> > page. Unfortunately the page was dirtied by the\n> > above operation of Session-1 and calls pg_fsync()\n> > for the table A. However fsync() is postponed until\n> > commit of this backend.\n> >\n> >Session-1\n> >commit;\n> > There's no dirty buffer page for the table A.\n> > So pg_fsync() isn't called for the table A.\n> \n> Seems there's no easy solution for this. Maybe now is the time to give\n> up my idea...\n\nI hate to see you give up on this. \n\nDon't tell me we fsync on every buffer write, and not just at\ntransaction commit? That is terrible.\n\nWhat if we set a flag on the file descriptor stating we dirtied/wrote\none of its buffers during the transaction, and cycle through the file\ndescriptors on buffer commit and fsync all involved in the transaction. \nWe also fsync if we close a file descriptor and it was involved in the\ntransaction. We clear the \"involved in this transaction\" flag on commit\ntoo.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 11:31:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> possibly fix #2 by having transaction commit invoke the pg_fsync_pending\n>> scan before it updates pg_log (and then fsyncing pg_log itself again\n>> after).\n\n> I do not understand #2. I call pg_fsync_pending twice in\n> RecordTransactionCommit, one is after FlushBufferPool, and the other\n> is after TansactionIdCommit and FlushBufferPool. Or am I missing\n> something?\n\nOh, OK. That's what I meant. The snippet you posted didn't show where\nyou were calling the fsync routine from.\n\n> I thought about that too. If the ordering was that important, a\n> database managed by backends with -F on could be seriously\n> corrupted. I've never heard of such disasters caused by -F.\n\nThis is why I think that fsync actually offers very little extra\nprotection ;-)\n\n> BTW, Hiroshi has noticed me an excellent point #3:\n\n>> This backend has to force the flush of a free buffer\n>> page. Unfortunately the page was dirtied by the\n>> above operation of Session-1 and calls pg_fsync()\n>> for the table A. However fsync() is postponed until\n>> commit of this backend.\n>> \n>> Session-1\n>> commit;\n>> There's no dirty buffer page for the table A.\n>> So pg_fsync() isn't called for the table A.\n\nOooh, right. Backend A dirties the page, but leaves it sitting in\nshared buffer. Backend B needs the buffer space, so it does the\nfwrite of the page. Now if backend A wants to commit, it can fsync\neverything it's written --- but does that guarantee the page that\nwas actually written by B will get flushed to disk? 
Not sure.\n\nIf the pending-fsync logic is based on either physical fds or vfds\nthen it definitely *won't* work; A might have found the desired page\nsitting in buffer cache to begin with, and never have opened the\nunderlying file at all!\n\nSo it seems you would need to keep a list of all the relation files (and\nsegments) you've written to in the current xact, and open and fsync each\none just before writing/fsyncing pg_log. Even then, you're assuming\nthat fsync applied to a file via an fd belonging to one backend will\nflush disk buffers written to the same file via *other* fds belonging\nto *other* processes. I'm not sure that that is true on all Unixes...\nheck, I'm not sure it's true on any. The fsync(2) man page here isn't\nreal specific.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 11:40:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Don't tell me we fsync on every buffer write, and not just at\n> transaction commit? That is terrible.\n\nIf you don't have -F set, yup. Why did you think fsync mode was\nso slow?\n\n> What if we set a flag on the file descriptor stating we dirtied/wrote\n> one of its buffers during the transaction, and cycle through the file\n> descriptors on buffer commit and fsync all involved in the transaction. \n\nThat's exactly what Tatsuo was describing, I believe. I think Hiroshi\nhas pointed out a serious problem that would make it unreliable when\nmultiple backends are running: if some *other* backend fwrites the page\ninstead of your backend, and it doesn't fsync until *its* transaction is\ndone (possibly long after yours), then you lose the ordering guarantee\nthat is the point of the whole exercise...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 11:47:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item "
},
{
"msg_contents": "At 11:31 AM 2/7/00 -0500, Bruce Momjian wrote:\n\n>I hate to see you give up on this. \n\n>Don't tell me we fsync on every buffer write, and not just at\n>transaction commit? That is terrible.\n\nWon't we have many more options in this area, i.e. increasing performance\nwhile maintaining on-disk data integrity, once WAL is implemented?\n\nsnapshot+WAL = your database so in theory -F on tables and \nthe transaction log would be safe as long as you have a snapshot and\nas long as the WAL is being fsync'd and you have the disk space to\nhold the WAL until you update your snapshot, no?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 08:54:22 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Don't tell me we fsync on every buffer write, and not just at\n> > transaction commit? That is terrible.\n> \n> If you don't have -F set, yup. Why did you think fsync mode was\n> so slow?\n> \n> > What if we set a flag on the file descriptor stating we dirtied/wrote\n> > one of its buffers during the transaction, and cycle through the file\n> > descriptors on buffer commit and fsync all involved in the transaction. \n> \n> That's exactly what Tatsuo was describing, I believe. I think Hiroshi\n> has pointed out a serious problem that would make it unreliable when\n> multiple backends are running: if some *other* backend fwrites the page\n> instead of your backend, and it doesn't fsync until *its* transaction is\n> done (possibly long after yours), then you lose the ordering guarantee\n> that is the point of the whole exercise...\n\nOK, I understand now. You are saying if my backend dirties a buffer,\nbut another backend does the write, would my backend fsync() that buffer\nthat the other backend wrote.\n\nI can't imagine how fsync could flush _only_ the file discriptor buffers\nmodified by the current process. It would have to affect all buffers\nfor the file descriptor.\n\nBSDI says:\n\n Fsync() causes all modified data and attributes of fd to be moved to a\n permanent storage device. This normally results in all in-core modified\n copies of buffers for the associated file to be written to a disk.\n\nLooking at the BSDI kernel, there is a user-mode file descriptor table,\nwhich maps to a kernel file descriptor table. This table can be shared,\nso a file descriptor opened multiple times, like in a fork() call. The\nkernel table maps to an actual file inode/vnode that maps to a file. \nThe only thing that is kept in the file descriptor table is the current\noffset in the file (struct file in BSD). 
There is no mapping of who\nwrote which blocks.\n\nIn fact, I would suggest that any kernel implementation that could track\nsuch things would be pretty broken. I can imagine some cases the use of\nthat mapping of blocks to file descriptors would cause compatibility\nproblems. Those buffers have to be shared by all processes.\n\nSo, I think we are safe if we can either keep that file descriptor open\nuntil commit, or re-open it and fsync it on commit. That assume a\nre-open is hitting the same file. My opinion is that we should just\nfsync it on close and not worry about a reopen.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 12:35:53 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
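[Editor's note: the reopen-and-fsync idea discussed in the message above can be sketched as follows. The file path and function name are invented for illustration; the point is that fsync operates on a file's dirty kernel buffers, not on any per-descriptor record of who wrote them, so a descriptor opened just before commit still covers earlier writes made through other descriptors.]

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical demo: dirty a file through one descriptor, close it
 * without syncing, then fsync through a second descriptor opened
 * later.  The late fsync still flushes the earlier write, because
 * the kernel tracks dirty buffers per file, not per descriptor.
 * Returns 0 on success, -1 on any failure. */
int write_then_fsync_via_reopen(const char *path, const char *data)
{
    int wfd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (wfd < 0)
        return -1;
    if (write(wfd, data, strlen(data)) != (ssize_t) strlen(data)) {
        close(wfd);
        return -1;
    }
    close(wfd);                     /* data may still sit in kernel cache */

    int sfd = open(path, O_WRONLY); /* "re-open it and fsync it on commit" */
    if (sfd < 0)
        return -1;
    int rc = fsync(sfd);            /* flushes pages dirtied via wfd too */
    close(sfd);
    return rc;
}
```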
{
"msg_contents": "> > So, I think we are safe if we can either keep that file descriptor open\n> > until commit, or re-open it and fsync it on commit. That assume a\n> > re-open is hitting the same file. My opinion is that we should just\n> > fsync it on close and not worry about a reopen.\n> \n> I'm pretty sure that the standard is that a close on a file _should_\n> fsync it.\n\nThis is not true. close flushes the user buffers to kernel buffers. It\ndoes not force to physical disk in all cases, I think. There is really\nno need to force them to disk on close. The only time they have to be\nforced to disk is when the system shuts down, or on an fsync call.\n\n> \n> In re the fsync problems...\n> \n> I came across this option when investigating implementing range fsync()\n> for FreeBSD, 'O_FSYNC'/'O_SYNC'.\n> \n> Why not keep 2 file descritors open for each datafile, one opened\n> with O_FSYNC (exists but not documented in FreeBSD) and one normal?\n> This garantees sync writes for all write operations on that fd.\n\nWe actually don't want this. We like to just fsync the file descriptor\nand retroactively fsync all our writes. fsync allows us to decouple the\nwrite and the fsync, which is what we really are attempting to do. Our\ncurrent behavour is to do write/fsync together, which is wasteful.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 13:27:20 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fsync alternatives (was: Re: [HACKERS] TODO item)"
},
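[Editor's note: the decoupling Bruce describes above — several writes, then one retroactive fsync — looks roughly like this. The helper is invented for illustration; one fsync at the end covers every write that preceded it on the file.]

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Invented helper: issue several buffered writes, then flush them all
 * with a single fsync, instead of pairing every write with its own
 * fsync (the "write/fsync together" pattern criticized above). */
int flush_batch(int fd, const char *pages[], int npages)
{
    for (int i = 0; i < npages; i++)
        if (write(fd, pages[i], strlen(pages[i])) < 0)
            return -1;
    return fsync(fd);       /* the only physical-disk wait in the batch */
}
```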
{
"msg_contents": "* Bruce Momjian <[email protected]> [000207 10:14] wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > > Don't tell me we fsync on every buffer write, and not just at\n> > > transaction commit? That is terrible.\n> > \n> > If you don't have -F set, yup. Why did you think fsync mode was\n> > so slow?\n> > \n> > > What if we set a flag on the file descriptor stating we dirtied/wrote\n> > > one of its buffers during the transaction, and cycle through the file\n> > > descriptors on buffer commit and fsync all involved in the transaction. \n> > \n> > That's exactly what Tatsuo was describing, I believe. I think Hiroshi\n> > has pointed out a serious problem that would make it unreliable when\n> > multiple backends are running: if some *other* backend fwrites the page\n> > instead of your backend, and it doesn't fsync until *its* transaction is\n> > done (possibly long after yours), then you lose the ordering guarantee\n> > that is the point of the whole exercise...\n> \n> OK, I understand now. You are saying if my backend dirties a buffer,\n> but another backend does the write, would my backend fsync() that buffer\n> that the other backend wrote.\n> \n> I can't imagine how fsync could flush _only_ the file discriptor buffers\n> modified by the current process. It would have to affect all buffers\n> for the file descriptor.\n> \n> BSDI says:\n> \n> Fsync() causes all modified data and attributes of fd to be moved to a\n> permanent storage device. This normally results in all in-core modified\n> copies of buffers for the associated file to be written to a disk.\n> \n> Looking at the BSDI kernel, there is a user-mode file descriptor table,\n> which maps to a kernel file descriptor table. This table can be shared,\n> so a file descriptor opened multiple times, like in a fork() call. The\n> kernel table maps to an actual file inode/vnode that maps to a file. 
\n> The only thing that is kept in the file descriptor table is the current\n> offset in the file (struct file in BSD). There is no mapping of who\n> wrote which blocks.\n> \n> In fact, I would suggest that any kernel implementation that could track\n> such things would be pretty broken. I can imagine some cases the use of\n> that mapping of blocks to file descriptors would cause compatibility\n> problems. Those buffers have to be shared by all processes.\n> \n> So, I think we are safe if we can either keep that file descriptor open\n> until commit, or re-open it and fsync it on commit. That assume a\n> re-open is hitting the same file. My opinion is that we should just\n> fsync it on close and not worry about a reopen.\n\nI'm pretty sure that the standard is that a close on a file _should_\nfsync it.\n\nIn re the fsync problems...\n\nI came across this option when investigating implementing range fsync()\nfor FreeBSD, 'O_FSYNC'/'O_SYNC'.\n\nWhy not keep 2 file descriptors open for each datafile, one opened\nwith O_FSYNC (exists but not documented in FreeBSD) and one normal?\nThis guarantees sync writes for all write operations on that fd.\n\nMost unices offer an open flag for this type of access although the name\nwill vary (Linux/Solaris uses O_SYNC afaik).\n\nWhen a sync write is needed then use that file descriptor to do the writing,\nand use the normal one for non-sync writes.\n\nThis would fix the problem where another backend causes an out-of-order\nor unsafe fsync to occur.\n\nAnother option is using mmap() and msync() to achieve the same effect; the\nonly problem with mmap() is that under most i386 systems you are limited\nto a < 4gig (2gig with FreeBSD) mapping that would have to be 'windowed'\nover the datafiles, however depending on the locality of accesses this\nmay be much more efficient than read/write semantics.\nNot to mention that a lot of unices have broken mmap() implementations\nand problems with merged vm/buffercache.\n\nYes, I haven't looked at the backend code, just hoping to offer some \nuseful suggestions.\n\n-Alfred\n",
"msg_date": "Mon, 7 Feb 2000 10:36:46 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "fsync alternatives (was: Re: [HACKERS] TODO item)"
},
{
"msg_contents": "> Yes, the way I understand it is that one backend doing the fsync\n> will sync the entire file perhaps forcing a sync in the middle of\n> a somewhat critical update being done by another instance of the\n> backend.\n\nWe don't mind that. Until the transaction is marked as complete, they\ncan fsync anything we want. We just want all stuff modified by a \ntransaction fsynced before a transaction is marked as completed.\n\n> I'm aware of the performance implications sync writes cause, but\n> using fsync after every write seems to cause massive amounts of\n> unessesary disk IO that could be avoided with using explicit\n> sync descriptors with little increase in complexity considering\n> what I understand of the current implementation.\n\nYes.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 13:54:32 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fsync alternatives (was: Re: [HACKERS] TODO item)"
},
{
"msg_contents": "* Bruce Momjian <[email protected]> [000207 11:00] wrote:\n> > > So, I think we are safe if we can either keep that file descriptor open\n> > > until commit, or re-open it and fsync it on commit. That assume a\n> > > re-open is hitting the same file. My opinion is that we should just\n> > > fsync it on close and not worry about a reopen.\n> > \n> > I'm pretty sure that the standard is that a close on a file _should_\n> > fsync it.\n> \n> This is not true. close flushes the user buffers to kernel buffers. It\n> does not force to physical disk in all cases, I think. There is really\n> no need to force them to disk on close. The only time they have to be\n> forced to disk is when the system shuts down, or on an fsync call.\n> \n> > \n> > In re the fsync problems...\n> > \n> > I came across this option when investigating implementing range fsync()\n> > for FreeBSD, 'O_FSYNC'/'O_SYNC'.\n> > \n> > Why not keep 2 file descritors open for each datafile, one opened\n> > with O_FSYNC (exists but not documented in FreeBSD) and one normal?\n> > This garantees sync writes for all write operations on that fd.\n> \n> We actually don't want this. We like to just fsync the file descriptor\n> and retroactively fsync all our writes. fsync allows us to decouple the\n> write and the fsync, which is what we really are attempting to do. Our\n> current behavour is to do write/fsync together, which is wasteful.\n\nYes, the way I understand it is that one backend doing the fsync\nwill sync the entire file perhaps forcing a sync in the middle of\na somewhat critical update being done by another instance of the\nbackend.\n\nSince the current behavior seems to be write/fsync/write/fsync...\ninstead of write/write/write/fsync you may as well try opening the\nfiledescriptor with O_FSYNC on operating systems that support it to\navoid the cross-fsync problem.\n\nAnother option is to use O_FSYNC descriptiors and aio_write to\nallow a sync writes to be 'backgrounded'. 
More and more unix OS's\nare supporting aio nowadays.\n\nI'm aware of the performance implications sync writes cause, but\nusing fsync after every write seems to cause massive amounts of\nunessesary disk IO that could be avoided with using explicit\nsync descriptors with little increase in complexity considering\nwhat I understand of the current implementation.\n\nBasically it would seem to be a good hack until you get the algorithm\nto batch fsyncs working. (write/write/write.../fsync) At that point\nyou may want to window over the files using msync(), but there may\nbe a better way, one that allows a vector of io to be scheduled for\nsync write in one go, rather than a buffer at a time.\n\n-Alfred\n",
"msg_date": "Mon, 7 Feb 2000 11:17:36 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fsync alternatives (was: Re: [HACKERS] TODO item)"
},
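[Editor's note: the two-descriptor scheme Alfred proposes above can be sketched as follows. The struct and function names are invented; the flag spelling differs by platform (O_FSYNC on BSD, O_SYNC on Linux/Solaris), so a fallback define is used.]

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#ifndef O_FSYNC                 /* BSD spelling; Linux and Solaris use O_SYNC */
#define O_FSYNC O_SYNC
#endif

/* One ordinary descriptor for bulk writes, one O_FSYNC/O_SYNC
 * descriptor whose writes reach stable storage before returning. */
struct datafile {
    int fd;                     /* normal, buffered writes */
    int sync_fd;                /* every write is a sync write */
};

int datafile_open(struct datafile *df, const char *path)
{
    df->fd = open(path, O_RDWR | O_CREAT, 0600);
    df->sync_fd = open(path, O_RDWR | O_FSYNC);
    return (df->fd < 0 || df->sync_fd < 0) ? -1 : 0;
}

/* A write that must not be reordered past commit goes through sync_fd,
 * without fsync'ing other backends' pending writes on the same file. */
ssize_t datafile_write_durable(struct datafile *df, const void *buf, size_t n)
{
    return write(df->sync_fd, buf, n);
}
```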
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I can't imagine how fsync could flush _only_ the file discriptor buffers\n> modified by the current process. It would have to affect all buffers\n> for the file descriptor.\n\nYeah, you're probably right. After thinking about it, I can't believe\nthat a disk block buffer inside the kernel has any record of which FD\nit was written by (after all, it could have been dirtied through more\nthan one FD since it was last synced to disk). All it's got is a file\ninode number and a block number within the file. Presumably fsync()\nsearches the buffer cache for blocks that match the FD's inode number\nand schedules I/O for all the ones that are dirty.\n\n> So, I think we are safe if we can either keep that file descriptor open\n> until commit, or re-open it and fsync it on commit. That assume a\n> re-open is hitting the same file. My opinion is that we should just\n> fsync it on close and not worry about a reopen.\n\nThere's still the problem that your backend might never have opened the\nrelation file at all, still less done a write through its fd or vfd.\nI think we would need to have a separate data structure saying \"these\nrelations were dirtied in the current xact\" that is not tied to fd's or\nvfd's. Maybe the relcache would be a good place to keep such a flag.\n\nTransaction commit would look like:\n\n* scan buffer cache for dirty buffers, fwrite each one that belongs\nto one of the relations I'm trying to commit;\n\n* open and fsync each segment of each rel that I'm trying to commit\n(or maybe just the dirtied segments, if we want to do the bookkeeping\nat that level of detail);\n\n* make pg_log entry;\n\n* write and fsync pg_log.\n\nfsync-on-close is probably a waste of cycles. The only way that would\nmatter is if someone else were doing a RENAME TABLE on the rel, thus\npreventing you from reopening it. 
I think we could just put the\nresponsibility on the renamer to fsync the file while he's doing it\n(in fact I think that's already in there, at least to the extent of\nflushing the buffer cache).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 18:16:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item "
},
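[Editor's note: the commit sequence Tom outlines above can be sketched as follows. The paths, list handling, and single-character commit record are invented for illustration; the real backend tracks dirtied relations and pg_log quite differently.]

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Sketch: fsync every relation file dirtied in this transaction
 * (reopening it, since this backend may never have had it open),
 * then append and fsync the commit record.  Returns 0 on success,
 * -1 on the first failure. */
int commit_xact(const char *dirtied_rels[], int nrels, const char *log_path)
{
    for (int i = 0; i < nrels; i++) {
        int fd = open(dirtied_rels[i], O_RDWR);
        if (fd < 0)
            return -1;
        if (fsync(fd) != 0) {       /* flushes writes made by any backend */
            close(fd);
            return -1;
        }
        close(fd);
    }
    int logfd = open(log_path, O_WRONLY | O_CREAT | O_APPEND, 0600);
    if (logfd < 0)
        return -1;
    int rc = (write(logfd, "C", 1) == 1 && fsync(logfd) == 0) ? 0 : -1;
    close(logfd);
    return rc;                      /* commit is durable only after this */
}
```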
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > So, I think we are safe if we can either keep that file descriptor open\n> > > until commit, or re-open it and fsync it on commit. That assume a\n> > > re-open is hitting the same file. My opinion is that we should just\n> > > fsync it on close and not worry about a reopen.\n> >\n> > I'm pretty sure that the standard is that a close on a file _should_\n> > fsync it.\n> \n> This is not true. close flushes the user buffers to kernel buffers. It\n> does not force to physical disk in all cases, I think. \n\nfclose flushes user buffers to kernel buffers. close only frees the file\ndescriptor for re-use.\n",
"msg_date": "Tue, 08 Feb 2000 10:32:51 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fsync alternatives (was: Re: [HACKERS] TODO item)"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Is it still valuable to solve this item in current spec ?\n\nI'd be inclined to forget about it for now, and see what happens\nwith WAL. It looks like a fair amount of work for a problem that\nwill go away anyway in a release or so...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 18:34:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n>\n> > Bruce Momjian <[email protected]> writes:\n> > > Don't tell me we fsync on every buffer write, and not just at\n> > > transaction commit? That is terrible.\n> >\n> > If you don't have -F set, yup. Why did you think fsync mode was\n> > so slow?\n> >\n> > > What if we set a flag on the file descriptor stating we dirtied/wrote\n> > > one of its buffers during the transaction, and cycle through the file\n> > > descriptors on buffer commit and fsync all involved in the\n> transaction.\n> >\n> > That's exactly what Tatsuo was describing, I believe. I think Hiroshi\n> > has pointed out a serious problem that would make it unreliable when\n> > multiple backends are running: if some *other* backend fwrites the page\n> > instead of your backend, and it doesn't fsync until *its* transaction is\n> > done (possibly long after yours), then you lose the ordering guarantee\n> > that is the point of the whole exercise...\n>\n> OK, I understand now. You are saying if my backend dirties a buffer,\n> but another backend does the write, would my backend fsync() that buffer\n> that the other backend wrote.\n>\n> I can't imagine how fsync could flush _only_ the file discriptor buffers\n> modified by the current process. It would have to affect all buffers\n> for the file descriptor.\n>\n> BSDI says:\n>\n> Fsync() causes all modified data and attributes of fd to be\n> moved to a\n> permanent storage device. This normally results in all\n> in-core modified\n> copies of buffers for the associated file to be written to a disk.\n>\n> Looking at the BSDI kernel, there is a user-mode file descriptor table,\n> which maps to a kernel file descriptor table. This table can be shared,\n> so a file descriptor opened multiple times, like in a fork() call. 
The\n> kernel table maps to an actual file inode/vnode that maps to a file.\n> The only thing that is kept in the file descriptor table is the current\n> offset in the file (struct file in BSD). There is no mapping of who\n> wrote which blocks.\n>\n> In fact, I would suggest that any kernel implementation that could track\n> such things would be pretty broken. I can imagine some cases the use of\n> that mapping of blocks to file descriptors would cause compatibility\n> problems. Those buffers have to be shared by all processes.\n>\n> So, I think we are safe if we can either keep that file descriptor open\n> until commit, or re-open it and fsync it on commit. That assume a\n> re-open is hitting the same file. My opinion is that we should just\n> fsync it on close and not worry about a reopen.\n>\n\nI asked about this question 4 months ago but got no answer.\nObviously this needs not only md/fd stuff changes but also bufmgr\nchanges. Keeping a list of dirtied segments for each backend seems\nto work. But I'm afraid of other oversights.\n\nThe problem is that this feature is very difficult to verify.\nIn addition, WAL would solve this item naturally.\n\nIs it still valuable to solve this item in the current spec?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Tue, 8 Feb 2000 08:38:46 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] TODO item"
},
{
"msg_contents": "> > So, I think we are safe if we can either keep that file descriptor open\n> > until commit, or re-open it and fsync it on commit. That assume a\n> > re-open is hitting the same file. My opinion is that we should just\n> > fsync it on close and not worry about a reopen.\n> \n> There's still the problem that your backend might never have opened the\n> relation file at all, still less done a write through its fd or vfd.\n> I think we would need to have a separate data structure saying \"these\n> relations were dirtied in the current xact\" that is not tied to fd's or\n> vfd's. Maybe the relcache would be a good place to keep such a flag.\n> \n> Transaction commit would look like:\n> \n> * scan buffer cache for dirty buffers, fwrite each one that belongs\n> to one of the relations I'm trying to commit;\n> \n> * open and fsync each segment of each rel that I'm trying to commit\n> (or maybe just the dirtied segments, if we want to do the bookkeeping\n> at that level of detail);\n\nBy fsync'ing on close, we can not worry about file descriptors that were\nforced out of the file descriptor cache during the transaction.\n\nIf we dirty a buffer, we have to mark the buffer as dirty, and the file\ndescriptor associated with that buffer needing fsync. 
If someone else\nwrites and removes that buffer from the cache before we get to commit\nit, the file descriptor flag will tell us the file descriptor needs\nfsync.\n\nWe have to:\n\n\twrite our dirty buffers\n\tfsync all file descriptors marked as \"written\" during our transaction\n\tfsync all file descriptors on close when being cycled out of fd cache\n\t(fd close has to write dirty buffers before fsync)\n\nSo we have three states for a write:\n\n\tstill in dirty buffer\n\tfile descriptor marked as dirty/need fsync\n\tfile descriptor removed from cache, fsync'ed on close\n\nSeems this covers all the cases.\n\n> \n> * make pg_log entry;\n> \n> * write and fsync pg_log.\n\nYes.\n\n> \n> fsync-on-close is probably a waste of cycles. The only way that would\n> matter is if someone else were doing a RENAME TABLE on the rel, thus\n> preventing you from reopening it. I think we could just put the\n> responsibility on the renamer to fsync the file while he's doing it\n> (in fact I think that's already in there, at least to the extent of\n> flushing the buffer cache).\n\nI hadn't thought of that case. I was thinking of file descriptor cache\nremoval, or don't they get removed if they are in use? If not, you can\nskip my close examples.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 19:02:19 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Is it still valuable to solve this item in current spec ?\n> \n> I'd be inclined to forget about it for now, and see what happens\n> with WAL. It looks like a fair amount of work for a problem that\n> will go away anyway in a release or so...\n\nBut is seems Tatsuo is pretty close to it. I personally would like to\nsee it in 7.0. Even with WAL, we may decide to allow non-WAL mode, and\nif so, this code would still be useful.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 19:05:01 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "> > So, I think we are safe if we can either keep that file descriptor open\n> > until commit, or re-open it and fsync it on commit. That assume a\n> > re-open is hitting the same file. My opinion is that we should just\n> > fsync it on close and not worry about a reopen.\n> >\n> \n> I asked about this question 4 months ago but got no answer.\n> Obviouly this needs not only md/fd stuff changes but also bufmgr\n> changes. Keeping dirtied list of segments of each backend seems\n> to work. But I'm afraid of other oversights.\n\nI don't think so. We can just mark file descriptors as needing fsync().\nBy doing that, we can spin through the buffer cache for each need_fsync\nfile desciptor, perform any writes needed, and fsync the descriptor. \nSeems like little redesign needed, except for adding the need_fsync\nflag. Should be no more than about 20 lines.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 19:07:33 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n> \n> > > So, I think we are safe if we can either keep that file \n> descriptor open\n> > > until commit, or re-open it and fsync it on commit. That assume a\n> > > re-open is hitting the same file. My opinion is that we should just\n> > > fsync it on close and not worry about a reopen.\n> > \n> > There's still the problem that your backend might never have opened the\n> > relation file at all, still less done a write through its fd or vfd.\n> > I think we would need to have a separate data structure saying \"these\n> > relations were dirtied in the current xact\" that is not tied to fd's or\n> > vfd's. Maybe the relcache would be a good place to keep such a flag.\n> > \n> > Transaction commit would look like:\n> > \n> > * scan buffer cache for dirty buffers, fwrite each one that belongs\n> > to one of the relations I'm trying to commit;\n> > \n> > * open and fsync each segment of each rel that I'm trying to commit\n> > (or maybe just the dirtied segments, if we want to do the bookkeeping\n> > at that level of detail);\n> \n> By fsync'ing on close, we can not worry about file descriptors that were\n> forced out of the file descriptor cache during the transaction.\n> \n> If we dirty a buffer, we have to mark the buffer as dirty, and the file\n> descriptor associated with that buffer needing fsync. If someone else\n\nWhat is the file descriptors associated with buffers ?\nWould you call heap_open() etc each time when a buffer is about \nto be dirtied?\n\nI don't object to you strongly but I ask again. \n\nThere's already -F option for speeding up.\nWho would want non-WAL mode with strict reliabilty after WAL\nis implemented ?\n \nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 8 Feb 2000 10:04:31 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] TODO item"
},
{
"msg_contents": "At 10:04 AM 2/8/00 +0900, Hiroshi Inoue wrote:\n\n>There's already -F option for speeding up.\n>Who would want non-WAL mode with strict reliabilty after WAL\n>is implemented ?\n\nExactly. I suspect WAL will actually run faster, or at least\nwill have that potential when its existence is fully exploited,\nthan non-WAL non -F.\n\nAnd it seems to me that touching something as crucial as\ndisk management in a fundamental way one week before the\nrelease of a hopefully solid beta is pushing things a bit.\n\nBut, then again, I'm the resident paranoid conservative, I\nguess.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 17:06:11 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] TODO item"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> If we dirty a buffer, we have to mark the buffer as dirty, and the file\n>> descriptor associated with that buffer needing fsync. If someone else\n\n> What is the file descriptors associated with buffers ?\n\nI was about to make exactly that remark. A shared buffer doesn't have\nan \"associated file descriptor\", certainly not one that's valid across\nmultiple backends.\n\nAFAICS no bookkeeping based on file descriptors (either kernel FDs\nor vfds) can possibly work correctly in the multiple-backend case.\nWe *have to* do the bookkeeping on a relation basis, and that\npotentially means (re)opening the relation's file at xact commit\nin order to do an fsync. There is no value in having one backend\nfsync an FD before closing the FD, because that does not take\naccount of what other backends may have done or do later with that\nsame file through their own FDs for it. If we do not do an fsync\nat end of transaction, we cannot be sure that writes initiated by\n*other* backends will be complete.\n\n> There's already -F option for speeding up.\n> Who would want non-WAL mode with strict reliabilty after WAL\n> is implemented ?\n\nYes. We have a better solution in the pipeline, so ISTM it's not\nworth expending a lot of effort on a stopgap.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 22:00:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item "
},
{
"msg_contents": "> > > * open and fsync each segment of each rel that I'm trying to commit\n> > > (or maybe just the dirtied segments, if we want to do the bookkeeping\n> > > at that level of detail);\n> > \n> > By fsync'ing on close, we can not worry about file descriptors that were\n> > forced out of the file descriptor cache during the transaction.\n> > \n> > If we dirty a buffer, we have to mark the buffer as dirty, and the file\n> > descriptor associated with that buffer needing fsync. If someone else\n> \n> What is the file descriptors associated with buffers ?\n> Would you call heap_open() etc each time when a buffer is about \n> to be dirtied?\n\nWriteBuffer -> FlushBuffer to flush buffer. Buffer can be either marked\ndirty or written/fsync to disk.\n\nIf written/fsync, smgr_flush -> mdflush -> _mdfd_getseg gets MdfdVec\nstructure of file descriptor. \n\nWhen doing flush here, mark MdfdVec structure new element needs_fsync to\ntrue. Don't do fsync yet.\n\nIf just marked dirty, also mark MdfdVec.needs_fsync as true.\n\nDo we currently write all dirty buffers on transaction commit? We\ncertainly must already do that in fsync mode.\n\nOn commit, run through the virtual file descriptor table and do fsyncs on\nfile descriptors. No need to find the buffers attached to file\ndescriptors. They have already been written by other code. They just\nneed fsync.\n\n\n> There's already -F option for speeding up.\n> Who would want non-WAL mode with strict reliabilty after WAL\n> is implemented ?\n\nLet's see what Vadim says. Seems like a nice performance boost and 7.0\ncould be 6 months away. If we didn't ship with fsync enabled, I\nwouldn't care. Also, Vadim has a new job, so we really can't be sure\nabout WAL in 7.1.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 22:00:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Seems like little redesign needed, except for adding the need_fsync\n> flag. Should be no more than about 20 lines.\n\nIf you think this is a twenty line fix, take a deep breath and back\naway slowly. You have not understood the problem.\n\nThe problem comes in when *some other* backend has written out a\nshared buffer that contained a change that our backend made as part\nof the transaction that it now wants to commit. Without immediate-\nfsync-on-write (the current solution), there is no guarantee that the\nother backend will do an fsync any time soon; it might be busy in\na very long-running transaction. Our backend must fsync that file,\nand it must do so after the other backend flushed the buffer. But\nthere is no existing data structure that our backend can use to\ndiscover that it must do this. The shared buffer cannot record it;\nit might belong to some other file entirely by now (and in any case,\nthe shared buffer is noplace to record per-transaction status info).\nOur backend cannot use either FD or VFD to record it, since it might\nnever have opened the relation file at all, and certainly might have\nclosed it again (and recycled the FD or VFD) before the other backend\nflushed the shared buffer. The relcache might possibly work as a\nplace to record the need for fsync --- but I am concerned about the\nrelcache's willingness to drop entries if they are not currently\nheap_open'd; also, md/fd don't currently use the relcache at all.\n\nThis is not a trivial change.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 22:26:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item "
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> \n> wouldn't care. Also, Vadim has a new job, so we really can't be sure\n> about WAL in 7.1.\n>\n\nOops,it's a big problem.\nIf so,we may have to do something about this item.\nHowever it seems too late for 7.0.\nThis isn't a kind of item which beta could verify.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Tue, 8 Feb 2000 12:27:54 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] TODO item"
},
{
"msg_contents": "At 10:00 PM 2/7/00 -0500, Tom Lane wrote:\n>\"Hiroshi Inoue\" <[email protected]> writes:\n\n>> There's already -F option for speeding up.\n>> Who would want non-WAL mode with strict reliabilty after WAL\n>> is implemented ?\n\n>Yes. We have a better solution in the pipeline, so ISTM it's not\n>worth expending a lot of effort on a stopgap.\n\nThanks to both of you. \n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 20:59:46 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item "
},
{
"msg_contents": "At 10:26 PM 2/7/00 -0500, Tom Lane wrote:\n>Bruce Momjian <[email protected]> writes:\n>> Seems like little redesign needed, except for adding the need_fsync\n>> flag. Should be no more than about 20 lines.\n>\n>If you think this is a twenty line fix, take a deep breath and back\n>away slowly. You have not understood the problem.\n\nAnd, again, thank you.\n\n>This is not a trivial change.\n\nI was actually through that code months ago, wondering why (ahem)\nPG was so stupid about disk I/O and reached the same conclusion.\n\nTherefore, I was more than pleased when a simple fix to get rid\nof fsync's on read-only transactions arose. In my application\nspace, this alone gave a huge performance boost.\n\nWAL...that's it. If Vadim is going to be unavailable because\nof his new job, we'll need to figure out another way to do it.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 21:03:51 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item "
},
{
"msg_contents": "> The problem comes in when *some other* backend has written out a\n> shared buffer that contained a change that our backend made as part\n> of the transaction that it now wants to commit. Without immediate-\n> fsync-on-write (the current solution), there is no guarantee that the\n> other backend will do an fsync any time soon; it might be busy in\n> a very long-running transaction. Our backend must fsync that file,\n> and it must do so after the other backend flushed the buffer. But\n> there is no existing data structure that our backend can use to\n> discover that it must do this. The shared buffer cannot record it;\n> it might belong to some other file entirely by now (and in any case,\n> the shared buffer is noplace to record per-transaction status info).\n> Our backend cannot use either FD or VFD to record it, since it might\n> never have opened the relation file at all, and certainly might have\n> closed it again (and recycled the FD or VFD) before the other backend\n> flushed the shared buffer. The relcache might possibly work as a\n> place to record the need for fsync --- but I am concerned about the\n> relcache's willingness to drop entries if they are not currently\n> heap_open'd; also, md/fd don't currently use the relcache at all.\n\nOK, I will admit I must be wrong, but I would like to understand why.\n\nI am suggesting opening and marking a file descriptor as needing fsync\neven if I only dirty the buffer and not write it. I understand another\nbackend may write my buffer and remove it before I commit my\ntransaction. However, I will be the one to fsync it. I am also\nsuggesting that such file descriptors never get recycled until\ntransaction commit.\n\nIs that wrong?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 8 Feb 2000 01:54:20 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I am suggesting opening and marking a file descriptor as needing fsync\n> even if I only dirty the buffer and not write it. I understand another\n> backend may write my buffer and remove it before I commit my\n> transaction. However, I will be the one to fsync it. I am also\n> suggesting that such file descriptors never get recycled until\n> transaction commit.\n\n> Is that wrong?\n\nI see where you're going, and you could possibly make it work, but\nthere are a bunch of problems. One objection is that kernel FDs\nare a very finite resource on a lot of platforms --- you don't really\nwant to tie up one FD for every dirty buffer, and you *certainly*\ndon't want to get into a situation where you can't release kernel\nFDs until end of xact. You might be able to get around that by\nassociating the fsync-needed bit with VFDs instead of FDs.\n\nWhat may turn out to be a nastier problem is the circular dependency\nthis creates between shared-buffer management and md.c/fd.c. Right now\n(IIRC at 3am) md/fd are clearly at a lower level than bufmgr, but that\nwould stop being true if you make FDs be proxies for dirtied buffers.\nHere is one off-the-top-of-the-head trouble scenario: bufmgr wants to\ndump a buffer that was dirtied by another backend -> needs to open FD ->\nfd.c has no free FDs, needs to close one -> needs to dump and fsync a\nbuffer so it can forget the FD -> bufmgr needs to get I/O lock on two\ndifferent buffers at once -> potential deadlock against another backend\ndoing the reverse. (Assuming you even get that far, and don't hang up\nat the recursive entry to bufmgr trying to get a spinlock you already\nhold...)\n\nPossibly with close study you can prove that no such problem can happen.\nMy point is just that this isn't a trivial change. Is it worth\ninvesting substantial effort on what will ultimately be a dead end?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Feb 2000 03:24:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I am suggesting opening and marking a file descriptor as needing fsync\n> > even if I only dirty the buffer and not write it. I understand another\n> > backend may write my buffer and remove it before I commit my\n> > transaction. However, I will be the one to fsync it. I am also\n> > suggesting that such file descriptors never get recycled until\n> > transaction commit.\n> \n> > Is that wrong?\n> \n> I see where you're going, and you could possibly make it work, but\n> there are a bunch of problems. One objection is that kernel FDs\n> are a very finite resource on a lot of platforms --- you don't really\n> want to tie up one FD for every dirty buffer, and you *certainly*\n> don't want to get into a situation where you can't release kernel\n> FDs until end of xact. You might be able to get around that by\n> associating the fsync-needed bit with VFDs instead of FDs.\n\nOK, at least I was thinking correctly. Yes, there are serious drawbacks\nthat make this pretty hard to implement. Unless Vadim revives this, we\ncan drop it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 8 Feb 2000 04:12:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "> BTW, Hiroshi has noticed me an excellent point #3:\n> \n> >Session-1\n> >begin;\n> >update A ...;\n> >\n> >Session-2\n> >begin;\n> >select * fromB ..;\n> > There's no PostgreSQL shared buffer available.\n> > This backend has to force the flush of a free buffer\n> > page. Unfortunately the page was dirtied by the\n> > above operation of Session-1 and calls pg_fsync()\n> > for the table A. However fsync() is postponed until\n> > commit of this backend.\n> >\n> >Session-1\n> >commit;\n> > There's no dirty buffer page for the table A.\n> > So pg_fsync() isn't called for the table A.\n> \n> Seems there's no easy solution for this. Maybe now is the time to give\n> up my idea...\n\nThinking about a little bit more, I have come across yet another\npossible solution. It is actually *very* simple. Details as follows.\n\nIn xact.c:RecordTransactionCommit() there are two FlushBufferPool\ncalls. One is for relation files and the other is for pg_log. I add\nsync() right after these FlushBufferPool. It will force any pending\nkernel buffers physically be written onto disk, thus should guarantee\nthe ACID of the transaction (see attached code fragment).\n\nThere are two things that we should worry about sync, however.\n\n1. Does sync really wait for the completion of data be written on to\ndisk?\n\nI looked into the man page of sync(2) on Linux 2.0.36:\n\n According to the standard specification (e.g., SVID),\n sync() schedules the writes, but may return before the\n actual writing is done. However, since version 1.3.20\n Linux does actually wait. (This still does not guarantee\n data integrity: modern disks have large caches.)\n\nIt seems that sync(2) blocks until data is written. So it would be ok\nat least with Linux. I'm not sure about other platforms, though.\n\n2. Are we suffered any performance penalty from sync?\n\nSince sync forces *all* dirty buffers on the system be written onto\ndisk, it might be slower than fsync. So I did some testings using\ncontrib/pgbench. Starting postmaster with -F on (and with sync\nmodification), I ran 32 concurrent clients with performing 10\ntransactions each. In total 320 transactions are performed. Each\ntransaction contains an UPDATE and a SELECT to a table that has 1000k\ntuples and an INSERT to another small table. The result showed that -F\n+ sync was actually faster than the default mode (no -F, no\nmodifications). The system is a Red Hat 5.2, with 128MB RAM.\n\n\t\t\t-F + sync\tnormal mode\n--------------------------------------------------------\ntransactions/sec\t3.46\t\t2.93\n\nOf course if there are disk activities other than PostgreSQL, sync\nwould be suffered by it. However, in most cases the system is\ndedicated for only PostgreSQL, and I don't think this is a big problem\nin the real world.\n\nNote that for large COPY or INSERT was much faster than the normal\nmode due to no per-page-fsync.\n\nThinking about all these, I would like to propose we add a new switch\nto postgres to run with -F + sync.\n\n------------------------------------------------------------------------\n\t/*\n\t * If no one shared buffer was changed by this transaction then\n\t * we don't flush shared buffers and don't record commit status.\n\t */\n\tif (SharedBufferChanged)\n\t{\n\t\tFlushBufferPool();\n\t\tsync();\n\t\tif (leak)\n\t\t\tResetBufferPool();\n\n\t\t/*\n\t\t *\thave the transaction access methods record the status\n\t\t *\tof this transaction id in the pg_log relation.\n\t\t */\n\t\tTransactionIdCommit(xid);\n\n\t\t/*\n\t\t *\tNow write the log info to the disk too.\n\t\t */\n\t\tleak = BufferPoolCheckLeak();\n\t\tFlushBufferPool();\n\t\tsync();\n\t}\n\n",
"msg_date": "Wed, 09 Feb 2000 17:22:02 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TODO item "
},
{
"msg_contents": "* Tatsuo Ishii <[email protected]> [000209 00:51] wrote:\n> > BTW, Hiroshi has noticed me an excellent point #3:\n> > \n> > >Session-1\n> > >begin;\n> > >update A ...;\n> > >\n> > >Session-2\n> > >begin;\n> > >select * fromB ..;\n> > > There's no PostgreSQL shared buffer available.\n> > > This backend has to force the flush of a free buffer\n> > > page. Unfortunately the page was dirtied by the\n> > > above operation of Session-1 and calls pg_fsync()\n> > > for the table A. However fsync() is postponed until\n> > > commit of this backend.\n> > >\n> > >Session-1\n> > >commit;\n> > > There's no dirty buffer page for the table A.\n> > > So pg_fsync() isn't called for the table A.\n> > \n> > Seems there's no easy solution for this. Maybe now is the time to give\n> > up my idea...\n> \n> Thinking about a little bit more, I have come across yet another\n> possible solution. It is actually *very* simple. Details as follows.\n> \n> In xact.c:RecordTransactionCommit() there are two FlushBufferPool\n> calls. One is for relation files and the other is for pg_log. I add\n> sync() right after these FlushBufferPool. It will force any pending\n> kernel buffers physically be written onto disk, thus should guarantee\n> the ACID of the transaction (see attached code fragment).\n> \n> There are two things that we should worry about sync, however.\n> \n> 1. Does sync really wait for the completion of data be written on to\n> disk?\n> \n> I looked into the man page of sync(2) on Linux 2.0.36:\n> \n> According to the standard specification (e.g., SVID),\n> sync() schedules the writes, but may return before the\n> actual writing is done. However, since version 1.3.20\n> Linux does actually wait. (This still does not guarantee\n> data integrity: modern disks have large caches.)\n> \n> It seems that sync(2) blocks until data is written. So it would be ok\n> at least with Linux. I'm not sure about other platforms, though.\n\nIt is incorrect to assume that sync() wait until all buffers are\nflushed on any other platform than Linux, I didn't think\nthat Linux even did so but the kernel sources say yes. \n\nSolaris doesn't do this and niether does FreeBSD/NetBSD.\n\nI guess if you wanted to implement this for linux only then it would\nwork, you ought to then also warn people that a non-dedicated db server\ncould experiance different performance using this code.\n\n-Alfred\n",
"msg_date": "Wed, 9 Feb 2000 02:04:48 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> [ use a global sync instead of fsync ]\n\n> 1. Does sync really wait for the completion of data be written on to\n> disk?\n\nLinux is *alone* among Unix platforms in waiting; every other\nimplementation of sync() returns as soon as the last dirty buffer\nis scheduled to be written.\n\n> 2. Are we suffered any performance penalty from sync?\n\nA global sync at the completion of every xact would be disastrous for\nthe performance of anything else on the system.\n\n> However, in most cases the system is dedicated for only PostgreSQL,\n\n\"Most cases\"? Do you have any evidence for that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 09 Feb 2000 10:07:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item "
},
{
"msg_contents": "> > It seems that sync(2) blocks until data is written. So it would be ok\n> > at least with Linux. I'm not sure about other platforms, though.\n> \n> It is incorrect to assume that sync() wait until all buffers are\n> flushed on any other platform than Linux, I didn't think\n> that Linux even did so but the kernel sources say yes. \n\nRight. I have looked at Linux kernel sources and confirmed it.\n\n> Solaris doesn't do this and niether does FreeBSD/NetBSD.\n\nI'm not sure about Solaris since I don't have an access to its source\ncodes. Will look at FreeBSD kernel sources.\n\n> I guess if you wanted to implement this for linux only then it would\n> work, you ought to then also warn people that a non-dedicated db server\n> could experiance different performance using this code.\n\nI just want to have more choices other than with/without -F. With -F\nlooses ACID, without it implies per-page-fsync. Both choices are\npainful. But switching to expensive commercial DBMSs is much more\npainful at least for me.\n\nEven if it would be usefull on Linux only and in a certain situation,\nit would better than nothing IMHO (until WAL comes up).\n--\nTatsuo Ishii\n\n",
"msg_date": "Thu, 10 Feb 2000 00:09:25 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "> Thinking about a little bit more, I have come across yet another\n> possible solution. It is actually *very* simple. Details as follows.\n> \n> In xact.c:RecordTransactionCommit() there are two FlushBufferPool\n> calls. One is for relation files and the other is for pg_log. I add\n> sync() right after these FlushBufferPool. It will force any pending\n> kernel buffers physically be written onto disk, thus should guarantee\n> the ACID of the transaction (see attached code fragment).\n\nInteresting idea. I had proposed this solution long ago. My idea was\nto buffer pg_log writes every 30 seconds. Every 30 seconds, do a sync,\nthen write/sync pg_log. Seemed like a good solution at the time, but\nVadim didn't like it. I think he prefered to do logging, but honestly,\nit was over a year ago, and we could have been benefiting from it all\nthis time.\n\nSecond, I had another idea. What if we fsync()'ed a file descriptor\nonly when we were writing the _last_ dirty buffer for that file. Seems\nin many cases this would be a win. I just don't know how hard that is\nto figure out. Seems there is no need to fsync() if we still have dirty\nbuffers around.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 9 Feb 2000 11:17:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> postgresql has 3 files open (a, b, c), so will the syncer.\n\nThe syncer must have all the files open that are open in any backend?\nWhat happens when it runs into the FDs-per-process limit?\n\n> backend 1 completes a request, communicates to the syncer that a flush\n> is needed.\n> syncer starts by fsync'ing 'a'\n> backend 2 completes a request, communicates to the syncer\n> syncer continues with 'b' then 'c'\n> syncer responds to backend 1 that it's safe to proceed.\n> syncer fsyncs 'a' again\n> syncer responds to backend 2 that it's all completed.\n> effectively the fsync of 'b' and 'c' have been batched.\n\nAnd it's safe to update pg_log when?\n\nI'm failing to see where the advantage is compared to the backends\nissuing their own fsyncs...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 09 Feb 2000 18:27:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item "
},
{
"msg_contents": "> Ok, here's a nifty idea, a slave process called pgsyncer. \n> \n> At the end of a transaction a backend asks the syncer to fsync all files.\n> \n> Now here's the cool part, this avoids the non-portability of the Linux\n> sync() problem and at the same time restricts the syncing to postgresql\n> and reduces 'cross-fsync' issues.\n> \n> Imagine:\n> \n> postgresql has 3 files open (a, b, c), so will the syncer.\n> backend 1 completes a request, communicates to the syncer that a flush\n> is needed.\n> syncer starts by fsync'ing 'a'\n> backend 2 completes a request, communicates to the syncer\n> syncer continues with 'b' then 'c'\n> syncer responds to backend 1 that it's safe to proceed.\n> syncer fsyncs 'a' again\n> syncer responds to backend 2 that it's all completed.\n> \n> effectively the fsync of 'b' and 'c' have been batched.\n> \n> It's just an elevator algorithm, perhaps this can be done without\n> a seperate slave process?\n\nIf you go to the hackers archive, you will see an implementation under\nsubject \"Bufferd loggins/pg_log\" dated November 1997. We have gone over\n2 years without this option, and it is going to be even longer before it\nis available via WAL.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 9 Feb 2000 18:28:10 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "* Tatsuo Ishii <[email protected]> [000209 07:32] wrote:\n> > > It seems that sync(2) blocks until data is written. So it would be ok\n> > > at least with Linux. I'm not sure about other platforms, though.\n> > \n> > It is incorrect to assume that sync() wait until all buffers are\n> > flushed on any other platform than Linux, I didn't think\n> > that Linux even did so but the kernel sources say yes. \n> \n> Right. I have looked at Linux kernel sources and confirmed it.\n> \n> > Solaris doesn't do this and niether does FreeBSD/NetBSD.\n> \n> I'm not sure about Solaris since I don't have an access to its source\n> codes. Will look at FreeBSD kernel sources.\n> \n> > I guess if you wanted to implement this for linux only then it would\n> > work, you ought to then also warn people that a non-dedicated db server\n> > could experiance different performance using this code.\n> \n> I just want to have more choices other than with/without -F. With -F\n> looses ACID, without it implies per-page-fsync. Both choices are\n> painful. But switching to expensive commercial DBMSs is much more\n> painful at least for me.\n> \n> Even if it would be usefull on Linux only and in a certain situation,\n> it would better than nothing IMHO (until WAL comes up).\n\nOk, here's a nifty idea, a slave process called pgsyncer. \n\nAt the end of a transaction a backend asks the syncer to fsync all files.\n\nNow here's the cool part, this avoids the non-portability of the Linux\nsync() problem and at the same time restricts the syncing to postgresql\nand reduces 'cross-fsync' issues.\n\nImagine:\n\npostgresql has 3 files open (a, b, c), so will the syncer.\nbackend 1 completes a request, communicates to the syncer that a flush\n is needed.\nsyncer starts by fsync'ing 'a'\nbackend 2 completes a request, communicates to the syncer\nsyncer continues with 'b' then 'c'\nsyncer responds to backend 1 that it's safe to proceed.\nsyncer fsyncs 'a' again\nsyncer responds to backend 2 that it's all completed.\n\neffectively the fsync of 'b' and 'c' have been batched.\n\nIt's just an elevator algorithm, perhaps this can be done without\na seperate slave process?\n\n-Alfred\n",
"msg_date": "Wed, 9 Feb 2000 15:44:22 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> Tatsuo Ishii <[email protected]> writes:\n> > [ use a global sync instead of fsync ]\n> \n> > 1. Does sync really wait for the completion of data be written on to\n> > disk?\n> \n> Linux is *alone* among Unix platforms in waiting; every other\n> implementation of sync() returns as soon as the last dirty buffer\n> is scheduled to be written.\n> \n> > 2. Are we suffered any performance penalty from sync?\n> \n> A global sync at the completion of every xact would be disastrous for\n> the performance of anything else on the system.\n> \n> > However, in most cases the system is dedicated for only PostgreSQL,\n> \n> \"Most cases\"? Do you have any evidence for that?\n>\n\nTatsuo is afraid of the delay of WAL.\nOTOH, it's not so easy to solve this item in current spec.\nProbably he wants a quick and simple solution.\n\nHis solution is only for limited OS but is very simple.\nMoreover it would make FlushBufferPool() more reliable(\nI don't understand why FlushBufferPool() is allowed to not\ncall fsync() per page.).\n\nThe implementation would be in time for 7.0.\nIs a temporary option until WAL bad ? \n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 10 Feb 2000 09:32:22 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] TODO item "
}
] |
[
{
"msg_contents": "Hi,\n\nI had a request from bulgarian user of postgres. He complained\nabout non-working locale. His system is MANDRAKE 7.0 which comes\nwith postgres 6.5.3 I believe. After several messages we found\nthat problem was in startup script /etc/init.d/rc3.d\n su -l postgres -c 'postmaster .......'\n The problem was '-l', after removing it all problems were solved !\nI'm not an expert in su, at least I don't know what '-l' is supposed\nfor, but it's worth to describe the problem and let people from\nMANDRAKE to know. \n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n",
"msg_date": "Sat, 5 Feb 2000 21:15:35 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Linux MANDRAKE startup startup script is broken ?"
},
{
"msg_contents": "Oleg Bartunov wrote:\n> \n> Hi,\n> \n> I had a request from bulgarian user of postgres. He complained\n> about non-working locale. His system is MANDRAKE 7.0 which comes\n> with postgres 6.5.3 I believe. After several messages we found\n> that problem was in startup script /etc/init.d/rc3.d\n> su -l postgres -c 'postmaster .......'\n> The problem was '-l', after removing it all problems were solved !\n\n?!?!?!? Do something for me: add a couple of lines in\n/etc/rc.d/init.d/postgresql after the postmaster start:\nsu -l postgres -c 'set >/var/lib/pgsql/envvars-l.lst'\nsu postgres -c 'set >/var/lib/pgsql/envvaqrs-no-l.lst'\n\nAnd e-mail me the two '*.lst' files out of /var/lib/pgsql.\n\n> I'm not an expert in su, at least I don't know what '-l' is supposed\n\n>From man su:\nSU(1) FSF SU(1)\n\nNAME\n su - run a shell with substitute user and group IDs\n\nSYNOPSIS\n su [OPTION]... [-] [USER [ARG]...]\n\nDESCRIPTION\n Change the effective user id and group id to that of USER.\n\n -, -l, --login\n make the shell a login shell\n......\n\n> for, but it's worth to describe the problem and let people from\n> MANDRAKE to know.\n\nThe same problem should manifest itself in RedHat, which is what I build\nthe RPM's for. Mandrake has been taking the RedHat RPM's and using\nthem, with modifications, up till now, so, if I fix this in the RedHat\nRPM's, the Mandrake RPM's will follow from Mandrake shortly.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 05 Feb 2000 15:37:00 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux MANDRAKE startup startup script is broken ?"
},
{
"msg_contents": "\nOn 05-Feb-2000 Oleg Bartunov wrote:\n> Hi,\n> \n> I had a request from bulgarian user of postgres. He complained\n> about non-working locale. His system is MANDRAKE 7.0 which comes\n> with postgres 6.5.3 I believe. After several messages we found\n> that problem was in startup script /etc/init.d/rc3.d\n> su -l postgres -c 'postmaster .......'\n> The problem was '-l', after removing it all problems were solved !\n> I'm not an expert in su, at least I don't know what '-l' is supposed\n> for, but it's worth to describe the problem and let people from\n> MANDRAKE to know. \n\nThe -l switch causes su to emulate the login procedure, \ni.e. it rewrites the whole environment. \nI use a simple program to avoid this kind of collision,\nand an appropriate startup script\n\n(see below my signature)\n\n\n-- \nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n\n==================== cat ===========================\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <sys/types.h>\n#include <unistd.h>\n#include <pwd.h>\n\n\nint main(int argc, char *argv[])\n{\n struct passwd *pw;\n uid_t u;\n \n if (!argv[1])\n { fprintf(stderr,\"usage: su_postgres command\\n\");\n exit(0);\n }\n\n pw = getpwnam(\"postgres\");\n if (!pw)\n { fprintf(stderr, \"user postgres doesn't exist\\n\");\n exit(0);\n }\n setuid(pw->pw_uid);\n seteuid(pw->pw_uid);\n\n u = geteuid();\n if( u != pw->pw_uid)\n { fprintf(stderr,\"Can\\'t change uid to %d\\n\", pw->pw_uid);\n exit(0);\n }\n system(argv[1]);\n\n}\n\n=================================================================\n# $Id: S81pgsql.in,v 1.2 1999/08/31 14:21:19 dms Exp $\n\nPG_HOME=\"/usr/local/pgsql\"\nPG_DATA=\"$PG_HOME/data\"\nUDS=\"/tmp/.s.PGSQL.5432\"\n\nPS=\"@PS@\"\nGREP=\"@GREP@\"\n\ncase \"$1\" in\n'start')\n # If no postgres run, remove UDS and start postgres.\n pid=\n set -- `$PS | $GREP postmaster | $GREP -v grep`\n [ $? -eq 0 ] && pid=$1\n\n if [ -z \"$pid\" ]; then\n rm -f \"$UDS\"\n $PG_HOME/bin/su_postgres \"$PG_HOME/bin/postmaster -D $PG_DATA -b\n $PG_HOME/bin/postgres -i -S -o -F &\"\n echo \"Postgres started\"\n else\n echo \"Postmaster already run with pid $pid\"\n fi\n ;;\n'stop')\n pid=\n set -- `$PS | $GREP postmaster | $GREP -v grep`\n [ $? -eq 0 ] && pid=$1\n\n if [ -z \"$pid\" ]; then\n echo \"Postgres not run\"\n else\n echo \"Stopping postmaster with pid $pid\" \n kill $pid\n fi\n\n ;;\n*)\n echo \"USAGE: $0 {start | stop}\"\n ;;\nesac\n\n=================================================================\n",
"msg_date": "Sun, 06 Feb 2000 00:01:09 +0300 (MSK)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Linux MANDRAKE startup startup script is broken ?"
}
] |
[
{
"msg_contents": "We got a little dispute in the FKEY project :-)\n\n    In section 11.9, the SQL3 draft explicitly describes what to\n    do for referential actions ON DELETE and ON UPDATE. First,\n    there seems to be an incompatibility between SQL3 and SQL-92.\n    While Date describes and Oracle implements NO ACTION to raise\n    an exception if a PK delete leaves an unsatisfied foreign\n    key, the SQL3 specs explicitly define that behaviour for the\n    RESTRICT action.\n\n    Second, there's absolutely nothing said about anything to do\n    for NO ACTION in SQL3. Thus, our current implementation in\n    fact doesn't do anything meaningful. That makes it totally\n    legal to delete a PK leaving an unsatisfied FK behind,\n    resulting in what is in fact a violation. And NO ACTION is the\n    default if no referential actions are given explicitly in the\n    schema.\n\n    Don Baccus now suggested to interpret NO ACTION as \"if it\n    would result in a violation, then silently roll back this\n    update for the PK row in question\". Not to speak of the\n    technical problems arising from an attempt to do so, but as\n    said, such a behaviour is nowhere mentioned in the SQL3\n    draft. OTOH it would close the possible violation hole in\n    our implementation of FOREIGN KEY.\n\n    What do others think about it? We need a decision urgently, or\n    going for the suppress/rollback will cause a release delay,\n    definitely.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 5 Feb 2000 21:04:16 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "FOREIGN KEY !!!!!"
},
{
"msg_contents": "At 09:04 PM 2/5/00 +0100, Jan Wieck wrote:\n>We got a little dispute in the FKEY project :-)\n\nEtc...Jan and I have crossed a couple of e-mails.\n\nAfter he and I tossed our thoughts back-and-forth it appeared\nto both of us that SQL3 seemed to be defining \"NO ACTION\"\ndifferently than in SQL92.\n\nThen I remembered that Date's SQL92 primer has an appendix\non SQL3. I could've saved us all a bunch of trouble if I\nremembered earlier...\n\nBy the time you and I read this, Jan and I might well be in\n\"what exactly should we implement now that we know how it is\nSUPPOSED to work\" mode, rather than \"how is it supposed to\nwork?\" mode.\n\nFor those into self-flagellation and other forms of self-inflicted\npain, spend an hour or so with the SQL3 standard trying to figure\nout how \"NO ACTION\" is supposed to work and how it differs from\n\"RESTRICT\" before cheating and reading this excerpt from Date.\n\nHere's my note to Jan that he didn't quite have a chance to read\nbefore posting to the hacker's list:\n\n\"OK, mystery solved, I remembered that Date has an appendix on SQL3.\nFortunately, he has a short section on \"RESTRICT\" vs. \"NO ACTION\".\n\nWe're all wrong :)\n\n>From his SQL3 appendix...\n\nF.4 INTEGRITY\n\nReferential Action RESTRICT\n\nIn addition to ... CASCADE, SET NULL [etc] ... SQL3 supports\na new [referential action] RESTRICT. RESTRICT is very similar - but\nnot quite identical - to NO ACTION. The subtle difference between\nthem is as follows. Note: to fix our ideas, we concentrate here\non the delete rule; the implications for the update rule are\nessentially similar.\n\no Let T1 and T2 be the referenced table and the referencing\n table, respectively; let R1 be a row of T1, let R2 be a row\n of T2 that corresponds to row R1 under the referential \n constraint in question. What happens if an attempt is made\n to delete row R1?\n\no Under NO ACTION [equiv. 
to SQL92] the system - conceptually,\n at least - actually performs the requested DELETE, discovers\n row R2 now violates the constraint, and so undoes the DELETE.\n\no Under RESTRICT, by contrast, the system realizes \"ahead of\n time\" that row R2 exists and will violate the constraint if\n R1 is deleted, and so rejects the DELETE out of hand.\n\"\n\nThe standard also mentions (I've dug around a bit) that RESTRICT\nraises a \"restrict violation\" exception. The \"NO ACTION\" case\nconceptually might raise an \"integrity constraint violation\"\ninstead, and perhaps to be compliant MUST raise that constraint.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sat, 05 Feb 2000 12:27:38 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FOREIGN KEY !!!!!"
},
{
"msg_contents": "> o Under RESTRICT, by contrast, the system realizes \"ahead of\n> time\" that row R2 exists and will violate the constraint if\n> R1 is deleted, and so rejects the DELETE out of hand.\n\n That'd mean in last consequence, that RESTRICT actions aren't\n DEFERRABLE, while the rest of their constraint definition is!\n Anyway, cannot work with the actual implementation of the\n trigger queue, so we could either make RESTRICT and NO ACTION\n identical (except for different ERROR messages), or leave the\n SQL3 RESTRICT out of 7.0 while changing NO ACTION to fire the\n message.\n\n I'd prefer to have them identical in 7.0, because according\n to Date they have no semantic difference, so it'll buy us\n little if we complicate the trigger stuff more than required\n right now.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 5 Feb 2000 21:30:04 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] FOREIGN KEY !!!!!"
},
{
"msg_contents": "At 09:30 PM 2/5/00 +0100, Jan Wieck wrote:\n>> o Under RESTRICT, by contrast, the system realizes \"ahead of\n>> time\" that row R2 exists and will violate the constraint if\n>> R1 is deleted, and so rejects the DELETE out of hand.\n\n> That'd mean in last consequence, that RESTRICT actions aren't\n> DEFERRABLE, while the rest of their constraint definition is!\n\nThat's how I read it, too. Pardon me while I run off to vomit in\nthe toilet.\n\n> Anyway, cannot work with the actual implementation of the\n> trigger queue, so we could either make RESTRICT and NO ACTION\n> identical (except for different ERROR messages), or leave the\n> SQL3 RESTRICT out of 7.0 while changing NO ACTION to fire the\n> message.\n\n> I'd prefer to have them identical in 7.0, because according\n> to Date they have no semantic difference, so it'll buy us\n> little if we complicate the trigger stuff more than required\n> right now.\n\nIf others on the list agree, I think this is an excellent idea. I\nsee no semantic difference that the application will see, either,\nother than a difference in execution time.\n\nRaising the exception before the delete or update seems more an efficiency\nhack than anything, i.e. it's much less expensive to short-circuit the\ndelete/update rather than finish it, check afterwards, and roll it\nback.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sat, 05 Feb 2000 12:47:42 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FOREIGN KEY !!!!!"
}
] |
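Date's distinction between the two delete rules, quoted above, can be modeled in a few lines. This is a toy sketch, not PostgreSQL's trigger machinery; the rows and error strings are invented. It illustrates why Jan and Don conclude the two rules are semantically identical: either way the DELETE fails and the table is left unchanged, and only the timing of the check differs.

```python
# Toy model of Date's two delete rules (not PostgreSQL's triggers).
# RESTRICT rejects the DELETE "ahead of time"; NO ACTION conceptually
# performs it, discovers the dangling reference, and undoes it.

def delete_pk(pk_rows, fk_refs, key, mode):
    if mode == "RESTRICT":
        if key in fk_refs:                 # realize the problem up front
            raise ValueError("restrict violation")
        pk_rows.remove(key)
    elif mode == "NO ACTION":
        snapshot = list(pk_rows)
        pk_rows.remove(key)                # conceptually perform the DELETE
        if key in fk_refs:                 # discover the violation afterwards
            pk_rows[:] = snapshot          # ...and undo the DELETE
            raise ValueError("integrity constraint violation")

results = {}
for mode in ("RESTRICT", "NO ACTION"):
    pk = [1, 2, 3]
    try:
        delete_pk(pk, {2}, 2, mode)        # row 2 is referenced by an FK
    except ValueError as err:
        results[mode] = (str(err), pk == [1, 2, 3])
print(results)
```

Both modes raise and leave the table untouched; the only externally visible difference is the error text and the fact that the check-ahead variant cannot be deferred to commit time.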
[
{
"msg_contents": "Lory,\n\nhere is what the people responsible for the RPMs suggested doing.\nAs I'm not a MANDRAKE user, could you please do\nwhat Lamar requested and send the results to him.\n\n\tRegards,\n\n\t\tOleg\n\nPS.\nLamar, my Slackware 7.0 has:\nNAME\n su - Change user ID or become super-user\n\nSYNOPSIS\n su [-] [username [args]]\n\nBut you're right about su from the GNU Shell Utilities - it has a -l option.\n\nIf MANDRAKE is just a modified Redhat distribution, I probably know a\npossible reason why Lory's setup didn't work. \nI remember I had a problem with Redhat 6.1 startup files on some \nsystem (I didn't install Redhat). It's /etc/sysconfig/i18n which\nwasn't configured properly. I spent several hours trying to figure out\nwhy compiled postgres won't work properly with the locale I directly\nspecified in the postgres startup script. I found LC_ALL was specified\nas 'C' (or something else by default, don't remember) and LC_ALL has \nhigher precedence than LC_CTYPE, LC_COLLATE, and LANG, which were\nspecified in the script. So I just redefined LC_ALL to match LC_CTYPE, LC_COLLATE, \nand LANG and everything became fine. I still don't understand LC_ALL,\nbut I think it's worth mentioning somewhere this possible\nsource of confusion for Redhat people.\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n---------- Forwarded message ----------\nDate: Sat, 05 Feb 2000 15:37:00 -0500\nFrom: Lamar Owen <[email protected]>\nTo: Oleg Bartunov <[email protected]>\nCc: [email protected]\nSubject: Re: [HACKERS] Linux MANDRAKE startup startup script is broken ?\n\nOleg Bartunov wrote:\n> \n> Hi,\n> \n> I had a request from bulgarian user of postgres. He complained\n> about non-working locale. 
His system is MANDRAKE 7.0 which comes\n> with postgres 6.5.3 I believe. After several messages we found\n> that problem was in startup script /etc/init.d/rc3.d\n> su -l postgres -c 'postmaster .......'\n> The problem was '-l', after removing it all problems were solved !\n\n?!?!?!? Do something for me: add a couple of lines in\n/etc/rc.d/init.d/postgresql after the postmaster start:\nsu -l postgres -c 'set >/var/lib/pgsql/envvars-l.lst'\nsu postgres -c 'set >/var/lib/pgsql/envvaqrs-no-l.lst'\n\nAnd e-mail me the two '*.lst' files out of /var/lib/pgsql.\n\n> I'm not an expert in su, at least I don't know what '-l' is supposed\n\n>From man su:\nSU(1) FSF SU(1)\n\nNAME\n su - run a shell with substitute user and group IDs\n\nSYNOPSIS\n su [OPTION]... [-] [USER [ARG]...]\n\nDESCRIPTION\n Change the effective user id and group id to that of USER.\n\n -, -l, --login\n make the shell a login shell\n......\n\n> for, but it's worth to describe the problem and let people from\n> MANDRAKE to know.\n\nThe same problem should manifest itself in RedHat, which is what I build\nthe RPM's for. Mandrake has been taking the RedHat RPM's and using\nthem, with modifications, up till now, so, if I fix this in the RedHat\nRPM's, the Mandrake RPM's will follow from Mandrake shortly.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n",
"msg_date": "Sat, 5 Feb 2000 23:54:27 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Linux MANDRAKE startup startup script is broken ? (fwd)"
}
] |
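Oleg's observation follows from POSIX locale precedence: when LC_ALL is set, it overrides every per-category variable, so an LC_ALL=C left in /etc/sysconfig/i18n silently defeats any LC_CTYPE or LC_COLLATE exported by the startup script. A minimal demonstration of that precedence (the locale names are arbitrary; only the universally available "C" locale is assumed):

```python
import os
import subprocess
import sys

# Sketch of the precedence rule Oleg ran into: POSIX gives LC_ALL
# priority over the per-category variables, so setting LC_CTYPE in the
# startup script is futile while LC_ALL is also set.
probe = [sys.executable, "-c",
         "import locale; print(locale.setlocale(locale.LC_CTYPE, ''))"]

env = {"PATH": os.environ.get("PATH", "/usr/bin:/bin"),
       "LC_ALL": "C",        # the system-wide setting from i18n config
       "LC_CTYPE": "POSIX"}  # the category the script tried to set
winner = subprocess.run(probe, env=env,
                        capture_output=True, text=True).stdout.strip()
print(winner)
```

The probe resolves to "C" — LC_ALL wins — which is why Oleg's fix was to redefine LC_ALL itself rather than the individual categories.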
[
{
"msg_contents": "I am about to implement some changes to the planner/optimizer's cost\nmodel, following up to the thread on pghackers beginning on 20 Jan.\nThe main conclusion of that thread was that we needed to charge more for\na page fetched nonsequentially than for a page fetched sequentially.\nAfter further investigation I have concluded that it is also appropriate\nto include explicit modeling of the cost of evaluation of WHERE clauses.\nFor example, using the regression database and a query like\n\nselect * from tenk1 where\n(unique1 = 1 and unique2 = 101) or\n(unique1 = 2 and unique2 = 102) or\n(unique1 = 3 and unique2 = 103) or\n... 100 OR clauses ...\n(unique1 = 100 and unique2 = 200);\n\n(which is not too implausible for certain automatic query generators),\nI observe that a sequential scan takes about 6 seconds, vs. less than\na second for a similar query with only 10 clauses. That says that the\ncost of evaluating a WHERE clause this large is far from negligible.\nThe optimizer needs to account for this because different query plans\ncan have a considerable impact on the number of tuples that the WHERE\nclause is evaluated for --- in this example, if we use indexscans to\npull out just the tuples with the right values of 'unique1', then the\nWHERE clause need only be checked at 100 tuples, not all 10000.\n\nI believe it would be reasonable to charge a certain amount per operator\nor function appearing in the WHERE clause in order to account for this\neffect. (Currently I see no need to model the cost of evaluating the\ntargetlist expressions. 
The same expressions should get evaluated for\nthe same tuples no matter what query plan the optimizer picks, so we\nmight as well just leave that cost out of our comparisons.)\n\nAlso, as was previously mentioned on pghackers, I would like to add SET\nvariables to control enabling/disabling of particular query plan types,\nso that different plans can be checked with less hassle than restarting\npsql with a new PGOPTIONS setting.\n\nThis all leads to the following proposal for redoing the optimizer\nplan cost SET variables. The variables proposed below would replace\nCOST_HEAP and COST_INDEX, which are poorly named IMHO and are definitely\nvery misleadingly documented at present.\n\n(Note that all costs will still be referenced to the cost of a disk page\nfetch. We will take 1.0 as the cost of a sequential page fetch.)\n\n\nSET variable name\tInternal variable\tProposed default\n\nRANDOM_PAGE_COST\trandom_page_cost\t4.0\n\nCost of fetching a disk block nonsequentially (as a multiple of the cost\nof a sequential block fetch).\n\nCPU_TUPLE_COST\t\tcpu_tuple_cost\t\t0.01\n\nCost of CPU time per tuple processed within a query (as a fraction of\nthe cost of a sequential disk block fetch). This renames the existing\nSET variable COST_HEAP (cpu_page_weight); but the default value is\nsmaller than it used to be, since WHERE clause evaluation will now be\naccounted for separately.\n\nCPU_INDEX_TUPLE_COST\tcpu_index_tuple_cost\t0.001\n\nCost of CPU time per index tuple processed within a query (as a fraction\nof the cost of a sequential disk block fetch). 
This renames the\nexisting SET variable COST_INDEX (cpu_index_page_weight); but the\ndefault value is much smaller than it used to be, since the operator\nevaluation cost will account for the bulk of the cost of visiting an\nindex tuple.\n\nCPU_OPERATOR_COST\tcpu_operator_cost\t0.0025\n\nCost of CPU time per operator or function evaluated in a WHERE clause.\nNote that this would apply to operators evaluated at index tuples as\nwell as those evaluated against heap tuples.\n(The proposed default corresponds to a ratio of 5 microsec against 2\nmillisec for a sequential block fetch, which seems to be about right\non my workstation.)\n\nENABLE_SEQSCAN\t\tenable_seqscan\t\tON\n\nENABLE_INDEXSCAN\tenable_indexscan\tON\n\nENABLE_TIDSCAN\t\tenable_tidscan\t\tON\n\nENABLE_SORT\t\tenable_sort\t\tON\n\nENABLE_NESTLOOP\t\tenable_nestloop\t\tON\n\nENABLE_MERGEJOIN\tenable_mergejoin\tON\n\nENABLE_HASHJOIN\t\tenable_hashjoin\t\tON\n\nProvide access via SET to the already-existing internal optimizer\ncontrol flags.\n\nCurrently, it is possible to have COST_HEAP and COST_INDEX set\nautomatically during connection startup; libpq will do that if\nthe environment variables PGCOSTHEAP and/or PGCOSTINDEX are defined\non the client side. If we want to continue that behavior, the\nenvironment variables for these variables would be named\nPGRANDOMPAGECOST etc (remove underscores and prepend PG).\nI'm not sure if we want to continue inventing client-side environment\nvariables, however.\n\n\nComments? Ideas for better names? Anyone object to renaming the\nexisting variables? (BTW, although it could be argued that this\nmight break existing scripts that set COST_HEAP or COST_INDEX,\nI doubt that there are any ... and given the existing doco,\nI doubt even more that anyone is setting appropriate values ...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 05 Feb 2000 16:29:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proposal for new SET variables for optimizer costs"
},
{
"msg_contents": "Looks great. I wouldn't change a thing in your proposal.\n\n> I am about to implement some changes to the planner/optimizer's cost\n> model, following up to the thread on pghackers beginning on 20 Jan.\n> The main conclusion of that thread was that we needed to charge more for\n> a page fetched nonsequentially than for a page fetched sequentially.\n> After further investigation I have concluded that it is also appropriate\n> to include explicit modeling of the cost of evaluation of WHERE clauses.\n> For example, using the regression database and a query like\n> \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 5 Feb 2000 17:01:59 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Proposal for new SET variables for optimizer costs"
},
{
"msg_contents": "At 16:29 5/02/00 -0500, Tom Lane wrote:\n>\n>SET variable name\tInternal variable\tProposed default\n>\n>RANDOM_PAGE_COST\trandom_page_cost\t4.0\n>\n>Cost of fetching a disk block nonsequentially (as a multiple of the cost\n>of a sequential block fetch).\n>\n>CPU_TUPLE_COST\t\tcpu_tuple_cost\t\t0.01\n>\n>\n>CPU_INDEX_TUPLE_COST\tcpu_index_tuple_cost\t0.001\n>\n>CPU_OPERATOR_COST\tcpu_operator_cost\t0.0025\n>\n>ENABLE_SEQSCAN\t\tenable_seqscan\t\tON\n>\n>ENABLE_INDEXSCAN\tenable_indexscan\tON\n>\n>ENABLE_TIDSCAN\t\tenable_tidscan\t\tON\n>\n>ENABLE_SORT\t\tenable_sort\t\tON\n>\n>ENABLE_NESTLOOP\t\tenable_nestloop\t\tON\n>\n>ENABLE_MERGEJOIN\tenable_mergejoin\tON\n>\n>ENABLE_HASHJOIN\t\tenable_hashjoin\t\tON\n>\n\nAny chance of prefixing the 'set' variable names with 'PG_' or 'PG_OPT_' or\nsomething similar? Or doing something else to differentiate them from\nuser-declared SQL variables? I have no idea if user-declared SQL variables\nare an SQL92 thing, but these variables are 'system' things, and some kind\nof differentiation seems like a good idea.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 06 Feb 2000 10:27:28 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Proposal for new SET variables for optimizer\n costs"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> Any chance of prefixing the 'set' variable names with 'PG_' or 'PG_OPT_' or\n> something similar? Or doing something else to differentiate them from\n> user-declared SQL variables?\n\nI see no need to do that, since the *only* place these names exist is\nin the SET command (and its friends SHOW and RESET), and SET exists only\nto set system control variables. There are no user-declared SQL\nvariables.\n\nThe names are quite long and underscore-filled enough without adding\nunnecessary prefixes, IMHO ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 05 Feb 2000 18:31:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Proposal for new SET variables for optimizer costs "
},
{
"msg_contents": "At 18:31 5/02/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> Any chance of prefixing the 'set' variable names with 'PG_' or 'PG_OPT_' or\n>> something similar? Or doing something else to differentiate them from\n>> user-declared SQL variables?\n>\n>I see no need to do that, since the *only* place these names exist is\n>in the SET command (and its friends SHOW and RESET), and SET exists only\n>to set system control variables. There are no user-declared SQL\n>variables.\n>\n>The names are quite long and underscore-filled enough without adding\n>unnecessary prefixes, IMHO ;-)\n\nI agree, given their complexity, they are unlikely to conflict with future\nSQL names, but the SET statement *is* part of the SQL standard, and I\nthought it would be good to be cautious in the names you choose. This would\navoid any possible future conflict, as well as make it clear from the\noutset that they are *not* standard SQL names.\n\nAnother option would be to add another command, eg. 'PG', which is used for\nall non-SQLxx commands:\n\n PG SET somename = somevalue\n PG VACUUM\n\n...etc. But this has the distinct disadvantage of being more work, and\nbeing cumbersome in comparison to changing names. The transition could be\nmanaged by supporting old commands until version 8.0, with an appropriate\nnotice.\n\nJust my 0.02c worth.\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 06 Feb 2000 11:33:51 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Proposal for new SET variables for optimizer\n costs"
},
{
"msg_contents": "Philip Warner wrote:\n\n> Another option would be to add another command, eg. 'PG', which is used for\n> all non-SQLxx commands:\n> \n> PG SET somename = somevalue\n> PG VACUUM\n> \n> ...etc. But this has the distinct disadvantage of being more work, and\n> being cumbersome in comparison to changing names. \n\nThis does not work out in terms of general SQL compatibility. Even if we\ntreat commands after PG specially, no other SQL database would, and it\nwould raise at least as many errors as the extension syntax. Nor is\nthere any significant advantage of it within Postgres if we ever get a\nkeyword clash with a future SQL revision - I'd rather not have a syntax\nthat allows for two interpretations for the same keyword depending on\nwhether it follows PG or not.\n\nSevo\n\n-- \nSevo Stille\[email protected]\n",
"msg_date": "Sun, 06 Feb 2000 15:35:16 +0100",
"msg_from": "Sevo Stille <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Proposal for new SET variables for optimizercosts"
}
] |
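To make the proposal concrete, here is a back-of-the-envelope sketch combining the proposed knobs at their default values. The function shapes and the page/tuple counts for the tenk1 example are invented for illustration — the real planner formulas are more involved — but they show how the new per-operator charge penalizes a sequential scan that must evaluate a 100-arm OR clause at every tuple.

```python
# Sketch of the proposed cost model with Tom Lane's default values.
# All costs are in units of one sequential page fetch.
RANDOM_PAGE_COST = 4.0        # nonsequential page fetch
CPU_TUPLE_COST = 0.01         # per heap tuple processed
CPU_INDEX_TUPLE_COST = 0.001  # per index tuple processed
CPU_OPERATOR_COST = 0.0025    # per WHERE-clause operator evaluation

def seqscan_cost(pages, tuples, where_ops):
    # sequential fetches cost 1.0 apiece; every tuple pays the per-tuple
    # CPU cost plus the full WHERE-clause evaluation
    return pages + tuples * (CPU_TUPLE_COST + where_ops * CPU_OPERATOR_COST)

def indexscan_cost(pages_fetched, index_tuples, heap_tuples, where_ops):
    # nonsequential fetches cost RANDOM_PAGE_COST apiece; only the heap
    # tuples actually visited pay the WHERE-clause cost
    return (pages_fetched * RANDOM_PAGE_COST
            + index_tuples * CPU_INDEX_TUPLE_COST
            + heap_tuples * (CPU_TUPLE_COST + where_ops * CPU_OPERATOR_COST))

# tenk1 example: a 100-arm OR clause is roughly 200 operators; the
# seqscan evaluates it at all 10000 tuples, the indexscan at only 100.
# Page and tuple counts here are illustrative guesses, not measurements.
seq = seqscan_cost(pages=345, tuples=10000, where_ops=200)
idx = indexscan_cost(pages_fetched=200, index_tuples=100,
                     heap_tuples=100, where_ops=200)
print(seq, idx)
```

Even with every index fetch charged at four times a sequential one, the WHERE-clause term dominates the sequential scan, which is the effect the proposal wants the planner to be able to see.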
[
{
"msg_contents": "Lamar Owen wrote, in a misguided moment: (:-/)\n> I also track the current CVS -- but for a totally different reason, as I\n> want to be able to release RPMs of the beta release the same day as the\n> beta release -- thus, I am doing trial builds of RPM's against the CVS. \n> However, this current issue doesn't impact me in the slightest -- which\n> is why I have not and will not say anything about it.\n\nI am now saying something about it. While I have been doing trial\nbuilds of the current sources, I have not been building all the clients\nup until today, for speed in building. And guess what -- the lack of\npqbool breaks the perl5 client. Badly. Won't-even-compile-badly.\n\nIs this breakage going to be fixed by the 15th? If not, what can I do\nto workaround it until it is fixed properly (either by putting pqbool\nback in libpq-fe.h, or by fixing Pg.xs to not need pqbool).\n\nI _would_ like to have RPM's ready of the beta on the release day....\n\nTIA\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 05 Feb 2000 17:12:16 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Spoke too soon (was RE: cvs committers digest)"
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> And guess what -- the lack of\n> pqbool breaks the perl5 client. Badly. Won't-even-compile-badly.\n\nYeah, that was pointed out already. I am of the opinion that both\nthat change and removal of the \"obsolete\" print functions should be\nreverted, but I haven't done so --- I was sort of expecting Peter\nto take care of it. \n\n> Is this breakage going to be fixed by the 15th?\n\nSomeone will do something about it ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 05 Feb 2000 17:26:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Spoke too soon (was RE: cvs committers digest) "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Lamar Owen <[email protected]> writes:\n> > And guess what -- the lack of\n> > pqbool breaks the perl5 client. Badly. Won't-even-compile-badly.\n \n> Yeah, that was pointed out already.\n\nIIRC, it was Hiroshi. I remembered the post, went to the archives, and\npulled it up to double-check. So, I thought I'd just put out a feeler\nto see how I needed to allocate my time -- if it's fixed soon, I'll just\nput the 7.0 RPM's on my back burner today, and wait on the fix --\notherwise, I'm going to go back to building without the perl client for\nnow for my testing, as I have several other issues to deal with.\n\n> I am of the opinion that both\n> that change and removal of the \"obsolete\" print functions should be\n> reverted, but I haven't done so --- I was sort of expecting Peter\n> to take care of it.\n\nWell, after following the thread down a ways, I saw his reply to Hiroshi\nstating to the effect that he was going to take off for a bit, but that\nhe'd be back. Probably needed a breather. \n\n> > Is this breakage going to be fixed by the 15th?\n \n> Someone will do something about it ;-)\n\nI have a poem about Someone, Everyone, and Anyone.... Thanks, Tom. If\nI need to just apply a patch for build purposes, that's fine. I'm just\ntrying to get my build-act together, as 7.0 is quite different from a\npackaging standpoint than 6.5.x, at least from 'my' packaging\nstandpoint.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 05 Feb 2000 17:39:32 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Spoke too soon (was RE: cvs committers digest)"
}
] |
[
{
"msg_contents": "Hi,\n\nSolaris has always had problems with 1947 in the\nregression tests so I prepared a set of expected\nfiles to make things look OK.\n\nThere's also a file to account for minor variations\nin the geometry output and a resultmap patch to\npull them all together.\n\nWith these changes PostgreSQL, from CVS, builds and\nregression tests (runcheck) cleanly.\n\nKeith.",
"msg_date": "Sat, 5 Feb 2000 22:55:21 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Solaris regression tests."
},
{
"msg_contents": "Applied.\n\n\n> Hi,\n> \n> Solaris has always had problems with 1947 in the\n> regression tests so I prepared a set of expected\n> files to make things look OK.\n> \n> There's also a file to account for minor variations\n> in the geopmetry output and a resultmap patch to\n> pull them all together.\n> \n> With these changes PostgreSQL, from CVS, builds and\n> regression tests (runcheck) cleanly.\n> \n> Keith.\nContent-Description: resultmap.patch\n\n> *** src/test/regress/resultmap.orig\tTue Jan 25 20:29:28 2000\n> --- src/test/regress/resultmap\tSun Jan 30 11:56:04 2000\n> ***************\n> *** 4,10 ****\n> --- 4,16 ----\n> int4/.*-netbsd=int4-too-large\n> int2/i.86-pc-linux-gnulibc=int2-not-representable\n> int4/i.86-pc-linux-gnulibc=int4-not-representable\n> + int2/sparc-sun-solaris=int2-too-large\n> + int4/sparc-sun-solaris=int4-too-large\n> geometry/hppa=geometry-positive-zeros\n> geometry/.*-netbsd=geometry-positive-zeros\n> geometry/i.86-.*-gnulibc=geometry-i86-gnulibc\n> + geometry/sparc-sun-solaris=geometry-solaris-precision\n> horology/hppa=horology-no-DST-before-1970\n> + horology/sparc-sun-solaris=horology-solaris-1947\n> + abstime/sparc-sun-solaris=abstime-solaris-1947\n> + tinterval/sparc-sun-solaris=tinterval-solaris-1947\nContent-Description: solaris_regress.tar.gz\n\n[application/octet-stream is not supported, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 6 Feb 2000 00:03:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solaris regression tests."
}
] |
[
{
"msg_contents": "I've a problem with pg_ctl.\n\nWhen attempting to start the postmaster I get :-\n\nbash-2.03$ /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data start\npostmaster successfully started up.\nbash-2.03$ /usr/local/pgsql/bin/postmaster does not know where to find the \ndatabase system data. You must specify the directory that contains the database \nsystem either by specifying the -D invocation option or by setting the PGDATA \nenvironment variable.\n\nNo data directory -- can't proceed.\n\nI think this small patch should fix it.\n\nKeith.\n\n*** src/bin/pg_ctl/pg_ctl.sh.orig Sat Feb 5 22:29:52 2000\n--- src/bin/pg_ctl/pg_ctl.sh Sat Feb 5 22:30:55 2000\n***************\n*** 76,81 ****\n--- 76,82 ----\n -D)\n shift\n PGDATA=\"$1\"\n+ export PGDATA\n ;;\n -p)\n shift\n\n",
"msg_date": "Sat, 5 Feb 2000 22:55:29 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Small problem with pg_ctl. "
}
] |
[
{
"msg_contents": "(This is mostly directed at Bruce, but anyone else who's looked at the\nplanner/optimizer is welcome to chime in.)\n\nOn the way to implementing estimates of WHERE-clause costs, I was\nforced to notice that the estimated sizes of join relations are set\n*after* all the planning work is done for a rel, instead of before,\nwhich makes it hard to use the info for cost estimation :-(.\nIn looking at whether the sizes couldn't be set earlier, I saw that\ntrying to set them early enough to be used in planning would result in\nduplicate effort; in fact there is a lot of duplicated effort already.\nThis is a proposal to rearrange the optimizer to clean that up.\n\nRight now, the sequence of events in constructing a join tree is that\nat each join level, make_one_rel_by_joins invokes make_rels_by_joins\nto prepare a list of the joins to be considered at this level (where\na \"join\" means a particular outer rel and inner rel, and each \"rel\"\nmight consist of several already-joined base relations). Each join\nis represented by a RelOptInfo node constructed by make_join_rel (see\njoinrels.c for these routines). Then update_rels_pathlist_for_joins is\ncalled to determine, for each of these joins, the best implementation or\n\"path\". Finally, since different \"joins\" in this sense may represent\nthe same set of joined base relations, merge_rels_with_same_relids is\ncalled to match equivalent joinrels together and keep just the best path\nfor each equivalence class of joinrels.\n\nMy beef with this is that we should never be generating distinct\nRelOptInfos in the first place for different ways of producing the\nsame join relation. make_join_rel spends a fair amount of time and\nmemory space to produce the RelOptInfo and its substructure, and\nfor an N-component joinrel this price will be paid (at least) N times\nover, after which we throw away all but one of the copies. 
Even more\ncritically, once one joinrel has been completed for a particular set\nof base rels, the implementation paths for other equivalent joinrels\nwill be considered in a vacuum --- we may have already discovered a\npath that will dominate many of the paths for other ways of building\nthe same join relation, but because that path isn't available to\nadd_pathlist when we are looking at another joinrel, we will have to\nkeep paths that could have been discarded instantly.\n\nIt seems to me that join RelOptInfos should be managed in the same way\nas base-relation RelOptInfos: there ought never be more than one of them\nfor a given set of Relids. When we are considering a new pair of outer\nand inner rels that can produce an already-considered join relation,\nwe should find the existing RelOptInfo for that join relation. Then\nadd_pathlist will keep proposed paths only if they survive comparison\nagainst paths already found from the earlier ways of generating the same\njoin relation.\n\nThis looks like it should be a fairly straightforward change: we should\nmeld make_join_rel and get_join_rel so that a requested join rel is\nfirst searched for in root->join_rel_list, and only built if not present\n(like get_base_rel). The join rel would have a flat relids list from the\nbeginning, since it would be agnostic about which inner and outer subset\nrels should be used to produce it. Then update_rels_pathlist_for_joins\nwould be called *immediately* to process just that one join rel, passing\nthe outer and inner subset relid lists as separate parameters. It would\ngenerate paths using this pair of outer and inner rels, and would add\nthem to the join rel's path list only if they survive comparison against\nall the already-found paths for that join rel. 
On return from\nmake_rels_by_joins, all the work is done, so make_one_rel_by_joins\ndoesn't need to invoke either update_rels_pathlist_for_joins or\nmerge_rels_with_same_relids (the latter routine disappears entirely).\nWe have but to invoke rels_set_cheapest and then advance to the next\nlevel of joining.\n\nWith this change, I could add more processing to make_join_rel to set\nthe estimated output rel size, without fear that it would be repeated\nuselessly.\n\nAnyone see a problem with this, or have a better idea about how to do\nit?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 05 Feb 2000 21:22:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer cleanup to avoid redundant work on joins"
},
{
"msg_contents": "> This looks like it should be a fairly straightforward change: we should\n> meld make_join_rel and get_join_rel so that a requested join rel is\n> first searched for in root->join_rel_list, and only built if not present\n> (like get_base_rel). The join rel would have a flat relids list from the\n> beginning, since it would be agnostic about which inner and outer subset\n> rels should be used to produce it. Then update_rels_pathlist_for_joins\n> would be called *immediately* to process just that one join rel, passing\n> the outer and inner subset relid lists as separate parameters. It would\n> generate paths using this pair of outer and inner rels, and would add\n> them to the join rel's path list only if they survive comparison against\n> all the already-found paths for that join rel. On return from\n> make_rels_by_joins, all the work is done, so make_one_rel_by_joins\n> doesn't need to invoke either update_rels_pathlist_for_joins or\n> merge_rels_with_same_relids (the latter routine disappears entirely).\n> We have but to invoke rels_set_cheapest and then advance to the next\n> level of joining.\n> \n> With this change, I could add more processing to make_join_rel to set\n> the estimated output rel size, without fear that it would be repeated\n> uselessly.\n> \n> Anyone see a problem with this, or have a better idea about how to do\n> it?\n\nSounds good. The only question is how easy it will be to see if there\nalready is a RelOptInfo for that combination. My guess is that the\ncurrent code is brain-dead like the many fixes we made long ago. It was\ncarrying around too many versions, instead of keeping the cheapest. \nSeems you have found another place that should be doing this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 5 Feb 2000 23:41:34 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer cleanup to avoid redundant work on joins"
},
{
"msg_contents": ">> This looks like it should be a fairly straightforward change: we should\n>> meld make_join_rel and get_join_rel so that a requested join rel is\n>> first searched for in root->join_rel_list, and only built if not present\n>> (like get_base_rel). The join rel would have a flat relids list from the\n>> beginning, since it would be agnostic about which inner and outer subset\n>> rels should be used to produce it.\n\nWell, drat. This idea looked good, and I still think it's good, but\nimplementation turns out to be trickier than I thought. I was thinking\nthat RelOptInfos for join rels were essentially independent of which\npair of sub-relations were used to produce them --- eg, {1 2 3} doesn't\ncare if you made it from {1 2} joined to 3 or {1 3} joined to 2, etc.\nThat's almost true ... but it falls down on the restrictinfo list,\nbecause which qual clauses are restrictions at a particular join level\n*does* depend on the path you took to build it. For example, if you\nhave a qual clause \"t1.v1 = t2.v2\", this clause will be a restrict\nclause for {1 2 3} if you make it from {1 3} joined to 2, but if you\nmake it from {1 2} joined to 3 then the clause was already handled when\n{1 2} was produced.\n\nWe could still unify the RelOptInfos for different ways of making the\nsame joinrel if we stored restrictinfo lists for joins in individual\njoin paths, rather than in the RelOptInfo. I think that might be worth\ndoing, but the change is looking larger and subtler than I thought.\nProbably best not to try to squeeze it in right before beta.\n\nI will set aside the code I already rewrote for this purpose, and come\nback to it after we start 7.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 Feb 2000 15:26:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Optimizer cleanup to avoid redundant work on joins "
},
{
"msg_contents": "> I will set aside the code I already rewrote for this purpose, and come\n> back to it after we start 7.1.\n\nWait a minute ... stop the presses ...\n\nI just realized that the bug I was complaining of is *already there\nin current sources*, and has been for a while (certainly since 6.5).\nLook at prune.c's code that merges together RelOptInfos after-the-\nfact:\n\n if (same(rel->relids, unmerged_rel->relids))\n {\n /*\n * These rels are for the same set of base relations,\n * so get the best of their pathlists. We assume it's\n * ok to reassign a path to the other RelOptInfo without\n * doing more than changing its parent pointer (cf. pathnode.c).\n */\n rel->pathlist = add_pathlist(rel,\n rel->pathlist,\n unmerged_rel->pathlist);\n }\n\nThis is wrong, wrong, wrong, because the paths coming from different\nRelOptInfos (different pairs of sub-relations) may need different sets\nof qual clauses applied as restriction clauses. There's no way to\nrepresent that in the single RelOptInfo that will be left over. The\nworst case is that the generated plan is missing a needed restriction\nclause (the other possibility is that the clause is evaluated\nredundantly, which is much less painful).\n\nI am not sure why we haven't heard bug reports about this. It seems\nlike it wouldn't be hard at all to provoke the failure (I'm going to\ntry to make a test case right now). Assuming I can do that, I think\nwe have no choice but to move join restrictlists into JoinPath nodes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 Feb 2000 15:57:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Optimizer cleanup to avoid redundant work on joins "
},
{
"msg_contents": "> We could still unify the RelOptInfos for different ways of making the\n> same joinrel if we stored restrictinfo lists for joins in individual\n> join paths, rather than in the RelOptInfo. I think that might be worth\n> doing, but the change is looking larger and subtler than I thought.\n> Probably best not to try to squeeze it in right before beta.\n> \n> I will set aside the code I already rewrote for this purpose, and come\n> back to it after we start 7.1.\n\nWhat happened to the time-honored tradition of jamming partially-tested\nfeatures in before the beta feature freeze? Are we getting too\nconservative in our old age? :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 6 Feb 2000 16:59:45 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer cleanup to avoid redundant work on joins"
},
{
"msg_contents": "> Well, drat. This idea looked good, and I still think it's good, but\n> implementation turns out to be trickier than I thought. I was thinking\n> that RelOptInfos for join rels were essentially independent of which\n> pair of sub-relations were used to produce them --- eg, {1 2 3} doesn't\n> care if you made it from {1 2} joined to 3 or {1 3} joined to 2, etc.\n> That's almost true ... but it falls down on the restrictinfo list,\n> because which qual clauses are restrictions at a particular join level\n> *does* depend on the path you took to build it. For example, if you\n> have a qual clause \"t1.v1 = t2.v2\", this clause will be a restrict\n> clause for {1 2 3} if you make it from {1 3} joined to 2, but if you\n> make it from {1 2} joined to 3 then the clause was already handled when\n> {1 2} was produced.\n\nI thought \"t1.v1 = t2.v2\" would be in t1 RelOptInfo and t2 RelOptInfo. \nOf course, this is a join info restriction, not a restrict info\nrestriction.\n\n\n> We could still unify the RelOptInfos for different ways of making the\n> same joinrel if we stored restrictinfo lists for joins in individual\n> join paths, rather than in the RelOptInfo. I think that might be worth\n> doing, but the change is looking larger and subtler than I thought.\n> Probably best not to try to squeeze it in right before beta.\n\nAren't the restrict-info/join-info of the Final RelOptInfo set only when\nthe cheapest is found, and before that, the individual Reloptinfo's\nrestrict-info/join-info that are part of the Path are used? Maybe these\nare stupid questions.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 6 Feb 2000 17:13:15 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer cleanup to avoid redundant work on joins"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I thought \"t1.v1 = t2.v2\" would be in t1 RelOptInfo and t2 RelOptInfo. \n> Of course, this is a join info restriction, not a restrict info\n> restriction.\n\nRight, it would appear in t1's joininfo list (showing t2 as unjoined_relids)\nand in t2's joininfo list (showing t1 as unjoined_relids). Then when\nwe make a join rel from t1 + t2, the clause would be put into that rel's\nrestrictinfo list, since it's no longer a joining clause for the\njoinrel; but it does need to be implemented at the time of the join.\n(The bug is probably only visible for auxiliary quals that are not\nbeing used as the driving clause of the join method; they need to show\nup in the qpquals of the final plan, or they won't get enforced.)\n\nThe trouble comes when there are more rels in the picture. If we make\na joinrel from t1 + t3, this clause will still appear in that joinrel's\njoininfo list, since it's still a joinclause for that rel. Then when\nwe make t1+t2+t3 from {t1 t3} and {t2}, the clause propagates up to\nbecome a restrict clause of that rel, and that's where the buck stops\n(and where the clause gets enforced at runtime).\n\n*BUT*, if we make {t1 t2 t3} from {t1 t2} and {t3}, it will *not* show\nthis clause as a restrictclause, because in that path it gets handled at\nthe {t1 t2} join. 
So a joinpath for {t1 t3} against {t2}, which needs\nthis clause to appear as a restrictclause, loses if it is copied into a\n{t1 t2 t3} RelOptInfo that was made from the other pair of\nsub-relations.\n\nI find that I can exhibit the bug very easily in current sources:\n\ncreate table t1 (k1 int, d1 int);\ncreate table t2 (k2 int, d2 int);\ncreate table t3 (k3 int, d3 int);\ncreate table t4 (k4 int, d4 int);\n\ninsert into t1 values (1, 1);\ninsert into t1 values (2, 2);\ninsert into t1 values (3, 3);\n\ninsert into t2 values (1, 11);\ninsert into t2 values (2, 22);\ninsert into t2 values (3, 33);\n\ninsert into t3 values (1, 111);\ninsert into t3 values (2, 222);\ninsert into t3 values (3, 333);\n\ninsert into t4 values (1, 1111);\ninsert into t4 values (2, 2222);\ninsert into t4 values (3, 3333);\n\nselect * from t1,t2,t3,t4 where k1 = k2 and k1 = k3 and k2=k4\nand d1<d2 and d1<d3 and d1<d4 and d2<d3 and d2<d4 and d3>d4;\n\n k1 | d1 | k2 | d2 | k3 | d3 | k4 | d4\n----+----+----+----+----+-----+----+------\n 1 | 1 | 1 | 11 | 1 | 111 | 1 | 1111\n 2 | 2 | 2 | 22 | 2 | 222 | 2 | 2222\n 3 | 3 | 3 | 33 | 3 | 333 | 3 | 3333\n(3 rows)\n\nwhich is obviously not meeting the restriction d3>d4. So we have\na problem.\n\nI haven't been able to make 6.5.3 fail similarly, but I do not\nunderstand why not --- it certainly looks like it *ought* to fail given\nthe right combination of circumstances. I think it may be escaping a\nfailure by sheer luck having to do with the order that RelOptInfos get\ninserted into the query's join_rel_list. Our changes since 6.5 may have\nexposed a problem that was only latent before. (Or maybe I just haven't\nhit the right combination to trip up 6.5.3 ... but it does seem that\ncurrent sources fail more easily.)\n\nAnyway, in the current sources things are certainly broken, and I don't\nsee any real alternative except to press forward with moving join\nrestrictinfos into JoinPaths. 
Even if we figure out exactly why 6.5.*\nis somehow failing to fail, I am pretty certain that it must be a\nnon-robust coincidence rather than a solution that we want to keep using.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 Feb 2000 17:39:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Optimizer cleanup to avoid redundant work on joins "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I thought \"t1.v1 = t2.v2\" would be in t1 RelOptInfo and t2 RelOptInfo. \n> > Of course, this is a join info restriction, not a restrict info\n> > restriction.\n> \n> Right, it would appear in t1's joininfo list (showing t2 as unjoined_relids)\n> and in t2's joininfo list (showing t1 as unjoined_relids). Then when\n> we make a join rel from t1 + t2, the clause would be put into that rel's\n> restrictinfo list, since it's no longer a joining clause for the\n> joinrel; but it does need to be implemented at the time of the join.\n> (The bug is probably only visible for auxiliary quals that are not\n> being used as the driving clause of the join method; they need to show\n> up in the qpquals of the final plan, or they won't get enforced.)\n\nI understand. Is it only non-equi-joins that show this, where the join\nis actually also a restriction in a sense?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 6 Feb 2000 17:43:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer cleanup to avoid redundant work on joins"
},
{
"msg_contents": "> Anyway, in the current sources things are certainly broken, and I don't\n> see any real alternative except to press forward with moving join\n> restrictinfos into JoinPaths. Even if we figure out exactly why 6.5.*\n> is somehow failing to fail,\n\nEr ... um ... ahem ... DUH! The reason 6.5.3 works is that it does in\nfact keep join restrictinfo pointers in JoinPaths. I had eliminated\nthose pointers (the thoroughly undocumented \"pathinfo\" field) because\nI thought that the lists were always the same as the parent relations'\nrestrictinfo lists. Which they were --- at the time of creation of a\nJoinPath. What I missed was that prune.c moved a joinpath to belong\nto a different RelOptInfo with (potentially) a different restrictinfo\nlist, but the joinpath needs to keep its original restrictinfo list.\n\nIn other words, I broke it.\n\nSince surgery needs to be done anyway, I'm inclined to press ahead\nwith the changes I was going to put off. On the other hand, if the\npatient had a vote, it might ask for a second opinion ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 Feb 2000 18:14:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Optimizer cleanup to avoid redundant work on joins "
},
{
"msg_contents": "> Er ... um ... ahem ... DUH! The reason 6.5.3 works is that it does in\n> fact keep join restrictinfo pointers in JoinPaths. I had eliminated\n> those pointers (the thoroughly undocumented \"pathinfo\" field) because\n> I thought that the lists were always the same as the parent relations'\n> restrictinfo lists. Which they were --- at the time of creation of a\n> JoinPath. What I missed was that prune.c moved a joinpath to belong\n> to a different RelOptInfo with (potentially) a different restrictinfo\n> list, but the joinpath needs to keep its original restrictinfo list.\n> \n> In other words, I broke it.\n> \n> Since surgery needs to be done anyway, I'm inclined to press ahead\n> with the changes I was going to put off. On the other hand, if the\n> patient had a vote, it might ask for a second opinion ;-)\n\nGo for it. Beta is for testing. No better time to break things than\nthe present.\n\nThis is the first time I remember hearing about pre-beta release jitters\nfrom many people.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 6 Feb 2000 18:17:33 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer cleanup to avoid redundant work on joins"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> This is the first time I remember hearing about pre-beta release jitters\n> from many people.\n\nMaybe our standards have gotten higher than they used to be. I know\nI really want 7.0 to be rock-solid, because I expect a lot of people\nwill be taking a new look at us when it comes out.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 Feb 2000 18:21:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Optimizer cleanup to avoid redundant work on joins "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > This is the first time I remember hearing about pre-beta release jitters\n> > from many people.\n> \n> Maybe our standards have gotten higher than they used to be. I know\n> I really want 7.0 to be rock-solid, because I expect a lot of people\n> will be taking a new look at us when it comes out.\n\nI had a terrible habit of throwing things in. I guess we are getting\nmore professional.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 6 Feb 2000 18:26:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimizer cleanup to avoid redundant work on joins"
}
] |
[
{
"msg_contents": "I have written a man page for pg_ctl. I would appreciate it if someone\nwould give me comments on it, including grammatical corrections.\n--\nTatsuo Ishii\n\nNAME\n\npg_ctl - starts/stops/restarts postmaster\n\nSYNOPSIS\n\npg_ctl [-w][-D database_dir][-p path_to_postmaster][-o \"postmaster_opts\"] start\npg_ctl [-w][-D database_dir][-m s[mart]|f[ast]|i[mmediate]] stop\npg_ctl [-w][-D database_dir][-m s[mart]|f[ast]|i[mmediate]][-o \"postmaster_opts\"] restart\npg_ctl [-D database_dir] status\n\nDESCRIPTION\n\npg_ctl is a utility for starting, stopping or restarting the postmaster.\n\nStarting postmaster\n\nTo start the postmaster:\n\npg_ctl start\n\nIf -w is supplied, pg_ctl waits for the database server to come up,\nchecking that the pid file (PGDATA/postmaster.pid) has been created, for up to\n60 seconds.\n\nParameters used to invoke the postmaster are taken from the following sources:\n\nPath to postmaster: found in the command search path\nDatabase directory: PGDATA environment variable\nOther parameters: PGDATA/postmaster.opts.default\n\npostmaster.opts.default contains parameters for the postmaster. 
With a\ndefault installation, it has a line \"-S.\" So \"pg_ctl start\" implies:\n\npostmaster -S\n\nNote that postmaster.opts.default is installed by initdb from\nlib/postmaster.opts.default.sample under the PostgreSQL installation\ndirectory (lib/postmaster.opts.default.sample is copied from\nsrc/bin/pg_ctl/postmaster.opts.default.sample while installing\nPostgreSQL).\n\nTo override the default parameters you can use the -D, -p and -o options.\n\n-D database_dir\n\tspecifies the database directory\n\n-p path_to_postmaster\n\tspecifies the path to postmaster\n\n-o \"postmaster_opts\"\n\tspecifies any parameters for postmaster\n\nExamples:\n\n# blocks until the postmaster comes up\npg_ctl -w start\n\n# specifies the postmaster path\npg_ctl -p /usr/local/pgsql/bin/postmaster start\n\n# uses port 5433 and disables fsync\npg_ctl -o \"-o -F -p 5433\" start\n\nStopping postmaster\n\npg_ctl stop\n\nstops the postmaster.\n\nThere are several options for the stopping mode.\n\n-w\n\twaits for the postmaster to shut down\n\n-m\n    specifies the shutdown mode. s[mart] mode waits for all\n    the clients to log out. This is the default.\n    f[ast] mode sends SIGTERM to the backends, which means\n    active transactions get rolled back. i[mmediate] mode sends SIGUSR1\n    to the backends and lets them abort. In this case, database recovery\n    will be necessary on the next startup.\n\n\nRestarting postmaster\n\nThis is almost equivalent to stopping the postmaster and then starting it\nagain, except that the parameters used by the previously running\npostmaster are reused. This is done by saving them in the\nPGDATA/postmaster.opts file. 
-w, -D, -m, and -o can also be used in\nrestart mode, and they have the same meanings as described above.\n\nExamples:\n\n# restarting the postmaster in the simplest form\npg_ctl restart\n\n# waits for the postmaster to shut down and then come up again\npg_ctl -w restart\n\n# uses port 5433 and disables fsync next time\npg_ctl -o \"-o -F -p 5433\" restart\n\nGetting status from postmaster\n\nTo get status information from the postmaster:\n\npg_ctl status\n\nThe following is sample output from pg_ctl:\n\npg_ctl: postmaster is running (pid: 13718)\noptions are:\n/usr/local/src/pgsql/current/bin/postmaster\n-p 5433\n-D /usr/local/src/pgsql/current/data\n-B 64\n-b /usr/local/src/pgsql/current/bin/postgres\n-N 32\n-o '-F'\n\n",
"msg_date": "Sun, 06 Feb 2000 12:49:58 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_ctl man page"
},
{
"msg_contents": "> I have written a man page for pg_ctl. I will appreciate if someone\n> would give me comments on it including grammatical corrections.\n\nI assume that this is intended for the main documentation set? Then\nI'll be happy to convert this to sgml markup if you haven't done so or\ndo not know how. Also, I can make small changes to grammar etc at that\ntime.\n\nYou probably weren't asking about this, but...\n\nThe switch options \"smart\", \"fast\", and \"immediate\" are imho a bit too\ngeneral. I would suggest that \"wait\", \"stop\", and \"abort\" (or\nsomething similar) might be better and more direct terms which would\ncome to mind for an admin. Though I see that you also have the concept\nof \"wait\" wrt pg_ctl and the postmaster, to allow pg_ctl to return\nimmediately before the effects of the commands are seen. So maybe\n\"asynchronous\" or something similar could be applied to the\npg_ctl/postmaster relationship, leaving the other terms for the\npg_ctl/client relationship.\n\nI would also suggest dropping \"-m <opt>\" style switches in favor of\nspecific flags, with the last flag specified taking precedence. 
I'm\nnot aware of other utilities having quite that same style.\n\n - Thomas\n\n> NAME\n> pg_ctl - starts/stops/restarts postmaster\n> \n> SYNOPSIS\n> \n> pg_ctl [-w][-D database_dir][-p path_to_postmaster][-o \"postmaster_opts\"] start\n> pg_ctl [-w][-D database_dir][-m s[mart]|f[ast]|i[mmediate]] stop\n> pg_ctl [-w][-D database_dir][-m s[mart]|f[ast]|i[mmediate]][-o \"postmaster_opts\"] restart\n> pg_ctl [-D database_dir] status\n> \n> DESCRIPTION\n> \n> pg_ctl is a utility for starting, stopping or restarting postmaster.\n> \n> Starting postmaster\n> \n> To start postmaster:\n> \n> pg_ctl start\n> \n> If -w is supplied, pg_ctl waits for the database server comes up,\n> checking the pid file (PGDATA/postmaster.pid) gets created, for up to\n> 60 seconds.\n> \n> Parameters to invoke postmaster are taken from following sources:\n> \n> Path to postmaster: found in the command search path\n> Database directory: PGDATA environment variable\n> Other parameters: PGDATA/postmaster.opts.default\n> \n> postmaster.opts.default contains parameters for postmaster. 
With a\n> default installation, it has a line \"-S.\" So \"pg_ctl start\" implies:\n> \n> postmaster -S\n> \n> Note that postmaster.opts.default is installed by initdb from\n> lib/postmaster.opts.default.sample under the PostgreSQL installation\n> directory (lib/postmaster.opts.default.sample is copied from\n> src/bin/pg_ctl/postmaster.opts.default.sample while installing\n> PostgreSQL).\n> \n> To override default parameters you can use -D, -p and -o option.\n> \n> -D database_dir\n> specifies the database directory\n> \n> -p path_to_postmaster\n> specifies the path to postmaster\n> \n> -o \"postmaster_opts\"\n> specifies any parameter for postmaster\n> \n> Examples:\n> \n> # blocks until postmaster comes up\n> pg_ctl -w start\n> \n> # specifies postmaster path\n> pg_ctl -p /usr/local/pgsq/bin/postmaster start\n> \n> # uses port 5433 and disables fsync\n> pg_ctl -o \"-o -F -p 5433\" start\n> \n> Stopping postmaster\n> \n> pg_ctl stop\n> \n> stops postmaster.\n> \n> There are several options for the stopping mode.\n> \n> -w\n> waits for postmaster shutting down\n> \n> -m\n> specifies the shutdown mode. s[mart] mode waits for all\n> the clients get logged out. This is the default.\n> f[ast] mode sends SIGTERM to the backends, that means\n> active transactions get rollback. i[mmediate] mode sends SIGUSR1\n> to the backends and let them abort. In this case, database recovery\n> will be neccessary on the next startup.\n> \n> Restarting postmaster\n> \n> This is almost equivalent to stopping postmaster then starting it\n> again except that the parameters for postmaster used before stopping\n> it would be used too. This is done by saving them in\n> PGDATA/postmaster.opts file. 
-w, -D, -m, and -o can also be used in\n> the restarting mode and they have same meanings as described above.\n> \n> Examples:\n> \n> # restarting postmaster in the simplest form\n> pg_ctl restart\n> \n> # waiting for postmaster shutdown and waiting for postmaster coming up\n> pg_ctl -w restart\n> \n> # uses port 5433 and disables fsync next time\n> pg_ctl -o \"-o -F -p 5433\" restart\n> \n> Getting status from postmaster\n> \n> To get status information from postmaster:\n> \n> pg_ctl status\n> \n> Followings are sample outputs from pg_ctl.\n> \n> pg_ctl: postmaster is running (pid: 13718)\n> options are:\n> /usr/local/src/pgsql/current/bin/postmaster\n> -p 5433\n> -D /usr/local/src/pgsql/current/data\n> -B 64\n> -b /usr/local/src/pgsql/current/bin/postgres\n> -N 32\n> -o '-F'\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 06 Feb 2000 04:33:01 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_ctl man page"
},
{
"msg_contents": "> > I have written a man page for pg_ctl. I would appreciate it if someone\n> > would give me comments on it including grammatical corrections.\n> \n> I assume that this is intended for the main documentation set? Then\n> I'll be happy to convert this to sgml markup if you haven't done so or\n> do not know how. Also, I can make small changes to grammar etc at that\n> time.\n\nOh, thank you very much!\n\n> You probably weren't asking about this, but...\n> \n> The switch options \"smart\", \"fast\", and \"immediate\" are imho a bit too\n> general. I would suggest that \"wait\", \"stop\", and \"abort\" (or\n> something similar) might be better and more direct terms which would\n> come to mind for an admin. Though I see that you also have the concept\n> of \"wait\" wrt pg_ctl and the postmaster, to allow pg_ctl to return\n> immediately before the effects of the commands are seen. So maybe\n> \"asynchronous\" or something similar could be applied to the\n> pg_ctl/postmaster relationship, leaving the other terms for the\n> pg_ctl/client relationship.\n\nTalking about \"smart/fast/immediate,\" I have referred to them from\ncomments in postmaster.c probably written by Vadim. So before changing\nthem I would like to hear from Vadim. Ok?\n\n> I would also suggest dropping \"-m <opt>\" style switches in favor of\n> specific flags, with the last flag specified taking precedence. I'm\n> not aware of other utilities having quite that same style.\n\nSounds reasonable. \n--\nTatsuo Ishii\n",
"msg_date": "Sun, 06 Feb 2000 15:40:03 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_ctl man page"
},
{
"msg_contents": "I got grammar corrections from Ed Loehr. Also, I modified some option\nflag styles according to Thomas's suggestion. Thanks to those who\ngave me suggestions.\n\nThomas, I have changed -m <opts> style to -smart, -fast... style. I\nhope this was what you meant. Also, please note that I still stick\nwith smart/fast/immediate since I have been waiting for Vadim's\nopinion...\n\nTatsuo Ishii\n-------------------------------------------------------------------\nNAME\n\npg_ctl - starts/stops/restarts postmaster\n\nSYNOPSIS\n\npg_ctl [-w][-D database_dir][-p path_to_postmaster][-o \"postmaster_opts\"] start\npg_ctl [-w][-D database_dir][-smart|-fast|-immediate] stop\npg_ctl [-w][-D database_dir][-smart|-fast|-immediate][-o \"postmaster_opts\"] restart\npg_ctl [-D database_dir] status\n\nDESCRIPTION\n\npg_ctl is a utility for starting, stopping or restarting postmaster.\n\nStarting postmaster\n\nTo start postmaster:\n\npg_ctl start\n\nIf -w is supplied, pg_ctl waits for the database server to come up, by\nwatching for creation of the pid file (PGDATA/postmaster.pid), for up\nto 60 seconds.\n\nParameters to invoke postmaster are taken from the following sources:\n\nPath to postmaster: found in the command search path\nDatabase directory: PGDATA environment variable\nOther parameters: PGDATA/postmaster.opts.default\n\npostmaster.opts.default contains parameters for postmaster. With a\ndefault installation, the \"-S\" option is enabled. 
So \"pg_ctl start\"\nimplies:\n\npostmaster -S\n\nNote that postmaster.opts.default is installed by initdb from\nlib/postmaster.opts.default.sample under the PostgreSQL installation\ndirectory (lib/postmaster.opts.default.sample is copied from\nsrc/bin/pg_ctl/postmaster.opts.default.sample while installing\nPostgreSQL).\n\nTo override default parameters you can use the -D, -p and -o options.\n\n-D database_dir\n\tspecifies the database directory\n\n-p path_to_postmaster\n\tspecifies the path to postmaster\n\n-o \"postmaster_opts\"\n\tspecifies any parameters for postmaster\n\nExamples:\n\n# blocks until postmaster comes up\npg_ctl -w start\n\n# specifies postmaster path\npg_ctl -p /usr/local/pgsql/bin/postmaster start\n\n# uses port 5433 and disables fsync\npg_ctl -o \"-o -F -p 5433\" start\n\nStopping postmaster\n\npg_ctl stop\n\nstops postmaster.\n\nThere are several options for the stopping mode.\n\n-w\n\twaits for postmaster to shut down\n\n-smart|-fast|-immediate\n    specifies the shutdown mode. smart mode waits for all\n    the clients to log out. This is the default.\n    fast mode sends SIGTERM to the backends, which means\n    active transactions get rolled back. immediate mode sends SIGUSR1\n    to the backends and lets them abort. In this case, database recovery\n    will be necessary on the next startup.\n\n\nRestarting postmaster\n\nThis is almost equivalent to stopping postmaster then starting it\nagain except that the parameters for postmaster used before stopping\nit would be used too. This is done by saving them in\nPGDATA/postmaster.opts file. 
-w, -D, -smart, -fast, -immediate and -o\ncan also be used in the restarting mode and they have the same meanings as\ndescribed above.\n\nExamples:\n\n# restarts postmaster in the simplest form\npg_ctl restart\n\n# restarts postmaster, waiting for it to shut down and to come up\npg_ctl -w restart\n\n# uses port 5433 and disables fsync next time\npg_ctl -o \"-o -F -p 5433\" restart\n\nGetting status from postmaster\n\nTo get status information from postmaster:\n\npg_ctl status\n\nFollowing is a sample output from pg_ctl.\n\npg_ctl: postmaster is running (pid: 13718)\noptions are:\n/usr/local/src/pgsql/current/bin/postmaster\n-p 5433\n-D /usr/local/src/pgsql/current/data\n-B 64\n-b /usr/local/src/pgsql/current/bin/postgres\n-N 32\n-o '-F'\n",
"msg_date": "Mon, 07 Feb 2000 13:49:47 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_ctl man page"
},
{
"msg_contents": "On 2000-02-07, Tatsuo Ishii mentioned:\n\n> I got grammar corrections from Ed Loehr. Also, I modified some option\n> flag styles according to Thomas's suggestion. Thanks to those who\n> gave me suggestions.\n> \n> Thomas, I have changed -m <opts> style to -smart, -fast... style. I\n> hope this was what you meant. Also, please note that I still stick\n> with smart/fast/immediate since I have been waiting for Vadim's\n> opinion...\n\nUgh, that's not a compliant option style. What was wrong with -m\n<opt>? How about --smart, etc.? But 'single dash, multiple letters' is\nevil.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 7 Feb 2000 20:53:35 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_ctl man page"
},
{
"msg_contents": "> > I got grammar corrections from Ed Loehr. Also, I modified some option\n> > flag styles according to Thomas's suggestion. Thanks to those who\n> > gave me suggestions.\n> > \n> > Thomas, I have changed -m <opts> style to -smart, -fast... style. I\n> > hope this was what you meant. Also, please note that I still stick\n> > with smart/fast/immediate since I have been waiting for Vadim's\n> > opinion...\n> \n> Ugh, that's not a compliant option style. What was wrong with -m\n> <opt>? How about --smart, etc.? But 'single dash, multiple letters' is\n> evil.\n\nOh, I'm confused now. Thomas, could you let me know what you think?\nLamar is trying to use pg_ctl for his RPM project, and we need a\ndecision on that.\n--\nTatsuo Ishii\n\n\n",
"msg_date": "Tue, 08 Feb 2000 10:51:26 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_ctl man page"
},
{
"msg_contents": "> > > Thomas, I have changed -m <opts> style to -smart, -fast... style. I\n> > > hope this was what you meant. Also, please note that I still stick\n> > > with smart/fast/immediate since I have been waiting for Vadim's\n> > > opinion...\n> > Ugh, that's not a compliant option style. What was wrong with -m\n> > <opt>? How about --smart, etc.? But 'single dash, multiple letters' is\n> > evil.\n> Oh, I'm confused now. Thomas, could you let me know what you think?\n> Lamar is trying to use pg_ctl for his RPM project, and we need a\n> decision on that.\n\nSorry, I didn't catch your question earlier. I agree with Peter that\nwe should choose single-character switches with single dashes (e.g. -s\nfor \"smart\", -f for \"fast\", etc.) or --smart, --fast etc (or both\nstyles).\n\nHopefully we'll get some feedback from Vadim soon on the naming; istm\nthat \"smart\", \"fast\", and \"immediate\" are just too obscure or too\nrelated to the developer's view of the implementation to be the right\nchoice for the user interface.\n\nThe switches and options should describe what they *do*, not what the\ndeveloper thought of them ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 08 Feb 2000 06:47:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_ctl man page"
},
{
"msg_contents": "On Tue, 08 Feb 2000, Thomas Lockhart wrote:\n\n> > Oh, I'm confused now. Thomas, could you let me know what you think?\n> > Lamar is trying to use pg_ctl for his RPM project, and we need a\n> > decision on that.\n \n> Sorry, I didn't catch your question earlier. I agree with Peter that\n> we should choose single character switches with single dashes (e.g. -s\n> for \"smart\", -f for \"fast\", etc.) or --smart, --fast etc (or both\n> styles).\n\nAs for the RPM stuff, I can wait until it's ready easily enough -- whether that's\nearly or late in the beta cycle is immaterial -- I'm just trying to get a feel\nfor the order I need to do things in.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 8 Feb 2000 07:51:49 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_ctl man page"
},
{
"msg_contents": "Beta testing is about to start, but I have not heard from Vadim what\nhe thinks about \"smart\" etc yet. Time is up... So I decided to keep\nthe options of pg_ctl as they are for the coming 7.0.\n\nP.S. I have reverted \"-smart\" back to \"-m smart\" as Peter and Thomas\nsuggested. If I have spare time, I would add \"--\" style options as\nwell. My first priority is fixing bugs and writing documentation about\nthe multibyte support...\n--\nTatsuo Ishii\n-------------------------------------------------------------------\nNAME\n\npg_ctl - starts/stops/restarts postmaster\n\nSYNOPSIS\n\npg_ctl [-w][-D database_dir][-p path_to_postmaster][-o \"postmaster_opts\"] start\npg_ctl [-w][-D database_dir][-m [s[mart]|f[ast]|i[mmediate]]] stop\npg_ctl [-w][-D database_dir][-m [s[mart]|f[ast]|i[mmediate]]][-o \"postmaster_opts\"] restart\npg_ctl [-D database_dir] status\n\nDESCRIPTION\n\npg_ctl is a utility for starting, stopping or restarting postmaster.\n\nStarting postmaster\n\nTo start postmaster:\n\npg_ctl start\n\nIf -w is supplied, pg_ctl waits for the database server to come up, by\nwatching for creation of the pid file (PGDATA/postmaster.pid), for up\nto 60 seconds.\n\nParameters to invoke postmaster are taken from the following sources:\n\nPath to postmaster: found in the command search path\nDatabase directory: PGDATA environment variable\nOther parameters: PGDATA/postmaster.opts.default\n\npostmaster.opts.default contains parameters for postmaster. With a\ndefault installation, the \"-S\" option is enabled. 
So \"pg_ctl start\"\nimplies:\n\npostmaster -S\n\nNote that postmaster.opts.default is installed by initdb from\nlib/postmaster.opts.default.sample under the PostgreSQL installation\ndirectory (lib/postmaster.opts.default.sample is copied from\nsrc/bin/pg_ctl/postmaster.opts.default.sample while installing\nPostgreSQL).\n\nTo override default parameters you can use the -D, -p and -o options.\n\n-D database_dir\n\tspecifies the database directory\n\n-p path_to_postmaster\n\tspecifies the path to postmaster\n\n-o \"postmaster_opts\"\n\tspecifies any parameters for postmaster\n\nExamples:\n\n# blocks until postmaster comes up\npg_ctl -w start\n\n# specifies postmaster path\npg_ctl -p /usr/local/pgsql/bin/postmaster start\n\n# uses port 5433 and disables fsync\npg_ctl -o \"-o -F -p 5433\" start\n\nStopping postmaster\n\npg_ctl stop\n\nstops postmaster.\n\nThere are several options for the stopping mode.\n\n-w\n\twaits for postmaster to shut down\n\n-m [s[mart]|f[ast]|i[mmediate]]\n    specifies the shutdown mode. smart mode waits for all\n    the clients to log out. This is the default.\n    fast mode sends SIGTERM to the backends, which means\n    active transactions get rolled back. immediate mode sends SIGUSR1\n    to the backends and lets them abort. In this case, database recovery\n    will be necessary on the next startup.\n\n\nRestarting postmaster\n\nThis is almost equivalent to stopping postmaster then starting it\nagain except that the parameters for postmaster used before stopping\nit would be used too. This is done by saving them in\nPGDATA/postmaster.opts file. 
-w, -D, -m and -o\ncan also be used in the restarting mode and they have the same meanings as\ndescribed above.\n\nExamples:\n\n# restarts postmaster in the simplest form\npg_ctl restart\n\n# restarts postmaster, waiting for it to shut down and to come up\npg_ctl -w restart\n\n# uses port 5433 and disables fsync next time\npg_ctl -o \"-o -F -p 5433\" restart\n\nGetting status from postmaster\n\nTo get status information from postmaster:\n\npg_ctl status\n\nFollowing is a sample output from pg_ctl.\n\npg_ctl: postmaster is running (pid: 13718)\noptions are:\n/usr/local/src/pgsql/current/bin/postmaster\n-p 5433\n-D /usr/local/src/pgsql/current/data\n-B 64\n-b /usr/local/src/pgsql/current/bin/postgres\n-N 32\n-o '-F'\n",
"msg_date": "Tue, 22 Feb 2000 10:47:57 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_ctl man page"
}
] |
[
{
"msg_contents": "\nHi all,\n\nI've been trying to implement UPDATE and DELETE to work on subclasses.\n\nI made some changes and it kinda seems to work. It works when I have no\nWHERE condition. When I put a WHERE condition in, it seems to update the\nwrong tuple, and then things go weird...\n\npghack=# update a set aa='zzz' where oid=19286;\nUPDATE 1\npghack=# select oid,* from a;\n oid | aa \n-------+------\n 19286 | aaaa\n 19285 | zzz\n(2 rows)\n\npghack=# update a set aa='zzz' where oid=19285;\nERROR: heap_update: (am)invalid tid\nERROR: heap_update: (am)invalid tid\npghack=# update a set aa='zzz';\nERROR: heap_update: (am)invalid tid\nERROR: heap_update: (am)invalid tid\n\nThis message seems to have something to do with a tuple being in an\n\"Invisible\" state, whatever that means.\n\nThe change I made was basically to add an \"inh\" parameter to\nsetTargetTable which I pass on down to addRangeTableEntry. From there I\nexpect it to be passed on to the executor and as I said it seems to work\nok without a where clause.\n\nThe patch is here. Any suggestions on where to start looking?\n\nftp://ftp.tech.com.au/pub/patch.only2\n\n\n\n\n\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Sun, 06 Feb 2000 23:56:35 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Advice needed,"
},
{
"msg_contents": "> I've been trying to implement UPDATE and DELETE to work on subclasses.\n> The change I made was basicly to add an \"inh\" parameter to\n> setTargetTable which I pass on down to addRangeTableEntry. From there I\n> expect it to be passed on to the executor and as I said it seems to work\n> ok without a where clause.\n\nHi Chris. I don't have time to look at you patches right now, since\nI'm trying to get some syntax stuff finished up and committed. But fyi\nmy patches touch addRangeTableEntry and other files in the parser, so\nyou'll likely have a bit of a merge effort to get these sync'd back\nup. Sorry :(\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 06 Feb 2000 16:50:26 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Advice needed,"
},
{
"msg_contents": "Chris <[email protected]> writes:\n> I've been trying to implement UPDATE and DELETE to work on subclasses.\n\nGood!\n\n> The change I made was basicly to add an \"inh\" parameter to\n> setTargetTable which I pass on down to addRangeTableEntry. From there I\n> expect it to be passed on to the executor and as I said it seems to work\n> ok without a where clause.\n\nHm. I do not believe that the executor is currently prepared to cope\nwith more than one target table for an UPDATE or DELETE. You'll\nprobably need to do some work in execMain.c and related files. Not\nsure why it seemed to work at all...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 00:00:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Advice needed, "
},
{
"msg_contents": "Tom Lane wrote:\n> > The change I made was basically to add an \"inh\" parameter to\n> > setTargetTable which I pass on down to addRangeTableEntry. From there I\n> > expect it to be passed on to the executor and as I said it seems to work\n> > ok without a where clause.\n> \n> Hm. I do not believe that the executor is currently prepared to cope\n> with more than one target table for an UPDATE or DELETE. You'll\n> probably need to do some work in execMain.c and related files. Not\n> sure why it seemed to work at all...\n\nBeen doing more tracing. The flow of code seems to be going the way one\nmight expect.\n\nHere is the strange thing. If I have\nCREATE TABLE a (aa text);\nCREATE TABLE b (bb text) inherits (a);\n\nIf I have a WHERE clause that updates at least one tuple in both a AND\nb.\n\nFor example...\nSELECT oid,* from ONLY a;\n1234 | abcd\nSELECT oid,* from ONLY b;\n5678 | defg | NULL\n\nNow if I have...\nUPDATE a SET aa='zzzz' WHERE oid=1234 or oid=5678 \nit works ok. or...\nUPDATE a SET aa='zzzz';\nit works ok.\nBut if I have a WHERE clause that only touches the \"a\" table or only\ntouches the \"b\" table, it just updates the wrong stuff, but appears to\nwork. From then on it doesn't work at all. \n\nIs there any function to print out a tuple?? I'm not sure how to do\nthis in the debugger. Why can't pprint do it?\n",
"msg_date": "Mon, 07 Feb 2000 16:50:17 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Advice needed,"
}
] |
[
{
"msg_contents": "As you surely noticed, the psql -e flag (\"echo\" mode, if you will) has\nchanged its format (regression tests ring a bell?) in that it echoes the\ninput file verbatim. For the particular case of the regression tests this\nseems like a good thing to me since you see the comments as well. However,\nI also offer the \"old\" mode that merely echoes the actual queries as they\nare sent to the backend (which, as we know since the array syntax thing,\ncan be quite different), but there's no option for this.\n\nThe suggestion I have is to offer the traditional behaviour with a single\n-e flag, so there's little change for anyone switching from <7.0, and the\n\"full\" echo mode with two -e flags. I'd then change the flags in the\nregression drivers to -e -e. Comments? Better ideas?\n\nFurthermore, does anyone have anything to say in defence of the -n (\"no\nreadline\") option? If not, I'd be tempted to \"hide\" it now, since it may\nbe a popular option letter to have available in the future.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sun, 6 Feb 2000 14:07:26 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql -e and -n flags"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The suggestion I have is to offer the traditional behaviour with a single\n> -e flag, so there's little change for anyone switching from <7.0, and the\n> \"full\" echo mode with two -e flags. I'd then change the flags in the\n> regression drivers to -e -e. Comments? Better ideas?\n\nSeems reasonable.\n\n> Furthermore, does anyone have anything to say in defence of the -n (\"no\n> readline\") option? If not, I'd be tempted to \"hide\" it now, since it may\n> be a popular option letter to have available in the future.\n\nreadline automatically turns off if the input is not coming from a\nterminal, right? That seems like the only really compelling reason\nto have -n (since you wouldn't want script commands filling your\nhistory or being subject to tab-completion). I suppose someone who\nreally hated tab-completion might want a way to turn off just that\nfeature, though --- is there a way?\n\nBTW, if you need one more item for your psql todo list ;-) ... when\nlooking at EXPLAIN outputs it's possible to get NOTICE messages that\nfill many screensful. It might be nice if NOTICEs went through the\npager like query results do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 Feb 2000 10:54:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql -e and -n flags "
},
{
"msg_contents": "> The suggestion I have is to offer the traditional behaviour with a single\n> -e flag, so there's little change for anyone switching from <7.0, and the\n> \"full\" echo mode with two -e flags. I'd then change the flags in the\n> regression drivers to -e -e. Comments? Better ideas?\n\nHmm. imho having a *count* of switch options being significant is the\nwrong way to go. It gets in the way of things like\n\n# alias ps psql -e\n# ps -e postgres\n\nwhere someone has defined a \"convenience\" alias for everyone and\nsomeone else uses it later. Also, it is a style of switch invocation\nnot appearing elsewhere afaik.\n\nI'd suggest a switch style like \"-ee\" or \"-eb\" (backend) or \"-ev\"\n(verbatim) or ??? Comments?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 06 Feb 2000 16:34:40 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql -e and -n flags"
},
{
"msg_contents": "\n>I'd suggest a switch style like \"-ee\" or \"-eb\" (backend) or \"-ev\"\n>(verbatim) or ??? Comments?\n\nDon's suggestion seems like the right track to me.\n\nIt stays away from counting flags, which seems right. It sticks with\none-char flags for single dashes, which is not the law but is common\nenough to be intuitive for many users. Plus there's an aesthetic\nappeal to -e for 'echo' and -E for 'echo everything'. It also does\nnot change current behavior in cases where people are expecting psql\n-e to behave a certain way.\n\nJust my $0.02 worth as a user.\n\n",
"msg_date": "Sun, 6 Feb 2000 11:49:32 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql -e and -n flags"
},
{
"msg_contents": ">>>>> \"tl\" == Thomas Lockhart <[email protected]> writes:\n\n tl> I'd suggest a switch style like \"-ee\" or \"-eb\" (backend) or\n tl> \"-ev\" (verbatim) or ??? Comments?\n\nWith the typical switch bundling, how is -ee different from -e -e? It\nis not unusual for programs to use -v for `verbose' with multiple -v's\npossible.\n\nroland\n--\n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. Roberts, PhD Custom Software Solutions\[email protected] 76-15 113th Street, Apt 3B\[email protected] Forest Hills, NY 11375\n",
"msg_date": "06 Feb 2000 14:52:41 -0500",
"msg_from": "Roland Roberts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql -e and -n flags"
},
{
"msg_contents": "On 2000-02-06, Tom Lane mentioned:\n\n> > Furthermore, does anyone have anything to say in defence of the -n (\"no\n> > readline\") option? If not, I'd be tempted to \"hide\" it now, since it may\n> > be a popular option letter to have available in the future.\n> \n> readline automatically turns off if the input is not coming from a\n> terminal, right? That seems like the only really compelling reason\n> to have -n (since you wouldn't want script commands filling your\n> history or being subject to tab-completion). I suppose someone who\n\nYou're right, readline is of course not used if the session is not\ninteractive. The fact of the matter is that the flag isn't even checked in\nthat case, and things like loading the history file (a real hog) are not\ndone either.\n\n> really hated tab-completion might want a way to turn off just that\n> feature, though --- is there a way?\n\nSure. Put\n $if psql\n set disable-completion on\n $endif\nin your ~/.inputrc. (Whoever came up with that double negative, though?)\n\n> BTW, if you need one more item for your psql todo list ;-) ... when\n> looking at EXPLAIN outputs it's possible to get NOTICE messages that\n> fill many screensful. It might be nice if NOTICEs went through the\n> pager like query results do.\n\nOh boy, I can't promise anything there at this point in time.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 7 Feb 2000 20:49:42 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] psql -e and -n flags "
},
{
"msg_contents": "On 2000-02-06, Thomas Lockhart mentioned:\n\n> > The suggestion I have is to offer the traditional behaviour with a single\n> > -e flag, so there's little change for anyone switching from <7.0, and the\n> > \"full\" echo mode with two -e flags. I'd then change the flags in the\n> > regression drivers to -e -e. Comments? Better ideas?\n> \n> Hmm. imho having a *count* of switch options being significant is the\n> wrong way to go. It gets in the way of things like\n> \n> # alias ps psql -e\n> # ps -e postgres\n> \n> where someone has defined a \"convenience\" alias for everyone and\n> someone else uses it later. Also, it is a style of switch invocation\n> not appearing elsewhere afaik.\n\nI don't like it either, but I wasn't sure of a better way.\n\n> \n> I'd suggest a switch style like \"-ee\" or \"-eb\" (backend) or \"-ev\"\n> (verbatim) or ??? Comments?\n\nWell, that is an option style that doesn't appear anywhere either, other\nthan perhaps find(1). getopt() would read \"-ee\" exactly as \"-e -e\", a\nbehaviour which conforms to POSIX and GNU and ROW (rest of world).\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Mon, 7 Feb 2000 20:49:56 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] psql -e and -n flags"
}
] |
[
{
"msg_contents": "I'm going to have to revert a few of the \"const-mania\" changes to the\nlibpq API done last fall. They are bound to be a real annoyance to users,\nespecially those that don't use const's religiously, but sometimes even to\nthose that do. This does not represent a break with the 6.* API.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sun, 6 Feb 2000 14:07:54 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "libpq API tweaks"
}
] |
[
{
"msg_contents": "At 02:07 PM 2/6/00 +0100, Peter Eisentraut wrote:\n\n>The suggestion I have is to offer the traditional behaviour with a single\n>-e flag, so there's little change for anyone switching from <7.0, and the\n>\"full\" echo mode with two -e flags. I'd then change the flags in the\n>regression drivers to -e -e. Comments? Better ideas?\n\n\"-E\"? Or another flag? I think \"-e -e\" is a real kludge. If I\nthought the full-echo mode were only useful for regression tests I wouldn't\ncare, but I like the idea of a full echo and I'm sure others do, too, so\nI'd rather see it receive full flag citizenship rather than the double\n\"-e\" bit.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 06 Feb 2000 08:03:11 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] psql -e and -n flags"
},
{
"msg_contents": "On 2000-02-06, Don Baccus mentioned:\n\n> At 02:07 PM 2/6/00 +0100, Peter Eisentraut wrote:\n> \n> >The suggestion I have is to offer the traditional behaviour with a single\n> >-e flag, so there's little change for anyone switching from <7.0, and the\n> >\"full\" echo mode with two -e flags. I'd then change the flags in the\n> >regression drivers to -e -e. Comments? Better ideas?\n> \n> \"-E\"? Or another flag? I think \"-e -e\" is a real kludge. If I\n\nYou're ignoring that -E is already used. It would be my first choice as\nwell, but it's a compatibility break. How about -a (\"all\")?\n\n> thought the full-echo mode were only useful for regression tests I wouldn't\n> care, but I like the idea of a full echo and I'm sure others do, too, so\n> I'd rather see it receive full flag citizenship rather than the double\n> \"-e\" bit.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 7 Feb 2000 20:50:10 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql -e and -n flags"
}
] |
[
{
"msg_contents": "I know this topic came up recently:\n\nselect t1.* from t1 ty;\n\ngives a join between two instances of t1, rather than the expected\nquery rejection (the table in the from clause should be referred to as\n\"ty\").\n\nThere was talk of disallowing:\n\nselect t1.*;\n\nwhich seems to be a bit harsh, since it is a *nice* shorthand. How\nabout disallowing it if there is a FROM clause specified? That is,\n\nselect t1.*;\n\nis allowed, but\n\nselect t1.* from t2;\n\nis not? Pretty sure I can do this. Comments?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 06 Feb 2000 16:55:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Implicit RTEs"
}
] |
[
{
"msg_contents": "The following query is rejected (and always has been afaik):\n\nselect * from t1, t1;\n\nDoes this rejection have any basis in SQL92? (I haven't looked; hoping\nsomeone else has.)\n\nistm that\n\nselect x from t1, t1;\n\nwould have trouble, but the wildcard could do the Right Thing even\nwithout resorting to (for example)\n\nselect * from t1 a, t1;\n\nas is currently required. I'm not sure what it would take to do this,\nbut it probably touches on an area of \"outer join syntax\" I'm looking\nat:\n\nselect a, b from t1 join t2 using (a);\n\nis legal, but the \"join table\" (t1 join t2 using...) must lose its\nunderlying table names (yuck, only for the join columns), resulting in\ndisallowing, for example,\n\nselect t1.a from t1 join t2 using (a);\n\nThat is, the \"relation.column\" syntax is not allowed to refer to the\njoin column(s), unless one specifies an alias for the \"join table\", as\nin\n\nselect tx.a from (t1 join t2 using (a)) as tx;\n\nI'm thinking of implementing this by allowing multiple RTEs to have\nthe *same* table alias, (as long as there aren't column name conflicts\nin the \"visible\" columns), so that, at least internally,\n\nselect * from t1 tx, t3 tx;\n\nbecomes legal as long as t1 and t3 do not share common column names.\n\nComments on either or both issues?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 06 Feb 2000 17:29:38 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Duplicate table names"
},
{
"msg_contents": "On 2000-02-06, Thomas Lockhart mentioned:\n\n> The following query is rejected (and always has been afaik):\n> \n> select * from t1, t1;\n> \n> Does this rejection have any basis in SQL92? (I haven't looked; hoping\n> someone else has.)\n\nNot according to the way I decoded it. It's a join of t1 with itself and\nyou get all columns twice.\n\n> \n> istm that\n> \n> select x from t1, t1;\n> \n> would have trouble, but the wildcard could do the Right Thing even\n\nThis is the same problem as\n\nselect x from t1, t2;\n\nwhere both t1 and t2 have a column x. It's an error. It's not an error if\ncolumn x is unambiguous. Chances are pretty good (=100%) that there will\nbe ambiguity if you list the same table twice, but there's no reason to\nreject this for the reason it gives now.\n\n[snip]\n> I'm thinking of implementing this by allowing multiple RTEs to have\n> the *same* table alias, (as long as there aren't column name conflicts\n> in the \"visible\" columns), so that, at least internally,\n> \n> select * from t1 tx, t3 tx;\n> \n> becomes legal as long as t1 and t3 do not share common column names.\n\nThis seems perfectly legal as well, even if they do share column names.\nAny reference to tx.y will fail due to ambiguity, but it shouldn't fail merely\nbecause of name checking.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 7 Feb 2000 20:49:15 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Duplicate table names"
},
{
"msg_contents": "At 08:49 PM 2/7/00 +0100, Peter Eisentraut wrote:\n\n>Not according to the way I decoded it. It's a join of t1 with itself and\n>you get all columns twice.\n\n...\n\n>This is the same problem as\n>\n>select x from t1, t2;\n>\n>where both t1 and t2 have a column x. It's an error. It's not an error if\n>column x is unambiguous. Chances are pretty good (=100%) that there will\n>be ambiguity if you list the same table twice, but there's no reason to\n>reject this for the reason it gives now.\n\nI believe that Peter's right on all counts.\n\n>\n>[snip]\n>> I'm thinking of implementing this by allowing multiple RTEs to have\n>> the *same* table alias, (as long as there aren't column name conflicts\n>> in the \"visible\" columns), so that, at least internally,\n>> \n>> select * from t1 tx, t3 tx;\n\n>> becomes legal as long as t1 and t3 do not share common column names.\n\n>This seems perfectly legal as well, even if they do share column names.\n>Any reference to tx.y will fail due to ambiguity, but it shouldn't merely\n>because of name checking.\n\nActually, according to Date an explicit range variable must be\nunique within a given scope.\n\nDoes Postgres implement scope? Apparently JOIN opens a new\nscope...so do subselects.\n\nselect * from t1 tx, t3 tx is not legal SQL\n\nselect * from t1 tx, (select * from t3 tx) is legal SQL.\n\nThe tx inside the subselect hides the outer tx, just like\nany 'ole block-structured language.\n\nDate takes over six pages of fairly terse prose with few examples to\ndefine the scope of range variables in and out of JOIN expressions.\nA bit over one page of that is devoted to scoping issues unique\nto JOINs, which I don't feel like reading at the moment!\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 12:26:12 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Duplicate table names"
},
{
"msg_contents": "At 12:26 PM 2/7/00 -0800, Don Baccus wrote:\n\n>>> select * from t1 tx, t3 tx;\n\n>>> becomes legal as long as t1 and t3 do not share common column names.\n\n>>This seems perfectly legal as well, even if they do share column names.\n>>Any reference to tx.y will fail due to ambiguity, but it shouldn't merely\n>>because of name checking.\n\n>Actually, according to Date an explicit range variable must be\n>unique within a given scope.\n\nI consulted the Oracle, and it agrees with Peter, hmmm...and the\nwording in Date's a bit ambiguous, he's not clear as to whether\nthe range variable must be unique when DEFINED, or must only be\nunique if it is referenced, i.e. select tx.foo from t1 tx, t3 tx\nis ambiguous.\n\nReading further into Date, he says that\n\nselect ... from t1 \n\nimplicitly defines t1 as a range variable, and since\n\nselect ... from t1, t1 is legal, then range variables need not be\nunique to be defined, 'cause according to the standard this\ncauses two range variables named t1 to be implicitly defined.\n\nSo, his comment about uniqueness within scope applies to whether\nor not you can explicitly REFERENCE, not DEFINE the range var.\n\nSorry for the confusion...Peter was right all along.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 13:03:01 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Duplicate table names"
},
{
"msg_contents": "> Date takes over six pages of fairly terse prose with few examples to\n> define the scope of range variables in and out of JOIN expressions.\n> A bit over one page of that is devoted to scoping issues unique\n> to JOINs, which I don't feel like reading at the moment!\n\nRight. We're not likely to meet all of the scoping rules in the first\nimplementation; they are *really* tough :(\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 08 Feb 2000 06:54:11 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Duplicate table names"
}
] |
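The rule Peter and Don converge on in this thread (defining duplicate range variables is legal; only an ambiguous *reference* is an error) can be illustrated with SQLite via Python's sqlite3, which already behaves that way. This is a sketch against SQLite, not PostgreSQL 6.5, whose stricter rejection is exactly what the thread questions.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (a INTEGER, b INTEGER)")
con.executemany("INSERT INTO t1 VALUES (?, ?)", [(1, 10), (2, 20)])

# A self-join with distinct range variables: each alias names its own
# scan of t1, so qualified references are unambiguous.
pairs = con.execute(
    "SELECT x.a, y.a FROM t1 AS x, t1 AS y ORDER BY x.a, y.a"
).fetchall()
print(pairs)  # cross product: 2 rows x 2 rows = 4 rows

# An unqualified reference to a column present in both range variables
# is rejected as ambiguous: the *definition* is legal, the *reference*
# is the error.
try:
    con.execute("SELECT a FROM t1 AS x, t1 AS y")
except sqlite3.OperationalError as exc:
    print("rejected:", exc)
```

The second statement fails with an "ambiguous column name" error, which is the behaviour Peter argues PostgreSQL should adopt instead of rejecting the FROM list outright.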
[
{
"msg_contents": "Dear pgsql-hackers list,\n\nFirst a few words of introduction : I'm 43 and, while I have been \nintroduced to computing for a long time (my first exposure was a small \nFortran exercise I wrote in '74 (!) for a timesharing system on an \nhardcopy terminal ...), my coding abilities are somewhat rusty. I am \nmainly a user by now, no longer a coder, and my interests in computers \nis now in making my life simpler (I'm a biostatistician, among other \nthings).I probably won't be contributing any code to PostgreSQL. Some \nbug reports, maybe ...\n\nHowever, I've lurked on some of the PostgreSQL lists for 2 to 3 months \n(through the Web interface), and I feel that I might offer some \nadvice, based on my past experience of seeing a lot of projects \ngrowing (or dying, due to feeping creaturism(TM) ...).\n\nSo I will shamelessly pull my first plea, related to the proposed \nchange to the default behaviour of PostgreSQL in querying classes with \nsubclasses.\n\nI *strongly* suggest not to change anything in the default behaviour, \nwhich is what is expected from an SQL-compliant system, even if the \ndatabase in question uses inheritance internally.\n\nThe reason for that plea is that a modification would crash any \nprogram not explicitly written for inheritance features : such \nfeatures might be used by, say, the administrator and coere \nprogrammers of a database, who are not necessarily publish this \ninternal use of inheritance to end-users. Furthermore, such a change \nwould forbid evolution of a database from a pure-relational to an \nobject-orien,ted one : the two representations would be incompatible.\n\nIt should also pointed out that most interface programs (such as ODBC \nor JDBC drivers) are not and will not in a foreseeable future be \ndesigned for use of these features. 
Modifying the default behaviour \nwould break them.\n\nApart from that, I am, after 17 years of exposure to the concepts of \nobject-oriented programming, still to be convinced of the value of \nthis paradigm. This is *not* to suggest that these developments should \nbe left over ! However, I *feel* that the real issues behind this \nconcept are not yet fully understood, and that some deep theoretical \nwork remains to be done (in logic, for example : while the \nwell-understood relational theory directly relates to set theory, I \nthink that a mathematically correct objects-and-types theory should \nemanate from category theory but remains to be created ...).\n\nYour thoughts ?\n\n\t\t\t\t\t\tEmmanuel Charpentier\n\n\n\n\n",
"msg_date": "Sun, 06 Feb 2000 17:57:42 GMT",
"msg_from": "Emmanuel Charpentier <[email protected]>",
"msg_from_op": true,
"msg_subject": "An introduction and a plea ..."
},
{
"msg_contents": "Emmanuel Charpentier wrote:\n\n> I *strongly* suggest not to change anything in the default behaviour,\n> which is what is expected from an SQL-compliant system, even if the\n> database in question uses inheritance internally.\n\nCan I assure you that these changes have NO EFFECT on anybody who\ndoes not use inheritance. i.e. Postgres will remain as SQL compliant\nas it was before.\n\n> The reason for that plea is that a modification would crash any\n> program not explicitly written for inheritance features.\n\nNo it won't. If you don't use inheritance, you will not be effected in\nany way.\n\n> : such\n> features might be used by, say, the administrator and coere\n> programmers of a database, who are not necessarily publish this\n> internal use of inheritance to end-users. Furthermore, such a change\n> would forbid evolution of a database from a pure-relational to an\n> object-orien,ted one : the two representations would be incompatible.\n> \n> It should also pointed out that most interface programs (such as ODBC\n> or JDBC drivers) are not and will not in a foreseeable future be\n> designed for use of these features. Modifying the default behaviour\n> would break them.\n\nIn my opinion, this change will give users of ODBC and such tools MORE\nuseful defaults. Of course if you are using a non-OO interface to an OO\ndatabase there will always be things you can't do. But IMHO, this gives\na more useful set of defaults as a trasition phase.\n\nFor example, currently if I have student and employee inheriting from\nperson, ODBC query of SELECT * from person will return... NOTHING! After\nthese changes the query will return all the persons (which happen to\nbe students and employees).\n\n> Apart from that, I am, after 17 years of exposure to the concepts of\n> object-oriented programming, still to be convinced of the value of\n> this paradigm. This is *not* to suggest that these developments should\n> be left over ! 
However, I *feel* that the real issues behind this\n> concept are not yet fully understood, and that some deep theoretical\n> work remains to be done (in logic, for example : while the\n> well-understood relational theory directly relates to set theory, I\n> think that a mathematically correct objects-and-types theory shoud\n> emanate from category theory but remains to be created ...).\n\nWell, the fact is people are using OO now, and it's hard for me to \nexplain the development advantages of an OO database to someone who\nis not coding. But if you really want to find out why an OO database\nis good, head on over to versant.com or odi.com, download the database\nand write a small application. Apart from anything else, some people\nneed the improved performance NOW, and can't wait for the academics\nto give their stamp of approval. And OO database coding simplicity\nis saving millions of $$$ NOW.\n",
"msg_date": "Mon, 07 Feb 2000 09:58:40 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] An introduction and a plea ..."
},
{
"msg_contents": "Emmanuel Charpentier wrote:\n>\n> However, I've lurked on some of the PostgreSQL lists for 2 to 3 months\n> (through the Web interface), and I feel that I might offer some\n> advice, based on my past experience of seeing a lot of projects\n> growing (or dying, due to feeping creaturism(TM) ...).\n> \n> So I will shamelessly pull my first plea, related to the proposed\n> change to the default behaviour of PostgreSQL in querying classes with\n> subclasses.\n> \n> I *strongly* suggest not to change anything in the default behaviour,\n> which is what is expected from an SQL-compliant system, even if the\n> database in question uses inheritance internally.\n\nI agree wrt the * returning different types of tuples from different \nsubtypes.\n\nI somewhata disagree about default selects/updates/deletes working on \ninherited tables by default - If we want PostgreSQL to evolve back \nto an ORDBMS. \n\nWe should not change the defaul _yet_, but we should not exclude\nthe change in future. rather we should acknowledge the current state of\naffairs wrt inheritance and declare it deprecated (dont use in new projects,\nstart fixing your old ones) \n\n> The reason for that plea is that a modification would crash any\n> program not explicitly written for inheritance features : such\n> features might be used by, say, the administrator and coere\n> programmers of a database, who are not necessarily publish this\n> internal use of inheritance to end-users.\n\nI saw something similar when going from python 1.5.1 to 1.5.2 - suddenly \nsome broken usage became a show-stopping bug instead of just ignering it \nwith some hidden default usage. It did not byte me directly, but several \nof our developers had never read the introductory parts of docs, or had \nnot understood what was said.\n\nCurrently inheritance features can be used in a very limited way - \n\n1. 
for defining a table that shares some columns with some other table(s)\nthis usage is actually broken, as it currently results in tables that can't \nbe dumped properly after columns are added, and thus should be discouraged \nanyway until it is fixed.\n\n2. for selecting (and not updating/deleting) from a group of said broken\n tables, using a non-ansi syntax. The performance is also most likely\n suboptimal, as indexes are not inherited.\n\nTherefore I would propose the following, more radical approach - \n\n* officially acknowledge the currently lacking OO support of PostgreSQL and \n declare the current usages deprecated and soon-to-be-removed in 7.0\n\n* not remove the support for them in the backend, but instead start to\n investigate ways to fix the bugs and add the missing features.\n\n* hide the OO development behind \"set ORDBMS to 'ON'\", in which case it would\n behave in the new way for the current two OO features\n (create .. inherits .., and select), if it is set to 'off' (the default)\n spit out a warning on each use but behave compatibly.\n (maybe make psql check if it is invoked as osql and send the set command \n automatically)\n\n* for migrating databases provide a way to dump inherited tables as standalone\n so that it would be easy for people to clear up the inherits-as-macro usage\n\n* The OO development should solve the following problems (independent of which\n syntax will be eventually used)\n\n 1. if a table inherits another table, it has to (at least) inherit the\n following by default\n\n 1.1 columns - in a way that allows add/delete column (requires changes to\n storage manager, probably introduction of deleted/missing columns)\n\n 1.2 indexes, both unique and ordinary, where unique indexes should be\nunique\n _over_all_tables_ involved\n\n 1.3 constraints, including being the foreign end of foreign key constraint\n\n 2. 
a way to go from OID to tuple\n\n The most efficient solution seems to be a file with a simple structure\nthat\n has records of (TUPLE_OID,TABLE_OID) where a record is added at each\ninsert.\n As this file is ordered wrt. TUPLE_OID and has fixed size records, it can\n be efficiently searched with binary search. As it is append-only it is also\n quite (probably most) efficient on inserts. I can't think of any solutions\n using current structures which would be nearly as efficient. If we\nsacrifice\n space for lookup speed we may write all oids and never shrink that file\nand \n have a computed lookup which would require at most one disk access per oid \n lookup. We could use some kind of weighted binary search in any case.\n\n The same kind of file could be used for re_introducing time-travel in an\n efficient way.\n \n 3. a way to get full tuples (tuple type + all columns) from inherited\ntables.\n\n This would require minimal changes to wire protocol, but more changes to \n client API's.\n\n 4. possibly a bit unrelated to OO, but still a must-do - Start working on a\n binary cross-platform protocol, that could be used for _both_\n insert/update/delete and select (instead of current single-platform select\n only binary protocol)\n\n It would mean adding PREPARE to the backend (already exists in SPI)\n as well as smarter client libraries that would expose it and that could \n marshal binary data given to BIND over wire. Having PREPARE-d queries\n can also speed up our performance on standard benchmarks, as much of \n prepare/optimise can be skipped.\n\n From there on it gets a bit foggy as it is really a distant future (possibly\n more than 1 year ;)\n\n 5. become even more object-oriented and add methods to tables that can do\n different things depending on which table they operate on.\n\n 6. allow writing these methods in a platform-independent language \n (java/python/tcl/perl/...) 
and also passed from backend to frontend.\n\n \n> Furthermore, such a change\n> would forbid evolution of a database from a pure-relational to an\n> object-orien,ted one : the two representations would be incompatible.\n\nDo you propose the two-separate-parsers way of doing things ?\n \n> It should also pointed out that most interface programs (such as ODBC\n> or JDBC drivers) are not and will not in a foreseeable future be\n> designed for use of these features. Modifying the default behaviour\n> would break them.\n\nStandard SQL queries should give standard SQL responses.\n\nOTOH, there is an evolving API for interfacing ObjectDatabases with Java\n\n> Apart from that, I am, after 17 years of exposure to the concepts of\n> object-oriented programming, still to be convinced of the value of\n> this paradigm.\n\nMy experience is exactly the opposite - after zenning the concept I'm unable \nto write anything longer than 15 lines that is not OO, (with the possible \nexclusion of SQL scripts, which do not fit nicely to that concept ;)\n\nIt does _not_ mean writing in an \"OO language\", but just a way of thinking \nabout problems and expressing these thoughts.\n\n> This is *not* to suggest that these developments should\n> be left over ! However, I *feel* that the real issues behind this\n> concept are not yet fully understood, and that some deep theoretical\n> work remains to be done\n\nThere will _always_ remain theoretical work to be done, at least for any \nlive concept.\n\n> (in logic, for example : while the\n> well-understood relational theory directly relates to set theory, I\n> think that a mathematically correct objects-and-types theory shoud\n> emanate from category theory but remains to be created ...).\n> \n> Your thoughs ?\n\nI suspect that OO programming as a whole could be complex enough that Goedel's \ntheorem forbids any complete \"mathematically correct objects-and-types theory\"\n\n----------------\nHannu\n",
"msg_date": "Mon, 07 Feb 2000 02:34:41 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] An introduction and a plea ..."
},
{
"msg_contents": "Hannu Krosing wrote:\n\n> 2. a way to go from OID to tuple\n> \n> The must efficient solution seems to be a file with a simple structure\n> that\n> has records of (TUPLE_OID,TABLE_OID) wher a record is added at each\n> insert.\n> As this file is ordered wrt. TULE_OID and has fixed size records, it can\n> be efficiently searche with binary search. As it is append-only it is also\n> quite (probably most) efficient on inserts. I can't think of any solutions\n> using current structures which would be nearly as efficient. \n\nIf you have your suggested indexes that apply over multiple relations, I\ncan't\nsee why that can't be used for this too. It just means that if you use\nODBMS it\nis recommended that you do a CREATE INDEX oid_idx ON object (oid), where\n\"object\"\nis a conceptual super-class of all other objects.\n\nYour append-only file would grow without limit, which I think is a bit\nof a\nproblem for some apps. Also the way ODBMS will work is an application\nwill \nask for a chunk\nof oids from the database, some of which may be later \"wasted\".(This is\nhow\nVersant works and it is also a technique documented by Stonebraker in\nhis\npostgres papers). This technique is so that applications don't have to\ntalk to the backend to create objects in the front end that need oids.\nThis means objects may not be created with oids in order.\nSo you have to store space for oids in your file that may not be used.\n\nI think we need first more conventional style index that works well.\nThen we\ncan experiment with more radical ideas.\n\n> The same kind of file could be used for re_introducing time-travel in an\n> efficient way.\n\nHow?\n\n> 5. become even more object-oriented and add methods to tables that can do\n> different things depending on which table they operate on.\n\nDoes this definitely not work now?\n",
"msg_date": "Mon, 07 Feb 2000 11:51:03 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] An introduction and a plea ..."
},
{
"msg_contents": "Hannu Krosing wrote:\n\n> > > The same kind of file could be used for re_introducing time-travel in an\n> > > efficient way.\n> >\n> > How?\n> \n> By writing (TID,TIMESTAMP) tuples there and using that info to retrieve tuples\n> active at specified time by examinimg TIDs in \"deleted\" tuples.\n> As bot TID and TIMESTAMP should be monotonuously growing again binary search\n> can be used on retrieve and inserts are append-only (meaning fast)\n\nBut since we are already storing all the time travel stuff already in\nthe\nstorage pages do we need this to reinstate time travel? Also if you\nreinstate\ntime travel this way it will only work for people using this odbms\nfeature.\nWouldn't it be better to reinstate the old timetravel so it works for\neveryone?\n",
"msg_date": "Mon, 07 Feb 2000 12:23:27 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] An introduction and a plea ..."
},
{
"msg_contents": "Chris Bitmead wrote:\n> \n> Hannu Krosing wrote:\n> \n> > 2. a way to go from OID to tuple\n> >\n> > The must efficient solution seems to be a file with a simple structure\n> > that\n> > has records of (TUPLE_OID,TABLE_OID) wher a record is added at each\n> > insert.\n> > As this file is ordered wrt. TULE_OID and has fixed size records, it can\n> > be efficiently searche with binary search. As it is append-only it is also\n> > quite (probably most) efficient on inserts. I can't think of any solutions\n> > using current structures which would be nearly as efficient.\n> \n> If you have your suggested indexes that apply over multiple relations, I\n> can't see why that can't be used for this too.\n\nThe insert performance would be much worse for indexes than for append-only\nfile.\n\n> It just means that if you use ODBMS it is recommended that you do a \n> CREATE INDEX oid_idx ON object (oid), where \"object\"\n> is a conceptual super-class of all other objects.\n> \n> Your append-only file would grow without limit, which I think is a bit\n> of a problem for some apps.\n\nI meant vacuum to compress it (which AFAIK it does not do for indexes\ncurrently)\n\n> Also the way ODBMS will work is an application will ask for a chunk\n> of oids from the database, some of which may be later \"wasted\".(This is\n> how Versant works and it is also a technique documented by Stonebraker in\n> his postgres papers). 
This technique is so that applications don't have to\n> talk to the backend to create objects in the front end that need oids.\n> This means objects may not be created with oids in order.\n> So you have to store space for oids in your file that may not be used.\n\nYes, it needs some more book-keeping than I thought (keep the oid-file pages \nthat could possibly be updated in memory until the front-end which requested\nthe oids disconnects), or just assume all oids will be used and compress the \nunused ones below watermark out in VACUUM.\n\n> I think we need first more conventional style index that works well.\n> Then we can experiment with more radical ideas.\n\nAn index spanning multiple tables is quite radical anyway. Initially we could \nget by with multiple indexes and an extra (but slow) check for uniqueness (when \nindex is unique).\n\n> \n> > The same kind of file could be used for re_introducing time-travel in an\n> > efficient way.\n> \n> How?\n\nBy writing (TID,TIMESTAMP) tuples there and using that info to retrieve tuples \nactive at specified time by examining TIDs in \"deleted\" tuples.\nAs both TID and TIMESTAMP should be monotonously growing again binary search \ncan be used on retrieve and inserts are append-only (meaning fast)\n\nBoth cases assume that we are oriented on fast inserts, as b-tree would\nprobably \nbe faster than binary search on retrieves, but is much slower on inserts.\n\n> \n> > 5. become even more object-oriented and add methods to tables that can do\n> > different things depending on which table they operate on.\n> \n> Does this definitely not work now?\n\nAFAIK functions are selected based on their arguments which can be either a\nfull \ntuple or several simple types, but not both.\n\nSo the first kind _may_ actually work, we must ask someone more familiar on\nwhen \nthe actual function is selected for \"SELECT T.func() from TAB* T\" queries.\n\n--------------\nHannu\n",
"msg_date": "Mon, 07 Feb 2000 03:26:00 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] An introduction and a plea ..."
},
{
"msg_contents": "Chris Bitmead wrote:\n> \n> Hannu Krosing wrote:\n> \n> > > > The same kind of file could be used for re_introducing time-travel in an\n> > > > efficient way.\n> > >\n> > > How?\n> >\n> > By writing (TID,TIMESTAMP) tuples there and using that info to retrieve tuples\n> > active at specified time by examinimg TIDs in \"deleted\" tuples.\n> > As bot TID and TIMESTAMP should be monotonuously growing again binary search\n> > can be used on retrieve and inserts are append-only (meaning fast)\n> \n> But since we are already storing all the time travel stuff already in\n> the storage pages do we need this to reinstate time travel?\n\nIf we want to query for old tuples by wallclock time (which is not stored) and \nnot only by transaction-id (which are) we need something to go from wc-time to\ntid\nand back.\n\n> Also if you reinstate time travel this way it will only work for people using \n> this odbms feature.\n> Wouldn't it be better to reinstate the old timetravel so it works for\n> everyone?\n\nIt would be probably better to do it under another set, probably at dbinit \n(or createdb) time.\n\nso maybe \n\nset TIME_TRAVEL to 'on';\nCREATE DATABASE TIME_TRAVELLERS_DB;\n\nwould create a database that can use the time-travel features.\n\nIt could of course be included in the db create statement:\n\nCREATE DATABASE TIME_TRAVELLERS_DB WITH TIME_TRAVEL='ON';\n\nBTW, have you considered making OO a per-database feature or at least the \ndefault being settable when creating the database.\n\n-----------------------\nHannu\n",
"msg_date": "Mon, 07 Feb 2000 22:53:29 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] An introduction and a plea ..."
}
] |
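Hannu's append-only (TUPLE_OID, TABLE_OID) lookup file from this thread can be sketched in Python. The class name and the in-memory lists are illustrative stand-ins for the fixed-size-record disk file he describes; `bisect` stands in for the binary search over sorted records.

```python
import bisect

class OidDirectory:
    """Sketch of the append-only (TUPLE_OID, TABLE_OID) file proposed above.

    OIDs are assigned in increasing order, so the record list stays sorted
    and a lookup is a binary search, while an append only touches the tail.
    A real implementation would use fixed-size records on disk, and VACUUM
    would compress out entries for OIDs that were handed out but never used.
    """

    def __init__(self):
        self._tuple_oids = []  # sorted tuple OIDs
        self._table_oids = []  # parallel list of owning table OIDs

    def append(self, tuple_oid, table_oid):
        # Append-only: enforce the monotonically increasing OID assumption.
        if self._tuple_oids and tuple_oid <= self._tuple_oids[-1]:
            raise ValueError("tuple OIDs must be appended in increasing order")
        self._tuple_oids.append(tuple_oid)
        self._table_oids.append(table_oid)

    def table_of(self, tuple_oid):
        """Return the table OID holding tuple_oid, or None if unknown."""
        i = bisect.bisect_left(self._tuple_oids, tuple_oid)
        if i < len(self._tuple_oids) and self._tuple_oids[i] == tuple_oid:
            return self._table_oids[i]
        return None
```

The same sorted, append-only layout is what Hannu later proposes for the (TID, TIMESTAMP) time-travel file: fast inserts at the tail, binary search on retrieval.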
[
{
"msg_contents": "It seems that PostgreSQKL might have a case-sensitivity problem.\n\nSorry to post that here. I tried to post on pgsql-interfaces, and my \npost has bounced (for no reason I have been able to fathom : the \nsubject line of the bounce messageb is : BOUNCE \[email protected]: Admin request of type /^\\s*config\\b/i \nat line 7. Beats me ...). I searched on the web site a reference to a \nlist manager, didn't find it ...\n\nFeel free to flame if this is not the correct procedure ... :-)).\n\n\t\t\t\t\tEmmanuel Charpentier\n\nBounced post follows :\n\n>From bouncefilter Sun Feb 6 13:38:25 2000\nReceived: from beth.bacbuc.fdn.fr ([email protected] \n[212.198.228.168])\n\tby hub.org (8.9.3/8.9.3) with ESMTP id NAA39566\n\tfor <[email protected]>; Sun, 6 Feb 2000 13:37:49 -0500 \n(EST)\n\t(envelope-from [email protected])\nReceived: from localhost.localdomain (really [193.57.55.1]) by \nbacbuc.fdn.fr\n\tvia in.smtpd with smtp (ident charpent using rfc1413)\n\tid <[email protected]> (Debian Smail3.2.0.101)\n\tfor <[email protected]>; Sun, 6 Feb 2000 19:37:48 +0100 \n(CET) \nFrom: Emmanuel Charpentier <[email protected]>\nDate: Sun, 06 Feb 2000 18:37:59 GMT\nMessage-ID: <[email protected]>\nSubject: Case sensitivity in ODBC ??\nTo: [email protected]\nReply-To: [email protected]\nX-Mailer: Mozilla/3.0 (compatible; StarOffice/5.1; Linux)\nX-Priority: 3 (Normal)\nMIME-Version: 1.0\nContent-Type: text/plain; charset=ISO-8859-1\nContent-Transfer-Encoding: 8bit\nX-MIME-Autoconverted: from quoted-printable to 8bit by hub.org id \nNAA39635\n\nDear pgsql-interface list,\n\nI have lived that SQL database were not to be case-sensitive. However, \nit seems that, at least through the ODBC interface, PostgreSQL 6.5.3 \nis :\n\nConfig : Linux RedHat 6.1, PostgreSQL 6.5.3 from the PostgreSQL ftp \nsite's RPM, unixODBC1.7 (including their PostgreSQL driver). 
Works \ngreat from various interfaces (including StarOffice ...).\n\nI'm working from the R statistical language/package through a beta \nODBC interface. Note that the interpreted R language is case-sensitive \n...\n\nLogical Setup\n\nR user interface ---> RODBC library ---> unixODBC ---> PostgreSQL\n ^\n |\nI'm trying to enhance that---+\n\nThis interface mostly works, but has some odd behaviour :\n\n> sqlQuery(chan1,\"create table Test1 (id int4 not null primary key, val \nvarchar(255))\") /* This creates the table test1, correctly */\n\n> sqlColumns(chan1,\"test1\") /* Sanity check */\n TABLE_QUALIFIER TABLE_OWNER TABLE_NAME COLUMN_NAME DATA_TYPE \nTYPE_NAME\n1 NA NA test1 id 4 \nint4\n2 NA NA test1 val 12 \nvarchar\n PRECISION LENGTH SCALE RADIX NULLABLE REMARKS DISPLAY_SIZE \nFIELD_TYPE\n1 10 4 0 10 0 NA 11 \n23\n2 254 254 NA NA 1 NA 254 \n1043\n/* This is the expected answer */\n\n> sqlTables(chan1)\n TABLE_QUALIFIER TABLE_OWNER TABLE_NAME TABLE_TYPE REMARKS\n1 NA NA pga_forms TABLE NA\n2 NA NA pga_layout TABLE NA\n3 NA NA pga_queries TABLE NA\n4 NA NA pga_reports TABLE NA\n5 NA NA pga_schema TABLE NA\n6 NA NA pga_scripts TABLE NA\n7 NA NA test1 TABLE NA\n/* This also is the expected answer. 
Note however that the name of the \ntable is lowercased */\n\n\n> sqlColumns(chan1,\"Test1\") /* Same sanity check, with the original name \n*/\nError in sqlColumns(chan1, \"Test1\") : Test1 :table not found on \nchannel 0\n/* This is unexpected if SQL, PostgreSQL and ODBC are case-insensitive \n*/\n\nFurthermore : debugging shows that the initial request for table \ncreation sends the name with its capital, as shown in the next example \n:\n\n> sqlSave(chan1,USArrests,rownames=\"State\") /* The sqlSave function \ncreates the necessary table if it does not exist */\n[1] \"CREATE TABLE USArrests (State varchar(255) ,Murder varchar(255) \n,Assault varchar(255) ,UrbanPop varchar(255) ,Rape varchar(255) )\"\n/* This is a debugging output showing the exact request sent to \nPostgreSQL, minus the ending semicolon. Yes the types are \nridiculous, and that's what I'm trying to fix ... */\nError in sqlColumns(channel, tablename) : USArrests :table not found \non channel 0\n/* This is an (unexpected) error message*/\n> sqlTables(chan1)\n TABLE_QUALIFIER TABLE_OWNER TABLE_NAME TABLE_TYPE REMARKS\n1 NA NA pga_forms TABLE NA\n2 NA NA pga_layout TABLE NA\n3 NA NA pga_queries TABLE NA\n4 NA NA pga_reports TABLE NA\n5 NA NA pga_schema TABLE NA\n6 NA NA pga_scripts TABLE NA\n7 NA NA usarrests TABLE NA\n/* A table « usarrests » has been created, but querying « USArrests » \ndoes not work. However :*/\n\n> sqlColumns(chan1,\"usarrests\")\n TABLE_QUALIFIER TABLE_OWNER TABLE_NAME COLUMN_NAME DATA_TYPE \nTYPE_NAME\n1 NA NA usarrests state 12 \nvarchar\n2 NA NA usarrests murder 12 \nvarchar\n3 NA NA usarrests assault 12 \nvarchar\n4 NA NA usarrests urbanpop 12 \nvarchar\n5 NA NA usarrests rape 12 \nvarchar\n PRECISION LENGTH SCALE RADIX NULLABLE REMARKS DISPLAY_SIZE \nFIELD_TYPE\n1 254 254 NA NA 1 NA 254 \n1043\n2 254 254 NA NA 1 NA 254 \n1043\n3 254 254 NA NA 1 NA 254 \n1043\n4 254 254 NA NA 1 NA 254 \n1043\n5 254 254 NA NA 1 NA 254 \n1043\n> \n/* Querying « usarrests » does ! 
Note that column names have been \nlowercased as well ... */\n\nThis seems to me proof enough that the behaviour of PostgreSQL (or \nODBC driver) related to case sensitivity is not coherent.\n\nCould some kind soul shed some light on this ?\n\nThanks in advance,\n\n\t\t\t\t\t\tEmmanuel Charpentier\n\n\n\n\n\n\n\n",
"msg_date": "Sun, 06 Feb 2000 22:51:34 GMT",
"msg_from": "Emmanuel Charpentier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Case sensitivity issues"
},
{
"msg_contents": "Emmanuel Charpentier <[email protected]> writes:\n> [ much snipped ]\n> This seems to me proof enough that the behaviour of PostgreSQL (or \n> ODBC driver) related to case sensitivity is not coherent.\n> Could some kind soul shed some light on this ?\n\nIt's hard to tell what your driver is doing, but the underlying backend\nbehavior is simple enough.\n\nA table or field name written in an SQL query is forced to lowercase\n*unless* it is written with double-quotes around it:\n\n\tSELECT * FROM Table; -- refers to \"table\"\n\n\tSELECT * FROM \"Table\"; -- refers to \"Table\"\n\nYour debugging output shows that the CREATE TABLE statement is being\nsent as-is, so the name is lowercased before the CREATE happens.\nYou didn't show what was being sent for your other queries like\nsqlColumns(). I speculate that the driver is translating those into\nSQL queries in which the provided name is quoted...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 Feb 2000 21:29:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Case sensitivity issues "
},
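Tom's lowercase-unless-quoted folding rule can be sketched in a few lines of Python. This is an illustrative helper (not code from the backend or the ODBC driver), but it captures the behaviour Emmanuel is seeing:

```python
# Sketch of the backend's identifier-folding rule: a bare identifier is
# folded to lowercase; a double-quoted identifier keeps its case exactly.
# Illustrative only -- not the actual parser code.
def fold_identifier(ident: str) -> str:
    if len(ident) >= 2 and ident.startswith('"') and ident.endswith('"'):
        return ident[1:-1]        # quoted: preserve case
    return ident.lower()          # unquoted: fold to lowercase

# CREATE TABLE USArrests (...) therefore creates a table named "usarrests",
print(fold_identifier('USArrests'))    # usarrests
# while a driver that re-quotes the original spelling looks up "USArrests"
# and misses it:
print(fold_identifier('"USArrests"'))  # USArrests
```

A driver that wants round-trip-safe names can simply double-quote every identifier it emits, both at CREATE time and at lookup time, so the same spelling is preserved in both directions.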
{
"msg_contents": "On Sun, Feb 06, 2000 at 10:51:34PM +0000, Emmanuel Charpentier wrote:\n> I have lived that SQL database were not to be case-sensitive. However, \n\nWhich one? I have yet to work with one, that is unless you count Access as a\nreal database which it is not.\n\n> This interface mostly works, but has some odd behaviour :\n> /* Querying « usarrests » does work! Note that column names have been \n> lowercased as well ... */\n\nSure. SQL is case insensitive by default. Quote your case-sensitive string\nwith double-quotes and it works.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Mon, 7 Feb 2000 07:48:53 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Case sensitivity issues"
}
] |
[
{
"msg_contents": "The following used to work in 6.5, works in Oracle, and is\nvery useful:\n\ndonb=# create table foo(c varchar);\nCREATE\ndonb=# insert into foo values('abc');\nINSERT 72649 1\n\ndonb=# select distinct c from foo order by upper(c);\nERROR: For SELECT DISTINCT, ORDER BY expressions must appear in target list\ndonb=# \n\nIn other words, we want to order ignoring case - in this case, users\nwithin the Ars Digita Community system. We want don baccus to appear\nnext to Joe Blow rather than following Xena Xenophoba.\n\nIs this now refused because it is non-standard? It seems a pity...\n\nOf course, one can do \"select distinct c, upper(c) as ignore ...\"\n\nbut that forces the return of more data, so is slower, etc...\n\nBTW the very fact that my testing of our partial port of this web\ntoolkit under V7 pre-beta has gotten this far is a very good sign.\n\nAmong other things, it makes heavy (if simple) use of referential\nintegrity, which has already uncovered two bugs in the port that\nI've fixed.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 06 Feb 2000 19:04:18 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "DISTINCT and ORDER BY bug?"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> The following used to work in 6.5, works in Oracle, and is\n> very useful:\n\n> donb=# select distinct c from foo order by upper(c);\n> ERROR: For SELECT DISTINCT, ORDER BY expressions must appear in target list\n\nWell, it's not a bug --- it was an entirely deliberate change. It\nmight be a misfeature though. The case we were concerned about was\n\n\tselect distinct x from foo order by y;\n\nwhich produces ill-defined results. If I recall the thread correctly,\nOracle and a number of other DBMSs reject this. I think your point is\nthat\n\n\tselect distinct x from foo order by f(x);\n\n*is* well-defined, and useful. I think you are right, but how\nfar should we go in detecting common subexpressions? You might\nwant to contemplate the difference in these examples:\n\n\tselect distinct sin(x) from foo order by abs(sin(x));\n\n\tselect distinct random(x) from foo order by abs(random(x));\n\nIt would be interesting to poke at Oracle to find out just what they\nconsider a legitimate ORDER BY expression for a SELECT DISTINCT.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 00:26:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DISTINCT and ORDER BY bug? "
},
{
"msg_contents": "At 12:26 AM 2/7/00 -0500, Tom Lane wrote:\n\n>Well, it's not a bug --- it was an entirely deliberate change. It\n>might be a misfeature though.\n\nAhhh...getting subtle, are we? :)\n\n> The case we were concerned about was\n>\n>\tselect distinct x from foo order by y;\n\nYes...I remember some discussion regarding this.\n\n>which produces ill-defined results. If I recall the thread correctly,\n>Oracle and a number of other DBMSs reject this. I think your point is\n>that\n\n>\tselect distinct x from foo order by f(x);\n\n>*is* well-defined, and useful. I think you are right, but how\n>far should we go in detecting common subexpressions?\n\nNot sure...having not been into that part of the code (and busy at\nthe moment testing my rewrites of small portions of RI trigger\ncode I rewrote at Jan's request, after our \"dispute\" [which was more\nor less \"I'm 50% certain you're right!\" \"No! I'm 50% you're right!\"\nuntil I found the paragraph in Date's book which proved we were both\njust about 50% right]) I can't really say. \n\nI was hoping the standard might give some guidance?\n\n> You might\n>want to contemplate the difference in these examples:\n>\n>\tselect distinct sin(x) from foo order by abs(sin(x));\n\nI'm not sure I see a problem here. My (brief) reading of the\nstandard tells me that \"order by\" follows everything else, \nin other words, you get\n\nselect ... arbitrary complexity, with group by and all sorts of\ncruft ...\n\nthen you take that result and apply the \"order by\" clause.\n\nYou'd get all the negative values followed by the positive\nvalues, but you'd also get -1.0 and 1.0 if the database had\nthose values. 
Because they're distinct, and therefore live to\nbe ordered.\n\nBut I'm not sure about it...if you push me, I'll probably go dig\ninto the standard again (I was so successful with referential\n\"NO ACTION\" last time, yeah, right, I sleep with Date's book under\nmy pillow at the moment!)\n\n>\tselect distinct random(x) from foo order by abs(random(x));\n\nOf course, real compiler systems (like I've spent my life working\non) have heuristic or, more modernly, other ways of deciding if a\nfunction returns different values depending on when it is called.\nIn such systems, you only have to guarantee the correct answer, so\nchoosing wrong simply means the code runs slower. \n\n\"upper(column_value)\" does not within a specific select. Column\nvalue won't change. I can think of rules to think of but the\nsimplest might be that internal functions that are invariant when\ntheir parameters are unchanged might be considered safe. Others,\nnot.\n\nAlso, the standard might simply say the result is implementation\ndependent or (slightly worse) defined if the function returns\ndifferent values for a call with the same parameter list in a\nsingle query. I don't know...it's an interesting question.\n\nThe other approach is to simply state that the function has one\nand only one value during statement (SQL-statement, in this case)\nexecution, and yank the sucker out of there, execute it, and stuff\nit in a temp variable. But that's probably too naive. Still, the\nstandard might say it is implementation defined as to whether or\nnot the function will be called once or more than once. The standard\nonly cares about embedded SQL but it might give guidance...\n\n>It would be interesting to poke at Oracle to find out just what they\n>consider a legitimate ORDER BY expression for a SELECT DISTINCT.\n\nI have full-time access to an Oracle installation, so fire away\nregarding examples and questions.\n\nNot just on this narrow subject, but in general. 
I'm probably not the\nONLY person here with Oracle access, but I do have it, and my poking\nat it won't hurt anything but Oracle's pride...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 06 Feb 2000 22:05:27 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DISTINCT and ORDER BY bug? "
},
{
"msg_contents": "On Mon, 7 Feb 2000, Tom Lane wrote:\n\n> Well, it's not a bug --- it was an entirely deliberate change. It\n> might be a misfeature though. The case we were concerned about was\n> \n> \tselect distinct x from foo order by y;\n> \n> which produces ill-defined results. \n\nOkay, I can understand this...\n\n> \tselect distinct sin(x) from foo order by abs(sin(x));\n> \n> \tselect distinct random(x) from foo order by abs(random(x));\n\nThe thing here is that random() is not deterministic on its inputs,\nwhereas sin() is. Perhaps we should only allow fully deterministic ORDER\nBY? (Ugh, another flag for functions...)\n\nTaral\n\n",
"msg_date": "Mon, 7 Feb 2000 00:12:47 -0600 (CST)",
"msg_from": "Taral <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DISTINCT and ORDER BY bug? "
},
{
"msg_contents": "At 12:12 AM 2/7/00 -0600, Taral wrote:\n\n>The thing here is that random() is not deterministic on its inputs,\n>whereas sin() is. Perhaps we should only allow fully deterministic ORDER\n>BY? (Ugh, another flag for functions...)\n\nWhich, by its nature, is probably a misnomer, because I imagine that\nPL/pgSQL functions would always have to be non deterministic whatever\ntheir inputs? Given that unrecognized syntax is just tossed to the\nquery executor. Thus calling any 'ole function without PL/pgSQL \nreally knowing what's going on?\n\nSo you probably end up with a LIST of functions by name that are built-in\nand deterministic.\n\nOr ... you simply say that results are really weird if the function has\nnondeterministic behavior and document it.\n\nTom's on the right path asking what the standard might say and what\ndelphic, incomprehensible answer the Oracle might have for us.\n\n(the more I learn about the SQL standard, the more I appreciate the irony\nof Oracle's corporate name!)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 06 Feb 2000 22:17:17 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DISTINCT and ORDER BY bug? "
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> At 12:26 AM 2/7/00 -0500, Tom Lane wrote:\n>> It would be interesting to poke at Oracle to find out just what they\n>> consider a legitimate ORDER BY expression for a SELECT DISTINCT.\n\n> I have full-time access to an Oracle installation, so fire away\n> regarding examples and questions.\n\nWell, try these on for size:\n\n\tselect distinct x from foo order by x+1;\n\n\tselect distinct x+1 from foo order by x+1;\n\n\tselect distinct x+1 from foo order by x;\n\n\tselect distinct x+1 from foo order by x+2;\n\n\tselect distinct x+y from foo order by x+y;\n\n\tselect distinct x,y from foo order by x+y;\n\n\tselect distinct x+y from foo order by x,y;\n\n\tselect distinct x+y from foo order by x-y;\n\nA human can easily see that all but the last two are well-defined,\nbut I'll be a little surprised if Oracle knows it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 01:36:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DISTINCT and ORDER BY bug? "
},
{
"msg_contents": "\n> select distinct x from foo order by y;\n> \n> which produces ill-defined results.\n\nWhy is this ill-defined? If y is in x then it is also distinct and\nthere's no logic problem sorting on it.\n",
"msg_date": "Mon, 07 Feb 2000 19:31:14 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DISTINCT and ORDER BY bug?"
},
{
"msg_contents": "At 01:36 AM 2/7/00 -0500, Tom Lane wrote:\n>Don Baccus <[email protected]> writes:\n>> At 12:26 AM 2/7/00 -0500, Tom Lane wrote:\n>>> It would be interesting to poke at Oracle to find out just what they\n>>> consider a legitimate ORDER BY expression for a SELECT DISTINCT.\n>\n>> I have full-time access to an Oracle installation, so fire away\n>> regarding examples and questions.\n>\n>Well, try these on for size:\n\nHere's what the Oracle proclaims:\n\nselect distinct x from foo order by x+1;\nno rows selected\n\nselect distinct x+1 from foo order by x+1;\nno rows selected\n\nselect distinct x+1 from foo order by x;\nSQL> select distinct x+1 from foo order by x\n *\nERROR at line 1:\nORA-01791: not a SELECTed expression\n\nselect distinct x+1 from foo order by x+2;\nSQL> select distinct x+1 from foo order by x+2\n *\nERROR at line 1:\nORA-01791: not a SELECTed expression\n\nselect distinct x+y from foo order by x+y;\nSQL> \nno rows selected\n\nI also tried: select distinct x+y from foo order by y+x,\nwhich fails.\n\nselect distinct x,y from foo order by x+y;\nSQL> \nno rows selected\n\nselect distinct x+y from foo order by x,y;\nSQL> select distinct x+y from foo order by x,y\n *\nERROR at line 1:\nORA-01791: not a SELECTed expression\n\nselect distinct x+y from foo order by x-y;\nSQL> select distinct x+y from foo order by x-y\n *\nERROR at line 1:\nORA-01791: not a SELECTed expression\n\nMy first thought is that it is following a simple rule:\n\nFor arithmetic \"order by\" expressions, either:\n\n1. The exact expression must also appear in the \"select\" list,\n and it must be exact, not just an expression that computes\n the same value as the \"order by\" expression\n \n or\n\n2. 
all of the variables used by the expression must be listed \n in the \"select\" list as simple column names, not as part of\n an expression.\n\nMust be true.\n\nAt least, the rule is simple if you can compare expression trees.\n\nAt this point I still am clueless regarding the standard, I think I'll\nmake Date my morning coffee date again.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 07:03:55 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DISTINCT and ORDER BY bug? "
},
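Don's inferred rule is easy to state as code. Here is a small Python sketch, with expressions modeled as nested tuples and bare column references as plain strings. It is purely illustrative -- and, as the rest of the thread shows, Oracle's real check also does some common-subexpression matching that this sketch omits:

```python
# Sketch of the rule Don infers from Oracle's ORA-01791 errors.
# Expressions are nested tuples like ('+', 'x', 1); column references
# are plain strings.  Hypothetical model, not Oracle's implementation.
def columns_of(expr):
    """Collect the column names an expression tree references."""
    if isinstance(expr, str):
        return {expr}
    if isinstance(expr, tuple):
        cols = set()
        for part in expr[1:]:       # expr[0] is the operator
            cols |= columns_of(part)
        return cols
    return set()                    # literals contribute no columns

def order_by_allowed(select_list, order_expr):
    # Rule 1: the exact expression tree also appears in the select list.
    if order_expr in select_list:
        return True
    # Rule 2: every column it uses is selected as a bare column name.
    bare = {e for e in select_list if isinstance(e, str)}
    return columns_of(order_expr) <= bare

# select distinct x from foo order by x+1       -> accepted (rule 2)
print(order_by_allowed(['x'], ('+', 'x', 1)))             # True
# select distinct x+1 from foo order by x       -> ORA-01791
print(order_by_allowed([('+', 'x', 1)], 'x'))             # False
# select distinct x+y from foo order by x-y     -> ORA-01791
print(order_by_allowed([('+', 'x', 'y')], ('-', 'x', 'y')))  # False
```

Each test case above mirrors one of the queries Don ran; the two accept/reject rules reproduce all of Oracle's answers in this batch.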
{
"msg_contents": "Chris <[email protected]> writes:\n>> select distinct x from foo order by y;\n>> \n>> which produces ill-defined results.\n\n> Why is this ill-defined? If y is in x then it is also distinct\n\nHuh? The query specifies distinct values of x, and only x.\nConsider\n\t\tx\ty\n\n\t\t1\t1\n\t\t1\t10\n\t\t2\t0\n\t\t2\t11\n\n\"select distinct x\" ought to produce one row with x=1, and one row with\nx=2, and nothing else. If it implicitly did the distinct on y as well,\nyou'd get four rows with two x=1 and two x=2, which is not my idea of\n\"distinct x\". But if you don't have four rows out, then there's no\nmeaningful way to order by y.\n\n6.5.3 in fact produces four rows from this query, which is generally\nconceded to be broken behavior.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 10:56:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DISTINCT and ORDER BY bug? "
},
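The four-row table in Tom's example can be poked at directly in Python to see where the ambiguity comes from (an illustrative sketch, nothing the executor actually does):

```python
# Tom's four-row example as plain data: DISTINCT x yields two rows, but
# each surviving x corresponds to two different y values, so "ORDER BY y"
# has no well-defined meaning for the two-row result.
rows = [(1, 1), (1, 10), (2, 0), (2, 11)]

distinct_x = sorted({x for x, y in rows})
print(distinct_x)          # the two rows a correct DISTINCT produces

# For each distinct x there is more than one candidate y to sort on:
ys_for = {}
for x, y in rows:
    ys_for.setdefault(x, set()).add(y)
print(ys_for)
```

Picking either candidate y per row is arbitrary, which is why returning four rows (as 6.5.3 did) or accepting the query at all is considered broken.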
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> My first thought is that it is following a simple rule:\n\n> For arithmetic \"order by\" expressions, either:\n\n> 1. The exact expression must also appear in the \"select\" list,\n> and it must be exact, not just an expression that computes\n> the same value as the \"order by\" expression\n \n> or\n\n> 2. all of the variables used by the expression must be listed \n> in the \"select\" list as simple column names, not as part of\n> an expression.\n\nCould be. How about cases like\n\n\tselect distinct x,y+1 from foo order by x+y+1;\n\n> At least, the rule is simple if you can compare expression trees.\n\nI think we have something pretty similar for GROUP BY, actually,\nso it may not be hard to make this work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 11:03:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DISTINCT and ORDER BY bug? "
},
{
"msg_contents": "At 11:03 AM 2/7/00 -0500, Tom Lane wrote:\n\n>Could be. How about cases like\n>\n>\tselect distinct x,y+1 from foo order by x+y+1;\n\n\nSQL> select distinct x,y+1 from foo order by x+y+1\n *\nERROR at line 1:\nORA-01791: not a SELECTed expression\n\n>> At least, the rule is simple if you can compare expression trees.\n\n>I think we have something pretty similar for GROUP BY, actually,\n>so it may not be hard to make this work.\n\nActually, yes, you're probably right...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 08:21:46 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DISTINCT and ORDER BY bug? "
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\nSQL> select distinct x,y+1 from foo order by x+y+1\n> *\n> ERROR at line 1:\n> ORA-01791: not a SELECTed expression\n\nActually, that was a little unfair, since their parser no doubt parsed\n\"x+y+1\" as (x+y)+1, leaving no common subexpression visible. Do they\naccept\n\n\tselect distinct x,y+1 from foo order by x+(y+1)\n\n>>> At least, the rule is simple if you can compare expression trees.\n\n>> I think we have something pretty similar for GROUP BY, actually,\n>> so it may not be hard to make this work.\n\nOn further thought, I think the real implementation issue is that\ndoing SELECT DISTINCT ORDER BY requires either two sorting steps\n(sort by DISTINCT fields, \"uniq\" filter, sort again by ORDER BY fields)\nor else some very hairy logic to figure out that ORDER BY x+1\n\"implies\" ORDER BY x. In fact I'm not sure it does imply it\nin the general case. In your original example, the requested sort\nwas ORDER BY upper(x), but that doesn't guarantee that the tuples\nwill be ordered adequately for duplicate-x elimination. For example,\nthat ORDER BY might yield\n\n\tAnsel Adams\n\tDon Baccus\n\tDON BACCUS\n\tDon Baccus\n\tJoe Blow\n\t...\n\nwhich is a valid sort by upper(x), but a uniq filter on plain x\nwill fail to get rid of the second occurrence of \"Don Baccus\" as\nit should.\n\nPossibly we could make this work by implicitly expanding the ORDER BY\nto \"ORDER BY upper(x), x\" which would ensure that the duplicate x's\nare brought together. I am not sure this will give the right results\nalways, but it seems promising. We are assuming here that upper(x)\ngives equal outputs for equal inputs, so it would fall down on random(x)\n--- I suppose we could refuse to do this if we see a function that is\nmarked non-constant-foldable in pg_proc...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 12:10:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DISTINCT and ORDER BY bug? "
},
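Both behaviours Tom describes are easy to mimic with Python's stable sort plus an adjacent-duplicate ("uniq") filter. The names below are hypothetical, for illustration only:

```python
# Sketch of the two ORDER BY strategies from Tom's message, using a
# stable sort and an adjacent-duplicate filter (like a uniq step).
def uniq(seq):
    """Drop adjacent duplicates, as a post-sort uniq filter would."""
    out = []
    for item in seq:
        if not out or out[-1] != item:
            out.append(item)
    return out

names = ['Don Baccus', 'DON BACCUS', 'Ansel Adams', 'Don Baccus', 'Joe Blow']

# Sorting by upper(x) alone need not bring equal x values together,
# so the uniq filter can fail to remove a duplicate:
naive = uniq(sorted(names, key=str.upper))
print(naive)   # a second 'Don Baccus' survives

# Expanding the sort key to (upper(x), x) groups equal x values while
# preserving the upper(x) ordering, so uniq eliminates the duplicate:
fixed = uniq(sorted(names, key=lambda x: (x.upper(), x)))
print(fixed)   # ['Ansel Adams', 'DON BACCUS', 'Don Baccus', 'Joe Blow']
```

As Tom notes, the (upper(x), x) trick only works if the function returns equal outputs for equal inputs, which is why something like random(x) would have to be excluded.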
{
"msg_contents": "At 12:10 PM 2/7/00 -0500, Tom Lane wrote:\n>Don Baccus <[email protected]> writes:\n>SQL> select distinct x,y+1 from foo order by x+y+1\n>> *\n>> ERROR at line 1:\n>> ORA-01791: not a SELECTed expression\n>\n>Actually, that was a little unfair, since their parser no doubt parsed\n>\"x+y+1\" as (x+y)+1, leaving no common subexpression visible. Do they\n>accept\n>\n>\tselect distinct x,y+1 from foo order by x+(y+1)\n\nYes, it does. So, they must be doing some level of common expression\nanalysis, for real.\n\n>>>> At least, the rule is simple if you can compare expression trees.\n>\n>>> I think we have something pretty similar for GROUP BY, actually,\n>>> so it may not be hard to make this work.\n>\n>On further thought, I think the real implementation issue is that\n>doing SELECT DISTINCT ORDER BY requires either two sorting steps\n>(sort by DISTINCT fields, \"uniq\" filter, sort again by ORDER BY fields)\n\nYes.\n\n>or else some very hairy logic to figure out that ORDER BY x+1\n>\"implies\" ORDER BY x. In fact I'm not sure it does imply it\n>in the general case. In your original example, the requested sort\n>was ORDER BY upper(x), but that doesn't guarantee that the tuples\n>will be ordered adequately for duplicate-x elimination. \n\nI realize that. I would assume that a double-sort penalty might\nbe incurred, i.e. the select distinct ... is executed followed by\nthe order by.\n\n>Possibly we could make this work by implicitly expanding the ORDER BY\n>to \"ORDER BY upper(x), x\" which would ensure that the duplicate x's\n>are brought together.\n\nThat would be another approach, too, if it works for all cases...\n\n> I am not sure this will give the right results\n>always, but it seems promising. 
We are assuming here that upper(x)\n>gives equal outputs for equal inputs, so it would fall down on random(x)\n>--- I suppose we could refuse to do this if we see a function that is\n>marked non-constant-foldable in pg_proc...\n\nSomething like that, yes.\n\nI just checked Date while off having coffee, and it is clear that the\nSQL standard specifies that ORDER BY operates on COLUMNS, not expressions.\nSo the restriction that's now imposed is indeed standard compliant. However,\nsome level of extension in this area would be very useful, and my guess is\nthat examples like the one that started this discussion are very common.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 10:37:37 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DISTINCT and ORDER BY bug? "
}
] |
[
{
"msg_contents": "\nBeen trying to read the SQL3 draft. My best guess is that this\nis the appropriate section...\n\n Let T be the table identified by\n <ANSI> <table name>\n <ISO > <table or query name>\n contained in a <table specification> TS.\n\n...\n\n c) If ONLY is specified, then TS identifies a table of the\nrows\n that do not have any corresponding row in any subtable of\n T.\n\nI assume this is a round-about way of saying that \"ONLY\" is used to exclude\nsubtables?\n\nBTW, I think in SQL3 the oid column is supposed to be called \"IDENTITY\".\nMaybe, but who can read this thing? (Can we find the people who wrote\nthis document and have them taken out and flogged?).\n",
"msg_date": "Mon, 07 Feb 2000 16:07:24 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "ONLY"
},
{
"msg_contents": "> BTW, I think in SQL3 the oid column is supposed to be called \"IDENTITY\".\n> Maybe, but who can read this thing? (Can we find the people who wrote\n> this document and have them taken out and flogged?).\n\n Would that be enough? They'd sin on revenge, writing the next\n specs. Cut off hands and rip out tongue (or something that\n is really painful in their terms - I'm unable to think\n about anything), so they are never a danger to the human\n community again.\n\n In fact, it's easier for me to interpret a sendmail.cf file\n than these specs. They'd have done a better job by simply\n writing a gram.y with some appropriate comments - more\n readable and easier to adopt.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 7 Feb 2000 07:07:41 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ONLY"
},
{
"msg_contents": "At 04:07 PM 2/7/00 +1100, Chris Bitmead wrote:\n\n>BTW, I think in SQL3 the oid column is supposed to be called \"IDENTITY\".\n>Maybe, but who can read this thing? (Can we find the people who wrote\n>this document and have them taken out and flogged?).\n\nIt's not ALL that bad, my earlier comments were partly tongue in cheek.\n\nMostly, it is obvious that you have to digest the whole thing in order to\ncorrectly understand bits and pieces. That was Jan's problem with \n\"NO ACTION\" and RI, leading him to believe that this meant he should\nleave dangling table references after deleting a referenced table. I\nknew that was wrong, and figured it had to do with the general definition\nof integrity constraints (i.e. there's a predicate function applied to\nthe entire database that must be true at strictly-defined times, and if not, \nerrors spew forth and transactions roll backwards) but I'm damned if I\ncould find it. Thus our difficulty in deciding what PG should do for\nsuch cases.\n\nBut I know it is there... :) And Jan was as relieved as me to learn\nthat it must be (because Date tells us so). Still, neither of us has\nseen it, we're just trusting Date and common sense (occam's razor,\nwhen in doubt, do the right thing).\n\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 06 Feb 2000 22:11:24 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ONLY"
},
{
"msg_contents": "Don Baccus wrote:\n\n> It's not ALL that bad, my earlier comments were \n> partly tongue in cheek.\n\n<grumble> I think they're pretty bad. I did start reading from the\nbeginning, even reading the definitions and there are many things that\nare not clear to me.\n\nIf you think it's not too bad, do you care to comment on the \"ONLY\"\nsituation?\n",
"msg_date": "Mon, 07 Feb 2000 19:42:38 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ONLY"
},
{
"msg_contents": "At 07:42 PM 2/7/00 +1100, Chris wrote:\n>Don Baccus wrote:\n>\n>> It's not ALL that bad, my earlier comments were \n>> partly tongue in cheek.\n\n><grumble> I think they're pretty bad. I did start reading from the\n>beginning, even reading the definitions and there are many things that\n>are not clear to me.\n\n>If you think it's not too bad, do you care to comment on the \"ONLY\"\n>situation?\n\nWell, OK, I was trying to be nice. Let me put it in a way that insults\ntwo standards committees at once:\n\nIt's no harder to read than the C++ standard.\n\nHow's that? :)\n\nDate's primer takes potshots at it in almost every section. One way\nin which the SQL standard IS worse than even your typically crummy\nlanguage standard is that it apparently is not internally consistent.\nIt contradicts itself in many areas, according to Date (who seems to\ntake real pleasure in pointing out specifics). \n\nWhile all language standards have some bugs of this sort, the SQL standard\nseems to be full of them.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 07:10:35 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ONLY"
},
{
"msg_contents": "On Mon, 7 Feb 2000, Don Baccus wrote:\n\n> While all language standards have some bugs of this sort, the SQL standard\n> seems to be full of them.\n\n*sigh* I hate it when people do this. YOU try writing a standard with that\nmuch information such that nobody will come back to you and say \"you left\nsuch-and-such undefined\". It's _very_ hard and requires a lot of\ndefinitions. I'll admit, however, that section summaries would be nice --\nI had to wade through way too much stuff to find out what MATCH FULL\nmeant exactly.\n\nTaral\n\n",
"msg_date": "Mon, 7 Feb 2000 12:54:06 -0600 (CST)",
"msg_from": "Taral <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ONLY"
},
{
"msg_contents": "At 12:54 PM 2/7/00 -0600, Taral wrote:\n>On Mon, 7 Feb 2000, Don Baccus wrote:\n>\n>> While all language standards have some bugs of this sort, the SQL standard\n>> seems to be full of them.\n\n>*sigh* I hate it when people do this. YOU try writing a standard with that\n>much information such that nobody will come back to you and say \"you left\n>such-and-such undefined\".\n\nI was at the very first organizational meeting for the creation of a\nstandard for Pascal, and was very active at the beginning of that \nprocess (delegating it to someone else in my company when it got bogged\ndown in paralytic discussions over whether to use a comma or semicolon to\nseparate particular clauses, etc).\n\nSince the BSI nailed me to the cross and made me agree to be one\nof a half-dozen folks who met annually to accept or reject proposed\nadditions to their test suite, I'm actually quite used to having folks\ntell me \"you left such-and-such undefined\". \"you\" in the collective sense\nof those who drafted the standard. Sometimes they were even right.\n\nI was also the BSI's designated technical consultant to the ISO \ncommittee convened by the BSI to standardize Modula-2. Though I\ngot out of that thankless task after a year. I don't even know\nif they ever finished, because I dropped out of the computer industry\nshortly thereafter.\n\nSo ... I am aware of how hard the problem is. And I've spent far\ntoo much of my life reading and reviewing proposed standards.\n\nThe SQL standard seems to have more than its fair share of contradictions.\n\nThen again, SQL is far more complex than either of the languages I\nmentioned above. So's C++, and its standard is a morass that reflects\nthe fact that the language itself is a morass. And Bjarne's just an...oh,\nlet's not go there.\n\nI had friends (and one employee) on the ANSI committee, and dropped in on\none meeting just to lend a sympathetic ear. 
I'm really glad I was smart\nenough to never come back, one meeting was enough!\n\n> It's _very_ hard and requires a lot of definitions.\n\nYes, I know. That doesn't change the fact that the result in this\ncase is extremely opaque!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 11:19:39 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ONLY"
},
{
"msg_contents": "Don Baccus wrote:\n\n> While all language standards have some bugs of this sort, the SQL standard\n> seems to be full of them.\n\nDoes SQL3 seem to be going anywhere, or has the world lost interest in\nit?\n",
"msg_date": "Tue, 08 Feb 2000 10:17:08 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ONLY"
},
{
"msg_contents": "\n> >*sigh* I hate it when people do this. YOU try writing a standard with that\n> >much information such that nobody will come back to you and say \"you left\n> >such-and-such undefined\".\n\nLike someone else said, if they at least supplied a compilable gram.y\nwe'd\nat least have a definitive syntax, and could restrain ourselves to\narguing\nabout the meanings.\n",
"msg_date": "Tue, 08 Feb 2000 10:56:39 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ONLY"
}
] |
[
{
"msg_contents": "\n\n\nHi all, here are my responses to various questions:\n\nFirst, there was one more syntax request I forgot to include before:\n\ncreate view doesn't allow column aliases in the view definition: e.g.\ncreate view myview (a,b,c) as select x,y,z ... doesn't work, but create\nview myview select x as a, y as b, z as c ... works.\n\nfrom Tom Lane:\n> Thanks for the report! I don't suppose you'd be interested in rerunning\nyour tests on current (pre-beta-7.0) sources?\n\n(and other questions on what version to test)\n\nWe get a lot of that anytime we run tests. A simple answer is that PC Week\nnever benchmarks non-production, non-released code. The next question is\nwhy not wait until version x when we are that much better? We have to put\nthe stake in the ground sometime, and this story was driven by the news of\nInterBase going open source. We cover the news week by week, and all\nstories have to have a current news hook of some kind. That being said,\nwhen there is a significant change in the competitive landscape, I'd like\nto run the tests again (esp. now that I have gotten all the scripts done).\nIn particular, I'd like to compare IB 6 with PG 7.x and MySQL later this\nyear (I didn't benchmark MySQL this time because I ran out of time, but\nwould very much like to.)\n\n> 65536 buffer limit?\n\nI was using a 4KB page size, so this was 256MB of cache. In this case, I\nwas using a database that was about 86 MB of data and indices, so I didn't\nneed this much cache. However, I could easily see a production database\nserver for a mid-size company equipped with 1 or 2 GB of RAM and I would\nassign 80% of that RAM to the db cache. 
In future tests, I will be testing\nwith a 4 GB database and so will need to be using as big a cache as\nPostgreSQL can support.\n\n> outer joins\n\nAs Don Baccus points out, simulating outer joins with a union and a not\nexists gets hairy when you have more than two tables involved.\nInterestingly, Sybase only supported outer joins as of two months ago\n(12.0), though the others have supported them for some time.\n\n(from Ed Loehr)\n> I was disappointed this benchmark did not include database recovery\nand reliability measurements. Benchmarks ought to include the most\nimportant characteristics of an RDBMS, and recovery/reliability is\ncertainly one of them\n\nI quite agree, though, of course, there always a balancing of time to run\nthe tests vs. value of results gained. The benchmark I chose for PC Week's\nuse (AS3AP by Turbyfill, Orji and Bitton) is actually much more rounded\nthan most benchmarks (Wisconsin, TPC-A/B/C, etc.) because it includes a)\nload time, b) index time, c) update stats time, d) DSS queries such as\naggregates and counts, e) is both single user and multiuser, and f) uses a\nwide variety of data types, not just int and char.\n\nIn addition, I have a) extended the query set quite a bit to cover much\nmore of the SQL92 entry/intermediate level spec, and b) added query log\ntables and consistency check queries to do some testing of proper ACID\nproperties.\n\nNow on the specific issue of recovery, I decided a few years ago not to\nmeasure that metric solely because the TPC-C test does such a good job of\nchecking for ACIDity. In fact, continual TPC-C testing is a big reason why\ntoday's databases are so reliable. The problem here is that no open source\ndatabase has ever been tested by the TPC because none of the development\ngroups are TPC members. I'd certainly suggest that Red Hat or VA Linux do\nthis to get some database numbers on the board. 
It's a key credibility\ntest because just passing is a very good assurance of really debugged\ntransaction logging code.\n\n\n",
"msg_date": "Mon, 07 Feb 2000 01:41:26 -0500",
"msg_from": "Timothy Dyck <[email protected]>",
"msg_from_op": true,
"msg_subject": "follow-up on PC Week Labs benchmark results"
},
{
"msg_contents": "> create view doesn't allow column aliases in the view definition: e.g.\n> create view myview (a,b,c) as select x,y,z ... doesn't work, but create\n> view myview select x as a, y as b, z as c ... works.\n\nThanks for the heads up. We've never run into it before (and have not\nhad any requests for it), but will look at implementing it.\n\n> ... PC Week never benchmarks non-production, non-released code...\n> ... this story was driven by the news of\n> InterBase going open source.\n\nHmm. InterBase going open source seems to be a pre-alpha vaporware\nfeature so far ;)\n\n> In particular, I'd like to compare IB 6 with PG 7.x and MySQL later this\n> year (I didn't benchmark MySQL this time because I ran out of time, but\n> would very much like to.)\n\nGreat. We'll look forward to it. Also, it will be interesting to see\nthe relative performance and feature improvements over time; Postgres\nhas been living on \"Internet time\" for the last three or four years,\nand I'll be suprised if other \"Open Source\" DBs can keep up.\n\n> In addition, I have a) extended the query set quite a bit to cover much\n> more of the SQL92 entry/intermediate level spec, and b) added query log\n> tables and consistency check queries to do some testing of proper ACID\n> properties.\n\nYou had inquired earlier about \"when we would support complete SQL92\"\n(give or take a few words). What areas of entry level SQL92 are we\nmissing in your opinion (or should we wait for the article)?\n\nbtw, I've been amused and gratified by PC Week's obvious shift from\nOpen Source FUD generator to covering Open Source with a more even\nhand. It's been months since the last time John Dodge referred to\n\"linux fanatics\" with obvious scorn, and it is nice to see that y'all\nare starting to get the point.\n\nRegards.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 07 Feb 2000 14:54:02 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] follow-up on PC Week Labs benchmark results"
},
{
"msg_contents": "At 02:54 PM 2/7/00 +0000, Thomas Lockhart wrote:\n\n>> ... PC Week never benchmarks non-production, non-released code...\n>> ... this story was driven by the news of\n>> InterBase going open source.\n>\n>Hmm. InterBase going open source seems to be a pre-alpha vaporware\n>feature so far ;)\n\nThis is a good point as InterBase announced that their upcoming\nBeta was going open source.\n\nDid you test an early copy of their upcoming beta, or did you test\ntheir current, non-Open Source product?\n\nIf you tested their upcoming beta, then it would seem fair to test\nour upcoming beta, too :) If you tested their non-Open Source\ncurrent production version, then you're not testing two Open Source\ndatabases...hmmm...\n\nOf course, InterBase may've expanded on their earlier announcement\nof what's going Open Source, I've not been tracking it.\n\n>> In particular, I'd like to compare IB 6 with PG 7.x and MySQL later this\n>> year (I didn't benchmark MySQL this time because I ran out of time, but\n>> would very much like to.)\n\nActually, this slipped by me the first time.\n\nWhy benchmark MySQL? It's not a real RDBMS, it doesn't even pretend\nto support ACID semantics. Clearly it is going to be faster than \ndatabases that do because supporting ACID semantics is expensive.\n\nThis would be comparing apples with oranges, meaningless.\n\nNow, don't get me wrong, for many application spaces mySQL is fine. If\nyou're running a bboard system for overclockers, for instance, you probably\nwould sigh in relief if disaster struck and you lost all your data.\n\nOn the other hand, if you're running an e-commerce site losing data is\nnot cool and mySQL is not appropriate.\n\nRather than benchmark, it would seem more useful to educate your readers\nabout the meaning of ACID, and how to decide when you need it and when you\ndon't. 
That would seem far more important, because in my experience many\npeople don't understand that there is a real difference between a program\nthat executes a subset of SQL in a simple manner, and an RDBMS that \npasses the ACID test and happens to be driven by SQL queries.\n\nIf you were to benchmark in the context of such an article, it would make\nsome sense, because you could do so in order to answer the question, \"How\nmuch does ACID hurt performance?\" \n\nThis would give your readers real information to help drive their choice\nof software. \n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 07:29:35 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] follow-up on PC Week Labs benchmark results"
},
{
"msg_contents": "Don Baccus wrote:\n> \n> >> In particular, I'd like to compare IB 6 with PG 7.x and MySQL later this\n> >> year (I didn't benchmark MySQL this time because I ran out of time, but\n> >> would very much like to.)\n> \n> Actually, this slipped by me the first time.\n> \n> Why benchmark MySQL? It's not a real RDBMS, it doesn't even pretend\n> to support ACID semantics. Clearly it is going to be faster than\n> databases that do because supporting ACID semantics is expensive.\n\nI remember some reports of it still being slower on more complex queries.\n\n> This would be comparing apples with oranges, meaningless.\n> \n> Now, don't get me wrong, for many application spaces mySQL is fine. If\n> you're running a bboard system for overclockers, for instance, you probably\n> would sigh in relief if disaster struck and you lost all your data.\n> \n> On the other hand, if you're running an e-commerce site losing data is\n> not cool and mySQL is not appropriate.\n> \n> Rather than benchmark, it would seem more useful to educate your readers\n> about the meaning of ACID, and how to decide when you need it and when you\n> don't. That would seem far more important, because in my experience many\n> people don't understand that there is a real difference between a program\n> that executes a subset of SQL in a simple manner, and an RDBMS that\n> passes the ACID test and happens to be driven by SQL queries.\n\nYou probably can get ACID behaviour from MySQL by serializing at transaction \nlevel ;)\n\n------------\nHannu\n",
"msg_date": "Mon, 07 Feb 2000 23:21:47 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] follow-up on PC Week Labs benchmark results"
}
] |
[
{
"msg_contents": ">> Been trying to read the SQL3 draft. My best guess is that this\n>> is the appropriate section...\n>> \n>> Let T be the table identified by\n>> <ANSI> <table name>\n>> <ISO > <table or query name>\n>> contained in a <table specification> TS.\n>> \n>> ...\n>> \n>> c) If ONLY is specified, then TS identifies a table fo the\n>> rows\n>> that do not have any corresponding row in any \n>> subtable of\n>> T.\n>> \n>> I assume this a round-about way of saying that \"ONLY\" is \n>> used to exclude\n>> subtables?\nThat's not what it sounds like to me. To me, this sounds like it will only\ninclude those rows that do not have associated rows in sub-tables. That's\nnot the same as selecting rows without the associated sub-table rows. Kind\nof a select * from TS where not exists (any rows in sub-tables of TS)\n\nLooking at it again, it does sound very ambiguous, but I would still lean\n(semantically, not using common sense) to what I wrote above. Of course,\ncommon sense would dictate otherwise.\n\nMikeA\n",
"msg_date": "Mon, 7 Feb 2000 09:42:13 +0200 ",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] ONLY"
}
] |
[
{
"msg_contents": "\nI have the new developer's globe online. Please check your BIOs and\nlet me know if there's anything that needs correcting. For those \nwithout pictures, don't be so shy. Submit a picture - if you need to\nhave one scanned it can be arranged.\n\nAnd before I forget.. Good job on the globe, Jan!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 7 Feb 2000 07:12:37 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "New Globe"
},
{
"msg_contents": ">\n> I have the new developer's globe online. Please check your BIOs and\n> let me know if there's anything that needs correcting. For those\n> without pictures, don't be so shy. Submit a picture - if you need to\n> have one scanned it can be arranged.\n\n As core members, Tom Lane and me should move up into the\n steering area, I think.\n\n> And before I forget.. Good job on the globe, Jan!\n\n Did you succeed in rendering it?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 7 Feb 2000 16:47:11 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "> >\n> > I have the new developer's globe online. Please check your BIOs and\n> > let me know if there's anything that needs correcting. For those\n> > without pictures, don't be so shy. Submit a picture - if you need to\n> > have one scanned it can be arranged.\n> \n> As core members, Tom Lane and me should move up into the\n> steering area, I think.\n\nDoing it right now.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 11:11:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "On Mon, 7 Feb 2000, Jan Wieck wrote:\n\n> >\n> > I have the new developer's globe online. Please check your BIOs and\n> > let me know if there's anything that needs correcting. For those\n> > without pictures, don't be so shy. Submit a picture - if you need to\n> > have one scanned it can be arranged.\n> \n> As core members, Tom Lane and me should move up into the\n> steering area, I think.\n\nI leave those decisions to Marc. :)\n\n> > And before I forget.. Good job on the globe, Jan!\n> \n> Did you succeed in rendering it?\n\nYep. Did it on hub. Took (I think) 6 and a half hours.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 7 Feb 2000 11:15:22 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "> On Mon, 7 Feb 2000, Jan Wieck wrote:\n> \n> > >\n> > > I have the new developer's globe online. Please check your BIOs and\n> > > let me know if there's anything that needs correcting. For those\n> > > without pictures, don't be so shy. Submit a picture - if you need to\n> > > have one scanned it can be arranged.\n> > \n> > As core members, Tom Lane and me should move up into the\n> > steering area, I think.\n> \n> I leave those decisions to Marc. :)\n> \n> > > And before I forget.. Good job on the globe, Jan!\n> > \n> > Did you succeed in rendering it?\n> \n> Yep. Did it on hub. Took (I think) 6 and a half hours.\n\nThat's it. No new developers. The globe takes too long to generate. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 11:32:30 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "Jan Wieck wrote:\n \n> As core members, Tom Lane and me should move up into the\n> steering area, I think.\n\nI was wondering when you two would be promoted. Congratulations.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 07 Feb 2000 11:58:17 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "> > Yep. Did it on hub. Took (I think) 6 and a half hours.\n>\n> That's it. No new developers. The globe takes too long to generate. :-)\n\n Not as much as a problem you might think it is. The default\n make target is a poor quality image, rendering aprox. 2\n minutes on my 333MHz PII. So adding/removing pins can be\n verified/tested quickly.\n\n Only rendering the final image in full quality is what pushes\n the CPU against the wall.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 7 Feb 2000 18:11:56 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "> > > Yep. Did it on hub. Took (I think) 6 and a half hours.\n> >\n> > That's it. No new developers. The globe takes too long to generate. :-)\n> \n> Not as much as a problem you might think it is. The default\n> make target is a poor quality image, rendering aprox. 2\n> minutes on my 333MHz PII. So adding/removing pins can be\n> verified/tested quickly.\n> \n> Only rendering the final image in full quality is what pushes\n> the CPU against the wall.\n\nThat map is just too cool.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 12:36:29 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> \n> I have the new developer's globe online. Please check your BIOs and\n> let me know if there's anything that needs correcting. For those\n> without pictures, don't be so shy. Submit a picture - if you need to\n> have one scanned it can be arranged.\n> \n> And before I forget.. Good job on the globe, Jan!\n\nHow much code do I have to contribute to get a marker on Sydney? :)\n",
"msg_date": "Tue, 08 Feb 2000 10:13:00 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "> Vince Vielhaber wrote:\n> > \n> > I have the new developer's globe online. Please check your BIOs and\n> > let me know if there's anything that needs correcting. For those\n> > without pictures, don't be so shy. Submit a picture - if you need to\n> > have one scanned it can be arranged.\n> > \n> > And before I forget.. Good job on the globe, Jan!\n> \n> How much code do I have to contribute to get a marker on Sydney? :)\n\nThat's a good question. We normally give \"pins\" out to people who have\ncontributed a code over a significant period of time. For example,\nPeter Eisentraut is almost ready for a pin. We would normally wait for\nhis feature patches to be released, and then wait a few months to see if\nhe is still around. We look for people who we feel are in this for the\nlong haul.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 18:48:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "\nYep those pop-up pictures on the globe are cool.\n\nMy initial reaction - I'm glad I've got a beard, because it looks to\nbe a pre-requisite for hacking on postgresql!!!\n\n\nBruce Momjian wrote:\n> \n> > Vince Vielhaber wrote:\n> > >\n> > > I have the new developer's globe online. Please check your BIOs and\n> > > let me know if there's anything that needs correcting. For those\n> > > without pictures, don't be so shy. Submit a picture - if you need to\n> > > have one scanned it can be arranged.\n> > >\n> > > And before I forget.. Good job on the globe, Jan!\n> >\n> > How much code do I have to contribute to get a marker on Sydney? :)\n> \n> That's a good question. We normally give \"pins\" out to people who have\n> contributed a code over a significant period of time. For example,\n> Peter Eisentraut is almost ready for a pin. We would normally wait for\n> his feature patches to be released, and then wait a few months to see if\n> he is still around. We look for people who we feel are in this for the\n> long haul.\n> \n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 08 Feb 2000 17:07:02 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Yep those pop-up pictures on the globe are cool.\n\n> My initial reaction - I'm glad I've got a beard, because it looks to\n> be a pre-requisite for hacking on postgresql!!!\n\nI was getting razzed on IRC the other day for being the only core\nmember without a beard ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Feb 2000 01:47:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe "
},
{
"msg_contents": "> Chris Bitmead <[email protected]> writes:\n> > Yep those pop-up pictures on the globe are cool.\n> \n> > My initial reaction - I'm glad I've got a beard, because it looks to\n> > be a pre-requisite for hacking on postgresql!!!\n> \n> I was getting razzed on IRC the other day for being the only core\n> member without a beard ...\n\nBut we are going to take care of that, right Tom? ;-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 8 Feb 2000 02:00:39 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "> I was getting razzed on IRC the other day for being the only core\n> member without a beard ...\n\nIt's only a matter of time...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 08 Feb 2000 07:14:39 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": ">> I was getting razzed on IRC the other day for being the only core\n>> member without a beard ...\n\n> But we are going to take care of that, right Tom? ;-)\n\nEr ... will long hair do?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Feb 2000 02:43:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe "
},
{
"msg_contents": "> >> I was getting razzed on IRC the other day for being the only core\n> >> member without a beard ...\n> \n> > But we are going to take care of that, right Tom? ;-)\n> \n> Er ... will long hair do?\n\nCan you wrap it up around your chin? :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 8 Feb 2000 03:09:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "On Tue, 8 Feb 2000, Tom Lane wrote:\n\n> Chris Bitmead <[email protected]> writes:\n> > Yep those pop-up pictures on the globe are cool.\n> \n> > My initial reaction - I'm glad I've got a beard, because it looks to\n> > be a pre-requisite for hacking on postgresql!!!\n> \n> I was getting razzed on IRC the other day for being the only core\n> member without a beard ...\n> \n> \t\t\tregards, tom lane\n\nSo where's the photo?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 8 Feb 2000 06:09:44 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New Globe "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > How much code do I have to contribute to get a marker on Sydney? :)\n> \n> That's a good question. We normally give \"pins\" out to people who have\n> contributed a code over a significant period of time. For example,\n> Peter Eisentraut is almost ready for a pin. We would normally wait for\n> his feature patches to be released, and then wait a few months to see if\n> he is still around.\n\nVadim seens to have disappeared from earth, I hope you are not going to \nremove his pin yet ;)\n\nHopefully he won't be away from postgres for the full 6 year he'll spend in \nAmerica.\n\n> We look for people who we feel are in this for the\n> long haul.\n\n-------------\nHannu\n",
"msg_date": "Tue, 08 Feb 2000 13:40:13 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "> On Tue, 8 Feb 2000, Tom Lane wrote:\n> \n> > Chris Bitmead <[email protected]> writes:\n> > > Yep those pop-up pictures on the globe are cool.\n> > \n> > > My initial reaction - I'm glad I've got a beard, because it looks to\n> > > be a pre-requisite for hacking on postgresql!!!\n> > \n> > I was getting razzed on IRC the other day for being the only core\n> > member without a beard ...\n> > \n> > \t\t\tregards, tom lane\n> \n> So where's the photo?\n\nIf anyone wants to postal mail me a photo, I will scan it in and send it\nto Vince. Postal address is in my signature.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 8 Feb 2000 09:14:25 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "On Tue, 8 Feb 2000, Hannu Krosing wrote:\n\n> Bruce Momjian wrote:\n> > \n> > > How much code do I have to contribute to get a marker on Sydney? :)\n> > \n> > That's a good question. We normally give \"pins\" out to people who have\n> > contributed a code over a significant period of time. For example,\n> > Peter Eisentraut is almost ready for a pin. We would normally wait for\n> > his feature patches to be released, and then wait a few months to see if\n> > he is still around.\n> \n> Vadim seens to have disappeared from earth, I hope you are not going to \n> remove his pin yet ;)\n> \n> Hopefully he won't be away from postgres for the full 6 year he'll spend in \n> America.\n\nHe's alive and well ... but has >500 email to scan through before he even\ngets to this thread :)\n\n\n",
"msg_date": "Tue, 8 Feb 2000 11:14:37 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> Vadim seens to have disappeared from earth, I hope you are not going to \n> remove his pin yet ;)\n\nCertainly not --- but it needs to be moved to San Francisco ...\n\n> Hopefully he won't be away from postgres for the full 6 year he'll\n> spend in America.\n\nHe's pretty busy at the moment with getting settled in, but I'm sure\nhe'll be participating again soon.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Feb 2000 10:52:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe "
},
{
"msg_contents": "> Hannu Krosing <[email protected]> writes:\n> > Vadim seens to have disappeared from earth, I hope you are not going to \n> > remove his pin yet ;)\n> \n> Certainly not --- but it needs to be moved to San Francisco ...\n> \n> > Hopefully he won't be away from postgres for the full 6 year he'll\n> > spend in America.\n> \n> He's pretty busy at the moment with getting settled in, but I'm sure\n> he'll be participating again soon.\n\nI hear he will be here for 3 years.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 8 Feb 2000 10:54:59 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "On Tue, 8 Feb 2000, The Hermit Hacker wrote:\n\n> On Tue, 8 Feb 2000, Hannu Krosing wrote:\n> \n> > Bruce Momjian wrote:\n> > > \n> > > > How much code do I have to contribute to get a marker on Sydney? :)\n> > > \n> > > That's a good question. We normally give \"pins\" out to people who have\n> > > contributed a code over a significant period of time. For example,\n> > > Peter Eisentraut is almost ready for a pin. We would normally wait for\n> > > his feature patches to be released, and then wait a few months to see if\n> > > he is still around.\n> > \n> > Vadim seens to have disappeared from earth, I hope you are not going to \n> > remove his pin yet ;)\n> > \n> > Hopefully he won't be away from postgres for the full 6 year he'll spend in \n> > America.\n> \n> He's alive and well ... but has >500 email to scan through before he even\n> gets to this thread :)\n\nShouldn't take long, that's less than a days worth of mail for me! But\nit's no fun after a long weekend outa town :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 8 Feb 2000 10:57:18 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "On Tue, 8 Feb 2000, Tom Lane wrote:\n\n> Hannu Krosing <[email protected]> writes:\n> > Vadim seens to have disappeared from earth, I hope you are not going to \n> > remove his pin yet ;)\n> \n> Certainly not --- but it needs to be moved to San Francisco ...\n\nDid Vadim indicate that he wanted his pin moved?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 8 Feb 2000 11:22:43 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New Globe "
},
{
"msg_contents": "Yes, let me remind folks Vince still needs more pictures.\n\n> \n> I have the new developer's globe online. Please check your BIOs and\n> let me know if there's anything that needs correcting. For those \n> without pictures, don't be so shy. Submit a picture - if you need to\n> have one scanned it can be arranged.\n> \n> And before I forget.. Good job on the globe, Jan!\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n> 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 9 Jun 2000 08:01:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New Globe"
}
] |
[
{
"msg_contents": "I've got to update my photo - as I shaved off the beard a few weeks ago\n:-)\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Vince Vielhaber [mailto:[email protected]]\nSent: Monday, February 07, 2000 12:13 PM\nTo: [email protected]\nSubject: [HACKERS] New Globe\n\n\n\nI have the new developer's globe online. Please check your BIOs and\nlet me know if there's anything that needs correcting. For those \nwithout pictures, don't be so shy. Submit a picture - if you need to\nhave one scanned it can be arranged.\n\nAnd before I forget.. Good job on the globe, Jan!\n\nVince.\n-- \n========================================================================\n==\nVince Vielhaber -- KA8CSH email: [email protected]\nhttp://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n========================================================================\n==\n\n\n\n\n************\n",
"msg_date": "Mon, 7 Feb 2000 12:29:52 -0000 ",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] New Globe"
},
{
"msg_contents": "On Mon, 7 Feb 2000, Peter Mount wrote:\n\n> I've got to update my photo - as I shaved off the beard a few weeks ago\n> :-)\n\nEmail it direct to me.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 7 Feb 2000 07:32:09 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] New Globe"
}
] |
[
{
"msg_contents": "I've checked the source, looked through various mailing\nlists, etc. Considering that PostgreSQL is (at least partially)\nan ORDBMS I would have thought it would be possible but I\ncan't see it.\n\nAny ideas?\n\nThe other thing I wanted to ask is how to find which table/class\nan oid is from but that has already been discussed on this list.\nI agree with the classname idea is good.\n\nFinally, is there a way around the scanner? I have a set of data\nthat needs to go into the database so I need to INSERT or UPDATE.\nHowever, this data may contain quotes, backslashes, etc. Is there\na way of simply say \"here is the literal data, no manipulation\nrequired\". Maybe as %length[literal data of given length].\n\nPlease CC any replies. It makes is easier to find them.\n\nMartijn\n",
"msg_date": "Tue, 08 Feb 2000 01:18:08 +1100",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can you dereference an OID?"
}
] |
[
{
"msg_contents": "Greets,\n\nWill Postgres suffer any major performance hits from\nrecomiling with support for column names over 32 chars.\nie: 64 chars.\n\nJeff\n\n\n======================================================\nJeff MacDonald\n\[email protected]\tirc: bignose on EFnet\n======================================================\n\n",
"msg_date": "Mon, 7 Feb 2000 10:24:24 -0400 (AST)",
"msg_from": "\"Jeff MacDonald <[email protected]>\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Longer Column Names"
},
{
"msg_contents": "\"Jeff MacDonald <[email protected]>\" <[email protected]> writes:\n> Will Postgres suffer any major performance hits from\n> recomiling with support for column names over 32 chars.\n> ie: 64 chars.\n\nI doubt it'd make a large difference, except that your system\ntables would get bigger. Try it and let us know what you see...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 11:06:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Longer Column Names "
},
{
"msg_contents": "Ok sounds good then. got a few questions..\n\n1: how to change this (i assume it's in the source, and not a\nconfigure otpion)\n\n2: will it require and initdb , backup, reinstall . or just\nrecompilie and let her rip.\n\njeff\n\n\n======================================================\nJeff MacDonald\n\[email protected]\tirc: bignose on EFnet\n======================================================\n\nOn Mon, 7 Feb 2000, Tom Lane wrote:\n\n> \"Jeff MacDonald <[email protected]>\" <[email protected]> writes:\n> > Will Postgres suffer any major performance hits from\n> > recomiling with support for column names over 32 chars.\n> > ie: 64 chars.\n> \n> I doubt it'd make a large difference, except that your system\n> tables would get bigger. Try it and let us know what you see...\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n",
"msg_date": "Mon, 7 Feb 2000 14:20:06 -0400 (AST)",
"msg_from": "\"Jeff MacDonald <[email protected]>\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Longer Column Names "
},
{
"msg_contents": "On 2000-02-07, Jeff MacDonald <[email protected]> mentioned:\n\n> Ok sounds good then. got a few questions..\n> \n> 1: how to change this (i assume it's in the source, and not a\n> configure otpion)\n\nin src/include/postgres_ext.h the macro NAMEDATALEN\n\n> \n> 2: will it require and initdb , backup, reinstall . or just\n> recompilie and let her rip.\n\nOh yeah, the whole deal. See also comments near the above location.\n\n> \n> jeff\n> \n> \n> ======================================================\n> Jeff MacDonald\n> \[email protected]\tirc: bignose on EFnet\n> ======================================================\n> \n> On Mon, 7 Feb 2000, Tom Lane wrote:\n> \n> > \"Jeff MacDonald <[email protected]>\" <[email protected]> writes:\n> > > Will Postgres suffer any major performance hits from\n> > > recomiling with support for column names over 32 chars.\n> > > ie: 64 chars.\n> > \n> > I doubt it'd make a large difference, except that your system\n> > tables would get bigger. Try it and let us know what you see...\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ************\n> > \n> \n> \n> ************\n> \n> \n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 8 Feb 2000 00:13:01 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Longer Column Names "
}
] |
[
{
"msg_contents": "At 07:12 AM 2/7/00 -0500, Vince Vielhaber wrote:\n\n>And before I forget.. Good job on the globe, Jan!\n\nI love the stick-pins. And the photos are nice, too.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 07:15:43 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New Globe"
}
] |
[
{
"msg_contents": "I have just been told that Corel and Inprise are merging. That makes\nInterbase a more formidable foe.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 11:39:21 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inprise/Corel merger"
}
] |
[
{
"msg_contents": "This is a heads up about some changes I'm considering for the PostgreSQL\n7.0 RPMS. If the RPMs do not interest you, or you are not involved in\nsupporting PostgreSQL in the RPM form, then feel free to hit delete.\n\nI wanted to get feedback from the group before I made any changes -- 7.0\nis\nslated to go beta Feb 15, and I plan on having RPM's available within a\nfew hours of the official beta release. I already have feedback from my\ncontact at RedHat -- and it is complete agreement with the changes I am\nproposing.\n\nFirst, 7.0 has changed many small things about the operation of the\nsystem, including a new procedural language (plperl), a reorganized\nregression test suite, and a new 'pg_ctl' command to start and stop the\npostmaster.\n\nAs a result, I will be having to make some minor changes; and there is\none major change I want to make.\n\n1.)\tI want to move the actual database directory from /var/lib/pgsql to\n/var/lib/pgsql/data -- this will give the ability to store backups and\nother scratch data in /var/lib/pgsql without disturbing the main\ndatabase, in an FHS-compliant manner (the current regression tests AND\nupgrade scripts are not in compliance with the FHS in terms of their\npackaging, unfortunately -- I want to rectify this). This of course will\nrequire sufficient documentation -- and I will provide functionality to\nuse an existing data structure in /var/lib/pgsql instead, until the user\nmoves it (new installations will default the initdb to\n/var/lib/pgsql/data).\n\n2.)\tI will be enabling logs and logrotate functionality in the next\nrelease.\n\n3.)\t/etc/rc.d/init.d/postgresql will be rewritten to use the pg_ctl\ncommand, instead of doing the start/stop manually.\n\n4.)\tThe new plperl language will go in postgresql-perl.\n\n5.)\tI am considering splitting out pgaccess and the tk client from\npostgresql-tcl to postgresql-tk -- I have had several requests from\nusers of servers that are using the tcl client and the pltcl language\nwho do not have X11 installed, a current requirement for the\ninstallation of postgresql-tcl.\n\n6.)\tAnd, of course, an update to version 7.0. This will involve\nextensive testing for Alpha, Sparc, and MIPS support -- I am hoping that\nRyan Kirkpatrick and Uncle George can get the Alpha patches in order for\n7.0, as I don't believe Tom had time to do the fmgr rewrite like he\nwanted. I am also hoping that a number of people with MIPS, ARM, Alpha,\nand Sparc (both 32 and 64) will volunteer to beta test 7.0.\n\nThe initial builds will be done for Intel only -- until I get patches\nand/or confirmation of build on other architectures from those who are\nable to test those.\n\nIf you have additional suggestions for improving the RPM distribution,\nplease e-mail them to me.\n\nTIA for your feedback.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 07 Feb 2000 12:12:22 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 7 RPMs coming soon"
},
{
"msg_contents": "Lamar,\n\nI got report from bulgarian user who had problem with locale under\nMANDRAKE caused by 'su -l postgres ...' I'll forward it to you.\nIncorrectly configured i18n (LC_ALL) could override (?) locale\nsettings even if LC_CTYPE, LC_COLLATE explicitly specified in\nstartup script. I think you could check if all locale environment\nare consistent with each other.\n\n\tOleg\nOn Mon, 7 Feb 2000, Lamar Owen wrote:\n\n> Date: Mon, 07 Feb 2000 12:12:22 -0500\n> From: Lamar Owen <[email protected]>\n> To: [email protected]\n> Subject: [HACKERS] PostgreSQL 7 RPMs coming soon\n> \n> This is a heads up about some changes I'm considering for the PostgreSQL\n> 7.0 RPMS. If the RPMs do not interest you, or you are not involved in\n> supporting PostgreSQL in the RPM form, then feel free to hit delete.\n> \n> I wanted to get feedback from the group before I made any changes -- 7.0\n> is\n> slated to go beta Feb 15, and I plan on having RPM's available within a\n> few hours of the official beta release. I already have feedback from my\n> contact at RedHat -- and it is complete agreement with the changes I am\n> proposing.\n> \n> First, 7.0 has changed many small things about the operation of the\n> system, including a new procedural language (plperl), a reorganized\n> regression test suite, and a new 'pg_ctl' command to start and stop the\n> postmaster.\n> \n> As a result, I will be having to make some minor changes; and there is\n> one major change I want to make.\n> \n> 1.)\tI want to move the actual database directory from /var/lib/pgsql to\n> /var/lib/pgsql/data -- this will give the ability to store backups and\n> other scratch data in /var/lib/pgsql without disturbing the main\n> database, in an FHS-compliant manner (the current regression tests AND\n> upgrade scripts are not in compliance with the FHS in terms of their\n> packaging, unfortunately -- I want to rectify this). This of course will\n> require sufficient documentation -- and I will provide functionality to\n> use an existing data structure in /var/lib/pgsql instead, until the user\n> moves it (new installations will default the initdb to\n> /var/lib/pgsql/data).\n> \n> 2.)\tI will be enabling logs and logrotate functionality in the next\n> release.\n> \n> 3.)\t/etc/rc.d/init.d/postgresql will be rewritten to use the pg_ctl\n> command, instead of doing the start/stop manually.\n> \n> 4.)\tThe new plperl language will go in postgresql-perl.\n> \n> 5.)\tI am considering splitting out pgaccess and the tk client from\n> postgresql-tcl to postgresql-tk -- I have had several requests from\n> users of servers that are using the tcl client and the pltcl language\n> who do not have X11 installed, a current requirement for the\n> installation of postgresql-tcl.\n> \n> 6.)\tAnd, of course, an update to version 7.0. This will involve\n> extensive testing for Alpha, Sparc, and MIPS support -- I am hoping that\n> Ryan Kirkpatrick and Uncle George can get the Alpha patches in order for\n> 7.0, as I don't believe Tom had time to do the fmgr rewrite like he\n> wanted. I am also hoping that a number of people with MIPS, ARM, Alpha,\n> and Sparc (both 32 and 64) will volunteer to beta test 7.0.\n> \n> The initial builds will be done for Intel only -- until I get patches\n> and/or confirmation of build on other architectures from those who are\n> able to test those.\n> \n> If you have additional suggestions for improving the RPM distribution,\n> please e-mail them to me.\n> \n> TIA for your feedback.\n> \n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 7 Feb 2000 22:12:31 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 7 RPMs coming soon"
},
{
"msg_contents": "On 2000-02-07, Lamar Owen mentioned:\n\n> 1.)\tI want to move the actual database directory from /var/lib/pgsql to\n> /var/lib/pgsql/data -- this will give the ability to store backups and\n> other scratch data in /var/lib/pgsql without disturbing the main\n> database, in an FHS-compliant manner (the current regression tests AND\n> upgrade scripts are not in compliance with the FHS in terms of their\n> packaging, unfortunately -- I want to rectify this). This of course will\n> require sufficient documentation -- and I will provide functionality to\n> use an existing data structure in /var/lib/pgsql instead, until the user\n> moves it (new installations will default the initdb to\n> /var/lib/pgsql/data).\n\nWhat exactly is FHS and what do they say? I am vaguely phantasizing about\ndoing a little work on the build process for the next release; is that\nsomething that could be addressed?\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 8 Feb 2000 00:09:07 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 7 RPMs coming soon"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> > database, in an FHS-compliant manner (the current regression tests AND\n> > upgrade scripts are not in compliance with the FHS in terms of their\n[snip]\n \n> What exactly is FHS and what do they say? I am vaguely phantasizing about\n> doing a little work on the build process for the next release; is that\n> something that could be addressed?\n\nFHS 2.0 is the Filesystem Hierarchy Standard, successor to the Linux\nFileSystem Standard (FSSTND). You may find the full document at\nhttp://www.pathname.com/fhs/\n\nCurrently, in order to get PostgreSQL into RPM form requires a number of\nmungifications -- the current prefix parameter is set to /usr, which\nputs binaries and shared libs in the right place -- but everything else\nis moved into place manually. It is a rather kludgy build script (ask\nThomas, he knows). To look at the build script (in RPM parlance, a\n'spec file'), load up\nhttp://www.ramifordistat.net/postgres/unpacked/non-beta/postgresql-6.5.3-3.spec \n(one line; it'll probably wrap on your e-mail client).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 07 Feb 2000 18:15:28 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL 7 RPMs coming soon"
},
{
"msg_contents": "Yes, this seems very much like the stuff I've been thinking about. I'll\nbring it up when the next devel cycle has started.\n\nOn Mon, 7 Feb 2000, Lamar Owen wrote:\n\n> Peter Eisentraut wrote:\n> > > database, in an FHS-compliant manner (the current regression tests AND\n> > > upgrade scripts are not in compliance with the FHS in terms of their\n> [snip]\n> \n> > What exactly is FHS and what do they say? I am vaguely phantasizing about\n> > doing a little work on the build process for the next release; is that\n> > something that could be addressed?\n> \n> FHS 2.0 is the Filesystem Hierarchy Standard, successor to the Linux\n> FileSystem Standard (FSSTND). You may find the full document at\n> http://www.pathname.com/fhs/\n> \n> Currently, in order to get PostgreSQL into RPM form requires a number of\n> mungifications -- the current prefix parameter is set to /usr, which\n> puts binaries and shared libs in the right place -- but everything else\n> is moved into place manually. It is a rather kludgy build script (ask\n> Thomas, he knows). To look at the build script (in RPM parlance, a\n> 'spec file'), load up\n> http://www.ramifordistat.net/postgres/unpacked/non-beta/postgresql-6.5.3-3.spec \n> (one line; it'll probably wrap on your e-mail client).\n> \n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 8 Feb 2000 12:32:23 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 7 RPMs coming soon"
}
] |
[
{
"msg_contents": "I'm working on getting \"table shape\" from the outer join syntax.\nPretty sure I'm close, and will get to the parser stuff *soon*.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 07 Feb 2000 17:15:04 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "status"
}
] |
[
{
"msg_contents": "Good news!\n\n FYI, with contributions from Stephan Szabo and Don Baccus,\n the FOREIGN KEY project made impressive progress during the\n last days.\n\n pg_dump is now able to dump FK constraints.\n\n pg_dump arranges to disable/reenable all triggers during data\n only reload.\n\n ALTER TABLE ... ADD CONSTRAINT ... FOREIGN KEY is fully\n implemented, and all existing data in the altered table is\n verified to satisfy the new constraint.\n\n The table actually created can be self referenced in the\n constraints.\n\n In contrast to my proposal, MATCH FULL and MATCH\n <unspecified> will both be fully supported in 7.0 already. So\n only MATCH PARTIAL will be left for 7.1.\n\n The open items left for 7.0 are now the file buffering for\n the trigger queue, the parser problem with NOT DEFERRABLE\n (where Thomas actually jumps in), building a regression suite\n and documentation.\n\n Many thanks to the two guys above. Without them, FOREIGN KEY\n would not only have failed to be finished in time. There\n would have been a big mistake maken for NO ACTION at all,\n leaving a huge hole for possible violations and not\n conforming to the standard.\n\n There is more to do after 7.0 is out, like ensuring that a\n unique constraint is defined on referenced PK columns,\n changing RESTRICT actions to fire as soon as possible and\n ensuring uniqueness of constraint names. But what has been\n done so far is IMHO really a major leap forward.\n\n 7.0 will have better FOREIGN KEY support than I expected.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 7 Feb 2000 19:35:54 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "RI project status"
}
] |
[
{
"msg_contents": "Hi folks,\n\n some days ago, I got a file \"libpq.pas\" as a response for\n some help to use the libpq interface directly from Delphi\n under Windows.\n\n The guy said we can use it freely, and that it isn't anything\n more than just porting the libpq-fe.h from C to Pascal. I\n asked for a little README to put both together under\n ./interfaces, and got nothing back :-(.\n\n I'm not familiar with Delphi. Could someone else verify the\n stuff and write that README? AFAIK he found the libpq.dll in\n the pgaccess corner of a past release.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 7 Feb 2000 20:46:09 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Using libpq.dll from Delphi"
}
] |
[
{
"msg_contents": "Hi,\n\nwhat's happen with network_ops in current CVS ?\nI just synced sources and couldn't load dump from 6.5.3 - \nproblem occures on \nCREATE INDEX \"face_key\" on \"face\" using btree ( \"eid\" \"int4_ops\", \"ip\" \"network_ops\" );\n\nThe message I got:\nCREATE\nERROR: DefineIndex: network_ops class not found\n\n\nTable face:\nelection=# \\d face\n Table \"face\"\n Attribute | Type | Modifier \n-----------+------------+----------\n eid | integer | \n ip | inet | \n vdate | datetime | \n ftrs | smallint[] | \n\n\n\nAlso, does new pg_dump is aware about order of defining of function \nand tables, when function is used in CREATE TABLE, for example:\nCREATE TABLE \"applicant\" (\n \"candx\" int2 DEFAULT next_applicant ( ) NOT NULL,\n \"candidate\" text, \n \"candt\" int2,\n \"img\" text);\nbut function next_applicant() is dumped in 6.5.3 after CREATE TABLE\nand this cause an error. I had manually edit dump file to reverse order :-)\n\n\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 7 Feb 2000 23:31:22 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "network_ops in 7.0 and pg_dump question"
},
{
"msg_contents": "> Hi,\n> \n> what's happen with network_ops in current CVS ?\n> I just synced sources and couldn't load dump from 6.5.3 - \n> problem occures on \n> CREATE INDEX \"face_key\" on \"face\" using btree ( \"eid\" \"int4_ops\", \"ip\" \"network_ops\" );\n> \n> The message I got:\n> CREATE\n> ERROR: DefineIndex: network_ops class not found\n> \n\nOops, my fault. There was some confusing links in the catalog for the\nip/cidr types. They pointed to the same *ops, which made the table\nnon-unique, so the cache would grab a random matching entry. The new\nsystem has separate *ops for each type. We were basically using the\ncache on a non-unique entry. We would grab the first match. The new\ncode uses the same underlying functions, but moves the duplication down\none level.\n\nNow, how to convert these? Not supplying the ops works fine, but\npg_dump supplies the ops. Maybe in gram.y, if they supply network_ops,\nwe should just remove that from being passed to the backend for a few\nreleases. Comments?\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 15:52:01 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] network_ops in 7.0 and pg_dump question"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Now, how to convert these? Not supplying the ops works fine, but\n> pg_dump supplies the ops. Maybe in gram.y, if they supply network_ops,\n> we should just remove that from being passed to the backend for a few\n> releases. Comments?\n\nUgly, but probably the best stopgap for backwards compatibility ...\nat least I can't think of a better answer, since we have no way to\nchange what 6.5 pg_dump will dump.\n\nYou're only going to suppress \"network_ops\" if it appears in the\nops position of a CREATE INDEX, right? Don't want to stop people\nfrom using the name for fields and so on.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 18:30:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] network_ops in 7.0 and pg_dump question "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Now, how to convert these? Not supplying the ops works fine, but\n> pg_dump supplies the ops. Maybe in gram.y, if they supply network_ops,\n> we should just remove that from being passed to the backend for a few\n> releases. Comments?\n\nActually, rather than hacking gram.y, it seems like it would be cleaner\nto put the kluge in whatever part of the parser looks up the ops name.\n\nOf course a kluge is a kluge no matter what...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 18:36:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] network_ops in 7.0 and pg_dump question "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Now, how to convert these? Not supplying the ops works fine, but\n> > pg_dump supplies the ops. Maybe in gram.y, if they supply network_ops,\n> > we should just remove that from being passed to the backend for a few\n> > releases. Comments?\n> \n> Ugly, but probably the best stopgap for backwards compatibility ...\n> at least I can't think of a better answer, since we have no way to\n> change what 6.5 pg_dump will dump.\n> \n> You're only going to suppress \"network_ops\" if it appears in the\n> ops position of a CREATE INDEX, right? Don't want to stop people\n> from using the name for fields and so on.\n\nNo, just at that part in the grammar.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 19:03:29 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] network_ops in 7.0 and pg_dump question"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Now, how to convert these? Not supplying the ops works fine, but\n> > pg_dump supplies the ops. Maybe in gram.y, if they supply network_ops,\n> > we should just remove that from being passed to the backend for a few\n> > releases. Comments?\n> \n> Actually, rather than hacking gram.y, it seems like it would be cleaner\n> to put the kluge in whatever part of the parser looks up the ops name.\n> \n> Of course a kluge is a kluge no matter what...\n\nI like it in gram.y because it is more visible there and easier to\nremove later.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 19:05:33 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] network_ops in 7.0 and pg_dump question"
},
{
"msg_contents": "Thanks,\ncreation of index works now. But what about pg_dump ?\nI still have to edit manually dump file.\nlook to excerption from dump file:\nCREATE TABLE \"applicant\" (\n \"candx\" int2 DEFAULT next_applicant() NOT NULL,\n \"candidate\" text,\n \"candt\" int2,\n \"img\" text\n);\n\nThis fails because function next_applicant dumps later !\n\nHere is a psql output:\nYou are now connected as new user megera.\nERROR: Relation 'applicant' does not exist\ninvalid command \\N\ninvalid command \\N\ninvalid command \\N\ninvalid command \\N\ninvalid command \\.\nERROR: parser: parse error at or near \"2\"\ninvalid command \\.\nERROR: parser: parse error at or near \"1\"\ninvalid command \\.\nERROR: parser: parse error at or near \"1\"\ninvalid command \\.\nERROR: parser: parse error at or near \"1\"\ninvalid command \\.\nERROR: parser: parse error at or near \"24\"\ninvalid command \\.\nERROR: parser: parse error at or near \"24\"\nCREATE\nCREATE\n\nHmm, error diagnostics still not very informative :-)\n\n\tRegards,\n\n\t\tOleg\n\nOn Mon, 7 Feb 2000, Bruce Momjian wrote:\n\n> Date: Mon, 7 Feb 2000 19:03:29 -0500 (EST)\n> From: Bruce Momjian <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] network_ops in 7.0 and pg_dump question\n> \n> > Bruce Momjian <[email protected]> writes:\n> > > Now, how to convert these? Not supplying the ops works fine, but\n> > > pg_dump supplies the ops. Maybe in gram.y, if they supply network_ops,\n> > > we should just remove that from being passed to the backend for a few\n> > > releases. Comments?\n> > \n> > Ugly, but probably the best stopgap for backwards compatibility ...\n> > at least I can't think of a better answer, since we have no way to\n> > change what 6.5 pg_dump will dump.\n> > \n> > You're only going to suppress \"network_ops\" if it appears in the\n> > ops position of a CREATE INDEX, right? Don't want to stop people\n> > from using the name for fields and so on.\n> \n> No, just at that part in the grammar.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 8 Feb 2000 13:42:57 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] network_ops in 7.0 and pg_dump question"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> creation of index works now. But what about pg_dump ?\n> I still have to edit manually dump file.\n> look to excerption from dump file:\n> CREATE TABLE \"applicant\" (\n> \"candx\" int2 DEFAULT next_applicant() NOT NULL,\n> \"candidate\" text,\n> \"candt\" int2,\n> \"img\" text\n> );\n> This fails because function next_applicant dumps later !\n\nYeah, it's a known bug. We can't just dump the functions first,\nthough, can we? I'm not sure how carefully function definitions\nget examined by CREATE FUNCTION.\n\nThe simplest real solution I've heard so far is to dump database objects\nin order by OID rather than doing it strictly by type.\n\nIs anyone working on this, or does anyone want to? I haven't looked at\npg_dump in a while, but I know some other folks have been hacking it\nrecently.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Feb 2000 11:14:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Ordering of pg_dump output"
},
{
"msg_contents": "> Yeah, it's a known bug. We can't just dump the functions first,\n> though, can we? I'm not sure how carefully function definitions\n> get examined by CREATE FUNCTION.\n> \n> The simplest real solution I've heard so far is to dump database objects\n> in order by OID rather than doing it strictly by type.\n> \n> Is anyone working on this, or does anyone want to? I haven't looked at\n> pg_dump in a while, but I know some other folks have been hacking it\n> recently.\n\nI thought Peter E. was thinking about it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 8 Feb 2000 11:38:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Ordering of pg_dump output"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> The simplest real solution I've heard so far is to dump database objects\n> in order by OID rather than doing it strictly by type.\n> \n> Is anyone working on this, or does anyone want to? I haven't looked at\n> pg_dump in a while, but I know some other folks have been hacking it\n> recently.\n\nI'll take a stab at it, if Peter E. isn't already doing it.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Tue, 08 Feb 2000 14:01:26 -0500",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Ordering of pg_dump output"
},
{
"msg_contents": "> The simplest real solution I've heard so far is to dump database objects\n> in order by OID rather than doing it strictly by type.\n>\n> Is anyone working on this, or does anyone want to? I haven't looked at\n> pg_dump in a while, but I know some other folks have been hacking it\n> recently.\n\n Dumping by Oid or building up a framework of dependencies,\n these where the options. Don't forget, SQL language functions\n are (in contrast to procedural ones) parsed at CREATE time.\n So any operator, aggregate or table you use inside must\n exist. And they can be used in turn in many places, so it\n isn't simple at all.\n\n I think finally pg_dump must scan the entire schema two\n times, first to get all the Oid's, second to dump all the\n objects.\n\n AFAIK, nobody is working on it. And starting on it right now\n seems a little late to make it until BETA.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 8 Feb 2000 20:24:21 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Ordering of pg_dump output"
},
{
"msg_contents": "At 02:01 PM 2/8/00 -0500, Mark Hollomon wrote:\n>Tom Lane wrote:\n>> \n>> The simplest real solution I've heard so far is to dump database objects\n>> in order by OID rather than doing it strictly by type.\n>> \n>> Is anyone working on this, or does anyone want to? I haven't looked at\n>> pg_dump in a while, but I know some other folks have been hacking it\n>> recently.\n>\n>I'll take a stab at it, if Peter E. isn't already doing it.\n\nYou might want to e-mail Jan and/or Steve Szabo, who've been working\non dumping referential integrity stuff. Because tables can mutally\nrefer to each other, constraint dumping won't be done until data is\ndumped, so the data will be loaded first when someone recreates the\ndatabase from the dump.\n\nI was busy over the weekend working the MATCH <unspecified> and the\nsemantics of referential integrity actions so mostly ignored the\ne-mails they traded on the subject - you'll need to get details\nfrom them.\n\nYou need to make sure whatever you do doesn't break whatever they've\ndone or are doing...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 08 Feb 2000 11:26:16 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Ordering of pg_dump output"
},
{
"msg_contents": "\n> Tom Lane wrote:\n> >\n> > The simplest real solution I've heard so far is to dump database objects\n> > in order by OID rather than doing it strictly by type.\n\nHmm. Now if my OO stuff was working I guess pg_dump could be implemented\nas...\n\nList<PGObject*> dblist = pgselect(\"SELECT ** from object order by oid\");\nwhile (dblist.begin(); !dblist.atEnd(); dblist++) {\n\tdblist.obj().dump();\n",
"msg_date": "Wed, 09 Feb 2000 09:50:03 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Ordering of pg_dump output"
},
{
"msg_contents": "On 2000-02-08, Tom Lane mentioned:\n\n> The simplest real solution I've heard so far is to dump database objects\n> in order by OID rather than doing it strictly by type.\n\nAFAIR, it was your idea ... ;)\n\n> \n> Is anyone working on this, or does anyone want to? I haven't looked at\n> pg_dump in a while, but I know some other folks have been hacking it\n> recently.\n\nI might have been putting out remarks to that end once in a while, and I'm\nstill interested in it, but it would be a more extensive project, like the\npsql revision, because pg_dump needs a lot of love as it stands. (I think\nthere are some parts still in it that allow you to dump PostQUEL.)\n\nThe problem with a pure oid-based ordering concept is that (as you\nyourself pointed out) it won't work if you alter some object in question\nafter creation. The obvious case would be an alter function (to be\nimplemented), but another case is (probably) alter column set default (is\nimplemented).\n\nWhat I'd like to do first is to draw up some (semi-)formal\n(dependency-based) concept on paper and either verify it or come to the\nconclusion that it will never work and then give up in disgust. ;) No,\nseriously, I suppose I'll bring this up again in a couple of months when\nwe're ready for it.\n\nAny collaborators are welcome of course.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 9 Feb 2000 01:09:53 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Ordering of pg_dump output"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The problem with a pure oid-based ordering concept is that (as you\n> yourself pointed out) it won't work if you alter some object in question\n> after creation. The obvious case would be an alter function (to be\n> implemented), but another case is (probably) alter column set default (is\n> implemented).\n\nRight; a genuine dependency analysis would be better. Also a lot more\npainful to implement.\n\nAs you say, pg_dump could do with a wholesale rewrite, and maybe that\nwould be a good time to look at the dependency-based approach. In the\nmeantime, I think dumping in OID order would fix 90% of the problem for\n10% of the work...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Feb 2000 19:21:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Ordering of pg_dump output "
}
] |
[
{
"msg_contents": "I am applying a patch to gram.y to supress network_ops from coming in. \nThat should fix the problem. Will commit in a few minutes.\n\n\n> > Hi,\n> > \n> > what's happen with network_ops in current CVS ?\n> > I just synced sources and couldn't load dump from 6.5.3 - \n> > problem occures on \n> > CREATE INDEX \"face_key\" on \"face\" using btree ( \"eid\" \"int4_ops\", \"ip\" \"network_ops\" );\n> > \n> > The message I got:\n> > CREATE\n> > ERROR: DefineIndex: network_ops class not found\n> > \n> \n> Oops, my fault. There was some confusing links in the catalog for the\n> ip/cidr types. They pointed to the same *ops, which made the table\n> non-unique, so the cache would grab a random matching entry. The new\n> system has separate *ops for each type. We were basically using the\n> cache on a non-unique entry. We would grab the first match. The new\n> code uses the same underlying functions, but moves the duplication down\n> one level.\n> \n> Now, how to convert these? Not supplying the ops works fine, but\n> pg_dump supplies the ops. Maybe in gram.y, if they supply network_ops,\n> we should just remove that from being passed to the backend for a few\n> releases. Comments?\n> \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 16:03:10 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] network_ops in 7.0 and pg_dump question"
}
] |
[
{
"msg_contents": "This is a Call For Hackers:\n\nSome time ago, I floated a little discussion on this list about doing\nsome distributed database work with PostgreSQL. The project got back\nburnered at work, but now has a timeline for needing a solution \"this\nsummer.\" Recent discussions on this list about Postgres's historical\nobject roots got me back to the Berkeley db sites, and reminded me about\nMariposa, which is Stonebraker's take on distributed DBs.\n\nhttp://s2k-ftp.cs.berkeley.edu:8000:8000/mariposa/\n\nStoneBraker has gone on to commercialize Mariposa as Cohera, which seems\nto be one of those Enterprise Scale products where if you need to ask\nhow much a license costs, you can't afford it ;-)\n\nSounds like now would be a good time to re-visit Mariposa, and see what\ngood ideas can be folded over into PostgreSQL. Mariposa was funded by\nARPA and ARO, and was used by NASA as the database part of the Sequoia\nProject, which became Big Sur, looking to unify the various kinds of\ngeophysical data collected by earth observing missions.\n\nThe code is an offshoot of Postgres95, with lots of nasty '#ifdef P95's\nscattered around. The split predates lots of good work by the PostgreSQL\nteam to clean up years of academic cruft that had accumulated, so merging\nis not trivial.\n\nAnyway, anyone interested in taking a look at this with me? I think the\nplace to start (i.e., where I'm starting) is to get the June-1996 alpha\nrelease of Mariposa to compile on a current system (I'm running Linux\nmyself.) I've been doing a compare-and-contrast, staring at source code,\nbut I think I need a running system to decide how the parts fit together.\n\nThen, plan what features to 'fold' into pgsql, and run a proposal past\nthis list, some time later in the 7.x series, perhaps in a couple of\nmonths (you guys will probably be on 8.x by then!) 
Hopefully, not take-up\ntoo much of the core developers time until we're talking integration.\n\nAnyone else interested, I'm using the tarball from:\n\nftp://epoch.cs.berkeley.edu/pub/mariposa/src/alpha-1/mariposa-alpha-1.tar.gz\n\nIf this really takes off, I can host CVS of the mariposa and pgsql\nsources, as well as web pages, mailing list, whatever. If it's just a\ncouple of us (or me all by myself ;-) we'll keep it simple.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Mon, 7 Feb 2000 15:11:23 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "CFH: Mariposa, distributed DB"
},
{
"msg_contents": "> This is a Call For Hackers:\n> \n> Some time ago, I floated a little discussion on this list about doing\n> some distributed database work with PostgreSQL. The project got back\n> burnered at work, but now has a timeline for needing a solution \"this\n> summer.\" Recent discussions on this list about Postgres's historical\n> object roots got me back to the Berkeley db sites, and reminded me about\n> Mariposa, which is Stonebraker's take on distributed DBs.\n> \n> http://s2k-ftp.cs.berkeley.edu:8000:8000/mariposa/\n> \n\nI have looked at the code. I have files that show all the diffs they\nmade to it and they have some new files. It was hard for me to see what\nthey were doing. Looks like they hacked up the executor and put in some\ntranslation layer to talk to some databroker. It seems like an awfully\ncomplicated way to do it. I would not bother getting it to run, but\nfigure out what they were trying to do, and why, and see how we can\nimplement it. My guess is that they had one central server for each\ntable, and you went to that server to get information.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 16:23:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CFH: Mariposa, distributed DB"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n> \n> \n> Anyone else interested, I'm using the tarball from:\n> \n> ftp://epoch.cs.berkeley.edu/pub/mariposa/src/alpha-1/mariposa-alpha-1.tar.gz\n> \n\nIs mariposa licence compatible with ours ?\n\n------------------\nHannu\n",
"msg_date": "Mon, 07 Feb 2000 23:44:14 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CFH: Mariposa, distributed DB"
},
{
"msg_contents": "On Mon, Feb 07, 2000 at 04:23:06PM -0500, Bruce Momjian wrote:\n> > This is a Call For Hackers:\n> > \n> > Some time ago, I floated a little discussion on this list about doing\n> > some distributed database work with PostgreSQL. The project got back\n> > burnered at work, but now has a timeline for needing a solution \"this\n> > summer.\" Recent discussions on this list about Postgres's historical\n> > object roots got me back to the Berkeley db sites, and reminded me about\n> > Mariposa, which is Stonebraker's take on distributed DBs.\n> > \n> > http://s2k-ftp.cs.berkeley.edu:8000:8000/mariposa/\n> > \n> \n> I have looked at the code. I have files that show all the diffs they\n> made to it and they have some new files. It was hard for me to see what\n> they were doing. Looks like they hacked up the executor and put in some\n> translation layer to talk to some databroker. It seems like an awfully\n> complicated way to do it. I would not bother getting it to run, but\n> figure out what they were trying to do, and why, and see how we can\n> implement it. My guess is that they had one central server for each\n> table, and you went to that server to get information.\n> \n\nActually, this being an academic project, there's lots of design\ndocuments about how it's _supposed_ to work. Stonebraker calls in an\n'agoric' distributed database, as in agora, market. The various db\nservers offer tables (or even specific views on tables) 'for sale', and\nbid against/with each other to provide the data to clients requesting\nit. The idea behind it is to us a micro-economic market model to do\nyour distributed optimizations for you, rather than have the DBAs decide\nwhat tables go where, what tables need to be shadowed, etc. 
The win is\nsupposedly massive scaleability: they Cohera site talks about 10000s\nof servers.\n\nAs I said, I've been doing the compare existing source code thing,\nbut thought working code might be more revealing, and give my project\nmanager something to see progress on ;-) Your right, though, that the\nmost productive way to go, in the long run, might be to reimplement what\nthey've described, in the current pgsql tree, using the Mariposa source\nas an example implementation.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Mon, 7 Feb 2000 15:50:25 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CFH: Mariposa, distributed DB"
},
{
"msg_contents": "On Mon, Feb 07, 2000 at 11:44:14PM +0200, Hannu Krosing wrote:\n> \"Ross J. Reedstrom\" wrote:\n> > \n> > \n> > Anyone else interested, I'm using the tarball from:\n> > \n> > ftp://epoch.cs.berkeley.edu/pub/mariposa/src/alpha-1/mariposa-alpha-1.tar.gz\n> > \n> \n> Is mariposa licence compatible with ours ?\n\nIt better be, it's the same license ;-) That is, Mariposa is a branch off\nthe Postgres95 tree. Actually, it's a good question: the PG95 license \nwould have let them put just about any license on Mariposa they wanted.\n\nAfter running both COPYRIGHT files throught fmt, here's the diff output:\n\nwallace$ diff COPYRIGHT COPYRIGHT.pgsql \n1c1,2\n< Mariposa Distributed Data Base Management System\n---\n> PostgreSQL Data Base Management System (formerly known as Postgres,\n> then as Postgres95).\n3c4\n< Copyright (c) 1994-6 Regents of the University of California\n---\n> Copyright (c) 1994-7 Regents of the University of California\n21d21\n< \nwallace$ \n\nSo, it is word for word the PostgreSQL license.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Mon, 7 Feb 2000 15:56:51 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CFH: Mariposa, distributed DB"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > This is a Call For Hackers:\n> >\n> > Some time ago, I floated a little discussion on this list about doing\n> > some distributed database work with PostgreSQL. The project got back\n> > burnered at work, but now has a timeline for needing a solution \"this\n> > summer.\" Recent discussions on this list about Postgres's historical\n> > object roots got me back to the Berkeley db sites, and reminded me about\n> > Mariposa, which is Stonebraker's take on distributed DBs.\n> >\n> > http://s2k-ftp.cs.berkeley.edu:8000:8000/mariposa/\n\nIt has a nice concept of simulating free market for distributed query \noptimisation. Auctions, brokers and all ...\n\n> \n> I have looked at the code. I have files that show all the diffs they\n> made to it and they have some new files. It was hard for me to see what\n> they were doing. Looks like they hacked up the executor and put in some\n> translation layer to talk to some databroker. \n\nThe broker was for determining where to get the data from - as each table \ncould be queried from several sites there had to be a mechanism for the \nplanner to figure out the cheapest (or fastest if \"money\" was not a problem)\n\n> It seems like an awfully\n> complicated way to do it. I would not bother getting it to run, but\n> figure out what they were trying to do, and why, and see how we can\n> implement it. My guess is that they had one central server for each\n> table, and you went to that server to get information.\n\nThey would not have needed the broker for such a simple scheme \n\nIIRC they had no central table, but they doubled the length of oid and \nmade it to include the site id of the site that created the tuple.\n\nIt could be that they restricted changing a tuple to that site ?\n\nThe site to go for information was determined by an auction where each site \noffered speed and cost for looking up the data. 
Usually the didn't also \nquarantee the latest data, just the \"best effort\".\n\n-------------------\nHannu\n",
"msg_date": "Tue, 08 Feb 2000 00:04:52 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CFH: Mariposa, distributed DB"
},
{
"msg_contents": "At 12:04 AM 2/8/00 +0200, Hannu Krosing wrote:\n\n>The site to go for information was determined by an auction where each site \n>offered speed and cost for looking up the data. Usually the didn't also \n>quarantee the latest data, just the \"best effort\".\n\nI just glanced at the website. They explicitly mention that they don't\nrequire global synchronization, because it would slow down response time\nfor many things (with thousands of server, that sounds like an\nunderstatement). \n\nSo, yes, it would appear they don't guarantee the latest data.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 14:19:56 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CFH: Mariposa, distributed DB"
},
{
"msg_contents": "Seems there was more than just going back to the Berkeley site that\nreminded me of Mariposa. A principle new functionality in Mariposa is \nthe ability to 'fragment' a class, based on a user-defined partitioning\nfunction. The example used is a widgets class, which is partitioned on\nthe 'location' field (i.e., the warehouse the widget is stored in)\n\nCREATE TABLE widgets (\n\tpart_no\t\tint4,\n\tlocation\tchar16,\n\ton_hand\t\tint4,\n\ton_order\tint4,\n\tcommited\tint4\n) PARTITION ON LOCATION USING btchar16cmp;\n\nThen, the table is filled with tuples, all containing locations of either\n'Miami' or 'New York'.\n\nSELECT * from widgets; \n\nworks as expected.\n\nLater, this table is fragmented:\n\nSPLIT FRAGMENT widgets INTO widgets_mi, widgets_ny AT 'Miami';\n\nNow, the original table widgets is _empty_: all the tuples with location <=\n'Miami' go to widgets_mi, location > 'Miami' go to widgets_ny.\n\nSELECT * from widgets; \n\nStill returns all the tuples! So, this works sort of the way Chris Bitmead\nhas implemented subclasses: widgets_mi and widgets_ny are subclasses of\nthe widgets class, so selects return everything below. They differ in\nthat only PARTITIONed classes can be FRAGMENTed.\n\nThe distributed part comes in with the MOVE FRAGMENT command. This\ntransfers the 'master' copy of a table to the designated host, so future\naccess to that FRAGMENT will go over the network.\n\nThere's also a COPY FRAGMENT command, that sets up a local cache of a\nfragment, with a periodic update time. These copies may be either \nREADONLY, or (default) READ/WRITE. Seems updates are timed only (simple\nextension would be to implement write through behavior)\n\nAll this is coming from the Mariposa User's Manual, which is an extended\nversion of the Postgres95 User's Manual.\n\nAs to latest vs. best effort: One defines a BidCurve, who's dimensions are\nCost and Time. A flat curve should get you that latest data. 
And, since\nthe DataBroker and Bidder are both implemented as Tcl scripts, so it\nwould be possible to define a bid policy that only buys the latest data,\nregardless of how long it's going to take.\n\nOh, BTW, yes that does put _two_ interpreted Tcl scripts on the execution\npath for every query. Wonder what _that'll_ do for execution time. However,\nit's like planning/optimization time, in that it's spent per query, rather\nthan per tuple.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n\nOn Mon, Feb 07, 2000 at 02:19:56PM -0800, Don Baccus wrote:\n> At 12:04 AM 2/8/00 +0200, Hannu Krosing wrote:\n> \n> >The site to go for information was determined by an auction where each site \n> >offered speed and cost for looking up the data. Usually the didn't also \n> >quarantee the latest data, just the \"best effort\".\n> \n> I just glanced at the website. They explicitly mention that they don't\n> require global synchronization, because it would slow down response time\n> for many things (with thousands of server, that sounds like an\n> understatement). \n> \n> So, yes, it would appear they don't guarantee the latest data.\n> \n",
"msg_date": "Mon, 7 Feb 2000 16:57:59 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CFH: Mariposa, distributed DB"
},
{
"msg_contents": "At 04:57 PM 2/7/00 -0600, Ross J. Reedstrom wrote:\n\n>CREATE TABLE widgets (\n>\tpart_no\t\tint4,\n>\tlocation\tchar16,\n>\ton_hand\t\tint4,\n>\ton_order\tint4,\n>\tcommited\tint4\n>) PARTITION ON LOCATION USING btchar16cmp;\n\nOracle's partitioning is fixed, in other words once you choose a\ncondition to split on, you can't change it. In other words, in\nyour example:\n\n>Then, the table is filled with tuples, all containing locations of either\n>'Miami' or 'New York'.\n\nAfter splitting the table into \">'Miami'\" and \"<='Miami\" fragments, \nI've been told that you can't (say) change it to \">'Boston'\" and\nhave the proper rows move automatically.\n\nIn practice, partioning is often used to split tables on dates. You\nmight want to partion off your old tax data at the 7-yr old mark, and\neach year as you do your taxes move the oldest tax data in your\n\"recent taxes\" table split off to your \"older taxes\" table.\n\nApparently, Informix is smart enough to do this for you.\n\nSince a couple of the people associated with the project are Informix\npeople, do you have any idea if Mariposa is able to do this?\n\n>\n>SELECT * from widgets; \n>\n>works as expected.\n>\n>Later, this table is fragmented:\n>\n>SPLIT FRAGMENT widgets INTO widgets_mi, widgets_ny AT 'Miami';\n\nIn other words some sort of \"update the two tables AT <some new criteria>\"\n\nWhatever the answer to my question, Mariposa certainly looks interesting.\nIt's functionality that folks who do data warehousing really need.\n\n>Oh, BTW, yes that does put _two_ interpreted Tcl scripts on the execution\n>path for every query. Wonder what _that'll_ do for execution time. However,\n>it's like planning/optimization time, in that it's spent per query, rather\n>than per tuple.\n\nProbably not as bad as you think, if they're simple and short. 
Once\nsomeone has this up and running and integrated with PostgreSQL and \nrobust and reliable we can measure it and change to something else if\nnecessary :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 07 Feb 2000 15:18:01 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CFH: Mariposa, distributed DB"
},
{
"msg_contents": "\n Hi,\n\n the Mariposa db distribution is interesting, but it is very specific. If I\ngood understand it is not real-time and global synchronized DB replication.\nBut for a lot of users (and me) is probably interestion on-line DB replication\nand synchronization. How much users have 10K servers?\n \n I explore current PG's source and is probably possible create support for\non-line replication. My idea is replicate data on a heap_ layout. The parser,\nplaner and executor run on local backend and replicate straight-out tuples \nto the others servers (nodes). It needs synchronize PG's locks too. \nIn near future I want start project for PG on-line replication. Or works on \nthis anyone now? Comments?\n\n\t\t\t\t\t\t\tKarel\n\n\n",
"msg_date": "Tue, 8 Feb 2000 16:44:37 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CFH: Mariposa, distributed DB"
}
] |
[
{
"msg_contents": "\nWe're getting there *muhahaha* Finally, a project *based* on PostgreSQL\nfirst!! :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n---------- Forwarded message ----------\nDate: Mon, 7 Feb 2000 22:04:14 +0100\nFrom: Gunther Stammwitz <[email protected]>\nReply-To: [email protected]\nTo: [email protected]\nSubject: Re: ipmeter-user: Ipmeter + MySql ??\n\n<useless info deleted>\n\n> > Gunther Stammwitz wrote:\n> >\n> > Hello,\n> >\n> > I've just downloaded the latest version of ipmeter, but I cant get it\n> > working. It looks like Ipmeter requires PostgreSql.\n> > The problem is: I'm running mysql. Is there any chance to get it\n> > running with mysql or do I have to change my database.\n>\n> I'm afraid you will have to switch to PostgreSQL, since IPmeter 1.0\n> requires\n> user-defined datatypes.\n>\n> > If yes: can i\n> > run mysql + postgresql parallel ?\n>\n> Other than performance, there's no reason why not.\n>\n> Best Regards,\n>\n> - Lorand\n>\n> --\n> Computer: A device to speed and automate errors\n> Lorand Bruhacs, Internet Engineer\n> IP23 Gesellschaft fuer IP-basierte Dienstleistungen mbH\n>\n\n\n",
"msg_date": "Mon, 7 Feb 2000 17:55:26 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ipmeter-user: Ipmeter + MySql ?? (fwd)"
},
{
"msg_contents": "> \n> We're getting there *muhahaha* Finally, a project *based* on PostgreSQL\n> first!! :)\n> > I'm afraid you will have to switch to PostgreSQL, since IPmeter 1.0\n> > requires\n> > user-defined datatypes.\n\nAw, gee, shame MySQL doesn't have them. :-)\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 17:08:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ipmeter-user: Ipmeter + MySql ?? (fwd)"
}
] |
[
{
"msg_contents": "libpq should be back to normal (printing and all). Sorry once again for\nthe mess.\n\nThe psql quoting issue should be fixed as well. As is usual for\nhand-crafted parsers, there's probably something I overlooked, so feel\nfree to bring that to my attention. I haven't done anything about the\necho options yet, although I'm leaning towards \"-a\".\n\nWhile we're at it, there's a setting that causes psql to stop execution of\na script on an error (since usually the later commands will be depending\non the successful completion of earlier ones). I was wondering if that\nshould be the default if you use the -f option.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 8 Feb 2000 00:08:30 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql and libpq fixes"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> While we're at it, there's a setting that causes psql to stop execution of\n> a script on an error (since usually the later commands will be depending\n> on the successful completion of earlier ones). I was wondering if that\n> should be the default if you use the -f option.\n\nSounds useful, but you can't make it the default without breaking existing\nscripts. Trivial example is this common idiom:\n\tDROP TABLE t1; -- in case it already exists\n\tCREATE TABLE t1;\n\tCOPY ...\n\nIn general, an existing script is not going to be written with the idea\nthat psql will cut it off at the knees for provoking an error. If the\nauthor *does* want all the rest of the commands to be skipped on error,\nhe'll just have written BEGIN and END around the whole script.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 19:46:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes "
},
{
"msg_contents": "On Mon, 7 Feb 2000, Tom Lane wrote:\n\n> Peter Eisentraut <[email protected]> writes:\n> > While we're at it, there's a setting that causes psql to stop execution of\n> > a script on an error (since usually the later commands will be depending\n> > on the successful completion of earlier ones). I was wondering if that\n> > should be the default if you use the -f option.\n> \n> Sounds useful, but you can't make it the default without breaking existing\n> scripts. Trivial example is this common idiom:\n> \tDROP TABLE t1; -- in case it already exists\n> \tCREATE TABLE t1;\n> \tCOPY ...\n\nOh yes, good point.\n\n> \n> In general, an existing script is not going to be written with the idea\n> that psql will cut it off at the knees for provoking an error. If the\n> author *does* want all the rest of the commands to be skipped on error,\n> he'll just have written BEGIN and END around the whole script.\n\nLast time I checked you couldn't roll back a create table. ;)\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 8 Feb 2000 12:34:58 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> In general, an existing script is not going to be written with the idea\n>> that psql will cut it off at the knees for provoking an error. If the\n>> author *does* want all the rest of the commands to be skipped on error,\n>> he'll just have written BEGIN and END around the whole script.\n\n> Last time I checked you couldn't roll back a create table. ;)\n\nAu contraire, rolling back a CREATE works fine. It's rolling back\na DROP that gives trouble ;-)\n\nThis does bring up a thought --- should psql's kill-the-script-on-error\noption perhaps zap the script only for errors committed outside of a\ntransaction block? I'm not sure how hard it is for psql to keep track\nof whether the script is in an xact, so maybe this'd be far harder than\nit's worth. Seems like it deserves some consideration though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Feb 2000 10:50:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes "
},
{
"msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> >> In general, an existing script is not going to be written with the idea\n> >> that psql will cut it off at the knees for provoking an error. If the\n> >> author *does* want all the rest of the commands to be skipped on error,\n> >> he'll just have written BEGIN and END around the whole script.\n> \n> > Last time I checked you couldn't roll back a create table. ;)\n> \n> Au contraire, rolling back a CREATE works fine. It's rolling back\n> a DROP that gives trouble ;-)\n> \n> This does bring up a thought --- should psql's kill-the-script-on-error\n> option perhaps zap the script only for errors committed outside of a\n> transaction block? I'm not sure how hard it is for psql to keep track\n> of whether the script is in an xact, so maybe this'd be far harder than\n> it's worth. Seems like it deserves some consideration though.\n\nWhy is being in a transaction block important?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 8 Feb 2000 11:02:01 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> This does bring up a thought --- should psql's kill-the-script-on-error\n>> option perhaps zap the script only for errors committed outside of a\n>> transaction block?\n\n> Why is being in a transaction block important?\n\nI was thinking that the script might be expecting an error, and have\nestablished a begin-block to limit the effects of the error.\n\nBut on third thought, probably the thing that would be really useful\nfor \"expected errors\" is if there is a backslash-command that turns on\nor off the kill-on-error behavior. (The command line switch would\nmerely set the initial state of this flag.) This way, a script could\nuse the option in an intelligent fashion:\n\n\t\\kill-on-error off\n\tDROP TABLE t1;\n\t\\kill-on-error on\n\tCREATE TABLE t1;\n\t...\n\nIt'd still have to default to 'off' for backwards compatibility,\nunfortunately, but something like this would be really useful.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Feb 2000 11:29:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes "
},
{
"msg_contents": "> But on third thought, probably the thing that would be really useful\n> for \"expected errors\" is if there is a backslash-command that turns on\n> or off the kill-on-error behavior. (The command line switch would\n> merely set the initial state of this flag.) This way, a script could\n> use the option in an intelligent fashion:\n> \n> \t\\kill-on-error off\n> \tDROP TABLE t1;\n> \t\\kill-on-error on\n> \tCREATE TABLE t1;\n> \t...\n> \n> It'd still have to default to 'off' for backwards compatibility,\n> unfortunately, but something like this would be really useful.\n\nIn Informix 4GL, it is ON ERROR STOP and ON ERROR CONTINUE.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 8 Feb 2000 11:38:53 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes"
},
{
"msg_contents": "\nOn 07-Feb-2000 Peter Eisentraut wrote:\n> libpq should be back to normal (printing and all). Sorry once again for\n> the mess.\n> \n> The psql quoting issue should be fixed as well. As is usual for\n> hand-crafted parsers, there's probably something I overlooked, so feel\n> free to bring that to my attention. I haven't done anything about the\n> echo options yet, although I'm leaning towards \"-a\".\n> \n> While we're at it, there's a setting that causes psql to stop execution of\n> a script on an error (since usually the later commands will be depending\n> on the successful completion of earlier ones). I was wondering if that\n> should be the default if you use the -f option.\n\nNo!!! \nI have lots script like \n drop function ....\n create function \nend so on\n\nMay be better going to file like\n~/.pgdefaults \n\n\n-- \nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Tue, 08 Feb 2000 21:00:30 +0300 (MSK)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] psql and libpq fixes"
},
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> Bruce Momjian <[email protected]> writes:\n> >> This does bring up a thought --- should psql's kill-the-script-on-error\n> >> option perhaps zap the script only for errors committed outside of a\n> >> transaction block?\n> \n> But on third thought, probably the thing that would be really useful\n> for \"expected errors\" is if there is a backslash-command that turns on\n> or off the kill-on-error behavior. (The command line switch would\n> merely set the initial state of this flag.) This way, a script could\n> use the option in an intelligent fashion:\n\nUrhm, wouldn't a better idea be to have something like Ingres' \"ON\nERROR\" and \"ON WARNING\" settings? In Ingres esqlc, you can create\nfunctions and then tell Ingres to execute them in the even of a\nwarning or error. Also, you can say \"ON ERROR CONTINUE\" and errors\nwill then be returned to the application as a status, but otherwise\nignored.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "Thu, 10 Feb 2000 11:02:32 -0500 (EST)",
"msg_from": "Brian E Gallew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes "
},
{
"msg_contents": "> Urhm, wouldn't a better idea be to have something like Ingres' \"ON\n> ERROR\" and \"ON WARNING\" settings? In Ingres esqlc, you can create\n> functions and then tell Ingres to execute them in the even of a\n> warning or error. Also, you can say \"ON ERROR CONTINUE\" and errors\n> will then be returned to the application as a status, but otherwise\n> ignored.\n> \n\nYes, seems like those are the accepted words to use.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 10 Feb 2000 11:06:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes"
},
{
"msg_contents": "On Thu, 10 Feb 2000, Brian E Gallew wrote:\n\n> Then <[email protected]> spoke up and said:\n> > Bruce Momjian <[email protected]> writes:\n> > >> This does bring up a thought --- should psql's kill-the-script-on-error\n> > >> option perhaps zap the script only for errors committed outside of a\n> > >> transaction block?\n> > \n> > But on third thought, probably the thing that would be really useful\n> > for \"expected errors\" is if there is a backslash-command that turns on\n> > or off the kill-on-error behavior. (The command line switch would\n> > merely set the initial state of this flag.) This way, a script could\n> > use the option in an intelligent fashion:\n\nFYI, the commands are\n\\set EXIT_ON_ERROR\nand\n\\unset EXIT_ON_ERROR\nIt's a normal psql variable, but incidentally the syntax seems kind of\neasy to remember.\n\n> \n> Urhm, wouldn't a better idea be to have something like Ingres' \"ON\n> ERROR\" and \"ON WARNING\" settings? In Ingres esqlc, you can create\n> functions and then tell Ingres to execute them in the even of a\n> warning or error. Also, you can say \"ON ERROR CONTINUE\" and errors\n> will then be returned to the application as a status, but otherwise\n> ignored.\n\nThat's very nice and all, but psql doesn't work that way. I'm not sure how\nother dbs organize their front-end internally, but that sort of scheme\nwould really take psql places we might not want it to go, and for which it\nhasn't been designed -- namely, to be a programming language.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 10 Feb 2000 17:12:54 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes "
},
{
"msg_contents": "> FYI, the commands are\n> \\set EXIT_ON_ERROR\n> and\n> \\unset EXIT_ON_ERROR\n> It's a normal psql variable, but incidentally the syntax seems kind of\n> easy to remember.\n> \n\nCan we change that to the more standard ON_ERROR_STOP?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 10 Feb 2000 11:16:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes"
},
{
"msg_contents": "On Thu, 10 Feb 2000, Bruce Momjian wrote:\n\n> > FYI, the commands are\n> > \\set EXIT_ON_ERROR\n> > and\n> > \\unset EXIT_ON_ERROR\n> > It's a normal psql variable, but incidentally the syntax seems kind of\n> > easy to remember.\n> > \n> \n> Can we change that to the more standard ON_ERROR_STOP?\n\nConsider it done.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 10 Feb 2000 17:18:30 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes"
},
{
"msg_contents": "> > > > FYI, the commands are\n> > > > \\set EXIT_ON_ERROR\n> > > > and\n> > > > \\unset EXIT_ON_ERROR\n> > > > It's a normal psql variable, but incidentally the syntax seems kind of\n> > > > easy to remember.\n> > > Can we change that to the more standard ON_ERROR_STOP?\n> \n> Any chance of multi-word options? Like \"\\set on error stop\"?\n> \n> And at least part of the reason other systems can do some error\n> recovery is that they decouple the parser from the backend, so the\n> parser is carried closer to the client, and the client can be more\n> certain about what is being done. But that carries a lot of baggage\n> too...\n> \n> If/when we do get more decoupling, it might be done through a Corba\n> interface, which would allow us to get away from the string-based\n> client/server protocol, and will handle typing, marshalling, byte\n> ordering, etc more-or-less transparently.\n> \n\nI think we would have to have more need for multi-word setttings than\nthis one before adding that complexity.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 10 Feb 2000 11:55:00 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes"
},
{
"msg_contents": "> > > FYI, the commands are\n> > > \\set EXIT_ON_ERROR\n> > > and\n> > > \\unset EXIT_ON_ERROR\n> > > It's a normal psql variable, but incidentally the syntax seems kind of\n> > > easy to remember.\n> > Can we change that to the more standard ON_ERROR_STOP?\n\nAny chance of multi-word options? Like \"\\set on error stop\"?\n\nAnd at least part of the reason other systems can do some error\nrecovery is that they decouple the parser from the backend, so the\nparser is carried closer to the client, and the client can be more\ncertain about what is being done. But that carries a lot of baggage\ntoo...\n\nIf/when we do get more decoupling, it might be done through a Corba\ninterface, which would allow us to get away from the string-based\nclient/server protocol, and will handle typing, marshalling, byte\nordering, etc more-or-less transparently.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 10 Feb 2000 17:00:57 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes"
},
{
"msg_contents": "On Thu, 10 Feb 2000, Thomas Lockhart wrote:\n\n> > > > FYI, the commands are\n> > > > \\set EXIT_ON_ERROR\n> > > > and\n> > > > \\unset EXIT_ON_ERROR\n> > > > It's a normal psql variable, but incidentally the syntax seems kind of\n> > > > easy to remember.\n> > > Can we change that to the more standard ON_ERROR_STOP?\n> \n> Any chance of multi-word options? Like \"\\set on error stop\"?\n\nActually, that command would set \"on\" to the value of \"errorstop\". \\set\ndoesn't have any hard-coded parsing rules, like the SQL look-a-similar, it\njust sets variables. They can carry configuration information (like the\nabove), application state (LASTOID), or whatever you want (\\set foo `date\n%Y` \\\\ insert into mytbl values (:foo);). Kind of like a shell or Tcl, I\nthink.\n\n> And at least part of the reason other systems can do some error\n> recovery is that they decouple the parser from the backend, so the\n> parser is carried closer to the client, and the client can be more\n> certain about what is being done. But that carries a lot of baggage\n> too...\n> \n> If/when we do get more decoupling, it might be done through a Corba\n> interface, which would allow us to get away from the string-based\n> client/server protocol, and will handle typing, marshalling, byte\n> ordering, etc more-or-less transparently.\n\nAt that point we may choose to write a completely new client. ;)\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 10 Feb 2000 18:23:36 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes"
},
{
"msg_contents": "On 2000-02-10, Thomas Lockhart mentioned:\n\n> > > > FYI, the commands are\n> > > > \\set EXIT_ON_ERROR\n> > > > and\n> > > > \\unset EXIT_ON_ERROR\n> > > > It's a normal psql variable, but incidentally the syntax seems kind of\n> > > > easy to remember.\n> > > Can we change that to the more standard ON_ERROR_STOP?\n> \n> Any chance of multi-word options? Like \"\\set on error stop\"?\n\nYou can do\n\t\\set 'some string with any character \\t\\001\\n\\n' enabled\nbut that's a little hard to type. ;)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 10 Feb 2000 21:16:02 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] psql and libpq fixes"
},
{
"msg_contents": "On Thu, Feb 10, 2000 at 11:02:32AM -0500, Brian E Gallew wrote:\n> Urhm, wouldn't a better idea be to have something like Ingres' \"ON\n> ERROR\" and \"ON WARNING\" settings? In Ingres esqlc, you can create\n\nYou can do that with ecpg as well. The syntax is exec sql whenever ....\nI doubt though that this was about a precompiler but psql. But then esqlc is\na precompiler too.\n\nMichael\n\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Fri, 11 Feb 2000 07:39:43 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes"
},
{
"msg_contents": "On Thu, Feb 10, 2000 at 05:12:54PM +0100, Peter Eisentraut wrote:\n> That's very nice and all, but psql doesn't work that way. I'm not sure how\n> other dbs organize their front-end internally, but that sort of scheme\n> would really take psql places we might not want it to go, and for which it\n> hasn't been designed -- namely, to be a programming language.\n\nI wonder why we compare apples and oranges here. Of course esqlc was\ndesigned to be parse a programming language while psql is a query tool. \n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Fri, 11 Feb 2000 07:41:10 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and libpq fixes"
}
] |
[
{
"msg_contents": "\nOn 07-Feb-00 Peter Eisentraut wrote:\n> Bruce mentioned I should be on there, I hope y'all aren't disgruntled,\n> yet. ;)\n\nI meant to mention that I was putting you there anyway and making up a\nbio if necessary :) \n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 07 Feb 2000 18:25:35 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New Globe"
},
{
"msg_contents": "> \n> On 07-Feb-00 Peter Eisentraut wrote:\n> > Bruce mentioned I should be on there, I hope y'all aren't disgruntled,\n> > yet. ;)\n> \n> I meant to mention that I was putting you there anyway and making up a\n> bio if necessary :) \n> \n\nGreat.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Feb 2000 19:02:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New Globe"
}
] |
[
{
"msg_contents": "\nIn ExecInitAppend it initialises all the subplans...\n\nfor (i = 0; i < nplans; i++)\n\t{\n...\nappendstate->as_whichplan = i;\nexec_append_initialize_next(node);\n..\n\t}\n\nAnd then at the end of the function, it initialises the first plan\nagain...\n\nappendstate->as_whichplan = 0;\nexec_append_initialize_next(node);\n\n\treturn TRUE;\n\nIs this code correct? Should the first plan really be initialised twice?\n",
"msg_date": "Tue, 08 Feb 2000 14:26:48 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "ExecInitAppend"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> In ExecInitAppend it initialises all the subplans...\n> And then at the end of the function, it initialises the first plan\n> again...\n> Is this code correct? Should the first plan really be initialised twice?\n\nProbably not --- I imagine that's wasting memory, or worse. Do things\nstill work if you remove the extra initialize call?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Feb 2000 23:27:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ExecInitAppend "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > In ExecInitAppend it initialises all the subplans...\n> > And then at the end of the function, it initialises the first plan\n> > again...\n> > Is this code correct? Should the first plan really be initialised twice?\n> \n> Probably not --- I imagine that's wasting memory, or worse. Do things\n> still work if you remove the extra initialize call?\n\nThis code looks ugly because it sets appendstate->as_whichplan so that\nexec_append_initialise_next knows which plan it's supposed to initialise\n-\nyucky side effect.\n\nI suspect it will stop working if the last call is removed because it\n*may*\nbe relying on the first plan to be initialised last, so that the estate\nvariables are initialised to the first plan. It may work if the plans\nare initialised in reverse order, but the right way is probably to \nreorganise the code.\n",
"msg_date": "Tue, 08 Feb 2000 15:46:52 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ExecInitAppend"
}
] |
[
{
"msg_contents": "Been doing more tracing... \n\nThe problem with UPDATE on inheritance hierarchies is that when it gets\ndown into ExecSeqScan, the value of...\n\nnode->scanstate->css_currentScanDesc->rs_rd->rd_id\n\nis not equal to the value of...\n\nnode->plan.state->es_result_relation_info->ri_RelationDesc->rd_id\n\nOn the first scan, the former is equal to the relation for the base\nclass\nand the latter is equal to the relation for the subclass.\n\nAny thoughts anyone?\n",
"msg_date": "Tue, 08 Feb 2000 14:47:35 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "UPDATE on subclass"
},
{
"msg_contents": "\nHmm. In exec_append_initialize_next\n\nnth(whichplan, rtable) \nrefers to the subclass and ...\nnth(whichplan, appendstate->as_result_relation_info_list);\nrefers to the baseclass.\n\nIs there something that is constructing one of these \nstructures in reverse order?\n\nChris Bitmead wrote:\n> \n> Been doing more tracing...\n> \n> The problem with UPDATE on inheritance hierarchies is that when it gets\n> down into ExecSeqScan, the value of...\n> \n> node->scanstate->css_currentScanDesc->rs_rd->rd_id\n> \n> is not equal to the value of...\n> \n> node->plan.state->es_result_relation_info->ri_RelationDesc->rd_id\n> \n> On the first scan, the former is equal to the relation for the base\n> class\n> and the latter is equal to the relation for the subclass.\n> \n> Any thoughts anyone?\n> \n> ************\n",
"msg_date": "Tue, 08 Feb 2000 15:12:11 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] UPDATE on subclass"
},
{
"msg_contents": "\nIn ExecInitAppend it has a loop...\n\n\n\n\tforeach(rtentryP, rtable) \n\t{\n\n\tresultList = lcons(rri, resultList);\n\t}\n\t\n\tappendstate->as_result_relation_info_list = resultList;\n\nIf I'm not mistaken this will generate the as_result_relation_info_list\nin the reverse order to the rtentry list, which is wrong... right?\n",
"msg_date": "Tue, 08 Feb 2000 15:24:14 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is this it?"
},
{
"msg_contents": "Chris Bitmead wrote:\n> \n> In ExecInitAppend it has a loop...\n> \n> foreach(rtentryP, rtable)\n> {\n> \n> resultList = lcons(rri, resultList);\n> }\n> \n> appendstate->as_result_relation_info_list = resultList;\n\n\nThis seems to be the problem. I'm going to change the above line to...\n\n\t appendstate->as_result_relation_info_list = lreverse(resultList);\n\nAfter I do this, UPDATE and DELETE start working for me on subclasses.\n\nI'll prepare a full patch for inclusion in 7.1 (Unless you want it for\n7.0 :-).\n",
"msg_date": "Tue, 08 Feb 2000 15:52:05 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Is this it?"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> In ExecInitAppend it has a loop...\n> \tforeach(rtentryP, rtable) \n> \t{\n> \tresultList = lcons(rri, resultList);\n> \t}\n> appendstate->as_result_relation_info_list = resultList;\n\n> If I'm not mistaken this will generate the as_result_relation_info_list\n> in the reverse order to the rtentry list,\n\nCheck ...\n\n> which is wrong... right?\n\nMaybe. Is there code elsewhere that assumes these lists are ordered\nalike?\n\nYou could change the lcons call to \"lappend(resultList, rri)\" if\nyou just want to try the experiment.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Feb 2000 01:17:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Is this it? "
}
] |
[
{
"msg_contents": "\nIs there a step-by-step guide somewhere that tells me how to add a new\nregression test? I've had a bit of a hunt around, and exactly what to do\nto add a test isn't clear.\n",
"msg_date": "Tue, 08 Feb 2000 16:03:38 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regression tests..."
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Is there a step-by-step guide somewhere that tells me how to add a new\n> regression test? I've had a bit of a hunt around, and exactly what to do\n> to add a test isn't clear.\n\nThere's not that much to it, assuming you don't need any platform-\nspecific variations in the expected output. You make a script under\nregress/sql/, add its name in an appropriate place in\nsql/run_check.tests, run it, and drop the results file into expected/\n(hopefully after manual verification ;-))\n\nBTW, am I right in thinking that sql/tests is now dead code? If so,\nwe should flush it, or more likely rename run_check.tests to just tests.\nJan?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Feb 2000 01:06:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Regression tests... "
},
{
"msg_contents": "> Is there a step-by-step guide somewhere that tells me how to add a new\n> regression test? I've had a bit of a hunt around, and exactly what to do\n> to add a test isn't clear.\n\nHmm. Not sure. Here is the procedure, and if you want to plop it into\nthe Developer's Guide sgml sources that would be great:\n\n1) Generate a file containing the test, and place it into\ntest/regress/sql/. Should be named appropriately, in a style similar\nto other files in that directory.\n\n1a) If the test needs new test data, decide if the tables should stay\nfor the rest of the regression tests or if they should be removed on\ncompletion of the individual test. If they stay, you will need to\nupdate the results of one or two other tests which look at the current\ntable list.\n\n2) Add the name of the file to sql/run_check.tests, and perhaps for\ncompleteness to sql/tests (the non-parallel version of the test).\n\n2a) If your test gets data from an external file, you will need to put\nthe templated source file into input/ rather than sql/, and modify the\nMakefiles to generate a runable version for sql/\n\n3) Run the regression tests. Your new test will fail (or succeed,\ncan't remember which) because the \"expected\" output file does not\nexist.\n\n4) Copy results/<your test>.out to expected/<your test>.out\n\n4a) If your test got data from an external file, you will need to put\nthe templated output file into output/ rather than expected/, and\nmodify the Makefiles to generate a non-tempated version for expected/\n\n5) Rerun the regression test, making sure that all tests pass, or that\nyou understand *all* the differences. No fair if you don't analyze the\ndifferences in some detail.\n\n5a) The canonical regression machine is currently a Linux RH5.2 i686\nmachine. 
Some platforms produce different results, and will need\nplatform-specific versions of the regression test results.\n\n6) Send the patches.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 08 Feb 2000 06:41:41 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Regression tests..."
},
{
"msg_contents": "> BTW, am I right in thinking that sql/tests is now dead code? If so,\n> we should flush it, or more likely rename run_check.tests to just tests.\n> Jan?\n\n Yepp, dead.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 8 Feb 2000 11:59:57 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Regression tests..."
}
] |
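As an editorial aside, the add-a-test procedure Thomas outlines in the thread above can be sketched as a shell session. This is an illustrative sketch only: the temporary directory stands in for src/test/regress, and the test name `mytest` and the faked "run" step are hypothetical, not part of the actual PostgreSQL tree.

```shell
# Sketch of the workflow above: write a test, bless its first output as
# "expected", and verify that a rerun then compares clean.
set -e
REGRESS=$(mktemp -d)                      # stand-in for src/test/regress
mkdir -p "$REGRESS/sql" "$REGRESS/results" "$REGRESS/expected"

# 1) Generate a file containing the test and place it under sql/.
cat > "$REGRESS/sql/mytest.sql" <<'EOF'
SELECT 1 AS one;
EOF

# 2) Register it in the (non-parallel) schedule file.
echo "mytest" >> "$REGRESS/sql/tests"

# 3) First run: output lands in results/, with nothing to compare against.
#    (The real harness runs psql; here the "run" is faked with a copy.)
cp "$REGRESS/sql/mytest.sql" "$REGRESS/results/mytest.out"

# 4) Copy results/<test>.out to expected/<test>.out.
cp "$REGRESS/results/mytest.out" "$REGRESS/expected/mytest.out"

# 5) Rerun: the test passes when results and expected agree.
diff "$REGRESS/expected/mytest.out" "$REGRESS/results/mytest.out" \
  && echo "mytest ... ok"
```

Steps 2a/4a (templated input/ and output/ files) add a Makefile substitution pass on top of this, but follow the same bless-then-compare shape.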
[
{
"msg_contents": "\nHi all,\n\nI came across this comment in exec_append_initialize_next....\n\n/* ----------------\n*\t\tinitialize the scan\n*\t\t(and update the range table appropriately)\n*\t\t (doesn't this leave the range table hosed for anybody upstream\n*\t\t of the Append node??? - jolly )\n* ----------------\n*/\n\nI took a stab at guessing what this might mean, and ran the following\ntest.\nIt looks like a bug. Can anybody shed any light on whether the above\ncomment is likely to relate to this bug, and is there anybody who is\nso intimate with this code that they are willing to fix it?\n\n# Comment: b and c inherit from a. d inherits from b.\nchrisb=# begin work;\nBEGIN\nchrisb=# select aa from a;\n aa \n----\n(0 rows)\n\nchrisb=# select aa from b;\n aa \n-----\n ppp\n(1 row)\n\nchrisb=# select aa from c;\n aa \n-------\n cmore\n(1 row)\n\nchrisb=# select aa from d;\n aa \n-------\n dmore\n(1 row)\n\nchrisb=# select aa from a*;\n aa \n-------\n ppp\n cmore\n dmore\n(3 rows)\n\nchrisb=# declare cu cursor for select aa from a*;\nSELECT\nchrisb=# fetch forward 1 in cu;\n aa \n-----\n ppp\n(1 row)\n\nchrisb=# fetch forward 1 in cu;\n aa \n-------\n cmore\n(1 row)\n\nchrisb=# fetch backward 1 in cu;\n aa \n----\n(0 rows)\n",
"msg_date": "Tue, 08 Feb 2000 16:56:06 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug in cursors??"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Chris Bitmead\n> \n> Hi all,\n> \n> I came across this comment in exec_append_initialize_next....\n> \n> /* ----------------\n> *\t\tinitialize the scan\n> *\t\t(and update the range table appropriately)\n> *\t\t (doesn't this leave the range table hosed for \n> anybody upstream\n> *\t\t of the Append node??? - jolly )\n> * ----------------\n> */\n> \n> I took a stab at guessing what this might mean, and ran the following\n> test.\n> It looks like a bug. Can anybody shed any light on whether the above\n> comment is likely to relate to this bug, and is there anybody who is\n> so intimate with this code that they are willing to fix it?\n>\n \nI've forgotten to apply the following patch.\nWithout the patch,backward sequential scan is impossible\nafter reaching EOF. \nIt may be one of the cause.\n\nRegards.\n\n*** access/heap/heapam.c.orig\tMon Aug 2 14:56:36 1999\n--- access/heap/heapam.c\tTue Nov 9 12:59:48 1999\n***************\n*** 775,782 ****\n \t\tif (scan->rs_ptup.t_data == scan->rs_ctup.t_data &&\n \t\t\tBufferIsInvalid(scan->rs_pbuf))\n \t\t{\n- \t\t\tif (BufferIsValid(scan->rs_nbuf))\n- \t\t\t\tReleaseBuffer(scan->rs_nbuf);\n \t\t\treturn NULL;\n \t\t}\n\n--- 775,780 ----\n***************\n*** 833,842 ****\n \t\t\t\tReleaseBuffer(scan->rs_pbuf);\n \t\t\tscan->rs_ptup.t_data = NULL;\n \t\t\tscan->rs_pbuf = InvalidBuffer;\n- \t\t\tif (BufferIsValid(scan->rs_nbuf))\n- \t\t\t\tReleaseBuffer(scan->rs_nbuf);\n- \t\t\tscan->rs_ntup.t_data = NULL;\n- \t\t\tscan->rs_nbuf = InvalidBuffer;\n \t\t\treturn NULL;\n \t\t}\n\n--- 831,836 ----\n***************\n*** 855,862 ****\n \t\tif (scan->rs_ctup.t_data == scan->rs_ntup.t_data &&\n \t\t\tBufferIsInvalid(scan->rs_nbuf))\n \t\t{\n- \t\t\tif (BufferIsValid(scan->rs_pbuf))\n- \t\t\t\tReleaseBuffer(scan->rs_pbuf);\n \t\t\tHEAPDEBUG_3;\t\t/* heap_getnext returns NULL at end */\n \t\t\treturn NULL;\n \t\t}\n--- 
849,854 ----\n***************\n*** 915,924 ****\n \t\t\t\tReleaseBuffer(scan->rs_nbuf);\n \t\t\tscan->rs_ntup.t_data = NULL;\n \t\t\tscan->rs_nbuf = InvalidBuffer;\n- \t\t\tif (BufferIsValid(scan->rs_pbuf))\n- \t\t\t\tReleaseBuffer(scan->rs_pbuf);\n- \t\t\tscan->rs_ptup.t_data = NULL;\n- \t\t\tscan->rs_pbuf = InvalidBuffer;\n \t\t\tHEAPDEBUG_6;\t\t/* heap_getnext returning EOS */\n \t\t\treturn NULL;\n \t\t}\n--- 907,912 ----\n",
"msg_date": "Tue, 8 Feb 2000 17:44:32 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Bug in cursors??"
},
{
"msg_contents": "\nHiroshi, do you need this applied?\n\n\n> \n> I've forgotten to apply the following patch.\n> Without the patch,backward sequential scan is impossible\n> after reaching EOF. \n> It may be one of the cause.\n> \n> Regards.\n> \n> *** access/heap/heapam.c.orig\tMon Aug 2 14:56:36 1999\n> --- access/heap/heapam.c\tTue Nov 9 12:59:48 1999\n> ***************\n> *** 775,782 ****\n> \t\tif (scan->rs_ptup.t_data == scan->rs_ctup.t_data &&\n> \t\t\tBufferIsInvalid(scan->rs_pbuf))\n> \t\t{\n> - \t\t\tif (BufferIsValid(scan->rs_nbuf))\n> - \t\t\t\tReleaseBuffer(scan->rs_nbuf);\n> \t\t\treturn NULL;\n> \t\t}\n> \n> --- 775,780 ----\n> ***************\n> *** 833,842 ****\n> \t\t\t\tReleaseBuffer(scan->rs_pbuf);\n> \t\t\tscan->rs_ptup.t_data = NULL;\n> \t\t\tscan->rs_pbuf = InvalidBuffer;\n> - \t\t\tif (BufferIsValid(scan->rs_nbuf))\n> - \t\t\t\tReleaseBuffer(scan->rs_nbuf);\n> - \t\t\tscan->rs_ntup.t_data = NULL;\n> - \t\t\tscan->rs_nbuf = InvalidBuffer;\n> \t\t\treturn NULL;\n> \t\t}\n> \n> --- 831,836 ----\n> ***************\n> *** 855,862 ****\n> \t\tif (scan->rs_ctup.t_data == scan->rs_ntup.t_data &&\n> \t\t\tBufferIsInvalid(scan->rs_nbuf))\n> \t\t{\n> - \t\t\tif (BufferIsValid(scan->rs_pbuf))\n> - \t\t\t\tReleaseBuffer(scan->rs_pbuf);\n> \t\t\tHEAPDEBUG_3;\t\t/* heap_getnext returns NULL at end */\n> \t\t\treturn NULL;\n> \t\t}\n> --- 849,854 ----\n> ***************\n> *** 915,924 ****\n> \t\t\t\tReleaseBuffer(scan->rs_nbuf);\n> \t\t\tscan->rs_ntup.t_data = NULL;\n> \t\t\tscan->rs_nbuf = InvalidBuffer;\n> - \t\t\tif (BufferIsValid(scan->rs_pbuf))\n> - \t\t\t\tReleaseBuffer(scan->rs_pbuf);\n> - \t\t\tscan->rs_ptup.t_data = NULL;\n> - \t\t\tscan->rs_pbuf = InvalidBuffer;\n> \t\t\tHEAPDEBUG_6;\t\t/* heap_getnext returning EOS */\n> \t\t\treturn NULL;\n> \t\t}\n> --- 907,912 ----\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ 
can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 8 Feb 2000 04:16:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in cursors??"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Tuesday, February 08, 2000 6:16 PM\n> \n> Hiroshi, do you need this applied?\n>\n\nOops,this patch is old,sorry.\nAnother patch may be needed.\n\nDO you think this patch should also be applied to REL tree ?\nIf so,could you please apply it to both trees ?\nOtherwise I would commit it only to current tree myself.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 8 Feb 2000 19:12:52 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Bug in cursors??"
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > Sent: Tuesday, February 08, 2000 6:16 PM\n> > \n> > Hiroshi, do you need this applied?\n> >\n> \n> Oops,this patch is old,sorry.\n> Another patch may be needed.\n> \n> DO you think this patch should also be applied to REL tree ?\n> If so,could you please apply it to both trees ?\n> Otherwise I would commit it only to current tree myself.\n\nI don't think we are doing more 6.5.* releases\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 8 Feb 2000 09:16:07 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in cursors??"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> \n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > > -----Original Message-----\n> > > From: Bruce Momjian [mailto:[email protected]]\n> > > Sent: Tuesday, February 08, 2000 6:16 PM\n> > > \n> > > Hiroshi, do you need this applied?\n> > >\n> > \n> > Oops,this patch is old,sorry.\n> > Another patch may be needed.\n> > \n> > DO you think this patch should also be applied to REL tree ?\n> > If so,could you please apply it to both trees ?\n> > Otherwise I would commit it only to current tree myself.\n> \n> I don't think we are doing more 6.5.* releases\n>\n\nOK,I have committed the patch to current tree.\n\nBTW I found the following TODO item.\n* update pg_class.relhasindex during vacuum when all indexes are dropped\n\nSeems vacuum has done so from the first.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Wed, 9 Feb 2000 15:24:41 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Bug in cursors??"
},
{
"msg_contents": "> > > If so,could you please apply it to both trees ?\n> > > Otherwise I would commit it only to current tree myself.\n> > \n> > I don't think we are doing more 6.5.* releases\n> >\n> \n> OK,I have committed the patch to current tree.\n> \n> BTW I found the following TODO item.\n> * update pg_class.relhasindex during vacuum when all indexes are dropped\n> \n> Seems vacuum has done so from the first.\n\nYes, I see. TODO updated.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 9 Feb 2000 15:09:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in cursors??"
}
] |
[
{
"msg_contents": "First, thanks to Peter Eisentraut for putting back pqbool -- I now can build\nPg...\n\nNow, I have a few questions:\n1.)\tWhat's the deal with the man pages now? In 6.5.x, src/man contained\nthe man pages. (Then, in the 6.5.2 RPM's and later, an extra tarball of updated\nman pages was added on top of the existing ones). In 7.0, they are in\ndoc/man.tar.gz -- however, there are many that are no longer there, unless I\nhave overlooked them. Furthermore, while the command names destroy* have been\nchanged to drop*, the man pages haven't changed. Also, the man3 section has\ndisappeared. (I realize Thomas is busy with outer joins et al -- this is just\na listing of what I've found -- I'm not by any means fussing...).\n\n2.)\tMissing man pages that I have found (lost??) thus far:\n\tpg_ctl (being worked on, AFAIK)\n\tecpg.1\n\tpg_passwd.1\n\tpg_encoding.1\n\tpg_hba.conf.5\n\n3.)\tExtraneous man pages:\n\tpgadmin.1 (PgAdmin is a Windows program.....)\n\nAlso, when building the plperl module, a .so is not being created, nor is it\nbeing installed.\n\nI will continue to bang on the build process for the RPM's as we get closer to\nbeta.....\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n\t\n",
"msg_date": "Tue, 8 Feb 2000 00:58:57 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Questions on 7.0 for RPM building"
},
{
"msg_contents": "> First, thanks to Peter Eisentraut for putting back pqbool -- I now can build\n> Pg...\n> \n> Now, I have a few questions:\n> 1.)\tWhat's the deal with the man pages now? In 6.5.x, src/man contained\n> the man pages. (Then, in the 6.5.2 RPM's and later, an extra tarball of updated\n> man pages was added on top of the existing ones). In 7.0, they are in\n> doc/man.tar.gz -- however, there are many that are no longer there, unless I\n> have overlooked them. Furthermore, while the command names destroy* have been\n> changed to drop*, the man pages haven't changed. Also, the man3 section has\n> disappeared. (I realize Thomas is busy with outer joins et al -- this is just\n> a listing of what I've found -- I'm not by any means fussing...).\n\nThomas Lockhart generates man pages from SGML, but that usually happens\nbefore final, not beta.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 8 Feb 2000 01:57:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Questions on 7.0 for RPM building"
},
{
"msg_contents": "> 1.) What's the deal with the man pages now? In 6.5.x, src/man contained\n> the man pages. (Then, in the 6.5.2 RPM's and later, an extra tarball of \n> updated man pages was added on top of the existing ones). In 7.0, they are \n> in doc/man.tar.gz -- however, there are many that are no longer there, \n> unless I have overlooked them.\n\n*All* information in the old man pages appears somewhere in the new\nhtml/ps docs. And the reference pages for the new docs are translated\ninto the new man tarball. We can try to track down specific cases (I\n*might* have missed a file or two) but in general if you see something\nmissing it probably means we don't have a reference page for it.\n\n> Furthermore, while the command names destroy* have been\n> changed to drop*, the man pages haven't changed.\n\nRight. They will need to be regenerated for the 7.0 release, and\nhaven't been done so far. You *should* get at least a few days to play\nwith a beta tarball that has these updated.\n\n> Also, the man3 section has disappeared.\n\nafaik those pages were not really appropriate for reference pages, or\nthey do not yet appear in reference pages (library API docs, right?).\nThey have their own chapter(s) in the docs, but may not be reference\npages yet.\n\n> 2.) Missing man pages that I have found (lost??) thus far:\n> pg_ctl (being worked on, AFAIK)\n> ecpg.1\n> pg_passwd.1\n> pg_encoding.1\n> pg_hba.conf.5\n\nProbably so. If they don't make sense as a \"reference page\" style of\ndoc in the main docs, then imho they don't make sense as man pages.\nBut each should be handled on a case by case basis.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 08 Feb 2000 07:03:41 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Questions on 7.0 for RPM building"
},
{
"msg_contents": "On Tue, 08 Feb 2000, Thomas Lockhart wrote:\n> > 1.) What's the deal with the man pages now? In 6.5.x, src/man contained\n\n> *All* information in the old man pages appears somewhere in the new\n> html/ps docs.\n\nOk. I can package the man pages that we're going to ship, and leave the rest\nalone -- that's easy enough. I was just curious as to what happened to them --\nand you've answered my question. \n\n> > Furthermore, while the command names destroy* have been\n> > changed to drop*, the man pages haven't changed.\n \n> Right. They will need to be regenerated for the 7.0 release, and\n> haven't been done so far. You *should* get at least a few days to play\n> with a beta tarball that has these updated.\n\nThat's fine -- even if it is a 'late' beta, so long as I know when I'm fine.\n\n> > Also, the man3 section has disappeared.\n> \n> afaik those pages were not really appropriate for reference pages, or\n> they do not yet appear in reference pages (library API docs, right?).\n\nWell, they _were_ in the 6.5.x tarball. But, hey, if they're removed, that's\nfine. I can put a note in the README.rpm file that the preferred documentation\nis the sgml source and its derivatives. I was just curious as to what\nhappened, as I don't recall seeing a message about that.\n\n> > 2.) Missing man pages that I have found (lost??) thus far:\n> > pg_ctl (being worked on, AFAIK)\n> > ecpg.1\n> > pg_passwd.1\n> > pg_encoding.1\n> > pg_hba.conf.5\n\n> Probably so. If they don't make sense as a \"reference page\" style of\n> doc in the main docs, then imho they don't make sense as man pages.\n> But each should be handled on a case by case basis.\n\nAgain, these are man pages (except for pg_encoding and pg_ctl) that existed in\n6.5.x. 
\n\nI realize that these man pages are not the highest priority -- as I was doing a\ntrial build last night, I had finally gotten everything to build properly\n(except plperl), and started working on the %file lists for the packages.,\nsince there are more executables in 7.0 than in 6.5.x. That's when I noticed\nthese.\n\nIf it'll be the final release before they're there, I can easily wait until\nthen (this is admittedly my first major release cycle -- I came on board after\nthe final 6.5 release).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 8 Feb 2000 07:53:23 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Questions on 7.0 for RPM building"
},
{
"msg_contents": "On Tue, 08 Feb 2000, Bruce Momjian wrote:\n> Thomas Lockhart generates man pages from SGML, but that usually happens\n> before final, not beta.\n\nThen I'll wait until before final to worry about the man pages.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 8 Feb 2000 08:02:36 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Questions on 7.0 for RPM building"
}
] |
[
{
"msg_contents": "Just a quick question, but has anyone inserted a large object with a\nspecific OID, rather than getting a new oid?\n\nWhat I'm thinking of, is backing up the large objects in a database into\na zip file, then restoring them (if required).\n\nThis is for a new JDBC example. The backup is easy, but it's the\nrestoring that I can't see how it can be done with the current lo_\nfunctions.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n",
"msg_date": "Tue, 8 Feb 2000 08:57:56 -0000 ",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inserting large objects"
}
] |
[
{
"msg_contents": "How do I make a patch that includes new files?\n\nPreviously I was doing a cvs diff -R -N -c\nto make a patch, but this doesn't work for \nnew files. I can't do cvs add, and using\nplain diff is near-impossible on a CVS directory.\n",
"msg_date": "Tue, 08 Feb 2000 20:10:55 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to make a patch?"
},
{
"msg_contents": "On Tue, 8 Feb 2000, Chris wrote:\n\n> How do I make a patch that includes new files?\n> \n> Previously I was doing a cvs diff -R -N -c\n> to make a patch, but this doesn't work for \n> new files. I can't do cvs add, and using\n> plain diff is near-impossible on a CVS directory.\n\nHere is something that was recommended to me by Jan, and it seems a number\nof other people follow a similar road.\n\nWhen you checked out or updated your cvs copy and you want to start\nworking on something, make one copy like\n$ cp -r pgsql pgsql.orig\n\nThen run configure on that copy. Then copy this one like\n$ cp -r pgsql.orig pgsql.work\n\nThen work on this one. I find it occasionally useful to be able to do a\nmake install on the .orig tree as well do \"see how it used to behave\".\n(You don't want to mess up your cvs tree for that.)\n\nWhen you're done you create a patch between pgsql.orig and pgsql.work\nusing diff -c -r whatnot and send that in. Then you do cvs update again an\nthe game begins anew. This also has the advantage that if someone mangles\nyour patch slightly (such as running indent on it) you won't get funny\nmerge conflicts when you cvs update over your self-patched cvs tree.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 8 Feb 2000 12:50:29 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to make a patch?"
},
{
"msg_contents": "On Tue, 08 Feb 2000, Peter Eisentraut wrote:\n> On Tue, 8 Feb 2000, Chris wrote:\n> \n> > How do I make a patch that includes new files?\n\n> Here is something that was recommended to me by Jan, and it seems a number\n> of other people follow a similar road.\n[snip] \n> When you're done you create a patch between pgsql.orig and pgsql.work\n> using diff -c -r whatnot and send that in. Then you do cvs update again an\n> the game begins anew. This also has the advantage that if someone mangles\n> your patch slightly (such as running indent on it) you won't get funny\n> merge conflicts when you cvs update over your self-patched cvs tree.\n\nThis is also SOP for patching for RPM building. The RPM philosophy is to\nalways build from pristine released sources -- if changes are required in order\nto shoehorn the package into the RedHat FHS confines, then those changes are\ndistributed as a set of patches against the pristine sources. The build\nprocess then applies those patches at build time. The idea is to allow RPM\nusers to easily rebuild packages by simply pulling off the latest pristine\ntarball, then, after editing the patches appropriately, a simple 'rpm -ba'\ncommand completely rebuilds the packages, which you can then install.\n\nFor the RPM's, I have a pristine tree in postgresql-x.x.x.orig, and the work\ntree in postgresql-x.x.x, then issue a 'diff -uNr ' between the two trees. 
\nThen, I rename the work tree to postgresql-x.x.x.patched, then issue the 'rpm\n-ba' (or variants, depending on how far I want the build to proceed), which\napplies those patches to a new pristine build tree.\n\nI have found that the patchset for PostgreSQL doesn't usually vary much --\nuntil now, as there's a whole new regression test suite, and the majority of\nthe RPM-ifying patches were in the regression suite (so that it could be\nprebuilt and run without using make, and without having a source tree around at\nrun time).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 8 Feb 2000 08:35:58 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to make a patch?"
}
] |
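The two-tree workflow recommended in the thread above can be sketched end to end. The tree here is a toy stand-in (a real one would come from a cvs checkout), but the diff flags are the operative part: -N is what makes newly added files show up in the patch.

```shell
# Sketch of the orig/work two-tree patch workflow described above.
set -e
TOP=$(mktemp -d); cd "$TOP"
mkdir pgsql
echo "int main(void) { return 0; }" > pgsql/old.c   # toy stand-in tree

# Keep a pristine copy, then a working copy to hack on.
cp -r pgsql pgsql.orig
cp -r pgsql pgsql.work

# Modify an existing file and add a brand-new one -- the case that
# defeats plain "cvs diff" when you cannot "cvs add".
echo "int main(void) { return 1; }" > pgsql.work/old.c
echo "/* new file */" > pgsql.work/new.c

# -c context format, -r recursive, -N treats absent files as empty,
# so new.c appears in the patch.  diff exits 1 when trees differ.
diff -crN pgsql.orig pgsql.work > patch.txt || true
grep -q "new.c" patch.txt && echo "new.c is covered by the patch"
```

Applying the resulting patch.txt against a fresh pristine tree then recreates new.c, which is exactly the property the thread is after.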
[
{
"msg_contents": "> I see where you're going, and you could possibly make it work, but\n> there are a bunch of problems. One objection is that kernel FDs\n> are a very finite resource on a lot of platforms --- you don't really\n> want to tie up one FD for every dirty buffer, and you *certainly*\n> don't want to get into a situation where you can't release kernel\n> FDs until end of xact. You might be able to get around that by\n> associating the fsync-needed bit with VFDs instead of FDs.\n\n Reminds me to the usefulness of some kind of tablespace\n storage manager. It might not buy us a single saved byte on\n disk, or maybe cost us some extra. But it would save file\n descriptors.\n\n And if this storage manager would work with some amount of\n preallocated blocks, it would be totally happy with a\n fdatasync() instead of a fsync(). Some per tablespace\n configurable options like initial number of blocks, next\n extent size and percentage increase would be fine.\n\n Before someone asks, the difference between a fdatasync() and\n a fsync() is, that the first only forces modified data blocks\n to be flushed to disk. A fsync() causes the inode to be\n flushed too, because at least it has a new modtime. In our\n case, where writes to files can cause block allocations, it\n is a requirement to flush the inode on modifications. But if\n dealing with a file where blocks are already allocated (no\n null faking or write behind the EOF), it is not that\n important. Any difference you might see after a crash can be\n a slightly different last modification time, and this really\n doesn't count.\n\n The result of that difference is, that a write()+fsync()\n nearly allways causes head seeks on the disk (except the\n inode and dirty blocks are on the same cylinder). In contrast\n a series of write()+fdatasync() calls for one and the same\n file, all blocks close together, wouldn't. 
And isn't that\n what our backends usually do?\n\n Having immediate SCSI error reporting enabled on the disks,\n such a burst of write()+fdatasync() calls wouldn't have such a\n big performance impact any more. In that case, the\n fdatasync() call will return as soon as the flushed\n blocks have reached the on-disk cache, not waiting until they are\n burned into the surface.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n",
"msg_date": "Tue, 8 Feb 2000 13:01:29 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TODO item"
},
{
"msg_contents": "On Tue, 8 Feb 2000, Jan Wieck wrote:\n\n> And if this storage manager would work with some amount of\n> preallocated blocks, it would be totally happy with a\n> fdatasync() instead of a fsync(). Some per tablespace\n> configurable options like initial number of blocks, next\n> extent size and percentage increase would be fine.\n\nOn Linux, fdatasync() does exactly the same as fsync(). On FreeBSD (3.4),\nfdatasync() isn't even documented and I can't find it in any of the\ninclude files either. What I'm saying is that for the vast majority of our\nusers this would most likely buy exactly nothing. I just wanted to point\nthat out, not dismiss the idea.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 8 Feb 2000 13:57:15 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO item"
}
] |
[
{
"msg_contents": "\nHi,\n\nAt the URL below is a preview patch (I guess for\nfuture 7.1) that I'm putting up for review. I've \nincorporated fixes in response to previous \ncomments.\n\nIt now has working SELECT, UPDATE and DELETE for\ninherited tables and supporting the ONLY syntax.\n\nftp://ftp.tech.com.au/pub/patch.only\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Tue, 08 Feb 2000 23:09:07 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "New Patch"
}
] |
[
{
"msg_contents": "Hi,\n\n looking at all the complications about dealing with segmented\n files etc., I wonder if it's really worth the efford to add\n file buffering to the trigger queue.\n\n The memory footprint left by modifying a row where triggers\n have to be run is about 40 + 8 * num_triggers bytes. So for\n one PK/FP relationship, it will be 48 bytes per FK\n inserted/updated or 48 bytes per PK updated/deleted. If one\n PK table has multiple references to it, this will only add\n another 8 bytes to the footprint. Same if one table has\n multiple foreign keys defined.\n\n The question now is, who ever attempts to act on millions of\n rows in one transaction, if referential integrity constraints\n are set up?\n\n Of course, if someone updates millions of rows in an RI\n scenario during one transaction, it could blow away the\n backend. But I'd prefer to leave this as a well known problem\n for 7.1 and better start on creating a good regression test\n and some documentation for it.\n\n Thomas, where should the documentation for FOREIGN KEY go?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 8 Feb 2000 16:54:31 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Deferred trigger queue"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> looking at all the complications about dealing with segmented\n> files etc., I wonder if it's really worth the efford to add\n> file buffering to the trigger queue.\n\nYou shouldn't be thinking about that. Use a BufFile (see\nsrc/include/storage/buffile.h), and you have temp file creation,\nfile segmentation and auto cleanup at xact abort with no more work\nthan fopen/fwrite would be. See nodeHash.c/nodeHashjoin.c for an\nexample of use.\n\n> Of course, if someone updates millions of rows in an RI\n> scenario during one transaction, it could blow away the\n> backend. But I'd prefer to leave this as a well known problem\n> for 7.1 and better start on creating a good regression test\n> and some documentation for it.\n\nHowever, if you think that there are other tasks that are higher\npriority than this one, I won't argue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Feb 2000 11:41:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Deferred trigger queue "
},
{
"msg_contents": "> Thomas, where should the documentation for FOREIGN KEY go?\n\nDepends on what the docs look like. There should be some mention of\nforeign keys in the CREATE TABLE reference page\n(doc/sgml/ref/create_table.sgml) and there should be some mention of\nit in the User's Guide. Eventually, we will probably have a full\nchapter on it (and if you want just make a file doc/sgml/foreign.sgml\nand we will start). If you don't want to do that yet, plop something\nin syntax.sgml.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 08 Feb 2000 17:55:31 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Deferred trigger queue"
},
{
"msg_contents": "And btw, I've got most of the regression tests passing with a first\ncut at outer join syntax, but the rules system has breakage. Should be\nOK after another pass through to clean up code, which is likely to\ntouch many files since a bit of the RTE structure changes.\n\nI'd have gone ahead and committed, but figured that breaking foreign\nkeys would not be a step ahead for Jan ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 08 Feb 2000 18:17:41 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Deferred trigger queue"
},
{
"msg_contents": "> And btw, I've got most of the regression tests passing with a first\n> cut at outer join syntax, but the rules system has breakage. Should be\n> OK after another pass through to clean up code, which is likely to\n> touch many files since a bit of the RTE structure changes.\n>\n> I'd have gone ahead and committed, but figured that breaking foreign\n> keys would not be a step ahead for Jan ;)\n\n FOREIGN KEYs aren't related to rules in any way. They are\n implemented as triggers. So break the rule system for a while\n if you feel the need.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 8 Feb 2000 20:33:16 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Deferred trigger queue"
},
{
"msg_contents": "> [email protected] (Jan Wieck) writes:\n> > looking at all the complications about dealing with segmented\n> > files etc., I wonder if it's really worth the efford to add\n> > file buffering to the trigger queue.\n>\n> You shouldn't be thinking about that. Use a BufFile (see\n> src/include/storage/buffile.h), and you have temp file creation,\n> file segmentation and auto cleanup at xact abort with no more work\n> than fopen/fwrite would be. See nodeHash.c/nodeHashjoin.c for an\n> example of use.\n\n You already pointed me to that long ago. Surely, something\n the like would be what to use in this case.\n\n> However, if you think that there are other tasks that are higher\n> priority than this one, I won't argue.\n\n It's not that I totally want to forget about it. It's just\n that I think with 7 days left until BETA I better start on\n stressing the code and providing some docs instead of taking\n care for possible abuse.\n\n There are details that MUST be documented IMHO. For example\n FOREIGN KEY needs that there is a UNIQUE constraint defined\n on the set of referenced columns. Actually this requirement\n is not checked in any way, so it MUST be mentioned in the\n docs.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 8 Feb 2000 20:45:37 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Deferred trigger queue"
},
{
"msg_contents": "Jan, I have added to the TODO list:\n\n\t* Add deferred trigger queue file? (Jan)\n\nDo you want this in there?\n\n> Hi,\n> \n> looking at all the complications about dealing with segmented\n> files etc., I wonder if it's really worth the efford to add\n> file buffering to the trigger queue.\n> \n> The memory footprint left by modifying a row where triggers\n> have to be run is about 40 + 8 * num_triggers bytes. So for\n> one PK/FP relationship, it will be 48 bytes per FK\n> inserted/updated or 48 bytes per PK updated/deleted. If one\n> PK table has multiple references to it, this will only add\n> another 8 bytes to the footprint. Same if one table has\n> multiple foreign keys defined.\n> \n> The question now is, who ever attempts to act on millions of\n> rows in one transaction, if referential integrity constraints\n> are set up?\n> \n> Of course, if someone updates millions of rows in an RI\n> scenario during one transaction, it could blow away the\n> backend. But I'd prefer to leave this as a well known problem\n> for 7.1 and better start on creating a good regression test\n> and some documentation for it.\n> \n> Thomas, where should the documentation for FOREIGN KEY go?\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #========================================= [email protected] (Jan Wieck) #\n> \n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 9 Jun 2000 08:12:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deferred trigger queue"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Jan, I have added to the TODO list:\n> \n> \t* Add deferred trigger queue file? (Jan)\n> \n> Do you want this in there?\n\n Yes.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n",
"msg_date": "Sat, 10 Jun 2000 21:30:21 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Deferred trigger queue"
},
{
"msg_contents": "Added.\n\n> Bruce Momjian wrote:\n> > Jan, I have added to the TODO list:\n> > \n> > \t* Add deferred trigger queue file? (Jan)\n> > \n> > Do you want this in there?\n> \n> Yes.\n> \n> \n> Jan\n> \n> -- \n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #================================================== [email protected] #\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 10 Jun 2000 18:11:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deferred trigger queue"
}
] |
[
{
"msg_contents": "\n> Chris Bitmead <[email protected]> writes:\n> > What about portals? Doesn't psql use portals?\n> \n> No ... portals are a backend concept ...\n> \n\nI think the previous frontend \"monitor\" did use a portal for the\nselects. The so called \"blank portal\".\n\nI don't really see any advantage, that psql does not do a fetch loop\nwith a portal. \nIs it possible in psql to do any \"fetch\" stuff, after doing a\nselect * from table ?\n\nThe result is fed to a pager anyway.\n\nAndreas\n",
"msg_date": "Tue, 8 Feb 2000 17:49:09 +0100 ",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Another nasty cache problem "
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > Chris Bitmead <[email protected]> writes:\n> > > What about portals? Doesn't psql use portals?\n> >\n> > No ... portals are a backend concept ...\n> >\n> \n> I think the previous frontend \"monitor\" did use a portal for the\n> selects. The so called \"blank portal\".\n> \n> I don't really see any advantage, that psql does not do a fetch loop\n> with a portal.\n> Is it possible in psql do do any \"fetch\" stuff, after doing a\n> select * from table ?\n\nYes it is if you set up a cursor. I think Tom was right that psql\nshouldn't use a portal just as a matter of course, because things\nwork differently in that case (locks?). I'm sure it could be a \nuseful option though.\n\n> \n> The result is fed to a pager anyway.\n> \n> Andreas\n",
"msg_date": "Wed, 09 Feb 2000 09:41:17 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Another nasty cache problem"
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > Chris Bitmead <[email protected]> writes:\n> > > What about portals? Doesn't psql use portals?\n> >\n> > No ... portals are a backend concept ...\n> >\n> \n> I think the previous frontend \"monitor\" did use a portal for the\n> selects. The so called \"blank portal\".\n\nIsn't the \"blank portal\" the name of the cursor you get when you just \ndo a select without creating a cursor ?\n\n> I don't really see any advantage, that psql does not do a fetch loop\n> with a portal.\n\nIt only increases traffic, as explicit fetch commands need to be sent \nto the backend. If one does not declare a cursor, an implicit fetch all from \nblank is performed.\n\n> Is it possible in psql do do any \"fetch\" stuff, after doing a\n> select * from table ?\n\nonly if in a declared cursor, and you can only declare a cursor if in a \ntransaction.\n\n---------------\nHannu\n",
"msg_date": "Thu, 10 Feb 2000 12:12:04 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Another nasty cache problem"
}
] |
[
{
"msg_contents": "Hey,\nI have found what looks to be a bug in the COPY command.\n\nThis is current CVS as of monday at 7pm CST.\n\nI have a tab delimited file(It's actually just a dump) that includes\nsome non-normal ascii characters, the one in question being y with 2 dots.\nIf the data has ANY of those in a char(or vchar) field the COPY finishes\nsuccessfully with no errors, but the table is empty. If I edit\nthe data file and remove that y(there was only one), it works just fine.\nSo basically, COPY silently dropped that line on the floor and didn't say\nanything about it.\n\nPostgres was compiled like this: \n./configure --prefix=/home/postgres --with-maxbackends=128 ; make\nand ran like this: postmaster -d 9 -o '-F'\non Linux 2.2.14 (Mandrake 7.0)\n\nThe above scenario happens when I just try to run on a one line data \nfile(which I can provide if necessary). If I try the big macdaddy file\nwith around 350 MB of data(and about 12 out of the 3-4 million rows has\nthat pesky y in it), more interesting things happen.\n\nThe backend log with -d 9 says: \nFATAL 1: Socket command type (y with 2 dots goes here) unknown\nNOTICE: AbortTransaction and not in in-progress state \n\nAnd psql says this:\n psql:parts.dump:42: Backend message type 0x45 arrived while idle \npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally \n before or while processing the request. \nPQsendQuery() -- There is no connection to the backend. \nPQsendQuery() -- There is no connection to the backend. \nPQsendQuery() -- There is no connection to the backend.\nPQsendQuery() -- There is no connection to the backend. \nPQsendQuery() -- There is no connection to the backend. \nPQsendQuery() -- There is no connection to the backend. \nPQsendQuery() -- There is no connection to the backend. \nPQsendQuery() -- There is no connection to the backend.\nPQsendQuery() -- There is no connection to the backend. \n\n2 weird things here.
psql takes a good 30-40 seconds to notice that the\nbackend has died and I had another session(idle) of psql going, and that\nbackend died as well, but the postmaster was still running.\n\nAll of the data above will import just fine into 6.5.3(on the same\nmachine/os).\n\nThanks,\nOle Gjerde\n\n",
"msg_date": "Tue, 8 Feb 2000 11:23:35 -0600 (CST)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": true,
"msg_subject": "COPY from file"
},
{
"msg_contents": "Ole Gjerde <[email protected]> writes:\n> I have found what looks to be a bug in the COPY command.\n\nOoops. Fixed. ('char c' -> 'int c' ... EOF doesn't fit in a char ...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Feb 2000 19:11:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] COPY from file "
},
{
"msg_contents": "On 2000-02-08, Ole Gjerde mentioned:\n\n> I have a tab delimited file(It's actually just a dump) that includes\n> some non-normal ascii characters, the one in question being y with 2 dots.\n> If the data has ANY of those in a char(or vchar) field the COPY finishes\n> successfully with no errors, but the table is empty. If I edit\n> the data file and remove that y(there was only one), it works just fine.\n\nFixed it. Thanks for the report, we'd never have found that one.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 9 Feb 2000 01:12:50 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] COPY from file"
},
{
"msg_contents": "Okay, somebody beat me to it. Seems we can't fix these bugs fast enough.\n;)\n\nOn 2000-02-09, Peter Eisentraut mentioned:\n\n> On 2000-02-08, Ole Gjerde mentioned:\n> \n> > I have a tab delimited file(It's actually just a dump) that includes\n> > some non-normal ascii characters, the one in question being y with 2 dots.\n> > If the data has ANY of those in a char(or vchar) field the COPY finishes\n> > successfully with no errors, but the table is empty. If I edit\n> > the data file and remove that y(there was only one), it works just fine.\n> \n> Fixed it. Thanks for the report, we'd never have found that one.\n> \n> \n> \n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 9 Feb 2000 01:22:57 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] COPY from file"
}
] |
[
{
"msg_contents": "\nLet's say I want to write regression tests to ensure that everything\nrelated to inheritance works. One test I would want would be to make\nsure pg_dump works for inheritance. What is the preferred way to make\nregression tests which aren't sql?\n",
"msg_date": "Wed, 09 Feb 2000 13:03:55 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regression tests"
},
{
"msg_contents": "> Let's say I want to write regression tests to ensure that everything\n> related to inheritance works. One test I would want would be to make\n> sure pg_dump works for inheritance. What is the preferred way to make\n> regression tests which aren't sql?\n\nThis has come up just recently, and afaik we don't have a precedent\nfor it in the current regression tests.\n\nI would advocate having another test fired by the Makefile, to\nminimize the number of components which need to be running for the\nfundamental tests using psql to work.\n\n\"make all\" currently *builds* the tests\n\"make runtest\" currently runs the basic tests\n\"make bigtest\" runs the large numeric tests\n\"make runcheck\" runs the parallel tests\n...\n\nThe basic inheritance tests should be in the basic regression test.\nBut how about something like \"make dumptest\"...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 09 Feb 2000 03:02:40 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Regression tests"
},
{
"msg_contents": ">> Let's say I want to write regression tests to ensure that everything\n>> related to inheritance works. One test I would want would be to make\n>> sure pg_dump works for inheritance. What is the preferred way to make\n>> regression tests which aren't sql?\n\n> This has come up just recently, and afaik we don't have a precedence\n> for it in the current regression tests.\n\nNo, there are no formalized tests at all that exercise pg_dump.\n\nThe informal test that's been around for a while is to pg_dump\nthe regression database, load into a fresh DB, pg_dump again,\nand compare the second dump's output to the first's. But there's\nno automation for this, and I'm not sure you can really expect\nthe resulting scripts to be bit-for-bit the same anyway (there\nmight be ordering differences).\n\nIf you feel like developing a more believable testing strategy\nfor pg_dump, ain't nobody gonna get in your way.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 09 Feb 2000 00:23:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Regression tests "
}
] |
[
{
"msg_contents": "Here is the patch to add plperl to droplang.\n\n\n*** droplang.old\tTue Feb 8 21:00:34 2000\n--- droplang\tTue Feb 8 21:02:27 2000\n***************\n*** 159,167 ****\n \t\tlancomp=\"PL/Tcl\"\n \t\thandler=\"pltcl_call_handler\"\n ;;\n \t*)\n \t\techo \"$CMDNAME: unsupported language '$langname'\"\n! \t\techo \"Supported languages are 'plpgsql' and 'pltcl'.\"\n \t\texit 1\n ;;\n esac\n--- 159,171 ----\n \t\tlancomp=\"PL/Tcl\"\n \t\thandler=\"pltcl_call_handler\"\n ;;\n+ \tplperl)\n+ \t\tlancomp=\"PL/Perl\"\n+ \t\thandler=\"plperl_call_handler\"\n+ ;;\n \t*)\n \t\techo \"$CMDNAME: unsupported language '$langname'\"\n! \t\techo \"Supported languages are 'plpgsql', 'pltcl', and 'plperl'.\"\n \t\texit 1\n ;;\n esac\n\n\n\n-- \nMark Hollomon\[email protected]\n",
"msg_date": "Tue, 8 Feb 2000 21:06:03 -0500",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "plperl droplang patch"
},
{
"msg_contents": "Applied.\n\n> Here is the patch to add plperl to droplang.\n> \n> \n> *** droplang.old\tTue Feb 8 21:00:34 2000\n> --- droplang\tTue Feb 8 21:02:27 2000\n> ***************\n> *** 159,167 ****\n> \t\tlancomp=\"PL/Tcl\"\n> \t\thandler=\"pltcl_call_handler\"\n> ;;\n> \t*)\n> \t\techo \"$CMDNAME: unsupported language '$langname'\"\n> ! \t\techo \"Supported languages are 'plpgsql' and 'pltcl'.\"\n> \t\texit 1\n> ;;\n> esac\n> --- 159,171 ----\n> \t\tlancomp=\"PL/Tcl\"\n> \t\thandler=\"pltcl_call_handler\"\n> ;;\n> + \tplperl)\n> + \t\tlancomp=\"PL/Perl\"\n> + \t\thandler=\"plperl_call_handler\"\n> + ;;\n> \t*)\n> \t\techo \"$CMDNAME: unsupported language '$langname'\"\n> ! \t\techo \"Supported languages are 'plpgsql', 'pltcl', and 'plperl'.\"\n> \t\texit 1\n> ;;\n> esac\n> \n> \n> \n> -- \n> Mark Hollomon\n> [email protected]\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 9 Feb 2000 15:15:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] plperl droplang patch"
}
] |
[
{
"msg_contents": "create table foo(b boolean);\ncreate index foo_index on foo(b);\n\nYou get a \"no default operator for type 16.\" error...\n\nThis ecommerce datamodel I'm porting over uses such indices \nfrequently, apparently to grab small subsets of large tables which\nhave few rows with the predicate set to one state. Even if such\nan index might be of dubious usefulness in situations where\nthe table's population is more evenly split, there's no real\nreason not to support indexes on booleans, is there?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 08 Feb 2000 18:31:05 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "minor bug..."
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> ... there's no real reason not to support indexes on booleans, is\n> there?\n\nNot that I can see. Care to whip up the index support? I think the\nonly actual new code needed is a three-way-compare function (return -1,\n0, or +1 according as a < b, a = b, a > b). Then you need to make up\nthe appropriate rows in pg_amop and related tables. See the \"xindex\"\nchapter of the documentation.\n\n(It occurs to me that performance would probably suck, however, because\nbtree doesn't handle lots of equal keys very efficiently. Fixing that\nis on the TODO list...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 09 Feb 2000 11:05:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] minor bug... "
},
{
"msg_contents": "> > ... there's no real reason not to support indexes on booleans, is\n> > there?\n\nafaict the only case where this would be a win is if there is a *very*\nskewed distribution of boolean values, and you *only* want the\nuncommon one. Otherwise, looking up half the rows in a table via index\nhas got to be worse than just scanning the table.\n\n> Not that I can see. Care to whip up the index support? I think the\n> only actual new code needed is a three-way-compare function (return -1,\n> 0, or +1 according as a < b, a = b, a > b). Then you need to make up\n> the appropriate rows in pg_amop and related tables. See the \"xindex\"\n> chapter of the documentation.\n> (It occurs to me that performance would probably suck, however, because\n> btree doesn't handle lots of equal keys very efficiently. Fixing that\n> is on the TODO list...)\n\n... And performance will suck anyway (see above) :)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 09 Feb 2000 17:53:06 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] minor bug..."
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > > ... there's no real reason not to support indexes on booleans, is\n> > > there?\n> \n> afaict the only case where this would be a win is if there is a *very*\n> skewed distribution of boolean values, and you *only* want the\n> uncommon one. Otherwise, looking up half the rows in a table via index\n> has got to be worse than just scanning the table.\n\nOne (maybe only) case I can see use for it is for a multi-field index \ncontaining many booleans (say an index over 16 boolean fields).\n\n------------\nHannu\n",
"msg_date": "Thu, 10 Feb 2000 13:41:13 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] minor bug..."
},
{
"msg_contents": "I've submitted a patch to pgsql-patches to fix the following\nlimitations on type bool:\n\ntest=> create table foo(b bool);\nCREATE\ntest=> create index foo_idx on foo(b);\nERROR: Can't find a default operator class for type 16.\ntest=> select * from foo where b<=b;\nERROR: Unable to identify an operator '<=' for types 'bool' and 'bool'\n You will have to retype this query using an explicit cast\ntest=> select * from foo where b>=b;\nERROR: Unable to identify an operator '>=' for types 'bool' and 'bool'\n You will have to retype this query using an explicit cast\ntest=> \n\nThe oversight that leads to one not being able to define an index on\ntype bool I can understand, but who the heck would bother to go to\nall the trouble of adding type \"bool\" and only define four of the\nsix standard comparison operators?\n\nOh well...\n\nTom suggested I submit the patch to pgsql-patches, and I ran my OID\nassignments for the new procs, bool_ops, etc past Thomas at Tom's\nsuggestion, the regression tests pass, I've done some additional testing,\netc.\n\nI didn't look into adding bool to the hash ops defined in pg_amop,\nafter all yesterday afternoon was the first I'd looked into adding\nsomething to the catalog code, and getting the above set of\nfunctions in took me four hours of reading docs and code, testing,\nmaking the diff, etc.\n\nI assume not having a type added to hash ops isn't fatal, because\n\"numeric\" isn't there and Jan strikes me as being a very thorough\nguy...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 10 Feb 2000 10:23:33 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] minor bug... "
},
{
"msg_contents": "> I assume not having a type added to hash ops isn't fatal, because\n> \"numeric\" isn't there and Jan strikes me as being a very thorough\n> guy...\n\nA hash index is probably even less useful than the btree index for\nthis type, unless it can be used with multi-column indices. Because\nthe hash will chain duplicate values into a list of some kind, and\nyou'll get *long* lists.\n\nFind and steal the code for \"char\" (the real one-byte character type).\nBut a one-bit hash is what you really want, so it may be better to\nimplement your own.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 10 Feb 2000 21:45:53 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] minor bug..."
}
] |
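Tom Lane's suggested starting point in the thread above — a three-way compare function returning -1, 0, or +1 — is small enough to sketch. The following is a hypothetical standalone version (the name `btboolcmp` and the plain-C signature are assumptions for illustration; a real backend support procedure would go through the fmgr interface and be registered via pg_amop/pg_amproc rows as the "xindex" documentation chapter describes):

```c
#include <stdbool.h>

/*
 * Hypothetical three-way comparison for bool, with false sorting
 * before true: returns -1, 0, or +1 according as a < b, a = b, a > b.
 * This is the whole contract a btree operator class needs from its
 * comparison support procedure.
 */
int
btboolcmp(bool a, bool b)
{
    return (int) a - (int) b;   /* false maps to 0, true to 1 */
}
```

As the thread notes, even with this in place the index only pays off for a very skewed value distribution, since btree handles long runs of equal keys poorly.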
[
{
"msg_contents": "Has anybody heard of porting PostgreSQL on VxWorks (a realtime OS)? I\nknow that someone ported libpq on it, but am interested in the\nbackend. VxWorks has socket, file io and signal but not fork etc.\nMoreover, it has no VM. That makes me pessimistic.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 09 Feb 2000 11:36:38 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "VxWorks ports?"
}
] |
[
{
"msg_contents": "\nIs there any particular reason why a backend has to be started by the\npostmaster unless it is the only backend running (in debug mode) ?\n\nI'm thinking here that\n\n(a) It would be more convenient to debug if you didn't have to shut down\nthe postmaster to run gdb postgres and...\n\n(b) If that were the case you'd be part-way to implementing a\nsingle-process database option like some databases have.\n\nWhat are the issues? Finding the shared memory etc perhaps?\n",
"msg_date": "Wed, 09 Feb 2000 17:50:13 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "backend startup"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Is there any particular reason why a backend has to be started by the\n> postmaster unless it is the only backend running (in debug mode) ?\n\nIf you don't have a postmaster then the backend is running standalone,\nwhich is not really the same environment as running in a live\ninstallation. It's OK for some kinds of debugging but I wouldn't\ntrust it an inch for locking or resource-related issues.\n\n> (a) It would be more convenient to debug if you didn't have to shut down\n> the postmaster to run gdb postgres and...\n\nSay what? I've never yet shut down the postmaster to gdb anything;\nI tell gdb to \"attach\" to a running backend started by the postmaster.\n(See thread a couple weeks ago on exactly this point.) The major\nadvantage of that way of working is you can use a reasonable client\n(psql or whatever floats your boat) instead of having to type queries\ndirectly at a backend that has no input-editing or command history\nsupport. There's also no question about whether you're running in\na realistic environment or not. Finally, you can fire up an additional\nclient+backend to examine the database even when you've got the backend\nunder test stopped somewhere (so long as it's not stopped holding a\nspinlock or anything like that). If it weren't for the needs of initdb,\nI think standalone-backend mode would've gone the way of the dodo\nsome time ago...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 09 Feb 2000 02:51:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend startup "
},
{
"msg_contents": "Tom Lane wrote:\n\n> If you don't have a postmaster then the backend is running standalone,\n> which is not really the same environment as running in a live\n> installation. It's OK for some kinds of debugging but I wouldn't\n> trust it an inch for locking or resource-related issues.\n\nYeh, but for some databases, starting a backend/frontend manually IS\npossible for a live installation, and improves performance because you\ncan run in the one process.\n\n> Say what? I've never yet shut down the postmaster to gdb anything;\n> I tell gdb to \"attach\" to a running backend started by the postmaster.\n\nI guess I'm just too lazy to run ps.\n\n> The major\n> advantage of that way of working is you can use a reasonable \n> client\n> (psql or whatever floats your boat) instead of having to type \n> queries\n> directly at a backend that has no input-editing or command history\n> support.\n\nSure. But if you could run postgres in one-process mode, the backend\nwould appear to support history because you could build a backend with\npsql built in.\n\n\n There's also no question about whether you're running in\n> a realistic environment or not. Finally, you can fire up an additional\n> client+backend to examine the database even when you've got the backend\n> under test stopped somewhere (so long as it's not stopped holding a\n> spinlock or anything like that). If it weren't for the needs of initdb,\n> I think standalone-backend mode would've gone the way of the dodo\n> some time ago...\n> \n> regards, tom lane\n> \n> ************\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Wed, 09 Feb 2000 20:12:56 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend startup"
},
{
"msg_contents": "At 05:50 PM 2/9/00 +1100, Chris Bitmead wrote:\n>\n>Is there any particular reason why a backend has to be started by the\n>postmaster unless it is the only backend running (in debug mode) ?\n>\n>I'm thinking here that\n>\n>(a) It would be more convenient to debug if you didn't have to shut down\n>the postmaster to run gdb postgres and...\n>\n>(b) If that were the case you be part-way to implementing a\n>single-process database option like some databases have.\n\nI can see where (a) is true, but who really cares about (b) any\nmore? NT, BSD, or Linux on a several hundred dollar PC has no problem\nwith dozens of processes...\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 09 Feb 2000 07:54:27 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend startup"
},
{
"msg_contents": "Don Baccus wrote:\n> \n> At 05:50 PM 2/9/00 +1100, Chris Bitmead wrote:\n> >\n> >Is there any particular reason why a backend has to be started by the\n> >postmaster unless it is the only backend running (in debug mode) ?\n> >\n> >I'm thinking here that\n> >\n> >(a) It would be more convenient to debug if you didn't have to shut down\n> >the postmaster to run gdb postgres and...\n> >\n> >(b) If that were the case you be part-way to implementing a\n> >single-process database option like some databases have.\n> \n> I can see where (a) is true, but who really cares about (b) any\n> more? NT, BSD, or Linux on a several hundred dollar PC has no problem\n> with dozens of processes...\n\nWell there is socket overhead and extra context-switching time.\n",
"msg_date": "Thu, 10 Feb 2000 09:32:00 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] backend startup"
},
{
"msg_contents": "At 09:32 AM 2/10/00 +1100, Chris Bitmead wrote:\n\n>> I can see where (a) is true, but who really cares about (b) any\n>> more? NT, BSD, or Linux on a several hundred dollar PC has no problem\n>> with dozens of processes...\n\n>Well there is socket overhead and extra context-switching time.\n\nGiven how expensive the basic RDBMS structure is, I imagine this\nis a bit like worrying about the fact that the bugs on my windshield\nincrease drag and decrease my gas mileage.\n\nI mean ... this is undoubtably true, but really pales in comparison\nto other factors that impact my gas mileage.\n\nNow, if you got rid of all the baggage associated with sharing buffers,\nlocking, and all the rest that goes with the multiple process model\nused by Postgres you might end up with a single-process/single client\nversion that is noticably faster.\n\nBut just getting rid of the kernel overhead of two processes talking\nto each other isn't going to get you much, I don't think. You might\nbe able to measure it for something like \"select 1\", but real queries\non real databases? I find it hard to believe.\n\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 09 Feb 2000 14:51:50 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend startup"
},
{
"msg_contents": "Don Baccus wrote:\n> \n> At 09:32 AM 2/10/00 +1100, Chris Bitmead wrote:\n> \n> >> I can see where (a) is true, but who really cares about (b) any\n> >> more? NT, BSD, or Linux on a several hundred dollar PC has no problem\n> >> with dozens of processes...\n> \n> >Well there is socket overhead and extra context-switching time.\n> \n> Given how expensive the basic RDBMS structure is, I imagine this\n> is a bit like worrying about the fact that the bugs on my windshield\n> increase drag and decrease my gas mileage.\n> \n> I mean ... this is undoubtably true, but really pales in comparison\n> to other factors that impact my gas mileage.\n\nWell I don't know, but I know VERSANT for example provides a lib1p.so\nand a lib2p.so, and I know they make sure to link against 1p.so for\nbenchmarks.\n\n> Now, if you got rid of all the baggage associated with sharing buffers,\n> locking, and all the rest that goes with the multiple process model\n> used by Postgres you might end up with a single-process/single client\n> version that is noticably faster.\n\nWell, I'm not talking about a single client version. That would be of \ndubious value.\n\n> But just getting rid of the kernel overhead of two processes talking\n> to each other isn't going to get you much, I don't think. You might\n> be able to measure it for something like \"select 1\", but real queries\n> on real databases? I find it hard to believe.\n",
"msg_date": "Thu, 10 Feb 2000 11:44:03 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] backend startup"
}
] |
[
{
"msg_contents": "\"Jeff MacDonald \" wrote:\n\n> Ok, i've managed to get all the files i need, dandy.\n>\n> * side note, i am aware that there is a pre-compiled\n> binary on hub. i'm doing this to see how well kevins\n> instructions work for every one. (from scratch)\n>\n> lets start here , first 3 steps\n> 1.Download\nftp://go.cygnus.com/pub/sourceware.cygnus.com/cygwin/latest/full.exe\n>\n> done.\n>\n> 2. Run full.exe and install in c:\\Unix\\Root directory.\n>\n> afaik this means i should have a c:\\Unix\\Root\\Cygwin\n> dir ?\n\nNope. You can install it in any directory :)\n\n> 3.Run Cygwin, and then run \"mount c:/Unix/Root /\"\n> this command will not work. it gives the error\n> \"Device Busy\" , which makes perfect sense, since cygwin\n> is self is running out of a sub-dir of this dir.\n>\n> any thoughts as to what kevin might have meant ?\n\nTry umount /.\n\n> ======================================================\n> Jeff MacDonald\n> [email protected] irc: bignose on EFnet\n> ======================================================\n\n- Kevin\n\n\n\n",
"msg_date": "Wed, 09 Feb 2000 18:28:08 +0800",
"msg_from": "Kevin Lo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] WinNT compiling: ongoing\n\tReferences: <Pine.BSF.4.10.10002020958270.10395"
}
] |
[
{
"msg_contents": "Hi,\n\nI checked the WinNT port yesterday (a few days old snapshot from CVS) and I\nam including a patch to get it compile.\n\nchanges to psql:\n- added less as default pager when compiling on Cygwin\n- need to declare \"filename_completion_function\" because it is not exported\nfrom readline -> added to include/port/win.h\n\nchanges to pg_id:\n- include of <getopt.h>\n- add .exe when installing\n\nI think there is a problem with calling the regress tests on WinNT - it\nshould be called with PORTNAME not HOST as the parameter to regress.sh or\nthe check when to add \"-h localhost\" to psql has to be changed. Now it is\nchecked against the PORTNAME.\n\nThe results of the regress tests were OK with expected failures ;-)\n\n\t\t\tDan\n\n\n----------------------------------------------\nDaniel Horak\nnetwork and system administrator\ne-mail: [email protected]\nprivat e-mail: [email protected] ICQ:36448176\n----------------------------------------------",
"msg_date": "Wed, 9 Feb 2000 11:30:33 +0100 ",
"msg_from": "=?iso-8859-2?Q?Hor=E1k_Daniel?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Small update for WinNT port"
},
{
"msg_contents": "Applied.\n\n[Charset iso-8859-2 unsupported, filtering to ASCII...]\n> Hi,\n> \n> I checked the WinNT port yesterday (a few days old snapshot from CVS) and I\n> am including a patch to get it compile.\n> \n> changes to psql:\n> - added less as default pager when compiling on Cygwin\n> - need to declare \"filename_completion_function\" because it is not exported\n> from readline -> added to include/port/win.h\n> \n> changes to pg_id:\n> - include of <getopt.h>\n> - add .exe when installing\n> \n> I think there is a problem with calling the regress tests on WinNT - it\n> should be called with PORTNAME not HOST as the parameter to regress.sh or\n> the check when to add \"-h localhost\" to psql has to be changed. Now it is\n> checked against the PORTNAME.\n> \n> The results of the regress tests were OK with expected failures ;-)\n> \n> \t\t\tDan\n> \n> \n> ----------------------------------------------\n> Daniel Horak\n> network and system administrator\n> e-mail: [email protected]\n> privat e-mail: [email protected] ICQ:36448176\n> ----------------------------------------------\n> \n> \n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 9 Feb 2000 11:19:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Small update for WinNT port"
},
{
"msg_contents": "On 2000-02-09, Horák Daniel mentioned:\n\n> changes to psql:\n> - added less as default pager when compiling on Cygwin\n\nIs there no \"more\"?\n\n> - need to declare \"filename_completion_function\" because it is not exported\n> from readline -> added to include/port/win.h\n\nI would think this is more of a readline problem, or does cygwin come with\nits own readline edition? What readline version are you using?\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 10 Feb 2000 02:14:44 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Small update for WinNT port"
}
] |
[
{
"msg_contents": "\n> Zeugswetter Andreas SB wrote:\n> > \n> > > Chris Bitmead <[email protected]> writes:\n> > > > What about portals? Doesn't psql use portals?\n> > >\n> > > No ... portals are a backend concept ...\n> > >\n> > \n> > I think the previous frontend \"monitor\" did use a portal for the\n> > selects. The so called \"blank portal\".\n> > \n> > I don't really see any advantage, that psql does not do a fetch loop\n> > with a portal.\n> > Is it possible in psql do do any \"fetch\" stuff, after doing a\n> > select * from table ?\n> \n> Yes it is if you set up a cursor. \n\nMy question implied, that a cursor was not set up. That is\ntype: select * from tab; in psql.\n\n> I think Tom was right that psql\n> shouldn't use a portal just as a matter of course, because things\n> work differently in that case (locks?).\n\nThere is no difference in locking behavior.\nSo the question remains, why don't we always use a cursor in psql.\nIt seems the current behavior wastes resources without an obvious\nadvantage.\n\nAndreas\n",
"msg_date": "Wed, 9 Feb 2000 14:33:52 +0100 ",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: [HACKERS] Another nasty cache problem"
}
] |
[
{
"msg_contents": "I am trying to write to and access an array field in java can you give some \npointers?\n\nEx: \nI've created a loop of 2K entries to benchmark the db using preparedstatement \nin java. When i try to pass info to an array field (lvendorid int4[]) i get \nan error message telling me i must cast an int4 to an _int4. How do i do this \nfrom java?\n\nIf possible, please show code example.\n\nR.Mann\[email protected]\n \n \n",
"msg_date": "Wed, 9 Feb 2000 14:10:21 EST",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "jdbc and sequences --RM"
}
] |
[
{
"msg_contents": "\n> \t/*\n> \t * If no one shared buffer was changed by this transaction then\n> \t * we don't flush shared buffers and don't record commit status.\n> \t */\n> \tif (SharedBufferChanged)\n> \t{\n> \t\tFlushBufferPool();\n> \t\tsync();\n> \t\tif (leak)\n> \t\t\tResetBufferPool();\n> \n> \t\t/*\n> \t\t *\thave the transaction access methods \n> record the status\n> \t\t *\tof this transaction id in the pg_log relation.\n> \t\t */\n> \t\tTransactionIdCommit(xid);\n> \n> \t\t/*\n> \t\t *\tNow write the log info to the disk too.\n> \t\t */\n> \t\tleak = BufferPoolCheckLeak();\n> \t\tFlushBufferPool();\n\nWould it be a win if this second call to FlushBufferPool would only fsync\npg_log ?\nSince if I read correctly, this call will also flush IO from other sessions.\n\nIf I remember correctly I did a test by commenting the second\nFlushBufferPool call,\nbut that resulted in some regression failures.\nThis call would actually not be needed for DB consistency issues (ACID). \nIt is only needed for Client/Server consistency \n(client got commit ok, but server does rollback on crash).\n\n> \t\tsync();\n> \t}\n",
"msg_date": "Wed, 9 Feb 2000 16:50:24 +0100 ",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] TODO item "
}
] |
[
{
"msg_contents": "If I use CREATE FUNCTION for a C function in a .so file and then use the\nfunction and then change and recompile the function, what steps are needed\nto see the change?\n\nAs I see it the options are:\nA: do nothing, the function is reloaded on every invocation.\nB: Reopen the connection to the backend\nC: Restart the postmaster\n\nI suspect B is correct but I would like to hear someone confirm it.\n\n",
"msg_date": "Wed, 9 Feb 2000 16:31:14 -0500",
"msg_from": "\"Bryan White\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "The persistance of C functions"
},
{
"msg_contents": "\nOn 09-Feb-00 Bryan White wrote:\n> If I use CREATE FUNCTION for a C function in a .so file and then use the\n> function and then change and recompile the function, what steps are needed\n> to see the change?\n> \n> As I see it the options are:\n> A: do nothing, the function is reloaded on every invocation.\n> B: Reopen the connection to the backend\n> C: Restart the postmaster\n> \n> I suspect B is correct but I would like to hear someone confirm it.\n\n I don't know about A or B, but C definitely works :-). If it's\npractical, I sometimes use DROP FUNCTION/CREATE FUNCTION, but I don't\nthink this is practical when you're using this function (or set of \nfunctions) to implement a new data type.\n\n----------------------------------\nDate: 09-Feb-00 Time: 14:16:54\n\nCraig Orsinger (email: <[email protected]>)\nLogicon RDA\nBldg. 8B28 \"Just another megalomaniac with ideas above his\n6th & F Streets station. The Universe is full of them.\"\nFt. Lewis, WA 98433 - The Doctor\n----------------------------------\n",
"msg_date": "Wed, 09 Feb 2000 14:19:19 -0800 (PST)",
"msg_from": "Craig Orsinger <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [INTERFACES] The persistance of C functions"
},
{
"msg_contents": "> On 09-Feb-00 Bryan White wrote:\n> > If I use CREATE FUNCTION for a C function in a .so file and then use the\n> > function and then change and recompile the function, what steps are\nneeded\n> > to see the change?\n> >\n> > As I see it the options are:\n> > A: do nothing, the function is reloaded on every invocation.\n> > B: Reopen the connection to the backend\n> > C: Restart the postmaster\n> >\n> > I suspect B is correct but I would like to hear someone confirm it.\n>\n> I don't know about A or B, but C definitely works :-). If it's\n> practical, I sometimes use DROP FUNCTION/CREATE FUNCTION, but I don't\n> think this is practical when you're using this function (or set of\n> functions) to implement a new data type.\n\nThanks, I have come to the conclusion that B is sufficient based on trial\nand error.\n\n\n",
"msg_date": "Wed, 9 Feb 2000 18:42:16 -0500",
"msg_from": "\"Bryan White\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] The persistance of C functions"
},
{
"msg_contents": "\nOn Wed, 9 Feb 2000, Bryan White wrote:\n\n> If I use CREATE FUNCTION for a C function in a .so file and then use the\n> function and then change and recompile the function, what steps are needed\n> to see the change?\n> \n> As I see it the options are:\n> A: do nothing, the function is reloaded on every invocation.\n> B: Reopen the connection to the backend\n> C: Restart the postmaster\n> \n> I suspect B is correct but I would like to hear someone confirm it.\n\n\n 'B' is right - PostgreSQL does not have any persistent cache for this, and \nif you restart the connection a backend reloads this information again.\n\nOr you can drop/(re)create a function, it is a totally safe solution.\n\n\t\t\t\t\t\t\tKarel\n\n",
"msg_date": "Thu, 10 Feb 2000 10:42:29 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] The persistance of C functions"
},
{
"msg_contents": "On Thu, Feb 10, 2000 at 10:42:29AM +0100, Karel Zak - Zakkr wrote:\n> \n> On Wed, 9 Feb 2000, Bryan White wrote:\n> \n> > If I use CREATE FUNCTION for a C function in a .so file and then use the\n> > function and then change and recompile the function, what steps are needed\n> > to see the change?\n> > \n> > As I see it the options are:\n> > A: do nothing, the function is reloaded on every invocation.\n> > B: Reopen the connection to the backend\n> > C: Restart the postmaster\n> > \n> > I suspect B is correct but I would like to hear someone confirm it.\n> \n> \n> 'B' is right - the postgreSQL not has any persisten cache for this, and \n> if you restart connection a backend reload this information again.\n> \n> Or you can drop/(re)create a function, it is total safe solution.\n\n\nNot _totally_ safe: if you've got anything that refers to that function,\nlike a user defined type definition, drop/(re)create will change the\nfunction's oid in the pg_proc table, causing errors when the old\nfunction is looked up. Hmm, an ALTER FUNCTION command might be nice...\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Thu, 10 Feb 2000 10:34:50 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] The persistance of C functions"
},
{
"msg_contents": "Karel Zak - Zakkr wrote:\n\n> > Not _totally_ safe: if you've got anything that refers to that function,\n> > like a user defined type definition, drop/(re)create will change the\n> > function's oid in the pg_proc table, causing errors when the old\n> > function is looked up. Hmm, an ALTER FUNCTION command might be nice...\n>\n> ... and/or check dependencies on the function's oid if the function is DROP,\n> (via FOREIGN KEYs ?). IMHO it is good item to TODO if really nothing check\n> it. (...resending to hackers)\n>\n> Karel\n\nYes. I think it would be an interesting discussion to see whether or not it would\nbe a good idea to integrate referential integrity with respect to the system\ncatalog. The result *could* be backend code which is far easier to maintain, and\n(with updateable oids), support for ALTER/DROP code which yields sane results. For\nexample, with the little COMMENT code, I had to find the backend code responsible\nfor dropping each of the associated object types - relations, aggregates, types,\nfunctions, etc. in order to also drop the associated COMMENT. *AND* I also had to\nfind those areas where an object might be implicitly dropped by dropping another\nobject by calling a different routine -- for example, DROP TRIGGER calls a\ndifferent routine (DropTrigger) than what is called by the DROP TABLE code to drop\nall triggers associated with it (RelationRemoveTriggers). With RI, a cascading\ndelete from pg_class could automatically drop the associated indexes, triggers,\ncomments, etc. And perhaps another trigger on pg_class should be used to remove the\nactual relation file itself. Then one would only need to determine whether the DROP\nshould be allowed (if, for instance, it is a base class of an inheritance\nhierarchy) or it should be rejected by the RI code. Likewise, the ALTER code could\nperform a cascading update of oid (if necessary), to aid pg_dump when dumping\nobjects in oid order (TODO) to reduce the possibility of breaking a dependency...\n\nJust some thoughts,\n\nMike Mascari\n\n\n\n\n\n",
"msg_date": "Fri, 11 Feb 2000 02:11:25 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [INTERFACES] The persistance of C functions"
},
{
"msg_contents": "\nOn Thu, 10 Feb 2000, Ross J. Reedstrom wrote:\n\n> On Thu, Feb 10, 2000 at 10:42:29AM +0100, Karel Zak - Zakkr wrote:\n> > \n> > On Wed, 9 Feb 2000, Bryan White wrote:\n> > \n> > > If I use CREATE FUNCTION for a C function in a .so file and then use the\n> > > function and then change and recompile the function, what steps are needed\n> > > to see the change?\n> > > \n> > > As I see it the options are:\n> > > A: do nothing, the function is reloaded on every invocation.\n> > > B: Reopen the connection to the backend\n> > > C: Restart the postmaster\n> > > \n> > > I suspect B is correct but I would like to hear someone confirm it.\n> > \n> > 'B' is right - the postgreSQL not has any persisten cache for this, and \n> > if you restart connection a backend reload this information again.\n> > \n> > Or you can drop/(re)create a function, it is total safe solution.\n>\n> Not _totally_ safe: if you've got anything that refers to that function,\n> like a user defined type definition, drop/(re)create will change the\n> function's oid in the pg_proc table, causing errors when the old\n> function is looked up. Hmm, an ALTER FUNCTION command might be nice...\n\n... and/or check dependencies on the function's oid if the function is DROP, \n(via FOREIGN KEYs ?). IMHO it is good item to TODO if really nothing check\nit. (...resending to hackers)\n\n\t\t\t\t\t\t\tKarel\n\n \n\n",
"msg_date": "Fri, 11 Feb 2000 11:34:52 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] The persistance of C functions"
}
] |
[
{
"msg_contents": "Hi\n\nI'm running postgres v6.5.3. I need to make calls to\nthe functions in libpq in my code. For this I need the\nfiles - libpq.lib/libpq.lib.dll/libpqdll.lib.\n\nWhen I run 'nmake /f win32.mak' in the src directory,\nit is unable to open/find config.h . If I use the\nconfig.h generated as a result of 'configure' on\ncygwin, it complains about other .h files not being\nfound. (I do not know if there is a way to do the\nequivalent on the DOS Shell/Command Prompt )\n\nCould anyone let me know how to build libpq to get\nlibpq.dll/libpq.lib/libpqdll.lib ?\n\nThanks,\nRini\n\nps : The administrators guide has a chapter on this\nwhich I followed. (But it mentions Postgres v6.4 ?!)\nHere is an extract :\n\nChapter 20. Installation on Win32\n\nTable of Contents\nBuilding the libraries\nInstalling the libraries\nUsing the libraries\n\n Build and installation instructions for\nPostgres v6.4 client libraries on Win32.\n\nBuilding the libraries\n\nThe makefiles included in Postgres are written for\nMicrosoft Visual C++, and will probably not work with\nother systems. It should be\npossible to compile the libaries manually in other\ncases.\n\nTo build the libraries, change directory into the src\ndirectory, and type the command \n\nnmake /f win32.mak\n\nThis assumes that you have Visual C++ in your path.\n\nThe following files will be built: \n\n interfaces\\libpq\\Release\\libpq.dll - The\ndynamically linkable frontend library\n\n interfaces\\libpq\\Release\\libpqdll.lib - Import\nlibrary to link your program to libpq.dll\n\n interfaces\\libpq\\Release\\libpq.lib - Static\nlibrary version of the frontend library\n\n bin\\psql\\Release\\psql.exe - The Postgresql\ninteractive SQL monitor\n\n__________________________________________________\nDo You Yahoo!?\nTalk to your friends online with Yahoo! Messenger.\nhttp://im.yahoo.com\n",
"msg_date": "Wed, 9 Feb 2000 16:02:10 -0800 (PST)",
"msg_from": "Rini Dutta <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to make libpq on winnt using the 'win32.mak's"
}
] |
[
{
"msg_contents": "\n \n Hi\n \n I'm running postgres v6.5.3. I need to make calls to\n the functions in libpq in my code. For this I need\n the\n files - libpq.lib/libpq.lib.dll/libpqdll.lib.\n \n When I run 'nmake /f win32.mak' in the src\n directory,\n it is unable to open/find config.h . If I use the\n config.h generated as a result of 'configure' on\n cygwin, it complains about other .h files not being\n found. (I do not know if there is a way to do the\n equivalent on the DOS Shell/Command Prompt )\n \n Could anyone let me know how to build libpq to get\n libpq.dll/libpq.lib/libpqdll.lib ?\n \n Thanks,\n Rini\n \n ps : The administrators guide has a chapter on this\n which I followed. (But it mentions Postgres v6.4 ?!)\n Here is an extract :\n \n Chapter 20. Installation on Win32\n \n Table of Contents\n Building the libraries\n Installing the libraries\n Using the libraries\n \n Build and installation instructions for\n Postgres v6.4 client libraries on Win32.\n \n Building the libraries\n \n The makefiles included in Postgres are written for\n Microsoft Visual C++, and will probably not work\n with\n other systems. It should be\n possible to compile the libaries manually in other\n cases.\n \n To build the libraries, change directory into the\n src\n directory, and type the command \n \n nmake /f win32.mak\n \n This assumes that you have Visual C++ in your path.\n \n The following files will be built: \n \n interfaces\\libpq\\Release\\libpq.dll - The\n dynamically linkable frontend library\n \n interfaces\\libpq\\Release\\libpqdll.lib -\n Import\n library to link your program to libpq.dll\n \n interfaces\\libpq\\Release\\libpq.lib - Static\n library version of the frontend library\n \n bin\\psql\\Release\\psql.exe - The Postgresql\n interactive SQL monitor\n \n\n__________________________________________________\nDo You Yahoo!?\nTalk to your friends online with Yahoo! Messenger.\nhttp://im.yahoo.com\n",
"msg_date": "Wed, 9 Feb 2000 16:09:22 -0800 (PST)",
"msg_from": "Rini Dutta <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to make libpq on winnt using the 'win32.mak's"
}
] |
[
{
"msg_contents": "\nCan someone give be a brief rundown on what the backendid and backendtag\nare for?\n",
"msg_date": "Thu, 10 Feb 2000 13:49:32 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "backendid and backendtag"
}
] |
[
{
"msg_contents": "> > changes to psql:\n> > - added less as default pager when compiling on Cygwin\n> \n> Is there no \"more\"?\n\nThere is a native version of more in WinNT (like in DOS) but it is not\ncompatible with Cygwin and less is distributed as a part of Cygwin.\n\n> \n> > - need to declare \"filename_completion_function\" because it \n> is not exported\n> > from readline -> added to include/port/win.h\n> \n> I would think this is more of a readline problem, or does \n> cygwin come with\n> its only readline edition? What readline version are you using?\n\nYes, it is a problem of readline. I am not able to get the version number\nbecause it is not defined in the readline headers, but from the copyright\nnotice it is very old (last year is 1992).\n\n\t\t\tDan\n",
"msg_date": "Thu, 10 Feb 2000 10:19:28 +0100",
"msg_from": "=?iso-8859-1?Q?Hor=E1k_Daniel?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Small update for WinNT port"
}
] |
[
{
"msg_contents": "Hello!\nI am having problems in connectiing to java with\nPostgreSQL.The statement is\nClass.forName(\"postgresql.Driver\");\nAlso,Iam not able to create postgresql.jar file in\n/path/src/interfaces/jdbc directory with make utility\nas written in the Readme file in the same\ndirectory.Please help me out.Also wherte thde files\nhave to be placed and path and Classpath setting has\nto be done.Do I need to install some Drivers.I think\ntheyu come with s/w.\nHow could I use Odbc over here.In the sense where \nDSN has to be created.\nDo reply fast as my Project is stopped.\nThanks,\nRicha\n__________________________________________________\nDo You Yahoo!?\nTalk to your friends online with Yahoo! Messenger.\nhttp://im.yahoo.com\n",
"msg_date": "Thu, 10 Feb 2000 02:45:27 -0800 (PST)",
"msg_from": "Richa Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "jdbc 1.2 and postgrsSQL-6.5.3 on RedHat 6.1"
}
] |
[
{
"msg_contents": "Hello!\nI am having problems in connectiing to java with\nPostgreSQL.The statement is\nClass.forName(\"postgresql.Driver\");\nAlso,Iam not able to create postgresql.jar file in\n/path/src/interfaces/jdbc directory with make utility\nas written in the Readme file in the same\ndirectory.Please help me out.Also wherte thde files\nhave to be placed and path and Classpath setting has\nto be done.Do I need to install some Drivers.I think\ntheyu come with s/w.\nHow could I use Odbc over here.In the sense where \nDSN has to be created.\nDo reply fast as my Project is stopped.\nThanks,\nRicha\n\n__________________________________________________\nDo You Yahoo!?\nTalk to your friends online with Yahoo! Messenger.\nhttp://im.yahoo.com\n",
"msg_date": "Thu, 10 Feb 2000 02:50:09 -0800 (PST)",
"msg_from": "Richa Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Jdbc and Postfresql-6.5.3 on RedHat 6.1"
}
] |