threads |
---|
[
{
"msg_contents": "At 2:52 PM 98.2.27 +0100, Meskes, Michael wrote:\n>But this isn't declared in postmaster.c either. ecpg.c does include\n>unistd.h if getopt.h does not exist and I think unistd.h is the one that\n>puts the getopt stuff into postmaster.\n\nI see declarations of optarg and optind in backend/postmaster/postmaster.c\naround line 216 in Feb 28 snapshot.\n---\nTatsuo Ishii\[email protected]\n\n",
"msg_date": "Sun, 1 Mar 1998 11:07:42 +0900",
"msg_from": "[email protected] (Tatsuo Ishii)",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Current 6.3 issues"
},
{
"msg_contents": "Yes, you're right. This should be changed in ecpg.c, too. Could you please\nsubmit a patch for a version that works for you?\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n",
"msg_date": "Tue, 3 Mar 1998 15:06:14 +0100 (CET)",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current 6.3 issues"
},
{
"msg_contents": ">Yes, you're right. This should be changed in ecpg.c, too. Could you please\n>submit a patch for a version that works for you?\n\nSure. I will submit patches the day after tomorrow (sorry, I don't have\ntime for that now).\n--\nTatsuo Ishii\[email protected]\n",
"msg_date": "Tue, 03 Mar 1998 23:41:56 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current 6.3 issues "
},
{
"msg_contents": ">Yes, you're right. This should be changed in ecpg.c, too. Could you please\n>submit a patch for a version that works for you?\n\nHere it is.\n--\nTatsuo Ishii\[email protected]\n--------------------------- cut here ---------------------------\n*** ecpg.c.orig\tFri Feb 27 21:59:06 1998\n--- ecpg.c\tThu Mar 5 10:36:10 1998\n***************\n*** 9,14 ****\n--- 9,16 ----\n #include <getopt.h>\n #else\n #include <unistd.h>\n+ extern int optind;\n+ extern char *optarg;\n #endif\n #include <stdlib.h>\n #if defined(HAVE_STRING_H)\n",
"msg_date": "Thu, 05 Mar 1998 10:41:18 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current 6.3 issues "
}
] |
[
{
"msg_contents": "The following patches will allow postgreSQL 6.3 to compile and run on a \nUNIXWARE 2.1.2 system with the native C compiler with the following library \nchange:\n\n\tThe alloca function must be copied from the libucb.a archive and added\n\tto the libgen.a archive.\n\nAlso, the GNU flex program is needed to successfully build postgreSQL.\n\nThe patches are UUENCODED because the first two patches remove carriage \nreturns (^M) that made their way into the source files and I want to ensure \nthey survive the various mailers.\n\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Sat, 28 Feb 1998 22:10:57 -0500",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "UNIXWARE port patches."
}
] |
[
{
"msg_contents": "When I run configure with the --with-tcl option, it finds tcl.h but not tk.h. \nBoth files exist in the same directory with the same permissions and owner.\n\nThis was with the Feb. 28 snapshot. Any help would be appriciated.\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n",
"msg_date": "Sat, 28 Feb 1998 23:03:14 -0500",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Configure --with-tcl problems."
}
] |
[
{
"msg_contents": "Looks like I don't have this fixed, so I have re-added it to the FAQ.\n\ntest=> create table xx (x int, y int) ; \nCREATE\ntest=> insert into xx select usesysid, count(*) from pg_user group by\nusesysid;\nERROR: The field being grouped by must appear in the target list\n\nIn this case, the group by the parser is checking for is x and y, not\nthe results of the select, so it fails.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 1 Mar 1998 00:40:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "INSERT with GROUP"
}
] |
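One possible workaround for the report above, sketched under the assumption that SELECT ... INTO resolves its GROUP BY against its own target list: aggregate into a scratch table first, so the INSERT ... SELECT into xx carries no GROUP BY for the parser to misresolve. The xx_tmp name is illustrative, and this is not a fix proposed in the thread.

```sql
-- Hypothetical workaround (not from the thread): aggregate into a scratch
-- table first, then insert the already-grouped rows into xx.
SELECT usesysid, count(*) AS cnt
  INTO TABLE xx_tmp
  FROM pg_user
 GROUP BY usesysid;

INSERT INTO xx SELECT usesysid, cnt FROM xx_tmp;

DROP TABLE xx_tmp;
```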
[
{
"msg_contents": "The attached patch will ensure that pg_atoi will return 0 when passed an empty \nstring regardless of what strtol will do with an empty string. Some \nimplementations of strtol will generate an error if passed an emtpy string.\n\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Sun, 01 Mar 1998 01:54:30 -0500",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_atoi patch for postgreSQL 6.3."
}
] |
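The patch itself is attached to the message rather than quoted, so the sketch below only illustrates the user-visible behaviour it describes, assuming the patch is applied; the table and column names are illustrative.

```sql
-- Illustrative only; assumes the pg_atoi patch described above is applied.
CREATE TABLE nums (n int4);
INSERT INTO nums VALUES ('');   -- empty string handed to pg_atoi for int4
SELECT n FROM nums;             -- expected: 0, rather than a strtol error
```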
[
{
"msg_contents": "Hi,\n\nDo the following:\n\n<PSQL SESSION>\ncreate table t (name text);\nCREATE\n\ninsert int t (oid, name) values (13, 'n1');\nINSERT 18409 1;\n\nSelect * from t where oid =13;\nname\n-------\n(0 rows)\n\n</PGSQL SESSION>\n\nIs this correct?\nI would have expected that it wouldn't be allowed to set the value of an\noid.\nFortunately this doesn't happen. However I don't get an error message\neither.\n\nIs this what it's supposed to be?\n\nThanks,\nMaurice\n\n\n",
"msg_date": "Sun, 1 Mar 1998 13:11:03 +0100",
"msg_from": "\"Maurice Gittens\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Oid bug or feature?"
}
] |
[
{
"msg_contents": "Hi,\n\nIn building the March 1 snapshot on linux 2.0.31 / glibc 2.0.7-pre1 / \negcs 1.01, after configuring I blindly typed\n\tmake CFLAGS=\"-O9\" >& make.log &\nand the -I declarations from Makefile.global were omitted and\nthe build failed. After reading Makefile.global, I tried\n\tmake COPT=\"-O9\" >& make.log &\nwhich resulted in both the -O2 from the template and the -O9\noption were used in the build. No errors were produced.\n\nIt appears that if I wanted to build pgsql with different\noptimization levels, the easiest solution would be to remove\nthe -O2 from the linux template and use COPT as above. Is\nthere something obvious I've missed? \n\nI'd like to suggest \nthat this situation be documented in INSTALL, that COPT \noverride the default optimization option, or that the -I\noptions be removed from CFLAGS and included by other means,\nto guard against \"sophisticated but oblivious\" installers like me.\n\nThanks,\n\nMichael\[email protected]\n\n\n",
"msg_date": "Sun, 01 Mar 1998 15:16:50 +0000",
"msg_from": "Michael <[email protected]>",
"msg_from_op": true,
"msg_subject": "very minor CFLAGS/COPT request"
}
] |
[
{
"msg_contents": "> > To try it out. It _may_ just be illustrating my lack of understanding of the locale\n> > support code in Unix...\n\nbingo. Read on :(\n\n> And how is a good outut look like ? Because all I am getting is:\n>\n> shefu (gafton):~/src/locale>./locale\n> numeric decimal point '.'\n> cashin- frac digits '127'; mon decimal ''; mon thousands ''; currency ''; positive ''; negative ''\n> shefu (gafton):~/src/locale>./locale 1\n> locale set to C\n> numeric decimal point '.'\n> cashin- frac digits '127'; mon decimal ''; mon thousands ''; currency ''; positive ''; negative ''\n\nThat is what I get too. So, since I'm going to have to spell it out for you, I figured I'd carry the\nlittle test code over to a Solaris machine. Well, same result there too. *damn* Tried the man page on\nthe Solaris box, and it tells me that the defaults for the \"C\" locale should be empty strings, just\nlike we are getting. The Linux man pages don't mention the expected values afaik.\n\n*slaps forehead* Sheesh. Sorry for the false alarm. We'll fix this for v6.3...\n\n[Marc, we should have this working for the release. Will update docs in the next few minutes then\nwork on a patch from a clean source tree]\n\n> > There is a non-static image also, but it probably requires some Modula-3 libraries. Hey,\n> > that brings up something: would you be interested in a Modula-3 rpm? It would make\n> > installing CVSup much easier, since I wouldn't have to do the static library thing.\n> <snip to make this now out of context :)> Okay, give me the information...\n\nI'm focused on getting Postgres out the door, but will follow up on this with you in a few days, OK?\n\nI'd like to do an install of your Postgres-6.3 package on my RH5.0 machine before you finalize your\npackage. If there are any last-minute problems we can fix them in the source tree and give you a\nfresh tree for release.\n\nThanks again for all your help.\n\n - Tom\n\n",
"msg_date": "Sun, 01 Mar 1998 19:47:33 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Postgresql broken"
}
] |
[
{
"msg_contents": "The purpose of using 'create domain' is as given \nin the example below. I defined domain name 'EMPLOYED'\nand use in create table - see the field 'EMPLOYED' is\nof data-type EMPLOYED:\nCREATE TABLE EMPLOYER (\n PERSON_ID INTEGER NOT NULL,\n EMPLOYER VARCHAR(60),\n EMPLOYED EMPLOYED,\n ^^^^^^^^^^\nUNIQUE (PATIENT_ID));\n\nThe datatype employed is defined by domain which also\nrestricts the values to \"YES\" or \"NO\" or \"RETIRED\" or \"DISABLED\" or\nNULL.\n\nal \n\n---The Hermit Hacker <[email protected]> wrote:\n>\n> On Sat, 28 Feb 1998, al dev wrote:\n> \n> > Hi:\n> > Is create domain command implemented in 6.3??\n> > I am trying to use\n> > create domain employed as char(10)\n> > check (\n> > value = \"YES\" or\n> > value = \"NO\" or\n> > value = \"RETIRED\" or\n> > value = \"DISABLED\" or\n> > value is NULL\n> > );\n> > in SQL scripts but is failing in 6.2.1 postgresql.\n> > \n> > I can find work around BUT there are tons of create domains in my\nSQL\n> > scripts and will be very tedious.\n> > By the way, create domain is in defined in SQL 92\n> > see this chapter 42 in\n> > http://sunsite.unc.edu/LDP/HOWTO/Database-HOWTO.html\n> \n> \tI took a look here, and it didn't say (at least not in chapter\n> 42)...what exactly does 'create domain' do? We don't, and won't,\nhave it\n> for v6.3, not with a release in a few days, and since I do recall\nanyone\n> else having mentioned it before, it isn't on our TODO list, but sounds\n> like something else to be added...\n> \n> \tBut, a short description of what it does would be nice, as I've\n> never heard of that one before :)\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n> \n> \n\n_________________________________________________________\nDO YOU YAHOO!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Sun, 1 Mar 1998 11:57:21 -0800 (PST)",
"msg_from": "al dev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Is \"CREATE DOMAIN\" in 6.3 ??"
},
{
"msg_contents": "On Sun, 1 Mar 1998, al dev wrote:\n\n> The purpose of using 'create domain' is as given \n> in the example below. I defined domain name 'EMPLOYED'\n> and use in create table - see the field 'EMPLOYED' is\n> of data-type EMPLOYED:\n> CREATE TABLE EMPLOYER (\n> PERSON_ID INTEGER NOT NULL,\n> EMPLOYER VARCHAR(60),\n> EMPLOYED EMPLOYED,\n> ^^^^^^^^^^\n> UNIQUE (PATIENT_ID));\n> \n> The datatype employed is defined by domain which also\n> restricts the values to \"YES\" or \"NO\" or \"RETIRED\" or \"DISABLED\" or\n> NULL.\n\n\tOh, cool...so, essentially, you are creating an enumerated(?) type\nto be used in a table?\n\n\tBruce, can you add this onto the TODO list for v6.4? This is\nsomething that we might be able to do now with triggers, no? But, the\nCREATE DOMAIN is part of the spec... :)\n\n\n",
"msg_date": "Sun, 1 Mar 1998 15:01:12 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Is \"CREATE DOMAIN\" in 6.3 ??"
},
{
"msg_contents": "On Sun, Mar 01, 1998 at 03:01:12PM -0500, The Hermit Hacker wrote:\n\n> > The datatype employed is defined by domain which also\n> > restricts the values to \"YES\" or \"NO\" or \"RETIRED\" or \"DISABLED\" or\n> > NULL.\n> \n> \tOh, cool...so, essentially, you are creating an enumerated(?) type\n> to be used in a table?\n\nCool indeed! Actually, a domain definition can be useful for more\nthan just that: if you define a domain, and then use that domain as a\ndata type for various columns in various tables, you can change your\nschema all at once by changing the definition of the domain. Also, a\ndomain can carry extra meaning. Look at this schema (using a somewhat\narcane syntax) for keeping track of suppliers, parts and shipments of\nquantities of parts from suppliers:\n\n\tDOMAIN\t\tS#\tCHARACTER (5)\tPRIMARY\n\tDOMAIN\t\tSNAME\tCHARACTER (40)\n\tDOMAIN\t\tP#\tCHARACTER (5)\tPRIMARY\n\tDOMAIN\t\tPNAME\tCHARACTER (20)\n\n\tRELATION\tS\t(S#, SNAME)\n\t\t\t\tPRIMARY KEY (S#)\n\tRELATION\tP\t(P#, PNAME)\n\t\t\t\tPRIMARY KEY (P#)\n\tRELATION\tSP\t(S#, P#, QTY NUMERIC (4))\n\t\t\t\tPRIMARY KEY (S#,P#)\n\nThis is simplified from an example in \"An Introduction to Database\nSystems\", by C.J. Date, taken from the 1981 third edition. Note how\nthe named domains become the default types for columns of the same\nname as the domains, while the QTY column in the SP relation has an\nexplicit data type. Note also the constraints: the \"PRIMARY KEY\"\nstatements in the RELATION definitions make uniqueness constraints,\nand the word \"PRIMARY\" in the DOMAIN definitions for S# and P# specify\nthat these domains are foreign keys, thus demanding referential\nintegrity from the SP table to the S and P tables. Neat, innit? :-)\n\nDoes modern SQL have this stuff? I'm not up-to-date, I'm afraid...\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "Sun, 1 Mar 1998 22:10:38 +0100",
"msg_from": "Tom I Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Is \"CREATE DOMAIN\" in 6.3 ??"
},
{
"msg_contents": "> > The datatype employed is defined by domain which also\n> > restricts the values to \"YES\" or \"NO\" or \"RETIRED\" or \"DISABLED\" or\n> > NULL.\n> \n> \tOh, cool...so, essentially, you are creating an enumerated(?) type\n> to be used in a table?\n> \n> \tBruce, can you add this onto the TODO list for v6.4? This is\n> something that we might be able to do now with triggers, no? But, the\n> CREATE DOMAIN is part of the spec... :)\n\nAdded.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 1 Mar 1998 16:19:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Is \"CREATE DOMAIN\" in 6.3 ??"
},
{
"msg_contents": "Tom I Helbekkmo wrote:\n> \n> On Sun, Mar 01, 1998 at 03:01:12PM -0500, The Hermit Hacker wrote:\n> \n> > > The datatype employed is defined by domain which also\n> > > restricts the values to \"YES\" or \"NO\" or \"RETIRED\" or \"DISABLED\" or\n> > > NULL.\n> > \n> > \tOh, cool...so, essentially, you are creating an enumerated(?) type\n> > to be used in a table?\n\n...\n> Does modern SQL have this stuff? I'm not up-to-date, I'm afraid...\n\nThe only thing I know of like this is the REFERENCES keyword. You can\ndo the following (Sybase example):\n\nCreate a table users where the userid field is an identity\n(automatically generates the next number in the sequence during the\ninsert) unique and not null. Sybase makes you use numeric fields for\nidentities (I.E. can't use int), but we could do better :)\n\n1> create table users (username varchar(30) not null,\n2> userid numeric(20,0) identity unique not null)\n3> go\n\nCreate a table that stores information based on a given userid.\n\n1> create table usage(userid numeric(20,0) not null references users(userid),\n2> login_time datetime not null,\n3> logout_time datetime not null)\n4> go\n\nThe \"references\" keyword means that an item can be in this table\n(usage) iff there is a corresponding entry in the users table. For\nexample:\n\n1> insert into users (username) values(\"ocie\")\n2> select @@identity\n3> go\n(1 row affected)\n \n ----------------------------------------- \n 1 \n \n(1 row affected)\n\nThis inserted a user \"ocie\" and selected the magic variable\n@@identity, which is my userid. I can try inserting into usage with\nother userids:\n\n1> insert into usage (userid,login_time,logout_time) values (2,getdate(),getdate())\n2> go\nMsg 546, Level 16, State 1:\nLine 1:\nForeign key constraint violation occurred, dbname = 'ociedb', table name =\n'usage', constraint name = 'usage_userid_1503344420'.\nCommand has been aborted.\n(0 rows affected)\n\nbut it fails because there is no such entry in users. I can also add\nseveral entries under my userid:\n\n1> insert into usage (userid,login_time,logout_time) values (1,getdate(),getdate())\n2> go\n(1 row affected)\n1> insert into usage (userid,login_time,logout_time) values (1,getdate(),getdate())\n2> go\n(1 row affected)\n\nand retrieve them:\n\n1> select * from usage\n2> go\n userid login_time logout_time \n ----------------------- -------------------------- -------------------------- \n 1 Mar 1 1998 5:43PM Mar 1 1998 5:43PM \n 1 Mar 1 1998 5:43PM Mar 1 1998 5:43PM \n \n(2 rows affected)\n\nI can't delete this user from the users table until all the rows that\nreference it have been removed:\n\n1> delete from users where userid=1\n2> go\nMsg 547, Level 16, State 1:\nLine 1:\nDependent foreign key constraint violation in a referential integrity\nconstraint. dbname = 'ociedb', table name = 'users', constraint name =\n'usage_userid_1503344420'.\nCommand has been aborted.\n(0 rows affected)\n\n\nThis can also be set up so that multiple fields in another table\ndefine the reference, and I believe it can also be set up so that\nreferencees (is that a real word?) are deleted, rather than generating\nthe above message.\n\nThis can of course be done with triggers, but I think that external\nkey and references are good examples of \"code as documentation\".\n\n\nOcie\n",
"msg_date": "Sun, 1 Mar 1998 17:48:37 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Is \"CREATE DOMAIN\" in 6.3 ??"
}
] |
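For reference, the SQL92 construct under discussion looks roughly like the first statement below; PostgreSQL 6.3 does not accept CREATE DOMAIN, so it is shown for illustration only. The second statement is an approximation using a per-column CHECK constraint, assuming column CHECK constraints are available in the version at hand.

```sql
-- SQL92-style domain (not supported in PostgreSQL 6.3; illustration only):
CREATE DOMAIN employed AS CHAR(10)
    CHECK (VALUE IN ('YES', 'NO', 'RETIRED', 'DISABLED') OR VALUE IS NULL);

-- Approximation without domains: repeat the restriction as a column constraint.
CREATE TABLE employer (
    person_id INTEGER NOT NULL,
    employer  VARCHAR(60),
    employed  CHAR(10) CHECK (employed IN ('YES', 'NO', 'RETIRED', 'DISABLED')
                              OR employed IS NULL),
    UNIQUE (person_id)
);
```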
[
{
"msg_contents": "Somehow, I don't think pgsql/doc/postgres.tar.gz should be there. It\nlooks big.\n\nThomas?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 1 Mar 1998 16:23:32 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Look at pgsql/doc/postgres.tar.gz"
},
{
"msg_contents": "> Somehow, I don't think pgsql/doc/postgres.tar.gz should be there. It\n> looks big.\n\nWell, _I_ thought it was supposed to be there :) Discussed below...\n\nOK, the new docs are now committed. Can people with access to the CVSup tree\nlook through them for any obvious, hopefully minor, problems?\n\nThere is a Makefile in the doc/ directory; \"make install\" will unpack the\nhtml directories directly underneath doc, or will unpack them under $PGDOCS\nif that is defined in your Makefile.custom in the source area.\n\nSo, there are 4 documents available in both hardcopy and html:\n\n admin - disk and user management, installation instructions, etc.\n user - all user-oriented topics _not_ requiring programming\n programmer - programming topics for application- and postgres-developers\n tutorial - the sql newbie introduction. no installation instructions\n\nThere is a 5th html package, \"postgres\", which contains all of the others as\n\"parts of a book\". That way, you can click around the entire document set\nwithout having to jump to a new URL. Was not useful for hardcopy imo but\nseemed possibly more convenient in html.\n\nIt adds bulk to the distribution, but I thought it would be useful. I hope\nthat there will be lots of discussion on the right way to do this, and we\ncan make adjustments along the way.\n\nfyi, it takes ~5 minutes or less on my machine to completely regenerate all\n5 html documents from the source. Each hardcopy took an hour or so to clean\nup (e.g. fixing a few page breaks, updating the ToC, inserting figures,\netc).\n\nThere is lots of ugliness scattered through the docs, but I've accomplished\nmy main goal for v6.3 (at least I hope I have):\n\nThe minimum time investment for someone to make a meaningful contribution to\nthe non-ascii documentation is now measured in minutes. Small typos and\nparagraphs can be fixed trivially, and larger content can be modified or\ninserted easily. It can all be redone in hardcopy for each release with a\nminimum of effort, and html can be updated immediately if you have the tools\ninstalled. I hope to get postgresql.org set up to be able to do this, so we\ncan get fresh html generated between releases.\n\n - Tom\n\n",
"msg_date": "Sun, 01 Mar 1998 22:33:07 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "New docs available"
},
{
"msg_contents": "> \n> > Somehow, I don't think pgsql/doc/postgres.tar.gz should be there. It\n> > looks big.\n> \n> Well, _I_ thought it was supposed to be there :) Discussed below...\n> \n> OK, the new docs are now committed. Can people with access to the CVSup tree\n> look through them for any obvious, hopefully minor, problems?\n\nYep, I see why they are there now. Sorry.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 1 Mar 1998 17:37:41 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New docs available"
}
] |
[
{
"msg_contents": "> > > Varchar currently (in 6.2.1 and below) takes up the entire length specified\n> > > in the definition, despite the fact the value in it may actually be\n> > > shorter. Text takes only the space taken by the value.\n> >\n> > Thanks for the clarification. In this case, what happens with varchar's\n> > length if the original definition for that field leaves length undefined?\n> > Does it behave like text in that case?\n>\n> You really shouldn't be doing that. Not sure what happens. Not a good\n> idea:\n>\n> create table test (x varchar);\n\n?? This was defined to be a varchar of unlimited length, much like, or identical\nto, text. Should this now be disallowed? If so, we can fix the parser to disallow\nit so people don't get misled.\n\n> > I also vaguely recall seeing a message last year about the use of indexes\n> > in queries: that in [some circumstances] indexes built on varchar fields\n> > don't get used and a sequential scan through all records takes place\n> > instead. Is there any distinction between varchar and text here?\n>\n> Don't remember that.\n\nThis was probably Bruce's improvements to allow indices on some pattern matching.\nDoesn't make a distinction between these types in its behavior.\n\n - Tom\n\n",
"msg_date": "Sun, 01 Mar 1998 21:28:56 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [QUESTIONS] varchar vs text"
},
{
"msg_contents": "> \n> > > > Varchar currently (in 6.2.1 and below) takes up the entire length specified\n> > > > in the definition, despite the fact the value in it may actually be\n> > > > shorter. Text takes only the space taken by the value.\n> > >\n> > > Thanks for the clarification. In this case, what happens with varchar's\n> > > length if the original definition for that field leaves length undefined?\n> > > Does it behave like text in that case?\n> >\n> > You really shouldn't be doing that. Not sure what happens. Not a good\n> > idea:\n> >\n> > create table test (x varchar);\n> \n> ?? This was defined to be a varchar of unlimited length, much like, or identical\n> to, text. Should this now be disallowed? If so, we can fix the parser to disallow\n> it so people don't get misled.\n\nOh, I didn't know. There really is no difference between varchar with\nno lenght, and text, but if it doesn't break anything, no problem.\n\n> \n> > > I also vaguely recall seeing a message last year about the use of indexes\n> > > in queries: that in [some circumstances] indexes built on varchar fields\n> > > don't get used and a sequential scan through all records takes place\n> > > instead. Is there any distinction between varchar and text here?\n> >\n> > Don't remember that.\n> \n> This was probably Bruce's improvements to allow indices on some pattern matching.\n> Doesn't make a distinction between these types in its behavior.\n\nNot sure what to say on this. I remember that issue, but not how it\ncaused any problem.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 1 Mar 1998 16:39:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [QUESTIONS] varchar vs text"
}
] |
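A short illustration of the cases discussed above: per the thread, a VARCHAR declared without a length behaves essentially like TEXT, while VARCHAR(n) enforces the declared maximum (and, in 6.2.1 and below, stores the full declared length). Table and column names are illustrative.

```sql
CREATE TABLE string_demo (
    code VARCHAR(10),  -- bounded; 6.2.1 and below store the full declared length
    note VARCHAR,      -- no length given; effectively the same as text
    body TEXT
);
```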
[
{
"msg_contents": "Bruce Momjian wrote:\n\n> > In v6.3, indices are supported with use of ~ and LIKE operators.\n> > An unofficial patch for v6.2 is also available to do this.\n> >\n> > Hopefully, the developers will have remembered to change the documentation\n> > as well.\n>\n> Do you have a suggestion on where to put such a mention?\n\nThis should be in release notes in the hardcopy/html documentation. I'd like to\nwork on integrating these soon after v6.3 is released, and then do updates\nthere. That way, there will be a place for people to keep track of significant\nchanges and improvements which would affect a user.\n\nMuch of the new docs which is release-specific is like to old docs. Ugly, but I\nsimply didn't have enough time to rewrite _all_ 200 pages :)\n\n",
"msg_date": "Sun, 01 Mar 1998 21:32:16 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [QUESTIONS] varchar vs text"
}
] |
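A small sketch of the kind of query the note above refers to: in v6.3 an ordinary btree index can be used for pattern matches anchored at the start of the string, with either LIKE or ~. All names are illustrative, and the explicit operator-class syntax is an assumption about the era's CREATE INDEX form.

```sql
-- Table and index names are illustrative.
CREATE TABLE people (name text);
CREATE INDEX people_name_idx ON people USING btree (name text_ops);

SELECT * FROM people WHERE name LIKE 'Lock%';  -- anchored prefix: index usable in v6.3
SELECT * FROM people WHERE name ~ '^Lock';     -- same idea with a regular expression
```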
[
{
"msg_contents": "> I had a chance to look at the users and programmers manuals you just\n> installed. Very nice. Lots of new stuff and cleanup, and you\n> integrated much of the separate documentation in one place. I have\n> added a mention of it in my release summary.\n>\n> I can easily send you html of what I am doing. The FAQ is already html,\n> and the TODO list is ascii, but converted using txt2html from\n> http://www.cs.wustl.edu/~seth/txt2html/. Works really well. It\n> recoginizes certain text formatting styles, and outputs HTML to make it\n> look correct on a web page. Perhaps we could use that to convert over\n> some of the ASCII-only stuff we have.\n\nYes, that would help, and then I can run a brute-force filter to convert the html to\nalmost-DocBook sgml. From there on we can turn it around and generate html from the\nDocBook sources, for posting on the web page etc.\n\n> Seems it may be nice to have all the docs in the separate directories\n> all in html, and have 'make' grab them and convert them into the manual.\n> I really don't know what is involved, or whether you can just grab html\n> and place it into sgml documents, but it is an idea. Actually, the\n> doc/src/*.sgml files look pretty easy to understand, so maybe we all\n> need to learn it.\n\nWell, it works _almost_ like this. Without getting caught up in the fact that html\n_is_ sgml, just not sufficient to fully specify document content, the document\nsource would all be in DocBook sgml, then converted to html, hardcopy, ascii, and\nman pages from there.\n\nDocBook has a learning curve when starting from scratch, but I've put in the 100\nhours to get over that hump. From here on, the docs can evolve from existing\ndocuments, and stealing formatting specs from those will make a new doc easy to\nwrite.\n\nFor each of the current plain text, man page, or html _source_ docs we will need to\nget the maintainer to agree to try using sgml for that. I'll do, or assist with, the\nconversion to sgml and from then on the maintainer would make maintenance changes to\nthe sgml source. I figured we can tackle that one at a time over the next couple of\nmonths.\n\n> I guess the manual is so nice, I want to make sure it can stay\n> up-to-date without much effort on your part. I am sure you have already\n> thought of that.\n\nWell, that is the advantage to using sgml, as long as others are willing to maintain\ninformation in that format. I'll stress that _new_ information can be written\nwithout sgml in plain text and someone can then help convert it. From then on, it\nwould be easiest if it were maintained from the sgml sources.\n\n> You have certainly jump-started our documentation, and now that it is so\n> nice, I am sure people will start getting involved.\n\nThanks. I really hope so :)\n\nLots of open issues with content, presentation, etc. and as we discuss it on the\nDocs list we can start a ToDo to keep track of where we are headed.\n\n - Tom\n\n",
"msg_date": "Sun, 01 Mar 1998 22:55:11 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [QUESTIONS] varchar vs text"
},
{
"msg_contents": "> \n> > I had a chance to look at the users and programmers manuals you just\n> > installed. Very nice. Lots of new stuff and cleanup, and you\n> > integrated much of the separate documentation in one place. I have\n> > added a mention of it in my release summary.\n> >\n> > I can easily send you html of what I am doing. The FAQ is already html,\n> > and the TODO list is ascii, but converted using txt2html from\n> > http://www.cs.wustl.edu/~seth/txt2html/. Works really well. It\n> > recoginizes certain text formatting styles, and outputs HTML to make it\n> > look correct on a web page. Perhaps we could use that to convert over\n> > some of the ASCII-only stuff we have.\n> \n> Yes, that would help, and then I can run a brute-force filter to convert the html to\n> almost-DocBook sgml. From there on we can turn it around and generate html from the\n> DocBook sources, for posting on the web page etc.\n\nOK. I recommend you just grab the TODO and FAQ from the web site,\nunless you want HTML versions of them in the distribution along with the\nASCII verions.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 1 Mar 1998 18:09:18 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [QUESTIONS] varchar vs text"
},
{
"msg_contents": "> > > I can easily send you html of what I am doing. The FAQ is already html,\n> > > and the TODO list is ascii, but converted using txt2html from\n> > > http://www.cs.wustl.edu/~seth/txt2html/. Works really well. It\n> > > recoginizes certain text formatting styles, and outputs HTML to make it\n> > > look correct on a web page. Perhaps we could use that to convert over\n> > > some of the ASCII-only stuff we have.\n> >\n> > Yes, that would help, and then I can run a brute-force filter to convert the html to\n> > almost-DocBook sgml. From there on we can turn it around and generate html from the\n> > DocBook sources, for posting on the web page etc.\n>\n> OK. I recommend you just grab the TODO and FAQ from the web site,\n> unless you want HTML versions of them in the distribution along with the\n> ASCII verions.\n\nAssuming we aren't doing this until post-v6.3 release, will let you know when we are\nready to start the conversion. Need to draw a line at how much can go into v6.3, and I\nthink we are past it wrt the docs except for perhaps goof-up fixes of the packages.\n\nMy first project after v6.3 will be getting jade/DocBook going on postgresql.org (perhaps\nit already is; Marc pointed me at something which looked like a jade package). Then, we\ncan demonstrate how to run it on that machine, and perhaps tie it in to an automatic html\ndocumentation update from cron or from cvs. Also, I'm hoping to be busy answering\nquestions and helping all those new documenters out there :)\n\n - Tom\n\n",
"msg_date": "Sun, 01 Mar 1998 23:50:19 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Re: [HACKERS] Re: [QUESTIONS] varchar vs text"
},
{
"msg_contents": "On Sun, 1 Mar 1998, Thomas G. Lockhart wrote:\n\n> \n> My first project after v6.3 will be getting jade/DocBook going on\n> postgresql.org (perhaps it already is; Marc pointed me at something\n> which looked like a jade package). Then, we can demonstrate how to run\n> it on that machine, and perhaps tie it in to an automatic html\n> documentation update from cron or from cvs. Also, I'm hoping to be busy\n> answering questions and helping all those new documenters out there :) \n\n\tjade was installed ~Jan 13th :) Of course, it hasn't been tested\nyet, but let me know if there are any problems :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 1 Mar 1998 21:22:42 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Re: [HACKERS] Re: [QUESTIONS] varchar vs text"
},
{
"msg_contents": "> jade was installed ~Jan 13th :) Of course, it hasn't been tested\n> yet, but let me know if there are any problems :)\n\nOK, jade is there, but I need the DocBook DTD integrated into jade's catalog:\n\n> gmake admin.tar.gz\ngmake -C sgml clean\ngmake[1]: Entering directory `/home/users/t/thomas/pgsql/doc/src/sgml'\n(rm -rf *.html *.htm)\ngmake[1]: Leaving directory `/home/users/t/thomas/pgsql/doc/src/sgml'\ngmake -C sgml admin.html\ngmake[1]: Entering directory `/home/users/t/thomas/pgsql/doc/src/sgml'\n(rm -rf *.htm)\njade -D sgml -d /home/users/t/thomas/db107.d/docbook/html/docbook.dsl -t sgml\nadmin.sgml\njade:admin.sgml:8:59:W: cannot generate system identifier for public text\n\"-//Davenport//DTD DocBook V3.0//EN\"\njade:admin.sgml:19:0:E: reference to entity \"BOOK\" for which no system identifier\ncould be generated\njade:admin.sgml:8:0: entity was defined here\njade:admin.sgml:19:0:E: DTD did not contain element declaration for document type\nname\njade:admin.sgml:21:5:E: element \"BOOK\" undefined\njade:admin.sgml:25:6:E: element \"TITLE\" undefined\n...\n\nI had given you a reference for the source packages for my installation; do you\nneed that again? I think, as a first step, we just need the catalog stuff\nupdated.\n\n - Tom\n\n",
"msg_date": "Mon, 02 Mar 1998 01:54:24 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Re: [HACKERS] Re: [QUESTIONS] varchar vs text"
}
] |
[
{
"msg_contents": "I notice that the regression test performance seems to have decreased\nsometime between 980223 and 980301. I was seeing ~2:30 elapsed time for\nthe last few weeks, and am now seeing ~3:00. Anyone else notice this?\nMight it be due to a bug fix? Inquiring minds want to know. I haven't\nlooked at it very carefully, so could just be imagining things. I think\nit is still substantially faster than v6.2.1...\n\n",
"msg_date": "Sun, 01 Mar 1998 23:06:17 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "v6.3 performance"
},
{
"msg_contents": "> I notice that the regression test performance seems to have decreased\n> sometime between 980223 and 980301. I was seeing ~2:30 elapsed time for\n> the last few weeks, and am now seeing ~3:00. Anyone else notice this?\n> Might it be due to a bug fix? Inquiring minds want to know. I haven't\n> looked at it very carefully, so could just be imagining things. I think\n> it is still substantially faster than v6.2.1...\n\nNever mind. I was measuring the speed with USE_LOCALE enabled. Will put it\nin the docs for v6.4 :)\n\n",
"msg_date": "Mon, 02 Mar 1998 00:08:34 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] v6.3 performance"
}
] |
[
{
"msg_contents": "\n>From the calls to postgres in initdb, correct? Has anyone come up\nwith a decent way of wrapping gdb around postgres in initdb?\n\nHere's what I've got so far -- A file called doit.gdb with the contents:\n\nbreak DefineIndex\nrun -boot -C -F -D/usr/local/pgsql/data -d template1\n\nand I've been experimenting with things like this:\n\ncat $TEMPLATE \\\n| sed -e \"s/postgres PGUID/$POSTGRES_SUPERUSERNAME $POSTGRES_SUPERUID/\" \\\n -e \"s/NAMEDATALEN/$NAMEDATALEN/g\" \\\n -e \"s/OIDNAMELEN/$OIDNAMELEN/g\" \\\n -e \"s/PGUID/$POSTGRES_SUPERUID/\" \\\n > /tmp/fifo &\necho foo\ngdb -batch -tty /tmp/fifo -x /tmp/doit.gdb postgres\n\nthe above certainly doesn't work, and I've gotten it more functional\nthan this, but never actually functional. If I had better knowledge\nof named pipes that would certainly help.\n\nIt doesn't look like I'll be able to take care of this by release date\n(today, tomorrow)..\n\nDoes this mean that alpha is not supported?\n\nOn Thu, 26 February 1998, at 11:36:36, Vadim B. Mikheev wrote:\n\n> Try to set break point inside defind.c:DefineIndex() and continue with 's' \n> after got there... This is also good to check that args of DefineIndex()\n> are Ok - look @ bootparse.y how they get values...\n> \n> You have to find place where backend tries to get OID of index function name \n> (mkoidname).\n> \n> Vadim\n",
"msg_date": "Sun, 1 Mar 1998 17:25:22 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] alpha/64bit & mkoidname problem (fwd)"
}
] |
[
{
"msg_contents": "Yes!! it is very cool feature in SQL. I used them\nvery often and it saves hell of time!! The reason\nI can do 'Alter domain' to change char(10) to char(30)\nand it is propogated in all the tables automatically\nwhereever domain is used!! Also need to have 'ALTER DOMAIN' very\npowerful feature.\n\nal\n\n---Tom I Helbekkmo <[email protected]> wrote:\n>\n> On Sun, Mar 01, 1998 at 03:01:12PM -0500, The Hermit Hacker wrote:\n> \n> > > The datatype employed is defined by domain which also\n> > > restricts the values to \"YES\" or \"NO\" or \"RETIRED\" or \"DISABLED\"\nor\n> > > NULL.\n> > \n> > \tOh, cool...so, essentially, you are creating an enumerated(?) type\n> > to be used in a table?\n> \n> Cool indeed! Actually, a domain definition can be useful for more\n> than just that: if you define a domain, and then use that domain as a\n> data type for various columns in various tables, you can change your\n> schema all at once by changing the definition of the domain. Also, a\n> domain can carry extra meaning. Look at this schema (using a somewhat\n> arcane syntax) for keeping track of suppliers, parts and shipments of\n> quantities of parts from suppliers:\n> \n> \tDOMAIN\t\tS#\tCHARACTER (5)\tPRIMARY\n> \tDOMAIN\t\tSNAME\tCHARACTER (40)\n> \tDOMAIN\t\tP#\tCHARACTER (5)\tPRIMARY\n> \tDOMAIN\t\tPNAME\tCHARACTER (20)\n> \n> \tRELATION\tS\t(S#, SNAME)\n> \t\t\t\tPRIMARY KEY (S#)\n> \tRELATION\tP\t(P#, PNAME)\n> \t\t\t\tPRIMARY KEY (P#)\n> \tRELATION\tSP\t(S#, P#, QTY NUMERIC (4))\n> \t\t\t\tPRIMARY KEY (S#,P#)\n> \n> This is simplified from an example in \"An Introduction to Database\n> Systems\", by C.J. Date, taken from the 1981 third edition. Note how\n> the named domains become the default types for columns of the same\n> name as the domains, while the QTY column in the SP relation has an\n> explicit data type. Note also the constraints: the \"PRIMARY KEY\"\n> statements in the RELATION definitions make uniqueness constraints,\n> and the word \"PRIMARY\" in the DOMAIN definitions for S# and P# specify\n> that these domains are foreign keys, thus demanding referential\n> integrity from the SP table to the S and P tables. Neat, innit? :-)\n> \n> Does modern SQL have this stuff? I'm not up-to-date, I'm afraid...\n> \n> -tih\n> -- \n> Popularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n> \n> \n\n_________________________________________________________\nDO YOU YAHOO!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Sun, 1 Mar 1998 20:26:35 -0800 (PST)",
"msg_from": "al dev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [QUESTIONS] Re: [HACKERS] Is \"CREATE DOMAIN\" in 6.3 ??"
},
{
"msg_contents": "On Sun, 1 Mar 1998, al dev wrote:\n\n> Yes!! it is very cool feature in SQL. I used them\n> very often and it saves hell of time!! The reason\n> I can do 'Alter domain' to change char(10) to char(30)\n> and it is propogated in all the tables automatically\n> whereever domain is used!! Also need to have 'ALTER DOMAIN' very\n> powerful feature.\n\n\tWell, let us get one in at at time :) Bruce has added it to the\nTODO list for v6.4...not sure how quickly or easily it can be added\nthough...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 2 Mar 1998 00:41:08 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [QUESTIONS] Re: [HACKERS] Is \"CREATE DOMAIN\" in 6.3 ??"
}
] |
[
{
"msg_contents": "Someone reported a problem with getting the --with-tcl configure\noption to do anything. The problem crops up with Marc's (I think it\nwas his) patch to enable the configure --help output for this option.\nThe fix is below (remember to run autoconf).\n\nCheers,\nBrook\n\n===========================================================================\n--- configure.in.orig\tSun Mar 1 01:00:33 1998\n+++ configure.in\tSun Mar 1 21:36:02 1998\n@@ -239,8 +239,8 @@\n AC_ARG_WITH(\n tcl,\n [ --with-tcl use tcl ],\n- USE_TCL=true AC_MSG_RESULT(enabled),\n- USE_TCL=false AC_MSG_RESULT(disabled)\n+ USE_TCL=true; AC_MSG_RESULT(enabled),\n+ USE_TCL=false; AC_MSG_RESULT(disabled)\n )\n export USE_TCL\n USE_X=$USE_TCL\n",
"msg_date": "Sun, 1 Mar 1998 22:00:00 -0700 (MST)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "configure.in patch for --with-tcl"
},
{
"msg_contents": "On Sun, Mar 01, 1998 at 10:00:00PM -0700, Brook Milligan wrote:\n\n> - USE_TCL=true AC_MSG_RESULT(enabled),\n> - USE_TCL=false AC_MSG_RESULT(disabled)\n> + USE_TCL=true; AC_MSG_RESULT(enabled),\n> + USE_TCL=false; AC_MSG_RESULT(disabled)\n\n...and ditto for Perl, a couple of lines below these, of course.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "Mon, 2 Mar 1998 07:56:03 +0100",
"msg_from": "Tom I Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] configure.in patch for --with-tcl"
}
] |
[
{
"msg_contents": "This is what configure says...\n\nVadim\n",
"msg_date": "Mon, 02 Mar 1998 12:14:55 +0700",
"msg_from": "\"Vadim B. Mikheev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostreSQL v6.2 Installation Program"
},
{
"msg_contents": "On Mon, 2 Mar 1998, Vadim B. Mikheev wrote:\n\n> This is what configure says...\n\n\tFixed...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 2 Mar 1998 01:24:45 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostreSQL v6.2 Installation Program"
}
] |
[
{
"msg_contents": "I encountered a problem (bug? feature?) where \"select currval('sequence')\" \nwill generate an error if \"select nextval('sequence')\" is not executed first. \nThe attached patch will change this behaviour by reading the sequence tuple \nand returning the last_value attribute if nextval has not been called on the \nsequence yet.\n\nThe patched code appears to work as intended and did not have any effect on \nthe output of the regression test.\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Mon, 02 Mar 1998 01:16:24 -0500",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Changes to sequence.c"
},
{
"msg_contents": "Billy G. Allie wrote:\n> \n> I encountered a problem (bug? feature?) where \"select currval('sequence')\"\n> will generate an error if \"select nextval('sequence')\" is not executed first.\n> The attached patch will change this behaviour by reading the sequence tuple\n> and returning the last_value attribute if nextval has not been called on the\n> sequence yet.\n\nThis is feature :)\n1. This is what Oracle does.\n2. currval () is described as returning value returned by\n last nextval() in _session_.\n\nVadim\n",
"msg_date": "Mon, 02 Mar 1998 14:55:47 +0700",
"msg_from": "\"Vadim B. Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Changes to sequence.c"
},
{
"msg_contents": "Vadim B. Mikheev wrote:\n>Billy G. Allie wrote:\n>> \n>> I encountered a problem (bug? feature?) where \"select currval('sequence')\"\n>> will generate an error if \"select nextval('sequence')\" is not executed \nfirst.\n> \n>This is feature :)\n>1. This is what Oracle does.\n>2. currval () is described as returning value returned by\n> last nextval() in _session_.\n> \n>Vadim\n> \nDoes this mean we should not modify this behavior because \"this is what Oracle \ndoes\"? I can envision where using currval() before nextval() can be useful.\n\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n",
"msg_date": "Thu, 05 Mar 1998 22:44:31 -0500",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Changes to sequence.c "
},
{
"msg_contents": "Billy G. Allie wrote:\n> \n> Vadim B. Mikheev wrote:\n> >Billy G. Allie wrote:\n> >>\n> >> I encountered a problem (bug? feature?) where \"select currval('sequence')\"\n> >> will generate an error if \"select nextval('sequence')\" is not executed\n> first.\n> >\n> >This is feature :)\n> >1. This is what Oracle does.\n> >2. currval () is described as returning value returned by\n> > last nextval() in _session_.\n> >\n> >Vadim\n> >\n> Does this mean we should not modify this behavior because \"this is what Oracle\n> does\"? I can envision where using currval() before nextval() can be useful.\n\nActually, what you are proposing was initial behaviour of currval().\nThis was changed to be more consistent with 1. & 2. (note - not only 1.,\nbut 2. also).\n\nBut personally I haven't objection against changing this again.\nMen, vote pls!\n\nVadim\n",
"msg_date": "Fri, 06 Mar 1998 19:39:58 +0700",
"msg_from": "\"Vadim B. Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Changes to sequence.c"
}
] |
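To make the behaviour under discussion concrete: with the released semantics, currval() is defined only after nextval() has been called on that sequence in the current session, so the first SELECT below fails while the last one succeeds. The sequence name is illustrative.

```sql
-- Sequence name is illustrative.
CREATE SEQUENCE seq;

SELECT currval('seq');   -- errors under the released behaviour: no nextval() yet in this session
SELECT nextval('seq');   -- returns 1
SELECT currval('seq');   -- now returns 1, the value from the nextval() above
```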
[
{
"msg_contents": "I had found undefined symbols in interfaces/libpgtcl/libpgtcl.so when\ntrying to use pgaccess on a NetBSD box. It turns out the problem is\nwith not including symbols from interfaces/libpq/libpq.so, but this\napparently only occurs with BSD ports (my guess; Marc can you verify\nthis under FreeBSD?). In the libpgtcl Makefile are the following\nrelevant fragments:\n\nifeq ($(PORTNAME), bsd)\n ifdef BSD_SHLIB\n install-shlib-dep\t:= install-shlib\n shlib\t\t:= libpgtcl.so.1.0\n LDFLAGS_SL\t\t= -x -Bshareable -Bforcearchive\n CFLAGS\t\t+= $(CFLAGS_SL)\n endif\nendif\n\n$(shlib): $(OBJS)\n\t$(LD) $(LDFLAGS_SL) -o $@ $(OBJS)\n\tln -sf $@ libpgtcl.so\n\nIn the same Makefile all other ports include reference to libpq in the\nLDFLAGS_SL variable. If this same reference is included for the BSD\nports, the loader complains because the symbols precede the libpgtcl\nobjects. If, however, the reference to libpq follows the libpgtcl\nobjects (see patch below), all is well. \n\nIt seems logical that the libpq reference should follow the libpgtcl\nobjects for all ports, but the following patch retains the old\nbehavior for all ports except BSD ones (tested only for NetBSD v1.3\nand pgaccess v0.81). If it really works for other ports, I suggest\nremoving the libpq references from any LDFLAGS_SL variable and adding\nthem to the ld command (i.e., replace BSD_LIBPQ with LIBPQ).\n\nPlease try this with other BSD ports to see if this solutions works\nfor ones besides NetBSD.\n\nCheers,\nBrook\n\n===========================================================================\n--- src/interfaces/libpgtcl/Makefile.in.orig\tFri Feb 13 01:01:00 1998\n+++ src/interfaces/libpgtcl/Makefile.in\tSun Mar 1 23:06:24 1998\n@@ -32,12 +32,14 @@\n install-shlib-dep :=\n shlib := \n \n+LIBPQ\t\t\t= -L $(SRCDIR)/interfaces/libpq -lpq\n+\n ifeq ($(PORTNAME), linux)\n ifdef LINUX_ELF\n install-shlib-dep\t:= install-shlib\n shlib\t\t:= libpgtcl.so.1\n CFLAGS\t\t+= $(CFLAGS_SL)\n- LDFLAGS_SL\t\t= -shared -L $(SRCDIR)/interfaces/libpq -lpq\n+ LDFLAGS_SL\t\t= -shared $(LIBPQ)\n endif\n endif\n \n@@ -47,20 +49,21 @@\n shlib\t\t:= libpgtcl.so.1.0\n LDFLAGS_SL\t\t= -x -Bshareable -Bforcearchive\n CFLAGS\t\t+= $(CFLAGS_SL)\n+ BSD_LIBPQ\t\t= $(LIBPQ)\n endif\n endif\n \n ifeq ($(PORTNAME), i386_solaris)\n install-shlib-dep\t:= install-shlib\n shlib\t\t\t:= libpgtcl.so.1\n- LDFLAGS_SL\t\t= -G -z text -L $(SRCDIR)/interfaces/libpq -lpq\n+ LDFLAGS_SL\t\t= -G -z text $(LIBPQ)\n CFLAGS\t\t+= $(CFLAGS_SL)\n endif\n \n ifeq ($(PORTNAME), univel)\n install-shlib-dep\t:= install-shlib\n shlib\t\t\t:= libpgtcl.so.1\n- LDFLAGS_SL\t\t= -G -z text -L $(SRCDIR)/interfaces/libpq -lpq\n+ LDFLAGS_SL\t\t= -G -z text $(LIBPQ)\n CFLAGS\t\t+= $(CFLAGS_SL)\n endif\n \n@@ -78,7 +81,7 @@\n \t$(RANLIB) libpgtcl.a\n \n $(shlib): $(OBJS)\n-\t$(LD) $(LDFLAGS_SL) -o $@ $(OBJS) \n+\t$(LD) $(LDFLAGS_SL) -o $@ $(OBJS) $(BSD_LIBPQ)\n \tln -sf $@ libpgtcl.so\n \n .PHONY: beforeinstall-headers install-headers\n\n\n",
"msg_date": "Sun, 1 Mar 1998 23:17:44 -0700 (MST)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "SOLUTION: undefined symbols in libpgtcl.so"
}
] |
[
{
"msg_contents": "\nJust a couple of last minute reminders...\n\n1. Make sure the latest machine-specific FAQs are included in the release\n\n2. Make sure the patch is applied to the configure script to get it to\nwork for non-gcc compilers correctly (Sorry I haven't had time to try\nto fix autoconf itself.).\n\n\nAndrew\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n",
"msg_date": "Mon, 2 Mar 1998 10:29:16 GMT",
"msg_from": "Andrew Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Last minute reminders (hope they're not too late...)"
},
{
"msg_contents": "> \n> \n> Just a couple of last minute reminders...\n> \n> 1. Make sure the latest machine-specific FAQs are included in the release\n\nMost recent versions already installed.\n\n> \n> 2. Make sure the patch is applied to the configure script to get it to\n> work for non-gcc compilers correctly (Sorry I haven't had time to try\n> to fix autoconf itself.).\n> \n> \n> Andrew\n> ----------------------------------------------------------------------------\n> Dr. Andrew C.R. Martin University College London\n> EMAIL: (Work) [email protected] (Home) [email protected]\n> URL: http://www.biochem.ucl.ac.uk/~martin\n> Tel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 2 Mar 1998 11:49:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Last minute reminders (hope they're not too late...)"
}
] |
[
{
"msg_contents": "Hi,\n\njust played a bit with the febr. 28 snapshot, and it looks like the perl \nmodule does not install in the correct place. I'm not sure if this is a \nproblem with postgresql or my local perl-setup (debian package):\n\nperl Makefile.PL PREFIX=/usr/local/pgsql\n\nwill install things in /usr/local/pgsql/local/lib/.... instead of in \n/usr/local/pgsql/lib/...\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Mon, 2 Mar 1998 11:56:11 +0100 (MET)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Perl module installs in wrong place"
},
{
"msg_contents": "Maarten Boekhold wrote:\n> \n> Hi,\n> \n> just played a bit with the febr. 28 snapshot, and it looks like the perl\n> module does not install in the correct place. I'm not sure if this is a\n> problem with postgresql or my local perl-setup (debian package):\n> \n> perl Makefile.PL PREFIX=/usr/local/pgsql\n> \n> will install things in /usr/local/pgsql/local/lib/.... instead of in\n> /usr/local/pgsql/lib/...\n> \n> Maarten\n> \n> _____________________________________________________________________________\n> | TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n> | Department of Electrical Engineering |\n> | Computer Architecture and Digital Technique section |\n> | [email protected] |\n> -----------------------------------------------------------------------------\n\n\nit works for me with version 5.004_04.\nLooks like this is more related to your perl-setup.\n\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany gsm: +49 171 2645325\n",
"msg_date": "Mon, 02 Mar 1998 21:17:18 +0100",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl module installs in wrong place"
},
{
"msg_contents": "> > just played a bit with the febr. 28 snapshot, and it looks like the perl\n> > module does not install in the correct place. I'm not sure if this is a\n> > problem with postgresql or my local perl-setup (debian package):\n> > \n> > perl Makefile.PL PREFIX=/usr/local/pgsql\n> > \n> > will install things in /usr/local/pgsql/local/lib/.... instead of in\n> > /usr/local/pgsql/lib/...\n \n> it works for me with version 5.004_04.\n> Looks like this is more related to your perl-setup.\n\nCorrect, found out that \nI also needed to add INSTALLDIRS=perl to the commandline above to let \nperl skip the 'local' part.....\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Mon, 2 Mar 1998 21:38:52 +0100 (MET)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Perl module installs in wrong place"
}
] |
[
{
"msg_contents": "Seeing as people like the new brightened logo, I made it even better by\nbrightening the word PostgreSQL, so it looks like it is jumping out\nmore.\n\nI am loosing definition of the letter edges, but the old one was so\ndark, you had to have your head in the monitor to see them.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 2 Mar 1998 13:42:55 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "New logo"
}
] |
[
{
"msg_contents": "\nI just finally got a copy of that article in Linux Journal (wow, they even\nspelt my name right!)...I think the first paragraph, in itself, is most\ncomical:\n\n\t\"...it is now developed similarly to Linux\"\n\nNow, those that have been around here for *any* stretch of time know that\nwe aren't even *remotely* close to the way that Linux is being developed,\nwith those having suggested us doing that having it, I would imagine, dug\nin quite deeply :)\n\nStill have to read the whole article, but so far (other then that one\n\"slight\" in the first paragraph *bait material here*), it looks pretty\ngood :)\n\n\n",
"msg_date": "Mon, 2 Mar 1998 15:22:12 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "> I just finally got a copy of that article in Linux Journal (wow, they even\n> spelt my name right!)...I think the first paragraph, in itself, is most\n> comical:\n>\n> \"...it is now developed similarly to Linux\"\n>\n> Now, those that have been around here for *any* stretch of time know that\n> we aren't even *remotely* close to the way that Linux is being developed,\n> with those having suggested us doing that having it, I would imagine, dug\n> in quite deeply :)\n>\n> Still have to read the whole article, but so far (other then that one\n> \"slight\" in the first paragraph *bait material here*), it looks pretty\n> good :)\n\nIt's OK Marc, us linux'ists weren't offended _too_ much by that quote :))\n\nfwiw, I think he was drawing an analogy with the whole web/net thing, ya\nknow, as opposed to the floppy disk mailings you BSDers use *ducks head*\n\n - Tom\n\n",
"msg_date": "Tue, 03 Mar 1998 03:06:57 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "On Tue, 3 Mar 1998, Thomas G. Lockhart wrote:\n\n> \n> It's OK Marc, us linux'ists weren't offended _too_ much by that quote :))\n\n\tI'm such a trouble maker, but I find most Linux'ers such easy easy\nprey *grin* I have this University full of Linux'ers that you can spark\nup just with a comment like \"Linux != Unix\"...which, it isn't, its a\nUnix-like clone...but they can't seem to figure the distinction *rofl*\n\n> fwiw, I think he was drawing an analogy with the whole web/net thing, ya\n> know, as opposed to the floppy disk mailings you BSDers use *ducks head*\n\n\t*hrmmm*?? floppy disk maillings? Now, which camp came up with\nCVSup again? *raised eyebrows* We have up to the minute access to any\nkernel changes...and you guys? How often? *grin*\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 2 Mar 1998 23:35:37 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "Thomas G. Lockhart wrote:\n> \n> > I just finally got a copy of that article in Linux Journal (wow, they even\n> > spelt my name right!)...I think the first paragraph, in itself, is most\n> > comical:\n> >\n> > \"...it is now developed similarly to Linux\"\n> >\n> > Now, those that have been around here for *any* stretch of time know that\n> > we aren't even *remotely* close to the way that Linux is being developed,\n> > with those having suggested us doing that having it, I would imagine, dug\n> > in quite deeply :)\n> >\n> > Still have to read the whole article, but so far (other then that one\n> > \"slight\" in the first paragraph *bait material here*), it looks pretty\n> > good :)\n> \n> It's OK Marc, us linux'ists weren't offended _too_ much by that quote :))\n> \n> fwiw, I think he was drawing an analogy with the whole web/net thing, ya\n> know, as opposed to the floppy disk mailings you BSDers use *ducks head*\n\n>From what I understand, postgres development is more like BSD\ndevelopment than it it like linux development. With Linux kernels,\nnew versions may come out two in one day. With BSD, there are periods\nof internal development followed by a big release. Postgres has the\ndaily snapshot, but these are intended for developers, and each new\none is not considered a new version of the program.\n\nIn that Postgres is developed over the net by a volunteer effort, I\nwould say they are similar.\n\n\nOcie Mitchell\n\n",
"msg_date": "Mon, 2 Mar 1998 20:06:32 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
}
] |
[
{
"msg_contents": "In a reply to Bruce, I suggested that the create_index man page\nmight be updated to mention the possible use of indices with\nthe LIKE, ~, and ~* operators. I thought that the query used\nto generated the table in the man page might be automatically\nupdated and thus done for 6.3, but probably not since the\nmatched string expression needs to be anchored. Mention\nof the possible use of indices in this circumstance might still be\nincluded in create_index man page and possibly in the explain man page.\n\nProbably the best place to put it would be in a tips section of the\nuser manual. The tips section might include stuff like:\n\ta) running vacuum to optimize query plan\n\tb) when to use char/varchar/text\n\tc) using joins vs subselects\n\nSuggestions are cheap, implementation takes work. I'm very appreciative\nof all that you have already done.\n\nMarc Zuckman\n\n\n",
"msg_date": "Mon, 2 Mar 1998 15:36:18 -0500 (EST)",
"msg_from": "Marc Howard Zuckman <[email protected]>",
"msg_from_op": true,
"msg_subject": "LIKE, ~ indexing documentation"
},
{
"msg_contents": "> \n> In a reply to Bruce, I suggested that the create_index man page\n> might be updated to mention the possible use of indices with\n> the LIKE, ~, and ~* operators. I thought that the query used\n> to generated the table in the man page might be automatically\n> updated and thus done for 6.3, but probably not since the\n> matched string expression needs to be anchored. Mention\n> of the possible use of indices in this circumstance might still be\n> included in create_index man page and possibly in the explain man page.\n> \n> Probably the best place to put it would be in a tips section of the\n> user manual. The tips section might include stuff like:\n> \ta) running vacuum to optimize query plan\n> \tb) when to use char/varchar/text\n> \tc) using joins vs subselects\n> \n> Suggestions are cheap, implementation takes work. I'm very appreciative\n> of all that you have already done.\n\nActually, the missing use of indexes for LIKE was a deficiency, not a\nreal special feature. Everyone expected it to work and it didn't until\nnow. Also, all commercial databases do this, but no special mention is\nmade if it in their manuals, that I have seen. Perhaps we can added it\nto the FAQ if people start asking about it more often.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 2 Mar 1998 16:40:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LIKE, ~ indexing documentation"
}
] |
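To make the anchored-pattern point above concrete, here is a minimal SQL sketch. It is an illustration only: the table, column and index names are invented, the EXPLAIN output will vary with optimizer statistics (run VACUUM first), and the index is only considered for patterns anchored at the start of the string, as described in the thread.

CREATE TABLE people (lastname text);
CREATE INDEX people_lastname_idx ON people (lastname);

-- Anchored LIKE pattern: the 6.3 behaviour discussed above lets the planner use the index.
EXPLAIN SELECT * FROM people WHERE lastname LIKE 'Zuck%';

-- Anchored regular expression: same idea.
EXPLAIN SELECT * FROM people WHERE lastname ~ '^Zuck';

-- Unanchored pattern: no leading constant, so a sequential scan is expected.
EXPLAIN SELECT * FROM people WHERE lastname LIKE '%man';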
[
{
"msg_contents": "Hi,\n\na 'make test' fails on my BSDI 3.0 system. It does the first test, and \nthen aborts complaining postmaster is not running, or not on port 5432 or \nnot on a inet domainsocket at all. ofcourse postmaster *is* running, even \nusing internet sockets (psql -p 5432 -l works).\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Mon, 2 Mar 1998 21:43:13 +0100 (MET)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "PERL: make test fails on BSDI 3.0"
}
] |
[
{
"msg_contents": "\nWhat version of tar understands how to ungzip a .gz file?\n\nIs this what the 'z' flag is for?\n\nUntar'd and installed them manually...look good, Thomas. Nice work.\n\ndarrenk\n",
"msg_date": "Mon, 2 Mar 1998 15:57:25 -0500",
"msg_from": "[email protected] (Darren King)",
"msg_from_op": true,
"msg_subject": "doc troubles."
},
{
"msg_contents": "> \n> \n> What version of tar understands how to ungzip a .gz file?\n> \n> Is this what the 'z' flag is for?\n> \n> Untar'd and installed them manually...look good, Thomas. Nice work.\n\ngunzip.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 2 Mar 1998 17:11:01 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] doc troubles."
},
{
"msg_contents": "On Mon, 2 Mar 1998, Darren King wrote:\n\n> \n> What version of tar understands how to ungzip a .gz file?\n> \n> Is this what the 'z' flag is for?\n\n\tgnu tar supports the 'z' flag to uncompress and untar at the same\ntime...\n\n> Untar'd and installed them manually...look good, Thomas. Nice work.\n\n\tYa, I've built the Solaris packages with PGDOC set to\n$POSTGRESDIR/doc, so that the docs are part of the one package...:)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 2 Mar 1998 18:19:35 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] doc troubles."
}
] |
[
{
"msg_contents": "Congratulations on a job well done, guys!\n\nHere's a little thing for the first round of patches, though: I have\nfigured out why --with-tcl didn't work, even after adding the missing\nsemicolon that was posted about. Here's the patch to configure.in in\nversion 6.3 that will actually fix it all right (including letting you\nuse the previous versions of tcl and tk, since they do work). Note\nthe test for TCL_INCDIR being the empty string, to avoid adding an\nempty \"-I\" to the flags used when checking for tk.h. Note also that\nI've removed the simple tests for tcl.h and tk.h without specifying\ninclude directories for them: the actual include directories need to\nend up in TCL_INCDIR and TK_INCDIR in Makefile.global, because of the\nway they are used in other Makefiles.\n\n*** configure.in.orig\tMon Mar 2 06:33:14 1998\n--- configure.in\tMon Mar 2 21:42:59 1998\n***************\n*** 239,246 ****\n AC_ARG_WITH(\n tcl,\n [ --with-tcl use tcl ],\n! USE_TCL=true AC_MSG_RESULT(enabled),\n! USE_TCL=false AC_MSG_RESULT(disabled)\n )\n export USE_TCL\n USE_X=$USE_TCL\n--- 239,246 ----\n AC_ARG_WITH(\n tcl,\n [ --with-tcl use tcl ],\n! USE_TCL=true; AC_MSG_RESULT(enabled),\n! USE_TCL=false; AC_MSG_RESULT(disabled)\n )\n export USE_TCL\n USE_X=$USE_TCL\n***************\n*** 250,257 ****\n AC_ARG_WITH(\n perl,\n [ --with-perl use perl ],\n! USE_PERL=true AC_MSG_RESULT(enabled),\n! USE_PERL=false AC_MSG_RESULT(disabled)\n )\n export USE_PERL\n \n--- 250,257 ----\n AC_ARG_WITH(\n perl,\n [ --with-perl use perl ],\n! USE_PERL=true; AC_MSG_RESULT(enabled),\n! USE_PERL=false; AC_MSG_RESULT(disabled)\n )\n export USE_PERL\n \n***************\n*** 563,570 ****\n if test \"$USE_TCL\" = \"true\"\n then\n TCL_INCDIR=no\n! AC_CHECK_HEADER(tcl.h, TCL_INCDIR=)\n! for f in /usr/include /usr/include/tcl8.0 /usr/local/include /usr/local/include/tcl8.0; do\n if test \"$TCL_INCDIR\" = \"no\"; then\n AC_CHECK_HEADER($f/tcl.h, TCL_INCDIR=$f)\n fi\n--- 563,569 ----\n if test \"$USE_TCL\" = \"true\"\n then\n TCL_INCDIR=no\n! for f in /usr/include /usr/include/tcl8.0 /usr/include/tcl7.6 /usr/local/include /usr/local/include/tcl8.0 /usr/local/include/tcl7.6; do\n if test \"$TCL_INCDIR\" = \"no\"; then\n AC_CHECK_HEADER($f/tcl.h, TCL_INCDIR=$f)\n fi\n***************\n*** 580,586 ****\n if test \"$USE_TCL\" = \"true\"\n then\n TCL_LIB=\n! for f in tcl8.0 tcl80; do\n if test -z \"$TCL_LIB\"; then\n AC_CHECK_LIB($f, main, TCL_LIB=$f)\n fi\n--- 579,585 ----\n if test \"$USE_TCL\" = \"true\"\n then\n TCL_LIB=\n! for f in tcl8.0 tcl80 tcl7.6 tcl76; do\n if test -z \"$TCL_LIB\"; then\n AC_CHECK_LIB($f, main, TCL_LIB=$f)\n fi\n***************\n*** 606,616 ****\n ice_save_CPPFLAGS=\"$CPPFLAGS\"\n ice_save_LDFLAGS=\"$LDFLAGS\"\n \n CPPFLAGS=\"$CPPFLAGS $X_CFLAGS -I$TCL_INCDIR\"\n \n TK_INCDIR=no\n! AC_CHECK_HEADER(tk.h, TK_INCDIR=)\n! for f in /usr/include /usr/include/tk8.0 /usr/local/include /usr/local/include/tk8.0; do\n if test \"$TK_INCDIR\" = \"no\"; then\n AC_CHECK_HEADER($f/tk.h, TK_INCDIR=$f)\n fi\n--- 605,619 ----\n ice_save_CPPFLAGS=\"$CPPFLAGS\"\n ice_save_LDFLAGS=\"$LDFLAGS\"\n \n+ if test \"$TCL_INCDIR\" = \"\"\n+ then\n+ CPPFLAGS=\"$CPPFLAGS $X_CFLAGS\"\n+ else\n CPPFLAGS=\"$CPPFLAGS $X_CFLAGS -I$TCL_INCDIR\"\n+ fi\n \n TK_INCDIR=no\n! 
for f in /usr/include /usr/include/tk8.0 /usr/include/tk4.2 /usr/local/include /usr/local/include/tk8.0 /usr/local/include/tk4.2; do\n if test \"$TK_INCDIR\" = \"no\"; then\n AC_CHECK_HEADER($f/tk.h, TK_INCDIR=$f)\n fi\n***************\n*** 631,637 ****\n if test \"$USE_TCL\" = \"true\"\n then\n TK_LIB=\n! for f in tk8.0 tk80; do\n if test -z \"$TK_LIB\"; then\n AC_CHECK_LIB($f, main, TK_LIB=$f)\n fi\n--- 634,640 ----\n if test \"$USE_TCL\" = \"true\"\n then\n TK_LIB=\n! for f in tk8.0 tk80 tk4.2 tk42; do\n if test -z \"$TK_LIB\"; then\n AC_CHECK_LIB($f, main, TK_LIB=$f)\n fi\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "Mon, 2 Mar 1998 22:04:09 +0100",
"msg_from": "Tom I Helbekkmo <[email protected]>",
"msg_from_op": true,
"msg_subject": "That --with-tcl thing..."
}
] |
[
{
"msg_contents": "Hi all:\nMore details on what domains are. Domains are global\ncolumn definitions, upon which column definitions\ncan be based. A domain specifies a data type, and a\nset of column attributes and constraints. Subsequent\ntable definitions can use the domain to define columns.\n\nHere is the detail for 'ALTER DOMAIN' feature. I \npulled this off the chapter 42 at \nhttp://sunsite.unc.edu/LDP/HOWTO/Database-HOWTO.html\n\n<alter domain statement> ::=\n ALTER DOMAIN <domain name> <alter domain action>\n\n <alter domain action> ::=\n <set domain default clause>\n | <drop domain default clause>\n | <add domain constraint definition>\n | <drop domain constraint definition>\n\n <set domain default clause> ::= SET <default clause>\n\n <drop domain default clause> ::= DROP DEFAULT\n\n <add domain constraint definition> ::=\n ADD <domain constraint>\n\n <drop domain constraint definition> ::=\n DROP CONSTRAINT <constraint name>\n\n <drop domain statement> ::=\n DROP DOMAIN <domain name> <drop behavior>\n\nAnd the create domain syntax is as follows:---\n<domain definition> ::=\n CREATE DOMAIN <domain name>\n [ AS ] <data type>\n [ <default clause> ]\n [ <domain constraint>... ]\n [ <collate clause> ]\n\n <domain constraint> ::=\n [ <constraint name definition> ]\n <check constraint definition> [ <constraint attributes> ]\n\n(To search a word in chapter 42, use CTRL+F in browser)\nI hope to see this in postgreSQL 6.4\n\nal\n_________________________________________________________\nDO YOU YAHOO!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 2 Mar 1998 14:07:12 -0800 (PST)",
"msg_from": "al dev <[email protected]>",
"msg_from_op": true,
"msg_subject": "domain feature - details"
},
{
"msg_contents": "al dev wrote:\n> \n> Hi all:\n> More details on what domains are. Domains are global\n> column definitions, upon which column definitions\n> can be based. A domain specifies a data type, and a\n> set of column attributes and constraints. Subsequent\n> table definitions can use the domain to define columns.\n> \n> Here is the detail for 'ALTER DOMAIN' feature. I\n> pulled this off the chapter 42 at\n> http://sunsite.unc.edu/LDP/HOWTO/Database-HOWTO.html> \n> <alter domain statement> ::=\n> ALTER DOMAIN <domain name> <alter domain action>\n> \n> <alter domain action> ::=\n> <set domain default clause>\n> | <drop domain default clause>\n> | <add domain constraint definition>\n> | <drop domain constraint definition>\n\nWhat happens if I change a DOMAIN after I have created tables with\nit? Does CONSTRAINT's and DEFAULTS and TYPES change for those tables,\nor should it only affect tables created after the change?\n\nSuppose I do this:\n1. I create DOMAIN for \"Person\", and create lots of tables with\n Person columns.\n2. After some weeks, I want to CONSTRAIN Person to disallow NULL\n social security number, so I change the \"Person\" DOMAIN.\n\nDo I have to re-create all tables, or will the change take effect\nimmediately? Will some changes take effect, like default and constraint,\nbut not the data type? Or will changes in data type cause the tables\nto be modified? Will the database lock the tables and convert them\nwhen I type the ALTER command?\n\n/* m */\n",
"msg_date": "Fri, 06 Mar 1998 11:40:38 +0100",
"msg_from": "Mattias Kregert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] domain feature - details"
},
{
"msg_contents": "On Fri, 6 Mar 1998, Mattias Kregert wrote:\n\n> al dev wrote:\n> > \n> > Hi all:\n> > More details on what domains are. Domains are global\n> > column definitions, upon which column definitions\n> > can be based. A domain specifies a data type, and a\n> > set of column attributes and constraints. Subsequent\n> > table definitions can use the domain to define columns.\n> > \n> > Here is the detail for 'ALTER DOMAIN' feature. I\n> > pulled this off the chapter 42 at\n> > http://sunsite.unc.edu/LDP/HOWTO/Database-HOWTO.html> \n> > <alter domain statement> ::=\n> > ALTER DOMAIN <domain name> <alter domain action>\n> > \n> > <alter domain action> ::=\n> > <set domain default clause>\n> > | <drop domain default clause>\n> > | <add domain constraint definition>\n> > | <drop domain constraint definition>\n> \n> What happens if I change a DOMAIN after I have created tables with\n> it? Does CONSTRAINT's and DEFAULTS and TYPES change for those tables,\n> or should it only affect tables created after the change?\n> \n> Suppose I do this:\n> 1. I create DOMAIN for \"Person\", and create lots of tables with\n> Person columns.\n> 2. After some weeks, I want to CONSTRAIN Person to disallow NULL\n> social security number, so I change the \"Person\" DOMAIN.\n> \n> Do I have to re-create all tables, or will the change take effect\n> immediately? Will some changes take effect, like default and constraint,\n> but not the data type? Or will changes in data type cause the tables\n> to be modified? Will the database lock the tables and convert them\n> when I type the ALTER command?\n\n\tIf I'm understanding what has been said, then this will affect any\ntable that uses that domain...same as a 'view' that does a subselect will\nchange its results based on how the data changes in the subselect..\n\n\tIn a sense, I sort of see a DOMAIN as being similar to a trigger,\nwhere, upon INSERT, you check the value being entered for a specific range\nof values...\n\n\n\n",
"msg_date": "Fri, 6 Mar 1998 08:33:25 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] domain feature - details"
}
] |
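For readers who have not met SQL92 domains, a short sketch of how the quoted syntax would read in practice may help. PostgreSQL 6.3 does not implement CREATE, ALTER or DROP DOMAIN, so this is only an illustration of the proposed feature; the domain, table and constraint names are invented.

-- Define a domain once, then reuse it as a column type.
CREATE DOMAIN ssn AS char(11)
    CONSTRAINT ssn_format CHECK (VALUE LIKE '___-__-____');

CREATE TABLE person (
    name    text,
    soc_sec ssn    -- column picks up the domain's type, default and constraints
);

-- Tightening the domain later; whether and how this propagates to existing
-- tables is exactly the question raised in the replies above.
ALTER DOMAIN ssn ADD CONSTRAINT ssn_present CHECK (VALUE IS NOT NULL);

-- Removing the domain, with the standard's drop behavior clause.
DROP DOMAIN ssn RESTRICT;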
[
{
"msg_contents": "> > \n> > What version of tar understands how to ungzip a .gz file?\n> > \n> > Is this what the 'z' flag is for?\n> \n> \tgnu tar supports the 'z' flag to uncompress and untar at the same\n> time...\n\nThis sucks. As a group that seems to not like GNU (or at least their\nlicense), we require enough of their tools to compile/install postgres.\n\nOff to see the wizard at the gnu ftp site...\n\n> > Untar'd and installed them manually...look good, Thomas. Nice work.\n> \n> \tYa, I've built the Solaris packages with PGDOC set to\n> $POSTGRESDIR/doc, so that the docs are part of the one package...:)\n\nIs that what $POSTDOCDIR is for in the Makefile.global? Would this be\na candidate for \"--doc-prefix=\" to be added to configure? I'd like to\nbe able to put the html docs under my web root instead of the postgres\nroot dir.\n\ndarrenk\n",
"msg_date": "Mon, 2 Mar 1998 17:37:30 -0500",
"msg_from": "[email protected] (Darren King)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] doc troubles."
},
{
"msg_contents": "> > > What version of tar understands how to ungzip a .gz file?\n> > >\n> > > Is this what the 'z' flag is for?\n> >\n> > gnu tar supports the 'z' flag to uncompress and untar at the same\n> > time...\n>\n> This sucks. As a group that seems to not like GNU (or at least their\n> license), we require enough of their tools to compile/install postgres.\n>\n> Off to see the wizard at the gnu ftp site...\n\nQuit whining and send in some patches :) I hacked those makefiles at the end\nof a 10 hour push to get the docs wrapped up. The best thing that could be\nsaid for them is that they seemed to work on my machine (and I guess on\npostgresql.org now that I think about it).\n\nCould we just replace the \"tar zxf\" with \"uncompress ... | tar xf\"? Does\nanyone else have a strong opinion on (or experience with) makefiles for the\npostgres distribution who want to help Darren get out from under the gnu\nusage??\n\n - Tom\n\n",
"msg_date": "Tue, 03 Mar 1998 03:28:51 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] doc troubles."
},
{
"msg_contents": "On Mon, 2 Mar 1998, Darren King wrote:\n\n> > > \n> > > What version of tar understands how to ungzip a .gz file?\n> > > \n> > > Is this what the 'z' flag is for?\n> > \n> > \tgnu tar supports the 'z' flag to uncompress and untar at the same\n> > time...\n> \n> This sucks. As a group that seems to not like GNU (or at least their\n> license), we require enough of their tools to compile/install postgres.\n\n\tActually, I have nothing against GNU...its the GPL that I don't\nlike :) Big big difference...\n\n> Is that what $POSTDOCDIR is for in the Makefile.global? Would this be\n> a candidate for \"--doc-prefix=\" to be added to configure? I'd like to\n> be able to put the html docs under my web root instead of the postgres\n> root dir.\n\n\tActually, I had to edit the Makefile in the doc directory directly\nto get it to install where I wanted...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 2 Mar 1998 23:32:58 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] doc troubles."
},
{
"msg_contents": "> > Is that what $POSTDOCDIR is for in the Makefile.global? Would this be\n> > a candidate for \"--doc-prefix=\" to be added to configure? I'd like to\n> > be able to put the html docs under my web root instead of the postgres\n> > root dir.\n>\n> Actually, I had to edit the Makefile in the doc directory directly\n> to get it to install where I wanted...\n\nIt looks for Makefile.global->Makefile.custom, in which you could put\n\nPGDOCS= /your/favorite/docs/location\n\nbut I'm sure it could stand some changes. Didn't know there was a POSTDOCDIR\nalready defined :(\n\n - Tom\n\n",
"msg_date": "Tue, 03 Mar 1998 03:47:47 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] doc troubles."
},
{
"msg_contents": "> Could we just replace the \"tar zxf\" with \"uncompress ... | tar xf\"? Does\n> anyone else have a strong opinion on (or experience with) makefiles for the\n> postgres distribution who want to help Darren get out from under the gnu\n> usage??\n\nI have gnzip, but no GNU tar, so tar zxf doesn't work. Maybe gunzip ...\n| tar xf.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 2 Mar 1998 23:10:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] doc troubles."
}
] |
[
{
"msg_contents": "I think I really like today's banner at the top of the web page.\n\nThis afternoon, around 1PM EST, I played with the brightness, and made\nthe words PostgreSQL much brigher without brightening the surrounding\nbackground. This makes the letter stand out much more, and appear to\njump out of the background.\n\nAre those letters too bright for anyone?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 2 Mar 1998 18:09:53 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL logo"
}
] |
[
{
"msg_contents": "> \n> I think I really like today's banner at the top of the web page.\n> \n> This afternoon, around 1PM EST, I played with the brightness, and made\n> the words PostgreSQL much brigher without brightening the surrounding\n> background. This makes the letter stand out much more, and appear to\n> jump out of the background.\n> \n> Are those letters too bright for anyone?\n> \n\nFine with me. I run at 1280x1024, so the previous one was this blob\nthat was really too dark to read. Much better, IMHO.\n\ndarrenk\n",
"msg_date": "Mon, 2 Mar 1998 18:17:11 -0500",
"msg_from": "[email protected] (Darren King)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL logo"
}
] |
[
{
"msg_contents": "how is a notice sent from the backend?\ndoes it send a Nxxxxx\\n or a VNxxxxx\\n ??\n\nWhenever I do a lo_close() I get a NOTICE: tablerelease: no lock found.\nand PQfn() tries to read a VNxxxxx\\n when the backend sends a Nxxxxx\\n\n",
"msg_date": "Tue, 03 Mar 1998 11:57:54 +1100",
"msg_from": "Hankin <[email protected]>",
"msg_from_op": true,
"msg_subject": "backend -> interface communication"
},
{
"msg_contents": "On Tue, 3 Mar 1998, Hankin wrote:\n\n> how is a notice sent from the backend?\n> does it send a Nxxxxx\\n or a VNxxxxx\\n ??\n> \n> Whenever I do a lo_close() I get a NOTICE: tablerelease: no lock found.\n> and PQfn() tries to read a VNxxxxx\\n when the backend sends a Nxxxxx\\n\n\nThis last bit sounds familiar. I thought it was fixed a long time ago\n(after I noticed it while implementing PQfn in Java)\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Tue, 3 Mar 1998 06:36:51 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend -> interface communication"
},
{
"msg_contents": "Peter T Mount wrote:\n> \n> On Tue, 3 Mar 1998, Hankin wrote:\n> \n> > how is a notice sent from the backend?\n> > does it send a Nxxxxx\\n or a VNxxxxx\\n ??\n> >\n> > Whenever I do a lo_close() I get a NOTICE: tablerelease: no lock found.\n> > and PQfn() tries to read a VNxxxxx\\n when the backend sends a Nxxxxx\\n\n> \n> This last bit sounds familiar. I thought it was fixed a long time ago\n> (after I noticed it while implementing PQfn in Java)\n\n\nhere's a program that duplicates it on my computer...\n\n\n#include <libpq-fe.h>\n#include <libpq/libpq-fs.h>\n\nmain()\n{\n PGconn *connection;\n PGresult *result;\n Oid oid;\n int handle;\n char buf[1024];\n\n memset(buf,-1,sizeof(buf));\n\n connection=PQsetdb(NULL,NULL,NULL,NULL,NULL);\n if(connection==NULL) { exit(-1); }\n PQtrace(connection,stderr);\n oid=lo_creat(connection,INV_WRITE);\nfprintf(stderr,\"lo_creat: %s\\n\",PQerrorMessage(connection));\n handle=lo_open(connection,oid,INV_WRITE);\nfprintf(stderr,\"lo_open: %s\\n\",PQerrorMessage(connection));\n lo_write(connection,handle,buf,sizeof(buf));\nfprintf(stderr,\"lo_write: %s\\n\",PQerrorMessage(connection));\n lo_write(connection,handle,buf,sizeof(buf));\nfprintf(stderr,\"lo_write: %s\\n\",PQerrorMessage(connection));\n lo_close(connection,handle);\nfprintf(stderr,\"lo_close: %s\\n\",PQerrorMessage(connection));\n result=PQexec(connection,\"select aaa from test\");\n if(result==NULL || PQresultStatus(result)!=PGRES_TUPLES_OK) {\nfprintf(stderr,\"fail: %s\\n\",PQerrorMessage(connection)); }\n PQfinish(connection);\n}\n",
"msg_date": "Tue, 03 Mar 1998 18:26:01 +1100",
"msg_from": "Hankin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] backend -> interface communication"
},
{
"msg_contents": "On Tue, 3 Mar 1998, Hankin wrote:\n\n> Peter T Mount wrote:\n> > \n> > On Tue, 3 Mar 1998, Hankin wrote:\n> > \n> > > how is a notice sent from the backend?\n> > > does it send a Nxxxxx\\n or a VNxxxxx\\n ??\n> > >\n> > > Whenever I do a lo_close() I get a NOTICE: tablerelease: no lock found.\n> > > and PQfn() tries to read a VNxxxxx\\n when the backend sends a Nxxxxx\\n\n> > \n> > This last bit sounds familiar. I thought it was fixed a long time ago\n> > (after I noticed it while implementing PQfn in Java)\n> \n> \n> here's a program that duplicates it on my computer...\n\naha, try enclosing everything in a transaction. When I tried the\nfollowing:\n\n> #include <libpq-fe.h>\n> #include <libpq/libpq-fs.h>\n> \n> main()\n> {\n> PGconn *connection;\n> PGresult *result;\n> Oid oid;\n> int handle;\n> char buf[1024];\n> \n> memset(buf,-1,sizeof(buf));\n> \n> connection=PQsetdb(NULL,NULL,NULL,NULL,NULL);\n> if(connection==NULL) { exit(-1); }\n> PQtrace(connection,stderr);\n\nresult=PQexec(connection,\"begin\");\nif(result==NULL) {exit(-1);}\n\n> oid=lo_creat(connection,INV_WRITE);\n> fprintf(stderr,\"lo_creat: %s\\n\",PQerrorMessage(connection));\n> handle=lo_open(connection,oid,INV_WRITE);\n> fprintf(stderr,\"lo_open: %s\\n\",PQerrorMessage(connection));\n> lo_write(connection,handle,buf,sizeof(buf));\n> fprintf(stderr,\"lo_write: %s\\n\",PQerrorMessage(connection));\n> lo_write(connection,handle,buf,sizeof(buf));\n> fprintf(stderr,\"lo_write: %s\\n\",PQerrorMessage(connection));\n> lo_close(connection,handle);\n> fprintf(stderr,\"lo_close: %s\\n\",PQerrorMessage(connection));\n> result=PQexec(connection,\"select aaa from test\");\n> if(result==NULL || PQresultStatus(result)!=PGRES_TUPLES_OK) {\n> fprintf(stderr,\"fail: %s\\n\",PQerrorMessage(connection)); }\n\nresult=PQexec(connection,\"end\");\nif(result==NULL) {exit(-1);}\n\n> PQfinish(connection);\n> }\n\nThis then works fine (except that my test database doesn't contain a test\ntable, so it fails on the select). Removing the select, and it works.\n\nAll large object operations need to be in a transaction.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Tue, 3 Mar 1998 19:48:10 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend -> interface communication"
}
] |
[
{
"msg_contents": "Just got the original posting.\n\nForwarded message:\n> From [email protected] Mon Mar 2 21:59:40 1998\n> Date: Mon, 2 Mar 1998 13:09:33 -0500 (EST)\n> From: The Hermit Hacker <[email protected]>\n> Reply-To: The Hermit Hacker <[email protected]>\n> To: [email protected]\n> Subject: PostgreSQL v6.3 is Released!!\n> Message-ID: <[email protected]>\n> Approved: aicmcp\n> MIME-Version: 1.0\n> Content-Type: TEXT/PLAIN; charset=US-ASCII\n> ReSent-Date: Mon, 2 Mar 1998 21:47:24 -0500 (EST)\n> ReSent-From: The Hermit Hacker <[email protected]>\n> ReSent-To: Bruce Momjian <[email protected]>\n> ReSent-Message-ID: <[email protected]>\n> \n> \n> After several months of intense and productive development, we are pleased\n> to announce that PostgreSQL v6.3 has been officially released.\n> \n> With this release comes substantial improvement, including:\n> \n> \t- Improved User Manuals\n> \t- Improved SQL92 compliance\n> \t- Improved Security\n> \t- CD distribution, including:\n> \t\t- the complete CVS repository\n> \t\t- PHP2/3\n> \t\t- PTS 1.72.00\n> \t\t- pre-built binaries\n> \t\t\n> \n> To see a complete list of changes, visit:\n> \n> \thttp://www.postgresql.org/docs/todo.shtml#section-1.2.5\n> \n> \n> Sources: \tftp://ftp.postgresql.org/pub/postgresql-6.3.tar.gz\n> Binaries:\tftp://ftp.postgresql.org/pub/bindist-v6.3\n> \n> \tBinaries will be uploaded as available for the various ports.\n> \n> \n> \n> \n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 2 Mar 1998 22:00:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL v6.3 is Released!! (fwd)"
}
] |
[
{
"msg_contents": "\nSee the kind of performance improvements *we* throw into our kernel?\n*rofl*\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n---------- Forwarded message ----------\nDate: Mon, 2 Mar 1998 17:53:17 -0500 (EST)\nFrom: \"John S. Dyson\" <[email protected]>\nTo: Scott Michel <[email protected]>\nCc: [email protected]\nSubject: Re: Really cool feature!\n\nScott Michel said:\n>\n> As of this morning's cvsup (at or about 9:00am, PST), the really\n> cool current feature is FreeBSD w/o keyboard. On bootstrap, everything\n> comes up as it should, except the keyboard no longer accepts anything.\n> \nExcellent, it improves system throughput by ignoring the nasty keyboard\ninterrupts :-). This is a really good idea!!! :-)\n\n-- \nJohn | Never try to teach a pig to sing,\[email protected] | it just makes you look stupid,\[email protected] | and it irritates the pig.\n\nTo Unsubscribe: send mail to [email protected]\nwith \"unsubscribe freebsd-current\" in the body of the message\n\n",
"msg_date": "Mon, 2 Mar 1998 23:42:28 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Really cool feature! (fwd)"
},
{
"msg_contents": "> > As of this morning's cvsup (at or about 9:00am, PST), the really\n> > cool current feature is FreeBSD w/o keyboard. On bootstrap, everything\n> > comes up as it should, except the keyboard no longer accepts anything.\n> >\n> Excellent, it improves system throughput by ignoring the nasty keyboard\n> interrupts :-). This is a really good idea!!! :-)\n\nAn idea obvious stolen from some Linux development version. *sniff*\n\n",
"msg_date": "Tue, 03 Mar 1998 03:52:56 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Really cool feature! (fwd)"
},
{
"msg_contents": "On Tue, 3 Mar 1998, Thomas G. Lockhart wrote:\n\n> > > As of this morning's cvsup (at or about 9:00am, PST), the really\n> > > cool current feature is FreeBSD w/o keyboard. On bootstrap, everything\n> > > comes up as it should, except the keyboard no longer accepts anything.\n> > >\n> > Excellent, it improves system throughput by ignoring the nasty keyboard\n> > interrupts :-). This is a really good idea!!! :-)\n> \n> An idea obvious stolen from some Linux development version. *sniff*\n\n\tWait, I was kidding...it was actually a bug...*rofl*\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 3 Mar 1998 00:07:44 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Really cool feature! (fwd)"
}
] |
[
{
"msg_contents": "Marc wrote...\n> I just finally got a copy of that article in Linux Journal (wow, they even\n> spelt my name right!)...I think the first paragraph, in itself, is most\n> comical:\n> \n> \t\"...it is now developed similarly to Linux\"\n> \n....collapses on the floor at the thought of Marc even *looking* at a journal\nwith the word Linux in the name. I'm sure he didn't hand over any money for\nit!\n\nAndrew (only teasing...)\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n",
"msg_date": "Tue, 3 Mar 1998 10:32:40 GMT",
"msg_from": "Andrew Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "On Tue, 3 Mar 1998, Andrew Martin wrote:\n\n> Marc wrote...\n> > I just finally got a copy of that article in Linux Journal (wow, they even\n> > spelt my name right!)...I think the first paragraph, in itself, is most\n> > comical:\n> > \n> > \t\"...it is now developed similarly to Linux\"\n> > \n> ....collapses on the floor at the thought of Marc even *looking* at a journal\n> with the word Linux in the name. I'm sure he didn't hand over any money for\n> it!\n\n\tWait? Someone actually *paid* for this magazie? *shocked look*\n\n",
"msg_date": "Tue, 3 Mar 1998 08:21:02 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
}
] |
[
{
"msg_contents": "Marc wrote...\n> On Tue, 3 Mar 1998, Thomas G. Lockhart wrote:\n> \n> > \n> > It's OK Marc, us linux'ists weren't offended _too_ much by that quote :))\n> \n> \tI'm such a trouble maker, but I find most Linux'ers such easy easy\n> prey *grin* I have this University full of Linux'ers \nI wonder why there are SO MANY Linux'ers? :-)\n\n> that you can spark\n> up just with a comment like \"Linux != Unix\"...which, it isn't, its a\n> Unix-like clone...but they can't seem to figure the distinction *rofl*\n\nAgreed... :-) But BSD isn't Unix either - not officially. [Waits for\nMarc to disagree, again...]\n\nNot to mention the fact that at least one release of Linux did go through\nfull Posix certification and is thus allowed to be called Unix :-)\n\n\nAndrew\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n",
"msg_date": "Tue, 3 Mar 1998 10:38:35 GMT",
"msg_from": "Andrew Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "Thus spake Andrew Martin\n> > up just with a comment like \"Linux != Unix\"...which, it isn't, its a\n> > Unix-like clone...but they can't seem to figure the distinction *rofl*\n> \n> Agreed... :-) But BSD isn't Unix either - not officially. [Waits for\n> Marc to disagree, again...]\n\nOf course it is. It has direct lineage back the Bell Labs. There is\nno AT&T code left in but you can most definitely say \"BSD Unix\" where\nyou can't say \"Linux Unix.\" For many years Berkeley was the main\ndevelopment hotbed for Unix. In fact, BSD was eventually fed back\ninto SVR4.\n\n> Not to mention the fact that at least one release of Linux did go through\n> full Posix certification and is thus allowed to be called Unix :-)\n\nPosix != Unix. NT is a Posix system. So is OpenVMS.\n\nBTW, which version of Linux was Posix certified and who paid for it?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 3 Mar 1998 07:17:30 -0500 (EST)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "On Tue, 3 Mar 1998, Andrew Martin wrote:\n\n> Marc wrote...\n> > On Tue, 3 Mar 1998, Thomas G. Lockhart wrote:\n> > \n> > > \n> > > It's OK Marc, us linux'ists weren't offended _too_ much by that quote :))\n> > \n> > \tI'm such a trouble maker, but I find most Linux'ers such easy easy\n> > prey *grin* I have this University full of Linux'ers \n> I wonder why there are SO MANY Linux'ers? :-)\n\n\tActually, I don't...Linux had a much quicker start into the Free\nmarket...the *BSD crowd had to content with the almost year(?) of legal\ndeliberations as to whether or not they were even allowed to distribute\nand work on it :( Linux had no such problems, since Linux had no\nhistory...no roots :)\n\n> Agreed... :-) But BSD isn't Unix either - not officially. [Waits for\n> Marc to disagree, again...]\n\n\tI believing the only \"official\" Unix is the one produced by the\ncompany that this year has decided it wants to own the name, isn't it? :)\n\n> Not to mention the fact that at least one release of Linux did go through\n> full Posix certification and is thus allowed to be called Unix :-)\n\n\tActually, my understanding is that its allowed to be called a\nPosix-compliant Operating System... :)\n\n\n",
"msg_date": "Tue, 3 Mar 1998 08:24:22 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "On Tue, 3 Mar 1998, D'Arcy J.M. Cain wrote:\n\n> Thus spake Andrew Martin\n> > > up just with a comment like \"Linux != Unix\"...which, it isn't, its a\n> > > Unix-like clone...but they can't seem to figure the distinction *rofl*\n> > \n> > Agreed... :-) But BSD isn't Unix either - not officially. [Waits for\n> > Marc to disagree, again...]\n> \n> Of course it is. It has direct lineage back the Bell Labs. There is\n> no AT&T code left in but you can most definitely say \"BSD Unix\" where\n> you can't say \"Linux Unix.\" For many years Berkeley was the main\n> development hotbed for Unix. In fact, BSD was eventually fed back\n> into SVR4.\n\n\tWhat he said *scrambles to save this for next time*\n\n> > Not to mention the fact that at least one release of Linux did go through\n> > full Posix certification and is thus allowed to be called Unix :-)\n> \n> Posix != Unix. NT is a Posix system. So is OpenVMS.\n> \n> BTW, which version of Linux was Posix certified and who paid for it?\n\n\tUmmmm, I don't know the version, but I do know that this was the\ncase...whether they stayed Posix certified or not is another story, but I\ndo remember this...\n\n\n",
"msg_date": "Tue, 3 Mar 1998 08:31:51 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "On Tue, 3 Mar 1998, The Hermit Hacker wrote:\n\n> \tWhat he said *scrambles to save this for next time*\n\nI can not belive this thing... :-) Hey, guys, you had a tough time doing\n6.3, right ?\n\nNow that you all said your necessary rant on the Linux vs. Others thing,\nplease calm down b4 I join the thread :-) (oops, I think I just did)\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat Software, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n\n\n",
"msg_date": "Tue, 3 Mar 1998 12:34:59 -0500 (EST)",
"msg_from": "Cristian Gafton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "On Tue, 3 Mar 1998, Cristian Gafton wrote:\n\n> On Tue, 3 Mar 1998, The Hermit Hacker wrote:\n> \n> > \tWhat he said *scrambles to save this for next time*\n> \n> I can not belive this thing... :-) Hey, guys, you had a tough time doing\n> 6.3, right ?\n> \n> Now that you all said your necessary rant on the Linux vs. Others thing,\n> please calm down b4 I join the thread :-) (oops, I think I just did)\n\n\tYou joined much much too late though...this has been going on\nsince, oh, day one :) And, most ppl involved in the rant know me and my\nopinions (they aren't necessarily the same as what I use as my bait, of\ncourse, but ya gotta admit, Linux'ers are just soooooooo easy to bait\n*grin*)\n\n\n\n",
"msg_date": "Tue, 3 Mar 1998 12:41:18 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "On Tue, 3 Mar 1998, Andrew Martin wrote:\n\n> Marc wrote...\n> > On Tue, 3 Mar 1998, Thomas G. Lockhart wrote:\n\n[clipa-clipa]\n\n> > that you can spark\n> > up just with a comment like \"Linux != Unix\"...which, it isn't, its a\n> > Unix-like clone...but they can't seem to figure the distinction *rofl*\n> \n> Agreed... :-) But BSD isn't Unix either - not officially. [Waits for\n> Marc to disagree, again...]\n\nNope - I'm not even sure SCO Open Server is UNIX - and afaik THEY now own\nthe trademark papers.\n\nBe VERY happy that neither Linux nor BSD is a \"real\" unix. Those systems\nare seriously restrictive and clumsy (I suspect SCO is close - sorry, I've\nhad to do a lot of tech-service work on SCO systems recently. Not even\nSolaris is _THAT_ bad... (close though). Guess I just miss my GNU and BSD\ntools too much *grin*)\n\nif you were just to talk about programs, Linux is a superset of\neverything (except Irix at this time). If you were to talk about\nnetworking, BSD is the standard that Linux follows. Who wants STREAMS\nanyways? If you're talking API interface (and here's where I bate\nnon-glibc users), GLIBC-2 is the standard for Unix98+. (I still don't see\nwhy postgres doesn't support it... though I haven't gotten around to\nwriting a patch (or looking recently)...).\n\n> Not to mention the fact that at least one release of Linux did go through\n> full Posix certification and is thus allowed to be called Unix :-)\n\n*heh*\n\nJust being a nard...\n\nG'day, eh? :)\n\t- Teunis\n\n",
"msg_date": "Tue, 3 Mar 1998 16:06:38 -0700 (MST)",
"msg_from": "teunis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "On Tue, 3 Mar 1998, teunis wrote:\n\n> If you were to talk about\n> networking, BSD is the standard that Linux follows.\n\n\tAnd follows badly, last I heard...Linux's networking support\ndoesn't perform as well as *BSDs, and, last I heard, has been rewritten\nfrom scratch 3 times in the past 6 years or so...\n\n Who wants STREAMS\n> anyways? If you're talking API interface (and here's where I bate\n> non-glibc users), GLIBC-2 is the standard for Unix98+. (I still don't see\n> why postgres doesn't support it... though I haven't gotten around to\n> writing a patch (or looking recently)...).\n\n\tKey reason why we don't support it...nobody except for Linux\ncurrently is using it...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 3 Mar 1998 21:50:04 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "> non-glibc users), GLIBC-2 is the standard for Unix98+. (I still don't see\n> why postgres doesn't support it... though I haven't gotten around to\n> writing a patch (or looking recently)...).\n\n*sigh* Postgres runs just fine on a bug-free version of glibc2. We've heard\nrumors that 2.0.7-pre1 from Debian is close enough, but I can't duplicate that\non my RH5.0 production box with Cristian's RH glibc2-2.0.7 package.\n\nbtw Cristian, that library in /home/gafton has most files labeled as 2.0.6; is\nthat expected or are there possibly some more patches available? I tried\ninstalling on my RH5.0 production system and still see the\n\n select '1 min'::timespan;\n\nproblem. Haven't had any luck picking out the math code and duplicating the\nproblem in a 10 line program yet either :(\n\nAlso, v6.3 has some extensive new documentation which you will want to get\ninto /usr/doc, including 4 hardcopy and html manuals. There is a Makefile in\nthe source doc/ distribution to extract them. Let me know if you want more\ndetails.\n\n - Tom\n\n",
"msg_date": "Wed, 04 Mar 1998 02:35:22 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "On Tue, 3 Mar 1998, The Hermit Hacker wrote:\n\n> \tAnd follows badly, last I heard...Linux's networking support\n> doesn't perform as well as *BSDs, and, last I heard, has been rewritten\n> from scratch 3 times in the past 6 years or so...\n\nYou're right about the rewrite thing. You're quite wrong about the\nbenchmarks, though. Linux's tcp/ip stack is known to be _now_ the fastest\naround re: internal latency. But there are things that are balancing this\nwhen compared with *BSD. Things like Linux's nfs server which sucks big\ntime or sockets creation time which only got better in the development\nreleases. \n\nThings are relative. But again, when you say that it was rewritten so many\ntimes in the last 6 years you have to remember the Linux is barely six\nyears old :-)\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat Software, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n\n\n",
"msg_date": "Tue, 3 Mar 1998 22:12:33 -0500 (EST)",
"msg_from": "Cristian Gafton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "On Wed, 4 Mar 1998, Thomas G. Lockhart wrote:\n\n> btw Cristian, that library in /home/gafton has most files labeled as 2.0.6; is\n> that expected or are there possibly some more patches available? I tried\n> installing on my RH5.0 production system and still see the\n\nI am doing a new one with lots of more patches included. Watch that\ndirectory...\n\n> Also, v6.3 has some extensive new documentation which you will want to get\n> into /usr/doc, including 4 hardcopy and html manuals. There is a Makefile in\n\nAlready done that. Check out the new packages.\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat Software, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n\n\n",
"msg_date": "Tue, 3 Mar 1998 22:13:32 -0500 (EST)",
"msg_from": "Cristian Gafton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": ">>>>> \"Thomas\" == Thomas G Lockhart <[email protected]> writes:\nThomas> *sigh* Postgres runs just fine on a bug-free version of glibc2. We've\nThomas> heard rumors that 2.0.7-pre1 from Debian is close enough, but I can't\nThomas> duplicate that on my RH5.0 production box with Cristian's RH\nThomas> glibc2-2.0.7 package.\n\nThought you might like to know, Tom, with Cristian's glibc2-2.0.7 rpm I\ndownloaded yesterday, I was able to get:\n\nmydb=> select '1 min'::timespan;\n?column?\n--------\n@ 1 min\n(1 row)\n\nAnd all time-related regression tests succeeded.\n\nThat was with gcc-2.7.2.3, (-O3 -m486), _however_, when compiled with\ngcc-2.8.0 (-O3 -mpentium), all that nasty time stuff just crept back again,\ndon't know why.\n\nBTW, something I'm a bit concerned about -- re: the ``~30 sec deficit'' in\nregression test timing results you mentioned a couple of days ago -- I'm also\nseeing it here between the official 6.3 and a Feb-15 snapshot (no, LOCALE's\nalways undef) on both Linux and FreeBSD, and can consistently reproduce it,\nwith the same test suite (the one from the official 6.3).\n\nAnd thanks for the docs!\n\n-Pailing\n\n\n",
"msg_date": "03 Mar 1998 23:47:33 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "> Thomas> *sigh* Postgres runs just fine on a bug-free version of glibc2. We've\n> Thomas> heard rumors that 2.0.7-pre1 from Debian is close enough, but I can't\n> Thomas> duplicate that on my RH5.0 production box with Cristian's RH\n> Thomas> glibc2-2.0.7 package.\n>\n> Thought you might like to know, Tom, with Cristian's glibc2-2.0.7 rpm I\n> downloaded yesterday, I was able to get:\n\n> mydb=> select '1 min'::timespan;\n> ?column?\n> --------\n> @ 1 min\n> (1 row)\n>\n> And all time-related regression tests succeeded.\n\nThat's good news, but I'm annoyed I haven't been able to get this result myself\nyet. Did you use the redhat beta rpm for postgres, or did you do a clean install\nfrom sources?\n\n> That was with gcc-2.7.2.3, (-O3 -m486), _however_, when compiled with\n> gcc-2.8.0 (-O3 -mpentium), all that nasty time stuff just crept back again,\n> don't know why.\n>\n> BTW, something I'm a bit concerned about -- re: the ``~30 sec deficit'' in\n> regression test timing results you mentioned a couple of days ago -- I'm also\n> seeing it here between the official 6.3 and a Feb-15 snapshot (no, LOCALE's\n> always undef) on both Linux and FreeBSD, and can consistently reproduce it,\n> with the same test suite (the one from the official 6.3).\n\nI ended up convincing myself that most of the time difference was the overhead\nin compiling with USE_LOCALE turned on. The RH rpm is compiled with it on, and\nmost of my testing is done with it turned off, but I had turned it on to track\ndown problems with the money type. Anyway, my times are now within ~10sec of the\nshortest time I had ever seen (of course, it's easy to shave time off when you\nskip essential code; I'll take the 10sec hit :)\n\n> And thanks for the docs!\n\nSure. Consider finding something which interests you to write about for the next\nrelease :))\n\n - Tom\n\n",
"msg_date": "Wed, 04 Mar 1998 06:01:32 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "On Wed, 4 Mar 1998, Thomas G. Lockhart wrote:\n\n> > non-glibc users), GLIBC-2 is the standard for Unix98+. (I still don't see\n> > why postgres doesn't support it... though I haven't gotten around to\n> > writing a patch (or looking recently)...).\n> \n> *sigh* Postgres runs just fine on a bug-free version of glibc2. We've heard\n> rumors that 2.0.7-pre1 from Debian is close enough, but I can't duplicate that\n> on my RH5.0 production box with Cristian's RH glibc2-2.0.7 package.\n\nThen consider that the minimum supported and ignore it... *grin*\nSolution found! :)\n[GNU takes very long time to fix things toujours]\n\nG'day, eh? :)\n\t- Teunis\n\n(PS: Linux's kernel networking layer has been extensively rewritten over\nthe last year and a half.. and is (afaik) considerably faster... at least \nin the 2.1 kernels... though IIRC it was rewritten also in the 1.3\nkernels... The NFS probs were solved in 2.1 a while ago though)\n\n",
"msg_date": "Wed, 4 Mar 1998 00:06:14 -0700 (MST)",
"msg_from": "teunis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": ">>>>> \"Thomas\" == Thomas G Lockhart <[email protected]> writes:\n\nThomas> That's good news, but I'm annoyed I haven't been able to get this\nThomas> result myself yet. Did you use the redhat beta rpm for postgres, or\nThomas> did you do a clean install from sources?\n\nclean install from source, that's why I listed the compiler versions and\nflags. Isn't that what you've been trying to get it to work?\n\n>> That was with gcc-2.7.2.3, (-O3 -m486), _however_, when compiled with\n>> gcc-2.8.0 (-O3 -mpentium), all that nasty time stuff just crept back again,\n>> don't know why.\n>> \n>> BTW, something I'm a bit concerned about -- re: the ``~30 sec deficit'' in\n>> regression test timing results you mentioned a couple of days ago -- I'm\n\nThomas> I ended up convincing myself that most of the time difference was the\nThomas> overhead in compiling with USE_LOCALE turned on. The RH rpm is\nThomas> compiled with it on, and most of my testing is done with it turned\n\nI thought about doing some more testing and possibly tracking it down, and see\nwhether that was caused by some bugfixes somewhere, but without all the\nsnapshots from perhaps Feb-20 all the way up to the official release, it's a\nbit hard.\n\n-Pailing\n\n\n\n",
"msg_date": "04 Mar 1998 02:22:47 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
},
{
"msg_contents": "> Thomas> That's good news, but I'm annoyed I haven't been able to get this\n> Thomas> result myself yet. Did you use the redhat beta rpm for postgres, or\n> Thomas> did you do a clean install from sources?\n>\n> clean install from source, that's why I listed the compiler versions and\n> flags. Isn't that what you've been trying to get it to work?\n\nNo. I've been developing on RH4.2 (making source builds on that), and trying to\nuse rpms for RH5.0 built at RedHat to verify the glibc2 performance. May have to\ndo a build from source to get to the bottom of things, but I'm hoping not...\n\n> >> That was with gcc-2.7.2.3, (-O3 -m486), _however_, when compiled with\n> >> gcc-2.8.0 (-O3 -mpentium), all that nasty time stuff just crept back again,\n> >> don't know why.\n> I thought about doing some more testing and possibly tracking it down, and see\n> whether that was caused by some bugfixes somewhere, but without all the\n> snapshots from perhaps Feb-20 all the way up to the official release, it's a\n> bit hard.\n\nWell, have you tried the CVSup static package on RH5.0 yet? Don't know if it\nwould work, but if it did it would allow you to get snapshots as you want; in\nfact I'm doing that right now trying to track down a problem introduced sometime\nafter 980112 and before 980201. Downloading 980120 right now as I (sort of)\nbinary search through the possibilities.\n\n - Tom\n\n",
"msg_date": "Wed, 04 Mar 1998 13:31:26 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
},
{
"msg_contents": "In message <[email protected]>, \"Thomas G. Lockhart\" writes:\n> > Thomas> That's good news, but I'm annoyed I haven't been able to get this\n> > Thomas> result myself yet. Did you use the redhat beta rpm for postgres, or\n> > Thomas> did you do a clean install from sources?\n> >\n> > clean install from source, that's why I listed the compiler versions and\n> > flags. Isn't that what you've been trying to get it to work?\n> \n> No. I've been developing on RH4.2 (making source builds on that), and trying \n> to\n> use rpms for RH5.0 built at RedHat to verify the glibc2 performance. May have\n> to\n> do a build from source to get to the bottom of things, but I'm hoping not...\n> \n> > >> That was with gcc-2.7.2.3, (-O3 -m486), _however_, when compiled with\n> > >> gcc-2.8.0 (-O3 -mpentium), all that nasty time stuff just crept back aga\n> in,\n> > >> don't know why.\n> > I thought about doing some more testing and possibly tracking it down, and \n> see\n> > whether that was caused by some bugfixes somewhere, but without all the\n> > snapshots from perhaps Feb-20 all the way up to the official release, it's \n> a\n> > bit hard.\n> \n> Well, have you tried the CVSup static package on RH5.0 yet? Don't know if it\n> would work, but if it did it would allow you to get snapshots as you want; in\n> fact I'm doing that right now trying to track down a problem introduced somet\n> ime\n> after 980112 and before 980201. Downloading 980120 right now as I (sort of)\n> binary search through the possibilities.\n> \n> - Tom\n> \n> \n\nJust thought you'd like another data point. I just installed Cristian's\nglibc2-2.0.7 packages. Like Thomas, I still get:\n\n\tpostgres=> select '1 min'::timespan;\n\t?column? \n\t------------\n\t@ 60.00 secs\n\t(1 row)\n\nWithout recompiling. With recompiling, I get:\n\n\tpostgres=> select '1 min'::timespan;\n\t?column?\n\t--------\n\t@ 1 min \n\t(1 row)\n\n\nI'm using gcc 2.7.2.3 -O2. \n\nTom Szybist\[email protected]\n",
"msg_date": "Wed, 04 Mar 1998 09:57:57 -0500",
"msg_from": "\"Thomas A. Szybist\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
},
{
"msg_contents": "> Just thought you'd like another data point. I just installed Cristian's\n> glibc2-2.0.7 packages. Like Thomas, I still get:\n>\n> postgres=> select '1 min'::timespan;\n> ?column?\n> ------------\n> @ 60.00 secs\n> (1 row)\n>\n> Without recompiling. With recompiling, I get:\n>\n> postgres=> select '1 min'::timespan;\n> ?column?\n> --------\n> @ 1 min\n> (1 row)\n>\n> I'm using gcc 2.7.2.3 -O2.\n\nWell, this narrows it down a lot! Wonder why it requires a recompile?? afaik there\nisn't any static library linking involved...\n\n - Tom\n\n",
"msg_date": "Wed, 04 Mar 1998 15:48:01 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
},
{
"msg_contents": "In message <[email protected]>, \"Thomas G. Lockhart\" writes:\n> > Just thought you'd like another data point. I just installed Cristian's\n> > glibc2-2.0.7 packages. Like Thomas, I still get:\n> >\n> > postgres=> select '1 min'::timespan;\n> > ?column?\n> > ------------\n> > @ 60.00 secs\n> > (1 row)\n> >\n> > Without recompiling. With recompiling, I get:\n> >\n> > postgres=> select '1 min'::timespan;\n> > ?column?\n> > --------\n> > @ 1 min\n> > (1 row)\n> >\n> > I'm using gcc 2.7.2.3 -O2.\n> \n> Well, this narrows it down a lot! Wonder why it requires a recompile?? afaik there\n> isn't any static library linking involved...\n> \n> - Tom\n> \n\nCould an include file account for this?\n\n\nTom Szybist\[email protected]\n",
"msg_date": "Wed, 04 Mar 1998 11:05:46 -0500",
"msg_from": "\"Thomas A. Szybist\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
},
{
"msg_contents": "On Wed, 4 Mar 1998, Thomas A. Szybist wrote:\n\n> Just thought you'd like another data point. I just installed Cristian's\n> glibc2-2.0.7 packages. Like Thomas, I still get:\n\nMaybe you were using my postgresql package which was a little older ?\n\nI have new rpms on ftp://ftp.redhat.com/home/gafton/pgsql\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat Software, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n\n\n",
"msg_date": "Wed, 4 Mar 1998 22:46:18 -0500 (EST)",
"msg_from": "Cristian Gafton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
}
] |
[
{
"msg_contents": "What the procedure now? Is there a need to provide patches for 6.3, or is\nthis only done for serious bug? That is new features go only into 6.4 as\nusual.\n\nMy last minor patch (allowing exec sql vacuum) didn't make it into cvs it\nseems. Should this be updated in 6.3 or should I just resubmit with changes\nfor 6.4?\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n",
"msg_date": "Tue, 3 Mar 1998 11:55:39 +0100 (CET)",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "patches now that 6.3 has been released"
},
{
"msg_contents": "> What the procedure now? Is there a need to provide patches for 6.3, or is\n> this only done for serious bug? That is new features go only into 6.4 as\n> usual.\n>\n> My last minor patch (allowing exec sql vacuum) didn't make it into cvs it\n> seems. Should this be updated in 6.3 or should I just resubmit with changes\n> for 6.4?\n\nWhat we did for v6.2.1 which seemed to work pretty well was this:\n\nif a patch can fit into v6.2.1, we wrote it into /pub/patches and updated the\nREADME in the same directory. Of the literally hundreds (thousands?) of\nchanges for v6.3, there were only ~7 patches posted for v6.2.1 fixes. Of\ncourse, we also submitted the patch separately for the development code tree.\n\nIf the patch diverged from a clean v6.2.1 installation, we just submitted it\nfor the next release and left it at that. I think that _minor_ and obvious bug\nfixes could go into the code tree now, and then Marc can choose whether to\ninclude them in any new snapshot releases or on the CDROM. We are holding off\non submitting new work for a week or two, partly to recover from the last few\nweeks and partly to see how solid v6.3 is...\n\n - Tom\n\n",
"msg_date": "Tue, 03 Mar 1998 13:19:16 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] patches now that 6.3 has been released"
},
{
"msg_contents": "On Tue, 3 Mar 1998, Michael Meskes wrote:\n\n> What the procedure now? Is there a need to provide patches for 6.3, or is\n> this only done for serious bug? That is new features go only into 6.4 as\n> usual.\n\n\tCorrect...v6.4 will be as different from v6.3, as v6.3 was from\nv6.2.1...\n\n> My last minor patch (allowing exec sql vacuum) didn't make it into cvs it\n> seems. Should this be updated in 6.3 or should I just resubmit with changes\n> for 6.4?\n\n\tOver time, it gets slightly harder, but if you can, make it a\nseperate patch that we can add to the ftp server itself and that ppl can\ndownload. (and, of course, add it in for v6.4 *grin*)\n\n\n\n",
"msg_date": "Tue, 3 Mar 1998 08:27:41 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] patches now that 6.3 has been released"
},
{
"msg_contents": "The Hermit Hacker writes:\n> \tOver time, it gets slightly harder, but if you can, make it a\n> seperate patch that we can add to the ftp server itself and that ppl can\n> download. (and, of course, add it in for v6.4 *grin*)\n\nOkay, here's the missing bug fix of two minor bugs. I hope I don't have to\nkeep both source trees from now on. :-)\n\ndiff -rcN interfaces/ecpg/preproc/ecpg.c interfaces/ecpg.mm/preproc/ecpg.c\n*** interfaces/ecpg/preproc/ecpg.c\tTue Mar 3 08:29:49 1998\n--- interfaces/ecpg.mm/preproc/ecpg.c\tTue Mar 3 11:52:45 1998\n***************\n*** 58,108 ****\n \t\t/* after the options there must not be anything but filenames */\n \t\tfor (fnr = optind; fnr < argc; fnr++)\n \t\t{\n! \t\t\tchar\t *filename,\n! \t\t\t\t\t *ptr2ext;\n! \t\t\tint\t\t\text = 0;\n \n! \t\t\tfilename = mm_alloc(strlen(argv[fnr]) + 4);\n \n! \t\t\tstrcpy(filename, argv[fnr]);\n \n! \t\t\tptr2ext = strrchr(filename, '.');\n! \t\t\t/* no extension or extension not equal .pgc */\n! \t\t\tif (ptr2ext == NULL || strcmp(ptr2ext, \".pgc\") != 0)\n \t\t\t{\n! \t\t\t\tif (ptr2ext == NULL)\n! \t\t\t\t\text = 1;\t/* we need this information a while later */\n! \t\t\t\tptr2ext = filename + strlen(filename);\n \t\t\t\tptr2ext[0] = '.';\n \t\t\t}\n \n- \t\t\t/* make extension = .c */\n- \t\t\tptr2ext[1] = 'c';\n- \t\t\tptr2ext[2] = '\\0';\n- \n \t\t\tif (out_option == 0)/* calculate the output name */\n \t\t\t{\n! \t\t\t\tyyout = fopen(filename, \"w\");\n \t\t\t\tif (yyout == NULL)\n \t\t\t\t{\n! \t\t\t\t\tperror(filename);\n! \t\t\t\t\tfree(filename);\n \t\t\t\t\tcontinue;\n \t\t\t\t}\n \t\t\t}\n \n- \t\t\tif (ext == 1)\n- \t\t\t{\n- \t\t\t\t/* no extension => add .pgc */\n- \t\t\t\tptr2ext = strrchr(filename, '.');\n- \t\t\t\tptr2ext[1] = 'p';\n- \t\t\t\tptr2ext[2] = 'g';\n- \t\t\t\tptr2ext[3] = 'c';\n- \t\t\t\tptr2ext[4] = '\\0';\n- \t\t\t\tinput_filename = filename;\n- \t\t\t}\n- \t\t\telse\n- \t\t\t\tinput_filename = argv[fnr];\n \t\t\tyyin = fopen(input_filename, \"r\");\n \t\t\tif (yyin == NULL)\n \t\t\t\tperror(argv[fnr]);\n--- 58,102 ----\n \t\t/* after the options there must not be anything but filenames */\n \t\tfor (fnr = optind; fnr < argc; fnr++)\n \t\t{\n! \t\t\tchar\t *output_filename, *ptr2ext;\n \n! \t\t\tinput_filename = mm_alloc(strlen(argv[fnr]) + 5);\n \n! \t\t\tstrcpy(input_filename, argv[fnr]);\n \n! \t\t\tptr2ext = strrchr(input_filename, '.');\n! \t\t\t/* no extension? */\n! \t\t\tif (ptr2ext == NULL)\n \t\t\t{\n! \t\t\t\tptr2ext = input_filename + strlen(input_filename);\n! \t\t\t\t\n! \t\t\t\t/* no extension => add .pgc */\n \t\t\t\tptr2ext[0] = '.';\n+ \t\t\t\tptr2ext[1] = 'p';\n+ \t\t\t\tptr2ext[2] = 'g';\n+ \t\t\t\tptr2ext[3] = 'c';\n+ \t\t\t\tptr2ext[4] = '\\0';\n \t\t\t}\n \n \t\t\tif (out_option == 0)/* calculate the output name */\n \t\t\t{\n! \t\t\t\toutput_filename = strdup(input_filename);\n! \t\t\t\t\n! \t\t\t\tptr2ext = strrchr(output_filename, '.');\n! \t\t\t\t/* make extension = .c */\n! \t\t\t\tptr2ext[1] = 'c';\n! \t\t\t\tptr2ext[2] = '\\0';\n! \t\t\t\t\n! \t\t\t\tyyout = fopen(output_filename, \"w\");\n \t\t\t\tif (yyout == NULL)\n \t\t\t\t{\n! \t\t\t\t\tperror(output_filename);\n! \t\t\t\t\tfree(output_filename);\n! \t\t\t\t\tfree(input_filename);\n \t\t\t\t\tcontinue;\n \t\t\t\t}\n \t\t\t}\n \n \t\t\tyyin = fopen(input_filename, \"r\");\n \t\t\tif (yyin == NULL)\n \t\t\t\tperror(argv[fnr]);\n***************\n*** 122,128 ****\n \t\t\t\t\tfclose(yyout);\n \t\t\t}\n \n! 
\t\t\tfree(filename);\n \t\t}\n \t}\n \treturn (0);\n--- 116,123 ----\n \t\t\t\t\tfclose(yyout);\n \t\t\t}\n \n! \t\t\tfree(output_filename);\n! \t\t\tfree(input_filename);\n \t\t}\n \t}\n \treturn (0);\ndiff -rcN interfaces/ecpg/preproc/preproc.y interfaces/ecpg.mm/preproc/preproc.y\n*** interfaces/ecpg/preproc/preproc.y\tTue Mar 3 08:29:49 1998\n--- interfaces/ecpg.mm/preproc/preproc.y\tFri Feb 27 16:56:12 1998\n***************\n*** 607,613 ****\n /* FIXME: instead of S_SYMBOL we should list all possible commands */\n sqlcommand : S_SYMBOL | SQL_DECLARE;\n \n! sqlstatement_words : sqlstatement_word\n \t\t | sqlstatement_words sqlstatement_word;\n \t\n sqlstatement_word : ':' symbol \n--- 607,613 ----\n /* FIXME: instead of S_SYMBOL we should list all possible commands */\n sqlcommand : S_SYMBOL | SQL_DECLARE;\n \n! sqlstatement_words : /* empty */\n \t\t | sqlstatement_words sqlstatement_word;\n \t\n sqlstatement_word : ':' symbol \ndiff -rcN interfaces/ecpg/test/perftest.pgc interfaces/ecpg.mm/test/perftest.pgc\n*** interfaces/ecpg/test/perftest.pgc\tTue Mar 3 08:29:49 1998\n--- interfaces/ecpg.mm/test/perftest.pgc\tFri Feb 27 17:01:39 1998\n***************\n*** 16,21 ****\n--- 16,22 ----\n \t\tusec+=1000000;\n \t}\n \tprintf(\"I needed %ld seconds and %ld microseconds for the %s test.\\n\", sec, usec, text);\n+ \texec sql vacuum;\n }\n \n int\n***************\n*** 106,113 ****\n \texec sql drop index number1;\n \n \texec sql drop table perftest1;\n- \n- \texec sql commit;\n \n \treturn (0);\n }\n--- 107,112 ----\n\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n",
"msg_date": "Tue, 3 Mar 1998 14:31:01 +0100 (CET)",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] patches now that 6.3 has been released"
}
] |
[
{
"msg_contents": "\nHello everyone.\n\nI would like to say you have done a great job on PostgreSQL!\nThank you from a very appreciative user.\n\nMichael\n\n* Michael J. Rogan, Network Administrator, 905-624-3020 *\n* Mark IV Industries, F-P Electronics & I.V.H.S. Divisions *\n* [email protected] [email protected] *\n",
"msg_date": "Tue, 3 Mar 1998 13:04:12 +0000",
"msg_from": "\"Michael J. Rogan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Good Work."
}
] |
[
{
"msg_contents": "Of course a first mail after a release must have the earned praise:\nVery nicely done, I like it all ;-)\n\nReading the TODO, I see 'Allow text, char(), and varchar() overhead to be\nonly 2 bytes, not 4 bytes'\nWhile this is very good for char and varchar, text is not a candidate\nsince it is usually a blob datatype, without a length restriction (or a 2Gig\nlimit).\n\nI think it should alternately read:\nAllow varchar() overhead to be only 2 bytes\nremove char() 4 byte overhead, use atttypmod instead\nmake text a blob datatype (maybe storing <= 8k row with tuple, >=8k in blob\ntablespace)\n\nWhat do you think ?\nAndreas\n",
"msg_date": "Tue, 3 Mar 1998 14:41:32 +0100 ",
"msg_from": "Zeugswetter Andreas SARZ <[email protected]>",
"msg_from_op": true,
"msg_subject": "text should be a blob field"
},
{
"msg_contents": "> \n> Of course a first mail after a release must have the earned praise:\n> Very nicely done, I like it all ;-)\n> \n> Reading the TODO, I see 'Allow text, char(), and varchar() overhead to be\n> only 2 bytes, not 4 bytes'\n> While this is very good for char and varchar, text is not a candidate\n> since it is usually a blob datatype, without a length restriction (or a 2Gig\n> limit).\n\nIt was an idea. I think I will remove it from the TODO list. I had\nconsidered it so I could save the defined length(atttypmod now) in\nthere, but now that we have atttypmod, we don't need it. It will stay\nat 4 bytes.\n\n> \n> I think it should alternately read:\n> Allow varchar() overhead to be only 2 bytes\n> remove char() 4 byte overhead, use atttypmod instead\n\nOoh, this is interesting. Yea, I guess we really don't need that for\nchar() anymore. The only problem is that we would have to do some fancy\nstuff to track char() separately in the backend, and I am sure atttypmod\nis not available in all the places we need it. Don't think it is worth\nit.\n\n> make text a blob datatype (maybe storing <= 8k row with tuple, >=8k in blob\n> tablespace)\n\nHmmm.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 3 Mar 1998 09:35:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "> > make text a blob datatype (maybe storing <= 8k row with tuple, >=8k in blob\n> > tablespace)\n> \n> Hmmm.\n\nI know Informix has 2 BLOB-types \"text\" and \"binary\"\nbut I do not think we should change the meaning of keyword \"text\" too\nmuch.\n\nAn idea to think about:\nif text fits within a tuple make it varchar-alias (as it works now)\nif text is larger make it a blob.\n\nOr simply call text-BLOBs \"textblob\" of something like that.\nWhat does SQL-92 say about BLOBs anyway?\n\n\tregards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna",
"msg_date": "Tue, 03 Mar 1998 17:28:58 +0100",
"msg_from": "Goran Thyni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "> Or simply call text-BLOBs \"textblob\" of something like that.\n> What does SQL-92 say about BLOBs anyway?\n\nNothing afaik. That is why you get different meanings and usages between database\nproducts. I'd like to keep \"text\" as a useful string type. Conventionally, generic\nblobs are just binary objects with not much backend support (e.g. no useful\noperators other than perhaps \"=\").\n\nImo generic blobs make more sense in a system without the capability to add types;\nperhaps a solution for Postgres would look a little different. At the moment, the\nfrontend/backend protocol is different for large objects and everything else, so\nit would be difficult to transparently introduce blobs which behave identically to\ntypes which fit within a normal tuple.\n\n - Tom\n\n",
"msg_date": "Tue, 03 Mar 1998 16:45:23 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "On Tue, 3 Mar 1998, Bruce Momjian wrote:\n\n> > make text a blob datatype (maybe storing <= 8k row with tuple, >=8k in blob\n> > tablespace)\n> \n> Hmmm.\n> \n\nThere was some talk about this about a month ago.\n\nAlthough we now have blob support in the JDBC driver, there is one\noutstanding issue with them, that I was waiting for 6.3 to be released\nbefore starting on it (and almost certainly starting a discussion here\nabout it).\n\nAllowing text to use blobs for values larger than the current block size\nwould hit the same problem.\n\nOk, here's what the problem is at the moment:\n\nThe JDBC example ImageViewer uses a table to store the name of an image,\nand the OID of the associated blob.\n\n# create table images (imgname name,imgoid oid);\n\nOk, we now create an entry in the table for an image with:\n\n# insert into images values ('test.gif',lo_import('/home/pmount/test.gif'));\n\nThis is fine so far. Now say we delete that row with:\n\n# delete from images where name = 'test.gif';\n\nFine again, except that the blob is still in the database. To get round\nthis, you would have to add extra statements to handle this, and for JDBC,\nthere is no standard way to do this.\n\nWhat I was thinking of, was to create a new type 'blob' which would delete\nthe associated large object when the row is deleted. However, here's the\nproblems against this:\n\n1. Is there a call made by the backend to each datatype when a row is \n deleted? I can't see one.\n\n2. When we update a row, we don't want the overhead of copying a very\n large blob when a row is first copied, then the original deleted, etc.\n\nAnyhow, I'm thinking of various ways around this - just don't hold your\nbreath ;-)\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Tue, 3 Mar 1998 20:38:24 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "> outstanding issue with them, that I was waiting for 6.3 to be released\n> before starting on it (and almost certainly starting a discussion here\n> about it).\n> \n> Allowing text to use blobs for values larger than the current block size\n> would hit the same problem.\n> \n> Ok, here's what the problem is at the moment:\n> \n> The JDBC example ImageViewer uses a table to store the name of an image,\n> and the OID of the associated blob.\n> \n> # create table images (imgname name,imgoid oid);\n> \n> Ok, we now create an entry in the table for an image with:\n> \n> # insert into images values ('test.gif',lo_import('/home/pmount/test.gif'));\n> \n> This is fine so far. Now say we delete that row with:\n> \n> # delete from images where name = 'test.gif';\n> \n> Fine again, except that the blob is still in the database. To get round\n> this, you would have to add extra statements to handle this, and for JDBC,\n> there is no standard way to do this.\n> \n> What I was thinking of, was to create a new type 'blob' which would delete\n> the associated large object when the row is deleted. However, here's the\n> problems against this:\n> \n> 1. Is there a call made by the backend to each datatype when a row is \n> deleted? I can't see one.\n\nWell, you could have a RULE that deletes the large object at row\ndeletion time. However, if two rows point to the same large object, the\nfirst one deleting it would delete the large object for the other. The\nonly solution to this is to have a separate large object table, and use\nreference counts so only the last user of the object deletes it.\n\n> \n> 2. When we update a row, we don't want the overhead of copying a very\n> large blob when a row is first copied, then the original deleted, etc.\n\nAgain, a deletion-only rule, but if the update the row and change the\nlarge object, you would have to delete the old stuff.\n\nSeems very messy to me. Perhaps put all the large objects in a table,\nand have a process clean up all the unreferenced large objects.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 10:58:59 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [QUESTIONS] Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "\nBruce wrote:\n\n> > 1. Is there a call made by the backend to each datatype when a row is\n> > deleted? I can't see one.\n>\n> Well, you could have a RULE that deletes the large object at row\n> deletion time. However, if two rows point to the same large object, the\n> first one deleting it would delete the large object for the other. The\n> only solution to this is to have a separate large object table, and use\n> reference counts so only the last user of the object deletes it.\n\n I think triggers are more appropriate.\n\n On INSERT check that the large object referenced exists.\n\n On UPDATE if large object reference changes, check that new\n large object exists and check if old large object isn't\n referenced any more in which case drop the old large object.\n\n On DELETE check if large object isn't referenced any more ...\n\n Yes - I like triggers :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 4 Mar 1998 17:40:17 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [QUESTIONS] Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "On Wed, 4 Mar 1998, Bruce Momjian wrote:\n\n> > 1. Is there a call made by the backend to each datatype when a row is \n> > deleted? I can't see one.\n> \n> Well, you could have a RULE that deletes the large object at row\n> deletion time.\n\nAs I haven't yet played with Rules & Triggers, and now we have 6.3 out of\nthe way, I'm going to start.\n\n> However, if two rows point to the same large object, the first one\n> deleting it would delete the large object for the other. The only\n> solution to this is to have a separate large object table, and use\n> reference counts so only the last user of the object deletes it. \n\nAh, in this case, there would be a single large object per column/row. If\nthe row is deleted, then so will the blob.\n\n> > 2. When we update a row, we don't want the overhead of copying a very\n> > large blob when a row is first copied, then the original deleted, etc.\n> \n> Again, a deletion-only rule, but if the update the row and change the\n> large object, you would have to delete the old stuff.\n\nThat's true.\n\n> Seems very messy to me. Perhaps put all the large objects in a table,\n> and have a process clean up all the unreferenced large objects.\n\nI think that would be a last resort thing to use.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Wed, 4 Mar 1998 20:16:07 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [QUESTIONS] Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "Peter T Mount wrote:\n> \n> On Tue, 3 Mar 1998, Bruce Momjian wrote:\n> \n> > > make text a blob datatype (maybe storing <= 8k row with tuple, >=8k in blob\n> > > tablespace)\n> >\n> \n> There was some talk about this about a month ago.\n> \n> Although we now have blob support in the JDBC driver, there is one\n> outstanding issue with them, that I was waiting for 6.3 to be released\n> before starting on it (and almost certainly starting a discussion here\n> about it).\n> \n> Allowing text to use blobs for values larger than the current block size\n> would hit the same problem.\n\nWhen I told about multi-representation feature I ment that applications\nwill not be affected by how text field is stored - in tuple or somewhere \nelse. Is this Ok for you ?\n\nVadim\n",
"msg_date": "Thu, 05 Mar 1998 16:08:08 +0700",
"msg_from": "\"Vadim B. Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [QUESTIONS] Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "On Thu, 5 Mar 1998, Vadim B. Mikheev wrote:\n\n> Peter T Mount wrote:\n> > \n> > On Tue, 3 Mar 1998, Bruce Momjian wrote:\n> > \n> > > > make text a blob datatype (maybe storing <= 8k row with tuple, >=8k in blob\n> > > > tablespace)\n> > >\n> > \n> > There was some talk about this about a month ago.\n> > \n> > Although we now have blob support in the JDBC driver, there is one\n> > outstanding issue with them, that I was waiting for 6.3 to be released\n> > before starting on it (and almost certainly starting a discussion here\n> > about it).\n> > \n> > Allowing text to use blobs for values larger than the current block size\n> > would hit the same problem.\n> \n> When I told about multi-representation feature I ment that applications\n> will not be affected by how text field is stored - in tuple or somewhere \n> else. Is this Ok for you ?\n\nYes. What I was meaning was if the \"somewhere else\" is in a blob, then we\nwould have to keep track of it if the tuple is updated or deleted.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Fri, 6 Mar 1998 06:56:55 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [QUESTIONS] Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "On Wed, 4 Mar 1998, Jan Wieck wrote:\n\n> Bruce wrote:\n> \n> > > 1. Is there a call made by the backend to each datatype when a row is\n> > > deleted? I can't see one.\n> >\n> > Well, you could have a RULE that deletes the large object at row\n> > deletion time. However, if two rows point to the same large object, the\n> > first one deleting it would delete the large object for the other. The\n> > only solution to this is to have a separate large object table, and use\n> > reference counts so only the last user of the object deletes it.\n> \n> I think triggers are more appropriate.\n> \n> On INSERT check that the large object referenced exists.\n> \n> On UPDATE if large object reference changes, check that new\n> large object exists and check if old large object isn't\n> referenced any more in which case drop the old large object.\n> \n> On DELETE check if large object isn't referenced any more ...\n> \n> Yes - I like triggers :-)\n\nI'm begining to agree with you here.\n\nSo far, I've got the trigger to work, so if a row of a table is deleted,\nor an oid referencing a BLOB is updated, then the old BLOB is deleted.\nThis removes the orphaned BLOB problem.\n\nThe only problem I have now, is:\n\n How to get a trigger to be automatically created on a table when the\n table is created. This would be required, so the end user doesn't have\n to do this (normally from within an application).\n\nThis would be required, esp. for expanding the text type (or memo, or\nwhatever).\n\nAny Ideas?\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Sun, 15 Mar 1998 13:25:02 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [QUESTIONS] Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "\nPeter Mount wrote:\n>\n> On Wed, 4 Mar 1998, Jan Wieck wrote:\n>\n> > I think triggers are more appropriate.\n> >\n>\n> I'm begining to agree with you here.\n>\n> So far, I've got the trigger to work, so if a row of a table is deleted,\n> or an oid referencing a BLOB is updated, then the old BLOB is deleted.\n> This removes the orphaned BLOB problem.\n>\n> The only problem I have now, is:\n>\n> How to get a trigger to be automatically created on a table when the\n> table is created. This would be required, so the end user doesn't have\n> to do this (normally from within an application).\n>\n> This would be required, esp. for expanding the text type (or memo, or\n> whatever).\n\n So you think of a new type that automatically causes trigger\n definition if used in CREATE/ALTER TABLE.\n\n Agree - would be a nice feature.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 16 Mar 1998 08:56:58 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [QUESTIONS] Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "On Mon, 16 Mar 1998, Jan Wieck wrote:\n\n> \n> Peter Mount wrote:\n> >\n> > On Wed, 4 Mar 1998, Jan Wieck wrote:\n> >\n> > > I think triggers are more appropriate.\n> > >\n> >\n> > I'm begining to agree with you here.\n> >\n> > So far, I've got the trigger to work, so if a row of a table is deleted,\n> > or an oid referencing a BLOB is updated, then the old BLOB is deleted.\n> > This removes the orphaned BLOB problem.\n> >\n> > The only problem I have now, is:\n> >\n> > How to get a trigger to be automatically created on a table when the\n> > table is created. This would be required, so the end user doesn't have\n> > to do this (normally from within an application).\n> >\n> > This would be required, esp. for expanding the text type (or memo, or\n> > whatever).\n> \n> So you think of a new type that automatically causes trigger\n> definition if used in CREATE/ALTER TABLE.\n> \n> Agree - would be a nice feature.\n\nExactly, it would be a nice feature.\n\nI'm about to look at rules to see if that's a way to do it, but seeing it\ntook me about 30 mins to do this with Triggers (and thats when I've never\nused them before), then it would be nice to use these.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Mon, 16 Mar 1998 18:53:17 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [QUESTIONS] Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "Would it be possible to have a slightly different interface in the\nfrontend library which hides the fact that large objects are transfered\n8kb at a time from the backend? Then the handling of text and large\nobjects/blobs starts to look more alike...\n\n - Tom\n",
"msg_date": "Tue, 17 Mar 1998 03:21:04 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [QUESTIONS] Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "On Tue, 17 Mar 1998, Thomas G. Lockhart wrote:\n\n> Would it be possible to have a slightly different interface in the\n> frontend library which hides the fact that large objects are transfered\n> 8kb at a time from the backend? Then the handling of text and large\n> objects/blobs starts to look more alike...\n\nThe front end doesn't show the 8k limit... the storage manager handles\nsplitting up the large object into 8k chunks - it may be that the examples\nshow this because we know about it ourselves ;-)\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Tue, 17 Mar 1998 07:30:47 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [QUESTIONS] Re: [HACKERS] text should be a blob field"
}
] |
[
{
"msg_contents": "I had developed a \"cheat\" to help people convert Unix system time stored\nas an integer into a true date/time type. I noticed that it did not work\nprior to the v6.3 release, but have now gone back to v6.2.1 and\nconfirmed that it works there. Can someone test this on their\ninstallation and confirm that it is a problem for all v6.3 (since\nsomeone reported that it worked for them earlier, but I'm not sure how\nthat could be):\n\n CREATE FUNCTION abstime_datetime(int4)\n RETURNS datetime\n AS '-' LANGUAGE 'internal';\n\nFor v6.2.1, here is the result:\n\npostgres=> select abstime_datetime(0);\nabstime_datetime\n----------------\nepoch\n(1 row)\npostgres=> select abstime_datetime(900000000);\nabstime_datetime\n----------------------------\nThu Jul 09 16:00:00 1998 GMT\n(1 row)\n\nWhen I run this same thing on v6.3, I get a date sometime in 1974 which\nI think might actually be derived from a pointer interpreted as an\ninteger :(\n\npostgres=> select abstime_datetime(0);\nabstime_datetime\n----------------------------\nWed Apr 24 18:51:28 1974 GMT\n(1 row)\npostgres=> select abstime_datetime(900000000);\nabstime_datetime\n----------------------------\nWed Apr 24 18:37:12 1974 GMT\n(1 row)\n\nAny ideas where to look? It would be a shame to lose this capability.\nAlthough the example is perhaps not too respectable, it illustrates a\nuseful feature...\n\n -\nTom\n\n",
"msg_date": "Tue, 03 Mar 1998 13:54:34 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Lost a function overloading capability in v6.3"
},
{
"msg_contents": "Thomas G. Lockhart writes:\n> CREATE FUNCTION abstime_datetime(int4)\n> RETURNS datetime\n> AS '-' LANGUAGE 'internal';\n\nDid that. Could anyone please tell me how to drop this function?\n\n> When I run this same thing on v6.3, I get a date sometime in 1974 which\n> I think might actually be derived from a pointer interpreted as an\n> integer :(\n> \n> postgres=> select abstime_datetime(0);\n> abstime_datetime\n> ----------------------------\n> Wed Apr 24 18:51:28 1974 GMT\n> (1 row)\n> postgres=> select abstime_datetime(900000000);\n> abstime_datetime\n> ----------------------------\n> Wed Apr 24 18:37:12 1974 GMT\n> (1 row)\n\nmm=> select abstime_datetime(0);\nabstime_datetime\n----------------\nepoch \n(1 row)\n\nmm=> select abstime_datetime(900000000);\nabstime_datetime\n----------------\nepoch \n(1 row)\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n",
"msg_date": "Tue, 3 Mar 1998 15:03:17 +0100 (CET)",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Lost a function overloading capability in v6.3"
},
{
"msg_contents": "> Thomas G. Lockhart writes:\n> > CREATE FUNCTION abstime_datetime(int4)\n> > RETURNS datetime\n> > AS '-' LANGUAGE 'internal';\n>\n> Did that. Could anyone please tell me how to drop this function?\n\ndestroydbcreatedb\n\nOops. Sorry about that. The good news is that the function isn't damaging to\nyour system :-/\n\n> > When I run this same thing on v6.3, I get a date sometime in 1974 which\n> > I think might actually be derived from a pointer interpreted as an\n> > integer :(\n> >\n> > postgres=> select abstime_datetime(0);\n> > abstime_datetime\n> > ----------------------------\n> > Wed Apr 24 18:51:28 1974 GMT\n> > (1 row)\n> > postgres=> select abstime_datetime(900000000);\n> > abstime_datetime\n> > ----------------------------\n> > Wed Apr 24 18:37:12 1974 GMT\n> > (1 row)\n>\n> mm=> select abstime_datetime(0);\n> abstime_datetime\n> ----------------\n> epoch\n> (1 row)\n>\n> mm=> select abstime_datetime(900000000);\n> abstime_datetime\n> ----------------\n> epoch\n> (1 row)\n\nOK, so that is on a v6.3 system Michael? Then does anyone have an idea why\nmy system is showing a problem? Can someone running on Linux (RH4.2, 2.0.30\nkernel) try this out?? _Everything_ in the regression tests is OK...\n\n - Tom\n\n",
"msg_date": "Tue, 03 Mar 1998 14:33:47 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Lost a function overloading capability in v6.3"
},
{
"msg_contents": "Thomas G. Lockhart writes:\n> destroydbcreatedb\n> \n> Oops. Sorry about that. The good news is that the function isn't damaging to\n> your system :-/\n\nNo problem. It's my test DB anyway.\n \n> > mm=> select abstime_datetime(900000000);\n> > abstime_datetime\n> > ----------------\n> > epoch\n> > (1 row)\n\nIs this answer correct?\n\n> OK, so that is on a v6.3 system Michael? Then does anyone have an idea why\n\nThis was on v6.3. cvsup'ed this morning. \n\n> my system is showing a problem? Can someone running on Linux (RH4.2, 2.0.30\n> kernel) try this out?? _Everything_ in the regression tests is OK...\n\nI do run Linux, what else? :-)\n\nMy system is Debian 2.0, 2.0.33 kernel, glibc-2.0.7.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n",
"msg_date": "Tue, 3 Mar 1998 15:46:24 +0100 (CET)",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Lost a function overloading capability in v6.3"
},
{
"msg_contents": "Michael Meskes wrote:\n\n> Thomas G. Lockhart writes:\n> > destroydbcreatedb\n> >\n> > Oops. Sorry about that. The good news is that the function isn't damaging to\n> > your system :-/\n>\n> No problem. It's my test DB anyway.\n>\n> > > mm=> select abstime_datetime(900000000);\n> > > abstime_datetime\n> > > ----------------\n> > > epoch\n> > > (1 row)\n>\n> Is this answer correct?\n\nOh! I only noticed the first one, which was the right answer. You are getting\nzero into the function in both cases, where for my machine I'm getting garbage\nwhich might be uninitialized stuff or a pointer.\n\nNeither are correct.\n\nCan someone speculate where this might be happening? I don't even know where to\nstart looking :(\n\n - Tom\n\n",
"msg_date": "Tue, 03 Mar 1998 15:19:10 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Lost a function overloading capability in v6.3"
},
{
"msg_contents": "> Oh! I only noticed the first one, which was the right answer. You are getting\n> zero into the function in both cases, where for my machine I'm getting garbage\n> which might be uninitialized stuff or a pointer.\n>\n> Neither are correct.\n>\n> Can someone speculate where this might be happening? I don't even know where to\n> start looking :(\n\nMore information: my snapshots through 980112 work correctly, so the breakage\nhappened after that.\n\n - Tom\n\n",
"msg_date": "Tue, 03 Mar 1998 16:12:01 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Lost a function overloading capability in v6.3"
},
{
"msg_contents": "\nTom wrote:\n\n> > > When I run this same thing on v6.3, I get a date sometime in 1974 which\n> > > I think might actually be derived from a pointer interpreted as an\n> > > integer :(\n> > >\n> > > postgres=> select abstime_datetime(0);\n> > > abstime_datetime\n> > > ----------------------------\n> > > Wed Apr 24 18:51:28 1974 GMT\n> > > (1 row)\n> > > postgres=> select abstime_datetime(900000000);\n> > > abstime_datetime\n> > > ----------------------------\n> > > Wed Apr 24 18:37:12 1974 GMT\n> > > (1 row)\n> >\n> > mm=> select abstime_datetime(0);\n> > abstime_datetime\n> > ----------------\n> > epoch\n> > (1 row)\n> >\n> > mm=> select abstime_datetime(900000000);\n> > abstime_datetime\n> > ----------------\n> > epoch\n> > (1 row)\n>\n> OK, so that is on a v6.3 system Michael? Then does anyone have an idea why\n> my system is showing a problem? Can someone running on Linux (RH4.2, 2.0.30\n> kernel) try this out?? _Everything_ in the regression tests is OK...\n\n The bug is that when the language is internal but the\n function isn't in the builtin table, fmgr_info() (in fmgr.c)\n doesn't set fn_nargs. So fmgr_c() calls abstime_datetime()\n without arguments.\n\n Add\n\n finfo->fn_nargs = procedureStruct->pronargs;\n\n in the INTERNALlanguageId arm of the switch in fmgr.c (line\n 198).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 4 Mar 1998 11:45:05 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Lost a function overloading capability in v6.3"
},
{
"msg_contents": "> > > > When I run this same thing on v6.3, I get a date sometime in 1974 which\n> > > > I think might actually be derived from a pointer interpreted as an\n> > > > integer :(\n>\n> The bug is that when the language is internal but the\n> function isn't in the builtin table, fmgr_info() (in fmgr.c)\n> doesn't set fn_nargs. So fmgr_c() calls abstime_datetime()\n> without arguments.\n>\n> Add\n>\n> finfo->fn_nargs = procedureStruct->pronargs;\n>\n> in the INTERNALlanguageId arm of the switch in fmgr.c (line\n> 198).\n\nTHANKS JAN! I was just getting started doing a binary search of the source trees\ntrying to find when the problem was introduced. This saved me a _lot_ of time...\n\nI just tried it and it works! I added the line just below the elog(ERROR) check\nin that same block of code.\n\nNow, should this be done conditionally or is it OK to set this all the time? I\nlooked back at the v6.2.1 code and this field was not explicitly set in this\narea, so has the behavior of something else changed? What would you suggest??\n\n - Tom\n\n",
"msg_date": "Wed, 04 Mar 1998 13:53:13 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Lost a function overloading capability in v6.3"
},
{
"msg_contents": "\nTom wrote:\n>\n> > > > > When I run this same thing on v6.3, I get a date sometime in 1974 which\n> > > > > I think might actually be derived from a pointer interpreted as an\n> > > > > integer :(\n> >\n> > The bug is that when the language is internal but the\n> > function isn't in the builtin table, fmgr_info() (in fmgr.c)\n> > doesn't set fn_nargs. So fmgr_c() calls abstime_datetime()\n> > without arguments.\n> >\n> > Add\n> >\n> > finfo->fn_nargs = procedureStruct->pronargs;\n> >\n> > in the INTERNALlanguageId arm of the switch in fmgr.c (line\n> > 198).\n>\n> THANKS JAN! I was just getting started doing a binary search of the source trees\n> trying to find when the problem was introduced. This saved me a _lot_ of time...\n>\n> I just tried it and it works! I added the line just below the elog(ERROR) check\n> in that same block of code.\n>\n> Now, should this be done conditionally or is it OK to set this all the time? I\n> looked back at the v6.2.1 code and this field was not explicitly set in this\n> area, so has the behavior of something else changed? What would you suggest??\n\n I think it's O.K. to set it all the time. As far as I can\n see, the declarations for the builtin functions have the\n correct nargs settings (varcharin 3 args). And this is what\n they have in the pg_proc's pronargs attribute. Adding the\n above line only touches overloading builtin functions. Since\n the call of such an overload goes through fmgr_c(), it MUST\n be done (fmgr_c must know how many arguments to pass).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 4 Mar 1998 16:10:19 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Lost a function overloading capability in v6.3"
}
] |
[
{
"msg_contents": "> Thus spake Andrew Martin\n> > > up just with a comment like \"Linux != Unix\"...which, it isn't, its a\n> > > Unix-like clone...but they can't seem to figure the distinction *rofl*\n> > \n> > Agreed... :-) But BSD isn't Unix either - not officially. [Waits for\n> > Marc to disagree, again...]\n> \n> Of course it is. It has direct lineage back the Bell Labs. There is\n> no AT&T code left in but you can most definitely say \"BSD Unix\" where\n> you can't say \"Linux Unix.\" For many years Berkeley was the main\n> development hotbed for Unix. In fact, BSD was eventually fed back\n> into SVR4.\n\n'fraid it isn't. Unix is a trademark and can only be applied to systems\nwhich the trademark owner approves. Just 'cos the code has a certain\nheritage doesn't mean that the current version is approved. There is\na FAQ somewhere which discusses all the issues - I forget the details.\n\n> \n> > Not to mention the fact that at least one release of Linux did go through\n> > full Posix certification and is thus allowed to be called Unix :-)\n> \n> Posix != Unix. NT is a Posix system. So is OpenVMS.\nTrue - I was over zealous there. However the release was given approval for\nthe Unix label to be applied.\n\n> \n> BTW, which version of Linux was Posix certified and who paid for it?\nIt was Linux-FT - I believe the company producing it is now defunct :-(\n\n\nAndrew\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n",
"msg_date": "Tue, 3 Mar 1998 14:41:59 GMT",
"msg_from": "Andrew Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "Thus spake Andrew Martin\n> > Of course it is. It has direct lineage back the Bell Labs. There is\n> > no AT&T code left in but you can most definitely say \"BSD Unix\" where\n> > you can't say \"Linux Unix.\" For many years Berkeley was the main\n> > development hotbed for Unix. In fact, BSD was eventually fed back\n> > into SVR4.\n> \n> 'fraid it isn't. Unix is a trademark and can only be applied to systems\n> which the trademark owner approves. Just 'cos the code has a certain\n> heritage doesn't mean that the current version is approved. There is\n> a FAQ somewhere which discusses all the issues - I forget the details.\n\nSure, sure. It isn't Unix if there's a liar^H^H^Hawyer in the room\nbut we know who it's parents are.\n\n> > BTW, which version of Linux was Posix certified and who paid for it?\n> It was Linux-FT - I believe the company producing it is now defunct :-(\n\nFigures. Perhaps they should have spent their money elsewhere. I don't\nknow anyone personally who is really impressed with Posix certification.\nThose who really understand know that it is meaningless and those that\ndon't could care less. There's only a small constituency somewhere in\nthe middle there that think it is important and they aren't buying\nanything that has any hint of \"free\" about it.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 3 Mar 1998 23:31:14 -0500 (EST)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
}
] |
[
{
"msg_contents": "> > > > What version of tar understands how to ungzip a .gz file?\n> > > >\n> > > > Is this what the 'z' flag is for?\n> > >\n> > > gnu tar supports the 'z' flag to uncompress and untar at the same\n> > > time...\n> >\n> > This sucks. As a group that seems to not like GNU (or at least their\n> > license), we require enough of their tools to compile/install postgres.\n> >\n> > Off to see the wizard at the gnu ftp site...\n> \n> Quit whining and send in some patches :) I hacked those makefiles at the end\n> of a 10 hour push to get the docs wrapped up. The best thing that could be\n> said for them is that they seemed to work on my machine (and I guess on\n> postgresql.org now that I think about it).\n\nI didn't send patches since I wasn't sure if you were still working on it.\n\nSomething like \"gzip -dc file.tar.gz | tar -xvf -\" uncompress' it in place.\nI'll play around with -C to move it around.\n\n> Could we just replace the \"tar zxf\" with \"uncompress ... | tar xf\"? Does\n> anyone else have a strong opinion on (or experience with) makefiles for the\n> postgres distribution who want to help Darren get out from under the gnu\n> usage??\n\nI don't really have a problem with gnu stuff, but the machine that I put\npostgres on is a development machine here for other folks too. Also used\nfor the src for our product line. I can't just go drop in a new tar or\nwhat not...\n\nDarren\n",
"msg_date": "Tue, 3 Mar 1998 09:56:04 -0500",
"msg_from": "[email protected] (Darren King)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] doc troubles."
},
{
"msg_contents": "> > > This sucks. As a group that seems to not like GNU (or at least their\n> > > license), we require enough of their tools to compile/install postgres.\n> > Quit whining and send in some patches :) I hacked those makefiles at the end\n> > of a 10 hour push to get the docs wrapped up. The best thing that could be\n> > said for them is that they seemed to work on my machine (and I guess on\n> > postgresql.org now that I think about it).\n> I didn't send patches since I wasn't sure if you were still working on it.\n\nNot until I get some ideas on what would work better on more platforms...\n\n> Something like \"gzip -dc file.tar.gz | tar -xvf -\" uncompress' it in place.\n> I'll play around with -C to move it around.\n\nWell, can't \"uncompress\" work with gzip'd files? I recall that it can, but that\nmay have been on a box (Dec Alpha?) with some upgraded \"uncompress\" capabilities.\nIf it can work, then we should do something like\n\n uncompress -c file.tar.gz | tar xf -\n\nto get away from any non-generic utilities. Is zcat (== uncompress -c) standard\non all machines?\n\nbtw, for generating the docs tar files I used \"--exclude='*.sgml'\" options on\ntar. Is that gnu-specific also?\n\n - Tom\n\n",
"msg_date": "Tue, 03 Mar 1998 15:53:29 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] doc troubles."
},
{
"msg_contents": "> to get away from any non-generic utilities. Is zcat (== uncompress -c) standard\n> on all machines?\n> \n> btw, for generating the docs tar files I used \"--exclude='*.sgml'\" options on\n> tar. Is that gnu-specific also?\n\nYep. Any --X option is GNU, I think.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 3 Mar 1998 11:31:00 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] doc troubles."
},
{
"msg_contents": "On Tue, 3 Mar 1998, Thomas G. Lockhart wrote:\n\n> Well, can't \"uncompress\" work with gzip'd files? I recall that it can, but that\n> may have been on a box (Dec Alpha?) with some upgraded \"uncompress\" capabilities.\n> If it can work, then we should do something like\n> \n> uncompress -c file.tar.gz | tar xf -\n\n\tgzip -cd will uncompress 'compressed' files, but compress can't\ntouch 'gzip'd files...\n\n\n",
"msg_date": "Tue, 3 Mar 1998 13:26:34 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] doc troubles."
}
] |
[
{
"msg_contents": "> Could we just replace the \"tar zxf\" with \"uncompress ... | tar xf\"? Does\n> anyone else have a strong opinion on (or experience with) makefiles for\nthe\n> postgres distribution who want to help Darren get out from under the gnu\n> usage??\n\nBest is: (believe me, really uncompress will not work nor zcat, I use it for\nSAP DB backup, it works)\n\tgzip -cd <somefile>.tar.gz | tar -xvf -\n\nAndreas\n",
"msg_date": "Tue, 3 Mar 1998 16:34:33 +0100 ",
"msg_from": "Zeugswetter Andreas SARZ <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] doc troubles."
}
] |
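As a concrete illustration of the portable form suggested above, the same pipeline could replace the "tar zxf" calls in the doc makefiles; the tarball name and target directory here are placeholders:

    # portable extraction: relies only on a gzip executable being in the path
    gzip -cd postgres.tar.gz | (cd /usr/local/pgsql/doc && tar xf -)

This avoids both GNU tar's 'z' flag and any reliance on zcat or uncompress understanding .gz files.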
[
{
"msg_contents": "\nWhy was ecpg pulled out of the interfaces/Makefile ?\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat Software, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n\n\n",
"msg_date": "Tue, 3 Mar 1998 12:36:48 -0500 (EST)",
"msg_from": "Cristian Gafton <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.3 question..."
},
{
"msg_contents": "Cristian Gafton writes:\n> \n> Why was ecpg pulled out of the interfaces/Makefile ?\n> \n> Cristian\n\nHuh? I didn't realize that! Was it intentionally left out? If so I'd like to\nknow the reason. If it was unintentionally, please put it back in.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n",
"msg_date": "Thu, 5 Mar 1998 09:52:56 +0100 (CET)",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.3 question..."
},
{
"msg_contents": "On Thu, 5 Mar 1998, Michael Meskes wrote:\n\n> Cristian Gafton writes:\n> > \n> > Why was ecpg pulled out of the interfaces/Makefile ?\n> > \n> > Cristian\n> \n> Huh? I didn't realize that! Was it intentionally left out? If so I'd like to\n> know the reason. If it was unintentionally, please put it back in.\n\n\tIts bounced in and out, actually...we removed it for awhile there\nbecause makes were failing miserably right there :( I fear we must have\nforgotten to put it back in again before the release...\n\n\tI've put it back in...those with CVSup access, can you please grab\na new copy and make sure that it works for all of you?\n\n\n",
"msg_date": "Thu, 5 Mar 1998 08:19:58 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.3 question..."
}
] |
[
{
"msg_contents": "\n\n> > Or simply call text-BLOBs \"textblob\" of something like that.\n> > What does SQL-92 say about BLOBs anyway?\n> \n> Nothing afaik. That is why you get different meanings and usages between\n> database\n> products. I'd like to keep \"text\" as a useful string type. Conventionally,\n> generic\n> blobs are just binary objects with not much backend support (e.g. no\n> useful\n> operators other than perhaps \"=\").\n> \n> Imo generic blobs make more sense in a system without the capability to\n> add types;\n> perhaps a solution for Postgres would look a little different. At the\n> moment, the\n> frontend/backend protocol is different for large objects and everything\n> else, so\n> it would be difficult to transparently introduce blobs which behave\n> identically to\n> types which fit within a normal tuple.\n> \n> - Tom\nYup, that all sounds very plausible. But, since the meaning diverges between\nDB Systems\nI would suggest to maybe not enforce text for now (at least not in system\ntables). \nIt has almost the same behavior as varchar (does it ?), and since varchar is\nvery good now :-) \nI would enforce the use of varchar where it fits (like passwd in pg_shadow, \nbut not prosrc in pg_proc where text is appropriate).\nMaybe just to keep the doors open for a larger text datatype in the future.\n\nAndreas\n\n",
"msg_date": "Tue, 3 Mar 1998 18:56:58 +0100 ",
"msg_from": "Zeugswetter Andreas SARZ <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] text should be a blob field"
}
] |
[
{
"msg_contents": "> On Tue, 3 Mar 1998, Cristian Gafton wrote:\n> \n> > On Tue, 3 Mar 1998, The Hermit Hacker wrote:\n> > \n> > > \tWhat he said *scrambles to save this for next time*\n> > \n> > I can not belive this thing... :-) Hey, guys, you had a tough time doing\n> > 6.3, right ?\n> > \n> > Now that you all said your necessary rant on the Linux vs. Others thing,\n> > please calm down b4 I join the thread :-) (oops, I think I just did)\n> \n> \tYou joined much much too late though...this has been going on\n> since, oh, day one :) And, most ppl involved in the rant know me and my\n> opinions (they aren't necessarily the same as what I use as my bait, of\n> course, but ya gotta admit, Linux'ers are just soooooooo easy to bait\n> *grin*)\n\nWell if Linux'ers _and_ BSD'ers ran a real os, maybe this thread would die.\n\nThere, _that's_ bait. :)\n\ndarrenk\n",
"msg_date": "Tue, 3 Mar 1998 12:59:30 -0500",
"msg_from": "[email protected] (Darren King)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "> \n> > On Tue, 3 Mar 1998, Cristian Gafton wrote:\n> > \n> > > On Tue, 3 Mar 1998, The Hermit Hacker wrote:\n> > > \n> > > > \tWhat he said *scrambles to save this for next time*\n> > > \n> > > I can not belive this thing... :-) Hey, guys, you had a tough time doing\n> > > 6.3, right ?\n> > > \n> > > Now that you all said your necessary rant on the Linux vs. Others thing,\n> > > please calm down b4 I join the thread :-) (oops, I think I just did)\n> > \n> > \tYou joined much much too late though...this has been going on\n> > since, oh, day one :) And, most ppl involved in the rant know me and my\n> > opinions (they aren't necessarily the same as what I use as my bait, of\n> > course, but ya gotta admit, Linux'ers are just soooooooo easy to bait\n> > *grin*)\n> \n> Well if Linux'ers _and_ BSD'ers ran a real os, maybe this thread would die.\n> \n> There, _that's_ bait. :)\n\nOoooh, them's fight'en words.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 10:55:20 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
},
{
"msg_contents": "On Tue, 3 Mar 1998, Darren King wrote:\n\n> > On Tue, 3 Mar 1998, Cristian Gafton wrote:\n> > \n> > > On Tue, 3 Mar 1998, The Hermit Hacker wrote:\n> > > \n> > > > \tWhat he said *scrambles to save this for next time*\n> > > \n> > > I can not belive this thing... :-) Hey, guys, you had a tough time doing\n> > > 6.3, right ?\n> > > \n> > > Now that you all said your necessary rant on the Linux vs. Others thing,\n> > > please calm down b4 I join the thread :-) (oops, I think I just did)\n> > \n> > \tYou joined much much too late though...this has been going on\n> > since, oh, day one :) And, most ppl involved in the rant know me and my\n> > opinions (they aren't necessarily the same as what I use as my bait, of\n> > course, but ya gotta admit, Linux'ers are just soooooooo easy to bait\n> > *grin*)\n> \n> Well if Linux'ers _and_ BSD'ers ran a real os, maybe this thread would die.\n> \n> There, _that's_ bait. :)\n\n\t*rofl* Surely you aren't referring to AIX...ouch, that\nhurts...don't make me laugh so hard next time :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 4 Mar 1998 17:14:12 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
}
] |
[
{
"msg_contents": "\ndon't you think there should be some sort of automatic conversion\nhere?\n",
"msg_date": "Tue, 3 Mar 1998 13:43:33 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "no operator '=' for types char16 and text"
},
{
"msg_contents": "> don't you think there should be some sort of automatic conversion\n> here?\n\nthe charX types (char2, 4, 8, 16) are not well supported. They are\nlikely to disappear in the next release, with the parser mapping them to\nchar(X) (or varchar(X), whichever is the best match) for backward\ncompatibility.\n\nThe major string types are char(), varchar(), and text.\n\n - Tom\n\n",
"msg_date": "Wed, 04 Mar 1998 03:05:12 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] no operator '=' for types char16 and text"
},
{
"msg_contents": "Thus spake Thomas G. Lockhart\n> the charX types (char2, 4, 8, 16) are not well supported. They are\n> likely to disappear in the next release, with the parser mapping them to\n> char(X) (or varchar(X), whichever is the best match) for backward\n> compatibility.\n> \n> The major string types are char(), varchar(), and text.\n\nIs there a performance hit if the values chosen are not powers of 2?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 3 Mar 1998 22:12:12 -0500 (EST)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] no operator '=' for types char16 and text"
},
{
"msg_contents": "> \n> Thus spake Thomas G. Lockhart\n> > the charX types (char2, 4, 8, 16) are not well supported. They are\n> > likely to disappear in the next release, with the parser mapping them to\n> > char(X) (or varchar(X), whichever is the best match) for backward\n> > compatibility.\n> > \n> > The major string types are char(), varchar(), and text.\n> \n> Is there a performance hit if the values chosen are not powers of 2?\n\nNope.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 3 Mar 1998 22:35:53 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] no operator '=' for types char16 and text"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > > The major string types are char(), varchar(), and text.\n> > Is there a performance hit if the values chosen are not powers of 2?\n> Nope.\n\nCool.\n\nExcept I'm a dinosaur and somehow it doesn't feel right if it isn't a\npower of two. Oh well. :-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 3 Mar 1998 23:23:25 -0500 (EST)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] no operator '=' for types char16 and text"
},
{
"msg_contents": "> \n> Thus spake Bruce Momjian\n> > > > The major string types are char(), varchar(), and text.\n> > > Is there a performance hit if the values chosen are not powers of 2?\n> > Nope.\n> \n> Cool.\n> \n> Except I'm a dinosaur and somehow it doesn't feel right if it isn't a\n> power of two. Oh well. :-)\n\nThat's what char2,char4, char8, and char16 are for.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 11:01:16 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] no operator '=' for types char16 and text"
},
{
"msg_contents": "On Wed, 4 March 1998, at 11:01:16, Bruce Momjian wrote:\n\n> > \n> > Thus spake Bruce Momjian\n> > > > > The major string types are char(), varchar(), and text.\n> > > > Is there a performance hit if the values chosen are not powers of 2?\n> > > Nope.\n> > \n> > Cool.\n> > \n> > Except I'm a dinosaur and somehow it doesn't feel right if it isn't a\n> > power of two. Oh well. :-)\n> \n> That's what char2,char4, char8, and char16 are for.\n\nBut these types are depreciated! :)\n",
"msg_date": "Wed, 4 Mar 1998 09:26:36 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] no operator '=' for types char16 and text"
},
{
"msg_contents": "> \n> On Wed, 4 March 1998, at 11:01:16, Bruce Momjian wrote:\n> \n> > > \n> > > Thus spake Bruce Momjian\n> > > > > > The major string types are char(), varchar(), and text.\n> > > > > Is there a performance hit if the values chosen are not powers of 2?\n> > > > Nope.\n> > > \n> > > Cool.\n> > > \n> > > Except I'm a dinosaur and somehow it doesn't feel right if it isn't a\n> > > power of two. Oh well. :-)\n> > \n> > That's what char2,char4, char8, and char16 are for.\n> \n> But these types are depreciated! :)\n\nWe can call it our \"dinosaur\"-compatability module. :-)\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 13:23:38 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] no operator '=' for types char16 and text"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > Except I'm a dinosaur and somehow it doesn't feel right if it isn't a\n> > power of two. Oh well. :-)\n> \n> That's what char2,char4, char8, and char16 are for.\n\nI thought that that was going away.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 4 Mar 1998 16:57:06 -0500 (EST)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] no operator '=' for types char16 and text"
}
] |
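A small SQL sketch of the advice in this thread, with invented table and column names: declare new columns with the supported string types rather than the deprecated charX family, and pick whatever lengths fit the data, powers of two or not:

    -- instead of char16 and friends:
    CREATE TABLE example (code char(16), tag varchar(13), descr text);

    -- ordinary comparisons need no special operators,
    -- and the odd length of "tag" costs nothing
    SELECT * FROM example WHERE descr = 'some value';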
[
{
"msg_contents": "Hi,\n\nAs part of my 'learning postgresql' project I've been hacking the the\nbackend\na bit.\nNow I get the following error message:\n\nERROR: _bt_orderkeys: key(s) for attribute 1 missed\n\nfrom the index_getnext call in the following code:\n\n ScanKeyEntryInitialize(&skey[0], (bits16)0x0,\n ObjectTalksContextAttributeNumber,\n (RegProcedure)ObjectIdEqualRegProcedure,\n ObjectIdGetDatum(ctxo));\n\n namestrcpy(&name, (char*)ctxn);\n\n ScanKeyEntryInitialize(&skey[1], (bits16)0x0,\n 1,\n (RegProcedure)NameEqualRegProcedure,\n NameGetDatum(&name));\n\n scandesc = index_beginscan(indexrel,false,2,skey);\n\n if ((indexresult = index_getnext(scandesc,\nForwardScanDirection)))\n\nThe code is supposed to search an index created by the following statement:\n\nCREATE UNIQUE INDEX ot_context_idx on ot_context using btree (ctx, name)\n\nAny hints as to where I'm screwing up?:\n\nThanks for any hints,\nwith regards from Maurice.\n\n",
"msg_date": "Tue, 3 Mar 1998 23:08:34 +0100",
"msg_from": "\"Maurice Gittens\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: _bt_orderkeys: key(s) for attribute 1 missed"
}
] |
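One possible explanation, offered only as a guess: the scan keys handed to index_beginscan() are numbered by position within the index (1 for ctx and 2 for name in an index on (ctx, name)), not by the heap attribute number, and btree complains when it finds a key for attribute 2 but none for attribute 1. Under that assumption, and keeping the calls from the original message, the initialization would look roughly like this:

    /* sketch: key numbers refer to index columns, assuming ctx is column 1 */
    ScanKeyEntryInitialize(&skey[0], (bits16) 0x0,
                           (AttrNumber) 1,    /* first index column: ctx */
                           (RegProcedure) ObjectIdEqualRegProcedure,
                           ObjectIdGetDatum(ctxo));

    namestrcpy(&name, (char *) ctxn);

    ScanKeyEntryInitialize(&skey[1], (bits16) 0x0,
                           (AttrNumber) 2,    /* second index column: name */
                           (RegProcedure) NameEqualRegProcedure,
                           NameGetDatum(&name));

    scandesc = index_beginscan(indexrel, false, 2, skey);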
[
{
"msg_contents": "> > We've been talking with the RedHat guy on trying to identify a glibc2\n> > problem (and fix it) for the rpms. I'm not certain if he will release\n> > something before we're done or not.\n> >\n> > I'm currently having trouble making a small test case which exhibits\n> > the\n> > rounding problem. And I'd _really_ like to avoid doing a clean v6.3\n> > install from sources on my RH5.0 box, but I may have to so I can do\n> > some\n> > debugging there :(\n> > - Tom\n> I've been released to do some debugging if you'd like. I'm not\n> a very advanced Linux admin (read: I'd need lot's of direction).\n> I'm also a infant when it comes to PostgreSQL (read: 6.3 will be\n> my first install in the next hour or so). But I do have access to\n> several machines running Hurricane(RH5.0). Just let me know\n> if there's anything that I can help you with.\n\nOK, do your clean install, then\n1) go into backend/utils/adt and edit the Makefile to add the flag\n\"-DDATEDEBUG\" to the CFLAGS line.\n2) do a \"make clean\" from that directory\n3) go back to src/ and do a \"make install\"\n4) run the backend from a terminal window.\n5) From another window fire up psql, and try\n select '1 min'::timespan;\n\nand\n6) post the output from both windows.\n\nGood luck :)\n\n - Tom\n\n",
"msg_date": "Wed, 04 Mar 1998 02:47:32 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [QUESTIONS] 6.3 Release"
}
] |
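The same recipe as a shell transcript; the data directory is a placeholder and -DDATEDEBUG is assumed to be the only flag being added:

    cd src/backend/utils/adt
    # edit Makefile here: append -DDATEDEBUG to the CFLAGS line
    make clean
    cd ../../..          # back up to src/
    make install

    # terminal 1: start the postmaster so its stderr stays visible
    postmaster -D /usr/local/pgsql/data

    # terminal 2: connect with psql and run
    #   select '1 min'::timespan;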
[
{
"msg_contents": "\ndo we have version control backups of each change submiitted to the\nmaster CVS source? I would like to see at what point the bug is\nintroduced.\n",
"msg_date": "Tue, 3 Mar 1998 23:07:34 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "version control backups?"
},
{
"msg_contents": "> do we have version control backups of each change submiitted to the\n> master CVS source? I would like to see at what point the bug is\n> introduced.\n\nYes, and with CVSup you can specify a time, down to the second, for\nwhich you want a snapshot.\n\n - Tom\n\n",
"msg_date": "Wed, 04 Mar 1998 13:57:51 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] version control backups?"
},
{
"msg_contents": "> \n> \n> do we have version control backups of each change submiitted to the\n> master CVS source? I would like to see at what point the bug is\n> introduced.\n> \n> \n\nI thought the cvs log was on our web site. If you have logins privs to\npostgresql.org, 'cvs log' does the trick.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 11:09:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] version control backups?"
},
{
"msg_contents": "\nWhat if I don't?\nIt can be done remotely no?\n\nOn Wed, 4 March 1998, at 11:09:26, Bruce Momjian wrote:\n\n> I thought the cvs log was on our web site. If you have logins privs to\n> postgresql.org, 'cvs log' does the trick.\n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 09:27:29 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] version control backups?"
},
{
"msg_contents": "> \n> \n> What if I don't?\n> It can be done remotely no?\n> \n> On Wed, 4 March 1998, at 11:09:26, Bruce Momjian wrote:\n> \n> > I thought the cvs log was on our web site. If you have logins privs to\n> > postgresql.org, 'cvs log' does the trick.\n\nI don't think so.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 12:37:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] version control backups?"
},
{
"msg_contents": "On Wed, 4 Mar 1998, Bruce Momjian wrote:\n\n> > \n> > \n> > What if I don't?\n> > It can be done remotely no?\n> > \n> > On Wed, 4 March 1998, at 11:09:26, Bruce Momjian wrote:\n> > \n> > > I thought the cvs log was on our web site. If you have logins privs to\n> > > postgresql.org, 'cvs log' does the trick.\n> \n> I don't think so.\n\n\tNo, it can't be...I'm including the complete CVS repository at the\ntime of the release on the CD, but that is about it...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 4 Mar 1998 17:18:08 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] version control backups?"
}
] |
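A sketch of how that can be used to hunt for the point where a bug appeared; the file name and dates are placeholders, and the CVSup date format shown is from memory, so it should be checked against the cvsup documentation:

    # with access to the repository, show a file's change history
    cvs log src/backend/commands/explain.c

    # or diff a file between two dates directly
    cvs diff -D "1998-02-20" -D "1998-03-03" src/backend/commands/explain.c

    # with CVSup, a snapshot as of a given moment can be pulled by adding
    # a date line to the supfile, e.g.
    #   *default date=1998.02.20.00.00.00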
[
{
"msg_contents": "> Best is: (believe me, really uncompress will not work nor zcat, I use it\nfor\n> SAP DB backup, it works)\n> \tgzip -cd <somefile>.tar.gz | tar -xvf -\n\nThis is also the most portable solution. zcat is the same as uncompress -c,\nbut it only works\nfor tar.gz files iff gzip is fully installed (replaced zcat and uncompress)\nand is first on your search path. \nThe above statement only needs to find the gzip executable somewhere in the\npath.\n(Everybody should have that !)\n\nAndreas\n\n",
"msg_date": "Wed, 4 Mar 1998 09:24:25 +0100 ",
"msg_from": "Zeugswetter Andreas SARZ <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] doc troubles with tar.gz"
}
] |
[
{
"msg_contents": ">> the charX types (char2, 4, 8, 16) are not well supported. They are\n>> likely to disappear in the next release, with the parser mapping them to\n>> char(X) (or varchar(X), whichever is the best match) for backward\n>> compatibility.\n>> \n>> The major string types are char(), varchar(), and text.\n\n> Is there a performance hit if the values chosen are not powers of 2?\n\nNo, the smaller the better.\n\nAndreas\n",
"msg_date": "Wed, 4 Mar 1998 09:34:15 +0100 ",
"msg_from": "Zeugswetter Andreas SARZ <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] no operator '=' for types char16 and text"
}
] |
[
{
"msg_contents": "Since I wanted to know it, I extended explain to output the used (only the\nfirst)\nindex on IndexScan.\nAn explain with this patch applied says:\n\ntemplate1=> explain select * from pg_class where oid=1;\nNOTICE: QUERY PLAN:\nIndex Scan using pg_class_oid_index on pg_class (cost=2.03 size=1 width=74)\nEXPLAIN\n\nDoes somebody want to add it to CVS please ?\nAndreas\n\n--- src/backend/commands/explain.c\tTue Mar 3 21:10:34 1998\n\n+++\nsrc/backend/commands/explain.c.orig\tThu Feb 26 05:30:58 1998\n\n@@ -23,6\n+23,7 @@\n\n #include <parser/parse_node.h>\n\n #include <optimizer/planner.h>\n\n\n#include <access/xact.h>\n\n+#include <utils/relcache.h>\n\n \n\n typedef struct\nExplainState\n\n {\n\n@@ -117,6 +118,8 @@\n\n static void\n\n explain_outNode(StringInfo\nstr, Plan *plan, int indent, ExplainState *es)\n\n {\n\n+\tList\n*l;\n\n+\tRelation\trelation;\n\n \tchar\t *pname;\n\n \tchar\nbuf[1000];\n\n \tint\t\t\ti;\n\n@@ -184,8 +187,12 @@\n\n\nappendStringInfo(str, pname);\n\n \tswitch (nodeTag(plan))\n\n \t{\n\n-\ncase T_SeqScan:\n\n \t\tcase T_IndexScan:\n\n+\nappendStringInfo(str, \" using \");\n\n+\t\t\tl = ((IndexScan *)\nplan)->indxid;\n\n+\t\t\trelation =\nRelationIdCacheGetRelation((int) lfirst(l));\n\n+\nappendStringInfo(str, (RelationGetRelationName(relation))->data);\n\n+\ncase T_SeqScan:\n\n \t\t\tif (((Scan *) plan)->scanrelid > 0)\n\n\n{\n\n \t\t\t\tRangeTblEntry *rte = nth(((Scan *)\nplan)->scanrelid - 1, es->rtable);\n\n\n",
"msg_date": "Wed, 4 Mar 1998 11:02:02 +0100 ",
"msg_from": "Zeugswetter Andreas SARZ <[email protected]>",
"msg_from_op": true,
"msg_subject": "Feature: output index name in explain ..."
},
{
"msg_contents": "Zeugswetter Andreas SARZ wrote:\n> \n> Since I wanted to know it, I extended explain to output the used (only the\n> first)\n> index on IndexScan.\n> An explain with this patch applied says:\n> \n> template1=> explain select * from pg_class where oid=1;\n> NOTICE: QUERY PLAN:\n> Index Scan using pg_class_oid_index on pg_class (cost=2.03 size=1 width=74)\n\nI like this. Any objections ?\n\nVadim\n",
"msg_date": "Wed, 25 Mar 1998 15:52:04 +0700",
"msg_from": "\"Vadim B. Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Feature: output index name in explain ..."
},
{
"msg_contents": "> \n> Zeugswetter Andreas SARZ wrote:\n> > \n> > Since I wanted to know it, I extended explain to output the used (only the\n> > first)\n> > index on IndexScan.\n> > An explain with this patch applied says:\n> > \n> > template1=> explain select * from pg_class where oid=1;\n> > NOTICE: QUERY PLAN:\n> > Index Scan using pg_class_oid_index on pg_class (cost=2.03 size=1 width=74)\n> \n> I like this. Any objections ?\n\nLove it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 25 Mar 1998 09:46:38 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Feature: output index name in explain ..."
}
] |
[
{
"msg_contents": "I can't seem to duplicate this but it happened once and I thought I\nwould mention it in case anyone else has seen it as well. I have a\ntable for one user and another for myself. Both tables have a table\ncalled _key. After creating the second database (I had to destroy and\ncreate it a few times) I looked at the first one and found that the data\nin it matched the new one. I was able to drop that table and recreate\nit without affecting the new one. Very strange.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 4 Mar 1998 09:52:22 -0500 (EST)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Bad interaction between databases"
}
] |
[
{
"msg_contents": "Hi,\n\n just to let anyone know:\n\n I did some analyzing and searched for areas that could gain\n more speedups for 6.4. First I had something like an\n optimizer cache in mind (planner remembers parsetree and if a\n subsequent parsetree only differs in const values, substitute\n consts by params and reuse saved plans instead of creating a\n new plan all the time).\n\n But this is what I got for the complete regression test (only\n queries that went through the planner counted):\n\n Parsing and rule rewriting 14 %\n Optimizer and planning 6 %\n Query execution 80 %\n ------\n Total time in backend 100 %\n\n It clearly shows that there's no need to speedup the\n optimizer. The parser and the executor are the ones that\n consume the time. Making the planner/optimizer smarter\n resulting better plans faster to execute is the way.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 4 Mar 1998 16:43:03 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Speedups"
},
{
"msg_contents": "> Parsing and rule rewriting 14 %\n> Optimizer and planning 6 %\n> Query execution 80 %\n> ------\n> Total time in backend 100 %\n> \n\nNice analysis. Certainly looks like Query Execution is the way to go. \nprofiling has shown quite a lot to help us. Usually it is not the\nexecutor itself, but the subsystems it calls.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 11:30:10 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Speedups"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> Hi,\n> \n> just to let anyone know:\n> \n> I did some analyzing and searched for areas that could gain\n> more speedups for 6.4. First I had something like an\n> optimizer cache in mind (planner remembers parsetree and if a\n> subsequent parsetree only differs in const values, substitute\n> consts by params and reuse saved plans instead of creating a\n> new plan all the time).\n> \n> But this is what I got for the complete regression test (only\n> queries that went through the planner counted):\n> \n> Parsing and rule rewriting 14 %\n> Optimizer and planning 6 %\n> Query execution 80 %\n> ------\n> Total time in backend 100 %\n> \n> It clearly shows that there's no need to speedup the\n> optimizer. The parser and the executor are the ones that\n> consume the time. Making the planner/optimizer smarter\n> resulting better plans faster to execute is the way.\n\nThis may sound like an obvious question, but if a user defines a\nquery, do we save the query plan? This would reduce the\ncommunications between the client and server (a small gain), and allow\nthe server to start executing the query as soon as it recognized the\nname of the stored query and parsed the arguments.\n\nOcie Mitchell\n\n",
"msg_date": "Wed, 4 Mar 1998 10:57:05 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Speedups"
},
{
"msg_contents": "On Wed, 4 Mar 1998 [email protected] wrote:\n\n> This may sound like an obvious question, but if a user defines a\n> query, do we save the query plan? This would reduce the\n> communications between the client and server (a small gain), and allow\n> the server to start executing the query as soon as it recognized the\n> name of the stored query and parsed the arguments.\n\nNot sure ofhand, but it would be useful for JDBC's PreparedStatement and\nCallableStatement classes\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Wed, 4 Mar 1998 22:30:51 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Speedups"
},
{
"msg_contents": "Peter T Mount wrote:\n> \n> On Wed, 4 Mar 1998 [email protected] wrote:\n> \n> > This may sound like an obvious question, but if a user defines a\n> > query, do we save the query plan? This would reduce the\n> > communications between the client and server (a small gain), and allow\n> > the server to start executing the query as soon as it recognized the\n> > name of the stored query and parsed the arguments.\n> \n> Not sure ofhand, but it would be useful for JDBC's PreparedStatement and\n> CallableStatement classes\n\nWe can implement it very easy, and fast. Execution plan may be reused\nmany times. Is this feature in standard ? \nWhat is proposed syntax if not ?\n\nVadim\n",
"msg_date": "Thu, 05 Mar 1998 09:29:39 +0700",
"msg_from": "\"Vadim B. Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Speedups"
},
{
"msg_contents": "Vadim B. Mikheev wrote:\n> \n> Peter T Mount wrote:\n> > \n> > On Wed, 4 Mar 1998 [email protected] wrote:\n> > \n> > > This may sound like an obvious question, but if a user defines a\n> > > query, do we save the query plan? This would reduce the\n> > > communications between the client and server (a small gain), and allow\n> > > the server to start executing the query as soon as it recognized the\n> > > name of the stored query and parsed the arguments.\n> > \n> > Not sure ofhand, but it would be useful for JDBC's PreparedStatement and\n> > CallableStatement classes\n> \n> We can implement it very easy, and fast. Execution plan may be reused\n> many times. Is this feature in standard ? \n> What is proposed syntax if not ?\n\nI don't think it is so much a question of syntax as it is a question\nof what we do in the backend. Suppose I create a stored query in SQL.\nWe already store the SQL source for this in the database, right? So\nwhen it comes time to execute the query, we take this SQL and execute\nit as if the user had entered it directly. What I am proposing would\nbe to basically store the compiled query plan as well. \n\nI do see a couple sticky points:\n\nWe would need some information about which variables are to be\nsubstituted into this query plan, but this should be fairly\nstraightforward.\n\nSome querys may not respond well to this, for example, if a table had\nan index on an integer field f1, this would probably be the best way\nto satisfy a select where f1<10. But if this were in a query as f1<x,\nthen a sufficiently high value of x might make this not such a good\nway to run the query. I haven't looked into this, but I would assume\nthat the optimizer relies on the specific values in such cases.\n\nWe need to be able to handle changes to the structures and contents of\nthe tables. If the query plan is built and we add 10000 rows to a\ntable it references, the query should probably be recompiled. We\ncould probably do this at vacuum time. There is also a small chance\nthat a table or index that the query plan was using is dropped. We\ncould automatically rebuild the query if the table was created after\nthe query was compiled.\n\n\nBoy, to look at this, you'd think I had already built one of these :)\nI haven't but I'm willing to give it a shot.\n\nOcie\n",
"msg_date": "Wed, 4 Mar 1998 22:26:29 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Speedups"
},
{
"msg_contents": "[email protected] wrote:\n> \n> > > Not sure ofhand, but it would be useful for JDBC's PreparedStatement and\n> > > CallableStatement classes\n> >\n> > We can implement it very easy, and fast. Execution plan may be reused\n> > many times. Is this feature in standard ?\n> > What is proposed syntax if not ?\n> \n> I do see a couple sticky points:\n> \n> We would need some information about which variables are to be\n> substituted into this query plan, but this should be fairly\n> straightforward.\n\nParser, Planner/Optimizer and Executor are able to handle parameters!\nNo problems with this.\n\n> Some querys may not respond well to this, for example, if a table had\n> an index on an integer field f1, this would probably be the best way\n> to satisfy a select where f1<10. But if this were in a query as f1<x,\n> then a sufficiently high value of x might make this not such a good\n> way to run the query. I haven't looked into this, but I would assume\n> that the optimizer relies on the specific values in such cases.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nUnfortunately, no!\nWe have to add this feature of 'course.\nI don't know how we could deal with pre-compiled plans after this :(\nMay be, we could prepare/store not single plan, but some number of\npossible plans.\n\n> We need to be able to handle changes to the structures and contents of\n> the tables. If the query plan is built and we add 10000 rows to a\n> table it references, the query should probably be recompiled. We\n> could probably do this at vacuum time. There is also a small chance\n> that a table or index that the query plan was using is dropped. We\n> could automatically rebuild the query if the table was created after\n> the query was compiled.\n\nWe could mark stored plans as durty in such cases to force re-compiling\nwhen an application tries to use this plan.\n\nVadim\n",
"msg_date": "Thu, 05 Mar 1998 15:52:52 +0700",
"msg_from": "\"Vadim B. Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Speedups"
},
{
"msg_contents": "\nVadim wrote:\n>\n> [email protected] wrote:\n> >\n> > > > Not sure ofhand, but it would be useful for JDBC's PreparedStatement and\n> > > > CallableStatement classes\n> > >\n> > > We can implement it very easy, and fast. Execution plan may be reused\n> > > many times. Is this feature in standard ?\n> > > What is proposed syntax if not ?\n> >\n> > I do see a couple sticky points:\n> >\n> > We would need some information about which variables are to be\n> > substituted into this query plan, but this should be fairly\n> > straightforward.\n>\n> Parser, Planner/Optimizer and Executor are able to handle parameters!\n> No problems with this.\n\n Nice discussion - especially when looking at what I initially\n posted.\n\n I assume you think about using SPI's saved plan feature for\n it. Right?\n\n>\n> > Some querys may not respond well to this, for example, if a table had\n> > an index on an integer field f1, this would probably be the best way\n> > to satisfy a select where f1<10. But if this were in a query as f1<x,\n> > then a sufficiently high value of x might make this not such a good\n> > way to run the query. I haven't looked into this, but I would assume\n> > that the optimizer relies on the specific values in such cases.\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> Unfortunately, no!\n> We have to add this feature of 'course.\n> I don't know how we could deal with pre-compiled plans after this :(\n> May be, we could prepare/store not single plan, but some number of\n> possible plans.\n\n That's something I thought about when I used the SPI\n functions when I built PL/Tcl. Since the saved plan will be\n executed via SPI_execp(), we could change some details there.\n Currently SPI_prepare() and SPI_saveplan() return the plan\n itself. But they could also return a little control struct\n that contains the plan and other information. Since I don't\n expect someone uses these plans for something else than\n calling SPI_execp(), it wouldn't break anything.\n\n SPI_execp() can do some timing calculations. For each\n execution of a plan it collects the runtime in microseconds\n (gettimeofday()). After the 5th or 10th call, it builds an\n average and remembers that permanently. For all subsequent\n calls it calculates the average time of the last 10 calls and\n if that gets much higher than the initial average it wouldn't\n hurt to silently prepare and save the plan again. Using\n averages lowers the problem that differences in the\n parameters can cause the runtime differences.\n\n Another possible reason for the runtime differences is the\n overall workload of the server. This could be very high\n during the initial average calculation. So I think it could\n be smart to rebuild the plan after e.g. 1000 calls ignoring\n any runtimes.\n\n>\n> > We need to be able to handle changes to the structures and contents of\n> > the tables. If the query plan is built and we add 10000 rows to a\n> > table it references, the query should probably be recompiled. We\n> > could probably do this at vacuum time. There is also a small chance\n> > that a table or index that the query plan was using is dropped. We\n> > could automatically rebuild the query if the table was created after\n> > the query was compiled.\n>\n> We could mark stored plans as durty in such cases to force re-compiling\n> when an application tries to use this plan.\n\n Yep. SPI must remember all prepared and saved plans (and\n forget about only prepared ones at transaction end). 
Things\n like dropping an index or modifying a table structure cause\n invalidations in the relcache, syscache and catcache (even if\n another backend did it in some cases). I think it must be\n possible to tell SPI from there that something happened and\n which relations are affected. If a plans rangetable contains\n the affected relation, the plan is marked durty.\n\n Things like functions, operators and aggregates are also\n objects that might change (drop/recreate function -> funcnode\n in plan get's unusable).\n\n I think the best would be that SPI_prepare() set's up a\n collection of Oid's that cause plan invalidation in the\n control structure. These are the Oid's of ALL objects\n (relations, indices, functions etc.) used in the plan. Then\n a call to SPI_invalidate(Oid) from the cache invalidation\n handlers doesn't have to walk through the plan itself.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 5 Mar 1998 16:53:09 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Speedups"
},
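For context, a rough sketch of the prepare-once, execute-many pattern being discussed, as seen from C code running inside the backend through SPI; the table name and parameter are invented, and the signatures are written from memory, so spi.h is the authority on the exact argument lists:

    #include "executor/spi.h"
    #include "catalog/pg_type.h"    /* for INT4OID */

    void
    run_many_times(void)
    {
        void   *plan;
        Oid     argtypes[1] = {INT4OID};
        Datum   values[1];
        int     i;

        SPI_connect();

        /* parse/plan once, with $1 as a parameter of type int4 */
        plan = SPI_prepare("select * from mytab where f1 < $1", 1, argtypes);
        plan = SPI_saveplan(plan);          /* keep the plan around for reuse */

        for (i = 0; i < 1000; i++)
        {
            values[0] = Int32GetDatum(i);
            SPI_execp(plan, values, NULL, 0);   /* reuse the stored plan */
        }

        SPI_finish();
    }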
{
"msg_contents": "> I don't think it is so much a question of syntax as it is a question\n> of what we do in the backend. Suppose I create a stored query in SQL.\n> We already store the SQL source for this in the database, right? So\n> when it comes time to execute the query, we take this SQL and execute\n> it as if the user had entered it directly. What I am proposing would\n> be to basically store the compiled query plan as well. \n> \n> I do see a couple sticky points:\n> \n> We would need some information about which variables are to be\n> substituted into this query plan, but this should be fairly\n> straightforward.\n> \n> Some querys may not respond well to this, for example, if a table had\n> an index on an integer field f1, this would probably be the best way\n> to satisfy a select where f1<10. But if this were in a query as f1<x,\n> then a sufficiently high value of x might make this not such a good\n> way to run the query. I haven't looked into this, but I would assume\n> that the optimizer relies on the specific values in such cases.\n\nI have thought about this. If we take a query string, remove all quoted\nconstants and numeric constants, we can automatically split apart the\nquery from the parameters. We can then look up the non-parameter query\nin our cache, and if it matches, replace the new contants with the old\nand run the query.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 5 Mar 1998 11:35:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Speedups"
},
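To make the idea concrete, here is a stand-alone sketch (not code from the backend) of stripping constants out of a query string so that textually similar queries map to one cache key; it ignores escaped quotes and other lexical subtleties, and it assumes the destination buffer is large enough:

    #include <ctype.h>

    /* Replace quoted and numeric constants with '?' so that
     * "select * from t where f1 < 10" and "select * from t where f1 < 99"
     * normalize to the same key.  Digits inside identifiers are kept.
     */
    static void
    normalize_query(const char *src, char *dst)
    {
        char        prev = ' ';

        while (*src)
        {
            if (*src == '\'')
            {
                /* skip a quoted constant (escaped quotes not handled) */
                src++;
                while (*src && *src != '\'')
                    src++;
                if (*src == '\'')
                    src++;
                *dst++ = '?';
                prev = '?';
            }
            else if (isdigit((unsigned char) *src) &&
                     !isalnum((unsigned char) prev) && prev != '_')
            {
                /* collapse a numeric constant */
                while (isdigit((unsigned char) *src) || *src == '.')
                    src++;
                *dst++ = '?';
                prev = '?';
            }
            else
            {
                prev = *src;
                *dst++ = *src++;
            }
        }
        *dst = '\0';
    }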
{
"msg_contents": "On Thu, 5 Mar 1998, Vadim B. Mikheev wrote:\n\n> [email protected] wrote:\n> > \n> > > > Not sure ofhand, but it would be useful for JDBC's PreparedStatement and\n> > > > CallableStatement classes\n> > >\n> > > We can implement it very easy, and fast. Execution plan may be reused\n> > > many times. Is this feature in standard ?\n> > > What is proposed syntax if not ?\n> > \n> > I do see a couple sticky points:\n> > \n> > We would need some information about which variables are to be\n> > substituted into this query plan, but this should be fairly\n> > straightforward.\n> \n> Parser, Planner/Optimizer and Executor are able to handle parameters!\n> No problems with this.\n> \n> > Some querys may not respond well to this, for example, if a table had\n> > an index on an integer field f1, this would probably be the best way\n> > to satisfy a select where f1<10. But if this were in a query as f1<x,\n> > then a sufficiently high value of x might make this not such a good\n> > way to run the query. I haven't looked into this, but I would assume\n> > that the optimizer relies on the specific values in such cases.\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> Unfortunately, no!\n> We have to add this feature of 'course.\n> I don't know how we could deal with pre-compiled plans after this :(\n> May be, we could prepare/store not single plan, but some number of\n> possible plans.\n\nFor inserts & updates I would have thought that the plan would be almost\nidentical (correct me here if I'm wrong here), and this is an area where\nJava really crawls at the moment. Try some 40k inserts in JDBC... and go \nout for the night. If we can improve things, it would be a big bonus.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Fri, 6 Mar 1998 06:25:06 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Speedups"
}
] |
[
{
"msg_contents": "> \n> Hi!\n> \n> I myself read into the sources of version 6.2.1 all over february.\n> \n> I did not learn it by heart, but I feel ready to start implementing\n> something now.\n> \n> I would like to start with the HAVING-clause and when this works I want\n> to continue with the other missing items (UNION, INTERSECT, MINUS,... (maybe\n> something of this is already implemented??))\n> \n> Could you please tell me, if this is all right for you?\n\nYep. HAVING is what I was hoping you could add, and OUTER. UNION and\nsubselects were added for 6.3. Added to TODO list.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 11:17:43 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Claiming Items"
}
] |
[
{
"msg_contents": "> > > On Tue, 3 Mar 1998, Cristian Gafton wrote:\n> > > \n> > > > On Tue, 3 Mar 1998, The Hermit Hacker wrote:\n> > > > \n> > > > > \tWhat he said *scrambles to save this for next time*\n> > > > \n> > > > I can not belive this thing... :-) Hey, guys, you had a tough time doing\n> > > > 6.3, right ?\n> > > > \n> > > > Now that you all said your necessary rant on the Linux vs. Others thing,\n> > > > please calm down b4 I join the thread :-) (oops, I think I just did)\n> > > \n> > > \tYou joined much much too late though...this has been going on\n> > > since, oh, day one :) And, most ppl involved in the rant know me and my\n> > > opinions (they aren't necessarily the same as what I use as my bait, of\n> > > course, but ya gotta admit, Linux'ers are just soooooooo easy to bait\n> > > *grin*)\n> > \n> > Well if Linux'ers _and_ BSD'ers ran a real os, maybe this thread would die.\n> > \n> > There, _that's_ bait. :)\n> \n> Ooooh, them's fight'en words.\n\nThe main point was to take the LINUX vx BSD thread private or introduce _some_\nsort of OPC (Obligatory Postgres Content) into it at least.\n\nOPC...Just have to love how postgres does things thru the .bki scripts that you\ncan't duplicate thru psql, such as creating a function with 9 args. Found this\nwhen doing scripts to re-do the geometric stuff as loadable. Right up there\nwith not being able to recreate the count aggregate and being able to drop the\noid type (not recommended. :).\n\ndarrenk\n",
"msg_date": "Wed, 4 Mar 1998 12:12:29 -0500",
"msg_from": "[email protected] (Darren King)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL - the Linux of Databases..."
}
] |
[
{
"msg_contents": "This is what I was mentioning earlier:\n<p>There are also PostgreSQL binaries available at <a \nhref=\"ftp://ftp.postgresql.org/pub/bindist\">FTP:/pub/bindist-v6.3</a>.</\np>\n\n<p>Patches are available at <a\nshould be:\n<p>There are also PostgreSQL binaries available at <a \nhref=\"ftp://ftp.postgresql.org/pub/bindist-v6.3\">FTP:/pub/bindist-v6.3</\na>.</p>\n\n<p>Patches are available at <a\n\nHope this clears up something.\n\n\t-DEJ\n",
"msg_date": "Wed, 4 Mar 1998 11:22:50 -0600",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "The Downloads page."
},
{
"msg_contents": "\nFixed...\n\n\n\nOn Wed, 4 Mar 1998, Jackson, DeJuan wrote:\n\n> This is what I was mentioning earlier:\n> <p>There are also PostgreSQL binaries available at <a \n> href=\"ftp://ftp.postgresql.org/pub/bindist\">FTP:/pub/bindist-v6.3</a>.</\n> p>\n> \n> <p>Patches are available at <a\n> should be:\n> <p>There are also PostgreSQL binaries available at <a \n> href=\"ftp://ftp.postgresql.org/pub/bindist-v6.3\">FTP:/pub/bindist-v6.3</\n> a>.</p>\n> \n> <p>Patches are available at <a\n> \n> Hope this clears up something.\n> \n> \t-DEJ\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 4 Mar 1998 17:16:56 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The Downloads page."
}
] |
[
{
"msg_contents": "Hi.\n\nSilly bug releated to on-fly recoding patch happened:\nall files (charset.conf, koi-alt.tab, koi-iso.tab, koi-koi.tab, koi-mac.tab,\nkoi-win.tab) in src/data directory was \"doubled\".\n\nWhy I don't know. Even with this bug everything should work, but...\nPackagers, pay attention please.\n\n--\nGoodBye.\nDenis.\n",
"msg_date": "Wed, 04 Mar 1998 20:06:03 +0200",
"msg_from": "\"Denis V. Dmitrienko\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"Doubled\" files related to cyrillic patch in 6.3 release."
},
{
"msg_contents": "On Wed, 4 Mar 1998, Denis V. Dmitrienko wrote:\n\n> Hi.\n> \n> Silly bug releated to on-fly recoding patch happened:\n> all files (charset.conf, koi-alt.tab, koi-iso.tab, koi-koi.tab, koi-mac.tab,\n> koi-win.tab) in src/data directory was \"doubled\".\n> \n> Why I don't know. Even with this bug everything should work, but...\n> Packagers, pay attention please.\n\n\tHuh?\n\n> ls data\nCVS koi-alt.tab koi-koi.tab koi-win.tab\ncharset.conf koi-iso.tab koi-mac.tab\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 4 Mar 1998 17:55:04 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"Doubled\" files related to cyrillic patch in 6.3\n\trelease."
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> On Wed, 4 Mar 1998, Denis V. Dmitrienko wrote:\n> \n> ? Hi.\n> ?\n> ? Silly bug releated to on-fly recoding patch happened:\n> ? all files (charset.conf, koi-alt.tab, koi-iso.tab, koi-koi.tab, koi-mac.tab,\n> ? koi-win.tab) in src/data directory was \"doubled\".\n> ?\n> ? Why I don't know. Even with this bug everything should work, but...\n> ? Packagers, pay attention please.\n> \n> Huh?\n> \n> ? ls data\n> CVS koi-alt.tab koi-koi.tab koi-win.tab\n> charset.conf koi-iso.tab koi-mac.tab\n\n$cat data/koi-koi.tab\n# Hmm ...\n#\n# Hmm ...\n#\n\nand so on...\n\n--\nGoodBye.\nDenis. ([email protected])\n",
"msg_date": "Tue, 10 Mar 1998 23:37:35 -0200",
"msg_from": "\"Denis V. Dmitrienko\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] \"Doubled\" files related to cyrillic patch in 6.3\n\trelease."
},
{
"msg_contents": "Applied.\n\n> \n> The Hermit Hacker wrote:\n> > \n> > On Wed, 4 Mar 1998, Denis V. Dmitrienko wrote:\n> > \n> > ? Hi.\n> > ?\n> > ? Silly bug releated to on-fly recoding patch happened:\n> > ? all files (charset.conf, koi-alt.tab, koi-iso.tab, koi-koi.tab, koi-mac.tab,\n> > ? koi-win.tab) in src/data directory was \"doubled\".\n> > ?\n> > ? Why I don't know. Even with this bug everything should work, but...\n> > ? Packagers, pay attention please.\n> > \n> > Huh?\n> > \n> > ? ls data\n> > CVS koi-alt.tab koi-koi.tab koi-win.tab\n> > charset.conf koi-iso.tab koi-mac.tab\n> \n> $cat data/koi-koi.tab\n> # Hmm ...\n> #\n> # Hmm ...\n> #\n> \n> and so on...\n> \n> --\n> GoodBye.\n> Denis. ([email protected])\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 16 Mar 1998 00:52:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Re: [HACKERS] \"Doubled\" files related to cyrillic patch\n\tin 6.3 release."
}
] |
[
{
"msg_contents": "> \n> Hi there,\n> \n> I've just compiled postgreSQL v6.3 on one of other Alpha boxes running\n> DIGITAL UNIX 4.0B.\n> \n> Unfortunately, running initdb dumps core with the following error message:\n> \n> initdb: using /cec/scratch/cecweb/postgresql/lib/local1_template1.bki.source\n> as\n> input to create the template database.\n> initdb: using /cec/scratch/cecweb/postgresql/lib/global1.bki.source as input\n> to\n> create the global classes.\n> initdb: using /cec/scratch/cecweb/postgresql/lib/pg_hba.conf.sample as the\n> host-\n> based authentication control file.\n> \n> We are initializing the database system with username altenhof (uid=301).\n> This user will own all the files and must also own the server process.\n> \n> initdb: creating template database in\n> /cec/scratch/cecweb/postgresql/data/base/t\n> emplate1\n> Running: postgres -boot -C -F -D/cec/scratch/cecweb/postgresql/data -Q\n> template1\n> ERROR: BuildFuncTupleDesc: function mkoidname(opaque, opaque) does not\n> exist\n> ERROR: BuildFuncTupleDesc: function mkoidname(opaque, opaque) does not\n> exist\n> longjmp or siglongjmp function used outside of saved context\n> /cec/scratch/cecweb/postgresql/bin/initdb: 10890 Abort - core dumped\n\nKnown bug. We can't get Alpha working on 6.3.\n\nLet me mention on thing that may help the alpha developers trying to fix\nthis.\n\nAs part of 6.3 changes, I changed some contants in\n/src/include/catalog/*.h that used 0L to just plain 0. It seemed to be\ndone inconsistently, and I could not figure out how the 0L could be\ndifferent than 0.\n\nCan someone take a look at the 6.2 source and 6.3 source, and tell me if\nthe 0L entries in 6.2 change the alpha behavior from a plain 0. Perhaps\na way of testing would be to replace the 0L with 0 in a working 6.2, and\nrun initdb to see if the system still works.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 13:27:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [QUESTIONS] Problems with running v6.3 on DIGITAL UNIX"
},
{
"msg_contents": "On Wed, 4 Mar 1998, Bruce Momjian wrote:\n\n>> ERROR: BuildFuncTupleDesc: function mkoidname(opaque, opaque) does not\n>> exist\n>> ERROR: BuildFuncTupleDesc: function mkoidname(opaque, opaque) does not\n>> exist\n>> longjmp or siglongjmp function used outside of saved context\n>> /cec/scratch/cecweb/postgresql/bin/initdb: 10890 Abort - core dumped\n>\n>Known bug. We can't get Alpha working on 6.3.\n>\n>Let me mention on thing that may help the alpha developers trying to fix\n>this.\n>\n>As part of 6.3 changes, I changed some contants in\n>/src/include/catalog/*.h that used 0L to just plain 0. It seemed to be\n>done inconsistently, and I could not figure out how the 0L could be\n>different than 0.\n>\n>Can someone take a look at the 6.2 source and 6.3 source, and tell me if\n>the 0L entries in 6.2 change the alpha behavior from a plain 0. Perhaps\n>a way of testing would be to replace the 0L with 0 in a working 6.2, and\n>run initdb to see if the system still works.\n\n:-? I've looked at both 6.2.1 and 6.3 and have been unable to find a\nsingle '0L' in either version. I've looked in all .h files in src/include,\nnot just src/include/catalog.\n\n\n\tPedro.\n\n-------------------------------------------------------------------\nPedro José Lobo Perea Tel: +34 1 336 78 19\nCentro de Cálculo Fax: +34 1 331 92 29\nEUIT Telecomunicación - UPM e-mail: [email protected]\n\n",
"msg_date": "Thu, 5 Mar 1998 10:11:17 +0100 (MET)",
"msg_from": "\"Pedro J. Lobo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Problems with running v6.3 on DIGITAL\n\tUNIX"
},
{
"msg_contents": "> :-? I've looked at both 6.2.1 and 6.3 and have been unable to find a\n> single '0L' in either version. I've looked in all .h files in src/include,\n> not just src/include/catalog.\n> \n\nSorry, they were lowercase, and not usually 0, but -1l, or 323l.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 5 Mar 1998 11:58:00 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Problems with running v6.3 on DIGITAL\n\tUNIX"
}
] |
[
{
"msg_contents": "subscribe\n\n",
"msg_date": "Wed, 4 Mar 1998 21:50:31 +0200",
"msg_from": "\"Ponomarev S.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "None"
}
] |
[
{
"msg_contents": "> > Silly bug releated to on-fly recoding patch happened:\n> > all files (charset.conf, koi-alt.tab, koi-iso.tab, koi-koi.tab, koi-mac.tab,\n> > koi-win.tab) in src/data directory was \"doubled\".\n> > \n> > Why I don't know. Even with this bug everything should work, but...\n> > Packagers, pay attention please.\n> \n> \tHuh?\n> \n> > ls data\n> CVS koi-alt.tab koi-koi.tab koi-win.tab\n> charset.conf koi-iso.tab koi-mac.tab\n\nI think Denis means in the file itself.\n\ndarrenk\n",
"msg_date": "Wed, 4 Mar 1998 17:15:54 -0500",
"msg_from": "[email protected] (Darren King)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] \"Doubled\" files related to cyrillic patch in 6.3\n\trelease."
},
{
"msg_contents": "On Wed, 4 Mar 1998, Darren King wrote:\n\n> > > Silly bug releated to on-fly recoding patch happened:\n> > > all files (charset.conf, koi-alt.tab, koi-iso.tab, koi-koi.tab, koi-mac.tab,\n> > > koi-win.tab) in src/data directory was \"doubled\".\n> > > \n> > > Why I don't know. Even with this bug everything should work, but...\n> > > Packagers, pay attention please.\n> > \n> > \tHuh?\n> > \n> > > ls data\n> > CVS koi-alt.tab koi-koi.tab koi-win.tab\n> > charset.conf koi-iso.tab koi-mac.tab\n> \n> I think Denis means in the file itself.\n\n\tThen they must have come that way, since I woudl have just\n'untar'd into that directory...\n\n\tWill take a look at it though...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 4 Mar 1998 22:19:42 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"Doubled\" files related to cyrillic patch in 6.3\n\trelease."
}
] |
[
{
"msg_contents": "Hi,\n\nIn postgres, is there a way to get the CPU time and I/O time taken (separately)\nto execute a query? Does the profile info. help in some way to \ncalculate the CPU/IO break up at least approximately.\n\nWhat is the best way to do it? Any kind of help is greatly appreciated.\n\nThanks\n--shiby\n\n\n",
"msg_date": "Wed, 04 Mar 1998 17:56:37 -0500",
"msg_from": "Shiby Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Execution time"
},
{
"msg_contents": "Shiby Thomas wrote:\n> \n> Hi,\n> \n> In postgres, is there a way to get the CPU time and I/O time taken (separately)\n> to execute a query? Does the profile info. help in some way to\n> calculate the CPU/IO break up at least approximately.\n> \n> What is the best way to do it? Any kind of help is greatly appreciated.\n> \n> Thanks\n> --shiby\n\nHow about running \"time postgres ...\" and connecting to the backend\nwithout using the postmaster?\n\n/* m */\n",
"msg_date": "Fri, 06 Mar 1998 11:47:23 +0100",
"msg_from": "Mattias Kregert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Execution time"
},
{
"msg_contents": "\n=> How about running \"time postgres ...\" and connecting to the backend\n=> without using the postmaster?\n=> \nThe time command will give the elapsed, user CPU and System CPU times.\nHow do I interpret it as CPU/IO time ?\n\nEven the -s option of postgres gives those times. Will it be the same as\nusing \"time postgres\" ? Is the System time a reasonable approximation of the\nI/O time and user time that of CPU time ?\n\nThanks\n--shiby \n\n\n",
"msg_date": "Fri, 06 Mar 1998 12:01:59 -0500",
"msg_from": "Shiby Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Execution time "
},
{
"msg_contents": "Shiby Thomas wrote:\n> \n> The time command will give the elapsed, user CPU and System CPU times.\n> How do I interpret it as CPU/IO time ?\n> \n> Even the -s option of postgres gives those times. Will it be the same as\n> using \"time postgres\" ? Is the System time a reasonable approximation of the\n> I/O time and user time that of CPU time ?\n\n\nI would say they are reasonable approximations, at least for\nuser==CPU.\nPerhaps you need to look at elapsed time too, and from that make some\nassumptions about I/O waiting time. I think system time will be lower\nif you use SCSI and higher with IDE.\nI would count (elapsed time - CPU time) as I/O time.\n\n/* m */\n",
"msg_date": "Mon, 09 Mar 1998 13:22:43 +0100",
"msg_from": "Mattias Kregert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Execution time"
}
] |
[
{
"msg_contents": "\nI've made a little headway -- it can't find the mkoidname function\nbecause the attributes that it looks up for the argument types have a\natttypid of 0 (see the following example):\n\nalso, other information that should be in there is not, so it makes me\nsuspect something wrong with insertion of attributes? I don't know\nenough to be able to see if this is affecting all attributes or just\nsome of them.\n\nDoes anyone have any pointers to where to check this problem out?\n\n$4 = {attrelid = 1249, attname = {\n data = \"\\000\\000\\000\\000attrelid\", '\\000' <repeats 19 times>}, \n atttypid = 0, attdisbursion = 6.89648632e-314, attlen = 0, attnum = 0, \n attnelems = 65540, attcacheoff = 0, atttypmod = -1, attbyval = -1 ', \n attisset = -1 ', attalign = -1 ', attnotnull = -1 ', \n atthasdef = 1 '\\001'}\n",
"msg_date": "Wed, 4 Mar 1998 15:24:00 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "alpha/64bit weirdness"
},
{
"msg_contents": "> \n> \n> I've made a little headway -- it can't find the mkoidname function\n> because the attributes that it looks up for the argument types have a\n> atttypid of 0 (see the following example):\n> \n> also, other information that should be in there is not, so it makes me\n> suspect something wrong with insertion of attributes? I don't know\n> enough to be able to see if this is affecting all attributes or just\n> some of them.\n> \n> Does anyone have any pointers to where to check this problem out?\n> \n> $4 = {attrelid = 1249, attname = {\n> data = \"\\000\\000\\000\\000attrelid\", '\\000' <repeats 19 times>}, \n> atttypid = 0, attdisbursion = 6.89648632e-314, attlen = 0, attnum = 0, \n> attnelems = 65540, attcacheoff = 0, atttypmod = -1, attbyval = -1 ', \n> attisset = -1 ', attalign = -1 ', attnotnull = -1 ', \n> atthasdef = 1 '\\001'}\n> \n> \n\nI have an idea. Edit initdb and add a '-d 3' option to each\n'postgres', then run initdb. You will see dumps of all the structures\nas things are happening, I think. Give it a try.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 18:31:34 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alpha/64bit weirdness"
},
{
"msg_contents": "\nokay.. it would appear that the bootstrap process has access to\ninformation (about attributes) that it shouldn't, quite immediately.\ni.e. when I breakpoint in DataFill, it has valid attribute structures,\nbut they haven't been inserted yet (?!?) I'm not quite sure how this\nworks.\n\nOn Wed, 4 March 1998, at 15:24:00, Brett McCormick wrote:\n\n> I've made a little headway -- it can't find the mkoidname function\n> because the attributes that it looks up for the argument types have a\n> atttypid of 0 (see the following example):\n> \n> also, other information that should be in there is not, so it makes me\n> suspect something wrong with insertion of attributes? I don't know\n> enough to be able to see if this is affecting all attributes or just\n> some of them.\n> \n> Does anyone have any pointers to where to check this problem out?\n> \n> $4 = {attrelid = 1249, attname = {\n> data = \"\\000\\000\\000\\000attrelid\", '\\000' <repeats 19 times>}, \n> atttypid = 0, attdisbursion = 6.89648632e-314, attlen = 0, attnum = 0, \n> attnelems = 65540, attcacheoff = 0, atttypmod = -1, attbyval = -1 ', \n> attisset = -1 ', attalign = -1 ', attnotnull = -1 ', \n> atthasdef = 1 '\\001'}\n",
"msg_date": "Wed, 4 Mar 1998 16:01:05 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alpha/64bit weirdness"
},
{
"msg_contents": "> \n> \n> I've made a little headway -- it can't find the mkoidname function\n> because the attributes that it looks up for the argument types have a\n> atttypid of 0 (see the following example):\n> \n> also, other information that should be in there is not, so it makes me\n> suspect something wrong with insertion of attributes? I don't know\n> enough to be able to see if this is affecting all attributes or just\n> some of them.\n> \n> Does anyone have any pointers to where to check this problem out?\n> \n> $4 = {attrelid = 1249, attname = {\n> data = \"\\000\\000\\000\\000attrelid\", '\\000' <repeats 19 times>}, \n> atttypid = 0, attdisbursion = 6.89648632e-314, attlen = 0, attnum = 0, \n> attnelems = 65540, attcacheoff = 0, atttypmod = -1, attbyval = -1 ', \n> attisset = -1 ', attalign = -1 ', attnotnull = -1 ', \n> atthasdef = 1 '\\001'}\n> \n> \n\n\nNow that I am looking at this, I see that the attname has four bytes of\nNULL's before it. This looks like some kind of alignment error,\nperhaps, like the previous entry is writing past its end and into the\nthis one. Everything after the 'data' element shows garbage because it\nis all shifted over. I did add the atttypmod field to the pg_attribute\nstructure, and it is an int2/short. Wonder is that threw off some\nalignment, and only Alpha has a problem with it.\n\nPlease try with Assert on:\n\n\tconfigure --enable-cassert\n\nMan, if I introduced this problem somehow, I am going to be upset with\nmyself, and I am sure a few Alpha users will join me.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 20:33:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alpha/64bit weirdness"
},
{
"msg_contents": "\nI just noticed that.. I recompiled & rerean initdb with assert\nchecking on, and, well, no change in output. here it is. i've stuck\nin my own elog check for a value of 0 for atttypid..\n\nsuggestions?\n\ninitdb: using /usr/local/pgsql.test/lib/local1_template1.bki.source as input to create the template database.\ninitdb: using /usr/local/pgsql.test/lib/global1.bki.source as input to create the global classes.\ninitdb: using /usr/local/pgsql.test/lib/pg_hba.conf.sample as the host-based authentication control file.\n\nWe are initializing the database system with username postgres (uid=1706).\nThis user will own all the files and must also own the server process.\n\ninitdb: creating template database in /usr/local/pgsql.test/data/base/template1\nRunning: postgres -boot -C -F -D/usr/local/pgsql.test/data -Q template1\nERROR: DefineIndex: woah, att->atttypid = 0 for attribute \"attrelid\"\nERROR: DefineIndex: woah, att->atttypid = 0 for attribute \"attrelid\"\nlongjmp or siglongjmp function used outside of saved context\ninitdb: could not create template database\ninitdb: cleaning up by wiping out /usr/local/pgsql.test/data/base/template1\n\n\nOn Wed, 4 March 1998, at 20:33:42, Bruce Momjian wrote:\n\n> Now that I am looking at this, I see that the attname has four bytes of\n> NULL's before it. This looks like some kind of alignment error,\n> perhaps, like the previous entry is writing past its end and into the\n> this one. Everything after the 'data' element shows garbage because it\n> is all shifted over. I did add the atttypmod field to the pg_attribute\n> structure, and it is an int2/short. Wonder is that threw off some\n> alignment, and only Alpha has a problem with it.\n> \n> Please try with Assert on:\n> \n> \tconfigure --enable-cassert\n> \n> Man, if I introduced this problem somehow, I am going to be upset with\n> myself, and I am sure a few Alpha users will join me.\n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 18:26:00 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alpha/64bit weirdness"
},
{
"msg_contents": "\nWhy would the atttypmod affect anything before it in the struct? I\nhave verified that everything is shifted over for bytes, but that\nwould lead be to beleive that somewhere the length of the first\nattribute (Oid) is being miscalculated? Where would the code write to\nthis data structure without using a pointer to actual struct for\nobtaining the correct memory structure? I checked for offsetof macro\ncalls that might cause this effect, to no avail..\n\nWe're a lot closer, though.. right?\n\nOn Wed, 4 March 1998, at 20:33:42, Bruce Momjian wrote:\n\n> Now that I am looking at this, I see that the attname has four bytes of\n> NULL's before it. This looks like some kind of alignment error,\n> perhaps, like the previous entry is writing past its end and into the\n> this one. Everything after the 'data' element shows garbage because it\n> is all shifted over. I did add the atttypmod field to the pg_attribute\n> structure, and it is an int2/short. Wonder is that threw off some\n> alignment, and only Alpha has a problem with it.\n> \n> Please try with Assert on:\n> \n> \tconfigure --enable-cassert\n> \n> Man, if I introduced this problem somehow, I am going to be upset with\n> myself, and I am sure a few Alpha users will join me.\n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 18:28:28 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alpha/64bit weirdness"
},
{
"msg_contents": "\nI did that.. Postgres doesn't take the option -d 3 however, just\n'-d'.. Have any idea when the pg_attribute cache is populated?\n\nOn Wed, 4 March 1998, at 18:31:34, Bruce Momjian wrote:\n\n> I have an idea. Edit initdb and add a '-d 3' option to each\n> 'postgres', then run initdb. You will see dumps of all the structures\n> as things are happening, I think. Give it a try.\n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 18:33:18 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alpha/64bit weirdness"
},
{
"msg_contents": "> \n> \n> I did that.. Postgres doesn't take the option -d 3 however, just\n> '-d'.. Have any idea when the pg_attribute cache is populated?\n\nLooks like it should:\n\n\t$ postgres -d 3 -D /u/pg/data test\n\t ---debug info---\n\t Quiet = f\n\t Noversion = f\n\t timings = f\n\t dates = Normal\n\t bufsize = 64\n\t sortmem = 512\n\t query echo = f\n\t DatabaseName = [test]\n\t ----------------\n\t\n\t InitPostgres()..\n\t\n\tPOSTGRES backend interactive interface\n\t$Revision: 1.67 $ $Date: 1998/02/26 04:36:31 $\n\nNot sure when it is initialized.\n\n>\n> \n> On Wed, 4 March 1998, at 18:31:34, Bruce Momjian wrote:\n> \n> > I have an idea. Edit initdb and add a '-d 3' option to each\n> > 'postgres', then run initdb. You will see dumps of all the structures\n> > as things are happening, I think. Give it a try.\n> > \n> > -- \n> > Bruce Momjian | 830 Blythe Avenue\n> > [email protected] | Drexel Hill, Pennsylvania 19026\n> > + If your life is a hard drive, | (610) 353-9879(w)\n> > + Christ can be your backup. | (610) 853-3000(h)\n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 21:45:58 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alpha/64bit weirdness"
},
{
"msg_contents": "> Why would the atttypmod affect anything before it in the struct? I\n> have verified that everything is shifted over for bytes, but that\n> would lead be to beleive that somewhere the length of the first\n> attribute (Oid) is being miscalculated? Where would the code write to\n> this data structure without using a pointer to actual struct for\n> obtaining the correct memory structure? I checked for offsetof macro\n> calls that might cause this effect, to no avail.\n\nJust speculating here, but I do know that the Alpha will force alignment\nwithin structures. So, if the structure is filled by reading a byte stream\nfrom a file, rather than filled field-by-field, it will misalign if it has\nintegers < 4 bytes. During the initialization phase, the backend probably does\nnot go through the file manager, but does some brute-force reading of each\nfile on disk.\n\n - Tom\n\n",
"msg_date": "Thu, 05 Mar 1998 03:47:25 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alpha/64bit weirdness"
},
{
"msg_contents": "\nIt couldn't! The inital values (as far as I can tell) are either\ncompiled in, or fed in through the bootstrap process (text).\n\nthis would have caused problems in previous releases as well, if it is\nthe case.\n\nOn Thu, 5 March 1998, at 03:47:25, Thomas G. Lockhart wrote:\n\n> > Why would the atttypmod affect anything before it in the struct? I\n> > have verified that everything is shifted over for bytes, but that\n> > would lead be to beleive that somewhere the length of the first\n> > attribute (Oid) is being miscalculated? Where would the code write to\n> > this data structure without using a pointer to actual struct for\n> > obtaining the correct memory structure? I checked for offsetof macro\n> > calls that might cause this effect, to no avail.\n> \n> Just speculating here, but I do know that the Alpha will force alignment\n> within structures. So, if the structure is filled by reading a byte stream\n> from a file, rather than filled field-by-field, it will misalign if it has\n> integers < 4 bytes. During the initialization phase, the backend probably does\n> not go through the file manager, but does some brute-force reading of each\n> file on disk.\n> \n> - Tom\n",
"msg_date": "Wed, 4 Mar 1998 20:00:25 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alpha/64bit weirdness"
},
{
"msg_contents": "> \n> \n> Why would the atttypmod affect anything before it in the struct? I\n> have verified that everything is shifted over for bytes, but that\n> would lead be to beleive that somewhere the length of the first\n> attribute (Oid) is being miscalculated? Where would the code write to\n> this data structure without using a pointer to actual struct for\n> obtaining the correct memory structure? I checked for offsetof macro\n> calls that might cause this effect, to no avail..\n> \n> We're a lot closer, though.. right?\n\nI think your dump tells up something. Can you put a beak on the failure\nline, then do a backtrace after the elog(), and start putting breaks in\nthe functions called on that structure, and see if we can find how that\nrelation name is getting messed up.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 23:09:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alpha/64bit weirdness"
},
{
"msg_contents": "\nI suspect that it is getting messed up in a function that has since\nbeen called and returned.. I'll give it a shot though\n\nOn Wed, 4 March 1998, at 23:09:05, Bruce Momjian wrote:\n\n> > We're a lot closer, though.. right?\n> \n> I think your dump tells up something. Can you put a beak on the failure\n> line, then do a backtrace after the elog(), and start putting breaks in\n> the functions called on that structure, and see if we can find how that\n> relation name is getting messed up.\n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 20:58:58 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alpha/64bit weirdness"
},
{
"msg_contents": "Can you post a backtrace of the problem area.\n\n> \n> \n> I suspect that it is getting messed up in a function that has since\n> been called and returned.. I'll give it a shot though\n> \n> On Wed, 4 March 1998, at 23:09:05, Bruce Momjian wrote:\n> \n> > > We're a lot closer, though.. right?\n> > \n> > I think your dump tells up something. Can you put a beak on the failure\n> > line, then do a backtrace after the elog(), and start putting breaks in\n> > the functions called on that structure, and see if we can find how that\n> > relation name is getting messed up.\n> > \n> > -- \n> > Bruce Momjian | 830 Blythe Avenue\n> > [email protected] | Drexel Hill, Pennsylvania 19026\n> > + If your life is a hard drive, | (610) 353-9879(w)\n> > + Christ can be your backup. | (610) 853-3000(h)\n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 5 Mar 1998 00:12:30 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alpha/64bit weirdness"
},
{
"msg_contents": "> \n> Can you post a backtrace of the problem area.\n\nIf the bad value is coming from the cache, can you add an elog(NOTICE)\nto the cache insert code, so you can see where the bad value is going\nin or out.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 5 Mar 1998 01:07:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alpha/64bit weirdness"
},
{
"msg_contents": "> \n> > \n> > Can you post a backtrace of the problem area.\n> \n> If the bad value is coming from the cache, can you add an elog(NOTICE)\n> to the cache insert code, so you can see where the bad value is going\n> in or out.\n> \n\nAnother thing you could try is to add an Assert() in the cache\ninput/output functions, to check for a leading null in the name, and\ngenerate the error at that point. Would help track it down.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 5 Mar 1998 11:31:58 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] alpha/64bit weirdness"
}
] |
[
{
"msg_contents": "Tom:\n\nIt just occurred to me that, maybe we should ask Cristian which gcc version\nwas used to build the pgsql rpm at redhat. Like I mentioned, with gcc-2.8.0\nand glibc2-2.0.7 rpm, I did get the time problem, but with gcc-2.7.2.3, it\nwent away. Also with the helpful info Tom (Szybist) provided, this seems to\nmake sense, that is, a recompile with gcc-2.7.2 cures the problem, if the rpm\nwas indeed built with gcc-2.8.0 at redhat, that explains it.\n\nhope this helps,\n-Pailing\n",
"msg_date": "04 Mar 1998 19:19:12 -0500",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
},
{
"msg_contents": "Got a good install from Cristian's latest Postgres rpm and the Feb 24 glibc2\nfound in his home area. Passed the \"one minute\" test.\n\nThanks Cristian! What was the secret? Was it the compiler, or -O setting, or ??\n\nbtw, there is a one-line patch you could apply before doing a full release; it\nfixes an obscure but occasionally useful feature in the type support code. I\nhaven't committed it to the source tree yet.\n\n - Tom",
"msg_date": "Thu, 05 Mar 1998 02:43:36 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
},
{
"msg_contents": "On 4 Mar 1998 [email protected] wrote:\n\n> It just occurred to me that, maybe we should ask Cristian which gcc version\n> was used to build the pgsql rpm at redhat. Like I mentioned, with gcc-2.8.0\n\ngcc 2.7.2.3\n\n> and glibc2-2.0.7 rpm, I did get the time problem, but with gcc-2.7.2.3, it\n> went away. Also with the helpful info Tom (Szybist) provided, this seems to\n> make sense, that is, a recompile with gcc-2.7.2 cures the problem, if the rpm\n> was indeed built with gcc-2.8.0 at redhat, that explains it.\n\nI think you \"recompile\" a newer version than the snapshot included in the\nSRPM ?\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat Software, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n\n\n",
"msg_date": "Wed, 4 Mar 1998 22:47:24 -0500 (EST)",
"msg_from": "Cristian Gafton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
},
{
"msg_contents": "On Thu, 5 Mar 1998, Thomas G. Lockhart wrote:\n\n> Thanks Cristian! What was the secret? Was it the compiler, or -O setting, or ??\n\nYou're welcome !\n\nI just got a newer snapshot and recompiled it.\n \n> btw, there is a one-line patch you could apply before doing a full release; it\n\nThanks, I'll do new rpms.\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat Software, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n\n\n",
"msg_date": "Wed, 4 Mar 1998 22:49:56 -0500 (EST)",
"msg_from": "Cristian Gafton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
},
{
"msg_contents": "In message <[email protected]>, Cristian Gafton wr\nites:\n> On Thu, 5 Mar 1998, Thomas G. Lockhart wrote:\n> \n> > Thanks Cristian! What was the secret? Was it the compiler, or -O setting, or ??\n> \n> You're welcome !\n> \n> I just got a newer snapshot and recompiled it.\n\nOf what? Glibc?\n\n> \n> > btw, there is a one-line patch you could apply before doing a full release; it\n> \n> Thanks, I'll do new rpms.\n> \n> Cristian\n> --\n> ----------------------------------------------------------------------\n> Cristian Gafton -- [email protected] -- Red Hat Software, Inc.\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> UNIX is user friendly. It's just selective about who its friends are.\n> \n> \n> \n> \n\nI don't mean to be a party pooper, but... It must be a header file some\nplace. I'm just courious as to what the difference is. Anybody know?\n\nAfter installing 2.0.7 from Cristian, I went back an installed the\n2.0.5 header files, recompiled, and sure enough, it bombed with the 60\nsec thing. Now, that's a 2.0.7 glibc with 2.0.5 header files.\n\nI ran a diff on the header files, and there is only a modest amount of\nchanges. I was looking at this when low and behold, my wife went into\nlabor! (I was waiting for this before posting my bio :).\n\nThis seems simple enough for me. I'd like to figure it out, but I'm\nMr. Mom with a 2 year old for awhile.\n\nIf you're interested in a baby picture, check out http://24.3.148.6.\n\nTom \[email protected]\n",
"msg_date": "Sat, 07 Mar 1998 23:08:41 -0500",
"msg_from": "\"Thomas A. Szybist\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
},
{
"msg_contents": "On Sat, 7 Mar 1998, Thomas A. Szybist wrote:\n\n> > > Thanks Cristian! What was the secret? Was it the compiler, or -O setting, or ??\n> > \n> > You're welcome !\n> > \n> > I just got a newer snapshot and recompiled it.\n> \n> Of what? Glibc?\n\nNo, postgresql, because that is what we were talking about...\nDespite the subject of the thread.\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat Software, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n\n\n",
"msg_date": "Sat, 7 Mar 1998 23:16:11 -0500 (EST)",
"msg_from": "Cristian Gafton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
},
{
"msg_contents": "In message <[email protected]>, Cristian Gafton w\nrites:\n> On Sat, 7 Mar 1998, Thomas A. Szybist wrote:\n> \n> > > > Thanks Cristian! What was the secret? Was it the compiler, or -O setting, or ??\n> > > \n> > > You're welcome !\n> > > \n> > > I just got a newer snapshot and recompiled it.\n> > \n> > Of what? Glibc?\n> \n> No, postgresql, because that is what we were talking about...\n> Despite the subject of the thread.\n> \n> Cristian\n> --\n> ----------------------------------------------------------------------\n> Cristian Gafton -- [email protected] -- Red Hat Software, Inc.\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> UNIX is user friendly. It's just selective about who its friends are.\n> \n> \n> \n> \nSorry, I guess I lost track of the snapshots you were packaging.\n\nI think __math.h is the file. Here is small diff (cut & paste) from \n2.0.5 to 2.0.7:\n\n*** __math.h205 Sun Mar 8 00:00:29 1998\n--- __math.h207 Tue Feb 24 16:34:38 1998\n\n\n<snip>\n\n\t***************\n\t*** 21,30 ****\n\t #ifndef __MATH_H\n\t #define __MATH_H 1\n\n\t! #if defined __GNUG__ && \\\n\t! (__GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ <= 7))\n\t! /* gcc 2.7.2 and 2.7.2.1 have problems with inlining `long double'\n\t! functions so we disable this now. */\n\t #undef __NO_MATH_INLINES\n\t #define __NO_MATH_INLINES\n\t #endif\n\t--- 21,29 ----\n\t #ifndef __MATH_H\n\t #define __MATH_H 1\n\t \n\t! #if (__GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ <= 7))\n\t! /* The gcc, version 2.7 or below, has problems with all this inlining\n\t! code. So disable it for this version of the compiler. */\n\t #undef __NO_MATH_INLINES\n\t #define __NO_MATH_INLINES\n\t #endif\n\t***************\n\t*** 484,492 ****\n\n\nIt seems __NO_MATH_INLINES must be defined. 2.0.5 tests for __GNUG__.\n2.0.7 does not, but only checks for __GNUC_MINOR__ <= 7. This would\nexplain why 2.8.0 also fails.\n\n\nTom Szybist\n\[email protected]\n",
"msg_date": "Sun, 08 Mar 1998 00:23:04 -0500",
"msg_from": "\"Thomas A. Szybist\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
},
{
"msg_contents": "On Sun, 8 Mar 1998, Thomas A. Szybist wrote:\n\n> Sorry, I guess I lost track of the snapshots you were packaging.\n> \n> I think __math.h is the file. Here is small diff (cut & paste) from \n> 2.0.5 to 2.0.7:\n\nAll I am interested _now_ is if the glibc + postgresql packages from\nftp;//ftp.redhat.com/home/gafton are working okay for everybody...\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat Software, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n\n\n",
"msg_date": "Sun, 8 Mar 1998 00:30:32 -0500 (EST)",
"msg_from": "Cristian Gafton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
},
{
"msg_contents": "In message <[email protected]>, Cristian Gafton w\nrites:\n> On Sun, 8 Mar 1998, Thomas A. Szybist wrote:\n> \n> > Sorry, I guess I lost track of the snapshots you were packaging.\n> > \n> > I think __math.h is the file. Here is small diff (cut & paste) from \n> > 2.0.5 to 2.0.7:\n> \n> All I am interested _now_ is if the glibc + postgresql packages from\n> ftp;//ftp.redhat.com/home/gafton are working okay for everybody...\n> \n> Cristian\n> --\n> ----------------------------------------------------------------------\n> Cristian Gafton -- [email protected] -- Red Hat Software, Inc.\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> UNIX is user friendly. It's just selective about who its friends are.\n> \n> \nJust trying to help... sorry.\n\nTom Szybist\[email protected]\n",
"msg_date": "Sun, 08 Mar 1998 00:40:14 -0500",
"msg_from": "\"Thomas A. Szybist\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Glibc2 (was Re: [HACKERS] PostgreSQL - the Linux of Databases...)"
}
] |
[
{
"msg_contents": "> \n> \n> Just tried the same on DEC UNIX 4.0D. I compiled postgres with the standard\n> C compiler instead of gcc (worked for 6.2.1) and get the same error for\n> initdb. \n\nAnother thing to try would be to enable assert checking:\n\n\tconfigure --enable-cassert\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 4 Mar 1998 20:24:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [QUESTIONS] Problems with running v6.3 on DIGITAL UNIX"
}
] |
[
{
"msg_contents": "> >> Allowing text to use blobs for values larger than the current block\n> size\n> >> would hit the same problem.\n> > When I told about multi-representation feature I ment that applications\n> > will not be affected by how text field is stored - in tuple or somewhere\n> \n> > else. Is this Ok for you ?\n> \n> This is also what I would have in mind. But I guess a change to the fe-be\n> protocol would still be necessary, since the client now allocates\n> a fixed amount of memory to receive one tuple, wasn't it ?\n> \n> Andreas\n> \n> \n",
"msg_date": "Thu, 5 Mar 1998 10:26:28 +0100 ",
"msg_from": "Zeugswetter Andreas SARZ <[email protected]>",
"msg_from_op": true,
"msg_subject": "WG: [QUESTIONS] Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "Zeugswetter Andreas SARZ wrote:\n> \n> > >> Allowing text to use blobs for values larger than the current block\n> > size\n> > >> would hit the same problem.\n> > > When I told about multi-representation feature I ment that applications\n> > > will not be affected by how text field is stored - in tuple or somewhere\n> >\n> > > else. Is this Ok for you ?\n> >\n> > This is also what I would have in mind. But I guess a change to the fe-be\n> > protocol would still be necessary, since the client now allocates\n> > a fixed amount of memory to receive one tuple, wasn't it ?\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nI don't know, but imho it's not too hard to implement.\n\nVadim\n",
"msg_date": "Fri, 06 Mar 1998 09:25:46 +0700",
"msg_from": "\"Vadim B. Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WG: [QUESTIONS] Re: [HACKERS] text should be a blob field"
},
{
"msg_contents": "On Fri, 6 Mar 1998, Vadim B. Mikheev wrote:\n\n> Zeugswetter Andreas SARZ wrote:\n> > \n> > > >> Allowing text to use blobs for values larger than the current block\n> > > size\n> > > >> would hit the same problem.\n> > > > When I told about multi-representation feature I ment that applications\n> > > > will not be affected by how text field is stored - in tuple or somewhere\n> > >\n> > > > else. Is this Ok for you ?\n> > >\n> > > This is also what I would have in mind. But I guess a change to the fe-be\n> > > protocol would still be necessary, since the client now allocates\n> > > a fixed amount of memory to receive one tuple, wasn't it ?\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> I don't know, but imho it's not too hard to implement.\n> \n> Vadim\n\nOne thing, I don't allocate a fixed amount of memory for JDBC when\nreceiving tuples.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Sun, 8 Mar 1998 11:09:03 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WG: [QUESTIONS] Re: [HACKERS] text should be a blob field"
}
] |
[
{
"msg_contents": "> :-? I've looked at both 6.2.1 and 6.3 and have been unable to find a\n> single '0L' in either version. I've looked in all .h files in src/include,\n> not just src/include/catalog.\n> \n> \n> =09Pedro.\n\nDidn't look very hard :-)\n\nA quick:\n find . -name '*.c' -exec grep 0L {} \\;\nfinds 27 occurrences of 0L\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n",
"msg_date": "Thu, 5 Mar 1998 12:16:45 GMT",
"msg_from": "Andrew Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Problems with running v6.3 on DIGITAL\n\tUNIX"
},
{
"msg_contents": "On Thu, 5 Mar 1998, Andrew Martin wrote:\n\n>> :-? I've looked at both 6.2.1 and 6.3 and have been unable to find a\n>> single '0L' in either version. I've looked in all .h files in src/include,\n>> not just src/include/catalog.\n>> \n>> \n>> =09Pedro.\n>\n>Didn't look very hard :-)\n>\n>A quick:\n> find . -name '*.c' -exec grep 0L {} \\;\n>finds 27 occurrences of 0L\n\nWell, not exactly. The thing is '0l', instead of '0L', both in 6.3 and\n6.2.1. I'm rebuilding the beast now. I think that the problem *might* be\nin some '-1L' which have been converted to '-1'. Perhaps there is a\nproblem with the sign expansion. I'll let you know what happens.\n\n-------------------------------------------------------------------\nPedro José Lobo Perea Tel: +34 1 336 78 19\nCentro de Cálculo Fax: +34 1 331 92 29\nEUIT Telecomunicación - UPM e-mail: [email protected]\n\n",
"msg_date": "Thu, 5 Mar 1998 15:11:01 +0100 (MET)",
"msg_from": "\"Pedro J. Lobo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Problems with running v6.3 on DIGITAL\n\tUNIX"
}
] |
[
{
"msg_contents": "Works fine for me.\n\nMichael\n\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n> -----Original Message-----\n> From:\tThe Hermit Hacker [SMTP:[email protected]]\n> Sent:\tThursday, March 05, 1998 2:20 PM\n> To:\tMichael Meskes\n> Cc:\tCristian Gafton; PostgreSQL Hacker\n> Subject:\tRe: [HACKERS] 6.3 question...\n> \n> On Thu, 5 Mar 1998, Michael Meskes wrote:\n> \n> > Cristian Gafton writes:\n> > > \n> > > Why was ecpg pulled out of the interfaces/Makefile ?\n> > > \n> > > Cristian\n> > \n> > Huh? I didn't realize that! Was it intentionally left out? If so I'd\n> like to\n> > know the reason. If it was unintentionally, please put it back in.\n> \n> \tIts bounced in and out, actually...we removed it for awhile\n> there\n> because makes were failing miserably right there :( I fear we must\n> have\n> forgotten to put it back in again before the release...\n> \n> \tI've put it back in...those with CVSup access, can you please\n> grab\n> a new copy and make sure that it works for all of you?\n> \n> \n",
"msg_date": "Thu, 5 Mar 1998 15:37:54 +0100",
"msg_from": "\"Meskes, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] 6.3 question..."
}
] |
[
{
"msg_contents": ">Let me mention on thing that may help the alpha developers trying to fix\n>this.\n>\n>As part of 6.3 changes, I changed some contants in\n>/src/include/catalog/*.h that used 0L to just plain 0. It seemed to be\n>done inconsistently, and I could not figure out how the 0L could be\n>different than 0.\n>\n>Can someone take a look at the 6.2 source and 6.3 source, and tell me if\n>the 0L entries in 6.2 change the alpha behavior from a plain 0. Perhaps\n>a way of testing would be to replace the 0L with 0 in a working 6.2, and\n>run initdb to see if the system still works.\n\nI've just finished the test, and it makes no difference :-( The only file\nwhere 0L appeared is in pg_attributes.h. I have tried also to put '-1L' in\nthe new column, but it makes no difference. Still working...\n\n-------------------------------------------------------------------\nPedro José Lobo Perea Tel: +34 1 336 78 19\nCentro de Cálculo Fax: +34 1 331 92 29\nEUIT Telecomunicación - UPM e-mail: [email protected]\n\n\n",
"msg_date": "Thu, 5 Mar 1998 16:25:08 +0100 (MET)",
"msg_from": "\"Pedro J. Lobo\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Problems with running v6.3 on DIGITAL\n\tUNIX (fwd)"
}
] |
[
{
"msg_contents": "\nso far this week...\n\n Number Of Number of Average Percent Of Percent Of\n Date Files Sent Bytes Sent Xmit Rate Files Sent Bytes Sent\n------------- ---------- ----------- ---------- ---------- ----------\nSat Feb 28 1998 1 1340 1.3 KB/s 0.00 0.00\nSun Mar 1 1998 6053 499080600 5.2 KB/s 12.44 9.12\nMon Mar 2 1998 6389 1624947513 4.1 KB/s 13.13 29.70\nTue Mar 3 1998 11357 1676137023 2.9 KB/s 23.34 30.64\nWed Mar 4 1998 23643 1323387759 2.9 KB/s 48.59 24.19\nThu Mar 5 1998 1211 347715466 2.6 KB/s 2.49 6.36\n\n\n\nAnd this is what has been pulled from the 'root', which is pretty much all\nthe source distribution itself:\n\n ---- Percent Of ----\n Archive Section Files Sent Bytes Sent Files Sent Bytes Sent\n------------------------- ---------- ----------- ---------- ----------\n/pub 1555 4238305965 3.20 77.46\n/pub/bindist-v6.3 327 192720125 0.67 3.52\n\n\n\n",
"msg_date": "Thu, 5 Mar 1998 11:40:23 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ouch..."
}
] |
[
{
"msg_contents": "To aid those of us that don't want to use sequences, can we add a\nfeature to 6.4 that allows the use of an AUTO_INCREMENT statement \nwhen defining tables? MySQL does this, and I like it. It resembles\nthe Autonumber feature in Access as well.\n\ncreate table tblFirm (\n FirmID int PRIMARY KEY AUTO_INCREMENT,\n FirmTypeID int,\n FirmName varchar(64) NOT NULL,\n FirmAlpha char(20) NOT NULL UNIQUE,\n FirmURL varchar(64),\n FirmEmail varchar(64)\n);\n\nJust yet another suggestion.\n\nDante\n.------------------------------------------.-----------------------.\n| _ [email protected] - D. Dante Lorenso | Network Administrator |\n| | | ___ _ _ ___ __ _ ___ ___ | |\n| | |__ / o \\| '_|/ o_\\| \\ |\\_ _\\/ o \\ | Accounting Firms |\n| |____|\\___/|_| \\___/|_|\\_|\\___|\\___/ | Associated, inc. |\n| http://www.afai.com/~dlorenso | http://www.afai.com/ |\n'------------------------------------------'-----------------------'\n\n",
"msg_date": "Thu, 5 Mar 1998 12:00:12 -0500",
"msg_from": "\"D. Dante Lorenso\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "AUTO_INCREMENT suggestion"
},
{
"msg_contents": "D. Dante Lorenso wrote:\n> \n> To aid those of us that don't want to use sequences, can we add a\n> feature to 6.4 that allows the use of an AUTO_INCREMENT statement\n> when defining tables? MySQL does this, and I like it. It resembles\n> the Autonumber feature in Access as well.\n> \n> create table tblFirm (\n> FirmID int PRIMARY KEY AUTO_INCREMENT,\n> FirmTypeID int,\n> FirmName varchar(64) NOT NULL,\n> FirmAlpha char(20) NOT NULL UNIQUE,\n> FirmURL varchar(64),\n> FirmEmail varchar(64)\n> );\n> \n> Just yet another suggestion.\n> \n\nInformix calls something like this SERIAL type, like:\n\ncreate table tblFirm (\n FirmID SERIAL PRIMARY KEY,\n FirmTypeID int,\n FirmName varchar(64) NOT NULL,\n FirmAlpha char(20) NOT NULL UNIQUE,\n FirmURL varchar(64),\n FirmEmail varchar(64)\n);\n\nDon't know if that is standrd or extension.\n\nWe use \"CREATE SEQUENCE\" to do this is PgSQL.\n\n\tregards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n",
"msg_date": "Fri, 06 Mar 1998 11:26:49 +0100",
"msg_from": "Goran Thyni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] AUTO_INCREMENT suggestion"
},
{
"msg_contents": "D. Dante Lorenso wrote:\n> \n> To aid those of us that don't want to use sequences, can we add a\n> feature to 6.4 that allows the use of an AUTO_INCREMENT statement\n> when defining tables? MySQL does this, and I like it. It resembles\n> the Autonumber feature in Access as well.\n> \n> create table tblFirm (\n> FirmID int PRIMARY KEY AUTO_INCREMENT,\n> FirmTypeID int,\n> FirmName varchar(64) NOT NULL,\n> FirmAlpha char(20) NOT NULL UNIQUE,\n> FirmURL varchar(64),\n> FirmEmail varchar(64)\n> );\n> \n> Just yet another suggestion.\n> \n> Dante\n\nSince the PRIMARY KEY is implemented by creating an unique index\non the field, it should be easy to implement AUTO_INCREMENT by\nautomagically creating a sequence and setting it as the default for\nthis field.\n\nWas PRIMARY KEY implemented in the parser?\n\n/* m */\n",
"msg_date": "Fri, 06 Mar 1998 11:45:46 +0100",
"msg_from": "Mattias Kregert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] AUTO_INCREMENT suggestion"
},
{
"msg_contents": "On Fri, 6 Mar 1998, Goran Thyni wrote:\n\n> D. Dante Lorenso wrote:\n> > \n> > To aid those of us that don't want to use sequences, can we add a\n> > feature to 6.4 that allows the use of an AUTO_INCREMENT statement\n> > when defining tables? MySQL does this, and I like it. It resembles\n> > the Autonumber feature in Access as well.\n> > \n> > create table tblFirm (\n> > FirmID int PRIMARY KEY AUTO_INCREMENT,\n> > FirmTypeID int,\n> > FirmName varchar(64) NOT NULL,\n> > FirmAlpha char(20) NOT NULL UNIQUE,\n> > FirmURL varchar(64),\n> > FirmEmail varchar(64)\n> > );\n> > \n> > Just yet another suggestion.\n> > \n> \n> Informix calls something like this SERIAL type, like:\n> \n> create table tblFirm (\n> FirmID SERIAL PRIMARY KEY,\n> FirmTypeID int,\n> FirmName varchar(64) NOT NULL,\n> FirmAlpha char(20) NOT NULL UNIQUE,\n> FirmURL varchar(64),\n> FirmEmail varchar(64)\n> );\n> \n> Don't know if that is standrd or extension.\n> \n> We use \"CREATE SEQUENCE\" to do this is PgSQL.\n\n\tJust like PRIMARY KEY pretty much masks a 'CREATE UNIQUE INDEX',\nwhy not SERIAL/AUTO_INCREMENT masking a \"CREATE SEQUENCE\"?\n\n\n",
"msg_date": "Fri, 6 Mar 1998 08:31:17 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] AUTO_INCREMENT suggestion"
},
{
"msg_contents": "> > To aid those of us that don't want to use sequences\n\n?? What is our next feature?\n\n \"To aid those who don't want to use Postgres...\"\n\nSorry, couldn't resist ;-)\n\n> , can we add a\n> > feature to 6.4 that allows the use of an AUTO_INCREMENT statement\n> > when defining tables? MySQL does this, and I like it. It resembles\n> > the Autonumber feature in Access as well.\n> >\n> > create table tblFirm (\n> > FirmID int PRIMARY KEY AUTO_INCREMENT,\n> > FirmTypeID int,\n> > FirmName varchar(64) NOT NULL,\n> > FirmAlpha char(20) NOT NULL UNIQUE,\n> > FirmURL varchar(64),\n> > FirmEmail varchar(64)\n> > );\n>\n> Since the PRIMARY KEY is implemented by creating an unique index\n> on the field, it should be easy to implement AUTO_INCREMENT by\n> automagically creating a sequence and setting it as the default for\n> this field.\n>\n> Was PRIMARY KEY implemented in the parser?\n\nYes, in gram.y and then is transformed into essentially a\nCREATE UNIQUE INDEX statement afterwards, still in the parser-related\ncode. This kind of change is ugly, since it has side effects (an index is\ncreated with a specific name which might conflict with an existing name),\nbut was done for SQL92 compatibility. I'd be less than excited about\ndoing ugly code with side effects (a sequence is created, etc) for\ncompatibility with a specific commercial database.\n\n - Tom\n\n",
"msg_date": "Fri, 06 Mar 1998 14:03:40 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] AUTO_INCREMENT suggestion"
},
{
"msg_contents": "Goran Thyni wrote:\n> \n> D. Dante Lorenso wrote:\n> > \n> > To aid those of us that don't want to use sequences, can we add a\n> > feature to 6.4 that allows the use of an AUTO_INCREMENT statement\n> > when defining tables? MySQL does this, and I like it. It resembles\n> > the Autonumber feature in Access as well.\n> > \n> > create table tblFirm (\n> > FirmID int PRIMARY KEY AUTO_INCREMENT,\n> > FirmTypeID int,\n> > FirmName varchar(64) NOT NULL,\n> > FirmAlpha char(20) NOT NULL UNIQUE,\n> > FirmURL varchar(64),\n> > FirmEmail varchar(64)\n> > );\n> > \n> > Just yet another suggestion.\n> > \n> \n> Informix calls something like this SERIAL type, like:\n> \n> create table tblFirm (\n> FirmID SERIAL PRIMARY KEY,\n> FirmTypeID int,\n> FirmName varchar(64) NOT NULL,\n> FirmAlpha char(20) NOT NULL UNIQUE,\n> FirmURL varchar(64),\n> FirmEmail varchar(64)\n> );\n> \n> Don't know if that is standrd or extension.\n\nSybase calls this an identity. I don't think there is a standard name\nfor this, sigh.\n\nOcie\n",
"msg_date": "Fri, 6 Mar 1998 12:24:26 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] AUTO_INCREMENT suggestion"
},
{
"msg_contents": "[email protected] wrote:\n> > > the Autonumber feature in Access as well.\n> > >\n> > > create table tblFirm (\n> > > FirmID int PRIMARY KEY AUTO_INCREMENT,\n>\n> > Informix calls something like this SERIAL type, like:\n> >\n> > create table tblFirm (\n> > FirmID SERIAL PRIMARY KEY,\n> > FirmTypeID int,\n> > FirmName varchar(64) NOT NULL,\n> > FirmAlpha char(20) NOT NULL UNIQUE,\n> > FirmURL varchar(64),\n> > FirmEmail varchar(64)\n> > );\n> >\n> > Don't know if that is standrd or extension.\n> \n> Sybase calls this an identity. I don't think there is a standard name\n> for this, sigh.\n> \n> Ocie\n\n\nHow about adding all those keywords?\n AUTONUMBER, IDENTITY, AUTO_INCREMENT, SERIAL\n\nThen, anybody could switch to PostgreSQL without having to relearn\nthis.\n\nWould it be possible to have a \"compatability\" variable, like this?\n psql=> set sqlmode to {STRICT_ANSI|POSTGRESQL|ORACLE ...}\nso that ppl can set it to STRICT when they want to write portable\nSQL, and PGSQL when they want all these nice features?\n\n/* m */\n",
"msg_date": "Mon, 09 Mar 1998 13:08:10 +0100",
"msg_from": "Mattias Kregert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] AUTO_INCREMENT suggestion"
}
] |
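A note on the workaround discussed in the thread above: until a SERIAL/AUTO_INCREMENT shorthand exists, the usual approach is to create a sequence by hand and use it as the column default. The sketch below is illustrative only; the sequence name is made up, and it assumes (as Mattias suggests) that a DEFAULT expression may call nextval(). It is not a shorthand actually implemented in 6.3.

    -- create the sequence that will feed the key column
    CREATE SEQUENCE tblfirm_firmid_seq;

    -- use it as the column default; PRIMARY KEY still creates the unique index
    CREATE TABLE tblFirm (
        FirmID   int DEFAULT nextval('tblfirm_firmid_seq') PRIMARY KEY,
        FirmName varchar(64) NOT NULL
    );

    -- rows inserted without FirmID pick up the next sequence value automatically
    INSERT INTO tblFirm (FirmName) VALUES ('Example Firm');

A SERIAL or AUTO_INCREMENT keyword could simply expand to these two statements, much as PRIMARY KEY expands to a CREATE UNIQUE INDEX, which is the masking idea raised in the thread.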
[
{
"msg_contents": "I'm getting confused between the methods of connectivity with PERL.\nRight now I'm accessing several MySQL tables using DBI/DBD. However,\nI'd like to convert these programs into PostgreSQL apps. So, I'm\nassuming that since I used DBI (a common database connection format)\nthat I'd only have to change the connect string to point to the\nPostgreSQL source rather than MySQL and all should be good.\n\nSo, what is Pg.pm? and how is that connected (if at all) to DBI?\nI'm running 6.3 on Redhat 5, and psql seems to work Ok. I look in the\nsrc/interfaces/perl5/ and All I see are files for Pg...and nothing\nmentions DBI. Then, I search Altavista and come up with this in the\nlatest from CPAN under DBI:\n\nhttp://www.perl.com/CPAN-local/modules/by-category/07_Database_Interfaces/DB\nD/DBD-Pg-0.68.readme\n\nOk, so the source for PG.pm is version 1.7.0 or so, and the version of this\nDBD-Pg is 0.68. Do these numbers have ANYTHING to do with each other,\nor are they two separate products? Which one do I need, and if I need\nthe DBD from CPAN (as I suspect), will it work with 6.3?\n\nDante\n\n.------------------------------------------.-----------------------.\n| _ [email protected] - D. Dante Lorenso | Network Administrator |\n| | | ___ _ _ ___ __ _ ___ ___ | |\n| | |__ / o \\| '_|/ o_\\| \\ |\\_ _\\/ o \\ | Accounting Firms |\n| |____|\\___/|_| \\___/|_|\\_|\\___|\\___/ | Associated, inc. |\n| http://www.afai.com/~dlorenso | http://www.afai.com/ |\n'------------------------------------------'-----------------------'\n\n",
"msg_date": "Thu, 5 Mar 1998 14:50:39 -0500",
"msg_from": "\"D. Dante Lorenso\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL and DBI/DBD...vs Pg.pm"
},
{
"msg_contents": "D. Dante Lorenso wrote:\n> \n> I'm getting confused between the methods of connectivity with PERL.\n> Right now I'm accessing several MySQL tables using DBI/DBD. However,\n> I'd like to convert these programs into PostgreSQL apps. So, I'm\n> assuming that since I used DBI (a common database connection format)\n> that I'd only have to change the connect string to point to the\n> PostgreSQL source rather than MySQL and all should be good.\n\nSince DBI uses a different DBD for every database, you should get\nthe DBD for PostgreSQL (DBD:Pg) from your local perl archive...\n\nThis DBD really should be in the distribution.\n\n> So, what is Pg.pm? and how is that connected (if at all) to DBI?\n\nPg.pm is not in any way connected to DBI. It is marginally faster\nthan DBI, but you loose the ability to choose between different\ndatabases.\n\n/* m */\n",
"msg_date": "Fri, 06 Mar 1998 11:58:30 +0100",
"msg_from": "Mattias Kregert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL and DBI/DBD...vs Pg.pm"
}
] |
[
{
"msg_contents": "\nAnnounce: Release of PyGreSQL version 2.1\n===============================================\n\nPyGreSQL v2.1 has been released.\nIt is available at: ftp://ftp.druid.net/pub/distrib/PyGreSQL-2.1.tgz.\n\nPostgreSQL is a database system derived from Postgres4.2. It conforms\nto (most of) ANSI SQL and offers many interesting capabilities (C\ndynamic linking for functions or type definition, etc.). This package\nis copyright by the Regents of the University of California, and is\nfreely distributable.\n\nPython is a interpretated programming langage. It is object oriented,\nsimple to use (light syntax, simple and straighforward statements), and\nhas many extensions for building GUIs, interfacing with WWW, etc. An\nintelligent web browser (HotJava like) is currently under development\n(november 1995), and this should open programmers many doors. Python is\ncopyrighted by Stichting S Mathematisch Centrum, Amsterdam, The\nNetherlands, and is freely distributable.\n\nPyGreSQL is a python module that interfaces to a PostgreSQL database. It\nembeds the PostgreSQL query library to allow easy use of the powerful\nPostgreSQL features from a Python script.\n\nPyGreSQL 2.1 was developed and tested on a NetBSD 1.3_BETA system. It\nis based on the PyGres95 code written by Pascal Andre,\[email protected]. I changed the version to 2.0 and updated the\ncode for Python 1.5 and PostgreSQL 6.2.1. While I was at it I upgraded\nthe code to use full ANSI style prototypes and changed the order of\narguments to connect. Version 2.1 is fixes and enhancements to that.\n\nImportant changes from PyGreSQL 2.0 to PyGreSQL 2.1:\n - return fields as proper Python objects for field type\n - Cleaned up pgext.py\n - Added dictresult method\n\nImportant changes from Pygres95 1.0b to PyGreSQL 2.0:\n - Updated code for PostgreSQL 6.2.1 and Python 1.5.\n - Reformatted code and converted to ANSI .\n - Changed name to PyGreSQL (from PyGres95.)\n - Changed order of arguments to connect function.\n - Created new type pgqueryobject and moved certain methods to it.\n - Added a print function for pgqueryobject\n - Various code changes - mostly stylistic.\n\nFor more information about each package, please have a look to their\nweb pages:\n - Python : http://www.python.org/\n - PostgreSQL : http://www.PostgreSQL.org/\n - PyGreSQL : http://www.druid.net/pygresql/\n\n\nD'Arcy J.M. Cain\[email protected]\n\n\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 6 Mar 1998 00:14:44 -0500 (EST)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "PyGreSQL 2.1 released"
}
] |
[
{
"msg_contents": "\nBilly G. Allie wrote:\n> \n> Vadim B. Mikheev wrote:\n> >Billy G. Allie wrote:\n> >>\n> >> I encountered a problem (bug? feature?) where \"select currval('sequence')\"\n> >> will generate an error if \"select nextval('sequence')\" is not executed\n> first.\n> >\n> >This is feature :)\n> >1. This is what Oracle does.\n> >2. currval () is described as returning value returned by\n> > last nextval() in _session_.\n> >\n> >Vadim\n> >\n> Does this mean we should not modify this behavior because \"this is what Oracle\n> does\"? I can envision where using currval() before nextval() can be useful.\n\nActually, what you are proposing was initial behaviour of currval().\nThis was changed to be more consistent with 1. & 2. (note - not only 1.,\nbut 2. also).\n\nBut personally I haven't objection against changing this again.\nMen, vote pls!\n\nVadim\n\nNo, I would not change this again, my question is iff instead of elog(ERROR\nthe old code could be reinserted. This would mean, that when a session did a\nprevious nextval it gets it's session currval, but if it did not, it gets a global\ncurrval as in previous implementation. The problem is somebody who\ncalls currval often without ever calling nextval (like a monitor) will totally kill\nperformance. (same as a select max(field))\n\nAndreas\n\n\n\n",
"msg_date": "Fri, 6 Mar 1998 14:06:25 +-100",
"msg_from": "Zeugswetter Andreas <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Re: [PATCHES] Changes to sequence.c"
}
] |
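
As a minimal SQL sketch of the session semantics debated in the thread above (the sequence name is invented and the exact error text depends on the server version), the behaviour under discussion looks like this:

    create sequence order_seq;

    select currval('order_seq');   -- errors: no nextval() has been called in this session yet
    select nextval('order_seq');   -- returns 1
    select currval('order_seq');   -- now returns 1 for this session
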
[
{
"msg_contents": "> Vadim B. Mikheev wrote:\n> >\n> > Peter T Mount wrote:\n> > >\n> > > On Wed, 4 Mar 1998 [email protected] wrote:\n> > >\n> > > > This may sound like an obvious question, but if a user defines a\n> > > > query, do we save the query plan? This would reduce the\n> > > > communications between the client and server (a small gain), and allow\n> > > > the server to start executing the query as soon as it recognized the\n> > > > name of the stored query and parsed the arguments.\n> > >\n> > > Not sure ofhand, but it would be useful for JDBC's PreparedStatement and\n> > > CallableStatement classes\n> >\n> > We can implement it very easy, and fast. Execution plan may be reused\n> > many times. Is this feature in standard ?\n> > What is proposed syntax if not ?\n>\n> I don't think it is so much a question of syntax as it is a question\n> of what we do in the backend. Suppose I create a stored query in SQL.\n> We already store the SQL source for this in the database, right? So\n> when it comes time to execute the query, we take this SQL and execute\n> it as if the user had entered it directly. What I am proposing would\n> be to basically store the compiled query plan as well.\n>\n> I do see a couple sticky points:\n>\n> We would need some information about which variables are to be\n> substituted into this query plan, but this should be fairly\n> straightforward.\n>\n> Some querys may not respond well to this, for example, if a table had\n> an index on an integer field f1, this would probably be the best way\n> to satisfy a select where f1<10. But if this were in a query as f1<x,\n> then a sufficiently high value of x might make this not such a good\n> way to run the query. I haven't looked into this, but I would assume\n> that the optimizer relies on the specific values in such cases.\n>\n> We need to be able to handle changes to the structures and contents of\n> the tables. If the query plan is built and we add 10000 rows to a\n> table it references, the query should probably be recompiled. We\n> could probably do this at vacuum time. There is also a small chance\n> that a table or index that the query plan was using is dropped. We\n> could automatically rebuild the query if the table was created after\n> the query was compiled.\n>\n>\n> Boy, to look at this, you'd think I had already built one of these :)\n> I haven't but I'm willing to give it a shot.\n>\n> Ocie\n>\nNot to pile on, but, I have a great interest in this subject. We do a\nlot of work using off-the-shelf ODBC tools. And, we have observed that\nthese tools use PREPARE for two purposes.\n\nOne is to speed up iterative queries which join data from different\ndatabases. You seem to be addressing this issue.\n\nThe other reason PREPARE is used is to retrieve a description of a\nquery's projection (target/result) with out actually running the\nquery. Currently, ODBC drivers must simulate the prepare statement by\nsubmitting the full query and discard the data just to get the result\ndescription. Obviously this slows response time greatly when the query\nis a large data set. So if you haven't considered returning the the\nresults description, please do.\n\nThank Very Much",
"msg_date": "Fri, 06 Mar 1998 09:33:08 -0500",
"msg_from": "David Hartwig <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Speedups"
}
] |
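
No PREPARE statement existed in PostgreSQL at the time of this thread; purely to illustrate the idea being discussed (a named, parameterized statement whose plan is built once and then reused), a hypothetical syntax might look like the following, with all object names invented for the example:

    prepare recent_tasks (int4) as
        select col1, col2 from bigtable where f1 < $1;

    execute recent_tasks (10);   -- reuses the stored plan with a new parameter value

The thread's concern about plans that stop being optimal (for example an index scan on f1 < $1 when the bound value is large) is exactly why such a feature would also need a way to recompile stored plans.
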
[
{
"msg_contents": "Hi,\n\nIn the file large_object/inv_api.c there is a statement in the function\ninv_create\nwhich goes:\n\n file_oid=newoid() + 1;\n\nlater on a heap_create_with_catalog call is performed to create a heap\nfor the large object called xinv<file_oid>.\n\nAccording to code (and the comments in the code) the assumption is that the\noid\nof the heap_relation will be equal to the value of the variable file_oid.\n\nThis of course will only be the case if nobody else called newoid()\nbefore the heap relation is created.\n\nThis might lead the large object implementation to confuse\nlarge object relations with other relations.\n\nAccording to me this is a bug. I'm I right?\n\nThanks,\nMaurice\n\n\n",
"msg_date": "Fri, 6 Mar 1998 16:30:56 +0100",
"msg_from": "\"Maurice Gittens\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "newoid in invapi.c"
},
{
"msg_contents": "On Fri, 6 Mar 1998, Maurice Gittens wrote:\n\n> Hi,\n> \n> In the file large_object/inv_api.c there is a statement in the function\n> inv_create\n> which goes:\n> \n> file_oid=newoid() + 1;\n> \n> later on a heap_create_with_catalog call is performed to create a heap\n> for the large object called xinv<file_oid>.\n> \n> According to code (and the comments in the code) the assumption is that the\n> oid\n> of the heap_relation will be equal to the value of the variable file_oid.\n> \n> This of course will only be the case if nobody else called newoid()\n> before the heap relation is created.\n> \n> This might lead the large object implementation to confuse\n> large object relations with other relations.\n> \n> According to me this is a bug. I'm I right?\n\nYes, and no. LargeObjects are supposed to run within a transaction (if you\ndon't then some fun things happen), and (someone correct me if I'm wrong)\nif newoid() is called from within the transaction, it is safe? \n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Sun, 8 Mar 1998 19:43:25 +0000 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] newoid in invapi.c"
}
] |
[
{
"msg_contents": "\nI've dumped a 6.1 database using the pg_dumpall from 6.3 to load into\n6.3.\n\nThe format of the data to be copied into pg_user is wrong - there are\nnot enough columns. I guess this may be a problem from having skipped\nout 6.2, but it would be nice if it worked properly :-)\n\nAlso one of the column names I had used in my old database is no longer\nallowed ('local'). Again, it would be nice if pg_dumpall could spot\nno-longer allowed column names and warn the user rather than having\npsql crash when one tries to import the data and then having to trace\nback to where the problem was...\n\n\nBest wishes,\n\nAndrew\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n",
"msg_date": "Fri, 6 Mar 1998 16:45:15 GMT",
"msg_from": "Andrew Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dumpall"
},
{
"msg_contents": "> \n> \n> I've dumped a 6.1 database using the pg_dumpall from 6.3 to load into\n> 6.3.\n> \n> The format of the data to be copied into pg_user is wrong - there are\n> not enough columns. I guess this may be a problem from having skipped\n> out 6.2, but it would be nice if it worked properly :-)\n> \n> Also one of the column names I had used in my old database is no longer\n> allowed ('local'). Again, it would be nice if pg_dumpall could spot\n> no-longer allowed column names and warn the user rather than having\n> psql crash when one tries to import the data and then having to trace\n> back to where the problem was...\n\nYikes, we never changed pg_user to pg_shadow in pg_dumpall. Isn't that\nthe real problem. Need to have that patched, or people will not be able\nto upgrade. Applying patch now.\n\nDo I put this patch in the patches directory?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 6 Mar 1998 12:23:33 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dumpall"
},
{
"msg_contents": "On Fri, 6 Mar 1998, Bruce Momjian wrote:\n\n> > \n> > \n> > I've dumped a 6.1 database using the pg_dumpall from 6.3 to load into\n> > 6.3.\n> > \n> > The format of the data to be copied into pg_user is wrong - there are\n> > not enough columns. I guess this may be a problem from having skipped\n> > out 6.2, but it would be nice if it worked properly :-)\n> > \n> > Also one of the column names I had used in my old database is no longer\n> > allowed ('local'). Again, it would be nice if pg_dumpall could spot\n> > no-longer allowed column names and warn the user rather than having\n> > psql crash when one tries to import the data and then having to trace\n> > back to where the problem was...\n> \n> Yikes, we never changed pg_user to pg_shadow in pg_dumpall. Isn't that\n> the real problem. Need to have that patched, or people will not be able\n> to upgrade. Applying patch now.\n> \n> Do I put this patch in the patches directory?\n\n\tYes...Neil, can you put a \"patches page\" up on the WWW site,\nlinked to the main page, that lists the patches as well as a short\ndescription of what each one does?\n\n\tBruce, can you get me a patch for this? I'm going to review the\npatches that I do have now, and the ones that look perfectly safe (ie. I\nhave no doubt about), will get included on the CD rom also...not as part\nof the source, just as a seperate file to be used...\n\n\n\n",
"msg_date": "Fri, 6 Mar 1998 12:34:42 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dumpall"
}
] |
[
{
"msg_contents": "SQL test suite version 6.0 is available here\nhttp://www.itl.nist.gov/div897/ctg/sql_form.htm\n\nWe can use this to validate postgresql\n\n\n\n\n_________________________________________________________\nDO YOU YAHOO!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 6 Mar 1998 12:11:05 -0800 (PST)",
"msg_from": "gold bag <[email protected]>",
"msg_from_op": true,
"msg_subject": "SQL Test suite V6.0 is here --"
}
] |
[
{
"msg_contents": "\nwhew.. after some more debugging, it would appear that the problem\nlies somewhere in the page stuff, which I know less than nothing\nabout.\n\nHere's the point where I'm at: heapam.c line 442 a macro call to\nHeapTupleSatisfies graps our data for us (the messed up struct), which\nactually calls the PageGetItem macro for the data.\n\nbut, the curious thing is that the relation pointer that gets passed\nto both heapgettup and the macro calls contains the correct struct in\nrelation->rd_att->attrs[0], but then a faulty one is being returned by\nPageGetItem. PageGetItem just appears to return a pointer somewhere\nin the page.. where does this page stuff get written? I'm not sure\nhow much farther I can go.. I'll check out the backend flowchart for\nmore info.\n\nI might also do a diff to see which page stuff has changed.. Is it\npossible to back out the atttypmod changes to see if that fixes it?\n",
"msg_date": "Fri, 6 Mar 1998 16:05:59 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "dec alpha/64bit stuff"
},
{
"msg_contents": "Brett McCormick wrote:\n\n> whew.. after some more debugging, it would appear that the problem\n> lies somewhere in the page stuff, which I know less than nothing\n> about.\n>\n> Here's the point where I'm at: heapam.c line 442 a macro call to\n> HeapTupleSatisfies graps our data for us (the messed up struct), which\n> actually calls the PageGetItem macro for the data.\n>\n> but, the curious thing is that the relation pointer that gets passed\n> to both heapgettup and the macro calls contains the correct struct in\n> relation->rd_att->attrs[0], but then a faulty one is being returned by\n> PageGetItem. PageGetItem just appears to return a pointer somewhere\n> in the page.. where does this page stuff get written? I'm not sure\n> how much farther I can go.. I'll check out the backend flowchart for\n> more info.\n>\n> I might also do a diff to see which page stuff has changed.. Is it\n> possible to back out the atttypmod changes to see if that fixes it?\n\nI predict that if you pump up attypmod to a 32 bit field your problems\nwill go away. I'll bet that the page is being read off of disk and the\nstruct is memcpy'd (or something similar) into it, rather than being\ncopied field-by-field. The struct internal alignments are off for the\nAlpha, which will pad structs to get the optimal access alignment.\n\n - Tom\n\n",
"msg_date": "Sat, 07 Mar 1998 05:52:33 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dec alpha/64bit stuff"
},
{
"msg_contents": "> I predict that if you pump up attypmod to a 32 bit field your problems\n> will go away. I'll bet that the page is being read off of disk and the\n> struct is memcpy'd (or something similar) into it, rather than being\n> copied field-by-field. The struct internal alignments are off for the\n> Alpha, which will pad structs to get the optimal access alignment.\n\nOoooh, good guess. Can't wait to hear if it is correct.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 7 Mar 1998 01:06:12 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dec alpha/64bit stuff"
},
{
"msg_contents": "> \n> Brett McCormick wrote:\n> \n> > whew.. after some more debugging, it would appear that the problem\n> > lies somewhere in the page stuff, which I know less than nothing\n> > about.\n> >\n> > Here's the point where I'm at: heapam.c line 442 a macro call to\n> > HeapTupleSatisfies graps our data for us (the messed up struct), which\n> > actually calls the PageGetItem macro for the data.\n\nOK, I have an idea. Contact Marc, [email protected]. Have him\ngive you a login account to postgresql.org. Use cvs to pull snapshots\nby date. Compile and run initdb on several dates, and by process of\nelimination, find out the day that alpha broke.\n\nWe can then analyze the patches for that day and find the problem. I\nassume 6.2.1 worked for you, and that was October 17th. Go from there\nto the 6.3 release and find the date of failure.\n\nWith initdb problems, there is really no good way to debug problems like\nthis.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 7 Mar 1998 18:10:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dec alpha/64bit stuff"
},
{
"msg_contents": "> OK, I have an idea. Contact Marc, [email protected]. Have him\n> give you a login account to postgresql.org. Use cvs to pull snapshots\n> by date. Compile and run initdb on several dates, and by process of\n> elimination, find out the day that alpha broke.\n> \n> We can then analyze the patches for that day and find the problem. I\n> assume 6.2.1 worked for you, and that was October 17th. Go from there\n> to the 6.3 release and find the date of failure.\n> \n> With initdb problems, there is really no good way to debug problems like\n> this.\n\nAnother suggestion: use a binary search to find the date it broke. Will \nsave you a lot of time :)\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Sun, 8 Mar 1998 11:40:52 +0100 (MET)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dec alpha/64bit stuff"
},
{
"msg_contents": "> \n> > OK, I have an idea. Contact Marc, [email protected]. Have him\n> > give you a login account to postgresql.org. Use cvs to pull snapshots\n> > by date. Compile and run initdb on several dates, and by process of\n ^^^^^^^^^^^\n> > elimination, find out the day that alpha broke.\n ^^^^^^^^^^^\n> Another suggestion: use a binary search to find the date it broke. Will \n> save you a lot of time :)\n\nWas it not clear that is was I was suggesting? Try mid-January first,\nthen mid December or mid-February, depending on whether mid-January works.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 8 Mar 1998 13:05:10 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dec alpha/64bit stuff"
},
{
"msg_contents": "\nA binary search?\n\nOn Sun, 8 March 1998, at 11:40:52, Maarten Boekhold wrote:\n\n> Another suggestion: use a binary search to find the date it broke. Will \n> save you a lot of time :)\n> \n> Maarten\n> \n> _____________________________________________________________________________\n> | TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n> | Department of Electrical Engineering |\n> | Computer Architecture and Digital Technique section |\n> | [email protected] |\n> -----------------------------------------------------------------------------\n",
"msg_date": "Sun, 8 Mar 1998 15:01:20 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] dec alpha/64bit stuff"
},
{
"msg_contents": "On Sat, 7 March 1998, at 18:10:36, Bruce Momjian wrote:\n\n> OK, I have an idea. Contact Marc, [email protected]. Have him\n> give you a login account to postgresql.org. Use cvs to pull snapshots\n> by date. Compile and run initdb on several dates, and by process of\n> elimination, find out the day that alpha broke.\n\nThat's what I've been thinking of, but I haven't had a chance to get\nthe cvs archive yet.\n\n> \n> We can then analyze the patches for that day and find the problem. I\n> assume 6.2.1 worked for you, and that was October 17th. Go from there\n> to the 6.3 release and find the date of failure.\n> \n> With initdb problems, there is really no good way to debug problems like\n> this.\n\nI found a decent way. I just put a printf(getpid()), sleep 10 in\nBootstrapMain. Then I run initdb in one window and gdb in another,\nattaching gdb to that postgres -boot process. Worked fairly well\nuntil I got stumped by the page stuff.\n",
"msg_date": "Sun, 8 Mar 1998 15:03:44 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] dec alpha/64bit stuff"
},
{
"msg_contents": "> \n> On Sat, 7 March 1998, at 18:10:36, Bruce Momjian wrote:\n> \n> > OK, I have an idea. Contact Marc, [email protected]. Have him\n> > give you a login account to postgresql.org. Use cvs to pull snapshots\n> > by date. Compile and run initdb on several dates, and by process of\n> > elimination, find out the day that alpha broke.\n> \n> That's what I've been thinking of, but I haven't had a chance to get\n> the cvs archive yet.\n\nAs I said in another post, by binary search, he meant try Jan 15, and\nthen Dec 15 or Feb 15 depending on whether Jan 15 worked. Same thing I\nexpect you were going to do when you could get to the cvs archive.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 8 Mar 1998 18:15:38 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dec alpha/64bit stuff"
},
{
"msg_contents": "\nDoh, silly me. I understood bruce & you.. I thought you meant\nexecutable binary search or something :)\n\nfoot in mouth,\n\nOn Sun, 8 March 1998, at 15:01:20, Brett McCormick wrote:\n\n> A binary search?\n> \n> On Sun, 8 March 1998, at 11:40:52, Maarten Boekhold wrote:\n> \n> > Another suggestion: use a binary search to find the date it broke. Will \n> > save you a lot of time :)\n> > \n> > Maarten\n> > \n> > _____________________________________________________________________________\n> > | TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n> > | Department of Electrical Engineering |\n> > | Computer Architecture and Digital Technique section |\n> > | [email protected] |\n> > -----------------------------------------------------------------------------\n",
"msg_date": "Sun, 8 Mar 1998 15:20:44 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] dec alpha/64bit stuff"
}
] |
[
{
"msg_contents": "\nif you look at the schema for pg_attribute, down at the very end there\nare some 12491 that should be 1249l\n\nwould this cause our alignment problem?\nsomehow I doubt it..\n\nin any case, i found this by trying to put a 4 byte field after\nattrelid and before attname to see if that cleared up the problem.\nsilly eh?\n\nwe'll see\n",
"msg_date": "Fri, 6 Mar 1998 16:43:21 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "errors in pg_attribute.h"
},
{
"msg_contents": "\nI fixed this (the 1/l thing) and did a recompile but it didn't help\nour alpha problems any..\n\nCan anyone describe the way the page stuff works?\n\nOn Fri, 6 March 1998, at 16:43:21, Brett McCormick wrote:\n\n> if you look at the schema for pg_attribute, down at the very end there\n> are some 12491 that should be 1249l\n> \n> would this cause our alignment problem?\n> somehow I doubt it..\n> \n> in any case, i found this by trying to put a 4 byte field after\n> attrelid and before attname to see if that cleared up the problem.\n> silly eh?\n> \n> we'll see\n",
"msg_date": "Fri, 6 Mar 1998 17:21:37 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] errors in pg_attribute.h"
},
{
"msg_contents": "> \n> \n> if you look at the schema for pg_attribute, down at the very end there\n> are some 12491 that should be 1249l\n> \n> would this cause our alignment problem?\n> somehow I doubt it..\n> \n> in any case, i found this by trying to put a 4 byte field after\n> attrelid and before attname to see if that cleared up the problem.\n> silly eh?\n\nThat is surely a bug. Patch applied to source tree.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 6 Mar 1998 23:48:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] errors in pg_attribute.h"
}
] |
[
{
"msg_contents": "Here is a discussion from the Informix group on subselect performance. \nI think it makes us look pretty good.\n\n---------------------------------------------------------------------------\n\nPath: readme1.op.net!op.net!cezanne.op.net!op.net!newsfeed.direct.ca!cpk-news-hub1.bbnplanet.com!cpk-news-feed4.bbnplanet.com!cpk-news-feed1.bbnplanet.com!news.bbnplanet.com!news.iquest.net!not-for-mail\nFrom: \"Matt Reprogle\" <[email protected]>\nNewsgroups: comp.databases.informix\nSubject: SELECT subquery much slower than IN ( list...)???\nDate: 5 Mar 1998 05:07:31 GMT\nOrganization: IQuest Internet, Inc.\nLines: 30\nMessage-ID: <01bd47f3$ab35d9a0$55392bd1@reprogle>\nNNTP-Posting-Host: iq-ind-ns006-21.iquest.net\nX-Newsreader: Microsoft Internet News 4.70.1155\nXref: readme1.op.net comp.databases.informix:43794 \n\nI have been having problems with a select statement of the type:\n\nselect col1, col2\nfrom bigtable\nwhere col1 in \n (select key from temp_list_table);\n\nIn one case I looked at, the subquery returns just 13 unique values in\nsubsecond time, yet it took almost 7 minutes for the main query to\ncomplete.\n\nOn the other hand, if I write out the result of the subquery explicitly,\nsuch as:\n\nselect col1, col2 \nfrom bigtable\nwhere col1 in ('A','B','C','D','E','F','G','H','I','J','K','L','M');\n\nthe query completes in less than 2 seconds.\n\nI guess I had the mistaken assumption that the main query treated the\nsubquery result like an explicit list of the form ('val1','val2',...).\n\nWhat could cause the huge performance difference between the two query\nforms?\n\nI am on 7.23 and Solaris 2.5.1, Sun E3000.\n-- \nMatt Reprogle\[email protected]\nPath: readme1.op.net!op.net!cezanne.op.net!op.net!howland.erols.net!news.idt.net!woodstock.news.demon.net!demon!news.demon.co.uk!demon!smooth1.demon.co.uk!djw\nFrom: David Williams <[email protected]>\nNewsgroups: comp.databases.informix\nSubject: Re: SELECT subquery much slower than IN ( list...)???\nDate: Thu, 5 Mar 1998 22:00:41 +0000\nOrganization: not applicable\nMessage-ID: <[email protected]>\nReferences: <01bd47f3$ab35d9a0$55392bd1@reprogle>\nNNTP-Posting-Host: smooth1.demon.co.uk\nX-NNTP-Posting-Host: smooth1.demon.co.uk [194.222.39.154]\nMIME-Version: 1.0\nX-Newsreader: Turnpike (32) Version 3.05 <9Hhi+s$5$1$z+XxjwCrFWIswYg>\nLines: 65\nXref: readme1.op.net comp.databases.informix:43884 \n\nIn article <01bd47f3$ab35d9a0$55392bd1@reprogle>, Matt Reprogle\n<[email protected]> writes\n>I have been having problems with a select statement of the type:\n>\n>select col1, col2\n>from bigtable\n>where col1 in \n> (select key from temp_list_table);\n>\n Foreach row in big table\n get value of col1 (A)\n run the subquery and get the results (B)\n check if a in B\n end foreach\n\n If bigtable has n rows this runs the subquery n times.\n\n Also it scans every row in BIGTABLE!!! 
No indexeson BIG TABLE are \n used.\n\n\n \n>In one case I looked at, the subquery returns just 13 unique values in\n>subsecond time, yet it took almost 7 minutes for the main query to\n>complete.\n>\n>On the other hand, if I write out the result of the subquery explicitly,\n>such as:\n>\n>select col1, col2 \n>from bigtable\n>where col1 in ('A','B','C','D','E','F','G','H','I','J','K','L','M');\n>\n This will use the index on col1..\n\n>the query completes in less than 2 seconds.\n>\n>I guess I had the mistaken assumption that the main query treated the\n>subquery result like an explicit list of the form ('val1','val2',...).\n>\n>What could cause the huge performance difference between the two query\n>forms?\n>\n>I am on 7.23 and Solaris 2.5.1, Sun E3000.\n\n Try\n select col1, col2\n from bigtable,temp_list_table\n where bigtable.col1 = temp_list_table.key\n\n\n i.e. do a join not a corelated subquery!!\n\n-- \nDavid Williams\n\nMaintainer of the Informix FAQ\n Primary site (Beta Version) http://www.smooth1.demon.co.uk\n Official site http://www.iiug.org/techinfo/faq/faq_top.html\n\nI see you standin', Standin' on your own, It's such a lonely place for you, For \nyou to be If you need a shoulder, Or if you need a friend, I'll be here \nstanding, Until the bitter end...\nSo don't chastise me Or think I, I mean you harm...\nAll I ever wanted Was for you To know that I care\nPath: readme1.op.net!op.net!cezanne.op.net!op.net!news1.ispnews.com!europa.clark.net!206.251.127.50!newsfeed.gte.net!news.gte.net!not-for-mail\nFrom: [email protected] (Douglas Wilson)\nNewsgroups: comp.databases.informix\nSubject: Re: SELECT subquery much slower than IN ( list...)???\nDate: Fri, 06 Mar 1998 18:21:30 GMT\nOrganization: gte.net\nLines: 31\nMessage-ID: <[email protected]>\nReferences: <01bd47f3$ab35d9a0$55392bd1@reprogle> <[email protected]>\nNNTP-Posting-Host: fw.brightpoint.com\nX-Auth: D203870C029BCB8A4BC48491\nX-Newsreader: Forte Free Agent 1.11/32.235\nXref: readme1.op.net comp.databases.informix:43942 \n\nOn Thu, 5 Mar 1998 22:00:41 +0000, David Williams\n<[email protected]> wrote:\n\n>In article <01bd47f3$ab35d9a0$55392bd1@reprogle>, Matt Reprogle\n><[email protected]> writes\n>>I have been having problems with a select statement of the type:\n>>\n>>select col1, col2\n>>from bigtable\n>>where col1 in \n>> (select key from temp_list_table);\n\n(stuff clipped)\n\n> i.e. do a join not a corelated subquery!!\n\nTrue, if the 'key' in the temp table has no duplicates\nthen just join; if there are duplicates, you can\n'select unique key' from the temp table into another\ntemp table, but I dont think this is a corelated\nsubquery, just a subquery. 
A corelated subquery\nwould be something like\n\nselect col1, col2\nfrom bigtable\nwhere col1 in \n (select col1 from temp_list_table\n where temp_list_table.col2=bigtable.col2);\n\nCheers,\nDouglas Wilson\nPath: readme1.op.net!op.net!cezanne.op.net!op.net!darla.visi.com!news-out.visi.com!feed2.news.erols.com!erols!cpk-news-hub1.bbnplanet.com!news.bbnplanet.com!newsfeed.gte.net!news.gte.net!not-for-mail\nFrom: [email protected] (Douglas Wilson)\nNewsgroups: comp.databases.informix\nSubject: Re: SELECT subquery much slower than IN ( list...)???\nDate: Thu, 05 Mar 1998 23:39:05 GMT\nOrganization: gte.net\nLines: 24\nMessage-ID: <[email protected]>\nReferences: <01bd47f3$ab35d9a0$55392bd1@reprogle>\nNNTP-Posting-Host: fw.brightpoint.com\nX-Auth: D203990A1986CB8653C88491\nX-Newsreader: Forte Free Agent 1.11/32.235\nXref: readme1.op.net comp.databases.informix:43888 \n\nOn 5 Mar 1998 05:07:31 GMT, \"Matt Reprogle\" <[email protected]>\nwrote:\n\n>I have been having problems with a select statement of the type:\n>\n>select col1, col2\n>from bigtable\n>where col1 in \n> (select key from temp_list_table);\n>\n>In one case I looked at, the subquery returns just 13 unique values in\n>subsecond time, yet it took almost 7 minutes for the main query to\n>complete.\n\nhave you done a 'set explain on'?\nI had a similar situation once, and I didn't realize \n(until the 'explain') that the\ntable in the main query was really an alias (synonym, whatever) for\na table in another database on another machine. The optimizer\ntherefore could not use the index on the main table.\nAlso could be an 'update statistics' thing.\n\nCheers,\nDouglas Wilson\nPath: readme1.op.net!op.net!cezanne.op.net!op.net!news.bconnex.net!nac!news-xfer.netaxs.com!fastnet!howland.erols.net!news.idt.net!nntp2.cerf.net!nntp3.cerf.net!hacgate2.hac.com!news.delcoelect.com!not-for-mail\nFrom: Matt Reprogle <[email protected]>\nNewsgroups: comp.databases.informix\nSubject: Re: SELECT subquery much slower than IN ( list...)???\nDate: Fri, 06 Mar 1998 15:31:10 -0500\nOrganization: Delco Electronics\nLines: 120\nMessage-ID: <[email protected]>\nReferences: <01bd47f3$ab35d9a0$55392bd1@reprogle> <[email protected]>\nNNTP-Posting-Host: koicew00.delcoelect.com\nMime-Version: 1.0\nContent-Type: text/plain; charset=us-ascii\nContent-Transfer-Encoding: 7bit\nX-Mailer: Mozilla 3.0 (X11; I; HP-UX A.09.01 9000/715)\nXref: readme1.op.net comp.databases.informix:43973 \n\nDouglas Wilson wrote:\n> have you done a 'set explain on'?\n> I had a similar situation once, and I didn't realize\n> (until the 'explain') that the\n> table in the main query was really an alias (synonym, whatever) for\n> a table in another database on another machine. 
The optimizer\n> therefore could not use the index on the main table.\n> Also could be an 'update statistics' thing.\n> \n> Cheers,\n> Douglas Wilson\nFirst, some additional information:\n1) the main table I am querying is about 3,000,000 rows.\n2) I have a unique index for table h_tab on columns (l_key, h_seq)\n\nHere is the sqexplain.out for each query mode:\n\nEXPLICIT LIST (runs in about 3 seconds)\nQUERY:\n------\nselect l_key,max(h_seq) last_h_seq\nfrom h_tab\nwhere l_key in (\n'80914',\n'80D74',\n'80C30',\n'80C28',\n'80F98',\n'80915',\n'80A26',\n'80917',\n'80F92',\n'80A25',\n'80A24',\n'80A23',\n'80811')\ngroup by l_key\ninto temp last_temp\nwith no log\n\nEstimated Cost: 362\nEstimated # of Rows Returned: 2\nTemporary Files Required For: Group By\n\n1) h_tab: INDEX PATH\n\n (1) Index Keys: l_key h_seq (Key-Only) (Serial, fragments: ALL)\n Lower Index Filter: h_tab.l_key = '80914'\n\n (2) Index Keys: l_key h_seq (Key-Only) (Serial, fragments: ALL)\n Lower Index Filter: h_tab.l_key = '80D74'\n\n (3) Index Keys: l_key h_seq (Key-Only) (Serial, fragments: ALL)\n Lower Index Filter: h_tab.l_key = '80C30'\n\n (4) Index Keys: l_key h_seq (Key-Only) (Serial, fragments: ALL)\n Lower Index Filter: h_tab.l_key = '80C28'\n\n (5) Index Keys: l_key h_seq (Key-Only) (Serial, fragments: ALL)\n Lower Index Filter: h_tab.l_key = '80F98'\n\n (6) Index Keys: l_key h_seq (Key-Only) (Serial, fragments: ALL)\n Lower Index Filter: h_tab.l_key = '80915'\n\n (7) Index Keys: l_key h_seq (Key-Only) (Serial, fragments: ALL)\n Lower Index Filter: h_tab.l_key = '80A26'\n\n (8) Index Keys: l_key h_seq (Key-Only) (Serial, fragments: ALL)\n Lower Index Filter: h_tab.l_key = '80917'\n\n (9) Index Keys: l_key h_seq (Key-Only) (Serial, fragments: ALL)\n Lower Index Filter: h_tab.l_key = '80F92'\n\n (10) Index Keys: l_key h_seq (Key-Only) (Serial, fragments: ALL)\n Lower Index Filter: h_tab.l_key = '80A25'\n\n (11) Index Keys: l_key h_seq (Key-Only) (Serial, fragments: ALL)\n Lower Index Filter: h_tab.l_key = '80A24'\n\n (12) Index Keys: l_key h_seq (Key-Only) (Serial, fragments: ALL)\n Lower Index Filter: h_tab.l_key = '80A23'\n\n (13) Index Keys: l_key h_seq (Key-Only) (Serial, fragments: ALL)\n Lower Index Filter: h_tab.l_key = '80811'\n\n\nSUBQUERY: runs in about 7 minutes\nQUERY:\n------\nselect l_key,max(h_seq) last_h_seq\nfrom h_tab\nwhere l_key in (select temp_l from l_temp_tab)\ngroup by l_key\ninto temp last_temp\nwith no log\n\nEstimated Cost: 88140\nEstimated # of Rows Returned: 9142\n\n1) h_tab: INDEX PATH\n\n Filters: h_tab.l_key = ANY <subquery>\n\n (1) Index Keys: l_key h_seq (Key-Only) (Serial, fragments: ALL)\n\n Subquery:\n ---------\n Estimated Cost: 2\n Estimated # of Rows Returned: 10\n\n 1) mcreprog.l_temp_tab: SEQUENTIAL SCAN (Serial, fragments: ALL)\n\nThis tells me that it is doing a key-only query on the big table, and a\nsequential scan on the temp table. 
Isn't that what you would expect?\n\n-- \nMatt Reprogle \nIS Engineer, Delphi Delco Electronics Systems\nphone:(765)451-9651 FAX: (765)451-8230 \[email protected]\nPath: readme1.op.net!op.net!cezanne.op.net!op.net!darla.visi.com!news-out.visi.com!feed2.news.erols.com!erols!newsfeed.internetmci.com!131.103.1.116!news2.chicago.iagnet.net!qual.net!iagnet.net!203.29.160.2!ihug.co.nz!nsw1.news.telstra.net!egprod05.westpac.com.au!fbox@westpac.com.au\nFrom: [email protected] (Jason Harris)\nNewsgroups: comp.databases.informix\nSubject: Re: SELECT subquery much slower than IN ( list...)???\nDate: Fri, 06 Mar 1998 00:05:57 GMT\nOrganization: Westpac Banking Corporation\nLines: 48\nMessage-ID: <[email protected]>\nReferences: <01bd47f3$ab35d9a0$55392bd1@reprogle>\nReply-To: [email protected]\nNNTP-Posting-Host: egprod03.westpac.com.au\nX-Newsreader: Forte Free Agent 1.11/32.235\nXref: readme1.op.net comp.databases.informix:43876 \n\nMatt,\n\nI too am interested in this.\n\nI have approx 50 delete statements that use a subquery on a key. All\ntables have at least one index on the column that I am using, with\nthat column as the first or only element. At the moment around half\nuse the indexes and about half sequential scan. I have not been able\nto figure out why they dont all use the index.\n\nIf I found out more I will let you know.\n\nJason\n\nOn 5 Mar 1998 05:07:31 GMT, \"Matt Reprogle\" <[email protected]>\nwrote:\n\n>I have been having problems with a select statement of the type:\n>\n>select col1, col2\n>from bigtable\n>where col1 in \n> (select key from temp_list_table);\n>\n>In one case I looked at, the subquery returns just 13 unique values in\n>subsecond time, yet it took almost 7 minutes for the main query to\n>complete.\n>\n>On the other hand, if I write out the result of the subquery explicitly,\n>such as:\n>\n>select col1, col2 \n>from bigtable\n>where col1 in ('A','B','C','D','E','F','G','H','I','J','K','L','M');\n>\n>the query completes in less than 2 seconds.\n>\n>I guess I had the mistaken assumption that the main query treated the\n>subquery result like an explicit list of the form ('val1','val2',...).\n>\n>What could cause the huge performance difference between the two query\n>forms?\n>\n>I am on 7.23 and Solaris 2.5.1, Sun E3000.\n>-- \n>Matt Reprogle\n>[email protected]\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 7 Mar 1998 00:47:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "subselects"
}
] |
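
The rewrite suggested in the thread above, as a sketch using the same table and column names the posters used (add DISTINCT, or pre-deduplicate the temp table, if temp_list_table.key can contain duplicates):

    select b.col1, b.col2
    from bigtable b, temp_list_table t
    where b.col1 = t.key;
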
[
{
"msg_contents": "Patrick Scott Pierce wrote:\n> \n> I run this query:\n> \n> select distinct task.*\n> from project, task\n> where task.projid = (select projid from project where custid =\n> (select custid from customer where domain = 'atlantahighrise.com')\n> and title = 'Initial site design')\n> and task.owner = 'ninjaman' order by priority\n> \n> And get this error:\n> ERROR: ComputeDataSize: attribute 0 has len 0\n> \n\nAt least 6.3 is strict about tables.\nThe table \"project\" is not used in the main query,\nremove it and it should be OK.\n\nI don't know if this behaviour is intentional or a \"feature\", \nanyone having more insite into this.\n\n\tregards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna",
"msg_date": "Sat, 07 Mar 1998 12:21:25 +0100",
"msg_from": "Goran Thyni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [QUESTIONS] Error: ComputeDataSize"
},
{
"msg_contents": "I changed the query after sending it (within minutes) and all was and is\nwell. Thanks.\n\nPatrick Scott Pierce\[email protected]\nCGI Programming\nMindspring Enterprises\n\n\n\n\nOn Sat, 7 Mar 1998, Goran Thyni wrote:\n\n> Date: Sat, 07 Mar 1998 12:21:25 +0100\n> From: Goran Thyni <[email protected]>\n> To: Patrick Scott Pierce <[email protected]>\n> Cc: [email protected], [email protected]\n> Subject: Re: [QUESTIONS] Error: ComputeDataSize\n> \n> Patrick Scott Pierce wrote:\n> > \n> > I run this query:\n> > \n> > select distinct task.*\n> > from project, task\n> > where task.projid = (select projid from project where custid =\n> > (select custid from customer where domain = 'atlantahighrise.com')\n> > and title = 'Initial site design')\n> > and task.owner = 'ninjaman' order by priority\n> > \n> > And get this error:\n> > ERROR: ComputeDataSize: attribute 0 has len 0\n> > \n> \n> At least 6.3 is strict about tables.\n> The table \"project\" is not used in the main query,\n> remove it and it should be OK.\n> \n> I don't know if this behaviour is intentional or a \"feature\", \n> anyone having more insite into this.\n> \n> \tregards,\n> -- \n> ---------------------------------------------\n> G�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n",
"msg_date": "Sat, 7 Mar 1998 09:33:47 -0500 (EST)",
"msg_from": "Patrick Scott Pierce <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [QUESTIONS] Error: ComputeDataSize"
}
] |
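
The fix suggested above amounts to dropping the unreferenced table from the outer FROM list while leaving the subqueries untouched, roughly:

    select distinct task.*
    from task
    where task.projid = (select projid from project where custid =
        (select custid from customer where domain = 'atlantahighrise.com')
        and title = 'Initial site design')
      and task.owner = 'ninjaman'
    order by priority;
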
[
{
"msg_contents": "Previously I said:\n> From darcy Wed Mar 4 09:52:23 1998\n> I can't seem to duplicate this but it happened once and I thought I\n> would mention it in case anyone else has seen it as well. I have a\n> table for one user and another for myself. Both tables have a table\n> called _key. After creating the second database (I had to destroy and\n> create it a few times) I looked at the first one and found that the data\n> in it matched the new one. I was able to drop that table and recreate\n> it without affecting the new one. Very strange.\n\nIt got more serious. After playing with the second database for a\nwhile I managed to trash the first one altogether. I had to destroy\nit and reload from a backup dump. I have never seen these problems\nwith PostgreSQL until I created a second database and accessed it as\na different user. I have multiple database on my other system with\nno problem but I access all of them as myself, at least as far as\ncreating, dropping and renaming schema goes. There is some strange\ninteraction happening here. I'm afraid I don't know where to start\nlooking for this.\nBTW, this is on NetBSD 1.3.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 7 Mar 1998 07:45:03 -0500 (EST)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: Bad interaction between databases"
}
] |
[
{
"msg_contents": "David Hartwig <[email protected]>\n\n> Not to pile on, but, I have a great interest in this subject. We do a\n> lot of work using off-the-shelf ODBC tools. And, we have observed that\n> these tools use PREPARE for two purposes.\n>\n> One is to speed up iterative queries which join data from different\n> databases. You seem to be addressing this issue.\n>\n> The other reason PREPARE is used is to retrieve a description of a\n> query's projection (target/result) with out actually running the\n> query. Currently, ODBC drivers must simulate the prepare statement by\n> submitting the full query and discard the data just to get the result\n> description.\n>\nI did ask the PostODBC team (about a year ago, before Julie took over\nthe maintenance) to change this so that it opens a _cursor_ and get just\nenough rows to determine the types and lengths (you can determine the\ntypes by getting just one row, but you also can get the real length of\nvarchar fields by getting enough rows of a _binary_ cursor to get each\nvarchar field to be non-null. At that time (and maybe even now) the\nbehaviour was to open an ASCII cursor and to get the whole recordset and\nfind the longest field ;), this got mostly wrong results and messed up\nDelphi in a big way.)\n\nI have since stopped using ODBC (and have never had a setup to develop\nodbc drivers), but if this change is not yet there, it can be used as a\nquick, client-side-only, workaround.\n\nOf course the real soultion would be changing the front-end protocol to\nbe somewhat compatible with ISO-ANSI SQL CLI/ODBC and to use prepared\nstatements at the protocol level (as I understand the SPI already does\nit?). I would also recommend taking notice of X-Window protocol when\ndesigning the new DB protocol.\n\nAnd it would be a really good idea tyo have some design effort put into\nthe specifing the new protocol before starting to implement it.\n\nAt the very least the core postgresql developers, JDBC and ODBC\ndevelopers should be involved in defining the new protocol.\n\nThe current protocol seems not designed but just evolved from some\nprotocol that has started as telnet-to-port-5432 and added various parts\n(like connect options and binary cursors) later - nice for initial\ndebugging but a real pita to implement fast clients.\n\nMy ideal protocol would be one that merges ISO-ANSI SQL CLI\nfunctionality with X-Window like protocol. That would also be easily\nextensible for any other be-fe communication like user-defined functions\nsending their info to frontends using their own packet types or even\nasking for info from them. Or having special higher priority packets for\nsending signals to backend that would by-pass others in the send queue\n(this is not an Xproto feature, but much needed anyhow.)\n\n> Obviously this slows response time greatly when the query\n> is a large data set. So if you haven't considered returning the the\n> results description, please do.\n>\n> Thank Very Much\n>\nIn my opinion the first thing to change is the protocol as it has to be\nchanged anyhow when implementing types longer than 8k.\n\nIt would be nice to give a list of requested headers to backend when\nestablishing the connection and later just get these (so that when you\ndon't wand/need some bookkeeping info you dont get it, and when you want\nloads of debugging info you can request it also.\n\nCurrently you can't even ignore the response packets you dont want\neasily, because you have still to parse them in order to know when they\nare over. 
A clean protocol design would just allow you to ignore the\nresponses you don't understand. (Isn't this also one principle of OO?).\n\nHannu Krosing\n\n\n",
"msg_date": "Sat, 07 Mar 1998 15:34:57 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "PREPARE statement (was Speedups)"
}
] |
[
{
"msg_contents": "CREATE SEQUENCE is also what ORACLE does.\n\nMichael\n--\nDr. Michael Meskes, Projekt-Manager | topystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Use Debian GNU/Linux! | Tel: (+49) 2405/4670-44\n\n> ----------\n> From: \tGoran Thyni[SMTP:[email protected]]\n> Sent: \tFreitag, 6. März 1998 11:26\n> To: \tD. Dante Lorenso\n> Cc: \[email protected]\n> Subject: \tRe: [HACKERS] AUTO_INCREMENT suggestion\n> \n> D. Dante Lorenso wrote:\n> > \n> > To aid those of us that don't want to use sequences, can we add a\n> > feature to 6.4 that allows the use of an AUTO_INCREMENT statement\n> > when defining tables? MySQL does this, and I like it. It resembles\n> > the Autonumber feature in Access as well.\n> > \n> > create table tblFirm (\n> > FirmID int PRIMARY KEY AUTO_INCREMENT,\n> > FirmTypeID int,\n> > FirmName varchar(64) NOT NULL,\n> > FirmAlpha char(20) NOT NULL UNIQUE,\n> > FirmURL varchar(64),\n> > FirmEmail varchar(64)\n> > );\n> > \n> > Just yet another suggestion.\n> > \n> \n> Informix calls something like this SERIAL type, like:\n> \n> create table tblFirm (\n> FirmID SERIAL PRIMARY KEY,\n> FirmTypeID int,\n> FirmName varchar(64) NOT NULL,\n> FirmAlpha char(20) NOT NULL UNIQUE,\n> FirmURL varchar(64),\n> FirmEmail varchar(64)\n> );\n> \n> Don't know if that is standrd or extension.\n> \n> We use \"CREATE SEQUENCE\" to do this is PgSQL.\n> \n> \tregards,\n> -- \n> ---------------------------------------------\n> Göran Thyni, sysadm, JMS Bildbasen, Kiruna\n> \n",
"msg_date": "Sat, 7 Mar 1998 14:54:00 +0100",
"msg_from": "\"Meskes, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] AUTO_INCREMENT suggestion"
}
] |
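
One way to get the effect in PostgreSQL, as mentioned in the thread, is a sequence plus a column default. This is only a sketch: the sequence name and inserted value are invented, and whether a function call is accepted in a DEFAULT clause depends on the server version.

    create sequence firm_id_seq;

    create table tblFirm (
        FirmID    int4 default nextval('firm_id_seq') primary key,
        FirmName  varchar(64) not null
    );

    insert into tblFirm (FirmName) values ('Acme Widgets');
    -- FirmID is filled in from the sequence
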
[
{
"msg_contents": "Hi there!\n\nI played around with subselects and union and noticed that issuing strange\nstatements can make the backend crash. The problem is that other clients\nconnected to the same db get disconnected as well.\n\nI issued the following statement by mistake. (x1 is a table consisting only\nof one int4 value):\nselect * from x1 union select * from pg_user;\n\nThis crashes the backend. Postmaster (-d 2) says:\n> NOTICE: Message from PostgreSQL backend:\n> The Postmaster has informed me that some other backend died\n> abnormally and possibly corrupted shared memory.\n> I have rolled back the current transaction and am going to\n> terminate your database system connection and exit.\n> Please reconnect to the database system and repeat your query.\n\nHaving nothing to lose, I typed:\ncreate table x1 (a text, b int4);\ncreate table x2 (c int4, d int4);\ninsert into x1 values ('Test', '123');\ninsert into x2 values (1,2);\nselect * from x1 union select * from x2;\nFATAL: unrecognized data from the backend. It probably dumped core.\nFATAL: unrecognized data from the backend. It probably dumped core.\n\npostmaster prints:\n> Too Large Allocation Request(\"!(0 < (size) && (size) <=\n> (0xfffffff)):size=-2 [0xfffffffe]\", File: \"mcxt.c\", Line: 232)\n> ProcessQuery() at Sat Mar 7 16:45:48 1998\n> \n> !(0 < (size) && (size) <= (0xfffffff)) (0)\n> NOTICE: Message from ... [same as above]\n\nIs there anything one can do to stop postgres from breaking all connections\nexcept telling your users not to type such useless statements?\n\nTIA\n\nMfG\nMB\n\nPS: I noticed that NO_ASSERT_CHECKING is undef'ed by default (so assert\nchecking is enabled) and will only be defined if you specify either \n--enable-cassert or --disable-cassert.\nThe INSTALL file, however, says assert checking is disabled by default (so\nNO_ASSERT_CHECKING should be set). Am I confusing something, or is it\nconfigure that gets confused?\n\n-- \nMichael Bussmann <[email protected]> [Tel.: +49 228 9435 211; Fax: +49 228 348953]\nToday's Excuse:\n static from plastic slide rules\n",
"msg_date": "Sat, 7 Mar 1998 17:01:19 +0100",
"msg_from": "Michael Bussmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backend crashes on select ... union"
},
{
"msg_contents": "> \n> Hi there!\n> \n> I played around with subselects and union and noticed that issuing strange\n> statements can make the backend crash. The problem is that other clients\n> connected to the same db get disconnected as well.\n> \n> I issued the following statement by mistake. (x1 is a table consisting only\n> of one int4 value):\n> select * from x1 union select * from pg_user;\n> \n\nMy fault. I never went back to make sure all union columns have the same\ntypes. I will work on that.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 7 Mar 1998 16:16:29 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend crashes on select ... union"
}
] |
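
The crash report above boils down to a UNION whose corresponding columns have different types. Until the backend rejects this cleanly, a workaround sketch is to make the types line up explicitly; cast syntax and the available conversions vary by version.

    -- fails: x1 is (a text, b int4), x2 is (c int4, d int4)
    select * from x1 union select * from x2;

    -- workaround: cast so that corresponding columns share a type
    select a, b from x1
    union
    select c::text, d from x2;
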
[
{
"msg_contents": "> By the way, I have a sugestion I'd like to do about psql:\n> \n> It would be a good idea, IMHO, that if psql is called alone (without\n> database nor any parameter), instead of try to connect to database 'user'\n> starts interactively without connection. Then, simple calling \"\\c dbname\"\n> would connect the user to the desired database. What you think?\n> Cheers,\n\nThis seems like a good idea. Any comments?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 7 Mar 1998 11:11:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: your mail"
},
{
"msg_contents": "\nI dunno, I think it's kind of nice the way is really. Perhaps if the\nuser db doesn't exist, it could say so and start interactively as per\nbelow. Either way I'm not sure it matters though. I think it just\nnice for connecting to my random test databases.\n\nOn Sat, 7 March 1998, at 11:11:42, Bruce Momjian wrote:\n\n> > By the way, I have a sugestion I'd like to do about psql:\n> > \n> > It would be a good idea, IMHO, that if psql is called alone (without\n> > database nor any parameter), instead of try to connect to database 'user'\n> > starts interactively without connection. Then, simple calling \"\\c dbname\"\n> > would connect the user to the desired database. What you think?\n> > Cheers,\n> \n> This seems like a good idea. Any comments?\n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 7 Mar 1998 12:08:45 -0800 (PST)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: your mail"
},
{
"msg_contents": ">>>>> \"bm\" == Bruce Momjian <[email protected]> writes:\n\n >> By the way, I have a sugestion I'd like to do about psql:\n >> \n >> It would be a good idea, IMHO, that if psql is called alone\n >> (without database nor any parameter), instead of try to connect\n >> to database 'user' starts interactively without connection.\n >> Then, simple calling \"\\c dbname\" would connect the user to the\n >> desired database. What you think? Cheers,\n\n bm> This seems like a good idea. Any comments?\n\nI don't mind the current setup, but would like it to not dump me out\nif I don't have a database named `roland'. I tend to give my\ndatabases a name that pertains to what they contain, not who uses\nthem.\n\nStill, having it attempt to connect to `roland' and then leave me at\nthe psql prompt with a message like \"You are not connected to any\ndatabase\" would be more friendly....\n\nroland\n-- \nRoland B. Roberts, PhD Custom Software Solutions\[email protected] 101 West 15th St #4NN\n New York, NY 10011\n\n",
"msg_date": "07 Mar 1998 22:59:01 -0500",
"msg_from": "\"Roland B. Roberts\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: your mail"
},
{
"msg_contents": "> > It would be a good idea, IMHO, that if psql is called alone (without\n> > database nor any parameter), instead of try to connect to database 'user'\n> > starts interactively without connection. Then, simple calling \"\\c dbname\"\n> > would connect the user to the desired database. What you think?\n> > Cheers,\n>\n> This seems like a good idea. Any comments?\n\nOf course :) I personally like the current default behavior, and I think that\nsome others find it similarly convenient. If the alternate behavior is\ndesirable for some, how about implementing a command line switch which would\nchange the default behavior to \"don't open anything\". Then, you can alias the\ndefinition of psql to get what you want.\n\nI actually had a patch of some sort which changed the behavior of \"\\c\nunknownDB\"; at the moment if a connection fails psql bails out. The patch left\nthe psql session open and connected to the previous database. That behavior\nwould be dangerous in some cases so we didn't apply it; going to an\n\"unconnected state\" would be more helpful and less dangerous.\n\n - Tom\n\n",
"msg_date": "Tue, 10 Mar 1998 03:01:22 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: your mail"
}
] |
[
{
"msg_contents": "> > It would be a good idea, IMHO, that if psql is called alone (without\n> > database nor any parameter), instead of try to connect to database 'user'\n> > starts interactively without connection. Then, simple calling \"\\c dbname\"\n> > would connect the user to the desired database. What you think?\n> > Cheers,\n> \n> This seems like a good idea. Any comments?\n> \n\n When I was doing my own changes in the source to achieve this behavior, I've\nnoticed that in several places exit is called without free'ing the PGconn\npointer, the prompt and other things, specially when malloc fails and when\npsql cannot establish the connection to a new database.\n It's not a great thing, but it'd be good to fix it.\n Cheers,\n\n Federico.\n\n-- \nBefore humanity was born to this world Federico Schwindt\nthe stars shone in the heavens. [email protected]\nLong after humanity is gone\nthe stars will continue to shine.\n",
"msg_date": "Sat, 7 Mar 1998 16:05:46 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: your mail"
}
] |
[
{
"msg_contents": "At 15:17 -0000 on 5/3/98, The Hermit Hacker wrote:\n\n\n> \tYou should technically be able to run it on a different port,\n> *but* you might have problems with the shared libraries, where trying to\n> run v6.3 is seeing v6.1's shared libraries, and won't work...\n\nMay I make a suggestion? In future versions, include the version number in\nthe names of the libraries.\n\nHerouth\n\n\n",
"msg_date": "Sun, 8 Mar 1998 11:44:15 +0200 (IST)",
"msg_from": "Herouth Maoz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [QUESTIONS] Testing Postgresql v6.3"
},
{
"msg_contents": "> \n> At 15:17 -0000 on 5/3/98, The Hermit Hacker wrote:\n> \n> \n> > \tYou should technically be able to run it on a different port,\n> > *but* you might have problems with the shared libraries, where trying to\n> > run v6.3 is seeing v6.1's shared libraries, and won't work...\n> \n> May I make a suggestion? In future versions, include the version number in\n> the names of the libraries.\n\nThen everyone has to update all their Makefiles after an upgrade.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 8 Mar 1998 12:51:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [QUESTIONS] Testing Postgresql v6.3"
},
{
"msg_contents": "At 17:51 -0000 on 8/3/98, Bruce Momjian wrote:\n\n\n> >\n> > At 15:17 -0000 on 5/3/98, The Hermit Hacker wrote:\n> >\n> >\n> > > \tYou should technically be able to run it on a different port,\n> > > *but* you might have problems with the shared libraries, where trying to\n> > > run v6.3 is seeing v6.1's shared libraries, and won't work...\n> >\n> > May I make a suggestion? In future versions, include the version number in\n> > the names of the libraries.\n>\n> Then everyone has to update all their Makefiles after an upgrade.\n\nMore likely, re-link a symbolic link.\n\nIf the executables in the postgres distributions rely on the libraries\nthemselves, they should link directly to the versioned library. There\nshould be a symbolic link giving the versioned library the generic name.\n\nIt's rather important to be able to easily keep two versions of the same\nsoftware on the same machine. It allows for testing of the new version,\ncleaning it and weeding any problems, while still using the old version for\nproduction stuff. I'm sure not everyone has a spare machine, and if you\nhave, it doesn't always have the same setup as your main machine (for\nexample, it may not be accessible from the web).\n\nHerouth\n\n\n",
"msg_date": "Mon, 9 Mar 1998 09:47:51 +0200 (IST)",
"msg_from": "Herouth Maoz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [QUESTIONS] Testing Postgresql v6.3"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > At 15:17 -0000 on 5/3/98, The Hermit Hacker wrote:\n> >\n> >\n> > > You should technically be able to run it on a different port,\n> > > *but* you might have problems with the shared libraries, where trying to\n> > > run v6.3 is seeing v6.1's shared libraries, and won't work...\n> >\n> > May I make a suggestion? In future versions, include the version number in\n> > the names of the libraries.\n> \n> Then everyone has to update all their Makefiles after an upgrade.\n\nNo, 'make install' would of course update the symlinks, just\nas with any shared library.\n\n/* m */\n",
"msg_date": "Mon, 09 Mar 1998 15:07:27 +0100",
"msg_from": "Mattias Kregert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Testing Postgresql v6.3"
}
] |
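A sketch of the install-time convention discussed in this thread, with illustrative version numbers and paths (not what any PostgreSQL release actually installs); executables and Makefiles keep linking against the generic name, which is just a symlink maintained by 'make install':

# Install the library under a versioned name...
cp libpq.so /usr/local/pgsql/lib/libpq.so.6.3
# ...and point the generic name at it, so existing Makefiles using -lpq
# keep working while 6.2 and 6.3 libraries coexist on one machine.
ln -sf libpq.so.6.3 /usr/local/pgsql/lib/libpq.so
# On systems with a runtime linker cache, refresh it (the name and
# availability of this step vary by platform).
ldconfig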
[
{
"msg_contents": "Hi!\n\nMaybe I have overlooked something, but I think there should be a small line\nin migration/6.2.1_to_6.3 file that tells the user he has to recompile and\nrelink each program that uses libpq.so.\n\nWhen I upgraded to 6.3 I thought that the interface routines of libpq\ndidn't change (I know that the protocol has changed, but this should be\nnothing the application has to worry about), so I simply replaced libpq.so\nand I got _very_ surprised when I couldn't start my binaries any more due\nto a missing PQsetdb() function in libpq (it's now defined to\nPQsetdbLogin).\n\n(As a quick and dirty solution I made PQsetdb a function again that calls\nPQsetdbLogin)\n\nWhat about adding a version number to the library? The minor release\nnumber could be used to indicate changes in the protocol, the major number\nwould _only_ be increased when changes in the interface occur (e.g. other\nfunctions or other/new parameter of the routines), that would break\nexisting applications.\n\nJust a thought...\n\nBest regards,\nMB\n\n-- \nMichael Bussmann <[email protected]> [Tel.: +49 228 9435 211; Fax: +49 228 348953]\nToday's Excuse:\n static from plastic slide rules\n",
"msg_date": "Sun, 8 Mar 1998 12:42:51 +0100",
"msg_from": "Michael Bussmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "libpq and PQsetdb()"
}
] |
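The practical consequence reported above is that client programs built against the 6.2.x library need to be recompiled and relinked (or given a compatibility shim such as the one described). A hedged sketch of checking and rebuilding a client; 'myapp' and the install paths are placeholders:

# See which libpq a binary resolves at run time (ldd is common on
# Linux/ELF systems; other platforms have their own equivalent).
ldd myapp | grep libpq

# Rebuild against the 6.3 headers and library; paths are examples only.
cc -o myapp myapp.c -I/usr/local/pgsql/include -L/usr/local/pgsql/lib -lpq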
[
{
"msg_contents": "Hello again, postgres documenters and now developers too,\n\nAs I have received some positive feedback (and only one 'No way!'), I was inspired to work\nsome more on the crocodile idea. Thanks to all supporters.\n\nBTW, if you answer to the list please CC to [email protected], as I only get the digests.\n\nYou can see some new images at\n\nhttp://www.trust.ee/Info/PostgreSQL.figs/logo/page2.html\n\nthe older ones are at\n\nhttp://www.trust.ee/Info/PostgreSQL.figs/logo/page.html\n\n-----------------------\n\nBruce Momjian <[email protected]> wrote:\n\n> I have been thinking about a logo. Basically, all SQL databases are\n> based on tables, like spreadsheets.\n>\n> Perhaps we could have a black-and-white stone tablet saying SQL laying\n> on the ground, perhaps broken or old looking, and a color PostgreSQL\n> tablet leaning up and looking like an SQL table containing data, except\n> the data forms the words PostgreSQL. We could do other exciting things\n> in the table fields like have things jumping out of them, because we\n> support functions/inheritance/user types.\n\nI added some ideas from this post to\nhttp://www.trust.ee/Info/PostgreSQL.figs/logo/page2.html ;)\n\n> ------------------------------\n>\n> Date: Fri, 06 Mar 1998 10:37:59 -0800\n> From: Michael Yount <[email protected]>\n> Subject: Re: [DOCS] Hannu K.'s logos and SQL-92 compliance\n>\n> Hannu: ROTF...Hilarious. Two thumbs and two big toes up from me.\n\nThanks!\n\n> ------------------------------\n>\n\n\"D. Dante Lorenso\" <[email protected]>\n\n> Subject: Re: [DOCS] Hannu K.'s logos\n>\n> >Hannu: ROTF...Hilarious. Two thumbs and two big toes up from me.\n>\n> Love the logos! Cute, characteristic, classy, creative, yet simple.\n> Do we have to have an elephant? I like the crocodile idea ;)\n>\n> I might change the font a bit, but the artistic influences are\n> a nice theme variation from the whole 3D pop out and grab you stuff.\n> Plus, the croc/gator is managable and can be worked into many other\n> logos, banners, and images.\n>\n> What about it? I say Do the Croc/Gator \"PostgreSQL\" and make it\n> the official logo!\n\nThese were some ideas behind my proposing the crocodile. It has some properties (big,\nagile, fast, ...) that can be used in verbal image. It is also recognizable when drawn very\nsmall, like in logos and buttons. And it seems to be not taken yet in the software world.\n(The two closest that I know of are Chameleon and Mozilla ;).\n\nAnd of course it can be rendered in 3D and still be recognizable.\n\n> ------------------------------\n>\n> Date: Fri, 6 Mar 1998 14:45:22 -0500 (EST)\n> From: The Hermit Hacker <[email protected]>\n> Subject: Re: [DOCS] Hannu K.'s logos\n>\n>\n> I don't :( Isn't the Croc the logo on a polo shirt or something\n> like that? :( And the images don't pop out at you like the current logo\n> does...kinda dry :(\n\nI assume you mean the 3D text PostgreSQL rendered with silver coating (in font called Orbit\nor something?) and not the dotted cat on the \"Powered by PostgreSQL\" button ?\n\nIt is nice artwork and I quite like it. But somehow I can't think of it as a 'logo', its\nmore like a book cover or web-page header. Requirements for logo are much wider. A good\nlogo is one that you can immediately recognize also at small sizes and in monochrome.\n\n> I think that's going to be the problem with just about any animal\n> logo though, isn't it? 
:(\n\nAs I don't have 3D programs installed here right now, so I could not make any nice pop-out\nimages of crocodile but I think that it is not beyond possible ;)\n\nMy idea was first to introduce the idea of a crocodile for PostgreSQL logo. I did\ncontemplate other creatures as well, but chose the crocodile for several reasons:\n\n* PostgresSQL is a big database, it is fast for bigger and more complicated jobs., so the\nmascot should also be a big animal.\n\n* PostgreSQL predecessors used the Turtle logo, Crocodile and turtle don't seem too\ndistant.\n\n* PostgreSQL is one the oldest relational databases now freely available, Crocodiles are\nancient\n\n* Crocodiles stuff their pray into some underwater shelves to mature - PostgreSQL is a\ndatabase (ok, this is a little far-fetched ;)\n\n* Crocodile image is easily recognizable. One can use it in many ways (for example for\ndifferent PostgreSQL tools)\n\n> ------------------------------\n>\n> Date: Fri, 6 Mar 1998 17:10:15 -0500 (EST)\n> From: \"Matthew N. Dodd\" <[email protected]>\n> Subject: Re: [DOCS] Hannu K.'s logos\n>\n> On Fri, 6 Mar 1998, The Hermit Hacker wrote:\n> > And the images don't pop out at you like the current logo does...kinda\n> > dry :(\n>\n> Actually the current logo is a bit on the busy side.\n>\n> Remember the old FreeBSD.org banner? Contrast that with the new one...\n>\n> Other than the weird perspective of the croc's legs, those logos are\n> clean, and simple (and lets you have tastefull buttons and thumbnails that\n> are of a consistent style.)\n\nIf the people like the crocodile idea the actual shape can be worked out to satisfy any\ntaste (not really, ;-p )\n\n> Date: Fri, 6 Mar 1998 17:52:55 -0500 (EST)\n> From: Tripp Lilley <[email protected]>\n> Subject: Re: [DOCS] Hannu K.'s logos\n>\n> On Fri, 6 Mar 1998, The Hermit Hacker wrote:\n>\n> > I don't :( Isn't the Croc the logo on a polo shirt or something\n> > like that? :( And the images don't pop out at you like the current logo\n> > does...kinda dry :(\n> >\n> > I think that's going to be the problem with just about any animal\n> > logo though, isn't it? :(\n>\n> The essential problem with all of the \"vivid 3-d explosion\" logos is that\n> they don' translate well to other media. A good logo has enough innate\n> simplicity that it can be rendered faithfully in one color, in low-res\n> media (like embroidery), etc.\n>\n> A good logo is also a springboard for more complex 'derivative' work. By\n> associating PGSQL with the notion of a crocodile, and by promoting the\n> simple, easy to recognize, easily mentally 'imprinted' image as its\n> signature, we create \"brand recognition\". People grasp the clean lines in\n> the croc logo, and they grasp the \"shape\" of a crocodile.\n>\n> Once that work is 'done', and people have the croc notion firmly\n> implanted, then we get to mess with it and do eye-popping 3D craziness, if\n> that moves us. We can do cool rendered chrome \"crocobots\" and what-not.\n> Starting from 3D, on the other hand, challenges us to divine from the\n> rendered, textured complexity of the pictures, the fundamental, long-lived\n> message.\n>\n> In this case, I think the croc work is a masterpiece. It has lots of room\n> to grow, lots of possbilities for derivative work. 
It makes a strong, bold\n> statement about the code -- \"the dinosaur that survived\" may or may not be\n> a good marketing slogan, but regardless of those words, I think of a\n> crocodile as a deceptively fast, deceptively strong creature.\n>\n> My vote is for the croc.\n>\n> - - t.\n>\n> - -----------------------------------------------------------------------\n> Tripp Lilley, Perspex Imageworks, Inc. ([email protected])\n> - -----------------------------------------------------------------------\n> \"Give me a fast computer, for I intend to go in harm's way\"\n> - updating John Paul Jones\n>\n> ------------------------------\n>\n> Date: Fri, 6 Mar 1998 18:15:17 -0500\n> From: \"D. Dante Lorenso\" <[email protected]>\n> Subject: New Crocodile Logo ... was [Re: [DOCS] Hannu K.'s logos]\n>\n> >\n> >My vote is for the croc.\n>\n> YES!! YES!!\n>\n> I like where we are going with this...It seems MOST of us are in\n> agreement.\n>\n> Now, if only H.H.(Marc) *ahem* could see clear to agree...\n> *ho hum ... looks at sky*\n>\n> BTW how much support is needed to make something official? Who gives\n> stamp of approval? Can we override he who vetos?\n>\n> Go Crocodile!! Hooray! *rallies crowd*\n>\n> Dante\n>\n> ------------------------------\n>\n> Date: Sat, 7 Mar 1998 08:58:50 +0800 (HKT)\n> From: \"neil d. quiogue\" <[email protected]>\n> Subject: Re: [DOCS] Hannu K.'s logos\n>\n> On Fri, 6 Mar 1998, Tripp Lilley wrote:\n>\n> > A good logo is also a springboard for more complex 'derivative' work. By\n> > associating PGSQL with the notion of a crocodile, and by promoting the\n> > simple, easy to recognize, easily mentally 'imprinted' image as its\n> > signature, we create \"brand recognition\". People grasp the clean lines in\n> > the croc logo, and they grasp the \"shape\" of a crocodile.\n> [snip]\n>\n> i couldn't have said it better. the lines are neatly drawn, distinctive\n> even at low resolutions (and even thumbnail). though it reminds me a bit\n> about lacoste... ;)\n>\n> fyi, when it has been agreed upon and all things settled, i'll update the\n> site.\n>\n> [---]\n> Neil D. Quiogue <[email protected]>\n> IPhil Communications Network, Inc.\n> Other: [email protected]\n>\n> ------------------------------\n>\n> Date: Fri, 6 Mar 1998 19:17:29 -0600 (CST)\n> From: \"Ing. Roberto Andrade\" <[email protected]>\n> Subject: Re: [DOCS] Hannu K.'s logos\n>\n> Hi:\n>\n> > i couldn't have said it better. the lines are neatly drawn, distinctive\n> > even at low resolutions (and even thumbnail). though it reminds me a bit\n> > about lacoste... ;)\n>\n> So what? When I entered Linux the camel logo reminded me a cigarette\n> brand, and now when I see the cigarette box I ALWAYS jumps to Perl!\n\nTo dispel fears about becoming a clone of lacoste ;), I made some crocodile imagery that\nhas only parts of the crocodile.\n\nAs to what to do next, I think I'll post this to hackers list as well, so that maybe we can\nget some more opinions from the core developers.\n\n---------------\nWaiting for reactions,\nHannu Krosing\n\n\n\n\n",
"msg_date": "Sun, 08 Mar 1998 21:05:27 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql-docs-digest V1 #312"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> wrote:\n> \n> > I have been thinking about a logo. Basically, all SQL databases are\n> > based on tables, like spreadsheets.\n> >\n> > Perhaps we could have a black-and-white stone tablet saying SQL laying\n> > on the ground, perhaps broken or old looking, and a color PostgreSQL\n> > tablet leaning up and looking like an SQL table containing data, except\n> > the data forms the words PostgreSQL. We could do other exciting things\n> > in the table fields like have things jumping out of them, because we\n> > support functions/inheritance/user types.\n> \n> I added some ideas from this post to\n> http://www.trust.ee/Info/PostgreSQL.figs/logo/page2.html ;)\n\nI think I like the crocodile animal idea. The water as an SQL table\nidea is interesting. Another idea would be the crocodile eating an SQL\ntable.\n\nAs far as the above post, if we changed the old brick wall so it looks\nlike an SQL table, we can change PostgreSQL to emphasize POSTgreSQL, so\nwe can show we are post/beyond SQL. This may be a nice distinction to\nmake in the web site banner image.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 8 Mar 1998 14:48:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Re: pgsql-docs-digest V1 #312"
},
{
"msg_contents": "At 2:05 pm -0500 3/8/98, Hannu Krosing wrote:\n>Hello again, postgres documenters and now developers too,\n>\n>As I have received some positive feedback (and only one 'No way!'), I was\n>inspired to work\n\nThis one really grabs me.\n\nhttp://www.trust.ee/Info/PostgreSQL.figs/logo/pgundert.gif\n\nIt's clean, simple, different and catchy. The conceptual link between the\ncroc logo and the PostgreSQL underneath put a smile on my face. Great work.\n\nDoug\n\n\n_____________________________________________________________________________\n\n ____/ _ / / J. Douglas Dunlop <[email protected]>\n / / / / Earth Observations Lab\n _/ / / / Institute for Space and Terrestrial Science\n / / / / 4850 Keele St.,2nd Floor\n____/ _____/ _____/ W:416-665-5411 North York, Ontario\n H:519-747-1710 CANADA, M3J-3K1\n F:416-665-2032\n_____________________________________________________________________________\n http://www.eol.ists.ca/\n_____________________________________________________________________________\n",
"msg_date": "Sun, 8 Mar 1998 21:52:48 -0500",
"msg_from": "\"J. Douglas Dunlop\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Re: pgsql-docs-digest V1 #312"
}
] |

[
{
"msg_contents": "\nI might have missed somethign with all the dump/reload discussions that \nwent on, but what is:\n\npostgres@zeus> psql -e -f db.dump template1\n\\connect template1\nconnecting to new database: template1\nselect datdba into table tmp_pguser from pg_database where datname =\n'template1';\nQUERY: select datdba into table tmp_pguser from pg_database where\ndatname = 'template1';\nSELECT\ndelete from pg_user where usesysid <> tmp_pguser.datdba;\nQUERY: delete from pg_user where usesysid <> tmp_pguser.datdba;\nDELETE 0\ndrop table tmp_pguser;\nQUERY: drop table tmp_pguser;\nDROP\ncopy pg_user from stdin;\nQUERY: copy pg_user from stdin;\nEnter info followed by a newline\nEnd with a backslash and a period on a line by itself.\n>> \n\n\nAnd then it just stops...?\n\nThis is using pg_dump/pg_dumpall from 6.2.1, before shutting down v6.2.1,\nto create the db.dump file...\n\n\n\n",
"msg_date": "Sun, 8 Mar 1998 17:50:20 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "What is this...?"
},
{
"msg_contents": "> drop table tmp_pguser;\n> QUERY: drop table tmp_pguser;\n> DROP\n> copy pg_user from stdin;\n> QUERY: copy pg_user from stdin;\n> Enter info followed by a newline\n> End with a backslash and a period on a line by itself.\n> >> \n> \n\n[Redirected to appropriate group. Sorry, couldn't resist. :-)]\n\n> \n> And then it just stops...?\n> \n> This is using pg_dump/pg_dumpall from 6.2.1, before shutting down v6.2.1,\n> to create the db.dump file...\n\nSure you are not running 6.3 pg_dumpall, which has a bug?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 8 Mar 1998 18:34:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What is this...?"
},
{
"msg_contents": "On Sun, 8 Mar 1998, Bruce Momjian wrote:\n\n> > drop table tmp_pguser;\n> > QUERY: drop table tmp_pguser;\n> > DROP\n> > copy pg_user from stdin;\n> > QUERY: copy pg_user from stdin;\n> > Enter info followed by a newline\n> > End with a backslash and a period on a line by itself.\n> > >> \n> > \n> \n> [Redirected to appropriate group. Sorry, couldn't resist. :-)]\n> \n> > \n> > And then it just stops...?\n> > \n> > This is using pg_dump/pg_dumpall from 6.2.1, before shutting down v6.2.1,\n> > to create the db.dump file...\n> \n> Sure you are not running 6.3 pg_dumpall, which has a bug?\n\n\tPositive...hadn't even installed v6.3 when I did my pg_dumpall :(\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 8 Mar 1998 21:05:59 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] What is this...?"
},
{
"msg_contents": "> \n> \n> I might have missed somethign with all the dump/reload discussions that \n> went on, but what is:\n> \n> postgres@zeus> psql -e -f db.dump template1\n> \\connect template1\n> connecting to new database: template1\n> select datdba into table tmp_pguser from pg_database where datname =\n> 'template1';\n> QUERY: select datdba into table tmp_pguser from pg_database where\n> datname = 'template1';\n> SELECT\n> delete from pg_user where usesysid <> tmp_pguser.datdba;\n> QUERY: delete from pg_user where usesysid <> tmp_pguser.datdba;\n> DELETE 0\n> drop table tmp_pguser;\n> QUERY: drop table tmp_pguser;\n> DROP\n> copy pg_user from stdin;\n> QUERY: copy pg_user from stdin;\n\n ^^^^^^^ COPY into a view? Cool!\n\n> Enter info followed by a newline\n> End with a backslash and a period on a line by itself.\n> >> \n> \n> \n> And then it just stops...?\n> \n> This is using pg_dump/pg_dumpall from 6.2.1, before shutting down v6.2.1,\n> to create the db.dump file...\n> \n> \n> \n> \n> \n\n So we missed something when renaming pg_user into pg_shadow.\n Damn.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n",
"msg_date": "Mon, 9 Mar 1998 09:27:47 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What is this...?"
},
{
"msg_contents": "> > QUERY: copy pg_user from stdin;\n> \n> ^^^^^^^ COPY into a view? Cool!\n> \n> > Enter info followed by a newline\n> > End with a backslash and a period on a line by itself.\n> \n> So we missed something when renaming pg_user into pg_shadow.\n> Damn.\n\nHere is a patch. Haven't tested it yet, but patch has been applied to\nsource tree.\n\n---------------------------------------------------------------------------\n\n*** ./bin/pg_dump/pg_dumpall.orig\tFri Mar 6 12:17:36 1998\n--- ./bin/pg_dump/pg_dumpall\tFri Mar 6 12:18:26 1998\n***************\n*** 2,8 ****\n #\n # pg_dumpall [pg_dump parameters]\n # dumps all databases to standard output\n! # It also dumps the pg_user table\n #\n # to adapt to System V vs. BSD 'echo'\n #set -x\n--- 2,8 ----\n #\n # pg_dumpall [pg_dump parameters]\n # dumps all databases to standard output\n! # It also dumps the pg_shadow table\n #\n # to adapt to System V vs. BSD 'echo'\n #set -x\n***************\n*** 30,50 ****\n # we don't use POSTGRES_SUPER_USER_ID because the postgres super user id\n # could be different on the two installations\n #\n! echo \"select datdba into table tmp_pguser \\\n from pg_database where datname = 'template1';\"\n! echo \"delete from pg_user where usesysid <> tmp_pguser.datdba;\"\n! echo \"drop table tmp_pguser;\"\n #\n # load all the non-postgres users\n #\n! echo \"copy pg_user from stdin;\"\n psql -q template1 <<END\n! select pg_user.* \n! into table tmp_pg_user\n! from pg_user\n where usesysid <> $POSTGRES_SUPER_USER_ID;\n! copy tmp_pg_user to stdout;\n! drop table tmp_pg_user;\n END\n echo \"${BS}.\"\n psql -l -A -q -t| tr '|' ' ' | grep -v '^template1 ' | \\\n--- 30,50 ----\n # we don't use POSTGRES_SUPER_USER_ID because the postgres super user id\n # could be different on the two installations\n #\n! echo \"select datdba into table tmp_pg_shadow \\\n from pg_database where datname = 'template1';\"\n! echo \"delete from pg_shadow where usesysid <> tmp_pg_shadow.datdba;\"\n! echo \"drop table tmp_pg_shadow;\"\n #\n # load all the non-postgres users\n #\n! echo \"copy pg_shadow from stdin;\"\n psql -q template1 <<END\n! select pg_shadow.* \n! into table tmp_pg_shadow\n! from pg_shadow\n where usesysid <> $POSTGRES_SUPER_USER_ID;\n! copy tmp_pg_shadow to stdout;\n! drop table tmp_pg_shadow;\n END\n echo \"${BS}.\"\n psql -l -A -q -t| tr '|' ' ' | grep -v '^template1 ' | \\\n***************\n*** 52,58 ****\n do\n \tPOSTGRES_USER=\"`echo \\\" \\\n \t\tselect usename \\\n! \t\tfrom pg_user \\\n \t\twhere usesysid = $DBUSERID; \\\" | \\\n \t\tpsql -A -q -t template1`\"\n \techo \"${BS}connect template1 $POSTGRES_USER\"\n--- 52,58 ----\n do\n \tPOSTGRES_USER=\"`echo \\\" \\\n \t\tselect usename \\\n! \t\tfrom pg_shadow \\\n \t\twhere usesysid = $DBUSERID; \\\" | \\\n \t\tpsql -A -q -t template1`\"\n \techo \"${BS}connect template1 $POSTGRES_USER\"\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 9 Mar 1998 09:48:50 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What is this...?"
}
] |
[
{
"msg_contents": "\n\\connect template1\nselect datdba into table tmp_pguser from pg_database where datname =\n'temp\nlate1';\ndelete from pg_user where usesysid <> tmp_pguser.datdba;\ndrop table tmp_pguser;\ncopy pg_user from stdin;\nroot 0 f t f t\nacctng 103 f t f t\nnobody 65534 f t f t\n\\.\n~\n\nThis dump from v6.2.1 fails to reload into v6.3...my guess being, of\ncourse, because of the field(s)...have we compensated pg_dump in v6.3 for\nthis?\n\n\n\n",
"msg_date": "Sun, 8 Mar 1998 17:59:29 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is it just me...?"
}
] |
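A hedged outline of a workaround for dumps like the one above, along the lines of what is tried elsewhere in this archive: remove the leading pg_user block from the dump by hand and recreate the users, since in 6.3 the real table is pg_shadow and its column layout no longer matches the 6.2.1 pg_user rows (the suspicion voiced above). The file name is made up and the user names are taken from the thread purely as examples:

# db.dump.edited is db.dump with the pg_user section at the top removed
# by hand (everything before the first \connect to a real database).
createuser root        # repeat for each user listed in the removed block
createuser acctng
createuser nobody
psql -e template1 < db.dump.edited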
[
{
"msg_contents": "\nBruce...\n\n\tDid I miss something here? I just looked through the migration\nfile for 6.2.1->6.3, and it doesn't seem to say, but how do you dump\nthe data from a v6.2.1 database and then reload it to a v6.3 one?\n\n\tUsing v6.2.1's pg_dump/pg_dumpall, I did:\n\npg_dumpall -o > db.dump\n\n\tThat worked.\n\n\tThen, I installed v6.3, and using its psql, I did:\n\npsql -e < db.dump\n\n\tThat failed miserably.\n\n\tFirst thing that failed was building the new pg_user...so I cut\nout that and did it manually using createuser...\n\n\tThen, using what was left, I did:\n\npsql -e < db.dump \n\n\tAgain. Failed misearbly, with the following coming from the 'copy\nto <relname> from stdin;' section:\n\n344984 johnb xgSldZdYEgIWo clio.trends.ca n \\N\n \\? -- help\n \\a -- toggle field-alignment (currenty on)\n \\C [<captn>] -- set html3 caption (currently '')\n \\connect <dbname|-> <user> -- connect to new database (currently\n'acctng')\n \\copy table {from | to} <fname>\n \\d [<table>] -- list tables and indices, columns in <table>, or * for all\n \\da -- list aggregates\n \\dd [<object>]- list comment for table, field, type, function, or\noperator.\n \\df -- list functions\n \\di -- list only indices\n \\do -- list operators\n \\ds -- list only sequences\n \\dS -- list system tables and indexes\n \\dt -- list only tables\n \\dT -- list types\n \\e [<fname>] -- edit the current query buffer or <fname>\n \\E [<fname>] -- edit the current query buffer or <fname>, and execute\n \\f [<sep>] -- change field separater (currently '|')\n \\g [<fname>] [|<cmd>] -- send query to backend [and results in <fname> or\npipe]\n \\h [<cmd>] -- help on syntax of sql commands, * for all commands\n \\H -- toggle html3 output (currently off)\n \\i <fname> -- read and execute queries from filename\n \\l -- list all databases\n \\m -- toggle monitor-like table display (currently off)\n \\o [<fname>] [|<cmd>] -- send all query results to stdout, <fname>, or\npipe\n \\p -- print the current query buffer\n \\q -- quit\n \\r -- reset(clear) the query buffer\n \\s [<fname>] -- print history or save it in <fname>\n \\t -- toggle table headings and row count (currently on)\n \\T [<html>] -- set html3.0 <table ...> options (currently '')\n \\x -- toggle expanded output (currently off)\n \\z -- list current grant/revoke permissions\n \\! [<cmd>] -- shell escape or command\n344985 bonnies x/lgef4ULWJv2 clio.trends.ca n \\N\n \\? -- help\n \\a -- toggle field-alignment (currenty on)\n \\C [<captn>] -- set html3 caption (currently '')\n \\connect <dbname|-> <user> -- connect to new database (currently\n'acctng')\n \\copy table {from | to} <fname>\n \\d [<table>] -- list tables and indices, columns in <table>, or * for all\n \\da -- list aggregates\n \\dd [<object>]- list comment for table, field, type, function, or\noperator.\n\n\n\tSo, is there, like, a trick to this? *raised eyebrow* Have I\nmissed something important here?\n\nThanks...\n\n",
"msg_date": "Sun, 8 Mar 1998 18:23:03 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to...?"
},
{
"msg_contents": "> \n> \n> Bruce...\n> \n> \tDid I miss something here? I just looked through the migration\n> file for 6.2.1->6.3, and it doesn't seem to say, but how do you dump\n> the data from a v6.2.1 database and then reload it to a v6.3 one?\n> \n> \tUsing v6.2.1's pg_dump/pg_dumpall, I did:\n> \n> pg_dumpall -o > db.dump\n\nYes.\n\n> \n> \tThat worked.\n> \n> \tThen, I installed v6.3, and using its psql, I did:\n> \n> psql -e < db.dump\n> \n> \tThat failed miserably.\n\nOK.\n\n> \n> \tFirst thing that failed was building the new pg_user...so I cut\n> out that and did it manually using createuser...\n> \n> \tThen, using what was left, I did:\n> \n> psql -e < db.dump \n> \n> \tAgain. Failed misearbly, with the following coming from the 'copy\n> to <relname> from stdin;' section:\n> \n> 344984 johnb xgSldZdYEgIWo clio.trends.ca n \\N\n\nCheck what is in the file around this line. It has existed the COPY for\nsome reason, and the \\N is triggering the \\? output.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 8 Mar 1998 19:28:50 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to...?"
}
] |
[
{
"msg_contents": "-----Original Message-----\nFrom: Peter T Mount <[email protected]>\nTo: Maurice Gittens <[email protected]>\nCc: PostgreSQL-development <[email protected]>\nDate: maandag 9 maart 1998 3:34\nSubject: Re: [HACKERS] newoid in invapi.c\n\n\n>> This might lead the large object implementation to confuse\n>> large object relations with other relations.\n>>\n>> According to me this is a bug. I'm I right?\n>\n>Yes, and no. LargeObjects are supposed to run within a transaction (if you\n>don't then some fun things happen), and (someone correct me if I'm wrong)\n>if newoid() is called from within the transaction, it is safe?\n>\nI see no evidence in the code that suggests that it is safe in transactions.\nThe GetNewObjectIdBlock() function which generates the OID blocks _does_\nacquire a spinlock before it generates a new block of oids so usually all\nwill be well.\nBut sometimes ((a chance of <usercount>/32) when there <usercount> active\nusers\nfor the same db) the newoid might have a quite different value than\nfileoid+1.\n\nAgain I see no evidence in the code that it is safe in transactions. I only\nsee evidence that it will _usually_ work.\n\nActually I wonder how it could be efficiently made safe within transactions\ngiven\nthat the oids generated are guaranteed to be unique within an\n_entire_ postgres installation. This would seem to imply that, effectively,\nonly one transaction would be possible at the same time in an entire\npostgresql database.\n\nMy current strategy to solve this problem involves the use of a new\nsystem catalog which I call pg_large_object. This catalog contains\ninformation about each large object in the system.\nCurrently the information maintained is:\n- identification of heap and index relations used by the large_object\n- the size of the large object\n- information about the type of the large object.\nI still need to figure out how to create a new _unique_ index on a system\ncatalog using information in the indexing.h file.\n\nGiven an oid this table allow us to determine if it is a valid large object.\nI think this is necesary (to be able to maintain referential integrity) if\nwe're ever\ngoing to have large object type.\n\nSimilarly I have defined a table pg_tuple which allows one to\ndetermine if a given oid is a valid tuple.\nThis together with some other minor changes allows some cool\nobject oriented features for postgresql.\n\nFancy the idea of persistent Java object which live in postgresql databases?\n\nAnyway if it all works as expected I'll submit some patches.\n\nThanks,\nMaurice\n\n\n",
"msg_date": "Mon, 9 Mar 1998 11:07:25 +0100",
"msg_from": "\"Maurice Gittens\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] newoid in invapi.c"
}
] |
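Purely as an illustration of the bookkeeping Maurice describes: a real pg_large_object would be a system catalog declared in the C headers and wired up through indexing.h, not created with SQL, and the column names and types below are guesses rather than his actual design. 'mydb' is a placeholder database name:

psql mydb <<'EOF'
CREATE TABLE lo_registry (
    lobjid   oid,   -- oid identifying the large object
    heaprel  oid,   -- heap relation holding its data
    indexrel oid,   -- index relation over that heap
    lobjsize int4,  -- current size in bytes
    lobjtype oid    -- type information for the object
);
EOF

With a table of this shape, "is this oid a valid large object" becomes a simple lookup, which is the referential-integrity point made above.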
[
{
"msg_contents": "\n> > Yikes, we never changed pg_user to pg_shadow in pg_dumpall. Isn't that\n> > the real problem. Need to have that patched, or people will not be able\n> > to upgrade. Applying patch now.\n> > \n> > Do I put this patch in the patches directory?\n> \n[...]\n\n> \tBruce, can you get me a patch for this? I'm going to review the\n> patches that I do have now, and the ones that look perfectly safe (ie. I\n> have no doubt about), will get included on the CD rom also...not as part\n> of the source, just as a seperate file to be used...\n> \n> \n> \nThis is really so important that surely it's worthy of a 6.3.1 and should\ngo onto the CD-ROM in this form if at all possible.\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n",
"msg_date": "Mon, 9 Mar 1998 10:52:43 GMT",
"msg_from": "Andrew Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dumpall"
}
] |
[
{
"msg_contents": "> > By the way, I have a sugestion I'd like to do about psql:\n> > \n> > It would be a good idea, IMHO, that if psql is called alone (without\n> > database nor any parameter), instead of try to connect to database 'user'\n> > starts interactively without connection. Then, simple calling \"\\c dbname\"\n> > would connect the user to the desired database. What you think?\n> > Cheers,\n> \n> This seems like a good idea. Any comments?\n> \nWhat about having an environment variable to check and if this isn't set either,\nstart without connecting.\n\ni.e. Priority 1 is the command line\n Priority 2 the envvar\n Default, start without connection\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n",
"msg_date": "Mon, 9 Mar 1998 10:57:54 GMT",
"msg_from": "Andrew Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: your mail"
}
] |
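A sketch of the priority scheme proposed above, written as a small wrapper script rather than a change to psql itself; the PSQL_DEFAULT_DB variable and the --no-connect flag are made up for illustration (no such psql option exists):

#!/bin/sh
# Priority 1: an explicit database (or any argument) on the command line.
if [ $# -gt 0 ]; then
    exec psql "$@"
# Priority 2: an environment variable chosen by the user.
elif [ -n "$PSQL_DEFAULT_DB" ]; then
    exec psql "$PSQL_DEFAULT_DB"
# Default: start without connecting (hypothetical mode).
else
    exec psql --no-connect
fi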
[
{
"msg_contents": "> \tThen, using what was left, I did:\n> \n> psql -e < db.dump \n> \n> \tAgain. Failed misearbly, with the following coming from the 'copy\n> to <relname> from stdin;' section:\n> \n> 344984 johnb xgSldZdYEgIWo clio.trends.ca n \\N\n\nI got something kind-of similar with a core dump... This was 'cos of a \ncolumn name which is now a reserved word.\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n",
"msg_date": "Mon, 9 Mar 1998 11:04:56 GMT",
"msg_from": "Andrew Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] How to...?"
},
{
"msg_contents": "On Mon, 9 Mar 1998, Andrew Martin wrote:\n\n> > \tThen, using what was left, I did:\n> > \n> > psql -e < db.dump \n> > \n> > \tAgain. Failed misearbly, with the following coming from the 'copy\n> > to <relname> from stdin;' section:\n> > \n> > 344984 johnb xgSldZdYEgIWo clio.trends.ca n \\N\n> \n> I got something kind-of similar with a core dump... This was 'cos of a \n> column name which is now a reserved word.\n\n\tAck, I fear you are correct...the third field above is 'password',\nwhich became a reserved word in v6.3...oh man, is this upgrade ever going\nto hurt...my 'db.dump' file is 84Meg...vi just loves it :)\n\n\n",
"msg_date": "Mon, 9 Mar 1998 08:04:32 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to...?"
},
{
"msg_contents": "On Mon, 9 Mar 1998, The Hermit Hacker wrote:\n\n> On Mon, 9 Mar 1998, Andrew Martin wrote:\n> \n> > > \tThen, using what was left, I did:\n> > > \n> > > psql -e < db.dump \n> > > \n> > > \tAgain. Failed misearbly, with the following coming from the 'copy\n> > > to <relname> from stdin;' section:\n> > > \n> > > 344984 johnb xgSldZdYEgIWo clio.trends.ca n \\N\n> > \n> > I got something kind-of similar with a core dump... This was 'cos of a \n> > column name which is now a reserved word.\n> \n> \tAck, I fear you are correct...the third field above is 'password',\n> which became a reserved word in v6.3...oh man, is this upgrade ever going\n> to hurt...my 'db.dump' file is 84Meg...vi just loves it :)\n\nHow about starting up your old pgsql, then psql, the 'alter table rename \ncolumn.....'\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Mon, 9 Mar 1998 16:47:10 +0100 (MET)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to...?"
},
{
"msg_contents": "On Mon, 9 Mar 1998, Maarten Boekhold wrote:\n\n> On Mon, 9 Mar 1998, The Hermit Hacker wrote:\n> \n> > On Mon, 9 Mar 1998, Andrew Martin wrote:\n> > \n> > > > \tThen, using what was left, I did:\n> > > > \n> > > > psql -e < db.dump \n> > > > \n> > > > \tAgain. Failed misearbly, with the following coming from the 'copy\n> > > > to <relname> from stdin;' section:\n> > > > \n> > > > 344984 johnb xgSldZdYEgIWo clio.trends.ca n \\N\n> > > \n> > > I got something kind-of similar with a core dump... This was 'cos of a \n> > > column name which is now a reserved word.\n> > \n> > \tAck, I fear you are correct...the third field above is 'password',\n> > which became a reserved word in v6.3...oh man, is this upgrade ever going\n> > to hurt...my 'db.dump' file is 84Meg...vi just loves it :)\n> \n> How about starting up your old pgsql, then psql, the 'alter table rename \n> column.....'\n\n\tCause I keep forgetting that I can do that :( Point taken and\nwill try that out, thanks...\n\n\n",
"msg_date": "Mon, 9 Mar 1998 11:52:28 -0500 (EST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to...?"
},
{
"msg_contents": "> > \tAck, I fear you are correct...the third field above is 'password',\n> > which became a reserved word in v6.3...oh man, is this upgrade ever going\n> > to hurt...my 'db.dump' file is 84Meg...vi just loves it :)\n> \n> How about starting up your old pgsql, then psql, the 'alter table rename \n> column.....'\n\nI assume normal users do the pg_dump, delete their databases and old\nbinaries, then try to reload into 6.3.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 9 Mar 1998 12:11:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to...?"
},
{
"msg_contents": "> Ack, I fear you are correct...the third field above is 'password',\n> which became a reserved word in v6.3...oh man, is this upgrade ever going\n> to hurt...my 'db.dump' file is 84Meg...vi just loves it :)\n\nWell, this doesn't solve the general problem, but \"password\" can be used as a\ncolumn name without inducing shift/reduce conflicts. I'll patch the source\ntree sometime soon; in the meantime add the obvious source around line 4618\nin gram.y:\n\n | PASSWORD { $$ = \"password\"; }\n\n(add tabs to taste :)\n\n - Tom\n\n",
"msg_date": "Tue, 10 Mar 1998 04:00:42 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to...?"
}
] |
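A hedged example of the rename workaround suggested above, run against the still-running 6.2.1 server before taking the dump; 'acctng' is the database mentioned in the thread, while 'accounts' and 'passwd' are made-up table and column names:

# Rename the offending column while 6.2.1 is still up, then re-run
# pg_dumpall, so the dump no longer uses the reserved word "password"
# as a column name.
psql -c 'ALTER TABLE accounts RENAME COLUMN password TO passwd;' acctng

Once Tom's gram.y change above is applied, "password" is usable as a column name again and the rename becomes unnecessary.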