[ { "msg_contents": "It's just version 0.1 but it looks very good IMO. From the READmE:\n\nGtkSQL v0.1 - 02 May 1998\n=========================\n by Lionel ULMER ([email protected])\n\n GtkSQL is a graphical query tool for PostgreSQL 6.3. I've written it\nto learn how to use Gtk, and this version is quite unpolished (it\nlacks several features I plan to add), but as it is quite stable and\nusable in its present state, I decided to release it now :-).\n A list of changes from version 0.0 is in the Changelog file. Feel\nfree to mail me for any comments / questions / bug reports /\nimprovements requests you have (I LOVE mail :-)).\n\n There is a home page for GtkSQL :\n http//www.mygale.org/~bbrox/GtkSQL/\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Tue, 19 May 1998 12:36:09 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Anyone seen gtksql?" } ]
[ { "msg_contents": "Hi all,\n\nI think there's an error on pg_dump, \nmy environment is:\n Lynux 2.0.33\n\t PostgreSQL 6.3\n\n1) ----VARCHAR(-50)------------------------------------------\n\nI created a table as:\nCREATE TABLE utente (\n\tintestazione_azienda \tvarchar,\n\tindirizzo \t\tvarchar\n\t);\n\nusing pg_dump -d mydatabase > file\n\nfile is like:\n\\connect - postgres\nCREATE TABLE utente (intestazione_azienda varchar(-5), indirizzo varchar(-5));\n\nif I try to load it using\npsql -d mydatabase < file\nI have this:\n\nERROR: length for 'varchar' type must be at least 1\n\n2) ----CONSTRAINT--------------------------------------------\n\nI created a table like:\n\nCREATE TABLE attivita_a (\n\tazienda\t\t\tCHAR(11) NOT NULL,\n\tattivita\t\tCHAR(03) NOT NULL,\n\toperatore\t\tCHAR(03),\t\n\tvet_esterno\t\tVARCHAR(45),\n\ttipo_allevamento1\tCHAR(02),\t\t\n\ttipo_allevamento2\tCHAR(02),\n\tesonerato\t\tCHAR CHECK(esonerato = 'S' OR esonerato = 'N'),\n\trazza_prevalente1\tCHAR(03),\t\n\trazza_prevalente2\tCHAR(03),\t\t\n\tiscrizione_libro\tDATE,\n\tiscritta_funzionali\tCHAR CHECK(iscritta_funzionali = 'S' OR iscritta_funzionali = 'N'),\n\tiscritta_tutela\t\tCHAR CHECK(iscritta_tutela = 'S' OR iscritta_tutela = 'N'),\n\tsigla_tutela\t\tCHAR(04),\n\tadesione_altri_piani\tVARCHAR(50),\n\tdata_adesione\t\tDATE,\n PRIMARY KEY (azienda,attivita)\n\t);\n\n\nusing pg_dump I have this:\n\n\\connect - postgres\nCREATE TABLE attivita_a (\n azienda \t\tchar(11) NOT NULL,\n attivita \t\tchar(3) NOT NULL,\n operatore \t\tchar(3),\n vet_esterno \t\tvarchar(45),\n tipo_allevamento1 \tchar(2),\n tipo_allevamento2 \tchar(2),\n esonerato \t\tchar,\n razza_prevalente1 \tchar(3),\n razza_prevalente2 \tchar(3),\n iscrizione_libro \tdate,\n iscritta_funzionali \tchar,\n iscritta_tutela \tchar,\n sigla_tutela \t\tchar(4),\n adesione_altri_piani \tvarchar(50),\n data_adesione date)\n CONSTRAINT attivita_a_esonerato CHECK esonerato = 'S' OR esonerato = 'N',\n CONSTRAINT 
attivita_a_iscritta_funzionali CHECK iscritta_funzionali = 'S' OR iscritta_funzionali = 'N',\n CONSTRAINT attivita_a_iscritta_tutela CHECK iscritta_tutela = 'S' OR iscritta_tutela = 'N';\n--\nNote that CONSTRAINTs are the wrong syntax, they are defined after the close\nparenthesis of CREATE TABLE.\n\n3)----VIEWS-------------------------------------------------\nI have some views on my database but seems that pg_dump doesn't see those\nviews.\n Jose'\n\n", "msg_date": "Tue, 19 May 1998 12:24:31 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump error" }, { "msg_contents": "> \n> Hi all,\n> \n> I think there's an error on pg_dump, \n> my environment is:\n> Lynux 2.0.33\n> \t PostgreSQL 6.3\n> \n> 1) ----VARCHAR(-50)------------------------------------------\n> \n> I created a table as:\n> CREATE TABLE utente (\n> \tintestazione_azienda \tvarchar,\n> \tindirizzo \t\tvarchar\n> \t);\n> \n> using pg_dump -d mydatabase > file\n> \n> file is like:\n> \\connect - postgres\n> CREATE TABLE utente (intestazione_azienda varchar(-5), indirizzo varchar(-5));\n\nBasically, something major is wrong in your installation. I have never\nheard a report like this, and people use pg_dump all the time.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Mon, 15 Jun 1998 23:19:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump error" }, { "msg_contents": "On Mon, 15 Jun 1998, Bruce Momjian wrote:\n\n> > \n> > Hi all,\n> > \n> > I think there's an error on pg_dump, \n> > my environment is:\n> > Lynux 2.0.33\n> > \t PostgreSQL 6.3\n> > \n> > 1) ----VARCHAR(-50)------------------------------------------\n> > \n> > I created a table as:\n> > CREATE TABLE utente (\n> > \tintestazione_azienda \tvarchar,\n> > \tindirizzo \t\tvarchar\n> > \t);\n> > \n> > using pg_dump -d mydatabase > file\n> > \n> > file is like:\n> > \\connect - postgres\n> > CREATE TABLE utente (intestazione_azienda varchar(-5), indirizzo varchar(-5));\n> \n> Basically, something major is wrong in your installation. I have never\n> heard a report like this, and people use pg_dump all the time.\n> \nI have three bugs Bruce:\n\n1) VARCHAR(-5)\n2) CONSTRAINTs wrong syntax\n3) no VIEWs ??\n\nhygea=> create table prova (var varchar, bp bpchar check (bp='zero'));\nCREATE\nhygea=> create view wprova as select var from prova;\nCREATE\n\npg_dump hygea -s prova\n\n\\connect - postgres\nCREATE TABLE prova (var varchar(-5), bp char(-5)) CONSTRAINT prova_bp CHECK bp\n=COPY prova FROM stdin;\n\\.\n Jose'\n\n", "msg_date": "Tue, 16 Jun 1998 11:18:16 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_dump error" }, { "msg_contents": "\nI (thought I) forwarded fixes for the pg_dump constraint syntax\nbug to this list a couple of weeks ago. 
I added a -c (compatible)\nswitch to pg_dump to force it to dump constraints in a syntax that\npgsql can understand.\n\nHere's another copy of the diffs (against 6.3.2).\n\nccb\n\n----------------\n*** /usr/local/src/pgsql/6.3.2/src/bin/pg_dump/pg_dump.c\tThu Apr 9 19:02:24 1998\n--- ./pg_dump.c\tTue Jun 9 14:27:36 1998\n***************\n*** 110,115 ****\n--- 110,116 ----\n int\t\t\tattrNames;\t\t\t/* put attr names into insert strings */\n int\t\t\tschemaOnly;\n int\t\t\tdataOnly;\n+ int compatConstraint;\n \n char\t\tg_opaque_type[10];\t/* name for the opaque type */\n \n***************\n*** 126,131 ****\n--- 127,134 ----\n \tfprintf(stderr,\n \t\t\t\"\\t -a \\t\\t dump out only the data, no schema\\n\");\n \tfprintf(stderr,\n+ \t\t \"\\t -c \\t\\t generate pgsql-compatible CONSTRAINT syntax\\n\");\n+ \tfprintf(stderr,\n \t\t\t\"\\t -d \\t\\t dump data as proper insert strings\\n\");\n \tfprintf(stderr,\n \t \"\\t -D \\t\\t dump data as inserts with attribute names\\n\");\n***************\n*** 551,567 ****\n \tg_comment_end[0] = '\\0';\n \tstrcpy(g_opaque_type, \"opaque\");\n \n! \tdataOnly = schemaOnly = dumpData = attrNames = 0;\n \n \tprogname = *argv;\n \n! \twhile ((c = getopt(argc, argv, \"adDf:h:op:st:vzu\")) != EOF)\n \t{\n \t\tswitch (c)\n \t\t{\n \t\t\tcase 'a':\t\t\t/* Dump data only */\n \t\t\t\tdataOnly = 1;\n \t\t\t\tbreak;\n \t\t\tcase 'd':\t\t\t/* dump data as proper insert strings */\n \t\t\t\tdumpData = 1;\n \t\t\t\tbreak;\n--- 554,574 ----\n \tg_comment_end[0] = '\\0';\n \tstrcpy(g_opaque_type, \"opaque\");\n \n! \tcompatConstraint = dataOnly = schemaOnly = dumpData = attrNames = 0;\n \n \tprogname = *argv;\n \n! 
\twhile ((c = getopt(argc, argv, \"acdDf:h:op:st:vzu\")) != EOF)\n \t{\n \t\tswitch (c)\n \t\t{\n \t\t\tcase 'a':\t\t\t/* Dump data only */\n \t\t\t\tdataOnly = 1;\n \t\t\t\tbreak;\n+ \t\t case 'c': /* generate constraint syntax that\n+ \t\t\t\t\t\t\t can be read back into postgreSQL */\n+ \t\t\t compatConstraint = 1;\n+ \t\t\t\tbreak;\n \t\t\tcase 'd':\t\t\t/* dump data as proper insert strings */\n \t\t\t\tdumpData = 1;\n \t\t\t\tbreak;\n***************\n*** 1496,1502 ****\n \t\t\t\tquery[0] = 0;\n \t\t\t\tif (name[0] != '$')\n \t\t\t\t\tsprintf(query, \"CONSTRAINT %s \", name);\n! \t\t\t\tsprintf(query, \"%sCHECK %s\", query, expr);\n \t\t\t\ttblinfo[i].check_expr[i2] = strdup(query);\n \t\t\t}\n \t\t\tPQclear(res2);\n--- 1503,1514 ----\n \t\t\t\tquery[0] = 0;\n \t\t\t\tif (name[0] != '$')\n \t\t\t\t\tsprintf(query, \"CONSTRAINT %s \", name);\n! \t\t\t\tif( compatConstraint ) {\n! \t\t\t\t sprintf(query, \"%sCHECK (%s)\", query, expr);\n! \t\t\t\t}\n! \t\t\t\telse {\n! \t\t\t\t sprintf(query, \"%sCHECK %s\", query, expr);\n! \t\t\t\t}\n \t\t\t\ttblinfo[i].check_expr[i2] = strdup(query);\n \t\t\t}\n \t\t\tPQclear(res2);\n***************\n*** 2518,2523 ****\n--- 2530,2546 ----\n \t\t\t\t}\n \t\t\t}\n \n+ \t\t\tif( compatConstraint ) {\n+ \t\t\t\t/* put the CONSTRAINTS inside the table def */\n+ \t\t\t\tfor (k = 0; k < tblinfo[i].ncheck; k++)\n+ \t\t\t\t{\n+ \t\t\t\t\tsprintf(q, \"%s%s %s\",\n+ \t\t\t\t\t\tq,\n+ \t\t\t\t\t\t(actual_atts + k > 0) ? \", \" : \"\",\n+ \t\t\t\t\t\ttblinfo[i].check_expr[k]);\n+ \t\t\t\t}\n+ \t\t\t}\n+ \n \t\t\tstrcat(q, \")\");\n \n \t\t\tif (numParents > 0)\n***************\n*** 2533,2540 ****\n \t\t\t\tstrcat(q, \")\");\n \t\t\t}\n \n! \t\t\tif (tblinfo[i].ncheck > 0)\n \t\t\t{\n \t\t\t\tfor (k = 0; k < tblinfo[i].ncheck; k++)\n \t\t\t\t{\n \t\t\t\t\tsprintf(q, \"%s%s %s\",\n--- 2556,2564 ----\n \t\t\t\tstrcat(q, \")\");\n \t\t\t}\n \n! 
\t\t\tif( !compatConstraint )\n \t\t\t{\n+ \t\t\t\t/* put the CONSTRAINT defs outside the table def */\n \t\t\t\tfor (k = 0; k < tblinfo[i].ncheck; k++)\n \t\t\t\t{\n \t\t\t\t\tsprintf(q, \"%s%s %s\",\n***************\n*** 2543,2548 ****\n--- 2567,2573 ----\n \t\t\t\t\t\t\ttblinfo[i].check_expr[k]);\n \t\t\t\t}\n \t\t\t}\n+ \n \t\t\tstrcat(q, \";\\n\");\n \t\t\tfputs(q, fout);\n \t\t\tif (acls)\n", "msg_date": "Wed, 17 Jun 1998 11:10:34 -0300", "msg_from": "Charles Bennett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Re: [HACKERS] pg_dump error " }, { "msg_contents": "> \n> \n> I (thought I) forwarded fixes for the pg_dump constraint syntax\n> bug to this list a couple of weeks ago. I added a -c (compatible)\n> switch to pg_dump to force it to dump constraints in a syntax that\n> pgsql can understand.\n> \n> Here's another copy of the diffs (against 6.3.2).\n> \n\nI just applied this patch a few days ago. I e-mailed you asking why\nthere is an option for this behavour. Seems like it should always be\non.\n\nPlease let me know.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 17 Jun 1998 18:54:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Re: [HACKERS] pg_dump error" }, { "msg_contents": "On Wed, 17 Jun 1998, Charles Bennett wrote:\n\n> \n> I (thought I) forwarded fixes for the pg_dump constraint syntax\n> bug to this list a couple of weeks ago. I added a -c (compatible)\n> switch to pg_dump to force it to dump constraints in a syntax that\n> pgsql can understand.\n> \n> Here's another copy of the diffs (against 6.3.2).\n> \n> ccb\nI applied your patch, Charles and it works, obviouly I remove the -c parameter\nbecause there isn't another syntax for CONSTRAINTs. 
PostgreSQL has\nthe SQL92 syntax.\n Thanks, Jose'\n\n", "msg_date": "Thu, 18 Jun 1998 10:27:01 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] Re: [HACKERS] pg_dump error " }, { "msg_contents": "\nBruce Momjian said:\n\n> I just applied this patch a few days ago. I e-mailed you asking why\n> there is an option for this behavour. Seems like it should always be\n> on.\n> \n> Please let me know.\n\n\nSorry I missed the mail...\n\nI set this up as an option because I though the initial behavior\nmight have been put in for a reason - one that I didn't understand.\nI have no objection if you decide to make PGSQL-compatible dump\nsyntax the default.\n\nccb\n\n---\nCharles C. Bennett, Jr.\t\t\tPubWeb, Inc.\nSoftware Engineer\t\t\tThe Publishing <-> Printing Network\nAgent of Disintermediation\t\t4A Gill St.\[email protected]\t\t\t\tWoburn, MA 01801\n", "msg_date": "Thu, 18 Jun 1998 14:35:37 -0300", "msg_from": "Charles Bennett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Re: [HACKERS] pg_dump error " }, { "msg_contents": "> \n> On Wed, 17 Jun 1998, Charles Bennett wrote:\n> \n> > \n> > I (thought I) forwarded fixes for the pg_dump constraint syntax\n> > bug to this list a couple of weeks ago. I added a -c (compatible)\n> > switch to pg_dump to force it to dump constraints in a syntax that\n> > pgsql can understand.\n> > \n> > Here's another copy of the diffs (against 6.3.2).\n> > \n> > ccb\n> I applied your patch, Charles and it works, obviouly I remove the -c parameter\n> because there isn't another syntax for CONSTRAINTs. PostgreSQL has\n> the SQL92 syntax.\n\nOK, I have removed the -c syntax for pg_dump, so all dumps now use the\nnew format.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Fri, 19 Jun 1998 22:49:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Re: [HACKERS] pg_dump error" }, { "msg_contents": "> \n> \n> Bruce Momjian said:\n> \n> > I just applied this patch a few days ago. I e-mailed you asking why\n> > there is an option for this behavour. Seems like it should always be\n> > on.\n> > \n> > Please let me know.\n> \n> \n> Sorry I missed the mail...\n> \n> I set this up as an option because I though the initial behavior\n> might have been put in for a reason - one that I didn't understand.\n> I have no objection if you decide to make PGSQL-compatible dump\n> syntax the default.\n\nDone.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 19 Jun 1998 23:01:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Re: [HACKERS] pg_dump error" } ]
[ { "msg_contents": "> Bruce Wrote..\n> \n> > Bruce Momjian wrote:\n> > \n> > > OK, thanks to Tom Lane's many patches, I have query cancel working on my\n> > > machine. However, it is not working with Unix domain sockets. I get:\n> > >\n> > > Cannot send cancel request:\n> > > PQrequestCancel() -- couldn't send OOB data: errno=45\n> > > Operation not supported\n> > >\n> > > This is under BSDI 3.1.\n> > >\n> > > Do Unix Domain sockets support OOB(out-of-band) data?\n> > >\n> > \n> > Unix domain sockets don't support OOB (Stevens, Unix Network Programming).\n> \n> Yea, I found that too, late last night, Section 6.14, page 332.\n> \n> I basically need some way to 'signal' the backend of a cancellation\n> request. Polling the socket is not an option because it would impose\n> too great a performance penalty. Maybe async-io on a read(), but that\n> is not going to be very portable.\n> \n> I could pass the backend pid to the front end, and send a kill(SIG_URG)\n> to that pid on a cancel, but the frontend can be running as a different\n> user than the backend. Problem is, the only communcation channel is\n> that unix domain socket.\n> \n> We basically need some way to get the attention of the backend,\n> hopefully via some signal.\n> \n> Any ideas?\n> \n\nThink..Think..Think..\n\nIf the notification has to be a signal, it has to come from a process\nwith the same pid (or running as root). That means another processes,\nperhaps listening to another socket. To interrupt the user process\nconnects to the other (Unix domain) socket, sends some sort of cancell\nid, and closes. The signaller process then signalls the backend. Ugly.\n\nHmm... The postmaster is still hanging around, isn't it. So to cancel\nyou make another identical connection to the postmaster and send a different\ncode. A bit less ugly.\n\nIs the cancell flag in shared memory? The postmaster could set it directly\nwithout the signal() call. 
In fact, if it were in shared memory the\npostmaster could take a peek and see how much of the query was completed,\nif some sort of counter was maintained.\n\nWish I could come up with something better. Another example of the\nfact that our beloved operating system does indeed have a few warts.\n\n-- cary\n", "msg_date": "Tue, 19 May 1998 08:47:10 -0400 (EDT)", "msg_from": "\"Cary B. O'Brien\" <[email protected]>", "msg_from_op": true, "msg_subject": "Cancell/OOB over a Unix Domain Socket" }, { "msg_contents": "> Think..Think..Think..\n> \n> If the notification has to be a signal, it has to come from a process\n> with the same pid (or running as root). That means another processes,\n> perhaps listening to another socket. To interrupt the user process\n> connects to the other (Unix domain) socket, sends some sort of cancell\n> id, and closes. The signaller process then signalls the backend. Ugly.\n\nYep.\n\n> \n> Hmm... The postmaster is still hanging around, isn't it. So to cancel\n> you make another identical connection to the postmaster and send a different\n> code. A bit less ugly.\n\nYep.\n\n> \n> Is the cancell flag in shared memory? The postmaster could set it directly\n> without the signal() call. In fact, if it were in shared memory the\n> postmaster could take a peek and see how much of the query was completed,\n> if some sort of counter was maintained.\n\nShared memory really doesn't buy us much. If we have privs to attach to\nshared memory, we have enough to send a signal, and because it just\nneeds to tell it to stop, extra bandwidth of shared memory isn't buying\nus anything. In fact, it could make it worse, because we would have to\nsynchronize access to the shared memory.\n\n> \n> Wish I could come up with something better. 
Another example of the\n> fact that our beloved operating system does indeed have a few warts.\n\nI am still looking for that silver bullet.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 19 May 1998 15:36:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Cancell/OOB over a Unix Domain Socket" } ]
[ { "msg_contents": "> > 1. Implement addition of atttypmod field to RowDescriptor messages.\n> > The client-side code is there but ifdef'd out. I have no idea\n> > what to change on the backend side. The field should be sent\n> > only if protocol >= 2.0, of course.\n\nHmm. I was hoping to do something in the backend to allow data types\nlike numeric(p,s) which take multiple qualifying arguments (in this\ncase, precision and scale). One possibility was to shoehorn both fields\ninto the existing atttypmod 16-bit field.\n\nSeems like atttypmod is now being used for things outside of the\nbackend, but I'm not sure how to support these other uses with these\nother possible data types.\n\nA better general approach to the type qualifier problem might be to\ndefine a variable-length data type which specifies column\ncharacteristics, and then pass that around. For character strings, it\nwould have one field, and for numeric() and decimal() it would have two.\n\nComments? Ideas??\n\n - Tom\n", "msg_date": "Wed, 20 May 1998 01:07:57 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] Re: FE/BE protocol revision patch" }, { "msg_contents": "> \n> > > 1. Implement addition of atttypmod field to RowDescriptor messages.\n> > > The client-side code is there but ifdef'd out. I have no idea\n> > > what to change on the backend side. The field should be sent\n> > > only if protocol >= 2.0, of course.\n> \n> Hmm. I was hoping to do something in the backend to allow data types\n> like numeric(p,s) which take multiple qualifying arguments (in this\n> case, precision and scale). One possibility was to shoehorn both fields\n> into the existing atttypmod 16-bit field.\n> \n> Seems like atttypmod is now being used for things outside of the\n> backend, but I'm not sure how to support these other uses with these\n> other possible data types.\n\nWe are just passing it back. 
There is no special handling of atttypmod\nthat you need to worry about. I think we pass it back to Openlink can\nknow the actual length of the char() and varchar() fields without doing\na dummy select. However, they better know it is a char()/varchar()\nfield before using it for such a purpose, because it could be used from\nsomething else later on, as you suggest.\n\n> A better general approach to the type qualifier problem might be to\n> define a variable-length data type which specifies column\n> characteristics, and then pass that around. For character strings, it\n> would have one field, and for numeric() and decimal() it would have two.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 19 May 1998 21:54:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: FE/BE protocol revision patch" }, { "msg_contents": "\"Thomas G. Lockhart\" <[email protected]> writes:\n> A better general approach to the type qualifier problem might be to\n> define a variable-length data type which specifies column\n> characteristics, and then pass that around. For character strings, it\n> would have one field, and for numeric() and decimal() it would have two.\n\n... and for ordinary column datatypes of fixed properties, it needn't\nhave *any* fields. That would more than pay for the space cost of\nsupporting a variable-width data type, I bet. I like this.\n\nOnce atttypmod is exposed to applications it will be much harder to\nchange its representation or meaning, so I'd suggest getting this right\nbefore 6.4 comes out. 
If that doesn't seem feasible, I think I'd even\nvote for backing out the change that makes atttypmod visible until it\ncan be done right.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 May 1998 10:07:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: FE/BE protocol revision patch " }, { "msg_contents": "> Once atttypmod is exposed to applications it will be much harder to\n> change its representation or meaning\n\nYeah, that is what I'm worried about too...\n\n - Tom (the other \"tgl\")\n", "msg_date": "Wed, 20 May 1998 15:02:08 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [DOCS] Re: FE/BE protocol revision patch" }, { "msg_contents": "> \n> > Once atttypmod is exposed to applications it will be much harder to\n> > change its representation or meaning\n> \n> Yeah, that is what I'm worried about too...\n> \n> - Tom (the other \"tgl\")\n\nWell, atttypmod is stored in various C structures, like Resdom, so we\nwould need some C representation for the type, even it is just a void *.\n\nVery few system columns are varlena/text, and for good reason, perhaps.\n\nSuch a change is certainly going to make things in the backend slightly\nharder, so please let me what advantage a varlena atttypmod is going to\nhave.\n\nZero overhead for types that don't use it is meaningless, because the\nvarlena length is 4 bytes, while current atttypmod is only two. Second,\nI don't see how a varlena makes atttypmod less type-specific. We\ncurrently return a -1 when it is not being used.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Wed, 20 May 1998 11:33:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Re: FE/BE protocol revision patch" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Zero overhead for types that don't use it is meaningless, because the\n> varlena length is 4 bytes, while current atttypmod is only two. Second,\n> I don't see how a varlena makes atttypmod less type-specific.\n\nWell, the issue is making sure that it will be adequate for future\ndatatypes that we can't foresee.\n\nI can see that a variable-size atttypmod might be a tad painful to\nsupport. If you don't want to go that far, a reasonable compromise\nwould be to make it int4 instead of int2. int2 is already uncomfortably\ntight for the numeric/decimal datatypes, which we surely will want to\nsupport soon (at least I do ;-)). int4 should give a little breathing\nroom for datatypes that need to encode more than one subfield into\natttypmod.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 May 1998 12:07:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Re: FE/BE protocol revision patch " }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > Zero overhead for types that don't use it is meaningless, because the\n> > varlena length is 4 bytes, while current atttypmod is only two. Second,\n> > I don't see how a varlena makes atttypmod less type-specific.\n> \n> Well, the issue is making sure that it will be adequate for future\n> datatypes that we can't foresee.\n> \n> I can see that a variable-size atttypmod might be a tad painful to\n> support. If you don't want to go that far, a reasonable compromise\n> would be to make it int4 instead of int2. int2 is already uncomfortably\n> tight for the numeric/decimal datatypes, which we surely will want to\n> support soon (at least I do ;-)). 
int4 should give a little breathing\n> room for datatypes that need to encode more than one subfield into\n> atttypmod.\n\nComments? I am willing to change it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 20 May 1998 12:21:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Re: FE/BE protocol revision patch" }, { "msg_contents": "> \n> > > > Comments? I am willing to change it.\n> > >\n> > > An int 4 atttypmod should be fine. A bit of overhead perhaps, but \n> > > who quibles about a few bytes these days? And, perhaps there is a \n> > > use.\n> > Yea, no one commented, so it stays an int2 until someone finds a type\n> > that needs more than a two-byte atttypmod. Right now, it fits the \n> > need.\n> \n> Well, I didn't comment because I haven't yet worked out the issues. But\n> I'll go with Bruce's and David's inclination that we should shoehorn\n> numeric()/decimal() into something like the existing atttypmod field\n> rather than trying for \"the general solution\" which btw isn't obvious\n> how to do.\n> \n> However, I don't think that 16 bits vs 32 bits is an issue at all\n> performance-wise, and I'd to see atttypmod go to 32 bits just to give a\n> little breathing room. I'm already using int32 to send attypmod to the\n> new char/varchar sizing functions.\n\nOK, I can change it, but it is not easy. Will take time.\n\n> \n> Can we go to int32 on atttypmod? I'll try to break it up into two\n> sub-fields to implement numeric().\n> \n> btw, anyone know of a package for variable- and large-precision\n> numerics? I have looked at the GNU gmp package, but it looks to me that\n> it probably won't fit into the db backend without lots of overhead. 
Will\n> probably try to use the int64 package in contrib for now...\n> \n> - Tom\n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 31 May 1998 23:00:27 +2000 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Re: FE/BE protocol revision patcht" }, { "msg_contents": "\nAgain, old news, but am wading through my backlog.\n\nBruce Momjian and Tom Lane are discussing atttypmod, its uses, and prospects:\n> > \n> > Bruce Momjian <[email protected]> writes:\n> > > Zero overhead for types that don't use it is meaningless, because the\n> > > varlena length is 4 bytes, while current atttypmod is only two. Second,\n> > > I don't see how a varlena makes atttypmod less type-specific.\n> > \n> > Well, the issue is making sure that it will be adequate for future\n> > datatypes that we can't foresee.\n> > \n> > I can see that a variable-size atttypmod might be a tad painful to\n> > support. If you don't want to go that far, a reasonable compromise\n> > would be to make it int4 instead of int2. int2 is already uncomfortably\n> > tight for the numeric/decimal datatypes, which we surely will want to\n> > support soon (at least I do ;-)). int4 should give a little breathing\n> > room for datatypes that need to encode more than one subfield into\n> > atttypmod.\n> \n> Comments? I am willing to change it.\n\nAn int 4 atttypmod should be fine. A bit of overhead perhaps, but who\nquibles about a few bytes these days? And, perhaps there is a use.\n \nAndreas Zeugswetter <[email protected]> add to the discussion:\n> \n> > Once atttypmod is exposed to applications it will be much harder to\n> > change its representation or meaning, so I'd suggest getting this right\n> > before 6.4 comes out. 
If that doesn't seem feasible, I think I'd even\n> > vote for backing out the change that makes atttypmod visible until it\n> > can be done right.\n> \n> atttypmod is the right direction, it only currently lacks extendability.\n> \n> Andreas\n\nBut, I think a line needs to be drawn. There is no way to forsee all the\npossible uses to cover all future extendibility within the protocol. But,\nthe protocol should not be responsible for this anyway, that is really \nthe role of type implementation.\n\nRight now the protocol supports some types (char, int, float etc) in a\nspecial way. And it provides for composites. But it doesn't (and no-one\nis arguing that it should) support images or sounds or timeseries in a\nspecial way. The type itself has to handle that chore. All the protocol\nreally should do is provide a way to find the size and type of a value.\nWhich it does.\n\nNumeric is a kind of borderline case. I think a perfectly good numeric\nimplementation could be made using varlenas to hold binary representations\nof infinite precision scaled integers with precision and scale embedded in\nthe data. But, Numeric is an SQL92 type, and it is very common in SQL\napplications and so the extra convenience of built-in support in the\nprotocol is probably justified. And, Numeric suport is something we know\nabout the need for now.\n\nBut, I don't think that spending a lot of effort or complicating the backend\ncode to support currently unknown and undefined possible future extensibility\nis worthwhile.\n\nMy opinion only, but every project I have seen that started to get serious\nabout predicting future requireements ended up failing to meet known current\nrequirements.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! 
Fetch me a child of five.\n\n", "msg_date": "Sat, 30 May 1998 22:18:03 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Re: FE/BE protocol revision patch" }, { "msg_contents": "> > \n> > Comments? I am willing to change it.\n> \n> An int 4 atttypmod should be fine. A bit of overhead perhaps, but who\n> quibles about a few bytes these days? And, perhaps there is a use.\n\nYea, no one commented, so it stays an int2 until someone finds a type\nthat needs more than a two-byte atttypmod. Right now, it fits the need.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 31 May 1998 01:51:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Re: FE/BE protocol revision patch" }, { "msg_contents": "> > > Comments? I am willing to change it.\n> >\n> > An int 4 atttypmod should be fine. A bit of overhead perhaps, but \n> > who quibles about a few bytes these days? And, perhaps there is a \n> > use.\n> Yea, no one commented, so it stays an int2 until someone finds a type\n> that needs more than a two-byte atttypmod. Right now, it fits the \n> need.\n\nWell, I didn't comment because I haven't yet worked out the issues. But\nI'll go with Bruce's and David's inclination that we should shoehorn\nnumeric()/decimal() into something like the existing atttypmod field\nrather than trying for \"the general solution\" which btw isn't obvious\nhow to do.\n\nHowever, I don't think that 16 bits vs 32 bits is an issue at all\nperformance-wise, and I'd to see atttypmod go to 32 bits just to give a\nlittle breathing room. I'm already using int32 to send attypmod to the\nnew char/varchar sizing functions.\n\nCan we go to int32 on atttypmod? 
I'll try to break it up into two\nsub-fields to implement numeric().\n\nbtw, anyone know of a package for variable- and large-precision\nnumerics? I have looked at the GNU gmp package, but it looks to me that\nit probably won't fit into the db backend without lots of overhead. Will\nprobably try to use the int64 package in contrib for now...\n\n - Tom\n", "msg_date": "Sun, 31 May 1998 16:08:44 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [DOCS] Re: FE/BE protocol revision patch" }, { "msg_contents": "> \n> > > > Comments? I am willing to change it.\n> > >\n> > > An int 4 atttypmod should be fine. A bit of overhead perhaps, but \n> > > who quibles about a few bytes these days? And, perhaps there is a \n> > > use.\n> > Yea, no one commented, so it stays an int2 until someone finds a type\n> > that needs more than a two-byte atttypmod. Right now, it fits the \n> > need.\n> \n> Well, I didn't comment because I haven't yet worked out the issues. But\n> I'll go with Bruce's and David's inclination that we should shoehorn\n> numeric()/decimal() into something like the existing atttypmod field\n> rather than trying for \"the general solution\" which btw isn't obvious\n> how to do.\n> \n> However, I don't think that 16 bits vs 32 bits is an issue at all\n> performance-wise, and I'd to see atttypmod go to 32 bits just to give a\n> little breathing room. I'm already using int32 to send attypmod to the\n> new char/varchar sizing functions.\n> \n> Can we go to int32 on atttypmod? I'll try to break it up into two\n> sub-fields to implement numeric().\n> \n> btw, anyone know of a package for variable- and large-precision\n> numerics? I have looked at the GNU gmp package, but it looks to me that\n> it probably won't fit into the db backend without lots of overhead. Will\n> probably try to use the int64 package in contrib for now...\n> \n> - Tom\n\nInt32 is fine with me. Or maybe uint32? 
Or maybe\n\nunion {\n u uint32;\n struct {\n h int16;\n l int16;\n }\n}\n\nOh no, it is happening again....\n\nLets just go with uint32.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Sun, 31 May 1998 16:49:05 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Re: FE/BE protocol revision patch" }, { "msg_contents": "> OK, I can change it, but it is not easy. Will take time.\n> > Can we go to int32 on atttypmod? I'll try to break it up into two\n> > sub-fields to implement numeric().\n\nI am planning on stripping out the atttypmod usage for string type input\nfunctions (that third parameter). \n\nThat was the wrong end to check, since it is the point at which storage\nhappens that things really need to be checked. Otherwise, no\nvalidation/verification can happen on expression results, only on\nconstant input values.\n\nDon't know if ignoring that area makes things any easier for you...\n\n - Tom\n", "msg_date": "Mon, 01 Jun 1998 06:19:44 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [DOCS] Re: FE/BE protocol revision patcht" }, { "msg_contents": "> \n> > OK, I can change it, but it is not easy. Will take time.\n> > > Can we go to int32 on atttypmod? I'll try to break it up into two\n> > > sub-fields to implement numeric().\n> \n> I am planning on stripping out the atttypmod usage for string type input\n> functions (that third parameter). \n> \n> That was the wrong end to check, since it is the point at which storage\n> happens that things really need to be checked. 
Otherwise, no\n> validation/verification can happen on expression results, only on\n> constant input values.\n> \n> Don't know if ignoring that area makes things any easier for you...\n> \n\nI will make that change to whatever the current tree is at the time. \nMay not do it for a few days or a week, but will do it all at once.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 1 Jun 1998 10:15:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Re: FE/BE protocol revision patcht" }, { "msg_contents": "> > Well, I didn't comment because I haven't yet worked out the issues. But\n> > I'll go with Bruce's and David's inclination that we should shoehorn\n> > numeric()/decimal() into something like the existing atttypmod field\n> > rather than trying for \"the general solution\" which btw isn't obvious\n> > how to do.\n> > \n> > However, I don't think that 16 bits vs 32 bits is an issue at all\n> > performance-wise, and I'd to see atttypmod go to 32 bits just to give a\n> > little breathing room. I'm already using int32 to send attypmod to the\n> > new char/varchar sizing functions.\n> > \n> > Can we go to int32 on atttypmod? I'll try to break it up into two\n> > sub-fields to implement numeric().\n> > \n> > btw, anyone know of a package for variable- and large-precision\n> > numerics? I have looked at the GNU gmp package, but it looks to me that\n> > it probably won't fit into the db backend without lots of overhead. Will\n> > probably try to use the int64 package in contrib for now...\n> > \n> > - Tom\n> \n> Int32 is fine with me. Or maybe uint32? Or maybe\n> \n> union {\n> u uint32;\n> struct {\n> h int16;\n> l int16;\n> }\n> }\n> \n> Oh no, it is happening again....\n> \n> Lets just go with uint32.\n\nCan't be unsigned. 
-1 must be a valid value.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 16 Jun 1998 01:54:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Re: FE/BE protocol revision patch" }, { "msg_contents": "> > \n> > > > > Comments? I am willing to change it.\n> > > >\n> > > > An int 4 atttypmod should be fine. A bit of overhead perhaps, but \n> > > > who quibles about a few bytes these days? And, perhaps there is a \n> > > > use.\n> > > Yea, no one commented, so it stays an int2 until someone finds a type\n> > > that needs more than a two-byte atttypmod. Right now, it fits the \n> > > need.\n> > \n> > Well, I didn't comment because I haven't yet worked out the issues. But\n> > I'll go with Bruce's and David's inclination that we should shoehorn\n> > numeric()/decimal() into something like the existing atttypmod field\n> > rather than trying for \"the general solution\" which btw isn't obvious\n> > how to do.\n> > \n> > However, I don't think that 16 bits vs 32 bits is an issue at all\n> > performance-wise, and I'd to see atttypmod go to 32 bits just to give a\n> > little breathing room. I'm already using int32 to send attypmod to the\n> > new char/varchar sizing functions.\n> \n> OK, I can change it, but it is not easy. Will take time.\n> \n> > \n> > Can we go to int32 on atttypmod? I'll try to break it up into two\n> > sub-fields to implement numeric().\n> > \n> > btw, anyone know of a package for variable- and large-precision\n> > numerics? I have looked at the GNU gmp package, but it looks to me that\n> > it probably won't fit into the db backend without lots of overhead. 
Will\n> > probably try to use the int64 package in contrib for now...\n> > \n> > - Tom\n> > \n\nOK, I have made the change so we now have 32-bit atttypmod fields. We\nwere already passing it as int32 to the clients, so no changes to libpq.\n\nThis change will required all developers to do a new initdb.\n\nI had forgotten how much code there is for atttypmod. I did it in\nstages, but forgot how intertwined it is in much of the code. I used a\nconsistent naming convention, so it was very easy to change all\nreferences very quickly.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 12 Jul 1998 18:04:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Re: FE/BE protocol revision patcht" } ]
[ { "msg_contents": "\nHi...\n\n\thas anyone had any experience setting this up? I've just gone\nthrough every shred of documentation I can find, and *believe* I have it\nsetup right, but its not working as expected...\n\n\tanyone with *experience* in this...?\n\n\n\n", "msg_date": "Wed, 20 May 1998 14:28:16 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "AnonCVS ..." } ]
[ { "msg_contents": "Quick note... Just to say that I found a bug in postgres 6.3.2 that I just this\nminute downloaded from the ftp site... It doesn't compile under AIX 4.2.1 with\nthe latest C for AIX ver 3.1.4\n \nIt's only aminor problem, some of the variables in pqcomm.c are declared as\nint, and being passed to functions that expect a long * variable (Actually the\nfunction paramaters are declared as size_t).\n \nThe fix is to change the addrlen variable used on line 673 to a size_t instead\nof an int, and also for the len variable used on line 787.\n \nSorry... No diffs... No time, and I dont' subscribe to the list... I just like\npostgres (Maybe I'll subscribe one day... Too busy at the moment).\n \nTIA.\n \nhamish.\n \n \n----------------------- External Addressing ----------------------\n-----------------------------------------------------------------------------\n \n \n", "msg_date": "20 May 1998 20:52:06 Z", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Bug in postgresql-6.3.2" } ]
[ { "msg_contents": "Originally the JDBC driver had these small writes, because the libpq code\nhad them. This had the downside that there was a lot of small packets being\nsent to the backend, slowing things up a lot.\n\nAnyhow, I've been supplied with a patch that does this buffering, and while\ntesting this, it does improve things.\n\nI haven't posted it to the patches list yet as another part of the driver is\nhaving some problems at the moment.\n\nThis doesn't help with SSL or KB5 though, as Java doesn't have them (yet).\n\n--\nPeter T Mount, [email protected], [email protected]\nJDBC FAQ: http://www.retep.org.uk/postgres\nPlease note that this is from my works email. If you reply, please cc my\nhome address.\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On\nBehalf Of Matthew N. Dodd\nSent: Wednesday, May 20, 1998 7:22 PM\nTo: Tom Ivar Helbekkmo\nCc: Bruce Momjian; [email protected]\nSubject: Re: [HACKERS] Kerberos 5 breakage.\n\n\nOn 20 May 1998, Tom Ivar Helbekkmo wrote:\n> > While Kerberos 5 authentication and authorization is nice, I'd like to\n> > investigate the possibility of adding encryption as well.\n>\n> Absolutely. This should be specified in the pg_hba.conf file, so that\n> you could demand Kerberos authentication plus encryption for sensitive\n> data. When not demanded by pg_hba.conf, it should be a client option.\n\nI read through the SSL patch and am convinced that we need a little more\ncoherent arrangment of interface methods. Allowing direct manipulation of\nthe file descriptors is really going to make adding stuff like this (SSL,\nKerb5 encryption etc) next to impossible.\n\nTake a look at Apache 1.2 vx. 1.3 for an idea of what I'm talking about.\n\nAlso, allowing writes of single characters is bad; you incur a context\nswitch each write. 
The client and server should be writing things into\nlargish buffers and writing those instead of doing small writes.\n\nThe existence of the following scare me...\n\npqPutShort(int integer, FILE *f)\npqPutLong(int integer, FILE *f)\npqGetShort(int *result, FILE *f)\npqGetLong(int *result, FILE *f)\npqGetNBytes(char *s, size_t len, FILE *f)\npqPutNBytes(const char *s, size_t len, FILE *f)\npqGetString(char *s, size_t len, FILE *f)\npqPutString(const char *s, FILE *f)\npqGetByte(FILE *f)\npqPutByte(int c, FILE *f)\n\n(from src/backend/libpq/pqcomprim.c)\n\nA select based I/O buffering system would seem to be in order here...\n\nI'd like to see these routines passing around a connection information\nstruct that contains the file handle and other connection options as well.\n\nI'll not bother beating on this anymore as I'm unlikely to cover anything\nthat has not already been covered. Regardless, this issue needs some\ncritical analysis before any code is changed.\n\nFailing to address this issue really raises the cost of adding stuff like\nSSL and Kerberos5 encryption.\n\nTake a look at src/main/buff.c and src/include/buff.h in Apache 1.3 at how\nthey use their 'struct buff_struct' for some interesting examples.\n\n/*\n Matthew N. Dodd\t\t| A memory retaining a love you had for life\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\n*/\n\n", "msg_date": "Thu, 21 May 1998 08:07:46 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Kerberos 5 breakage." } ]
[ { "msg_contents": "Hi all,\n\n Can anybody pinpoint where is wrong on lo_write(and possibly\nlo_read)? I tried to lo_import(and lo_write) few MegaBytes of data\nwith large_object but it was painfully SLOW! So I simplified the\nproblem, just test \"testlo.c\" under test/examples directory\n\n with few MegaBytes of file X.dat\n\n and printout debug messages like this with lo_import ()->\nimportFile() in main()\n\n IT CANNOT LO_WRITE > 640KB !!! WHY?\n\ntestlo.c-------------------\n...\nin importFile()\n{\n...\n// with BUFSIZE -> 1024 or 8*1024 or 64*1024 i.e. 1K, 8K, 64K, ...\n /*\n * read in from the Unix file and write to the inversion file\n */\n while ((nbytes = read(fd, buf, BUFSIZE)) > 0)\n {\n printf(\"COUNT [%d] %d Kb\\n\", ++count, count*BUFSIZE/1024);\n tmp = lo_write(conn, lobj_fd, buf, nbytes);\n...\nif (tmp < nbytes)\n {\n fprintf(stderr, \"error while reading \\\"%s\\\"\", filename);\n }\n printf(\"READ %d , WRITE %d\\n\", nbytes, tmp);\n_________________________\n\n./testlo template1 X.dat out\n\n....\nimporting file \"in\" ...\nCOUNT [1] 0 Kb\nREAD 65536 , WRITE 65536\nCOUNT [2] 64 Kb\n...\nREAD 65536 , WRITE 65536\nCOUNT [9] 512 Kb\nerror while reading \"in\"READ 65536 , WRITE -1\nCOUNT [10] 576 Kb\nerror while reading \"in\"READ 65536 , WRITE -1\nCOUNT [11] 640 Kb\n________________________________________________\n\nand \"strace ./testlo template1 in out\" shows\n\n...\nwrite(3, \"F \\0\", 3) = 3\nwrite(3, \"\\0\\0\\3\\273\\0\\0\\0\\2\\0\\0\\0\\4\\0\\0\\0\"..., 1024) = 1024\nwrite(3, \"\\331\\5\\200\\t\\317\\t\\331]\\334\\331E\"..., 64512) = 64512\nwrite(3, \"\\213\\r\\30Lv\\10I\\211\\312\\1\\322\\1\\312\"..., 20) = 20\nread(4, \"EERROR: no empty local buffer.\\n\"..., 1024) = 33\nwrite(2, \"error while reading \\\"in\\\"\", 24error while reading \"in\") = 24\nread(5, \"\\0\\0\\0\\0\\1\\310\\272\\300\\276\\345\\t\"..., 65536) = 65536\nwrite(1, \"COUNT [10] 576 Kb\\n\", 20COUNT [10] 576 Kb\n) = 20\nwrite(3, \"F \\0\", 3) = 3\nwrite(3, 
\"\\0\\0\\3\\273\\0\\0\\0\\2\\0\\0\\0\\4\\0\\0\\0\"..., 1024) = 1024\nwrite(3, \"\\'j\\4j\\4h\\4\\273j\\10\\241\\30Lv\\10H\"..., 64512) = 64512\nwrite(3, \"\\331\\311\\331]\\370\\331E\\370\\336\\311\"..., 20) = 20\nread(4, \"EERROR: cannot read block 62 of\"..., 1024) = 45\nwrite(2, \"error while reading \\\"in\\\"\", 24error while reading \"in\") = 24\nread(5, \"u5hp\\366v\\10\\350pk\\374\\377\\203\\304\"..., 65536) = 65536\nwrite(1, \"COUNT [11] 640 Kb\\n\", 20COUNT [11] 640 Kb\n...\n___________________________________________________________\n\nSo I lo_open/lo_lseek(SEEK_END)/lo_write/lo_close() INSIDE of the loop,\nit solved the problem, which means MEM LEAKS or buffer size problem?\nAnd just\n lobj_fd = lo_open(conn, lobjId, INV_WRITE| INV_READ);\n...\n /*\n * read in from the Unix file and write to the inversion file\n */\n while ((nbytes = read(fd, buf, BUFSIZE)) > 0)\n {\n printf(\"COUNT [%d] %d Kb\\n\", ++count, count*BUFSIZE/1024);\n o_lseek(conn, lobj_fd, 0, SEEK_END);\n tmp = lo_write(conn, lobj_fd, buf, nbytes);\n...\n\nAlso solved the problem... But I want reliable(lo_read/lo_write) large\nobject with BIG data.\n\nPlease Help me out.\n\nC.S.Park\n\n", "msg_date": "Thu, 21 May 1998 18:09:54 +0900", "msg_from": "\"Park, Chul-Su\" <[email protected]>", "msg_from_op": true, "msg_subject": "[QUESTIONS] lo_write cannot > 640Kb? memory leaks?" }, { "msg_contents": "> Hi all,\n> \n> Can anybody pinpoint where is wrong on lo_write(and possibly\n> lo_read)? I tried to lo_import(and lo_write) few MegaBytes of data\n> with large_object but it was painfully SLOW! So I simplified the\n> problem, just test \"testlo.c\" under test/examples directory\n> \n> with few MegaBytes of file X.dat\n> \n> and printout debug messages like this with lo_import ()->\n> importFile() in main()\n> \n> IT CANNOT LO_WRITE > 640KB !!! WHY?\n> \n> testlo.c-------------------\n> ...\n> in importFile()\n> {\n> ...\n> // with BUFSIZE -> 1024 or 8*1024 or 64*1024 i.e. 
1K, 8K, 64K, ...\n> /*\n> * read in from the Unix file and write to the inversion file\n> */\n> while ((nbytes = read(fd, buf, BUFSIZE)) > 0)\n> {\n> printf(\"COUNT [%d] %d Kb\\n\", ++count, count*BUFSIZE/1024);\n> tmp = lo_write(conn, lobj_fd, buf, nbytes);\n> ...\n> if (tmp < nbytes)\n> {\n> fprintf(stderr, \"error while reading \\\"%s\\\"\", filename);\n> }\n> printf(\"READ %d , WRITE %d\\n\", nbytes, tmp);\n> _________________________\n> \n> ./testlo template1 X.dat out\n> \n> ....\n> importing file \"in\" ...\n> COUNT [1] 0 Kb\n> READ 65536 , WRITE 65536\n> COUNT [2] 64 Kb\n> ...\n> READ 65536 , WRITE 65536\n> COUNT [9] 512 Kb\n> error while reading \"in\"READ 65536 , WRITE -1\n> COUNT [10] 576 Kb\n> error while reading \"in\"READ 65536 , WRITE -1\n> COUNT [11] 640 Kb\n> ________________________________________________\n> \n> and \"strace ./testlo template1 in out\" shows\n> \n> ...\n> write(3, \"F \\0\", 3) = 3\n> write(3, \"\\0\\0\\3\\273\\0\\0\\0\\2\\0\\0\\0\\4\\0\\0\\0\"..., 1024) = 1024\n> write(3, \"\\331\\5\\200\\t\\317\\t\\331]\\334\\331E\"..., 64512) = 64512\n> write(3, \"\\213\\r\\30Lv\\10I\\211\\312\\1\\322\\1\\312\"..., 20) = 20\n> read(4, \"EERROR: no empty local buffer.\\n\"..., 1024) = 33\n> write(2, \"error while reading \\\"in\\\"\", 24error while reading \"in\") = 24\n> read(5, \"\\0\\0\\0\\0\\1\\310\\272\\300\\276\\345\\t\"..., 65536) = 65536\n> write(1, \"COUNT [10] 576 Kb\\n\", 20COUNT [10] 576 Kb\n> ) = 20\n> write(3, \"F \\0\", 3) = 3\n> write(3, \"\\0\\0\\3\\273\\0\\0\\0\\2\\0\\0\\0\\4\\0\\0\\0\"..., 1024) = 1024\n> write(3, \"\\'j\\4j\\4h\\4\\273j\\10\\241\\30Lv\\10H\"..., 64512) = 64512\n> write(3, \"\\331\\311\\331]\\370\\331E\\370\\336\\311\"..., 20) = 20\n> read(4, \"EERROR: cannot read block 62 of\"..., 1024) = 45\n> write(2, \"error while reading \\\"in\\\"\", 24error while reading \"in\") = 24\n> read(5, \"u5hp\\366v\\10\\350pk\\374\\377\\203\\304\"..., 65536) = 65536\n> write(1, \"COUNT [11] 640 Kb\\n\", 20COUNT [11] 640 Kb\n> 
...\n> ___________________________________________________________\n> \n> So I lo_open/lo_lseek(SEEK_END)/lo_write/lo_close() INSIDE of the loop,\n> it solved the problem, which means MEM LEAKS or buffer size problem?\n> And just\n> lobj_fd = lo_open(conn, lobjId, INV_WRITE| INV_READ);\n> ...\n> /*\n> * read in from the Unix file and write to the inversion file\n> */\n> while ((nbytes = read(fd, buf, BUFSIZE)) > 0)\n> {\n> printf(\"COUNT [%d] %d Kb\\n\", ++count, count*BUFSIZE/1024);\n> o_lseek(conn, lobj_fd, 0, SEEK_END);\n> tmp = lo_write(conn, lobj_fd, buf, nbytes);\n> ...\n> \n> Also solved the problem... But I want reliable(lo_read/lo_write) large\n> object with BIG data.\n> \n> Please Help me out.\n> \n> C.S.Park\n\nHave you tried increasing the number of postgres buffer cache buffers? I am\nspeculating that the lo_write() is using these buffers but perhaps not\nflushing them. As the default configuration has way too few buffers anyway\nthis might be the problem. Could you try increasing the buffers to say 1024 or\nso and rerun your test and post the results?\n\nThanks,\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Thu, 21 May 1998 11:10:07 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [QUESTIONS] lo_write cannot > 640Kb? memory leaks?" }, { "msg_contents": "\nHi,\n\n> Have you tried increasing the number of postgres buffer cache buffers? I am\n> speculating that the lo_write() is using these buffers but perhaps not\n> flushing them. As the default configuration has way too few buffers anyway\n> this might be the problem. 
Could you try increasing the buffers to say 1024 or\n> so and rerun your test and post the results?\n>\n> -dg\n\n\tThanks for your reply, but as I posted I tested many different size of \nbuffers in lotest.c, e.g. 1Kb, 4Kb, 8Kb, 9Kb, 10Kb, 16Kb, 32Kb, 64Kb, 640Kb, \n1Mb, 10Mb, ... or your \"cache\" or \"buffer\" size have some diffferent meaning?\n\nbut lo_write cannot exceed 640kb always, only open/write(less than \n640kb)/close looping OR open LOOP lo_write(buffer<640Kb)) , lo_lseek to \nSEEK_END, write, ... seems to solve this problem(as in my last email), and I \nfound that 16Kb buffer seems to be the optimal buffer size for the SPEED(not \n8Kb) for network & socket connections why?...\n\nand I think that above lo_lseek write loop is not a real solution, because \nlo_read seems does not have such defects(but 16kb buffer seems to be the \noptimal size also), there must be some problems in lo_write or buffer \nmanaging... does anybody have some guess or fixes?\n\nc.s.park\n\n\n", "msg_date": "Fri, 22 May 1998 05:47:55 +0900", "msg_from": "Chul Su Park <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [QUESTIONS] lo_write cannot > 640Kb? memory leaks? " }, { "msg_contents": "> Hi,\n> \n> > Have you tried increasing the number of postgres buffer cache buffers? I am\n> > speculating that the lo_write() is using these buffers but perhaps not\n> > flushing them. As the default configuration has way too few buffers anyway\n> > this might be the problem. Could you try increasing the buffers to say 1024 or\n> > so and rerun your test and post the results?\n> >\n> > -dg\n> \n> \tThanks for your reply, but as I posted I tested many different size of \n> buffers in lotest.c, e.g. 1Kb, 4Kb, 8Kb, 9Kb, 10Kb, 16Kb, 32Kb, 64Kb, 640Kb, \n> 1Mb, 10Mb, ... or your \"cache\" or \"buffer\" size have some diffferent meaning?\n\nYes, my \"cache\" or \"buffer\" size has a different meaning. 
I am referring to\nthe size of the postgres buffer pool which can be changed by a command line\noption when you start the postmaster. I forget what the exact option is but\nit is something like \"-o -B <number_of_buffers>\". It is documented I think.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Thu, 21 May 1998 14:13:29 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [QUESTIONS] lo_write cannot > 640Kb? memory leaks?" } ]
[ { "msg_contents": "Hi all,\n\n When I try testlo.c with some modifications, I hit \"Ctrl-C\" key\nsometimes OR program crash abnormally I got messages when I try to\nconnect a db , see below. But all other db seems to be working,\nin such case is there any way to revive server without \"restarting\" by\nkill -TERM xxx and restart the postmaster? Because the other db is very\nfrequently used by other people! I guess that such problem is\ndue to the \"large object\" or locks(I tried to remove pg_vlock, vacuum\n..).\n\n The only way is restarting the postmaster?\n\nC.S.Park\n\nAfter crashing \"testlo.c\" with incorrect treatment on a large object\n\n> psql -h xxx testdb\ntestdb=> \\dt\nPQexec() -- Request was sent to backend, but backend closed the channel\nbefore responding.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\n\nand I could not see any tables... but usual sql queris was possible! on\ntestdb such as create table, select...\ne.g.\nafter crashing my program(testlo.c try to connect \"testdb\")\n\n> psql -h xxx testdb\ntestdb=> \\dt\nPQexec() -- Request was sent to backend, but backend closed the channel\nbefore responding.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\ntestdb=>create table t (i int4);\nPQexec() -- There is no connection to the backend.\ntestdb=>\\c testdb <- reconnect\ntestdb=>select * from t where int4mul(1,i) > 4;\ni\n-\n(0 rows)\ntestdb=>\\dt\nPQexec() -- Request was sent to backend, but backend closed the channel\nbefore responding.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\n\n", "msg_date": "Fri, 22 May 1998 07:20:30 +0900", "msg_from": "\"Park, Chul-Su\" <[email protected]>", "msg_from_op": true, "msg_subject": "[QUESTION] backend closed the channel ... after crash usr prog,\n\thow can I fix?" } ]
[ { "msg_contents": "Got no reply on \"questions\". Someone here may want to\nknow this...\n\nI think I may have uncovered an error in the parser. The \nfollowing is the simplest example that shows the problem.\nMaybe a counter needs to be reset by 'union' or checked\nafter select not statement. I would like to use this syntax\nin my libpq program. Is this a bug? Is it already known?\n\nWould someone please e-mail me the syntax for the\n\"explicit cast\" the system wants\n\nI am using 6.3.2 on an Ultra SPARC. The error occurs on\na Linux RH50 Intel system too.\n\nI think the following should work but does not:\n\n testdb=> select 'a' as X \n testdb-> union\n testdb-> select 'b' as X;\n NOTICE: there is more than one operator < for types\n NOTICE: unknown and unknown. You will have to retype this query\n ERROR: using an explicit cast\n\nNotice that this does work\n \n testdb=> select 'b' as X;\n x\n -\n b\n (1 row)\n\nAnd this works too:\n\n testdb=> select 1 as X\n testdb-> union\n testdb-> select 2 as X;\n x\n -\n 1\n 2\n (2 rows)\n\n\n-- \n--Chris Albertson\n\n [email protected] Voice: 626-351-0089 X127\n Logicon RDA, Pasadena California Fax: 626-351-0699\n", "msg_date": "Thu, 21 May 1998 15:39:14 -0700", "msg_from": "Chris Albertson <[email protected]>", "msg_from_op": true, "msg_subject": "Error in parser with UNIONS." }, { "msg_contents": "> I think I may have uncovered an error in the parser. The\n> following is the simplest example that shows the problem.\n> Maybe a counter needs to be reset by 'union' or checked\n> after select not statement. I would like to use this syntax\n> in my libpq program. Is this a bug? Is it already known?\n\nNot already known, and it is a feature for now. 
I _should_ be able to\nget it to work in v6.4, since I have already made changes elsewhere to\ndo a better job of guessing types in underspecified queries.\n\nFor example, in v6.3.2 the following query does not work:\n\npostgres=> select 'a' || 'b' as \"Concat\";\nConcat\n------\nab\n(1 row)\n\nThe underlying reason for the problem is Postgres' conservative approach\nto typing and type coersion. I've made changes to make it a bit more\nthorough in its matching attempts, and will look at this case soon.\n\n> Would someone please e-mail me the syntax for the\n> \"explicit cast\" the system wants\n\npostgres=> select text 'a' as X\npostgres-> union\npostgres-> select text 'b';\nx\n-\na\nb\n(2 rows)\n\nNote that this is the SQL92-style of specification; you can also use\n\"'a'::text\" rather than \"text 'a'\". This example was run on something\nsimilar to the current development source tree, but I would expect\nv6.3.2 to behave the same way.\n\n - Tom\n", "msg_date": "Fri, 22 May 1998 01:26:03 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Error in parser with UNIONS." }, { "msg_contents": "> \n> Got no reply on \"questions\". Someone here may want to\n> know this...\n> \n> I think I may have uncovered an error in the parser. The \n> following is the simplest example that shows the problem.\n> Maybe a counter needs to be reset by 'union' or checked\n> after select not statement. I would like to use this syntax\n> in my libpq program. Is this a bug? Is it already known?\n> \n> Would someone please e-mail me the syntax for the\n> \"explicit cast\" the system wants\n> \n> I am using 6.3.2 on an Ultra SPARC. The error occurs on\n> a Linux RH50 Intel system too.\n> \n> I think the following should work but does not:\n> \n> testdb=> select 'a' as X \n> testdb-> union\n> testdb-> select 'b' as X;\n> NOTICE: there is more than one operator < for types\n> NOTICE: unknown and unknown. 
You will have to retype this query\n> ERROR: using an explicit cast\n> \n> Notice that this does work\n> \n> testdb=> select 'b' as X;\n> x\n> -\n> b\n> (1 row)\n> \n> And this works too:\n> \n> testdb=> select 1 as X\n> testdb-> union\n> testdb-> select 2 as X;\n> x\n> -\n> 1\n> 2\n> (2 rows)\n> \n\nThis caused because UNION removes duplicates, and to do that, it must\nsort, but the character constants can't be sorted because they could be\ntext, varchar, char(), etc. 6.4 will fix that with new auto-casting. \nFor now, us UNION ALL, which will not remove duplicates.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 21 May 1998 23:38:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Error in parser with UNIONS." }, { "msg_contents": "Made some progress:\n\npostgres=> select 1.2 as float8 union select 1;\nfloat8\n------\n 1\n 1.2\n(2 rows)\n\npostgres=> select text 'a' as text union select 'b';\ntext\n----\na\nb\n(2 rows)\n\nAt the moment I'm forcing the types of the union to match the types of\nthe first/top clause in the union:\n\npostgres=> select 1 as all_integers\npostgres-> union select '2.2'::float4 union select 3.3;\nall_integers\n------------\n 1\n 2\n 3\n(3 rows)\n\nThe better strategy might be to choose the \"best\" type of the bunch, but\nis more difficult because of the nice recursion technique used in the\nparser. However, it does work OK when selecting _into_ a table:\n\npostgres=> create table ff (f float);\nCREATE\npostgres=> insert into ff\npostgres-> select 1 union select '2.2'::float4 union select 3.3;\nINSERT 0 3\npostgres=> select * from ff;\n f\n----------------\n 1\n2.20000004768372\n 3.3\n(3 rows)\n\nComments??\n\n - Tom\n", "msg_date": "Fri, 22 May 1998 15:38:46 +0000", "msg_from": "\"Thomas G. 
Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Error in parser with UNIONS." }, { "msg_contents": "Tom Lane he say:\n> Made some progress:\n> \n> postgres=> select 1.2 as float8 union select 1;\n> float8\n> ------\n> 1\n> 1.2\n> (2 rows)\n> \n> postgres=> select text 'a' as text union select 'b';\n> text\n> ----\n> a\n> b\n> (2 rows)\n> \n> At the moment I'm forcing the types of the union to match the types of\n> the first/top clause in the union:\n> \n> postgres=> select 1 as all_integers\n> postgres-> union select '2.2'::float4 union select 3.3;\n> all_integers\n> ------------\n> 1\n> 2\n> 3\n> (3 rows)\n> \n> The better strategy might be to choose the \"best\" type of the bunch, but\n> is more difficult because of the nice recursion technique used in the\n> parser. However, it does work OK when selecting _into_ a table:\n> \n> postgres=> create table ff (f float);\n> CREATE\n> postgres=> insert into ff\n> postgres-> select 1 union select '2.2'::float4 union select 3.3;\n> INSERT 0 3\n> postgres=> select * from ff;\n> f\n> ----------------\n> 1\n> 2.20000004768372\n> 3.3\n> (3 rows)\n> \n> Comments??\n> \n\nGreat stuff!\n-dg\n", "msg_date": "Sat, 30 May 1998 22:29:43 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Error in parser with UNIONS." } ]
[ { "msg_contents": "About a month ago I wrote\n> Yes, if anything were to be done along this line it'd also make sense\n> to revise libpgtcl. I think it ought to work more like this:\n> (a) the idle loop is invoked while waiting for a query response\n> (so that a pg_exec statement behaves sort of like \"tkwait\");\n> (b) a \"listen\" command is sent via a new pg_listen statement that\n> specifies a callback command string. Subsequent notify responses\n> can occur whenever a callback is possible.\n> I suppose (a) had better be an option to pg_exec statements so that\n> we don't break existing Tcl code...\n\nI was a little startled to discover that pg_listen already exists\n(it's not documented!). But the way things are currently set up,\nthe callbacks specified by pg_listen are executed only when the Tcl\nscript invokes the also-undocumented pg_notifies command.\n\npg_notifies executes an empty query (which is no longer necessary) and\nthen looks for notify responses. A practical Tcl application would have\nto execute pg_notifies often, perhaps every few seconds from an \"after\"\ntimer event.\n\nWhat I'd like to do is eliminate pg_notifies and define pg_listen\ncallbacks as happening automatically whenever Tcl can fire event\ncallbacks (ie, at the idle loop).\n\nThere's some risk of breaking existing applications, since the apps\nmight not be prepared for pg_listen callbacks occurring except at the\nspecific time they execute pg_notifies. But that doesn't seem really\nlikely to be a problem. Besides, since both these commands are\nundocumented, I imagine not very many libpgtcl applications use them.\n(A quick search of the archives turned up only a message from Massimo\nDal Zotto about this topic, and he seemed to agree that getting rid of\npg_notifies would be better.)\n\nAny comments?\n\nBTW, I'm not currently planning to tackle the other point about providing\nan asynchronous pg_exec capability in libpgtcl. 
I've concluded that the\nTcl code I'm planning to write wouldn't use it, at least not soon; so\nI'll leave that project for another person or another day.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 May 1998 19:13:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Time to fix libpgtcl for async NOTIFY" }, { "msg_contents": "> There's some risk of breaking existing applications, since the apps\n> might not be prepared for pg_listen callbacks occurring except at the\n> specific time they execute pg_notifies. But that doesn't seem really\n> likely to be a problem. Besides, since both these commands are\n> undocumented, I imagine not very many libpgtcl applications use them.\n> (A quick search of the archives turned up only a message from Massimo\n> Dal Zotto about this topic, and he seemed to agree that getting rid of\n> pg_notifies would be better.)\n> \n> Any comments?\n\nYep, get rid of the old stuff. I am sure people didn't use it because\nof the performance problem. Your cleanup will make it use-able.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 22 May 1998 00:36:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Time to fix libpgtcl for async NOTIFY" }, { "msg_contents": "> \n> > There's some risk of breaking existing applications, since the apps\n> > might not be prepared for pg_listen callbacks occurring except at the\n> > specific time they execute pg_notifies. But that doesn't seem really\n> > likely to be a problem. 
Besides, since both these commands are\n> > undocumented, I imagine not very many libpgtcl applications use them.\n> > (A quick search of the archives turned up only a message from Massimo\n> > Dal Zotto about this topic, and he seemed to agree that getting rid of\n> > pg_notifies would be better.)\n> > \n> > Any comments?\n> \n> Yep, get rid of the old stuff. I am sure people didn't use it because\n> of the performance problem. Your cleanup will make it use-able.\n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n> \n\nThe old stuff works fine, at least for me!\n\nI'm using it, and my users like it very much. It suffers from performance\nproblems but they are caused by bottlenecks on pg_listener and not by\nthe pg_notifies loop in tcl (which is done every 1 second by 30+ clients).\nI'm probably the only one who uses this feature, partly because I never\nfound the time to write down some documentation about it (my fault), so\nany change you make shouldn't break many applications.\n\nIf you are making changes to the tcl listen code please consider adding the\nfollowing missing feature:\n\n- the possibility to temporarily suspend callbacks for a particular listen\n or for all listens. This would help to avoid the above possible problem.\n I currently do it with ad-hoc code in the application but having it done\n by a single call to libpgtcl would be better.\n\n- an option to get the callback associated to a particular listen or to all\n listens\n\n- better support for unlistening. 
There is an unlisten function in my\n contrib code but it should be integrated into the backend with a real\n UNLISTEN command.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto e-mail: [email protected] |\n| Via Marconi, 141 phone: ++39-461-534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Tue, 26 May 1998 23:36:20 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Time to fix libpgtcl for async NOTIFY" }, { "msg_contents": "Massimo Dal Zotto <[email protected]> writes:\n> If you are making changes to the tcl listen code please consider adding the\n> following missing feature:\n> - the possibility to temporarily suspend callbacks for a particular listen\n> or for all listens. This would help to avoid the above possible problem.\n\nThat seems reasonable. I presume that if a notify comes in while the\ncallback is suspended, we should fire the callback after it gets\nreenabled? What happens if multiple notifies for the same name arrive\nwhile suspended --- one callback afterwards, or many? (I think I'd vote\nfor only one, but...)\n\n> - an option to get the callback associated to a particular listen or to all\n> listens\n\nPerhaps also reasonable. In the code I submitted last night, it is\npossible for several different Tcl interpreters to be listening within\none client process. I think that an interpreter should only be able to\nask about its own callbacks, however --- anything else is a security\nviolation that defeats the point of multiple interpreters.\n\n(Your \"suspend callbacks\" feature would likewise have to be interpreter-\nlocal.)\n\n> - better support for unlistening. 
There is an unlisten function in my\n> contrib code but it should be integrated into the backend with a real\n> UNLISTEN command.\n\nI agree with that, and will do the libpq and libpgtcl changes if someone\nelse will implement UNLISTEN on the backend side. I'm not competent to\ntwiddle the backend however.\n\n\t\t\tregards, tom lane\n\nPS: I'm on vacation till next week. Don't expect any fast responses.\n", "msg_date": "Tue, 26 May 1998 19:24:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Time to fix libpgtcl for async NOTIFY " } ]
[ { "msg_contents": "\nI have seen a few patches fly by on the list, but when I checked the\nlast snapshot was dated May 4th. Unhappily, CVSup is not working for me\nright now. Is there a later snapshot, or should I just work with this one?\n\nOh, and who should I really direct this kind of \"site admin\" question to\nanyway?\n\nThanks\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Fri, 22 May 1998 00:33:16 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Current sources?" }, { "msg_contents": "On Fri, 22 May 1998, David Gould wrote:\n\n> \n> I have seen a few patches fly by on the list, but when I checked the\n> last snapshot was dated May 4th. Unhappily, CVSup is not working for me\n> right now. Is there a later snapshot, or should I just work with this one?\n\n\tsnapshot's only happen once a week unless in a BETA freeze...what\nis wrong with CVSup for you though?\n\n>Oh, and who should I really direct this kind of \"site admin\" question to\n> anyway?\n\n\tMe...but the list works too, since then everyone knows what is\ngoing on (for those new *grin*)\n\n\n\n\n", "msg_date": "Fri, 22 May 1998 07:54:11 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> \n> \n> I have seen a few patches fly by on the list, but when I checked the\n> last snapshot was dated May 4th. Unhappily, CVSup is not working for me\n> right now. Is there a later snapshot, or should I just work with this one?\n> \n> Oh, and who should I really direct this kind of \"site admin\" question to\n> anyway?\n> \n\nMarc, Mr. 
scrappy, [email protected].\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 22 May 1998 10:17:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "At 7:54 AM 98.5.22 -0400, The Hermit Hacker wrote:\n>On Fri, 22 May 1998, David Gould wrote:\n>\n>> \n>> I have seen a few patches fly by on the list, but when I checked the\n>> last snapshot was dated May 4th. Unhappily, CVSup is not working for me\n>> right now. Is there a later snapshot, or should I just work with this one?\n>\n>\tsnapshot's only happen once a week unless in a BETA freeze...what\n>is wrong with CVSup for you though?\n\nOnce a week? the snapshot hasn't been updated for at least\n2 weeks now.\n--\nTatsuo Ishii\[email protected]\n--\nTatsuo Ishii\[email protected]\n\n", "msg_date": "Sat, 23 May 1998 00:29:55 +0900", "msg_from": "[email protected] (Tatsuo Ishii)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n\n> \tsnapshot's only happen once a week unless in a BETA freeze...what\n> is wrong with CVSup for you though?\n\nCVSup can only be used on a few platforms, and is a hell of a big job\nto port to new ones, because you have to begin by porting a Modula-3\ncompiler. Decidedly non-trivial.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "22 May 1998 21:30:47 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" 
}, { "msg_contents": "On Sat, 23 May 1998, Tatsuo Ishii wrote:\n\n> At 7:54 AM 98.5.22 -0400, The Hermit Hacker wrote:\n> >On Fri, 22 May 1998, David Gould wrote:\n> >\n> >> \n> >> I have seen a few patches fly by on the list, but when I checked the\n> >> last snapshot was dated May 4th. Unhappily, CVSup is not working for me\n> >> right now. Is there a later snapshot, or should I just work with this one?\n> >\n> >\tsnapshot's only happen once a week unless in a BETA freeze...what\n> >is wrong with CVSup for you though?\n> \n> Once a week? the snapshot hasn't been updated for at least\n> 2 weeks now.\n\n\tMy error...hard coded tar into the script, vs letting it take it\nfrom the path...fixed, with a snapshot generated later this afternoon...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 22 May 1998 20:17:36 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "On 22 May 1998, Tom Ivar Helbekkmo wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> \n> > \tsnapshot's only happen once a week unless in a BETA freeze...what\n> > is wrong with CVSup for you though?\n> \n> CVSup can only be used on a few platforms, and is a hell of a big job\n> to port to new ones, because you have to begin by porting a Modula-3\n> compiler. Decidedly non-trivial.\n\n\tI've tried to get anonCVS working, and am still working on it,\nbut, for some odd reason, it isn't working as expected, even with\nfollowing the instructions laid out in several FAQs :(\n\n\tTry this:\n\ncvs -d :pserver:[email protected]:/usr/local/cvsroot login\n\t- password of 'postgresql'\ncvs -d :pserver:[email protected]:/usr/local/cvsroot co pgsql\n\n\tAnd tell me what it comes up with...it might just be me *shrug*\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 22 May 1998 20:19:23 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "At 8:17 PM 98.5.22 -0300, The Hermit Hacker wrote:\n>On Sat, 23 May 1998, Tatsuo Ishii wrote:\n\n>> >\tsnapshot's only happen once a week unless in a BETA freeze...what\n>> >is wrong with CVSup for you though?\n>> \n>> Once a week? the snapshot hasn't been updated for at least\n>> 2 weeks now.\n>\n>\tMy error...hard coded tar into the script, vs letting it take it\n>from the path...fixed, with a snapshot generated later this afternoon...\n\nThanks!\n--\nTatsuo Ishii\[email protected]\n\n", "msg_date": "Sat, 23 May 1998 08:45:31 +0900", "msg_from": "[email protected] (Tatsuo Ishii)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n\n> \tTry this:\n> \n> cvs -d :pserver:[email protected]:/usr/local/cvsroot login\n> \t- password of 'postgresql'\n> cvs -d :pserver:[email protected]:/usr/local/cvsroot co pgsql\n> \n> \tAnd tell me what it comes up with...it might just be me *shrug*\n\nWorks fine for me, anyway. I'm running CVS 1.7.3 over RCS 5, and it's\npulling the PostgreSQL distribution in as I type. For some reason all\nthe files are mode 666 (directories are 755, as per UMASK), but that's\njust a minor nit I'll figure out or work around.\n\nIs logging in really necessary, though? This is the first time I ever\nuse anonymous CVS, but I'd assumed it would \"just work\", without any\ninteractive specification of passwords...\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. 
--Niles Crane, \"Frasier\"\n", "msg_date": "23 May 1998 15:29:12 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "Tom Ivar Helbekkmo <[email protected]> writes:\n\n> Works fine for me, anyway. I'm running CVS 1.7.3 over RCS 5, and\n> it's pulling the PostgreSQL distribution in as I type.\n\nThe \"cvs checkout\" worked fine, and a \"cvs update\" afterwards scanned\nthe repository and confirmed I was up to date. Looks good! :-)\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "23 May 1998 16:26:21 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "On 23 May 1998, Tom Ivar Helbekkmo wrote:\n\n> Tom Ivar Helbekkmo <[email protected]> writes:\n> \n> > Works fine for me, anyway. I'm running CVS 1.7.3 over RCS 5, and\n> > it's pulling the PostgreSQL distribution in as I type.\n> \n> The \"cvs checkout\" worked fine, and a \"cvs update\" afterwards scanned\n> the repository and confirmed I was up to date. Looks good! :-)\n\n\tOdd...it was doing a 'second checkout' that screwed me, where i\ndidn't think it worked...try doing 'cvs -d <> checkout -P pgsql' and tell\nme what that does...\n\n\tAnd, yes, password is required...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 23 May 1998 13:33:43 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> Tom Ivar Helbekkmo <[email protected]> writes:\n>>>> Works fine for me, anyway. 
I'm running CVS 1.7.3 over RCS 5, and\n>>>> it's pulling the PostgreSQL distribution in as I type.\n\nI'm at the same point using cvs 1.9 and rcs 5.7. I also see the\nbug that individual files are checked out with permissions 666.\n(I've seen the same thing with Mozilla's anon CVS server, BTW.\nSo if it's a server config mistake rather than an outright CVS bug,\nthen at least Marc is in good company...)\n\n> \tOdd...it was doing a 'second checkout' that screwed me, where i\n> didn't think it worked...try doing 'cvs -d <> checkout -P pgsql' and tell\n> me what that does...\n\nI'd expect that to choke, because you've specified a nonexistent\nrepository...\n\nWhy would you need to do a second checkout anyway? Once you've got\na local copy of the CVS tree, cd'ing into it and saying \"cvs update\"\nis the right way to pull an update.\n\nBTW, \"cvs checkout\" is relatively inefficient across a slow link,\nbecause it has to pull down each file separately. The really Right Way\nto do this (again stealing a page from Mozilla) is to offer snapshot\ntarballs that are images of a CVS checkout done locally at the server.\nThen, people can pull a fresh fileset by downloading the tarball, and\nsubsequently use \"cvs update\" within that tree to grab updates.\nIn other words, the snapshot creation script should go something like\n\n\trm -rf pgsql\n\tcvs -d :pserver:[email protected]:/usr/local/cvsroot co pgsql\n\ttar cvfz postgresql.snapshot.tar.gz pgsql\n\nI dunno how you're doing it now, but the snapshot does not contain\nthe CVS control files so it can't be used as a basis for \"cvs update\".\n\n\t\t\tregards, tom lane\n\nPS: for cvs operations across slow links, the Mozilla guys recommend\n-z3 (eg, \"cvs -z3 update\") to apply gzip compression to the data being\ntransferred. 
I haven't tried this yet but it seems like a smart idea,\nespecially for a checkout.\n", "msg_date": "Sat, 23 May 1998 13:19:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? " }, { "msg_contents": "On Sat, 23 May 1998, Tom Lane wrote:\n\n> > \tOdd...it was doing a 'second checkout' that screwed me, where i\n> > didn't think it worked...try doing 'cvs -d <> checkout -P pgsql' and tell\n> > me what that does...\n> \n> I'd expect that to choke, because you've specified a nonexistent\n> repository...\n\n\t<> == :pserver:[email protected]:/usr/local/cvsroot *grin*\n\n> Why would you need to do a second checkout anyway? Once you've got\n> a local copy of the CVS tree, cd'ing into it and saying \"cvs update\"\n> is the right way to pull an update.\n\n\tMy understanding (and the way I've always done it) is that:\n\n\tcvs checkout -P pgsql\n\n\tWill remove any old files, update any existing, and bring in any\nnew...always worked for me...\n\n\n> PS: for cvs operations across slow links, the Mozilla guys recommend\n> -z3 (eg, \"cvs -z3 update\") to apply gzip compression to the data being\n> transferred. I haven't tried this yet but it seems like a smart idea,\n> especially for a checkout.\n\n\tGeez, sounds like someone with enough knowledge to build a\n'AnonCVS Instructions' web page? :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 23 May 1998 14:45:30 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? 
" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n\n> \tOdd...it was doing a 'second checkout' that screwed me, where\n> i didn't think it worked...try doing 'cvs -d <> checkout -P pgsql'\n> and tell me what that does...\n\nI assume \"<>\" means \"the same path as before\", in which case this is\njust doing a new checkout on top of an old one, right? I've got one\nof those running now, to see what happens, but I don't see why you\nwould want do do this. \"cvs update\" is the way it's supposed to be\ndone, once you've got the tree checked out. I know _that_ worked.\n\nRight now, the second \"cvs checkout\" is running, and there seems to be\nmuch communication going on, but no new downloading of files I already\nhave. Pretty much like during the \"cvs update\", that is. We'll see.\n\nAh, yes. Here we go. This looks just like the \"cvs update\" pass. In\nfact, it seems that a second checkout on top of an existing one turns\nout to behave just like a (more proper) update from within the tree.\n\nDone now, worked fine.\n\nI'm starting to look forward to when the CVS source tree gets into a\nbuildable state again! This could be a comfortable way of keeping up\nto date with the current sources. Here's hoping you find a good\nsolution to the s_lock.h misunderstandings soon... :-)\n\n> \tAnd, yes, password is required...\n\nI've found where it's stashed, now. I guess that means you only need\nto supply it once, to do the initial checkout, and after that you\nwon't have to worry about it.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "23 May 1998 20:12:24 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" 
}, { "msg_contents": "\n\nOn Sat, 23 May 1998, The Hermit Hacker wrote:\n\n> On Sat, 23 May 1998, Tom Lane wrote:\n> \n> > > \tOdd...it was doing a 'second checkout' that screwed me, where i\n> > > didn't think it worked...try doing 'cvs -d <> checkout -P pgsql' and tell\n> > > me what that does...\n> > \n> > I'd expect that to choke, because you've specified a nonexistent\n> > repository...\n> \n> \t<> == :pserver:[email protected]:/usr/local/cvsroot *grin*\n> \n> > Why would you need to do a second checkout anyway? Once you've got\n> > a local copy of the CVS tree, cd'ing into it and saying \"cvs update\"\n> > is the right way to pull an update.\n> \n> \tMy understanding (and the way I've always done it) is that:\n> \n> \tcvs checkout -P pgsql\n> \n> \tWill remove any old files, update any existing, and bring in any\n> new...always worked for me...\n\nWhat's that? In my understanding you have to login first. Then do one\ncheckout. The checkout (co) creates a new directory and updates everything\nat that time. If you stay in /usr/local 'co pgsql' creates\n/usr/local/pgsql. Next day or week you go to /usr/local/pgsql and try\n'cvs update -d'. Only the changed files will be updated on your local\ndisc. \n\n-Egon\n\n", "msg_date": "Sat, 23 May 1998 20:14:26 +0200 (MET DST)", "msg_from": "Egon Schmid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> \tMy understanding (and the way I've always done it) is that:\n> \tcvs checkout -P pgsql\n> \tWill remove any old files, update any existing, and bring in any\n> new...always worked for me...\n\nHmm. 
Now that I read the cvs manual carefully, it does say:\n\n: Running `checkout' on a directory that was already built by a prior\n: `checkout' is also permitted, and has the same effect as specifying the\n: `-d' option to the `update' command, that is, any new directories that\n: have been created in the repository will appear in your work area.\n\nBut the more usual way to do it is with \"update\". Maybe the \"checkout\"\nmethod is broken, or has some peculiar requirements about how to\nspecify the repository --- ordinarily you don't need to name the\nrepository in an update command, since cvs finds it in CVS/Root.\nI don't know whether the same is true for the \"checkout\" method.\nCould there be some kind of conflict between what you said on the\ncommand line and what was in CVS/Root?\n\n> \tGeez, sounds like someone with enough knowledge to build a\n> 'AnonCVS Instructions' web page? :)\n\nActually I'm just a novice with cvs. OTOH a novice might be just the\nright person to make such a page ;-). Let me see if I can gin up some\ntext.\n\nDo you want to adopt the Mozilla method where there is a tarball\navailable with the CVS control files already in place? I'd recommend\nit; my initial checkout over a 28.8K modem took well over two hours.\nAdmittedly I forgot to supply -z, but judging from the pauses in\ndata transfer, overload of the hub.org server was also a factor ...\n-z would've made that worse.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 May 1998 15:18:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? 
" }, { "msg_contents": "Well, the upshot from here is that having done \"cvs checkout pgsql\"\nonce, I can do either\n\tcvs update\t\t\t-- within pgsql\n\tcvs -d blahblah checkout pgsql\t-- in parent directory\nand they both seem to work and do the same thing (although with no\nupdates committed in the last two hours, it's hard to be sure).\n\nIf I omit the -d option to the cvs checkout, it fails; it does not\nknow enough to fetch the repository address from pgsql/CVS/Root.\nBut cvs update is cool.\n\nDunno why it doesn't work for Marc. I'm running cvs 1.9; you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 May 1998 15:33:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? " }, { "msg_contents": "On 23 May 1998, Tom Ivar Helbekkmo wrote:\n\n> Ah, yes. Here we go. This looks just like the \"cvs update\" pass. In\n> fact, it seems that a second checkout on top of an existing one turns\n> out to behave just like a (more proper) update from within the tree.\n> \n> Done now, worked fine.\n\n\tOdd, must be a problem with the machine I was trying to run it\nfrom then *shrug*\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 23 May 1998 16:38:23 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "On Sat, 23 May 1998, Tom Lane wrote:\n\n> Do you want to adopt the Mozilla method where there is a tarball\n> available with the CVS control files already in place? I'd recommend\n> it; my initial checkout over a 28.8K modem took well over two hours.\n> Admittedly I forgot to supply -z, but judging from the pauses in\n> data transfer, overload of the hub.org server was also a factor ...\n> -z would've made that worse.\n\n\tCan you try it with a -z and time it? 
I would judge it to be\nfaster, since the -z should be more processor intensive, and the processor\non Hub is idle most of the time. The -z would reduce bandwidth, though...\n\n\tAs for the CVS control files...will look at it...but even then,\nit's going to take a while to download, due to the overall size of the file\nin question...\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 23 May 1998 16:41:03 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? " }, { "msg_contents": "On Sat, 23 May 1998, Tom Lane wrote:\n\n> Well, the upshot from here is that having done \"cvs checkout pgsql\"\n> once, I can do either\n> \tcvs update\t\t\t-- within pgsql\n> \tcvs -d blahblah checkout pgsql\t-- in parent directory\n> and they both seem to work and do the same thing (although with no\n> updates committed in the last two hours, it's hard to be sure).\n> \n> If I omit the -d option to the cvs checkout, it fails; it does not\n> know enough to fetch the repository address from pgsql/CVS/Root.\n> But cvs update is cool.\n> \n> Dunno why it doesn't work for Marc. I'm running cvs 1.9; you?\n\n\tYup, but I have a few suspicions on why it doesn't work that I'll\ninvestigate on monday :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 23 May 1998 16:41:46 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Sat, 23 May 1998, Tom Lane wrote:\n>> Do you want to adopt the Mozilla method where there is a tarball\n>> available with the CVS control files already in place? 
I'd recommend\n>> it; my initial checkout over a 28.8K modem took well over two hours.\n\n> \tCan you try it with a -z and time it?\n\nWell, the Mozilla boys are quite right: -z3 is a nice win over a modem\nline. A full checkout with -z3 took 38 minutes, vs about 130 without.\n\nLet's see, the tarfile equivalent of the checked-out tree comes to\n3947839 bytes, which would take about 25-30 minutes to download across\nthis line. So there's not much savings to be had from downloading a\ntarfile instead of doing a checkout.\n\nIt might still be worth providing CVS control files in the snapshot\ntarballs, just so that someone who downloads a snap and later decides\nto start using CVS doesn't have to start from scratch. The overhead of\ndoing so seems to be only about a 1% increase in the tar.gz filesize.\nBut it may not be worth your trouble to mess with the snapshot\ngeneration script...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 May 1998 16:51:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? " }, { "msg_contents": "Here is what I have been using. I use -z6 because that is the default\ngzip compression level. This is not an anon login, but I have been\nusing it for over a month now. 
You will probably have to do an anon\nlogin first, so it stores a file in your home directory for later use.\n\nAlso attached is my pgcommit script.\n\n---------------------------------------------------------------------------\n\n:\ncd /pgcvs || exit 1\ncvs -z 6 -d :pserver:[email protected]:/usr/local/cvsroot -q update \"$@\" pgsql\nchown -R postgres .\npgfixups\ncd /u/src/pg/src\nnice configure \\\n\t--with-x --with-tcl \\\n\t--enable-cassert \\\n\t--with-template=bsdi-3.0 \\\n\t--with-includes=\"/u/readline\" \\\n\t--with-libraries=\"/u/readline /usr/contrib/lib\"\n\n\n---------------------------------------------------------------------------\n\n:\ntrap \"rm -f /tmp/$$ /tmp/$$a\" 0 1 2 3 15\ncd /u/src/pgsql || exit 1\nema /tmp/$$\nfmt /tmp/$$ >/tmp/$$a\ncat /tmp/$$a\nchown postgres /tmp/$$a\ncvs -z 6 -d :pserver:[email protected]:/usr/local/cvsroot commit -F /tmp/$$a pgsql\nfind /u/src/pgsql \\( -name '*.rej' -print \\) -o \\( -name '*.orig' -exec rm {} \\; \\)\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 23 May 1998 19:32:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> \tGeez, sounds like someone with enough knowledge to build a\n> 'AnonCVS Instructions' web page? :)\n\nHere's some text with HTML markup.\n\nThe claim in the text that CVS 1.9.28 fixes the mode-666 problem is\nbased on noting a promising-looking entry in that version's ChangeLog.\nI have not tested it for myself. 
Anyone want to try that version\nand see if it works?\n\n\t\t\tregards, tom lane\n\n<html>\n<head>\n\t<title>PostgreSQL: Getting the source via CVS</title>\n</head>\n<body bgcolor=white text=black link=blue vlink=purple>\n\n<font size=\"+3\">Getting the source via CVS</font>\n\n<p>If you would like to keep up with the current sources on a regular\nbasis, you can fetch them from our CVS server and then use CVS to\nretrieve updates from time to time.\n\n<P>To do this you first need a local copy of CVS (Concurrent Versions\nSystem), which you can get from\n<A HREF=\"http://www.cyclic.com/\">http://www.cyclic.com/</A> or\nany GNU software archive site. Currently we recommend version 1.9.\n\n<P>Once you have installed the CVS software, do this:\n<PRE>\ncvs -d :pserver:[email protected]:/usr/local/cvsroot login\n</PRE>\nYou will be prompted for a password; enter '<tt>postgresql</tt>'.\nYou should only need to do this once, since the password will be\nsaved in <tt>.cvspass</tt> in your home directory.\n\n<P>Having logged in, you are ready to fetch the PostgreSQL sources.\nDo this:\n<PRE>\ncvs -z3 -d :pserver:[email protected]:/usr/local/cvsroot co -P pgsql\n</PRE>\nwhich will install the PostgreSQL sources into a subdirectory <tt>pgsql</tt>\nof the directory you are currently in.\n\n<P>(If you have a fast link to the Internet, you may not need <tt>-z3</tt>,\nwhich instructs CVS to use gzip compression for transferred data. But\non a modem-speed link, it's a very substantial win.)\n\n<P>This initial checkout is a little slower than simply downloading\na <tt>tar.gz</tt> file; expect it to take 40 minutes or so if you\nhave a 28.8K modem. 
The advantage of CVS doesn't show up until you\nwant to update the file set later on.\n\n<P>Whenever you want to update to the latest CVS sources, <tt>cd</tt> into\nthe <tt>pgsql</tt> subdirectory, and issue\n<PRE>\ncvs -z3 update -d -P\n</PRE>\nThis will fetch only the changes since the last time you updated.\nYou can update in just a couple of minutes, typically, even over\na modem-speed line.\n\n<P>You can save yourself some typing by making a file <tt>.cvsrc</tt>\nin your home directory that contains\n\n<PRE>\ncvs -z3\nupdate -d -P\n</PRE>\n\nThis supplies the <tt>-z3</tt> option to all cvs commands, and the\n<tt>-d</tt> and <tt>-P</tt> options to cvs update. Then you just have\nto say\n<PRE>\ncvs update\n</PRE>\nto update your files.\n\n<P><strong>CAUTION:</strong> some versions of CVS have a bug that\ncauses all checked-out files to be stored world-writable in your\ndirectory. If you see that this has happened, you can do something like\n<PRE>\nchmod -R go-w pgsql\n</PRE>\nto set the permissions properly. This bug is allegedly fixed in the\nlatest beta version of CVS, 1.9.28 ... but it may have other, less\npredictable bugs.\n\n<P>CVS can do a lot of other things, such as fetching prior revisions\nof the PostgreSQL sources rather than the latest development version.\nFor more info consult the manual that comes with CVS, or see the online\ndocumentation at <A HREF=\"http://www.cyclic.com/\">http://www.cyclic.com/</A>.\n\n</body>\n</html>\n", "msg_date": "Sat, 23 May 1998 19:37:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? " }, { "msg_contents": "> \n> Here is what I have been using. I use -z6 because that is the default\n> gzip compression level. This is not an anon login, but I have been\n> using it for over a month now. 
You will probably have to do an anon\n> login first, so it stores a file in your home directory for later use.\n> \n> Also attached is my pgcommit script.\n\nI am going to change to -z 3. Netscape may have found -z 3 to be faster\nthan -z 6 for one-time modem transfers.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 23 May 1998 22:34:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "I wrote:\n\n> I'm starting to look forward to when the CVS source tree gets into a\n> buildable state again! This could be a comfortable way of keeping\n> up to date with the current sources. Here's hoping you find a good\n> solution to the s_lock.h misunderstandings soon... :-)\n\nA closer look shows that you've actually got it worked out, except\nthat the ugly hack for Sparcs running BSD now has broken completely.\nIt used to work when it was in s_lock.h, but in a separately compiled\nfile, it doesn't. (It relies on an entry point declared inside asm()\nwithin an unused function that's explicitly declared static.)\n\nI just replaced it with the simpler one for SparcLinux, and it's OK.\n\nOn the weird side, after I updated to the current sources, the backend\ndies on me whenever I try to delete a database, whether from psql with\n'drop database test' or from the command line with 'destroydb test'.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "24 May 1998 11:13:14 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" 
}, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> The claim in the text that CVS 1.9.28 fixes the mode-666 problem is\n> based on noting a promising-looking entry in that version's ChangeLog.\n> I have not tested it for myself. Anyone want to try that version\n> and see if it works?\n\nI just did, and it did.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "24 May 1998 13:43:05 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "Tom Ivar Helbekkmo wrote:\n> \n> > I'm starting to look forward to when the CVS source tree gets into a\n> > buildable state again! This could be a comfortable way of keeping\n> > up to date with the current sources. Here's hoping you find a good\n> > solution to the s_lock.h misunderstandings soon... :-)\n> \n> A closer look shows that you've actually got it worked out, except\n> that the ugly hack for Sparcs running BSD now has broken completely.\n> It used to work when it was in s_lock.h, but in a separately compiled\n> file, it doesn't. (It relies on an entry point declared inside asm()\n> within an unused function that's explicitly declared static.)\n\nOoops, sorry about that.\n\nI guess I should have added a \".globl tas\" or whatever the native asm phrase\nfor globalizing an entry point is and then it would have worked as I intended.\n\n> I just replaced it with the simpler one for SparcLinux, and it's OK.\n\nThis is a very nice way to do this. 
In general, if we can count on having\nGCC we should use the GCC inlines.\n\nHmmmm, on that note, the current sources are factored:\n\n #if defined(linux)\n #if defined(x86)\n // x86 code\n #elif defined(sparc)\n // sparc code\n #endif\n #else\n // all non linux\n ...\n #endif\n\nI think that the real commonality might better be expressed as:\n\n #if defined(gcc)\n // all gcc variants\n #else\n // no gcc\n #endif\n\nAs GCC has a unique (but common to gcc!) \"asm\" facility. This would allow\nall the free unixes and many of the commercial ones to share the same\nasm implementation which should make it easier to get it right on all the\nplatforms.\n\nSince I am planning another revision, does anyone object to this?\n\n> On the weird side, after I updated to the current sources, the backend\n> dies on me whenever I try to delete a database, whether from psql with\n> 'drop database test' or from the command line with 'destroydb test'.\n\nTry making the 's_lock_test' target in src/backend/storage/buffer/Makefile.\nIt will let you be sure that spinlocks are working.\n\nJust btw, I have been doing some testing based on Bruce's reservations about\nthe inline vs call implementation of spinlocks, and will be posting an updated\nset of patches and the results of my testing \"real soon now\". \n\nNow that I have at least anoncvs access to the current tree, I think I can\ndo this with fewer iterations (crossing fingers...).\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Sun, 24 May 1998 18:38:44 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current sources?"
}, { "msg_contents": ">> A closer look shows that you've actually got it worked out, except\n>> that the ugly hack for Sparcs running BSD now has broken completely.\n>> It used to work when it was in s_lock.h, but in a separately compiled\n>> file, it doesn't. (It relies on an entry point declared inside asm()\n>> within an unused function that's explicitly declared static.)\n>\n>Ooops, sorry about that.\n>\n>I guess I should have added a \".globl tas\" or whatever the native asm phrase\n>for globalizing an entry point is and then it would have worked as I intended.\n\nPPC/Linux has been broken too.\n\n>> On the weird side, after I updated to the current sources, the backend\n>> dies on me whenever I try to delete a database, whether from psql with\n>> 'drop database test' or from the command line with 'destroydb test'.\n\nI have made small changes to solve the global tas problem, and got\nexactly same experience.\n\n>Try making the 's_lock_test' target in src/backend/storage/buffer/Makefile.\n>It will let you be sure that spinlocks are working.\n\nI have tested the s_lock_test and seems it is working. However I have\nlots of failure with various SQL's including 'drop database', 'delete\nfrom'.\nHave you succeeded in running regression tests? If so, what kind of\nplatforms are you using?\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Mon, 25 May 1998 11:02:47 +0900", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? " }, { "msg_contents": "[email protected] (David Gould) writes:\n\n> Try making the 's_lock_test' target in\n> src/backend/storage/buffer/Makefile. It will let you be sure that\n> spinlocks are working.\n\nThis is how it looks like here (NetBSD/sparc 1.3, GCC 1.7.2.2), with\nthe broken TAS support replaced with that from SparcLinux:\n\n| barsoom# gmake s_lock_test\n| gcc -I../../../include -I../../../backend -I/usr/local/include -O2 -pipe -Wall -Wmissing-prototypes -I../.. 
-DS_LOCK_TEST=1 -g s_lock.c -o s_lock_test\n| s_lock.c: In function `main':\n| s_lock.c:313: warning: implicit declaration of function `select'\n| ./s_lock_test\n| S_LOCK_TEST: this will hang for a few minutes and then abort\n| with a 'stuck spinlock' message if S_LOCK()\n| and TAS() are working.\n| \n| FATAL: s_lock(00004168) at s_lock.c:324, stuck spinlock. Aborting.\n| \n| FATAL: s_lock(00004168) at s_lock.c:324, stuck spinlock. Aborting.\n| gmake: *** [s_lock_test] Abort trap (core dumped)\n| gmake: *** Deleting file `s_lock_test'\n| barsoom# \n\n...and it did take a couple of minutes (didn't time it, though), so I\nguess it works, right?\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "25 May 1998 05:26:47 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "[email protected] writes:\n\n> I have tested the s_lock_test and seems it is working. However I\n> have lots of failure with various SQL's including 'drop database',\n> 'delete from'.\n\nI'm seeing the same thing Tatsuo-san does. This is on NetBSD/sparc\n1.3, GCC 1.7.2.2, running the very latest anonCVS-fetched PostgreSQL.\nHaven't run regression tests -- assume they would fail horribly. The\ninstallation was done from scratch, including an 'initdb' run.\n\nInterestingly, a 'delete from' will kill the backend even if it has a\n'where' clause that does not match anything whatsoever, but a 'drop\ntable' is fine, including non-empty tables. Brief testing of 'insert'\nand 'select' show them working, including joins, as do transactions\nusing 'begin', 'commit' and 'abort'.\n\nAny idea where to start looking?\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. 
--Niles Crane, \"Frasier\"\n", "msg_date": "25 May 1998 05:45:47 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Death on deletion attempts (was: Current sources?)" }, { "msg_contents": "> This is a very nice way to do this. In general, if we can count on having\n> GCC we should use the GCC inlines.\n> \n> Hmmmm, on that note, the current sources are factored:\n> \n> #if defined(linux)\n> #if defined(x86)\n> // x86 code\n> #else if defined(sparc)\n> // sparc code\n> #endif\n> #else\n> // all non linux\n> ...\n> #endif\n> \n> I think that the real commonality might better be expressed as:\n> \n> #if defined(gcc)\n> // all gcc variants\n> #else\n> // no gcc\n> #endif\n> \n> As GCC has a unique (but common to gcc!) \"asm\" facility. This would allow\n> all the free unixes and many of the comercial ones to share the same\n> asm implementation which should make it easier to get it right on all the\n> platforms.\n> \n> Since I am planning another revision, does anyone object to this?\n\nSounds great.\n\n> \n> > On the weird side, after I updated to the current sources, the backend\n> > dies on me whenever I try to delete a database, whether from psql with\n> > 'drop database test' or from the command line with 'destroydb test'.\n> \n> Try making the 's_lock_test' target in src/backend/storage/buffer/Makefile.\n> It will let you be sure that spinlocks are working.\n> \n> Just btw, I have been doing some testing based on Bruce's reservations about\n> the inline vs call implementation of spinlocks, and will be posting an updated\n> set of patches and the results of my testing \"real soon now\". \n> \n> Now that I have at least anoncvs access to the current tree, I think I can\n> do this with fewer iterations (crossing fingers...).\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Sun, 24 May 1998 23:59:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "[email protected] writes:\n> >> A closer look shows that you've actually got it worked out, except\n> >> that the ugly hack for Sparcs running BSD now has broken completely.\n> >> It used to work when it was in s_lock.h, but in a separately compiled\n> >> file, it doesn't. (It relies on an entry point declared inside asm()\n> >> within an unused function that's explicitly declared static.)\n> >\n> >Ooops, sorry about that.\n> >\n> >I guess I should have added a \".globl tas\" or whatever the native asm phrase\n> >for globalizing an entry point is and then it would have worked as I intended.\n> \n> PPC/Linux has been broken too.\n\nPlease let me know what the problem was, even if it was just the 'global tas'\nthing. I am trying to make sure this works on all platforms. Thanks.\n\n> >> On the weird side, after I updated to the current sources, the backend\n> >> dies on me whenever I try to delete a database, whether from psql with\n> >> 'drop database test' or from the command line with 'destroydb test'.\n> \n> I have made small changes to solve the global tas problem, and got\n> exactly same experience.\n> \n> >Try making the 's_lock_test' target in src/backend/storage/buffer/Makefile.\n> >It will let you be sure that spinlocks are working.\n> \n> I have tested the s_lock_test and seems it is working. However I have\n> lots of failure with various SQL's including 'drop database', 'delete\n> from'.\n> Have you succeeded in running regression tests? If so, what kind of\n> platforms are you using?\n\nI made this patch against 6.3.2 and ran regression successfully. This on a\nglibc Linux x86 system. I just rebuilt against the latest CVS (from anoncvs)\nand see 27 tests that fail, many with dropconns. 
I looked a little into the\n'drop database failure' and it does not look related to spinlocks as far as\nI looked.\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Mon, 25 May 1998 01:25:02 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "Tom Ivar Helbekkmo writes: \n> [email protected] writes:\n> \n> > I have tested the s_lock_test and seems it is working. However I\n> > have lots of failure with various SQL's including 'drop database',\n> > 'delete from'.\n> \n> I'm seeing the same thing Tatsuo-san does. This is on NetBSD/sparc\n> 1.3, GCC 1.7.2.2, running the very latest anonCVS-fetched PostgreSQL.\n> Haven't run regression tests -- assume they would fail horribly. The\n> installation was done from scratch, including an 'initdb' run.\n> \n> Interestingly, a 'delete from' will kill the backend even if it has a\n> 'where' clause that does not match anything whatsoever, but a 'drop\n> table' is fine, including non-empty tables. Brief testing of 'insert'\n> and 'select' show them working, including joins, as do transactions\n> using 'begin', 'commit' and 'abort'.\n> \n> Any idea where to start looking?\n\nNot at me for starters ;-) I really think I _might_ be innocent here...\n\nBtw, could you send me diffs for your bsd s_lock fix if you have not sent\nthem in to be applied yet. I would like avoid the multiple unsynched update\nproblem that happened last time. Thanks.\n\nI did spend all of five minutes with gdb looking at the \"drop database\"\nfailure. 
It looks like ExecutorStart() returned a null tupleDesc to \nProcessQueryDesc() which then passed to BeginCommand() who did not like\nit at all. So I would start looking at why ExecutorStart() failed.\n\nThe transcript is below:\n \n\nProgram received signal SIGSEGV, Segmentation fault.\nBeginCommand (pname=0x0, operation=4, tupdesc=0x0, isIntoRel=0 '\\000', \n isIntoPortal=0, tag=0x81359c0 \"DELETE\", dest=Remote) at dest.c:241\n241 AttributeTupleForm *attrs = tupdesc->attrs;\n(gdb) where\n#0 BeginCommand (pname=0x0, operation=4, tupdesc=0x0, isIntoRel=0 '\\000', \n isIntoPortal=0, tag=0x81359c0 \"DELETE\", dest=Remote) at dest.c:241\n#1 0x80e24f9 in ProcessQueryDesc (queryDesc=0x81ba640) at pquery.c:293\n#2 0x80e258e in ProcessQuery (parsetree=0x81b68f8, plan=0x81ba468, argv=0x0, \n typev=0x0, nargs=0, dest=Remote) at pquery.c:378\n#3 0x80e13b0 in pg_exec_query_dest (\n query_string=0xbfffd5f8 \"delete from pg_database where pg_database.oid = '18080'::oid\", argv=0x0, typev=0x0, nargs=0, dest=Remote) at postgres.c:702\n#4 0x80e12b2 in pg_exec_query (\n query_string=0xbfffd5f8 \"delete from pg_database where pg_database.oid = '18080'::oid\", argv=0x0, typev=0x0, nargs=0) at postgres.c:601\n#5 0x8096596 in destroydb (dbname=0x81b2558 \"regression\") at dbcommands.c:136\n#6 0x80e304c in ProcessUtility (parsetree=0x81b2578, dest=Remote)\n at utility.c:570\n#7 0x80e1350 in pg_exec_query_dest (\n query_string=0xbfffd928 \"drop database regression;\", argv=0x0, typev=0x0, \n nargs=0, dest=Remote) at postgres.c:656\n#8 0x80e12b2 in pg_exec_query (\n query_string=0xbfffd928 \"drop database regression;\", argv=0x0, typev=0x0, \n nargs=0) at postgres.c:601\n#9 0x80e2001 in PostgresMain (argc=9, argv=0xbffff960) at postgres.c:1417\n#10 0x80a7707 in main (argc=9, argv=0xbffff960) at main.c:105\n(gdb) l\n236 bool isIntoPortal,\n237 char *tag,\n238 CommandDest dest)\n239 {\n240 PortalEntry *entry;\n241 AttributeTupleForm *attrs = tupdesc->attrs;\n242 int natts = 
tupdesc->natts;\n243 int i;\n244 char *p;\n245\n(gdb) up\n#1 0x80e24f9 in ProcessQueryDesc (queryDesc=0x81ba640) at pquery.c:293\n293 BeginCommand(NULL,\n(gdb) l 280,300\n281 /* ----------------\n282 * call ExecStart to prepare the plan for execution\n283 * ----------------\n284 */\n285 attinfo = ExecutorStart(queryDesc, state);\n286\n287 /* ----------------\n288 * report the query's result type information\n289 * back to the front end or to whatever destination\n290 * we're dealing with.\n291 * ----------------\n292 */\n293 BeginCommand(NULL,\n294 operation,\n295 attinfo,\n296 isRetrieveIntoRelation,\n297 isRetrieveIntoPortal,\n298 tag,\n299 dest);\n(gdb) p attinfo\n$1 = (struct tupleDesc *) 0x0\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Mon, 25 May 1998 01:33:48 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Death on deletion attempts (was: Current sources?)" }, { "msg_contents": ">> PPC/Linux has been broken too.\n>\n>Please let me know what the problem was, even if it was just the 'global tas'\n>thing. I am trying to make sure this works on all platforms. Thanks.\n\nHere are patches for s_lock.c (against May23 snapshot).\n----------------------------------------------------------\n*** s_lock.c.orig\tMon May 25 18:08:20 1998\n--- s_lock.c\tMon May 25 18:08:57 1998\n***************\n*** 151,161 ****\n \n #if defined(PPC)\n \n! static int\n! tas_dummy()\n {\n \t__asm__(\"\t\t\t\t\\n\\\n- tas:\t\t\t\t\t\t\\n\\\n \t\t\tlwarx\t5,0,3\t\\n\\\n \t\t\tcmpwi\t5,0\t\t\\n\\\n \t\t\tbne\t\tfail\t\\n\\\n--- 151,160 ----\n \n #if defined(PPC)\n \n! int\n! 
tas(slock_t *lock)\n {\n \t__asm__(\"\t\t\t\t\\n\\\n \t\t\tlwarx\t5,0,3\t\\n\\\n \t\t\tcmpwi\t5,0\t\t\\n\\\n \t\t\tbne\t\tfail\t\\n\\\n----------------------------------------------------------\n>> I have tested the s_lock_test and seems it is working. However I have\n>> lots of failure with various SQL's including 'drop database', 'delete\n>> from'.\n>> Have you succeeded in running regression tests? If so, what kind of\n>> platforms are you using?\n>\n>I made this patch against 6.3.2 and ran regression successfully. This on a\n>glibc Linux x86 system. I just rebuilt against the latest CVS (from anoncvs)\n>and see 27 tests that fail, many with dropconns. I looked a little into the\n>'drop database failure' and it does not look related to spinlocks as far as\n>I looked.\n\nI see. BTW, I have tested on FreeBSD box and found exactly same thing\nhas occurred.\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Mon, 25 May 1998 18:14:58 +0900", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? " }, { "msg_contents": "tih wrote:\n> \n> > I'm starting to look forward to when the CVS source tree gets into a\n> > buildable state again! This could be a comfortable way of keeping\n> > up to date with the current sources. Here's hoping you find a good\n> > solution to the s_lock.h misunderstandings soon... :-)\n... \n> On the weird side, after I updated to the current sources, the backend\n> dies on me whenever I try to delete a database, whether from psql with\n> 'drop database test' or from the command line with 'destroydb test'.\n> \n> -tih\n\n\nOk, I think I have found the source of the dropconns on \"delete\" queries\nthat are causing the current problem. 
The change listed below sets\ntupType to the junkFilter (whatever that is) jf_cleanTupType unconditionally.\nThis makes a SEGV later as the tupType ends up NULL.\n\nHere is what CVS says:\n ---------------\n revision 1.46\n date: 1998/05/21 03:53:50; author: scrappy; state: Exp; lines: +6 -4\n \n From: David Hartwig <[email protected]>\n \n Here is a patch to remove the requirement that ORDER/GROUP BY clause\n identifiers be included in the target list.\n --------------\n\nI do not believe that this could ever have passed regression. Do we have\nthe whole patch to back out, or do we need to just \"fix what we have now\"?\n\nAlso, perhaps we need to be more selective about checkins? \n\nHere is the source containing the problem:\n\nsrc/backend/executor/execMain.c in InitPlan() at about line 515\n /* ----------------\n * now that we have the target list, initialize the junk filter\n * if this is a REPLACE or a DELETE query.\n * We also init the junk filter if this is an append query\n * (there might be some rule lock info there...)\n * NOTE: in the future we might want to initialize the junk\n * filter for all queries.\n * ----------------\n * SELECT added by [email protected] 5/20/98 to allow\n * ORDER/GROUP BY have an identifier missing from the target.\n */\n if (operation == CMD_UPDATE || operation == CMD_DELETE ||\n operation == CMD_INSERT || operation == CMD_SELECT)\n {\n JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);\n estate->es_junkFilter = j;\n>>>> tupType = j->jf_cleanTupType; /* Added by [email protected] 5/20/98 */\n }\n else\n estate->es_junkFilter = NULL;\n\nHere is my debug transcript for \"drop database regression\" \n\n(gdb) where\n#0 InitPlan (operation=CMD_DELETE, parseTree=0x81b68f8, plan=0x81ba468, \n estate=0x81ba668) at execMain.c:527\n#1 0x8098017 in ExecutorStart (queryDesc=0x81ba640, estate=0x81ba668)\n at execMain.c:128\n#2 0x80e24d9 in ProcessQueryDesc (queryDesc=0x81ba640) at pquery.c:285\n#3 0x80e258e in ProcessQuery 
(parsetree=0x81b68f8, plan=0x81ba468, argv=0x0, \n typev=0x0, nargs=0, dest=Remote) at pquery.c:378\n#4 0x80e13b0 in pg_exec_query_dest (\n query_string=0xbfffd5f8 \"delete from pg_database where pg_database.oid = '18080'::oid\", argv=0x0, typev=0x0, nargs=0, dest=Remote) at postgres.c:702\n#5 0x80e12b2 in pg_exec_query (\n query_string=0xbfffd5f8 \"delete from pg_database where pg_database.oid = '18080'::oid\", argv=0x0, typev=0x0, nargs=0) at postgres.c:601\n#6 0x8096596 in destroydb (dbname=0x81b2558 \"regression\") at dbcommands.c:136\n#7 0x80e304c in ProcessUtility (parsetree=0x81b2578, dest=Remote)\n at utility.c:570\n#8 0x80e1350 in pg_exec_query_dest (\n query_string=0xbfffd928 \"drop database regression;\", argv=0x0, typev=0x0, \n nargs=0, dest=Remote) at postgres.c:656\n#9 0x80e12b2 in pg_exec_query (\n query_string=0xbfffd928 \"drop database regression;\", argv=0x0, typev=0x0, \n nargs=0) at postgres.c:601\n#10 0x80e2001 in PostgresMain (argc=9, argv=0xbffff960) at postgres.c:1417\n#11 0x80a7707 in main (argc=9, argv=0xbffff960) at main.c:105\n530 JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);\n531 estate->es_junkFilter = j;\n(gdb) p j\n$7 = (JunkFilter *) 0x81bb2c8\n(gdb) p *j\n$8 = {type = T_JunkFilter, jf_targetList = 0x81ba600, jf_length = 1, \n jf_tupType = 0x81bb238, jf_cleanTargetList = 0x0, jf_cleanLength = 0, \n jf_cleanTupType = 0x0, jf_cleanMap = 0x0}\n(gdb) n\n533 tupType = j->jf_cleanTupType; /* Added by [email protected] 5/20/98 */\n(gdb) p tupType\n$9 = (struct tupleDesc *) 0x81baf18\n(gdb) n\n534 }\n(gdb) p tupType\n$10 = (struct tupleDesc *) 0x0\n(gdb) n\n542 intoRelationDesc = (Relation) NULL;\n(gdb) n\n544 if (operation == CMD_SELECT)\n(gdb) n\n588 estate->es_into_relation_descriptor = intoRelationDesc;\n(gdb) n\n600 return tupType;\n\nReturns NULL to ExecutorStart who then pukes.\n\n-dg\n\n\nDavid Gould [email protected] 510.536.1443 (h) 510.628.3783 (w)\n or [email protected] 510.305.9468 (Bat phone)\n-- A child of 
five could understand this! Fetch me a child of five.\n\n", "msg_date": "Mon, 25 May 1998 12:23:46 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> --------------\n> \n> I do not believe that this could ever have passed regression. Do we have\n> the whole patch to back out, or do we need to just \"fix what we have now\"?\n> \n> Also, perhaps we need to be more selective about checkins? \n\nNot sure. Marc and I reviewed it, and it looked very good. In fact, I\nwould like to see more of such patches, of course, without the destroydb\nproblem, but many patches have little quirks the author could not have\nanticipated.\n\n> {\n> JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);\n> estate->es_junkFilter = j;\n> >>>> tupType = j->jf_cleanTupType; /* Added by [email protected] 5/20/98 */\n> }\n> else\n> estate->es_junkFilter = NULL;\n> \n> Here is my debug transcript for \"drop database regression\" \n\nHere is the original patch. I got it with the command:\n\n$ pgcvs diff -c -D'05/21/1998 03:00:00 GMT' -D'05/21/1998 04:00:00\nGMT' \n\npgcvs on my machines does postgresql cvs for me.\n\n---------------------------------------------------------------------------\n\nIndex: backend/executor/execMain.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/executor/execMain.c,v\nretrieving revision 1.45\nretrieving revision 1.46\ndiff -c -r1.45 -r1.46\n*** execMain.c\t1998/02/27 08:43:52\t1.45\n--- execMain.c\t1998/05/21 03:53:50\t1.46\n***************\n*** 26,32 ****\n *\n *\n * IDENTIFICATION\n! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/executor/execMain.c,v 1.45 1998/02/27 08:43:52 vadim Exp $\n *\n *-------------------------------------------------------------------------\n */\n--- 26,32 ----\n *\n *\n * IDENTIFICATION\n! 
*\t $Header: /usr/local/cvsroot/pgsql/src/backend/executor/execMain.c,v 1.46 1998/05/21 03:53:50 scrappy Exp $\n *\n *-------------------------------------------------------------------------\n */\n***************\n*** 521,534 ****\n \t *\t NOTE: in the future we might want to initialize the junk\n \t *\t filter for all queries.\n \t * ----------------\n \t */\n \tif (operation == CMD_UPDATE || operation == CMD_DELETE ||\n! \t\toperation == CMD_INSERT)\n \t{\n- \n \t\tJunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);\n- \n \t\testate->es_junkFilter = j;\n \t}\n \telse\n \t\testate->es_junkFilter = NULL;\n--- 521,536 ----\n \t *\t NOTE: in the future we might want to initialize the junk\n \t *\t filter for all queries.\n \t * ----------------\n+ \t * SELECT added by [email protected] 5/20/98 to allow \n+ \t * ORDER/GROUP BY have an identifier missing from the target.\n \t */\n \tif (operation == CMD_UPDATE || operation == CMD_DELETE ||\n! \t\toperation == CMD_INSERT || operation == CMD_SELECT)\n \t{\n \t\tJunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);\n \t\testate->es_junkFilter = j;\n+ \n+ \t\ttupType = j->jf_cleanTupType; /* Added by [email protected] 5/20/98 */\n \t}\n \telse\n \t\testate->es_junkFilter = NULL;\nIndex: backend/parser/parse_clause.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/parse_clause.c,v\nretrieving revision 1.15\nretrieving revision 1.16\ndiff -c -r1.15 -r1.16\n*** parse_clause.c\t1998/03/31 04:43:53\t1.15\n--- parse_clause.c\t1998/05/21 03:53:50\t1.16\n***************\n*** 7,13 ****\n *\n *\n * IDENTIFICATION\n! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/parser/parse_clause.c,v 1.15 1998/03/31 04:43:53 momjian Exp $\n *\n *-------------------------------------------------------------------------\n */\n--- 7,13 ----\n *\n *\n * IDENTIFICATION\n! 
*\t $Header: /usr/local/cvsroot/pgsql/src/backend/parser/parse_clause.c,v 1.16 1998/05/21 03:53:50 scrappy Exp $\n *\n *-------------------------------------------------------------------------\n */\n***************\n*** 182,187 ****\n--- 182,218 ----\n \t\t\t}\n \t\t}\n \t}\n+ \n+ \t/* BEGIN add missing target entry hack.\n+ \t *\n+ \t * Prior to this hack, this function returned NIL if no target_result.\n+ \t * Thus, ORDER/GROUP BY required the attributes be in the target list.\n+ \t * Now it constructs a new target entry which is appended to the end of\n+ \t * the target list. This target is set to be resjunk = TRUE so that\n+ \t * it will not be projected into the final tuple.\n+ \t * [email protected] 5/20/98\n+ \t */ \n+ \tif ( ! target_result) { \n+ \t\tList *p_target = tlist;\n+ \t\tIdent *missingTargetId = (Ident *)makeNode(Ident);\n+ \t\tTargetEntry *tent = makeNode(TargetEntry);\n+ \t\t\n+ \t\t/* Fill in the constructed Ident node */\n+ \t\tmissingTargetId->type = T_Ident;\n+ \t\tmissingTargetId->name = palloc(strlen(sortgroupby->name) + 1);\n+ \t\tstrcpy(missingTargetId->name, sortgroupby->name);\n+ \n+ \t\ttransformTargetId(pstate, missingTargetId, tent, missingTargetId->name, TRUE);\n+ \n+ \t\t/* Add to the end of the target list */\n+ \t\twhile (lnext(p_target) != NIL) {\n+ \t\t\tp_target = lnext(p_target);\n+ \t\t}\n+ \t\tlnext(p_target) = lcons(tent, NIL);\n+ \t\ttarget_result = tent;\n+ \t}\n+ \t/* END add missing target entry hack. 
*/\n+ \n \treturn target_result;\n }\n \n***************\n*** 203,212 ****\n \t\tResdom\t *resdom;\n \n \t\trestarget = find_targetlist_entry(pstate, lfirst(grouplist), targetlist);\n- \n- \t\tif (restarget == NULL)\n- \t\t\telog(ERROR, \"The field being grouped by must appear in the target list\");\n- \n \t\tgrpcl->entry = restarget;\n \t\tresdom = restarget->resdom;\n \t\tgrpcl->grpOpoid = oprid(oper(\"<\",\n--- 234,239 ----\n***************\n*** 262,270 ****\n \n \n \t\trestarget = find_targetlist_entry(pstate, sortby, targetlist);\n- \t\tif (restarget == NULL)\n- \t\t\telog(ERROR, \"The field being ordered by must appear in the target list\");\n- \n \t\tsortcl->resdom = resdom = restarget->resdom;\n \t\tsortcl->opoid = oprid(oper(sortby->useOp,\n \t\t\t\t\t\t\t\t resdom->restype,\n--- 289,294 ----\nIndex: backend/parser/parse_target.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/parse_target.c,v\nretrieving revision 1.12\nretrieving revision 1.13\ndiff -c -r1.12 -r1.13\n*** parse_target.c\t1998/05/09 23:29:54\t1.12\n--- parse_target.c\t1998/05/21 03:53:51\t1.13\n***************\n*** 7,13 ****\n *\n *\n * IDENTIFICATION\n! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/parser/parse_target.c,v 1.12 1998/05/09 23:29:54 thomas Exp $\n *\n *-------------------------------------------------------------------------\n */\n--- 7,13 ----\n *\n *\n * IDENTIFICATION\n! 
*\t $Header: /usr/local/cvsroot/pgsql/src/backend/parser/parse_target.c,v 1.13 1998/05/21 03:53:51 scrappy Exp $\n *\n *-------------------------------------------------------------------------\n */\n***************\n*** 52,57 ****\n--- 52,102 ----\n \t\t\t\t Oid type_id,\n \t\t\t\t Oid attrtype);\n \n+ \n+ /*\n+ * transformTargetId - transforms an Ident Node to a Target Entry\n+ * Created this a function to allow the ORDER/GROUP BY clause be able \n+ * to construct a TargetEntry from an Ident.\n+ *\n+ * resjunk = TRUE will hide the target entry in the final result tuple.\n+ * [email protected] 5/20/98\n+ */\n+ void\n+ transformTargetId(ParseState *pstate,\n+ \t\t\t\tIdent *ident,\n+ \t\t\t\tTargetEntry *tent,\n+ \t\t\t\tchar *resname,\n+ \t\t\t\tint16 resjunk)\n+ {\n+ \tNode *expr;\n+ \tOid\ttype_id;\n+ \tint16\ttype_mod;\n+ \n+ \t/*\n+ \t * here we want to look for column names only, not\n+ \t * relation names (even though they can be stored in\n+ \t * Ident nodes, too)\n+ \t */\n+ \texpr = transformIdent(pstate, (Node *) ident, EXPR_COLUMN_FIRST);\n+ \ttype_id = exprType(expr);\n+ \tif (nodeTag(expr) == T_Var)\n+ \t\ttype_mod = ((Var *) expr)->vartypmod;\n+ \telse\n+ \t\ttype_mod = -1;\n+ \ttent->resdom = makeResdom((AttrNumber) pstate->p_last_resno++,\n+ \t\t\t\t\t\t\t (Oid) type_id,\n+ \t\t\t\t\t\t\t type_mod,\n+ \t\t\t\t\t\t\t resname,\n+ \t\t\t\t\t\t\t (Index) 0,\n+ \t\t\t\t\t\t\t (Oid) 0,\n+ \t\t\t\t\t\t\t resjunk);\n+ \n+ \ttent->expr = expr;\n+ \treturn;\n+ }\n+ \n+ \n+ \n /*\n * transformTargetList -\n *\t turns a list of ResTarget's into a list of TargetEntry's\n***************\n*** 71,106 ****\n \t\t{\n \t\t\tcase T_Ident:\n \t\t\t\t{\n- \t\t\t\t\tNode\t *expr;\n- \t\t\t\t\tOid\t\t\ttype_id;\n- \t\t\t\t\tint16\t\ttype_mod;\n \t\t\t\t\tchar\t *identname;\n \t\t\t\t\tchar\t *resname;\n \n \t\t\t\t\tidentname = ((Ident *) res->val)->name;\n \t\t\t\t\thandleTargetColname(pstate, &res->name, NULL, identname);\n- \n- \t\t\t\t\t/*\n- \t\t\t\t\t * here we want 
to look for column names only, not\n- \t\t\t\t\t * relation names (even though they can be stored in\n- \t\t\t\t\t * Ident nodes, too)\n- \t\t\t\t\t */\n- \t\t\t\t\texpr = transformIdent(pstate, (Node *) res->val, EXPR_COLUMN_FIRST);\n- \t\t\t\t\ttype_id = exprType(expr);\n- \t\t\t\t\tif (nodeTag(expr) == T_Var)\n- \t\t\t\t\t\ttype_mod = ((Var *) expr)->vartypmod;\n- \t\t\t\t\telse\n- \t\t\t\t\t\ttype_mod = -1;\n \t\t\t\t\tresname = (res->name) ? res->name : identname;\n! \t\t\t\t\ttent->resdom = makeResdom((AttrNumber) pstate->p_last_resno++,\n! \t\t\t\t\t\t\t\t\t\t\t (Oid) type_id,\n! \t\t\t\t\t\t\t\t\t\t\t type_mod,\n! \t\t\t\t\t\t\t\t\t\t\t resname,\n! \t\t\t\t\t\t\t\t\t\t\t (Index) 0,\n! \t\t\t\t\t\t\t\t\t\t\t (Oid) 0,\n! \t\t\t\t\t\t\t\t\t\t\t 0);\n! \n! \t\t\t\t\ttent->expr = expr;\n \t\t\t\t\tbreak;\n \t\t\t\t}\n \t\t\tcase T_ParamNo:\n--- 116,128 ----\n \t\t{\n \t\t\tcase T_Ident:\n \t\t\t\t{\n \t\t\t\t\tchar\t *identname;\n \t\t\t\t\tchar\t *resname;\n \n \t\t\t\t\tidentname = ((Ident *) res->val)->name;\n \t\t\t\t\thandleTargetColname(pstate, &res->name, NULL, identname);\n \t\t\t\t\tresname = (res->name) ? res->name : identname;\n! \t\t\t\t\ttransformTargetId(pstate, (Ident*)res->val, tent, resname, FALSE);\n \t\t\t\t\tbreak;\n \t\t\t\t}\n \t\t\tcase T_ParamNo:\nIndex: include/parser/parse_target.h\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/include/parser/parse_target.h,v\nretrieving revision 1.4\nretrieving revision 1.5\ndiff -c -r1.4 -r1.5\n*** parse_target.h\t1998/02/26 04:42:49\t1.4\n--- parse_target.h\t1998/05/21 03:53:51\t1.5\n***************\n*** 6,12 ****\n *\n * Copyright (c) 1994, Regents of the University of California\n *\n! * $Id: parse_target.h,v 1.4 1998/02/26 04:42:49 momjian Exp $\n *\n *-------------------------------------------------------------------------\n */\n--- 6,12 ----\n *\n * Copyright (c) 1994, Regents of the University of California\n *\n! 
* $Id: parse_target.h,v 1.5 1998/05/21 03:53:51 scrappy Exp $\n *\n *-------------------------------------------------------------------------\n */\n***************\n*** 24,28 ****\n--- 24,30 ----\n \n extern List *transformTargetList(ParseState *pstate, List *targetlist);\n extern List *makeTargetNames(ParseState *pstate, List *cols);\n+ extern void transformTargetId(ParseState *pstate, Ident *ident,\n+ \tTargetEntry *tent, char *resname, int16 resjunk);\n \n #endif\t\t\t\t\t\t\t/* PARSE_TARGET_H */\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 25 May 1998 18:30:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "On Mon, 25 May 1998, David Gould wrote:\n\n> Ok, I think I have found the source of the dropconns on \"delete\" querys\n> that are causing the current problem. The change listed below sets\n> tupType to the junkFilter (whatever that is) jf_cleanTupType unconditionally.\n> This makes a SEGV later as the tupType ends up NULL.\n> \n> Here is what CVS says:\n> ---------------\n> revision 1.46\n> date: 1998/05/21 03:53:50; author: scrappy; state: Exp; lines: +6 -4\n> \n> From: David Hartwig <[email protected]>\n> \n> Here is a patch to remove the requirement that ORDER/GROUP BY clause\n> identifiers be included in the target list.\n> --------------\n> \n> I do not believe that this could ever have passed regression. Do we have\n> the whole patch to back out, or do we need to just \"fix what we have now\"?\n\n\tfix what we have now...if its possible...\n\n> Also, perhaps we need to be more selective about checkins? 
\n\n\tWe're in an alpha/development phase here...my general opinion on\nthat is that I'd rather someone throw in code that is 95% good, with 5%\nerror, than nobody making a start anywhere...\n\n\tUntil a Beta freeze, *never* expect the server to actually\nwork...if it does, bonus. \n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 25 May 1998 19:43:26 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": ">> I do not believe that this could ever have passed regression. Do we have\n>> the whole patch to back out, or do we need to just \"fix what we have now\"?\n>> \n>> Also, perhaps we need to be more selective about checkins? \n>\n>Not sure. Marc and I reviewed it, and it looked very good. In fact, I\n>would like to see more of such patches, of course, without the destroydb\n>problem, but many patches have little quirks the author could not have\n>anticipated.\n>\n>> {\n>> JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);\n>> estate->es_junkFilter = j;\n>> >>>> tupType = j->jf_cleanTupType; /* Added by [email protected] 5/20/98 */\n>> }\n>> else\n>> estate->es_junkFilter = NULL;\n>> \n>> Here is my debug transcript for \"drop database regression\" \n>\n>Here is the original patch. I got it with the command:\n\nI have just removed the patch using patch -R and confirmed that \"drop\ntable\", and \"delete from\" works again. regression tests also look\ngood, except char/varchar/strings.\n\nNow I can start to create patches for snapshot...\n--\nTatsuo Ishii\[email protected]\n\n", "msg_date": "Tue, 26 May 1998 11:09:08 +0900", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? 
" }, { "msg_contents": "On Tue, 26 May 1998 [email protected] wrote:\n\n> I have just removed the patch using patch -R and confirmed that \"drop\n> table\", and \"delete from\" works again. regression tests also look\n> good, except char/varchar/strings.\n\n\tWait...that doesn't solve the problem...David requested that this\npatch get added in order to improve the ODBC driver, which I feel is\nimportant. Can we please investigate *why* his patch broken things, as\nopposed to eliminating it?\n\n\tDavidH? \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 25 May 1998 23:23:29 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? " }, { "msg_contents": "> \n> On Tue, 26 May 1998 [email protected] wrote:\n> \n> > I have just removed the patch using patch -R and confirmed that \"drop\n> > table\", and \"delete from\" works again. regression tests also look\n> > good, except char/varchar/strings.\n> \n> \tWait...that doesn't solve the problem...David requested that this\n> patch get added in order to improve the ODBC driver, which I feel is\n> important. Can we please investigate *why* his patch broken things, as\n> opposed to eliminating it?\n> \n> \tDavidH? \n\nYes, abosolutely. Let's give David some time investigate it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 25 May 1998 22:45:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": ">On Tue, 26 May 1998 [email protected] wrote:\n>\n>> I have just removed the patch using patch -R and confirmed that \"drop\n>> table\", and \"delete from\" works again. 
regression tests also look\n>> good, except char/varchar/strings.\n>\n>\tWait...that doesn't solve the problem...David requested that this\n>patch get added in order to improve the ODBC driver, which I feel is\n>important. Can we please investigate *why* his patch broken things, as\n>opposed to eliminating it?\n\nI wouldn't say that the patch should be removed from the CVS. I just\nneed my *private* working snapshot to make sure my patches would not\nbreak anything before submitting.\n\nI wish I could solve the problem by myself, but spending a few hours\nfor debugging last Sunday, I have not found fixes for that yet. Sorry.\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Tue, 26 May 1998 12:08:26 +0900", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? " }, { "msg_contents": "> I wouldn't say that the patch should be removed from the CVS. I just\n> need my *private* working snapshot to make sure my patches would not\n> break anything before submitting.\n> \n> I wish I could solve the problem by myself, but spending a few hours\n> for debugging last Sunday, I have not find fixes for that yet. Sorry.\n\nOK, here is a fix I have just applied to the tree. It appears to meet\nthe intent of the original patch. David H. will have to comment on its\naccuracy.\n\n---------------------------------------------------------------------------\n\n\nIndex: backend/executor/execMain.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/executor/execMain.c,v\nretrieving revision 1.46\ndiff -c -r1.46 execMain.c\n*** execMain.c\t1998/05/21 03:53:50\t1.46\n--- execMain.c\t1998/05/26 03:33:22\n***************\n*** 530,536 ****\n \t\tJunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);\n \t\testate->es_junkFilter = j;\n \n! 
\t\ttupType = j->jf_cleanTupType; /* Added by [email protected] 5/20/98 */\n \t}\n \telse\n \t\testate->es_junkFilter = NULL;\n--- 530,537 ----\n \t\tJunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);\n \t\testate->es_junkFilter = j;\n \n! \t\tif (operation == CMD_SELECT)\n! \t\t\ttupType = j->jf_cleanTupType;\n \t}\n \telse\n \t\testate->es_junkFilter = NULL;\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 25 May 1998 23:43:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> \n> >> I do not believe that this could ever have passed regression. Do we have\n> >> the whole patch to back out, or do we need to just \"fix what we have now\"?\n> >> \n> >> Also, perhaps we need to be more selective about checkins? \n> >\n> >Not sure. Marc and I reviewed it, and it looked very good. In fact, I\n> >would like to see more of such patches, of course, without the destroydb\n> >problem, but many patches have little quirks the author could not have\n> >anticipated.\n> >\n> >> {\n> >> JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);\n> >> estate->es_junkFilter = j;\n> >> >>>> tupType = j->jf_cleanTupType; /* Added by [email protected] 5/20/98 */\n> >> }\n> >> else\n> >> estate->es_junkFilter = NULL;\n> >> \n> >> Here is my debug transcript for \"drop database regression\" \n> >\n> >Here is the original patch. I got it with the command:\n> \n> I have just removed the patch using patch -R and confirmed that \"drop\n> table\", and \"delete from\" works again. 
regression tests also look\n> good, except char/varchar/strings.\n> \n> Now I can start to create patches for snapshot...\n\nMaybe I should elaborate a bit on what I meant by \"more selective about\ncheckins\".\n\nFirst, I agree that we should see more patches like this. Patches in the\nparser, optimiser, planner, etc are hard. It takes a fair amount of effort\nto understand this stuff even to the point of being able to attempt a patch\nin this area. So I applaud anyone who makes that kind of investment and wish\nthat they continue their efforts.\n\nSecond, patches in the parser, optimiser, planner, etc are hard. It is\nincredibly easy to do something that works in a given case but creates a\nproblem for some other statement. And these problems can be very obscure\nand hard to debug, this one was relatively easy.\n\nSo, how do we balance the goals of \"encouraging people to make the effort\nto do a hard patch\" and \"keeping the codeline stable enough to work from\"?\n\nWhere I work we have had good success with the following:\n\n - every night a from scratch build and regression test is run.\n\n - if the build and regression is good, then a snapshot is made into a\n \"last known good\" location. This lets someone find a good \"recent\" source\n tree even if there is a problem that doesn't get solved for a few days.\n\n - a report is generated that lists the results of the build and regression,\n AND, a list of the checkins since the \"last known good\" snapshot. This\n lets someone who just submitted a patch see: a) the patch was applied,\n b) whether it created any problems. It also helps identify conflicting\n patches etc.\n\nI believe most people submitting patches want them to work, and will be very\ngood about monitoring the results of their submissions and fixing any\nproblems, IF the results are visible. Right now they aren't really.\n\n\nThe other tool I believe to be very effective in improving code quality is\ncode review. 
My experience is that review is both more effective and\ncheaper than testing in finding problems. To that end, I suggest we create\na group of volunteer reviewers, possibly with their own mailing list. The idea\nis not to impose a bureaucratic barrier to submitting patches, but rather to\nallow people who have an idea to get some assistance on whether a given change\nwill fit in and work well. I see some people on this list using the list\nfor this purpose now, I merely propose to normalise this so that everyone\nknows that this resource is available to them, and given an actual patch\n(rather than mere discussion) to be able to identify specific persons to do\na review.\n\nI don't have strong preferences for the form of this, so ideas are welcome.\nMy initial concept supposes a group of maybe 5 to 10 people with some\nexperience in the code who would agree to review any patches within say two\ndays of submission and respond directly to the submitter. Perhaps somehow the\nmailing list could be contrived to randomly pick (or allow reviewers to pick)\nso that say two reviewers had a look at each patch. Also, I think it is\nimportant to identify any reviewers in the changelog or checkin comments so\nthat if there is a problem and the author is unavailable, there are at least\nthe reviewers with knowledge of what the patch was about.\n\nI would be happy to volunteer for a term on the (\"possible proposed mythical\")\nreview team.\n\nI would be even happier to know that next time I had a tricky patch that\nsome other heads would take the time to help me see what I had overlooked.\n\nComments?\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"you can no more argue against quality than you can argue against world\n peace or free video games.\" -- p.j. 
plauger\n", "msg_date": "Mon, 25 May 1998 22:33:05 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "On Mon, 25 May 1998, David Gould wrote:\n\n> Where I work we have had good success with the following:\n> \n> - every night a from scratch build and regression test is run.\n> \n> - if the build and regression is good, then a snapshot is made into a\n> \"last known good\" location. This lets someone find a good \"recent\" source\n> tree even if there is a problem that doesn't get solved for a few days.\n\n\tActually, ummm...I've been considering removing the snapshot's\naltogether, now that anoncvs works. The only reason I was doing it before\nwas that not everyone had access to CVSup for their particular\nplatform...the snapshots are a terrible waste of resources, especially\nconsidering that you have to download all ~3.5Meg (and growing) .tar.gz\nfile each time...whereas anoncvs/CVSup only updates those files requiring\nit...\n\n\tIMHO, the snapshot's are only important during the beta freeze\nperiod...\n\n> The other tool I believe to be very effective in improving code quality\n> is code review. My experience is that review is both more effective and\n> cheaper than testing in finding problems. To that end, I suggest we\n> create a group of volunteer reviewers, possibly with their own mailing\n> list. 
\n\n\tThat's kinda what pgsql-patches is/was meant for...some ppl (I\nwon't mention a name though) seem to get nervous if a patch isn't applied\nwithin a short period of time after being posted, but if we were to adopt\na policy of leaving a patch unapplied for X days after posting, so that\neveryone gets a chance to see/comment on it...\n\n> I don't have strong preferences for the form of this, so ideas are welcome.\n> My initial concept supposes a group of maybe 5 to 10 people with some\n> experience in the code who would agree to review any patches within say two\n> days of submission and respond directly to the submitter. Perhaps somehow the\n> mailing list could be contrived to randomly pick (or allow reviewers to pick)\n> so that say two reviewers had a look at each patch. Also, I think it is\n> important to identify any reviewers in the changelog or checkin comments so\n> that if there is a problem and the author is unavailable, there are at least\n> the reviewers with knowledge of what the patch was about.\n\n\tThis sounds reasonable to me...this is something that FreeBSD does\nright now...one of these days, I have to sit back and decode how they have\ntheir CVS setup. They have some things, as far as logs are concerned,\nthat are slightly cleaner than I have currently setup :)\n\n> I would be even happier to know that next time I had a tricky patch that\n> some other heads would take the time to help me see what I had overlooked.\n\n\tYou always have that...:)\n\n\n", "msg_date": "Tue, 26 May 1998 08:23:19 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "Yikes!\n\nI see my little patch has stirred things up a bit. Bruce, your addition does meet the needs of the\nintent of my patch. I tried it here with positive results. I hope you will keep the whole patch.\n\nFor what it is worth, I did run the regression test. 
But I did not get any failures that appeared to\nbe a result of the patch. There were, however, many failures before and after my patch. Most\nwere due to AIX system messages but there were many, though, that I could not explain. I will gladly report them\nif any one is interested.\n\nI have to admit that I was nervous about submitting my first patch into an area of code as important as this\none. I would have liked to start off with a new data type or something. Unfortunately, I was\ngetting beat up by ODBC/MS Access users which routinely generate queries which the backend could not\nhandle.\n\nThanks for your tolerance.\n\nBruce Momjian wrote:\n\n> > I wouldn't say that the patch should be removed from the CVS. I just\n> > need my *private* working snapshot to make sure my patches would not\n> > break anything before submitting.\n> >\n> > I wish I could solve the problem by myself, but spending a few hours\n> > for debugging last Sunday, I have not find fixes for that yet. Sorry.\n>\n> OK, here is a fix I have just applied to the tree. It appears to meet\n> the intent of the original patch. David H. will have to comment on its\n> accuracy.\n>\n> ---------------------------------------------------------------------------\n>\n> Index: backend/executor/execMain.c\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/backend/executor/execMain.c,v\n> retrieving revision 1.46\n> diff -c -r1.46 execMain.c\n> *** execMain.c 1998/05/21 03:53:50 1.46\n> --- execMain.c 1998/05/26 03:33:22\n> ***************\n> *** 530,536 ****\n> JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);\n> estate->es_junkFilter = j;\n>\n> ! tupType = j->jf_cleanTupType; /* Added by [email protected] 5/20/98 */\n> }\n> else\n> estate->es_junkFilter = NULL;\n> --- 530,537 ----\n> JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);\n> estate->es_junkFilter = j;\n>\n> ! if (operation == CMD_SELECT)\n> ! 
tupType = j->jf_cleanTupType;\n> }\n> else\n> estate->es_junkFilter = NULL;\n>", "msg_date": "Tue, 26 May 1998 10:14:43 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Mon, 25 May 1998, David Gould wrote:\n>> - if the build and regression is good, then a snapshot is made into a\n>> \"last known good\" location.\n\n> \tActually, ummm...I've been considering removing the snapshot's\n> altogether, now that anoncvs works.\n\nIt may be worth pointing out that cvs allows anyone to retrieve *any*\nprior state of the code. This opens up a great number of options that\na simple periodic snapshot does not. I think it's worth continuing the\nsnapshot series for those who don't want to install cvs for some reason,\nbut that probably won't be the primary access method anymore.\n\nThe thing that I thought was worth adopting from David's list was the\nnightly automatic regression test run. Assuming that there are cycles\nto spare on the server, posting the results of a build and regression\ntest attempt would help provide a reality check for everyone. (It'd\nbe too bulky to send to the mailing lists, and not worth archiving\nanyway; perhaps the output could be put up as a web page at\npostgresql.org?)\n\nThis sort of fiasco could be minimized if everyone got in the habit of\nrunning regression tests before submitting their patches. Here I have\nto disagree with Marc's opinion that it's not really important whether\npre-alpha code works. If the tree is currently broken, that prevents\neveryone else from running regression tests on what *they* are doing,\nand consequently encourages the submission of even more code that hasn't\nbeen adequately tested. I would like to see a policy that you don't\ncheck in code until it passes regression test for you locally. 
We will\nstill have failures because of (a) portability problems --- ie it works\nat your site, but not for someone else; and (b) unforeseen interactions\nbetween patches submitted at about the same time. But hopefully those\nwill be relatively easy to track down if the normal state is that things\nmostly work.\n\nWe might also consider making more use of cvs' ability to track multiple\ndevelopment branches. If several people need to cooperate on a large\nchange, they could work together in a cvs branch until their mods are\nfinished, allowing them to share development files without breaking the\nmain branch for others.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 May 1998 10:17:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? " }, { "msg_contents": "On Tue, 26 May 1998, David Hartwig wrote:\n\n> Yikes!\n> \n> I see my little patch as stirred things up a bit. Bruce, your addition\n> does meet the needs of the intent of my patch. I tried it here with\n> positive results. I hope you will keep the whole patch. \n\n\tIt will be kept...I'd rather see new stuff added that has some\n*minor* bugs (as this one turned out to be, actually) vs never seeing\nanything new because ppl are too nervous to submit them :)\n\n\n", "msg_date": "Tue, 26 May 1998 11:41:48 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> \n> This is a multi-part message in MIME format.\n> --------------929DA74C986399373CF663B0\n> Content-Type: text/plain; charset=us-ascii\n> Content-Transfer-Encoding: 7bit\n> \n> Yikes!\n> \n> I see my little patch as stirred things up a bit. Bruce, your addition does meet the needs of the\n> intent of my patch. I tried it here with positive results. 
I hope you will keep the whole patch.\n\nI was wondering, should the patch be:\n\n if (j->jf_cleanTupType)\n tupType = j->jf_cleanTupType;\n \nrather than my current:\n\n if (operation == CMD_SELECT)\n tupType = j->jf_cleanTupType;\n\nNot sure.\n \n> \n> For what it is worth, I did run the regression test. But I did not get any failures that appeared to\n> be the a result of the patch. There were, however, many failures before and after my patch. Most\n> were due to AIX system messages but there many, though, I could not explain. I will gladly report them\n> if any one is interested.\n\nThis brings up an interesting point that bears comment. There has been\ndiscussion about how to better review these patches. Certainly, the\npatch list is for general review, and many people use that for requests\nfor people to review their patches. I even post small patches for\nreview to hackers when I really need help.\n\nSecond, many bugs do not show up in the regression tests, and often the\nonly way to find the cause is to try checking/backing out recent\npatches. cvs allows us to revert our tree to any date in the past, and\nthat works well too.\n\nI have been more concerned about a patch that works, but adds some ugly\nhack that causes performance/random problems. That is where my review\neye is usually looking.\n\n> \n> I have to admit that I was nervous about submitting my first patch into an area code as important this\n> one. I would have liked to start off with a new data type or something. Unfortunately, I was\n> getting beat up by ODBC/MS Access users which routinely generate queries which the backend could not\n> handle.\n\nNo way we would NOT use this patch. Obviously, a sophisticated patch\nthat meets a real need for us.\n\n> \n> Thanks for your tolerance.\n\nThanks for the patch. We have gotten quite a few 'nice' patches\nrecently that fix some big problems.\n\nAlso, all your e-mail comes with this attachement. 
Thought you should\nknow.\n\n> \n> --------------929DA74C986399373CF663B0\n> Content-Type: text/x-vcard; charset=us-ascii; name=\"vcard.vcf\"\n> Content-Transfer-Encoding: 7bit\n> Content-Description: Card for David Hartwig\n> Content-Disposition: attachment; filename=\"vcard.vcf\"\n> \n> begin: vcard\n> fn: David Hartwig\n> n: Hartwig;David\n> email;internet: [email protected]\n> x-mozilla-cpt: ;2\n> x-mozilla-html: TRUE\n> version: 2.1\n> end: vcard\n> \n> \n> --------------929DA74C986399373CF663B0--\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 26 May 1998 11:50:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> \n> The Hermit Hacker <[email protected]> writes:\n> > On Mon, 25 May 1998, David Gould wrote:\n> >> - if the build and regression is good, then a snapshot is made into a\n> >> \"last known good\" location.\n> \n> > \tActually, ummm...I've been considering removing the snapshot's\n> > altogether, now that anoncvs works.\n> \n> It may be worth pointing out that cvs allows anyone to retrieve *any*\n> prior state of the code. This opens up a great number of options that\n> a simple periodic snapshot does not. I think it's worth continuing the\n> snapshot series for those who don't want to install cvs for some reason,\n> but that probably won't be the primary access method anymore.\n\nI have to agree with Marc. The author did test with the regression\ntests. In fact, the regression tests are not up-to-date, so there are\nmeny diffs even when the code works, and we can't expect someone to keep\nthe regression tests spotless at all times. What I normally do is to\nrun the regression tests, save the output, run them with the patch, and\ncompare the differences. 
But, sometimes, they don't show up.\n\nWhen people report problems, we do research, find the cause, and get the\ncurrent tree fixed. cvs with -Ddate to find the date it broke is\nusually all we need.\n\nAnd I am the one who wants patches applied within a few days of\nappearance. I think it encourages people to submit patches. Nothing\nmore annoying than to submit a patch that fixes a problem you have, and\nfind that it is not yet in the source tree for others to use and test.\n\n\n> This sort of fiasco could be minimized if everyone got in the habit of\n> running regression tests before submitting their patches. Here I have\n> to disagree with Marc's opinion that it's not really important whether\n> pre-alpha code works. If the tree is currently broken, that prevents\n> everyone else from running regression tests on what *they* are doing,\n> and consequently encourages the submission of even more code that hasn't\n> been adequately tested. I would like to see a policy that you don't\n> check in code until it passes regression test for you locally. We will\n> still have failures because of (a) portability problems --- ie it works\n> at your site, but not for someone else; and (b) unforeseen interactions\n> between patches submitted at about the same time. But hopefully those\n> will be relatively easy to track down if the normal state is that things\n> mostly work.\n> \n> We might also consider making more use of cvs' ability to track multiple\n> development branches. If several people need to cooperate on a large\n> change, they could work together in a cvs branch until their mods are\n> finished, allowing them to share development files without breaking the\n> main branch for others.\n\nActually, things have been working well for quite some time. The age of\nthe snapshots and the lack of anon-cvs/CVSup was slowing down people\nfrom seeing/fixing patches. The old folks had access, but the people\nwho were submitting patches did not. 
That should be fixed now.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 26 May 1998 12:00:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "On Tue, 26 May 1998, Bruce Momjian wrote:\n\n> And I am the one who wants patches applied within a few days of\n> appearance. I think it encourages people to submit patches. Nothing\n> more annoying than to submit a patch that fixes a problem you have, and\n> find that it is not yet in the source tree for others to use and test.\n\n\tExcept...of course...nobody should be *using* an Alpha/development\nsource tree except for testing :)\n\n\n", "msg_date": "Tue, 26 May 1998 12:04:12 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> \n> On Tue, 26 May 1998, David Hartwig wrote:\n> \n> > Yikes!\n> > \n> > I see my little patch as stirred things up a bit. Bruce, your addition\n> > does meet the needs of the intent of my patch. I tried it here with\n> > positive results. I hope you will keep the whole patch. \n> \n> \tIt will be kept...I'd rather see new stuff added that has some\n> *minor* bugs (as this one turned out to be, actually) vs never seeing\n> anything new because ppl are too nervous to submit them :)\n\nYep, I remember my first patch. Thought someone's machine might melt\ndown. Only later did I become confident enough to make changes with my\neyes closed. :-)\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Tue, 26 May 1998 12:07:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> \n> On Tue, 26 May 1998, Bruce Momjian wrote:\n> \n> > And I am the one who wants patches applied within a few days of\n> > appearance. I think it encourages people to submit patches. Nothing\n> > more annoying than to submit a patch that fixes a problem you have, and\n> > find that it is not yet in the source tree for others to use and test.\n> \n> \tExcept...of course...nobody should be *using* an Alpha/development\n> source tree except for testing :)\n\nYes. Good point. Our development tree is not for production use.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 26 May 1998 12:16:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> The author did test with the regression tests. In fact, the\n> regression tests are not up-to-date, so there are meny diffs even when\n> the code works, and we can't expect someone to keep the regression\n> tests spotless at all times.\n\nActually, I sympathize with David on this: I got burnt the same way\njust a couple weeks ago. (I blithely assumed that the regression tests\nwould test copy in/out ... they don't ...)\n\nPerhaps the real lesson to be learned is that a little more effort\nshould be expended on the regression tests. I have a couple of\nsuggestions:\n\n1. As far as I've seen there is no documentation on how to create\n regression tests. This should be documented and made as easy as\n possible, to encourage people to create tests for missing cases.\n\n2. 
System variations (roundoff error differences, etc) create spurious\n test complaints that make it hard to interpret the results properly.\n Can anything be done to clean this up?\n\n3. The TODO list should maintain a section on missing regression tests;\n any failure that gets by the regression tests should cause an entry\n to get made here. This list would have a side benefit of warning\n developers about areas that are not getting tested, so that they know\n they have to do some hand testing if they change relevant code.\n\nWe can start the new TODO section with:\n\n* Check destroydb. (Currently, running regression a second time checks\n this, but a single run in a clean tree won't.)\n* Check copy from stdin/to stdout.\n* Check large-object interface.\n\nWhat else have people been burnt by lately?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 May 1998 12:25:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? " }, { "msg_contents": "On Tue, 26 May 1998, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > The author did test with the regression tests. In fact, the\n> > regression tests are not up-to-date, so there are meny diffs even when\n> > the code works, and we can't expect someone to keep the regression\n> > tests spotless at all times.\n> \n> Actually, I sympathize with David on this: I got burnt the same way\n> just a couple weeks ago. (I blithely assumed that the regression tests\n> would test copy in/out ... they don't ...)\n> \n> Perhaps the real lesson to be learned is that a little more effort\n> should be expended on the regression tests. I have a couple of\n> suggestions:\n> \n> 1. As far as I've seen there is no documentation on how to create\n> regression tests. This should be documented and made as easy as\n> possible, to encourage people to create tests for missing cases.\n> \n> 2. 
System variations (roundoff error differences, etc) create spurious\n> test complaints that make it hard to interpret the results properly.\n> Can anything be done to clean this up?\n\n\tSee the expected/int2-FreeBSD.out and similar files...I've done\nwhat I can with the 'spurious test complaints...\n\n\n", "msg_date": "Tue, 26 May 1998 13:19:27 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources? " }, { "msg_contents": "Tom Lane:\n> Perhaps the real lesson to be learned is that a little more effort\n> should be expended on the regression tests. I have a couple of\n> suggestions:\n> \n> 1. As far as I've seen there is no documentation on how to create\n> regression tests. This should be documented and made as easy as\n> possible, to encourage people to create tests for missing cases.\n\nExcellent idea. If everyone making a new feature could also make a test case\nfor it this would help us keep the system stable.\n\n> 2. System variations (roundoff error differences, etc) create spurious\n> test complaints that make it hard to interpret the results properly.\n> Can anything be done to clean this up?\n\nHmmm, perhaps we could modify the tests to display results through a function\nthat rounded to the expected precision eg:\n\ninstead of\n\n select floatcol, doublecol from testtab;\n\nuse\n select display(floatcol, 8), display(doublecol, 16) from testtab;\n\n \n> 3. The TODO list should maintain a section on missing regression tests;\n> any failure that gets by the regression tests should cause an entry\n> to get made here. This list would have a side benefit of warning\n> developers about areas that are not getting tested, so that they know\n> they have to do some hand testing if they change relevant code.\n> \n> We can start the new TODO section with:\n> \n> * Check destroydb. 
(Currently, running regression a second time checks\n> this, but a single run in a clean tree won't.)\n> * Check copy from stdin/to stdout.\n> * Check large-object interface.\n> \n> What else have people been burnt by lately?\n\nThe int2, oidint2, int4, and oidint4 tests (and some others I think) are\ncurrently failing because the text of a couple error messages changed and\nthe \"expected\" output was not updated. This kind of thing is pretty annoying\nas whoever changed the messages really should have updated the tests as well.\n\nIf the current messages are preferred to the old messages, I will fix the test\noutput to match, although personally, I like the old messages better.\n\nI will argue once again for a clean snapshot that is known to pass regression.\nThis snapshot could be just a CVS tag, but it is important when starting work\non a complex change to be able to know that any problems you have when you\nare done are due to your work, not some pre-existing condition.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"I believe OS/2 is destined to be the most important operating\nsystem, and possibly program, of all time\" - Bill Gates, Nov, 1987.\n", "msg_date": "Tue, 26 May 1998 10:52:40 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "Bruce Momjian wrote:\n\n> I was wondering, should the patch be:\n>\n> if (j->jf_cleanTupType)\n> tupType = j->jf_cleanTupType;\n\n\n\n> rather than my current:\n>\n> if (operation == CMD_SELECT)\n> tupType = j->jf_cleanTupType;\n>\n> Not sure.\n>\n\nThe second option (your earlier suggestion) seems to be necessary and sufficient. 
The junk filter (and\njf_cleanTupType) will always exist, for SELECT statements, as long as the following is not a legal statement:\n\n SELECT FROM foo GROUP BY bar;\n\nCurrently the parser will not accept it. Sufficient.\n\nThe first option will set tupType, for non-SELECT statements, to something it otherwise may not have been.\nI would rather not risk affecting those calling routines which are not executing a SELECT command. At this\ntime, I do not understand them enough, and I see no benefit. Necessary?", "msg_date": "Tue, 26 May 1998 14:44:04 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> The second option (your earlier suggestion) seems to be necessary and sufficient. The junk filter (and\n> jf_cleanTupType) will always exist, for SELECT statements, as long as the following is not a legal statement:\n> \n> SELECT FROM foo GROUP BY bar;\n> \n> Currently the parser will not accept it. Sufficient.\n> \n> The first option will set tupType, for non-SELECT statements, to something it otherwise may not have been.\n> I would rather not risk effecting those calling routines which are not executing a SELECT command. At this\n> time, I do not understand them enough, and I see no benefit. Necessary?\n\nOK, I will leave it alone. Is there a way to use junk filters only in\ncases where we need them?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 26 May 1998 16:24:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> > Perhaps the real lesson to be learned is that a little more effort\n> > should be expended on the regression tests. I have a couple of\n> > suggestions:\n> > 1. 
As far as I've seen there is no documentation on how to create\n> > regression tests. This should be documented and made as easy as\n> > possible, to encourage people to create tests for missing cases.\n\nHmm. It ain't hard, but afaik the only people who have pushed on the\nregression tests are scrappy and myself. We went for years with no\nupdates to the regression tests at all, and now have a somewhat stable\nset of tests which actually measure many features of a s/w build.\n\n> Excellent idea. If everyone making a new feature could also make a\n> test case for it this would help us keep the system stable.\n\nThis would seem to be a truism. Any takers??\n\n> > 2. System variations (roundoff error differences, etc) create \n> > spurious test complaints that make it hard to interpret the results \n> > properly. Can anything be done to clean this up?\n> Hmmm, perhaps we could modify the tests to display results through a \n> function that rounded to the expected precision eg:\n> select display(floatcol, 8), display(doublecol, 16) from testtab;\n\nGee, maybe we should go back to the original v4.2/v1.0.x behavior of\nrounding _all_ double precision floating point results to 6 digits :(\n\nWe've worked hard to get all of the regression tests to match at least\none platform (at the moment, Linux/i686) and scrappy has extended the\ntest mechanism to allow for platform-specific differences. But we don't\nhave access to all of the supported platforms, so others will need to\nhelp (and they have been, at least some).\n\n> > 3. The TODO list should maintain a section on missing regression \n> > tests; any failure that gets by the regression tests should cause an \n> > entry to get made here. 
This list would have a side benefit of \n> > warning developers about areas that are not getting tested, so that \n> > they know they have to do some hand testing if they change relevant \n> > code.\n\nimho it will take more effort to maintain a todo list than to just\nsubmit a patch for testing. I would be happy to maintain the \"expected\"\noutput if people would suggest new tests (and better, submit patches for\nthe test).\n\n> I will argue once again for a clean snapshot that is known to pass \n> regression.\n\nI snapshot the system all the time, and then do development for a week\nor two or more on that revlocked snapshot. afaik the failures in\nregression testing at the time I submitted my last \"typing\" patches were\ndue to differences in type conversion behavior; I didn't want the\nchanged behavior to become formalized until others had had a chance to\ntest and comment. (btw, no one has, and anyway I'm changing the results\nfor most of those back to what it was before).\n\nIt's pretty clear that many patches are submitted without full\nregression testing; either way it would be helpful to post a comment\nwith patches saying how the patches affect the regression tests, or that\nno testing was done. I'd like to see another person test patches before\ncommitting to the source tree, but others might like to see where the\npatches/changes are heading even before that so I can see arguments both\nways.\n\nAs has been suggested by yourself and others, regression test\ncontributions would be very helpful; so far the discussion amounts to\nasking scrappy and myself to do _more_ work on the regression tests. 
I'd\nlike to see someone offering specific help at some point :)\n\nAnyway, if Marc or I led this discussion you will probably just get more\nideas similar to what is already there; more brainstorming on the\nhackers list from y'all will lead to some good new ideas so I'll shut up\nnow...\n\n - Tom\n", "msg_date": "Wed, 27 May 1998 02:22:05 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "Tom Lane wrote:\n> \n> 2. System variations (roundoff error differences, etc) create spurious\n> test complaints that make it hard to interpret the results properly.\n> Can anything be done to clean this up?\n> \n\nIt would be good if the backend looked at errno and put out an\nappropriate sqlcode number and a human readable message, instead\nof using the system error messages.\nThat would eliminate some of the regression test diff output.\n\n/* m */\n", "msg_date": "Wed, 27 May 1998 13:34:01 +0200", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > The second option (your earlier suggestion) seems to be necessary and sufficient. The junk filter (and\n> > jf_cleanTupType) will always exist, for SELECT statements, as long as the following is not a legal statement:\n> >\n> > SELECT FROM foo GROUP BY bar;\n> >\n> > Currently the parser will not accept it. Sufficient.\n> >\n> > The first option will set tupType, for non-SELECT statements, to something it otherwise may not have been.\n> > I would rather not risk effecting those calling routines which are not executing a SELECT command. At this\n> > time, I do not understand them enough, and I see no benefit. Necessary?\n>\n> OK, I will leave it alone. 
Is there a way to use junk filters only in\n> cases where we need them?\n\nI have not YET come up with a clean method for detection of a resjunk flag being set, on some resdom in the\ntarget list. I will give it another look. It does seem a bit heavy handed to construct the\nfilter unconditionally on all SELECTS.\n\n", "msg_date": "Wed, 27 May 1998 13:53:03 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> I have not YET come up with a clean method for detection of the a resjunk flag being set, on some resdom in the\n> tatget list. I will give it another look. It does seem a bit heavy handed to construct the\n> filter unconditionally on all SELECTS.\n\nCan you foreach() through the target list, and check/set a flag, then\ncall the junk filter if you found a resjunk?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 27 May 1998 15:07:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "The Hermit Hacker wrote: \n> On Tue, 26 May 1998, Tom Lane wrote:\n> > Perhaps the real lesson to be learned is that a little more effort\n> > should be expended on the regression tests. I have a couple of\n> > suggestions:\n> > \n> > 1. As far as I've seen there is no documentation on how to create\n> > regression tests. This should be documented and made as easy as\n> > possible, to encourage people to create tests for missing cases.\n> > \n> > 2. 
System variations (roundoff error differences, etc) create spurious\n> > test complaints that make it hard to interpret the results properly.\n> > Can anything be done to clean this up?\n> \n> \tSee the expected/int2-FreeBSD.out and similar files...I've done\n> what I can with the 'spurious test complaints...\n> \n\nThanks. One question, is there any reason we can't use the intx tests on\nall the platforms? I realize that float is another set of problems, but it\nseems that int should be the same?\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n", "msg_date": "Sun, 31 May 1998 01:00:00 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "On Sun, 31 May 1998, David Gould wrote:\n\n> Thanks. One question, is there any reason we can't use the intx tests on\n> all the platforms? I realize that float it another set of problems, but it\n> seems that int should be be the same?\n\n10c10\n< ERROR: pg_atoi: error reading \"100000\": Result too large\n---\n> ERROR: pg_atoi: error reading \"100000\": Math result not representable\n\nThe changes are more error message related than anything...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 31 May 1998 14:40:54 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> \n> On Sun, 31 May 1998, David Gould wrote:\n> \n> > Thanks. One question, is there any reason we can't use the intx tests on\n> > all the platforms? 
I realize that float it another set of problems, but it\n> > seems that int should be be the same?\n> \n> 10c10\n> < ERROR: pg_atoi: error reading \"100000\": Result too large\n> ---\n> > ERROR: pg_atoi: error reading \"100000\": Math result not representable\n> \n> The changes are more error message relatd then anything...\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\nThats what I thought. So can we just rename the int*-*BSD.out to int*.out?\n-dg\n", "msg_date": "Sun, 31 May 1998 11:36:27 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "On Sun, 31 May 1998, David Gould wrote:\n\n> > On Sun, 31 May 1998, David Gould wrote:\n> > \n> > > Thanks. One question, is there any reason we can't use the intx tests on\n> > > all the platforms? I realize that float it another set of problems, but it\n> > > seems that int should be be the same?\n> > \n> > 10c10\n> > < ERROR: pg_atoi: error reading \"100000\": Result too large\n> > ---\n> > > ERROR: pg_atoi: error reading \"100000\": Math result not representable\n> > \n> > The changes are more error message relatd then anything...\n> > \n> > Marc G. Fournier \n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> Thats what I thought. So can we just rename the int*-*BSD.out to int*.out?\n\n\tNo cause then that will break the Linux regression tests :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 31 May 1998 15:42:46 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" 
}, { "msg_contents": "> > > > Is there any reason we can't use the intx tests on all the \n> > > > platforms?\n> > > The changes are more error message relatd then anything...\n> > So can we just rename the int*-*BSD.out to int*.out?\n> No cause then that will break the Linux regression tests :)\n\n... which has been the regression reference platform since scrappy and I\nresurrected the regression test suite 'bout a year ago for v6.1...\n\nI assume that most platforms have some differences. Or would we find\nlots more matching each other if we chose something other than Linux for\nthe reference output?\n\n - Tom\n", "msg_date": "Sun, 31 May 1998 19:01:07 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": " Marc G. Fournier wrote:\n> > > On Sun, 31 May 1998, David Gould wrote:\n> > > \n> > > > Thanks. One question, is there any reason we can't use the intx tests on\n> > > > all the platforms? I realize that float it another set of problems, but it\n> > > > seems that int should be be the same?\n> > > \n> > > 10c10\n> > > < ERROR: pg_atoi: error reading \"100000\": Result too large\n> > > ---\n> > > > ERROR: pg_atoi: error reading \"100000\": Math result not representable\n> > > \n> > > The changes are more error message relatd then anything...\n> > > \n> > Thats what I thought. So can we just rename the int*-*BSD.out to int*.out?\n> \n> \tNo cause then that will break the Linux regression tests :)\n\nFor the \"int\" tests? I hope not. Anyhow, I will test this and see if I can\nclean up some regression issues. I plan to break a lot of stuff in the next\nfew weeks (ok, months) and sure want to be able to count on the regression\nsuite to help me find my way.\n\n-dg\n\n", "msg_date": "Sun, 31 May 1998 15:07:45 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current sources?" 
}, { "msg_contents": "> > > > > Is there any reason we can't use the intx tests on all the \n> > > > > platforms?\n> > > > The changes are more error message relatd then anything...\n> > > So can we just rename the int*-*BSD.out to int*.out?\n> > No cause then that will break the Linux regression tests :)\n> \n> ... which has been the regression reference platform since scrappy and I\n> resurrected the regression test suite 'bout a year ago for v6.1...\n> \n> I assume that most platforms have some differences. Or would we find\n> lots more matching each other if we chose something other than Linux for\n> the reference output?\n> \n> - Tom\n\nHmmm, I find that I get lots of diffs on the floating point tests as\nI am running Linux with the new \"glibc\". I suspect the reference platform\nis the old \"libc5\" Linux. We might want to move the reference to \"glibc\" \nLinux as this will be the majority platform very soon. And since glibc is\nnot just for Linux it will even help with other platforms in the coming\nfew months.\n\nAnyway, I am willing to work on the tests a little but do not want to \"take\nthem over\". Who \"owns\" them now, perhaps I could co-ordinate and ask for\nadvice from that person?\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Sun, 31 May 1998 15:13:15 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "On Sun, 31 May 1998, David Gould wrote:\n\n> Anyway, I am willing to work on the tests a little but do not want to \"take\n> them over\". 
Who \"owns\" them now, perhaps I could co-ordinate and ask for\n> advice from that person?\n\n\tNobody \"owns\" them...Thomas and I have tried to keep them\nrelatively up to date, with Thomas doing the most part of the work on a\nLinux platform...\n\n\tStuff like the int* test 'expected' output files are generated\nunder Linux, which generates a different error message then the same\ntest(s) under FreeBSD/NetBSD :(\n\n\n\n", "msg_date": "Sun, 31 May 1998 18:25:02 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> On Sun, 31 May 1998, David Gould wrote:\n> \n> > Anyway, I am willing to work on the tests a little but do not want to \"take\n> > them over\". Who \"owns\" them now, perhaps I could co-ordinate and ask for\n> > advice from that person?\n> \n> \tNobody \"owns\" them...Thomas and I have tried to keep them\n> relatively up to date, with Thomas doing the most part of the work on a\n> Linux platform...\n> \n> \tStuff like the int* test 'expected' output files are generated\n> under Linux, which generates a different error message then the same\n> test(s) under FreeBSD/NetBSD :(\n\nOk, now I am confused. Isn't the error message \"our\" error message? If so,\ncan't we make it the same?\n\n-dg\n\n", "msg_date": "Sun, 31 May 1998 16:17:33 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> > Nobody \"owns\" them...Thomas and I have tried to keep them\n> > relatively up to date, with Thomas doing the most part of the work \n> > on a Linux platform...\n> > Stuff like the int* test 'expected' output files are generated\n> > under Linux, which generates a different error message then the same\n> > test(s) under FreeBSD/NetBSD :(\n> Ok, now I am confused. Isn't the error message \"our\" error message? If \n> so, can't we make it the same?\n\nNope. 
Some messages come from the system apparently. I can't remember\nhow they come about, but the differences are not due to #ifdef FreeBSD\nblocks in the code :)\n\nThe only differences I know of in the regression tests are due to\nnumeric rounding, math libraries and system error messages.\n\nI will point out that although no one really \"owns\" the regression tests\n(in the spirit that everyone can and should contribute) I (and others)\nhave run them extensively in support of releases. It is important that\nwhoever is running the \"reference platform\" be willing to run regression\ntests ad nauseum, and to track down any problems. I've done so the last\nfew releases.\n\nWhen/if this doesn't happen, we get a flakey release.\n\n - Tom\n", "msg_date": "Mon, 01 Jun 1998 06:29:55 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> > > Nobody \"owns\" them...Thomas and I have tried to keep them\n> > > relatively up to date, with Thomas doing the most part of the work \n> > > on a Linux platform...\n> > > Stuff like the int* test 'expected' output files are generated\n> > > under Linux, which generates a different error message then the same\n> > > test(s) under FreeBSD/NetBSD :(\n> > Ok, now I am confused. Isn't the error message \"our\" error message? If \n> > so, can't we make it the same?\n> \n> Nope. Some messages come from the system apparently. I can't remember\n> how they come about, but the differences are not due to #ifdef FreeBSD\n> blocks in the code :)\n\nThank goodness! I always worry about that when dealing with *BSD people ;-)\n \n> The only differences I know of in the regression tests are due to\n> numeric rounding, math libraries and system error messages.\n\nThat is about what I see. 
\n\n> I will point out that although no one really \"owns\" the regression tests\n> (in the spirit that everyone can and should contribute) I (and others)\n> have run them extensively in support of releases. It is important that\n> whoever is running the \"reference platform\" be willing to run regression\n> tests ad nauseum, and to track down any problems. I've done so the last\n> few releases.\n\nOk, I will make a set of Linux glibc expected files for 6.3.2 and if that\nworks send them in. Not sure how to handle the reference Linux vs glibc\nLinux issue in terms of the way the tests are structured and platforms named,\nbut they do have different rounding behavior and messages.\n\nOf course, someone is welcome to beat me to this, no really, go ahead...\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n\n", "msg_date": "Mon, 1 Jun 1998 23:01:14 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> > It is important that\n> > whoever is running the \"reference platform\" be willing to run \n> > regression tests ad nauseum, and to track down any problems.\n> Ok, I will make a set of Linux glibc expected files for 6.3.2 and if \n> that works send them in. Not sure how to handle the reference Linux vs \n> glibc Linux issue in terms of the way the tests are structured and \n> platforms named, but they do have different rounding behavior and \n> messages.\n\nI'm running RH5.0 at work, but have RH4.2 at home. I'm reluctant to\nupgrade at home because I have _all_ Postgres releases from v1.0.9 to\ncurrent installed and I can fire them up for testing in less than a\nminute. 
If I upgrade to the new glibc2, I might have trouble rebuilding\nthe old source trees. Anyway, will probably upgrade sometime in the next\nfew months, and then the reference platform will be glibc2-based.\n\nIf you are generating new \"expected\" files for glibc2 shouldn't they be\nbased on the current development tree? Or are you providing them as a\npatch for v6.3.2 to be installed in /pub/patches??\n\n - Tom\n", "msg_date": "Tue, 02 Jun 1998 13:56:17 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "On Mon, 1 Jun 1998, David Gould wrote:\n\n> > Nope. Some messages come from the system apparently. I can't remember\n> > how they come about, but the differences are not due to #ifdef FreeBSD\n> > blocks in the code :)\n> \n> Thank goodness! I always worry about that when dealing with *BSD people ;-)\n\n\t*BSD people?? At least us *BSD people work at making sure that\nsoftware developed works on everyone's elses choice of OS too...not like\nsome Linux developers out there (ie. Wine) :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 3 Jun 1998 18:26:46 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> On Mon, 1 Jun 1998, David Gould wrote:\n> \n> > > Nope. Some messages come from the system apparently. I can't remember\n> > > how they come about, but the differences are not due to #ifdef FreeBSD\n> > > blocks in the code :)\n> > \n> > Thank goodness! I always worry about that when dealing with *BSD people ;-)\n> \n> \t*BSD people?? At least us *BSD people work at making sure that\n> software developed works on everyone's elses choice of OS too...not like\n> some Linux developers out there (ie. 
Wine) :)\n\nI hear that they didn't do an AS/400 port either ;-).\n-dg\n", "msg_date": "Wed, 3 Jun 1998 20:18:11 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> \n> >> PPC/Linux has been broken too.\n> >\n> >Please let me know what the problem was, even if it was just the 'global tas'\n> >thing. I am trying to make sure this works on all platforms. Thanks.\n> \n> Here are patches for s_lock.c (against May23 snapshot).\n> ----------------------------------------------------------\n> *** s_lock.c.orig\tMon May 25 18:08:20 1998\n> --- s_lock.c\tMon May 25 18:08:57 1998\n> ***************\n> *** 151,161 ****\n> \n> #if defined(PPC)\n> \n> ! static int\n> ! tas_dummy()\n> {\n> \t__asm__(\"\t\t\t\t\\n\\\n> - tas:\t\t\t\t\t\t\\n\\\n> \t\t\tlwarx\t5,0,3\t\\n\\\n> \t\t\tcmpwi\t5,0\t\t\\n\\\n> \t\t\tbne\t\tfail\t\\n\\\n> --- 151,160 ----\n> \n> #if defined(PPC)\n> \n> ! int\n> ! tas(slock_t *lock)\n> {\n> \t__asm__(\"\t\t\t\t\\n\\\n> \t\t\tlwarx\t5,0,3\t\\n\\\n> \t\t\tcmpwi\t5,0\t\t\\n\\\n> \t\t\tbne\t\tfail\t\\n\\\n\nThis patch appears to have been applied already.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 16 Jun 1998 03:53:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> > >> PPC/Linux has been broken too.\n> > >\n> > >Please let me know what the problem was, even if it was just the 'global tas'\n> > >thing. I am trying to make sure this works on all platforms. 
Thanks.\n> > \n> > Here are patches for s_lock.c (against May23 snapshot).\n> > ----------------------------------------------------------\n> > *** s_lock.c.orig\tMon May 25 18:08:20 1998\n> > --- s_lock.c\tMon May 25 18:08:57 1998\n> > ***************\n> > *** 151,161 ****\n> > \n> > #if defined(PPC)\n> > \n> > ! static int\n> > ! tas_dummy()\n> > {\n> > \t__asm__(\"\t\t\t\t\\n\\\n> > - tas:\t\t\t\t\t\t\\n\\\n> > \t\t\tlwarx\t5,0,3\t\\n\\\n> > \t\t\tcmpwi\t5,0\t\t\\n\\\n> > \t\t\tbne\t\tfail\t\\n\\\n> > --- 151,160 ----\n> > \n> > #if defined(PPC)\n> > \n> > ! int\n> > ! tas(slock_t *lock)\n> > {\n> > \t__asm__(\"\t\t\t\t\\n\\\n> > \t\t\tlwarx\t5,0,3\t\\n\\\n> > \t\t\tcmpwi\t5,0\t\t\\n\\\n> > \t\t\tbne\t\tfail\t\\n\\\n> \n> This patch appears to have been applied already.\n> \n> -- \n\nYes. I picked up all the S_LOCK related patches and messages from the\nmailinglist and folded them into the big S_LOCK patch that just got commited.\nSo, unless I have messed up, or someone comes up with something new, no\nother S_LOCK patches should be applied.\n\nThanks\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n", "msg_date": "Tue, 16 Jun 1998 01:20:36 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> Bruce Momjian wrote:\n> \n> > > The second option (your earlier suggestion) seems to be necessary and sufficient. The junk filter (and\n> > > jf_cleanTupType) will always exist, for SELECT statements, as long as the following is not a legal statement:\n> > >\n> > > SELECT FROM foo GROUP BY bar;\n> > >\n> > > Currently the parser will not accept it. 
Sufficient.\n> > >\n> > > The first option will set tupType, for non-SELECT statements, to something it otherwise may not have been.\n> > > I would rather not risk affecting those calling routines which are not executing a SELECT command. At this\n> > > time, I do not understand them enough, and I see no benefit. Necessary?\n> >\n> > OK, I will leave it alone. Is there a way to use junk filters only in\n> > cases where we need them?\n> \n> I have not YET come up with a clean method for detection of a resjunk flag being set, on some resdom in the\n> target list, by a GROUP/ORDER BY. I will give it another look. It does seem a bit heavy handed to construct the\n> filter unconditionally on all SELECTS.\n\nDavid, attached is a patch to conditionally use the junk filter only\nwhen there is a Resdom that has the resjunk field set. Please review it\nand let me know if there are any problems with it.\n\nI am committing the patch to the development tree.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 18 Jul 1998 23:45:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "\n\nBruce Momjian wrote:\n\n> > Bruce Momjian wrote:\n> >\n> > > > The second option (your earlier suggestion) seems to be necessary and sufficient. The junk filter (and\n> > > > jf_cleanTupType) will always exist, for SELECT statements, as long as the following is not a legal statement:\n> > > >\n> > > > SELECT FROM foo GROUP BY bar;\n> > > >\n> > > > Currently the parser will not accept it. 
Sufficient.\n> > > >\n> > > > The first option will set tupType, for non-SELECT statements, to something it otherwise may not have been.\n> > > > I would rather not risk affecting those calling routines which are not executing a SELECT command. At this\n> > > > time, I do not understand them enough, and I see no benefit. Necessary?\n> > >\n> > > OK, I will leave it alone. Is there a way to use junk filters only in\n> > > cases where we need them?\n> >\n> > I have not YET come up with a clean method for detection of a resjunk flag being set, on some resdom in the\n> > target list, by a GROUP/ORDER BY. I will give it another look. It does seem a bit heavy handed to construct the\n> > filter unconditionally on all SELECTS.\n>\n> David, attached is a patch to conditionally use the junk filter only\n> when there is a Resdom that has the resjunk field set. Please review it\n> and let me know if there are any problems with it.\n>\n> I am committing the patch to the development tree.\n\nI did not get any attached patch. ??? I can check it out at home where I have cvsup.\n\nWere there any confirmed problems caused by the aggressive use of the junkfilter? I ask because, adding this extra\ncheck probably will not resolve them. It may only reduce the problem.\n\nI was planning on including an additional check for resjunk as part of another patch I am working on. (GROUP/ORDER BY\nfunc(x) where func(x) is not in the targetlist) Graciously accepted.\n\n", "msg_date": "Mon, 20 Jul 1998 15:35:23 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?" }, { "msg_contents": "> I did not get any attached patch. ??? I can check it out at home where I have cvsup.\n\nMaybe I forgot to attach it.\n\n> \n> Were there any confirmed problems caused by the aggressive use of the junkfilter? I ask because, adding this extra\n> check probably will not resolve them. 
It may only reduce the problem.\n\nI did not address the problems. This will probably just reduce them.\n\n> \n> I was planning on including an additional check for resjunk as part of another patch I am working on. (GROUP/ORDER BY\n> func(x) where func(x) is not in the targetlist) Graciously accepted.\n> \n> \n\nThis is the code fragment I added to execMain.c:\n\n---------------------------------------------------------------------------\n\n {\n bool junk_filter_needed = false;\n List *tlist;\n \n if (operation == CMD_SELECT)\n {\n foreach(tlist, targetList)\n {\n TargetEntry *tle = lfirst(tlist);\n \n if (tle->resdom->resjunk)\n {\n junk_filter_needed = true;\n break;\n }\n }\n }\n \n if (operation == CMD_UPDATE || operation == CMD_DELETE ||\n operation == CMD_INSERT ||\n (operation == CMD_SELECT && junk_filter_needed))\n {\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 20 Jul 1998 16:20:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current sources?t" } ]
[ { "msg_contents": "That sounds like a good idea.\n\nHow about having them stored in a new system table (say pg_errormsg) which\ncontains each possible error in all the supported languages. That way, you\ncan have multiple language support when users from different countries use\nthe same server?\n\nI have done (in other projects), multiple language support in Java, and it's\nquite simple to implement.\n\n--\nPeter T Mount, [email protected], [email protected]\nJDBC FAQ: http://www.retep.org.uk/postgres\nPlease note that this is from my works email. If you reply, please cc my\nhome address.\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On\nBehalf Of Jose' Soares Da Silva\nSent: Friday, May 22, 1998 10:59 AM\nTo: hackers postgres\nCc: general postgres\nSubject: [HACKERS] error messages not only English\n\n\nHi all,\n\nI see that PostgreSQL mainly gives error messages in English, I see also\nthat\nin some cases there's the possibility to configure it to give messages in\nother languages like global.c that may be configured to give messages in\nGerman.\nMySQL gives the possibility to configure it using an external file\ncontaining\nthe messages by specifying it using the parameter LANGUAGE=<language>\nwhere <language> is one of the following:\n\n czech\n english\n french\n germany\n italian\n norwegian\n norwegian-ny\n polish\n portuguese\n spanish\n swedish\n\nIt will be great if we could have also this feature on PostreSQL.\nI'm available to help on translation to Portuguese, Spanish and Italian.\n Jose'\n\n", "msg_date": "Fri, 22 May 1998 11:07:23 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] error messages not only English" }, { "msg_contents": "Peter Mount wrote:\n> \n> That sounds like a good idea.\n> \n> How about having them stored in a new system table (say pg_errormsg) which\n> contains each possible error in all the supported languages. 
That way, you\n> can have multiple language support when users from different countries use\n> the same server?\n\nYes, this is nice. One note: server have to load from this table all messages \nin a language requested by user when switching to this language - it's not\npossible to read any table from elog() in most cases.\n\nVadim\n", "msg_date": "Tue, 26 May 1998 23:55:27 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] RE: [HACKERS] error messages not only English" }, { "msg_contents": "On Tue, 26 May 1998, Vadim Mikheev wrote:\n\n> Peter Mount wrote:\n> > \n> > That sounds like a good idea.\n> > \n> > How about having them stored in a new system table (say pg_errormsg) which\n> > contains each possible error in all the supported languages. That way, you\n> > can have multiple language support when users from different countries use\n> > the same server?\n> \n> Yes, this is nice. One note: server have to load from this table all messages \n> in a language requested by user when switching to this language - it's not\n> possible to read any table from elog() in most cases.\n\n\tHrmmm...one thing to note with any of this is that by 'hardcoding'\nin the errormsg itself, it makes it difficult to 'internationalize' a\nprogram...in that I can't run a single server on my machine that can\neasily deal with a 'French' customer vs an 'English' one, vs a 'Japanese'\none...I'd have to recompile for each.\n\n\tVadim's other point, about putting this in the front end vs the\nbackend, I think, is more appropriate, that way it is application specific\nvs server...\n\n\n", "msg_date": "Tue, 26 May 1998 12:00:41 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] RE: [HACKERS] error messages not only English" }, { "msg_contents": "> \n> Peter Mount wrote:\n> > \n> > That sounds like a good idea.\n> > \n> > How about having them stored in a new system table 
(say pg_errormsg) which\n> > contains each possible error in all the supported languages. That way, you\n> > can have multiple language support when users from different countries use\n> > the same server?\n> \n> Yes, this is nice. One note: server have to load from this table all messages \n> in a language requested by user when switching to this language - it's not\n> possible to read any table from elog() in most cases.\n\nError messages in a system table. That is cool, and would be very easy\nto add/maintain. Would have to be loaded into C variables for use,\nhowever.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 26 May 1998 12:09:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] RE: [HACKERS] error messages not only English" }, { "msg_contents": "> \tHrmmm...one thing to note with any of this is that by 'hardcoding'\n> in the errormsg itself, it makes it difficult to 'internationalize' a\n> program...in that I can't run a single server on my machine that can\n> easily deal with a 'French' customer vs an 'English' one, vs a 'Japanese'\n> one...I'd have to recompile for each.\n> \n> \tVadim's other point, about putting this in the front end vs the\n> backend, I think, is more appropriate, that way it is application specific\n> vs server...\n\nThey could be over-ridden by any postgres backend. I would recommend\nthe postmaster loading in the default, and any backend can change it. \nHave to get the exec() removed first.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Tue, 26 May 1998 12:11:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] RE: [HACKERS] error messages not only English" }, { "msg_contents": "> \tHrmmm...one thing to note with any of this is that by 'hardcoding'\n> in the errormsg itself, it makes it difficult to 'internationalize' a\n> program...in that I can't run a single server on my machine that can\n> easily deal with a 'French' customer vs an 'English' one, vs a 'Japanese'\n> one...I'd have to recompile for each.\n\nOops, postmaster can't access any database tables, so each backend\nwould have to load it. I would recommend dumping the default out with\nCOPY as part of initdb, let the postmaster load that file, and backends\ncan load their own. In fact, we may just keep them as COPY files, so\nthey can be easily edited and loaded/unloaded.\n\n> \n> \tVadim's other point, about putting this in the front end vs the\n> backend, I think, is more appropriate, that way it is application specific\n> vs server...\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 26 May 1998 12:13:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] RE: [HACKERS] error messages not only English" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Tue, 26 May 1998, Vadim Mikheev wrote:\n> >\n> > Yes, this is nice. 
One note: server have to load from this table all messages\n> > in a language requested by user when switching to this language - it's not\n> > possible to read any table from elog() in most cases.\n> \n> Hrmmm...one thing to note with any of this is that by 'hardcoding'\n> in the errormsg itself, it makes it difficult to 'internationalize' a\n> program...in that I can't run a single server on my machine that can\n> easily deal with a 'French' customer vs an 'English' one, vs a 'Japanese'\n> one...I'd have to recompile for each.\n> \n> Vadim's other point, about putting this in the front end vs the\n> backend, I think, is more appropriate, that way it is application specific\n> vs server...\n\nI didn't mean to put them in the front-end - this is another good idea :).\nYes, we could let fe-libpq to read message corresponding to received\nerror code in appropriate file on client side.\n\nVadim\n", "msg_date": "Wed, 27 May 1998 00:16:49 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] RE: [HACKERS] error messages not only English" }, { "msg_contents": "On Wed, 27 May 1998, Vadim Mikheev wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > On Tue, 26 May 1998, Vadim Mikheev wrote:\n> > >\n> > > Yes, this is nice. One note: server have to load from this table all messages\n> > > in a language requested by user when switching to this language - it's not\n> > > possible to read any table from elog() in most cases.\n> > \n> > Hrmmm...one thing to note with any of this is that by 'hardcoding'\n> > in the errormsg itself, it makes it difficult to 'internationalize' a\n> > program...in that I can't run a single server on my machine that can\n> > easily deal with a 'French' customer vs an 'English' one, vs a 'Japanese'\n> > one...I'd have to recompile for each.\n\nThis could be got round by using error numbers (which is now on the TODO\nlist). 
Error messages that include data (like table names) could include\nthem using fprintf style %d or %s parameters.\n\n> > Vadim's other point, about putting this in the front end vs the\n> > backend, I think, is more appropriate, that way it is application specific\n> > vs server...\n> \n> I didn't mean to put them in the front-end - this is another good idea :).\n> Yes, we could let fe-libpq to read message corresponding to received\n> error code in appropriate file on client side.\n\nIn libpq, this could be added to PQerrorMessage(). This could attempt to\nget the native language error message from the DB, defaulting to either an\nenglish one, or just the error code (it would be a bad problem in this\ncase).\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Tue, 26 May 1998 19:05:05 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] RE: [HACKERS] error messages not only English" }, { "msg_contents": "On Tue, 26 May 1998, Bruce Momjian wrote:\n\n> > \n> > Peter Mount wrote:\n> > > \n> > > That sounds like a good idea.\n> > > \n> > > How about having them stored in a new system table (say pg_errormsg) which\n> > > contains each possible error in all the supported languages. That way, you\n> > > can have multiple language support when users from different countries use\n> > > the same server?\n> > \n> > Yes, this is nice. One note: server have to load from this table all messages \n> > in a language requested by user when switching to this language - it's not\n> > possible to read any table from elog() in most cases.\n> \n> Error messages in a system table. That is cool, and would be very easy\n> to add/maintain. 
Would have to be loaded into C variables for use,\n> however.\n\nWhy into C variables? You could have a function that returns the correct\nstring for the error code, and have it default if it can't access the\ntable.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Tue, 26 May 1998 19:07:35 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] RE: [HACKERS] error messages not only English" }, { "msg_contents": "> \n> Why into C variables? You could have a function that returns the correct\n> string for the error code, and have it default if it can't access the\n> table.\n\nSo we run an internal SQL query to get the error message from a table? \nI guess we could do it, but the system state while in elog() is only\npartial. You would have to do the longjump, reset, then run the query.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 26 May 1998 14:18:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] RE: [HACKERS] error messages not only English" }, { "msg_contents": "Peter T Mount <[email protected]> writes:\n> In libpq, this could be added to PQerrorMessage(). This could attempt to\n> get the native language error message from the DB, defaulting to either an\n> english one, or just the error code (it would be a bad problem in this\n> case).\n\nUm. libpq has its own error messages that it can generate --- the most\nobvious ones being those about \"failed to connect to postmaster\"\nand \"lost connection to backend\". 
How is it supposed to get a localized\nequivalent message from the server in cases like that?\n\nBear in mind that libpq may be executing on a remote machine, so\n\"have it read the error message file directly\" is not a usable answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 May 1998 15:59:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] RE: [HACKERS] error messages not only English " }, { "msg_contents": "On Tue, 26 May 1998, Bruce Momjian wrote:\n\n> > \n> > Why into C variables? You could have a function that returns the correct\n> > string for the error code, and have it default if it can't access the\n> > table.\n> \n> So we run an internal SQL query to get the error message from a table? \n> I guess we could do it, but the system state while in elog() is only\n> partial. You would have to do the longjump, reset, then ru the query.\n\nAh, I see what you mean.\n\nThe idea of using text files, could be a way round this.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Tue, 26 May 1998 21:12:09 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] RE: [HACKERS] error messages not only English" }, { "msg_contents": "On Tue, 26 May 1998, Tom Lane wrote:\n\n> Peter T Mount <[email protected]> writes:\n> > In libpq, this could be added to PQerrorMessage(). This could attempt to\n> > get the native language error message from the DB, defaulting to either an\n> > english one, or just the error code (it would be a bad problem in this\n> > case).\n> \n> Um. 
libpq has its own error messages that it can generate --- the most\n> obvious ones being those about \"failed to connect to postmaster\"\n> and \"lost connection to backend\". How is it supposed to get a localized\n> equivalent message from the server in cases like that?\n> \n> Bear in mind that libpq may be executing on a remote machine, so\n> \"have it read the error message file directly\" is not a usable answer.\n\nI'd think these would be ones where the current messages wouldn't be\ntranslated, simply because translation is not possible at those points.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Tue, 26 May 1998 21:34:19 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] RE: [HACKERS] error messages not only English " } ]
[ { "msg_contents": ">.. and for ordinary column datatypes of fixed properties, it needn't\n>have *any* fields. That would more than pay for the space cost of\n>supporting a variable-width data type, I bet. I like this.\n\nActually not, since atttypmod is stored with the table definition, it does not waste any space\non a per tuple basis. So I think the correct solution would rather be to extend the atttypmod idea \n(maybe make atttypmod an array). Maybe we should add an atttypformat field of type varchar() \n(this could be used for language and the like). \n\nIt would be rather bad to convert fixed length fields into varlena, since varlena costs a lot\nduring tuple access. The cheapest rows are those that have an overall fixed length.\nSo I think it is best to store as much info with the table definition as possible.\n\n> Once atttypmod is exposed to applications it will be much harder to\n> change its representation or meaning, so I'd suggest getting this right\n> before 6.4 comes out. If that doesn't seem feasible, I think I'd even\n> vote for backing out the change that makes atttypmod visible until it\n> can be done right.\n\natttypmod is the right direction, it only currently lacks extendability.\n\nAndreas\n\n", "msg_date": "Fri, 22 May 1998 12:10:49 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [DOCS] Re: FE/BE protocol revision patch " } ]
[ { "msg_contents": "Hi all,\n\nI see that PostgreSQL mainly gives error messages in English, I see also that\nin some cases there's the possibility to configure it to give messages in \nother languages like global.c that may be configured to give messages in\nGerman.\nMySQL gives the possibility to configure it using an external file containing\nthe messages by specifying it using the parameter LANGUAGE=<language>\nwhere <language> is one of the following:\n\n czech\n english\n french\n germany\n italian\n norwegian\n norwegian-ny\n polish\n portuguese\n spanish\n swedish\n\nIt will be great if we could have also this feature on PostreSQL. \nI'm available to help on translation to Portuguese, Spanish and Italian.\n Jose'\n\n", "msg_date": "Fri, 22 May 1998 10:52:19 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "error messages not only English" }, { "msg_contents": "Added to TODO.\n\n> \n> Hi all,\n> \n> I see that PostgreSQL mainly gives error messages in English, I see also that\n> in some cases there's the possibility to configure it to give messages in \n> other languages like global.c that may be configured to give messages in\n> German.\n> MySQL gives the possibility to configure it using an external file containing\n> the messages by specifying it using the parameter LANGUAGE=<language>\n> where <language> is one of the following:\n> \n> czech\n> english\n> french\n> germany\n> italian\n> norwegian\n> norwegian-ny\n> polish\n> portuguese\n> spanish\n> swedish\n> \n> It will be great if we could have also this feature on PostreSQL. \n> I'm available to help on translation to Portuguese, Spanish and Italian.\n> Jose'\n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Fri, 22 May 1998 10:20:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] error messages not only English" }, { "msg_contents": "On Fri, 22 May 1998, Bruce Momjian wrote:\n\n> Added to TODO.\n> \n> > \n> > Hi all,\n> > \n> > I see that PostgreSQL mainly gives error messages in English, I see also that\n> > in some cases there's the possibility to configure it to give messages in \n> > other languages like global.c that may be configured to give messages in\n> > German.\n> > MySQL gives the possibility to configure it using an external file containing\n> > the messages by specifying it using the parameter LANGUAGE=<language>\n> > where <language> is one of the following:\n> > \n> > czech\n> > english\n> > french\n> > germany\n> > italian\n> > norwegian\n> > norwegian-ny\n> > polish\n> > portuguese\n> > spanish\n> > swedish\n> > \n> > It will be great if we could have also this feature on PostreSQL. 
\n> > I'm available to help on translation to Portuguese, Spanish and Italian.\n\nHrmmm...create an 'include/utils/errmsg.h file that is a link created by\nconfigure based on a --with-language=<insert your language here>...the\nfile would contain:\n\n#define <ERRMSG TOKEN> \"Error message in your language\"\n\nThen use the TOKEN with elog...\n\nIf we did something like this, we wouldn't have to convert all at once\neither, just as we pick up a new one...\n\n\n\n\n", "msg_date": "Fri, 22 May 1998 10:45:10 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] error messages not only English" }, { "msg_contents": "> Hrmmm...create an 'include/utils/errmsg.h file that is a link created by\n> configure based on a --with-language=<insert your language here>...the\n> file would contain:\n> \n> #define <ERRMSG TOKEN> \"Error message in your language\"\n> \n> Then use the TOKEN with elog...\n> \n> If we did something like this, we wouldn't have to convert all at once\n> either, just as we pick up a new one...\n\nAlso only a small set of error messages get sent to users. Most of them\nare rarely used or are for debugging.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Fri, 22 May 1998 10:49:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] error messages not only English" }, { "msg_contents": "On Fri, 22 May 1998, Bruce Momjian wrote:\n\n> > Hrmmm...create an 'include/utils/errmsg.h file that is a link created by\n> > configure based on a --with-language=<insert your language here>...the\n> > file would contain:\n> > \n> > #define <ERRMSG TOKEN> \"Error message in your language\"\n> > \n> > Then use the TOKEN with elog...\n> > \n> > If we did something like this, we wouldn't have to convert all at once\n> > either, just as we pick up a new one...\n> \n> Also only a small set of error messages get sent to users. Most of them\n> are rarely used or are for debugging.\n\n\tTrue, but having those also in various languages makes us more\n\"admin friendly\" *grin*\n\n\tIf this looks good, I'll setup the appropriate configure related\nissues...let me know...\n\n\n", "msg_date": "Fri, 22 May 1998 10:53:30 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] error messages not only English" }, { "msg_contents": "> \n> \tTrue, but having those also in various languages makes us more\n> \"admin friendly\" *grin*\n> \n> \tIf this looks good, I'll setup the appropriate configure related\n> issues...let me know...\n\nOne nice thing is that all the error messages are wrapped up in elog(),\nso we can easily extract them, and make macros for them.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Fri, 22 May 1998 10:55:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] error messages not only English" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> Hrmmm...create an 'include/utils/errmsg.h file that is a link created by\n> configure based on a --with-language=<insert your language here>...the\n> file would contain:\n> \n> #define <ERRMSG TOKEN> \"Error message in your language\"\n ^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Then use the TOKEN with elog...\n> \n> If we did something like this, we wouldn't have to convert all at once\n> either, just as we pick up a new one...\n\nSometime ago we told about using error codes in elog (and put them to the\nclient) - this would very useful for non-interactive applications...\n\nHow about to implement this ?\n\nVadim\n", "msg_date": "Tue, 26 May 1998 23:46:21 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] error messages not only English" }, { "msg_contents": "> \n> The Hermit Hacker wrote:\n> > \n> > Hrmmm...create an 'include/utils/errmsg.h file that is a link created by\n> > configure based on a --with-language=<insert your language here>...the\n> > file would contain:\n> > \n> > #define <ERRMSG TOKEN> \"Error message in your language\"\n> ^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > \n> > Then use the TOKEN with elog...\n> > \n> > If we did something like this, we wouldn't have to convert all at once\n> > either, just as we pick up a new one...\n> \n> Sometime ago we told about using error codes in elog (and put them to the\n> client) - this would very useful for non-interactive applications...\n> \n> How about to implement this ?\n\nAdded to TODO. 
The first part was already there:\n\n* allow international error message support and add error codes\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 26 May 1998 12:08:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] error messages not only English" } ]
[ { "msg_contents": "> \tint accept(int s, struct sockaddr *addr, int *addrlen);\n>\n> So AIX has the last parameter defined as size_t, huh? \n\nYes, and this is consistently done with all size parameters. (therefore also with fread, fwrite ...)\nBut only if _NONSTD_TYPES is not defined. I don't know where that would come from.\n(it's not in any /usr/include header)\n\ntypedef unsigned long size_t;\n\n#ifdef _NONSTD_TYPES\nextern int fread();\n#else\nextern size_t fread(void *, size_t, size_t, FILE *);\n#endif\n\nAndreas\n\n", "msg_date": "Fri, 22 May 1998 14:25:44 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Bug in postgresql-6.3.2 (AIX specific)" }, { "msg_contents": "On Fri, 22 May 1998, Andreas Zeugswetter wrote:\n\n> > \tint accept(int s, struct sockaddr *addr, int *addrlen);\n> >\n> > So AIX has the last parameter defined as size_t, huh? \n> \n> Yes, and this is consistently done with all size parameters. (therefore also with fread, fwrite ...)\n> But only if _NONSTD_TYPES is not defined. I don't know where that would come from.\n> (it's not in any /usr/include header)\n> \n> typedef unsigned long size_t;\n> \n> #ifdef _NONSTD_TYPES\n> extern int fread();\n> #else\n> extern size_t fread(void *, size_t, size_t, FILE *);\n> #endif\n\nHrmmm...just checked, and, under FreeBSD:\n\n> grep fread *.h\nstdio.h:size_t fread __P((void *, size_t, size_t, FILE *));\n\nwhere size_t is defined to be an unsigned int, vs long on AIX...\n\ni sort of suspect that the use of size_t is more the norm then the\nexception, if you all check your fread defines...?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 24 May 1998 15:28:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug in postgresql-6.3.2 (AIX specific)" } ]
[ { "msg_contents": "> That sounds like a good idea.\n> \n> How about having them stored in a new system table (say pg_errormsg) which\n> contains each possible error in all the supported languages.\n\nWhile this sounds good for end users, it is an absolute nightmare for somebody trying to \ngive support. Or what would you do if someone told you:\nI get:\n\tFEHLER: Des vastehts ihr nie, weu des Wiena Dialekt is.\nand now my program won't respond anymore.\n\nAnyway, we are still missing the first step in this direction: enumerate ERROR messages.\n\nAndreas\n\n", "msg_date": "Fri, 22 May 1998 14:45:21 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] error messages not only English" }, { "msg_contents": "On Fri, 22 May 1998, Andreas Zeugswetter wrote:\n\n> > That sounds like a good idea.\n> > \n> > How about having them stored in a new system table (say pg_errormsg) which\n> > contains each possible error in all the supported languages.\n> \n> While this sounds good for end users, it is an absolute nightmare for somebody trying to \n> give support. Or what would you do if someone told you:\n> I get:\n> \tFEHLER: Des vastehts ihr nie, weu des Wiena Dialekt is.\n> and now my program won't respond anymore.\n> \n> Anyway, we are still missing the first step in this direction: enumerate ERROR messages.\n> \nWe can have both of them for example:\n-----------------------------------------------------------------------------\n err no. 
message\n-----------------------------------------------------------------------------\nERROR #342 - \"Column number too big\" -- English\nERROR #342 - \"Numero de columna demasiado alto\" -- Spanish\nFEHLER #342 - \"Die Spaltennummer ist zu gross\" -- German\nERRO #342 - \"Numero de coluna alto demais\" -- Portuguese\nERRORE #342 - \"Numero di colonna troppo alto\" -- Italian\n-----------------------------------------------------------------------------\n Ciao, Jose'\n\n", "msg_date": "Fri, 22 May 1998 16:38:17 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] error messages not only English" }, { "msg_contents": "Andreas Zeugswetter wrote:\n> \n> Anyway, we are still missing the first step in this direction: enumerate ERROR messages.\n\nBTW, are error codes in standard ?\n\nVadim\n", "msg_date": "Wed, 27 May 1998 00:35:16 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] error messages not only English" }, { "msg_contents": "Vadim writes:\n> Andreas Zeugswetter wrote:\n> > \n> > Anyway, we are still missing the first step in this direction: enumerate ERROR messages.\n> \n> BTW, are error codes in standard ?\n\nSome are. There is also a format for standard severity levels etc.\n\nOn the internationalized message topic, how about storing all the messages\nin a text file. This file would be opened (but not read) at startup by each\nbackend (and the postmaster). To change languages, just open a different\nfile. ELOG would scan the message file to get the message text corresponding\nto an error code. Since reading a pre-opened text file does not depend on\nmuch of the system working, it should work even in the catastrophic cases.\n\nTo keep performance good for frequently issued messages, the backend could\nkeep an in-memory LRU cache of a few dozen messages and only search the file\nwhen the message was not in the cache. 
This cache could be a simple\nsequentially searched array as error messages are not a real performance\ninfluence except in extremely pathological cases.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Tue, 26 May 1998 11:02:13 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] error messages not only English" }, { "msg_contents": "On Tue, 26 May 1998, David Gould wrote:\n\n> Vadim writes:\n> > Andreas Zeugswetter wrote:\n> > > \n> > > Anyway, we are still missing the first step in this direction: enumerate ERROR messages.\n> > \n> > BTW, are error codes in standard ?\n> \n> Some are. There is also a format for standard severity levels etc.\n> \n> On the internationalized message topic, how about storing all the messages\n> in a text file. This file would be opened (but not read) at startup by each\n> backend (and the postmaster). To change languages, just open a different\n> file. ELOG would scan the message file to get the message text corresponding\n> to an errog code. Since reading a pre-opened text file does not depend on\n> much of the system working, it should work even in the catastrophic cases.\n\nDo you want me to post a brief outline on how Java does this? 
It uses\nplain text files to handle internationalized messages, and can handle\nregional dialects as well?\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Tue, 26 May 1998 19:10:31 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] error messages not only English" }, { "msg_contents": "\nOh dear, even I'm answering myself now ;-)\n\nOn Tue, 26 May 1998, Peter T Mount wrote:\n\n> On Tue, 26 May 1998, David Gould wrote:\n> \n> > Vadim writes:\n> > > Andreas Zeugswetter wrote:\n> > > > \n> > > > Anyway, we are still missing the first step in this direction: enumerate ERROR messages.\n> > > \n> > > BTW, are error codes in standard ?\n> > \n> > Some are. There is also a format for standard severity levels etc.\n> > \n> > On the internationalized message topic, how about storing all the messages\n> > in a text file. This file would be opened (but not read) at startup by each\n> > backend (and the postmaster). To change languages, just open a different\n> > file. ELOG would scan the message file to get the message text corresponding\n> > to an errog code. Since reading a pre-opened text file does not depend on\n> > much of the system working, it should work even in the catastrophic cases.\n> \n> Do you want me to post a brief outline on how Java does this? It uses\n> plain text files to handle internationalized messages, and can handle\n> regional dialects as well?\n\nHere goes:\n\nJava 1.1 introduced Internationalization using Resource Bundles. Now these\ncould be either custom classes, or defined using Property Files (which are\nmore like what what were looking at here).\n\nAnyhow, this is a brief description on how this works. 
I'm not\nsuggesting this is the way to go, but presenting this here as\nsomething to base this on.\n\nFirst, a few files:\n\n# The file colours.properties is the default bundle.\n# As I'm British, I've made my own locale the default (re: Colour)\ncolours=Colours\ncolours.red=Red\ncolours.green=Green\ncolours.blue=Blue\n\n# The file colours.en.US.properties overides the default locale.\n# As you can see it overides only one resource, as the other\n# defaults are fine for this locale\ncolours=Colors\n\n# The file colours.fr.properties handles the French locale\ncolours=Couleurs\ncolours.red=Rouge\ncolours.green=Vert\ncolours.blue=Bleu\n\nWhen searching for a property, it follows the following algorithm: \n\n\tbasename_language_country_variant\n\tbasename_language_country\n\tbasename_language\n\tbasename\n\nEntries found first take precedence. Only the lowest level needs to have\nevery possible entry.\n\nThere is a downside to this scheme (as far as PostgreSQL is concerned), in\nthat we could have several files open, although error handling isn't\nsomething that needs to be too fast.\n\nNow for formatted messages:\n\nIn Java, this is handled by the MessageFormat class. 
It takes a static\nstring, and inserts into the correct places additional strings to form the\nfinal message.\n\nie:\n\nIn English:\t\"Error at line {0} in file {1}.\"\nIn French:\t\"Erreur: {1}: {0}.\"\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Tue, 26 May 1998 22:00:23 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] error messages not only English" }, { "msg_contents": "On Wed, 27 May 1998, Vadim Mikheev wrote:\n\n> Andreas Zeugswetter wrote:\n> > \n> > Anyway, we are still missing the first step in this direction: enumerate ERROR messages.\n> \n> BTW, are error codes in standard ?\n> \nOf course. And IMHO PostgreSQL should use them.\nOn \"A Guide To The SQL Standard (C.J.Date) chapter 22 - Exception Handling\"\nthere's an explanation of STATUS CODE. (SQLCODE and SQLSTATE values)\n\n Jose'\n\n", "msg_date": "Wed, 27 May 1998 09:56:51 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] error messages not only English" } ]
[ { "msg_contents": "\n> I have an idea. Can he run CLUSTER on the data? If so, the sort will\n> not use small batches, and the disk space during sort will be reduced.\n\nI think a real winner would be to use an existing index. This is what others do\nto eliminate a sort completely. Of course the optimizer has to choose what is cheaper \non a per query basis (index access or sort of result set).\nresult set small --> use sort\nresult set large --> use available index\n\nAndreas\n\n\n", "msg_date": "Fri, 22 May 1998 14:56:19 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "> \n> \n> > I have an idea. Can he run CLUSTER on the data? If so, the sort will\n> > not use small batches, and the disk space during sort will be reduced.\n> \n> I think a real winner would be to use an existing index. This is what others do\n> to eliminate a sort completely. Of course the optimizer has to choose what is cheaper \n> on a per query basis (index access or sort of result set).\n> result set small --> use sort\n> result set large --> use available index\n\nKeep in mind an index is going to be seeking all over the table, making\nthe cache of limited use. Sometime, when doing a join, the optimizer\nchooses a sequential scan rather than use an index for this reason, and\nthe sequential scan is faster.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 22 May 1998 10:24:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" } ]
[ { "msg_contents": "Hi\n\n> createdb aaa - OK\n> destroydb aaa\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\ndestroydb: database destroy failed on aaa.\n\nIs this a bug?\n\n-- \nSY, Serj\n", "msg_date": "Fri, 22 May 1998 19:49:21 +0400", "msg_from": "serj <[email protected]>", "msg_from_op": true, "msg_subject": "destroydb fail in pgsql CVS from 22.5.98" } ]
[ { "msg_contents": "\nIgnore this...\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 24 May 1998 01:08:41 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Just a test..." } ]
[ { "msg_contents": ">\n>However, in thinking about it, I don't think there is any way to avoid\n>your solution of pid/secret key. The postmaster, on receiving the\n>secret key, can send a signal to the backend, and the query will be\n>cancelled. Nothing will be sent along the backend/client channel. All\n>other interfaces that want cancel handling will have to add some code\n>for this too.\n>\n\n\nAssuming that every user has a password which is known by both the client\nand the server, it seem to me like using a one-way function based on the\nclientuser password as the secret key (refered to above) is appropiate.\nThis avoids the need for introducing \"yet another shared secret into the\nsystem\".\n\nA one-way function is expected to make it computationaly infeasible to\ndeduce the password given the secretkey. One-way functions (SHA1, MD5) are\nalso quite fast. (If I'm not mistaken these functions are allowed to be\nexported\nfrom the US. )\n\nBy including a cancel request id (together with the user password) with the\ninformation being hashed (by the one-way function) it is also possible to\ndetect (and avoid) denial of service attacks\n(which are based on replaying the cancel request secret keys).\n\nThis does however imply that a certain amount of extra booking is needed.\n\nWith regards from Maurice.\n\n\n", "msg_date": "Sun, 24 May 1998 13:43:41 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "\"Maurice Gittens\" <[email protected]> writes:\n> Assuming that every user has a password which is known by both the client\n> and the server, it seem to me like using a one-way function based on the\n> clientuser password as the secret key (refered to above) is appropiate.\n> This avoids the need for introducing \"yet another shared secret into the\n> system\".\n\nWell, I think that the cancel security mechanism ought to be per backend\nprocess, not per 
user. That is, simply being the same \"Postgres user\"\nshould not give you the ability to issue a cancel; you ought to be\nrequired to have some direct association with a particular client/backend\nsession. Access to the client/backend connection channel is one way;\nknowledge of a per-connection secret is another.\n\nAlso, isn't it true that not all the supported authentication mechanisms\nuse a password? Taking this approach would mean we have to design a new\ncancel security mechanism for each authentication protocol.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 May 1998 11:34:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data " }, { "msg_contents": "> \n> \"Maurice Gittens\" <[email protected]> writes:\n> > Assuming that every user has a password which is known by both the client\n> > and the server, it seem to me like using a one-way function based on the\n> > clientuser password as the secret key (refered to above) is appropiate.\n> > This avoids the need for introducing \"yet another shared secret into the\n> > system\".\n> \n> Well, I think that the cancel security mechanism ought to be per backend\n> process, not per user. That is, simply being the same \"Postgres user\"\n> should not give you the ability to issue a cancel; you ought to be\n> required to have some direct association with a particular client/backend\n> session. Access to the client/backend connection channel is one way;\n> knowledge of a per-connection secret is another.\n> \n> Also, isn't it true that not all the supported authentication mechanisms\n> use a password? Taking this approach would mean we have to design a new\n> cancel security mechanism for each authentication protocol.\n\nYes, most connections don't have passwords. Better to keep cancel\nseparate. 
\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 24 May 1998 13:35:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" } ]
[ { "msg_contents": "\n-----Original Message-----\nFrom: Tom Lane <[email protected]>\nTo: Maurice Gittens <[email protected]>\nCc: [email protected] <[email protected]>\nDate: zondag 24 mei 1998 23:52\nSubject: Re: [HACKERS] Query cancel and OOB data\n\n\n>\"Maurice Gittens\" <[email protected]> writes:\n>> Assuming that every user has a password which is known by both the client\n>> and the server, it seem to me like using a one-way function based on the\n>> clientuser password as the secret key (refered to above) is appropiate.\n>> This avoids the need for introducing \"yet another shared secret into the\n>> system\".\n>\n>Well, I think that the cancel security mechanism ought to be per backend\n>process, not per user.\n\nI assumed that this was understood.\n\n> That is, simply being the same \"Postgres user\"\n>should not give you the ability to issue a cancel; you ought to be\n>required to have some direct association with a particular client/backend\n>session. Access to the client/backend connection channel is one way;\n>knowledge of a per-connection secret is another.\n>\n>Also, isn't it true that not all the supported authentication mechanisms\n>use a password? Taking this approach would mean we have to design a new\n>cancel security mechanism for each authentication protocol.\nThis may be true. The point I'm trying to make is that using one-way\nfunctions\ntogether with a shared secret will make it possible to avoid denial of\nservice attacks\n(which are based on replaying the cancel request secret keys).\n\nThis does however imply that a certain amount of extra bookkeeping is needed.\n\nWith regards from Maurice.\n\n\n", "msg_date": "Sun, 24 May 1998 13:43:41 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "\"Maurice Gittens\" <[email protected]> writes:\n> This may be true. The point I'm trying to make is that using one\n> way-functions together with a shared secret will make it possible to\n> avoid denial of service attacks which rely on replaying the \"magic\n> token\".\n\n> Again I assumed it to be understood that the pid of the particular backend\n> would exchanged with the client during the initial handshake. It would also\n> be included (together with the shared secret e.g. 
the password and\nsome form of a sequence id) in the one-way hash.\n\n>\n> regards, tom lane\n\nRegards, Maurice.\n\n\n", "msg_date": "Sun, 24 May 1998 17:47:04 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data " }, { "msg_contents": "\"Maurice Gittens\" <[email protected]> writes:\n> This may be true. The point I'm trying to make is that using one\n> way-functions together with a shared secret will make it possible to\n> avoid denial of service attacks which rely on replaying the \"magic\n> token\".\n\n> Again I assumed it to be understood that the pid of the particular backend\n> would exchanged with the client during the initial handshake. It would also\n> be included (together with the shared secret e.g. the password and\n> and some form of a sequence id) in the one-way hash.\n\nAh, now I think I see your point: you want to encrypt the cancel request\nso that even a packet sniffer could not generate additional cancel\nrequests after seeing the first one. That seems like a good idea, but\nthere is still the problem of what to use for the encryption key (the\n\"shared secret\"). A password would work in those authentication schemes\nthat have a password, but what about those that don't?\n\nMore generally, I think we risk overdesigning the cancel authorization\nmechanism while failing to deal with systemic security issues. Above\nwe are blithely assuming that a user's Postgres password is secret ...\nwhich it is hardly likely to be against an attacker with packet-sniffing\ncapability. 
I don't think it's worth trying to make the cancel mechanism\n(alone) proof against attacks that really need to be dealt with by\nusing a secure transport method.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 May 1998 12:01:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data " }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Maurice Gittens\" <[email protected]> writes:\n> > This may be true. The point I'm trying to make is that using one\n> > way-functions together with a shared secret will make it possible to\n> > avoid denial of service attacks which rely on replaying the \"magic\n> > token\".\n> \n> > Again I assumed it to be understood that the pid of the particular backend\n> > would exchanged with the client during the initial handshake. It would also\n> > be included (together with the shared secret e.g. the password and\n> > and some form of a sequence id) in the one-way hash.\n> \n> Ah, now I think I see your point: you want to encrypt the cancel request\n> so that even a packet sniffer could not generate additional cancel\n> requests after seeing the first one. That seems like a good idea, but\n> there is still the problem of what to use for the encryption key (the\n> \"shared secret\"). A password would work in those authentication schemes\n> that have a password, but what about those that don't?\n\nAha!\n\nI'm slowly working through back emails, so I apologize if someone else\nalready posted this. If we want to create a shared secret between the\npostmaster and the client, we should think about the Diffie-Hellman\nalgorithm. \n\nFor those unfamiliar with this, we start by picking large numbers b\nand m. The client picks a number k and then sends K=b^k%m, while the\nserver picks a number l and sends L=b^l%m. The client calculates\nL^k%m and the server calculates K^l%m, and these numbers are\nidentical. 
A third party eavesdropping on the conversation would only\nget K and L, and would have no idea what the shared number is, unless\nthey can calculate the computationally infeasible discrete logarithm.\n\nAnyway, something to think about.\n\nOcie\n", "msg_date": "Tue, 26 May 1998 14:17:16 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "[email protected] writes:\n> If we want to create a shared secret between the\n> postmaster and the client, we should think about the Diffe-Helman\n> [ discrete logarithm ] algorithm. \n\nI used Diffie-Hellman for that purpose years ago, and perhaps could\nstill dig up the code for it. But I thought discrete logarithm had been\nbroken since then, or at least shown to be far less intractable than\npeople thought. In any case, D-H is pretty slow --- are we prepared to\nadd seconds to the backend startup time in the name of security?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 May 1998 19:28:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data " }, { "msg_contents": "On Tue, 26 May 1998, Tom Lane wrote:\n> I used Diffie-Hellman for that purpose years ago, and perhaps could\n> still dig up the code for it. But I thought discrete logarithm had been\n> broken since then, or at least shown to be far less intractable than\n> people thought. 
In any case, D-H is pretty slow --- are we prepared to\n> add seconds to the backend startup time in the name of security?\n\nI think everyone is thinking too hard on this issue.\n\nTransport security should be just that.\n\nUse SSL or Kerberos encryption if you wish thoe entire session to be (more\nor less) unsnoopable/unspoofable.\n\nTrying to hack things in will only result in an incomplete and/or ugly\nsolution.\n\nThe way I see it people have several choices:\n\n- Run with no network listeners and therefore no network clients to expose\nto snooping/spoofing attacks.\n\n- Require SSLed or Kerberized connections, incuring longer startup times\nbut insuring a secure channel.\n\n- Use SKIP or some other IP level encryption system to provide a secure\n'virtual lan' insuring a secure channel.\n\n- Isolate communication across secure, private networks insuring a secure\nchannel.\n\nSo long as we make people aware of the risks they are exposing themselves\nto, adding 'security features' in places better left to lower level\nprotocols is unnecessary.\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Tue, 26 May 1998 21:17:48 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data " }, { "msg_contents": "Matthew N. Dodd wrote:\n> \n> On Tue, 26 May 1998, Tom Lane wrote:\n> > I used Diffie-Hellman for that purpose years ago, and perhaps could\n> > still dig up the code for it. But I thought discrete logarithm had been\n> > broken since then, or at least shown to be far less intractable than\n> > people thought. 
In any case, D-H is pretty slow --- are we prepared to\n> > add seconds to the backend startup time in the name of security?\n> \n> I think everyone is thinking too hard on this issue.\n> \n> Transport security should be just that.\n> \n> Use SSL or Kerberos encryption if you wish thoe entire session to be (more\n> or less) unsnoopable/unspoofable.\n> \n> Trying to hack things in will only result in an incomplete and/or ugly\n> solution.\n> \n> The way I see it people have several choices:\n> \n> - Run with no network listeners and therefore no network clients to expose\n> to snooping/spoofing attacks.\n> \n> - Require SSLed or Kerberized connections, incuring longer startup times\n> but insuring a secure channel.\n> \n> - Use SKIP or some other IP level encryption system to provide a secure\n> 'virtual lan' insuring a secure channel.\n> \n> - Isolate communication across secure, private networks insuring a secure\n> channel.\n> \n> So long as we make people aware of the risks they are exposing themselves\n> to, adding 'security features' in places better left to lower level\n> protocols is unnecessary.\n> \n\nHMM, you do make a convincing argument. As one of my H.S. teachers\nused to say, we are putting \"Descartes before Horace\". Probably\nbetter to just have the postmaster generate and issue a random number\nto the client. \n\nIt would be nice if this can be done in a forward/backward-compatible\nway. I.E. old clients that don't know ablout cancelling should be\nable to work with servers that can cancel, and newer clients that can\ncancel should be able to disable this feature if talking with an older\nserver. A rolling database gathers no development community :)\n\nOcie\n\n\n", "msg_date": "Tue, 26 May 1998 19:10:44 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "Matthew N. 
Dodd writes:\n> I think everyone is thinking too hard on this issue.\n> \n> Transport security should be just that.\n> \n> Use SSL or Kerberos encryption if you wish the entire session to be (more\n> or less) unsnoopable/unspoofable.\n> \n> Trying to hack things in will only result in an incomplete and/or ugly\n> solution.\n> \n> The way I see it people have several choices:\n> \n> - Run with no network listeners and therefore no network clients to expose\n> to snooping/spoofing attacks.\n> \n> - Require SSLed or Kerberized connections, incurring longer startup times\n> but ensuring a secure channel.\n> \n> - Use SKIP or some other IP level encryption system to provide a secure\n> 'virtual lan' ensuring a secure channel.\n> \n> - Isolate communication across secure, private networks ensuring a secure\n> channel.\n> \n> So long as we make people aware of the risks they are exposing themselves\n> to, adding 'security features' in places better left to lower level\n> protocols is unnecessary.\n\nRight on. I have been following this discussion about securing the\ncancel channel hoping for it to come back to earth and now it has.\n\nAll the major systems I am familiar with (Sybase, Informix, Illustra,\nMS SQL Server) use TCP as their primary client/server transport and do not\nuse encryption (most even send cleartext passwords over the wire). Some of\nthese systems support only TCP.\n\nThe assumption is that the dbms and clients are on a private network and not\nexposed to the internet at large except through gateways of some kind. \nAs I have not heard any horror stories about breakins, denial of service\netc at customer sites in my ten years working with this stuff, I assume\nthat while it may happen, it does not happen often enough for the customers\nto complain to their db vendors about.\n\nThe other thing is that security is hard. 
It is hard to make a system\nsecure, and it is even harder to make it usable after you make it secure.\nAnd if you don't make it usable, then you find the office and dumpsters filled\nwith post-its with passwords on them. \n\nLikewise, most environments are not really secure anyway, it will usually be\neasier to hack a root shell and kill the postmaster or copy out the data\nbase files than to fool around figuring out the postgres on the wire traffic.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n \n", "msg_date": "Sun, 31 May 1998 01:44:42 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" } ]
[ { "msg_contents": "I put together this readme on password and crypt authentication in\nanswer to a request from a user of the Debian package.\n\nPlease check it and point out any errors or missing information.\n\n=========================================================================\n\nHow to use clear or encrypted passwords for PostgreSQL access:\n=============================================================\n\nUse lines such as\n\n local\t\tall\t\t\t\tpassword\n host\t\t192.137.23\t255.255.255.0\tcrypt\n\nin /etc/postgresql/pg_hba.conf; then you can use\n\n CREATE USER user WITH PASSWORD password...\n\nto create a new user with the specified password, or\n\n ALTER USER user WITH PASSWORD password...\n\nto change the password of an existing user. Any user with create-user\nprivilege can alter a password for any user, *INCLUDING* the postgres\nsuper-user.\n\nIf connecting with psql, use the -u option; the user is prompted for username\nand password. If you don't use -u, the connection fails.\n\nIf using your own program with libpq, it is up to you to collect the user name\nand password from the user and send them to the backend with PQsetdbLogin().\n[How can one know, with libpq, whether this is necessary?]\n\nPasswords are stored in pg_shadow in clear, but if `crypt' authentication is\nspecified, the frontend encrypts the password with a random salt and\nthe backend uses the same salt to encrypt the password in the database.\nIf the two encrypted passwords match, the user is allowed access. If the\nauthentication method is `password', the password is transmitted and\ncompared in clear.\n\nIf passwords are turned on, it becomes impossible to connect as\na user, if no password is defined for that user. 
Neither can you use\n\\connect to change user within psql.\n\n<Debian-specific>\nIf you turn on passwords for local, the default do.maintenance cron job\nwill stop working, because it will not supply a username or password.\nIn this case, you must alter /etc/cron.d/postgresql to supply the\nuser and password for the postgres superuser, with the -u and -p options.\nIt will then be necessary to change the permissions on /etc/cron.d/postgresql\nto make it readable by root only.\n</Debian-specific>\n\n\nProblems with password authentication\n=====================================\n\n1. There is no easy and secure way to automate access when passwords are\n in use. It would be good if the postgres super-user (as identified by\n Unix on a Unix sockets connection) could bypass the authentication.\n\n2. pgaccess has no mechanism for specifying username and password. It cannot\n be used if password/crypt authentication is turned on for host\n connections from localhost.\n\n3. In general, passwords are insecure, because they are held in clear\n in pg_shadow. Anyone with create-user privilege can not only alter but\n also read them. They ought to be stored with one-way encryption, as\n with the Unix password system.\n\n4. The postgres super-user's password can be changed by anyone with \n create-user privilege. It ought to be the case that people can\n only change their own passwords and that only the super-user can change\n other peoples' passwords.\n\n5. If passwords are turned on, the -u option must be supplied to psql. If\n it is not, psql merely says \"Connection to database 'xxxx' failed.\". 
A\n more helpful error message would be desirable.\n=========================================================================\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"And Jesus answering said unto them, They that are\n whole need not a physician; but they that are sick. I\n come not to call the righteous, but sinners to\n repentance.\" Luke 5:31,32\n\n\n", "msg_date": "Mon, 25 May 1998 06:33:23 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Use of password/crypt authentication" } ]
[ { "msg_contents": "This message raises the doubt of a possible PostgreSQL bug connected\nwith\nlarge objects and the locking mechanism.\n\n1) The problem\n\nI'm currently experiencing the following problem.\nI need to store in a PostgreSQL database a large amount of double\nprecision\nnumbers (they are wavelet coefficients, if you know what they are).\nSince they are more than 8kb, I store them as a large object of about\n46kb.\nI've also written a set of functions that operate on them.\nOne of these functions is the following:\n\nfloat8 *\nget_stddev (Oid wdata, int4 elem)\n{\n float8 *result;\n\n result = (float8 *) palloc (sizeof (float8));\n if ((fd = lo_open (wdata, INV_READ)) == -1)\n elog (ERROR, \"wav_dist: Cannot access wavelet data\");\n\n <some `lo_read's>\n\n lo_close (fd);\n return result;\n}\n\nOnce registered in the database, I call it as\n\n SELECT DISTINCT get_stddev (fieldname, 1) FROM tablename;\n\nOf course, there are also more complicated functions.\n\nWhen the number of records in the db is around 300 (and above), I get\nthe\nfollowing messages:\n\nNOTICE: LockReleaseAll: cannot remove lock from HTAB\nNOTICE: LockRelease: find xid, table corrupted\n\nNOTICE: LockRelease: find xid, table corrupted\n\nNOTICE: LockRelease: find xid, table corrupted\n\nFATAL: unrecognized data from the backend. It probably dumped core.\nFATAL: unrecognized data from the backend. 
It probably dumped core.\n\nPlease note that the first run of the query gives the expected results\n(sometimes).\n\nIf I run\n\n gdb postgres core\n\nand type where, I get\n\n#0 0x8100bd9 in hash_search ()\n#1 0x8100aec in hash_search ()\n#2 0x80d4a75 in LockAcquire ()\n#3 0x80d6538 in SingleLockPage ()\n#4 0x80d4486 in RelationSetSingleRLockPage ()\n#5 0x8070b6a in _bt_pagedel ()\n#6 0x80708c0 in _bt_getbuf ()\n#7 0x80703d9 in _bt_getroot ()\n#8 0x8072105 in _bt_first ()\n#9 0x8070fef in btgettuple ()\n#10 0x8100414 in fmgr_c ()\n#11 0x810071b in fmgr ()\n#12 0x806b608 in index_getnext ()\n#13 0x80d36c3 in inv_read ()\n#14 0x80d354a in inv_read ()\n#15 0x8098d09 in lo_read ()\n#16 0x40230856 in ?? () from <<<this is my shared library>>>\n<other frames follow>\n\n1.1) Further analysis\n\nTo further study this problem, I've created the following table:\n\n CREATE TABLE foo (fii oid);\n\nand added it\n\n INSERT INTO foo VALUES (lo_import ('/tmp/f'));\n\n300 times. /tmp/f is a sample file of 46116 bytes.\nThe problem continues to arise.\nI also noted that, using code that does the following:\n\nfor each tuple\n open connection\n lo_export\n close connection\n <something on the exported file>\n\nall goes well.\nOtherwise, the following\n\nopen connection\nfor each tuple\n lo_export\n <something on the exported file>\nclose connection\n\nfails around the same tuple.\nOnce, using dmesg, I found the message\n\nVFS: file-max limit 1024 reached\n\nbut only once.\n\n-->> Everything seems connected with the locking mechanism.\n-->> If I run the postmaster with -o -L, everything (but not all) works.\n\nI usually run postmaster with the -F flag. 
I tried to disable it, but\nPostgreSQL continues to fail.\nI'm running a Linux box (i586 120MHz) with kernel 2.1.65 ELF,\nPostgreSQL 6.3.2 compiled with GCC 2.8.1, 64 MB RAM.\n\nThanks for any help or suggestions\n\nAlessandro Baldoni\[email protected]\nhttp://www.csr.unibo.it/~abaldoni\n\n\n", "msg_date": "Mon, 25 May 1998 09:51:59 +0200", "msg_from": "Alessandro Baldoni <[email protected]>", "msg_from_op": true, "msg_subject": "Large objects and locking mechanism" } ]
[ { "msg_contents": "A while ago we agreed to use DB names like the JDBC standard:\n\n<protocol>:postgresql:\\\\<server>[:port]\\dbname[?<option>]\n\nBefore I start working on libpq I would like to know which protocols it\nshould accept. Is there a norm for this?\n\nSuggestions: sql, esql, pq, ecpg ...\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Mon, 25 May 1998 16:22:16 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Connect string again" }, { "msg_contents": "> A while ago we agreed to use DB names like the JDBC standard:\n> \n> <protocol>:postgresql:\\\\<server>[:port]\\dbname[?<option>]\n> \n> Before I start working on libpq I would like to know which protocols it\n> should accept. Is there a norm for this?\n> \n> Suggestions: sql, esql, pq, ecpg ...\n\nHey, what's with the backslashes?? Didn't know a non-M$ system even had\n'em on the keyboard :)\n\nSeriously, is that a typo?\n\n - Tom\n", "msg_date": "Tue, 26 May 1998 14:57:56 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Connect string again" }, { "msg_contents": "On Mon, 25 May 1998, Michael Meskes wrote:\n\n> A while ago we agreed to use DB names like the JDBC standard:\n> \n> <protocol>:postgresql:\\\\<server>[:port]\\dbname[?<option>]\n> \n> Before I start working on libpq I would like to know which protocols it\n> should accept. Is there a norm for this?\n> \n> Suggestions: sql, esql, pq, ecpg ...\n\npq seems to be a little too short to me.\n\nI'd have thought sql, or libpq. 
ecpg could be used as well for clients\nthat use ecpg.\n\nWhile thinking about this, an alternative could be the network protocol\nbeing used, tcp or unix (although the server part of the url would be\nignored for this one).\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Tue, 26 May 1998 18:56:39 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Connect string again" }, { "msg_contents": "Peter T Mount writes:\n> > Suggestions: sql, esql, pq, ecpg ...\n> \n> pq seems to be a little too short to me.\n> \n> I'd have thought sql, or libpq. ecpg could be used as well for clients\n> that use ecpg.\n\nHow about adding things like proc since we are able to parse most of the\nOracle stuff?\n\n> While thinking about this, an alternative could be the network protocol\n> being used, tcp or unix (although the server part of the url would be\n> ignored for this one).\n\nThat one makes even more sense IMO. If it's unix, the server only can be the\nlocal host. \n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Wed, 27 May 1998 09:50:32 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Connect string again" } ]
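The thread above settles on the form `<protocol>:postgresql://<server>[:port]/dbname[?<option>]` (with forward slashes; the backslashes in the opening post were a typo). A minimal sketch of splitting such a string into its parts; the helper name and the default port are illustrative assumptions, not part of libpq or ecpg:

```python
# Sketch: split a JDBC-style connect string of the form
#   <protocol>:postgresql://<server>[:port]/dbname[?<option>]
# parse_connect_string is an illustrative helper, and falling back to
# port 5432 (the usual PostgreSQL default) is an assumption.

def parse_connect_string(url):
    protocol, rest = url.split(":postgresql://", 1)
    options = None
    if "?" in rest:
        rest, options = rest.split("?", 1)
    hostpart, _, dbname = rest.partition("/")
    if ":" in hostpart:
        server, port = hostpart.split(":", 1)
        port = int(port)
    else:
        server, port = hostpart, 5432  # assumed default port
    return {"protocol": protocol, "server": server, "port": port,
            "dbname": dbname, "options": options}
```

For example, `parse_connect_string("ecpg:postgresql://localhost:5432/mydb?debug=1")` yields protocol `ecpg`, server `localhost`, port `5432`, dbname `mydb` and option string `debug=1`; when the port is omitted the sketch falls back to 5432.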
[ { "msg_contents": "\n\tI am making some progress on cleaning up regression problems on\nLinux/Alpha. I have finally got datetimes from causing postgres to SIGFPE\n(floating point expections). The secret appears to be to include the\n'-mieee' compile option, via the CFLAGS line in template/linuxalpha. I am\nnot 100% sure what it does, but it has often been recommend by the people\non the axp-redhat list when people are having SIGFPE problems with thier\nprograms. It doesn't appear to do any harm, and actually one entire test\n(reltime) has been totally fixed. I am still getting a few SIGFPE during\nthe regression tests, but only a small number of the original many. \n\tThe major problem at the moment appears to be that while dates\ncan be instered and selected sucessfully, they are not correct. I insert\n'5/20/98' and when I select I get '5/19/98 11:00 MDT'. It appears too\nlarge for it to be a timezone problem, and the local time zone is properly\nset to MDT. The same sort of things happens during the regression tests\nwhen the TZ variable is set as directed (PST). I am thinking that there is\nsome type of round off error or such occuring, but I am having a rough\ntime following the sequence a date takes from MM, DD, YY format to the\nsingle integer used to store the date interally, and then back out to a\nhuman readable format again. Would some one please outline what is\nhappening here, and point out any spots that might cause trouble? Thanks!\n\n\tPS. What is the correct format for a date in an SQL string? I.e.\nthe format that includes date, time, and timezone?\n\n\tPPS. I am using the May 15th, 1998 snapshot still. 
I doubt that\nupdating would do me a lot of good at the moment.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n", "msg_date": "Mon, 25 May 1998 15:37:09 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Linux/Alpha.... Progress..." }, { "msg_contents": "> \n> \n> \tI am making some progress on cleaning up regression problems on\n> Linux/Alpha. I have finally got datetimes from causing postgres to SIGFPE\n> (floating point expections). The secret appears to be to include the\n> '-mieee' compile option, via the CFLAGS line in template/linuxalpha. I am\n> not 100% sure what it does, but it has often been recommend by the people\n> on the axp-redhat list when people are having SIGFPE problems with thier\n> programs. It doesn't appear to do any harm, and actually one entire test\n> (reltime) has been totally fixed. I am still getting a few SIGFPE during\n> the regression tests, but only a small number of the original many. \n\nAdded to template.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 16 Jun 1998 00:02:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Linux/Alpha.... Progress..." } ]
[ { "msg_contents": "1. I think a last building (and maybe even regressing) snapshot would be nice.\n\n>The other tool I believe to be very effective in improving code quality is\n>code review. My experience is that review is both more effective and\n>cheaper than testing in finding problems.\n\nYes. But actually we (in or development company Spardat) don't do it here. \nWhat we do is ask a coworker to review, when we are not too confident about a change.\n\n>To that end, I suggest we create\n>a group of volunteer reviewers, possibly with their own mailing list. The idea\n>is not to impose a bureaucratic barrier to submitting patches, but rather to\n>allow people who have an idea to get some assistance on whether a given change\n>will fit in and work well. I see some people on this list using the list\n>for this purpose now, I merely propose to normalise this so that everyone\n>knows that this resource is available to them, and given an actual patch\n>(rather than mere discussion) to be able to identify specific persons to do\n>a review.\n\nI am not sure if I would not rather see those that have enough knowledge to judge if a patch\nis good or not and have time to really do reviewing, contribute to the code itself.\nThere are a lot of things on the Todo, and another lot that did not make it to the list yet.\nThe last beta freeze was actually reviewed by Bruce and Vadim.\nMaybe a review team would be good during beta freeze to take some work off of those two,\nbut I am not sure that a review team is necessary and productive during a development phase.\n\nAndreas\n\n\n", "msg_date": "Tue, 26 May 1998 10:03:44 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Current sources?" } ]
[ { "msg_contents": "Yes, it's a typo. That happens if you have to use a M$ system for work.\n:-(\n\nMichael\n\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n> -----Original Message-----\n> From:\tThomas G. Lockhart [SMTP:[email protected]]\n> Sent:\tTuesday, May 26, 1998 4:58 PM\n> To:\tMichael Meskes\n> Cc:\tPostgreSQL Hacker\n> Subject:\tRe: [HACKERS] Connect string again\n> \n> > A while ago we agreed to use DB names like the JDBC standard:\n> > \n> > <protocol>:postgresql:\\\\<server>[:port]\\dbname[?<option>]\n> > \n> > Before I start working on libpq I would like to know which protocols\n> it\n> > should accept. Is there a norm for this?\n> > \n> > Suggestions: sql, esql, pq, ecpg ...\n> \n> Hey, what's with the backslashes?? Didn't know a non-M$ system even\n> had\n> 'em on the keyboard :)\n> \n> Seriously, is that a typo?\n> \n> - Tom\n", "msg_date": "Tue, 26 May 1998 16:57:55 +0200", "msg_from": "\"Meskes, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Connect string again" } ]
[ { "msg_contents": "Yes, there is a preferred standard, it defines char(5) strings that correspond to specific errors.\nIt is in SQLSTATE in ESQL/C programs and conforms to X/Open and ANSI.\nIt consists of numbers and upper case letters only. The first 2 characters are the Class, the last 3\nare the subclass. I think only the Class is in the norm.\n\ne.g.:\t02000\tNo data found or End of data reached\n\t00000\tSuccess.\n\t0A000\tFeature not supported\n\nYou get additional info through:\n\texec sql get diagnostics :num_rows_affected = ROW_COUNT [, :num = NUMBER, ...];\n\n\nThe integer SQLCODE + the sqlca structure is from an old ANSI norm: \n\t0 Success\n\t100 end of data \n\t< 0 execution not successful\n\t> 0 warning\n\nAndreas\n----------\nFrom: \tVadim Mikheev[SMTP:[email protected]]\nSent: \tTuesday, 26 May 1998 18:43\nTo: \tZeugswetter Andreas SARZ\nCc: \t'[email protected]'; '[email protected]'\nSubject: \tRe: [HACKERS] error messages not only English\n\nAndreas Zeugswetter wrote:\n> \n> Anyway, we are still missing the first step in this direction: enumerate ERROR messages.\n\nBTW, are error codes in standard ?\n\nVadim\n\n\n", "msg_date": "Tue, 26 May 1998 19:47:20 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] error messages not only English" } ]
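The SQLSTATE layout Andreas describes in the thread above (five characters of digits and upper-case letters, the first two naming the class and the last three the subclass) can be sketched as follows; the class table lists only the codes quoted in the message, and the helper itself is illustrative:

```python
# Sketch of the X/Open / ANSI SQLSTATE layout described above.
# A SQLSTATE is a char(5) value; characters 1-2 are the class and
# characters 3-5 the subclass. Only the classes quoted in the
# message are listed here.

SQLSTATE_CLASSES = {
    "00": "Success",
    "02": "No data found / end of data reached",
    "0A": "Feature not supported",
}

def split_sqlstate(code):
    """Split a five-character SQLSTATE into (class, subclass, description)."""
    if len(code) != 5:
        raise ValueError("SQLSTATE must be exactly five characters")
    cls, subclass = code[:2], code[2:]
    return cls, subclass, SQLSTATE_CLASSES.get(cls, "unknown class")
```

So `split_sqlstate("0A000")` splits into class `0A` and subclass `000`, matching the "Feature not supported" example in the message.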
[ { "msg_contents": "Nice to know that PostgreSQL is used by greatest ISP in Russia :)\n\nVadim\nI investigated carefully POSTGRES data base (in idea to use it for our \ninternal IP routing data base, and because I have participated in Ingres \ndevelopment here in Russia in RUBIN/DEMOS project - through it was not \nfreeware work - and it was very interesting for me too see such good \nfreeware data base as PostgreSQL), and I modified 'ipaddr' data type \nlibrary in \naccordance to our requests and to allow SQL do indexing over ipaddr objects.\n\nYou can read description at 'http://relcom.EU.net/ipaddr.html' and get \nsources at 'http://relcom.EU.net/ip_class.tar.gz'. It contains sources, \nsql scripts for incorporating new data type into postgres (including \nipaddr_ops operator class incorporation) and 20,000 records based data \ntest for the indexing.\n\nI am not sure if it's proper mail list for this information, and if \nit's interesting for anyone except me to get full-functional ipaddress \nclass. I am ready to make all modifications, bug fixing and documentation \nfor this data class if it's nessesary for it's contribution to the \nPostgres data base.\n\nAnyway, all my work was based at original 'ip&mac data type' \ncontribution, written by Tom Ivar Helbekkmo.\n\nBe free to write me any questions or requests about this work.\n==============================================================\n\nAleksei Roudnev, Network Operations Center, Relcom, Moscow\n(+7 095) 194-19-95 (Network Operations Center Hot Line),(+7 095) 239-10-10, N 13729 (pager)\n(+7 095) 196-72-12 (Support), (+7 095) 194-33-28 (Fax)", "msg_date": "Wed, 27 May 1998 02:16:24 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: [ANNOUNCE] ipaddr data type - EXTENDED VERSION HERE]" }, { "msg_contents": "> Date: Tue, 26 May 1998 21:04:06 +0400 (MSD)\n> From: \"Alex P. 
Rudnev\" <[email protected]>\n> To: [email protected]\n> cc: [email protected]\n> Subject: [ANNOUNCE] ipaddr data type - EXTENDED VERSION HERE \n> Message-ID: <[email protected]>\n> Organization: Relcom Corp.\n> MIME-Version: 1.0\n> Content-Type: TEXT/PLAIN; charset=US-ASCII\n> Sender: [email protected]\n> Precedence: bulk\n> \n> I investigated carefully POSTGRES data base (in idea to use it for our \n> internal IP routing data base, and because I have participated in Ingres \n> development here in Russia in RUBIN/DEMOS project - through it was not \n> freeware work - and it was very interesting for me too see such good \n> freeware data base as PostgreSQL), and I modified 'ipaddr' data type \n> library in \n> accordance to our requests and to allow SQL do indexing over ipaddr objects.\n> \n> You can read description at 'http://relcom.EU.net/ipaddr.html' and get \n> sources at 'http://relcom.EU.net/ip_class.tar.gz'. It contains sources, \n> sql scripts for incorporating new data type into postgres (including \n> ipaddr_ops operator class incorporation) and 20,000 records based data \n> test for the indexing.\n> \n> I am not sure if it's proper mail list for this information, and if \n> it's interesting for anyone except me to get full-functional ipaddress \n> class. 
I am ready to make all modifications, bug fixing and documentation \n> for this data class if it's nessesary for it's contribution to the \n> Postgres data base.\n> \n> Anyway, all my work was based at original 'ip&mac data type' \n> contribution, written by Tom Ivar Helbekkmo.\n> \n> Be free to write me any questions or requests about this work.\n> ==============================================================\n> \n> Aleksei Roudnev, Network Operations Center, Relcom, Moscow\n> (+7 095) 194-19-95 (Network Operations Center Hot Line),(+7 095) 239-10-10, N 13729 (pager)\n> (+7 095) 196-72-12 (Support), (+7 095) 194-33-28 (Fax)\n> \n> \n\nI have added this to replace the current ip /contrib handling.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 16 Jun 1998 00:32:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: [ANNOUNCE] ipaddr data type - EXTENDED VERSION\n\tHERE]" }, { "msg_contents": "> > Date: Tue, 26 May 1998 21:04:06 +0400 (MSD)\n> > From: \"Alex P. Rudnev\" <[email protected]>\n> > To: [email protected]\n> > cc: [email protected]\n> > Subject: [ANNOUNCE] ipaddr data type - EXTENDED VERSION HERE \n...\n> > I investigated carefully POSTGRES data base (in idea to use it for our \n> > internal IP routing data base, and because I have participated in Ingres \n> > development here in Russia in RUBIN/DEMOS project - through it was not \n> > freeware work - and it was very interesting for me too see such good \n> > freeware data base as PostgreSQL), and I modified 'ipaddr' data type \n> > library in \n> > accordance to our requests and to allow SQL do indexing over ipaddr objects.\n> > \n> > You can read description at 'http://relcom.EU.net/ipaddr.html' and get \n> > sources at 'http://relcom.EU.net/ip_class.tar.gz'. 
It contains sources, \n> > sql scripts for incorporating new data type into postgres (including \n> > ipaddr_ops operator class incorporation) and 20,000 records based data \n> > test for the indexing.\n> > \n> > I am not sure if it's proper mail list for this information, and if \n> > it's interesting for anyone except me to get full-functional ipaddress \n> > class. I am ready to make all modifications, bug fixing and documentation \n> > for this data class if it's nessesary for it's contribution to the \n> > Postgres data base.\n> > \n> > Anyway, all my work was based at original 'ip&mac data type' \n> > contribution, written by Tom Ivar Helbekkmo.\n> > \n> > Be free to write me any questions or requests about this work.\n> > ==============================================================\n> > \n> > Aleksei Roudnev, Network Operations Center, Relcom, Moscow\n> > (+7 095) 194-19-95 (Network Operations Center Hot Line),(+7 095) 239-10-10, N 13729 (pager)\n> > (+7 095) 196-72-12 (Support), (+7 095) 194-33-28 (Fax)\n> > \n> > \n> \n> I have added this to replace the current ip /contrib handling.\n\nIs this user-application compatible with our existing ip/contrib handling?\n\n-dg\n", "msg_date": "Mon, 15 Jun 1998 22:35:30 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: [ANNOUNCE] ipaddr data type - EXTENDED VERSION\n\tHERE]" }, { "msg_contents": "> > > You can read description at 'http://relcom.EU.net/ipaddr.html' and get \n> > > sources at 'http://relcom.EU.net/ip_class.tar.gz'. It contains sources, \n> > > sql scripts for incorporating new data type into postgres (including \n> > > ipaddr_ops operator class incorporation) and 20,000 records based data \n> > > test for the indexing.\n> > > \n> > > I am not sure if it's proper mail list for this information, and if \n> > > it's interesting for anyone except me to get full-functional ipaddress \n> > > class. 
I am ready to make all modifications, bug fixing and documentation \n> > > for this data class if it's nessesary for it's contribution to the \n> > > Postgres data base.\n> > > \n> > > Anyway, all my work was based at original 'ip&mac data type' \n> > > contribution, written by Tom Ivar Helbekkmo.\n> > > \n> > > Be free to write me any questions or requests about this work.\n> > > ==============================================================\n> > > \n> > > Aleksei Roudnev, Network Operations Center, Relcom, Moscow\n> > > (+7 095) 194-19-95 (Network Operations Center Hot Line),(+7 095) 239-10-10, N 13729 (pager)\n> > > (+7 095) 196-72-12 (Support), (+7 095) 194-33-28 (Fax)\n> > > \n> > > \n> > \n> > I have added this to replace the current ip /contrib handling.\n> \n> Is this user-application compatible with our existing ip/contrib handling?\n\nNot sure, I have heard reports the ip handling needed work, so I applied it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 16 Jun 1998 01:39:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: [ANNOUNCE] ipaddr data type - EXTENDED VERSION\n\tHERE]" }, { "msg_contents": "Sorry I haven't commented on this earlier, but I have been very busy,\namong other things becoming a father for the first time! :-)\n\[email protected] (David Gould) writes:\n\n> Is this user-application compatible with our existing ip/contrib\n> handling?\n\nMostly, yes. Aleksei Roudnev did a great job adding indexing to my IP\naddress data type, for which I'm very grateful, and he also added some\nfunctions that can come in handy. Good work! I'll certainly be using\nhis index building technique extensively! 
However, it should be noted\nthat he's also built some assumptions into the current code that may\nnot be expected by all users. In particular, I dislike the hardcoding\nof the notion of class A, B and C network, since that's outdated and\ndeprecated these days.\n\nOn the other hand, Aleksei has done some good thinking on how subnet\nmask specifications can be useful in the data base, doing things like\nstoring router interface addresses and their netmasks in the same\nrecord, as in 193.124.23.6/24.\n\nAlex: maybe we can sort this out and put together a \"final\" version\nthat combines the best ideas? There's a whole unused byte in the data\nstructure right now, that could be put to use...\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "19 Jul 1998 11:51:50 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: [ANNOUNCE] ipaddr data type - EXTENDED VERSION\n\tHERE]" }, { "msg_contents": "> \n> Alex: maybe we can sort this out and put together a \"final\" version\n> that combines the best ideas? There's a whole unused byte in the data\n> structure right now, that could be put to use...\nYes, I can. First, there were some (small) mistakes in the version I \npublished first, they are fixed now. Then, we have some experience of \nusing this class -because our work over IP data base is in progress. \n\nI'll be out from 23 July, unfortunately, and have no time before this \ndate; but when I'll come back from my vacation (approx. 12 August) I can \ndo this work.\n\nBut how to collect different opinions about IPADDR class?\nI was not subscribed to Postgres mail lists (because postgres is treated \nby me as THE TOOL, not the SUBJECT OF MY WORK).\n\n ----------\nAnd then, maybe some additional result of this work will be 'OBJECT' \nand 'OBJECT EDITOR' conception - something like RIPE DBA objects, but \nbased at SQL tables, with WWW and PERL interface. 
Exactly, it's simple \nSQL classes with some small restrictions (the table name and the key \nattribute name should be the same, there is 2 - level references, etc \netc), and if it's usefull I can public this tools too (our interes is in \nthe whole IP routing/monitoring system, and some parts of this system \nshould have FRee status because we are ISP, not soft. development \ncompany).\n \n\n\n> \n> -tih\n> -- \n> Popularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n> \n\nAleksei Roudnev, Network Operations Center, Relcom, Moscow\n(+7 095) 194-19-95 (Network Operations Center Hot Line),(+7 095) 239-10-10, N 13729 (pager)\n(+7 095) 196-72-12 (Support), (+7 095) 194-33-28 (Fax)\n\n", "msg_date": "Sun, 19 Jul 1998 15:30:21 +0400 (MSD)", "msg_from": "\"Alex P. Rudnev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: [ANNOUNCE] ipaddr data type - EXTENDED VERSION\n\tHERE]" }, { "msg_contents": "> [email protected], [email protected]\n> Subject: Re: [HACKERS] [Fwd: [ANNOUNCE] ipaddr data type - EXTENDED VERSION HERE]\n> \n> Sorry I haven't commented on this earlier, but I have been very busy,\n> among other things becoming a father for the first time! :-)\n> \n> [email protected] (David Gould) writes:\n> \n> > Is this user-application compatible with our existing ip/contrib\n> > handling?\n> \n> Mostly, yes. Aleksei Roudnev did a great job adding indexing to my IP\nYes, except INPUT/OUTPUT functions. The 'ipclass' contributed into PSQL \nbefore have the strict concept of 'IPADDR is SUBNET always, this mean it \nhave not HOST bits at all_. 
My concept was:\n\n- ipaddr consist of IPADDRESS and MASK; this mean you can store both \nNETWORKS and INTERFACE ADDRESSES, and you can easyly found all router \ninterfaces connected to the same network, for example;\n- input and output functions use /PREFIX form of address with 2 \nexceptions:\n\n(1) HOST (/32) address and last byte is not '0';\n(2) A, B or C network address, and last 3, 2 or 1 bytes is ZERO.\n\nThis was done becuase it's near intuitive writing we use widely - if we \ntreat '193.124.23.0' as host address, or you treat '193.124.23.4' as \nnetwork address, it's out of common usage; all other cases you use \n'/prefix' form.\n\nIn addition, there is 6'th byte of the 'ipaddr' structure not used yet, \n(exactly my idea was to use it for 'undefined' values but I did not \nchecked my realisation of this).\n\n\n\n\n> address data type, for which I'm very grateful, and he also added some\n> functions that can come in handy. Good work! I'll certainly be using\n> his index building technique extensively! However, it should be noted\n> that he's also built some assumptions into the current code that may\n> not be expected by all users. In particular, I dislike the hardcoding\n> of the notion of class A, B and C network, since that's outdated and\n> deprecated these days.\n> \n> On the other hand, Aleksei has done some good thinking on how subnet\n> mask specifications can be useful in the data base, doing things like\n> storing router interface addresses and their netmasks in the same\n> record, as in 193.124.23.6/24.\n> \n> Alex: maybe we can sort this out and put together a \"final\" version\n> that combines the best ideas? There's a whole unused byte in the data\n> structure right now, that could be put to use...\n> \n> -tih\n> -- \n> Popularity is the hallmark of mediocrity. 
--Niles Crane, \"Frasier\"\n> \n\nAleksei Roudnev, Network Operations Center, Relcom, Moscow\n(+7 095) 194-19-95 (Network Operations Center Hot Line),(+7 095) 239-10-10, N 13729 (pager)\n(+7 095) 196-72-12 (Support), (+7 095) 194-33-28 (Fax)\n\n", "msg_date": "Sun, 19 Jul 1998 15:36:22 +0400 (MSD)", "msg_from": "\"Alex P. Rudnev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: [ANNOUNCE] ipaddr data type - EXTENDED VERSION\n\tHERE]" } ]
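Alex's description above (four address octets plus a mask length, a spare sixth byte, and output in '/prefix' form except for plain host addresses and classful A/B/C network addresses with zeroed host octets) can be sketched in C. This is an illustrative reconstruction, not the actual contrib code: the struct layout and the function names are assumptions.

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Assumed layout: four address octets, a mask length, and the spare
 * sixth byte Alex mentions (unused so far). */
typedef struct {
    unsigned char o[4];     /* address octets */
    unsigned char prefix;   /* mask length, 0..32 */
    unsigned char spare;    /* the unused 6th byte */
} ipaddr;

/* Classful ("natural") prefix: 8 for class A (0-127), 16 for class B
 * (128-191), 24 for class C (192-223); -1 means no classful form. */
static int natural_prefix(unsigned char first_octet)
{
    if (first_octet < 128) return 8;
    if (first_octet < 192) return 16;
    if (first_octet < 224) return 24;
    return -1;
}

/* Output rule from the thread: use "a.b.c.d/prefix", except
 *  (1) a /32 whose last octet is nonzero prints as a bare host, and
 *  (2) an A, B or C network address whose host octets are all zero
 *      prints as a bare network. */
void ipaddr_out(const ipaddr *ip, char *buf, size_t buflen)
{
    unsigned long addr = ((unsigned long) ip->o[0] << 24) |
                         ((unsigned long) ip->o[1] << 16) |
                         ((unsigned long) ip->o[2] << 8) |
                          (unsigned long) ip->o[3];
    unsigned long hostbits =
        (ip->prefix >= 32) ? 0UL : (addr & (0xFFFFFFFFUL >> ip->prefix));
    int bare = 0;

    if (ip->prefix == 32 && ip->o[3] != 0)
        bare = 1;                                       /* exception (1) */
    else if (ip->prefix == natural_prefix(ip->o[0]) && hostbits == 0)
        bare = 1;                                       /* exception (2) */

    if (bare)
        snprintf(buf, buflen, "%d.%d.%d.%d",
                 ip->o[0], ip->o[1], ip->o[2], ip->o[3]);
    else
        snprintf(buf, buflen, "%d.%d.%d.%d/%d",
                 ip->o[0], ip->o[1], ip->o[2], ip->o[3], ip->prefix);
}
```

With these rules, 193.124.23.6/24 keeps its prefix, while 193.124.23.0 (a class C network address) and 193.124.23.6 (a /32 host) print bare, and the ambiguous 193.124.23.0/32 keeps its /32 suffix, matching the "near intuitive writing" described above.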
[ { "msg_contents": "> OK, lets review this, with thought about our various authentication\n> options:\n> \n> \ttrust, password, ident, crypt, krb4, krb5\n> \n> As far as I know, they all transmit queries and results as clear text\n> across the network. They encrypt the passwords and tickets, but not the\n> data. [Even kerberos does not encrypt the data stream, does it?]\n> \n> So, if someone snoops the network, they will see the query and results,\n> and see the cancel secret key. Of course, once they see the cancel\n> secret key, it is trivial for them to send that to the postmaster to\n> cancel a query. However, if they are already snooping, how much harder\n> is it for them to insert their own query into the tcp stream? If it is \n> as easy as sending the cancel secret key, then the additional\n> vulnerability of being able to replay the cancel packet is trivial\n> compared to the ability to send your own query, so we don't loose\n> anything by using a non-encrypted cancel secret key.\n\nCan someone answer this for me?\n\n> \n> Of course, if the stream were encrypted, they could not see the secret key\n> needs to be accepted and sent in an encrypted format.\n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Tue, 26 May 1998 17:31:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data (fwd)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> However, if they are already snooping, how much harder\n>> is it for them to insert their own query into the tcp stream?\n\n> Can someone answer this for me?\n\nWell, that depends entirely on what your threat model is --- for\nexample, someone with read access on /dev/kmem on a relay machine\nmight be able to watch packets going by, yet not be able to inject\nmore. On the other hand, someone with root privileges on another\nmachine on your local LAN could likely do both.\n\nMy guess is that most of the plausible cases that allow one also\nallow the other. But it's only a guess.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 May 1998 19:14:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data (fwd) " }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> >> However, if they are already snooping, how much harder\n> >> is it for them to insert their own query into the tcp stream?\n> \n> > Can someone answer this for me?\n> \n> Well, that depends entirely on what your threat model is --- for\n> example, someone with read access on /dev/kmem on a relay machine\n> might be able to watch packets going by, yet not be able to inject\n> more. On the other hand, someone with root privileges on another\n> machine on your local LAN could likely do both.\n> \n> My guess is that most of the plausible cases that allow one also\n> allow the other. But it's only a guess.\n\nOk, I agree. If someone sees the cancel secret going over the wire,\nthey almost just as easily can send their own query on the wire. 
\nRemember, the cancel is going directly to the postmaster as a separate\nconnection, so it is a little easier than spoofing a packet.\n\nSo, with that decided, the only issue is how the postmaster should\ngenerate the random key. Currently, to get the password salt, which\ndoes not have to be un-guessable, RandomSalt() seeds the random number\ngenerator with the current time, and then just continues to call random.\n\nIf we continue that practice, someone can guess the first cancel\npassword by finding out when the first postgres backend needed a random\nnumber, and use that time in seconds to figure out the new random\nnumber. We could load/save the seed on postmaster start/exit, and\nsomehow seed that value during install or initdb. Perhaps the\ncompletion time of initdb can be used. Maybe a:\n\n\t'date +%s' >/pgsql/data/pg_random\n\nand have the postmaster load it on startup, and write it on exit. \nBecause initdb takes some time to run, we could put it in between two of\nthe initdb commands that take some time to run. Their timestamp of\nactivity is burried in pgsql/data and only postgres read-able.\n\nComments?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 27 May 1998 00:25:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data (fwd)" }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> >> However, if they are already snooping, how much harder\n> >> is it for them to insert their own query into the tcp stream?\n> \n> > Can someone answer this for me?\n> \n> Well, that depends entirely on what your threat model is --- for\n> example, someone with read access on /dev/kmem on a relay machine\n> might be able to watch packets going by, yet not be able to inject\n> more. 
On the other hand, someone with root privileges on another\n> machine on your local LAN could likely do both.\n> \n> My guess is that most of the plausible cases that allow one also\n> allow the other. But it's only a guess.\n> \n\nOh, yes, one more thing. When generating the cancel key, We will have\nto call random twice and return part of each so users will not see our\ncurrent random values.\n\nWhen I remove the exec(), people will be able to call random() in the\nbackend to see the random value. May need to reset the seed on\nbackend startup.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 27 May 1998 00:39:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data (fwd)" }, { "msg_contents": "> > Bruce Momjian <[email protected]> writes:\n> > >> However, if they are already snooping, how much harder\n> > >> is it for them to insert their own query into the tcp stream?\n> > \n> > > Can someone answer this for me?\n> > \n> > Well, that depends entirely on what your threat model is --- for\n> > example, someone with read access on /dev/kmem on a relay machine\n> > might be able to watch packets going by, yet not be able to inject\n> > more. On the other hand, someone with root privileges on another\n> > machine on your local LAN could likely do both.\n> > \n> > My guess is that most of the plausible cases that allow one also\n> > allow the other. But it's only a guess.\n> \n> Ok, I agree. If someone sees the cancel secret going over the wire,\n> they almost just as easily can send their own query on the wire. 
\n> Remember, the cancel is going directly to the postmaster as a separate\n> connection, so it is a little easier than spoofing a packet.\n> \n> So, with that decided, the only issue is how the postmaster should\n> generate the random key. Currently, to get the password salt, which\n> does not have to be un-guessable, RandomSalt() seeds the random number\n> generator with the current time, and then just continues to call random.\n\nJust do time and pid. But get the time from gettimeofday so it will be\ndown to the millisecond or so. Anything more is overkill for this application.\n\n> If we continue that practice, someone can guess the first cancel\n> password by finding out when the first postgres backend needed a random\n> number, and use that time in seconds to figure out the new random\n> number. We could load/save the seed on postmaster start/exit, and\n> somehow seed that value during install or initdb. Perhaps the\n> completion time of initdb can be used. Maybe a:\n> \n> \t'date +%s' >/pgsql/data/pg_random\n> \n> and have the postmaster load it on startup, and write it on exit. \n> Because initdb takes some time to run, we could put it in between two of\n> the initdb commands that take some time to run. Their timestamp of\n> activity is burried in pgsql/data and only postgres read-able.\n>\n> Comments?\n\nSee Mr Dodds excellent post. This is getting too elaborate.\n\nOne possibility, make it configurable to allow cancels at all. Then if\nsomeone really had a spurious cancel problem they could work around it by\nturning cancels off.\n\nBut hey, I think we should just use TCP only and then we could count on OOB.\n\nBtw, on my P166 at work, lmbench says Linux 2.1.101 can do round trip tcp\nin 125 microseconds. That is pretty quick.\n\n-dg\n \nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! 
Fetch me a child of five.\n", "msg_date": "Sun, 31 May 1998 01:50:12 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data (fwd)" }, { "msg_contents": "> Just do time and pid. But get the time from gettimeofday so it will be\n> down to the millisecond or so. Anything more is overkill for this application.\n\n\nYou have given me exactly what I needed. If I run gettimeofday() on\npostmaster startup, and run it when the first backend is started, I can\nuse the microseconds from both calls to generate a truely random seed. \nEven if the clock is only accurate to 10 ms, I still get a 10,000 random\nkeyspace. I can mix the values by taking/swapping the low and high\n16-bit parts so even with poor resolution, both get used.\n\nThe micro-second times are not visible via ps, or probably even kept in\nthe kernel, so these values will work fine.\n\nOnce random is seeded, for each backend we call random twice and return\na merge of the two random values. I wonder if we just call random once,\nand XOR it with our randeom seed, if that would be just as good or\nbetter? Cryptologists?\n\nComments? Sounds like a plan. The thought of giving the users yet\nanother option to disable cancel just made me squirm.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 1 Jun 1998 00:53:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data (fwd)" }, { "msg_contents": "> From: Bruce Momjian <[email protected]>\n> Date: Mon, 1 Jun 1998 00:53:21 -0400 (EDT)\n...\n> > Just do time and pid. But get the time from gettimeofday so it will be\n> > down to the millisecond or so. 
Anything more is overkill for this application.\n> \n> You have given me exactly what I needed. If I run gettimeofday() on\n> postmaster startup, and run it when the first backend is started, I can\n> use the microseconds from both calls to generate a truely random seed. \n> Even if the clock is only accurate to 10 ms, I still get a 10,000 random\n> keyspace. I can mix the values by taking/swapping the low and high\n> 16-bit parts so even with poor resolution, both get used.\n> \n> The micro-second times are not visible via ps, or probably even kept in\n> the kernel, so these values will work fine.\n> \n> Once random is seeded, for each backend we call random twice and return\n> a merge of the two random values. I wonder if we just call random once,\n> and XOR it with our randeom seed, if that would be just as good or\n> better? Cryptologists?\n> \n> Comments? Sounds like a plan. The thought of giving the users yet\n> another option to disable cancel just made me squirm.\n\nFor FreeBSD and Linux, isn't /dev/urandom the method of choice for\ngetting random bits? [I've been away from this thread awhile - please\nexcuse if this option was already discussed].\n\n", "msg_date": "Mon, 1 Jun 1998 01:33:35 -0400 (EDT)", "msg_from": "Hal Snyder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data (fwd)" }, { "msg_contents": "> \n> > From: Bruce Momjian <[email protected]>\n> > Date: Mon, 1 Jun 1998 00:53:21 -0400 (EDT)\n> ...\n> > > Just do time and pid. But get the time from gettimeofday so it will be\n> > > down to the millisecond or so. Anything more is overkill for this application.\n> > \n> > You have given me exactly what I needed. If I run gettimeofday() on\n> > postmaster startup, and run it when the first backend is started, I can\n> > use the microseconds from both calls to generate a truely random seed. \n> > Even if the clock is only accurate to 10 ms, I still get a 10,000 random\n> > keyspace. 
I can mix the values by taking/swapping the low and high\n> > 16-bit parts so even with poor resolution, both get used.\n> > \n> > The micro-second times are not visible via ps, or probably even kept in\n> > the kernel, so these values will work fine.\n> > \n> > Once random is seeded, for each backend we call random twice and return\n> > a merge of the two random values. I wonder if we just call random once,\n> > and XOR it with our randeom seed, if that would be just as good or\n> > better? Cryptologists?\n> > \n> > Comments? Sounds like a plan. The thought of giving the users yet\n> > another option to disable cancel just made me squirm.\n> \n> For FreeBSD and Linux, isn't /dev/urandom the method of choice for\n> getting random bits? [I've been away from this thread awhile - please\n> excuse if this option was already discussed].\n\nNot available on most/all platforms.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 1 Jun 1998 10:12:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data (fwd)" }, { "msg_contents": "Hal Snyder wrote:\n> \n> > From: Bruce Momjian <[email protected]>\n> > Date: Mon, 1 Jun 1998 00:53:21 -0400 (EDT)\n> ...\n> > > Just do time and pid. But get the time from gettimeofday so it will be\n> > > down to the millisecond or so. Anything more is overkill for this application.\n> > \n> > You have given me exactly what I needed. If I run gettimeofday() on\n> > postmaster startup, and run it when the first backend is started, I can\n> > use the microseconds from both calls to generate a truely random seed. \n> > Even if the clock is only accurate to 10 ms, I still get a 10,000 random\n> > keyspace. 
I can mix the values by taking/swapping the low and high\n> > 16-bit parts so even with poor resolution, both get used.\n> > \n> > The micro-second times are not visible via ps, or probably even kept in\n> > the kernel, so these values will work fine.\n> > \n> > Once random is seeded, for each backend we call random twice and return\n> > a merge of the two random values. I wonder if we just call random once,\n> > and XOR it with our randeom seed, if that would be just as good or\n> > better? Cryptologists?\n> > \n> > Comments? Sounds like a plan. The thought of giving the users yet\n> > another option to disable cancel just made me squirm.\n> \n> For FreeBSD and Linux, isn't /dev/urandom the method of choice for\n> getting random bits? [I've been away from this thread awhile - please\n> excuse if this option was already discussed].\n\nI believe /dev/random is guaranteed to be \"random\", while /dev/urandom\nis guaranteed to return a certain number of psuedorandom bytes in a\ngiven time. These are not universally available though. Seeding with\nbits taken from the pid and hi-res time should be OK.\n\nSomething I did on a similar task was to set up a max-keys-per-key and\nmax-elapsed-time-per-key. Basically, seed the random number generator\nwhen the postmaster starts, and reseed after every 10 keys, or if 10\nminutes have elapsed since the random number generator was las seeded.\nThis way, the would be attacker can't know for sure when the random\nnumber generator was last reseeded.\n\n\nOcie Mitchell\n", "msg_date": "Mon, 1 Jun 1998 14:32:17 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data (fwd)" } ]
[ { "msg_contents": "Tom Ivar Helbekkmo writes:\n> > Yes, it's a typo. That happens if you have to use a M$ system for work.\n> \n> Another thing that happens when you have to do that is that you follow\n> up someone's mail message, and end up generating a message that begins\n> with a short answer to no obvious question, then has a legal signature\n> separator, then your signature, and finally the entire message that\n> you're responding to, including a selection of header lines that your\n> M$ software has mangled so that they're now plain wrong.\n\nIt's horrible, isn't it? There is a way to tell M$ Exchange to not put the\nanswered mail at the end. But Exchange isn't able to use international\nstandars, like Re: for reply.- It insist on AW: for the german Antwort.\nSo I have to stick with Outlook.\n\n> I'm considering telling Procmail to dump anything written with Outlook\n> (that's its name, right?) directly into /dev/null. It takes too much\n> time trying to figure out what the context of the message is.\n\nGood move.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Wed, 27 May 1998 09:34:24 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Connect string again" }, { "msg_contents": "Michael,\n\nIt's generally not considered good form to move a private conversation\nto a newsgroup or mailing list without mutual consent, but since you\nhave chosen to do so, I don't mind commenting briefly. 
In fact, among\nthe many mailing lists and newsgroups I read, the PostgreSQL lists are\nnoticably more difficult to read than most, so it might be useful.\nI'll thus take the opportunity to sum up some common problems below.\n\nMichael Meskes <[email protected]> writes:\n\n> It's horrible, isn't it? There is a way to tell M$ Exchange to not\n> put the answered mail at the end. But Exchange isn't able to use\n> international standars, like Re: for reply.- It insist on AW: for\n> the german Antwort.\n\nIt is, indeed, horrible. One would think that as time passed, the\nsoftware available to us for communication would get better, and this\nwas the case until personal computing started complicating things.\nThose who write software for the mass market know that quality is not\nworth a large investment of time and money. Instead, products must\ncome out in ever new versions, each with more colors, longer feature\nlists and more marketing hype than the last.\n\nMicrosoft is much worse than most (although Lotus and Netscape are not\nthat far behind, to name but two). A reasonable explanation for this\nhas two parts: first, the teenagers who write software for Microsoft\nhave little or no experience with the network community and the way\nthings have been done here since the beginning, and second, they have\nthe secure knowledge that this does not matter. Thus, what they don't\nknow about standards and conventions on the net, they certainly aren't\ngoing to bother to find out. What they do will be the new \"standard\",\neffective immediately, because of the label on the box.\n\n> So I have to stick with Outlook.\n\nI feel sorry for you if you have an employer so lacking in common\nsense that you're forced to use a Microsoft application for email. 
It\nis one thing to demand that employees use Microsoft's poor excuse for\nan operating system, but you should at least be allowed to use what\nyou want for tasks where it cannot make a difference to anyone but you\nwhich tool you choose.\n\n> > I'm considering telling Procmail to dump anything written with Outlook\n> > (that's its name, right?) directly into /dev/null. It takes too much\n> > time trying to figure out what the context of the message is.\n> \n> Good move.\n\nI suspect sarcasm. :-) Actually, I'd like to defend this as being,\nindeed, a good move. I always have so many interesting things to do,\nand very much want to use my time as effeciently as I can. With the\nsheer volume of traffic on the PostgreSQL mailing lists, this means\nthat I have to make an effort to get as much out of reading the lists\nas I possibly can. This, unfortunately, includes _not_ reading much\nof the material posted to the lists. But what not to read?\n\nOf course, I try to skip lightly over discussions on topics that I\ndon't find very interesting. That's not the hardest part. The real\nproblem is in the threads of discussion that I really want to follow.\nIn the \"good old days\", technical mailing lists and newsgroups were\ngenerally easy to read, because most people followed the same set of\nconventions: text was properly formatted for 80 column terminals,\ncommon quoting rules made it easy to see what was old and new in a\nmessage, and selective quoting of relevant bits of what was being\ncommented on made it easy to follow a thread of discussion smoothly.\nYou could very quickly determine whether a message held interesting\nmaterial or not. If some newcomer didn't follow conventions, they\nwere pointed out to him or her, and everything was fine.\n\nThese days, it's not always so easy. In many of the fora I follow,\nthings are still the way they were. 
The NetBSD mailing lists, for\ninstance, are easy to read -- almost everybody follows conventions.\nHere on the PostgreSQL mailing lists, however, the picture is very\nmuch different: every new message that I read is fundamentally\ndifferent from the last, so I have to _start_ by figuring out what the\nsyntax and semantics of this particular message happens to be. After\nsorting out multi-part MIME, quoting, new content only at top or only\nat bottom, visually coming to grips with overlong lines and quoted\nprintable encoding and so on and so forth, I can finally start to\nevaluate whether the content of the message is interesting. This\ntakes enough time that I could have digested two or three properly\npresented messages in the time it takes to get ready to start reading\none of the ones produced by newcomers with \"modern\" software!\n\nThe whole point of conventions is to ease communication!\n\n- Stick to at most 75 characters per line. Monospaced displays of\n 80 character width are the norm, and lines longer than that are\n difficult to read comfortably anyway, especially on-screen.\n\n- Write plain text. Do not use HTML formatting and suchlike, since\n it makes it very difficult for those who don't use a web browser\n to read their mail to read the text.\n\n- Quote selectively, using \"> \" in front of quoted text, and clearly\n indicating who wrote what you're quoting. (See the early parts of\n this message for what I mean.)\n\n- Avoid MIME \"multipart\" messages when not needed. Particularly, do\n not use VCARD and the like, and do not let your email software\n generate an alternative HTML version of the text.\n\n- Above all, remember that you're the one trying to communicate your\n thoughts to others, so it's your responsibility to do this well!\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. 
--Niles Crane, \"Frasier\"\n", "msg_date": "27 May 1998 21:22:36 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Off-topic: Communication. (was: Connect string again)" }, { "msg_contents": "\n[First for those who didn't see the begining of this thread, I'm one of\nthose who when emailing from work, has to use Outlook. I'm hoping that I'm\ngoing to be able to get procmail or sendmail to divert stuff from these\nlists to one of the Linux boxes I have there. \n\nPS: If anyone knows how to configure sendmail.cf to forward mail to any\nother host other than localhost or the relayhost I'd be interested in\nhearing from them.]\n\nOn 27 May 1998, Tom Ivar Helbekkmo wrote:\n\n> > It's horrible, isn't it? There is a way to tell M$ Exchange to not\n> > put the answered mail at the end. But Exchange isn't able to use\n> > international standars, like Re: for reply.- It insist on AW: for\n> > the german Antwort.\n> \n> It is, indeed, horrible. One would think that as time passed, the\n> software available to us for communication would get better, and this\n> was the case until personal computing started complicating things.\n> Those who write software for the mass market know that quality is not\n> worth a large investment of time and money. Instead, products must\n> come out in ever new versions, each with more colors, longer feature\n> lists and more marketing hype than the last.\n\nIt's horrible here - middle and upper management seem to love M$ because\nits either the presumed standard, or simply because its M$\n\nWorse still, is when a user gets a brand new PC, and moans at us because\nit doesn't to the same job as their old Dumb Terminal did (the DT proving\nto be more reliable).\n\n> Microsoft is much worse than most (although Lotus and Netscape are not\n> that far behind, to name but two). 
A reasonable explanation for this\n> has two parts: first, the teenagers who write software for Microsoft\n> have little or no experience with the network community and the way\n> things have been done here since the beginning, and second, they have\n> the secure knowledge that this does not matter. Thus, what they don't\n> know about standards and conventions on the net, they certainly aren't\n> going to bother to find out. What they do will be the new \"standard\",\n> effective immediately, because of the label on the box.\n\nWhat anoys me more with their versions of the \"standards\" is that they\ndon't even keep to them within their own product range, or even with\ndifferent versions of the same product.\n\n> > So I have to stick with Outlook.\n> \n> I feel sorry for you if you have an employer so lacking in common\n> sense that you're forced to use a Microsoft application for email. It\n> is one thing to demand that employees use Microsoft's poor excuse for\n> an operating system, but you should at least be allowed to use what\n> you want for tasks where it cannot make a difference to anyone but you\n> which tool you choose.\n\nSadly were going down the M$ Exchange route for email also. Even though\nit's a log better than what it's replacing (a mail can be 10 lines of 80\nchars only), it's a real pig to keep up. Sometimes users call saying that\nthe server's gone down, when it's their PC deciding to forget the servers\nname, or the server deciding that it would be fun to resent the last\nmonths email to every single user (this little gem happens about once\nevery two months).\n\n> > > I'm considering telling Procmail to dump anything written with Outlook\n> > > (that's its name, right?) directly into /dev/null. It takes too much\n> > > time trying to figure out what the context of the message is.\n> > \n> > Good move.\n> \n> I suspect sarcasm. :-) Actually, I'd like to defend this as being,\n> indeed, a good move. 
I always have so many interesting things to do,\n> and very much want to use my time as effeciently as I can. With the\n> sheer volume of traffic on the PostgreSQL mailing lists, this means\n> that I have to make an effort to get as much out of reading the lists\n> as I possibly can.\n\nI agree with you. If I can sort out getting mail from the lists to arrive\nat the linux box under my desk, I'd switch over immediately.\n\n> This, unfortunately, includes _not_ reading much of the material posted\n> to the lists. But what not to read? \n> \n> Of course, I try to skip lightly over discussions on topics that I\n> don't find very interesting. That's not the hardest part. The real\n> problem is in the threads of discussion that I really want to follow.\n\n> In the \"good old days\", technical mailing lists and newsgroups were\n> generally easy to read, because most people followed the same set of\n> conventions: text was properly formatted for 80 column terminals,\n> common quoting rules made it easy to see what was old and new in a\n> message, and selective quoting of relevant bits of what was being\n> commented on made it easy to follow a thread of discussion smoothly.\n\nThis is the reason I prefer Pine. It's text only, but it handles all of\nthe standards correctly, formats for 80 column screens (unlike Outlook\nwhich formats it on screen, but a paragraph is still a single line), and\nit automatically quotes the message correctly (if you want to place a > at\nthe begining of the line in Outlook, you have to add it manually, and\nformat each line manually).\n\n> You could very quickly determine whether a message held interesting\n> material or not.\n\n> If some newcomer didn't follow conventions, they were pointed out to him\n> or her, and everything was fine. \n\nI remember when I first started on the \"Net\" 5 years ago, netiquete was\none of the first things you picked up.\n\n> These days, it's not always so easy. 
In many of the fora I follow,\n> things are still the way they were. The NetBSD mailing lists, for\n> instance, are easy to read -- almost everybody follows conventions.\n\nThat's most probably because they are reading them on NetBSD machines.\n\n> Here on the PostgreSQL mailing lists, however, the picture is very\n> much different: every new message that I read is fundamentally\n> different from the last, so I have to _start_ by figuring out what the\n> syntax and semantics of this particular message happens to be.\n\nThis is partly due to the number of different platforms that either\nPostgres runs on, or the clients run on.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Wed, 27 May 1998 22:03:24 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Off-topic: Communication. (was: Connect string again)" }, { "msg_contents": "Peter T Mount <[email protected]> writes:\n\n> What anoys me more with their versions of the \"standards\" is that they\n> don't even keep to them within their own product range, or even with\n> different versions of the same product.\n\nWhat annoys me even more than that is that there is a growing group of\npeople who actually think that Microsoft invented the web, email (even\nSMTP) and TCP.\n\nI know it is hard to believe, but these people exist. They must be stopped.\n\n> > These days, it's not always so easy. In many of the fora I follow,\n> > things are still the way they were. The NetBSD mailing lists, for\n> > instance, are easy to read -- almost everybody follows conventions.\n> \n> That's most probably because they are reading them on NetBSD machines.\n\nSome are, some are not. 
We get many questions on the NetBSD mailing list\nfrom \"newbies\" who format things right. I think the difference is we\naren't flooded by Microsloth's Following who just want to run the latest\ncool toy.\n\n--Michael\n", "msg_date": "27 May 1998 16:14:03 -0700", "msg_from": "Michael Graff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Off-topic: Communication. (was: Connect string again)" }, { "msg_contents": "Hi there,\n\nI am just starting to use PostgreSQL, and I couldn't find the answer on\nthe\nPostgreSQL site...so if any one could help me it would be very\nappreciated.\n\n1) Good books/reference on PostgreSQL for a novice?\n2) How do I delete a columm from a table, I have tried to use the\nfollowing:\n\nDELETE FROM table_name columm_name\n\nThanks much,\n\nJP\n", "msg_date": "Wed, 27 May 1998 16:41:06 -0700", "msg_from": "Joao Paulo Felix <[email protected]>", "msg_from_op": false, "msg_subject": "delete columm help" }, { "msg_contents": "Joao Paulo Felix <[email protected]> writes:\n\n> 1) Good books/reference on PostgreSQL for a novice?\n\nI've no particular SQL book to recommend, although I hear much good\nabout Joe Celco's \"SQL for Smarties\". For a very good online\ntutorial, check out <http://w3.one.net/~jhoffman/sqltut.htm>.\n\n> 2) How do I delete a columm from a table, I have tried to use the\n> following:\n> \n> DELETE FROM table_name columm_name\n\nThat should be\n\n\talter table table_name drop column column_name;\n\nHowever, it's not yet implemented. You might rename the table, create\na new one lacking the offending column, and copy the data over. On\nthe other hand, you might just not worry about it. That's one of the\ngreat things about SQL: if you can take the space consumption hit, an\nextraneous column doesn't matter. Just don't \"select *\", which you\nnormally shouldn't do anyway.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. 
--Niles Crane, \"Frasier\"\n", "msg_date": "28 May 1998 07:32:08 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] delete columm help" }, { "msg_contents": "On 27 May 1998, Michael Graff wrote:\n\n> Peter T Mount <[email protected]> writes:\n> \n> > What anoys me more with their versions of the \"standards\" is that they\n> > don't even keep to them within their own product range, or even with\n> > different versions of the same product.\n> \n> What annoys me even more than that is that there is a growing group of\n> people who actually think that Microsoft invented the web, email (even\n> SMTP) and TCP.\n> \n> I know it is hard to believe, but these people exist. They must be stopped.\n\nOh, I know they exist. About a year ago, I had a heated discussion with\nsomeone who really believed that Microsoft invented Java, and was puzzled\nwhy Sun was taking them to court.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Thu, 28 May 1998 06:32:45 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Off-topic: Communication. (was: Connect string again)" }, { "msg_contents": "Tom Ivar Helbekkmo writes:\n> Michael,\n> \n> It's generally not considered good form to move a private conversation\n> to a newsgroup or mailing list without mutual consent, but since you\n> have chosen to do so, I don't mind commenting briefly. In fact, among\n\nOh! Please take my apologies. Up to this moment I haven't noticed that your\noriginal mail was send in private and not via the list. 
I'm really sorry\nabout this, since I absolutely agree that this behaviour is not good.\n\n> I feel sorry for you if you have an employer so lacking in common\n> sense that you're forced to use a Microsoft application for email. It\n> is one thing to demand that employees use Microsoft's poor excuse for\n> an operating system, but you should at least be allowed to use what\n> you want for tasks where it cannot make a difference to anyone but you\n> which tool you choose.\n\nI couldn't agree more. But tell that to my boss. :-( But then I'm leaving\nthis job anyway, so I don't care about it that much anymore. \n\n> These days, it's not always so easy. In many of the fora I follow,\n> things are still the way they were. The NetBSD mailing lists, for\n> instance, are easy to read -- almost everybody follows conventions.\n\nAnd almost no one will use M$ I think. :-)\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 28 May 1998 09:58:13 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Off-topic: Communication. (was: Connect string again)" } ]
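Tom's workaround for the unimplemented ALTER TABLE ... DROP COLUMN can be sketched in SQL. The table and column names below are invented for illustration, and it assumes ALTER TABLE ... RENAME is available:

```sql
-- Rename the old table out of the way.
ALTER TABLE customers RENAME TO customers_old;

-- Recreate it without the unwanted column.
CREATE TABLE customers (name varchar(40), city varchar(40));

-- Copy the surviving columns across.
INSERT INTO customers SELECT name, city FROM customers_old;

-- Finally, remove the renamed original.
DROP TABLE customers_old;
```

Note that any indexes, constraints, or views on the old table would have to be recreated by hand afterwards.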
[ { "msg_contents": "\nI'm rewriting my SSL for patch so it's a little less messy, and I've\ncome across something interesting.\n\nWhat I've done is replaced Pfin,Pfout and Pfdebug with a struct called\nPGcomm.\n\npqcomm.c had Pfin/Pfout/Pfdebug as \"global\" variables. some other c\nfiles have \"extern\" entries for these variables. the in/out funcs in\npqcomprim.c take a FILE * as an argument instead of the extern\napproach. I'm not sure there are any cases where the FILE * passed\ndiffers from the one in the global Pfin, but to maintain consistency,\nI haven't changed it. So the functions in pqcomm.c still access the\nglobal copy of the PGcomm struct (my replacement for\nPfin/Pfout/Pfdebug) and pqcomprim.c still takes a PGcomm * as an\nargument. There are actually little [f]read/[f]write system calls in\npqcomm.c, most of the communication takes place by calling pqcomprim.c\nfunctions.\n\nthe reason i'm writing this mail are twofold, one is: are the\ndevelopers interested in merging my input/output changes into the\ndistribution. this has the benefit of making the io a little more\ncoherent, right now it seems sort of patched together, read/write\nmixed with fread/fwrite, functions that do the same thing but take\ndifferent arguments, fread/fwrite in the actual code instead of\ncalling an appropriate function. this seems like a good idea to me.\nwe could also define an interface for implementing transport layers,\nso my patch could be an add-on module.\n\nso, the interesting part is this: there is a call to pq_putstr after\nthe client has disconnected. so, when I exit out of psql, I get an\nerror (with my patch) whereas before, if fputs gets a NULL pointer, it\ndoesn't signal an error for some reason. I've modified my patch to\nmatch the behavoir, but it does seem a little odd. 
I will try to find\nthe place this is being called from, as it does not seem like a good\nthing.\n\nLet me know if I need to clarify.\n", "msg_date": "Wed, 27 May 1998 16:05:00 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "backend/frontend communication" }, { "msg_contents": "Brett McCormick <[email protected]> writes:\n\n> I'm rewriting my SSL for patch so it's a little less messy, [...]\n\nDoes this mean that you're adding a facility for an encrypted data\nstream between server and clients? If so, great! Are you adding this\nin such a way that other mechanisms than SSL can be facilitated? I'd\nlike to take a shot at adding Kerberos IV encryption to your model...\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "28 May 1998 07:40:37 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend/frontend communication" }, { "msg_contents": "On , 28 May 1998, at 07:40:37, Tom Ivar Helbekkmo wrote:\n\n> > I'm rewriting my SSL for patch so it's a little less messy, [...]\n> \n> Does this mean that you're adding a facility for an encrypted data\n> stream between server and clients? If so, great! Are you adding this\n> in such a way that other mechanisms than SSL can be facilitated? I'd\n> like to take a shot at adding Kerberos IV encryption to your model...\n\nOnce the patch is rewritten, yes, all fe/be communication will take\nplace in two functions, pq_read and pq_write. It'll take a little\nmore to make it completely modularized (once bruce removes the exec()\nit will make things much better -- as it is the SSL connection must be\nrenegotiated at that point) but I think it is worth the effort. I may\ngo as far as to allow pluggable transport mechanisms and\nauthentication.\n\nIt's a work in progress. 
The info page is at\nhttp://www.chicken.org/pgsql/ssl/\n\nIt details some of the changes I plan to make, as well as a short\ndescription of the patch and how I feel about the fe/be communication.\nHowever, it is probably poorly written, so I should probably change\nthat.\n\nI warn against using it at this point -- libpq is the only interface\nguarunteed to work, which means no perl interface without some ugly\nhacking. This will change.\n", "msg_date": "Wed, 27 May 1998 23:17:40 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] backend/frontend communication" }, { "msg_contents": "On Wed, 27 May 1998, Brett McCormick wrote:\n\n> the reason i'm writing this mail are twofold, one is: are the\n> developers interested in merging my input/output changes into the\n> distribution. this has the benefit of making the io a little more\n> coherent, right now it seems sort of patched together, read/write\n> mixed with fread/fwrite, functions that do the same thing but take\n> different arguments, fread/fwrite in the actual code instead of\n> calling an appropriate function. this seems like a good idea to me.\n> we could also define an interface for implementing transport layers,\n> so my patch could be an add-on module.\n\n\tGo for it...I like the thought of simplifying the code, which this\nsounds like it will do.\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 29 May 1998 19:21:58 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend/frontend communication" }, { "msg_contents": "> \n> On Wed, 27 May 1998, Brett McCormick wrote:\n> \n> > the reason i'm writing this mail are twofold, one is: are the\n> > developers interested in merging my input/output changes into the\n> > distribution. 
this has the benefit of making the io a little more\n> > coherent, right now it seems sort of patched together, read/write\n> > mixed with fread/fwrite, functions that do the same thing but take\n> > different arguments, fread/fwrite in the actual code instead of\n> > calling an appropriate function. this seems like a good idea to me.\n> > we could also define an interface for implementing transport layers,\n> > so my patch could be an add-on module.\n> \n> \tGo for it...I like the thought of simplifying the code, which this\n> sounds like it will do.\n> \n\nI also encourge you to try and improve the handling of the variables\nthat you mentioned. You can use ctags and mkid (see developers FAQ). \nThat makes it easy. I have noticed the inconstency, where some were\npassed, and others were global, and could not figure out what they were\nall used for.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 29 May 1998 23:52:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend/frontend communication" } ]
[ { "msg_contents": "I am testing a patch for removing exec() and using just fork(). I will\npost it to the hackers list for review, if that is OK. Should be only a\nfew hundred lines.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 27 May 1998 20:41:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "removal of exec()" } ]
[ { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> \n> > I am testing a patch for removing exec() and using just fork(). I will\n> > post it to the hackers list for review, if that is OK. Should be only a\n> > few hundred lines.\n> \n> To do what exactly?\n> \n> --Michael\n> \n\n[FYI for others on the list.]\n\nCurrently, a backend is created by forking the postmaster, then exec'ing\nan identical binary to run the backend. This change makes it just\nfork(), but not exec() a new identical binary.\n\n\nWhy it was originally done this way, I don't know. It was not trivial\nto change it. It saves 0.01 seconds on backend startup with single\nquery, which usually takes 0.08 seconds, so the 0.01 seconds is\nsignificant.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 28 May 1998 00:27:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] removal of exec()" } ]
[ { "msg_contents": "Hey, wow. \n\nWhere is it, I am curious. I'll try AIX (will it apply to the last snapshot, I still did not have time\nto compile CVSup on my AIX box)\n\nAndreas\n\nI am testing a patch for removing exec() and using just fork(). I will\npost it to the hackers list for review, if that is OK. Should be only a\nfew hundred lines.\n\n", "msg_date": "Thu, 28 May 1998 11:41:47 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] removal of exec()" } ]
[ { "msg_contents": "I have to ask, what is really so wrong with Outlook.\nAnd before you say well, you must not have used it before. I'm using it \nright now.\nAnd I have also used Pine, elm, ZMail, ...\nI often find myself hitting Ctrl-e (emacs end of line) in Outlook and \nthen I hit Ctrl-z (M$ undo).\nI'm not a M$ advocate and I agree that they promote the cluelessness of\nthe masses. But I also don't think that Outlook is in and of itself \nevil.\n\n\tJust my $0.01, (hey if I had two would I have jumped into this)\n DEJ\n", "msg_date": "Thu, 28 May 1998 16:20:58 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [GENERAL] Re: [HACKERS] Off-topic: Communication. (was: Conne\n\tct string again)" }, { "msg_contents": "On Thu, 28 May 1998, Jackson, DeJuan wrote:\n\n> I have to ask, what is really so wrong with Outlook.\n> And before you say well, you must not have used it before. I'm using it \n> right now.\n> And I have also used Pine, elm, ZMail, ...\n> I often find myself hitting Ctrl-e (emacs end of line) in Outlook and \n> then I hit Ctrl-z (M$ undo).\n> I'm not a M$ advocate and I agree that they promote the cluelessness of\n> the masses. 
But I also don't think that Outlook is in and of itself \n> evil.\n\nIt's fine for internal stuff, as the quoting scheme they use works well.\nBut it doesn't conform the the established standards, and those of us who\nhave suffered by it find it makes being involved in serious internet based\ndiscussions pretty hard.\n\nThe Ctrl-e problem happens to me a lot as well.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Fri, 29 May 1998 06:57:36 +0100 (BST)", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [GENERAL] Re: [HACKERS] Off-topic: Communication. (was: Conne ct\n\tstring again)" } ]
[ { "msg_contents": "I hate to bother you developers, but I have a question about the\nPostgres server. Can someone explain to me the following error messages\nand how I can avoid them? I received them when trying to select from a\nsimple table with about 5 rows. When there were only two rows in the\ntable, the select came back ok, however when I then inserted 3 more\nrows, I started to see this message. Thank You.\n\nMay 27 16:07:31 pclinux postgres: Too Large Allocation Request(\"!(0 <\n(size) && (size) <= (0xfffffff)):size=540029106 [0x203030b2]\", File:\n\"mcxt.c\", Line: 232)\nMay 27 16:07:31 pclinux postgres: !(0 < (size) && (size) <= (0xfffffff))\n(0)\nMay 27 16:28:59 pclinux postgres: NOTICE: buffer leak [9] detected in\nBufferPoolCheckLeak()\n\n- Greg\n\n--\nGreg Krasnow\nHNC Software Inc.\nFinancial Industry Solutions\nSenior Software Engineer\nEmail: [email protected]\nDirect Phone: 619.799.8341\nFax: 619.546.9464\n\n\n", "msg_date": "Thu, 28 May 1998 16:09:09 -0700", "msg_from": "Gregory Krasnow <[email protected]>", "msg_from_op": true, "msg_subject": "Error Message" }, { "msg_contents": "> \n> I hate to bother you developers, but I have a question about the\n> Postgres server. Can someone explain to me the following error messages\n> and how I can avoid them? I received them when trying to select from a\n> simple table with about 5 rows. When there were only two rows in the\n> table, the select came back ok, however when I then inserted 3 more\n> rows, I started to see this message. Thank You.\n> \n> May 27 16:07:31 pclinux postgres: Too Large Allocation Request(\"!(0 <\n> (size) && (size) <= (0xfffffff)):size=540029106 [0x203030b2]\", File:\n> \"mcxt.c\", Line: 232)\n> May 27 16:07:31 pclinux postgres: !(0 < (size) && (size) <= (0xfffffff))\n> (0)\n> May 27 16:28:59 pclinux postgres: NOTICE: buffer leak [9] detected in\n> BufferPoolCheckLeak()\n\nCan you send us a reproducible example? That would be great. 
I assume\nthis is 6.3.2.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 28 May 1998 20:06:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Error Message" } ]
[ { "msg_contents": "\nI've decided to make two separate patches, a \"communication\" patch to\nclean up the fe-be communication (to be submitted for inclusion) and\nthen a separate SSL patch. this is good because, if approved for\nmerging, will clear up a lot of inconsistency regarding the io\ncommunication in the backend and frontend. it also has the added\nbenefit of making my SSL patch less hack-ish.\n\nI haven't heard much from you guys regarding the backend\ncommunication, but I figure if I make a good patch that doesn't\ninterfere and has positive changes, what have we got to lose.\n\nI'm considering going as far as making it more even more modular so\nthat it is easier for other people to take advantage of that, but I'm\nnot sure how anyone feels about that. First things first I guess.\n\nbtw, i'm still planning on implementing stored procedures in perl.\ni'd like to gauge the relative interest of these two projects so I can\ndecide how to spend my time.\n\n", "msg_date": "Thu, 28 May 1998 20:13:46 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "comm patch & ssl patch" }, { "msg_contents": "> I haven't heard much from you guys regarding the backend\n> communication, but I figure if I make a good patch that doesn't\n> interfere and has positive changes, what have we got to lose.\n\nMy impression is that the frontend/backend comm has been less-than-ideal\nfor some time. Someone submitted patches to fix the reversed network\nbyte ordering (Postgres sends little-endian using home-grown versions of\nthe big-endian ntoh/hton routines) but got discouraged when they didn't\nquite work right on mixed-order networks.\n\nAnyway, it would be great if a few people would take an interest, as you\nhave, in cleaning this up. 
The OOB discussion touches on this also, and\nif there are non-backward-compatible changes for v6.4 then you may as\nwell clean up other stuff while we're at it.\n\nFor something as fundamental as client/server communication we should\nprobably have a few people testing your patches before applying things\nto the source tree; I'd be happy to help (but can only test on a\nlittle-endian machine) and Tatsuo in Japan has a mixed-order network\nwhich he has used for extensive testing in the past.\n\n - Tom\n", "msg_date": "Fri, 29 May 1998 03:36:52 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] comm patch & ssl patch" }, { "msg_contents": "On Thu, 28 May 1998, Brett McCormick wrote:\n> I'm considering going as far as making it more even more modular so\n> that it is easier for other people to take advantage of that, but I'm\n> not sure how anyone feels about that. First things first I guess.\n\nYou have my blessing. (For as much as it counts for anything.)\n\n:)\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Fri, 29 May 1998 00:00:17 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] comm patch & ssl patch" }, { "msg_contents": "On Fri, 29 May 1998, at 03:36:52, Thomas G. Lockhart wrote:\n\n> Anyway, it would be great if a few people would take an interest, as you\n> have, in cleaning this up. The OOB discussion touches on this also, and\n> if there are non-backward-compatible changes for v6.4 then you may as\n> well clean up other stuff while we're at it.\n\nthe changes I propose are completely backward compatible, as far as\nthe network communication goes. 
What other compatibility aspects\nshould I be worried about?\n\nCan you fill me in on the OOB discussion? As far as I know, we were\nplanning on using it for the synchronous notification, but it turns\nout we can't because unix sockets won't support it. So now we're\nthinking of opening another connection to the postmaster (?) to send\nthe cancel message, along with some sort of authorization cookie.\nWe're now trying to determine the best way of making it secure, right?\nI wouldn't be too worried about it, really. Postgres can't really\nprotect itself against packet sniffing. If someone can connect to\nyour database and delete all your tables, why are we worried about\nbeing able to send a cancel message?\n\nHass this list been especially quiet lately? Or am I not up-to-date\non the new list scheme?\n\n> For something as fundamental as client/server communication we should\n> probably have a few people testing your patches before applying things\n> to the source tree; I'd be happy to help (but can only test on a\n> little-endian machine) and Tatsuo in Japan has a mixed-order network\n> which he has used for extensive testing in the past.\n\nI agree wholeheartedly. BTW, it passes the regression tests. Not\nthat this means it should have the living daylights tested out of it,\nbut it is a promising sign.\n\nAnother question: how does each backend end up exiting? (I'm about to\nfind out). From what I can tell, when the backend receives the 'X'\ncharacter from the front-end (meaning: front-end exiting), it calls\npq_close, which closes the file pointers and sets them to null.\nThen it proceeds to call NullCommand, which signals the end of a response.\nBut, of course, it can't do this, because its file pointers are gone.\nThis is inside of an infinite for(;;) loop.\n\nI guess I'll answer my own question. On the next iteration of the for\nloop, ReadCommand is called, which in turn calls SocketBackend, which\ntries to read a character. 
If this fails (returns EOF) it decides to\nexit. It would seem more appropriate to exit after pq_close is called\n(but not in pq_close).\n\ncomments?\n", "msg_date": "Thu, 28 May 1998 21:21:04 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] comm patch & ssl patch" }, { "msg_contents": "> \n> On Fri, 29 May 1998, at 03:36:52, Thomas G. Lockhart wrote:\n> \n> > Anyway, it would be great if a few people would take an interest, as you\n> > have, in cleaning this up. The OOB discussion touches on this also, and\n> > if there are non-backward-compatible changes for v6.4 then you may as\n> > well clean up other stuff while we're at it.\n> \n> the changes I propose are completely backward compatible, as far as\n> the network communication goes. What other compatibility aspects\n> should I be worried about?\n> \n> Can you fill me in on the OOB discussion? As far as I know, we were\n> planning on using it for the synchronous notification, but it turns\n> out we can't because unix sockets won't support it. So now we're\n> thinking of opening another connection to the postmaster (?) to send\n> the cancel message, along with some sort of authorization cookie.\n> We're now trying to determine the best way of making it secure, right?\n> I wouldn't be too worried about it, really. Postgres can't really\n> protect itself against packet sniffing. If someone can connect to\n> your database and delete all your tables, why are we worried about\n> being able to send a cancel message?\n\nYes, you are correct.\n> I agree wholeheartedly. BTW, it passes the regression tests. Not\n> that this means it should have the living daylights tested out of it,\n> but it is a promising sign.\n> \n> Another question: how does each backend end up exiting? (I'm about to\n> find out). 
From what I can tell, when the backend receives the 'X'\n> character from the front-end (meaning: front-end exiting), it calls\n> pq_close, which closes the file pointers and sets them to null.\n> Then it proceeds to call NullCommand, which signals the end of a response.\n> But, of course, it can't do this, because its file pointers are gone.\n> This is inside of an infinite for(;;) loop.\n> \n> I guess I'll answer my own question. On the next iteration of the for\n> loop, ReadCommand is called, which in turn calls SocketBackend, which\n> tries to read a character. If this fails (returns EOF) it decides to\n> exit. It would seem more appropriate to exit after pq_close is called\n> (but not in pq_close).\n\nAny cleanup you can do would be helpful. Sounds like you are on-top of\nit.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 29 May 1998 10:25:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] comm patch & ssl patch" }, { "msg_contents": "At 3:36 AM 98.5.29 +0000, Thomas G. Lockhart wrote:\n>For something as fundamental as client/server communication we should\n>probably have a few people testing your patches before applying things\n>to the source tree; I'd be happy to help (but can only test on a\n>little-endian machine) and Tatsuo in Japan has a mixed-order network\n>which he has used for extensive testing in the past.\n\nI'm more than happy to join mixed-byte-order testing. Please let me know\nif you need help.\n--\nTatsuo Ishii\[email protected]\n--\nTatsuo Ishii\[email protected]\n\n", "msg_date": "Sat, 30 May 1998 00:07:49 +0900", "msg_from": "[email protected] (Tatsuo Ishii)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] comm patch & ssl patch" } ]
[ { "msg_contents": "Dear,\nI am french student in computer science and I'm develloping a monitoring \nas psql in java and visual with Macro, automatic creation ..... It's \nvery complete.\nIf you are interesting (it's a very good example) write me.\n\nBye\n \n-----------------------------------------------------\n- Nicolas PROCHAZKA - \n- Etudiant en maitr�se d'Informatique -\n-----------------------------------------------------\n\ne-mail : [email protected]\ncv : http://www.etu.info.unicaen.fr/~nprochaz/\n\n\n______________________________________________________\nGet Your Private, Free Email at http://www.hotmail.com\n", "msg_date": "Fri, 29 May 1998 11:33:27 PDT", "msg_from": "\"Nicolas PROCHAZKA\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql and Java" } ]
[ { "msg_contents": "I just downloaded an tried mpsql (see\nhttp://www.troubador.com/~keidav/index.html) and I have to say I like it.\n\nIt's like a graphical version of psql. In fact it is modelled after Oracle's\nSQL Worksheet or similar products.\n\nNow I wonder what's the correct way to handle this kind of software. Shall\nwe try to boundle it with our release? Or is it just a separete program for\nus?\n\nSidenote to Oliver Elphick, I think we package this one for Debian, too.\nWhat do you think?\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Fri, 29 May 1998 15:01:00 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "mpsql" }, { "msg_contents": "On Fri, 29 May 1998, Michael Meskes wrote:\n\n> I just downloaded an tried mpsql (see\n> http://www.troubador.com/~keidav/index.html) and I have to say I like it.\n> \n> It's like a graphical version of psql. In fact it is modelled after Oracle's\n> SQL Worksheet or similar products.\n> \n> Now I wonder what's the correct way to handle this kind of software. Shall\n> we try to boundle it with our release? Or is it just a separete program for\n> us?\n\n\tIts bundled as part of the CD distribution...\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 29 May 1998 19:13:34 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mpsql" }, { "msg_contents": "On Fri, 29 May 1998, Michael Meskes wrote:\n\n> I just downloaded an tried mpsql (see\n> http://www.troubador.com/~keidav/index.html) and I have to say I like it.\n\nWie geht's Michael,\n\nI read Marc's reply - that mpsql is bundled with the CD (which I\neagerly await ;-) I am wondering if mpmgr (the sibling application)\nis also bundled?\n\nAlso - I have become reliant on Edmund Mergl's DBD-Pg (ver 0.69)\nmodule for the Perl DBI interface. (Using version 0.91)\nAre these also bundled with the CD? \n\nMarc, can we not have links off the postgresql.org page to these\ndandy items? And could I appeal to the powers that be to include\nthese tools with the tarball? I do expect that the reason mpsql\nand mpmgr are not in the archive is due to their size but the DBI\ndriver and Edmund's great module are very lean.\n\nBTW, my superiors have agreed that we will deploy PostgreSQL on\nOctober 01-1998 when I bring our newest server online.\nThis represents the complete departure from our original model:\nPROGRESS 7.3A09 on UnixWare (1.1). We now use PosgreSQL 6.3.2\n(ecpg 1.1, DBI::DBD-Pg) on Slackware 3.4.\n\nWe are very pleased. The week of 15 June 1998 is PC Expo at the\nJacob Javitts Centre in lower Manhattan. I will be tabling at this\nevent and expect to trumpet the changes in my shop made possible\nby PostgreSQL (dot-org).\n\nCheers,\nTom\n\n\n===================================================================\n\t\tUser Guide Dog Database Project\n===================================================================\n Project Coordinator: Peter J. 
Puckall <[email protected]>\n Programmers: \n C/Perl: Paul Anderson <[email protected]>\n SQL/Perl: Tom Good <[email protected]>\n HTML: Chris House <[email protected]>\n SQL/Perl: Phil R. Lawrence <[email protected]> \n Perl: Mike List <[email protected]>\n Progress 4GL: Robert March <[email protected]>\n===================================================================\n Powered by PostgreSQL 6.3.2 // DBI-0.91::DBD-PG-0.69 // Perl5\n===================================================================\n\n", "msg_date": "Sat, 30 May 1998 05:05:34 -0400 (EDT)", "msg_from": "Tom Good <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mpsql" }, { "msg_contents": "On Sat, 30 May 1998, Tom Good wrote:\n\n> I read Marc's reply - that mpsql is bundled with the CD (which I\n> eagerly await ;-) I am wondering if mpmgr (the sibling application)\n> is also bundled?\n\nSee http://www.postgresql.org/cd-dist.shtml for everything that is\ncurrently bundled in...\n\nI'm working on updating the CD image this weekend, which has held off on\nsome ppl having been ship'd theirs :( \n\n> Marc, can we not have links off the postgresql.org page to these\n> dandy items? And could I appeal to the powers that be to include\n> these tools with the tarball? I do expect that the reason mpsql\n> and mpmgr are not in the archive is due to their size but the DBI\n> driver and Edmund's great module are very lean.\n\n\tI can definitely say that none of these tools will be included in\nthe tarball...right now, that would add another 3+Meg to the distribution\n:)\n\n\tIf there is anything that I don't have listed at the above URL\n(ie. that isn't bundled on the CD that you'd like to see), please let me\nknow. And, of course, if anything at the above URL is out of date, again,\nplease let me know...some stuff is just impossible to keep up on :)\n\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 30 May 1998 14:08:30 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mpsql" }, { "msg_contents": "Tom Good writes:\n> Wie geht's Michael,\n\nGut. Danke, Tom.\n\n> I read Marc's reply - that mpsql is bundled with the CD (which I\n> eagerly await ;-) I am wondering if mpmgr (the sibling application)\n> is also bundled?\n\nmpmgr is not that useful right now. I think the author is looking for a\nclass library to program with. That is to say development is currently on\nhold.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Wed, 3 Jun 1998 10:40:07 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mpsql" } ]
[ { "msg_contents": "I've just committed a bunch of patches, mostly to help with parsing and\ntype conversion. The quick summary:\n\n1) The UNION construct will now try to coerce types across each UNION\nclause. At the moment, the types are converted to match the _first_\nselect clause, rather than matching the \"best\" data type across all the\nclauses. I can see arguments for either behavior, and I'm pretty sure\neither behavior can be implemented. Since the first clause is a bit\n\"special\" anyway (that is the one which can name output columns, for\nexample), it seemed that perhaps this was a good choice. Any comments??\n\n2) The name data type will now transparently convert to and from other\nstring types. For example, \n\n SELECT USER || ' is me';\n\nnow works.\n\n3) A regression test for UNIONs has been added. SQL92 string functions\nare now included in the \"strings\" regression test. Other regression\ntests have been updated, and all tests pass on my Linux/i686 box.\n\nI'm planning on writing a section in the new docs discussing type\nconversion and coercion, once the behavior becomes set for v6.4.\n\nI think the new type conversion/coercion stuff is pretty solid, and I've\ntested as much as I can think of wrt behavior. It can benefit from\ntesting by others to uncover any unanticipated problems, so let me know\nwhat you find...\n\n - Tom\n\nOh, requires a dump/reload to get the string conversions for the name\ndata type.\n", "msg_date": "Fri, 29 May 1998 14:28:25 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Lots 'o patches" }, { "msg_contents": "> I've just committed a bunch of patches, mostly to help with parsing and\n> type conversion. The quick summary:\n> \n> 1) The UNION construct will now try to coerce types across each UNION\n> clause. At the moment, the types are converted to match the _first_\n> select clause, rather than matching the \"best\" data type across all the\n> clauses. 
I can see arguments for either behavior, and I'm pretty sure\n> either behavior can be implemented. Since the first clause is a bit\n> \"special\" anyway (that is the one which can name output columns, for\n> example), it seemed that perhaps this was a good choice. Any comments??\n\nI think this is good. The important thing really is that we have a\nconsistent \"story\" we can tell about how and why it works so that a user can\nform a mental model of the system that is useful when trying to compose\na query. I.e., the principle of least surprise.\n\nThe story \"the first select picks the names and types for the columns and\nthe other selects are coerced to match\" seems quite clear and easy to understand.\n\nThe story \"the first select picks the names and then we consider all the\npossible conversions throughout the other selects and resolve them using\nthe type hierarchy\" is not quite as obvious.\n\nWhat we don't want is a story that approximates \"we sacrifice a goat and\nexamine the entrails\".\n\n> 2) The name data type will now transparently convert to and from other\n> string types. For example, \n> \n> SELECT USER || ' is me';\n> \n> now works.\n\nGood.\n \n> 3) A regression test for UNIONs has been added. SQL92 string functions\n> are now included in the \"strings\" regression test. Other regression\n> tests have been updated, and all tests pass on my Linux/i686 box.\n\nVery good.\n \n> I'm planning on writing a section in the new docs discussing type\n> conversion and coercion, once the behavior becomes set for v6.4.\n\nEven better.\n \n> I think the new type conversion/coercion stuff is pretty solid, and I've\n> tested as much as I can think of wrt behavior. It can benefit from\n> testing by others to uncover any unanticipated problems, so let me know\n> what you find...\n\nWill do.\n\n> - Tom\n> \n> Oh, requires a dump/reload to get the string conversions for the name\n> data type.\n\nOoops. 
I guess we need to add \"make a useful upgrade procedure\" to our\ntodo list. I am not picking on this patch, it is a problem of long standing\nbut as we get into real applications it will become increasingly\nunacceptable.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Sun, 31 May 1998 16:31:27 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lots 'o patches" }, { "msg_contents": "\nI don't quite understand \"to get the string conversions for the name\ndata type\" (unless it refers to inserting the appropriate info into\nthe system catalogs), but dump/reload it isn't a problem at all for\nme. It used to really suck, mostly because it was broken, but now it\nworks great.\n\nOn Sun, 31 May 1998, at 16:31:27, David Gould wrote:\n\n> > Oh, requires a dump/reload to get the string conversions for the name\n> > data type.\n> \n> Ooops. I guess we need to add \"make a useful upgrade procedure\" to our\n> todo list. I am not picking on this patch, it is a problem of long standing\n> but as we get into real applications it will become increasingly\n> unacceptable.\n> \n> -dg\n> \n> David Gould [email protected] 510.628.3783 or 510.305.9468 \n> Informix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n> \"Of course, someone who knows more about this will correct me if I'm wrong,\n> and someone who knows less will correct me if I'm right.\"\n> --David Palmer ([email protected])\n> \n", "msg_date": "Sun, 31 May 1998 19:01:09 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lots 'o patches" }, { "msg_contents": "> Ooops. 
I guess we need to add \"make a useful upgrade procedure\" to our\n> todo list. I am not picking on this patch, it is a problem of long standing\n> but as we get into real applications it will become increasingly\n> unacceptable.\n\nYou don't like the fact that upgrades require a dump/reload? I am not\nsure we will ever succeed in not requiring that. We change the system\ntables too much, because we are a type-neutral system.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 1 Jun 1998 00:27:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lots 'o patches" }, { "msg_contents": "> I don't quite understand \"to get the string conversions for the name\n> data type\" (unless it refers to inserting the appropriate info into\n> the system catalogs), but dump/reload it isn't a problem at all for\n> me. It used to really suck, mostly because it was broken, but now it\n> works great.\n> \n> On Sun, 31 May 1998, at 16:31:27, David Gould wrote:\n> \n> > > Oh, requires a dump/reload to get the string conversions for the name\n> > > data type.\n> > \n> > Ooops. I guess we need to add \"make a useful upgrade procedure\" to our\n> > todo list. I am not picking on this patch, it is a problem of long standing\n> > but as we get into real applications it will become increasingly\n> > unacceptable.\n\nOne of the Illustra customers moving to Informix UDO that I have had the\npleasure of working with is Egghead software. They sell stuff over the web.\n24 hours a day. Every day. Their database takes something like 20 hours to\ndump and reload. The last time they did that they were down the whole time\nand it made the headline spot on cnet news. Not good. 
I don't think they\nwant to do it again.\n\nIf we want postgresql to be usable by real businesses, requiring downtime is\nnot acceptable.\n\nA proper upgrade would just update the catalogs online and fix any other\nissues without needing a dump / restore cycle.\n\nAs a Sybase customer once told one of our support people in a very loud voice\n \"THIS is NOT a \"Name and Address\" database. WE SELL STOCKS!\".\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n\n", "msg_date": "Sun, 31 May 1998 23:54:10 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lots 'o patches" }, { "msg_contents": "> One of the Illustra customers moving to Informix UDO that I have had the\n> pleasure of working with is Egghead software. They sell stuff over the web.\n> 24 hours a day. Every day. Their database takes something like 20 hours to\n> dump and reload. The last time they did that they were down the whole time\n> and it made the headline spot on cnet news. Not good. I don't think they\n> want to do it again.\n> \n> If we want postgresql to be usable by real businesses, requiring downtime is\n> not acceptable.\n> \n> A proper upgrade would just update the catalogs online and fix any other\n> issues without needing a dump / restore cycle.\n> \n> As a Sybase customer once told one of our support people in a very loud voice\n> \"THIS is NOT a \"Name and Address\" database. WE SELL STOCKS!\".\n\nThat is going to be difficult to do. We used to have some SQL scripts\nthat could make the required database changes, but when system table\nstructure changes, I can't imagine how we would migrate that without a\ndump/reload. 
I suppose we could keep the data/index files with user data,\nrun initdb, and move the data files back, but we need the system table\ninfo reloaded into the new system tables.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 1 Jun 1998 10:24:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lots 'o patches" }, { "msg_contents": ">\n> That is going to be difficult to do. We used to have some SQL scripts\n> that could make the required database changes, but when system table\n> structure changes, I can't imagine how we would migrate that without a\n> dump/reload. I suppose we could keep the data/index files with user data,\n> run initdb, and move the data files back, but we need the system table\n> info reloaded into the new system tables.\n\nIf the tuple header info doesn't change, this doesn't seem that tough.\nJust do a dump the pg_* tables and reload them. The system tables are\n\"small\" compared to the size of user data/indexes, no?\n\nOr is there some extremely obvious reason that this is harder than it\nseems?\n\nBut then again, what are the odds that changes for a release will only\naffect system tables so not to require a data dump? Not good I'd say.\n\ndarrenk\n\n", "msg_date": "Mon, 1 Jun 1998 20:44:23 -0400", "msg_from": "\"Stupor Genius\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Lots 'o patches" }, { "msg_contents": "> \n> >\n> > That is going to be difficult to do. We used to have some SQL scripts\n> > that could make the required database changes, but when system table\n> > structure changes, I can't imagine how we would migrate that without a\n> > dump/reload. 
I suppose we could keep the data/index files with user data,\n> > run initdb, and move the data files back, but we need the system table\n> > info reloaded into the new system tables.\n> \n> If the tuple header info doesn't change, this doesn't seem that tough.\n> Just do a dump the pg_* tables and reload them. The system tables are\n> \"small\" compared to the size of user data/indexes, no?\n\nI like this idea.\n \n> Or is there some extremely obvious reason that this is harder than it\n> seems?\n> \n> But then again, what are the odds that changes for a release will only\n> affect system tables so not to require a data dump? Not good I'd say.\n\nHmmm, not bad either, especially if we are a little bit careful not to\nbreak existing on disk structures, or to make things downward compatible.\n\nFor example, if we added a b-tree clustered index access method, this should\nnot invalidate all existing tables and indexes, they just couldn't take\nadvantage of it until rebuilt.\n\nOn the other hand, if we decided to change to say 64 bit oids, I can see \na reload being required.\n\nI guess that in our situation we will occassionally have changes that require\na dump/load. But this should really only be required for the addition of a\nmajor feature that offers enough benifit to the user that they can see that\nit is worth the pain.\n\nWithout knowing the history, the impression I have formed is that we have\nsort of assumed that each release will require a dump/load to do the upgrade.\nI would like to see us adopt a policy of trying to avoid this unless there\nis a compelling reason to make an exception.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. 
If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n\n\n", "msg_date": "Mon, 1 Jun 1998 22:54:40 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lots 'o patches" }, { "msg_contents": "David Gould wrote:\n> \n> >\n> > >\n> > > That is going to be difficult to do. We used to have some SQL scripts\n> > > that could make the required database changes, but when system table\n> > > structure changes, I can't imagine how we would migrate that without a\n> > > dump/reload. I suppose we could keep the data/index files with user data,\n> > > run initdb, and move the data files back, but we need the system table\n> > > info reloaded into the new system tables.\n> >\n> > If the tuple header info doesn't change, this doesn't seem that tough.\n> > Just do a dump the pg_* tables and reload them. The system tables are\n> > \"small\" compared to the size of user data/indexes, no?\n> \n> I like this idea.\n> \n> > Or is there some extremely obvious reason that this is harder than it\n> > seems?\n> >\n> > But then again, what are the odds that changes for a release will only\n> > affect system tables so not to require a data dump? Not good I'd say.\n> \n> Hmmm, not bad either, especially if we are a little bit careful not to\n> break existing on disk structures, or to make things downward compatible.\n> \n> For example, if we added a b-tree clustered index access method, this should\n> not invalidate all existing tables and indexes, they just couldn't take\n> advantage of it until rebuilt.\n> \n\n\n> On the other hand, if we decided to change to say 64 bit oids, I can see\n> a reload being required.\n> \n> I guess that in our situation we will occassionally have changes that require\n> a dump/load. 
But this should really only be required for the addition of a\n> major feature that offers enough benifit to the user that they can see that\n> it is worth the pain.\n> \n> Without knowing the history, the impression I have formed is that we have\n> sort of assumed that each release will require a dump/load to do the upgrade.\n> I would like to see us adopt a policy of trying to avoid this unless there\n> is a compelling reason to make an exception.\n\n\nHow about making a file specifying what to do when upgrading from one\nversion of pg to another? Then a program, let's call it 'pgconv', would\nread this file and do the conversions from the old to the new format\nusing pg_dump and psql and/or some other helper programs.\n\npgconv should be able to skip versions (upgrade from 6.2 to 6.4 for\nexample, skipping 6.2.1, 6.3 and 6.3.2) by simply going through all\nsteps from version to version.\n\nWouldn't this be much easier than having to follow instructions\nwritten in HRF? Nobody could mess up their data, because the\nprogram would always do the correct conversions.\n\nBtw, does pg_dump quote identifiers? CREATE TABLE \"table\"\n(\"int\" int, \"char\" char) for example? I know it did not\nuse to, but perhaps it does now?\n\n\n(Very simplified example follows):\n----------------------------------\n% cd /usr/src/pgsql6.4\n% pgconv /usr/local/pgsql -y\n-- PgConv1.0 - PostgreSQL data conversion program --\n\nFound old version 6.3 in /usr/local/pgsql/\nConvert to 6.4 (y/n)? (yes)\n\n>> Converting 6.3->6.3.2\n> Creating shadow passwords\n\n>> Converting 6.3.2->6.3.4\n> System tables converted\n> Data files converted\n\nPgConv done. 
Now delete the old binaries, install\nthe new binaries with 'make install' and make sure\nyou have your PATH set correctly.\nPlease don't forget to run 'ldconfig' after\ninstalling the new libraries.\n\n\n(pgconv.data):\n--------------\n#From\tTo\tWhat to do\n#\nepoch\t6.2\tERROR(\"Can not upgrade - too old version\")\n6.2\t6.3\tSQL(\"some-sql-commands-here\")\n\t\tDELETE(\"obsolete-file\")\n\t\tOLDVER_DUMPALL()\t\t# To temp file\n\t\tNEWVER_LOADALL()\t\t# From temp file\n6.3\t6.3.2\tPRINT(\"Creating shadow passwords\")\n\t\tSQL(\"create-pg_shadow\")\n\t\tSYSTEM(\"chmod go-rwx pg_user\")\n\t\tSQL(\"some-sql-commands\")\n6.3.2\t6.4\tSQL(\"some-commands\")\n\t\tSYSTEM(\"chmod some-files\")\n\t\tPRINT(\"System tables converted\")\n\t\tSQL(\"some-other-commands\")\n\t\tPRINT(\"Data files converted\")\n\n/* m */\n", "msg_date": "Tue, 02 Jun 1998 11:22:39 +0200", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "An easier way to upgrade (Was: Lots 'o patches)" }, { "msg_contents": "Mattias Kregert wrote:\n> How about making a file specifying what to do when upgrading from one\n> version of pg to another? 
Then a program, let's call it 'pgconv', would\n> read this file and do the conversions from the old to the new format\n> using pg_dump and psql and/or some other helper programs.\n\nI think what is needed is a replication program, since pgsql uses\nsocket comunication it is quiet easy to run 2 concurrent systems\nsay one each of 6.3.2 and 6.4 and copy beteewn them at run-time.\n\nThe easiest way would be to use dump&load but as David pointed out in\na case where dump&load takes 20 hours it means 20 hours downtime unless\nwe want inconsistent data (data inserted/updated while copying).\n\nA smarter replication would make the downtime shorter since most data\nwould be upto date and only latest changes need to be transfer during\n\"update downtime\".\n\nSuch a mechanism would be even more useful for other proposes like\nclustering/backup/redundancy etc.\n\nHas anyone looked as this?\nThe only thing I have seen is the Mariposa project which seems to be\nsomewhat overkill for most applications.\n\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna", "msg_date": "Tue, 02 Jun 1998 16:04:18 +0200", "msg_from": "Goran Thyni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] An easier way to upgrade (Was: Lots 'o patches)" }, { "msg_contents": "On Tue, 2 Jun 1998, Goran Thyni wrote:\n\n> Mattias Kregert wrote:\n> > How about making a file specifying what to do when upgrading from one\n> > version of pg to another? 
Then a program, let's call it 'pgconv', would\n> > read this file and do the conversions from the old to the new format\n> > using pg_dump and psql and/or some other helper programs.\n> \n> I think what is needed is a replication program, since pgsql uses\n> socket comunication it is quiet easy to run 2 concurrent systems\n> say one each of 6.3.2 and 6.4 and copy beteewn them at run-time.\n> \n> The easiest way would be to use dump&load but as David pointed out in\n> a case where dump&load takes 20 hours it means 20 hours downtime unless\n> we want inconsistent data (data inserted/updated while copying).\n> \n> A smarter replication would make the downtime shorter since most data\n> would be upto date and only latest changes need to be transfer during\n> \"update downtime\".\n> \n> Such a mechanism would be even more useful for other proposes like\n> clustering/backup/redundancy etc.\n> \n> Has anyone looked as this?\n> The only thing I have seen is the Mariposa project which seems to be\n> somewhat overkill for most applications.\n\n\tSomeone had scripts for this that they were going to submit, but I\nnever heard further on it :(\n\n\n", "msg_date": "Tue, 2 Jun 1998 10:11:00 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] An easier way to upgrade (Was: Lots 'o patches)" }, { "msg_contents": "> > ... if we are a little bit careful not to\n> > break existing on disk structures, or to make things downward \n> > compatible.\n> > For example, if we added a b-tree clustered index access method, \n> > this should not invalidate all existing tables and indexes, they \n> > just couldn't take advantage of it until rebuilt.\n> > On the other hand, if we decided to change to say 64 bit oids, I can \n> > see a reload being required.\n> > I guess that in our situation we will occassionally have changes \n> > that require a dump/load. 
But this should really only be required \n> > for the addition of a major feature that offers enough benifit to \n> > the user that they can see that it is worth the pain.\n> > Without knowing the history, the impression I have formed is that we \n> > have sort of assumed that each release will require a dump/load to \n> > do the upgrade. I would like to see us adopt a policy of trying to \n> > avoid this unless there is a compelling reason to make an exception.\n\nWe tried pretty hard to do this at the start of the v6.x releases, and\nfailed. A few of the reasons as I recall:\n1) most changes/improvements involve changes to one or more system\ncatalogs\n2) postgres does not allow updates/inserts to at least some system\ncatalogs (perhaps because of interactions with the compiled catalog\ncache?).\n3) system catalogs appear in every database directory, so all databases\nwould need to be upgraded\n\n> How about making a file specifying what to do when upgrading from one\n> version of pg to another? Then a program, let's call it 'pgconv', \n> would read this file and do the conversions from the old to the new \n> format using pg_dump and psql and/or some other helper programs.\n> \n> pgconv should be able to skip versions (upgrade from 6.2 to 6.4 for\n> example, skipping 6.2.1, 6.3 and 6.3.2) by simply going through all\n> steps from version to version.\n> \n> Wouldn't this be much easier than having to follow instructions\n> written in HRF? Nobody could mess up their data, because the\n> program would always do the correct conversions.\n\nThis will be a good bit of work, and would be nice to have but we'd\nprobably need a few people to take this on as a project. Right now, the\nmost active developers are already spending more time than they should\nworking on Postgres :)\n\nI haven't been too worried about this, but then I don't run big\ndatabases which need to be upgraded. 
Seems the dump/reload frees us to\nmake substantial improvements with each release without a huge burden of\nensuring backward compatibility. At the prices we charge, it might be a\ngood tradeoff for users...\n\n> Btw, does pg_dump quote identifiers? CREATE TABLE \"table\"\n> (\"int\" int, \"char\" char) for example? I know it did not\n> use to, but perhaps it does now?\n\nIf it doesn't yet (I assume it doesn't), I'm planning on looking at it\nfor v6.4. Or do you want to look at it Bruce? We should be looking to\nhave all identifiers double-quoted, to preserve case, reserved words,\nand weird characters in names.\n\n - Tom\n", "msg_date": "Tue, 02 Jun 1998 14:12:43 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] An easier way to upgrade (Was: Lots 'o patches)" }, { "msg_contents": "1> How about making a file specifying what to do when upgrading from one\n> version of pg to another? Then a program, let's call it 'pgconv', would\n> read this file and do the conversions from the old to the new format\n> using pg_dump and psql and/or some other helper programs.\n\nWe already have the migration directory, but it only text, no scripts\ncurrently. 
During 1.*, we did supply script for the upgrade, but the\nfeature changes were small.\n\n> \n> (pgconv.data):\n> --------------\n> #From\tTo\tWhat to do\n> #\n> epoch\t6.2\tERROR(\"Can not upgrade - too old version\")\n> 6.2\t6.3\tSQL(\"some-sql-commands-here\")\n> \t\tDELETE(\"obsolete-file\")\n> \t\tOLDVER_DUMPALL()\t\t# To temp file\n> \t\tNEWVER_LOADALL()\t\t# From temp file\n> 6.3\t6.3.2\tPRINT(\"Creating shadow passwords\")\n> \t\tSQL(\"create-pg_shadow\")\n> \t\tSYSTEM(\"chmod go-rwx pg_user\")\n> \t\tSQL(\"some-sql-commands\")\n> 6.3.2\t6.4\tSQL(\"some-commands\")\n> \t\tSYSTEM(\"chmod some-files\")\n> \t\tPRINT(\"System tables converted\")\n> \t\tSQL(\"some-other-commands\")\n> \t\tPRINT(\"Data files converted\")\n\nInteresting ideas, but in fact, all installs will probably require a new\ninitdb. Because of the interdependent nature of the system tables, it\nis hard to make changes to them using SQL statements. What we could try\nis doing a pg_dump_all -schema-only, moving all the non pg_* files to a\nseparate directory, running initdb, loading the pg_dumped schema, then\nmoving the data files back into place.\n\nThat may work. But if we change the on-disk layout of the data, like we\ndid when we made varchar() variable length, a dump-reload would be\nrequired. Vadim made on-disk data improvements for many releases.\n\nWe could make it happen even for complex cases, but then we come up on\nthe problem of whether it is wise to allocate limited development time\nto migration issues.\n\nI think the requirement of running the new initdb, and moving the data\nfiles back into place is our best bet.\n\nI would be intested to see if that works. Does someone want to try\ndoing this with the regression test database? Do a pg_dump with data\nbefore and after the operation, and see if it the same. 
This is a good\nway to test pg_dump too.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 2 Jun 1998 13:08:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] An easier way to upgrade (Was: Lots 'o patches)" }, { "msg_contents": "> If it doesn't yet (I assume it doesn't), I'm planning on looking at it\n> for v6.4. Or do you want to look at it Bruce? We should be looking to\n> have all identifiers double-quoted, to preserve case, reserved words,\n> and weird characters in names.\n\nWould someone research this, and I can add it to the todo list. Never\nused quoted identifiers.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 2 Jun 1998 13:26:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] An easier way to upgrade (Was: Lots 'o patches)" } ]
[ { "msg_contents": "I have completed removal of exec(), so now the postmaster just forks a\nchild, and the child runs sharing the same address space, rather than\nforking the same binary.\n\nIn doing this, I had to consider the changes that exec performs in the\nold fork/exec. There are a few main issues. First, the signal handlers\nand sigmask are inherited from the parent, and have to be reset. \nFortunately, they are blocked at the fork() point in the code, so it is\neasy to change them before unblocking. The second issue is that any\ninitialized variables that were modified by the postmaster AND should be\nthe old values in the child had to be reset. There were only a few of\nthese, but they were tricky to find. Fortunately, the child does not do\nmuch. If I missed any other interactions of exec(), please let me know.\n\nThird, I found that the child could not dynamically load files because\nit had changed directories to the database dir, and the BSDI load was\nusing argv[0], which was relative to the startup directory, not the\ncurrent one. A good fix was to get argv[0] in the child to be absolute\npath 'postgres' binary, which is how the old exec() worked anyway. That\nworked great, and actually caused 'ps' to show 'postgres' for the child\nrather than 'postmaster'. If this is the same for others, I recommend\nwe keep both binaries, so we can easily invoke either, and show them in\n'ps' with proper names.\n\nThe regression tests pass, so I am applying the patch to the main source\ntree. People, let me know if you see problems on your platforms. I\nhave posted the patch to the patches list for review.\n\nI think this opens up a lot of things we can now do once in the\npostmaster, rather than doing them in every backend.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Fri, 29 May 1998 13:15:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Removal of exec() patch applied" } ]
[ { "msg_contents": "Dear sir\n\n\nI want to add keywords in psql (it is not a function or an operator).\nLet me know what source files must be modified.\n\nThank you... Good Luck!\n\n--\n-----------------------------------------------------------\nTel : +82-51-582-0491(office)\n +82-51-515-2208(Fax)\nmailto:[email protected]\nhttp://asadal.cs.pusan.ac.kr/~whtak\nAddr: Dept. Computer Science, Pusan National Univ.,\n Kumjung-GU, Jangjun-Dong, San 30, Pusan, Korea 609-735\n-----------------------------------------------------------\n\n\n", "msg_date": "Sat, 30 May 1998 20:35:41 +0900", "msg_from": "\"Woohyun,Tak\" <[email protected]>", "msg_from_op": true, "msg_subject": "Hi. Sir....!!" }, { "msg_contents": "> I want to add keywords in psql (it is not a function or an operator).\n> Let me know what source files must be modified.\n\ngram.y, keywords.c just to get the keyword recognized by the parser.\nSome other files in src/backend/parser/ will need to be changed if the\nnew keywords don't map to existing functionality, and of course other\nfiles in the optimizer/executor would need to be changed to support new\ncapabilities.\n\nGood luck!\n\n - Tom\n", "msg_date": "Sat, 30 May 1998 14:14:04 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hi. Sir....!!" } ]
[ { "msg_contents": "Hi,\n\nis there a way to get a specific release out of the repository? Say I \nwant to get 6.3.2 from the repository, or 6.2, how would I proceed?\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n", "msg_date": "Sat, 30 May 1998 18:30:15 +0200 (MET DST)", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": true, "msg_subject": "anon cvs and specific versions/releases" }, { "msg_contents": "On Sat, 30 May 1998, Maarten Boekhold wrote:\n\n> Hi,\n> \n> is there a way to get a specific release out of the repository? Say I \n> want to get 6.3.2 from the repository, or 6.2, how would I proceed?\n\n\tYou'll want to check the man pages for most of the details, but\nlook up the use of 'cvs log' to determine what tags are available, and\n'cvs checkout -D' to specify a *date* that you want to pull from, or 'cvs\ncheckout -r' to specify a 'tag'\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 30 May 1998 13:54:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] anon cvs and specific versions/releases" }, { "msg_contents": "> \n> Hi,\n> \n> is there a way to get a specific release out of the repository? Say I \n> want to get 6.3.2 from the repository, or 6.2, how would I proceed?\n> \n> Maarten\n> \n\nCheck the cvs manual page. 
You can use -d, or there is probably a tag\nfor those releases.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 30 May 1998 13:07:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] anon cvs and specific versions/releases" }, { "msg_contents": "\nbtw, should I be hacking on the latest CVS snapshot?\n", "msg_date": "Sat, 30 May 1998 18:47:17 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] anon cvs and specific versions/releases" }, { "msg_contents": "> \n> \n> btw, should I be hacking on the latest CVS snapshot?\n> \n> \n\nSure, why not.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 30 May 1998 21:52:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] anon cvs and specific versions/releases" }, { "msg_contents": "\nOriginally my SSL patch was for users of 6.3.2.\nBut now that I'm doing some communication cleanup...\n\nOn Sat, 30 May 1998, at 21:52:30, Bruce Momjian wrote:\n\n> \n> Sure, why not.\n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Sat, 30 May 1998 19:01:42 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] anon cvs and specific versions/releases" }, { "msg_contents": "On Sat, 30 May 1998, Brett McCormick wrote:\n\n> \n> btw, should I be hacking on the latest CVS snapshot?\n\n\tAlways...else bringing in patches is a major headache :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 31 May 1998 12:58:30 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] anon cvs and specific versions/releases" } ]
[ { "msg_contents": "\nI haven't seen this, is it available anywhere?\n", "msg_date": "Sat, 30 May 1998 23:28:52 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "exec patch?" } ]
[ { "msg_contents": "\nI know that custom types account for a portion of overhead, and I'm\nnot by any means advocating their removal. I also know that the\nefficiency of postgres has improved greatly since the early days, and\nI'm wondering what more can be done.\n\nFor instance, would it be possible to cache results of the\ninput/output functions for the types? i.e. if we've already called\nfoobar_out for a piece of data, why call it again? We could store the\nprevious result in a hash, and then use that.\n\nNote that I know next to nothing about how the query node tree gets\nexecuted (I'm reading up on it now) so this may not be possible or\ncould even introduce extra overhead.\n\nI'd like to get postgres up to speed. I know it is a great database,\nand I tell all my friends this, but there is too much pg bashing\nbecause of the early days. People think mysql rocks because it is so\nfast, but in reality, well.. It's all IMHO, and the right tool for\nthe right job.\n\nSo my real question is: have we hit the limit on optimization and\nreduction of overhead, or is there more work to be done? Or should we\nconcentrate on other aspects such as inheritance issues? I'm not\nquite as interested in ANSI compliance.\n\n--brett\n", "msg_date": "Sun, 31 May 1998 00:22:17 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "custom types and optimization" }, { "msg_contents": "\n[ CC'ing general list so you can see what we are working on, and my plea\nfor help in getting the word out about PostgreSQL's speed and features. \nReplies will go to the hackers list because we don't want long\ndiscussions like this in the general list.]\n\nFirst, let me say you are thinking exactly like me. I agree 100% with\nyour ideas, and analysis of the issues.\n\n \n> I know that custom types account for a portion of overhead, and I'm\n> not by any means advocating their removal. 
I also know that the\n> efficiency of postgres has improved greatly since the early days, and\n> I'm wondering what more can be done.\n\nGood question, and a question I have been asking myself.\n\n> For instance, would it be possible to cache results of the\n> input/output functions for the types? i.e. if we've already called\n> foobar_out for a peice of data, why call it again? We could store the\n> previous result in a hash, and then use that.\n\nNot sure if that would help. We cache system tables lookups, and data\nblocks. gprof does not show a huge problem in the type extensibility\narea, at least in my tests.\n\ngprof is your friend. Try compiling with the options, and run it and\nanalyze gmon.out (See FAQ for info.) That usually tells me quite a bit.\n\n> Note that I next to nothing about how the query node tree gets\n> executed (I'm reading up on it now) so this may not be possible or\n> could even introduce extra overhead.\n\nAlso, I hope people are reading the developers FAQ, because I think that\ncan help people get started with coding.\n\n> I'd like to get postgres up to speed. I know it is a great database,\n> and I tell all my friends this, but there is too much pg bashing\n> because of the early days. People think mysql rocks because it is so\n> fast, but in reality, well.. It's all IMHO, and the right tool for\n> the right job.\n\nYes, this has frustrated me too. Why are we not getting better mention\nfrom people? I think we can now be classified as the 'most advanced'\nfree database. Can we do something about mentioning that to others? We\ncertainly are growing market share, but i guess I would like to see more\ntransfers from other databases.\n\nThe highly-biased MySQL comparison page hurts us too, but other people\nexplaining real issues can counter that.\n\n> So my real question is: have we hit the limit on optimization and\n> reduction of overhead, or is there more work to be done? 
Or should we\n> concentrate on other aspects such as inheritance issues? I'm not\n> quite as interested in ANSI compliance.\n\nNot sure. I just removed exec(), so that saves us 0.01 on startup,\nwhich is pretty major. We can move some of the initialization that is\ndone in every backend to the postmaster, but these will only do major\nspeedups for backends that do startup, short query, exit. Longer\nqueries and long-running backends don't see much change.\n\nI have tested the throughput of sequential table scan, and it appears to\nrun pretty quickly, almost as quick as dd on the same file. That is\npretty good. Faster than wc on my system.\n\nSo why are we considered slow? First, historically, performance has not\nbeen a major concern, first not at Berkeley(?), and second there were so\nmany other problems, that we did not have the resources to concentrate\non it. Only in the past nine months have there been real improvements,\nand it takes time to get the word out.\n\nSecond, it is our features that make us slower. Transactions, type\nsystem, optimizer all add to the slowness. We are very modular, and\nhave a large call overhead moving in and out of modules, though\nprofiling has enabled us to reduce this. \n\nMySQL also has certain limitations that allow them to be faster, like\nbeing able to specify indexes ONLY at table creation time, so their\nindexes are in with the data. They use ISAM, which doesn't grow well,\nbut does provide good performance because the data is kind of pre-sorted\non the disk. Our CLUSTER command now does a similar function, without\nthe problems of ISAM.\n\nI am glad David Gould and others are involved, because I am starting to\nrun out of tricks to speed things up. I need new ideas and perhaps\nredesigned modules to get better performance.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Sun, 31 May 1998 15:38:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] custom types and optimization" }, { "msg_contents": "> I'd like to get postgres up to speed. I know it is a great database,\n> and I tell all my friends this, but there is too much pg bashing\n> because of the early days. People think mysql rocks because it is so\n> fast, but in reality, well.. It's all IMHO, and the right tool for\n> the right job.\n\nYes, you're right. mysql is a lighter-weight system, without some of the\nfundamental capabilities of postgres, but well suited to some\napplications. Postgres is a \"real database\" (from David Gould :) with\nmore capabilities and more machine cycles needed to get those\ncapabilities.\n\n> So my real question is: have we hit the limit on optimization and\n> reduction of overhead, or is there more work to be done? Or should we\n> concentrate on other aspects such as inheritance issues? I'm not\n> quite as interested in ANSI compliance.\n\nI think that v6.4 will have a good chunk of SQL92 compliance finished\noff, and that other topics will become more actively developed in future\nreleases. Just guessing, but the area of postgres which has had the\nfewest fundamental changes is in the backend executor. Or I should say\nthat the ripest place for more changes and improvements is in that area,\nsince I know that Vadim, Bruce, and others have been working on it for\nsome time.\n\nThere are some data integrity features that Vadim is planning on working\non which should/may improve performance by allowing you to trade\nperformance for transactional integrity. 
For some applications this\nwould allow you to burn fewer cycles on the same query, getting similar\ndata integrity to what mysql might provide for example.\n\nIf you're looking for areas to work on, array handling needs to be fixed\nup (hint hint)...\n\n - Tom\n", "msg_date": "Sun, 31 May 1998 19:40:12 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] custom types and optimization" }, { "msg_contents": "> I am glad David Gould and others are involved, because I am starting to\n> run out of tricks to speed things up. I need new ideas and perhaps\n> redesigned modules to get better performance.\n\nAw shucks guys, you shouldn't have... I haven't even done anything yet.\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n\n", "msg_date": "Sun, 31 May 1998 15:05:12 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] custom types and optimization" }, { "msg_contents": "On Sun, 31 May 1998, Bruce Momjian wrote:\n\n> Yes, this has frustrated me too. Why are we not getting better mention\n> from people? I think we can now be classified as the 'most advanced'\n> free database. Can we do something about mentioning that to others? We\n> certainly are growing market share, but i guess I would like to see more\n> transfers from other databases.\n\n\tI hate to use myself as an example...but why do I hate Linux? And\nwhy wouldn't I recommend anyone to use it? Past Experience.\n\n\tWhen we first took this on, we were *very* problematic. But,\nsince we considered it to be the best that was out there, we\npersevered(sp?) with the problems and improved it overall. 
There are\nbound to be alot that, at the beginning, just didn't want to waste time\nwith it, saw all the problems and left...taking their bad experience with\nthem. \n\n\tMy experience is that \"bad experiences\" are heard more often then\ngood ones.\n\n\tNeil built up a 'registration page' that I'm curious as to how\nmany ppl are actually using it...just checked, and:\n\npostgresql=> select count(name) from register;\ncount\n-----\n 1361\n(1 row)\n\n\tNot bad...but I don't imagine that's a tenth of all the users, is\nit?\n\n\t\n\n", "msg_date": "Sun, 31 May 1998 18:23:00 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] custom types and optimization" }, { "msg_contents": "On Sun, 31 May 1998, Bruce Momjian wrote:\n\n> > and I tell all my friends this, but there is too much pg bashing\n> > because of the early days. People think mysql rocks because it is so\n> > fast, but in reality, well.. It's all IMHO, and the right tool for\n> > the right job.\n> \n> Yes, this has frustrated me too. Why are we not getting better mention\n> from people? I think we can now be classified as the 'most advanced'\n> free database. Can we do something about mentioning that to others? We\n> certainly are growing market share, but i guess I would like to see more\n> transfers from other databases.\n> \n> The highly-biased MySQL comparison page hurts us too, but other people\n> explaining real issues can counter that.\n\n\nI just about had a hard time getting our system admin to install\nPostgresql. All I ever heard about was MSQL is already installed. I was\nunder the impression that Postgresql was a more full featured SQL server\nthan msql. And besides that, it was what I was able to install on my home\nMkLinux box, to learn. We're slowly making a switch from Microsoft SQL and\nEveryware's Butler SQL server to a Linux/Postgresql combo. I've been very\npleased. 
\n\nThe only problem I haven't been able to fix to date is calling \"Dates\"\nfrom a database and displaying them like \"Sunday May 31, 1998\" instead\n\"05-31-1998\"\n\nCurrently using PHP2.x not PHP3 yet...\n\nKevin\n\n\n\n--------------------------------------------------------------------\nKevin Heflin | ShreveNet, Inc. | Ph:318.222.2638 x103\nVP/Mac Tech | 333 Texas St #619 | FAX:318.221.6612\[email protected] | Shreveport, LA 71101 | http://www.shreve.net\n--------------------------------------------------------------------\n\n", "msg_date": "Sun, 31 May 1998 17:45:07 -0500 (CDT)", "msg_from": "Kevin Heflin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: [HACKERS] custom types and optimization" }, { "msg_contents": "On Sun, 31 May 1998, The Hermit Hacker wrote:\n\n> \tNeil built up a 'registration page' that I'm curious as to how\n> many ppl are actually using it...just checked, and:\n> \n> postgresql=> select count(name) from register;\n> count\n> -----\n> 1361\n> (1 row)\n> \n> \tNot bad...but I don't imagine that's a tenth of all the users, is\n> it?\n\nLet me know if Camping-USA isn't one of them. I'm getting over 400 hits\nper day and without PostgreSQL I'd be lucky to be getting 3 a month. \nMysql wouldn't even compile without spending three weeks upgrading every\nsingle library that was current for FreeBSD at that time (2.0.5 or 2.1.0)\nand I never did figure out how many of my kids I'd have to sell to comply\nwith whatever licensing msql had.\n\nI program with Sybase for a living (among the many other things admins get\nto do) and the only thing that I wish were in libpq that isn't in the\nSybase dblibraries is bind. I find it convenient to bind a program\nvariable to a column and not have to screw with it during the retrieval\nprocess. BUT!! at the same time, libpq has things that dblibrary doesn't\nhave. psql absolutely blows isql away. 
I've had to write applications to\ngive me table definitions that psql is happy to provide. Take note,\nhowever, since it's a pain to upgrade Sybase every time it comes out and\nwe're stuck with the microsoft <gag> libraries that come with our license,\nour upgrade path is slow and although we acquire the upgrade quickly, we\ndon't necessarily perform the upgrade with the same speed!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2 \n Online Searchable Campground Listings http://www.camping-usa.com\n \"I'm just not a fan of promoting stupidity! \n We have elected officials for that job!\" -- Rock\n==========================================================================\n\n\n", "msg_date": "Sun, 31 May 1998 18:51:07 -0400 (edt)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] custom types and optimization" }, { "msg_contents": "> On Sun, 31 May 1998, Bruce Momjian wrote:\n> \n> > Yes, this has frustrated me too. Why are we not getting better mention\n> > from people? I think we can now be classified as the 'most advanced'\n> > free database. Can we do something about mentioning that to others? We\n> > certainly are growing market share, but i guess I would like to see more\n> > transfers from other databases.\n> \n> \tI hate to use myself as an example...but why do I hate Linux? And\n> why wouldn't I recommend anyone to use it? Past Experience.\n\nOk, why do you hate Linux? I have been using it since 94 and am happier than\na pig in mud. Maybe I am easy to please (doubtful) or maybe I am missing\nsomething? 
I don't want to start an OS war here (there are enough of those\nin other places), so please reply (if you choose to do so) privately.\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Sun, 31 May 1998 16:19:54 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] custom types and optimization" }, { "msg_contents": "On Sun, 31 May 1998, David Gould wrote:\n\n> > On Sun, 31 May 1998, Bruce Momjian wrote:\n> > \n> > > Yes, this has frustrated me too. Why are we not getting better mention\n> > > from people? I think we can now be classified as the 'most advanced'\n> > > free database. Can we do something about mentioning that to others? We\n> > > certainly are growing market share, but i guess I would like to see more\n> > > transfers from other databases.\n> > \n> > \tI hate to use myself as an example...but why do I hate Linux? And\n> > why wouldn't I recommend anyone to use it? Past Experience.\n> \n> Ok, why do you hate Linux? I have been using it since 94 and am happier than\n> a pig in mud. Maybe I am easy to please (doubtful) or maybe I am missing\n> something? I don't want to start an OS war here (there are enough of those\n> in other places), so please reply (if you choose to do so) privately.\n\n\tNo no, this wasn't meant to start a flame war...most of the\noldtimers here know of my hatred for Linux, and I've admitted often that\nwith Linux today, it is pretty unfounded... \n\n\tI used Linux pre-94...pre-v1.0..in a business/production\nenvironment. 
At that time, I was hard-core Linux advocate...it was the\ngreatest thing since sliced bread, but, the day I hoooked it onto the\nInternet, keeping it alive more then 24hrs was a chore, and it was all in\nthe TCP/IP networking code...switched to *BSD and have been here ever\nsince...\n\n\tBut, that first *bad* experience tends to stick with you, no\nmatter how good things become over time *shrug* One becomes jaded, such\nthat when someone asks you what OS (or RDBMS) to use, you tend to\nautomatically warn against the one that you've personally had bad\nexperiences with...*shrug*\n\n\tAgain, not a flame war, and not a \"you should try it now\"...I\nhave, and even looked at running Linux for a projeect here at the office\n(but I found a simpler/better solution)...\n\n", "msg_date": "Sun, 31 May 1998 19:38:20 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] custom types and optimization" }, { "msg_contents": "On Sun, 31 May 1998, The Hermit Hacker wrote:\n\n> On Sun, 31 May 1998, David Gould wrote:\n> \n> > > \tI hate to use myself as an example...but why do I hate Linux? And\n> > > why wouldn't I recommend anyone to use it? Past Experience.\n> > \n> > Ok, why do you hate Linux? I have been using it since 94 and am happier than\n> > a pig in mud. Maybe I am easy to please (doubtful) or maybe I am missing\n> > something? I don't want to start an OS war here (there are enough of those\n> > in other places), so please reply (if you choose to do so) privately.\n> \n> \tNo no, this wasn't meant to start a flame war...most of the\n> oldtimers here know of my hatred for Linux, and I've admitted often that\n> with Linux today, it is pretty unfounded... \n> \n> \tI used Linux pre-94...pre-v1.0..in a business/production\n> environment. 
At that time, I was hard-core Linux advocate...it was the\n> greatest thing since sliced bread, but, the day I hoooked it onto the\n> Internet, keeping it alive more then 24hrs was a chore, and it was all in\n> the TCP/IP networking code...switched to *BSD and have been here ever\n> since...\n\nI'd gone thru similar, but a more recent set of experiences have soured\nme. I've seen some out-of-the-box linux installations failing from\noverload where the equivelent in a FreeBSD environment wasn't. When I'd\nhave 200+ users getting their email and a bunch of 'em also getting their\nweb pages hit, nasty things were hitting the fan. As an admin it's a\nproblem that eventual tuning wasn't good enuf. When you have PAYING\ncustomers crabbing about the services they're paying for not being up\nto their expectations, as a provider you have to answer to 'em. We were\nable to get a couple of linux boxen going to meet the need, but our\nexperiences beyond that were that a FreeBSD box was able to instantly\nprovide a much higher level of service with a much higher level of\nreliability to it for a lower cost (the cost of setting things up is,\nof course, figured into the overall cost). \n\nDo I wanna see all of the linux boxen removed? No way! Most of the \nsecurity exploits are written to run on a linux platform. I don't need\nto waste any time porting an exploit to a FreeBSD machine (no matter how\neasy it is) just to make sure my machines aren't vulnerable.\n\nMy personal opinion? Use the proper OS for the job at hand. I can come\nup with jobs that are best suited to many operating systems. If you want\nto choose an operating system that's not up to par with what you need to\ndo or what your PAYING customer needs? Then you need to rethink your\nbusiness strategies. For what I do, OS/2 provides me with the tools I \nneed. For what my wife does, 95 is her choice. 
For my news machines\nand web servers, it has to be UN*X and currently that platform is FreeBSD.\nAt work, it's HP-UX. You gotta use the proper tool for the job or you're\nonly screwing yourself. The way you evaluate the tool is noone's\nresponsibility but your own. Make the proper decision and you keep your\ncustomers; blow it and someone else gets your customers. Personally I \ndon't like those all nite panic sessions.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2 \n Online Searchable Campground Listings http://www.camping-usa.com\n \"I'm just not a fan of promoting stupidity! \n We have elected officials for that job!\" -- Rock\n==========================================================================\n\n\n", "msg_date": "Sun, 31 May 1998 20:39:04 -0400 (edt)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] custom types and optimization" }, { "msg_contents": "\nAgreed -- my experiences with postgresql have usually been good\n(except documentation/speed wise) and I'll go near msql (or php for\nthat matter) just because my experiences with them at first were truly\npainful.\n\nOn Sun, 31 May 1998, at 19:38:20, The Hermit Hacker wrote:\n\n> \tBut, that first *bad* experience tends to stick with you, no\n> matter how good things become over time *shrug* One becomes jaded, such\n> that when someone asks you what OS (or RDBMS) to use, you tend to\n> automatically warn against the one that you've personally had bad\n> experiences with...*shrug*\n", "msg_date": "Sun, 31 May 1998 18:58:30 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] custom types and optimization" }, { "msg_contents": "\nI'm \"in the process of\" preparing a contrib module for this purpose.\nReal Soon Now.\n\nOn Sun, 31 
May 1998, at 17:45:07, Kevin Heflin wrote:\n\n> The only problem I haven't been able to fix to date is calling \"Dates\"\n> from a database and displaying them like \"Sunday May 31, 1998\" instead\n> \"05-31-1998\"\n> \n> Currently using PHP2.x not PHP3 yet...\n> \n", "msg_date": "Sun, 31 May 1998 19:04:56 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Re: [HACKERS] custom types and optimization" }, { "msg_contents": "> \n> On Sun, 31 May 1998, Bruce Momjian wrote:\n> \n> > Yes, this has frustrated me too. Why are we not getting better mention\n> > from people? I think we can now be classified as the 'most advanced'\n> > free database. Can we do something about mentioning that to others? We\n> > certainly are growing market share, but i guess I would like to see more\n> > transfers from other databases.\n> \n> \tI hate to use myself as an example...but why do I hate Linux? And\n> why wouldn't I recommend anyone to use it? Past Experience.\n> \n> \tWhen we first took this on, we were *very* problematic. But,\n> since we considered it to be the best that was out there, we\n> persevered(sp?) with the problems and improved it overall. There are\n> bound to be alot that, at the beginning, just didn't want to waste time\n> with it, saw all the problems and left...taking their bad experience with\n> them. \n> \n> \tMy experience is that \"bad experiences\" are heard more often then\n> good ones.\n> \n> \tNeil built up a 'registration page' that I'm curious as to how\n> many ppl are actually using it...just checked, and:\n> \n> postgresql=> select count(name) from register;\n> count\n> -----\n> 1361\n> (1 row)\n> \n> \tNot bad...but I don't imagine that's a tenth of all the users, is\n> it?\n\nWow, that is a big number, and the 10% is probably correct. 
I don't\nthink I am even in there.\n\nHow can we reverse the \"bad publicity\" and get people to start looking\nat us again?\n\nUsers, we need to hear from you on this, and why you chose to use\nPostgreSQL. We don't need people foaming at the mouth, but we do need\nour users to give us good visibility and publicity.\n\n[Moved to general.]\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 1 Jun 1998 00:12:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: [HACKERS] custom types and optimization" }, { "msg_contents": "> I'm \"in the process of\" preparing a contrib module for this purpose.\n> Real Soon Now.\n> > The only problem I haven't been able to fix to date is calling \n> > \"Dates\" from a database and displaying them like \"Sunday May 31, \n> > 1998\" instead \"05-31-1998\"\n\nHmm. Should we have the date type pay attention to the SET DATESTYLE\ncommand? I think it doesn't at the moment...\n\n - Tom\n", "msg_date": "Mon, 01 Jun 1998 06:11:34 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: [HACKERS] custom types and optimization" }, { "msg_contents": "At 7:12 +0300 on 1/6/98, Bruce Momjian wrote:\n\n\n> Users, we need to hear from you on this, and why you chose to use\n> PostgreSQL. We don't need people foaming at the mouth, but we do need\n> our users to give us good visibility and publicity.\n\nHere is my story:\n\nWe needed to write some web-based applications, and they needed to rely on\na database, as the data stored in them needed something more complex than\nndbm.\n\nThe head of my programming team said PostgreSQL. Our sysadmin insisted on\ndeciding from a list of alternatives. 
So I set about with three main goals:\n(1) ANSI compatibility (the more compatibility, the less migration pain in\ncase migration was needed). (2) Support for multiuser access. (3)\nInterfaces to Perl and Java.\n\nI saw the MySQL page. It seemed to be more ANSI compatible. We downloaded\nit, and then it turned out that MySQL doesn't support transactions.\n\nNo transactions? That means no multiuser access. We want people to be able\nto update the database. That immediately classified MySQL as \"not a real\ndatabase\", and put us back on the PostgreSQL route, as no other free\ndatabase was even close to the required feature list.\n\nPostgreSQL has all the interfaces we need, it supports transactions and\nlocks, it is becoming more ANSI compatible with every version update, and\nit seems to perform well enough.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Mon, 1 Jun 1998 12:56:09 +0300", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why Postgres (was Re: [HACKERS] custom types and optimization)" }, { "msg_contents": "On Mon, 1 Jun 1998, Bruce Momjian wrote:\n\n> > postgresql=> select count(name) from register;\n> > count\n> > -----\n> > 1361\n> > (1 row)\n> > \n> > \tNot bad...but I don't imagine that's a tenth of all the users, is\n> > it?\n> \n> Wow, that is a big number, and the 10% is probably correct. I don't\n> think I am even in there.\n\n\tI know I'm not ;(\n\n> How can we reverse the \"bad publicity\" and get people to start looking\n> at us again?\n\n\tTestimonials? Not fanatical statements...something intelligent\nlike what Herouth Maoz wrote? Maybe with a pointer to the project that\nhe/she used PostgreSQL for? Start up a 'User Comments and Projects'\npage...?\n\n\tHow many ppl are actually using the 'Powered by' logo on their\nsites? 
Actually, just looking at that, and am I the only one that finds\nit unreadable? I can barely even see that it's a 'cat'...\n\n\n", "msg_date": "Mon, 1 Jun 1998 10:06:57 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: [HACKERS] custom types and optimization" } ]
[ { "msg_contents": "\nI've noticed there are no less than 10^10 regex implementations.\nIs there a standard? Does ANSI have a regexp standard, or is there\na regex standard in the ANSI SQL spec? What do we use?\n\nPersonally, I'm a perl guy, so every time I have to bend my brain to\nsome other regex syntax, I get a headache. As part of my perl PL\npackage, perl regexps will be included as a set of operators.\n\nIs there interest in the release of perl-style regexp operators for\npostgres before the PL is completed? Note that this requires the\nentire perl library to be loaded when the operator is used (possibly\nexpensive). But, if you have a shared perl library, this only has to\nhappen once.\n", "msg_date": "Sun, 31 May 1998 00:30:00 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "regular expressions from hell" }, { "msg_contents": "> I've noticed there are no less than 10^10 regex implementations.\n> Is there a standard? Does ANSI have a regexp standard, or is there\n> a regex standard in the ANSI SQL spec? What do we use?\n\nafaik the only regex in ANSI SQL is that implemented for the LIKE\noperator. Pretty pathetic: uses \"%\" for match-all and \"_\" for match-any\nand that's it. Ingres had a bit more, with bracketed character ranges\nalso. None as rich as what we already have in the backend of Postgres.\n\nDon't know about any other ANSI standards for regex, but I don't know\nthat there isn't one either...\n\n - Tom\n", "msg_date": "Sun, 31 May 1998 17:30:22 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regular expressions from hell" }, { "msg_contents": "> I've noticed there are no less than 10^10 regex implementations.\n> Is there a standard? Does ANSI have a regexp standard, or is there\n> a regex standard in the ANSI SQL spec? What do we use?\n\nGood question. I think one of the standard unix regex's should be ok. 
At least\neveryone knows how to work it, and they are quite small.\n \n> Personally, I'm a perl guy, so every time I have to bend my brain to\n> some other regex syntax, I get a headache. As part of my perl PL\n> package, perl regexps will be included as a set of operators.\n> \n> Is there interest in the release of perl-style regexp operators for\n> postgres before the PL is completed? Note that this requires the\n> entire perl library to be loaded when the operator is used (possibly\n> expensive). But, if you have a shared perl library, this only has to\n> happen once.\n\nHmmm, I really like the perl regex's, especially the extended syntax, but\nI don't want to load a whole perl lib to get this. \n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Sun, 31 May 1998 16:46:30 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regular expressions from hell" }, { "msg_contents": "\nUnfortunately, there's no other way. This is mentioned in the\nperlcall manpage, I believe. One method which is ok in my book is to\nload the shared perl lib once, in one backend, and then it can be\nshared between all other backends when they need perl regex's.\n\nThere is no mechanism for auto-loading the type/func shared libraries\non postmaster startup correct? It happens per backend sessions? So\nto do the above you'd have to have one \"Dummy\" connection which just\ndid a simple regex and then while(1) { sleep(10^32) };\n\nOn Sun, 31 May 1998, at 16:46:30, David Gould wrote:\n\n> Hmmm, I really like the perl regex's, especially the extended syntax, but\n> I don't want to load a whole perl lib to get this. 
\n> \n> -dg\n> \n> David Gould [email protected] 510.628.3783 or 510.305.9468 \n> Informix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n> \"Of course, someone who knows more about this will correct me if I'm wrong,\n> and someone who knows less will correct me if I'm right.\"\n> --David Palmer ([email protected])\n> \n", "msg_date": "Sun, 31 May 1998 17:23:16 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] regular expressions from hell" }, { "msg_contents": "\nNot to mention the fact that if perl (or mod_perl) is already running\n(and you're using a shared libperl), the library is already loaded.\n\nOn Sun, 31 May 1998, at 17:23:16, Brett McCormick wrote:\n\n> Unfortunately, there's no other way. This is mentioned in the\n> perlcall manpage, I believe. One method which is ok in my book is to\n> load the shared perl lib once, in one backend, and then it can be\n> shared between all other backends when they need perl regex's.\n> \n> There is no mechanism for auto-loading the type/func shared libraries\n> on postmaster startup correct? It happens per backend sessions? 
\n> > \n> > -dg\n> > \n> > David Gould [email protected] 510.628.3783 or 510.305.9468 \n> > Informix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n> > \"Of course, someone who knows more about this will correct me if I'm wrong,\n> > and someone who knows less will correct me if I'm right.\"\n> > --David Palmer ([email protected])\n> > \n> \n", "msg_date": "Sun, 31 May 1998 18:56:29 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] regular expressions from hell" }, { "msg_contents": "> Date: Sun, 31 May 1998 18:56:29 -0700 (PDT)\n> From: Brett McCormick <[email protected]>\n> Sender: [email protected]\n\n> Not to mention the fact that if perl (or mod_perl) is already running\n> (and you're using a shared libperl), the library is already loaded.\n\nIf you're running Apache, mod_perl or not, isn't Posix regex loaded?\n(HSREGEX or compatible?)\n\n> On Sun, 31 May 1998, at 17:23:16, Brett McCormick wrote:\n> \n> > Unfortunately, there's no other way. This is mentioned in the\n> > perlcall manpage, I beleive. One method which is ok in my book is to\n> > load the shared perl lib once, in one backend, and then it can be\n> > shared between all other backends when they need perl regex's.\n> > \n> > There is no mechanism for auto-loading the type/func shared libraries\n> > on postmaster startup correct? It happens per backend sessions? 
So\n> > to do the above you'd have to have one \"Dummy\" connection which just\n> > did a simple regex and then while(1) { sleep(10^32) };\n...\n\n", "msg_date": "Sun, 31 May 1998 22:17:30 -0500 (CDT)", "msg_from": "Hal Snyder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regular expressions from hell" }, { "msg_contents": "> \n> Not to mention the fact that if perl (or mod_perl) is already running\n> (and you're using a shared libperl), the library is already loaded.\n\nOk, my vote is to build regexes into the pgsql binary or into a .so that\nwe distribute. There should be no need to have perl installed on a system\nto run postgresql. If we are going to extend the language to improve on\nthe very lame sql92 like clause, we need to have it be part of the system\nthat can be counted on, not something you might or might not have depending\non what else is installed.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n", "msg_date": "Sun, 31 May 1998 23:44:55 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regular expressions from hell" }, { "msg_contents": "On Sun, 31 May 1998, Thomas G. Lockhart wrote:\n\n> > I've noticed there are no less then 10^10 regex implementations.\n> > Is there a standard? Does ANSI have a regexp standard, or is there\n> > a regex standard in the ANSI SQL spec? What do we use?\n> \n> afaik the only regex in ANSI SQL is that implemented for the LIKE\n> operator. Pretty pathetic: uses \"%\" for match-all and \"_\" for match-any\n> and that's it. Ingres had a bit more, with bracketed character ranges\n> also. 
None as rich as what we already have in the backend of Postgres.\n> \n> Don't know about any other ANSI standards for regex, but I don't know\n> that there isn't one either...\n> \n- SQL3 SIMILAR condition.\nSIMILAR is intended for character string pattern matching. The difference \nbetween SIMILAR and LIKE is that SIMILAR supports a much more extensive \nrange of possibilities (\"wild cards,\" etc.) than LIKE does.\nHere is the syntax:\n\n expression [ NOT ] SIMILAR TO pattern [ ESCAPE escape ]\n\n\t Jose'\n\n", "msg_date": "Mon, 1 Jun 1998 09:52:57 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regular expressions from hell" }, { "msg_contents": "> \n> > \n> > Not to mention the fact that if perl (or mod_perl) is already running\n> > (and you're using a shared libperl), the library is already loaded.\n> \n> Ok, my vote is to build regexes into the pgsql binary or into a .so that\n> we distribute. There should be no need to have perl installed on a system\n> to run postgresql. If we are going to extend the language to improve on\n> the very lame sql92 like clause, we need to have it be part of the system\n> that can be counted on, not something you might or might not have depending\n> on what else is installed.\n\nWe already have it as ~, just not with Perl extensions. Our\nimplementation is very slow, and the author has said he is working on a\nrewrite, though no time frame was given.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Mon, 1 Jun 1998 10:16:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regular expressions from hell" }, { "msg_contents": "On Mon, 1 June 1998, at 10:16:35, Bruce Momjian wrote:\n\n> > Ok, my vote is to build regexes into the pgsql binary or into a .so that\n> > we distribute. There should be no need to have perl installed on a system\n> > to run postgresql. If we are going to extend the language to improve on\n> > the very lame sql92 like clause, we need to have it be part of the system\n> > that can be counted on, not something you might or might not have depending\n> > on what else is installed.\n\nI'm not suggesting we require perl to be installed to run postgres, or\nreplace the current regexp implementation with perl. i was just\nlamenting the fact that there are no less than 10 different regexp\nimplementations, with different metacharacters. why should I have to\nremember one syntax when I use perl, one for sed, one for emacs, and\nanother for postgresql? this isn't a problem with postgres per se,\njust the fact that there seems to be no standard.\n\nI love perl regex's. I'm merely suggesting (and planning on\nimplementing) a different set of regexp operators (not included with\npostgres, but as a contrib module) that use perl regex's. There are\nsome pros and cons, which have been discussed.\n\nIt should be there for people who want it.\n\n> \n> We already have it as ~, just not with Perl extensions. 
Our\n> implementation is very slow, and the author has said he is working on a\n> rewrite, though no time frame was given.\n", "msg_date": "Mon, 1 Jun 1998 07:27:50 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] regular expressions from hell" }, { "msg_contents": "On Sun, 31 May 1998, David Gould wrote:\n\n> > \n> > Not to mention the fact that if perl (or mod_perl) is already running\n> > (and you're using a shared libperl), the library is already loaded.\n> \n> Ok, my vote is to build regexes into the pgsql binary or into a .so that\n> we distribute. There should be no need to have perl installed on a system\n> to run postgresql. If we are going to extend the language to improve on\n> the very lame sql92 like clause, we need to have it be part of the system\n> that can be counted on, not something you might or might not have depending\n> on what else is installed.\n\n\tOdd question here, but how many systems nowadays *don't* have Perl\ninstalled that would be running PostgreSQL? IMHO, perl is an invaluable\nenough tool that I can't imagine a site not running it *shrug*\n\n\n", "msg_date": "Mon, 1 Jun 1998 10:42:21 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regular expressions from hell" }, { "msg_contents": "Brett McCormick wrote:\n> \n> On Mon, 1 June 1998, at 10:16:35, Bruce Momjian wrote:\n> \n> > > Ok, my vote is to build regexes into the pgsql binary or into a .so that\n> > > we distribute. There should be no need to have perl installed on a system\n> > > to run postgresql. 
If we are going to extend the language to improve on\n> > > the very lame sql92 like clause, we need to have it be part of the system\n> > > that can be counted on, not something you might or might not have depending\n> > > on what else is installed.\n> \n> I'm not suggesting we require perl to be installed to run postgres, or\n> replace the current regexp implementation with perl. i was just\n> lamenting the fact that there are no less than 10 different regexp\n> implementations, with different metacharacters. why should I have to\n> remember one syntax when I use perl, one for sed, one for emacs, and\n> another for postgresql? this isn't a problem with postgres per se,\n> just the fact that there seems to be no standard.\n\nI think most of this is due to different decisions on what needs to be\nescaped or not. For instance, if memory serves, GNU grep treats\nparens as metacharacters, which must be escaped with a backslash to\nmatch parens, while in Emacs, parens match parens and must be escaped\nto get their meta-character meaning. Things have gone too far to have\none standard now I'm afraid.\n\nOcie\n", "msg_date": "Mon, 1 Jun 1998 14:41:23 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regular expressions from hell" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\n>>>>> \"ocie\" == ocie <[email protected]> writes:\n\n ocie> I think most of this is due to different decisions on what\n ocie> needs to be escaped or not. For instance, if memory serves,\n ocie> GNU grep treats parens as metacharacters, which must be\n ocie> escaped with a backslash to match parens, while in Emacs,\n ocie> parens match parens and must be escaped to get their\n ocie> meta-character meaning. Things have gone too far to have\n ocie> one standard now I'm afraid.\n\nPlease try to remember that there are historical reasons for some of\nthis. 
grep and egrep behave differently with respect to parentheses;\nagain, this is historical. \n\nPersonally, I like Perl regexps. And there is a library for Tcl/Tk\n(nre) that implements the same syntax for that language. But I do\nlike Emacs' syntax tables and character classes. I can live with\nswitching back and forth to some extent....\n\nroland\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.2\nComment: Processed by Mailcrypt 3.4, an Emacs/PGP interface\n\niQCVAwUBNXSyLuoW38lmvDvNAQHatQQAsyp+akdXl0TiptXsSlrp7tM2/Jb/jLnW\nSfpkYVkk53iER/JMYMU4trfQQssePkqGmaF8GMeU5i8eMW6Vi3Vus2pqovnLa1eV\nw5rCgxKXqpZnIhGJZeHIYieMfWxfdmWOUjawrjKv85vBRdZDYdRkLBoAWvI4ZaJb\nJxAEwqbZrQw=\n=Zgvo\n-----END PGP SIGNATURE-----\n-- \nRoland B. Roberts, PhD Custom Software Solutions\[email protected] 101 West 15th St #4NN\n New York, NY 10011\n\n", "msg_date": "02 Jun 1998 22:17:42 -0400", "msg_from": "Roland Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regular expressions from hell" }, { "msg_contents": "Roland B. Roberts, PhD writes:\n> >>>>> \"ocie\" == ocie <[email protected]> writes:\n> \n> ocie> I think most of this is due to different decisions on what\n> ocie> needs to be escaped or not. For instance, if memory serves,\n> ocie> GNU grep treats parens as metacharacters, which must be\n> ocie> escaped with a backslash to match parens, while in Emacs,\n> ocie> parens match parens and must be escaped to get their\n> ocie> meta-character meaning. Things have gone too far to have\n> ocie> one standard now I'm afraid.\n> \n> Please try to remember that there are historical reasons for some of\n> this. grep and egrep behave differently with respect to parentheses;\n> again, this is historical. \n> \n> Personally, I like Perl regexps. And there is a library for Tcl/Tk\n> (nre) that implements the same syntax for that language. But I do\n> like Emacs' syntax tables and character classes. I can live with\n> switching back and forth to some extent....\n\nEmacs! 
Huh! I like VI regexes... Uh oh, sorry, wrong flamewar.\n\nIsn't there a POSIX regex? Perhaps we could consider that, unless of course\nit is well and truly broken.\n\nSecondly, I seem to remember a post here in this same thread that said\nwe already had regexes. Perhaps we should move on.\n\nSeriously, as part of a Perl extension to postgresql, perl regexes would \nbe the natural thing. But if we already have a regex package, I think\nadding just perl regexes without perl, but requiring perl.so is uhmmm,\npremature.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n", "msg_date": "Tue, 2 Jun 1998 23:26:06 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regular expressions from hell" }, { "msg_contents": "> I've noticed there are no less than 10^10 regex implementations.\n> Is there a standard? Does ANSI have a regexp standard, or is there\n> a regex standard in the ANSI SQL spec? What do we use?\n> \n> Personally, I'm a perl guy, so every time I have to bend my brain to\n> some other regex syntax, I get a headache. As part of my perl PL\n> package, perl regexps will be included as a set of operators.\n> \n> Is there interest in the release of perl-style regexp operators for\n> postgres before the PL is completed? Note that this requires the\n> entire perl library to be loaded when the operator is used (possibly\n> expensive). 
But, if you have a shared perl library, this only has to\n> happen once.\n\nWell, not to bring this up for discussion again, but there is apparently\na Posix standard, and even better a free implementation:\n\n\nArticle 10705 of comp.os.linux.misc:\nNewsgroups: gnu.announce,gnu.utils.bug,comp.os.linux.misc,alt.sources.d\nSubject: Rx 1.9\nDate: Wed, 10 Jun 1998 10:40:00 -0700 (PDT)\nApproved: [email protected]\n\nThe latest version of Rx, 1.9, is available on the web at:\n\n\thttp://users.lanminds.com/~lord\n\tftp://emf.net/users/lord/src/rx-1.9.tar.gz\n and at ftp://ftp.gnu.org/pub/gnu/rx-1.9.tar.gz and mirrors of that \n site (see list below).\n\nRx is a regexp pattern matching library. The library exports these\nfunctions which are standardized by Posix:\n\n regcomp - compile a regexp\n regexec - search for a match\n regfree - release storage for a regexp\n regerr - translate error codes to strings\n\nThe library exports many other functions as well, and does a lot\nmore than Posix requires.\n\n\t\t\t RECENT CHANGES\n\n1. Rx 1.9\n Recent changes: More \"dead code\" was recently discarded,\n\t\t and the remaining code simplified.\n\n\t\t Benchmark comparisons to GNU regex and older\n\t\t versions of Rx were added to the distribution.\n\n0. 
Rx 1.8\n Recent changes: Various bug-fixes and small performance improvements.\n\t\t A great deal of \"dead code\" was recently discarded,\n\t\t making the size of the Rx library smaller and the\n\t\t source easier to maintain (in theory).\n\n\n[ Most GNU software is compressed using the GNU `gzip' compression program.\n Source code is available on most sites distributing GNU software.\n Executables for various systems and information about using gzip can be\n found at the URL http://www.gzip.org.\n\n For information on how to order GNU software on CD-ROM and\n printed GNU manuals, see http://www.gnu.org/order/order.html\n or e-mail a request to: [email protected]\n\n By ordering your GNU software from the FSF, you help us continue to\n develop more free software. Media revenues are our primary source of\n support. Donations to FSF are deductible on US tax returns.\n\n The above software will soon be at these ftp sites as well.\n Please try them before ftp.gnu.org as ftp.gnu.org is very busy!\n A possibly more up-to-date list is at the URL\n http://www.gnu.org/order/ftp.html\n\n thanx [email protected]\n\n Here are the mirrored ftp sites for the GNU Project, listed by country:\n\n \n \n United States:\n \n California - labrea.stanford.edu/pub/gnu, gatekeeper.dec.com/pub/GNU\n Hawaii - ftp.hawaii.edu/mirrors/gnu\n Illinois - uiarchive.cso.uiuc.edu/pub/gnu (Internet address 128.174.5.14)\n Kentucky - ftp.ms.uky.edu/pub/gnu\n Maryland - ftp.digex.net/pub/gnu (Internet address 164.109.10.23)\n Michigan - gnu.egr.msu.edu/pub/gnu\n Missouri - wuarchive.wustl.edu/systems/gnu\n New York - ftp.cs.columbia.edu/archives/gnu/prep\n Ohio - ftp.cis.ohio-state.edu/mirror/gnu\n Utah - jaguar.utah.edu/gnustuff\n Virginia - ftp.uu.net/archive/systems/gnu\n \n Africa:\n \n South Africa - ftp.sun.ac.za/pub/gnu\n \n The Americas:\n \n Brazil - ftp.unicamp.br/pub/gnu \n Canada - ftp.cs.ubc.ca/mirror2/gnu \n Chile - ftp.inf.utfsm.cl/pub/gnu (Internet address 146.83.198.3)\n Costa Rica - 
sunsite.ulatina.ac.cr/GNU \n Mexico - ftp.uaem.mx/pub/gnu\n \n Asia and Australia:\n \n Australia - archie.au/gnu (archie.oz or archie.oz.au for ACSnet)\n Australia - ftp.progsoc.uts.edu.au/pub/gnu \n Japan - tron.um.u-tokyo.ac.jp/pub/GNU/prep\n Japan - ftp.cs.titech.ac.jp/pub/gnu \n Korea - cair-archive.kaist.ac.kr/pub/gnu (Internet address 143.248.186.3)\n Thailand - ftp.nectec.or.th/pub/mirrors/gnu (Internet address - 192.150.251.32)\n \n Europe:\n \n Austria - ftp.univie.ac.at/packages/gnu\n Czech Republic - ftp.fi.muni.cz/pub/gnu/\n Denmark - ftp.denet.dk/mirror/ftp.gnu.org/pub/gnu\n Finland - ftp.funet.fi/pub/gnu (Internet address 128.214.6.100)\n France - ftp.univ-lyon1.fr/pub/gnu \n France - ftp.irisa.fr/pub/gnu\n Germany - ftp.informatik.tu-muenchen.de/pub/comp/os/unix/gnu/\n Germany - ftp.informatik.rwth-aachen.de/pub/gnu\n Germany - ftp.de.uu.net/pub/gnu\n Greece - ftp.ntua.gr/pub/gnu \n Greece - ftp.aua.gr/pub/mirrors/GNU (Internet address 143.233.187.61)\n Ireland - ftp.ieunet.ie/pub/gnu (Internet address 192.111.39.1)\n Netherlands - ftp.eu.net/gnu (Internet address 192.16.202.1)\n Netherlands - ftp.nluug.nl/pub/gnu\n Netherlands - ftp.win.tue.nl/pub/gnu (Internet address 131.155.70.100)\n Norway - ugle.unit.no/pub/gnu (Internet address 129.241.1.97)\n Spain - ftp.etsimo.uniovi.es/pub/gnu\n Sweden - ftp.isy.liu.se/pub/gnu \n Sweden - ftp.stacken.kth.se\n Sweden - ftp.luth.se/pub/unix/gnu\n Sweden - ftp.sunet.se/pub/gnu (Internet address 130.238.127.3)\n \t Also mirrors the Mailing List Archives.\n Switzerland - ftp.eunet.ch/mirrors4/gnu\n Switzerland - sunsite.cnlab-switch.ch/mirror/gnu (Internet address 193.5.24.1)\n United Kingdom - ftp.mcc.ac.uk/pub/gnu (Internet address 130.88.203.12)\n United Kingdom - unix.hensa.ac.uk/mirrors/gnu\n United Kingdom - ftp.warwick.ac.uk (Internet address 137.205.192.14)\n United Kingdom - SunSITE.doc.ic.ac.uk/gnu (Internet address 193.63.255.4)\n \n]\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 
510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n", "msg_date": "Thu, 11 Jun 1998 14:37:06 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regular expressions from hell" }, { "msg_contents": "On Thu, 11 Jun 1998, David Gould wrote:\n\n> Article 10705 of comp.os.linux.misc:\n> Newsgroups: gnu.announce,gnu.utils.bug,comp.os.linux.misc,alt.sources.d\n> Subject: Rx 1.9\n> Date: Wed, 10 Jun 1998 10:40:00 -0700 (PDT)\n> Approved: [email protected]\n> \n> The latest version of Rx, 1.9, is available on the web at:\n> \n> \thttp://users.lanminds.com/~lord\n> \tftp://emf.net/users/lord/src/rx-1.9.tar.gz\n> and at ftp://ftp.gnu.org/pub/gnu/rx-1.9.tar.gz and mirrors of that \n> site (see list below).\n\nThe reason that we do not use this particular Regex package is that *it*\nfalls under the \"Almighty GPL\", which conflicts with our Berkeley\nCopyright...\n\nNow, if there is a standardized spec on this, though, what would it take\nto change our Regex to follow it, *without* the risk of tainting our code\nwith GPLd code?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 29 Jun 1998 22:27:59 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regular expressions from hell" } ]
[ { "msg_contents": "The Hermit Hacker wrote:\n\n> If there is anything that I don't have listed at the above URL\n> (ie. that isn't bundled on the CD that you'd like to see), please \n> let me know. And, of course, if anything at the above URL is out \n> of date, again, please let me know...some stuff is just impossible\n> to keep up on :)\n\nThe KpgSQL might also be interesting to people, see:\n\nhttp://home.primus.baynet.de/mgeisler/kpgsql/\n\n\nAs well as the GNOME one\n\nhttp://www.mygale.org/~bbrox/GtkSQL/\n\nI haven't checked either out myself yet\n\n\nCheers,\nHannu Krosing\n\n\nPS, I still think it would be a good idea to have a mention of \npgsql-interfaces list at www.postgresql.org under mailing lists ;c)\n\nHannu", "msg_date": "Sun, 31 May 1998 12:41:44 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: packages to include on CD" } ]
[ { "msg_contents": "There are some duplicate oids in pg_proc of May30 snapshot.\n\ntest=> select p1.oid,p1.proname,p2.proname from pg_proc p1,pg_proc p2\nwhere p1.oid = p2.oid and p1.proname > p2.proname;\n\n oid|proname |proname \n----+------------+-----------\n1377|textoctetlen|char \n1374|octet_length|char_bpchar\n1375|octet_length|bpchar \n1376|octet_length|bpchar_char\n1600|version |line \n(5 rows)\n\nI know octet_length's are from my patch but for others I don't know.\nComments?\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Mon, 01 Jun 1998 11:55:17 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "duplicate oids in pg_proc" }, { "msg_contents": "> There are some duplicate oids in pg_proc of May30 snapshot.\n> \n> test=> select p1.oid,p1.proname,p2.proname from pg_proc p1,pg_proc p2\n> where p1.oid = p2.oid and p1.proname > p2.proname;\n> \n> oid|proname |proname\n> ----+------------+-----------\n> 1377|textoctetlen|char\n> 1374|octet_length|char_bpchar\n> 1375|octet_length|bpchar\n> 1376|octet_length|bpchar_char\n> 1600|version |line\n> (5 rows)\n> \n> I know octet_length's are from my patch but for others I don't know.\n> Comments?\n\nThose must be mine (char, char_bpchar, bpchar, bpchar_char). I am sure\nthat I ran ./unused_oids to assign them, but obviously messed it up.\nWill reassign soon. Darn :(\n\n - Tom\n", "msg_date": "Mon, 01 Jun 1998 06:15:33 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] duplicate oids in pg_proc" }, { "msg_contents": "> > There are some duplicate oids in pg_proc of May30 snapshot.\n> > I know octet_length's are from my patch but for others I don't know.\n> > Comments?\n> \n> Those must be mine (char, char_bpchar, bpchar, bpchar_char). I am sure\n> that I ran ./unused_oids to assign them, but obviously messed it up.\n> Will reassign soon. Darn :(\n\nOh, I see what happened. 
I developed on a revlocked tree from 980513 and\nthen patched the current tree and ran the regression tests. Patches\napplied cleanly and the regression tests passed, so I submitted the\npatches.\n\nIn the meantime your patches used the same OIDs, and the regression test\nis apparently not sensitive to the overlap.\n\nJust to avoid confusion: only one of us should try to fix the problem :)\nShall I, or are you already working on it?\n\n - Tom\n", "msg_date": "Mon, 01 Jun 1998 15:08:08 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] duplicate oids in pg_proc" }, { "msg_contents": ">> > There are some duplicate oids in pg_proc of May30 snapshot.\n>> > I know octet_length's are from my patch but for others I don't know.\n>> > Comments?\n>> \n>> Those must be mine (char, char_bpchar, bpchar, bpchar_char). I am sure\n>> that I ran ./unused_oids to assign them, but obviously messed it up.\n>> Will reassign soon. Darn :(\n>\n>Oh, I see what happened. I developed on a revlocked tree from 980513 and\n>then patched the current tree and ran the regression tests. Patches\n>applied cleanly and the regression tests passed, so I submitted the\n>patches.\n>\n>In the meantime your patches used the same OIDs, and the regression test\n>is apparently not sensitive to the overlap.\n>\n>Just to avoid confusion: only one of us should try to fix the problem :)\n>Shall I, or are you already working on it?\n\nNot yet. Could you solve the duplication? Thanks in advance,\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Tue, 02 Jun 1998 09:52:18 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: [HACKERS] duplicate oids in pg_proc " } ]
[ { "msg_contents": "Thomas G. Lockhart wrote:\n\n> btw, anyone know of a package for variable- and large-precision\n> numerics? I have looked at the GNU gmp package, but it looks to me\n> that it probably won't fit into the db backend without lots of\n> overhead. Will probably try to use the int64 package in contrib\n> for now...\n\nYou might check the long (infinite precision) int support in the python\ndistribution. It is all in one 32k C file in Objects/longobject.c\n\nIt would require a little untangling, but I think it is not too much.\nAs python is known to work on more platforms than postgreSQL, its \nlong ints should as well ;)\n\nyou can get the distribution, about 2.5M, from www.python.org.\n\nAnother nice project would be to get python to act as a PL inside the\nbackend. Having it in there would probably get more python folk engaged \nwith postgres and that in turn could help to get the word \"Object\" \nback into postgres. \n\nCurrently it seems that most PG developers are die-hard database folks \nthat see the OO features of postgres as a minor nuisance and something \nto get rid of \"as they can easily be implemented (retrofitted) using \nother means\" ;)\n\nHannu\n", "msg_date": "Mon, 01 Jun 1998 11:18:22 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: large-precision numeric support" } ]
[ { "msg_contents": "\nwow. i didn't realize I was in for such a nasty surprise.\nI just downloaded the latest cvs snapshot, and there appears to\nbe some major changes to the code that i've patched.\n\nit would appear as though (I've only looked at one function so far, pqGetc\nin fe-misc.c) there's now a buffer for the connection as opposed to\njust doing a getc. this is a good thing of course, but I need\nto recontemplate my patch.\n\ncan someone who knows about the fe/be func changes fill me in on the\nnew deal?\n\nthanks\nnow I know to work with the latest snapshot.\n:)\n", "msg_date": "Mon, 1 Jun 1998 08:45:22 -0700", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "cvs snapshot comm changes" }, { "msg_contents": "> \n> \n> wow. i didn't realize I was in for such a nasty surprise.\n> I just downloaded the latest cvs snapshot, and there appears to\n> be some major changes to the code that i've patched.\n> \n> it would appear as though (I've only looked at one function so far, pqGetc\n> in fe-misc.c) there's now a buffer for the connection as opposed to\n> just doing a getc. this is a good thing of course, but I need\n> to recontemplate my patch.\n> \n> can someone who knows about the fe/be func changes fill me in on the\n> new deal?\n> \n> thanks\n> now I know to work with the latest snapshot.\n> :)\n\nTom Lane made some protocol changes, like adding Cancel, and cleaning up\nstuff, and fixes for async notifies and queries.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 1 Jun 1998 15:36:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cvs snapshot comm changes" } ]
[ { "msg_contents": "\nit would appear that libpq now has an outgoing data buffer associated\nwith PGconn struct which only gets sent (with send()!?) when pqFlush\ngets called. the backend still appears to use and pass FILE * for\nreading and writing. I wasn't aware that you can read data from a\nFILE * sent with send() over a socket. Is this portable? Time to\npull out stevens.\n\nIn any case, I don't think this bodes well for my SSL patch -- and I\nthink I've missed something -- why have we switched to send/recv? I\nassume for the synchronous notification? I haven't been following\nthat discussion as much as I possibly could be, so I'll look in the\narchives. Anyway, this is sort of a plea for help -- I'm totally\nconfused, so if there's just something I'm missing, please let me\nknow.\n\nDoes anyone know what implications the new communication scheme has\nfor SSL? I know this isn't a postgresql priority, but it is an\ninterest of mine. Is it still possible? I'll start doing my\nhomework.\n", "msg_date": "Mon, 1 Jun 1998 09:08:09 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "some more rambling on the new fe/be communication" }, { "msg_contents": "Brett McCormick <[email protected]> writes:\n> [ Brett was unpleasantly surprised to find major changes in libpq ]\n\nSorry about that, Brett. This was discussed on the hackers list a month\nor so ago, but evidently you missed the thread. I made some fairly\nmajor changes in the client-side libpq to allow it to be used\nasychronously, that is without blocking until the completion of a query.\n\nI didn't bother to do much cleanup of the backend side, since it didn't\nhave to change to get the functionality I was after. I agree that it\ncould stand a cleanup, so if you want to do it, by all means do.\n\n> it would appear that libpq now has an outgoing data buffer associated\n> with PGconn struct which only gets sent (with send()!?) when pqFlush\n> gets called. 
the backend still appears to use and pass FILE * for\n> reading and writing. I wasn't aware that you can read data from a\n> FILE * sent with send() over a socket. Is this portable?\n\nYes. Data on a connection is data; there's no way for the far end to\ntell what syscall or library was used to collect and send the data.\n(The far end might not even be Unix or C based, after all.)\n\n> In any case, I don't think this bodes well for my SSL patch -- and I\n> think I've missed something -- why have we switched to send/recv?\n\nBecause going through the stdio library gives up control over blocking\nwhen no data is available. getc() will block, period.\n\nDoes SSL support non-blocking recv? If so it shouldn't be hard to put\nan SSL layer under what I did with libpq. Note the existence of\nPQsocket() however. If an SSL connection can't be select()'d for then\nwe have got trouble.\n\nBTW, I believe I did fix your earlier complaint that the backend called\npq_putstr again after closing the client connection.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Jun 1998 13:23:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] some more rambling on the new fe/be communication " }, { "msg_contents": "On Thu, 4 June 1998, at 13:23:17, Tom Lane wrote:\n\n> Sorry about that, Brett. This was discussed on the hackers list a month\n> or so ago, but evidently you missed the thread. I made some fairly\n> major changes in the client-side libpq to allow it to be used\n> asychronously, that is without blocking until the completion of a query.\n\ndon't be sorry! that's a good thing. not sure how I missed that.\n\n> I didn't bother to do much cleanup of the backend side, since it didn't\n> have to change to get the functionality I was after. I agree that it\n> could stand a cleanup, so if you want to do it, by all means do.\n\nI beleive I will. 
I thought about this last night, and I came up with\nthis: Since what we need is secure database connections under a stable\nrelease, I'll continue to develop my SSL patch for 6.3.2. Since I've\nalready done the work of \"cleaning up\" (I use quotes because all I've\nreally done is changed the functions to pass the struct ptr around,\nand isolated all read/writes to two functions, pq_read & pq_write)\nI'll issue two separate patches. One which modularizes the IO a\nlittle which will make it easy for people who wish to add other layers\n(like kerberos encryption) and an SSL patch to run on top of that.\n\nI'll have to familiarize myself with the new frontend code, but I plan\non making a similar patch for 6.4 (as we'll also want SSL connections\nwith that). I am tempted to hold off on that once I get my current\nSSL up to snuff and instead work on perl stored procedures, as I feel\nthat is more valuable (and will also do more to familiarize myself\nwith the code).\n\n> \n> > it would appear that libpq now has a outgoing data buffer associated\n> > with PGconn struct which only gets sent (with send()!?) when pqFlush\n> > gets called. the backend still appears to use and pass FILE * for\n> > reading and writing. I wasn't aware that you can read data from a\n> > FILE * sent with send() over a socket. Is this portable?\n> \n> Yes. Data on a connection is data; there's no way for the far end to\n> tell what syscall or library was used to collect and send the data.\n> (The far end might not even be Unix or C based, after all.)\n\nWhat about OOB data? is that just data as well?\n\n> \n> > In any case, I don't think this bodes well for my SSL patch -- and I\n> > think I've missed something -- why have we switched to send/recv?\n> \n> Because going through the stdio library gives up control over blocking\n> when no data is available. getc() will block, period.\n> \n> Does SSL support non-blocking recv? 
If so it shouldn't be hard to put\n> an SSL layer under what I did with libpq. Note the existence of\n> PQsocket() however. If an SSL connection can't be select()'d for then\n> we have got trouble.\n\nI'm sure an SSL connection can be select()'d, and it does support\nnon-blocking recv (I think that's the only way). I think it does\nblock, however, if it doesn't get a full SSL \"packet\" (or whatever the\nappropriate term may be).\n\n> BTW, I believe I did fix your earlier complaint that the backend called\n> pq_putstr again after closing the client connection.\n\nexcellent.\n", "msg_date": "Thu, 4 Jun 1998 15:53:08 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] some more rambling on the new fe/be communication " }, { "msg_contents": "Brett McCormick <[email protected]> writes:\n> I'll have to familiarize myself with the new frontend code, but I plan\n> on making a similar patch for 6.4 (as we'll also want SSL connections\n> with that).\n\nThis seems like a reasonable plan, if you need SSL *now* and not after\n6.4 is released. But:\n\n> I am tempted to hold off on that once I get my current\n> SSL up to snuff and instead work on perl stored procedures, as I feel\n> that is more valuable (and will also do more to familiarize myself\n> with the code).\n\nI'd recommend you do the 6.4 version of the patch first, while it's\nstill fresh in your mind. AFAIK, stored procedures are a completely\ndifferent area of the system; you won't learn anything there that is\nrelevant to the FE/BE protocol.\n\n> On Thu, 4 June 1998, at 13:23:17, Tom Lane wrote:\n>> Yes. Data on a connection is data; there's no way for the far end to\n>> tell what syscall or library was used to collect and send the data.\n>> (The far end might not even be Unix or C based, after all.)\n\n> What about OOB data? is that just data as well?\n\nAs far as the TCP protocol is concerned, yes. 
A lot of libraries that\nyou might want to use do not have an API that accounts for the separate\n\"OOB\" channel within one TCP connection, so it may be difficult or\nimpossible to get at TCP's OOB facility from within a particular\nprogramming environment. But that's not exactly a cross-environment\ncompatibility problem, it's just a missing feature in a library API.\nAny two implementations that both handle OOB should be able to\ncommunicate.\n\n> I'm sure an SSL connection can be select()'d, and it does support\n> non-blocking recv (I think that's the only way). I think it does\n> block, however, if it doesn't get a full SSL \"packet\" (or whatever the\n> appropriate term may be).\n\nHmm ... I don't know enough about SSL to know if this is really a\nproblem, but your comment raises warning flags in my head. This\nneeds investigation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Jun 1998 19:10:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] some more rambling on the new fe/be communication " }, { "msg_contents": "On Thu, 4 June 1998, at 19:10:09, Tom Lane wrote:\n\n> > I am tempted to hold off on that once I get my current\n> > SSL up to snuff and instead work on perl stored procedures, as I feel\n> > that is more valuable (and will also do more to familiarize myself\n> > with the code).\n> \n> I'd recommend you do the 6.4 version of the patch first, while it's\n> still fresh in your mind. 
AFAIK, stored procedures are a completely\n> different area of the system; you won't learn anything there that is\n> relevant to the FE/BE protocol.\n\nI know -- I'm looking to learn more about other areas of the system.\nAnd because the perl stored procedures will support lots of cool\nbackend functions (what sort of stuff is permissible to interface to?)\ni'll learn about them that way.\n\nI've learned as much about the fe/be protocol as I wish to know ;)\nBut I may take your advice.\n\n> > I'm sure an SSL connection can be select()'d, and it does support\n> > non-blocking recv (I think that's the only way). I think it does\n> > block, however, if it doesn't get a full SSL \"packet\" (or whatever the\n> > appropriate term may be).\n> \n> Hmm ... I don't know enough about SSL to know if this is really a\n> problem, but your comment raises warning flags in my head. This\n> needs investigation.\n\nUnfortunately, SSL documention is rather incomplete.\n\nthanks for your info, comments and suggestions\n", "msg_date": "Thu, 4 Jun 1998 16:17:33 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] some more rambling on the new fe/be communication " }, { "msg_contents": "> As far as the TCP protocol is concerned, yes. A lot of libraries that\n> you might want to use do not have an API that accounts for the separate\n> \"OOB\" channel within one TCP connection, so it may be difficult or\n> impossible to get at TCP's OOB facility from within a particular\n> programming environment. But that's not exactly a cross-environment\n> compatibility problem, it's just a missing feature in a library API.\n> Any two implementations that both handle OOB should be able to\n> communicate.\n\nLooks like we will be removing OOB in favor of a CANCEL cookie sent to\nthe postmaster. 
I will work up something soon.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 4 Jun 1998 19:46:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] some more rambling on the new fe/be communication" } ]
[ { "msg_contents": "> On Sun, 31 May 1998, David Gould wrote:\n> \n> > As you may have noticed, I am something of a Linux advocate. And, quite\n> > seriously, I believe it possible that in five years not even Sun will be\n> > shipping anything else. \n> \n> \tThat must be why Sun is currently paying for developers to port\n> FreeBSD over to the Sparc architecture, eh? *grin*\n> \n> \tAnd, ya, I am serious...one of the developers on the FreeBSD\n> mailing lists is contracted to Sun MicroSystems to get a FreeBSD port off\n> the ground...\n\nThats great. Any victory for freed software is a victory for freed software!\n\nBtw, Sun has also apparently just paid to join Linux International. \n\nSo maybe in five years Sun will be shipping either FreeBSD or Linux. ;-)\n\n-dg\n \n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n", "msg_date": "Mon, 1 Jun 1998 15:43:54 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] custom types and optimization" }, { "msg_contents": "On Mon, 1 Jun 1998, David Gould wrote:\n\n> > On Sun, 31 May 1998, David Gould wrote:\n> > \n> > > As you may have noticed, I am something of a Linux advocate. And, quite\n> > > seriously, I believe it possible that in five years not even Sun will be\n> > > shipping anything else. \n> > \n> > \tThat must be why Sun is currently paying for developers to port\n> > FreeBSD over to the Sparc architecture, eh? *grin*\n> > \n> > \tAnd, ya, I am serious...one of the developers on the FreeBSD\n> > mailing lists is contracted to Sun MicroSystems to get a FreeBSD port off\n> > the ground...\n> \n> Thats great. 
Any victory for freed software is a victory for freed software!\n\n\tActually, I would rather see Unix dominate the market, period...I\ndon't really care whether that be Linux or FreeBSD or Solaris...they all\nhave their 'niches'...\n\n\tI've said it before, and will continue to say it...the Linux\ncommunity has done a *great* job towards providing an alternative to\nMicroSloth...they have a ways to go, but at least they are making inroads.\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 1 Jun 1998 21:31:42 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] custom types and optimization" } ]
[ { "msg_contents": "\nMorning...\n\n\tthe following is starting to get on my nerves :( The table I'm\ntrying to update is ~22k records, and I can't seem to get through it...\n\n\tSo, my first question is *what* does this message mean? The\nbackend gives an error about too many open files, but there are less than\n20 processes running on this machine...\n\nzeus:/usr/local/acctng/bin> ./close.pl isdn-1.trends.ca\nprocessing 1405 records ... Found 1405 open records...\nNOTICE: AbortTransaction and not in in-progress state\n\n\tI suspect that this is still a 6.3.1 server, and we're planning on\nmoving this to another machine that has 6.3.2 installed, but I figured I'd\nask regardless...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 1 Jun 1998 22:26:49 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "AbortTransaction and not in in-progress state" }, { "msg_contents": "> \n> Morning...\n> \n> \tthe following is starting to get on my nerves :( The table I'm\n> trying to update is ~22k records, and I can't seem to get through it...\n> \n> \tSo, my first question is *what* does this message mean? The\n> backend gives an error about too many open files, but there are less than\n> 20 processes running on this machine...\n\nThere are two limits on the number of open files in UNIX systems (ok, three, if your C library imposes its own limit on the number of FILE *), the system wide limit and the per process limit. The per process limit is usually much smaller than the system wide limit (i.e 60 vs. 400). It is therefore possible that the backend process is running out of file handles, even though there are plenty of file handles available on a system wide basis.\n\nI hope this helps.\n\nBilly G. Allie.\n\n\n", "msg_date": "Tue, 02 Jun 1998 01:00:57 -0400", "msg_from": "\"Billy G. 
Allie\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] AbortTransaction and not in in-progress state " } ]
[ { "msg_contents": "Can anybody tell me how to set up anon CVS? I'm living inside a\nfirewall.\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Tue, 02 Jun 1998 12:10:06 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "anon CVS inside from firewall?" } ]
[ { "msg_contents": "> The only problem I haven't been able to fix to date is calling \"Dates\"\n> from a database and displaying them like \"Sunday May 31, 1998\" instead\n> \"05-31-1998\"\n> \n> Currently using PHP2.x not PHP3 yet...\n> \n> Kevin\n> \ntry:\n$mydate = '05-31-1998';\n$mon = strtok($mydate, '-');\n$day = strtok('-');\n$year = strtok('-');\necho date('l F d, Y', mktime(0, 0, 0, $mon, $day, $year));\n\n\n\n> --------------------------------------------------------------------\n> Kevin Heflin | ShreveNet, Inc. | Ph:318.222.2638 x103\n> VP/Mac Tech | 333 Texas St #619 | FAX:318.221.6612\n> [email protected] | Shreveport, LA 71101 | http://www.shreve.net\n> --------------------------------------------------------------------\n> \n", "msg_date": "Tue, 2 Jun 1998 11:37:38 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [GENERAL] Re: [HACKERS] custom types and optimization" } ]
[ { "msg_contents": "\nthey don't always work, in the case of a table with an attribute that\ncalls a function for its default value.\n\npostgres=> create function foo() returns int4 as 'select 1' language 'sql';\nCREATE\npostgres=> create table a (b int4 default foo());\nCREATE\n\n% pg_dump postgres > tmpfile\n% cat tmpfile\n\\connect - postgres\nCREATE TABLE a (b int4 DEFAULT foo ( ));\n\\connect - postgres\nCREATE FUNCTION foo ( ) RETURNS int4 AS 'select 1' LANGUAGE 'SQL';\nCOPY a FROM stdin;\n\\.\n% destroydb\n% createdb\n% psql < tmpfile\n\nwhich of course doesn't work, because it tries to create the table before\nthe function, which fails.\n\nthen it spits out the help message because it can't understand \\.\n\nthis happens every time I dump/reload my db\n\nnot a super easy fix, because sql functions can depend on tables to be\ncreated as well as table depending on functions. are circular\nreferences possible? doesn't seem like they would be.\n\nso pg_dump would have to figure out what order to put the\ntable/function creation in. perhaps having ppl manually re-ordering\nthe dump output (and documenting this accordingly!) 
is the best way.\n", "msg_date": "Tue, 2 Jun 1998 18:25:37 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "dump/reload" }, { "msg_contents": "> \n> \n> they don't always work, in the case of a table with an attribute that\n> calls a function for its default value.\n> \n> postgres=> create function foo() returns int4 as 'select 1' language 'sql';\n> CREATE\n> postgres=> create table a (b int4 default foo());\n> CREATE\n> \n> % pg_dump postgres > tmpfile\n> % cat tmpfile\n> \\connect - postgres\n> CREATE TABLE a (b int4 DEFAULT foo ( ));\n> \\connect - postgres\n> CREATE FUNCTION foo ( ) RETURNS int4 AS 'select 1' LANGUAGE 'SQL';\n> COPY a FROM stdin;\n> \\.\n> % destroydb\n> % createdb\n> % psql < tmpfile\n> \n> which of course doesn't work, because it tries to create the table before\n> the function, which fails.\n> \n> then it spits out the help message because it can't understand \\.\n> \n> this happens every time I dump/reload my db\n> \n> not a super easy fix, because sql functions can depend on tables to be\n> created as well as table depending on functions. are circular\n> references possible? doesn't seem like they would be.\n> \n> so pg_dump would have to figure out what order to put the\n> table/function creation in. perhaps having ppl manually re-ordering\n> the dump output (and documenting this accordingly!) is the best way.\n> \n> \n\nThis is a good point, and something worth thinking about. Maybe we\ncould scan through the defaults for a table, and call the dumpfunction\ncommand for any functions. Then when they are later attempted to be\ncreated, they would fail, or we could somehow mark them as already\ndumped.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Tue, 2 Jun 1998 22:19:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] dump/reload" }, { "msg_contents": "On Tue, 2 June 1998, at 22:19:01, Bruce Momjian wrote:\n\n> > they don't always work, in the case of a table with an attribute that\n> > calls a function for its default value.\n\n> This is a good point, and something worth thinking about. Maybe we\n> could scan through the defaults for a table, and call the dumpfunction\n> command for any functions. Then when they are later attempted to be\n> created, they would fail, or we could somehow mark them as already\n> dumped.\n\nWould we look at the binary plan (aiee!) or just try and parse the\nstring value 'pg_attdef.adsrc` for a function call?\n", "msg_date": "Tue, 2 Jun 1998 20:12:37 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] dump/reload" }, { "msg_contents": "> \n> On Tue, 2 June 1998, at 22:19:01, Bruce Momjian wrote:\n> \n> > > they don't always work, in the case of a table with an attribute that\n> > > calls a function for its default value.\n> \n> > This is a good point, and something worth thinking about. Maybe we\n> > could scan through the defaults for a table, and call the dumpfunction\n> > command for any functions. Then when they are later attempted to be\n> > created, they would fail, or we could somehow mark them as already\n> > dumped.\n> \n> Would we look at the binary plan (aiee!) or just try and parse the\n> string value 'pg_attdef.adsrc` for a function call?\n> \n\nJust thought about it. With our new subselects we could:\n\n\tselect * from pg_proc where proid in (select deffunc from pg_class)\n\t\tdump each func\n\tdump tables\n\tselect * from pg_proc where proid not in (select deffunc from pg_class)\n\t\tdump each func\n\nThis sounds like a winner. (I just made up the field names and stuff.) 
\n\nOr are the oid's of the functions used in default values not immediately\navailable. Is that the binary plan you were talking about. That could\nbe very messy. Now I see pg_attrdef. This looks tough to grab function\nnames from.\n\nOh well.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 3 Jun 1998 00:05:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] dump/reload" }, { "msg_contents": "> \n> On Tue, 2 June 1998, at 22:19:01, Bruce Momjian wrote:\n> \n> > > they don't always work, in the case of a table with an attribute that\n> > > calls a function for its default value.\n> \n> > This is a good point, and something worth thinking about. Maybe we\n> > could scan through the defaults for a table, and call the dumpfunction\n> > command for any functions. Then when they are later attempted to be\n> > created, they would fail, or we could somehow mark them as already\n> > dumped.\n> \n> Would we look at the binary plan (aiee!) or just try and parse the\n> string value 'pg_attdef.adsrc` for a function call?\n\nI see pg_attrdef.adsrc now. Wow, that looks tough. Could we grab any\nidentifier before an open paren?\n\nThere has to be an easy fix for this. Can't think of it though.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Wed, 3 Jun 1998 00:09:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] dump/reload" }, { "msg_contents": "On Tue, 2 Jun 1998, Bruce Momjian wrote:\n\n> > \n> > \n> > they don't always work, in the case of a table with an attribute that\n> > calls a function for its default value.\n> > \n> > postgres=> create function foo() returns int4 as 'select 1' language 'sql';\n> > CREATE\n> > postgres=> create table a (b int4 default foo());\n> > CREATE\n> > \n> > % pg_dump postgres > tmpfile\n> > % cat tmpfile\n> > \\connect - postgres\n> > CREATE TABLE a (b int4 DEFAULT foo ( ));\n> > \\connect - postgres\n> > CREATE FUNCTION foo ( ) RETURNS int4 AS 'select 1' LANGUAGE 'SQL';\n> > COPY a FROM stdin;\n> > \\.\n> > % destroydb\n> > % createdb\n> > % psql < tmpfile\n> > \n> > which of course doesn't work, because it tries to create the table before\n> > the function, which fails.\n> > \n> > then it spits out the help message because it can't understand \\.\n> > \n> > this happens every time I dump/reload my db\n> > \n> > not a super easy fix, because sql functions can depend on tables to be\n> > created as well as table depending on functions. are circular\n> > references possible? doesn't seem like they would be.\n> > \n> > so pg_dump would have to figure out what order to put the\n> > table/function creation in. perhaps having ppl manually re-ordering\n> > the dump output (and documenting this accordingly!) is the best way.\n> > \n> > \n> \n> This is a good point, and something worth thinking about. Maybe we\n> could scan through the defaults for a table, and call the dumpfunction\n> command for any functions. Then when they are later attempted to be\n> created, they would fail, or we could somehow mark them as already\n> dumped.\n> \nApologies for intrusion,\n\nI have also a problem with pg_dump, I already posted a bug-report but\nnobody replays. 
If you already know this problem, forget this message.\n\npostgres=> create table prova (var varchar, bp bpchar check (bp='zero'));\nCREATE\npostgres=> create view wprova as select var from prova;\nCREATE\n\n$ pg_dump hygea -s prova > file\n$ cat file\n\n\\connect - postgres\nCREATE TABLE prova (var varchar(-5), bp char(-5)) CONSTRAINT prova_bp CHECK bp = 'zero';\nCOPY prova FROM stdin;\n\\.\n---------------\n. pg_dump don't recreate VIEWs\n. recreates varchar as varchar(-5)\n. recreates bpchar as CHAR(-5)\n. recreates CONSTRAINTs with wrong syntax\n Jose'\n\n", "msg_date": "Wed, 3 Jun 1998 11:33:28 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] dump/reload" }, { "msg_contents": "> > > % pg_dump postgres > tmpfile\n> > > % cat tmpfile\n> > > \\connect - postgres\n> > > CREATE TABLE a (b int4 DEFAULT foo ( ));\n> > > \\connect - postgres\n> > > CREATE FUNCTION foo ( ) RETURNS int4 AS 'select 1' LANGUAGE 'SQL';\n> > > COPY a FROM stdin;\n> > > \\.\n> > > % destroydb\n> > > % createdb\n> > > % psql < tmpfile\n> > > \n\n\nYes, I have this in my mailbox. Doesn't hurt to be reminded though.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 3 Jun 1998 09:00:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] dump/reload" } ]
[ { "msg_contents": "\nI would love a way to keep track of the connections/attempted\nconnections to the postmaster. I'm thinking that when the postmaster\naccept()s a connection, it can just insert a record into a table\n(system catalog or not) with the information, which can be updated\nafter the authentication succeeds/fails or whatnot.\n\nsomething like 'smbstatus' for the samba system.\n\nSo, my question is: how should I go about doing this? should I look\ninto SPI, which I know nothing about? or, what.. I don't think the\ncatalog cache stuff needs to be changed, it isn't as if this info\nneeds to be immediately accessible.\n", "msg_date": "Tue, 2 Jun 1998 20:28:06 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "keeping track of connections" }, { "msg_contents": "> \n> \n> I would love a way to keep track of the connections/attempted\n> connections to the postmaster. I'm thinking that when the postmaster\n> accept()s a connection, it can just insert a record into a table\n> (system catalog or not) with the information, which can be updated\n> after the authentication succeeds/fails or whatnot.\n> \n> something like 'smbstatus' for the samba system.\n> \n> So, my question is: how should I go about doing this? should I look\n> into SPI, which I know nothing about? or, what.. I don't think the\n> catalog cache stuff needs to be changed, it isn't as if this info\n> needs to be immediately accessible.\n\nGood question. Postmaster does not have access to the system tables, so\nit can't access them. You could add a debug option to show it in the\nserver logs, or add it to the -d2 debug option that already shows SQL\nstatements.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Wed, 3 Jun 1998 00:11:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "On Wed, 3 June 1998, at 00:11:01, Bruce Momjian wrote:\n\n> Good question. Postmaster does not have access to the system tables, so\n> it can't access them. You could add a debug option to show it in the\n> server logs, or add it to the -d2 debug option that already shows SQL\n> statements.\n\nHow about something like this: a pool of shared memory where this\ninformation is stored, and then a view which calls a set of functions\nto return the information from the shared memory?\n\nApache does something similar.\n", "msg_date": "Tue, 2 Jun 1998 23:19:03 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "Brett M writes: \n> On Wed, 3 June 1998, at 00:11:01, Bruce Momjian wrote:\n> \n> > Good question. Postmaster does not have access to the system tables, so\n> > it can't access them. You could add a debug option to show it in the\n> > server logs, or add it to the -d2 debug option that already shows SQL\n> > statements.\n> \n> How about something like this: a pool of shared memory where this\n> information is stored, and then a view which calls a set of functions\n> to return the information from the shared memory?\n> \n> Apache does something similar.\n\nI am curious, what is it you are trying to accomplish with this? Are you \ntrying to build a persistent log that you can query later for billing\nor load management/capacity planning information? Are you trying to monitor\nlogin attempts for security auditing? Are you trying to catch logins in\nreal time for some sort of middleware integration?\n\nHere we are discussing solutions, but I don't even know what the problem\nis. 
So, please describe what is needed in terms of requirements/functionality.\n\nThanks\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n", "msg_date": "Wed, 3 Jun 1998 01:05:17 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "On Wed, 3 June 1998, at 01:05:17, David Gould wrote:\n\n> I am curious, what is it you are trying to accomplish with this? Are you \n> trying to build a persistant log that you can query later for billing\n> or load management/capacity planning information? Are you trying to monitor\n> login attempts for security auditing? Are you trying to catch logins in\n> real time for some sort of middleware integration?\n\nThe problem is that when I do a process listing for the postgres user,\nI see many backends. There's no (convenient) way to see what those\nbackends are doing, what db they're connected to or the remote\nhost/postgres user.\n\nMy required functionality is this: a list of all backends and\nconnection details. IP, queries issued, listens/notifications\nrequested/served, bytes transferred, postgres user, db, current query,\nclient version, etcetcetc.\n\nWhat problem am I trying to solve? It is purely a desire for this\ninformation. I also feel it will help me debug problems. It would be\nnice to track down my clients that are now failing because of password\nauthentication, but I do admit that this would not help much.\n\nWhat I shall be doing is hacking libpq to report the name of the\nprocess and related information like environment when connecting to a\ndatabase. This would let me track down those programs. As it is, I\nhave programs failing, and I don't know which ones. 
Obviously they\naren't very crucial, but it would be nice to know how much more it is\nthan me typing 'psql' on the host and expecting to connect.\n\nObviously, this is unrelated. But it is purely a desire for\ninformation. The more info the better. The debug log is quite\nheinous when trying to figure out what's going on, especially with\nlots of connections.\n\nOn another unrelated note, the postmaster has been dying lately,\nleaving children hanging about. I thought something might be\ncorrupted (disk full at one point) so I did a dump/reload. We'll see\nwhat happens.\n\nCall it a feature.\n", "msg_date": "Wed, 3 Jun 1998 02:37:58 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "> \n> On Wed, 3 June 1998, at 00:11:01, Bruce Momjian wrote:\n> \n> > Good question. Postmaster does not have access to the system tables, so\n> > it can't access them. You could add a debug option to show it in the\n> > server logs, or add it to the -d2 debug option that already shows SQL\n> > statements.\n> \n> How about something like this: a pool of shared memory where this\n> information is stored, and then a view which calls a set of functions\n> to return the information from the shared memory?\n> \n> Apache does something similar.\n> \n> \n\nYes, that would work. Are you looking for something to show current\nbackend status? What type of info would be in there?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Wed, 3 Jun 1998 08:43:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "> \n> On Wed, 3 June 1998, at 01:05:17, David Gould wrote:\n> \n> > I am curious, what is it you are trying to accomplish with this? Are you \n> > trying to build a persistant log that you can query later for billing\n> > or load management/capacity planning information? Are you trying to monitor\n> > login attempts for security auditing? Are you trying to catch logins in\n> > real time for some sort of middleware integration?\n> \n> The problem is that when I do a process listing for the postgres user,\n> I see many backends. There's no (convenient) way to see what those\n> backends are doing, what db they're connected to or the remote\n> host/postgres user.\n> \n> My required functionality is this: a list of all backends and\n> connection details. IP, queries issued, listens/notifications\n> requested/served, bytes transfered, postgres user, db, current query,\n> client version, etcetcetc.\n\nThat's a lot of info. One solution for database and username would be\nto modify argv[1] and argv[2] for the postgres backend so it shows this\ninformation on the ps command line. As long as these args are already\nused as part of startup ( and they are when started under the\npostmaster), we could set argv to whatever values we are interested in,\nand clear the rest of them so the output would look nice.\n\nThis would be easy to do, and I would be glad to do it.\n\n\n> What problem am I trying to solve? It is purely a desire for this\n> information. I also feel it will help be debug problems. 
It would be\n> nice to track down my clients that are now failing because of password\n> authentication, but I do admit that this would not help much.\n\nI think you need a log entry for that, and it would be a good idea.\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 3 Jun 1998 08:58:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "> Date: Wed, 3 Jun 1998 02:37:58 -0700 (PDT)\n> From: Brett McCormick <[email protected]>\n> Cc: [email protected], [email protected]\n> Sender: [email protected]\n\n> On Wed, 3 June 1998, at 01:05:17, David Gould wrote:\n> \n> > I am curious, what is it you are trying to accomplish with this? Are you \n> > trying to build a persistant log that you can query later for billing\n> > or load management/capacity planning information? Are you trying to monitor\n> > login attempts for security auditing? Are you trying to catch logins in\n> > real time for some sort of middleware integration?\n> \n> The problem is that when I do a process listing for the postgres user,\n> I see many backends. There's no (convenient) way to see what those\n> backends are doing, what db they're connected to or the remote\n> host/postgres user.\n> \n> My required functionality is this: a list of all backends and\n> connection details. IP, queries issued, listens/notifications\n> requested/served, bytes transfered, postgres user, db, current query,\n> client version, etcetcetc.\n....\n\nCan backend monitoring be compatible with one or more extant\nmonitoring techniques?\n\n1. syslog\n2. HTML (like Apache's real time status)\n3. 
SNMP/SMUX/AgentX\n\n", "msg_date": "Wed, 3 Jun 1998 10:20:30 -0400 (EDT)", "msg_from": "Hal Snyder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "> Can backend monitoring be compatible with one or more extant\n> monitoring techniques?\n> \n> 1. syslog\n> 2. HTML (like Apache's real time status)\n> 3. SNMP/SMUX/AgentX\n\nOooh. An SNMP agent for Postgres. That would be slick...\n\n - Tom\n", "msg_date": "Wed, 03 Jun 1998 14:46:49 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "On Wed, 3 Jun 1998, Hal Snyder wrote:\n\n> > Date: Wed, 3 Jun 1998 02:37:58 -0700 (PDT)\n> > From: Brett McCormick <[email protected]>\n> > Cc: [email protected], [email protected]\n> > Sender: [email protected]\n> \n> > On Wed, 3 June 1998, at 01:05:17, David Gould wrote:\n> > \n> > > I am curious, what is it you are trying to accomplish with this? Are you \n> > > trying to build a persistant log that you can query later for billing\n> > > or load management/capacity planning information? Are you trying to monitor\n> > > login attempts for security auditing? Are you trying to catch logins in\n> > > real time for some sort of middleware integration?\n> > \n> > The problem is that when I do a process listing for the postgres user,\n> > I see many backends. There's no (convenient) way to see what those\n> > backends are doing, what db they're connected to or the remote\n> > host/postgres user.\n> > \n> > My required functionality is this: a list of all backends and\n> > connection details. IP, queries issued, listens/notifications\n> > requested/served, bytes transfered, postgres user, db, current query,\n> > client version, etcetcetc.\n> ....\n> \n> Can backend monitoring be compatible with one or more extant\n> monitoring techniques?\n> \n> 1. syslog\n> 2. 
HTML (like Apache's real time status)\n\n\tI like this method the best...it makes it easier for clients to\nmonitor as well, without having access to the machines...but does it pose\nany security implications?\n\n\n", "msg_date": "Wed, 3 Jun 1998 11:52:35 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "Hi,\n\n> On Wed, 3 June 1998, at 01:05:17, David Gould wrote:\n> \n> > I am curious, what is it you are trying to accomplish with this? Are you \n> > trying to build a persistant log that you can query later for billing\n> > or load management/capacity planning information? Are you trying to monitor\n> > login attempts for security auditing? Are you trying to catch logins in\n> > real time for some sort of middleware integration?\n> \n> The problem is that when I do a process listing for the postgres user,\n> I see many backends. There's no (convenient) way to see what those\n> backends are doing, what db they're connected to or the remote\n> host/postgres user.\n> \n> My required functionality is this: a list of all backends and\n> connection details. IP, queries issued, listens/notifications\n> requested/served, bytes transfered, postgres user, db, current query,\n> client version, etcetcetc.\n> \n>\nPerhaps a wild guess ...\n\nMassimo had a patch, which added the pid in the first field of the \ndebug output (and I guess a timestamp). So you can easily \nsort/grep/trace the debug output. 
\n\nPerhaps this would help and should be really easy.\n\nBTW., I think this feature is so neat, it should be integrated even \nif it doesn't solve *your* problem ;-)\n\nCiao\n\nUlrich\n\n\n\n\nUlrich Voss \\ \\ / /__ / ___|__ _| |\nVoCal web publishing \\ \\ / / _ \\| | / _` | |\[email protected] \\ V / (_) | |__| (_| | |\nhttp://www.vocalweb.de \\_/ \\___/ \\____\\__,_|_|\nTel: (++49) 203-306-1560 web publishing\n", "msg_date": "Wed, 3 Jun 1998 16:38:43 +0000", "msg_from": "\"Ulrich Voss\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "Ulrich Voss writes: \n> Massimo had a patch, which added the pid in the first field of the \n> debug output (and I guess a timestamp). So you can easily \n> sort/grep/trace the debug output. \n> \n> Perhaps this would help and should be really easy.\n> \n> BTW., I think this feature is so neat, it should be integrated even \n> if it doesn't solve *your* problem ;-)\n\nThis is very very helpful when trying to debug interactions between backends\ntoo. For example if something blows up in the lock manager this can give\na record of who did what to who when.\n\nGreat idea.\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n", "msg_date": "Wed, 3 Jun 1998 11:58:22 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "Hal Synder writes:\n> \n> Can backend monitoring be compatible with one or more extant\n> monitoring techniques?\n> \n> 1. syslog\n> 2. HTML (like Apache's real time status)\n> 3. 
SNMP/SMUX/AgentX\n\nIn Illustra, we use (gasp) SQL for this.\n\n> select * from procs;\n\nprocc_pid |proc_xid |proc_database|proc_locktab |proc_locktid |proc_locktype|proc_prio |proc_licenseid|proc_status |proc_user |proc_host |proc_display |proc_spins |proc_buffers |\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n|4787 |0 |58.201e |tables |(7,0) |R |0 |0 |lock wait |miadmin |warbler.illustra.com|/dev/pts/4 |[] |[] |\n|3997 |0 |58.201e |- |(-1,0) | |0 |0 |client input |miadmin |warbler.illustra.com|/dev/pts/11 |[] |[] |\n|29597 |1320638 |58.201e |- |(-1,0) | |0 |0 |running *|miadmin |warbler.illustra.com|/dev/pts/5 |[] |[] |\n|4790 |1320646 |58.7 |- |(-1,0) | |0 |0 |running *|miadmin |warbler.illustra.com|/dev/pts/4 |[6] |[] |\n-------------------------------------------------------------------------------\n\n\"procs\" is a pseudo-table that is generated on the fly from the process data\nstructures in the shared memory when queried. There are also pseudo-tables\nfor locks and traces and other information.\n\n\nThe advantage of using SQL is that the data can be selected into other tables,\ngrouped, projected, joined or whatever. The other advantage is that all the\nexiting clients can take advantage of the data. So if you wanted to write\na graphical status monitor, you could do so quite simply in pgtcl.\n\nIllustra also provides a set of prewritten functions (which are just sql\nfuncs) to provide convenient access to many kinds of common catalog queries.\n\n\n\nI often see posts on this list that overlook the fact that postgresql is\na \"relational database system\" and also \"an SQL system\". Relational\nsystems are meant to be both \"complete\" and \"reflexive\". That is, the\nquery language (SQL) should suffice to do _any_ task needed. 
And any\nmeta-information about the system itself should be available and manageable\nthrough the query language.\n\nThat is why we have system catalogs describing things like columns, tables,\ntypes, indexes etc. The system maintains its metadata by doing queries and\nupdates to the catalogs in the same way that a user can query the catalogs.\nThis reflexivity is the main reason relational systems have such power.\n\nSo, whenever you are thinking about managing information related to a\ndatabase system, think about using the system itself to do it. Managing\ninformation is what database systems are _for_. That is, if the current\nSQL facilities cannot implement your feature, extend the SQL system,\ndon't invent some other _kind_ of facility.\n\n\nThe observation that Apache provides status in HTML means that the Apache\nteam _understand_ that *Apache is a web server*. The natural form of\ninteraction with a web server is HTML.\n\nPostgres is a SQL database server. The natural form of interaction with\na database server is relational queries and tuples.\n\n\nSorry if this is a bit of a rant, but I really think we will have a much\nbetter system if we understand what our system _is_ and try to extend it\nin ways that make it better at that rather than to let it go all shapeless\nand bloated with unrelated features and interfaces.\n\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\nIf simplicity worked, the world would be overrun with insects.\n", "msg_date": "Wed, 3 Jun 1998 12:50:00 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "> Sorry if this is a bit of a rant, but I really think we will have a much\n> better system if we understand what our system _is_ and try to extend it\n> in ways that make it better at that rather than to let it go all shapeless\n> and 
bloated with unrelated features and interfaces.\n\nI'll wait for this discussion to come down to earth, thanks. :-)\n\nMeaning, wow, that sounds nice, but sounds pretty hard too.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 3 Jun 1998 16:44:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "On Wed, 3 Jun 1998, David Gould wrote:\n\n> Hal Synder writes:\n> > \n> > Can backend monitoring be compatible with one or more extant\n> > monitoring techniques?\n> > \n> > 1. syslog\n> > 2. HTML (like Apache's real time status)\n> > 3. SNMP/SMUX/AgentX\n> \n> In Illustra, we use (gasp) SQL for this.\n> \n> > select * from procs;\n> \n> procc_pid |proc_xid |proc_database|proc_locktab |proc_locktid |proc_locktype|proc_prio |proc_licenseid|proc_status |proc_user |proc_host |proc_display |proc_spins |proc_buffers |\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> |4787 |0 |58.201e |tables |(7,0) |R |0 |0 |lock wait |miadmin |warbler.illustra.com|/dev/pts/4 |[] |[] |\n> |3997 |0 |58.201e |- |(-1,0) | |0 |0 |client input |miadmin |warbler.illustra.com|/dev/pts/11 |[] |[] |\n> |29597 |1320638 |58.201e |- |(-1,0) | |0 |0 |running *|miadmin |warbler.illustra.com|/dev/pts/5 |[] |[] |\n> |4790 |1320646 |58.7 |- |(-1,0) | |0 |0 |running *|miadmin |warbler.illustra.com|/dev/pts/4 |[6] |[] |\n> -------------------------------------------------------------------------------\n> \n> \"procs\" is a pseudo-table that is generated on the fly from the process\n> data structures in the shared memory when queried. 
There are also\n> pseudo-tables for locks and traces and other information. \n> \n> \n> The advantage of using SQL is that the data can be selected into other\n> tables, grouped, projected, joined or whatever. The other advantage is\n> that all the exiting clients can take advantage of the data. So if you\n> wanted to write a graphical status monitor, you could do so quite simply\n> in pgtcl. \n> \n> Illustra also provides a set of prewritten functions (which are just sql\n> funcs) to provide convenient access to many kinds of common catalog\n> queries. \n\n\tI definitely like this...it keeps us self-contained as far as the\ndata is concerned, and everyone that is using it knows enough about SQL\n(or should) to be able to glean information as required...\n\n\tWhat would it take to do this though? The 'postmaster' itself,\nunless I've misunderstood a good many of the conversations on this, can't\naccess the tables themselves, only 'flat files' (re: the password issue),\nso it would have to be done in the fork'd process itself. That, IMHO,\nwould pose a possible inconsequential problem though...what if the backend\ndies? Its 'record' in the proc table wouldn't be removed, which would be\nlike having our own internal 'process zombies'...\n\n\tI think this does bear further discussion though...one 'branch' of\nthis would be to have a dynamic table for 'live' processes, but also one\nthat contains a history of past ones...?\n\n\t\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 3 Jun 1998 18:40:10 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "On Wed, 3 Jun 1998, Bruce Momjian wrote:\n\n> > \n> > \n> > I would love a way to keep track of the connections/attempted\n> > connections to the postmaster. 
I'm thinking that when the postmaster\n> > accept()s a connection, it can just insert a record into a table\n> > (system catalog or not) with the information, which can be updated\n> > after the authentication succeeds/fails or whatnot.\n> > \n> > something like 'smbstatus' for the samba system.\n> > \n> > So, my question is: how should I go about doing this? should I look\n> > into SPI, which I know nothing about? or, what.. I don't think the\n> > catalog cache stuff needs to be changed, it isn't as if this info\n> > needs to be immediately accessible.\n> \n> Good question. Postmaster does not have access to the system tables, so\n> it can't access them. You could add a debug option to show it in the\n> server logs, or add it to the -d2 debug option that already shows SQL\n> statements.\n\nHere's one for you...and don't laugh at me, eh? :)\n\npostmaster starts up to listen for connections, and then starts up its own\nbackend to handle database queries? So, on a quiet system, you would have\ntwo processes running, one postmaster, and one postgres...\n\nbasically, the idea is that postmaster can't talk to a table, only\npostgres can...so, setup postmaster the same way that any other interface\nis setup...connect to a backend and pass its transactions through that\nway...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 3 Jun 1998 18:46:02 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "On Wed, 3 Jun 1998, David Gould wrote:\n\n> I am curious, what is it you are trying to accomplish with this? Are you \n> trying to build a persistant log that you can query later for billing\n> or load management/capacity planning information? Are you trying to monitor\n> login attempts for security auditing? 
Are you trying to catch logins in\n> real time for some sort of middleware integration?\n> \n> Here we are discussion solutions, but I don't even know what the problem\n> is. So, please describe what is needed in terms of\n> requirements/functionality.\n\n\tI think the uses could be many. Keep track, on a per 'backend'\nbasis, max memory used during the life of the process, so that you can\nestimate memory requirements/upgrades. Average query times for the\nduration of the process? Or maybe even bring it down to a 'per query'\nlogging, so that you know what the query was, how long it took, and what\nresources were required? Tie that to a table of processes, maybe with a\ntimestamp for when the process started up and when it started. \n\n\tThen, using a simple query, you could figure out peak times for\nprocesses, or number of processes per hour, or...\n\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 3 Jun 1998 18:52:33 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "> Here's one for you...and don't laugh at me, eh? :)\n> \n> postmaster starts up to listen for connections, and then starts up its own\n> backend to handle database queries? So, on a quiet system, you would have\n> two processes running, one postmaster, and one postgres...\n> \n> basically, the idea is that postmaster can't talk to a table, only\n> postgres can...so, setup postmaster the same way that any other interface\n> is setup...connect to a backend and pass its transactions through that\n> way...\n\nSo have the postmaster use the libpq library to open a database\nconnection and execute queries. 
Sounds interesting.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 3 Jun 1998 18:16:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "\nthat's a really cool idea. I think I'll try that.\n\nOn Wed, 3 June 1998, at 18:46:02, The Hermit Hacker wrote:\n\n> postmaster starts up to listen for connections, and then starts up its own\n> backend to handle database queries? So, on a quiet system, you would have\n> two processes running, one postmaster, and one postgres...\n> \n> basically, the idea is that postmaster can't talk to a table, only\n> postgres can...so, setup postmaster the same way that any other interface\n> is setup...connect to a backend and pass its transactions through that\n> way...\n", "msg_date": "Wed, 3 Jun 1998 16:50:28 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "On Wed, 3 June 1998, at 18:40:10, The Hermit Hacker wrote:\n\n> > > select * from procs;\n<stuff deleted>\n> > \n> > \"procs\" is a pseudo-table that is generated on the fly from the process\n> > data structures in the shared memory when queried. There are also\n> > pseudo-tables for locks and traces and other information. \n\nThat's exactly what I envision. PRobably not what I articulated.\n\n> > The advantage of using SQL is that the data can be selected into other\n> > tables, grouped, projected, joined or whatever. The other advantage is\n> > that all the exiting clients can take advantage of the data. So if you\n> > wanted to write a graphical status monitor, you could do so quite simply\n> > in pgtcl. 
\n\nExactly.\n", "msg_date": "Wed, 3 Jun 1998 16:53:25 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "Bruce Momjian gently chides:\n> I wrote:\n> > Sorry if this is a bit of a rant, but I really think we will have a much\n> > better system if we understand what our system _is_ and try to extend it\n> > in ways that make it better at that rather than to let it go all shapeless\n> > and bloated with unrelated features and interfaces.\n> \n> I'll wait for this discussion to come down to earth, thanks. :-)\n> \n> Meaning, wow, that sounds nice, but sounds pretty hard too.\n\nReally? Most of the data we need to collect is in the process table, or lock\nmanager data structure or could be added fairly readily.\n\nSo you need a few things:\n\n - parser/planner needs to recognize the special tables and flag them in\n the query plan. Easy way to do this is to store catalog and type info\n for them in the normal places except that the tables table entry would\n have a flag that says \"I'm special\", and maybe a function oid to the\n actual iterator function (see next item).\n\n The idea is that you rewrite the query \"select * from procs\" into\n \"select * from pg_pseudo_procs()\".\n\n - you then need an iterator function (returns next row per call) for each\n fake table. This function reads the data from whatever the in memory\n structure is and returns a tuple. That is, to the caller it looks a lot\n like heapgetnext() or whatever we call that.\n\nThe rest of this, joins, projections, grouping, insert to another table etc\npretty much falls out of the basic functionality of the system for free.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. 
If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n", "msg_date": "Wed, 3 Jun 1998 20:12:25 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "Marc G. Fournier writes:\n> On Wed, 3 Jun 1998, Bruce Momjian wrote:\n> > \n> > Good question. Postmaster does not have access to the system tables, so\n> > it can't access them. You could add a debug option to show it in the\n> > server logs, or add it to the -d2 debug option that already shows SQL\n> > statements.\n> \n> Here's one for you...and don't laugh at me, eh? :)\n> \n> postmaster starts up to listen for connections, and then starts up its own\n> backend to handle database queries? So, on a quiet system, you would have\n> two processes running, one postmaster, and one postgres...\n> \n> basically, the idea is that postmaster can't talk to a table, only\n> postgres can...so, setup postmaster the same way that any other interface\n> is setup...connect to a backend and pass its transactions through that\n> way...\n\nOk, can I laugh now?\n\nSeriously, if we are going to have a separate backend to do the table access\n(and I agree that this is both necessary and reasonable), why not have it\nbe a plain ordinary backend like all the others and just connect to it from\nthe client? Why get the postmaster involved at all? \n\nFirst, modifying the postmaster to add services has a couple of problems:\n\n - we have to modify the postmaster. This adds code bloat and bugs etc, and\n since the same binary is also the backend, it means the backends carry\n around extra baggage that only is used in the postmaster.\n\n - more importantly, if the postmaster is busy processing a big select from\n a pseudo table or log (well, forwarding results etc), then it cannot also\n respond to a new connection request. 
Unless we multithread the postmaster.\n\n\nSecond, it really isn't required to get the postmaster involved except in\nmaintaining its portion of the shared memory. Anyone that wants to do\nstatus monitoring can connect in the normal way from a client to a backend\nand query the pseudo-tables every second or however often they want. I\nimagine an event log in a circular buffer could even be maintained in the\nshared memory and made available as a pseudo-table for those who want that\nsort of thing.\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n", "msg_date": "Wed, 3 Jun 1998 20:29:52 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "On Wed, 3 June 1998, at 16:38:43, Ulrich Voss wrote:\n\n> Massimo had a patch, which added the pid in the first field of the \n> debug output (and I guess a timestamp). So you can easily \n> sort/grep/trace the debug output. \n\nI'm looking for a little more than that.\n\n> \n> Perhaps this would help and should be really easy.\n> \n> BTW., I think this feature is so neat, it should be integrated even \n> if it doesn't solve *your* problem ;-)\n\nThere isn't much of a problem, I just would love to have the feature I\nmentioned. What are you referring to, the above?\n", "msg_date": "Wed, 3 Jun 1998 23:46:21 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "Hi,\n\n> On Wed, 3 June 1998, at 16:38:43, Ulrich Voss wrote:\n> \n> > Massimo had a patch, which added the pid in the first field of the \n> > debug output (and I guess a timestamp). 
So you can easily \n> > sort/grep/trace the debug output. \n> \n> I'm looking for a little more than that.\n\nOK, but step one is simple, Massimo's patch could possibly be \nintegrated in two or three hours. And it adds valuable debugging \ninfo.\n\n(Btw., Massimo's patch was the first (and I hope last) very helpful \npatch, which for obscure reasons never made it into the official \ndistribution. And it had this simple pid/time patch (not in current \ncvs), it had a spinlock patch (not in current cvs), a better deadlock \ndetection (than 6.2.1, not 6.3) and an async listen option (also the \n6.4. version will be much better I guess). That's why we still use \n6.2.1p6 + massimo patch).\n\n> > \n> > Perhaps this would help and should be really easy.\n> > \n> > BTW., I think this feature is so neat, it should be integrated even \n> > if it doesn't solve *your* problem ;-)\n> \n> There isn't much of a problem, I just would love to have the feature I\n> mentioned. What are you referring to, the above?\n\nYeah, fine. Monitoring the backend is wonderful, but the \npid/timestamp addition is simple and useful too. \n\nThanks again for a great product!\n\nUlrich\n\n\n\nUlrich Voss \\ \\ / /__ / ___|__ _| |\nVoCal web publishing \\ \\ / / _ \\| | / _` | |\[email protected] \\ V / (_) | |__| (_| | |\nhttp://www.vocalweb.de \\_/ \\___/ \\____\\__,_|_|\nTel: (++49) 203-306-1560 web publishing\n", "msg_date": "Thu, 4 Jun 1998 12:57:09 +0000", "msg_from": "\"Ulrich Voss\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "Ulrich Voss writes:\n> > On Wed, 3 June 1998, at 16:38:43, Ulrich Voss wrote:\n> > \n> > > Massimo had a patch, which added the pid in the first field of the \n> > > debug output (and I guess a timestamp). So you can easily \n> > > sort/grep/trace the debug output. 
\n> > \n> > I'm looking for a little more than that.\n> \n> OK, but step one is simple, Massimo's patch could possibly be \n> integrated in two or three hours. And it adds valuable debugging \n> info.\n> \n> (Btw., Massimo's patch was the first (and I hope last) very helpful \n> patch, which for obscure reasons never made into the official \n> distribution. And it had this simple pid/time patch (not in current \n> cvs), it had a spinlock patch (not in current cvs), a better deadlock \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nWell, uhmmm, yes there is a spinlock patch in the current CVS. Please look\nagain.\n\nBtw, I am about to update the spinlock patch based on some testing I did to\nresolve some of Bruce Momjians performance concerns. I will post the results\nof the testing (which are quite interesting if you are a performance fanatic)\nlater today, and the patch this weekend.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n\n\n", "msg_date": "Thu, 4 Jun 1998 11:13:03 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "> Ulrich Voss writes:\n> > > On Wed, 3 June 1998, at 16:38:43, Ulrich Voss wrote:\n> > > \n> > > > Massimo had a patch, which added the pid in the first field of the \n> > > > debug output (and I guess a timestamp). So you can easily \n> > > > sort/grep/trace the debug output. \n> > > \n> > > I'm looking for a little more than that.\n> > \n> > OK, but step one is simple, Massimo's patch could possibly be \n> > integrated in two or three hours. 
And it adds valuable debugging \n> > info.\n> > \n> > (Btw., Massimo's patch was the first (and I hope last) very helpful \n> > patch, which for obscure reasons never made into the official \n> > distribution. And it had this simple pid/time patch (not in current \n> > cvs), it had a spinlock patch (not in current cvs), a better deadlock \n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Well, uhmmm, yes there is a spinlock patch in the current CVS. Please look\n> again.\n\nSorry. I meant in the current stable release ...\n\n> Btw, I am about to update the spinlock patch based on some testing I did to\n> resolve some of Bruce Momjians performance concerns. I will post the results\n> of the testing (which are quite interesting if you are a performance fanatic)\n> later today, and the patch this weekend.\n> \n> -dg\nI hope a patch for 6.3.2 will make its way someday ...\n\nCiao\n\nUlrich\n \n\n\n\nUlrich Voss \\ \\ / /__ / ___|__ _| |\nVoCal web publishing \\ \\ / / _ \\| | / _` | |\[email protected] \\ V / (_) | |__| (_| | |\nhttp://www.vocalweb.de \\_/ \\___/ \\____\\__,_|_|\nTel: (++49) 203-306-1560 web publishing\n", "msg_date": "Fri, 5 Jun 1998 12:31:24 +0000", "msg_from": "\"Ulrich Voss\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "> \n> Hi,\n> \n> > On Wed, 3 June 1998, at 16:38:43, Ulrich Voss wrote:\n> > \n> > > Massimo had a patch, which added the pid in the first field of the \n> > > debug output (and I guess a timestamp). So you can easily \n> > > sort/grep/trace the debug output. \n> > \n> > I'm looking for a little more than that.\n> \n> OK, but step one is simple, Massimo's patch could possibly be \n> integrated in two or three hours. And it adds valuable debugging \n> info.\n> \n> (Btw., Massimo's patch was the first (and I hope last) very helpful \n> patch, which for obscure reasons never made into the official \n> distribution. 
And it had this simple pid/time patch (not in current \n> cvs), it had a spinlock patch (not in current cvs), a better deadlock \n> detection (than 6.2.1, not 6.3) and an async listen option (also the \n> 6.4. version will be much better I gues). That's why we still use \n> 6.2.1p6 + massimo patch).\n\nMe too. I'm still using 6.2.1p6 because I didn't find the time to port all\nthe patches to 6.3. They are almost done except for the lock code which was\nin the meantime modified by Bruce. I hope they will be available before 6.4.\n\n> > > Perhaps this would help and should be really easy.\n> > > \n> > > BTW., I think this feature is so neat, it should be integrated even \n> > > if it doesn't solve *your* problem ;-)\n> > \n> > There isn't much of a problem, I just would love to have the feature I\n> > mentioned. What are you referring to, the above?\n> \n> Yeah, fine. Monitoring the backend is wonderful, but the \n> pid/timestamp addition is simple and useful too. \n> \n> Thanks again for a great product!\n> \n> Ulrich\n> \n> \n> \n> Ulrich Voss \\ \\ / /__ / ___|__ _| |\n> VoCal web publishing \\ \\ / / _ \\| | / _` | |\n> [email protected] \\ V / (_) | |__| (_| | |\n> http://www.vocalweb.de \\_/ \\___/ \\____\\__,_|_|\n> Tel: (++49) 203-306-1560 web publishing\n> \n> \n> \n\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto e-mail: [email protected] |\n| Via Marconi, 141 phone: ++39-461-534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Mon, 8 Jun 1998 16:53:10 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "On Mon, 8 Jun 1998, Massimo Dal Zotto wrote:\n\n> > (Btw., Massimo's patch was the first (and I hope last) 
very helpful \n> > patch, which for obscure reasons never made into the official \n> > distribution. And it had this simple pid/time patch (not in current \n> > cvs), it had a spinlock patch (not in current cvs), a better deadlock \n> > detection (than 6.2.1, not 6.3) and an async listen option (also the \n> > 6.4. version will be much better I gues). That's why we still use \n> > 6.2.1p6 + massimo patch).\n> \n> Me too. I'm still using 6.2.1p6 because I didn't found the time to port all\n> the patches to 6.3. They are almost done except for the lock code which was\n> in the meantime modified by Bruce. I hope they will be available before 6.4.\n\n\tWhat are we currently missing?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 8 Jun 1998 16:58:19 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" } ]
[ { "msg_contents": "Hello,\n\nAfter reading the recent mailing list thread about PostgreSQL not\ngrowing in popularity as fast as MySQL and the lack of a legible logo, I\ngot bored and took a stab at some. I'd like to make a new logo for\npeople to pass around and also get an HTML/Logo usage page in the\ndistribution. Feedback would be appreciated.\n\nCheers,\n-STEVEl\n\nhttp://www.nettek-llc.com/postgresql/\n\n--\n--------------------------------------------\n http://www.nettek-llc.com/\n Southern Oregon's PC network technicians\n--------------------------------------------", "msg_date": "Wed, 03 Jun 1998 08:35:10 +0000", "msg_from": "Steve Logue <[email protected]>", "msg_from_op": true, "msg_subject": "NEW POSTGRESQL LOGOS" }, { "msg_contents": "\n\n\tHi.\n\nDue to some bug report for the postgresql python interface, I started\ntesting the current large objects support. Two points seem to be wrong,\nbut yet I have only studied one.\n\nLO may span over some blocks and whenever a block boundary is crossed (for\nthe first access for example, or whenever a full block has been read), the\nlo_read() query gets a: \n \"NOTICE: buffer leak [xx] detected in BufferPoolCheckLeak()\"\nThe leak is located in an index_getnext() call to seek the next\nblock (using a btree index). But as this part of code is less easy to\nfollow and I can't go further.\nThis call is located in inv_fetchtup(), called by inv_read() from the\ninv_api.c file.\n\nCould someone give me some pointers on how I could track where the faulty\nbuffer is allocated?\n \nThanks.\n\n---\nPascal ANDRE, Internet and Media Consulting\[email protected]\n\"Use the source, Luke. Be one with the Code.\" -- Linus Torvalds\n", "msg_date": "Wed, 3 Jun 1998 16:15:31 +0200 (MEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Buffer leak?" 
}, { "msg_contents": "Steve Logue writes: \n> \n> After reading the recent mailing list thread about PostgreSQL not\n> growing in popularity as fast as MySQL and the lack of a legible logo, I\n> got bored and took a stab at some. I'd like to make a new logo for\n> people to pass around and also get an HTML/Logo usage page in the\n> distribution. Feedback would be appreciated.\n> \n> Cheers,\n> -STEVEl\n> \n> http://www.nettek-llc.com/postgresql/\n\nThese are not bad, although the difference in size between the \"Postgre\" and\nthe \"SQL\" make it a little hard to read as one word. The different background\nfor the two parts of the word adds to this. Still, they are attractive. \n\nHmmm, I have an idea, what about a Penguin?\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n", "msg_date": "Wed, 3 Jun 1998 19:45:24 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "[email protected] (David Gould) writes:\n\n| > http://www.nettek-llc.com/postgresql/\n| \n| These are not bad, although the difference in size between the \"Postgre\" and\n| the \"SQL\" make it a little hard to read as one word. The different background\n| for the two parts of the word adds to this. Still, they are attractive. \n\nI would tend to agree; unfortunately, the name is a bit too long and awkward to\nbe used as a visual logo.\n\n| Hmmm, I have an idea, what about a Penguin?\n\nAlready taken by Apple's Quicktime product. However, a cross between the BSD\ndaemon 'toon and a penguin would be fairly funny; a red and white penguin with\nhorns and a long, barbed tail? 
:)\n\n\t\t\t\t\t\t\t---Ken\n", "msg_date": "Wed, 3 Jun 1998 20:03:07 -0700 (PDT)", "msg_from": "Ken McGlothlen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Wed, 3 Jun 1998, David Gould wrote:\n\n> Steve Logue writes: \n> > \n> > After reading the recent mailing list thread about PostgreSQL not\n> > growing in popularity as fast as MySQL and the lack of a legible logo, I\n> > got bored and took a stab at some. I'd like to make a new logo for\n> > people to pass around and also get an HTML/Logo usage page in the\n> > distribution. Feedback would be appreciated.\n> > \n> > Cheers,\n> > -STEVEl\n> > \n> > http://www.nettek-llc.com/postgresql/\n> \n> These are not bad, although the difference in size between the \"Postgre\" and\n> the \"SQL\" make it a little hard to read as one word. The different background\n> for the two parts of the word adds to this. Still, they are attractive. \n> \n> Hmmm, I have an idea, what about a Penguin?\n\n\tDamn linux fanatics :)\n\n\tWe want \"product identification\"...the Penguin is what Linux is\nidentified with, and, as we *all* know, the last thing I would want to be\nidentified with :)\n\n\tSomeone came up with an alligator as our totem...but nobody seemed\nable to come up with a *strong* image to use :( I kinda like an elephant\nor turtle...a little slower, but highly dependable...\n\n\tWe need something to identify with that isn't already used (ie. I\nwon't suggest a little devil *grin*)...\n\n\tSteve...I looked at the ones you made, and found them a little\ndry...I personally think it is going to be difficult to do a 'Powered by'\nicon with an animal on it, but is it possible to make it *jump* out at you?\nMaybe a little depth? Maybe emboss the word PostgreSQL, so that it stands\nout from the page? 
I like the top-left one the most, against the white\nbackground like that, but maybe have it stick out like a button?\n\n\tI'm just throwing ideas out...take or leave any and/or all of them\n:)\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 4 Jun 1998 00:07:17 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Wed, 3 Jun 1998, Ken McGlothlen wrote:\n\n> Already taken by Apple's Quicktime product. However, a cross between the BSD\n> daemon 'toon and a penguin would be fairly funny; a red and white penguin with\n> horns and a long, barbed tail? :)\n\nActually, I've seen something along those lines before also, a spoof on\nthe Linux penguin, Tux.\n\nBrett W. McCoy \n http://www.lan2wan.com/~bmccoy/\n-----------------------------------------------------------------------\n\"The number of UNIX installations has grown to 10, with more expected.\"\n -- The UNIX Programmer's Manual, 2nd Edition, June, 1972\n\n", "msg_date": "Wed, 3 Jun 1998 23:14:19 -0400 (EDT)", "msg_from": "\"Brett W. McCoy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "Marc G. Fournier writes:\n> On Wed, 3 Jun 1998, David Gould wrote:\n> > Hmmm, I have an idea, what about a Penguin?\n> \n> \tDamn linux fanatics :)\n> \n> \tWe want \"product identification\"...the Penguin is what Linux is\n> identified with, and, as we *all* know, the last thing I would want to be\n> identified with :)\n\nOff topic, but this is too good, so I am going to share anyway:\n\nFor background, the official Linux logo is a drawing of a rather plump\nand distinctly non-threatening penguin known as 'Tux'.\n-dg\n\nForwarded message:\n> Subject: Re: Q: Why a penguin? \n> From: C. 
Chan <[email protected]>\n> Newsgroups: comp.os.linux.misc\n> \n> Clive Clomsbarrow <[email protected]> wrote:\n> >\n> >Apparently Linus is person to blame for adopting\n> >the stupid-looking thing. I guess he thought a\n> >penguin would be cool (literally and figuratively).\n> >I guess he deserves to get his way, even if it is\n> >a demonstrable fact that marketing and engineering\n> >are incompatible specialties.\n> >\n> \n> A penguin is OK, though I'd prefer the N. hemisphere\n> puffin.\n> \n> I think it needs more attitude though, the mascot\n> looks too tame sitting on its fat duff. To really\n> appeal to Americans, the penguin should have a hot\n> pink mohawk, mirrorshades, an ammo belt draped about\n> its shoulders, an Uzi tucked under one wing...\n> \n> ...and the severed head of Bill Gates under the other.\n> \n> -- \n\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n\"Linux was made by foreign terrorists to steal money from true\n AMERICAN companies like Microsoft who invented computing as we\n know it, and are being punished for their success...\"\n", "msg_date": "Wed, 3 Jun 1998 20:17:07 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Wed, 3 Jun 1998, David Gould wrote:\n\n> > I think it needs more attitude though, the mascot\n> > looks too tame sitting on its fat duff. To really\n> > appeal to Americans, the penguin should have a hot\n> > pink mohawk, mirrorshades, an ammo belt draped about\n> > its shoulders, an Uzi tucked under one wing...\n> > \n> > ...and the severed head of Bill Gates under the other.\n\n\nSounds good.. print it!\n\n\nKevin\n\n\n\n--------------------------------------------------------------------\nKevin Heflin | ShreveNet, Inc. 
| Ph:318.222.2638 x103\nVP/Mac Tech | 333 Texas St #619 | FAX:318.221.6612\[email protected] | Shreveport, LA 71101 | http://www.shreve.net\n--------------------------------------------------------------------\n\n", "msg_date": "Wed, 3 Jun 1998 22:32:13 -0500 (CDT)", "msg_from": "Kevin Heflin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> We want \"product identification\"...the Penguin is what Linux is\n> identified with, and, as we *all* know, the last thing I would want to be\n> identified with :)\n\n(First off - thanks to everyone for the feedback.)\n\nI agree - actually I think animals should be out altogether. IMHO Postgres needs\na professional, corporate image. Look at MySQL's stuff again. Very\nclean/professional - something that \"soothes\" the business types into trying the\nproduct for potentially important things. Many people just assume MySQL is a\nsuperior product because it is the product of a company, and looks it (who happen\nto give sources too).\n\n>\n>\n> Someone came up with an alligator as our totem...but nobody seemed\n> able to come up with a *strong* image to use :( I kinda like an elephant\n> or turtle...a little slower, but highly dependable...\n>\n> We need something to identify with that isn't already used (ie. I\n> won't suggest a little devil *grin*)...\n>\n\nThe alligators look good and certainly were more effort. If we must do an animal,\nI think the original logo had it right - something FAST - a cheetah, a gazelle,\ncan't go wrong with an eagle in this country :)\n\n> Steve...I looked at the ones you made, and found then a little\n> dry...I personally think it is going to be difficult to do a 'Powered by'\n> icon with an animal on it, but is it possible to *jumps* out at you?\n> Maybe a little depth? Maybe emboss the work PostgreSQL, so that it stands\n> out from the page? 
I like the top-left one the most, against the white\n> background like that, but maybe have it stick out like a button?\n\nOK we have a consensus somewhat - everyone has been pretty much saying the same\nthings. The logos are \"flat\" and the portions Postgre and SQL are too disjoint,\nand most people like the blue/white combo. I will make up some more interesting\nones this weekend using these notions.\n\nThanks Again,\n-STEVEl\n\n--\n--------------------------------------------\n http://www.nettek-llc.com/\n Southern Oregon's PC network technicians\n--------------------------------------------\n\n\n\n", "msg_date": "Thu, 04 Jun 1998 11:24:13 +0000", "msg_from": "Steve Logue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, Steve Logue wrote:\n\n> > Someone came up with an alligator as our totem...but nobody seemed\n> > able to come up with a *strong* image to use :( I kinda like an elephant\n> > or turtle...a little slower, but highly dependable...\n> >\n> > We need something to identify with that isn't already used (ie. I\n> > won't suggest a little devil *grin*)...\n> >\n> \n> The alligators look good and certainly were more effort. If we must do\n> an animal, I think the original logo had it right - something FAST - a\n> chetah, a gazell, can't go wrong with an eagle in this country :) \n\n\tBut...we aren't fast...we're getting better, mind you :)\n\n\tAnd...ummmm...an eagle is kinda like the Penguin...I don't want to\n\"identify\" with the US ... now, a beaver might be nice? :)\n\n> OK we have a concensus somewhat - everyone has been pretty much saying\n> the say things. The logo's are \"flat\" and the portions Postgre and SQL\n> are too disjoint, and most people like the blue/white combo. I will\n> make up some more interesting ones this weekend using these notions. 
\n\n\tJust be patient with us...we are a hard crowd to please :)\n\n\n", "msg_date": "Thu, 4 Jun 1998 07:46:07 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Wed, 3 Jun 1998, David Gould wrote:\n\n> Steve Logue writes: \n> > \n> > After reading the recent mailing list thread about PostgreSQL not\n> > growing in popularity as fast as MySQL and the lack of a legible logo, I\n> > got bored and took a stab at some. I'd like to make a new logo for\n> > people to pass around and also get an HTML/Logo usage page in the\n> > distribution. Feedback would be appreciated.\n> > \n> > Cheers,\n> > -STEVEl\n> > \n> > http://www.nettek-llc.com/postgresql/\n> \n> These are not bad, although the difference in size between the \"Postgre\" and\n> the \"SQL\" make it a little hard to read as one word. The different background\n> for the two parts of the word adds to this. Still, they are attractive. \n> \n> Hmmm, I have an idea, what about a Penguin?\n\nWhat about *the* Penguin? He's available at www.dccomics.com...\nSorry, couldn't resist. I like the logos and will borrow them\nshortly for the UGD PostgreSQL page...thanks Steve.\n\nCheers,\nTom\n\n\n===================================================================\n\t\tUser Guide Dog Database Project\n===================================================================\n Project Coordinator: Peter J. Puckall <[email protected]>\n Programmers: \n C/Perl: Paul Anderson <[email protected]>\n SQL/Perl: Tom Good <[email protected]>\n HTML: Chris House <[email protected]>\n SQL/Perl: Phil R. 
Lawrence <[email protected]> \n Perl: Mike List <[email protected]>\n Progress 4GL: Robert March <[email protected]>\n===================================================================\n Powered by PostgreSQL 6.3.2 // DBI-0.91::DBD-PG-0.69 // Perl5\n===================================================================\n\n", "msg_date": "Thu, 4 Jun 1998 08:05:25 -0400 (EDT)", "msg_from": "Tom Good <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Wed, 3 Jun 1998, Ken McGlothlen wrote:\n> Already taken by Apple's Quicktime product. However, a cross between\n> the BSD daemon 'toon and a penguin would be fairly funny; a red and\n> white penguin with horns and a long, barbed tail? :) \n\nHow about the BSD daemon and the Linux penguin clubing down baby harp\nseals?\n\nThat would be fairly distinct and eye-catching at the same time.\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Thu, 4 Jun 1998 11:10:07 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "\nIt seems to me like we need to solicit help from contest.gimp.org.\n\nOn Thu, 4 Jun 1998, The Hermit Hacker wrote:\n> \tSteve...I looked at the ones you made, and found then a little\n> dry...I personally think it is going to be difficult to do a 'Powered\n> by' icon with an animal on it, but is it possible to *jumps* out at you? \n> Maybe a little depth? Maybe emboss the work PostgreSQL, so that it\n> stands out from the page? I like the top-left one the most, against the\n> white background like that, but maybe have it stick out like a button? \n> \n> \tI'm just throwing ideas out...take or leave any and/or all of them\n\n\n/* \n Matthew N. 
Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Thu, 4 Jun 1998 11:12:46 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, Tom Good wrote:\n\n> What about *the* Penguin? He's available at www.dccomics.com...\n> Sorry, couldn't resist. I like the logos and will borrow them\n> shortly for the UGD PostgreSQL page...thanks Steve.\n\nMaybe we should present the idea of the logo to the keepers of the Gimp,\nand have them run it as one of their monthly contests. The GNOME project\ndid this, and there were some fabulous logos entered by some very talented\nartists.\n\nBrett W. McCoy \n http://www.lan2wan.com/~bmccoy/\n-----------------------------------------------------------------------\n\"The number of UNIX installations has grown to 10, with more expected.\"\n -- The UNIX Programmer's Manual, 2nd Edition, June, 1972\n\n", "msg_date": "Thu, 4 Jun 1998 12:23:22 -0400 (EDT)", "msg_from": "\"Brett W. McCoy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, Matthew N. Dodd wrote:\n\n> On Wed, 3 Jun 1998, Ken McGlothlen wrote:\n> > Already taken by Apple's Quicktime product. However, a cross between\n> > the BSD daemon 'toon and a penguin would be fairly funny; a red and\n> > white penguin with horns and a long, barbed tail? :) \n> \n> How about the BSD daemon and the Linux penguin clubing down baby harp\n> seals?\n\n\tHow about the BSD daemon just saving time and clubbing down the\nLinux penguin? 
:)\n\n\n", "msg_date": "Thu, 4 Jun 1998 12:23:29 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, The Hermit Hacker wrote:\n> How about the BSD daemon just saving time and clubbing down the Linux\n> penguin? :) \n\nLinux really isn't a problem as far as I'm concerned.\n\nYou spend far too much time engaging in sniper warfare against Linux and\nits users in this group. While I agree with you that Linux has some\nphilosophical issues that I'm not completely happy with, its users do not\nneed our remarks or appreciate our efforts to 'save' them.\n\nLinux will reap what it sows. (And it has)\n\nIf you wish to create dissident waves in what should be a fairly coherent\neffort to produce a superior -database- system, by all means continue OS\nbashing.\n\nSo long as the PostgreSQL development efforts maintain high standards and\ncontinue to demand coherent, well designed and implemented code from the\nvarious submitters, I see no chance of us having to cope with the problems\nof a completely open and arbitrary development effort (Like Linux.) \n\nCompared to the MySQL source, PostgreSQL is remarkably clean. We have a\nvery nice build environment, clear divisions in the source tree for\nvarious modules etc.\n\nLet's concentrate our efforts on converting people from MySQL and WindowsNT\nto Unix (any unix) and leave the OS dicksizing to the Advocacy groups.\n\nBesides, everyone knows that Satanix is the one true OS. :)\n\n(Satanix: When the rapture comes will you have root?)\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Thu, 4 Jun 1998 12:35:55 -0400 (EDT)", "msg_from": "\"Matthew N. 
Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, The Hermit Hacker wrote:\n\n> On Thu, 4 Jun 1998, Matthew N. Dodd wrote:\n> \n> > On Wed, 3 Jun 1998, Ken McGlothlen wrote:\n> > > Already taken by Apple's Quicktime product. However, a cross between\n> > > the BSD daemon 'toon and a penguin would be fairly funny; a red and\n> > > white penguin with horns and a long, barbed tail? :) \n> > \n> > How about the BSD daemon and the Linux penguin clubing down baby harp\n> > seals?\n> \n> \tHow about the BSD daemon just saving time and clubbing down the\n> Linux penguin? :)\n\nI was thinking maybe the BSD daemon grilling the Linux penguin. :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"I'm just not a fan of promoting stupidity!\n We have elected officials for that job!\" -- Rock\n==========================================================================\n\n\n\n", "msg_date": "Thu, 4 Jun 1998 12:57:21 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, Matthew N. Dodd wrote:\n\n> Linux really isn't a problem as far as I'm concerned.\n\n\tYou haven't been around very long, have you? Ask Thomas, one of\nthe core developers and a Linux user, what Linux' history has been like as\nfar as PostgreSQL :)\n\n\tAnyone that has been here long enough knows how I feel about\nLinux, and they know that I enjoy Baiting as much as possible, cause,\nquite frankly...you guys are *sooooo* easy to bait :)\n\n> You spend far too much time engaging in sniper warfare against Linux and\n> its users in this group. 
While I agree with you that Linux has some\n> philosophical issues that I'm not completly happy with its users do not\n> need our remarks or appriciate our efforts to 'save' them.\n\n\tGeez, I wasn't even considering philosophical issues...I was\nconsidering the portability issues as far as PostgreSQL was/is\nconcerned...\n\n\t...look at the mailing list archives...it's been a nightmare,\nespecially with the new glibc \"standard\" that is only standard on Linux :(\n\n> Besides, everyone knows that Satanix is the one true OS. :)\n\n\tErr? What's Satanix and where can I get a copy? :)\n\n", "msg_date": "Thu, 4 Jun 1998 12:57:38 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, Vince Vielhaber wrote:\n\n> On Thu, 4 Jun 1998, The Hermit Hacker wrote:\n> \n> > On Thu, 4 Jun 1998, Matthew N. Dodd wrote:\n> > \n> > > On Wed, 3 Jun 1998, Ken McGlothlen wrote:\n> > > > Already taken by Apple's Quicktime product. However, a cross between\n> > > > the BSD daemon 'toon and a penguin would be fairly funny; a red and\n> > > > white penguin with horns and a long, barbed tail? :) \n> > > \n> > > How about the BSD daemon and the Linux penguin clubing down baby harp\n> > > seals?\n> > \n> > \tHow about the BSD daemon just saving time and clubbing down the\n> > Linux penguin? :)\n> \n> I was thinking maybe the BSD daemon grilling the Linux penguin. :)\n\n\tOn, like, a BBQ? :)\n\n\n", "msg_date": "Thu, 4 Jun 1998 13:00:07 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, The Hermit Hacker wrote:\n\n> On Thu, 4 Jun 1998, Vince Vielhaber wrote:\n> \n> > On Thu, 4 Jun 1998, The Hermit Hacker wrote:\n> > \n> > > On Thu, 4 Jun 1998, Matthew N. 
Dodd wrote:\n> > > \n> > > > On Wed, 3 Jun 1998, Ken McGlothlen wrote:\n> > > > > Already taken by Apple's Quicktime product. However, a cross between\n> > > > > the BSD daemon 'toon and a penguin would be fairly funny; a red and\n> > > > > white penguin with horns and a long, barbed tail? :) \n> > > > \n> > > > How about the BSD daemon and the Linux penguin clubing down baby harp\n> > > > seals?\n> > > \n> > > \tHow about the BSD daemon just saving time and clubbing down the\n> > > Linux penguin? :)\n> > \n> > I was thinking maybe the BSD daemon grilling the Linux penguin. :)\n> \n> \tOn, like, a BBQ? :)\n\nYeah, and he can stick the thing a couple of times with his pitchfork to \nsee if it's done! :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"I'm just not a fan of promoting stupidity!\n We have elected officials for that job!\" -- Rock\n==========================================================================\n\n\n\n", "msg_date": "Thu, 4 Jun 1998 13:04:18 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, The Hermit Hacker wrote:\n> \tYou haven't been around very long, have you? Ask Thomas, one of\n> the core developers and a Linux user, what Linux' history has been like as\n> far as PostgreSQL :)\n> \n> \tAnyone that has been here long enough knows how I feel about\n> Linux, and they know that I enjoy Baiting as much as possible, cause,\n> quite frankly...you guys are *sooooo* easy to bait :)\n\nPlonk. I was here before you were.\n\nRead the old mailing list archives. 
:)\n\nNewcomers might not understand you as well as everyone here does.\n\nWe'd really hate to end up like QMail and have an essentially good product\nbut a bad rep because everyone thinks one of the authors is 'an asshole'.\n\n> \tGeez, I wasn't even considering philosophical issues...I was\n> considering the issues of portability issues as far as PostgreSQL was/is\n> concerned...\n> \n> \t...look at the mailing list archives...its been a nightmare,\n> specially with the new glibc \"standard\" that is only standard on Linux :(\n\nAs I said; Linux will reap what they sow.\n\nAs long as we are not forced to change important things in order to\n'support' Linux we're ok.\n\nThe Linux users/developers associated with this project seem more than\nwilling to do the work of making PostgreSQL work with Linux.\n\nThe Linux community seems more than willing to play upgrade of the week in\norder to maintain a stable operating platform.\n\nIf this is the system that works for them then more power to them.\n\nYou and I and others who enjoy sleeping at night and not having to worry\nabout mucking about with glibc (and other horrors) can use Solaris or\nFreeBSD or BSDI or whatever.\n\n> > Besides, everyone knows that Satanix is the one true OS. :)\n> \n> \tErr? What's Satanix and where can I get a copy? :)\n\nIt's nearly free but the NDA is a complete bitch.\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Thu, 4 Jun 1998 13:06:55 -0400 (EDT)", "msg_from": "\"Matthew N. 
Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, Vince Vielhaber wrote:\n\n> On Thu, 4 Jun 1998, The Hermit Hacker wrote:\n> \n> > On Thu, 4 Jun 1998, Vince Vielhaber wrote:\n> > \n> > > On Thu, 4 Jun 1998, The Hermit Hacker wrote:\n> > > \n> > > > On Thu, 4 Jun 1998, Matthew N. Dodd wrote:\n> > > > \n> > > > > On Wed, 3 Jun 1998, Ken McGlothlen wrote:\n> > > > > > Already taken by Apple's Quicktime product. However, a cross between\n> > > > > > the BSD daemon 'toon and a penguin would be fairly funny; a red and\n> > > > > > white penguin with horns and a long, barbed tail? :) \n> > > > > \n> > > > > How about the BSD daemon and the Linux penguin clubing down baby harp\n> > > > > seals?\n> > > > \n> > > > \tHow about the BSD daemon just saving time and clubbing down the\n> > > > Linux penguin? :)\n> > > \n> > > I was thinking maybe the BSD daemon grilling the Linux penguin. :)\n> > \n> > \tOn, like, a BBQ? :)\n> \n> Yeah, and he can stick the thing a couple of times with his pitchfork to \n> see if it's done! :)\n\n\tWoah, an animated gif at that? :)\n\n\tMatthew...we are only kidding...go back through my older posts\nwhere I explain *why* I personally don't like Linux, but, at the same\ntime, I acknowledge everything Linux *has* done for the anti-MicroSloth\ncampaign. My experience with Linux users, in general, is that you guys\nare sooooo easy to incite to riot, and I enjoy playing with that. I\nalways have, ever since this whole project started, and, as long as its\neasy to do, I'll continue to do so...\n\n\t...take it with the grain of salt that I serve it with :)\n\n\n", "msg_date": "Thu, 4 Jun 1998 13:19:44 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, Matthew N. Dodd wrote:\n\n> > \tErr? What's Satanix and where can I get a copy? 
:)\n> \n> Its nearly free but the NDA is a complete bitch.\n\n\tI think I already signed it, years ago...must have forgotten to\nship me the release :(\n\n\n", "msg_date": "Thu, 4 Jun 1998 13:21:20 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, Matthew N. Dodd wrote:\n\n> On Thu, 4 Jun 1998, The Hermit Hacker wrote:\n> > How about the BSD daemon just saving time and clubbing down the Linux\n> > penguin? :) \n> \n> Linux really isn't a problem as far as I'm concerned.\n> \n> You spend far too much time engaging in sniper warfare against Linux and\n> its users in this group. While I agree with you that Linux has some\n\nI'm a Linux guy who doesn't take offense at all...I dislike that\npenguin as much as anyone. ;-)\n\n> Besides, everyone knows that Satanix is the one true OS. :)\n> (Satanix: When the rapture comes will you have root?)\n\n`Du siehst mit diesem Trank im Leibe bald Helenen in jedem Weibe...'\n - Mephistopheles (well, as recorded by JW v Goethe ;-)\n\nCheers Matthew,\nTom\n\n> /* \n> Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n> [email protected]\t\t| As cruel as it seems nothing ever seems to\n> http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n> */\n> \n> \n\nCheers,\nTom\n\n\n===================================================================\n\t\tUser Guide Dog Database Project\n===================================================================\n Project Coordinator: Peter J. Puckall <[email protected]>\n Programmers: \n C/Perl: Paul Anderson <[email protected]>\n SQL/Perl: Tom Good <[email protected]>\n HTML: Chris House <[email protected]>\n SQL/Perl: Phil R. 
Lawrence <[email protected]> \n Perl: Mike List <[email protected]>\n Progress 4GL: Robert March <[email protected]>\n===================================================================\n Powered by PostgreSQL 6.3.2 // DBI-0.91::DBD-PG-0.69 // Perl5\n===================================================================\n\n", "msg_date": "Thu, 4 Jun 1998 14:18:29 -0400 (EDT)", "msg_from": "Tom Good <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "> On Thu, 4 Jun 1998, Matthew N. Dodd wrote:\n> \n> > On Wed, 3 Jun 1998, Ken McGlothlen wrote:\n> > > Already taken by Apple's Quicktime product. However, a cross between\n> > > the BSD daemon 'toon and a penguin would be fairly funny; a red and\n> > > white penguin with horns and a long, barbed tail? :) \n> > \n> > How about the BSD daemon and the Linux penguin clubing down baby harp\n> > seals?\n> \n> \tHow about the BSD daemon just saving time and clubbing down the\n> Linux penguin? :)\n\nI like it. Only, there is a big open ice floe with one solitary BSD daemon\nin the foreground clubbing like mad at ...\n\n\n\n a crowd of penguins covering the scene from here to the horizon. \n\nGreat fun!\n\nCan we get back to work now?\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n >And what if you are using a platform where there is no source code to the\n >server? Say, IIS on NT? \n Some sins carry with them their own automatic punishment. \n Microsoft is one such. Live by the Bill, suffer by the\n Bill, die by the Bill. 
-- Tom Christiansen\n", "msg_date": "Thu, 4 Jun 1998 11:24:31 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "> \t...look at the mailing list archives...its been a nightmare,\n> specially with the new glibc \"standard\" that is only standard on Linux :(\n\nExcuse? I think glibc is intended to be standard everywhere. As far as I\nknow, it has nothing to do with Linux other than Linux (as usual) is faster\nto adopt it than some of the \"legacy systems\".\n\n> > Besides, everyone knows that Satanix is the one true OS. :)\n> \n> \tErr? What's Satanix and where can I get a copy? :)\n\nNT 5.0. You have to sell your soul to Bill.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\nYou will cooperate with Microsoft, for the good of Microsoft\nand for your own survival. -- Navindra Umanee\n", "msg_date": "Thu, 4 Jun 1998 11:35:38 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, David Gould wrote:\n\n> > On Thu, 4 Jun 1998, Matthew N. Dodd wrote:\n> > \n> > > On Wed, 3 Jun 1998, Ken McGlothlen wrote:\n> > > > Already taken by Apple's Quicktime product. However, a cross between\n> > > > the BSD daemon 'toon and a penguin would be fairly funny; a red and\n> > > > white penguin with horns and a long, barbed tail? :) \n> > > \n> > > How about the BSD daemon and the Linux penguin clubing down baby harp\n> > > seals?\n> > \n> > \tHow about the BSD daemon just saving time and clubbing down the\n> > Linux penguin? :)\n> \n> I like it. Only, there is a big open ice floe with one solitary BSD daemon\n> in the foreground clubbing like mad at ...\n> \n> \n> \n> a crowd of penguins covering the scene from here to the horizon. 
\n> \n> Great fun!\n\n\t*rofl* I love it...:) \n\n> Can we get back to work now?\n\n\tA little light humor is refreshing, no? :)\n\n\n", "msg_date": "Thu, 4 Jun 1998 14:47:48 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, David Gould wrote:\n\n> > \t...look at the mailing list archives...its been a nightmare,\n> > specially with the new glibc \"standard\" that is only standard on Linux :(\n> \n> Excuse? I think glibc is intended to be standard everywhere. As far as I\n> know, it has nothing to do with Linux other than Linux (as usual) is faster\n> to adopt it than some of the \"legacy systems\".\n\n\tNo, my point was that, as far as I know, Linux is the *only* one\nthat has adopted it so far. I've yet to even *hear* anything about it in the\nFreeBSD mailing lists...\n\n> NT 5.0. You have to sell your soul to Bill.\n\n\tAck, damn, I don't want a copy *that* badly :)\n\n\n", "msg_date": "Thu, 4 Jun 1998 14:48:54 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "Matthew N. Dodd wrote:\n\n> On Thu, 4 Jun 1998, The Hermit Hacker wrote:\n> > How about the BSD daemon just saving time and clubbing down the Linux\n> > penguin? :)\n>\n> If you wish to create dissident waves in what should be a fairly coherent\n> effort to produce a superior -database- system, by all means continue OS\n> bashing.\n\nWow - Man - have a beer or two... I started this thread to get some logo ideas\nand eventually help promote PGSQL - back to the subject... 
(to those in contact\nwith the GIMP project - a contest is a great idea)\n\n-STEVEl\n\n--\n--------------------------------------------\n http://www.nettek-llc.com/\n Southern Oregon's PC network technicians\n--------------------------------------------\n\n\n\n", "msg_date": "Thu, 04 Jun 1998 18:52:06 +0000", "msg_from": "Steve Logue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, David Gould wrote:\n> Excuse? I think glibc is intended to be standard everywhere. As far as I\n> know, it has nothing to do with Linux other than Linux (as usual) is\n> faster to adopt it than some of the \"legacy systems\". \n\n'standard' in what sense?\n\nIn the sense that Linux uses it you are correct.\n\nI don't expect NetBSD, FreeBSD, OpenBSD, BSDI, Solaris, Digital Unix, AIX,\nHPUX, SCO/Unixware etc to use it.\n\nIn that sense Linux is still doing things its own way, breaking things for\nno apparent reason. \n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Thu, 4 Jun 1998 14:52:29 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "> \n> On Thu, 4 Jun 1998, David Gould wrote:\n> \n> > > \t...look at the mailing list archives...its been a nightmare,\n> > > specially with the new glibc \"standard\" that is only standard on Linux :(\n> > \n> > Excuse? I think glibc is intended to be standard everywhere. As far as I\n> > know, it has nothing to do with Linux other than Linux (as usual) is faster\n> > to adopt it than some of the \"legacy systems\".\n> \n> \tNo, my point was that, as far as I know, Linux is the *only* one\n> that has adopted it so far. 
I've yet to even *hear* anything about in the\nFreeBSD mailing lists...\n\nBSDI is going to be using glibc, I think as part of a way of running\nLinux binaries.\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 4 Jun 1998 15:11:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, Bruce Momjian wrote:\n\n> > \n> > On Thu, 4 Jun 1998, David Gould wrote:\n> > \n> > > > \t...look at the mailing list archives...its been a nightmare,\n> > > > specially with the new glibc \"standard\" that is only standard on Linux :(\n> > > \n> > > Excuse? I think glibc is intended to be standard everywhere. As far as I\n> > > know, it has nothing to do with Linux other than Linux (as usual) is faster\n> > > to adopt it than some of the \"legacy systems\".\n> > \n> > \tNo, my point was that, as far as I know, Linux is the *only* one\n> > that has adopted it so far. I've yet to even *hear* anything about in the\n> > FreeBSD mailing lists...\n> \n> BSDI is going to be using glibc, I think as part of a way of running\n> Linux binaries.\n\n\tJust for the Linux emulation though (ie. take Linux libraries to\nuse for the Linux emulator)...that's what we are doing here also...\n\n\n", "msg_date": "Thu, 4 Jun 1998 15:23:12 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, Bruce Momjian wrote:\n> BSDI is going to be using glibc, I think as part of a way of running\n> Linux binaries.\n\nThere is a difference between offering glibc as a linux compat lib and\nusing it to link your world against.\n\n/* \n Matthew N. 
Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Thu, 4 Jun 1998 16:02:33 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, Matthew N. Dodd wrote:\n\n> We'd really hate to end up like QMail and have an essentially good product\n> but a bad rep because everyone thinks one of the authors is 'an asshole'.\n\nSince there's only ONE author of qmail, I'll assume you're referring to \nDan. I've been using qmail since version 0.72 and have had many\nencounters with him. He's stubborn. Very stubborn. He's also rather\nopinionated. But he's not 'an asshole'.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2 \n Online Searchable Campground Listings http://www.camping-usa.com\n \"I'm just not a fan of promoting stupidity! \n We have elected officials for that job!\" -- Rock\n==========================================================================\n\n\n", "msg_date": "Thu, 4 Jun 1998 17:45:41 -0400 (edt)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, Vince Vielhaber wrote:\n> Since there's only ONE author of qmail, I'll assume you're referring to\n> Dan. I've been using qmail since version 0.72 and have had many\n> encounters with him. He's stubborn. Very stubborn. He's also rather\n> opinionated. But he's not 'an asshole'. \n\nMaybe not, but he sure comes off that way at times. :)\n\n/* \n Matthew N. 
Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Thu, 4 Jun 1998 17:48:27 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "Matthew N. Dodd writes:\n> > Excuse? I think glibc is intended to be standard everywhere. As far as I\n> > know, it has nothing to do with Linux other than Linux (as usual) is\n> > faster to adopt it than some of the \"legacy systems\". \n> \n> 'standard' in what sense?\n\nIn that it is the base for a common binary format for all Intel Unix\nplatforms. The idea is for applications to be used on all these systems\nwithout need for recompilation.\n\n> In the sense that Linux uses it you are correct.\n\nNo. The others will/should follow.\n\n> I don't expect NetBSD, FreeBSD, OpenBSD, BSDI, Solaris, Digital Unix, AIX,\n> HPUX, SCO/Unixware etc to use it.\n\nAs long as they are not on Intel architecture you're probably right.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! 
| Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Fri, 5 Jun 1998 08:56:56 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "Bruce Momjian writes:\n> BSDI is going to be using glibc, I think as part of a way of running\n> Linux binaries.\n\nI think they are also part of this Unix on Intel group.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Fri, 5 Jun 1998 08:57:33 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, Matthew N. Dodd wrote:\n\n> On Thu, 4 Jun 1998, David Gould wrote:\n> > Excuse? I think glibc is intended to be standard everywhere. As far as I\n> > know, it has nothing to do with Linux other than Linux (as usual) is\n> > faster to adopt it than some of the \"legacy systems\". \n> \n> 'standard' in what sense?\n> \n> In the sense that Linux uses it you are correct.\n> \n> I don't expect NetBSD, FreeBSD, OpenBSD, BSDI, Solaris, Digital Unix, AIX,\n> HPUX, SCO/Unixware etc to use it.\n> \n> In that sense Linux is still doing things its own way, breaking things for\n> no aparent reason. \n\nActually, some time ago a group started up, I think they're called the \nx86open group, that is working on making a standard C-library interface \nfor x86-based UNIXes. In that group, Linux, BSD *and* SCO and some other \ncommercial entities are represented. The plan is to use glibc 2 as a base.\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n", "msg_date": "Fri, 5 Jun 1998 10:36:26 +0200 (MET DST)", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "Dr. Michael Meskes writes:\n> Matthew N. Dodd writes:\n> > > Excuse? I think glibc is intended to be standard everywhere. 
As far as I\n> > > know, it has nothing to do with Linux other than Linux (as usual) is\n> > > faster to adopt it than some of the \"legacy systems\". \n> > \n> > 'standard' in what sense?\n> \n> In that it is the base for a common binary format for all Intel Unix\n> platforms. The idea is for applications to be used on all these systems\n> without need for recompilation.\n> \n> > In the sense that Linux uses it you are correct.\n> \n> No. The others will/should follow.\n> \n> > I don't expect NetBSD, FreeBSD, OpenBSD, BSDI, Solaris, Digital Unix, AIX,\n> > HPUX, SCO/Unixware etc to use it.\n> \n> As long as they are not on Intel architecture you're probably right.\n> \n> Michael\n\nThank you. I was going to say all this, but you have done a better job.\n\nLinux previously used its very own libc. This was not a bad libc and we\nhave all been very happy with it, but programs tended to get littered with\n\"/usr/include/linux\" includes as the libc depended on Linux.\n\nThe whole point of Glibc is to make Linux MORE STANDARD. Glibc is intended\nto be the reference libc. And it is not a Linux thing. It is a standard thing.\nSo that no Linuxisms creep into your code. So that it is portable.\n\nThis of course is most useful if the remaining platforms adopt it too. \nGiven current trends, I suspect this will happen.\n\nOf course, right now only Redhat 5.0 and 5.1 use it. And Redhat has taken\na lot of heat for it too in the more ignorant parts of the Linux community.\n\nThe other distributions will follow. Debian is almost there. And all the\nneat new packages will follow. And so eventually all the Intel platforms will\nhave a Glibc so they can use all that fun new stuff. And the good news is\nthat Glibc is portable in the sense that if you have Glibc, Glibc programs\nwork. So the effect is now the other Intel Unixes get to run all the nice\nnew Linux binaries. Look at that, install a library, get access to more\nsoftware. 
If you want (not have, want) to you can even build software that\nwill run on Linux with no Linux.\n\nBut \"no good deed ever goes unpunished\", so now we have people _complaining_\nabout how awful and nonstandard and horrible Linux is for using Glibc.\n\nAll else aside, the non-Linux Unixes are going to support Linux compatibility.\nOr educate all 10 million Linux users ;-). And Glibc is far better for a\nnon-Linux system than Linux Libc5.\n\nIngrates! ;-)\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"There is this special biologist word we use for 'stable'.\n It is 'dead'.\" -- Jack Cohen\n", "msg_date": "Fri, 5 Jun 1998 01:40:33 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "> > Besides, everyone knows that Satanix is the one true OS. :)\n> > (Satanix: When the rapture comes will you have root?)\n> \n> `Du siehst mit diesem Trank im Leibe bald Helenen in jedem Weibe...'\n> - Mephistopheles (well, as recorded by JW v Goethe ;-)\n\nWhile we're citing Goethe:\n\n`Ich bin ein teil des teils der anfang alles war, ein teil des finsternis die\nsich das licht verdrang'\n\n(For native german speakers, please forgive me any grammar and spelling \nerrors, I'm doing this from head)\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n", "msg_date": "Fri, 5 Jun 1998 10:43:40 +0200 (MET DST)", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Fri, 5 Jun 1998, 
David Gould wrote:\n> This of course is most useful if the remaining platforms adopt it too. \n> Given current trends, I suspect this will happen. \n\nI suspect you're talking about 'remaining Linux distributions', on all\nplatforms (Alpha, MIPS, Sparc, ix86, etc.)\n\nIn that sense you are correct and it is already happening.\n\n> The other distributions will follow. Debian is almost there. And all the\n> neat new packages will follow. And so eventually all the Intel platforms\n> will have a Glibc so they can use all that fun new stuff. And the good\n> news is that Glibc is portable in the sense that if you have Glibc,\n> Glibc programs work. So the effect is now the other Intel Unixs get to\n> run all the nice new Linux binaries. Look at that, install a library,\n> get access to more software. If you want (not have, want) to you can\n> even build software that will run on Linux with no Linux. \n\nAll the other OSes will be able to use glibc2 binaries in the same way that\nLinux uses Solaris2 binaries; through a binary ABI interface.\n\nFreeBSD already runs libc5 and glibc2 binaries. Suggesting that FreeBSD\nor other platforms compile native binaries against glibc2 is silly; they\nhave their own libc which works just fine.\n\n> All else aside, the non Linux Unixs are going to support Linux\n> compatibility.\n\nSupporting a binary ABI is completely different from using glibc2 natively.\n\n> Or educate all 10 million Linux users ;-).\n\nWhy should non-Linux Unix give a rat's ass what the level of education the\naverage Linux user has?\n\n> And Glibc is far better for a non Linux system then Linux Libc5. \n\nI haven't noticed the difference.\n\n/* \n Matthew N. 
Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Fri, 5 Jun 1998, Matthew N. Dodd wrote:\n\n> On Fri, 5 Jun 1998, David Gould wrote:\n> > This of course is most useful if the remaining platforms adopt it too. \n> > Given current trends, I suspect this will happen. \n> \n> I suspect you're talking about 'remaining Linux distributions, on all\n> platforms (Alpha,MIPS,Sparc,ix86 etc.)\n\n\tI'm suspecting the same thing...I follow the developers mailin\nlist for FreeBSD, and have yet to hear of *any* work towards adopting the\nglibc \"standard\"...if someone wishes to point me at work being done for\nanything *other* then Linux (ie. NetBSD? Solaris x86) towards adopting\nthis, I'd be interested...\n\n> > And Glibc is far better for a non Linux system then Linux Libc5. \n> \n> I haven't noticed the difference.\n\n\tOther then that libc5 was stable, of course...?\n\n\n", "msg_date": "Fri, 5 Jun 1998 12:08:32 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "Hello!\n\nOn Mon, 8 Jun 1998, Jose' Soares Da Silva wrote:\n> ...but I don't think that an animal is a good image for PostgreSQL.\n\n Too. I think animal is not good for postgreSQL...\n\n> Are ask yourself, what ppl see in their mind, when they read the word\n> PostgreSQL ?\n\n I think PostgreSQL is a DataBASE system. Because of this, I think logo\nshould be something that is BASE. For example, bridge - big and strong\nbridge full of pieces of data. 
Or full of trucks full of data.\n\nOleg.\n---- \n Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Mon, 8 Jun 1998 14:34:39 +0400 (MSK DST)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "1) If you really want an animal then why not a DOG? The best friend of man ;-)\n\n...but I don't think that an animal is a good image for PostgreSQL.\n\nAsk yourself what ppl see in their mind when they read the word\nPostgreSQL ?\n\nIf you want to know what I see, I see an acronym or monogram of more than\none word (exactly 3 words; Post, Ingres and SQL).\n\n2) let's analyze PostgreSQL...\n\nThe origin of POST should be Latin and it stands for after or beyond.\nThe root of INGRES should be the Latin word INGRESSIO that stands for entrance.\nThe meaning of SQL... (you know that)\n\nWell POST INGRESSIONE (beyond entrance) SQL\n\nWhat's there beyond the entrance ? Information, data, knowledge, etc.\n\nI have recently seen a home page with a splendid door with a legend that\nsaid: \"click here to enter\".\n\nIn our case we can have a library beyond the entrance with information and\nknowledge, and users can access this marvelous world using POST INGRESSIONE.\n\n3) An alternative should be the word:\n\nPostgr(ADUATE) = PostgreSQL\n In British English a postgraduate is a student with a first degree\n from a university who is doing research at a more advanced level.\nDo we want graduate PostgreSQL... well maybe a laurel crown should represent\nit.\n Comments ?\n\t\t\t Jose'\n\n\n", "msg_date": "Mon, 8 Jun 1998 11:21:13 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" } ]
[ { "msg_contents": "Hi. I'm looking for non-English-using Postgres hackers to participate in\nimplementing NCHAR() and alternate character sets in Postgres. I think\nI've worked out how to do the implementation (not the details, just a\nstrategy) so that multiple character sets will be allowed in a single\ndatabase, additional character sets can be loaded at run-time, and so\nthat everything will behave transparently.\n\nI would propose to do this for v6.4 as user-defined packages (with\ncompile-time parser support) on top of the existing USE_LOCALE and MB\npatches so that the existing compile-time options are not changed or\ndamaged.\n\nSo, the initial questions:\n\n1) Is the NCHAR/NVARCHAR/CHARACTER SET syntax and usage acceptable for\nnon-English applications? Do other databases use this SQL92 convention,\nor does it have difficulties?\n\n2) Would anyone be interested in helping to define the character sets\nand helping to test? I don't know the correct collation sequences and\ndon't think they would display properly on my screen...\n\n3) I'd like to implement the existing Cyrillic and EUC-jp character\nsets, and also some European languages (French and ??) which use the\nLatin-1 alphabet but might have different collation sequences. Any\nsuggestions for candidates??\n\n - Tom\n", "msg_date": "Wed, 03 Jun 1998 14:24:17 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Postgres-6.3.2 locale patch" }, { "msg_contents": "\nHi Tom,\n\n> I would propose to do this for v6.4 as user-defined packages (with\n> compile-time parser support) on top of the existing USE_LOCALE and MB\n> patches so that the existing compile-time options are not changed or\n> damaged.\n\nBe careful that system locales may not be here, though you may need the\nlocale information in Postgres. 
They may also be broken (which is in fact\noften the case), so don't depend on them.\n\n> So, the initial questions:\n> \n> 1) Is the NCHAR/NVARCHAR/CHARACTER SET syntax and usage acceptable for\n> non-English applications? Do other databases use this SQL92 convention,\n> or does it have difficulties?\n\nDon't know (yet).\n> \n> 2) Would anyone be interested in helping to define the character sets\n> and helping to test? I don't know the correct collation sequences and\n> don't think they would display properly on my screen...\n\nI can help for french, icelandic, and german and norwegian (though for the\ntwo last ones, I guess there are more appropriate persons on this list :). \n\n> 3) I'd like to implement the existing Cyrillic and EUC-jp character\n> sets, and also some European languages (French and ??) which use the\n> Latin-1 alphabet but might have different collation sequences. Any\n> suggestions for candidates??\n\nThey all have, as soon as we take care of accents, which are all put at\nthe end with an english system. And of course, they are different for each\nlanguage :)\n\nPatrice\n\nPS : I'm sorry, Tom, I haven't been able to work on the faq for the past\nmonth :(( because I've been busy in my free time learning norwegian ! I\nwill submit something very soon, I promise !\n\n--\nPatrice HÉDÉ --------------------------------- [email protected] -----\n ... Looking for a job in Iceland or in Norway !\nIngénieur informaticien - Computer engineer - Tölvufræðingur\n----- http://www.idf.net/patrice/ ----------------------------------\n\n", "msg_date": "Wed, 3 Jun 1998 17:36:55 +0200 (MET DST)", "msg_from": "=?ISO-8859-1?Q?Patrice_H=E9d=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Postgres-6.3.2 locale patch" }, { "msg_contents": ">Hi. I'm looking for non-English-using Postgres hackers to participate in\n>implementing NCHAR() and alternate character sets in Postgres. 
I think\n>I've worked out how to do the implementation (not the details, just a\n>strategy) so that multiple character sets will be allowed in a single\n>database, additional character sets can be loaded at run-time, and so\n>that everything will behave transparently.\n\nSounds like an interesting idea... But before going into discussion, let me\nclarify what \"character sets\" means. A character set consists of\nsome characters. One of the most famous character sets is ISO646\n(almost the same as ASCII). In Western Europe, the ISO 8859 series of character\nsets are widely used. For example, ISO 8859-1 includes English,\nFrench, German etc. and ISO 8859-2 includes Albanian, Romanian\netc. These are \"single byte\" and there is a one-to-many correspondence\nbetween the character set and languages.\n\nExample1:\nISO 8859-1 <------> English, French, German\n\nOn the other hand, some Asian languages such as Japanese, Chinese, and\nKorean do not correspond to a single character set, but rather correspond to\nmultiple character sets.\n\nExample2:\nASCII, JIS X0208, JIS X0201, JIS X0212 <-------> Japanese\n(ASCII, JIS X0208, JIS X0201, JIS X0212 are individual character sets)\n\nAn \"encoding\" is a way to represent a set of character sets in\ncomputers. The above set of character sets is encoded in the EUC_JP\nencoding.\n\nI think SQL92 uses the term \"character set\" to mean an encoding.\n\n>So, the initial questions:\n>\n>1) Is the NCHAR/NVARCHAR/CHARACTER SET syntax and usage acceptable for\n>non-English applications? Do other databases use this SQL92 convention,\n>or does it have difficulties?\n\nAs far as I know, there is no commercial RDBMS that supports\nthe NCHAR/NVARCHAR/CHARACTER SET syntax. Oracle supports multiple\nencodings. An encoding for a database is defined while creating the\ndatabase and cannot be changed at runtime. Clients can use a different\nencoding as long as it is a \"subset\" of the database's encoding. 
For\nexample, an Oracle client can use ASCII if the database encoding is\nEUC_JP.\n\nI think the idea of the \"default\" encoding for a database being\ndefined at database creation time is nice.\n\ncreate database with encoding EUC_JP;\n\nIf the NCHAR/NVARCHAR/CHARACTER SET syntax were supported, a user\ncould use a different encoding other than EUC_JP. Sounds very nice too.\n\n>2) Would anyone be interested in helping to define the character sets\n>and helping to test? I don't know the correct collation sequences and\n>don't think they would display properly on my screen...\n\nI would be able to help you with the Japanese part. For Chinese and\nKorean, I'm going to find volunteers in the local PostgreSQL mailing\nlist I'm running if necessary.\n\n>3) I'd like to implement the existing Cyrillic and EUC-jp character\n>sets, and also some European languages (French and ??) which use the\n>Latin-1 alphabet but might have different collation sequences. Any\n>suggestions for candidates??\n\nCollation sequences for EUC_JP? How nice it would be! One problem\nwith collation sequences for multi-byte encodings is that the sequence might\nbecome huge. It seems you have a solution for that. Please let me know\nmore details.\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Thu, 04 Jun 1998 14:23:42 +0900", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Postgres-6.3.2 locale patch " }, { "msg_contents": "Hello!\n\nOn Wed, 3 Jun 1998, Thomas G. Lockhart wrote:\n> Hi. I'm looking for non-English-using Postgres hackers to participate in\n> implementing NCHAR() and alternate character sets in Postgres. I think\n> I've worked out how to do the implementation (not the details, just a\n> strategy) so that multiple character sets will be allowed in a single\n> database, additional character sets can be loaded at run-time, and so\n> that everything will behave transparently.\n\n All this sounds nice, but I am afraid the job is not for me. 
Actually I\nam very new to the Postgres and SQL world. I started to learn SQL 3 months ago;\nI started to play with Postgres 2 months ago. I started to hack Postgres\nsources (about locale) a little more than a month ago.\n\n> 2) Would anyone be interested in helping to define the character sets\n> and helping to test? I don't know the correct collation sequences and\n> don't think they would display properly on my screen...\n\n It would be nice to test it, provided that it wouldn't break existing\ncode. Our site is running hundreds of CGIs that rely on the current locale support\nin Postgres...\n\nOleg.\n---- \n Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 4 Jun 1998 10:42:44 +0400 (MSK DST)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Postgres-6.3.2 locale patch" }, { "msg_contents": "On Thu, 4 Jun 1998 [email protected] wrote:\n\n> >Hi. I'm looking for non-English-using Postgres hackers to participate in\n> >implementing NCHAR() and alternate character sets in Postgres. I think\n> >I've worked out how to do the implementation (not the details, just a\n> >strategy) so that multiple character sets will be allowed in a single\n> >database, additional character sets can be loaded at run-time, and so\n> >that everything will behave transparently.\n> \n> Sounds interesting idea... But before going into discussion, Let me\n> make clarify what \"character sets\" means. A character sets consists of\n> some characters. One of the most famous character set is ISO646\n> (almost same as ASCII). In western Europe, ISO 8859 series character\n> sets are widely used. For example, ISO 8859-1 includes English,\n> French, German etc. and ISO 8859-2 includes Albanian, Romanian\n> etc. 
These are \"single byte\" and there is one to many correspondacne\n> between the character set and Languages.\n> \n> Example1:\n> ISO 8859-1 <------> English, French, German\n> \n> On the other hand, some asian languages such as Japanese, Chinese, and\n> Korean do not correspond to a chacter set, rather correspond to\n> multiple character sets.\n> \n> Example2:\n> ASCII, JIS X0208, JIS X0201, JIS X0212 <-------> Japanese\n> (ASCII, JIS X0208, JIS X0201, JIS X0212 are individual character sets)\n> \n> An \"encoding\" is a way to represent set of charactser sets in\n> computers. The above set of characters sets are encoded in the EUC_JP\n> encdoing.\n> \n> I think SQL92 uses a term \"character set\" as encoding.\n> \n> >So, the initial questions:\n> >\n> >1) Is the NCHAR/NVARCHAR/CHARACTER SET syntax and usage acceptable for\n> >non-English applications? Do other databases use this SQL92 convention,\n> >or does it have difficulties?\n> \n> As far as I know, there is no commercial RDBMS that supports\n> NCHAR/NVARCHAR/CHARACTER SET syntax. Oracle supports multiple\n> encodings. An encoding for a database is defined while creating the\n> database and cannot be changed at runtime. Clients can use different\n> encoding as long as it is a \"subset\" of the database's encoding. For\n> example, a oracle client can use ASCII if the database encoding is\n> EUC_JP.\n\nI try the following databases on Linux and no one has this feature:\n. MySql\n. Solid \n. Empress \n. Kubl\n. ADABAS D\n\nI found only one under M$-Windows that implement this feature:\n. 
OCELOT\nI'm playing with it, but so far I don't understand its behavior.\nThere's an interesting documentation about it on OCELOT manual,\nif you want I can send it to you.\n\n> \n> I think the idea that the \"default\" encoding for a database being\n> defined at the database creation time is nice.\n> \n> create database with encoding EUC_JP;\n> \n> If NCHAR/NVARCHAR/CHARACTER SET syntax would be supported, a user\n> could use a different encoding other than EUC_JP. Sound very nice too.\n> \n> >2) Would anyone be interested in helping to define the character sets\n> >and helping to test? I don't know the correct collation sequences and\n> >don't think they would display properly on my screen...\n> \n> I would be able to help you in the Japanese part. For Chinese and\n> Korean, I'm going to find volunteers in the local PostgreSQL mailing\n> list I'm running if necessary.\n\nI may help with Italian, Spanish and Portuguese.\n\n> \n> >3) I'd like to implement the existing Cyrillic and EUC-jp character\n> >sets, and also some European languages (French and ??) which use the\n> >Latin-1 alphabet but might have different collation sequences. Any\n> >suggestions for candidates??\n> \n> Collation sequences for EUC_JP? How nice it would be! One of a problem\n> for collation sequences for multi-byte encodings is the sequence might\n> become huge. Seems you have a solution for that. Please let me know\n> more details.\n> --\n> Tatsuo Ishii\n> [email protected]\n Ciao, Jose'\n\n", "msg_date": "Thu, 4 Jun 1998 10:13:31 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Postgres-6.3.2 locale patch " }, { "msg_contents": "> > Sounds interesting idea... 
But before going into discussion, Let me\n> > make clarify what \"character sets\" means.\n> > An \"encoding\" is a way to represent set of charactser sets in\n> > computers.\n> > I think SQL92 uses a term \"character set\" as encoding.\n\nI have found the SQL92 terminology confusing, because they do not seem\nto make the nice clear distinction between encoding and collation\nsequence which you have pointed out. I suppose that there can be an\nissue of visual appearance of an alphabet for different locales also.\n\nafaik, SQL92 uses the term \"character set\" to mean an encoding with an\nimplicit collation sequence. SQL92 allows alternate collation sequences\nto be specified for a \"character set\" when it can be made meaningful.\n\nI would propose to implement \n VARCHAR(length) WITH CHARACTER SET setname\n\nas a type with a type name of, for example, \"VARSETNAME\". This type\nwould have the comparison functions and operators which implement\ncollation sequences.\n\nI would propose to implement\n VARCHAR(length) WITH CHARACTER SET setname COLLATION collname\n\nas a type with a name of, for example, \"VARCOLLNAME\". For the EUC-jp\nencoding, \"collname\" could be \"Korean\" or \"Japanese\" so the type name\nwould become \"varkorean\" or \"varjapanese\". Don't know for sure yet\nwhether this is adequate, but other possibilities can be used if\nnecessary.\n\nWhen a database is created, it can be specified with a default character\nset/collation sequence for the database; this would correspond to the\nNCHAR/NVARCHAR/NTEXT types. We could implement a \n SET NATIONAL CHARACTER SET = 'language';\n\ncommand to determine the default character set for the session when\nNCHAR is used.\n\nThe SQL92 technique for specifying an encoding/collation sequence in a\nliteral string is\n _language 'string'\n\nso for example to specify a string in the French language (implying an\nencoding, collation, and representation?) 
you would use\n _FRENCH 'string'\n\n> > I would be able to help you in the Japanese part. For Chinese and\n> > Korean, I'm going to find volunteers in the local PostgreSQL mailing\n> > list I'm running if necessary.\n> \n> I may help with Italian, Spanish and Portuguese.\n\nGreat, and perhaps Oleg could help test with Cyrillic (I assume I can\nsteal code from the existing \"CYR_LOCALE\" blocks in the Postgres\nbackend).\n\n> > Collation sequences for EUC_JP? How nice it would be! One of a \n> > problem for collation sequences for multi-byte encodings is the \n> > sequence might become huge. Seems you have a solution for that. \n> > Please let me know more details.\n\nUm, no, I just assume we can find a solution :/ I'd like to implement\nthe infrastructure in the Postgres parser to allow multiple\nencodings/collations, and then see where we are. As I mentioned, this\nwould be done for v6.4 as a transparent add-on, so that existing\ncapabilities are not touched or damaged. Implementing everything for\nsome European languages (with the 1-byte Latin-1 encoding?) may be\neasiest, but the Asian languages might be more fun :)\n\n - Tom\n", "msg_date": "Thu, 04 Jun 1998 15:07:11 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Postgres-6.3.2 locale patch" }, { "msg_contents": "Hi!\n\nOn Thu, 4 Jun 1998, Thomas G. Lockhart wrote:\n> Great, and perhaps Oleg could help test with Cyrillic (I assume I can\n> steal code from the existing \"CYR_LOCALE\" blocks in the Postgres\n> backend).\n\n Before sending my patch to pgsql-patches I gave it out to few testers\nhere. 
It wouldn't be too hard to find testers for Cyrillic support, sure.\n\nOleg.\n---- \n Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Fri, 5 Jun 1998 10:39:39 +0400 (MSK DST)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Postgres-6.3.2 locale patch" }, { "msg_contents": ">When a database is created, it can be specified with a default character\n>set/collation sequence for the database; this would correspond to the\n>NCHAR/NVARCHAR/NTEXT types. We could implement a \n> SET NATIONAL CHARACTER SET = 'language';\n\nIn the current implementation of MB, the encoding used by BE is\ndetermined at the compile time. This time I would like to add more\nflexibility in that the encoding can be specified when creating a\ndatabase. I would like to add a new option to the CREATE DATABASE\nstatement:\n\nCREATE DATABASE WITH ENCODING 'encoding';\n\nI'm not sure if this kind of thing is defined in the\nstandard. Suggestion?\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Tue, 16 Jun 1998 13:47:33 +0900", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Postgres-6.3.2 locale patch " } ]
[ { "msg_contents": "On Wed, 3 Jun 1998, Constantin Teodorescu wrote:\n\n> Jose' Soares Da Silva wrote:\n> > \n> > Is there a libpgtcl.so different for v6.3.2 ?\n> \n> It seems that it is.\n> \n> You shoulnd't mix up binaries from 6.3 with libraries from 6.3.2\n> \n> I don't know what could happend, but I \"feel\" isn't good.\n> If PgAccess works with 6.3 , no problem!\n> \n> But I am convinced that PgAccess should work also with libraries\n> (libpgtcl and libpq) from 6.3.2\n> There isn't a problem.\n\n-- I tried this trick because I can't compile libpgtcl on 6.3.2.\n-- It works well on 6.3 but 6.3.2 give me this error:\n\nrosso:~/postgresql-6.3.2/src/interfaces/libpgtcl$ make\ngcc -I../../include -I../../backend -Wall -Wmissing-prototypes\n-I../../backend -I../../include -I../../interfaces/libpq -c pgtcl.c -o\npgtcl.o\npgtcl.c:20: tcl.h: No such file or directory\nIn file included from pgtcl.c:21:\nlibpgtcl.h:17: tcl.h: No such file or directory\nIn file included from pgtcl.c:22:\npgtclCmds.h:16: tcl.h: No such file or directory\nmake: *** [pgtcl.o] Error 1\nrosso:~/postgresql-6.3.2/src/interfaces/libpgtcl$\n\n-- while version 6.3 is ok:\n\nrosso:~/postgresql-6.3/src/interfaces/libpgtcl$ make\ngcc -I../../include -I../../backend -I/usr/include/ncurses\n-I/usr/include/readline -O2 -Wall -Wmissing-prototypes -I/usr/include/tcl\n-I../../backend -I../../include -I../../interfaces/libpq -I -fpic -c pgtcl.c\n-o pgtcl.o\ngcc -I../../include -I../../backend -I/usr/include/ncurses\n-I/usr/include/readline -O2 -Wall -Wmissing-prototypes -I/usr/include/tcl\n-I../../backend -I../../include -I../../interfaces/libpq -I -fpic -c\npgtclCmds.c -o pgtclCmds.o\ngcc -I../../include -I../../backend -I/usr/include/ncurses\n-I/usr/include/readline -O2 -Wall -Wmissing-prototypes -I/usr/include/tcl\n-I../../backend -I../../include -I../../interfaces/libpq -I -fpic -c\npgtclId.c -o pgtclId.o\nar crs libpgtcl.a pgtcl.o pgtclCmds.o pgtclId.o\nranlib libpgtcl.a\nld -shared -L 
../../interfaces/libpq -lpq -o libpgtcl.so.1 pgtcl.o pgtclCmds.o\npgtclId.o\nln -sf libpgtcl.so.1 libpgtcl.so\n-------------------------------------------------------------------\n Jose'\n\n", "msg_date": "Wed, 3 Jun 1998 16:25:27 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] pgaccess 0.86" } ]
[ { "msg_contents": "David Gould wrote:\n>\n> Steve Logue writes: \n> > \n> > After reading the recent mailing list thread about PostgreSQL not\n> > growing in popularity as fast as MySQL and the lack of a legible logo, I\n> > got bored and took a stab at some. I'd like to make a new logo for\n> > people to pass around and also get an HTML/Logo usage page in the\n> > distribution. Feedback would be appreciated.\n> > \n> > Cheers,\n> > -STEVEl\n> > \n> > http://www.nettek-llc.com/postgresql/\n> \n> These are not bad, although the difference in size between the \"Postgre\" and\n> the \"SQL\" make it a little hard to read as one word. The different background\n> for the two parts of the word adds to this. Still, they are attractive. \n\nYes. PostgreSQL is hard to read as one word even when printed ;)\n\nI think it would help to move the \"powered\" word up (and make it \n\"powered by\") and have the baselines for \"Postgre\" and \"SQL\" line up.\n\n> Hmmm, I have an idea, what about a Penguin?\n\nI did some logos a few months ago based on a Crocodile image. 
\nUnfortunately I haven't had time to work on it lately.\n\nThey are still up for viewing at :\n\nhttp://www.trust.ee/Info/PostgreSQL.figs/logo/page.html\n\nI got even some positive feedback ;), especially for the \n'PostgreSQL underneath' logo button on page2.\n\nBut to make a proper logo out of that material still requires some work.\n\nHannu\n", "msg_date": "Thu, 04 Jun 1998 09:04:55 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "Hannu Krosing wrote:\n\n> I did some logos a few months ago based on a Crocodile image.\n> Unfortunately I haven't had time to work on it lately.\n> \n> They are still up for viewing at :\n> \n> http://www.trust.ee/Info/PostgreSQL.figs/logo/page.html\n> \n> I got even some positive feedback ;), especially for the\n> 'PostgreSQL underneath' logo button on page2.\n> \n> But to make a proper logo out of that material still requires some work.\n\n\nI think the croco is great!\n\n\n/* m */\n", "msg_date": "Thu, 04 Jun 1998 12:48:08 +0200", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "On Thu, 4 Jun 1998, Mattias Kregert wrote:\n\n> Hannu Krosing wrote:\n> \n> > I did some logos a few months ago based on a Crocodile image.\n> > Unfortunately I haven't had time to work on it lately.\n> > \n> > They are still up for viewing at :\n> > \n> > http://www.trust.ee/Info/PostgreSQL.figs/logo/page.html\n> > \n> > I got even some positive feedback ;), especially for the\n> > 'PostgreSQL underneath' logo button on page2.\n> > \n> > But to make a proper logo out of that material still requires some work.\n> \n> \n> I think the croco is great!\n\n\tI think a lot of ppl had agreed that it was, but the image needed\nto be worked on...\n\n\tI even like the 'commentary' at the top of the page, which we\ncould use as a \"How we choose our Totem\" page? 
:)\n\n\tI really liked his 'doing a search over an Object-relational...\"\nimagine ...\n\n\n", "msg_date": "Thu, 4 Jun 1998 07:51:41 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "> Yes. PostgreSQL is hard to read as one word even when printed ;)\n> \n> I think it would help to move the \"powered\" word up (and make it \n> \"powered by\") and have the baselines for \"Postgre\" and \"SQL\" line up.\n> \n> > Hmmm, I have an idea, what about a Penguin?\n> \n> I did some logos a few months ago based on a Crocodile image. \n> Unfortunately I have'nt had time to work on it lately.\n> \n> They are still up for viewing at :\n> \n> http://www.trust.ee/Info/PostgreSQL.figs/logo/page.html\n> \n> I got even some positive feedback ;), especially for the \n> 'PostgreSQL underneath' logo button on page2.\n> \n> But to make a proper logo out of that material still requires some work.\n\nI have an idea. Instead of making the SQL with horizontal lines, could\nwe make it with rectangles, so it looks like an SQL table. That may be\na nice effect.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 7 Jun 1998 22:48:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" }, { "msg_contents": "Bruce Momjian wrote:\n\n> I have an idea. Instead of making the SQL with horizontal lines, could\n> we make it with rectangles, so it looks like an SQL table. 
That may be\n> a nice effect.\n\nYup - good idea :)\n\n-STEVEl\n\n--\n--------------------------------------------\n http://www.nettek-llc.com/\n Southern Oregon's PC network technicians\n--------------------------------------------", "msg_date": "Mon, 08 Jun 1998 04:39:48 +0000", "msg_from": "Steve Logue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NEW POSTGRESQL LOGOS" } ]
[ { "msg_contents": "\nOn Thu, 4 Jun 1998, Thomas G. Lockhart wrote:\n\n> Hi. I'm looking for non-English-using Postgres hackers to participate in\n> implementing NCHAR() and alternate character sets in Postgres. I think\n> I've worked out how to do the implementation (not the details, just a\n> strategy) so that multiple character sets will be allowed in a single\n> database, additional character sets can be loaded at run-time, and so\n> that everything will behave transparently.\n\nOk, I'm English, but I'll keep a close eye on this topic as the JDBC\ndriver has two methods that handle Unicode strings.\n\nCurrently, they simply call the Ascii/Binary methods. But they could (when\nNCHAR/NVARCHAR/CHARACTER SET is the columns type) handle the translation\nbetween the character set and Unicode.\n\n> I would propose to do this for v6.4 as user-defined packages (with\n> compile-time parser support) on top of the existing USE_LOCALE and MB\n> patches so that the existing compile-time options are not changed or\n> damaged.\n\nIn a same vein, for getting JDBC up to speed with this, we may need to\nhave a function on the backend that will handle the translation between\nthe encoding and Unicode. This would allow the JDBC driver to\nautomatically handle a new character set without having to write a class\nfor each package.\n\n-- \nPeter Mount, [email protected]\nPostgres email to [email protected] & [email protected]\nRemember, this is my work email, so please CC my home address, as I may\nnot always have time to reply from work.\n\n\n", "msg_date": "Thu, 4 Jun 1998 08:47:31 +0100 (BST)", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Postgres-6.3.2 locale patch (fwd)" }, { "msg_contents": ">In a same vein, for getting JDBC up to speed with this, we may need to\n>have a function on the backend that will handle the translation between\n>the encoding and Unicode. 
This would allow the JDBC driver to\n>automatically handle a new character set without having to write a class\n>for each package.\n\nI already have a patch to handle the translation on the backend\nbetween the encoding and SJIS (yet another encoding for Japanese).\nTranslation for other encodings such as Big5(Chinese) and Unicode are\nin my plan.\n\nThe biggest problem for Unicode is that the translation is not\nsymmetrical. An encoding to Unicode is ok. However, Unicode to an\nencoding is like one-to-many. The reason for that is \"Unification.\" A\ncode point of Unicode might correspond to either Chinese, Japanese or\nKorean. To determine that, we need additional infomation what language\nwe are using. Too bad. Any idea?\n---\nTatsuo Ishii\[email protected]\n", "msg_date": "Thu, 04 Jun 1998 17:16:13 +0900", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Postgres-6.3.2 locale patch (fwd) " }, { "msg_contents": "On Thu, 4 Jun 1998 [email protected] wrote:\n\n> >In a same vein, for getting JDBC up to speed with this, we may need to\n> >have a function on the backend that will handle the translation between\n> >the encoding and Unicode. This would allow the JDBC driver to\n> >automatically handle a new character set without having to write a class\n> >for each package.\n> \n> I already have a patch to handle the translation on the backend\n> between the encoding and SJIS (yet another encoding for Japanese).\n> Translation for other encodings such as Big5(Chinese) and Unicode are\n> in my plan.\n> \n> The biggest problem for Unicode is that the translation is not\n> symmetrical. An encoding to Unicode is ok. However, Unicode to an\n> encoding is like one-to-many. The reason for that is \"Unification.\" A\n> code point of Unicode might correspond to either Chinese, Japanese or\n> Korean. To determine that, we need additional infomation what language\n> we are using. Too bad. Any idea?\n\nI'm not sure. 
I brought this up as it's something that I feel should be\ndone somewhere in the backend, rather than in the clients, and should be\nthought about at this stage.\n\nI was thinking on the lines of a function that handled the translation\nbetween any two given encodings (ie it's told what the initial and final\nencodings are), and returns the translated string (be it single or\nmulti-byte). It could then throw an error if the translation between the\ntwo encodings is not possible, or (optionally) that part of the\ntranslation would fail.\n\nAlso, having this in the backend would allow all the interfaces access to\ninternational encodings without too much work. Adding a new encoding can\nthen be done just on the server (say by adding a module), without having\nto recompile/link everything else. \n\n--\nPeter Mount, [email protected] \nPostgres email to [email protected] & [email protected]\nRemember, this is my work email, so please CC my home address, as I may \nnot always have time to reply from work.\n\n\n", "msg_date": "Thu, 4 Jun 1998 09:58:12 +0100 (BST)", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Postgres-6.3.2 locale patch (fwd) " }, { "msg_contents": "Someone whos headers I am too lazy to retreive wrote:\n> On Thu, 4 Jun 1998, Thomas G. Lockhart wrote:\n> \n> > Hi. I'm looking for non-English-using Postgres hackers to participate in\n> > implementing NCHAR() and alternate character sets in Postgres. I think\n...\n> Currently, they simply call the Ascii/Binary methods. 
But they could (when\n> NCHAR/NVARCHAR/CHARACTER SET is the columns type) handle the translation\n> between the character set and Unicode.\n> \n> > I would propose to do this for v6.4 as user-defined packages (with\n> > compile-time parser support) on top of the existing USE_LOCALE and MB\n> > patches so that the existing compile-time options are not changed or\n> > damaged.\n> \n> In a same vein, for getting JDBC up to speed with this, we may need to\n> have a function on the backend that will handle the translation between\n> the encoding and Unicode. This would allow the JDBC driver to\n> automatically handle a new character set without having to write a class\n> for each package.\n\nJust an observation or two on the topic of internationalization:\n\nIllustra went to unicode internally. This allowed things like kanji table\nnames etc. It worked, but it was very costly in terms of work, bugs, and\nespecially performance although we eventually got most of it back.\n\nThen we created encodings (char set, sort order, error messages etc) for\na bunch of languages. Then we made 8 bit chars convert to unicode and\nassumed 7 bit chars were in 7-bit ascii.\n\nThis worked and was in some sense \"the right thing to do\".\n\nBut, the european customers hated it. Before, when we were \"plain ole\nAmuricans, don't hold with this furrin stuff\", we ignored 8 vs 7 bit\nissues and the europeans were free to stick any characters they wanted\nin and get them out unchanged and it was just as fast as anything else.\n\nWhen we changed to unicode and 7 vs 8 bit sensitivity it forced everyone\nto install an encoding and store their data in unicode. Needless to say\ncustomers in eg Germany did not want to double their disk space and give\nup performance to do something only a little better than they could do\nalready.\n\nUltimately, we backed it out and allowed 8 bit chars again. 
You could still\nget unicode, but except for asian sites it was not widely used, and even in\nasia it was not universally popular.\n\nBottom line, I am not opposed to internationalization. But, it is harder\neven than it looks. And some of the \"correct\" technical solutions turn\nout to be pretty annoying in the real world. \n\nSo, having it as an add on is fine. Providing support in the core is fine\ntoo. An incremental approach of perhaps adding sort orders for 8 bit char\nsets today and something else next release might be ok. But, be very very\ncareful and do not accept that the \"popular\" solutions are useable or try\nto solve the \"whole\" problem in one grand effort.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"And there _is_ a real world. In fact, some of you\n are in it right now.\" -- Gene Spafford\n", "msg_date": "Thu, 4 Jun 1998 22:40:53 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Postgres-6.3.2 locale patch (fwd)" }, { "msg_contents": "> The biggest problem for Unicode is that the translation is not\n> symmetrical. An encoding to Unicode is ok. However, Unicode to an\n> encoding is like one-to-many. The reason for that is \"Unification.\" A\n> code point of Unicode might correspond to either Chinese, Japanese or\n> Korean. To determine that, we need additional infomation what language\n> we are using. Too bad. 
Any idea?\n\nIt seems not that bad for the translation from Unicode to Japanese EUC\n(or SJIS or Big5).\nBecause Japanese EUC(or SJIS) has only Japanese characters and Big5 has only Chinese characters(regarding to only CJK).\nRight?\nIt would be virtually one-to-one or one-to-none when translating\nfrom unicode to them mono-lingual encodings.\nIt, however, would not be that simple to translate from Unicdoe to\nanother multi-lingual encoding(like iso-2022 based Mule encoding?).\n\nKinoshita\n", "msg_date": "Fri, 12 Jun 1998 15:47:16 +0900", "msg_from": "Satoshi Kinoshita <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Postgres-6.3.2 locale patch (fwd)" }, { "msg_contents": ">> The biggest problem for Unicode is that the translation is not\n>> symmetrical. An encoding to Unicode is ok. However, Unicode to an\n>> encoding is like one-to-many. The reason for that is \"Unification.\" A\n>> code point of Unicode might correspond to either Chinese, Japanese or\n>> Korean. To determine that, we need additional infomation what language\n>> we are using. Too bad. Any idea?\n>\n>It seems not that bad for the translation from Unicode to Japanese EUC\n>(or SJIS or Big5).\n>Because Japanese EUC(or SJIS) has only Japanese characters and Big5 has only Chinese characters(regarding to only CJK).\n>Right?\n>It would be virtually one-to-one or one-to-none when translating\n>from unicode to them mono-lingual encodings.\n\nOh, I was wrong. We have already an information about \"what language\nwe are using\" when try to make a translation between Unicode and\nJapanese EUC:-)\n\n>It, however, would not be that simple to translate from Unicdoe to\n>another multi-lingual encoding(like iso-2022 based Mule encoding?).\n\nCorrect.\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Fri, 12 Jun 1998 16:19:05 +0900", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Postgres-6.3.2 locale patch (fwd) " } ]
[ { "msg_contents": "Hi!\n\nI have a serious problem with PostgreSQL 6.3 concerning array of char...\nI can create table with array of char2 for exemple, but not array of\nchar.\ncreate table tab1(a char2[][]); -> OK\ncreate table tab2(a int2[][]); -> OK\ncreate table tab3(a char[][]); -> error... \n\nI can't find it as a known bug and this functionality was possible with\nprevious version of Postgresql.\n\nDo you have an idea? A solution? I have very large tables and char[][]\nis the only way I found to save values of 1 byte (as thin int are not\nsupported by PostgreSQL).\n\nThanks!\n\nStephane\n\n-- \n________________________________________________________________________\n Network Computing Technologies\n\n St�phane MONEGER\n Information System Designer\n Marimba Certified Consultant\n\nNCTech Phone : +33 4 78 61 46 29\n8, rue Hermann Frenkel Fax : +33 4 78 61 46 99 \n69007 LYON Cedex FRANCE Email : [email protected] \n________________________________________________________________________\n", "msg_date": "Thu, 04 Jun 1998 10:03:57 +0200", "msg_from": "Stephane MONEGER <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with array of char" } ]
[ { "msg_contents": "Hi, all!\n\nI am trying to create a trigger to update a field on a \"son\" table\nwhen a linked field (foreign key) is modified on a table \"father\".\nexample:\n\ntable son: table father:\n--------------------- ------------------------\nid /-----< id\ndescription / name\nson_id <----------/ address\n... ...\n--------------------- ------------------------\n\nI see there's a check_foreign_key() function doing the following:\n\nCASCADE - to delete corresponding foreign key,\nRESTRICT - to abort transaction if foreign keys exist,\nSETNULL - to set foreign key referencing primary/unique key\n being deleted to null)\n\nI need to implement a MODIFY clause to set 'son.son_id' equal to 'father.id'\nwhen 'father.id' is updated.\n\nI'm not a C-programmer, then I created a SQL function, but seems that\nTRIGGER doesn't recognize SQL functions.\nAm I right ?\n Thanks, Jose'\n\n", "msg_date": "Thu, 4 Jun 1998 11:10:25 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "TRIGGERS" } ]
[ { "msg_contents": "Basically I would rename template1 to pg_master and connect postmaster\nto this database. Then it would have access to system global sql info.\nThe problem is: postgres backends would need to do a reconnect to their \nown database instead of a connect.\n\nAndreas\n\n", "msg_date": "Thu, 4 Jun 1998 15:34:44 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] keeping track of connections" } ]
[ { "msg_contents": "Over in pgsql-patches, Magosanyi Arpad <[email protected]> wrote:\n> libpgtcl segmentation faults in any pg_exec call, if it fails for any reason\n> There is a patch which has worked for me. The real problem might be in\n> PQreset, which can't reset a conninfo based connection. The patch:\n\n> - --- pgtclCmds.c\t1998/05/27 10:54:36\t1.1\n> +++ pgtclCmds.c\t1998/05/27 10:58:07\n> @@ -454,7 +454,7 @@\n> else {\n> \t/* error occurred during the query */\n> \tTcl_SetResult(interp, conn->errorMessage, TCL_STATIC);\n> - -\tif (connStatus == CONNECTION_OK) {\n> +\tif (connStatus != CONNECTION_OK) {\n> \t PQreset(conn);\n> \t if (conn->status == CONNECTION_OK) {\n> \t\tresult = PQexec(conn, argv[2]);\n> - --\n\nActually, that entire block of \"error recovery\" code looks thoroughly\nbogus to me. I thought seriously about just ripping it out when I was\nmodifying libpgtcl last week, but I refrained. Now I think I should've.\n(For starters, the Tcl_SetResult call is wrong --- TCL_STATIC says that\nthe string passed to Tcl_SetResult is a constant. But if the PQreset\npath is taken then the error message will be overwritten; the Tcl code\nwill not see the original error message, but whatever is left there\nafter the reconnection. Together with Magosanyi's observation that the\nif-test is backwards, it seems clear that this section of the code has\nnever been tested or debugged.) The larger point is that I don't think\nthis low-level routine has any business calling PQreset. Blowing away\nthe connection and making another is a sledgehammer recovery method\nthat ought only be invoked by the application, not by library routines.\nI don't like PQendcopy's use of PQreset either, and would like to take\nthat out too. Any comments?\n\nBut the real reason I'm writing this message is the comment about PQreset\npossibly failing. I know of one case in which PQreset will not work:\nif the database requires a password then PQreset will fail. (Why, you\nask? 
Because connectDB() in fe-connect.c deliberately erases the\npassword after the first successful connection.) Is this the situation\nyou are running into, Magosanyi? Or is there another problem in there?\nIt seems to me that the password issue should only result in a failed\nreconnection, not a coredump. Where exactly is the segfault occurring?\n\nI have been intending to propose that connectDB's deletion of the\npassword be removed. The security gain is marginal, if not completely\nillusory. (If a bad guy has access to the client's address space,\nwhether he can find the password is the least of your worries. Besides,\nwhere did the password come from? There are probably other copies of\nit outside libpq's purview.) So I don't think it's worth breaking\nPQreset for.\n\nAlternatively, we could eliminate PQreset entirely. It doesn't really\ndo anything that the client application can't do for itself (just close\nand re-open the connection; two lines instead of one) and its presence\nseems to encourage the use of poorly-considered error \"recovery\"\nschemes...\n\n<end rant>\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Jun 1998 13:06:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpgtcl bug (and symptomatic treatment)" }, { "msg_contents": "> \n> Over in pgsql-patches, Magosanyi Arpad <[email protected]> wrote:\n> > libpgtcl segmentation faults in any pg_exec call, if it fails for any reason\n> > There is a patch which has worked for me. The real problem might be in\n> > PQreset, which can't reset a conninfo based connection. 
The patch:\n> \n> > - --- pgtclCmds.c\t1998/05/27 10:54:36\t1.1\n> > +++ pgtclCmds.c\t1998/05/27 10:58:07\n> > @@ -454,7 +454,7 @@\n> > else {\n> > \t/* error occurred during the query */\n> > \tTcl_SetResult(interp, conn->errorMessage, TCL_STATIC);\n> > - -\tif (connStatus == CONNECTION_OK) {\n> > +\tif (connStatus != CONNECTION_OK) {\n> > \t PQreset(conn);\n> > \t if (conn->status == CONNECTION_OK) {\n> > \t\tresult = PQexec(conn, argv[2]);\n> > - --\n> \n> Actually, that entire block of \"error recovery\" code looks thoroughly\n> bogus to me. I thought seriously about just ripping it out when I was\n> modifying libpgtcl last week, but I refrained. Now I think I should've.\n> (For starters, the Tcl_SetResult call is wrong --- TCL_STATIC says that\n> the string passed to Tcl_SetResult is a constant. But if the PQreset\n> path is taken then the error message will be overwritten; the Tcl code\n> will not see the original error message, but whatever is left there\n> after the reconnection. Together with Magosanyi's observation that the\n> if-test is backwards, it seems clear that this section of the code has\n> never been tested or debugged.) The larger point is that I don't think\n> this low-level routine has any business calling PQreset. Blowing away\n> the connection and making another is a sledgehammer recovery method\n> that ought only be invoked by the application, not by library routines.\n> I don't like PQendcopy's use of PQreset either, and would like to take\n> that out too. Any comments?\n\nPlease, do whatever you think is best in this area.\n\n> \n> But the real reason I'm writing this message is the comment about PQreset\n> possibly failing. I know of one case in which PQreset will not work:\n> if the database requires a password then PQreset will fail. (Why, you\n> ask? Because connectDB() in fe-connect.c deliberately erases the\n> password after the first successful connection.) Is this the situation\n> you are running into, Magosanyi? 
Or is there another problem in there?\n> It seems to me that the password issue should only result in a failed\n> reconnection, not a coredump. Where exactly is the segfault occurring?\n\nI saw a comment around this code a week ago, saying it breaks PQreset(),\nand was going to remove it myself, with a comment to this list in case\nsome else mentioned a problem. Yes, please remove the password erasure.\n\n> \n> I have been intending to propose that connectDB's deletion of the\n> password be removed. The security gain is marginal, if not completely\n> illusory. (If a bad guy has access to the client's address space,\n> whether he can find the password is the least of your worries. Besides,\n> where did the password come from? There are probably other copies of\n> it outside libpq's purview.) So I don't think it's worth breaking\n> PQreset for.\n\nYes, if they can see the address space, they can see the password typed\nin. If the app coredumps, they can read the password IF they have\naccess to the core file, but again, why would they?\n\n> \n> Alternatively, we could eliminate PQreset entirely. It doesn't really\n> do anything that the client application can't do for itself (just close\n> and re-open the connection; two lines instead of one) and its presence\n> seems to encourage the use of poorly-considered error \"recovery\"\n> schemes...\n\nInteresting.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Thu, 4 Jun 1998 13:30:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: libpgtcl bug (and symptomatic treatment)" }, { "msg_contents": "[please cc: to me the answer, as I am not on the mailinglist (yet?)]\n\nA levelezőm azt hiszi, hogy Tom Lane a következőeket írta:\n> \n> But the real reason I'm writing this message is the comment about PQreset\n> possibly failing. I know of one case in which PQreset will not work:\n> if the database requires a password then PQreset will fail. (Why, you\n> ask? Because connectDB() in fe-connect.c deliberately erases the\n> password after the first successful connection.) Is this the situation\n> you are running into, Magosanyi? Or is there another problem in there?\n> It seems to me that the password issue should only result in a failed\n> reconnection, not a coredump. Where exactly is the segfault occurring?\n\nExactly, the connection had a password. I can't tell you exactly where the\ncore dump occurred, but surely it was inside PQreset.\n\n> \n> I have been intending to propose that connectDB's deletion of the\n> password be removed. The security gain is marginal, if not completely\n> illusory. (If a bad guy has access to the client's address space,\n> whether he can find the password is the least of your worries. Besides,\n> where did the password come from? There are probably other copies of\n> it outside libpq's purview.) So I don't think it's worth breaking\n> PQreset for.\n\nAnd anyway the password goes out plaintext on the net (okay, it can go\ncrypt()ed, but the crypted version is also enough to connect to your postgres \naccount, should someone snooping on the net).\nAs setting up kerberos is a PITA, especially for us in the free world \n(in cryptoexportlaw sense), is it possible to hack in some other light yet\nunsnoopable authentication method? 
(SRP comes to mind) Also, the encryption\nof the connections would be a nifty thing.\n[I am aware of the following facts: (1) there is kerberos also in the free \nworld, (2) with ssh port forwarding I can work around the 'plain on the net'\nproblem.]\n\n> \n> Alternatively, we could eliminate PQreset entirely. It doesn't really\n> do anything that the client application can't do for itself (just close\n> and re-open the connection; two lines instead of one) and its presence\n> seems to encourage the use of poorly-considered error \"recovery\"\n> schemes...\n\nI guess it would break compatibility. Maybe a two-step method would be \nworth considering: first insert a warning (to stderr), that PQreset is\nconsidered harmful. After some time remove it.\nMaybe it is not worth to do the second step.\n\n-- \nGNU GPL: csak tiszta forrásból\n", "msg_date": "Fri, 5 Jun 1998 07:37:34 +0100", "msg_from": "Magosanyi Arpad <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpgtcl bug (and symptomatic treatment)" } ]
[ { "msg_contents": "\nI was just browsing the PostgreSQL site and stumbled on a link to \n\nhttp://datasplash.CS.Berkeley.EDU:8000/tioga/\n\nWhich appears to be a really nifty data visualization tool for databases.\nIt also appears to use/support Postgres95 though there seems to be some\nquestion of it working or not.\n\nAnyone looked at this?\n\nI'm downloading now.\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Thu, 4 Jun 1998 13:28:33 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": true, "msg_subject": "Neat tool? (Datasplash)" }, { "msg_contents": "> \n> \n> I was just browsing the PostgreSQL site and stumbled on a link to \n> \n> http://datasplash.CS.Berkeley.EDU:8000/tioga/\n> \n> Which appears to be a really nifty data visualization tool for databases.\n> It also appears to use/support Postgres95 though there seems to be some\n> question of it working or not.\n> \n> Anyone looked at this?\n> \n> I'm downloading now.\n\nSee also the source directory:\n\n\t/pg/backend/tioga/\n\nThey must be related.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 4 Jun 1998 14:19:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Neat tool? (Datasplash)" } ]
[ { "msg_contents": "I have completed a patch that shows the backend status on the 'ps' line\nfor each process:\n\n24081 ./bin/postmaster -i -B 401 -d -o -F -S 1024\n24089 /usr/local/postgres/./bin/postgres postgres test idle \"\" \"\" \"\" (postmaster)\n24106 /usr/local/postgres/./bin/postgres postgres test SELECT \"\" \"\" \"\" (postmaster)\n\nAs you can see, the backend shows the user, database, and status, which\nis either 'idle' or 'SELECT', 'UPDATE', 'VACUUM', etc. Those \"\" are\nthere because I erased the other args. \"(postmaster)\" is there because\nthat was the initial argv[0] value (we don't fork() anymore).\n\nThis will be useful, even if we go with further status features. I am\ninterested in any other information I should be showing. I believe\nthere is almost zero performance overhead in assigning/showing these\nvalues, except that the strings should be valid during the entire time\nit is assigned to argv. We could almost display the row number as we\nscan through a table. Nifty feature.\n \nThis worked under BSDI, because if you say argv[1] = \"test\", and argc is\nat least 2, it shows \"test\" in ps. If argc is only one (they didn't use\nany args), it will not show it, but I have added a nifty hack to the\npostmaster to re-exec it so it is sure to have a least three args. I\nstrip them off before processing. You can see the patch in the patches\nlist.\n\nBSDI uses the kvm interface for ps, which allows 'ps' to grab the args\nright out of the process's address space. This is a nifty trick,\nconsidering that 'ps' is run inside the address space of another\nprocess. I did not use the sendmail wack-the-environment method of\nchanging 'ps'-displayed args, because it is ugly code, and will probably\ncause more problems than it is worth. Hopefully most platforms will\nallow this kind of assignment to be shown in 'ps'.\n\nI have also removed some unused args to pg_exec_query(). 
Again, it is\nin the patch posted.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 4 Jun 1998 14:32:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "backend now show status in 'ps'" }, { "msg_contents": "> \n> I have completed a patch that shows the backend status on the 'ps' line\n> for each process:\n> \n> 24081 ./bin/postmaster -i -B 401 -d -o -F -S 1024\n> 24089 /usr/local/postgres/./bin/postgres postgres test idle \"\" \"\" \"\" (postmaster)\n> 24106 /usr/local/postgres/./bin/postgres postgres test SELECT \"\" \"\" \"\" (postmaster)\n> \n> As you can see, the backend shows the user, database, and status, which\n> is either 'idle' or 'SELECT', 'UPDATE', 'VACUUM', etc. Those \"\" are\n> there because I erased the other args. \"(postmaster)\" is there because\n> that was the initial argv[0] value (we don't fork() anymore).\n> \n> This will be useful, even if we go with further status features. I am\n> interested in any other information I should be showing. I believe\n> there is almost zero performance overhead in assigning/showing these\n> values, except that the strings should be valid during the entire time\n> it is assigned to argv. We could almost display the row number as we\n> scan through a table. Nifty feature.\n\nI agree.\n\n> This worked under BSDI, because if you say argv[1] = \"test\", and argc is\n> at least 2, it shows \"test\" in ps. If argc is only one (they didn't use\n> any args), it will not show it, but I have added a nifty hack to the\n> postmaster to re-exec it so it is sure to have a least three args. I\n> strip them off before processing. You can see the patch in the patches\n> list.\n\nI believe, this won't work under Linux. 
I'm not 100% sure about it but from\nwhat I can remember Linux pass a copy of the original argv to the program\nand changing it doesn't change the argv strings shown by ps. You must zap\nthe strings itself inside the page allocated for argv.\nI would suggest the following code which works fine also under linux, even\nwith zero args.\n\n#ifdef linux\n progname = argv[0];\n /* Fill the argv buffer vith 0's, once during the initialization */\n for (i=0; i<argc; argc++) {\n memset(argv[i], 0, strlen(argv[i]));\n }\n#endif\n\n /* Build status info */\n sprintf(status, \"%s ...\", ...);\n#ifdef bsdi\n argv[1] = status;\n#endif\n#ifdef linux\n /* Print the original argv[0] + status info in the argv buffer */\n sprintf(argv[0], \"%s %s\", progname, status);\n#endif\n\nI would also suggest using only lowercase messages if possible. They don't\nhurt the eyes too much.\n\n> BSDI uses the kvm interface for ps, which allows 'ps' to grab the args\n> right out of the process's address space. This is a nifty trick,\n> considering that 'ps' is run inside the address space of another\n> process. I did not use the sendmail wack-the-environment method of\n> changing 'ps'-displayed args, because it is ugly code, and will probably\n> cause more problems than it is worth. Hopefully most platforms will\n> allow this kind of assignment to be shown in 'ps'.\n> \n> I have also removed some unused args to pg_exec_query(). Again, it is\n> in the patch posted.\n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. 
| (610) 853-3000(h)\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto e-mail: [email protected] |\n| Via Marconi, 141 phone: ++39-461-534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Mon, 8 Jun 1998 17:17:52 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend now show status in 'ps'" }, { "msg_contents": "Hello!\n\nOn Mon, 8 Jun 1998, Massimo Dal Zotto wrote:\n> > This worked under BSDI, because if you say argv[1] = \"test\", and argc is\n> > at least 2, it shows \"test\" in ps. If argc is only one (they didn't use\n> > any args), it will not show it, but I have added a nifty hack to the\n> > postmaster to re-exec it so it is sure to have a least three args. I\n> > strip them off before processing. You can see the patch in the patches\n> > list.\n> \n> I believe, this won't work under Linux. I'm not 100% sure about it but from\n> what I can remember Linux pass a copy of the original argv to the program\n> and changing it doesn't change the argv strings shown by ps. You must zap\n> the strings itself inside the page allocated for argv.\n> I would suggest the following code which works fine also under linux, even\n> with zero args.\n\n AFAIK the only \"portable\" way is to steal copy from well-known sendmail\nhacks...\n\nOleg.\n---- \n Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Mon, 8 Jun 1998 19:36:42 +0400 (MSK DST)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend now show status in 'ps'" }, { "msg_contents": "> I believe, this won't work under Linux. 
I'm not 100% sure about it but from\n> what I can remember Linux pass a copy of the original argv to the program\n> and changing it doesn't change the argv strings shown by ps. You must zap\n> the strings itself inside the page allocated for argv.\n> I would suggest the following code which works fine also under linux, even\n> with zero args.\n> \n> #ifdef linux\n> progname = argv[0];\n> /* Fill the argv buffer vith 0's, once during the initialization */\n> for (i=0; i<argc; argc++) {\n> memset(argv[i], 0, strlen(argv[i]));\n> }\n> #endif\n\nThis is OK. It will work.\n\n> \n> /* Build status info */\n> sprintf(status, \"%s ...\", ...);\n> #ifdef bsdi\n> argv[1] = status;\n> #endif\n> #ifdef linux\n> /* Print the original argv[0] + status info in the argv buffer */\n> sprintf(argv[0], \"%s %s\", progname, status);\n> #endif\n\nThis may not work. The problem is that there is no guarantee that there\nenough string space in argv[0] to hold the new string value. That is\nwhy sendmail actually re-allocates/moves the argv[] strings to make\nroom, but such code is very ugly.\n\nWe can perform some tricks to make argv[0] larger by re-exec'ing the\npostmaster, which we already do to make sure we have enough args, but\nlet's see what Linux people report.\n \n> I would also suggest using only lowercase messages if possible. They don't\n> hurt the eyes too much.\n\nYes, that would be nice, but I want to assign fixed string constants, so\nthey don't change, and currently I use the same strings that are\ndisplayed as part of psql:\n\n\ttest=> update test set x=2;\n\tUPDATE 2\n ^^^^^^\n\nDidn't seem worth making another string for every command type, and\nbecause it is a string constant, I can't lowercase it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 12:04:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] backend now show status in 'ps'" } ]
[ { "msg_contents": "Hello,\n\nI really need a Standard Deviation aggregate function so I will\ntry to write one.\n\nI know about the man pages for \"create aggregate\" and \"create\nfunction\". Is there something else I should look at?\nJust a few pointers could save me a few hours of hunting around.\nAll advice accepted.\n\nIt seems kind of hard to do with only two state functions unless\nI \"cheat\". I need to keep three values, Count, Sum, and Sum of\nSquares. I could use three static variables and have the final\nfunction ignore its input and use the static vars instead. This\nwill likely blow up if the new Standard Deviation aggregate is\nused twice in the same select.\n\nAny hints or advice??\n\nIf someone has this done already let me know.\n\nI may want do a \"median\" aggregate function too as I'll need that\nlater. This would require private storage and a sort.\n \nCould you cc me at both addresses below as I move around between\nthem\n\nThanks,\n\n-- \n--Chris Albertson\n\n [email protected]\n [email protected] Voice: 626-351-0089 X127\n Fax: 626-351-0699\n", "msg_date": "Thu, 04 Jun 1998 18:20:50 -0700", "msg_from": "Chris Albertson <[email protected]>", "msg_from_op": true, "msg_subject": "Standard Deviation function." }, { "msg_contents": "> I really need a Standard Deviation aggregate function...\n> \n> I know about the man pages for \"create aggregate\" and \"create\n> function\". Is there something else I should look at?\n> \n> It seems kind of hard to do with only two state functions unless\n> I \"cheat\". I need to keep three values, Count, Sum, and Sum of\n> Squares.\n> \n> Any hints or advice??\n\nI thought about this a long time ago and had an idea but never\ngot around to trying to implement it. 
I was going to have some\nfunctions that worked on a structure of two doubles to track\nthe sum and square instead of using only one simple type.\n\ndarrenk\n", "msg_date": "Thu, 4 Jun 1998 21:35:56 -0400", "msg_from": "\"Stupor Genius\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Standard Deviation function." }, { "msg_contents": "> \n> > I really need a Standard Deviation aggregate function...\n> > \n> > I know about the man pages for \"create aggregate\" and \"create\n> > function\". Is there something else I should look at?\n> > \n> > It seems kind of hard to do with only two state functions unless\n> > I \"cheat\". I need to keep three values, Count, Sum, and Sum of\n> > Squares.\n> > \n> > Any hints or advice??\n> \n> I thought about this a long time ago and had an idea but never\n> got around to trying to implement it. I was going to have some\n> functions that worked on a structure of two doubles to track\n> the sum and square instead of using only one simple type.\n\nI remember talking about this to someone, and the problem is that you\nneeded the average WHILE scanning through the table, which required two\npasses, which the aggregate system is not designed to do. I may be\nwrong on this, though.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 4 Jun 1998 21:55:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Standard Deviation function." }, { "msg_contents": "> > > I really need a Standard Deviation aggregate function...\n> > \n> > I thought about this a long time ago and had an idea but never\n> > got around to trying to implement it. 
I was going to have some\n> > functions that worked on a structure of two doubles to track\n> > the sum and square instead of using only one simple type.\n> \n> I remember talking about this to someone, and the problem is that you\n> needed the average WHILE scanning through the table, which required two\n> passes, which the aggregate system is not designed to do. I may be\n> wrong on this, though.\n\nI had asked you how to calculate this and the variance early last\nyear. One (I think the variance) was two-pass because of the need\nfor the average, but I thought the StdDev would work with the struct.\n\nBeen a while and I still haven't configured #(*&^ FreeBSD ppp yet.\n\ndarrenk\n", "msg_date": "Thu, 4 Jun 1998 23:22:09 -0400", "msg_from": "\"Stupor Genius\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Standard Deviation function." }, { "msg_contents": "> I had asked you how to calculate this and the variance early last\n> year. One (I think the variance) was two-pass because of the need\n> for the average, but I thought the StdDev would work with the struct.\n\nVariance is just square of std. dev, no?\n\n> \n> Been a while and I still haven't configured #(*&^ FreeBSD ppp yet.\n\nBummer.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 4 Jun 1998 23:24:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Standard Deviation function." }, { "msg_contents": "> \n> > > > I really need a Standard Deviation aggregate function...\n> > > \n> > > I thought about this a long time ago and had an idea but never\n> > > got around to trying to implement it. 
I was going to have some\n> > > functions that worked on a structure of two doubles to track\n> > > the sum and square instead of using only one simple type.\n> > \n> > I remember talking about this to someone, and the problem is that you\n> > needed the average WHILE scanning through the table, which required two\n> > passes, which the aggregate system is not designed to do. I may be\n> > wrong on this, though.\n> \n> I had asked you how to calculate this and the variance early last\n> year. One (I think the variance) was two-pass because of the need\n> for the average, but I thought the StdDev would work with the struct.\n> \n> Been a while and I still haven't configured #(*&^ FreeBSD ppp yet.\n\nThe Perl Module \"Statistics/Descriptive\" has on the fly variance calculation.\n\nsub add_data {\n my $self = shift; ##Myself\n my $oldmean;\n my ($min,$mindex,$max,$maxdex);\n\n ##Take care of appending to an existing data set\n $min = (defined ($self->{min}) ? $self->{min} : $_[0]);\n $max = (defined ($self->{max}) ? 
$self->{max} : $_[0]);\n $maxdex = $self->{maxdex} || 0;\n $mindex = $self->{mindex} || 0;\n\n ##Calculate new mean, pseudo-variance, min and max;\n foreach (@_) {\n $oldmean = $self->{mean};\n $self->{sum} += $_;\n $self->{count}++;\n if ($_ >= $max) {\n $max = $_;\n $maxdex = $self->{count}-1;\n }\n if ($_ <= $min) {\n $min = $_;\n $mindex = $self->{count}-1;\n }\n $self->{mean} += ($_ - $oldmean) / $self->{count};\n $self->{pseudo_variance} += ($_ - $oldmean) * ($_ - $self->{mean});\n }\n\n $self->{min} = $min;\n $self->{mindex} = $mindex;\n $self->{max} = $max;\n $self->{maxdex} = $maxdex;\n $self->{sample_range} = $self->{max} - $self->{min};\n if ($self->{count} > 1) {\n $self->{variance} = $self->{pseudo_variance} / ($self->{count} -1);\n $self->{standard_deviation} = sqrt( $self->{variance});\n }\n return 1;\n}\n\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n", "msg_date": "Fri, 5 Jun 1998 01:15:46 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Standard Deviation function." } ]
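The incremental update in the Perl `add_data` above can be summarized in a short sketch. The following is an illustrative Python transcription of that one-pass logic, not code from the thread; the class name and the sample data are invented for the example:

```python
import math

class RunningStats:
    """One-pass mean/variance accumulator, mirroring the Perl add_data
    above: it tracks count, the running mean, and the running sum of
    squared deviations (called pseudo_variance in the Perl module)."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.pseudo_variance = 0.0

    def add(self, x):
        self.count += 1
        old_mean = self.mean
        # incremental mean update: mean += (x - old_mean) / count
        self.mean += (x - old_mean) / self.count
        # incremental sum-of-squared-deviations update
        self.pseudo_variance += (x - old_mean) * (x - self.mean)

    def sample_variance(self):
        # divide by count - 1, as the Perl module does
        return self.pseudo_variance / (self.count - 1)

    def sample_stddev(self):
        return math.sqrt(self.sample_variance())

stats = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.add(x)
```

In the terms of this thread, `count`, `mean`, and `pseudo_variance` are exactly the kind of multi-valued aggregate state Chris asks about: the state transition function would update all three on each row, and the final function would do the divide and square root.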
[ { "msg_contents": "> On Wed, 3 June 1998, at 20:29:52, David Gould wrote:\n> \n> > Ok, can I laugh now?\n> > \n> > Seriously, if we are going to have a separate backend to do the table access\n> > (and I agree that this is both neccessary and reasonable), why not have it\n> > be a plain ordinary backend like all the others and just connect to it from\n> > the client? Why get the postmaster involved at all? \n> \n> I'm confused, I guess.\n> > \n> > First, modifying the postmaster to add services has a couple of problems:\n> \n> I wasn't quite suggesting this, I think we should just modify the\n> postmaster to store the information. As you say below, doing queries\n> is probably bad, shared memory seems like the way to go. I'll assume\n> we'll use a different block of shared memory than the one currently\n> used.\n\nOh, ok. Some suggestions have been made the the postmaster would open a\nconnection to it's own backend to do queries. I was responding to this.\nI agree that we should just store the information in shared memory.\n \n> do you know how shared memory is currently used? I'm fairly clueless\n> on this aspect.\n\nThe shared memory stores the process table, the lock table, the buffer cache,\nand the shared invalidate list, and a couple of other minor things that all\nthe backends need to know about.\n\nStrangely, the shared memory does not share a copy of the system catalog\ncache. This seems like a real misfeature as the catalog data is very useful\nto all the backends.\n\nThe shared memory is managed by its own allocator. It is not hard to carve\nout a block for a new use, the only real trick is to make sure you account\nfor it when the system starts up so it can get the size right as the shared\nmemory is not extendable.\n \n> > - we have to modify the postmaster. 
This adds code bloat and bugs etc, and\n> > since the same binary is also the backend, it means the backends carry\n> > around extra baggage that only is used in the postmaster.\n> \n> the reverse could also be said -- why does the postmaster need the\n> bloat of a backend?\n\nWell, right now the postmaster and the backend are the same binary. This\nhas the advantage of keeping them in sync as we make changes, and now with\nBruces patch we can avoid an exec() on backend startup. Illustra has a\nseparate backend and postmaster binary. This works too, but they share a\nlot of code and sometimes a change in something you thought was only in the\nbackend will break the postmaster.\n\n> > - more importantly, if the postmaster is busy processing a big select from\n> > a pseudo table or log (well, forwarding results etc), then it cannot also\n> > respond to a new connection request. Unless we multithread the postmaster.\n> good point. I think storing this information in shared memory and\n> accessing it from a view is good -- how do other dbs do this sort of\n> thing?\n\nWell, it is not really a view, although a view is a good analogy. The term\nof art is pseudo-table. That is, a table you generate on the fly. This concept\nis very useful as you can use it to read text files or rows from some other\ndatabase (think gateways) etc. It is also pretty common. Sybase and Informix\nboth support system specific pseudo-tables. Illustra supports extendable\naccess methods where you can plug a set of functions (opentable, getnext,\nupdate, delete, insert etc) into the server and they create a table interface\nto whatever datasource you want. \n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! 
Fetch me a child of five.\n", "msg_date": "Thu, 4 Jun 1998 22:00:19 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "> Oh, ok. Some suggestions have been made the the postmaster would open a\n> connection to it's own backend to do queries. I was responding to this.\n> I agree that we should just store the information in shared memory.\n> \n> > do you know how shared memory is currently used? I'm fairly clueless\n> > on this aspect.\n> \n> The shared memory stores the process table, the lock table, the buffer cache,\n> and the shared invalidate list, and a couple of other minor things that all\n> the backends need to know about.\n> \n> Strangely, the shared memory does not share a copy of the system catalog\n> cache. This seems like a real misfeature as the catalog data is very useful\n> to all the backends.\n\nOn TODO list. Vadim wants to do this, perhaps for 6.4(not sure):\n\n* Shared catalog cache, reduce lseek()'s by caching table size in shared area\n\n> \n> The shared memory is managed by its own allocator. It is not hard to carve\n> out a block for a new use, the only real trick is to make sure you account\n> for it when the system starts up so it can get the size right as the shared\n> memory is not extendable.\n> \n> > > - we have to modify the postmaster. This adds code bloat and bugs etc, and\n> > > since the same binary is also the backend, it means the backends carry\n> > > around extra baggage that only is used in the postmaster.\n> > \n> > the reverse could also be said -- why does the postmaster need the\n> > bloat of a backend?\n> \n> Well, right now the postmaster and the backend are the same binary. This\n> has the advantage of keeping them in sync as we make changes, and now with\n> Bruces patch we can avoid an exec() on backend startup. Illustra has a\n> separate backend and postmaster binary. 
This works too, but they share a\n> lot of code and sometimes a change in something you thought was only in the\n> backend will break the postmaster.\n\nThen a good reason not to split them up.\n\n> Well, it is not really a view, although a view is a good analogy. The term\n> of art is pseudo-table. That is, a table you generate on the fly. This concept\n> is very useful as you can use it to read text files or rows from some other\n> database (think gateways) etc. It is also pretty common. Sybase and Informix\n> both support system specific pseudo-tables. Illustra supports extendable\n> access methods where you can plug a set of functions (opentable, getnext,\n> update, delete, insert etc) into the server and they create a table interface\n> to whatever datasource you want. \n\nYes, this would be nice, but don't we have more important items to the\nTODO list to address?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 5 Jun 1998 16:56:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "On Thu, 4 Jun 1998, David Gould wrote:\n\n> Oh, ok. Some suggestions have been made the the postmaster would open a\n> connection to it's own backend to do queries. I was responding to this.\n> I agree that we should just store the information in shared memory.\n\n\tHow does one get a history for long term monitoring and statistics\nby storing in shared memory?\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 6 Jun 1998 00:03:18 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "> On Thu, 4 Jun 1998, David Gould wrote:\n> \n> > Oh, ok. Some suggestions have been made the the postmaster would open a\n> > connection to it's own backend to do queries. I was responding to this.\n> > I agree that we should just store the information in shared memory.\n> \n> \tHow does one get a history for long term monitoring and statistics\n> by storing in shared memory?\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\nMy thought was a circular event buffer which could provide short term\nhistory. If someone wanted to store long term history (most sites probably\nwon't, but I agree it can be useful), they would have an application which\nqueried the short term history and saved it to what ever long term history\nthey wanted. Eg:\n\nFOREVER {\n sleep(1);\n insert into long_term_hist values\n (select * from pg_eventlog where event_num > highest_seen_so_far);\n}\n\nObviously some details need to be worked out to make sure no history is\never lost (if that is important). But the basic mechanism is general and \nuseful for many purposes.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n", "msg_date": "Fri, 5 Jun 1998 22:14:20 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Strangely, the shared memory does not share a copy of the system catalog\n> > cache. 
This seems like a real misfeature as the catalog data is very useful\n> > to all the backends.\n> \n> On TODO list. Vadim wants to do this, perhaps for 6.4(not sure):\n> \n> * Shared catalog cache, reduce lseek()'s by caching table size in shared area\n\nYes, for 6.4...\n\nVadim\n", "msg_date": "Sat, 06 Jun 1998 19:43:19 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > > Strangely, the shared memory does not share a copy of the system catalog\n> > > cache. This seems like a real misfeature as the catalog data is very useful\n> > > to all the backends.\n> > \n> > On TODO list. Vadim wants to do this, perhaps for 6.4(not sure):\n> > \n> > * Shared catalog cache, reduce lseek()'s by caching table size in shared area\n> \n> Yes, for 6.4...\n\nCan you share any other 6.4 plans with us?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 6 Jun 1998 11:28:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "> \n> On Thu, 4 Jun 1998, David Gould wrote:\n> \n> > Oh, ok. Some suggestions have been made the the postmaster would open a\n> > connection to it's own backend to do queries. I was responding to this.\n> > I agree that we should just store the information in shared memory.\n> \n> \tHow does one get a history for long term monitoring and statistics\n> by storing in shared memory?\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\nWhy not simply append history lines to a normal log file ? 
In this way you\ndon't have the overhead for accessing tables and can do real-time processing\nof the data with a simple tail -f on the file.\nI use this trick to monitor the log file written by 30 backends and it works\nfine for me.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto e-mail: [email protected] |\n| Via Marconi, 141 phone: ++39-461-534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Mon, 8 Jun 1998 17:26:52 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "> \n> > \n> > On Thu, 4 Jun 1998, David Gould wrote:\n> > \n> > > Oh, ok. Some suggestions have been made the the postmaster would open a\n> > > connection to it's own backend to do queries. I was responding to this.\n> > > I agree that we should just store the information in shared memory.\n> > \n> > \tHow does one get a history for long term monitoring and statistics\n> > by storing in shared memory?\n> > \n> > Marc G. Fournier \n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> Why not simply append history lines to a normal log file ? In this way you\n> don't have the overhead for accessing tables and can do real-time processing\n> of the data with a simple tail -f on the file.\n> I use this trick to monitor the log file written by 30 backends and it works\n> fine for me.\n\nI agree. We have more important items to address.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 12:05:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] keeping track of connections" }, { "msg_contents": "> > On Thu, 4 Jun 1998, David Gould wrote:\n> > \n> > > Oh, ok. Some suggestions have been made the the postmaster would open a\n> > > connection to it's own backend to do queries. I was responding to this.\n> > > I agree that we should just store the information in shared memory.\n> > \n> > \tHow does one get a history for long term monitoring and statistics\n> > by storing in shared memory?\n> > \n> > Marc G. Fournier \n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> Why not simply append history lines to a normal log file ? In this way you\n> don't have the overhead for accessing tables and can do real-time processing\n> of the data with a simple tail -f on the file.\n> I use this trick to monitor the log file written by 30 backends and it works\n> fine for me.\n> \n> -- \n> Massimo Dal Zotto\n\nI was going to suggest this too, but didn't want to be too much of a\nspoilsport.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n", "msg_date": "Mon, 8 Jun 1998 15:22:14 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] keeping track of connections" } ]
[ { "msg_contents": "\n>> I had asked you how to calculate this and the variance early last\n>> year. One (I think the variance) was two-pass because of the need\n>> for the average, but I thought the StdDev would work with the struct.\n\n>Variance is just square of std. dev, no?\n\nNo ! Stdev is divided by count, Variance by (count - 1)\n\nIt was some time ago, but I thing there is a running function that \ncan be calculated with one pass. I might be able to dig it up somewhere.\nI had it in an excel sheet for learning purposes.\n\nAndreas\n\n\n", "msg_date": "Fri, 5 Jun 1998 10:07:40 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Standard Deviation function." } ]
[ { "msg_contents": "David Gould wrote:\n>The Perl Module \"Statistics/Descriptive\" has on the fly variance calculation.\n>\n>sub add_data {\n> my $self = shift; ##Myself\n> my $oldmean;\n> my ($min,$mindex,$max,$maxdex);\n>\n> ##Take care of appending to an existing data set\n> $min = (defined ($self->{min}) ? $self->{min} : $_[0]);\n> $max = (defined ($self->{max}) ? $self->{max} : $_[0]);\n> $maxdex = $self->{maxdex} || 0;\n> $mindex = $self->{mindex} || 0;\n>\n> ##Calculate new mean, pseudo-variance, min and max;\n> foreach (@_) {\n> $oldmean = $self->{mean};\n> $self->{sum} += $_;\n> $self->{count}++;\n> if ($_ >= $max) {\n> $max = $_;\n> $maxdex = $self->{count}-1;\n> }\n> if ($_ <= $min) {\n> $min = $_;\n> $mindex = $self->{count}-1;\n> }\n> $self->{mean} += ($_ - $oldmean) / $self->{count};\n> $self->{pseudo_variance} += ($_ - $oldmean) * ($_ - $self->{mean});\n> }\n>\n> $self->{min} = $min;\n> $self->{mindex} = $mindex;\n> $self->{max} = $max;\n> $self->{maxdex} = $maxdex;\n> $self->{sample_range} = $self->{max} - $self->{min};\n> if ($self->{count} > 1) {\n> $self->{variance} = $self->{pseudo_variance} / ($self->{count} -1);\n> $self->{standard_deviation} = sqrt( $self->{variance});\n\nWow, this is it. But as I said, the above line is wrong (By the way: this is a very common mistake).\nIt should read:\n\t$self->{standard_deviation} = sqrt( $self->{pseudo_variance} / $self->{count} )\nNote: The - 1 is missing\n\n> }\n> return 1;\n>}\n\n\n", "msg_date": "Fri, 5 Jun 1998 10:43:24 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Standard Deviation function." }, { "msg_contents": "Andreas Zeugswetter <[email protected]> writes:\n> Wow, this is it. 
But as I said, the above line is wrong (By the way:\n> this is a very common mistake).\n> It should read:\n> \t$self->{standard_deviation} = sqrt( $self->{pseudo_variance} / $self->{count} )\n> Note: The - 1 is missing\n\nThe formula with N-1 in the divisor is correct for the \"sample standard\ndeviation\". That is what you use when your N data points represent a\nsample from a larger population, and you want to estimate the standard\ndeviation of the whole population.\n\nIf your N data points in fact are the *whole* population of interest,\nthen you calculate the \"population standard deviation\" which has just N\nin the divisor. So both versions of the formula are correct depending\non the situation, and you really ought to provide both.\n\n(To justify the difference intuitively: if you have exactly one data\npoint, and it is the *whole* population, then the mean equals the\ndata value and the standard deviation is zero. That is what you get\nwith N in the divisor. But if your one data point is a sample from\na larger population, you cannot estimate the population's standard\ndeviation; you need more data. The N-1 equation gives 0/0 in this\ncase, correctly signifying that the value is indeterminate.)\n\nI think the Perl code given earlier in the thread pretty much sucks\nfrom a numerical accuracy point of view. The running mean calculation\nsuffers from accumulation of errors, and that propagates into the\npseudo-variance in a big way. It's particularly bad if the data is\ntightly clustered about the mean; the code ends up doing lots of\nsubtractions of nearly equal values.\n\nThe accepted way to do sample standard deviation in one pass is this:\n\nSTDDEV = SQRT( (N*SIGMA(Xi^2) - SIGMA(Xi)^2) / (N*(N-1)) )\n\nwhere N is the number of data points and SIGMA(Xi) means the sum\nof the data values Xi. You keep running sums of Xi and Xi^2 as\nyou pass over the data, then you apply the above equation once\nat the end. 
(For population standard deviation, you use N^2 as\nthe denominator. For variance, you just leave off the SQRT().)\n\nAll that you need to implement this is room to keep two running\nsums instead of one. I haven't looked at pgsql's aggregate functions,\nbut I'd hope that the working state can be a struct not just a\nsingle number.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Jun 1998 11:24:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Standard Deviation function. " } ]
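The two-running-sums recipe described in the message above can be sketched as follows. This is an illustrative Python version, not code from the thread; the function name and arguments are invented, and in the backend the three accumulators would live in the aggregate's state struct:

```python
import math

def one_pass_stddev(values, sample=True):
    """Keep N, SIGMA(Xi), and SIGMA(Xi^2) in a single pass, then apply
    STDDEV = SQRT((N*SIGMA(Xi^2) - SIGMA(Xi)^2) / (N*(N-1)))
    for the sample form; the population form divides by N^2 instead."""
    n = 0
    sum_x = 0.0   # SIGMA(Xi)
    sum_x2 = 0.0  # SIGMA(Xi^2)
    for x in values:
        n += 1
        sum_x += x
        sum_x2 += x * x
    denom = n * (n - 1) if sample else n * n
    return math.sqrt((n * sum_x2 - sum_x * sum_x) / denom)
```

Leaving off the square root gives the variance in either form.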
[ { "msg_contents": "Hi,\n\nFirst of all I would like to thank you for your work on the Statistics Module.\nUnfortunately a lot of books differ in their formula for variance and stdev.\nIn Europe the below corrected definition where stdev is not simply the sqrt of variance\nseems to be more popular.\nFor large populations (>400) the calculation will be almost the same,\nbut for small populations (like 5) the below calculation will be different.\n \n[Hackers] please forget my last mail to this subject. It was wrong.\nTanx\nAndreas Zeugswetter\n\nDavid Gould wrote:\n>The Perl Module \"Statistics/Descriptive\" has on the fly variance calculation.\n>\n>sub add_data {\n> my $self = shift; ##Myself\n> my $oldmean;\n> my ($min,$mindex,$max,$maxdex);\n>\n> ##Take care of appending to an existing data set\n> $min = (defined ($self->{min}) ? $self->{min} : $_[0]);\n> $max = (defined ($self->{max}) ? $self->{max} : $_[0]);\n> $maxdex = $self->{maxdex} || 0;\n> $mindex = $self->{mindex} || 0;\n>\n> ##Calculate new mean, pseudo-variance, min and max;\n> foreach (@_) {\n> $oldmean = $self->{mean};\n> $self->{sum} += $_;\n> $self->{count}++;\n> if ($_ >= $max) {\n> $max = $_;\n> $maxdex = $self->{count}-1;\n> }\n> if ($_ <= $min) {\n> $min = $_;\n> $mindex = $self->{count}-1;\n> }\n> $self->{mean} += ($_ - $oldmean) / $self->{count};\n> $self->{pseudo_variance} += ($_ - $oldmean) * ($_ - $self->{mean});\n> }\n>\n> $self->{min} = $min;\n> $self->{mindex} = $mindex;\n> $self->{max} = $max;\n> $self->{maxdex} = $maxdex;\n> $self->{sample_range} = $self->{max} - $self->{min};\n> if ($self->{count} > 1) {\n> $self->{variance} = $self->{pseudo_variance} / ($self->{count} -1);\n> $self->{standard_deviation} = sqrt( $self->{variance});\n\nMost books state:\n\t$self->{variance} = $self->{pseudo_variance} / $self->{count};\n\t$self->{standard_deviation} = sqrt( $self->{pseudo_variance} / ( $self->{count} - 1 ))\n\n> }\n> return 1;\n>}\n\n\n\n", "msg_date": "Fri, 5 Jun 1998 12:05:18 +0200", 
"msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "Perl Standard Deviation function is wrong !" }, { "msg_contents": " >Variance is just square of std. dev, no?\n\n No ! Stdev is divided by count, Variance by (count - 1)\n\nI think the difference really has to do with what you are calculating.\nIf you want the std. dev./var. of the data THEMSELVES, divide by the\ncount. If you want an estimate about the properties of the POPULATION\nfrom which the data were sampled, divide by count-1. People have\nneeds for both in different circumstances.\n\nPerhaps there needs to be two versions, or a function argument, to\ndistinguish the two uses, both of which are legitimate.\n\nCheers,\nBrook\n", "msg_date": "Fri, 5 Jun 1998 09:16:27 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Perl Standard Deviation function is wrong !" }, { "msg_contents": "Brook Milligan wrote:\n> \n> >Variance is just square of std. dev, no?\n> \n> No ! Stdev is divided by count, Variance by (count - 1)\n> \n> I think the difference really has to do with what you are calculating.\n> If you want the std. dev./var. of the data THEMSELVES, divide by the\n> count. If you want an estimate about the properties of the POPULATION\n> from which the data were sampled, divide by count-1. People have\n> needs for both in different circumstances.\n>\n> Perhaps there needs to be two versions, or a function argument, to\n> distinguish the two uses, both of which are legitimate.\n\nGentlemen,\nFirst let me apologize if this conversation has been taking place in\nthe Perl newsgroups. You've caught me at a time when I'm sans news\nreader. (I could use Netscape, but .... <shudder> and I'd be ignored\nby most of the guru's in the group).\n\nBack to the topic at hand. 
The module states its references for the\nstatistical formulae as well as its methods of calculation so you\nshould always know what you're getting.\n\nI haven't done intensive statistics for a long time. I inherited the\nmodule from Jason Kastner to add more methods to it and to see if I\ncould make some changes to the interface. Since then, I've released\nseveral bug fixes caused by those changes. If the public demands \nmore statistics, then I'll make it so.\n\nI'm a little leary of making changes without having some hard\nreferences. If any of you would like to send me some (I'll be tracking\nthem down, too!) I'd appreciate it.\n\nOnce I have that warm fuzzy that I'm not just inventing mathematics,\nthen I'll change the methods for standard variation and variance to\naccept a single argument that causes them to give the DATA statistics\ninstead of the population statistics. I can't see overhauling the\ndefault behavior and forcing people to rewrite scripts already in place.\nIt made them angry enough when I changed the OO interface...\n\nI look forward to hearing from you, or having results to share with\nyou, soon!\n\nColin Kuskie\n\np.s. I recently changed jobs. My new email address is:\[email protected] A new release will give me the excuse to change\nthe modules documentation to reflect that.\n", "msg_date": "Fri, 05 Jun 1998 18:41:07 -0700", "msg_from": "Colin Kuskie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Perl Standard Deviation function is wrong !" } ]
[ { "msg_contents": "\nFor on-the-fly standard deviation calculations with an eye towards\nnumerical accuracy, check out:\n\nChan, T.F, and Lewis, J.G., \"Computing Standard Deviations: Accuracy\",\nCommunications of the ACM, Vol 22, No. 9, September 1979, p. 526\n\nand\n\nWest, D.H.D, \"Updating Mean and Variance Estimates: An Improved Method\",\nCommunications of the ACM, Vol 22, No. 9, September 1979, p. 532\n\nThe articles were writen when single precision floating point\nwas the limiting factor, but the principles are just as relevant.\n\nDiab\n\n-------------\nDiab Jerius Harvard-Smithsonian Center for Astrophysics\n 60 Garden St, MS 70, Cambridge MA 02138 USA\[email protected] vox: 617 496 7575 fax: 617 495 7356\n", "msg_date": "Fri, 5 Jun 1998 12:38:41 -0400", "msg_from": "[email protected] (Diab Jerius)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Standard Deviation function. " } ]
[ { "msg_contents": "On Sat, 6 Jun 1998, ADM. Diego Cueva wrote:\n\n> \n> Hi Everybody.\n> \n> I have problems with my CGI programs in C.\n> When i execute the CGI, the message \"Internal Server Error\" is displayed \n> in the browser.\n> This error only ocurr when i compile the programs using -lpq\n> \n> Exist documentation about this ?\n\nMake sure the web user (whatever user the web server runs as) has access\nto the database and table. It needs to at least be able to select.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"I'm just not a fan of promoting stupidity!\n We have elected officials for that job!\" -- Rock\n==========================================================================\n\n\n\n", "msg_date": "Fri, 5 Jun 1998 12:56:53 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CGI with lilbpq" }, { "msg_contents": "\nHi Everybody.\n\nI have problems with my CGI programs in C.\nWhen i execute the CGI, the message \"Internal Server Error\" is displayed \nin the browser.\nThis error only ocurr when i compile the programs using -lpq\n\nExist documentation about this ?\n\nPLEASE:\nH H HHHH H HHHH\nH H H H H H\nHHHH HH H HHHH\nH H H H H\nH H HHHH HHHH H\n\nI have:\nLinux 2.0.30\nPostgreSQL 6.1.1\n\[email protected]\n\n", "msg_date": "Sat, 6 Jun 1998 12:28:39 -0500 (EST)", "msg_from": "\"ADM. Diego Cueva\" <[email protected]>", "msg_from_op": false, "msg_subject": "CGI with lilbpq" }, { "msg_contents": "On Sat, 6 Jun 1998, ADM. 
Diego Cueva wrote:\n\n> \n> Hi Everybody.\n> \n> I have problems with my CGI programs in C.\n> When i execute the CGI, the message \"Internal Server Error\" is displayed \n> in the browser.\n> This error only ocurr when i compile the programs using -lpq\n> \n> Exist documentation about this ?\n> \n> PLEASE:\n> H H HHHH H HHHH\n> H H H H H H\n> HHHH HH H HHHH\n> H H H H H\n> H H HHHH HHHH H\n> \n> I have:\n> Linux 2.0.30\n> PostgreSQL 6.1.1\n> \n> [email protected]\n> \n> \n\nWhat www-server do you use ?\nWorking with apache I have to create user 'nobody'.\n\n", "msg_date": "Mon, 8 Jun 1998 16:03:09 +0300 (EEST)", "msg_from": "Alexzander Blashko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CGI with lilbpq" }, { "msg_contents": "\nThe www-server (Apache) can't load the library libpq.so, this is the\nproblem.\n\nThanks for for your help.\n\n", "msg_date": "Tue, 9 Jun 1998 08:38:16 -0500 (EST)", "msg_from": "\"ADM. Diego Cueva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CGI with lilbpq" } ]
[ { "msg_contents": "\nMorning all...\n\n\tJust curious, but what *is* planned for v6.4? We have a TODO\nlist, but I imagine there are things on that TODO list that ppl are\nplanning on for v6.4? Can we add a \"planned for v6.4\" to various items,\nsuch that ppl have an idea of what they could be expecting? Even a\ndisclaimer at the top that states that altho \"the following items are\nplanned for v6.4, time might not permit completion\"?\n\n\tWith that in mind, is anyone working on 'row level locking'? I\nwould think that, as far as importance is concerned, that would be one of\nthe most important features we are missing...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 6 Jun 1998 13:01:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "v6.4 - What is planned...?" }, { "msg_contents": "> \n> \n> Morning all...\n> \n> \tJust curious, but what *is* planned for v6.4? We have a TODO\n> list, but I imagine there are things on that TODO list that ppl are\n> planning on for v6.4? Can we add a \"planned for v6.4\" to various items,\n> such that ppl have an idea of what they could be expecting? Even a\n> disclaimer at the top that states that altho \"the following items are\n> planned for v6.4, time might not permit completion\"?\n> \n> \tWith that in mind, is anyone working on 'row level locking'? I\n> would think that, as far as importance is concerned, that would be one of\n> the most important features we are missing...\n\nWe do have in the TODO list:\n\n\tA dash(-) marks changes to be in the next release.\n\nand appears to be fairly accurate. Haven't hear much about people\nclaiming items for 6.4 yet.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Sat, 6 Jun 1998 17:32:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4 - What is planned...?" }, { "msg_contents": "On Sat, 6 Jun 1998, Bruce Momjian wrote:\n> > Morning all...\n> > \n> > \tJust curious, but what *is* planned for v6.4? We have a TODO\n> > list, but I imagine there are things on that TODO list that ppl are\n> > planning on for v6.4? Can we add a \"planned for v6.4\" to various items,\n> > such that ppl have an idea of what they could be expecting? Even a\n> > disclaimer at the top that states that altho \"the following items are\n> > planned for v6.4, time might not permit completion\"?\n> > \n> > \tWith that in mind, is anyone working on 'row level locking'? I\n> > would think that, as far as importance is concerned, that would be one of\n> > the most important features we are missing...\n\nThe bit's I'm working on for 6.4 (mostly Java/JDBC) are listed at\nhttp://www.retep.org.uk/postgres\n\n> \n> We do have in the TODO list:\n> \n> \tA dash(-) marks changes to be in the next release.\n> \n> and appears to be fairly accurate. Haven't hear much about people\n> claiming items for 6.4 yet.\n\nI should be down already for one of the large object bits\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Sun, 7 Jun 1998 17:41:39 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4 - What is planned...?" }, { "msg_contents": "> > \n> > \tA dash(-) marks changes to be in the next release.\n> > \n> > and appears to be fairly accurate. 
Haven't hear much about people\n> > claiming items for 6.4 yet.\n> \n> I should be down already for one of the large object bits\n\nYou are.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 7 Jun 1998 13:10:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4 - What is planned...?" }, { "msg_contents": "Well, my plans for 6.4:\n\n1. Btree: use TID as (last) part of index key; prepare btree \n for low-level locking (it's now possible to lose root page).\n2. Vacuum: speed up index cleaning; release pg_class lock after \n updating statistics for a table.\n3. Buffer manager: error handling broken; should flush only \n buffers changed by backend itself.\n4. Implement shared catalog cache; get rid of invalidation code.\n5. Subselects: in target list; in FROM.\n6. Transaction manager: get rid of pg_variable; do not prefetch\n XIDs; nested transactions; savepoints.\n\nVadim\n", "msg_date": "Mon, 08 Jun 1998 14:37:01 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4 - What is planned...?" }, { "msg_contents": "> \n> Well, my plans for 6.4:\n> \n> 1. Btree: use TID as (last) part of index key; prepare btree \n> for low-level locking (it's now possible to lose root page).\n> 2. Vacuum: speed up index cleaning; release pg_class lock after \n> updating statistics for a table.\n> 3. Buffer manager: error handling broken; should flush only \n> buffers changed by backend itself.\n> 4. Implement shared catalog cache; get rid of invalidation code.\n> 5. Subselects: in target list; in FROM.\n> 6. 
Transaction manager: get rid of pg_variable; do not prefetch\n> XIDs; nested transactions; savepoints.\n\nThat's quite a list.\n\nVadim, I hate to ask, but how about the buffering of pg_log writes and\nthe ability to do sync() every 30 seconds then flush pg_log, so we can\nhave crash reliability without doing fsync() on every transaction.\n\nWe discussed this months ago, and I am not sure if you were thinking of\ndoing this for 6.4. I can send the old posts if that would help. It\nwould certainly increase our speed vs. fsync().\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 11:29:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4 - What is planned...?" }, { "msg_contents": "I noticed that the way Lotus Notes updates their database to a new\nversion is through something similar to our Vacuum. In other words, if\nyou upgrade the server the databases are still stored in the old format.\nTo get the new format you would perform a Vacuum and this would rewrite\nthe database into the new format.\n\nIs this a possible solution to simplifying the steps in the upgrade\nprocess?\n\nCheck out the article about some of the new database features planned for\nNotes 5.\nhttp://www.notes.net/today.nsf/b1d67fedee86c741852563cc005019c5/9489b036757596c58525660900627762?OpenDocument\n\n\n\nOliver\n\n\n", "msg_date": "Mon, 08 Jun 1998 13:03:53 -0400", "msg_from": "mark metzger <[email protected]>", "msg_from_op": false, "msg_subject": "[HACKERS] Upgrade improvements." }, { "msg_contents": "On Sat, 6 Jun 1998, Bruce Momjian wrote:\n\n> > \n> > \n> > Morning all...\n> > \n> > \tJust curious, but what *is* planned for v6.4? We have a TODO\n> > list, but I imagine there are things on that TODO list that ppl are\n> > planning on for v6.4? 
Can we add a \"planned for v6.4\" to various items,\n> > such that ppl have an idea of what they could be expecting? Even a\n> > disclaimer at the top that states that altho \"the following items are\n> > planned for v6.4, time might not permit completion\"?\n> > \n> > \tWith that in mind, is anyone working on 'row level locking'? I\n> > would think that, as far as importance is concerned, that would be one of\n> > the most important features we are missing...\n> \n> We do have in the TODO list:\n> \n> \tA dash(-) marks changes to be in the next release.\n> \nDoes it means that items begining with a dash instead of an asterisk\nwill be in the next release ?\nI can't see any item begining with dash on TODO list v6.3.2 !\n \n Jose'\n\n", "msg_date": "Tue, 9 Jun 1998 10:21:07 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4 - What is planned...?" }, { "msg_contents": "\n\nBruce Momjian wrote:\n\n> >\n> >\n> > Morning all...\n> >\n> > Just curious, but what *is* planned for v6.4? We have a TODO\n> > list, but I imagine there are things on that TODO list that ppl are\n> > planning on for v6.4? Can we add a \"planned for v6.4\" to various items,\n> > such that ppl have an idea of what they could be expecting? Even a\n> > disclaimer at the top that states that altho \"the following items are\n> > planned for v6.4, time might not permit completion\"?\n> >\n> > With that in mind, is anyone working on 'row level locking'? I\n> > would think that, as far as importance is concerned, that would be one of\n> > the most important features we are missing...\n>\n> We do have in the TODO list:\n>\n> A dash(-) marks changes to be in the next release.\n>\n> and appears to be fairly accurate. 
Haven't hear much about people\n> claiming items for 6.4 yet.\n>\n\nBruce,\nItem \"Remove restriction that ORDER BY field must be in SELECT list\", in the\nTODO list, has been completed.\n\nStephan or Anyone,\nWhat is the status of the HAVING clause? I noticed that it almost made the\n6.3.2 cut, but I haven't heard any thing for a while. I would really like to\nsee this feature implemented. It is important to my user community.\n\nEveryone especially Vadim,\nI agree with Marc. Row locking is huge. In my user community, it is\nunacceptable to wait for up to 30 minutes (or even one minute) for a report to\nfinish so that a users can commit an invoice or commit a change to a customer\nattribute. I can deal with it for now because my databases are batch loaded\nfor reporting purposes only. However, I plan to go forward with some pretty\nimportant projects that assume that record/page locking will exist within the\nnext 12 month or so. Am I being too presumptuous?\n\n\n", "msg_date": "Tue, 09 Jun 1998 09:44:12 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4 - What is planned...?" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > 6. Transaction manager: get rid of pg_variable; do not prefetch\n> > XIDs; nested transactions; savepoints.\n> \n> That's quite a list.\n> \n> Vadim, I hate to ask, but how about the buffering of pg_log writes and\n> the ability to do sync() every 30 seconds then flush pg_log, so we can\n> have crash reliability without doing fsync() on every transaction.\n> \n> We discussed this months ago, and I am not sure if you were thinking of\n> doing this for 6.4. I can send the old posts if that would help. It\n> would certainly increase our speed vs. fsync().\n\nI never forgot about this :)\nOk, but let's wait ~ Aug 1st: I'm not sure that I'll have\ntime for 6. and delayed fsync implemetation depends on\ndesign of transaction manager...\n\nBTW, I have another item:\n\n7. 
Re-use transaction XID (no commit --> no fsync) of read only\n transactions (SELECTs could be left un-commited!).\n\nAnd more about performance of sequential scans:\nas you know HeapTupleSatisfies can perfome scan key test and\nso bypass expensive HeapTupleSatisfiesVisibility test for\nunqualified tuples ... but this ability is never used by \nSeqScan!!! ALL visible tuples are returned to top level\nExecScan and qualified by ExecQual - this is very very bad.\nSeqScan should work like IndexScan: put quals from WHERE into\nScanKey-s for low level heap scan functions (it's now\npossible for ANDs but could be extended for ORs too)...\n\nAnother issue - handling of functions with constant args \nin queries - for query\n\nselect * from T where A = upper ('bbb')\n\nfunction upper ('bbb') will be executed for each tuple in T!\nMore of that - if there is index on T(A) then this index will\nnot be used for this query!\nObviously, upper ('bbb') should be executed (by Executor, not\nparser/planner) once: new Param type (PARAM_EXEC) implemented \nfor subselects could help here too...\n\nVadim\n", "msg_date": "Tue, 09 Jun 1998 21:54:12 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4 - What is planned...?" }, { "msg_contents": "> \n> On Sat, 6 Jun 1998, Bruce Momjian wrote:\n> \n> > > \n> > > \n> > > Morning all...\n> > > \n> > > \tJust curious, but what *is* planned for v6.4? We have a TODO\n> > > list, but I imagine there are things on that TODO list that ppl are\n> > > planning on for v6.4? Can we add a \"planned for v6.4\" to various items,\n> > > such that ppl have an idea of what they could be expecting? Even a\n> > > disclaimer at the top that states that altho \"the following items are\n> > > planned for v6.4, time might not permit completion\"?\n> > > \n> > > \tWith that in mind, is anyone working on 'row level locking'? 
I\n> > > would think that, as far as importance is concerned, that would be one of\n> > > the most important features we are missing...\n> > \n> > We do have in the TODO list:\n> > \n> > \tA dash(-) marks changes to be in the next release.\n> > \n> Does it means that items begining with a dash instead of an asterisk\n> will be in the next release ?\n> I can't see any item begining with dash on TODO list v6.3.2 !\n\nOn the web site. 6.3.2 TODO doesn't have dashes because they were\nremoved prior to the release.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 9 Jun 1998 12:42:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4 - What is planned...?" }, { "msg_contents": "> > We do have in the TODO list:\n> >\n> > A dash(-) marks changes to be in the next release.\n> >\n> > and appears to be fairly accurate. Haven't hear much about people\n> > claiming items for 6.4 yet.\n> >\n> \n> Bruce,\n> Item \"Remove restriction that ORDER BY field must be in SELECT list\", in the\n> TODO list, has been completed.\n\nDash added to TODO. Gee, I could swear I marked that as complete\nalready. Strange.\n\n> \n> Stephan or Anyone,\n> What is the status of the HAVING clause? I noticed that it almost made the\n> 6.3.2 cut, but I haven't heard any thing for a while. I would really like to\n> see this feature implemented. It is important to my user community.\n\nIt works, but has some bugs, so we dis-abled it in gram.y until it was\nworking perfectly. I can forward the bug reports if you wish. Stephan\nwas working on it, but I haven't heard anything from him in months. You\nare welcome to fix it.\n\n> \n> Everyone especially Vadim,\n> I agree with Marc. Row locking is huge. 
In my user community, it is\n> unacceptable to wait for up to 30 minutes (or even one minute) for a report to\n> finish so that a users can commit an invoice or commit a change to a customer\n> attribute. I can deal with it for now because my databases are batch loaded\n> for reporting purposes only. However, I plan to go forward with some pretty\n> important projects that assume that record/page locking will exist within the\n> next 12 month or so. Am I being too presumptuous?\n\nSounds like you need dirty read rather than row locking. If they lock a\nrow or the entire table, it still would cause the program to stall. I\nam not saying you don't need row or page locking, just that this may not\nhelp even if we had it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 9 Jun 1998 13:25:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4 - What is planned...?" }, { "msg_contents": "> I never forgot about this :)\n> Ok, but let's wait ~ Aug 1st: I'm not sure that I'll have\n> time for 6. and delayed fsync implemetation depends on\n> design of transaction manager...\n\nThanks. Makes sense.\n\n> \n> BTW, I have another item:\n> \n> 7. Re-use transaction XID (no commit --> no fsync) of read only\n> transactions (SELECTs could be left un-commited!).\n> \n> And more about performance of sequential scans:\n> as you know HeapTupleSatisfies can perfome scan key test and\n> so bypass expensive HeapTupleSatisfiesVisibility test for\n> unqualified tuples ... but this ability is never used by \n> SeqScan!!! 
ALL visible tuples are returned to top level\n> ExecScan and qualified by ExecQual - this is very very bad.\n> SeqScan should work like IndexScan: put quals from WHERE into\n> ScanKey-s for low level heap scan functions (it's now\n> possible for ANDs but could be extended for ORs too)...\n> \n> Another issue - handling of functions with constant args \n> in queries - for query\n> \n> select * from T where A = upper ('bbb')\n> \n> function upper ('bbb') will be executed for each tuple in T!\n> More of that - if there is index on T(A) then this index will\n> not be used for this query!\n> Obviously, upper ('bbb') should be executed (by Executor, not\n> parser/planner) once: new Param type (PARAM_EXEC) implemented \n> for subselects could help here too...\n\nI see what you are saying.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 9 Jun 1998 13:27:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4 - What is planned...?" } ]
[ { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Fri, 5 Jun 1998, Matthew N. Dodd wrote:\n> \n> > On Fri, 5 Jun 1998, David Gould wrote:\n> > > This of course is most useful if the remaining platforms adopt it too.\n> > > Given current trends, I suspect this will happen.\n> >\n> > I suspect you're talking about 'remaining Linux distributions, on all\n> > platforms (Alpha,MIPS,Sparc,ix86 etc.)\n> \n> I'm suspecting the same thing...I follow the developers mailin\n> list for FreeBSD, and have yet to hear of *any* work towards adopting the\n> glibc \"standard\"...if someone wishes to point me at work being done for\n> anything *other* then Linux (ie. NetBSD? Solaris x86) towards adopting\n> this, I'd be interested...\n\nI think the main website for the effort is:\n\nhttp://www.telly.org/86open/\n\nThis is some of what it sez:\n\n----8<---------8<---------8<---------8<---------8<---------8<-----\n\nAt a meeting held mid-August at the head office of SCO, participants \nachieved consensus on a way to create software applications which\nwould run, without modification or emulation, on the Intel-based \nversions of:\n\n BSDI \n FreeBSD \n Linux \n NetBSD \n SCO OpenServer \n Sunsoft Solaris X86 \n SCO UnixWare\n\nThe goal of this effort is to encourage software developers to port \nto the Unix-Intel platform by reducing the effort needed to support the\ndiverse mix of operating systems of this kind currently available.\n\nThe specification, called \"86open\", will be published and freely \navailable to any environment wishing compliance. It involves the \nuse of a standardized libc shared library of basic functions to \nbe provided on all systems. This library will provide a consistent \ninterface to programmers, hiding the differences between the various \noperating systems and allowing the resulting binary programs to run \nunaltered on any compliant system. 
Whenever possible, it will be \nconsistent with The Open Group's Single Unix Specification.\n\nEach participating operating system will be free to implement the \n86open library specification on its own. However, the reference\nimplementation will be based upon GNU's glibc version 2, ensuring \nthat it will remain open and freely available. The actual list and \nbehavior of the 86open functions is presently being determined.\n\n----8<---------8<---------8<---------8<---------8<---------8<-----\n\nHannu\n", "msg_date": "Sat, 06 Jun 1998 20:14:50 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hackers-digest V1 #843" } ]
[ { "msg_contents": "Hi,\n\nI was trying to change to cluster command to do the its writes clustered \nby a 100 tuples, thus hoping to improve performance. However, the code \nI've written crashes. This has certainly to do with some internal states \nof pgsql that aren't preserved in a HeapTuple.\n\nCould somebody with knowledge have a brief glimpse on my code and perhaps \ntell me how to do it properly?\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\nstatic void\nrebuildheap(Oid OIDNewHeap, Oid OIDOldHeap, Oid OIDOldIndex)\n{\n\tRelation\tLocalNewHeap,\n\t\t\t\tLocalOldHeap,\n\t\t\t\tLocalOldIndex;\n\tIndexScanDesc ScanDesc;\n\tRetrieveIndexResult ScanResult;\n\tItemPointer HeapTid;\n\tHeapTuple\tLocalHeapTuple;\n\tBuffer\t\tLocalBuffer[100];\n\tOid\t\t\tOIDNewHeapInsert;\n\tDllist *ScanResList;\n\tDlelem *ListEl;\n\tint count, loop;\n\n\t/*\n\t * Open the relations I need. 
Scan through the OldHeap on the OldIndex\n\t * and insert each tuple into the NewHeap.\n\t */\n\tLocalNewHeap = (Relation) heap_open(OIDNewHeap);\n\tLocalOldHeap = (Relation) heap_open(OIDOldHeap);\n\tLocalOldIndex = (Relation) index_open(OIDOldIndex);\n\tScanResList = DLNewList();\n\n\tScanDesc = index_beginscan(LocalOldIndex, false, 0, (ScanKey) NULL);\n\n\tloop = 1;\n\twhile (loop) {\n\t\tcount = 0;\n\t\twhile ((count < 100) &&\n\t\t\t ((ScanResult =\n\t\t\t\tindex_getnext(ScanDesc, \n\t\t\t\t\tForwardScanDirection)) != NULL))\n\t\t{\n\t\t\t\n\t\t\tHeapTid = &ScanResult->heap_iptr;\n\t\t\tpfree(ScanResult);\n\t\t\tLocalHeapTuple = heap_fetch(LocalOldHeap, false,\n\t\t\t\t\tHeapTid, &LocalBuffer[count]);\n\t\t\tListEl = DLNewElem(LocalHeapTuple);\n\t\t\tDLAddTail(ScanResList, ListEl);\n\t\t\tcount++;\n\t\t}\n\n\t\tif (count < 100) loop = 0;\n\n\t\tcount = 0;\n\t\twhile ((ListEl = DLRemHead(ScanResList)) != NULL) {\n\t\t\tLocalHeapTuple = (HeapTuple)ListEl->dle_val;\n\t\t\tDLFreeElem(ListEl);\n\t\t\tOIDNewHeapInsert =\n\t\t\t\theap_insert(LocalNewHeap, LocalHeapTuple);\n\t\t\tReleaseBuffer(LocalBuffer[count]);\n\t\t\tcount++;\n\t\t}\n\t}\n\n\tindex_endscan(ScanDesc);\n\n\tindex_close(LocalOldIndex);\n\theap_close(LocalOldHeap);\n\theap_close(LocalNewHeap);\n\tDLFreeList(ScanResList);\n}\n", "msg_date": "Sun, 7 Jun 1998 21:27:13 +0200 (MET DST)", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": true, "msg_subject": "Need some help on code" }, { "msg_contents": "> \n> Hi,\n> \n> I was trying to change to cluster command to do the its writes clustered \n> by a 100 tuples, thus hoping to improve performance. However, the code \n> I've written crashes. 
This has certainly to do with some internal states \n> of pgsql that aren't preserved in a HeapTuple.\n> \n> Could somebody with knowledge have a brief glimpse on my code and perhaps \n> tell me how to do it properly?\n\nI did not look at the code, but I can pretty much tell you that bunching\nthe write will not help performance. We already do that pretty well\nwith the cache.\n\nThe problem with the cluster is the normal problem of using an index to\nseek into a data table, where the data is not clustered on the index. \nEvery entry in the index requires a different page, and each has to be\nread in from disk.\n\nOften the fastest way is to discard the index, and just read the table, \nsorting each in pieces, and merging them in. That is what psort does,\nwhich is our sort code. That is why I recommend the SELECT INTO\nsolution if you have enough disk space.\n\nOnce it is clustered, subsequent clusters should be very fast, because\nonly the out-of-order entries cause random disk seeks.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 7 Jun 1998 16:26:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Need some help on code" }, { "msg_contents": "On Sun, 7 Jun 1998, Bruce Momjian wrote:\n\n> > \n> > Hi,\n> > \n> > I was trying to change to cluster command to do the its writes clustered \n> > by a 100 tuples, thus hoping to improve performance. However, the code \n> > I've written crashes. 
We already do that pretty well\n> with the cache.\n> \n> THe problem with the cluster is the normal problem of using an index to\n> seek into a data table, where the data is not clustered on the index. \n> Every entry in the index requires a different page, and each has to be\n> read in from disk.\n\nMy thinking was that the reading from the table is very scattered, but \nthat the writing to the new table could be done 'sequentially'. Therefore \nI thought it was interesting to see if it would help to cluster the writes.\n \n> Often the fastest way is to discard the index, and just read the table, \n> sorting each in pieces, and merging them in. That is what psort does,\n> which is our sort code. That is why I recommend the SELECT INTO\n> solution if you have enough disk space.\n\nA 'select into ... order by ...' you mean? \n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n", "msg_date": "Mon, 8 Jun 1998 09:34:58 +0200 (MET DST)", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Need some help on code" }, { "msg_contents": "> My thinking was that the reading from the table is very scattered, but \n> that the writing to the new table could be done 'sequentially'. Therefore \n> I thought it was interesting to see if it would help to cluster the writes.\n> \n> > Often the fastest way is to discard the index, and just read the table, \n> > sorting each in pieces, and merging them in. That is what psort does,\n> > which is our sort code. That is why I recommend the SELECT INTO\n> > solution if you have enough disk space.\n> \n> A 'select into ... order by ...' you mean? \n\nYes. 
See CLUSTER manual page:\n\n Another way is to use SELECT ... INTO TABLE temp FROM\n ...ORDER BY ... This uses the PostgreSQL sorting code in\n ORDER BY to match the index, and is much faster for\n unordered data. You then drop the old table, use ALTER\n \n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 11:16:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Need some help on code" }, { "msg_contents": "Maarten wrote:\n\n> I was trying to change to cluster command to do the its writes clustered \n> by a 100 tuples, thus hoping to improve performance. However, the code \n> I've written crashes. This has certainly to do with some internal states \n> of pgsql that aren't preserved in a HeapTuple.\n> \n> Could somebody with knowledge have a brief glimpse on my code and perhaps \n> tell me how to do it properly?\n... \n> static void\n> rebuildheap(Oid OIDNewHeap, Oid OIDOldHeap, Oid OIDOldIndex)\n> {\n> \tRelation\tLocalNewHeap,\n> \t\t\t\tLocalOldHeap,\n> \t\t\t\tLocalOldIndex;\n> \tIndexScanDesc ScanDesc;\n> \tRetrieveIndexResult ScanResult;\n> \tItemPointer HeapTid;\n> \tHeapTuple\tLocalHeapTuple;\n> \tBuffer\t\tLocalBuffer[100];\n> \tOid\t\t\tOIDNewHeapInsert;\n> \tDllist *ScanResList;\n> \tDlelem *ListEl;\n> \tint count, loop;\n> \n> \t/*\n> \t * Open the relations I need. 
Scan through the OldHeap on the OldIndex\n> \t * and insert each tuple into the NewHeap.\n> \t */\n> \tLocalNewHeap = (Relation) heap_open(OIDNewHeap);\n> \tLocalOldHeap = (Relation) heap_open(OIDOldHeap);\n> \tLocalOldIndex = (Relation) index_open(OIDOldIndex);\n> \tScanResList = DLNewList();\n> \n> \tScanDesc = index_beginscan(LocalOldIndex, false, 0, (ScanKey) NULL);\n> \n> \tloop = 1;\n> \twhile (loop) {\n> \t\tcount = 0;\n> \t\twhile ((count < 100) &&\n> \t\t\t ((ScanResult =\n> \t\t\t\tindex_getnext(ScanDesc, \n> \t\t\t\t\tForwardScanDirection)) != NULL))\n> \t\t{\n> \t\t\t\n> \t\t\tHeapTid = &ScanResult->heap_iptr;\n> \t\t\tpfree(ScanResult);\n ^^^^^^^^^^^^^^^^^^\nHmmm, at this point, HeapTid is a pointer to what? \n\n> \t\t\tLocalHeapTuple = heap_fetch(LocalOldHeap, false,\n> \t\t\t\t\tHeapTid, &LocalBuffer[count]);\n\nGiven more than one tuple on a page, then there may exist some \n LocalBuffer[i] == LocalBuffer[j] where i and j are distinct values of count.\n\n> \t\t\tListEl = DLNewElem(LocalHeapTuple);\n> \t\t\tDLAddTail(ScanResList, ListEl);\n> \t\t\tcount++;\n> \t\t}\n> \n> \t\tif (count < 100) loop = 0;\n> \n> \t\tcount = 0;\n> \t\twhile ((ListEl = DLRemHead(ScanResList)) != NULL) {\n> \t\t\tLocalHeapTuple = (HeapTuple)ListEl->dle_val;\n> \t\t\tDLFreeElem(ListEl);\n> \t\t\tOIDNewHeapInsert =\n> \t\t\t\theap_insert(LocalNewHeap, LocalHeapTuple);\n> \t\t\tReleaseBuffer(LocalBuffer[count]);\n\nSo here we ReleaseBuffer(LocalBuffer[count]) which if there are more than\none LocalBuffer[] that are in fact the same buffer will release the buffer\nmultiple times.\n\n> \t\t\tcount++;\n> \t\t}\n> \t}\n> \n> \tindex_endscan(ScanDesc);\n> \n> \tindex_close(LocalOldIndex);\n> \theap_close(LocalOldHeap);\n> \theap_close(LocalNewHeap);\n> \tDLFreeList(ScanResList);\n> }\n\n\nHope this helps.\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your 
ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n", "msg_date": "Mon, 8 Jun 1998 15:12:55 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Need some help on code" }, { "msg_contents": "> > \t\t\tHeapTid = &ScanResult->heap_iptr;\n> > \t\t\tpfree(ScanResult);\n> ^^^^^^^^^^^^^^^^^^\n> Hmmm, at this point, HeapTid is a pointer to what? \n\nI have no idea :) I think I missed the '&' in front of ScanResult, and \nthus figured that HeapTid would still be valid after the pfree().\n\n> > \t\t\t\theap_insert(LocalNewHeap, LocalHeapTuple);\n> > \t\t\tReleaseBuffer(LocalBuffer[count]);\n> \n> So here we ReleaseBuffer(LocalBuffer[count]) which if there are more than\n> one LocalBuffer[] that are in fact the same buffer will release the buffer\n> multiple times.\n\nWell, I don't even know what 'LocalBuffer' is. It would be nice if there \nwould be some documentation describing the access methods with all \nstructs and types etc. that they use.\n\nThanx for trying to explain things,\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n", "msg_date": "Tue, 9 Jun 1998 10:20:14 +0200 (MET DST)", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Need some help on code" } ]
[ { "msg_contents": "Hi,\n\nI saw the file INSTALL and I ran regression tests, I saw lots of failed tests,\nwhat do they mean?\nIf I have failed tests, how do I fix it? If I don't run regression tests, what's\nthe consequences I have?\nWould anyone tell me, thanks.\n\nBest regards,\nDoug.\n\n", "msg_date": "Mon, 08 Jun 1998 11:21:31 +0800", "msg_from": "Doug Lo <[email protected]>", "msg_from_op": true, "msg_subject": "Should I run regression tests?" }, { "msg_contents": "On Mon, 8 Jun 1998, Doug Lo wrote:\n\n> Hi,\n> \n> I saw the file INSTALL and I ran regression tests, I saw lots of failed tests,\n> what do they mean?\n> If I have failed tests, how do I fix it? If I don't run regression tests, what's\n> the consequences I have?\n> Would anyone tell me, thanks.\n\n\tTo be honest, the only ppl that should be required to run\nregression tests are those that are developing and preparing for\nreleases...for someone installing, they don't really give a warm fuzzy\nfeeling due to the discrepencies that the various platforms show that we\nconsider to be \"normal\" :(\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 8 Jun 1998 01:01:58 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Should I run regression tests?" }, { "msg_contents": "> \n> On Mon, 8 Jun 1998, Doug Lo wrote:\n> \n> > Hi,\n> > \n> > I saw the file INSTALL and I ran regression tests, I saw lots of failed tests,\n> > what do they mean?\n> > If I have failed tests, how do I fix it? 
If I don't run regression tests, what's\n> > the consequences I have?\n> > Would anyone tell me, thanks.\n> \n> \tTo be honest, the only ppl that should be required to run\n> regression tests are those that are developing and preparing for\n> releases...for someone installing, they don't really give a warm fuzzy\n> feeling due to the discrepencies that the various platforms show that we\n> consider to be \"normal\" :(\n\nBut INSTALL says:\n\n 18) If you wish to skip the regression tests then skip to step 21.\n However, we think skipping the tests is a BAD idea!\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 00:19:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Should I run regression tests?" }, { "msg_contents": "On Mon, 8 Jun 1998, Bruce Momjian wrote:\n\n> > \tTo be honest, the only ppl that should be required to run\n> > regression tests are those that are developing and preparing for\n> > releases...for someone installing, they don't really give a warm fuzzy\n> > feeling due to the discrepencies that the various platforms show that we\n> > consider to be \"normal\" :(\n> \n> But INSTALL says:\n> \n> 18) If you wish to skip the regression tests then skip to step 21.\n> However, we think skipping the tests is a BAD idea!\n\n\tand we think this because? its always confused me as to why an\nend-user would generally have to run regression tests on \"supported and\ntested platforms\". 
I can understand us, as developers, doing it prior to\na release, and I can understand someone doing it on an 'untested'\nplatform...but anything on a supported/tested platform should be caught\nby us, the developers, before the end-users see the software...\n\n\tNow, if we can get the regression tests to pass 100% on all\nplatforms, the point becomes moot, but, IMHO, all it does is causes/adds\nmore confusion to the end user then required... :(\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 8 Jun 1998 01:29:39 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Should I run regression tests?" }, { "msg_contents": "> \n> \tNow, if we can get the regression tests to pass 100% on all\n> platforms, the point becomes moot, but, IMHO, all it does is causes/adds\n> more confusion to the end user then required... :(\n\nLet's change the INSTALL. We are much more mature now as a product.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 00:34:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Should I run regression tests?" }, { "msg_contents": "At 7:29 +0300 on 8/6/98, The Hermit Hacker wrote:\n\n\n> \tand we think this because? its always confused me as to why an\n> end-user would generally have to run regression tests on \"supported and\n> tested platforms\". 
I can understand us, as developers, doing it prior to\n> a release, and I can understand someone doing it on an 'untested'\n> platform...but anything on a supported/tested platform should be caught\n> by us, the developers, before the end-users see the software...\n>\n> \tNow, if we can get the regression tests to pass 100% on all\n> platforms, the point becomes moot, but, IMHO, all it does is causes/adds\n> more confusion to the end user then required... :(\n\nMay I protest, please?\n\nWhat exactly is a supported/tested platform? Timezone differences make some\nof the failures, and I think it's important that we recognise them and know\nthat we have a timezone problem. Also, have you really tested the system on\nall available systems? I saw it compiled for solaris 2.6. Has it been\ntested for 2.5? Library differences, a slightly different installation\nprocedure, and the regression test points you, at least, in the right\ndirection to ask questions. After all, unix is the administrator's\ncreation, and he/she may decide to move things around. The regression tests\ntell him if one of his inventions are a bit overboard.\n\nEnd users which merely use the database should not be concerned with such\nthings, but if we are to run the system in a serious environment, my system\nadmin wants to be sure that postgres works *here*.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Mon, 8 Jun 1998 10:44:42 +0300", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Should I run regression tests?" }, { "msg_contents": "On Mon, 8 Jun 1998, Bruce Momjian wrote:\n\n> > \n> > \tNow, if we can get the regression tests to pass 100% on all\n> > platforms, the point becomes moot, but, IMHO, all it does is causes/adds\n> > more confusion to the end user then required... :(\n> \n> Let's change the INSTALL. 
We are much more mature now as a product.\n\n\tAgreed, let's just remove the extra line that says that we don't\nrecommend skipping it...\n\n\n", "msg_date": "Mon, 8 Jun 1998 07:34:49 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Should I run regression tests?" }, { "msg_contents": "\nMoved to pgsql-hackers...\n\nOn Mon, 8 Jun 1998, Herouth Maoz wrote:\n\n> May I protest, please?\n\n\tOf course you can...\n\n> What exactly is a supported/tested platform? \n\n\tWe have a list of platforms that are tested prior to each\nrelease...\n\n> Timezone differences make some\n> of the failures, and I think it's important that we recognise them and know\n> that we have a timezone problem. Also, have you really tested the system on\n> all available systems? I saw it compiled for solaris 2.6. Has it been\n> tested for 2.5? \n\n\tSolaris 2.6, 2.5.1 and SunOS 4.1.x were tested for last release by\ntwo ppl (myself included)\n\n> End users which merely use the database should not be concerned with such\n> things, but if we are to run the system in a serious environment, my system\n> admin wants to be sure that postgres works *here*.\n\n\tWe aren't removing the regression tests, we are just removing the\ncomment that we strongly encourage ppl to run them...we have yet to have\nan end-user report a problem with the regression tests that was something\nthat was actually a 'bug'...\n\n\n", "msg_date": "Mon, 8 Jun 1998 07:40:30 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Should I run regression tests?" 
}, { "msg_contents": "The Hermit Hacker wrote:\n\n> Moved to pgsql-hackers...\n>\n> On Mon, 8 Jun 1998, Herouth Maoz wrote:\n>\n> > End users which merely use the database should not be concerned with such\n> > things, but if we are to run the system in a serious environment, my system\n> > admin wants to be sure that postgres works *here*.\n>\n> We aren't removing the regression tests, we are just removing the\n> comment that we strongly encourage ppl to run them...we have yet to have\n> an end-user report a problem with the regression tests that was something\n> that was actually a 'bug'...\n\nHi,\n\nI'm curious about running regression tests. Why running regression tests is\nimportant\nfor a ppl not an end-user? If I'm an end-user, running regression tests and get\nfailed tests,\nmay I fix'em? If yes, would you like to tell me how to fix? Otherwise, what do\nthey mean?\nThanks in advance.\n\nBest wishes,\nDoug.\n\n\n\n\n", "msg_date": "Mon, 08 Jun 1998 23:26:52 +0800", "msg_from": "Doug Lo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Should I run regression tests?" }, { "msg_contents": "I am glad to hear that Postgresql has gotten so good that the developers\nfeel that the regression test can be hidden. Hiding it will improve the\nappearance of stability. In today's world of mass marketing appearance\nseems to count for a lot.\n\nOliver\n\nThe Hermit Hacker wrote:\n> \n> Moved to pgsql-hackers...\n> \n> On Mon, 8 Jun 1998, Herouth Maoz wrote:\n> \n> > May I protest, please?\n> \n> Of course you can...\n> \n> > What exactly is a supported/tested platform?\n> \n> We have a list of platforms that are tested prior to each\n> release...\n> \n> > Timezone differences make some\n> > of the failures, and I think it's important that we recognise them and know\n> > that we have a timezone problem. Also, have you really tested the system on\n> > all available systems? I saw it compiled for solaris 2.6. Has it been\n> > tested for 2.5?\n> \n> Solaris 2.6, 2.5.1 and SunOS 4.1.x were tested for last release by\n> two ppl (myself included)\n> \n> > End users which merely use the database should not be concerned with such\n> > things, but if we are to run the system in a serious environment, my system\n> > admin wants to be sure that postgres works *here*.\n> \n> We aren't removing the regression tests, we are just removing the\n> comment that we strongly encourage ppl to run them...we have yet to have\n> an end-user report a problem with the regression tests that was something\n> that was actually a 'bug'...\n", "msg_date": "Mon, 08 Jun 1998 13:19:09 -0400", "msg_from": "mark metzger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Should I run regression tests?" 
}, { "msg_contents": "On Mon, 8 Jun 1998, mark metzger wrote:\n\n> I am glad to hear that Postgresql has gotten so good that the developers\n> feel that the regression test can be hidden. Hiding it will improve the\n> appearance of stability. In today's world of mass marketing appearance\n> seems to count for a lot.\n\n\tNobody is hiding anything, nor are we removing anything...the\nINSTALL guide still lists the regression tests as a step, but it isn't\nconsidered a required step...\n\n > \n> Oliver\n> \n> The Hermit Hacker wrote:\n> > \n> > Moved to pgsql-hackers...\n> > \n> > On Mon, 8 Jun 1998, Herouth Maoz wrote:\n> > \n> > > May I protest, please?\n> > \n> > Of course you can...\n> > \n> > > What exactly is a supported/tested platform?\n> > \n> > We have a list of platforms that are tested prior to each\n> > release...\n> > \n> > > Timezone differences make some\n> > > of the failures, and I think it's important that we recognise them and know\n> > > that we have a timezone problem. Also, have you really tested the system on\n> > > all available systems? I saw it compiled for solaris 2.6. Has it been\n> > > tested for 2.5?\n> > \n> > Solaris 2.6, 2.5.1 and SunOS 4.1.x were tested for last release by\n> > two ppl (myself included)\n> > \n> > > End users which merely use the database should not be concerned with such\n> > > things, but if we are to run the system in a serious environment, my system\n> > > admin wants to be sure that postgres works *here*.\n> > \n> > We aren't removing the regression tests, we are just removing the\n> > comment that we strongly encourage ppl to run them...we have yet to have\n> > an end-user report a problem with the regression tests that was something\n> > that was actually a 'bug'...\n> \n\n", "msg_date": "Mon, 8 Jun 1998 13:26:21 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Should I run regression tests?" } ]
[ { "msg_contents": "Hello All.\n\nMy mailer munged the intro text in my last post. Here is the text in a more \nreadable form.\n--\nI am submitting the following patches to the June 6, 1998 snapshot of \nPostgreSQL. These patches implement a port of PostgreSQL to SCO UnixWare 7, \nand updates the Univel port (UnixWare 2.x). The patched files, and the reason \n for the patch are:\n\nFile\t\tReason for the patch\n--------------- ---------------------------------------------------------------\nsrc/backend/port/dynloader/unixware.c\nsrc/backend/port/dynloader/unixware.h\nsrc/include/port/unixware.h\nsrc/makefiles/Makefile.unixware\nsrc/template/unixware\n \t\tCreated for the UNIXWARE port.\n\nsrc/include/port/univel.h\n \t\tModifed this file to work with the changes made to s_lock.[ch].\n\nsrc/backend/storage/buffer/s_lock.c\nsrc/include/storage/s_lock.h\n \t\tMoved the UNIXWARE (and Univel) tas() function from s_lock.c to\n \t\ts_lock.h. The UnixWare compiler asm construct is treated as a\n \t\tmacro and needs to be in the s_lock.h file. I also reworked\n\t\tthe tas() function to correct some errors in the code.\n\nsrc/include/version.h.in\n \t\tThe use of the ## operator with quoted strings in the VERSION\n \t\tmacro caused problems with the UnixWare C compiler. I removed\n \t\tthe ## operators since they were not needed in this case. The\n \t\tmacro expands into a sequence of quoted strings that will be\n \t\tconcatenated by any ANSI C compiler.\n\nsrc/config.guess\n \t\tThis script was modified to recognize SCO UnixWare 7.\n\nsrc/configure src/configure.in\n \t\tThe configure script was modified to recognize SCO UnixWare 7.\n\n--\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: [email protected]\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n", "msg_date": "Mon, 08 Jun 1998 00:39:51 -0400", "msg_from": "\"Billy G. 
Allie\" <[email protected]>", "msg_from_op": true, "msg_subject": "UnixWare 7 port (corrected intro text)." }, { "msg_contents": "Patch applied.\n\n\n> Hello All.\n> \n> My mailer munged the intro text in my last post. Here is the text in a more \n> readable form.\n> --\n> I am submitting the following patches to the June 6, 1998 snapshot of \n> PostgreSQL. These patches implement a port of PostgreSQL to SCO UnixWare 7, \n> and updates the Univel port (UnixWare 2.x). The patched files, and the reason \n> for the patch are:\n> \n> File\t\tReason for the patch\n> --------------- ---------------------------------------------------------------\n> src/backend/port/dynloader/unixware.c\n> src/backend/port/dynloader/unixware.h\n> src/include/port/unixware.h\n> src/makefiles/Makefile.unixware\n> src/template/unixware\n> \t\tCreated for the UNIXWARE port.\n> \n> src/include/port/univel.h\n> \t\tModifed this file to work with the changes made to s_lock.[ch].\n> \n> src/backend/storage/buffer/s_lock.c\n> src/include/storage/s_lock.h\n> \t\tMoved the UNIXWARE (and Univel) tas() function from s_lock.c to\n> \t\ts_lock.h. The UnixWare compiler asm construct is treated as a\n> \t\tmacro and needs to be in the s_lock.h file. I also reworked\n> \t\tthe tas() function to correct some errors in the code.\n> \n> src/include/version.h.in\n> \t\tThe use of the ## operator with quoted strings in the VERSION\n> \t\tmacro caused problems with the UnixWare C compiler. I removed\n> \t\tthe ## operators since they were not needed in this case. The\n> \t\tmacro expands into a sequence of quoted strings that will be\n> \t\tconcatenated by any ANSI C compiler.\n> \n> src/config.guess\n> \t\tThis script was modified to recognize SCO UnixWare 7.\n> \n> src/configure src/configure.in\n> \t\tThe configure script was modified to recognize SCO UnixWare 7.\n> \n> --\n> ____ | Billy G. 
Allie | Domain....: [email protected]\n> | /| | 7436 Hartwell | Compuserve: [email protected]\n> |-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n> |/ |LLIE | (313) 582-1540 | \n> \n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 19 Jul 1998 00:15:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] UnixWare 7 port (corrected intro text)." } ]
[ { "msg_contents": "I had recieved your mail. Thank you.\n\nIf I wish to add keyword for editing image :example keyword => ImageUnion,\nWhich files must to be changed.\n\nWould you expain where to begin and how to do something.\n\nAnd would you expain functionality of files in executor directory to me.\n\nI am sorry that trouble you.\n\nThank you . Have nice day!\n", "msg_date": "Mon, 8 Jun 1998 16:16:00 +0900 (KST)", "msg_from": "Tak Woohyun <[email protected]>", "msg_from_op": true, "msg_subject": "[question]postgreSQL...." } ]
[ { "msg_contents": "In interfaces/libpq/libpq-fe.h there is this well-meaning comment:\n\n/*\n * We can't use the conventional \"bool\", because we are designed to be\n * included in a user's program, and user may already have that type\n * defined. Pqbool, on the other hand, is unlikely to be used.\n */\n\nUnfortunately, libpq-fe.h includes libpq/pgcomm.h, which in turn \nincludes c.h, which defines the bool type. This causes me problems as\nthe code I am working with also defines bool.\n\nOf c.h, the only section that pqcomm.h requires is section 3 (standard\nsystem types). What I have done locally, therefore, is to move that\nsection into a new file (sys_types.h) and include that from pqcomm.h\ninstead of c.h. Does this solution seem reasonable, or does anyone have\na different idea about the way the include files should be arranged? If\nthere are no objections, I can submit a patch if you want.\n\nEwan Mellor.\n", "msg_date": "Mon, 08 Jun 1998 16:56:43 +0100", "msg_from": "Ewan Mellor <[email protected]>", "msg_from_op": true, "msg_subject": "bool exported to user namespace" } ]
[ { "msg_contents": "I have added code to the postmaster to generate a random cancel key by\ncalling gettimeofday() on postmaster startup and on the first invocation\nof a backend, and merged the micro-seconds of the two times to seed the\nrandom number generator.\n\nI added a PostmasterRandom() function which returns a random that is\nXOR'ed with the original random seed, so it is not possible to take a\ngiven cancel key and predict future random keys.\n\nThe only way you could do it would be to call random in your backend,\nand somehow find the PREVIOUS random value. You could XOR it with your\ncancel key to find the original seed, and then try going forward to\npredict the next cancel value. Seems impossible to me.\n\nThis fulfills two goals, to make the random seed truly random, so the\ncancel keys are not guess-able, and to make seeing your own cancel key\nalmost useless in computing other cancel keys. Not sure if you can\npredict forward, but it is probably impossible to predict randoms\nbackward on any of our supported platforms.\n\nPatch is posted to patches list.\n\nNow I need help in passing the value to the front-end, and having the\nfront-end pass it to the backend for a cancel. I do not recommend\npassing the pid because I will store the cancel key in the per-backend\nstructure, so having the pid does not help me find the backend. Might\nas well just scan the table to find the matching cancel key, and kill\nthat backend. We will have to store the pid in the structure, but that\nis easy to do.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 12:24:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Cancel key now ready" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Now I need help in passing the value to the front-end, and having the\n> front-end pass it to the backend for a cancel.\n\nI can work on that. Have you checked the postmaster changes into cvs?\n\n> I do not recommend passing the pid because I will store the cancel key\n> in the per-backend structure, so having the pid does not help me find\n> the backend. Might as well just scan the table to find the matching\n> cancel key, and kill that backend. We will have to store the pid in\n> the structure, but that is easy to do.\n\nI don't like this. Backend PIDs are guaranteed unique (among active\nbackends); cancel keys are not guaranteed unique, unless you took some\nspecial measure to make them so. So you could hit the wrong backend\nif you only compare cancel keys. Since you must store the PID anyway to\nsend the signal, you may as well use both to verify that you have found\nthe right backend.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Jun 1998 12:57:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cancel key now ready " }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > Now I need help in passing the value to the front-end, and having the\n> > front-end pass it to the backend for a cancel.\n> \n> I can work on that. Have you checked the postmaster changes into cvs?\n\nAlways. You bet.\n\n> \n> > I do not recommend passing the pid because I will store the cancel key\n> > in the per-backend structure, so having the pid does not help me find\n> > the backend. 
We will have to store the pid in\n> > the structure, but that is easy to do.\n> \n> I don't like this. Backend PIDs are guaranteed unique (among active\n> backends); cancel keys are not guaranteed unique, unless you took some\n> special measure to make them so. So you could hit the wrong backend\n> if you only compare cancel keys. Since you must store the PID anyway to\n> send the signal, you may as well use both to verify that you have found\n> the right backend.\n\nOK, sure, pass the pid. However, one problem is that the postmaster\ndoes not know the pid until after it forks the backend, so if you want\nto send the pid with the cancel key, you will have to send the pid from\nthe backend.\n\nAlso, the odds of two backends having the same cancel key, when random()\nreturns a long, are so astronomically small that I am willing to live\nwith the risk, to take advantage of cleaner, more modular code.\n\nConsidering the problem of sending the cancel key from the backend and\nnot the postmaster, I dropped the pid. Remember, you have to store the\ncancel key in the postmaster so sending it to the client at that point\nmade sense. Setting the pid after the fork is easy because there is no\ncommunication required.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 14:14:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have added code to the postmaster to generate a random cancel key by\n> calling gettimeofday() on postmaster startup and on the first invocation\n> of a backend, and merged the micro-seconds of the two times to seed the\n> random number generator.\n\nThere were several things I didn't like about Bruce's first cut.\nHis method for generating a random seed in the postmaster is good,\nbut there are several security holes elsewhere.\n\n> Not sure if you can\n> predict forward, but it is probably impossible to predict randoms\n> backward on any of our supported platforms.\n\nActually, it's not that hard. Nearly all implementations of random()\nare basically just\n\tseed = (seed * a + b) % c;\n return seed;\nfor some constants a,b,c --- which a potential attacker could easily\nfind out. So if the attacker can execute random() in a backend before\nit gets used for anything else, he can back up to the last random value\ngenerated in the postmaster. The most secure way to prevent this is to\nre-execute srandom() during the startup of each backend, so that it will\nhave a random() sequence that's unrelated to the postmaster's.\n\nAlso, Bruce was assuming that the random_seed value wouldn't be visible\nin a backend ... 
but the forked-off backend will have a copy just\nsitting there, readily accessible if you can figure out its address.\nBackend startup should zero out this variable to be on the safe side.\n\nAttached is a patch that fixes these leaks, and does a couple other\nthings as well:\n * Computes and saves a cancel key for each backend.\n * fflush before forking, to eliminate double-buffering problems\n between postmaster and backends.\n * Go back to two random() calls instead of one to generate random\n salt. I'm not sure why Bruce changed that, but it looks much\n less secure to me; the one-call way is exposing twice as many\n bits of the current random seed.\n * Fix \"ramdom\" typo.\n\nNext is to transmit the PID + cancel key to the frontend and modify\nlibpq's cancel routine to send it back. I'll work on that later.\n\n\t\t\tregards, tom lane\n\n\n*** postmaster.c~\tMon Jun 8 14:10:39 1998\n--- postmaster.c\tMon Jun 8 14:44:15 1998\n***************\n*** 113,118 ****\n--- 113,119 ----\n typedef struct bkend\n {\n \tint\t\t\tpid;\t\t\t/* process id of backend */\n+ \tlong\t\tcancel_key;\t\t/* cancel key for cancels for this backend */\n } Backend;\n \n /* list of active backends. For garbage collection only now. */\n***************\n*** 198,204 ****\n--- 199,212 ----\n static\tint\t\t\torgsigmask = sigblock(0);\n #endif\n \n+ /*\n+ * State for assigning random salts and cancel keys.\n+ * Also, the global MyCancelKey passes the cancel key assigned to a given\n+ * backend from the postmaster to that backend (via fork).\n+ */\n+ \n static unsigned int random_seed = 0;\n+ long MyCancelKey = 0;\n \n extern char *optarg;\n extern int\toptind,\n***************\n*** 602,618 ****\n \t\t\treturn (STATUS_ERROR);\n \t\t}\n \n! \t\tif (random_seed == 0)\n \t\t{\n \t\t\tgettimeofday(&later, &tz);\n \t\n \t\t\t/*\n \t\t\t *\tWe are not sure how much precision is in tv_usec, so we\n! 
\t\t\t *\tswap the nibbles of 'later' and XOR them with 'now'\n \t\t\t */\n \t\t\trandom_seed = now.tv_usec ^\n \t\t\t\t\t((later.tv_usec << 16) |\n! \t\t\t\t\t((unsigned int)(later.tv_usec & 0xffff0000) >> 16));\n \t\t}\n \t\t\t\t\n \t\t/*\n--- 610,631 ----\n \t\t\treturn (STATUS_ERROR);\n \t\t}\n \n! \t\t/*\n! \t\t * Select a random seed at the time of first receiving a request.\n! \t\t */\n! \t\twhile (random_seed == 0)\n \t\t{\n \t\t\tgettimeofday(&later, &tz);\n \t\n \t\t\t/*\n \t\t\t *\tWe are not sure how much precision is in tv_usec, so we\n! \t\t\t *\tswap the nibbles of 'later' and XOR them with 'now'.\n! \t\t\t * On the off chance that the result is 0, we loop until\n! \t\t\t * it isn't.\n \t\t\t */\n \t\t\trandom_seed = now.tv_usec ^\n \t\t\t\t\t((later.tv_usec << 16) |\n! \t\t\t\t\t((later.tv_usec >> 16) & 0xffff));\n \t\t}\n \t\t\t\t\n \t\t/*\n***************\n*** 1075,1080 ****\n--- 1088,1101 ----\n \t}\n #endif\n \n+ \t/*\n+ \t * Compute the cancel key that will be assigned to this backend.\n+ \t * The backend will have its own copy in the forked-off process'\n+ \t * value of MyCancelKey, so that it can transmit the key to the\n+ \t * frontend.\n+ \t */\n+ \tMyCancelKey = PostmasterRandom();\n+ \n \tif (DebugLvl > 2)\n \t{\n \t\tchar\t **p;\n***************\n*** 1088,1104 ****\n \t\tfprintf(stderr, \"-----------------------------------------\\n\");\n \t}\n \n if ((pid = fork()) == 0)\n \t{ /* child */\n if (DoBackend(port))\n \t\t{\n fprintf(stderr, \"%s child[%d]: BackendStartup: backend startup failed\\n\",\n! progname, pid);\n! \t\t\t/* use _exit to keep from double-flushing stdio */\n! \t \t\t_exit(1);\n \t\t}\n \t\telse\n! 
\t \t_exit(0);\n \t}\n \n \t/* in parent */\n--- 1109,1129 ----\n \t\tfprintf(stderr, \"-----------------------------------------\\n\");\n \t}\n \n+ \t/* Flush all stdio channels just before fork,\n+ \t * to avoid double-output problems.\n+ \t */\n+ \tfflush(NULL);\n+ \n if ((pid = fork()) == 0)\n \t{ /* child */\n if (DoBackend(port))\n \t\t{\n fprintf(stderr, \"%s child[%d]: BackendStartup: backend startup failed\\n\",\n! progname, (int) getpid());\n! \t \t\texit(1);\n \t\t}\n \t\telse\n! \t \texit(0);\n \t}\n \n \t/* in parent */\n***************\n*** 1130,1135 ****\n--- 1155,1161 ----\n \t}\n \n \tbn->pid = pid;\n+ \tbn->cancel_key = MyCancelKey;\n \tDLAddHead(BackendList, DLNewElem(bn));\n \n \tActiveBackends = TRUE;\n***************\n*** 1192,1197 ****\n--- 1218,1225 ----\n \tchar\t\tdbbuf[ARGV_SIZE + 1];\n \tint\t\t\tac = 0;\n \tint\t\t\ti;\n+ \tstruct timeval now;\n+ \tstruct timezone tz;\n \n \t/*\n \t *\tLet's clean up ourselves as the postmaster child\n***************\n*** 1225,1231 ****\n \tif (NetServer)\n \t\tStreamClose(ServerSock_INET);\n \tStreamClose(ServerSock_UNIX);\n! \t\n \t/* Now, on to standard postgres stuff */\n \t\n \tMyProcPid = getpid();\n--- 1253,1268 ----\n \tif (NetServer)\n \t\tStreamClose(ServerSock_INET);\n \tStreamClose(ServerSock_UNIX);\n! \n! \t/*\n! \t * Don't want backend to be able to see the postmaster random number\n! \t * generator state. We have to clobber the static random_seed *and*\n! \t * start a new random sequence in the random() library function.\n! \t */\n! \trandom_seed = 0;\n! \tgettimeofday(&now, &tz);\n! \tsrandom(now.tv_usec);\n! \n \t/* Now, on to standard postgres stuff */\n \t\n \tMyProcPid = getpid();\n***************\n*** 1365,1374 ****\n static void\n RandomSalt(char *salt)\n {\n! \tlong rand = PostmasterRandom();\n! \t\n! \t*salt = CharRemap(rand % 62);\n! \t*(salt + 1) = CharRemap(rand / 62);\n }\n \n /*\n--- 1402,1409 ----\n static void\n RandomSalt(char *salt)\n {\n! 
\t*salt = CharRemap(PostmasterRandom());\n! \t*(salt + 1) = CharRemap(PostmasterRandom());\n }\n \n /*\n***************\n*** 1387,1391 ****\n \t\tinitialized = true;\n \t}\n \n! \treturn ramdom() ^ random_seed;\n }\n--- 1422,1426 ----\n \t\tinitialized = true;\n \t}\n \n! \treturn random() ^ random_seed;\n }\n", "msg_date": "Mon, 08 Jun 1998 14:55:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cancel key now ready " }, { "msg_contents": "In the current CVS may be a little bug or a big typo. IMHO should line\n1390 in postmaster.c return random and not ramdom.\n\nSince the change around 06/04 I have trouble starting the postmaster. I\ncan only access db's with postgres.\n\n-Egon\n\nOn Mon, 8 Jun 1998, Bruce Momjian wrote:\n\n> > \n> > Bruce Momjian <[email protected]> writes:\n> > > Now I need help in passing the value to the font-end, and having the\n> > > front-end pass it to the backend for a cancel.\n> > \n> > I can work on that. Have you checked the postmaster changes into cvs?\n> \n> Always. You bet.\n\n\n", "msg_date": "Mon, 8 Jun 1998 21:11:17 +0200 (MET DST)", "msg_from": "Egon Schmid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready" }, { "msg_contents": "> \n> In the current CVS may be a little bug or a big typo. IMHO should line\n> 1390 in postmaster.c return random and not ramdom.\n> \n> Since the change around 06/04 I have trouble starting the postmaster. I\n> can only access db's with postgres.\n\nFixing now.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 15:13:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready" }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > I have added code to the postmaster to generate a random cancel key by\n> > calling gettimeofday() on postmaster startup and on the first invocation\n> > of a backend, and merged the micro-seconds of the two times to seed the\n> > random number generator.\n> \n> There were several things I didn't like about Bruce's first cut.\n> His method for generating a random seed in the postmaster is good,\n> but there are several security holes elsewhere.\n> \n> > Not sure if you can\n> > predict forward, but it is probably impossible to predict randoms\n> > backward on any of our supported platforms.\n> \n> Actually, it's not that hard. Nearly all implementations of random()\n> are basically just\n> \tseed = (seed * a + b) % c;\n> return seed;\n> for some constants a,b,c --- which a potential attacker could easily\n> find out. So if the attacker can execute random() in a backend before\n> it gets used for anything else, he can back up to the last random value\n> generated in the postmaster. The most secure way to prevent this is to\n> re-execute srandom() during the startup of each backend, so that it will\n> have a random() sequence that's unrelated to the postmaster's.\n\nI thought about this. I can force a re-seeding of random in the\nbackend on first use. Didn't get that far yet. Could re-seed on every\nstartup, but again, could be an expensive function.\n\n> \n> Also, Bruce was assuming that the random_seed value wouldn't be visible\n> in a backend ... 
but the forked-off backend will have a copy just\n> sitting there, readily accessible if you can figure out its address.\n> Backend startup should zero out this variable to be on the safe side.\n\nIf they have access the backend address space, they can see the entire\npostmaster backend structure at time of fork(), so seeing the seed is\nmeanless. Basically, for any user who is installing their own functions\nor stuff is already able to do more severe damage than just cancel. \nThey can write directly into the database.\n\n\n> \n> Attached is a patch that fixes these leaks, and does a couple other\n> things as well:\n> * Computes and saves a cancel key for each backend.\n> * fflush before forking, to eliminate double-buffering problems\n> between postmaster and backends.\n\nCan you elaborate on what this fixes?\n\n> * Go back to two random() calls instead of one to generate random\n> salt. I'm not sure why Bruce changed that, but it looks much\n> less secure to me; the one-call way is exposing twice as many\n> bits of the current random seed.\n\nThe code is similar to taking a random() and doing:\n\n\trand % 10\n\n\t(rand / 10) % 10\n\nwhich for a random of 123456 returns 6 and 5. In the postmaster case\nthe values are 62 and not 10, but the concept is the same. No reason to\ncall random() twice. May be an expensive function on some platforms.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 15:29:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, sure, pass the pid. 
However, one problem is that the postmaster\n> does not know the pid until after it forks the backend, so if you want\n> to send the pid with the cancel key, you will have to send the pid from\n> the backend.\n\nAh, I see: you were planning to send the cancel authorization data to\nthe FE as part of the AuthenticationOK message. I was assuming it\nshould be sent by the backend as part of backend startup.\nIt could be done either way I suppose. The transmission of the cancel\nkey to the backend is essentially free (see my recent patch), so really\nit boils down to whether we'd rather add version-dependent fields to\nAuthenticationOK or just add a whole new message type.\n\n> Also, the odds of two backends have the same cancel key, when random\n> returns a long() is so astonomically small, that I am willing to live\n> with the risk, to take the advantage of cleaner, more modular code.\n\nIt would only take a few more lines to make it safe: generate a key,\ncheck for a duplicate in the list of active backends, repeat if there\nis a duplicate. 
However I think that using PID+key is better, because\nit makes it that much harder for an attacker to guess a valid cancel\nrequest.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Jun 1998 15:29:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready " }, { "msg_contents": "Already fixed, compiled and started postmaster I see the following:\n\nmarliesle$ postmaster -i &\n[1] 22619\nmarliesle$ No such file or directory\nPostmasterMain execv failed on postmaster\n\n[1]+ Exit 1 postmaster -i &\n\nI'm on debian (bo) and till July 4 it worked everyday :)\n\n-Egon \n\nOn Mon, 8 Jun 1998, Bruce Momjian wrote:\n> Fixing now.\n\n\n", "msg_date": "Mon, 8 Jun 1998 21:33:05 +0200 (MET DST)", "msg_from": "Egon Schmid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready" }, { "msg_contents": "> It would only take a few more lines to make it safe: generate a key,\n> check for a duplicate in the list of active backends, repeat if there\n> is a duplicate. However I think that using PID+key is better, because\n> it makes it that much harder for an attacker to guess a valid cancel\n> request.\n\nAnother way to do it is that when scanning looking for a match of a\ncancel key, do not execute the cancel if there is more than one match. \nSimple, and we are already scannig the structure. I see no reason to\nscan it at cancel assignment time because the odds are so small.\n\nBut the PID is clearly visible to an attacker, so I see no benefit. If\nit can be sent easily, lets do it. I am not sure where/how to send it,\nso do it the way you think is best. Again, if you send the pid, you\ncan't have duplicates, you are right. Also, remember if we send the\ncancel and the pid, we have to store each value in every interface. 
It\nis not just libpq.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 15:34:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready" }, { "msg_contents": "> \n> Already fixed, compiled and started postmaster I see the following:\n> \n> marliesle$ postmaster -i &\n> [1] 22619\n> marliesle$ No such file or directory\n> PostmasterMain execv failed on postmaster\n> \n> [1]+ Exit 1 postmaster -i &\n> \n> I'm on debian (bo) and till July 4 it worked everyday :)\n\nOK, this is another issue. To properly show status on the command line,\nI have the postmaster re-exec() itself with at least three args. For\nsome reason, on your platform, this is not working. I have just\ncommitted a patch that shows the file name on exec failure. Please\nre-sync and tell me what it displays.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 15:37:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready" }, { "msg_contents": "Sadly to say the same.\n\n-Egon\n\nOn Mon, 8 Jun 1998, Bruce Momjian wrote:\n\n> > marliesle$ postmaster -i &\n> > [1] 22619\n> > marliesle$ No such file or directory\n> > PostmasterMain execv failed on postmaster\n> \n> OK, this is another issue. To properly show status on the command line,\n> I have the postmaster re-exec() itself with at least three args. For\n> some reason, on your platform, this is not working. I have just\n> committed a patch that shows the file name on exec failure. 
Please\n> re-sync and tell me what it displays.\n\n", "msg_date": "Mon, 8 Jun 1998 22:10:26 +0200 (MET DST)", "msg_from": "Egon Schmid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready" }, { "msg_contents": "> \n> Sadly to say the same.\n> \n> -Egon\n> \n> On Mon, 8 Jun 1998, Bruce Momjian wrote:\n> \n> > > marliesle$ postmaster -i &\n> > > [1] 22619\n> > > marliesle$ No such file or directory\n ^^^^^^^^^^^^^^\n\nThere should be some more information in the error message at this\npoint.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 16:16:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I thought about this. I can force a re-seeding of random in the\n> backend on first use.\n\nNo you can't; you might make PostmasterRandom behave that way, but\nthat doesn't stop an installed function from just calling random()\ndirectly. You really need to wipe out the state saved by the random\nlibrary function.\n\n> Could re-seed on every\n> startup, but again, could be an expensive function.\n\nsrandom() is generally not much more than a store into a\nstatic variable. If there's anything expensive about this,\nit'd be the gettimeofday() call to produce the new seed value.\n\n> If they have access the backend address space, they can see the entire\n> postmaster backend structure at time of fork(), so seeing the seed is\n> meanless.\n\nThat's a good point --- in particular, they could trace the postmaster\nbackend-process list to find out everyone else's cancel keys. 
This\nsort of thing is one of the disadvantages of not using an exec().\n\nWhat do you think of freeing that process list as part of backend startup?\n\n> Basically, for any user who is installing their own functions\n> or stuff is already able to do more severe damage than just cancel. \n> They can write directly into the database.\n\nThat's certainly true ... but last week we were trying to make the\ncancel protocol proof against someone with the ability to spy on\nTCP packets in transit (which is not that easy) and now you seem\nto be unworried about attacks that only require loading a function\ninto one of the backends. I'd prefer to do whatever we can easily\ndo to defend against that.\n\n>> * fflush before forking, to eliminate double-buffering problems\n>> between postmaster and backends.\n\n> Can you elaborate on what this fixes?\n\nI have not seen any failure cases, if that's what you mean; but I\nhaven't yet done anything with the new no-exec code. The risk is\nthat if any data is waiting in a postmaster stdio output buffer,\nit will eventually get written twice, once by the postmaster and\nonce by the backend. You want to flush it out before forking\nto ensure that doesn't happen. This wasn't an issue before with\nthe exec-based code, because the child process' copy of the postmaster's\nstdio buffers got thrown away when the exec() occurred. With no\nexec, the unflushed buffers are still there and still valid as far\nas the stdio library in the child knows.\n\n> The code is similar to taking a random() and doing:\n> \trand % 10\n> \t(rand / 10) % 10\n> which for a random of 123456 returns 6 and 5. In the postmaster case\n> the values are 62 and not 10, but the concept is the same. No reason to\n> call random() twice. May be an expensive function on some platforms.\n\nIt's not that expensive (you were doing it twice before, with no visible\nproblem). 
I'm concerned that the new way exposes more info about the\ncurrent state of the postmaster's random sequence. For that matter,\nI'm probably going to want to change the computation of the cancel key\nlater on --- the code I just sent in was only\n\tMyCancelKey = PostmasterRandom();\nbut I think it would be better to synthesize the cancel key from\nfragments of a couple of random values. This code will do to get the\nprotocol working but I don't think it's cryptographically secure.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Jun 1998 16:18:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready " }, { "msg_contents": "No, postmaster jumps from main/main.c\n\n\tif (len >= 10 && !strcmp(argv[0] + len - 10, \"postmaster\")) \n\t\texit(PostmasterMain(argc, argv));\n\nto postmaster/postmaster.c\n\n int\n PostmasterMain(int argc, char *argv[])\n { .. \n if (argc < 4)\n .. /* How did we get here, error! */\n fprintf(stderr, \"PostmasterMain execv failed on %s\\n\", argv[0]);\n\nI tried this today and after the fix the message is the same. Will start a\nnext time tomorrow.\n\n-Egon\n\nOn Mon, 8 Jun 1998, Bruce Momjian wrote:\n\n> > \n> > Sadly to say the same.\n> > \n> > -Egon\n> > \n> > On Mon, 8 Jun 1998, Bruce Momjian wrote:\n> > \n> > > > marliesle$ postmaster -i &\n> > > > [1] 22619\n> > > > marliesle$ No such file or directory\n> ^^^^^^^^^^^^^^\n> \n> There should be some more information in the error message at this\n> point.\n\n", "msg_date": "Mon, 8 Jun 1998 22:33:24 +0200 (MET DST)", "msg_from": "Egon Schmid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready" }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > I thought about this. 
I can force a re-seeding of random in the\n> > backend on first use.\n> \n> No you can't; you might make PostmasterRandom behave that way, but\n> that doesn't stop an installed function from just calling random()\n> directly. You really need to wipe out the state saved by the random\n> library function.\n\nYou can't just call random directly. You have to call an install\nfunction with a pg_proc entry for it to work. If we set that one up to\ninitialize itself, it should work.\n\n> \n> > Could re-seed on every\n> > startup, but again, could be an expensive function.\n> \n> srandom() is generally not much more than a store into a\n> static variable. If there's anything expensive about this,\n> it'd be the gettimeofday() call to produce the new seed value.\n\nBut we don't call that for every backend startup, just twice for the\nlife of the postmaster, once for postmaster startup, and once for the\nstartup of the first backend. What I don't want is to profile the\nbackend and find that random/srandom() is showing up as significant.\n\n> \n> > If they have access the backend address space, they can see the entire\n> > postmaster backend structure at time of fork(), so seeing the seed is\n> > meanless.\n> \n> That's a good point --- in particular, they could trace the postmaster\n> backend-process list to find out everyone else's cancel keys. This\n> sort of thing is one of the disadvantages of not using an exec().\n> \n> What do you think of freeing that process list as part of backend startup?\n\nAgain, being able to connect to the backend, and accessing its address\nspace are two separate privs. Only the postgres user can do such\naccess.\n\n> \n> > Basically, for any user who is installing their own functions\n> > or stuff is already able to do more severe damage than just cancel. \n> > They can write directly into the database.\n> \n> That's certainly true ... 
but last week we were trying to make the\n> cancel protocol proof against someone with the ability to spy on\n> TCP packets in transit (which is not that easy) and now you seem\n> to be unworried about attacks that only require loading a function\n> into one of the backends. I'd prefer to do whatever we can easily\n> do to defend against that.\n\nOnly the postgres super-user can load functions. This is something we\nhave protection against. Someone snooping the wire may not even have\npermissions to access the database.\n\n> p\n> >> * fflush before forking, to eliminate double-buffering problems\n> >> between postmaster and backends.\n> \n> > Can you elaborate on what this fixes?\n> \n> I have not seen any failure cases, if that's what you mean; but I\n> haven't yet done anything with the new no-exec code. The risk is\n> that if any data is waiting in a postmaster stdio output buffer,\n> it will eventually get written twice, once by the postmaster and\n> once by the backend. You want to flush it out before forking\n> to ensure that doesn't happen. This wasn't an issue before with\n> the exec-based code, because the child process' copy of the postmaster's\n> stdio buffers got thrown away when the exec() occurred. With no\n> exec, the unflushed buffers are still there and still valid as far\n> as the stdio library in the child knows.\n\nYes. Excellent point.\n\n> \n> > The code is similar to taking a random() and doing:\n> > \trand % 10\n> > \t(rand / 10) % 10\n> > which for a random of 123456 returns 6 and 5. In the postmaster case\n> > the values are 62 and not 10, but the concept is the same. No reason to\n> > call random() twice. May be an expensive function on some platforms.\n> \n> It's not that expensive (you were doing it twice before, with no visible\n> problem). I'm concerned that the new way exposes more info about the\n> current state of the postmaster's random sequence. 
For that matter,\n> I'm probably going to want to change the computation of the cancel key\n> later on --- the code I just sent in was only\n> \tMyCancelKey = PostmasterRandom();\n> but I think it would be better to synthesize the cancel key from\n> fragments of a couple of random values. This code will do to get the\n> protocol working but I don't think it's cryptographically secure.\n\nAgain, XOR'ing with the seed should do what we need.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 17:50:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready" }, { "msg_contents": "> \n> No, postmaster jumps from main/main.c\n> \n> \tif (len >= 10 && !strcmp(argv[0] + len - 10, \"postmaster\")) \n> \t\texit(PostmasterMain(argc, argv));\n> \n> to postmaster/postmaster.c\n> \n> int\n> PostmasterMain(int argc, char *argv[])\n> { .. \n> if (argc < 4)\n> .. /* How did we get here, error! */\n> fprintf(stderr, \"PostmasterMain execv failed on %s\\n\", argv[0]);\n> \n> I tried this today and after the fix the message is the same. Will start a\n> next time tomorrow.\n> \n> -Egon\n\nOK, I can recreate it here now. Fixing now.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 17:55:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready" }, { "msg_contents": "> \n> No, postmaster jumps from main/main.c\n> \n> \tif (len >= 10 && !strcmp(argv[0] + len - 10, \"postmaster\")) \n> \t\texit(PostmasterMain(argc, argv));\n> \n> to postmaster/postmaster.c\n> \n> int\n> PostmasterMain(int argc, char *argv[])\n> { .. \n> if (argc < 4)\n> .. /* How did we get here, error! */\n> fprintf(stderr, \"PostmasterMain execv failed on %s\\n\", argv[0]);\n> \n> I tried this today and after the fix the message is the same. Will start a\n> next time tomorrow.\n\nFixed. Thank you for finding this.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 8 Jun 1998 18:28:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready" }, { "msg_contents": "\nHere is a patch that will auto-seed any request for random from the\nuser. 
This will prevent users from seeing random values that use our\npostmaster cancel seed.\n\n\n---------------------------------------------------------------------------\n\nIndex: src/backend/utils/adt/misc.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/utils/adt/misc.c,v\nretrieving revision 1.12\ndiff -c -r1.12 misc.c\n*** misc.c\t1998/02/24 03:47:26\t1.12\n--- misc.c\t1998/06/09 19:16:16\n***************\n*** 13,18 ****\n--- 13,19 ----\n */\n #include <sys/types.h>\n #include <sys/file.h>\n+ #include <time.h>\n #include \"postgres.h\"\n #include \"utils/datum.h\"\n #include \"catalog/pg_type.h\"\n***************\n*** 60,65 ****\n--- 61,69 ----\n * will return about 1/10 of the tuples in TEMP\n *\n */\n+ \n+ static bool random_initialized = false;\n+ \n bool\n oidrand(Oid o, int32 X)\n {\n***************\n*** 68,73 ****\n--- 72,88 ----\n \tif (X == 0)\n \t\treturn true;\n \n+ \t/*\n+ \t *\tWe do this because the cancel key is actually a random, so we don't\n+ \t *\twant them to be able to request random numbers using our postmaster\n+ \t *\tseeded value.\n+ \t */\n+ \tif (!random_initialized)\n+ \t{\n+ \t\tsrandom((unsigned int)time(NULL));\n+ \t\trandom_initialized = true;\n+ \t}\n+ \n \tresult = (random() % X == 0);\n \treturn result;\n }\n***************\n*** 81,86 ****\n--- 96,102 ----\n oidsrand(int32 X)\n {\n \tsrand(X);\n+ \trandom_initialized = true;\n \treturn true;\n }\n \n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 9 Jun 1998 15:20:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Cancel key now ready" } ]
[ { "msg_contents": "Hi Bruce,\nsome great steps forward. Actually postgres is called from postmaster\nand display's the usage() from postgres. I have compared it with older\nversion's but can't see anything strange here. \n\nPostgres itself runs if called with a database. BTW how should I stop\nit. I have tried stop, exit, bye, quit, and halt. ^C worked!\n\nmarliesle# su - postgres\nmarliesle$ postmaster -i 2>&1 &\n[1] 15187\nmarliesle$ Usage: /usr/local/pgsql/bin/postgres [options] [dbname]\n\t-B buffers\tset number of buffers in buffer pool\n\t-C \t\tsupress version info\n\t-D dir\t\tdata directory\n\t-E \t\techo query before execution\n\t-F \t\tturn off fsync\n\t-P port\t\tset port file descriptor\n\t-Q \t\tsuppress informational messages\n\t-S buffers\tset amount of sort memory available\n\t-d [1|2|3]\tset debug level\n\t-e \t\tturn on European date format\n\t-o file\t\tsend stdout and stderr to given filename \n\t-s \t\tshow stats after each query\n\t-v version\tset protocol version being used by frontend\n[1]+ Exit 1 postmaster -i 2>&1\n\n-Egon\n", "msg_date": "Tue, 09 Jun 1998 11:22:59 +0200", "msg_from": "Egon Schmid <[email protected]>", "msg_from_op": true, "msg_subject": "Postmaster not starting" }, { "msg_contents": "On Tue, 9 Jun 1998, Egon Schmid wrote:\n\n> Hi Bruce,\n> some great steps forward. Actually postgres is called from postmaster\n> and display's the usage() from postgres. I have compared it with older\n> version's but can't see anything strange here. \n> \n> Postgres itself runs if called with a database. BTW how should I stop\n> it. I have tried stop, exit, bye, quit, and halt. 
^C worked!\n> \n> marliesle# su - postgres\n> marliesle$ postmaster -i 2>&1 &\n> [1] 15187\n> marliesle$ Usage: /usr/local/pgsql/bin/postgres [options] [dbname]\n> \t-B buffers\tset number of buffers in buffer pool\n> \t-C \t\tsupress version info\n> \t-D dir\t\tdata directory\n> \t-E \t\techo query before execution\n> \t-F \t\tturn off fsync\n> \t-P port\t\tset port file descriptor\n> \t-Q \t\tsuppress informational messages\n> \t-S buffers\tset amount of sort memory available\n> \t-d [1|2|3]\tset debug level\n> \t-e \t\tturn on European date format\n> \t-o file\t\tsend stdout and stderr to given filename \n> \t-s \t\tshow stats after each query\n> \t-v version\tset protocol version being used by frontend\n> [1]+ Exit 1 postmaster -i 2>&1\n\nI start mine up with something like:\n\n#!/bin/tcsh\nsetenv HOMEDIR /home/db\n${HOMEDIR}/bin/postmaster -o \"-F -o ${HOMEDIR}/errout\" \\\n\t-i -p 5000 -D${HOMEDIR}/data &\n\nFirst thing I'd try is adding the -D<dir> switch so that postmaster knows\nwhere your databases are...\n\n\n", "msg_date": "Tue, 9 Jun 1998 07:39:33 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster not starting" }, { "msg_contents": "I added '-D /usr/local/pgsql/data' my first time I ever know. It worked! \nBut this means that the environment variables (PGDATA) are ignored. The\nversion I took was from 5:30am GMT (after Bruce's commit's to\npostmaster.c). 
\n\n-Egon \n\nOn Tue, 9 Jun 1998, The Hermit Hacker wrote:\n> I start mine up with something like:\n> \n> #!/bin/tcsh\n> setenv HOMEDIR /home/db\n> ${HOMEDIR}/bin/postmaster -o \"-F -o ${HOMEDIR}/errout\" \\\n> \t-i -p 5000 -D${HOMEDIR}/data &\n> \n> First thing I'd try is adding the -D<dir> switch so that postmaster knows\n> where your databases are...\n\n", "msg_date": "Tue, 9 Jun 1998 16:38:27 +0200 (MET DST)", "msg_from": "Egon Schmid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster not starting" }, { "msg_contents": "> \n> On Tue, 9 Jun 1998, Egon Schmid wrote:\n> \n> > Hi Bruce,\n> > some great steps forward. Actually postgres is called from postmaster\n> > and display's the usage() from postgres. I have compared it with older\n> > version's but can't see anything strange here. \n> > \n> > Postgres itself runs if called with a database. BTW how should I stop\n> > it. I have tried stop, exit, bye, quit, and halt. ^C worked!\n> > \n> > marliesle# su - postgres\n> > marliesle$ postmaster -i 2>&1 &\n> > [1] 15187\n> > marliesle$ Usage: /usr/local/pgsql/bin/postgres [options] [dbname]\n> > \t-B buffers\tset number of buffers in buffer pool\n> > \t-C \t\tsupress version info\n> > \t-D dir\t\tdata directory\n> > \t-E \t\techo query before execution\n> > \t-F \t\tturn off fsync\n> > \t-P port\t\tset port file descriptor\n> > \t-Q \t\tsuppress informational messages\n> > \t-S buffers\tset amount of sort memory available\n> > \t-d [1|2|3]\tset debug level\n> > \t-e \t\tturn on European date format\n> > \t-o file\t\tsend stdout and stderr to given filename \n> > \t-s \t\tshow stats after each query\n> > \t-v version\tset protocol version being used by frontend\n> > [1]+ Exit 1 postmaster -i 2>&1\n\nOK, I now realize it is still broken. Working on it now.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Tue, 9 Jun 1998 12:48:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster not starting" }, { "msg_contents": "> \n> On Tue, 9 Jun 1998, Egon Schmid wrote:\n> \n> > Hi Bruce,\n> > some great steps forward. Actually postgres is called from postmaster\n> > and display's the usage() from postgres. I have compared it with older\n> > version's but can't see anything strange here. \n> > \n> > Postgres itself runs if called with a database. BTW how should I stop\n> > it. I have tried stop, exit, bye, quit, and halt. ^C worked!\n> > \n> > marliesle# su - postgres\n> > marliesle$ postmaster -i 2>&1 &\n> > [1] 15187\n> > marliesle$ Usage: /usr/local/pgsql/bin/postgres [options] [dbname]\n> > \t-B buffers\tset number of buffers in buffer pool\n> > \t-C \t\tsupress version info\n> > \t-D dir\t\tdata directory\n> > \t-E \t\techo query before execution\n> > \t-F \t\tturn off fsync\n> > \t-P port\t\tset port file descriptor\n> > \t-Q \t\tsuppress informational messages\n> > \t-S buffers\tset amount of sort memory available\n> > \t-d [1|2|3]\tset debug level\n> > \t-e \t\tturn on European date format\n> > \t-o file\t\tsend stdout and stderr to given filename \n> > \t-s \t\tshow stats after each query\n> > \t-v version\tset protocol version being used by frontend\n> > [1]+ Exit 1 postmaster -i 2>&1\n> \n> I start mine up with something like:\n\nOK, I have just made another fix for this. I missed some of the\nhandling I needed. Please try again. Thanks.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Tue, 9 Jun 1998 13:13:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster not starting" }, { "msg_contents": "> \n> I added '-D /usr/local/pgsql/data' my first time I ever know. It worked! \n> But this means that the environment variables (PGDATA) are ignored. The\n> version I took was from 5:30am GMT (after Bruce's commit's to\n> postmaster.c). \n> \n\nStill, the problem was that the exec() with three args fix was broken\nbecause it was exec'ing postgres, not postmaster. Fixed now. It worked\nbecause your addition of -D dir added the needed three args.\n\nIt will work properly now.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 9 Jun 1998 13:30:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster not starting" } ]
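The confusion in this thread — `-D /usr/local/pgsql/data` worked while `PGDATA` appeared to be ignored — comes down to the usual precedence between a command-line flag and an environment variable. A minimal sketch of that lookup order (an assumption about the general pattern, not the actual postmaster code; `resolve_data_dir` is a hypothetical helper):

```c
#include <stdlib.h>
#include <string.h>

/* Return the data directory: an explicit -D argument wins, otherwise
 * fall back to the PGDATA environment variable.  Caller must check for
 * NULL, meaning neither was supplied.  Illustrative sketch only. */
const char *
resolve_data_dir(const char *dash_d_arg)
{
    if (dash_d_arg != NULL && dash_d_arg[0] != '\0')
        return dash_d_arg;
    return getenv("PGDATA");
}
```

In the bug Bruce fixed, the re-exec of the postmaster dropped this fallback path, which is why supplying `-D` explicitly (and thereby pushing `argc` past the re-exec threshold) masked the failure.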
[ { "msg_contents": "[reposted from pgsql-admin list]\n\n\nHi,\n\nWe have a rather simple database with 2 tables and 2 indices. The tables\ncontain char, int, and bool type fields, and both has ~60000 records now.\n\n-rw------- 1 postgres postgres 3727360 Jun 5 11:45 mail\n-rw------- 1 postgres postgres 1843200 Jun 4 02:45 mail_name_key\n-rw------- 1 postgres postgres 9977856 Jun 5 11:45 pers\n-rw------- 1 postgres postgres 1835008 Jun 4 02:45 pers_name_key\n\nWe would like to reach at least 15-20 query per second, 95 percent\nSELECT id FROM mail WHERE name='name' queries. The rest is SELECT pers or\nUPDATE in one of the two tables.\n\nWhen the number of concurrent queries is 30 or higher, the postgres \nanswers very slowly, and it writes\n\n NOTICE: SIAssignBackendId: discarding tag 2147339305\n FATAL 1: Backend cache invalidation initialization failed\n\nmessages to the log.\n\nIf the number of concurrencies are 10, then everything goes fine, but the\nnumber of queries/sec are 8. Is this the maximum loadability of postgres?\n\nIs the any fine tuning possibilities for higher performance?\n\nSome other questions:\n\n1. How often the database has to be vacuumed? (Our database is vacuumed 3 \n times a day now.)\n2. Why select * much more fast than select id? (before vacuum)\n (`id' is a field in the table)\n\nPostmaster runs with options: postmaster -B 468 -i -o -F.\n\nBackend system: FreeBSD-2.2.6R, PII-400MHz, 64MB, UW SCSI RAID\nPostgres version: 6.3.2\n\nThanks,\nMarci\n\n\n\n", "msg_date": "Tue, 9 Jun 1998 16:01:57 +0200 (MET DST)", "msg_from": "Fernezelyi Marton <[email protected]>", "msg_from_op": true, "msg_subject": "maximum of postgres ? " }, { "msg_contents": "Fernezelyi Marton wrote:\n> \n> We would like to reach at least 15-20 query per second, 95 percent\n> SELECT id FROM mail WHERE name='name' queries. 
The rest is SELECT pers or\n> UPDATE in one of the two tables.\n> \n> When the number of concurrent queries is 30 or higher, the postgres\n> answers very slowly, and it writes\n> \n> NOTICE: SIAssignBackendId: discarding tag 2147339305\n> FATAL 1: Backend cache invalidation initialization failed\n> \n> messages to the log.\n> \n> If the number of concurrencies are 10, then everything goes fine, but the\n> number of queries/sec are 8. Is this the maximum loadability of postgres?\n\nI hope that both issues will be addressed in 6.4 by removing\ninvalidation code and skipping fsync() after each SELECT...\n\nVadim\n", "msg_date": "Tue, 09 Jun 1998 22:59:51 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] maximum of postgres ?" }, { "msg_contents": "> [reposted from pgsql-admin list]\n> \n> \n> Hi,\n> \n> We have a rather simple database with 2 tables and 2 indices. The tables\n> contain char, int, and bool type fields, and both has ~60000 records now.\n> \n> -rw------- 1 postgres postgres 3727360 Jun 5 11:45 mail\n> -rw------- 1 postgres postgres 1843200 Jun 4 02:45 mail_name_key\n> -rw------- 1 postgres postgres 9977856 Jun 5 11:45 pers\n> -rw------- 1 postgres postgres 1835008 Jun 4 02:45 pers_name_key\n> \n> We would like to reach at least 15-20 query per second, 95 percent\n> SELECT id FROM mail WHERE name='name' queries. The rest is SELECT pers or\n> UPDATE in one of the two tables.\n> \n> When the number of concurrent queries is 30 or higher, the postgres \n> answers very slowly, and it writes\n> \n> NOTICE: SIAssignBackendId: discarding tag 2147339305\n> FATAL 1: Backend cache invalidation initialization failed\n> \n> messages to the log.\n> \n> If the number of concurrencies are 10, then everything goes fine, but the\n> number of queries/sec are 8. Is this the maximum loadability of postgres?\n> \n> Is the any fine tuning possibilities for higher performance?\n> \n> Some other questions:\n> \n> 1. 
How often the database has to be vacuumed? (Our database is vacuumed 3 \n> times a day now.)\n> 2. Why select * much more fast than select id? (before vacuum)\n> (`id' is a field in the table)\n> \n> Postmaster runs with options: postmaster -B 468 -i -o -F.\n> \n> Backend system: FreeBSD-2.2.6R, PII-400MHz, 64MB, UW SCSI RAID\n> Postgres version: 6.3.2\n> \n> Thanks,\n> Marci\n\n\nA couple of suggestions:\n\n Increase the number of buffers. I suggest you use 1024 or even more.\n\n Dump and reload the tables and rebuild the indexes. If this helps, try\n to do it periodically.\n\n I will post a patch to 6.3.2 on the patches and hackers lists this weekend\n that may improve your performance when there are large numbers of concurrent\n queries. This will be the S_LOCK patch. Since I will also post a version for\n 6.4, make sure you get the 6.3.2 version. I would also suggest backing up\n your source tree before applying the patch just in case I make a mistake.\n\n If the machine is paging at all under heavy load, add memory. 64Mb is not\n very much to support 30 db sessions.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n", "msg_date": "Thu, 11 Jun 1998 11:07:52 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] maximum of postgres ?" } ]
[ { "msg_contents": "On Mon, 8 Jun 1998, Fredrick Meunier wrote:\n\n> Hi,\n> \n> Jose' Soares Da Silva wrote:\n> > On Mon, 8 Jun 1998, Fredrick Meunier wrote:\n> > > CREATE VIEW ALL_TEXT (TEXTIDTF,TEXTNSEG)\n> > > AS SELECT B.BLBJ,B.NSEG FROM BLBJ B WHERE B.BTYP = 1\n> > > ERROR from backend during send_query: 'ERROR: parser: parse error \n> > > at or near \"(\"'\n> > PostgreSQL doesn't support the above syntax. Try this instead:\n> > \n> > CREATE VIEW ALL_TEXT\n> > AS SELECT B.BLBJ AS TEXTIDTF, B.NSEG AS TEXTNSEG\n> > FROM BLBJ B WHERE B.BTYP = 1\n> Sorry if I was not clear, but the tool I am using (Powersoft's Power\n> Designer) does not let me change the queries it makes for maintaining\n> it's Metadata repository, I was asking when (if?) there may be support\n> for this syntax in PostgreSQL.\nWell, this syntax is SQL92 but PostgreSQL doesn't support it yet,\nI'm afraid there's not a work around to solve this problem.\n> \n> > > What are the chances of getting view creation syntax like the above\n> > > accepted?\n> > >\n> > > The other problem is:\n> > > CREATE TABLE MPDREFR\n> > > ( REFR int4 NOT NULL,\n> > > SRCE int4 NOT NULL,\n> > > TRGT int4 NOT NULL,\n> > > LABL varchar(254) ,\n> > > URUL int2 ,\n> > > DRUL int2 ,\n> > > MAND int2 ,\n> > > CPRT int2 ,\n> > > TOBJ int2 ,\n> > > COBJ varchar(80) ,\n> > > SOID int4 ,\n> > > FKCN varchar(64) ,\n> > > CMIN varchar(10) ,\n> > > CMAX varchar(10) ,\n> > > NGEN int2 )'\n> > > ERROR from backend during send_query: 'ERROR: create: system \n> > > attribute named \"cmin\"'\n> > \n> > cmin and cmax are reserved words, try to rename to C_MIN C_MAX for \n> > example.\n> Again, I can't change the tool's schema definition, but since cmin and\n> cmax are legal SQL92 column names I was wondering if there were any\n> enhancements than could be made to PostgreSQL's system attributes to\n> prevent clashes with SQL92-legal names.\n> \n> > > Can the system attribute limitation be removed, or can the system\n> > > attributes be 
renamed to not conflict with legal SQL92 column names?\n\nPostgreSQL has five internal column names that you can't use\ntake a look:\n\ncreate table niente (avoid int);\ninsert into niente values (1);\nselect oid,cmin,cmax,xmin,xmax from niente;\n\n oid|cmin|cmax| xmin|xmax\n------+----+----+-----+----\n199369| 0| 0|45781| 0\n(1 row)\n\nfrom man sql...\n\nFIELDS AND COLUMNS\n Fields\n A field is either an attribute of a given class or one of\n the following:\n oid, xmin, xmax, cmin, cmax.\n\n Oid stands for the unique identifier of an instance which\n is added by Postgres to all instances automatically. Oids\n are not reused and are 32 bit quantities.\n\n Xmin, cmin, xmax and cmax stand respectively for the iden�\n tity of the inserting transaction, the command identifier\n within the transaction, the identity of the deleting\n transaction and its associated deleting command. For fur�\n ther information on these fields consult [STON87]. \n\nI think that it's impossible to change this names. \nBut you can send this question to the psql-hackers list.\n Ciao, Jose'\n\n", "msg_date": "Tue, 9 Jun 1998 15:08:48 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Missing SQL Syntax & Problem with Create Table" } ]
[ { "msg_contents": "jerome doucerain wrote:\n> \n> Please, can you tell me if row-level lock is functional \n> in Postgres 6.3.2 and I have to do to get it ?\n...\n\nDavid Hartwig wrote:\n> \n> Everyone especially Vadim,\n> I agree with Marc. Row locking is huge. In my user community, it is\n> unacceptable to wait for up to 30 minutes (or even one minute) for a report to\n> finish so that a users can commit an invoice or commit a change to a customer\n> attribute. I can deal with it for now because my databases are batch loaded\n> for reporting purposes only. However, I plan to go forward with some pretty\n> important projects that assume that record/page locking will exist within the\n> next 12 month or so. Am I being too presumptuous?\n\nThis is my old posting:\n---\nSubject: Re: [QUESTIONS] Locking tables ? \n Date: Sat, 25 Oct 1997 00:40:37 +0700 \n\nWell, low-level locking is claimed by me for very long time (one year)\nand still isn't implemented. This is my plan:\n\n6.3: remove time-travel and re-design transaction manager\n (to speed up things and make transaction id allocations\n serialized)\n6.4: implement shared system cache (to speed up things and \n get synchronization of some (many) things)\n6.5: low-level locking (with all 4 transaction isolation levels\n implemented)\n\nNote that low-level locking implementation will use 6.3 and 6.4 \nfeatures above... And non-overwriting feature of postgres.\n\nAlso note that this is \"optimistic\" plan. There are many another\nthings to do ...\n---\n\nTransaction manager was not re-designed in 6.3 - I hope\nto do this in 6.4 (and shared catalog cache too)...\n\nTaking into account our 6.4 release date (1 Oct), 6.5 (with\nlow level locking) should be released ~ 1 Mar 1999.\n\nVadim\n", "msg_date": "Tue, 09 Jun 1998 23:47:06 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Row-level lock" } ]
[ { "msg_contents": "Hey, guys...\n I suppose this probably belongs in questions, but I thought I might\nget a quicker answer here. I have a table in a customer's database\nthat has gotten quite large via lack of vacuuming (57MB). Queries\ninvolving this table started failing in the form of the backend just\nsitting there chugging away on the CPU (but not disk so much) for\nhours on end.\n This began about 24 hours ago, and as of about 12 hours ago, no\nqueries on this table work. I started a vacuum about 3 hours ago, and\nit has had upper-90s percent CPU usage the whole time, and still\nhasn't completed.\n Any ideas on what might be going on here? And, if postgres won't\nbe able to access the table, is there any hope of extracting rows from\nthe raw database file, such that I could reconstruct the table?\n Please cc responses to me directly, as I'm only on the digest list,\nand thanks in advance for any advice/help!\n\n-Brandon :)\n", "msg_date": "Tue, 9 Jun 1998 15:21:12 -0500 (CDT)", "msg_from": "Brandon Ibach <[email protected]>", "msg_from_op": true, "msg_subject": "Table corrupt?" }, { "msg_contents": "> \n> Hey, guys...\n> I suppose this probably belongs in questions, but I thought I might\n> get a quicker answer here. I have a table in a customer's database\n> that has gotten quite large via lack of vacuuming (57MB). Queries\n> involving this table started failing in the form of the backend just\n> sitting there chugging away on the CPU (but not disk so much) for\n> hours on end.\n> This began about 24 hours ago, and as of about 12 hours ago, no\n> queries on this table work. I started a vacuum about 3 hours ago, and\n> it has had upper-90s percent CPU usage the whole time, and still\n> hasn't completed.\n> Any ideas on what might be going on here? 
And, if postgres won't\n> be able to access the table, is there any hope of extracting rows from\n> the raw database file, such that I could reconstruct the table?\n> Please cc responses to me directly, as I'm only on the digest list,\n> and thanks in advance for any advice/help!\n\npg_dump -t tablename, drop and reload?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 9 Jun 1998 16:32:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Table corrupt?" }, { "msg_contents": "Bruce Momjian said:\n> \n> > Any ideas on what might be going on here? And, if postgres won't\n> > be able to access the table, is there any hope of extracting rows from\n> > the raw database file, such that I could reconstruct the table?\n> \n> pg_dump -t tablename, drop and reload?\n> \n I thought pg_dump got the data out via queries through the backend?\n(But, then, I could be wrong... please correct me if so...)\n\n-Brandon :)\n", "msg_date": "Tue, 9 Jun 1998 16:13:12 -0500 (CDT)", "msg_from": "Brandon Ibach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Table corrupt?" }, { "msg_contents": "> \n> Bruce Momjian said:\n> > \n> > > Any ideas on what might be going on here? And, if postgres won't\n> > > be able to access the table, is there any hope of extracting rows from\n> > > the raw database file, such that I could reconstruct the table?\n> > \n> > pg_dump -t tablename, drop and reload?\n> > \n> I thought pg_dump got the data out via queries through the backend?\n> (But, then, I could be wrong... please correct me if so...)\n> \n> -Brandon :)\n> \n\nI gets the data out via COPY, which is slightly different than a normal\nquery that does through the parser/optimizer/executor. 
It is possible\nyou just have a lot of extra data and it is taking time to vacuum.\n\nIf there is a real problem, I would dump the entire database and reload\nit.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 9 Jun 1998 17:16:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Table corrupt?" }, { "msg_contents": "Bruce Momjian said:\n> \n> > \n> > Bruce Momjian said:\n> > > \n> > > > Any ideas on what might be going on here? And, if postgres won't\n> > > > be able to access the table, is there any hope of extracting rows from\n> > > > the raw database file, such that I could reconstruct the table?\n> > > \n> > > pg_dump -t tablename, drop and reload?\n> > > \n> > I thought pg_dump got the data out via queries through the backend?\n> > (But, then, I could be wrong... please correct me if so...)\n> > \n> > -Brandon :)\n> > \n> \n> I gets the data out via COPY, which is slightly different than a normal\n> query that does through the parser/optimizer/executor. It is possible\n> you just have a lot of extra data and it is taking time to vacuum.\n> \n Hmmm... well, the table may be 57 Meg, but then, the backend\nrunning the vacuum has consumed 5 1/2 hours of CPU time so far, and\nstill going strong, so something tells me there may be something\ndeeper. :)\n\n> If there is a real problem, I would dump the entire database and reload\n> it.\n> \n Probably good advice, tho the rest of the tables seem to be just\nfine. *shrug*\n\n-Brandon :)\n\n", "msg_date": "Tue, 9 Jun 1998 18:07:44 -0500 (CDT)", "msg_from": "Brandon Ibach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Table corrupt?" } ]
[ { "msg_contents": "PG_VERSION is now 6.4. initdb everyone. Or did we decide not to do\nthis if we could help it. I think we will still need to run initdb, and\nmove the data files.\n\n\n -- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 9 Jun 1998 19:01:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "now 6.4" }, { "msg_contents": "Bruce Momjian:\n> PG_VERSION is now 6.4. initdb everyone. Or did we decide not to do\n> this if we could help it. I think we will still need to run initdb, and\n> move the data files.\n\nI had thought we were going to avoid changing this unless there were changes\nto persistant structures. Do you know what changed to require this?\n\nThanks\n\n-dg\n \nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n", "msg_date": "Wed, 10 Jun 1998 11:31:25 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "On Wed, 10 Jun 1998, David Gould wrote:\n\n> Bruce Momjian:\n> > PG_VERSION is now 6.4. initdb everyone. Or did we decide not to do\n> > this if we could help it. I think we will still need to run initdb, and\n> > move the data files.\n> \n> I had thought we were going to avoid changing this unless there were changes\n> to persistant structures. Do you know what changed to require this?\n\n\tHuh? 
PG_VERSION should reflect that which we release, so that ppl\nknow what version they are running, for bug reports and whatnot...\n\n\n", "msg_date": "Wed, 10 Jun 1998 14:45:01 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "> \n> Bruce Momjian:\n> > PG_VERSION is now 6.4. initdb everyone. Or did we decide not to do\n> > this if we could help it. I think we will still need to run initdb, and\n> > move the data files.\n> \n> I had thought we were going to avoid changing this unless there were changes\n> to persistant structures. Do you know what changed to require this?\n> \n> Thanks\n\nThe contents of the system tables are going to change between releases,\nalmost for sure. What I think we are going to do is have people pg_dump\n-schema their databases, mv /data to /data.old, run initdb, run to\ncreate the old schema, and move the data/index files back into place.\nI will probably write the script and have people test it.\n\nAs long as we don't change the data/index structure, we are OK. Is that\ngood, or did you think we would be able to get away without system table\nchanges?\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 10 Jun 1998 14:51:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "> > Bruce Momjian:\n> > > PG_VERSION is now 6.4. initdb everyone. Or did we decide not to do\n> > > this if we could help it. I think we will still need to run initdb, and\n> > > move the data files.\n> > \n> > I had thought we were going to avoid changing this unless there were changes\n> > to persistant structures. 
Do you know what changed to require this?\n> > \n> > Thanks\n> \n> The contents of the system tables are going to change between releases,\n> almost for sure. What I think we are going to do is have people pg_dump\n> -schema their databases, mv /data to /data.old, run initdb, run to\n> create the old schema, and move the data/index files back into place.\n> I will probably write the script and have people test it.\n> \n> As long as we don't change the data/index structure, we are OK. Is that\n> good, or did you think we would be able to get away without system table\n> changes?\n\nI have no problem with catalog changes and dumping the schema if we can\nwrite a script to help them do it. I would hope we can avoid having to make\nsomeone dump and reload their own data. I am thinking that it could be\npretty inconvenient to dump/load and reindex something like a 50GB table with\n6 indexes. \n\nThanks for the clarification.\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n", "msg_date": "Wed, 10 Jun 1998 12:06:42 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "> \n> On Wed, 10 Jun 1998, David Gould wrote:\n> \n> > Bruce Momjian:\n> > > PG_VERSION is now 6.4. initdb everyone. Or did we decide not to do\n> > > this if we could help it. I think we will still need to run initdb, and\n> > > move the data files.\n> > \n> > I had thought we were going to avoid changing this unless there were changes\n> > to persistant structures. Do you know what changed to require this?\n> \n> \tHuh? PG_VERSION should reflect that which we release, so that ppl\n> know what version they are running, for bug reports and whatnot...\n\nIt also requires the postmaster/postgres to match that version so they\ncan run. 
PG_VERSION gets set at initdb time, so if we update it, we\nbasically require them to run initdb so it matches the\nbackend/postmaster version.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 10 Jun 1998 15:10:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "> I have no problem with catalog changes and dumping the schema if we can\n> write a script to help them do it. I would hope we can avoid having to make\n> someone dump and reload their own data. I am thinking that it could be\n> pretty inconvenient to dump/load and reindex something like a 50GB table with\n> 6 indexes. \n> \n> Thanks for the clarification.\n\nYep, I think this is do'able, UNLESS Vadim decides he needs to change\nthe structure of the data/index files. At that point, we are lost.\n\nIn the past, we have made such changes, and they were very much needed. \nNot sure about the 6.4 release, but no such changes have been made yet.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 10 Jun 1998 15:12:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > I have no problem with catalog changes and dumping the schema if we can\n> > write a script to help them do it. I would hope we can avoid having to make\n> > someone dump and reload their own data. 
I am thinking that it could be\n> > pretty inconvenient to dump/load and reindex something like a 50GB table with\n> > 6 indexes.\n> >\n> > Thanks for the clarification.\n> \n> Yep, I think this is do'able, UNLESS Vadim decides he needs to change\n> the structure of the data/index files. At that point, we are lost.\n\nUnfortunately, I want to change btree!\nBut not HeapTuple structure...\n\nVadim\n", "msg_date": "Thu, 11 Jun 1998 09:36:12 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": ">Bruce Momjian:\n>> PG_VERSION is now 6.4. initdb everyone. Or did we decide not to do\n>> this if we could help it. I think we will still need to run initdb, and\n>> move the data files.\n>\n>I had thought we were going to avoid changing this unless there were changes\n>to persistant structures. Do you know what changed to require this?\n\nHumm... I think:\n\n\tEven if catalogs would not be changed, initdb is required\n\tsince we have added a new function octet_length().\n\nPlease correct me if I'm wrong.\n---\nTatsuo Ishii\[email protected]\n", "msg_date": "Thu, 11 Jun 1998 10:45:46 +0900", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] now 6.4 " }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > > I have no problem with catalog changes and dumping the schema if we can\n> > > write a script to help them do it. I would hope we can avoid having to make\n> > > someone dump and reload their own data. I am thinking that it could be\n> > > pretty inconvenient to dump/load and reindex something like a 50GB table with\n> > > 6 indexes.\n> > >\n> > > Thanks for the clarification.\n> > \n> > Yep, I think this is do'able, UNLESS Vadim decides he needs to change\n> > the structure of the data/index files. At that point, we are lost.\n> \n> Unfortunately, I want to change btree!\n> But not HeapTuple structure...\n\nSo we will just need to re-create indexes. 
Sounds OK to me, but\nfrankly, I am not sure what the objection to dump/reload is.\n\nVadim, you make any changes you feel are necessary, and near release\ntime, we will develop the best migration script we can.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 10 Jun 1998 22:33:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> So we will just need to re-create indexes. Sounds OK to me, but\n> frankly, I am not sure what the objection to dump/reload is.\n\nIt takes too long time to reload big tables...\n\n> Vadim, you make any changes you feel are necessary, and near release\n> time, we will develop the best migration script we can.\n\nNice.\n\nVadim\n", "msg_date": "Thu, 11 Jun 1998 11:19:40 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "> Even if catalogs would not be changed, initdb is required\n> since we have added a new function octet_length().\n> \n> Please correct me if I'm wrong.\n\nAnd functions for implicit conversion between the old 1-byte \"char\" type\nand the new 1-byte \"char[1]\" type, same for \"name\" to/from \"text\".\n\nI have int8 (64-bit integers) ready to put into the backend. Once enough\nplatforms figure out how to get 64-bit integers defined, then we can\nconsider using them for numeric() and decimal() types also. Alphas and\nix86/Linux should already work. \n\nI had assumed that PowerPC had 64-bit ints (along with 64-bit\naddressing) but now suspect I was wrong. If anyone volunteers info on\nhow to get 64-bit ints on their platform I'll including that in the\nfirst version. 
For gcc on ix86, \"long long int\" does the trick, and for\nAlphas \"long int\" should be enough.\n\n - Tom\n", "msg_date": "Fri, 12 Jun 1998 05:34:36 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": ">And functions for implicit conversion between the old 1-byte \"char\" type\n>and the new 1-byte \"char[1]\" type, same for \"name\" to/from \"text\".\n>\n>I have int8 (64-bit integers) ready to put into the backend. Once enough\n>platforms figure out how to get 64-bit integers defined, then we can\n>consider using them for numeric() and decimal() types also. Alphas and\n>ix86/Linux should already work. \n>\n>I had assumed that PowerPC had 64-bit ints (along with 64-bit\n>addressing) but now suspect I was wrong. If anyone volunteers info on\n>how to get 64-bit ints on their platform I'll including that in the\n>first version. For gcc on ix86, \"long long int\" does the trick, and for\n>Alphas \"long int\" should be enough.\n\nRegarding PowerPC, I successfully compiled a test program below and\ngot result \"8\" usging gcc 2.8.0 on MkLinux(DR2.1).\n\nmain()\n{\n long long int a;\n printf(\"%d\\n\",sizeof(a));\n}\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Fri, 12 Jun 1998 14:52:30 +0900", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] now 6.4 " }, { "msg_contents": "> \n> > Even if catalogs would not be changed, initdb is required\n> > since we have added a new function octet_length().\n> > \n> > Please correct me if I'm wrong.\n> \n> And functions for implicit conversion between the old 1-byte \"char\" type\n> and the new 1-byte \"char[1]\" type, same for \"name\" to/from \"text\".\n> \n> I have int8 (64-bit integers) ready to put into the backend. Once enough\n> platforms figure out how to get 64-bit integers defined, then we can\n> consider using them for numeric() and decimal() types also. 
Alphas and\n> ix86/Linux should already work. \n> \n> I had assumed that PowerPC had 64-bit ints (along with 64-bit\n> addressing) but now suspect I was wrong. If anyone volunteers info on\n> how to get 64-bit ints on their platform I'll including that in the\n> first version. For gcc on ix86, \"long long int\" does the trick, and for\n> Alphas \"long int\" should be enough.\n\nI thought all the GNU sites would work.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 12 Jun 1998 07:31:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "On Thu, 11 Jun 1998, Vadim Mikheev wrote:\n\n> Bruce Momjian wrote:\n> > \n> > So we will just need to re-create indexes. Sounds OK to me, but\n> > frankly, I am not sure what the objection to dump/reload is.\n> \n> It takes too long time to reload big tables...\n\n\tI have to agree here...the one application that *I* really use\nthis for is an accounting server...any downtime is unacceptable, because\nthe whole system revolves around the database backend.\n\n\tTake a look at Michael Richards application (a search engine)\nwhere it has several *million* rows, and that isn't just one table.\nMichael, how long would it take to dump and reload that? \n\n\tHow many ppl *don't* upgrade because of how expensive it would be\nfor them to do, considering that their applications \"work now\"?\n\n\tNow, I liked the idea that was presented about moving the\ndatabase directories out of the way and then moving them back in after an\ninitdb...is this not doable? 
What caveats are there to doing this?\nIndividual database's will be missing fields added in the release upgrade,\nso if you want some of the v6.4 new features, you'd have to dump the\nindividual database and then reload it, but if you don't care, you'd have\nsome optimizations associated with the new release?\n\n Marc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 29 Jun 1998 20:37:58 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "On Wed, 10 Jun 1998, Bruce Momjian wrote:\n\n> So we will just need to re-create indexes. Sounds OK to me, but\n> frankly, I am not sure what the objection to dump/reload is.\n\n\tThe cost associated with the downtime required in order to do the\ndump/reload...how much money is a company losing while their database is\ndown to do the upgrade?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 29 Jun 1998 20:39:21 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "> \tNow, I liked the idea that was presented about moving the\n> database directories out of the way and then moving them back in after an\n> initdb...is this not doable? What caveats are there to doing this?\n> Individual database's will be missing fields added in the release upgrade,\n> so if you want some of the v6.4 new features, you'd have to dump the\n> individual database and then reload it, but if you don't care, you'd have\n> some optimizations associated with the new release?\n\nWe will move the old data files out of the way, run initdb, reload a\npg_dump with schema-only, then move the data files back into the proper\nlocations, and perhaps drop/recreate all indexes. 
They will have all\nthe features. They have just kept their raw data files.\n\nHow long does re-indexing the tables take vs. reloading and re-indexing?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 29 Jun 1998 19:42:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "On Mon, 29 Jun 1998, The Hermit Hacker wrote:\n\n> On Thu, 11 Jun 1998, Vadim Mikheev wrote:\n> \n> > Bruce Momjian wrote:\n> > > \n> > > So we will just need to re-create indexes. Sounds OK to me, but\n> > > frankly, I am not sure what the objection to dump/reload is.\n> > \n> > It takes too long to reload big tables...\n> \n> \tI have to agree here...the one application that *I* really use\n> this for is an accounting server...any downtime is unacceptable, because\n> the whole system revolves around the database backend.\n> \n> \tTake a look at Michael Richards' application (a search engine)\n> where it has several *million* rows, and that isn't just one table.\n> Michael, how long would it take to dump and reload that? \n> \n> \tHow many ppl *don't* upgrade because of how expensive it would be\n> for them to do, considering that their applications \"work now\"?\n\nI cringe when it comes time to upgrade and now with the main site getting\n~1000 hits/day I can't have the downtime (this web site is really\nseasonal). 
Not only is there dump/reload to do, I also have to make sure\nto recompile the cgi stuff when libpq changes.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2 \n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n", "msg_date": "Mon, 29 Jun 1998 20:06:33 -0400 (edt)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "On Mon, 29 Jun 1998, Bruce Momjian wrote:\n\n> > \tNow, I liked the idea that was presented about moving the\n> > database directories out of the way and then moving them back in after an\n> > initdb...is this not doable? What caveats are there to doing this?\n> > Individual databases will be missing fields added in the release upgrade,\n> > so if you want some of the v6.4 new features, you'd have to dump the\n> > individual database and then reload it, but if you don't care, you'd have\n> > some optimizations associated with the new release?\n> \n> We will move the old data files out of the way, run initdb, reload a\n> pg_dump with schema-only, then move the data files back into the proper\n> locations, and perhaps drop/recreate all indexes. They will have all\n> the features. They have just kept their raw data files.\n> \n> How long does re-indexing the tables take vs. reloading and re-indexing?\n\n\tIs re-indexing required? Will the old indexes work with a new\nrelease, albeit slower? 
Or just not work at all?\n\n\tAs for dropping/recreating all indices...that isn't really so bad,\nanyway...once all the data is there, the database can go live...albeit\n*very* slow, in some cases, if I have 4 indices on a table, each one built\nshould improve the speed of queries, but each build shouldn't limit the\nability for the database to be up...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 29 Jun 1998 21:50:11 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "> \n> On Mon, 29 Jun 1998, Bruce Momjian wrote:\n> \n> > > \tNow, I liked the idea that was presented about moving the\n> > > database directories out of the way and then moving them back in after an\n> > > initdb...is this not doable? What caveats are there to doing this?\n> > > Individual databases will be missing fields added in the release upgrade,\n> > > so if you want some of the v6.4 new features, you'd have to dump the\n> > > individual database and then reload it, but if you don't care, you'd have\n> > > some optimizations associated with the new release?\n> > \n> > We will move the old data files out of the way, run initdb, reload a\n> > pg_dump with schema-only, then move the data files back into the proper\n> > locations, and perhaps drop/recreate all indexes. They will have all\n> > the features. They have just kept their raw data files.\n> > \n> > How long does re-indexing the tables take vs. reloading and re-indexing?\n> \n> \tIs re-indexing required? Will the old indexes work with a new\n> release, albeit slower? 
Or just not work at all?\n\nVadim is changing the index format for 6.4.\n\n> \tAs for dropping/recreating all indices...that isn't really so bad,\n> anyway...once all the data is there, the database can go live...albeit\n> *very* slow, in some cases, if I have 4 indices on a table, each one built\n> should improve the speed of queries, but each build shouldn't limit the\n> ability for the database to be up...\n\nDoesn't index creation lock the table?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 29 Jun 1998 21:10:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "On Mon, 29 Jun 1998, Bruce Momjian wrote:\n\n> > \tAs for dropping/recreating all indices...that isn't really so bad,\n> > anyway...once all the data is there, the database can go live...albeit\n> > *very* slow, in some cases, if I have 4 indices on a table, each one built\n> > should improve the speed of queries, but each build shouldn't limit the\n> > ability for the database to be up...\n> \n> Doesn't index creation lock the table?\n\n\tI'm not sure why it would...creation of indices doesn't write\nanything to the table itself, just reads...no?\n\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 29 Jun 1998 23:01:20 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > As for dropping/recreating all indices...that isn't really so bad,\n> > anyway...once all the data is there, th edatabase can go live...albeit\n> > *very* slow, in some cases, if I have 4 indices on a table, each one built\n> > should improve the speed of queries, but each build shouldn't limit the\n> > ability for the database to be up...\n> \n> Doesn't index creation lock the table?\n\nLock for read...\n\nVadim\n", "msg_date": "Tue, 30 Jun 1998 12:20:02 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] now 6.4" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > > As for dropping/recreating all indices...that isn't really so bad,\n> > > anyway...once all the data is there, th edatabase can go live...albeit\n> > > *very* slow, in some cases, if I have 4 indices on a table, each one built\n> > > should improve the speed of queries, but each build shouldn't limit the\n> > > ability for the database to be up...\n> > \n> > Doesn't index creation lock the table?\n> \n> Lock for read...\n\nYep, good point. Reads are OK.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 30 Jun 1998 00:36:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] now 6.4" } ]
[ { "msg_contents": "Once upon a time Bruce Momjian wrote:\n> OK, here is my argument for inlining tas().\n> \n> This output is for startup, a single query returning a single row result\n> on an indexed column, and shutdown. Guess who is at the top of the\n> list, tas(). Mcount() is a gprof internal counter, so it doesn't\n> \"count\". tas() is showing 0.01 cumulative seconds. Psql shows\n> wallclock time for the query at 0.13 seconds. That 0.01 is pretty\n> significant.\n...\n> % cumulative self self total\n> time seconds seconds calls ms/call ms/call name\n> 20.0 0.02 0.02 mcount (463)\n> 10.0 0.03 0.01 5288 0.00 0.00 _tas [31]\n\n\nAs promised, I did a little testing to see what part of this overhead\nis measurement error due to the nature of gprof and to see what the real\noverhead of various spinlock implementations is. Here is what I learned.\n\nSection 1. Summary\n\n1.1. Spinlocks are pretty cheap.\n\n Spinlocks are relatively cheap operations. The worst implementation I\n tested took 0.34 microseconds for a spinlock roundtrip (take and release).\n The current (in CVS as of May 98) code takes 0.28 microseconds. The best\n hand tuned asm implementation took only 0.14 microseconds.\n\n This is fast enough that I had to use a huge iteration count (100,000,000)\n to get reasonably large run times.\n\n Table 1.1 Overheads of spinlock roundtrip in microseconds\n\n Test Case Time (usec) Notes\n\n Original 0.14 S_LOCK in 6.3.2 (no backoff, asm)\n InlineTas 0.15 Patch to be submitted (backoff, _inline_)\n TasFunction2 0.20 Refined S_LOCK patch TAS as function.\n MacroSLOCK 0.28 May 98 S_LOCK patch as in CVS\n \n\n1.2. gprof doesn't work for small quick functions.\n\n gprof introduces severe experimental error for small quick functions.\n According to the gprof profile done by Bruce, 5288 tas calls took 0.1\n second. That would require the spinlock roundtrips to take almost 19\n microseconds each, not 0.28 microseconds. 
So in reality the 5288 calls\n took only 0.0015 seconds, not 0.1 seconds.\n\n So gprof has overestimated the cost of the spinlock by 68 to 1.\n\n Perhaps the spinlock function is so short and quick compared to the\n mcount overhead added to the function prolog that the overhead dominates\n the measurement. gprof remains a useful tool for larger functions with\n longer runtimes, but must be considered very suspect for tiny functions.\n\n\n1.3 Function calls are pretty cheap or Macros may not save all that much.\n\n The difference between the current (late May) macro version and the same\n code removed to a separate function and called with three arguments was\n only 0.06 microseconds. That is 60 nanoseconds for the argument passing,\n the call, and the return.\n\n I suspect that on the x86 \"architecture\" the limited number of registers\n means that inline code has to save results to memory as it goes along\n which may offset to some extent the overhead of the register saves for\n a function call. \n\n\n1.4 There are mysteries. Beware.\n\n In some of the test cases there was significant timing variation from\n run to run even though the test conditions were apparently identical.\n Even more strangely, the variation in time was not random but appeared\n to represent two different modes. And, the variation was itself repeatable.\n\n Here are the raw times in CPU seconds from two different experiments each\n run six consecutive times:\n\n case 1: 49.81, 49.43, 40.68, 49.51, 40.68, 40.69\n clusters about 40.7 and 49.5 seconds\n\n case 2: 39.34, 29.09, 28.65, 40.34, 28.64, 28.64\n clusters about 28.9 and 39.6 seconds\n\n Note that the testrun times have a bimodal distribution with one group of\n fast runs clustered tightly about one time and then a much slower group\n clustered tightly about the second time. 
The difference between groups is\n huge (about 25%) while the difference within a group is very small (probably\n within the measurement error).\n \n I have no explanation for this variation. Possibly it is some interaction\n of where the program is loaded and the state of the memory hierarchy, but\n even this is hard to sustain. I would be very curious to hear of any\n plausible explanations.\n\n\n1.5 Timing very small functions in isolation with large counts is effective.\n\n Notwithstanding the strange behavior in section 1.4, it is possible to\n time differences in functions that amount to the addition or deletion of\n one or two instructions. For example, the TasFunction and TasFunction2\n cases below are almost but not quite identical yet show noticeably\n different runtimes.\n\n \n1.6. New patch to follow.\n\n The current S_LOCK and TAS() implementations (my patch of late May) are\n slower than they need to be and cause more code bloat than they need to.\n The bloat is caused by using a macro to inline a relatively complex bit\n of code that is only used in the blocked lock case. I suspect the slowness\n is caused at least partly by the macro as it requires more registers.\n\n I have developed a new patch that separates out the lock available case\n from the busywaiting case and that uses the GCC _inline_ facility to make\n the asm interface still look as clean as a function while not costing\n anything. For a preview, see\n\n\n\nSection 2. Test Procedure\n\nMy test takes and releases a spinlock 100,000,000 times and measures the\ntotal CPU time. I ran this test with many variations of spinlock\nimplementation. I also ran a baseline test that has just the loop and call\noverheads with no actual spinlock so that we can separate out just the S_LOCK\ntime. 
The test harness code appears below and the variant spinlock\nimplementations, generated assembler output and raw result timings appear\nlater in this message.\n\nTesting was done on \"leslie\" (P133 HX chipset 512K L2) running Linux 2.0.33.\nThe system was up and running X but no other workloads. I avoided typing\nor touching the mouse during the test. Each variation was run three times\nand the results averaged. For some tests there was significant variation in\ntimes for the three iterations. In this case another set of three was run\nand the average of six runs used.\n\n\n\nSection 2.1 Test Harness Code\n\n/*\n * Test s_lock timing with variations\n */\ntypedef unsigned char slock_t;\nvolatile slock_t the_lock;\n\nint main(void)\n{\n int i = 0;\n\n the_lock = 0;\n while (i < 100000000) { /* 100 million iterations */\n i = tryit(&the_lock, &i); /* take and release spinlock */\n }\n return i & 1;\n}\n\n/*\n * Take and release lock\n */\nint tryit(volatile slock_t *lock, int *p)\n{\n int i;\n S_LOCK(lock);\n i = ++(*p);\t\t\t/* just to make it more realistic */\n S_UNLOCK(lock);\n return *p;\n}\n\n\n\nSection 3.0 Test Case Summary\n\nCase Tag Per Iteration Comments\n============= ============= ============================================\nTZero: 0.14 usec compare lock to 0, do nothing\nTZeroNoCall 0.17 usec compare lock to 0, call s_lock if not\nTZeroCall 0.30 usec call function that compares lock to 0\nTasFunction 0.45 usec lock spinlock in separate tas() function\nTasFunction2 0.37 usec improved separate tas function\nSlockAsmMacro 0.31 usec Inline xchgb in S_LOCK macro, call s_lock if\n needed. Note strange variation in recorded\n times. I have no explanation.\nOriginal 0.31 usec The original S_LOCK from 6.3.2\nMacroSLOCK 0.45 usec Current in CVS as of 5/30/98. Tends to bloat.\nAllFunctions 0.51 usec Call s_lock() function. tas() as function\nInlineTas 0.32 usec Use function inlining. 
Patch to follow.\n\n\n\nSection 3.1 Test Cases and Raw Timing Results.\n\n\nCase Tag Per Iteration Comments\n============= ============= ============================================\nTZero:\t\t0.14 usec\tcompare lock to 0, do nothing\n\n\n#define TAS(lock) (*lock != 0)\n#define S_LOCK(lock) TAS(lock)\n\ntryit:\n pushl %ebp\n movl %esp,%ebp\n movl 8(%ebp),%eax\n movl 12(%ebp),%edx\n movb (%eax),%cl\n incl (%edx)\n movb $0,(%eax)\n movl (%edx),%eax\n leave\n ret\n\nCPU times: 14.32, 14.32, 14.32\n\n\n\n\nCase Tag Per Iteration Comments\n============= ============= ============================================\nTZeroNoCall\t0.17 usec\tcompare lock to 0, call s_lock if not\n\n\n#define TAS(lock) (*lock != 0)\n#define S_LOCK(lock) if (TAS(lock)) s_lock(lock, __FILE__, __LINE__); else\n\ntryit:\n pushl %ebp\n movl %esp,%ebp\n pushl %esi\n pushl %ebx\n movl 8(%ebp),%ebx\n movl 12(%ebp),%esi\n movb (%ebx),%al\n testb %al,%al\n je .L14\n pushl $141\n pushl $.LC1\n pushl %ebx\n call s_lock\n.L14:\n incl (%esi)\n movb $0,(%ebx)\n movl (%esi),%eax\n leal -8(%ebp),%esp\n popl %ebx\n popl %esi\n leave\n ret\n\nCPU times: 17.33, 17.35, 17.31\n\n\n\n\nCase Tag Per Iteration Comments\n============= ============= ============================================\nTZeroCall 0.30 usec\tcall function that compares lock to 0\n\n#define TAS(lock) tas_test(lock)\n#define S_LOCK(lock) if (TAS(lock)) s_lock(lock, __FILE__, __LINE__); else\n\nint tas_test(volatile slock_t *lock)\n{\n return *lock == 0;\n}\n\ntas_test:\n pushl %ebp\n movl %esp,%ebp\n movl 8(%ebp),%eax\n movb (%eax),%al\n testb %al,%al\n setne %al\n andl $255,%eax\n leave\n ret\n\ntryit:\n pushl %ebp\n movl %esp,%ebp\n pushl %esi\n pushl %ebx\n movl 8(%ebp),%ebx\n movl 12(%ebp),%esi\n pushl %ebx\n call tas_test\n addl $4,%esp\n testl %eax,%eax\n je .L13\n pushl $141\n pushl $.LC1\n pushl %ebx\n call s_lock\n.L13:\n incl (%esi)\n movb $0,(%ebx)\n movl (%esi),%eax\n leal -8(%ebp),%esp\n popl %ebx\n popl %esi\n leave\n ret\n\nCPU 
times: 30.16, 30.15, 30.14\n\n\n\n\nCase Tag Per Iteration Comments\n============= ============= ============================================\nTasFunction\t0.45 usec\tlock spinlock in separate tas() function\n\n\n#define TAS(lock) tas(lock)\n#define S_LOCK(lock) if (TAS(lock)) s_lock(lock, __FILE__, __LINE__); else\n\nint tas(volatile slock_t *lock)\n{\n slock_t _res = 1;\n\n __asm__(\"lock; xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1));\n return _res != 0;\n}\n\ntas:\n pushl %ebp\n movl %esp,%ebp\n movl 8(%ebp),%eax\n movl $1,%edx\n#APP\n lock; xchgb %dl,(%eax)\n#NO_APP\n testb %dl,%dl\n setne %al\n andl $255,%eax\n leave\n ret\n\nCPU times: 46.13, 47.48, 45.01, 39.51, 45.79, 45.86\n\n\n\n\nCase Tag Per Iteration Comments\n============= ============= ============================================\nTasFunction2\t0.37 usec.\timproved separate tas function\n\n\n#define TAS(lock) tas2(lock)\n#define S_LOCK(lock) if (TAS(lock)) s_lock(lock, __FILE__, __LINE__); else\n\nint tas2(volatile slock_t *lock)\n{\n slock_t _res = 1;\n\n __asm__(\"lock; xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1));\n return (int) _res;\n}\n\ntas2:\n pushl %ebp\n movl %esp,%ebp\n movl 8(%ebp),%edx\n movl $1,%eax\n#APP\n lock; xchgb %al,(%edx)\n#NO_APP\n andl $255,%eax\n leave\n ret\n\n\nCPU times: 37.67, 37.67, 37.68, 37.57, 37.12, 36.91\n\n\n\n\nCase Tag Per Iteration Comments\n============= ============= ============================================\nSlockAsmMacro 0.31 usec\tInline xchgb in S_LOCK macro, call s_lock if\n needed. Note strange variation in recorded\n times. 
I have no explanation.\n\n\n#define TAS(lock) tas2(lock)\n#define S_LOCK(lock) if (1) { \\\n slock_t _res = 1; \\\n __asm__(\"lock; xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n if (_res) \\\n s_lock(lock, __FILE__, __LINE__); \\\n } else\n\ntryit:\n pushl %ebp\n movl %esp,%ebp\n pushl %esi\n pushl %ebx\n movl 8(%ebp),%ebx\n movl 12(%ebp),%esi\n movl $1,%eax\n#APP\n lock; xchgb %al,(%ebx)\n#NO_APP\n testb %al,%al\n je .L14\n pushl $141\n pushl $.LC1\n pushl %ebx\n call s_lock\n.L14:\n incl (%esi)\n movb $0,(%ebx)\n movl (%esi),%eax\n leal -8(%ebp),%esp\n popl %ebx\n popl %esi\n leave\n ret\n\nCPU times: 40.53, 30.14, 30.13, 40.44, 30.12, 40.50, 28.65, 28.63, 28.62\n\n\n\n\nCase Tag Per Iteration Comments\n============= ============= ============================================\nOriginal\t0.31 usec\tThe original S_LOCK from 6.3.2\n\n\n#define S_LOCK(lock) do { \\\n slock_t _res = 1; \\\n do { \\\n __asm__(\"lock; xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n } while (_res !=0); \\\n } while (0)\n\ntryit:\n pushl %ebp\n movl %esp,%ebp\n movl 8(%ebp),%edx\n movl 12(%ebp),%ecx\n .align 4\n .align 4\n.L15:\n movl $1,%eax\n#APP\n lock; xchgb %al,(%edx)\n#NO_APP\n testb %al,%al\n jne .L15\n incl (%ecx)\n movb $0,(%edx)\n movl (%ecx),%eax\n leave\n ret\n\nCPU times: 28.55, 33.31, 31.40\n\n\n\n\nCase Tag Per Iteration Comments\n============= ============= ============================================\nMacroSLOCK\t0.45 usec\tCurrent in CVS as of 5/30/98. Tends to bloat.\n\n\n#define TAS(lock) tas(lock)\n#define S_LOCK(lock) if (1) { \\\n int spins = 0; \\\n while (TAS(lock)) { \\\n struct timeval delay; \\\n delay.tv_sec = 0; \\\n delay.tv_usec = s_spincycle[spins++ % S_NSPINCYCLE]; \\\n (void) select(0, NULL, NULL, NULL, &delay); \\\n if (spins > S_MAX_BUSY) { \\\n /* It's been well over a minute... 
*/ \\\n s_lock_stuck(lock, __FILE__, __LINE__); \\\n } \\\n } \\\n} else\n\ntryit:\n pushl %ebp\n movl %esp,%ebp\n subl $8,%esp\n pushl %edi\n pushl %esi\n pushl %ebx\n movl 8(%ebp),%esi\n movl 12(%ebp),%edi\n xorl %ebx,%ebx\n .align 4\n.L13:\n pushl %esi\n call tas\n addl $4,%esp\n testl %eax,%eax\n je .L18\n movl $0,-8(%ebp)\n movl %ebx,%edx\n movl %ebx,%eax\n incl %ebx\n testl %edx,%edx\n jge .L16\n leal 15(%edx),%eax\n.L16:\n andb $240,%al\n subl %eax,%edx\n movl %edx,%eax\n movl s_spincycle(,%eax,4),%eax\n movl %eax,-4(%ebp)\n leal -8(%ebp),%eax\n pushl %eax\n pushl $0\n pushl $0\n pushl $0\n pushl $0\n call select\n addl $20,%esp\n cmpl $16000,%ebx\n jle .L13\n pushl $141\n pushl $.LC1\n pushl %esi\n call s_lock_stuck\n addl $12,%esp\n jmp .L13\n .align 4\n.L18:\n incl (%edi)\n movb $0,(%esi)\n movl (%edi),%eax\n leal -20(%ebp),%esp\n popl %ebx\n popl %esi\n popl %edi\n leave\n ret\n\nCPU times: 49.81, 49.43, 40.68, 49.51, 40.68, 40.69\n\n\n\n\nCase Tag Per Iteration Comments\n============= ============= ============================================\nAllFunction\t0.51 usec\tCall s_lock() function. tas() as function\n\n\n#define TAS(lock) tas2(lock)\n#define S_LOCK(lock) s_lock(lock, __FILE__, __LINE__)\n\nvoid s_lock(volatile slock_t *lock, char *file, int line)\n{\n int spins = 0;\n\n while (TAS(lock))\n {\n struct timeval delay;\n\n delay.tv_sec = 0;\n delay.tv_usec = s_spincycle[spins++ % S_NSPINCYCLE];\n (void) select(0, NULL, NULL, NULL, &delay);\n if (spins > S_MAX_BUSY)\n {\n /* It's been well over a minute... 
*/\n s_lock_stuck(lock, file, line);\n }\n }\n}\n\ns_lock:\n pushl %ebp\n movl %esp,%ebp\n subl $8,%esp\n pushl %edi\n pushl %esi\n pushl %ebx\n movl 8(%ebp),%esi\n movl 16(%ebp),%edi\n xorl %ebx,%ebx\n .align 4\n.L5:\n pushl %esi\n call tas2\n addl $4,%esp\n testl %eax,%eax\n je .L6\n movl $0,-8(%ebp)\n movl %ebx,%edx\n movl %ebx,%eax\n incl %ebx\n testl %edx,%edx\n jge .L8\n leal 15(%edx),%eax\n.L8:\n andb $240,%al\n subl %eax,%edx\n movl %edx,%eax\n movl s_spincycle(,%eax,4),%eax\n movl %eax,-4(%ebp)\n leal -8(%ebp),%eax\n pushl %eax\n pushl $0\n pushl $0\n pushl $0\n pushl $0\n call select\n addl $20,%esp\n cmpl $16000,%ebx\n jle .L5\n pushl %edi\n pushl 12(%ebp)\n pushl %esi\n call s_lock_stuck\n addl $12,%esp\n jmp .L5\n .align 4\n.L6:\n leal -20(%ebp),%esp\n popl %ebx\n popl %esi\n popl %edi\n leave\n ret\n\ntryit:\n pushl %ebp\n movl %esp,%ebp\n pushl %esi\n pushl %ebx\n movl 8(%ebp),%esi\n movl 12(%ebp),%ebx\n pushl $141\n pushl $.LC1\n pushl %esi\n call s_lock\n incl (%ebx)\n movb $0,(%esi)\n movl (%ebx),%eax\n leal -8(%ebp),%esp\n popl %ebx\n popl %esi\n leave\n ret\n\nCPU times: 51.23, 51.23, 51.23\n\n\n\n\n\nCase Tag Per Iteration Comments\n============= ============= ============================================\nInlineTas\t0.32\t\tUse function inlining. 
Patch to follow.\n\n\n#define TAS(lock) tas_i(lock)\n#define S_LOCK(lock) if (TAS(lock)) s_lock(lock, __FILE__, __LINE__); else\n\nstatic __inline__ int tas_i(volatile slock_t *lock)\n{\n slock_t _res = 1;\n\n __asm__(\"lock; xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock): \"0\"(_res));\n return (int) _res;\n}\n\ntryit:\n pushl %ebp\n movl %esp,%ebp\n pushl %esi\n pushl %ebx\n movl 8(%ebp),%ebx\n movl 12(%ebp),%esi\n movb $1,%al\n#APP\n lock; xchgb %al,(%ebx)\n#NO_APP\n testb %al,%al\n je .L16\n pushl $156\n pushl $.LC1\n pushl %ebx\n call s_lock\n.L16:\n incl (%esi)\n movb $0,(%ebx)\n movl (%esi),%eax\n leal -8(%ebp),%esp\n popl %ebx\n popl %esi\n leave\n ret\n\nCPU times: 39.34, 29.09, 28.65, 40.34, 28.64, 28.64\n\n\n----------------------------------------------------------------------------\nSection 4.0 Test Suite\n\n\nbegin 644 s_lock_test.tar.gz
``````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM``````````````````````````
``````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM``````````
``````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\nM````````````````````````````````````````````````````````````\n9````````````````````````````````````\n`\nend\n\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n", "msg_date": "Tue, 9 Jun 1998 16:09:26 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Try again: S_LOCK reduced contention" }, { "msg_contents": "> 1.6. 
New patch to follow.\n> \n> The current S_LOCK and TAS() implementations (my patch of late May) are\n> slower than they need to be and cause more code bloat than they need to.\n> The bloat is caused by using a macro to inline a relatively complex bit\n> of code that is only used in the blocked lock case. I suspect the slowness\n> is caused at least partly by the macro as it requires more registers.\n> \n> I have developed a new patch that separates out the lock available case\n> from the busywaiting case and that uses the GCC _inline_ facilty to make\n> the asm interface still look as clean as a function while not costing\n> anything. For a preview, see\n\nQuite an analysis. I want to comment on the code more, but I just want\nto point out now that many of our i386 platforms are not GNU. I think\nwe have to use macros. I can't think of any GNU-specific code in the\nsource tree at this point, and I don't think it makes sense to add it now\njust to make the code look a little cleaner.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 10 Jun 1998 00:39:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Try again: S_LOCK reduced contention" }, { "msg_contents": "> > 1.6. New patch to follow.\n> > \n> > The current S_LOCK and TAS() implementations (my patch of late May) are\n> > slower than they need to be and cause more code bloat than they need to.\n> > The bloat is caused by using a macro to inline a relatively complex bit\n> > of code that is only used in the blocked lock case. 
I suspect the slowness\n> > is caused at least partly by the macro as it requires more registers.\n> > \n> > I have developed a new patch that separates out the lock available case\n> > from the busywaiting case and that uses the GCC _inline_ facilty to make\n> > the asm interface still look as clean as a function while not costing\n> > anything. For a preview, see\n> \n> Quite and analysis. I want to comment on the code more, but I just want\n\nPlease do. I am very interested in reactions or followup investigations.\n\n> to point out now that many of our i386 platforms are not GNU. I think\n> we have to use macros. I can't think of any GNU-specific code in the\n> source tree at this point, and I don't think it makes sense add it now\n> just to make the code look a litter cleaner.\n\nMost of the original tas() __asm__() implementations are GCC specific. This\nincludes all the Linux platforms except PPC, all the *BSD platforms, even the\nVAX. GCC is also fairly commonly used even on the commercial OSes.\n\nAs far as I can tell, the only C coded platforms that are not GCC specific\nare SCO i386 and SunOS/Solaris on Sun3 and Sparc. The other non-GCC platforms\nhave external tas.s function implementations (HP), or have system specific\ncalls (AIX, OSF, SGI, Nextstep).\n\nFinally, the difference between a tas() function implementation and the best\npossible inline implementation appears to be only 0.06 microseconds on a P133.\nThis will add 0.0003 seconds to startup. On SCO only. On Sparc this is a leaf\ncall and possibly even cheaper. No other platforms are affected.\n\nRemember also that I am adding two features that previously did not exist,\nbackoff, and stuck lock detection. 
\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"If you lie to the compiler, it will get its revenge.\" -- Henry Spencer\n", "msg_date": "Tue, 9 Jun 1998 22:53:37 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Try again: S_LOCK reduced contention" }, { "msg_contents": "On Wed, 10 Jun 1998, Bruce Momjian wrote:\n> Quite and analysis. I want to comment on the code more, but I just want\n> to point out now that many of our i386 platforms are not GNU. I think\n> we have to use macros. I can't think of any GNU-specific code in the\n> source tree at this point, and I don't think it makes sense add it now\n> just to make the code look a litter cleaner. \n\nIndeed. Those of us who have thousand dollar SunPro compilers thank you.\n\n(can you say progressive optomizer?)\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Wed, 10 Jun 1998 03:24:50 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contention" }, { "msg_contents": "Matthew N. Dodd writes:\n> On Wed, 10 Jun 1998, Bruce Momjian wrote:\n> > Quite and analysis. I want to comment on the code more, but I just want\n> > to point out now that many of our i386 platforms are not GNU. I think\n> > we have to use macros. I can't think of any GNU-specific code in the\n> > source tree at this point, and I don't think it makes sense add it now\n> > just to make the code look a litter cleaner. \n> \n> Indeed. Those of us who have thousand dollar SunPro compilers thank you.\n> \n> (can you say progressive optomizer?)\n ^^^^^^^^^ uhhh, no. 
;-)\n\n\nHmmmm, looking at the original code, non-GCC Sparc makes a function call to\nthe tas() routine which is coded as asm. I have not in fact changed it.\nAs I understand your comment, you wish this to be a macro.\n\nThe code is:\n\n#if defined(NEED_SPARC_TAS_ASM)\n/*\n * sparc machines not using gcc\n */\nstatic void tas_dummy() /* really means: extern int tas(slock_t *lock); */\n{\n asm(\".seg \\\"data\\\"\");\n asm(\".seg \\\"text\\\"\");\n asm(\"_tas:\");\n /*\n * Sparc atomic test and set (sparc calls it \"atomic load-store\")\n */\n asm(\"ldstub [%r8], %r8\");\n asm(\"retl\");\n asm(\"nop\");\n}\n#endif /* NEED_SPARC_TAS_ASM */\n\n\nI doubt there are any major performance gains to be had here, but I would\nbe interested to learn otherwise. I don't have access to a Sparc machine\nthat I can use for this, so if anyone cares to test this implementation and\nany others they can think of please post the results.\n\nBut I think perhaps we are micro-optimizing here. I only bothered to do\nall the i386 flavors because Bruce had some gprof output that suggested\nwe had a problem (we didn't), and then I just got kinda interested in the\nexperiment itself.\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"A week of coding can sometimes save an hour of thought.\"\n", "msg_date": "Wed, 10 Jun 1998 00:56:48 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contention" }, { "msg_contents": "> Most of the original tas() __asm__() implementations are GCC specific. This\n> includes all the Linux platforms except PPC, all the *BSD platforms, even the\n> VAX. GCC is also fairly commonly used even on the commercial OSes.\n> \n> As far as I can tell, the only C coded platforms that are not GCC specific\n> are SCO i386 and SunOS/Solaris on Sun3 and Sparc. 
The other non-GCC platforms\n> have external tas.s function implementations (HP), or have system specific\n> calls (AIX, OSF, SGI, Nextstep).\n\nThat s_lock.h file is a hornet's nest of portability problems. I really\ndon't want to have multiple functions/macros for different CPU's if I\ncan help it. I don't even want to mix functions/macros for the same\nfunction name if I can help it. I also do not want to start playing\naround with isGNU/isnotGNU in a file that is already complex.\n\nMacros work and we already have tons of them, we don't use inline\nanywhere else, and the actual locks are 80% asm code anyway, so it looks\nthe same whether it is in a macro or an inline function.\n\nI have made them macros before for this file, so I can do it again quite\neasily.\n\nAs for the benefits, well, when I see lots of calls to a function, I\ntry to eliminate the calls if it is reasonable. In many places, the\ncall handling is actually more instructions than the inlining. I look\nat the measured performance change vs. the executable size increase and\nmake a decision. With something like s_lock, it just seems normal to\nmake it a macro.\n\n> Finally, the difference between a tas() function implementation and the best\n> possible inline implementation appears to be only 0.06 microseconds on a P133.\n> This will add 0.0003 seconds to startup. On SCO only. On Sparc this is a leaf\n> call and possibly even cheaper. No other platforms are affected.\n> \n> Remember also that I am adding two features that previously did not exist,\n> backoff, and stuck lock detection. \n\nYes, and good improvements.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Wed, 10 Jun 1998 12:50:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contention" }, { "msg_contents": "Bruce Momjian writes:\n> David Gould writes:\n> > Most of the original tas() __asm__() implementations are GCC specific. This\n> > includes all the Linux platforms except PPC, all the *BSD platforms, even the\n> > VAX. GCC is also fairly commonly used even on the commercial OSes.\n> > \n> > As far as I can tell, the only C coded platforms that are not GCC specific\n> > are SCO i386 and SunOS/Solaris on Sun3 and Sparc. The other non-GCC platforms\n> > have external tas.s function implementations (HP), or have system specific\n> > calls (AIX, OSF, SGI, Nextstep).\n> \n> That s_lock.h file is a hornets nest of portability problems. I really\n> don't want to have multiple functions/macros for different CPU's if I\n> can help it. I don't even want to mix functions/macros for the same\n> function name if I can help it. I also do not want to start playing\n> around with isGNU/isnotGNU in a file that is already complex.\n\nActually, my main motivation for this file is to reduce the portability\nproblems. If you will look at the next patch (when I submit it, probably\ntonight) I think you will see that it is fairly clear what to do to port to\na new platform, and how the existing platforms work. \n\nWe already implicitly make a isGCC vs notGCC distinction when we use the\nGCC asm() syntax. 
I am merely intending to make it explicit.\n \n> Macros work and we already have tons of them, we don't use inline\n> anywhere else, and the actual locks are 80% asm code anyway, so it looks\n> the same whether it is in a macro or an inline function.\n>\n> I have made them macros before for this file, so I can do it again quite\n> easily.\n>\n> As for the benefits, well, when I see lots of calls to a function, and I\n> try and eliminate the calls if it is reasonable. In many places, the\n> call handling is actually more instructions than the inlining. I look\n> at the measured performance change vs. the executable size increase and\n> make a decision. With something like s_lock, it just seems normal to\n> make it a macro.\n\nWith the old S_LOCK this was a reasonable choice. With the new S_LOCK which\nis quite a bit more complex, the macro expansion generates quite a bit of\ncode. See the generated code for the \"MacroSLOCK\" case in my large post.\n\n> > Finally, the difference between a tas() function implementation and the best\n> > possible inline implementation appears to be only 0.06 microseconds on a P133.\n> > This will add 0.0003 seconds to startup. On SCO only. On Sparc this is a leaf\n> > call and possibly even cheaper. No other platforms are affected.\n> > \n> > Remember also that I am adding two features that previously did not exist,\n> > backoff, and stuck lock detection. \n> \n> Yes, and good improvements.\n\nAgain, please have a look at the (forthcoming) patch. It gives up nothing in\neither space or time performance compared to the original, is clearer imho,\nand incorporates the new features. \n\n-dg\n \nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! 
Fetch me a child of five.\n\n", "msg_date": "Wed, 10 Jun 1998 11:24:04 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contention" }, { "msg_contents": "> \n> Bruce Momjian writes:\n> > David Gould writes:\n> > > Most of the original tas() __asm__() implementations are GCC specific. This\n> > > includes all the Linux platforms except PPC, all the *BSD platforms, even the\n> > > VAX. GCC is also fairly commonly used even on the commercial OSes.\n> > > \n> > > As far as I can tell, the only C coded platforms that are not GCC specific\n> > > are SCO i386 and SunOS/Solaris on Sun3 and Sparc. The other non-GCC platforms\n> > > have external tas.s function implementations (HP), or have system specific\n> > > calls (AIX, OSF, SGI, Nextstep).\n> > \n> > That s_lock.h file is a hornets nest of portability problems. I really\n> > don't want to have multiple functions/macros for different CPU's if I\n> > can help it. I don't even want to mix functions/macros for the same\n> > function name if I can help it. I also do not want to start playing\n> > around with isGNU/isnotGNU in a file that is already complex.\n> \n> Actually, my main motivation for this file is to reduce the portability\n> problems. If you will look at the next patch (when I submit it, probably\n> tonight) I think you will see that it is fairly clear what to do to port to\n> a new platform, and how the existing platforms work. \n> \n> We already implicitly make a isGCC vs notGCC distinction when we use the\n> GCC asm() syntax. I am merely intending to make it explict.\n\nAh, I see. I wondered how other compilers were understanding the asm()\nstuff. I thought it was gcc-specific, but then other platforms were\nusing it. 
I guess they have gcc.\n\n> \n> > Macros work and we already have tons of them, we don't use inline\n> > anywhere else, and the actual locks are 80% asm code anyway, so it looks\n> > the same whether it is in a macro or an inline function.\n> >\n> > I have made them macros before for this file, so I can do it again quite\n> > easily.\n> >\n> > As for the benefits, well, when I see lots of calls to a function, and I\n> > try and eliminate the calls if it is reasonable. In many places, the\n> > call handling is actually more instructions than the inlining. I look\n> > at the measured performance change vs. the executable size increase and\n> > make a decision. With something like s_lock, it just seems normal to\n> > make it a macro.\n> \n> With the old S_LOCK this was a reasonable choice. With the new S_LOCK which\n> is quite a bit more complex, the macro expansion generates quite a bit of\n> code. See the generated code for the \"MacroSLOCK\" case in my large post.\n\nYes, I suspected that may be a problem. I will apply your patch as soon\nas I see it.\n\n> \n> > > Finally, the difference between a tas() function implementation and the best\n> > > possible inline implementation appears to be only 0.06 microseconds on a P133.\n> > > This will add 0.0003 seconds to startup. On SCO only. On Sparc this is a leaf\n> > > call and possibly even cheaper. No other platforms are affected.\n> > > \n> > > Remember also that I am adding two features that previously did not exist,\n> > > backoff, and stuck lock detection. \n> > \n> > Yes, and good improvements.\n> \n> Again, please have a look at the (forthcoming) patch. It gives up nothing in\n> either space or time performance compared to the original, is clearer imho,\n> and incorporates the the new features. \n\nSounds like a plan.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Wed, 10 Jun 1998 14:49:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contention" }, { "msg_contents": "[email protected] (David Gould) writes:\n> [ Much careful testing snipped ]\n\nNice job, David!\n\n> gprof introduces severe experimental error for small quick functions.\n> Perhaps the spinlock function is so short and quick compared to the\n> mcount overhead added to the function prolog that the overhead dominates\n> the measurement. gprof remains a useful tool for larger functions with\n> longer runtimes, but must be considered very suspect for tiny functions.\n\nRight. gprof is a fine example of Heisenberg's Uncertainty Principle\napplied to software ;-). You can't measure something without affecting it.\n\nAs Bruce Momjian just pointed out in another followup, running the test\nfunction a lot of times in a tight loop isn't a perfect answer either:\nyou find out what happens when the function's code and referenced data are\nall in cache, but you can't tell much about what happens when they are\nnot; and you can't tell whether the whole application's memory usage\npatterns are such that the function will remain in cache.\n\nI'm guessing that the backend uses tas() enough that it will probably\nstay in cache, but that is *strictly* a guess with no evidence.\n\n> In some of the test cases there was significant timing variation from\n> run to run even though the test conditions were apparently identical.\n> Even more strangely, the variation in time was not random but appeared\n> to represent two different modes. And, the variation was itself\n> repeatable.\n> [snip]\n> I have no explanation for this variation. Possibly it is some interaction\n> of where the program is loaded and the state of the memory heirarchy, but\n> even this is hard to sustain. 
I would be very curious to hear of any\n> plausible explainations.\n\nAfter chewing on this for a while I think that your speculation is\nright. You were using a Pentium, you said. The Pentium has a two-way\nset associative cache, which means that any given main-memory address\nhas exactly two cache lines it could be loaded into. Main-memory\naddresses that are 1000H apart contend for the same pair of cache lines.\nThus, if your program happens to repeatedly hit three locations that are\nexactly 1000H apart, it will suffer a cache miss every time. Change the\naddress spacing, and no miss occurs. The cache miss takes forty-some\nclock cycles, versus one if the target location is in cache.\n(BTW, I'm getting this info out of Rick Booth's \"Inner Loops\", a fine\nreference book if you are into hand-optimized assembly coding for Intel\nprocessors.)\n\nSo what I think you were seeing is that on some runs, the loop involved\nhitting three addresses that contend for the same cache line pair, while\non other runs there was no cache contention. This could be explained\nby varying load addresses for the program, if it touched both its own\naddresses (variable) and some non-varying addresses --- say, C library\nroutines executed from a shared library that remained loaded throughout.\nIf you can link with a non-shared C library then you should get more\nconsistent runtimes, because the offsets between all the locations\ntouched by your loop would be fixed, and thus cache hits or misses ought\nto be consistent from run to run.\n\nThe bottom line, however, is that this behavior is too unpredictable\nto be worth worrying about in a production program. A cache miss in\na tight loop could be created or eliminated by unrelated changes in\ndistant parts of the code ... and in any case the behavior will be\ndifferent on different CPUs. 
The 486, Pentium, and Pentium Pro all\nhave radically different cache layouts, let alone non-Intel CPUs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Jun 1998 10:42:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] S_LOCK reduced contention " }, { "msg_contents": "Tom Lane writes:\n> [email protected] (David Gould) writes:\n> > [ Much careful testing snipped ]\n> > In some of the test cases there was significant timing variation from\n> > run to run even though the test conditions were apparently identical.\n> > Even more strangely, the variation in time was not random but appeared\n> > to represent two different modes. And, the variation was itself\n> > repeatable.\n> > [snip]\n> > I have no explanation for this variation. Possibly it is some interaction\n> > of where the program is loaded and the state of the memory heirarchy, but\n> > even this is hard to sustain. I would be very curious to hear of any\n> > plausible explainations.\n> \n> After chewing on this for a while I think that your speculation is\n> right. You were using a Pentium, you said. The Pentium has a two-way\n> set associative cache, which means that any given main-memory address\n> has exactly two cache lines it could be loaded into. Main-memory\n> addresses that are 1000H apart contend for the same pair of cache lines.\n> Thus, if your program happens to repeatedly hit three locations that are\n> exactly 1000H apart, it will suffer a cache miss every time. Change the\n> address spacing, and no miss occurs. 
The cache miss takes forty-some\n> clock cycles, versus one if the target location is in cache.\n> (BTW, I'm getting this info out of Rick Booth's \"Inner Loops\", a fine\n> reference book if you are into hand-optimized assembly coding for Intel\n> processors.)\n> \n> So what I think you were seeing is that on some runs, the loop involved\n> hitting three addresses that contend for the same cache line pair, while\n> on other runs there was no cache contention. This could be explained\n> by varying load addresses for the program, if it touched both its own\n> addresses (variable) and some non-varying addresses --- say, C library\n> routines executed from a shared library that remained loaded throughout.\n> If you can link with a non-shared C library then you should get more\n> consistent runtimes, because the offsets between all the locations\n> touched by your loop would be fixed, and thus cache hits or misses ought\n> to be consistent from run to run.\n\nThis is in line with my own speculation. However, I am not convinced.\n\nFirst, the test loop and the function it calls are only about 100 bytes\ntotal taken together. And, no system calls or library calls are made during\nthe test. This tends to rule out \"locations 1000H apart\" and shared library\neffects. Also, I would expect the system to load programs and libraries\non VM page boundaries. Unless there is some cachability difference from one\npage to the next, I am at a loss to account for this. \n\n> The bottom line, however, is that this behavior is too unpredictable\n> to be worth worrying about in a production program. A cache miss in\n> a tight loop could be created or eliminated by unrelated changes in\n> distant parts of the code ... and in any case the behavior will be\n> different on different CPUs. 
The 486, Pentium, and Pentium Pro all\n> have radically different cache layouts, let alone non-Intel CPUs.\n\nAgreed, the reasonable thing to do is to try to be sensitive to cache\neffects and accept that there are mysteries.\n\nthanks\n\n-dg\n \n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n", "msg_date": "Mon, 15 Jun 1998 15:27:26 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] S_LOCK reduced contention" } ]
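Tom Lane's set-associativity explanation in the thread above is easy to check without real hardware. The toy cache model below is not PostgreSQL code, and the exact geometry (32-byte lines, 128 sets, 2 ways, i.e. the classic 8 KB Pentium data cache) is an assumption made here for illustration; any 2-way geometry where addresses 1000H apart share a set shows the same effect. Under LRU replacement, a cyclic walk over three such addresses misses on every access, while two of them coexist indefinitely.

```c
#include <assert.h>   /* for the usage checks below */

/* Toy model of a Pentium-style data cache: 32-byte lines, 128 sets,
 * 2-way set associative.  Addresses 0x1000 apart differ only in their
 * tags ((addr >> 5) is a multiple of 128 apart), so they all compete
 * for the same set. */
#define LINE_BITS 5
#define NSETS     128
#define NWAYS     2

typedef struct
{
    unsigned long tag[NWAYS];
    int           valid[NWAYS];
    int           lru;              /* which way to evict next */
} CacheSet;

typedef struct
{
    CacheSet sets[NSETS];
    long     hits;
    long     misses;
} Cache;

static void
cache_reset(Cache *c)
{
    int s, w;

    c->hits = c->misses = 0;
    for (s = 0; s < NSETS; s++)
    {
        c->sets[s].lru = 0;
        for (w = 0; w < NWAYS; w++)
            c->sets[s].valid[w] = 0;
    }
}

/* Touch one address; returns 1 on a hit, 0 on a miss. */
static int
cache_access(Cache *c, unsigned long addr)
{
    unsigned long line = addr >> LINE_BITS;
    CacheSet     *s = &c->sets[line % NSETS];
    int           w;

    for (w = 0; w < NWAYS; w++)
    {
        if (s->valid[w] && s->tag[w] == line)
        {
            s->lru = 1 - w;         /* the other way becomes LRU */
            c->hits++;
            return 1;
        }
    }
    /* miss: fill the LRU way and mark the other one for eviction */
    s->tag[s->lru] = line;
    s->valid[s->lru] = 1;
    s->lru = 1 - s->lru;
    c->misses++;
    return 0;
}
```

Any N+1 distinct lines cycled through an N-way LRU set thrash this way, which is why the effect can appear or vanish with small, unrelated changes in load addresses, just as observed in the measurements discussed above.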
[ { "msg_contents": "Vadim Mikheev said:\n> \n> Brandon Ibach wrote:\n> > \n> > Hmmm... well, the table may be 57 Meg, but then, the backend\n> > running the vacuum has consumed 5 1/2 hours of CPU time so far, and\n> > still going strong, so something tells me there may be something\n> > deeper. :)\n> \n> Did you have any indices for this table ?\n> \n> Vadim\n> \n Nope... no indices at all.\n\n-Brandon :)\n", "msg_date": "Tue, 9 Jun 1998 20:10:14 -0500 (CDT)", "msg_from": "Brandon Ibach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Table corrupt?" } ]
[
 {
 "msg_contents": "Vadim Mikheev said:\n> \n> Well, could you use gdb to attach to backend running vacuum ?\n> \n> Vadim\n> \n Okay... using pg_dump to get the data out of the table is causing\nmuch the same situation. I did a backtrace in gdb, but unfortunately,\nI'm running a copy of postgres without debugging symbols, so it may be\nof limited use. Here 'tis...\n\n#0 0x80b7590 in WaitIO ()\n#1 0x80b6e3c in BufferAlloc ()\n#2 0x80b6c6d in ReadBufferWithBufferLock ()\n#3 0x80b7390 in ReleaseAndReadBuffer ()\n#4 0x8065b90 in heapgettup ()\n#5 0x80660a5 in heap_getnext ()\n#6 0x8088c61 in SeqNext ()\n#7 0x8084480 in ExecScan ()\n#8 0x8088ca3 in ExecSeqScan ()\n#9 0x80833ce in ExecProcNode ()\n#10 0x8082a61 in ExecutePlan ()\n#11 0x8082644 in ExecutorRun ()\n#12 0x80c1577 in ProcessQueryDesc ()\n#13 0x80c15d6 in ProcessQuery ()\n#14 0x80c0048 in pg_eval_dest ()\n#15 0x80bff56 in pg_eval ()\n#16 0x80c0ff1 in PostgresMain ()\n#17 0x808f7af in main ()\n#18 0x805e1ab in _start ()\n\n I'll see about getting a trace out of a version of postgres with\ndebugging symbols.\n\n-Brandon :)\n",
 "msg_date": "Tue, 9 Jun 1998 20:37:40 -0500 (CDT)",
 "msg_from": "Brandon Ibach <[email protected]>",
 "msg_from_op": true,
 "msg_subject": "Re: [HACKERS] Table corrupt?"
 }
]
[ { "msg_contents": "Vadim Mikheev said:\n> \n> Brandon Ibach wrote:\n> > \n> > Vadim Mikheev said:\n> > Okay... using pg_dump to get the data out of the table is causing\n> > much the same situation. I did a backtrace in gdb, but unfortnately,\n> > I'm running a copy of postgres without debugging symbols, so it may be\n> > of limited use. Here 'tis...\n> > \n> > #0 0x80b7590 in WaitIO ()\n> \n> Did you restart postmaster after killing backend (vacuum) ?\n> \n> Vadim\n> \n Nope... :) And I just thought of that possibility while sifting\nthrough the buffer manager code (and meantime, your email arrived).\nThanks for the tip, I bet it will work.\n\n-Brandon :)\n", "msg_date": "Tue, 9 Jun 1998 20:58:57 -0500 (CDT)", "msg_from": "Brandon Ibach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Table corrupt?" } ]
[ { "msg_contents": "Am I right in my thinking that Postgres does not have a pseudo timestamp\nfield that is updated whenever a row is added or updated? Did it used\nto have one before time travel was removed?\n\nIf no timestamp field, are there any other pseudo fields that are\nupdated every time? I noticed that the 'ctid' field seems to qualify,\nbut what does something like (1,54) mean?\n\nThanks for any help.\n\nByron\n\n", "msg_date": "Wed, 10 Jun 1998 13:19:23 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "Timestamp field" }, { "msg_contents": "On Wed, 10 Jun 1998, Byron Nikolaidis wrote:\n\n> Am I right in my thinking that Postgres does not have a pseudo timestamp\n> field that is updated whenever a row is added or updated? Did it used\n> to have one before time travel was removed?\n> \n> If no timestamp field, are there any other pseudo fields that are\n> updated every time? I noticed that the 'ctid' field seems to qualify,\n> but what does something like (1,54) mean?\n> \n> Thanks for any help.\n> \n> Byron\n\nSeems that XMIN field is updated whenever a row is inserted or updated.\nsee man sql.\n Jose'\n\n", "msg_date": "Thu, 11 Jun 1998 10:56:07 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Timestamp field" } ]