[ { "msg_contents": "Latest output that the Snapshot PostgreSQL replication tool outputs to\na borne shell script. PostgreSQL v6.4 needs to be installed. This script\ncan run alone on the command line or under CRON control. Snapshot\nuses a named pipe to send output from \"pg_dump\" to \"psql\" and use\nthe environment variables to setup passwords and user id's.\n\n <<t2.sh>> \n\nSnapshot as developed in Tcl/Tk.\n\n <<SNAPSHOT.TCL>> \n\nSome example configuration files.\n\n <<t2.snp>> <<t3.snp>> <<testcfg.snp>> \n\nOpinions?\nD. Gowin", "msg_date": "Mon, 18 Jan 1999 17:15:00 -0500", "msg_from": "Dan Gowin <[email protected]>", "msg_from_op": true, "msg_subject": "Snapshot replication tool" } ]
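The named-pipe plumbing Snapshot relies on can be sketched without a running server. In this sketch, `echo` and `cat` are placeholder stand-ins for `pg_dump` and `psql` (which need live databases); everything else is the same FIFO pattern the tool uses:

```shell
#!/bin/sh
# Sketch of the named-pipe pattern Snapshot uses: "pg_dump" writes into a
# FIFO while "psql" reads from it, so no intermediate dump file is needed.
# Placeholder commands (echo/cat) stand in for pg_dump/psql here so the
# sketch runs without a live server.
PIPE=/tmp/snapshot.$$
mkfifo "$PIPE" || exit 1

# Reader side (stand-in for: psql target_db < "$PIPE") runs in background.
cat "$PIPE" > /tmp/snapshot_out.$$ &
READER=$!

# Writer side (stand-in for: pg_dump source_db > "$PIPE").
echo "CREATE TABLE t1 (i int);" > "$PIPE"

wait "$READER"
cat /tmp/snapshot_out.$$   # prints: CREATE TABLE t1 (i int);
rm -f "$PIPE" /tmp/snapshot_out.$$
```

With a real server, the reader and writer lines become the `psql` and `pg_dump` invocations, and the environment variables the message mentions supply the user ids and passwords.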
[ { "msg_contents": "How about add to \nlibpq-fe.h\n\n#define LIBPQ_VERSION 64\n\nor something like?\n\n(I work with both 6.3 and 6.4 version and this is a problem) \n \n\n-- \nDmitry Samersoff\n DM\\S, [email protected]\n http://devnull.wplus.net\n\n", "msg_date": "Tue, 19 Jan 1999 11:11:22 +0300 (MSK)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": true, "msg_subject": "Good Idea ;-))" } ]
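Dmitry's proposed macro would let client code compile against both the 6.3 and 6.4 headers. A sketch of how it would be used; note the `#define` is placed locally here as a stand-in, since no released `libpq-fe.h` carries it yet:

```c
#include <stdio.h>

/* Hypothetical: under Dmitry's proposal this #define would live in
 * libpq-fe.h; it is defined locally here because no released header
 * carries it yet. */
#ifndef LIBPQ_VERSION
#define LIBPQ_VERSION 64
#endif

/* Client code that must build against both 6.3 and 6.4 can branch on
 * the macro at compile time: a 6.3 header simply would not define it. */
int libpq_is_64_or_newer(void)
{
#if defined(LIBPQ_VERSION) && LIBPQ_VERSION >= 64
    return 1;
#else
    return 0;
#endif
}
```

The key property is that the absence of the macro is itself meaningful: code can detect a pre-6.4 header without any other version information.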
[ { "msg_contents": "Hello, \n\nI still have the same mistake after upgrading to v6.4.2 : \nadd_one is a simple function that add 1 to a given int. It has been\ncompiled as a .so using gcc. The first time I use it, it gives an error\nmessage, and the second and following times, it seems to work well : \n\ntemplate1=> select add_one(1) ;\nERROR: Load of file /usr/local/pgsql/lib/add_one.so failed:\n/usr/local/pgsql/lib/add_one.so: undefined symbol: mem_set\ntemplate1=> select add_one(1) ;\nadd_one\n-------\n 2\n(1 row)\ntemplate1=>\n\nAny help would be much appreciated, \n\nJose Paumard\n", "msg_date": "Tue, 19 Jan 1999 14:08:42 +0000", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Paumard <[email protected]>", "msg_from_op": true, "msg_subject": "Error using home-made functions" } ]
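For reference, an `add_one` matching the description; the original source was not posted, so this is an assumed minimal version:

```c
/* add_one.c -- an assumed minimal version of the function described in
 * the report (the original source was not posted): a PostgreSQL v6.4
 * C function taking an int and returning it plus one. */
int add_one(int arg)
{
    return arg + 1;
}
```

On ELF systems it would typically be built with something like `gcc -fPIC -c add_one.c && gcc -shared -o add_one.so add_one.o` (an assumption; flags vary by platform). The `undefined symbol: mem_set` error is raised by the dynamic linker while loading the `.so`, not by this function body; running `nm add_one.so | grep ' U '` lists the undefined symbols the object expects the backend to resolve at load time.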
[ { "msg_contents": "I discovered different results when selecting data from base tables or\ntheir views. The only thing I can image for this strange behaviour is using\nthe date constant 'today' when creating the view. Does 'today' in a view\nuse the current day time of the view creation or the time the query is made\n?\n\nmagic moving pixel s.a. http://www.mmp.lu\n", "msg_date": "Tue, 19 Jan 1999 15:18:57 +0100", "msg_from": "Matthias Schmitt <[email protected]>", "msg_from_op": true, "msg_subject": "different results selecting from base tables or views" }, { "msg_contents": "> I discovered different results when selecting data from base tables or\n> their views. The only thing I can image for this strange behaviour is \n> using the date constant 'today' when creating the view. Does 'today' \n> in a view use the current day time of the view creation or the time \n> the query is made?\n\nThat is probably the problem. For most data types, a string constant is\nreally constant, and Postgres evaluates it once, during parsing. For a\nfew data types this is not correct behavior, since the \"constant\" should\nbe evaluated at run time.\n\nThe workaround, at least in some cases, is to force the string constant\nto be a real string type. Postgres will then do the conversion to your\nintended type at run time.\n\nI'm not sure what your view looks like, but the same issue comes up when\ndefining default values for columns:\n\n create table t1 (d datetime default 'now');\n\ngives unexpected results, while\n\n create table t2 (d datetime default text 'now');\n\ndoes what you would (presumably) prefer.\n\n - Tom\n", "msg_date": "Tue, 19 Jan 1999 14:35:58 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] different results selecting from base tables or views" } ]
[ { "msg_contents": "The following patch finishes primary key support. Previously, when\na field was labelled as a primary key, the system automatically\ncreated a unique index on the field. This patch extends it so\nthat the index has the indisprimary field set. You can pull a list\nof primary keys with the followiing select.\n\nSELECT pg_class.relname, pg_attribute.attname\n FROM pg_class, pg_attribute, pg_index\n WHERE pg_class.oid = pg_attribute.attrelid AND\n pg_class.oid = pg_index.indrelid AND\n pg_index.indkey[0] = pg_attribute.attnum AND\n pg_index.indisunique = 't';\n\nThere is nothing in this patch that modifies the template database to\nset the indisprimary attribute for system tables. Should they be\nchanged or should we only be concerned with user tables?\n\n\n*** ../src.original/./backend/parser/analyze.c\tSun Jan 17 23:39:14 1999\n--- ./backend/parser/analyze.c\tSun Jan 17 23:39:56 1999\n***************\n*** 716,725 ****\n--- 716,729 ----\n \t\t\t\telog(ERROR, \"CREATE TABLE/PRIMARY KEY multiple keys for table %s are not legal\", stmt->relname);\n \n \t\t\thave_pkey = TRUE;\n+ \t\t\tindex->primary = TRUE;\n \t\t\tindex->idxname = makeTableName(stmt->relname, \"pkey\", NULL);\n \t\t}\n \t\telse\n+ \t\t{\n+ \t\t\tindex->primary = FALSE;\n \t\t\tindex->idxname = NULL;\n+ \t\t}\n \n \t\tindex->relname = stmt->relname;\n \t\tindex->accessMethod = \"btree\";\n*** ../src.original/./backend/catalog/index.c\tSat Jan 16 09:49:31 1999\n--- ./backend/catalog/index.c\tSat Jan 16 09:56:53 1999\n***************\n*** 82,88 ****\n static void UpdateIndexRelation(Oid indexoid, Oid heapoid,\n \t\t\t\t\tFuncIndexInfo *funcInfo, int natts,\n \t\t\t\t\tAttrNumber *attNums, Oid *classOids, Node *predicate,\n! 
\t\t\t\t\tList *attributeList, bool islossy, bool unique);\n static void DefaultBuild(Relation heapRelation, Relation indexRelation,\n \t\t\t int numberOfAttributes, AttrNumber *attributeNumber,\n \t\t\t IndexStrategy indexStrategy, uint16 parameterCount,\n--- 82,88 ----\n static void UpdateIndexRelation(Oid indexoid, Oid heapoid,\n \t\t\t\t\tFuncIndexInfo *funcInfo, int natts,\n \t\t\t\t\tAttrNumber *attNums, Oid *classOids, Node *predicate,\n! \t\t\t\t\tList *attributeList, bool islossy, bool unique, bool primary);\n static void DefaultBuild(Relation heapRelation, Relation indexRelation,\n \t\t\t int numberOfAttributes, AttrNumber *attributeNumber,\n \t\t\t IndexStrategy indexStrategy, uint16 parameterCount,\n***************\n*** 734,740 ****\n \t\t\t\t\tNode *predicate,\n \t\t\t\t\tList *attributeList,\n \t\t\t\t\tbool islossy,\n! \t\t\t\t\tbool unique)\n {\n \tForm_pg_index indexForm;\n \tIndexElem *IndexKey;\n--- 734,741 ----\n \t\t\t\t\tNode *predicate,\n \t\t\t\t\tList *attributeList,\n \t\t\t\t\tbool islossy,\n! \t\t\t\t\tbool unique,\n! bool primary)\n {\n \tForm_pg_index indexForm;\n \tIndexElem *IndexKey;\n***************\n*** 775,780 ****\n--- 776,782 ----\n \tindexForm->indproc = (PointerIsValid(funcInfo)) ?\n \t\tFIgetProcOid(funcInfo) : InvalidOid;\n \tindexForm->indislossy = islossy;\n+ \tindexForm->indisprimary = primary;\n \tindexForm->indisunique = unique;\n \n \tindexForm->indhaskeytype = 0;\n***************\n*** 1014,1020 ****\n \t\t\t Datum *parameter,\n \t\t\t Node *predicate,\n \t\t\t bool islossy,\n! \t\t\t bool unique)\n {\n \tRelation\theapRelation;\n \tRelation\tindexRelation;\n--- 1016,1023 ----\n \t\t\t Datum *parameter,\n \t\t\t Node *predicate,\n \t\t\t bool islossy,\n! \t\t\t bool unique,\n! bool primary)\n {\n \tRelation\theapRelation;\n \tRelation\tindexRelation;\n***************\n*** 1126,1132 ****\n \t */\n \tUpdateIndexRelation(indexoid, heapoid, funcInfo,\n \t\t\t\t\t\tnumatts, attNums, classObjectId, predicate,\n! 
\t\t\t\t\t\tattributeList, islossy, unique);\n \n \tpredInfo = (PredInfo *) palloc(sizeof(PredInfo));\n \tpredInfo->pred = predicate;\n--- 1129,1135 ----\n \t */\n \tUpdateIndexRelation(indexoid, heapoid, funcInfo,\n \t\t\t\t\t\tnumatts, attNums, classObjectId, predicate,\n! \t\t\t\t\t\tattributeList, islossy, unique, primary);\n \n \tpredInfo = (PredInfo *) palloc(sizeof(PredInfo));\n \tpredInfo->pred = predicate;\n*** ../src.original/./backend/commands/cluster.c\tSat Jan 16 09:58:59 1999\n--- ./backend/commands/cluster.c\tSat Jan 16 09:59:34 1999\n***************\n*** 321,327 ****\n \t\t\t\t Old_pg_index_Form->indclass,\n \t\t\t\t (uint16) 0, (Datum) NULL, NULL,\n \t\t\t\t Old_pg_index_Form->indislossy,\n! \t\t\t\t Old_pg_index_Form->indisunique);\n \n \theap_close(OldIndex);\n \theap_close(NewHeap);\n--- 321,328 ----\n \t\t\t\t Old_pg_index_Form->indclass,\n \t\t\t\t (uint16) 0, (Datum) NULL, NULL,\n \t\t\t\t Old_pg_index_Form->indislossy,\n! \t\t\t\t Old_pg_index_Form->indisunique,\n! Old_pg_index_Form->indisprimary);\n \n \theap_close(OldIndex);\n \theap_close(NewHeap);\n*** ../src.original/./backend/commands/defind.c\tMon Jan 18 08:16:36 1999\n--- ./backend/commands/defind.c\tMon Jan 18 08:20:09 1999\n***************\n*** 71,76 ****\n--- 71,77 ----\n \t\t\tList *attributeList,\n \t\t\tList *parameterList,\n \t\t\tbool unique,\n+ \t\t\tbool primary,\n \t\t\tExpr *predicate,\n \t\t\tList *rangetable)\n {\n***************\n*** 189,195 ****\n \t\t\t\t\t &fInfo, NULL, accessMethodId,\n \t\t\t\t\t numberOfAttributes, attributeNumberA,\n \t\t\t classObjectId, parameterCount, parameterA, (Node *) cnfPred,\n! \t\t\t\t\t lossy, unique);\n \t}\n \telse\n \t{\n--- 190,196 ----\n \t\t\t\t\t &fInfo, NULL, accessMethodId,\n \t\t\t\t\t numberOfAttributes, attributeNumberA,\n \t\t\t classObjectId, parameterCount, parameterA, (Node *) cnfPred,\n! 
\t\t\t\t\t lossy, unique, primary);\n \t}\n \telse\n \t{\n***************\n*** 206,212 ****\n \t\t\t\t\t attributeList,\n \t\t\t\t\t accessMethodId, numberOfAttributes, attributeNumberA,\n \t\t\t classObjectId, parameterCount, parameterA, (Node *) cnfPred,\n! \t\t\t\t\t lossy, unique);\n \t}\n }\n \n--- 207,213 ----\n \t\t\t\t\t attributeList,\n \t\t\t\t\t accessMethodId, numberOfAttributes, attributeNumberA,\n \t\t\t classObjectId, parameterCount, parameterA, (Node *) cnfPred,\n! \t\t\t\t\t lossy, unique, primary);\n \t}\n }\n \n*** ../src.original/./backend/storage/large_object/inv_api.c\tSat Jan 16 10:03:19 1999\n--- ./backend/storage/large_object/inv_api.c\tSat Jan 16 10:04:19 1999\n***************\n*** 178,184 ****\n \tclassObjectId[0] = INT4_OPS_OID;\n \tindex_create(objname, indname, NULL, NULL, BTREE_AM_OID,\n \t\t\t\t 1, &attNums[0], &classObjectId[0],\n! \t\t\t\t 0, (Datum) NULL, NULL, FALSE, FALSE);\n \n \t/* make the index visible in this transaction */\n \tCommandCounterIncrement();\n--- 178,184 ----\n \tclassObjectId[0] = INT4_OPS_OID;\n \tindex_create(objname, indname, NULL, NULL, BTREE_AM_OID,\n \t\t\t\t 1, &attNums[0], &classObjectId[0],\n! \t\t\t\t 0, (Datum) NULL, NULL, FALSE, FALSE, FALSE);\n \n \t/* make the index visible in this transaction */\n \tCommandCounterIncrement();\n*** ../src.original/./backend/bootstrap/bootparse.y\tSat Jan 16 10:05:39 1999\n--- ./backend/bootstrap/bootparse.y\tSun Jan 17 23:33:13 1999\n***************\n*** 225,231 ****\n \t\t\t\t\tDefineIndex(LexIDStr($5),\n \t\t\t\t\t\t\t\tLexIDStr($3),\n \t\t\t\t\t\t\t\tLexIDStr($7),\n! \t\t\t\t\t\t\t\t$9, NIL, 0, 0, NIL);\n \t\t\t\t\tDO_END;\n \t\t\t\t}\n \t\t;\n--- 225,231 ----\n \t\t\t\t\tDefineIndex(LexIDStr($5),\n \t\t\t\t\t\t\t\tLexIDStr($3),\n \t\t\t\t\t\t\t\tLexIDStr($7),\n! 
\t\t\t\t\t\t\t\t$9, NIL, 0, 0, 0, NIL);\n \t\t\t\t\tDO_END;\n \t\t\t\t}\n \t\t;\n*** ../src.original/./backend/tcop/utility.c\tSun Jan 17 01:18:44 1999\n--- ./backend/tcop/utility.c\tMon Jan 18 08:21:12 1999\n***************\n*** 404,409 ****\n--- 404,410 ----\n \t\t\t\t\t\t\tstmt->indexParams,\t/* parameters */\n \t\t\t\t\t\t\tstmt->withClause,\n \t\t\t\t\t\t\tstmt->unique,\n+ \t\t\t\t\t\t\t0,\t\t/* CREATE INDEX can't be primary */\n \t\t\t\t\t\t\t(Expr *) stmt->whereClause,\n \t\t\t\t\t\t\tstmt->rangetable);\n \t\t\t}\n*** ../src.original/./include/nodes/parsenodes.h\tSun Jan 17 23:40:37 1999\n--- ./include/nodes/parsenodes.h\tSun Jan 17 23:41:05 1999\n***************\n*** 332,337 ****\n--- 332,338 ----\n \t\t\t\t\t\t\t\t * transformStmt() */\n \tbool\t *lossy;\t\t\t/* is index lossy? */\n \tbool\t\tunique;\t\t\t/* is index unique? */\n+ \tbool\t\tprimary;\t\t/* is index on primary key? */\n } IndexStmt;\n \n /* ----------------------\n*** ../src.original/./include/catalog/index.h\tSat Jan 16 09:57:09 1999\n--- ./include/catalog/index.h\tSat Jan 16 10:04:30 1999\n***************\n*** 38,44 ****\n \t\t\t Datum *parameter,\n \t\t\t Node *predicate,\n \t\t\t bool islossy,\n! \t\t\t bool unique);\n \n extern void index_destroy(Oid indexId);\n \n--- 38,45 ----\n \t\t\t Datum *parameter,\n \t\t\t Node *predicate,\n \t\t\t bool islossy,\n! \t\t\t bool unique,\n! bool primary);\n \n extern void index_destroy(Oid indexId);\n \n*** ../src.original/./include/commands/defrem.h\tSat Jan 16 10:02:21 1999\n--- ./include/commands/defrem.h\tMon Jan 18 08:12:52 1999\n***************\n*** 25,30 ****\n--- 25,31 ----\n \t\t\tList *attributeList,\n \t\t\tList *parameterList,\n \t\t\tbool unique,\n+ \t\t\tbool primary,\n \t\t\tExpr *predicate,\n \t\t\tList *rangetable);\n extern void ExtendIndex(char *indexRelationName,\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n\n", "msg_date": "Tue, 19 Jan 1999 09:51:07 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Primary key support" } ]
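One nit in the example query above: it restricts on `pg_index.indisunique`, which also matches ordinary UNIQUE indexes created without PRIMARY KEY. With the patch applied, filtering on the new flag is what actually isolates the primary keys:

```sql
SELECT pg_class.relname, pg_attribute.attname
  FROM pg_class, pg_attribute, pg_index
 WHERE pg_class.oid = pg_attribute.attrelid AND
       pg_class.oid = pg_index.indrelid AND
       pg_index.indkey[0] = pg_attribute.attnum AND
       pg_index.indisprimary = 't';
```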
[ { "msg_contents": "Hi,\n\nI've already asked on pgsql-general but there was now answer so I'll\ntry here.\n\nI have to play a lot with temp tables.\nCould you pls, give me a hint on emulating them under Postgres.\n\nI have to use them quite often with 10000 rows each.\nI don't use indices cause I only cache a search result and don't need\nany sorting, searching on temptables.\n\nWhat's faster:\n\n1)\nonce:\n create tmptable;\n\nmany times:\n delete from tmptable;\n insert into tmptable select ... ;\n\nor\n\n2)\nmany times:\ndrop tmptable;\nselect ... into tmptable;\n\nI prefer the second way since I have a lot of concurrent access and\ncreate unique named tables under each transaction.\nUnfortunately the DROP thing dumps a core under PLPGSQL function. :(\n\nTIA,\nPawel\n\n\n", "msg_date": "Tue, 19 Jan 1999 19:57:35 +0100", "msg_from": "Pawel Pierscionek <[email protected]>", "msg_from_op": true, "msg_subject": "Temporary tables" } ]
[ { "msg_contents": "I have two questions.\n1) Has anyone tried to compile and run Postgres 6.4.2 on a Solaris 7(2.7) machine? Was there any\nproblems or issues that need to be resolved? We are porting our app the that version of Solaris and\nwere wondering if postgres will work.\n2) I am currently running Postgres 6.4....any major issues on upgrading to 6.4.2 and can someone\noutline the steps needed to do so.\n\nThanks again for your time\n---------\nChris Williams\nSterling Software\nRome, New York\nPhone: (315) 336-0500\nEmail: [email protected]\n\n", "msg_date": "Tue, 19 Jan 1999 16:48:05 -0500", "msg_from": "\"Chris Williams\" <[email protected]>", "msg_from_op": true, "msg_subject": "2 Questions, Solaris 7 & Upgrading to 6.4.2" } ]
[ { "msg_contents": "> > I have seen gnu sed version 1.* beat bsd sed by 2-3x.\n> \n> Partly my fault, I fear. The regular-expression code used in BSD sed,\n> last I checked, was the version I supplied for 4.4BSD... which works but\n> was done in haste and is rather slow. A much improved new release (new\n> implementation, in fact) is imminent, and the BSDI folk are likely to take\n> advantage of it (they've prodded me about it from time to time).\n\nCool. I think I have bugged you a few times too.\n\nI hate to see the GNU tools beat the BSD tools. \n\nPostgreSQL(www.postgresql.org) is ready to your replace your old regex\ncode with your new code as soon as you are ready. FYI, we have a\nFebruary 1 beta planned if you need testers.\n\nHope you will announce something publically when it is ready.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Jan 1999 18:08:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BSDI 3.1 -> BSDI 4.0 performance degradation;(h" }, { "msg_contents": ">> > I have seen gnu sed version 1.* beat bsd sed by 2-3x.\n>> \n>> Partly my fault, I fear. The regular-expression code used in BSD sed,\n>> last I checked, was the version I supplied for 4.4BSD... which works but\n>> was done in haste and is rather slow. A much improved new release (new\n>> implementation, in fact) is imminent, and the BSDI folk are likely to take\n>> advantage of it (they've prodded me about it from time to time).\n>\n>Cool. I think I have bugged you a few times too.\n>\n>I hate to see the GNU tools beat the BSD tools. \n>\n>PostgreSQL(www.postgresql.org) is ready to your replace your old regex\n>code with your new code as soon as you are ready. 
FYI, we have a\n>February 1 beta planned if you need testers.\n>\n>Hope you will announce something publically when it is ready.\n\nPlease do not remove the old regex code in current source tree\n(ifdef'ed is ok) since I will not have enough time right now to make\nI18N version of the new regex code.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 20 Jan 1999 09:49:43 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: BSDI 3.1 -> BSDI 4.0 performance degradation;(h " }, { "msg_contents": "> >PostgreSQL(www.postgresql.org) is ready to your replace your old regex\n> >code with your new code as soon as you are ready. FYI, we have a\n> >February 1 beta planned if you need testers.\n> >\n> >Hope you will announce something publically when it is ready.\n> \n> Please do not remove the old regex code in current source tree\n> (ifdef'ed is ok) since I will not have enough time right now to make\n> I18N version of the new regex code.\n\nI did not realize someone has made modifications to that. I will keep\nthat in mind.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Jan 1999 21:40:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: BSDI 3.1 -> BSDI 4.0 performance degradation;(h" } ]
[ { "msg_contents": "Here is a new version of my patch for allowing pg_dump to DROP schema\nelements prior to CREATEing new ones. It is under control of the -c\ncommand line option (with the default being status quo).\n\nThe DROP TRIGGER portion still needs implementation. Anyone able to\nhelp clarify what exactly the CREATE TRIGGER portion does so I can fix\nthis?\n\nAgain, I have tried this with tables/indexes/sequences, but do not\nhave other schema elements in my database. As a result, I am not 100%\nconvinced that I got the syntax correct in all cases (but think I did,\nnonetheless). If anyone can check the other cases, I'd appreciate it.\n\nCheers,\nBrook\n\n===========================================================================\n--- bin/pgdump/pg_dump.c.orig\tFri Jan 15 12:26:34 1999\n+++ bin/pgdump/pg_dump.c\tMon Jan 18 09:27:35 1999\n@@ -113,6 +113,7 @@\n int\t\tschemaOnly;\n int\t\tdataOnly;\n int\t\taclsOption;\n+bool\t\tdrop_schema;\n \n char\t\tg_opaque_type[10];\t\t/* name for the opaque type */\n \n@@ -129,6 +130,8 @@\n \tfprintf(stderr,\n \t\t\t\"\\t -a \\t\\t dump out only the data, no schema\\n\");\n \tfprintf(stderr,\n+\t\t\t\"\\t -c \\t\\t clean (i.e., drop) schema prior to create\\n\");\n+\tfprintf(stderr,\n \t\t\t\"\\t -d \\t\\t dump data as proper insert strings\\n\");\n \tfprintf(stderr,\n \t\t\t\"\\t -D \\t\\t dump data as inserts\"\n@@ -552,6 +555,7 @@\n \n \tg_verbose = false;\n \tforce_quotes = true;\n+\tdrop_schema = false;\n \n \tstrcpy(g_comment_start, \"-- \");\n \tg_comment_end[0] = '\\0';\n@@ -561,13 +565,16 @@\n \n \tprogname = *argv;\n \n-\twhile ((c = getopt(argc, argv, \"adDf:h:nNop:st:vzu\")) != EOF)\n+\twhile ((c = getopt(argc, argv, \"acdDf:h:nNop:st:vzu\")) != EOF)\n \t{\n \t\tswitch (c)\n \t\t{\n \t\t\tcase 'a':\t\t\t/* Dump data only */\n \t\t\t\tdataOnly = 1;\n \t\t\t\tbreak;\n+\t\t\tcase 'c':\t\t\t/* clean (i.e., drop) schema prior to create */\n+\t\t\t\tdrop_schema = true;\n+\t\t\t\tbreak;\n \t\t\tcase 'd':\t\t\t/* 
dump data as proper insert strings */\n \t\t\t\tdumpData = 1;\n \t\t\t\tbreak;\n@@ -1630,6 +1637,18 @@\n \t\t\t\t\texit_nicely(g_conn);\n \t\t\t\t}\n \t\t\t\ttgfunc = finfo[findx].proname;\n+\n+#if 0\t\t\t\t\n+\t\t\t\t/* XXX - how to emit this DROP TRIGGER? */\n+\t\t\t\tif (drop_schema)\n+\t\t\t\t {\n+\t\t\t\t sprintf(query, \"DROP TRIGGER %s ON %s;\\n\",\n+\t\t\t\t\t fmtId(PQgetvalue(res2, i2, i_tgname), force_quotes),\n+\t\t\t\t\t fmtId(tblinfo[i].relname, force_quotes));\n+\t\t\t\t fputs(query, fout);\n+\t\t\t\t }\n+#endif\n+\n \t\t\t\tsprintf(query, \"CREATE TRIGGER %s \", fmtId(PQgetvalue(res2, i2, i_tgname), force_quotes));\n \t\t\t\t/* Trigger type */\n \t\t\t\tfindx = 0;\n@@ -2026,6 +2045,12 @@\n \n \t\tbecomeUser(fout, tinfo[i].usename);\n \n+\t\tif (drop_schema)\n+\t\t {\n+\t\t sprintf(q, \"DROP TYPE %s;\\n\", fmtId(tinfo[i].typname, force_quotes));\n+\t\t fputs(q, fout);\n+\t\t }\n+\n \t\tsprintf(q,\n \t\t\t\t\"CREATE TYPE %s \"\n \t\t\t\t\"( internallength = %s, externallength = %s, input = %s, \"\n@@ -2122,6 +2147,9 @@\n \t\tlanname = checkForQuote(PQgetvalue(res, i, i_lanname));\n \t\tlancompiler = checkForQuote(PQgetvalue(res, i, i_lancompiler));\n \n+\t\tif (drop_schema)\n+\t\t fprintf(fout, \"DROP PROCEDURAL LANGUAGE '%s';\\n\", lanname);\n+\n \t\tfprintf(fout, \"CREATE %sPROCEDURAL LANGUAGE '%s' \"\n \t\t\t\"HANDLER %s LANCOMPILER '%s';\\n\",\n \t\t\t(PQgetvalue(res, i, i_lanpltrusted)[0] == 't') ? \"TRUSTED \" : \"\",\n@@ -2237,6 +2265,23 @@\n \t\tPQclear(res);\n \t}\n \n+\tif (drop_schema)\n+\t {\n+\t sprintf(q, \"DROP FUNCTION %s (\", fmtId(finfo[i].proname, force_quotes));\n+\t for (j = 0; j < finfo[i].nargs; j++)\n+\t {\n+\t\tchar\t *typname;\n+\t\t\n+\t\ttypname = findTypeByOid(tinfo, numTypes, finfo[i].argtypes[j]);\n+\t\tsprintf(q, \"%s%s%s\",\n+\t\t\tq,\n+\t\t\t(j > 0) ? 
\",\" : \"\",\n+\t\t\tfmtId(typname, false));\n+\t }\n+\t sprintf (q, \"%s);\\n\", q);\n+\t fputs(q, fout);\n+\t }\n+\n \tsprintf(q, \"CREATE FUNCTION %s (\", fmtId(finfo[i].proname, force_quotes));\n \tfor (j = 0; j < finfo[i].nargs; j++)\n \t{\n@@ -2347,6 +2392,14 @@\n \n \t\tbecomeUser(fout, oprinfo[i].usename);\n \n+\t\tif (drop_schema)\n+\t\t {\n+\t\t sprintf(q, \"DROP OPERATOR %s (%s, %s);\\n\", oprinfo[i].oprname, \n+\t\t\t fmtId(findTypeByOid(tinfo, numTypes, oprinfo[i].oprleft), false),\n+\t\t\t fmtId(findTypeByOid(tinfo, numTypes, oprinfo[i].oprright), false));\n+\t\t fputs(q, fout);\n+\t\t }\n+\n \t\tsprintf(q,\n \t\t\t\t\"CREATE OPERATOR %s \"\n \t\t\t\t\"(PROCEDURE = %s %s %s %s %s %s %s %s %s);\\n \",\n@@ -2442,6 +2495,13 @@\n \n \t\tbecomeUser(fout, agginfo[i].usename);\n \n+\t\tif (drop_schema)\n+\t\t {\n+\t\t sprintf(q, \"DROP AGGREGATE %s %s;\\n\", agginfo[i].aggname,\n+\t\t\t fmtId(findTypeByOid(tinfo, numTypes, agginfo[i].aggbasetype), false));\n+\t\t fputs(q, fout);\n+\t\t }\n+\n \t\tsprintf(q, \"CREATE AGGREGATE %s ( %s %s%s %s%s %s );\\n\",\n \t\t\t\tagginfo[i].aggname,\n \t\t\t\tbasetype,\n@@ -2641,6 +2701,12 @@\n \n \t\t\tbecomeUser(fout, tblinfo[i].usename);\n \n+\t\t\tif (drop_schema)\n+\t\t\t {\n+\t\t\t sprintf(q, \"DROP TABLE %s;\\n\", fmtId(tblinfo[i].relname, force_quotes));\n+\t\t\t fputs(q, fout);\n+\t\t\t }\n+\n \t\t\tsprintf(q, \"CREATE TABLE %s (\\n\\t\", fmtId(tblinfo[i].relname, force_quotes));\n \t\t\tactual_atts = 0;\n \t\t\tfor (j = 0; j < tblinfo[i].numatts; j++)\n@@ -2857,6 +2923,13 @@\n \n \t\t\tstrcpy(id1, fmtId(indinfo[i].indexrelname, force_quotes));\n \t\t\tstrcpy(id2, fmtId(indinfo[i].indrelname, force_quotes));\n+\n+\t\t\tif (drop_schema)\n+\t\t\t {\n+\t\t\t sprintf(q, \"DROP INDEX %s;\\n\", id1);\n+\t\t\t fputs(q, fout);\n+\t\t\t }\n+\n \t\t\tfprintf(fout, \"CREATE %s INDEX %s on %s using %s (\",\n \t\t\t (strcmp(indinfo[i].indisunique, \"t\") == 0) ? 
\"UNIQUE\" : \"\",\n \t\t\t\t\tid1,\n@@ -3116,6 +3189,12 @@\n \tcalled = *t;\n \n \tPQclear(res);\n+\n+\tif (drop_schema)\n+\t {\n+\t sprintf(query, \"DROP SEQUENCE %s;\\n\", fmtId(tbinfo.relname, force_quotes));\n+\t fputs(query, fout);\n+\t }\n \n \tsprintf(query,\n \t\t\t\"CREATE SEQUENCE %s start %d increment %d maxvalue %d \"\n", "msg_date": "Tue, 19 Jan 1999 16:09:14 -0700 (MST)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump -c option to drop prior to create" }, { "msg_contents": "Applied.\n\n> Here is a new version of my patch for allowing pg_dump to DROP schema\n> elements prior to CREATEing new ones. It is under control of the -c\n> command line option (with the default being status quo).\n> \n> The DROP TRIGGER portion still needs implementation. Anyone able to\n> help clarify what exactly the CREATE TRIGGER portion does so I can fix\n> this?\n> \n> Again, I have tried this with tables/indexes/sequences, but do not\n> have other schema elements in my database. As a result, I am not 100%\n> convinced that I got the syntax correct in all cases (but think I did,\n> nonetheless). 
If anyone can check the other cases, I'd appreciate it.\n> \n> Cheers,\n> Brook\n> \n> ===========================================================================\n> --- bin/pgdump/pg_dump.c.orig\tFri Jan 15 12:26:34 1999\n> +++ bin/pgdump/pg_dump.c\tMon Jan 18 09:27:35 1999\n> @@ -113,6 +113,7 @@\n> int\t\tschemaOnly;\n> int\t\tdataOnly;\n> int\t\taclsOption;\n> +bool\t\tdrop_schema;\n> \n> char\t\tg_opaque_type[10];\t\t/* name for the opaque type */\n> \n> @@ -129,6 +130,8 @@\n> \tfprintf(stderr,\n> \t\t\t\"\\t -a \\t\\t dump out only the data, no schema\\n\");\n> \tfprintf(stderr,\n> +\t\t\t\"\\t -c \\t\\t clean (i.e., drop) schema prior to create\\n\");\n> +\tfprintf(stderr,\n> \t\t\t\"\\t -d \\t\\t dump data as proper insert strings\\n\");\n> \tfprintf(stderr,\n> \t\t\t\"\\t -D \\t\\t dump data as inserts\"\n> @@ -552,6 +555,7 @@\n> \n> \tg_verbose = false;\n> \tforce_quotes = true;\n> +\tdrop_schema = false;\n> \n> \tstrcpy(g_comment_start, \"-- \");\n> \tg_comment_end[0] = '\\0';\n> @@ -561,13 +565,16 @@\n> \n> \tprogname = *argv;\n> \n> -\twhile ((c = getopt(argc, argv, \"adDf:h:nNop:st:vzu\")) != EOF)\n> +\twhile ((c = getopt(argc, argv, \"acdDf:h:nNop:st:vzu\")) != EOF)\n> \t{\n> \t\tswitch (c)\n> \t\t{\n> \t\t\tcase 'a':\t\t\t/* Dump data only */\n> \t\t\t\tdataOnly = 1;\n> \t\t\t\tbreak;\n> +\t\t\tcase 'c':\t\t\t/* clean (i.e., drop) schema prior to create */\n> +\t\t\t\tdrop_schema = true;\n> +\t\t\t\tbreak;\n> \t\t\tcase 'd':\t\t\t/* dump data as proper insert strings */\n> \t\t\t\tdumpData = 1;\n> \t\t\t\tbreak;\n> @@ -1630,6 +1637,18 @@\n> \t\t\t\t\texit_nicely(g_conn);\n> \t\t\t\t}\n> \t\t\t\ttgfunc = finfo[findx].proname;\n> +\n> +#if 0\t\t\t\t\n> +\t\t\t\t/* XXX - how to emit this DROP TRIGGER? 
*/\n> +\t\t\t\tif (drop_schema)\n> +\t\t\t\t {\n> +\t\t\t\t sprintf(query, \"DROP TRIGGER %s ON %s;\\n\",\n> +\t\t\t\t\t fmtId(PQgetvalue(res2, i2, i_tgname), force_quotes),\n> +\t\t\t\t\t fmtId(tblinfo[i].relname, force_quotes));\n> +\t\t\t\t fputs(query, fout);\n> +\t\t\t\t }\n> +#endif\n> +\n> \t\t\t\tsprintf(query, \"CREATE TRIGGER %s \", fmtId(PQgetvalue(res2, i2, i_tgname), force_quotes));\n> \t\t\t\t/* Trigger type */\n> \t\t\t\tfindx = 0;\n> @@ -2026,6 +2045,12 @@\n> \n> \t\tbecomeUser(fout, tinfo[i].usename);\n> \n> +\t\tif (drop_schema)\n> +\t\t {\n> +\t\t sprintf(q, \"DROP TYPE %s;\\n\", fmtId(tinfo[i].typname, force_quotes));\n> +\t\t fputs(q, fout);\n> +\t\t }\n> +\n> \t\tsprintf(q,\n> \t\t\t\t\"CREATE TYPE %s \"\n> \t\t\t\t\"( internallength = %s, externallength = %s, input = %s, \"\n> @@ -2122,6 +2147,9 @@\n> \t\tlanname = checkForQuote(PQgetvalue(res, i, i_lanname));\n> \t\tlancompiler = checkForQuote(PQgetvalue(res, i, i_lancompiler));\n> \n> +\t\tif (drop_schema)\n> +\t\t fprintf(fout, \"DROP PROCEDURAL LANGUAGE '%s';\\n\", lanname);\n> +\n> \t\tfprintf(fout, \"CREATE %sPROCEDURAL LANGUAGE '%s' \"\n> \t\t\t\"HANDLER %s LANCOMPILER '%s';\\n\",\n> \t\t\t(PQgetvalue(res, i, i_lanpltrusted)[0] == 't') ? \"TRUSTED \" : \"\",\n> @@ -2237,6 +2265,23 @@\n> \t\tPQclear(res);\n> \t}\n> \n> +\tif (drop_schema)\n> +\t {\n> +\t sprintf(q, \"DROP FUNCTION %s (\", fmtId(finfo[i].proname, force_quotes));\n> +\t for (j = 0; j < finfo[i].nargs; j++)\n> +\t {\n> +\t\tchar\t *typname;\n> +\t\t\n> +\t\ttypname = findTypeByOid(tinfo, numTypes, finfo[i].argtypes[j]);\n> +\t\tsprintf(q, \"%s%s%s\",\n> +\t\t\tq,\n> +\t\t\t(j > 0) ? 
\",\" : \"\",\n> +\t\t\tfmtId(typname, false));\n> +\t }\n> +\t sprintf (q, \"%s);\\n\", q);\n> +\t fputs(q, fout);\n> +\t }\n> +\n> \tsprintf(q, \"CREATE FUNCTION %s (\", fmtId(finfo[i].proname, force_quotes));\n> \tfor (j = 0; j < finfo[i].nargs; j++)\n> \t{\n> @@ -2347,6 +2392,14 @@\n> \n> \t\tbecomeUser(fout, oprinfo[i].usename);\n> \n> +\t\tif (drop_schema)\n> +\t\t {\n> +\t\t sprintf(q, \"DROP OPERATOR %s (%s, %s);\\n\", oprinfo[i].oprname, \n> +\t\t\t fmtId(findTypeByOid(tinfo, numTypes, oprinfo[i].oprleft), false),\n> +\t\t\t fmtId(findTypeByOid(tinfo, numTypes, oprinfo[i].oprright), false));\n> +\t\t fputs(q, fout);\n> +\t\t }\n> +\n> \t\tsprintf(q,\n> \t\t\t\t\"CREATE OPERATOR %s \"\n> \t\t\t\t\"(PROCEDURE = %s %s %s %s %s %s %s %s %s);\\n \",\n> @@ -2442,6 +2495,13 @@\n> \n> \t\tbecomeUser(fout, agginfo[i].usename);\n> \n> +\t\tif (drop_schema)\n> +\t\t {\n> +\t\t sprintf(q, \"DROP AGGREGATE %s %s;\\n\", agginfo[i].aggname,\n> +\t\t\t fmtId(findTypeByOid(tinfo, numTypes, agginfo[i].aggbasetype), false));\n> +\t\t fputs(q, fout);\n> +\t\t }\n> +\n> \t\tsprintf(q, \"CREATE AGGREGATE %s ( %s %s%s %s%s %s );\\n\",\n> \t\t\t\tagginfo[i].aggname,\n> \t\t\t\tbasetype,\n> @@ -2641,6 +2701,12 @@\n> \n> \t\t\tbecomeUser(fout, tblinfo[i].usename);\n> \n> +\t\t\tif (drop_schema)\n> +\t\t\t {\n> +\t\t\t sprintf(q, \"DROP TABLE %s;\\n\", fmtId(tblinfo[i].relname, force_quotes));\n> +\t\t\t fputs(q, fout);\n> +\t\t\t }\n> +\n> \t\t\tsprintf(q, \"CREATE TABLE %s (\\n\\t\", fmtId(tblinfo[i].relname, force_quotes));\n> \t\t\tactual_atts = 0;\n> \t\t\tfor (j = 0; j < tblinfo[i].numatts; j++)\n> @@ -2857,6 +2923,13 @@\n> \n> \t\t\tstrcpy(id1, fmtId(indinfo[i].indexrelname, force_quotes));\n> \t\t\tstrcpy(id2, fmtId(indinfo[i].indrelname, force_quotes));\n> +\n> +\t\t\tif (drop_schema)\n> +\t\t\t {\n> +\t\t\t sprintf(q, \"DROP INDEX %s;\\n\", id1);\n> +\t\t\t fputs(q, fout);\n> +\t\t\t }\n> +\n> \t\t\tfprintf(fout, \"CREATE %s INDEX %s on %s using %s (\",\n> \t\t\t 
(strcmp(indinfo[i].indisunique, \"t\") == 0) ? \"UNIQUE\" : \"\",\n> \t\t\t\t\tid1,\n> @@ -3116,6 +3189,12 @@\n> \tcalled = *t;\n> \n> \tPQclear(res);\n> +\n> +\tif (drop_schema)\n> +\t {\n> +\t sprintf(query, \"DROP SEQUENCE %s;\\n\", fmtId(tbinfo.relname, force_quotes));\n> +\t fputs(query, fout);\n> +\t }\n> \n> \tsprintf(query,\n> \t\t\t\"CREATE SEQUENCE %s start %d increment %d maxvalue %d \"\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Jan 1999 17:54:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] pg_dump -c option to drop prior to create" } ]
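One caution on the DROP FUNCTION hunk of the patch above: it builds the argument list with `sprintf(q, "%s%s%s", q, ...)`, passing `q` as both the destination and the first source. ANSI C leaves overlapping `sprintf` operands undefined; it only happens to work with some libc implementations. A sketch of the same loop using an explicit offset instead (the function name and plain type-name arguments are illustrative stand-ins for the `fmtId()`-quoted values pg_dump actually emits):

```c
#include <stdio.h>
#include <string.h>

/* Safe rewrite of the append-in-place idiom from the DROP FUNCTION
 * hunk: keep an offset to the end of the string and print there,
 * rather than handing q to sprintf as both destination and source. */
void build_drop_function(char *q, const char *proname,
                         const char **argtypes, int nargs)
{
    int j, len;

    len = sprintf(q, "DROP FUNCTION %s (", proname);
    for (j = 0; j < nargs; j++)
        len += sprintf(q + len, "%s%s", (j > 0) ? "," : "", argtypes[j]);
    sprintf(q + len, ");\n");
}
```

For example, `build_drop_function(q, "add_one", args, 1)` with `args[0] = "int4"` yields `DROP FUNCTION add_one (int4);\n`.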
[ { "msg_contents": "Hello all,\n\nIt seems that SPI_prepare() doesn't work well in some cases.\n\nPawel Pierscionek [[email protected]] reported about the \nfollowing case 1([SQL] drop table in pgsql).\nMichael Contzen [[email protected]] reported about the \nfollowing case 2(PL/PGSQL bug using aggregates).\nYou can find it from pgsql-hackers archive.\n\n1. PL/pgSQL can't execute UTILITY commands.\n SPI_prepare() doesn't copy(save) the utilityStmt member of\n Query type nodes,because copyObject() is not implemented \n for nodes of (Create/Destroy etc)Stmt type.\n\n2. Aggregates in PL/pgSQL cause wrong results.\n\n create table t1 (i int, a int, b int);\n create table t2 (i int, x int, y int);\n\n insert into t1 values(1, 1,10);\n insert into t1 values(1, 2,10);\n insert into t1 values(2, 3,10);\n insert into t1 values(2, 4,10);\n\n create function func1()\n returns int\n as '\n declare\n begin\n insert into t2\n select i,\n sum(a) as x,\n sum(b) as y\n from t1\n group by i;\n return 1;\n end;\n ' language 'plpgsql';\n\n select func1();\n \n select * from t2;\n\n The result must be the following.\n\n i| x| y\n - -+--+--\n 1| 3|20\n 2| 7|20\n (2 rows)\n\n But the result is as follows.\n \n i| x| y\n - -+--+--\n 1|20|20\n 2|20|20\n (2 rows)\n\n The result of x's are overwritten by y's.\n\n There is a patch for this case at the end of this mail.\n After I applied it,I got a correct result. 
\n But I'm not sure this patch is right for all aggregate cases.\n\n SPI_prepare() doesn't copy(save) nodes of Agg type\n correctly.\n The node of Agg type has a member named aggs.\n It's a list including Aggreg type nodes which exist in \n TargetList(i.e Aggreg type nodes are common to aggs \n member list and TargetList).\n AFAIC the common pointer is not copied to the same \n pointer by copyObject() function.\n In my patch I reconstruct aggs member node from \n new(copied) Agg type node.\n Is it proper to use set_agg_tlist_references() function to \n reconstruct aggs member node for Agg type nodes ?\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n*** backend/nodes/copyfuncs.c.orig\tTue Jan 19 09:07:48 1999\n--- backend/nodes/copyfuncs.c\tWed Jan 20 14:42:37 1999\n***************\n*** 506,512 ****\n \n \tCopyPlanFields((Plan *) from, (Plan *) newnode);\n \n! \tNode_Copy(from, newnode, aggs);\n \tNode_Copy(from, newnode, aggstate);\n \n \treturn newnode;\n--- 506,512 ----\n \n \tCopyPlanFields((Plan *) from, (Plan *) newnode);\n \n! \tnewnode->aggs = set_agg_tlist_references(newnode);\n \tNode_Copy(from, newnode, aggstate);\n \n \treturn newnode;\n\n", "msg_date": "Wed, 20 Jan 1999 19:53:36 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "SPI_prepare() doesn't work well ?" }, { "msg_contents": "Hiroshi Inoue wrote:\n\n>\n> Hello all,\n>\n> It seems that SPI_prepare() doesn't work well in some cases.\n>\n> Pawel Pierscionek [[email protected]] reported about the\n> following case 1([SQL] drop table in pgsql).\n> Michael Contzen [[email protected]] reported about the\n> following case 2(PL/PGSQL bug using aggregates).\n> You can find it from pgsql-hackers archive.\n>\n> 1. PL/pgSQL can't execute UTILITY commands.\n> SPI_prepare() doesn't copy(save) the utilityStmt member of\n> Query type nodes,because copyObject() is not implemented\n> for nodes of (Create/Destroy etc)Stmt type.\n\n Thank's for that. 
I wondered why PL/pgSQL wasn't able to\n execute utility statements. Unfortunately I wasn't able to\n track it down the last days, because I had trouble with my\n shared libraries (glibc6 and libstdc++ aren't easy-going on\n Linux :-).\n\n Knowing where the problem is located saves me a lot of time.\n\n>\n> 2. Aggregates in PL/pgSQL cause wrong results.\n>\n> Is it proper to use set_agg_tlist_references() function to\n> reconstruct aggs member node for Agg type nodes ?\n\n Don't know. It is important, that the copy of the tree has\n absolutely NO references to anything outside itself. The\n parser/rewrite/planner combo creates all plans in the actual\n transactions memory context. So they will get destroyed at\n transaction end.\n\n SPI_saveplan() simply copies it into another memory context\n that lives until the backend dies. It uses the node copy\n function for this, so the result of that should be a totally\n independed, self referencing tree that can stand alone.\n\n I think that copyObject() should produce an ERROR if there\n are nodes it cannot handle correctly (like queries for\n utilities). This would prevent the backend crashes from\n trying to invoke utilities inside procedural languages.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 20 Jan 1999 16:56:52 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SPI_prepare() doesn't work well ?" 
}, { "msg_contents": "\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Jan Wieck\n> Sent: Thursday, January 21, 1999 12:57 AM\n> To: Hiroshi Inoue\n> Cc: [email protected]; [email protected]\n> Subject: Re: [HACKERS] SPI_prepare() doesn't work well ?\n> \n> \n> Hiroshi Inoue wrote:\n> \n> >\n> > Hello all,\n> >\n> > It seems that SPI_prepare() doesn't work well in some cases.\n> >\n> > Pawel Pierscionek [[email protected]] reported about the\n> > following case 1([SQL] drop table in pgsql).\n> > Michael Contzen [[email protected]] reported about the\n> > following case 2(PL/PGSQL bug using aggregates).\n> > You can find it from pgsql-hackers archive.\n> >\n\n[snip]\n\n> >\n> > 2. Aggregates in PL/pgSQL cause wrong results.\n> >\n> > Is it proper to use set_agg_tlist_references() function to\n> > reconstruct aggs member node for Agg type nodes ?\n> \n> Don't know. It is important, that the copy of the tree has\n> absolutely NO references to anything outside itself. The\n> parser/rewrite/planner combo creates all plans in the actual\n> transactions memory context. So they will get destroyed at\n> transaction end.\n>\n\nIn my patch the set_agg_tlist_references() function is applied to \ncopied (new) Agg type nodes, not to the original (old) nodes.\n\nThis case is a special case of\n\n How does copyObject() copy multiply referenced objects ?\n\nAs to this case, the PostgreSQL executor sets the aggno's of Aggreg nodes \nusing the aggs member of Agg type nodes.\nAs a result, the aggno's of the Aggreg nodes contained in the TargetList \nare set because they share common pointers.\nSo multiply referenced Aggreg type nodes must be copied to the new \nplan as multiply referenced nodes.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n \n\n", "msg_date": "Thu, 21 Jan 1999 09:13:53 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] SPI_prepare() doesn't work well ?" 
}, { "msg_contents": "Applied.\n\n---------------------------------------------------------------------------\n\nHello all,\n\nIt seems that SPI_prepare() doesn't work well in some cases.\n\nPawel Pierscionek [[email protected]] reported about the \nfollowing case 1([SQL] drop table in pgsql).\nMichael Contzen [[email protected]] reported about the \nfollowing case 2(PL/PGSQL bug using aggregates).\nYou can find it from pgsql-hackers archive.\n\n1. PL/pgSQL can't execute UTILITY commands.\n SPI_prepare() doesn't copy(save) the utilityStmt member of\n Query type nodes,because copyObject() is not implemented \n for nodes of (Create/Destroy etc)Stmt type.\n\n2. Aggregates in PL/pgSQL cause wrong results.\n\n create table t1 (i int, a int, b int);\n create table t2 (i int, x int, y int);\n\n insert into t1 values(1, 1,10);\n insert into t1 values(1, 2,10);\n insert into t1 values(2, 3,10);\n insert into t1 values(2, 4,10);\n\n create function func1()\n returns int\n as '\n declare\n begin\n insert into t2\n select i,\n sum(a) as x,\n sum(b) as y\n from t1\n group by i;\n return 1;\n end;\n ' language 'plpgsql';\n\n select func1();\n \n select * from t2;\n\n The result must be the following.\n\n i| x| y\n - -+--+--\n 1| 3|20\n 2| 7|20\n (2 rows)\n\n But the result is as follows.\n \n i| x| y\n - -+--+--\n 1|20|20\n 2|20|20\n (2 rows)\n\n The result of x's are overwritten by y's.\n\n There is a patch for this case at the end of this mail.\n After I applied it,I got a correct result. 
\n But I'm not sure this patch is right for all aggregate cases.\n\n SPI_prepare() doesn't copy(save) nodes of Agg type\n correctly.\n The node of Agg type has a member named aggs.\n It's a list including Aggreg type nodes which exist in \n TargetList(i.e Aggreg type nodes are common to aggs \n member list and TargetList).\n AFAIC the common pointer is not copied to the same \n pointer by copyObject() function.\n In my patch I reconstruct aggs member node from \n new(copied) Agg type node.\n Is it proper to use set_agg_tlist_references() function to \n reconstruct aggs member node for Agg type nodes ?\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n*** backend/nodes/copyfuncs.c.orig\tTue Jan 19 09:07:48 1999\n--- backend/nodes/copyfuncs.c\tWed Jan 20 14:42:37 1999\n***************\n*** 506,512 ****\n \n \tCopyPlanFields((Plan *) from, (Plan *) newnode);\n \n! \tNode_Copy(from, newnode, aggs);\n \tNode_Copy(from, newnode, aggstate);\n \n \treturn newnode;\n--- 506,512 ----\n \n \tCopyPlanFields((Plan *) from, (Plan *) newnode);\n \n! \tnewnode->aggs = set_agg_tlist_references(newnode);\n \tNode_Copy(from, newnode, aggstate);\n \n \treturn newnode;\n\n\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Jan 1999 17:55:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SPI_prepare() doesn't work well ?" 
}, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Jan Wieck\n> Sent: Thursday, January 21, 1999 12:57 AM\n> To: Hiroshi Inoue\n> Cc: [email protected]; [email protected]\n> Subject: Re: [HACKERS] SPI_prepare() doesn't work well ?\n> \n> \n> Hiroshi Inoue wrote:\n> \n> >\n> > Hello all,\n> >\n> > It seems that SPI_prepare() doesn't work well in some cases.\n> >\n> > Pawel Pierscionek [[email protected]] reported about the\n> > following case 1([SQL] drop table in pgsql).\n> > Michael Contzen [[email protected]] reported about the\n> > following case 2(PL/PGSQL bug using aggregates).\n> > > You can find it from pgsql-hackers archive.\n> > >\n> \n> [snip]\n> \n> > >\n> > > 2. Aggregates in PL/pgSQL cause wrong results.\n> > >\n> > > Is it proper to use set_agg_tlist_references() function to\n> > > reconstruct aggs member node for Agg type nodes ?\n> > \n> > Don't know. It is important, that the copy of the tree has\n> > absolutely NO references to anything outside itself. The\n> > parser/rewrite/planner combo creates all plans in the actual\n> > transactions memory context. So they will get destroyed at\n> > transaction end.\n> >\n> \n> In my patch set_agg_tlist_refereces() function is appiled for \n> copied(new) Agg type nodes,not for original(old) nodes.\n> \n> This case is a special case of\n> \n> How does copyObject() copy multiply referenced objects ?\n> \n> As to this case,PostgreSQL Executor sets aggno's of Aggreg Nodes \n> using aggs member node of Agg type nodes.\n> As a result,aggno's of the Aggreg nodes contained in TargetList \n> are set because they have common pointers.\n> So multiply referenced Aggreg type nodes must be copied to new \n> plan as multiply referenced nodes.\n> \n\nIt is my understanding that this has always been a problem. 
I have\nnever 100% been confident I understand it.\n\nAs I remember, there used to be a:\n\n Aggreg **qry_agg\n\nthat was a member of the Query structure, and cause all sorts of\nproblems because the Agg was in the target list _and_ in the qry_agg. \nThat was removed. Looks like I did it, but I don't remember doing it:\n\t\n\trevision 1.44\n\tdate: 1998/01/15 19:00:11; author: momjian; state: Exp; lines: +2 -4\n\tRemove Query->qry_aggs and qry_numaggs and replace with Query->hasAggs.\n\t\n\tPass List* of Aggregs into executor, and create needed array there.\n\tNo longer need to double-processs Aggregs with second copy in Query.\n\nThe removal of that fixed many problems. Can you explain a little more\non how multiple Agg references happen. Perhaps we can get a good clean\nfix for this. Maybe redesign is needed, as I did for the removal of\nqry_aggs.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Jan 1999 23:37:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SPI_prepare() doesn't work well ?" }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Friday, January 22, 1999 1:38 PM\n> To: Hiroshi Inoue\n> Cc: [email protected]; [email protected]\n> Subject: Re: [HACKERS] SPI_prepare() doesn't work well ?\n>\n>\n[snip]\n\n> > > >\n> > > > 2. Aggregates in PL/pgSQL cause wrong results.\n> > > >\n[snip]\n\n>\n> It is my understanding that this has always been a problem. I have\n> never 100% been confident I understand it.\n>\n> As I remember, there used to be a:\n>\n> Aggreg **qry_agg\n>\n> that was a member of the Query structure, and cause all sorts of\n> problems because the Agg was in the target list _and_ in the qry_agg.\n> That was removed. 
Looks like I did it, but I don't remember doing it:\n>\n> \trevision 1.44\n> \tdate: 1998/01/15 19:00:11; author: momjian; state: Exp;\n> lines: +2 -4\n> \tRemove Query->qry_aggs and qry_numaggs and replace with\n> Query->hasAggs.\n>\n> \tPass List* of Aggregs into executor, and create needed array there.\n> \tNo longer need to double-processs Aggregs with second copy in Query.\n>\n> The removal of that fixed many problems. Can you explain a little more\n> on how multiple Agg references happen. Perhaps we can get a good clean\n> fix for this. Maybe redesign is needed, as I did for the removal of\n> qry_aggs.\n>\n\nSorry, I don't know the details.\nThe source tree I can trace is as follows:\n\n1. Multiple references to Aggreg nodes\n [ Function union_planner() in src/backend/optimizer/plan/planner.c ]\n\n if (parse->hasAggs)\n {\n result_plan = (Plan *) make_agg(tlist, result_plan);\n\n /*\n * set the varno/attno entries to the appropriate references\nto\n * the result tuple of the subplans.\n */\n ((Agg *) result_plan)->aggs =\n set_agg_tlist_references((Agg *) result_plan);\n\n\n [ Function set_agg_tlist_references() in optimizer/opt/setrefs.c ]\n\n aggreg_list = nconc(\n replace_agg_clause(tle->expr, subplanTargetList),\naggreg_list)\n;\n }\n return aggreg_list;\n\n\n [ Function replace_agg_clause() in optimizer/opt/setrefs.c ]\n\n else if (IsA(clause, Aggreg))\n {\n return lcons(clause,\n\t\t ^^^^^^^\n replace_agg_clause(((Aggreg *) clause)->target,\nsubplanTargetList));\n\n\n clause is contained in the return of replace_agg_clause() and so\n contained in the return of set_agg_tlist_references().\n\n2. aggno's of aggs list members are set\n [ Function ExecAgg() in executor/nodeAgg.c ]\n\n alist = node->aggs;\n for (i = 0; i < nagg; i++)\n {\n aggregates[i] = lfirst(alist);\n aggregates[i]->aggno = i;\n alist = lnext(alist);\n }\n\n3. aggno's are used\n [ Function ExecEvalAggreg() in executor/execQual.c\n called from ExecEvalExpr() ]\n\nstatic 
Datum\nExecEvalAggreg(Aggreg *agg, ExprContext *econtext, bool *isNull)\n{\n *isNull = econtext->ecxt_nulls[agg->aggno];\n return econtext->ecxt_values[agg->aggno];\n}\n\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Fri, 22 Jan 1999 15:38:28 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] SPI_prepare() doesn't work well ?" }, { "msg_contents": "Hiroshi, I believe I have fixed this problem. Let me know if it still\ndoesn't work.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\nHello all,\n\nIt seems that SPI_prepare() doesn't work well in some cases.\n\nPawel Pierscionek [[email protected]] reported about the \nfollowing case 1([SQL] drop table in pgsql).\nMichael Contzen [[email protected]] reported about the \nfollowing case 2(PL/PGSQL bug using aggregates).\nYou can find it from pgsql-hackers archive.\n\n1. PL/pgSQL can't execute UTILITY commands.\n SPI_prepare() doesn't copy(save) the utilityStmt member of\n Query type nodes,because copyObject() is not implemented \n for nodes of (Create/Destroy etc)Stmt type.\n\n2. 
Aggregates in PL/pgSQL cause wrong results.\n\n create table t1 (i int, a int, b int);\n create table t2 (i int, x int, y int);\n\n insert into t1 values(1, 1,10);\n insert into t1 values(1, 2,10);\n insert into t1 values(2, 3,10);\n insert into t1 values(2, 4,10);\n\n create function func1()\n returns int\n as '\n declare\n begin\n insert into t2\n select i,\n sum(a) as x,\n sum(b) as y\n from t1\n group by i;\n return 1;\n end;\n ' language 'plpgsql';\n\n select func1();\n \n select * from t2;\n\n The result must be the following.\n\n i| x| y\n - -+--+--\n 1| 3|20\n 2| 7|20\n (2 rows)\n\n But the result is as follows.\n \n i| x| y\n - -+--+--\n 1|20|20\n 2|20|20\n (2 rows)\n\n The result of x's are overwritten by y's.\n\n There is a patch for this case at the end of this mail.\n After I applied it,I got a correct result. \n But I'm not sure this patch is right for all aggregate cases.\n\n SPI_prepare() doesn't copy(save) nodes of Agg type\n correctly.\n The node of Agg type has a member named aggs.\n It's a list including Aggreg type nodes which exist in \n TargetList(i.e Aggreg type nodes are common to aggs \n member list and TargetList).\n AFAIC the common pointer is not copied to the same \n pointer by copyObject() function.\n In my patch I reconstruct aggs member node from \n new(copied) Agg type node.\n Is it proper to use set_agg_tlist_references() function to \n reconstruct aggs member node for Agg type nodes ?\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n*** backend/nodes/copyfuncs.c.orig\tTue Jan 19 09:07:48 1999\n--- backend/nodes/copyfuncs.c\tWed Jan 20 14:42:37 1999\n***************\n*** 506,512 ****\n \n \tCopyPlanFields((Plan *) from, (Plan *) newnode);\n \n! \tNode_Copy(from, newnode, aggs);\n \tNode_Copy(from, newnode, aggstate);\n \n \treturn newnode;\n--- 506,512 ----\n \n \tCopyPlanFields((Plan *) from, (Plan *) newnode);\n \n! 
\tnewnode->aggs = set_agg_tlist_references(newnode);\n \tNode_Copy(from, newnode, aggstate);\n \n \treturn newnode;", "msg_date": "Mon, 15 Mar 1999 09:31:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SPI_prepare() doesn't work well ?" } ]
[ { "msg_contents": "Sir,\n I'm a graduate student of the Institute of Computing Technology, Academia Sinica. Since my major is databases, I earnestly want to join the Internet developer team. Can you tell me how I can become one of the Internet developers of PostgreSQL?\n Yours truly,\n Yuansen Chen", "msg_date": "Wed, 20 Jan 1999 19:02:31 +0800", "msg_from": "\"yschen\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to join PGSQL mailing list?" } ]
[ { "msg_contents": "> \n> Hi again,\n> \n> I'm trying to use your hint, but I don't find the new select syntax in the\n> docs, so excuse me for asking again.\n> If I've got it right, when I make a SELECT ... FOR UPDATE within a connection\n> (Autocommit = off), the selected records are blocked until the connection is\n> closed, committed or rolled-back. Thus, only the same transaction can modify\n> them, and consistency is thus achieved.\n> \n> I've tried launching two processes that make simultaneous updates on one\n> table (different rows) and simultaneous inserts on another (these processes\n> are client order imports from file). One of the processes ran while the\n> other kept waiting in the first insert it attempted. Do I have to use any SQL\n> or configure something, or is this the normal behaviour?\n> \n> Thanks again for your answers.\n\nI assume you are running the snapshot, and not 6.4.*. You are actually\nusing FOR UPDATE, so I think it is the snapshot. \n\nThis is normal behavior, I think. I believe the issue with SELECT FOR\nUPDATE is that it has to lock the entire table. We allow non-blocking\nreaders and non-blocking writers on different rows by using the\ntransaction id and multi-version system. SELECT FOR UPDATE does not\nactually modify any rows, so we can't look at any transaction id.\n\nVadim, can you comment?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Jan 1999 06:09:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Beta test of Postgresql 6.5" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Hi again,\n> >\n> > I'm trying to use your hint, but I don't find the new select syntax in the\n> > docs, so excuse me for asking again.\n> > If I've got it right, when I make a SELECT ... 
FOR UPDATE within a connection\n> > (Autocommit = off), the selected records are blocked until the connection is\n> > closed, committed or rolled-back. Thus, only the same transaction can modify\n> > them, and consistency is thus achieved.\n> >\n> > I've tried launching two processes that make simultaneous updates on one\n> > table (different rows) and simultaneous inserts on another (these processes\n> > are client order imports from file). One of the processes ran while the\n> > other kept waiting in the first insert it attempted. Do I have to use any SQL\n> > or configure something, or is this the normal behaviour?\n> >\n> > Thanks again for your answers.\n\nOnly syntax is implemented currently.\nPlease wait for 1-2 days.\n\n> I assume you are running the snapshot, and not 6.4.*. You are actually\n> using FOR UPDATE, so I think it is the snapshot.\n> \n> This is normal behavior, I think. I believe the issue with SELECT FOR\n> UPDATE is that it has to lock the entire table. We allow non-blocking\n> readers and non-blocking writers on different rows by using the\n> transaction id and multi-version system. SELECT FOR UPDATE does not\n> actually modify any rows, so we can't look at any transaction id.\n\nFOR UPDATE modifies rows.\nIt changes t_xmax and sets HEAP_MARKED_FOR_UPDATE\nflag in t_infomask.\n\nBTW, I think that MVCC stuff will not be ready\nfor beta testing 1 Feb...\n\nVadim\n", "msg_date": "Wed, 20 Jan 1999 19:50:56 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Beta test of Postgresql 6.5" }, { "msg_contents": "> BTW, I think that MVCC stuff will not be ready\n> for beta testing 1 Feb...\n\nistm that the MVCC stuff could/should be the focus of this release. 
So\nunless Vadim is already worried about this taking much longer than\nthrough February, we should just plan around his schedule.\n\nbtw, I'd like to go through the parser at some point and (if possible)\nconvert the new MVCC-related parsing from (Ident strings + string tests)\nto (yacc keywords). I think that can happen just before or after the\nstart of beta. OK?\n\n - Tom\n", "msg_date": "Wed, 20 Jan 1999 15:09:49 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Beta test of Postgresql 6.5" }, { "msg_contents": "On Wed, 20 Jan 1999, Thomas G. Lockhart wrote:\n\n> > BTW, I think that MVCC stuff will not be ready\n> > for beta testing 1 Feb...\n> \n> istm that the MVCC stuff could/should be the focus of this release. So\n> unless Vadim is already worried about this taking much longer than\n> through February, we should just plan around his schedule.\n\nI kinda agree here...I think that MVCC is crucial to the next release, and\nif we have to hold off a little bit for that, so be it. \n\nLet's go for our beta cycle starting the moment that Vadim states that he\nis prepared, and from that day forth, *nothing*, *nadda*, gets added\nunexcept to fix bugs...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 20 Jan 1999 15:21:15 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Beta test of Postgresql 6.5" }, { "msg_contents": "> On Wed, 20 Jan 1999, Thomas G. Lockhart wrote:\n> \n> > > BTW, I think that MVCC stuff will not be ready\n> > > for beta testing 1 Feb...\n> > \n> > istm that the MVCC stuff could/should be the focus of this release. 
So\n> > unless Vadim is already worried about this taking much longer than\n> > through February, we should just plan around his schedule.\n> \n> I kinda agree here...I think that MVCC is crucial to the next release, and\n> if we have to hold off a little bit for that, so be it. \n> \n> Let's go for our beta cycle starting the moment that Vadim states that he\n> is prepared, and from that day forth, *nothing*, *nadda*, gets added\n> unexcept to fix bugs...\n\nYes, but only on/after Feb 1. We have to give others warning.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Jan 1999 14:49:55 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Beta test of Postgresql 6.5" }, { "msg_contents": "\"Thomas G. Lockhart\" wrote:\n> \n> > BTW, I think that MVCC stuff will not be ready\n> > for beta testing 1 Feb...\n> \n> istm that the MVCC stuff could/should be the focus of this release. So\n> unless Vadim is already worried about this taking much longer than\n> through February, we should just plan around his schedule.\n> \n> btw, I'd like to go through the parser at some point and (if possible)\n> convert the new MVCC-related parsing from (Ident strings + string tests)\n> to (yacc keywords). I think that can happen just before or after the\n> start of beta. OK?\n\nI don't object, I just thoght that having as few keywords\nas possible is good thing.\n\nVadim\n", "msg_date": "Thu, 21 Jan 1999 09:10:43 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Beta test of Postgresql 6.5" }, { "msg_contents": "> I don't object, I just thoght that having as few keywords\n> as possible is good thing.\n\nYes, I understand the concern. 
But istm it's OK to do things\nconsistently throughout the parser once your MVCC project has settled\ndown. At least as long as the \"keyword explosion\" stays small...\n\n - Tom\n", "msg_date": "Thu, 21 Jan 1999 06:12:25 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Beta test of Postgresql 6.5" } ]
[ { "msg_contents": "A few days ago I sent some patches in to complete primary key support.\nI haven't seen them and I was wondering if you had to be subscribed\nto the patches list in order to submit like the others.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 20 Jan 1999 08:52:25 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Patches" }, { "msg_contents": "> A few days ago I sent some patches in to complete primary key support.\n> I haven't seen them and I was wondering if you had to be subscribed\n> to the patches list in order to submit like the others.\n\nI think I recall seeing those go by --- check the patches mail archive\nat postgresql.org if you want to be sure.\n\nWhether anyone has applied them is another question...\n\n\t\t\tregards, tom lane\n\nPS: I believe all the PG lists are set up so that Marc has to manually\napprove nonmember submissions --- we were getting inundated by spam\nuntil he locked 'em that way. So if you're not subscribed, there's\nan extra delay until he gets around to sifting through the rejected\nmessages. Dunno how often he does that.\n", "msg_date": "Wed, 20 Jan 1999 10:46:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Patches " }, { "msg_contents": "> A few days ago I sent some patches in to complete primary key support.\n> I haven't seen them and I was wondering if you had to be subscribed\n> to the patches list in order to submit like the others.\n> \n\nReally? I never saw them, and believe me, we all would have remembered\nthat. Please post to hackers I guess.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Jan 1999 12:08:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Patches" } ]
[ { "msg_contents": "FYI\nHi all!\n\nI've just released the first version of GNOME-SQL, which is a tool for\naccessing databases from different RDBMSs. Right now (version 0.1) only\nPostgreSQL is supported but I am right now starting to include MySQL and\nlater on I will do the same with ODBC...\n\nThe current version is quite stable (in my system at least) so I ask people\ninterested in it to try it in order to detect bugs.\n\nMaybe the GNOME hackers don't like the name I've chosen, so if somebody\ndisagrees with GNOME-SQL, tell me and I'll change it to some other name.\n\nAnother thing: the makefiles are hardcoded with my system's settings, so you\nmay have to change them. If somebody wants to convert them to make use of\nautoconf/automake, please let me know and email the changes to me.\n\nThe URL is:\nhttp://www.chez.com/rmoya/software/gnome/gnome-sql/doc/gnome-sql.html\n\nCheers and enjoy\n\n\n-- \n FAQ: Frequently-Asked Questions at http://www.gnome.org/gnomefaq\n To unsubscribe: mail [email protected] with \n \"unsubscribe\" as the Subject.", "msg_date": "Wed, 20 Jan 1999 15:12:17 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: GNOME-SQL version 0.1]" } ]
[ { "msg_contents": "I've posted another set of patches for date/time types. They fix arcane\nproblems with BC dates; if you don't use them very often then you may\nnot want to bother patching at this time. The fixes are in the\ndevelopment tree for the next release.\n\nLook in ftp://postgresql.org/pub/patches/dt-2.patches.\n\nFrom the top of the patch file:\n\nThese patches allow BC dates for the date data type,\nand fix a behavior where two-digit BC dates were adjusted\nby two millennia!\n\n - Tom\n", "msg_date": "Wed, 20 Jan 1999 16:43:14 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "new date/time patches" } ]
[ { "msg_contents": "I am forwarding this to hackers to see if anyone can comment on it.\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> \n> -----Original Message-----\n> From: Bruce Momjian <[email protected]>\n> To: Juan Alvarez Ferrando <[email protected]>\n> CC: Vadim B. Mikheev <[email protected]>; PostgreSQL-development\n> <[email protected]>\n> Date: Wednesday, 20 January 1999 12:31\n> Subject: Re: Beta test of Postgresql 6.5\n> \n> \n> >\n> >I assume you are running the snapshot, and not 6.4.*. You are actually\n> >using FOR UPDATE, so I think it is the snapshot.\n> \n> \n> Yes, I'm running the snapshot.\n> \n> >\n> >This is normal behavior, I think. I believe the issue with SELECT FOR\n> >UPDATE is that it has to lock the entire table. We allow non-blocking\n> >readers and non-blocking writers on different rows by using the\n> >transaction id and multi-version system. SELECT FOR UPDATE does not\n> >actually modify any rows, so we can't look at any transaction id.\n> \n> Maybe, I didn't explain my case enough. 
Though my question regards the\n> SELECT FOR UPDATE command, I wasn't using it in the test case I explained.\n> \n> I have two twin processes like this:\n> \n> BEGIN TRANSACTION\n> Read order header\n> INSERT INTO GCABE VALUES (num,date,client)\n> while there are order lines\n> Read order line -> part_number, qty\n> SELECT AVAILABLE FROM PARTS WHERE PARTNUM=part_number\n> if (AVAILABLE >= qty)\n> INSERT INTO ORDERLINES VALUES (part_number, qty,num)\n> UPDATE PARTS SET AVAILABLE=AVAILABLE-qty WHERE PARTNUM=part_number\n> endif\n> endwhile\n> COMMIT\n> \n> I run this on two different order files, from different customers and\n> different part numbers (NOTE I DON'T USE THE 'FOR UPDATE' SYNTAX), and as I\n> explained the last one to begin is blocked until the other one finishes.\n> \n> Best regards, and thanks again.\n> \n> \n> Juan Alvarez Ferrando\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Jan 1999 12:05:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Beta test of Postgresql 6.5" } ]
[ { "msg_contents": "> >> I'd say, go ahead and release them.  If you'll tell me where to find\n> >> them, I'd like to put them on my own web site too.\n> > Will do. May not get to it for a little while, since I want to build and\n> > test one more time to make sure I remember how to do it.\n\nI've posted new static binaries for cvsup and cvsupd, built from the\nsources labeled as version 15.5-linux, on the Postgres web site at:\n\n  ftp://postgresql.org/pub/CVSup/\n\nFor reference, I built using the commands\n\nmake M3TARGET=LINUXLIBC6 M3FLAGS=\"-D_pm3 -DSTATIC -DNOGUI\"\nmake M3TARGET=LINUXLIBC6 M3FLAGS=\"-D_pm3 -DSTATIC -DNOGUI\" install\n\nTo do the static linking, I had to make one more change in prog.quake. I\nwas recalling using \"build_standalone()\", but my installation of pm3\nseemed to instead want\n\nif defined(\"STATIC\")\n  option(\"standalone\",\"T\")\nend\n\n(this replaces the last three lines of the original prog.quake).\n\n> >> Could you also let me know which specific versions of Linux the\n> >> binaries are for?  I'm not very familiar with the various versions.\n> >> I know it's some Redhat version, but that's about the extent of my\n> >> knowledge.  Also, which version of PM3 did you use?\n\nThese binaries were built on a RedHat-5.1 system running glibc-2.07. The\nversioning for m3build leaves something to be desired:\n\n[root@mythos]# m3build -version\nm3build: SRC Modula-3 version XX.X\n\nBut all of the RPMs in the PM3 distribution are versioned with file name\nlabels of \"1.1.10-1\". 
Usually the last \"-1\" is an RPM-specific internal\nversion field, so \"1.1.10\" may be the right thing to specify.\n\nI'll try building and posting new static binaries for my libc5 machine\nat home, probably using my existing SRC m3 compiler.\n\nThanks again for a great utility!\n\n - Tom\n\nFor those who care, there is an RPM distribution of Modula-3 which is\n*much* easier to work with than the original from-source distribution.\nIt can be found at\n\n ftp://m3.polymtl.ca/pub/m3/index.html\n\nIt has a bazillion separate RPMs, but if you fetch them into a single\ndirectory and then do \"rpm -Uvh *\" you will get everything you need and\nmore.\n\n-- \nThomas Lockhart\nCaltech/JPL\nInterferometry Systems and Technology\n", "msg_date": "Thu, 21 Jan 1999 02:35:14 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "New CVSup static binaries for Linux glibc2" } ]
[ { "msg_contents": "Has anyone thought of putting a bzip2-compressed tarball up there? Might save\nbandwidth...\n\nTaral\n", "msg_date": "Wed, 20 Jan 1999 20:48:23 -0600", "msg_from": "Taral <[email protected]>", "msg_from_op": true, "msg_subject": "gzip vs bzip2 in packing" }, { "msg_contents": "> Has anyone thought of putting a bzip2-compressed tarball up there? Might save\n> bandwidth...\n> \n\nI don't even know what that is.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Jan 1999 22:38:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] gzip vs bzip2 in packing" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Has anyone thought of putting a bzip2-compressed tarball up there?\n>> Might save bandwidth...\n\n> I don't even know what that is.\n\nI know what it is, and I also know that it has achieved near-zero\nmarket penetration. Yes, it compresses better than gzip; but evidently\nnot enough better to persuade people to switch.\n\nAnother consideration you have to pay attention to in today's world is\npatent status. gzip has stood the test of time and is widely agreed to\nbe patent-free. (It'd be pretty hard for anyone to secure a patent on\ngzip at this late date, even though the cluelessness of the USPTO is\nnearly unbounded.) bzip2's author claims it is patent-free, but that\nreally only means that *he* didn't patent it. I don't think anyone has\ndone a serious patent search on Burrows-Wheeler methods.\n\nEventually something will come along that's enough better than gzip\nto warrant a universal upgrade cycle, but as far as I can see bzip2\nain't it. 
In any case I see no need for Postgres to be out front of\nthe curve on this question...\n\n\t\t\tregards, tom lane\n\nPS: If you want more info see http://www.faqs.org/faqs/compression-faq/,\nitem 78.\n", "msg_date": "Thu, 21 Jan 1999 00:54:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] gzip vs bzip2 in packing " }, { "msg_contents": "On Wed, 20 Jan 1999, you wrote:\n>Eventually something will come along that's enough better than gzip\n>to warrant a universal upgrade cycle, but as far as I can see bzip2\n>ain't it. In any case I see no need for Postgres to be out front of\n>the curve on this question...\n\nErr, I was actually recommending we do like many sites, and put up gzip and\nbzip2 versions of the tarball. Those with bzip2 can download the (smaller)\nbzip2 version, and save network bandwidth (and time, for those with modems).\n\nTaral\n", "msg_date": "Thu, 21 Jan 1999 00:05:11 -0600", "msg_from": "Taral <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] gzip vs bzip2 in packing" } ]
[ { "msg_contents": "\nSorry about this. I _try_ to keep up with the list, I really do...\n\nWhat exactly is MVCC? I'm not familiar with the term. It sounds\nlike something better than row-level locking, and I'll bet one of \nthe 'C's stands for commit.\n\nInquiring minds want to know.\n\nThanks,\n\n-- cary\n\nCary O'Brien\[email protected]\n\n", "msg_date": "Thu, 21 Jan 1999 00:07:42 -0500 (EST)", "msg_from": "\"Cary O'Brien\" <[email protected]>", "msg_from_op": true, "msg_subject": "What is MVCC?" } ]
[ { "msg_contents": "Hello!\n\n   I use a modified version of contrib/apache_logginig. The table I am\nusing:\n\nCREATE TABLE combinedlog (\n   host text,\n   accdate abstime,\n   request text,\n   authuser text,\n   cookie text,\n   referer text,\n   useragent text,\n   stime int2,\n   status int2,\n   bytes int4\n);\n\n   Once a month I run a very simple script to put WWW logs into the table.\nThe very loading is not fast (I am running postmaster with -F, I use\nBEGIN/END and I drop indices before loading. I remember when I started\nusing BEGIN/END loading sped up a bit, but not significantly), but is not\nmy biggest concern. What is worse is the speed of my queries.\n   After inserting into the table, I run this shell script:\n\nsel_f() {\n   field=$1\n   psql -d ce_wwwlog -c \"SELECT COUNT($field), $field FROM raw_combinedlog GROUP BY 2 ORDER BY 1 DESC;\"\n}\n\nfor i in host request referer useragent; do\n   sel_f $i > by-$i\ndone\n\n   This works very, very slowly. I tried to use indices:\n\nCREATE INDEX host ON combinedlog (host);\nCREATE INDEX request ON combinedlog (request);\nCREATE INDEX referer ON combinedlog (referer);\nCREATE INDEX useragent ON combinedlog (useragent);\n\n   but indices do not help much, and indexing time is so big that the sum of\nCREATE INDEX + SELECT is even bigger :(\n\n   Why is it so slow? How can I speed it up?\n\n   I am running postgres compiled with --enable-locale. That is, for every\nstring comparison there are 2 (two) malloc calls and one strcoll. Can I\nincrease speed by turning strcoll off? If so, postgres needs a SET command to\nturn localization temporarily off. I can hack it in, as I already rewrote\nlocalization stuff a year ago. The only thing I want to hear from the postgres\ncommunity (hackers, actually :) is how it should be named:\n   SET STRCOLL=off\nor such?\n   I remember when I submitted my locale patch there was a discussion on\nhow to do it the Right Way, but I didn't remember the conclusion. What\ndid we finally decide? 
I want to add a command (if I should) that is\ncompliant with other stuff here.\n\nOleg.\n---- \n    Oleg Broytmann     http://members.tripod.com/~phd2/     [email protected]\n           Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 21 Jan 1999 14:19:41 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "GROUP BY / ORDER BY string is very slow" }, { "msg_contents": "> Once a month I run a very simple script to put WWW logs into the \n> table.\n> What is worse is the speed of my queries.\n> Why is it so slow? How can I speed it up?\n\nAre you running vacuum after removing your indices? If you don't, then\nthe table storage area does not actually shrink.\n\n> I am running postgres compiled with --enable-locale. That is, for \n> every string comparison there are 2 (two) malloc calls and one \n> strcoll. Can I increase speed by turning strcoll off? If so, postgres \n> needs a SET command to turn localization temporarily off.\n\nHow can you \"turn localization off\" if you have localized strings in\nyour database? If you build indices without localization, then turn\nlocalization back on, things are probably hopelessly out of order.\n\n> I remember when I submitted my locale patch there was a discussion on\n> how to do it the Right Way, but I didn't remember the conclusion. What\n> did we finally decide? I want to add a command (if I should) that is\n> compliant with other stuff here.\n\nIs the Right Way to implement the NATIONAL CHARACTER type rather than\nhaving the CHAR type be localized? That way, you could have both types\nin the same database. Or is that SQL92 feature not widely used or\nuseful?\n\n                      - Tom\n", "msg_date": "Fri, 22 Jan 1999 05:50:40 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] GROUP BY / ORDER BY string is very slow" }, { "msg_contents": "Hello!\n\nOn Fri, 22 Jan 1999, Thomas G. 
Lockhart wrote:\n\n> > Once a month I run a very simple script to put WWW logs into the \n> > table.\n> > What is worse is the speed of my queries.\n> > Why is it so slow? How can I speed it up?\n> \n> Are you running vacuum after removing your indices? If you don't, then\n> the table storage area does not actually shrink.\n\n   Yes, I did a dozen experiments, running VACUUM with and without\nindices. VACUUM helped a bit, but not much...\n\n> > I am running postgres compiled with --enable-locale. That is, for \n> > every string comparison there are 2 (two) malloc calls and one \n> > strcoll. Can I increase speed by turning strcoll off? If so, postgres \n> > needs a SET command to turn localization temporarily off.\n> \n> How can you \"turn localization off\" if you have localized strings in\n> your database? If you build indices without localization, then turn\n> localization back on, things are probably hopelessly out of order.\n\n   What are \"localized strings\"?\n   In this particular database there are only strings from WWW-log. If I\ncould turn localization off, I would turn it off for this entire db\nforever.\n\n> Is the Right Way to implement the NATIONAL CHARACTER type rather than\n> having the CHAR type be localized? That way, you could have both types\n> in the same database. Or is that SQL92 feature not widely used or\n> useful?\n\n   In this particular case NATIONAL CHARACTER (actually, non-NATIONAL\nCHARACTER) is a solution. Not sure about other cases.\n\nOleg.\n---- \n  Oleg Broytmann  National Research Surgery Centre  http://sun.med.ru/~phd/\n           Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Fri, 22 Jan 1999 12:46:52 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] GROUP BY / ORDER BY string is very slow" } ]
[ { "msg_contents": "Hello!\n\n   I found the Large File Summit in Linux Weekly News.\n   LFS is a kernel patch (very experimental) that allows big files on\n32-bit Linux (ext2 and UFS file systems only).\n   URL - ftp://mea.tmt.tele.fi/linux/LFS/\n\n   I cannot test it - my Linux partition is only 512M, and I have 15M free.\nIf someone can test the patch and verify that PostgreSQL can work with big\ntables on 32-bit systems - it would be great!\n\nOleg.\n---- \n    Oleg Broytmann     http://members.tripod.com/~phd2/     [email protected]\n           Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 21 Jan 1999 16:46:59 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Big files on 32-bit Linux" } ]
[ { "msg_contents": "I still wonder if the for-update-tests in SelectStmt: are okay. Shouldn't\nthe test check for intersect_present=TRUE instead of intersectClause !=\nNULL?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Thu, 21 Jan 1999 16:06:32 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "INTERSECT in gram.y again" }, { "msg_contents": "> I still wonder if the for-update-tests in SelectStmt: are okay. Shouldn't\n> the test check for intersect_present=TRUE instead of intersectClause !=\n> NULL?\n\nLooks like they are the same. Both are true or false.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Jan 1999 15:08:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] INTERSECT in gram.y again" }, { "msg_contents": "On Thu, Jan 21, 1999 at 03:08:20PM -0500, Bruce Momjian wrote:\n> > I still wonder if the for-update-tests in SelectStmt: are okay. Shouldn't\n> > the test check for intersect_present=TRUE instead of intersectClause !=\n> > NULL?\n> \n> Looks like they are the same. Both are true or false.\n\nBut I think intersectClause is set to op regardless if there is an intersect\nclause at all. Or did I miss something?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 
61, D-41812 Erkelenz         | Go Rhein Fire!\nTel.: (+49) 2431/72651              | Use Debian GNU/Linux!\nEmail: [email protected]         | Use PostgreSQL!\n", "msg_date": "Fri, 22 Jan 1999 08:37:30 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] INTERSECT in gram.y again" }, { "msg_contents": "> On Thu, Jan 21, 1999 at 03:08:20PM -0500, Bruce Momjian wrote:\n> > > I still wonder if the for-update-tests in SelectStmt: are okay. Shouldn't\n> > > the test check for intersect_present=TRUE instead of intersectClause !=\n> > > NULL?\n> > \n> > Looks like they are the same.  Both are true or false.\n> \n> But I think intersectClause is set to op regardless if there is an intersect\n> clause at all. Or did I miss something?\n\nAs far as I can tell, the 'else' part of the query only gets executed in\nthe case of UNION, EXCEPT, or INTERSECT.  Because FOR UPDATE is invalid\nin all these cases, the intersectClause being non-NULL is an OK test,\nthough, as you point out, it is not accurate.  I have modified gram.y to\ncheck just for unionClause:\n\nif (n->unionClause != NULL)\n    elog(ERROR, \"SELECT FOR UPDATE is not allowed with UNION/INTERSECT/EXCEPT claus$\n\nand removed the intersectClause test.  Thanks for pointing this out.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Jan 1999 14:34:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] INTERSECT in gram.y again" }, { "msg_contents": "On Fri, Jan 22, 1999 at 02:34:52PM -0500, Bruce Momjian wrote:\n> As far as I can tell, the 'else' part of the query only gets executed in\n> the case of UNION, EXCEPT, or INTERSECT.  Because FOR UPDATE is invalid\n> in all these cases, the intersectClause being non-NULL is an OK test,\n\nYou're right of course. 
\n\n> though, as you point out, it is not accurate.  I have modified gram.y to\n> check just for unionClause:\n> \n> if (n->unionClause != NULL)\n>     elog(ERROR, \"SELECT FOR UPDATE is not allowed with UNION/INTERSECT/EXCEPT claus$\n\nBut isn't the pure existence of for update enough to have an error in the\nelse branch?\n\nAnd can't we get the same error in the if branch as well with a having\nclause or something like that?\n\nMichael\n\n-- \nMichael Meskes                       | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz  | Go Rhein Fire!\nTel.: (+49) 2431/72651               | Use Debian GNU/Linux!\nEmail: [email protected]          | Use PostgreSQL!\n", "msg_date": "Sat, 23 Jan 1999 13:27:36 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] INTERSECT in gram.y again" }, { "msg_contents": "> But isn't the pure existence of for update enough to have an error in the\n> else branch?\n> \n> And can't we get the same error in the if branch as well with a having\n> clause or something like that?\n\nBut we have those tests at the end, after the if/else, don't we?\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 23 Jan 1999 16:31:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] INTERSECT in gram.y again" }, { "msg_contents": "On Sat, Jan 23, 1999 at 04:31:59PM -0500, Bruce Momjian wrote:\n> But we have those tests at the end, after the if/else, don't we?\n\nYou're right again of course. Sorry for misreading this.\n\nMichael\n-- \nMichael Meskes                       | Go SF 49ers!\nTh.-Heuss-Str. 
61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Sun, 24 Jan 1999 10:02:21 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] INTERSECT in gram.y again" } ]
[ { "msg_contents": "I just checked the ecpg docs. There are ecpg.1 and ecpg.sgml, a man page and\na converted texinfo file. I remember Tom saying something like the manpages\nshall be computed from the sgml source. Is this correct?\n\nIf so I wonder how it should work. The ecpg.sgml file contains totally\ndifferent stuff than the man page.\n\nAnyway, I attach the actual version of the man page. Could anyone please put\nthis into cvs as the one in there is severely outdated.\n\nMichael\n-- \nMichael Meskes                       | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz  | Go Rhein Fire!\nTel.: (+49) 2431/72651               | Use Debian GNU/Linux!\nEmail: [email protected]          | Use PostgreSQL!", "msg_date": "Thu, 21 Jan 1999 16:14:43 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "ecpg docs" }, { "msg_contents": "> I just checked the ecpg docs. There are ecpg.1 and ecpg.sgml, a man page and\n> a converted texinfo file. I remember Tom saying something like the manpages\n> shall be computed from the sgml source. Is this correct?\n\nI am wondering myself.  Thomas, where are we on this?  Can we convert\nthe html to man pages, or doesn't that work?  Fortunately, we have not\nbeen making many man page changes lately, so it has not been an issue.\n\n> \n> If so I wonder how it should work. The ecpg.sgml file contains totally\n> different stuff than the man page.\n\nI remember someone updating only one of the files.\n\nYou actually did it:\n\n---------------------------------------------------------------------------\n\n> On Sun, Dec 13, 1998 at 05:11:04AM +0000, Thomas G. Lockhart wrote:\n> > Michael, the ecpg author, contributed ecpg.sgml several months ago. I\n> \n> I thought you did transform it to sgml. \n> \n> > would think that he was just reconciling the information in the man page\n> > with the existing sgml-based information, but don't know that for sure.\n> > Michael?\n> \n> My problem is that I do not speak sgml. 
Therefore I only updated the man\n> page from Tom Good's version but not the sgml part. Is there a tool I\n> could\n> use to transform it? I will try to remember that next time and will only\n> update the sgml file, which should be much easier than transforming the\n> current man page.\n> \n> Michael\n\n\n---------------------------------------------------------------------------\n\nSo the sgml has to be reconverted from the ecpg man page.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Jan 1999 14:02:25 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ecpg docs" }, { "msg_contents": "> > I just checked the ecpg docs. There are ecpg.1 and ecpg.sgml, a man page and\n> > a converted texinfo file. I remember Tom saying something like the manpages\n> > shall be computed from the sgml source. Is this correct?\n> I am wondering myself.  Thomas, where are we on this?  Can we convert\n> the html to man pages, or doesn't that work?  Fortunately, we have not\n> been making many man page changes lately, so it has not been an \n> issue.\n\nI'm waiting for someone to send an updated version of \"Instant\", which\nconverts DocBook-v3.0 sgml <refentry> pages into man pages. It should\nshow up in a week or two. We'll see if it works well enough.\n\nOthers have expressed interest in helping, but have not had time to\ncontribute a solution. I personally consider it a second-order problem,\nin that if we had waited for a perfect man page solution before starting\nthe new docs we wouldn't have any new docs. 
\n\nAlso, imho the man pages should be kept very simple, with just syntax\nand no \"how to\" info, but I think others don't see it that way so I'm\nstill willing to look into converting the bigger docs back into man\npages.\n\nAnyway, if someone wants to look at converting html into man pages, be\nmy guest. If it is completely turn-key, then perhaps I could support it\nfor document releases, otherwise someone else will have to take\nresponsibility, and we'll all have to finish release docs a bit earlier\nthan we have been doing. But that's a detail...\n\n                      - Tom\n", "msg_date": "Fri, 22 Jan 1999 06:04:09 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ecpg docs" }, { "msg_contents": "On Thu, Jan 21, 1999 at 02:02:25PM -0500, Bruce Momjian wrote:\n> I remember someone updating only one of the files.\n> \n> You actually did it:\n\nYes, the man page. \n\n> So the sgml has to be reconverted from the ecpg man page.\n\nNo. It contains completely different stuff. The mail you got from the\narchive was somewhere out of context. \n\nIn the original ecpg package there was a texinfo file describing some stuff\nabout implementation and so on. This is the file that was converted to sgml\nafter being brought somewhat up-to-date. The man page though is completely\ndifferent. Its scope is to describe the usage of ecpg.\n\nmichael\n-- \nMichael Meskes                       | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz  | Go Rhein Fire!\nTel.: (+49) 2431/72651               | Use Debian GNU/Linux!\nEmail: [email protected]          | Use PostgreSQL!\n", "msg_date": "Fri, 22 Jan 1999 08:36:41 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ecpg docs" }, { "msg_contents": "On Fri, Jan 22, 1999 at 06:04:09AM +0000, Thomas G. 
Lockhart wrote:\n> Also, imho the man pages should be kept very simple, with just syntax\n> and no \"how to\" info, but I think others don't see it that way so I'm\n> still willing to look into converting the bigger docs back into man\n> pages.\n\nI do agree. So this means I have to put some of ecpg.1 info into ecpg.sgml.\nAnyone speaking sgml willing to help?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Fri, 22 Jan 1999 20:39:51 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ecpg docs" }, { "msg_contents": "> I do agree. So this means I have to put some of ecpg.1 info into \n> ecpg.sgml. Anyone speaking sgml willing to help?\n\nSure (or if someone else wants to help, that would be great!). I'd like\nto eventually have the sources of ecpg info in two places: the existing\necpg.sgml which forms a chapter in the User's Guide, and in a new\nreference page ref/ecpg-ref.sgml which would be a relatively short\nsyntax/options reference for how to run the program itself. This last\nwould also become the man page when we (eventually) get the capability\nto convert DocBook <refentry> pages to man pages.\n\nIf you want to try the conversion, great. If you would prefer not, then\nI'll be happy to, and it would be great if you were able to help\nmaintain the information in them after that.\n\nYou might use the ref/psql-ref.sgml as an example of what a program\nreference page contains and how it is marked up. Look at the html\n\"integrated docs\" postgres.html toward the bottom of the User's Guide\npart to see how the psql-ref.sgml page looks when formatted.\n\nCheers.\n\n - Tom\n", "msg_date": "Sat, 23 Jan 1999 16:24:32 +0000", "msg_from": "\"Thomas G. 
Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ecpg docs" }, { "msg_contents": "On Sat, Jan 23, 1999 at 04:24:32PM +0000, Thomas G. Lockhart wrote:\n> to eventually have the sources of ecpg info in two places: the existing\n> ecpg.sgml which forms a chapter in the User's Guide, and in a new\n> reference page ref/ecpg-ref.sgml which would be a relatively short\n> syntax/options reference for how to run the program itself. This last\n> would also become the man page when we (eventually) get the capability\n> to convert DocBook <refentry> pages to man pages.\n\nSounds good.\n\n> If you want to try the conversion, great. If you would prefer not, then\n> I'll be happy to, and it would be great if you were able to help\n> maintain the information in them after that.\n\nI will try my best if I find the time.\n\n> You might use the ref/psql-ref.sgml as an example of what a program\n> reference page contains and how it is marked up. Look at the html\n> \"integrated docs\" postgres.html toward the bottom of the User's Guide\n> part to see how the psql-ref.sgml page looks when formatted.\n\nI will.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Sat, 23 Jan 1999 21:36:03 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ecpg docs" } ]
[ { "msg_contents": "See attached file. Now accepts \"exec sql whenever sqlwarning\".\n\nMichael\n-- \nMichael Meskes                       | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz  | Go Rhein Fire!\nTel.: (+49) 2431/72651               | Use Debian GNU/Linux!\nEmail: [email protected]          | Use PostgreSQL!", "msg_date": "Thu, 21 Jan 1999 17:04:16 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "more patches for ecpg" }, { "msg_contents": "\nApplied...but why isn't/wasn't it posted to the patches list instead of\nhere? :(\n\n\nOn Thu, 21 Jan 1999, Michael Meskes wrote:\n\n> See attached file. Now accepts \"exec sql whenever sqlwarning\".\n> \n> Michael\n> -- \n> Michael Meskes                       | Go SF 49ers!\n> Th.-Heuss-Str. 61, D-41812 Erkelenz  | Go Rhein Fire!\n> Tel.: (+49) 2431/72651               | Use Debian GNU/Linux!\n> Email: [email protected]          | Use PostgreSQL!\n> \n\nMarc G. Fournier                                \nSystems Administrator @ hub.org \nprimary: [email protected]           secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 21 Jan 1999 16:07:39 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] more patches for ecpg" }, { "msg_contents": "Thus spake The Hermit Hacker\n> Applied...but why isn't/wasn't it posted to the patches list instead of\n> here? :(\n\nPerhaps like me he doesn't subscribe to patches and you don't accept\npostings to mailing lists from non-subscribers.  Personally I find\nthe CVS update summaries informational enough and prefer not to have\nto wade through the raw patches.  Is the assumption that those that\nsubmit should also be peer reviewing?\n\nBy the way, what ever happened to my primary key patches?  On Bruce's\nsuggestion I posted them to hackers and I saw them but so far no one\nhas commented on them or checked them in.\n\nP.S.  If anyone is trying to mail me directly, druid.net is on hold\ndue to Internic snafu.  It should be fixed soon. 
In the meantime\nyou can reach me at [email protected].\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 21 Jan 1999 17:06:56 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] more patches for ecpg" }, { "msg_contents": "> By the way, what ever happened to my primary key patches? On Bruce's\n> suggestion I posted them to hackers and I saw them but so far no one\n> has commented on them or checked them in.\n\nI still have them. They look good.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Jan 1999 17:32:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] more patches for ecpg" }, { "msg_contents": "On Thu, Jan 21, 1999 at 05:06:56PM -0500, D'Arcy J.M. Cain wrote:\n> > Applied...but why isn't/wasn't it posted tothe patches list instead of\n> > here? :(\n> \n> Perhaps like me he doesn't subscribe to patches and you don't accept\n> postings to mailing lists from non-subscribers. Personally I find\n\nCorrect. In fact I even tried subscribe in the process of changing my\nhackers and interfaces account but am still waiting for Marc to approve\nthis. \n\n> the CVS update summaries informational enough and prefer not to have\n> to wade through the raw patches. Is the assumption that those that\n> submit should also be peer reviewing?\n\nThat one is also my problem. As it is now I get way too much email and\ncannot cope with lots of patches I will delete anyway.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 
61, D-41812 Erkelenz         | Go Rhein Fire!\nTel.: (+49) 2431/72651              | Use Debian GNU/Linux!\nEmail: [email protected]         | Use PostgreSQL!\n", "msg_date": "Fri, 22 Jan 1999 08:39:10 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] more patches for ecpg" }, { "msg_contents": "On Thu, Jan 21, 1999 at 04:07:39PM -0400, The Hermit Hacker wrote:\n> Applied...but why isn't/wasn't it posted to the patches list instead of\n> here? :(\n\nThanks Marc.\n\nAs I already said, the patches list is a problem for me. The only reason for\nthis list I see is to get the patches to all people with direct CVS access\nso the work does not have to be done by just one person. If this is the real\nreason I wonder why we don't set it up this way, i.e. distribution to only\nsome, and the others can send patches but don't get the patches sent to them.\n\nMichael\n-- \nMichael Meskes                       | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz  | Go Rhein Fire!\nTel.: (+49) 2431/72651               | Use Debian GNU/Linux!\nEmail: [email protected]          | Use PostgreSQL!\n", "msg_date": "Fri, 22 Jan 1999 08:42:54 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] more patches for ecpg" } ]
[ { "msg_contents": "Hi,\n\nI have PostgreSQL installed on my BSDi. I have a database which is hosted\non a Unix Oracle server. I'd like to import (every day) a table into my\nPostgreSQL server.\n\nWhat is the solution?\n\nThanks in advance for your help.\n\nLionel MOTTAY.\n", "msg_date": "Thu, 21 Jan 1999 18:07:19 GMT", "msg_from": "lionel mottay <[email protected]>", "msg_from_op": true, "msg_subject": "Importing from Oracle" } ]
[ { "msg_contents": "[I am CC'ing Xshare on this.]\n\nI am very impressed with Xshare at http://www.xshare.com. They have a\nnice timely list of open source Unix software, with nice descriptions\nand links to home pages.\n\nI have worked with them on a number of issues, and they have been\nhelpful. Of course, they list PostgreSQL. Recently, I mentioned we\nhave released 6.4.2(they were listing 6.4.1), and mentioned they could\nsubscribe to the 'announce' list to receive news on updates. They have\nsubscribed, so will always have the most recent version listed. Their\ncurrent listing for us is:\n\n\tLogin\t Add Software\tAccount \n\tPostgreSQL 6.4.2 \n\t\t\t\t\t\t\t\tPlexus\n\t\t\t\t\t\t\t 18/01/1999\n\tPostgreSQL is a robust, next-generation,\n\tObject-Relational DBMS (ORDBMS), derived from\n\tthe Berkeley Postgres database management system.\n\tWhile PostgreSQL retains the powerful\n\tobject-relational data model, rich data types and\n\teasy extensibility of Postgres, it replaces the PostQuel\n\tquery language with an extended subset of SQL. \n\n\tAfter 5 intense months of development, the\n\tPostgreSQL Global Development Group is pleased to\n\tannounce the release of v6.4, a much improved\n\trelease of the popular PostgreSQL ORDBMS. As\n\talways, there is *alot* of new features, bug fixes and\n\tchanges since the last release...over 200 lines worth in\n\tour HISTORY file. \n\t\t\t\t\t \n\tAuthor: \n\tHomepage: \n\t\thttp://www.postgresql.org \n\tDocumentation: \n\t\thttp://www.postgresql.org/docs/ \n\tLicence: \n\t\tunknown \n\tSystems: \n\t\tFreeBSD, HP-Unix, Irix, Linux, NetBSD,\n\t\tOpenBSD, Solaris \n\tDownload: \n\t\thttp://www.postgresql.org/sites.html\n\t\thttp://www.postgresql.org/sites.html\n\t\tftp://ftp.postgresql.org/pub/postgresql-6.4.2.tar.gz\n\t\tftp://ftp.postgresql.org/pub/\n\t\tftp://ftp.chicks.net/pub/postgresql \n\n\nThis is quite a bit of interesting information. 
Perhaps we should add a\nlink on our homepage that says: \"For information on other open source\nsoftware, see Xshare.\" It is hard to know all the software available,\nand Xshare helps in that.\n \n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Jan 1999 13:45:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Adding Xshare to our web site" }, { "msg_contents": "Neat. A couple of details:\n\n> After 5 intense months of development, the\n> PostgreSQL Global Development Group is pleased to\n> announce the release of v6.4, a much improved\n> release of the popular PostgreSQL ORDBMS. As\n> always, there is *alot* of new features, bug fixes and\n> changes since the last release...over 200 lines worth in\n> our HISTORY file.\n\nThe reader won't know that each change item is listed on one line. So\nperhaps the next Xshare synopsis could refer to \"over 200 distinct fixes\nand enhancements from the last release\".\n\n> Author:\n\nThe PostgreSQL Global Development Group, per our copyright notice?\n\n> Licence:\n> unknown\n\nBSD?\n\n> Systems:\n> FreeBSD, HP-Unix, Irix, Linux, NetBSD,\n> OpenBSD, Solaris\n\nMany more than that...\n\n - Tom\n", "msg_date": "Fri, 22 Jan 1999 06:08:20 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Adding Xshare to our web site" } ]
[ { "msg_contents": "Good job with the PostgreSQL ODBC driver Byron!\n\n - Tom\nHowdy all,\n\njust a status update on MySQL problems. According to one of our engineers\nmore familiar with Data, it would seem that I've been misinformed about\nwhat levels of ODBC and SQL Grammar conformance we require:\n\n In my previous message I confused ODBC Conformance Levels with ODBC\n SQL Grammar Conformance Levels. We require Level 1 (beyond Core)\n ODBC Conformance, and we are still confused about SQL Grammer\n Conformance. It appears that we require \"Extended SQL\" grammar\n conformance, although we can probably code around this. There is an\n ODBC call to get the SQL Grammar conformance level from a driver.\n\nI'm not certain yet what we'll do, but perhaps this can give folks an idea\nof how MySQL could be changed to work better with Applix Data, even as we\ninvestigate options for changing this from our side. Perhaps PostgreSQL is\nlooking better. :)\n\nEric\n\n>>>>> [email protected] writes:\n\n> Hi Applix guys,\n> Thanks for your patch for libaxel.so. I have replaced new libaxel.so, \n> and the last error message does not appear.\n> Now, there is another incompatible clause of MySQL. After editing the \n> records, when I call edit_commit@ method, another message box appears: \n-- \nSenior Software Engineer / [email protected] <><\nApplix, Inc. / 112 Turnpike Road / Westboro MA 01581-2842\n\n--\nTo unsubscribe from the list, send \"unsubscribe applixware-list\" in the\nbody of a message to [email protected]. Before you ask a question,\ncheck out the Applixware for Linux FAQ, Errata, and mailing list\narchives at http://linux.applixware.com/ under \"Applix Resources.\"", "msg_date": "Fri, 22 Jan 1999 06:14:45 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: AWL: Re: Another unrecognized SQL clause in MySQL \"WHERE\n\tCURRENT OF\"]" } ]
[ { "msg_contents": "I've posted new static binaries for CVSup on Linux on the ftp site:\n\n ftp://postgresql.org/pub/CVSup/cvsup-15.5-*linux*.tar.gz\n\nThere are binaries for both libc5 and glibc2 platforms, and I've\nincluded binaries in separate tar files for cvsupd, the server-side\nprogram just in case it helps someone.\n\nThe new versions fix \"checksum error\" problems with pre-v15.4 versions\nof the clients (originating from whitespace changes in the output of\nnewer versions of diff or chksum, I can't remember which).\n\nFor reference, I built the glibc2 versions using the \"PM3\" version of\nthe Modula-3 compiler, available as RPMs, source, and other packages\nfrom\n\n http://m3.polymtl.ca/m3/\n\nOnce Modula-3 is installed, cvsup was built by unpacking a tarball into\n/usr/local/src and then compiling using the following commands:\n\n$ make M3TARGET=LINUXLIBC6 M3FLAGS=\"-D_pm3 -DSTATIC -DNOGUI\"\n$ make M3TARGET=LINUXLIBC6 M3FLAGS=\"-D_pm3 -DSTATIC -DNOGUI\" install\n\nI built the libc5 version using the older DEC/SRC version of the\nModula-3 compiler, installed from-source, with the following commands:\n\n$ make M3TARGET=LINUXELF M3FLAGS=\"-D_pm3 -DSTATIC -DNOGUI\"\n$ make M3TARGET=LINUXELF M3FLAGS=\"-D_pm3 -DSTATIC -DNOGUI\" install\n\nOnce again, John Polstra has been very helpful in giving us fresh\nversions of source code and in helping to resolve minor porting issues.\n\nPlease let me know if you find any problems.\n\n - Tom\n", "msg_date": "Fri, 22 Jan 1999 06:46:48 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "New CVSup static binaries for Linux" } ]
[ { "msg_contents": "Hello Cary,\n\nI'm following this list on the PostgreSQL Home Page, and I'm not\na PostgreSQL hacker, but I do know my acronyms and my database\ntheory.\n\nMVCC stands for Multi Version Concurrency Control.\n\nIt is a feature which is very, very powerful, and deals with the\nfact that multiple requests are handled by the database-server.\n\nLet's say multiple requests come in at the same time, open a\ntransaction-block, and try to access the same data.\n\nIn the previous versions of PostgreSQL the whole table was locked\nexclusively until the transaction-block was committed.\nThis means the second transaction-request that comes in is delayed from\nprocessing until the lock has been released by the previous transaction.\n\nWith MVCC, the database-server detects the second transaction-request,\nscans it, looks for interfering requests, and, if possible, creates a\nsecond version of the data being requested. Hence the name Multi\nVersion.\n\nIt creates multiple versions of the data, in memory only of course; if it\nwere on the disk, things would get out of hand pretty quickly. And on\nonly ONE occasion does it not.\n\nIf the first request that comes in is a \"writer\" (a transaction-block\nthat updates data) and the second request is also a \"writer\" on the same\ndata, then it detects an exclusive write-lock.\n\nIf the first is a \"reader\" (a transaction-block that only uses \"select\" to\n\"read\" the data) and the second is a \"reader\" or a \"writer\", it creates\na second version of the same data and hands it to the second\ntransaction.\n\nThis means that if MVCC is stable, and you are using it in a heavily\nmulti-user environment, you will see a very, very big improvement in\nresponsiveness and overall performance of the database.\nIt does mean, however, that it uses more resources on the server, because\nit can now handle the load better.\n\n(hackers, please feel free to correct me on any of my statements)\n\nHope that helped.\n\nWalter van der Schee\n(not a PostgreSQL hacker)\n", "msg_date": "Fri, 22 Jan 1999 10:44:50 +0100", "msg_from": "Walter van der Schee <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What is MVCC?" } ]
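Walter's description of "multiple versions" can be made concrete with a toy visibility check. This is only an illustrative sketch: the names (`Tup`, `visible`) and the rules are invented for the example, and it ignores commit status, aborted transactions, and the real visibility logic in the PostgreSQL backend.

```c
#include <assert.h>

typedef unsigned int Xid;       /* toy transaction id */

typedef struct
{
    Xid xmin;                   /* transaction that created this row version */
    Xid xmax;                   /* transaction that deleted it; 0 = still live */
} Tup;

/*
 * A row version is visible to a snapshot taken at transaction `snap'
 * if it was created at or before the snapshot and not deleted until
 * after it.  An update never removes data in place: it stamps the old
 * version with its xmax and inserts a new version with a fresh xmin,
 * so readers with older snapshots keep seeing the old version.
 */
static int
visible(Tup t, Xid snap)
{
    return t.xmin <= snap && (t.xmax == 0 || t.xmax > snap);
}
```

With an update made by transaction 5, a reader whose snapshot was taken at transaction 4 still sees the old version, while a reader at snapshot 6 sees only the new one; neither reader ever had to wait for the writer's lock.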
[ { "msg_contents": "Recently I have started seeing this error when starting the server.\n\nshell-init: could not get current directory: getcwd: cannot access parent directories\n\nHere is the command I use to start PostgreSQL.\n\nsu postgres -c \"/usr/local/pgsql/bin/postmaster -S -D /usr/local/pgsql/data\"\n\nDid something change recently to cause this? I find I can get around it\nby changing the command to the following.\n\nsu postgres -c \"cd /usr/local/pgsql; /usr/local/pgsql/bin/postmaster -S -D /usr/local/pgsql/data\"\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Fri, 22 Jan 1999 08:26:19 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "getcwd failing suddenly" }, { "msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> Recently I have started seeing this error when starting the server.\n> shell-init: could not get current directory: getcwd: cannot access parent directories\n> Did something change recently to cause this?\n\nA quick glimpse scan shows no such error message in the Postgres\nsources. This must be coming out of your shell. Protection change\non one of the ancestors of your home directory, maybe?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jan 1999 10:19:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] getcwd failing suddenly " }, { "msg_contents": "Thus spake Tom Lane\n> > shell-init: could not get current directory: getcwd: cannot access parent directories\n> \n> A quick glimpse scan shows no such error message in the Postgres\n> sources. This must be coming out of your shell. 
Protection change\n> on one of the ancestors of your home directory, maybe?\n\nI forgot to mention that this happens on 3 different machines as I\nupgrade to the latest PostgreSQL and each machine has a different\nversion of the OS. I can't recall changing anything to do with\ndirectory permissions but I certainly didn't change all 3.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Fri, 22 Jan 1999 10:45:50 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] getcwd failing suddenly" }, { "msg_contents": "On Fri, 22 Jan 1999, D'Arcy J.M. Cain wrote:\n\n> Thus spake Tom Lane\n> > > shell-init: could not get current directory: getcwd: cannot access parent directories\n> > \n> > A quick glimpse scan shows no such error message in the Postgres\n> > sources. This must be coming out of your shell. Protection change\n> > on one of the ancestors of your home directory, maybe?\n> \n> I forgot to mention that this happens on 3 different machines as I\n> upgrade to the latest PostgreSQL and each machine has a different\n> version of the OS. I can't recall changing anything to do with\n> directory permissions but I certainly didn't change all 3.\n\nDumb question but I got bit by it before. Is there a makefile.custom\nnearby? When I was cleaning up the docbook tags I put one in and it\nhad a couple of directories that didn't exist on my machine. They \nprobably existed on Tom Lockhart's machine, tho. 
:) I never noticed\nit till the next time I built PostgreSQL.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n\n", "msg_date": "Fri, 22 Jan 1999 11:12:13 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] getcwd failing suddenly" }, { "msg_contents": "On Fri, 22 Jan 1999, you wrote:\n>Recently I have started seeing this error when starting the server.\n>\n>shell-init: could not get current directory: getcwd: cannot access parent directories\n\nYou're starting it from a directory which user postgres does not have access\nto. Therefore reading '..' (which the shell that su launched uses to determine\ncurrent directory) fails. It doesn't matter, although your solution works.\nOther solution: Just use 'cd' instead of 'cd /usr/local/pgsql'\n\nTaral\n", "msg_date": "Fri, 22 Jan 1999 17:33:22 -0600", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] getcwd failing suddenly" } ]
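Taral's point (the shell rebuilds the current directory by reading "..") can be seen directly from getcwd(): it fails with EACCES as soon as any parent directory is unreadable to the current user. Below is a minimal defensive sketch; `safe_getcwd` and `cwd_demo` are hypothetical helpers invented for this example, not anything in the PostgreSQL or libc sources.

```c
#include <errno.h>
#include <string.h>
#include <unistd.h>

/*
 * getcwd() has to walk up through ".." to rebuild the path, so it
 * fails with EACCES when a parent directory is unreadable, which is
 * what the shell launched by `su postgres -c ...' runs into.  This
 * helper falls back to "/" instead of failing outright.
 */
static char *
safe_getcwd(char *buf, size_t len)
{
    if (getcwd(buf, len) != NULL)
        return buf;
    if (errno == EACCES && len >= 2)
    {
        strcpy(buf, "/");       /* cannot read a parent: use a safe default */
        return buf;
    }
    return NULL;
}

/* exercise it from a directory we can certainly read */
static int
cwd_demo(void)
{
    char buf[4096];

    return safe_getcwd(buf, sizeof(buf)) != NULL && buf[0] == '/';
}
```

This is why `cd` to a directory the postgres user can read (as in the workaround above) makes the message go away.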
[ { "msg_contents": "Hello!\n\n I want to hack into CYR_RECODE. I need to reimplement the encoding selection\nalgorithm. Actually I want to add a few bytes of code - after default\nselection I want to SET the destination encoding.\n From you, folks, I need the following information:\n\n1. How to add a new SET command?\n\n2. How to implement a SET command?\n Detailed explanation would be good, but it is enough to point me to\nalready existing code.\n\n3. Any advice on how to name it. I prefer SET DEST_ENCODING='windows-1251'.\nAny objections?\n\nOleg.\n---- \n Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Fri, 22 Jan 1999 17:17:49 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "SET encoding" } ]
[ { "msg_contents": "Hi,\n\nCurrently psql shows views like:\n\nDatabase = hygea\n +------------------+----------------------------------+----------+\n | Owner            | Relation                         | Type     |\n +------------------+----------------------------------+----------+\n | postgres         | abbattimenti                     | table    |\n | postgres         | wattivita                        | view?    |\n | postgres         | attivita_a                       | table    |\n\nbecause it checks the relhasrules field, and if you have a table (not a\nview) with a rule it thinks it is a view\nand displays \"view?\" instead of \"table\".\n\nI modified psql.c to use the pg_get_viewdef() function to detect views, and\nnow I can display only tables using \\dt\nor only views using \\dv:\n\nhygea=> \\dv\nDatabase = hygea\n +------------------+----------------------------------+----------+\n | Owner            | Relation                         | Type     |\n +------------------+----------------------------------+----------+\n | postgres         | wattivita                        | view     |\n | postgres         | wtabelle                         | view     |\n +------------------+----------------------------------+----------+\n\nhygea=> \\dt\nDatabase = hygea\n +------------------+----------------------------------+----------+\n | Owner            | Relation                         | Type     |\n +------------------+----------------------------------+----------+\n | postgres         | abbattimenti                     | table    |\n | postgres         | attivita                         | table    |\n | postgres         | attivita_a                       | table    |\n | postgres         | attivita_b                       | table    |\n | postgres         | brogliacci                       | table    |\n | postgres         | capi                             | table    |\n | postgres         | comuni                           | table    |\n +------------------+----------------------------------+----------+\n\nIf this is of interest to anyone, the patch is attached.\n\n-Jose'-", "msg_date": "Fri, 22 Jan 1999 15:38:18 +0100", "msg_from": "\"Jose' Soares\" <[email protected]>", "msg_from_op": true, "msg_subject": "view?"
}, { "msg_contents": "Jose' Soares wrote:\n\n> Hi,\n>\n> Currently psql show views like:\n>\n> Database = hygea\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | postgres | abbattimenti | table |\n> | postgres | wattivita | view? |\n> | postgres | attivita_a | table |\n>\n> because it seeks for relhasrules field and if you have a table (not a\n> table) with a rule it thinks it is a view\n> and displays \"view?\" instead of \"table\".\n>\n> I modified psql.c to use pg_get_viewdef() function to seek for views and\n> now I can display only tables using \\dt\n> or only views using \\dv like:hygea=> \\dv\n> [...]\n\n I suggest not to apply this patch\n\n 1. The function pg_get_viewdef() is definitely too much\n overhead. In fact it must parse back the complete view\n definition, doing many system table lookups, just to tell\n if this is a view or not.\n\n 2. The function pg_get_viewdef() is currently out of sync\n with the possible parsetrees for rule actions. CASE (and\n maybe some other constructs) aren't implemented and if it\n hit's on such a rule it will elog() out.\n\n Rules on SELECT event are restricted totally to view rules\n since v6.4. There can be only one rule on SELECT that is\n INSTEAD and selects exactly the attributes on one table. And\n AFAIC this restriction will stay. The check should be if\n there is a rule with event SELECT --> view.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 22 Jan 1999 17:52:43 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] view?" 
}, { "msg_contents": "Jan Wieck wrote:\n> \n> Jose' Soares wrote:\n> \n> > I modified psql.c to use pg_get_viewdef() function to seek for views and\n> > now I can display only tables using \\dt\n>\n> I suggest not to apply this patch\n> \n> 1. The function pg_get_viewdef() is definitely too much\n> overhead. In fact it must parse back the complete view\n> .......\n\nI used pg_get_viewdef() function to properly detect views and tables in\nPgAccess.\nFor the moment, I have released a new version 0.94 of PgAccess based on\nthis and it works fine.\n\nI am sure that you are right concerning pg_get_viewdef() function, but\nplease, could you tell me another way of detecting views from \"false\nviews\" ? relhasrules field isn't good enough for it and for the moment,\npg_get_viewdef() seems to be a good method. If anyone could tell me\nanother way of safely detecting views I can change it.\n\nAlso, I used pg_get_viewdef() in order to get views's definition for the\n\"Design\" view function so, I will need also such a function in order to\nimplement this feature.\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Sat, 23 Jan 1999 01:11:07 +0200", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] view?" }, { "msg_contents": "\n\nConstantin Teodorescu ha scritto:\n\n> Jan Wieck wrote:\n> >\n> > Jose' Soares wrote:\n> >\n> > > I modified psql.c to use pg_get_viewdef() function to seek for views and\n> > > now I can display only tables using \\dt\n> >\n> > I suggest not to apply this patch\n> >\n> > 1. The function pg_get_viewdef() is definitely too much\n> > overhead. 
In fact it must parse back the complete view\n> > .......\n>\n> I used pg_get_viewdef() function to properly detect views and tables in\n> PgAccess.\n> For the moment, I have released a new version 0.94 of PgAccess based on\n> this and it works fine.\n>\n> I am sure that you are right concerning pg_get_viewdef() function, but\n> please, could you tell me another way of detecting views from \"false\n> views\" ? relhasrules field isn't good enough for it and for the moment,\n> pg_get_viewdef() seems to be a good method. If anyone could tell me\n> another way of safely detecting views I can change it.\n>\n> Also, I used pg_get_viewdef() in order to get views's definition for the\n> \"Design\" view function so, I will need also such a function in order to\n> implement this feature.\n>\n> --\n> Constantin Teodorescu\n> FLEX Consulting Braila, ROMANIA\n\nI'm not sure whether we may consider the pg_views data reliable.\nIf so, you can check for views in it, like this:\n\nhygea=> \\d pg_views\n\nTable = pg_views\n+----------------------------------+----------------------------------+-------+\n| Field                            | Type                             | Length|\n+----------------------------------+----------------------------------+-------+\n| viewname                         | name                             |     32|\n| viewowner                        | name                             |     32|\n| definition                       | text                             |    var|\n+----------------------------------+----------------------------------+-------+\n\nhygea=> \\dv\nDatabase = hygea\n +------------------+----------------------------------+----------+\n | Owner            | Relation                         | Type     |\n +------------------+----------------------------------+----------+\n | postgres         | wattivita                        | view     |\n | postgres         | wtabelle                         | view     |\n +------------------+----------------------------------+----------+\n\nhygea=> select 'yes' from pg_views where viewname='wattivita';\n?column?\n--------\nyes\n(1 row)\n\n-Jose'-\n\n", "msg_date": "Fri, 12 Feb 1999 14:53:21 +0100", "msg_from": "\"Jose' Soares\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] view?" } ]
[ { "msg_contents": "I just uploaded the RPMs of PostgreSQL 6.4.2 to\nftp://ftp.postgresql.org/pub/.incoming directory.\nCan someone please move it up to pub directory??\n\nthanks\n\nal dev\n\n", "msg_date": "Fri, 22 Jan 1999 15:45:28 +0000", "msg_from": "Al Dev <[email protected]>", "msg_from_op": true, "msg_subject": "Move PostgreSQL 6.4.2 RedHat RPM packages to proper locations" } ]
[ { "msg_contents": "\nI've gotten a fairly recent snapshot from FTP and compiled it\nup on Alpha/Linux.\n\nOn running, the forked process dies with a palloc error....it\nwas trying to grab 4 gig worth of memory.\n\nI've tracked this problem down to\nbackend/utils/cache/relcache.c\n\nin function init_irels:\nlen is defined as Size, which I've checked out and it turns\nout to be size_t...which happens to be 8 bytes on an Alpha.\nlen is uninitialised.\n\nfurther down in init_irels\nif ((nread = FileRead(fd, (char *) &len, sizeof(int))) != sizeof(int))\n\nwhich should set len to be the size of the relation read from disk.\nWhat is happening is that 4 bytes are being read from the file\nand saved in len. Writing 4 into 8 doesn't work too well so a\ncomplete mess is made.\n\nInitialising len to 0 fixes the problem, but I'm thinking that this\nisn't the best solution. Since len is size_t, I'm figuring that the\nbig scheme of things is to allow for larger tables on 64 bit machines.\n\nI'm new to the internals so I don't know if the DB files are architecture\nindependent (still wading through documentation)....if they aren't...then I\nguess Size should be replaced with int...and all should be happy.\n\nWhat is the correct way to fix this?\n\nMy ultimate goal here is to get timestamps working correctly on the Alpha.\nYou probably already know...but in case you didn't...\n\ncreate table dat ( d date, t timestamp);\nCREATE\ninsert into dat values ('1999-01-22', '1999-01-22 16:00:00'); \nINSERT 18602 1\nselect * from dat;\n d|t \n----------+----------------------\n01-22-1999|2135-02-28 22:28:16+00\n(1 row)\n\nSometimes the first select returns the correct values...\nbut doing the select again immediately after the first select\nwill return 2135 again (probably due to the caching).\n\nAny help on where I should be looking would be gratefully received!\n\nCheers!\n\n-- \nAdrian Gartland - Server Development Manager\nOregan Networks UK Ltd Tel: +44 (0) 1530 56 33 
11\nHuntingdon Court, Ashby de la Zouch Fax: +44 (0) 1530 56 33 22\nLeicestershire, LE65 1AH, United Kingdom WWW: http://www.oregan.net/\n\n", "msg_date": "Fri, 22 Jan 1999 16:09:39 +0000 (GMT)", "msg_from": "Adrian Gartland <[email protected]>", "msg_from_op": true, "msg_subject": "Alpha Size (size_t) != int problem" }, { "msg_contents": "> \n> I've gotten a fairly recent snapshot from FTP and compiled it\n> up on Alpha/linux.\n> \n> On running, the forked process dies with a palloc error....it\n> was trying to grab 4 gig worth of memory.\n> \n> I've tracked this problem down to\n> backend/utils/cache/relcache.c\n> \n> in function init_irels:\n> len is defined as Size, which I've checked out and it turns\n> out to be size_t...which happens to be 8 byte on an alpha.\n> len is uninitialised.\n> \n> further down in init_irels\n> if ((nread = FileRead(fd, (char *) &len, sizeof(int))) != sizeof(int))\n> \n> which should set len to be the size of the relation read from disk.\n> What is happening is that 4 bytes are being read from the file\n> and saved in len. Writing 4 into 8 doesn't work too well so a\n> complete mess is made.\n> \n> Initialising len to 0 fixes the problem, but I'm thinking that this\n> isn't the best solution. 
Since len is size_t, I'm figuring that the\n> big scheme of things is to allow for larger tables on 64 bit machines.\n> \n> I'm new to the internals so I don't know if the DB files are architecture\n> independant (still wading through documentation)....if it isn't...then I\n> guess Size should be replaced with int...and all should be happy.\n> \n> What is the correct way about fixing this?\n> \n> My ultimate goal here is to get timestamps working correctly on the alpha.\n> You probably already know...but incase you didn't...\n> \n> create table dat ( d date, t timestamp);\n> CREATE\n> insert into dat values ('1999-01-22', '1999-01-22 16:00:00'); \n> INSERT 18602 1\n> select * from dat;\n> d|t \n> ----------+----------------------\n> 01-22-1999|2135-02-28 22:28:16+00\n> (1 row)\n> \n> Sometimes...the select the first time returns the correct values...\n> but on doing the select again immidately after the first select\n> will return 2135 again (probably due to the caching).\n> \n> Any help on where I should be looking would be greatly received!\n\nI have applied a fix to the tree. The fix is to replace sizeof(int)\nwith sizeof(len). I checked the rest of the source code, and couldn't\nfind any other places where this would be a problem.\n\nPatch is attached.\n\n---------------------------------------------------------------------------\n\nIndex: relcache.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/utils/cache/relcache.c,v\nretrieving revision 1.53\nretrieving revision 1.54\ndiff -c -r1.53 -r1.54\n*** relcache.c\t1999/01/17 06:18:51\t1.53\n--- relcache.c\t1999/01/22 16:49:25\t1.54\n***************\n*** 7,13 ****\n *\n *\n * IDENTIFICATION\n! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/utils/cache/relcache.c,v 1.53 1999/01/17 06:18:51 momjian Exp $\n *\n *-------------------------------------------------------------------------\n */\n--- 7,13 ----\n *\n *\n * IDENTIFICATION\n! 
*\t $Header: /usr/local/cvsroot/pgsql/src/backend/utils/cache/relcache.c,v 1.54 1999/01/22 16:49:25 momjian Exp $\n *\n *-------------------------------------------------------------------------\n */\n***************\n*** 1876,1882 ****\n \tfor (relno = 0; relno < Num_indices_bootstrap; relno++)\n \t{\n \t\t/* first read the relation descriptor length */\n! \t\tif ((nread = FileRead(fd, (char *) &len, sizeof(int))) != sizeof(int))\n \t\t{\n \t\t\twrite_irels();\n \t\t\treturn;\n--- 1876,1882 ----\n \tfor (relno = 0; relno < Num_indices_bootstrap; relno++)\n \t{\n \t\t/* first read the relation descriptor length */\n! \t\tif ((nread = FileRead(fd, (char *) &len, sizeof(len))) != sizeof(len))\n \t\t{\n \t\t\twrite_irels();\n \t\t\treturn;\n***************\n*** 1899,1905 ****\n \t\tird->lockInfo = (char *) NULL;\n \n \t\t/* next, read the access method tuple form */\n! \t\tif ((nread = FileRead(fd, (char *) &len, sizeof(int))) != sizeof(int))\n \t\t{\n \t\t\twrite_irels();\n \t\t\treturn;\n--- 1899,1905 ----\n \t\tird->lockInfo = (char *) NULL;\n \n \t\t/* next, read the access method tuple form */\n! \t\tif ((nread = FileRead(fd, (char *) &len, sizeof(len))) != sizeof(len))\n \t\t{\n \t\t\twrite_irels();\n \t\t\treturn;\n***************\n*** 1915,1921 ****\n \t\tird->rd_am = am;\n \n \t\t/* next read the relation tuple form */\n! \t\tif ((nread = FileRead(fd, (char *) &len, sizeof(int))) != sizeof(int))\n \t\t{\n \t\t\twrite_irels();\n \t\t\treturn;\n--- 1915,1921 ----\n \t\tird->rd_am = am;\n \n \t\t/* next read the relation tuple form */\n! \t\tif ((nread = FileRead(fd, (char *) &len, sizeof(len))) != sizeof(len))\n \t\t{\n \t\t\twrite_irels();\n \t\t\treturn;\n***************\n*** 1937,1943 ****\n \t\tlen = ATTRIBUTE_TUPLE_SIZE;\n \t\tfor (i = 0; i < relform->relnatts; i++)\n \t\t{\n! 
\t\t\tif ((nread = FileRead(fd, (char *) &len, sizeof(int))) != sizeof(int))\n \t\t\t{\n \t\t\t\twrite_irels();\n \t\t\t\treturn;\n--- 1937,1943 ----\n \t\tlen = ATTRIBUTE_TUPLE_SIZE;\n \t\tfor (i = 0; i < relform->relnatts; i++)\n \t\t{\n! \t\t\tif ((nread = FileRead(fd, (char *) &len, sizeof(len))) != sizeof(len))\n \t\t\t{\n \t\t\t\twrite_irels();\n \t\t\t\treturn;\n***************\n*** 1953,1959 ****\n \t\t}\n \n \t\t/* next, read the index strategy map */\n! \t\tif ((nread = FileRead(fd, (char *) &len, sizeof(int))) != sizeof(int))\n \t\t{\n \t\t\twrite_irels();\n \t\t\treturn;\n--- 1953,1959 ----\n \t\t}\n \n \t\t/* next, read the index strategy map */\n! \t\tif ((nread = FileRead(fd, (char *) &len, sizeof(len))) != sizeof(len))\n \t\t{\n \t\t\twrite_irels();\n \t\t\treturn;\n***************\n*** 1985,1991 ****\n \t\tird->rd_istrat = strat;\n \n \t\t/* finally, read the vector of support procedures */\n! \t\tif ((nread = FileRead(fd, (char *) &len, sizeof(int))) != sizeof(int))\n \t\t{\n \t\t\twrite_irels();\n \t\t\treturn;\n--- 1985,1991 ----\n \t\tird->rd_istrat = strat;\n \n \t\t/* finally, read the vector of support procedures */\n! \t\tif ((nread = FileRead(fd, (char *) &len, sizeof(len))) != sizeof(len))\n \t\t{\n \t\t\twrite_irels();\n \t\t\treturn;\n***************\n*** 2082,2089 ****\n \t\tlen = sizeof(RelationData);\n \n \t\t/* first, write the relation descriptor length */\n! \t\tif ((nwritten = FileWrite(fd, (char *) &len, sizeof(int)))\n! \t\t\t!= sizeof(int))\n \t\t\telog(FATAL, \"cannot write init file -- descriptor length\");\n \n \t\t/* next, write out the Relation structure */\n--- 2082,2089 ----\n \t\tlen = sizeof(RelationData);\n \n \t\t/* first, write the relation descriptor length */\n! \t\tif ((nwritten = FileWrite(fd, (char *) &len, sizeof(len)))\n! 
\t\t\t!= sizeof(len))\n \t\t\telog(FATAL, \"cannot write init file -- descriptor length\");\n \n \t\t/* next, write out the Relation structure */\n***************\n*** 2092,2099 ****\n \n \t\t/* next, write the access method tuple form */\n \t\tlen = sizeof(FormData_pg_am);\n! \t\tif ((nwritten = FileWrite(fd, (char *) &len, sizeof(int)))\n! \t\t\t!= sizeof(int))\n \t\t\telog(FATAL, \"cannot write init file -- am tuple form length\");\n \n \t\tif ((nwritten = FileWrite(fd, (char *) am, len)) != len)\n--- 2092,2099 ----\n \n \t\t/* next, write the access method tuple form */\n \t\tlen = sizeof(FormData_pg_am);\n! \t\tif ((nwritten = FileWrite(fd, (char *) &len, sizeof(len)))\n! \t\t\t!= sizeof(len))\n \t\t\telog(FATAL, \"cannot write init file -- am tuple form length\");\n \n \t\tif ((nwritten = FileWrite(fd, (char *) am, len)) != len)\n***************\n*** 2101,2108 ****\n \n \t\t/* next write the relation tuple form */\n \t\tlen = sizeof(FormData_pg_class);\n! \t\tif ((nwritten = FileWrite(fd, (char *) &len, sizeof(int)))\n! \t\t\t!= sizeof(int))\n \t\t\telog(FATAL, \"cannot write init file -- relation tuple form length\");\n \n \t\tif ((nwritten = FileWrite(fd, (char *) relform, len)) != len)\n--- 2101,2108 ----\n \n \t\t/* next write the relation tuple form */\n \t\tlen = sizeof(FormData_pg_class);\n! \t\tif ((nwritten = FileWrite(fd, (char *) &len, sizeof(len)))\n! \t\t\t!= sizeof(len))\n \t\t\telog(FATAL, \"cannot write init file -- relation tuple form length\");\n \n \t\tif ((nwritten = FileWrite(fd, (char *) relform, len)) != len)\n***************\n*** 2112,2119 ****\n \t\tlen = ATTRIBUTE_TUPLE_SIZE;\n \t\tfor (i = 0; i < relform->relnatts; i++)\n \t\t{\n! \t\t\tif ((nwritten = FileWrite(fd, (char *) &len, sizeof(int)))\n! 
\t\t\t\t!= sizeof(int))\n \t\t\t\telog(FATAL, \"cannot write init file -- length of attdesc %d\", i);\n \t\t\tif ((nwritten = FileWrite(fd, (char *) ird->rd_att->attrs[i], len))\n \t\t\t\t!= len)\n--- 2112,2119 ----\n \t\tlen = ATTRIBUTE_TUPLE_SIZE;\n \t\tfor (i = 0; i < relform->relnatts; i++)\n \t\t{\n! \t\t\tif ((nwritten = FileWrite(fd, (char *) &len, sizeof(len)))\n! \t\t\t\t!= sizeof(len))\n \t\t\t\telog(FATAL, \"cannot write init file -- length of attdesc %d\", i);\n \t\t\tif ((nwritten = FileWrite(fd, (char *) ird->rd_att->attrs[i], len))\n \t\t\t\t!= len)\n***************\n*** 2123,2130 ****\n \t\t/* next, write the index strategy map */\n \t\tlen = AttributeNumberGetIndexStrategySize(relform->relnatts,\n \t\t\t\t\t\t\t\t\t\t\t\t am->amstrategies);\n! \t\tif ((nwritten = FileWrite(fd, (char *) &len, sizeof(int)))\n! \t\t\t!= sizeof(int))\n \t\t\telog(FATAL, \"cannot write init file -- strategy map length\");\n \n \t\tif ((nwritten = FileWrite(fd, (char *) strat, len)) != len)\n--- 2123,2130 ----\n \t\t/* next, write the index strategy map */\n \t\tlen = AttributeNumberGetIndexStrategySize(relform->relnatts,\n \t\t\t\t\t\t\t\t\t\t\t\t am->amstrategies);\n! \t\tif ((nwritten = FileWrite(fd, (char *) &len, sizeof(len)))\n! \t\t\t!= sizeof(len))\n \t\t\telog(FATAL, \"cannot write init file -- strategy map length\");\n \n \t\tif ((nwritten = FileWrite(fd, (char *) strat, len)) != len)\n***************\n*** 2132,2139 ****\n \n \t\t/* finally, write the vector of support procedures */\n \t\tlen = relform->relnatts * (am->amsupport * sizeof(RegProcedure));\n! \t\tif ((nwritten = FileWrite(fd, (char *) &len, sizeof(int)))\n! \t\t\t!= sizeof(int))\n \t\t\telog(FATAL, \"cannot write init file -- support vector length\");\n \n \t\tif ((nwritten = FileWrite(fd, (char *) support, len)) != len)\n--- 2132,2139 ----\n \n \t\t/* finally, write the vector of support procedures */\n \t\tlen = relform->relnatts * (am->amsupport * sizeof(RegProcedure));\n! 
\t\tif ((nwritten = FileWrite(fd, (char *) &len, sizeof(len)))\n! \t\t\t!= sizeof(len))\n \t\t\telog(FATAL, \"cannot write init file -- support vector length\");\n \n \t\tif ((nwritten = FileWrite(fd, (char *) support, len)) != len)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Jan 1999 11:57:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Alpha Size (size_t) != int problem" } ]
[ { "msg_contents": "Hi hackers:\n\nI plan to do some research on indexing and concurrency control, and\nimplement \nthe algorithms into PostgreSQL. Do some comparison on them.\n\nWould anyone suggest me where I should go into first?\n\nI saw there should be a developer's guide out there, but I looked all over\nthe web and\ncould not find it. Please point me the direction. Thank you.\n\nJack Ho\n========================================\nUniversity of Oklahoma\nSchool of Computer Science\n\nHome:\nPhone: 405-579-3368\nAddress: 333 E Brooks #6\n Norman, OK 73069\n\nOklahoma State Department of Health\nImmunization Division\n\nOffice:\nPhone: 405-271-7200 ext 46182\nFAX: 405-271-6133\nAddress: Oklahoma State Department of Health\n NCH-Immunization, Jack Ho\n 1000 N.E. 10th\n Oklahoma City, OK 73117-1299\n", "msg_date": "Fri, 22 Jan 1999 10:32:49 -0600", "msg_from": "\"ho9221\" <[email protected]>", "msg_from_op": true, "msg_subject": "developer's guide?" }, { "msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> Hi hackers:\n> \n> I plan to do some research on indexing and concurrency control, and\n> implement \n> the algorithms into PostgreSQL. Do some comparison on them.\n> \n> Would anyone suggest me where I should go into first?\n> \n> I saw there should be a developer's guide out there, but I looked all over\n> the web and\n> could not find it. Please point me the direction. Thank you.\n\nAt www.postgresql.org, choose support, then documenation. It's all\nthere, developers guide, flow chart, and developers FAQ.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Jan 1999 11:59:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] developer's guide?" } ]
[ { "msg_contents": ">Would anyone suggest me where I should go into first?\n>\n>I saw there should be a developer's guide out there, but I looked all over\n>the web and\n>could not find it. Please point me the direction. Thank you.\n\nYou can find all documentation as part of the normal Postgres distribution\nin the 'doc' directory.\n\nGood luck\n\nMatthias Schmitt\nmagic moving pixel s.a. http://www.mmp.lu\n", "msg_date": "Fri, 22 Jan 1999 17:49:13 +0100", "msg_from": "Matthias Schmitt <[email protected]>", "msg_from_op": true, "msg_subject": "RE: developer's guide?" } ]
[ { "msg_contents": "I found this item on a list of security vulnerabilities:\n\nmysql(1114) Remote stack overflow, create world-writable root-owned\nfiles\n\n(Port 1114 is mysql's listening port). I guess Postgres doesn't have a\nvulnerability for root ownership since nothing we do is run under the\nroot account, right?\n\nAre we vulnerable to stack or buffer overflows with our on the wire\nprotocol?\n\n - Tom\n", "msg_date": "Sat, 23 Jan 1999 03:00:19 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "MySQL vulnerability" }, { "msg_contents": "\"Thomas G. Lockhart\" <[email protected]> writes:\n> I found this item on a list of security vulnerabilities:\n> mysql(1114) Remote stack overflow, create world-writable root-owned\n> files\n> (Port 1114 is mysql's listening port). I guess Postgres doesn't have a\n> vulnerability for root ownership since nothing we do is run under the\n> root account, right?\n\nNot unless someone ignores the instructions and installs it to run as\nroot :-(\n\n> Are we vulnerable to stack or buffer overflows with our on the wire\n> protocol?\n\nThe postmaster seems to be secure against that --- pqpacket.c will\nreject oversize packets out of hand. The backend used to have an\noff-by-one bug in pq_getstr, such that an overlength query would write\none byte past the end of the query buffer, but that's been fixed (it'd\nbe hard to exploit anyway). 
libpq is careful about this sort of\nthing also, although I suspect you could force a client application\ncrash by sending a query response large enough to exhaust memory :-(\n\nOf course, a bad guy who's able to get past the postmaster's\nauthorization checks can do you far more damage by messing up your\ndatabase than by just crashing a particular backend or client...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Jan 1999 13:01:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MySQL vulnerability " }, { "msg_contents": "\"Thomas G. Lockhart\" wrote:\n> Are we vulnerable to stack or buffer overflows with our on the wire\n> protocol?\n\nThere are lots of sprintf and such in there, \nwhich are potential stack overflows.\n\nA security audit would be a good thing, but it is a rather time-consuming\n(and not very fun) task in a complex system like an RDBMS.\n\n\tregards,\n-- \n-----------------\nGöran Thyni\nThis is Penguin Country. On a quiet night you can hear Windows NT\nreboot!\n\n", "msg_date": "Tue, 26 Jan 1999 17:49:39 +0100", "msg_from": "Goran Thyni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MySQL vulnerability" } ]
[ { "msg_contents": "I am sending this patch to hackers because I think it needs some\ndiscussion before being added. I'm not 100% sure that there\nisn't some internal issue with making these changes but so far\nit seems to work for me.\n\nIn interfaces/libpq/libpq-fe.h there are some structures that include\nchar pointers. Often one would expect the user to send const strings\nto the functions using these pointers. The following keeps external\nprograms from failing when full error checking is enabled.\n\n\n*** ../src.original/./interfaces/libpq/libpq-fe.h\tSat Jan 16 07:33:49 1999\n--- ./interfaces/libpq/libpq-fe.h\tFri Jan 22 07:14:21 1999\n***************\n*** 100,108 ****\n \t\tpqbool\t\thtml3;\t\t/* output html tables */\n \t\tpqbool\t\texpanded;\t/* expand tables */\n \t\tpqbool\t\tpager;\t\t/* use pager for output if needed */\n! \t\tchar\t *fieldSep;\t/* field separator */\n! \t\tchar\t *tableOpt;\t/* insert to HTML <table ...> */\n! \t\tchar\t *caption;\t/* HTML <caption> */\n \t\tchar\t **fieldName;\t/* null terminated array of repalcement\n \t\t\t\t\t\t\t\t * field names */\n \t} PQprintOpt;\n--- 100,108 ----\n \t\tpqbool\t\thtml3;\t\t/* output html tables */\n \t\tpqbool\t\texpanded;\t/* expand tables */\n \t\tpqbool\t\tpager;\t\t/* use pager for output if needed */\n! \t\tconst char *fieldSep;\t/* field separator */\n! \t\tconst char *tableOpt;\t/* insert to HTML <table ...> */\n! \t\tconst char *caption;\t/* HTML <caption> */\n \t\tchar\t **fieldName;\t/* null terminated array of repalcement\n \t\t\t\t\t\t\t\t * field names */\n \t} PQprintOpt;\n***************\n*** 113,124 ****\n */\n \ttypedef struct _PQconninfoOption\n \t{\n! \t\tchar\t *keyword;\t/* The keyword of the option\t\t\t*/\n! \t\tchar\t *envvar;\t/* Fallback environment variable name\t*/\n! \t\tchar\t *compiled;\t/* Fallback compiled in default value\t*/\n! \t\tchar\t *val;\t\t/* Options value\t\t\t\t\t\t*/\n! \t\tchar\t *label;\t\t/* Label for field in connect dialog\t*/\n! 
\t\tchar\t *dispchar;\t/* Character to display for this field\t*/\n \t\t\t\t\t\t\t\t/* in a connect dialog. Values are:\t\t*/\n \t\t\t\t\t\t\t\t/* \"\"\tDisplay entered value as is */\n \t\t\t\t\t\t\t\t/* \"*\"\tPassword field - hide value */\n--- 113,124 ----\n */\n \ttypedef struct _PQconninfoOption\n \t{\n! \t\tconst char\t*keyword;\t/* The keyword of the option\t\t\t*/\n! \t\tconst char\t*envvar;\t/* Fallback environment variable name\t*/\n! \t\tconst char\t*compiled;\t/* Fallback compiled in default value\t*/\n! \t\tchar\t\t*val;\t\t/* Options value\t\t\t\t\t\t*/\n! \t\tconst char\t*label;\t\t/* Label for field in connect dialog\t*/\n! \t\tconst char\t*dispchar;\t/* Character to display for this field\t*/\n \t\t\t\t\t\t\t\t/* in a connect dialog. Values are:\t\t*/\n \t\t\t\t\t\t\t\t/* \"\"\tDisplay entered value as is */\n \t\t\t\t\t\t\t\t/* \"*\"\tPassword field - hide value */\n*** ../src.original/./interfaces/libpq/fe-print.c\tFri Jan 22 07:02:10 1999\n--- ./interfaces/libpq/fe-print.c\tFri Jan 22 07:03:09 1999\n***************\n*** 681,687 ****\n \t\tp = border;\n \t\tif (po->standard)\n \t\t{\n! \t\t\tchar\t *fs = po->fieldSep;\n \n \t\t\twhile (*fs++)\n \t\t\t\t*p++ = '+';\n--- 681,687 ----\n \t\tp = border;\n \t\tif (po->standard)\n \t\t{\n! \t\t\tconst char\t *fs = po->fieldSep;\n \n \t\t\twhile (*fs++)\n \t\t\t\t*p++ = '+';\n***************\n*** 693,699 ****\n \t\t\tfor (len = fieldMax[j] + (po->standard ? 2 : 0); len--; *p++ = '-');\n \t\t\tif (po->standard || (j + 1) < nFields)\n \t\t\t{\n! \t\t\t\tchar\t *fs = po->fieldSep;\n \n \t\t\t\twhile (*fs++)\n \t\t\t\t\t*p++ = '+';\n--- 693,699 ----\n \t\t\tfor (len = fieldMax[j] + (po->standard ? 2 : 0); len--; *p++ = '-');\n \t\t\tif (po->standard || (j + 1) < nFields)\n \t\t\t{\n! 
\t\t\t\tconst char\t *fs = po->fieldSep;\n \n \t\t\t\twhile (*fs++)\n \t\t\t\t\t*p++ = '+';\n*** ../src.original/./interfaces/libpq/fe-connect.c\tFri Jan 22 07:04:03 1999\n--- ./interfaces/libpq/fe-connect.c\tFri Jan 22 07:13:09 1999\n***************\n*** 48,54 ****\n static void freePGconn(PGconn *conn);\n static void closePGconn(PGconn *conn);\n static int\tconninfo_parse(const char *conninfo, char *errorMessage);\n! static char *conninfo_getval(char *keyword);\n static void conninfo_free(void);\n static void defaultNoticeProcessor(void *arg, const char *message);\n \n--- 48,54 ----\n static void freePGconn(PGconn *conn);\n static void closePGconn(PGconn *conn);\n static int\tconninfo_parse(const char *conninfo, char *errorMessage);\n! static const char *conninfo_getval(const char *keyword);\n static void conninfo_free(void);\n static void defaultNoticeProcessor(void *arg, const char *message);\n \n***************\n*** 172,179 ****\n PGconn *\n PQconnectdb(const char *conninfo)\n {\n! \tPGconn\t *conn;\n! \tchar\t *tmp;\n \n \t/* ----------\n \t * Allocate memory for the conn structure\n--- 172,179 ----\n PGconn *\n PQconnectdb(const char *conninfo)\n {\n! \tPGconn\t\t *conn;\n! \tconst char\t *tmp;\n \n \t/* ----------\n \t * Allocate memory for the conn structure\n***************\n*** 284,291 ****\n PGconn *\n PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions, const char *pgtty, const char *dbName, const char *login, const char *pwd)\n {\n! \tPGconn\t *conn;\n! \tchar\t *tmp;\n \n \t/* An error message from some service we call. */\n \tbool\t\terror = FALSE;\n--- 284,291 ----\n PGconn *\n PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions, const char *pgtty, const char *dbName, const char *login, const char *pwd)\n {\n! \tPGconn\t\t*conn;\n! \tconst char\t*tmp;\n \n \t/* An error message from some service we call. 
*/\n \tbool\t\terror = FALSE;\n***************\n*** 1137,1143 ****\n \tchar\t *pname;\n \tchar\t *pval;\n \tchar\t *buf;\n! \tchar\t *tmp;\n \tchar\t *cp;\n \tchar\t *cp2;\n \tPQconninfoOption *option;\n--- 1137,1143 ----\n \tchar\t *pname;\n \tchar\t *pval;\n \tchar\t *buf;\n! \tconst char *tmp;\n \tchar\t *cp;\n \tchar\t *cp2;\n \tPQconninfoOption *option;\n***************\n*** 1343,1350 ****\n }\n \n \n! static char *\n! conninfo_getval(char *keyword)\n {\n \tPQconninfoOption *option;\n \n--- 1343,1350 ----\n }\n \n \n! static const char *\n! conninfo_getval(const char *keyword)\n {\n \tPQconninfoOption *option;\n \n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sat, 23 Jan 1999 07:42:19 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Adding some const keywords to external interfaces" }, { "msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> In interfaces/libpq/libpq-fe.h there are some structures that include\n> char pointers. Often one would expect the user to send const strings\n> to the functions using these pointers. The following keeps external\n> programs from failing when full error checking is enabled.\n\nYeah, I thought about const-ifying libpq's interfaces when I was working\non it last summer, but desisted for fear of breaking existing\napplication code. The trouble with adding a few const keywords is that\nthey propagate. Just as you had to change some of libpq's internal\nvariables from \"char *\" to \"const char *\" after modifying these structs,\nso an application program would likely find that it needed to const-ify\nsome of its declarations to avoid errors/warnings created by this\nchange. 
So, I didn't do it for fear of complaints.\n\nIMHO, a partially const-ified program is worse than no consts at all;\nyou find yourself introducing casts all over the place to deal with\ntransitions between code that knows things are const and code that\ndoesn't use const. So if we were going to do this I'd recommend doing\nit whole-sale, and marking everything we could const in libpq-fe.h.\nFor example, most of the routines that accept or return char * really\nought to accept or return const char *; the pure inquiry functions ought\nto take a const PGresult *, but not PQclear; etc. But that would make\nit even more likely that app programmers would be forced to clean up\ncode that is working fine for them now. (We'd definitely have to clean\nup the Postgres code that calls libpq.)\n\nOn the whole this seems like a can of worms better left unopened.\nI certainly don't see it as something that one small patch will fix;\nif we want to take const-safety seriously the effects will ripple\nthroughout the code...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Jan 1999 13:27:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Adding some const keywords to external interfaces " }, { "msg_contents": "> Yeah, I thought about const-ifying libpq's interfaces when I was working\n> on it last summer, but desisted for fear of breaking existing\n> application code. The trouble with adding a few const keywords is that\n> they propagate. Just as you had to change some of libpq's internal\n> variables from \"char *\" to \"const char *\" after modifying these structs,\n> so an application program would likely find that it needed to const-ify\n> some of its declarations to avoid errors/warnings created by this\n> change. 
So, I didn't do it for fear of complaints.\n> \n> IMHO, a partially const-ified program is worse than no consts at all;\n> you find yourself introducing casts all over the place to deal with\n> transitions between code that knows things are const and code that\n> doesn't use const. So if we were going to do this I'd recommend doing\n> it whole-sale, and marking everything we could const in libpq-fe.h.\n> For example, most of the routines that accept or return char * really\n> ought to accept or return const char *; the pure inquiry functions ought\n> to take a const PGresult *, but not PQclear; etc. But that would make\n> it even more likely that app programmers would be forced to clean up\n> code that is working fine for them now. (We'd definitely have to clean\n> up the Postgres code that calls libpq.)\n> \n> On the whole this seems like a can of worms better left unopened.\n> I certainly don't see it as something that one small patch will fix;\n> if we want to take const-safety seriously the effects will ripple\n> throughout the code...\n\nWell said. I have always suspected const would ripple through the code,\nbut had never heard it described so well.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 23 Jan 1999 16:35:58 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Adding some const keywords to external interfaces" }, { "msg_contents": "Thus spake Tom Lane\n> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > In interfaces/libpq/libpq-fe.h there are some structures that include\n> > char pointers. Often one would expect the user to send const strings\n> > to the functions using these pointers. 
The following keeps external\n> > programs from failing when full error checking is enabled.\n> \n> Yeah, I thought about const-ifying libpq's interfaces when I was working\n> on it last summer, but desisted for fear of breaking existing\n> application code. The trouble with adding a few const keywords is that\n> they propagate. Just as you had to change some of libpq's internal\n> variables from \"char *\" to \"const char *\" after modifying these structs,\n> so an application program would likely find that it needed to const-ify\n> some of its declarations to avoid errors/warnings created by this\n> change. So, I didn't do it for fear of complaints.\n\nActually, all the changes should be internal to our own code. Functions\nthat take const char pointers can still send non-const pointers. The\nerrors would be generated if we changed the return value of the library\nfunctions, not the arguments.\n\nNot that I don't think that return values should be changed where it\nis appropriate as well but this change doesn't do that.\n\n> code that is working fine for them now. (We'd definitely have to clean\n> up the Postgres code that calls libpq.)\n\nLike what? I compiled the whole tree including my PyGreSQL module\nand all the changes were inside libpq as I expected.\n\n> if we want to take const-safety seriously the effects will ripple\n> throughout the code...\n\nI would love to see const-safety given more attention but certainly\nwe should make sure our external interfaces are correct so that people\nwriting to them have the opportunity to do full error checking.\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sat, 23 Jan 1999 22:42:39 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Adding some const keywords to external interfaces" }, { "msg_contents": "> Actually, all the changes should be internal to our own code. Functions\n> that take const char pointers can still send non-const pointers. The\n> errors would be generated if we changed the return value of the library\n> functions, not the arguments.\n> \n> Not that I don't think that return values should be changed where it\n> is appropriate as well but this change doesn't do that.\n> \n...\n> > if we want to take const-safety seriously the effects will ripple\n> > throughout the code...\n> \n> I would love to see const-safety given more attention but certainly\n> we should make sure our external interfaces are correct so that people\n> writing to them have the opportunity to do full error checking.\n\nThese are good points. Can you post the patch again? I deleted it. \nSounds like it would be safe. I am interested in const-ify-ing the\nbackend code, if possible. It does offer a level of code checking that\nwe don't currently have.\n\nThe only issue is that it has to be done pretty exhaustively. If you\ndon't, your new const function parameters start passing params to\nfunctions that take non-const params, and warnings start to fly.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 23 Jan 1999 23:33:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Adding some const keywords to external interfaces" }, { "msg_contents": "Thus spake Bruce Momjian\n> These are good points. Can you post the patch again? I deleted it. \n\nI bounced it directly to you rather than reposting to the list.\n\n> Sounds like it would be safe. I am interested in const-ify-ing the\n> backend code, if possible. It does offer a level of code checking that\n> we don't currently have.\n\nMe too but as I said, this patch doesn't do that. It only const-ifies\nthe arguments to an external interface.\n\n> The only issue is that is has to be done pretty exhaustively. If you\n> don't, your new const function parameters start passing params to\n> functions that takes non-const params, and warnings start to fly.\n\nI compiled the entire tree without any warnings so I assume that the\nchanges wound up being pretty localized.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sun, 24 Jan 1999 09:22:18 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Adding some const keywords to external interfaces" }, { "msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> Actually, all the changes should be internal to our own code. Functions\n> that take const char pointers can still send non-const pointers. 
The\n> errors would be generated if we changed the return value of the library\n> functions, not the arguments.\n\nTrue, adding const decorations to function arguments is fairly harmless\nfrom the caller's point of view, but it also provides only a small\nfraction of the error checking available with a fully const-ified\ninterface.\n\nMore to the point, the patch you submitted was *not* adding consts to\nfunction arguments, it was adding consts to struct fields. That *can*\ncause errors in calling code, if the caller happens to copy the value\nof such a field into a local variable that's not declared const, pass\nit as an argument to a function not marked const, etc.\n\nI guess my question is \"why start here?\".\n\n>> if we want to take const-safety seriously the effects will ripple\n>> throughout the code...\n\n> I would love to see const-safety given more attention but certainly\n> we should make sure our external interfaces are correct so that people\n> writing to them have the opportunity to do full error checking.\n\nWell, that's exactly my point. I don't see much value in doing a\nhalf-baked job of decorating the interface with const declarations;\nyou don't get much real error checking that way. If we are going to\ntake const-safety seriously then we need to do the whole job.\n\nThat's a fair amount of work that will impact outside applications as\nwell as a lot of our own code (certainly most of interfaces/ and bin/,\nmore if we start const-ifying the backend's internal interfaces).\nI think we need a pgsql-hackers consensus and commitment to the idea\nbefore we start doing it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Jan 1999 13:19:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Adding some const keywords to external interfaces " }, { "msg_contents": "We agreed to skip this right now, right?\n\n\n> Thus spake Bruce Momjian\n> > These are good points. Can you post the patch again? 
I deleted it. \n> \n> I bounced it directly to you rather than reposting to the list.\n> \n> > Sounds like it would be safe. I am interested in const-ify-ing the\n> > backend code, if possible. It does offer a level of code checking that\n> > we don't currently have.\n> \n> Me too but as I said, this patch doesn't do that. It only const-ifies\n> the the arguments to an external interface.\n> \n> > The only issue is that is has to be done pretty exhaustively. If you\n> > don't, your new const function parameters start passing params to\n> > functions that takes non-const params, and warnings start to fly.\n> \n> I compiled the entire tree without any warnings so I assume that the\n> changes wound up being pretty localized.\n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Feb 1999 13:36:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Adding some const keywords to external interfaces" }, { "msg_contents": "Thus spake Bruce Momjian\n> We agreed to skip this right now, right?\n\nI still think it's benign at worst. Shall I keep the changes and resubmit\nlater?\n\n\n> > Thus spake Bruce Momjian\n> > > These are good points. Can you post the patch again? I deleted it. \n> > \n> > I bounced it directly to you rather than reposting to the list.\n> > \n> > > Sounds like it would be safe. I am interested in const-ify-ing the\n> > > backend code, if possible. It does offer a level of code checking that\n> > > we don't currently have.\n> > \n> > Me too but as I said, this patch doesn't do that. 
It only const-ifies\n> > the the arguments to an external interface.\n> > \n> > > The only issue is that is has to be done pretty exhaustively. If you\n> > > don't, your new const function parameters start passing params to\n> > > functions that takes non-const params, and warnings start to fly.\n> > \n> > I compiled the entire tree without any warnings so I assume that the\n> > changes wound up being pretty localized.\n\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 2 Feb 1999 13:58:28 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Adding some const keywords to external interfaces" }, { "msg_contents": "> \n> > > Thus spake Bruce Momjian\n> > > > These are good points. Can you post the patch again? I deleted it. \n> > > \n> > > I bounced it directly to you rather than reposting to the list.\n> > > \n> > > > Sounds like it would be safe. I am interested in const-ify-ing the\n> > > > backend code, if possible. It does offer a level of code checking that\n> > > > we don't currently have.\n> > > \n> > > Me too but as I said, this patch doesn't do that. It only const-ifies\n> > > the the arguments to an external interface.\n> > > \n> > > > The only issue is that is has to be done pretty exhaustively. If you\n> > > > don't, your new const function parameters start passing params to\n> > > > functions that takes non-const params, and warnings start to fly.\n> > > \n> > > I compiled the entire tree without any warnings so I assume that the\n> > > changes wound up being pretty localized.\n\n> Thus spake Bruce Momjian\n> > We agreed to skip this right now, right?\n> \n> I still think it's benign at worst. Shall I keepthe changes and resubmit\n> later?\n> \n\nDidn't we agree we have to do all the const stuff at once? 
And people\naddressing those internal fields would now find them to be const? I\nreally don't remember.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Feb 1999 14:19:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Adding some const keywords to external interfaces" }, { "msg_contents": "Um, I hate to say \"I told you so\", but this const-addition has\nin fact created more warning messages than it eliminated:\n\ngcc -I../../interfaces/libpq -I../../include -I../../backend -g -O -Wall -Wmissing-prototypes -c psql.c -o psql.o\npsql.c: In function `HandleSlashCmds':\npsql.c:1833: warning: passing arg 1 of `free' discards `const' from pointer target type\npsql.c:2192: warning: passing arg 1 of `free' discards `const' from pointer target type\npsql.c:2257: warning: passing arg 1 of `free' discards `const' from pointer target type\npsql.c:2265: warning: passing arg 1 of `free' discards `const' from pointer target type\npsql.c:2309: warning: passing arg 1 of `free' discards `const' from pointer target type\npsql.c: In function `main':\npsql.c:2982: warning: passing arg 1 of `free' discards `const' from pointer target type\n\nI think this is a fairly graphic demonstration of my assertion that\nadding const to application-visible fields is not entirely transparent.\n\nI would like to back this patch out until such time as we are prepared\nto fully const-ify libpq's interface (eg, declare PQgetvalue and all\nthe other accessor functions as returning const char* not just char*).\nWe shouldn't annoy application programmers a little bit here and a\nlittle bit there --- we should go the whole nine yards at once rather\nthan forcing them to insert a few more \"const\"s with each release.\nIMHO, anyway.\n\n\t\t\tregards, tom lane\n", 
"msg_date": "Thu, 04 Feb 1999 23:11:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Adding some const keywords to external interfaces " }, { "msg_contents": "I see the problem here too. Reversing it out now.\n\nD'Arcy, do you care to comment?\n\n\n> Um, I hate to say \"I told you so\", but this const-addition has\n> in fact created more warning messages than it eliminated:\n> \n> gcc -I../../interfaces/libpq -I../../include -I../../backend -g -O -Wall -Wmissing-prototypes -c psql.c -o psql.o\n> psql.c: In function `HandleSlashCmds':\n> psql.c:1833: warning: passing arg 1 of `free' discards `const' from pointer target type\n> psql.c:2192: warning: passing arg 1 of `free' discards `const' from pointer target type\n> psql.c:2257: warning: passing arg 1 of `free' discards `const' from pointer target type\n> psql.c:2265: warning: passing arg 1 of `free' discards `const' from pointer target type\n> psql.c:2309: warning: passing arg 1 of `free' discards `const' from pointer target type\n> psql.c: In function `main':\n> psql.c:2982: warning: passing arg 1 of `free' discards `const' from pointer target type\n> \n> I think this is a fairly graphic demonstration of my assertion that\n> adding const to application-visible fields is not entirely transparent.\n> \n> I would like to back this patch out until such time as we are prepared\n> to fully const-ify libpq's interface (eg, declare PQgetvalue and all\n> the other accessor functions as returning const char* not just char*).\n> We shouldn't annoy application programmers a little bit here and a\n> little bit there --- we should go the whole nine yards at once rather\n> than forcing them to insert a few more \"const\"s with each release.\n> IMHO, anyway.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 4 Feb 1999 23:26:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Adding some const keywords to external interfaces" } ]
[ { "msg_contents": "This is just weird. I tested my changes before sending them in and\nthey worked. The indisprimary field got set to TRUE just as it was\nsupposed to when I created a primary key. Now that I have pulled in\nthe updated sources from the tree it doesn't work any more. I went\nover all the current code and it has all my changes and I reviewed\nthe logic which seems fine but for some reason it doesn't work.\nTwo questions: did something related change since then that might\naffect this and does anyone else see the problem? Here is my test.\n\ncreate table x (i int primary key, t text);\nSELECT pg_class.relname, pg_attribute.attname, indisunique \n FROM pg_class, pg_attribute, pg_index \n WHERE pg_class.oid = pg_attribute.attrelid AND \n pg_class.oid = pg_index.indrelid AND \n pg_index.indkey[0] = pg_attribute.attnum AND \n pg_index.indisprimary = 't'; \n\nThis should show this new index but it doesn't on my system.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sat, 23 Jan 1999 12:47:20 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Primary key update not working" } ]
[ { "msg_contents": "With latest CVS sources, the UNION regress test isn't working.\nWell, it's still giving the right query results, but it's also\nproducing a lot of messages like this:\n\nNOTICE: equal: don't know whether nodes of type 600 are equal\n\nThat's a failure according to the expected output...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Jan 1999 16:28:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "UNION regress test is failing" } ]
[ { "msg_contents": "> Someone (you, according to the cvs logs) checked in an update to the\n> \"expected\" file for the datetime regress test, but didn't check in the\n> corresponding update to the test file itself. sql/datetime.sql is\n> still dated 1997 ...\n\nIt came from someone, not sure who. Can someone comment? If not, let's\nback it out.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 23 Jan 1999 16:37:41 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: datetime regress test busted by incomplete checkin" }, { "msg_contents": "Hello!\n\nOn Sat, 23 Jan 1999, Bruce Momjian wrote:\n\n> > Someone (you, according to the cvs logs) checked in an update to the\n> > \"expected\" file for the datetime regress test, but didn't check in the\n> > corresponding update to the test file itself. sql/datetime.sql is\n> > still dated 1997 ...\n> \n> It came from someone, not sure who. Can someone comment? If not, let's\n> back it out.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n Probably that's me. The patch attached. 
Is there any problem?\n\nOleg.\n---- \n Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.", "msg_date": "Mon, 25 Jan 1999 11:31:44 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: datetime regress test busted by incomplete checkin" }, { "msg_contents": "> Hello!\n> \n> On Sat, 23 Jan 1999, Bruce Momjian wrote:\n> \n> > > Someone (you, according to the cvs logs) checked in an update to the\n> > > \"expected\" file for the datetime regress test, but didn't check in the\n> > > corresponding update to the test file itself. sql/datetime.sql is\n> > > still dated 1997 ...\n> > \n> > It came from someone, not sure who. Can someone comment? If not, let's\n> > back it out.\n> > \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> Probably that's me. The patch attached. Is there any problem?\n\n\nYou need to patch datetime.sql too. That is required to generate the\nproper expected file, which is compared to the out file. 
Can you supply\nthe patch?\n\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\nContent-Description: \n\n> *** ./src/test/regress/expected/datetime.out.orig\tMon Jan 5 06:35:27 1998\n> --- ./src/test/regress/expected/datetime.out\tWed Jan 6 12:50:50 1999\n> ***************\n> *** 28,33 ****\n> --- 28,40 ----\n> @ 0 \n> (1 row)\n> \n> + QUERY: SET DateStyle = 'Postgres,noneuropean';\n> + QUERY: SELECT datetime('1994-01-01', '11:00') AS \"Jan_01_1994_11am\";\n> + Jan_01_1994_11am\n> + ----------------------------\n> + Sat Jan 01 11:00:00 1994 PST\n> + (1 row)\n> + \n> QUERY: CREATE TABLE DATETIME_TBL( d1 datetime);\n> QUERY: INSERT INTO DATETIME_TBL VALUES ('current');\n> QUERY: INSERT INTO DATETIME_TBL VALUES ('today');\n> *** ./src/test/regress/sql/datetime.sql.orig\tSat Nov 15 05:55:57 1997\n> --- ./src/test/regress/sql/datetime.sql\tWed Jan 6 12:49:23 1999\n> ***************\n> *** 10,15 ****\n> --- 10,18 ----\n> SELECT ('current'::datetime = 'now'::datetime) as \"True\";\n> SELECT ('now'::datetime - 'current'::datetime) AS \"ZeroSecs\";\n> \n> + SET DateStyle = 'Postgres,noneuropean';\n> + SELECT datetime('1994-01-01', '11:00') AS \"Jan_01_1994_11am\";\n> + \n> CREATE TABLE DATETIME_TBL( d1 datetime);\n> \n> INSERT INTO DATETIME_TBL VALUES ('current');\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 25 Jan 1999 10:11:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: datetime regress test busted by incomplete checkin" }, { "msg_contents": "Hi!\n\nOn Mon, 25 Jan 1999, Bruce Momjian wrote:\n> You need to patch datetime.sql too. 
That is required to generate the\n> proper expected file, which is compared to the out file. Can you supply\n> the patch?\n\n But it is here - 20 lines below from the beginning of the patch. The\npatch was generated by make_diff tools. Isn't it enough?\n\n> > *** ./src/test/regress/expected/datetime.out.orig\tMon Jan 5 06:35:27 1998\n> > --- ./src/test/regress/expected/datetime.out\tWed Jan 6 12:50:50 1999\n> > ***************\n> > *** 28,33 ****\n> > --- 28,40 ----\n> > @ 0 \n> > (1 row)\n> > \n> > + QUERY: SET DateStyle = 'Postgres,noneuropean';\n> > + QUERY: SELECT datetime('1994-01-01', '11:00') AS \"Jan_01_1994_11am\";\n> > + Jan_01_1994_11am\n> > + ----------------------------\n> > + Sat Jan 01 11:00:00 1994 PST\n> > + (1 row)\n> > + \n> > QUERY: CREATE TABLE DATETIME_TBL( d1 datetime);\n> > QUERY: INSERT INTO DATETIME_TBL VALUES ('current');\n> > QUERY: INSERT INTO DATETIME_TBL VALUES ('today');\n> > *** ./src/test/regress/sql/datetime.sql.orig\tSat Nov 15 05:55:57 1997\n> > --- ./src/test/regress/sql/datetime.sql\tWed Jan 6 12:49:23 1999\n> > ***************\n> > *** 10,15 ****\n> > --- 10,18 ----\n> > SELECT ('current'::datetime = 'now'::datetime) as \"True\";\n> > SELECT ('now'::datetime - 'current'::datetime) AS \"ZeroSecs\";\n> > \n> > + SET DateStyle = 'Postgres,noneuropean';\n> > + SELECT datetime('1994-01-01', '11:00') AS \"Jan_01_1994_11am\";\n> > + \n> > CREATE TABLE DATETIME_TBL( d1 datetime);\n> > \n> > INSERT INTO DATETIME_TBL VALUES ('current');\n\nOleg.\n---- \n Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Mon, 25 Jan 1999 18:34:26 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: datetime regress test busted by incomplete checkin" }, { "msg_contents": "> Probably that's me. The patch attached. 
Is there any problem?\n\nCould you phrase the query in the same style as most of the other tests\n(a style we inherited from the original sources), where the first column\nis a select of an empty string with the table count as the label? I\nwould suggest something like:\n\nSELECT '' AS one,\n datetime('1994-01-01', '11:00') AS \"Sat Jan 01 11:00:00 1994 PST\";\n\nI think we can use the actual current result in the result label (and\nwithout underscores), since you are surrounding it with double quotes\nanyway.\n\nTIA\n\n - Tom\n\n> + QUERY: SET DateStyle = 'Postgres,noneuropean';\n> + QUERY: SELECT datetime('1994-01-01', '11:00') AS \"Jan_01_1994_11am\";\n> + Jan_01_1994_11am\n> + ----------------------------\n> + Sat Jan 01 11:00:00 1994 PST\n> + (1 row)\n", "msg_date": "Mon, 25 Jan 1999 15:55:12 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: datetime regress test busted by incomplete checkin" }, { "msg_contents": "> Hi!\n> \n> On Mon, 25 Jan 1999, Bruce Momjian wrote:\n> > You need to patch datetime.sql too. That is required to generate the\n> > proper expected file, which is compared to the out file. Can you supply\n> > the patch?\n> \n> But it is here - 20 lines below from the beginning of the patch. The\n> patch was generated by make_diff tools. Isn't it enough?\n\nGot it. I am a dope. 
I will apply it as soon as I apply my current\naggregate work.\n\n> \n> > > *** ./src/test/regress/expected/datetime.out.orig\tMon Jan 5 06:35:27 1998\n> > > --- ./src/test/regress/expected/datetime.out\tWed Jan 6 12:50:50 1999\n> > > ***************\n> > > *** 28,33 ****\n> > > --- 28,40 ----\n> > > @ 0 \n> > > (1 row)\n> > > \n> > > + QUERY: SET DateStyle = 'Postgres,noneuropean';\n> > > + QUERY: SELECT datetime('1994-01-01', '11:00') AS \"Jan_01_1994_11am\";\n> > > + Jan_01_1994_11am\n> > > + ----------------------------\n> > > + Sat Jan 01 11:00:00 1994 PST\n> > > + (1 row)\n> > > + \n> > > QUERY: CREATE TABLE DATETIME_TBL( d1 datetime);\n> > > QUERY: INSERT INTO DATETIME_TBL VALUES ('current');\n> > > QUERY: INSERT INTO DATETIME_TBL VALUES ('today');\n> > > *** ./src/test/regress/sql/datetime.sql.orig\tSat Nov 15 05:55:57 1997\n> > > --- ./src/test/regress/sql/datetime.sql\tWed Jan 6 12:49:23 1999\n> > > ***************\n> > > *** 10,15 ****\n> > > --- 10,18 ----\n> > > SELECT ('current'::datetime = 'now'::datetime) as \"True\";\n> > > SELECT ('now'::datetime - 'current'::datetime) AS \"ZeroSecs\";\n> > > \n> > > + SET DateStyle = 'Postgres,noneuropean';\n> > > + SELECT datetime('1994-01-01', '11:00') AS \"Jan_01_1994_11am\";\n> > > + \n> > > CREATE TABLE DATETIME_TBL( d1 datetime);\n> > > \n> > > INSERT INTO DATETIME_TBL VALUES ('current');\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 25 Jan 1999 11:00:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: datetime regress test busted by incomplete checkin" }, { "msg_contents": "Hi!\n\nOn Mon, 25 Jan 1999, Thomas G. Lockhart wrote:\n> Could you phrase the query in the same style as most of the other tests\n> (a style we inherited from the original sources), where the first column\n> is a select of an empty string with the table count as the label? I\n> would suggest something like:\n> \n> SELECT '' AS one,\n> datetime('1994-01-01', '11:00') AS \"Sat Jan 01 11:00:00 1994 PST\";\n\n I got my style from the lines above my test. There are things like:\n\nSELECT ('now'::datetime - 'current'::datetime) AS \"ZeroSecs\";\n\n \"Sat Jan 01 11:00:00 1994 PST\" looks bad as a result, I think.\n\n Of course, I can recreate the patch, but should I?\n\nOleg.\n---- \n Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Mon, 25 Jan 1999 19:10:02 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: datetime regress test busted by incomplete checkin" }, { "msg_contents": "Hi!\n\nOn Mon, 25 Jan 1999, Bruce Momjian wrote:\n> Got it. I am a dope. I will apply it as soon as I apply my current\n> aggregate work.\n\n May be you need some rest, some sleep? For me, sleep is usually of big\nhelp! 
:)\n\nOleg.\n---- \n Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Mon, 25 Jan 1999 19:13:32 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: datetime regress test busted by incomplete checkin" }, { "msg_contents": "> > Could you phrase the query in the same style as most of the other \n> > tests (a style we inherited from the original sources),\n> I got my style from the lines above my test.\n> Of course, I can recreate the patch, but should I?\n\nNaw. You're right...\n\n - Tom\n", "msg_date": "Mon, 25 Jan 1999 16:25:16 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: datetime regress test busted by incomplete checkin" }, { "msg_contents": "Applied. Sorry.\n\n\n> Hello!\n> \n> On Sat, 23 Jan 1999, Bruce Momjian wrote:\n> \n> > > Someone (you, according to the cvs logs) checked in an update to the\n> > > \"expected\" file for the datetime regress test, but didn't check in the\n> > > corresponding update to the test file itself. sql/datetime.sql is\n> > > still dated 1997 ...\n> > \n> > It came from someone, not sure who. Can someone comment? If not, let's\n> > back it out.\n> > \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> Probably that's me. The patch attached. 
Is there any problem?\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\nContent-Description: \n\n> *** ./src/test/regress/expected/datetime.out.orig\tMon Jan 5 06:35:27 1998\n> --- ./src/test/regress/expected/datetime.out\tWed Jan 6 12:50:50 1999\n> ***************\n> *** 28,33 ****\n> --- 28,40 ----\n> @ 0 \n> (1 row)\n> \n> + QUERY: SET DateStyle = 'Postgres,noneuropean';\n> + QUERY: SELECT datetime('1994-01-01', '11:00') AS \"Jan_01_1994_11am\";\n> + Jan_01_1994_11am\n> + ----------------------------\n> + Sat Jan 01 11:00:00 1994 PST\n> + (1 row)\n> + \n> QUERY: CREATE TABLE DATETIME_TBL( d1 datetime);\n> QUERY: INSERT INTO DATETIME_TBL VALUES ('current');\n> QUERY: INSERT INTO DATETIME_TBL VALUES ('today');\n> *** ./src/test/regress/sql/datetime.sql.orig\tSat Nov 15 05:55:57 1997\n> --- ./src/test/regress/sql/datetime.sql\tWed Jan 6 12:49:23 1999\n> ***************\n> *** 10,15 ****\n> --- 10,18 ----\n> SELECT ('current'::datetime = 'now'::datetime) as \"True\";\n> SELECT ('now'::datetime - 'current'::datetime) AS \"ZeroSecs\";\n> \n> + SET DateStyle = 'Postgres,noneuropean';\n> + SELECT datetime('1994-01-01', '11:00') AS \"Jan_01_1994_11am\";\n> + \n> CREATE TABLE DATETIME_TBL( d1 datetime);\n> \n> INSERT INTO DATETIME_TBL VALUES ('current');\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 25 Jan 1999 13:04:12 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: datetime regress test busted by incomplete checkin" }, { "msg_contents": "Hello!\n\n Please, everyone, run the regression test and watch datetime test. 
I\nhave tested it on my computers (there is Pentium with Debian 2.0 and\nUltra-1 with Solaris 2.5.1) - the test passed well. I want to know how it\nis going on other systems.\n\nOleg.\n---- \n Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Tue, 26 Jan 1999 13:51:05 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: datetime regress test busted by incomplete checkin" } ]
[ { "msg_contents": "I've built some linux rpms for cvsup, both client and server. At the\nmoment, I've posted ones which are statically linked for libc5, but plan\nto build some glibc2/libc6 versions as well, and perhaps ones which are\nnot statically linked (which would then require at least one Modula-3\nlibrary rpm).\n\nCould someone please try installing these (the cvsup client especially)\nand let me know if they work? Testing on both libc5 and glibc2 systems\nwould be helpful; if it works under glibc2 then I wouldn't need to\nbother making a special version for it.\n\nThe files are in ftp://postgresql.org/pub/CVSup and are named:\n\n cvsup-client-15.5-1-libc5-static.i386.rpm\n cvsup-server-15.5-1-libc5-static.i386.rpm\n cvsup-15.5-1-static.src.rpm\n\nI did not include in the rpms any example configuration files (yet), but\nlet me know if you would need one for testing.\n\nAlso, if anyone has experience with building rpm packages: is there an\neasy way for me to add the \"libc5-static\" part to the package names from\nwithin the spec file? I just added those fields using \"mv\" after the\nfact...\n\n - Tom\n", "msg_date": "Sun, 24 Jan 1999 06:08:07 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "cvsup RPMs available" } ]
[ { "msg_contents": "\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n---------- Forwarded message ----------\nDate: Sat, 23 Jan 1999 07:42:19 -0500 (EST)\nFrom: D'Arcy J.M. Cain <[email protected]>\nTo: [email protected]\nSubject: [HACKERS] Adding some const keywords to external interfaces\n\nI am sending this patch to hackers because I think it needs some\ndiscussion before being added. I'm not 100% sure that there\nisn't some internal issue with making these changes but so far\nit seems to work for me.\n\nIn interfaces/libpq/libpq-fe.h there are some structures that include\nchar pointers. Often one would expect the user to send const strings\nto the functions using these pointers. The following keeps external\nprograms from failing when full error checking is enabled.\n\n\n*** ../src.original/./interfaces/libpq/libpq-fe.h\tSat Jan 16 07:33:49 1999\n--- ./interfaces/libpq/libpq-fe.h\tFri Jan 22 07:14:21 1999\n***************\n*** 100,108 ****\n \t\tpqbool\t\thtml3;\t\t/* output html tables */\n \t\tpqbool\t\texpanded;\t/* expand tables */\n \t\tpqbool\t\tpager;\t\t/* use pager for output if needed */\n! \t\tchar\t *fieldSep;\t/* field separator */\n! \t\tchar\t *tableOpt;\t/* insert to HTML <table ...> */\n! \t\tchar\t *caption;\t/* HTML <caption> */\n \t\tchar\t **fieldName;\t/* null terminated array of repalcement\n \t\t\t\t\t\t\t\t * field names */\n \t} PQprintOpt;\n--- 100,108 ----\n \t\tpqbool\t\thtml3;\t\t/* output html tables */\n \t\tpqbool\t\texpanded;\t/* expand tables */\n \t\tpqbool\t\tpager;\t\t/* use pager for output if needed */\n! \t\tconst char *fieldSep;\t/* field separator */\n! \t\tconst char *tableOpt;\t/* insert to HTML <table ...> */\n! 
\t\tconst char *caption;\t/* HTML <caption> */\n \t\tchar\t **fieldName;\t/* null terminated array of repalcement\n \t\t\t\t\t\t\t\t * field names */\n \t} PQprintOpt;\n***************\n*** 113,124 ****\n */\n \ttypedef struct _PQconninfoOption\n \t{\n! \t\tchar\t *keyword;\t/* The keyword of the option\t\t\t*/\n! \t\tchar\t *envvar;\t/* Fallback environment variable name\t*/\n! \t\tchar\t *compiled;\t/* Fallback compiled in default value\t*/\n! \t\tchar\t *val;\t\t/* Options value\t\t\t\t\t\t*/\n! \t\tchar\t *label;\t\t/* Label for field in connect dialog\t*/\n! \t\tchar\t *dispchar;\t/* Character to display for this field\t*/\n \t\t\t\t\t\t\t\t/* in a connect dialog. Values are:\t\t*/\n \t\t\t\t\t\t\t\t/* \"\"\tDisplay entered value as is */\n \t\t\t\t\t\t\t\t/* \"*\"\tPassword field - hide value */\n--- 113,124 ----\n */\n \ttypedef struct _PQconninfoOption\n \t{\n! \t\tconst char\t*keyword;\t/* The keyword of the option\t\t\t*/\n! \t\tconst char\t*envvar;\t/* Fallback environment variable name\t*/\n! \t\tconst char\t*compiled;\t/* Fallback compiled in default value\t*/\n! \t\tchar\t\t*val;\t\t/* Options value\t\t\t\t\t\t*/\n! \t\tconst char\t*label;\t\t/* Label for field in connect dialog\t*/\n! \t\tconst char\t*dispchar;\t/* Character to display for this field\t*/\n \t\t\t\t\t\t\t\t/* in a connect dialog. Values are:\t\t*/\n \t\t\t\t\t\t\t\t/* \"\"\tDisplay entered value as is */\n \t\t\t\t\t\t\t\t/* \"*\"\tPassword field - hide value */\n*** ../src.original/./interfaces/libpq/fe-print.c\tFri Jan 22 07:02:10 1999\n--- ./interfaces/libpq/fe-print.c\tFri Jan 22 07:03:09 1999\n***************\n*** 681,687 ****\n \t\tp = border;\n \t\tif (po->standard)\n \t\t{\n! \t\t\tchar\t *fs = po->fieldSep;\n \n \t\t\twhile (*fs++)\n \t\t\t\t*p++ = '+';\n--- 681,687 ----\n \t\tp = border;\n \t\tif (po->standard)\n \t\t{\n! 
\t\t\tconst char\t *fs = po->fieldSep;\n \n \t\t\twhile (*fs++)\n \t\t\t\t*p++ = '+';\n***************\n*** 693,699 ****\n \t\t\tfor (len = fieldMax[j] + (po->standard ? 2 : 0); len--; *p++ = '-');\n \t\t\tif (po->standard || (j + 1) < nFields)\n \t\t\t{\n! \t\t\t\tchar\t *fs = po->fieldSep;\n \n \t\t\t\twhile (*fs++)\n \t\t\t\t\t*p++ = '+';\n--- 693,699 ----\n \t\t\tfor (len = fieldMax[j] + (po->standard ? 2 : 0); len--; *p++ = '-');\n \t\t\tif (po->standard || (j + 1) < nFields)\n \t\t\t{\n! \t\t\t\tconst char\t *fs = po->fieldSep;\n \n \t\t\t\twhile (*fs++)\n \t\t\t\t\t*p++ = '+';\n*** ../src.original/./interfaces/libpq/fe-connect.c\tFri Jan 22 07:04:03 1999\n--- ./interfaces/libpq/fe-connect.c\tFri Jan 22 07:13:09 1999\n***************\n*** 48,54 ****\n static void freePGconn(PGconn *conn);\n static void closePGconn(PGconn *conn);\n static int\tconninfo_parse(const char *conninfo, char *errorMessage);\n! static char *conninfo_getval(char *keyword);\n static void conninfo_free(void);\n static void defaultNoticeProcessor(void *arg, const char *message);\n \n--- 48,54 ----\n static void freePGconn(PGconn *conn);\n static void closePGconn(PGconn *conn);\n static int\tconninfo_parse(const char *conninfo, char *errorMessage);\n! static const char *conninfo_getval(const char *keyword);\n static void conninfo_free(void);\n static void defaultNoticeProcessor(void *arg, const char *message);\n \n***************\n*** 172,179 ****\n PGconn *\n PQconnectdb(const char *conninfo)\n {\n! \tPGconn\t *conn;\n! \tchar\t *tmp;\n \n \t/* ----------\n \t * Allocate memory for the conn structure\n--- 172,179 ----\n PGconn *\n PQconnectdb(const char *conninfo)\n {\n! \tPGconn\t\t *conn;\n! \tconst char\t *tmp;\n \n \t/* ----------\n \t * Allocate memory for the conn structure\n***************\n*** 284,291 ****\n PGconn *\n PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions, const char *pgtty, const char *dbName, const char *login, const char *pwd)\n {\n! 
\tPGconn\t *conn;\n! \tchar\t *tmp;\n \n \t/* An error message from some service we call. */\n \tbool\t\terror = FALSE;\n--- 284,291 ----\n PGconn *\n PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions, const char *pgtty, const char *dbName, const char *login, const char *pwd)\n {\n! \tPGconn\t\t*conn;\n! \tconst char\t*tmp;\n \n \t/* An error message from some service we call. */\n \tbool\t\terror = FALSE;\n***************\n*** 1137,1143 ****\n \tchar\t *pname;\n \tchar\t *pval;\n \tchar\t *buf;\n! \tchar\t *tmp;\n \tchar\t *cp;\n \tchar\t *cp2;\n \tPQconninfoOption *option;\n--- 1137,1143 ----\n \tchar\t *pname;\n \tchar\t *pval;\n \tchar\t *buf;\n! \tconst char *tmp;\n \tchar\t *cp;\n \tchar\t *cp2;\n \tPQconninfoOption *option;\n***************\n*** 1343,1350 ****\n }\n \n \n! static char *\n! conninfo_getval(char *keyword)\n {\n \tPQconninfoOption *option;\n \n--- 1343,1350 ----\n }\n \n \n! static const char *\n! conninfo_getval(const char *keyword)\n {\n \tPQconninfoOption *option;\n \n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n\n", "msg_date": "Sun, 24 Jan 1999 04:13:01 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "[HACKERS] Adding some const keywords to external interfaces (fwd)" } ]
[ { "msg_contents": "I've been doing some more backend profiling, and observe that in a large\nSELECT from a table with lots of columns, nocachegetattr (the guts of\nheap_getattr) is at the top of the list, accounting for about 15% of\nruntime.\n\nThe percentage would be lower in a table with fewer columns or no null\ncolumns, but it still seems worth working on. (Besides, this case right\nhere is a real-world case for me.)\n\nWhat's drawing my eye is that printtup() is calling heap_getattr twice\nfor each attribute of each tuple --- once in the first scan that\nprepares the null-fields bitmap, and then again to actually output the\nfield value. So, what I want to do is call heap_getattr only once per\nattribute and save the returned value for use in the second loop.\nThat should halve the time spent in nocachegetattr and thus knock\n7 or so percent off the runtime of SELECT.\n\nThe question for the list: how long is the Datum value returned by\nheap_getattr valid? In particular, could it be invalidated by calling\nheap_getattr for another field of the same tuple? If there are any\ncases like that, then this optimization won't work. I don't know the\nbackend well enough to guess whether this is safe.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Jan 1999 12:53:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Q about heap_getattr" }, { "msg_contents": "Tom Lane wrote:\n> \n> I've been doing some more backend profiling, and observe that in a large\n> SELECT from a table with lots of columns, nocachegetattr (the guts of\n> heap_getattr) is at the top of the list, accounting for about 15% of\n> runtime.\n> \n> The percentage would be lower in a table with fewer columns or no null\n> columns, but it still seems worth working on. 
(Besides, this case right\n> here is a real-world case for me.)\n> \n> What's drawing my eye is that printtup() is calling heap_getattr twice\n> for each attribute of each tuple --- once in the first scan that\n> prepares the null-fields bitmap, and then again to actually output the\n> field value. So, what I want to do is call heap_getattr only once per\n> attribute and save the returned value for use in the second loop.\n> That should halve the time spent in nocachegetattr and thus knock\n> 7 or so percent off the runtime of SELECT.\n\nTry to use heap_attisnull in first scan!\nThis func just tests nulls bitmap array of tuple...\n\nVadim\nP.S. Tom, I forgot to attach new allocation code in my prev letter,\nbut now I want to reimplement them.\n", "msg_date": "Mon, 25 Jan 1999 01:09:13 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Q about heap_getattr" }, { "msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Tom Lane wrote:\n>> So, what I want to do is call heap_getattr only once per\n>> attribute and save the returned value for use in the second loop.\n\n> Try to use heap_attisnull in first scan!\n\nAh, that looks like a much better idea. Consider it done...\n\n\t\tthanks, tom lane\n", "msg_date": "Sun, 24 Jan 1999 13:31:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Q about heap_getattr " }, { "msg_contents": "> Tom Lane wrote:\n> > \n> > I've been doing some more backend profiling, and observe that in a large\n> > SELECT from a table with lots of columns, nocachegetattr (the guts of\n> > heap_getattr) is at the top of the list, accounting for about 15% of\n> > runtime.\n> > \n> > The percentage would be lower in a table with fewer columns or no null\n> > columns, but it still seems worth working on. 
(Besides, this case right\n> > here is a real-world case for me.)\n> > \n> > What's drawing my eye is that printtup() is calling heap_getattr twice\n> > for each attribute of each tuple --- once in the first scan that\n> > prepares the null-fields bitmap, and then again to actually output the\n> > field value. So, what I want to do is call heap_getattr only once per\n> > attribute and save the returned value for use in the second loop.\n> > That should halve the time spent in nocachegetattr and thus knock\n> > 7 or so percent off the runtime of SELECT.\n> \n> Try to use heap_attisnull in first scan!\n> This func just tests nulls bitmap array of tuple...\n> \n> Vadim\n> P.S. Tom, I forgot to attach new allocation code in my prev letter,\n> but now I want to reimplement them.\n> \n> \n\nGood idea. Hadn't thought of that. To answer Tom's question, it\ndoesn't matter how many times you call heap_getattr(). You can cache\nthe values, as long as the tuple doesn't change.\n\nnocachegetattr() computes all offsets, even offsets after the column you\nare requesting, to prevent future calls. You must have nulls or\nvarlena's that is causing nocachegetattr to be called so many times.\nIs this true? \n\nheap_getattr() certainly is called many times, and needs any\noptimization we can give it. I have done as much as I could. Perhaps\nthere are more opportunities I missed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 24 Jan 1999 13:56:12 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Q about heap_getattr" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Tom Lane wrote:\n>>>> I've been doing some more backend profiling, and observe that in a large\n>>>> SELECT from a table with lots of columns, nocachegetattr (the guts of\n>>>> heap_getattr) is at the top of the list, accounting for about 15% of\n>>>> runtime.\n>>>> \n>>>> The percentage would be lower in a table with fewer columns or no null\n>>>> columns, but it still seems worth working on. (Besides, this case right\n>>>> here is a real-world case for me.)\n\n> nocachegetattr() computes all offsets, even offsets after the column you\n> are requesting, to prevent future calls. You must have nulls or\n> varlena's that is causing nocachegetattr to be called so many times.\n> Is this true? \n\nRight, this table has 38 columns, many of which can be NULL and several\nof which are variable-size. So it's probably the worst-case scenario as\nfar as the cost of nocachegetattr is concerned. It looked to me like\nthe pre-computation aspect of nocachegetattr only works for tables where\nall the tuples have the same physical layout, ie, no varlenas or nulls;\nis that right?\n\n> heap_getattr() certainly is called many times, and needs any\n> optimization we can give it. I have done as much as I could. Perhaps\n> there are more opportunities I missed.\n\nI thought I had spotted a couple of possibilities for small improvements\nof the code inside nocachegetattr, but it was awfully late by then so\nI didn't try changing anything. 
I'll take another look.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Jan 1999 15:05:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Q about heap_getattr " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Tom Lane wrote:\n> >>>> I've been doing some more backend profiling, and observe that in a large\n> >>>> SELECT from a table with lots of columns, nocachegetattr (the guts of\n> >>>> heap_getattr) is at the top of the list, accounting for about 15% of\n> >>>> runtime.\n> >>>> \n> >>>> The percentage would be lower in a table with fewer columns or no null\n> >>>> columns, but it still seems worth working on. (Besides, this case right\n> >>>> here is a real-world case for me.)\n> \n> > nocachegetattr() computes all offsets, even offsets after the column you\n> > are requesting, to prevent future calls. You must have nulls or\n> > varlena's that is causing nocachegetattr to be called so many times.\n> > Is this true? \n> \n> Right, this table has 38 columns, many of which can be NULL and several\n> of which are variable-size. So it's probably the worst-case scenario as\n> far as the cost of nocachegetattr is concerned. It looked to me like\n> the pre-computation aspect of nocachegetattr only works for tables where\n> all the tuples have the same physical layout, ie, no varlenas or nulls;\n> is that right?\n> \n> > heap_getattr() certainly is called many times, and needs any\n> > optimization we can give it. I have done as much as I could. Perhaps\n> > there are more opportunities I missed.\n> \n> I thought I had spotted a couple of possibilities for small improvements\n> of the code inside nocachegetattr, but it was awfully late by then so\n> I didn't try changing anything. I'll take another look.\n\nAlso, I see a few places where heap_getattr is called, just looking for\na null. You can use mkid(see developers faq) to find them. 
If you\ndon't modify them, I can.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 24 Jan 1999 15:51:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Q about heap_getattr" } ]
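[Editor's note] The heap_attisnull() shortcut Vadim suggests in the thread above amounts to testing one bit in the tuple's null bitmap instead of walking attribute offsets. A minimal self-contained sketch of that bitmap test (a hypothetical simplification — the real server uses the att_isnull() macro over the tuple's t_bits array, and attribute numbers there are 1-based):

```c
#include <stdint.h>
#include <assert.h>

/* One bit per attribute; a set bit means "value present", a clear bit
   means NULL (matching the usual convention, but an assumption here). */
typedef struct {
    int     natts;     /* number of attributes in the tuple */
    uint8_t bits[32];  /* null bitmap, room for 256 attributes */
} NullBitmap;

/* Return 1 if attribute attno (0-based in this sketch) is NULL, else 0. */
static int bitmap_attisnull(const NullBitmap *bm, int attno)
{
    if (attno < 0 || attno >= bm->natts)
        return 1;                               /* out of range: treat as NULL */
    return (bm->bits[attno >> 3] & (1 << (attno & 7))) == 0;
}
```

Checking a bit this way is O(1) per attribute, whereas nocachegetattr() may have to re-walk earlier variable-width columns to find an offset — which is why using it in printtup()'s first (nulls-only) scan helps.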
[ { "msg_contents": "Came across this offhand remark on another mailing list:\n\n> 2. The code for substitute versions of snprintf() and vsnprintf(),\n> for systems without native versions has been replaced. nmh\n> now uses the version of these routines taken from the Apache\n> web server code.\n\nHmm. I don't know how bulletproof the snprintf/vsnprintf code we have\nis, but it might be worth comparing what Apache is using to see if\ntheirs is better (and if they have a compatible copyright...).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Jan 1999 13:41:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Another source of snprintf/vsnprintf code" }, { "msg_contents": "On Sun, 24 Jan 1999, Tom Lane wrote:\n\n> Hmm. I don't know how bulletproof the snprintf/vsnprintf code we have\n> is, but it might be worth comparing what Apache is using to see if\n> theirs is better (and if they have a compatible copyright...).\n\nI assume LGPL is license non grata? glib has a good *printf*\nimplementation...\n\n--\nTodd Graham Lewis 32�49'N,83�36'W (800) 719-4664, x2804\n******Linux****** MindSpring Enterprises [email protected]\n\n\"Those who write the code make the rules.\" -- Jamie Zawinski\n\n", "msg_date": "Mon, 25 Jan 1999 02:24:17 -0500 (EST)", "msg_from": "Todd Graham Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another source of snprintf/vsnprintf code" }, { "msg_contents": "Todd Graham Lewis <[email protected]> writes:\n> I assume LGPL is license non grata?\n\nProbably. I'm not sure what Marc's position is, but I'd say we ought\nto try to keep everything under a single set of license rules --- and\nfor better or worse, BSD license is what we have for the existing code.\nIf we distribute a system that has some BSD and some LGPL code, then\nusers have to follow *both* sets of rules if they want to live a clean\nlife, and that gets annoying. 
(Also, LGPL is more restrictive about\nwhat recipients can do with the code, which might mean some potential\nPostgres users couldn't use it anymore.)\n\n> glib has a good *printf* implementation...\n\nStephen Kogge <[email protected]> was looking at extracting printf\nfrom glib (because his platform's printf didn't handle long long),\nbut I think he concluded that it wasn't practical to separate it\nfrom the rest of glib --- seems everything's connected to everything\nelse...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Jan 1999 10:51:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Another source of snprintf/vsnprintf code " }, { "msg_contents": "On Mon, 25 Jan 1999, Tom Lane wrote:\n\n> Todd Graham Lewis <[email protected]> writes:\n> > I assume LGPL is license non grata?\n> \n> Probably. I'm not sure what Marc's position is, but I'd say we ought\n> to try to keep everything under a single set of license rules --- and\n> for better or worse, BSD license is what we have for the existing code.\n\nExactly...\n\nIf there are any problems with our current implementation, let us know so\nthat we can correct it...I haven't heard of any recently though (either\nhaven't heard, or its fallen on deaf ears?)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 25 Jan 1999 14:27:32 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another source of snprintf/vsnprintf code " } ]
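[Editor's note] Whichever snprintf() implementation ends up shipped, the contract callers rely on is bounded writing plus a truncation signal — something sprintf() cannot provide. A hedged sketch of the calling pattern (illustrative names, not taken from the Apache or nmh sources; C99 return semantics assumed, where snprintf() reports the length the full output would have had):

```c
#include <stdio.h>
#include <assert.h>

/* Format into buf (capacity bufsize); return 1 on success, 0 if the
   result was truncated.  Either way the buffer stays NUL-terminated. */
static int format_checked(char *buf, size_t bufsize, const char *host, int port)
{
    int needed = snprintf(buf, bufsize, "host=%s port=%d", host, port);
    return needed >= 0 && (size_t) needed < bufsize;
}
```

Pre-C99 libraries returned -1 on truncation instead, which is exactly the kind of divergence a bundled substitute implementation has to paper over.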
[ { "msg_contents": "I have some SQL table-creation scripts that include foreign key clauses,\nin the hope that it will be implemented soon. This has shown up what seems\nto me to be an inadequacy in the planned linkage between foreign keys and\ninheritance.\n\nConsider these tables:\n\ncreate table invoice\n(\n\tinvno\t\tint\t\tprimary key,\n\tcustomer\tchar(10)\tnot null\n\t\t\t\t references customer (id),\n\tdate\t\tdatetime\tnot null\n\t\t\t\t\tdefault datetime(now()),\n\tcustref\t\ttext\n);\n\ncreate table export_invoice\n(\n\t[various fields appropriate to exporting]\n)\n\tinherits (invoice)\n;\n\ncreate table invoice_line\n(\n\tinvno\t\tint\t\tnot null\n\t\t\t\t references invoice* (invno),\n\tproduct\t\tchar(10)\t\tnot null\n\t\t\t\t references price (product),\n\tqty\t\tint\t\tnot null,\n\tprice\t\tfloat\t\tnot null\n\t\n\tprimary key (invno, product)\n)\n;\n\nI want invoice_line to reference either invoice or export_invoice,\nbecause there is no difference in the structure of an invoice line\nbetween the two cases. However the parser does not allow \n`references invoice*'.\n\nSince this feature of the parser seems likely to make it impossible\nto name an inheritance tree in a foreign key reference when foreign\nkeys are finally implemented, can I suggest that this be changed now,\nplease?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"If anyone has material possessions and sees his\n brother in need but has no pity on him, how can the\n love of God be in him?\"\n I John 3:17 \n\n\n", "msg_date": "Sun, 24 Jan 1999 20:36:31 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Foreign key parsing and inherited classes" } ]
[ { "msg_contents": "With 6.4 or current sources, I find that coercing a datetime to float8\nis a no-op:\n\ntreetest=> create table dt1 (t datetime);\nCREATE\ntreetest=> insert into dt1 values('now');\nINSERT 159593 1\ntreetest=> select t from dt1;\nt\n----------------------------\nSun Jan 24 18:28:50 1999 EST\n(1 row)\n\ntreetest=> select t::float8 from dt1;\n?column?\n----------------------------\nSun Jan 24 18:28:50 1999 EST\n(1 row)\n\n\nI was expecting to get either some numerical equivalent to the date\n(seconds since 1970 would do nicely, but I'll take the internal rep...)\nor an error message saying \"no such conversion available\". I was\ncertainly not expecting to find that the result was still a datetime,\nbut such it appears to be. This is a bug, wouldn't you say?\n\nWhat's even more curious is that coercing to int4 does produce\nsomething numeric:\n\ntreetest=> select t::int4 from dt1;\n int4\n---------\n-29464270\n(1 row)\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Jan 1999 18:36:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Odd behavior of type coercion for datetime" }, { "msg_contents": "> With 6.4 or current sources, I find that coercing a datetime to float8\n> is a no-op:\n> treetest=> select t::float8 from dt1;\n> ----------------------------\n> Sun Jan 24 18:28:50 1999 EST\n> (1 row)\n> I was expecting to get either some numerical equivalent to the date\n> or an error message saying \"no such conversion available\". I was\n> certainly not expecting to find that the result was still a datetime,\n> but such it appears to be. This is a bug, wouldn't you say?\n\nSure, now that you bring it up. 
You are running into the code associated\nwith the following comment (remember that type coersions are done as\nfunction calls):\n\n/*\n * See if this is a single argument function with the function\n * name also a type name and the input argument and type name\n * binary compatible...\n * This means that you are trying for a type conversion which does not\n * need to take place, so we'll just pass through the argument itself.\n * (make this clearer with some extra brackets - thomas 1998-12-05)\n */\n\nIf the operation stays internal, the result behaves correctly, but if\nthe coersion is going to an output routine turning it into a no-op is\nnot such a good idea. Actually, this code is hit only in the case that\nthe requested function/coersion does not exist at all, and is there as a\nlast-gasp effort to DTRT.\n\n> What's even more curious is that coercing to int4 does produce\n> something numeric\n\nbecause int4 and datetime are *not* binary compatible (and don't claim\nto be), you never hit this too-aggressive optimization.\n\nThere are probably a couple of problems here: equivalencing datetime and\nfloat8 might be too much of a cheat, and the \"drop the function call\"\noptimization breaks down if the output representation of the two types\nis different. Not sure if I can force the apparent type of the column\nfor purposes of output, but that would help.\n\nWe shouldn't allow this behavior to persist into v6.5, though I'm not\ncertain what the best solution is yet. I conveniently ignored the fact\nthat there are reserved values for datetime which give astoundingly\nnon-intuitive results if interpreted as float8; for example, \"now\" is a\nfloating point number near, but not at, zero, as is \"current\". So I\nshould probably remove the float8 == datetime equivalence as a start...\n\n - Tom\n", "msg_date": "Mon, 25 Jan 1999 02:33:03 +0000", "msg_from": "\"Thomas G. 
Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd behavior of type coercion for datetime" }, { "msg_contents": "Have we dealt with this?\n\n\n> With 6.4 or current sources, I find that coercing a datetime to float8\n> is a no-op:\n> \n> treetest=> create table dt1 (t datetime);\n> CREATE\n> treetest=> insert into dt1 values('now');\n> INSERT 159593 1\n> treetest=> select t from dt1;\n> t\n> ----------------------------\n> Sun Jan 24 18:28:50 1999 EST\n> (1 row)\n> \n> treetest=> select t::float8 from dt1;\n> ?column?\n> ----------------------------\n> Sun Jan 24 18:28:50 1999 EST\n> (1 row)\n> \n> \n> I was expecting to get either some numerical equivalent to the date\n> (seconds since 1970 would do nicely, but I'll take the internal rep...)\n> or an error message saying \"no such conversion available\". I was\n> certainly not expecting to find that the result was still a datetime,\n> but such it appears to be. This is a bug, wouldn't you say?\n> \n> What's even more curious is that coercing to int4 does produce\n> something numeric:\n> \n> treetest=> select t::int4 from dt1;\n> int4\n> ---------\n> -29464270\n> (1 row)\n> \n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 09:37:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Odd behavior of type coercion for datetime" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Have we dealt with this?\n>> With 6.4 or current sources, I find that coercing a datetime to float8\n>> is a no-op:\n\nWith current sources I get\n\nregression=> select t from dt1;\nt\n----------------------------\nMon Mar 15 09:56:01 1999 EST\n(1 row)\n\nregression=> select t::float8 from dt1;\nERROR: Bad float8 input format 'Mon Mar 15 09:56:01 1999 EST'\nregression=>\n\nwhich seems to be reasonable behavior. (I believe Tom made this\nhappen by removing binary equivalence between datetime and float8.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Mar 1999 09:58:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Odd behavior of type coercion for datetime " } ]
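[Editor's note] A footnote on the representation under discussion: in v6.x, datetime is internally a float8 counting seconds relative to 2000-01-01 00:00:00 UTC. That is why "now" in 1999 is "near, but not at, zero", and it is consistent with the int4 cast above: -29464270 is the (negative) number of seconds from Sun Jan 24 18:28:50 1999 EST to the 2000 epoch. A small sketch of converting between that and Unix seconds-since-1970, where the offset constant 946684800 (seconds from 1970-01-01 to 2000-01-01) is the only number assumed:

```c
#include <assert.h>

/* Seconds from the Unix epoch (1970-01-01) to 2000-01-01, both UTC. */
#define SECS_1970_TO_2000 946684800.0

/* Unix time -> v6.x datetime-style seconds-from-2000. */
static double unix_to_pgdatetime(double unix_secs)
{
    return unix_secs - SECS_1970_TO_2000;
}

/* v6.x datetime-style seconds-from-2000 -> Unix time. */
static double pgdatetime_to_unix(double pg_secs)
{
    return pg_secs + SECS_1970_TO_2000;
}
```

The reserved datetime values ("current", "invalid", etc.) noted in the thread have no meaning under this arithmetic, which is one more reason the float8 binary equivalence was dropped.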
[ { "msg_contents": "printtup() does a SearchSysCache call for each attribute of each tuple\nin order to find the appropriate output routine for the attribute's\ntype. (Up till yesterday it did *two* such calls per attribute, but\nI fixed that.) This is fairly expensive, amounting to about 10% of\nthe runtime in the SELECT-a-large-table test case I'm looking at.\nIt's probably even more than that in cases that don't stress\nheap_getattr as badly as this one does.\n\nIt occurs to me that there's no good reason to do this lookup more\nthan once per column --- all the tuples in a relation should have\nthe same set of column types, no? So if we could do these lookups\nonce at the start of an output pass, and cache the results for use\nin individual printtup calls, we could drive that 10% down to zero\nat essentially no penalty.\n\nThere are a couple different ways this could be handled. The way\nthat looks good to me at first glance is to extend the notion of\na \"tuple destination\" (as selected by DestToFunction in dest.c)\nto include not just a per-tuple processing routine but also setup and\ncleanup routines, and some storage accessible to all three routines.\nThe setup routine would be passed the TupleDesc info that is expected\nto apply to all tuples subsequently sent to that destination, and it can\ndo nothing or do setup work for use by the per-tuple routine. What\nwe'd actually have it do for the printtup destination type is create\nand fill in an array of per-column output function info. The cleanup\nroutine is for symmetry --- for this immediate issue all it would need\nto do is free the data created by the setup routine, but I can imagine\nnew kinds of destinations that need more setup/cleanup someday.\n\nThat gives us a place to precalculate the system cache search that\nfinds the type-specific output routine's OID. 
But as long as we are\nprecalculating stuff, it would also be worthwhile to precalculate the\ninfo that fmgr.c needs in order to invoke the routine. For builtin\nfunctions it seems to me that we ought to be able to reduce the\nper-tuple call effort to a straight jump through a function pointer,\nwhich would save almost another 10% of SELECT's runtime. Even for\nnon-builtins, finding out that it's not a builtin once per select\ninstead of once per tuple would be helpful.\n\nThis last idea could perhaps be combined with the revision of the\nfunction manager interface that some folks have been muttering about\nfor a while (ie, fix its deficiencies w.r.t. null parameter values).\n\nI think we're too close to 6.5 beta to start hacking on a function\nmanager refit, but maybe the tuple destination improvement could be\ndone in time for 6.5?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Jan 1999 20:06:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Another speedup idea (two, even)" }, { "msg_contents": "> printtup() does a SearchSysCache call for each attribute of each tuple\n> in order to find the appropriate output routine for the attribute's\n> type. (Up till yesterday it did *two* such calls per attribute, but\n> I fixed that.) This is fairly expensive, amounting to about 10% of\n> the runtime in the SELECT-a-large-table test case I'm looking at.\n> It's probably even more than that in cases that don't stress\n> heap_getattr as badly as this one does.\n\nInteresting. It seems anything that is applied to every row, or every\ncolumn of every row is a good candidate for optimization. That is what\nI have done in the past.\n\n> \n> It occurs to me that there's no good reason to do this lookup more\n> than once per column --- all the tuples in a relation should have\n> the same set of column types, no? 
So if we could do these lookups\n> once at the start of an output pass, and cache the results for use\n> in individual printtup calls, we could drive that 10% down to zero\n> at essentially no penalty.\n\nSee copy.c. It does much of what you suggest, I think.\n\n> \n> There are a couple different ways this could be handled. The way\n> that looks good to me at first glance is to extend the notion of\n> a \"tuple destination\" (as selected by DestToFunction in dest.c)\n> to include not just a per-tuple processing routine but also setup and\n> cleanup routines, and some storage accessible to all three routines.\n> The setup routine would be passed the TupleDesc info that is expected\n> to apply to all tuples subsequently sent to that destination, and it can\n> do nothing or do setup work for use by the per-tuple routine. What\n> we'd actually have it do for the printtup destination type is create\n> and fill in an array of per-column output function info. The cleanup\n> routine is for symmetry --- for this immediate issue all it would need\n> to do is free the data created by the setup routine, but I can imagine\n> new kinds of destinations that need more setup/cleanup someday.\n> \n> That gives us a place to precalculate the system cache search that\n> finds the type-specific output routine's OID. But as long as we are\n> precalculating stuff, it would also be worthwhile to precalculate the\n> info that fmgr.c needs in order to invoke the routine. For builtin\n> functions it seems to me that we ought to be able to reduce the\n> per-tuple call effort to a straight jump through a function pointer,\n> which would save almost another 10% of SELECT's runtime. Even for\n> non-builtins, finding out that it's not a builtin once per select\n> instead of once per tuple would be helpful.\n> \n> This last idea could perhaps be combined with the revision of the\n> function manager interface that some folks have been muttering about\n> for a while (ie, fix its deficiencies w.r.t. 
null parameter values).\n> \n> I think we're too close to 6.5 beta to start hacking on a function\n> manager refit, but maybe the tuple destination improvement could be\n> done in time for 6.5?\n\nIf you think you understand it, I would recommend going for it. You\ncan't put it in between 6.5 beta and 6.5, so why not do it now, though\nthe function manager fixup may be best left for post 6.5.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 24 Jan 1999 21:24:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another speedup idea (two, even)" }, { "msg_contents": "I wrote:\n>> It occurs to me that there's no good reason to do this lookup more\n>> than once per column --- all the tuples in a relation should have\n>> the same set of column types, no? So if we could do these lookups\n>> once at the start of an output pass, and cache the results for use\n>> in individual printtup calls, we could drive that 10% down to zero\n>> at essentially no penalty.\n>> [ snip ]\n>> ... as long as we are\n>> precalculating stuff, it would also be worthwhile to precalculate the\n>> info that fmgr.c needs in order to invoke the routine. For builtin\n>> functions it seems to me that we ought to be able to reduce the\n>> per-tuple call effort to a straight jump through a function pointer,\n>> which would save almost another 10% of SELECT's runtime.\n\nI have implemented and checked in both of these ideas, and gotten the\nexpected savings in runtime of large SELECTs.\n\nIt turns out that someone was way ahead of me concerning optimizing\ncalls through fmgr.c --- it already is possible to precalculate the\ntarget function address (fmgr_info) and then do a direct jump through\nthe function pointer (fmgr_faddr). 
But printtup.c was using the\ncombined-lookup-and-call routine fmgr() for each tuple, rather than\nprecalculating the function info and re-using it. This was probably\nbecause it didn't have any good place to cache the info --- but it\ndoes now.\n\nThere are a number of other places that look like they might profit from\nthe same kind of optimization --- in particular, GROUP BY and UNIQUE\n(SELECT DISTINCT) processing call fmgr() for each tuple. Also, index\nprocessing uses fmgr() rather than precalculated calls. I haven't done\nanything about this but perhaps someone else would like to.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jan 1999 16:44:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Another speedup idea (two, even) " }, { "msg_contents": "> I have implemented and checked in both of these ideas, and gotten the\n> expected savings in runtime of large SELECTs.\n> \n> It turns out that someone was way ahead of me concerning optimizing\n> calls through fmgr.c --- it already is possible to precalculate the\n> target function address (fmgr_info) and then do a direct jump through\n> the function pointer (fmgr_faddr). But printtup.c was using the\n> combined-lookup-and-call routine fmgr() for each tuple, rather than\n> precalculating the function info and re-using it. This was probably\n> because it didn't have any good place to cache the info --- but it\n> does now.\n> \n> There are a number of other places that look like they might profit from\n> the same kind of optimization --- in particular, GROUP BY and UNIQUE\n> (SELECT DISTINCT) processing call fmgr() for each tuple. Also, index\n> processing uses fmgr() rather than precalculated calls. 
I haven't done\n> anything about this but perhaps someone else would like to.\n> \n\nCertainly sounds like it would be a big win.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 27 Jan 1999 17:27:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another speedup idea (two, even)" }, { "msg_contents": "Tom, where are we on this? As I rememeber, you did this already, right?\n\n\n\n\n> printtup() does a SearchSysCache call for each attribute of each tuple\n> in order to find the appropriate output routine for the attribute's\n> type. (Up till yesterday it did *two* such calls per attribute, but\n> I fixed that.) This is fairly expensive, amounting to about 10% of\n> the runtime in the SELECT-a-large-table test case I'm looking at.\n> It's probably even more than that in cases that don't stress\n> heap_getattr as badly as this one does.\n> \n> It occurs to me that there's no good reason to do this lookup more\n> than once per column --- all the tuples in a relation should have\n> the same set of column types, no? So if we could do these lookups\n> once at the start of an output pass, and cache the results for use\n> in individual printtup calls, we could drive that 10% down to zero\n> at essentially no penalty.\n> \n> There are a couple different ways this could be handled. 
The way\n> that looks good to me at first glance is to extend the notion of\n> a \"tuple destination\" (as selected by DestToFunction in dest.c)\n> to include not just a per-tuple processing routine but also setup and\n> cleanup routines, and some storage accessible to all three routines.\n> The setup routine would be passed the TupleDesc info that is expected\n> to apply to all tuples subsequently sent to that destination, and it can\n> do nothing or do setup work for use by the per-tuple routine. What\n> we'd actually have it do for the printtup destination type is create\n> and fill in an array of per-column output function info. The cleanup\n> routine is for symmetry --- for this immediate issue all it would need\n> to do is free the data created by the setup routine, but I can imagine\n> new kinds of destinations that need more setup/cleanup someday.\n> \n> That gives us a place to precalculate the system cache search that\n> finds the type-specific output routine's OID. But as long as we are\n> precalculating stuff, it would also be worthwhile to precalculate the\n> info that fmgr.c needs in order to invoke the routine. For builtin\n> functions it seems to me that we ought to be able to reduce the\n> per-tuple call effort to a straight jump through a function pointer,\n> which would save almost another 10% of SELECT's runtime. Even for\n> non-builtins, finding out that it's not a builtin once per select\n> instead of once per tuple would be helpful.\n> \n> This last idea could perhaps be combined with the revision of the\n> function manager interface that some folks have been muttering about\n> for a while (ie, fix its deficiencies w.r.t. 
null parameter values).\n> \n> I think we're too close to 6.5 beta to start hacking on a function\n> manager refit, but maybe the tuple destination improvement could be\n> done in time for 6.5?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 09:37:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another speedup idea (two, even)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, where are we on this? As I rememeber, you did this already, right?\n\nYeah, it's done.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Mar 1999 10:01:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Another speedup idea (two, even) " }, { "msg_contents": "\nAdded to TODO:\n\n* use fmgr_info()/fmgr_faddr() instead of fmgr() calls in high-traffic\n places, like GROUP BY, UNIQUE, index processing, etc.\n\n\n\n> I wrote:\n> >> It occurs to me that there's no good reason to do this lookup more\n> >> than once per column --- all the tuples in a relation should have\n> >> the same set of column types, no? So if we could do these lookups\n> >> once at the start of an output pass, and cache the results for use\n> >> in individual printtup calls, we could drive that 10% down to zero\n> >> at essentially no penalty.\n> >> [ snip ]\n> >> ... as long as we are\n> >> precalculating stuff, it would also be worthwhile to precalculate the\n> >> info that fmgr.c needs in order to invoke the routine. 
For builtin\n> >> functions it seems to me that we ought to be able to reduce the\n> >> per-tuple call effort to a straight jump through a function pointer,\n> >> which would save almost another 10% of SELECT's runtime.\n> \n> I have implemented and checked in both of these ideas, and gotten the\n> expected savings in runtime of large SELECTs.\n> \n> It turns out that someone was way ahead of me concerning optimizing\n> calls through fmgr.c --- it already is possible to precalculate the\n> target function address (fmgr_info) and then do a direct jump through\n> the function pointer (fmgr_faddr). But printtup.c was using the\n> combined-lookup-and-call routine fmgr() for each tuple, rather than\n> precalculating the function info and re-using it. This was probably\n> because it didn't have any good place to cache the info --- but it\n> does now.\n> \n> There are a number of other places that look like they might profit from\n> the same kind of optimization --- in particular, GROUP BY and UNIQUE\n> (SELECT DISTINCT) processing call fmgr() for each tuple. Also, index\n> processing uses fmgr() rather than precalculated calls. I haven't done\n> anything about this but perhaps someone else would like to.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Jul 1999 20:52:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another speedup idea (two, even)" } ]
[ { "msg_contents": "\nWell, I thought I'd test to see how postgres handles tables larger than\n2Gb, mainly because some data sets I have will end up that big, and I know\nof atleast one project out there (www.tass-survey.org) that's using\npostgres and will hit this limit.\n\nNow, I know that (in theory), when a table reaches the magical 2Gb file\nlimit (imposed on most Unixes as the max file size), it should start a\nfresh file, and use it after that point.\n\nSo, I created a table:\n\ncreate table smallcat (gsc char(18),ra float4,dec float4,mag float4);\n\nThen I wrote a short bash script that repeatedly ran psql, and copy from a\nflat file containing 26653 rows.\n\nThe first 1000 loops ran in 3h 59m. Now this is an improvement over 6.4.2,\nwhich when I ran this same test, I killed it after 10 hours.\n\nNow this table now contains 26,653,000 rows, but is still under the 2Gb\nlimit, so I ran the script to insert another 100 copies of the file.\n\nWhile the 29th block was being inserted, the table reached the 2Gb limit.\nIn the database directory, a new file appeared smallcat.1, then the error:\n\nERROR: cannot read block 262143 of smallcat\n\nThe smallcat.1 file is of zero length, and the backend then dies.\n\nIt looks to me like the code that splits the file does the split, but the\nbackend still tries to append to the original file.\n\nIf I get chance, I may have a peek at the source, but I'm catching up on\nthe JDBC driver at the moment (amongst other things), so may not get the\nchance.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Mon, 25 Jan 1999 10:22:56 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with 6.5 and tables >2Gb" } ]
[ { "msg_contents": "I am sending this mail on hackers list because the bug list seems to be\ndissapeared !\n\nSomething strange start happening on my PostgreSQL server :\n\nLinux RedHat 5.2 i386 Pentium machine 64 Mb RAM\nPostgreSQL 6.4.2 official release\n\nthere are a number of maximum 6 users working simultaneously but not so\nhard on the database that isn't so big (2 Mb dumped).\nthe clients are Tcl/Tk programs. 3 clients are accesing server from a\nlocal network, 3 or 4 clients are accesing server through a serial 115\nkb line through a CISCO.\n\nTill now, everything went ok, but sometimes, in the last few days, I\nfound some postgres (<zombie>) processes and when every client is\nlogging out, another postgres <zombie> process appears. I had to kill\n-SIGTERM the master, wait for 5 or 6 seconds and then restart it again.\n\nWhen 1 postgres <zombie> process is appearing, the current working\nclients can work ahead, no problem at all. But newer connections aren't\naccepted.\n\n=======\nI am not sure, but I think that the serial line is broked sometimes and\nthe client-server communication has small interrupts.\nCould it be possible that these problems hang up postgresql so bad ?\n\nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Mon, 25 Jan 1999 13:02:10 +0200", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "postgres (zombie)" }, { "msg_contents": "Constantin Teodorescu <[email protected]> writes:\n> Till now, everything went ok, but sometimes, in the last few days, I\n> found some postgres (<zombie>) processes and when every client is\n> logging out, another postgres <zombie> process appears. I had to kill\n> -SIGTERM the master, wait for 5 or 6 seconds and then restart it again.\n> When 1 postgres <zombie> process is appearing, the current working\n> clients can work ahead, no problem at all. 
But newer connections aren't\n> accepted.\n\nThis sounds like the postmaster process has gotten hung up somehow ---\nit's not responding to incoming connection requests, nor is it noticing\nSIGCHLD (signal that one of its child processes exited --- the zombies\nare there because the postmaster hasn't done a wait() to reap them).\n\nI've never seen this myself, but it sure sounds like a bug.\n\nNext time you see the condition, would you kill the postmaster with a\nsignal that will produce a coredump (SIGABRT or SIGSEGV should work)\nand extract a backtrace from the core file? That will give us more\nto go on. Note it will help if you've compiled the backend with -g ...\nand don't throw away the corefile, we may need to ask more questions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Jan 1999 10:37:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres (zombie) " } ]
[ { "msg_contents": "\nRe: memory allocation.\n\nApache uses a set of memory \"pools\" that are cleared at\ndifferent times. I.E. there is one pool that is freed\nat the end of each page request. So code can just allocate\nfrom that pool and not worry about it getting freed.\n\nPerhaps a pool for each transaction could be used.\n\nI'm not 100% sure I like this idea -- I kind of think that\na hallmark of a good programmer is taking care of his\nmallocs and frees, but it is probably faster.\n\nOh, and I'm not too keen on alloca() either.\n\n-- cary\n\n", "msg_date": "Mon, 25 Jan 1999 10:05:00 -0500 (EST)", "msg_from": "\"Cary O'Brien\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Postgres Speed or lack thereof" } ]
[ { "msg_contents": "I've got the bug working on Sparc Linux also. Don't understand why\nyou can't recreate it.\n\nD.\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]] \nSent: Monday, January 25, 1999 11:21 AM\nTo: [email protected]\nCc: [email protected]; [email protected]\nSubject: Re: [CORE] create database bug\n\n\n> All,\n> \tIf you create a database with \"_\" in the name some strange\n> things\n> occur. For example:\n> \n> 1)\tcreate database cfg_smb;\n> \t\tok\n> 2)\tdrop database cfg_smb;\n> \t\tok\n> 3)\tcreate database cfg_smb;\n> \t\terror: database already exists.\n> 4)\tdrop database cfg_smb;\n> \t\terror: database does not exist.\n> \t(Note: the database directory still exists, but no files are\n> within it.)\n> \n> But on the other hand:\n> \n> 1)\tcreate database cfgsmb;\n> \t\tok\n> 2)\tdrop database cfgsmb;\n> \t\tok\n> 3)\tcreate database cfgsmb;\n> \t\tok\n> 4)\tdrop database cfgsmb;\n> \t\tok\n> \n> \tEverything is fine.\n> \n> I don't know where the code is that handles the dropping of databases,\n> but I would\n> think this would be easy to fix. \n> \n> Versions this was tried on:\n> PostgreSQL v6.4, PostgreSQL v6.4.1, PostgreSQL v6.4.2\n> Red Hat Linux v5.2, Intel Pentium II 300 MHz\n\nCan't recreate the problem here on bsdi and current development sources.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Mon, 25 Jan 1999 12:06:37 -0500", "msg_from": "Dan Gowin <[email protected]>", "msg_from_op": true, "msg_subject": "FW: [CORE] create database bug" }, { "msg_contents": ">\n> I've got the bug working on Sparc Linux also. 
Don't understand why\n> you can't recreate it.\n>\n> D.\n>\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Monday, January 25, 1999 11:21 AM\n> To: [email protected]\n> Cc: [email protected]; [email protected]\n> Subject: Re: [CORE] create database bug\n>\n>\n> > All,\n> > If you create a database with \"_\" in the name some strange\n> > things\n> > occur. For example:\n> >\n> > 1) create database cfg_smb;\n> > ok\n> > 2) drop database cfg_smb;\n> > ok\n> > 3) create database cfg_smb;\n> > error: database already exists.\n> > 4) drop database cfg_smb;\n> > error: database does not exist.\n> > (Note: the database directory still exists, but no files are\n> > within it.)\n> >\n> > But on the other hand:\n> >\n> > 1) create database cfgsmb;\n> > ok\n> > 2) drop database cfgsmb;\n> > ok\n> > 3) create database cfgsmb;\n> > ok\n> > 4) drop database cfgsmb;\n> > ok\n> >\n> > Everything is fine.\n> >\n> > I don't know where the code is that handles the dropping of databases,\n> > but I would\n> > think this would be easy to fix.\n> >\n> > Versions this was tried on:\n> > PostgreSQL v6.4, PostgreSQL v6.4.1, PostgreSQL v6.4.2\n> > Red Hat Linux v5.2, Intel Pentium II 300 MHz\n>\n> Can't recreate the problem here on bsdi and current development sources.\n\n I can't too here on i486-pc-linux-gnu and current development\n tree.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 25 Jan 1999 19:33:07 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FW: [CORE] create database bug" } ]
[ { "msg_contents": "I'll recheck everything tonight and recompile with the latest\nsource on a fresh Red Hat machine.\n\nD.\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]\nSent: Monday, January 25, 1999 1:33 PM\nTo: [email protected]\nCc: [email protected]; [email protected]\nSubject: Re: [HACKERS] FW: [CORE] create database bug\n\n\n>\n> I've got the bug working on Sparc Linux also. Don't understand why\n> you can't recreate it.\n>\n> D.\n>\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Monday, January 25, 1999 11:21 AM\n> To: [email protected]\n> Cc: [email protected]; [email protected]\n> Subject: Re: [CORE] create database bug\n>\n>\n> > All,\n> > If you create a database with \"_\" in the name some strange\n> > things\n> > occur. For example:\n> >\n> > 1) create database cfg_smb;\n> > ok\n> > 2) drop database cfg_smb;\n> > ok\n> > 3) create database cfg_smb;\n> > error: database already exists.\n> > 4) drop database cfg_smb;\n> > error: database does not exist.\n> > (Note: the database directory still exists, but no files are\n> > within it.)\n> >\n> > But on the other hand:\n> >\n> > 1) create database cfgsmb;\n> > ok\n> > 2) drop database cfgsmb;\n> > ok\n> > 3) create database cfgsmb;\n> > ok\n> > 4) drop database cfgsmb;\n> > ok\n> >\n> > Everything is fine.\n> >\n> > I don't know where the code is that handles the dropping of\ndatabases,\n> > but I would\n> > think this would be easy to fix.\n> >\n> > Versions this was tried on:\n> > PostgreSQL v6.4, PostgreSQL v6.4.1, PostgreSQL v6.4.2\n> > Red Hat Linux v5.2, Intel Pentium II 300 MHz\n>\n> Can't recreate the problem here on bsdi and current development\nsources.\n\n I can't too here on i486-pc-linux-gnu and current development\n tree.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. 
#\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 25 Jan 1999 13:53:38 -0500", "msg_from": "Dan Gowin <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] FW: [CORE] create database bug" }, { "msg_contents": "Dan Gowin <[email protected]> writes:\n>>>> If you create a database with \"_\" in the name some strange\n>>>> things occur.\n\nI don't see this bug either.\n\nAre you perhaps running with non-English locale settings?\nThat might make a difference...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Jan 1999 18:44:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FW: [CORE] create database bug " } ]
[ { "msg_contents": "\nI've been looking at fixing the timestamp problem on the alpha.\nI can get it working but I'm wondering if there is any particular\nreason that postgres is currently being limited to 2038 as being\nthe top year.\n#define UTIME_MAXYEAR (2038)\n\nAs you probably know....time_t (a long) on the alpha is a 64 bit\nvalue, so it can cope with dates waaaaaay into the future.\n\nThe problem currently breaking the timestamps on the alpha boils\ndown to \n\n#define AbsoluteTimeIsReal(time) \\\n\t((bool) (((AbsoluteTime) time) < NOEND_ABSTIME && \\\n\t((AbsoluteTime) time) > NOSTART_ABSTIME))\n\nreturning false because \"time\" after being appropriately cast around\nis always returning a value < NOSTART_ABSTIME which\nwas defined as \n#define NOSTART_ABSTIME ((AbsoluteTime) 0x80000001)\n\nI changed AbsoluteTime to be a time_t instead of an int32...\nwhich I'm wondering whether this is a good idea now.\n\nThe long and short of it...\nI can get it working..changing a number of stuff to time_ts from\nint32...this has no effect on any 32bit machines as they are the\nsame bitsize.\n\nI can get it working (well..i think it was working) so that epoch=0\ninfinity=infinity 'now' is the time of the transaction.\nor\nHave a nightmare of a time trying to work out how to extend the available\ntime-range. 
I have tried that...and it all seems to be working bar I\ncannot get >2038 due to other code in the proggy.\n\nUmm...comments?\n\nTa.\n-- \nAdrian Gartland - Server Development Manager\nOregan Networks UK Ltd Tel: +44 (0) 1530 56 33 11\nHuntingdon Court, Ashby de la Zouch Fax: +44 (0) 1530 56 33 22\nLeicestershire, LE65 1AH, United Kingdom WWW: http://www.oregan.net/\n\n", "msg_date": "Mon, 25 Jan 1999 19:48:23 +0000 (GMT)", "msg_from": "Adrian Gartland <[email protected]>", "msg_from_op": true, "msg_subject": "handling 64bit time_t's" }, { "msg_contents": "> I've been looking at fixing the the timestamp problem on the alpha.\n> I can get it working but I'm wondering if there is any particular\n> reason that postgres is currently being limited to 2038 as being\n> the top year.\n> define UTIME_MAXYEAR (2038)\n> As you probably know....time_t (a long) on the alpha is a 64 bit\n> value, so it can cope with dates waaaaaay into the future.\n> The problem currently breaking the timestamps on the alpha boils\n> down to\n> #define AbsoluteTimeIsReal(time) \\\n> ((bool) (((AbsoluteTime) time) < NOEND_ABSTIME && \\\n> ((AbsoluteTime) time) > NOSTART_ABSTIME))\n> \n> returning false cause \"time\" after being propriately cast around\n> is always returning a value < NOSTART_ABSTIME which\n> was defined as\n> #define NOSTART_ABSTIME ((AbsoluteTime) 0x80000001)\n> I changed AbosoluteTime to be a time_t instead of a int32...\n> which I'm wondering whether this is a good idea now.\n\nI'm a bit slow. If the Alpha's time is read back as signed 8 bytes, but\nis then coerced to AbsoluteTime as signed 4 bytes, then why would this\ncomparison fail? 
Though if things aren't 4 bytes at this stage and the\n0x80000001 is a large positive integer in 2038 then the comparison fails\nas you say.\n\n> The long an short of it...\n> I can get it working..changing a number of stuff to time_ts from\n> int32...this has no effect on any 32bit machines as they are the\n> same bitsize.\n> I can get it working (well..i think it was working) so that epoch=0\n> infinity=infinty 'now' is the time of the transaction.\n> or\n> Have a nightmare of a time trying to workout how to extend the \n> available time-range. I have tried that...and it all seems to be \n> working bar I cannot get >2038 due to other code in the proggy.\n\nThe problem is that abstime is stored in all of the database tables in\nsome system fields, specifically as 4 bytes (the length is defined in\npg_type). I'm not sure how to make something this fundamental have a\nplatform-specific length.\n\nMy solution (at least for now) would be to make sure it becomes a 4 byte\nquantity just after the call to time() and before calls to localtime().\nLook for all instances in src/backend/utils/adt/*.c. Not sure at the\nmoment what else might need looking at.\n\n - Tom\n", "msg_date": "Tue, 26 Jan 1999 03:09:43 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] handling 64bit time_t's" } ]
[ { "msg_contents": "Hi all\n\nIn looking for how to do table constraints psql help says:\n\nsoftware=> \\h create table\nCommand: create table\nDescription: create a new table\nSyntax:\n CREATE TABLE class_name\n (attr1 type1 [DEFAULT expression] [NOT NULL], ...attrN)\n [INHERITS (class_name1, ...class_nameN)\n [[CONSTRAINT name] CHECK condition1, ...conditionN] ]\n;\n\n\nBut this both does not work, and does not agree with \"The Practical SQL\nHandbook\", the examples of which do work.\n\nShould the syntax not be more like: (constraint inside the main parens)\n\nCommand: create table\nDescription: create a new table\nSyntax:\n CREATE TABLE class_name\n (attr1 type1 [DEFAULT expression] [NOT NULL][, ...attrN]\n [,[CONSTRAINT name] CHECK condition1, ...conditionN] ]);\n\nI'm not sure where to put:\n [INHERITS (class_name1, ...class_nameN)\nas I've never used it. But I suspect it may need to be inside the '()' as well,\nno?\n\nOH, also, what is / is there, a comment character to use in SQL scripts\nfed into psql?\n\nHave a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner I'm excited about life! How about YOU!?\n\nProudly powered by R H Linux 4.2, Apache 1.3.x, PHP 3.x, PostgreSQL 6.x\n-----------------------------------------------------------------------\nOnly if you know where you're going can you get there.\n\n", "msg_date": "Mon, 25 Jan 1999 15:19:06 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "help bug and comment char." }, { "msg_contents": "> In looking for how to do table constraints psql help says:\n> software=> \\h create table\n> Command: create table\n> Description: create a new table\n> Syntax:\n> CREATE TABLE class_name\n> (attr1 type1 [DEFAULT expression] [NOT NULL], ...attrN)\n> [INHERITS (class_name1, ...class_nameN)\n> [[CONSTRAINT name] CHECK condition1, ...conditionN]]\n\nThis syntax help is out of date. 
The syntax for v6.4 (and perhaps\nv6.3.2) became compatible with SQL92, except of course for the INHERITS\nclause. That still must appear outside of the column-definition parens.\n\n - Tom\n", "msg_date": "Tue, 26 Jan 1999 03:19:25 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] help bug and comment char." }, { "msg_contents": "> Hi all\n> \n> In looking for how to do table constraints psql help says:\n> \n> software=> \\h create table\n> Command: create table\n> Description: create a new table\n> Syntax:\n> CREATE TABLE class_name\n> (attr1 type1 [DEFAULT expression] [NOT NULL], ...attrN)\n> [INHERITS (class_name1, ...class_nameN)\n> [[CONSTRAINT name] CHECK condition1, ...conditionN] ]\n> ;\n> \n> \n> But this both does not work, and does not agree with \"The Practical SQL\n> Handbook\", the examples of which do work.\n> \n> Should the syntax not be more like: (constraint inside the main parens)\n> \n> Command: create table\n> Description: create a new table\n> Syntax:\n> CREATE TABLE class_name\n> (attr1 type1 [DEFAULT expression] [NOT NULL][, ...attrN]\n> [,[CONSTRAINT name] CHECK condition1, ...conditionN] ]);\n\nFixed.\n\n\n> \n> I'm not sure where to put:\n> [INHERITS (class_name1, ...class_nameN)\n> as I've never used it. But I suspect it may need inside the '()' as well,\n> no?\n> \n> OH, also, what is / is there, a comment character to use in SQL scripts\n> feed into psql?\n\n-- is the comment character. Man sql says:\n\n---------------------------------------------------------------------------\n\nComments\n A comment is an arbitrary sequence of characters following\n double dashes up to the end of the line. 
We also support\n double-slashes as comments, e.g.:\n -- This is a standard SQL comment\n // And this is another supported comment style, like C++\n\n We also support C-style comments, e.g.:\n /* multi\n line\n comment */\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Feb 1999 13:40:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] help bug and comment char." }, { "msg_contents": "> > In looking for how to do table constraints psql help says:\n> > software=> \\h create table\n> > Command: create table\n> > Description: create a new table\n> > Syntax:\n> > CREATE TABLE class_name\n> > (attr1 type1 [DEFAULT expression] [NOT NULL], ...attrN)\n> > [INHERITS (class_name1, ...class_nameN)\n> > [[CONSTRAINT name] CHECK condition1, ...conditionN]]\n> \n> This syntax help is out of date. The syntax for v6.4 (and perhaps\n> v6.3.2) became compatible with SQL92, except of course for the INHERITS\n> clause. That still must appear outside of the column-definition parens.\n\nFixed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Feb 1999 13:42:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] help bug and comment char." 
}, { "msg_contents": "Hi PostgreSQL hackers\n\nAs we are again approaching the beta (feature freeze), \nI will ask my ordinary question ;)\n\nIs the patch by Jan that eliminated the duplicate sort node in case it\nwas redundant included in 6.5 ?\n\n---------------\nHannu\n", "msg_date": "Tue, 02 Feb 1999 21:22:13 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "6.5 beta and ORDER BY patch" }, { "msg_contents": ">\n> Hi PostgreSQL hackers\n>\n> As we are again approaching the beta (feature freeze),\n> I will ask my ordinary question ;)\n>\n> Is the patch by Jan that eliminated the duplicate sort node in case it\n> was redundant included in 6.5 ?\n\n Sorry,\n\n I missed to put it into after v6.4 release. And since it\n wasn't there during v6.5 development, I would not put it in\n now.\n\n Note that it wasn't in the v6.4 feature patches either, so it\n isn't tested enough to get released.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 2 Feb 1999 23:58:44 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": "> >\n> > Hi PostgreSQL hackers\n> >\n> > As we are again approaching the beta (feature freeze),\n> > I will ask my ordinary question ;)\n> >\n> > Is the patch by Jan that eliminated the duplicate sort node in case it\n> > was redundant included in 6.5 ?\n> \n> Sorry,\n> \n> I missed to put it into after v6.4 release. And since it\n> wasn't there during v6.5 development, I would not put it in\n> now.\n> \n> Note that it wasn't in the v6.4 feature patches either, so it\n> isn't tested enough to get released.\n\nWe haven't started beta yet. 
Anything on LIMIT?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Feb 1999 18:07:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": ">\n> > >\n> > > Hi PostgreSQL hackers\n> > >\n> > > As we are again approaching the beta (feature freeze),\n> > > I will ask my ordinary question ;)\n> > >\n> > > Is the patch by Jan that eliminated the duplicate sort node in case it\n> > > was redundant included in 6.5 ?\n> >\n> > Sorry,\n> >\n> > I missed to put it into after v6.4 release. And since it\n> > wasn't there during v6.5 development, I would not put it in\n> > now.\n> >\n> > Note that it wasn't in the v6.4 feature patches either, so it\n> > isn't tested enough to get released.\n>\n> We haven't started beta yet. Anything on LIMIT?\n\n LIMIT is in there and was during entire v6.5 development.\n But ORDER BY suppressing sort using index wasn't.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 3 Feb 1999 00:16:12 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": "> > We haven't started beta yet. Anything on LIMIT?\n> \n> LIMIT is in there and was during entire v6.5 development.\n> But ORDER BY suppressing sort using index wasn't.\n> \n\nGreat.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Feb 1999 18:16:24 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": "Jan Wieck wrote:\n> \n> >\n> > Hi PostgreSQL hackers\n> >\n> > As we are again approaching the beta (feature freeze),\n> > I will ask my ordinary question ;)\n> >\n> > Is the patch by Jan that eliminated the duplicate sort node in case it\n> > was redundant included in 6.5 ?\n> \n> Sorry,\n> \n> I missed to put it into after v6.4 release. And since it\n> wasn't there during v6.5 development, I would not put it in\n> now.\n> \n> Note that it wasn't in the v6.4 feature patches either, so it\n> isn't tested enough to get released.\n\nBut if it is not released it will _never_ be tested enough ...\n\nAs we are just going into beta, not release, I would suggest to put \nit in now, and back out if it really breaks anything. \n\nI have been using it with 6.4 almost since the release and have \nseen no problems - in fact it solved a big problem and provided about \n1000X speedup for certain queries (a fraction of a second instead of \n6 minutes) , not to mention avoiding backend crashes due to disk space \nexhaustion.\n\nAnd it did not break anything in regression tests either, the only \nargument then was that there is nothing in regression tests that \ncould possibly be broken by it ;)\n\nI greatly prefer it over my previous method of doing the same on the \nclient side (issuing an EXPLAIN, parsing it to see if it is SORT on \nINDEX SCAN, and omitting the ORDER BY if it is)\n\nAlso, not having it greatly diminishes the value of LIMIT.\n\nI agree that it is a hack and only a partial solution and that in an \nideal world the optimiser would also know about sort nodes. 
\n\nBut it is a very useful hack, and for some (like me) it is \nmuch bigger improvement than some 10% due to better memory \nallocation (which is of course great too).\n\n\n----------------\nHannu\n", "msg_date": "Wed, 03 Feb 1999 12:17:10 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": "Hannu Krosing wrote:\n> \n> Jan Wieck wrote:\n> > >\n> > > Is the patch by Jan that eliminated the duplicate sort node in case it\n> > > was redundant included in 6.5 ?\n> >\n> > Sorry,\n> >\n> > I missed to put it into after v6.4 release. And since it\n> > wasn't there during v6.5 development, I would not put it in\n\n...\n\n> But if it is not relesed it will _never_ be tested enough ...\n> \n> As we are just going into beta, not relese, I would suggest to put\n> it in now, and back out if it relly breaks anything.\n\nI will download the latest snapshot tonight and test the patch there.\n\nDoes anyone know if something introduced in 6.5 can break by omitting \nthe top sort node ? \n\nPerhaps any of the following:\n\n * MVCC \n\n * temp tables \n\n * Some exotic use of rules \n\n * SELECT FOR UPDATE\n\nI myself can't see how it could break, as the only thing the patch does\nis omitting a top sort node if the query is already in the right \norder. So it should be equivalent of just not including the ORDER BY\nin the SELECT in the first place.\n\nJan - I often feel the same about some of my code that are part of some\nlarger complex project (ie. 
if it aint broke, don't fix it), but this\ntime\nI think the patch is quite safe, and very very useful for at least two \noccasions: getting the start of some table out to users web and for\nprocessing\nhuge tables in predictable/repeatable order.\n\nI somewhat understand your hesitation, because I can't either think of\nany test\nin regression that could be broken by the patch, but instead of making\nme \nuneasy it makes me happy ;)\n\n-----------------\nHannu\n", "msg_date": "Wed, 03 Feb 1999 12:38:53 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": "Hannu Krosing wrote:\n\n>\n> Jan Wieck wrote:\n> >\n> > >\n> > > Hi PostgreSQL hackers\n> > >\n> > > As we are again approaching the beta (feature freeze),\n> > > I will ask my ordinary question ;)\n> > >\n> > > Is the patch by Jan that eliminated the duplicate sort node in case it\n> > > was redundant included in 6.5 ?\n> >\n> > Sorry,\n> >\n> > I missed to put it into after v6.4 release. 
And since it\n> > wasn't there during v6.5 development, I would not put it in\n> > now.\n> >\n> > Note that it wasn't in the v6.4 feature patches either, so it\n> > isn't tested enough to get released.\n>\n> But if it is not relesed it will _never_ be tested enough ...\n>\n> As we are just going into beta, not relese, I would suggest to put\n> it in now, and back out if it relly breaks anything.\n>\n> I have been using it with 6.4 almost since the relese an have\n> seen no problems - in fact it solved a big problem and provided about\n> 1000X speedup for certain queries (a fraction of second instead of\n> 6 minutes) , not to mention avoiding backend crashes due to disk space\n> exhaustion.\n>\n> And it did not break anything in regression tests either, the only\n> argument then was that there is nothing in regression tests that\n> could possibly be broken by it ;)\n>\n> I greatly prefer it over my previous method of doing the same on the\n> client side (issuing an EXPLAIN, parsing it to see if it is SORT on\n> INDEX SCAN, and omitting the ORDER BY if it is)\n>\n> Also, not having it greatly diminishes the value of LIMIT.\n\n Ok ok ok - OK. You got me, I'll go ahead and put it in.\n\n>\n> I agree that it is a hack and only a partial solution and that in\n> ideal world the optimiser would also know about sort nodes.\n\n First the executor must know better how to handle LIMIT's\n OFFSET. For now it processes the query until OFFSET is\n reached, simply suppressing the in fact produced result\n tuples in the output. Then it stops sending if the LIMIT count\n is reached. For joins or other complex things, it has no\n chance to do something different. 
But for an indexed single\n table scan, where ALL the qualifications are done on the\n index, it should handle the OFFSET by skipping index tuples\n only.\n\n Second the optimizer must take LIMIT into account and\n depending on the known number of tuples, LIMIT and OFFSET\n produce an index scan even if the query isn't qualified at\n all but has an ORDER BY clause matched by the index.\n\n These two features would finally solve your huge table\n problems.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 3 Feb 1999 12:07:14 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": "On Wed, 3 Feb 1999, Jan Wieck wrote:\n\n> >\n> > > >\n> > > > Hi PostgreSQL hackers\n> > > >\n> > > > As we are again approaching the beta (feature freeze),\n> > > > I will ask my ordinary question ;)\n> > > >\n> > > > Is the patch by Jan that eliminated the duplicate sort node in case it\n> > > > was redundant included in 6.5 ?\n> > >\n> > > Sorry,\n> > >\n> > > I missed to put it into after v6.4 release. And since it\n> > > wasn't there during v6.5 development, I would not put it in\n> > > now.\n> > >\n> > > Note that it wasn't in the v6.4 feature patches either, so it\n> > > isn't tested enough to get released.\n> >\n> > We haven't started beta yet. Anything on LIMIT?\n> \n> LIMIT is in there and was during entire v6.5 development.\n> But ORDER BY suppressing sort using index wasn't.\n\nSince we haven't started BETA yet, why not throw it in? Once beta, we still\nhave another month of testing before release, so lots of time...\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 3 Feb 1999 11:38:19 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": "Jan Wieck wrote:\n> \n> \n> Ok ok ok - OK. You got me, I'll go ahead and put it in.\n\nThanks ;)\n \n> > I agree that it is a hack and only a partial solution and that in\n> > ideal world the optimiser would also know about sort nodes.\n> \n> First the executor must know better how to handle LIMIT's\n> OFFSET. For now it processes the query until OFFSET is\n> reached, simply suppressing the in fact produced result\n> tuples in the output. The it stops sending if the LIMIT count\n> is reached. For joins or other complex things, it has no\n> chance to do something different. But for an indexed single\n> table scan, where ALL the qualifications are done on the\n> index, it should handle the OFFSET by skipping index tuples\n> only.\n\nAnd we must also tie this kind of scan to triggers (my guess is that \ncurrently the triggers are fired by accessing the data in the actual\nrelation).\n\nIt probably does not affect rules as much, though it would be cool to \ndefine rules for index scans or sort nodes.\n\n> Second the optimizer must take LIMIT into account and\n> depending on the known number of tuples, LIMIT and OFFSET\n> produce an index scan even if the query isn't qualified at\n> all but has an ORDER BY clause matched by the index.\n> \n> These two features would finally solve your huge table\n> problems.\n\nYes, it seems so.\n\nNext thing to attack then would be aggregates, so that they too can \nbenefit from indexes, I can immediately think of MIN, MAX and COUNT\non simple scans. 
But as the aggregates are user-defined, we probably \nneed a flag that tells the optimiser if said aggregate can in fact \nuse indexes (and what type of index)\n\nMaybe we can even cache some data (for example tuple count) in \nbackend, so that COUNT(*) can be made real fast ?\n\nAfter that the reverse index scans, so that indexes that are \nbackwards can also be used for sorting.\nBTW, can this be easily implemented/effective in PostgreSQL or are\nour btree indexes optimised for forward scans ?\n\nAlso, how do indexes interact with TRX manager (are there any docs\non it).\n\n---------------------\nHannu\n", "msg_date": "Wed, 03 Feb 1999 20:42:43 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": "> Next thing to attack then would be aggregates, so that they too can \n> benefit from indexes, I can immediately think of MIN, MAX and COUNT\n> on simple scans. But as the aggregates are user-defined, we probably \n> need a flag that tells the optimiser if said aggregate can in fact \n> use indexes (and what type of index)\n> \n> Maybe we can even cache some data (for example tuple count) in \n> backend, so that COUNT(*) can be made real fast ?\n> \n> After that the reverse index scans, so that the index that are \n> backwards can also be used for sorting.\n> BTW, can this be easily implemented/effective in PostgreSQL or are\n> our btree indexes optimised for forward scans ?\n\nJan, I have kept the postings on optimizing LIMIT for joins. Let me\nknow if/when you want to see them.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 3 Feb 1999 13:46:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": "> Jan, I have kept the postings on optimizing LIMIT for joins. Let me\n> know if/when you want to see them.\n\n Are they patches ready to go in or just suggestions how to\n do?\n\n ORDER BY patch is now in CURRENT.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 3 Feb 1999 20:37:17 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": "Hello all,\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Hannu Krosing\n> Sent: Thursday, February 04, 1999 3:43 AM\n> To: Jan Wieck\n> Cc: [email protected]\n> Subject: Re: [HACKERS] 6.5 beta and ORDER BY patch\n>\n\n[snip]\n \n> \n> After that the reverse index scans, so that the index that are \n> backwards can also be used for sorting.\n> BTW, can this be easily implemented/effective in PostgreSQL or are\n> our btree indexes optimised for forward scans ?\n>\n\nPostgreSQL seems to have the ability to scan Index backward \nbecause we can execute \"fetch backward\" command. \nIMHO _bt_first() fucntion used to find first item in a scan should \nbe changed to work well in case of backward positioning.\n\nI think this change also gives the partial solution for the problem \n[ [HACKERS] Cursor Movement - Past the End ] reported by \nDavid Hartwig. 
\n\nI have a sample code for this change.\nI can send it if someone want to check or test it.\n \nThanks.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 4 Feb 1999 12:48:24 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": "\nJan, is this implemented in 6.5 beta?\n\n> > > > As we are again approaching the beta (feature freeze),\n> > > > I will ask my ordinary question ;)\n> > > >\n> > > > Is the patch by Jan that eliminated the duplicate sort node in case it\n> > > > was redundant included in 6.5 ?\n> > >\n> > > Sorry,\n> > >\n> > > I missed to put it into after v6.4 release. And since it\n> > > wasn't there during v6.5 development, I would not put it in\n> > > now.\n> > >\n> > > Note that it wasn't in the v6.4 feature patches either, so it\n> > > isn't tested enough to get released.\n> >\n> > But if it is not relesed it will _never_ be tested enough ...\n> >\n> > As we are just going into beta, not relese, I would suggest to put\n> > it in now, and back out if it relly breaks anything.\n> >\n> > I have been using it with 6.4 almost since the relese an have\n> > seen no problems - in fact it solved a big problem and provided about\n> > 1000X speedup for certain queries (a fraction of second instead of\n> > 6 minutes) , not to mention avoiding backend crashes due to disk space\n> > exhaustion.\n> >\n> > And it did not break anything in regression tests either, the only\n> > argument then was that there is nothing in regression tests that\n> > could possibly be broken by it ;)\n> >\n> > I greatly prefer it over my previous method of doing the same on the\n> > client side (issuing an EXPLAIN, parsing it to see if it is SORT on\n> > INDEX SCAN, and omitting the ORDER BY if it is)\n> >\n> > Also, not having it greatly diminishes the value of LIMIT.\n> \n> Ok ok ok - OK. 
You got me, I'll go ahead and put it in.\n> \n> >\n> > I agree that it is a hack and only a partial solution and that in\n> > ideal world the optimiser would also know about sort nodes.\n> \n> First the executor must know better how to handle LIMIT's\n> OFFSET. For now it processes the query until OFFSET is\n> reached, simply suppressing the in fact produced result\n> tuples in the output. The it stops sending if the LIMIT count\n> is reached. For joins or other complex things, it has no\n> chance to do something different. But for an indexed single\n> table scan, where ALL the qualifications are done on the\n> index, it should handle the OFFSET by skipping index tuples\n> only.\n> \n> Second the optimizer must take LIMIT into account and\n> depending on the known number of tuples, LIMIT and OFFSET\n> produce an index scan even if the query isn't qualified at\n> all but has an ORDER BY clause matched by the index.\n> \n> These two features would finally solve your huge table\n> problems.\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 May 1999 07:54:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": "It is my assumption this has been applied to 6.5 beta, right?\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hello all,\n> \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Hannu Krosing\n> > Sent: Thursday, February 04, 1999 3:43 AM\n> > To: Jan Wieck\n> > Cc: [email protected]\n> > Subject: Re: [HACKERS] 6.5 beta and ORDER BY patch\n> >\n> \n> [snip]\n> \n> > \n> > After that the reverse index scans, so that the index that are \n> > backwards can also be used for sorting.\n> > BTW, can this be easily implemented/effective in PostgreSQL or are\n> > our btree indexes optimised for forward scans ?\n> >\n> \n> PostgreSQL seems to have the ability to scan Index backward \n> because we can execute \"fetch backward\" command. \n> IMHO _bt_first() fucntion used to find first item in a scan should \n> be changed to work well in case of backward positioning.\n> \n> I think this change also gives the partial solution for the problem \n> [ [HACKERS] Cursor Movement - Past the End ] reported by \n> David Hartwig. \n> \n> I have a sample code for this change.\n> I can send it if someone want to check or test it.\n> \n> Thanks.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 May 1999 07:55:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": "\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Sunday, May 09, 1999 8:56 PM\n> To: Hiroshi Inoue\n> Cc: Hannu Krosing; David Hartwig; Jan Wieck; pgsql-hackers\n> Subject: Re: [HACKERS] 6.5 beta and ORDER BY patch\n> \n> \n> It is my assumption this has been applied to 6.5 beta, right?\n>\n \nIt has been applied with subject [Index backward scan patch].\nHowever it doesn't include a change to omit sorting in all descending \nORDER BY cases. \n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > Hello all,\n> > \n> > > -----Original Message-----\n> > > From: [email protected]\n> > > [mailto:[email protected]]On Behalf Of Hannu Krosing\n> > > Sent: Thursday, February 04, 1999 3:43 AM\n> > > To: Jan Wieck\n> > > Cc: [email protected]\n> > > Subject: Re: [HACKERS] 6.5 beta and ORDER BY patch\n> > >\n> > \n> > [snip]\n> > \n> > > \n> > > After that the reverse index scans, so that the index that are \n> > > backwards can also be used for sorting.\n> > > BTW, can this be easily implemented/effective in PostgreSQL or are\n> > > our btree indexes optimised for forward scans ?\n> > >\n> > \n> > PostgreSQL seems to have the ability to scan Index backward \n> > because we can execute \"fetch backward\" command. \n> > IMHO _bt_first() fucntion used to find first item in a scan should \n> > be changed to work well in case of backward positioning.\n> > \n> > I think this change also gives the partial solution for the problem \n> > [ [HACKERS] Cursor Movement - Past the End ] reported by \n> > David Hartwig. 
\n> > \n> > I have a sample code for this change.\n> > I can send it if someone want to check or test it.\n> > \n> > Thanks.\n> > \n> > Hiroshi Inoue\n> > [email protected]\n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n", "msg_date": "Mon, 10 May 1999 10:24:15 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": ">\n>\n> Jan, is this implemented in 6.5 beta?\n\n It is still the simple suppressing of the sort if the choosen\n index scan has already the requested sort order. The possible\n enhancements of the optimizer (mainly taking LIMIT into\n account and use index scan if sort order can be obtained from\n that) aren't implemented AFAIK.\n\n I have too less knowledge in the planner/optimizer corner to\n get my hands on it at this stage! And there are things left\n in the rewrite system. It might be better to leave this all\n for v6.6.\n\n\nJan\n\n>\n> > > > > As we are again approaching the beta (feature freeze),\n> > > > > I will ask my ordinary question ;)\n> > > > >\n> > > > > Is the patch by Jan that eliminated the duplicate sort node in case it\n> > > > > was redundant included in 6.5 ?\n> > > >\n> > > > Sorry,\n> > > >\n> > > > I missed to put it into after v6.4 release. 
And since it\n> > > > wasn't there during v6.5 development, I would not put it in\n> > > > now.\n> > > >\n> > > > Note that it wasn't in the v6.4 feature patches either, so it\n> > > > isn't tested enough to get released.\n> > >\n> > > But if it is not relesed it will _never_ be tested enough ...\n> > >\n> > > As we are just going into beta, not relese, I would suggest to put\n> > > it in now, and back out if it relly breaks anything.\n> > >\n> > > I have been using it with 6.4 almost since the relese an have\n> > > seen no problems - in fact it solved a big problem and provided about\n> > > 1000X speedup for certain queries (a fraction of second instead of\n> > > 6 minutes) , not to mention avoiding backend crashes due to disk space\n> > > exhaustion.\n> > >\n> > > And it did not break anything in regression tests either, the only\n> > > argument then was that there is nothing in regression tests that\n> > > could possibly be broken by it ;)\n> > >\n> > > I greatly prefer it over my previous method of doing the same on the\n> > > client side (issuing an EXPLAIN, parsing it to see if it is SORT on\n> > > INDEX SCAN, and omitting the ORDER BY if it is)\n> > >\n> > > Also, not having it greatly diminishes the value of LIMIT.\n> >\n> > Ok ok ok - OK. You got me, I'll go ahead and put it in.\n> >\n> > >\n> > > I agree that it is a hack and only a partial solution and that in\n> > > ideal world the optimiser would also know about sort nodes.\n> >\n> > First the executor must know better how to handle LIMIT's\n> > OFFSET. For now it processes the query until OFFSET is\n> > reached, simply suppressing the in fact produced result\n> > tuples in the output. The it stops sending if the LIMIT count\n> > is reached. For joins or other complex things, it has no\n> > chance to do something different. 
But for an indexed single\n> > table scan, where ALL the qualifications are done on the\n> > index, it should handle the OFFSET by skipping index tuples\n> > only.\n> >\n> > Second the optimizer must take LIMIT into account and\n> > depending on the known number of tuples, LIMIT and OFFSET\n> > produce an index scan even if the query isn't qualified at\n> > all but has an ORDER BY clause matched by the index.\n> >\n> > These two features would finally solve your huge table\n> > problems.\n> >\n>\n>\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 10 May 1999 16:47:26 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": ">\n> It is my assumption this has been applied to 6.5 beta, right?\n\n Don't know. 
Hiroshi - do you see your code anywhere?\n\n\nJan\n\n>\n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > Hello all,\n> >\n> > > -----Original Message-----\n> > > From: [email protected]\n> > > [mailto:[email protected]]On Behalf Of Hannu Krosing\n> > > Sent: Thursday, February 04, 1999 3:43 AM\n> > > To: Jan Wieck\n> > > Cc: [email protected]\n> > > Subject: Re: [HACKERS] 6.5 beta and ORDER BY patch\n> > >\n> >\n> > [snip]\n> >\n> > >\n> > > After that the reverse index scans, so that the index that are\n> > > backwards can also be used for sorting.\n> > > BTW, can this be easily implemented/effective in PostgreSQL or are\n> > > our btree indexes optimised for forward scans ?\n> > >\n> >\n> > PostgreSQL seems to have the ability to scan Index backward\n> > because we can execute \"fetch backward\" command.\n> > IMHO _bt_first() fucntion used to find first item in a scan should\n> > be changed to work well in case of backward positioning.\n> >\n> > I think this change also gives the partial solution for the problem\n> > [ [HACKERS] Cursor Movement - Past the End ] reported by\n> > David Hartwig.\n> >\n> > I have a sample code for this change.\n> > I can send it if someone want to check or test it.\n> >\n> > Thanks.\n> >\n> > Hiroshi Inoue\n> > [email protected]\n> >\n> >\n>\n>\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 10 May 1999 16:49:11 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" }, { "msg_contents": "\n\nAdded to TODO:\n\n\t* Have optimizer take LIMIT into account when considering index scans\n\n> Hannu Krosing wrote:\n> \n> >\n> > Jan Wieck wrote:\n> > >\n> > > >\n> > > > Hi PostgreSQL hackers\n> > > >\n> > > > As we are again approaching the beta (feature freeze),\n> > > > I will ask my ordinary question ;)\n> > > >\n> > > > Is the patch by Jan that eliminated the duplicate sort node in case it\n> > > > was redundant included in 6.5 ?\n> > >\n> > > Sorry,\n> > >\n> > > I missed to put it into after v6.4 release. And since it\n> > > wasn't there during v6.5 development, I would not put it in\n> > > now.\n> > >\n> > > Note that it wasn't in the v6.4 feature patches either, so it\n> > > isn't tested enough to get released.\n> >\n> > But if it is not relesed it will _never_ be tested enough ...\n> >\n> > As we are just going into beta, not relese, I would suggest to put\n> > it in now, and back out if it relly breaks anything.\n> >\n> > I have been using it with 6.4 almost since the relese an have\n> > seen no problems - in fact it solved a big problem and provided about\n> > 1000X speedup for certain queries (a fraction of second instead of\n> > 6 minutes) , not to mention avoiding backend crashes due to disk space\n> > exhaustion.\n> >\n> > And it did not break anything in regression tests either, the only\n> > argument then was that there is nothing in regression tests that\n> > could possibly be broken by it ;)\n> >\n> > I greatly prefer it over my previous method of doing the same on the\n> > client side (issuing an EXPLAIN, parsing it to see if it is SORT on\n> > INDEX SCAN, and omitting the ORDER BY if it is)\n> >\n> > Also, not having it greatly diminishes the value of 
LIMIT.\n> \n> Ok ok ok - OK. You got me, I'll go ahead and put it in.\n> \n> >\n> > I agree that it is a hack and only a partial solution and that in\n> > ideal world the optimiser would also know about sort nodes.\n> \n> First the executor must know better how to handle LIMIT's\n> OFFSET. For now it processes the query until OFFSET is\n> reached, simply suppressing the in fact produced result\n> tuples in the output. The it stops sending if the LIMIT count\n> is reached. For joins or other complex things, it has no\n> chance to do something different. But for an indexed single\n> table scan, where ALL the qualifications are done on the\n> index, it should handle the OFFSET by skipping index tuples\n> only.\n> \n> Second the optimizer must take LIMIT into account and\n> depending on the known number of tuples, LIMIT and OFFSET\n> produce an index scan even if the query isn't qualified at\n> all but has an ORDER BY clause matched by the index.\n> \n> These two features would finally solve your huge table\n> problems.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #======================================== [email protected] (Jan Wieck) #\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Jul 1999 21:49:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 beta and ORDER BY patch" } ]
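Jan's description above of how the executor currently treats LIMIT/OFFSET — tuples are fully produced, the first OFFSET results are merely suppressed, and execution stops once the LIMIT count has been sent — can be sketched as a small standalone generator (an illustration only; the real executor is C code operating on plan nodes, and the improvement Jan proposes is to skip index tuples instead of materializing results):

```python
def apply_limit_offset(tuples, offset=0, limit=None):
    """Mimic the executor behaviour described in the thread: the
    underlying scan still produces every tuple; the first `offset`
    results are suppressed, and we stop pulling after `limit` sends."""
    sent = 0
    for i, tup in enumerate(tuples):
        if i < offset:
            continue            # tuple was computed, then thrown away
        if limit is not None and sent >= limit:
            break               # stop pulling from the scan
        sent += 1
        yield tup

rows = list(apply_limit_offset(range(100), offset=10, limit=5))
print(rows)  # [10, 11, 12, 13, 14]
```

The inefficiency Jan points out is visible here: with offset=10 the underlying scan still produces ten tuples that are simply discarded.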
[ { "msg_contents": "Is anyone else seeing major breakage of the regression tests with\ntoday's (Monday's) CVS checkins? Or did I break something myself?\n\nI'm seeing wrong answers in tests numerology and select_having;\nplus coredumps in opr_sanity, subselect and rules. Also the same\nunexpected messages in union and misc as were there a few days ago.\n\nI've been making what I thought were perfectly safe changes, so\nI was surprised when things blew up in my face just before I was\nready to check in. Noting the scope of what other people committed\ntoday, I'd like to believe it's someone else's fault...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Jan 1999 01:55:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Major breakage?" }, { "msg_contents": "Tom Lane wrote:\n> \n> Is anyone else seeing major breakage of the regression tests with\n> today's (Monday's) CVS checkins? Or did I break something myself?\n> \n> I'm seeing wrong answers in tests numerology and select_having;\n> plus coredumps in opr_sanity, subselect and rules. Also the same\n> unexpected messages in union and misc as were there a few days ago.\n> \n> I've been making what I thought were perfectly safe changes, so\n> I was surprised when things blew up in my face just before I was\n> ready to check in. Noting the scope of what other people committed\n> today, I'd like to believe it's someone else's fault...\n\nTry gmake clean + initdb.\nAt least RULES were affected by my changes...\nI forgot to say, sorry.\n\nVadim\n", "msg_date": "Tue, 26 Jan 1999 14:22:15 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Major breakage?" }, { "msg_contents": "I wrote:\n>> Is anyone else seeing major breakage of the regression tests with\n>> today's (Monday's) CVS checkins? Or did I break something myself?\n\nNope, Vadim broke something. 
It looks like anything with a subplan\nwill coredump in Monday's sources. executor/nodeSubPlan.c has\n\nbool\nExecInitSubPlan(SubPlan *node, EState *estate, Plan *parent)\n{\n ...\n ExecCheckPerms(CMD_SELECT, 0, node->rtable, (Query *) NULL);\n ^^^^^^^^^^^^^^\n\n(and has had that for a long time, evidently).
One of the additions\n> Vadim checked in yesterday extends ExecCheckPerms() to try to use\n> its parseTree argument --- unconditionally. Guaranteed null-pointer\n> dereference.\n> \n> Perhaps ExecInitSubPlan is in error to pass a null parseTree; if not,\n> then ExecCheckPerms needs to be modified to cope. I don't understand\n> either routine enough to fix it correctly.\n\nI caused the 'having' problems. I am working on a fix.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jan 1999 17:05:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Major breakage?" }, { "msg_contents": "Tom Lane wrote:\n> \n> Nope, Vadim broke something. It looks like anything with a subplan\n> will coredump in Monday's sources. executor/nodeSubPlan.c has\n> \n> bool\n> ExecInitSubPlan(SubPlan *node, EState *estate, Plan *parent)\n> {\n> ...\n> ExecCheckPerms(CMD_SELECT, 0, node->rtable, (Query *) NULL);\n> ^^^^^^^^^^^^^^\n> \n> (and has had that for a long time, evidently). One of the additions\n> Vadim checked in yesterday extends ExecCheckPerms() to try to use\n> its parseTree argument --- unconditionally. Guaranteed null-pointer\n> dereference.\n> \n> Perhaps ExecInitSubPlan is in error to pass a null parseTree; if not,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\nNo.\n\n> then ExecCheckPerms needs to be modified to cope. I don't understand\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nYes.\n\n> either routine enough to fix it correctly.\n\nThanks!\n\nUnfortunately, I can't fix this in CVS - I'm changing\nexecMain.c now to support READ COMMITTED mode. 
Could someone\nadd check in ExecCheckPerms ?\nSorry.\n\nVadim\n", "msg_date": "Wed, 27 Jan 1999 09:20:42 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Major breakage?" } ]
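The fix Tom and Vadim converge on — add a check in ExecCheckPerms() so the NULL parseTree passed by ExecInitSubPlan() is no longer dereferenced unconditionally — amounts to a simple guard. Sketched here in Python with illustrative names (the real routine is C in executor/execMain.c; `row_marks` is only a stand-in for whatever the READ COMMITTED code reads from the parse tree):

```python
class Query:
    """Stand-in for the parse tree node; only one field is used here."""
    def __init__(self, row_marks=()):
        self.row_marks = list(row_marks)

def exec_check_perms(rtable, parse_tree):
    """Check permissions on the range table; touch the parse tree only
    when one was actually supplied (subplans pass NULL/None)."""
    checked = list(rtable)          # per-relation permission checks
    if parse_tree is not None:      # the missing guard from the thread
        checked += parse_tree.row_marks
    return checked

print(exec_check_perms(["t1"], None))           # subplan case: no crash
print(exec_check_perms(["t1"], Query(["t2"])))  # normal case
```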
[ { "msg_contents": "My changes for primary key support weren't quite right. This change\ncompletes the job.\n\n*** ../src.original/./backend/parser/analyze.c\tMon Jan 25 23:44:26 1999\n--- ./backend/parser/analyze.c\tTue Jan 26 08:39:05 1999\n***************\n*** 714,719 ****\n--- 714,720 ----\n \t\tindex = makeNode(IndexStmt);\n \n \t\tindex->unique = TRUE;\n+ \t\tindex->primary = (constraint->contype == CONSTR_PRIMARY ? TRUE:FALSE);\n \t\tif (constraint->name != NULL)\n \t\t\tindex->idxname = constraint->name;\n \t\telse if (constraint->contype == CONSTR_PRIMARY)\n***************\n*** 722,735 ****\n \t\t\t\telog(ERROR, \"CREATE TABLE/PRIMARY KEY multiple keys for table %s are not legal\", stmt->relname);\n \n \t\t\thave_pkey = TRUE;\n- \t\t\tindex->primary = TRUE;\n \t\t\tindex->idxname = makeTableName(stmt->relname, \"pkey\", NULL);\n \t\t}\n \t\telse\n- \t\t{\n- \t\t\tindex->primary = FALSE;\n \t\t\tindex->idxname = NULL;\n- \t\t}\n \n \t\tindex->relname = stmt->relname;\n \t\tindex->accessMethod = \"btree\";\n--- 723,732 ----\n*** ../src.original/./backend/tcop/utility.c\tMon Jan 25 23:40:17 1999\n--- ./backend/tcop/utility.c\tMon Jan 25 23:40:34 1999\n***************\n*** 404,410 ****\n \t\t\t\t\t\t\tstmt->indexParams,\t/* parameters */\n \t\t\t\t\t\t\tstmt->withClause,\n \t\t\t\t\t\t\tstmt->unique,\n! \t\t\t\t\t\t\t0,\t\t/* CREATE INDEX can't be primary */\n \t\t\t\t\t\t\t(Expr *) stmt->whereClause,\n \t\t\t\t\t\t\tstmt->rangetable);\n \t\t\t}\n--- 404,410 ----\n \t\t\t\t\t\t\tstmt->indexParams,\t/* parameters */\n \t\t\t\t\t\t\tstmt->withClause,\n \t\t\t\t\t\t\tstmt->unique,\n! \t\t\t\t\t\t\tstmt->primary,\n \t\t\t\t\t\t\t(Expr *) stmt->whereClause,\n \t\t\t\t\t\t\tstmt->rangetable);\n \t\t\t}\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 26 Jan 1999 08:44:56 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Primary keys additional" }, { "msg_contents": "Thus spake D'Arcy J.M. Cain\n> My changes for primary key support weren't quite right. This change\n> completes the job.\n\nDoh! Sorry. I subscribed to pgsql-patches so I could post there then\nI forgot. Sorry about that. I'll repost.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n\n", "msg_date": "Tue, 26 Jan 1999 09:12:46 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Primary keys additional" } ]
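What D'Arcy's two hunks accomplish: analyze.c now derives index->primary from the constraint type on every path (not only inside the PRIMARY KEY naming branch), and utility.c forwards stmt->primary instead of a hard-coded 0. A minimal sketch of the corrected analyze.c control flow, in Python (the dict is only a stand-in for the IndexStmt node; the `_pkey` suffix follows makeTableName in the patch):

```python
CONSTR_PRIMARY = "PRIMARY"
CONSTR_UNIQUE = "UNIQUE"

def transform_constraint(contype, relname):
    """Sketch of the patched logic: the primary flag is computed up
    front from the constraint type, so it is correct on every path."""
    index = {
        "unique": True,
        "primary": contype == CONSTR_PRIMARY,
        "relname": relname,
    }
    index["idxname"] = relname + "_pkey" if index["primary"] else None
    return index

print(transform_constraint(CONSTR_PRIMARY, "mytable"))
print(transform_constraint(CONSTR_UNIQUE, "mytable"))
```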
[ { "msg_contents": "> \n> Bruce Momjian wrote:\n> \n> > > Thank you very much for the response of the other mail i`ve sent.\n> > > I want to make you a very important question. This is critical\n> > for\n> > > my pourposes and i cannot find a clear answer in the documentation\n> > of\n> > > postgres.\n> > > There is a capability in Oracle that allows you to make\n> > > \"non-blocking queries\". That`s it. The normal query from a program,\n> > > throw an API to the DBMS, opens a socket, and waits untill the\n> > response\n> > > comes. But the other way, returns the control inmediatly and the\n> > user\n> > > must poll on this socket to know when the answer comes.\n> > > My english is poor, but i hope you`ve understood what i`m\n> > talking\n> > > about. If it is not allowed in postgres i cannot use it as my\n> > database.\n> > > I have to make queries to other machines, and i cannot wait untill\n> > the\n> > > connection is made, and the response comes.\n> > > There is PQexec and i want to know how i could make a PQexecNB\n> > > (non-blocking). In Oracle this is the diference of the two funcion\n> > calls\n> > > to the API.\n> >\n> > New feature in 6.4.*. See the libpq manual under async. You can send\n> > a\n> > query and poll to see when the result is ready. See PQsendQuery() C\n> > function call. Also mention in the manuals in the docs directory.\n> > You\n> > can also use C function select() to wait for data from any number of\n> > concurrent backend sockets.\n> >\n> > --\n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania\n> > 19026\n> \n> Thank you very much!!!\n> But I have a llitle problem. I`m afraid that the lipq++ interface\n> doesn't have this funcion call. Is it true?\n> Have you got any examples of the using of PQsendQuery and Non Blocking\n> queries using this interface lipq or libpq++? 
I will be very usefull for\n> me. Just simply attach it on the response.\n> \n> It would be very appreciated. Thank you again.\n\nMaybe it hasn't been added to libpq++ yet.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jan 1999 10:43:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Non-blocking queries in postgresql" } ]
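The async API Bruce refers to (PQsendQuery() plus polling, new in 6.4) hands the application a socket descriptor that can be watched with select(). libpq itself needs a running backend, so the sketch below shows only the generic select()-based readiness poll on a local socket pair — the same pattern an application would apply to the descriptor returned by PQsocket():

```python
import select
import socket

def wait_until_readable(sock, timeout=0.0):
    """The polling primitive behind a non-blocking query: instead of
    blocking in a read, ask select() whether data has arrived. With
    timeout=0.0 this is a pure poll and returns immediately."""
    readable, _, _ = select.select([sock], [], [], timeout)
    return sock in readable

a, b = socket.socketpair()
print(wait_until_readable(a))        # False: no reply yet, control returns at once
b.sendall(b"query result")           # the "backend" answers
print(wait_until_readable(a, 1.0))   # True: data ready, safe to read
a.close(); b.close()
```

With a zero timeout the call never blocks, which is exactly the "returns the control immediately, then poll the socket" behaviour the poster asks for.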
[ { "msg_contents": "So good,\n\tI'm just starting a couple of db projects, and would like to run\nthem on PG, whose feature set seems rich enough for my needs... Now, I see\nalmost any new release added very useful stuff (as opposed to some well\nknown office suite new releases, adding mostly uncollected garbage), but\nthere is the dump/reload needed to realign tables to the changed server;\nvery good: no need to waste server's time in on the fly conversion from\nlegacy format to the current data representation... Still, I see those big\nwarnings about un-dumpable BLOBs and views; I understand the second is\nimpossible since views are stored only after having been \"massaged\" into\nrules; still, I think that in the long term both should be 'teleported'\nfrom one releases' to the next one's format, but I feel that BLOBs should\nbe handled asap... IMHO, this should be considered at the very least as a\nbig inconvenience, but I can hear people laughing at this big \"feature\" as\na DOCUMENTED bug of an seemengly excellent free program. Sorry for not\nbeing able to submit code to correct this state of facts, I hope you will\nnot take this note as an offense, or the troll it isn't, but as a real\nconcern for a real problem I'm going to face. At the very least, I'm\nsurprised not to see the incomplete dump problem noted in PG's TODO list.\nYours,\n\n\nl.\n\n\n", "msg_date": "Tue, 26 Jan 1999 19:49:53 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "dump doesn't save views and blobs" } ]
[ { "msg_contents": "Included patches fix a portability problem of unsetenv() used in\n6.4.2 multi-byte support. unsetenv() is only avaliable on FreeBSD and\nLinux so I decided to replace with putenv().\n--\nTatsuo Ishii\[email protected]\n----------------------------- cut here ----------------------------\n*** postgresql-6.4.2/src/bin/psql/psql.c.orig\tWed Jan 6 13:25:45 1999\n--- postgresql-6.4.2/src/bin/psql/psql.c\tWed Jan 6 13:26:27 1999\n***************\n*** 1498,1504 ****\n \t\t PGCLIENTENCODING value. -- 1998/12/12 Tatsuo Ishii */\n \t\t \n \t\tif (!has_client_encoding) {\n! \t\t\tunsetenv(\"PGCLIENTENCODING\");\n \t\t}\n #endif\n \n--- 1498,1505 ----\n \t\t PGCLIENTENCODING value. -- 1998/12/12 Tatsuo Ishii */\n \t\t \n \t\tif (!has_client_encoding) {\n! \t\t static const char ev[] = \"PGCLIENTENCODING=\";\n! \t\t\tputenv(ev);\n \t\t}\n #endif\n \n*** postgresql-6.4.2/src/interfaces/libpq/fe-print.c.orig\tWed Jan 6 13:27:21 1999\n--- postgresql-6.4.2/src/interfaces/libpq/fe-print.c\tWed Jan 6 13:29:19 1999\n***************\n*** 506,512 ****\n \tint\t\t\tencoding = -1;\n \n \tstr = getenv(\"PGCLIENTENCODING\");\n! \tif (str)\n \t\tencoding = pg_char_to_encoding(str);\n \tif (encoding < 0)\n \t\tencoding = MULTIBYTE;\n--- 506,512 ----\n \tint\t\t\tencoding = -1;\n \n \tstr = getenv(\"PGCLIENTENCODING\");\n! \tif (str && *str != NULL)\n \t\tencoding = pg_char_to_encoding(str);\n \tif (encoding < 0)\n \t\tencoding = MULTIBYTE;\n*** postgresql-6.4.2/src/interfaces/libpq/fe-connect.c.orig\tWed Jan 6 13:29:47 1999\n--- postgresql-6.4.2/src/interfaces/libpq/fe-connect.c\tWed Jan 6 13:30:55 1999\n***************\n*** 813,819 ****\n #ifdef MULTIBYTE\n \t/* query server encoding */\n \tenv = getenv(envname);\n! \tif (!env)\n \t{\n \t\trtn = PQexec(conn, \"select getdatabaseencoding()\");\n \t\tif (rtn && PQresultStatus(rtn) == PGRES_TUPLES_OK)\n--- 813,819 ----\n #ifdef MULTIBYTE\n \t/* query server encoding */\n \tenv = getenv(envname);\n! 
\tif (!env || *env == NULL)\n \t{\n \t\trtn = PQexec(conn, \"select getdatabaseencoding()\");\n \t\tif (rtn && PQresultStatus(rtn) == PGRES_TUPLES_OK)\n", "msg_date": "Wed, 27 Jan 1999 09:59:38 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "mb support fix" }, { "msg_contents": "\nApplied...\n\n\nOn Wed, 27 Jan 1999, Tatsuo Ishii wrote:\n\n> Included patches fix a portability problem of unsetenv() used in\n> 6.4.2 multi-byte support. unsetenv() is only avaliable on FreeBSD and\n> Linux so I decided to replace with putenv().\n> --\n> Tatsuo Ishii\n> [email protected]\n> ----------------------------- cut here ----------------------------\n> *** postgresql-6.4.2/src/bin/psql/psql.c.orig\tWed Jan 6 13:25:45 1999\n> --- postgresql-6.4.2/src/bin/psql/psql.c\tWed Jan 6 13:26:27 1999\n> ***************\n> *** 1498,1504 ****\n> \t\t PGCLIENTENCODING value. -- 1998/12/12 Tatsuo Ishii */\n> \t\t \n> \t\tif (!has_client_encoding) {\n> ! \t\t\tunsetenv(\"PGCLIENTENCODING\");\n> \t\t}\n> #endif\n> \n> --- 1498,1505 ----\n> \t\t PGCLIENTENCODING value. -- 1998/12/12 Tatsuo Ishii */\n> \t\t \n> \t\tif (!has_client_encoding) {\n> ! \t\t static const char ev[] = \"PGCLIENTENCODING=\";\n> ! \t\t\tputenv(ev);\n> \t\t}\n> #endif\n> \n> *** postgresql-6.4.2/src/interfaces/libpq/fe-print.c.orig\tWed Jan 6 13:27:21 1999\n> --- postgresql-6.4.2/src/interfaces/libpq/fe-print.c\tWed Jan 6 13:29:19 1999\n> ***************\n> *** 506,512 ****\n> \tint\t\t\tencoding = -1;\n> \n> \tstr = getenv(\"PGCLIENTENCODING\");\n> ! \tif (str)\n> \t\tencoding = pg_char_to_encoding(str);\n> \tif (encoding < 0)\n> \t\tencoding = MULTIBYTE;\n> --- 506,512 ----\n> \tint\t\t\tencoding = -1;\n> \n> \tstr = getenv(\"PGCLIENTENCODING\");\n> ! 
\tif (str && *str != NULL)\n> \t\tencoding = pg_char_to_encoding(str);\n> \tif (encoding < 0)\n> \t\tencoding = MULTIBYTE;\n> *** postgresql-6.4.2/src/interfaces/libpq/fe-connect.c.orig\tWed Jan 6 13:29:47 1999\n> --- postgresql-6.4.2/src/interfaces/libpq/fe-connect.c\tWed Jan 6 13:30:55 1999\n> ***************\n> *** 813,819 ****\n> #ifdef MULTIBYTE\n> \t/* query server encoding */\n> \tenv = getenv(envname);\n> ! \tif (!env)\n> \t{\n> \t\trtn = PQexec(conn, \"select getdatabaseencoding()\");\n> \t\tif (rtn && PQresultStatus(rtn) == PGRES_TUPLES_OK)\n> --- 813,819 ----\n> #ifdef MULTIBYTE\n> \t/* query server encoding */\n> \tenv = getenv(envname);\n> ! \tif (!env || *env == NULL)\n> \t{\n> \t\trtn = PQexec(conn, \"select getdatabaseencoding()\");\n> \t\tif (rtn && PQresultStatus(rtn) == PGRES_TUPLES_OK)\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 26 Jan 1999 21:18:12 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] mb support fix" } ]
[ { "msg_contents": "Included patches make some enhancements to the multi-byte support.\n\no allow to use Big5 (a Chinese encoding used in Taiwan) as a client\n encoding. In this case the server side encoding should be EUC_TW\n\no add EUC_TW and Big5 test cases to the regression and the mb test\n (contributed by Jonah Kuo)\n\no fix mistake in include/mb/pg_wchar.h. An encoding id for EUC_TW was\n not correct (was 3 and now is 4)\n\no update documents (doc/README.mb and README.mb.jp)\n\no update psql helpfile (bin/psql/psqlHelp.h)\n\n--\nTatsuo Ishii\[email protected]\n--------------------------------------------------------------------\nbegin 644 big5.patch.gz\nM'XL(\"):[KC8``V)I9S4N<&%T8V@`[%MM;R-'<OXL_XK>]7I%2B.)\\\\87:66'\nM;W*TMZMU)&T,1R<(0W(HC9?D<&>&J]6MY/]RN(N-!(&1(`;.!A+`\"`S?\"Q($\nMB!'`_G!W0&S8P<&YP^$^)A]25=TSG&X-1<JK&+Y`Q&+%J:ZNKJY^ZNF:YLS\"\nMP@(;'H:/>RMAT%YI.>U'[J\"S,HJ\\7KC2;ZW<=QZY7:_G+ON!=SBW,QJPNZ,>\nM,XI,-U=-?;5@,KU2*;^PM+0TBY6YU]T.N^L,F%%BNKUJ&JMF!0U47EB0/WC-\nM]*)F&(PN&:MOW*N^NK.XSF[E[M?H>QZD,(K7A;\\OW&`/:G=WV#IK^_V^/UCV\nMX<O@\"?SIM\\@+^';</G(\"_!M&0;L_%-\\&_.L3)_\"<%LX3K3'F]'JK;.=A;>>-\nM'2[\"&:)#)H-O2]^F0^S[8$Q\\6MZA/='%A8N7DOJV^4).7(<I\"\\EM7&X9\"WP)\nM*8\"::95X`!?9R@+\\QQ8H+FX0>OZ`M=SHV'4'K+;YJLV<08?='_5<MCF(W&#@\nM]%C=[[BY^M8.T_5BT3*78$B#VQCVG('+=.K#OQOY9=ZT>^2%;!CXAX'39_C5\nM\"2(/8G<\"`P\\]F$@W\\/NL]R0'8\\$TO<'A\"(9\"M+(GGGOL!GEN!TV'/>_P*(*N\nM?1^6&3HO0T<T>AQX402.HQ*8/0E0#VRW3MA6=7MSM\\IV_;[G<TNY01#]R;';\nM6G8ZRV\\.8S_Y_QC%%7U%M]FN$X4CGVV&1YY';5SAUF;G%GU;@?_AWXO>H-T;\nM=5QV$Y9G>'C`075TD[=&)T.WXW89X&K4CM@S[,E&@]`['(!WX9$?1.!OQ]78\nMT'6#-6@^H^OP(%KC%E866-\\9LAHL/+OG/G%[$.7(9V(5Q\"*PUT3\\R:LP<B*O\nM'=LAS%)/?=>O#T)2U?>88;-]2(]G<S!\"X`P.7=Z9L6>L\\+2J6P4-_AJZH;,S\nM+27?*)+<L,N*O\"3D)45>%O**)#>J&R2W%/MFG<:U#%GNF*Z.\\@)\\)'V+^VFI\nM=NH;393;)=N4[+2[W4P[#6['-@U)O[K1*`@[5EI>J]5I7L6:+8U;:]J&D!<4\nMN4GR#;N>EM?U:@WEX&91DAOU6&XK\\CK)2Z84Y[I9I/B4RD93EM<J7%ZT%7E5\nMR\"U%7A-RV1_++@K[#45.ZUZJ%(VTO%TLI>*,L);A/`&^`&P)Z%EP;L<0WO5K\nM\";+W<$N<#&>$L29@G783X:H)^\"K
RLI`7%7E%R\"7X&Z:5\"2N$MR;@+LGM4B:<\nM$?::2`-9;AF9^K%]2YX7PE@3L);D)0Y#3`]%;FD\"[FDYPE@3L%;D-\"ZF@20'\nM>`M]*8T0QIJ`M2(G6&$:2'*`M]\"7T@7AIPDX*G)*=TP#65ZT-`%W16X+>461\nM%X5<]@?@+<:5<%+J6.V+8)X\"LS&1M8V+6=L8L[:QQZSR!3\"O5[)9NUVQG\"SX\nM@'Y-Z-=D>;'.Y9:<[95:4\\CKBIRSN5Y2])L-SOZ6S$K5>+=0[%?%;F&9TK(T\nM2B5B*W.C6%7DY+]9+DGV&[5J2>A+\\VHT-FA>EEXJ*O*&V$4D-FR6JY0NMBW/\nMMUDI%?FN(.\\6S9I-_MA5HR++-W1AIZ;(\"59VS93HH5EOB-W+,!0YQ=FNRW%H\nM-JH4'[M1DEB^V6R2/T7+V%#D%(=B08[S1H&S/-BQ%#G%LP@+(\\OK\\7S+BIS;\nM-^5=:D,O\\EW0,@N*G/M9DM,7Y'6A+]LWBGSWK9B6(B=Z+MHEV4^CSL<MRCC<\nM,$NV&%>*\\X;%:0#LV[*\\9@L[#44N]%4['%?%JHR?C:*(0U7&`\\B)KHIUF6XW\nM*B6^R^I%.9Z5NB7D145.?@(\\+47.=W%#WA9`7A+Z\\GPK#4/H2^O2K72*E][=\nM#65WSZ:]>'<W4KL[TEYEAMT=Z4_:98'6A+PFRZVZ)NA,D?/=!>A/EI>$O\";O\nMXH;0KZK5@\\5W(Z`_N1HP*T)?@C/2ER;H3)(#W0EY59'3O)#FE&K`RJP2=$Y7\nM2'^*O#2AJK\"$ON0_TI<FZ$R14SR1+N6JPB#X()TI<H(_TJ@D!QK1!*TH\\HHF\nMZ%62`VT*?Z1T07H4XTKKA;2F\"9I3Y)0N2*-2-5/@ZX5T*<EA(84=*?Y(=\\)_\nM61_H5]BIR7*3MFND145>%G+9#M\":)FA.DA<Y#I&>%'E3$_0GR4O\\9@%I5Y'3\nM>B$M2G*@63%N69';FJ!+62[L`\"U*\\BK'#]*?(J]K@BXE>9W?3\"$M2E67SND9\nMZ4^1\\_@#74IR0_@/M*C(;2%7]'D9A72IR'G<@!85N3V)#H'$V(;WQ$UN:>I^\nM$+CAT!]TW$';G5@46A([*B<&+;UM[>WO&42)W`U@E7*I(HHY,W:.I'&)9TG2\nMAI!6T]*JP:6V<?$TC%FG85XX#:-MGIL&!)<6UX)D&KL&4EI::\\,N2-(RK]NL\nMIB2E0-@%JRI)B12,NMV0I+R\";)1D\"[Q.K-GU=\"\"RIU'S!DYPLN,Z0?MH&_<F\nMT,S%^]F\"$P3.B<:\\0<2.O,,C+>OTA^$1%X\\`ZO7\\8XWUO8[&.AX,\"?'56-0?\nM\"B<8MD/$\"FMT`7IP@:;9RR\\S/5'J^D&.K9'N'=&^)I1S)%WDPCQU8WD^/#C0\nMS8$\">;U'^OO+Y.*==>$JNWU;5EB$[D+IY5@G,<<-%MCZNFP3C[MHVG.!&XV\"\nM03R=N`<W![-\\ZL`]^T-N$'=_O\"]\"N$&AP+$U!Z&A28E.M[%`Z4(FYME2YDQ2\nM\"C3Y,HX\\1\\%8'YLH%+I=:N#1SC:3*-%9ZAQ;8%L/=INKW$FT>-`ZB=Q5FL42\nMC`H+\">82H54`8<G5QNW\"RK1/+G`Z7MOIA:OD!'0UR8K9A:^5=CXV0WZ@[E,\\\nM)(6FSG*FO;W`/?3\\P0$A3WQ']_=C0Z^[\"-=1KP.M?;??@L7C<W1\"%AW[`-1N\nMUPU<`\"_O';*<T_*?N'2X2A9VCZ`=O7\"@)_0\"?++HR$T@GIP]@S!T8S/+XX!@\nMC&!%,6``.;*Q#LC+-50#I(@'P:0*(^K+^4EQ%6:6J&H3@Q&H$K?6,?6@*P8O\nMSIDEGD\"D#A#CR<>QREX1Z36^+K!5,0`'/`R[>DYID2O`>B(LLS!U[$*TGQ#5\nM\\A/[2(X?,(
</P25L#GVX6I8F#?%/M<V'\"1`Y,@R=#7NC$`_6#X`Z7;GS)BT+\nMZWI!&&GHB?LT\"IPV]R#NPOPN78,_0>0-#B4+XZ$U%HY:U!MT:&2-5LOI=,:3\nM@:EXYR<PP*$!]NU1SP''G4/'&ZC#][DUV)PB/Y#SB4\\!?\\A`;X0S,&SL1WK]\nM8U(Y3UIQWL,Z+8X=I@4F]IW253#/(A$6JJXP\\C[/[MQAY1@ABWQ)%DGC)=)8\nM2Y&EV`WXYXRY/<B8F\"\"1&\"%\\E)Y729`*'#=W'C#\\J2-THY#?DH6I##0XG;%<\nMQ1)*>051U\"LA)@`<JXT(3YPE!GZ*&3!)>]XCE__\\=G&6VFX<09@PK'`^)W,Z\nMGS5OF$SJDY)PQ^M[/2=(`BPVHO%/:%)V<&@\";P`BI=D+=(X5)B7ORH0=2$)B\nM\"G5\\8T^A\\AP9P=T+XWN/F\"&W'N.LTOFF&(99J!@6@^/&Y*IC0S,T3@:U^\";`\nMC27!^=6*RXUQL2%V<=19XL60DB'\\P^<LBI=$B_Z>Q054NBX10OBGUGZ;K]J1\nM#Z5O3JUM`1VI6@^3@\"WTVJE*3ZT#!^&XJ,,JT$M*.9P\\H>T.H[/<N!S\"#R1\\\nM+_UCB1#C[IKSU@MKWIW0^X'K=W-XSY!?$1?RV/DU;W$QS](E&R/]/6]_K[\"/\nMM1L.SS7F<!;K[%[]`&9-]?Z!E5K%I)^^ST[!W7*A7'B8SUC7L:^\\0+P#-3.%\nMX%PYG<O^!5-C>(I`<<GG*:[\\<]X_L<(I*%&O]74>3B<KG$9\\P'2!018O&MZ`\nM&-(@%]G+6!NX$;K4VN\"-TVQK8TIK(_KI^U>V(NE?)S1F%6=;D3A8:833/0(.\nMF*=3/M@`^LYPB'MS33\"M<H>I+%!R`R$25PWD_\"OSRJA\"$<>,H9I*=B4_8?#(\nMQW3/962NFNE2HB?)G,IYE'$0<K^I%1VYC7`J=4O=1!H>>U'[*$<VN<&V`PB3\nM\\+@JIBY,9BQ8YJ^FD$*6QF,>!Z\\5N,ZCM>QAC-F'D8YO`1>E2PQCQL-<7:+H\nM^P)<(D^4?\"A,RH=IKEH7N/H-^7:2JPD=3W2UXW:=42]:/>_W61KNN'@<YE.>\nM5Z+GI<3S2O?]`6NX;:9;^.\"975FURS,]>,9M7.YY);-4T/`$,GEJ\"056,7GP\nMZXP_?A4_OD3L`\"TOL_N;=1*E3IR>^!Y67CAGH^^U<\\J&G+E+#_EI3<\\=3-ZL\nM];4L,9IKC;J$]=JHJ^@(9A!0PHN6?SA\"Q3UC/VO?/S[\"AZ!RX`C64'3NTM8A\nM[\\AM0%#,L#*00.7..E6&I>[#//_)I+I3W]R,JTEWL+1$N\\+\"<'$1CSST%*)2\nM]Q(X+EM:9T36<V)NI$]E'DG3_DAJI['=.1X+I(ND5DKB=+O7YD4H.@X$=P,<\nM%[CGDT%F5S8T=GK*SHNMI!L`@,^+BEF<_;WZ:]M_+O:+.9X-<XD6+`CJ\\./*\nM+7\\Y41/M.>$_W@?EY9([T1$J4F,<2;HE<P8#/TINV^,1B$:3`\"?6YG/S0I!B\nM%6/,&33'<!@`6(!F!(2TFR\\5C*<WM1@;:THP$J@5]B<VZ7&3'*'Y//?G[(5L\nMWEG`6X7Y[Q?F4YMGG)R0DCPW,4LG)\"?DI8%>J\\D)\\IERDP19^3@W-S=[HEZ<\nM<>!+1KKUH&EX>`\"-!_T6=,K!-Q%[H0\"&8(%[R0#X61&'27W7@2TQ#4XIB16`\nMZXC[\\V)C+$:C=.N5RI]4RQ@[R90@9Q&>AVXD'MH<I/`O0)\"D;X[WR-^Y4UZ3\nMFD[';6I^C$EC7#[Q7A!Z/08I;7^Q)F>`C*P1OQ[?2!S$L^]V$EX24%+4$,]U\nM@$ON-D'H]C`>*,UNJ01(!L_,\\7&>\"\"VE_4S.#7&/,>;A4C?/%`I^)K.'+A/&\n
MW\"R3AYD)%I\\RZYER%5O8O>KNYM9@O)4RRE8F96O6?FWKEF;KY?%^72P86K%0\nMBO?KN6=D&2K-F_S+38T5--8#JP/+(&>1`OCUF4;3Q].E<MFNL'LHY;^&)9;L\nMV)*=LF0KENQL2W9L:>?NY@[8P3]@!4KA\\$TO'-O`*V$!5>*%0P1#+_S#>\\65\nMA9;0F.A5$^=O.-82*-[DON*_,[[$4\"C!+4[PB&N=K4VKQ_B3Q:(@VQVY;,<=\nM,KCOQS<!S%5\\)6\"&@DP8N61%5BEJ5D%/560H,`K)\"L>W7,C+:W&)MBB5:\"KY\nM`V3A`M@38R;H$]`>1BKEAS'3ST$7B!PH\"CJEJF$AI`,FR#):H#G^@?`^<@9O\nM>J^(A:-B)MYI*6,QV90>XA>.EBLEJ^BJC[O*LQ5YQ%C\\\\/=!U.JE+O`E@CW^\nM$R=B`1J<L.UY!F^&F[NC`S\"DL;B!Q^),B[7=4?O-898V-23:6>MF&17-,NWX\nM%8ZY9X0^;<+7&_PK6$;P\"\\,<F:+@%K<0(9VDTV\\'H'(8'>'1N\\.._:!#)4L?\nM7QKPXI<&Z(@.0HFXL:RR9MGVF!DN[U!:/(;.%?HY)0OCUT=$'FX$'N6A83.]\nML&H55XW9WLA)S%SRE9R\"IA?C]81_$]\\^H#=@XF>K>IX[B`[<`4P1SU/6V9*^\nM)A,[Y)'0BGRL9_(LEU_#E/#[;G2$G6`'PF).W1'&'?$UCJ0KY1/J0[>QC8S.\nMH1O`!C?#J/3Z\"LU>@&?6V6-Y^/\\B`C(PQ=17I)E/PJ2=A<E,\"Y>$HZ[II1B.\nM+W;<K@=%9/-A_>#N:TP7K`J7E&UWG2&4F*'+8Y!6KF\\Q0U&NP]PS=;^WS4Q%\nM]WM^`)4TJMZ05'=?/Z>ZZWC'7'5L]>'69OU!H\\ELH?MPX!$5/-S=6\"K+JO<?\nMWFL>;&[M-K>WJO=8D>O?SV21<2>J5'16$N:A'EE*U2-Z@FT*I<#V=S\"4UA]'\nM*+-0:MB:F7H)#B]U-=)D&I!N9]NN<$>@H>OW>OXQ9&;(G,\"E4/#T9PF[^(/>\nMB>PXE9\"F$1<:.T=>-V))61EKT5MWIAEKC2M(HKHNOD<&8=O=K+VQVP11_&Z9\nM=*@<9_):=KX:AJ8;I=0+@8:IZ692R+E/,?:<F7;<J.%`]>*$;E/,C'XV73NO\nMM^OVA_B+_\"0]+-Q>S5##_E1!\";T9?UP[=RIP@8WQ?>>%-O*B=GB1WA^=PX5.\nM0BW60*5?&*WKM-UPI>>UAH_'+^IZ@]2[NI`A.A&I7EDUK:Q7/\"\\RI)*QN5HP\nM+JC2=<U,R)A_.KV>%T;+/AL^QLDZO?C=515-_&W9Q=3;LO'+L?RMV?2;M1`E\nMMMEE[>!D&/$'>D)WZ`3X1`;,('\"\"$XW!U9&+#_B`\\_C\")U9>T-C6^.T#>2HP\nM=\\6>CE_1O2*'LT)=A!O<5!;!9:F0JDOXJ?<JNY7;V:XW-K?SDP[&L=I=NI6[\nMMW6PDY^NS9;Y?0]_\"?A\"\\UP%;U9F,<^UP;P2-'QF*.$[%@`[AZSMC&`SZ;A#\nMEQ[]]$#D\\:>#A%G6\\0*W'?G!\"=8NV31D`0U9Y1BKRZ_]Z8.M-U:!1V'SP?#A\nMWU6ZK^RS)5J'X>-E!V80'L%WG`IB((]W`EQ#H*<=@Z(M0-%F29R]KON8Y6[E\nM7GNPO;M5O=_,:PQFE1^/`D,`TI(0$#M:MJ;;R5[Q?^^G6-IOX*[,3I$;1BN!\nM>QBX8;B\"0KA7/(B.E^'KI=X_O\\#.<[R#'A\\<++).X`\\9W2&SKW_UV?M?_O[?\nM_Q.IO`U5\"\"2GW,!R/_OJBX]__G<L`J;7V$?_\\_'?_\\OO__!C?%$?8ZBQG_['\nMSW\\=7^7T8CZ?,N4!6)\\FINA*AWUZ;'P4(L1;4>\"Z\\4!3^AL9_8^<\\
(CE8M?(\nM@#<(\\0\"/'O!)M)\\XO1$D3F[^J\\^^_.LO/I[7V/Q7[W[U[K_][A<?XO>/?U@M\nMZ!_]=GX&`[_Y[U_^FAOX])^^^/QG[WWR6QP=KS_ZO*87/OI\\%B-_>.?S'W$C\nMG_SPGS_\\U_=^^D':T*?_]1?@S=O<T!.G/1KUI=4*W1[D.UO@+_)/;V''])`7\nM+=?ZV/QL'=Y:P!X_N$0/>G!L_@`&.;ADCY>FZR<H?\"M>P;W/OO[TO5]\\N'^9\nMO@N9G5/]-2I2G#:4\"@?\\*\"/&Z.2@:\\P'_>@2^D,_]\"+/'^3F<?GGD=<3*%_4\nM+QRUPBC`@BZ9$FF;5!T7V;G.%Y\"5^W0(5MU.S#3^*/KFC)5A[#EHJVS'M/5G\nM#YO;L`],8*_F]O:#[57&MET\\B$Z31,.'=-OR(]9\\\"MO`C;&E*R0\\V>(WYKVI\nM9F:D/V'GN5EP)CLSDN%,MB[!B<)>!C6*ELD,R2-^FJRM^)S2TM(9^A)]%I>4\nM3RP`#1Z]TSATB0T1/]#@<3E-!X4T1%1`@\\_V5)WJJ9@G:.1,%OC'=!@_;6(7\nM$/Q5S'<&7W7T];*NJEO+=]E7=5/[8_#UI>^RIU,V<L7KR=Y.SLM4/CZ'=]FE\nMPK?GWF4JD2E+?<K[3U_QQ4D^JU1W2E?F=,8;*TZ!TECQ0OZ;M=::%I&TE9GC\nM<KGH\\$]QUAB-U6>*U%A]2KPN5VM.BQN4GV#PRI%4N'(DV=,C<^EJ>H;PD$DN\nMO'I8I84S!\"PMG\"%L::$2O(S;B'[KN8\\[SIFX/NFX/NFX/NFX/NFXHI,.X)<K\nM.>3(MG-]OG%]OG%]OG%]OG%]OO&M^WI]OG%]OB'==EV?;Z@1N3[?N#[?N#[?\nM>/[S#7I,Y'E.-V(#5WVV\\0_1WW[R_F_>_?C<V4;<P')O?_F3M]YY((K^'__C\nM7S[^J]]]^,:XZ'_;>>?LXK.-V-2XVD^,2]4^'VA*?R.C/R_S8]?.G2HDVDDM\nM_<'C]W_TD[>P;[email 
protected]_GZG5_];WG7UM/&$86?S5_(R]C,!DA\\6]^(DQ@,\nMMN,;OG!)J03$,MB`P:RI+VV(:'X-::NH4:DBM0]MI?Z%_H'^C;ZT4L^9R^[:\nM7N,U3B.BYB&LS\\R<.9>9,S/'GF_Q^4T%QL_5#\\-IB6$&OW3>7W`&UWL___;-\nM^W?OL'?\\?'4%H^KJR@Z3WW=_^I4S^?[O[_YY^\\>;O\\R,KM_\">+M*]>4VS-[J\nM7Y_'EXB5F[DK9K\"WUP\"7^>NWKR9H<7-NXZ865KF-@?KZ*'PM/;CS8^:Z].V?\nM5KF-D6T?6#:^.;<AQNAHHP^LO#;J&RL/NI^M//I0OJF=$9=UE:SBLM%X3&Z#\nM_PYSVLR&Y#)-7B,T.J]A-H55KF*ZL&659+A%]!K+QF80L\\@+W\"J6V>)C,Z39\nMXC5!9.O/55BX=W2<XQ:_U'TK]P',M39/;MQZE])T.@]A/ZC![7)I-@JK(:P\"\nM-;BVEX.J7@H][>4J;(3I#Z&O#5G'G@7M+!!W6=;!I>E3D%6YRY*.68X'I!XM\nM[>AY:9J/4TAGO>!_//$FV4^,<?4M<A5C0IUQ<!P3\\8R*8X;2[7(5M[;(=+D*\nM6];A_R)V;614MV4IH[KM7(6='>,XNTV>J[!E*_\\''TD3Y2IL[HEMF&>J7,4X\nM4YF)-@QF)MHPFYEH*U=QMB]^BNWM'(]X@V70$DC,BL&$5Y/#[H`J;Q@=-OB5\nMIO+F^EK,=0Z]$(]&/'7B^<(UXR386R?FPF]13\\X9$`C!Y],V^W.`EPKY159$\nM#*C(\"ZIXS;P'^]8N*:<3:]E4<2M53)22V6)ZAD,W-7`F4<8<*+46/X\\PN9@[\nMIY>+?^_+@:0^D(Q6MK2X0&M@.#@&.<?PMBLK@5,<WL&TZ-KAH*@S$:H])<SQ\nMH\"`FI<@2`N3VFMT.)\\'QCP26[JLSB,8Q0AL)OX,`/(U#LD-H0T#VD+TGW>.Z\nMQAH/B8HW0UG)*%$?]HD*MN:BROR9251Y4N6BDM&B$@$^@E5J8/AVZZ*VC_(N\nM^6KU+WU:K]GD//KGU'Y#\\^$(8?]EZLUS>=T?IT4)XA5B\\X4>JWYQ270(@F*X\nM_80H,$&_.QC4T2=0MW1JO42V2F2N5-R)S>[-7<Z5GCV;V]5V1?E&I;RV4MP4\nM58:*6;RMK&4+V2VL,HLE3E;\"35:1-F,,^&5X8(+C\"_[P^^[B(;\\A'K:VX:'O\nM=CE\\YA?'Y4-`/@3E0T@^A.=<#(-D5GK(A;+N=C=36R0)I^_-[D6SSJ3);I90\nME'5D7VYUNABE4(1>NW5>KVKP^!R%++8TG83JH4^D(<4,NLN&Q/EQI^UIS)%:\nMZ\\\"WD5I)%E+>LWWOR;E8;ZI=#EP9)?[@8[\\*\\V-P<@PVG'\"=<8?XC'#B/HJ)\nM#6$AX@UA(.XV/`Q^9KZPN@\";B7,68WA?#A&N4!B?&O\"I$7*/KC[6XMY[\\ZM\\\nM51#(1$\".E_V'B=GL-B_BY*ZG@:^FC7?:5>]!\"^06Z2Z1[1H0)VQ?''S_;6!Z\nM:2R-!2-?#9H!;`CV$TI&/+GUY2XP'`J76$[=\\4KRA(9IDSZCE)XN-:-'M$1C\nMM)@H4'=!B1]CF=.EE)4$%XJQ57S*O-)1LJOW:<:_0W/TE'T..I\\J:646GOS4\nM_2136*=IZJ$O:-0YB]T$DND8S<F:R,XIV,E&):=KJ1[VT'P@0>>=+JC\"\\!WF\nML<ZB\\ES)``4(\"TSR)Y^#Y'H'AC6Q<`5DJM\"\\P0*));0\"?/C<'_\"KC%)$BHK<\nMRPI5,LI#)>=T@1U:])\">.C_++`77@;8LW('UL,8VW5?\"RAXHFZ&+S&X@`=1!\nM,1>/D60T%0V*S!IM^B#T$)FQ\\20\\)F+5_\\!C^$Q6&^DP^=3]9[F5\\^/^T[R9\nM8X0^-`
:4>9G&F<Z+=-FD-90$7B=/\"J^*;><<4@6LI1$U&&@)JB!$3V8#\"5[1\nM\\4`OI&ZP^1EX)J.[NSCD:*9UX:5P]BD]%FRX?/=I'HQ_(B6D05K$,<9'E&D8\nM#?,MZ7P$L@VR>P$*,\\?UX]*P\\0`]Z+YG:IC:,=>*(PK^%!(WTDQ]%7J,TK9)\nM?8(OA=+@-+G?$V\\FS[6TZC')]UKDZ0D^GO9:\\;-JH^GM=C4(HF?>[E=+\"W+2\nM]:\\3^5TEJWB4NE)%*\\9H/\"R]!/+I\"\"('O4X7CJ<H:)HIMQH3B$)(RM$LS;!I\nM&J6GN:_S3FX!7+J@V)^+5L0@LEPGK5=7?_3&U?6C+:U,*(&(:%J]^M[I_I\\M\nMI$S!\"-=K\\LXM+:$&W:H98I,3HOHAC&_A@)]$>P*\"I.<W@\"Z0G4SDK6TF$T-Q\nM0C(,4=SM(1%?S(()(01AXORP4=_4<#CZP)JPG&\\*H;V$W/2H)*4=-4%!]DH>\nMQ#0C<O-&FE7MJ%<]JG?TI@%STX\"MII:VBCQRJXM!DZTB4;?ZR`\"KY&A4<);F\nML$LU@FE=+_=4V$LR#7PWRP4/;=R5;NY+-OE7:C7=XRR>X2GZL-W2$/I$0D\")\nM^7[1ZA&M#CW@:V+XMW=54A.@2@3!'&4L@0H(JX(,%XQ^!@,+PY_'_GBKJ:,*\nHFR5AMYPF&*%[1QWH%E&:F-2P'Z]P,$RI\\V'C9;TV\\R_9\\%R#MH0``'BK\n`\nend\n", "msg_date": "Wed, 27 Jan 1999 16:23:50 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "MB big5 support patch (current)" }, { "msg_contents": "Applied.\n\n> Included patches make some enhancements to the multi-byte support.\n> \n> o allow to use Big5 (a Chinese encoding used in Taiwan) as a client\n> encoding. In this case the server side encoding should be EUC_TW\n> \n> o add EUC_TW and Big5 test cases to the regression and the mb test\n> (contributed by Jonah Kuo)\n> \n> o fix mistake in include/mb/pg_wchar.h. An encoding id for EUC_TW was\n> not correct (was 3 and now is 4)\n> \n> o update documents (doc/README.mb and README.mb.jp)\n> \n> o update psql helpfile (bin/psql/psqlHelp.h)\n> \n> --\n> Tatsuo Ishii\n> [email protected]\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Feb 1999 13:51:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] MB big5 support patch (current)" }, { "msg_contents": "Which hackers are on ICQ?\nThanks!\nClark\n", "msg_date": "Sun, 14 Mar 1999 21:12:55 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "ICQ?" }, { "msg_contents": "> Which hackers are on ICQ?\n> Thanks!\n> Clark\n> \n> \n\nWe are on irc. See the FAQ.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Mar 1999 16:21:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ICQ?" }, { "msg_contents": "> > Which hackers are on ICQ?\n> > Thanks!\n> > Clark\n> > \n> > \n> \n> We are on irc. See the FAQ.\n> \n\nI'm not a complete hacker, but I'm on ICQ ;-))\n\n-- \nDmitry Samersoff\n DM\\S, [email protected], ICQ: 3161705 \n http://devnull.wplus.net\n\n", "msg_date": "Mon, 15 Mar 1999 10:42:46 +0300 (MSK)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ICQ?" }, { "msg_contents": "> Which hackers are on ICQ?\n> Thanks!\n> Clark\n> \n> \nI'm note complete hacker but I'm on ICQ ;-))\n\n-- \nDmitry Samersoff\n DM\\S, [email protected], ICQ: 3161705 \n http://devnull.wplus.net\n\n", "msg_date": "Mon, 15 Mar 1999 10:43:40 +0300 (MSK)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ICQ?" } ]
[ { "msg_contents": "Vadim changed the parser to allow FOR UPDATE for all queries except those\nfor which QueryIsRule is set to true. Does that mean we allow FOR UPDATE for\nunions etc? Or do all these cases set QueryIsRule?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Wed, 27 Jan 1999 12:31:32 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "FOR UPDATE question" }, { "msg_contents": "Michael Meskes wrote:\n> \n> Vadim changed the parser to allow FOR UPDATE for all queries except those\n> for which QueryIsRule is set to true. Does that mean we allow FOR UPDATE for\n> unions etc? Or do all these cases set QueryIsRule?\n ^^^^^^^^^^\nNo. I just moved tests from gram.y to analyze.c and rewrite system.\n\nVadim\n", "msg_date": "Thu, 28 Jan 1999 15:22:32 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FOR UPDATE question" } ]
[ { "msg_contents": "Here's a very small bugfix. \n\nMichael\n\nP.S.: Did we find a solution for the patches list yet? It seems MArc is busy\nas my resubscription process doesn't get acknowledged too.\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!", "msg_date": "Wed, 27 Jan 1999 12:50:35 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Another small ecpg bug fixed" } ]
[ { "msg_contents": "After recent changes I find an error with SUM when summing more than\none column. Here is the test sequence.\n\nDROP TABLE x;\nCREATE TABLE x (a int, b int); \nINSERT INTO x VALUES (1, 5);\nINSERT INTO x VALUES (2, 7);\nSELECT * FROM x;\nSELECT SUM(a) FROM x;\nSELECT SUM(b) FROM x;\nSELECT SUM(a), SUM(b) FROM x;\n\nThe last three statements give the following expected results when\nrun on a system compiled Jan 19.\n\ndarcy=> SELECT SUM(a) FROM x;\nsum\n---\n 3\n(1 row)\n\ndarcy=> SELECT SUM(b) FROM x;\nsum\n---\n 12\n(1 row)\n\ndarcy=> SELECT SUM(a), SUM(b) FROM x;\nsum|sum\n---+---\n 3| 12\n(1 row)\n\nOn a system compiled Jan 27, I see the following.\n\ndarcy=> SELECT SUM(a) FROM x;\nsum\n---\n 3\n(1 row)\n\ndarcy=> SELECT SUM(b) FROM x;\nsum\n---\n 12\n(1 row)\n\ndarcy=> SELECT SUM(a), SUM(b) FROM x;\nsum|sum\n---+---\n 12| 12\n(1 row)\n\nSee how the individual sums are correct but I can no longer get both\nsums in one select.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 27 Jan 1999 07:51:55 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with multiple SUMs" }, { "msg_contents": "I am working on it.\n\n\n> After recent changes I find an error with SUM when summing more than\n> one column. 
Here is the test sequence.\n> \n> DROP TABLE x;\n> CREATE TABLE x (a int, b int); \n> INSERT INTO x VALUES (1, 5);\n> INSERT INTO x VALUES (2, 7);\n> SELECT * FROM x;\n> SELECT SUM(a) FROM x;\n> SELECT SUM(b) FROM x;\n> SELECT SUM(a), SUM(b) FROM x;\n> \n> The last three statements give the following expected results when\n> run on a system compiled Jan 19.\n> \n> darcy=> SELECT SUM(a) FROM x;\n> sum\n> ---\n> 3\n> (1 row)\n> \n> darcy=> SELECT SUM(b) FROM x;\n> sum\n> ---\n> 12\n> (1 row)\n> \n> darcy=> SELECT SUM(a), SUM(b) FROM x;\n> sum|sum\n> ---+---\n> 3| 12\n> (1 row)\n> \n> On a system compiled Jan 27, I see the following.\n> \n> darcy=> SELECT SUM(a) FROM x;\n> sum\n> ---\n> 3\n> (1 row)\n> \n> darcy=> SELECT SUM(b) FROM x;\n> sum\n> ---\n> 12\n> (1 row)\n> \n> darcy=> SELECT SUM(a), SUM(b) FROM x;\n> sum|sum\n> ---+---\n> 12| 12\n> (1 row)\n> \n> See how the individual sums are correct but I can no longer get both\n> sums in one select.\n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 27 Jan 1999 09:29:28 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with multiple SUMs" }, { "msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> After recent changes I find an error with SUM when summing more than\n> one column. ...\n> See how the individual sums are correct but I can no longer get both\n> sums in one select.\n\nGood eye!\n\nActually, it looks like *any* two aggregates conflict --- we're\nreporting the result of the rightmost aggregate for all aggregate\nfunctions in a SELECT. 
Using D'Arcy's test table, I also tried\n\ntreetest=> SELECT AVG(a), SUM(a) FROM x;\navg|sum\n---+---\n 3| 3\n(1 row)\n\ntreetest=> SELECT AVG(a), SUM(b) FROM x;\navg|sum\n---+---\n 12| 12\n(1 row)\n\ntreetest=> SELECT AVG(a), COUNT(b) FROM x;\navg|count\n---+-----\n 2| 2\n(1 row)\n\nOops.\n\nThis bug appears to explain some of the regression-test failures I'm\nseeing --- numerology and select_having both contain multiple-aggregate\ncommands that are failing.\n\nIn the select_having test, it looks like multiple aggregates used in\nthe HAVING clause of a SELECT are suffering the same sort of fate\nas those in the target list.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jan 1999 10:30:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with multiple SUMs " }, { "msg_contents": "Fixed.\n\n\n> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > After recent changes I find an error with SUM when summing more than\n> > one column. ...\n> > See how the individual sums are correct but I can no longer get both\n> > sums in one select.\n> \n> Good eye!\n> \n> Actually, it looks like *any* two aggregates conflict --- we're\n> reporting the result of the rightmost aggregate for all aggregate\n> functions in a SELECT. 
Using D'Arcy's test table, I also tried\n> \n> treetest=> SELECT AVG(a), SUM(a) FROM x;\n> avg|sum\n> ---+---\n> 3| 3\n> (1 row)\n> \n> treetest=> SELECT AVG(a), SUM(b) FROM x;\n> avg|sum\n> ---+---\n> 12| 12\n> (1 row)\n> \n> treetest=> SELECT AVG(a), COUNT(b) FROM x;\n> avg|count\n> ---+-----\n> 2| 2\n> (1 row)\n> \n> Oops.\n> \n> This bug appears to explain some of the regression-test failures I'm\n> seeing --- numerology and select_having both contain multiple-aggregate\n> commands that are failing.\n> \n> In the select_having test, it looks like multiple aggregates used in\n> the HAVING clause of a SELECT are suffering the same sort of fate\n> as those in the target list.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 27 Jan 1999 11:16:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with multiple SUMs" } ]
[ { "msg_contents": "> > > But I have a llitle problem. I`m afraid that the lipq++ interface\n> > > doesn't have this funcion call. Is it true?\n> > > Have you got any examples of the using of PQsendQuery and Non Blocking\n> > > queries using this interface lipq or libpq++? I will be very usefull for\n> > > me. Just simply attach it on the response.\n> > >\n> > > It would be very appreciated. Thank you again.\n> >\n> > Maybe it hasn't been added to libpq++ yet.\n> \n> OK!!! NO PROBLEM!!! BUT I`AM VERY INTERESTED ON FINDING EXAMPLES OF USING\n> NON-BLOCKING QUERIES with libpq. Do you know where it could be? Have you got\n> some ones?\n> \n> I`m sorry for asking you too much. Thanks a lot.\n> \n> \n\nNot sure. Forwarding to hackers list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 27 Jan 1999 09:28:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Non-blocking queries in postgresql" }, { "msg_contents": ">>>>>> But I have a llitle problem. I`m afraid that the lipq++ interface\n>>>>>> doesn't have this funcion call. Is it true?\n>>>>>> Have you got any examples of the using of PQsendQuery and Non Blocking\n>>>>>> queries using this interface lipq or libpq++? I will be very usefull for\n>>>>>> me. Just simply attach it on the response.\n>>>> \n>>>> Maybe it hasn't been added to libpq++ yet.\n\nIndeed it has not been added to libpq++. (libpq++ desperately needs to\nbe adopted by some caring soul. None of the current contributors to\nPostgres seem to use it, so it's not getting maintained. 
As long as it\nkeeps compiling we just ignore it...)\n\n\n>> BUT I`AM VERY INTERESTED ON FINDING EXAMPLES OF USING\n>> NON-BLOCKING QUERIES with libpq.\n\nThe trouble is that applications that need to do this are usually\nnon-trivial; I doubt you'll find any simple readily-available examples.\n\nThe only such application I've built myself is a C++/Tcl/Tk app\nin which the Tcl user interface remains \"live\" during SQL query execution,\nrather than freezing up during each query as it does with libpgtcl's\npg_exec. The idea is to enter a nested Tcl event loop after firing off\na query with PQsendQuery. Checking for the response is done by a Tcl\nfile handler that creates a special event type when the query is\ncomplete, and execution of that event type sets a flag to get out of the\nnested event loop. Meanwhile, regular Tcl events such as keypresses\nand mouse actions are responded to by the nested event loop.\n\nIt'd be difficult to explain it more fully than that without showing you\nlarge chunks of the application code, which I can't really do because\nit's proprietary software :-(.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jan 1999 11:05:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Non-blocking queries in postgresql " } ]
[ { "msg_contents": "Fixed now.\n\n\n> I am working on it.\n> \n> \n> > After recent changes I find an error with SUM when summing more than\n> > one column. Here is the test sequence.\n> > \n> > DROP TABLE x;\n> > CREATE TABLE x (a int, b int); \n> > INSERT INTO x VALUES (1, 5);\n> > INSERT INTO x VALUES (2, 7);\n> > SELECT * FROM x;\n> > SELECT SUM(a) FROM x;\n> > SELECT SUM(b) FROM x;\n> > SELECT SUM(a), SUM(b) FROM x;\n> > \n> > The last three statements give the following expected results when\n> > run on a system compiled Jan 19.\n> > \n> > darcy=> SELECT SUM(a) FROM x;\n> > sum\n> > ---\n> > 3\n> > (1 row)\n> > \n> > darcy=> SELECT SUM(b) FROM x;\n> > sum\n> > ---\n> > 12\n> > (1 row)\n> > \n> > darcy=> SELECT SUM(a), SUM(b) FROM x;\n> > sum|sum\n> > ---+---\n> > 3| 12\n> > (1 row)\n> > \n> > On a system compiled Jan 27, I see the following.\n> > \n> > darcy=> SELECT SUM(a) FROM x;\n> > sum\n> > ---\n> > 3\n> > (1 row)\n> > \n> > darcy=> SELECT SUM(b) FROM x;\n> > sum\n> > ---\n> > 12\n> > (1 row)\n> > \n> > darcy=> SELECT SUM(a), SUM(b) FROM x;\n> > sum|sum\n> > ---+---\n> > 12| 12\n> > (1 row)\n> > \n> > See how the individual sums are correct but I can no longer get both\n> > sums in one select.\n> > \n> > -- \n> > D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> > http://www.druid.net/darcy/ | and a sheep voting on\n> > +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 27 Jan 1999 10:50:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Problem with multiple SUMs" }, { "msg_contents": "Thus spake Bruce Momjian\n> Fixed now.\n\nAnd just seconds short of 3 hours after my report. I wonder how long\nit would take to get a fix from Oracle for something like this.\n\nThanks.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 27 Jan 1999 20:32:51 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with multiple SUMs" } ]
[ { "msg_contents": "I have fixed the problem I introduced with aggregates. They should work\nnow, and the HAVING regression test should work too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 27 Jan 1999 10:52:12 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "regression test HAVING fixed" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have fixed the problem I introduced with aggregates. They should work\n> now, and the HAVING regression test should work too.\n\nAlso, I put in Vadim's recommended fix for the subplan problem.\nThe regression tests look a lot better than they did. The \"union\"\ntest is still failing by adding a bunch of\n\n\tNOTICE: equal: don't know whether nodes of type 600 are equal\n\nlines to the expected output. Anybody know what's causing that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jan 1999 13:34:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regression test HAVING fixed " }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> > I have fixed the problem I introduced with aggregates. They should work\n> > now, and the HAVING regression test should work too.\n> \n> Also, I put in Vadim's recommended fix for the subplan problem.\n> The regression tests look a lot better than they did. The \"union\"\n> test is still failing by adding a bunch of\n> \n> NOTICE: equal: don't know whether nodes of type 600 are equal\n> \n> lines to the expected output. Anybody know what's causing that?\n\nType 600 is Query node. 
Attempt to compare Queries?\nTry gdb with break point @ equalfuncs.c:746...\n\nVadim\n", "msg_date": "Thu, 28 Jan 1999 01:49:29 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regression test HAVING fixed" } ]
[ { "msg_contents": "FYI, I am working on temp tables. I may have something for 6.5,\ndepending on when we start beta.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 27 Jan 1999 11:49:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "TEMP tables" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> FYI, I am working on temp tables. I may have something for 6.5,\n> depending on when we start beta.\n\nHow much time is required?\n\nVadim\nP.S. It's hell to handling SELECT FOR UPDATE in READ COMMITTED \nfor joins -:( It takes so much time...\n", "msg_date": "Thu, 28 Jan 1999 00:07:18 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TEMP tables" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > FYI, I am working on temp tables. I may have something for 6.5,\n> > depending on when we start beta.\n> \n> How much time is required?\n> \n> Vadim\n> P.S. It's hell to handling SELECT FOR UPDATE in READ COMMITTED \n> for joins -:( It takes so much time...\n> \n\nNot sure. I would say a few days, but I am not sure. I think I can get\nsomething in place by the 1st, but am not sure.\n\nSeems pretty simple at this point, but I am not sure what I am going to\nfind when I test it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 27 Jan 1999 12:50:35 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TEMP tables" } ]
[ { "msg_contents": "Hi,\n\n 1. I've just committed some changes to PL/pgSQL and the SPI\n manager.\n\n It's a speedup of PL/pgSQL execution by calling\n ExecEvalExpr() in the executor directly for simple\n expressions that return one single Datum.\n\n For the speed test I've removed all the setup stuff from\n the plpgsql regression and ran the normal queries all in\n one transaction. There are 196 query plans generated\n during the regression and only 37 are left now for which\n PL/pgSQL really calls SPI_execp().\n\n This saves 30% of total execution time! I don't know how\n much of the whole execution time is spent in PL/pgSQL and\n how much is consumed by the normal query processing.\n\n In another test I used a silly add function that simply\n does a \"return $1 + $2\" and built a sum() aggregate on\n top of it. In that case 65% of execution time to\n summarize 20000 int4 values where saved. This is a\n speedup by factor 3.\n\n To be able to do so I've moved some of the declarations\n from spi.c into a new header spi_priv.h so someone has\n access to the _SPI_plan structure for past preparing\n plan-/querytree analysis. And I've added two silly\n functions SPI_push() and SPI_pop() that simply\n increment/decrement the _SPI_curid value. This is\n required for calling ExecEvalExpr(), because there could\n be functions evaluated that use SPI themself and\n otherwise they could not connect to the SPI manager. They\n are dangerous and I'm in doubt if we should document\n them.\n\n 2. While doing the above I've encountered some bad details\n of the SPI manager and the executor.
The Func and Oper\n nodes point to a function cache, which is initially NULL\n and is not copied by copyNode().\n\n For every call of SPI_execp() to execute a prepared plan,\n the whole plan is copied into the current memory context.\n Since this clears out the fcache, the executor has to do\n several syscache lookups for every function or operator\n hit during execution of the plan.\n\n Unfortunately I haven't found a way yet to avoid it.\n Anything I tried so far ended in coredumps or other\n misbehaviour. Maybe someone else has an idea.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 27 Jan 1999 18:08:25 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "PL/pgSQL and SPI" }, { "msg_contents": "Jan Wieck wrote:\n> \n> ... And I've added two silly\n> functions SPI_push() and SPI_pop() that simply\n> increment/decrement the _SPI_curid value. This is\n> required for calling ExecEvalExpr(), because there could\n> be functions evaluated that use SPI themself and\n> otherwise they could not connect to the SPI manager. They\n> are dangerous and I'm in doubt if we should document\n ^^^^^^^^^\nNo more than improper call of SPI_finish()...\n\n> them.\n> \n> 2. While doing the above I've encountered some bad details\n> of the SPI manager and the executor.
The Func and Oper\n> nodes point to a function cache, which is initially NULL\n> and is not copied by copyNode().\n> \n> For every call of SPI_execp() to execute a prepared plan,\n> the whole plan is copied into the current memory context.\n> Since this clears out the fcache, the executor has to do\n> several syscache lookups for every function or operator\n> hit during execution of the plan.\n> \n> Unfortunately I haven't found a way yet to avoid it.\n> Anything I tried so far ended in coredumps or other\n> misbehaviour. Maybe someone else has an idea.\n\nCould we fill most of FunctionCache while parsing query ?!\nWe can do this for \n\n int typlen; /* length of the return type */\n int typbyval; /* true if return type is pass by value */\n...\n Oid foid; /* oid of the function in pg_proc */\n Oid language; /* oid of the language in pg_language */\n int nargs; /* number of arguments */\n\n Oid *argOidVect; /* oids of all the arguments */\n...\n bool istrusted; /* trusted fn? */\n\nand may be others too.\n\nVadim\n", "msg_date": "Thu, 28 Jan 1999 00:43:21 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PL/pgSQL and SPI" }, { "msg_contents": "Vadim wrote:\n\n>\n> Jan Wieck wrote:\n> > 2. While doing the above I've encountered some bad details\n> > of the SPI manager and the executor. The Func and Oper\n> > nodes point to a function cache, which is initially NULL\n> > and is not copied by copyNode().\n> >\n> > For every call of SPI_execp() to execute a prepared plan,\n> > the whole plan is copied into the current memory context.\n> > Since this clears out the fcache, the executor has to do\n> > several syscache lookups for every function or operator\n> > hit during execution of the plan.\n> >\n> > Unfortunately I haven't found a way yet to avoid it.\n> > Anything I tried so far ended in coredumps or other\n> > misbehaviour.
Maybe someone else has an idea.\n>\n> Could we fill most of FunctionCache while parsing query ?!\n> We can do this for\n>\n> int typlen; /* length of the return type */\n> int typbyval; /* true if return type is pass by value */\n> ...\n> Oid foid; /* oid of the function in pg_proc */\n> Oid language; /* oid of the language in pg_language */\n> int nargs; /* number of arguments */\n>\n> Oid *argOidVect; /* oids of all the arguments */\n> ...\n> bool istrusted; /* trusted fn? */\n>\n> and may be others too.\n\n And then letting copyNode() copy the fcache too so it's\n allocated in the same memory context.\n\n Will require a flag in the fcache that is used to tell that\n setFcache() must be called to fill in the remaining fields\n (there are some things taken from the actual executor state).\n This flag is then cleared by copyNode() and the fields in\n question left uncopied.\n\n This might also let us get rid of the tree copy in\n SPI_execp(), if we form another tree-traversal function that\n resets the flag in all Func and Oper nodes of the whole tree,\n so the prepared/saved plan can be used directly.\n\n I'll give it a try some time.\n\n Thanks for the kick, Vadim.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.
#\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 27 Jan 1999 19:04:18 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PL/pgSQL and SPI" }, { "msg_contents": "Jan Wieck wrote:\n> \n> >\n> > Could we fill most of FunctionCache while parsing query ?!\n> \n> And then letting copyNode() copy the fcache too so it's\n> allocated in the same memory context.\n\nOr we could move these items from fcache struct to\nFunc/Oper node...\n\n> Will require a flag in the fcache that is used to tell that\n> setFcache() must be called to fill in the remaining fields\n> (there are some things taken from the actual executor state).\n> This flag is then cleared by copyNode() and the fields in\n> question left uncopied.\n\nI missed here, please explain. What fields are you talking about?\nNote that to support READ COMMITTED level I copy execution plan\n_after_ execution started and so nothing used to keep execution\nstates, but not handled (re-initialized) by ExecInitNode, \nmust be copied.\nAlso, see below.\n\n> This might also let us get rid of the tree copy in\n> SPI_execp(), if we form another tree-traversal function that\n> resets the flag in all Func and Oper nodes of the whole tree,\n> so the prepared/saved plan can be used directly.\n> \n> I'll give it a try some time.\n\nMaybe. But note that if executor will try to use/pfree something\nallocated in previous execution (in another memory context)\nthen we'll get trouble.\n\nVadim\n", "msg_date": "Thu, 28 Jan 1999 01:44:33 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PL/pgSQL and SPI" }, { "msg_contents": "Jan Wieck wrote:\n> \n> 1.
I've just committed some changes to PL/pgSQL and the SPI\n> manager.\n> \n> It's a speedup of PL/pgSQL execution by calling\n> ExecEvalExpr() in the executor directly for simple\n> expressions that return one single Datum.\n> \n...\n> \n> To be able to do so I've moved some of the declarations\n> from spi.c into a new header spi_priv.h so someone has\n> access to the _SPI_plan structure for past preparing\n> plan-/querytree analysis. And I've added two silly\n> functions SPI_push() and SPI_pop() that simply\n> increment/decrement the _SPI_curid value. This is\n> required for calling ExecEvalExpr(), because there could\n> be functions evaluated that use SPI themself and\n> otherwise they could not connect to the SPI manager. They\n> are dangerous and I'm in doubt if we should document\n> them.\n\nBTW, Jan, did you consider ability to add new function\nfor fast expression evaluation to SPI itself and than just\nuse this func in PL/pgSQL?\nThis function seems to be generally usefull.\nAnd we could avoid SPI_push/SPI_pop...\n\nVadim\n", "msg_date": "Thu, 28 Jan 1999 04:27:25 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PL/pgSQL and SPI" }, { "msg_contents": "Vadim wrote:\n>\n> Jan Wieck wrote:\n> >\n> > 1. I've just committed some changes to PL/pgSQL and the SPI\n> > manager.\n> >\n> > It's a speedup of PL/pgSQL execution by calling\n> > ExecEvalExpr() in the executor directly for simple\n> > expressions that return one single Datum.\n>\n> BTW, Jan, did you consider ability to add new function\n> for fast expression evaluation to SPI itself and than just\n> use this func in PL/pgSQL?\n> This function seems to be generally usefull.\n> And we could avoid SPI_push/SPI_pop...\n\n Clarification:\n\n I'm doing many tests on the SPI generated plan to ensure\n that it is so simple that ExecEvalExpr() cannot stumble\n over it.
In detail it must be something that has only one\n targetentry, absolutely no qual, lefttree, righttree or\n something else. And all the nodes in the TLE expression\n must only be Expr (OP, FUNC, OR, AND, NOT only), Const or\n Param ones.\n\n This is required, because I have to fake an ExprContext\n that contains the values for the parameters only. The\n above ensures, that ExecEvalExpr() will never touch\n anything else than the ecxt_param_list_info and thus will\n not notice that it is a faked one.\n\n Well, but you're right, I could add some smartness to SPI.\n First, it could do the same checks on the generated plan that\n ensure it really returns 1 (and only ever 1) Datum based only\n on function calls, constants or parameters. If this is the\n case, it could internally call ExecEvalExpr() and build a\n faked heap tuple on SPI_execp(). Someone using SPI_exec()\n isn't interested in speed, so I would leave it out there.\n\n And two new functions\n\n bool SPI_is_simple_expr(void *plan);\n Datum SPI_eval_simple_expr(void *plan,\n Datum *values,\n char *Nulls,\n bool *isNull,\n Oid *rettype);\n\n could gain more direct access to such expressions suppressing\n the need to diddle with the SPI tuple table for getting just\n one Datum.\n\n Yes, I think it would be a good enhancement. I'll go for it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 28 Jan 1999 13:31:10 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PL/pgSQL and SPI" } ]
[ { "msg_contents": "Current CVS sources are giving silly values for the cost fields of\nEXPLAIN output:\n\ntreetest=> explain select * from marketorderhistory order by ordertime;\nNOTICE: QUERY PLAN:\n\nSort (cost=??????? size=1610612736 width=1081364283)\n -> Seq Scan on marketorderhistory (cost=??????? size=1610612736 width=108136\n4283)\n\nEXPLAIN\ntreetest=> explain select * from marketorderhistory where ordertime = 'now';\nNOTICE: QUERY PLAN:\n\nIndex Scan using marketorderhistory_ordertime_in on marketorderhistory (cost=??\n????? size=-536870912 width=1074351372)\n\nEXPLAIN\n\nLooks the same before and after VACUUM, btw.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jan 1999 15:18:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Something wacko about EXPLAIN cost stats" }, { "msg_contents": "Can't reproduce it here.\n\n> Current CVS sources are giving silly values for the cost fields of\n> EXPLAIN output:\n> \n> treetest=> explain select * from marketorderhistory order by ordertime;\n> NOTICE: QUERY PLAN:\n> \n> Sort (cost=??????? size=1610612736 width=1081364283)\n> -> Seq Scan on marketorderhistory (cost=??????? size=1610612736 width=108136\n> 4283)\n> \n> EXPLAIN\n> treetest=> explain select * from marketorderhistory where ordertime = 'now';\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using marketorderhistory_ordertime_in on marketorderhistory (cost=??\n> ????? size=-536870912 width=1074351372)\n> \n> EXPLAIN\n> \n> Looks the same before and after VACUUM, btw.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Feb 1999 14:06:55 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Something wacko about EXPLAIN cost stats" } ]
[ { "msg_contents": "\nIs there a workaround for an undeclared NAN in backend/utils/adt/float.c\non FreeBSD? I thought I compiled this before, but right now I don't see\nany evidence that I did. If I did, the only difference between then and \nnow would be adding tcl support. The sources are cvsup'd from 1/11/99,\nbut I'm getting Connection Refused so I can't update.\n\nSuggestions? Hints?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n", "msg_date": "Wed, 27 Jan 1999 18:45:55 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "NAN on FreeBSD 2.2.8" }, { "msg_contents": "On Wed, 27 Jan 1999, Vince Vielhaber wrote:\n\n> \n> Is there a workaround for an undeclared NAN in backend/utils/adt/float.c\n> on FreeBSD? I thought I compiled this before, but right now I don't see\n> any evidence that I did. If I did, the only difference between then and \n> now would be adding tcl support. The sources are cvsup'd from 1/11/99,\n> but I'm getting Connection Refused so I can't update.\n> \n> Suggestions? Hints?\n\nGive me a few minutes...just did the aout->elf upgrade on Hub and am still\nrecovering a few of the old aout binaries...cvsupd is one of them, and, I\nthink, the last one...*cross fingers*\n\n\nMarc G.
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 27 Jan 1999 20:19:40 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NAN on FreeBSD 2.2.8" }, { "msg_contents": "\nOn 28-Jan-99 The Hermit Hacker wrote:\n> On Wed, 27 Jan 1999, Vince Vielhaber wrote:\n> \n>> \n>> Is there a workaround for an undeclared NAN in backend/utils/adt/float.c\n>> on FreeBSD? I thought I compiled this before, but right now I don't see\n>> any evidence that I did. If I did, the only difference between then and \n>> now would be adding tcl support. The sources are cvsup'd from 1/11/99,\n>> but I'm getting Connection Refused so I can't update.\n>> \n>> Suggestions? Hints?\n> \n> Give me a few minutes...just did the aout->elf upgrade on Hub and am still\n> recovering a few of the old aout binaries...cvsupd is one of them, and, I\n> think, the last one...*cross fingers*\n\nYep, that's what I figured from your posts on the freebsd list. I just \ncvsupped but haven't had the chance to try it again. I'm setting up \nanother machine so I can get cloud-nine-gifts off of my desktop machine.\nEventually I'll have all the various businesses on that machine. I \nfinally finished the shopping cart and ordering system (using PostgreSQL),\nand got the java creditcard encoder/decoder running. With a bit of luck\nI'll have some available time again!!! With *alot* of luck, perhaps even\nenough time to look at libpq++ (from the comment I heard here earlier).
\nAt any rate, I'll get the compile running and holler if it works now or\nnot.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n", "msg_date": "Wed, 27 Jan 1999 20:48:12 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] NAN on FreeBSD 2.2.8" }, { "msg_contents": "> \n> Is there a workaround for an undeclared NAN in backend/utils/adt/float.c\n> on FreeBSD? I thought I compiled this before, but right now I don't see\n> any evidence that I did. If I did, the only difference between then and \n> now would be adding tcl support. The sources are cvsup'd from 1/11/99,\n> but I'm getting Connection Refused so I can't update.\n> \n> Suggestions? Hints?\n> \n\nIt is fixed in the current tree. \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 27 Jan 1999 21:02:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NAN on FreeBSD 2.2.8" }, { "msg_contents": "\nOn 28-Jan-99 Bruce Momjian wrote:\n>> \n>> Is there a workaround for an undeclared NAN in backend/utils/adt/float.c\n>> on FreeBSD? I thought I compiled this before, but right now I don't see\n>> any evidence that I did. If I did, the only difference between then and \n>> now would be adding tcl support.
The sources are cvsup'd from 1/11/99,\n>> but I'm getting Connection Refused so I can't update.\n>> \n>> Suggestions? Hints?\n>> \n> \n> It is fixed in the current tree. \n>\n\nYep. that it is. Have you tried compiling it with tcl yet? It's not a\npretty picture on FreeBSD. I'll need to dig into it to see just why it\ndoes what it does, but configure a) doesn't know that tcl and tk could be\nin different locations; b) misses the include path (/usr/local/include/tk8.0)\nand c) does the same for tk. To get past 'a' I went into the tcl8.0 dir\nand added a link to the tk directory. Then each time the compile failed\nI had to add the tcl8.0 and tk8.0 dirs to the include path to fix 'b' and\n'c'. I'll dig into it tomorrow if I get a chance.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n", "msg_date": "Wed, 27 Jan 1999 21:42:58 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] NAN on FreeBSD 2.2.8" } ]
[ { "msg_contents": "\ntesting ...\n\n", "msg_date": "Thu, 28 Jan 1999 04:47:41 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy>", "msg_from_op": true, "msg_subject": "a test" } ]
[ { "msg_contents": "\n\ntest\n\n", "msg_date": "Thu, 28 Jan 1999 04:50:30 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy>", "msg_from_op": true, "msg_subject": "a test" } ]
[ { "msg_contents": "\ntest\n\n", "msg_date": "Thu, 28 Jan 1999 04:51:09 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy>", "msg_from_op": true, "msg_subject": "a test" } ]
[ { "msg_contents": "\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 28 Jan 1999 05:51:56 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Test two..." } ]
[ { "msg_contents": "\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 28 Jan 1999 05:52:44 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Tsest .." } ]
[ { "msg_contents": "\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 28 Jan 1999 05:54:12 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Death!" }, { "msg_contents": "\n Whom?\n\nOn Thu, 28 Jan 1999, The Hermit Hacker wrote:\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\nOleg.\n---- \n Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 28 Jan 1999 13:00:17 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Death!" }, { "msg_contents": "\nWas a test :)\n\n\nOn Thu, 28 Jan 1999, Oleg Broytmann wrote:\n\n> \n> Whom?\n> \n> On Thu, 28 Jan 1999, The Hermit Hacker wrote:\n> > \n> > Marc G. Fournier \n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > \n> > \n> \n> Oleg.\n> ---- \n> Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/\n> Programmers don't die, they just GOSUB without RETURN.\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 28 Jan 1999 08:39:16 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Death!" }, { "msg_contents": "Hi!\n\n I knew this, sure :) Esp.
after you've sent 2 test messages before.\n Finally I did a test by myself - sent my mail from a different address.\nThen you got a bounce, forward it here - now I know there is \"non-member\nsubmission disabled\".\n Thank you for helping me test it :)\n\nOn Thu, 28 Jan 1999, The Hermit Hacker wrote:\n\n> \n> Was a test :)\n> \n> \n> On Thu, 28 Jan 1999, Oleg Broytmann wrote:\n> \n> > \n> > Whom?\n> > \n> > On Thu, 28 Jan 1999, The Hermit Hacker wrote:\n> > > \n> > > Marc G. Fournier \n> > > Systems Administrator @ hub.org \n> > > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > > \n> > > \n> > \n> > Oleg.\n> > ---- \n> > Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/\n> > Programmers don't die, they just GOSUB without RETURN.\n> > \n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 28 Jan 1999 15:45:05 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Death!" } ]
[ { "msg_contents": "\n\nMarc G. Fournier [email protected]\nSystems Administrator, Acadia University\n\n \"These are my opinions, which are not necessarily shared by my employer\"\n\n", "msg_date": "Thu, 28 Jan 1999 06:06:41 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Testing..." } ]
[ { "msg_contents": "Hi,\n\nFYI regarding the recent performance issues.\n\nThis lib is ported to win32. I don't know, how many unixes are \nsupported. Perhaps it's better to reuse something and not to \nreimplementit from scratch. But you are to decide.\n\nforwarded from comp.os.linux.announce\n\n---\n\nmmalloc is a heap manager. It is written from a scratch. Main goals were\naccurate RAM consuming and performance. Goals achieved using relatively\nnew virtual memory mapping techniques (known in UNIX wolrd as mmap ;-) and\nAVL trees.\n\n Major advantages of this heap manager:\n * Trimming and \"no commit\". mmalloc immediately (not in Windows\n world) releases all deallocated pages to the system. Also all\n allocated pages are not commited, because new areas are just mapped\n in, still not commited and only user program could commit memory. So\n the following rule is real true:\n \"NO UNUSED MEMORY WILL BE CONSUMED\".\n\n * Best-fit. Best-fit strategy was used. As shown in real world \n experiments, best-fit proven to be more accurate than first-fit.\n\n * AVL Trees. Primary internal structure used for controlling large\n blocks (>256 bytes, tunable). So the time consumed by allocating\n new block is proportional to O(log N), where N is the number of memory\n fragments. Implementation is in pure C and optimized.\n\n * Small blocks grouped. Small blocks are grouped within pages. This \n provides more accurate memory consuming. When doing 100000 times \n mmalloc(1) only ~130k of real memory will be allocated.\n\n * Smart alignment. Blocks smaller than MALLOC_ALIGN (tunable)\n are not aligned. (typical for i386 are blocks <4 bytes). Other\n blocks are aligned by MALLOC_ALIGN.\n\n * Small overhead. For blocks large blocks overhead is 32 bytes. It is\n approximately 12.5% for 256 bytes long block. For larger blocks size\n of this control structure is ever less noticed.
Small blocks are\n grouped within one page and resulting overhead is less than\n 0.2% (8/4096*100).\n\n * Pure ANSI-C. Pure ANSI-C without any extensions was used. So library\n should be portable. Only vmm functions are not portable, other library\n parts should be.\n\nVisit homepage:\n \n http://www.geocities.com/SiliconValley/Circuit/5426/index.html\n\nValery\n\n----\n\nCiao\n\nUlrich\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nUlrich Vo\"3 [email protected]\n\n \" As a human being I claim the right\n to be widely inconsistent \" John Peel \n \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Thu, 28 Jan 1999 12:06:25 +0100", "msg_from": "\"Ulrich Voss\" <[email protected]>", "msg_from_op": true, "msg_subject": "new heap manager mmalloc" }, { "msg_contents": "\nTwo things against it...\n\nFirst, its a Linux-ism...he's got it ported to Win and Linux, that's it...\n\nSecond:\n\n GNU LIBRARY GENERAL PUBLIC LICENSE\n Version 2, June 1991\n\n Copyright (C) 1991 Free Software Foundation, Inc.\n 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n[This is the first released version of the library GPL. It is\n numbered 2 because it goes with version 2 of the ordinary GPL.]\n\n\n\nOn Thu, 28 Jan 1999, Ulrich Voss wrote:\n\n> Hi,\n> \n> FYI regarding the recent performance issues.\n> \n> This lib is ported to win32. I don't know, how many unixes are \n> supported. Perhaps it's better to reuse something and not to \n> reimplementit from scratch. But you are to decide.\n> \n> forwarded from comp.os.linux.announce\n> \n> ---\n> \n> mmalloc is a heap manager. It is written from a scratch. Main goals were\n> accurate RAM consuming and performance.
Goals achieved using relatively\n> new virtual memory mapping techniques (known in UNIX wolrd as mmap ;-) and\n> AVL trees.\n> \n> Major advantages of this heap manager:\n> * Trimming and \"no commit\". mmalloc immediately (not in Windows\n> world) releases all deallocated pages to the system. Also all\n> allocated pages are not commited, because new areas are just mapped\n> in, still not commited and only user program could commit memory. So\n> the following rule is real true:\n> \"NO UNUSED MEMORY WILL BE CONSUMED\".\n> \n> * Best-fit. Best-fit strategy was used. As shown in real world \n> experiments, best-fit proven to be more accurate than first-fit.\n> \n> * AVL Trees. Primary internal structure used for controlling large\n> blocks (>256 bytes, tunable). So the time consumed by allocating\n> new block is proportional to O(log N), where N is the number of memory\n> fragments. Implementation is in pure C and optimized.\n> \n> * Small blocks grouped. Small blocks are grouped within pages. This \n> provides more accurate memory consuming. When doing 100000 times \n> mmalloc(1) only ~130k of real memory will be allocated.\n> \n> * Smart alignment. Blocks smaller than MALLOC_ALIGN (tunable)\n> are not aligned. (typical for i386 are blocks <4 bytes). Other\n> blocks are aligned by MALLOC_ALIGN.\n> \n> * Small overhead. For blocks large blocks overhead is 32 bytes. It is\n> approximately 12.5% for 256 bytes long block. For larger blocks size\n> of this control structure is ever less noticed. Small blocks are\n> grouped within one page and resulting overhead is less than\n> 0.2% (8/4096*100).\n> \n> * Pure ANSI-C. Pure ANSI-C without any extensions was used. So library\n> should be portable.
Only vmm functions are not portable, other library\n> parts should be.\n> \n> Visit homepage:\n> \n> http://www.geocities.com/SiliconValley/Circuit/5426/index.html\n> \n> Valery\n> \n> ----\n> \n> Ciao\n> \n> Ulrich\n> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n> Ulrich Vo\"3 [email protected]\n> \n> \" As a human being I claim the right\n> to be widely inconsistent \" John Peel \n> \n> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 28 Jan 1999 08:41:49 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] new heap manager mmalloc" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Two things against it...\n> First, its a Linux-ism...he's got it ported to Win and Linux, that's it...\n\nIf it depends on mmap(), porting may not be trivial, either.\n\nStill, if it's a drop-in replacement for malloc, I see no reason that\na particular installation couldn't use it with Postgres --- and we could\neven recommend that, if it's a big win. The license conflict would\npreclude actually including it with Postgres, but pointing to it is\nno problem.\n\nAnyone want to try it and see if it provides a noticeable speed\nimprovement?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Jan 1999 10:50:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] new heap manager mmalloc " }, { "msg_contents": "> \n> Two things against it...\n> \n> First, its a Linux-ism...he's got it ported to Win and Linux, that's it...\n\nActually, our problem is not malloc itself. Most Unix OS's have pretty\ngood malloc's, tuned to their OS. 
The problem is the number of times we\ncall it.\n\nMassimo's idea of having several alloc contexts, some of which supply\nmemory from a backend-managed pool is a good idea. When I was working\non a SQL backend design, I thought this would be a very good way to go. \nSQL databases have a nice end-of-transaction free-it-all point that can\nmake use of such a give-me-the-memory and don't worry about freeing it\nuntil I am done with the transaction model.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 28 Jan 1999 12:46:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] new heap manager mmalloc" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Actually, our problem is not malloc itself. Most Unix OS's have pretty\n> good malloc's, tuned to their OS. The problem is the number of times we\n> call it.\n\nWell, some malloc libs are noticeably better than others, but as long\nas the operating assumption is that any allocated block can be freed\nindependently of any other one, it's hard to do a *lot* better than\na standard malloc library. You have to keep track of each allocated\nchunk and each free area, individually, to meet malloc/free's API.\n\nWhat we need to do is exploit the notion of pooled allocation\n(contexts), wherein the memory management apparatus doesn't keep track\nof each allocation individually, but just takes it from a pool of space\nthat will all be freed at the same time. End of statement, end of\ntransaction, etc, are good pool lifetimes for Postgres.\n\nWe currently have the worst of both worlds: we pay malloc's overhead,\nand we have a *separate* bookkeeping layer on top of malloc that links\nallocated blocks together to allow everything to be freed at end-of-\ncontext. 
We should be able to do this more cheaply than malloc, not\nmore expensively.\n\nBTW, I already did something similar in the frontend libpq, and it\nwas a considerable win for reading large SELECT results.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Jan 1999 13:57:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] new heap manager mmalloc " }, { "msg_contents": "Tom Lane wrote:\n\n>\n> Bruce Momjian <[email protected]> writes:\n> > Actually, our problem is not malloc itself. Most Unix OS's have pretty\n> > good malloc's, tuned to their OS. The problem is the number of times we\n> > call it.\n>\n> [...]\n>\n> What we need to do is exploit the notion of pooled allocation\n> (contexts), wherein the memory management apparatus doesn't keep track\n> of each allocation individually, but just takes it from a pool of space\n> that will all be freed at the same time. End of statement, end of\n> transaction, etc, are good pool lifetimes for Postgres.\n>\n> We currently have the worst of both worlds: we pay malloc's overhead,\n> and we have a *separate* bookkeeping layer on top of malloc that links\n> allocated blocks together to allow everything to be freed at end-of-\n> context. We should be able to do this more cheaply than malloc, not\n> more expensively.\n\n Right right right! Pooled allocation will gain performance\n and the separate bookkeeping should be more useful.\n\n I did some little hacking and placed a silly pool into\n palloc() and friends. It simply uses bigger blocks of memory\n for small allocations and keeps only a refcount in the block\n to see when all allocations are pfree()'d. No chunks inside a\n block are reused, instead it waits until the entire block is\n free to throw it away (what doesn't happen as often as it\n should). 
The blocks are allocated in the same memory context\n the chunks should have been, so transaction ends\n AllocSetReset() cleans them out anyway.\n\n It is a poor, simple way to reduce the number of palloc()'d\n segments, so it saves calls to malloc()/free() and reduces\n the bookkeeping overhead in AllocSet...().\n\n The performance win on the regression test is about 10%. So\n it demonstrates that it's a good place for optimization.\n\n For now, it noticably raises the memory consumption of the\n backend. I think there are many small palloc()'d chunks not\n free'd, that cause my entier blocks to stay in memory. And\n since there is no reuse in a block, this summarizes up. This\n kind of pool is useful for things that should stay until\n transaction end.\n\n But I can think of another thing that might help. A temporary\n allocation pool stack.\n\n The functions to manage it are:\n\n void tmppalloc_push(void);\n void tmppalloc_pop(void);\n void *tmppalloc(Size size);\n void *tmpuppalloc(Size size, int levels_up);\n\n The stack does also handle the allocations in bigger blocks.\n But no individual free's are necessary, because the closing\n pop will throw away the innermost memory context. And since\n there are no free and realloc functions, it must not remember\n any information about the individual chunks. Could be a much\n bigger win for all our little allocs (I've seen thousands of\n 2-16 byte allocations when hacking in the above - with the\n old palloc(), every such has an additional overhead of 16\n bytes in the AllocSet).\n\n Well, it requires us to revise the entire backend code,\n module per module, to look which palloc()'s could be changed\n into temp ones. But we'll find MANY places where memory\n isn't pfree()'d that nobody wants until transaction end.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 29 Jan 1999 15:02:26 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] new heap manager mmalloc" }, { "msg_contents": "> The functions to manage it are:\n> \n> void tmppalloc_push(void);\n> void tmppalloc_pop(void);\n> void *tmppalloc(Size size);\n> void *tmpuppalloc(Size size, int levels_up);\n\nThis is a nice interface that I think would work. Let's map out the\n*alloc options after 6.5, pick one, and go for it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 30 Jan 1999 00:22:35 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] new heap manager mmalloc" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Right right right! Pooled allocation will gain performance\n> and the separate bookkeeping should be more useful.\n\n...\n\n> But I can think of another thing that might help. A temporary\n> allocation pool stack.\n> \n> The functions to manage it are:\n> \n> void tmppalloc_push(void);\n> void tmppalloc_pop(void);\n> void *tmppalloc(Size size);\n> void *tmpuppalloc(Size size, int levels_up);\n\nYou might be interested in:\n\nPOOL(9) NetBSD Kernel Manual POOL(9)\n\nNAME\n pool_create, pool_destroy, pool_get, pool_put, pool_prime - resource-pool\n manager\n\n...\n\nDESCRIPTION\n These utility routines provide management of pools of fixed-sized areas\n of memory. Resource pools set aside an amount of memory for exclusive\n use by the resource pool owner. 
This can be used by applications to\n     guarantee the availability of a minimum amount of memory needed to\n     continue operation independent of the memory resources currently available\n     from the system-wide memory allocator (malloc(9)). The pool manager can\n     optionally obtain temporary memory by calling the palloc() function\n     passed to pool_create(), for extra pool items in case the number of\n     allocations exceeds the nominal number of pool items managed by a pool\n     resource. This temporary memory will be automatically returned to the\n     system at a later time.\n\n...\n\nCODE REFERENCES\n     The pool manager is implemented in the file sys/kern/subr_pool.c.\n\neg. ftp://ftp.NetBSD.org/pub/NetBSD-current/src/sys/kern/subr_pool.c\n\nCheers,\n\nPatrick\n", "msg_date": "Sat, 30 Jan 1999 15:08:33 +0000 (GMT)", "msg_from": "\"Patrick Welche\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] new heap manager mmalloc" }, { "msg_contents": "\nI have added this to the TODO list:\n\n* improve dynamic memory allocation by introducing tuple-context memory \n  allocation\n* add pooled memory allocation where allocations are freed only as a\n  group \n\n\n> Bruce Momjian <[email protected]> writes:\n> > Actually, our problem is not malloc itself. Most Unix OS's have pretty\n> > good malloc's, tuned to their OS. The problem is the number of times we\n> > call it.\n> \n> Well, some malloc libs are noticeably better than others, but as long\n> as the operating assumption is that any allocated block can be freed\n> independently of any other one, it's hard to do a *lot* better than\n> a standard malloc library. 
You have to keep track of each allocated\n> chunk and each free area, individually, to meet malloc/free's API.\n> \n> What we need to do is exploit the notion of pooled allocation\n> (contexts), wherein the memory management apparatus doesn't keep track\n> of each allocation individually, but just takes it from a pool of space\n> that will all be freed at the same time. End of statement, end of\n> transaction, etc, are good pool lifetimes for Postgres.\n> \n> We currently have the worst of both worlds: we pay malloc's overhead,\n> and we have a *separate* bookkeeping layer on top of malloc that links\n> allocated blocks together to allow everything to be freed at end-of-\n> context. We should be able to do this more cheaply than malloc, not\n> more expensively.\n> \n> BTW, I already did something similar in the frontend libpq, and it\n> was a considerable win for reading large SELECT results.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Jul 1999 21:25:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] new heap manager mmalloc" } ]
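The pooled-allocation idea running through this thread — carve small requests out of big blocks, keep no per-chunk bookkeeping, and free everything at one well-known lifetime boundary — can be sketched in a few dozen lines of C. This follows the shape of Jan's proposed tmppalloc_push/tmppalloc/tmppalloc_pop interface but is only an illustration, not PostgreSQL code: the block size, alignment, and structure names are invented here, and his tmpuppalloc (allocating into an enclosing level) is omitted.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define POOL_BLOCK_SIZE 8192
#define POOL_ALIGN 8

/* One big block; small requests are carved out of "data" sequentially. */
typedef struct PoolBlock {
    struct PoolBlock *next;
    size_t used;                /* high-water mark: the only bookkeeping */
    size_t size;
    char data[];
} PoolBlock;

/* One level of the temporary-allocation stack. */
typedef struct PoolLevel {
    struct PoolLevel *parent;
    PoolBlock *blocks;
} PoolLevel;

static PoolLevel *pool_top = NULL;

/* Enter a new allocation scope. */
void tmppalloc_push(void)
{
    PoolLevel *lv = malloc(sizeof *lv);

    lv->parent = pool_top;
    lv->blocks = NULL;
    pool_top = lv;
}

/*
 * Allocate from the innermost scope (a scope must have been pushed).
 * No per-chunk header and no free list, which is why this can beat a
 * general-purpose malloc for short-lived data.
 */
void *tmppalloc(size_t size)
{
    PoolBlock *b = pool_top->blocks;

    size = (size + POOL_ALIGN - 1) & ~(size_t) (POOL_ALIGN - 1);
    if (b == NULL || b->size - b->used < size) {
        size_t bsize = size > POOL_BLOCK_SIZE ? size : POOL_BLOCK_SIZE;

        b = malloc(sizeof *b + bsize);
        b->next = pool_top->blocks;
        b->used = 0;
        b->size = bsize;
        pool_top->blocks = b;   /* leftover room in the old head block is
                                 * simply abandoned -- no reuse within
                                 * blocks, as in Jan's experiment */
    }
    b->used += size;
    return b->data + b->used - size;
}

/* Leave the scope: everything allocated since the push vanishes at once. */
void tmppalloc_pop(void)
{
    PoolLevel *lv = pool_top;

    while (lv->blocks) {
        PoolBlock *next = lv->blocks->next;

        free(lv->blocks);
        lv->blocks = next;
    }
    pool_top = lv->parent;
    free(lv);
}
```

In a backend, a push/pop pair would bracket a statement or transaction, which is exactly the "free it all at end of context" lifetime Tom and Bruce describe.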
[ { "msg_contents": "Hi,\n\n just committed partial support for mixed case identifiers in\n PL/pgSQL using the \"Identifier\" syntax.\n\n Partial means, that PL/pgSQL does not support all possible\n identifiers that can occur. Inside double quotes, only\n alphanumerics or underscore are allowed.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 28 Jan 1999 13:01:17 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "PL/pgSQL mixed case support" } ]
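Jan's note says that inside double quotes PL/pgSQL currently accepts only alphanumerics and underscore, while case is preserved rather than folded. As an illustration of that restriction only — this is a toy validator, not the actual PL/pgSQL scanner — an extractor for such identifiers might look like:

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/*
 * Extract a double-quoted identifier under the "partial" rule described
 * above: inside the quotes only alphanumerics and underscore are
 * accepted, and the case of what is there is kept as written.
 * Returns the identifier length copied into "out" (quotes stripped),
 * or -1 if the input is not acceptable.
 */
int scan_quoted_ident(const char *src, char *out, size_t outlen)
{
    size_t n = 0;

    if (*src++ != '"')
        return -1;                  /* not a quoted identifier at all */
    while (*src != '\0' && *src != '"') {
        if (!isalnum((unsigned char) *src) && *src != '_')
            return -1;              /* the documented restriction */
        if (n + 1 >= outlen)
            return -1;              /* no room left for the NUL */
        out[n++] = *src++;          /* case preserved, not downcased */
    }
    if (*src != '"' || n == 0)
        return -1;                  /* unterminated or empty */
    out[n] = '\0';
    return (int) n;
}
```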
[ { "msg_contents": "Hi... hope to send this e-mail to the right place!\n\nWe are a non-commercial place, and I'm working for no money. We are in need\nof working with a database for 9 diferents but relatted databases (something\nlike 9G of information).\n\nWe started using PostgreSQL 6.3.2, but as we had lots of problems, including\nsome discussed in this list, and as Sybase has a popular name we decided to\nmigrate to Sybase ASE 11.0.3 for Linux.\n\nBut, as I guessed at first, we keept on dealing with lots of problems worse\nthen those first as much we decided to reconsider the database solution.\n\nSo, I need your help to the following:\n- How and where can I change the 25 users connections hard limit??\n- tell me about how Postgresql would probably work in a 100 user connection\nspot and a 9 shared databases... I'm asking your developer opinium about\nthat!!\n\nThanks for any help you can do to me!...\n\nSilvio Payva\nCongregation Christian of USA\nSystem Analist", "msg_date": "Thu, 28 Jan 1999 10:41:06 -0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Using PostgreSQL in spite of ASE" } ]
[ { "msg_contents": "At release 6.4.2, COPY does not respect column defaults: \n\njunk=> create table testbed (\njunk-> f1 int4 default 5, \njunk-> f2 float default 7.34,\njunk-> f3 datetime default now(),\njunk-> f4 text default 'default');\nCREATE\njunk=> copy testbed from stdin;\nEnter info followed by a newline\nEnd with a backslash and a period on a line by itself.\n>> \n>> \\.\njunk=> select * from testbed;\nf1|f2|f3|f4\n--+--+--+--\n 0| | | \n(1 row)\n\nINSERT works correctly, however.\n\nIs this intentional, or a bug?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Many are the afflictions of the righteous; but the \n LORD delivereth him out of them all.\" \n Psalm 34:19 \n\n\n", "msg_date": "Thu, 28 Jan 1999 13:28:33 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Bug or feature? COPY ignores column defaults" }, { "msg_contents": "Oliver Elphick wrote:\n> \n> At release 6.4.2, COPY does not respect column defaults:\n> \n> INSERT works correctly, however.\n> \n> Is this intentional, or a bug?\n\nThis is standard behaviour. DEFAULT value is sabstituted\nonly if column value (including NULL) is not specified in \nINSERT statement. \n\nVadim\n", "msg_date": "Thu, 28 Jan 1999 21:11:51 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug or feature? COPY ignores column defaults" }, { "msg_contents": ">\n> Oliver Elphick wrote:\n> >\n> > At release 6.4.2, COPY does not respect column defaults:\n> >\n> > INSERT works correctly, however.\n> >\n> > Is this intentional, or a bug?\n>\n> This is standard behaviour. DEFAULT value is sabstituted\n> only if column value (including NULL) is not specified in\n> INSERT statement.\n\n And so for the rule system. 
It is not invoked on COPY, so\n rewrite rules don't take effect.\n\n If you want some columns to have defaults assigned when the\n value in COPY is NULL, or maybe override something like a\n timestamp field, you could define a trigger. Triggers are\n called from COPY.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 28 Jan 1999 15:37:03 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug or feature? COPY ignores column defaults" }, { "msg_contents": "Vadim Mikheev wrote:\n >Oliver Elphick wrote:\n >> \n >> At release 6.4.2, COPY does not respect column defaults:\n >> \n >> INSERT works correctly, however.\n >> \n >> Is this intentional, or a bug?\n >\n >This is standard behaviour. DEFAULT value is substituted\n >only if column value (including NULL) is not specified in \n >INSERT statement. \n\nWell, isn't that the case here?\n\n junk=> copy testbed from stdin;\n Enter info followed by a newline\n End with a backslash and a period on a line by itself.\n >> \n >> \\.\n\nI haven't specified \\N; there is no value at all for the column, so\nsurely the default should be used?\n\nIf that is not the case, I will add an explanation to the documentation for\nCOPY.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Many are the afflictions of the righteous; but the \n LORD delivereth him out of them all.\" \n Psalm 34:19 \n\n\n", "msg_date": "Thu, 28 Jan 1999 15:07:33 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Bug or feature? 
COPY ignores column defaults " }, { "msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Oliver Elphick wrote:\n>> At release 6.4.2, COPY does not respect column defaults:\n>> INSERT works correctly, however.\n>> Is this intentional, or a bug?\n\n> This is standard behaviour.\n\nAs it must be, or dumping/reloading a table via COPY would\nfail to preserve null fields in columns with defaults.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Jan 1999 10:41:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug or feature? COPY ignores column defaults " }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> junk=> copy testbed from stdin;\n> Enter info followed by a newline\n> End with a backslash and a period on a line by itself.\n>>> \n>>> \\.\n\n> I haven't specified \\N; there is no value at all for the column, so\n> surely the default should be used?\n\nOh, I see what you're complaining about. No, that still shouldn't\nmean \"substitute the default\". An empty input means an empty string\nfor text fields. It MUST NOT mean substitute the default, or you\ncan't save and reload empty text fields.\n\nI would argue that an empty input field in COPY ought to be a syntax\nerror for int4 and other types that don't accept an empty string as a\nvalid external representation. You ought to be getting something much\nlike the result of\n\nplay=> select '':int4;\nERROR: parser: parse error at or near \":\"\nplay=> select '':float;\nERROR: parser: parse error at or near \":\"\nplay=>\n\n(In fact, I'm surprised you're not getting that. Is COPY ignoring\nthe complaints from the type conversion routines?)\n\nThere's a further issue here, which is that (I assume) you just pressed\nreturn and didn't type the three TAB characters that should have been\nrequired as field separators for your four-column table. 
That should've\nbeen a syntax error too, IMHO.\n\nSo, I agree COPY has a bug, but not the one you say ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Jan 1999 14:48:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug or feature? COPY ignores column defaults " }, { "msg_contents": "I wrote:\n> You ought to be getting something much like the result of\n\n> play=> select '':int4;\n> ERROR: parser: parse error at or near \":\"\n\nSheesh, need to learn to count my colons. Of course, what I should've\nwritten was:\n\nplay=> select ''::int4;\n?column?\n--------\n 0\n(1 row)\n\nwhich strikes me as being a bug in the INT4 text-to-value conversion\nroutine: it ought to be griping about bad input. (float4 and float8\nalso seem overly permissive.)\n\nThe other thing COPY is evidently doing is substituting NULLs for\nthe remaining fields if it hits RETURN before getting the right number\nof column separators. I still say that's a bad idea, and that raising\na syntax error would be safer behavior. COPY is not particularly\nintended to be user-friendly, it's intended to be a simple and reliable\ndump/reload syntax (no?). Allowing omissions in order to ease typing\njust makes the behavior less predictable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Jan 1999 15:04:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug or feature? COPY ignores column defaults " } ]
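Tom's complaint that ''::int4 quietly becomes 0 comes down to the text-to-value routine accepting input it ought to reject. A strict conversion of the kind he argues for is easy to write; this sketch is built on strtol rather than the backend's actual input routine, and the function name is invented for the example:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Strict int4 input conversion: an empty string, trailing junk, or an
 * out-of-range value is an error instead of silently becoming 0.
 * Returns true and stores the value on success, false on bad input.
 */
bool strict_atoi4(const char *s, long *result)
{
    char *end;
    long v;

    if (s == NULL || *s == '\0')
        return false;               /* '' must not mean 0 */

    errno = 0;
    v = strtol(s, &end, 10);
    if (end == s || *end != '\0')
        return false;               /* no digits, or trailing junk */
    if (errno == ERANGE || v < -2147483647L - 1 || v > 2147483647L)
        return false;               /* outside the int4 range */

    *result = v;
    return true;
}
```

With conversion routines this strict, COPY would raise the syntax error Tom wants for an empty int4 field instead of storing zero.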
[ { "msg_contents": "Hi everybody,\n\nI found a problem running Postgres 6.4 and 6.4.2 on our HP-Ux \n10.10 system. (Compiling with gcc 2.7.1.3 or gcc 2.8.1 makes no \ndifferences.)\n\nAfter two SQL errors (e.g. SALACT instead of SELECT or insert \nsomething into a none existing table) the backend dies abnormally. \n\nBelow the HP-Ux gurus can find additional information from the \ncore file.\n\nAny help welcome.\n\nKind regards,\nReiner Nippes\n\nUMS GmbH, Ulm - Germany\n\n------- Forwarded Message Follows -------\nFrom: \[email protected] (Jan Wieck)\nSubject: \tRe: [ADMIN] Running Postgres on a HP-Ux 10.10 System\nTo: \[email protected]\nDate sent: \tWed, 27 Jan 1999 15:15:05 +0100 (MET)\nCopies to: \[email protected], [email protected]\nSend reply to: \[email protected] (Jan Wieck)\n\n> #0 0xc00a9098 in memset ()\n> (gdb) backtrace\n> #0 0xc00a9098 in memset ()\n> #1 0x15d0e8 in PostgresMain (argc=-534762622, argv=0x203cb000,\n> real_argc=-534763790, real_argv=0x203cb000) at postgres.c:1582\n> #2 0xe0202530 in ?? 
()\n> Cannot access memory at address 0x203cafe8.\n>\n> (gdb) frame 1\n> #1 0x15d0e8 in PostgresMain (argc=-534762622, argv=0x203cb000,\n> real_argc=-534763790, real_argv=0x203cb000) at postgres.c:1582\n> 1582 MemSet(parser_input, 0, MAX_PARSE_BUFFER);\n> (gdb) list\n> 1577\n> 1578 /* ----------------\n> 1579 * (3) read a command.\n> 1580 * ----------------\n> 1581 */\n> 1582 MemSet(parser_input, 0, MAX_PARSE_BUFFER);\n> 1583\n> 1584 firstchar = ReadCommand(parser_input);\n> 1585\n> 1586 QueryCancel = false; /* forget any earlier CANCEL sig\n> nal */\n> (gdb)\n\n Bingo!\n\n MemSet() is a macro in src/include/c.h which in this case\n calls the real memset() library function (area to set is\n greater than 64 bytes).\n\n parser_input is a dynamic char array inside of\n PostgresMain(), so it's part of the innermost stackframe.\n This looks to me like the execution of longjmp() from the\n elog() corrupted the stackframe of PostgresMain() instead of\n restoring it as it should have done.\n\n There are different kinds of jumps used depending on the\n installation. One is setjmp()/longjmp() the other is\n sigsetjmp()/siglongjmp(). If I recall correct, sigsetjmp() is\n #defined to setjmp() if it isn't available.\n\n So folks, low level HP/UX 10.10 know how required!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n\n", "msg_date": "Thu, 28 Jan 1999 15:22:30 +0100", "msg_from": "\"Reiner Nippes\" <[email protected]>", "msg_from_op": true, "msg_subject": "(Fwd) Re: [ADMIN] Running Postgres on a HP-Ux 10.10 System" } ]
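The crash analysed above comes down to longjmp() failing to restore PostgresMain()'s stack frame. Independent of any HP-UX bug, the C rules here are strict: after the jump, only automatic variables declared volatile in the frame containing setjmp() have defined values. The following is a minimal model of the elog()/PostgresMain() control flow — names are borrowed from the trace, but the logic is simplified and uses plain setjmp/longjmp (the sigsetjmp variant behaves the same for this purpose):

```c
#include <assert.h>
#include <setjmp.h>
#include <string.h>

static jmp_buf Warn_restart;    /* what elog(ERROR) jumps back to */

/* Stand-in for elog(ERROR): abandon the current query. */
static void elog_error(void)
{
    longjmp(Warn_restart, 1);
}

/*
 * Stand-in for PostgresMain()'s main loop.  "completed" changes between
 * setjmp() and longjmp(), so it must be volatile or its value after the
 * jump is undefined.  That is all a well-behaved longjmp guarantees; a
 * broken one (as suspected on HP-UX 10.10) corrupts even what the
 * standard does promise, e.g. the parser_input array in the frame.
 */
int run_queries(int nqueries, int *errors)
{
    volatile int completed = 0;
    char parser_input[64];      /* cleared fresh for every command */

    *errors = 0;
    while (completed < nqueries) {
        if (setjmp(Warn_restart) != 0) {
            (*errors)++;        /* recovered from elog(ERROR) */
        }
        memset(parser_input, 0, sizeof parser_input);
        if (*errors == 0)
            elog_error();       /* first command is bad, like "SALACT" */
        completed++;
    }
    return completed;
}
```

In the model, one bad command is recovered from and the loop keeps serving queries — which is what the real backend should do, and what the corrupted frame on HP-UX 10.10 prevents.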
[ { "msg_contents": "> And just seconds short of 3 hours after my report. I wonder how long\n> it would take to get a fix from Oracle for something like this.\n> \n> Thanks.\nThat depends on weather they already have the patch and were just\nwaiting for you to ask for it before giving it to you, or if the really\nhave to patch it because you're the first to notice. Always hope\nthey're holding out on you. %^|\n\t-DEJ\n", "msg_date": "Thu, 28 Jan 1999 10:28:55 -0600", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Problem with multiple SUMs" }, { "msg_contents": "Thus spake Jackson, DeJuan\n> > And just seconds short of 3 hours after my report. I wonder how long\n> > it would take to get a fix from Oracle for something like this.\n> > \n> That depends on weather they already have the patch and were just\n> waiting for you to ask for it before giving it to you, or if the really\n\nI don't believe that they would get me an updated version of their product\nwithin three hours anyway. That was three hours from the time that I\nsent the bug notice to the time I had a useable product!\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 28 Jan 1999 12:23:21 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with multiple SUMs" } ]
[ { "msg_contents": "I was asked this off-list but thought a cc: to the list was appropriate:\n> If, as you say, Postgres can't use a GPL'd library, does that mean that\n> Postgres itself isn't GPL'd? If so, then what is it? I'm having trouble\n> understanding all of the nuances of open source, GPL, etc.\n\nPostgres is distributed under the BSD license, which is a little bit\ndifferent from GPL in the detailed terms of what recipients can and\ncan't do with the software. In particular BSD does not place a\nrequirement on a recipient to further redistribute the code. There\nare several other popular variants on the theme of free source code.\n\nThere is a brief overview of common open-source licenses at\nhttp://www.cs.mu.oz.au/~trd/www-free/license_categories.html.\nI have seen a more thorough treatment recently, probably at one\nof the big open-source sites like Debian or Cygnus, but I can't\nfind it right now :-( ... anyone have a better link?\n\nSome other good pages that came up while looking:\nhttp://www.fsf.org/philosophy/categories.html (server badly overloaded)\nhttp://www.debian.org/intro/license_disc.html\nhttp://metalab.unc.edu/pub/Linux/LICENSES/theory.html\nhttp://www.opensource.org/osd.html\nhttp://www.debian.org/social_contract\nhttp://www.fsf.org/philosophy/free-sw.html\n\nFor background and historical material about hacking culture, it's\ndifficult to do better than Eric Raymond's writings. See for example\nhttp://www.tuxedo.org/~esr/writings/homesteading/\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Jan 1999 13:43:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Theory and practice of free software" } ]
[ { "msg_contents": "> Thus spake Jackson, DeJuan\n> > > And just seconds short of 3 hours after my report. I \n> wonder how long\n> > > it would take to get a fix from Oracle for something like this.\n> > > \n> > That depends on weather they already have the patch and were just\n> > waiting for you to ask for it before giving it to you, or \n> if the really\n> \n> I don't believe that they would get me an updated version of \n> their product\n> within three hours anyway. That was three hours from the time that I\n> sent the bug notice to the time I had a useable product!\n\nI never claimed that it would be 3 hours or less, because I know that it\nwouldn't. What I was saying is that they could already have a fix/patch\nand just not let you know about it, which I find to be absurd.\n\t-DEJ\n", "msg_date": "Thu, 28 Jan 1999 13:44:41 -0600", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Problem with multiple SUMs" } ]
[ { "msg_contents": "I am attaching a file containing changes needed for adding temp tables\nto the backend code. This is not a complete patch because I am adding\nnew files and stuff. It is attached just for people to review.\n\nThe basic question is whether this is the proper way to do temp tables.\nThis implementation never adds the table to the system tables like\npg_class and pg_attribute. It does all temp table work by mapping the\nuser table names to system-generated unique table names using the cache\nlookup code. Fortunately because of the mega-patch from August, almost\nall system table access is done through the cache. Of course, a table\nscan of pg_class will not show the temp table because it is not really\nin pg_class, just in the cache, but there does not seem to be many cases\nwhere this is a bad thing.\n\nI still need to run some more tests and add a few more features, but you\nget the idea. I hope to apply the patch tomorrow or Saturday.\n\nThe only other way I can think of doing temp tables is to actually\ninsert into the system tables, and have some flag that makes those rows\nonly visible to the single backend that created it. We would also have\nto add a new pg_class column that contained the temp name, and modify\npg_class so it could have duplicate table names as long as the temp name\nwas unique. 
This seemed very unmodular, and would add more complexity\nto the heap tuple tuple visibility code.\n\nHere is a sample of what it does:\n\t\n\t#$ sql test\n\tWelcome to the POSTGRESQL interactive sql monitor:\n\t\ttest=> select * from test;\n\t\tERROR: test: Table does not exist.\n\t\ttest=> create temp table test (x int);\n\t\tCREATE\n\t\ttest=> insert into test values (3);\n\t\tINSERT 19745 1\n\t\ttest=> \\q\n\t#$ sql test\n\tWelcome to the POSTGRESQL interactive sql monitor:\n\t\ttest=> select * from test;\n\t\tERROR: test: Table does not exist.\n\t\ttest=> \n\t\nIn this example, I create a non-temp table, then mask that with a temp\ntable, then destroy them both:\n\t\n\t#$ sql test\n\tWelcome to the POSTGRESQL interactive sql monitor:\n\t\ttest=> create table test (x int);\n\t\tCREATE\n\t\ttest=> create temp table test (x int);\n\t\tCREATE\n\t\ttest=> create temp table test (x int);\n\t\tERROR: test relation already exists\n\t\ttest=> drop table test;\n\t\tDROP\n\t\ttest=> drop table test;\n\t\tDROP\n\t\ttest=> \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026", "msg_date": "Thu, 28 Jan 1999 16:38:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "TEMP table code" }, { "msg_contents": "> The basic question is whether this is the proper way to do temp \n> tables.\n\nI haven't looked at the patches, but fwiw I would have tried it about\nthe same way. No need to touch pg_class if the info is\nsession-specific...\n\n - Tom\n", "msg_date": "Fri, 29 Jan 1999 06:17:54 +0000", "msg_from": "\"Thomas G. 
Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TEMP table code" }, { "msg_contents": "> > The basic question is whether this is the proper way to do temp \n> > tables.\n> \n> I haven't looked at the patches, but fwiw I would have tried it about\n> the same way. No need to touch pg_class if the info is\n> session-specific...\n\nYes, my feeling is that the code is complicated enough without having\nthe temp table stuff adding complexity. What I did is that a cache\nlookup returns a fake pg_class tuple. The only code changes are a few\nfunction calls in the cache routines to insert my fake tuples, and some\ncode in the heap_create_with_catalog/heap_create/heap_destroy code to\ncreate temp tables with unique names. A new istemp flag in a few\nstructuers. The rest of the code is untouched.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 29 Jan 1999 01:43:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TEMP table code" } ]
[ { "msg_contents": "*** From dbi-users -- To unsubscribe, see the end of this message. ***\n\nJust thought I would write in to let everyone know I haven't had the\n\"DBD::Pg::db do failed: pqReadData()\" problem\never since I converted all my columns that were 'text' to 'varchar()'.\nI can live with the limit of 4096 for the stuff I am doing...\n\nMarc Northover\n\n\nMarc Northover wrote:\n> \n> *** From dbi-users -- To unsubscribe, see the end of this message. ***\n> \n> I am using postgres.6.4.2 and occasionally get this error when inserting\n> rows into my tables:\n> \n> DBD::Pg::db do failed: pqReadData() -- backend closed the channel unexpectedly. This\n> probably means the backend terminated abnormally before or while processing the request.\n> \n> Sometimes I get rid of the error by issuing the 'vacuum' command, but doesn't work\n> all the time.\n> \n> I read some of the archives and there was a mention of a patch that is used,\n> but the version of postgres was 6.4. Also the error was received during 'make\n> test' and not during time of insert.\n> \n> I am using:\n> DBI 1.03\n> DBD::Pg 0.89\n> \n> Any suggestions??\n> \n> thanks,\n> Marc\n> \n\n\nThis problem seems not to be related to DBI/DBD-Pg.\nPlease write a bug report to:\n\n [email protected]\n\nIt would be helpful if you could provide traces.\n\n\nEdmund\n\n\n\n\n------------------------------------------------------------------------------\nTo unsubscribe from this list, please visit http://www.fugue.com/dbi\nIf you are without web access, or if you are having trouble with the web page,\nplease send mail to [email protected]. Please try to use the web\npage first - it will take a long time for your request to be processed by hand.\n------------------------------------------------------------------------------", "msg_date": "Thu, 28 Jan 1999 22:49:26 +0100", "msg_from": "Edmund Mergl <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: DBD::Pg::db do failed: pqReadData() Error]" } ]
[ { "msg_contents": "test=> create table test (x int);\nCREATE\ntest=> insert into test values (1);\nINSERT \ntest=> insert into test values (2);\nINSERT 19787 1\ntest=> select * from test;\nx\n-\n2\n(1 row)\n\ntest=> drop table test;\nDROP\ntest=> select * from test;\nx\n-\n1\n(1 row)\n\ntest=> drop table test;\nDROP\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 28 Jan 1999 18:18:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Another TEMP table trick" }, { "msg_contents": "I don't know how I posted this, but it is wrong. I have added the\nmissing lines.\n\n> test=> create table test (x int);\n> CREATE\n> test=> insert into test values (1);\n> INSERT \n> test=> create temp table test (x int); <--\n> CREATE <--\n> test=> insert into test values (2);\n> INSERT 19787 1\n> test=> select * from test;\n> x\n> -\n> 2\n> (1 row)\n> \n> test=> drop table test;\n> DROP\n> test=> select * from test;\n> x\n> -\n> 1\n> (1 row)\n> \n> test=> drop table test;\n> DROP\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 28 Jan 1999 18:32:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Another TEMP table trick" } ]
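The corrected transcript above shows the masking order: a temp table created over a permanent table of the same name wins all lookups, the first DROP removes only the temp table (re-exposing the permanent one), and the second DROP removes the permanent table. A per-name stack reproduces that behaviour exactly — this is a minimal model of the semantics, not PostgreSQL code.

```python
# Minimal model of the transcript's masking behaviour: each name maps to a
# stack of relations; CREATE TEMP pushes, DROP pops, lookups see the top.

class NameSpace:
    def __init__(self):
        self.stack = {}  # name -> list of relations, last entry wins

    def create(self, name, rows, temp=False):
        entries = self.stack.setdefault(name, [])
        # a second CREATE TEMP of the same name fails, as in the earlier thread
        if temp and entries and entries[-1]["temp"]:
            raise ValueError(f"{name} relation already exists")
        entries.append({"rows": rows, "temp": temp})

    def select(self, name):
        return self.stack[name][-1]["rows"]

    def drop(self, name):
        entries = self.stack[name]
        entries.pop()            # removes the temp table first, if present
        if not entries:
            del self.stack[name]

ns = NameSpace()
ns.create("test", [1])             # create table test; insert 1
ns.create("test", [2], temp=True)  # create temp table test; insert 2
assert ns.select("test") == [2]    # temp masks the permanent table
ns.drop("test")
assert ns.select("test") == [1]    # permanent table visible again
ns.drop("test")
```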
[ { "msg_contents": "\nHi, \n\nI sent the following message to the pgsql-general \nlist on the 24th but haven't received any answers\nfrom PostgreSQL developers, only from other people\nwho are experiencing the same problems.\n\nI would say the errors I am describing are quite\nserious and I was wondering whether there was any\nchance of them being addressed in the forthcoming\n6.5 release.\n\nThe problem is very easy to reproduce - here are\nthe necessary steps:\n\n1. Install PostgreSQL 6.4.2\n2. Install Perl 5.005_02\n3. Install Perl modules: DBI 1.06; DBD-Pg 0.90; ApacheDBI-0.81\n3. Download apache 1.3.4\n4. Download mod_perl 1.17+ in same directory\n5. Extract distributions\n6. cd mod_perl-1.17\n7. perl Makefile.PL EVERYTHING=1 && make && make test && make install\n8. Set the following directives in Apache's httpd.conf:\n MinSpareServers 100\n MaxSpareServers 100\n StartServers 100\n MaxClients 100\n9. PerlRequire /usr/local/apache/conf/startup.pl where\n startup.pl contains:\n use Apache::Registry ();\n use Apache::DBI ();\n Apache::DBI->connect_on_init(\"DBI:Pg:dbname=template1\", \"\", \"\");\n 1;\n10. Start Apache: apachectl start\n\nNote that this example makes use of no custom\napplication code and is using the template1 \ndatabase. \n\nCheck Apache's error_log and you will see error \nmessages and eventually the postmaster will die\nwith something like:\n\n FATAL: s_lock(28001065) at spin.c:125, stuck spinlock. Aborting.\n\nThe magic number seems to be 48. If I start 49 \nhttpd/postgres processes everything falls apart\nbut if I start 48 everything is fine. I'm \nrunning on FreeBSD 2.2.8 and I've increased\nmaxusers to 512 - no difference.\n\nI'd appreciate some feedback from the guys who\nare making PostgreSQL happen. Can these issues \nbe addressed? PostgreSQL is a great database but \nthis is a show stopper for people developing big \nWeb applications. 
\n\nIf you need any more information don't hesitate\nto contact me.\n\nCheers.\n\n\n\nPatrick\n\n--\n\nSent to pgsql-general list on January 24th 1999:\n\nHi,\n\nI've been doing some benchmarking with PostgreSQL\nunder mod_perl and I've been getting some rather\ndisturbing results. To achieve the maximum benefit\nfrom persistent connections I am using a method\ncalled 'connect_on_init' that comes with a Perl\nmodule called Apache::DBI. Using this method,\nwhen the Web server is first started - each child\nprocess establishes a persistent connection with \nthe database. When using PostgreSQL as the database,\nthis causes there to be as many 'postgres' \nprocesses are there are 'httpd' processes\nfor a given database.\n\nAs part of my benchmarking I've been testing the\nnumber of httpd processes that my server can \nsupport. The machine is a 450 MHz PII/256 MB RAM.\nAs an excercise I tried to start 100 httpd\nprocesses. Doing this consistently results in the\nfollowing PostgreSQL errors and the backend usually\ndies:\n\nIpcSemaphoreCreate: semget failed (No space left on device) key=5432017, num=16, permission=600\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\n\nFATAL: s_lock(28001065) at spin.c:125, stuck spinlock. Aborting.\n\nNote that the 'no space left on device' is\nmisleading as there is a minimum of 400 MB \navailable on each file-system on the server.\n\nThis is obviously bad news, especially as we are \nhoping to develop some fairly large-scale \napplications with PostgreSQL. Note that this\nhappens when connecting to a single database.\nWe were hoping to connect to several databases\nfrom each httpd process!! \n\nThe frustrating thing is we have the resources. 
\nIf I only start 30 processes (which seems to be\nthe approximate limit) there is about 100 MB\nof RAM that is not being used. \n\nAre there any configuration values that control \nthe number of postgres processes? Do you have\nany idea why this is happening? \n\nIs anyone else using Apache/mod_perl and PostgreSQL \nsuccessfully in a demanding environment?\n\nAny help would be greatly appreciated.\n\nCheers.\n\n\n\nPatrick\n\n-- \n\n#===============================#\n\\ KAN Design & Publishing Ltd /\n/ T: +44 (0)1223 511134 \\\n\\ F: +44 (0)1223 571968 /\n/ E: mailto:[email protected] \\ \n\\ W: http://www.kan.co.uk /\n#===============================#\n", "msg_date": "Thu, 28 Jan 1999 23:51:35 +0000", "msg_from": "Patrick Verdon <[email protected]>", "msg_from_op": true, "msg_subject": "Postmaster dies with many child processes (spinlock/semget failed)" }, { "msg_contents": ">Hi, \n>\n>I sent the following message to the pgsql-general \n>list on the 24th but haven't received any answers\n>from PostgreSQL developers, only from other people\n>who are experiencing the same problems.\n>\n>I would say the errors I am describing are quite\n>serious and I was wondering whether there was any\n>chance of them being addressed in the forthcoming\n>6.5 release.\n\nI don't think it's a PostgreSQL's problem.\n\n[snip]\n\n>Note that the 'no space left on device' is\n>misleading as there is a minimum of 400 MB \n>available on each file-system on the server.\n\nNo. that message does not talking about the space left on your\ndisk. You need to increase the shared memory size.\n\nYou want to have 100 backends? 6.4.2 has the hard limit of number of\nbackends as 64. You can change this by editing following line:\n\n#define MaxBackendId 64\t\t\t/* maximum number of backends\t\t*/\n\nin src/include/storage/sinvaladt.h. make sure do gmake clean before\nrecompiling.\n\nAlso you might ran out the file table entries. 
I recommend you to\nlimit the number of descriptors available to each backend. Probably 15\nis enough. You can do this by issuing the csh builtin limit command\nbefore starting postmaster.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 29 Jan 1999 10:10:28 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster dies with many child processes\n\t(spinlock/semget failed)" }, { "msg_contents": "On Thu, 28 Jan 1999, Patrick Verdon wrote:\n\n> IpcSemaphoreCreate: semget failed (No space left on device) key=5432017, num=16, permission=600\n> NOTICE: Message from PostgreSQL backend:\n> The Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n> I have rolled back the current transaction and am going to terminate your database system connection and exit.\n> Please reconnect to the database system and repeat your query.\n> \n> FATAL: s_lock(28001065) at spin.c:125, stuck spinlock. Aborting.\n> \n> Note that the 'no space left on device' is\n> misleading as there is a minimum of 400 MB \n> available on each file-system on the server.\n\nMy first guess is that you don't have enough semaphores enabled in your\nkernel...increase that from the default, and I'm *guessing* that you'll\nget past your 48...\n\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 28 Jan 1999 21:31:58 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster dies with many child processes\n\t(spinlock/semget failed)" }, { "msg_contents": "Patrick Verdon wrote:\n> \n> Check Apache's error_log and you will see error\n> messages and eventually the postmaster will die\n> with something like:\n> \n> FATAL: s_lock(28001065) at spin.c:125, stuck spinlock. 
Aborting.\n\nTry to increase S_MAX_BUSY in src/backend/storage/buffer/s_lock.c:\n\n#define S_MAX_BUSY 500 * S_NSPINCYCLE\n ^^^\ntry with 10000.\n\nVadim\n", "msg_date": "Fri, 29 Jan 1999 10:06:24 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster dies with many child processes\n\t(spinlock/semget failed)" }, { "msg_contents": "I don't think this is a Postgres problem. I got the same\nproblem you described when upgrading Apache from 1.3.3 to 1.3.4\nI had to return to 1.3.3\nProbably I will try modperl 1.18+Apache 1.3.4\n\n\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 29 Jan 1999 09:35:53 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster dies with many child processes\n\t(spinlock/semget failed)" } ]
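The "magic number 48" in the thread above is consistent with the `num=16` in the failing `semget` call: if the backend allocates System V semaphores in sets of 16, then 48 backends fit exactly into 3 sets and the 49th backend forces a 4th `semget()`, which is where a kernel identifier limit (SEMMNI/SEMMNS) would be hit — `semget` reports that condition as ENOSPC, hence the misleading "No space left on device". The arithmetic can be checked directly; the kernel-limit numbers below are illustrative assumptions, not measured values.

```python
# Back-of-envelope check. Assumptions: semaphores are allocated in sets of
# 16 (suggested by "num=16" in the semget error); SEMMNI and the count of
# sets held by other software are made-up illustrative values.
import math

SEMS_PER_SET = 16

def sets_needed(backends):
    return math.ceil(backends / SEMS_PER_SET)

# 48 backends fit exactly into 3 semaphore sets...
assert sets_needed(48) == 3
# ...while the 49th backend forces a 4th semget() call.
assert sets_needed(49) == 4

# If the kernel allowed, say, SEMMNI = 10 identifiers system-wide and other
# software already held 7, the 4th set is precisely the one that would fail
# with ENOSPC ("No space left on device").
SEMMNI, already_used = 10, 7
assert already_used + sets_needed(48) <= SEMMNI
assert already_used + sets_needed(49) > SEMMNI
```

This also explains why raising `maxusers` made no difference: the relevant FreeBSD tunables for this failure mode are the semaphore limits, not the general user/process limits.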
[ { "msg_contents": "regression=> SELECT 1 AS two UNION SELECT 2;\nNOTICE: equal: don't know whether nodes of type 600 are equal\ntwo\n---\n 1\n 2\n(2 rows)\n\ngdb:\n\n#0 equal (a=0x179f10, b=0x1c1490) at equalfuncs.c:746\n#1 0x7bf4d in remove_duplicates (list=0x167d90) at prepqual.c:563\n#2 0x7b86a in qual_cleanup (qual=0x168f50) at prepqual.c:234\n#3 0x7b48e in cnfify (qual=0x168790, removeAndFlag=1 '\\001') at prepqual.c:76\n#4 0xba22a in Except_Intersect_Rewrite (parsetree=0x179f10)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n at rewriteHandler.c:2842\n\nVadim\n", "msg_date": "Fri, 29 Jan 1999 14:07:22 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "equal: don't know whether nodes of type 600 are equal" }, { "msg_contents": "I chased down the complaints that were coming out of the union regress\ntest:\n\n> regression=> SELECT 1 AS two UNION SELECT 2;\n> NOTICE: equal: don't know whether nodes of type 600 are equal\n\nand found that this was a known shortcoming of the recently installed\nEXCEPT/INTERSECT code:\n\n>> -) When using UNION/EXCEPT/INTERSECT you will get:\n>> NOTICE: equal: \"Don't know if nodes of type xxx are equal\".\n>> I did not have time to add comparsion support for all the needed nodes,\n>> but the default behaviour of the function equal met my requirements.\n>> I did not dare to supress this message!\n>> \n>> That's the reason why the regression test for union will fail: These\n>> messages are also included in the union.out file!\n\nI added equality checking for T_Query and T_RangeTblEntry to\nequalfuncs.c. 
This is enough to persuade the regression tests\nto pass again, but I suspect that some other node types still\nneed to be added to equal()'s repertoire.\n\nWe are definitely not out of the woods with the EXCEPT/INTERSECT changes:\n\ntest=> EXPLAIN SELECT 1 UNION SELECT 2;\nERROR: copyObject: don't know how to copy 604\n\nI do not think that extending copyObject to handle 604 (T_SelectStmt)\nis necessarily the right fix here. I am casting a wary eye on\nthe first lines of ExplainOneQuery:\n\n\t/* plan the queries (XXX we've ignored rewrite!!) */\n\tplan = planner(query);\n\nI think the real problem is that the EXCEPT/INTERSECT code is dependent\non rewrite work that is not being done in the EXPLAIN path, and that\nwe need to fix that underlying problem rather than patching the symptom.\nOtherwise we'll likely just hit another symptom...\n\nI'm about out of steam for today so I'll just toss that up to see if\nanyone else wants to tackle it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Feb 1999 20:13:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] equal: don't know whether nodes of type 600 are equal " }, { "msg_contents": "Tom Lane wrote:\n\n> I do not think that extending copyObject to handle 604 (T_SelectStmt)\n> is necessarily the right fix here. I am casting a wary eye on\n> the first lines of ExplainOneQuery:\n>\n> /* plan the queries (XXX we've ignored rewrite!!) */\n> plan = planner(query);\n>\n> I think the real problem is that the EXCEPT/INTERSECT code is dependent\n> on rewrite work that is not being done in the EXPLAIN path, and that\n> we need to fix that underlying problem rather than patching the symptom.\n> Otherwise we'll likely just hit another symptom...\n\n I added the call to QueryRewrite() in ExplainQuery() some\n time ago. 
So the comment in ExplainOneQuery() isn't right\n (and I should have removed that).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Sun, 7 Feb 1999 12:44:53 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] equal: don't know whether nodes of type 600 are equal" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Tom Lane wrote:\n>> I think the real problem is that the EXCEPT/INTERSECT code is dependent\n>> on rewrite work that is not being done in the EXPLAIN path, and that\n>> we need to fix that underlying problem rather than patching the symptom.\n\n> I added the call to QueryRewrite() in ExplainQuery() some\n> time ago. So the comment in ExplainOneQuery() isn't right\n> (and I should have removed that).\n\nttest=> select 1 union select 2;\n?column?\n--------\n 1\n 2\n(2 rows)\n\nttest=> explain select 1 union select 2;\nERROR: copyObject: don't know how to copy 604\nttest=>\n\nThis error is coming from inside the planner (specifically union_planner).\nObviously there's *something* different about the context in which the\nplanner is invoked for EXPLAIN.\n\nIt looks to me like the problem is that some rewrite code got placed in\npg_parse_and_plan() in postgres.c --- there is some UNION-handling stuff\ngoing on *after* the call to QueryRewrite(), and evidently that stuff\nis not duplicated in the EXPLAIN case. Probably the right fix is to\nmove all that logic inside QueryRewrite() --- but I don't want to touch\nit without confirmation from someone who knows the parser/planner\nbetter.\n\nSome quick checks with gdb show union_planner() being invoked only once\nwhen the query is executed for real, but recursively when using EXPLAIN\n... 
and it's the recursive call that is throwing the error:\n\n#0 elog (lev=-1, fmt=0x2ef20 \"copyObject: don't know how to copy %d\")\n at elog.c:79\n#1 0xae994 in copyObject (from=0x400a5028) at copyfuncs.c:1870\n#2 0xae93c in copyObject (from=0x400a5250) at copyfuncs.c:1859\n#3 0xc7888 in new_unsorted_tlist (targetlist=0xffffffff) at tlist.c:314\n#4 0xc03a0 in union_planner (parse=0x400a5028) at planner.c:110\n#5 0xc40d4 in plan_union_queries (parse=0x400a53b8) at prepunion.c:146\n#6 0xc03f0 in union_planner (parse=0x400a53b8) at planner.c:131\n#7 0xc0320 in planner (parse=0x400a53b8) at planner.c:80\n#8 0x8dc64 in ExplainOneQuery (query=0x400a53b8, verbose=0 '\\000', dest=604)\n at explain.c:92\n#9 0x8dc24 in ExplainQuery (query=0x400a5670, verbose=0 '\\000', dest=Remote)\n at explain.c:76\n#10 0x1012e8 in ProcessUtility (parsetree=0x400a52d8, dest=Remote)\n at utility.c:781\n#11 0xfeba0 in pg_exec_query_dest (\n query_string=0x7b0342b0 \"explain select a from tt union select a from tt;\",\n dest=Remote, aclOverride=92 '\\\\') at postgres.c:819\n#12 0xfe9f8 in pg_exec_query (\n query_string=0xffffffff <Address 0xffffffff out of bounds>)\n at postgres.c:711\n#13 0xffbfc in PostgresMain (argc=316064, argv=0x51, real_argc=5,\n real_argv=0x40002b10) at postgres.c:1664\n\nI wonder whether union_planner() is even really needed anymore given the\nrewrite stuff ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 Feb 1999 12:44:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] equal: don't know whether nodes of type 600 are equal " }, { "msg_contents": "Tom Lane wrote:\n\n> It looks to me like the problem is that some rewrite code got placed in\n> pg_parse_and_plan() in postgres.c --- there is some UNION-handling stuff\n> going on *after* the call to QueryRewrite(), and evidently that stuff\n> is not duplicated in the EXPLAIN case. 
Probably the right fix is to\n> move all that logic inside QueryRewrite() --- but I don't want to touch\n> it without confirmation from someone who knows the parser/planner\n> better.\n\n It is the job of QueryRewrite() to puzzle/shuffle the nodes\n of parsetrees before the planner get's them.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Sun, 7 Feb 1999 21:30:23 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] equal: don't know whether nodes of type 600 are equal" }, { "msg_contents": "> \n> This error is coming from inside the planner (specifically union_planner).\n> Obviously there's *something* different about the context in which the\n> planner is invoked for EXPLAIN.\n> \n> It looks to me like the problem is that some rewrite code got placed in\n> pg_parse_and_plan() in postgres.c --- there is some UNION-handling stuff\n> going on *after* the call to QueryRewrite(), and evidently that stuff\n> is not duplicated in the EXPLAIN case. Probably the right fix is to\n> move all that logic inside QueryRewrite() --- but I don't want to touch\n> it without confirmation from someone who knows the parser/planner\n> better.\n\n\nI have been meaning to move the UNION stuff into the rewrite system\nwhere it belongs, but haven't had time to do it. In tcop/postgres.c,\nyou will see me moving through the union nodes. That should be done at\nthe top of the rewrite system. At the time, the rewrite system was so\nconfusing to me, I did not attempt it. I believe that will fix the\nproblem. 
Let me know if you need me to do it.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 7 Feb 1999 15:36:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] equal: don't know whether nodes of type 600 are equal" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have been meaning to move the UNION stuff into the rewrite system\n> where it belongs, but haven't had time to do it. In tcop/postgres.c,\n> you will see me moving through the union nodes. That should be done at\n> the top of the rewrite system. At the time, the rewrite system was so\n> confusing to me, I did not attempt it. I believe that will fix the\n> problem. Let me know if you need me to do it.\n\nWell, I could give it a shot, but I'm just as happy to defer to you;\nI don't know anything about the rewriter and might break something.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 Feb 1999 17:02:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] equal: don't know whether nodes of type 600 are equal " } ]
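The NOTICE that started this thread comes from `equal()`'s fallback path: node types with an explicit comparison are checked field by field, and anything unregistered falls through to a default that emits the warning. The dispatch-with-fallback shape can be sketched in a few lines — this is a toy Python analogue, not the C source, and the fallback's return value here (reporting equality, which the original author said "met my requirements") is part of the sketch rather than a claim about equalfuncs.c's exact default.

```python
# Toy analogue of equalfuncs.c's dispatch: registered node types get a real
# field-by-field comparison; unknown types trigger the NOTICE seen in the
# regression output and fall back to a permissive default.
import warnings

EQUAL_FUNCS = {}  # node type tag -> comparison function

def register(tag):
    def deco(fn):
        EQUAL_FUNCS[tag] = fn
        return fn
    return deco

@register("Query")
def equal_query(a, b):
    return a["targetlist"] == b["targetlist"]

def equal(a, b):
    if a["tag"] != b["tag"]:
        return False
    fn = EQUAL_FUNCS.get(a["tag"])
    if fn is None:
        warnings.warn(f"equal: don't know whether nodes of type "
                      f"{a['tag']} are equal")
        return True   # permissive fallback -- the behaviour the NOTICE flags
    return fn(a, b)

q1 = {"tag": "Query", "targetlist": [1]}
q2 = {"tag": "Query", "targetlist": [2]}
assert equal(q1, q1) and not equal(q1, q2)
```

Adding T_Query and T_RangeTblEntry to the registry, as described above, is exactly what moves those types off the warning path.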
[ { "msg_contents": "\t>> test=> create table test (x int);\n\t>> CREATE\n\t>> test=> insert into test values (1);\n\t>> INSERT \n\t>> test=> create temp table test (x int); <--\n\t>> CREATE <--\n\t>> test=> insert into test values (2);\n\t>> INSERT 19787 1\n\t>> test=> select * from test;\n\t>> x\n\t>> -\n\t>> 2\n\t>> (1 row)\n\t>> \n\t>> test=> drop table test;\n\t>> DROP\n\t>> test=> select * from test;\n\t>> x\n\t>> -\n\t>> 1\n\t>> (1 row)\n\t>> \n\t>> test=> drop table test;\n\t>> DROP\n\nDo you really think that this should be allowed ? I think table names\nincluding \ntemp tables should (at least in combination with the owner) be unique. I\nthink your \nexample above demonstrates how confusing the application code can get.\n\nI think it is good, that temp tables are not really inserted into system\ntables,\nsince this would be substantial overhead.\nThere could be a problem with GUI tools that rely on these rows\nto format their output (like pgaccess or ODBC --> M$ Access) though.\n\nAndreas\n\n", "msg_date": "Fri, 29 Jan 1999 09:37:34 +0100", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Another TEMP table trick" }, { "msg_contents": "> \t>> test=> create table test (x int);\n> \t>> CREATE\n> \t>> test=> insert into test values (1);\n> \t>> INSERT \n> \t>> test=> create temp table test (x int); <--\n> \t>> CREATE <--\n> \t>> test=> insert into test values (2);\n> \t>> INSERT 19787 1\n> \t>> test=> select * from test;\n> \t>> x\n> \t>> -\n> \t>> 2\n> \t>> (1 row)\n> \t>> \n> \t>> test=> drop table test;\n> \t>> DROP\n> \t>> test=> select * from test;\n> \t>> x\n> \t>> -\n> \t>> 1\n> \t>> (1 row)\n> \t>> \n> \t>> test=> drop table test;\n> \t>> DROP\n> \n> Do you really think that this should be allowed ? I think table names\n> including \n> temp tables should (at least in combination with the owner) be unique. 
I\n> think your \n> example above demonstrates how confusing the application code can get.\n\nI think it should be allowed. Suppose someone has created a non-temp\ntable with a certain name, and you want a temp table with that name. No\nreason you shouldn't be allowed to do that. Five people can all have\ntemp tables with the same name, so it doesn't matter if there is a\nnon-temp table with that name too.\n\n> \n> I think it is good, that temp tables are not really inserted into system\n> tables,\n> since this would be substantial overhead.\n\nNot really much overhead.\n\n> There could be a problem with GUI tools that rely on these rows\n> to format their output (like pgaccess or ODBC --> M$ Access) though.\n\nOh, never thought of that. A select of pg_class will return no rows for\nthat table because it is a temp table.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 30 Jan 1999 00:13:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Another TEMP table trick" }, { "msg_contents": "Bruce Momjian wrote:\n>\n> > There could be a problem with GUI tools that rely on these rows\n> > to format their output (like pgaccess or ODBC --> M$ Access) though.\n> \n> Oh, never thought of that. 
A select of pg_class will return no rows for\n> that table because it is a temp table.\n\nOne more reson to move \\d from psql to backend maybe with syntax like \nOracle's \"DESC xxx\" unless there is something in ANSI standard for that.\n\nOr implement the ANSI system tables (I think there were some ;) and\nviews.\n\nThen the front-end tools can be advised to use these (and TEMP TABLES\ncan \nadd rows to other (possibly structure-permanent) TEMP tables that are\nUNIONed \nwithe real pg_class to give them real values.\n\nOr we can even implement just temp _rows_ for tables that exist in a \nsession only (maybe like in independant uncommitted transactions), \nand add the info for temp tables to pg_class (and friends) as temp rows.\n\n----------------\nHannu\n", "msg_date": "Sat, 30 Jan 1999 13:44:17 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Another TEMP table trick" }, { "msg_contents": "> Bruce Momjian wrote:\n> >\n> > > There could be a problem with GUI tools that rely on these rows\n> > > to format their output (like pgaccess or ODBC --> M$ Access) though.\n> > \n> > Oh, never thought of that. 
A select of pg_class will return no rows for\n> > that table because it is a temp table.\n> \n> One more reson to move \\d from psql to backend maybe with syntax like \n> Oracle's \"DESC xxx\" unless there is something in ANSI standard for that.\n> \n> Or implement the ANSI system tables (I think there were some ;) and\n> views.\n> \n> Then the front-end tools can be advised to use these (and TEMP TABLES\n> can \n> add rows to other (possibly structure-permanent) TEMP tables that are\n> UNIONed \n> withe real pg_class to give them real values.\n> \n > Or we can even implement just temp _rows_ for tables that exist in a \n> session only (maybe like in independant uncommitted transactions), \n> and add the info for temp tables to pg_class (and friends) as temp rows.\n\nI have thought some more about it, and I now want to create proper\npg_class rows for the temp tables.\n\nThe temp tables are named pg_temp.$pid.$seqno. What I am going to do\nfor the temp table is to add an _extra_ entry in the system cache for\nthe user-supplied name RELNAME lookup. All other lookups of pg_class by\noid, and pg_attribute, etc use just the relid, which works without any\ntranslation.\n\nThe advantage is that I can keep the system tables consistent, have less\ncode overhead, and allow things like sequential scans of pg_class see\nthe table, even though it will not be under the user-supplied name.\n\nMost interfaces already don't display pg_* tables, so this will be OK. \nI will add a new relkind for the temp tables. I will also now be able\nto test in vacuum if the temp table was orphaned after a backend crash,\nand delete it.\n\nI will prevent psql \\dS from displaying the temp tables.\n\nShould be a few more days.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 30 Jan 1999 20:24:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Another TEMP table trick" } ]
[ { "msg_contents": "Note this one contains the one I sent the other day. If that one was already\napplied this one will fail partly. If this is the case tell me and I'll resend\na correct version.\n\nMichael \n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!", "msg_date": "Fri, 29 Jan 1999 13:26:39 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "another ecpg patch" } ]
[ { "msg_contents": "Hello!\n\n I have a production server with Postgres 6.4.2. Every night cron runs a\nmaintenance script that contained VACUUM ANALYZE (I use psql).\n Few days ago the script failed with usual \"... backend closed\nconnection\". I changed it to just VACUUM (this worked) and started\ninvestigation. Please note, the system is RedHat 5.1 on Pentium.\n\n I dumped the database and reloaded it, then ran VACUUM ANALYZE. It failed\n(I removed pg_vlock, of course).\n I loaded the dump into 6.4.2 on my debugging server - Ultra-1, Solaris\n2.5.1 and ran VACUUM ANALYZE. It worked.\n I loaded the dump into 6.4.2 on my debugging Pentium with Debian 2.0 and\nran VACUUM ANALYZE. It failed.\n\n Seems 6.4.2 has problems on linux. Dump file is small (30K in bzip2) - I\ncan send it if someone want to try to reproduce it.\n\n BTW, while reloading, I noticed postgres eats virtual memory like a\nhungry beast. My RedHat rebooted on loading (but after reboot db loaded ok). I\nhave to free much memory on Solaris to load the dump. Does \"COPY FROM stdin\"\nreally require so much memory? And how I will feel when my database will\ngrow even bigger? Sooner or later I couldn't load my own dump. Will I need\nto split the dump into chunks?\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Fri, 29 Jan 1999 18:54:15 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "VACUUM ANALYZE failed on linux" }, { "msg_contents": "> I have a production server with Postgres 6.4.2.\n> Seems 6.4.2 has problems on linux. Dump file is small (30K in \n> bzip2) - I can send it if someone want to try to reproduce it.\n\nYes, send me the file. Unless gzip is *much* larger, please send in that\nformat.\n\n - Tom\n", "msg_date": "Tue, 02 Feb 1999 15:00:07 +0000", "msg_from": "\"Thomas G. 
Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE failed on linux" }, { "msg_contents": ">\n> > I have a production server with Postgres 6.4.2.\n> > Seems 6.4.2 has problems on linux. Dump file is small (30K in\n> > bzip2) - I can send it if someone want to try to reproduce it.\n>\n> Yes, send me the file. Unless gzip is *much* larger, please send in that\n> format.\n\n I'm already on it and seem's I've found the problem.\n\n Oleg is using a database schema with check constraints (which\n are executed during COPY FROM). The function ExecRelCheck()\n parses each constraint for each tuple every time with\n stringToNode().\n\n First this is wasted efford, second only the outermost node\n of the qualification tree built with stringToNode() is\n pfree()'d in the loop. Without debugging it I can tell that\n a simple constraint like 'attr != 0' will produce an Expr\n pointing to an Oper and a List with one Var and another\n Const. So only one of 4 palloc()'d nodes is pfree()'d, the\n other 3 hang aroung until transaction end.\n\n But it's a little wired here and we cannot put the constraint\n qual-trees into the Relation structure for a long time. This\n will later cause these nodes hang around in the Cache context\n where they shouldn't. Don't know how to optimize here yet.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 2 Feb 1999 16:29:38 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE failed on linux" } ]
[ { "msg_contents": "pgsql-docs-digest \n\nCorrea Vargas <[email protected]> wrote on pgsql-docs\nlist:\n> \n\n[snip]\n\n> Hello!\n> ...Or do you think that is better to install a most recent version?!...\n\n6.4.2 (the current version) is much much more advanced than the 6.0\nversion.\nI would definitely recommend moving to it.\n \n> I wrote a mail one week ago more or less, and I have not received a reply.\n> Please write to me...I need to know if you're receiving my mails, and if\n> you can help me.\n\nI read the digest and it comes about ONCE A MONTH ! (the last one (400) \nwas 13.01.99, this one (401) came today ;(\n\n------------\nHannu\n", "msg_date": "Fri, 29 Jan 1999 19:19:51 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres v6.0 and pgsql-docs-digest V1 #401" }, { "msg_contents": "On Fri, 29 Jan 1999, Hannu Krosing wrote:\n\n> pgsql-docs-digest \n> \n> Correa Vargas <[email protected]> wrote on pgsql-docs\n> list:\n> > \n> \n> [snip]\n> \n> > Hello!\n> > ...Or do you think that is better to install a most recent version?!...\n> \n> 6.4.2 (the current version) is much much more advanced than the 6.0\n> version.\n> I would definitely recommend moving to it.\n> \n> > I wrote a mail one week ago more or less, and I have not received a reply.\n> > Please write to me...I need to know if you're receiving my mails, and if\n> > you can help me.\n> \n> I read the digest and it comes about ONCE A MONTH ! (the last one (400) \n> was 13.01.99, this one (401) came today ;(\n\nOdd...I force a digest to go out every night at midnight...let me watch\ntonights and see if I see any problems I hadn't noticed previously...\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 30 Jan 1999 16:34:29 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: postgres v6.0 and pgsql-docs-digest V1 #401" } ]
[ { "msg_contents": "and this is now the DEFAULT isolevel.\n\nI run some tests to ensure how it works, but not so much.\nUnfortunately, currently it's not possible to add\nsuch tests to regression suit because of they require\nconcurrent transactions. We could write simple script to\nrun a few psql-s simultaneously and than just put queries\nto them (through pipes) in required order. I have no time\nfor this now...\n\nProcessing updates in READ COMMITTED isolevel is much\ncomplex than in SERIALIZABLE one, because of if transaction T1\nnotices that tuple to be updated/deleted/selected_for_update\nis changed by concurrent transaction T2 then T1 has to check\ndoes new version of tuple satisfy T1 plan qual or not.\nFor simple cases like UPDATE t ... WHERE x = 0 or x = 1\nit would be possible to just ExecQual for new tuple, but\nfor joins & subqueries it's required to re-execute entire\nplan having this tuple stuck in Index/Seq Scan over result\nrelation (i.e. - scan over result relation will return\nonly this new tuple, but all other scans will work as usual). \nTo archieve this, copy of plan is created and executed. If\ntuple is returned by this child plan then T1 tries to update\nnew version of tuple and if it's already updated (in the time\nof child plan execution) by transaction T3 then T1 will re-execute\nchild plan for T3' version of tuple, etc.\n\nHandling of SELECT FOR UPDATE OF > 1 relations is ever more\ncomplex. While processing tuples (more than 1 tuple may be \nreturned by join) from child plan P1 created for tuple of table\nA and trying to mark a tuple of table B, updated by T3, T1\nwill have to suspend P1 execution and create new child plan\nP2 with two tuples stuck in scans of A & B. Execution of P1\nwill be continued after execution of P2 (P3, P4 ... 
-:)).\nFortunately, max # of possible child plans is equal to\nthe number of relations in FOR UPDATE clause: if while\nprocessing first tuple from Pn T1 sees that tuple stuck in\nPm, m < n, was changed, then T1 stops execution of\nPn, ..., Pm-1 and re-start Pm execution for new version\nof tuple. Note that n - m may be more than 1 because of\ntuples are always marked in the order specified in FOR UPDATE\nclause and only after transaction ensured that new tuple\nversion satisfies plan qual.\n\nTrigger manager is also able to use child plans for\nbefore row update/delete triggers (tuple must be \nmarked for update - i.e. locked - before trigger\nexecution), but this is not tested at all, yet.\n\nExecutor never frees child plans explicitely but re-uses\nthem if needed and there are unused ones. \n\nWell, MVCC todo list:\n\n-- big items\n\n1. vacuum\n2. btree\n 2.1 still use page locking\n 2.2 ROOT page may be changed by concurrent insertion but\n btinsert doesn't check this\n\n-- small ones\n\n3. refint - selects don't block concurrent transactions: \n FOR UPDATE must be used in some cases\n4. user_lock contrib code: lmgr structures changed\n\nVadim\n", "msg_date": "Sat, 30 Jan 1999 00:54:56 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "READ COMMITTED isolevel is implemented ..." }, { "msg_contents": "Hello All,\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Vadim Mikheev\n> Sent: Saturday, January 30, 1999 2:55 AM\n> To: [email protected]\n> Subject: [HACKERS] READ COMMITTED isolevel is implemented ...\n> \n> \n> and this is now the DEFAULT isolevel.\n>\n\nIt's different from current(v6.4.2).\nThe way will be provided to upgrade user's current code ?\n \n> I run some tests to ensure how it works, but not so much.\n> Unfortunately, currently it's not possible to add\n> such tests to regression suit because of they require\n> concurrent transactions. 
We could write simple script to\n> run a few psql-s simultaneously and than just put queries\n> to them (through pipes) in required order. I have no time\n> for this now...\n> \n> Processing updates in READ COMMITTED isolevel is much\n> complex than in SERIALIZABLE one, because of if transaction T1\n> notices that tuple to be updated/deleted/selected_for_update\n> is changed by concurrent transaction T2 then T1 has to check\n> does new version of tuple satisfy T1 plan qual or not.\n\nHow about UPDATE t set x = x + 1 where .... ?\n\nThe values of x used for x = x + 1 are at the time when statement \nstarted ?\nIt seems that this case also requires re-execution.\n\nThanks.\n\nHiroshi Inoue\[email protected] \n\n\n", "msg_date": "Sat, 30 Jan 1999 09:05:24 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] READ COMMITTED isolevel is implemented ..." }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> > Subject: [HACKERS] READ COMMITTED isolevel is implemented ...\n> >\n> > and this is now the DEFAULT isolevel.\n> >\n> \n> It's different from current(v6.4.2).\n\nFirst, I think that DEFAULT isolevel must be configure-able.\n\n> The way will be provided to upgrade user's current code ?\n\nEven SERIALIZABLE isolevel in MVCC is different from\none in locking systems. There is only one way to don't\nchange anything in applications - use table level locking.\nShould we provide ability to turn MVCC off?\n\n> > Processing updates in READ COMMITTED isolevel is much\n> > complex than in SERIALIZABLE one, because of if transaction T1\n> > notices that tuple to be updated/deleted/selected_for_update\n> > is changed by concurrent transaction T2 then T1 has to check\n> > does new version of tuple satisfy T1 plan qual or not.\n> \n> How about UPDATE t set x = x + 1 where .... 
?\n> \n> The values of x used for x = x + 1 are at the time when statement\n> started ?\n> It seems that this case also requires re-execution.\n\nx + 1 is in target list of execution plan. And so when child plan\nis executed, new value of x is used to evaluate target list\nexpressions. Executor uses tuple from child plan as new version\nof tuple.\n\nVadim\n", "msg_date": "Sat, 30 Jan 1999 11:40:36 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] READ COMMITTED isolevel is implemented ..." }, { "msg_contents": "> Handling of SELECT FOR UPDATE OF > 1 relations is ever more\n> complex. While processing tuples (more than 1 tuple may be \n> returned by join) from child plan P1 created for tuple of table\n\nI don't think Informix allows FOR UPDATE in a multi-table select.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 30 Jan 1999 01:10:28 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] READ COMMITTED isolevel is implemented ..." }, { "msg_contents": "\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Vadim Mikheev\n> Sent: Saturday, January 30, 1999 1:41 PM\n> To: Hiroshi Inoue\n> Cc: [email protected]\n> Subject: Re: [HACKERS] READ COMMITTED isolevel is implemented ...\n> \n> \n> Hiroshi Inoue wrote:\n> > \n> > > Subject: [HACKERS] READ COMMITTED isolevel is implemented ...\n> > >\n> > > and this is now the DEFAULT isolevel.\n> > >\n> > \n> > It's different from current(v6.4.2).\n> \n> First, I think that DEFAULT isolevel must be configure-able.\n> \n> > The way will be provided to upgrade user's current code ?\n> \n> Even SERIALIZABLE isolevel in MVCC is different from\n> one in locking systems. 
There is only one way to don't\n> change anything in applications - use table level locking.\n> Should we provide ability to turn MVCC off?\n>\n\nI think in most cases SERIALIZABLE is sufficient for upgrading.\nSo it is preferable that we can change default isolation level \neasily.\nI believe that SET TRANSACTION ISOLATION LEVEL is per \ntransaction command (i.e. it is necessary for every transaction \nwhich is different from default).\nAnother command to set per connection default is necessary \nas Thomas Lockhart wrote about \"autocommit\".\n \n We can have the default be \"set autocommit on\" (probably \n with an equals sign like our other \"set\" variables) and we can \n have it be a run-time option like DATESTYLE and other settable \n parameters. So you can configure your server or your client \n environment to always behave the way you prefer.\n \n> > > Processing updates in READ COMMITTED isolevel is much\n> > > complex than in SERIALIZABLE one, because of if transaction T1\n> > > notices that tuple to be updated/deleted/selected_for_update\n> > > is changed by concurrent transaction T2 then T1 has to check\n> > > does new version of tuple satisfy T1 plan qual or not.\n> > \n> > How about UPDATE t set x = x + 1 where .... ?\n> > \n> > The values of x used for x = x + 1 are at the time when statement\n> > started ?\n> > It seems that this case also requires re-execution.\n> \n> x + 1 is in target list of execution plan. And so when child plan\n> is executed, new value of x is used to evaluate target list\n> expressions. Executor uses tuple from child plan as new version\n> of tuple.\n>\n\nOracle(Version7) seems to work as you mentioned. \nSorry.\n\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Sat, 30 Jan 1999 15:29:40 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] READ COMMITTED isolevel is implemented ..."
}, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> > >\n> > > > Subject: [HACKERS] READ COMMITTED isolevel is implemented ...\n> > > >\n> > > > and this is now the DEFAULT isolevel.\n> > > >\n> > >\n> > > It's different from current(v6.4.2).\n> >\n> > First, I think that DEFAULT isolevel must be configure-able.\n> >\n> > > The way will be provided to upgrade user's current code ?\n> >\n> > Even SERIALIZABLE isolevel in MVCC is different from\n> > one in locking systems. There is only one way to don't\n> > change anything in applications - use table level locking.\n> > Should we provide ability to turn MVCC off?\n> >\n> \n> I think in most cases SEIALIZABLE is sufficient for upgrading.\n> So it is preferable that we can change default isolation level\n> easily.\n\nAgreed, but I never worked with configure stuff...\n\n> I believe that SET TRANSCTION ISOLATION LEVEL is per\n> transaction command(i.e it is necessary for every transaction\n> which is different from default).\n> Another command to set per connection default is necessary\n> as Thomas Lockhart wrote about \"autocommit\".\n\nOracle uses ALTER SESSION command for this.\n\n> > > > Processing updates in READ COMMITTED isolevel is much\n> > > > complex than in SERIALIZABLE one, because of if transaction T1\n> > > > notices that tuple to be updated/deleted/selected_for_update\n> > > > is changed by concurrent transaction T2 then T1 has to check\n> > > > does new version of tuple satisfy T1 plan qual or not.\n> > >\n> > > How about UPDATE t set x = x + 1 where .... ?\n> > >\n> > > The values of x used for x = x + 1 are at the time when statement\n> > > started ?\n> > > It seems that this case also requires re-execution.\n> >\n> > x + 1 is in target list of execution plan. And so when child plan\n> > is executed, new value of x is used to evaluate target list\n> > expressions. 
Executor uses tuple from child plan as new version\n> > of tuple.\n> >\n> \n> Oracle(Version7) seems to work as you mentioned.\n> Sorry.\n\nIsn't this the same you told in first message?\nAnd if so - what \"sorry\" means? -:)\n\nOk. T1 executes UPDATE t SET x = x + 1 WHERE y = 2 and sees\nthat row (x = 1, y = 2) is updated by T2 to be (x = 3, y = 2).\nWhat is the result of T1 update? In postgres the result\nwill be (x = 4, y = 2), not (x = 2, y = 2). Is it ok?\n\nVadim\n", "msg_date": "Sat, 30 Jan 1999 18:07:05 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] READ COMMITTED isolevel is implemented ..." }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Handling of SELECT FOR UPDATE OF > 1 relations is ever more\n> > complex. While processing tuples (more than 1 tuple may be\n> > returned by join) from child plan P1 created for tuple of table\n> \n> I don't think Informix allows FOR UPDATE in a multi-table select.\n\nOracle does. I don't know about SyBase, DB2 etc.\nIn any case - this is implemented already -:)\n\nVadim\n", "msg_date": "Sat, 30 Jan 1999 18:09:06 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] READ COMMITTED isolevel is implemented ..." }, { "msg_contents": "Vadim Mikheev wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > > Handling of SELECT FOR UPDATE OF > 1 relations is ever more\n> > > complex. While processing tuples (more than 1 tuple may be\n> > > returned by join) from child plan P1 created for tuple of table\n> >\n> > I don't think Informix allows FOR UPDATE in a multi-table select.\n> \n> Oracle does. 
I don't know about SyBase, DB2 etc.\n> In any case - this is implemented already -:)\n>\n\nWhen MS Access came out they made a big fuss about this ability, \nclaiming that they were the first ones to implement this.\n\nI'm not sure in what category they claimed they were first ;)\n\n-------------------\nHannu\n", "msg_date": "Sat, 30 Jan 1999 13:47:52 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] READ COMMITTED isolevel is implemented ..." }, { "msg_contents": "> and this is now the DEFAULT isolevel.\n\nBut it seems that the standard says SERIALIZABLE is the default\nisolation level (or at least the highest isolation level implemented\nin the product), doesn't it?\n\nI have looked into the Japanese translated version of:\n\n\"A guide to the SQL standard 4th edition\" by C.J.Date\n\"Understanding the new SQL: A complete guide\" by J.Melton and A.R.Simon\n\nAnyone can confirm this?\n--\nTatsuo Ishii\n", "msg_date": "Sun, 31 Jan 1999 18:18:54 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] READ COMMITTED isolevel is implemented ... " }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> > and this is now the DEFAULT isolevel.\n> \n> But it seems that the standard says SERIALIZABLE is the default\n> isolation level (or at least the highest isolation level implemented\n> in the product), doesn't it?\n\nYes, it does.\n\nBut Oracle, Informix, Sybase all use READ COMMITTED as default.\nPlease decide yourself - it doesn't matter much to me -:)\nI would like to see it 1. configure-able; 2. in pg_options;\n3. in command line args. I'll do this after beta started,\nif no one else before.\n\nVadim\n", "msg_date": "Mon, 01 Feb 1999 10:05:05 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] READ COMMITTED isolevel is implemented ..."
}, { "msg_contents": ">Tatsuo Ishii wrote:\n>> \n>> > and this is now the DEFAULT isolevel.\n>> \n>> But it seems that the standard says SERIALIZABLE is the default\n>> isolation level (or at least the highest isolation level implemented\n>> in the product), doesn't it?\n>\n>Yes, it does.\n\nThen we should go for the standard way, I think.\n\n>But Oracle, Informix, Sybase all use READ COMMITTED as default.\n>Please decide youself - it doesn't matter much to me -:)\n>I would like to see it 1. configure-able; 2. in pg_options;\n>3. in command line args. I'll do this after beta started,\n>if no one else before.\n\nBTW, what is the advantage of READ COMMMITTED in PostgreSQL? I thought\nthe SERIALIZABLE should give us enough concurrency since we are using\nMVCC. Could you give me some examples?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 02 Feb 1999 14:34:13 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] READ COMMITTED isolevel is implemented ... " }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> BTW, what is the advantage of READ COMMMITTED in PostgreSQL? I thought\n> the SERIALIZABLE should give us enough concurrency since we are using\n> MVCC. Could you give me some examples?\n\nYes, but UPDATE/DELETE in SERIALIZABLE mode will cause\nelog(ERROR, \"Can't serialize access due to concurrent update\");\nin the case of the-same row update.\nOracle sets implicit savepoint before executing a statement.\nIn Postgres - entire transaction will be aborted...\n\nI have some ideas about savepoints... may be in 6.6 or 6.7...\n\nVadim\n", "msg_date": "Tue, 02 Feb 1999 17:02:00 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] READ COMMITTED isolevel is implemented ..." } ]
[ { "msg_contents": "ANOQ of the Sun wrote:\n> Clark Evans wrote:\n> > I am going to write a DOM implemention for PostgreSQL,\n> > which is licensed under the BSD license. Would you\n> > consider releasing a copy of your work under the BSD\n> > license so that I could use it in PostgreSQL?\n> \n> Very much so... Basically any freeware licence is fine with me.\n\nCool.\n\n> When do you need the licence? And do you need it\n> for the current release? Can you point me to documents\n> on how to release under the licence?\n\nEnclosed is the copyright agreement for the PostgreSQL database.\nI assume that this is how it would need to be licensed so that\na DOM implementation could be included in their distribution.\n\nThank you,\n\nClark Evans\n\n----------------------------------------\nPostgreSQL Data Base Management System (formerly known as Postgres, then\nas Postgres95).\n\nCopyright (c) 1994-7 Regents of the University of California\n\nPermission to use, copy, modify, and distribute this software and its\ndocumentation for any purpose, without fee, and without a written\nagreement\nis hereby granted, provided that the above copyright notice and this\nparagraph and the following two paragraphs appear in all copies.\n\nIN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY\nFOR\nDIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES,\nINCLUDING\nLOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\nDOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF\nTHE\nPOSSIBILITY OF SUCH DAMAGE.\n\nTHE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES,\nINCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\nAND FITNESS FOR A PARTICULAR PURPOSE. 
THE SOFTWARE PROVIDED HEREUNDER\nIS\nON AN \"AS IS\" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS\nTO\nPROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n", "msg_date": "Fri, 29 Jan 1999 20:16:02 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": true, "msg_subject": "License for PostgreSQL Contributions (Was: RE: DOM Implementation for\n\tC++)" } ]
[ { "msg_contents": "Berkeley types,\n\tDr. Stonebraker developed some code in 1991 called Visionary.\nThat \nallowed a user to visually datamine a database engine's data. Does the\noriginal\nsource to this project still exist? And could Postgres benefit from it\nas a\ndatamining tool for Postgres clients?\n\nD.\n", "msg_date": "Fri, 29 Jan 1999 16:15:57 -0500", "msg_from": "Dan Gowin <[email protected]>", "msg_from_op": true, "msg_subject": "Visionary" }, { "msg_contents": "> Berkeley types,\n> \tDr. Stonebraker developed some code in 1991 called Visionary.\n> That \n> allowed a user to visually datamine a database engine's data. Does the\n> original\n> source to this project still exist? And could Postgres benefit from it\n> as a\n> datamining tool for Postgres clients?\n> \n> D.\n> \n> \n\nI believe it is called Tioga. Not sure, but I saw a web page about it\nonce. Check the postgresql e-mail archives on our web site. Maybe\nhackers or general list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 30 Jan 1999 01:14:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Visionary" } ]
[ { "msg_contents": "I'm having a few problems getting updates from anon CVS.\n\nUpon issuing the \"cvs update\" command I get the error:-\n\nFatal error, aborting.\n: no such user\n\nI decided to try a complete checkout so renamed the directory \nand my ~/.cvspass file and logged on again.\n\nmtcc:[/export/home/emkxp01](226)% cvs -d \n:pserver:[email protected]:/usr/local/cvsroot login\n(Logging in to [email protected])\nCVS password: <---- entered \"postgresql\"\nmtcc:[/export/home/emkxp01](227)% cvs -z3 -d \n:pserver:[email protected]:/usr/local/cvsroot co -P pgsql\nFatal error, aborting.\n: no such user\n\nSo this doesn't help :-(\n\nAnyone else having problems?\n\nKeith.\n\nBTW: cvs -v\n\nConcurrent Versions System (CVS) 1.10 `Halibut' (client/server)\n\n", "msg_date": "Fri, 29 Jan 1999 22:05:09 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Problems with anon CVS?" }, { "msg_contents": "Dear PostgreSQL gurus, \n\ndear Keith, \n\nI'm seeing exactly the same problem with CVS... has the server\nbeen changed or has a technical problem occurred?\n\nRegards,\n\nErnst\n", "msg_date": "Sat, 30 Jan 1999 12:40:50 GMT", "msg_from": "\"Dr. Ernst Molitor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems with anon CVS?" 
}, { "msg_contents": "\nTry now...should *cross fingers* be fixed...\n\nit was a problem resulting from the system upgrade this past week...\n\nOn Fri, 29 Jan 1999, Keith Parks wrote:\n\n> I'm having a few problems getting updates from anon CVS.\n> \n> Upon issuing the \"cvs update\" command I get the error:-\n> \n> Fatal error, aborting.\n> : no such user\n> \n> I decided to try a complete checkout so renamed the directory \n> and my ~/.cvspass file and logged on again.\n> \n> mtcc:[/export/home/emkxp01](226)% cvs -d \n> :pserver:[email protected]:/usr/local/cvsroot login\n> (Logging in to [email protected])\n> CVS password: <---- entered \"postgresql\"\n> mtcc:[/export/home/emkxp01](227)% cvs -z3 -d \n> :pserver:[email protected]:/usr/local/cvsroot co -P pgsql\n> Fatal error, aborting.\n> : no such user\n> \n> So this doesn't help :-(\n> \n> Anyone else having problems?\n> \n> Keith.\n> \n> BTW: cvs -v\n> \n> Concurrent Versions System (CVS) 1.10 `Halibut' (client/server)\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 31 Jan 1999 16:23:03 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with anon CVS?" } ]