[ { "msg_contents": "It seems that I can use tcl.h under an alternative directory if I\nwould tell configure target directory lists like:\n\nAdditional directories to search for include files{ /usr/local/include }: /usr/local/include/tcl7.6jp /usr/local/include/tk4.2jp /usr/local/include \n\nBut this is a bit pain for me. I miss the way 6.3 handles the include\ndirs for Tcl/Tk.\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Wed, 25 Mar 1998 15:11:39 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "using alternative tcl include dir?" } ]
[ { "msg_contents": "David Gould wrote:\n>Bruce Momjian writes:\n>> > Stephane Lajeunesse writes:\n>> > > > A create group <groupname> is still missing in the grammar,\n>> > > \n>> > > I'm working on this.. Should have something working around the end of\n>> > > this week (for ALTER USER and CREATE USER).\n>> > \n>> > Please let me use this to tell you all that I would like to get notice of\n>> > each change to gram.y. I am currently modelling ecpg's parser after gram.y\n>> > to get good syntax checking. So I have to make these changes, too.\n>> \n>> Good idea on telling you of each change, but I also recommend that every\n>> time you update the ecpg grammer, you save a copy the gram.y that you\n>> used to do it, so later when you need to get it back in sync, you can do\n>> a diff on the old and new one to see each change so you don't miss any.\n>\n>Consider also not updateing the grammar. The strength of PostgreSQL is that\n>functions can be added to work inside the server. These functions can often\n>do whatever is being proposed as new syntax.\n>\n>So, instead of cluttering up the grammar with non-standard SQLish stuff\n>to handle things like groups, just create an administrative function to\n>do this job.\n>\n>* return create_group('groupname');\n>* return add_user_to_group('groupname', 'username');\n>* return drop_group('groupname');\n>\n>These can be written in C, in SQL, or what ever far more quickly and with\n>much less risk of destabilizing the system than the parser can be modified.\n>It also avoids making incompatibility with ecpg.\n>\n>And, in keeping with the recent anti-bloat thread, these can be loadable\n>extensions, not part of the core. So if you don't use groups, you don't pay\n>for them.\n\nI am sorry, but I have to disagree here. The group functionality is part of SQL92\nit is only called \"role\". In my opinion it is the only serious way to use the \nSQL permission stuff. I never grant rights directly to users, I always try to\ncreate task oriented roles, and then grant the users roles. Then if we get a new\nsecretary I only have to grant secretary to the new user. Everything else would be a nightmare.\nThere is only a misconcept in Informix, that makes roles rather useless,\nyou have to say 'set role secretary;' in every session to actually get the rights, there is no\ndefault roles like in Oracle.\n\nAndreas\n\n\n", "msg_date": "Wed, 25 Mar 1998 09:37:16 +-100", "msg_from": "Zeugswetter Andreas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: PostgreSQL reference manual (groups)" }, { "msg_contents": "Zeugswetter Andreas wrote:\n> \n> >So, instead of cluttering up the grammar with non-standard SQLish stuff\n> >to handle things like groups, just create an administrative function to\n> >do this job.\n> >\n> >* return create_group('groupname');\n> >* return add_user_to_group('groupname', 'username');\n> >* return drop_group('groupname');\n\nI actually tought about this but would have considered this a 'patch'\nuntil native support.\n\n> >\n> >These can be written in C, in SQL, or what ever far more quickly and with\n> >much less risk of destabilizing the system than the parser can be modified.\n> >It also avoids making incompatibility with ecpg.\n\nThe syntax for ALTER USER .. IN GROUP and CREATE USER IN GROUP is\nalready in gram.y. The arguments are also passed to user.c. The only\nthing needed was implementation. The only thing not in gram.y is CREATE\nGROUP. BTW, I have a working version of alter user and create user. 
\nAlso started working on delete user (removall from all groups). Hope to\nclean up the code and release it soon.\n\n> I am sorry, but I have to disagree here. The group functionality is part of SQL92\n> it is only called \"role\". In my opinion it is the only serious way to use the\n> SQL permission stuff. I never grant rights directly to users, I always try to\n> create task oriented roles, and then grant the users roles. Then if we get a new\n> secretary I only have to grant secretary to the new user. Everything else would be a nightmare.\n> There is only a misconcept in Informix, that makes roles rather useless,\n> you have to say 'set role secretary;' in every session to actually get the rights, there is no\n> default roles like in Oracle.\n> \n> Andreas\n\nI totally support Andreas here. Roles or groups should be part of the\ncore RDBMS. I don't think telling users to load a module to have groups\nor roles enabled would be appropriate when all other RDBMS support some\nimplementation of this off the shelf.\n\n-- \nStephane Lajeunesse.\nOracle and Sybase DBA\n", "msg_date": "Wed, 25 Mar 1998 20:25:00 -0500", "msg_from": "Stephane Lajeunesse <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: PostgreSQL reference manual (groups)" }, { "msg_contents": "\nAndreas is right.\n\nHooray for \"roles\" -- this is indeed the only way to preserve \nthe DBA's sanity across personnel turnover in key entry and \nfront desk, etc. definitely a core item.\n\nA place where Sybase falls on its heinie (yes, I can admit\nthat their engine is less than perfect :-)) is that users\ncan only belong to *one* \"group\". This is frustrating,\nand I'd love to see the ability to associate users with\nmultiple roles, rather than creating an elaborate system\nof hierarchical role defs.\n\nGee, this PG discussion just gets more and more exciting as \nit goes on.\n\nde\n\n.............................................................................\n:De Clarke, Software Engineer UCO/Lick Observatory, UCSC:\n:Mail: [email protected] | \"There is no problem in computer science that cannot: \n:Web: www.ucolick.org | be solved by another level of indirection\" --J.O. :\n\n\n\n", "msg_date": "Wed, 25 Mar 1998 17:47:41 -0800 (PST)", "msg_from": "De Clarke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: PostgreSQL reference manual (groups) " } ]
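To make the task-oriented pattern Andreas describes concrete, here is what it looks like in Oracle-style role syntax. This is illustrative only -- PostgreSQL at this point has neither CREATE ROLE nor CREATE GROUP, which is exactly what the thread is about -- and the table and user names are invented:

    CREATE ROLE secretary;
    GRANT SELECT, INSERT, UPDATE ON customers TO secretary;
    -- when a new secretary starts, a single grant is enough:
    GRANT secretary TO alice;

Nothing is granted directly to users; staff turnover never touches the table-level permissions.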
[ { "msg_contents": "Bruce wrote:\n>> \n>> You seem to be getting me wrong, right from the start. I appreciate your work on removing \n>> char2-char16. I also think it is the correct step. You have the hackers behind you, the discussion was\n>> about 2 - 3 weeks ago. I was part of it. There are three things that will need documentation:\n>> \t1. the replacement for char16 is char(16)\n>> \t2. char(16) gives and ignores trailing blanks\n>\n>I don't think this is true.\n\nHmm ? I am puzzeled. Of course this is true, it has to be, it is standard:\n\ntest=> create table chartest (a char(16));\ntest=> insert into chartest values ('Andreas');\ntest=> select a || '<' from chartest;\n?column?\n-----------------\nAndreas <\n(1 row)\ntest=> select a || '<' from chartest where a='Andreas';\n?column?\n-----------------\nAndreas <\n(1 row)\n\nAndreas\n\n\n\n", "msg_date": "Wed, 25 Mar 1998 10:01:33 +-100", "msg_from": "Zeugswetter Andreas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] char types gone." } ]
[ { "msg_contents": "David Gould wrote:\n>Andreas wrote:\n>>\n>> I think we should depreciate the BEGIN/END keywords in SQL to allow them\nI am only talking about the syntax here.\n>> to be used for the new PL/SQL. So definitely leave them out of ecpg now.\n>> Only accept BEGIN WORK and BEGIN TRANSACTION. (do a sequence of commit work; begin work)\n>> BTW.: why is a transaction always open ? A lot of programs would never need a\nI meant: why is a transaction always open in an ecpg program\n>> transaction. Is it because of cursors ?\n>\n>Because without transactions it is darn near impossible to build a database \n>that can guarantee data consistancy. Transactions are _the_ tool used to\n>build robust systems that remain usable even after failures.\n\nI shoud probably have said: A lot of programs would never need a transaction\nthat span more than one statement.\n\n>For example take the simple single statment:\n>\n>insert into customers values(\"my name\", customer_number(\"my name\"));\n>\n>Assuming that there is an index on the name and id # columns, what happens\n>if the system dies after the name index is updated, but the id # index \n>is not? Your indexes are corrupt. With transactions, the whole thing just\n>rolls back and remains consistant.\n>\n>Since PostgreSQL is more powerful than many databases, it is just about\n>impossible for a client application to tell what is really happening and\n>whether a transaction is needed even if the client only is using very\n>simple SQL that looks like it doesn't need a transaction.\n>\n>Take the SQL statement above and add a trigger or rule on the customers\n>table like so:\n>\n>create rule new_cust on insert to customers do after\n> insert into daily_log values (\"new customer\", new.name);\n> update statistics set total_customers = total_customers + 1 ...\n>\n>Now you really need a transaction.\n>\n>Oh, but lets look at the customer_number() function:\n>\n>begin\n> return (select unique max(cust_no) + 1 from customers);\n>end\n>\n>This needs to lock the whole table and cannot release those locks until\n>the insert to customer is done. This too must be part of the transaction.\n>\n>Fortunately, unlike say 'mySQL', posgreSQL does the right thing and always\n>has a transaction wrapped around any statement.\n\nYes, but this is handeled implicitly by the backend even if the user does not say\nbegin work;\nblabla\ncommit work;\nIn that sense every statement is atomic.\n\nIn a client server environment the implicit begin work; commit work; can save\na lot of time since it saves 2 network roundtrips.\nAnd of course it would be bad practice if the user is forced to do commit work;\nand then for ease of programming and execution speed only does this every 100 statements.\n\nWhat I am saying here is, that an ecpg program should be able to run with \nautocommit mode on. (Michael Meskes)\n\nAndreas\n\n", "msg_date": "Wed, 25 Mar 1998 10:32:04 +-100", "msg_from": "Zeugswetter Andreas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Begin statement again" }, { "msg_contents": "Zeugswetter Andreas writes:\n> I meant: why is a transaction always open in an ecpg program\n\nBecause this is how it works with other embedded SQL systems too. I have\ndone quite some work with Oracle, and it always has the transaction open.\n\nKeep in mind that there is no disconnect command. 
Instead you go out by\nissuing a commit.\n\n> Yes, but this is handeled implicitly by the backend even if the user does not say\n> begin work;\n> blabla\n> commit work;\n> In that sense every statement is atomic.\n> \n> In a client server environment the implicit begin work; commit work; can save\n> a lot of time since it saves 2 network roundtrips.\n> And of course it would be bad practice if the user is forced to do commit work;\n> and then for ease of programming and execution speed only does this every 100 statements.\n> \n> What I am saying here is, that an ecpg program should be able to run with \n> autocommit mode on. (Michael Meskes)\n\nI tend to agree. But all embedded SQL programs I've seen so far only use\ncommit. I never saw one that issues a begin work since I stopped using\nIngres.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Wed, 25 Mar 1998 15:49:14 +0100 (CET)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Begin statement again" } ]
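To spell out the trade-off being argued here, the same work with and without an explicit transaction, reusing the table names from David Gould's example above:

    -- explicit transaction: both statements succeed or fail together,
    -- at the cost of two extra round trips for BEGIN/COMMIT
    BEGIN WORK;
    INSERT INTO daily_log VALUES ('new customer', 'my name');
    UPDATE statistics SET total_customers = total_customers + 1;
    COMMIT WORK;

    -- autocommit style: each statement is wrapped in its own
    -- implicit transaction by the backend
    INSERT INTO daily_log VALUES ('new customer', 'my name');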
[ { "msg_contents": "Yes, my postgresql moves to a 2xPII 266MHx/128GB/8GB SCSI Digital Server.\nBut the network card DE500AA refuses to work under Linux.\n\nI know that this problem isn't directly related to PostgreSQL yet it is more\nlikely for database administrators to deal with rare hardware.\n\nSo please answer if you have a solution.\n\nIs Digital DE500AA network card supported by Linux ?\nIf yes, then are any special requirements/parameters ?\n\n\n\n\n", "msg_date": "Wed, 25 Mar 1998 11:45:57 +0200", "msg_from": "\"SC Altex Impex SRL\" <[email protected]>", "msg_from_op": true, "msg_subject": "my postgresql moves to a Digital server, but..." }, { "msg_contents": "On Wed, 25 Mar 1998, SC Altex Impex SRL wrote:\n\n> Yes, my postgresql moves to a 2xPII 266MHx/128GB/8GB SCSI Digital Server.\n> But the network card DE500AA refuses to work under Linux.\n\nthis is a tad off topic, however. visit cesdis.gsfc.nasa.gov and get the\nlatest tulip driver. i believe v0.87P or higher is on the site.\n/pub/linux/drivers/*\n\n-d\n\nLook, look, see Windows 95. Buy, lemmings, buy! \n(c) 1998 David Ford. Redistribution via the Microsoft Network is prohibited.\n\n[reply to: [email protected] without the 127-0-0-1.]\n*** *** Flames will go to /dev/null\n** WARNING ** SPAM mail will be returned to you at a\n*** *** minimum rate of 50,000 copies per email\n\n", "msg_date": "Wed, 25 Mar 1998 02:28:41 -0800 (PST)", "msg_from": "Blu3Viper <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] my postgresql moves to a Digital server, but..." } ]
[ { "msg_contents": ">> > > I have another question about GRANT/REVOKE:\n>> > >\n>> > > grant <privilege[,privilege,...]>\n>> > > on <rel1>[,...<reln>]\n>> > > to [public | GROUP <group> | <username>]\n>> > > ^^^^^^^^^^^^^\n>> > > I don't know how to create a GROUP ?\n>> >\n>> > I believe that you use \"CREATE USER groupname\", and then can assign\n>> > privileges to that pseudo-user/group, and then add users to that \n>> > group. Have you tried that?\n>> postgres=> create user grupo;\n>> CREATE USER\n\nNo, do this: insert into pg_group values ('grupo', 100, '{6}');\n\n>> postgres=> grant all on tmp to grupo;\n>> CHANGE\n>> create user joe in group grupo;\n\nlooks like this is ignored ?\n\n>> CREATE USER\n>> postgres=> grant select on tmp to group grupo;\n>> ERROR: non-existent group \"grupo\"\n>\n>Can someone tell us how \"groups\" work? I'm not finding enough clues just\n>by looking in the parser, and haven't stumbled across it in the docs...\n\nI have no idea what the grosysid is supposed to be, I only notice, that 100 works while 18204\ncrashes psql. To be consistent with pg_user I think it should hold the unix group id,\nif the group also exists in /etc/groups. If not, from what I see in the sources\nit must still be unique, see src/backend/catalog/aclchk.c ** this code is a little mess really ** \nThe field grolist has to be manually maintained currently. It contains an \narray of usesysid's of the users in this group. (select usesysid from pg_user where usename='joe';)\n\nAndreas\n\n\n\n", "msg_date": "Wed, 25 Mar 1998 11:08:42 +-100", "msg_from": "Zeugswetter Andreas <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Re: PostgreSQL reference manual" } ]
[ { "msg_contents": "\n>Zeugswetter Andreas writes:\n>> I meant: why is a transaction always open in an ecpg program\n>\n>Because this is how it works with other embedded SQL systems too. I have\n>done quite some work with Oracle, and it always has the transaction open.\n\nI am well accustomed to the deficiencies of Oracle. But in Oracle you don't have read locks,\nand so a read only program does no harm if it only does one commit when it exits \n(except maybe block the RBS if it did one small update).\nSince postgresql does have read locks, such a program will lock all resources as time goes by,\nif it does not do frequent commits. Not to speak of memory, that does not get freed.\n\n>>\n>>Keep in mind that there is no disconnect command. Instead you go out by\n>>issuing a commit.\n\nHmmm ? you don't tell the backend when the program exits ?\n\n> What I am saying here is, that an ecpg program should be able to run with \n> autocommit mode on. (Michael Meskes)\n>\n>I tend to agree. But all embedded SQL programs I've seen so far only use\n>commit. I never saw one that issues a begin work since I stopped using\n>Ingres.\n\nTry Informix, and you will love the difference and speed in these points.\nThe begin work statement is also a fundamental part of postgres. I simply would not hide it.\n\nAndreas\n\n\n", "msg_date": "Wed, 25 Mar 1998 16:28:25 +-100", "msg_from": "Zeugswetter Andreas <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Begin statement again" }, { "msg_contents": "Zeugswetter Andreas writes:\n> I am well accustomed to the deficiencies of Oracle. But in Oracle you don't have read locks,\n> and so a read only program does no harm if it only does one commit when it exits \n> (except maybe block the RBS if it did one small update).\n> Since postgresql does have read locks, such a program will lock all resources as time goes by,\n> if it does not do frequent commits. Not to speak of memory, that does not get freed.\n\nYou got a point with this.\n\n> Hmmm ? you don't tell the backend when the program exits ?\n\nSo far I don't. Does anyone know whether there's a disconnect command\nsomewhere? In embedded SQL that is. Oracle uses 'commit work release'.\n\nThe function I have to call does exist already.\n\n> Try Informix, and you will love the difference and speed in these points.\n> The begin work statement is also a fundamental part of postgres. I simply would not hide it.\n\nI do not hide it all. But I'd like to be as compatible to Oracle as\npossible. Maybe we could add an autotransaction flag somehow.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Wed, 25 Mar 1998 16:57:00 +0100 (CET)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Begin statement again" }, { "msg_contents": "\nAndreas wrote:\n>\n>\n> >Zeugswetter Andreas writes:\n> >> I meant: why is a transaction always open in an ecpg program\n> >\n> >Because this is how it works with other embedded SQL systems too. I have\n> >done quite some work with Oracle, and it always has the transaction open.\n>\n> I am well accustomed to the deficiencies of Oracle. 
But in Oracle you don't have read locks,\n> and so a read only program does no harm if it only does one commit when it exits\n> (except maybe block the RBS if it did one small update).\n> Since postgresql does have read locks, such a program will lock all resources as time goes by,\n> if it does not do frequent commits. Not to speak of memory, that does not get freed.\n\n I'm not that familiar with the C level of Oracle connections.\n But I used oratcl from Tom Poindexter sometimes and that has\n a AUTOCOMMIT ON/OFF statement that sets the autocommit flag\n in the library routines somewhere. Doesn't embedded SQL use\n the same libraries to connect to oracle that oratcl uses?\n\n In oratcl autocommit is ON by default and I assumed this is\n the libraries default too. Correct me if I'm wrong.\n\n Anyway - ecpg could work around. It can manage an autocommit\n flag and an in_trans status by itself. When autocommit is OFF\n and in_trans is false, it sends down a 'BEGIN TRANSACTION'\n right before the next query and sets in_trans to true.\n Later, when PostgreSQL responds 'COMMIT' from a query, it\n sets in_trans back to false and we have the behaviour of the\n AUTOCOMMIT. This way, a program that doesn't explicitly set\n autocommit to off might sometimes issue a COMMIT that results\n in an empty BEGIN/COMMIT sequence sent down to the backend -\n not too bad IMHO. As soon as a program requires real\n transactions, it sets autocommit to false and has (from the\n embedded SQL programmers point of view) total Oracle\n compatibility. And as long as autocommit is ON, there are no\n open locks laying around since ecpg doesn't send 'BEGIN\n TRANSACTION' and PostgreSQL's default is somewhat like\n autocommit too.\n\n>\n> >>\n> >>Keep in mind that there is no disconnect command. Instead you go out by\n> >>issuing a commit.\n>\n> Hmmm ? you don't tell the backend when the program exits ?\n\n Isn't EOF information enough? Must a client say BYE?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 25 Mar 1998 17:33:01 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Begin statement again" }, { "msg_contents": "Jan Wieck writes:\n> I'm not that familiar with the C level of Oracle connections.\n> But I used oratcl from Tom Poindexter sometimes and that has\n> a AUTOCOMMIT ON/OFF statement that sets the autocommit flag\n> in the library routines somewhere. Doesn't embedded SQL use\n> the same libraries to connect to oracle that oratcl uses?\n\nDon't know.\n\n> Anyway - ecpg could work around. It can manage an autocommit\n> flag and an in_trans status by itself. When autocommit is OFF\n> and in_trans is false, it sends down a 'BEGIN TRANSACTION'\n> right before the next query and sets in_trans to true.\n> Later, when PostgreSQL responds 'COMMIT' from a query, it\n> sets in_trans back to false and we have the behaviour of the\n> AUTOCOMMIT. This way, a program that doesn't explicitly set\n> autocommit to off might sometimes issue a COMMIT that results\n> in an empty BEGIN/COMMIT sequence sent down to the backend -\n> not too bad IMHO. As soon as a program requires real\n\nWait a moment. This is almost as it is handled currently. 
ecpg issues a\n'BEGIN TRANSACTION' before the next statement if commited (as the variable\nis called) is set to TRUE. Then it sets commited back to FALSE. Issuing a\nCOMMIT sets it back to TRUE.\n\n> transactions, it sets autocommit to false and has (from the\n> embedded SQL programmers point of view) total Oracle\n> compatibility. And as long as autocommit is ON, there are no\n\nOracle compatibility means exactly the behaviour we currently have. BEGIN\nTRANSACTION is issued automatically. COMMIT has to be called by hand. But\nwhat we were talking about is forcing both to be called by the programmer.\n\n> open locks laying around since ecpg doesn't send 'BEGIN\n> TRANSACTION' and PostgreSQL's default is somewhat like\n> autocommit too.\n> ...\n> Isn't EOF information enough? Must a client say BYE?\n\nNo, it need not. But it would be nice if it does, wouldn't it?\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 26 Mar 1998 09:12:20 +0100 (MEZ)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Begin statement again" }, { "msg_contents": "On Wed, 25 Mar 1998, Michael Meskes wrote:\n\n> Zeugswetter Andreas writes:\n> > I am well accustomed to the deficiencies of Oracle. But in Oracle you don't have read locks,\n> > and so a read only program does no harm if it only does one commit when it exits \n> > (except maybe block the RBS if it did one small update).\n> > Since postgresql does have read locks, such a program will lock all resources as time goes by,\n> > if it does not do frequent commits. Not to speak of memory, that does not get freed.\n> \n> You got a point with this.\n> \n> > Hmmm ? you don't tell the backend when the program exits ?\n> \n> So far I don't. Does anyone know whether there's a disconnect command\n> somewhere? In embedded SQL that is. Oracle uses 'commit work release'.\n> \nThe DISCONNECT statement is used to terminate an inactive\nSQL-Connection. A SQL-Connection can be closed whether it is the\ncurrent SQL-Connection or a dormant SQL-Connection, but may not\nclosed while a transaction is on-going for its associated\nSQL-session.\n\nThe required syntax for the DISCONNECT statement is:\n\nDISCONNECT \n <Connection Name> |\n DEFAULT |\n CURRENT |\n ALL\n Ciao, Jose'\n\n", "msg_date": "Thu, 26 Mar 1998 14:10:49 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Begin statement again " } ]
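Jan's lazy-BEGIN scheme, seen as the statement stream ecpg would send to the backend -- a sketch only; the table is invented and the real ecpg output may differ:

    -- autocommit OFF, in_trans false: ecpg prefixes the first query
    BEGIN TRANSACTION;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;
    COMMIT;   -- backend answers COMMIT, ecpg sets in_trans back to false

    -- autocommit ON: statements go down bare, each one implicitly atomic
    SELECT balance FROM accounts WHERE id = 1;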
[ { "msg_contents": "> > \n>> David Gould writes:\n>> > Consider also not updateing the grammar. The strength of PostgreSQL is that\n>> > functions can be added to work inside the server. These functions can often\n>> > do whatever is being proposed as new syntax.\n>> \n>> So you want me to not check the syntax while parsing the embedded SQL code?\n>\n>What I think we was suggesting is that we add non-ANSI functionality as\n>function calls rather than grammer changes with keywords. The only\n>disadvantage is that it is a little more cumbersom, and less intuitive\n>for users.\n\nbut it ** is ** ANSI functionality, look under \"role\" (with an O)\n\nAndreas\n\n\n\n\n", "msg_date": "Wed, 25 Mar 1998 16:31:00 +-100", "msg_from": "Zeugswetter Andreas <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: [HACKERS] Re: PostgreSQL reference manual" }, { "msg_contents": "Andreas:\n> >> David Gould writes:\n> >> > Consider also not updateing the grammar. The strength of PostgreSQL is that\n> >> > functions can be added to work inside the server. These functions can often\n> >> > do whatever is being proposed as new syntax.\n> >> \n> >> So you want me to not check the syntax while parsing the embedded SQL code?\n> >\n> >What I think we was suggesting is that we add non-ANSI functionality as\n> >function calls rather than grammer changes with keywords. The only\n> >disadvantage is that it is a little more cumbersom, and less intuitive\n> >for users.\n> \n> but it ** is ** ANSI functionality, look under \"role\" (with an O)\n\nOk, but are we using the ANSI syntax? If so, then I withdraw my objection.\nBut, if we are adding ANSI functionality with UNIQUE syntax, then why bother\nhacking the parser since the functionality can be added with functions.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n", "msg_date": "Thu, 26 Mar 1998 17:04:34 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: AW: AW: [HACKERS] Re: PostgreSQL reference manual" }, { "msg_contents": "> > but it ** is ** ANSI functionality, look under \"role\" (with an O)\n> Ok, but are we using the ANSI syntax? If so, then I withdraw my \n> objection. But, if we are adding ANSI functionality with UNIQUE \n> syntax, then why bother hacking the parser since the functionality can \n> be added with functions.\n\nWe don't have a goal of implementing unique syntax *just because*,\nalthough it may look that way from time to time. If the syntax can be\nmade compliant without damaging the functionality, we will make it SQL92\ncompatible (or compatible with whatever standard makes sense).\n\nbtw, this brings up a question:\n\nThe MySQL bunch have included some syntax in their \"crash-me\" test which\nis _not_ SQL92 compliant, including hex constants specified as\n\n 0x0F\n\n(for decimal 15, assuming I've done the conversion right :). They claim\nthat this is required by the ODBC standard, whatever that is. What is\nthe relationship between the two? Isn't ODBC a client interface, not\nnecessarily dealing with SQL directly but rather with common SQLish\nfunctionality? In cases where SQL92 and ODBC conflict, how do systems\nresolve the differences? 
For this case, SQL92 clearly defines the syntax\nas\n\n x'0F'\n\nIn this particular case it will be easy to implement this ODBC syntax in\nthe scanner, but I don't want to jerk it around too much if it a bogus\nissue :(\n\n - Tom\n", "msg_date": "Fri, 27 Mar 1998 03:35:04 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: [HACKERS] Re: PostgreSQL reference manual" }, { "msg_contents": "Thomas G. Lockhart wrote:\n> \n> > > but it ** is ** ANSI functionality, look under \"role\" (with an O)\n> > Ok, but are we using the ANSI syntax? If so, then I withdraw my \n> > objection. But, if we are adding ANSI functionality with UNIQUE \n> > syntax, then why bother hacking the parser since the functionality can \n> > be added with functions.\n> \n> We don't have a goal of implementing unique syntax *just because*,\n> although it may look that way from time to time. If the syntax can be\n> made compliant without damaging the functionality, we will make it SQL92\n> compatible (or compatible with whatever standard makes sense).\n> \n> btw, this brings up a question:\n> \n> The MySQL bunch have included some syntax in their \"crash-me\" test which\n> is _not_ SQL92 compliant, including hex constants specified as\n> \n> 0x0F\n> \n> (for decimal 15, assuming I've done the conversion right :). They claim\n> that this is required by the ODBC standard, whatever that is. What is\n> the relationship between the two? Isn't ODBC a client interface, not\n> necessarily dealing with SQL directly but rather with common SQLish\n> functionality? In cases where SQL92 and ODBC conflict, how do systems\n> resolve the differences? For this case, SQL92 clearly defines the syntax\n> as\n> \n> x'0F'\n\nWell, far be it for me to want or suggest that we be exactly like\nSybase, but:\n\n1> select 0x0F\n2> go\n \n ---- \n 0x0f \n \n(1 row affected)\n1> select x'0F'\n2> go\nMsg 207, Level 16, State 2:\nLine 1:\nInvalid column name 'x'.\n1> \n\n\nOcie\n", "msg_date": "Fri, 27 Mar 1998 13:23:55 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: AW: AW: [HACKERS] Re: PostgreSQL reference manual" } ]
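The two notations side by side; which ones a given parser accepts is exactly the open question here, so treat this as illustration rather than working PostgreSQL syntax:

    SELECT x'0F';   -- SQL92 hex constant, decimal 15
    SELECT 0x0F;    -- ODBC/Sybase-style hex literal, not in SQL92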
[ { "msg_contents": "On my RH 5 configure can't find libtcl. It's becouse there is no 'main'\nsymbol inside. (tcl8.0)\n\nMaybe for libraries we should check for other symbols. _init might a\ngood choice?\n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n", "msg_date": "Wed, 25 Mar 1998 18:10:39 +0100", "msg_from": "Michal Mosiewicz <[email protected]>", "msg_from_op": true, "msg_subject": "Autoconf" } ]
[ { "msg_contents": "\n\n\n\nIt appears that this patch is not exactly correct. Using patch version 2.1b\non my slackware linux-2.0.30 kernel,\nI get the following error:\n\nHmm... The next patch looks like a new-style context diff to me...\nThe text leading up to this was:\n--------------------------\n|diff -cNr postgresql-6.3/src/bin/pgaccess/formdemo.sql\npgsql/src/bin/pgaccess/formdemo.sql\n|*** postgresql-6.3/src/bin/pgaccess/formdemo.sql Wed Dec 31 19:00:00\n 1969\n|--- pgsql/src/bin/pgaccess/formdemo.sql Thu Mar 12 08:09:41 1998\n--------------------------\n(Creating file postgresql-6.3/src/bin/pgaccess/formdemo.sql...)\nPatching file postgresql-6.3/src/bin/pgaccess/formdemo.sql using Plan A...\npatch: **** malformed patch at line 22987: box} {}}\n\n\nMatt\n\n\n", "msg_date": "Wed, 25 Mar 1998 13:15:58 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Postgresql-6.3-6.3.1.gz" }, { "msg_contents": "On Wed, 25 Mar 1998 [email protected] wrote:\n\n> \n> \n> \n> \n> It appears that this patch is not exactly correct. Using patch version 2.1b\n> on my slackware linux-2.0.30 kernel,\n> I get the following error:\n> \n> Hmm... The next patch looks like a new-style context diff to me...\n> The text leading up to this was:\n> --------------------------\n> |diff -cNr postgresql-6.3/src/bin/pgaccess/formdemo.sql\n> pgsql/src/bin/pgaccess/formdemo.sql\n> |*** postgresql-6.3/src/bin/pgaccess/formdemo.sql Wed Dec 31 19:00:00\n> 1969\n> |--- pgsql/src/bin/pgaccess/formdemo.sql Thu Mar 12 08:09:41 1998\n> --------------------------\n> (Creating file postgresql-6.3/src/bin/pgaccess/formdemo.sql...)\n> Patching file postgresql-6.3/src/bin/pgaccess/formdemo.sql using Plan A...\n> patch: **** malformed patch at line 22987: box} {}}\n\n\tFrom one other report, it appears that the line is too long for\n2.1b to handle...it was tested and works fine under FreeBSDs patch 2.1...\n\n\n", "msg_date": "Wed, 25 Mar 1998 15:22:48 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgresql-6.3-6.3.1.gz" } ]
[ { "msg_contents": "I am again trying to compile on my (possible non standard) Solaris\nplatform. I still have to add in library directories and -lucb in\norder to find getrusage, &c. One thing I have noticed is that the\nconfigure script now checks for string.h vs strings.h, but this\ninformation is not being propagated to the correct makefiles, so that\npgc.l gets the following problem:\n\ngcc -I../../../include -I../../../backend -Wall -Wmissing-prototypes -I../include -DMAJOR_VERSION=1 -DMINOR_VERSION=1 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/dolomite/pgsql/include\\\" -c pgc.c \npgc.l:8: strings.h: No such file or directory\n\n\nI fixed this by hand in the ecpg Makefile, but I'm not quite sure how\nto get the configure information into the correct Makefile.\n\nOcie\n\n", "msg_date": "Wed, 25 Mar 1998 15:02:55 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "string.h and strings.h in ecpg" }, { "msg_contents": "[email protected] writes:\n > I am again trying to compile on my (possible non standard) Solaris\n > platform. I still have to add in library directories and -lucb in\n > order to find getrusage, &c. One thing I have noticed is that the\n > configure script now checks for string.h vs strings.h, but this\n > information is not being propagated to the correct makefiles, so that\n > pgc.l gets the following problem:\n > \n > gcc -I../../../include -I../../../backend -Wall -Wmissing-prototypes -I../include -DMAJOR_VERSION=1 -DMINOR_VERSION=1 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/dolomite/pgsql/include\\\" -c pgc.c \n > pgc.l:8: strings.h: No such file or directory\n\nThe file pgc.l is missing a \"#include \"config.h\" at the\nbeginning. Then everything is fine.\n\n > \n > \n > I fixed this by hand in the ecpg Makefile, but I'm not quite sure how\n > to get the configure information into the correct Makefile.\n > \n > Ocie\n > \n\n-- \nMfG/Regards\n\t\t\t\t Siemens Nixdorf Informationssysteme AG\n /==== Abt.: OEC XS QM4\n / Ridderbusch / , Heinz Nixdorf Ring\n / /./ 33106 Paderborn, Germany\n /=== /,== ,===/ /,==, // Tel.: (49) 05251-8-15211\n / // / / // / / \\ NERV: ridderbusch.pad\n/ / `==/\\ / / / \\ Email: [email protected]\n\nSince I have taken all the Gates out of my computer, it finally works!!\n", "msg_date": "Thu, 26 Mar 1998 08:44:50 +0100 (MET)", "msg_from": "Frank Ridderbusch <[email protected]>", "msg_from_op": false, "msg_subject": "[HACKERS] string.h and strings.h in ecpg" }, { "msg_contents": "Frank Ridderbusch writes:\n> The file pgc.l is missing a \"#include \"config.h\" at the\n> beginning. Then everything is fine.\n\nOkay, I have added this to my source. It will go into cvs with my next\npatch.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 26 Mar 1998 09:15:37 +0100 (\u0010\u000e)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] string.h and strings.h in ecpg" } ]
[ { "msg_contents": "I have been able to build postgres under Solaris, and even run the\ninitdb program, but when I try to start the postmaster, I get the\nfollowing:\n\nFATAL: StreamServerPort: setsockopt (SO_REUSEADDR) failed: errno=71\npostmaster: cannot create UNIX stream port\n\nErrno 71 is EPROTO (Protocol error). This is coming from the\nStreamServerPort function in src/backend/libpq/pqcomm.c:\n\n if ((setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, (char *) &one,\n sizeof(one))) == -1)\n {\n sprintf(PQerrormsg,\n \"FATAL: StreamServerPort: setsockopt (SO_REUSEAD\nDR) failed: errno=%d\\n\",\n errno);\n fputs(PQerrormsg, stderr);\n pqdebug(\"%s\", PQerrormsg);\n return (STATUS_ERROR);\n }\n\n\nAny ideas what might be going on?\n\nOcie\n", "msg_date": "Wed, 25 Mar 1998 15:29:57 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "More Solaris Woes" } ]
[ { "msg_contents": "Zeugswetter Andreas writes:\n> > So far I don't. Does anyone know whether there's a disconnect command\n> > somewhere? In embedded SQL that is. Oracle uses 'commit work release'.\n> \n> exec sql disconnect {current|default|all|connectionname|connection_hostvar};\n\nHmm, is this standard? Anyway I like it.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 26 Mar 1998 09:12:55 +0100 (\u0010\u000e)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: AW: [HACKERS] Begin statement again" } ]
[ { "msg_contents": "I just checked the SQL standards as Tom send them to me and found the\ndisconnect statement. It's on my TODO list now. Just one question, I tkae it\nI have to call PQfinish to disconnect, but does this automatically commit\nthe active transaction? This looks reasonable but I don't know.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 26 Mar 1998 09:42:30 +0100 (MEZ)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "disconnect - PQfinish" } ]
[ { "msg_contents": "Where is this symbol used? I see now way to check for correct syntax when\n\":a\" can mean something else that the C variable a.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 26 Mar 1998 13:45:58 +0100 (MEZ)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "symbol ':'" }, { "msg_contents": "> Where is this symbol used? I see now way to check for correct syntax \n> when \":a\" can mean something else that the C variable a.\n\nIt's an allowed operator symbol assigned to a little-used math\noperation. You can disallow it for embedded sql if you need to. Another\npossibility though is to disallow it as an operator _unless_ it is\ninside parentheses. I did a similar thing in gram.y in the BETWEEN/AND\nparsing to allow the boolean \"AND\" operator to be used on the right side\nof BETWEEN/AND. But maybe indicator variables are allowed to show up\ninside expressions too, so this wouldn't work...\n\nbtw, I've been thinking about changing another operator symbol, \";\",\nbecause of the obvious parsing ambiguities with the SQL end of statement\nsymbol. We could change both \":\" and \";\" operators for v6.4??\n\n - Tom\n", "msg_date": "Thu, 26 Mar 1998 15:18:34 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] symbol ':'" }, { "msg_contents": "\b��\u000e@\u0010)\nCc: [email protected] (PostgreSQL Hacker)\nX-Mailer: ELM [version 2.4ME+ PL39 (25)]\nMIME-Version: 1.0\nContent-Type: text/plain; charset=US-ASCII\nContent-Transfer-Encoding: 7bit\n\nThomas G. Lockhart writes:\n> It's an allowed operator symbol assigned to a little-used math\n> operation. You can disallow it for embedded sql if you need to. Another\n\nYes, I have to.\n\n> possibility though is to disallow it as an operator _unless_ it is\n> inside parentheses. I did a similar thing in gram.y in the BETWEEN/AND\n> parsing to allow the boolean \"AND\" operator to be used on the right side\n> of BETWEEN/AND. But maybe indicator variables are allowed to show up\n> inside expressions too, so this wouldn't work...\n\nThey are. And it's not only indicators. The variables begin with ':', too.\n\n> btw, I've been thinking about changing another operator symbol, \";\",\n> because of the obvious parsing ambiguities with the SQL end of statement\n> symbol. We could change both \":\" and \";\" operators for v6.4??\n\nGood idea.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 26 Mar 1998 16:54:42 +0100 (x", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] symbol ':'" } ]
[ { "msg_contents": "Hi,\n\njust got me the complete 6.3.1 tarball. Compilation and installation\nwent smoothly for the most part.\n\nProblem is, that the backend dumps core during the initdb run. Here is \nthe transscript (see the 'Abort' below, vacuuming template1). \n\npostgres@utensil (9) $ bin/initdb\ninitdb: using /home/tools/pgsql-6.3/lib/local1_template1.bki.source as input to create the template database.\ninitdb: using /home/tools/pgsql-6.3/lib/global1.bki.source as input to create the global classes.\ninitdb: using /home/tools/pgsql-6.3/lib/pg_hba.conf.sample as the host-based authentication control file.\n\nWe are initializing the database system with username postgres (uid=121).\nThis user will own all the files and must also own the server process.\n\nCreating Postgres database system directory /home/tools/pgsql-6.3/data/base\n\ninitdb: creating template database in /home/tools/pgsql-6.3/data/base/template1\nRunning: postgres -boot -C -F -D/home/tools/pgsql-6.3/data -Q template1\n\nCreating global classes in /base\nRunning: postgres -boot -C -F -D/home/tools/pgsql-6.3/data -Q template1\n\nAdding template1 database to pg_database...\nRunning: postgres -boot -C -F -D/home/tools/pgsql-6.3/data -Q template1 < /tmp/create.6347\n\nvacuuming template1\nAbort\ncreating public pg_user view\nloading pg_description\n\nI then pulled the core into dbx and got the following info. \n\npostgres@utensil (16) $ dbx ../../../bin/postgres core\ndbx 2.1A00 SINIX (Apr 6 1995)\nCopyright (C) Siemens Nixdorf Informationssysteme AG 1995\nBase:\tBSD, Copyright (C) The Regents of the University of California\nAll rights reserved\nreading symbolic information ...\nCurrent signal in memory image is: SIGIOT (6) (generated by pid 6409 uid 121)\n[using memory image in core]\nType 'help' for help\n(dbx) where\n.kill() at 0x482d994\n.abort() at 0x4822d30\nExcAbort(excP = 0x62ddc0, detail = 0, data = (nilv), message = \"!(RelationIsValid(relation))\"), line 29 in \"excabort.c\"\nExcUnCaught(excP = 0x62ddc0, detail = 0, data = (nilv), message = \"!(RelationIsValid(relation))\"), line 173 in \"exc.c\"\nExcRaise(excP = 0x62ddc0, detail = 0, data = (nilv), message = \"!(RelationIsValid(relation))\"), line 190 in \"exc.c\"\nExceptionalCondition(conditionName = \"!(RelationIsValid(relation))\", exceptionP = 0x62ddc0, detail = (nilv), fileName = \"indexam.c\", lineNumber = 231), line 69 in \"assert.c\"\nindex_beginscan(relation = (nilv), scanFromEnd = '\\0', numberOfKeys = 0, key = (nilv)), line 231 in \"indexam.c\"\nvc_scanoneind(indrel = (nilv), nhtups = 0), line 1448 in \"vacuum.c\"\nvc_vacone(relid = 1247, analyze = '\\0', va_cols = (nilv)), line 560 in \"vacuum.c\"\nvc_vacuum(VacRelP = (nilv), analyze = '\\0', va_cols = (nilv)), line 253 in \"vacuum.c\"\n.vacuum.vacuum(vacrel = (nilv), verbose = '\\0', analyze = '\\0', va_spec = (nilv)), line 159 in \"vacuum.c\"\nProcessUtility(parsetree = 0x66c770, dest = Debug), line 633 in \"utility.c\"\npg_exec_query_dest(query_string = \"vacuum\\n\", argv = (nilv), typev = (nilv), nargs = 0, dest = Debug), line 653 in \"postgres.c\"\npg_exec_query(query_string = \"vacuum\\n\", argv = (nilv), typev = (nilv), nargs = 0), line 601 in \"postgres.c\"\nPostgresMain(argc = 7, argv = 0x7fffe7ac), line 1382 in \"postgres.c\"\n.main() at 0x4bb29c\n__start() at 0x417de4\n(dbx) \n\nMfG/Regards\n--\n /==== Siemens Nixdorf Informationssysteme AG\n / Ridderbusch / , Abt.: OEC XS QM4\n / /./ Heinz Nixdorf Ring\n /=== /,== ,===/ /,==, // 33106 Paderborn, Germany\n / // / / // / / \\ Tel.: 
(49) 5251-8-15211\n/ / `==/\\ / / / \\ Email: [email protected]\n\nSince I have taken all the Gates out of my computer, it finally works!!\n", "msg_date": "Thu, 26 Mar 1998 13:46:51 GMT", "msg_from": "Frank Ridderbusch <[email protected]>", "msg_from_op": true, "msg_subject": "6.3.1: Core during initdb on SVR4 (MIPS)" } ]
[ { "msg_contents": ">\n>Hi Hackers,\n>\n>it seems that Linux malloc, or better libc-5.4.23 malloc, is doing what is\n>declared in the man page, i.e. returning \"memory which is suitably aligned\n>for any kind of variable\". I have some malloc traces and all malloc results\n>are aligned to double size (8 bytes):\n>\nI did the original Electric fence test using the libc version which\ncomes with redhat 4.0. I just upgraded my libc so I don't know\nwith version it was.\n\nWith greetings from Maurice.\n\n\n", "msg_date": "Thu, 26 Mar 1998 17:46:43 +0100", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] patch for memory overrun on Linux(i386)" } ]
[ { "msg_contents": "Hi,\n\nI gathered up some courage and ran the regression tests with the\nElectric Fence library.\n\nA quick overview of the queries that failed follows. (In the following a\nfailed test\nis a test where the backend dies.) This usually means electric fence\ndetected some overrun.\n\nSELECT INTO\nINSERT INTO\nCREATE FUNCTION\nCREATE TYPE\nCREATE FUNCTION\nCREATE AGGREGATE\nCREATE OPERATOR\none fancy select statement\nALTER TABLE\n\nIf we are lucky these are all cause by the same bug -:). Anybody want\nto find out for sure? If you are interested in the regression.diff file I\npost it to the list.\n\nRegards,\nMaurice\n\n\n", "msg_date": "Thu, 26 Mar 1998 17:59:48 +0100", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "More overruns?" } ]
[ { "msg_contents": "> > Marc, can you comment on this.\n> \n> \tIt kind of bothers me that something that is standard in autoconf\n> breaks on some systems, while, as far as I can tell, its always been this\n> way ;(\n> \n> \tAndrew, have you asked the developers of autoconf about this?\n> \n> \n> > > The patch to configure contains the following:\n> > > \n\nIt's not just \"on some systems\" the test seems to miss the screen for\n\"is this gcc\" and it tries a gcc-specific flag (--traditional) on all\nC compilers - at least that's how I understand the problem.\n\nunder Irix, if I use gcc, then it runs fine; if I use the Irix C\ncompiler, the script crashes.\n\nNo, I haven't contacted the autoconf guys yet... I have never used\nautoconf and really have not looked into how it works, but I guess\nthis bug should be reported to them\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Thu, 26 Mar 1998 18:30:36 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] URGENT PROBLEM with the 6.3->6.3.1 patch" } ]
[ { "msg_contents": "Tom Ivar Helbekkmo wrote:\n> Stonebraker et al are working on something like this, aren't they,\n> using Postgres95? I'm thinking of the Mariposa project.\n\nhttp://mariposa.CS.Berkeley.EDU:8000/mariposa/\n\nLooks interesting indeed.\n\n\thilsen,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n", "msg_date": "Thu, 26 Mar 1998 20:38:20 +0100", "msg_from": "Goran Thyni <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] PostgreSQL in more complex apps : some ambitious Q's" } ]
[ { "msg_contents": "> Hi all,\n> \n> I'm writing Reference Manual pages and I have some questions about...\n> \n> create rule:\n> \n> The man page says ...\n> \n> \"The current rule system implementation is very brittle and\n> is unstable. Users are discouraged from using rules at\n> this time.\"\n> \n> My question are: \n> Is this command a PostgreSQL deprecated feature ?\n> Sould I skip it ?\n> Does it works ? \n\nThe rule system is a key feature of PostgreSQL. It should not be deprecated,\nif it doesn't work, it should be fixed. \n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n", "msg_date": "Thu, 26 Mar 1998 11:42:29 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Reference Guide" }, { "msg_contents": "Hi all,\n\nI'm writing Reference Manual pages and I have some questions about...\n\ncreate rule:\n\n The man page says ...\n\n \"The current rule system implementation is very brittle and\n is unstable. Users are discouraged from using rules at\n this time.\"\n\n My question are: \n Is this command a PostgreSQL deprecated feature ?\n Sould I skip it ?\n Does it works ? \n\ndeclare:\n\n DECLARE cursor [ BINARY ] \n FOR SELECT expression\n\n The man page says ...\n\n \"BINARY cursors give you back the data in the native binary\n representation. Thus, binary cursors will tend to be a\n little faster since there's less overhead of conversion.\"\n \n My question is: \n I can't see any difference between BINARY and normal cursors.\n Does it works ? \n\n Thanks, Jose'\n\n", "msg_date": "Thu, 26 Mar 1998 20:02:27 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Reference Guide" }, { "msg_contents": "> \n> > Hi all,\n> > \n> > I'm writing Reference Manual pages and I have some questions about...\n> > \n> > create rule:\n> > \n> > The man page says ...\n> > \n> > \"The current rule system implementation is very brittle and\n> > is unstable. Users are discouraged from using rules at\n> > this time.\"\n> > \n> > My question are: \n> > Is this command a PostgreSQL deprecated feature ?\n> > Sould I skip it ?\n> > Does it works ? \n> \n> The rule system is a key feature of PostgreSQL. It should not be deprecated,\n> if it doesn't work, it should be fixed. \n\nIt does work, but has some limitations in using rules for\nUPDATE/DELETE/INSERT. We need to prevent the ones that are problems. I\nam removing the warning from the manual page.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 26 Mar 1998 15:53:06 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Reference Guide" } ]
[ { "msg_contents": "I have been troubled by a segmentation fault when reloading from a dumpall.\nThis has been happening when a second \\connect is encountered. \n\nThe faulty code was in fe-connect.c, where the memory for the user password\nwas freed, but the pointer itself was not set to NULL. Later, the memory was\nreused and the password appeared not to be empty, so that an attempt was\nmade to reference it.\n\nHere is the patch:\ndiff -c postgresql-6.3.1{.orig,}/src/interfaces/libpq/fe-connect.c\n*** postgresql-6.3.1.orig/src/interfaces/libpq/fe-connect.c Thu Feb 26 \n04:44:59 1998\n--- postgresql-6.3.1/src/interfaces/libpq/fe-connect.c Thu Mar 26 18:45:23 \n1998\n***************\n*** 667,672 ****\n--- 667,673 ----\n if (conn->pgpass != NULL)\n {\n free(conn->pgpass);\n+ conn->pgpass = NULL;\n }\n \n return CONNECTION_OK;\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\n\n", "msg_date": "Thu, 26 Mar 1998 22:32:04 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Cure for segmentation fault in libpq" }, { "msg_contents": "> \n> I have been troubled by a segmentation fault when reloading from a dumpall.\n> This has been happening when a second \\connect is encountered. \n> \n> The faulty code was in fe-connect.c, where the memory for the user password\n> was freed, but the pointer itself was not set to NULL. Later, the memory was\n> reused and the password appeared not to be empty, so that an attempt was\n> made to reference it.\n\nApplied to source tree.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 26 Mar 1998 18:45:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Cure for segmentation fault in libpq" } ]
[ { "msg_contents": "\nv6.3.1, compiled and installed today under FreeBSD:\n\nhub> /home/db/bin/psql acctng\nNOTICE: SIAssignBackendId: discarding tag 2147482737\nConnection to database 'acctng' failed.\nFATAL 1: Backend cache invalidation initialization failed\nhub> /home/db/bin/psql acctng\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: acctng\n\nacctng=> \n\n\nThe second attempt followed the first with little delay...\n\n\n\n", "msg_date": "Thu, 26 Mar 1998 18:45:50 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Any seen this...?" }, { "msg_contents": "Was your postmaster just starting up during that time.\n\n> \n> \n> v6.3.1, compiled and installed today under FreeBSD:\n> \n> hub> /home/db/bin/psql acctng\n> NOTICE: SIAssignBackendId: discarding tag 2147482737\n> Connection to database 'acctng' failed.\n> FATAL 1: Backend cache invalidation initialization failed\n> hub> /home/db/bin/psql acctng\n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> \n> type \\? for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: acctng\n> \n> acctng=> \n> \n> \n> The second attempt followed the first with little delay...\n> \n> \n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 26 Mar 1998 19:26:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Any seen this...?" }, { "msg_contents": "On Thu, 26 Mar 1998, Bruce Momjian wrote:\n\n> Was your postmaster just starting up during that time.\n\n\tNope, running for several hours now...\n\n> > v6.3.1, compiled and installed today under FreeBSD:\n> > \n> > hub> /home/db/bin/psql acctng\n> > NOTICE: SIAssignBackendId: discarding tag 2147482737\n> > Connection to database 'acctng' failed.\n> > FATAL 1: Backend cache invalidation initialization failed\n> > hub> /home/db/bin/psql acctng\n> > Welcome to the POSTGRESQL interactive sql monitor:\n> > Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> > \n> > type \\? for help on slash commands\n> > type \\q to quit\n> > type \\g or terminate with semicolon to execute query\n> > You are currently connected to the database: acctng\n> > \n> > acctng=> \n> > \n> > \n> > The second attempt followed the first with little delay...\n> > \n> > \n> > \n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n> \n\n", "msg_date": "Thu, 26 Mar 1998 19:27:51 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Any seen this...?" } ]
[ { "msg_contents": "> Gee, I saw I put the file on my postgresql account, but probably \n> forgot to install it. Sorry. Would have been nice to get it into \n> 6.3.1 too.\n\nWell, I had mixed feelings about that anyway, since it hadn't been field\ntested. I'll apply it along with the type conversion mods I'm working\non. btw, I already have working code for automatic conversions of\nfunction arguments, moving on to operators next. Examples follow...\n\n - Tom\n\n-- this input is automatically converted to float8...\npostgres=> select dsqrt(2);\n dsqrt\n---------------\n1.4142135623731\n(1 row)\n\n-- allow unknown types to convert to text type\n-- this addresses the annoying MySQL context-free test cases\npostgres=> select 'abc' || 'def';\n--------\nabcdef\n(1 row)\n\n-- this function is massively overloaded, but we automatically choose\n-- the \"datetime\" type for evaluation...\npostgres=> select date_part('year', 'now');\ndate_part\n---------\n 1998\n(1 row)\n\nOh, I stumbled across a declaration problem in dpow() while testing :-/\n", "msg_date": "Fri, 27 Mar 1998 02:47:21 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Using % in a query" }, { "msg_contents": "> \n> > Gee, I saw I put the file on my postgresql account, but probably \n> > forgot to install it. Sorry. Would have been nice to get it into \n> > 6.3.1 too.\n> \n> Well, I had mixed feelings about that anyway, since it hadn't been field\n> tested. I'll apply it along with the type conversion mods I'm working\n> on. btw, I already have working code for automatic conversions of\n> function arguments, moving on to operators next. Examples follow...\n> \n> - Tom\n> \n> -- this input is automatically converted to float8...\n> postgres=> select dsqrt(2);\n> dsqrt\n> ---------------\n> 1.4142135623731\n> (1 row)\n\nThis is great. I had wanted to do this, but my head hurts every time I\nstudy it. Good job.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 26 Mar 1998 23:25:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Using % in a query" } ]
[ { "msg_contents": "Thomas, you mentioned that some of the documentation is now in the\nsource code. I assume you were referring to psql's \\d commands.\n\nLet me remind you that psql has HTML output options, so before each\nrelease, you could dump out the psql \\df,\\do, etc commands, and load\nthem into docbook.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 27 Mar 1998 01:01:25 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Docs and psql \\d commands" } ]
[ { "msg_contents": "On Thu, 26 Mar 1998, Bruce Momjian wrote:\n\n> > I vaguely remember that there was some consensus that this \"detached\" \n> > or \"non-transactional\" browsing through a table by walking through an\n> > existing index was a significant feature, very useful for people.\n> > \n> > What is the status ?\n> \n> I don't think we found that it was very do-able in an SQL database, or\n> we couldn't think of a good way to do it.\n\n Here goes an implementation idea :\n [ warning : I'm not duly familiar with pg sequences ]\n\n To instatiate an index \"pointer\" that would be used to go to next /\nprev table item, one says \"create sequence my_ind_seq on my_table_index\".\nOr maybe \"declare sequence my_ind_seq on my_table_index\" would be better.\n\n This will create a var as a temp variable for the duration of the\nconnection in the same way a cursor is declared. This variable will be of\nspecial type \"rowpointer\" or something like that. One can then say\n\"next(my_ind_seq)\" and it will change give value into pointer to next row. \nThen one can say \"select first rowpointer into my_ind_seq where ...\" in\norder to *jump* to a given table row. Also one could say ``last'' instead\nof ``first''. That means that each table / temp table (result of select)\nwould have to have a hidden attribute called \"rowpointer\", similarly to\nthe hidden oid attribute. \n\n The effect would be that one could say \"select * from mytable where\nrowpointer = prev(my_ind_seq)\", which would not only allow the\n\"detached\"(no lock is kept for hours) browsing, but would also have the\neffect of immediate retrieval of the row we are looking for, since the\nrowpointer var contains the row pointer. (Row pointer is used in the\ninternals=guts of pgsql).\n\n\n How does that sound for an idea for an implememtation ? It may not be\nperfect, but I hope it doesn't have basic conceptual flaws.\n\n TTYL,\n\n Jan\n\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger [email protected]\n\n", "msg_date": "Fri, 27 Mar 1998 03:06:31 -0500 (EST)", "msg_from": "Jan Vicherek <[email protected]>", "msg_from_op": true, "msg_subject": "Idea : Re: status of Postgres \"trasaction-free browsing\" feature" } ]
[ { "msg_contents": "I just found another problem with C variables resp. indicators:\n\nIn opt_indirection I can have [ a_expr : a_expr ] and a_expr could be a\nvariable I think. Is it okay, not to allow variables at this point of the\ngrammer. I'm not sure what the standard says, but at least an indicator\nmakes no sense here.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Fri, 27 Mar 1998 09:34:05 +0100 (MEZ)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Another parsing problem" } ]
[ { "msg_contents": " \n> My question is: \n> I can't see any difference between BINARY and normal cursors.\n> Does it works ? \n>\n> Thanks, Jose'\n\nYes, this works. The difference is, that for a binary cursor, the column values are not\npassed through the type output function. You therefore get the internal representation for a type.\nThe internal representation is Platform specific, so pay attention when you use a binary cursor\non client x running on hardware y from vendor z accessing a server on hardware a from vendor b. \ne.g. for an integer column, you get a C integer number using a binary cursor, whilst you get a \nstring value like '12345' using the non binary cursor.\n\nAndreas\n\n", "msg_date": "Fri, 27 Mar 1998 11:02:33 +-100", "msg_from": "Zeugswetter Andreas <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Reference Guide (binary cursor)" } ]
[ { "msg_contents": "Hi Hackers,\n\nI (and at least four others) reported strange behaviour of PG 6.3(.1),\nwhich under certain circumstances doesn't use indices like the versions\nbefore.\n\nSo we still have to use 6.2.1 (now with the Massimo patches). For us\n6.2.1 is three times faster than 6.3.\n\nI have narrowed the problem down a bit, so please take a look:\n\nWe have two tables:\n\nCREATE TABLE trans (spieler_nr int4, wpk_nr int4, state char, anzahl\nint4, buyprice float8, buydate date, sellprice float8, selldate date,\nmail char) archive = none;\nCREATE TABLE kurse (wpk_nr int4, name text, curr char4, kurs float8,\ndatum date, art char, high float8, low float8, open float8, old float8)\narchive = none; \n\nwith three indices\n\nCREATE INDEX i_kurse_wpk_nr on kurse using btree ( wpk_nr int4_ops );\nCREATE INDEX i_trans_wpk_nr on trans using btree ( wpk_nr int4_ops );\nCREATE INDEX i_trans_spieler_nr on trans using btree ( spieler_nr\nint4_ops );\n\nIf I do this select:\n\ntest=> explain SELECT * from Trans, Kurse where\nKurse.wpk_nr=Trans.wpk_nr and Trans.spieler_nr=3;\nNOTICE: QUERY PLAN:\n\nHash Join (cost=408.60 size=1364 width=103)\n -> Seq Scan on kurse (cost=238.61 size=4958 width=65)\n -> Hash (cost=0.00 size=0 width=0)\n -> Index Scan on trans (cost=3.41 size=29 width=38)\n\nI get the seq scan, which slows the query down tremendously compared to\n6.2. \n\nWith the query:\n\ntest=> explain SELECT * from Trans, Kurse where\nKurse.wpk_nr=Trans.wpk_nr;\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=7411.81 size=3343409 width=103)\n -> Index Scan on kurse (cost=337.90 size=4958 width=65)\n -> Index Scan on trans (cost=4563.60 size=71112 width=38) \n\neverything is fine.\n\nFor your convenience I have a dump of the database with some real world\ndata und the selects (and some vacuums of course) on our web server.\n\nYou can download it via HTTP\n\nhttp://www.vocalweb.de/test_index.dump.gz\n\nIt's around 1 Mb.\n\nPlease take a look at this, cause this seems to be a major bug in\noptimizer/analyzer code somewhere and we are not the only ones who see\nthis problem.\n\nTIA\n\nUlrich\n", "msg_date": "Fri, 27 Mar 1998 11:34:15 +0100", "msg_from": "Ulrich Voss <[email protected]>", "msg_from_op": true, "msg_subject": "Reminder: Indices are not used" }, { "msg_contents": "> test=> explain SELECT * from Trans, Kurse where\n> Kurse.wpk_nr=Trans.wpk_nr and Trans.spieler_nr=3;\n> NOTICE: QUERY PLAN:\n> \n> Hash Join (cost=408.60 size=1364 width=103)\n> -> Seq Scan on kurse (cost=238.61 size=4958 width=65)\n> -> Hash (cost=0.00 size=0 width=0)\n> -> Index Scan on trans (cost=3.41 size=29 width=38)\n> \n> I get the seq scan, which slows the query down tremendously compared to\n> 6.2. \n\nThis does help. Vadim, can you check this?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Fri, 27 Mar 1998 14:26:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Reminder: Indices are not used" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > test=> explain SELECT * from Trans, Kurse where\n> > Kurse.wpk_nr=Trans.wpk_nr and Trans.spieler_nr=3;\n> > NOTICE: QUERY PLAN:\n> >\n> > Hash Join (cost=408.60 size=1364 width=103)\n> > -> Seq Scan on kurse (cost=238.61 size=4958 width=65)\n> > -> Hash (cost=0.00 size=0 width=0)\n> > -> Index Scan on trans (cost=3.41 size=29 width=38)\n> >\n> > I get the seq scan, which slows the query down tremendously compared to\n> > 6.2.\n> \n> This does help. Vadim, can you check this?\n\nOk.\n\nVadim\n", "msg_date": "Mon, 30 Mar 1998 11:19:13 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Reminder: Indices are not used" }, { "msg_contents": "Could you post EXPLAINs from 6.2 for the _same_ data/schema ?\n\nAs for 6.3 - I just added\n\nCREATE INDEX i_trans on trans (spieler_nr, wpk_nr);\n\nand see near the same performance for all possible plans (NestLoop,\nMergeJoin & HashJoin) - you are able to restrict possible plans\nusing -fX backend' option... NestLoop is slowest (I used -fh -fm to\nget it).\n\nMy recommendation is to don't create 1-key indices - trans(spieler_nr) &\ntrans(wpk_nr), - but create 2-key indices - trans (spieler_nr, wpk_nr) &\ntrans (wpk_nr, spieler_nr).\n\nNevertheless, I'm interested in 6.2(.1 ?) EXPLAIN..\n\nVadim\n\nUlrich Voss wrote:\n> \n> Hi Hackers,\n> \n> I (and at least four others) reported strange behaviour of PG 6.3(.1),\n> which under certain circumstances doesn't use indices like the versions\n> before.\n> \n> So we still have to use 6.2.1 (now with the Massimo patches). 
For us\n> 6.2.1 is three times faster than 6.3.\n> \n> I have narrowed the problem down a bit, so please take a look:\n> \n> We have two tables:\n> \n> CREATE TABLE trans (spieler_nr int4, wpk_nr int4, state char, anzahl\n> int4, buyprice float8, buydate date, sellprice float8, selldate date,\n> mail char) archive = none;\n> CREATE TABLE kurse (wpk_nr int4, name text, curr char4, kurs float8,\n> datum date, art char, high float8, low float8, open float8, old float8)\n> archive = none;\n> \n> with three indices\n> \n> CREATE INDEX i_kurse_wpk_nr on kurse using btree ( wpk_nr int4_ops );\n> CREATE INDEX i_trans_wpk_nr on trans using btree ( wpk_nr int4_ops );\n> CREATE INDEX i_trans_spieler_nr on trans using btree ( spieler_nr\n> int4_ops );\n> \n> If I do this select:\n> \n> test=> explain SELECT * from Trans, Kurse where\n> Kurse.wpk_nr=Trans.wpk_nr and Trans.spieler_nr=3;\n> NOTICE: QUERY PLAN:\n> \n> Hash Join (cost=408.60 size=1364 width=103)\n> -> Seq Scan on kurse (cost=238.61 size=4958 width=65)\n> -> Hash (cost=0.00 size=0 width=0)\n> -> Index Scan on trans (cost=3.41 size=29 width=38)\n> \n> I get the seq scan, which slows the query down tremendously compared to\n> 6.2.\n> \n> With the query:\n> \n> test=> explain SELECT * from Trans, Kurse where\n> Kurse.wpk_nr=Trans.wpk_nr;\n> NOTICE: QUERY PLAN:\n> \n> Merge Join (cost=7411.81 size=3343409 width=103)\n> -> Index Scan on kurse (cost=337.90 size=4958 width=65)\n> -> Index Scan on trans (cost=4563.60 size=71112 width=38)\n> \n> everything is fine.\n> \n> For your convenience I have a dump of the database with some real world\n> data und the selects (and some vacuums of course) on our web server.\n> \n> You can download it via HTTP\n> \n> http://www.vocalweb.de/test_index.dump.gz\n> \n> It's around 1 Mb.\n> \n> Please take a look at this, cause this seems to be a major bug in\n> optimizer/analyzer code somewhere and we are not the only ones who see\n> this problem.\n> \n> TIA\n> \n> Ulrich\n", "msg_date": "Tue, 31 Mar 1998 16:20:55 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Reminder: Indices are not used" }, { "msg_contents": "Hi,\n\nboersenspiel=> explain SELECT * from Trans, Kurse where\nKurse.wpk_nr=Trans.wpk_nr and Trans.spieler_nr=3; NOTICE: QUERY PLAN:\n\nNested Loop (cost=6.15 size=2 width=103)\n -> Index Scan on trans (cost=2.05 size=2 width=38)\n -> Index Scan on kurse (cost=2.05 size=14307 width=65)\n\nEXPLAIN\n\n(Funny, the query which uses indices the right way in 6.3 is wrong in \n6.2.1, but who cares if multi-key-indices get used ...\n\nboersenspiel=> explain SELECT * from Trans, Kurse where\nKurse.wpk_nr=Trans.wpk_n r; NOTICE: QUERY PLAN:\n\nHash Join (cost=18425.21 size=175546 width=103)\n -> Seq Scan on trans (cost=8134.02 size=175546 width=38)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on kurse (cost=712.13 size=14307 width=65)\n\nEXPLAIN\n)\n\n> Could you post EXPLAINs from 6.2 for the _same_ data/schema ?\n> \n> As for 6.3 - I just added\n> \n> CREATE INDEX i_trans on trans (spieler_nr, wpk_nr);\n> \n> and see near the same performance for all possible plans (NestLoop,\n> MergeJoin & HashJoin) - you are able to restrict possible plans\n> using -fX backend' option... 
NestLoop is slowest (I used -fh -fm to\n> get it).\n> \n> My recommendation is to don't create 1-key indices - trans(spieler_nr) &\n> trans(wpk_nr), - but create 2-key indices - trans (spieler_nr, wpk_nr) &\n> trans (wpk_nr, spieler_nr).\n> \n> Nevertheless, I'm interested in 6.2(.1 ?) EXPLAIN..\n> \n> Vadim\n> \n> Ulrich Voss wrote:\n> > \n> > Hi Hackers,\n> > \n> > I (and at least four others) reported strange behaviour of PG 6.3(.1),\n> > which under certain circumstances doesn't use indices like the versions\n> > before.\n> > \n> > So we still have to use 6.2.1 (now with the Massimo patches). For us\n> > 6.2.1 is three times faster than 6.3.\n> > \n> > I have narrowed the problem down a bit, so please take a look:\n> > \n> > We have two tables:\n> > \n> > CREATE TABLE trans (spieler_nr int4, wpk_nr int4, state char, anzahl\n> > int4, buyprice float8, buydate date, sellprice float8, selldate date,\n> > mail char) archive = none;\n> > CREATE TABLE kurse (wpk_nr int4, name text, curr char4, kurs float8,\n> > datum date, art char, high float8, low float8, open float8, old float8)\n> > archive = none;\n> > \n> > with three indices\n> > \n> > CREATE INDEX i_kurse_wpk_nr on kurse using btree ( wpk_nr int4_ops );\n> > CREATE INDEX i_trans_wpk_nr on trans using btree ( wpk_nr int4_ops );\n> > CREATE INDEX i_trans_spieler_nr on trans using btree ( spieler_nr\n> > int4_ops );\n> > \n> > If I do this select:\n> > \n> > test=> explain SELECT * from Trans, Kurse where\n> > Kurse.wpk_nr=Trans.wpk_nr and Trans.spieler_nr=3;\n> > NOTICE: QUERY PLAN:\n> > \n> > Hash Join (cost=408.60 size=1364 width=103)\n> > -> Seq Scan on kurse (cost=238.61 size=4958 width=65)\n> > -> Hash (cost=0.00 size=0 width=0)\n> > -> Index Scan on trans (cost=3.41 size=29 width=38)\n> > \n> > I get the seq scan, which slows the query down tremendously compared to\n> > 6.2.\n> > \n> > With the query:\n> > \n> > test=> explain SELECT * from Trans, Kurse where\n> > Kurse.wpk_nr=Trans.wpk_nr;\n> > NOTICE: QUERY PLAN:\n> > \n> > Merge Join (cost=7411.81 size=3343409 width=103)\n> > -> Index Scan on kurse (cost=337.90 size=4958 width=65)\n> > -> Index Scan on trans (cost=4563.60 size=71112 width=38)\n> > \n> > everything is fine.\n> > [...] \n\nCiao\n\nDas Boersenspielteam.\n\n---------------------------------------------------------------------------\n http://www.boersenspiel.de\n \t Das Boersenspiel im Internet\n *Realitaetsnah* *Kostenlos* *Ueber 6000 Spieler*\n---------------------------------------------------------------------------\n", "msg_date": "Tue, 31 Mar 1998 12:30:35 +0000", "msg_from": "\"Boersenspielteam\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Reminder: Indices are not used" }, { "msg_contents": "Thanks for help!\n\nThis is patch for src/backend/optimizer/path/prune.c.\nAccess pathes of pruned joinrels were not merged and better\npathes were lost, sometimes...\n\nVadim", "msg_date": "Thu, 02 Apr 1998 15:39:50 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Reminder: Indices are not used" }, { "msg_contents": "> Thanks for help!\n> \n> This is patch for src/backend/optimizer/path/prune.c.\n> Access pathes of pruned joinrels were not merged and better\n> pathes were lost, sometimes...\n\nGee, I am sorry Vadim. 
This is one of the optimizer functions I tried\nto clean up, but obviously broke it.\n\n> \n> *** prune.c.orig\tThu Apr 2 14:56:54 1998\n> --- prune.c\tThu Apr 2 15:16:17 1998\n> ***************\n> *** 61,99 ****\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 2 Apr 1998 10:23:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Reminder: Indices are not used" }, { "msg_contents": "Thanks Vadim!\n\nThis patch does help.\n\nThe simple join still uses two sequential scans though.\n\nIs your tip regarding single-key vs. multi-key-indices still valid?\nI don't see any performance difference between these two.\n\nBTW: Has the spinlock code, that was discussed some days ago, made it \ninto 6.3.1.? \n\nCiao\n\nUlrich\n\n> Thanks for help!\n> \n> This is patch for src/backend/optimizer/path/prune.c.\n> Access pathes of pruned joinrels were not merged and better\n> pathes were lost, sometimes...\n> \n> Vadim\n> \n\nCiao\n\nDas Boersenspielteam.\n\n---------------------------------------------------------------------------\n http://www.boersenspiel.de\n \t Das Boersenspiel im Internet\n *Realitaetsnah* *Kostenlos* *Ueber 6000 Spieler*\n---------------------------------------------------------------------------\n", "msg_date": "Fri, 3 Apr 1998 19:01:23 +0000", "msg_from": "\"Boersenspielteam\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Reminder: Indices are not used" }, { "msg_contents": "Das Boersenspielteam:\n> \n> BTW: Has the spinlock code, that was discussed some days ago, made it \n> into 6.3.1.? \n\nI have not sent in the patch yet, so it is not in 6.3.1. I have a couple of\nother tasks related to my real life that have cut into my time lately. I\nhope to free up and finish the spinlock patch next week.\n\nThanks\n-dg \n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n", "msg_date": "Fri, 3 Apr 1998 11:55:58 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Reminder: Indices are not used" } ]
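The prune.c patch above is easier to follow with the bug class in mind: when the planner discovers two equivalent join relations and discards one, the survivor must inherit the discarded relation's access paths, or a cheaper path (here, the index scan) is silently lost. A toy illustration of that invariant, not the actual prune.c code:

#include <stddef.h>

struct path { double cost; struct path *next; };
struct rel  { struct path *paths; };

/* merge a pruned duplicate rel into the one we keep: its paths must
 * survive so the cheapest-path selection can still see them */
static void
merge_rels(struct rel *keep, struct rel *dup)
{
    struct path **tail = &keep->paths;

    while (*tail != NULL)
        tail = &(*tail)->next;
    *tail = dup->paths;
    dup->paths = NULL;
}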
[ { "msg_contents": "> > If any restructuring happens which removes, or makes optional, some of\n> > the fundamental types, it should be accomplished so that the types can\n> > be added in transparently, from a single set of source code, during\n> > build time or after. OIDs would have to be assigned, presumably, and the\n> > hardcoding of the function lookups for builtin types must somehow be\n> > done incrementally. Probably needs more than this to be done right, and\n> > without careful planning and implementation we will be taking a big step\n> > backwards.\n> \n> Exactly. Right now modules get installed by building the .so files and\n> then creating all the types, functions, rules, tables, indexes etc. This\n> is a bit more complicated than the Linux kernal 'insmod' operation. We could\n> easily make the situation worse through careless \"whacking\".\n\nGeez, Louise. What I'm proposing will _SHOWCASE_ the extensibility. I'm not\nlooking to remove it and hardcode everything.\n\n> > Seems to me that Postgres' niche is at the high end of size and\n> > capability, not at the lightweight end competing for design wins against\n> > systems which don't even have transactions.\n> \n> And, there are already a couple of perfectly good 'toy' database systems.\n> What is the point of having another one? Postgres should move toward\n> becoming an \"industrial strength\" solution.\n\nMaking some of the _mostly_unused_ data types loadable instead of always\ncompiled in will NOT make postgres into a 'toy'. Does \"industrial strength\"\nimply having every possible data type compiled in? Regardless of use?\n\nI think the opposite is true. Puttining some of these extra types into modules\nwill show people the greatest feature that separates us from the 'toy's.\n\nI realize there might not be a performance hit _now_, but if someone doesn't\nstart this \"loadable module\" initiative, every Tom, Dick and Harry will want\ntheir types in the backend and eventually there _will_ be a performance hit.\nThen the problem would be big enough to be a major chore to convert the many,\nmany types to loadable instead of only doing a couple now.\n\nI'm not trying to cry \"Wolf\" or proposing to do this to just push around some\ncode. I really think there are benefits to it, if not now, in the future.\n\nAnd I know there are other areas that are broken or could be written better.\nWe all do what we can...I'm not real familiar with the workings of the cache,\nindices, etc., but working on AIX has given me a great understanding of how\nto make/load modules.\n\nThere, my spleen feels _much_ better now. 
:)\n\ndarrenk\n\n\n", "msg_date": "Fri, 27 Mar 1998 10:06:45 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Data type removal" }, { "msg_contents": "On Fri, 27 Mar 1998, Darren King wrote:\n\n> And I know there are other areas that are broken or could be written better.\n> We all do what we can...I'm not real familiar with the workings of the cache,\n> indices, etc., but working on AIX has given me a great understanding of how\n> to make/load modules.\n\nThis whole discussion has, IMHO, gone dry...Darren, if you can cleanly and\neasily build a module for the ip_and_mac contrib types to use as a model\nto work from, please do so...\n\nI think the concept of modularization for types is a good idea, and agree\nwith your perspective that it *proves* our extensibility...\n\nBut, this has to be added in *perfectly* cleanly, such that there is no\nextra work on anyone's part in order to make use of those types we already\nhave existing.\n\nFreeBSD uses something called 'LKM's (loadable kernel modules) for doing\nthis in the kernel, and Linux does something similar, with the benefit\nbeing, in most cases, that you can unload an older version and load in a\nnewer one relatively seamlessly...\n\nUntil a demo is produced as to how this can work, *please* kill this\nthread...its gotten into a circular loop, and, quite frankly, isn't moving\nanywhere...\n\nif this is to be workable, the module has to be build when the system is\ncompiled, initially, for the base types, and installed into a directory\nthat the server can see and load from...the *base* modules have to be\ntransparent to the end user/adminstrator...\n\n\n\n", "msg_date": "Fri, 27 Mar 1998 10:52:26 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Data type removal" }, { "msg_contents": "Darren writes:\n> > > If any restructuring happens which removes, or makes optional, some of\n> > > the fundamental types, it should be accomplished so that the types can\n> > > be added in transparently, from a single set of source code, during\n> > > build time or after. OIDs would have to be assigned, presumably, and the\n> > > hardcoding of the function lookups for builtin types must somehow be\n> > > done incrementally. Probably needs more than this to be done right, and\n> > > without careful planning and implementation we will be taking a big step\n> > > backwards.\n> > \n> > Exactly. Right now modules get installed by building the .so files and\n> > then creating all the types, functions, rules, tables, indexes etc. This\n> > is a bit more complicated than the Linux kernal 'insmod' operation. We could\n> > easily make the situation worse through careless \"whacking\".\n> \n> Geez, Louise. What I'm proposing will _SHOWCASE_ the extensibility. I'm not\n> looking to remove it and hardcode everything.\n\nApparently I was not clear. I am sorry that you find my comment upsetting, it\nwas not meant to be.\n\nWhat I meant is that adding a type and related functions to a database is a\nmuch more complicated job than loading a module into a Unix kernel. \n\nTo load a module into a kernel all you need to do is read the code in,\nresolve the symbols, and maybe call an intialization routine. 
This is\nmerely a variation on loading a shared object (.so) file into a program.\n\nTo add a type and related stuff to a database is really a much harder problem.\n\nYou need to be able to \n - add one or more type descriptions\t\t\ttypes table\n - add input and output functions\t\t\ttypes, functions tables\n - add cast functions\t\t\t\t\tcasts, functions tables\n - add any datatype specific behavior functions\tfunctions table\n - add access method operators (maybe)\t\t\tamops, functions tables\n - add aggregate operators\t\t\t\taggregates, functions\n - add operators\t\t\t\t\toperators, functions\n - provide statistics functions\n - provide destroy operators\n - provide .so files for C functions, SQL for sql functions\n (note this is the part needed for a unix kernel module)\n - do all the above within a particular schema\n\nYou may also need to create and populate data tables, rules, defaults, etc\nrequired by the implementation of the new type.\n\nAnd of course, a \"module\" may really implement dozens of types and hundreds\nof functions.\n\nTo unload a type requires undoing all the above. But there is a wrinkle: first\nyou have to check if there are any dependancies. That is, if the user has\ncreated a table with one of the new types, you have to drop that table\n(including column defs, indexes, rules, triggers, defaults etc) before\nyou can drop the type. Of course the user may not want to drop their tables\nwhich brings us to the the next problem.\n\nWhen this gets really hard is when it is time to upgrade an existing database\nto a new version. Suppose you add a new column to a type in the new version.\nHow does a user with lots of data in dozens of tables using the old type\ninstall the new module?\n\nWhat about restoring a dump from an old version into a system with the new\nversion installed?\n\nOr how about migrating to a different platform? Can we move data from\na little endian platform (x86) to a big endian platform (sparc)? Obviously\nthe .so files will be different, but what about the copying the data out and\nreloading it?\n\nThis is really the same problem as \"schema evolution\" which is (in the general\ncase) an open research topic.\n\n> I realize there might not be a performance hit _now_, but if someone doesn't\n> start this \"loadable module\" initiative, every Tom, Dick and Harry will want\n> their types in the backend and eventually there _will_ be a performance hit.\n> Then the problem would be big enough to be a major chore to convert the many,\n> many types to loadable instead of only doing a couple now.\n\nI agree that we might want to work on making installing new functionality\neasier. I have no objection to this, I just don't want to see the problem\napproached without some understanding of the real issues.\n\n I'm not trying to cry \"Wolf\" or proposing to do this to just push around some\n> code. I really think there are benefits to it, if not now, in the future.\n> \n> And I know there are other areas that are broken or could be written better.\n> We all do what we can...I'm not real familiar with the workings of the cache,\n> indices, etc., but working on AIX has given me a great understanding of how\n> to make/load modules.\n\nJust to belabor this, it is perfectly reasonable to add a set of types and\nfunctions that have no 'C' implementation. The 'loadable module' analogy\nmisses a lot of the real requirements.\n \n> There, my spleen feels _much_ better now. :)\n\nIt looks better too... 
;-)\n \n> darrenk\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n", "msg_date": "Fri, 27 Mar 1998 12:15:19 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Data type removal" }, { "msg_contents": "David Gould wrote:\n>\n> To load a module into a kernel all you need to do is read the code in,\n> resolve the symbols, and maybe call an intialization routine. This is\n> merely a variation on loading a shared object (.so) file into a program.\n> \n> To add a type and related stuff to a database is really a much harder problem.\n\nI don't agree.\n\n> You need to be able to\n> - add one or more type descriptions types table\n> - add input and output functions types, functions tables\n> - add cast functions casts, functions tables\n> - add any datatype specific behavior functions functions table\n> - add access method operators (maybe) amops, functions tables\n> - add aggregate operators aggregates, functions\n> - add operators operators, functions\n> - provide statistics functions\n> - provide destroy operators\n> - provide .so files for C functions, SQL for sql functions\n> (note this is the part needed for a unix kernel module)\n> - do all the above within a particular schema\n> \n> You may also need to create and populate data tables, rules, defaults, etc\n> required by the implementation of the new type.\n\nAll this would be done by the init function in the module you load.\nWhat we need is a set of functions callable by modules, like\nmodule_register_type(name, descr, func*, textin*, textout*, whatever\n...)\nmodule_register_smgr(name, descr, .....)\nmodule_register_command(....\nCasts would be done by converting to a common format (text) and then to\nthe desired type. Use textin/textout. No special cast functions would\nhave to exist. Why doesn't it work this way already??? Would not that\nsolve all casting problems?\n\n\n> To unload a type requires undoing all the above. But there is a wrinkle: first\n> you have to check if there are any dependancies. That is, if the user has\n> created a table with one of the new types, you have to drop that table\n> (including column defs, indexes, rules, triggers, defaults etc) before\n> you can drop the type. Of course the user may not want to drop their tables\n> which brings us to the the next problem.\n\nDependencies are checked by the OS kernel when you try to unload\nmodules.\nYou cannot unload slhc without first unloading ppp, for example. What's\nthe\ndifference?\nIf you have Mod4X running with /dev/dsp opened, then you can't unload\nthe sound driver, because it is in use, and you cannot unload a.out\nmodule\nif you have a non-ELF program running, and you can see the refcount on\nall\nmodules and so on... This would not be different in a SQL server.\nIf you have a cursor open, accessing IP types, then you cannot unload\nthe IP-types module. Close the cursor, and you can unload the module if\nyou want to.\nYou don't have to drop tables containing new types just because you\nunload\nthe module. If you want to SELECT from it, then that module would be\nloaded\nautomagically when it is needed.\n\n\n> When this gets really hard is when it is time to upgrade an existing database\n> to a new version. 
Suppose you add a new column to a type in the new version.\n> How does a user with lots of data in dozens of tables using the old type\n> install the new module?\n> \n> What about restoring a dump from an old version into a system with the new\n> version installed?\n\nSuppose you change TIMESTAMP to 64 bits time and 16 bits userid... how\ndo you\nsolve that problem? You would probably have to make the textin/textout\nfunctions\nfor the type recognize the old format and make the appropriate\nconversions.\nPerhaps add zero userid, or default to postmaster userid?\nThis would not be any different if TIMESTAMP was in a separate module.\n\nFor the internal storage format, every type could have it's own way\nof recognizing different versions of the data. For example, say you have\nan IPv4 module and inserts millions of IP-addresses, then you upgrade\nto IPv6 module. It would then be able to look at the data and see if\nit is a IPv4 or IPv6 address. Of course, you would have problems if you\ntried to downgrade and had lots of IPv6 addresses inserted.\nMyOwnType could use the first few bits of the data to decide which\nversion it is, and later releases of MyOwnType-module would be able\nto recognize the older formats.\nThis way, types could be upgraded without dump-and-load procedure.\n\n\n> Or how about migrating to a different platform? Can we move data from\n> a little endian platform (x86) to a big endian platform (sparc)? Obviously\n> the .so files will be different, but what about the copying the data out and\n> reloading it?\n\nIs this a problem right now? Dump and reload, how can it fail?\n\n\n> Just to belabor this, it is perfectly reasonable to add a set of types and\n> functions that have no 'C' implementation. The 'loadable module' analogy\n> misses a lot of the real requirements.\n\nWhy would someone want a type without implementation?\nOk, let the module's init function register a type marked as\n\"non-existant\"? Null\nfunction?\n\n/* m */\n", "msg_date": "Sat, 28 Mar 1998 14:05:11 +0100", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "Modules" }, { "msg_contents": "Mattias Kregert writes: \n> David Gould wrote:\n...\n> > You need to be able to\n> > - add one or more type descriptions types table\n [big list of stuff to do deleted ]\n> > - do all the above within a particular schema\n> > \n> > You may also need to create and populate data tables, rules, defaults, etc\n> > required by the implementation of the new type.\n> \n> All this would be done by the init function in the module you load.\n> What we need is a set of functions callable by modules, like\n> module_register_type(name, descr, func*, textin*, textout*, whatever\n> ...)\n> module_register_smgr(name, descr, .....)\n> module_register_command(....\n\nOk, now you are requiring the module to handle all this in C. How does it\nregister a type, a default, a rule, a column, functions, etc? \n\nHaving thought about that, consider that currently all this can already be\ndone using SQL. postgreSQL is a relational database system. One of the\nprime attributes of relational systems is that they are reflexive. That is,\nyou can use SQL to query and update the system catalogs that define the\ncharacteristics of the the system.\n\nBy and large, all the tasks I mentioned previously can be done using SQL and\ntaking advantage of the high semantic level and power of the complete SQL\nsystem. 
Given that we already have a perfectly good high level mechanism, I\njust don't see any advantage to adding a bunch of low level APIs to\nduplicate existing functionality.\n\n> Casts would be done by converting to a common format (text) and then to\n> the desired type. Use textin/textout. No special cast functions would\n> have to exist. Why doesn't it work this way already??? Would not that\n> solve all casting problems?\n\nNo. It is usable in some cases as an implementation of a defined cast, but\nyou still need defined casts. Think about these problems:\n\nFirst, there is a significant performance penalty. Think about a query\nlike:\n\n select * from huge_table where account_balance > 1000.\n\nThe textout -> textin approach would be far slower than the current direct\nint to float cast. \n\nSecond, how do you restrict the system to sensible casts or enforce a\nmeaningful order of attempted casts.\n\n create type yen based float;\n create type centigrade based float;\n\nWould you allow?\n\n select yen * centigrade from market_data, weather_data\n where market_data.date = weather_data.date;\n\nEven though the types 'yen' and 'centigrade' are implemented by float this\nleaves open a few important questions:\n\n - what is the type of the result?\n - what could the result possibly mean?\n\nThird you still can't do casts for many types:\n\n create type motion_picture (arrayof jpeg) ...\n\n select motion_picture * 10 from films...\n\nThere is no useful cast possible here.\n\n> > To unload a type requires undoing all the above. But there is a wrinkle: first\n> > you have to check if there are any dependancies. That is, if the user has\n> > created a table with one of the new types, you have to drop that table\n> > (including column defs, indexes, rules, triggers, defaults etc) before\n> > you can drop the type. Of course the user may not want to drop their tables\n> > which brings us to the the next problem.\n> \n> Dependencies are checked by the OS kernel when you try to unload\n> modules.\n> You cannot unload slhc without first unloading ppp, for example. What's\n> the\n> difference?\n\nI could have several million objects that might use that type. I cannot\ndo anything with them without the type definition. Not even delete them.\n\n> If you have Mod4X running with /dev/dsp opened, then you can't unload\n> the sound driver, because it is in use, and you cannot unload a.out\n> module\n> modules and so on... This would not be different in a SQL server.\n\nBut it is very different. SQL servers are much more complex than OS kernels.\nHaving spent a number of years maintaining the OS kernel in a SQL engine\nthat was originally intended to run on bare hardware, I can tell you that\nthat kernel was less than 10% of the complete SQL engine.\n\n> If you have a cursor open, accessing IP types, then you cannot unload\n> the IP-types module. Close the cursor, and you can unload the module if\n> you want to.\n> You don't have to drop tables containing new types just because you\n> unload\n> the module. If you want to SELECT from it, then that module would be\n> loaded\n> automagically when it is needed.\n\nAhha, I start to understand. You are saying 'module' and meaning 'loadable\nobject file of functions'. Given that this is what you mean, we already\nhandle this. \n\nWhat I took you to mean by 'module' was the SCHEMA defined to make the\nfunctions useful, and the functions.\n\n> > Just to belabor this, it is perfectly reasonable to add a set of types and\n> > functions that have no 'C' implementation. 
The 'loadable module' analogy\n> > misses a lot of the real requirements.\n> \n> Why would someone want a type without implementation?\n\nWhy should a type with no C functions fail to have an implementation? Right\nnow every table is also a type.\n\nMany types are based on a extending an existing type, or are composites. Is\nthere some reason not to define the implementation (if any) in SQL?\n\n--\n\nI understand that modularity is good. I am asserting the postgreSQL is a\nvery modular and extendable system right now. There are mechanisms to add\njust about any sort of extension you want. Very little is hard coded in\nthe core.\n\nI think this discussion got started because someone wanted to remove the\nip and mac and money types. This is a mere matter of the current packaging,\nas there is no reason to for them to be in or out except that historically\nthe system used some of these types before the extendibility was finished\nso they went in the core code.\n\nI don't think it matters much whether any particular type is part of the\ncore or not, so feel free to pull them out. Do package them up to\ninstall in the normal way that extensions are supposed to install and\nretest everything. Don't _add_ a whole new way to do the same kinds\nof extensibilty that we _already_ do. Just use the mechanisms that already\nexist.\n\n\nThis discussion has gone on too long as others are starting to point out,\nso I am happy to take it to private mail if you wish to continue.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Sat, 28 Mar 1998 18:12:42 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Modules" }, { "msg_contents": "> All this would be done by the init function in the module you load.\n> What we need is a set of functions callable by modules, like\n> module_register_type(name, descr, func*, textin*, textout*, whatever\n> ...)\n> module_register_smgr(name, descr, .....)\n> module_register_command(....\n> Casts would be done by converting to a common format (text) and then to\n> the desired type. Use textin/textout. No special cast functions would\n> have to exist. Why doesn't it work this way already??? Would not that\n> solve all casting problems?\n\nIt does work this way already, at least in some cases. It definitely\ndoes not solve casting problems, for several reasons, two of which are:\n- textout->textin is inefficient compared to binary conversions.\n- type conversion also may require value and format manipulation far\nbeyond what you would accept for an input function for a specific type.\nThat is, to convert a type to another type may require a conversion\nwhich would not be acceptable in any case other than a conversion from\nthat specific type to the target type. You need to call a specialized\nroutine to do this.\n\n> Dependencies are checked by the OS kernel when you try to unload\n> modules. You cannot unload slhc without first unloading ppp, for \n> example. What's the difference?\n\nGranularity. If, for example, we had a package of 6 types and 250\nfunctions, you would need to check each of these for dependencies. 
David\nwas just pointing out that it isn't as easy, not that it is impossible.\nI thought his list of issues was fairly complete, and any solution would\naddress these somehow...\n\n - Tom\n", "msg_date": "Mon, 30 Mar 1998 06:32:49 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Modules" } ]
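For concreteness, here is what the registration API proposed earlier in this thread (module_register_type and friends) might look like as a C header. None of these functions exist in PostgreSQL; the point is that each one maps onto catalog work that CREATE TYPE, CREATE FUNCTION, and CREATE OPERATOR already do in SQL, which is David's argument in a nutshell:

typedef char *(*type_in_fn) (const char *external);
typedef char *(*type_out_fn) (const char *internal);

/* hypothetical module-facing registration calls */
int module_register_type(const char *name, const char *descr,
                         int internal_size,
                         type_in_fn textin, type_out_fn textout);

int module_register_function(const char *name, const char *ret_type,
                             int nargs, const char *arg_types[],
                             void *impl);

int module_register_operator(const char *op, const char *left_type,
                             const char *right_type,
                             const char *func_name);

/* a module bundles the above into init/fini, with the server refusing
 * fini while any open cursor or table definition still uses the type */
int my_module_init(void);
int my_module_fini(void);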
[ { "msg_contents": "I'll be going on vacation on Monday, so don't expect much from me for the\nnext 2.5 weeks. As for ecpg, I'm currently rewriting the parser from\nscratch. It was too much work to get it into sync with the backend parser.\nTherefore I tore it down and built it new by using gram.y and scan.l.\nCurrently I can parse exec sql declare blocks and the exec sql whenever\nstatement. That's it. But I'm confident that the most diffcult part is\nalready finished since it does already use a lex file very similar to\nscan.l.\n\nIf anyone's interested in working on that part, please tell me and I will\nmail you my source tree. I will read and answer mail at least once over the\nweekend before actually leaving.\n\nOther than that I wish you some nice working weeks. :-)\n\nSee you again April 16th.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Fri, 27 Mar 1998 17:53:01 +0100 (MEZ)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Going on vacation" }, { "msg_contents": "> \n> I'll be going on vacation on Monday, so don't expect much from me for the\n> next 2.5 weeks. As for ecpg, I'm currently rewriting the parser from\n> scratch. It was too much work to get it into sync with the backend parser.\n> Therefore I tore it down and built it new by using gram.y and scan.l.\n> Currently I can parse exec sql declare blocks and the exec sql whenever\n> statement. That's it. But I'm confident that the most diffcult part is\n> already finished since it does already use a lex file very similar to\n> scan.l.\n> \n> If anyone's interested in working on that part, please tell me and I will\n> mail you my source tree. I will read and answer mail at least once over the\n> weekend before actually leaving.\n> \n> Other than that I wish you some nice working weeks. :-)\n> \n> See you again April 16th.\n\nWould it make you life any easier if you opened a connection to the\npostgres/psql, and passed queries into it and looked at the\noutput/errors? Just an idea.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 27 Mar 1998 14:37:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Going on vacation" }, { "msg_contents": "> \n> I'll be going on vacation on Monday, so don't expect much from me for the\n> next 2.5 weeks. As for ecpg, I'm currently rewriting the parser from\n\n Same to me - 4 weeks off - but I'll visit the office\n sometimes.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n", "msg_date": "Fri, 27 Mar 1998 20:56:57 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Going on vacation" } ]
[ { "msg_contents": "Michal Mosiewicz writes: \n> \n> Say you have a table with 1024 indexed values from 0 to 1023. Now, you\n> want to select the values less than, say, 64. If you use sequential\n> scan, you have to scan 1024 records, right? But if you use index scan,\n> you first go to the first node of the tree, you compare 64 with the\n> value stored in this node, which (for balanced tree) is 512. Now you you\n> have to search through next lower node. You compare 64 to 128 which is\n> the value of this node. Then you go to next lower node. You compare 64\n> to 64, and yes, we hit the target. After 3 searches, we know that\n> everything under this node is the set of records we are looking for\n> right? So, what we have to do is to browse through those values, and\n> collect the tupple indentifiers.\n>\n> Note, that once we have done this 3 searches, we have to browse all the\n> structure of the tree below. We are looking for 64 values. So, it will\n> cost us looking through 128 nodes of the subtree.\n> \n> OK, so by using an index, we had to check 128 + 3 nodes of the tree. \n\nOur btree indexes are quite a bit better than the balanced tree you suppose.\n \n> Now, let's note, that there has been only a few IO transfers by now. No\n> more than few pages. And we have tupple identifiers pointing us to 64\n> records. Now we may sort this tids in ascending order to optimise IO. \n\nBut, we do not do this tid sort. It really isn't easy as you might have\nmillions of tids, not just a few. Which would mean doing an external sort.\nThis might be a nice thing to do, but it isn't there now as far as I know.\n \n> Everything took us 3 + 128 nodes from index + 64 records from table.\n> This is defnitely better than reading all 1024 records. \n\nI am not trying to flame you, but it seems to me that you have some\nmisconceptions about how the indexes work and I am trying only to explain\nthem a little better.\n\nUsing your example of 1024 rows with values from 0 to 1023. Further assuming:\n\n - 8K pages\n - 200 byte data rows (including overheads)\n - 4 byte keys, so perhaps 32 byte index rows with overheads\n - btree index\n\nThen: \n\nTable has 8192 / 200 = 40 rows per page giving 1024 / 40 = 26 pages\n\nIndex has 8192 / 32 = 256 keys per page giving 1024 / 256 = 4 leaf pages\n and one root page with 4 keys.\n\nSomething like:\n\t ____________________\n index root: | 0, 256, 512, 768 |\n --------------------\n / \\ \\\n / \\ \\\n / \\ \\--------------\\ \n / \\ \\\n _______________ _________________ ____________\n leaf0: | 0,1,2...255 | leaf1: | 256,257...511 | leaf2: ....\n --------------- ----------------- ------------\n | \\ \\--------------------------------\\\n | \\------------------\\ \\\n | \\ \\\n | \\ \\\ndata: | 234 |, | 11 |, | 763 |, | 401 |, | 29 |, | 970 |, | 55 |, ....\n\n\nTo scan the index to get the tids for keys 0...63 will take two page\nreads: root page, leaf1.\n\nBut, to access the data rows you still need 64 random IOs.\n\nSo the total times to read the data rows for keys 0..63 look like:\n\n Using index:\n\n IOs time why\n\n 1 20msec read root\n 1\t 20msec read leaf0\n 64 1280msec read 64 rows from leaf pages\n --- ---------\n 66 1320msec total\n\n\n Using table scan:\n\n IOs time why\n \n 1 20msec seek and read 1st page\n 25 125msec sequential read (5 msec each) of rest of table\n --- --------\n 26 145msec total\n\nNote that this ignores cache effects. 
\n\nIf you assume the cache is big enough for the whole table (and in this case\nit is, and assume none of the table is resident initially (cold start) then\nthe IO analysis is:\n\n Using index (with cache):\n\n IOs time why\n\n 1 20msec read root\n 1\t 20msec read leaf0\n 10 200msec read 10 unique data pages (assumes desired keys are\n not uniformily distributed, this favors index case)\n --- ---------\n 12 240msec total\n\n\n Using table scan (with cache):\n\n IOs time why\n \n 1 20msec seek and read 1st page\n 25 125msec sequential read (5 msec each) of rest of table\n --- --------\n 26 145msec total (no difference from uncached case)\n\nEven with the very favorable assumption that the only part of the data pages\nneed to be read, the sequential scan is still slower here.\n \n> And note, that this is a very simple example. In my case I had a 250MB\n> table, and about 1% of records to select from it.\n\nMy initial calculation a few posts ago was that the break even was .5% for\nthe table you described so I still think it is reasonable to do a table scan.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n", "msg_date": "Fri, 27 Mar 1998 11:43:06 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Optimizer fails?" }, { "msg_contents": "David Gould wrote:\n\n> > Now, let's note, that there has been only a few IO transfers by now. No\n> > more than few pages. And we have tupple identifiers pointing us to 64\n> > records. Now we may sort this tids in ascending order to optimise IO.\n> \n> But, we do not do this tid sort. It really isn't easy as you might have\n> millions of tids, not just a few. Which would mean doing an external sort.\n> This might be a nice thing to do, but it isn't there now as far as I know.\n\nNo, you don't need a full set. You may sort it in portions of the\npredefined size. You may even try ascending/descending order to optimise\nyour hd heads movements, however it may be not very good idea, since\nit's against the read-ahead feature of most disk IO and it may\nnegatively influence ORDER BY performance. Anyhow, you may accomplish\nsawtooth readings, that certainly decrease the access time.\n \n> > Everything took us 3 + 128 nodes from index + 64 records from table.\n> > This is defnitely better than reading all 1024 records.\n> \n> I am not trying to flame you, but it seems to me that you have some\n> misconceptions about how the indexes work and I am trying only to explain\n> them a little better.\n>[cut]\n> Using index (with cache):\n> \n> IOs time why\n> \n> 1 20msec read root\n> 1 20msec read leaf0\n> 10 200msec read 10 unique data pages (assumes desired keys are\n> not uniformily distributed, this favors index case)\n> --- ---------\n\nOK, this example may be simple. In fact I would agree that in this case\nfigures looks like seq scan is sufficient. But that's all started from\nmy example of 250MB file with about 2M of records, that I wanted to\nselect only about 25k. (I ommit the fact, that those record were\nnaturally clustered by the time they came into database. So those 20.000\nof records were pretty continuos).\n\nAs I observed postgres scanned this database at rate of\n100kBps(sequentially). Much less than the actuall I/O throughput on this\nmachine. 
Even when I prepared a condition to return no records it also\nscanned it sequentially, while it would cost only 20msec.\n\nAnyhow... I have to admit that a similar question asked to mysql takes...\nmysql> select count(*) from log where dt < 19980209000000 and\ndt>19980208000000;\n+----------+\n| count(*) |\n+----------+\n| 26707 |\n+----------+\n1 row in set (7.61 sec) \n\nOf course, if I ask it without the index it takes ~3 minutes. That's why I\nexpected that postgres would make some use of index. (The table is in\nboth cases the same).\n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n", "msg_date": "Fri, 27 Mar 1998 23:23:50 +0100", "msg_from": "Michal Mosiewicz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer fails?" }, { "msg_contents": "Michal Mosiewicz:\n> As I observed postgres scanned this database at rate of\n> 100kBps(sequentially). Much less than the actual I/O throughput on this\n> machine. Even when I prepared a condition to return no records it also\n> scanned it sequentially, while it would cost only 20msec.\n\nWell, now it looks like there is a bug or two:\n\n - 100kBps(sequentially) is way too slow. If you have time, try profiling\n (with gprof) this scan. We should be able to do much better than this.\n If you can't do it, we might want to put \"Improve sequential scan rate\"\n on the todo list. \n\n - a \"select count(*) from x where <some_index_col> <some_qual>\"\n should use the index.\n\n> Anyhow... I have to admit that a similar question asked to mysql takes...\n> mysql> select count(*) from log where dt < 19980209000000 and\n> dt>19980208000000;\n> +----------+\n> | count(*) |\n> +----------+\n> | 26707 |\n> +----------+\n> 1 row in set (7.61 sec) \n> \n> Of course, if I ask it without the index it takes ~3 minutes. That's why I\n> expected that postgres would make some use of index. (The table is in\n> both cases the same).\n\nJust out of curiosity, how long do these queries take in MySQL vs postgreSQL?\n\nThanks\n-dg\n \nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n\n", "msg_date": "Fri, 27 Mar 1998 17:03:29 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Optimizer fails?" }, { "msg_contents": "David Gould wrote:\n\n> - 100kBps(sequentially) is way too slow. If you have time, try profiling\n> (with gprof) this scan. We should be able to do much better than this.\n> If you can't do it, we might want to put \"Improve sequential scan rate\"\n> on the todo list.\n\nOK. I'll profile it as soon as I find some spare moment.\n\n> > Of course, if I ask it without the index it takes ~3 minutes. That's why I\n> > expected that postgres would make some use of index. (The table is in\n> > both cases the same).\n> \n> Just out of curiosity, how long do these queries take in MySQL vs postgreSQL?\n\nMySQL seems to be approximately 20 times better. Using a sequential scan\nI can observe on IO monitor that it's able to read 2-5MB/s. (This is\ntwo-disk RAID0 configuration, that has maximum throughput of 10MBps).\n\nOf course, you have to remember that MySQL has much simpler storage due\nto lack of some features like transactions. I've read some of Monty's (the\nauthor's) thoughts on transactions. 
He says that introducing transactions\nwould lower performance by at least 4-5 times (even read performance).\n\nNow, I've downloaded the personal version of Solid that I'm also going to\ncompare. What I personally find very interesting in Solid is its\noptimistic locking. Actually it means no locking at all. Solid seems to\nhave a non-overwriting feature much like postgres. But this\nnon-overwriting feature is limited to not-checkpointed-yet data in\nbuffers (logs). Also, it maintains an internal structure called a 'Bonsai\nTree' that includes all transactions with their times. If there is a\ndata collision, it is detected from this bonsai tree structure,\nand then the latest transaction is rolled back.\n\nBy using systematic checkpointing it makes sure that those structures\nare relatively small and conflicts are resolved quickly.\n\nOf course, don't treat it as a comparison between postgres and a\ncommercial product. I just want to share it as a kind of 'food for\nthought'.\n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n", "msg_date": "Sun, 29 Mar 1998 16:17:29 +0200", "msg_from": "Michal Mosiewicz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer fails?" }, { "msg_contents": "David Gould wrote:\n> \n> > Now, let's note, that there has been only a few IO transfers by now. No\n> > more than few pages. And we have tupple identifiers pointing us to 64\n> > records. Now we may sort this tids in ascending order to optimise IO.\n> \n> But, we do not do this tid sort. It really isn't easy as you might have\n> millions of tids, not just a few. Which would mean doing an external sort.\n> This might be a nice thing to do, but it isn't there now as far as I know.\n\nUsing TID as the (last) part of the index key is on my TODO. \nThis will speed up vacuuming, get rid of all duplicate key\nproblems and give us the feature above.\n\n> To scan the index to get the tids for keys 0...63 will take two page\n> reads: root page, leaf1.\n\n+ meta page read first - to get root page block number.\n\nVadim\n", "msg_date": "Mon, 30 Mar 1998 10:38:08 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer fails?" } ]
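A back-of-envelope check of the break-even figure discussed in this thread, using the same 20 msec random / 5 msec sequential costs: with roughly r rows per page, the index scan wins when selectivity is below about t_seq / (r * t_rand); for an assumed r = 40 that is 5 / (40 * 20), or roughly 0.6%, consistent with the 0.5% estimate quoted above. A minimal sketch of the indexed Postgres form of the MySQL query in this thread -- the index name is hypothetical, dt is assumed to be an indexable column with a matching operator class, and this is untested against 6.3:

    -- hypothetical index on the dt column (operator class omitted for brevity)
    create index log_dt_idx on log (dt);
    select count(*) from log where dt > 19980208000000 and dt < 19980209000000;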
[ { "msg_contents": "Found this on the MySQL crashme page, http://www.tcx.se/crash-me.html:\n\n Join methods\n MySQL EMPRESS mSQL Oracle PostgreSQL SOLID\n cross join (same as from\n a,b) [yes] [no] [no] [no] [no] [yes]\n full outer join [no] [no] [no] [no] [no] [yes]\n tables in join 32 63 +64 +64 30 23\n left outer join [yes] [no] [no] [no] [no] [yes]\n left outer join using [yes] [no] [no] [no] [no] [no]\n natural join [no] [no] [no] [no] [no] [no]\n natural left outer join [yes] [no] [no] [no] [no] [no]\n left outer join odbc style [yes] [no] [no] [no] [no] [yes]\n recursive subqueries 49 +64 226 14\n ^^^^^^^^^^^^^^^^^^^^\n\n right outer join [no] [no] [no] [no] [no] [yes]\n ANSI SQL simple joins [yes] [yes] [yes] [yes] [yes]\n subqueries [no] [yes] [no] [yes] [yes] [yes]\n ^^^^^^^^^^\n\nSo he did updated it to be accurate. Interesting we support 226 levels\nof recursive subqueries. That sounds like a lot.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 27 Mar 1998 16:16:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Subquery limits" } ]
[ { "msg_contents": "\n-----Original Message-----\nFrom: Michael Meskes <[email protected]>\nTo: PostgreSQL Hacker <[email protected]>\nDate: zaterdag 28 maart 1998 0:12\nSubject: [HACKERS] Going on vacation\n\n\n>I'll be going on vacation on Monday, so don't expect much from me for the\n>next 2.5 weeks. \n\nHere's wishing you a happy holiday.\n\nWith regards from Maurice.\n\n\n", "msg_date": "Fri, 27 Mar 1998 23:05:16 +0100", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Going on vacation" } ]
[ { "msg_contents": "At 7:06 AM -0800 3/27/98, Cary B. O'Brien wrote:\n>[email protected] wrote\n>> It seems that if one wants to bring a website that relies heavily on mSQL or\n>> MySQL to it's knees, simply telnet to the port the server listens on (1112\n>> for mSQL or 3333 for MySQL) and then just sit there, forget about it.\n\n>Observations:\n>\n>2) Sounds like postgresql would be a bit better off, since postmaster\n> forks backend processes, but I haven't checked it. Actually\n> wouldn't tight host-based authentication prevent this?\n\nDon't know about hba, but I concur that Postgres is structurally immune to\nthis attack. It's an artifact of mSQL being a single-threaded process:\ngreat for fast response under light load, terrible under heavy load. It\ndoes mean they don't have to worry about concurrent transactions though:\nthere is no concurrency.\n>\n>3) How hard would it be to create a postmaster/postgresql process\n> that could be started from inetd under tcp_wrappers. That would\n> provide authentication/logging/monitoring of what happens\n\nI like the idea of allowing postgres to run from inetd since it makes for\nless overhead if it's little-used. However I think there is some\ncoordination among the children that is managed by the postmaster which\nwould suffer, and would require duplicate reimplementation to work. Not a\ngood use of our programmers time IMHO.\n\nOn the other hand there is a library version of tcp_wrappers (libwrap under\nNetBSD) which could, and probably should, be linked into the postmaster to\nprovide the same functionality.\n\nI'm cc'ing the hackers list on this note in the hope that someone will take\nan interest.\n\n>4) Has anyone tried the 'send garbage and see what happens' test?\n>\nNot I, but it's a good idea.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n\n\n", "msg_date": "Fri, 27 Mar 1998 14:22:23 -0800", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Re Trivial mSQL/MySQL DoS method" } ]
[ { "msg_contents": "These drivers are -very- corrupt in the image I downloaded... It looks\nlike someone ran a source reformatting program on it and completely\nspammed it.\n\nAs an example, all \"//\" comments that contained a \",\" were split into two\nlines. All \"//\" comments containing the word \"do\" also got split.\n\nI could post patches - but I'd rather the maintainers for the package\nhandle it... I'm too likely to just reformat the entire beast.\n[incidentally I finally got it to compile after 4+ hours of\nmanual patching]\n\nG'day, eh? :)\n\t- Teunis\n\n", "msg_date": "Fri, 27 Mar 1998 16:40:56 -0700 (MST)", "msg_from": "teunis <[email protected]>", "msg_from_op": true, "msg_subject": "ODBC drivers bundled with postgres" }, { "msg_contents": "> \n> These drivers are -very- corrupt in the image I downloaded... It looks\n> like someone ran a source reformatting program on it and completely\n> spammed it.\n> \n> As an example, all \"//\" comments that contained a \",\" were split into two\n> lines. All \"//\" comments containing the word \"do\" also got split.\n> \n> I could post patches - but I'd rather the maintainers for the package\n> handle it... I'm too likely to just reformat the entire beast.\n> [incidentally I finally got it to compile after 4+ hours of\n> manual patching]\n\nThat was me. All *.c, *.h files are formatted via src/tools/pgindent. \nSee the FAQ_DEV for more info. // are not valid C comments, as much as\nWindoze C programmers would like to think, and pgindent does not know\nhow to handle them.\n\nC++ source is not reformatted. I was not aware pgindent actually was a\nproblem for PostODBC. We can re-load the old sources and prevent\npgindent from doing this in the future.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 27 Mar 1998 20:57:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ODBC drivers bundled with postgres" }, { "msg_contents": "> \n> > \n> > These drivers are -very- corrupt in the image I downloaded... It looks\n> > like someone ran a source reformatting program on it and completely\n> > spammed it.\n> > \n> > As an example, all \"//\" comments that contained a \",\" were split into two\n> > lines. All \"//\" comments containing the word \"do\" also got split.\n> > \n> > I could post patches - but I'd rather the maintainers for the package\n> > handle it... I'm too likely to just reformat the entire beast.\n> > [incidentally I finally got it to compile after 4+ hours of\n> > manual patching]\n> \n> That was me. All *.c, *.h files are formatted via src/tools/pgindent. \n> See the FAQ_DEV for more info. // are not valid C comments, as much as\n> Windoze C programmers would like to think, and pgindent does not know\n> how to handle them.\n> \n> C++ source is not reformatted. I was not aware pgindent actually was a\n> problem for PostODBC. We can re-load the old sources and prevent\n> pgindent from doing this in the future.\n\nI am rolling back the pgindent changes, and will change the // comments\nto /* */ comments, and run pgindent again.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Fri, 27 Mar 1998 21:11:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ODBC drivers bundled with postgres" } ]
[ { "msg_contents": "Now that I see c++ code in the odbc tree as well, I will just return all\nthe code to its pre-pgindent state, and prevent it from being run in the\nfuture.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 27 Mar 1998 21:13:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "pgindent on odbc" }, { "msg_contents": "Hi Bruce, hi Hackers,\n\nOn Fri, 27 Mar 1998, Bruce Momjian wrote:\n\n> Now that I see c++ code in the odbc tree as well, I will just return all\n> the code to its pre-pgindent state, and prevent it from being run in the\n> future.\n\nThe c++ files are named *.cpp, whereas the c-files are named *.c.\nIMHO it would be ok to change the // comments to /* */, but it could be\nmuch work :-(.\n\nQ: Is some one else still working on a 16-bit version of the driver?\n Does any one use the 16-bit version of (older) PostODBC driver?\n\nBye,\n\tGerhard\n\n+-----------------+ +--- [email protected] ---+\n| Technische EDV \\ Reithofer / Technical Sofware Developement |\n| A-2136 Laa/Thaya \\ Gerhard / Tel +43-2522/8726 +-------------+\n| Staatsbahnstr. 100 +-------+ Fax +43-2522/87268 |\n+----- http://members.aon.at/tech-edv/Info -------+\n\n\n\n", "msg_date": "Sun, 29 Mar 1998 00:41:24 +0100 (MET)", "msg_from": "Gerhard Reithofer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgindent on odbc" }, { "msg_contents": "> \n> Hi Bruce, hi Hackers,\n> \n> On Fri, 27 Mar 1998, Bruce Momjian wrote:\n> \n> > Now that I see c++ code in the odbc tree as well, I will just return all\n> > the code to its pre-pgindent state, and prevent it from being run in the\n> > future.\n> \n> The c++ files are named *.cpp, whereas the c-files are named *.c.\n> IMHO it would be ok to change the // comments to /* */, but it could be\n> much work :-(.\n> \n> Q: Is some one else still working on a 16-bit version of the driver?\n> Does any one use the 16-bit version of (older) PostODBC driver?\n\nI can change // to /* */ very easily. Let me know.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 28 Mar 1998 20:43:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pgindent on odbc" }, { "msg_contents": "Hello Bruce,\n\nOn Sat, 28 Mar 1998, Bruce Momjian wrote:\n\n> > \n> > Hi Bruce, hi Hackers,\n> > \n> > On Fri, 27 Mar 1998, Bruce Momjian wrote:\n> > \n> > > Now that I see c++ code in the odbc tree as well, I will just return all\n> > > the code to its pre-pgindent state, and prevent it from being run in the\n> > > future.\n> > \n> > The c++ files are named *.cpp, whereas the c-files are named *.c.\n> > IMHO it would be ok to change the // comments to /* */, but it could be\n> > much work :-(.\n> > \n> > Q: Is some one else still working on a 16-bit version of the driver?\n> > Does any one use the 16-bit version of (older) PostODBC driver?\n> \n> I can change // to /* */ very easily. Let me know.\nI'm glad that some one would do the work ;-)\n\nI think we should ask Julie, she is the official maintainer. 
\n\nBye,\n\tGerhard\n\n+-----------------+ +--- [email protected] ---+\n| Technische EDV \\ Reithofer / Technical Sofware Developement |\n| A-2136 Laa/Thaya \\ Gerhard / Tel +43-2522/8726 +-------------+\n| Staatsbahnstr. 100 +-------+ Fax +43-2522/87268 |\n+----- http://members.aon.at/tech-edv/Info -------+\n\n", "msg_date": "Mon, 30 Mar 1998 00:07:28 +0200 (MET DST)", "msg_from": "Gerhard Reithofer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgindent on odbc" } ]
[ { "msg_contents": "Hi there!\n\nPlease forgive my affords in making the backend crash (this time postgres\n6.3.1), seems I've got too much spare time :-)\n\na)\n\nSELECT DISTINCT * from x2 UNION SELECT * FROM x2;\n\ncrashes the backend and corrupts shmem, so that other connections break\ntoo, if x2 is a view returning NULL values\n\nE.g.:\npostgres=> create table x (i int4, j int4);\nCREATE\npostgres=> create view x2 as select j from x;\nCREATE\npostgres=> insert into x values (1,2);\nINSERT 144128 1\npostgres=> insert into x values (3);\nINSERT 144129 1\npostgres=> insert into x values (NULL,4);\nINSERT 144130 1\npostgres=> select * from x;\ni|j\n-+-\n1|2\n3|\n |4\n(3 rows)\n\npostgres=> select distinct * from x2 union select * from x2;\nPQexec() -- Request was sent to backend, but backend closed the channel\nbefore responding.\n This probably means the backend terminated abnormally before or\nwhile processing the request. \n(The second x2 can also be other table)\n\n\nb)\nAnother way is to remove the table a view is based upon, e.g.\n\npostgres=> create table x (i int);\nCREATE\npostgres=> create view x2 as select * from x;\nCREATE\npostgres=> drop table x;\nDROP\npostgres=> select * from x2;\nPQexec() -- Request was sent to backend, but b...\n\nBest regards,\nMB\n\n-- \nMichael Bussmann <[email protected]> [Tel.: +49 228 9435 211; Fax: +49 228 348953]\n\"It's still the same old story, a fight for love and glory,\nA case of do or die, The world will always welcome lovers, as time goes by.\"\n", "msg_date": "Sat, 28 Mar 1998 13:02:33 +0100", "msg_from": "Michael Bussmann <[email protected]>", "msg_from_op": true, "msg_subject": "Ways to crash the backend" }, { "msg_contents": "> \n> Hi there!\n> \n> Please forgive my affords in making the backend crash (this time postgres\n> 6.3.1), seems I've got too much spare time :-)\n> \n> a)\n> \n> SELECT DISTINCT * from x2 UNION SELECT * FROM x2;\n\nSorry to disappoint you, but this is fixed in the 6.3.1 patch.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 28 Mar 1998 10:15:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Ways to crash the backend" } ]
[ { "msg_contents": "It's not a bad idea. But I like to be able to check the syntax without\nhaving the database around. I hate it when the precompiler really goes\ninto the database.\n\nAlso I have to account for lots of different syntactical stuff because\nof the C code.\n\nMichael\n--\nDr. Michael Meskes, Projekt-Manager | topystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Use Debian GNU/Linux! | Tel: (+49) 2405/4670-44\n\n> ----------\n> From: \tBruce Momjian[SMTP:[email protected]]\n> Sent: \tFreitag, 27. März 1998 20:37\n> To: \[email protected]\n> Cc: \[email protected]\n> Subject: \tRe: [HACKERS] Going on vacation\n> \n> Would it make you life any easier if you opened a connection to the\n> postgres/psql, and passed queries into it and looked at the\n> output/errors? Just an idea.\n> \n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania\n> 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n> \n", "msg_date": "Sat, 28 Mar 1998 20:25:33 +0100", "msg_from": "\"Meskes, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Going on vacation" } ]
[ { "msg_contents": "> PostgreSQL version : 6.3-980326\n> Problem Description:\n> --------------------\n> Parser generated error message when script was inserting\n> a value equal -1 into column of float8 type.\n> If exchange -1 to -1.0 - no errors. \n> \n> --------------------------------------------------------------------------\n> \n> Test Case:\n> ----------\n> create table templ_arg(accii int2,type char,sign float8);\n> create table a1 () inherits (templ_arg);\n> insert into a1 values (9999,'a',1); -- working;\n> insert into a1 values (9999,'a',-1); -- ERROR;\n> insert into a1 values (9999,'a',-1.0); --working\nYep, it is a bug, and I will add it to the TODO list. In float8out, we\nuse:\n\tprintf(\"%.*g\", CONST, value)\n\nand the %g is causing it to print as:\n\n equal to the precision. Trailing zeros are removed from the\n fractional part of the result; a decimal point appears only if it\n is followed by at least one digit.\n\nNow, what we should probably be doing is to allow -1 (without decimal\npoint) to be promoted to float8 in the INSERT, and I think we are going\nto be able to do that in 6.4.\n\nI know this is crummy, but I can't even think of a workaround for it.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 28 Mar 1998 15:16:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PORTS] Port Bug Report: pg_dump -d database >unload.file;\n\tcat unload.file|psql database ARE NOT EQUAL" }, { "msg_contents": "> Problem Description:\n> --------------------\n> Parser generated error message when script was inserting\n> a value equal -1 into column of float8 type.\n> If exchange -1 to -1.0 - no errors. \n> \n> --------------------------------------------------------------------------\n> \n> Test Case:\n> ----------\n> create table templ_arg(accii int2,type char,sign float8);\n> create table a1 () inherits (templ_arg);\n> insert into a1 values (9999,'a',1); -- working;\n> insert into a1 values (9999,'a',-1); -- ERROR;\n> insert into a1 values (9999,'a',-1.0); --working\n\nOK, it is more compilcated that I realized. I now see that:\n\n\t> insert into a1 values (9999,'a',1); -- working;\n\nworks but:\n\t\n\t> insert into a1 values (9999,'a',-1); -- ERROR;\n\ndoes not. Thomas, any idea why this would happen?\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 28 Mar 1998 15:52:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PORTS] Port Bug Report: pg_dump -d database >unload.file;\n\tcat unload.file|psql database ARE NOT EQUAL" }, { "msg_contents": "> create table templ_arg(accii int2,type char,sign float8);\n> create table a1 () inherits (templ_arg);\n> insert into a1 values (9999,'a',1); -- working;\n> insert into a1 values (9999,'a',-1); -- ERROR;\n> insert into a1 values (9999,'a',-1.0); --working\n\nLooking at the grammar, I think that -1 is being interpreted as the\nexpression minus and 1, as in (no_expression - 1), and the complaint is\nthat that expression is not a float8. We already have code to promote\nconstants to float4. 
Thomas will have to comment, because he\nunderstands the handling of the minuses.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 28 Mar 1998 16:14:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PORTS] Port Bug Report: pg_dump -d database >unload.file;\n\tcat unload.file|psql database ARE NOT EQUAL" }, { "msg_contents": "> create table templ_arg(accii int2,type char,sign float8);\n> create table a1 () inherits (templ_arg);\n> insert into a1 values (9999,'a',1); -- working;\n> insert into a1 values (9999,'a',-1); -- ERROR;\n> insert into a1 values (9999,'a',-1.0); --working\n\nYep, the problem gram.y line is:\n\n | '-' a_expr %prec UMINUS\n { $$ = makeA_Expr(OP, \"-\", NULL, $2);}\n\nThis is being executed rather than the code that reads in negative\ninteger constants.\n \n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 28 Mar 1998 16:22:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PORTS] Port Bug Report: pg_dump -d database >unload.file;\n\tcat unload.file|psql database ARE NOT EQUAL" }, { "msg_contents": "> > create table templ_arg(accii int2,type char,sign float8);\n> > create table a1 () inherits (templ_arg);\n> > insert into a1 values (9999,'a',1); -- working;\n> > insert into a1 values (9999,'a',-1); -- ERROR;\n> > insert into a1 values (9999,'a',-1.0); --working\n> \n> Yep, the problem gram.y line is:\n> \n> | '-' a_expr %prec UMINUS\n> { $$ = makeA_Expr(OP, \"-\", NULL, $2);}\n> \n> This is being executed rather than the code that reads in negative\n> integer constants.\n\nYes. It's a problem because the scanner assigned types to integer and\nfloating point constants, but does not have a sense of context so the\nnegative sign must be stripped off since it could be in the middle of a\nmath expression. We will need to fix this in gram.y or by using the new\ntype conversion stuff farther back. But, I don't know how much new type\nconversion capabilities there will be until I've tried to address most\nof the pieces; still working out lingering problems in the function call\nportion.\n\nWill put this one on my list of things to look at.\n\n - Tom\n", "msg_date": "Mon, 30 Mar 1998 06:42:36 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Port Bug Report: pg_dump -d database >unload.file;\n\tcat unload.file|psql database ARE NOT EQUAL" } ]
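A possible workaround sketch until the grammar is fixed -- untested against 6.3, and it assumes the parser will pass a quoted constant through float8's input routine rather than building a unary-minus expression:

    insert into a1 values (9999,'a','-1');   -- quoted, so no '-' expression is parsed
    insert into a1 values (9999,'a',-1.0);   -- or keep the decimal point, which is known to work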
[ { "msg_contents": "> \n> >> Subselects are a BIG item for 6.3, and this is a serious feature that we\n> >> should be telling people about. In the past, I am sure certain people\n> >> did not consider using PostgreSQL because of this missing feature.\n> >> \n> \n> Yes, they are a big reason I want to use PostgreSQL, but as far as\n> I can tell, they do not work. Is there a patch I am missing?\n> I have 6.3.1 on RedHat Linux 5.0.\n\nNope, this is the first problem I have heard about with subselects.\n\n> \n> Here is what I tried:\n> ======================================================================\n> bbrmdc=> select runnum from mdc1_simu where version = '4.3.7g';\n> runnum\n> ------\n> 048930\n> 048931\n> 048932\n> 048933\n> 048934\n> (5 rows)\n> \n> bbrmdc=> select distinct runtype from mdc1_runs where runnum in\n> bbrmdc-> ('048930','048931','048932','048933','048934');\n> runtype \n> --------------------\n> tau+ -> X, tau- -> X\n> (1 row)\n> \n> bbrmdc=> select distinct runtype from mdc1_runs where runnum in\n> bbrmdc-> (select runnum from mdc1_simu where version = '4.3.7g');\n> FATAL: unrecognized data from the backend. It probably dumped core.\n> FATAL: unrecognized data from the backend. It probably dumped core.\n> bbrmdc=> \\q\n> \n> ======================================================================\n> \n> Each of the single selects took < 1 sec. The fatals are that after 15 \n> minutes, I killed the postgres process on my server. BTW, is there \n> clean way to kill a query from the psql side? Doing a Ctrl-C just \n> kills the psql process and leaves the postgres process eating up my \n> CPU on the server. \n\nNo way to cancel them, but it is on the TODO list.\n\nI am CC'ing Vadim on this. Looks strange. Any way we can reproduce\nthis? Does the removal of the DISTINCT help? Are there a lot of values\nwithout the DISTINCT?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 28 Mar 1998 16:30:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Let's talk up 6.3" }, { "msg_contents": ">> I am CC'ing Vadim on this. Looks strange. Any way we can reproduce\n>> this? Does the removal of the DISTINCT help?\n\nNo, removing DISTINCT did not help.\n\nI currently have the data in Oracle and am using Perl and DBI to \ntransfer data between the two. I did the following additional tests. \nI dropped both tables, did a vacuum, and recreated the tables. Run the \nsubselect with them empty returned no rows as expected. I transfered \nover about 20 rows into each table. The subselect ran fine (and fast) \nreturning the expected result. \n\nI did another drop, vacuum, create and then transfered over the entire \n~5500 rows for each table. The subselect now hangs as before. Maybe \nit is working if the time is an expotential function of the number of \nrows. I killed it after 15 minutes. I fail to see why it should be \nmuch longer than doing the subselect by hand as in my previous email. \nOracle takes a couple of seconds to do the same subselect command. \n\nAfter killing the postgres process, I reconnected to the database\nand tried a vacuum. This also appeared to hang. I killed it after\none minute (it normal took about 5 seconds). 
I killed the postmaster, then\nrestarted, reconnected and a vacuum worked fine.\n\n>> Are there a lot of values\n>> without the DISTINCT?\n\nThere are just as many values as there are values returned by the\nsubselect. For my example it was just five, but it can certainly\nbe a lot more for other choices and the DISTINCT is important.\n\nHere are the tables:\n\nbbrmdc=> \\d mdc1_runs\n \nTable = mdc1_runs\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| runnum | char() not null | 6 |\n| runtype | text | var |\n| nevents | int4 | 4 |\n| who | text | var |\n| note | text | var |\n+----------------------------------+----------------------------------+-------+\nbbrmdc=> \\d mdc1_simu\n \nTable = mdc1_simu\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| runnum | char() not null | 6 |\n| version | varchar() not null | 10 |\n| jobgrp | varchar() not null | 8 |\n| bldrnum | int4 not null | 4 |\n| status | text | var |\n| cpusecs | int4 | 4 |\n| outsize | int4 | 4 |\n| machine | text | var |\n| location | text | var |\n| jobdate | abstime | 4 |\n| who | text | var |\n| note | text | var |\n+----------------------------------+----------------------------------+-------+\n\nI can make the entire database available to you if that would be helpful.\nIt is about 5MB uncompressed.\n\npr\n\n--\n_________________________________________________________________________\nPaul Raines [email protected] 650-926-2369\nStanford Linear Accelerator BABAR Group Software Team \nhttp://www.slac.stanford.edu/~raines/index.html <======== PGP public key\n\n\n", "msg_date": "Sat, 28 Mar 1998 15:04:25 -0800 (PST)", "msg_from": "Paul Raines <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Let's talk up 6.3" } ]
[ { "msg_contents": "> \n> Bruce,\n> \n> On 1998-03-28 10:15:48 -0500, Bruce Momjian wrote:\n> > > SELECT DISTINCT * from x2 UNION SELECT * FROM x2;\n> > \n> > Sorry to disappoint you, but this is fixed in the 6.3.1 patch.\n> \n> No disappointment at all, but I upgraded from 6.3 to 6.3.1. Well, at least\n> I think I did. Do you know of a way to get the current version of the\n> program?\n\nNow I am diappointed. You found a bug in my UNION code, and I am not\nsure how to fix it.\n\n\n> ~/data/PG_VERSION contains 6.3, so you may be right, but the HISTORY file\n> states that one of the changes from 6.3 to 6.3.1 includes the possibility\n> to use 'password' as a column identifier.\n\nIf history says 6.3.1 at the top, it is 6.3.1.\n\n> \n> Here's the output from my test system:\n> # bus@goliath [1019] =>psql -c 'create table x (password int)'\n> # CREATE\n> # bus@goliath [1020] =>psql -c 'select distinct * from pg_user union select *\n> # from pg_user'\n> # PQexec() -- Request was sent to backend, but backend closed the ...\n\nYes, this crashes things. I have added to the TODO list:\n\n\t* DISTINCT not on last query fails on UNION but not UNION ALL\n\nUNION does an automatic distinct between the two queries\n\n\ttest=> insert into test1 values (1);\n\tINSERT 18706 1\n\ttest=> insert into test1 values (1);\n\tINSERT 18707 1\n\ttest=> insert into test1 values (1);\n\tINSERT 18708 1\n\ttest=> insert into test2 values (2);\n\tINSERT 18709 1\n\ttest=> insert into test2 values (2);\n\tINSERT 18710 1\n\ttest=> select * from test1 union select * from test2;\n\tx\n\t-\n\t1\n\t2\n\t(2 rows)\n\nIf you use UNION ALL, you don't get distinct, and the query you gave me\nwould work.\n\nNot sure how to fix it. Could just disable the DISTINCT's when using\nUNION and not UNION ALL, because it is redundant. Perhaps Vadim has a\ncomment on this.\n\nt\n> \n> On another system that uses 6.3, 'password' isn't a valid id.\n> # bus@tardis [1017] =>psql -c 'create table x (password int)'\n> # ERROR: parser: parse error at or near \"password\"\n> \n> I hope I'm not upsetting you (or any of the developers) with my 'here's\n> another silly command that crashes the backend'-reports.\n\nGlad you are finding it. Glad I have a workaround for now. We will\nkeep it on the TODO list until it is fixed.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 28 Mar 1998 23:53:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: (PM) Re: [HACKERS] Ways to crash the backend" } ]
[ { "msg_contents": "Hi,\nThis simple patch to catalog/pg_type.c fixes a buffer overrun.\nIt was detected by Electric Fence and triggered by statements like:\n\n\tSELECT * into table t from pg_database;\n\nThe system would crash on a memmove call in DataFile() with arguments like this:\n\n\tmemmove(0x0, 0x0, 0); \n\nHere's the patch.\n\n320a321\n> \tNameData\t \tname;\n390c391,392\n< \tvalues[i++] = PointerGetDatum(typeName);\t/* 1 */\n---\n> \tnamestrcpy(&name,typeName);\n> \tvalues[i++] = NameGetDatum(&name);\t/* 1 */\n\nAfter applying the patch the problem goes away.\n\nI'll try to make all the buffer overruns detected by Electric Fence\nduring the regression test on my Linux system go away -:).\nDebugging is a nice way to learn about a system isn't it?\n\nWith regards from Maurice.\n\n", "msg_date": "Sun, 29 Mar 1998 14:33:21 +0200", "msg_from": "Maurice Gittens <[email protected]>", "msg_from_op": true, "msg_subject": "found another overrun" }, { "msg_contents": "Patch applied.\n\n> \n> Hi,\n> This simple patch to catalog/pg_type.c fixes a buffer overrun.\n> It was detected by Electric Fence and triggered by statements like:\n> \n> \tSELECT * into table t from pg_database;\n> \n> The system would crash on a memmove call in DataFile() with arguments like this:\n> \n> \tmemmove(0x0, 0x0, 0); \n> \n> Here's the patch.\n> \n> 320a321\n> > \tNameData\t \tname;\n> 390c391,392\n> < \tvalues[i++] = PointerGetDatum(typeName);\t/* 1 */\n> ---\n> > \tnamestrcpy(&name,typeName);\n> > \tvalues[i++] = NameGetDatum(&name);\t/* 1 */\n> \n> After applying the patch the problem goes away.\n> \n> I'll try to make all the buffer overruns detected by Electric Fence\n> during the regression test on my Linux system go away -:).\n> Debugging is a nice way to learn about a system isn't it?\n> \n> With regards from Maurice.\n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 30 Mar 1998 12:45:38 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] found another overrun" } ]
[ { "msg_contents": "http://users.skynet.be/sky92975/spck.htm\n********************************************************\n\nDear Mr./Mrs.\n\nDon't just pass over this message, because this might turn out to \nbe a\nvery valuable mail for you and your company.\n\nNewsgroup AutoPoster 95 - Version 2.0.14\nA high speed Newsgroup Auto Poster/Newsreader for Windows 95\nAutopost articles to more than 40,000 newsgroups\nSuper fast - up to 12,000 posting per hour on a single PC\nLoaded with features!\nEasy to run\n\nBuilt in Special Functions such as:\nScramble Random Fake Sender\nGroup Postings\nForce Hits\n(Any one reading your article is FORCED to a URL of your choice)\nAttach File - Attach any file to an article!\nMultiple Postings - Post more than one article to each newsgroup!\nRotate Postings - Rotate through a list of articles to post!\nNuke - New header \"x-no-archive\" for Deja News postings\nNews Reader - Read your postings direct from Newsgroup AutoPoster \n95\nNotePad - Built in notepad for cut and paste when writing \narticles etc...\nLog - Now you can create a log file to see whats really happens \nwhen you post\nSupport - Built in Online support - Sent support questions direct \nfrom Newsgroup AutoPoster 95\nWeb Browser - Built in Web Browser for easy access to internet \nsurfing\nSignature File - Now you can attach a signature file to your \narticles\n\nThe worlds most effective AutoPoster to Internets newsgroups! \nAutoPost articles to all Internets newsgroups, now more than \n40,000! Make unlimited profits through effective marketing of \nyour service/products. It's perfect for advertisement and sales. \nSome users have reported traffic increases on their sites from \n30-40 hits per day to more than 1000 hits per day in less than a \nweek after starting to use Newsgroup AutoPoster 95.\n\nDownload your own copy of Newsgroup AutoPoster 95 NOW!!!\nat :\n**********************************************\nhttp://users.skynet.be/sky92975/spck.htm\n", "msg_date": "Sun, 29 Mar 1998 14:39:19 -0500 (EST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "autopost/dm" } ]
[ { "msg_contents": "Paul Raines wrote:\n> \n> I have made no indices yet. And these are the only two tables\n> in the database (beside the system ones).\n> \n> bbrmdc=> explain verbose select distinct runtype from mdc1_runs where runnum in\n> bbrmdc-> (select runnum from mdc1_simu where version = '4.3.7g');\n> \n> Unique (cost=686.02 size=0 width=0)\n> -> Sort (cost=686.02 size=0 width=0)\n> -> Seq Scan on mdc1_runs (cost=686.02 size=1455 width=12)\n> SubPlan\n> -> Seq Scan on mdc1_simu (cost=733.02 size=1 width=12)\n> \n\nCurrent implementation of IN is very simple. As you see from EXPLAIN\nfor each row from mdc1_runs server performes SeqScan on mdc1_simu.\nTry to create index on mdc1_simu (version) and let's know about results.\nAlso, you could create index on mdc1_simu (version, runnum) and re-write\nyour query as\n\nselect distinct runtype from mdc1_runs where \nEXISTS (select * from mdc1_runs where version = '...' and\nrunnum = mdc1_runs.runnum);\n\n- this can be faster.\n\nIn the future, subselects in FROM-clause will be implemented and \n'IN' and others 'Op ANY' will be handled in this new way.\n\nVadim\n", "msg_date": "Mon, 30 Mar 1998 08:37:09 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Let's talk up 6.3" }, { "msg_contents": "> \n> I have made no indices yet. And these are the only two tables\n> in the database (beside the system ones).\n\nNo indexes. No wonder it takes so long. Put an index on\nmdc1_runs.runnum and mdc1_simu.version, and see how fast it is. Did\nOracle manage to do this quickly without the indexes?\n\nHaving it crash is certainly not an acceptable outcome, but I am sure\nindexes will fix the problem.\n\nNow, the fact that it runs quickly as separate queries, even without the\nindexes, but takes a long time with the indexes, I think is\nunderstandable. Think of a join of two tables. You can do through each\nquickly, but if you join two non-indexed fields, it will take quite some\ntime. I think our subselect code is doing just that. We designed it\nthat way to give good performance for the majority of subselects,\nincluding correlated ones.\n\n\n> \n> bbrmdc=> explain verbose select distinct runtype from mdc1_runs where runnum in\n> bbrmdc-> (select runnum from mdc1_simu where version = '4.3.7g');\n> NOTICE: QUERY PLAN:\n> \n> \n> Unique (cost=686.02 size=0 width=0)\n> -> Sort (cost=686.02 size=0 width=0)\n> -> Seq Scan on mdc1_runs (cost=686.02 size=1455 width=12)\n> SubPlan\n> -> Seq Scan on mdc1_simu (cost=733.02 size=1 width=12)\n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 30 Mar 1998 00:16:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Let's talk up 6.3" }, { "msg_contents": "bbrmdc=> select distinct runtype from mdc1_runs where runnum in\nbbrmdc-> (select runnum from mdc1_simu where version = '3.1.0');\n\nOk, without indices, the subselect took 35 seconds. I then\ncreated the following two indices which seem to correspond\nto the ones I have in Oracle:\n\nbbrmdc=> create index mdc1_runs_pk on mdc1_runs using btree ( \nbbrmdc-> runnum char_ops );\nCREATE\nbbrmdc=> create index mdc1_simu_pk on mdc1_simu using btree (\nbbrmdc-> runnum char_ops, version varchar_ops );\nCREATE\n\nThe subselect still took 35 seconds. 
I then created:\n\nbbrmdc=> create index mdc1_simu_ver on mdc1_simu using btree (\nbbrmdc-> version varchar_ops );\nCREATE\n\nNow the subselect takes < 3 seconds. Should I have expected that \nsecond index above to help at all? Since all runnum's are\nunique in this example, probably not. Would a rule be that\nif the first attribute of an index is unique, then additional\nattributes are basically useless?\n\n>> Having it crash is certainly not an acceptable outcome, but I am sure\n>> indexes will fix the problem.\n>> \n\nWell, it didn't exactly crash. I just gave up on it and killed it\nmyself after 15 minutes. That was when I had about 5500 rows in\neach table rather than the 2500 now. BTW, is there anyway for a \"user\"\nto stop a runaway postgres process? I had to log in directly to the\nserver and kill it as either root or postgres.\n\n>> Now, the fact that it runs quickly as separate queries, even without the\n>> indexes, but takes a long time with the indexes, I think is\n>> understandable. Think of a join of two tables. You can do through each\n>> quickly, but if you join two non-indexed fields, it will take quite some\n>> time. I think our subselect code is doing just that. We designed it\n>> that way to give good performance for the majority of subselects,\n>> including correlated ones.\n>> \n\nIs there a better way to do this subselect? Is there a way to\nsave the results of one query and feed it into a second one easily\nwhen doing interactive stuff on psql? I know this can be done in\nprogramming, though I worry the statement might get too long. I\nwas thinking of trying a function for this but they only seem to\nreturn scalars, not suitable for a IN clause.\n\nOn another note, is there anyway to prevent a user from being able\nto create tables in a database? There only seems to be security\nin making the connection in the first place and then there is\njust security on existing tables. I want to set up a \"safe\" user\nid that has query access only on a database.\n\nThanks for all your help.\n\npr\n\n\n--\n_________________________________________________________________________\nPaul Raines [email protected] 650-926-2369\nStanford Linear Accelerator BABAR Group Software Team \nhttp://www.slac.stanford.edu/~raines/index.html <======== PGP public key\n\n\n\n", "msg_date": "Mon, 30 Mar 1998 09:54:50 -0800 (PST)", "msg_from": "Paul Raines <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Let's talk up 6.3" }, { "msg_contents": "> \n> bbrmdc=> select distinct runtype from mdc1_runs where runnum in\n> bbrmdc-> (select runnum from mdc1_simu where version = '3.1.0');\n> \n> Ok, without indices, the subselect took 35 seconds. I then\n> created the following two indices which seem to correspond\n> to the ones I have in Oracle:\n> \n> bbrmdc=> create index mdc1_runs_pk on mdc1_runs using btree ( \n> bbrmdc-> runnum char_ops );\n> CREATE\n\n\n> bbrmdc=> create index mdc1_simu_pk on mdc1_simu using btree (\n> bbrmdc-> runnum char_ops, version varchar_ops );\n> CREATE\n\nThis index is useless. If you are only restricting on the second field\nof an index, and not the first, the index is useless, just like knowing\nthe second letter of a word is q doesn't help you look it up in a\ndictionary.\n\n> \n> The subselect still took 35 seconds. I then created:\n> \n> bbrmdc=> create index mdc1_simu_ver on mdc1_simu using btree (\n> bbrmdc-> version varchar_ops );\n> CREATE\n> \n> Now the subselect takes < 3 seconds. Should I have expected that \n> second index above to help at all? 
Since all runnum's are\n> unique in this example, probably not. Would a rule be that\n> if the first attribute of an index is unique, then additional\n> attributes are basically useless?\n\nSee above.\n\n> bbrmdc=> create index mdc1_simu_pk on mdc1_simu using btree (\n> bbrmdc-> runnum char_ops, version varchar_ops );\n\n> Well, it didn't exactly crash. I just gave up on it and killed it\n> myself after 15 minutes. That was when I had about 5500 rows in\n> each table rather than the 2500 now. BTW, is there anyway for a \"user\"\n> to stop a runaway postgres process? I had to log in directly to the\n> server and kill it as either root or postgres.\n\nNo, but on the TODO list.\n\n> >> Now, the fact that it runs quickly as separate queries, even without the\n> >> indexes, but takes a long time with the indexes, I think is\n> >> understandable. Think of a join of two tables. You can do through each\n> >> quickly, but if you join two non-indexed fields, it will take quite some\n> >> time. I think our subselect code is doing just that. We designed it\n> >> that way to give good performance for the majority of subselects,\n> >> including correlated ones.\n> >> \n> \n> Is there a better way to do this subselect? Is there a way to\n> save the results of one query and feed it into a second one easily\n> when doing interactive stuff on psql? I know this can be done in\n> programming, though I worry the statement might get too long. I\n> was thinking of trying a function for this but they only seem to\n> return scalars, not suitable for a IN clause.\n\nSELECT * INTO TABLE ... would work. DELETE when done.\n\n\n> \n> On another note, is there anyway to prevent a user from being able\n> to create tables in a database? There only seems to be security\n> in making the connection in the first place and then there is\n> just security on existing tables. I want to set up a \"safe\" user\n> id that has query access only on a database.\n\nNo, again on the TODO list.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 30 Mar 1998 13:34:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Let's talk up 6.3" } ]
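A sketch of the SELECT ... INTO TABLE rewrite suggested above, splitting the subselect into two indexable steps (the temporary table name is hypothetical and this is untested against 6.3):

    select runnum into table tmp_runs
      from mdc1_simu where version = '4.3.7g';
    select distinct runtype
      from mdc1_runs, tmp_runs
     where mdc1_runs.runnum = tmp_runs.runnum;
    drop table tmp_runs;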
[ { "msg_contents": "\tThere is one comment I would like to state, on the issue of \n\ta sequential scan beeing faster than an index scan. It is actually\noften\n\ttrue in a singel user system that an index scan is more expensive\n\tthan a sequential scan. \n\tAs long as we have table level locks this is also true for heavyly\n\tconcurrent access.\n\tHere comes the disadvantage:\n\tOnce row or page locks will be implemented, the sequential scan\n\tcost should be reconsidered, since then readers will often be\nwaiting \n\tfor updaters, that are actually updating data, that is irrelevant\nfor the\n\treader. The average wait time will have to be added to the sequ.\nscan\n\tcost.\n\n\tAndreas\n", "msg_date": "Mon, 30 Mar 1998 14:20:42 +0200", "msg_from": "Zeugswetter Andreas SARZ <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Optimizer fails?" } ]
[ { "msg_contents": "Hi, all\n\nI'm writing Reference Manual and I have two questions:\n\n1 - Why PostgreSQL accept HOUR, MINUTE and SECOND to extract if from a\n date ?\n\n EXTRACT (field FROM date_expression)\n\n There are no such fields on a date!\n \n It make sense only for SQL92 syntax because it uses also time and interval\n types:\n\n EXTRACT -- extract a datetime field from a datetime or interval.\n\n The possible values for field are:\n - YEAR\n - MONTH\n - DAY\n - HOUR\n - MINUTE\n - SECOND\n - TIMEZONE_HOUR\n - TIMEZONE_MINUTE\n\n------------------------------------------------------------------------\n\n2. - Seems that optional ALL keyword of UNION doesn't work.\n The following query prints always the same result with and without\n the ALL clause.\n\n* UNION of two tables:\n\nmytable: yourtable:\n id|name id|name\n --+------ --+------\n 1|Smith 1|Soares\n 2|Jones 2|Panini\n 3|Soares\n\n\nSELECT mytable.id, mytable.name\nFROM mytable\nWHERE mytable.name LIKE 'S%'\n UNION\n SELECT yourtable.id, yourtable.name\n FROM yourtable\n WHERE yourtable.name LIKE 'S%';\n\nthis is the result even if I don't specify ALL.\n id|name\n --+------\n 1|Smith\n 1|Soares\n 3|Soares\n---------\nSQL92 says that result does not contain any duplicate rows anless\n the ALL keyword is specified.\t\t \n\nWhat's wrong with my example ?\n Thanks, Jose'\n\n", "msg_date": "Mon, 30 Mar 1998 15:19:05 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "Reference Manual" }, { "msg_contents": "> 2. - Seems that optional ALL keyword of UNION doesn't work.\n> The following query prints always the same result with and without\n> the ALL clause.\n> \n> * UNION of two tables:\n> \n> mytable: yourtable:\n> id|name id|name\n> --+------ --+------\n> 1|Smith 1|Soares\n> 2|Jones 2|Panini\n> 3|Soares\n> \n> \n> SELECT mytable.id, mytable.name\n> FROM mytable\n> WHERE mytable.name LIKE 'S%'\n> UNION\n> SELECT yourtable.id, yourtable.name\n> FROM yourtable\n> WHERE yourtable.name LIKE 'S%';\n> \n> this is the result even if I don't specify ALL.\n> id|name\n> --+------\n> 1|Smith\n> 1|Soares\n> 3|Soares\n\nThe second column is duplicate, but the first is not. It looks at all\ncolumns to determine duplicates.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 30 Mar 1998 10:25:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Reference Manual]" }, { "msg_contents": "> I'm writing Reference Manual and I have two questions:\n> \n> 1 - Why PostgreSQL accept HOUR, MINUTE and SECOND to extract if from a\n> date ?\n> EXTRACT (field FROM date_expression)\n> There are no such fields on a date!\n\nAnd it returns zeros for those fields. I think that is OK; it makes for\na symmetric implementation...\n\n> - TIMEZONE_HOUR\n> - TIMEZONE_MINUTE\n\nHmm. Don't do these yet. But:\n\ntgl=> select date_part('timezone', 'now');\ndate_part\n---------\n -7200\n(1 row)\n\nso the underlying implementation does know about timezones. It may only\nneed a parser adjustment. Will look at it...\n\n - Tom\n", "msg_date": "Mon, 30 Mar 1998 16:17:17 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Reference Manual" }, { "msg_contents": "> so the underlying implementation does know about timezones. 
It may \n> only need a parser adjustment. Will look at it...\n\ntgl=> select extract(timezone_hour from datetime 'now');\n\ndate_part\n---------\n -1\n(1 row)\n\ntgl=> show timezone;\nNOTICE: Time zone is GMT-1\nSHOW VARIABLE\n\nWas more than a parser adjustment, but not too bad. \nWill be in v6.4 :)\n\n - Tom\n", "msg_date": "Mon, 30 Mar 1998 16:55:03 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Reference Manual" }, { "msg_contents": "> The point is: why EXTRACT accepts only date types ?\n> SQL92 specifies date, time, timestamp and interval.\n\ntgl=> select extract (year from date 'now');\ndate_part\n---------\n 1998\n(1 row)\ntgl=> select extract (year from datetime 'now');\ndate_part\n---------\n 1998\n(1 row)\ntgl=> select extract (year from abstime 'now');\ndate_part\n---------\n 1998\n(1 row)\ntgl=> select extract(year from timestamp 'now');\ndate_part\n---------\n 1998\n(1 row)\ntgl=> select extract (hour from timespan '5 hours');\ndate_part\n---------\n 5\n(1 row)\n\ntgl=> select extract (hour from reltime '5 hours');\ndate_part\n---------\n 5\n(1 row)\ntgl=> select extract (hour from interval '5 hours');\ndate_part\n---------\n 5\n(1 row)\n\nAnd,\n\ntgl=> select extract (hour from time '03:04:05');\nERROR: function 'time_timespan(time)' does not exist\n\nThis is a known problem; will fix for v6.4.\n\n - Tom\n", "msg_date": "Wed, 01 Apr 1998 14:33:42 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Reference Manual" }, { "msg_contents": "On Mon, 30 Mar 1998, Thomas G. Lockhart wrote:\n\n> > I'm writing Reference Manual and I have two questions:\n> > \n> > 1 - Why PostgreSQL accept HOUR, MINUTE and SECOND to extract if from a\n> > date ?\n> > EXTRACT (field FROM date_expression)\n> > There are no such fields on a date!\n> \n> And it returns zeros for those fields. I think that is OK; it makes for\n> a symmetric implementation...\n> \nThe point is: why EXTRACT accepts only date types ?\n\nSQL92 specifies date, time, timestamp and interval.\n\n Ciao, Jose'\n\n", "msg_date": "Wed, 1 Apr 1998 14:46:48 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [DOCS] Reference Manual" }, { "msg_contents": "> > tgl=> select extract (year from date 'now');\n> > date_part\n> > ---------\n> > 1998\n> > (1 row)\n> It doesn't work for me. Why ??\n> psql=> select extract (year from current_timestamp);\n> ERROR: function date_part(unknown, timestamp) does not exist\n\nWhat version of Postgres are you running? Something may have gone a\nlittle screwy in v6.3.1, since the numerology regression test has been\nreported to have failed with it unable to compare an int4 to a float8. \n\nIt must work for some installations though since they wouldn't have\nreleased without a clean regression test, right? :)\n\nI'm still developing with v6.3 because I'm in the middle of working on\nthe automatic type conversion stuff...\n\n - Tom\n", "msg_date": "Wed, 01 Apr 1998 16:08:30 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Reference Manual" }, { "msg_contents": "On Wed, 1 Apr 1998, Thomas G. 
Lockhart wrote:\n\n> > The point is: why EXTRACT accepts only date types ?\n> > SQL92 specifies date, time, timestamp and interval.\n> \n> tgl=> select extract (year from date 'now');\n> date_part\n> ---------\n> 1998\n> (1 row)\n> tgl=> select extract (year from datetime 'now');\n> date_part\n> ---------\n> 1998\n> (1 row)\n> tgl=> select extract (year from abstime 'now');\n> date_part\n> ---------\n> 1998\n> (1 row)\n> tgl=> select extract(year from timestamp 'now');\n> date_part\n> ---------\n> 1998\n> (1 row)\n> tgl=> select extract (hour from timespan '5 hours');\n> date_part\n> ---------\n> 5\n> (1 row)\n> \n> tgl=> select extract (hour from reltime '5 hours');\n> date_part\n> ---------\n> 5\n> (1 row)\n> tgl=> select extract (hour from interval '5 hours');\n> date_part\n> ---------\n> 5\n> (1 row)\n> \n> And,\n> \n> tgl=> select extract (hour from time '03:04:05');\n> ERROR: function 'time_timespan(time)' does not exist\n> \n> This is a known problem; will fix for v6.4.\n> \n> - Tom\n> \n\nIt doesn't work for me. Why ??\n\npsql=> select extract (year from current_timestamp);\nERROR: function date_part(unknown, timestamp) does not exist\npsql=> select extract (hour from current_time);\nERROR: function time_timespan(time) does not exist\npsql=> select extract (minute from current_time);\nERROR: function time_timespan(time) does not exist\npsql=> select extract (second from current_time);\nERROR: function time_timespan(time) does not exist\n Ciao, Jose'\n\n", "msg_date": "Wed, 1 Apr 1998 16:32:29 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [DOCS] Reference Manual" }, { "msg_contents": "On Wed, 1 Apr 1998, Thomas G. Lockhart wrote:\n\n> > > tgl=> select extract (year from date 'now');\n> > > date_part\n> > > ---------\n> > > 1998\n> > > (1 row)\n> > It doesn't work for me. Why ??\n> > psql=> select extract (year from current_timestamp);\n> > ERROR: function date_part(unknown, timestamp) does not exist\n> \n> What version of Postgres are you running? Something may have gone a\n> little screwy in v6.3.1, since the numerology regression test has been\n> reported to have failed with it unable to compare an int4 to a float8. \n> \n> It must work for some installations though since they wouldn't have\n> released without a clean regression test, right? :)\n> \n> I'm still developing with v6.3 because I'm in the middle of working on\n> the automatic type conversion stuff...\n> \nI I'm running version 6.3 and my regress.out is like this:\n\n=============== Notes... =================\npostmaster must already be running for the regression tests to succeed.\nThe time zone is now set to PST8PDT explicitly by this regression test\n client frontend. Please report any apparent problems to\n [email protected]\nSee regress/README for more information.\n\n=============== destroying old regression database... =================\n=============== creating new regression database... =================\n=============== running regression queries... =================\nboolean .. ok\nchar .. ok\nchar2 .. ok\nchar4 .. ok\nchar8 .. ok\nchar16 .. ok\nvarchar .. ok\ntext .. ok\nstrings .. ok\nint2 .. ok\nint4 .. ok\noid .. ok\noidint2 .. ok\noidint4 .. ok\noidname .. ok\nfloat4 .. ok\nfloat8 .. ok\nnumerology .. ok\npoint .. ok\nlseg .. ok\nbox .. ok\npath .. ok\npolygon .. ok\ncircle .. ok\ngeometry .. failed\ntimespan .. ok\ndatetime .. ok\nreltime .. ok\nabstime .. ok\ntinterval .. ok\nhorology .. ok\ncomments .. 
ok\ncreate_function_1 .. ok\ncreate_type .. ok\ncreate_table .. ok\ncreate_function_2 .. ok\nconstraints .. ok\ntriggers .. ok\ncopy .. ok\ncreate_misc .. ok\ncreate_aggregate .. ok\ncreate_operator .. ok\ncreate_view .. ok\ncreate_index .. ok\nsanity_check .. ok\nerrors .. ok\nselect .. ok\nselect_into .. ok\nselect_distinct .. ok\nselect_distinct_on .. ok\nsubselect .. ok\naggregates .. ok\ntransactions .. ok\nrandom .. ok\nportals .. ok\nmisc .. ok\narrays .. ok\nbtree_index .. ok\nhash_index .. ok\nselect_views .. ok\nalter_table .. ok\nportals_p2 .. ok\n Ciao, Jose'\n\n", "msg_date": "Thu, 2 Apr 1998 11:11:46 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [DOCS] Reference Manual" } ]
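For reference, the fix for the time_timespan() errors above amounts to supplying the missing conversion function from time to timespan. Below is only a rough sketch of one possible shape for it, under two unverified assumptions about the v6.3 internals: that TimeADT is a float8 holding seconds since midnight, and that TimeSpan is a {double time; int4 month} pair. The _x names are placeholders, not the real ones.

#include "postgres.h"

/* assumed layouts, not taken from the real v6.3 headers */
typedef double TimeADT_x;
typedef struct { double time; int month; } TimeSpan_x;

TimeSpan_x *
time_timespan_x(TimeADT_x *t)
{
	TimeSpan_x *result;

	if (!PointerIsValid(t))
		return NULL;

	result = (TimeSpan_x *) palloc(sizeof(TimeSpan_x));
	result->time = *t;	/* seconds of the day carry over directly */
	result->month = 0;	/* a pure time of day has no month component */
	return result;
}

With something of that shape registered as time_timespan, EXTRACT(hour FROM time ...) could simply reuse the existing timespan date_part() code.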
[ { "msg_contents": "\nSomeone is/was talking about using PostgreSQL for the backend for this?\nBasically, making the mailing list archives searchable...\n\nWe don't have free-text searching, I don't believe, but anyone have an\nidea on what would be involved in having it?\n\n---------- Forwarded message ----------\nDate: Mon, 30 Mar 1998 09:50:48 +0100\nFrom: Bob Bishop <[email protected]>\nTo: [email protected], Wolfram Schneider <[email protected]>\nCc: [email protected], [email protected], [email protected],\n [email protected], Amancio Hasty <[email protected]>\nSubject: Re: [PORTS] Pgaccess doesn't run on -current anymore, Update\n\nAt 10:57 pm +0100 29/3/98, Simon Shapiro wrote:\n>..\n>We have been playing with the idea of normalizing the archive into an\n>RDBMS. [etc]\n\nGreat, but to be useful you need free-text search too.\n\n\n--\nBob Bishop (0118) 977 4017 international code +44 118\[email protected] fax (0118) 989 4254 between 0800 and 1800 UK\n\n\n", "msg_date": "Mon, 30 Mar 1998 11:41:10 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PORTS] Pgaccess doesn't run on -current anymore, Update (fwd)" }, { "msg_contents": "> \n> \n> Someone is/was talking about using PostgreSQL for the backend for this?\n> Basically, making the mailing list archives searchable...\n> \n> We don't have free-text searching, I don't believe, but anyone have an\n> idea on what would be involved in having it?\n\nTalk to Maarten Boekhold, [email protected]. I just asked him\nabout this because we suggested the CLUSTER change, and I wanted to get\nnew performance numbers. I also asked him for a copy for contrib.\n\nI am going to add him to the TODO contrib list. I think he has\ncontributed patches in the past, but is not on the list.\n\n> \n> ---------- Forwarded message ----------\n> Date: Mon, 30 Mar 1998 09:50:48 +0100\n> From: Bob Bishop <[email protected]>\n> To: [email protected], Wolfram Schneider <[email protected]>\n> Cc: [email protected], [email protected], [email protected],\n> [email protected], Amancio Hasty <[email protected]>\n> Subject: Re: [PORTS] Pgaccess doesn't run on -current anymore, Update\n> \n> At 10:57 pm +0100 29/3/98, Simon Shapiro wrote:\n> >..\n> >We have been playing with the idea of normalizing the archive into an\n> >RDBMS. [etc]\n> \n> Great, but to be useful you need free-text search too.\n> \n> \n> --\n> Bob Bishop (0118) 977 4017 international code +44 118\n> [email protected] fax (0118) 989 4254 between 0800 and 1800 UK\n> \n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Mon, 30 Mar 1998 12:35:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PORTS] Pgaccess doesn't run on -current anymore,\n\tUpdate (fwd)" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> We don't have free-text searching, I don't believe, but anyone have an\n> idea on what would be involved in having it?\n\nA new type of index splitting and indexing words\nworking on text and large objects would be\na good starting point.\n\n\tregards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n", "msg_date": "Tue, 31 Mar 1998 10:56:03 +0200", "msg_from": "Goran Thyni <[email protected]>", "msg_from_op": false, "msg_subject": "free-text searching" } ]
[ { "msg_contents": "Can someone comment on this?\n\nForwarded message:\n> From [email protected] Thu Mar 26 15:25:15 1998\n> Date: Thu, 26 Mar 1998 15:12:29 -0500 (EST)\n> From: Unprivileged user <[email protected]>\n> Message-Id: <[email protected]>\n> To: [email protected]\n> Reply-to: john edstrom <[email protected]>\n> Subject: [PORTS] Port Bug Report: ident authority map problem\n> Sender: [email protected]\n> Precedence: bulk\n> \n> \n> ============================================================================\n> POSTGRESQL BUG REPORT TEMPLATE\n> ============================================================================\n> \n> \n> Your name\t\t: john edstrom\n> Your email address\t: [email protected]\n> \n> Category\t\t: runtime: back-end\n> Severity\t\t: serious\n> \n> Summary: ident authority map problem\n> \n> System Configuration\n> --------------------\n> Operating System : linux 2.0.32 ELF\n> \n> PostgreSQL version : 6.3.1\n> \n> Compiler used : cc -v => gcc version egcs-2.90.23 980102 (egcs-1.0.1 release)\n> \n> \n> Hardware:\n> ---------\n> Linux Poopsie.hmsc.orst.edu 2.0.32 #26 Wed Mar 18 17:11:39 PST 1998 i586 unknown\n> \n> \n> Versions of other tools:\n> ------------------------\n> GNU Make version 3.76.1\n> flex version 2.5.4\n> \n> \n> --------------------------------------------------------------------------\n> \n> Problem Description:\n> --------------------\n> postgres gets confused reading hba.conf. The last line\n> pg_hba.conf appears not to be read properly. Specifications\n> above the last line appear to be understood correctly.\n> \n> --------------------------------------------------------------------------\n> \n> Test Case:\n> ----------\n> Here is how I do it.\n> \n> 3 lines in pg_hba.con\n> \n> host edstrom 127.0.0.1 255.255.255.255 ident test\n> host all 127.0.0.1 255.255.255.255 ident pgsql\n> host tstdb 127.0.0.1 255.255.255.255 ident tst\n> \n> \n> 4 lines in pg_ident.conf\n> tst edstrom edstrom\n> pgsql postgres postgres\n> test edstrom edstrom\n> test postgres postgres\n> \n> Postgres and edstrom are unix accounts, tstdb is a valid\n> postgres user but not a unix account.\n> \n> Around line 729 (verify_against_open_usermap()) in \n> src/backend/libpq/hba.c I put:\n> \n> sprintf(PQerrormsg,\"pg_ident: [%s] [%s] [%s] [%s] [%s] [%s]\\n\",\n> file_map, usermap_name,\n> file_pguser, pguser,\n> file_iuser, ident_username\n> );\n> fputs(PQerrormsg, stderr);\n> pqdebug(\"%s\", PQerrormsg);\n> \n> using psql from the command line user edstrom tries to\n> connect to tstdb (\"psql tstdb\") and is rejected. The error\n> log says:\n> \n> >>->pg_ident: [tst] [pgsql] [edstrom] [edstrom] [edstrom] [edstrom]\n> pg_ident: [pgsql] [pgsql] [postgres] [edstrom] [postgres] [edstrom]\n> pg_ident: [test] [pgsql] [edstrom] [edstrom] [edstrom] [edstrom]\n> pg_ident: [test] [pgsql] [postgres] [edstrom] [postgres] [edstrom]\n> pg_ident: [] [pgsql] [] [edstrom] [] [edstrom]\n> pg_ident: [] [pgsql] [] [edstrom] [] [edstrom]\n> User authentication failed\n> \n> The arrow shows where it should have succeeded. For some\n> It isn't cycling through usermap_name properly.\n> \n> --------------------------------------------------------------------------\n> \n> Solution:\n> ---------\n> \n> \n> --------------------------------------------------------------------------\n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Mon, 30 Mar 1998 12:22:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "[PORTS] Port Bug Report: ident authority map problem (fwd)" } ]
[ { "msg_contents": "> Would a rule be that\n> if the first attribute of an index is unique, then additional\n> attributes are basically useless?\n\nFor PostgreSQL this is currently true, since indexes are currently not\nused for order by. If you have a unique first column in an index,\nthen all following columns could only be used for sorting,\nnot for faster access (access actually gets worse).\n\nAndreas\n\n", "msg_date": "Mon, 30 Mar 1998 20:19:01 +0200", "msg_from": "Zeugswetter Andreas SARZ <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Re: Let's talk up 6.3" }, { "msg_contents": "> \n> > Would a rule be that\n> > if the first attribute of an index is unique, then additional\n> > attributes are basically useless?\n> \n> For PostgreSQL this is currently true, since indexes are currently not\n> used for order by. If you have a unique first column in an index,\n> then all following columns could only be used for sorting,\n> not for faster access (access actually gets worse).\n\nSorry, don't follow this logic. He is not restricting on the first\nfield of the index, so the index is not used.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 30 Mar 1998 13:35:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Re: Let's talk up 6.3" }, { "msg_contents": "Andreas:\n> > Would a rule be that\n> > if the first attribute of an index is unique, then additional\n> > attributes are basically useless?\n> \n> For PostgreSQL this is currently true, since indexes are currently not\n> used for order by. If you have a unique first column in an index,\n> then all following columns could only be used for sorting,\n> not for faster access (access actually gets worse).\n\nThe rule 'if the first attribute of an index is unique, then additional\nattributes are basically useless' is exactly correct for all systems, not\njust PostgreSQL. It has nothing to do with whether indexes are used for\n'orderby'.\n\nA bit of thought will reveal that if the first key is unique then there\nis no way any subsequent key can influence the sort order. Consider:\n\n col1 col2\n ---- ----\n A 9\n B 5\n C 1\n ... ...\n\nThere is no value you can put in col2 that will make 'A' sort after 'B'.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Mon, 30 Mar 1998 11:14:18 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Re: Let's talk up 6.3" } ]
[ { "msg_contents": "Paul Raines wrote:\n> \n> >> Current implementation of IN is very simple. As you see from EXPLAIN\n> >> for each row from mdc1_runs server performes SeqScan on mdc1_simu.\n> >> Try to create index on mdc1_simu (version) and let's know about results.\n> >> Also, you could create index on mdc1_simu (version, runnum) and re-write\n> >> your query as\n> >>\n> >> select distinct runtype from mdc1_runs where\n> >> EXISTS (select * from mdc1_runs where version = '...' and\n> >> runnum = mdc1_runs.runnum);\n> >>\n> >> - this can be faster.\n> >>\n> \n> It was about 4 seconds faster. After creating the indices, the\n> above took < 3 seconds, as did the original subselect statement.\n\nPlease remember us how long query was in Oracle.\nAlso, as I understand, subselect with EXISTS takes < 3 sec and\noriginal subselect (with IN) takes ~ 7 sec - is this correct ?\n\nVadim\n", "msg_date": "Tue, 31 Mar 1998 02:31:41 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Let's talk up 6.3" } ]
[ { "msg_contents": "I haven't been able to CVSup in a bit... I keep getting\n\nruntime error:\n\tAttempt to dereference NIL\n\tfile \"../src/text/Text.m3\", line 58\n\nAnyone have any ideas? This has been happening since January and\nintermittently before since the beginning of November (IIRC).\n\nI've been downloading the source by FTP but I miss this...\n\nG'day, eh? :)\n\t- Teunis\n\n", "msg_date": "Mon, 30 Mar 1998 12:53:58 -0700 (MST)", "msg_from": "teunis <[email protected]>", "msg_from_op": true, "msg_subject": "CVSup" }, { "msg_contents": "On Mon, 30 Mar 1998, teunis wrote:\n\n> I haven't been able to CVSup in a bit... I keep getting\n> \n> runtime error:\n> \tAttempt to dereference NIL\n> \tfile \"../src/text/Text.m3\", line 58\n\n\tHave you tried removing the ../src/text directory? What version\nof CVSup? Operating system?\n\n> \n> Anyone have any ideas? This has been happening since January and\n> intermittently before since the beginning of November (IIRC).\n\n\tGeez, I just *love* timely bug reports :)\n\n\n", "msg_date": "Mon, 30 Mar 1998 14:54:29 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVSup" }, { "msg_contents": "On Mon, 30 Mar 1998, The Hermit Hacker wrote:\n\n> On Mon, 30 Mar 1998, teunis wrote:\n> \n> > I haven't been able to CVSup in a bit... I keep getting\n> > \n> > runtime error:\n> > \tAttempt to dereference NIL\n> > \tfile \"../src/text/Text.m3\", line 58\n> \n> \tHave you tried removing the ../src/text directory? What version\n> of CVSup? Operating system?\n\nThanks - I always forget OS...\n\nI don't HAVE a ../src/text directory - it looks like an error in the\nbinary... BUT - I've figured it out.. *grin*\n\n> > Anyone have any ideas? This has been happening since January and\n> > intermittently before since the beginning of November (IIRC).\n> \n> \tGeez, I just *love* timely bug reports :)\n\n*heh* - well, it WAS intermittent. Thought it was lack of drivespace\nactually - because it would start working (before) if I freed up some\nspace.\n\nBut lately it hasn't worked at all:\n\nSystem:\n\tLinux; glibc-2.0 (redhat-type system)\nFix:\n\tDownload static version of cvsup\n\nLikely bug:\n\tToo old CVSup. Prolly last summer-ish. whoops.\n\nSorry about the false alarm, folks.\n\nG'day, eh? :)\n\t- Teunis\n\nPS: now to check that ODBC driver again... *sigh*...\n\n", "msg_date": "Mon, 30 Mar 1998 15:47:20 -0700 (MST)", "msg_from": "teunis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVSup" } ]
[ { "msg_contents": "On Mon, 30 Mar 1998, Simon Shapiro wrote:\n\n> \n> On 30-Mar-98 The Hermit Hacker wrote:\n> > On Mon, 30 Mar 1998, Simon Shapiro wrote:\n> > \n> >> \n> >> On 30-Mar-98 Bob Bishop wrote:\n> >> > At 10:57 pm +0100 29/3/98, Simon Shapiro wrote:\n> >> >>..\n> >> >>We have been playing with the idea of normalizing the archive into an\n> >> >>RDBMS. [etc]\n> >> > \n> >> > Great, but to be useful you need free-text search too.\n> >> \n> >> Yup. This is in the head-scratching stage still. I was thinking of\n> >> either\n> >> glimpse or maybe simply extracting the blob and applying regex to it.\n> >> Comments?\n> > \n> > Just checked into it, and *supposedly* there is a free-text search\n> > module for PostgreSQL available...\n> \n> Which? Where? This is good news. Saves some headaches.\n\nI've CC'd this into [email protected] ... Bruce pointed out\nthat Maarten(?) was working on something like this...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n", "msg_date": "Mon, 30 Mar 1998 17:48:45 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PORTS] Pgaccess doesn't run on -current anymore, Update" } ]
[ { "msg_contents": "Hi,\n I am experiencing a problem with my server application\nafter upgrading from v6.2 to v6.3 and hope that you may\nhave some insight. The problem is that after several\nthousand database transactions the postgres backend returns\nthe error \"palloc failed - out of memory\". The server app\nmaintains it's connection to the same postgres backend\nthroughout. \n\n I notice a tremendous amount of image growth in the\npostgres backend(8K per 10 trans). None of the application\ncode has changed, only the upgrade to v6.3 of postgres.\n\n Is memory leakage a known problem in v6.3? If so, has it\nbeen addressed in v6.31?\n\nI appreciate any information you can give me.\nthanks,\n-steve\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n> E-Mail: Steve Tarkalson <[email protected]>\n> Company: Atlas Telecom, Portland OR\n> Group: APS Engineering\n> Sent: 30-Mar-98 19:02:36\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\n", "msg_date": "Mon, 30 Mar 1998 19:02:36 -0800 (PST)", "msg_from": "Steve Tarkalson <[email protected]>", "msg_from_op": true, "msg_subject": "problem with 6.3" } ]
[ { "msg_contents": "I have merged the rename manual pages into the alter_table manual page,\nwhere it belongs.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 30 Mar 1998 23:34:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "rename manual page" } ]
[ { "msg_contents": "\t>> \n\t>> > Would a rule be that\n\t>> > if the first attribute of an index is unique, then additional\n\t>> > attributes are basically useless?\n\t>> \n\t>> For PostgreSQL this is currently true, since indexes are\ncurrently not\n\t>> used for order by. If you have a unique first column in an index,\n\t>> then all following columns could only be used for sorting,\n\t>> not for faster access (access actually gets worse).\n\t>\n\t>Sorry, don't follow this logic. He is not restricting on the first\n\t>field of the index, so the index is not used.\n\nOoops, I did not look at that, I just took the sentence standalone,\nand under the presumption that the first field of the index is in the where\nrestriction.\n\nAndreas\n\n", "msg_date": "Tue, 31 Mar 1998 09:21:13 +0200", "msg_from": "Zeugswetter Andreas SARZ <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: [HACKERS] Re: Let's talk up 6.3" } ]
[ { "msg_contents": "\t>> > Would a rule be that\n\t>> > if the first attribute of an index is unique, then additional\n\t>> > attributes are basically useless?\n\t>> \n<< cut some of my nonsense here >> \n\n\tDavid Gould writes:\n\t>The rule 'if the first attribute of an index is unique, then\nadditional\n\t>attributes are basically useless' is exactly correct for all\nsystems, not\n\t>just PostgreSQL. It has nothing to do with whether indexes are used\nfor\n\t>'orderby'.\n\nThe second Ooops in one day, I guess I have to relearn the meaning of\n\"First think then talk.\", Sorry. (what a day, well at least the sun is\nshining here :-) \nBut to speak in self defense there is still one case where the trailing \ncolumns in this index would help. If only index columns are selected,\nthe engine could do an index only scan (not in postgresql yet).\n\nAndreas\n\n\n", "msg_date": "Tue, 31 Mar 1998 09:48:39 +0200", "msg_from": "Zeugswetter Andreas SARZ <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Let's talk up 6.3" } ]
[ { "msg_contents": "Hi,\n\nWhile debugging yet another overrun I came across the StrNCpy macro.\nA quick grep of the source tells me that usage of the StrNCpy macro is\nseemingly inconsistent.\n\nUsage 1:\nstrptr = palloc(len); // done is a diffrent context\nptr = palloc(len + 1);\nStrNCpy(ptr, strptr, len + 1);\n\nUsage 2:\nNameData name;\nStrNCpy(name.data, ptr2name, NAMEDATALEN);\n\nThe StrNCpy macro zero terminates the destination buffer.\n\nUsage 1 is gives a read=buffer overrun (which I agree is not the most\nserious of bugs\nif you system doesn't dump core on it).\nUsage 2 makes gives the name a maximum of 31 instead of 32 characters.\n\nIs the maximun name length supposted to be 31 or 32 characters?\n\nWith regards from Maurice.\n\n", "msg_date": "Tue, 31 Mar 1998 11:12:07 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "StrNCpy" }, { "msg_contents": "> \n> Hi,\n> \n> While debugging yet another overrun I came across the StrNCpy macro.\n> A quick grep of the source tells me that usage of the StrNCpy macro is\n> seemingly inconsistent.\n> \n> Usage 1:\n> strptr = palloc(len); // done is a diffrent context\n> ptr = palloc(len + 1);\n> StrNCpy(ptr, strptr, len + 1);\n> \n> Usage 2:\n> NameData name;\n> StrNCpy(name.data, ptr2name, NAMEDATALEN);\n> \n> The StrNCpy macro zero terminates the destination buffer.\n> \n> Usage 1 is gives a read=buffer overrun (which I agree is not the most\n> serious of bugs\n> if you system doesn't dump core on it).\n> Usage 2 makes gives the name a maximum of 31 instead of 32 characters.\n> \n> Is the maximun name length supposted to be 31 or 32 characters?\n\nThanks for checking into these things for us Maurice. I zero-terminate\nall name-type fields, so the max is 31, not 32.\n\nThe SQL manual page says:\n\n Names in SQL are sequences of less than NAMEDATALEN\n alphanumeric characters, starting with an alphabetic char-\n acter. By default, NAMEDATALEN is set to 32, but at the\n time the system is built, NAMEDATALEN can be changed by\n changing the #ifdef in src/backend/include/postgres.h.\n Underscore (\"_\") is considered an alphabetic character.\n\nI updated this manual page when I decided to be consistent and always\nzero-terminate the NAME type.\n\nSo the second usage is OK, the first usage:\n\t\n\t> strptr = palloc(len); // done is a diffrent context\n\t> ptr = palloc(len + 1);\n\t> StrNCpy(ptr, strptr, len + 1);\n\nis legally a problem because strNcpy() is doing:\n\n (strncpy((dst),(src),(len)),(len > 0) ? *((dst)+(len)-1)='\\0' :(dummyret)NULL,(void)(dst))\n\nand the call to strncpy() is using len, when that is one too large. \nNow, I know I am going to write over that byte anyway if len >0, so it\nis cleaned up, but it is wrong. I will change the macro to do:\n\n (strncpy((dst),(src),(len-1)),(len > 0) ? *((dst)+(len)-1)='\\0' :(dummyret)NULL,(void)(dst))\n\nor something like that so I check for len == 0.\n\nHere is a patch that does this. I am applying it now. Uses the new\nmacro formatting style:\n\n---------------------------------------------------------------------------\n\n*** ./include/c.h.orig\tTue Mar 31 10:42:36 1998\n--- ./include/c.h\tTue Mar 31 10:46:23 1998\n***************\n*** 703,709 ****\n */\n /* we do this so if the macro is used in an if action, it will work */\n #define StrNCpy(dst,src,len)\t\\\n! \t(strncpy((dst),(src),(len)),(len > 0) ? 
*((dst)+(len)-1)='\\0' : (dummyret)NULL,(void)(dst))\n \n /* Get a bit mask of the bits set in non-int32 aligned addresses */\n #define INT_ALIGN_MASK (sizeof(int32) - 1)\n--- 703,717 ----\n */\n /* we do this so if the macro is used in an if action, it will work */\n #define StrNCpy(dst,src,len)\t\\\n! ( \\\n! \t((len) > 0) ? \\\n! \t( \\\n! \t\tstrncpy((dst),(src),(len)-1), \\\n! \t\t*((dst)+(len)-1)='\\0' \\\n! \t) \\\n! \t: \\\n! \t\t(dummyret)NULL,(void)(dst) \\\n! )\n \n /* Get a bit mask of the bits set in non-int32 aligned addresses */\n #define INT_ALIGN_MASK (sizeof(int32) - 1)\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 31 Mar 1998 10:51:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] StrNCpy" } ]
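For anyone who wants to check the patched behavior outside the backend, here is a small stand-alone demo. It mirrors the shape of the macro above but is a sketch, not the backend's code; (char) 0 plays the role of (dummyret)NULL so both ternary branches have the same type:

#include <stdio.h>
#include <string.h>

#define StrNCpy(dst, src, len) \
	(((len) > 0) \
		? (strncpy((dst), (src), (len) - 1), \
		   *((dst) + (len) - 1) = '\0') \
		: (char) 0, \
	 (void) (dst))

int
main(void)
{
	char name[8];

	StrNCpy(name, "postgresql", sizeof(name));
	printf("[%s] len=%d\n", name, (int) strlen(name)); /* [postgre] len=7 */

	StrNCpy(name, "pg", sizeof(name));
	printf("[%s] len=%d\n", name, (int) strlen(name)); /* [pg] len=2 */

	return 0;
}

The first call shows the NAMEDATALEN behavior discussed above: an 8-byte destination holds at most 7 characters plus the terminator.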
[ { "msg_contents": "> \n> On Mon, 30 Mar 1998, Bruce Momjian wrote:\n> \n> > With the CLUSTER change, are things faster. Can you give me some times?\n> > Can you supply some source to put into contrib/?\n> \n> I did some timing after I clustered the data. However, I remember no \n> spectacular speedups. Maybe I remember wrong. I'll do the timings again \n> tonight and send them to you. I'd be happy to send sources to be put into \n> contrib, but I might need some help cleaning it up so ppl can actually \n> *use* it (an easy build, maybe some script to create the correct \n> tables/indices for a particular setup etc.)\n> \n> I also think it would be nice if there was a function that could be \n> called to build the index *after* the main table is populated. Might be \n> much faster for large tables (but then again, it might not be).\n> \n> Give me some time (end of the week?), I'm very busy with graduating right \n> now :)\n\nGreat. Remember, we have an problem with the optimizer and referencing\ntwo tables by different names, so that may be part of the problem with\nperformance.\n\nAt this point, I would be interested in just hearing about single and\ntwo word search times.\n\nThe CLUSTER really speeded up things here.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 31 Mar 1998 10:28:29 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: indexing words" }, { "msg_contents": "Hate to bother you again, but do you think you can test speed with\nCLUSTER, and have the text indexing packaged up for /contrib by the time\nwe release 6.4, with beta August 1?\n\nWith 6.3.* closed, there is no rush on any of this.\n\n> \n> On Mon, 30 Mar 1998, Bruce Momjian wrote:\n> \n> > With the CLUSTER change, are things faster. Can you give me some times?\n> > Can you supply some source to put into contrib/?\n> \n> I did some timing after I clustered the data. However, I remember no \n> spectacular speedups. Maybe I remember wrong. I'll do the timings again \n> tonight and send them to you. I'd be happy to send sources to be put into \n> contrib, but I might need some help cleaning it up so ppl can actually \n> *use* it (an easy build, maybe some script to create the correct \n> tables/indices for a particular setup etc.)\n> \n> I also think it would be nice if there was a function that could be \n> called to build the index *after* the main table is populated. Might be \n> much faster for large tables (but then again, it might not be).\n> \n> Give me some time (end of the week?), I'm very busy with graduating right \n> now :)\n> \n> Maarten\n> \n> _____________________________________________________________________________\n> | TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n> | Department of Electrical Engineering |\n> | Computer Architecture and Digital Technique section |\n> | [email protected] |\n> -----------------------------------------------------------------------------\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Sun, 26 Apr 1998 00:05:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: indexing words" }, { "msg_contents": "On Sun, 26 Apr 1998, Bruce Momjian wrote:\n\n> Hate to bother you again, but do you think you can test speed with\n> CLUSTER, and have the text indexing packaged up for /contrib by the time\n> we release 6.4, with beta August 1?\n> \n> With 6.3.* closed, there is no rush on any of this.\n\nOh, august 1 should be no problem.\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n", "msg_date": "Sun, 26 Apr 1998 10:37:09 +0200 (MET DST)", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indexing words" } ]
[ { "msg_contents": ">\n>Here is a patch that does this. I am applying it now. Uses the new\n>macro formatting style:\n>\n\n\nIf I cvsup the new sources, wil this mean that I am in sync with you?\nBTW is it possible to do a 'cvs update' with cvsup? \nI'm sorry if this is a stupid question.\n\nThanks, with regards from Maurice.\n\n\n", "msg_date": "Tue, 31 Mar 1998 18:34:30 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] StrNCpy" }, { "msg_contents": "> \n> >\n> >Here is a patch that does this. I am applying it now. Uses the new\n> >macro formatting style:\n> >\n> \n> \n> If I cvsup the new sources, wil this mean that I am in sync with you?\n\nYes, patch already applied.\n\n> BTW is it possible to do a 'cvs update' with cvsup? \n> I'm sorry if this is a stupid question.\n\nNo, cvsup only downloads sources. You need a telnet username/password\nfrom Marc to use cvs for updates.\n\n>From src/tools/FAQ_DEV:\n\nThere are several ways to obtain the source tree. Occasional developers can\njust get the most recent source tree snapshot from ftp.postgresql.org. For\nregular developers, you can get CVSup, which is available from\nftp.postgresql.org too. CVSup allows you to download the source tree, then\noccasionally update your copy of the source tree with any new changes. Using\nCVSup, you don't have to download the entire source each time, only the\nchanged files. CVSup does not allow developers to update the source tree.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 31 Mar 1998 11:42:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] StrNCpy" } ]
[ { "msg_contents": "> 1) in the parser transformations (and/or in the optimizer), look for\n> unary minus operators on constants, and convert those node subtrees to\n> negative constant nodes.\n> \n> 2) try to do the right thing to convert types to be compatible with\n> target columns. I'm working on this topic now, but I'm planning on\n> addressing functions (first cut is done) and operators (starting now)\n> before looking at target columns. Hopefully all three areas will be\n> do-able.\n> \n> Anyone interested in looking at (1)? I think it would be a good thing to\n> have even if (2) masks the problem away, unless of course the optimizer\n> already gets rid of function calls on constants by executing them before\n> run-time...\n\nI am confused. As I can tell, these are coming in as null_expr - 1. \nWhy can't we do a check in gram.y, and if it is a null_expr - 1, replace\nto a negative constant of int or float8. The regular constant type\nconversion code will then fix this.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 31 Mar 1998 21:54:08 +6700 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PORTS] Port Bug Report: int2 negative numbers not parsed\n\tcorrectly" }, { "msg_contents": "> > I declared a column to be type \"smallint\". It works, except\n> > when I attempt to enter a negative number at which time the\n> > parser complains about its 'type'. See example, below.\n> > mdalphin=> insert into test values (-1);\n> > ERROR: parser: attribute 'number' is of type 'int2' but expression \n> > is of type 'int4'\n> This is a problem we have seen in a few places. It is caused by the\n> negative sign being handled in an unusual way. No fix known yet.\n\nThere are two ways to address this for a fix as far as I can tell:\n\n1) in the parser transformations (and/or in the optimizer), look for\nunary minus operators on constants, and convert those node subtrees to\nnegative constant nodes.\n\n2) try to do the right thing to convert types to be compatible with\ntarget columns. I'm working on this topic now, but I'm planning on\naddressing functions (first cut is done) and operators (starting now)\nbefore looking at target columns. Hopefully all three areas will be\ndo-able.\n\nAnyone interested in looking at (1)? I think it would be a good thing to\nhave even if (2) masks the problem away, unless of course the optimizer\nalready gets rid of function calls on constants by executing them before\nrun-time...\n\n - Tom\n", "msg_date": "Wed, 01 Apr 1998 02:48:02 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Port Bug Report: int2 negative numbers not parsed\n\tcorrectly" }, { "msg_contents": "> I am confused. As I can tell, these are coming in as null_expr - 1.\n\nWhat is \"null_expr - 1\"? I think that is the same thing; a node with a\nsubtraction operator and the left side set to null and the right side\nset to a constant node. That's what I meant by the unary minus on a\nconstant.\n\n> Why can't we do a check in gram.y,...\n\nWell we maybe can, but it sure is ugly. This will be spread around a\nbunch of places (everywhere there is a unary minus allowed). I already\ndid the wrong thing and brute-forced something similar into the CREATE\nSEQUENCE code in gram.y. 
Isolating it in transform_expr() or somewhere\nlike that would be much cleaner.\n", "msg_date": "Wed, 01 Apr 1998 04:24:26 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Port Bug Report: int2 negative numbers not parsed\n\tcorrectly" }, { "msg_contents": "> \n> > I am confused. As I can tell, these are coming in as null_expr - 1.\n> \n> What is \"null_expr - 1\"? I think that is the same thing; a node with a\n> subtraction operator and the left side set to null and the right side\n> set to a constant node. That's what I meant by the unary minus on a\n> constant.\n> \n> > Why can't we do a check in gram.y,...\n> \n> Well we maybe can, but it sure is ugly. This will be spread around a\n> bunch of places (everywhere there is a unary minus allowed). I already\n> did the wrong thing and brute-forced something similar into the CREATE\n> SEQUENCE code in gram.y. Isolating it in transform_expr() or somewhere\n> like that would be much cleaner.\n> \n\nBut isn't it is just one line in gram.y. That is where I was seeing it\nhappen.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 1 Apr 1998 11:02:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PORTS] Port Bug Report: int2 negative numbers not parsed\n\tcorrectly" }, { "msg_contents": "> > Well we maybe can, but it sure is ugly. This will be spread around a\n> > bunch of places (everywhere there is a unary minus allowed). I \n> > already did the wrong thing and brute-forced something similar into \n> > the CREATE SEQUENCE code in gram.y. Isolating it in transform_expr() \n> > or somewhere like that would be much cleaner.\n> But isn't it is just one line in gram.y. That is where I was seeing \n> it happen.\n\ngolem$ grep UMINUS gram.y\n%right UMINUS\n | '-' default_expr %prec UMINUS\n | '-' constraint_expr %prec UMINUS\n | '-' a_expr %prec UMINUS\n | '-' b_expr %prec UMINUS\n | '-' position_expr %prec UMINUS\n\nSo at least 5 different places, perhaps more when you get into it :(\n\n - Tom\n", "msg_date": "Wed, 01 Apr 1998 16:23:21 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PORTS] Port Bug Report: int2 negative numbers not\n\tparsed correctly" }, { "msg_contents": "> \n> > > Well we maybe can, but it sure is ugly. This will be spread around a\n> > > bunch of places (everywhere there is a unary minus allowed). I \n> > > already did the wrong thing and brute-forced something similar into \n> > > the CREATE SEQUENCE code in gram.y. Isolating it in transform_expr() \n> > > or somewhere like that would be much cleaner.\n> > But isn't it is just one line in gram.y. That is where I was seeing \n> > it happen.\n> \n> golem$ grep UMINUS gram.y\n> %right UMINUS\n> | '-' default_expr %prec UMINUS\n> | '-' constraint_expr %prec UMINUS\n> | '-' a_expr %prec UMINUS\n> | '-' b_expr %prec UMINUS\n> | '-' position_expr %prec UMINUS\n> \n> So at least 5 different places, perhaps more when you get into it :(\n\nOK, let's take a look at it. The only one I have seen a problem with\nis:\n\n\t| '-' a_expr %prec UMINUS\n\nBut let's look at the others. 
Default_expr has it:\n\t\n\tdefault_expr: AexprConst\n\t { $$ = makeConstantList((A_Const *) $1); }\n\t | NULL_P\n\t { $$ = lcons( makeString(\"NULL\"), NIL); }\n\t | '-' default_expr %prec UMINUS\n\t { $$ = lcons( makeString( \"-\"), $2); }\n\nBut I don't understand why it is there. Doesn't AexprConst handle such\na case, or do we get shift-reduce conflicts without it?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 1 Apr 1998 11:41:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [PORTS] Port Bug Report: int2 negative numbers not\n\tparsed correctly" }, { "msg_contents": "> > So at least 5 different places, perhaps more when you get into it :(\n> OK, let's take a look at it. The only one I have seen a problem with\n> is:\n> \n> | '-' a_expr %prec UMINUS\n> \n> But let's look at the others. Default_expr has it:\n> \n> default_expr: AexprConst\n> { $$ = makeConstantList((A_Const *) $1); }\n> | NULL_P\n> { $$ = lcons( makeString(\"NULL\"), NIL); }\n> | '-' default_expr %prec UMINUS\n> { $$ = lcons( makeString( \"-\"), $2); }\n> \n> But I don't understand why it is there. Doesn't AexprConst handle \n> such a case, or do we get shift-reduce conflicts without it?\n\nNo, _no_ negative numbers get through scan.l without being separated\ninto a minus sign and a positive number. This is because the scanner\ndoes not have any information about context, and cannot distinguish the\nusage of the minus, for example between the two cases \"a - b\" and \"- b\".\nSo, to handle \"a - b\" correctly, the minus sign must always be\nseparated, otherwise you get \"a (-b)\" which the grammar does not\nunderstand.\n\nThe one place which will see every node come through is in the parser\ntransformations or in the optimizer, which is why it seems that those\nmight be good places to look for this case.\n\n - Tom\n", "msg_date": "Wed, 01 Apr 1998 17:08:35 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PORTS] Port Bug Report: int2 negative numbers not\n\tparsed correctly" } ]
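To make the transformExpr() idea concrete, here is a hypothetical sketch of the fold, written against toy node structures because the real A_Expr/A_Const/Value layouts in the v6.3 tree differ in detail; every name below is illustrative:

#include <stddef.h>
#include <string.h>

typedef enum { X_INT, X_FLOAT, X_OTHER } XConstKind;

typedef struct XNode
{
	int		is_const;	/* 1 = constant, 0 = operator node */
	const char     *opname;		/* operator fields */
	struct XNode   *lexpr;
	struct XNode   *rexpr;
	XConstKind	kind;		/* constant fields */
	long		ival;
	double		dval;
} XNode;

/* fold "- <numeric constant>" into a negative constant node */
static XNode *
fold_unary_minus(XNode *e)
{
	if (!e->is_const && e->lexpr == NULL && e->rexpr != NULL &&
	    strcmp(e->opname, "-") == 0 && e->rexpr->is_const)
	{
		XNode *c = e->rexpr;

		if (c->kind == X_INT)
		{
			c->ival = -c->ival;
			return c;	/* the operator node is dropped entirely */
		}
		if (c->kind == X_FLOAT)
		{
			c->dval = -c->dval;
			return c;
		}
	}
	return e;			/* anything else passes through unchanged */
}

Because the transformation stage sees every expression node, one call like this covers all five UMINUS productions at once, and '-1' reaches the type-conversion code as an ordinary constant that can be coerced to int2.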
[ { "msg_contents": "Hi,\n\nAfter applying the following patch there remain two\nprobable buffer overruns detected by Electric Fence during\nthe regression test. \nI'll try find out what causes the remain two ones.\n\nThis patch also corrects a typo in smgr.c.\n\nWith regards from Maurice.\n\n--------- Patch starts here ----------\n*** ./backend/catalog/pg_aggregate.c.orig\tWed Apr 1 10:10:47 1998\n--- ./backend/catalog/pg_aggregate.c\tWed Apr 1 10:22:28 1998\n***************\n*** 78,83 ****\n--- 78,84 ----\n \tOid\t\t\txret2 = InvalidOid;\n \tOid\t\t\tfret = InvalidOid;\n \tOid\t\t\tfnArgs[8];\n+ \tNameData\t\taname;\n \tTupleDesc\ttupDesc;\n \n \tMemSet(fnArgs, 0, 8 * sizeof(Oid));\n***************\n*** 202,208 ****\n \t\tnulls[i] = ' ';\n \t\tvalues[i] = (Datum) NULL;\n \t}\n! \tvalues[Anum_pg_aggregate_aggname - 1] = PointerGetDatum(aggName);\n \tvalues[Anum_pg_aggregate_aggowner - 1] =\n \t\tInt32GetDatum(GetUserId());\n \tvalues[Anum_pg_aggregate_aggtransfn1 - 1] =\n--- 203,210 ----\n \t\tnulls[i] = ' ';\n \t\tvalues[i] = (Datum) NULL;\n \t}\n! \tnamestrcpy(&aname, aggName);\n! \tvalues[Anum_pg_aggregate_aggname - 1] = NameGetDatum(&aname);\n \tvalues[Anum_pg_aggregate_aggowner - 1] =\n \t\tInt32GetDatum(GetUserId());\n \tvalues[Anum_pg_aggregate_aggtransfn1 - 1] =\n*** ./backend/catalog/pg_operator.c.orig\tWed Apr 1 10:10:47 1998\n--- ./backend/catalog/pg_operator.c\tWed Apr 1 10:49:30 1998\n***************\n*** 19,24 ****\n--- 19,25 ----\n #include <catalog/pg_proc.h>\n #include <utils/syscache.h>\n #include <utils/tqual.h>\n+ #include <utils/builtins.h>\n #include <access/heapam.h>\n #include <catalog/catname.h>\n #include <catalog/pg_operator.h>\n***************\n*** 229,234 ****\n--- 230,236 ----\n \tDatum\t\tvalues[Natts_pg_operator];\n \tchar\t\tnulls[Natts_pg_operator];\n \tOid\t\t\toperatorObjectId;\n+ \tNameData\toname;\n \tTupleDesc\ttupDesc;\n \n \t/* ----------------\n***************\n*** 246,252 ****\n \t * ----------------\n \t */\n \ti = 0;\n! \tvalues[i++] = PointerGetDatum(operatorName);\n \tvalues[i++] = Int32GetDatum(GetUserId());\n \tvalues[i++] = (Datum) (uint16) 0;\n \n--- 248,255 ----\n \t * ----------------\n \t */\n \ti = 0;\n! \tnamestrcpy(&oname, operatorName);\n! \tvalues[i++] = NameGetDatum(&oname);\n \tvalues[i++] = Int32GetDatum(GetUserId());\n \tvalues[i++] = (Datum) (uint16) 0;\n \n***************\n*** 474,479 ****\n--- 477,483 ----\n \tchar\t *name[4];\n \tOid\t\t\ttypeId[8];\n \tint\t\t\tnargs;\n+ \tNameData\t\toname;\n \tTupleDesc\ttupDesc;\n \n \tstatic ScanKeyData opKey[3] = {\n***************\n*** 608,614 ****\n \t * ----------------\n \t */\n \ti = 0;\n! \tvalues[i++] = PointerGetDatum(operatorName);\n \tvalues[i++] = Int32GetDatum(GetUserId());\n \tvalues[i++] = UInt16GetDatum(precedence);\n \tvalues[i++] = leftTypeName ? (rightTypeName ? 'b' : 'r') : 'l';\n--- 612,619 ----\n \t * ----------------\n \t */\n \ti = 0;\n! \tnamestrcpy(&oname, operatorName);\n! \tvalues[i++] = NameGetDatum(&oname);\n \tvalues[i++] = Int32GetDatum(GetUserId());\n \tvalues[i++] = UInt16GetDatum(precedence);\n \tvalues[i++] = leftTypeName ? (rightTypeName ? 'b' : 'r') : 'l';\n*** ./backend/catalog/pg_proc.c.orig\tWed Apr 1 10:10:47 1998\n--- ./backend/catalog/pg_proc.c\tWed Apr 1 10:26:58 1998\n***************\n*** 71,76 ****\n--- 71,77 ----\n \tOid\t\t\trelid;\n \tOid\t\t\ttoid;\n \ttext\t *prosrctext;\n+ \tNameData\tprocname;\n \tTupleDesc\ttupDesc;\n \n \t/* ----------------\n***************\n*** 229,235 ****\n \t}\n \n \ti = 0;\n! 
\tvalues[i++] = PointerGetDatum(procedureName);\n \tvalues[i++] = Int32GetDatum(GetUserId());\n \tvalues[i++] = ObjectIdGetDatum(languageObjectId);\n \n--- 230,237 ----\n \t}\n \n \ti = 0;\n! \tnamestrcpy(&procname, procedureName);\n! \tvalues[i++] = NameGetDatum(&procname);\n \tvalues[i++] = Int32GetDatum(GetUserId());\n \tvalues[i++] = ObjectIdGetDatum(languageObjectId);\n \n*** ./backend/catalog/pg_type.c.orig\tWed Apr 1 10:10:47 1998\n--- ./backend/catalog/pg_type.c\tWed Apr 1 10:50:09 1998\n***************\n*** 160,165 ****\n--- 160,166 ----\n \tDatum\t\tvalues[Natts_pg_type];\n \tchar\t\tnulls[Natts_pg_type];\n \tOid\t\t\ttypoid;\n+ \tNameData\tname;\n \tTupleDesc\ttupDesc;\n \n \t/* ----------------\n***************\n*** 177,183 ****\n \t * ----------------\n \t */\n \ti = 0;\n! \tvalues[i++] = (Datum) typeName;\t\t/* 1 */\n \tvalues[i++] = (Datum) InvalidOid;\t/* 2 */\n \tvalues[i++] = (Datum) (int16) 0;\t/* 3 */\n \tvalues[i++] = (Datum) (int16) 0;\t/* 4 */\n--- 178,185 ----\n \t * ----------------\n \t */\n \ti = 0;\n! \tnamestrcpy(&name, typeName);\n! \tvalues[i++] = NameGetDatum(&name);\t\t/* 1 */\n \tvalues[i++] = (Datum) InvalidOid;\t/* 2 */\n \tvalues[i++] = (Datum) (int16) 0;\t/* 3 */\n \tvalues[i++] = (Datum) (int16) 0;\t/* 4 */\n***************\n*** 315,325 ****\n \tchar\t *procs[4];\n \tbool\t\tdefined;\n \tItemPointerData itemPointerData;\n \tTupleDesc\ttupDesc;\n- \n \tOid\t\t\targList[8];\n- \tNameData\t \tname;\n- \n \n \tstatic ScanKeyData typeKey[1] = {\n \t\t{0, Anum_pg_type_typname, NameEqualRegProcedure}\n--- 317,325 ----\n \tchar\t *procs[4];\n \tbool\t\tdefined;\n \tItemPointerData itemPointerData;\n+ \tNameData\tname;\n \tTupleDesc\ttupDesc;\n \tOid\t\t\targList[8];\n \n \tstatic ScanKeyData typeKey[1] = {\n \t\t{0, Anum_pg_type_typname, NameEqualRegProcedure}\n*** ./backend/storage/smgr/smgr.c.orig\tWed Apr 1 10:10:52 1998\n--- ./backend/storage/smgr/smgr.c\tWed Apr 1 10:33:01 1998\n***************\n*** 132,138 ****\n \tint\t\t\tfd;\n \n \tif ((fd = (*(smgrsw[which].smgr_create)) (reln)) < 0)\n! \t\telog(ERROR, \"cannot open %s\",\n \t\t\t &(reln->rd_rel->relname.data[0]));\n \n \treturn (fd);\n--- 132,138 ----\n \tint\t\t\tfd;\n \n \tif ((fd = (*(smgrsw[which].smgr_create)) (reln)) < 0)\n! \t\telog(ERROR, \"cannot create %s\",\n \t\t\t &(reln->rd_rel->relname.data[0]));\n \n \treturn (fd);\n", "msg_date": "Wed, 1 Apr 1998 15:48:16 +0200", "msg_from": "Maurice Gittens <[email protected]>", "msg_from_op": true, "msg_subject": "patch for some more overruns; two to go" } ]
[ { "msg_contents": "Hi,\n\nSorry to bring bad news but it seems that the postgresql daemon is leaking\nmemory when building indices.\n(When using electric fence it takes long enough to notice -:))\n\nAnybody want to recommend a good freeware tool which helps to find memory\nleaks?\n\nThanks with regards from Maurice.\n\n\n", "msg_date": "Wed, 1 Apr 1998 17:01:41 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Memory leak while creating indices?" }, { "msg_contents": "> \n> Hi,\n> \n> Sorry to bring bad news but it seems that the postgresql daemon is leaking\n> memory when building indices.\n> (When using electric fence it takes long enough to notice -:))\n> \n> Anybody want to recommend a good freeware tool which helps to find memory\n> leaks?\n\nYea, as I reported earlier, it is probably all from the same place. I\nused a pginterface C file do show it. I think we need Purify.\n\n\n---------------------------------------------------------------------------\n\n/*\n * pgnulltest.c\n *\n*/\n\n#include <stdio.h>\n#include <signal.h>\n#include <time.h>\n#include <halt.h>\n#include <postgres.h>\n#include <libpq-fe.h>\n#include <pginterface.h>\n\nint main(int argc, char **argv)\n{\n\tchar query[4000];\n\t\n\tif (argc != 2)\n\t\thalt(\"Usage: %s database\\n\",argv[0]);\n\n\tconnectdb(argv[1],NULL,NULL,NULL,NULL);\n\n\twhile (1)\n\t{\n\t\tsprintf(query,\"select * from test;\");\n\t\tdoquery(query);\n\t\tsprintf(query,\"update test set x=3;\");\n\t\tdoquery(query);\n\t}\n\tdisconnectdb();\n\treturn 0;\n}\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 1 Apr 1998 11:10:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Memory leak while creating indices?" } ]
[ { "msg_contents": "Help!!!", "msg_date": "Wed, 01 Apr 1998 08:28:58 -0800", "msg_from": "Sajiah Chmaitelli <[email protected]>", "msg_from_op": true, "msg_subject": "ILLUSTRA DBA CONSULTANT NEEDED ASAP!" } ]
[ { "msg_contents": ">> > bbrmdc=> create index mdc1_simu_pk on mdc1_simu using btree (\n>> > bbrmdc-> runnum char_ops );\n>> ^^^^^^^^\n>> bpchar_ops must be used !!!\n>> char_ops is for 'char' data type, not for 'char(N)'.\n>> But it's much better to DON'T USE ANY XXX_ops at all (and USING btree too -\n>> btree is default) - both features aren't standard and useless in your case.\n>> \n\nOkay, I destroyed the database and recreated it. I then created\nthe following tables and indices;\n\nbbrmdc=> create table mdc1_simu (\nbbrmdc-> runnum char(6) not null,\nbbrmdc-> version varchar(10) not null,\nbbrmdc-> jobgrp varchar(8) not null,\nbbrmdc-> bldrnum int4 not null,\nbbrmdc-> status text,\nbbrmdc-> cpusecs int4,\nbbrmdc-> outsize int4,\nbbrmdc-> machine text,\nbbrmdc-> location text,\nbbrmdc-> jobdate abstime,\nbbrmdc-> who text,\nbbrmdc-> note text );\nCREATE\nbbrmdc=> create table mdc1_runs (\nbbrmdc-> runnum char(6) not null,\nbbrmdc-> runtype text,\nbbrmdc-> nevents int4,\nbbrmdc-> who text,\nbbrmdc-> note text );\nCREATE\nbbrmdc=> create unique index mdc1_runs_pk on mdc1_runs ( runnum );\nCREATE\nbbrmdc=> create index mdc1_simu_pk on mdc1_simu ( runnum );\nCREATE\nbbrmdc=> create index mdc1_simu_ver on mdc1_simu ( version );\nCREATE\n\nI then filled the tables from my Perl DBI script copying Oracle\ndata to Postgres (same as before). This time, it worked without\nfailing do the index FATAL. \n\nI immediatetly tried my subselect.\n\nbbrmdc=> select distinct runtype from mdc1_runs where \nbbrmdc-> runnum in (select runnum from mdc1_simu where version = '3.1.0');\n\nAfter a couple of minutes, I killed the postgres process. I quit my \npsql and then reconnectd. I tried a simple select and it hung too. \nKilled it and reconnected. I dropped the three indices and tried a \nvacuum. It also hung forever. I killed the postgres process, \nrestarted the postmaster, deleted the pg_vlock file, and retried the \nvacuum. It worked. A simple select then works too. \n\nI recreated the indices exactly as above, and selects still\nwork. The subselect also worked too and took about 12 seconds.\n\nI destroyed the database and started over. This time, after \ntransfering the data, I first tried a simple select. It worked\nfine. Then the subselect. It hung again. Killed and reconnected.\nA simple select also hangs. Killed it, restarted the postmaster,\nreconnected and did a vacuum. Now both simple select and\nsubselect work fine.\n\nAny clues?\n\npr\n\n--\n_________________________________________________________________________\nPaul Raines [email protected] 650-926-2369\nStanford Linear Accelerator BABAR Group Software Team \nhttp://www.slac.stanford.edu/~raines/index.html <======== PGP public key\n\n\n", "msg_date": "Wed, 01 Apr 1998 12:08:53 -0800 (PST)", "msg_from": "Paul Raines <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Let's talk up 6.3" }, { "msg_contents": "Paul Raines wrote:\n> \n> Okay, I destroyed the database and recreated it. I then created\n> the following tables and indices;\n> \n...\n> \n> I then filled the tables from my Perl DBI script copying Oracle\n> data to Postgres (same as before). This time, it worked without\n> failing do the index FATAL.\n> \n> I immediatetly tried my subselect.\n> \n> bbrmdc=> select distinct runtype from mdc1_runs where\n> bbrmdc-> runnum in (select runnum from mdc1_simu where version = '3.1.0');\n> \n> After a couple of minutes, I killed the postgres process. I quit my\n> psql and then reconnectd. 
I tried a simple select and it hung too.\n> Killed it and reconnected. I dropped the three indices and tried a\n> vacuum. It also hung forever. I killed the postgres process,\n> restarted the postmaster, deleted the pg_vlock file, and retried the\n> vacuum. It worked. A simple select then works too.\n\nFirst, I assume that you didn't run vacuum after filling tables and so\nindices were not used: to get index scans you have to either create\nindices _after_ (not before) filling tables or vacuum tables _after_\nfilling.\n\nSecond, after killing server process it's better to restart postmaster!\nKilling is abnormal thing - some locks/spinlocks were not released\nand so your next connection hung.\n\n> \n> I recreated the indices exactly as above, and selects still\n> work. The subselect also worked too and took about 12 seconds.\n\nWhat's Oracle time ?\n\nVadim\n", "msg_date": "Thu, 02 Apr 1998 09:16:58 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Let's talk up 6.3" } ]
[ { "msg_contents": "Chris Albertson wrote:\n> \n> This is just one example. ++Every time++ I do a SELECT where\n> the expected result is a large number of rows I get a\n> failure of some type.\n> \n> testdb=> select count(*) from tassm16\n> testdb-> where 15.5 < b_mag::float4 - (0.375 * (b_mag -\n> r_mag)::float4);\n> FATAL 1: palloc failure: memory exhausted\n> testdb=>\n> \n> I can make Postgresql 6.3 fail every time. Just do a SELECT\n> where the number of rows returned is > a few million. The\n\n0.375 above is float8 and so server uses two float8 funcs to\ncalculate right op of '<' ==> 2 * palloc(8) for each row.\npalloc(8) eats more than 8 bytes of memmory (~ 24): 2 * 24 = 48,\n48 * 1_million_of_rows = 48 Mb.\n\nThis is problem of all data types passed by ref !!!\nAnd this is old problem.\n\nVadim\n", "msg_date": "Thu, 02 Apr 1998 09:37:27 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Postgresql dies with \"FATAL 1: palloc failure: memory\n\texhausted\"" } ]
[ { "msg_contents": "\nHello,\n\nIt took longer than I expected to build packages for v6.3.1 for RH Linux,\nmainly because I had problems compiling the beast.\n\nI am attaching a patch that is mostly Linux psecific, but also fixes\n(almost) all problems currently in the code regarding the use of\n$(DESTDIR) to have files installed in a different root directory.\n\nOne problem I could not figure out is that postgresql will not compile if\none already have the header files from a previous version installed. I\nthink there is a problem with not passing all the local include files as\ncompiler arguments in the libpq makefile, but I don't have time to look at\nit.\n\nPackages are available from ftp://ftp.redhat.com/home/gafton/pgsql.\nFeedback much appreciated.\n\nAlso, I am unable to monitor this mailing list anymore due to increased\nworkload, so please report any RH-Linux related problems regarding this\npackge directly to me.\n\nBest wishes,\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat Software, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n", "msg_date": "Wed, 1 Apr 1998 21:08:49 -0500 (EST)", "msg_from": "Cristian Gafton <[email protected]>", "msg_from_op": true, "msg_subject": "6.3.1 RH Linux 5.0 packages" }, { "msg_contents": "> Packages are available from ftp://ftp.redhat.com/home/gafton/pgsql.\n> Feedback much appreciated.\n\nI'll try an upgrade on my RH5.0 machine at work tomorrow.\n\n> Also, I am unable to monitor this mailing list anymore due to \n> increased workload, so please report any RH-Linux related problems \n> regarding this packge directly to me.\n\nWe'll be sure to do so. btw, would it be of any help to have others look\nat the rpm packaging and give you a hand with that?\n\nThanks for your support. We were gratified to find Postgres included in\nthe RH5.0 distribution and would like to help it stay solid for you.\n\n - Tom\n", "msg_date": "Thu, 02 Apr 1998 04:39:13 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.3.1 RH Linux 5.0 packages" }, { "msg_contents": "On Thu, 2 Apr 1998, Thomas G. Lockhart wrote:\n\n> > Also, I am unable to monitor this mailing list anymore due to \n> > increased workload, so please report any RH-Linux related problems \n> > regarding this packge directly to me.\n> \n> We'll be sure to do so. btw, would it be of any help to have others look\n> at the rpm packaging and give you a hand with that?\n\nOf course, suggestions, ideas, what did I miss, etc. Just please send me\ncomments or suggestions, because I am _really_ bad at picking up\nimprovements mailed to me like: \"I've built a new package at <address>. If\nyou take a look at it you'll find some nifty improvements to the package\".\nOne thing I can say about such a message: I won't take a look.\n:-)\n\nBest wishes,\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat Software, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n\n\n", "msg_date": "Wed, 1 Apr 1998 23:42:23 -0500 (EST)", "msg_from": "Cristian Gafton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.3.1 RH Linux 5.0 packages" } ]
[ { "msg_contents": "Doh! I am tired. Here is the actual patch. I'd appreciate if you would\nconsider integrating it in the main distribution.\n\nThanks,\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat Software, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.", "msg_date": "Wed, 1 Apr 1998 21:10:43 -0500 (EST)", "msg_from": "Cristian Gafton <[email protected]>", "msg_from_op": true, "msg_subject": "RH Linux v6.3.1 patch" }, { "msg_contents": "> \n> Doh! I am tired. Here is the actual patch. I'd appreciate if you would\n> consider integrating it in the main distribution.\n> \n> Thanks,\n> \n\nI have gone through the patch. First, the DESTDIR was certainly\nmis-configured. It was used in some places, but never defined. I\ndecided that instead of making it consistent everywhere, I would remove\nit. Not sure what it added in addition to the existing defines.\n\nThe other defines were very Linux-specific, and because I don't run\nLinux here, did not feel I could apply them.\n\nI will send the message to the hackers list, and see if anyone can\ncomment on the Linux changes.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 5 Apr 1998 18:18:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RH Linux v6.3.1 patch" }, { "msg_contents": "On Sun, 5 Apr 1998, Bruce Momjian wrote:\n\n> I have gone through the patch. First, the DESTDIR was certainly\n> mis-configured. It was used in some places, but never defined. I\n> decided that instead of making it consistent everywhere, I would remove\n> it. Not sure what it added in addition to the existing defines.\n\nNOOOOOOOOOOOOOOOOOOOO !!!!!\n\nIf you want us to REMOVE postgresql from our distribution, then get rid of\nDESTDIR. DESTDIR is very cool, because we can make:\n\n./configure\nmake\nmake DESTDIR=/tmp/postgres-root install\n\nand then list all the files in /tmp/postgres-root as being part of the\npackage. You have to understand that for some people clobbering their\nbuild machines with tons of un-needed things is a nightmare.\n\nIn this respect, postgresql made me a pleasant surprise because of it's\n(partial) support for DESTDIR. That what made me belive that we can ship a\npostgresql package. \n\nIf you remove it, I won't have time to hack it back, so ... Please don't.\n\n> The other defines were very Linux-specific, and because I don't run\n> Linux here, did not feel I could apply them.\n\nYou mean the patches for things like:\n\nifeq($(platform), linux)\n...\nendif\n\nYes, those are Linux specifiv and I _think_ you could apply them.\n\n> I will send the message to the hackers list, and see if anyone can\n> comment on the Linux changes.\n\nPlease, please don't remove DESTDIR support. Enhance it.\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat Software, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. 
It's just selective about who its friends are.\n\n", "msg_date": "Sun, 5 Apr 1998 18:29:12 -0400 (EDT)", "msg_from": "Cristian Gafton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] RH Linux v6.3.1 patch" }, { "msg_contents": "> \n> On Sun, 5 Apr 1998, Bruce Momjian wrote:\n> \n> > I have gone through the patch. First, the DESTDIR was certainly\n> > mis-configured. It was used in some places, but never defined. I\n> > decided that instead of making it consistent everywhere, I would remove\n> > it. Not sure what it added in addition to the existing defines.\n> \n> NOOOOOOOOOOOOOOOOOOOO !!!!!\n> \n> If you want us to REMOVE postgresql from our distribution, then get rid of\n> DESTDIR. DESTDIR is very cool, because we can make:\n> \n> ./configure\n> make\n> make DESTDIR=/tmp/postgres-root install\n> \n> and then list all the files in /tmp/postgres-root as being part of the\n> package. You have to understand that for some people clobbering their\n> build machines with tons of unneeded things is a nightmare.\n> \n> In this respect, postgresql gave me a pleasant surprise because of its\n> (partial) support for DESTDIR. That is what made me believe that we can ship a\n> postgresql package. \n> \n> If you remove it, I won't have time to hack it back, so ... Please don't.\n> \n> > The other defines were very Linux-specific, and because I don't run\n> > Linux here, did not feel I could apply them.\n> \n> You mean the patches for things like:\n> \n> ifeq($(platform), linux)\n> ...\n> endif\n> \n> Yes, those are Linux specific and I _think_ you could apply them.\n> \n> > I will send the message to the hackers list, and see if anyone can\n> > comment on the Linux changes.\n> \n> Please, please don't remove DESTDIR support. Enhance it.\n\nOK, old DESTDIR re-installed, and patch applied. I skipped part of the\npatch. One part commented out the perl compile in interfaces/Makefile. \nAnother used mkdir -p, which I am not sure is supported on all\nplatforms. I also skipped the configure.in changes, because that has\nbeen cleaned up recently, and your changes were very unclear to me.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 5 Apr 1998 21:35:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RH Linux v6.3.1 patch" } ]
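For anyone who, like Bruce, is unsure what $(DESTDIR) adds beyond the existing install-path defines: it is a staging prefix, not another install location. A minimal sketch of the convention follows (the variable names INSTALL, BINDIR, and LIBDIR are assumed here for illustration and may not match the project's Makefile.global of the time exactly; recipe lines must be indented with tabs):

    # sketch only, not the actual PostgreSQL makefiles
    DESTDIR=

    install: all
            $(INSTALL) postgres $(DESTDIR)$(BINDIR)/postgres
            $(INSTALL) libpq.a  $(DESTDIR)$(LIBDIR)/libpq.a

Because DESTDIR defaults to empty, a plain "make install" behaves exactly as before, while "make DESTDIR=/tmp/postgres-root install" stages the whole tree under a scratch root whose contents can be listed as an RPM manifest, which is precisely the packaging use case Cristian describes above.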
[ { "msg_contents": "We have several open issues with 6.3.1, which will probably have to be\naddressed with a mega-patch, separate patches, or a minor release.\n\nThey are:\n\n\tindexes not used that were used in 6.2\n\tmemory leak in backend when run on simple queries\n\tnegative sign causing problems in various areas, like float4 & shortint\n\tconfigure assert checking is reversed\n\tUNION crashes on ORDER BY or DISTINCT(already fixed in source tree)\n\nWe also have HAVING in the source tree.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 1 Apr 1998 21:47:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Open 6.3.1 issues" }, { "msg_contents": "> We have several open issues with 6.3.1, which will probably have to be\n> addressed with a mega-patch, separate patches, or a minor release.\n\nI'm wondering what is causing the reported failures in the numerology\nregression test, with Postgres having difficulty comparing a float8 to\nan int4. It used to work, and it may have been tweaked just slightly so\nthat is a bit broken. Perhaps if we identify that we will find that\nother things (like the \"negative sign\" problem, which afaik has been\nthis way forever) will fix themselves, at least for now.\n\n - Tom\n", "msg_date": "Thu, 02 Apr 1998 04:43:57 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" }, { "msg_contents": "> Perhaps if we identify that we will find that\n> other things (like the \"negative sign\" problem, which afaik has been\n> this way forever) will fix themselves, at least for now.\n\nI did make a small change to scan.l to fix problems with negative\nnumeric arguments to the CREATE SEQUENCE command. Does someone want to\ntry the scan.l and/or scan.c from v6.3 and see if it fixes anything?\n_Might_ have an effect on the conversion from int4 to an int2 column as\nreported recently.\n\n - Tom\n", "msg_date": "Thu, 02 Apr 1998 05:14:16 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> We have several open issues with 6.3.1, which will probably have to be\n> addressed with a mega-patch, separate patches, or a minor release.\n> \n> They are:\n> \n> indexes not used that were used in 6.2\n\nJust fixed and CVSed.\nIntroduced 1997/12/21:\nRemove some recursion in optimizer and clean up some code there.\n\nBruce, could you check other changed places (I fixed just prune.c) ?\n\nVadim\n", "msg_date": "Thu, 02 Apr 1998 15:51:21 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > We have several open issues with 6.3.1, which will probably have to be\n> > addressed with a mega-patch, separate patches, or a minor release.\n> > \n> > They are:\n> > \n> > indexes not used that were used in 6.2\n> \n> Just fixed and CVSed.\n> Introduced 1997/12/21:\n> Remove some recursion in optimizer and clean up some code there.\n> \n> Bruce, could you check other changed places (I fixed just prune.c) ?\n> \n> Vadim\n> \n\nSure. That was really the only one where I really had trouble. The\nothers were very clean changes. 
I had already fixed one bug in my new\ncode in that place before the 6.3 release.\n\nThe old code had a double-exponential growth search path that was\nunnecessary.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 2 Apr 1998 10:26:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" }, { "msg_contents": "OK, we have most of the open items fixed. Marc, can you check on number\n4, and Thomas, please apply your patch for item 3. We can then package\na patch and close 6.3.*.\n\n\n> \tindexes not used that were used in 6.2(fixed)\n> \tmemory leak in backend when run on simple queries(fixed)\n> \tnegative sign causing problems in various areas, like float4 & shortint\n> \tconfigure assert checking is reversed\n> \tUNION crashes on ORDER BY or DISTINCT(already fixed in source tree)\n \n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 5 Apr 1998 00:47:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" }, { "msg_contents": "> OK, we have most of the open items fixed. Marc, can you check on number\n> 4, and Thomas, please apply your patch for item 3. We can then package\n> a patch and close 6.3.*.\n> \n> > indexes not used that were used in 6.2(fixed)\n> > memory leak in backend when run on simple queries(fixed)\n> > negative sign causing problems in various areas, like float4 & shortint\n> > configure assert checking is reversed\n> > UNION crashes on ORDER BY or DISTINCT(already fixed in source tree)\n\nHow do we want to package the patches? Shall we assemble the relevant\npatches as we did p1-p7 for v6.2.1? 
There were reports of problems or\n> difficulties from some people regarding the 6.3->6.3.1 patch, which is\n> probably pretty complex. Also, there are some changes in the source\n> tree, like the char2-16 removal, which should not appear until v6.4.\n> \n> So, I'd recommend that we take the individual patches fixing each\n> problem, apply them to a clean v6.3.1, run regression tests (since those\n> apparently broke in v6.3.1) and then release that set of patches at one\n> time.\n\nI would think we are safer by releasing a new diff. The char2-16\nchanges are the only ones I know of that should not have been applied\n(by me!), so we can back them out. Just seems it is too easy to miss\nsome part of the patch.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 5 Apr 1998 10:30:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> I would think we are safer by releasing a new diff.\n\n...as in: a new minor version, right? There's no good reason not to\nbump it to 6.3.2, and if you do that, it becomes easier for people who\nreport problems and what not to specify exactly what they're running.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "05 Apr 1998 17:41:40 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" }, { "msg_contents": "> > > > indexes not used that were used in 6.2(fixed)\n> > > > memory leak in backend when run on simple queries(fixed)\n> > > > negative sign causing problems in various areas\n> > > > configure assert checking is reversed\n> > > > UNION crashes on ORDER BY or DISTINCT\n> I would think we are safer by releasing a new diff. The char2-16\n> changes are the only ones I know of that should not have been applied\n> (by me!), so we can back them out. Just seems it is too easy to miss\n> some part of the patch.\n\nWell, we have the other side of the problem to worry about too: that\nwith changes in the source tree, there may be unanticipated interactions\nwith other patches when we are really trying to fix only 5 specific\nproblems.\n\nI would like to do a test with specific patches on a clean v6.3.1\ninstallation, and then we can compare the patches from my test with\npatches from the CVS extraction. I'll isolate my \"negative sign\" fixes\n(which I haven't yet committed to the source tree, but which I think\njust need a reversion of scan.l/scan.c to the v6.3 release).\n\nCan you (re)send me the patches for these others? I still have the\n\"memory leak\" patches, but can't remember who posted the \"index\" and\n\"UNION\" patches (were they all yours Bruce?? Probably gone from my mail\nanyway).\n\n - Tom\n", "msg_date": "Sun, 05 Apr 1998 17:06:18 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> \n> > I would think we are safer by releasing a new diff.\n> \n> ...as in: a new minor version, right? 
There's no good reason not to\n> bump it to 6.3.2, and if you do that, it becomes easier for people who\n> report problems and what not to specify exactly what they're running.\n\nExactly. We also have more fixes. By the time you package up all the\ndiffs and test them, might as well take a new snapshot of the current\nsource.\n\nAlso, it allows people to test the current tree and make changes until\nthe final patch release.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 5 Apr 1998 15:57:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" }, { "msg_contents": "> \n> > > > > indexes not used that were used in 6.2(fixed)\n> > > > > memory leak in backend when run on simple queries(fixed)\n> > > > > negative sign causing problems in various areas\n> > > > > configure assert checking is reversed\n> > > > > UNION crashes on ORDER BY or DISTINCT\n> > I would think we are safer by releasing a new diff. The char2-16\n> > changes are the only ones I know of that should not have been applied\n> > (by me!), so we can back them out. Just seems it is too easy to miss\n> > some part of the patch.\n> \n> Well, we have the other side of the problem to worry about too: that\n> with changes in the source tree, there may be unanticipated interactions\n> with other patches when we are really trying to fix only 5 specific\n> problems.\n> \n> I would like to do a test with specific patches on a clean v6.3.1\n> installation, and then we can compare the patches from my test with\n> patches from the CVS extraction. I'll isolate my \"negative sign\" fixes\n> (which I haven't yet committed to the source tree, but which I think\n> just need a reversion of scan.l/scan.c to the v6.3 release).\n> \n> Can you (re)send me the patches for these others? I still have the\n> \"memory leak\" patches, but can't remember who posted the \"index\" and\n> \"UNION\" patches (were they all yours Bruce?? Probably gone from my mail\n> anyway).\n\nVadim did the index one, and I think I have a copy. The UNION was\nseveral patches by the time I was happy with it, so I would have to do a\ndiff on just the files I know I changed.\n\nNone of the current bugs are from changes made between 6.3 and 6.3.1\nexcept the negative patch, so I can't see us adding more problems.\n\nThe regression test did not show these problems either, so I have little\nconfidence that they will find new bugs we may be introducing. If we go\nwith the current tree, we can have people who use cvsup keep testing the\nsnapshot until we are happy with it.\n\nWe will probably need Marc to make this decision. It can be argued\neither way.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 5 Apr 1998 16:00:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" }, { "msg_contents": "> \n> > OK, we have most of the open items fixed. Marc, can you check on number\n> > 4, and Thomas, please apply your patch for item 3. 
We can then package\n> > a patch and close 6.3.*.\n> > \n> > > indexes not used that were used in 6.2(fixed)\n> > > memory leak in backend when run on simple queries(fixed)\n> > > negative sign causing problems in various areas, like float4 & shortint\n> > > configure assert checking is reversed\n> > > UNION crashes on ORDER BY or DISTINCT(already fixed in source tree)\n> \n> How do we want to package the patches? Shall we assemble the relevant\n> patches as we did p1-p7 for v6.2.1? There were reports of problems or\n> difficulties from some people regarding the 6.3->6.3.1 patch, which is\n\nI believe these problems were because Marc's copy of one of the geqo\nfiles was zero length. Once he fixed that, I don't remember any other\nproblems.\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 5 Apr 1998 16:06:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" }, { "msg_contents": "On Sun, 5 Apr 1998, Bruce Momjian wrote:\n\n> > \n> > > > > > indexes not used that were used in 6.2(fixed)\n> > > > > > memory leak in backend when run on simple queries(fixed)\n> > > > > > negative sign causing problems in various areas\n> > > > > > configure assert checking is reversed\n> > > > > > UNION crashes on ORDER BY or DISTINCT\n> > > I would think we are safer by releasing a new diff. The char2-16\n> > > changes are the only ones I know of that should not have been applied\n> > > (by me!), so we can back them out. Just seems it is too easy to miss\n> > > some part of the patch.\n> > \n> > Well, we have the other side of the problem to worry about too: that\n> > with changes in the source tree, there may be unanticipated interactions\n> > with other patches when we are really trying to fix only 5 specific\n> > problems.\n> > \n> > I would like to do a test with specific patches on a clean v6.3.1\n> > installation, and then we can compare the patches from my test with\n> > patches from the CVS extraction. I'll isolate my \"negative sign\" fixes\n> > (which I haven't yet committed to the source tree, but which I think\n> > just need a reversion of scan.l/scan.c to the v6.3 release).\n> > \n> > Can you (re)send me the patches for these others? I still have the\n> > \"memory leak\" patches, but can't remember who posted the \"index\" and\n> > \"UNION\" patches (were they all yours Bruce?? Probably gone from my mail\n> > anyway).\n> \n> Vadim did the index one, and I think I have a copy. The UNION was\n> several patches by the time I was happy with it, so I would have to do a\n> diff on just the files I know I changed.\n> \n> None of the current bugs are from changes made between 6.3 and 6.3.1\n> except the negative patch, so I can't see us adding more problems.\n> \n> The regression test did not show these problems either, so I have little\n> confidence that they will find new bugs we may be introducing. If we go\n> with the current tree, we can have people who use cvsup keep testing the\n> snapshot until we are happy with it.\n> \n> We will probably need Marc to make this decision. 
It can be argued\n> either way.\n\n\tI just read through all the posts on this subject (it was one busy\nweekend), and considering that we *just* put out v6.3.1, I don't really\nlike the idea of doing another v6.3.2...\n\n\tfor v6.2.1, when Vadim had a problem that he fixed against it,\nhe put out a quick patch for that individual bug...\n\n\tIMHO, v6.3.1 was a post-release release, mainly to work on and fix\nbugs based on what those who were afraid of using the beta software\nreported...anything else from that point should just be issued as\nindividual patches to address individual problems...\n\n\tThe patches listed above are great, and serve an important\nfunction, but by fixing those, what other problem(s) have been introduced?\nHas there been ample testing of the newest 'current' to release it as a\n*release*, that a lot of ppl will download and use?\n\n\tPut them up as \"official patches\" against v6.3.1, but no\nv6.3.2...not so close behind v6.3.1 :(\n\n\n Marc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 6 Apr 1998 03:06:46 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n\n> \tPut them up as \"official patches\" against v6.3.1, but no\n> v6.3.2...not so close behind v6.3.1 :(\n\nWhatever you choose to do, make sure it's well reasoned out and that\nyou will then continue to follow that scheme!\n\nYou have to decide what will be \"releases\" and what will be \"interim\".\nFor instance, you could say that 6.3 is a release, and 6.4 will be\none, while 6.3.N (for a possibly large number of sequentially\nallocated values for N) will be interim releases. On the other hand,\nyou could call 6.3.1 a release, and then go for 6.3.1.N, after the\nsame scheme. (This seems to be what Marc wants.) Next it is\nnecessary to decide whether the interim releases will be snapshots of\nthe development tree or a separate branch where important problems are\nfixed by patches that are derived from the main branch. The former is\neasier on the developers, the latter option means that two versions of\nthe code tree must be administered (if they're both in the same actual\nCVS tree, they will be different branches).\n\nIn any case, make sure that upgrading is a step by step operation,\nyielding a numerically increasing sequence of version numbers (or\nversion number plus patchlevel, if you like), so that it will always\nbe possible to say \"I run version so-and-so\", and _not_ \"well, I run\nversion so-and-so, and I've applied the patches for this and that, and\nthat other patch that I also needed\".\n\nWhile I'm writing: are these lists gatewayed to USENET somehow? It\nseems to me that my posting to PostgreSQL lists causes an increase in\nthe amount of garbage I receive from the sort of people whom I'd like\nto have some time alone with -- with a baseball bat, while they were\ntied up. Yup, people who send unsolicited commercial email. If there\nis such a gateway, I'd like to know, so I can stop sending anything to\nthese lists. (I've stopped using USENET altogether for this reason.)\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. 
--Niles Crane, \"Frasier\"\n", "msg_date": "06 Apr 1998 15:45:19 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" }, { "msg_contents": "> The Hermit Hacker <[email protected]> writes:\n> \n> > \tPut them up as \"official patches\" against v6.3.1, but no\n> > v6.3.2...not so close behind v6.3.1 :(\n> \n> Whatever you choose to do, make sure it's well reasoned out and that\n> you will then continue to follow that scheme!\n> \nAgreed, most of your arguments have been gone over a while ago.\n\nBut I disagree VERY strongly with Marc, a new set of patches should\nDEFINITELY be 6.3.2. What does it matter that this is so close behind\n6.3.1 - that simply shows that the PostgreSQL developers have \nresponded to and fixed a number of bugs in double-quick time. OK,\nso it's a little embarrassing that these weren't spotted before\n6.3 was released, but it makes things so much simpler for everyone\nto say \"I'm running 6.3.2\" rather than \"I'm running 6.3.1 patched\nwith patch xyz (which I may or may not have applied correctly...)\"\n\nIf Marc can come up with one really solid reason why a patched\nversion should NOT be released as 6.3.2, I might reconsider my\nviewpoint :-)\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Tue, 7 Apr 1998 10:54:13 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" } ]
[ { "msg_contents": "hi~ my name is kang-hae kyong. i'm korean.\n\ni have two question.\none is \n topological data structure of postgresql.\n i wonder that\n how to related each spatial object-point, polygon, path..\n in postgres.\n\nanother is..\n process of spatial operator.\n eg: 'contains' search points in polygon.\n how to search??\n how to relate between point table and polygon table.\n (the table has only set of coordinate..)\n\nwhere reference book or site ...talk to me, please.\ni hope to your reply.\nhave a nice time~ bye~\n\n\n", "msg_date": "Thu, 2 Apr 1998 15:58:15 +0900 (KST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "[Q]process for 'contains'." }, { "msg_contents": "> i have two question.\n> one is\n> topological data structure of postgresql.\n> i wonder that\n> how to related each spatial object-point, polygon, path..\n> in postgres.\n\nI'm not certain of your question. Most geometric objects consist of\ncollections of points. The exception is the circle, which consists of a\npoint and a radius. In order of complexity, the geometric objects are\npoint, lseg, line, box, path, polygon, and circle.\n\n> another is..\n> process of spatial operator.\n> eg: 'contains' search points in polygon.\n> how to search??\n> how to relate between point table and polygon table.\n> (the table has only set of coordinate..)\n\nHmm. Again not certain of your question, but here are some example\nqueries using geometric types:\n\n CREATE TABLE pointtbl (name text, location point);\n CREATE TABLE polytbl (region text, boundary polygon);\n\n -- find which region each point is in\n SELECT p.name, y.region FROM pointtbl p, polytbl y\n WHERE p.location @ y.boundary;\n\n> where reference book or site\n\nThere is a small description of each geometric type in the new User's\nGuide.\n\n - Tom\n", "msg_date": "Thu, 02 Apr 1998 13:35:09 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Q]process for 'contains'." } ]
[ { "msg_contents": " \n> Maurice:\n> Sorry to bring bad news but it seems that the postgresql daemon is leaking\n> memory when building indices.\n> (When using electric fence it takes long enough to notice -:))\n> \n> Anybody want to recommend a good freeware tool which helps to find memory\n> leaks?\n \n \n \n> Bruce:\n> Yea, as I reported earlier, it is probably all from the same place. I\n> used a pginterface C file do show it. I think we need Purify.\n \n \n\n> Vadim:\n> Chris Albertson wrote:\n> > \n> > This is just one example. ++Every time++ I do a SELECT where\n> > the expected result is a large number of rows I get a\n> > failure of some type.\n> > \n> > testdb=> select count(*) from tassm16\n> > testdb-> where 15.5 < b_mag::float4 - (0.375 * (b_mag -\n> > r_mag)::float4);\n> > FATAL 1: palloc failure: memory exhausted\n> > testdb=>\n> > \n> > I can make Postgresql 6.3 fail every time. Just do a SELECT\n> > where the number of rows returned is > a few million. The\n> \n> 0.375 above is float8 and so server uses two float8 funcs to\n> calculate right op of '<' ==> 2 * palloc(8) for each row.\n> palloc(8) eats more than 8 bytes of memmory (~ 24): 2 * 24 = 48,\n> 48 * 1_million_of_rows = 48 Mb.\n> \n> This is problem of all data types passed by ref !!!\n> And this is old problem.\n \n\nI have been doing some thought about memory allocation in postgresql\nso the above messages are very timely (at least to me).\n\nI would like to discuss the idea of replacing the current scheme of\nexplicit memory allocation and and deallocation from separate\n\"MemoryDuration\" pools with a conservative garbage collector.\n\nFor more information about garbage collection in general and about the\nspecific collector I am proposing see these urls:\n\n GC FAQ and links\n\thttp://www.iecc.com/gclist/GC-faq.html\n\n Boehm-Weiser Collector\n\thttp://reality.sgi.com/employees/boehm_mti/gc.html\n\nRationale:\n\n>From my experience, I think that this is a stone cold win for postgresql. I\nhave of course, no before and afternumbers (yet) to back this up, but the\nfollowing are I think true:\n\n- MemoryDuration is really \"poor mans garbage collection\" anyway. The idea\n being that it is either a) hard, or b) impossible to arrange to free your\n memory at the right time. Some objects need to last for a session, some\n for a transaction, some for a statement, some for a function, and some\n for other indeterminate durations. The duration mechanism is meant to\n help free things at the right time. Arguably it almost but not quite\n works.\n\n- MemoryDuration adds complexity to the code. A quick skim of the code base\n will show that duration gets switched all over the place often for not\n very obvious reasons. An example is a function allocateing working memory\n (per function duration) and returning an allocated result (per statement\n duration). This makes writing both server code and extension code much\n harder than need be.\n\n\n- MemoryDuration is very costly in terms of space:\n\na) A function returning an allocated object returns memory that cannot be\n freed until the end of the statement. But as observed, often this memory\n is really needed only \"per row\". When millions of rows are involved, this\n gets pretty piggy (they just don't make swap partitions big enough...).\n\nb) Each chunk of allocated memory has overhead of 8 to 16 bytes of\n bookkeeping overhead to link it to the MemoryContext structure that\n controls the duration. 
This is in addition to whatever overhead (often\n 8 bytes) imposed by the underlying malloc() for boundary tags or size\n and arena information. Each allocation then has 16 to 24 bytes of\n overhead.\n\n The statistics I have seen for a derivative of postgres say that 86% of\n all allocations are 64 bytes or less. 75% are 32 bytes or less, and 43%\n are less than 16 bytes. This suggests that allocator overhead about\n doubles the storage needed.\n\nc) But it is really quite a lot worse than this. As noted above, memory\n is not freed for reuse in a timely way but accumulated until the end\n of the memory duration (ie statement or transaction). This is the\n usual reason for running out of memory in a large query. Additionally,\n the division of memory into separate pools creates extra fragmentation\n which can only make matters even worse.\n\n\n- MemoryDuration is very costly in terms of speed:\n\na) Some profiling (with Quantify) I have done on a derivative of postgres\n shows that even simple queries returning only one row, like\n \"select * from table where key = <unique_value>;\",\n spend about 25% to 30% of their total execution time in malloc(), free(),\n or one of the MemoryContext routines. This probably understates the case\n since it is based on instruction counting, not actual time and the\n \"chase a big list of pointers\" operation in MemoryContextDestroy() is\n almost guaranteed to have nasty cache behavior.\n\nb) There is quite a bit of bookkeeping code all over the system (to support\n releasing memory after an error etc). This is in heavily trafficked\n paths. Since it is so widely distributed it is very hard to measure the\n slowdown, but there certainly is some. This could all be removed if we\n had garbage collection (although this in itself would be a big job).\n\n\n- MemoryDuration is very costly in terms of correctness and stability:\n\n I am not going to say much here except to point out the number of\n freed pointer errors and memory leaks that have been found in the code\n to date. And that there are new ones in every release. And that I have\n spent a good part of the last three years chasing leaks out of a\n similar system, and no, I am not done yet.\n\n The very existence of companies and products like Purify should be a\n tipoff. There is no practical way to write large C programs with\n dynamic storage without significant storage management problems.\n\n\n- MemoryDuration is very costly in terms of development cost:\n\n First, there are huge testing and debugging costs associated with\n manual storage management. 
This is basically all waste motion and a\n royal pain.\n\n Even more importantly, code written knowing that there is garbage\n collection tends to have substantially fewer source statements.\n A typical case is a routine that allocates several things and operates\n on them and checks for failures:\n\n/* non garbage collected example\n*/\ndothing(...)\n{\n if ((p1 = malloc(...)) == NULL)\n return ERR;\n ...\n if ((p2 = malloc(...)) == NULL) {\n free(p1);\n return ERR;\n }\n ...\n if ((p3 = malloc(...)) == NULL) {\n free(p2);\n free(p1);\n return ERR;\n }\n ...\n if ((p4 = malloc(...)) == NULL) {\n free(p3);\n free(p2);\n free(p1);\n return ERR;\n }\n if ((info = do_the_wild_thing(p1, p2, p3, p4)) == ERR) {\n free(p4);\n free(p3);\n free(p2);\n free(p1);\n return ERR;\n }\n ...\n free(info);\n free(p4);\n free(p3);\n free(p2);\n free(p1);\n return OK;\n}\n\n\n\n/* same example only this time with garbage collection\n */\ndothing(...)\n{\n if ((p1 = malloc(...)) == NULL)\n return ERR;\n ...\n if ((p2 = malloc(...)) == NULL)\n return ERR;\n ...\n if ((p3 = malloc(...)) == NULL)\n return ERR;\n ...\n if ((p4 = malloc(...)) == NULL)\n return ERR;\n ...\n if ((info = do_the_wild_thing(p1, p2, p3, p4)) == ERR) \n return ERR;\n ...\n return OK;\n}\n\n\nI know which one I would rather write! And it is fairly obvious which one\nis more likely to work.\n\n\nThis is probably too long for one post, so I will stop now.\n\nI would very much like comments and suggestions on this topic, especially if\nthis is something you have thought about or have experience with.\n\nUnsupported assertions to the effect \"GC is too slow ... only works with\nlisp ...\" etc are ok too, but will be eligible to win valuable prizes.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"A week of coding can sometimes save an hour of thought.\"\n", "msg_date": "Thu, 2 Apr 1998 00:44:48 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Its not my fault. Its SEG's FAULT!" }, { "msg_contents": "[email protected] (David Gould) writes:\n\n> I would like to discuss the idea of replacing the current scheme of\n> explicit memory allocation and deallocation from separate\n> \"MemoryDuration\" pools with a conservative garbage collector.\n\nYes! This is a great idea! [scrambles, grinning, to finally get to\nwork on porting the Boehm-Weiser collector properly to NetBSD/* 1.3]\n\nIt seems, from recent discussion, reasonable to assume that this will\nkill a number of bugs, reduce the memory footprint of the backend and\nquite possibly even (judging by the profiling data you quote) give a\nwelcome performance boost. Will you be doing some trial runs with\nBoehm-Weiser simply linked in as a malloc/free replacement? Is it a\nbig project to actually rip out the MemoryDuration allocator's guts to\nget rid of some of that overhead?\n\n> I know which one I would rather write! And it is fairly obvious\n> which one is more likely to work.\n\nOf course, this one [he said, grinning]:\n\n(define (do-thing)\n (with-handler my-handler\n (do-the-wild-thing)))\n\n> Unsupported assertions to the effect \"GC is too slow ... only works\n> with lisp ...\" etc are ok too, but will be eligible to win valuable\n> prizes.\n\n...like a guide to documents on the net debunking these and other\nfavorite misconceptions about garbage collection? 
You're hardly\nlikely to get too many of those assertions, though: by now, I would\nassume that it's gotten through to most programmers that the handling\nof memory in a large system can be done more reliably _and_ more\nefficiently by a good garbage collector than by a C programmer. The\nfact that the Java designers got this right (no surprise, of course,\nwith Steele at the core), should by itself have convinced many.\n\nOff-topic: as for Java, we now just have to wait for the byte-code\nengine and entire run-time support system to be rewritten in Java, so\nthat we can get a stable deployment platform for Java on the web that\nwon't crash the user's browser every other time she loads an applet!\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "02 Apr 1998 20:01:30 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Its not my fault. Its SEG's FAULT!" }, { "msg_contents": "Tom Ivar Helbekkmo writes: \n> [email protected] (David Gould) writes:\n> \n> > I would like to discuss the idea of replacing the current scheme of\n> > explicit memory allocation and and deallocation from separate\n> > \"MemoryDuration\" pools with a conservative garbage collector.\n> \n> Yes! This is a great idea! [scrambles, grinning, to finally get to\n> work on porting the Boehm-Weiser collector properly to NetBSD/* 1.3]\n\nThis is exactly the kind of thoughtful response I was looking for ;-)\n\n> It seems, from recent discussion, reasonable to assume that this will\n> kill a number of bugs, reduce the memory footprint of the backend and\n> quite possibly even (judging by the profiling data you quote) give a\n> welcome performance boost. Will you be doing some trial runs with\n> Boehm-Weiser simply linked in as a malloc/free replacement? Is it a\n> big project to actually rip out the MemoryDuration allocator's guts to\n> get rid of some of that overhead?\n\nNot too big, just redefine palloc and make the Context calls into no-ops.\nA bit more trouble to track down all the 'malloc()' calls that shouldn't\nhave ever been there (but there are quite a few).\n\n> Of course, this one [he said, grinning]:\n> \n> (define (do-thing)\n> (with-handler my-handler\n> (do-the-wild-thing)))\n\nSure, but right now we have some few hundred thousand lines of C...\n \n> > Unsupported assertions to the effect \"GC is too slow ... only works\n> > with lisp ...\" etc are ok too, but will be eligible to win valuable\n> > prizes.\n> \n> ...like a guide to documents on the net debunking these and other\n> favorite misconceptions about garbage collection? You're hardly\n\nI had meant to say \"not be eligible\" but I like your idea better. Both\nurls I posted have a bunch of very fine links to a lot of really good\ninformation.\n\n> likely to get too many of those assertions, though: by now, I would\n> assume that it's gotten through to most programmers that the handling\n> of memory in a large system can be done more reliably _and_ more\n> efficiently by a good garbage collector than by a C programmer. The\n\nIt is surprising, but this simple fact has not yet penetrated into\npopular thought. 
I have seen large organizations full of very bright\npeople spend hundreds of man years chasing leaks without ever wondering\nif there might be an alternative.\n\nFor some reason people cling to the belief that if they were just careful\nenough and only let really good programmers touch the code and carried\na lucky rabbits foot that somehow they could write leak free software.\n\nAll this in the face of the observation that no-one ever actually _writes_\nleak free software. Personally, I don't know anyone who can write leak\nfree software of any size, certainly not in a finite time.\n\n-dg\n> \nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"A week of coding can sometimes save an hour of thought.\"\n", "msg_date": "Thu, 2 Apr 1998 23:25:16 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Its not my fault. Its SEG's FAULT!" } ]
[ { "msg_contents": "Hi\n\nWhere can i download new patches ?\n\n-- \nSY, Serj\n", "msg_date": "Thu, 02 Apr 1998 13:13:43 +0400", "msg_from": "serj <[email protected]>", "msg_from_op": true, "msg_subject": "6.3.1 patches" } ]
[ { "msg_contents": ">I have been doing some thought about memory allocation in postgresql\n>so the above messages are very timely (at least to me).\n>\n>I would like to discuss the idea of replacing the current scheme of\n>explicit memory allocation and and deallocation from separate\n>\"MemoryDuration\" pools with a conservative garbage collector.\n>\n>For more information about garbage collection in general and about the\n>specific collector I am proposing see these urls:\n>\n> GC FAQ and links\n> http://www.iecc.com/gclist/GC-faq.html\n>\n> Boehm-Weiser Collector\n> http://reality.sgi.com/employees/boehm_mti/gc.html\n>\n>Rationale:\n>\n>>From my experience, I think that this is a stone cold win for postgresql.\nI\n>have of course, no before and afternumbers (yet) to back this up, but the\n>following are I think true:\n>\n>- MemoryDuration is really \"poor mans garbage collection\" anyway. The idea\n> being that it is either a) hard, or b) impossible to arrange to free your\n> memory at the right time. Some objects need to last for a session, some\n> for a transaction, some for a statement, some for a function, and some\n> for other indeterminate durations. The duration mechanism is meant to\n> help free things at the right time. Arguably it almost but not quite\n> works.\nI'm afraid I don't follow this argument.\nDifferent objects have different lifetimes. It is a matter of design to\nensure that\nit is known what the lifetime of an object is when it is created.\nWhat you are describing is similar to the problem of lifetime of objects in\na parse tree.\nIf you don't know/understand the grammar being parsed it is very difficult\ndo manage\nmemory correctly. However if you do understand the grammar it's becomes\nalmost trivial.\n\n>\n>- MemoryDuration adds complexity to the code. A quick skim of the code base\n> will show that duration gets switched all over the place often for not\n> very obvious reasons. An example is a function allocateing working memory\n> (per function duration) and returning an allocated result (per statement\n> duration). This makes writing both server code and extension code much\n> harder than need be.\n\nI'm sorry I don't agree. If the programmer doesn't know what the lifetimes\nof the objects creates should be, he probably should first find out. IMO\nthis is\none the most import parts of understanding a system.\n\nI like to see the backend of a system like postgres as a parser for some\nformal language. No, not only the SQL part but all of the system.\nThe life times of objects within a system also obey certain \"grammar rules\"\nas you have indirectly suggested above. (Sessions containing a list of\ntransactions,\nwhich contain a list of statements ... etc).\nMake these rules explicite and the \"when to free an object\" problem goes\naway.\n>\n>\n>- MemoryDuration is very costly in terms of space:\n>\n>a) A function returning an allocated object returns memory that cannot be\n> freed until the end of the statement. But as observed, often this\nmemory\n> is really needed only \"per row\". When millions of rows are involved,\nthis\n> gets pretty piggy (they just don't make swap partitions big enough...).\nThis seems to imply that we need a per row MemoryDuration pool.\n>\n>b) Each chunk of allocated memory has overhead of 8 to 16 bytes of\n> bookkeeping overhead to link it to the MemoryContext structure that\n> controls the duration. 
This is in addition to whatever overhead (often\n> 8 bytes) imposed by the underlying malloc() for boundary tags or size\n> and arena information. Each allocation then has 16 to 24 bytes of\n> overhead.\nThis overhead and more is also present in any garbage collector. The\ngarbage collector wastes CPU cycles figuring out if a memory block\nis still referenced as well.\n>\n> The statistics I have seen for a derivative of postgres say that 86% of\n> all allocations are 64 bytes or less. 75% are 32 bytes or less, and 43%\n> are less than 16 bytes. This suggests that allocator overhead about\n> doubles the storage needed.\nDid you also measure the lifetime of the objects? I would expect this to be\nrelatively short (as compared to the lifetime these objects might have with\na garbage collector.)\nSo I would expect (and this is _my_ experience) that for any but the most\ncosmetic of applications GC's use more memory.\n>\n>c) But it is really quite a lot worse than this. As noted above, memory\n> is not freed for reuse in a timely way but accumulated until the end\n> of the memory duration (ie statement or transaction). This is the\n> usual reason for running out of memory in a large query. Additionally,\n> the division of memory into separate pools creates extra fragmentation\n> which can only make matters even worse.\nDon't GC's accumulate memory until \"garbage collect\" time? At\nGC time memory pages are revisited which may have been swapped out\nages ago. This isn't good for performance. And much more is accumulated than\nin the case where palloc and pfree are used.\n>\n>\n>- MemoryDuration is very costly in terms of speed:\n>\n>a) Some profiling (with Quantify) I have done on a derivative of postgres\n> shows that even simple queries returning only one row, like\n> \"select * from table where key = <unique_value>;\",\n> spend about 25% to 30% of their total execution time in malloc(),\n> free(), or one of the MemoryContext routines. This probably understates\n> the case since it is based on instruction counting, not actual time and the\n> \"chase a big list of pointers\" operation in MemoryContextDestroy() is\n> almost guaranteed to have nasty cache behavior.\nRight, but the GC is constantly \"chasing a big list of pointers\" when it\ntries to\ndetermine if a memory block is still in use. Sure everybody gives the\nlist of pointers a different name but it all boils down to the same thing.\n\nI think it was you who suggested a good solution to this problem which would\nalso guarantee 8 byte alignment for palloced objects.\nSuch an implementation would be very efficient indeed. (According to some\nsources\n(Bjarne Stroustrup's C++ book?)).\nAlso if we preallocate big chunks of memory (as you suggested I think) we\ncan in many cases avoid \"chasing a big list of pointers\" because the\nlifetime of most objects is likely to be small for many applications.\n>\n>b) There is quite a bit of bookkeeping code all over the system (to support\n> releasing memory after an error etc). This is in heavily trafficked\n> paths. Since it is so widely distributed it is very hard to measure the\n> slowdown, but there certainly is some. This could all be removed if we\n> had garbage collection (although this in itself would be a big job).\nMy reading of the principle of procrastination is:\n\n Do now what you _must_ do anyway. 
(Considering priorities of course)\n Postpone until tomorrow all things which you are _certain_ you\n might not have to do at all.\n\nAs I see it, garbage collectors follow a principle like the following:\n\n Postpone everything until it can't be postponed anymore.\n\nI don't think this is good reasoning in any context, because you tend to\nmake the problem more difficult than it needs to be in the general case.\nThe GC has to find out that which in general was already known at some point\nin time.\n\nConsider the case where all objects have a lifespan similar to the stack\nframe of the functions in which they are used. GCs give provably bad performance\nin such cases (which for many applications is the common case).\n>\n>\n>- MemoryDuration is very costly in terms of correctness and stability:\n>\n> I am not going to say much here except to point out the number of\n> freed pointer errors and memory leaks that have been found in the code\n> to date. And that there are new ones in every release. And that I have\n> spent a good part of the last three years chasing leaks out of a\n> similar system, and no, I am not done yet.\nYes, and if we use good tools to help, we can squash them all.\nI recall the MySQL guys boasting \"No memory errors as reported by\nPurify\". This confirms what I already know: \"All memory errors that can be\ndetected can be squashed\".\n\nEspecially if the programming team disciplines itself\nby consistently using good tools. We are professionals.\nWe can do that, can't we? In my experience memory errors are a result of\n\"tired fingers\" and being a newbie.\n>\n> The very existence of companies and products like Purify should be a\n> tipoff. There is no practical way to write large C programs with\n> dynamic storage without significant storage management problems.\nNo, I don't agree. Have you ever been hired to \"fix\" large systems using\ngarbage collectors which refused to perform well (memory hogs)?\nThank God for malloc and free.\n\nAnother point is using a GC in languages with pointers. You get the\nmost terrible bugs because we C and C++ programmers tend to use tricks\n(like pointer arithmetic) at times. These (in general) are always\nincompatible (in one way or the other) with the GC implementation.\nCombine this with \"smart\" compilers doing \"helpful\" optimizations and\nyou get very very obscure bugs.\n\n>\n>\n>- MemoryDuration is very costly in terms of development cost:\n>\n> First, there are huge testing and debugging costs associated with\n> manual storage management. This is basically all waste motion and a\n> royal pain.\nI agree in a sense. 
This is equal to saying: It costs more to build a system of\ngood quality as compared to the same system of lesser quality.\n\nObtaining a high measure of \"quality\" implies \"care\" has to be taken.\nIn our context \"care\" translates to:\n\n- Good design (Using formal techniques where possible)\n- proper use of profiling tools\n- proper use of memory usage debuggers\n- Configuration management\n- etc.\n\nSo it is possible to _measure_ how good a system is performing and why.\n\nSo having high goals is more difficult than having less exacting goals.\n>\n> Even more importantly, code written knowing that there is garbage\n> collection tends to have substantially fewer source statements.\n> A typical case is a routine that allocates several things and operates\n> on them and checks for failures:\n>\n>/* non garbage collected example\n>*/\n>dothing(...)\n>{\n> if ((p1 = malloc(...)) == NULL)\n> return ERR;\n> ...\n> if ((p2 = malloc(...)) == NULL) {\n> free(p1);\n> return ERR;\n> }\n> ...\n> if ((p3 = malloc(...)) == NULL) {\n> free(p2);\n> free(p1);\n> return ERR;\n> }\n> ...\n> if ((p4 = malloc(...)) == NULL) {\n> free(p3);\n> free(p2);\n> free(p1);\n> return ERR;\n> }\n> if ((info = do_the_wild_thing(p1, p2, p3, p4)) == ERR) {\n> free(p4);\n> free(p3);\n> free(p2);\n> free(p1);\n> return ERR;\n> }\n> ...\n> free(info);\n> free(p4);\n> free(p3);\n> free(p2);\n> free(p1);\n> return OK;\n>}\n>\n>\n>\n>/* same example only this time with garbage collection\n> */\n>dothing(...)\n>{\n> if ((p1 = malloc(...)) == NULL)\n> return ERR;\n> ...\n> if ((p2 = malloc(...)) == NULL)\n> return ERR;\n> ...\n> if ((p3 = malloc(...)) == NULL)\n> return ERR;\n> ...\n> if ((p4 = malloc(...)) == NULL)\n> return ERR;\n> ...\n> if ((info = do_the_wild_thing(p1, p2, p3, p4)) == ERR)\n> return ERR;\n> ...\n> return OK;\n>}\n>\n>\n>I know which one I would rather write! And it is fairly obvious which one\n>is more likely to work.\n\n\nYes, and there are programming idioms which invalidate the need to do the\nabove without introducing the overhead of Garbage Collectors.\nSo such code is only needed if the designer of a system didn't design\na memory management subsystem which properly solved the problem.\n\nOn a side note IMO GC's don't solve any _general_ problem at all.\nAnd I'll try to explain why.\nConsider a language like Java where there is no equivalent of the\nfree() function.\n\nNow you create some resource like some window object (which allocates memory\nand other system resources).\nThese languages introduce functions like \"dispose\" (and other such names)\nwhich are supposed to free the resources used by such objects.\n\nWhen you open a file you still _must_ close the thing. So you must know\n_when_\nyou are allowed to close it. So you must know the lifetime of the object\nbefore\nyou can properly use it.\n\nSo what is the fundamental difference between opening/closing a file and\nmallocing/freeing memory?\nIf a GC solved a general problem it would also figure out when to close the\nfile.\nBecause both memory and files are system resources.\n\nSo what implementors do is they pretend that memory is a special resource\nas opposed to windows/files/sockets/cryptographic contexts/ etc.\n\nI don't agree with them.\n>\n>\n>This is probably too long for one post, so I will stop now.\n>\n>I would very much like comments and suggestions on this topic, especially\n>if this is something you have thought about or have experience with.\n>\n>Unsupported assertions to the effect \"GC is too slow ... 
only works with\n>lisp ...\" etc are ok too, but will be eligible to win valuable prizes.\n>\n\n\nOk, I'll try to keep an open mind. I suggest you make a garbage collecting version\nof postgresql and I'll be willing to profile for you -:).\nIf it can compare performance-wise with a \"purified\" version of postgresql\nI'll be totally for it -:).\n\nWith regards from Maurice.\n\nPS: Sorry to disagree.\n\n\n", "msg_date": "Thu, 2 Apr 1998 13:33:54 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Its not my fault. Its SEG's FAULT!" }, { "msg_contents": "I agree with Maurice.\nUsing GC instead of MemoryDuration everywhere isn't a good idea for\na database server.\n\nBut we could implement an additional GC-like allocation mode and use it\nwhere it is appropriate!\n\nOne example - using float8 (etc) in WHERE. We could switch to GC-allocation\nin the beginning of ExecQual() and destroy all allocations made in GC-mode\nbefore return().\n\nAnother example - psort.c! With -S 8192 I see that the server uses ~ 30M\nof memory - due to malloc/palloc overhead in palloc() for each tuple.\nNone of these allocations will be freed until psort_end() <-\na good place for the GC-destroyer.\n\nComments ?\n\nVadim\n", "msg_date": "Fri, 03 Apr 1998 13:17:09 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Its not my fault. Its SEG's FAULT!" }, { "msg_contents": "Vadim:\n> I agree with Maurice.\n> Using GC instead of MemoryDuration everywhere isn't a good idea for\n> a database server.\n\nWhy? Please state your reasons for this claim.\n \n> But we could implement an additional GC-like allocation mode and use it\n> where it is appropriate!\n\nThis assumes that there is a \"where it is not appropriate\". My contention\nis that it is generally appropriate. So my question must be, where is it\nnot appropriate and why?\n \n> One example - using float8 (etc) in WHERE. We could switch to GC-allocation\n> in the beginning of ExecQual() and destroy all allocations made in GC-mode\n> before return().\n> \n> Another example - psort.c! With -S 8192 I see that the server uses ~ 30M\n> of memory - due to malloc/palloc overhead in palloc() for each tuple.\n> None of these allocations will be freed until psort_end() <-\n> a good place for the GC-destroyer.\n\nThe examples you give are certainly places where a GC would be very very\nuseful. But, I think restricting the GC to cover only some allocations\nwould lose most of the benefit of using a GC altogether.\n\nFirst, the entire heap and stack have to be scanned as part of the root\nset in either case. However your proposal only lets the collector free\nsome of the garbage identified in that scan. This has the effect of making\nthe cost of each bit of reclaimed storage higher than it would be in the\ngeneral case. That is, the cost of a collection remains the same, but less\nstorage would be freed by each collection.\n\nSecond, one of the reasons a GC can be faster than explicit allocation /\ndeallocation is that it frees the rest of the system from doing bookkeeping\nwork. A half-and-half system does not get this benefit.\n\nPostgreSQL is I think an especially good candidate to use a GC as the overall\ncomplexity of the system makes it very hard to determine the real lifetime of\nany particular allocation. This is why we have the complex MemoryDuration\nsystem that we currently have. 
This is also why we have the leaks and vast\nstorage requirements that we have.\n\nFinally, my main reason for suggesting GC is stability and correctness. With\nan effective GC, many many bugs simply never get the chance to exist at all.\n\nA GC would likewise make the business of writing loadable functions for new\ntypes etc much simpler and less error prone.\n\nDid you have a chance to review the links I sent in the earlier posting?\nSome of the papers referenced there are quite interesting, particularly\nthe Zorn papers on the real cost of explicit storage allocation.\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\nIf simplicity worked, the world would be overrun with insects.\n", "msg_date": "Thu, 2 Apr 1998 23:07:47 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Its not my fault. Its SEG's FAULT!" }, { "msg_contents": "> >For more information about garbage collection in general and about the\n> >specific collector I am proposing see these urls:\n> >\n> > GC FAQ and links\n> > http://www.iecc.com/gclist/GC-faq.html\n> >\n> > Boehm-Weiser Collector\n> > http://reality.sgi.com/employees/boehm_mti/gc.html\n...\n> >- MemoryDuration is really \"poor man's garbage collection\" anyway. The idea\n> > being that it is either a) hard, or b) impossible to arrange to free your\n> > memory at the right time. Some objects need to last for a session, some\n> > for a transaction, some for a statement, some for a function, and some\n> > for other indeterminate durations. The duration mechanism is meant to\n> > help free things at the right time. Arguably it almost but not quite\n> > works.\n> I'm afraid I don't follow this argument.\n> Different objects have different lifetimes. It is a matter of design to\n> ensure that it is known what the lifetime of an object is when it is\n> created.\n\nThis is exactly the problem MemoryDuration is meant to solve. It tries to\nguarantee that everything will be freed when it is no longer needed. It does\nthis by keeping a list of all allocations and then freeing them when the\nduration ends. However we have existence proofs (by the truckload) that\nthis does not solve the problem.\n\n\n> What you are describing is similar to the problem of lifetime of objects in\n> a parse tree.\n> If you don't know/understand the grammar being parsed it is very difficult\n> to manage memory correctly. However if you do understand the grammar it\n> becomes almost trivial.\n\nPerhaps. Why then does it not work? \n\n\n> I'm sorry I don't agree. If the programmer doesn't know what the lifetimes\n> of the objects he creates should be, he probably should first find out. IMO\n> this is one of the most important parts of understanding a system.\n\nTo write an extension function to make some application domain calculation\nand return an (allocated) double, I now have to understand the whole\nexecutor? I hope not.\n\nIn any case, there is no mechanism in the current code to allow a function\nauthor to control this accurately.\n\n> The lifetimes of objects within a system also obey certain \"grammar rules\"\n> as you have indirectly suggested above. (Sessions containing a list of\n> transactions, which contain a list of statements ... etc).\n> Make these rules explicit and the \"when to free an object\" problem goes\n> away.\n\nThis is again exactly what MemoryDuration is intended to do. 
My argument is\nthat it a) doesn't work, b) wastes memory, c) is slow.\n\n\n> This overhead and more is also present in any garbage collector. The\n> garbage collector wastes CPU cycles figuring out if a memory block\n> is still referenced as well.\n\nWhy should this overhead be part of a collector? As nearly as I can tell from\na quick skim of the Boehm collector code, allocated objects have ZERO space\noverhead.\n\nAs for time, considering the time we currently lavish on the allocator, almost\nanything would be an improvement.\n\n\n> > The statistics I have seen for a derivative of postgres say that 86% of\n> > all allocations are 64 bytes or less. 75% are 32 bytes or less, and 43%\n> > are less than 16 bytes. This suggests that allocator overhead about\n> > doubles the storage needed.\n> Did you also measure the lifetime of the objects. I would expect this to be\n> relatively short (as compared to the lifetime these objects might have with\n> a garbage collector.)\n\nI did not measure lifetimes. It would take full tracing to really understand\nthe behavior in detail and I simply have not done it. However the common\nuncontrolled growth case we see is that the objects may have short lifetimes\nbut they are not freed until end of statement so the server just gets bigger\nand bigger and bigger...\n\n\n> So I would expect (and this is _my_ experience) that for any but the most\n> cosmetic of applications GC's use more memory.\n\nI am interested. What experience have you had with collection? Also, what\napplications have you seen use more memory collected than they would have\notherwise?\n\nIn any case, the usual number is that a collected system will have a virtual\nsize somewhere from the same to 50% larger than an explicitly allocated\nsystem. The higher number tends to be found with copying collectors as they\nneed both the \"from\" and \"to\" spaces. Boehm is not a copying collector. Even\nso, I expect it will make us use more memory that the theoretical optimum.\nBut I also expect it to be better then the current implementation.\n\n\n> > is not freed for reuse in a timely way but accumulated until the end\n> > of the memory duration (ie statement or transaction). This is the\n> > usual reason for running out of memory in a large query. Additionaly,\n> > the division of memory into separate pools creates extra fragmentation\n> > which can only make matters even worse.\n> Don't GC's accumulate memory until \"garbage collect\" time? At\n\nNo. There is no requirement for this. The Boehm collector has incremental\ncollection.\n\n\n> GC time memory pages are revisited which may have been swapped out\n> ages ago. This isn't good for performance. And much more is accumulated than\n> in the case where palloc and pfree are used.\n\nIt is pretty fatal to database systems with more than one active thread/process\nto page at all. Think about what happens when someone holding a spinlock\ngets paged out, it is not pretty. Fortunately there is no requirement with\na modern collector to wait until pages are swapped out.\n\n\n> I think it was you who suggested a good solution to this problem which would\n> also guaranteed 8 byte alignment for palloced objects.\n...\n> Also if we pre allocate big chuncks of memory (as you suggested I think) we\n> can in many cases avoid \"chasing a big list of pointers\" because the\n> like time of most objects is likely to me small for many applications.\n\nThis was my initial thought. I expect a good improvement could be made on\nthe current system. 
I think collection would serve us even better.\n\n\n> Consider the case where all objects have a lifespan similar to the stack\n> frame\n> of the functions in which they are used. GC give provably bad perforrmance\n> in\n> such cases (which for many applications is the common case).\n\nThere are papers (by I believe Appel) that show that collection can be\nfaster than stack allocation. I admit to being surprised by this, and have\nnot yet read the papers, but they are taken seriously, it is not just\nsmoke and sunshine.\n\n\n> > I am not going to say much here except to point out the number of\n> > freed pointer errors and memory leaks that have been found in the code\n> > to date. And that there are new ones in every release. And that I have\n> > spent a good part of the last three years chasing leaks out of a\n> > similar system, and no, I am not done yet.\n> Yes and if we use good tools to help we can squash them all.\n\nAs it happens, the Boehm collector can be used as a leak detector too.\nIn this mode you just use your conventional allocation scheme and run the\ncollector in \"purify\" mode. It does its collection scan at each malloc and\nthen reports all the allocated but unreachable (hence leake) memory. \n\n\n> I recall the MySql guys boasting \"No memory errors as reported by\n\nI suspect they are exagerating. On Solaris, the resolver libc routines\nleak so anything linked with libc leaks (just a little).\n\n> Purify\" This confirms what I allready know: \"All memory errors can be\n> detected\n> can be squashed\".\n\nI believe that the Space Shuttle onboard code is leak free... maybe.\n\n\n> Especially if the programming team disciplines it's self\n> by consistently using good tools. \n\nExactly why I am proposing this. The Boehm collector is a good tool. It\neliminates a large class of errors. They just \"cease to be\".\n\n> We are professionals.\n\nWe are volunteers. I don't happen to think chasing leaks is what\nI want to do with my free time, I get to do more than enough of that at work.\n\n> We can do that, can't we? In my experience memory errors are a result of\n> \"tired fingers\" and being a newbie.\n\nEven if this was true, and we can write perfect code, we are working with\npostgresql, a great big piece of code written mostly by grad students. Who\nare almost guaranteed to be either newbies or to have \"tired fingers\". \n\n\n> > The very existance of companies and products like Purify should be a\n> > tipoff. There is no practical way to write large C programs with\n> > dynamic storage without significant storage management problems.\n> No I don't agree. Have you ever been hired to \"fix\" large systems using\n> garbage collectors which refused to perform well (memory hogs)?\n> Thank God for malloc and free.\n\nPlease share your experience here, I am very interested.\n\nWhile I am taking an advocacy position on this topic, I am sincere about\nwanting a discussion and more information. If there is real evidence that\napplies to our case that suggest GC is not the right thing to do, we need\nto know about it.\n\n\n> Another point is using a GC in languages with pointers. You get the\n> most terrible bugs because us C and C++ programmers tend to use tricks\n> (like pointer arithmetic) a times. 
These (in general) are always\n> incompatible\n> (in one way or the other) with the GC implementation.\n> Combine this with \"smart\" compilers doing \"helpful\" optimizations and\n> you get very very obscure bugs.\n\nThe Boehm collector is aware of most of this and in practice almost\nany ANSI conforming C program can be collected correctly. The only real\nrestriction is that you can't use 'xor' to store both a prev and next field\nin one pointer as it hides the pointer. However, there is not an ANSI\nconforming way to do this anyway.... (you can cast (ptr to int), but the\nresult of casting back after the xor is unspecified).\n\n\n[code examples deleted]\n\n> Yes and there are programming idioms which invalidate the need to do the\n> above without introducing the overhead of Garbage Collectors.\n\nYes? How can we apply them to postgres? I hate the kind of code I placed\nin the example, but it is ubiquitous. If you have a better way (that can\nbe applied here), please please tell us.\n\n> So such code is only needed if the designed of a system didn't design\n> a memory management subsystem which properly solved the problem.\n\nA succinct description of the system we are working on.\n \n\n> On a side note IMO GC's don't solve any _general_ problem at all.\n> And I'll try to explain why.\n> Consider a language like Java were there is no equivalent of the\n> free() function.\n> \n> Now you create some resource like some window object (which allocates memory\n> and other system resources).\n> These languages introduce function like \"dispose\" (and other such names)\n> which\n> are supposed to free the resources used by such objects.\n> \n> When you open a file you still _must_ close the thing. So you must know\n> _when_\n> you are allowed to close it. So you must know the lifetime of the object\n> before\n> you can properly use it.\n\nThis is known as \"finalization\". The Boehm collecter supports it by allowing\nyou to register a function to be called when an object is collected. I do\nnot think we need to take advantage of this now though.\n \n> So what is the fundamental difference between opening/closing a file and\n> mallocing/freeing memory?\n\nHmmm, what is the difference between a file and memory? Conceptually nothing.\nIn practice, real implementations posit a number of differences (size,\npersistance etc). Real systems provide different mechanism to access\nfiles vs memory. And, I suspect, most of us find this comforting.\n\n\n> So what implementors do is they pretend that memory is a special resource\n> as aposed to windows/files/sockets/cryptographic contexts/ etc.\n> \n> I don't agree with them.\n\nThere is a point to what you say here and some interesting research is being\ndone in this area, but what we have in postgreSQL is a big pile of 'C'.\n\n> Ok, I'll try to keep an open mind. I suggest you make a garbage collecting\n> version of postgresql and I'll be willing to profile for you -:).\n\nA fine plan. I will make sure to ask for your help on the performance\ntesting. 
If you have particular test cases to suggest, start collecting\nthem, otherwise I am going to time a full regression suite run as the\nbenchmark.\n\n> If it can compare performance wise with a \"purified\" version of postgresql\n> I'll be totally for it -:).\n\nFair enough.\n\n> With regards from Maurice.\n> \n> PS: Sorry to disagree.\n\nSomeone had to ;-)\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Fri, 3 Apr 1998 01:04:00 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Its not my fault. Its SEG's FAULT!" }, { "msg_contents": "David Gould wrote:\n> \n> Vadim:\n> > I agreed with Maurice.\n> > Using GC instead of MemoryDuration everywhere isn't good idea for\n> > database server.\n> \n> Why? Please state your reasons for this claim.\n> \n> > But we could implement additional GC-like allocation mode and use it\n> > where is appropriate!\n> \n> This assumes that there is a \"where it is not appropriate\". My contention\n> is that it is generally appropriate. So my question must be, where is it\n> not appropriate and why?\n\nWhere would you put call to collector in Executor ?\n\n> The examples you give are certainly places where a GC would be very very\n> useful. But, I think restricting the GC to cover only some allocations\n> would lose most of the benifit of using a GC altogether.\n> \n> First, the entire heap and stack have to be scanned as part of the root\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> set in either case. However your proposal only lets the collector free\n ^^^^^^^^^^^^^^^^^^\n> some of the garbage identified in that scan. This has the effect of making\n> the cost of each bit of reclaimed storage higher than it would be in the\n> general case. That is, the cost of a collection remains the same, but less\n> storage would be freed by each collection.\n\nNo! In GC-like allocation mode I meant to use malloc to allocate\nmemory in big chunks (> 8K) and use Last_Allocated_Byte counter for\neach chunk in palloc() to \"allocate\" memory. pfree will do nothing. \nGC-destroyer will just free a few chunks - without any scans. \nMany GC-storages will be available simultaneously (GC_Storage_Identifier \nwill be returned by StartGCAllocation() call and used by EndGCAllocation() \nto free memory in given storage). GC-allocations will be made in current memory\ncontext (in term of postgres) ==> code using special memory contexts\n(relation cache etc) will not be affected at all (switching to another\ncontext will stop GC-allocation untill first context restored)\nas well elog(ERROR) clean up feature.\n\n> Second, one of the reasons a GC can be faster that explicit allocation /\n> deallocation is that it frees the rest of the system from doing bookeeping\n> work. A half-and-half system does not get this benifit.\n> \n> PostgreSQL is I think an especially good candidate to use a GC as the overall\n> complexity of the system makes it very hard to determine the real lifetime of\n> any particular allocation. This is why we have the complex MemoryDuration\n> system that we currently have. 
This is also why we have the leaks and vast\n> storage requirements that we have.\n\nSure - it's not so hard to determine lifetime of any allocation.\nPlease, don't forget that postgres was _academic_ research project\nfor very long time and so there were no big efforts against leaks etc.\n\n> Did you have a chance to review the links I sent in the earlier posting?\n> Some of the papers referenced there are quite interesting, particularly\n> the Zorn papers on the real cost of explicit storage allocation.\n\nSorry - I just started and will continue...\n\nVadim\n", "msg_date": "Fri, 03 Apr 1998 17:35:26 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Its not my fault. Its SEG's FAULT!" }, { "msg_contents": "David Gould wrote:\n> \n> > What you are describing is similar to the problem of lifetime of objects in\n> > a parse tree.\n> > If you don't know/understand the grammar being parsed it is very difficult\n> > do manage\n> > memory correctly. However if you do understand the grammar it's becomes\n> > almost trivial.\n> \n> Perhaps. Why then does it not work?\n\nThere was no proper attention to this issue!\n\n> > I'm sorry I don't agree. If the programmer doesn't know what the lifetimes\n> > of the objects creates should be, he probably should first find out. IMO\n> > this is\n> > one the most import parts of understanding a system.\n> \n> To write extention function to make some application domain calculation\n> and return a (allocated) double, I now have to understand the whole\n> executor? I hope not.\n> \n> In any case, there is no mechanism in the current code to allow a function\n> author to control this accurately.\n\nAnd he shouldn't care! He have to make allocation in current context\nand let us take care about anything else.\n\n> > > The statistics I have seen for a derivative of postgres say that 86% of\n> > > all allocations are 64 bytes or less. 75% are 32 bytes or less, and 43%\n> > > are less than 16 bytes. This suggests that allocator overhead about\n> > > doubles the storage needed.\n> > Did you also measure the lifetime of the objects. I would expect this to be\n> > relatively short (as compared to the lifetime these objects might have with\n> > a garbage collector.)\n> \n> I did not measure lifetimes. It would take full tracing to really understand\n> the behavior in detail and I simply have not done it. However the common\n> uncontrolled growth case we see is that the objects may have short lifetimes\n> but they are not freed until end of statement so the server just gets bigger\n ^^^^^^^^^^^^^\nSo we have to fix this!\n\n> > > is not freed for reuse in a timely way but accumulated until the end\n> > > of the memory duration (ie statement or transaction). This is the\n> > > usual reason for running out of memory in a large query. Additionaly,\n> > > the division of memory into separate pools creates extra fragmentation\n> > > which can only make matters even worse.\n> > Don't GC's accumulate memory until \"garbage collect\" time? At\n> \n> No. There is no requirement for this. The Boehm collector has incremental\n> collection.\n\nThis is interest - I see I have to read more...\n\nVadim\n", "msg_date": "Fri, 03 Apr 1998 19:19:39 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Its not my fault. Its SEG's FAULT!" } ]
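[Editor's note: below is a minimal sketch of the chunked "GC-like
allocation mode" Vadim describes in this thread -- palloc() would bump a
counter inside a large malloc'd chunk, pfree() would do nothing, and
ending the storage frees whole chunks without any per-object scan. All
names here (GCStorage, gc_alloc, gc_destroy) are illustrative only, not
actual backend API.]

#include <stdlib.h>

#define GC_CHUNK_SIZE 8192

typedef struct GCChunk
{
    struct GCChunk *next;       /* chunks form a singly linked list */
    size_t          used;       /* the "Last_Allocated_Byte" counter */
    char            data[GC_CHUNK_SIZE];
} GCChunk;

typedef struct
{
    GCChunk *chunks;
} GCStorage;

static void *
gc_alloc(GCStorage *s, size_t size)
{
    size = (size + 7) & ~(size_t) 7;    /* keep 8-byte alignment */
    if (size > GC_CHUNK_SIZE)
        return NULL;                    /* oversized requests omitted here */
    if (s->chunks == NULL || s->chunks->used + size > GC_CHUNK_SIZE)
    {
        /* current chunk is full (or absent): push a fresh one */
        GCChunk *c = malloc(sizeof(GCChunk));

        if (c == NULL)
            return NULL;
        c->used = 0;
        c->next = s->chunks;
        s->chunks = c;
    }
    s->chunks->used += size;
    return s->chunks->data + s->chunks->used - size;
}

/* the "GC-destroyer": frees a few chunks, never individual objects */
static void
gc_destroy(GCStorage *s)
{
    while (s->chunks != NULL)
    {
        GCChunk *next = s->chunks->next;

        free(s->chunks);
        s->chunks = next;
    }
}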
[ { "msg_contents": "Hi,\n\nI added these lines to tcop/postgres.c just before the ReadCommand call\n(which is in an infinite loop).\n\n+ \t\tprintf(\"Allocount: %ld; FreeCount: %ld; Diff:%ld\\n\", \n+ \t\t\t\tgetPAllocCount(),\n+ \t\t\t\tgetPFreeCount(),\n+ \t\t\t\tgetPAllocCount() - getPFreeCount());\n+ \n\nAnd the following lines to mmgr/palloc.c\n \n+ static long pallocCount = 0;\n+ static long pfreeCount = 0;\n+ \n+ long getPAllocCount()\n+ {\n+ \treturn pallocCount;\n+ }\n+ \n+ long getPFreeCount()\n+ {\n+ \treturn pfreeCount;\n+ }\n\n void *\n palloc(Size size)\n {\n+ \tpallocCount++;\n ...\n\n void\n pfree(void *pointer)\n {\n+ \tpfreeCount++;\n ...\n\nRunning postgresql in interactive mode shows that for each query I \ntype there is memory lost. The exact amount of memory lost depends on\nthe query I use. The amount of memory not freed is also a function\nof the number of tuples returned.\n\nNow I'm hoping there is an easy way to find out about how this is _supposed_ \nto work. Any one feel like giving a nice little explanation?\n\nThanks, with regards from Maurice.\n \n", "msg_date": "Thu, 2 Apr 1998 15:40:57 +0200", "msg_from": "Maurice Gittens <[email protected]>", "msg_from_op": true, "msg_subject": "Everything leaks; How it mm suppose to work?" }, { "msg_contents": "> \n> Hi,\n> \n> I added these lines to tcop/postgres.c just before the ReadCommand call\n> (which is in an infinite loop).\n> \n> + \t\tprintf(\"Allocount: %ld; FreeCount: %ld; Diff:%ld\\n\", \n> + \t\t\t\tgetPAllocCount(),\n> + \t\t\t\tgetPFreeCount(),\n> + \t\t\t\tgetPAllocCount() - getPFreeCount());\n> + \n> \n> And the following lines to mmgr/palloc.c\n> \n> + static long pallocCount = 0;\n> + static long pfreeCount = 0;\n> + \n> + long getPAllocCount()\n> + {\n> + \treturn pallocCount;\n> + }\n> + \n> + long getPFreeCount()\n> + {\n> + \treturn pfreeCount;\n> + }\n> \n> void *\n> palloc(Size size)\n> {\n> + \tpallocCount++;\n> ...\n> \n> void\n> pfree(void *pointer)\n> {\n> + \tpfreeCount++;\n> ...\n> \n> Running postgresql in interactive mode shows that for each query I \n> type there is memory lost. The exact amount of memory lost depends on\n> the query I use. The amount of memory not freed is also a function\n> of the number of tuples returned.\n> \n> Now I'm hoping there is an easy way to find out about how this is _supposed_ \n> to work. Any one feel like giving a nice little explanation?\n> \n> Thanks, with regards from Maurice.\n\nThe way I was looking at it was to first find the context that is\nloosing the memory. I called AllocSetDump(GlobalMemory->setData) to\ndump out the allocated memory pointers. The problem is that only\ncertain ones are do-able, as portals each have their own context, I\nthink, and there is no macro that enables all the memory debugging\noperations.\n\nNot sure how to proceed.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 2 Apr 1998 10:58:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Everything leaks; How it mm suppose to work?" 
}, { "msg_contents": "> Hi,\n> \n> I added these lines to tcop/postgres.c just before the ReadCommand call\n> (which is in an infinite loop).\n> \n> + \t\tprintf(\"Allocount: %ld; FreeCount: %ld; Diff:%ld\\n\", \n> + \t\t\t\tgetPAllocCount(),\n> + \t\t\t\tgetPFreeCount(),\n> + \t\t\t\tgetPAllocCount() - getPFreeCount());\n> + \n> \n> And the following lines to mmgr/palloc.c\n> \n> + static long pallocCount = 0;\n> + static long pfreeCount = 0;\n> + \n> + long getPAllocCount()\n> + {\n> + \treturn pallocCount;\n> + }\n> + \n> + long getPFreeCount()\n> + {\n> + \treturn pfreeCount;\n> + }\n> \n> void *\n> palloc(Size size)\n> {\n> + \tpallocCount++;\n> ...\n> \n> void\n> pfree(void *pointer)\n> {\n> + \tpfreeCount++;\n> ...\n> \n> Running postgresql in interactive mode shows that for each query I \n> type there is memory lost. The exact amount of memory lost depends on\n> the query I use. The amount of memory not freed is also a function\n> of the number of tuples returned.\n> \n> Now I'm hoping there is an easy way to find out about how this is _supposed_ \n> to work. Any one feel like giving a nice little explanation?\n> \n> Thanks, with regards from Maurice.\n> \n\nThe term \"getPAllocCount() - getPFreeCount())\" returns the amount of\nmemory not explicitly freed. The rest of the memory was (probably) released\nby MemoryContextDestroy() when the context went out of scope.\n\nThis is normal behavior under the current scheme.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\nIf simplicity worked, the world would be overrun with insects.\n", "msg_date": "Thu, 2 Apr 1998 22:24:42 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Everything leaks; How it mm suppose to work?" }, { "msg_contents": "Hello,\n\nI recall reading about a problem with subselects taking a long time to\nreturn. I am experiencing similar problems under 6.3.1 running on Linux\nRedHat5.0. The tables only contains approx 2k records and the command\nran for about 10 minutes.\n\nThanks,\n\nEdwin Ramirez\n\n", "msg_date": "Fri, 03 Apr 1998 09:42:03 -0500", "msg_from": "\"Edwin S. Ramirez\" <[email protected]>", "msg_from_op": false, "msg_subject": "Problem with 6.3.1 under Linux RH5.0" } ]
[ { "msg_contents": "\n>\n>Running postgresql in interactive mode shows that for each query I \n>type there is memory lost. The exact amount of memory lost depends on\n>the query I use. The amount of memory not freed is also a function\n>of the number of tuples returned.\n>\n \nOops, it seems some palloced memory is not freed by pfree but\nusing some other function(s).\nMy mistake, sorry.\n\nThanks, with regards from Maurice.\n\n\n", "msg_date": "Thu, 2 Apr 1998 18:02:25 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Everything leaks; How it mm suppose to work?" }, { "msg_contents": "> \n> \n> >\n> >Running postgresql in interactive mode shows that for each query I \n> >type there is memory lost. The exact amount of memory lost depends on\n> >the query I use. The amount of memory not freed is also a function\n> >of the number of tuples returned.\n> >\n> \n> Oops, it seems some palloced memory is not freed by pfree but\n> using some other function(s).\n> My mistake, sorry.\n> \n\nOne thing I have found is that:\n\n\tselect * from test where 1=0;\n\ndo not leak memory while\n\n\tselect * from test where x=-999;\n\ndoes leak memory, even though neither returns any rows. Strange. Would\nseem to point to the optimizer or executor.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 4 Apr 1998 16:07:12 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Everything leaks; How it mm suppose to work?" }, { "msg_contents": "> \n> \n> >\n> >Running postgresql in interactive mode shows that for each query I \n> >type there is memory lost. The exact amount of memory lost depends on\n> >the query I use. The amount of memory not freed is also a function\n> >of the number of tuples returned.\n> >\n> \n> Oops, it seems some palloc'ed memory is not freed by pfree() but\n> using some other function(s).\n> My mistake, sorry.\n\nOK, I think I know where the leak is from. I checked AllocSetDump(), and\nit did not show any problems with any context growing, so I started to\nsuspect direct calls to malloc(). I tried trapping on malloc(), but\nthere are too many calls.\n\nThen I ran profiling on the two queries I mentioned, where one leaks and\none doesn't, and found that the leaking one had 500 extra calls to\nmalloc. Grep'ing out the calls and comparing the output of the two\nprofiles, I found:\n\n 0.00 0.00 1/527 ___irs_gen_acc [162]\n 0.00 0.00 35/527 _GetDatabasePath [284]\n 0.00 0.00 491/527 _GetDatabaseName [170]\n[166] 0.8 0.00 0.00 527 _strdup [166]\n 0.00 0.00 527/2030 _malloc [105]\n 0.00 0.00 527/604 _strlen [508]\n 0.00 0.00 527/532 _bcopy [515]\n\n\nI believe this code was modified by Vadim to fix our problem with blind\nwrite errors when using psql while the regression tests were being run.\n\nAm I correct on this? I have not developed a fix yet.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 4 Apr 1998 16:59:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Everything leaks; How it mm suppose to work?" 
}, { "msg_contents": "Bruce Momjian wrote:\n> \n> Then I ran profiling on the two queries I mentioned, where one leaks and\n> one doesn't, and found that the leaking one had 500 extra calls to\n> malloc. Grep'ing out the calls and comparing the output of the two\n> profiles, I found:\n> \n> 0.00 0.00 1/527 ___irs_gen_acc [162]\n> 0.00 0.00 35/527 _GetDatabasePath [284]\n> 0.00 0.00 491/527 _GetDatabaseName [170]\n> [166] 0.8 0.00 0.00 527 _strdup [166]\n> 0.00 0.00 527/2030 _malloc [105]\n> 0.00 0.00 527/604 _strlen [508]\n> 0.00 0.00 527/532 _bcopy [515]\n> \n> I believe this code was modified by Vadim to fix our problem with blind\n> write errors when using psql while the regression tests were being run.\n> \n> Am I correct on this? I have not developed a fix yet.\n\nOnly src/backend/storage/smgr/md.c:mdblindwrt() was changed...\n\nVadim\n", "msg_date": "Sun, 05 Apr 1998 21:39:03 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Everything leaks; How it mm suppose to work?" }, { "msg_contents": "Bruce:\n> Maurice:\n> > >Running postgresql in interactive mode shows that for each query I \n> > >type there is memory lost. The exact amount of memory lost depends on\n> > >the query I use. The amount of memory not freed is also a function\n> > >of the number of tuples returned.\n> > >\n> > \n> > Oops, it seems some palloced memory is not freed by pfree but\n> > using some other function(s).\n> > My mistake, sorry.\n> > \n> \n> One thing I have found is that:\n> \n> \tselect * from test where 1=0;\n> \n> do not leak memory while\n> \n> \tselect * from test where x=-999;\n> \n> does leak memory, even though neither returns any rows. Strange. Would\n> seem to point to the optimizer or executor.\n\nIn the first case, the where clause is constant and evaluates false, so\nnot much is done. In the second case, the table is scanned and presumably\nsome memory is allocated for each row, probably to evaluate the expression.\nSince this memory is allocated into a per statement duration, it will not\nbe freed until the end of the statement when the context is destroyed.\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n", "msg_date": "Thu, 9 Apr 1998 00:48:17 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Everything leaks; How it mm suppose to work?" }, { "msg_contents": "> \n> Bruce:\n> > Maurice:\n> > > >Running postgresql in interactive mode shows that for each query I \n> > > >type there is memory lost. The exact amount of memory lost depends on\n> > > >the query I use. The amount of memory not freed is also a function\n> > > >of the number of tuples returned.\n> > > >\n> > > \n> > > Oops, it seems some palloced memory is not freed by pfree but\n> > > using some other function(s).\n> > > My mistake, sorry.\n> > > \n> > \n> > One thing I have found is that:\n> > \n> > \tselect * from test where 1=0;\n> > \n> > do not leak memory while\n> > \n> > \tselect * from test where x=-999;\n> > \n> > does leak memory, even though neither returns any rows. Strange. Would\n> > seem to point to the optimizer or executor.\n> \n> In the first case, the where clause is constant and evaluates false, so\n> not much is done. 
In the second case, the table is scanned and presumably\n> some memory is allocated for each row, probably to evaluate the expression.\n> Since this memory is allocated into a per statement duration, it will not\n> be freed until the end of the statement when the context is destroyed.\n> \n> -dg\n> \n\nDoes it make sense to have a 'row' context which is released just before\nstarting with a new tuple ? The total number or free is the same but they\nare distributed over the query and unused memory should not accumulate.\nI have seen backends growing to 40-60MB with queries which scan a very\nlarge number of rows.\n\nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto e-mail: [email protected] |\n| Via Marconi, 141 phone: ++39-461-534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Thu, 9 Apr 1998 12:31:42 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Everything leaks; How it mm suppose to work?" }, { "msg_contents": "> \n> Bruce:\n> > Maurice:\n> > > >Running postgresql in interactive mode shows that for each query I \n> > > >type there is memory lost. The exact amount of memory lost depends on\n> > > >the query I use. The amount of memory not freed is also a function\n> > > >of the number of tuples returned.\n> > > >\n> > > \n> > > Oops, it seems some palloced memory is not freed by pfree but\n> > > using some other function(s).\n> > > My mistake, sorry.\n> > > \n> > \n> > One thing I have found is that:\n> > \n> > \tselect * from test where 1=0;\n> > \n> > do not leak memory while\n> > \n> > \tselect * from test where x=-999;\n> > \n> > does leak memory, even though neither returns any rows. Strange. Would\n> > seem to point to the optimizer or executor.\n> \n> In the first case, the where clause is constant and evaluates false, so\n> not much is done. In the second case, the table is scanned and presumably\n> some memory is allocated for each row, probably to evaluate the expression.\n> Since this memory is allocated into a per statement duration, it will not\n> be freed until the end of the statement when the context is destroyed.\n\nThis will be fixed in the 6.3.2 patch, due out soon.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 9 Apr 1998 09:51:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Everything leaks; How it mm suppose to work?" } ]
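[Editor's note: a sketch of the per-row context Massimo suggests above.
Per-tuple allocations go into a scratch context that is emptied before
each new tuple, so a scan over a very large table no longer accumulates
per-row garbage until end of statement. Every name here (ResetContext,
fetch_next_tuple, qual_is_true, emit_tuple, per_row_cxt) is a
hypothetical placeholder, not the 6.3 memory-manager or executor API.]

while ((tuple = fetch_next_tuple(scan)) != NULL)
{
    /* drop everything the previous row allocated */
    ResetContext(per_row_cxt);

    /* expression evaluation puts its temporaries (float8 results,
     * copied datums, ...) into per_row_cxt instead of the
     * per-statement duration */
    if (qual_is_true(per_row_cxt, qual, tuple))
        emit_tuple(tuple);
}

The total number of allocations is unchanged, but unused memory is
reclaimed once per row instead of once per statement, so the 40-60MB
backends Massimo mentions would stay flat.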
[ { "msg_contents": "First success at automatic conversions for left-hand operators. Still\nneed right-hand and binary operators but it looks hopeful...\n\ntgl=> select |/ 2;\n---------------\n1.4142135623731\n(1 row)\n\n - Tom\n", "msg_date": "Thu, 02 Apr 1998 17:06:35 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Left operator automatic type conversion" } ]
[ { "msg_contents": "> \n> \n> Would Insure++ do the trick? They offer a Linux version...\n> \n> ccb\n> \n\nI don't run Linux, but that may do it.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 2 Apr 1998 12:43:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] General Bug Report: memory leak in postgres backend\n\tprocesses" } ]
[ { "msg_contents": "\nI've got a table that has a primary key with a default of\nnextval('seq'). I've got another table which inherits this one, but\nit fails to inherit the unique btree index. It does inherit the\ndefault value. So, I'm assuming that if I create a unique index for\nthat field on the child table, it won't keep you from inserting values\nthat exist in that field in the parent table (and since they both\nshare the same sequence, that's what I want).\n\nSo primary keys do not work in this situation. Are there plans to\nenhance the inheritance? I have no idea how it works, is it\nintelligent? Seems more klunky than not, but I haven't really looked\nat the code. Should I stop using inheritance altogether, considering\nits drawbacks (no idea what child class it is in when selecting from\nparent and all children, no shared indices/pkeys) when I don't select\nfrom them all at once?\n", "msg_date": "Thu, 2 Apr 1998 21:42:53 -0800 (PST)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "inherited sequences and primary keys " } ]
[ { "msg_contents": ">\n>I've got a table that has a primary key with a default of\n>nextval('seq'). I've got another table which inherits this one, but\n>it fails to inherit the unique btree index. It does inherit the\n>default value. So, I'm assuming that if I create a unique index for\n>that field on the child table, it won't keep you from inserting values\n>that exist in that field in the parent table (and since they both\n>share the same sequence, that's what I want).\nThis is the way it should work IMO.\n>\n>So primary keys do not work in this situation. Are there plans to\n>enhance the inheritance? I have no idea how it works, is it\n>intelligent? Seems more klunky than not, but I haven't really looked\n>at the code. Should I stop using inheritance altogether, considering\n>its drawbacks (no idea what child class it is in when selecting from\n>parent and all children, no shared indices/pkeys) when I don't select\n>from them all at once?\n\nIMO the current semantics for inheritance in Postgresql are broken.\nI've been wanting to do something about it but I got distracted and started\nto debug some other problems in the system.\n\nI hope to get back to this some time.\n\nI personally feel that we have to make some choices:\n\n Is postgresql going to be an Object Relational dbms or is it going to\n be yet another relation dbms?\n\nWhen the developers make an explicite choice on this point it will be a \nGood Thing (tm).\n\n\nWith regards from Maurice.\n\n\n", "msg_date": "Fri, 3 Apr 1998 09:15:44 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] inherited sequences and primary keys " }, { "msg_contents": "Maurice:\n> IMO the current semantics for inheritance in Postgresql are broken.\n\nIt seems that way.\n\n> I've been wanting to do something about it but I got distracted and started\n> to debug some other problems in the system.\n> \n> I hope to get back to this some time.\n> \n> I personally feel that we have to make some choices:\n> \n> Is postgresql going to be an Object Relational dbms or is it going to\n> be yet another relation dbms?\n> \n> When the developers make an explicite choice on this point it will be a \n> Good Thing (tm).\n\nAgreed. There are lots of pretty decent relation dbms's out there. There are\nvery few Object Relational dbms's. I happen to think ORDBMS is a really cool\nidea and have seen some great applications done with it that a straight\nup RDBMS just couldn't do. So my vote is for ORDBMS.\n\nThat said, postgresql needs to become a much better RDBMS that it currently\nis.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n", "msg_date": "Thu, 2 Apr 1998 23:31:03 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inherited sequences and primary keys" }, { "msg_contents": "David Gould wrote:\n> \n> Agreed. There are lots of pretty decent relation dbms's out there. There are\n> very few Object Relational dbms's. I happen to think ORDBMS is a really cool\n> idea and have seen some great applications done with it that a straight\n> up RDBMS just couldn't do. 
So my vote is for ORDBMS.\n> \n> That said, postgresql needs to become a much better RDBMS that it currently\n> is.\n\nAgreed.\nUnfortunately, there are many problems in all areas of postgres.\nAnd time is limited :(\n\nVadim\n", "msg_date": "Fri, 03 Apr 1998 16:23:52 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Re: [HACKERS] inherited sequences and primary keys" } ]
[ { "msg_contents": ">\n>It is surprising, but this simple fact has not yet penetrated into\n>popular thought. I have seen large organizations full of very bright\n>people spend hundreds of man years chasing leaks without ever wondering\n>if there might be an alternative.\n>For some reason people cling to the belief ithat if they were just careful\n>enough and only let really good programmers touch the code and carried\n>a lucky rabbits foot that somehow they could write leak free software.\n>\n\n\nDavid, you state this is a fact. Ok I am willing to accept that as a fact if\nyou\ncan provide \"real world applications\" which use GC's which\nget the job done more efficiently that a comparable system without the\nGC.\nPlease David, not just assertions facts. And I don't consider statements\nlike \"proferssor X says so\" to be a factual.\n\nYes I am aware of \"academic experiments\" where some claim\nthat GC's are actually good _general_ purpose solutions. But\neverytime I've taken the time to dig into this (and I have taken the time)\nit turns out that they seem to massage the results based on (what seems to\nbe)\nsome pro GC bias.\n\nAnd after my personal search I have concluded that GC's are based on\na bad principle. Thisis the principle as I have identified it.\n\n\"Pospone until you can pospone no longer\".\n\nIMO this principle is not a good principle in _general_.\n\nAgain I will state that a GC does not solve any _general_ problem.\n\n1. Can you identify the fundamental difference between allocating/freeing\nmemory\nand opening/closing files?\n\n2. Have you also been reading those so called \"optimisation hints\" for\nJava programs that Sun sends around to Java developers?\n\n3. Have you noticed that these \"hints\" are supposted to help the GC\nimplementation not make a fool of it's self?\n\n4. Have you noticed that the finalize method in Java in useless in general\nbecause you never know if it will be called?\nSo if you put any code in it, the code is quite useless, because\nmaybe it wont be called. (I'm sorry I'm getting to Java specific).\n\n5. Isn't it true that a GC's performance is a function of the number of\nactive objects in the runtime system?\nSo adding some new subsystem actually in _general_ decreases\nyour systems performance (larger memory footprint) _and_ your GC's\nperformance?\n(Assuming the subsystem also uses the GC).\n\n6. Are algorithms to detected circular references? O(n) or event worse?\nLast time I checked they where not linear time. I'm certain you\ncan envision a graph for which it would take some effort on the part of the\nGC to determine if a node is freeable.\n\nAnd sure, I too am able to think up some cases in which this will not be\ntrue.\nBut in the _general_ case it will, because the GC's performance _is_\na function of the number of objects to be collected. No amount of\nprayer is going to change that.\n\nI'm sorry but as far as I am concerned these are facts. Please\nsupply me with some facts about the merits of these GC's.\nI'd also prefer you don't refer me to some researcher who is\nobviously biased because he probably spent a few years of his life\nresearching a lost case.\n\nI prefer you give me results comparing similar systems implemented\nwith and without GC's.\n\nI have yet to see any \"non cosmetic\" case where the GC systems wins\nin _general_. 
Many poeple can massage the input of a system\nas to show the \"merit\" of their pet algorithm but it doesn't change the fact\nthat the _principle_ GC's are based on is a poor one.\nIf it were not so, GC's would solve general problems (including situations\nlike opening and closing files.) )Yes those researchers recognize this as\nwell.)\n\nIsn't it true that good principles are applicable to the general case?\nIf this is so then GC's just don't pull it.\nBut maybe you can design a GC based on some better principle?\nI certainly would use it if it would prove to work better in general.\n\n>All this in the face of the observation that no-one ever actually _writes_\n>leak free software. Personally, I don't know anyone who can write leak\n>free software of any size, certainly not in a finite time.\n>\nYes, we make mistakes. I make mistakes too. But I am certain that some\nof the subsystems I've written are leak free. And I am certain some of them\nare not. I just don't know which ones are and which ones are not.\n\nI know you are not trying to say that using GC's will insure that no\nprogramming\nerror are made. So I'll conclude that with or without GC's we will have\nwill have errors in our systems.\nThe challenge/solution is then to design systems which work inspite of the\nfact\nthat we make mistakes.\n\nThis leads to _general_ techniques such as using (precondition and\npostconditions)\nto help us cope with the fact that we are not perfect.\n\nThere is a good principle. It works all the time, in any situation\n(considering the\ncontext of our discussion.)\n\nOk, I'll stop now.\n\nPlease understand that I have no personal grudge against anyone.\n\nWith regards from Maurice.\n\n\n", "msg_date": "Fri, 3 Apr 1998 10:45:16 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Its not my fault. Its SEG's FAULT!" } ]
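[Editor's note: a toy mark phase, to make Maurice's points 5 and 6
concrete. A tracing collector must revisit every *live* object on each
cycle, so N long-lived objects cost O(N) marking work per collection
even though none of them is garbage; the mark bit is also what keeps
the traversal from looping forever on circular references. Purely
illustrative code, not taken from any real collector.]

typedef struct Obj
{
    int         marked;
    struct Obj *refs[2];    /* outgoing references, may form cycles */
} Obj;

static void
mark(Obj *obj)
{
    if (obj == NULL || obj->marked)
        return;             /* already-visited test also breaks cycles */
    obj->marked = 1;
    mark(obj->refs[0]);
    mark(obj->refs[1]);
}

/* A collection marks from every root and then sweeps whatever is left
 * unmarked; the marking cost grows with the number of reachable
 * objects, not with the amount of garbage actually reclaimed. */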
[ { "msg_contents": ">\n>No! In GC-like allocation mode I meant to use malloc to allocate\n>memory in big chunks (> 8K) and use Last_Allocated_Byte counter for\n>each chunk in palloc() to \"allocate\" memory. pfree will do nothing.\n>GC-destroyer will just free a few chunks - without any scans.\n>Many GC-storages will be available simultaneously (GC_Storage_Identifier\n>will be returned by StartGCAllocation() call and used by EndGCAllocation()\n>to free memory in given storage). GC-allocations will be made in current\nmemory\n>context (in term of postgres) ==> code using special memory contexts\n>(relation cache etc) will not be affected at all (switching to another\n>context will stop GC-allocation untill first context restored)\n>as well elog(ERROR) clean up feature.\n>\nThis seems like an effective strategy too me. It also provides a solution\nto the 8 byte alignment problem.\n\nWith regards from Maurice.\n\n\n", "msg_date": "Fri, 3 Apr 1998 12:27:00 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Its not my fault. Its SEG's FAULT!" } ]
[ { "msg_contents": ">> >For more information about garbage collection in general and about the\n>> >specific collector I am proposing see these urls:\n>> >\n>> > GC FAQ and links\n>> > http://www.iecc.com/gclist/GC-faq.html\n>> >\n>> > Boehm-Weiser Collector\n>> > http://reality.sgi.com/employees/boehm_mti/gc.html\n>...\nI'll try to find time to find out about these particular garbage collectors.\nIt's just that I've never seen/heard of one kept it's promises yet.\n\n\n>This is exactly the problem memoryduration is meant to solve. It trys to\n>guarantee that everything will be freed when it is no longer needed. It\ndoes\n>this by keeping a list of all allocations and then freeing them when the\n>duration ends. However we have existance proofs (by the truckload) that\n>this does not solve the problem.\n\nNo. we don't. The \"implementation of a concept\" is not \"the concept\".\nPoorly implementing any algorithm is hardly a proof.\n\n>\n>\n>> What you are describing is similar to the problem of lifetime of objects\nin\n>> a parse tree.\n>> If you don't know/understand the grammar being parsed it is very\ndifficult\n>> do manage\n>> memory correctly. However if you do understand the grammar it's becomes\n>> almost trivial.\n>\n>Perhaps. Why then does it not work?\n\nIt works. It simply does not yet meet our standards.\nI for one, don't fully appreciate mm in postgresql as yet. But if I were to\nunderstand\nit (and this is a matter of time) it would meet at least my standards if I\nwere\nto so choose.\n\n>\n>> I'm sorry I don't agree. If the programmer doesn't know what the\nlifetimes\n>> of the objects creates should be, he probably should first find out. IMO\n>> this is\n>> one the most import parts of understanding a system.\n>\n>To write extention function to make some application domain calculation\n>and return a (allocated) double, I now have to understand the whole\n>executor? I hope not.\nNo, you don't have to understand the whole executor.\nYou would read the documentation on how to write extention functions.\nThis would tell you how to allocated memory for your purposes.\n\n>\n>In any case, there is no mechanism in the current code to allow a function\n>author to control this accurately.\n\nThis can however be fixed.\n\n>\n>> The life times of objects within a system also obey certain \"grammar\nrules\"\n>> as you have indirectly suggested above. (Sessions containing a list of\n>> transactions,\n>> which contain a list of statements ... etc).\n>> Make these rules explicite and the \"when to free an object\" problem goes\n>> away.\n>\n>This is again exactly what MemoryDuration is intended to do. My argument is\n>that it a) doesn't work, b) wastes memory, c) is slow.\nYes. The MemoryDuration sceem as it's implemented now could have been\nbetter. But that doesn't mean the _concept_ of MemoryDuration is bad\nit only means that it is poorly implemented.\n\nI'm certain you know it is logically incorrect to assume that the\nimplementation of a concept _is_ the concept.\n\n>\n>\n>> This overhead and more is also present in any garbage collector. The\n>> garbage collector wastes CPU cycles figuring out if a memory block\n>> is still referenced as well.\n>\n>Why should this overhead be part of a collector? 
As nearly as I can tell\nfrom\n>a quick skim of the Boehm collector code, allocated objects have ZERO space\n>overhead.\n\nDon't you consider wasted execution time to be overhead David?\n\n>\n>As for time, considering the time we currently lavish on the allocator,\nalmost\n>anything would be an improvement.\n\nLet's make some of the main resources we are concerned with explicite.\n- program execution time\n- runtime program memory usage\n- time spent programming/debugging etc.\n\nLets assume that we have 10 developers and 1000 users and that\nthese users have 5 connection to the database at any given time.\nPoor execution time and poor memory management affects 5050 processes\nrunning on different machines.\nThe time spent programming costs a lot for those 10 developers.\n\nWhich resource we want to conserve most depends on situation as usual.\n\nI prefer to make the programmer go through hell so that the users\ndon't have to go through a similar hell.\n\n>\n>\n>> > The statistics I have seen for a derivative of postgres say that 86%\nof\n>> > all allocations are 64 bytes or less. 75% are 32 bytes or less, and\n43%\n>> > are less than 16 bytes. This suggests that allocator overhead about\n>> > doubles the storage needed.\n>> Did you also measure the lifetime of the objects. I would expect this to\nbe\n>> relatively short (as compared to the lifetime these objects might have\nwith\n>> a garbage collector.)\n>\n>I did not measure lifetimes. It would take full tracing to really\nunderstand\n>the behavior in detail and I simply have not done it. However the common\n>uncontrolled growth case we see is that the objects may have short\nlifetimes\n>but they are not freed until end of statement so the server just gets\nbigger\n>and bigger and bigger...\nAnd the lifetimes are usually short so GC's are not the answer IMO.\nPlease don't confuse a bad implementation a concept with the concept.\n\n>\n>\n>> So I would expect (and this is _my_ experience) that for any but the most\n>> cosmetic of applications GC's use more memory.\n>\n>I am interested. What experience have you had with collection? Also, what\n>applications have you seen use more memory collected than they would have\n>otherwise?\n\nI have experience with collection. And I don't think it's wise for me\nto speak about it, sorry.\n\n>\n>In any case, the usual number is that a collected system will have a\nvirtual\n>size somewhere from the same to 50% larger than an explicitly allocated\n>system. The higher number tends to be found with copying collectors as they\n>need both the \"from\" and \"to\" spaces. Boehm is not a copying collector.\nEven\n>so, I expect it will make us use more memory that the theoretical optimum.\n>But I also expect it to be better then the current implementation.\n\nCome on, we should measure space _and_ time.\nThis is what the GC is costing us.\n\nLet's consider an object M which has a life time which equals the life time\nof\nthe program. How many times will it be checked for possible collection\nbefore the program ends? Consider we have N objects with\na similar life time as M. Now the GC is considering N*M objects for\ncollections which won't be collected until the program ends.\n\nThis is _wasted_ resources. And the waste is a function of\nthe number of objects in the system multiplied with the amount of time\nthe program is running. 
It is _impossible_ that a GC will use\nthe same amount of resources as any sane non-GC implementation\nof a practical system if we consider both space _and_ time.\n\n>\n>\n>> > is not freed for reuse in a timely way but accumulated until the end\n>> > of the memory duration (ie statement or transaction). This is the\n>> > usual reason for running out of memory in a large query. Additionally,\n>> > the division of memory into separate pools creates extra fragmentation\n>> > which can only make matters even worse.\n>> Don't GC's accumulate memory until \"garbage collect\" time? At\n>\n>No. There is no requirement for this. The Boehm collector has incremental\n>collection.\n\nWhich simply means that instead of wasting more space it wastes more\nCPU cycles. Please, don't be fooled by nice talk.\n\n>\n>\n>> GC time memory pages are revisited which may have been swapped out\n>> ages ago. This isn't good for performance. And much more is accumulated\n>> than in the case where palloc and pfree are used.\n>\n>It is pretty fatal to database systems with more than one active\n>thread/process to page at all. Think about what happens when someone\n>holding a spinlock gets paged out, it is not pretty. Fortunately there is\n>no requirement with a modern collector to wait until pages are swapped out.\n\nNo, but a GC visits allocated objects. These objects\nare scattered all over the memory space, so the working set of\nthe program using the GC increases and as a result the performance\ndecreases. This has always been this way and probably always will.\n\n>\n>\n>> I think it was you who suggested a good solution to this problem which\n>> would also guarantee 8-byte alignment for palloced objects.\n>...\n>> Also if we pre-allocate big chunks of memory (as you suggested I think)\n>> we can in many cases avoid \"chasing a big list of pointers\" because the\n>> lifetime of most objects is likely to be small for many applications.\n>\n>This was my initial thought. I expect a good improvement could be made on\n>the current system. I think collection would serve us even better.\n>\n>\n>> Consider the case where all objects have a lifespan similar to the stack\n>> frame of the functions in which they are used. GC's give provably bad\n>> performance in such cases (which for many applications is the common\n>> case).\n>\n>There are papers (by I believe Appel) that show that collection can be\n>faster than stack allocation. I admit to being surprised by this, and have\n>not yet read the papers, but they are taken seriously, it is not just\n>smoke and sunshine.\n\nThey are taken seriously by whom? I love it when people use careful\nwordings.\n\n>\n>\n>> > I am not going to say much here except to point out the number of\n>> > freed pointer errors and memory leaks that have been found in the code\n>> > to date. And that there are new ones in every release. And that I have\n>> > spent a good part of the last three years chasing leaks out of a\n>> > similar system, and no, I am not done yet.\n>> Yes, and if we use good tools to help we can squash them all.\n>\n>As it happens, the Boehm collector can be used as a leak detector too.\n>In this mode you just use your conventional allocation scheme and run the\n>collector in \"purify\" mode. It does its collection scan at each malloc and\n>then reports all the allocated but unreachable (hence leaked) memory.\n\nWow, that must take quite some bookkeeping. 
You mean they actually\nhave some kind of a table which keeps track of the pointers?\nThis sounds pretty conventional to me.\n\n>\n>\n>> I recall the MySql guys boasting \"No memory errors as reported by\n>\n>I suspect they are exaggerating. On Solaris, the resolver libc routines\n>leak, so anything linked with libc leaks (just a little).\n>\n>> Purify\". This confirms what I already know: \"All memory errors that can be\n>> detected\n>> can be squashed\".\n>\n>I believe that the Space Shuttle onboard code is leak free... maybe.\n>\n>\n>> Especially if the programming team disciplines itself\n>> by consistently using good tools.\n>\n>Exactly why I am proposing this. The Boehm collector is a good tool. It\n>eliminates a large class of errors. They just \"cease to be\".\n>\n>> We are professionals.\n>\n>We are volunteers. I don't happen to think chasing leaks is what\n>I want to do with my free time; I get to do more than enough of that at\nwork.\n>\n>> We can do that, can't we? In my experience memory errors are a result of\n>> \"tired fingers\" and being a newbie.\n>\n>Even if this were true, and we could write perfect code, we are working with\n>postgresql, a great big piece of code written mostly by grad students, who\n>are almost guaranteed to be either newbies or to have \"tired fingers\".\n>\n\n>\n>> > The very existence of companies and products like Purify should be a\n>> > tipoff. There is no practical way to write large C programs with\n>> > dynamic storage without significant storage management problems.\n>> No, I don't agree. Have you ever been hired to \"fix\" large systems using\n>> garbage collectors which refused to perform well (memory hogs)?\n>> Thank God for malloc and free.\n>\n>Please share your experience here, I am very interested.\n\nAs I said, I choose not to.\n\n>\n>While I am taking an advocacy position on this topic, I am sincere about\n>wanting a discussion and more information. If there is real evidence that\n>applies to our case and suggests GC is not the right thing to do, we need\n>to know about it.\n\nI would suggest measuring practical implementations.\nGC's always lose. When they win they are cheating, in my experience.\n(They don't chart the runtime overhead or some other relevant\nresource.)\n\n>\n>> Another point is using a GC in languages with pointers. You get the\n>> most terrible bugs because we C and C++ programmers tend to use tricks\n>> (like pointer arithmetic) at times. These (in general) are always\n>> incompatible (in one way or another) with the GC implementation.\n>> Combine this with \"smart\" compilers doing \"helpful\" optimizations and\n>> you get very, very obscure bugs.\n>\n>The Boehm collector is aware of most of this, and in practice almost\n>any ANSI conforming C program can be collected correctly. The only real\n>restriction is that you can't use 'xor' to store both a prev and next field\n>in one pointer, as that hides the pointer. However, there is no ANSI\n>conforming way to do this anyway... (you can cast ptr to int, but the\n>result of casting back after the xor is unspecified).\n>\n>\n>[code examples deleted]\n>\n>> Yes, and there are programming idioms which remove the need to do the\n>> above without introducing the overhead of garbage collectors.\n>\n>Yes? How can we apply them to postgres? I hate the kind of code I placed\n>in the example, but it is ubiquitous. 
If you have a better way (that can\n>be applied here), please, please tell us.\n>\n>> So such code is only needed if the designer of a system didn't design\n>> a memory management subsystem which properly solved the problem.\n>\n>A succinct description of the system we are working on.\nAnd it can all be fixed. Remember that the original developers of\nour system probably had different goals than we do. Do you know what\nour goals are? I don't; I'm simply learning about the system right now.\n\nAnyway, as I've said, if you prove me wrong I will be happy, because certain\nprogramming tasks will be simpler. It is just that I've never seen\nanyone back up the GC promise with real world facts.\n\n\nThanks again, with regards from Maurice.\n\n\n", "msg_date": "Fri, 3 Apr 1998 15:02:07 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Its not my fault. Its SEG's FAULT!" } ]
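[Editor's note: for readers following the thread above, here is a minimal sketch of how the Boehm collector is dropped into C code. It only illustrates the API being argued about (GC_INIT, GC_MALLOC, GC_gcollect, and GC_get_heap_size are the collector's documented entry points); it is not PostgreSQL code, and the allocation sizes are arbitrary.]

    #include <stdio.h>
    #include <gc.h>                 /* Boehm conservative collector */

    int
    main(void)
    {
        int i;

        GC_INIT();                  /* initialize the collector */

        for (i = 0; i < 100000; i++)
        {
            /* allocated like malloc(), but never explicitly freed;
             * blocks that become unreachable are reclaimed for us */
            char *p = (char *) GC_MALLOC(64);

            if (p)
                p[0] = 'x';
        }

        GC_gcollect();              /* force a full collection */
        printf("heap size: %lu\n", (unsigned long) GC_get_heap_size());
        return 0;
    }

Note that nothing in the loop calls free(); whether that is a feature (no leaks by construction) or a cost (collector CPU time, larger working set) is exactly what the two sides of this thread disagree on.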
[ { "msg_contents": "> > regression test failed for numerology with 6.3.1 (compiled with\n> >MB=EUC_JP option by gcc-2.7.2) on x86 Linux-2.0.33.\n> >There was not such an error with 6.3. Is anyone know about this?\n> test=> select * from f where (f > -2147483647);\n> ERROR: There is no operator '>' for types 'float8' and 'int4'\n> Look like \"> plus_number\" and \">= minus_number\" are ok, only\n> \"> minus_number\" fails.\n\nYes, it appears to be a side-effect of a change I made to ensure that\nall \"unary minus\" math expressions are handled correctly. If you revert\nscan.l/scan.c to those from v6.3 this particular problem should\ndisappear (but the original reported problem, which is apparently not\ncommonly seen, will reappear).\n\nWe should have caught this before releasing v6.3.1 :(\n\nI am working on automatic type conversion capabilities for v6.4, and\nwould expect that this problem goes away.\n\n - Tom\n", "msg_date": "Fri, 03 Apr 1998 15:33:39 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] regression failed for numerology." } ]
[ { "msg_contents": "> I want to get OUTER JOINs into\n> the server for v6.4, to give me some major functionality that I'm\n> currently missing, and to finally force myself to get into the \n> internals :) Someone want to recommend how to start on this? My \n> constant fear with doing anything is never knowing how to start *sigh*\n\nMy strategy for this last time was to cajole Vadim into doing the hard\nwork :) But, I promised him to make up for it by helping with his\nprojects in the future when he wanted. I'm not sure what is on his list\nfor v6.4. Outer joins would sure be nice though...\n\n - Tom\n", "msg_date": "Fri, 03 Apr 1998 16:07:20 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Re: pg_dump -z [Please don't apply this]" } ]
[ { "msg_contents": "Hi,\n\nonce and a while this typedef appears in the\ninclude files. On some systems it gives only\nwarnings, on other systems the compilers stops.\n\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany gsm: +49 171 2645325\n\nperl -v output:\n\nThis is perl, version 5.003 with EMBED\n built under irix at Jan 23 1997 16:55:47\n + suidperl security patch\n\nCopyright 1987-1996, Larry Wall\n\nPerl may be copied only under the terms of either the Artistic License or\nthe\nGNU General Public License, which may be found in the Perl 5.0 source kit.\n\nperl -V output:\n\nSummary of my perl5 (5.0 patchlevel 3 subversion 0) configuration:\n Platform:\n osname=irix, osver=6.2, archname=irix-n32\n uname='irix hoshi 6.2 03131015 ip22 '\n hint=recommended, useposix=true, d_sigaction=define\n Compiler:\n cc='cc -n32 -mips3', optimize='-O2 -OPT:Olimit=0', gccversion=\n cppflags='-DBSD_TYPES -D_BSD_SIGNALS -D_BSD_TIME -DLANGUAGE_C -woff\n1009'\n ccflags ='-DBSD_TYPES -D_BSD_SIGNALS -D_BSD_TIME -DLANGUAGE_C -woff\n1009'\n stdchar='unsigned char', d_stdstdio=define, usevfork=false\n voidflags=15, castflags=0, d_casti32=define, d_castneg=define\n intsize=4, alignbytes=8, usemymalloc=y, randbits=15\n Linker and Libraries:\n ld='cc', ldflags =''\n libpth=/usr/lib32 /lib32 /usr/lib /lib\n libs=-lbsd -lm\n libc=, so=so\n Dynamic Linking:\n dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=, ccdlflags=' '\n cccdlflags=' ', lddlflags='-n32 -mips3 -shared'\n\n@INC: /usr/share/lib/perl5/irix-n32/5.003 /usr/share/lib/perl5\n/usr/share/lib/perl5/site_perl/irix-n32 /usr/share/lib/perl5/site_perl\n/usr/share/lib/perl5/sgi_perl .\n\nPostgresql Version = 6.3\npgsql_perl5 Version = 1.7.0\n\nAnd the error message while compiling (at the 'make' phase):\n\n cc -n32 -mips3 -c -I/usr/local/pgsql/include -DBSD_TYPES\n-D_BSD_SIGNALS\n-D_BSD_TIME -DLANGUAGE_C -woff 1009 -O2 -OPT:Olimit=0\n-DVERSION=\\\"1.7.0\\\" -DXS_VERSION=\\\"1.7.0\\\"\n-I/usr/share/lib/perl5/irix-n32/5.003/CORE Pg.c\n\"/usr/local/pgsql/include/c.h\", line 66: error(1084): invalid combination\nof\n type specifiers\n typedef char bool;\n ^\n\n1 error detected in the compilation of \"Pg.c\".\n*** Error code 2 (bu21)\n\nuname -a: IRIX64 soulman 6.4 02121744 IP27\n\nAnd the postgresql 6.3 was succesfully installed&working.\nIf you know a solution to this problem, please reply it to me.\n\nThanks in advance: Spitzer Andras", "msg_date": "Fri, 03 Apr 1998 21:38:36 +0200", "msg_from": "Edmund Mergl <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: pgsql_perl5 problem to compile on IRIX 6.4]" } ]
[ { "msg_contents": "Here are 3 patches (all relative to the src directory) to help with\nthe configuration of v6.3.1. I have replaced the queries for\ninclude/lib directories with --with configuration options. I have\nalso included a list of potential tcl/tk include directories directly\nin the CPPFLAGS variable. As new versions are needed, these should be\nadded to the list in reverse numerical order (libraries are in a\nseparate list near the end). This greatly simplifies the later checks\nif --with-tcl is set. I hope this solution works for everyone. \n\nI also added a check to disable the perl support if postgres was not\nalready installed (as per the instructions in the directory). By the\nway, why must there be an installed pgsql to compile perl support?\nThis seems odd, at best.\n\nFinally, I changed the Makefile in the libpgtcl interface to place the\nshared libraries at the end of the list of files, not at the\nbeginning. With NetBSD at least, libraries are linked in order, so\nthe original sequence does not work.\n\nHope you find these useful.\n\nCheers,\nBrook\n\n===========================================================================\n\n--- configure.in.orig\tMon Mar 23 07:33:33 1998\n+++ configure.in\tFri Apr 3 15:32:42 1998\n@@ -141,62 +141,47 @@\n LIBS=`grep '^LIBS:' $TEMPLATE | awk -F: '{print $2}'`\n \n \n-dnl We now need to check for additional directories (include\n-dnl and library directories.\n-echo \"**************************************************************\"\n-echo \"We now need to know if your compiler needs to search any\n-echo \"additional directories for include or library files. If\n-echo \"you don't know the answers to these questions, then just\n-echo \"hit enter and we will try and figure it out. If things\n-echo \"don't compile because of missing libraries or include\n-echo \"files, then you probably need to enter something here.\n-echo \"enter 'none' or new directories to override default\"\n-echo \"\"\n-$ECHO_N \"Additional directories to search for include files { $SRCH_INC }: $ECHO_C\"\n-if test \"X$with_defaults\" = \"Xyes\"\n-then\n-\ta=$SRCH_INC\n-\techo \"\"\n-else\n-\tread a\n+AC_ARG_WITH(includes,\n+ [ --with-includes=DIR site header files for tk/tcl, etc in DIR],\n+ [\n+\tcase \"$withval\" in\n+\t\"\" | y | ye | yes | n | no)\n+\t AC_MSG_ERROR([*** You must supply an argument to the --with-includes option.])\n+\t ;;\n+\tesac\n+\tINCLUDE_DIRS=\"$withval\"\n+ ])\n+\n+if test \"$INCLUDE_DIRS\"; then\n+\tfor dir in $INCLUDE_DIRS; do\n+\t if test -d \"$dir\"; then\n+\t\tPGSQL_CPPFLAGS=\"$PGSQL_CPPFLAGS -I$dir\"\n+\t else\n+\t\tAC_MSG_WARN([*** Include directory $dir does not exist.])\n+\t fi\n+\tdone\n+fi\n+\n+AC_ARG_WITH(libraries,\n+ [ --with-libraries=DIR site library directories for tk/tcl, etc in DIR],\n+ [\n+\tcase \"$withval\" in\n+\t\"\" | y | ye | yes | n | no)\n+\t AC_MSG_ERROR([*** You must supply an argument to the --with-libraries option.])\n+\t ;;\n+\tesac\n+\tLIBRARY_DIRS=\"$withval\"\n+ ])\n+\n+if test \"$LIBRARY_DIRS\"; then\n+\tfor dir in $withval; do\n+\t if test -d \"$dir\"; then\n+\t\tPGSQL_LDFLAGS=\"$PGSQL_LDFLAGS -L$dir\"\n+\t else\n+\t\tAC_MSG_WARN([*** Library directory $dir does not exist.])\n+\t fi\n+\tdone\n fi\n-if test \"$a.\" = \"none.\" \n-then\n-\tSRCH_INC=\n-\tCPPFLAGS=\n-else\n-\tif test \"$a.\" = \".\"\n-\tthen\n-\t\ta=$SRCH_INC\n-\tfi\n-\tCPPFLAGS=`echo \"$a\" | sed 's@ *@ @g; s@^\\([[^ ]]\\)@-I\\1@; s@ \\([[^ ]]\\)@ -I\\1@g'`\n-\n-fi\n-export CPPFLAGS\n-echo \"- setting CPPFLAGS=$CPPFLAGS\"\n-\n-$ECHO_N 
\"Additional directories to search for library files { $SRCH_LIB }: $ECHO_C\"\n-if test \"X$with_defaults\" = \"Xyes\"\n-then\n-\ta=$SRCH_LIB\n-\techo \"\"\n-else\n-\tread a\n-fi\n-if test \"$a.\" = \"none.\"\n-then\n-\tSRCH_LIB=\n-\tLDFLAGS=\n-else\n-\tif test \"$a.\" = \".\"\n-\tthen\n-\t\ta=$SRCH_LIB\n-\tfi\n-\tLDFLAGS=`echo \"$a\" | sed 's@ *@ @g; s@^\\([[^ ]]\\)@-L\\1@; s@ \\([[^ ]]\\)@ -L\\1@g'`\n-\n-fi\n-export LDFLAGS\n-echo \"- setting LDFLAGS=$LDFLAGS\"\n \n dnl We have read the default value of USE_LOCALE from the template \n dnl file. We have a further option of using \n@@ -242,6 +227,27 @@\n USE_TCL=true; AC_MSG_RESULT(enabled),\n USE_TCL=false; AC_MSG_RESULT(disabled)\n )\n+\n+dnl Add tcl/tk candidate directories to CPPFLAGS\n+if test \"$USE_TCL\"; then\n+\theader_dirs=\"/usr/include $INCLUDE_DIRS\"\n+\ttcl_dirs=\"tcl8.0 tcl80 tcl7.6 tcl76\"\n+\ttk_dirs=\"tk8.0 tk4.2\"\n+\tfor dir in $header_dirs; do\n+\t for tcl_dir in $tcl_dirs; do\n+\t\tif test -d \"$dir/$tcl_dir\"; then\n+\t\t PGSQL_CPPFLAGS=\"$PGSQL_CPPFLAGS -I$dir/$tcl_dir\"\n+\t\tfi\n+\t done\n+\tdone\n+\tfor dir in $header_dirs; do\n+\t for tk_dir in $tk_dirs; do\n+\t\tif test -d \"$dir/$tk_dir\"; then\n+\t\t PGSQL_CPPFLAGS=\"$PGSQL_CPPFLAGS -I$dir/$tk_dir\"\n+\t\tfi\n+\t done\n+\tdone\n+fi\n export USE_TCL\n USE_X=$USE_TCL\n \n@@ -253,6 +259,15 @@\n USE_PERL=true; AC_MSG_RESULT(enabled),\n USE_PERL=false; AC_MSG_RESULT(disabled)\n )\n+\n+dnl Verify that postgres is already installed\n+dnl per instructions for perl interface installation\n+if test \"$USE_PERL\" = \"true\"; then\n+\tif test ! -x $prefix/bin/postgres; then\n+\t AC_MSG_WARN(perl support disabled; postgres not previously installed)\n+\t USE_PERL=\n+\tfi\n+fi\n export USE_PERL\n \n dnl Unless we specify the command line options\n@@ -276,6 +291,13 @@\n AC_PROG_CC\n fi\n \n+CPPFLAGS=\"$CPPFLAGS $PGSQL_CPPFLAGS\"\n+export CPPFLAGS\n+echo \"- setting CPPFLAGS=$CPPFLAGS\"\n+\n+LDFLAGS=\"$LDFLAGS $PGSQL_LDFLAGS\"\n+export LDFLAGS\n+echo \"- setting LDFLAGS=$LDFLAGS\"\n \n AC_CONFIG_HEADER(include/config.h)\n \n@@ -571,17 +593,21 @@\n fi\n \n dnl Check for Tcl archive\n-if test \"$USE_TCL\" = \"true\"\n-then\n-TCL_LIB=\n-AC_CHECK_LIB(tcl, main, TCL_LIB=tcl)\n-if test -z \"$TCL_LIB\"; then\n-AC_MSG_WARN(tcl support disabled; Tcl library missing)\n-USE_TCL=\n-else\n-TCL_LIB=-l$TCL_LIB\n-fi\n-AC_SUBST(TCL_LIB)\n+if test \"$USE_TCL\" = \"true\"; then\n+\tTCL_LIB=\n+\ttcl_libs=\"tcl8.0 tcl80 tcl7.6 tcl76 tcl\"\n+\tfor tcl_lib in $tcl_libs; do\n+\t if test -z \"$TCL_LIB\"; then\n+\t\tAC_CHECK_LIB($tcl_lib, main, TCL_LIB=$tcl_lib)\n+\t fi\n+\tdone\n+\tif test -z \"$TCL_LIB\"; then\n+\t AC_MSG_WARN(tcl support disabled; Tcl library missing)\n+\t USE_TCL=\n+\telse\n+\t TCL_LIB=-l$TCL_LIB\n+\tfi\n+\tAC_SUBST(TCL_LIB)\n fi\n \n dnl Check for location of Tk support (only if Tcl used)\n@@ -612,17 +638,21 @@\n fi\n \n dnl Check for Tk archive\n-if test \"$USE_TCL\" = \"true\"\n-then\n-TK_LIB=\n-AC_CHECK_LIB(tk, main, TK_LIB=tk)\n-if test -z \"$TK_LIB\"; then\n-AC_MSG_WARN(tcl support disabled; Tk library missing)\n-USE_TCL=\n-else\n-TK_LIB=-l$TK_LIB\n-fi\n-AC_SUBST(TK_LIB)\n+if test \"$USE_TCL\" = \"true\"; then\n+\tTK_LIB=\n+\ttk_libs=\"tk8.0 tk80 tk4.2 tk42 tk\"\n+\tfor tk_lib in $tk_libs; do\n+\t if test -z \"$TK_LIB\"; then\n+\t\tAC_CHECK_LIB($tk_lib, main, TK_LIB=$tk_lib)\n+\t fi\n+\tdone\n+\tif test -z \"$TK_LIB\"; then\n+\t AC_MSG_WARN(tk support disabled; Tk library missing)\n+\t USE_TCL=\n+\telse\n+\t TK_LIB=-l$TK_LIB\n+\tfi\n+\tAC_SUBST(TK_LIB)\n fi\n \n 
AC_OUTPUT(GNUmakefile Makefile.global backend/port/Makefile bin/pg_version/Makefile bin/psql/Makefile bin/pg_dump/Makefile backend/utils/Gen_fmgrtab.sh interfaces/libpq/Makefile interfaces/libpgtcl/Makefile interfaces/ecpg/lib/Makefile ) \n\n===========================================================================\n\n--- ../INSTALL.orig\tSun Feb 22 13:02:06 1998\n+++ ../INSTALL\tFri Apr 3 11:57:04 1998\n@@ -267,14 +267,21 @@\n listens for incoming connections on. The\n default for this is port 5432.\n \n- --with-defaults Use default responses to several queries during\n- configuration.\n-\n --with-tcl Enables programs requiring Tcl/Tk and X11,\n including pgtclsh and libpgtcl.\n \n --with-perl Enables the perl interface. Note that this\n requires an installed version of postgreSQL.\n+\n+ --with-includes=DIRS\n+ Include DIRS in list of directories searched\n+ for header files. (Typical use will need\n+ --with-includes=/usr/local/include)\n+\n+ --with-libraries=DIRS\n+ Include DIRS in list of directories searched\n+ for archive libraries. (Typical use will need\n+ --with-libraries=/usr/local/lib)\n \n As an example, here is the configure script I use on a Sparc\n Solaris 2.5 system with /opt/postgres being the install base.\n\n===========================================================================\n\n--- interfaces/libpgtcl/Makefile.in.orig\tMon Mar 23 07:33:35 1998\n+++ interfaces/libpgtcl/Makefile.in\tFri Apr 3 11:13:52 1998\n@@ -31,12 +31,14 @@\n install-shlib-dep :=\n shlib := \n \n+LIBPQ\t\t\t= -L $(SRCDIR)/interfaces/libpq -lpq\n+\n ifeq ($(PORTNAME), linux)\n ifdef LINUX_ELF\n install-shlib-dep\t:= install-shlib\n shlib\t\t:= libpgtcl.so.1\n CFLAGS\t\t+= $(CFLAGS_SL)\n- LDFLAGS_SL\t\t= -shared -L$(SRCDIR)/interfaces/libpq -lpq\n+ LDFLAGS_SL\t\t= -shared\n endif\n endif\n \n@@ -52,14 +54,14 @@\n ifeq ($(PORTNAME), i386_solaris)\n install-shlib-dep\t:= install-shlib\n shlib\t\t\t:= libpgtcl.so.1\n- LDFLAGS_SL\t\t= -G -z text -L$(SRCDIR)/interfaces/libpq -lpq\n+ LDFLAGS_SL\t\t= -G -z text\n CFLAGS\t\t+= $(CFLAGS_SL)\n endif\n \n ifeq ($(PORTNAME), univel)\n install-shlib-dep\t:= install-shlib\n shlib\t\t\t:= libpgtcl.so.1\n- LDFLAGS_SL\t\t= -G -z text -L$(SRCDIR)/interfaces/libpq -lpq\n+ LDFLAGS_SL\t\t= -G -z text\n CFLAGS\t\t+= $(CFLAGS_SL)\n endif\n \n@@ -77,7 +79,7 @@\n \t$(RANLIB) libpgtcl.a\n \n $(shlib): $(OBJS)\n-\t$(LD) $(LDFLAGS_SL) -o $@ $(OBJS) \n+\t$(LD) $(LDFLAGS_SL) -o $@ $(OBJS) $(LIBPQ)\n \tln -sf $@ libpgtcl.so\n \n .PHONY: beforeinstall-headers install-headers\n", "msg_date": "Fri, 3 Apr 1998 21:34:22 -0700 (MST)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "configuration patches" }, { "msg_contents": "> \n> Here are 3 patches (all relative to the src directory) to help with\n> the configuration of v6.3.1. I have replaced the queries for\n> include/lib directories with --with configuration options. I have\n> also included a list of potential tcl/tk include directories directly\n> in the CPPFLAGS variable. As new versions are needed, these should be\n> added to the list in reverse numerical order (libraries are in a\n> separate list near the end). This greatly simplifies the later checks\n> if --with-tcl is set. I hope this solution works for everyone. \n> \n> I also added a check to disable the perl support if postgres was not\n> already installed (as per the instructions in the directory). 
By the\n> way, why must there be an installed pgsql to compile perl support?\n> This seems odd, at best.\n> \n> Finally, I changed the Makefile in the libpgtcl interface to place the\n> shared libraries at the end of the list of files, not at the\n> beginning. With NetBSD at least, libraries are linked in order, so\n> the original sequence does not work.\n> \n> Hope you find these useful.\n> \n\nApplied. These will be very tempting for Marc, because it improves\nautomatic tcl/tk include/library search support.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 5 Apr 1998 16:26:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] configuration patches" } ]
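[Editor's note: with the patches above applied, a typical invocation on a machine with Tcl/Tk under /usr/local looks like this (paths follow the INSTALL example in the patch):]

    ./configure --with-tcl \
                --with-includes=/usr/local/include \
                --with-libraries=/usr/local/lib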
[ { "msg_contents": "Hello ,\n\nin version 6.3.1 of the file src/interfaces/ecpg/preproc/pgc.l .\nThere are some flex specific code .\nlike :\n %option ( no fatal )\n <<EOF>> ( fatal error )\n\nmy os was solaris 2.6. with standart lex .\n\nSo if i want to compile , postgres I must use flex .....\n\nbye\n\n\n--\n ____________________________________________________________\n / Erwan MAS /\\\n | mailto:[email protected] |_/\n | http://www.geocities.com/SiliconValley/Heights/8685/ |\n___|________________________________________________________ |\n\\___________________________________________________________\\__/\n\n", "msg_date": "Sat, 4 Apr 1998 15:43:01 +0200", "msg_from": "Erwan MAS <[email protected]>", "msg_from_op": true, "msg_subject": "lex/flex portability PB in version 6.3.1" }, { "msg_contents": "On Sat, 4 Apr 1998, Erwan MAS wrote:\n\n> Hello ,\n> \n> in version 6.3.1 of the file src/interfaces/ecpg/preproc/pgc.l .\n> There are some flex specific code .\n> like :\n> %option ( no fatal )\n> <<EOF>> ( fatal error )\n> \n> my os was solaris 2.6. with standart lex .\n> \n> So if i want to compile , postgres I must use flex .....\n\n\tActually, I hate to say it, but its pretty much recommended that\nanyone using PostgreSQL use flex/bison...in particular, we're starting to\nfind that 'stock yacc' on some systems chocks on gram.y, because its just\ngotten to be *very* large...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 6 Apr 1998 03:12:36 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] lex/flex portability PB in version 6.3.1" } ]
[ { "msg_contents": "Does this make sense to anyone familiar with that area of the code.\n\nForwarded message:\n> From [email protected] Sat Apr 4 08:59:43 1998\n> Message-Id: <[email protected]>\n> Comments: Authenticated sender is <[email protected]>\n> From: \"Fernando Carello\" <[email protected]>\n> To: Bruce Momjian <[email protected]>\n> Date: Sat, 4 Apr 1998 16:08:35 +0000\n> MIME-Version: 1.0\n> Content-type: text/plain; charset=US-ASCII\n> Content-transfer-encoding: 7BIT\n> Subject: Re: [BUGS] Possible password authentication bug in 6.3.1\n> Priority: normal\n> In-reply-to: <[email protected]>\n> References: <[email protected]> from \"[email protected]\" at Apr 3, 98 08:16:40 pm\n> X-mailer: Pegasus Mail for Win32 (v2.54)\n> \n> \n> > Try adding another host line to the end of the file, and let me know if\n> > that fixes it.\n> \n> Added:\n> \n> host\tusers 192.168.0.1 255.255.255.255 password\n> \n> at the end of pg_hba.conf, but the error is still there.\n> \n> Please note that I don't make use of Unix sockets for the connection, \n> I use TCP/IP instead (\" -i \").\n> \n> I've also commented out the (original) last two lines that allowed \n> restrictless connections from the localhost.\n> \n> I'm not very familiar with Postgres internals, but it *seems* to me \n> that the variable \"areq\" is not getting the right value: it should be \n> \"3\" ( = AUTH_REQ_PASSWORD) for plain-password authentication, while \n> it gets \"13824\".\n> ----\n> Now I'm at home, and I'm playing a little with libpq sources: here \n> I've got Postgres 6.3 (not 6.3.1) and I get a value of areq = 14336 \n> (and the same error, of course).\n> So I printed out areq value in \"fe-connect.c\", just after the \n> pqGetInt call: I get areq = \"14336d\", that is quite strange; of \n> course, shortly after, the call to fe_sendauth fails.\n> Then I tried to force areq=3 just before calling fe_sendauth (we are \n> near the middle of fe-connect.c), and it happens that the error \n> becomes:\n> \n> FATAL 1: Socket command option.\n> \n> Don't know if that helps in some way ! :-)\n> \n> Please let me know if I can do something useful (btw, I'm in trouble \n> with that authentication stuff: for now I'm not able to protect my \n> data, so I shutted down the SQL server), and as always thanks to all \n> you people.\n> \n> \n> \t\t\t\tFernando Carello\n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 4 Apr 1998 10:54:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] Possible password authentication bug in 6.3.1 (fwd)" } ]
[ { "msg_contents": "\n> Try adding another host line to the end of the file, and let me know if\n> that fixes it.\n\nAdded:\n\nhost\tusers 192.168.0.1 255.255.255.255 password\n\nat the end of pg_hba.conf, but the error is still there.\n\nPlease note that I don't make use of Unix sockets for the connection, \nI use TCP/IP instead (\" -i \").\n\nI've also commented out the (original) last two lines that allowed \nrestrictless connections from the localhost.\n\nI'm not very familiar with Postgres internals, but it *seems* to me \nthat the variable \"areq\" is not getting the right value: it should be \n\"3\" ( = AUTH_REQ_PASSWORD) for plain-password authentication, while \nit gets \"13824\".\n----\nNow I'm at home, and I'm playing a little with libpq sources: here \nI've got Postgres 6.3 (not 6.3.1) and I get a value of areq = 14336 \n(and the same error, of course).\nSo I printed out areq value in \"fe-connect.c\", just after the \npqGetInt call: I get areq = \"14336d\", that is quite strange; of \ncourse, shortly after, the call to fe_sendauth fails.\nThen I tried to force areq=3 just before calling fe_sendauth (we are \nnear the middle of fe-connect.c), and it happens that the error \nbecomes:\n\nFATAL 1: Socket command option.\n\nDon't know if that helps in some way ! :-)\n\nPlease let me know if I can do something useful (btw, I'm in trouble \nwith that authentication stuff: for now I'm not able to protect my \ndata, so I shutted down the SQL server), and as always thanks to all \nyou people.\n\n\n\t\t\t\tFernando Carello\n\n", "msg_date": "Sat, 4 Apr 1998 16:08:35 +0000", "msg_from": "\"Fernando Carello\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] Possible password authentication bug in 6.3.1" } ]
[ { "msg_contents": "> > >Running postgresql in interactive mode shows that for each query I \n> > >type there is memory lost. The exact amount of memory lost depends on\n> > >the query I use. The amount of memory not freed is also a function\n> > >of the number of tuples returned.\n> > >\n> > \n> > Oops, it seems some palloc'ed memory is not freed by pfree() but\n> > using some other function(s).\n> > My mistake, sorry.\n> \n> OK, I think I know where the leak is from. I checked AllocSetDump(), and\n> it did not show any problems with any context growing, so I started to\n> suspect direct calls to malloc(). I tried trapping on malloc(), but\n> there are too many calls.\n> \n> Then I ran profiling on the two queries I mentioned, where one leaks and\n> one doesn't, and found that the leaking one had 500 extra calls to\n> malloc. Grep'ing out the calls and comparing the output of the two\n> profiles, I found:\n> \n> 0.00 0.00 1/527 ___irs_gen_acc [162]\n> 0.00 0.00 35/527 _GetDatabasePath [284]\n> 0.00 0.00 491/527 _GetDatabaseName [170]\n> [166] 0.8 0.00 0.00 527 _strdup [166]\n> 0.00 0.00 527/2030 _malloc [105]\n> 0.00 0.00 527/604 _strlen [508]\n> 0.00 0.00 527/532 _bcopy [515]\n> \n> \n> I believe this code was modified by Vadim to fix our problem with blind\n> write errors when using psql while the regression tests were being run.\n> \n> Am I correct on this? I have not developed a fix yet.\n\nHow about changing the strdup()'s in src/backend/utils/init/miscinit.c to\nthe pstrdup() that is in palloc.c so the memory is palloc()/pfree()'d?\n\nI seem to recall a discussion a long time ago about using strdup and how\nit would leak when using the postgres memory contexts, and that the pstrdup\ncode was created to be used instead of strdup to \"fix\" this.\n\ndarrenk\n", "msg_date": "Sat, 4 Apr 1998 17:29:29 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Everything leaks; How it mm suppose to work?" } ]
[ { "msg_contents": "Here is the assert error report.\n\nForwarded message:\n> From [email protected] Fri Mar 27 04:01:04 1998\n> Message-ID: <[email protected]>\n> Date: Fri, 27 Mar 1998 09:50:48 +0100\n> From: Michael Bussmann <[email protected]>\n> To: bcurnow <[email protected]>, [email protected]\n> Subject: Re: [QUESTIONS] pqexec error in psql when root creates table\n> Mail-Followup-To: bcurnow <[email protected]>,\n> \[email protected]\n> References: <[email protected]>\n> Mime-Version: 1.0\n> Content-Type: text/plain; charset=us-ascii\n> In-Reply-To: <[email protected]> from \"bcurnow\" at Mar 27, 1998 00:21:49\n> Sender: [email protected]\n> Precedence: bulk\n> \n> Hi!\n> \n> On 1998-03-27 00:21:49 -0800, bcurnow wrote:\n> \n> > When root creates a table in a database that root created, I get a pqexec\n> > and the backend core dumps.\n> \n> Reminds me of a problem I've had some time ago. Try recompiling with\n> --enable-cassert. This actually _disables_ assert checking (check in\n> include/config.h: NO_ASSERT_CHECKING should be 1).\n> It worked for me, YMMV.\n> \n> HTH\n> MfG\n> MB\n> \n> -- \n> Michael Bussmann <[email protected]> [Tel.: +49 228 9435 211; Fax: +49 228 348953]\n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 5 Apr 1998 00:51:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] pqexec error in psql when root creates table (fwd)" } ]
[ { "msg_contents": "Hi,\n\nI'm currently under the impression that the following change in the\npostgresql system would benefict the overall performance and quality\nof the system.\n\n Tuples for a class and all it's derived classes are stored in one file.\n\nAdvantages:\n- Since all tuples for a given class hierarchy are stored in the same\nphysical file,\n oids now need only be unique to a single inheritance hierarchy\n (instead of unique in each posgresql installation).\n\n So no longer is there any _necessity_ for a systemwide unique oid.\n (This necessity existed because all objects by definition (in OO\nsematics)\n must posses the identity property (\"this\" in C++/Java sometimes also\ncalled \"self\"),\n and because instances of the same hierachy were stored in different files\n it was necesary to provide the identity property in a \"file independant\"\nway.\n\n The bottleneck formed by the systemwide unique oid is replaced by a\n bottleneck for each inheritance hierarchy within an installation.\n If one doesn't use inheritance then it translates to a per table\nbottleneck.\n (Which is what we have now anyway isn't it?).\n\n- Indices, triggers, contraints, etc. are automatically inherited.\n so that we can showcase classic OO semantics (including polymorphism).\n\n- Makes easy implementation of referential integrity for oids possible.\n\n- It becomes possible to store more than 4Giga tuples\n on 32 bit systems\n\n- given an instance of a class identified by an oid it is easy to determine\n the most derived class it belongs to.\n (This feature has been requested by a number of poeple on the\n questions list.)\n\n- It is the first step to support tables with no oids at all (not that this\n is particularly interesting to me though). I'd suggest that system\ncatalogues\n keep their oids though our we would be in for a major rewrite I think.\n\n\nDisadvantages\n- sequential heapscans for tables _with_ derived classes will be less\nefficient\n in general, because now some tuples may have to be skipped since they\nmay\n belong to the wrong class. This is easily solved using indices.\n\n- slight space overhead for tuple when not using inheritance.\n The space is used to tag each tuple with the most derived class it\n belongs to.\n\nTo improve OO support the implementation plan is to:\n1. Add a system attribute to each heap tuple which identifies the most\nderived\n class the instance belongs to. (easy; I think)\n2. Store instances of derived classes in the same physical file as the top\n most base class. I hope that hacking heapopen() to tell it in which file\n it should look for tuples of a particular relation will be enough.\n Maybe this might have implications for caching etc. which I don't\nunderstand.\n (difficult?)\n3. modify the heap_scanning functions to support the new sceem. 
(easy; I\nthink)\n\nNow for my questions.\n- Is implementing the above major surgery?\n- Am I missing something important?\n- What do you guys think of this?\n\nWith regards from Maurice.\n\n\n\n", "msg_date": "Sun, 5 Apr 1998 15:31:51 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "On improving OO support in posgresql and relaxing oid bottleneck at\n\tthe same time" }, { "msg_contents": "> I'm currently under the impression that the following change in the\n> postgresql system would benefict the overall performance and quality\n> of the system.\n> \n> Tuples for a class and all it's derived classes stored in one file.\n\nI hate to sound like a \"small thinker\" here, but I'd be concerned about\nsome issues:\n\n1) true OO semantics are difficult/impossible to accomplish with SQL.\nThis is one reason why Postgres is probably in the OR realm rather than\ntrue OO.\n\n2) Supporting inheritance using one-file storage probably leads to\nlarger overhead in _all_ file accesses, not just ones containing\ninherited tables. Tuples would now contain a variable number of fields,\nwith variable definitions, with ... Ack! :)\n\n3) Indices are fundamentally present to speed up access, though we use\nthem for other purposes too (such as enforcing uniqueness). Perhaps the\ntopic of inheritance, uniqueness, and referential integrity (foreign\nkeys, etc) should be solved (or at least discussed) independent of\nindices, though indices or index-like structures may be involved in the\nsolution.\n\n4) imho, the roughest areas of existing (or missing) capability in\nPostgres involve array types and types which require additional support\ninformation (such as exact numerics). Focusing on fixing/improving these\nareas may lead to cleaning up semantics, mechanisms, and capabilities in\nthe backend, and make other (more derived?) features such as constraint\ninheritance and enforcement easier to implement. Well, it will help\nsomething anyway, even if not constraints :)\n\n - Tom\n", "msg_date": "Sun, 05 Apr 1998 15:44:07 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] On improving OO support in posgresql and relaxing oid\n\tbottleneck at the same time" } ]
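[Editor's note: for context, the inheritance semantics under discussion, in syntax the current system already accepts. The proposal would not change these statements, only where the employee rows physically live:]

    create table person (name text, age int4);
    create table employee (salary int4) inherits (person);

    -- inherited columns come first
    insert into employee values ('alice', 30, 52000);

    select * from person;     -- direct person rows only
    select * from person*;    -- '*' also scans derived classes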
[ { "msg_contents": "I've got binary operators working with automated type conversion, at\nleast for some cases:\n\ntgl=> select 2.0 > 1;\n\n?column?\n--------\nt\n(1 row)\n\nHowever, I'll need to revisit the search algorithms for matching\npossible types to functions and operators; I'm pretty sure that the best\nalgorithm (or even the minimally acceptable one) isn't in there yet.\n\nOn to target type matching...\n\n - Tom\n", "msg_date": "Sun, 05 Apr 1998 13:54:47 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Automatic type conversion" }, { "msg_contents": "> On to target type matching...\n\nOoh. This one was easy, apparently:\n\ntgl=> create table test (s smallint);\nCREATE\n\nOld behavior:\ntgl=> insert into test select 1 + 2;\nERROR: parser: attribute 's' is of type 'int2' but expression is of\ntype 'int4'\n\nNew behavior:\ntgl=> insert into test select 1 + 2;\nINSERT 18432 1\ntgl=> select * from test;\ns\n-\n3\n(1 row)\n\nBack to fixing broken stuff in operators...\n\n - Tom\n", "msg_date": "Sun, 05 Apr 1998 14:46:57 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Automatic type conversion" } ]
[ { "msg_contents": "Bruce Momjian wrote:\n> \n> > I seem to recall a discussion a long time ago about using strdup and how\n> > it would leak when using the postgres memory contexts, and that the pstrdup\n> > code was created to be used instead of strdup to \"fix\" this.\n> >\n> \n> OK, here is a patch to fix the memory leak problem. Not sure when this\n> was introduced, but who cares. Probably not Vadim, as I first thought.\n\n(Sure - not me :). Congratulations with finding this!\nstorage/buffer/bufmgr.c:BufferAlloc():\n\n strcpy(buf->sb_dbname, GetDatabaseName());\n\nand so we had leak for _every_ new buffer allocation!\n\nBut why strdup() was added there? (Hope that nothing is broken now).\n\nVadim\n", "msg_date": "Sun, 05 Apr 1998 21:54:57 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Re: [HACKERS] Everything leaks;\n\tHow it mm suppose to work?" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > > I seem to recall a discussion a long time ago about using strdup and how\n> > > it would leak when using the postgres memory contexts, and that the pstrdup\n> > > code was created to be used instead of strdup to \"fix\" this.\n> > >\n> > \n> > OK, here is a patch to fix the memory leak problem. Not sure when this\n> > was introduced, but who cares. Probably not Vadim, as I first thought.\n> \n> (Sure - not me :). Congratulations with finding this!\n> storage/buffer/bufmgr.c:BufferAlloc():\n> \n> strcpy(buf->sb_dbname, GetDatabaseName());\n> \n> and so we had leak for _every_ new buffer allocation!\n> \n> But why strdup() was added there? (Hope that nothing is broken now).\n\nNo idea, but it was not needed. Removed the strdup, changed the\nfunction to return a const char, and everything worked, so nothing was\nchanging the value. (pstrdup() did not work because the allocation was\nprobably being done in the CacheCxt). Also checked each call, and no\none was changing the variable, so no need for the strdup().\n\nIn fact, I am going to remove the Get/Set functions entirely(too many\ncalls), make it a global variable, and add to the TODO list:\n\n\tfor safety, make global variables const, cast to non-const as needed\n\nThis will be a more general solution for global variable protection.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 5 Apr 1998 10:28:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: [HACKERS] Everything leaks;\n\tHow it mm suppose to work?" } ]
[ { "msg_contents": "> The following query works:\n> brett=> select '1'::money * 2;\n> but this one doesn't:\n> brett=> select sum('1'::money) * 2\n> with the following error:\n> ERROR: There is no operator '*' for types 'money' and 'money'\n> bug?\n\nFeature, until we fix it :)\nI'm working on automatic type conversion for v6.4, and your specific\nexample already works in my preliminary code:\n\ntgl=> select sum('$1.00'::money) * 2;\n--------\n$2.00\n(1 row)\n\nAnd even:\n\ntgl=> select sum('$1.00'::money) * 2.5;\n--------\n$2.50\n(1 row)\n\n> Also, are there any plans to remove the dollar sign from the money\n> type? Or are we just going to use integers with precision. How do I\n> create a table with integers that only output the first two decimal\n> places?\n\nWe don't yet have exact numerics with a scale other than zero (a\nstandard integer). We need to get an implementation with either a BCD\npackage, or 64-bit integers, or the GNU extended-precision math package,\nor ?? to enable arbitrary range integers with scale. We also probably\nneed some work on the backend to pass along extended information for\nthese kinds of types (e.g. the precision and scale defined for a target\ncolumn).\n\n - Tom\n", "msg_date": "Sun, 05 Apr 1998 17:46:06 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] money * money?" } ]
[ { "msg_contents": "\nI am thinking about a project that may change a large number of source files.\nIt may also take a fair amount of time, so I expect the main source tree\nto get changed a fair amount during my work.\n\nWhat is the best way of setting up my source tree to support mergeing my\nchanges later later? Or maybe another way to ask is how do I make the best\nuse of cvs and cvsup etc to support concurant development? Or even, what\nare the active pgsql developers doing now?\n\nDo you try to stay up to date on the patchs for you own source tree? Do you\nmaintain a local cvs repository? If so, how do you handle branching and\nmerging? Do you use \"vendor branches\"?\n\nI am sure I can puzzle something out, but if you have already developed\nways of organizing this that are 'standard procedure' or that work especially\nwell, I would like to take advantage of them.\n\nThanks,\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n", "msg_date": "Sun, 5 Apr 1998 13:44:06 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Developer setup, what works?" }, { "msg_contents": "> \n> \n> I am thinking about a project that may change a large number of source files.\n> It may also take a fair amount of time, so I expect the main source tree\n> to get changed a fair amount during my work.\n> \n> What is the best way of setting up my source tree to support mergeing my\n> changes later later? Or maybe another way to ask is how do I make the best\n> use of cvs and cvsup etc to support concurant development? Or even, what\n> are the active pgsql developers doing now?\n> \n> Do you try to stay up to date on the patchs for you own source tree? Do you\n> maintain a local cvs repository? If so, how do you handle branching and\n> merging? Do you use \"vendor branches\"?\n> \n> I am sure I can puzzle something out, but if you have already developed\n> ways of organizing this that are 'standard procedure' or that work especially\n> well, I would like to take advantage of them.\n\nGood question. If I can do it in stages, I do, and commit my changes\nevery few days. Even if you break the system doing it, that is OK. We\nmay or may not ask you to at least keep it running minimally so others\ncan test. Another way to do it is to 'claim the tree' and we will stay\nout of it and hold our patches until you are finished.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 5 Apr 1998 17:33:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "Bruce Momjian:\n> David Gould:\n> > I am thinking about a project that may change a large number of source files.\n> > It may also take a fair amount of time, so I expect the main source tree\n> > to get changed a fair amount during my work.\n> > \n> > What is the best way of setting up my source tree to support mergeing my\n> > changes later later? Or maybe another way to ask is how do I make the best\n> > use of cvs and cvsup etc to support concurant development? Or even, what\n> > are the active pgsql developers doing now?\n> > \n> > Do you try to stay up to date on the patchs for you own source tree? 
Do you\n> > maintain a local cvs repository? If so, how do you handle branching and\n> > merging? Do you use \"vendor branches\"?\n> > \n> > I am sure I can puzzle something out, but if you have already developed\n> > ways of organizing this that are 'standard procedure' or that work especially\n> > well, I would like to take advantage of them.\n> \n> Good question. If I can do it in stages, I do, and commit my changes\n> every few days. Even if you break the system doing it, that is OK. We\n> may or may not ask you to at least keep it running minimally so others\n> can test. Another way to do it is to 'claim the tree' and we will stay\n> out of it and hold our patches until you are finished.\n \nI guess I am asking a more basic question than this. For the past 10 years or\nso I have been using either SCCS or RCS or ClearCase (Blech!) for source\ncontrol. I have not used CVS or cvsup. So I am really asking what is the best\nway to setup up a source tree on my machine? I have read the CVS manual so I\ncan make it work however, I am just wondering about what the most usual\nconfigureation is. Do I make a local cvsroot and use the \"import vendor\nbranch\" feature? Or do I try to configure to use a remote CVS server\n(ie at postgresql.org). Should I maintain my own branch? And how does cvsup\nfit in with this? Sorry for the lamity, but a 5 minute guide to how to set up\na convenient pgsql CM environment would save me a bit of time.\n\nThanks,\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n", "msg_date": "Sun, 5 Apr 1998 16:20:26 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "> I guess I am asking a more basic question than this. For the past 10 years or\n> so I have been using either SCCS or RCS or ClearCase (Blech!) for source\n> control. I have not used CVS or cvsup. So I am really asking what is the best\n> way to setup up a source tree on my machine? I have read the CVS manual so I\n> can make it work however, I am just wondering about what the most usual\n> configureation is. Do I make a local cvsroot and use the \"import vendor\n> branch\" feature? Or do I try to configure to use a remote CVS server\n> (ie at postgresql.org). Should I maintain my own branch? And how does cvsup\n> fit in with this? Sorry for the lamity, but a 5 minute guide to how to set up\n> a convenient pgsql CM environment would save me a bit of time.\n\nSee tools/FAQ_DEV. We cvsup the source, then use\ntools/make_diff/difforig to generate patches.\n\nWe then post them to the patches list. Or, if you have a postgresql.org\ntelnet account from Marc, you do a cvs checkout in that account to set\nup a tree, ftp the patch to your telnet account on postgresql.org, run\n'patch' with the diff we just ftp'ed, and do a 'cvs update' to apply the\npatch, with an appropriate description.\n\nAfter that, everyone sees your changes.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 5 Apr 1998 21:12:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "Marc G. 
Fournier has a great idea:\n> \tI don't know if anyone has tried this yet, and I've only just\n> barely tried it here...\n> \n> \tIf you take the README.cvsup file that is found at\n> ftp.postgresql.org:/pub/CVSup and comment out the line that states\n> '*default tag=.', it will pull down the actual RCS files.\n> \n> \tSo, for instance, if you were to setup a CVS repository on your\n> machine and, using the above, pull the RCS files into $CVSROOT (and\n> created an appropriate $CVSROOT/CVSROOT directory), you could grab the\n> current source tree easily, and then checkout (and update) a local source\n> tree)...\n> \n> \tThen you'd just have to send in patches periodically, while\n> keeping your local source tree in sync with the master...\n\nThis sounds like exactly what I was looking for. So I am still a little hazy\non the interaction between CVS and cvsup so perhaps you could spell this\nout a bit:\n\nSuppose I want to work in ~/pgsql and refer to the module as pgsql. And I want\nto store all my CVS trees under /local1/cvsroot. If I have understood you\nI need to do\n\nexport CVSROOT=/local0/cvsroot\ncvs init $CVSROOT\nmkdir $CVSROOT/CVSROOT\t\t\t# is this right? why?\ncat >$CVSROOT/modules\npgsql ??? what goes here ???\n^D\n\nAnd then how do I keep in sync with the master?\n\nI am embarrassed to keep asking about this, I really do know about databases,\nbut I have never used CVS and cvsup so all help is appreciated.\n\nThanks\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"I was pleased to be able to answer right away, and I did.\n I said I didn't know.\" -- Mark Twain, Life on the Mississippi\n", "msg_date": "Sun, 5 Apr 1998 22:39:48 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "On Sun, 5 Apr 1998, David Gould wrote:\n\n> I guess I am asking a more basic question than this. For the past 10 years or\n> so I have been using either SCCS or RCS or ClearCase (Blech!) for source\n> control. I have not used CVS or cvsup. So I am really asking what is the best\n> way to setup up a source tree on my machine? I have read the CVS manual so I\n> can make it work however, I am just wondering about what the most usual\n> configureation is. Do I make a local cvsroot and use the \"import vendor\n> branch\" feature? Or do I try to configure to use a remote CVS server\n> (ie at postgresql.org). Should I maintain my own branch? And how does cvsup\n> fit in with this? Sorry for the lamity, but a 5 minute guide to how to set up\n> a convenient pgsql CM environment would save me a bit of time.\n\n\tI don't know if anyone has tried this yet, and I've only just\nbarely tried it here...\n\n\tIf you take the README.cvsup file that is found at\nftp.postgresql.org:/pub/CVSup and comment out the line that states\n'*default tag=.', it will pull down the actual RCS files.\n\n\tSo, for instance, if you were to setup a CVS repository on your\nmachine and, using the above, pull the RCS files into $CVSROOT (and\ncreated an appropriate $CVSROOT/CVSROOT directory), you could grab the\ncurrent source tree easily, and then checkout (and update) a local source\ntree)...\n\n\tThen you'd just have to send in patches periodically, while\nkeeping your local source tree in sync with the master...\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 6 Apr 1998 03:25:29 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "On Sun, 5 Apr 1998, David Gould wrote:\n\n> Marc G. Fournier has a great idea:\n> > \tI don't know if anyone has tried this yet, and I've only just\n> > barely tried it here...\n> > \n> > \tIf you take the README.cvsup file that is found at\n> > ftp.postgresql.org:/pub/CVSup and comment out the line that states\n> > '*default tag=.', it will pull down the actual RCS files.\n> > \n> > \tSo, for instance, if you were to setup a CVS repository on your\n> > machine and, using the above, pull the RCS files into $CVSROOT (and\n> > created an appropriate $CVSROOT/CVSROOT directory), you could grab the\n> > current source tree easily, and then checkout (and update) a local source\n> > tree)...\n> > \n> > \tThen you'd just have to send in patches periodically, while\n> > keeping your local source tree in sync with the master...\n> \n> This sounds like exactly what I was looking for. So I am still a little hazy\n> on the interaction between CVS and cvsup so perhaps you could spell this\n> out a bit:\n> \n> Suppose I want to work in ~/pgsql and refer to the module as pgsql. And I want\n> to store all my CVS trees under /local1/cvsroot. If I have understood you\n> I need to do\n> \n> export CVSROOT=/local0/cvsroot\n> cvs init $CVSROOT\n\n\tNeat, I don't believe I used this when I created mine...:(\n\n> mkdir $CVSROOT/CVSROOT\t\t\t# is this right? why?\n\n\tFrom what I just scanned through in the info pages, isn't this\nwhat cvs init is supposed to do?\n\n> cat >$CVSROOT/modules\n\n\tSame as above...\n\n> pgsql ??? what goes here ???\n\n\tI don't recall what platform you are running on, but as long as\nyou have cvsup available, use the attached 'README.cvsup' (cvsup -L 1 -g\nREADME.cvsup) to pull down the CVS repository and deposit it into your\n$CVSROOT directory...\n\n\tRun that out of cron, once a night, or once a week (base it on the\ncommit messages going through)...never commit your changes to your cvs\nrepository, except just before you are ready to make a patch, as the next\nCVSup you do will overwrite your changes, but you can do checkouts and\nupdates as appropriate...\n\n> I am embarrassed to keep asking about this, I really do know about databases,\n> but I have never used CVS and cvsup so all help is appreciated.\n\n\tThat's okay...I'm embarressed that I don't remember how to do\nthis, after doing it so many times lately :(", "msg_date": "Mon, 6 Apr 1998 08:03:12 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "> > I am embarrassed to keep asking about this, I really do know about \n> > databases, but I have never used CVS and cvsup so all help is \n> > appreciated.\n> That's okay...I'm embarressed that I don't remember how to do\n> this, after doing it so many times lately :(\n\nOh yeah? Well _I'm_ embarrassed that I couldn't figure out how CVSup and\ncvs could be used together on my machine! This was a useful discussion\nDavid; keep up with the stupid questions.\n\nDavid/someone, would you want to take a crack at consolidating this\ndiscussion into either SGML/docbook source or into plain text that I can\nmark up? 
This should go into the developer's guide...\n\n - Tom\n", "msg_date": "Mon, 06 Apr 1998 13:42:51 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "> \n> > > I am embarrassed to keep asking about this, I really do know about \n> > > databases, but I have never used CVS and cvsup so all help is \n> > > appreciated.\n> > That's okay...I'm embarressed that I don't remember how to do\n> > this, after doing it so many times lately :(\n> \n> Oh yeah? Well _I'm_ embarrassed that I couldn't figure out how CVSup and\n> cvs could be used together on my machine! This was a useful discussion\n> David; keep up with the stupid questions.\n> \n> David/someone, would you want to take a crack at consolidating this\n> discussion into either SGML/docbook source or into plain text that I can\n> mark up? This should go into the developer's guide...\n\nOK, now I am confused. Why would you use cvs on your local machine? I\njust use cvsup to download the most recent code, and log into\npostgresql.org to use cvs to update my changes. After the 'cvs update',\nI run cvsup again to re-sync my local source with the current tree.\n\nWhat am I missing?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 6 Apr 1998 10:03:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "On Mon, 6 Apr 1998, Bruce Momjian wrote:\n\n> > \n> > > > I am embarrassed to keep asking about this, I really do know about \n> > > > databases, but I have never used CVS and cvsup so all help is \n> > > > appreciated.\n> > > That's okay...I'm embarressed that I don't remember how to do\n> > > this, after doing it so many times lately :(\n> > \n> > Oh yeah? Well _I'm_ embarrassed that I couldn't figure out how CVSup and\n> > cvs could be used together on my machine! This was a useful discussion\n> > David; keep up with the stupid questions.\n> > \n> > David/someone, would you want to take a crack at consolidating this\n> > discussion into either SGML/docbook source or into plain text that I can\n> > mark up? This should go into the developer's guide...\n> \n> OK, now I am confused. Why would you use cvs on your local machine? I\n> just use cvsup to download the most recent code, and log into\n> postgresql.org to use cvs to update my changes. After the 'cvs update',\n> I run cvsup again to re-sync my local source with the current tree.\n> \n> What am I missing?\n\n\tThe fact that you can login to postgresql.org and update the\nsource tree so that your work doesn't diverge greatly from that which is\nthe main source tree? :)\n\n\n", "msg_date": "Mon, 6 Apr 1998 10:27:22 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "> > ... I couldn't figure out how CVSup \n> > and cvs could be used together on my machine!\n> OK, now I am confused. Why would you use cvs on your local machine? \n> I just use cvsup to download the most recent code, and log into\n> postgresql.org to use cvs to update my changes. 
After the 'cvs \n> update', I run cvsup again to re-sync my local source with the current \n> tree.\n> \n> What am I missing?\n\nI _think_ what this will do is allow me to do my CVSup any time I want,\nthen do \"cvs update ...\" on my local machine. Working changes I have\nmade _won't_ get erased (as they do when you work directly in your CVSup\ntarget area), but rather cvs will show them as modified. It may be that\nI _only_ want the cvs repository, and then can set my local CVSROOT to\npoint at it.\n\nWill let you know; I've got the cvs repository downloaded now...\n\n - Tom\n", "msg_date": "Mon, 06 Apr 1998 14:37:51 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "On Mon, 6 Apr 1998, Thomas G. Lockhart wrote:\n\n> > > I am embarrassed to keep asking about this, I really do know about \n> > > databases, but I have never used CVS and cvsup so all help is \n> > > appreciated.\n> > That's okay...I'm embarrassed that I don't remember how to do\n> > this, after doing it so many times lately :(\n> \n> Oh yeah? Well _I'm_ embarrassed that I couldn't figure out how CVSup and\n> cvs could be used together on my machine! This was a useful discussion\n> David; keep up with the stupid questions.\n\n\tActually, for that, I had to go to the man pages...took me a while\nto figure that one out too :)\n\n\n", "msg_date": "Mon, 6 Apr 1998 11:59:31 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "> > OK, now I am confused. Why would you use cvs on your local machine? I\n> > just use cvsup to download the most recent code, and log into\n> > postgresql.org to use cvs to update my changes. After the 'cvs update',\n> > I run cvsup again to re-sync my local source with the current tree.\n> > \n> > What am I missing?\n> \n> \tThe fact that you can login to postgresql.org and update the\n> source tree so that your work doesn't diverge greatly from that which is\n> the main source tree? 
:)\n> \n> No, he seems to want to have cvs running on his local machine, at least\n> that is what I see Thomas saying.\n\n\tOkay...put yourself in his shoes...no access to postgresql.org...\n\n\tNow, he can do a cvsup of the sources, which, if he makes any\nchanges to his sources, overwrites those changes...or, he can cvsup the\ncvs repository itself, and manipulate that as if he were connected\ndirectly to postgresql.org...\n\n\tBasically, he can do a \"cvs update pgsql\" to bring in any new\nchanges, *plus* have CVS auto-merge his changes into it...\n\n\tOne way, he submits a whole bunch of little patches, the other he\ncan work until he's ready, on his home machine, and submit one large\npatch...both ways he succeeds in staying in sync with any changes that we\nmake, or anyone else does...one is less convenient to us all than the\nother though :)\n\n\tBTW...got cheque today...thanks...:)\n\n", "msg_date": "Mon, 6 Apr 1998 12:38:22 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "> > No, he seems to want to have cvs running on his local machine, at least\n> > that is what I see Thomas saying.\n> \n> \tOkay...put yourself in his shoes...no access to postgresql.org...\n> \n> \tNow, he can do a cvsup of the sources, which, if he makes any\n> changes to his sources, overwrites those changes...or, he can cvsup the\n> cvs repository itself, and manipulate that as if he were connected\n> directly to postgresql.org...\n> \n> \tBasically, he can do a \"cvs update pgsql\" to bring in any new\n> changes, *plus* have CVS auto-merge his changes into it...\n> \n> \tOne way, he submits a whole bunch of little patches, the other he\n> can work until he's ready, on his home machine, and submit one large\n> patch...both ways he succeeds in staying in sync with any changes that we\n> make, or anyone else does...one is less convenient to us all than the\n> other though :)\n\nOK, now my head hurts.\n\nSo he basically keeps his copy of CVSROOT current with our tree, and has\na personal checkout of it that he uses to make changes. And he can run\n'cvs update' and that will change his personal tree to stay in sync with\nour changes? Yikes.\n\nIf you are over-writing the CVSROOT with remote changes via cvsup, is\ncvs smart enough to realize how to keep his sources in sync?\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 6 Apr 1998 12:54:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" 
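To make the two workflows being debated here concrete, a minimal sketch of the local-repository variant (paths, supfile name, and module name are illustrative; the cvsup flags are the ones used elsewhere in this thread):

# one-time setup: a local CVS repository fed by CVSup
export CVSROOT=/local1/cvsroot
cvs init $CVSROOT                 # creates the $CVSROOT/CVSROOT admin area
cvsup -g -L 1 README.cvsup        # supfile with '*default tag=.' commented out,
                                  # so the raw RCS files land under $CVSROOT
cvs checkout pgsql                # private working tree, safe to edit

# periodically: refresh the RCS files, then merge upstream work
cvsup -g -L 1 README.cvsup
cd pgsql && cvs update            # local edits survive, marked 'M'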
}, { "msg_contents": "On Mon, 6 Apr 1998, Bruce Momjian wrote:\n\n> > > No, he seems to want to have cvs running on his local machine, at least\n> > > that is what I see Thomas saying.\n> > \n> > \tOkay...put yourself in his shoes...no access to postgresql.org...\n> > \n> > \tNow, he can do a cvsup of the sources, which, if he makes any\n> > changes to his sources, overwrites those changes...or, he can cvsup the\n> > cvs repository itself, and manipulate that as if he were connected\n> > directly to postgresql.org...\n> > \n> > \tBasically, he can do a \"cvs update pgsql\" to bring in any new\n> > changes, *plus* have CVS auto-merge his changes into it...\n> > \n> > \tOne way, he submits a whole bunch of little patches, the other he\n> > can work until he's ready, on his home machine, and submit one large\n> > patch...both ways he succeeds in staying in sync with any changes that we\n> > make, or anyone else does...one is less convenient to us all than the\n> > other though :)\n> \n> OK, now my head hurts.\n> \n> So he basically keeps his copy of CVSROOT current with our tree, and has\n> a personal checkout of it that he uses to make changes. And he can run\n> 'cvs update' and that will change his personal tree to stay in sync with\n> our changes? Yikes.\n> \n> If you are over-writing the CVSROOT with remote changes via cvsup, is\n> cvs smart enough to realize how to keep his sources in sync?\n\n\tTry it on hub.org sometime...\n\n\tGo into a file and make a change to it...then do a 'cvs update\n<dir>'...that file you modified will be marked as 'M' (Modified)\n\n\tNow, if he makes changes to the same file that you made changes\nto, cvsup's the new *RCS* files (which is what we've detailed how to do),\nwhen he does a 'cvs update', it will actually take your changes and merge\nthem with his changes *unless* there is a conflict with that change (i.e.\nthe section of the file he changed is the same one that you\nchanged)\n\n\tIt's what CVS was designed for...two ppl can work on the same file\nwith little to no conflicts between them when it comes time to\nre-introduce the changes into the main stream...\n\n\n\n", "msg_date": "Mon, 6 Apr 1998 13:11:28 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "I've seen several people give some fairly (IMHO) icky \"solutions\" to\nthis.\n\nI consider them icky because they didn't support the following:\n- local availability of \"cvs log\" messages\n- local availability of cvs branches (may not be important if pgsql\n doesn't use many branches, but this was an absolute requirement with\n the FreeBSD repository)\n- ability to commit partial changes of a work in progress easily without\n upsetting the main cvs branch. (This was required since we often had\n a number of people locally using or working on these changes).\n\n\nI only know of two ways to do this:\n\n1) setup and use cvs in client mode assuming that pgsql has (or can)\n setup their cvs to act as a server.\n\nOne drawback to this for David is that he will be working on a long term\nproject which will break things if he commits them to the main branch\n(and he won't want to keep all his changes uncommitted if he's smart).\nA simple solution to this is that he could use a branch in the main\nrepository and merge that into the main branch when he's all done. 
This\nwould be ideal assuming David's got a fast connection to postgresql.org\nand there are no security problems with the postgres cvs gods allowing\ncvs client access.\n\nIf nobody else will be working on the same branch then there isn't a\nneed to have the changes go to the main pgsql tree right away so it\nmay be better to do the following:\n\n\n2) setup a local cvs repository, use cvsup to keep all the bits up to\n date with the main pgsql repository. David creates a local branch\n (see caveat below) and puts all changes there.\n\nWhenever he feels like syncing the source he's working on he runs cvsup\nand either\na) merges changes from the main branch into his branch on his tree\nor\nb) merges his changes into a new branch on the head (I prefer this method).\n\nCVS assists with either of these operations making them mostly painless\nexcept, of course, if there are conflicting changes.\n\nWhen he's all done and wants his work committed to the main branch he\ndoes one of the above and then he can trivially generate a huge patch\nrelative to the current head of the main repository suitable for anyone\nto commit. (Or you could do fancy tricks with cvsup to get David's\nbranch into the main tree in order to keep his intermediate commit\nmessages).\n\n\nCaveat:\n\nThe problem with this (that no one here has mentioned how to get\naround) is that with the \"delete\" option cvsup will blow away any\nlocal changes you make since if the checksums on the resulting files\ndon't match it resends the whole file. Okay, so you remove the \"delete\"\noption, right? Only partially correct. Cvsup will still blow away any\nchanges with the same revision numbers.\n\nHere's an example of what happens:\n\n1) cvsup fetches version 1.69 of a file\n2) you locally make changes and commit version 1.70 of this file\n3) everything is cool until some time later when someone commits 1.70 to\n the pgsql main repository then:\n4) cvsup fetches version 1.70 and blows away your version 1.70\n\nSo how do you get around this? Simple (sort of): you create a branch\npoint. Now your changes go to 1.69.0.1, 1.69.0.2, etc. Cvsup will\nmerrily add 1.70, 1.71, etc. without disturbing your 1.69.0 branch.\nEXCEPT if there is a branch created in the main tree at the same point\n(thereby causing a collision of version numbers).\n\nTo avoid this you can make a small change to your local cvs program to\nuse a higher branch number instead of starting at zero. For example,\nmake it use 1.69.42.1, 1.69.42.2, etc.; that way for a collision to occur\nthe main pgsql repository must be branched more than 20 times at a\nsingle point (cvs uses only even numbers for branching).\n\n\n\nFor those that are lost, here is a step-by-step summary of what I've done\nwith the FreeBSD repository in the past (modified here to work with the\npgsql repository):\n\nMy $CVSROOT=/cvs (created long ago via cvs init) and I want the pgsql\nrepository to be rooted at $CVSROOT/pgsql. Change the appropriate paths\nbelow to match whatever you prefer.\n\nCreate a pgsql.cvsup file as follows:\n\npgsql release=cvs host=postgresql.org base=/cvs/pgsql prefix=/cvs \\\n backup compress use-rel-suffix\n\nNote: This is actually a single line, I'm not sure if cvsup understands\nthe '\\' character. \nNote2: There is no delete option present.\n\nThe \"base\" argument tells cvsup where to create a \"sup\" subdirectory\nfor its bookkeeping (here it is put in /cvs/pgsql/sup). 
The \"prefix\"\nargument is where to dump the actual files, since all the pgsql files\nstart with pgsql/ this actually puts everything into /cvs/pgsql\n\n\nIf you didn't add pgsql to the root of the local cvs repository (as I\ndid) then add something like this to the modules file:\npgsql\t-a path_from_cvsroot_to/pgsql\n\n\nRun cvsup to fetch the files. (I do \"cvsup -g -P - pgsql.cvsup\")\n\nCheckout the version you want to work from (probably the head).\n\nTag the branch point:\n% cvs tag BLAH_BLAH_BP\n\nCreate the branch:\n% cvs tag -b BLAH_BLAH\n\nNow work on the files and commit things whenever you like. Run cvsup as\noften as you like and when you want to sync your working sources up do\none of the following:\n\na) merge changes between the branch point and the head into your branch:\n\n(add an -r option to the rtag command if you don't want to use the head\nof the main branch)\n% cvs rtag some_tag pgsql\n% cvs update -j BLAH_BLAH_BP -j some_tag\n\nNext time use:\n% cvs rtag some_other_tag\n% cvs update -j some_tag -j some_other_tag\n\nThe extra tags are required so that you don't try merging the same\nchanges multiple times. (There may be a better way to do this, I've\nnever actually done it this way).\n\nInstead I do:\n\nb) merge changes on your local branch to a new branch\n\nGet the head (or use a -r option if you don't want the head)\n% cvs update -A\t\n\nCreate a new branch\n% cvs tag NEW_BRANCH_BP\n% cvs tag -b NEW_BRANCH\n\nMerge in your own changes\n% cvs update -j BLAH_BLAH_BP -j BLAH_BLAH\n\n\nThis merge operation can happen as often as you like since it's mostly\nautomatic (although the tag operations can take awhile). With FreeBSD\nI used the (b) approach and did the merge every time there was a new\nstable release.\n\nie I did:\n% cvs update -rRELENG_2_2_6\n% cvs tag DDM_2_2_6_BP\n% cvs tag -b DDM_2_2_6\n% cvs update -j DDM_2_2_5_BP -j DDM_2_2_5\n\nOccasionally I'd use cvs to merge in specific important changes without\nmerging everything else.\n\n\nWhen you want to submit things to the main pgsql repository you can\ncreate a huge diff like this:\n\n% cvs diff -uN -r LAST_BRANCH_BP -r LAST_BRANCH\n\n\nThis may seem overly complicated but I found it worked quite well with\nthe FreeBSD repository. If you don't fully understand branches under\ncvs I highly suggest a full reading of the cvs.info file.\n\nHope this helped.\n-- \nDave Chapeskie, DDM Consulting\nE-Mail: [email protected]\n", "msg_date": "Mon, 6 Apr 1998 15:17:12 -0400", "msg_from": "Dave Chapeskie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Mon, 6 Apr 1998, Thomas G. Lockhart wrote:\n> \n> > > > I am embarrassed to keep asking about this, I really do know about \n> > > > databases, but I have never used CVS and cvsup so all help is \n> > > > appreciated.\n> > > That's okay...I'm embarressed that I don't remember how to do\n> > > this, after doing it so many times lately :(\n> > \n> > Oh yeah? Well _I'm_ embarrassed that I couldn't figure out how CVSup and\n> > cvs could be used together on my machine! 
This was a useful discussion\n> > David; keep up with the stupid questions.\n> \n> \tActually, for that, I had to go to the man pages...took me a while\n> to figure that one out too :)\n> \n\nWhile we're on the topic of stupid cvsup questions, does anyone know\nhow to get this thing to use a firewall?\n\nOcie\n\n", "msg_date": "Mon, 6 Apr 1998 13:00:56 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" }, { "msg_contents": "Attached is the man page...there is a -P option that is for getting around\nfirewalls...supposedly...\n\nOn Mon, 6 Apr 1998 [email protected] wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > On Mon, 6 Apr 1998, Thomas G. Lockhart wrote:\n> > \n> > > > > I am embarrassed to keep asking about this, I really do know about \n> > > > > databases, but I have never used CVS and cvsup so all help is \n> > > > > appreciated.\n> > > > That's okay...I'm embarrassed that I don't remember how to do\n> > > > this, after doing it so many times lately :(\n> > > \n> > > Oh yeah? Well _I'm_ embarrassed that I couldn't figure out how CVSup and\n> > > cvs could be used together on my machine! This was a useful discussion\n> > > David; keep up with the stupid questions.\n> > \n> > \tActually, for that, I had to go to the man pages...took me a while\n> > to figure that one out too :)\n> > \n> \n> While we're on the topic of stupid cvsup questions, does anyone know\n> how to get this thing to use a firewall?\n> \n> Ocie\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org", "msg_date": "Tue, 7 Apr 1998 03:00:02 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Developer setup, what works?" } ]
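On the firewall question that closes this thread: as I read the CvsUp man page (verify against your version), -P - forces all traffic over the single control connection instead of opening a separate data channel, which is the firewall-friendly mode. It is the same flag Dave's fetch command above already uses:

# single TCP connection only -- easier to pass through a firewall
cvsup -g -L 2 -P - pgsql.cvsup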
[ { "msg_contents": "Not sure how to handle this. Can other HP-UX people comment on this?\n\n> Yes, I had to modify the Makefile.hpux and the permissions of the shared \n> library file. I made the following change to \n> postgresql-6.3.1/src/makefiles/Makefile.hpux \n> \n> *** Makefile.hpux Fri Apr 3 14:38:37 1998 \n> --- Makefile.hpux.org Fri Apr 3 14:37:51 1998 \n> *************** \n> *** 1,7 **** \n> HPUX_MAJOR= $(shell uname -r|sed 's/^[^.]*\\.\\([^.]*\\).*/\\1/') \n> # HP-UX 10 has a select() in libcurses, so we need to get the libc version \n> first \n> ifeq ($(HPUX_MAJOR), 10) \n> ! LDFLAGS:= -Wl,-E,+b/opt/postgres/lib -lc $(LDFLAGS) \n> endif \n> \n> # HP-UX 09 needs libc before libPW, so we need to get the libc version first \n> --- 1,7 ---- \n> HPUX_MAJOR= $(shell uname -r|sed 's/^[^.]*\\.\\([^.]*\\).*/\\1/') \n> # HP-UX 10 has a select() in libcurses, so we need to get the libc version \n> first \n> ifeq ($(HPUX_MAJOR), 10) \n> ! LDFLAGS:= -Wl,-E -lc $(LDFLAGS) \n> endif \n> \n> # HP-UX 09 needs libc before libPW, so we need to get the libc version first \n> \n> \n> I also needed to change the permissions on the libpq.sl lib from read access \n> to read and execute access, i.e., \n> \n> -rwxr-xr-x 1 postgres postgres 98489 Apr 3 15:07 libpq.sl \n> \n> \n> --- \n> \tCarl\n> \n> \n> Received: 01 Apr 1998 03:57:37 Sent: 01 Apr 1998 03:54:59\n> From: \"Pierre Habraken\" <[email protected]>\n> To: [email protected]\n> Subject: Re: HP-UX shared lib problem with 6.3.1\n> Cc: [email protected]\n> Reply-to: [email protected]\n> \n> \n> Have you got any answer to the question you posted last week about the\n> wrong identification of install directory for the shared libs ?\n> \n> The hpux dld.sl man page says that a run time search path may be\n> specified using either the option +b followed by the search path list,\n> or the option +s which enables the path list defined by SHLIB_PATH at\n> run time.\n> \n> The makefile in src/interfaces/libpq defines the option -b which is\n> apparently not appropriate. I tried both +b <path list> and +s but the\n> result in either case was a bunch of unresolved symbols...\n> \n> Pierre\n> -- \n> ________________________________________________________________________\n> Pierre HABRAKEN - mailto:[email protected]\n> Tél: 04 76 51 48 87 - Fax: 04 76 51 45 63\n> Université Joseph Fourier - Département Scientifique Universitaire\n> Postal address: CAFIM/DSU BP53 38041 Grenoble Cedex 9\n> ________________________________________________________________________\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 5 Apr 1998 18:20:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Re: HP-UX shared lib problem with 6.3.1" } ]
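The two HP-UX fixes reported in this thread reduce to a short recipe (paths follow Carl's example; adjust them to your installation prefix):

# 1. embed a run-time shared-library search path when linking on HP-UX 10
#    (+b<path> tells dld.sl where to look at run time), per Carl's diff:
LDFLAGS:= -Wl,-E,+b/opt/postgres/lib -lc $(LDFLAGS)

# 2. dld.sl will only map shared libraries that are executable:
chmod 755 /opt/postgres/lib/libpq.sl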
[ { "msg_contents": "> On Sat, 4 Apr 1998, Erwan MAS wrote:\n> \n> > Hello,\n> > \n> > In version 6.3.1 of the file src/interfaces/ecpg/preproc/pgc.l,\n> > there is some flex-specific code,\n> > like:\n> > %option ( no fatal )\n> > <<EOF>> ( fatal error )\n> > \n> > My OS was Solaris 2.6 with standard lex.\n> > \n> > So if I want to compile postgres, I must use flex...\n> \n> \tActually, I hate to say it, but it's pretty much recommended that\n> anyone using PostgreSQL use flex/bison...in particular, we're starting to\n> find that 'stock yacc' on some systems chokes on gram.y, because it's just\n> gotten to be *very* large...\n> \n\n6.3.1 now breaks under Irix lex as well.\n\nIf we're going to REQUIRE flex rather than lex, this MUST be made clear\nin the installation docs!!!!\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Mon, 6 Apr 1998 09:34:23 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] lex/flex portability PB in version 6.3.1" }, { "msg_contents": "> > > In version 6.3.1 of the file src/interfaces/ecpg/preproc/pgc.l,\n> > > there is some flex-specific code,\n> > > like:\n> > > %option ( no fatal )\n> > > <<EOF>> ( fatal error )\n> > Actually, I hate to say it, but it's pretty much recommended that\n> > anyone using PostgreSQL use flex/bison...in particular, we're \n> > starting to find that 'stock yacc' on some systems chokes on gram.y, \n> > because it's just gotten to be *very* large...\n> 6.3.1 now breaks under Irix lex as well.\n> If we're going to REQUIRE flex rather than lex, this MUST be made \n> clear in the installation docs!!!!\n\nWe do not (or should not, rather) _require_ flex for an installation. We\nstarted shipping the flex output for the main scanner after we started\nusing \"exclusive states\" (unsupported in old AT&T lexers) to simplify\nthe scanner code.\n\nSimilarly, we ship the bison output for the parser.\n\nMichael, if these flex features are not adding capability, can you\nremove them? If they are very helpful, then can we start including the\npgc.c flex output in patches and into the distribution? We would need to\nchange the Makefile to keep it from deleting pgc.c when doing a \"make\nclean\". We also need to be careful that the file creation times on the\nfiles in the cvs tree are in the proper order so that make won't try\nrecreating the output...\n\n - Tom\n", "msg_date": "Mon, 06 Apr 1998 14:04:52 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] lex/flex portability PB in version 6.3.1" } ]
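Until the generated pgc.c ships with the distribution, a build-time guard along these lines would spare Irix and Solaris users the stock-lex failure (a sketch only; it assumes GNU flex is in the PATH and that a pre-generated pgc.c may be present):

# confirm GNU flex, not the vendor lex, will be used
flex -V >/dev/null 2>&1 || { echo "GNU flex is required to build pgc.l"; exit 1; }

# if a pre-generated pgc.c is shipped, keep make from regenerating it
touch src/interfaces/ecpg/preproc/pgc.c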
[ { "msg_contents": "> OK, we have most of the open items fixed. Marc, can you check on number\n> 4, and Thomas, please apply your patch for item 3. We can then package\n> a patch and close 6.3.*.\n> \n> \n> > \tindexes not used that were used in 6.2(fixed)\n> > \tmemory leak in backend when run on simple queries(fixed)\n> > \tnegative sign causing problems in various areas, like float4 & shortint\n> > \tconfigure assert checking is reversed\n> > \tUNION crashes on ORDER BY or DISTINCT(already fixed in source tree)\n> \n\nFor a 6.3.2 release, can we make sure the patch is applied to configure correctly\nafter running autoconf? (The one which fixes configure for compilers other\nthan gcc.)\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Mon, 6 Apr 1998 09:40:22 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Open 6.3.1 issues" } ]
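Andrew's request amounts to a reproducible release step; a sketch of what that might look like (the patch file name is hypothetical, standing in for the non-gcc configure fix he mentions):

cd src
autoconf                              # regenerate configure from configure.in
patch -p0 < configure-non-gcc.patch   # re-apply the non-gcc compiler fix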
[ { "msg_contents": "-----Original Message-----\nFrom: Thomas G. Lockhart <[email protected]>\nTo: Maurice Gittens <[email protected]>\nCc: [email protected] <[email protected]>\nDate: Sunday, 5 April 1998 23:56\nSubject: Re: [HACKERS] On improving OO support in postgresql and relaxing oid\nbottleneck at the same time\n\n\n>> I'm currently under the impression that the following change in the\n>> postgresql system would benefit the overall performance and quality\n>> of the system.\n>>\n>> Tuples for a class and all its derived classes stored in one file.\n>\n>I hate to sound like a \"small thinker\" here, but I'd be concerned about\n>some issues:\n>\n>1) true OO semantics are difficult/impossible to accomplish with SQL.\n>This is one reason why Postgres is probably in the OR realm rather than\n>true OO.\n\nOK, let's be more specific. We'll define OO semantics as\nsupport for:\n1. identity\nWe already have partial support; we just have to get the details right.\n2. inheritance\nWe already have support. I'm suggesting an implementation which\nis a better overall choice IMO, because we avoid the system-wide\nunique oid requirement while it also provides for an improvement\nin the support for polymorphism.\n3. polymorphism\nPartially supported, but some necessary properties are not yet\ninherited automatically. I believe overriding triggers\nis likely to work automatically. There are however some\nchoices which we'll have to make.\n\nAs far as I see, these concepts can be implemented without\nany changes to the current definition of the query language in\npostgresql.\n\nEncapsulation would seem to require new syntax. It also\nseems not to fully fit into the relational model, so\nwe leave it out.\n\n>\n>2) Supporting inheritance using one-file storage probably leads to\n>larger overhead in _all_ file accesses, not just ones containing\n>inherited tables. Tuples would now contain a variable number of fields,\n>with variable definitions, with ... Ack! :)\n\nYes, but this overhead is very small for tables without inheritance.\nAn extra statement like:\n\nheap_getnext(ScanDesc scanDesc)\n{\n    ...\n    while (!done)\n    {\n        tuple = readTuple(...);\n        ...\n        /* skip tuples whose class is outside the scanned hierarchy */\n        if (IsInstanceOf(tuple->reloid, scanDesc->reloid))\n        {\n            return tuple;\n        }\n        ...\n    }\n}\n\nThe information on inheritance relations between classes can be precomputed\nwhen a heap scan descriptor is created.\n\nIMO this overhead is not significant when there is no inheritance.\nWhen there is inheritance we simply use indices to speed things up,\nif it's deemed necessary.\n\n>\n>3) Indices are fundamentally present to speed up access, though we use\n>them for other purposes too (such as enforcing uniqueness). Perhaps the\n>topic of inheritance, uniqueness, and referential integrity (foreign\n>keys, etc) should be solved (or at least discussed) independent of\n>indices, though indices or index-like structures may be involved in the\n>solution.\n\nLet's consider the following mail to the questions list by Brett McCormick\n<[email protected]> (copied from the list archive):\n\n> I've got a table that has a primary key with a default of\n> nextval('seq'). I've got another table which inherits this one, but\n> it fails to inherit the unique btree index. It does inherit the\n> default value. 
So, I'm assuming that if I create a unique index for\n> that field on the child table, it won't keep you from inserting values\n> that exist in that field in the parent table (and since they both\n> share the same sequence, that's what I want).\n>\n> So primary keys do not work in this situation. Are there plans to\n> enhance the inheritance? I have no idea how it works, is it\n> intelligent? Seems more klunky than not, but I haven't really looked\n> at the code. Should I stop using inheritance altogether, considering\n> its drawbacks (no idea what child class it is in when selecting from\n> parent and all children, no shared indices/pkeys) when I don't select\n> from them all at once?\n\nThis person identifies a number of problems with the current system:\n- no idea what child class it is when selecting from parent and all children\n- no shared indices/primary keys\n- no inheritance of unique attributes, etc.\n\nI can also add similar points:\n- triggers should also be inherited. This gives us polymorphism\nwithout introducing any new syntax.\n- etc.\n\nI agree that conceptually indices are present only for speed. But the\nreality is that by inheriting them we give users that which they\nexpect. (There are more emails like this one to be found on\nthe questions lists).\n\nI think that what Brett wants to do is legitimate.\nStoring the tuples of the same class hierarchy in different files is IMO\nan unfortunate design choice of the original implementors\nof postgresql.\n\nThe suggestion I'm making solves all of Brett's problems.\n>\n>4) imho, the roughest areas of existing (or missing) capability in\n>Postgres involve array types and types which require additional support\n>information (such as exact numerics). Focusing on fixing/improving these\n>areas may lead to cleaning up semantics, mechanisms, and capabilities in\n>the backend, and make other (more derived?) features such as constraint\n>inheritance and enforcement easier to implement. Well, it will help\n>something anyway, even if not constraints :)\n\nI see that we have similar ideas about where the system should eventually\nbe. I do however believe that we'll get there by means of cleaning up\nthe semantics and then using these cleaned semantics to\nmake the system as a whole more conceptually pure.\n\nIn my experience systems which are conceptually pure can be\nmade to be very efficient.\n\nI think that removing the oid bottleneck, while also solving\na number of fundamental problems (from an OO perspective)\nwith one and the same change, is a Good Thing (tm) -:).\n\nThanks, with regards from Maurice.\n\n\n", "msg_date": "Mon, 6 Apr 1998 12:49:23 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] On improving OO support in postgresql and relaxing oid\n\tbottleneck at the same time" }, { "msg_contents": "\nAre there some good, human-readable documents that outline these and\nother basic OO concepts? I've done some OO programming, but I'm fuzzy\non a lot of issues. Sorry to be so off-topic.\n\n--brett\n\nOn Mon, 6 April 1998, at 12:49:23, Maurice Gittens wrote:\n\n> I think that removing the oid bottleneck, while also solving\n> a number of fundamental problems (from an OO perspective)\n> with one and the same change, is a Good Thing (tm) -:).\n", "msg_date": "Mon, 6 Apr 1998 13:41:25 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "OO resources" }, { "msg_contents": "Maurice Gittens>\n> >> I'm currently under the impression that the following change in the\n> >> postgresql system would benefit the overall performance and quality\n> >> of the system.\n> >>\n> >> Tuples for a class and all its derived classes stored in one file.\n> >\n> >I hate to sound like a \"small thinker\" here, but I'd be concerned about\n> >some issues:\n> >\n...\n> >2) Supporting inheritance using one-file storage probably leads to\n> >larger overhead in _all_ file accesses, not just ones containing\n> >inherited tables. Tuples would now contain a variable number of fields,\n> >with variable definitions, with ... Ack! 
:)\n> \n> Yes, but this overhead is very small for tables without inheritance.\n> An extra statement like:\n\nAnything that gets done for every row is on _the_ critical path. Any extra\ncode here will have a performance penalty. We are already an order of\nmagnitude too slow on scans. Think in terms of a few hundred instructions\nper row.\n\nI will also say that table inheritance is rarely used in real applications.\nPartly no doubt this is because the implementation is not wonderful, but\nI also think that it may be one of those ideas like time travel that\nsound great but in practice no one can figure out a use for it.\n\n> heap_getnext(ScanDesc scanDesc)\n> {\n>     ...\n>     while (!done)\n>     {\n>         tuple = readTuple(...);\n>         ...\n>         /* skip tuples whose class is outside the scanned hierarchy */\n>         if (IsInstanceOf(tuple->reloid, scanDesc->reloid))\n>         {\n>             return tuple;\n>         }\n>         ...\n>     }\n> }\n> \n> The information on inheritance relations between classes can be precomputed\n> when a heap scan descriptor is created.\n> \n> IMO this overhead is not significant when there is no inheritance.\n> When there is inheritance we simply use indices to speed things up,\n> if it's deemed necessary.\n\nI disagree; all per-row overhead is significant. The primary operation in\nthe system is sifting rows. \n\nBut this is just the start of the extra overhead. What about the expression\nevaluator trying to determine if this tuple matches the where clause? Now it\nhas to determine column offset and type and \"Equal\" function etc. for\neach row.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n", "msg_date": "Thu, 9 Apr 1998 01:03:56 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] On improving OO support in postgresql and relaxing oid\n\tbottleneck at the same time" } ]