[ { "msg_contents": "> >> Actually, it shouldn't matter whether the server is 6.6-without-SSL\n> >> or pre-6.6. At least in the way I envisioned it, they'd act the same.\n> \n> > The 6.6-without-SSL still knows about the NEGOTIATE_SSL_CODE packet that\nis\n> > sent, and can respond \"No, I can't do SSL\". The pre-6.6 does not know\nabout\n> > the existance of this packet, and will thus respond with \"Unsupported\n> > Frontend Protocol\" (since it's a normal StartupPacket with the version\n> > number set to something very large (like the cancel request was\n> > implemented)).\n> \n> OK, the point being that then the client can either break the connection\n> (if it doesn't want to do an insecure connection) or send a\n> StartupPacket to continue with an insecure connection. I agree this\n> will be a little quicker than tearing down the connection and starting\n> a new one, which is what must happen in the case of an old server.\n> \n> But you could save some code on both sides if you just made the\n> teardown/new connection behavior the only path for falling back to\n> non-SSL. I'm not sure that SSL-enabled-client-talking-to-6.6-but-\n> not-SSL-enabled-server will be a sufficiently common scenario to\n> justify a lot of extra code to make it a tad faster. You'd expect\n> that most installations will have SSL at the server if they have\n> it anywhere.\n> \n> If it's only a small amount of extra code then it doesn't matter,\n> of course. I'm just dubious that it's worth taking extra care\n> for that case when you are going to have the other case anyway.\n\nWell. There is almost no extra code. The code has to be in place anyway, to\ndeal with those clients that do support SSL. It's just a matter of the\nserver sending a single character (a 'N' to say normal, non-SSL connection)\nto the client after receiving the special packet that it knows about. It's\njust:\n#ifdef USE_SSL\n\tSSLok = 'S';\n#else\n\tSSLok = 'N';\n#endif\n\nSSLok is then sent to the client. 
This code is only executed if the client\nfirst asked for SSL - if it's a non-SSL client, this whole phase is skipped.\nAll the rest of the code is shared between the SSL and the non-SSL case.\n\n\nI'll post a patch to the patches-list that makes the client work with a\npre-6.6 server in a minute. The noticable thing being that each time this\nclient tries to connect, the server logs an \"Unsupported frontent protocol\"\nmessage. The client will then tear down it, and the connectDB() function\nwill call itself while being instructed to run with the pre-6.6 protocol. It\nworks fine for me to connect to a 6.4.2 server.\n\n\n//Magnus\n", "msg_date": "Sat, 24 Jul 1999 20:34:42 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: SSL patch " } ]
[ { "msg_contents": "Thanks Tom,\n\nYou were correct, as always. The query was wrong. \nI apologize for wasting your time. I needed to \nencase the chained OR clauses in parens...I'll try \nand purge that Slip.\n\nSorry,\n\nMike Mascari\[email protected]\n\n--- Tom Lane <[email protected]> wrote:\n> Mike Mascari <[email protected]> writes:\n> > ... However, if an OR clause is introduced as\n> below:\n\n> I think the problem is that the OR appears at top\n> level in the WHERE\n> clause (assuming the above is a verbatim transcript\n> of your query).\n> OR groups less tightly than AND, so what this really\n> means is\n> \t(other-conditions AND (LIKEs-for-SEQ)) OR\n> (LIKEs-for-SCD)\n> which is undoubtedly not what you had in mind, and\n> will certainly\n> produce a lot of unwanted records if the query\n> manages to complete.\n> Every supplies tuple matching SCD will appear joined\n> to every possible\n> combination of records from the other tables...In\nthe\n> mistaken version, they get evaluated for every\n> possible combination\n> of joined tuples...\n> \n> \t\t\tregards, tom lane\n> \n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Sat, 24 Jul 1999 15:02:26 -0400 (EDT)", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Index not used on select (Is this more OR + LIKE?) " } ]
[ { "msg_contents": "\nIs't possible to use SELECT FOR UPDATE in functions ?\nI have function for 'insert or update' which works ok, but as I have some \nproblem with duplicated records I tried as suggested by Tom Lane to use \nSELECT FOR UPDATE instead of just select. Unfortunately it doesn't works:\n\nERROR: query didn't return correct # of attributes for *internal*\n\nHere is a function:\nCREATE FUNCTION \"acc_hits\" (int4) RETURNS int4 AS '\nDeclare\n keyval Alias For $1;\n cnt int4;\n curtime datetime;\nBegin\n curtime := ''now'';\n-- Select count into cnt from hits where msg_id = keyval FOR UPDATE;\n Select count into cnt from hits where msg_id = keyval;\n if Not Found then\n cnt := 1;\n -- first_access inserted on default, last_access is NULL\n Insert Into hits (msg_id,count) values (keyval, cnt); \n else\n cnt := cnt + 1;\n Update hits set count = cnt,last_access = curtime where msg_id = keyval;\n End If;\n return cnt;\nEnd;\n' LANGUAGE 'plpgsql';\n\n\n\tRegards,\n\n\t\tOleg\n\nPS. \n\nJust to test:\n\ncreate table hits ( \n msg_id int4 not null primary key,\n count int4 not null,\n first_access datetime default now(),\n last_access datetime \n); \n\nselect acc_hits(1);\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Sun, 25 Jul 1999 00:14:00 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "SELECT FOR UPDATE in function" }, { "msg_contents": ">\n> Is't possible to use SELECT FOR UPDATE in functions ?\n> I have function for 'insert or update' which works ok, but as I have some\n> problem with duplicated records I tried as suggested by Tom Lane to use\n> SELECT FOR UPDATE instead of just select. 
Unfortunately it doesn't works:\n>\n> ERROR: query didn't return correct # of attributes for *internal*\n>\n\nAFAIC,\"SELECT FOR UPDATE\" always causes above errors in\nPL/pgSQL functions.\nCould we use PL/pgSQL for update procedures in MVCC ?\n\nORDER/GROUP BY items that are not in the targetlist also cause\nsame errors in PL/pgSQL functions.\nIn both cases,target entries are added which are not wanted in the\nfinal projected tuple(SELECT FOR UPDATE adds \"ctid\" entry).\n\nIn such cases,the # of target entries is different from the # of\nfinal attributes estimated in pl_gram.y and above elog() in\npl_exec.c is called.\n\nShould current check be loosen ?\nOr another check is necessary ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n", "msg_date": "Wed, 28 Jul 1999 11:58:42 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] SELECT FOR UPDATE in (PL/pgSQL) function" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> ERROR: query didn't return correct # of attributes for *internal*\n\n> AFAIC,\"SELECT FOR UPDATE\" always causes above errors in\n> PL/pgSQL functions.\n\n> ORDER/GROUP BY items that are not in the targetlist also cause\n> same errors in PL/pgSQL functions.\n> In both cases,target entries are added which are not wanted in the\n> final projected tuple(SELECT FOR UPDATE adds \"ctid\" entry).\n\nIt sounds like the code that deals with the resulting tuple is not\nsmart enough to ignore resjunk attributes (or to call ExecProject\nif it needs an actual junk-free tuple). That's probably an easily\nfixed bug, but I'm not familiar with the PL code...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jul 1999 10:19:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SELECT FOR UPDATE in (PL/pgSQL) function " } ]
[ { "msg_contents": "At 12:29 24/07/99 -0400, you wrote:\n>Oleg Bartunov <[email protected]> writes:\n>> I did some benchmarks of my Web site and notice I lost some hits\n>> which I accumulate in postgres (6.5.1) database on Linux 2.0.36 system\n>\n>> CREATE FUNCTION \"acc_hits\" (int4) RETURNS int4 AS '\n>> Declare\n>> keyval Alias For $1;\n>> cnt int4;\n>> curtime datetime;\n>> Begin\n>> curtime := ''now'';\n>> Select count into cnt from hits where msg_id = keyval;\n>> if Not Found then\n>> cnt := 1;\n>> -- first_access inserted on default, last_access is NULL\n>> Insert Into hits (msg_id,count) values (keyval, cnt);\n>> else\n>> cnt := cnt + 1;\n>> Update hits set count = cnt,last_access = curtime where msg_id =\nkeyval;\n>> End If;\n>> return cnt;\n>> End;\n>> ' LANGUAGE 'plpgsql';\n>\n>I wonder whether this doesn't have a problem with concurrent access:\n>\n>1. Transaction A does 'Select count into cnt', gets (say) 200.\n>2. Transaction B does 'Select count into cnt', gets 200.\n>3. Transaction A writes 201 into hits record.\n>4. Transaction B writes 201 into hits record.\n>\n>and variants thereof. (Even if A has already written 201, I don't think\n>B will see it until A has committed...)\n>\n>I am not too clear on MVCC yet, but I think you need \"SELECT FOR UPDATE\"\n>or possibly an explicit lock on the hits table in order to avoid this\n>problem. Vadim, any comments?\n\nThe usual way around this sort of problem is to update the counter as the\nfirst thing you do in any transaction. 
This locks the row and prevents any\npossible deadlock:\n\n Begin\n curtime := ''now'';\n update hits set count = count + 1; -- Now have a lock, which causes\nother updates to wait.\n get diagnostics select processed into numrows; \n if numrows == 0 then\n cnt := 1;\n -- first_access inserted on default, last_access is NULL\n Insert Into hits (msg_id,count) values (keyval, cnt);\n End If;\n return cnt;\n End;\n\nThe only hassle with this is that the patch to plpgsql for 'get\ndiagnostics' is not yet applied (I may not have mailed it yet...), and I am\nnot sure if plpgsql starts a new TX for each statment - if so, you need to\nstart a TX in the procedure, or prior to valling it.\n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 25 Jul 1999 10:18:24 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [SQL] inserts/updates problem under\n stressing !" }, { "msg_contents": "Philip Warner wrote:\n> \n> The usual way around this sort of problem is to update the counter as the\n> first thing you do in any transaction. 
This locks the row and prevents any\n> possible deadlock:\n\nBut if there was no record then nothing will be locked...\nWithout ability to read dirty data LOCK is the only way.\n\n...\n\n> diagnostics' is not yet applied (I may not have mailed it yet...), and I am\n> not sure if plpgsql starts a new TX for each statment - if so, you need to\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nIt doesn't.\n\n> start a TX in the procedure, or prior to valling it.\n\nVadim\n", "msg_date": "Mon, 26 Jul 1999 10:50:28 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] inserts/updates problem understressing !" }, { "msg_contents": "On Mon, 26 Jul 1999, Vadim Mikheev wrote:\n\n> Date: Mon, 26 Jul 1999 10:50:28 +0800\n> From: Vadim Mikheev <[email protected]>\n> To: Philip Warner <[email protected]>\n> Cc: [email protected], [email protected],\n> Oleg Bartunov <[email protected]>\n> Subject: Re: [HACKERS] Re: [SQL] inserts/updates problem understressing !\n> \n> Philip Warner wrote:\n> > \n> > The usual way around this sort of problem is to update the counter as the\n> > first thing you do in any transaction. This locks the row and prevents any\n> > possible deadlock:\n> \n> But if there was no record then nothing will be locked...\n> Without ability to read dirty data LOCK is the only way.\n> \n\nI agree, no data, no locking.\n\n> ...\n> \n> > diagnostics' is not yet applied (I may not have mailed it yet...), and I am\n> > not sure if plpgsql starts a new TX for each statment - if so, you need to\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> It doesn't.\n> \n> > start a TX in the procedure, or prior to valling it.\n\nHow do I start a TX in the procedure ? 
Is't possible ?\nI don't understand this because a procedure must return something, so \nthere're no point where to end a TX.\n\n\tOleg\n> \n> Vadim\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 26 Jul 1999 10:28:57 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] inserts/updates problem understressing !" } ]
[ { "msg_contents": "Note: Sent to NetBSD and PostgreSQL mailing lists as I am not sure\nexactly where the problem lies.\n\nI am running PostgreSQL 6.5 on a NetBSD system running -current. When\nI try to include my user defined type I get the following error.\n\nERROR: Load of file /usr/pgsql/modules/glaccount.so failed: dlopen (/usr/pgsql/modules/glaccount.so) failed\n\nThe file definitely exists and is world readable as the following indicates.\n\n[db@smaug:/usr/db] $ ls -l /usr/pgsql/modules/glaccount.so\n-rwxr-xr-x 1 pgsql pgsql 3826 Jul 25 05:04 /usr/pgsql/modules/glaccount.so\n[db@smaug:/usr/db] $ file /usr/pgsql/modules/glaccount.so\n/usr/pgsql/modules/glaccount.so: ELF 32-bit LSB shared object, Intel 80386, version 1, not stripped\n\nThe error message isn't very informative. Is there some way to get more\ninformation on why the load failed?\n\nThanks for any help.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sun, 25 Jul 1999 09:06:05 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with dlopen and PostgreSQL - load of file failed" }, { "msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> ERROR: Load of file /usr/pgsql/modules/glaccount.so failed: dlopen (/usr/pgsql/modules/glaccount.so) failed\n\n> The error message isn't very informative.\n\nDynamic loaders tend to be pretty horrid about that :-(. My bet is\na failure to resolve an external reference to another shared library.\nTry using \"ldd\" (or local equivalent) on the shlib to find out what\nother shlibs it depends on. Be suspicious if ldd fails to show all the\ndependencies you expect (eg, practically anything will depend on libc);\nthat probably means the linker failed to locate the other shlib when\nlinking this one. 
Next make sure all those other shlibs are in the\nright places, and are known to the system if your system keeps a table\nof shlibs. Then start checking *their* dependencies...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Jul 1999 12:06:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with dlopen and PostgreSQL - load of file\n\tfailed" }, { "msg_contents": "Tom Lane wrote:\n> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > ERROR: Load of file /usr/pgsql/modules/glaccount.so failed: dlopen (/usr/pgsql/modules/glaccount.so) failed\n> \n> > The error message isn't very informative.\n> \n> Dynamic loaders tend to be pretty horrid about that :-(. My bet is\n> a failure to resolve an external reference to another shared library.\n> Try using \"ldd\" (or local equivalent) on the shlib to find out what\n> other shlibs it depends on. Be suspicious if ldd fails to show all the\n> dependencies you expect (eg, practically anything will depend on libc);\n> that probably means the linker failed to locate the other shlib when\n> linking this one. Next make sure all those other shlibs are in the\n> right places, and are known to the system if your system keeps a table\n> of shlibs. Then start checking *their* dependencies...\n\nFurther lossage - ELF vs. a.out: when calling() dlsym(3), a.out\nneeds the symbols prepended with underscore, but ELF doesn't. 
Got bitten\nby this bit when helping garbled with ClanLib ...\nYou might check the source if a.out systems are handled right ....\n\n-- \nJaromir Dolecek <[email protected]> http://www.ics.muni.cz/~dolecek/\n\"The only way how to get rid temptation is to yield to it.\" -- Oscar Wilde\n", "msg_date": "Sun, 25 Jul 1999 21:02:00 +0200 (MEST)", "msg_from": "Jaromir Dolecek <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with dlopen and PostgreSQL - load of file\n failed" }, { "msg_contents": ">>>>> \"D'Arcy\" == J M <D> writes:\n\n D'Arcy> ERROR: Load of file /usr/pgsql/modules/glaccount.so\n D'Arcy> failed: dlopen (/usr/pgsql/modules/glaccount.so) failed\n\n D'Arcy> The error message isn't very informative. Is there some\n D'Arcy> way to get more information on why the load failed?\n\nFor more information you can try to set LD_DEBUG to a non-null value\nin the execution environment - the shared loader should give you a\nhint what's going on; as an alternative, you can add a call to\ndlerror() at the place in question and print the string returned.\n(The code should probably do that anyway, but if it already does and\ndlerror() returned a null pointer that would be a bug in the shared\nloader.)\n", "msg_date": "25 Jul 1999 21:48:09 +0200", "msg_from": "Klaus Klein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with dlopen and PostgreSQL - load of file failed" }, { "msg_contents": "Thus spake Tom Lane\n> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > ERROR: Load of file /usr/pgsql/modules/glaccount.so failed: dlopen (/usr/pgsql/modules/glaccount.so) failed\n> \n> Dynamic loaders tend to be pretty horrid about that :-(. My bet is\n> a failure to resolve an external reference to another shared library.\n> Try using \"ldd\" (or local equivalent) on the shlib to find out what\n> other shlibs it depends on. 
Be suspicious if ldd fails to show all the\n> dependencies you expect (eg, practically anything will depend on libc);\n> that probably means the linker failed to locate the other shlib when\n> linking this one. Next make sure all those other shlibs are in the\n> right places, and are known to the system if your system keeps a table\n> of shlibs. Then start checking *their* dependencies...\n\nOK, so what do I do to fix it? Do I need more options to my link command?\nMy link rule is now this.\n\n.o.so:\n ld -Bshareable -L ${PGDIR}/lib -lpq -lc -o $@ $<\n\nAnd here is what ldd shows.\n\n[darcy@smaug:/usr/pgsql/modules] $ ldd glaccount.so\nglaccount.so:\n -lpq => not found\n -lc.12 => /usr/lib/libc.so.12\n\nThe file libpq.so exists in the directory ${PGDIR}/lib and I still get the\nsame problem. I tried nm and I get the following external symbols.\n\n[darcy@smaug:/usr/pgsql/modules] $ nm -Cn glaccount.so\n U CurrentMemoryContext\n U MemoryContextAlloc\n U _ctype_\n U elog\n U sprintf\n U strtol\n[... internal symbols ...]\n\nI assume those first two are in the libpq library that wasn't found.\n\nAnd I just now had a panic while investigating why ldconfig is not being built.\nCan I assume that ldconfig is not used in an ELF system? Did faking it\nout and trying to build it cause the panic? The panic was;\n\npanic: lockmgr: pid %d, not exclusive lock holder %d unlocking\n\nI have crash files if anyone is interested.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sun, 25 Jul 1999 22:39:47 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Problem with dlopen and PostgreSQL - load of file\n failed" }, { "msg_contents": "Thus spake Jaromir Dolecek\n> Further lossage - ELF vs. 
a.out: when calling() dlsym(3), a.out\n> needs the symbols prepended with underscore, but ELF doesn't. Got bitten\n> by this bit when helping garbled with ClanLib ...\n> You might check the source if a.out systems are handled right ....\n\nI am running a pure ELF system - or at least I believe I am. I installed\nthe latest i386 snapshot from ftp.vex.net then built a new kernel and\nthen built the world. The snapshot appeared to be an ELF system so I\nassume that the world built on it will be ELF.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sun, 25 Jul 1999 22:43:20 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Problem with dlopen and PostgreSQL - load of file\n failed" }, { "msg_contents": "D'Arcy J.M. Cain wrote:\n> \n> Thus spake Tom Lane\n> > \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > > ERROR: Load of file /usr/pgsql/modules/glaccount.so failed: dlopen (/usr/pgsql/modules/glaccount.so) failed\n> >\n> > Dynamic loaders tend to be pretty horrid about that :-(. My bet is\n> > a failure to resolve an external reference to another shared library.\n> > Try using \"ldd\" (or local equivalent) on the shlib to find out what\n> > other shlibs it depends on. Be suspicious if ldd fails to show all the\n> > dependencies you expect (eg, practically anything will depend on libc);\n> > that probably means the linker failed to locate the other shlib when\n> > linking this one. Next make sure all those other shlibs are in the\n> > right places, and are known to the system if your system keeps a table\n> > of shlibs. Then start checking *their* dependencies...\n> \n> OK, so what do I do to fix it? 
Do I need more options to my link command?\n> My link rule is now this.\n> \n> .o.so:\n> ld -Bshareable -L ${PGDIR}/lib -lpq -lc -o $@ $<\n> \n> And here is what ldd shows.\n> \n> [darcy@smaug:/usr/pgsql/modules] $ ldd glaccount.so\n> glaccount.so:\n> -lpq => not found\n> -lc.12 => /usr/lib/libc.so.12\n\nwhile developing plperl on a linux/ELF system i saw the same thing.\nI solved the problem by replacing backend/port/dynloader/linux.[ch] with\ncopies of the sunos files in the same directories.\n\npostgres uses dl_open and firends on all linux system, even though\ndlopen\nis directly support on linux/ELF. This seems to be wrong. I think a\nconfigure\ntest is needed to decide between dlopen (sunos style) and dl_open (or\nwhatever\nit is).\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n", "msg_date": "Mon, 26 Jul 1999 09:33:25 -0400", "msg_from": "\"Mark Hollomon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with dlopen and PostgreSQL - load of file\n failed" } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\n First, please forget my English is very poor. So that, I made a patch\nfor support gettext function of GNU C Library to show other languages by\nsetting locale.\n\n This is the first version that I hack PostgreSQL, and I had made only the\nsome files which ware put in src/bin, like psql, pg_*, ...etc. If you\npatch this patch file, please re-compile your PostgreSQL again. And when\nit done, you will get some .pot files, please read gettext via `info' utility\nfor getting more informations. \n\n PostgreSQL is the best DBMS product that I've used. Why not let it can\nsupport peoples who lives other states and using other language?\n\n- --\n.....=======............................. Cd Chen, (���X��)\n..// �s �s |............................ ===========================\n..|| �� �� <............................ What's Cynix? Cyber Linux.\n..|< > |............................ mailto:[email protected]\n..| | \\___/ |............................ http://www.cynix.com.tw/~cdchen\n.. |\\______/............................. ICQ UIN:3345768\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.3ia\nCharset: latin1\n\niQCVAwUBN5xDzgLVm5OSJINxAQGdLgQArz6K9+r6QcFck8ASNcDClWt0kiUXq0RD\nxs8UYAIT9fyc3HZUTNRNyP/ELy8wfZxpWF1tzda2Yy1gGP7H11wA0KRZjQK9EAso\nhIve5dxCa3QGKngLV+L9FPRUp96WqIi2UYxax0e/ye4ZIAxrZSWVjd2C3v4TlMXc\nqXxBVJHvHek=\n=gV3U\n-----END PGP SIGNATURE-----", "msg_date": "Mon, 26 Jul 1999 03:17:37 +0800", "msg_from": "Cd Chen <[email protected]>", "msg_from_op": true, "msg_subject": "A multi-lang patch for psql 6.5.1" }, { "msg_contents": "> -----BEGIN PGP SIGNED MESSAGE-----\n> \n> First, please forget my English is very poor. So that, I made a patch\n> for support gettext function of GNU C Library to show other languages by\n> setting locale.\n> \n> This is the first version that I hack PostgreSQL, and I had made only the\n> some files which ware put in src/bin, like psql, pg_*, ...etc. 
If you\n> patch this patch file, please re-compile your PostgreSQL again. And when\n> it done, you will get some .pot files, please read gettext via `info' utility\n> for getting more informations. \n> \n> PostgreSQL is the best DBMS product that I've used. Why not let it can\n> support peoples who lives other states and using other language?\n\nFirst, the attached patch was zero length. Second, I am not sure what\nthis patch was supposed to do. I am not sure we could distribute a\npatch for GNU C library as part of PostgreSQL.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 25 Jul 1999 15:51:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] A multi-lang patch for psql 6.5.1" } ]
[ { "msg_contents": "I will be in Moscow from 27.07 till 01.08...\n\nVadim\n", "msg_date": "Mon, 26 Jul 1999 09:55:35 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "don't lose me :)" }, { "msg_contents": "On Mon, 26 Jul 1999, Vadim Mikheev wrote:\n\n> Date: Mon, 26 Jul 1999 09:55:35 +0800\n> From: Vadim Mikheev <[email protected]>\n> To: PostgreSQL Developers List <[email protected]>\n> Subject: [HACKERS] don't lose me :)\n> \n> I will be in Moscow from 27.07 till 01.08...\n\nVadim,\n\nGlad to see you in Moscow. You could reach me at Moscow University - \nnice place. \n\nPhones: 939-1683, 939-2383\n\n\tOleg\n\nPS.\nZaodno, mozhet ob'yasnish' MVCC popodrobnee :-) za pivkom.\n\n> \n> Vadim\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 26 Jul 1999 10:15:46 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] don't lose me :)" } ]
[ { "msg_contents": "Having looked, albeit briefly, at what I think is the relevant code, I have\nnow come up with a plan.\n\n1. Don't introduce modules. For the moment, just introduce features\nconsistent with handling modules later. The reasons for this are: (1) I\ndon't really like them yet and we don't need them, since I would not plan\nto add other module features, (2) If they are the only way to define\nfunctions, then it breaks a lot of people's databases, and (3) It's more\nwork than I want to do and would (probably) substantially affect all\nexternal languages, and I'm sure I don't want to get into that yet.\n\n2. Modify the CREATE FUNCTION definition as follows:\n\nCREATE FUNCTION name ( [ ftype [, ...] ] )\n RETURNS rtype\n AS path\n LANGUAGE 'langname'\n [AUTHORIZATION 'authid']\n\nwhere authid is any valid user/group. Defaults to none/NULL/empty. This\nalso means pg_dump would have to change - is that right? Or is it handles\nautomagically by the parser?\n\n3. [Possibly] add a new statement: 'SET AUTHORIZATION ON FUNCTION name( [\nftype [, ...] ] ) TO authid'.\n\n4. Modify the code that executes (or plans?) functions to push the relevant\nauth-id onto a stack when the function is executed (if AUTH ID is not\nspecified in the function, use the prevailing auth-id). I guess it would be\nbetter to modify the query planner, since we only want to retrieve security\ninformation once. I'll need to be careful to 'catch' anything that rapidly\nunwinds the stack of call frames (in the case of an error, for example).\n\n5. Create a new function PgGetAuthID which returns the current active Auth\nID from the stack.\n\n6. Modify any existing code that checks security by calling PgGetUserName\nto now call 'PgGetAuthID'.\n\n7. Do whatever the SQL3 standard says about CURRENT_USERNAME and add an\nappropriate method of getting the real user as well as the current auth id.\n\nDoes this sound reasonable? 
Can somebody who knows a little more about the\ninternals, tell me if I am being ridiculously naieve?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 26 Jul 1999 12:43:56 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "RFC: Security and Impersonation - Initial Plan" } ]
[ { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> First, please forget my English is very poor. So that, I made a patch\n>> for support gettext function of GNU C Library to show other languages by\n>> setting locale.\n>> \n>> This is the first version that I hack PostgreSQL, and I had made only the\n>> some files which ware put in src/bin, like psql, pg_*, ...etc. If you\n>> patch this patch file, please re-compile your PostgreSQL again. And when\n>> it done, you will get some .pot files, please read gettext via `info' utility\n>> for getting more informations. \n>\n>First, the attached patch was zero length. Second, I am not sure what\n>this patch was supposed to do. I am not sure we could distribute a\n>patch for GNU C library as part of PostgreSQL.\n\nI assume (from gettext usage in GNOME), that the patch will allow localization\nof prompts, error messages, etc. in psql, etc. This is a good thing. As \nfor the licensing, the correct way to do this is a autoconf check for an\ninstalled gettext library (steal it from GNOME). That way, if someone has\ngettext installed on their system, they can use it, and otherwise no one is\nbothered.\n\nIn the long term, a BSD-licensed gettext clone would be ideal, but that's \na completely different issue.\n\n\t-Michael Robinson\n\n", "msg_date": "Mon, 26 Jul 1999 12:12:39 +0800 (CST)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] A multi-lang patch for psql 6.5.1" } ]
[ { "msg_contents": "\nAny ideas?\n\n--\n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n---------- Forwarded message ----------\nDate: 24 Jul 99 16:56:28 +0800\nFrom: Donny Ryan Chong <[email protected]>\nTo: [email protected]\nSubject: Hello\n\nHello, Could you please help me with my problem. I am currenlty implementing somewhat of a proxy. Meaning, there will be a lot of sql queries doing at the same time. Every URL address will be check whether it exist in the database. After opening a lot of sites, suddenly i receive a \"backend message type 0x50 arrived while idle\". Please reply as soon as possible because we have a deadline for the project. Thank you in advance.\n\nBy the way i am using postgres 6.4.2 running on a Pentuim II-350 with 64 Megs RAM\n\nDonny Ryan Chong\n\n\n\n\n", "msg_date": "Mon, 26 Jul 1999 06:40:51 +0100 (GMT)", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Hello (fwd)" } ]
[ { "msg_contents": "Why does anything need to be broken if a different port is used? Same way\nas web browsers use 80 for clear http, and 443 (by default) for SSL. But a\nserver cannot dish up http and https on the same port. Then the whole\ncompatibility issue falls away. Think of it as using 'pgsql' for clear\nconnections, and 'pgsqls' for SSL connections. This way, a post-6.6 client\ncan still connect to a pre-6.6 server, using 'pgsql', a pre-6.6 client can\nconnect to a post-6.6 server using 'pgsql', and a post-6.6 client can\nconnect to a post-6.6 server using 'pgsql', or 'pgsqls'.\n\nOr is there an issue using different ports?\n\n>> > Bruce Momjian <[email protected]> writes:\n>> > >> But, we've had protocol changes before that breaks backward\n>> > >> compatibility...why is this all of a sudden a problem?\n>> > \n>> > > No reason to change the protocol when we don't need to.\n>> \n>> What I meant is that there is no reason to break compatibility when we\n>> don't need to. Magnus seems like he has addressed this already.\n>> \n>> > \n>> > The point is that we *do not have to* break backwards compatibility to\n>> > add this feature, and indeed hardly anything would be gained by\nbreaking\n>> > compatibility. See subsequent messages from myself and Magnus.\n>> > \n>> > \t\t\tregards, tom lane\n>> > \n", "msg_date": "Mon, 26 Jul 1999 09:47:55 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] RE: [INTERFACES] Re: SSL patch" }, { "msg_contents": "\"Ansley, Michael\" wrote:\n> \n> Why does anything need to be broken if a different port is used? Same way\n> as web browsers use 80 for clear http, and 443 (by default) for SSL. But a\n> server cannot dish up http and https on the same port.\n\nActually you are free to use HTTPS on 80 and HTTP on 443 if you wish.\n\nThere is nothing at the protocol level that makes it impossible. 
\nAt least on Apache-mod_ssl you have to explicitly disable non-SSL \nconnections on 443 if you don't want them\n\n> Then the whole\n> compatibility issue falls away. Think of it as using 'pgsql' for clear\n> connections, and 'pgsqls' for SSL connections. This way, a post-6.6 client\n> can still connect to a pre-6.6 server, using 'pgsql', a pre-6.6 client can\n> connect to a post-6.6 server using 'pgsql', and a post-6.6 client can\n> connect to a post-6.6 server using 'pgsql', or 'pgsqls'.\n> \n> Or is there an issue using different ports?\n\nNot to scare anyone away (I like crypto !;), but isn't it illegal to\nhave SSL \nin an exportable product in US.\n\nI guess this should be kept in a separate patch distributed from a\nnon-US site \nuntil US government wisens up.\n\nI'd really hate to have to fill some 'us-citizen verification form' to\ndownload \nthe latest snapshot.\n\n-----\nHannu\n", "msg_date": "Mon, 26 Jul 1999 12:04:22 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [INTERFACES] Re: SSL patch" }, { "msg_contents": "Thus spake Hannu Krosing\n> Not to scare anyone away (I like crypto !;), but isn't it illegal to\n> have SSL \n> in an exportable product in US.\n> \n> I guess this should be kept in a separate patch distributed from a\n> non-US site \n> until US government wisens up.\n\nThe PostgreSQL server is in Canada. There may still be some issues but\nlast time I checked we weren't a US state yet.\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Mon, 26 Jul 1999 06:24:53 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [INTERFACES] Re: SSL patch" }, { "msg_contents": "At 06:24 26/07/99 -0400, D'Arcy\" \"J.M.\" Cain wrote:\n>Thus spake Hannu Krosing\n>> Not to scare anyone away (I like crypto !;), but isn't it illegal to\n>> have SSL \n>> in an exportable product in US.\n>> \n>> I guess this should be kept in a separate patch distributed from an\n>> non-US site \n>> until US government wisens up.\n>\n>The PostgreSQL server is in Canada. There may still be some issues but\n>last time I checked we weren't a US state yet.\n>\n\nEven if there are problems, I believe it's OK to export PostgreSQL with\noptions for SSL support, so long as you don't export SSL.\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 26 Jul 1999 20:45:50 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [INTERFACES] Re: SSL patch" }, { "msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> Why does anything need to be broken if a different port is used?\n\nThat was the quick-and-dirty answer that I suggested to begin with, but\nMagnus objected on the grounds that it would be a nontransparent change\nfor *users* of Postgres; anyplace that knows what port it is supposed\nto connect to would have a problem. 
I think he has a good point.\nPushing the conversion headaches out of our bailiwick does not mean that\nthere are no conversion headaches.\n\nThe solution that we arrived at does not break compatibility nor require\nan additional port --- it will just mean a slightly slower connection\nprocess when an SSL-using client tries to connect to a non-SSL-capable\nserver. I think that's OK, since that scenario is probably the least\ncommon of the four possible combinations. (And if you're really worried\nabout a few extra millisec of startup time, the client-side library will\naccept a connect option that tells it not to try the SSL connection...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Jul 1999 09:58:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [INTERFACES] Re: SSL patch " }, { "msg_contents": "Philip Warner wrote:\n> \n> At 06:24 26/07/99 -0400, D'Arcy\" \"J.M.\" Cain wrote:\n> >Thus spake Hannu Krosing\n> >> Not to scare anyone away (I like crypto !;), but isn't it illegal to\n> >> have SSL\n> >> in an exportable product in US.\n> >>\n> >> I guess this should be kept in a separate patch distributed from an\n> >> non-US site\n> >> until US government wisens up.\n> >\n> >The PostgreSQL server is in Canada. There may still be some issues but\n> >last time I checked we weren't a US state yet.\n\nGood to hear, I was afraid of them being more or less the same\ncrypto-wise.\n\n> Even if there are problems, I believe it's OK to export PostgreSQL with\n> options for SSL support, so long as you don't export SSL.\n\nLet's hope so. 
In US that would be a 'crypto hook' and legally as bad as \nreal crypto.\n\n---------\nHannu\n", "msg_date": "Tue, 27 Jul 1999 01:37:51 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [INTERFACES] Re: SSL patch" }, { "msg_contents": "At 01:37 27/07/99 +0300, Hannu Krosing wrote:\n>Philip Warner wrote:\n>> \n>> At 06:24 26/07/99 -0400, D'Arcy\" \"J.M.\" Cain wrote:\n>> >Thus spake Hannu Krosing\n>> >> Not to scare anyone away (I like crypto !;), but isn't it illegal to\n>> >> have SSL\n>> >> in an exportable product in US.\n>> >>\n>> >> I guess this should be kept in a separate patch distributed from an\n>> >> non-US site\n>> >> until US government wisens up.\n>> >\n>> >The PostgreSQL server is in Canada. There may still be some issues but\n>> >last time I checked we weren't a US state yet.\n>\n>Good to hear, I was afraid of them being more or less the same\n>crypto-wise.\n>\n>> Even if there are problems, I believe it's OK to export PostgreSQL with\n>> options for SSL support, so long as you don't export SSL.\n>\n>Let's hope so. In US that would be a 'crypto hook' and legally as bad as \n>real crypto.\n\nThat's a worry - maybe it would be worth looking at the approach of Apache.\nThey have a general 'module' concept, and one of the available modules adds\nSSL. Both mod_ssl, and opensll are available overseas.\n\nPerhaps the same idea could be used in PosgreSQL?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 27 Jul 1999 11:32:59 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [INTERFACES] Re: SSL patch" }, { "msg_contents": "\n\nOn Tue, 27 Jul 1999, Philip Warner wrote:\n\n> At 01:37 27/07/99 +0300, Hannu Krosing wrote:\n> >Philip Warner wrote:\n> >> \n> >> At 06:24 26/07/99 -0400, D'Arcy\" \"J.M.\" Cain wrote:\n> >> >Thus spake Hannu Krosing\n> >> >> Not to scare anyone away (I like crypto !;), but isn't it illegal to\n> >> >> have SSL\n> >> >> in an exportable product in US.\n> >> >>\n> >> >> I guess this should be kept in a separate patch distributed from an\n> >> >> non-US site\n> >> >> until US government wisens up.\n> >> >\n> >> >The PostgreSQL server is in Canada. There may still be some issues but\n> >> >last time I checked we weren't a US state yet.\n> >\n> >Good to hear, I was afraid of them being more or less the same\n> >crypto-wise.\n> >\n> >> Even if there are problems, I believe it's OK to export PostgreSQL with\n> >> options for SSL support, so long as you don't export SSL.\n> >\n> >Let's hope so. In US that would be a 'crypto hook' and legally as bad as \n> >real crypto.\n> \n> That's a worry - maybe it would be worth looking at the approach of Apache.\n> They have a general 'module' concept, and one of the available modules adds\n> SSL. Both mod_ssl, and opensll are available overseas.\n> \n> Perhaps the same idea could be used in PosgreSQL?\n> \n\nI like this idea, does Postgresql (I'm new around here) have a compression\noption for slow links? If not the same interfaces that support SSL could\nalso support a compressed stream, if someone were to invent one. 
That way\nyou have a more generalized interface that can't really be considered a\n\"crypto hook\"\n\nThis is a big issue for us (we use Sybase at work) going over 56k frame\nrelay. We have pretty powerful machines at the clients but the network is\na bottleneck. A compressed stream would be very cool.\n\nBrian\n\n", "msg_date": "Mon, 26 Jul 1999 22:12:05 -0400 (EDT)", "msg_from": "Brian Bruns <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [INTERFACES] Re: SSL patch" } ]
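For reference, the same-socket negotiation discussed in these messages boils down to: the client sends the special NEGOTIATE_SSL_CODE startup packet, the server answers a single byte ('S' or 'N'), and the client decides what to do next. This is a sketch of the client-side decision only; the function name and return strings are invented for illustration, and the catch-all branch stands in for the pre-6.6 "Unsupported Frontend Protocol" case:

```python
def next_step(answer: str, allow_fallback: bool = True) -> str:
    """Decide what the client does with the server's one-byte answer."""
    if answer == "S":
        # 6.6 server built with SSL: proceed with the SSL handshake.
        return "start SSL handshake"
    if answer == "N":
        # 6.6 server built without SSL: the same socket can continue
        # in the clear with an ordinary StartupPacket.
        return "send plain StartupPacket" if allow_fallback else "disconnect"
    # Anything else: a pre-6.6 server that rejected the unknown packet.
    # The client must tear the connection down and reconnect without SSL.
    return "reconnect without SSL" if allow_fallback else "disconnect"
```

The `allow_fallback` flag corresponds to the connect option Tom mentions for clients that refuse insecure connections (or want to skip the SSL attempt entirely).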
[ { "msg_contents": "Andreas,\n\nI rewrote my function but got a problem how to know if update fails:\n\nCREATE FUNCTION \"acc_hits\" (int4) RETURNS datetime AS ' \nDeclare\n keyval Alias For $1;\n cnt int4;\n curtime datetime;\nBegin\n curtime := ''now'';\n Update hits set count = count + 1,last_access = curtime where msg_id = keyval;\n if Not Found then\n ??????????\n -- first_access inserted on default, last_access is NULL\n Insert Into hits (msg_id,count) values (keyval, 1); \n End If;\n return curtime; \nEnd;\n' LANGUAGE 'plpgsql';\n\n\n\n\tregards,\n\t\n\t\tOleg\n\n\n\n\n\n\nOn Mon, 26 Jul 1999, Zeugswetter Andreas IZ5 wrote:\n\n> Date: Mon, 26 Jul 1999 10:31:33 +0200\n> From: Zeugswetter Andreas IZ5 <[email protected]>\n> To: 'Oleg Bartunov' <[email protected]>\n> Subject: AW: [HACKERS] inserts/updates problem under stressing !\n> \n> \n> > CREATE FUNCTION \"acc_hits\" (int4) RETURNS int4 AS '\n> > Declare\n> > keyval Alias For $1;\n> > cnt int4;\n> > curtime datetime;\n> > Begin\n> > curtime := ''now'';\n> > Select count into cnt from hits where msg_id = keyval;\n> > if Not Found then\n> > cnt := 1;\n> > -- first_access inserted on default, last_access is NULL\n> > Insert Into hits (msg_id,count) values (keyval, cnt);\n> > else\n> > cnt := cnt + 1;\n> > Update hits set count = cnt,last_access = curtime where msg_id =\n> > keyval;\n> > End If;\n> > return cnt;\n> > End;\n> > ' LANGUAGE 'plpgsql';\n> > \n> > \n> Ok, this proc is not concurrent capable. This is because in the time between\n> the select and the update some other connection can update count.\n> \n> 1. Change the update to:\n> > Update hits set count = count+1, last_access = curtime where msg_id =\n> > keyval;\n> > \n> 2. the insert is also not concurrent capable, since there could be two\n> simultaneous\n> first accesses.\n> \n> It looks like there will be more updates than inserts, so I would change the\n> above to \n> 1. try update\n> 2. 
if num rows affected = 0 do the insert \n> \n> I don't know how to get the rows affected, but this should be possible.\n> \n> Andreas\n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Mon, 26 Jul 1999 12:54:49 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: [HACKERS] inserts/updates problem under stressing !" }, { "msg_contents": "At 12:54 26/07/99 +0400, Oleg Bartunov wrote:\n>Andreas,\n>\n>I rewrote my function but got a problem how to know if update fails:\n>\n>CREATE FUNCTION \"acc_hits\" (int4) RETURNS datetime AS ' \n>Declare\n> keyval Alias For $1;\n> cnt int4;\n> curtime datetime;\n>Begin\n> curtime := ''now'';\n> Update hits set count = count + 1,last_access = curtime where msg_id =\nkeyval;\n> if Not Found then\n> ??????????\n\nYou need a patch to plpgsql with adds:\n\n GET DIAGNOSTICS SELECT PROCESSED INTO num_of_rows_affected;\n\nwhere num_of_rows_affected is a local variable.\n\nThe patch is currently with Jan, who is quite busy.\n\n\n> -- first_access inserted on default, last_access is NULL\n> Insert Into hits (msg_id,count) values (keyval, 1); \n> End If;\n> return curtime; \n>End;\n>' LANGUAGE 'plpgsql';\n>\n>\n>\n>\tregards,\n>\t\n>\t\tOleg\n>\n>\n>\n>\n>\n>\n>On Mon, 26 Jul 1999, Zeugswetter Andreas IZ5 wrote:\n>\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 26 Jul 1999 19:14:02 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] inserts/updates problem under stressing !" }, { "msg_contents": "On Mon, 26 Jul 1999, Philip Warner wrote:\n\n> Date: Mon, 26 Jul 1999 19:14:02 +1000\n> From: Philip Warner <[email protected]>\n> To: Oleg Bartunov <[email protected]>,\n> Zeugswetter Andreas IZ5 <[email protected]>\n> Cc: [email protected]\n> Subject: Re: AW: [HACKERS] inserts/updates problem under stressing !\n> \n> At 12:54 26/07/99 +0400, Oleg Bartunov wrote:\n> >Andreas,\n> >\n> >I rewrote my function but got a problem how to know if update fails:\n> >\n> >CREATE FUNCTION \"acc_hits\" (int4) RETURNS datetime AS ' \n> >Declare\n> > keyval Alias For $1;\n> > cnt int4;\n> > curtime datetime;\n> >Begin\n> > curtime := ''now'';\n> > Update hits set count = count + 1,last_access = curtime where msg_id =\n> keyval;\n> > if Not Found then\n> > ??????????\n> \n> You need a patch to plpgsql with adds:\n> \n> GET DIAGNOSTICS SELECT PROCESSED INTO num_of_rows_affected;\n> \n> where num_of_rows_affected is a local variable.\n> \n> The patch is currently with Jan, who is quite busy.\n> \n\nJan, did you approve the patch. Is it usable with 6.5.1 ?\n\n\tOleg\n\n> \n> > -- first_access inserted on default, last_access is NULL\n> > Insert Into hits (msg_id,count) values (keyval, 1); \n> > End If;\n> > return curtime; \n> >End;\n> >' LANGUAGE 'plpgsql';\n> >\n> >\n> >\n> >\tregards,\n> >\t\n> >\t\tOleg\n> >\n> >\n> >\n> >\n> >\n> >\n> >On Mon, 26 Jul 1999, Zeugswetter Andreas IZ5 wrote:\n> >\n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. 
|----/ - \\\n> (A.C.N. 008 659 498) | /(@) ______---_\n> Tel: +61-03-5367 7422 | _________ \\\n> Fax: +61-03-5367 7430 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 26 Jul 1999 16:06:18 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: [HACKERS] inserts/updates problem under stressing !" } ]
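Putting Philip's suggestion together with Andreas' update-first advice, Oleg's function could look like the sketch below. The GET DIAGNOSTICS line uses the syntax of the then-unmerged patch Philip mentions, so this is untested guesswork rather than working 6.5.1 code:

```sql
CREATE FUNCTION "acc_hits" (int4) RETURNS datetime AS '
Declare
    keyval   Alias For $1;
    nrows    int4;
    curtime  datetime;
Begin
    curtime := ''now'';
    Update hits set count = count + 1, last_access = curtime
        where msg_id = keyval;
    GET DIAGNOSTICS SELECT PROCESSED INTO nrows;
    if nrows = 0 then
        -- first_access inserted on default, last_access is NULL
        Insert Into hits (msg_id, count) values (keyval, 1);
    End If;
    return curtime;
End;
' LANGUAGE 'plpgsql';
```

Note that, as Andreas pointed out, two simultaneous first accesses could still both take the insert path; a unique index on msg_id would make one of them fail cleanly instead of silently creating a duplicate row.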
[ { "msg_contents": "Hannu wrote:\n>> Actually you are free to use HTTPS on 80 and HTTP on 443 if you wish.\n>> \nI understand this; the point that I was trying to make was that they run on\ndifferent ports. I don't think that it's possible to run both http and\nhttps on the same port at the same time on the same server, and I think that\nwe should take the cue.\n\nIt's a concept that people already understand.\n\nMikeA\n", "msg_date": "Mon, 26 Jul 1999 11:52:44 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] SSL patch" }, { "msg_contents": "\"Ansley, Michael\" wrote:\n> \n> Hannu wrote:\n> >> Actually you are free to use HTTPS on 80 and HTTP on 443 if you wish.\n> >>\n> I understand this; the point that I was trying to make was that they run on\n> different ports. I don't think that it's possible to run both http and\n> https on the same port at the same time on the same server, and I think that\n> we should take the cue.\n\nIt is possible unless you mean that the very same connection is both \nhttp and https ;)\n\nThe decision to use either http or https is done at _each_ connection \nsetup (at each http(s) request). \nSo http://samehost.com:443/ and https://samehost.com/ will connect to \nsamehost.com port 443, only the latter uses SSL.\n\n> \n> It's a concept that people already understand.\n\nAgreed, but there is nothing at the protocol level that forces them to\nbe \nseparate.\n\n-------------\nHannu\n", "msg_date": "Mon, 26 Jul 1999 13:15:55 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SSL patch" }, { "msg_contents": ">\n> Hannu wrote:\n> >> Actually you are free to use HTTPS on 80 and HTTP on 443 if you wish.\n> >>\n> I understand this; the point that I was trying to make was that they run on\n> different ports. 
I don't think that it's possible to run both http and\n> https on the same port at the same time on the same server, and I think that\n> we should take the cue.\n>\n> It's a concept that people already understand.\n\n I would prefer to have the SSL connections on a different\n port. Doing the mentioned try and error from a 6.6 client to\n connect to a 6.5 server would cause a log message for every\n connect. It's hard to find really important log messages\n then.\n\n Better have a new PGPORT_V7 variable that contains some more\n information for a past 6.5 client. It could look like this:\n\n ssl=5433,raw=5432 Try SSL connect on 5433 and if fails, try\n insecure connection on 5432.\n\n ssl=5432 Try SSL connect on 5432 only and bail out\n if it fails.\n\n raw=5432 Use insecure connection allways on 5432.\n\n This way, the semantic of the PGPORT variable doesn't change,\n so using old and new clients from within the same login\n doesn't cause problems.\n\n Beeing able to configure a particular login explicitly to use\n an insecure connection is IMHO important. Have the database\n and a WEB server doing much CGI on the same server. Why\n crypting local connections, even if they go through TCP (as\n PHP allways does)? The cryptography overhead isn't that\n small.\n\n Well - root could listen on the lo device, but since root\n could easily patch passwd(1) to send him mail if anyone\n changes his password, that's not a real drawback for me.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 26 Jul 1999 12:46:46 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SSL patch" } ]
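Jan's PGPORT_V7 value is a simple comma-separated scheme=port list, so parsing it could look like the sketch below. PGPORT_V7 and its format are only a proposal in this message, not an implemented feature, so everything here is hypothetical:

```python
def parse_pgport_v7(value):
    """Parse e.g. "ssl=5433,raw=5432" into connection attempts, in order.

    The client would try each (scheme, port) pair in turn, bailing out
    after the last one fails.
    """
    attempts = []
    for part in value.split(","):
        scheme, _, port = part.partition("=")
        if scheme not in ("ssl", "raw"):
            raise ValueError("unknown scheme %r" % scheme)
        attempts.append((scheme, int(port)))
    return attempts
```

Under this scheme, `ssl=5433,raw=5432` means "try SSL on 5433, then fall back to an insecure connection on 5432", while a lone `ssl=5432` means "SSL or nothing", matching the semantics Jan lists.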
[ { "msg_contents": "Hi,\n\nWell, I got psql to work with arbitrary query string lengths, eventually, by\nimplementing an expandable buffer struct. Unfortunately, I didn't bring the\nstuff to work today, so it'll only be in the patches tomorrow.\nThis means that the query buffer length will not be restricted in psql.\nNext up is libpq. I checked the code for this, and there seems to be only\none place where this is an issue, and it is just a (redundant?) check on the\nlength of the string to ensure that it is still within the limit. I removed\nthe check, which then seemed to allow a large query string through to the\nback end. I received a message like this:\nERROR: parser: parse error at or near \"\"\n\nSo, I assumed that it was making it as far as the parser. So now I have to\ndive into the backend, and make sure that it can accept arbitrary length\nqueries, right? My first take, strategy-wise, is going to be to go through\nthe diagram detailing the design of the back end, and take it piece by\npiece. Any fundamental flaws in this?\n\nAs far as the buffer struct, and code goes, is there anything around which\ndoes this already? If not, I thought of placing it somewhere where all the\nmodules can access it, as I will probably land up using it in more places\nthan just psql. Thoughts, screams of laughter, etc....\n\nLastly, when I ran configure, I included the --enable-cassert switch.\nHowever, this does not seem to force the compiler to include debug info,\ni.e.: --enable-debug. I tried the --enable-debug switch as well, but it\ndidn't seem to have any effect. Am I missing something?\n\nThanks\n\n\nMikeA\n\n\n\n", "msg_date": "Mon, 26 Jul 1999 12:54:57 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "Max query string length" }, { "msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> As far as the buffer struct, and code goes, is there anything around which\n> does this already? 
If not, I thought of placing it somewhere where all the\n> modules can access it, as I will probably land up using it in more places\n> than just psql. Thoughts, screams of laughter, etc....\n\nThere is a simple expansible-string module in the backend already;\nsee backend/lib/stringinfo.c. It could use some improvements but\nI'd suggest enhancing it rather than making another one.\n\n> Lastly, when I ran configure, I included the --enable-cassert switch.\n> However, this does not seem to force the compiler to include debug info,\n> i.e.: --enable-debug. I tried the --enable-debug switch as well, but it\n> didn't seem to have any effect. Am I missing something?\n\nThere is no --enable-debug switch. You have to turn on debug by\nmodifying the CFLAGS line in the template file for your system.\n(Hmm, now that you mention it, --enable-debug would be a cleaner\nsolution than keeping a locally modified template file, which is\nwhat I currently do and most of the other developers probably do\nalso ... another to-do item ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Jul 1999 10:21:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Max query string length " }, { "msg_contents": "> There is no --enable-debug switch. You have to turn on debug by\n> modifying the CFLAGS line in the template file for your system.\n> (Hmm, now that you mention it, --enable-debug would be a cleaner\n> solution than keeping a locally modified template file, which is\n> what I currently do and most of the other developers probably do\n> also ... another to-do item ...)\n\nWhat debugs would --enable-debug enable?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Jul 1999 10:40:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Max query string length" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> (Hmm, now that you mention it, --enable-debug would be a cleaner\n>> solution than keeping a locally modified template file, which is\n>> what I currently do and most of the other developers probably do\n>> also ... another to-do item ...)\n\n> What debugs would --enable-debug enable?\n\nGood question. I was just thinking of adding -g to CFLAGS.\nWhat else would you want to do?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Jul 1999 11:52:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Max query string length " } ]
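Tom's pointer is to backend/lib/stringinfo.c; the essential trick such an expansible buffer needs is to grow its allocation geometrically (doubling) rather than on every append, so a long query string costs O(n) total copying. A sketch of the idea in Python follows; the class and method names are invented, and the real backend code is C:

```python
class ExpandableBuffer:
    """Grow-by-doubling string buffer, in the spirit of stringinfo.c."""

    def __init__(self, initial_capacity=256):
        self._buf = bytearray(initial_capacity)
        self.len = 0          # bytes currently used

    def append(self, s):
        data = s.encode()
        needed = self.len + len(data)
        cap = len(self._buf)
        if needed > cap:
            while cap < needed:
                cap *= 2      # double, so appends stay O(1) amortized
            new = bytearray(cap)
            new[: self.len] = self._buf[: self.len]
            self._buf = new
        self._buf[self.len : needed] = data
        self.len = needed

    def value(self):
        return self._buf[: self.len].decode()
```

With a structure like this, psql can accumulate a query of arbitrary length line by line and hand the whole string to libpq, which is why the fixed-size length check there becomes redundant.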
[ { "msg_contents": "Hi,\n\nI'm a little bit confused with http://www.postgresql.org\n\n1. Netscape 3.04 totally unusable to browse postgres site - \n I see only text of javascript\n2. Lynx works better, but still when I select \"Info Central\" I see nothing !\n\nAlso, don't forget ALT=\"\" for inline images (not href's) - documents\nwill look much better in Lynx or browser with images disabled.\n\nI understand that most users move to use new browsers but it's good\ntradition (not Microsoft) to support (if possible) old browsers.\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 26 Jul 1999 16:29:34 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "postgres Web problem" }, { "msg_contents": "On Mon, 26 Jul 1999, Oleg Bartunov wrote:\n\n> Hi,\n> \n> I'm a little bit confused with http://www.postgresql.org\n\nTry going to: http://www.postgresql.org/index.html to make sure you're\nnot getting a mirror. This is the first problem I've heard like you \ndescribe. If all works well we'll need to determine which mirror that's\nhaving problems.\n\nVince.\n\n> \n> 1. Netscape 3.04 totally unusable to browse postgres site - \n> I see only text of javascript\n> 2. 
Lynx works better, but still when I select \"Info Central\" I see nothing !\n> \n> Also, don't forget ALT=\"\" for inline images (not href's) - documents\n> will look much better in Lynx or browser with images disabled.\n> \n> I understand that most users move to use new browsers but it's good\n> tradition (not Microsoft) to support (if possible) old browsers.\n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 26 Jul 1999 09:37:14 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres Web problem" }, { "msg_contents": "On Mon, 26 Jul 1999, Vince Vielhaber wrote:\n\n> Date: Mon, 26 Jul 1999 09:37:14 -0400 (EDT)\n> From: Vince Vielhaber <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] postgres Web problem\n> \n> On Mon, 26 Jul 1999, Oleg Bartunov wrote:\n> \n> > Hi,\n> > \n> > I'm a little bit confused with http://www.postgresql.org\n> \n> Try going to: http://www.postgresql.org/index.html to make sure you're\n> not getting a mirror. This is the first problem I've heard like you \n> describe. 
If all works well we'll need to determine which mirror that's\n> having problems.\n> \n> Vince.\n> \n\nHuh, sorry, this is my mirror :-) Original site works fine. Shame.\nWhat do I need to configure ? http://www.sai.msu.su:8000/\nBut, just reloaded and everything looks ok. Weird.\nNo idea.\n\n Regards,\n\tOleg\n\n\n> > \n> > 1. Netscape 3.04 totally unusable to browse postgres site - \n> > I see only text of javascript\n> > 2. Lynx works better, but still when I select \"Info Central\" I see nothing !\n> > \n> > Also, don't forget ALT=\"\" for inline images (not href's) - documents\n> > will look much better in Lynx or browser with images disabled.\n> > \n> > I understand that most users move to use new browsers but it's good\n> > tradition (not Microsoft) to support (if possible) old browsers.\n> > \n> > \tRegards,\n> > \n> > \t\tOleg\n> > \n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: [email protected], http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> > \n> > \n> > \n> \n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n> # include <std/disclaimers.h> TEAM-OS2\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 26 Jul 1999 17:56:36 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", 
"msg_from_op": true, "msg_subject": "Re: [HACKERS] postgres Web problem" }, { "msg_contents": "On Mon, 26 Jul 1999, Oleg Bartunov wrote:\n\n> On Mon, 26 Jul 1999, Vince Vielhaber wrote:\n> \n> > Date: Mon, 26 Jul 1999 09:37:14 -0400 (EDT)\n> > From: Vince Vielhaber <[email protected]>\n> > To: Oleg Bartunov <[email protected]>\n> > Cc: [email protected]\n> > Subject: Re: [HACKERS] postgres Web problem\n> > \n> > On Mon, 26 Jul 1999, Oleg Bartunov wrote:\n> > \n> > > Hi,\n> > > \n> > > I'm a little bit confused with http://www.postgresql.org\n> > \n> > Try going to: http://www.postgresql.org/index.html to make sure you're\n> > not getting a mirror. This is the first problem I've heard like you \n> > describe. If all works well we'll need to determine which mirror that's\n> > having problems.\n> > \n> > Vince.\n> > \n> \n> Huh, sorry, this is my mirror :-) Original site works fine. Shame.\n> What do I need to configure ? http://www.sai.msu.su:8000/\n> But, just reloaded and everything looks ok. Weird.\n> No idea.\n\nSeems ok from here.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 26 Jul 1999 10:20:08 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres Web problem" }, { "msg_contents": "\nOn 26-Jul-99 Oleg Bartunov wrote:\n> Hi,\n> \n> I'm a little bit confused with http://www.postgresql.org\n> \n> 1. Netscape 3.04 totally unusable to browse postgres site - \n> I see only text of javascript\n\n It's well known bug in NS 3.x. 
\nAll \n <script><!----------\n ......\n //------></script>\n\n need to be commented exactly like example above (ie without spaces and\n line breaks> and must be placed inside <HEAD> </HEAD>\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Mon, 26 Jul 1999 18:36:14 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] postgres Web problem" }, { "msg_contents": "Using Netscape 4.07 from Win95. index.html is alright but I get some\ngarbage in the top left hand corner of the info central page (left of the\nmain elephant, above the frames tree). It seems OK with IE4.\n\nhttp://www.postgresql.org/doxlist.html:\n<!--- /* We dont\nperform mouse\nactions on IE3.x &\nNS 2.x */ function\nmsOn() { } function\n\nCiao\n --Louis <[email protected]> \n\nLouis Bertrand http://www.bertrandtech.on.ca\nBertrand Technical Services, Bowmanville, ON, Canada \n\nOpenBSD: Secure by default. http://www.openbsd.org/\n\nOn Mon, 26 Jul 1999, Vince Vielhaber wrote:\n\n> On Mon, 26 Jul 1999, Oleg Bartunov wrote:\n> \n> > Hi,\n> > \n> > I'm a little bit confused with http://www.postgresql.org\n> \n> Try going to: http://www.postgresql.org/index.html to make sure you're\n> not getting a mirror. This is the first problem I've heard like you \n> describe. If all works well we'll need to determine which mirror that's\n> having problems.\n> \n> Vince.\n> \n> > \n> > 1. Netscape 3.04 totally unusable to browse postgres site - \n> > I see only text of javascript\n> > 2. 
Lynx works better, but still when select \"Info Central\" I see nothing !\n> > \n> > Also, don't forget ALT=\"\" for inline images (not href's) - documents\n> > will look much better in Lynx or browser with images disabled.\n> > \n> > I understand that most users move touse new browsers but it's good\n> > tradition (not Microsoft) to support (if possible) old browsers.\n> > \n> > \tRegards,\n> > \n> > \t\tOleg\n> > \n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: [email protected], http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> > \n> > \n> > \n> \n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n> # include <std/disclaimers.h> TEAM-OS2\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n> \n> \n> \n\n\n", "msg_date": "Mon, 26 Jul 1999 14:36:52 +0000 (GMT)", "msg_from": "Louis Bertrand <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres Web problem" }, { "msg_contents": "On Mon, Jul 26, 1999 at 09:37:14AM -0400, Vince Vielhaber wrote:\n> On Mon, 26 Jul 1999, Oleg Bartunov wrote:\n> \n> > Hi,\n> > \n> > I'm a little bit confused with http://www.postgresql.org\n> \n> Try going to: http://www.postgresql.org/index.html to make sure you're\n> not getting a mirror. This is the first problem I've heard like you \n> describe. If all works well we'll need to determine which mirror that's\n> having problems.\n\nAs long as we're complaining about browsers...\n\nUsing NetPositive under BeOS I get the various buttons seperated and\nskewed. 
I'll look into it and see if I can figure out exactly what\nis wrong, but there could be a strange table in there.\n\n", "msg_date": "Mon, 26 Jul 1999 07:38:34 -0700", "msg_from": "Adam Haberlach <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres Web problem" }, { "msg_contents": "\nOn 26-Jul-99 Louis Bertrand wrote:\n> Using Netscape 4.07 from Win95. index.html is alright but I get some\n> garbage in the top left hand corner of the info central page (left of the\n> main elephant, above the frames tree). It seems OK with IE4.\n> \n> http://www.postgresql.org/doxlist.html:\n> <!--- /* We dont\n> perform mouse\n> actions on IE3.x &\n> NS 2.x */ function\n> msOn() { } function\n> \n> Ciao\n> --Louis <[email protected]> \n\nUnfortunately, I can't find computer near around to install IE4 \nand so I'm unable to fix coverpage.\n\n This problem, IMHO, waits for MS IEx for FreeBSD ;-)) or\nsomebody familar with MSIE.\n \n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Mon, 26 Jul 1999 19:53:50 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres Web problem" }, { "msg_contents": "Louis Bertrand <[email protected]> writes:\n> Using Netscape 4.07 from Win95. index.html is alright but I get some\n> garbage in the top left hand corner of the info central page (left of the\n> main elephant, above the frames tree). It seems OK with IE4.\n\n> <!--- /* We dont\n> perform mouse\n> actions on IE3.x &\n> NS 2.x */ function\n> msOn() { } function\n\nYeah, I've been seeing the same with Netscape 4.08 (HPUX build) for\nsome time... 
quoting problem, likely ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Jul 1999 12:01:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres Web problem " }, { "msg_contents": "On Mon, 26 Jul 1999, Louis Bertrand wrote:\n\n> Using Netscape 4.07 from Win95. index.html is alright but I get some\n> garbage in the top left hand corner of the info central page (left of the\n> main elephant, above the frames tree). It seems OK with IE4.\n> \n> http://www.postgresql.org/doxlist.html:\n> <!--- /* We dont\n\nYeah, the <!--- is a comment, but the older netscapes showed it anyway\nin some cases. Is this the only place you see it?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 26 Jul 1999 12:02:38 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres Web problem" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> > <!--- /* We dont\n> > perform mouse\n> > actions on IE3.x &\n> > NS 2.x */ function\n> > msOn() { } function\n> \n> Yeah, I've been seeing the same with Netscape 4.08 (HPUX build) for\n> some time... quoting problem, likely ...\n\nWell, the HTML source for that FRAME is erroneous, as is normal on the\nweb these days, but it's not that. 
My guess is you've turned off the\njavascript misfeature, which, possibly combined with the HTML errors\nand/or the no less than four comment conventions used in that little\nsnippet of code, is confusing Netscape.\n\nIncidentally, in the RDBMS comparison chart (whose URL I can't quote,\nbecause of the FRAME stuff), PostgreSQL is shown as lacking row level\nlocking. Change to \"supported\" and add a footnote, someone?\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "27 Jul 1999 07:43:58 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres Web problem" }, { "msg_contents": "On 27 Jul 1999, Tom Ivar Helbekkmo wrote:\n\n> Tom Lane <[email protected]> writes:\n> \n> > > <!--- /* We dont\n> > > perform mouse\n> > > actions on IE3.x &\n> > > NS 2.x */ function\n> > > msOn() { } function\n> > \n> > Yeah, I've been seeing the same with Netscape 4.08 (HPUX build) for\n> > some time... quoting problem, likely ...\n> \n> Well, the HTML source for that FRAME is erroneous, as is normal on the\n> web these days, but it's not that. My guess is you've turned off the\n> javascript misfeature, which, possibly combined with the HTML errors\n> and/or the no less than four comment conventions used in that little\n> snippet of code, is confusing Netscape.\n> \n> Incidentally, in the RDBMS comparison chart (whose URL I can't quote,\n> because of the FRAME stuff), PostgreSQL is shown as lacking row level\n> locking. 
Change to \"supported\" and add a footnote, someone?\n\nThe comparison is still for 6.4.2, it needs to be updated.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 27 Jul 1999 06:19:35 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres Web problem" }, { "msg_contents": "Tom Ivar Helbekkmo <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> Yeah, I've been seeing the same with Netscape 4.08 (HPUX build) for\n>> some time... quoting problem, likely ...\n\n> Well, the HTML source for that FRAME is erroneous, as is normal on the\n> web these days, but it's not that. My guess is you've turned off the\n> javascript misfeature,\n\nGood guess.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jul 1999 09:18:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres Web problem " } ]
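[Editor's note: a minimal sketch of the script-hiding convention Dmitry describes above, using the function name from the snippet Louis pasted; the site's actual markup is not reproduced here. The HTML comment must open immediately after the `<script>` tag and close with `//-->`, and the block belongs inside `<head>`:

```html
<head>
  <script language="JavaScript"><!--
    /* We dont perform mouse actions on IE3.x & NS 2.x */
    function msOn() { }
    // ... rest of the script ...
  //--></script>
</head>
```

Browsers that predate JavaScript (or have it disabled) then treat the script body as a comment instead of rendering it as page text.]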
[ { "msg_contents": "Hi all,\n\nI discovered the following bug which is new in 6.5.1:\n\n template1=> CREATE TABLE user_selektor (user char(16), selektor int4);\n ERROR: parser: parse error at or near \"user\"\n ERROR: parser: parse error at or near \"user\"\n template1=> CREATE TABLE user_selektor (uuser char(16), selektor\nint4);\n CREATE \n\nSo it looks like an attribute must not be named 'user', with\n'uuser' it works. This is new. We use such a table in our\nproduction system which is 6.3.2.\n\nGreet's,\nFrank.\n-- \n---------------------------------------------------------------\n _/_/_/_/ _/_/ _/_/_/ EAD-Systeme GmbH\n _/ _/ _/ _/ _/ Nachfeldstr. 4\n _/_/_/ _/ _/ _/ _/ D-82490 Farchant, Germany\n _/ _/_/_/_/_/ _/ _/ Phone: +49 8821 9623-0\n _/_/_/_/ _/ _/ _/_/_/_/ Fax: +49 8821 9623-20\n---------------------------------------------------------------\nEmail: [email protected]\n---------------------------------------------------------------\n", "msg_date": "Mon, 26 Jul 1999 15:03:48 +0200", "msg_from": "Frank Stefani <[email protected]>", "msg_from_op": true, "msg_subject": "New bug invented in 6.5.1" } ]
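[Editor's note: a possible workaround, not verified against 6.5.1 — `user` collides with the SQL92 `USER` keyword, which the 6.5 grammar reserves, but a double-quoted identifier should still be accepted, at the price of quoting it in every later query:

```sql
-- Fails in 6.5.1: 'user' is now a reserved word.
CREATE TABLE user_selektor (user char(16), selektor int4);

-- Double-quoting the identifier sidesteps the keyword:
CREATE TABLE user_selektor ("user" char(16), selektor int4);
SELECT "user", selektor FROM user_selektor;
```
]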
[ { "msg_contents": "All it would do is include the -g switch when compiling. At the moment,\nthis doesn't happen, and the --enable-debug is a fairly standard switch in\nthe other packages that I've seen that use autoconf. Is there some other\nway that it is done using the configure script?\n\nBruce Momjian wrote:\n>> Michael Ansley wrote:\n>> > There is no --enable-debug switch. You have to turn on debug by\n>> > modifying the CFLAGS line in the template file for your system.\n>> > (Hmm, now that you mention it, --enable-debug would be a cleaner\n>> > solution than keeping a locally modified template file, which is\n>> > what I currently do and most of the other developers probably do\n>> > also ... another to-do item ...)\n>> \n>> What debugs would --enable-debug enable?\n", "msg_date": "Mon, 26 Jul 1999 17:06:36 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Max query string length" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> All it would do is include the -g switch when compiling. At the moment,\n> this doesn't happen, and the --enable-debug is a fairly standard switch in\n> the other packages that I've seen that use autoconf. Is there some other\n> way that it is done using the configure script?\n> \n> Bruce Momjian wrote:\n> >> Michael Ansley wrote:\n> >> > There is no --enable-debug switch. You have to turn on debug by\n> >> > modifying the CFLAGS line in the template file for your system.\n> >> > (Hmm, now that you mention it, --enable-debug would be a cleaner\n> >> > solution than keeping a locally modified template file, which is\n> >> > what I currently do and most of the other developers probably do\n> >> > also ... 
another to-do item ...)\n> >> \n> >> What debugs would --enable-debug enable?\n> \n\nAdded to TODO:\n\t\n\t* Make configure --enable-debug add -g on compile line\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Jul 1999 11:26:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Max query string length" } ]
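[Editor's note: a sketch of what the to-do item could look like in configure.in — hypothetical, since a real patch would still have to cooperate with the CFLAGS taken from the per-platform template files:

```m4
dnl Hypothetical --enable-debug switch: append -g to CFLAGS so a
dnl locally modified template file is no longer needed for debug builds.
AC_ARG_ENABLE(debug,
    [  --enable-debug          build with debugging symbols (-g)],
    [CFLAGS="$CFLAGS -g"])
```
]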
[ { "msg_contents": "\n> (Hmm, now that you mention it, --enable-debug would be a cleaner\n> solution than keeping a locally modified template file, which is\n> what I currently do and most of the other developers probably do\n> also ... another to-do item ...)\n> \nI use Makefile.custom, this will not be messed up by configure, checkout\nor make. --enable-debug would sound like some more output to a logfile, \nand not CFLAGS\n\nAndreas\n\n", "msg_date": "Mon, 26 Jul 1999 17:16:32 +0200", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] --enable-debug (was: Max query string length) " } ]
[ { "msg_contents": "I work for Hewlett-Packard and we are using PosgreSQL 6.5 internally to collect \ndata about the servers we manage in the data center.\n\n-Ryan\n\n> Morning all...\n> \n> \tJust had someone inquire as to whether any of the 'Fortune 500'\n> companies are using PostgreSQL ...\n> \n> \tDon't know the answer myself...anyone out there associated with\n> one willing to speak out?\n> \n> \t\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n", "msg_date": "Mon, 26 Jul 1999 18:40:01 -0600 (MDT)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Fortune 500 ..." } ]
[ { "msg_contents": "Another thing that whoever is implementing BLOBS will have to be aware of\nis making sure that access to individual BLOBS is controlled by the\n'owning' table.\n\nI don't know if an implementation has been settled on, but at one point it\nwas suggested that BLOBs be stored in a single table. I suspect this\napproach would not work for the above reason.\n\nAlso, is there any reason why BLOBs need to be stored in a table at all?\nCan lower level I/O routines to be used to chain groups of pages together?\n\nFinally, is there a plan for the BLOB implementation that is available for\ncomment?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 27 Jul 1999 12:04:26 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "BLOBS and security" }, { "msg_contents": "\nIs there likley to be a way to use BLOBS via psql????\n\n\nOn Tue, 27 Jul 1999, Philip Warner wrote:\n\n> Another thing that whoever is implementing BLOBS will have to be aware of\n> is making sure that access to individual BLOBS is controlled by the\n> 'owning' table.\n> \n> I don't know if an implementation has been settled on, but at one point it\n> was suggested that BLOBs be stored in a single table. 
I suspect this\n> approach would not work for the above reason.\n> \n> Also, is there any reason why BLOBs need to be stored in a table at all?\n> Can lower level I/O routines to be used to chain groups of pages together?\n> \n> Finally, is there a plan for the BLOB implementation that is available for\n> comment?\n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.C.N. 008 659 498) | /(@) ______---_\n> Tel: +61-03-5367 7422 | _________ \\\n> Fax: +61-03-5367 7430 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n\nA.J. ([email protected])\nSometimes you're ahead, somtimes you're behind.\nThe race is long, and in the end it's only with yourself.\n\n", "msg_date": "Tue, 27 Jul 1999 13:12:43 +0100 (BST)", "msg_from": "A James Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] BLOBS and security" } ]
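[Editor's note: on James's psql question — large objects are already reachable from any SQL interface through the server-side `lo_import`/`lo_export` functions. The paths below are made up, and they are read and written by the backend on the server host, so the postgres user needs access to them:

```sql
-- Pull a file into a large object; the new object's OID comes back
-- as the result, and can be kept in an ordinary table column.
CREATE TABLE images (name text, image oid);
INSERT INTO images VALUES ('logo', lo_import('/tmp/logo.jpg'));

-- Write the object back out to a file on the server.
SELECT lo_export(image, '/tmp/logo-copy.jpg') FROM images
 WHERE name = 'logo';
```
]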
[ { "msg_contents": "Hi,\n\nafter I got DBIlogging work, I run several tests and noticed performance \ndegradation when doing sequential updating of *one* row.\n\nI have 5 processes updated the same row. I use\nLOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE\n\nWhen I run 200 requests I got about 16 req/sec, which is quite enough\nfor my purposes. I expected the same speed if I just increase a number of \nrequests, but it decreases. for 2000 requests I got about 10 req/sec\nand for 20,000 - about 2.5 req/sec !\nI see no reason for such performance degradation - no way to use\npostgres for logging in 24*7*365 Web-site. Probably this is very \nspecific case when several processes updates only one row,\nbut again, I see no reason for such big degradation.\nTable hits itself contains only 1 row !\nI'll try to elimanate httpd, perl in my test bench to test only \npostgres, I dont' have right now such a tool, probable someone\nalready did this ? What tool I can use for testing concurrent update\n\n\tRegards,\n\t\tOleg\n\n\nThis is my home machine, Linux 2.2.10. postgres 6.5.1\nLoad is about 2-2.5\n\nTypical output of ps:\n\n11:21[om]:/usr/local/apache/logs>psg disc\n 1036 ? S 24:17 /usr/local/pgsql/bin/postgres localhost httpd discovery LOCK\n 1040 ? R 24:09 /usr/local/pgsql/bin/postgres localhost httpd discovery idle\n 1042 ? S 24:02 /usr/local/pgsql/bin/postgres localhost httpd discovery LOCK\n 1044 ? R 23:51 /usr/local/pgsql/bin/postgres localhost httpd discovery idle\n 1046 ? S 23:49 /usr/local/pgsql/bin/postgres localhost httpd discovery LOCK\n 1048 ? S 23:47 /usr/local/pgsql/bin/postgres localhost httpd discovery LOCK\n\nI see only one process with SELECT, this is what I expected when use\nIN SHARE ROW EXCLUSIVE MODE. 
Right ?\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Tue, 27 Jul 1999 12:51:07 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "UPDATE performance degradation (6.5.1)" }, { "msg_contents": "Probably I found the problem. After running my test, whiich became\nvery slow I looked at /usr/local/pgsql/data/base/discovery\n\n-rw------- 1 postgres users 5070848 Jul 27 16:14 hits\n-rw------- 1 postgres users 1409024 Jul 27 16:14 hits_pkey\n\nThis is for table with one row after a lot of updates.\nToo much. vacuum analyze this table was a good medicine !\nIs this a design problem ? \n\n\tRegards,\n\t\tOleg\n\nOn Tue, 27 Jul 1999, Oleg Bartunov wrote:\n\n> Date: Tue, 27 Jul 1999 12:51:07 +0400 (MSD)\n> From: Oleg Bartunov <[email protected]>\n> To: [email protected]\n> Subject: [HACKERS] UPDATE performance degradation (6.5.1)\n> \n> Hi,\n> \n> after I got DBIlogging work, I run several tests and noticed performance \n> degradation when doing sequential updating of *one* row.\n> \n> I have 5 processes updated the same row. I use\n> LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE\n> \n> When I run 200 requests I got about 16 req/sec, which is quite enough\n> for my purposes. I expected the same speed if I just increase a number of \n> requests, but it decreases. for 2000 requests I got about 10 req/sec\n> and for 20,000 - about 2.5 req/sec !\n> I see no reason for such performance degradation - no way to use\n> postgres for logging in 24*7*365 Web-site. 
Probably this is very \n> specific case when several processes updates only one row,\n> but again, I see no reason for such big degradation.\n> Table hits itself contains only 1 row !\n> I'll try to elimanate httpd, perl in my test bench to test only \n> postgres, I dont' have right now such a tool, probable someone\n> already did this ? What tool I can use for testing concurrent update\n> \n> \tRegards,\n> \t\tOleg\n> \n> \n> This is my home machine, Linux 2.2.10. postgres 6.5.1\n> Load is about 2-2.5\n> \n> Typical output of ps:\n> \n> 11:21[om]:/usr/local/apache/logs>psg disc\n> 1036 ? S 24:17 /usr/local/pgsql/bin/postgres localhost httpd discovery LOCK\n> 1040 ? R 24:09 /usr/local/pgsql/bin/postgres localhost httpd discovery idle\n> 1042 ? S 24:02 /usr/local/pgsql/bin/postgres localhost httpd discovery LOCK\n> 1044 ? R 23:51 /usr/local/pgsql/bin/postgres localhost httpd discovery idle\n> 1046 ? S 23:49 /usr/local/pgsql/bin/postgres localhost httpd discovery LOCK\n> 1048 ? S 23:47 /usr/local/pgsql/bin/postgres localhost httpd discovery LOCK\n> \n> I see only one process with SELECT, this is what I expected when use\n> IN SHARE ROW EXCLUSIVE MODE. 
Right ?\n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 27 Jul 1999 17:58:20 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] UPDATE performance degradation (6.5.1)" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> Probably I found the problem. After running my test, whiich became\n> very slow I looked at /usr/local/pgsql/data/base/discovery\n\n> -rw------- 1 postgres users 5070848 Jul 27 16:14 hits\n> -rw------- 1 postgres users 1409024 Jul 27 16:14 hits_pkey\n\n> This is for table with one row after a lot of updates.\n> Too much. vacuum analyze this table was a good medicine !\n\nIf the table contains only one row, why are you bothering with an\nindex on it?\n\n> Is this a design problem ? 
\n\nOnly that space in tables and indexes can't be re-used until vacuum.\nI'm not sure if there's any good way around that or not...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jul 1999 10:39:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] UPDATE performance degradation (6.5.1) " }, { "msg_contents": "On Tue, 27 Jul 1999, Tom Lane wrote:\n\n> Date: Tue, 27 Jul 1999 10:39:40 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] UPDATE performance degradation (6.5.1) \n> \n> Oleg Bartunov <[email protected]> writes:\n> > Probably I found the problem. After running my test, whiich became\n> > very slow I looked at /usr/local/pgsql/data/base/discovery\n> \n> > -rw------- 1 postgres users 5070848 Jul 27 16:14 hits\n> > -rw------- 1 postgres users 1409024 Jul 27 16:14 hits_pkey\n> \n> > This is for table with one row after a lot of updates.\n> > Too much. vacuum analyze this table was a good medicine !\n> \n> If the table contains only one row, why are you bothering with an\n> index on it?\n> \n\nThis table with one row is just for test. In production it will\ncontain many thousands of msg_id. I didn't test yet waht will happens\nif I populate table by thousands of row. But could imagine how long\nit will be updated. Ooh. \n\n\n> > Is this a design problem ? \n> \n> Only that space in tables and indexes can't be re-used until vacuum.\n> I'm not sure if there's any good way around that or not...\n\nSo, I need a cron job to vaccuum database. I'm curious how mysql works\nso fast and has no problem in Web environment. 
I know some sites with\nmysql logging and millions of updates every day.\n\n\tOleg\n\n18:54[om]:/usr/local/apache/comps/discovery/db>psql discovery -c 'select * from hits'\nmsg_id|count|first_access |last_access \n------+-----+----------------------------+----------------------------\n 1463|44417|Tue 27 Jul 10:30:18 1999 MSD|Tue 27 Jul 18:44:31 1999 MSD\n 123|58814|Mon 26 Jul 22:54:54 1999 MSD|Tue 27 Jul 10:29:54 1999 MSD\n 4| 219|Mon 26 Jul 22:48:48 1999 MSD|Mon 26 Jul 22:49:02 1999 MSD\n 2| 418|Mon 26 Jul 22:47:28 1999 MSD|Mon 26 Jul 22:48:12 1999 MSD\n 1| 211|Mon 26 Jul 22:46:44 1999 MSD|Mon 26 Jul 22:47:09 1999 MSD\n 13| 1|Sat 24 Jul 23:56:57 1999 MSD| \n 1464| 1|Tue 27 Jul 18:17:51 1999 MSD| \n(7 rows)\n\nand after vacuum analyze:\n\n-rw------- 1 postgres users 8192 Jul 27 18:54 hits\n-rw------- 1 postgres users 1703936 Jul 27 18:54 hits_pkey\n\nWhy hits_pkey is so big ? I have only 7 rows in the table.\n\n\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 27 Jul 1999 18:52:56 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] UPDATE performance degradation (6.5.1) " }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> and after vacuum analyze:\n> -rw------- 1 postgres users 8192 Jul 27 18:54 hits\n> -rw------- 1 postgres users 1703936 Jul 27 18:54 hits_pkey\n> Why hits_pkey is so big ? I have only 7 rows in the table.\n\nLooks like vacuum reclaims the extra space in the table itself,\nbut does not do so with indexes. Ugh.\n\nI've thought for some time that vacuum ought to drop and rebuild\nindexes instead of trying to update them. 
This might be another\nreason for doing that...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jul 1999 10:57:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] UPDATE performance degradation (6.5.1) " }, { "msg_contents": "On Tue, 27 Jul 1999, Tom Lane wrote:\n\n> Date: Tue, 27 Jul 1999 10:57:36 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] UPDATE performance degradation (6.5.1) \n> \n> Oleg Bartunov <[email protected]> writes:\n> > and after vacuum analyze:\n> > -rw------- 1 postgres users 8192 Jul 27 18:54 hits\n> > -rw------- 1 postgres users 1703936 Jul 27 18:54 hits_pkey\n> > Why hits_pkey is so big ? I have only 7 rows in the table.\n> \n> Looks like vacuum reclaims the extra space in the table itself,\n> but does not do so with indexes. Ugh.\n\nAnd do we consider this as a bug ? How do correcting of vacuum\ncould change poor performance ?\n\nI just rebuild my table without using indices and performace increased\na lot. But this is undesirable because it will slowdown my application.\nI'll try dbm files for logging instead of postgres. What's the shame :-)\n\n\tregards,\n Oleg\n\n\n\n> \n> I've thought for some time that vacuum ought to drop and rebuild\n> indexes instead of trying to update them. This might be another\n> reason for doing that...\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 27 Jul 1999 19:46:38 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] UPDATE performance degradation (6.5.1) " } ]
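[Editor's note: until vacuum can reclaim dead row versions (and index pages) on its own, the practical answer for an update-heavy table like `hits` is a scheduled vacuum; a sketch of a crontab entry for the postgres user, with the path and interval only as an example:

```crontab
# Reclaim the dead versions left behind by constant UPDATEs of hits.
*/15 * * * * /usr/local/pgsql/bin/psql -c 'VACUUM ANALYZE hits;' discovery
```
]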
[ { "msg_contents": "Many thanks to everyone who helped so far especially Todd Vierling for\nthe RTFF. I think I am closer but I still have a problem. Here is the\nrule in my makefile now.\n\n.o.so:\n ld -shared -L${PGDIR}/lib --export-dynamic -rpath ${PGDIR}/lib \\\n -lpq -lc -o $@ $<\n\nldd now shows this.\n\nglaccount.so:\n -lpq => /usr/pgsql/lib/libpq.so\n -lc.12 => /usr/lib/libc.so.12\n\nI then went into the PostgreSQL code and added a dlerror() call to the\nerror message after dlopen(). I still get an error but now I get a little\nmore information.\n\nERROR: Load of file /usr/pgsql/modules/glaccount.so failed: dlopen (/usr/pgsql/modules/glaccount.so) failed (/usr/pgsql/modules/glaccount.so: Undefined symbol \"CurrentMemoryContext\" (reloc type = 6, symnum = 6))\n\nCurrentMemoryContext is defined in the postmaster (I checked with nm) which\nis the program doing the dlopen. Here is the relevant line from nm.\n\n08138544 D CurrentMemoryContext\n\nSo it looks like everything should be working but it doesn't. Is this\npossibly a case of bogus error message or am I misunderstanding it? Is\nELF fully baked or do I need to revert to a pre-ELF system?\n\nHmm. I just noticed that nm is an old binary and that it doesn't build\nfrom current sources due to a missing bfd.h. Is nm like ldconfig and\nnot needed any more on ELF systems or is there just a problem with\nthe current sources?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 27 Jul 1999 08:12:19 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "More on shared objects problem" }, { "msg_contents": "On Tue, 27 Jul 1999, D'Arcy J.M. Cain wrote:\n\n: Many thanks to everyone who helped so far especially Todd Vierling for\n: the RTFF. I think I am closer but I still have a problem. 
Here is the\n: rule in my makefile now.\n: \n: .o.so:\n: ld -shared -L${PGDIR}/lib --export-dynamic -rpath ${PGDIR}/lib \\\n: -lpq -lc -o $@ $<\n\n--export-dynamic is only needed for _executables_. It is implied for shared\nobjects.\n\nBTW, for platform compatibility, may I suggest using -R instead of -rpath...\nthat works on all NetBSD, a.out and ELF, linkers (and even some non-NetBSD\nones :).\n\n: ERROR: Load of file /usr/pgsql/modules/glaccount.so failed: dlopen (/usr/pgsql/modules/glaccount.so) failed (/usr/pgsql/modules/glaccount.so: Undefined symbol \"CurrentMemoryContext\" (reloc type = 6, symnum = 6))\n: \n: CurrentMemoryContext is defined in the postmaster (I checked with nm) which\n: is the program doing the dlopen. Here is the relevant line from nm.\n\n...and you don't have --export-dynamic on your _executable's_ link line.\nWhen linking the executable whose symbols will be used by a shared object,\nuse:\n\ncc -Wl,-E ...\n\n(which is equivalent, from the cc side).\n\n: Hmm. I just noticed that nm is an old binary and that it doesn't build\n: from current sources due to a missing bfd.h.\n\nYou need the sources of src/gnu/lib/libbfd and\nsrc/gnu/dist/{opcodes,bfd,libiberty} in order to build any libbfd using\nprogram. This is because there are a lot of internal bfd headers used by\nthese programs. However, there is nothing wrong with your nm.\n\n-- \n-- Todd Vierling ([email protected])\n\n", "msg_date": "Tue, 27 Jul 1999 09:04:58 -0400 (EDT)", "msg_from": "Todd Vierling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More on shared objects problem" }, { "msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> ldd now shows this.\n\n> glaccount.so:\n> -lpq => /usr/pgsql/lib/libpq.so\n> -lc.12 => /usr/lib/libc.so.12\n\nActually, do you even need libpq? 
That's a client-side library; I don't\nthink it should get linked into shlibs that are intended to be dynlinked\ninto the server...\n\n> ERROR: Load of file /usr/pgsql/modules/glaccount.so failed: dlopen (/usr/pgsql/modules/glaccount.so) failed (/usr/pgsql/modules/glaccount.so: Undefined symbol \"CurrentMemoryContext\" (reloc type = 6, symnum = 6))\n\n> CurrentMemoryContext is defined in the postmaster (I checked with nm) which\n> is the program doing the dlopen. Here is the relevant line from nm.\n>\n> 08138544 D CurrentMemoryContext\n\nHmm. On HPUX there is a special linker switch you have to use when the\nmain program is linked to make the linker \"export\" the main-program\nsymbols so that they will be visible to dynlinked libraries. Perhaps\nyour platform needs something similar.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jul 1999 10:33:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More on shared objects problem " }, { "msg_contents": "Thus spake Todd Vierling\n> On Tue, 27 Jul 1999, D'Arcy J.M. Cain wrote:\n> : ld -shared -L${PGDIR}/lib --export-dynamic -rpath ${PGDIR}/lib \\\n> : -lpq -lc -o $@ $<\n> \n> --export-dynamic is only needed for _executables_. It is implied for shared\n> objects.\n\nSo I have been told. Removing it didn't help though.\n\n> BTW, for platform compatibility, may I suggest using -R instead of -rpath...\n> that works on all NetBSD, a.out and ELF, linkers (and even some non-NetBSD\n> ones :).\n\nOK, I did that.\n\n> ...and you don't have --export-dynamic on your _executable's_ link line.\n> When linking the executable whose symbols will be used by a shared object,\n> use:\n> \n> cc -Wl,-E ...\n\nHmm. OK, I'll try to get that into the PostgreSQL code. 
Is that flag\nbenign on a non-ELF system or do I have to test for ELF before adding\nthe flag?\n\n> You need the sources of src/gnu/lib/libbfd and\n> src/gnu/dist/{opcodes,bfd,libiberty} in order to build any libbfd using\n> program. This is because there are a lot of internal bfd headers used by\n> these programs. However, there is nothing wrong with your nm.\n\nI just realized that I have not been supping gnu files. Didn't someone\nsay here that src/gnu was now included in /src? I supped the current\ngnu down and will rebuild the world but I will try the -E first.\n\nBingo! That was it. OK, I'll see that the change gets back into PostgreSQL.\nHmmm. Looking at the code I see that it does expect to add that flag if\nit is on an ELF system. I guess configure needs to be tweaked. I'll\ncopy (and set followups to) the PostgreSQL list to start discussions\nthere on that.\n\nSo how do we determine that a system is elf? I don't see it in uname. Do\nwe just run file(1) on the kernel and see if the string \"ELF\" shows up?\n\nMany thanks for everyone's help.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 27 Jul 1999 13:08:56 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: More on shared objects problem" }, { "msg_contents": "Thus spake Tom Lane\n> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > glaccount.so:\n> > -lpq => /usr/pgsql/lib/libpq.so\n> > -lc.12 => /usr/lib/libc.so.12\n> \n> Actually, do you even need libpq? That's a client-side library; I don't\n> think it should get linked into shlibs that are intended to be dynlinked\n> into the server...\n\nYah, I was just trying stuff. As it turns out, PostgreSQL doesn't\nrecognize NetBSD as an ELF system unless it is a powerpc. 
That's\nprobably correct as it is only -current that is ELF, not the release.\nIf it helps, here is the output of \"file /netbsd\" which tells you for\nsure it is an ELF system.\n\n/netbsd: ELF 32-bit LSB executable, Intel 80386, version 1, statically linked, not stripped\n\nso;\n\nif [ \"`file /netbsd | cut -d' ' -f2`\" = \"ELF\" ]\nthen elf=yes\nfi\n\nUnder the netbsd section of configure_in should do it.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 27 Jul 1999 13:24:30 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More on shared objects problem" }, { "msg_contents": "D'Arcy\" \"J.M.\" Cain wrote:\n> Thus spake Tom Lane\n> > \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > > glaccount.so:\n> > > -lpq => /usr/pgsql/lib/libpq.so\n> > > -lc.12 => /usr/lib/libc.so.12\n> > \n> > Actually, do you even need libpq? That's a client-side library; I don't\n> > think it should get linked into shlibs that are intended to be dynlinked\n> > into the server...\n> \n> Yah, I was just trying stuff. As it turns out, PostgreSQL doesn't\n> recognize NetBSD as an ELF system unless it is a powerpc. That's\n> probably correct as it is only -current that is ELF, not the release.\n\nActually, alpha is ELF from\nday one too. mips is ELF from\nthe 1.3.X release IIRC. Just the i386 & sparc\nhave been switched to ELF recently. 
With the exception of x68k & pc532,\nELF is across the board now.\n\n> If it helps, here is the output of \"file /netbsd\" which tells you for\n> sure it is an ELF system.\n> \n> /netbsd: ELF 32-bit LSB executable, Intel 80386, version 1, statically linked, not stripped\n> \n> so;\n> \n> if [ \"`file /netbsd | cut -d' ' -f2`\" = \"ELF\" ]\n> then elf=yes\n> fi\n> \n> Under the netbsd section of configure_in should do it.\n\nA bit more sane would be to compile and run this little program on NetBSD:\nint main() {\n#ifdef __ELF__\n\treturn 1;\n#else\n\treturn 0;\n#endif\n}\n\nELFism of userland is independent of kernel (ELF kernel can run\nwith a.out userland & a.out kernel can run ELF userland), so this\nmethod is probably safer.\n\n-- \nJaromir Dolecek <[email protected]> http://www.ics.muni.cz/~dolecek/\n\"The only way how to get rid temptation is to yield to it.\" -- Oscar Wilde\n", "msg_date": "Tue, 27 Jul 1999 19:36:39 +0200 (MEST)", "msg_from": "Jaromir Dolecek <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More on shared objects problem" }, { "msg_contents": "On Tue, 27 Jul 1999, D'Arcy J.M. Cain wrote:\n\n> Yah, I was just trying stuff. As it turns out, PostgreSQL doesn't\n> recognize NetBSD as an ELF system unless it is a powerpc. 
That's\n> probably correct as it is only -current that is ELF, not the release.\n> If it helps, here is the output of \"file /netbsd\" which tells you for\n> sure it is an ELF system.\n> \n> /netbsd: ELF 32-bit LSB executable, Intel 80386, version 1, statically linked, not stripped\n> \n> so;\n> \n> if [ \"`file /netbsd | cut -d' ' -f2`\" = \"ELF\" ]\n> then elf=yes\n> fi\n> \n> Under the netbsd section of configure_in should do it.\n\n\tThe ELFness of the kernel is independent of the ELFness of the\n\tuserland - you may want to run a file on some userland binary\n\tinstead.\n\n\t\tDavid/absolute\n\n -=- Sue me, screw me, walk right through me -=-\n\n\n\n", "msg_date": "Tue, 27 Jul 1999 10:45:55 -0700 (PDT)", "msg_from": "David Brownlee <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More on shared objects problem" }, { "msg_contents": "\n>If it helps, here is the output of \"file /netbsd\" which tells you for\n>sure it is an ELF system.\n>\n>/netbsd: ELF 32-bit LSB executable, Intel 80386, version 1, statically linke\n>d, not stripped\n>\n>so;\n>\n>if [ \"`file /netbsd | cut -d' ' -f2`\" = \"ELF\" ]\n>then elf=yes\n>fi\n>\n>Under the netbsd section of configure_in should do it.\n\nIt's worth getting this really right during the migration to ELF...\n\nThat test doesn't work on pmax systems which have ECOFF-format kernels\n(for netbooting) and ELF userland. A `clean' test that asks the kernel\nwhat format(s) it supports would be nice; absent that, testing on\nuserland binaries as well --say /usr/libexec/ld.elf_so (or maybe\ninstead?) is safer than relying on the format of the kernel itself.\n\n", "msg_date": "Tue, 27 Jul 1999 10:49:49 -0700", "msg_from": "Jonathan Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More on shared objects problem " }, { "msg_contents": "On Tue, 27 Jul 1999, D'Arcy J.M. 
Cain wrote:\n\n(Note that pgsql-hackers is not in my To: header, as I'm not on the list and\ncannot post.)\n\n: So how do we determine that a system is elf? I don't see it in uname. Do\n: we just run file(1) on the kernel and see if the string \"ELF\" shows up?\n\nOn NetBSD, the following will do it. This may even be platform independent\nif \"grep -q\" is replaced by \"grep >/dev/null 2>&1\".\n\nif echo __ELF__ | ${CC} -E - | grep -q __ELF__; then\n ... a.out action ...\nelse\n ... ELF action ...\nfi\n\n-- \n-- Todd Vierling ([email protected])\n\n", "msg_date": "Tue, 27 Jul 1999 14:00:28 -0400 (EDT)", "msg_from": "Todd Vierling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More on shared objects problem" }, { "msg_contents": "D'Arcy J.M. Cain wrote:\n> \n> So how do we determine that a system is elf? I don't see it in uname. Do\n> we just run file(1) on the kernel and see if the string \"ELF\" shows up?\n\nThe test I use is to compile a program that opens its own executable\nand checks for the magic number.\n\n\\127ELF as I remember.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n", "msg_date": "Tue, 27 Jul 1999 14:35:13 -0400", "msg_from": "\"Mark Hollomon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: More on shared objects problem" }, { "msg_contents": "On Tue, Jul 27, 1999 at 02:35:13PM -0400, Mark Hollomon wrote:\n> D'Arcy J.M. Cain wrote:\n> > So how do we determine that a system is elf? I don't see it in uname. Do\n> > we just run file(1) on the kernel and see if the string \"ELF\" shows up?\n> \n> The test I use is to compile a program that opens its own executable\n> and checks for the magic number.\nOr this:\n\nif echo __ELF__ | ${CC} -E - | grep -q __ELF__; then\n # ELF\nelse\n # a.out\nfi\n\nThis is not my idea, it's from the patches for apache in the package tree.\n-- \nDies ist Thilos Unix Signature! 
Viel Spass damit.\n", "msg_date": "Tue, 27 Jul 1999 20:41:08 +0200", "msg_from": "Thilo Manske <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: More on shared objects problem" }, { "msg_contents": "On Tue, 27 Jul 1999, Thilo Manske wrote:\n\n: if echo __ELF__ | ${CC} -E - | grep -q __ELF__; then\n: # ELF\n: else\n: # a.out\n: fi\n: \n: This is not my idea, it's from the patches for apache in the package tree.\n\nIt's actually backwards. If the \"grep -q\" returns true, it's an a.out\nsystem (since cpp did *not* replace __ELF__ with 1).\n\n-- \n-- Todd Vierling ([email protected])\n\n", "msg_date": "Tue, 27 Jul 1999 15:04:34 -0400 (EDT)", "msg_from": "Todd Vierling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: More on shared objects problem" }, { "msg_contents": "On Tue, Jul 27, 1999 at 01:24:30PM -0400, D'Arcy J.M. Cain wrote:\n> recognize NetBSD as an ELF system unless it is a powerpc. That's\n> probably correct as it is only -current that is ELF, not the release.\n> If it helps, here is the output of \"file /netbsd\" which tells you for\n> sure it is an ELF system.\n\nPlease note that NetBSD is already ELF on some platforms, e.g macppc and\npmax. Other platforms are still only aout, e.g. m68k and others. At least\ntwo platforms have changed to ELF since the last NetBSD release (i386\nand sparc) and will be ELF when NetBSD 1.5 is released (but not 1.4.1 and\nother patch releases for 1.4).\n\nThe end result is that 3rd party software should not decide ELF-ness based\non versions or platforms, but try to detect the actual status of the system\non which it installs.\n\n- Erik\n", "msg_date": "Tue, 27 Jul 1999 22:37:13 +0200", "msg_from": "Erik Bertelsen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More on shared objects problem" } ]
[ { "msg_contents": "Hi, all\n\nI've noticed that some messages are coming through to me very late - like\ntwenty days late. Is this a problem with my incoming server, or is anybody\nelse experiencing the same thing. Right now, I'm receiving mail that was\nsent on the 8th of July.\n\nPlease respond to me, so as not to clog up the list.\n\nThanks\n\n\nMikeA\n\n", "msg_date": "Tue, 27 Jul 1999 15:56:57 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "Late mail" } ]
[ { "msg_contents": "I've posted RPMs for v6.5.1 at\n\n ftp://postgresql.org/pub/{RPMS,SRPMS}/*.rpm\n\nPlease report any problems (or successes!) though things should be\npretty smooth since the build was almost identical to that for v6.5.\nNote that a dump/reload is not necessary if you already have v6.5 RPMs\ninstalled. Just shut down your v6.5 server, do an \"rpm -Uvh *.rpm\",\nrestart and you're ready to go.\n\nIf you are upgrading from an earlier version, do a \"pg_dumpall >\nfile.dumpall\" before upgrading, move /var/lib/pgsql aside, upgrade,\nand then do a \"psql < file.dumpall\" after restarting.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 27 Jul 1999 14:47:34 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "i386 RPMs available for v6.5.1" } ]
[ { "msg_contents": "\n> This is for table with one row after a lot of updates.\n> Too much. vacuum analyze this table was a good medicine !\n> Is this a design problem ? \n> \nIn PostgreSQL an update always adds a new row to the table.\nThe old rows get eliminated by vacuum that is the whole business of vacuum.\nThere has been some discussion for implementing row reuse,\nbut that is a major task.\n\nAndreas\n", "msg_date": "Tue, 27 Jul 1999 16:57:06 +0200", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] UPDATE performance degradation (6.5.1)" }, { "msg_contents": "On Tue, 27 Jul 1999, Zeugswetter Andreas IZ5 wrote:\n\n> Date: Tue, 27 Jul 1999 16:57:06 +0200\n> From: Zeugswetter Andreas IZ5 <[email protected]>\n> To: 'Oleg Bartunov' <[email protected]>\n> Cc: \"'[email protected]'\" <[email protected]>\n> Subject: Re: [HACKERS] UPDATE performance degradation (6.5.1)\n> \n> \n> > This is for table with one row after a lot of updates.\n> > Too much. vacuum analyze this table was a good medicine !\n> > Is this a design problem ? \n> > \n> In PostgreSQL an update always adds a new row to the table.\n> The old rows get eliminated by vacuum that is the whole business of vacuum.\n> There has been some discussion for implementing row reuse,\n> but that is a major task.\n\nOk, I understand now the size of the table. What's about index file ?\nWhy it's so big. Look. just did delete from hits and vacuum analyze.\n\nom:/usr/local/pgsql/data/base/discovery$ l hits*\n-rw------- 1 postgres users 0 Jul 27 19:14 hits\n-rw------- 1 postgres users 2015232 Jul 27 19:14 hits_pkey\n\nafter 6500 updates:\n\nom:/usr/local/pgsql/data/base/discovery$ l hits*\n-rw------- 1 postgres users 344064 Jul 27 19:23 hits\n-rw------- 1 postgres users 2097152 Jul 27 19:23 hits_pkey\n\nand it took a lot of time. Also I populate table hits by 10,000 rows\nand run the same test. It was incredibly slow. 
\n\nIt seems index file doesn't affected by vacuum analyze !\nCould we consider this as a bug ?\n\n\tRegards,\n\n\t\tOleg\n\n\n> \n> Andreas\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Tue, 27 Jul 1999 19:39:46 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] UPDATE performance degradation (6.5.1)" } ]
[ { "msg_contents": "Yes, I tracked it down to smtp2.a2000.nl. So they are duplicates? I\nthought that they were originals, that I was receiving late. OK, fine, at\nleast I didn't miss the conversation.\n\nThanks, Tom\n\n\nMikeA\n\n>> -----Original Message-----\n>> From: Tom Lane [mailto:[email protected]]\n>> Sent: Tuesday, July 27, 1999 4:35 PM\n>> To: Ansley, Michael\n>> Subject: Re: [INTERFACES] Late mail \n>> \n>> \n>> See \"Mail loop\" thread ... they're being regurgitated by a broken\n>> mailserver in .nl someplace...\n>> \n>> \t\t\tregards, tom lane\n>> \n", "msg_date": "Tue, 27 Jul 1999 17:34:11 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [INTERFACES] Late mail " } ]
[ { "msg_contents": "(back on list, since this is a topic of general RH interest imho)\n\n> > I've posted RPMs for v6.5.1 at\n> I'm curious why the rpm installs place postgres files in different\n> directories than does the .tgz source files. When I installed 6.4.2, I had\n> all sorts of problems with the rpm version because it wouldn't let me define\n> and use my own data directories (e.g., ~/data/accounting). I finally figured\n> out how to configure, make and install the source according to the\n> Administrator's Guide.\n\nRPMs install files into /usr/{bin,lib} and /var/lib/pgsql because\nthese same (or similar) RPMs are shipped by RedHat as part of their\ndistribution. RH feels constrained to *only* install binaries,\nlibraries, etc, into the \"standard places\" which leaves no room for\nalternatives. As you may know, file system layout has been a hot topic\nfor Linux standardization, and there isn't much point in trying to\nfight it, or trying to push RH away from their choices ;)\n\nIt would be possible to build RPMs which install into the areas\ndocumented directly in Postgres, but imho that is not helping by much,\nsince on different (non-Linux) systems the best locations will, in\ngeneral, vary, and RH is just an example of that.\n\nNow that we can generate our own RPMs, we can do a better job of\ndocumenting the RPM behavior. Before, that was a RH black box.\n\nI *believe* that the v6.5.x RPMs will allow you to define your data\ndirectories, but I agree that it isn't clear how to do this. fyi, the\nway to do this is to use initdb with PGDATA pointed at your desired\ntarget directory. You might try using \"initlocation\" to prepare that\ntarget directory, or you can do this manually. 
For the RH\ninstallation, set the protections and ownership the same as for\n/var/lib/pgsql; your postgres account needs to own the directory and\nthe protections should be 700.\n\nYou would also modify /etc/rc.d/init.d/postgresql.init to point to the\nright place.\n\n> Now I'm ready to do some serious porting of my DOS-based business\n> applicaitons. I'm going to use GTK+ for the GUI, C for the glue and I think\n> that I ought to upgrade to 6.5.1 from the present 6.4.2. But, I hesitate.\n> Postgres appears to function now and I don't want to spend more time\n> floundering around trying to get 6.5.1 up and running instead.\n> So, my question: should I grab the 6.5.1 .tgz file and follow the steps I\n> used for the 6.4.2 installation rather than using the package?\n\nimho, the RPMs are most appropriate for casual users or small\ninstallations, but I think that they could be used successfully in a\nlarge installation too. Especially as you deploy production servers,\nthe flexibility you get by a from-source installation is not as\nimportant, and the convenience of an easy installation process is more\nimportant. A from-source installation gives you the most control over\n(server) installation issues, and more importantly would let you more\neasily apply workaround patches during the lifetime of that version.\nI'd suggest using a from-source when doing development, and the RPMs\nwhen deploying, but that is just me...\n\nThe RPMs seem to me to be entirely appropriate for any and all\nclient-only installations, for both small and large configurations.\n\nWith a bit more docs, the RPM installation can be (almost) as flexible\nas a from-source installation (but the target locations are afaik\npretty fixed). 
If you (or anyone else) would like to do the RPM\ninstallation for v6.5.1 to upgrade your v6.4.2 system, I'd be happy to\nhelp walk you through it, and we can capture that info for the next\nround of docs.\n\nRegards.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 27 Jul 1999 16:12:05 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ANNOUNCE] i386 RPMs available for v6.5.1" }, { "msg_contents": "On Tue, 27 Jul 1999, Thomas Lockhart wrote:\n\n> (back on list, since this is a topic of general RH interest imho)\n\n Makes sense to me.\n \n> RPMs install files into /usr/{bin,lib} and /var/lib/pgsql because\n> these same (or similar) RPMs are shipped by RedHat as part of their\n> distribution.\n\n I have /usr/bin/postgres and /usr/local/pgsql/doc/* with user postgres\nbased in /home/postgres. And, of course, /etc/rc.d/init.d/postgresql. If I\nunderstand your message, the only difference I will see by removing all this\nand replacing it with the 6.5.1 rpm is that postgres' home directory and the\ndocs will be in /var and the binary will be in /var/lib rather than\n/usr/bin. Is this correct? \n \n> RH feels constrained to *only* install binaries, libraries, etc, into the\n> \"standard places\" which leaves no room for alternatives. As you may know,\n> file system layout has been a hot topic for Linux standardization, and\n> there isn't much point in trying to fight it, or trying to push RH away\n> from their choices ;)\n\n I certainly did not mean to imply that I'm against the nascent standard for\nlinux file systems! :-) I think that's a Real Good Thing. But, I didn't use\nthe rpm for 6.4.2 because I couldn't get alternative directories to work ...\n\n> I *believe* that the v6.5.x RPMs will allow you to define your data\n> directories, but I agree that it isn't clear how to do this. 
fyi, the\n> way to do this is to use initdb with PGDATA pointed at your desired\n> target directory. You might try using \"initlocation\" to prepare that\n> target directory, or you can do this manually.\n\n ... despite having done all this. I defined PGDATA2 in ~/.bash_profile as\n/home/rshepard/data/accounting. I then ran (as root) initlocation. When I\ntried to use pgsql to open a file there, I received error messages that the\ndirectory (or the ../base directory) didn't exist. When I use the source\ninstallation, it works.\n\n> You would also modify /etc/rc.d/init.d/postgresql.init to point to the\n> right place.\n\n I have a /etc/rc.d/init.d/postgresql script (no .init). I see references in\nthere to the postmaster at /usr/bin and a series of references to files in\n/var/lib. So, I assume that I need only change the /usr/bin paths to\n/var/lib paths to make the upgrade work. Correct?\n \n> pretty fixed). If you (or anyone else) would like to do the RPM\n> installation for v6.5.1 to upgrade your v6.4.2 system, I'd be happy to\n> help walk you through it, and we can capture that info for the next\n> round of docs.\n\n This evening, after work and after I replace the (!!&*%(*)#$%# Seagate tape\ndrive which died on me after only a year.\n\nThanks, Thomas!\n\nRich\n\nDr. Richard B. Shepard, President\n\n Applied Ecosystem Services, Inc.\n 2404 SW 22nd Street | Troutdale, OR 97060-1247 | U.S.A.\n + 1 503-667-4517 (voice) | + 1 503-667-8863 (fax) | [email protected]\n Making environmentally-responsible mining happen.\n\n", "msg_date": "Tue, 27 Jul 1999 10:06:55 -0700 (PDT)", "msg_from": "Rich Shepard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ANNOUNCE] i386 RPMs available for v6.5.1" } ]
[ { "msg_contents": "> > I have no problems with this...I heard an outcry for a bug-tracking\n> > system, and this was the better known one, so installed it. If ya wanna\n> > trash it, be my guest...I have no attachments to it :)\n> If you're gonna trash it, let me know slightly in advance so I can remove\n> the announcement and news entry (it's only a couple of mouse clicks).\n\nWe are not going to trash it, but we *must* evolve how it will be\nused. (btw, I've taken this on-list per Tom Lane's suggestion; the\nshort summary is that the new bug tracking system is getting non-bug\nbug reports and it is short-circuiting the highly successful mailing\nlist support process.)\n\nCan you change the announcement to state something like:\n\n\nWe have installed a new bug tracking system. After reading\ndocumentation, checking the tracking system, *and* asking for help on\nthe mailing lists, you may be directed to enter your problem statement\ninto the system. Please do *not* enter a new report unless it is a\nconfirmed bug or problem.\n\n\nThis wording should get us some breathing room. Comments?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 27 Jul 1999 16:52:55 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [CORE] Re: [Keystone Slip # 22] Some confusion with datetimedata\n\ttype andtimezone" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> (btw, I've taken this on-list per Tom Lane's suggestion; the\n> short summary is that the new bug tracking system is getting non-bug\n> bug reports and it is short-circuiting the highly successful mailing\n> list support process.)\n\nJust to give everyone else some context (and try to get a more useful\ntitle on the thread ;-)), this discussion concerns the Keystone bug\ntracking system that was recently installed on www.postgresql.org;\nsee http://www.PostgreSQL.ORG/bugs/nbrowse.php3. 
There is some\nsketchy documentation about Keystone at\nhttp://www.stonekeep.com/ksonline/docs/docindex.html.\n\nNow that we've got the thing, we need to figure out an effective process\nfor using it. The limited experience so far doesn't seem particularly\nproductive. Here are some comments that I sent to the core group\nearlier today.\n\n\nThomas Lockhart <[email protected]> writes:\n> We've got a *great* network of folks on the mailing lists who help\n> everyone with questions. That should be the first (and second, and\n> third) line of defense for anyone with a question or a possible bug\n> report, and imho we shouldn't have *anything* in the bug tracking\n> system which has not gone through that process first.\n\n> How do we accomplish this without having a completely closed bug\n> reporting system (which for me is one of the options; Bruce could use\n> this for his bug-related ToDo's...).\n\nHmm, so you are thinking it should be a *tracking* system for\nacknowledged bugs, but not an initial reporting system, and we'd\ncontinue to rely on the mail lists for initial reporting.\n\nThat might not be a bad idea. I've already noticed that people\nare failing to provide full bug reports (version, platform, etc)\nbecause the Keystone system doesn't give them a template to fill out.\nSeems like we were getting more complete reports via the email process.\n\n> Should we consider having a more limited number of folks with access\n> to the bug tracking system? Perhaps this could be a perk for long-time\n> contributors who go out of their way to help answer questions??\n\nWe want read-only access for everyone, I think, but limiting the number\nof people who can enter and update slips might be good.\n\nMy reasons for pushing a BTS in the first place were that it would\nprovide better *visibility* : has a bug been fixed, who is working\non it, what is known about it, etc. Basically I was thinking of a\nTODO list with more detail per item than a one-line summary. 
(Also\nit should keep records of closed-out problems, so people could find\nout what version fixes a problem.)\n\nCluttering the BTS database with random reports doesn't aid visibility\nof the important ones. We want to allow read-only access so that status\nis visible to everyone, but that doesn't mean we have to allow everyone\nto alter the database.\n\nThis line of thought also suggests that we should immediately enter\nall the TODO items into the BTS as slips... Bruce could then generate\nthe text TODO via a query from the DB ;-)\n\nFurther comments, anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jul 1999 13:11:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Bug tracking system policy" }, { "msg_contents": "I wrote:\n> see http://www.PostgreSQL.ORG/bugs/nbrowse.php3.\n\nEr, make that\n\nhttp://www.PostgreSQL.ORG/bugs/visitor.php3\n\nif you're not one of the people known to the Keystone system...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jul 1999 13:16:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug tracking system policy " }, { "msg_contents": "\nOn 27-Jul-99 Thomas Lockhart wrote:\n>> > I have no problems with this...I heard an outcry for a bug-tracking\n>> > system, and this was the better known one, so installed it. If ya wanna\n>> > trash it, be my guest...I have no attachments to it :)\n>> If you're gonna trash it, let me know slightly in advance so I can remove\n>> the announcement and news entry (it's only a couple of mouse clicks).\n> \n> We are not going to trash it, but we *must* evolve how it will be\n> used. 
(btw, I've taken this on-list per Tom Lane's suggestion; the\n> short summary is that the new bug tracking system is getting non-bug\n> bug reports and it is short-circuiting the highly successful mailing\n> list support process.)\n> \n> Can you change the announcement to state something like:\n> \n> \n> We have installed a new bug tracking system. After reading\n> documentation, checking the tracking system, *and* asking for help on\n> the mailing lists, you may be directed to enter your problem statement\n> into the system. Please do *not* enter a new report unless it is a\n> confirmed bug or problem.\n> \n> \n> This wording should get us some breathing room. Comments?\n\nDone.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Tue, 27 Jul 1999 17:20:54 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [CORE] Re: [Keystone Slip # 22] Some confusion with datetime" }, { "msg_contents": "\nJust a thought, but we know *what* we want...why not build our own? Or,\nis there something else out there that would better serve what we need?\n\nPersonally, my playing with Keystone leaves much to be desired, and the\nmailing list for it, quite frankly, is totally unresponsive. (ie. 
no\nquestions asked have turned up an answer)...if someone has something\nbetter they would like to suggest, I have no problems setting it up and\nrunning through it.\n\nIf someone thinks they have the time, energy and desire to work on one\nfrom scratch, tailoring it to our requirements, let me know that\nalso...I'll provide you with an account and the resources...\n\nThe idea is to come up with something clean, easy to use and fully\nfunctional...something that ppl *will* use. I don't think Keystone is\nthat, but I haven't seen anything closer/better to what we require...\n\nOn Tue, 27 Jul 1999, Tom Lane wrote:\n\n> Thomas Lockhart <[email protected]> writes:\n> > (btw, I've taken this on-list per Tom Lane's suggestion; the\n> > short summary is that the new bug tracking system is getting non-bug\n> > bug reports and it is short-circuiting the highly successful mailing\n> > list support process.)\n> \n> Just to give everyone else some context (and try to get a more useful\n> title on the thread ;-)), this discussion concerns the Keystone bug\n> tracking system that was recently installed on www.postgresql.org;\n> see http://www.PostgreSQL.ORG/bugs/nbrowse.php3. There is some\n> sketchy documentation about Keystone at\n> http://www.stonekeep.com/ksonline/docs/docindex.html.\n> \n> Now that we've got the thing, we need to figure out an effective process\n> for using it. The limited experience so far doesn't seem particularly\n> productive. Here are some comments that I sent to the core group\n> earlier today.\n> \n> \n> Thomas Lockhart <[email protected]> writes:\n> > We've got a *great* network of folks on the mailing lists who help\n> > everyone with questions. 
That should be the first (and second, and\n> > third) line of defense for anyone with a question or a possible bug\n> > report, and imho we shouldn't have *anything* in the bug tracking\n> > system which has not gone through that process first.\n> \n> > How do we accomplish this without having a completely closed bug\n> > reporting system (which for me is one of the options; Bruce could use\n> > this for his bug-related ToDo's...).\n> \n> Hmm, so you are thinking it should be a *tracking* system for\n> acknowledged bugs, but not an initial reporting system, and we'd\n> continue to rely on the mail lists for initial reporting.\n> \n> That might not be a bad idea. I've already noticed that people\n> are failing to provide full bug reports (version, platform, etc)\n> because the Keystone system doesn't give them a template to fill out.\n> Seems like we were getting more complete reports via the email process.\n> \n> > Should we consider having a more limited number of folks with access\n> > to the bug tracking system? Perhaps this could be a perk for long-time\n> > contributors who go out of their way to help answer questions??\n> \n> We want read-only access for everyone, I think, but limiting the number\n> of people who can enter and update slips might be good.\n> \n> My reasons for pushing a BTS in the first place were that it would\n> provide better *visibility* : has a bug been fixed, who is working\n> on it, what is known about it, etc. Basically I was thinking of a\n> TODO list with more detail per item than a one-line summary. (Also\n> it should keep records of closed-out problems, so people could find\n> out what version fixes a problem.)\n> \n> Cluttering the BTS database with random reports doesn't aid visibility\n> of the important ones. 
We want to allow read-only access so that status\n> is visible to everyone, but that doesn't mean we have to allow everyone\n> to alter the database.\n> \n> This line of thought also suggests that we should immediately enter\n> all the TODO items into the BTS as slips... Bruce could then generate\n> the text TODO via a query from the DB ;-)\n> \n> Further comments, anyone?\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 27 Jul 1999 23:15:10 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug tracking system policy" }, { "msg_contents": "> \n> Just a thought, but we know *what* we want...why not build our own? Or,\n> is there something else out there that would better serve what we need?\n> \n> Personally, my playing with Keystone leaves much to be desired, and the\n> mailing list for it, quite frankly, is totally unresponsive. (ie. no\n> questions asked have turned up an answer)...if someone has something\n> better they would like to suggest, I have no problems setting it up and\n> running through it.\n\nThey obviously need better bug tracking software. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Jul 1999 22:35:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug tracking system policy" }, { "msg_contents": "On Tue, 27 Jul 1999, Bruce Momjian wrote:\n\n> > \n> > Just a thought, but we know *what* we want...why not build our own? 
Or,\n> > is there something else out there that would better serve what we need?\n> > \n> > Personally, my playing with Keystone leaves much to be desired, and the\n> > mailing list for it, quite frankly, is totally unresponsive. (ie. no\n> > questions asked have turned up an answer)...if someone has something\n> > better they would like to suggest, I have no problems setting it up and\n> > running through it.\n> \n> They obviously need better bug tracking software. :-)\n\n*grin*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 28 Jul 1999 01:04:34 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug tracking system policy" }, { "msg_contents": "\nBelay my last comment then...if this satisfies everyoen for now...\n\nOn Tue, 27 Jul 1999, Vince Vielhaber wrote:\n\n> \n> On 27-Jul-99 Thomas Lockhart wrote:\n> >> > I have no problems with this...I heard an outcry for a bug-tracking\n> >> > system, and this was the better known one, so installed it. If ya wanna\n> >> > trash it, be my guest...I have no attachments to it :)\n> >> If you're gonna trash it, let me know slightly in advance so I can remove\n> >> the announcement and news entry (it's only a couple of mouse clicks).\n> > \n> > We are not going to trash it, but we *must* evolve how it will be\n> > used. (btw, I've taken this on-list per Tom Lane's suggestion; the\n> > short summary is that the new bug tracking system is getting non-bug\n> > bug reports and it is short-circuiting the highly successful mailing\n> > list support process.)\n> > \n> > Can you change the announcement to state something like:\n> > \n> > \n> > We have installed a new bug tracking system. 
After reading\n> > documentation, checking the tracking system, *and* asking for help on\n> > the mailing lists, you may be directed to enter your problem statement\n> > into the system. Please do *not* enter a new report unless it is a\n> > confirmed bug or problem.\n> > \n> > \n> > This wording should get us some breathing room. Comments?\n> \n> Done.\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n> # include <std/disclaimers.h> TEAM-OS2\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 28 Jul 1999 01:06:51 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [CORE] Re: [Keystone Slip # 22] Some confusion with datetime" } ]
[ { "msg_contents": "Todd Vierling ([email protected]) suggested the following test for ELFness.\nSeems pretty portable to me.\n\nThus spake Todd Vierling ([email protected])\n>On NetBSD, the following will do it. This may even be platform independednt\n>if \"grep -q\" is replaced by \"grep >/dev/null 2>&1\".\n>\n>if echo __ELF__ | ${CC} -E - | grep -q __ELF__; then\n> ... a.out action ...\n>else\n> ... ELF action ...\n>fi\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 27 Jul 1999 17:58:33 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Checking if a system is ELF" }, { "msg_contents": "On Tue, Jul 27, 1999 at 05:58:33PM -0400, D'Arcy J.M. Cain wrote:\n> Todd Vierling ([email protected]) suggested the following test for ELFness.\n> Seems pretty portable to me.\n> \n> Thus spake Todd Vierling ([email protected])\n> >On NetBSD, the following will do it. This may even be platform independednt\n> >if \"grep -q\" is replaced by \"grep >/dev/null 2>&1\".\n> >\n> >if echo __ELF__ | ${CC} -E - | grep -q __ELF__; then\n> > ... a.out action ...\n> >else\n> > ... ELF action ...\n> >fi\n\nUh, two problems:\n\nOne, it assumes ${CC} is really gcc - the native SGI MIPS compiler doesn't\nlike - for compiling stdin. Second, my linux box seems to #define __ELF__\nas 1. 
I'm not sure if that is gcc version related, or platform.\n\ntatabox% echo __ELF__ | cc -E - \ncc ERROR parsing -: unknown flag\ncc ERROR: no source or object file given\ntatabox% cc -version \nMIPSpro Compilers: Version 7.2.1.3m\n\ntatabox% echo __ELF__ | gcc -E -\n# 1 \"\"\n__ELF__\ntatabox% gcc -v\nReading specs from /usr/site/egcs-1.1.2/lib/gcc-lib/mips-sgi-irix6.5/egcs-2.91.66/specs\ngcc version egcs-2.91.66 19990314 (egcs-1.1.2 release)\n\n\nwallace$ echo __ELF__ | gcc -E -\n# 1 \"\"\n1 \nwallace$ gcc -v\nReading specs from /usr/lib/gcc-lib/i486-linux/2.7.2.3/specs\ngcc version 2.7.2.3\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Tue, 27 Jul 1999 17:27:52 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Checking if a system is ELF" }, { "msg_contents": "Thus spake Ross J. Reedstrom\n> Uh, two problems:\n> \n> One, it assumes ${CC} is really gcc - the native SGI MIPS compiler doesn't\n> like - for compiling stdin.\n\nOK, so we do the test for specific ports like NetBSD. That would be better\nthan no test at all. Perhaps we can also add an option to config to force\nit if we can't tell automatically.\n\n> Second, my linux box seems to #define __ELF__\n> as 1. I'm not sure if that is gcc version related, or platform.\n\nWell, that's sort of the idea assuming that your Linux box is ELF. Here\nis the output from two systems. Druid is a.out and smaug is ELF.\n\n[darcy@druid:work/trends] $ echo __ELF__ | gcc -E -\n# 1 \"\"\n__ELF__\n\n[db@smaug:/usr/db] $ echo __ELF__ | gcc -E -\n# 1 \"\"\n1 \n\nSo grep will find \"__ELF__\" in the output on druid proving that it is an\na.out system. 
On smaug, __ELF__ is defined as \"1\" so grep fails to find\nthe string \"__ELF__\" proving it to be an ELF system.\n\n> tatabox% echo __ELF__ | cc -E - \n> cc ERROR parsing -: unknown flag\n> cc ERROR: no source or object file given\n\nIs there any way to compile stdin? If so then all we need to do is make\nthe command a variable and special case it for some ports.\n\n> tatabox% cc -version \n> MIPSpro Compilers: Version 7.2.1.3m\n> \n> tatabox% echo __ELF__ | gcc -E -\n> # 1 \"\"\n> __ELF__\n\nThis implies that tatabox is not an ELF system. Is that accurate?\n\n> wallace$ echo __ELF__ | gcc -E -\n> # 1 \"\"\n> 1 \n\nAnd this says that wallace is. Correct or no?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 27 Jul 1999 20:20:31 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Checking if a system is ELF" }, { "msg_contents": "On Tue, Jul 27, 1999 at 08:20:31PM -0400, D'Arcy J.M. Cain wrote:\n> Thus spake Ross J. Reedstrom\n> > tatabox% echo __ELF__ | gcc -E -\n> > # 1 \"\"\n> > __ELF__\n> \n> This implies that tatabox is not an ELF system. Is that accurate?\n> \n> > wallace$ echo __ELF__ | gcc -E -\n> > # 1 \"\"\n> > 1 \n> \n> And this says that wallace is. Correct or no?\n> \n\nQuite correct. I had the logic backwords from the original suggestion.\nI can only plead pre-dinner hunger - lack of glucose to the brain ;-)\n\nDon't know about feeding the native cc on stdin. Probably have to \ncreate a temp file and compile that.\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Tue, 27 Jul 1999 22:16:15 -0500", "msg_from": "\"Ross J. 
Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Checking if a system is ELF" }, { "msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> So grep will find \"__ELF__\" in the output on druid proving that it is an\n> a.out system. On smaug, __ELF__ is defined as \"1\" so grep fails to find\n> the string \"__ELF__\" proving it to be an ELF system.\n\nSeems to me that this is a test for __ELF__ being defined, but not for\nexactly what it is defined as. Mightn't a non-ELF system define it as 0?\n\nAlso, I think there are prefab test macros in Autoconf for checking\nwhether a #define symbol exists ... you shouldn't have to do anything\nas grotty as writing out an explicit test program ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jul 1999 09:50:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Checking if a system is ELF " }, { "msg_contents": "Thus spake Tom Lane\n> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > So grep will find \"__ELF__\" in the output on druid proving that it is an\n> > a.out system. On smaug, __ELF__ is defined as \"1\" so grep fails to find\n> > the string \"__ELF__\" proving it to be an ELF system.\n> \n> Seems to me that this is a test for __ELF__ being defined, but not for\n> exactly what it is defined as. Mightn't a non-ELF system define it as 0?\n\nHard to imagine. Pre-ELF systems wouldn't know about it one way or\nanother. However, it is certainly a theoretical possibility.\n\n> Also, I think there are prefab test macros in Autoconf for checking\n> whether a #define symbol exists ... you shouldn't have to do anything\n> as grotty as writing out an explicit test program ...\n\nThis is why I didn't send in diffs. I don't know enough about autoconf.\nIs anyone looking at this discussion planning to incorporate something?\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 28 Jul 1999 12:16:09 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Checking if a system is ELF" } ]
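The suggestions in the thread above can be combined into one portable probe. This is only a sketch of the idea, not code from the PostgreSQL tree: it uses the `grep >/dev/null` form instead of `grep -q` as Todd Vierling suggested, treats an expanded `__ELF__` token (e.g. to `1`, as on Ross's Linux box) as meaning ELF, and reports "unknown" when the compiler refuses to preprocess stdin, as the native SGI MIPS cc does. The function name `elf_check` is illustrative; Tom Lane's caveat that a system might define `__ELF__` as `0` is not handled here, matching the thread.

```shell
# Sketch of the ELF test discussed above (hypothetical helper name).
elf_check() {
  # Preprocess the bare token __ELF__; discard compiler complaints.
  probe=`echo __ELF__ | ${CC:-cc} -E - 2>/dev/null`
  if test -z "$probe"; then
    echo unknown          # compiler refused stdin, or no compiler found
  elif echo "$probe" | grep __ELF__ >/dev/null 2>&1; then
    echo a.out            # __ELF__ undefined: token survived unchanged
  else
    echo ELF              # __ELF__ was expanded by cpp: ELF toolchain
  fi
}
```

As the thread notes, this still assumes the compiler accepts `-E -`; ports whose native compiler cannot read stdin would need a temp-file variant or an explicit configure override.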
[ { "msg_contents": "\nOdd...this is a v6.5.1 system...I thought we had fixed this?\n\nhardware=> vacuum;\nNOTICE: Index products_category: NUMBER OF INDEX' TUPLES (3360) IS NOT THE SAME AS HEAP' (5355)\nNOTICE: Index products_vendor: NUMBER OF INDEX' TUPLES (5089) IS NOT THE SAME AS HEAP' (5355)\nVACUUM\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 27 Jul 1999 20:04:58 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "# of Index' Tuples != Heap'" } ]
[ { "msg_contents": "First, thanks for installing a bug-tracking system.\n\nWriting from an end-user's perspective, I agree and support Tom Lane's\npoints.\n\nTom Lane <[email protected]> writes:\n> We want read-only access for everyone, I think, but limiting the number\n> of people who can enter and update slips might be good.\n\nI would like read-only access to the tracking database. I am happy to\nreport bugs by whatever means is most efficient for the core group. \nCurrently, that is the e-mail bug template, right?\n\n> My reasons for pushing a BTS in the first place were that it would\n> provide better *visibility* : has a bug been fixed, who is working\n> on it, what is known about it, etc. Basically I was thinking of a\n> TODO list with more detail per item than a one-line summary. (Also\n> it should keep records of closed-out problems, so people could find\n> out what version fixes a problem.)\n\nI agree completely.\n\nIt would be nice to be able to search the bug database. For example, it\nwould be useful to do a search using an error message to find out what\nmight be causing the problem.\n\n> Cluttering the BTS database with random reports doesn't aid visibility\n> of the important ones. We want to allow read-only access so that status\n> is visible to everyone, but that doesn't mean we have to allow everyone\n> to alter the database.\n\nAgain, I agree completely.\n\n> This line of thought also suggests that we should immediately enter\n> all the TODO items into the BTS as slips... Bruce could then generate\n> the text TODO via a query from the DB ;-)\n\nGreat idea.\n\n> Further comments, anyone?\n\nHere are my user-centric ideas on the goals for the bug tracking system:\n\n1. Allow end users to determine if unexpected behavior is an\noutstanding bug, a bug fixed in a later release, or a feature.\n2. Cut down on 'me-too' e-mail to the mailing lists.\n3. 
Provide a centralized, transparent, and structured facility for\ndevelopers to report progress on bug fixes.\n4. Share work-arounds for known issues.\n5. Help users determine which release and platform to deploy.\n\nFinally, here are my thoughts about how I would use the bug tracking\nsystem (BTS):\n\n1. Before downloading and installing Postgres, check the BTS for bugs\nin the release and platform I plan to deploy.\n2. If I have any problems during installation or while using Postgres,\nfirst read the documentation, then search the BTS by keyword.\n3. If I notice any problems discussed on the mailing lists that sound\nlike they could affect me, check the BTS periodically to determine\nwhether bug is likely to be a factor.\n\nHope this helps. Thank you to everybody involved in the project for\ncontinuing to improve all aspects of PostgreSQL!\n\nFred Horch\n", "msg_date": "Tue, 27 Jul 1999 21:21:05 -0400", "msg_from": "Fred Wilson Horch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug tracking system policy" } ]
[ { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n>So, I need a cron job to vaccuum database. I'm curious how mysql works\n>so fast and has no problem in Web environment. I know some sites with\n>mysql logging and millions of updates every day.\n\nThe mysql faq explains this in detail. The short answer is that mysql\nhas been highly optimized for a small subset of possible RDBMS applications by\neliminating support for many important RDBMS features (transactions,\nreferential integrity, etc., etc.).\n\nNot only is mysql faster than postgres on, e.g., simple web logging, it\nis also much faster than any commercial RDBMS, such as Oracle, Sybase, etc.\n\nIn reality, mysql is little more than a flat-file database with an SQL \nquery interface. But if that's all you need for your application, then\nthere is no reason not to use it. It's what my hosting service uses, and\nI've learned to live with it for simple Web stuff.\n\n\t-Michael Robinson\n\n", "msg_date": "Wed, 28 Jul 1999 11:00:13 +0800 (CST)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] UPDATE performance degradation (6.5.1) " } ]
[ { "msg_contents": "Hi all,\n\nThere is a TODO item\n* Overhaul mdmgr/smgr to fix double unlinking and double opens, cleanup\n\nIn Windows-NT,we could see the following error reported by \nyutaka tanida [[email protected]]. \n\n> version\n> ------------------------------------------------------------\n> PostgreSQL 6.5.1 on i686-pc-cygwin, compiled by gcc gcc-2.95\n> (1 row)\n>\n> template1=> create table table1 ( i int,j int);\n> CREATE\n> template1=> create view view1 as select * from table1;\n> CREATE\n> template1=> drop view view1;\n> DROP\n> template1=> create view view1 as select * from table1;\n> ERROR: cannot create view1\n\n\"drop view\" couldn't unlink the base file of target view because\nit is doubly opened and so \"create view\" coundn't create the view. \n\nAfter applying the following patch on trial,\"drop view\" was able to\nunlink the base file and \"create view\" was able to create the view\nagain.\n\nI think base files should be closed at the time of cache invalidation.\nRelationFlushRelation() invalidates the entry of relation cache but\ndoesn't close the base file of target relation.\nIs there any reason ?\n\nOr why doesn't RelationCacheDelete() close the base file of \ntarget relation ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n*** utils/cache/relcache.c.orig\tWed May 26 16:05:38 1999\n--- utils/cache/relcache.c\tWed Jul 28 13:23:49 1999\n***************\n*** 1282,1287 ****\n--- 1282,1288 ----\n \t\toldcxt = MemoryContextSwitchTo((MemoryContext) CacheCxt);\n \n \t\tRelationCacheDelete(relation);\n+ \t\tsmgrclose(DEFAULT_SMGR, relation);\n \n \t\tFreeTupleDesc(relation->rd_att);\n \t\tSystemCacheRelationFlushed(RelationGetRelid(relation));\n\n", "msg_date": "Wed, 28 Jul 1999 14:34:38 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "double opens" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hi all,\n> \n> There is a TODO item\n> * Overhaul mdmgr/smgr to 
fix double unlinking and double opens, cleanup\n> \n> In Windows-NT,we could see the following error reported by \n> yutaka tanida [[email protected]]. \n> \n> > version\n> > ------------------------------------------------------------\n> > PostgreSQL 6.5.1 on i686-pc-cygwin, compiled by gcc gcc-2.95\n> > (1 row)\n> >\n> > template1=> create table table1 ( i int,j int);\n> > CREATE\n> > template1=> create view view1 as select * from table1;\n> > CREATE\n> > template1=> drop view view1;\n> > DROP\n> > template1=> create view view1 as select * from table1;\n> > ERROR: cannot create view1\n> \n> \"drop view\" couldn't unlink the base file of target view because\n> it is doubly opened and so \"create view\" coundn't create the view. \n> \n> After applying the following patch on trial,\"drop view\" was able to\n> unlink the base file and \"create view\" was able to create the view\n> again.\n> \n> I think base files should be closed at the time of cache invalidation.\n> RelationFlushRelation() invalidates the entry of relation cache but\n> doesn't close the base file of target relation.\n> Is there any reason ?\n> \n> Or why doesn't RelationCacheDelete() close the base file of \n> target relation ?\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> *** utils/cache/relcache.c.orig\tWed May 26 16:05:38 1999\n> --- utils/cache/relcache.c\tWed Jul 28 13:23:49 1999\n> ***************\n> *** 1282,1287 ****\n> --- 1282,1288 ----\n> \t\toldcxt = MemoryContextSwitchTo((MemoryContext) CacheCxt);\n> \n> \t\tRelationCacheDelete(relation);\n> + \t\tsmgrclose(DEFAULT_SMGR, relation);\n> \n> \t\tFreeTupleDesc(relation->rd_att);\n> \t\tSystemCacheRelationFlushed(RelationGetRelid(relation));\n> \n> \n> \n\nBasically, I thought the close was done already in the drop table code. \nIs it strange to do the close inside the cache? 
The cache does the\nopens, right?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Jul 1999 11:42:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] double opens" }, { "msg_contents": "> \n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > Hi all,\n> > \n> > There is a TODO item\n> > * Overhaul mdmgr/smgr to fix double unlinking and double opens, cleanup\n> > \n> > In Windows-NT,we could see the following error reported by \n> > yutaka tanida [[email protected]]. \n> > \n> > > version\n> > > ------------------------------------------------------------\n> > > PostgreSQL 6.5.1 on i686-pc-cygwin, compiled by gcc gcc-2.95\n> > > (1 row)\n> > >\n> > > template1=> create table table1 ( i int,j int);\n> > > CREATE\n> > > template1=> create view view1 as select * from table1;\n> > > CREATE\n> > > template1=> drop view view1;\n> > > DROP\n> > > template1=> create view view1 as select * from table1;\n> > > ERROR: cannot create view1\n> > \n> > \"drop view\" couldn't unlink the base file of target view because\n> > it is doubly opened and so \"create view\" coundn't create the view. \n> >\n \n[snip]\n\n> > \n> > *** utils/cache/relcache.c.orig\tWed May 26 16:05:38 1999\n> > --- utils/cache/relcache.c\tWed Jul 28 13:23:49 1999\n> > ***************\n> > *** 1282,1287 ****\n> > --- 1282,1288 ----\n> > \t\toldcxt = MemoryContextSwitchTo((MemoryContext) CacheCxt);\n> > \n> > \t\tRelationCacheDelete(relation);\n> > + \t\tsmgrclose(DEFAULT_SMGR, relation);\n> > \n> > \t\tFreeTupleDesc(relation->rd_att);\n> > \t\tSystemCacheRelationFlushed(RelationGetRelid(relation));\n> > \n> > \n> > \n> \n> Basically, I thought the close was done already in the drop table code. \n> Is it strange to do the close inside the cache? 
The cache does the\n> opens, right?\n>\n\nNo,relcache stuff doesn't do the opens.\n\nFirst,my patch is not only for \"drop view\" case.\nIt's for cases such that\n A backend registers an information to invalidate a relcache \n entry and another backend removes the relcache entry trig-\n gered by the information.\n\n\"drop view\" plays both of the part alone and doubly opens\nas follows.\n\nRemoveView()\n RemoveRewriteRule()\n prs2_deleteFromRelation()\n heap_open(relid of view) ---- opens a new file descriptor\n\t\t\t\t for the base file of the view\n setRelhasrulesInRelations()\n heap_replace(tuple of \"pg_class\") ---- registers an informat-\n\t\t\t\t\tion to invalidate the view's \n\t\t\t\t\trelcache entry.\n heap_close() ---- doesn't close the file descriptor\n CommandCounterIncrement() \n ................. \n RelationFlushRelation() --- removes the relcache entry\n\t\t\t\t of the view\n heap_destroy_with_catalog()\n heap_openr(viewName) --- opens another file descriptor\n\t\t\t for the same view because\n\t\t\t heap_openr() couldn't find the\n\t\t\t relcache entry\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Thu, 29 Jul 1999 17:29:38 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] double opens" }, { "msg_contents": "Tom, can you comment on this patch. Seems you have made changes in this\narea. Thanks.\n\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hi all,\n> \n> There is a TODO item\n> * Overhaul mdmgr/smgr to fix double unlinking and double opens, cleanup\n> \n> In Windows-NT,we could see the following error reported by \n> yutaka tanida [[email protected]]. 
\n> \n> > version\n> > ------------------------------------------------------------\n> > PostgreSQL 6.5.1 on i686-pc-cygwin, compiled by gcc gcc-2.95\n> > (1 row)\n> >\n> > template1=> create table table1 ( i int,j int);\n> > CREATE\n> > template1=> create view view1 as select * from table1;\n> > CREATE\n> > template1=> drop view view1;\n> > DROP\n> > template1=> create view view1 as select * from table1;\n> > ERROR: cannot create view1\n> \n> \"drop view\" couldn't unlink the base file of target view because\n> it is doubly opened and so \"create view\" coundn't create the view. \n> \n> After applying the following patch on trial,\"drop view\" was able to\n> unlink the base file and \"create view\" was able to create the view\n> again.\n> \n> I think base files should be closed at the time of cache invalidation.\n> RelationFlushRelation() invalidates the entry of relation cache but\n> doesn't close the base file of target relation.\n> Is there any reason ?\n> \n> Or why doesn't RelationCacheDelete() close the base file of \n> target relation ?\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> *** utils/cache/relcache.c.orig\tWed May 26 16:05:38 1999\n> --- utils/cache/relcache.c\tWed Jul 28 13:23:49 1999\n> ***************\n> *** 1282,1287 ****\n> --- 1282,1288 ----\n> \t\toldcxt = MemoryContextSwitchTo((MemoryContext) CacheCxt);\n> \n> \t\tRelationCacheDelete(relation);\n> + \t\tsmgrclose(DEFAULT_SMGR, relation);\n> \n> \t\tFreeTupleDesc(relation->rd_att);\n> \t\tSystemCacheRelationFlushed(RelationGetRelid(relation));\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 26 Sep 1999 23:30:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] double opens" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, can you comment on this patch. Seems you have made changes in this\n> area. Thanks.\n\nThis change is in place in current sources --- it's a few lines away from\nwhere Hiroshi suggested, but I don't think that makes any difference...\n\n>> *** utils/cache/relcache.c.orig\tWed May 26 16:05:38 1999\n>> --- utils/cache/relcache.c\tWed Jul 28 13:23:49 1999\n>> ***************\n>> *** 1282,1287 ****\n>> --- 1282,1288 ----\n>> oldcxt = MemoryContextSwitchTo((MemoryContext) CacheCxt);\n>> \n>> RelationCacheDelete(relation);\n>> + \t\tsmgrclose(DEFAULT_SMGR, relation);\n>> \n>> FreeTupleDesc(relation->rd_att);\n>> SystemCacheRelationFlushed(RelationGetRelid(relation));\n>> \n>> \n>> \n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Sep 1999 09:34:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] double opens " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Tom, can you comment on this patch. Seems you have made changes in this\n> > area. Thanks.\n> \n> This change is in place in current sources --- it's a few lines away from\n> where Hiroshi suggested, but I don't think that makes any difference...\n\nSorry, I missed it the first time. I see it now. 
Thanks.\n\n\n\n> \n> >> *** utils/cache/relcache.c.orig\tWed May 26 16:05:38 1999\n> >> --- utils/cache/relcache.c\tWed Jul 28 13:23:49 1999\n> >> ***************\n> >> *** 1282,1287 ****\n> >> --- 1282,1288 ----\n> >> oldcxt = MemoryContextSwitchTo((MemoryContext) CacheCxt);\n> >> \n> >> RelationCacheDelete(relation);\n> >> + \t\tsmgrclose(DEFAULT_SMGR, relation);\n> >> \n> >> FreeTupleDesc(relation->rd_att);\n> >> SystemCacheRelationFlushed(RelationGetRelid(relation));\n> >> \n> >> \n> >> \n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 11:49:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] double opens" } ]
[ { "msg_contents": "\n> So, the selectivity that a search for the most common value would\n> have is a reasonable estimate for the selectivity of a search for any\n> value. That's a bogus assumption in this case --- but it's hard to\n> justify making any other assumption in general.\n> \nOther db's usually use the value count(*) / nunique for the light weight\nstatistics.\nThis makes the assumptoin that the distinct index values are evenly\ndistributed.\nThat is on average a correct assumption, whereas our assumption on average\noverestimates the number of rows returned.\nI am not sure we have a nunique info though.\n\nAndreas\n\n", "msg_date": "Wed, 28 Jul 1999 09:49:26 +0200", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "Zeugswetter Andreas IZ5 <[email protected]> writes:\n> Other db's usually use the value count(*) / nunique for the light\n> weight statistics. This makes the assumptoin that the distinct index\n> values are evenly distributed. That is on average a correct\n> assumption, whereas our assumption on average overestimates the number\n> of rows returned. I am not sure we have a nunique info though.\n\nWe don't, and AFAICS it would be an expensive statistic to compute.\n\nI have thought about this a little more overnight, and I have come up\nwith what I think is a better idea. Suppose that VACUUM ANALYZE stores\nin pg_statistic not only the disbursion, but also the most frequently\noccurring value of each column. It already computes (or I should say\nestimates) the most frequently occurring value (MFOV) in order to arrive\nat the disbursion, so storing the value costs nothing except a little\nmore space in pg_statistic. 
Now, the logic that eqsel() should use is\n\n\tif constant-being-compared-against == MFOV then\n\t\treturn disbursion;\n\telse\n\t\treturn MIN(disbursion, 1.0 - disbursion);\n\nwhich works like this: if we are indeed looking for the MFOV then the\nselectivity is just the disbursion, no question. If we are looking for\na value *other* than the MFOV, then the selectivity must be less than\nthe disbursion, since surely this value occurs less often than the MFOV.\nBut the total fraction of non-MFOV values in the table is\n1.0-disbursion, so the fraction that are the specific value we want\ncan't exceed that either.\n\nThe MIN() above is therefore a hard upper bound for the selectivity\nof the non-MFOV case. In practice we might want to multiply the MIN\nby a fudge-factor somewhat less than one, to arrive at what we hope\nis a reasonable estimate rather than a worst-case estimate.\n\nBTW, this argument proves rigorously that the selectivity of a search\nfor any value other than the MFOV is not more than 0.5, so there is some\nbasis for my intuition that eqsel should not return a value above 0.5.\nSo, in the cases where eqsel does not know the exact value being\nsearched for, I'd still be inclined to cap its result at 0.5.\n\nIf we use this logic, the stat we really want is exactly the frequency\nof the MFOV, not the disbursion which is just closely related to it.\nI have not looked at the other uses of disbursion, but if they all can\nwork like this we might want to forget the statistical niceties and just\nstore the frequency of the MFOV.\n\nA final comment is that NULL would be treated just like any regular\nvalue in determining what is the MFOV...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jul 1999 10:57:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "> \n> > So, the selectivity that a search for the most common 
value would\n> > have is a reasonable estimate for the selectivity of a search for any\n> > value. That's a bogus assumption in this case --- but it's hard to\n> > justify making any other assumption in general.\n> > \n> Other db's usually use the value count(*) / nunique for the light weight\n> statistics.\n> This makes the assumption that the distinct index values are evenly\n> distributed.\n> That is on average a correct assumption, whereas our assumption on average\n> overestimates the number of rows returned.\n> I am not sure we have a nunique info though.\n> \n\nYes, that's the problem. Figuring out the number of uniques is hard,\nespecially with no index.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Jul 1999 11:43:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "> Zeugswetter Andreas IZ5 <[email protected]> writes:\n> > Other db's usually use the value count(*) / nunique for the light\n> > weight statistics. This makes the assumption that the distinct index\n> > values are evenly distributed. That is on average a correct\n> > assumption, whereas our assumption on average overestimates the number\n> > of rows returned. I am not sure we have a nunique info though.\n> \n> We don't, and AFAICS it would be an expensive statistic to compute.\n> \n> I have thought about this a little more overnight, and I have come up\n> with what I think is a better idea. Suppose that VACUUM ANALYZE stores\n> in pg_statistic not only the disbursion, but also the most frequently\n> occurring value of each column. 
It already computes (or I should say\n> estimates) the most frequently occurring value (MFOV) in order to arrive\n> at the disbursion, so storing the value costs nothing except a little\n> more space in pg_statistic. Now, the logic that eqsel() should use is\n> \n> \tif constant-being-compared-against == MFOV then\n> \t\treturn disbursion;\n> \telse\n> \t\treturn MIN(disbursion, 1.0 - disbursion);\n\nYes, I like this.\n\n> BTW, this argument proves rigorously that the selectivity of a search\n> for any value other than the MFOV is not more than 0.5, so there is some\n> basis for my intuition that eqsel should not return a value above 0.5.\n> So, in the cases where eqsel does not know the exact value being\n> searched for, I'd still be inclined to cap its result at 0.5.\n\nI don't follow this. If the most frequent value occurs 95% of the time,\nwouldn't the selectivity be 0.95? And why use 1-disbursion. You may\nfind the existing code does a better job than the MIN() computation.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Jul 1999 12:06:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> BTW, this argument proves rigorously that the selectivity of a search\n>> for any value other than the MFOV is not more than 0.5, so there is some\n>> basis for my intuition that eqsel should not return a value above 0.5.\n>> So, in the cases where eqsel does not know the exact value being\n>> searched for, I'd still be inclined to cap its result at 0.5.\n\n> I don't follow this. 
If the most frequent value occurs 95% of the time,\n> wouldn't the selectivity be 0.95?\n\nIf you are searching for the most frequent value, then the selectivity\nestimate should indeed be 0.95. If you are searching for anything else,\nthe selectivity estimate ought to be 0.05 or less. If you don't know\nwhat value you will be searching for, which number should you use?\n\nThe unsupported assumption here is that if the table contains 95%\noccurrence of a particular value, then the odds are also 95% (or at\nleast high) that that's the value you are searching for in any given\nquery that has an \"= something\" WHERE qual.\n\nThat assumption is pretty reasonable in some cases (such as your\nexample earlier of \"WHERE state = 'PA'\" in a Pennsylvania-local\ndatabase), but it falls down badly in others, such as where the\nmost common value is NULL or an empty string or some other indication\nthat there's no useful data. In that sort of situation it's actually\npretty unlikely that the user will be searching for field =\nmost-common-value ... but the system probably has no way to know that.\n\nI wonder whether it would help to add even more data to pg_statistic.\nFor example, suppose we store the fraction of the columns that are NULL,\nplus the most frequently occurring *non null* value, plus the fraction\nof the columns that are that value. 
This would allow us to be very\nsmart about columns in which \"no data\" is represented by NULL (as a good\nDB designer would do):\n\nselectivity of \"IS NULL\": NULLfraction\n\nselectivity of \"IS NOT NULL\": 1 - NULLfraction\n\nselectivity of \"= X\" for a known non-null constant X:\n\tif X == MFOV: MFOVfraction\n\telse: MIN(MFOVfraction, 1-MFOVfraction-NULLfraction)\n\nselectivity of \"= X\" when X is not known a priori, but presumably is not\nnull:\n\tMIN(MFOVfraction, 1-NULLfraction)\n\nBoth of the MIN()s are upper bounds, so multiplying them by a\nfudge-factor < 1 would be reasonable.\n\nThese rules would guarantee small selectivity values when either\nMFOVfraction or 1-NULLfraction is small. It still wouldn't cost\nmuch, since I believe VACUUM ANALYZE is counting nulls already...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jul 1999 19:44:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> BTW, this argument proves rigorously that the selectivity of a search\n> >> for any value other than the MFOV is not more than 0.5, so there is some\n> >> basis for my intuition that eqsel should not return a value above 0.5.\n> >> So, in the cases where eqsel does not know the exact value being\n> >> searched for, I'd still be inclined to cap its result at 0.5.\n> \n> > I don't follow this. If the most frequent value occurs 95% of the time,\n> > wouldn't the selectivity be 0.95?\n> \n> If you are searching for the most frequent value, then the selectivity\n> estimate should indeed be 0.95. If you are searching for anything else,\n> the selectivity estimate ought to be 0.05 or less. 
If you don't know\n> what value you will be searching for, which number should you use?\n\nYou are going to love this:\n\n\t0.95 * 0.95 + 0.05 * 0.05\n\nThis is because with 95% of one value, you would think they would ask for\nthat value 95% of the time, and another value 5% of the time. The last\n0.05 is not really accurate. It assumes there are only two unique\nvalues in the table, which may be wrong, but it is close enough.\n\n> \n> The unsupported assumption here is that if the table contains 95%\n> occurrence of a particular value, then the odds are also 95% (or at\n> least high) that that's the value you are searching for in any given\n> query that has an \"= something\" WHERE qual.\n\nYes.\n\n> That assumption is pretty reasonable in some cases (such as your\n> example earlier of \"WHERE state = 'PA'\" in a Pennsylvania-local\n> database), but it falls down badly in others, such as where the\n> most common value is NULL or an empty string or some other indication\n> that there's no useful data. In that sort of situation it's actually\n> pretty unlikely that the user will be searching for field =\n> most-common-value ... but the system probably has no way to know that.\n\nWell, if null is most common, it is very probable they would be looking\nfor col IS NULL.\n\n> I wonder whether it would help to add even more data to pg_statistic.\n> For example, suppose we store the fraction of the columns that are NULL,\n> plus the most frequently occurring *non null* value, plus the fraction\n> of the columns that are that value. 
This would allow us to be very\n> smart about columns in which \"no data\" is represented by NULL (as a good\n> DB designer would do):\n\nThat would be nice.\n\n> \n> selectivity of \"IS NULL\": NULLfraction\n> \n> selectivity of \"IS NOT NULL\": 1 - NULLfraction\n> \n> selectivity of \"= X\" for a known non-null constant X:\n> \tif X == MFOV: MFOVfraction\n> \telse: MIN(MFOVfraction, 1-MFOVfraction-NULLfraction)\n> \n> selectivity of \"= X\" when X is not known a priori, but presumably is not\n> null:\n> \tMIN(MFOVfraction, 1-NULLfraction)\n> \n> Both of the MIN()s are upper bounds, so multiplying them by a\n> fudge-factor < 1 would be reasonable.\n\nYes, I am with you here.\n\n> These rules would guarantee small selectivity values when either\n> MFOVfraction or 1-NULLfraction is small. It still wouldn't cost\n> much, since I believe VACUUM ANALYZE is counting nulls already...\n\nYes, it is. Sounds nice.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Jul 1999 20:21:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "At 20:21 28/07/99 -0400, you wrote:\n>\n>> I wonder whether it would help to add even more data to pg_statistic.\n>> For example, suppose we store the fraction of the columns that are NULL,\n>> plus the most frequently occurring *non null* value, plus the fraction\n>> of the columns that are that value. This would allow us to be very\n>> smart about columns in which \"no data\" is represented by NULL (as a good\n>> DB designer would do):\n>\n>That would be nice.\n>\n\nI know I've mentioned this before, but can't the designer of the query be\ngiven some influence over optimizer index choices? 
We can circle around the\nproblem of understanding the demographics of a table, but without\nrow-by-row analysis, you'll *never* get the complete and accurate view that\nis needed to cater for all cases.\n\nOTOH, a query designer often knows that a particular query will only be run\nto find 'exceptions' (ie. non-nulls when 95% are nulls), or to find 'small'\nranges. IMO, when a DBA is in a position to help the optimizer, they\n*should* be allowed to. PG *already* has something like this in the form of\npartial indexes: you can view the query that is associated with the index\nas a 'hint' as to when that index should be used. All I'm asking is for\nqueries, not indexes, to specify when an index is used.\n\nThis will not in any way replace the optimizer, but it will give users the\nability to deal with pathological cases.\n\nIn terms of the statistics collected, it *may* also be worth doing some\nrudimentary analysis on the data to see if it conforms to any common\ndistribution (or sum of distributions), and if it does, save that\ninformation. eg. the optimizer will do pretty well if it *knows* the data\nis in a normal distribution, with a mean of 972 and a stdev of 70! Of\ncourse, you must be sure that it *is* a normal distribution to start with.\n\nFWIW, statisticians often seem worried about three values: the mean, median\nand mode. I don't really know which is which, but they are:\n\no The average of all values\no The average of the min and max value\no The most common value.\n\nSomeone who knows a lot more about this stuff than me can probably tell us\nhow these values will affect the trust we place in the index statistics.\nSomeone on this list must be able to give us some insight???\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 29 Jul 1999 12:37:05 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple\n\tse lect)" }, { "msg_contents": "> FWIW, statisticians often seem worried about three values: the mean, median\n> and mode. I don't really know which is which, but they are:\n> \n> o The average of all values\n> o The average of the min and max value\n\nWe have this value. Good for > and < comparisons.\n\n> o The most common value.\n\nWe know the number of times the most common value occurs, but not the\nactual value.\n\n> \n> Someone who knows a lot more about this stuff than me can probably tell us\n> how these values will affect the trust we place in the index statistics.\n> Someone on this list must be able to give us some insight???\n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.C.N. 008 659 498) | /(@) ______---_\n> Tel: +61-03-5367 7422 | _________ \\\n> Fax: +61-03-5367 7430 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Jul 1999 22:39:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> >> BTW, this argument proves rigorously that the selectivity of a search\n> >> for any value other than the MFOV is not more than 0.5, so there is some\n> >> basis for my intuition that eqsel should not return a value above 0.5.\n> >> So, in the cases where eqsel does not know the exact value being\n> >> searched for, I'd still be inclined to cap its result at 0.5.\n> \n> > I don't follow this. If the most frequent value occurs 95% of the time,\n> > wouldn't the selectivity be 0.95?\n> \n> If you are searching for the most frequent value, then the selectivity\n> estimate should indeed be 0.95. If you are searching for anything else,\n> the selectivity estimate ought to be 0.05 or less. If you don't know\n> what value you will be searching for, which number should you use?\n> \n> The unsupported assumption here is that if the table contains 95%\n> occurrence of a particular value, then the odds are also 95% (or at\n> least high) that that's the value you are searching for in any given\n> query that has an \"= something\" WHERE qual.\n> \n> That assumption is pretty reasonable in some cases (such as your\n> example earlier of \"WHERE state = 'PA'\" in a Pennsylvania-local\n> database), but it falls down badly in others, such as where the\n> most common value is NULL or an empty string or some other indication\n> that there's no useful data. In that sort of situation it's actually\n> pretty unlikely that the user will be searching for field =\n> most-common-value ... but the system probably has no way to know that.\n\nThis is exactly what a partial index is supposed to do. 
And then the\nsystem knows it...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 29 Jul 1999 04:48:19 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Tom Lane wrote:\n>> ... it falls down badly in others, such as where the\n>> most common value is NULL or an empty string or some other indication\n>> that there's no useful data. In that sort of situation it's actually\n>> pretty unlikely that the user will be searching for field =\n>> most-common-value ... but the system probably has no way to know that.\n\n> This is exactly what a partial index is supposed to do. And then the\n> system knows it...\n\nI've heard a couple of people assert in this thread that partial indexes\nare the answer, but I don't believe it. Two reasons:\n\n(1) The system won't use a partial index *at all* unless it can prove\nthat the index's predicate (condition for including tuples) is implied\nby the query's WHERE condition. So the predicate doesn't add a thing\nto the system's knowledge about the query.\n\n(2) The statistics that we have available are stats about a column.\nNot stats about a column given the predicate of some index. So there's\nno gain in our statistical knowledge either.\n\nPartial indexes might be a component of a solution, but they are\nvery far from being a solution all by themselves.\n\n\t\t\tregards, tom lane\n\nPS: a quick glance at gram.y shows that we don't actually accept\npartial-index predicates in CREATE INDEX, so Andreas was right that\nthe feature got ripped out at some point. I have no idea how much\nwork might be required to re-enable it... 
but I'll bet it's not\ntrivial.\n", "msg_date": "Thu, 29 Jul 1999 01:09:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "Tom Lane wrote:\n> \n> (2) The statistics that we have available are stats about a column.\n> Not stats about a column given the predicate of some index. So there's\n> no gain in our statistical knowledge either.\n\nIf we added just count of NULLs we would cover for the NOT NULL case\n\n> Partial indexes might be a component of a solution, but they are\n> very far from being a solution all by themselves.\n> \n> regards, tom lane\n> \n> PS: a quick glance at gram.y shows that we don't actually accept\n> partial-index predicates in CREATE INDEX, so Andreas was right that\n> the feature got ripped out at some point. I have no idea how much\n> work might be required to re-enable it... but I'll bet it's not\n> trivial.\n\nThat's why I suggested getting just the simplest case (NOT NULL) \nworking first.\n\nThe more general approach would of course be to gather stats by index.\n\n--------------\nHannu\n", "msg_date": "Thu, 29 Jul 1999 08:42:45 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "> Thomas Lockhart <[email protected]> writes:\n> > Tom Lane wrote:\n> >> ... it falls down badly in others, such as where the\n> >> most common value is NULL or an empty string or some other indication\n> >> that there's no useful data. In that sort of situation it's actually\n> >> pretty unlikely that the user will be searching for field =\n> >> most-common-value ... but the system probably has no way to know that.\n> \n> > This is exactly what a partial index is supposed to do. 
And then the\n> > system knows it...\n> \n> I've heard a couple of people assert in this thread that partial indexes\n> are the answer, but I don't believe it. Two reasons:\n> \n> (1) The system won't use a partial index *at all* unless it can prove\n> that the index's predicate (condition for including tuples) is implied\n> by the query's WHERE condition. So the predicate doesn't add a thing\n> to the system's knowledge about the query.\n> \n> (2) The statistics that we have available are stats about a column.\n> Not stats about a column given the predicate of some index. So there's\n> no gain in our statistical knowledge either.\n> \n> Partial indexes might be a component of a solution, but they are\n> very far from being a solution all by themselves.\n\nAgreed.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 29 Jul 1999 09:46:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "OK, I haven't heard any objections to my last proposal for improving\nselectivity estimates for \"=\", so I'm going to bull ahead and implement it.\n\nThis will require adding columns to a system table, which I've never\ndone before. There are some attribute statistics in pg_attribute and\nsome in pg_statistic, but it looks like changing pg_attribute is a\npretty dangerous business, so I'm inclined to leave pg_attribute alone\nand just add columns to pg_statistic.\n\nDo I need to do anything beyond making the obvious additions to\ncatalog/pg_statistic.h, rebuild, and initdb? 
I see that pg_attribute.h\ndoesn't contain any handmade entries for pg_statistic, so that at least\nis no problem...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Jul 1999 12:13:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "> OK, I haven't heard any objections to my last proposal for improving\n> selectivity estimates for \"=\", so I'm going to bull ahead and implement it.\n> \n> This will require adding columns to a system table, which I've never\n> done before. There are some attribute statistics in pg_attribute and\n> some in pg_statistic, but it looks like changing pg_attribute is a\n> pretty dangerous business, so I'm inclined to leave pg_attribute alone\n> and just add columns to pg_statistic.\n> \n> Do I need to do anything beyond making the obvious additions to\n> catalog/pg_statistic.h, rebuild, and initdb? I see that pg_attribute.h\n> doesn't contain any handmade entries for pg_statistic, so that at least\n> is no problem...\n\nNo, that's pretty much it. You have to make sure you get it all\ncorrect, or it will not work, but you know that. The only other thing\nis that you have to make sure that every insert into pg_statistic in the\nbackend knows about the new fields. I do that by looking for all\nreferences to pg_statistic in the backend, and making sure I have the\nnew fields covered. If you add a varlena field, and there is an index\non the table, you may have to add something to the cache.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 31 Jul 1999 12:50:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" } ]
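[Editorial note] The selectivity rules Tom Lane sketches in the thread above (use the MFOV fraction when the constant matches the most common value, otherwise a MIN() upper bound scaled by a fudge factor, with the NULL fraction folded in) can be written out as a short function. This is an illustrative sketch only; the names `mfov`, `mfov_fraction`, and `null_fraction` are mine, not actual pg_statistic column names:

```python
def eqsel(mfov, mfov_fraction, null_fraction, value=None, fudge=1.0):
    """Estimate the selectivity of "column = value" from per-column stats.

    mfov          -- most frequently occurring non-null value
    mfov_fraction -- fraction of rows equal to mfov
    null_fraction -- fraction of rows that are NULL
    value         -- the constant compared against, or None if unknown
    fudge         -- factor < 1 applied to the MIN() upper bounds
    """
    if value is None:
        # Constant not known at planning time, but presumably not NULL.
        return fudge * min(mfov_fraction, 1.0 - null_fraction)
    if value == mfov:
        return mfov_fraction
    # Any other value occurs no more often than the MFOV, and all
    # non-MFOV non-NULL rows together are 1 - mfov_fraction - null_fraction,
    # so both quantities bound the true selectivity from above.
    return fudge * min(mfov_fraction, 1.0 - mfov_fraction - null_fraction)
```

With the "WHERE state = 'PA'" example from the thread (95% 'PA', no NULLs), searching for 'PA' yields 0.95 while any other state is capped at 0.05, which is exactly the asymmetry being argued for above.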
[ { "msg_contents": "\n> Yes, I think we index nulls. What are partial indexes?\n> \nA create index statement that accepts a where condition. All\nrows that satisfy the where condition are indexed, others not.\nThis needs intelligence in the optimizer.\n\nThis was in postgresql code some time ago, but was removed\nfor some reason I don't remember.\n\nExample: create index ax0 on a (id) where id is not null;\n\nAndreas\n\n", "msg_date": "Wed, 28 Jul 1999 10:00:28 +0200", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "Zeugswetter Andreas IZ5 <[email protected]> writes:\n>> Yes, I think we index nulls. What are partial indexes?\n>> \n> A create index statement that accepts a where condition. All\n> rows that satisfy the where condition are indexed, others not.\n> This needs intelligence in the optimizer.\n\n> This was in postgresql code some time ago, but was removed\n> for some reason I don't remember.\n\nIt was? There's still a ton of code in the optimizer to support it\n(a big chunk of indxqual.c is for testing index predicates).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jul 1999 10:16:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "> > This was in postgresql code some time ago, but was removed\n> > for some reason I don't remember.\n> It was? There's still a ton of code in the optimizer to support it\n> (a big chunk of indxqual.c is for testing index predicates).\n\nThere was talk of removing it, but it seemed to be a Bad Idea to do\nso. 
The discussion even provoked a negative response from the Gods\nthemselves (in the voice of Paul Aoki) and led to the short\ndescription of them in the docs.\n\nThey have definitely been neglected, but are a Good Thing and should\nbe rehabbed...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 28 Jul 1999 14:51:17 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "The code for partial indices is still intact in RTREES, and there is\nsome information about them in one of the Stonebraker papers. If anyone is\ninterested I will dig up my file and look for an exact reference. \n\nBernie\n", "msg_date": "Wed, 28 Jul 1999 15:02:41 +0000", "msg_from": "Bernard Frankpitt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple\n select)" } ]
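[Editorial note] As Tom Lane points out in the earlier thread, a partial index such as `create index ax0 on a (id) where id is not null` is only usable when the planner can prove the query's WHERE clause implies the index predicate. A toy version of that implication test for two trivial predicate forms, equality and IS NOT NULL, is sketched below; this is illustrative only, the real logic lives in the backend's indxqual.c and handles far more cases:

```python
def implies(where, predicate):
    """Tiny predicate-implication test for partial-index usability.

    Conditions are (column, op, value) triples; op is '=', 'is null',
    or 'is not null'.  Returns True only when `where` provably implies
    `predicate` -- the conservative direction the planner needs, since
    a false negative merely skips the index while a false positive
    would return wrong answers.
    """
    wcol, wop, wval = where
    pcol, pop, pval = predicate
    if wcol != pcol:
        return False
    if (wop, wval) == (pop, pval):
        return True  # identical condition
    # "col = <non-null constant>" implies "col IS NOT NULL"
    if wop == '=' and wval is not None and pop == 'is not null':
        return True
    return False
```

So a query with `WHERE id = 5` could use the index above, but `WHERE id IS NOT NULL` alone could not justify an index whose predicate is `id = 5`; the implication only runs one way.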
[ { "msg_contents": "Hi,\n\ntesting of DBIlogging to postgres I've got serious problem\nwith performance degradation during updates.\nIn my case I got 15-16 req/sec for the first 1000 updates which drops down\nto 1-2 req. sec after 20000 updates. This is quite unusable even for\nmedium Web site. As Tom Lane noticed update is just an insert, so if\ntable has only one row which is updated by several processes the size\nwill grow until 'vacuum analyze'. Indeed 'vacuum analyze' helps a lot,\nbut index file doesn't affected, it remains big !\n\nAfter 20190 updates and several 'vacuum analyze':\n\n-rw------- 1 postgres users 1810432 Jul 28 14:22 hits\n-rw------- 1 postgres users 1368064 Jul 28 14:22 hits_pkey\nom:/usr/local/pgsql/data/base/discovery$ psql discovery -c 'select count(*) from hits'\ncount\n-----\n10000\n(1 row)\n\nom:/usr/local/pgsql/data/base/discovery$ psql discovery -c 'select sum(count) from hits'\n sum\n-----\n20190\n(1 row)\n\nI inserted 10,000 rows into table hits just to test how the number of \nrows could affect to performance while 2 rows are updated. I didn't notice\nany difference. \n\nAfter 'vacuum analyze':\nom:/usr/local/pgsql/data/base/discovery$ l hits*\n-rw------- 1 postgres users 606208 Jul 28 14:27 hits\n-rw------- 1 postgres users 1368064 Jul 28 14:27 hits_pkey\nom:/usr/local/pgsql/data/base/discovery$ \n\nIndex file doesn't touched, actually modification date changed, but the\nsize remains big.\n\nHow update performance could be increased if:\n 1. 'vacuum analyze' will analyze index file\n 2. reuse row instead of inserting\n\nI found in TODO only\n\n* Allow row re-use without vacuum(Vadim)\n\nMy site isn't in production yet, so I'd like to know are there some chance\nupdate problem will be solved. 
I think this is rather general problem\nand many Web developers will appreciate solving it as Jan's feature patch\nfor LIMIT inspired many people to use postgres in real applications as well\nas great new MVCC feature.\n\n\tRegards,\n\n\t\tOleg\n\nPS.\n\nFor those who interested in my handler for Logging accumulated hits into \npostgres:\n\n\nIn httpd.conf:\n\nPerlModule Apache::HitsDBI0\n<Location /db/pubs.html>\n PerlCleanupHandler Apache::HitsDBI0\n</Location> \n\nTable scheme:\ncreate table hits (\n msg_id int4 not null primary key,\n count int4 not null,\n first_access datetime default now(),\n last_access datetime\n);\n-- grant information\n\nGRANT SELECT ON hits to PUBLIC;\nGRANT INSERT,UPDATE ON hits to httpd;\n\n\n\npackage Apache::HitsDBI0;\n\nuse strict;\n\n# preloaded in startup.pl\nuse Apache::Constants qw(:common);\n#use DBI ();\n\nsub handler {\n my $orig = shift;\n if ( $orig->args() =~ /msg_id=(\\d+)/ ) {\n my $dbh = DBI->connect(\"dbi:Pg:dbname=discovery\") || die DBI->errstr;\n $dbh->{AutoCommit} = 0;\n my $sth = $dbh->do(\"LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE\") || die $dbh->errstr;\n my $rows_affected = $dbh->do(\"update hits set count=count+1,last_access=now() where msg_id=$1\") || die $dbh->errstr;\n## postgres specific !!!\n $sth = $dbh->do(\"Insert Into hits (msg_id,count) values ($1, 1)\") if ($rows_affected eq '0E0');\n my $rc = $dbh->commit || die $dbh->errstr;\n }\n return OK;\n}\n\n1;\n__END__\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n", "msg_date": "Wed, 28 Jul 1999 14:39:21 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "row reuse while UPDATE and vacuum analyze problem " }, { "msg_contents": "On Wed, 28 Jul 1999, Oleg 
Bartunov wrote:\n\n> How update performance could be increased if:\n> 1. 'vacuum analyze' will analyze index file\n> 2. reuse row instead of inserting\n\nJust to clarify, 'reuse row' won't replace inserting (to the best of my\nknowledge), only reduce space wastage between vacuums. Especially, again\nTTBOMK, with MVCC, where each \"instance\" of a row is serialized. \n\nActually, there is a thought...if I understand the concept of MVCC, how is\nreusing a row going to work? My understanding is that I can \"physically\"\nhave two copies of a row in a table, one newer than the other. So, if\nsomeone is running a SELECT while I'm doing an UPDATE, their SELECT will\ntake the older version of the row (the row at the time their SELECT\nstarted)...depending on how busy that table is, there will have to be some\nsort of mechanism for determining how 'stale' a row is, no?\n\nie. on a *very* large table, with multiple SELECT/UPDATEs happening?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 28 Jul 1999 09:00:21 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] row reuse while UPDATE and vacuum analyze problem " }, { "msg_contents": "On Wed, 28 Jul 1999, The Hermit Hacker wrote:\n\n> Date: Wed, 28 Jul 1999 09:00:21 -0300 (ADT)\n> From: The Hermit Hacker <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected], [email protected], [email protected]\n> Subject: Re: [HACKERS] row reuse while UPDATE and vacuum analyze problem \n> \n> On Wed, 28 Jul 1999, Oleg Bartunov wrote:\n> \n> > How update performance could be increased if:\n> > 1. 'vacuum analyze' will analyze index file\n> > 2. reuse row instead of inserting\n> \n> Just to clarify, 'reuse row' won't replace inserting (to the best of my\n> knowledge), only reduce space wastage between vacuums. 
Especially, again\n> TTBOMK, with MVCC, where each \"instance\" of a row is serialized. \n> \n> Actually, there is a thought...if I understand the concept of MVCC, how is\n> reusing a row going to work? My understanding is that I can \"physically\"\n> have two copies of a row in a table, one newer than the other. So, if\n> someone is running a SELECT while I'm doing an UPDATE, their SELECT will\n> take the older version of the row (the row at the time their SELECT\n> started)...depending on how busy that table is, there will have to be some\n> sort of mechanism for determining how 'stale' a row is, no?\n> \n> ie. on a *very* large table, with multiple SELECT/UPDATEs happening?\n\nThis is what I noticed when I started my testing about a week ago - I got\nduplicates, because of multiple concurrent processes trying\ninserts/updates. After LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE\nall problems were gone, except that performance slowed down a little bit.\nBut after many updates performance degrades very much IMO because\ntable and index size grow even if I update the same row, and \neven 'vacuum analyze' doesn't reduce the size of the index file.\nIn principle, I could live with a cron job running vacuumdb every hour, but\nvacuum doesn't touch indices.\nI hope I'll meet Vadim in Moscow and we'll discuss MVCC aspects in native \nRussian :-)\n\n\tRegards,\n\n\t\tOleg\n\n> \n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 28 Jul 1999 16:28:27 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] row reuse while UPDATE and vacuum analyze problem " }, { "msg_contents": "At 09:00 28/07/99 -0300, you wrote:\n>On Wed, 28 Jul 1999, Oleg Bartunov wrote:\n>\n>Actually, there is a thought...if I understand the concept of MVCC, how is\n>reusing a row going to work? My understanding is that I can \"physically\"\n>have two copies of a row in a table, one newer than the other. So, if\n>someone is running a SELECT while I'm doing an UPDATE, their SELECT will\n>take the older version of the row (the row at the time their SELECT\n>started)...depending on how busy that table is, there will have to be some\n>sort of mechanism for determining how 'stale' a row is, no?\n>\n>ie. on a *very* large table, with multiple SELECT/UPDATEs happening?\n>\n\nI presume that's part of MVCC - if PGSQL has a 'transaction id', then you\nonly need to keep copies of rows back to the earliest started active tx. In\nfact, you only really need to keep versions to satisfy all current active\ntx's (ie. you don't need *all* intermediate versions). 
You must also keep a\ncopy of a row prior to the current writer's update (until they commit).\n\nThere was talk a while back about doing a 'background vacuum' - did the\ntalk go anywhere, because reusing space is probably the only way to solve\nthe infinite growth problem.\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 28 Jul 1999 22:29:40 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] row reuse while UPDATE and vacuum analyze\n problem" }, { "msg_contents": "> On Wed, 28 Jul 1999, Oleg Bartunov wrote:\n> \n> > How update performance could be increased if:\n> > 1. 'vacuum analyze' will analyze index file\n> > 2. reuse row instead of inserting\n> \n> Just to clarify, 'reuse row' won't replace inserting (to the best of my\n> knowledge), only reduce space wastage between vacuum's. Especially, again\n> TTBOMK, with MVCC, where each \"instance\" of a row is serialized. \n> \n> Actually, there is a tought...if I understand the concept of MVCC, how is\n> reusing a row going to work? My understanding is that I can \"physically\"\n> have to copies of a row in a table, one newer then the other. So, if\n> someone is running a SELECT while I'm doing an UPDATE, their SELECT will\n> take the older version of hte row (the row at the time their SELECT\n> started)...depending on how busy that table is, there will have to be some\n> sort of mechanism for determining how 'stale' a row is, no?\n> \n> ie. on a *very* large table, with multiple SELECT/UPDATEs happening?\n\nYou would have to leave referenced rows alone. 
I think Vadim has this\ncovered.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Jul 1999 11:45:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] row reuse while UPDATE and vacuum analyze problem" }, { "msg_contents": "After thinking about the row reuse problem for a while, this is the best\nI could come up with. I don't think it is well enough thought out to be\nworth implementing yet, but maybe the ideas are a step in the right\ndirection.\n\nA partial solution to row-reuse is to allow writing commands (commands\nwith write locks on a buffer) to purge expendable tuples as they find\nthem. When a transaction starts, it could look at the snapshots of\nconcurrent transactions and compute a tide-mark equal to the oldest TID\nin all the concurrent snapshots. Any deleted tuple in a heap relation\nwith a committed tmax value that is smaller than the tide-mark is\ninvisible to any of the concurrent or future transactions, and is\nexpendable. The write command that discovers the expendable tuple\ncan then delete it, and reclaim the tuple space. \n\nUnfortunately this solves only half the problem. Deleting the tuple\nmeans that other scans no longer have to read and then discard it, but\nthe space that the tuple uses is not really reclaimed because the way\nthat insertions work is that they always add to the end of the heap. If\nall the tuples in a \ntable are of fixed size then one solution would be to keep a list of\nempty tuple slots in the heap, and insert new tuples in these slots\nfirst. This would allow inserts to keep the table well packed.\n\nIn the case of tables with variable length tuples the problem seems\nharder. 
Since the actual tuples in a table are always accessed through\nItemIds, it is possible for a process with a write lock on a buffer to\nrearrange the tuples in the page to remove free space without affecting\nconcurrent processes' views of the data within the buffer. After\nfreeing the tuple, and compacting the space in the page, the process\nwould have to update the free space list \nby removing any previous pointer to space on the page, and then\nre-inserting a pointer to the new space on the page. The free space\nlist becomes quite a bit more complicated in this case, as it has to\nkeep track of the sizes of the free space segments, and it needs to be\nindexed by the block in which the free space resides, and the size of\nthe space available. This would seem to indicate the need for both a\ntree-structure and a hash structure associated with the free space list.\n\nComments anyone?\n\nBernie\n", "msg_date": "Wed, 28 Jul 1999 15:54:20 +0000", "msg_from": "Bernard Frankpitt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] row reuse while UPDATE and vacuum analyze problem" }, { "msg_contents": "At 15:54 28/07/99 +0000, Bernard Frankpitt wrote:\n>A partial solution to row-reuse is to allow writing commands (commands\n>with write locks on a buffer) to purge expendable tuples as they find\n>them. When a transaction starts it could look at the snapshots of\n>concurrent transactions and compute a tide-mark equal to the oldest TID\n>in all the concurrent snapshots. Any deleted tuple in a heap relation\n>with a committed tmax value that is smaller than the tide-mark is\n>invisible to any of the concurrent or future transactions, and is\n>expendable. The write command that discovers the expendable transaction\n>can then delete it, and reclaim the tuple space. \n\nIs there any way that the space can be freed as soon as it is no longer\nneeded? 
I'm not sure how the MVCC stuff works, but I assume that when a R/O\nTX starts, a lock is taken out on the tables (and/or rows) that are read.\nIf a R/W TX updates a record, then a new version is written with a new TID.\nCan the writer also mark the old version of the row for deletion, so that\nwhen the last reader commits, the row is deleted? I have no idea if this\nwould be more or less efficient that making the writers do it, but it would\nhave the advantage of distributing the load across more TXs.\n\n>Unfortunately this solves only half the problem. Deleting the tuple\n>means that other scans no longer have to read and then discard it, but\n>the space that the tuple uses is not really reclaimed because the way\n>that insertions work is that they always add to the end of the heap. If\n>all the tuples in a \n>table are of fixed size then one solution would be to keep a list of\n>empty tuple slots in the heap, and insert new tuples in these slots\n>first. This would allow inserts to keep the table well packed.\n\nThis makes sense, but I would guess that fixed length tuples would not be\ncommon.\n\n>In the case of tables with variable length tuples the problem seems\n>harder. Since the actual tuples in a table are always accessed through\n>ItemIds, it is possible for a process with a write lock on a buffer to\n>rearrange the tuples in the page to remove free space without affecting\n>concurrent processes' views of the data within the buffer. After\n>freeing the tuple, and compacting the space in the page, the process\n>would have to update the free space list \n>by removing any previous pointer to space on the page, and then\n>re-inserting a pointer to the new space on the page. The free space\n>list becomes quite a bit more complicated in this case, as it has to\n>keep track of the sizes of the free space segments, and it needs to be\n>indexed by the block in which the free space resides, and the size of\n>the space available. 
This would seem to indicate the need for both a\n>tree-structure and a hash structure associated with the free space list.\n\nI'm not sure I follow this, I also don't know anything about the internals\nof the data storage code, but...\n\nUsing my favorite database as a model, an alternative might be to (1)\nRecord the free space on a page-by-page (or any kind of chunk-by-chunk)\nbasis, (2) Don't bother doing any rearrangement when a record is deleted,\njust mark the record as free, and update the bytes-free count for the page,\n(3) When you want to store a record, look for any page that has enough\nbytes free, then do any space rearrangement necessary to store the record.\n\nAFAICT, most of the above steps (with the exception of page-fullness) must\nalready be performed.\n\nThis will of course fail totally if records are allowed to freely cross\npage boundaries.\n\nPlease forgive me if this is way off the mark...and (if you have the\npatience) explain what I've missed.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 29 Jul 1999 02:11:03 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] row reuse while UPDATE and vacuum analyze problem" } ]
[ { "msg_contents": "\n> BTW, this argument proves rigorously that the selectivity of a search\n> for any value other than the MFOV is not more than 0.5, so there is some\n> basis for my intuition that eqsel should not return a value above 0.5.\n> So, in the cases where eqsel does not know the exact value being\n> searched for, I'd still be inclined to cap its result at 0.5.\n> \nYes, this is imho an easy and efficient fix. I would even use a lower value,\nlike 0.3.\nGood database design would not create an index for such bad selectivity\nanyway.\nSo if you have a performance problem because of such bad selectivity,\nthe advice is to drop the index.\n\nIf you plan to store explicit key values, I would do this in an extra\nstatistic, \nthat stores bunches of equally sized buckets, and distinct values for very\nbadly \nskewed values.\n\nExample assuming int index column:\nfrom\tto\tnrow_estimate\n1\t100\t10005\n101\t20000\t9997\n20001\t100000\t10014\n\nbadly skewed values (excluded in above table):\nval\t\tnrow_estimate\n1\t\t100000\n5\t\t1000000\n\nBut imho this is overkill, and seldom useful.\n\nAndreas\n", "msg_date": "Wed, 28 Jul 1999 17:45:32 +0200", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" }, { "msg_contents": "> Example assuming int index column:\n> from\tto\tnrow_estimate\n> 1\t100\t10005\n> 101\t20000\t9997\n> 20001\t100000\t10014\n> \n> badly skewed values (excluded in above table):\n> val\t\tnrow_estimate\n> 1\t\t100000\n> 5\t\t1000000\n> \n> But imho this is overkill, and seldom useful.\n\nYes, some commercial databases do this, though it is of questionable value.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Jul 1999 12:07:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity of \"=\" (Re: [HACKERS] Index not used on simple se\n\tlect)" } ]
[ { "msg_contents": "\nHi,\n\nHas anyone tried to compile with the SYSLOGD facility enabled ?\nI just tried 6.5.1 but couldn't compile it on Linux 2.2.10:\n\ntrace.c: In function `tprintf':\ntrace.c:119: `LOG_INFO' undeclared (first use in this function)\ntrace.c:119: (Each undeclared identifier is reported only once\ntrace.c:119: for each function it appears in.)\ntrace.c:119: `LOG_DEBUG' undeclared (first use in this function)\ntrace.c:96: warning: `log_level' might be used uninitialized in this function\ntrace.c: In function `eprintf':\ntrace.c:180: `LOG_ERR' undeclared (first use in this function)\ntrace.c: In function `write_syslog':\ntrace.c:207: warning: implicit declaration of function `openlog'\ntrace.c:207: `LOG_PID' undeclared (first use in this function)\ntrace.c:207: `LOG_NDELAY' undeclared (first use in this function)\ntrace.c:207: `LOG_LOCAL0' undeclared (first use in this function)\n\nIt seems I'm missing something - I just uncommented these lines in src/include/config.h\n#define ELOG_TIMESTAMPS\n#define USE_SYSLOG\n\nbtw, I'm not familiar with /etc/syslogd.conf, so what do I need to enable\npostgres messages in /usr/adm/pgsql ?\nSomething like:\n*.LOCAL0 /usr/adm/pgsql\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 28 Jul 1999 22:39:58 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "SYSLOGD facility" }, { "msg_contents": "I just compiled the current source tree with SYSLOGD enabled, and it\ncompiled fine.\n\n\n> \n> Hi,\n> \n> Has anyone tried to compile with the SYSLOGD facility enabled ?\n> I just tried 6.5.1 but couldn't compile it on Linux 2.2.10:\n> \n> trace.c: In function `tprintf':\n> trace.c:119: `LOG_INFO' 
undeclared (first use in this function)\n> trace.c:119: (Each undeclared identifier is reported only once\n> trace.c:119: for each function it appears in.)\n> trace.c:119: `LOG_DEBUG' undeclared (first use in this function)\n> trace.c:96: warning: `log_level' might be used uninitialized in this function\n> trace.c: In function `eprintf':\n> trace.c:180: `LOG_ERR' undeclared (first use in this function)\n> trace.c: In function `write_syslog':\n> trace.c:207: warning: implicit declaration of function `openlog'\n> trace.c:207: `LOG_PID' undeclared (first use in this function)\n> trace.c:207: `LOG_NDELAY' undeclared (first use in this function)\n> trace.c:207: `LOG_LOCAL0' undeclared (first use in this function)\n> \n> It seems I'm missing something - I just uncomment lines in src/include/config.h\n> #define ELOG_TIMESTAMPS\n> #define USE_SYSLOG\n> \n> btw, I'm not familiar with /etc/syslogd.conf, so what do I need to enable\n> postgres messages in /usr/adm/pgsql ?\n> Something like:\n> *.LOCAL0 /usr/adm/pgsql\n> \n> \tRegards,\n> \n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 Sep 1999 16:32:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SYSLOGD facility" } ]
[ { "msg_contents": "I just installed 6.5.1 on my RH 6.0 PII/400 MHz machine. I had\npreviously been running the database on 6.4.2 and had several backups\nmade through pgdump. When I tried to restore the database (i.e. psql -e\ndb01 < db.backup) all of the tables were created, but only some of them\nhad data. These tables are just real tables, not views or anything\nstrange. Luckily, I also had a back up where I had pg_dump'ed each table\nseparately (so I'm not in a total jam). But I can't figure out why the\npg_dump didn't backup all of the data.\n\n-Tony Reina\n\n\n", "msg_date": "Wed, 28 Jul 1999 15:20:12 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump not dumping all tables" }, { "msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> I just installed 6.5.1 on my RH 6.0 PII/400 MHz machine. I had\n> previously been running the database on 6.4.2 and had several backups\n> made through pgdump. When I tried to restore the database (i.e. psql -e\n> db01 < db.backup) all of the tables were created, but only some of them\n> had data. These tables are just real tables, not views or anything\n> strange. Luckily, I also had a back up where I had pg_dump'ed each table\n> separately (so I'm not in a total jam). But I can't figure out why the\n> pg_dump didn't backup all of the data.\n\nThat is distressing, all right ... and it's not a report we've heard\nbefore. Can you see any pattern to which tables' contents were saved\nand which were not? I'd wonder about peculiar table names, seldom-\nused column data types, and so forth.\n\nDid your indexes get recreated from the db.backup file?\n\nIs there any chance that the db.backup file got truncated (say, because\nyou ran out of disk space during the dump)?\n\nIf you can, it would be nice to see the db.backup file itself, minus\ndata so that it's not too big to email. 
If you could strip the data\nout and just indicate which tables had data and which not, it should\namount to only a few K of table-creation commands...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jul 1999 18:44:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump not dumping all tables " }, { "msg_contents": "\"G. Anthony Reina\" wrote:\n> \n> I just installed 6.5.1 on my RH 6.0 PII/400 MHz machine. I had\n> previously been running the database on 6.4.2 and had several backups\n> made through pgdump. When I tried to restore the database (i.e. psql -e\n> db01 < db.backup) all of the tables were created, but only some of them\n> had data. These tables are just real tables, not views or anything\n> strange. Luckily, I also had a back up where I had pg_dump'ed each table\n> separately (so I'm not in a total jam). But I can't figure out why the\n> pg_dump didn't backup all of the data.\n> \n> -Tony Reina\n\nIf there is even one row dumped wrong the data for the whole table is\nnot \ninserted ;(\n\nI've had this for row's containing \\n (or maybe \\r) that got dumped as\nreal \nnewline that screwed the whole COPY xxx FROM stdin. \nI resolved it by editing the dumpfile via visual inspection.\n\nAnother thing to try would be to dump as proper insert strings (pg_dump\n-d) \ninstead of copy from. 
It will be slow to load though ...\n\n---------------------------\nHannu\n", "msg_date": "Thu, 29 Jul 1999 01:49:55 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump not dumping all tables" }, { "msg_contents": "Hannu Krosing wrote:\n\n> If there is even one row dumped wrong the data for the whole table is\n> not\n> inserted ;(\n>\n> I've had this for row's containing \\n (or maybe \\r) that got dumped as\n> real\n> newline that screwed the whole COPY xxx FROM stdin.\n> I resolved it by editing the dumpfile via visual inspection.\n>\n> Another thing to try would be to dump as proper insert strings (pg_dump\n> -d)\n> instead of copy from. It will be slow to load though ...\n>\n\nHannu,\n\n Unfortunately, my dump file is 2 Gig and so I can't edit it easily. I\ndon't mind slowness as long as I have accuracy so I'll try the pg_dump -d.\n\nThanks.\n-Tony\n\n\n", "msg_date": "Wed, 28 Jul 1999 15:52:22 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_dump not dumping all tables" }, { "msg_contents": "Tom,\n\n I think I may have found the error but I can't be sure. I compressed the\npg_dump'd backup file and then samba'd it to a Windows 95 machine in order to\nburn it to a CD-ROM. I wonder if Windows added extra line feeds here and\nthere (although I don't see them when I do a head or tail on the file). If\nthat's the case, then it is my fault.\n\n-Tony\n\n\n\nTom Lane wrote:\n\n> That is distressing, all right ... and it's not a report we've heard\n> before. Can you see any pattern to which tables' contents were saved\n> and which were not? I'd wonder about peculiar table names, seldom-\n> used column data types, and so forth.\n>\n\nAll of the tables seemed to be the ones marked ***_proc (e.g.\ncenter_out_proc, ellipse_proc, etc.). These all seemed to be at the end of\nthe pg_dump. 
So probably somewhere in the pg_dump a table had an extra\ncharacter and screwed up the remaining tables from being written (if I am\ncorrectly understanding how pg_dump works).\n\n>\n> Did your indexes get recreated from the db.backup file?\n\nYes. They get created just after the copy commands. Of course, it would be\nnice if they were created first and then the data was copied in. My indicies\nhave unique keys. There have been times with 6.4.2 where for some reason\n(despite having a unique index), I have had two rows in an index. This even\nhappened when I went to pg_dump the table and rebuild it. I was thinking that\nif the index was created first and then the data was copied, then this\nprobably couldn't occur on a rebuild.\n\n>\n>\n> Is there any chance that the db.backup file got truncated (say, because\n> you ran out of disk space during the dump)?\n>\n\nNo, this partition is 10 Gigs. I have about 1-2 Gigs left even when the\npg_dump finishes.\n\n>\n> If you can, it would be nice to see the db.backup file itself, minus\n> data so that it's not too big to email. If you could strip the data\n> out and just indicate which tables had data and which not, it should\n> amount to only a few K of table-creation commands...\n>\n> regards, tom lane\n\nAgain, the text file is over 2 Gig so I can't seem to find an editor that is\nbig enough to hold it all in memory (I only have a half a gig of RAM). So it\nreally is just guesswork. Anything you can think of to strip the data from\nthis big of a file?\n\n-Tony\n\n\n\n", "msg_date": "Wed, 28 Jul 1999 16:14:24 -0700", "msg_from": "\"G. 
Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_dump not dumping all tables" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> I've had this for row's containing \\n (or maybe \\r) that got dumped as\n> real newline that screwed the whole COPY xxx FROM stdin.\n\nFWIW, I think that particular bug was fixed some time ago; leastwise\nI cannot reproduce it with either 6.4.2 or current pg_dump.\n\nTony, would you let us know whether -d helps?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jul 1999 19:16:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump not dumping all tables " }, { "msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> I think I may have found the error but I can't be sure. I compressed the\n> pg_dump'd backup file and then samba'd it to a Windows 95 machine in order to\n> burn it to a CD-ROM. I wonder if Windows added extra line feeds here and\n> there (although I don't see them when I do a head or tail on the\n> file).\n\nIf the file was compressed when you transferred it, then any newline\nbreakage would have messed it up pretty thoroughly... so I doubt that\ntheory.\n\nHannu's thought is a good one: corrupted data within a particular COPY\ncommand would probably have caused the entire COPY to fail, but psql\nwould have recovered at the \\. and picked up with the rest of the\nrestore script, which seems to match the symptoms. I think he's blamed\na long-gone bug, but there could be another one with similar effects.\n\nHowever, if that happened you should certainly have seen a complaint\nfrom psql (and also in the postmaster log) while running the restore.\nDid you look through the output of the restore script carefully?\n\n> All of the tables seemed to be the ones marked ***_proc (e.g.\n> center_out_proc, ellipse_proc, etc.). These all seemed to be at the end of\n> the pg_dump.\n\nHmm. 
What kind of data was in them?\n\n> Yes. They get created just after the copy commands. Of course, it would be\n> nice if they were created first and then the data was copied in.\n\nThere's a reason for that: it's a lot faster to build the index after\ndoing the bulk load, rather than incrementally as the data is loaded.\n\n> Again, the text file is over 2 Gig so I can't seem to find an editor that is\n> big enough to hold it all in memory (I only have a half a gig of RAM). So it\n> really is just guesswork. Anything you can think of to strip the data from\n> this big of a file?\n\nNot short of writing a little perl script that looks for COPY ... and \\.\nBut at this point it seems likely that the problem is in the data\nitself, so stripping it out would lose the evidence anyway. Grumble.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jul 1999 20:07:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump not dumping all tables " }, { "msg_contents": "On Wed, 28 Jul 1999, G. Anthony Reina wrote:\n\n> Again, the text file is over 2 Gig so I can't seem to find an editor that is\n> big enough to hold it all in memory (I only have a half a gig of RAM). So it\n> really is just guesswork. Anything you can think of to strip the data from\n> this big of a file?\n\negrep \"^CREATE|^COPY\" <filename> ?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 28 Jul 1999 23:28:17 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump not dumping all tables" }, { "msg_contents": "On Wed, Jul 28, 1999 at 11:28:17PM -0300, The Hermit Hacker wrote:\n> On Wed, 28 Jul 1999, G. 
Anthony Reina wrote:\n> \n> > Again, the text file is over 2 Gig so I can't seem to find an editor that is\n> > big enough to hold it all in memory (I only have a half a gig of RAM). So it\n> > really is just guesswork. Anything you can think of to strip the data from\n> > this big of a file?\n> \n> egrep \"^CREATE|^COPY\" <filename> ?\n\nThe one class of failures on upgrade we have been seeing is tables with\nfieldnames that were previously reserved words. One of those might keep the\nrest of the COPYs from working, would it not?\n\ntry piping the combined stdout and stderr together through grep \"ERROR\"\nand see if anything pops up.\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Wed, 28 Jul 1999 22:54:11 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump not dumping all tables" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Wed, 28 Jul 1999, G. Anthony Reina wrote:\n> \n> > Again, the text file is over 2 Gig so I can't seem to find an editor that is\n> > big enough to hold it all in memory (I only have a half a gig of RAM). So it\n> > really is just guesswork. 
Anything you can think of to strip the data from\n> > this big of a file?\n> \n> egrep \"^CREATE|^COPY\" <filename> ?\n\nNay,we have currently nice multi-line CREATEs.\n\nthe following python script should to work\n\n------------------------------------------------------\n#!/usr/bin/env python\n \nimport sys\n \nin_data = 0\n \nwhile 1:\n line = sys.stdin.readline()\n if not line: break\n if line[:5] == 'COPY ':\n in_data = 1\n if not in_data: sys.stdout.write(line)\n if in_data and line[:2] == '\\\\.':\n in_data = 0\n-----------------------------------------------------\n\nas you can probably guess it is used as \n\nstripdata.py <withdata.sql >withoutdata.sql\n\n-------------------------\nHannu\n", "msg_date": "Thu, 29 Jul 1999 09:04:12 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump not dumping all tables" } ]
[ { "msg_contents": "Tom,\n\ndo you have some patch to make vacuum analyze truncate the index file?\nI expect it will be enough to live with the current implementation of\nupdate if vacuum analyze works properly, even without row reuse\nin update. \n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Thu, 29 Jul 1999 12:40:56 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum analyze and index file" } ]
[ { "msg_contents": "Is there any autoconf guru?\n\nI have lots of problems with Autoconf + a C++ library.\n\nif I make CC=CXX, configure can't find libpq,\n AC_CHECK_LIB(pq, PQexec)\n\nbecause it tries to link PQexec() - and it is missing \n\nif I keep CC as cc \nAC_TRY_COMPILE([#include <stdlib.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n],\n[int a = accept(1, (struct sockaddr *) 0, (int *) 0);],\n[AC_DEFINE(SOCKET_SIZE_TYPE, int) AC_MSG_RESULT(int)],\n[AC_DEFINE(SOCKET_SIZE_TYPE, size_t) AC_MSG_RESULT(size_t)])\n\nreturns the wrong result, because int is always acceptable for C but\ncan cause an error for CXX\n\nIs there an autoconf version modified to work with C++, or \ndo I have to patch its macros myself?\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Thu, 29 Jul 1999 16:04:11 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": true, "msg_subject": "Off-topic: autoconf guru" }, { "msg_contents": "\nIf you check configure.in with PostgreSQL, we use the --with-libs call in\norder to tell it where to look for 'libraries outside the system\nnorm'...check the code for that, as I believe it's what you are looking\nfor, since, in general, the libpq would be outside that 'norm'..\n\n\n On Thu, 29 Jul 1999, Dmitry Samersoff wrote:\n\n> Is there any autoconf guru?\n> \n> I have lots of problems with Autoconf + a C++ library.\n> \n> if I make CC=CXX, configure can't find libpq,\n> AC_CHECK_LIB(pq, PQexec)\n> \n> because it tries to link PQexec() - and it is missing \n> \n> if I keep CC as cc \n> AC_TRY_COMPILE([#include <stdlib.h>\n> #include <sys/types.h>\n> #include <sys/socket.h>\n> ],\n> [int a = accept(1, (struct sockaddr *) 0, (int *) 0);],\n> [AC_DEFINE(SOCKET_SIZE_TYPE, int) AC_MSG_RESULT(int)],\n> [AC_DEFINE(SOCKET_SIZE_TYPE, size_t) AC_MSG_RESULT(size_t)])\n> \n> returns the wrong result, because int is always acceptable for C but\n> can cause an error for CXX\n> \n> Is there an autoconf 
version modified for work with C++ or \n> I have to patch it's macros by my self ?\n> \n> \n> ---\n> Dmitry Samersoff, [email protected], ICQ:3161705\n> http://devnull.wplus.net\n> * There will come soft rains ...\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 29 Jul 1999 09:52:49 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Off-topic: autoconf guru" }, { "msg_contents": "\nOn 29-Jul-99 The Hermit Hacker wrote:\n> \n> If you check configure.in with PostgreSQL, we use the --with-libs call in\n> order to tell it where to look for 'libraries outside the system\n> norm'...check the code for that, as I believe its what you are looking\n> for, since, in general, the libpq would be outside that 'norm'..\n\nThanks, but I mention some other problem - \nsequence \n CC=g++\n AC_CHECK_LIB(pq, PQexec)\n\nis expanded by autoconf into\n... main(){ PQexec(); } ...\nthat can't be compiled by g++, \ninstead \n... main(){ PGconn *conn; const char *query; PQexec(conn,query); } \n \nis there a way to correct this problem or do I need to rewrite \nautoconf macros?\n\nconfigure:2672: checking for PQexec in -lpq\nconfigure:2691: g++ -o conftest -g -O2 conftest.c -lpq \n-L/usr/local/pgsql/lib 1>&5\nconfigure:2688: Undefined symbol `PQexec(void)' referenced from text segment\ncollect2: ld returned 1 exit status\nconfigure: failed program was:\n#line 2680 \"configure\"\n#include \"confdefs.h\"\n/* Override any gcc2 internal prototype to avoid an error. */\n/* We use char because int might match the return type of a gcc2\n builtin and then its argument prototype would still apply. 
*/\nchar PQexec();\n\nint main() {\nPQexec()\n; return 0; }\n\n\n> \n> \n> On Thu, 29 Jul 1999, Dmitry Samersoff wrote:\n> \n>> Is there any autoconf guru?\n>> \n>> I have lots problems Autoconf + C++ library.\n>> \n>> if I make CC=CXX, configure can't find libpq,\n>> AC_CHECK_LIB(pq, PQexec)\n>> \n>> because it try link PQexec() - and it is missing \n>> \n>> if I keep CC as cc \n>> AC_TRY_COMPILE([#include <stdlib.h>\n>> #include <sys/types.h>\n>> #include <sys/socket.h>\n>> ],\n>> [int a = accept(1, (struct sockaddr *) 0, (int *) 0);],\n>> [AC_DEFINE(SOCKET_SIZE_TYPE, int) AC_MSG_RESULT(int)],\n>> [AC_DEFINE(SOCKET_SIZE_TYPE, size_t) AC_MSG_RESULT(size_t)])\n>> \n>> return wrong result because int always aceptable for C but\n>> can cause error for CXX\n>> \n>> Is there autoconf version modified for work with C++ or \n>> I have to patch it's macros by my self ?\n>> \n>> \n>> ---\n>> Dmitry Samersoff, [email protected], ICQ:3161705\n>> http://devnull.wplus.net\n>> * There will come soft rains ...\n>> \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick:\n> Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org \n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Thu, 29 Jul 1999 18:02:36 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Off-topic: autoconf guru" }, { "msg_contents": "Dmitry Samersoff <[email protected]> writes:\n> if I make CC=CXX, configure can't find libpq,\n\nYou shouldn't do that. 
CC is supposed to be a C compiler not a C++\ncompiler.\n\nWe have enough cross-platform headaches with the code already ...\ntrying to make it all compile under C++ as well as C is a pushup\nI don't care to undertake...\n\n> if I keep CC as cc \n> AC_TRY_COMPILE([#include <stdlib.h>\n> #include <sys/types.h>\n> #include <sys/socket.h>\n> ],\n> [int a = accept(1, (struct sockaddr *) 0, (int *) 0);],\n> [AC_DEFINE(SOCKET_SIZE_TYPE, int) AC_MSG_RESULT(int)],\n> [AC_DEFINE(SOCKET_SIZE_TYPE, size_t) AC_MSG_RESULT(size_t)])\n\n> return wrong result because int always aceptable for C but\n> can cause error for CXX\n\nHuh? How can it be a problem for C++? The test snippet is C, and\nso is the code that is going to be trying to call accept().\n\n> Is there autoconf version modified for work with C++ or \n> I have to patch it's macros by my self ?\n\nI think you have a misconfigured C++ installation, and that you'd\nbe best off directing your attention to fixing that. I have\nseen libpq++ fail in odd ways when I tried to build Postgres here\nwith a fresh egcs install whose C++ support wasn't right. (IIRC,\nmy problem was that /usr/local/lib/libg++ was an old version not\ncompatible with the new egcs --- but the error messages weren't\nparticularly helpful in diagnosing that...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Jul 1999 10:37:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Off-topic: autoconf guru " }, { "msg_contents": "Hi Dmitry,\n\nAutoconf can do this out of the box. With the macros AC_LANG_C and \nAC_LANG_CPLUSPLUS you can switch between C and C++ compiler mode. 
Here \nis a small example based on the snippet you provided:\n\n\n\tAC_PREREQ(2.12)\n\tAC_INIT(configure.in)\n\n\t#\n\t# Check which C and C++ compiler to use\n\t#\n\tAC_PROG_CC\n\tAC_PROG_CXX\n\n\t#\n\t# The following checks are done with the C compiler\n\t#\n\tAC_LANG_C\n\n\tAC_CHECK_FUNC(accept)\n\n\n\t#\n\t# Now switch over to the C++ compiler for the next test\n\t#\n\tAC_LANG_CPLUSPLUS\n\n\tAC_MSG_CHECKING(socket size type)\n\tAC_TRY_COMPILE([#include <stdlib.h>\n\t#include <sys/types.h>\n\t#include <sys/socket.h>\n\t],\n\t[int a = accept(1, (struct sockaddr *) 0, (int *) 0);],\n\t[AC_DEFINE(SOCKET_SIZE_TYPE, int) AC_MSG_RESULT(int)],\n\t[AC_DEFINE(SOCKET_SIZE_TYPE, size_t) AC_MSG_RESULT(size_t)])\n\n\t#\n\t# Switch back to C mode again\n\t#\n\tAC_LANG_C\n\n\tAC_OUTPUT() \n\n\nBest regards,\n\nPatrick\n-- \nPatrick van Kleef\t \t\[email protected]\n\n\n", "msg_date": "Thu, 29 Jul 1999 16:59:43 +0200", "msg_from": "Patrick van Kleef <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Off-topic: autoconf guru " }, { "msg_contents": "Dmitry Samersoff <[email protected]> writes:\n> CC=g++\n\nThat is your problem. Don't do it.\n\nAutoconf's job is difficult enough without trying to make all its macros\nwork with either C or C++ compilers. They haven't tried. If you think\nit is critical that they should try, go off and join the GNU autoconf\nteam.\n\nBack to actually solving the problem: you should be letting CC=gcc and\nCXX=g++ as the system is expecting. I don't know what problem you are\ntrying to solve, but I can assure you that switching those two symbols\naround is *not* the path to a solution. 
What happens when you try to\nbuild the system without forcing the wrong choice of compilers?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Jul 1999 11:20:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Off-topic: autoconf guru " }, { "msg_contents": "\nOn 29-Jul-99 Patrick van Kleef wrote:\n> Hi Dmitry,\n> \n> Autoconf can do this out of the box. With the macros AC_LANG_C and \n> AC_LANG_CPLUSPLUS you can switch between C and C++ compiler mode. Here \n> is a small example based on the snippet you provided:\n\nThank you very match !!!!\nIt is exactly what I need for !!!\n\n> \n> \n> AC_PREREQ(2.12)\n> AC_INIT(configure.in)\n> \n> #\n> # Check which C and C++ compiler to use\n> #\n> AC_PROG_CC\n> AC_PROG_CXX\n> \n> #\n> # The following checks are done with the C compiler\n> #\n> AC_LANG_C\n> \n> AC_CHECK_FUNC(accept)\n> \n> \n> #\n> # Now switch over to the C++ compiler for the next test\n> #\n> AC_LANG_CPLUSPLUS\n> \n> AC_MSG_CHECKING(socket size type)\n> AC_TRY_COMPILE([#include <stdlib.h>\n> #include <sys/types.h>\n> #include <sys/socket.h>\n> ],\n> [int a = accept(1, (struct sockaddr *) 0, (int *) 0);],\n> [AC_DEFINE(SOCKET_SIZE_TYPE, int) AC_MSG_RESULT(int)],\n> [AC_DEFINE(SOCKET_SIZE_TYPE, size_t) AC_MSG_RESULT(size_t)])\n> \n> #\n> # Switch back to C mode again\n> #\n> AC_LANG_C\n> \n> AC_OUTPUT() \n> \n> \n> Best regards,\n> \n> Patrick\n> -- \n> Patrick van Kleef \n> [email protected]\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Thu, 29 Jul 1999 19:52:00 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Off-topic: autoconf guru" } ]
[ { "msg_contents": "Hi,\n\nI've been mulling over the idea of using stringinfo for psql, but I don't\nthink that it's such a good idea, because it means that I would be linking\nstuff from the backend sub-tree into client tools. Stringinfo is a module\nwhich implements an expandable string buffer which already exists in\nsrc/backend/lib. I have required this for psql, and initially wrote my own.\nBruce mentioned that I should look at stringinfo, which I did, and it seems\nto fill the requirements just fine.\nWhat I'd like to do is use the expandable buffer that I wrote for psql, and\nany other client-side utils that require it; and then use stringinfo for\nanything new that requires an expanable buffer in the backend (this will\nhappen anyway, stringinfo's here to stay). This removes any dependency\nbetween the client-side and server-side components of the system. I don't\nknow if there are any dependencies at the moment, but my first instinct\nwould be to try to prevent any from creeping in.\n\nIdeas, thoughts...\n\n\nMikeA\n\n\n", "msg_date": "Thu, 29 Jul 1999 14:36:32 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "Max Query length string" }, { "msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> What I'd like to do is use the expandable buffer that I wrote for psql, and\n> any other client-side utils that require it; and then use stringinfo for\n> anything new that requires an expanable buffer in the backend (this will\n> happen anyway, stringinfo's here to stay). This removes any dependency\n> between the client-side and server-side components of the system.\n\nThere is precedent for sharing low-level code between frontend and\nbackend, with maybe a #define or two to handle differences like\npalloc <=> malloc. See dllist.c and some of the MULTIBYTE routines that\nget included into frontend libpq. 
The main restriction is that you\ncan't really report any errors from such code, since error handling in\nthe backend is via elog() which won't exist in clients. However,\nreporting errors from a client-side library is going to be ticklish\nanyway --- if you try to do anything more than return an error-code\nreturn value, some clients will be unhappy with you. So it's not as big\na restriction as it might appear.\n\nIn short, I'd suggest seeing whether you can't make stringinfo portable\nto a frontend environment; and then merge any good ideas you had in your\ncode into it (or vice versa if that seems easier). Keeping two\ndifferent modules that accomplish more or less the same thing doesn't\nstrike me as a win.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Jul 1999 10:44:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Max Query length string " } ]
[ { "msg_contents": "> Bruce,\n> \n> \n> \n> Dead sure. It took me some time to locate the offending line. I\n> initially thought it was a shell expansion problem.\n\nYikes. Is this the expected behavour? Does anyone else see this?\n\n> \n> The platform is RedHat linux 6.0 with egcs-2.91.66.\n> [me@linny2 src]# ./configure\n> creating cache ./config.cache\n> checking host system type... alpha-unknown-linux-gnu\n> checking echo setting...\n> checking setting template to... linux_alpha\n> checking setting USE_LOCALE... disabled\n> checking setting CYR_RECODE... disabled\n> checking setting MULTIBYTE... disabled\n> checking setting DEF_PGPORT... 5432\n> checking setting DEF_MAXBACKENDS... 32\n> checking setting USE_TCL... disabled\n> checking setting USE_PERL... disabled\n> checking setting USE_ODBC... disabled\n> checking setting ASSERT CHECKING... disabled\n> checking for gcc... gcc\n> checking whether the C compiler (gcc -O -mieee # optimization -O2\n> removed becau\n> se of egcs problem ) works... no\n> configure: error: installation or configuration problem: C compiler\n> cannot creat\n> e executables.\n> \n> \n> Peter\n> \n> Bruce Momjian wrote:\n> > \n> > > ---------------------------------------------------------------------------\n> > > Slip number -----: 14\n> > > Problem ---------: Building pristine source on RedHat Alpha 6.0\n> > > Opened by -------: [email protected] on 07/24/99 22:36\n> > > Assigned To -----: thomas\n> > > ---------------------------------------------------------------------------\n> > > Summary:\n> > > Comment placed aftef CFLAGS in template/linux_alpha cause egs errors\n> > \n> > Hmm. That is strange. I thought the Makefile would honor the # and not\n> > pass it through. No one else has reported this as a problem. Are you\n> > sure?\n> > \n> > --\n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 29 Jul 1999 09:48:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] [Keystone Slip # 14] Building pristine source on RedHat\n\tAlpha6.0" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yikes. Is this the expected behavour? Does anyone else see this?\n\n>> checking whether the C compiler (gcc -O -mieee # optimization -O2\n>> removed because of egcs problem ) works... no\n>> configure: error: installation or configuration problem: C compiler\n>> cannot create executables.\n\nYup, I see exactly the same thing when I add a # comment to CFLAGS in\nmy template file. I think it is only safe to put # comments on their\nown lines in template files. A quick grep shows that linux_alpha is\nthe only template that violates that rule.\n\n>>>> Hmm. That is strange. I thought the Makefile would honor the # and not\n>>>> pass it through.\n\ngmake might, but autoconf is a different story...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Jul 1999 11:28:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] [Keystone Slip # 14] Building pristine\n\tsource on RedHat Alpha6.0" }, { "msg_contents": "> > Bruce,\n> > \n> > \n> > \n> > Dead sure. It took me some time to locate the offending line. I\n> > initially thought it was a shell expansion problem.\n> \n> Yikes. Is this the expected behavour? Does anyone else see this?\n> \n> > \n> > The platform is RedHat linux 6.0 with egcs-2.91.66.\n> > [me@linny2 src]# ./configure\n> > creating cache ./config.cache\n> > checking host system type... 
alpha-unknown-linux-gnu\n...\n> > checking whether the C compiler (gcc -O -mieee # optimization -O2\n> > removed becau\n> > se of egcs problem ) works... no\n> > configure: error: installation or configuration problem: C compiler\n> > cannot creat\n> > e executables.\n> > \n> > \n> > Peter\n> > \n> > Bruce Momjian wrote:\n> > > \n\n\tThe same sort of thing happens with OSF1 unless you specify\n\n\t./configure --with-template=alpha_gcc\n\n\tThere are comments in the documentation that a few changes are needed\nbut I have not found what they were/are. For grins I installed egcs as\ngcc and I still get this during the make so those \"changes\" for OSF1\nwould be nice. Else I get to puzzle out what's happening. Does anybody\nhave DEC OSF working with postgress?? :-)\n\n\n....\nld -r -o SUBSYS.o md.o mm.o smgr.o smgrtype.o\ngmake[3]: Leaving directory `/source/local/postgresql/postgresql-6.5/src/backen\nd/storage/smgr'\nfor i in buffer file ipc large_object lmgr page smgr; do gmake -C $i \nbuffer/SUBSYS.o; done\ngmake[3]: Entering directory `/source/local/postgresql/postgresql-6.5/src/backe\nnd/storage/buffer'\ngmake[3]: *** No rule to make target `buffer/SUBSYS.o'. Stop.\ngmake[3]: Leaving directory `/source/local/postgresql/postgresql-6.5/src/backen\nd/storage/buffer'\ngmake[3]: Entering directory `/source/local/postgresql/postgresql-6.5/src/backe\nnd/storage/file'\ngmake[3]: *** No rule to make target `buffer/SUBSYS.o'. Stop.\ngmake[3]: Leaving directory `/source/local/postgresql/postgresql-6.5/src/backen\nd/storage/file'\ngmake[3]: Entering directory `/source/local/postgresql/postgresql-6.5/src/backe\nnd/storage/ipc'\ngmake[3]: *** No rule to make target `buffer/SUBSYS.o'. Stop.\ngmake[3]: Leaving directory `/source/local/postgresql/postgresql-6.5/src/backen\nd/storage/ipc'\ngmake[3]: Entering directory `/source/local/postgresql/postgresql-6.5/src/backe\nnd/storage/large_object'\ngmake[3]: *** No rule to make target `buffer/SUBSYS.o'. 
Stop.\ngmake[3]: Leaving directory `/source/local/postgresql/postgresql-6.5/src/backen\nd/storage/large_object'\ngmake[3]: Entering directory `/source/local/postgresql/postgresql-6.5/src/backe\nnd/storage/lmgr'\ngmake[3]: *** No rule to make target `buffer/SUBSYS.o'. Stop.\ngmake[3]: Leaving directory `/source/local/postgresql/postgresql-6.5/src/backen\nd/storage/lmgr'\ngmake[3]: Entering directory `/source/local/postgresql/postgresql-6.5/src/backe\nnd/storage/page'\ngmake[3]: *** No rule to make target `buffer/SUBSYS.o'. Stop.\ngmake[3]: Leaving directory `/source/local/postgresql/postgresql-6.5/src/backen\nd/storage/page'\ngmake[3]: Entering directory `/source/local/postgresql/postgresql-6.5/src/backe\nnd/storage/smgr'\ngmake[3]: *** No rule to make target `buffer/SUBSYS.o'. Stop.\ngmake[3]: Leaving directory `/source/local/postgresql/postgresql-6.5/src/backen\nd/storage/smgr'\ngmake[2]: *** [buffer/SUBSYS.o] Error 2\ngmake[2]: Leaving directory `/source/local/postgresql/postgresql-6.5/src/backen\nd/storage'\ngmake[1]: *** [storage.dir] Error 2\ngmake[1]: Leaving directory `/source/local/postgresql/postgresql-6.5/src/backen\nd'\ngmake: *** [all] Error 2\n\n\n-- \n\tStephen N. Kogge\n\[email protected]\n\thttp://www.uimage.com\n\n\n", "msg_date": "Thu, 29 Jul 1999 11:33:52 -0400", "msg_from": "Stephen Kogge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] [Keystone Slip # 14] Building pristine\n\tsource on RedHat Alpha6.0" }, { "msg_contents": "> > > Bruce,\n> > > \n> > > \n> > > \n> > > Dead sure. It took me some time to locate the offending line. I\n> > > initially thought it was a shell expansion problem.\n> > \n> > Yikes. Is this the expected behavour? Does anyone else see this?\n> > \n> > > \n> > > The platform is RedHat linux 6.0 with egcs-2.91.66.\n> > > [me@linny2 src]# ./configure\n> > > creating cache ./config.cache\n> > > checking host system type... 
alpha-unknown-linux-gnu\n> ...\n> > > checking whether the C compiler (gcc -O -mieee # optimization -O2\n> > > removed becau\n> > > se of egcs problem ) works... no\n> > > configure: error: installation or configuration problem: C compiler\n> > > cannot creat\n> > > e executables.\n> > > \n> > > \n> > > Peter\n> > > \n> > > Bruce Momjian wrote:\n> > > > \n> \n> \tThe same sort of thing happens with OSF1 unless you specify\n> \n> \t./configure --with-template=alpha_gcc\n> \n> \tThere are comments in the documentation that a few changes are needed\n> but I have not found what they were/are. For grins I installed egcs as\n> gcc and I still get this during the make so those \"changes\" for OSF1\n> would be nice. Else I get to puzzle out what's happening. Does anybody\n> have DEC OSF working with postgress?? :-)\n> \n\nOK, I have removed the comment after the optimization flag. In fact,\nthe comment is not needed now that we have alpha-specific flags for\ncertain Makefiles.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 29 Jul 1999 11:39:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [BUGS] [Keystone Slip # 14] Building pristine\n source\n\ton RedHat Alpha6.0" } ]
[ { "msg_contents": "I'm a new Linux user, RH6.0. PostgreSQL-6.4.2-3. The package was\ninstalled when I installed the RH cd.\n\nThis is the error message I get on startup: (linuxconf log)\n\nExecuting: /etc/rc.d/rc5.d/S85postgesql start\n * /usr/bin/postmaster does not find the database system. Expected\nto find it in the PGDATA directory \"/var/lib/pgsql\" , but unable to open\nthe\nfile with pathname \"/var/lib/pgsql/base/template1/pg_class\".\n *\n * no data directory -- can not proceed\n > starting postgresql service: postmaster []\n\nAny help you can provide me would be appreciated. I just assumed that\nthe RPM install would have created all the necessary files and\ndirectories I needed.\n\nThanks\n\nTim Potier\n", "msg_date": "Thu, 29 Jul 1999 08:57:57 -0500", "msg_from": "Timothy Potier <[email protected]>", "msg_from_op": true, "msg_subject": "Unusual Problem?" } ]
[ { "msg_contents": "> I'm sending individual E-mail because I think this subject should not be\n> discussed in mailing list.\n\nIt is appropriate for the mailing list, so try that next time please.\n\n> I just thought that external representation(output) of datetime should be\n> the same as input.\n> Does anybody agree that the following behavior is correct ?\n...\n> Depending on timezone, the hour will be changed.\n\nYes, I believe that the behavior is correct. But I implemented the\ncode ;)\n\nYou can suppress any timezone shifting by running your backend in GMT\nby setting your system time zone or by setting PGTZ. You can also set\nPGTZ for any client using libpq (e.g. psql), or do a 'set time zone'\nfrom your app.\n\nAn alternative is to use the \"date\" data type, which does not carry\ntime zone info.\n\n> I solved this problem just cutting the timezone ID in the application, when\n> needed, because if dttest < ??/??/1901 or dttest>??/??/2037 (abstime limits)\n> no timezone ID is showed.\n\nRight. The conversion to local time can only happen if the underlying\nsystem can help do the conversion. Also, for times in the past the\nconventions for time zones and daylight savings time were much more\nfluid and unsettled, and who knows what they will be in the future?\n\nI hope this helps...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 29 Jul 1999 14:38:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Keystone Slip # 22] Some confusion with datetimedata type\n\tandtimezone" } ]
[ { "msg_contents": "Hi. There is interest at work in doing some fairly simple inventory\ncontrol using a database. We've already got Postgres up and running\n(for obvious reasons) and I was wondering if anyone had suggestions\nfor a good approach to app development for brower-based user\ninterfaces.\n\nThere are other applications which would be of interest, so something\nwith some growth potential would be helpful.\n\nWe have a bit of expertise in Java servlets, etc, so that is one\noption via the jdbc interface.\n\nAre there any options which are particularly \"approachable\" which\nwould allow newbies to get something working if they have some\nexisting code to look at?\n\nTIA\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 29 Jul 1999 16:20:09 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "web-based front end development" }, { "msg_contents": "Thomas - \nI'd suggest you have a look at the Zope system (www.zope.org). It's a\nthrough-the-web development environment, designed for ease of partitioning \nthe development and maintanence of web-based apps. \n\nOne could think of it as a cross of Cold Fusion and PHP on steroids. It's\nactively being developed in an Open Source way, with Digitial Creations,\nInc. as lead developers (only fair, it's their code they donated to\nus all).\n\nThe partitioning allows you to write the SQL parts (as ZSQL methods),\nand have someone else use them in a 'blackbox' fashion to design the\napp. using DTML, the Zope HTML scripting language. It's relatively easy\nto extend this to having 'content' people do just that, content. Not an\nHTML tag in sight (for them).\n\nAt first glance, it may look a little heavyweight for 'simple' app., but\nwe all know that those simple apps grow, and you never get a chance to\nrewrite from scratch untill you have too.\n\nNow the bad news. 
The current beta has lots of nice new features,\nbut in the process of adding some of them (better concurrency, and\na new implementation of their transaction system), all the database\nadaptors broke. They still work fine for the current stable, 1.10.2,\nhowever. It's just that there are major features (ZClasses, XML support)\nin the beta that could affect fundamental design decisions for any app\nyou'd start now.\n\nI think it'd be a major win for both systems to have such a core member\nof the PostgreSQL development using Zope. Give me 'till Monday and I'll see\nif I can't get the ZPyGreSQLDA (ow, painful name!) working acceptably with\nZope 2.0. (I need it anyway, even if you don't go with Zope)\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Thu, 29 Jul 1999 11:50:44 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] web-based front end development" }, { "msg_contents": "Thomas Lockhart wrote:\n> (for obvious reasons) and I was wondering if anyone had suggestions\n> for a good approach to app development for brower-based user\n> interfaces.\n \n> There are other applications which would be of interest, so something\n> with some growth potential would be helpful.\n\n> Are there any options which are particularly \"approachable\" which\n> would allow newbies to get something working if they have some\n> existing code to look at?\n\nMy recommendation is AOLserver 3. If you know any tcl, you can deal\nwith AOLserver's tcl. AOLserver is a complete, lightweight,\nmultithreaded, industrial-strength, open source powerhouse of a web\nserver, available from www.aolserver.com. There are many examples\navailable. AOL runs their www.aol.com and www.digitalcity.com sites on\nAOLserver -- this thing is a performance beast! 
Postgres is well\nsupported under the older 2.3 series and the newer 3.0 series. 2.3 is\nnot open source, though. \n\nIn fact, AOLserver's support of Postgres is why I got started with\nPostgreSQL in the first place -- the seamless db connectivity was just\ntoo tempting. Database connections are pooled and throttled, so that\nonly as many backends as your server can support can be loaded at any\ntime. Connection threads share pooled connections -- a new backend is\nnot spawned for each an every page request -- AOLserver was the first\nweb server with this capability, BTW. AOLserver will cooexist with\nother web servers on the same box -- you can have it listen to any port\nyou desire, on any interface.\n\nAOLserver allows embedded tcl inside HTML pages -- and the tcl has\ncomplete run of the database. For administration, a telnet control port\nis available that allows execution of operating system commands, tcl\ncommands, and direct entry of SQL -- like having its own psql built-in. \n\nVersion 3 is currently at beta 2, but development is heavy. \n\nAnd, AOL is using this for their highest traffic sites.\n\nThe interactive development community site is at aolserver.lcs.mit.edu,\nand is run by the guys behind ArsDigita (the best known of whom is\nPhilip Greenspun, Mr. Database-backed-website himself.). Their\nArsDigita Community System (ACS) is written for the 2.3 server, but is\nchock full of example code that runs backed by Oracle, although an\neffort is underway to port over to PostgreSQL.\n\nThe learning curve is surprising shallow, with any experienced\nprogrammer taking maybe a day or so to get up to speed on AOLserver's\ndialect of tk-less tcl. 
I have run this system for over two years, and\nit works very well.\n\nFor more information on Greenspun's philosophy and on AOLserver itself,\ncheck out the book \"Philip and Alex's Guide to Web Publishing\",\navailable at amazon.com or for free at photo.net/wtr/thebook -- this\nbook really is a must-read.\n\nHTH.\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Thu, 29 Jul 1999 12:53:25 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] web-based front end development" }, { "msg_contents": "On Thu, 29 Jul 1999, Thomas Lockhart wrote:\n\n> Hi. There is interest at work in doing some fairly simple inventory\n> control using a database. We've already got Postgres up and running\n> (for obvious reasons) and I was wondering if anyone had suggestions\n> for a good approach to app development for brower-based user\n> interfaces.\n> \n> There are other applications which would be of interest, so something\n> with some growth potential would be helpful.\n> \n> We have a bit of expertise in Java servlets, etc, so that is one\n> option via the jdbc interface.\n> \n> Are there any options which are particularly \"approachable\" which\n> would allow newbies to get something working if they have some\n> existing code to look at?\n\nDmitry was working on something like that; I couldn't wait and wrote\none in C. His is in perl. Handles inventory, invoice printing (one at a\ntime on demand), special prices and a few other minor things. Probably\ncould use to be cleaned up some tho. 
This one runs with Apache, but one\nday I'll probably rewrite it to use my micro-webserver so a full web\nserver wouldn't be a requirement.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 29 Jul 1999 12:55:21 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] web-based front end development" }, { "msg_contents": "> Hi. There is interest at work in doing some fairly simple inventory\n> control using a database. We've already got Postgres up and running\n> (for obvious reasons) and I was wondering if anyone had suggestions\n> for a good approach to app development for brower-based user\n> interfaces.\n> \n> There are other applications which would be of interest, so something\n> with some growth potential would be helpful.\n> \n> We have a bit of expertise in Java servlets, etc, so that is one\n> option via the jdbc interface.\n> \n> Are there any options which are particularly \"approachable\" which\n> would allow newbies to get something working if they have some\n> existing code to look at?\n\nPHP is a must see. See FAQ.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 29 Jul 1999 13:02:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] web-based front end development" }, { "msg_contents": "At 04:20 PM 7/29/99 +0000, Thomas Lockhart wrote:\n\n>Hi. 
There is interest at work in doing some fairly simple inventory\n>control using a database. We've already got Postgres up and running\n>(for obvious reasons) and I was wondering if anyone had suggestions\n>for a good approach to app development for brower-based user\n>interfaces.\n\nI'm fond of AOLserver and its TCL API, which has built-in pooled\ndb connectivity, maintaining persistent db connections with\nno forking overhead. The webserver itself schedules TCL threads,\nthere's no CGI forking overhead as with older style Perl, etc\ninterfaces (of course, Apache + modPerl etc now implements a similarly\nefficient interface). AOLserver's open source, GNU-licensed, and\nalso supports CGI and a C API. It works with many dbs, including\nPostgres.\n\nThe integrated TCL API has been extended with literally dozens\nof website-useful extensions to connect to the db, do selects,\ndo dml, get table info, send e-mail (\"ns_sendmail to from message \nextraheaders\" - this gives a flavor of the ease in which one\ngets things done in this environment), schedule TCL scripts\n(I use this feature to do nightly pg_dumps), load pages from\nother URLs, upload files from client browsers, etc etc etc.\nIf you're at all familiar with TCL you could have a few modest\npages up and running within hours of downloading the server\nand the utilities package mentioned below.\n\nAn example of a website built using this combination plus\npostgres 6.5.1 can be found at http://donb.photo.net/tweeterdom.\n\nArsDigita has a ton of code available to implement bboards,\ne-commerce, bug (ticket) tracking, chat, and a bunch of other\nstuff which sits on top of Oracle. I've found this stuff easy\nto migrate to Postgres - the TCL scripts don't need changing\nother than the actual SQL, which is mostly vanilla. 
The lack\nof an outer join gets in the way of porting some of this code,\nthus those who've discussed porting the complete ArsDigita system\nare eagerly awaiting Postgres 6.6.\n\nOther than that, the fact that Postgres implements sequences\nin a way very much like Oracle with a just slightly different\nsyntax makes porting the SQL fairly easy. My particular\napplication doesn't include any of the specific modules\nimplemented by ArsDigita, other than that used to register\nand log in visitors. But, I've poached a bunch of code from\nthe modules to avoid wheel-reinventing in a bunch of my own\nstuff.\n\nArsDigita also has prototyping scripts available that quickly\nbuilds form-based entry and edit pages from tables. This is\nnew, again is Oracle-based, but should be easy to move to\npostgres.\n\nThey also supply a rich set of utilities which greatly\nsimplify using the db and forms, whether or not you use\ntheir other tools.\n\nAll of this stuff's available from ArsDigita under the GNU\nlicense as well, at arsdigita.com.\n\nWhile not well-known, AOLserver, the ArsDigita stuff, and\nOracle lie underneath some very busy sites. Postgres 6.5.1\nisn't quite robust enough to measure up to Oracle yet for\na truly busy site, but is so much better in the web environment\nthan 6.4 that I think it's really reasonable for modest sites,\nup to a few ten thousands of hits a day at the very least.\n\nI've tested the AOLserver+TCL+Postgres on a P200 classic\nwith Linux, 64 MB (running postgres -B 2000), and two IDE\ndisks (indices on one, datafiles on the other). On another\nLAN-connected machine I fired up a bunch of browsers and\nused them to run test scripts which simulated ten users \neach doing four inserts a second, and two users doing\n\"selects\" in tight TCL loops (did this in part to test\nthe changes which removed the unneeded logging after read-only\nselects). The machine was perfectly happy performing these\ntasks, not falling behind a bit. 
The four inserts were bundled\ninto a single transaction, so that's ten transactions a second\nconsisting of forty inserts. A lot of hits for a tiny machine,\nreally. Says a lot for Postgres, Linux, the web server and its\nTCL API, not to mention the speed and capacity of modern \nmicrocomputers!\n\nIf you're doing simple web applications, a straight interpreter,\nbe it TCL, Perl or whatever, has some advantages over a compiled\nlanguage. You make your changes, and poof! they're there, no\nrecompiling, relinking, etc. \n\nOf course, I wouldn't use TCL for a large program requiring \nhundreds of thousands of lines of code...but it does have a\nreasonably clean syntax and the basic loop, select, if-then-else\nstructures and the ability to declare procedures.\n\nThe very nature of HTML imposes a structure on web programs\nwhich is oriented around web pages, i.e. if you take input\nvia a form, that's a page. You process it and stuff into or\ninquire from a database in the page that's the \"action\"\ntarget of the form. You get to form pages via hyperlinks,\nand normally they live on separate pages because that's the\nnatural way to present things to the user.\n\nSo the structure's typically not one imposed by the programmer's\ndecomposition of the problem in some abstract sense, but rather\nyour decomposition of the page flow through the client browser,\ni.e. the design of the interface as you wish the user to see\nit. That page flow design leads you around by the nose as you\ndevelop the code underlying them (either html with embedded\nTCL (.adp pages), or as I prefer scripts which write html (.tcl\npages), in AOLserver - ASP pages and ColdFusion-augmented \nhtml are much like AOLserver .adp pages, CGI+Perl more like\nthe .tcl page approach).\n\n>From the point of view of the user, this is almost cool - the\nprogrammer's FORCED to design the user interface and the \npage flow that takes the user through it. 
The UI - primitive\nas it is - steps forward front-and-center, by necessity. One\ncan hope, at least, that the need to focus here leads to better\nUIs, though of course HTML is so primitive that UIs are awkward\nunless you get deep into client side java/javascript, which has\na lot of problems of its own.\n\nBecause of this, actual web code I've seen looks remarkably\nthe same regardless of the language being used. Much of\nit is dominated by the writing of html, and shoving stuff\nback-and-forth between pages (by URLencoding or hidden\nform variables - html concepts, not Tcl or Perl or Java\nconcepts). \n\nSo I think it might be best to concentrate on looking for\ncombinations that can scale to high levels of activity, which\nmeans avoiding CGI's forking interface (Apache and AOLserver\nare both good at this), pooling DB connections (to avoid\nforking off new db backends each time you connect), etc.\n\nI'm using Linux+AOLserver+their Tcl API+Postgres+ArsDigita code\nbecause the combination can service a lot of users on a cheap PC,\nand it's all open-source/free software, not because I love Tcl\n(I don't). The scripts underlying the sample site I gave above are\nsuch relatively short and simple programs, though, that I don't mind\nmuch what language I'm writing them in, language choice in this\ncase was a secondary consideration.\n\n(given I'm a professional compiler writer, it still seems weird\nto hear these words come from my mouth).\n\n\n\n\n...\n\n>Are there any options which are particularly \"approachable\" which\n>would allow newbies to get something working if they have some\n>existing code to look at?\n\nIf you're interested in the AOLserver approach, visit the ArsDigita\nweb site and poke around. 
You can download source code to all their\nstuff to see what stuff looks like.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n", "msg_date": "Thu, 29 Jul 1999 10:55:40 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] web-based front end development" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Hi. There is interest at work in doing some fairly simple inventory\n> control using a database. We've already got Postgres up and running\n> (for obvious reasons) and I was wondering if anyone had suggestions\n> for a good approach to app development for brower-based user\n> interfaces.\n> \n> There are other applications which would be of interest, so something\n> with some growth potential would be helpful.\n> \n> We have a bit of expertise in Java servlets, etc, so that is one\n> option via the jdbc interface.\n> \n> Are there any options which are particularly \"approachable\" which\n> would allow newbies to get something working if they have some\n> existing code to look at?\n\nSince you explicitly mention Java, you could look at the Java Apache\nproject, which apparently implements the java servlet\ninterface. (http://java.apache.org)\n\nIf you like Perl, Apache + mod_perl + HTML::Mason have done right by\nme for a number of projects. I think it's incredibly approachable (and\npowerful), but I've got four years of solid perl experience and like\nthe language a lot---YMMV. (http://perl.apache.org/,\nhttp://www.masonhq.com/)\n\nIf you like TCL, AOLServer definitely deserves a look, as others have\nsaid. It's also a nice, low overhead server to administer.\n(http://www.aolserver.com/, http://www.arsdigita.com/)\n\nPHP has a lot of activity, and is supposed to be really fast if you\ncompile it right into your apache server, although it's often YANAL\nfor people to have to learn. 
(http://www.php.net)\n\nRoxen Challenger, with its Pike language and many other nifty features\nis also worth a glance, though it's also got the YANAL stigma.\nThere's both a commercial version and a GPL'd\nversion. (http://www.roxen.com/)\n\nWith the exception of the Java stuff, I am personally acquainted with\nsignificant projects that have been implemented in each of these\nenvironments---all of them have the ability to support large projects,\nthough I don't know how well they support programming in the large.\n\nI suspect approachability may initially hinge on language familiarity.\nIf you have to learn a whole new language, you're probably going to\nhave a steeper learning curve.\n\nMike.\n", "msg_date": "29 Jul 1999 14:37:54 -0400", "msg_from": "Michael Alan Dorman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] web-based front end development" }, { "msg_contents": "\nOn 29-Jul-99 Thomas Lockhart wrote:\n> Hi. There is interest at work in doing some fairly simple inventory\n> control using a database. 
We've already got Postgres up and running\n> (for obvious reasons) and I was wondering if anyone had suggestions\n> for a good approach to app development for brower-based user\n> interfaces.\n\nYou begin the star war ;-))\n\nI use three different interface for different tasks \n\nFirstly, I use php3 to make web based interface \nhttp://www.piter-press.ru is an example (not the best, but the only public\navailable)\nPHP3 code looks like\n\n $qu = pg_exec($conn, \"select * from users where (uid = '$uid');\" );\n $nm = pg_numrows($qu);\n if ($nm > 0)\n {\n $data = pg_fetch_object ($qu, $i);\n $xpin = crypt($pin, $data->pin);\n if ($xpin == $data->pin)\n { BlueEcho(\"PIN of user '$uid' is valid\");\n }\n else\n { RedEcho(\"Sorry, you enter incorrect PIN for user '$uid'\");\n }\n }\n\n pg_close($conn);\n\nand seems to be very convenient for sambody familiar with perl or C\n\nSecondly, I use Perl every time as I need write anything for five minits \njust because I use Perl about five years.\n\nPerl code looks like (I skip any error check, usually doing inside runSQL)\n\n my $connect_args = 'dbname=voip host=nymph.wplus.net user=dms';\n my $conn = Pg::connectdb($connect_args);\n my $query = \"select uid from users where(opstatus=1 and manstatus=1 and units\n< $insure)\n\n $result = $conn->exec(\"BEGIN\");\n $result = $conn->exec(\"DECLARE killer001 CURSOR FOR $query;\");\n\n open(EMP,\"|$empress\");\n\n while( ($result = Voip::runSQL($conn, \"FETCH FORWARD 1 IN\nkiller001;\")) )\n { while (@row = $result->fetchrow)\n {\n runSQL($conn, \"begin\");\n runSQL($conn, \"update users set opstatus=2 where(uid = '$row[0]');\");\n print EMP \"update its_user set acctstatus=1 where(userid =\n'$row[0]');\\n\";\n runSQL($conn, \"commit\");\n print \"User $row[0] disabled\\n\";\n }\n }\n\n runSQL($conn,\"CLOSE killer001;\");\n runSQL($conn, \"END\");\n $conn->reset();\n\nThird, I use C++ for really hard tasks (By historical reasons, I use my own\nlibrary, not libpq++) \nC++ code looks 
like \n\n pgs = new PgSQL(listdb);\n pgs->trace();\n\n sprintf( (char *)qbuf,\"select id, email from %s where( msgid < %d );\", \n (char *) list, msgid);\n pgs->openCursor( (char *)qbuf);\n\n while( pgs->fetchNext() )\n {\n sprintf(update,\"update %s set msgid=%d where( id = %s);\", \n (char *)listn, msgid\n send_proc(qbuf, (char *) (*pgs)[\"email\"], fm, (char *)\n update );\n } // while \n\n pgs->closeCursor();\n delete pgs;\n\nI also try to use Java but I find no appropriate task for it.\nMain disadvantage to Java - is lots of incompatible browser versions,\nfor example Lucent made java interface works only on 2 MS Win based \nworkstations and 0 of UNIX based from about 80 comps. All other have\nfonts, security manager or other problems ;-)) \n(but it's my opinion - any of Java gurus may be completely disagree with me\n;-)) )\n\n\nIMHO, PHP3 is the best choice for Web it self, but bakground programs \nshould be written in C/C++\n\nSo, make your choice and good luck ;-))\n\n\n> \n> There are other applications which would be of interest, so something\n> with some growth potential would be helpful.\n> \n> We have a bit of expertise in Java servlets, etc, so that is one\n> option via the jdbc interface.\n> \n> Are there any options which are particularly \"approachable\" which\n> would allow newbies to get something working if they have some\n> existing code to look at?\n> \n> TIA\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart [email protected]\n> South Pasadena, California\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Thu, 29 Jul 1999 23:49:59 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] web-based front end development" }, { "msg_contents": "Thus spake Dmitry Samersoff\n> On 29-Jul-99 Thomas Lockhart wrote:\n> > (for obvious reasons) and I was wondering if anyone had suggestions\n> > for a good approach to 
app development for brower-based user\n> > interfaces.\n> \n> You begin the star war ;-))\n> \n> I use three different interface for different tasks \n\n[PHP3, Perl and C++ discussion elided]\n\nOr, use Python for everything.\n\n> Firstly, I use php3 to make web based interface \n\nI use Python over PHP because, like PHP, I can embed it into the web\nserver but I can reuse the code in non-web applications so I don't\nhave to reinvent each wheel.\n\n> http://www.piter-press.ru is an example (not the best, but the only public\n> available)\n> PHP3 code looks like\n> \n> $qu = pg_exec($conn, \"select * from users where (uid = '$uid');\" );\n> $nm = pg_numrows($qu);\n> if ($nm > 0)\n> {\n> $data = pg_fetch_object ($qu, $i);\n> $xpin = crypt($pin, $data->pin);\n> if ($xpin == $data->pin)\n> { BlueEcho(\"PIN of user '$uid' is valid\");\n> }\n> else\n> { RedEcho(\"Sorry, you enter incorrect PIN for user '$uid'\");\n> }\n> }\n> \n> pg_close($conn);\n\nfor data in db.query(\"select * from users where uid = '%d'\" % uid).dictresult():\n\tif crypt(pin, data.pin) == data.pin:\n\t\tprint \"PIN of user '%d' is valid\" % uid\n\telse:\n\t\tprint \"Sorry, you enter incorrect PIN for user '%d'\" % uid\n\nI assume that BlueEcho and RedEcho are simply functions that wrap the\nstrings in font color tags. Such functions can easily be added to Python.\nSee http://www.druid.net/rides/ for a real example.\n\n> Secondly, I use Perl every time as I need write anything for five minits \n> just because I use Perl about five years.\n\nI use Python over Perl because I find it to be a cleaner and more logical\nlanguage. 
This is a personal preference thing, of course.\n\n> Third, I use C++ for really hard tasks (By historical reasons, I use my own\n> library, not libpq++) \n\nI hardly do anything in C (never cared much for C++ except for a few\nspecific features) any more as Python gives me the ability to do anything\nI could do in C and, if needed, I can always write low level code in C\nand link it in.\n\nCheck out http://www.python.org/ for more information. For a PostgreSQL\ninterface for Python see http://www.druid.net/pygresql/ or look in the\nPostgreSQL source tree.\n\n> IMHO, PHP3 is the best choice for Web it self, but bakground programs \n> should be written in C/C++\n\nAs I said, one language for all makes code reuse easier. I find that my\nprojects generally require web interfaces, CLI interfaces as well as\nscheduled background tasks and I can write modules that get imported into\nall of them saving me much development time.\n\n> So, make your choice and good luck ;-))\n\nCan't argue with that.\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Fri, 30 Jul 1999 08:02:45 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] web-based front end development" }, { "msg_contents": "At 11:49 PM 7/29/99 +0400, Dmitry Samersoff wrote:\n...\n\n> $qu = pg_exec($conn, \"select * from users where (uid = '$uid');\" );\n> $nm = pg_numrows($qu);\n> if ($nm > 0)\n> {\n> $data = pg_fetch_object ($qu, $i);\n> $xpin = crypt($pin, $data->pin);\n> if ($xpin == $data->pin)\n> { BlueEcho(\"PIN of user '$uid' is valid\");\n> }\n> else\n> { RedEcho(\"Sorry, you enter incorrect PIN for user '$uid'\");\n> }\n> }\n>\n> pg_close($conn);\n\nTcl code in AOLserver looks roughly like this (using the utilities\npackage from ArsDigita as well as AOLserver Tcl extensions):\n\n# Note that this gets a persistent handle, i.e. 
the overhead is\n# simply that of assigning a handle pointer from a pool\n\nset db [ns_db gethandle]\nset selection [ns_db select $db \"select * from users where (uid='$uid')\"]\nwhile {[ns_db getrow $db $selection]} {\n set_variables_after_query\n if {$pin == ...\n\n}\n\nns_db releasehandle $db\n\n>and seems to be very convenient for sambody familiar with perl or C\n>\n>Secondly, I use Perl every time as I need write anything for five minits \n>just because I use Perl about five years.\n>\n>Perl code looks like (I skip any error check, usually doing inside runSQL)\n\n> my $connect_args = 'dbname=voip host=nymph.wplus.net user=dms';\n> my $conn = Pg::connectdb($connect_args);\n\nThis is bad for websites - building a new db connection is expensive.\n\nThis is why AOLserver provides pooled connections.\n\nThis is why Apache/modperl types use packages that pool persistent\nconnections if they plan to build a busy site.\n\n> my $query = \"select uid from users where(opstatus=1 and manstatus=1 and\nunits\n>< $insure)\n>\n> $result = $conn->exec(\"BEGIN\");\n> $result = $conn->exec(\"DECLARE killer001 CURSOR FOR $query;\");\n\nAs you can see, when it gets down to it, all of these solutions have\nmore in common than in differences.\n\nThe key to look at, IMO, is the efficiency of the webserver and its\ndatabase connectivity.\n\n...\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n", "msg_date": "Fri, 30 Jul 1999 11:13:39 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] web-based front end development" }
]
[ { "msg_contents": "Is there a way that I can supress the column heading and the \"(# rows)\"\nwhich come before and after the following psql command:\n\npsql -e db01 -c \"select tablename from pg_tables where tablename NOT\nLIKE 'pg%'\"\n\n\nThe current output is:\n\ntablename\n=======\ntable1\ntable2\ntable3\ntable4\n(4 rows)\n\nWhat I'd like to get is simply:\n\ntable1\ntable2\ntable3\ntable4\n\n\nThanks.\n-Tony\n\n\n", "msg_date": "Thu, 29 Jul 1999 10:02:13 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Formatting the output" }, { "msg_contents": "Try:\n\npsql db01 -c \"select tablename from pg_tables where tablename NOT LIKE '%pg%'\" -t\n\nThe -e makes it display the QUERY: <whatever> line... so removing it fixes that\nproblem... the -t tells it not to show the column headings and such...\n\nBill\n\nOn Thu, Jul 29, 1999 at 10:02:13AM -0700, G. Anthony Reina wrote:\n>Is there a way that I can supress the column heading and the \"(# rows)\"\n>which come before and after the following psql command:\n>\n>psql -e db01 -c \"select tablename from pg_tables where tablename NOT\n>LIKE 'pg%'\"\n>\n>\n>The current output is:\n>\n>tablename\n>=======\n>table1\n>table2\n>table3\n>table4\n>(4 rows)\n>\n>What I'd like to get is simply:\n>\n>table1\n>table2\n>table3\n>table4\n>\n>\n>Thanks.\n>-Tony\n>\n>\n>\n\n-- \nLiam\n\nBill Brandt \[email protected] http://www.draaw.net/\n", "msg_date": "Thu, 29 Jul 1999 13:34:03 -0400", "msg_from": "Bill Brandt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Formatting the output" }, { "msg_contents": "> Is there a way that I can supress the column heading and the \"(# rows)\"\n> which come before and after the following psql command:\n> \n> psql -e db01 -c \"select tablename from pg_tables where tablename NOT\n> LIKE 'pg%'\"\n\nHow about psql -qt:\n\t\n\t#$ sql -qt test\n\tselect * from pg_language;\n\tinternal|f |f | 0|n/a \n\tlisp |f |f | 
0|/usr/ucb/liszt\n\tC |f |f | 0|/bin/cc \n\tsql |f |f | 0|postgres \n\nThe manual page says -q turns off row count, but -t does. I will fix\nthat now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 29 Jul 1999 13:41:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Formatting the output" }, { "msg_contents": "Thomas, please change the psql manual page to say -t turns off headings\nand the row count, and change -q to not mention turning off of row\ncount, ok? I believe you are controlling the master manual pages\ncopies.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 29 Jul 1999 13:43:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Formatting the output" }, { "msg_contents": "Bill Brandt wrote:\n\n> Try:\n>\n> psql db01 -c \"select tablename from pg_tables where tablename NOT LIKE '%pg%'\" -t\n>\n> The -e makes it display the QUERY: <whatever> line... so removing it fixes that\n> problem... the -t tells it not to show the column headings and such...\n>\n\nBruce Momjian wrote:\n\n> How about psql -qt:\n>\n> #$ sql -qt test\n> select * from pg_language;\n> internal|f |f | 0|n/a\n> lisp |f |f | 0|/usr/ucb/liszt\n> C |f |f | 0|/bin/cc\n> sql |f |f | 0|postgres\n>\n> The manual page says -q turns off row count, but -t does. I will fix\n> that now.\n>\n\nThanks. That works fine.\n\n-Tony\n\n\n", "msg_date": "Thu, 29 Jul 1999 11:37:25 -0700", "msg_from": "\"G. 
Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] Formatting the output" }, { "msg_contents": "> Thomas, please change the psql manual page to say -t turns off headings\n> and the row count, and change -q to not mention turning off of row\n> count, ok? I believe you are controlling the master manual pages\n> copies.\n\nSure. Although I forgot to mention it explicitly, I've completed the\nmerge of the old man page info into the newer sgml pages. From here on\nI'm just making small tweaks to get the man page stuff actually\nworking (have had to dive into perl code to get the cross references\nto work).\n\nSo, feel free to touch those files yourself if you would like, but let\nme know if you want me to do it.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 30 Jul 1999 00:30:38 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Formatting the output" }, { "msg_contents": "> > Thomas, please change the psql manual page to say -t turns off headings\n> > and the row count, and change -q to not mention turning off of row\n> > count, ok? I believe you are controlling the master manual pages\n> > copies.\n> \n> Sure. Although I forgot to mention it explicitly, I've completed the\n> merge of the old man page info into the newer sgml pages. From here on\n> I'm just making small tweaks to get the man page stuff actually\n> working (have had to dive into perl code to get the cross references\n> to work).\n> \n> So, feel free to touch those files yourself if you would like, but let\n> me know if you want me to do it.\n\nNo problem. Done.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 29 Jul 1999 20:42:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Formatting the output" } ]
[ { "msg_contents": "Thank you for your http://www.postgresql.org/doxlist.html\n\nI was hoping to find whether PostgreSQL supports\n\n - transactions\n - stored procedures or triggers\n - PHP3\n\nI was also hoping to find PostgreSQL compared with mySQL,\nwithin the grid.\n\nAgain just 2cents worth\n\n* Todd F. Boyle CPA http://www.GLDialtone.com/\n* International Accounting Services LLC [email protected]\n* 9745-128th Av NE, Kirkland WA 98033 (425) 827-3107\n\n", "msg_date": "Thu, 29 Jul 1999 15:21:46 -0400 (EDT)", "msg_from": "Todd Boyle <[email protected]>", "msg_from_op": true, "msg_subject": "Feature grid suggestions" } ]
[ { "msg_contents": "Hello, \n\nI am receiving the following error quite a bit lately. I am connecting\nvia perl and Pg. What could I do to prevent this error, if anything?\n\nError-WARN:WaitOnLock: error on wakeup - Aborting this transaction\n\nThanks.\n-- \nChristopher Hutton\n(847) 265-2028\n\n\"PocketCard, the best way to carry money ...\"\nPlease visit www.pocketcard.com to learn more\n", "msg_date": "Thu, 29 Jul 1999 20:17:31 -0500", "msg_from": "Christopher Hutton <[email protected]>", "msg_from_op": true, "msg_subject": "error question" }
]
[ { "msg_contents": "\nFurther to the Alpha thread, is this something that could safely be put in\nthe -stable tree? \n\nOn Thu, 29 Jul 1999, Tom Lane wrote:\n\n> Update of /usr/local/cvsroot/pgsql/src/backend/optimizer/util\n> In directory hub.org:/tmp/cvs-serv38342\n> \n> Modified Files:\n> \tpathnode.c \n> Log Message:\n> Fix coredump seen when doing mergejoin between indexed tables,\n> for example in the regression test database, try\n> select * from tenk1 t1, tenk1 t2 where t1.unique1 = t2.unique2;\n> 6.5 has this same bug ...\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 29 Jul 1999 22:29:30 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [COMMITTERS] 'pgsql/src/backend/optimizer/util pathnode.c'" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Further to the Alpha thread, is this something that could safely be put in\n> the -stable tree? \n\nI'm sorry, ignore that comment in the commit log. 6.5 does *not*\nhave this bug --- I got confused about which postmaster process\nI was testing against :-(. It's just a list-slinging error that\nI introduced in my latest round of optimizer hacking.\n\n\t\t\tregards, tom lane\n\n\n> On Thu, 29 Jul 1999, Tom Lane wrote:\n>> Update of /usr/local/cvsroot/pgsql/src/backend/optimizer/util\n>> In directory hub.org:/tmp/cvs-serv38342\n>> \n>> Modified Files:\n>> pathnode.c \n>> Log Message:\n>> Fix coredump seen when doing mergejoin between indexed tables,\n>> for example in the regression test database, try\n>> select * from tenk1 t1, tenk1 t2 where t1.unique1 = t2.unique2;\n>> 6.5 has this same bug ...\n", "msg_date": "Thu, 29 Jul 1999 21:34:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] 'pgsql/src/backend/optimizer/util pathnode.c' " } ]
[ { "msg_contents": "\nJust curious, but why...?\n\nrevision 1.10\ndate: 1999/07/13 20:00:37; author: momjian; state: Exp; lines: +2 -2\nRedefine cpu's as __cpu__. Only for 6.6 branch.\n\n\nWhy 'Only for 6.6...'? \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 30 Jul 1999 01:28:38 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "cvs log for libpq-int.h ..." }, { "msg_contents": "> \n> Just curious, but why...?\n> \n> revision 1.10\n> date: 1999/07/13 20:00:37; author: momjian; state: Exp; lines: +2 -2\n> Redefine cpu's as __cpu__. Only for 6.6 branch.\n> \n> \n> Why 'Only for 6.6...'? \n\n6.5.* is a dead branch.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Jul 1999 01:25:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cvs log for libpq-int.h ..." }, { "msg_contents": "On Fri, 30 Jul 1999, Bruce Momjian wrote:\n\n> > \n> > Just curious, but why...?\n> > \n> > revision 1.10\n> > date: 1999/07/13 20:00:37; author: momjian; state: Exp; lines: +2 -2\n> > Redefine cpu's as __cpu__. Only for 6.6 branch.\n> > \n> > \n> > Why 'Only for 6.6...'? \n> \n> 6.5.* is a dead branch.\n\nFrom a development standpoint, yes...from a commercial standpoint,\nno...6.5.x represents our only stable branch until v6.6 takes over its\nplace...\n\nI'm planning on maintaining it such that if a client calls up, running\nv6.5.x and saying there is a bug, we can easily supply a patch to him to\nfix it...telling a commercial/production site that \"it's fixed in v6.6,\nwhich will be out in 4 months\", IMHO, is no longer acceptable...\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 30 Jul 1999 08:48:56 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] cvs log for libpq-int.h ..." } ]
[ { "msg_contents": "Lamar Owen <[email protected]> writes:\n>The learning curve is surprising shallow, with any experienced\n ^^^^^^^^^^^^^^\n>programmer taking maybe a day or so to get up to speed on AOLserver's\n>dialect of tk-less tcl. I have run this system for over two years, and\n>it works very well.\n\nMichael Alan Dorman <[email protected]> writes:\n>If you have to learn a whole new language, you're probably going to\n>have a steeper learning curve.\n ^^^^^^^^^^^^^^\n\nWhat is a learning curve?\n\n\t-Michael Robinson\n\n", "msg_date": "Fri, 30 Jul 1999 13:57:57 +0800 (CST)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] web-based front end development" }, { "msg_contents": ">What is a learning curve?\n\nThe basic term for how long it takes to become proficient at a task.\nIt's a curve because if you graph days studied vs. proficiency it goes\nup in a curve slowing down as you become closer to your optimal\nproficiency.\n\n(There are actually *three* learning curves associated with any task,\nthink of them as \"beginning\", \"intermediate\", and \"expert\"; but most\npeople just think of it as a single curve and that does alright as a\nsimplification.)\n-- \nD. Jay Newman ! For the pleasure and the profit it derives\[email protected] ! I arrange things, like furniture, and\nhttp://www.sprucegrove.com/~jay/ ! daffodils, and ...lives. -- Hello Dolly\n", "msg_date": "Fri, 30 Jul 1999 08:29:05 -0400 (EDT)", "msg_from": "\"D. Jay Newman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] web-based front end development" } ]
[ { "msg_contents": "> Update of /usr/local/cvsroot/pgsql\n> from hub.org:/tmp/cvs-serv74621\n> Modified Files:\n> HISTORY\n> v6.6's HISTORY file should reflect changes that went into all previous\n> releases, including v6.5.1 ...\n\nThis file is generated from the SGML sources, with a minor\nhand-editing using ApplixWare. No need to update it between releases;\nit all gets fixed up a day or two before the release. I updated the\nmain branch sgml and hand-edited the v6.5.x branch for the v6.5.1\nrelease, just because it was easiest for me, but didn't bother\nupdating the derived files for v6.6.\n\nUse the SGML sources for everything but the FAQ stuff. Some of that is\non my hit list too; you notice that for v6.5 the FAQ_CVS has been\nabsorbed into the sgml and is now \"pretty-formatted\" on the web site\nas a url link into the docs.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 30 Jul 1999 14:18:43 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [COMMITTERS] pgsql (HISTORY)" } ]
[ { "msg_contents": "[ Sorry for not getting in on the fun earlier, but I'm not subscribed to\npgsql-ports, so I had no idea this discussion was going on over there. ]\n\nBruce wrote yesterday:\n> My recommendation(hold on to your seats) is to take the current cvs\n> tree, patch it with Uncle George's patches and any others needed, and\n> release a 6.5.2 release that addresses alpha. We can back-patch 6.5.2,\n> but there is really no reason to do that. There is really nothing\n> 'special' in the current tree. In fact, the most risky of them are the\n> alpha ones,\n\nI have to disagree. I've already committed a ton of parser and optimizer\nchanges, which are far too unproven to call 6.5.2. If there is a 6.5.2\nit *must* be as small as possible a diff from 6.5.1, not something based\non the current tree.\n\nI have not seen the proposed Alpha patches, but I gather they include\ndiffs to get rid of the \"fmgr calls not passing char and short\nparameters properly\" problem, which we know also exists on PPC and\nperhaps some other architectures. That strikes me as being a fairly\nmajor problem that can't really be fixed in 6.5.* --- the diffs would\ncertainly be extensive, and would they have gotten enough testing to\nrisk being put into a patch release? 
I certainly would not trust\nthem without testing on multiple architectures...\n\n(BTW, you may recall that I have a proposal on the table to fix the\nparameter problem via a wholesale revision of the fmgr interface.\nIf we go that route, uglifying the code by changing chars and shorts\nto Datum will be work that'll have to be undone later.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Jul 1999 10:34:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "> (BTW, you may recall that I have a proposal on the table to fix the\n> parameter problem via a wholesale revision of the fmgr interface.\n> If we go that route, uglifying the code by changing chars and shorts\n> to Datum will be work that'll have to be undone later.)\n\n\nOh.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Jul 1999 11:16:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" } ]
[ { "msg_contents": "Ok, they are finally here.... This is an initial release to just\nthe pgsql lists. If no one sees any problems with the below announcement\nor patches, I will then forward them on to the rest of the world (save for\nthe RH pgsql packager, as I do not have his email address). Also, as\nothers see fit, maybe the patches should be put on the FTP site, and maybe\na small announcement somewhere on the web site, so people can find the\npatches, even if they missed them on the mailing lists.\n\n-------------\n\n\tAttached are patches to fix about 99% of the problems that exist\nwith PostgreSQL on Alpha's running Linux. They have not been heavily\ntested, but from what testing has been done, they appear to fix a number\nof nagging problems in the Alpha port of PostgreSQL.\n\tThese patches are against the 6.5.1 release tarball as available\nfrom ftp.postgresql.org. When applied, the resulting binaries pass all\nregression tests save for two:\n\n\tgeometry: Minor off by one error in the last decimal place. Can be\n\t\tsafely ignored provided you don't need extreme accuracy.\n\trules: Minor quirk in that PostgreSQL on Alpha has a different\n\t\tdefault sorting order than the platform original used to\n\t\tgenerate the expected regression results. Can be safely ignored.\n\nUnaligned traps are still present in a moderate amount with PostgreSQL on\nLinux/Alpha. To give one a feeling of the number, 28 were generated from a\nrun of the regression tests. These are being worked on and should be\nresolved in the future.\n\n\tTherefore, everyone with an Alpha running Linux and interested in\nrunning an excellent SQL database, download the 6.5.1 source for\nPostgreSQL, apply the patches, compile it, and pound on it. 
Send\nreactions, evaluations, problems to either me, the Debian Alpha list,\nRed Hat Alpha list, or even the PostgreSQL ports list and I will see them.\n\n\tBecause of the extensive changes these patches make to the main\nPostgreSQL source tree, they are being and will be provided separately for\nthe 6.5.x releases. It is planned to roll them into the main distribution\nsource tree for the 6.6 release that is down the road a couple of months.\n\n\tLastly, these patches are also available from my web site (see\naddress in sig), where I will also be posting any future updates.\n\n\tPS. Uncle George (often spotted on the RedHat axp-list) is the one\nto thank for the bulk of these patches. The rest is a combined mix of\nfixes by both Bruce Momjian and myself.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------", "msg_date": "Fri, 30 Jul 1999 10:59:06 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Patches for Postgresql on Linux/Alpha!" }, { "msg_contents": "Ryan Kirkpatrick <[email protected]> writes:\n> \tOk, they are finally here.... This is an initial release to just\n> the pgsql lists. If no one sees any problems with the below announcement\n> or patches, I will then forward them on to the rest of the world (save for\n> the RH pgsql packager, as I do not have his email address). 
Also, as\n> others see fit, maybe the patches should be put on the FTP site, and maybe\n> a small annoucement somewhere on the web site, so people can find the\n> patches, even if they missed them on the mailing lists.\n\nOK, after a *real* quick once-over, it certainly seems that 99% of the\nbulk is changes for the fmgr interface problem (although it looks like\nint32s were changed to Datum also? Is that really needed on Alpha?)\n\nI see a few hacks on time_t that I am worried about; those will almost\ncertainly cause problems for other architectures if not ifdef'd.\n\nAs I commented before, I would like to think about a different solution\nto the fmgr problem for 6.6, so I'd like to hold off committing any of\nthese fmgr changes into the current tree until we have a consensus on\nwhat the best approach is. But we could commit them into the 6.5 branch\nafter sufficient testing. That would be nice for PPC folks as well, as\nI'll bet these changes would let them run with more than -O0 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Jul 1999 13:59:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Patches for Postgresql on Linux/Alpha! " }, { "msg_contents": "> As I commented before, I would like to think about a different solution\n> to the fmgr problem for 6.6, so I'd like to hold off committing any of\n> these fmgr changes into the current tree until we have a consensus on\n> what the best approach is. But we could commit them into the 6.5 branch\n> after sufficient testing. That would be nice for PPC folks as well, as\n> I'll bet these changes would let them run with more than -O0 ...\n\nYes. I would love to see your solution!\n\nBTW, What about HP-UX? Are they having similar problems with -O2?\n---\nTatsuo Ishii\n", "msg_date": "Sat, 31 Jul 1999 06:37:16 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Patches for Postgresql on Linux/Alpha! 
" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> As I commented before, I would like to think about a different solution\n>> to the fmgr problem for 6.6,\n\n> Yes. I would love to see your solution!\n\nI posted some preliminary ideas to the hackers list on 6/14/99 (see\nmessage titled \"Cleaning up function interface\"). I think the fmgr\ninterface is in need of thorough redesign. Aside from the portability\nbugs that we are currently getting our noses rubbed in, it cannot handle\nNULL function arguments or results properly.\n\nSomething I didn't talk about in my earlier message, but am getting\nmore interested in doing: I would like to see if we can't make float4\na pass-by-value type, and float8 and int8 as well when on a platform\nwhere Datum is 8 bytes wide. 64-bit platforms are going to become\nthe norm over the next few years, and Postgres shouldn't be forever\ntied to palloc() overhead for passing and returning datatypes that\ndon't need it on newer platforms. I think that with suitably chosen\nmacros for accessing and returning function arguments/results, it\nwould be possible for the bulk of the source code not to be aware of\nwhether a particular C datatype is pass by value or pass by reference\nin the fmgr interface.\n\nIn short, I think there are enough reasons for redoing the fmgr\ninterface from scratch that we ought to just bite the bullet and do it,\ntedious though it will be.\n\nThe only real argument against it is that it'd break existing user code\nin loadable libraries; people with user-defined functions written in C\nwould have to revise 'em. That's certainly a bad negative, but I think\nthe positives outweigh it. That existing user code will need to\nchange anyway to be ported to one of the platforms where parameter-\npassing problems exist :-(\n\n> BTW, What about HP-UX? Are they having similar problems with -O2?\n\nOn the newer 64-bit machines, I wouldn't be at all surprised. 
But\nMike Schout just reported regression tests passing on his 64-bit box,\nso maybe we're still OK there. For now anyway...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Jul 1999 19:05:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Patches for Postgresql on Linux/Alpha! " }, { "msg_contents": "On Fri, 30 Jul 1999, Tom Lane wrote:\n\n> OK, after a *real* quick once-over, it certainly seems that 99% of the\n> bulk is changes for the fmgr interface problem (although it looks like\n> int32s were changed to Datum also? Is that really needed on Alpha?)\n> \n> I see a few hacks on time_t that I am worried about; those will almost\n> certainly cause problems for other architectures if not ifdef'd.\n\n\tI have hardly looked at the patch in detail at all, so I can't\nrespond to your specific observation at this time. I just wanted something\nthat made at least alpha work out the door first, and then go back and\nreview the patch in detail to add #ifdef (__alpha__) as needed.\n\n> As I commented before, I would like to think about a different solution\n> to the fmgr problem for 6.6, so I'd like to hold off committing any of\n> these fmgr changes into the current tree until we have a consensus on\n> what the best approach is. But we could commit them into the 6.5 branch\n> after sufficient testing. That would be nice for PPC folks as well, as\n> I'll bet these changes would let them run with more than -O0 ...\n\n\tI never asked for these patches to be put in the pgsql source tree\nin any way. This is just so people who want to run 6.5.1 on Linux/Alpha\nwill be able to, with a relatively clean and well working binary. 
The\npatches for the pgsql source tree will be trickling in over the next few\nmonths as I evaluate the alpha patches in detail.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n", "msg_date": "Fri, 30 Jul 1999 20:21:18 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Patches for Postgresql on Linux/Alpha! " }, { "msg_contents": "> ... I will then forward them on to the rest of the world (save for\n> the RH pgsql packager, as I do not have his email address).\n\nThat would be Lamar and myself, until we have validated it for the RPM\nbuild. We'll take it from there...\n\n - Thomas\n\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 01 Aug 1999 04:04:13 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patches for Postgresql on Linux/Alpha!" } ]
[ { "msg_contents": "I have PostgreSQL 6.4.2 installed since before and it was pretty easy to\ncompile it except for a few minor things. But but today I planned to\nupgrade to version 6.5.1. So I made a pg_dumpall and backed up the rest and\ndeleted my /usr/local/pgsql to install the new version there.\n\nThe compilation and all worked great, it recognised the system as BSDi 4\nwhich is correct and no errors. After this I logged in as postgres user and\nwrote \"initdb\", worked great, BUT when I wrote \"postmaster -i\" I got a big\nproblem which I have never got ever before and I don't know why. Here is\nthe error message from postmaster ->\n\nIpcMemoryCreate: shmget failed (Invalid argument) key=5432001,\nsize=1063936, permission=600\nFATAL 1: ShmemCreate: cannot create region\n\nTo me it sounds like it is using some kind of wrong argument with shmget,\nbut I don't know how to fix this. I hope someone can help me.\n\nThis is what I get when I write ipcs ->\nMessage Queues:\nT ID KEY MODE OWNER GROUP\n\nShared Memory:\nT ID KEY MODE OWNER GROUP\nm 196608 5432210 --rwa------ postgres user\nm 196609 5432201 --rw------- postgres user\nm 983042 5432207 --rw------- postgres user\nm 1376259 5432010 --rwa------ postgres postgres\nm 131076 5432001 --rw------- postgres user\nm 786437 5432007 --rw------- postgres postgres\n\nSemaphores:\nT ID KEY MODE OWNER GROUP\n\nIf it's anything I need to do with IPC or PostgreSQL, please let me know\nwhat and how, or maybe this is a bug ? 
I hope not, I love this database\nengine and really wanna start using 6.5.1 asap.\n\nSincerely\nRoberth Andersson\n\nRoberth Andersson, Server Administrator @ Jump-Gate & Webworqs\nPhone:\t011-46-550-17864\nCellphone:\t011-46-70-6422024\nEMail:\[email protected] / [email protected]\n\n", "msg_date": "Sat, 31 Jul 1999 05:53:30 +0200", "msg_from": "Roberth Andersson <[email protected]>", "msg_from_op": true, "msg_subject": "IPC Memory problem with Postmaster on BSDi 4.x" }, { "msg_contents": "> IpcMemoryCreate: shmget failed (Invalid argument) key=5432001,\n> size=1063936, permission=600\n> FATAL 1: ShmemCreate: cannot create region\n> \n> To me it sounds like it is using some kind of wrong argument with shmget,\n> but I don't know how to fix this. I hope someone can help me.\n> \n> This is what I get when I write ipcs ->\n> Message Queues:\n> T ID KEY MODE OWNER GROUP\n> \n> Shared Memory:\n> T ID KEY MODE OWNER GROUP\n> m 196608 5432210 --rwa------ postgres user\n> m 196609 5432201 --rw------- postgres user\n> m 983042 5432207 --rw------- postgres user\n> m 1376259 5432010 --rwa------ postgres postgres\n> m 131076 5432001 --rw------- postgres user\n> m 786437 5432007 --rw------- postgres postgres\n> \n> Semaphores:\n> T ID KEY MODE OWNER GROUP\n> \n> If it's anything I need to do with IPC or PostgreSQL, please let me know\n> what and how, or maybe this is a bug ? I hope not, I love this database\n> engine and really wanna start using 6.5.1 asap.\n\nI am running BSDI here. Try using pgsql/bin/ipcclean to remove the\ncurrent shared memory stuff. Seems the old version did not clean up its\nshared memory.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 31 Jul 1999 00:11:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] IPC Memory problem with Postmaster on BSDi 4.x" }, { "msg_contents": "Roberth Andersson <[email protected]> writes:\n> BUT when I wrote \"postmaster -i\" I got a big\n> problem which I have never got ever before and I don't know why. Here is\n> the error message from postmaster ->\n\n> IpcMemoryCreate: shmget failed (Invalid argument) key=5432001,\n> size=1063936, permission=600\n> FATAL 1: ShmemCreate: cannot create region\n\nThe kernel error message (\"Invalid argument\", here) is often very\nunhelpful when dealing with shared memory and semaphore operations :-(\nI will bet that the real problem is that your kernel is configured\nnot to allow shared mem regions bigger than 1 megabyte --- but could\nit say \"Request too big\", or some such? Nooo...\n\nPostgres 6.5 defaults to wanting a shmem region just over a meg, whereas\n6.4 was just under IIRC, so this problem will bite anyone who has the\nfairly common kernel parameter setting SHMEMMAX = 1meg.\n\nIf that's the problem, you can either reconfigure your kernel with a\nlarger SHMEMMAX setting, or start Postgres with smaller-than-default\nlimits on number of buffers and backends. 
I'd try -N 16 for starters.\n\nAnother possibility is that you are running into a kernel limit on the\ntotal amount of shared memory, not the size of this individual chunk.\nYou say:\n\n> This is what I get when I write ipcs ->\n> Message Queues:\n> T ID KEY MODE OWNER GROUP\n\n> Shared Memory:\n> T ID KEY MODE OWNER GROUP\n> m 196608 5432210 --rwa------ postgres user\n> m 196609 5432201 --rw------- postgres user\n> m 983042 5432207 --rw------- postgres user\n> m 1376259 5432010 --rwa------ postgres postgres\n> m 131076 5432001 --rw------- postgres user\n> m 786437 5432007 --rw------- postgres postgres\n\n> Semaphores:\n> T ID KEY MODE OWNER GROUP\n\nIf you do not have a postmaster running then those postgres-owned shared\nmemory segments should not be there; they must be left over from some\nold run where the postmaster crashed without releasing 'em :-(. They\ncould be causing the kernel to decide it's given out too much shared\nmemory. Use ipcclean to get rid of them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Jul 1999 11:55:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] IPC Memory problem with Postmaster on BSDi 4.x " }, { "msg_contents": "At 00:11 1999-07-31 -0400, you wrote:\n>> IpcMemoryCreate: shmget failed (Invalid argument) key=5432001,\n>> size=1063936, permission=600\n>> FATAL 1: ShmemCreate: cannot create region\n>> \n>> To me it sounds like it is using some kind of wrong argument with shmget,\n>> but I don't know how to fix this. 
I hope someone can help me.\n>> \n>> This is what I get when I write ipcs ->\n>> Message Queues:\n>> T ID KEY MODE OWNER GROUP\n>> \n>> Shared Memory:\n>> T ID KEY MODE OWNER GROUP\n>> m 196608 5432210 --rwa------ postgres user\n>> m 196609 5432201 --rw------- postgres user\n>> m 983042 5432207 --rw------- postgres user\n>> m 1376259 5432010 --rwa------ postgres postgres\n>> m 131076 5432001 --rw------- postgres user\n>> m 786437 5432007 --rw------- postgres postgres\n>> \n>> Semaphores:\n>> T ID KEY MODE OWNER GROUP\n>> \n>> If it's anything I need to do with IPC or PostgreSQL, please let me know\n>> what and how, or maybe this is a bug ? I hope not, I love this database\n>> engine and really wanna start using 6.5.1 asap.\n>\n>I am running BSDI here. Try using pgsql/bin/ipcclean to remove the\n>current shared memory stuff. Seems the old version did not clean up its\n>shared memory.\n\nThanks Bruce\n\nI tried to do that and it worked just fine the first time, and after that I\nwrote \"ipcs\" to check if it really was cleaned or not, but I\ngot this ->\n\nMessage Queues:\nT ID KEY MODE OWNER GROUP\n\nShared Memory:\nT ID KEY MODE OWNER GROUP\nm 1376259 0 --rwa------ postgres postgres\nm 131076 0 --rw------- postgres user\nm 786437 0 --rw------- postgres postgres\n\nSemaphores:\nT ID KEY MODE OWNER GROUP\n\nNow I tried to use \"ipcclean\" once again, and I am always getting these\nerrors ->\n\nipcrm: shmid(1376259): : Invalid argument\nipcrm: shmid(131076): : Invalid argument\nipcrm: shmid(786437): : Invalid argument\n\nI have no idea why, except maybe this could be something that is left over\nsince old and now the system doesn't know how to remove this Postgres stuff.\n\nIf anyone has any clue about what I can do, please let me know, I would\nappreciate it a lot.\n\nSincerely\n\nRoberth Andersson, Server Administrator @ Jump-Gate & Webworqs\nPhone:\t011-46-550-17864\nCellphone:\t011-46-70-6422024\nEMail:\[email protected] / [email protected]\n\n", 
"msg_date": "Sat, 31 Jul 1999 18:22:20 +0200", "msg_from": "Roberth Andersson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] IPC Memory problem with Postmaster on BSDi 4.x" }, { "msg_contents": "> >I am running BSDI here. Try using pgsql/bin/ipcclean to remove the\n> >current shared memory stuff. Seems the old version did not clean up its\n> >shared memory.\n> \n> Thanks Bruce\n> \n> I tried to do that and it worked just fine the first time, and after that I\n> wrote \"ipcs\" to get a statistical if it really was cleaned or not, but I\n> got this ->\n> \n> Message Queues:\n> T ID KEY MODE OWNER GROUP\n> \n> Shared Memory:\n> T ID KEY MODE OWNER GROUP\n> m 1376259 0 --rwa------ postgres postgres\n> m 131076 0 --rw------- postgres user\n> m 786437 0 --rw------- postgres postgres\n> \n> Semaphores:\n> T ID KEY MODE OWNER GROUP\n> \n> Now I tried to use \"ipcclean\" once again, and I am always getting these\n> errors ->\n> \n> ipcrm: shmid(1376259): : Invalid argument\n> ipcrm: shmid(131076): : Invalid argument\n> ipcrm: shmid(786437): : Invalid argument\n> \n> I have no idea why, except maybe this could be something that is left over\n> since old and now the system doesn't know how to remove this Postgres stuff.\n> \n> If anyone have any clue about what I can do, please let me know, I would\n> appreciate it a lot.\n\nGo to /tmp, and do a ls -la. There will be some file in there that are\nleft over that should be deleted. But now that I look, they aren't in\n/tmp anymore in 4.0. Use ipcs and ipcrm to manually delete them.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 31 Jul 1999 12:53:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] IPC Memory problem with Postmaster on BSDi 4.x" }, { "msg_contents": "\n\nOn Sat, 31 Jul 1999, Tom Lane wrote:\n\n> Roberth Andersson <[email protected]> writes:\n> > IpcMemoryCreate: shmget failed (Invalid argument) key=5432001,\n> > size=1063936, permission=600\n> > FATAL 1: ShmemCreate: cannot create region\n> \n> The kernel error message (\"Invalid argument\", here) is often very\n> unhelpful when dealing with shared memory and semaphore operations :-(\n\nFWIW, I'm just installing v6.5.1 on Solaris 2.5 -- and lo! I had the\nsame problem, and sure enough, SHMEMMAX is 1meg, and -N 16 worked like a\ncharm! So Tom, looks like you're right. As always.\n\nSince this was my first time compiling/installing pgsql, I've noticed a\ncouple of oopses (maybe mine) in the installation instructions... Who do\nI talk to to update them? (Example: Instead of being able simply to type\n\"initdb\" to get started, I had to specify a user with \"initdb -u\npostgres\". That kind of stuff.)\n\nMichael\n\n", "msg_date": "Sun, 1 Aug 1999 13:58:08 -0500 (EST)", "msg_from": "\"J. Michael Roberts\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] IPC Memory problem with Postmaster on BSDi 4.x " }, { "msg_contents": "> Since this was my first time compiling/installing pgsql, I've noticed a\n> couple of oopses (maybe mine) in the installation instructions... Who do\n> I talk to to update them? (Example: Instead of being able simply to type\n> \"initdb\" to get started, I had to specify a user with \"initdb -u\n> postgres\". That kind of stuff.)\n\nSend it to hackers or focs. Either is good. 
The -u may be because of\nyour pg_hba.conf file.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 1 Aug 1999 16:14:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] IPC Memory problem with Postmaster on BSDi 4.x" }, { "msg_contents": "> > Since this was my first time compiling/installing pgsql, I've noticed a\n> > couple of oopses (maybe mine) in the installation instructions... Who do\n> > I talk to to update them?\n> Send it to hackers or focs. Either is good. The -u may be because of\n> your pg_hba.conf file.\n\nIf you have trouble finding the \"focs\" list, try \"docs\" instead ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 01 Aug 1999 21:05:26 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] IPC Memory problem with Postmaster on BSDi 4.x" }, { "msg_contents": "> > > Since this was my first time compiling/installing pgsql, I've noticed a\n> > > couple of oopses (maybe mine) in the installation instructions... Who do\n> > > I talk to to update them?\n> > Send it to hackers or focs. Either is good. The -u may be because of\n> > your pg_hba.conf file.\n> \n> If you have trouble finding the \"focs\" list, try \"docs\" instead ;)\n> \n\nShhh, don't tell everyone else about the focs list. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 1 Aug 1999 17:42:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] IPC Memory problem with Postmaster on BSDi 4.x" }, { "msg_contents": "You people are harsh.\n\nBTW, I didn't modify the pg_hba.conf file at all since I want to allow\nonly local use, so ... um ... since I really haven't digested the usage of\nthe whole host configuration thing, I haven't the faintest clue whether\nthat would be the problem.\n\nHowever, either way, I'd like to understand what happened and make sure\nthat the next hapless newbie doesn't spend as much time on it as I did.\nWhich, granted, wasn't that much time.\n\nHere are a couple of surprises, then, that I encountered during this.\n\n- initdb complained that it couldn't find a user. I gave it -u postgres.\n- I needed to install flex (no surprise) -- the instructions are quite\n explicit, but, well, wrong: flex depends on bison. So you have to get\n and compile bison first. Also, the GNU FTP server has \"redisorganized\"\n their file structure, so the very detailed FTP instructions for getting\n flex are also outdated.\n- Being only halfway a sysadmin, I was a little worried about making a\n postgres \"superuser\". I just made a postgres user and didn't worry\n about the super part, and it seems to work. Am I missing a point?\n- The aforementioned shared memory problem was distressing. Thank God\n somebody else had just encountered it. Is there any better way to\n trap for that? Should the default number of backends be made something\n less than 32 so that the \"common setting\" of 1 meg will be safe? Am I\n being too wimpy?\n\nThat's pretty much it. 
Seems to be perking along happily now, but I\nhaven't run the regression tests yet.\n\nOn Sun, 1 Aug 1999, Thomas Lockhart wrote:\n\n> > > Since this was my first time compiling/installing pgsql, I've noticed a\n> > > couple of oopses (maybe mine) in the installation instructions... Who do\n> > > I talk to to update them?\n> > Send it to hackers or focs. Either is good. The -u may be because of\n> > your pg_hba.conf file.\n> \n> If you have trouble finding the \"focs\" list, try \"docs\" instead ;)\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\n", "msg_date": "Sun, 1 Aug 1999 17:21:02 -0500 (EST)", "msg_from": "\"J. Michael Roberts\" <[email protected]>", "msg_from_op": false, "msg_subject": "[HACKERS] Installation procedure." }, { "msg_contents": "> You people are harsh.\n\nOoh, a compliment. We live for those... ;)\n\n> - initdb complained that it couldn't find a user. I gave it -u postgres.\n\nThis is the only report of this I can remember (others might remind me\notherwise, but...). The best I can tell you somehow didn't have a USER\nenvironment variable or mucked around with accounts between building\nsoftware and trying to initdb. There are several messages from initdb\nwith similar wording but with different diagnostics so you would need\nto send the actual text or look in the initdb source code yourself\n(src/bin/initdb/initdb.sh).\n\n> - I needed to install flex (no surprise) -- the instructions are quite\n> explicit, but, well, wrong: flex depends on bison. So you have to get\n> and compile bison first. Also, the GNU FTP server has \"redisorganized\"\n> their file structure, so the very detailed FTP instructions for getting\n> flex are also outdated.\n\nThanks. Can you give a suggestion for a more helpful phrasing for\nthis, or a better choice of content?\n\n> - Being only halfway a sysadmin, I was a little worried about making a\n> postgres \"superuser\". 
I just made a postgres user and didn't worry\n> about the super part, and it seems to work. Am I missing a point?\n\nOnly sort of. The \"postgres superuser\" is a normal user as far as the\nOS is concerned, but is a superuser as far as the Postgres\ninstallation is concerned. Ya done good.\n\n> - The aforementioned shared memory problem was distressing. Thank God\n> somebody else had just encountered it. Is there any better way to\n> trap for that? Should the default number of backends be made something\n> less than 32 so that the \"common setting\" of 1 meg will be safe? Am I\n> being too wimpy?\n\nThis is the first release where the shared memory size was actually\nbeing calculated correctly. The numbers used pretty much match the\ntheoretical (but incorrectly calculated) maximum limits in previous\nreleases, but the calculated number is bigger and a few OSes seem to\ncough. Your OS is being wimpy imho, but the workaround is pretty easy.\n\nDo you have a specific suggestion for a change here in the docs?\nProbably no need to change the build procedure, but perhaps a warning\nabout possible startup problems?\n\nHave fun with the new toy...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 02 Aug 1999 03:32:49 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Installation procedure." }, { "msg_contents": "\"J. Michael Roberts\" <[email protected]> writes:\n> Since this was my first time compiling/installing pgsql, I've noticed a\n> couple of oopses (maybe mine) in the installation instructions... Who do\n> I talk to to update them? (Example: Instead of being able simply to type\n> \"initdb\" to get started, I had to specify a user with \"initdb -u\n> postgres\". That kind of stuff.)\n\nFWIW, I think I know the cause of that one --- initdb, and also the\nregression tests (and maybe other places?) 
look at the USER environment\nvariable by default to get the name of the postgres user. If you are\non a platform that doesn't ordinarily set USER, you lose. I've been\nburnt by that myself.\n\nI am not sure whether we ought to make the code look at LOGNAME as\na fallback if USER isn't set, or just document that you ought to set\nUSER. The first sounds good, but I wonder what the odds are of\npicking up the wrong username. On my platform, for example, su'ing\nto the postgres account does *not* change LOGNAME, which would mean\ninitdb would pick the wrong thing. Maybe what we need is just a\nbetter error message (\"USER environment variable is not set, please\nset it or provide -u switch\" ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 01 Aug 1999 23:50:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] IPC Memory problem with Postmaster on BSDi 4.x " }, { "msg_contents": "> > You people are harsh.\n> Ooh, a compliment. We live for those... ;)\n\nHeh.\n\n> > - initdb complained that it couldn't find a user. I gave it -u postgres.\n> \n> This is the only report of this I can remember (others might remind me\n> otherwise, but...). The best I can tell you somehow didn't have a USER\n> environment variable or mucked around with accounts between building\n> software and trying to initdb.\n\nAha. Come to think of it, I'm sure that I was actually root su'd to\npostgres. That was probably the problem. I hadn't even thought of that.\n\n> > - I needed to install flex (no surprise) -- the instructions are quite\n> > explicit, but, well, wrong: flex depends on bison.\n> Thanks. Can you give a suggestion for a more helpful phrasing for\n> this, or a better choice of content?\n\nWill do. Um, \"soon.\" Actually, I was very heartened by the explicit\ndetail.\n\n> > - Being only halfway a sysadmin, I was a little worried about making a\n> > postgres \"superuser\". 
I just made a postgres user and didn't worry\n> > about the super part, and it seems to work. Am I missing a point?\n> \n> Only sort of. The \"postgres superuser\" is a normal user as far as the\n> OS is concerned, but is a superuser as far as the Postgres\n> installation is concerned. Ya done good.\n\nIMHO that should probably be more explicit in INSTALL. I'll update it.\n\n> Your OS is being wimpy imho, but the workaround is pretty easy.\n> \n> Do you have a specific suggestion for a change here in the docs?\n> Probably no need to change the build procedure, but perhaps a warning\n> about possible startup problems?\n\nYeah, that would be a good idea. Again, sounds like it's up to me to\nrevise INSTALL a bit.\n\n> Have fun with the new toy...\n\nToy, schmoy. This is going to back up several web sites. After the\nrecent glowing description on this mailing list of v6.5.1, I decided to\ntake the plunge and junk Illustra (the total lack of support and\ndocumentation and tuning and licensing freedom and so on have finally\ntaken their toll on my confidence...) Wish me luck.\n\n", "msg_date": "Sun, 1 Aug 1999 22:53:22 -0500 (EST)", "msg_from": "\"J. Michael Roberts\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Installation procedure." }, { "msg_contents": "> If you are\n> on a platform that doesn't ordinarily set USER, you lose. I've been\n> burnt by that myself.\n> \n> I am not sure whether we ought to make the code look at LOGNAME as\n> a fallback if USER isn't set, or just document that you ought to set\n> USER.\n\nMy take would be at least to change the installation documentation,\nand improve the error message (you can never improve error message *too*\nmuch). Getting clever about what username to use seems more dangerous\nthan helpful, and besides, it's simple enough simply to say \"initdb needs\nto figure out your user somehow\" and leave it at that. 
How often do you\nrun initdb anyway?\n\n The first sounds good, but I wonder what the odds are of\n> picking up the wrong username. On my platform, for example, su'ing\n> to the postgres account does *not* change LOGNAME, which would mean\n> initdb would pick the wrong thing. Maybe what we need is just a\n> better error message (\"USER environment variable is not set, please\n> set it or provide -u switch\" ...)\n\n", "msg_date": "Sun, 1 Aug 1999 23:12:22 -0500 (EST)", "msg_from": "\"J. Michael Roberts\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] IPC Memory problem with Postmaster on BSDi 4.x " }, { "msg_contents": "Great stuff, Michael. I think by the time most of us got to the point\nof contributing much to Postgres, we'd forgotten the little glitches\nwe hit on the first try. Cleaning up these issues is definitely\nworthwhile, and I am glad to see you willing to help out.\n\nThomas already gave good responses about the technical issues,\nbut I have one point to add:\n\n>> - The aforementioned shared memory problem was distressing. Thank God\n>> somebody else had just encountered it. Is there any better way to\n>> trap for that? Should the default number of backends be made something\n>> less than 32 so that the \"common setting\" of 1 meg will be safe? Am I\n>> being too wimpy?\n\n> This is the first release where the shared memory size was actually\n> being calculated correctly. The numbers used pretty much match the\n> theoretical (but incorrectly calculated) maximum limits in previous\n> releases, but the calculated number is bigger and a few OSes seem to\n> cough. Your OS is being wimpy imho, but the workaround is pretty easy.\n\nActually, when we set the default MAXBACKENDS to 32 for 6.5, it was\ndone specifically to ensure that the default shared mem block size would\nstay under a meg. 
(The equivalent setting in 6.4 was 64 backends.)\nBut I guess various data structures changed a little bit after that\ntime, and we ended up on the wrong side of the breakpoint without\nthinking about it.\n\nShould we cut the default MAXBACKENDS some more, or just try to\ndocument the issue better?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Aug 1999 00:46:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Installation procedure. " }, { "msg_contents": "> Actually, when we set the default MAXBACKENDS to 32 for 6.5, it was\n> done specifically to ensure that the default shared mem block size would\n> stay under a meg. (The equivalent setting in 6.4 was 64 backends.)\n> But I guess various data structures changed a little bit after that\n> time, and we ended up on the wrong side of the breakpoint without\n> thinking about it.\n\nOops. :-)\n\n> Should we cut the default MAXBACKENDS some more, or just try to\n> document the issue better?\n\nI've added a paragraph to the system requirements in INSTALL which\nexplains the situation. Now to figure out how to get the changes to you\nguys.... Is the procedure simply to diff it and email it to somebody, or\nwhat? Just because I've been lurking on this list for nearly a year now\ndoesn't mean I know what I'm doing.\n\nAs to whether MAXBACKENDS should be changed -- I have no idea what impact\nthat would actually have. What *is* a backend, precisely? In Illustra,\nanyway, each active query starts a new process while it's working -- is\nthat a backend? I've never seen more than about 5 on my own server (not\nthat I hang around monitoring all the time) so 16 seems capacious.\n\n\n\n", "msg_date": "Sun, 1 Aug 1999 23:56:08 -0500 (EST)", "msg_from": "\"J. Michael Roberts\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Installation procedure. " }, { "msg_contents": "\"J. 
Michael Roberts\" <[email protected]> writes:\n> Now to figure out how to get the changes to you\n> guys.... Is the procedure simply to diff it and email it to somebody, or\n> what?\n\nStandard operating procedure is to make a patch-compatible diff\n(I think -c format is preferred) and post it to the pgsql-patches\nmailing list. If you have a real good idea which core member is\nprobably going to apply the patch you could send it just to that\nperson, but it's more courteous to put it on the public mailing list.\n\n> As to whether MAXBACKENDS should be changed -- I have no idea what impact\n> that would actually have. What *is* a backend, precisely? In Illustra,\n> anyway, each active query starts a new process while it's working -- is\n> that a backend?\n\nNo. There's one backend process per client connection; it lives till\nthe client disconnects, and handles all queries that come through that\nconnection. So MAXBACKENDS really means \"how many simultaneous\nclients am I expecting\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Aug 1999 01:09:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Installation procedure. " }, { "msg_contents": "On Mon, 2 Aug 1999, Tom Lane wrote:\n\n> Great stuff, Michael. I think by the time most of us got to the point\n> of contributing much to Postgres, we'd forgotten the little glitches\n> we hit on the first try. Cleaning up these issues is definitely\n> worthwhile, and I am glad to see you willing to help out.\n> \n> Thomas already gave good responses about the technical issues,\n> but I have one point to add:\n> \n> >> - The aforementioned shared memory problem was distressing. Thank God\n> >> somebody else had just encountered it. Is there any better way to\n> >> trap for that? Should the default number of backends be made something\n> >> less than 32 so that the \"common setting\" of 1 meg will be safe? 
Am I\n> >> being too wimpy?\n> \n> > This is the first release where the shared memory size was actually\n> > being calculated correctly. The numbers used pretty much match the\n> > theoretical (but incorrectly calculated) maximum limits in previous\n> > releases, but the calculated number is bigger and a few OSes seem to\n> > cough. Your OS is being wimpy imho, but the workaround is pretty easy.\n> \n> Actually, when we set the default MAXBACKENDS to 32 for 6.5, it was\n> done specifically to ensure that the default shared mem block size would\n> stay under a meg. (The equivalent setting in 6.4 was 64 backends.)\n> But I guess various data structures changed a little bit after that\n> time, and we ended up on the wrong side of the breakpoint without\n> thinking about it.\n> \n> Should we cut the default MAXBACKENDS some more, or just try to\n> document the issue better?\n\nMy opinion...cut it back to 16 and document. Reason: those new won't hit\nthe problem, and those that have started to use it in \"more production\nenvironments\" will have started to look into performance tuning, and most\nlikely have at least scan'd through the postmaster man page (and will have\nseen the -B option)...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 2 Aug 1999 02:21:42 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Installation procedure. " }, { "msg_contents": "> IMHO that should probably be more explicit in INSTALL. I'll update it.\n\nOops. Sorry I took a break for dinner (I see another e-mail or two\nreferencing INSTALL too).\n\nThe changes need to happen in doc/src/sgml/install.sgml, which is the\nsource code from which INSTALL is derived. 
If you look at that file I\nthink you will see how it corresponds to the output file.\n\nIf you can make changes and preserve the sgml markup, great. If not,\njust send patches with text inserted in the right places and I'll\nfinish up the markup.\n\nIf your changes to INSTALL are pretty isolated, then I can also accept\npatches on that file. But ones on the sgml source would be easier.\n\n> Toy, schmoy. This is going to back up several web sites. After the\n> recent glowing description on this mailing list of v6.5.1, I decided to\n> take the plunge and junk Illustra (the total lack of support and\n> documentation and tuning and licensing freedom and so on have finally\n> taken their toll on my confidence...) Wish me luck.\n\nWe'll be interested in hearing how it goes. afaik Illustra was\n\"Postgres done right\", based on Postgres as of a few years ago with\nsome sections rewritten. In the meantime, PostgreSQL has had\nsubstantial improvements, and I wonder if we've caught up to or passed\nwhere Illustra froze.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 02 Aug 1999 05:38:43 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Installation procedure." }, { "msg_contents": "\"J. Michael Roberts\" <[email protected]> writes:\n> - I needed to install flex (no surprise) -- the instructions are quite\n> explicit, but, well, wrong: flex depends on bison. So you have to get\n> and compile bison first.\n\nBTW, does anyone understand *why* our lexer files require flex and not\njust garden-variety lex? Would it be worth trying to make them more\nportable?\n\nOr perhaps we should ship pre-lexed derived files, as we do for the\nlarger grammar files?\n\nHaving to install bison & flex is probably the most annoying Postgres\nprerequisite for people on non-Linux platforms, so I think it would\nbe nice to clean this up. 
I hadn't realized that you're essentially\nforced to install both...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Aug 1999 10:22:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "flex (was Re: [HACKERS] Installation procedure.)" }, { "msg_contents": "> BTW, does anyone understand *why* our lexer files require flex and not\n> just garden-variety lex? Would it be worth trying to make them more\n> portable?\n\nSome of the oldest and cruftiest AT&T lexers do not support the\nconcept of an exclusive start state, which we use extensively (it's my\nfault; makes for *much* cleaner specifications).\n\nUnfortunately, Sun adopted some SysV packages when they made the\nswitch from BSD, and got a bad lexer in the bargain. afaik, most other\nsystems ship a more capable package.\n\n> Or perhaps we should ship pre-lexed derived files, as we do for the\n> larger grammar files?\n\nYes, that would probably be a good idea...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 02 Aug 1999 14:46:52 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: flex (was Re: [HACKERS] Installation procedure.)" }, { "msg_contents": "> \"J. Michael Roberts\" <[email protected]> writes:\n> > - I needed to install flex (no surprise) -- the instructions are quite\n> > explicit, but, well, wrong: flex depends on bison. So you have to get\n> > and compile bison first.\n> \n> BTW, does anyone understand *why* our lexer files require flex and not\n> just garden-variety lex? Would it be worth trying to make them more\n> portable?\n\nCan we do that? Is flex actually required too?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 2 Aug 1999 11:07:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: flex (was Re: [HACKERS] Installation procedure.)" }, { "msg_contents": "\n> \"J. Michael Roberts\" <[email protected]> writes:\n> > - I needed to install flex (no surprise) -- the instructions are quite\n> > explicit, but, well, wrong: flex depends on bison. So you have to get\n> > and compile bison first.\n> \n> BTW, does anyone understand *why* our lexer files require flex and not\n> just garden-variety lex? Would it be worth trying to make them more\n> portable?\n> \n> Or perhaps we should ship pre-lexed derived files, as we do for the\n> larger grammar files?\n> \n> Having to install bison & flex is probably the most annoying Postgres\n> prerequisite for people on non-Linux platforms, so I think it would\n> be nice to clean this up. I hadn't realized that you're essentially\n> forced to install both...\n\nYou undoubtedly already had both installed...\n\nFor the record, if configure doesn't find flex, it assumes lex. The\nproblem is if you don't even have lex.\n\nHowever, the pre-lexed derived files are a good idea. If somebody then\nreally wants to mess with those, they can go get flex. That means that\nthe standard distribution *wouldn't* require flex. For me, it was a good\nexcuse to get the lead out and finally install flex and bison (OK, it took\nme only about fifteen minutes, but you know how those to-do lists get).\nBut if my only goal were just to get Postgres running, that would be a\nrather unnecessary step.\n\n", "msg_date": "Mon, 2 Aug 1999 10:47:58 -0500 (EST)", "msg_from": "\"J. Michael Roberts\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: flex (was Re: [HACKERS] Installation procedure.)" }, { "msg_contents": "\"J. Michael Roberts\" <[email protected]> writes:\n> For the record, if configure doesn't find flex, it assumes lex. 
The\n> problem is if you don't even have lex.\n\nOr if you have lex but it doesn't work on Postgres' .l files, as indeed\nis true for the vendor lex on HPUX, and probably some other systems.\n\n> However, the pre-lexed derived files are a good idea.\n\nYah. This has been discussed before, but no one has got round to it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Aug 1999 12:55:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: flex (was Re: [HACKERS] Installation procedure.) " } ]
[ { "msg_contents": "At 11:55 1999-07-31 -0400, you wrote:\n>Roberth Andersson <[email protected]> writes:\n>> BUT when I wrote \"postmaster -i\" I got a big\n>> problem which I have never got ever before and I don't know why. Here is\n>> the error message from postmaster ->\n>\n>> IpcMemoryCreate: shmget failed (Invalid argument) key=5432001,\n>> size=1063936, permission=600\n>> FATAL 1: ShmemCreate: cannot create region\n>\n>The kernel error message (\"Invalid argument\", here) is often very\n>unhelpful when dealing with shared memory and semaphore operations :-(\n>I will bet that the real problem is that your kernel is configured\n>not to allow shared mem regions bigger than 1 megabyte --- but could\n>it say \"Request too big\", or some such? Nooo...\n>Postgres 6.5 defaults to wanting a shmem region just over a meg, whereas\n>6.4 was just under IIRC, so this problem will bite anyone who has the\n>fairly common kernel parameter setting SHMEMMAX = 1meg.\n>If that's the problem, you can either reconfigure your kernel with a\n>larger SHMEMMAX setting, or start Postgres with smaller-than-default\n>limits on number of buffers and backends. I'd try -N 16 for starters.\n>Another possibility is that you are running into a kernel limit on the\n>total amount of shared memory, not the size of this individual chunk.\n>You say:\n\nThanks Tom\n\nIs it possible to find how big it is right now without touching the kernel\nsource codes ? 
I tried to search also for SHMEMMAX in the source codes,\nbut found nothing.\n\nI am going to try to start up Postgres later today with your suggested\nparameter -N 16 and see whats happends.\n\n>> This is what I get when I write ipcs ->\n>> Message Queues:\n>> T ID KEY MODE OWNER GROUP\n>\n>> Shared Memory:\n>> T ID KEY MODE OWNER GROUP\n>> m 196608 5432210 --rwa------ postgres user\n>> m 196609 5432201 --rw------- postgres user\n>> m 983042 5432207 --rw------- postgres user\n>> m 1376259 5432010 --rwa------ postgres postgres\n>> m 131076 5432001 --rw------- postgres user\n>> m 786437 5432007 --rw------- postgres postgres\n>\n>> Semaphores:\n>> T ID KEY MODE OWNER GROUP\n>\n>If you do not have a postmaster running then those postgres-owned shared\n>memory segments should not be there; they must be left over from some\n>old run where the postmaster crashed without releasing 'em :-(. They\n>could be causing the kernel to decide it's given out too much shared\n>memory. Use ipcclean to get rid of them.\n\nI tried to do that and it worked just fine the first time, and after that I\nwrote \"ipcs\" to get a statistical if it really was cleaned or not, but I\ngot this ->\n\nMessage Queues:\nT ID KEY MODE OWNER GROUP\n\nShared Memory:\nT ID KEY MODE OWNER GROUP\nm 1376259 0 --rwa------ postgres postgres\nm 131076 0 --rw------- postgres user\nm 786437 0 --rw------- postgres postgres\n\nSemaphores:\nT ID KEY MODE OWNER GROUP\n\nNow I tried to use \"ipcclean\" once again, and I am always getting these\nerrors ->\n\nipcrm: shmid(1376259): : Invalid argument\nipcrm: shmid(131076): : Invalid argument\nipcrm: shmid(786437): : Invalid argument\n\nI have no idea why, except maybe this could be something that is left over\nsince old and now the system doesn't know how to remove this Postgres stuff.\n\nIf anyone have any clue about what I can do, please let me know, I would\nappreciate it a lot.\n\nSincerely \n", "msg_date": "Sat, 31 Jul 1999 18:30:53 +0200", "msg_from": "Roberth 
Andersson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] IPC Memory problem with Postmaster on BSDi 4.x " } ]
[ { "msg_contents": "\nJust curious...why can't he just reboot?\n\nMike Mascari \n([email protected])\n\n--- Bruce Momjian <[email protected]> wrote:\n> > >I am running BSDI here. Try using\n> pgsql/bin/ipcclean to remove the\n> > >current shared memory stuff. Seems the old\n> version did not clean up its\n> > >shared memory.\n> > \n> > Thanks Bruce\n> > \n> > I tried to do that and it worked just fine the\n> first time, and after that I\n> > wrote \"ipcs\" to get a statistical if it really was\n> cleaned or not, but I\n> > got this ->\n> > \n> > Message Queues:\n> > T ID KEY MODE OWNER \n> GROUP\n> > \n> > Shared Memory:\n> > T ID KEY MODE OWNER \n> GROUP\n> > m 1376259 0 --rwa------ postgres\n> postgres\n> > m 131076 0 --rw------- postgres \n> user\n> > m 786437 0 --rw------- postgres\n> postgres\n> > \n> > Semaphores:\n> > T ID KEY MODE OWNER \n> GROUP\n> > \n> > Now I tried to use \"ipcclean\" once again, and I am\n> always getting these\n> > errors ->\n> > \n> > ipcrm: shmid(1376259): : Invalid argument\n> > ipcrm: shmid(131076): : Invalid argument\n> > ipcrm: shmid(786437): : Invalid argument\n> > \n> > I have no idea why, except maybe this could be\n> something that is left over\n> > since old and now the system doesn't know how to\n> remove this Postgres stuff.\n> > \n> > If anyone have any clue about what I can do,\n> please let me know, I would\n> > appreciate it a lot.\n> \n> Go to /tmp, and do a ls -la. There will be some\n> file in there that are\n> left over that should be deleted. But now that I\n> look, they aren't in\n> /tmp anymore in 4.0. 
Use ipcs and ipcrm to manually\n> delete them.\n\n\n_____________________________________________________________\nDo You Yahoo!?\nFree instant messaging and more at http://messenger.yahoo.com\n\n", "msg_date": "Sat, 31 Jul 1999 11:17:05 -0700 (PDT)", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] IPC Memory problem with Postmaster on BSDi 4.x" }, { "msg_contents": "> \n> Just curious...why can't he just reboot?\n> \n> Mike Mascari \n> ([email protected])\n> \n\nThat would be the best. I assume he can't, for some reason.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 31 Jul 1999 15:14:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] IPC Memory problem with Postmaster on BSDi 4.x" } ]
[ { "msg_contents": "At 11:17 1999-07-31 -0700, you wrote:\n>Just curious...why can't he just reboot?\n>\n>Mike Mascari \n>([email protected])\n\nBecuase this is a server with many customers/users, we rather let it stay\nas long as possible online without any reboot, we only reboots if we REALLY\nneeds to do that like patching the BSDi kernel or something similar.\n\nSincerely \nRoberth Andersson, Server Administrator @ Jump-Gate & Webworqs\nPhone:\t011-46-550-17864\nCellphone:\t011-46-70-6422024\nEMail:\[email protected] / [email protected]\n\n", "msg_date": "Sat, 31 Jul 1999 21:22:23 +0200", "msg_from": "Roberth Andersson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] IPC Memory problem with Postmaster on BSDi 4.x" } ]
[ { "msg_contents": "I re-enabled pg_upgrade this afternoon, thinking that it would be easier\nto use than dump/initdb/reload for coping with the pg_statistic change\nI'm about to commit. However, testing shows that it doesn't really\nwork. The \"upgraded\" database behaves very strangely --- vacuum tends\nto fail, and I have seen duplicate listings for attributes of a relation\nin psql's \\d listing, broken links between a relation and its indices,\nand other problems.\n\nI think the problem is that pg_upgrade no longer works in the presence\nof MVCC. In particular, forcibly moving the old database's pg_log into\nthe new is probably a bad idea when there is no similarity between the\nsets of committed transaction numbers. I suspect the reason for the\nstrange behaviors I've seen is that after the pg_log copy, the system no\nlonger believes that all of the rows in the new database's system tables\nhave been committed.\n\nIs it possible to make pg_upgrade work again, perhaps by requiring a\nvacuum on the old and/or new databases just before the move happens?\nOr must we consign pg_upgrade to the dustbin of history?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Jul 1999 18:18:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "pg_upgrade may be mortally wounded" }, { "msg_contents": "> I re-enabled pg_upgrade this afternoon, thinking that it would be easier\n> to use than dump/initdb/reload for coping with the pg_statistic change\n> I'm about to commit. However, testing shows that it doesn't really\n> work. The \"upgraded\" database behaves very strangely --- vacuum tends\n> to fail, and I have seen duplicate listings for attributes of a relation\n> in psql's \\d listing, broken links between a relation and its indices,\n> and other problems.\n> \n> I think the problem is that pg_upgrade no longer works in the presence\n> of MVCC. 
In particular, forcibly moving the old database's pg_log into\n> the new is probably a bad idea when there is no similarity between the\n> sets of committed transaction numbers. I suspect the reason for the\n> strange behaviors I've seen is that after the pg_log copy, the system no\n> longer believes that all of the rows in the new database's system tables\n> have been committed.\n> \n> Is it possible to make pg_upgrade work again, perhaps by requiring a\n> vacuum on the old and/or new databases just before the move happens?\n> Or must we consign pg_upgrade to the dustbin of history?\n\nI am unsure how MVCC would affect this. I will say that pg_upgrade does\nnot work when the underlying table structure changes, though I don't\nthink we have changed any of that. Strange.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 31 Jul 1999 22:06:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_upgrade may be mortally wounded" }, { "msg_contents": ">> I think the problem is that pg_upgrade no longer works in the presence\n>> of MVCC. In particular, forcibly moving the old database's pg_log into\n>> the new is probably a bad idea when there is no similarity between the\n>> sets of committed transaction numbers. I suspect the reason for the\n>> strange behaviors I've seen is that after the pg_log copy, the system no\n>> longer believes that all of the rows in the new database's system tables\n>> have been committed.\n\nSome preliminary experiments suggest that vacuuming the new database\njust before moving the data files solves the problem --- at least,\npg_upgrade seems to work then. 
I will commit this change, since it's\nvery clear that pg_upgrade doesn't work without it.\n\nHowever, I'd sure like to hear Vadim's opinion before I trust pg_upgrade\nwith MVCC very far...\n\n\nBTW, it seems to me that it is a good idea to kill and restart the\npostmaster immediately after pg_upgrade finishes. Otherwise there might\nbe buffers in shared memory that do not reflect the actual contents of\nthe corresponding pages of the relation files (now that pg_upgrade\noverwrote the files with other data).\n\nAnother potential gotcha is that it'd be a really bad idea to let any\nother clients connect to the new database while it's being built.\n\nLooking at these two items together, it seems like the really safe way\nfor pg_upgrade to operate would be *not* to start a postmaster for the\nnew database until after pg_upgrade finishes; that is, the procedure\nwould be \"initdb; pg_upgrade; start postmaster\". pg_upgrade would\noperate by invoking a standalone backend for initial table creation.\nThis would guarantee no unwanted interference from other clients\nduring the critical steps.\n\nThe tricky part is that pg_dump output includes psql \\connect commands,\nwhich AFAIK are not accepted by a standalone backend. We'd have to\nfigure out another solution for those. Ideas?\n\n\t\t\tregards, tom lane\n\nPS: if you try to test pg_upgrade by running the regression database\nthrough it, and then \"vacuum analyze\" the result, you will observe a\nbackend crash when vacuum gets to the table \"c_star\". This seems to be\nthe fault of a bug that Chris Bitmead has complained of in the past.\nc_star has had a column added via inherited ALTER TABLE ADD COLUMN, and\nthe output of pg_dump creates a database with a different column order\nfor such a table than ADD COLUMN does. So, the reconstructed database\nschema does not match the table data that pg_upgrade has moved in. Ugh.\nBut we already knew that inherited ADD COLUMN is pretty bogus. 
I wonder\nwhether we shouldn't just disable it until it can be fixed properly...\n", "msg_date": "Mon, 02 Aug 1999 18:30:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_upgrade may be mortally wounded " }, { "msg_contents": "> BTW, it seems to me that it is a good idea to kill and restart the\n> postmaster immediately after pg_upgrade finishes. Otherwise there might\n> be buffers in shared memory that do not reflect the actual contents of\n> the corresponding pages of the relation files (now that pg_upgrade\n> overwrote the files with other data).\n\nHonestly, I have been surprised at how well pg_upgrade worked in 6.4. I\ngot very few complaints, and I think people used it.\n\nYour issue with buffer cache is a major one. Clearly, this would be a\nproblem. However, it is my understanding that the buffer cache after\ninitdb would only contain system table info, so if they pg_upgrade after\nthat, there is no way they have bad stuf in the cache, right?\n\n\n> \n> Another potential gotcha is that it'd be a really bad idea to let any\n> other clients connect to the new database while it's being built.\n\nThat is pretty obvious, and just basic sysadmin.\n\n> \n> Looking at these two items together, it seems like the really safe way\n> for pg_upgrade to operate would be *not* to start a postmaster for the\n> new database until after pg_upgrade finishes; that is, the procedure\n> would be \"initdb; pg_upgrade; start postmaster\". pg_upgrade would\n> operate by invoking a standalone backend for initial table creation.\n> This would guarantee no unwanted interference from other clients\n> during the critical steps.\n> \n> The tricky part is that pg_dump output includes psql \\connect commands,\n> which AFAIK are not accepted by a standalone backend. We'd have to\n> figure out another solution for those. 
Ideas?\n> \n> \t\t\tregards, tom lane\n> \n> PS: if you try to test pg_upgrade by running the regression database\n> through it, and then \"vacuum analyze\" the result, you will observe a\n> backend crash when vacuum gets to the table \"c_star\". This seems to be\n> the fault of a bug that Chris Bitmead has complained of in the past.\n> c_star has had a column added via inherited ALTER TABLE ADD COLUMN, and\n> the output of pg_dump creates a database with a different column order\n> for such a table than ADD COLUMN does. So, the reconstructed database\n> schema does not match the table data that pg_upgrade has moved in. Ugh.\n> But we already knew that inherited ADD COLUMN is pretty bogus. I wonder\n> whether we shouldn't just disable it until it can be fixed properly...\n> \n\nAnd report a message to the user. Good idea.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 2 Aug 1999 22:29:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_upgrade may be mortally wounded" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> BTW, it seems to me that it is a good idea to kill and restart the\n>> postmaster immediately after pg_upgrade finishes. Otherwise there might\n>> be buffers in shared memory that do not reflect the actual contents of\n>> the corresponding pages of the relation files (now that pg_upgrade\n>> overwrote the files with other data).\n\n> Your issue with buffer cache is a major one. Clearly, this would be a\n> problem. 
However, it is my understanding that the buffer cache after\n> initdb would only contain system table info, so if they pg_upgrade after\n> that, there is no way they have bad stuf in the cache, right?\n\nCached copies of system tables obviously are no problem, since\npg_upgrade doesn't overwrite those. I'm concerned whether there can\nbe cached copies of pages from user tables or indexes. Since we've\njust done a bunch of CREATE INDEXes (and a VACUUM, if my latest hack\nis right), it seems at least possible that this would happen.\n\nNow all those user tables will be empty (zero-length files), so there is\nnothing to cache. But the user indexes are *not* zero-length --- it looks\nlike they are at least 2 pages long even when empty. So there seems\nto be a real risk of having a cached copy of one of the pages of a user\nindex while pg_upgrade is overwriting the index file with new data...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Aug 1999 10:03:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_upgrade may be mortally wounded " }, { "msg_contents": "> Cached copies of system tables obviously are no problem, since\n> pg_upgrade doesn't overwrite those. I'm concerned whether there can\n> be cached copies of pages from user tables or indexes. Since we've\n> just done a bunch of CREATE INDEXes (and a VACUUM, if my latest hack\n> is right), it seems at least possible that this would happen.\n> \n> Now all those user tables will be empty (zero-length files), so there is\n> nothing to cache. But the user indexes are *not* zero-length --- it looks\n> like they are at least 2 pages long even when empty. So there seems\n> to be a real risk of having a cached copy of one of the pages of a user\n> index while pg_upgrade is overwriting the index file with new data...\n\nOh, I see. 
That would be a problem.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 3 Aug 1999 12:12:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_upgrade may be mortally wounded" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> ... So there seems\n>> to be a real risk of having a cached copy of one of the pages of a user\n>> index while pg_upgrade is overwriting the index file with new data...\n\n> Oh, I see. That would be a problem.\n\nOK, then what do you think of the idea of changing pg_upgrade to use\na standalone backend, so that no postmaster is running while it runs?\nThat'd eliminate the shared-memory-cache issue and also prevent\naccidental interference from other clients.\n\nThere's an awk script in there already that processes the pg_dump\nscript, so maybe we could change it to look for \\connect commands\nand replace them by re-executions of the backend.\n\nBTW, do you think it's really necessary for the awk script to remove\nCOPY commands? There shouldn't be any unwanted copies in there in\nthe first place, if the user made the dump with -s per instructions...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Aug 1999 12:48:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_upgrade may be mortally wounded " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> ... So there seems\n> >> to be a real risk of having a cached copy of one of the pages of a user\n> >> index while pg_upgrade is overwriting the index file with new data...\n> \n> > Oh, I see. 
That would be a problem.\n> \n> OK, then what do you think of the idea of changing pg_upgrade to use\n> a standalone backend, so that no postmaster is running while it runs?\n> That'd eliminate the shared-memory-cache issue and also prevent\n> accidental interference from other clients.\n> \n> There's an awk script in there already that processes the pg_dump\n> script, so maybe we could change it to look for \\connect commands\n> and replace them by re-executions of the backend.\n\nThat is risky. How do we know what flags to pass to the stand-alone\nbackend? In most cases, there is not a backend running after an initdb. \nIn fact, you can't have the postmaster running during initdb. I recommend\nthey be told in the instructions, and that after pg_upgrade finishes we\nprint something reminding them to start and stop the postmaster. \nBecause each backend flushes dirty pages on exit, after each psql\nfinishes, it has already updated the files with dirty pages, so\nstarting/stopping the postmaster will not cause the replaced tables to be\nmodified, and then the cache will be empty.\n\n> BTW, do you think it's really necessary for the awk script to remove\n> COPY commands? There shouldn't be any unwanted copies in there in\n> the first place, if the user made the dump with -s per instructions...\n\nBut we don't know that they did that. Maybe they found pg_upgrade\n_after_ they performed the pg_dump. Very likely.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 3 Aug 1999 13:20:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_upgrade may be mortally wounded" }, { "msg_contents": "Tom, did we address this? 
I forgot.\n\n\n> Bruce Momjian <[email protected]> writes:\n> >> BTW, it seems to me that it is a good idea to kill and restart the\n> >> postmaster immediately after pg_upgrade finishes. Otherwise there might\n> >> be buffers in shared memory that do not reflect the actual contents of\n> >> the corresponding pages of the relation files (now that pg_upgrade\n> >> overwrote the files with other data).\n> \n> > Your issue with buffer cache is a major one. Clearly, this would be a\n> > problem. However, it is my understanding that the buffer cache after\n> > initdb would only contain system table info, so if they pg_upgrade after\n> > that, there is no way they have bad stuf in the cache, right?\n> \n> Cached copies of system tables obviously are no problem, since\n> pg_upgrade doesn't overwrite those. I'm concerned whether there can\n> be cached copies of pages from user tables or indexes. Since we've\n> just done a bunch of CREATE INDEXes (and a VACUUM, if my latest hack\n> is right), it seems at least possible that this would happen.\n> \n> Now all those user tables will be empty (zero-length files), so there is\n> nothing to cache. But the user indexes are *not* zero-length --- it looks\n> like they are at least 2 pages long even when empty. So there seems\n> to be a real risk of having a cached copy of one of the pages of a user\n> index while pg_upgrade is overwriting the index file with new data...\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 12:52:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_upgrade may be mortally wounded" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, did we address this. 
I forgot.\n\nNo, it's still an open issue as far as I'm concerned. I was hoping to\nhear something from Vadim about how pg_upgrade could work safely under\nMVCC...\n\n\t\t\tregards, tom lane\n\n\n>> Bruce Momjian <[email protected]> writes:\n>>>>> BTW, it seems to me that it is a good idea to kill and restart the\n>>>>> postmaster immediately after pg_upgrade finishes. Otherwise there might\n>>>>> be buffers in shared memory that do not reflect the actual contents of\n>>>>> the corresponding pages of the relation files (now that pg_upgrade\n>>>>> overwrote the files with other data).\n>> \n>>>> Your issue with buffer cache is a major one. Clearly, this would be a\n>>>> problem. However, it is my understanding that the buffer cache after\n>>>> initdb would only contain system table info, so if they pg_upgrade after\n>>>> that, there is no way they have bad stuf in the cache, right?\n>> \n>> Cached copies of system tables obviously are no problem, since\n>> pg_upgrade doesn't overwrite those. I'm concerned whether there can\n>> be cached copies of pages from user tables or indexes. Since we've\n>> just done a bunch of CREATE INDEXes (and a VACUUM, if my latest hack\n>> is right), it seems at least possible that this would happen.\n>> \n>> Now all those user tables will be empty (zero-length files), so there is\n>> nothing to cache. But the user indexes are *not* zero-length --- it looks\n>> like they are at least 2 pages long even when empty. So there seems\n>> to be a real risk of having a cached copy of one of the pages of a user\n>> index while pg_upgrade is overwriting the index file with new data...\n", "msg_date": "Mon, 27 Sep 1999 18:40:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_upgrade may be mortally wounded " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Tom, did we address this. I forgot.\n> \n> No, it's still an open issue as far as I'm concerned. 
I was hoping to\n> hear something from Vadim about how pg_upgrade could work safely under\n> MVCC...\n> \n> \t\t\tregards, tom lane\n\nWould a solution to this be to add instructions to pg_upgrade to require\nthe user to stop and restart the postmaster? Seems like that is the\nonly solution unless we do that stop of the postmaster inside pg_upgrade, but\nthat seems risky.\n\n> \n> \n> >> Bruce Momjian <[email protected]> writes:\n> >>>>> BTW, it seems to me that it is a good idea to kill and restart the\n> >>>>> postmaster immediately after pg_upgrade finishes. Otherwise there might\n> >>>>> be buffers in shared memory that do not reflect the actual contents of\n> >>>>> the corresponding pages of the relation files (now that pg_upgrade\n> >>>>> overwrote the files with other data).\n> >> \n> >>>> Your issue with buffer cache is a major one. Clearly, this would be a\n> >>>> problem. However, it is my understanding that the buffer cache after\n> >>>> initdb would only contain system table info, so if they pg_upgrade after\n> >>>> that, there is no way they have bad stuff in the cache, right?\n> >> \n> >> Cached copies of system tables obviously are no problem, since\n> >> pg_upgrade doesn't overwrite those. I'm concerned whether there can\n> >> be cached copies of pages from user tables or indexes. Since we've\n> >> just done a bunch of CREATE INDEXes (and a VACUUM, if my latest hack\n> >> is right), it seems at least possible that this would happen.\n> >> \n> >> Now all those user tables will be empty (zero-length files), so there is\n> >> nothing to cache. But the user indexes are *not* zero-length --- it looks\n> >> like they are at least 2 pages long even when empty. 
So there seems\n> >> to be a real risk of having a cached copy of one of the pages of a user\n> >> index while pg_upgrade is overwriting the index file with new data...\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 Sep 1999 09:11:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_upgrade may be mortally wounded" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Tom, did we address this. I forgot.\n> \n> No, it's still an open issue as far as I'm concerned. I was hoping to\n> hear something from Vadim about how pg_upgrade could work safely under\n> MVCC...\n> \n\nI don't think there is going to be any way to fix the incorrect\npostmaster buffers without restarting the postmaster, so I have added\nthis to the bottom of pg_upgrade:\n\n\techo \"You must stop/start the postmaster before doing anything else.\"\n\nand have re-organized the instructions to tell them to stop/start the\npostmaster right after running pg_upgrade.\n\nAs it is, 6.5.* upgrades can not use it, and 6.6 can not use it either\nbecause base structures will change. It does allow 6.5 people moving to\nother 6.5 releases to use initdb to get new features.\n\nLet's close this item.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 Sep 1999 12:09:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_upgrade may be mortally wounded" }, { "msg_contents": "\nNew instructions now say that you must stop/restart postmaster after\nupgrade. 
That should fix the problem because all index buffers are not\ndirty, so stop/start just clears out buffers.\n\n\n> Bruce Momjian <[email protected]> writes:\n> >> BTW, it seems to me that it is a good idea to kill and restart the\n> >> postmaster immediately after pg_upgrade finishes. Otherwise there might\n> >> be buffers in shared memory that do not reflect the actual contents of\n> >> the corresponding pages of the relation files (now that pg_upgrade\n> >> overwrote the files with other data).\n> \n> > Your issue with buffer cache is a major one. Clearly, this would be a\n> > problem. However, it is my understanding that the buffer cache after\n> > initdb would only contain system table info, so if they pg_upgrade after\n> > that, there is no way they have bad stuff in the cache, right?\n> \n> Cached copies of system tables obviously are no problem, since\n> pg_upgrade doesn't overwrite those. I'm concerned whether there can\n> be cached copies of pages from user tables or indexes. Since we've\n> just done a bunch of CREATE INDEXes (and a VACUUM, if my latest hack\n> is right), it seems at least possible that this would happen.\n> \n> Now all those user tables will be empty (zero-length files), so there is\n> nothing to cache. But the user indexes are *not* zero-length --- it looks\n> like they are at least 2 pages long even when empty. So there seems\n> to be a real risk of having a cached copy of one of the pages of a user\n> index while pg_upgrade is overwriting the index file with new data...\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 17:57:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_upgrade may be mortally wounded" }
]
[ { "msg_contents": "I have just committed changes that require a recompile and initdb\nwhen next you pull the current CVS sources. Be warned.\n\nThe changes revise the contents of the pg_statistic system table\nin order to support more accurate selectivity estimation, as per\ndiscussions a few days ago.\n\nI had hoped that pg_upgrade would be sufficient to deal with this,\nbut it seems to be out of service at the moment...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 01 Aug 1999 01:00:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "initdb needed for newest sources" }, { "msg_contents": "> The changes revise the contents of the pg_statistic system table\n> in order to support more accurate selectivity estimation, as per\n> discussions a few days ago.\n\nHey Tom, I'm not sure I was all that enthused about trying to optimize\nthe selectivity for *any* particular strategy. How about also allowing\nthe value(s) used for selectivity estimates to be manually set in the\ntable so folks can tune things to their satisfaction? Maybe they can\nalready be set, in which case we could add a SET OPTIMIZATION command\n(the name is negotiable) to make it easier?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 01 Aug 1999 21:25:14 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] initdb needed for newest sources" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> The changes revise the contents of the pg_statistic system table\n>> in order to support more accurate selectivity estimation, as per\n>> discussions a few days ago.\n\n> Hey Tom, I'm not sure I was all that enthused about trying to optimize\n> the selectivity for *any* particular strategy. 
How about also allowing\n> the value(s) used for selectivity estimates to be manually set in the\n> table so folks can tune things to their satisfaction?\n\nWell, I'm more interested in putting my effort into making the system\ndo the right thing without help. Manual overrides are OK as long as\nyou remember to revisit the settings whenever anything changes ...\notherwise your manual optimization can become manual pessimization ...\n\nBut if you want to spend time on the manual approach, I have no\nobjection. There's room for everyone to play.\n\n> Maybe they can already be set, in which case we could add a SET\n> OPTIMIZATION command (the name is negotiable) to make it easier?\n\nThere's no manual inputs presently, except for some rather crude\ncontrol variables (_enable_mergejoin_ and so on --- see\nbackend/optimizer/path/costsize.c). There are a mixture of SET\ncommands and backend command-line switches (ugh) to set these,\nand some don't have any tweak method short of source code changes\nor going in with a debugger. This area could use some cleanup\nand rethinking, for sure.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Aug 1999 00:01:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] initdb needed for newest sources " }, { "msg_contents": "Another thing I found out during installation is that libpq.so is hard to\nfind. Setting LD_LIBRARY_PATH works, but I hate depending on environment\nsettings, mostly because I always screw them up sooner or later.\n\nOn searching, I found that you can specify -R/usr/local/pgsql/lib during\nlinking in gcc and in cc on Solaris -- is there some reason that would be\nbad to do in general? I tried forcing it on psql and it seems to work\nfine, and I don't need to set LD_LIBRARY_PATH any more.\n\nI'm talking about Solaris 2.5 here, by the way.\n\n", "msg_date": "Sun, 1 Aug 1999 23:33:54 -0500 (EST)", "msg_from": "\"J. 
Michael Roberts\" <[email protected]>", "msg_from_op": false, "msg_subject": "[HACKERS] A suggestion on finding those pesky .so files" }, { "msg_contents": "\"J. Michael Roberts\" <[email protected]> writes:\n> On searching, I found that you can specify -R/usr/local/pgsql/lib during\n> linking in gcc and in cc on Solaris -- is there some reason that would be\n> bad to do in general?\n\nOnly that it's platform-specific. We do already do the equivalent\nincantation for HPUX --- they pronounce it differently, of course,\nbut it's the same idea --- and it seems to work well. If you want\nto submit a patch to make it happen on Solaris, go for it.\n(The HPUX tweak is in makefiles/Makefile.hpux, so I suppose you'd\nwant to hack on Makefile.solaris_*.)\n\nUltimately I think we want to get out of the business of learning\nall these grotty details about shared libraries on different flavors\nof Unix, and instead rely on GNU libtool to do it for us. But\nconverting to libtool takes work too :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Aug 1999 01:23:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] A suggestion on finding those pesky .so files " }, { "msg_contents": "Hi,\n\nI know of course that the pgsql optimizer is never wrong\nbut for the sake of argument do you think we could have\noptimizer hints the way say MSSQL has them?\n\nOh, I'll work on the rest of my wish list later. 
;-)\n\nRegards\n\nJohn Ridout.\n", "msg_date": "Mon, 2 Aug 1999 09:59:16 +0100", "msg_from": "\"John Ridout\" <[email protected]>", "msg_from_op": false, "msg_subject": "Optimizer hints" }, { "msg_contents": "John Ridout wrote:\n> \n> Hi,\n> \n> I know of course that the pgsql optimizer is never wrong\n\n:) \n\nOTOH, it is never completely right either :)\n\n> but for the sake of argument do you think we could have\n> optimizer hints the way say MSSQL has them?\n\nI think it has been discussed (a little) but more on the \nlines of 'the way Oracle has them' ;)\n\nIt seems that this is not currently very high priority for \nany of the current real developers, but if you would \ncontribute a nice clean implementation I'm sure it would \nbe accepted.\n\n----------------\nHannu\n", "msg_date": "Mon, 02 Aug 1999 15:53:18 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer hints" }, { "msg_contents": "> Hannu Krosing wrote:\n> \n> John Ridout wrote:\n> > \n> > Hi,\n> > \n> > I know of course that the pgsql optimizer is never wrong\n> \n> :) \n> \n> OTOH, it is never completely right either :)\n> \n> > but for the sake of argument do you think we could have\n> > optimizer hints the way say MSSQL has them?\n> \n> I think it has been discussed (a little) but more on the \n> lines of 'the way Oracle has them' ;)\n> \n\nOh, you mean properly.\n\n> It seems that this is not currently very high priority for \n> any of the current real developers, but if you would \n> contribute a nice clean implementation I'm sure it would \n> be accepted.\n> \n> ----------------\n> Hannu\n> \n\nCome the end of September I will have enough time\nto play with the internals of pgsql.\nI suppose there are more useful things that\ncan be done before optimizer hints such as nearly\neverything on the TODO list.\n\nJohn.\n", "msg_date": "Mon, 2 Aug 1999 13:56:43 +0100", "msg_from": "\"John Ridout\" <[email protected]>", "msg_from_op": 
false, "msg_subject": "RE: [HACKERS] Optimizer hints" }, { "msg_contents": "> Come the end of September I will have enough time\n> to play with the internals of pgsql.\n> I suppose there are more useful things that\n> can be done before optimizer hints such as nearly\n> everything on the TODO list.\n\nYes, that is basically the issue.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 2 Aug 1999 10:28:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer hints" }, { "msg_contents": "On Mon, Aug 02, 1999 at 12:01:03AM -0400, Tom Lane wrote:\n> \n> Well, I'm more interested in putting my effort into making the system\n> do the right thing without help. Manual overrides are OK as long as\n> you remember to revisit the settings whenever anything changes ...\n> otherwise your manual optimization can become manual pessimization ...\n> \n\nHey Tom - \nRan across this paper, about an interesting approach, pulling in the indices\nto aid in selectivity estimates.\n\nhttp://db.cs.berkeley.edu/papers/CSD-98-1021.pdf\n\nI grabbed this from a link at:\n\nhttp://db.cs.berkeley.edu/papers/\n\nwhile looking at the Mariposa work ( http://mariposa.cs.berkeley.edu)\nfrom the Sequoia2000 project. I've convinced my team to let me spend\na couple days analyzing what it would take to fold the remote access\nfeatures of Mariposa into the current PostgreSQL tree. I grabbed the\nalpha-1 code (June 23, 1996) which seems to be based on an early version\nof Postgres95. Interesting to see all the academic cruft you guys have\nalready cleaned out ;-)\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. 
Main St., Houston, TX 77005\n", "msg_date": "Mon, 2 Aug 1999 15:26:41 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Selectivity estimates paper, and Mariposa" }, { "msg_contents": "> On Mon, Aug 02, 1999 at 12:01:03AM -0400, Tom Lane wrote:\n> > \n> > Well, I'm more interested in putting my effort into making the system\n> > do the right thing without help. Manual overrides are OK as long as\n> > you remember to revisit the settings whenever anything changes ...\n> > otherwise your manual optimization can become manual pessimization ...\n> > \n> \n> Hey Tom - \n> Ran across this paper, about an interesting approach, pulling in the indices\n> to aid in selectivity estimates.\n> \n> http://db.cs.berkeley.edu/papers/CSD-98-1021.pdf\n> \n> I grabbed this from a link at:\n> \n> http://db.cs.berkeley.edu/papers/\n> \n> while looking at the Mariposa work ( http://mariposa.cs.berkeley.edu)\n> from the Sequoia2000 project. I've convinced my team to let me spend\n> a couple days analyzing what it would take to fold the remote access\n> features of Mariposa into the current PostgreSQL tree. I grabbed the\n> alpha-1 code (June 23, 1996) which seems to be based on an early version\n> of Postgres95. Interesting to see all the academic cruft you guys have\n> already cleaned out ;-)\n\nWe still have a directory called tioga which is also related to\nMariposa. Basically, at the time, no one understood the academic stuff,\nand we had tons of bugs in general areas. We just didn't see any reason\nto keep around unusual features while our existing code was so poorly\nmaintained from Berkeley.\n\nThe mariposa remote access features looked like they were heavily done\nin the executor directory. This makes sense assuming they wanted the\naccess to be done remotely. They also tried to fix some things while\ndoing Mariposa. 
A few of those fixes have been added over the years.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 2 Aug 1999 16:44:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Selectivity estimates paper, and Mariposa" }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> Ran across this paper, about an interesting approach, pulling in the indices\n> to aid in selectivity estimates.\n> http://db.cs.berkeley.edu/papers/CSD-98-1021.pdf\n\nLooks pretty interesting, but also vastly more complex than I want to\ntackle at the moment.\n\nAs of 6.5 the selectivity code is broken for everything except integers.\nWhat I'm trying to do for this release cycle is to get it to operate as\nwell as can be expected given the existing design (in which available\nstatistics are not much more than min/max/mode values for each column;\nstrictly speaking the mode was not in the existing design, but given\nthat VACUUM was computing it anyway, we might as well use it).\n\nA lot more *could* be done, as this paper suggests; but there are also\nmany other important tasks, and only so many hours in the day. I doubt\nthat building an entirely new selectivity estimation infrastructure is\nworthwhile until we have cured some more problems elsewhere :-(\n\n> while looking at the Mariposa work ( http://mariposa.cs.berkeley.edu)\n> from the Sequoia2000 project. 
I've convinced my team to let me spend\n> a couple days analyzing what it would take to fold the remote access\n> features of Mariposa into the current PostgreSQL tree.\n\nLet us know what you find out...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Aug 1999 17:08:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Selectivity estimates paper, and Mariposa " }, { "msg_contents": "On Mon, Aug 02, 1999 at 04:44:10PM -0400, Bruce Momjian wrote:\n> \n> We still have a directory called tioga which is also related to\n> Mariposa. Basically, at the time, no one understood the academic stuff,\n> and we had tons of bugs in general areas. We just didn't see any reason\n> to keep around unusual features while our existing code was so poorly\n> maintained from Berkeley.\n\nThe right thing to do, I concur. Get the basics stable and working well,\n_then_ tack on the interesting stuff :-) A common complaint about us\nacademics: we only want to do the interesting stuff.\n\n> \n> The mariposa remote access features looked like they were heavily done\n> in the executor directory. This makes sense assuming they wanted the\n> access to be done remotely. They also tried to fix some things while\n> doing Mariposa. A few of those fixes have been added over the years.\n> \n\nRight. As I've been able to make out so far, in Mariposa a query passes\nthrough the regular parser and single-site optimizer, then the selected\nplan tree is handed to a 'fragmenter' to break the work up into chunks,\nwhich are then handed around to a 'broker' which uses a microeconomic\n'bid' process to parcel them out to both local and remote executors. The\nresults from each site then go through a local 'coordinator' which merges\nthe result sets, and hands them back to the original client.\n\nWhew!\n\nIt's interesting to compare the theory describing the workings of Mariposa\n(such as the paper in VLDB), and the code. 
For the fragmenter, the paper\ndescribes basically a rational decomposition of the plan, while the code\napplies non-deterministic, but tuneable, methods (lots of calls to random\nand comparisons to user-specified odds ratios).\n\nIt strikes me as a bit odd to optimize the plan for a single site,\nthen break it all apart again. My thoughts on this are to implement\ntwo new node types: one a remote table, and one which represents\naccess to a remote table. Remote tables have host info in them, and\nwould always be added to the plan with a remote-access node directly above\nthem. Remote-access nodes would be separate from their remote-table,\nto allow the communications cost to be slid up the plan tree, and merged\nwith other remote-access nodes talking to the same server. This should\nmaintain the order-agnostic nature of the optimizer. The executor will\nneed to build SQL statements from the sub-plans and submit them via\nstandard network db access client libraries.\n\nFirst step, create a remote-table node, and teach the executor how to get\ninfo from it. Later, add the separable remote-access node.\n\nHow insane does this sound now? Am I still a mad scientist? (...always!)\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Mon, 2 Aug 1999 17:23:54 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mariposa" }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> It strikes me as a bit odd to optimize the plan for a single site,\n> then break it all apart again.\n\nYes, that sounds pretty peculiar, especially considering that the\noptimizer's choices are all about access costs. 
A plan generated\nfor entirely-local execution might be far from optimal when broken\nacross multiple nodes.\n\n> My thoughts on this are to implement two new node types: one a\n> remote table, and one which represents access to a remote\n> table. Remote tables have host info in them, and would always be added to\n> the plan with a remote-access node directly above them. Remote-access\n> nodes would be separate from their remote-table, to allow the\n> communications cost to be slid up the plan tree, and merged with other\n> remote-access nodes talking to the same server.\n\nI like that approach a lot better. If the access cost estimates for the\nshared-table node can be set to reflect remote communication costs,\nyou might actually get reasonable plans out of the optimizer...\n\nYou should not move too far with an actual implementation until you talk\nto Jan about rangetable entries for sub-selects. If we are going to\ninvent new RTE types we may as well try to deal with that problem\nat the same time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Aug 1999 18:51:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Mariposa " }, { "msg_contents": "> It strikes me as a bit odd to optimize the plan for a single site,\n> then break it all apart again. My thoughts on this are to implement\n> two new node types: one a remote table, and one which represents\n> access to a remote table. Remote tables have host info in them, and\n> would always be added to the plan with a remote-access node directly above\n> them. Remote-access nodes would be separate from their remote-table,\n> to allow the communications cost to be slid up the plan tree, and merged\n> with other remote-access nodes talking to the same server. This should\n> maintain the order-agnostic nature of the optimizer. 
The executor will\n> need to build SQL statements from the sub-plans and submit them via\n> standard network db access client libraries.\n> \n> First step, create a remote-table node, and teach the executor how to get\n> info from it. Later, add the separable remote-access node.\n> \n> How insane does this sound now? Am I still a mad scientist? (...always!)\n\nSounds interesting, and doable. People have asked from time to time\nabout this. Our access routines are very modular, so if you can get\nyour stuff working inside the tuple access routines, you will have it\nmade. The easiest way may be to just hack up the storage manager\n(smgr). Create a new access method, and hook your remote stuff to that.\n\nYou could try something easy like common backend NFS with some locking\nprotocol to prevent contention. In fact, NFS would be transparent\nexcept for locking issues.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 2 Aug 1999 22:25:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mariposa" }, { "msg_contents": "On Mon, Aug 02, 1999 at 10:25:15PM -0400, Bruce Momjian wrote:\n> \n> Sounds interesting, and doable. People have asked from time to time\n> about this. Our access routines are very modular, so if you can get\n> your stuff working inside the tuple access routines, you will have it\n> made. The easiest way may be to just hack up the storage manager\n> (smgr). 
Create a new access method, and hook your remote stuff to that.\n> \n\nI considered two quick and dirty proof-of-concept implementations first:\nhacking the smgr, and upgrading functions to allow them to return sets,\nthen building views with an ON SELECT rule that fired an arbitrary db\naccess routine.\n\nOne advantage of going the smgr route is I automagically get all the\nbuffer and relation caching that's built-in. A negative is that it\nbasically just gives me a remote table: every query's going to pull\nthe whole table, or I'm going to have to write somewhat hairy access\nmethods, I think. I think it's a little _too_ low level. I want to be\nable to have the backend DB do as much work as possible, minimizing the\nexpensive network transfer.\n\nOh, and BTW, while the smgr infrastructure is still in place, and all\naccesses vector through it, the relsmgr field has been removed from\nthe pg_class table, and all calls to smgropen() and friends use the\nDEFAULT_SMGR #define.\n\nThe second, having functions return sets, is still a possibility, but it\nsuffers from the same lack of extensibility - I only get complete tables\nback. Even worse, I don't get the buffer caches for free. If rules \n_currently_ could return sets, it'd be a 1-2 day hack to get something\nworking.\n\nSo, I think I'll see what Jan has to say about subselect RTEs, as Tom\nsuggested. Seems the way to go for future work.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Mon, 2 Aug 1999 22:37:45 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mariposa" }, { "msg_contents": "> One advantage of going the smgr route is I automagically get all the\n> buffer and relation caching that's built-in. 
A negative is that it\n> basically just gives me a remote table: every query's going to pull\n> the whole table, or I'm going to have to write somewhat hairy access\n> methods, I think. I think it's a little _too_ low level. I want to be\n> able to have the backend DB do as much work as possible, minimizing the\n> expensive network transfer.\n> \n> Oh, and BTW, while the smgr infrastructure is still in place, and all\n> accesses vector through it, the relsmgr field has been removed from\n> the pg_class table, and all calls to smgropen() and friends use the\n> DEFAULT_SMGR #define.\n\nThat was me. The only other storage manager we had was the stable\nmemory storage manager, and that was just too confusing to the\ndevelopers. It can be easily added back by just changing the\nDEFAULT_SMGR define to access the relation tuple. In fact, our code\nuses the storage manager much more consistently now because to enable\nmulti-segment tables (>2 gigs for non-supporting OS's), we had to clean\nthat up.\n\n\n> The second, having functions return sets, is still a possibility, but it\n> suffers from the same lack of extensibility - I only get complete tables\n> back. Even worse, I don't get the buffer caches for free. If rules \n> _currently_ could return sets, it's be a 1-2 day hack to get something\n> working.\n> \n> So, I think I'll see what Jan has to say about subselect RTEs, as Tom\n> suggested. Seems the way to go for future work.\n\nYes, I think he needs that for foreign keys, and I think he wants to do\nthat for 6.6.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 2 Aug 1999 23:43:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mariposa" } ]
[ { "msg_contents": "[postgresql lists added to Cc in hope of elucidation]\n\nAdam Di Carlo wrote:\n >\n >[Background: PostgreSQL is causing extremely hard crashes on my Sun4u\n >(Ultra5) Debian SPARC system. Anyone should be able to reproduce this\n >by installing the postgresql-test environment, and running:\n >\n > # cd /usr/lib/postgresql/test/regress\n > # chown -R postgres .\n > # su - postgres\n > $ cd /usr/lib/postgresql/test/regress\n > $ make runtest\n >\n >BEWARE -- this hard crashes my system. You may crash hard; you may\n >lose data.\n >\n >Note: I am running a mostly up-to-date 2.2.9 kernel (stock image from\n >potato) with the newest postgresql package (6.5.1-3 I believe).\n >]\n >\n >>That is very nasty -- and unexpected; I would like to report whatever\n >>information is available to [email protected]. However, they\n >>will need to know exactly what was going on - logfile output, if available,\n >>progress through the test, test output file, if it survived. It doesn't\n >>seem at all like the problem that I thought I was asking you to look at.\n >>We should investigate whether there is some entirely separate cause.\n >\n >Yes. On followup, I am getting intermittant hard crashes when running\n >regress.sh or doing any operation with postgresql. Obviously, this is\n >more on the level of a sparc64 kernel problem, even, than a purely\n >postgres problem -- after all, no user process should be able to take\n >out the system this way.\n\nI regret that I have no experience with kernel debugging.\n\n >\n >My most recent crash has this output to 'make runtest':\n >\n >path .. ok\n >polygon .. ok\n >circle .. ok\n >geometry .. 
failed\n >timespan ..\n >\n >And in the postgres.log, with debugging at 4:\n >\n >plan:\n >\n >{ SEQSCAN :cost 43 :size 334 :width 16 :state <> :qptargetlist\n >({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 705 :restypmod -1\n >:resname \"one\" :reskey 0 :reskeyop 0 :resgroupref 0 :resjunk false }\n >:expr { CONST :consttype 705 :constlen -1 :constisnull false\n >:constvalue 4 [ 0 0 0 4 ] :constbyval false }} { TARGETENTRY\n >:resdom { RESDOM :resno 2 :restype 600 :restypmod -1 :resname \"f1\"\n >:reskey 0 :reskeyop 0 :resgroupref 0 :resjunk false } :expr { VAR\n >:varno 1 :varattno 1 :vartype 600 :vartypmod -1 :varlevelsup 0\n >:varnoold 1 :varoattno 1}}) :qpqual ({ EXPR :typeOid 16 :opType func\n >:oper { FUNC :funcid 1532 :functype 16 :funcisindex false :funcsize 0\n >:func_fcache @ 0x0 :func_tlist ({ TARGETENTRY :resdom { RESDOM :resno\n >1 :restype 16 :restypmod -1 :resname \"<noname>\" :reskey 0 :reskeyop 0\n >:resgroupref 0 :resjunk false } :expr { VAR :varno -1 :varattno 1\n >:vartype 16 :vartypmod -1 :varlevelsup 0 :varnoold -1 :varoattno 1}})\n >:func_planlist <>} :args ({ VAR :varno 1 :varattno 1 :vartype 600\n >:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 1} { CONST\n >:consttype 600 :constlen 16 :constisnull false :constvalue 16 [ 64\n >20 102 102 102 102 102 102 64 65 64 0 0 0 0 0 ]\n >:constbyval false })}) :lefttree <> :righttree <> :extprm () :locprm\n >() :initplan <> :nprm 0 :scanrelid 1 }\n >\n >ProcessQuery\n >CommitTransactionCommand\n >StartTransactionCommand\n >query: SELECT '' AS one, p1.f1\n > FROM POINT_TBL p1\n > WHERE p1.f1 ?| '(5.1,34.5)'::point;\n >parser outputs:\n >\n >{ QUERY :command 1 :utility <> :resultRelation 0 :into <> :isPortal\n >false :isBinary false :isTemp false :unionall false :unique <>\n >:sortClause <> :rtable ({ RTE :relname point_tbl :refname p1 :relid\n >20864 :inh false :inFromCl true :skipAcl false}) :targetlist\n >({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 705 :restypmod -1\n >:resname \"one\" 
:reskey 0 :reskeyop 0 :resgroupref 0 :resjunk false }\n >:expr { CONST :consttype 705 :constlen -1 :constisnull false\n >:constvalue 4 [ 0 0 0 4 ] :constbyval false }} { TARGETENTRY\n >:resdom { RESDOM :resno 2 :restype 600 :restypmod -1\n >:resname \"f1\" :reskey 0 :reskeyop 0 :resgroupref 0 :resjunk false }\n >:expr { VAR :varno 1 :varattno 1 :vartype 600 :vartypmod -1\n >:varlevelsup 0 :varnoold 1 :varoattno 1}}) :qual { EXPR :typeOid 16\n >:opType op :oper { OPER :opno 809\n >:opid 0 :opresulttype 16 } :args ({ VAR :varno 1 :varattno 1 :vartype\n >\n >------\n >\n >Output just stops there, with a hard crash to the system.\n\nnot even a kernel oops output?\n\n >\n >--\n >.....Adam Di [email protected].....<URL:http://www.onShore.com/>\n\nCan postgresql developers tell from this what routine we are in when the\ncrash occurs? I suppose that log output is buffered; where can we turn\noff buffering so that all possible output is saved to disk before the\ncrash?\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"And why call ye me, Lord, Lord, and do not the things \n which I say?\" Luke 6:46 \n\n\n", "msg_date": "Sun, 01 Aug 1999 07:35:00 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug#41277: postgresql 6.5.1-3 + sparc (sun4u) == nasty nasty\n\tcrashes" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n>> Yes. On followup, I am getting intermittant hard crashes when running\n>> regress.sh or doing any operation with postgresql. 
Obviously, this is\n>> more on the level of a sparc64 kernel problem, even, than a purely\n>> postgres problem -- after all, no user process should be able to take\n>> out the system this way.\n\nYipes...\n\n> Can postgresql developers tell from this what routine we are in when the\n> crash occurs? I suppose that log output is buffered; where can we turn\n> off buffering so that all possible output is saved to disk before the\n> crash?\n\nThe log is not nearly detailed enough to tell what routine we're in,\neven if there weren't the buffering problem. Also, given that this is\na kernel crash, I'm not sure I'd assume that even fsync() after every\nline of output would ensure that the last line made it to disk.\n\nWhat you really want is a truss or strace log of kernel calls, anyhow,\nbut there's still the problem of getting it out to disk before the\ncrash. Better find a kernel-debugging expert to ask for advice...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 01 Aug 1999 11:27:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Bug#41277: postgresql 6.5.1-3 + sparc (sun4u) ==\n\tnasty nasty crashes" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n> What you really want is a truss or strace log of kernel calls, anyhow,\n> but there's still the problem of getting it out to disk before the\n> crash. Better find a kernel-debugging expert to ask for advice...\n\nSerial terminal, or printer or some such hooked up to a serial port.\n\nMike.\n", "msg_date": "01 Aug 1999 12:57:34 -0400", "msg_from": "Michael Alan Dorman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Bug#41277: postgresql 6.5.1-3 + sparc (sun4u) ==\n\tnasty nasty crashes" }, { "msg_contents": "\n>> Can postgresql developers tell from this what routine we are in when the\n>> crash occurs? 
I suppose that log output is buffered; where can we turn\n>> off buffering so that all possible output is saved to disk before the\n>> crash?\n>\n>The log is not nearly detailed enough to tell what routine we're in,\n>even if there weren't the buffering problem. Also, given that this is\n>a kernel crash, I'm not sure I'd assume that even fsync() after every\n>line of output would ensure that the last line made it to disk.\n>\n>What you really want is a truss or strace log of kernel calls, anyhow,\n>but there's still the problem of getting it out to disk before the\n>crash. Better find a kernel-debugging expert to ask for advice...\n\nHopefully someone from the sparc or sparc64 team at Debian can look\ninto this. I am going on business travel for 4 days so will be away\nfrom any Debian/SPARC machines for a while.\n\nThese are the questions which need to be answered:\n\n * are other people running debian sparc finding the problem, using the\nrecipe I mentioned in a previous email?\n\n * Is it 2.2.9 specific? Sun4u specific?\n\n * get strace output as Tom suggests\n\n * shouldn't we notify the Sparc/Linux folks?\n\n--\n.....Adam Di [email protected].....<URL:http://www.onShore.com/>\n\n", "msg_date": "Mon, 02 Aug 1999 02:10:47 -0400", "msg_from": "Adam Di Carlo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Bug#41277: postgresql 6.5.1-3 + sparc (sun4u) ==\n\tnasty nasty crashes" } ]
[ { "msg_contents": "Hi!\n\n For the first question, I remember hackers had decided not to implement\nit - it is more important to send crypted passwords over the wires, right?\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n---------- Forwarded message ----------\nDate: Sun, 1 Aug 1999 15:40:03 +0400 (MSD)\nFrom: Dmitry Morozovsky <[email protected]>\nTo: [email protected]\nSubject: [GENERAL] database access authentication: crypted passwords\n\nHello there.\n\nAre there any workable solutions to use crypt()ed passwords in pg_shadow?\n\nAnd, is there a plan to differentiate the postgres super-user from other\nsuperusers, so that they, e.g., cannot change other superusers' records in\npg_shadow (only their own and non-superuser records)?\n\nThanx in advance.\n\n\nSincerely,\nD.Marck [DM5020, DM268-RIPE, DM3-RIPN]\n------------------------------------------------------------------------\n*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- [email protected] ***\n------------------------------------------------------------------------\n\n\n\n\n", "msg_date": "Mon, 2 Aug 1999 16:37:32 +0400 (MSD)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "[GENERAL] database access authentication: crypted passwords" }, { "msg_contents": "From: Oleg Broytmann <[email protected]>\n> For the first question, I remember hackers had decided not to implement\n> it - it is more important to send crypted passwords over the wires, right?\n\nNo. You can do both. 
That decision was based on incomplete knowledge.\n\nGene Sokolov.\n\n\n\n", "msg_date": "Mon, 2 Aug 1999 17:17:37 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] database access authentication: crypted\n\tpasswords" }, { "msg_contents": "On Mon, 2 Aug 1999, Gene Sokolov wrote:\n> > For the first question, I remember hackers had decided not to implement\n> > it - it is more important to send crypted passwords over the wires, right?\n> \n> No. You can do both. That decision was based on incomplete knowledge.\n\n Aha, I haven't followed that thread carefully. Well, how can I do one\nthing or the other?\n\n> Gene Sokolov.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Mon, 2 Aug 1999 17:37:20 +0400 (MSD)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] [GENERAL] database access authentication: crypted\n\tpasswords" } ]
[ { "msg_contents": "Hi!\n\n We've discussed the issue and came to a conclusion - Postgres needs to\nimplement more granular rights. Currently a user, after connecting to a DB, can\ndo almost anything - at least she can create her own tables and do whatever\nshe wants.\n\n I want to compare the situation to MySQL. In this field MySQL's design is\nbetter than ours - in MySQL a database admin can grant and revoke a lot of\nrights for a given DB - who can connect and from which host, who can CREATE\nTABLEs, INSERT, UPDATE, DELETE, DROP and all that...\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n---------- Forwarded message ----------\nDate: Sun, 1 Aug 1999 19:23:20 +0400 (MSD)\nFrom: Dmitry Morozovsky <[email protected]>\nTo: [email protected]\nSubject: [GENERAL] read-only databases\n\nHello there again.\n\nIs there a way to make database 'db' for user 'john' read-only? I mean, is\nthere a way to disable 'create table/index/etc' SQL statements? I can make\nread-only restrictions on existing tables, but -- what about creating new\nones?\n\nPgSQL 6.5, surely. Thanx in advance.\n\nSincerely,\nD.Marck [DM5020, DM268-RIPE, DM3-RIPN]\n------------------------------------------------------------------------\n*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- [email protected] ***\n------------------------------------------------------------------------\n\n\n\n", "msg_date": "Mon, 2 Aug 1999 16:42:03 +0400 (MSD)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "[GENERAL] read-only databases" } ]
[ { "msg_contents": "We have been having some trouble with our C programs that access the SQL\ndatabase since upgrading from PostgreSQL 6.4.2 to 6.5.1. It looks as\nthough the header size for the binary cursor has changed from 20 bytes\nto 16 bytes. Can anyone confirm this?\n\n-Tony\n\n\n", "msg_date": "Mon, 02 Aug 1999 06:25:37 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Binary cursor header changed from 20 to 16 Bytes?" }, { "msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> We have been having some trouble with our C programs that access the SQL\n> database since upgrading from PostgreSQL 6.4.2 to 6.5.1. It looks as\n> though the header size for the binary cursor has changed from 20 bytes\n> to 16 bytes. Can anyone confirm this?\n\nNo ... I'm pretty sure that nothing has changed in the FE/BE protocol\nsince 6.4. What do you mean by \"header size for the binary cursor\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Aug 1999 10:15:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Binary cursor header changed from 20 to 16 Bytes? " } ]
[ { "msg_contents": "Tom Lane wrote:\n\n> There is *no* header overhead for binary data as far as libpq or the\n> FE/BE protocol is concerned; what you get from PQgetvalue() is just\n> a pointer to whatever the backend's internal representation of the\n> data type is. It's certainly possible for particular data types to\n> change representation from time to time, though I didn't recall anyone\n> planning such a thing for 6.5. What data type is the column you're\n> retrieving, anyway? (I'm guessing float4 array, perhaps?) What kind\n> of platform is the backend running on?\n>\n> regards, tom lane\n\nRight on the money. The column being retrieved is a float4 array. I am running\nthe backend on a Red Hat Linux 6.0 machine (Pentium II / 400 MHz / 512 Meg RAM /\n128 Meg Shared buffers). The clients are all SGI machines (O2, Impact, and\nIndy).\n\n-Tony\n\n\n", "msg_date": "Mon, 02 Aug 1999 08:40:23 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Binary cursor header changed from 20 to 16 Bytes?" }, { "msg_contents": "> Tom Lane wrote:\n> \n> > There is *no* header overhead for binary data as far as libpq or the\n> > FE/BE protocol is concerned; what you get from PQgetvalue() is just\n> > a pointer to whatever the backend's internal representation of the\n> > data type is. It's certainly possible for particular data types to\n> > change representation from time to time, though I didn't recall anyone\n> > planning such a thing for 6.5. What data type is the column you're\n> > retrieving, anyway? (I'm guessing float4 array, perhaps?) What kind\n> > of platform is the backend running on?\n> >\n> > regards, tom lane\n> \n> Right on the money. The column being retrieved is a float4 array. I am running\n> the backend on a Red Hat Linux 6.0 machine (Pentium II / 400 MHz / 512 Meg RAM /\n> 128 Meg Shared buffers). 
The clients are all SGI machines (O2, Impact, and\n> Indy).\n> \n\nI don't think you can do binary cursors across architectures. The\ninternal formats for most types are different, though you may be able to\nget away with string fields and int if the endian is the same.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 2 Aug 1999 12:24:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Binary cursor header changed from 20 to 16 Bytes?" }, { "msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> Tom Lane wrote:\n>> There is *no* header overhead for binary data as far as libpq or the\n>> FE/BE protocol is concerned; what you get from PQgetvalue() is just\n>> a pointer to whatever the backend's internal representation of the\n>> data type is. It's certainly possible for particular data types to\n>> change representation from time to time, though I didn't recall anyone\n>> planning such a thing for 6.5. What data type is the column you're\n>> retrieving, anyway? (I'm guessing float4 array, perhaps?) What kind\n>> of platform is the backend running on?\n>> \n>> regards, tom lane\n\n> Right on the money. The column being retrieved is a float4 array. I am\n> running the backend on a Red Hat Linux 6.0 machine (Pentium II / 400\n> MHz / 512 Meg RAM / 128 Meg Shared buffers). The clients are all SGI\n> machines (O2, Impact, and Indy).\n\nOK, I think I see what is going on here. 
If you look at the\ndeclarations for arrays in src/include/utils/array.h, the overhead for\na one-dimensional array is 5 * sizeof(int) (total object size, #dims,\nflags word, lo bound, hi bound) rounded up to the next MAXALIGN()\nboundary to ensure that the array element data is aligned safely.\n\nI was not quite right about the libpq presentation of a binary cursor\nbeing the same as the backend's internal representation. Actually,\nit seems that the length word is stripped off --- all variable-size\ndatatypes are required to start with a word that is the total object\nsize, and what you get from libpq is a pointer to the word after that.\n\nSo, what you should have been measuring was 4*sizeof(int) plus\nthe array alignment padding, if any.\n\n6.4 had some hardwired assumptions about compiler alignment behavior,\nwhich were giving us grief on platforms that didn't conform, so as of\n6.5 we determine the actual alignment properties of the\ncompiler/platform during configure. The old code used to be aligning\nthe array overhead to a multiple of 8 whether that was appropriate or\nnot, but the new code is only aligning to the largest alignment multiple\nactually observed on the target platform. Evidently that's just 4 on\nyour system.\n\nBottom line: yes, the change from 20 to 16 is likely to persist.\n\nI think there is actually a bug in the way libpq is doing this, because\nit is allocating space for the stored varlena object minus the total-\nsize word. This means that any internal alignment assumptions are *not*\nbeing respected --- for example, in a machine that does need MAXALIGN\nof 8, the client-side representation of an array object will fail to\nhave its array members aligned at a multiple-of-8 address. libpq ought\nto allocate space for and store the whole varlena object including\nlength word, the same as it is in the backend, so that internal fields\nof the varlena will have the same alignment as in the backend. 
Will put\nthis on my todo list.\n\n> The clients are all SGI machines (O2, Impact, and Indy).\n\nYou realize, of course, that using a binary cursor in a cross-platform\nenvironment is a fairly dangerous thing to do. Any cross-machine\ndiscrepancies in data format or alignment become your problem...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Aug 1999 12:48:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Binary cursor header changed from 20 to 16 Bytes? " }, { "msg_contents": "Bruce Momjian wrote:\n\n> I don't think you can do binary cursors across architectures. The\n> internal formats for most types are different, though you may be able to\n> get away with string fields and int if the endian is the same.\n>\n\nNo, it works just fine. All you have to do is to swap the endian format (Linux Intel\nis little endian; SGI is big endian). We've been using this approach since Postgres\n6.3.\n\n-Tony\n\n\n", "msg_date": "Mon, 02 Aug 1999 09:55:32 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Binary cursor header changed from 20 to 16 Bytes?" }, { "msg_contents": "> Bruce Momjian wrote:\n> \n> > I don't think you can do binary cursors across architectures. The\n> > internal formats for most types are different, though you may be able to\n> > get away with string fields and int if the endian is the same.\n> >\n> \n> No, it works just fine. All you have to do is to swap the endian format (Linux Intel\n> is little endian; SGI is big endian). We've been using this approach since Postgres\n> 6.3.\n> \n\nWhat doesn't work? Floats? Alignment problems?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 2 Aug 1999 13:02:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Binary cursor header changed from 20 to 16 Bytes?" }, { "msg_contents": "Tom Lane wrote:\n\n> Will put this on my todo list.\n>\n> > The clients are all SGI machines (O2, Impact, and Indy).\n>\n> You realize, of course, that using a binary cursor in a cross-platform\n> environment is a fairly dangerous thing to do. Any cross-machine\n> discrepancies in data format or alignment become your problem...\n>\n> regards, tom lane\n\nThanks Tom. I just wanted to make sure the subject was brought up to help\nothers in case they had been racking their brains on the problem.\n\nAs I wrote to Bruce, the cross architecture seems to work just fine as long\nas you have make sure to swap the endians in the data. So it looks like you\ncan do something else that was not in the original planning. Another kudo\nfor the database architecture!\n\n-Tony\n\n\n", "msg_date": "Mon, 02 Aug 1999 10:14:15 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Binary cursor header changed from 20 to 16 Bytes?" }, { "msg_contents": "Bruce Momjian wrote:\n\n>\n> > No, it works just fine. All you have to do is to swap the endian format (Linux Intel\n> > is little endian; SGI is big endian). We've been using this approach since Postgres\n> > 6.3.\n> >\n>\n> What doesn't work? Floats? Alignment problems?\n>\n\nThe only thing that seems to have problems is when you select multiple variables. 
For\nthis case, you have to put all of your arrays at the end.\n\ne.g.\n sprintf(data_string, \"DECLARE data_cursor BINARY CURSOR \"\n \"FOR SELECT repetition, cycle, time_instants FROM %s_proc WHERE \"\n \"subject= '%s' and arm = '%s' and rep = %s and cycle = %s\",\n task_name, subject_name[subject], arm_name[arm],\n repetition_name[i], cycle_name[j]);\n\n res = PQexec(conn, data_string);\n if (PQresultStatus(res) != PGRES_COMMAND_OK) {\n printf(\"\\n\\nERROR issuing command ... %s\\n\", data_string);\n exit_nicely(conn);\n }\n PQclear(res);\n sprintf(data_string, \"FETCH ALL IN data_cursor\");\n res = PQexec(conn, data_string);\n if (PQresultStatus(res) != PGRES_TUPLES_OK) {\n printf(\"\\n\\nERROR issuing command ... %s\\n\", data_string);\n exit_nicely(conn);\n }\n\n /* Move binary-transferred data to desired variable float array */\n memmove(bin_time, (PQgetvalue(res, 0, 2)), (number_of_bins + 1) *\nsizeof(float));\n\n PQclear(res);\n switch_endians_4bytes(bin_time, number_of_bins + 1);\n res = PQexec(conn, \"CLOSE data_cursor\");\n PQclear(res);\n res = PQexec(conn, \"END\");\n PQclear(res);\n\n\nSo in the above case, I can get the repetition (single int value), cycle (single int\nvalue), and time_instants (variable array of float values) out as a binary cursor. But\nneed to put the variable array at the end to make it work correctly. 
In this case, I\ndon't need to offset by 16 bytes to get the 2nd and 3rd column (cycles and\ntime_instants); I only need to do this for the 1st column (repetition).\n\nMy switch_endians_4_bytes looks like this:\n\nvoid switch_endians_4bytes(int *temp_array, int size_of_array)\n{\n short int test_endianess_word = 0x0001;\n char *test_endianess_byte = (char *) &test_endianess_word;\n\n int i;\n int temp_int;\n char *temp_char, byte0, byte1, byte2, byte3;\n\n if (test_endianess_byte[0] == BIG_ENDIAN) {\n\n for (i = 0; i < size_of_array; i++) {\n\n temp_int = temp_array[i];\n temp_char = (char *) (&temp_int);\n byte0 = *temp_char;\n byte1 = *(++temp_char);\n byte2 = *(++temp_char);\n byte3 = *(++temp_char);\n temp_char = (char *) (&temp_int);\n *temp_char = byte3;\n *(++temp_char) = byte2;\n *(++temp_char) = byte1;\n *(++temp_char) = byte0;\n temp_array[i] = temp_int;\n }\n }\n}\n\n\nwhere BIG_ENDIAN is defined as 0. Because I test the machine at run-time for its\nendianess, I can run this on both of my platforms and it will either switch or not switch\ndepending on the need (assuming that the server is on a little endian machine).\n\n-Tony\n\n\n", "msg_date": "Mon, 02 Aug 1999 10:32:23 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Binary cursor header changed from 20 to 16 Bytes?" }, { "msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> The only thing that seems to have problems is when you select multiple\n> variables. For this case, you have to put all of your arrays at the\n> end.\n\nThat doesn't make a lot of sense to me either. 
What happens if you\ndon't?\n\n> I don't need to offset by 16 bytes to get the 2nd and 3rd column (cycles and\n> time_instants); I only need to do this for the 1st column (repetition).\n\nRight, there'd not be any array overhead for non-array datatypes...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Aug 1999 14:22:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Binary cursor header changed from 20 to 16 Bytes? " }, { "msg_contents": "Tom Lane wrote:\n\n> \"G. Anthony Reina\" <[email protected]> writes:\n> > The only thing that seems to have problems is when you select multiple\n> > variables. For this case, you have to put all of your arrays at the\n> > end.\n>\n> That doesn't make a lot of sense to me either. What happens if you\n> don't?\n>\n\n It comes back as \"gibberish\". But we haven't really experimented with what\nthe gibberish is (e.g. alignment off, etc). Once we figured out the trick about\nputting the arrays at the end, we stopped fooling with it. It would be a nice\nlittle experiment since it appears that this kind of thing isn't frequently done.\n\n Anyone else out there using a binary cursor between two different computer\narchitectures?\n\n>\n> > I don't need to offset by 16 bytes to get the 2nd and 3rd column (cycles and\n> > time_instants); I only need to do this for the 1st column (repetition).\n>\n\nSorry I misspoke but you interpretted correctly anyway. The 1st and 2nd columns\n(just single ints) don't need the 16 byte offset, just the 3rd column (variable\narray). We've tried this with both int and float variable arrays and it works\nfine.\n\n\n-Tony\n\n\n\n", "msg_date": "Mon, 02 Aug 1999 11:54:48 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Binary cursor header changed from 20 to 16 Bytes?" } ]
[ { "msg_contents": "\nLinuxDev.Net is currently conducting a poll of \"The Most Popular DBMS\", at\nhttp://linuxdev.net. Currently, PostgreSQL ia in second place, with 21\nout of 84 votes, while MySQL is in first with 33...\n\nEven if you aren't using Linux, take a minute and make your vote...we like\nto see PostgreSQL on top, no? :)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 2 Aug 1999 12:51:30 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "None" }, { "msg_contents": "You people are harsh *and* devious.\n\n(Yes, I've now voted. Solaris is kind of like Linux. At least I use a\nlot of GNUware.) It's now 28 to 35 out of 93 votes.\n\nOn Mon, 2 Aug 1999, The Hermit Hacker wrote:\n\n> \n> LinuxDev.Net is currently conducting a poll of \"The Most Popular DBMS\", at\n> http://linuxdev.net. Currently, PostgreSQL ia in second place, with 21\n> out of 84 votes, while MySQL is in first with 33...\n> \n> Even if you aren't using Linux, take a minute and make your vote...we like\n> to see PostgreSQL on top, no? :)\n> \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n> \n\n", "msg_date": "Mon, 2 Aug 1999 11:12:16 -0500 (EST)", "msg_from": "\"J. Michael Roberts\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: your mail" }, { "msg_contents": "At 12:51 PM 8/2/99 -0300, The Hermit Hacker wrote:\n>\n>LinuxDev.Net is currently conducting a poll of \"The Most Popular DBMS\", at\n>http://linuxdev.net. 
Currently, PostgreSQL ia in second place, with 21\n>out of 84 votes, while MySQL is in first with 33...\n\nThere are now 119 votes, and PostgreSQL is in first place!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n", "msg_date": "Mon, 02 Aug 1999 09:28:15 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "On Mon, 2 Aug 1999, J. Michael Roberts wrote:\n\n> You people are harsh *and* devious.\n> \n> (Yes, I've now voted. Solaris is kind of like Linux. At least I use a\n> lot of GNUware.) It's now 28 to 35 out of 93 votes.\n\n66 to 37 last I checked...and why devious? :)\n\n\n> \n> On Mon, 2 Aug 1999, The Hermit Hacker wrote:\n> \n> > \n> > LinuxDev.Net is currently conducting a poll of \"The Most Popular DBMS\", at\n> > http://linuxdev.net. Currently, PostgreSQL ia in second place, with 21\n> > out of 84 votes, while MySQL is in first with 33...\n> > \n> > Even if you aren't using Linux, take a minute and make your vote...we like\n> > to see PostgreSQL on top, no? :)\n> > \n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > \n> > \n> > \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 2 Aug 1999 13:40:21 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: your mail" }, { "msg_contents": "hi\n\n\ti'm not a Linux user, but an OpenBSD user. recently Louis \nBertrand posted some requests for assistance in getting ODBC support to \nwork on the postgres port to openbsd. 
i didn't see many responses to his \nrequests, though I saw lots of discussions re: Linux problems.\n\n\tI'd like to use PostgreSQL, but it doesn't seem like there is \nmuch interest from this community in assisting an OpenBSD developer. \nI'll change databases before I change OS's.\n\n\tSorry if this sounds like sour grapes, just call 'em like I see them\n\nDiana Eichert\nNetwork Systems Analyst\nSandia National Laboratories\n\nOn Mon, 2 Aug 1999, The Hermit Hacker wrote:\n\n> \n> LinuxDev.Net is currently conducting a poll of \"The Most Popular DBMS\", at\n> http://linuxdev.net. Currently, PostgreSQL ia in second place, with 21\n> out of 84 votes, while MySQL is in first with 33...\n> \n> Even if you aren't using Linux, take a minute and make your vote...we like\n> to see PostgreSQL on top, no? :)\n> \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n> \n\n\nDiana Eichert\[email protected]\n\n", "msg_date": "Mon, 2 Aug 1999 11:47:01 -0600 (MDT)", "msg_from": "Diana Eichert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: your mail" }, { "msg_contents": "\n133 and climbing...65% of the votes so far :)\n\n\nOn Mon, 2 Aug 1999, Brett W. McCoy wrote:\n\n> On Mon, 2 Aug 1999, The Hermit Hacker wrote:\n> \n> > LinuxDev.Net is currently conducting a poll of \"The Most Popular DBMS\", at\n> > http://linuxdev.net. Currently, PostgreSQL ia in second place, with 21\n> > out of 84 votes, while MySQL is in first with 33...\n> > \n> > Even if you aren't using Linux, take a minute and make your vote...we like\n> > to see PostgreSQL on top, no? :)\n> \n> It's shot way up now. We're up to 122 votes and in first place!\n> \n> Brett W. McCoy \n> http://www.lan2wan.com/~bmccoy\n> -----------------------------------------------------------------------\n> Today is the tomorrow you worried about yesterday\n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 2 Aug 1999 14:51:19 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: your mail" }, { "msg_contents": "\nMoved off of -announce ...\n\nOn Mon, 2 Aug 1999, Diana Eichert wrote:\n\n> hi\n> \n> \ti'm not a Linux user, but an OpenBSD user. recently Louis \n> Bertrand posted some requests for assistance in getting ODBC support to \n> work on the postgres port to openbsd. i didn't see many responses to his \n> requests, though I saw lots of discussions re: Linux problems.\n\nI'm not much into ODBC myself, so can't help much, but could you have\nLouis repost his problem. The problem is that the 'Linux community' makes\nup a very very large chunk (sadly *grin*) of our user base...but not\nnecessarily a large chunk of the developers.\n\nI don't know if there are any OpenBSD \"developers\" right now...I'm\nFreeBSD, as I believe Vadim is, Bruce is BSD/OS, and Thomas is Linux...I\ndon't know what anyone else is...except D'Arcy, who's Net/BSD ... Vince, I\nbelieve, is FreeBSD also ... so there is a pretty good proportion of *BSD\nusers out there, but I don't think any of us use ODBC vs 'straight\ninterfaces'...\n\nBasically, if you are non-Linux in this world nowadays, you have to learn\nto be a little more pushy to get your point across :( \n\n> \tI'd like to use PostgreSQL, but it doesn't seem like there is \n> much interest from this community in assisting an OpenBSD developer. \n> I'll change databases before I change OS's.\n> \n> \tSorry if this sounds like sour grapes, just call 'em like I see them\n> \n> Diana Eichert\n> Network Systems Analyst\n> Sandia National Laboratories\n> \n> On Mon, 2 Aug 1999, The Hermit Hacker wrote:\n> \n> > \n> > LinuxDev.Net is currently conducting a poll of \"The Most Popular DBMS\", at\n> > http://linuxdev.net. 
Currently, PostgreSQL ia in second place, with 21\n> > out of 84 votes, while MySQL is in first with 33...\n> > \n> > Even if you aren't using Linux, take a minute and make your vote...we like\n> > to see PostgreSQL on top, no? :)\n> > \n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > \n> > \n> > \n> \n> \n> Diana Eichert\n> [email protected]\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 2 Aug 1999 14:56:20 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: your mail" }, { "msg_contents": "On Mon, 2 Aug 1999, The Hermit Hacker wrote:\n\n> \n> LinuxDev.Net is currently conducting a poll of \"The Most Popular DBMS\", at\n> http://linuxdev.net. Currently, PostgreSQL ia in second place, with 21\n> out of 84 votes, while MySQL is in first with 33...\n> \n> Even if you aren't using Linux, take a minute and make your vote...we like\n> to see PostgreSQL on top, no? :)\n\nWell, I've just voted, and now were 1st with 64.8% (to mysql's 22.2). 
I\nthink your plea for votes worked :-)\n\n--\n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Mon, 2 Aug 1999 19:50:09 +0100 (GMT)", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: your mail" }, { "msg_contents": "\nDrop'ng now...the MySQL camp appears to have finally heard about it...we\nare still just slightly above 60%...\n\n\nOn Mon, 2 Aug 1999, Peter Mount wrote:\n\n> On Mon, 2 Aug 1999, The Hermit Hacker wrote:\n> \n> > \n> > LinuxDev.Net is currently conducting a poll of \"The Most Popular DBMS\", at\n> > http://linuxdev.net. Currently, PostgreSQL ia in second place, with 21\n> > out of 84 votes, while MySQL is in first with 33...\n> > \n> > Even if you aren't using Linux, take a minute and make your vote...we like\n> > to see PostgreSQL on top, no? :)\n> \n> Well, I've just voted, and now were 1st with 64.8% (to mysql's 22.2). I\n> think your plea for votes worked :-)\n> \n> --\n> Peter T Mount [email protected]\n> Main Homepage: http://www.retep.org.uk\n> PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n> Java PDF Generator: http://www.retep.org.uk/pdf\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 2 Aug 1999 16:29:23 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: your mail" }, { "msg_contents": "On Mon, 2 Aug 1999, The Hermit Hacker wrote:\n\n> LinuxDev.Net is currently conducting a poll of \"The Most Popular DBMS\", at\n> http://linuxdev.net. Currently, PostgreSQL ia in second place, with 21\n> out of 84 votes, while MySQL is in first with 33...\n> \n> Even if you aren't using Linux, take a minute and make your vote...we like\n> to see PostgreSQL on top, no? 
:)\n\nIt's shot way up now. We're up to 122 votes and in first place!\n\nBrett W. McCoy \n http://www.lan2wan.com/~bmccoy\n-----------------------------------------------------------------------\nToday is the tomorrow you worried about yesterday\n\n", "msg_date": "Mon, 2 Aug 1999 17:36:46 -0400 (EDT)", "msg_from": "\"Brett W. McCoy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: your mail" }, { "msg_contents": "PostgreSQL now has 61% of 621 votes. :-)\n\n> \n> LinuxDev.Net is currently conducting a poll of \"The Most \n> Popular DBMS\", at\n> http://linuxdev.net. Currently, PostgreSQL ia in second \n> place, with 21\n> out of 84 votes, while MySQL is in first with 33...\n> \n> Even if you aren't using Linux, take a minute and make your \n> vote...we like\n> to see PostgreSQL on top, no? :)\n> \n> \n> Marc G. Fournier ICQ#7615664 \n> IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: \n> scrappy@{freebsd|postgresql}.org \n> \n> \n> \n", "msg_date": "Tue, 3 Aug 1999 10:03:37 +0100", "msg_from": "\"John Ridout\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: " }, { "msg_contents": "John Ridout wrote:\n> \n> PostgreSQL now has 61% of 621 votes. :-)\n> \n\nDoes'nt MySQL have a mailing list ?? ;-)\n\n---------\nHannu\n", "msg_date": "Tue, 03 Aug 1999 14:32:10 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE:" }, { "msg_contents": "Nothing strange - MySQL is the best free (and not only free) tool for the \nfast search engines (when you do not need the full SQL), and PSQL is the \nonly stable free full (almost full) SQL DBA.\n\n\n\n\nOn Mon, 2 Aug 1999, J. Michael Roberts wrote:\n\n> Date: Mon, 2 Aug 1999 11:12:16 -0500 (EST)\n> From: J. 
Michael Roberts <[email protected]>\n> To: The Hermit Hacker <[email protected]>\n> Cc: [email protected]\n> Subject: [ANNOUNCE] Re: your mail\n> \n> You people are harsh *and* devious.\n> \n> (Yes, I've now voted. Solaris is kind of like Linux. At least I use a\n> lot of GNUware.) It's now 28 to 35 out of 93 votes.\n> \n> On Mon, 2 Aug 1999, The Hermit Hacker wrote:\n> \n> > \n> > LinuxDev.Net is currently conducting a poll of \"The Most Popular DBMS\", at\n> > http://linuxdev.net. Currently, PostgreSQL ia in second place, with 21\n> > out of 84 votes, while MySQL is in first with 33...\n> > \n> > Even if you aren't using Linux, take a minute and make your vote...we like\n> > to see PostgreSQL on top, no? :)\n> > \n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > \n> > \n> > \n> \n> \n> \n\nAleksei Roudnev, Network Operations Center, Relcom, Moscow\n(+7 095) 194-19-95 (Network Operations Center Hot Line),(+7 095) 230-41-41, N 13729 (pager)\n(+7 095) 196-72-12 (Support), (+7 095) 194-33-28 (Fax)\n\n", "msg_date": "Mon, 9 Aug 1999 13:43:31 +0400 (MSD)", "msg_from": "\"Alex P. Rudnev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ANNOUNCE] Re: your mail" } ]
[ { "msg_contents": "Hi.\n\nI am trying to get postgresql to store data from MS-Project via ODBC. So\nfar I have managed to get pretty close, but the save operation fails when\ncreating a table with one column specified as \"fieldname bytea(8)\".\nPostgresql does not like the precision argument and bombs out with a parse\nerror at the '('. \ne.g.: create table foo (f1 bytea) # ok\n create table foo (f1 bytea(8)) # fails\n\nAwhile back on the list (dec 7 1998). this was brought up for inclusion\ninto the 6.4 tree. Since Im running 6.5.1 I assume the patch that fixed\nthis never made it. Was there a resolution to this issue as to why it\nnever was included into the main source? And if so, does anyone know where\nI can get this patch to get Project and postgresql to work together?\n\nThanks for any info anyone has.\nMike \n", "msg_date": "Mon, 2 Aug 1999 11:27:44 -0500 (CDT)", "msg_from": "Michael J Schout <[email protected]>", "msg_from_op": true, "msg_subject": "bytea type and precision." } ]
[ { "msg_contents": "Hi,\n\nI noticed some activities in REL6_5_PATCHES branch. \nDo we need 6.5.2 release ?\n\n\n\tRegards,\n\t\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 3 Aug 1999 00:00:14 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Any 6.5.2 activities ?" }, { "msg_contents": "On Tue, 3 Aug 1999, Oleg Bartunov wrote:\n\n> Hi,\n> \n> I noticed some activities in REL6_5_PATCHES branch. \n> Do we need 6.5.2 release ?\n\njust alot of prepartory work for one...starting with v6.5 and the\ncommercial support offerings, the -STABLE branch will be maintained until\nthe new -CURRENT branch becomes the -STABLE one...\n\nThat way the -CURRENT branch isn't so rushed to get out, either...if it\ngoes 6 mos, those running -STABLE and are seeing bugs, they won't get\nsuck...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 2 Aug 1999 17:47:54 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Any 6.5.2 activities ?" } ]
[ { "msg_contents": "Hi Everybody.\n\nWe've gotten a few requests for a distributed version of postgresql.\nThere's two models that I think could work for this. What I'm looking \nfor is two things:\n\t1. Any opinions on which option to take.\n\t2. Anyone willing to work on the project.\n\nThe features people are looking for are fault tolerance, and load distribution\n\nThe two models:\n\t1. A separate server process which manages the parallelism.\n\t\tClient connects to that server, which handles the request\n\t\tusing (nearly) unmodified backends on different machines.\n\t\tWould make use of the fact that the vast majority of \n\t\tcommands are read-only.\n\n\t\tAdvantages:\n\t\t\tPlatform independent. (e.g. SPARC's and x86 in \n\t\t\t\tsame cluster)\n\t\t\tLess inherently complex (simpler, separate component)\n\t\t\tLess changes to current code\n\n\t\tPotential Snag:\n\t\t\tMay need a different communication interface built..\n\t\t\t(maybe not...)\n\n\t2. Modifying the current server to handle the parallelism.\n\t\ti.e. Start the server with another option and config file\n\t\t(or something equivalent). All modifications built into\n\t\tthe current server.\n\n\t\tAdvantages:\n\t\t\tCould more easily be made to run single queries in\n\t\t\t\tparallel\n\t\t\tCould be made more efficient by directly making\n\t\t\t\tstorage calls, instead of using an SQL\n\t\t\t\tinterface.\n\n\t\tPotential Snag:\n\t\t\tAdds a lot of complexity to the current backend.\n\nMy personal preference is toward option 1. \n\tIt sounds a lot easier to implement, and works through a well-defined \n\t \tinterface (SQL). \n\tPlatform independent, just in case someone decides to throw in a \n\t\tSPARC box into their room of Alphas. \n\tMinimal changes to the current backend which keeps the backend simpler\n\t\tfor people to work on. (My next email's gonna go down that\n\t\tline somewhere)\n\t\nAny preferences or options I haven't thought of? (or any details which would\ncomplicate the project?) 
As well, would anybody be interested in working on\nthis?\n\nDuane\n", "msg_date": "Tue, 3 Aug 1999 10:33:05 +0000 (AST)", "msg_from": "Duane Currie <[email protected]>", "msg_from_op": true, "msg_subject": "|| PostgreSQL" }, { "msg_contents": "Ingres happened to implement their distributed system using option\n(1), having a distributed front-end which knew about remote servers\nand could parse and optimize queries then send individual queries to\nthe actual servers.\n\nSeemed to work pretty well, and you could reuse a large amount of\ncode. otoh, if you implemented option (2) (local and remote tables),\nthen you could choose to construct your database on the option (1)\nmodel without penalty.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 03 Aug 1999 14:21:50 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] || PostgreSQL" }, { "msg_contents": "Hi Duane - \nI swear, the syncronicity is starting to get _real_ thick around here.\nTake a look at the discussion between myself, Tom Lane, and Bruce\nMojarian yesterday, under the helpful subject line of Mariposa. We discuss\nimplementing model 2, which, as Thomas points out in his reply to you,\ncan be a superset of model 1. I need to be able to set up a distributed,\nheterogenous database system. We've priced commerical offerings in this\nfield, and ones with sufficent flexibilty to do what we need start at\n$40000 for a _two_ backend license.\n\nAnyway, let's combine forces! I'm going to take a stab at this anyway, no\nsense in duplicating effort. I could do a local CVS tree, to keep us synced.\n\nRoss\n\nP.S. Mariposa was a project out of Stonebraker's lab at Berkeley, to build a\ndistributed db. checkout http://mariposa.cs.berkeley.edu\n\nOn Tue, Aug 03, 1999 at 10:33:05AM +0000, Duane Currie wrote:\n\n<SNIP>\n\n> \t\n> Any preferences or options I haven't thought of? 
(or any details which would\n> complicate the project?) As well, would anybody be interested in working on\n> this?\n> \n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Tue, 3 Aug 1999 09:57:11 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] || PostgreSQL" } ]
[ { "msg_contents": "Hi Everybody.\n\nAs well, a few have been asking about multi-threading.\n\nMarc has told me about the past discussions on it. \nI'm interested in re-opening some discussion on it, as we may eventually\nhave funding to do help with it.\n\nWould it not be advantageous to use threading in the PostgreSQL backend?\nMany servers make use of threading to increase throughput by performing\nIO and computations concurrently. (While one thread is waiting for the\ndisk to respond to an IO request, another processes the last chunk of data).\n\nAs well, threads tend to have a lower process switching time, so that may\nhelp a bit on heavily loaded systems. (Of course, it would tend to be a\ncombination of forking and threading. Threading benefits have limits).\n\nAny thoughts?\n\nDuane\n", "msg_date": "Tue, 3 Aug 1999 10:56:51 +0000 (AST)", "msg_from": "Duane Currie <[email protected]>", "msg_from_op": true, "msg_subject": "Threads" }, { "msg_contents": "> \"Ross J. Reedstrom\" <[email protected]> writes:\n> > Hmm, what about threads in the frontend? Anyone know if libpq is thread\n> > safe, and if not, how hard it might be to make it so?\n> \n> It is not; the main reason why not is a brain-dead part of the API that\n> exposes modifiable global variables. Check the mail list archives\n> (probably psql-interfaces, but maybe -hackers) for previous discussions\n> with details. Earlier this year I think, or maybe late 98.\n\nhmmm... usually this is repairable by creating wrapper functions which \nindex the variables by thread id, and enforcing the use of the functions...\n(maybe something for a wish list...) \n\nDuane\n", "msg_date": "Tue, 3 Aug 1999 13:33:44 +0000 (AST)", "msg_from": "Duane Currie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Threads" }, { "msg_contents": "> As well, a few have been asking about multi-threading.\n> Any thoughts?\n\nThreads within a client backend might be interesting. 
imho a\nsingle-process multi-client multi-threaded server is just asking for\ntrouble, putting all clients at risk for any single misbehaving one.\nParticularly with our extensibility features, where users and admins\ncan add functionality through code they have written (or are trying to\nwrite ;) having each backend isolated is A Good Thing.\n\nistm that many of the cases for which multi-threading is proposed (web\nserving always comes up) can be solved using persistant connections or\nother techniques.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 03 Aug 1999 14:18:07 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads" }, { "msg_contents": "Duane Currie <[email protected]> writes:\n> Would it not be advantageous to use threading in the PostgreSQL backend?\n\nJust so you don't break the code for non-threaded platforms.\n\nI believe mysql *requires* working thread support, which is one reason\nit is not so portable as Postgres... we should not give up that advantage.\n\nBTW, I'm not convinced that threading would improve performance very\nmuch within a single backend. It might be a win as a substitute for\nmultiple backends, ie, instead of postmaster + N backends you have just\none process with a bunch of threads. (But on the downside of *that* is\nthat a backend crash now takes down your postmaster along with\neverything else...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Aug 1999 10:34:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads " }, { "msg_contents": "\nOn 03-Aug-99 Thomas Lockhart wrote:\n>> As well, a few have been asking about multi-threading.\n>> Any thoughts?\n\nI completly agree with Thomas.\n\nMutiprocess server is also more convenient \nfor managing and more portable \n\n\n> \n> Threads within a client backend might be interesting. 
imho a\n> single-process multi-client multi-threaded server is just asking for\n> trouble, putting all clients at risk for any single misbehaving one.\n> Particularly with our extensibility features, where users and admins\n> can add functionality through code they have written (or are trying to\n> write ;) having each backend isolated is A Good Thing.\n> \n> istm that many of the cases for which multi-threading is proposed (web\n> serving always comes up) can be solved using persistant connections or\n> other techniques.\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart [email protected]\n> South Pasadena, California\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Tue, 03 Aug 1999 18:37:57 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads" }, { "msg_contents": "On Tue, 3 Aug 1999, Tom Lane wrote:\n\n> Duane Currie <[email protected]> writes:\n> > Would it not be advantageous to use threading in the PostgreSQL backend?\n> \n> Just so you don't break the code for non-threaded platforms.\n> \n> I believe mysql *requires* working thread support, which is one reason\n> it is not so portable as Postgres... we should not give up that advantage.\n\nJust curious here, but out of all the platforms we support, are there any\nremaining that don't support threads?\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 3 Aug 1999 12:13:11 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Just curious here, but out of all the platforms we support, are there any\n> remaining that don't support threads?\n\nPretty much all of the older ones, I imagine --- I know HPUX 9 does not.\n\nOf course HPUX 9 will be a dead issue by the end of the year, because\nHP isn't going to fix its Y2K bugs; I wonder whether there is a similar\nforcing function for old SunOS and other systems?\n\nBut still, I believe there are several different flavors of thread\npackages running around, so we will be opening a brand new can of\nportability worms. We'd best keep a \"no threads\" fallback option...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Aug 1999 11:37:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads " }, { "msg_contents": "On Tue, Aug 03, 1999 at 02:18:07PM +0000, Thomas Lockhart wrote:\n> > As well, a few have been asking about multi-threading.\n> > Any thoughts?\n> \n> Threads within a client backend might be interesting. [...]\n\nHmm, what about threads in the frontend? Anyone know if libpq is thread\nsafe, and if not, how hard it might be to make it so?\n\nRoss \n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Tue, 3 Aug 1999 10:57:24 -0500", "msg_from": "\"Ross J. 
Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads" }, { "msg_contents": "On Tue, 3 Aug 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Just curious here, but out of all the platforms we support, are there any\n> > remaining that don't support threads?\n> \n> Pretty much all of the older ones, I imagine --- I know HPUX 9 does not.\n> \n> Of course HPUX 9 will be a dead issue by the end of the year, because\n> HP isn't going to fix its Y2K bugs; I wonder whether there is a similar\n> forcing function for old SunOS and other systems?\n\nHPUX9 won't be that dead. It's still the highest version the 300\nseries machines will run and the Y2K problems in it aren't enough\nto force alot of folks to upgrade the hardware and OS. We're still\nusing HPUX8 on a number of our test stands and they even pass enuf\nY2K to keep 'em going (RMB can't do the dates but the system can).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 3 Aug 1999 12:03:36 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads " }, { "msg_contents": "On Tue, 3 Aug 1999, Tom Lane wrote:\n\n> But still, I believe there are several different flavors of thread\n> packages running around, so we will be opening a brand new can of\n> portability worms. We'd best keep a \"no threads\" fallback option...\n\nSounds reasonable, but is it feasible? 
I think the general thread right\nnow is to go with partial threading, but how hard is it going to be to\nimplement even partial threading will maintaining the no-thread features?\nBasically just massive #ifdef blocks? *raised eyebrow*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 3 Aug 1999 13:10:17 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads " }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> Hmm, what about threads in the frontend? Anyone know if libpq is thread\n> safe, and if not, how hard it might be to make it so?\n\nIt is not; the main reason why not is a brain-dead part of the API that\nexposes modifiable global variables. Check the mail list archives\n(probably psql-interfaces, but maybe -hackers) for previous discussions\nwith details. Earlier this year I think, or maybe late 98.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Aug 1999 12:13:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads " }, { "msg_contents": "> On Tue, Aug 03, 1999 at 02:18:07PM +0000, Thomas Lockhart wrote:\n> > > As well, a few have been asking about multi-threading.\n> > > Any thoughts?\n> > \n> > Threads within a client backend might be interesting. [...]\n> \n> Hmm, what about threads in the frontend? Anyone know if libpq is thread\n> safe, and if not, how hard it might be to make it so?\n> \n\nI believe it is thread-safe.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 3 Aug 1999 12:19:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads" } ]
[ { "msg_contents": "This was one of the points I was talking about in the original message.\nThis way, it's still one session per backend, but uses threads to improve\nthroughput...\n\t\"(While one thread is waiting for the disk to respond to \n\t an IO request, another processes the last chunk of data)\"\n\nThis one looks to me like the best idea. \n\nNow that pthreads is pretty much standard on systems (or available),\nthreading shouldn't be so problematic from a portability standpoint...\n\nDuane\n\n> Aren't there cases, though, where multi-threading could be used within the\n> back end design that we have at the moment, for example, to avoid lags\n> during I/O? So, while the nth block of data is being read from the disk,\n> the (n-1)th block is being processed by the next process down the line. For\n> example...\n> This wouldn't (shouldn't?) break the isolation that currently exists due to\n> single-process servers. \n> \n> MikeA\n> \n> >> > As well, a few have been asking about multi-threading.\n> >> > Any thoughts?\n> >> \n> >> Threads within a client backend might be interesting. 
imho a\n> >> single-process multi-client multi-threaded server is just asking for\n> >> trouble, putting all clients at risk for any single misbehaving one.\n> >> Particularly with our extensibility features, where users and admins\n> >> can add functionality through code they have written (or are \n> >> trying to\n> >> write ;) having each backend isolated is A Good Thing.\n> >> \n> >> istm that many of the cases for which multi-threading is \n> >> proposed (web\n> >> serving always comes up) can be solved using persistant \n> >> connections or\n> >> other techniques.\n> >> \n> >> - Thomas\n> \n\n", "msg_date": "Tue, 3 Aug 1999 11:44:01 +0000 (AST)", "msg_from": "Duane Currie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Threads" }, { "msg_contents": "Aren't there cases, though, where multi-threading could be used within the\nback end design that we have at the moment, for example, to avoid lags\nduring I/O? So, while the nth block of data is being read from the disk,\nthe (n-1)th block is being processed by the next process down the line. For\nexample...\nThis wouldn't (shouldn't?) break the isolation that currently exists due to\nsingle-process servers. \n\nMikeA\n\n>> > As well, a few have been asking about multi-threading.\n>> > Any thoughts?\n>> \n>> Threads within a client backend might be interesting. 
imho a\n>> single-process multi-client multi-threaded server is just asking for\n>> trouble, putting all clients at risk for any single misbehaving one.\n>> Particularly with our extensibility features, where users and admins\n>> can add functionality through code they have written (or are \n>> trying to\n>> write ;) having each backend isolated is A Good Thing.\n>> \n>> istm that many of the cases for which multi-threading is \n>> proposed (web\n>> serving always comes up) can be solved using persistant \n>> connections or\n>> other techniques.\n>> \n>> - Thomas\n", "msg_date": "Tue, 3 Aug 1999 16:35:35 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Threads" } ]
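The overlap MikeA describes above -- one thread blocked on disk I/O while another works on the previous block -- can be sketched as a plain producer/consumer pair. This is a toy simulation only (the fake disk latency, block values, and queue size are invented for illustration), not backend code:

```python
import threading
import queue
import time

def reader(q, blocks):
    # "I/O" thread: fetches block n while the worker is still
    # busy with block n-1.
    for b in blocks:
        time.sleep(0.001)   # pretend disk latency
        q.put(b)            # hand the block over
    q.put(None)             # end-of-stream marker

def processor(q, out):
    # "CPU" thread: consumes blocks as they arrive.
    while True:
        b = q.get()
        if b is None:
            break
        out.append(b * 2)   # stand-in for real per-block work

q = queue.Queue(maxsize=4)  # bounded, so the reader only runs a few blocks ahead
out = []
t1 = threading.Thread(target=reader, args=(q, list(range(20))))
t2 = threading.Thread(target=processor, args=(q, out))
t1.start(); t2.start()
t1.join(); t2.join()
print(out)
```

Because the queue is bounded, the reader stays only a few blocks ahead of the processor, which is exactly the pipelining effect described: I/O and processing proceed concurrently without breaking the one-session-per-backend isolation.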
[ { "msg_contents": "\"Marc G. Fournier\" <scrappy> writes:\n> Another 'mega-commit' of back-patches ... \n> \n> - - integrating the #include file cleanup that Bruce recently did\n> - - got the CPU change to adt/Makefile \n> - - changing DOUBLEALIGN -> MAXALIGN\n\nIs anyone else disturbed by wholesale changes to what is supposed to\nbe a stable release?\n\nI am sure Marc will say these are low-risk changes --- but they're not\n*no* risk, because there is always a chance of propagating part of\nsome other change that you didn't want, or failing to propagate all\nof the change you did want. And how much testing will the modified\n6.5.x code get before it gets published as a stable version?\n\nMy feeling is that we should only back-patch essential bug fixes.\nYou can define \"essential\" as \"anything a user requests\", if you like.\nBut surely code cleanups do not qualify unless they fix a demonstrable\nbug.\n\nJust my $0.02...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Aug 1999 10:23:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Mega-commits to \"stable\" version" }, { "msg_contents": "> Is anyone else disturbed by wholesale changes to what is supposed to\n> be a stable release?\n> I am sure Marc will say these are low-risk changes --- but they're not\n> *no* risk, because there is always a chance of propagating part of\n> some other change that you didn't want, or failing to propagate all\n> of the change you did want. And how much testing will the modified\n> 6.5.x code get before it gets published as a stable version?\n\nI agree, and think we should back off on some of those patches, or at\nleast back off of applying patches of that nature in the future. 
I'm\nguessing that it wasn't clear to Marc what the range of opinions might\nbe on these particular patches.\n\notoh, since it's a done deal, it should encourage us to do a bit more\ntesting than we would have otherwise, on a wider range of platforms.\nAnd the changes probably got the camel's nose under the tent to move\nforward with 64-bit fixes on the stable branch (which would be nice\nfor our RPM work).\n\nPerhaps the Japanese contingent, who seem to have a remarkable range\nof platforms, will be willing and able to regression test when we get\ncloser to a v6.5.2.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 03 Aug 1999 14:47:24 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mega-commits to \"stable\" version" }, { "msg_contents": "\nNote that I have no problems with someone requesting a patch to be backed\nout...IMHO, anything dealing with the configuration process should be\nbrought back into -STABLE (ie. the CPU changes that Bruce did)...but\nanything else that I've changed, or will change, are generally what I\nconsider to be \"safe bets\"...if I'm wrong, they are easy to back\nout...just let me know...\n\nOn Tue, 3 Aug 1999, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy> writes:\n> > Another 'mega-commit' of back-patches ... \n> > \n> > - - integrating the #include file cleanup that Bruce recently did\n> > - - got the CPU change to adt/Makefile \n> > - - changing DOUBLEALIGN -> MAXALIGN\n> \n> Is anyone else disturbed by wholesale changes to what is supposed to\n> be a stable release?\n> \n> I am sure Marc will say these are low-risk changes --- but they're not\n> *no* risk, because there is always a chance of propagating part of\n> some other change that you didn't want, or failing to propagate all\n> of the change you did want. 
And how much testing will the modified\n> 6.5.x code get before it gets published as a stable version?\n> \n> My feeling is that we should only back-patch essential bug fixes.\n> You can define \"essential\" as \"anything a user requests\", if you like.\n> But surely code cleanups do not qualify unless they fix a demonstrable\n> bug.\n> \n> Just my $0.02...\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 3 Aug 1999 12:19:12 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mega-commits to \"stable\" version" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> > Is anyone else disturbed by wholesale changes to what is supposed to\n> > be a stable release?\n> > I am sure Marc will say these are low-risk changes --- but they're not\n> > *no* risk, because there is always a chance of propagating part of\n> > some other change that you didn't want, or failing to propagate all\n> > of the change you did want. And how much testing will the modified\n> > 6.5.x code get before it gets published as a stable version?\n\nMaybe we should do a 6.5.2beta before the real 6.5.2 so that people are \nwarned and know what to expect? 
\n\n------------\nHannu\n", "msg_date": "Tue, 03 Aug 1999 21:39:59 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mega-commits to \"stable\" version" }, { "msg_contents": "On Tue, 3 Aug 1999, Hannu Krosing wrote:\n\n> Thomas Lockhart wrote:\n> > \n> > > Is anyone else disturbed by wholesale changes to what is supposed to\n> > > be a stable release?\n> > > I am sure Marc will say these are low-risk changes --- but they're not\n> > > *no* risk, because there is always a chance of propagating part of\n> > > some other change that you didn't want, or failing to propagate all\n> > > of the change you did want. And how much testing will the modified\n> > > 6.5.x code get before it gets published as a stable version?\n> \n> Maybe we should do a 6.5.2beta before the real 6.5.2 so that people are \n> warned and know what to expect? \n\nShould always have at least a 1 week beta before a minor-minor release,\nno?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 3 Aug 1999 15:45:14 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mega-commits to \"stable\" version" } ]
[ { "msg_contents": "A few weeks ago, George Young <[email protected]> complained that the\nfollowing query:\n\n> select os.name,r.run_name,ro.status from opset_steps os,runs r,\n> run_opsets ro where (ro.status=3 or ro.status=1) and\n> ro.opset_id=os.opset_id and ro.run_id=r.run_id and\n> ro.opset_ver=os.opset_ver and r.status=1;\n\nhad horrible performance when executed via the system's preferred plan,\n\n> Hash Join (cost=1793.58 rows=14560 width=38)\n> -> Hash Join (cost=1266.98 rows=14086 width=24)\n> -> Seq Scan on run_opsets ro (cost=685.51 rows=13903 width=8)\n> -> Hash (cost=70.84 rows=1389 width=16)\n> -> Seq Scan on opset_steps os (cost=70.84 rows=1389 width=16)\n> -> Hash (cost=47.43 rows=374 width=14)\n> -> Seq Scan on runs r (cost=47.43 rows=374 width=14)\n\nI have looked into this, and it seems that the problem is this: the\ninnermost hash join between run_opsets and opset_steps is being done\non the join clause ro.opset_ver=os.opset_ver. In George's data,\nthe opset_ver columns only have about 14 distinct values, with a\nvery strong bias towards the values 1,2,3. This means that the vast\nmajority of the opset_steps entries go into only three hash buckets,\nand the vast majority of the probes from run_opsets search one of those\nsame three buckets, so that most of the run_opsets rows are being\ncompared to almost a third of the opset_steps rows, not just a small\nfraction of them. Almost all of the runtime of the query is going into\nthe tuple comparison tests :-(\n\nIt seems clear that we want the system not to risk using a hashjoin\nunless it has good evidence that the inner table's column has a fairly\nflat distribution. I'm thinking that the right sort of check would be\nto check whether the \"disbursion\" statistic set by VACUUM ANALYZE is\nfairly small, maybe 0.01 or less (but not zero, which would suggest\nthat VACUUM ANALYZE has never been run). 
This would roughly\ncorrespond to the most common value appearing not more than 1% of the\ntime, so that we can be sure at least 100 different hashbuckets will\nbe used. Comments? Is that value too small?\n\nThis change is likely to reduce the optimizer's willingness to use\nhashjoins by a *lot*, especially if we make the threshold too small.\nIf you'd like to see what kind of disbursion numbers you get on your\nown data, try something like\n\tselect relname,attname,attdisbursion from pg_class,pg_attribute\n\twhere attrelid = pg_class.oid and relkind = 'r' and attnum > 0\n\torder by relname,attname;\nafter a vacuum analyze.\n\n\t\t\tregards, tom lane\n\nPS: George, in the meantime I bet your query would run fine if the\nsystem would only choose the opset_id clause instead of opset_ver\nto do the hashjoin with --- opset_id has far better distribution.\nI'm guessing that it thinks the two clauses are equally attractive\nand is just choosing whichever one it happens to process first (or\nlast?). You might try rearranging the order of the WHERE clauses\nas a stopgap solution...\n", "msg_date": "Tue, 03 Aug 1999 11:12:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad select performance fixed by forbidding hash joins " } ]
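Tom's diagnosis -- most inner-table rows landing in three hash buckets, so a typical probe compares against roughly a third of the inner table -- is easy to reproduce in a toy model. This is a simulation only, not PostgreSQL's hashjoin code; the 90%-in-three-values distribution and the bucket count are invented stand-ins for George's opset_ver data:

```python
import random
from collections import defaultdict

def probe_comparisons(inner_keys, outer_keys, n_buckets=64):
    """Build a hash table over inner_keys, then count how many key
    comparisons the probe phase performs for outer_keys."""
    buckets = defaultdict(list)
    for k in inner_keys:
        buckets[hash(k) % n_buckets].append(k)
    return sum(len(buckets[hash(k) % n_buckets]) for k in outer_keys)

random.seed(1)
# Skewed column: ~90% of 1400 rows take one of three values (like opset_ver).
skewed = [random.choice((1, 2, 3)) if random.random() < 0.9
          else random.randint(4, 14) for _ in range(1400)]
# Well-distributed column: all values distinct (like opset_id).
flat = list(range(1400))

outer = 14000  # roughly the size of run_opsets in George's report
work_skewed = probe_comparisons(skewed, [random.choice(skewed) for _ in range(outer)])
work_flat = probe_comparisons(flat, [random.choice(flat) for _ in range(outer)])
print(work_skewed, work_flat)
```

The skewed case does more than an order of magnitude more tuple comparisons. Under the proposed cutoff, the skewed column (most common value near 30% of the rows, so a disbursion far above 0.01) would be rejected for the hash side, while the flat column would pass.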
[ { "msg_contents": "I've been getting the following error out of the backend (followed by a\ngeneral demise of the system)\n\nERROR: btree scan list trashed; can't find 0x1401e38c0\n\nAnybody know what this means? I've spent a day trying to figure out what\nI'm doing wrong, but this gives me no clues. It seems to happen\ncompletely intermittently and seems to depend on a combination of\nclients doing something at the same time. \n\nAny help greatly appreciated!\n\nAdriaan\n", "msg_date": "Tue, 03 Aug 1999 18:15:15 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": true, "msg_subject": "ERROR: btree scan list trashed ??" }, { "msg_contents": "Adriaan Joubert <[email protected]> writes:\n> I've been getting the following error out of the backend (followed by a\n> general demise of the system)\n> ERROR: btree scan list trashed; can't find 0x1401e38c0\n> Anybody know what this means? I've spent a day trying to figure out what\n> I'm doing wrong, but this gives me no clues.\n\nI doubt you are doing anything \"wrong\" ... you're just getting burnt\nby some internal bug.\n\nWhat Postgres version are you using, and on what platform? If it's\nanything older than 6.5.1, an upgrade would probably be a good idea.\n\nIf you are seeing the problem in 6.5.1, then we need to try to fix it.\nA reproducible test case would help a lot in finding the bug.\n\n> It seems to happen completely intermittently and seems to depend on a\n> combination of clients doing something at the same time.\n\nUgh. Getting a reproducible test case might be hard...\n\nBTW, another possible stopgap is to rebuild (drop and re-create) all\nyour indexes. 
If the problem is being triggered by a corrupted index\nthen that should make it go away, at least until the next time the\nindex gets corrupted.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Aug 1999 11:46:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ERROR: btree scan list trashed ?? " } ]
[ { "msg_contents": "Hi,\n\nThis is idle curiosity, I was looking through some papers I found on deferred \nbalancing of btrees, with the idea of implementing it (for coldstore.)\n\nThe idea is that you only need to lock a couple of pages for update, and you \ncome back later to rebalance the tree.\n\nSomeone suggested postgresql already does this. Is it so? If so, would \nsomeone give me a quick precis on the subject?\n\nColin.\n\n\n", "msg_date": "Wed, 04 Aug 1999 01:58:23 +1000", "msg_from": "Colin McCormack <[email protected]>", "msg_from_op": true, "msg_subject": "Curiosity on deferred/delayed balancing btrees" }, { "msg_contents": "> Hi,\n> \n> This is idle curiosity, I was looking through some papers I found on deferred \n> balancing of btrees, with the idea of implementing it (for coldstore.)\n> \n> The idea is that you only need to lock a couple of pages for update, and you \n> come back later to rebalance the tree.\n> \n> Someone suggested postgresql already does this. Is it so? If so, would \n> someone give me a quick precis on the subject?\n\nI think we do. We have nbtree, whatever that is.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 3 Aug 1999 12:19:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Curiosity on deferred/delayed balancing btrees" } ]
[ { "msg_contents": "> > > As well, a few have been asking about multi-threading.\n> > > Any thoughts?\n> > \n> > Threads within a client backend might be interesting. [...]\n> \n> Hmm, what about threads in the frontend? Anyone know if libpq \n> is thread\n> safe, and if not, how hard it might be to make it so?\n\nI believe it is, as long as you only use each PGconn or PGresult from one\nthread at a time. If you have two threads using the same PGconn, you're in\nfor trouble.\n\nMaking handling of PGresult thread-safe shouldn't be too hard, except you\nhave to do it platform-specific (you need some kind of mutex or similar, and\nI doubt you can use the same code on e.g. any Unix and Win32). \n\nDoing the same for PGconn is probably a lot harder, since the\nfrontend/backend protocol is \"single threaded\". So some kind of \"tagging\" of\neach packet telling which thread it belongs to would be required. It would\nprobably be possible to \"lock\" the whole PGconn at the start of any\nprocessing (such as sending a query), and then \"unlock\" once all the results\nhave been moved into a PGresult, but that is going to leave the PGconn\nlocked almost always, which kind of takes away the advantage of threading.\n\n\nI think.\n\n\n//Magnus\n", "msg_date": "Tue, 3 Aug 1999 18:20:19 +0200 ", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Threads" } ]
[ { "msg_contents": ":\n> > Hmm, what about threads in the frontend? Anyone know if \n> libpq is thread\n> > safe, and if not, how hard it might be to make it so?\n> \n> It is not; the main reason why not is a brain-dead part of \n> the API that\n> exposes modifiable global variables. Check the mail list archives\n> (probably psql-interfaces, but maybe -hackers) for previous \n> discussions\n> with details. Earlier this year I think, or maybe late 98.\n\nHmm. Really?\nAFAIK, only:\npgresStatus[]\n\nis exported, and that one is a) read-only and b) deprecated, and replaced\nwith a function.\n\nNo?\n\nOtherwise, I've been darn lucky in the multithreaded stuff I have :-) (I run\nwith a different PGconn for each thread, and the PGresult:s are protected by\nCriticalSections (this is Win32)). And if that's it, then I really need to\nfix it...\n\n\n//Magnus\n", "msg_date": "Tue, 3 Aug 1999 18:34:52 +0200 ", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Threads " }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n>> It is not; the main reason why not is a brain-dead part of \n>> the API that exposes modifiable global variables.\n\n> Hmm. Really?\n\nPQconnectdb() is the function that's not thread-safe; if you had\nmultiple threads invoking PQconnectdb() in parallel you would see a\nproblem. PQconndefaults() is the function that creates an API problem,\nbecause it exposes the static variable that PQconnectdb() ought not have\nhad in the first place.\n\nThere might be some other problems too, but that's the main one I'm\naware of. If we didn't mind breaking existing apps that use\nPQconndefaults(), it would be straightforward to fix...\n\n> Otherwise, I've been darn lucky in the multithreaded stuff I have :-) (I run\n> with a different PGconn for each thread, and the PGresult:s are protected by\n> CriticalSections (this is Win32)). And if that's it, then I really need to\n> fix it...\n\nSeems reasonable. 
PGconns do need to be per-thread, or else protected\nby mutexes, but you can run them in parallel. PGresults are actually\nalmost read-only, and I've been wondering if they can't be made entirely\nso (until destroyed of course). Then you wouldn't need a\nCriticalSection. You might want some kind of reference-counting\nmechanism for PGresults though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Aug 1999 12:59:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads " }, { "msg_contents": "Tom Lane wrote:\n> PQconnectdb() is the function that's not thread-safe; if you had\n> multiple threads invoking PQconnectdb() in parallel you would see a\n> problem. PQconndefaults() is the function that creates an API problem,\n> because it exposes the static variable that PQconnectdb() ought not have\n> had in the first place.\n> \n> There might be some other problems too, but that's the main one I'm\n> aware of. If we didn't mind breaking existing apps that use\n> PQconndefaults(), it would be straightforward to fix...\n\nOh, this is interesting. I've been pointer and thread chasing for the\nlast few hours trying to figure out why AOLserver (a multithreaded open\nsource web server that supports pooled database (including postgresql)\nconnections) doesn't hit this snag -- and I haven't yet found the\nanswer...\n\nHowever, this does answer a question that I had had but had never\nasked...\n\nIn any case, I have a couple of cents to throw in to the multithreaded\ndiscussion at large:\n\n1.)\tWhile threads are nice for those programs that can benefit from\nthem, there are those tasks that are not ideally suited to threads. 
\nWhether postgresql could benefit or not, I don't know; it would be an\ninteresting exercise to rewrite the executor to be multithreaded -- of\ncourse, the hard part is identifying what each thread would do, etc.\n\n2.)\tA large multithreaded program, AOLserver, has just gone from a\nmultihost multiclient multithread model to a single host multiclient\nmultithread model: where AOLserver before would serve as many virtual\nhosts as you wished out of a single multi-threaded process, it was\ndetermined through heavy stress-testing (after all, this server sits\nbehind www.aol.com, www.digitalcity.com, and others), that it was more\nefficient to let the TCP/IP stack in the kernel handle address\nmultiplexing -- thus, the latest version requires you to start a\n(multi-threaded) server process for each virtual host. The source code\nfor this server is a model of multithreaded server design -- see\naolserver.lcs.mit.edu for more.\n\n3.)\tRedesigning an existing single-threaded program to efficiently\nutilize multithreading is non-trivial. Highly non-trivial. In fact,\nefficiently multithreading an existing program may involve a complete\nredesign of basic structures and algorithms -- it did in the case of\nAOLserver (then called Naviserver), which was originally a threaded take\non the CERN httpd.\n\n4.)\tThreading PostgreSQL is going to be a massive effort -- and the\nbiggest part of that effort involves understanding the existing code\nwell enough to completely redesign the interaction of the internals --\nit might be found that an efficient thread model involves multiple\nlayers of threads: one or more threads to parse the SQL source; one or\nmore threads to optimize the query, and one or more threads to execute\noptimized SQL -- even while the parser is still parsing later statements\n-- I realize that doesn't fit very well in the existing PostgreSQL\nmodel. However, the pipelined thread model could be a good fit -- for a\npooled connection or for long queries. 
The most benefit could be had by\neliminating the postmaster/postgres linkage altogether, and having a\nsingle postgres process handle multiple connections on its port in a\nmultiplexed-pipelined manner -- which is the model AOLserver uses. \n\nAOLserver works like this: when a connection request is received, a\nthread is immediately dispatched to service the connection -- if a\nthread in the precreated thread pool is available, it gets it, otherwise\na new thread is created, up to MAXTHREADS.\n\nThe connection thread then pulls the data necessary to service the HTTP\nrequest (which can include dispatching a tcl interpreter thread or a\ndatabase driver thread out of the available database pools (up to\nMAXPOOLS) to service dynamic content). The data is sequentially\nstreamed to the connection, the connection is closed, and the thread\nsleeps for another dispatch.\n\nPretty simple in theory; a bear in practice.\n\nSo, hackers, are there areas of the backend itself that would benefit\nfrom threading? I'm sure the whole 'postmaster forking a backend'\nprocess would benefit from threading from a pure performance point of\nview, but stability could possibly suffer (although, this model is good\nenough for www.aol.com....). Can parsing/optimizing/executing be done\nin a parallel/semi-parallel fashion? Of course, one of the benefits is\ngoing to be effective SMP utilization on architectures that support SMP\nthreading. Multithreading the whole shooting match also eliminates the\nneed for interprocess communication via shared memory -- each connection\nthread has the whole process context to work with.\n\nThe point is that it should be a full architectural redesign to properly\nthread something as large as an RDBMS -- is it worth it, and, if so,\ndoes anybody want to do it (who has enough pthreads experience to do it,\nthat is)? No, I'm not volunteering -- I know enough about threads to be\ndangerous, and know less about the postgres backend. 
Not to mention a\ngreat deal of hard work is going to be involved -- every single line of\ncode will have to be threadsafed -- not a fun prospect, IMO.\n\nAnyone interested in this stuff should take a look at some\nwell-threaded programs (such as AOLserver), and should be familiar with\nsome of the essential literature (such as O'Reilly's pthreads book).\n\nIncidentally, with AOLserver's database connection pooling and\npersistence, you get most of the benefits of a multithreaded backend\nwithout the headaches of a multithreaded backend....\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Tue, 03 Aug 1999 19:56:06 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads" }, { "msg_contents": "At 07:56 PM 8/3/99 -0400, Lamar Owen wrote:\n>Tom Lane wrote:\n>> PQconnectdb() is the function that's not thread-safe; if you had\n>> multiple threads invoking PQconnectdb() in parallel you would see a\n>> problem. PQconndefaults() is the function that creates an API problem,\n>> because it exposes the static variable that PQconnectdb() ought not have\n>> had in the first place.\n>> \n>> There might be some other problems too, but that's the main one I'm\n>> aware of. If we didn't mind breaking existing apps that use\n>> PQconndefaults(), it would be straightforward to fix...\n>\n>Oh, this is interesting. I've been pointer and thread chasing for the\n>last few hours trying to figure out why AOLserver (a multithreaded open\n>source web server that supports pooled database (including postgresql)\n>connections) doesn't hit this snag -- and I haven't yet found the\n>answer...\n\nAOLserver rarely does a connect once the server gets fired up and\nreceives traffic. It may be that actual db connects are\nguarded by semaphores or the like. 
It may be that conflicts are\nrare because on a busy site a connection will be made once and only\nonce and later lives on in the pool forever, with the handle being\nallocated and released by individual .tcl scripts and .adp pages.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n", "msg_date": "Wed, 04 Aug 1999 04:20:30 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads" }, { "msg_contents": "Lamar Owen's comments brought up a thought. Bruce has talked several\ntimes about moving in Oracle's direction, with dedicated backends for\neach database (or maybe in Ingres' direction, since they allow both\ndedicated backends as well as multi-database backends). In any case,\nIFF we went that way, would it make sense to reduce the postmaster's\nrole to more of a traffic cop (a la Ingres' iigcn)?\n\nEffectively, what we'd end up with is a postmaster that knows \"which\nbackends serve which data\" that would then either tell the client to\nreconnect directly to the backend, or else provide a mediated\nconnection.\n\nRedirection will end up costing us a whole 'nother TCP connection\nbuild/destroy which can be disregarded for non-trivial queries, but\nstill may prove too much (depending upon query patterns). On the\nother hand, it would probably be easier to code and have better\nthroughput than funneling all data through the postmaster. On the\ngripping hand, a postmaster that mediated all transactions could also\nimplement QoS style controls, or throttle connections taking an unfair\nshare of the available bandwidth.\n\nIn any event, this could also be the start of a naming service. It\nshould be relatively easy, with either method, to have the postmaster\nhandle connections to databases (not just tables, mind you) on other\nmachines. 
\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================", "msg_date": "4 Aug 1999 08:58:55 -0400", "msg_from": "Brian E Gallew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads" }, { "msg_contents": "> Redirection will end up costing us a whole 'nother TCP connection\n> build/destroy which can be disregarded for non-trivial queries, but\n> still may prove too much (depending upon query patterns). On the\n> other hand, it would probably be easier to code and have better\n> throughput than funneling all data through the postmaster. On the\n> gripping hand, a postmaster that mediated all transactions could also\n> implement QoS style controls, or throttle connections taking an unfair\n> share of the available bandwidth.\n> In any event, this could also be the start of a naming service. It\n> should be relatively easy, with either method, to have the postmaster\n> handle connections to databases (not just tables, mind you) on other\n> machines.\n\nStarting to sound suspiciously like the Corba work I've been doing on\nmy day job.\n\nWe're using ACE/TAO for its realtime and QoS features, but other\nimplementations are probably much lower footprint wrt installation and\n
I suppose we'd want a C implementation; the ones I've been using\nare all C++...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 04 Aug 1999 14:26:07 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads" }, { "msg_contents": "On Wed, 4 Aug 1999, Thomas Lockhart wrote:\n\n> > Redirection will end up costing us a whole 'nother TCP connection\n> > build/destroy which can be disregarded for non-trivial queries, but\n> > still may prove too much (depending upon query patterns). On the\n> > other hand, it would probably be easier to code and have better\n> > throughput than funneling all data through the postmaster. On the\n> > gripping hand, a postmaster that mediated all transactions could also\n> > implement QoS style controls, or throttle connections taking an unfair\n> > share of the available bandwidth.\n> > In any event, this could also be the start of a naming service. It\n> > should be relatively easy, with either method, to have the postmaster\n> > handle connections to databases (not just tables, mind you) on other\n> > machines.\n> \n> Starting to sound suspiciously like the Corba work I've been doing on\n> my day job.\n> \n> We're using ACE/TAO for it's realtime and QoS features, but other\n> implementations are probably much lower footprint wrt installation and\n> use. I suppose we'd want a C implementation; the ones I've been using\n> are all C++...\n\nKDE/KOffice uses Mico, which is also C++...\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 4 Aug 1999 11:54:00 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads" }, { "msg_contents": "> KDE/KOffice uses Mico, which is also C++...\n\nRight, it's a nice implementation from the little I've used it (I have\nit installed to use the tcl binding).\n\nPresumably we would want a C implementation to match the rest of our\nenvironment, but I can't help thinking that a C++ one for the Corba\nparts would be more natural (it maps to the Corba OO view of the\nworld).\n\nACE/TAO includes OS abstractions to allow porting to a *bunch* of\nplatforms, including real-time OSes. However, if you build the whole\npackage and include debugging symbols then you're taking up 1.2GB of\ndisk space for a from-source build (yes, that's GigaByte with a gee\n:()\n\nThe libraries are substantially smaller, but the packaging is not very\ngood yet so you end up keeping the full installation around.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 04 Aug 1999 15:42:48 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Presumably we would want a C implementation to match the rest of our\n> environment\n\nIn which case one might want to consider the Orbit ORB from the GNOME\nproject. 
It's pure C, and is supposed to be quite small and fast, and\naims for full CORBA compliance---which I understand MICO and maybe TAO\ndon't quite achieve.\n\nMike.\n", "msg_date": "04 Aug 1999 13:03:03 -0400", "msg_from": "Michael Alan Dorman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads" }, { "msg_contents": "On 4 Aug 1999, Michael Alan Dorman wrote:\n\n> Thomas Lockhart <[email protected]> writes:\n> > Presumably we would want a C implementation to match the rest of our\n> > environment\n> \n> In which case one might want to consider the Orbit ORB from the GNOME\n> project. It's pure C, and is supposed to be quite small and fast, and\n> aims for full CORBA compliance---which I understand MICO and maybe TAO\n> don't quite achieve.\n\nActually, unless Orbit has recently changed, MICO is more compliant than\nit is...last time we all looked into it, at least, that was the case...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 4 Aug 1999 14:37:45 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads" } ]
[ { "msg_contents": "Ross J. Reedstrom wrote:\n\n> Right. As I've been able to make out so far, in Mariposa a query passes\n> through the regular parser and single-site optimizer, then the selected\n> plan tree is handed to a 'fragmenter' to break the work up into chunks,\n> which are then handed around to a 'broker' which uses a microeconomic\n> 'bid' process to parcel them out to both local and remote executors. The\n> results from each site then go through a local 'coordinator' which merges\n> the result sets, and hands them back to the original client.\n> \n> Whew!\n> \n> It's interesting to compare the theory describing the workings of Mariposa\n> (such as the paper in VLDB), and the code. For the fragmenter, the paper\n> describes basically a rational decomposition of the plan, while the code\n> applies non-deterministic, but tuneable, methods (lots of calls to random\n> and comparisons to user-specified odds ratios).\n> \n> It strikes me as a bit odd to optimize the plan for a single site,\n> then break it all apart again. My thoughts on this are to implement\n> two new node types: one a remote table, and one which represents\n> access to a remote table. Remote tables have host info in them, and\n> would always be added to the plan with a remote-access node directly above\n> them. Remote-access nodes would be separate from their remote-table,\n> to allow the communications cost to be slid up the plan tree, and merged\n> with other remote-access nodes talking to the same server. This should\n> maintain the order-agnostic nature of the optimizer. The executor will\n> need to build SQL statements from the sub-plans and submit them via\n> standard network db access client libraries.\n> \n> First step, create a remote-table node, and teach the executor how to get\n> info from it. Later, add the separable remote-access node.\n> \n> How insane does this sound now? Am I still a mad scientist? 
(...always!)\n\nLet me give a brief \"what, where, and why\" about Mariposa.\n\nThe impetus for Mariposa was that every distributed database\nidea either died or failed to scale up beyond a handful of nodes.\nThere was Ingres/Star, IBM's important R* project, DEC's failed\nRdbStar project, and others. The idea of grouping together a\nbunch of servers is a seductive, but very hard, one.\n\nWhat exists today is a group of simple extensions (basically they\nare: treating remote tables differently, use replication instead\nof distributed consistency, and pushing the problem up to the\nprogrammer's level). Not too transparent or even seamless.\n\nMariposa proposed that tables can dynamically move. And fragments\nof tables. And queries too! This helps a lot towards meeting\nthe big problem of load balancing and configuring any huge system.\nCompare this with web servers -- web sites can become overloaded\nplus everyone knows the permanent location of the data on the\nserver (there is some tricky footwork behind the scenes that allow\nmultiple servers to share the load).\n\nSoooo, how to optimize a query where _everything_ can move?\nThat is the basis for the \"optimize for a single site, later\nsplit it for distributed execution\" idea. The loss of data\nmobility would have been more painful than a more complicated\noptimizer (at least in theory! 
;-) So it does seem a bit insane, but\nthe alternative would have been to assume all tables are remote\nbut this collapses to the same idea as \"all local\" tables that\njust have higher access costs.\n\nFast forward to 1999, Mariposa is now being commercialized by\nCohera (www.cohera.com) as a middleware distribution layer.\n\n--\nBob Devine [email protected]\n\n(PS: Just for background, I proposed a lot of the Mariposa ideas\nway back in 1992 at Berkeley after working on DEC's RdbStar.\nNow working on Impulse - www.cs.utah.edu/projects/impulse)\n\n\n", "msg_date": "Tue, 03 Aug 1999 11:34:47 -0600", "msg_from": "Bob Devine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Mariposa" }, { "msg_contents": "On Tue, Aug 03, 1999 at 11:34:47AM -0600, Bob Devine wrote:\n> Let me give a brief \"what, where, and why\" about Mariposa.\n> \n<SNIPPED Very interesting précis>\n\n> \n> Fast forward to 1999, Mariposa is now being commercialized by\n> Cohera (www.cohera.com) as a middleware distribution layer.\n\nRight - at $40000 a crack for a two backend db version :-(\n\n> \n> --\n> Bob Devine [email protected]\n> \n> (PS: Just for background, I proposed a lot of the Mariposa ideas\n> way back in 1992 at Berkeley after working on DEC's RdbStar.\n> Now working on Impulse - www.cs.utah.edu/projects/impulse)\n\nBob - \n\nThanks for the background. As you say, Mariposa is coming at this problem\nfrom a different angle, and it wasn't clear to me _why_ that was. Now it\nis. I think it'd still be good to have the small, add-on version of remote\ntables that I proposed. Folding the rest of Mariposa in later might be an\ninteresting project, as well, and could co-exist (or be built on top of)\nfixed-location remote tables. Perhaps I should see if it'd be possible to\nuse the Mariposa syntax extensions with fixed remote tables. That way,\ndistributed dbs could be made mobile seamlessly (Ha!)\n\nRoss\n\n-- \nRoss J. 
Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Tue, 3 Aug 1999 13:47:36 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mariposa" }, { "msg_contents": "I've been reading all this with interest, even though I know nothing\nabout distributed database design. I've used Tandem's a bit though, and\nthey do a rather good job of parallelising queries. A key part of\nbuilding an efficient database system on a Tandem is figuring out how\nthe database is distributed over the disks, which used to correspond (on\nthe K10000 anyway) to processors. This partitioning is explicitly\ndeclared. I believe the query optimizer used this information to figure\nout where it had to go for data. If your partitioning was wrong,\nperformance would be dismal; if it was right -- pheew, it would fly.\nBit more onus on the dba, or application developer, but, having worked\non lots of parallel applications, it is my experience that a completely\nautomatic solution is never terribly good. Distributing work/data\noptimally is just too complex a problem to automate.\n\nAdriaan\n", "msg_date": "Wed, 04 Aug 1999 09:00:02 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mariposa" } ]
[ { "msg_contents": "I know that I can't insert a tuple into Postgres > 8192 KBytes long. We\nneed to store data in a variable length float array which can take up a\ntotal length of greater than this amount. To get around the limit, we\nsimply insert a zeroed array (which takes up less character space) and\nthen update the array in chunks specifying where in the array to put the\ndata.\n\n\ne.g. INSERT INTO tablename VALUES '{0,0,0,0,0,0, .... }';\n to pad the array with zeros (this, of course, has to be less\nthan 8192 KBytes)\n\nthen\n\nUPDATE tablename SET array1[1:100] = '{123.9, 12345.987, 123454555.87,\n.... }'\netc.\n\nThis works fine.\n\nOkay, long intro for a short question. When we do a pg_dump and then\nrestore the database should the COPY contained within the pg_dumped file\nbe able to handle these long arrays?\n\n-Tony Reina\n\n\n\n", "msg_date": "Tue, 03 Aug 1999 11:41:21 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Backup database with entries > 8192 KBytes" }, { "msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> I know that I can't insert a tuple into Postgres > 8192 KBytes long.\n\nEr, make that 8K. If the limit were 8Meg, a lot fewer people would be\ncomplaining about it...\n\n> need to store data in a variable length float array which can take up a\n> total length of greater than this amount. To get around the limit, we\n> simply insert a zeroed array (which takes up less character space)\n\nHuh? I assume you're talking about an array of float4 (or float8).\nThat's going to be 4 (or 8) bytes per value internally, zero or not.\n\nMaybe you are thinking of the external textual representation of the\narray.\n\n> Okay, long intro for a short question. When we do a pg_dump and then\n> restore the database should the COPY contained within the pg_dumped file\n> be able to handle these long arrays?\n\nOffhand I'm not sure. 
I don't see any obvious restriction in copy.c,\nbut there could be lurking problems elsewhere. Have you tried it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Aug 1999 18:14:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Backup database with entries > 8192 KBytes " }, { "msg_contents": "Tom Lane wrote:\n\n> \"G. Anthony Reina\" <[email protected]> writes:\n> > I know that I can't insert a tuple into Postgres > 8192 KBytes long.\n>\n> Er, make that 8K. If the limit were 8Meg, a lot fewer people would be\n> complaining about it...\n>\n\nRight. My mistake. 8192 Bytes\n\n>\n> > need to store data in a variable length float array which can take up a\n> > total length of greater than this amount. To get around the limit, we\n> > simply insert a zeroed array (which takes up less character space)\n>\n> Huh? I assume you're talking about an array of float4 (or float8).\n> That's going to be 4 (or 8) bytes per value internally, zero or not.\n>\n\nfloat4 variable array\n\n>\n> Maybe you are thinking of the external textual representation of the\n> array.\n>\n\nright. so the representation when I pass it into Postgres through a C\nprogram:\n\ne.g.\n\n sprintf(data_string, \"INSERT INTO tablename VALUES '{%f'\", array[0]);\n for (i=1; i < length_of_array; i++) {\n sprintf(temp_string, \", %f\", array[i]);\n strcat(data_string, temp_string);\n }\n strcat(data_string, \"}'\");\n\n if (strlen(data_string) > 8192)) {\n printf(\"String too long\\n\");\n exit_nicely(conn);\n }\n\n res = PQexec(conn, data_string);\n\n>\n> > Okay, long intro for a short question. When we do a pg_dump and then\n> > restore the database should the COPY contained within the pg_dumped file\n> > be able to handle these long arrays?\n>\n> Offhand I'm not sure. I don't see any obvious restriction in copy.c,\n> but there could be lurking problems elsewhere. Have you tried it?\n\nYes. But some tables didn't go in when I restored the backup. 
It was probably\ndue to an error somewhere in the COPY command as far as I've been able to\nsurmise. I'm wondering if this could have been that problem (i.e. the array\nwas > 8K and so the COPY command crapped out when I tried to restore from a\npg_dump). When I get some time I'll try to make a small test database with\nlong arrays, pg_dump them, and then restore them.\n\nSorry to be so confusing. Trying to do 10 things at once. Must be the coffee\n... ;>)\nThanks for the help.\n-Tony\n\n", "msg_date": "Wed, 04 Aug 1999 15:40:42 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Backup database with entries > 8192 KBytes" } ]
[ { "msg_contents": "> >> It is not; the main reason why not is a brain-dead part of \n> >> the API that exposes modifiable global variables.\n> \n> > Hmm. Really?\n> \n> PQconnectdb() is the function that's not thread-safe; if you had\n> multiple threads invoking PQconnectdb() in parallel you would see a\n> problem. PQconndefaults() is the function that creates an \n> API problem,\n> because it exposes the static variable that PQconnectdb() \n> ought not have\n> had in the first place.\n\nOk. Now I see it. I guess my code worked because I run PQconnectdb() at the\nstart of the program, and hand down the PGconn:s to the threads later. So\nonly one thread can call PQconnectdb().\n\n\n> There might be some other problems too, but that's the main one I'm\n> aware of. If we didn't mind breaking existing apps that use\n> PQconndefaults(), it would be straightforward to fix...\n\nWouldn't it be possible to do something like this (ok, a little bit ugly,\nbut shouldn't break the clients):\n\nStore the \"active\" PQconninfoOptions array inside the PGconn struct. That\nway, the user should not require to change anything when doing PQconnectdb()\nand the likes.\n\nRewrite the conninfo_xxx functions to take a (PQconninfoOptions *) as\nparameter to work on, instead of working on the static array.\n\nKeep the static array, rename it to PQconninfoDefaultOptions, make it\ncontain the *default* options *from the beginning*, and declare it as\n\"const\". Then have PQconndefaults() return that array. Then the\nPQconndefaults() works just like before, and does not break the old\nprograms.\n\nShouldn't this be possible to achieve without any changes in the API?\n\nIf you don't see anything obviously wrong with this, I can try to put\ntogether a patch to do that. 
It'd be really nice to have a thread-safe\nclient lib :-)\n\n\n> You might want some kind of reference-counting\n> mechanism for PGresults though.\nIn this case, the PGresults are owned by the client connections, and are\nonly used by one client connection at a time, and they are freed when the\nclient connection ends. The PGconns are owned one each by the Worker Threads\nin the pool, and are freed when the worker thread is stopped (which is when\nthe application is stopped). So no special reference-counting should be\nneeded.\n\n//Magnus\n", "msg_date": "Wed, 4 Aug 1999 14:52:15 +0200 ", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Threads " }, { "msg_contents": "[ interfaces list added, since we are discussing an incompatible libpq\nAPI change to fix the problem that PQconnectdb() is not thread-safe ]\n\nMagnus Hagander <[email protected]> writes:\n>> There might be some other problems too, but that's the main one I'm\n>> aware of. If we didn't mind breaking existing apps that use\n>> PQconndefaults(), it would be straightforward to fix...\n\n> Wouldn't it be possible to do something like this (ok, a little bit ugly,\n> but shouldn't break the clients):\n\n> Store the \"active\" PQconninfoOptions array inside the PGconn struct. That\n> way, the user should not require to change anything when doing PQconnectdb()\n> and the likes.\n\nI don't think we'd need to store the array on a long-term basis; it is\nreally only needed as working storage during PQconnectdb. So it could\nbe malloc'd at PQconnectdb entry and free'd at exit, in the normal case\nwhere it's just supporting a PQconnectdb call. As you saw, that'd be\na pretty straightforward set of changes inside fe-connect.c.\n\nThe problem is how should PQconndefaults() act.\n\n> Keep the static array, rename it to PQconninfoDefaultOptions, make it\n> contain the *default* options *from the beginning*, and declare it as\n> \"const\". 
Then have PQconndefaults() return that array.\n\nIt's not const because the default values are not constants --- they\ndepend on environment variables. In theory the results could change\nfrom call to call, if the client program does putenv()'s in between.\nI dunno whether there are any thread packages that keep separate\nenvironment-variable sets for each thread, but if so then the results\ncould theoretically vary across threads.\n\nThe most natural thing given the above change would be to say that what\nPQconndefaults() returns is a malloc'd array, and the user program is\nrequired to free said array when done with it. (Actually, required to\ncall some helper routine inside fe-connect.c, which would know to free\nthe subsidiary variable strings along with the top-level array...)\n\nThe breakage here is that existing code won't know to call the free\nroutine, and will therefore suffer a memory leak of ~ a few hundred bytes\nper PQconndefaults() call. It might well be that we could live with\nthat, since I'll bet that most client programs don't use PQconndefaults\nat all, much less call it so many times that a leak of that size would be\na problem. Comments?\n\nWhat I'm envisioning is a static const array that contains all the\nfixed fields of PQconninfoOptions, but the variable fields (just\nthe \"val\" current-value field, AFAIR) are permanently NULL. Then\nto create a working copy you do\n\tptr = malloc(sizeof(PQconninfoOptions));\n\tmemcpy(ptr, PQconninfoOptions, sizeof(PQconninfoOptions));\nThe free routine would run down the work array freeing any non-null val\nstrings, then free the array itself. 
No copying or freeing of the\nconstant subsidiary strings (such as \"keyword\") is needed.\n\nIf you want to work on it, be my guest...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Aug 1999 10:08:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Threads " }, { "msg_contents": "On 4 Aug, Tom Lane wrote:\n\n> [ Snip discussion regarding PQconnectdb()s thread-safety ]\n> \n> If you want to work on it, be my guest...\n\nI don't have time to think about this today, so I can't comment on how\nit should work, but I _am_ currently working in this area - I am\nproviding non-blocking versions of the connect statements, as discussed\non the interfaces list a couple of weeks ago. In fact, it is pretty\nmuch done, apart from a tidy-up, documentation, and testing. I don't\nsee any point in two people hammering away at the same code - it will\nonly make work when we try to merge again - so perhaps I should\nimplement what ever is decided - I don't mind doing so. However, if I\ndidn't get it done this weekend it would have to be mid-to-late\nSeptember, since I'm going away. Would that be a problem for anyone?\n\nI had noticed that the connect statements weren't thread-safe, but\nwas neither aware that that was a problem for anyone, nor inclined to\naudit the whole of libpq for thread-safety, so I left it alone.\n\nEwan.\n\n\n", "msg_date": "Wed, 4 Aug 1999 17:18:19 +0100 (BST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: [HACKERS] Threads " } ]
[ { "msg_contents": "Adriaan Joubert <[email protected]> writes:\n>> What Postgres version are you using, and on what platform? If it's\n>> anything older than 6.5.1, an upgrade would probably be a good idea.\n\n> Sorry, I should have mentioned that. I'm using 6.5.0 on DEC Alpha\n> (Digital Unix, compiled with cc). \n\nAlpha, eh? We have some known porting problems on 64-bit architectures,\nand I wonder whether this is one of them. Going to be hard to nail that\ndown until we can reproduce the error, however.\n\nAfter some digging around in backend/access/nbtree/nbtscan.c, which is\nproducing the error, I notice that the routine in question is searching\na list that does not get cleared properly at transaction abort. It's\nnot clear that that's the cause of the error message, though. What\nI suggest at this point is that you pay more attention to what happens\njust before the transaction in which you get the \"btree scan list\ntrashed\" message. In particular, are there any commands that abort\nwith errors a little bit earlier in the same backend? It might take\nthe combination of an error in a btree-index-using command and then\nanother btree index access to provoke the \"trashed\" symptom.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Aug 1999 09:46:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ERROR: btree scan list trashed ?? " }, { "msg_contents": "Tom Lane wrote:\n> \n> Adriaan Joubert <[email protected]> writes:\n> >> What Postgres version are you using, and on what platform? If it's\n> >> anything older than 6.5.1, an upgrade would probably be a good idea.\n> \n> > Sorry, I should have mentioned that. I'm using 6.5.0 on DEC Alpha\n> > (Digital Unix, compiled with cc).\n> \n> Alpha, eh? We have some known porting problems on 64-bit architectures,\n> and I wonder whether this is one of them. 
Going to be hard to nail that\n> down until we can reproduce the error, however.\n> \n> After some digging around in backend/access/nbtree/nbtscan.c, which is\n> producing the error, I notice that the routine in question is searching\n> a list that does not get cleared properly at transaction abort. It's\n> not clear that that's the cause of the error message, though. What\n> I suggest at this point is that you pay more attention to what happens\n> just before the transaction in which you get the \"btree scan list\n> trashed\" message. In particular, are there any commands that abort\n> with errors a little bit earlier in the same backend? It might take\n> the combination of an error in a btree-index-using command and then\n> another btree index access to provoke the \"trashed\" symptom.\n\n\nThat may be it. I have some PL routines that raise an exception if an\noperation could lead to an inconsistency in my database tables. This is\nnot really an error, but I do want to abort the transaction in that\ncase, so I raise an exception. I'll continue trying to nail it down and\nthen I can look with the debugger what happens.\n\nBTW, I've installed 6.5.1 and still have the same problems. Vacuuming\nhung up everything, and I had to shut the whole thing down and restart\nit to get it working again. Dropping the indices and rebuilding them all\nfixed the problem.\n\nHow difficult is it to clear the list at transaction abort? Is this\nsomething I could patch and try out?\n\nThanks a lot for looking at this, much appreciated!\n\nAdriaan\n", "msg_date": "Wed, 04 Aug 1999 17:49:47 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ERROR: btree scan list trashed ??" 
}, { "msg_contents": "Adriaan Joubert <[email protected]> writes:\n>> After some digging around in backend/access/nbtree/nbtscan.c, which is\n>> producing the error, I notice that the routine in question is searching\n>> a list that does not get cleared properly at transaction abort. It's\n>> not clear that that's the cause of the error message, though.\n\n> BTW, I've installed 6.5.1 and still have the same problems.\n\nNo surprise, really.\n\n> Vacuuming hung up everything, and I had to shut the whole thing down\n> and restart it to get it working again. Dropping the indices and\n> rebuilding them all fixed the problem.\n\nHmm, that suggests that your indexes are actually getting corrupted.\n\n> How difficult is it to clear the list at transaction abort? Is this\n> something I could patch and try out?\n\nThe BTScans variable in nbtscan.c needs to be reset to NULL during\nxact abort. I don't see how this would *directly* cause the\nobserved symptom, but failing to do it should lead to misbehavior in\n_bt_adjscans() during later transactions, so it might be related\nsomehow. If you want to patch it, make a subroutine that clears the\nvariable (no need to free the list; since it's palloc'd it'll go\naway anyway) and call it from transaction cleanup in\nbackend/access/transam/xact.c.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Aug 1999 11:59:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ERROR: btree scan list trashed ?? " }, { "msg_contents": "Tom Lane wrote:\n> \n> The BTScans variable in nbtscan.c needs to be reset to NULL during\n> xact abort. I don't see how this would *directly* cause the\n> observed symptom, but failing to do it should lead to misbehavior in\n> _bt_adjscans() during later transactions, so it might be related\n> somehow. 
If you want to patch it, make a subroutine that clears the\n> variable (no need to free the list; since it's palloc'd it'll go\n> away anyway) and call it from transaction cleanup in\n> backend/access/transam/xact.c.\n\nThis should be fixed in CVS too.\n\nVadim\n", "msg_date": "Thu, 05 Aug 1999 11:57:44 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ERROR: btree scan list trashed ??" }, { "msg_contents": "On Thu, 5 Aug 1999, Vadim Mikheev wrote:\n\n> Tom Lane wrote:\n> > \n> > The BTScans variable in nbtscan.c needs to be reset to NULL during\n> > xact abort. I don't see how this would *directly* cause the\n> > observed symptom, but failing to do it should lead to misbehavior in\n> > _bt_adjscans() during later transactions, so it might be related\n> > somehow. If you want to patch it, make a subroutine that clears the\n> > variable (no need to free the list; since it's palloc'd it'll go\n> > away anyway) and call it from transaction cleanup in\n> > backend/access/transam/xact.c.\n> \n> This should be fixed in CVS too.\n\nIs this something that can be easily back-patched for v6.5.2?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 5 Aug 1999 01:21:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ERROR: btree scan list trashed ??" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Thu, 5 Aug 1999, Vadim Mikheev wrote:\n>> Tom Lane wrote:\n>>>> The BTScans variable in nbtscan.c needs to be reset to NULL during\n>>>> xact abort.\n>> \n>> This should be fixed in CVS too.\n\nYes, absolutely.\n\n> Is this something that can be easily back-patched for v6.5.2?\n\nI will patch this in both current and REL6_5. 
But, although this\nis clearly a bug, I am not at all convinced that it explains\nAdriaan's problem. I think more creepie-crawlies lurk nearby :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Aug 1999 00:37:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ERROR: btree scan list trashed ?? " }, { "msg_contents": "> I will patch this in both current and REL6_5. But, although this\n> is clearly a bug, I am not at all convinced that it explains\n> Adriaan's problem. I think more creepie-crawlies lurk nearby :-(\n\n\nHmm, I made the changes and I only got three errors out of the system\ntoday. So it is not fixed, although perhaps improved (or was I just\nlucky?). I've been locking tables more restrictively, so this may have\nhelped as well. I definitely think this has something to do with\nconcurrent accesses to the same index. It always seems to start\nhappening as the the tables start getting updates more rapidly.\n\nAnother thought: an index on a table that gets updated sometimes through\na PL trigger is an index on a user-defined type (the bitmask type I\nposted a while ago). Could this have something to do with a btree index\non a user-defined type? I'll drop that index and see whether it makes a\ndifference. All indexes on other tables that are touched are int4.\n\nThanks for all the help, Tom!\n\nAdriaan\n", "msg_date": "Thu, 05 Aug 1999 16:32:33 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ERROR: btree scan list trashed ??" }, { "msg_contents": "OK, I've dropped my user-defined type index and it hasn't made any\ndifference. I've had quite a few of the following again:\n\nUPDATE TasksIds SET qstate=8::bit1 where task=358 and id=5654\nERROR: btree scan list trashed; can't find 0x1401744a0\n\nI've got a lot of logging switched on, and these do not seem to be\npreceded by errors. 
Since patching it the system seems to recover ok, so\nI'm wondering whether this could be a caching issue. I think I will just\nlock all tables in their entirety now, and see whether that fixes it\n(there goes my MVCC performance boost 8-(). I still think it has\nsomething to do with concurrent access to the indices.\n\nIf anybody has any more suggestions of what I could try, please let me\nknow.\n\nCheers,\n\nAdriaan\n", "msg_date": "Fri, 06 Aug 1999 12:01:55 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ERROR: btree scan list trashed ??" }, { "msg_contents": "Adriaan Joubert <[email protected]> writes:\n> OK, I've dropped my user-defined type index and it hasn't made any\n> difference. I've had quite a few of the following again:\n\n> UPDATE TasksIds SET qstate=8::bit1 where task=358 and id=5654\n> ERROR: btree scan list trashed; can't find 0x1401744a0\n\n> I've got a lot of logging switched on, and these do not seem to be\n> preceded by errors. Since patching it the system seems to recover ok, so\n> I'm wondering whether this could be a caching issue. I think I will just\n> lock all tables in their entirety now, and see whether that fixes it\n> (there goes my MVCC performance boost 8-(). I still think it has\n> something to do with concurrent access to the indices.\n\nLet us know whether going to full locking makes any difference.\n\nI am currently wondering whether this is a porting issue (64-bit vs\n32-bit pointers). If it only happens on 64-bit platforms, that would\nexplain why we haven't seen many similar reports. Unfortunately,\nthat theory provides little useful guidance about where to look :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Aug 1999 09:51:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ERROR: btree scan list trashed ?? " } ]
[ { "msg_contents": "Hi Thomas,\n\nI have been noticing that if I write something like\n\t... WHERE float8column < 33;\nthe system is not very smart about it, whereas\n\t... WHERE float8column < 33.0;\nworks fine. The reason is that what the optimizer gets handed in the\nfirst case is actually\n\t... WHERE float8column < float8(33);\nie, the parse tree reflects a run-time type coercion function call;\nand the optimizer doesn't recognize that as a potential indexqual\nrestriction --- it wants \"var op constant\".\n\nOf course the long-run answer for problems of this ilk is to insert a\nconstant-expression-folding stage, but for this particular case it seems\nto me that the parser is wasting a lot of cycles by not outputting a\nconstant node of the right type in the first place. Especially since it\ndoes convert constants to the desired type in other cases.\n\nLooking into this, I find that the reason for the difference is that\nparse_coerce() only performs parse-time coercion of constants if they\nare of type UNKNOWNOID --- ie, the constant is of string type. And\nindeed\n\t... WHERE float8column < '33';\nproduces a pre-coerced float8 constant! But make_const produces type\nINT4OID or INT8OID for integer or float constants.\n\nIt seems to me that parse_coerce ought to do parse-time coercion if\nthe input tree is a constant of either UNKNOWNOID, INT4OID, or FLOAT8OID\ntype, and only fall back to inserting a function call if it's unable\nto do the coercion. Am I missing anything?\n\nIt also looks like parser_typecast2() could be dispensed with, or more\naccurately folded into parse_coerce(). 
Is there a reason not to?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Aug 1999 19:00:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "parse_coerce question" }, { "msg_contents": "> Looking into this, I find that the reason for the difference is that\n> parse_coerce() only performs parse-time coercion of constants if they\n> are of type UNKNOWNOID --- ie, the constant is of string type. And\n> indeed\n> \t... WHERE float8column < '33';\n> produces a pre-coerced float8 constant! But make_const produces type\n> INT4OID or INT8OID for integer or float constants.\n> \n> It seems to me that parse_coerce ought to do parse-time coercion if\n> the input tree is a constant of either UNKNOWNOID, INT4OID, or FLOAT8OID\n> type, and only fall back to inserting a function call if it's unable\n> to do the coercion. Am I missing anything?\n\nYou are right. The textin/out trick is an old one, and one we only did\nbecause we _had_ to make some conversion at that point. No problem\nmaking it more general.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 4 Aug 1999 19:44:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] parse_coerce question" }, { "msg_contents": "> > It seems to me that parse_coerce ought to do parse-time coercion if\n> > the input tree is a constant of either UNKNOWNOID, INT4OID, or FLOAT8OID\n> > type, and only fall back to inserting a function call if it's unable\n> > to do the coercion. Am I missing anything?\n> You are right. The textin/out trick is an old one, and one we only did\n> because we _had_ to make some conversion at that point. No problem\n> making it more general.\n\nSure, as long as we don't use textin/out to do it. 
It's an old trick\nwith more limitations than benefits. The Right Way to approach it is\nto use type-specific conversion functions, so that real conversions\ncan take place. textin/out relies on the fact that the printed format\nof a type is *precisely* the same as the format for the target type,\nwhich is true for only a very limited number of cases.\n\nThere is already code for doing type coersion. As Tom points out, it\ncurrently wraps a type conversion function around the constant, to be\nevaluated later. It should be easy to pre-evaluate that function,\nwhich btw should happen anyway. afaik it does, but not until after the\noptimizer has had its look at the query, and by then it is too late to\nselect an index properly, for example.\n\nFor the index selection problem, I was thinking to move some of the\nparse_coerce techniques to that part of the code, so that functions on\nconstants are allowed to be considered as candidate constants in a\nquery.\n\nIn any case, you'll need to make sure that you only promote types one\ndirection, so that (for example)\n\n select intcol from table where intcol < 33.5;\n\ngets evaluated correctly.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 05 Aug 1999 05:15:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] parse_coerce question" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>>>> It seems to me that parse_coerce ought to do parse-time coercion if\n>>>> the input tree is a constant of either UNKNOWNOID, INT4OID, or FLOAT8OID\n>>>> type, and only fall back to inserting a function call if it's unable\n>>>> to do the coercion. Am I missing anything?\n>> You are right. The textin/out trick is an old one, and one we only did\n>> because we _had_ to make some conversion at that point. No problem\n>> making it more general.\n\n> Sure, as long as we don't use textin/out to do it. 
It's an old trick\n> with more limitations than benefits. The Right Way to approach it is\n> to use type-specific conversion functions, so that real conversions\n> can take place.\n\nRight --- the revision I committed last night looks up the\ntype-conversion function the same as before, but then applies it\nimmediately if the input is a constant.\n\n> It should be easy to pre-evaluate that function,\n> which btw should happen anyway. afaik it does, but not until after the\n> optimizer has had its look at the query,\n\nI'm not aware of any post-optimizer place where that might happen.\nIn any case, the optimizer would be much happier if constant-expression\nreduction happened before it rather than after.\n\n> For the index selection problem, I was thinking to move some of the\n> parse_coerce techniques to that part of the code, so that functions on\n> constants are allowed to be considered as candidate constants in a\n> query.\n\nI still think we want a generalized constant-expression folder, applied\nafter rule rewrite and before the optimizer. This particular case was\njust something I thought the parser should handle, since it was already\nhandling closely related cases...\n\n> In any case, you'll need to make sure that you only promote types one\n> direction, so that (for example)\n> select intcol from table where intcol < 33.5;\n> gets evaluated correctly.\n\nThat is not parse_coerce()'s problem --- it just does what it's told.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Aug 1999 10:21:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] parse_coerce question " }, { "msg_contents": "> > In any case, you'll need to make sure that you only promote types one\n> > direction, so that (for example)\n> > select intcol from table where intcol < 33.5;\n> > gets evaluated correctly.\n> That is not parse_coerce()'s problem --- it just does what it's told.\n\nRight. 
I wasn't sure how you were going to implement it. If you are\ndoing everything the same, but just pre-evaluating the result, we\nshould be OK.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 05 Aug 1999 14:39:29 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] parse_coerce question" } ]
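A rough sketch of the parse-time folding described in this thread: when the argument of the looked-up coercion function is already a constant, apply the function immediately so the optimizer sees `var op constant` rather than `var op func(constant)`. The node classes and the `COERCIONS` table are invented stand-ins for the parser's real structures (`Const` nodes, pg_proc lookups), shown in Python rather than the backend's C:

```python
class Const:
    """Hypothetical stand-in for the parser's constant node."""
    def __init__(self, type_, value):
        self.type, self.value = type_, value

class Var:
    """Hypothetical stand-in for a column reference."""
    def __init__(self, name, type_):
        self.name, self.type = name, type_

class FuncCall:
    """Run-time type-conversion call wrapped around a node."""
    def __init__(self, name, func, arg):
        self.name, self.func, self.arg = name, func, arg

# Invented stand-in for looking up the type-conversion routine.
COERCIONS = {
    ("int4", "float8"): ("float8", float),
    ("unknown", "float8"): ("float8", float),   # string literal -> float8
}

def coerce_type(node, target_type):
    """Coerce node to target_type.

    If node is a constant, evaluate the conversion at parse time (the
    behavior added in this thread); otherwise wrap it in a conversion
    call to be evaluated at run time, as before.
    """
    if node.type == target_type:
        return node
    name, func = COERCIONS[(node.type, target_type)]
    if isinstance(node, Const):
        return Const(target_type, func(node.value))   # fold immediately
    return FuncCall(name, func, node)

# "WHERE float8column < 33": the integer constant is folded at parse
# time, so the optimizer sees var-op-constant and can use an index.
folded = coerce_type(Const("int4", 33), "float8")
```

A non-constant argument still takes the old path and gets a deferred conversion call, which is why a general constant-expression folder remains useful for anything more complex than a bare literal.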
[ { "msg_contents": "Hi,\n\nSQL92 seems to assume that results of the INTERSECT/EXCEPT contain no\nduplications unless \"ALL\" is specified. However PostgreSQL 6.5.1 does\nnot follow the requirement, I think. Maybe we need to add this to the\nTODO list?\n---\nTatsuo Ishii\n\n", "msg_date": "Thu, 05 Aug 1999 10:13:59 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "INTERSECT/EXCEPT and duplicates?" }, { "msg_contents": "> Hi,\n> \n> SQL92 seems to assume that results of the INTERSECT/EXCEPT contain no\n> duplications unless \"ALL\" is specified. However PostgreSQL 6.5.1 does\n> not follow the requirement, I think. Maybe we need to add this to the\n> TODO list?\n> ---\n\nAdded to TODO:\n\n* have INTERSECT/EXCEPT prevent duplicates unless ALL is specified\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 4 Aug 1999 21:44:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] INTERSECT/EXCEPT and duplicates?" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> SQL92 seems to assume that results of the INTERSECT/EXCEPT contain no\n> duplications unless \"ALL\" is specified. However PostgreSQL 6.5.1 does\n> not follow the requirement, I think.\n\nI think you are right --- the ALL keyword is only implemented for UNION\n(and it's not quite right there either :-(). For the other two, you\njust get whichever behavior was easiest to implement, I suppose...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Aug 1999 10:09:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] INTERSECT/EXCEPT and duplicates? " } ]
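The SQL92 behavior Tatsuo points out reduces to set versus multiset semantics: without ALL the result is duplicate-free; with ALL, INTERSECT keeps the minimum of the two multiplicities and EXCEPT keeps their difference. A small sketch of the intended semantics (hypothetical helper names, Python rather than SQL):

```python
from collections import Counter

def intersect(a, b, all_rows=False):
    """SQL92 INTERSECT: without ALL, duplicates are removed; with ALL,
    each value appears min(count in a, count in b) times."""
    if not all_rows:
        return sorted(set(a) & set(b))
    ca, cb = Counter(a), Counter(b)
    out = []
    for v in sorted(ca):
        out.extend([v] * min(ca[v], cb[v]))
    return out

def except_(a, b, all_rows=False):
    """SQL92 EXCEPT: without ALL, distinct values of a not in b; with
    ALL, each value appears max(count in a - count in b, 0) times."""
    if not all_rows:
        return sorted(set(a) - set(b))
    ca, cb = Counter(a), Counter(b)
    out = []
    for v in sorted(ca):
        out.extend([v] * max(ca[v] - cb[v], 0))
    return out
```

The TODO item amounts to making the default (no-ALL) paths above the actual behavior of INTERSECT and EXCEPT, as was already done for UNION versus UNION ALL.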
[ { "msg_contents": "> Thanks for packaging up PG 6.5.1 into RPM's, but I do have one\n> suggestion. Please include the static .a libraries as well as\n> the dynamic .so libraries so I can create statically linked\n> PG client apps.\n\nGood point. We'll do that for the next round.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 05 Aug 1999 05:17:28 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG 6.5.1 RPMS" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> > Thanks for packaging up PG 6.5.1 into RPM's, but I do have one\n> > suggestion. Please include the static .a libraries as well as\n> > the dynamic .so libraries so I can create statically linked\n> > PG client apps.\n> \n> Good point. We'll do that for the next round.\n \nMy 6.5.1 rpm set already has that fixed..... :-) For Doug's edification,\nhttp://www.ramifordistat.net has my confusingly named RPM set (there are\ntwo \"6.5.1-1\" RPM's -- one there, and one on ftp.postgresql.org.)\n\nLamar\n", "msg_date": "Thu, 05 Aug 1999 13:30:25 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 6.5.1 RPMS" } ]
[ { "msg_contents": "\nJust a short note to let everyone know that any email sent to me from\ntomorrow (1800UT) won't be read for a week, as I'm away to see the\nEclipse.\n\nNormally I wouldn't send this sort of email, except that I receive a good\nnumber of emails asking about JDBC problems, so I thought if I sent this\nto the lists, anyone with a question would send them to the interfaces\nlist instead.\n\nAnyhow, I do have a backlog of patches here for JDBC, but at the rate that\nI'm working through other things, I now doubt that I will be able to\ncommit them in time (unless I can wrangle some time tomorrow morning).\n\nPeter\n\n--\n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Thu, 5 Aug 1999 14:12:24 +0100 (GMT)", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Email" } ]
[ { "msg_contents": "Someone was just complaining over in the sql list about the poor\nperformance of\n\nselect name,description from descriptions \nwhere name in (select name \n\t\tfrom descriptions \n\t\twhere description like '%Bankverbindung%');\n\nSince the inner query is uncorrelated with the outer, there's really\nno need to execute it more than once, but currently it's re-executed\neach time through the outer plan.\n\nI wonder whether it wouldn't be a good idea to force a Materialize\nnode to be added to the top of an uncorrelated subplan? Then at\nleast the re-executions would be pretty cheap...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Aug 1999 10:52:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Idea for speeding up uncorrelated subqueries" }, { "msg_contents": "> Someone was just complaining over in the sql list about the poor\n> performance of\n> \n> select name,description from descriptions \n> where name in (select name \n> \t\tfrom descriptions \n> \t\twhere description like '%Bankverbindung%');\n> \n> Since the inner query is uncorrelated with the outer, there's really\n> no need to execute it more than once, but currently it's re-executed\n> each time through the outer plan.\n> \n> I wonder whether it wouldn't be a good idea to force a Materialize\n> node to be added to the top of an uncorrelated subplan? Then at\n> least the re-executions would be pretty cheap...\n\nYes, the subqueries need work. We don't even do index lookups into the\ninner plan, only sequential. Already on TODO. The multiple query\nexecutions are not on the TODO list. Not sure why this is happening\nhere.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 5 Aug 1999 11:07:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Idea for speeding up uncorrelated subqueries" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yes, the subqueries need work. We don't even do index lookups into the\n> inner plan, only sequential. Already on TODO.\n\nHuh? I don't follow that at all...\n\n> The multiple query executions are not on the TODO list. Not sure why\n> this is happening here.\n\nAfter looking at subselect.c I think I understand why --- InitPlans are\nonly for subqueries that are known to return a *single* reslt. When you\nhave a subquery that might potentially return many, many tuples, you\nneed to scan through those tuples, so we use SubPlan tactics even if\nthere's not a query correlation.\n\nBut this neglects the cost of re-executing the subplan over and over.\nMaterializing the result should help, no? (Of course there are cases\nwhere it won't, such as when the subplan is just an unqualified select,\nbut most of the time it should be a win, I think...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Aug 1999 11:13:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Idea for speeding up uncorrelated subqueries " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Yes, the subqueries need work. We don't even do index lookups into the\n> > inner plan, only sequential. Already on TODO.\n> \n> Huh? I don't follow that at all...\n\nSuppose you have a subquery that returns 1000 rows. There is no code so\nlookups of the inner table are indexed:\n\n\tselect *\n\tfrom tab\n\twhere col in (select col2 from tab2)\n\nIn this case, a sequential scan of the subquery results are required. 
I\ndidn't think the subquery was executed every time it needed to see if\ncol1 was in the subquery.\n\n> \n> > The multiple query executions are not on the TODO list. Not sure why\n> > this is happening here.\n> \n> After looking at subselect.c I think I understand why --- InitPlans are\n> only for subqueries that are known to return a *single* result. When you\n> have a subquery that might potentially return many, many tuples, you\n> need to scan through those tuples, so we use SubPlan tactics even if\n> there's not a query correlation.\n> \n> But this neglects the cost of re-executing the subplan over and over.\n> Materializing the result should help, no? (Of course there are cases\n> where it won't, such as when the subplan is just an unqualified select,\n> but most of the time it should be a win, I think...)\n\nNow that Vadim has done MVCC, I would like to bug him to improve subquery\nperformance. We are tweaking the optimizer, but we have this huge\nsubquery performance problem here.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 5 Aug 1999 11:48:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Idea for speeding up uncorrelated subqueries" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Bruce Momjian <[email protected]> writes:\n>>>> Yes, the subqueries need work. We don't even do index lookups into the\n>>>> inner plan, only sequential. Already on TODO.\n>> \n>> Huh? I don't follow that at all...\n\n> Suppose you have a subquery that returns 1000 rows. 
There is no code so\n> lookups of the inner table are indexed:\n\n> \tselect *\n> \tfrom tab\n> \twhere col in (select col2 from tab2)\n\n> In this case, a sequential scan of the subquery results are required.\n\nWell, yes, the subquery is a sequential scan. I guess what you are\nenvisioning is rewriting this into some kind of nested-loop join?\nFor simple cases that might be possible...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Aug 1999 12:10:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Idea for speeding up uncorrelated subqueries " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Bruce Momjian <[email protected]> writes:\n> >>>> Yes, the subqueries need work. We don't even do index lookups into the\n> >>>> inner plan, only sequential. Already on TODO.\n> >> \n> >> Huh? I don't follow that at all...\n> \n> > Suppose you have a subquery that returns 1000 rows. There is no code so\n> > lookups of the inner table are indexed:\n> \n> > \tselect *\n> > \tfrom tab\n> > \twhere col in (select col2 from tab2)\n> \n> > In this case, a sequential scan of the subquery results are required.\n> \n> Well, yes, the subquery is a sequential scan. I guess what you are\n> envisioning is rewriting this into some kind of nested-loop join?\n> For simple cases that might be possible...\n\nYes, or mergejoin/hashjoin.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 5 Aug 1999 12:17:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Idea for speeding up uncorrelated subqueries" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Yes, the subqueries need work. We don't even do index lookups into the\n> inner plan, only sequential. Already on TODO. 
The multiple query\n ^^^^^^^^^^^^^^^\nWhat? Indices are used when appropriate.\n\nVadim\n", "msg_date": "Fri, 06 Aug 1999 10:12:10 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Idea for speeding up uncorrelated subqueries" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > Yes, the subqueries need work. We don't even do index lookups into the\n> > inner plan, only sequential. Already on TODO. The multiple query\n> ^^^^^^^^^^^^^^^\n> What? Indices are used when appropriate.\n\nSorry, bad wording. My English should be better. :-)\n\nI meant to say that joins from the outer plan to subplan are always\nnested loops, not hash or mergejoins. If you have something like:\n\n\tdelete from tab where col in (select col2 from largetable)\n\nit takes a long time, when it really should be quick. That is why\npeople are encouraged to use EXISTS.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 5 Aug 1999 22:35:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Idea for speeding up uncorrelated subqueries" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Suppose you have a subquery that returns 1000 rows. There is no code so\n> lookups of the inner table are indexed:\n> \n> select *\n> from tab\n> where col in (select col2 from tab2)\n> \n> In this case, a sequential scan of the subquery results are required. I\n> didn't think the subquery was executed every time it needed to see if\n> col1 was in the subquery.\n\nOh, well. 
These are cases when query may be rewritten with EXISTS.\nThis is possible when there are no aggregates in subquery.\n\n> > After looking at subselect.c I think I understand why --- InitPlans are\n> > only for subqueries that are known to return a *single* reslt. When you\n> > have a subquery that might potentially return many, many tuples, you\n> > need to scan through those tuples, so we use SubPlan tactics even if\n> > there's not a query correlation.\n\nYes. But as I said already, you can use InitPlan (i.e. execute subquery\nfirst) after removing duplicates from subquery results.\n\n> > But this neglects the cost of re-executing the subplan over and over.\n> > Materializing the result should help, no? (Of course there are cases\n\nWe could not only cache subquery results (materialization) but also\nhash them. \n\n> > where it won't, such as when the subplan is just an unqualified select,\n> > but most of the time it should be a win, I think...)\n\nIn such cases, if there are no aggregates in subquery then EXISTS\ncould be used else materialization will still help. \n\n> No what Vadim is done MVCC, I would like to bug him to improve subquery\n> performance. We are tweeking the optimizer, but we have this huge\n> subquery performance problem here.\n\nNo, Bruce. I'm in WAL now. 
I think that we need recovery\n(remember that you'll lose indices being updated when some\ncrash took place), fast backup (it's easier to copy part of the log \nthan dump a 1Gb table), fast commits (<= 1 fsync per commit\nusing group commit, instead of >= 2 fsyncs now), savepoints \nAND buffered logging, which you, Bruce, want so much, \nand so long -:).\n\nVadim\n", "msg_date": "Fri, 06 Aug 1999 11:04:38 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Idea for speeding up uncorrelated subqueries" }, { "msg_contents": "> > > where it won't, such as when the subplan is just an unqualified select,\n> > > but most of the time it should be a win, I think...)\n> \n> In such cases, if there are no aggregates in subquery then EXISTS\n> could be used else materialization will still help. \n> \n> > Now that Vadim has done MVCC, I would like to bug him to improve subquery\n> > performance. We are tweaking the optimizer, but we have this huge\n> > subquery performance problem here.\n> \n> No, Bruce. I'm in WAL now. I think that we need recovery\n> (remember that you'll lose indices being updated when some\n> crash took place), fast backup (it's easier to copy part of the log \n> than dump a 1Gb table), fast commits (<= 1 fsync per commit\n> using group commit, instead of >= 2 fsyncs now), savepoints \n> AND buffered logging, which you, Bruce, want so much, \n> and so long -:).\n\nOh, no, I have been outmaneuvered by Vadim.\n\nHelp.\n\nIsn't it something that takes only a few hours to implement? We can't\nkeep telling people to use EXISTS, especially because most SQL people\nthink correlated queries are slower than non-correlated ones. Can we\njust on-the-fly rewrite the query to use exists?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 5 Aug 1999 23:31:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Idea for speeding up uncorrelated subqueries" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Isn't it something that takes only a few hours to implement. We can't\n> keep telling people to us EXISTS, especially because most SQL people\n> think correlated queries are slower that non-correlated ones. Can we\n> just on-the-fly rewrite the query to use exists?\n\nThis seems easy to implement. We could look does subquery have\naggregates or not before calling union_planner() in\nsubselect.c:_make_subplan() and rewrite it (change \nslink->subLinkType from IN to EXISTS and add quals).\n\nWithout caching implemented IN-->EXISTS rewriting always\nhas sence.\n\nAfter implementation of caching we probably should call union_planner()\nfor both original/modified subqueries and compare costs/sizes\nof EXISTS/IN_with_caching plans and maybe even make\ndecision what plan to use after parent query is planned\nand we know for how many parent rows subplan will be executed.\n\nVadim\n", "msg_date": "Fri, 06 Aug 1999 12:01:57 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Idea for speeding up uncorrelated subqueries" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Isn't it something that takes only a few hours to implement. We can't\n> keep telling people to us EXISTS, especially because most SQL people\n> think correlated queries are slower that non-correlated ones. Can we\n> just on-the-fly rewrite the query to use exists?\n\nI was just about to suggest exactly that. 
The \"IN (subselect)\"\nnotation seems to be a lot more intuitive --- at least, people\nkeep coming up with it --- so why not rewrite it to the EXISTS\nform, if we can handle that more efficiently?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Aug 1999 00:14:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Idea for speeding up uncorrelated subqueries " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Isn't it something that takes only a few hours to implement. We can't\n> > keep telling people to us EXISTS, especially because most SQL people\n> > think correlated queries are slower that non-correlated ones. Can we\n> > just on-the-fly rewrite the query to use exists?\n> \n> I was just about to suggest exactly that. The \"IN (subselect)\"\n> notation seems to be a lot more intuitive --- at least, people\n> keep coming up with it --- so why not rewrite it to the EXISTS\n> form, if we can handle that more efficiently?\n\nYes, we have the nice subselect feature. I just hate to see it not\ncompletely finished, performance-wise.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Aug 1999 01:11:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Idea for speeding up uncorrelated subqueries" } ]
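The two remedies discussed in this thread — materializing (and hashing) an uncorrelated subplan so it runs once, versus re-executing it for every outer row — can be compared with a small sketch. The helper names are invented and this is Python rather than the executor's C; it only illustrates the cost difference, not PostgreSQL's actual SubPlan machinery:

```python
def naive_in_subquery(outer_rows, run_subquery, key):
    """Re-execute the uncorrelated subquery for every outer row,
    mimicking the current SubPlan behavior described in the thread."""
    return [row for row in outer_rows if key(row) in run_subquery()]

def materialized_in_subquery(outer_rows, run_subquery, key):
    """Run the subquery once, hash its results, then probe each outer
    row -- the materialize-and-hash approach suggested above."""
    materialized = set(run_subquery())   # one execution, duplicates collapsed
    return [row for row in outer_rows if key(row) in materialized]

calls = 0
def run_subquery():
    """Stand-in for executing the inner plan; counts its executions."""
    global calls
    calls += 1
    return [2, 3, 3, 5]

rows = [1, 2, 3, 4, 5]
naive = naive_in_subquery(rows, run_subquery, key=lambda r: r)
naive_calls = calls
calls = 0
cached = materialized_in_subquery(rows, run_subquery, key=lambda r: r)
cached_calls = calls
```

With five outer rows the naive form runs the subquery five times while the materialized form runs it once; collapsing the result into a set also provides the duplicate elimination Vadim notes is needed before an IN subquery can be treated like an InitPlan or rewritten as an EXISTS.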
[ { "msg_contents": "I've posted a tarball of new man pages at\n\n ftp://postgresql.org/pub/doc/man.tar.gz\n\nI *think* these are ready for prime time, or close to it. They are\ngenerated completely automatically from the sgml sources, using some\npatched perl utilities. I'll try to post the complete set of docs\ntools on the ftp site sometime soon.\n\nNote that there are a few more man pages than were available in the\noriginal versions, and that *all* information in the original man\npages appears in the new ones (or somewhere in the other docs).\n\nI haven't yet updated the cvs tree to contain these new man pages.\nPlease look through the tarball and report any problems you see, if\nyou have any interest in the man page issue. Also, please report if\nthey look OK, so I know *someone* looked at them ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 05 Aug 1999 14:53:15 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "New man pages" }, { "msg_contents": "> I've posted a tarball of new man pages at\n> \n> ftp://postgresql.org/pub/doc/man.tar.gz\n> \n> I *think* these are ready for prime time, or close to it. They are\n> generated completely automatically from the sgml sources, using some\n> patched perl utilities. I'll try to post the complete set of docs\n> tools on the ftp site sometime soon.\n> \n> Note that there are a few more man pages than were available in the\n> original versions, and that *all* information in the original man\n> pages appears in the new ones (or somewhere in the other docs).\n> \n> I haven't yet updated the cvs tree to contain these new man pages.\n> Please look through the tarball and report any problems you see, if\n> you have any interest in the man page issue. 
Also, please report if\n> they look OK, so I know *someone* looked at them ;)\n\nI looked at the new pages, and they looked very good, much better than I\nthought they would. The only problem was the display of the command\nsyntax was wrapped rather than being one operator per line:\n\n select [distinct [on attr_name]]\n expression1 [as attr_name-1]\n {, expression-1 [as attr_name-i]}\n [into [temp] [table] classname]\n [from from-list]\n [where where-clause]\n [group by attr_name1 {, attr_name-i....}]\n [having having-clause]\n\nbecame:\n\n select [distinct [on attr_name]] expression1 [as attr_name-1]\n {, expression-1 [as attr_name-i]} [into [temp] [table] classname]\n [from from-list] [where where-clause] [group by attr_name1 \n\t {, attr_name-i....}] [having having-clause]\n\nWhich is almost unreadable.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Aug 1999 11:02:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] New man pages" }, { "msg_contents": "> I looked at the new pages, and they looked very good, much better than I\n> thought they would.\n\n:)\n\n> The only problem was the display of the command\n> syntax was wrapped rather than being one operator per line:\n> Which is almost unreadable.\n\nFixed it.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 06 Aug 1999 15:28:41 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] New man pages" }, { "msg_contents": "> > I looked at the new pages, and they looked very good, much better than I\n> > thought they would.\n> \n> :)\n\nYes, I was prepared to really lose some of the nice formatting we had,\nbut we really didn't lose anything.\n\n> \n> > The only 
problem was the display of the command\n> > syntax was wrapped rather than being one operator per line:\n> > Which is almost unreadable.\n> \n> Fixed it.\n\nThanks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Aug 1999 11:52:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] New man pages" }, { "msg_contents": "I mentioned this earlier in the context of pg_dump, which fails\ntrying to create the table \"pgdump_oid\".\n\nAfter a bit, a memory jog reminded me that I've seen this\nbefore, with the table \"foo\", which I once used for testing.\n\nAfter a fair number of \"create/drop\" cycles, making then\ndropping tables for testing, pgsql now refuses to let me\n\"create table foo...\", giving the same simple error message\n\"can't create foo\" as pg_dump's getting on pgdump_oid.\n\nI can't \"drop table foo\", getting an error message telling\nme the class doesn't exist, so that's not the problem.\n\nI CAN create/drop other tables, i.e. \"create table bar...\"\nfollowed by \"drop table bar\" works fine.\n\nSo it doesn't appear to be a general permissions problem,\ni.e. it's not as though the system thinks I don't have\ncreate table rights.\n\nIt would seem as some system table is being corrupted???\n\nDoes this sound at all familiar?\n\nUnfortunately, I don't know how to reproduce this other\nthan create/drop tables until eventually it fails. As\nI mentioned in my first note, pg_dump has been running\nnightly on this database for weeks, at least, with no\nerrors reported. Suddenly - poof! 
can't create pgdump_oid.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 06 Aug 1999 09:42:28 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Unable to create tables..." }, { "msg_contents": "Don Baccus wrote:\n \n> I can't \"drop table foo\", getting an error message telling\n> me the class doesn't exist, so that's not the problem.\n \n> It would seem as some system table is being corrupted???\n\nCheck pg_tables and pg_class -- select * from pg_tables; will give you a\nlist of tables from the system catalog, and select * from pg_class; will\ndo the same for classes -- if the pseudo-dropped table shows in either\nof these tables, delete it, and see if that helps. If this indeed is a\nsystem table corruption..... eeewwww....\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Fri, 06 Aug 1999 14:14:08 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unable to create tables..." }, { "msg_contents": "At 02:14 PM 8/6/99 -0400, Lamar Owen wrote:\n>Don Baccus wrote:\n> \n>> I can't \"drop table foo\", getting an error message telling\n>> me the class doesn't exist, so that's not the problem.\n> \n>> It would seem as some system table is being corrupted???\n>\n>Check pg_tables and pg_class -- select * from pg_tables; will give you a\n>list of tables from the system catalog, and select * from pg_class; will\n>do the same for classes -- if the pseudo-dropped table shows in either\n>of these tables, delete it, and see if that helps. If this indeed is a\n>system table corruption..... 
eeewwww....\n\nNeither \"pgdump_oid\" or \"foo\" (my other example from my\nfollow-up message) appear to exist in pg_class or pg_tables.\n\nThanks for the suggestion, though :(\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 06 Aug 1999 11:23:47 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unable to create tables..." }, { "msg_contents": "At 11:23 AM 8/6/99 -0700, Don Baccus wrote:\n\n>Neither \"pgdump_oid\" or \"foo\" (my other example from my\n>follow-up message) appear to exist in pg_class or pg_tables.\n>\n>Thanks for the suggestion, though :(\n\nThings are getting more and more odd...\n\nI've done some more testing and things are now in a state\nwhere I can create a table, drop the table (and get the\nmessage \"DROP\" back), yet the relation still exists.\n\nIn fact, I can do a \"select count(*) from ...\" on it and\nget zero rows back.\n\nArgh!\n\nOf course, now if I try to create a table with that name,\nI'm told the relation already exists.\n\nDifferent than the situation with pgdump_oid and foo,\nwhere I'm just told that the create failed.\n\nObviously, some table contents somewhere must be messed\nup. Any ideas?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 06 Aug 1999 11:54:31 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unable to create tables..." 
}, { "msg_contents": "On Fri, Aug 06, 1999 at 11:54:31AM -0700, Don Baccus wrote:\n> At 11:23 AM 8/6/99 -0700, Don Baccus wrote:\n> \n> >Neither \"pgdump_oid\" or \"foo\" (my other example from my\n> >follow-up message) appear to exist in pg_class or pg_tables.\n> >\n> >Thanks for the suggestion, though :(\n> \n> Things are getting more and more odd...\n> \n> I've done some more testing and things are now in a state\n> where I can create a table, drop the table (and get the\n> message \"DROP\" back), yet the relation still exists.\n> \n> In fact, I can do a \"select count(*) from ...\" on it and\n> get zero rows back.\n> \n> Argh!\n> \n> Of course, now if I try to create a table with that name,\n> I'm told the relation already exists.\n> \n> Different than the situation with pgdump_oid and foo,\n> where I'm just told that the create failed.\n> \n> Obviously, some table contents somewhere must be messed\n> up. Any ideas?\n> \n\nCheck to see if there are files in the pgsql/data/base/'yourdbname'\ndirectory called 'pgdump_oid' and 'foo'. Some situations lead to a table\nbeing almost completely deleted, but leaving the file behind. Doesn't\nexplain the 'table still there' phenomena, but might let you recreate a\n'dropped' table.\n\nRoss\nP.S. one common problem is dropping a table doesn't always get all the\nobjects created by 'convenience' types. For example, I'm not sure the \nsequence created for a serial type gets dropped with its table. In fact,\nI'm pretty sure it doesn't (and, for now, shouldn't)\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Fri, 6 Aug 1999 14:46:38 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unable to create tables..." 
}, { "msg_contents": "Don Baccus wrote:\n> Obviously, some table contents somewhere must be messed\n> up. Any ideas?\n\nOoooo....\n\nIf this were happening to me, I'd probably stop postmaster, rename the\nPGDATA tree to something else, initdb, start postmaster, restore from\nthe last good dump, stop postmaster, copy back the user database dirs\nunder PGDATA, restart postmaster, VACUUM all tables -- on the production\nmachine, if that is where the problems are. Then, I'd pull that PGDATA\nbinary backup over to a development workstation, start up a postmaster\npointing to it, and do a post-mortem, checking all system tables for\ntheir contents, running vacuum, et al (all the while keeping a good copy\nof the old PGDATA tree -- just in case something blows up).\n\nObviously, some system catalog somewhere is getting farkled -- Don, I'm\nassuming that you are vacuuming often.\n\nHTHaL\n\nLamar\n", "msg_date": "Fri, 06 Aug 1999 15:53:52 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unable to create tables..." }, { "msg_contents": "At 02:46 PM 8/6/99 -0500, Ross J. Reedstrom wrote:\n\n>Check to see if there are files in the pgsql/data/base/'yourdbname'\n>directory called 'pgdump_oid' and 'foo'. Some situations lead to a table\n>being almost completely deleted, but leaving the file behind. Doesn't\n>explain the 'table still there' phenomena, but might let you recreate a\n>'dropped' table.\n\nThanks, I thought of this one myself, and deleted \"foo\". This is\nwhen it got into the mode of allowing \"create table foo...\" and\nan apparently successful \"drop table foo\", but with \"foo\" left\nbehind in pg_class (I think that's right) and \"select count(*)\nfrom foo\" returning 0 rows (i.e. the relation really seems to\nexist!)\n\n>\n>Ross\n>P.S. once common problem is dropping a table doesn't always get all the\n>objects created by 'convenience' types. 
For example, not not sure the \n>sequence created for a serial type gets dropped with its table. In fact,\n>I'm pretty sure it doesn't (and, for now, shouldn't)\n\nYes, this isn't my problem, though.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 06 Aug 1999 12:59:33 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unable to create tables..." }, { "msg_contents": "At 03:53 PM 8/6/99 -0400, Lamar Owen wrote:\n>Don Baccus wrote:\n>> Obviously, some table contents somewhere must be messed\n>> up. Any ideas?\n>\n>Ooooo....\n>\n>If this were happening to me, I'd probably stop postmaster, rename the\n>PGDATA tree to something else, initdb, start postmaster, restore from\n>the last good dump, stop postmaster, copy back the user database dirs\n>under PGDATA, restart postmaster, VACUUM all tables -- on the production\n>machine, if that is where the problems are. Then, I'd pull that PGDATA\n>binary backup over to a development workstation, start up a postmaster\n>pointing to it, and do a post-mortem, checking all system tables for\n>their contents, running vacuum, et al (all the while keeping a good copy\n>of the old PGDATA tree -- just in case something blows up).\n\nI decided awhile back to punt and rebuild.\n\n>Obviously, some system catalog somewhere is getting farkled -- Don, I'm\n>assuming that you are vacuuming often.\n\nNightly, after dumping.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 06 Aug 1999 13:00:19 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unable to create tables..." 
}, { "msg_contents": "Hi Thomas,\n\nI'm looking at man pages, and I find it very good. Now we have only one\ndocumentation's version. Nice job.\nThere's only a little thing that I would like to recall your attention;\nI see whenever the name Postgres instead of\nPostgreSQL. Is there a reason to continue to call it Postgres in the\ndocs ?\n\n-------------------------------------------------------------------------\n\nUPDATE(l) UPDATE(l)\n\nNAME\n UPDATE - Replaces values of columns in a table\n\nSYNOPSIS\n UPDATE table SET R\">colle> = expression [, ...] [ FROM\n ^^^^^^^\n fromlist ] [ WHERE condition ]\n\n INPUTS\n table The name of an existing table.\n\n column The name of a column in table.\n\n expression\n A valid expression or value to assign to column.\n\n fromlist\n A Postgres non-standard extension to allow columns\n ^^^^^^^\n from other tables to appear in the WHERE condition.\n\nJosé\n\n\nThomas Lockhart wrote:\n\n> I've posted a tarball of new man pages at\n>\n> ftp://postgresql.org/pub/doc/man.tar.gz\n>\n> I *think* these are ready for prime time, or close to it. They are\n> generated completely automatically from the sgml sources, using some\n> patched perl utilities. I'll try to post the complete set of docs\n> tools on the ftp site sometime soon.\n>\n> Note that there are a few more man pages than were available in the\n> original versions, and that *all* information in the original man\n> pages appears in the new ones (or somewhere in the other docs).\n>\n> I haven't yet updated the cvs tree to contain these new man pages.\n> Please look through the tarball and report any problems you see, if\n> you have any interest in the man page issue. 
Also, please report if\n> they look OK, so I know *someone* looked at them ;)\n>\n> - Thomas\n>\n> --\n> Thomas Lockhart [email protected]\n> South Pasadena, California\n\n", "msg_date": "Mon, 09 Aug 1999 15:26:46 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New man pages" }, { "msg_contents": "> There's only a little thing that I would like to recall your attention;\n> I see whenever the name Postgres instead of\n> PostgreSQL. Is there a reason to continue to call it Postgres in the\n> docs ?\n\nI have chosen to use \"Postgres\" within the docs, as a shorter (and\npronouncable ;) form of our product. \"PostgreSQL\" appears in all\ntitles and introductory material. I have considered the \"SQL\" part of\nthe \"PostgreSQL\" as sort of a version or branch, like \"OpenIngres\" or\n\"Windows 2000\", and a bit cumbersome in the body of the docs.\n\nBut that was a choice which can always be reconsidered, we're just a\n\"sed\" away from a different name...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 09 Aug 1999 14:11:36 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New man pages" }, { "msg_contents": "> > There's only a little thing that I would like to recall your attention;\n> > I see whenever the name Postgres instead of\n> > PostgreSQL. Is there a reason to continue to call it Postgres in the\n> > docs ?\n> \n> I have chosen to use \"Postgres\" within the docs, as a shorter (and\n> pronouncable ;) form of our product. \"PostgreSQL\" appears in all\n> titles and introductory material. 
I have considered the \"SQL\" part of\nthe \"PostgreSQL\" as sort of a version or branch, like \"OpenIngres\" or\n\"Windows 2000\", and a bit cumbersome in the body of the docs.\n\nBut that was a choice which can always be reconsidered, we're just a\n\"sed\" away from a different name...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 09 Aug 1999 14:11:36 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New man pages" }, { "msg_contents": "> > There's only a little thing that I would like to recall your attention;\n> > I see whenever the name Postgres instead of\n> > PostgreSQL. Is there a reason to continue to call it Postgres in the\n> > docs ?\n> \n> I have chosen to use \"Postgres\" within the docs, as a shorter (and\n> pronouncable ;) form of our product. \"PostgreSQL\" appears in all\n> titles and introductory material. I have considered the \"SQL\" part of\n> the \"PostgreSQL\" as sort of a version or branch, like \"OpenIngres\" or\n> \"Windows 2000\", and a bit cumbersome in the body of the docs.\n> \n> But that was a choice which can always be reconsidered, we're just a\n> \"sed\" away from a different name...\n\nI vote for PostgreSQL.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 9 Aug 1999 10:45:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New man pages" }, { "msg_contents": "On Mon, 9 Aug 1999, Bruce Momjian wrote:\n\n> > > There's only a little thing that I would like to recall your attention;\n> > > I see whenever the name Postgres instead of\n> > > PostgreSQL. Is there a reason to continue to call it Postgres in the\n> > > docs ?\n> > \n> > I have chosen to use \"Postgres\" within the docs, as a shorter (and\n> > pronouncable ;) form of our product. \"PostgreSQL\" appears in all\n> > titles and introductory material. I have considered the \"SQL\" part of\n> > the \"PostgreSQL\" as sort of a version or branch, like \"OpenIngres\" or\n> > \"Windows 2000\", and a bit cumbersome in the body of the docs.\n> > \n> > But that was a choice which can always be reconsidered, we're just a\n> > \"sed\" away from a different name...\n> \n> I vote for PostgreSQL.\n\nI'm just an end user who loves the product...with an opinion. ;-)\n\nAlthough I never used postgres (i.e., prior to the suffix being appended)\nI always use `postgres' in conversation both as it is easily pronounced\nand as there is a rather notable history/lineage. 
When communicating\nwith other postgres fans I say pee-gee...\n\nWhen I refer specifically to the newer incarnation I say Postgres SQL\n(post-gress see-qwell) rather than postgreS-Q-L...\n\nI don't really mind if the man pages get edited as if I ever choose to\nread them to my 4 year old son I will swap in the generic name, on the\nfly. But we are not up to that point yet - my son is still learning\nabout grep. ;-)\n\nI fully appreciate what the name is designed to convey but it does not\nroll off the tongue...so I kinda like Thomas' decision to stick with\nthe more generic term - and the more poetic.\n\nCheers,\nTom\n\n------- North Richmond Community Mental Health Center -------\n\nThomas Good MIS Coordinator\nVital Signs: tomg@ { admin | q8 } .nrnet.org\n Phone: 718-354-5528 \n Fax: 718-354-5056 \n \n/* Member: Computer Professionals For Social Responsibility */ \n\n", "msg_date": "Mon, 9 Aug 1999 11:48:09 -0400 (EDT)", "msg_from": "Thomas Good <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New man pages" }, { "msg_contents": "Thomas Lockhart wrote:\n> I have chosen to use \"Postgres\" within the docs, as a shorter (and\n> pronouncable ;) form of our product. \"PostgreSQL\" appears in all\n\nHow hard is it to say \"Postgresquel\", really? Don't tell me you've been\nsaying \"Postgres cue ell\".....;-)\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Mon, 09 Aug 1999 13:47:16 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Pronunciation of \"PostgreSQL\" (was: Re: [HACKERS] New man pages)" }, { "msg_contents": "On Mon, 9 Aug 1999, Lamar Owen wrote:\n\n> Thomas Lockhart wrote:\n> > I have chosen to use \"Postgres\" within the docs, as a shorter (and\n> > pronouncable ;) form of our product. \"PostgreSQL\" appears in all\n> \n> How hard is it to say \"Postgresquel\", really? Don't tell me you've been\n> saying \"Postgres cue ell\".....;-)\n\n
I don't see that there is really any difference between saying\n\"postgresequel\" and \"postgres-cue-ell\". The syllables are just switched.\n(kind of) However, perhaps the inventors of this term could offer their\ninsight into their intended pronounciation.\n\n-- \nPeter Eisentraut\nPathWay Computing, Inc.\n\n", "msg_date": "Mon, 9 Aug 1999 15:16:05 -0400 (EDT)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pronunciation of \"PostgreSQL\" (was: Re: [HACKERS] New man pages)" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> On Mon, 9 Aug 1999, Lamar Owen wrote:\n> > How hard is it to say \"Postgresquel\", really? Don't tell me you've been\n> > saying \"Postgres cue ell\".....;-)\n> \n> I have! I don't see that there is really any difference between saying\n> \"postgresequel\" and \"postgres-cue-ell\". The syllables are just switched.\n> (kind of) However, perhaps the inventors of this term could offer their\n> insight into their intended pronounciation.\n\nNot post-gre'-se-quel -- post-gres'-quel. One less syllable. At least,\nthat's how I've been saying it. But, then again, when referring to\nMySQL I don't say my-s-q-l; I say my'-squel.\n\nOh well...\n\nLamar\n", "msg_date": "Mon, 09 Aug 1999 15:25:19 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pronunciation of \"PostgreSQL\" (was: Re: [HACKERS] New man pages)" }, { "msg_contents": "The only thing I hate about PostgreSQL is that it's hard to type with all\nthat mixed case. I confess that I've always pronounced it \"postgres\"\nanyway, so there!\n\nThe point of a name is to be distinctive and somewhat descriptive.\nPostgres suffices to set the product off from other database systems.\nThere's no real need for the SQL to be in the name. 
Even Microsoft, in\ntheir eternal marketing battle, doesn't make a point of writing AccesSQL.\n\nMy two bits, and your mileage may vary.\n\nOn Mon, 9 Aug 1999, Peter Eisentraut wrote:\n\n> On Mon, 9 Aug 1999, Lamar Owen wrote:\n> \n> > Thomas Lockhart wrote:\n> > > I have chosen to use \"Postgres\" within the docs, as a shorter (and\n> > > pronouncable ;) form of our product. \"PostgreSQL\" appears in all\n> > \n> > How hard is it to say \"Postgresquel\", really? Don't tell me you've been\n> > saying \"Postgres cue ell\".....;-)\n> \n> I have! I don't see that there is really any difference between saying\n> \"postgresequel\" and \"postgres-cue-ell\". The syllables are just switched.\n> (kind of) However, perhaps the inventors of this term could offer their\n> insight into their intended pronounciation.\n> \n> -- \n> Peter Eisentraut\n> PathWay Computing, Inc.\n> \n> \n> \n\n", "msg_date": "Mon, 9 Aug 1999 14:51:02 -0500 (EST)", "msg_from": "\"J. Michael Roberts\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pronunciation of \"PostgreSQL\" (was: Re: [HACKERS] New man pages)" }, { "msg_contents": "On Mon, Aug 09, 1999 at 02:51:02PM -0500, J. Michael Roberts wrote:\n> The only thing I hate about PostgreSQL is that it's hard to type with all\n> that mixed case. I confess that I've always pronounced it \"postgres\"\n> anyway, so there!\n> \n> The point of a name is to be distinctive and somewhat descriptive.\n> Postgres suffices to set the product off from other database systems.\n> There's no real need for the SQL to be in the name. Even Microsoft, in\n> their eternal marketing battle, doesn't make a point of writing AccesSQL.\n> \n\nWell, that might be because MS-Access... isn't! Their SQL server product,\nhowever is called,...\n\nSQL Server!\n\nAnyone else get cheesed off how MS seems to always try to co-opt generic\nwords and turn them into ProductNames(tm)? Word? Access? SQL? 
In my own\nlittle rebellion, I make a point of prefixing MS- whenever I speak or\nwrite about their products.\n\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Mon, 9 Aug 1999 15:32:26 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pronunciation of \"PostgreSQL\" (was: Re: [HACKERS] New man pages)" }, { "msg_contents": "> > > I have chosen to use \"Postgres\" within the docs, as a shorter (and\n> > > pronouncable ;) form of our product. \"PostgreSQL\" appears in all\n> > How hard is it to say \"Postgresquel\", really? Don't tell me you've been\n> > saying \"Postgres cue ell\".....;-)\n\nI'll bet that even the coiners of the term have some differences in\ntheir pronunciation. For the record, I use \"Postgres\" usually, and\n\"Postgres-cue-ell\" when forced...\n\n - Thomas\n\nSince we cap the \"S\", however you pronounce \"SQL\" should probably be\nhow you do the end of \"PostgreSQL\". Lamar, are you a \"ess-quel\"\nperson? That may be a regional dialect. Don't even get me started on\nthat: it took me years to get over my strong inclination to use\n\"hacker\" to refer to a bad, clueless programmer...\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 10 Aug 1999 02:16:02 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pronunciation of \"PostgreSQL\" (was: Re: [HACKERS] New man pages)" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> I'll bet that even the coiners of the term have some differences in\n> their pronunciation. For the record, I use \"Postgres\" usually, and\n> \"Postgres-cue-ell\" when forced...\n\n> Since we cap the \"S\", however you pronounce \"SQL\" should probably be\n> how you do the end of \"PostgreSQL\". 
Lamar, are you a \"ess-quel\"\n> person? That may be a regional dialect.\n\nOnly in postgressive context (ouch...)\n\nI have been known to say 'squel' (as opposed to ess-quel....). If I\ncould just inflect my Southern Drawl in text.... And I have said ess cue\nell (not very often...)... Oh well. Minor points. I think it goes back\nto my Z80 days, where I'd pronounce the machine code -- it sounds real\nstrange to say \"sidbateotwo\" to mean CD B8 02 (CALL x'02B8' in Z80\nassembler), but I have actually done that. Just a little game that a\nfriend and I would play.\n\nIncidentally, radically changing the subject, I have done some tests on\nthe RPM-packaged perl client, with great success. I am also\nexperimenting with my new (3lo) RPM's, which are the first try to\npackage the regression tests. Now to see if they run ;-/ As soon as\nthe RedHat 5.2 machine (a creaky 486-100 w/16MB) finishes a good build,\nI'll post. Although, I am hitting snags -- the regression tests have\nsome strange requirements -- ie, the resulting regress.so in the package\nis built to require /usr/local/bin/perl, and /usr/local/bin/python..... \nOh well; I'll slog through it.\n\nNow to learn enough python to be dangerous...\n\nLamar\n", "msg_date": "Mon, 09 Aug 1999 23:19:47 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Pronunciation of \"PostgreSQL\" (was: Re: [HACKERS] New man pages)" }, { "msg_contents": "> I have done some tests on\n> the RPM-packaged perl client, with great success.\n\nThat's building the client into the distro-specific directories for\nperl? Great...\n\n> I am also\n> experimenting with my new (3lo) RPM's, which are the first try to\n> package the regression tests. Now to see if they run ;-/ As soon as\n> the RedHat 5.2 machine (a creaky 486-100 w/16MB) finishes a good build,\n> I'll post. 
Although, I am hitting snags -- the regression tests have\n> some strange requirements -- ie, the resulting regress.so in the package\n> is built to require /usr/local/bin/perl, and /usr/local/bin/python.....\n> Oh well; I'll slog through it.\n\nKeep on patching. It's pretty convenient for stuff like this...\n\n> Now to learn enough python to be dangerous...\n\nDangerous doesn't take very long. I haven't progressed past that yet,\nat least for python (others may suggest other topics too ;)\n\nI was just rebuilding the plain RPMs to include the .a forms of the\nlibraries, and noticed problems with:\n\n1) naming the programming language shared libraries (not libpltcl.so\nbut pltcl.so, etc)\n\n2) finding bin/pgaccess/README.pga (it is obsolete)\n2a) bin/pgaccess/README should be included in the pgaccess docs target\n\nHere is the spec file for you to compare to previous versions; perhaps\nyou can forward your spec file so I don't have to download an entire\n-src.rpm to start scoping it out?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California", "msg_date": "Tue, 10 Aug 1999 03:31:18 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RPMs (was Pronunciation of \"PostgreSQL\")" }, { "msg_contents": "Thomas Lockhart wrote:\n> That's building the client into the distro-specific directories for\n> perl? Great...\n\nActually, no -- it's doing exactly what your first run of it did -- and\nit works. A simple connect-select-fetchrows sequence did exactly as\nexpected -- and fancier constructs worked. My take on that is to say\nthat, if it works, use it -- it is being put in a reasonable place with\nyour install, and perl is happy. 
I've done some fairly extensive tests\nof that interface, and have yet to crash it.\n\n> > some strange requirements -- ie, the resulting regress.so in the package\n> > is built to require /usr/local/bin/perl, and /usr/local/bin/python.....\n> > Oh well; I'll slog through it.\n> \n> Keep on patching. It's pretty convenient for stuff like this...\n\nWell, I found that it wasn't the regression tests _per se_ that was\ncausing a failed requires for /usr/local/bin/[perl,python], but some\nmiscellaneous testing scripts in the test tree -- for reasons of\ncompleteness, I'm packaging the whole test tree -- not just the\nregression tests, but the performance tests, benchmarks, et al. On a\nRedHat system, perl is always at /usr/bin/perl, and python is always at\n/usr/bin/python, but it's easy enough to issue a which perl or which\npython to make sure, and to make the RPM as portable as RPM's can get.\n\n> I was just rebuilding the plain RPMs to include the .a forms of the\n> libraries, and noticed problems with:\n> \n> 1) naming the programming language shared libraries (not libpltcl.so\n> but pltcl.so, etc)\n\nGot those.\n\n> 2) finding bin/pgaccess/README.pga (it is obsolete)\n> 2a) bin/pgaccess/README should be included in the pgaccess docs target\n\nThat's one I haven't yet corrected... Although I have included 0.97b of\nthe pgaccess tcl script, as /usr/bin/pgaccess97, for testing.\n\n> Here is the spec file for you to compare to previous versions; perhaps\n> you can forward your spec file so I don't have to download an entire\n> -src.rpm to start scoping it out?\n\nhttp://www.ramifordistat.net/postgres/postgresql-6.5.1-2lo.spec is the\none I last released. 
3lo is not ready for prime-time -- however, I've\nuploaded it to postgresql-6.5.1-3lo.spec.beta in the same directory,\nrather than take up list bandwidth.\n\nLamar\n", "msg_date": "Tue, 10 Aug 1999 00:03:14 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: RPMs (was Pronunciation of \"PostgreSQL\")" }, { "msg_contents": "On Tue, 10 Aug 1999, Thomas Lockhart wrote:\n\n> > > > I have chosen to use \"Postgres\" within the docs, as a shorter (and\n> > > > pronouncable ;) form of our product. \"PostgreSQL\" appears in all\n> > > How hard is it to say \"Postgresquel\", really? Don't tell me you've been\n> > > saying \"Postgres cue ell\".....;-)\n> \n> I'll bet that even the coiners of the term have some differences in\n> their pronunciation. For the record, I use \"Postgres\" usually, and\n> \"Postgres-cue-ell\" when forced...\n> \n> - Thomas\n> \n> Since we cap the \"S\", however you pronounce \"SQL\" should probably be\n> how you do the end of \"PostgreSQL\". Lamar, are you a \"ess-quel\"\n> person? That may be a regional dialect. Don't even get me started on\n> that: it took me years to get over my strong inclination to use\n> \"hacker\" to refer to a bad, clueless programmer...\n\nAnd how do you say Linux, Thomas? I have a friend in Regina, Saskatchewan\n(that's ra-Jine-ah, not ra-geen-ah) who says LINE-ex...I wince every time\nI hear it. 
In fact, sometimes, if I'm less than fully caffeinated it\ndoesn't even register that he's made a reference to Linn-Ucks...\n(Usually I truncate it to Lin-icks or simply: Slackware ;-)\n\n------- North Richmond Community Mental Health Center -------\n\nThomas Good MIS Coordinator\nVital Signs: tomg@ { admin | q8 } .nrnet.org\n Phone: 718-354-5528 \n Fax: 718-354-5056 \n \n/* Member: Computer Professionals For Social Responsibility */ \n\n", "msg_date": "Tue, 10 Aug 1999 08:05:46 -0400 (EDT)", "msg_from": "Thomas Good <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pronunciation of \"PostgreSQL\" (was: Re: [HACKERS] New man pages)" }, { "msg_contents": "> And how do you say Linux, Thomas? I have a friend in Regina, Saskatchewan\n> (that's ra-Jine-ah, not ra-geen-ah) who says LINE-ex...I wince every time\n> I hear it. In fact, sometimes, if I'm less than fully caffeinated it\n> doesn't even register that he's made a reference to Linn-Ucks...\n> (Usually I truncate it to Lin-icks or simply: Slackware ;-)\n\n*rofl* Actually, that's a problem! When I first started with\nlinn-ucks, the person who got me going pronounced it line-ex. And\nlater, when I asked, she said that she intentionally mispronounced it\nso she could trace the lineage of the person; that is, so she could\ntell if they had learned about it from someone that she knew. Pretty\ndastardly... ;)\n\nBut I try to pronounce it in Finnish rather than Texan nowadays...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 10 Aug 1999 13:14:58 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pronunciation of \"PostgreSQL\" (was: Re: [HACKERS] New man pages)" }, { "msg_contents": "On Mon, 9 Aug 1999, Lamar Owen wrote:\n\n> Thomas Lockhart wrote:\n> > I have chosen to use \"Postgres\" within the docs, as a shorter (and\n> > pronouncable ;) form of our product. 
\"PostgreSQL\" appears in all\n> \n> How hard is it to say \"Postgresquel\", really? Don't tell me you've been\n> saying \"Postgres cue ell\".....;-)\n\nUmmm, your second way is the correct pronounciation :)\n\nPost-gres-Q-L :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 15 Aug 1999 23:23:18 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pronunciation of \"PostgreSQL\" (was: Re: [HACKERS] New man pages)" }, { "msg_contents": "On Mon, 9 Aug 1999, Bruce Momjian wrote:\n\n> > > There's only a little thing that I would like to recall your attention;\n> > > I see whenever the name Postgres instead of\n> > > PostgreSQL. Is there a reason to continue to call it Postgres in the\n> > > docs ?\n> > \n> > I have chosen to use \"Postgres\" within the docs, as a shorter (and\n> > pronouncable ;) form of our product. \"PostgreSQL\" appears in all\n> > titles and introductory material. I have considered the \"SQL\" part of\n> > the \"PostgreSQL\" as sort of a version or branch, like \"OpenIngres\" or\n> > \"Windows 2000\", and a bit cumbersome in the body of the docs.\n> > \n> > But that was a choice which can always be reconsidered, we're just a\n> > \"sed\" away from a different name...\n> \n> I vote for PostgreSQL.\n\nI second it...whenever ppl mention \"postgres\", i think back to our\nancestor and figure they are referring to that :(\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 15 Aug 1999 23:55:27 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New man pages" }, { "msg_contents": "The Hermit Hacker wrote:\n> > How hard is it to say \"Postgresquel\", really? 
Don't tell me you've been\n> > saying \"Postgres cue ell\".....;-)\n> \n> Ummm, your second way is the correct pronounciation :)\n> \n> Post-gres-Q-L :)\n\nCorrect or not, it doesn't quite flow off the tongue... Oh well. There\nare more important issues than pronunciation (such as rpm -Uvh for the\nrpm's...), but a little levity never exacerberated any problems....\n(laughter is the best medicine!). Maybe we need you to record the\ncanonical pronunciation (I'll even encode to RealAudio -- and will\ndonate the RealServer bandwidth!), like Linus did for linux.\n(lee-nooks...).\n\nCheers....\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Wed, 18 Aug 1999 16:10:40 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pronunciation of \"PostgreSQL\" (was: Re: [HACKERS] New man pages)" } ]
[ { "msg_contents": "On 4 Aug, Tom Lane wrote:\n\n> [ Snip discussion regarding PQconnectdb()s thread-safety ]\n> \n> If you want to work on it, be my guest...\n\nI don't have time to think about this today, so I can't comment on how\nit should work, but I _am_ currently working in this area - I am\nproviding non-blocking versions of the connect statements, as discussed\non the interfaces list a couple of weeks ago. In fact, it is pretty\nmuch done, apart from a tidy-up, documentation, and testing. I don't\nsee any point in two people hammering away at the same code - it will\nonly make work when we try to merge again - so perhaps I should\nimplement what ever is decided - I don't mind doing so. However, if I\ndidn't get it done this weekend it would have to be mid-to-late\nSeptember, since I'm going away. Would that be a problem for anyone?\n\nI had noticed that the connect statements weren't thread-safe, but\nwas neither aware that that was a problem for anyone, nor inclined to\naudit the whole of libpq for thread-safety, so I left it alone.\n\nEwan.\n", "msg_date": "Thu, 05 Aug 1999 16:16:58 +0100", "msg_from": "\"Ewan Mellor\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] Re: [HACKERS] Threads " } ]
[ { "msg_contents": "I thought that all correlated and uncorrelated sub-queries could be\nrewritten as a join, simplifying the query tree. It should be a mechanical\nprocess which can probably be performed in the rewriter.\n\nSomebody put me right.\n\nMikeA\n\n>> -----Original Message-----\n>> From: Bruce Momjian [mailto:[email protected]]\n>> Sent: Friday, August 06, 1999 7:11 AM\n>> To: Tom Lane\n>> Cc: Vadim Mikheev; [email protected]\n>> Subject: Re: [HACKERS] Idea for speeding up uncorrelated subqueries\n>> \n>> \n>> > Bruce Momjian <[email protected]> writes:\n>> > > Isn't it something that takes only a few hours to \n>> implement. We can't\n>> > > keep telling people to us EXISTS, especially because \n>> most SQL people\n>> > > think correlated queries are slower that non-correlated \n>> ones. Can we\n>> > > just on-the-fly rewrite the query to use exists?\n>> > \n>> > I was just about to suggest exactly that. The \"IN (subselect)\"\n>> > notation seems to be a lot more intuitive --- at least, people\n>> > keep coming up with it --- so why not rewrite it to the EXISTS\n>> > form, if we can handle that more efficiently?\n>> \n>> Yes, we have the nice subselect feature. I just hate to see it not\n>> completely finished, performance-wise.\n>> \n>> -- \n>> Bruce Momjian | http://www.op.net/~candle\n>> [email protected] | (610) 853-3000\n>> + If your life is a hard drive, | 830 Blythe Avenue\n>> + Christ can be your backup. | Drexel Hill, \n>> Pennsylvania 19026\n>> \n", "msg_date": "Fri, 6 Aug 1999 09:58:00 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Idea for speeding up uncorrelated subqueries" }, { "msg_contents": "\"Ansley, Michael\" wrote:\n> \n> I thought that all correlated and uncorrelated sub-queries could be\n> rewritten as a join, simplifying the query tree. It should be a mechanical\n> process which can probably be performed in the rewriter.\n\nIN can't be rewritten as a join! 
Subquery may return duplicates\nand join would return tuple for all of them. \n\nAnd how about WHERE x = (select max(y) from ...) ?\n\nAnd even for WHERE x = (select y from ...) we have to check\nthat subquery returns exactly ONE row, or abort.\n\nVadim\n", "msg_date": "Fri, 06 Aug 1999 16:12:59 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Idea for speeding up uncorrelated subqueries" } ]
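Vadim's objection above — that a join rewrite of `IN` multiplies result rows when the subquery returns duplicates, while an `EXISTS` rewrite preserves the semantics — can be sketched with a small self-contained example. sqlite3 is used here purely as a convenient stand-in for the SQL semantics; the thread itself is about the Postgres rewriter, not sqlite.

```python
import sqlite3

# In-memory database; sqlite3 only illustrates the SQL semantics under
# discussion, not the Postgres executor.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (x INT);
    CREATE TABLE s (y INT);
    INSERT INTO t VALUES (1), (2);
    INSERT INTO s VALUES (1), (1);  -- subquery returns duplicates
""")

# IN: each row of t appears at most once, however many matches exist.
in_rows = con.execute(
    "SELECT x FROM t WHERE x IN (SELECT y FROM s)").fetchall()

# Naive join rewrite: the duplicate in s multiplies the result rows.
join_rows = con.execute(
    "SELECT t.x FROM t JOIN s ON t.x = s.y").fetchall()

# Correlated EXISTS: same answer as IN, which is why an IN -> EXISTS
# rewrite (rather than IN -> join) keeps the semantics intact.
exists_rows = con.execute(
    "SELECT x FROM t WHERE EXISTS (SELECT 1 FROM s WHERE s.y = t.x)"
).fetchall()

print(in_rows, join_rows, exists_rows)  # -> [(1,)] [(1,), (1,)] [(1,)]
```

The join form returns `(1,)` twice while the `IN` and `EXISTS` forms return it once — exactly the duplicate-multiplication problem raised above.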
[ { "msg_contents": "\nThis one takes the cake...by default, I guess, when you create the\ncolumn, it is not-case sensitive...you have to create the column as\nsomething like:\n\n\tCOL1 VARCHAR(12) BINARY\n\nto make it case sensitive...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n---------- Forwarded message ----------\nDate: Fri, 6 Aug 1999 15:21:44 +0800\nFrom: bben <[email protected]>\nReply-To: bben <[email protected]>\nTo: phpmailist <[email protected]>\nSubject: [PHP3] mysql is case sensitive?\n\nhello,\n\nI use mysql3.22.25 on linux,\nI found mysql is not case sensitive,\nfor example,I use : \"select 'abc'='ABC'\"\nIt return 1.\n\nHow can I set mysql to case sensitive?\nThanks.\n .___.\n / \\\n | O _ O |\n / \\_/ \\\n .' / \\ `.\n /~ | | ~\\\n(_ | | _)\n ~ \\ / ~\n __\\_>-<_/__\n ~;/ \\;~\n\n\n", "msg_date": "Fri, 6 Aug 1999 05:25:22 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "[PHP3] mysql is case sensitive? (fwd)" } ]
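The behaviour reported above — `select 'abc'='ABC'` returning 1 under MySQL's default collation, with `BINARY` columns restoring case-sensitive comparison — can be mimicked with stdlib sqlite3. This is only an analogy: sqlite's default is case-sensitive, so its `COLLATE NOCASE` plays the role of MySQL's default (non-`BINARY`) collation.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# sqlite3 compares text case-sensitively by default (like a MySQL
# BINARY column): 'abc' = 'ABC' is false.
sensitive = con.execute("SELECT 'abc' = 'ABC'").fetchone()[0]

# An explicit NOCASE collation reproduces what the poster saw, where
# the default collation folds case before comparing.
insensitive = con.execute(
    "SELECT 'abc' = 'ABC' COLLATE NOCASE").fetchone()[0]

print(sensitive, insensitive)  # -> 0 1
```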
[ { "msg_contents": "Are we going to release subj in near future?\nIf yes then please wait a few days - seems that I'll\nhave patch for btree to fix problems reported by\nOleg Bartunov - indices growing/update slowdown...\n\nVadim\n", "msg_date": "Fri, 06 Aug 1999 17:59:38 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "6.5.2" }, { "msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Are we going to release subj in near future?\n> If yes then please wait a few days -\n\nAlso, I'd like to see if we can't resolve Adriaan Joubert's bug report\nfirst. The list-clearing problem doesn't seem to be it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Aug 1999 09:52:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.2 " }, { "msg_contents": "All has been going well with this version, so I decided to\nlet 'er rip in public two days ago. Last night, I got the\nfollowing error from my nightly pg_dump, which has been running\nfine for weeks during development:\n\nCan not create pgdump_oid table. Explanation from backend: 'ERROR: cannot\ncreate pgdump_oid\n'.\n\nAny explanations for this behavior?\n\nAny ideas as to how I can fix it? I'm dumping OIDs because some of the\nfree code I'm porting from Oracle/Tcl/AOLserver to Postgres/Tcl/AOLserver\nuses their equivalent row id in places.\n\nsigh...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 06 Aug 1999 08:40:10 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1, error in pg_dump" }, { "msg_contents": "Don Baccus <[email protected]> writes:\n> Can not create pgdump_oid table. 
Explanation from backend: 'ERROR: cannot\n> create pgdump_oid\n> '.\n\nAFAICT, the only way that you'd get that precise wording out of the\nbackend is if its attempt to create the physical file for the table\nfails --- that is, open(filename, O_RDWR | O_CREAT | O_EXCL, 0600)\nfails. The only error message I can find with that wording is in\nsmgrcreate(), and it looks like all the other potential causes of\nfailure below that point (such as running out of memory) would yield\ndifferent messages.\n\nSo, it would seem that the initial cause of the problem is that there\nwas already a file by that name --- perhaps because some earlier\ninstance of postgres failed to unlink it.\n\nI'm not sure about the bizarrenesses you report later on --- they\nsound like the system tables may have gotten corrupted, or perhaps\njust the relation cache inside the backend. (Did killing the backend\nand starting a new one help?) But I am thinking the error messages\nmust have been different at that point...\n\nWe saw vaguely similar behaviors with temp tables when we were still\nflushing the bugs out of temp tables. I wonder if there are still\nsome temp-table-related bugs?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Aug 1999 18:48:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1, error in pg_dump " }, { "msg_contents": "At 06:48 PM 8/6/99 -0400, Tom Lane wrote:\n>Don Baccus <[email protected]> writes:\n>> Can not create pgdump_oid table. Explanation from backend: 'ERROR: cannot\n>> create pgdump_oid\n>> '.\n>\n>AFAICT, the only way that you'd get that precise wording out of the\n>backend is if its attempt to create the physical file for the table\n>fails --- that is, open(filename, O_RDWR | O_CREAT | O_EXCL, 0600)\n>fails. 
The only error message I can find with that wording is in\n>smgrcreate(), and it looks like all the other potential causes of\n>failure below that point (such as running out of memory) would yield\n>different messages.\n>\n>So, it would seem that the initial cause of the problem is that there\n>was already a file by that name --- perhaps because some earlier\n>instance of postgres failed to unlink it.\n\nthanks for tracking this down, I was busy yesterday, started to\npoke at sources but didn't really have time to do so in earnest.\n\nThis does fit with the fact that the files for the offending\ntables did indeed still exist.\n\nI did try deleting the one for my \"foo\" test case, and could\nthen create and supposedly drop the table, but the relation\nin this case then was not removed from pg_class, nor was the\nfile removed. \n\nI have had occassional filesystem problems on this machine,\nand not only in postgres. Linux flakies? System flakies?\nI've not quite been able to decide, nor do I have a spare\nmachine to rebuild on. I keep hearing that Linux is very\nsolid, particularly the 2.0.36 kernal I'm using, but this\nis my first experience running Linux on a steady basis (I\nhave tons of other Unix experience, of course).\n\n>\n>I'm not sure about the bizarrenesses you report later on --- they\n>sound like the system tables may have gotten corrupted, or perhaps\n>just the relation cache inside the backend. (Did killing the backend\n>and starting a new one help?)\n\nNo, unfortunately.\n\n> But I am thinking the error messages\n>must have been different at that point...\n\nYes, the relation was still in the pg_class table (I think that's\nright as opposed to pg_tables). As was the file. 
This makes more\nsense, actually, in that it appears that the unlink was failing\nand it didn't get removed from the system table, either - though\nit's weird it reported success on the drop.\n\nOne thought crossing my mind was that the Linux filesystem may've\ngotten itself into a weird state, in particular the cache. I\nrebooted, initdb'd and rebuilt and everything's OK now.\n\n>We saw vaguely similar behaviors with temp tables when we were still\n>flushing the bugs out of temp tables. I wonder if there are still\n>some temp-table-related bugs?\n\nAt this point I'm willing to believe it is a Linux or (perhaps\nmore likely) system problem. If it happens again, I'm better\nprepared as to where to look for more details, at least.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 07 Aug 1999 08:01:17 -0700", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1, error in pg_dump " }, { "msg_contents": "On Fri, 6 Aug 1999, Tom Lane wrote:\n\n> Vadim Mikheev <[email protected]> writes:\n> > Are we going to release subj in near future?\n> > If yes then please wait a few days -\n> \n> Also, I'd like to see if we can't resolve Adriaan Joubert's bug report\n> first. The list-clearing problem doesn't seem to be it...\n\nShall we go for a Sept 1st release of v6.5.2? Vadim, does that give you\nenough time? It doesn't have to be \"perfect\", only better then v6.5.1...\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 15 Aug 1999 23:22:30 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.2 " }, { "msg_contents": "On Sun, 15 Aug 1999, The Hermit Hacker wrote:\n\n> Date: Sun, 15 Aug 1999 23:22:30 -0300 (ADT)\n> From: The Hermit Hacker <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Vadim Mikheev <[email protected]>,\n> PostgreSQL Developers List <[email protected]>\n> Subject: Re: [HACKERS] 6.5.2 \n> \n> On Fri, 6 Aug 1999, Tom Lane wrote:\n> \n> > Vadim Mikheev <[email protected]> writes:\n> > > Are we going to release subj in near future?\n> > > If yes then please wait a few days -\n> > \n> > Also, I'd like to see if we can't resolve Adriaan Joubert's bug report\n> > first. The list-clearing problem doesn't seem to be it...\n> \n> Shall we go for a Sept 1st release of v6.5.2? Vadim, does that give you\n> enough time? It doesn't have to be \"perfect\", only better then v6.5.1...\n\nVadim has made a quick patch for row-reusing in index file after vacuum and\nI've tested it on my Linux box, 6.5.1 sources. It works fine - \nnow index file doesn't grow indefinitely ! It still grows (logarithmically) \nbut not as before. Of course, fully truncating would be fine. but\nas Vadim said it's another story.\n\nWhat's a list of changes for 6.5.2 ? I've seen several patches for 6.5.1\nand current tree. Some of them I'd like to see in 6.5.2:\n\n1. Vadim's patch for nbtinsert\n2. enabling Index access for multi-column indices by Hiroshi Inoue\n3. Descending order Index scan patch by Hiroshi Inoue \n\n\n\tRegards,\n\n\t\tOleg\n\n> \n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n", "msg_date": "Mon, 16 Aug 1999 13:01:42 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "None" }, { "msg_contents": "Hello!\n\n I didn't saw my latest patches for README.locale in latest (Aug 15)\nsnapshot, although Bruce reported he got it. I think it best to apply it\nfor 6.5.2 than to 6.6 ot 7.0...\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Mon, 16 Aug 1999 13:48:49 +0400 (MSD)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.2" }, { "msg_contents": "On Mon, 16 Aug 1999, Oleg Bartunov wrote:\n\n> 1. Vadim's patch for nbtinsert\n> 2. enabling Index access for multi-column indices by Hiroshi Inoue\n> 3. Descending order Index scan patch by Hiroshi Inoue \n\nExpect bug fixes, not extra features...2 and 3 are nice, but don't fix a\nbug other them possibly improving speed...they will not be in 6.5.2...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 16 Aug 1999 09:08:57 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: your mail" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Mon, 16 Aug 1999, Oleg Bartunov wrote:\n> \n> > 1. Vadim's patch for nbtinsert\n> > 2. 
enabling Index access for multi-column indices by Hiroshi Inoue\n> > 3. Descending order Index scan patch by Hiroshi Inoue\n> \n> Expect bug fixes, not extra features...2 and 3 are nice, but don't fix a\n> bug other them possibly improving speed...\n\nOr in case of 3 avoiding exhausting memory on unneccessary sort.\n\n>they will not be in 6.5.2...\n\nWill they be in patches/ in a form ready to apply to 6.5.2 ?\n\n--------\nHannu\n", "msg_date": "Tue, 17 Aug 1999 00:42:27 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: your mail" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Fri, 6 Aug 1999, Tom Lane wrote:\n> \n> > Vadim Mikheev <[email protected]> writes:\n> > > Are we going to release subj in near future?\n> > > If yes then please wait a few days -\n> >\n> > Also, I'd like to see if we can't resolve Adriaan Joubert's bug report\n> > first. The list-clearing problem doesn't seem to be it...\n> \n> Shall we go for a Sept 1st release of v6.5.2? Vadim, does that give you\n> enough time? It doesn't have to be \"perfect\", only better then v6.5.1...\n\nI posted patch to hackers- or patches-list - sorry, but I haven't\n6.5.X tree. I don't know is it applied or not...\n\nVadim\n", "msg_date": "Tue, 17 Aug 1999 09:25:32 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5.2" }, { "msg_contents": "Oleg, do you seem them now? If not, can you send them to me.\n\n> Hello!\n> \n> I didn't saw my latest patches for README.locale in latest (Aug 15)\n> snapshot, although Bruce reported he got it. I think it best to apply it\n> for 6.5.2 than to 6.6 ot 7.0...\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 14:21:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.2" }, { "msg_contents": "Hi!\n\nOn Mon, 27 Sep 1999, Bruce Momjian wrote:\n> Oleg, do you seem them now? If not, can you send them to me.\n\n I saw it long time ago and sent my thankx to the list. It seems you\nstore messages in your mailbox a little too long :)\n\n> > Hello!\n> > \n> > I didn't saw my latest patches for README.locale in latest (Aug 15)\n> > snapshot, although Bruce reported he got it. I think it best to apply it\n> > for 6.5.2 than to 6.6 ot 7.0...\n\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Tue, 28 Sep 1999 11:06:42 +0400 (MSD)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.2" }, { "msg_contents": "> Hi!\n> \n> On Mon, 27 Sep 1999, Bruce Momjian wrote:\n> > Oleg, do you seem them now? If not, can you send them to me.\n> \n> I saw it long time ago and sent my thankx to the list. It seems you\n> store messages in your mailbox a little too long :)\n\nYes, I am way behind. I just got off a 4 month job, so I finally had\nsome time to work on this. With a major release months away, I wasn't\nrushing to apply patches. I am now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 Sep 1999 08:51:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.2" } ]
[ { "msg_contents": "I have received a report that pl/plpgsql requires bison to compile. I\nhave verified this is true, at least on BSD/OS 4.01, and since we enable\nplpgsql compile by default, this requires bison for our standard build.\n\nThe issue appears to be contents of gram.tab.c, which is part of our\ndistribution, and contains at the top:\n\n\t\n\t/* A Bison parser, made from gram.y\n\t by GNU Bison version 1.25\n\t */\n\t\n\t#define YYBISON 1 /* Identify Bison output. */\n\t\n\t#define K_ALIAS 258\n\t#define K_ASSIGN 259\n\t#define K_BEGIN 260\n\t#define K_CONSTANT 261\n\t#define K_DEBUG 262\n\t#define K_DECLARE 263\n\t#define K_DEFAULT 264\n\t#define K_DOTDOT 265\n\t#define K_ELSE 266\n\t#define K_END 267\n\nand later on, the contents of gram.y tables. The file appears to allow\nfor the passage of keywords, but is not done by flex/yacc combination. \nI did:\n\t\n\tyacc -d gram.y\n\tsed -e 's/yy/plpgsql_yy/g' -e 's/YY/PLPGSQL_YY/g' <y.tab.c >pl_gram.c\n\tsed -e 's/yy/plpgsql_yy/g' -e 's/YY/PLPGSQL_YY/g' <y.tab.h >pl.tab.h\n\trm -f y.tab.c y.tab.h\n\nbut got errors like:\n\t\n\tscan.l: In function `plpgsql_yylex':\n\tscan.l:85: `K_ASSIGN' undeclared (first use this function)\n\tscan.l:85: (Each undeclared identifier is reported only once\n\tscan.l:85: for each function it appears in.)\n\tscan.l:87: `K_DOTDOT' undeclared (first use this function)\n\tscan.l:88: `K_ALIAS' undeclared (first use this function)\n\nJan, is this a known portability problem?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Aug 1999 14:25:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "plpgsql requires bison" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have received a report that pl/plpgsql requires bison to compile. 
I\n> have verified this is true, at least on BSD/OS 4.01, and since we enable\n> plpgsql compile by default, this requires bison for our standard build.\n\nThis same problem was reported for HPUX a couple weeks ago (see thread\n\"[PORTS] HP-UX port\" on 29 July). I think that moving the #include of \n\"pl_scan.c\" down to the file trailer section, instead of having it in\nthe file header, would work. Did not try it yet.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Aug 1999 18:24:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] plpgsql requires bison " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I have received a report that pl/plpgsql requires bison to compile. I\n> > have verified this is true, at least on BSD/OS 4.01, and since we enable\n> > plpgsql compile by default, this requires bison for our standard build.\n> \n> This same problem was reported for HPUX a couple weeks ago (see thread\n> \"[PORTS] HP-UX port\" on 29 July). I think that moving the #include of \n> \"pl_scan.c\" down to the file trailer section, instead of having it in\n> the file header, would work. Did not try it yet.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\nMoving the #include down below the %} does not work. Is there another\nsection. I put it on line 49.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Aug 1999 19:05:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] plpgsql requires bison" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> This same problem was reported for HPUX a couple weeks ago (see thread\n>> \"[PORTS] HP-UX port\" on 29 July). 
I think that moving the #include of \n>> \"pl_scan.c\" down to the file trailer section, instead of having it in\n>> the file header, would work. Did not try it yet.\n\n> Moving the #include down below the %} does not work. Is there another\n> section. I put it on line 49.\n\nNo, that'd be in the grammar proper. I meant down in the trailing\nmiscellaneous-C-code section, say right after the %% divider at line\n1081. (If I were a yacc expert I'd know the names of these sections,\nbut I'm just a hacker...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Aug 1999 23:36:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] plpgsql requires bison " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> This same problem was reported for HPUX a couple weeks ago (see thread\n> >> \"[PORTS] HP-UX port\" on 29 July). I think that moving the #include of \n> >> \"pl_scan.c\" down to the file trailer section, instead of having it in\n> >> the file header, would work. Did not try it yet.\n> \n> > Moving the #include down below the %} does not work. Is there another\n> > section. I put it on line 49.\n> \n> No, that'd be in the grammar proper. I meant down in the trailing\n> miscellaneous-C-code section, say right after the %% divider at line\n> 1081. (If I were a yacc expert I'd know the names of these sections,\n> but I'm just a hacker...)\n\nThank you. That did it. Commit applied.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 7 Aug 1999 00:24:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] plpgsql requires bison" }, { "msg_contents": "Bruce Momjian wrote:\n\n>\n> I have received a report that pl/plpgsql requires bison to compile. 
I\n> have verified this is true, at least on BSD/OS 4.01, and since we enable\n> plpgsql compile by default, this requires bison for our standard build.\n>\n> The issue appears to be contents of gram.tab.c, which is part of our\n> distribution, and contains at the top:\n>\n> [...]\n>\n> Jan, is this a known portability problem?\n\n I had expected portability problems with the PL/pgSQL parser\n and suggested to put prepared flex/bison output files into\n our distribution as we do it for the main backend parser.\n Unfortunately I forgot about it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 9 Aug 1999 12:30:30 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: plpgsql requires bison" }, { "msg_contents": "> Bruce Momjian wrote:\n> \n> >\n> > I have received a report that pl/plpgsql requires bison to compile. I\n> > have verified this is true, at least on BSD/OS 4.01, and since we enable\n> > plpgsql compile by default, this requires bison for our standard build.\n> >\n> > The issue appears to be contents of gram.tab.c, which is part of our\n> > distribution, and contains at the top:\n> >\n> > [...]\n> >\n> > Jan, is this a known portability problem?\n> \n> I had expected portability problems with the PL/pgSQL parser\n> and suggested to put prepared flex/bison output files into\n> our distribution as we do it for the main backend parser.\n> Unfortunately I forgot about it.\n\nOK, I have fixed the problem by testing to see if I am running Bison,\nand doing the include in the proper place.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 16 Aug 1999 16:12:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql requires bison" } ]
[ { "msg_contents": "Excuse me for asking about another topic in this list.\n\n\nI've sent some subscription mails to [email protected] and\ngot no reply.\n\nI've tried to access www.iztacala.unam.mx with no success; the pings don't\nwork. Does anybody know if there has been any change of address?\n\nThanks.\n\nF.J.Cuberos\n\n\n", "msg_date": "Sat, 7 Aug 1999 00:07:24 +0200", "msg_from": "\"F J Cuberos\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?iso-8859-1?Q?OT:_pgsql-ayuda._It=B4s_working=3F?=" } ]
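The subject header on the message above (`=?iso-8859-1?Q?...?=`) is an RFC 2047 "encoded-word" rather than garbage; as an aside, it can be decoded with a few lines of stdlib Python (shown purely to illustrate the encoding, not as part of the original thread):

```python
from email.header import decode_header

raw = "=?iso-8859-1?Q?OT:_pgsql-ayuda._It=B4s_working=3F?="

# decode_header() yields a list of (bytes, charset) fragments; this
# subject is a single Q-encoded (quoted-printable-style) word, where
# '_' encodes a space and =XX encodes a byte in hex.
decoded, charset = decode_header(raw)[0]
subject = decoded.decode(charset)
print(subject)  # -> OT: pgsql-ayuda. It´s working?
```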
[ { "msg_contents": "Announcing release 2 of the lamar owen (lo) series of testing RPMS\navailable now at http://www.ramifordistat.net/postgres.\n\nRPM naming has been changed in this release to hopefully avoid confusion\nwith RPMS built by Thomas on ftp.postgresql.org. \n\nAlpha CPU users should download the src.rpm and build -- please let me\nknow of problems. The patch that is applied is the one developed by\nUncle George and others, backported and packaged by Ryan Kirkpatrick --\nin fact, it is the EXACT patch set he posted here recently. Other\narchitectures do not have the patches applied, thanks to the %ifarch RPM\ndirective.\n\nUnless you are running RPMS of version 6.5 or later DO NOT USE 'rpm\n-Uvh' TO UPGRADE -- IT WILL NOT WORK. Check the rpm_upgrade page on\nRamifordistat for upgrade instructions.\n\nTHESE ARE STILL TESTING -- BETA -- RPMS. Treat them as such -- please\ngive me feedback as to their suitability, bugs, etc. Bugs that are\ngeneral PostgreSQL bugs, OTOH....\n\nRPM packaging has changed versus 6.4.2 or previous -- note the\ncross-reference list:\n\nOld package New package\n----------- -----------\npostgresql postgresql-server\npostgresql-clients postgresql, postgresql-jdbc, postgresql-odbc,\n postgresql-tcl, postgresql-perl,\n postgresql-python\npostgresql-devel postgresql-devel\n\nTo prevent rogue upgrading, these packages are built to conflict with\nthe 'postgresql-clients' package.\n\nChangelog:\n* Fri Aug 6 1999 Lamar Owen <[email protected]>\n- Added alpha patches courtesy Ryan Kirkpatrick and Uncle George\n- Renamed lamar owen series of RPMS with release of #lo\n- Put Ramifordistat as vendor and URL for lamar owen RPM series,\n-- until non-beta release coordinated with PGDG.\n\n* Mon Jul 19 1999 Lamar Owen <[email protected]>\n- Correct some file misappropriations:\n-- /usr/lib/pgsql was in wrong package\n-- createlang, destroylang, and vacuumdb now in main package\n-- ipcclean now in server subpackage\n-- The static libraries are now 
in the devel subpackage\n-- /usr/lib/plpgsql.so and /usr/lib/pltcl.so now in server\n- Cleaned up some historical artifacts for readability -- left\nreferences\n- to these artifacts in the changelog\n\nEnjoy!\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Fri, 06 Aug 1999 18:36:34 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Release 2 testing RPMS -- includes ALPHA patches." }, { "msg_contents": "Has anyone addressed the 64 bit lseek/tell? how about the filesize\nlimitation ( not ) of about 17 gigs? Any sayso if there will be a\nproblem when the 2gig/file barrier is broken ?\ngat\n\n", "msg_date": "Fri, 06 Aug 1999 20:29:36 -0400", "msg_from": "Uncle George <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Release 2 testing RPMS -- includes ALPHA patches." }, { "msg_contents": "Uncle George wrote:\n> \n> Has anyone addressed the 64 bit lseek/tell? how about the filesize\n> limitation ( not ) of about 17 gigs? Any sayso if there will be a\n> problem when the 2gig/file barrier is broken ?\n> gat\n\nInteresting set of questions -- Ryan may be able to answer them -- I\nhave no Alpha yet available to test with. Like my message said -- these\nare testing RPM's, for which I'm requesting feedback. \n\nEveryone has their own area of expertise -- and Alpha is NOT mine...\n\nThanks for the work you did on that code -- and thanks to Ryan for the\nbackpatch to 6.5.1.\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Fri, 06 Aug 1999 21:53:47 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [PORTS] Release 2 testing RPMS -- includes ALPHA patches." } ]
[ { "msg_contents": "Hi,\n\nWith my system I get it to do a full pg_dump once per hour so that if\nanything goes wrong I can go back to the previous hours information if\nneeded. One thing I did notice is that the whole process of the COPY\ncommand is highly CPU bound, the postgres will use up all available CPU\nand the disks are relatively idle.\n\nI ran some profiling on it with gprof, and I was shocked to find millions\nof calls to functions like memcpy, pq_putbytes, CopySendChar. I had a look\nat the code and most of these were just wrappers to various other\nfunctions, and did no work of their own. Having all these functions\nrunning a couple of million times and making heaps of their own calls\nmeant that it was spending most of its time pushing and pulling things off\nthe stack instead of doing real work :)\n\nSo I looked into the problem, and CopyTo was getting the data, \nand calling CopyAttributeOut to convert it to a string and send it to the\nclient. The CopyTo function was getting called rarely so this was a good\nplace to start.\n\nTurns out that the CopyAttributeOut function gets the supplied string, and\nescapes it out, and as it does it, it calls CopySendChar for each\ncharacter to put it in the buffer for sending. This function does a lot of\nother function calls, including a memcpy or two.\n\nSo I made some changes to CopyAttributeOut so that it escapes the string\ninitially into a temporary buffer (allocated onto the stack) and then\nfeeds the whole string to the CopySendData which is a lot more efficient\nbecause it can blast the whole string in one go, saving about 1/3 to 1/4\nthe number of memcpy and so on.\n\nThis modification caused a copy (with profiling and everything else) to go\nfrom 1m14sec to 45 sec, which I was very happy with considering how easy\nit was to fix :)\n\n\nI kept hacking at the code though, and found another slow point in the\ncode which was int4out. 
It was a one line front end to ltoa, then this\ncalled sprintf with just \"%d\", and then numerous calls to other internal\nlibc functions. This was very expensive and wasted lots of CPU time.\n\nSo, I rewrote int4out (throwing away all the excess code) to do this\nconversion directly without calling any libc code which is more generic\nand slower, and this too gained a speed improvement. Combined with my\nprevious patch, with profiling and all that turned off, and -O3, I managed\nto get another COPY (on a different table - sorry) down from 30 seconds to\n15 seconds, so I've managed to double its speed without any tradeoffs\nwhatsoever!\n\nI have included a patch below for the changes I made, although I would\nonly use this to look at, it is shocking code which was a real hack job\nusing #defines and a few other tricks to make it work so I could test this\nidea out quickly. The int4out example is especially bad, and has lots of\npointers and stuff flying around and may have some bugs.\n\nBut, what I think could be interesting to improve our performance is to\nmake simple mods to CopyAttributeOut (along the lines of my changes) - and\nto also make changes to remove all use of sprintf when numbers and floats\nare being converted into strings. I would imagine this would generally \nspeed up SELECT and COPY commands, so the performance increase can be\ngained in other places too.\n\nIf someone wants, I can do cleaner versions of this patch for inclusion in\nthe code, or one of the developers may want to do this instead in case\nthere's something I'm missing. But I wanted to share this with everyone to\nhelp increase Postgres' speed as it is quite significant!\n\n[Patch included below for reference - note that this patch probably has\nbugs and use it at your own risk! 
I am not using this patch except for\ntesting]\n\nRegards,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n\n\n\ndiff -u -r NORMAL/postgresql-6.5/src/backend/commands/copy.c PROFILING/postgresql-6.5/src/backend/commands/copy.c\n--- NORMAL/postgresql-6.5/src/backend/commands/copy.c\tMon Jun 14 10:33:40 1999\n+++ PROFILING/postgresql-6.5/src/backend/commands/copy.c\tSat Aug 7 16:02:50 1999\n@@ -1278,12 +1278,23 @@\n #endif\n }\n \n+\n+\n+#warning Wayne put hacks here\n+\n+#define XCopySendChar(ch, fp) __buffer[__buffer_ofs++] = ch\n+\n+\n+\n static void\n CopyAttributeOut(FILE *fp, char *server_string, char *delim, int is_array)\n {\n \tchar\t *string;\n \tchar\t\tc;\n \n+\tchar __buffer [(strlen (server_string) * 2) + 4]; /* Use +4 for safety */\n+\tint __buffer_ofs = 0;\n+\n #ifdef MULTIBYTE\n \tint\t\t\tmblen;\n \tint\t\t\tencoding;\n@@ -1307,31 +1318,34 @@\n \t{\n \t\tif (c == delim[0] || c == '\\n' ||\n \t\t\t(c == '\\\\' && !is_array))\n-\t\t\tCopySendChar('\\\\', fp);\n+\t\t\tXCopySendChar('\\\\', fp);\n \t\telse if (c == '\\\\' && is_array)\n \t\t{\n \t\t\tif (*(string + 1) == '\\\\')\n \t\t\t{\n \t\t\t\t/* translate \\\\ to \\\\\\\\ */\n-\t\t\t\tCopySendChar('\\\\', fp);\n-\t\t\t\tCopySendChar('\\\\', fp);\n-\t\t\t\tCopySendChar('\\\\', fp);\n+\t\t\t\tXCopySendChar('\\\\', fp);\n+\t\t\t\tXCopySendChar('\\\\', fp);\n+\t\t\t\tXCopySendChar('\\\\', fp);\n \t\t\t\tstring++;\n \t\t\t}\n \t\t\telse if (*(string + 1) == '\"')\n \t\t\t{\n \t\t\t\t/* translate \\\" to \\\\\\\" */\n-\t\t\t\tCopySendChar('\\\\', fp);\n-\t\t\t\tCopySendChar('\\\\', fp);\n+\t\t\t\tXCopySendChar('\\\\', fp);\n+\t\t\t\tXCopySendChar('\\\\', fp);\n \t\t\t}\n \t\t}\n #ifdef MULTIBYTE\n \t\tfor (i = 0; i < mblen; 
i++)\n-\t\t\tCopySendChar(*(string + i), fp);\n+\t\t\tXCopySendChar(*(string + i), fp);\n #else\n-\t\tCopySendChar(*string, fp);\n+\t\tXCopySendChar(*string, fp);\n #endif\n \t}\n+\t\n+\t/* Now send the whole output string in one shot */\n+\tCopySendData (__buffer, __buffer_ofs, fp);\n }\n \n /*\ndiff -u -r NORMAL/postgresql-6.5/src/backend/utils/adt/int.c PROFILING/postgresql-6.5/src/backend/utils/adt/int.c\n--- NORMAL/postgresql-6.5/src/backend/utils/adt/int.c\tSun Feb 14 09:49:20 1999\n+++ PROFILING/postgresql-6.5/src/backend/utils/adt/int.c\tSat Aug 7 15:53:36 1999\n@@ -210,9 +210,60 @@\n int4out(int32 l)\n {\n \tchar\t *result;\n+\tchar *sptr, *tptr;\n+\tint32 value = l;\n+\tchar temp[12]; /* For storing string in reverse */\n+\n+#warning Wayne put hacks here\n \n \tresult = (char *) palloc(12);\t\t/* assumes sign, 10 digits, '\\0' */\n-\tltoa(l, result);\n+\n+\n+\t/* ltoa(l, result);\n+\t maps to sprintf(result, \"%d\", l) later on in the code */\n+\n+\t/* Remove minus sign firstly */\n+\tif (l < 0)\n+\t value = -l;\n+\t\n+\t/* Point to our output string at the end */\n+\ttptr = temp;\n+\t\n+\t/* Now loop until we have no value left */\n+\twhile (value > 0)\n+\t {\n+\t int current = value % 10;\n+\t value = value / 10;\n+\n+\t *tptr = current + '0';\n+\t tptr++;\n+\t }\n+\tif (tptr == temp)\n+\t {\n+\t *tptr = '0';\n+\t tptr++;\n+\t }\n+\t*tptr = '\\0';\n+\ttptr--;\n+\n+\t/* Now, we need to prepare the result which is the reversal */\n+\tsptr = result;\n+\tif (l < 0)\n+\t {\n+\t *sptr = '-';\n+\t sptr++;\n+\t }\n+\t\n+\twhile (tptr >= temp)\n+\t {\n+\t *sptr = *tptr;\n+\t sptr++;\n+\t tptr--;\n+\t }\n+\n+\t/* Ok, we have copied everything - terminate now */\n+\t*sptr = '\\0';\n+\t\n \treturn result;\n }\n \n\n", "msg_date": "Sat, 7 Aug 1999 17:29:59 +0930 (CST)", "msg_from": "Wayne Piekarski <[email protected]>", "msg_from_op": true, "msg_subject": "Inefficiencies in COPY command" }, { "msg_contents": "Wayne Piekarski <[email protected]> writes:\n> So I made 
some changes to CopyAttributeOut so that it escapes the string\n> initially into a temporary buffer (allocated onto the stack) and then\n> feeds the whole string to the CopySendData which is a lot more efficient\n> because it can blast the whole string in one go, saving about 1/3 to 1/4\n> the number of memcpy and so on.\n\ncopy.c is pretty much of a hack job to start with, IMHO. If you can\nspeed it up without making it even uglier, have at it! However, it\nalso has to be portable, and what you've done here:\n\n> CopyAttributeOut(FILE *fp, char *server_string, char *delim, int is_array)\n> {\n> \tchar\t *string;\n> \tchar\t\tc;\n> +\tchar __buffer [(strlen (server_string) * 2) + 4]; /* Use +4 for safety */\n\nis not portable --- variable-sized local arrays are a gcc-ism. (I use\n'em a lot too, but not in code intended for public release...) Also,\nbe careful that you don't introduce any assumptions about maximum\nfield or tuple width; we want to get rid of those, not add more.\n\n> to also make changes to remove all use of sprintf when numbers\n> and floats are being converted into strings.\n ^^^^^^^^^^\n\nWhile formatting an int is pretty simple, formatting a float is not so\nsimple. I'd be leery of replacing sprintf with quick-hack float\nconversion code. OTOH, if someone wanted to go to the trouble of doing\nit *right*, using our own code would tend to yield more consistent\nresults across different OSes, which would be a Good Thing. I'm not\nsure it'd be any faster than the typical sprintf, but it might be worth\ndoing anyway.\n\n(My idea of *right* is the guaranteed-inverse float<=>ASCII routines\npublished a few years ago in some SIGPLAN proceedings ... 
I've got the\nproceedings, and I even know approximately where they are, but I don't\nfeel like excavating for them right now...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 07 Aug 1999 11:01:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inefficiencies in COPY command " }, { "msg_contents": "Tom Lane wrote -\n> Wayne Piekarski <[email protected]> writes:\n> > So I made some changes to CopyAttributeOut so that it escapes the string\n> > initially into a temporary buffer (allocated onto the stack) and then\n> > feeds the whole string to the CopySendData which is a lot more efficient\n> > because it can blast the whole string in one go, saving about 1/3 to 1/4\n> > the number of memcpy and so on.\n> \n> copy.c is pretty much of a hack job to start with, IMHO. If you can\n> speed it up without making it even uglier, have at it! However, it\n> also has to be portable, and what you've done here:\n\nOk, well I will write up a proper patch for CopyAttributeOut so it is not\nsuch a hack (using all those #defines and stuff wasn't very \"elegant\") and\nthen submit a proper patch for it.... This was pretty straight forward to\nfix up.\n\n> While formatting an int is pretty simple, formatting a float is not so\n> simple. I'd be leery of replacing sprintf with quick-hack float\n> conversion code. OTOH, if someone wanted to go to the trouble of doing\n> it *right*, using our own code would tend to yield more consistent\n> results across different OSes, which would be a Good Thing. I'm not\n> sure it'd be any faster than the typical sprintf, but it might be worth\n> doing anyway.\n\nI understand there are issues to do with not being able to use GPL code\nwith Postgres, because its BSD license is not compatible, but would it be\nacceptable to extract code from BSD style code? 
If so, my FreeBSD here has\nlibc code and includes the internals used by sprintf for rendering\nintegers (and floats) and so we could include that code in, and should\nhopefully be portable at the same time as well. \n\nThis would be a lot faster than going via sprintf and lots of other\nfunctions, and would make not just COPY, but I think any SELECT query runs\nfaster as well (because they get rewritten to strings by the output\nfunctions don't they). I guess other advantages would be improvements in\nthe regression tests maybe, for problem types like int8 which in the past\nhave had trouble under some BSDs.\n\nDoes what I've written above sound ok? If so I'll go and work up something\nand come back with a patch.\n\nbye,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n\n", "msg_date": "Sat, 21 Aug 1999 16:03:08 +0930 (CST)", "msg_from": "Wayne Piekarski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Inefficiencies in COPY command" }, { "msg_contents": "> Tom Lane wrote -\n> > Wayne Piekarski <[email protected]> writes:\n> > > So I made some changes to CopyAttributeOut so that it escapes the string\n> > > initially into a temporary buffer (allocated onto the stack) and then\n> > > feeds the whole string to the CopySendData which is a lot more efficient\n> > > because it can blast the whole string in one go, saving about 1/3 to 1/4\n> > > the number of memcpy and so on.\n> > \n> > copy.c is pretty much of a hack job to start with, IMHO. If you can\n> > speed it up without making it even uglier, have at it! 
However, it\n> > also has to be portable, and what you've done here:\n> \n> Ok, well I will write up a proper patch for CopyAttributeOut so it is not\n> such a hack (using all those #defines and stuff wasn't very \"elegant\") and\n> then submit a proper patch for it.... This was pretty straight forward to\n> fix up.\n\nGreat.\n\n> \n> > While formatting an int is pretty simple, formatting a float is not so\n> > simple. I'd be leery of replacing sprintf with quick-hack float\n> > conversion code. OTOH, if someone wanted to go to the trouble of doing\n> > it *right*, using our own code would tend to yield more consistent\n> > results across different OSes, which would be a Good Thing. I'm not\n> > sure it'd be any faster than the typical sprintf, but it might be worth\n> > doing anyway.\n> \n> I understand there are issues to do with not being able to use GPL code\n> with Postgres, because its BSD license is not compatible, but would it be\n> acceptable to extract code from BSD style code? If so, my FreeBSD here has\n> libc code and includes the internals used by sprintf for rendering\n> integers (and floats) and so we could include that code in, and should\n> hopefully be portable at the same time as well. \n> \n> This would be a lot faster than going via sprintf and lots of other\n> functions, and would make not just COPY, but I think any SELECT query runs\n> faster as well (because they get rewritten to strings by the output\n> functions don't they). I guess other advantages would be improvements in\n> the regression tests maybe, for problem types like int8 which in the past\n> have had trouble under some BSDs.\n\nDoes using the FreeBSD sprintf conversion functions really make it\nfaster than just calling sprintf? How?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 24 Aug 1999 00:13:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inefficiencies in COPY command" }, { "msg_contents": "Yes, I too would be interested in any code that would speed up COPY\nwithout losing modularity or portability.\n\nPlease let us know if you get a patch we can include in our source tree.\n\n> Wayne Piekarski <[email protected]> writes:\n> > So I made some changes to CopyAttributeOut so that it escapes the string\n> > initially into a temporary buffer (allocated onto the stack) and then\n> > feeds the whole string to the CopySendData which is a lot more efficient\n> > because it can blast the whole string in one go, saving about 1/3 to 1/4\n> > the number of memcpy and so on.\n> \n> copy.c is pretty much of a hack job to start with, IMHO. If you can\n> speed it up without making it even uglier, have at it! However, it\n> also has to be portable, and what you've done here:\n> \n> > CopyAttributeOut(FILE *fp, char *server_string, char *delim, int is_array)\n> > {\n> > \tchar\t *string;\n> > \tchar\t\tc;\n> > +\tchar __buffer [(strlen (server_string) * 2) + 4]; /* Use +4 for safety */\n> \n> is not portable --- variable-sized local arrays are a gcc-ism. (I use\n> 'em a lot too, but not in code intended for public release...) Also,\n> be careful that you don't introduce any assumptions about maximum\n> field or tuple width; we want to get rid of those, not add more.\n> \n> > to also make changes to remove all use of sprintf when numbers\n> > and floats are being converted into strings.\n>              ^^^^^^^^^^\n> \n> While formatting an int is pretty simple, formatting a float is not so\n> simple. I'd be leery of replacing sprintf with quick-hack float\n> conversion code. 
OTOH, if someone wanted to go to the trouble of doing\n> it *right*, using our own code would tend to yield more consistent\n> results across different OSes, which would be a Good Thing. I'm not\n> sure it'd be any faster than the typical sprintf, but it might be worth\n> doing anyway.\n> \n> (My idea of *right* is the guaranteed-inverse float<=>ASCII routines\n> published a few years ago in some SIGPLAN proceedings ... I've got the\n> proceedings, and I even know approximately where they are, but I don't\n> feel like excavating for them right now...)\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 14:04:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inefficiencies in COPY command" } ]
[ { "msg_contents": "The current CVS version of pl/plpgsql/src/gram.y may work for plain\nyacc, but it fails with bison :-(\n\nI think the only real solution will be to stop trying to compile the\nyacc and lex output files as one C compilation, and compile them\nseparately like everyone else does it...\n\nIn the meantime I suggest reverting the \"fix\", since most of the\ndevelopers are presumably using bison.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 08 Aug 1999 15:56:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "plpgsql grammar fix not so easy after all" }, { "msg_contents": "> The current CVS version of pl/plpgsql/src/gram.y may work for plain\n> yacc, but it fails with bison :-(\n> \n> I think the only real solution will be to stop trying to compile the\n> yacc and lex output files as one C compilation, and compile them\n> separately like everyone else does it...\n> \n> In the meantime I suggest reverting the \"fix\", since most of the\n> developers are presumably using bison.\n\nOh, man. Ok, moved.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 8 Aug 1999 20:08:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql grammar fix not so easy after all" } ]
[ { "msg_contents": "Hi all,\n\nIn v6.5\n\n Prevent sorting if result is already sorted\n\nwas implemented by Jan Wieck.\nHis work is for ascending order cases.\n\nHere is a patch to prevent sorting also in descending\norder cases.\nBecause I had already changed _bt_first() to position\nbackward correctly before v6.5,this patch would work.\n\nThis patch needs \"make clean\" .\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n*** ../../head/pgcurrent/backend/optimizer/plan/planner.c\tMon Jul 26\n12:44:55 1999\n--- backend/optimizer/plan/planner.c\tMon Aug 9 11:01:49 1999\n***************\n*** 39,45 ****\n static Plan *make_groupplan(List *group_tlist, bool tuplePerGroup,\n \t\t\t List *groupClause, AttrNumber *grpColIdx,\n \t\t\t Plan *subplan);\n! static bool need_sortplan(List *sortcls, Plan *plan);\n static Plan *make_sortplan(List *tlist, List *sortcls, Plan *plannode);\n\n\n/***************************************************************************\n**\n--- 39,45 ----\n static Plan *make_groupplan(List *group_tlist, bool tuplePerGroup,\n \t\t\t List *groupClause, AttrNumber *grpColIdx,\n \t\t\t Plan *subplan);\n! static ScanDirection get_dir_to_omit_sortplan(List *sortcls, Plan *plan);\n static Plan *make_sortplan(List *tlist, List *sortcls, Plan *plannode);\n\n\n/***************************************************************************\n**\n***************\n*** 303,310 ****\n \t}\n \telse\n \t{\n! \t\tif (parse->sortClause && need_sortplan(parse->sortClause,\nresult_plan))\n! \t\t\treturn (make_sortplan(tlist, parse->sortClause, result_plan));\n \t\telse\n \t\t\treturn ((Plan *) result_plan);\n \t}\n--- 303,319 ----\n \t}\n \telse\n \t{\n! \t\tif (parse->sortClause)\n! \t\t{\n! \t\t\tScanDirection\tdir = get_dir_to_omit_sortplan(parse->sortClause,\nresult_plan);\n! \t\t\tif (ScanDirectionIsNoMovement(dir))\n! \t\t\t\treturn (make_sortplan(tlist, parse->sortClause, result_plan));\n! \t\t\telse\n!\n\n! 
\t\t\t\t((IndexScan *)result_plan)->indxorderdir = dir;\n! \t\t\t\treturn ((Plan *) result_plan);\n! \t\t\t}\n! \t\t}\n \t\telse\n \t\t\treturn ((Plan *) result_plan);\n \t}\n***************\n*** 822,828 ****\n\n\n /* ----------\n! * Support function for need_sortplan\n * ----------\n */\n static TargetEntry *\n--- 831,837 ----\n\n\n /* ----------\n! * Support function for get scan direction to omit sortplan\n * ----------\n */\n static TargetEntry *\n***************\n*** 845,855 ****\n * Check if a user requested ORDER BY is already satisfied by\n * the choosen index scan.\n *\n! * Returns TRUE if sort is required, FALSE if can be omitted.\n * ----------\n */\n! static bool\n! need_sortplan(List *sortcls, Plan *plan)\n {\n \tRelation\tindexRel;\n \tIndexScan *indexScan;\n--- 854,866 ----\n * Check if a user requested ORDER BY is already satisfied by\n * the choosen index scan.\n *\n! * Returns the direction of Index scan to omit sort,\n! * if sort is required returns NoMovementScanDirection\n! *\n * ----------\n */\n! static ScanDirection\n! get_dir_to_omit_sortplan(List *sortcls, Plan *plan)\n {\n \tRelation\tindexRel;\n \tIndexScan *indexScan;\n***************\n*** 858,870 ****\n \tHeapTuple\thtup;\n \tForm_pg_index index_tup;\n \tint\t\t\tkey_no = 0;\n\n \t/* ----------\n \t * Must be an IndexScan\n \t * ----------\n \t */\n \tif (nodeTag(plan) != T_IndexScan)\n! \t\treturn TRUE;\n\n \tindexScan = (IndexScan *) plan;\n\n--- 869,883 ----\n \tHeapTuple\thtup;\n \tForm_pg_index index_tup;\n \tint\t\t\tkey_no = 0;\n+ \tScanDirection dir, nodir = NoMovementScanDirection;\n\n+ \tdir = nodir;\n \t/* ----------\n \t * Must be an IndexScan\n \t * ----------\n \t */\n \tif (nodeTag(plan) != T_IndexScan)\n! \t\treturn nodir;\n\n \tindexScan = (IndexScan *) plan;\n\n***************\n*** 873,881 ****\n \t * ----------\n \t */\n \tif (plan->lefttree != NULL)\n! \t\treturn TRUE;\n \tif (plan->righttree != NULL)\n! 
\t\treturn TRUE;\n\n \t/* ----------\n \t * Must be a single index scan\n--- 886,894 ----\n \t * ----------\n \t */\n \tif (plan->lefttree != NULL)\n! \t\treturn nodir;\n \tif (plan->righttree != NULL)\n! \t\treturn nodir;\n\n \t/* ----------\n \t * Must be a single index scan\n***************\n*** 882,888 ****\n \t * ----------\n \t */\n \tif (length(indexScan->indxid) != 1)\n! \t\treturn TRUE;\n\n \t/* ----------\n \t * Indices can only have up to 8 attributes. So an ORDER BY using\n--- 895,901 ----\n \t * ----------\n \t */\n \tif (length(indexScan->indxid) != 1)\n! \t\treturn nodir;\n\n \t/* ----------\n \t * Indices can only have up to 8 attributes. So an ORDER BY using\n***************\n*** 890,896 ****\n \t * ----------\n \t */\n \tif (length(sortcls) > 8)\n! \t\treturn TRUE;\n\n \t/* ----------\n \t * The choosen Index must be a btree\n--- 903,909 ----\n \t * ----------\n \t */\n \tif (length(sortcls) > 8)\n! \t\treturn nodir;\n\n \t/* ----------\n \t * The choosen Index must be a btree\n***************\n*** 902,908 ****\n \tif (strcmp(nameout(&(indexRel->rd_am->amname)), \"btree\") != 0)\n \t{\n \t\theap_close(indexRel);\n! \t\treturn TRUE;\n \t}\n \theap_close(indexRel);\n\n--- 915,921 ----\n \tif (strcmp(nameout(&(indexRel->rd_am->amname)), \"btree\") != 0)\n \t{\n \t\theap_close(indexRel);\n! \t\treturn nodir;\n \t}\n \theap_close(indexRel);\n\n***************\n*** 937,943 ****\n \t\t\t * Could this happen?\n \t\t\t * ----------\n \t\t\t */\n! \t\t\treturn TRUE;\n \t\t}\n \t\tif (nodeTag(tle->expr) != T_Var)\n \t\t{\n--- 950,956 ----\n \t\t\t * Could this happen?\n \t\t\t * ----------\n \t\t\t */\n! \t\t\treturn nodir;\n \t\t}\n \t\tif (nodeTag(tle->expr) != T_Var)\n \t\t{\n***************\n*** 946,952 ****\n \t\t\t * cannot be the indexed attribute\n \t\t\t * ----------\n \t\t\t */\n! \t\t\treturn TRUE;\n \t\t}\n \t\tvar = (Var *) (tle->expr);\n\n--- 959,965 ----\n \t\t\t * cannot be the indexed attribute\n \t\t\t * ----------\n \t\t\t */\n! 
\t\t\treturn nodir;\n \t\t}\n \t\tvar = (Var *) (tle->expr);\n\n***************\n*** 957,963 ****\n \t\t\t * that of the index\n \t\t\t * ----------\n \t\t\t */\n! \t\t\treturn TRUE;\n \t\t}\n\n \t\tif (var->varattno != index_tup->indkey[key_no])\n--- 970,976 ----\n \t\t\t * that of the index\n \t\t\t * ----------\n \t\t\t */\n! \t\t\treturn nodir;\n \t\t}\n\n \t\tif (var->varattno != index_tup->indkey[key_no])\n***************\n*** 966,972 ****\n \t\t\t * It isn't the indexed attribute.\n \t\t\t * ----------\n \t\t\t */\n! \t\t\treturn TRUE;\n \t\t}\n\n \t\tif (oprid(oper(\"<\", resdom->restype, resdom->restype, FALSE)) !=\nsortcl->opoid)\n--- 979,985 ----\n \t\t\t * It isn't the indexed attribute.\n \t\t\t * ----------\n \t\t\t */\n! \t\t\treturn nodir;\n \t\t}\n\n \t\tif (oprid(oper(\"<\", resdom->restype, resdom->restype, FALSE)) !=\nsortcl->opoid)\n***************\n*** 975,982 ****\n \t\t\t * Sort order isn't in ascending order.\n \t\t\t * ----------\n \t\t\t */\n! \t\t\treturn TRUE;\n \t\t}\n\n \t\tkey_no++;\n \t}\n--- 988,1007 ----\n \t\t\t * Sort order isn't in ascending order.\n \t\t\t * ----------\n \t\t\t */\n! \t\t\tif (ScanDirectionIsForward(dir))\n! \t\t\t\treturn nodir;\n! \t\t\tdir = BackwardScanDirection;\n \t\t}\n+ \t\telse\n+ \t\t{\n+ \t\t\t/* ----------\n+ \t\t\t * Sort order is in ascending order.\n+ \t\t\t * ----------\n+ \t\t\t*/\n+ \t\t\tif (ScanDirectionIsBackward(dir))\n+ \t\t\t\treturn nodir;\n+ \t\t\tdir = ForwardScanDirection;\n+ \t\t}\n\n \t\tkey_no++;\n \t}\n***************\n*** 985,989 ****\n \t * Index matches ORDER BY - sort not required\n \t * ----------\n \t */\n! \treturn FALSE;\n }\n--- 1010,1014 ----\n \t * Index matches ORDER BY - sort not required\n \t * ----------\n \t */\n! 
\treturn dir;\n }\n*** ../../head/pgcurrent/backend/executor/nodeIndexscan.c\tMon Jul 26\n12:44:47 1999\n--- backend/executor/nodeIndexscan.c\tMon Aug 9 10:54:23 1999\n***************\n*** 99,104 ****\n--- 99,111 ----\n \t */\n \testate = node->scan.plan.state;\n \tdirection = estate->es_direction;\n+ \tif (ScanDirectionIsBackward(node->indxorderdir))\n+ \t{\n+ \t\tif (ScanDirectionIsForward(direction))\n+ \t\t\tdirection = BackwardScanDirection;\n+ \t\telse if (ScanDirectionIsBackward(direction))\n+ \t\t\tdirection = ForwardScanDirection;\n+ \t}\n \tsnapshot = estate->es_snapshot;\n \tscanstate = node->scan.scanstate;\n \tindexstate = node->indxstate;\n***************\n*** 316,321 ****\n--- 323,330 ----\n \tindxqual = node->indxqual;\n \tnumScanKeys = indexstate->iss_NumScanKeys;\n \tindexstate->iss_IndexPtr = -1;\n+ \tif (ScanDirectionIsBackward(node->indxorderdir))\n+ \t\tindexstate->iss_IndexPtr = numIndices;\n\n \t/* If this is re-scanning of PlanQual ... */\n \tif (estate->es_evTuple != NULL &&\n***************\n*** 966,971 ****\n--- 975,982 ----\n \t}\n\n \tindexstate->iss_NumIndices = numIndices;\n+ \tif (ScanDirectionIsBackward(node->indxorderdir))\n+ \t\tindexPtr = numIndices;\n \tindexstate->iss_IndexPtr = indexPtr;\n \tindexstate->iss_ScanKeys = scanKeys;\n \tindexstate->iss_NumScanKeys = numScanKeys;\n*** ../../head/pgcurrent/backend/optimizer/plan/createplan.c\tMon Aug 9\n11:31:33 1999\n--- backend/optimizer/plan/createplan.c\tMon Aug 9 11:48:55 1999\n***************\n*** 1024,1029 ****\n--- 1024,1030 ----\n \tnode->indxid = indxid;\n \tnode->indxqual = indxqual;\n \tnode->indxqualorig = indxqualorig;\n+ \tnode->indxorderdir = NoMovementScanDirection;\n \tnode->scan.scanstate = (CommonScanState *) NULL;\n\n \treturn node;\n*** ../../head/pgcurrent/backend/nodes/copyfuncs.c\tWed Jul 28 15:25:51 1999\n--- backend/nodes/copyfuncs.c\tMon Aug 9 10:55:00 1999\n***************\n*** 238,243 ****\n--- 238,244 ----\n \tnewnode->indxid = 
listCopy(from->indxid);\n \tNode_Copy(from, newnode, indxqual);\n \tNode_Copy(from, newnode, indxqualorig);\n+ \tnewnode->indxorderdir = from->indxorderdir;\n\n \treturn newnode;\n }\n*** ../../head/pgcurrent/backend/nodes/readfuncs.c\tMon Jul 26 14:45:56 1999\n--- backend/nodes/readfuncs.c\tMon Aug 9 11:00:47 1999\n***************\n*** 532,537 ****\n--- 532,542 ----\n \ttoken = lsptok(NULL, &length);\t\t/* eat :indxqualorig */\n \tlocal_node->indxqualorig = nodeRead(true);\t/* now read it */\n\n+ \ttoken = lsptok(NULL, &length);\t\t/* eat :indxorderdir */\n+ \ttoken = lsptok(NULL, &length);\t\t/* get indxorderdir */\n+\n+ \tlocal_node->indxorderdir = atoi(token);\n+\n \treturn local_node;\n }\n\n*** ../../head/pgcurrent/backend/nodes/outfuncs.c\tMon Jul 26 14:45:56 1999\n--- backend/nodes/outfuncs.c\tMon Aug 9 10:55:28 1999\n***************\n*** 445,450 ****\n--- 445,451 ----\n \tappendStringInfo(str, \" :indxqualorig \");\n \t_outNode(str, node->indxqualorig);\n\n+ \tappendStringInfo(str, \" :indxorderdir %d \", node->indxorderdir);\n }\n\n /*\n*** ../../head/pgcurrent/backend/nodes/equalfuncs.c\tFri Jul 30 17:29:37\n1999\n--- backend/nodes/equalfuncs.c\tMon Aug 9 10:55:08 1999\n***************\n*** 437,442 ****\n--- 437,445 ----\n \tif (a->scan.scanrelid != b->scan.scanrelid)\n \t\treturn false;\n\n+ \tif (a->indxorderdir != b->indxorderdir)\n+ \t\treturn false;\n+\n \tif (!equali(a->indxid, b->indxid))\n \t\treturn false;\n \treturn true;\n*** ../../head/pgcurrent/include/nodes/plannodes.h\tMon Jul 26 12:45:39 1999\n--- include/nodes/plannodes.h\tMon Aug 9 10:52:54 1999\n***************\n*** 175,180 ****\n--- 175,181 ----\n \tList\t *indxid;\n \tList\t *indxqual;\n \tList\t *indxqualorig;\n+ \tScanDirection\tindxorderdir;\n \tIndexScanState *indxstate;\n } IndexScan;\n\n*** ../../head/pgcurrent/backend/commands/explain.c\tMon Jul 26 12:44:46\n1999\n--- backend/commands/explain.c\tMon Aug 9 10:53:44 1999\n***************\n*** 200,205 ****\n--- 200,207 ----\n 
\tswitch (nodeTag(plan))\n \t{\n \t\tcase T_IndexScan:\n+ \t\t\tif (ScanDirectionIsBackward(((IndexScan *)plan)->indxorderdir))\n+ \t\t\t\tappendStringInfo(str, \" Backward\");\n \t\t\tappendStringInfo(str, \" using \");\n \t\t\ti = 0;\n \t\t\tforeach(l, ((IndexScan *) plan)->indxid)\n\n", "msg_date": "Mon, 9 Aug 1999 12:01:28 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Descending order Index scan patch" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hi all,\n> \n> In v6.5\n> \n> Prevent sorting if result is already sorted\n> \n> was implemented by Jan Wieck.\n> His work is for ascending order cases.\n> \n> Here is a patch to prevent sorting also in descending\n> order cases.\n> Because I had already changed _bt_first() to position\n> backward correctly before v6.5,this patch would work.\n> \n> This patch needs \"make clean\" .\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n\n\nThis patch is broken. See the wrapping that happened to the /**** line.\nPlease resubmit.\n\n\n> *** ../../head/pgcurrent/backend/optimizer/plan/planner.c\tMon Jul 26\n> 12:44:55 1999\n> --- backend/optimizer/plan/planner.c\tMon Aug 9 11:01:49 1999\n> ***************\n> *** 39,45 ****\n> static Plan *make_groupplan(List *group_tlist, bool tuplePerGroup,\n> \t\t\t List *groupClause, AttrNumber *grpColIdx,\n> \t\t\t Plan *subplan);\n> ! static bool need_sortplan(List *sortcls, Plan *plan);\n> static Plan *make_sortplan(List *tlist, List *sortcls, Plan *plannode);\n> \n> \n> /***************************************************************************\n> **\n> --- 39,45 ----\n> static Plan *make_groupplan(List *group_tlist, bool tuplePerGroup,\n> \t\t\t List *groupClause, AttrNumber *grpColIdx,\n> \t\t\t Plan *subplan);\n> ! 
static ScanDirection get_dir_to_omit_sortplan(List *sortcls, Plan *plan);\n> static Plan *make_sortplan(List *tlist, List *sortcls, Plan *plannode);\n> \n> \n> /***************************************************************************\n> **\n> ***************\n> *** 303,310 ****\n> \t}\n> \telse\n> \t{\n> ! \t\tif (parse->sortClause && need_sortplan(parse->sortClause,\n> result_plan))\n> ! \t\t\treturn (make_sortplan(tlist, parse->sortClause, result_plan));\n> \t\telse\n> \t\t\treturn ((Plan *) result_plan);\n> \t}\n> --- 303,319 ----\n> \t}\n> \telse\n> \t{\n> ! \t\tif (parse->sortClause)\n> ! \t\t{\n> ! \t\t\tScanDirection\tdir = get_dir_to_omit_sortplan(parse->sortClause,\n> result_plan);\n> ! \t\t\tif (ScanDirectionIsNoMovement(dir))\n> ! \t\t\t\treturn (make_sortplan(tlist, parse->sortClause, result_plan));\n> ! \t\t\telse\n> !\n> \n> ! \t\t\t\t((IndexScan *)result_plan)->indxorderdir = dir;\n> ! \t\t\t\treturn ((Plan *) result_plan);\n> ! \t\t\t}\n> ! \t\t}\n> \t\telse\n> \t\t\treturn ((Plan *) result_plan);\n> \t}\n> ***************\n> *** 822,828 ****\n> \n> \n> /* ----------\n> ! * Support function for need_sortplan\n> * ----------\n> */\n> static TargetEntry *\n> --- 831,837 ----\n> \n> \n> /* ----------\n> ! * Support function for get scan direction to omit sortplan\n> * ----------\n> */\n> static TargetEntry *\n> ***************\n> *** 845,855 ****\n> * Check if a user requested ORDER BY is already satisfied by\n> * the choosen index scan.\n> *\n> ! * Returns TRUE if sort is required, FALSE if can be omitted.\n> * ----------\n> */\n> ! static bool\n> ! need_sortplan(List *sortcls, Plan *plan)\n> {\n> \tRelation\tindexRel;\n> \tIndexScan *indexScan;\n> --- 854,866 ----\n> * Check if a user requested ORDER BY is already satisfied by\n> * the choosen index scan.\n> *\n> ! * Returns the direction of Index scan to omit sort,\n> ! * if sort is required returns NoMovementScanDirection\n> ! *\n> * ----------\n> */\n> ! static ScanDirection\n> ! 
get_dir_to_omit_sortplan(List *sortcls, Plan *plan)\n> {\n> \tRelation\tindexRel;\n> \tIndexScan *indexScan;\n> ***************\n> *** 858,870 ****\n> \tHeapTuple\thtup;\n> \tForm_pg_index index_tup;\n> \tint\t\t\tkey_no = 0;\n> \n> \t/* ----------\n> \t * Must be an IndexScan\n> \t * ----------\n> \t */\n> \tif (nodeTag(plan) != T_IndexScan)\n> ! \t\treturn TRUE;\n> \n> \tindexScan = (IndexScan *) plan;\n> \n> --- 869,883 ----\n> \tHeapTuple\thtup;\n> \tForm_pg_index index_tup;\n> \tint\t\t\tkey_no = 0;\n> + \tScanDirection dir, nodir = NoMovementScanDirection;\n> \n> + \tdir = nodir;\n> \t/* ----------\n> \t * Must be an IndexScan\n> \t * ----------\n> \t */\n> \tif (nodeTag(plan) != T_IndexScan)\n> ! \t\treturn nodir;\n> \n> \tindexScan = (IndexScan *) plan;\n> \n> ***************\n> *** 873,881 ****\n> \t * ----------\n> \t */\n> \tif (plan->lefttree != NULL)\n> ! \t\treturn TRUE;\n> \tif (plan->righttree != NULL)\n> ! \t\treturn TRUE;\n> \n> \t/* ----------\n> \t * Must be a single index scan\n> --- 886,894 ----\n> \t * ----------\n> \t */\n> \tif (plan->lefttree != NULL)\n> ! \t\treturn nodir;\n> \tif (plan->righttree != NULL)\n> ! \t\treturn nodir;\n> \n> \t/* ----------\n> \t * Must be a single index scan\n> ***************\n> *** 882,888 ****\n> \t * ----------\n> \t */\n> \tif (length(indexScan->indxid) != 1)\n> ! \t\treturn TRUE;\n> \n> \t/* ----------\n> \t * Indices can only have up to 8 attributes. So an ORDER BY using\n> --- 895,901 ----\n> \t * ----------\n> \t */\n> \tif (length(indexScan->indxid) != 1)\n> ! \t\treturn nodir;\n> \n> \t/* ----------\n> \t * Indices can only have up to 8 attributes. So an ORDER BY using\n> ***************\n> *** 890,896 ****\n> \t * ----------\n> \t */\n> \tif (length(sortcls) > 8)\n> ! \t\treturn TRUE;\n> \n> \t/* ----------\n> \t * The choosen Index must be a btree\n> --- 903,909 ----\n> \t * ----------\n> \t */\n> \tif (length(sortcls) > 8)\n> ! 
\t\treturn nodir;\n> \n> \t/* ----------\n> \t * The choosen Index must be a btree\n> ***************\n> *** 902,908 ****\n> \tif (strcmp(nameout(&(indexRel->rd_am->amname)), \"btree\") != 0)\n> \t{\n> \t\theap_close(indexRel);\n> ! \t\treturn TRUE;\n> \t}\n> \theap_close(indexRel);\n> \n> --- 915,921 ----\n> \tif (strcmp(nameout(&(indexRel->rd_am->amname)), \"btree\") != 0)\n> \t{\n> \t\theap_close(indexRel);\n> ! \t\treturn nodir;\n> \t}\n> \theap_close(indexRel);\n> \n> ***************\n> *** 937,943 ****\n> \t\t\t * Could this happen?\n> \t\t\t * ----------\n> \t\t\t */\n> ! \t\t\treturn TRUE;\n> \t\t}\n> \t\tif (nodeTag(tle->expr) != T_Var)\n> \t\t{\n> --- 950,956 ----\n> \t\t\t * Could this happen?\n> \t\t\t * ----------\n> \t\t\t */\n> ! \t\t\treturn nodir;\n> \t\t}\n> \t\tif (nodeTag(tle->expr) != T_Var)\n> \t\t{\n> ***************\n> *** 946,952 ****\n> \t\t\t * cannot be the indexed attribute\n> \t\t\t * ----------\n> \t\t\t */\n> ! \t\t\treturn TRUE;\n> \t\t}\n> \t\tvar = (Var *) (tle->expr);\n> \n> --- 959,965 ----\n> \t\t\t * cannot be the indexed attribute\n> \t\t\t * ----------\n> \t\t\t */\n> ! \t\t\treturn nodir;\n> \t\t}\n> \t\tvar = (Var *) (tle->expr);\n> \n> ***************\n> *** 957,963 ****\n> \t\t\t * that of the index\n> \t\t\t * ----------\n> \t\t\t */\n> ! \t\t\treturn TRUE;\n> \t\t}\n> \n> \t\tif (var->varattno != index_tup->indkey[key_no])\n> --- 970,976 ----\n> \t\t\t * that of the index\n> \t\t\t * ----------\n> \t\t\t */\n> ! \t\t\treturn nodir;\n> \t\t}\n> \n> \t\tif (var->varattno != index_tup->indkey[key_no])\n> ***************\n> *** 966,972 ****\n> \t\t\t * It isn't the indexed attribute.\n> \t\t\t * ----------\n> \t\t\t */\n> ! \t\t\treturn TRUE;\n> \t\t}\n> \n> \t\tif (oprid(oper(\"<\", resdom->restype, resdom->restype, FALSE)) !=\n> sortcl->opoid)\n> --- 979,985 ----\n> \t\t\t * It isn't the indexed attribute.\n> \t\t\t * ----------\n> \t\t\t */\n> ! 
\t\t\treturn nodir;\n> \t\t}\n> \n> \t\tif (oprid(oper(\"<\", resdom->restype, resdom->restype, FALSE)) !=\n> sortcl->opoid)\n> ***************\n> *** 975,982 ****\n> \t\t\t * Sort order isn't in ascending order.\n> \t\t\t * ----------\n> \t\t\t */\n> ! \t\t\treturn TRUE;\n> \t\t}\n> \n> \t\tkey_no++;\n> \t}\n> --- 988,1007 ----\n> \t\t\t * Sort order isn't in ascending order.\n> \t\t\t * ----------\n> \t\t\t */\n> ! \t\t\tif (ScanDirectionIsForward(dir))\n> ! \t\t\t\treturn nodir;\n> ! \t\t\tdir = BackwardScanDirection;\n> \t\t}\n> + \t\telse\n> + \t\t{\n> + \t\t\t/* ----------\n> + \t\t\t * Sort order is in ascending order.\n> + \t\t\t * ----------\n> + \t\t\t*/\n> + \t\t\tif (ScanDirectionIsBackward(dir))\n> + \t\t\t\treturn nodir;\n> + \t\t\tdir = ForwardScanDirection;\n> + \t\t}\n> \n> \t\tkey_no++;\n> \t}\n> ***************\n> *** 985,989 ****\n> \t * Index matches ORDER BY - sort not required\n> \t * ----------\n> \t */\n> ! \treturn FALSE;\n> }\n> --- 1010,1014 ----\n> \t * Index matches ORDER BY - sort not required\n> \t * ----------\n> \t */\n> ! 
\treturn dir;\n> }\n> *** ../../head/pgcurrent/backend/executor/nodeIndexscan.c\tMon Jul 26\n> 12:44:47 1999\n> --- backend/executor/nodeIndexscan.c\tMon Aug 9 10:54:23 1999\n> ***************\n> *** 99,104 ****\n> --- 99,111 ----\n> \t */\n> \testate = node->scan.plan.state;\n> \tdirection = estate->es_direction;\n> + \tif (ScanDirectionIsBackward(node->indxorderdir))\n> + \t{\n> + \t\tif (ScanDirectionIsForward(direction))\n> + \t\t\tdirection = BackwardScanDirection;\n> + \t\telse if (ScanDirectionIsBackward(direction))\n> + \t\t\tdirection = ForwardScanDirection;\n> + \t}\n> \tsnapshot = estate->es_snapshot;\n> \tscanstate = node->scan.scanstate;\n> \tindexstate = node->indxstate;\n> ***************\n> *** 316,321 ****\n> --- 323,330 ----\n> \tindxqual = node->indxqual;\n> \tnumScanKeys = indexstate->iss_NumScanKeys;\n> \tindexstate->iss_IndexPtr = -1;\n> + \tif (ScanDirectionIsBackward(node->indxorderdir))\n> + \t\tindexstate->iss_IndexPtr = numIndices;\n> \n> \t/* If this is re-scanning of PlanQual ... 
*/\n> \tif (estate->es_evTuple != NULL &&\n> ***************\n> *** 966,971 ****\n> --- 975,982 ----\n> \t}\n> \n> \tindexstate->iss_NumIndices = numIndices;\n> + \tif (ScanDirectionIsBackward(node->indxorderdir))\n> + \t\tindexPtr = numIndices;\n> \tindexstate->iss_IndexPtr = indexPtr;\n> \tindexstate->iss_ScanKeys = scanKeys;\n> \tindexstate->iss_NumScanKeys = numScanKeys;\n> *** ../../head/pgcurrent/backend/optimizer/plan/createplan.c\tMon Aug 9\n> 11:31:33 1999\n> --- backend/optimizer/plan/createplan.c\tMon Aug 9 11:48:55 1999\n> ***************\n> *** 1024,1029 ****\n> --- 1024,1030 ----\n> \tnode->indxid = indxid;\n> \tnode->indxqual = indxqual;\n> \tnode->indxqualorig = indxqualorig;\n> + \tnode->indxorderdir = NoMovementScanDirection;\n> \tnode->scan.scanstate = (CommonScanState *) NULL;\n> \n> \treturn node;\n> *** ../../head/pgcurrent/backend/nodes/copyfuncs.c\tWed Jul 28 15:25:51 1999\n> --- backend/nodes/copyfuncs.c\tMon Aug 9 10:55:00 1999\n> ***************\n> *** 238,243 ****\n> --- 238,244 ----\n> \tnewnode->indxid = listCopy(from->indxid);\n> \tNode_Copy(from, newnode, indxqual);\n> \tNode_Copy(from, newnode, indxqualorig);\n> + \tnewnode->indxorderdir = from->indxorderdir;\n> \n> \treturn newnode;\n> }\n> *** ../../head/pgcurrent/backend/nodes/readfuncs.c\tMon Jul 26 14:45:56 1999\n> --- backend/nodes/readfuncs.c\tMon Aug 9 11:00:47 1999\n> ***************\n> *** 532,537 ****\n> --- 532,542 ----\n> \ttoken = lsptok(NULL, &length);\t\t/* eat :indxqualorig */\n> \tlocal_node->indxqualorig = nodeRead(true);\t/* now read it */\n> \n> + \ttoken = lsptok(NULL, &length);\t\t/* eat :indxorderdir */\n> + \ttoken = lsptok(NULL, &length);\t\t/* get indxorderdir */\n> +\n> + \tlocal_node->indxorderdir = atoi(token);\n> +\n> \treturn local_node;\n> }\n> \n> *** ../../head/pgcurrent/backend/nodes/outfuncs.c\tMon Jul 26 14:45:56 1999\n> --- backend/nodes/outfuncs.c\tMon Aug 9 10:55:28 1999\n> ***************\n> *** 445,450 ****\n> --- 445,451 ----\n> 
\tappendStringInfo(str, \" :indxqualorig \");\n> \t_outNode(str, node->indxqualorig);\n> \n> + \tappendStringInfo(str, \" :indxorderdir %d \", node->indxorderdir);\n> }\n> \n> /*\n> *** ../../head/pgcurrent/backend/nodes/equalfuncs.c\tFri Jul 30 17:29:37\n> 1999\n> --- backend/nodes/equalfuncs.c\tMon Aug 9 10:55:08 1999\n> ***************\n> *** 437,442 ****\n> --- 437,445 ----\n> \tif (a->scan.scanrelid != b->scan.scanrelid)\n> \t\treturn false;\n> \n> + \tif (a->indxorderdir != b->indxorderdir)\n> + \t\treturn false;\n> +\n> \tif (!equali(a->indxid, b->indxid))\n> \t\treturn false;\n> \treturn true;\n> *** ../../head/pgcurrent/include/nodes/plannodes.h\tMon Jul 26 12:45:39 1999\n> --- include/nodes/plannodes.h\tMon Aug 9 10:52:54 1999\n> ***************\n> *** 175,180 ****\n> --- 175,181 ----\n> \tList\t *indxid;\n> \tList\t *indxqual;\n> \tList\t *indxqualorig;\n> + \tScanDirection\tindxorderdir;\n> \tIndexScanState *indxstate;\n> } IndexScan;\n> \n> *** ../../head/pgcurrent/backend/commands/explain.c\tMon Jul 26 12:44:46\n> 1999\n> --- backend/commands/explain.c\tMon Aug 9 10:53:44 1999\n> ***************\n> *** 200,205 ****\n> --- 200,207 ----\n> \tswitch (nodeTag(plan))\n> \t{\n> \t\tcase T_IndexScan:\n> + \t\t\tif (ScanDirectionIsBackward(((IndexScan *)plan)->indxorderdir))\n> + \t\t\t\tappendStringInfo(str, \" Backward\");\n> \t\t\tappendStringInfo(str, \" using \");\n> \t\t\ti = 0;\n> \t\t\tforeach(l, ((IndexScan *) plan)->indxid)\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 9 Aug 1999 00:43:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Descending order Index scan patch" } ]
[ { "msg_contents": "Looks like autoconf has been updated.\n\nJust an FYI.\n\n-Ryan\n\n------------- Begin Forwarded Message -------------\n\n> I've discovered a problem with autoconf 2.13 on HP-UX 11.00. config.guess\n> returns one of the following values, which is not recognized by config.sub.\n\n> \thppa2.0n-hp-hpux11.00 --> 32 bit.\n> or \thppa2.0w-hp-hpux11.00 --> 64 bit.\n\nThis has been fixed. Pick up a recent revision of config.guess from the\nAutoconf CVS repository.\n\nCheers, Ben\n\n------------- End Forwarded Message -------------\n\n", "msg_date": "Mon, 9 Aug 1999 09:57:12 -0600 (MDT)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autoconf does not recognize HP-UX 11.00." }, { "msg_contents": "Ryan Bradetich <[email protected]> writes:\n> Looks like autoconf has been updated.\n\n>> This has been fixed. Pick up a recent revision of config.guess from the\n>> Autoconf CVS repository.\n\nI suspected as much. 
I'll make a point of pulling up-to-date configure\n> support files from the Autoconf repository whenever we approach a\n> major release of Postgres.\n> \n> The next question is whether we ought to back-patch such updates into\n> our stable-release series or not? If they're not formal releases from\n> the Autoconf guys I hesitate to do that...\n\nMy opinion is yes, we should...\"stable releases\", IMHO, fix existing bugs\nthat can be done without major re-writes. I'd say a good portion of the\n\"bugs\" are going to be porting related, and some of those are merely\n'Autoconf' related...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 15 Aug 1999 23:57:11 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: autoconf does not recognize HP-UX 11.00. " } ]
[ { "msg_contents": "It seem that a drop table while in a transaction keeps the table but not the\ndata. Bug? or undocumented feature?\n\ntestcase=> select * from t;\ni\n-\n(0 rows)\n\ntestcase=> insert into t VALUES(1); \nINSERT 551854 1\ntestcase=> insert into t VALUES(2); \nINSERT 551855 1\ntestcase=> insert into t VALUES(3); \nINSERT 551856 1\ntestcase=> select * from t;\ni\n-\n1\n2\n3\n(3 rows)\n\ntestcase=> begin;\nBEGIN\ntestcase=> insert into t VALUES(4);\nINSERT 551857 1\ntestcase=> drop table t;\nDROP\ntestcase=> abort;\nABORT\ntestcase=> select * from t;\ni\n-\n(0 rows)\n\ntestcase=> select version(); \nversion \n--------------------------------------------------------------\nPostgreSQL 6.5.0 on i686-pc-linux-gnu, compiled by gcc 2.7.2.3\n", "msg_date": "Mon, 9 Aug 1999 15:15:20 -0500 ", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Drop table abort" }, { "msg_contents": "> It seem that a drop table while in a transaction keeps the table but not the\n> data. Bug? or undocumented feature?\n> \n> testcase=> select * from t;\n> i\n> -\n> (0 rows)\n> \n> testcase=> insert into t VALUES(1); \n> INSERT 551854 1\n> testcase=> insert into t VALUES(2); \n> INSERT 551855 1\n> testcase=> insert into t VALUES(3); \n> INSERT 551856 1\n> testcase=> select * from t;\n> i\n> -\n> 1\n> 2\n> 3\n> (3 rows)\n> \n> testcase=> begin;\n> BEGIN\n> testcase=> insert into t VALUES(4);\n> INSERT 551857 1\n> testcase=> drop table t;\n> DROP\n> testcase=> abort;\n> ABORT\n> testcase=> select * from t;\n> i\n> -\n> (0 rows)\n> \n> testcase=> select version(); \n> version \n\nKnown bug.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Aug 1999 13:35:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Drop table abort" } ]