threads |
---|
[
{
"msg_contents": "Hi all,\n\nI just want to apologize... I finally managed to get through it with\nreviewing my sql files & casting. Sorry for bothering you again with\nthat tiny SQL thing.\n\nPeter B.\n\n",
"msg_date": "Sun, 21 Mar 1999 13:29:14 +0100",
"msg_from": "Peter Blazso <[email protected]>",
"msg_from_op": true,
"msg_subject": "problems are now solved with the view"
}
] |
[
{
"msg_contents": " LIVE PERSONAL PSYCHIC! (as seen on T.V.) \n\n \n LEARN TODAY WHAT YOUR FUTURE HOLDS \n FOR LOVE, FAMILY, AND MONEY. \n \n\n ASTROLOGY CLAIRVOYANCY \n NUMBEROLOGY TAROT \n \n ALL QUESTIONS ANSWERED IMMEDIATELY! \n\n REALIZE YOUR DESTINY! CALL RIGHT NOW!\n\n 1-900-226-4140 or 1-800-372-3384 for VISA, MC \n\n (These are not sex lines!) \n\nThis message is intended for Psychic Readers, Psychic Users and people who are involved in the $1 Billion Dollar a year Psychic Industry. If this his message has reached you in error, please disregard it and accept our apoligies. To be removed from this list, please respond with the subject \"remove\". Thank You. \n\nStop Unsolicited Commercial Email - join CAUCE\n(http://www.cauce.org) \nSupport HR 1748, the anti-spam bill. \n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n LIVE PERSONAL PSYCHIC! (as seen on T.V.) \n\n\n \n\n LEARN TODAY WHAT YOUR FUTURE HOLDS \n\n FOR LOVE, FAMILY, AND MONEY. \n\n\n\n\n\n",
"msg_date": "Sun, 21 Mar 99 20:17:30 Pacific Standard Time",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Find Out What The Future Holds For You?"
}
] |
[
{
"msg_contents": "Hi,\n\n I have no one to turn to. I've tried here & there but got no answer.\n Why is there is a problem creating unique indices on macaddr columns even\n if there ARE NO (checked by count on distinct,group by) duplicated keys ?\n\n And finally :\n\n create table tmac (i int4,mac macaddr);\n insert int table tmac (i) values (1);\n insert int table tmac (i) values (2);\n insert int table tmac (i) values (3);\n select * from tmac where mac='01:02:03:04:05:06';\n\n I get backend termination no matter if table has indices, is vacuumed ,\n is new or even just after the postgres instalation.\n\n HELP !\n\n Oh, I use 6.4.2 under i386 Redhat 5.2 Linux.\n\nPawel Pierscionek,\n\n\n",
"msg_date": "Mon, 22 Mar 1999 00:57:24 +0100",
"msg_from": "Pawel Pierscionek <[email protected]>",
"msg_from_op": true,
"msg_subject": "macaddr stuff !"
},
{
"msg_contents": "> Hi,\n> \n> I have no one to turn to. I've tried here & there but got no answer.\n> Why is there is a problem creating unique indices on macaddr columns even\n> if there ARE NO (checked by count on distinct,group by) duplicated keys ?\n> \n> And finally :\n> \n> create table tmac (i int4,mac macaddr);\n> insert int table tmac (i) values (1);\n> insert int table tmac (i) values (2);\n> insert int table tmac (i) values (3);\n> select * from tmac where mac='01:02:03:04:05:06';\n> \n\nYes, I can reproduce it here. Looks like a bug.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 21 Mar 1999 23:45:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] macaddr stuff !u"
},
{
"msg_contents": "> Hi,\n> \n> I have no one to turn to. I've tried here & there but got no answer.\n> Why is there is a problem creating unique indices on macaddr columns even\n> if there ARE NO (checked by count on distinct,group by) duplicated keys ?\n> \n> And finally :\n> \n> create table tmac (i int4,mac macaddr);\n> insert int table tmac (i) values (1);\n> insert int table tmac (i) values (2);\n> insert int table tmac (i) values (3);\n> select * from tmac where mac='01:02:03:04:05:06';\n\nI have fixed the problem in the current development tree. The problem\nis nulls in that IP field.\n\nI added PointerIsValid() checks to backend/utils/adt/mac.c.\n\nThis will be fixed in 6.5 beta. This was a known problem with the INET\ntypes, but I did not realize how bad it was.\n\n[D'Arcy, I just added PointerIsValid() checks that were similar to other\ntype routines.]\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Mar 1999 00:04:57 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] macaddr stuff !"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> I have fixed the problem in the current development tree. The problem\n> is nulls in that IP field.\n> \n> I added PointerIsValid() checks to backend/utils/adt/mac.c.\n> \n> This will be fixed in 6.5 beta. This was a known problem with the INET\n> types, but I did not realize how bad it was.\n> \n> [D'Arcy, I just added PointerIsValid() checks that were similar to other\n> type routines.]\n\nYes, this was the issue I was mentioning to you the other day in IRC.\nRemember I submitted a patch to network.c but we agreed that the proper\nfix is higer up in the code. The problem is that any function taking\na null arg should return null but in the code as it stands now we don't\ncheck that until after the function has been called. The result is that\nwe have all sorts of code in the package that has to deal with null\narguments just so the result can be thrown away after the function\nreturns. What we need to do is identify the places where the function\nis dispatched and deal with the null args there before calling them. I\ntried finding these places but it wasn't so easy. Has anyone else been\nlooking at this part of the code?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 22 Mar 1999 06:56:24 -0500 (EST)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] macaddr stuff !"
}
] |
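The guard Bruce added is the standard null-protection idiom for type routines. A minimal sketch of the pattern, using simplified stand-ins rather than the real backend definitions (the actual patch touched backend/utils/adt/mac.c):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-ins; the real macaddr struct and the
 * PointerIsValid() macro live in the PostgreSQL source tree. */
typedef struct { unsigned char bytes[6]; } macaddr;
#define PointerIsValid(p) ((p) != NULL)

/* Comparison with a null guard: a NULL column value reaches the type
 * routine as a null pointer, so dereferencing it unguarded crashes
 * the backend -- the failure mode in Pawel's report. */
bool
macaddr_eq(macaddr *a, macaddr *b)
{
    if (!PointerIsValid(a) || !PointerIsValid(b))
        return false;           /* treat NULL as "no match" */
    return memcmp(a->bytes, b->bytes, sizeof(a->bytes)) == 0;
}
```

As D'Arcy notes, this per-routine guarding is a workaround; the cleaner long-term fix is to handle null arguments once at the function-dispatch level, so individual type routines never see them.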
[
{
"msg_contents": "Min() and max() are now working, thank you. Create User causes the back end\nto abort:\n\nselect * from pg_shadow;\n\t\tusename\n|usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd|valuntil\n\n\t\n--------+--------+-----------+--------+--------+---------+------+-----------\n-----------------\n\t\tpostgres| 501|t |t |t |t |\n|Fri Jan 30 23:00:00 2037 MST\n\t\t(1 row)\n\ncreate user kari;\nERROR: Bad abstime external representation ''\n\nselect * from pg_shadow;\nNOTICE: (transaction aborted): queries ignored until END\n*ABORT STATE*\n\nInteresting to note that with 6.4.2 everything worked last night. This\nmorning, all ODBC client get a permission denied message. I can log using\npsql but not with any ODBC client. The only change I made was installing a\nnew printer.\n\nStill some regression test failures:\n\n\tint2 .. failed\n\tint4 .. failed\n\tgeometry .. failed\n\ttriggers .. failed\n\n\nI am not sure I have the cvs update thing correct. I had to cd into the\ndirectory containing nodeAgg.c and do a cvs update to get this downloaded.\nHow come a cvs update from my pgsql/src directory did not pull all updated\nsources?\n\nThanks, Michael\n\n\n\t-----Original Message-----\n\tFrom:\tBruce Momjian [SMTP:[email protected]]\n\tSent:\tSunday, March 21, 1999 11:59 AM\n\tTo:\[email protected]\n\tCc:\[email protected]\n\tSubject:\tRe: [HACKERS] min() and max() causing aborts\n\n\t> Yesterday evening (after you partially backed out that patch) I\nupdated\n\t> and rebuilt and ran regression test. I didn't see any regress\nfailures\n\t> involving aggregates, and a quick hand smoke-test of max and min\nlooks\n\t> OK:\n\n\tI am attaching the patch I BACKED HOW, so the user can see if it is\nin\n\ttheir tree. It should not be ther.\n\n\t-- \n\t Bruce Momjian | http://www.op.net/~candle\n\t [email protected] | (610) 853-3000\n\t + If your life is a hard drive, | 830 Blythe Avenue\n\t + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026 << File: /wrk/tmp2/nodeAgg.c.diff >> << File: ATT26249.ATT >> \n",
"msg_date": "Sun, 21 Mar 1999 19:52:07 -0600",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] min() and max() causing aborts"
}
] |
[
{
"msg_contents": "\nCurious as to why user passwords are stored as cleartext in the pg_pwd\nfile. Have I setup something wrong?\n\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\nJames Thompson 138 Cardwell Hall Manhattan, Ks 66506 785-532-0561 \nKansas State University Department of Mathematics\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\n\n\n",
"msg_date": "Sun, 21 Mar 1999 20:38:39 -0600 (CST)",
"msg_from": "James Thompson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Passwords in cleartext?"
}
] |
[
{
"msg_contents": "Hi,\n\nI have solved some problems with dynamic loading on NT. It is possible to\nrun succesfully both trigger and plpgsql regression tests. The patch is in\nthe included file \"diff\".\n\n\t\t\tDan\n\nPS: current regress.out and regression.diff are included\nsome notes:\n- int2, int4, float8 - different error messages from libc\n- geometry - differences in float numbers (mostly least significant digits)\n- date & time - 1 hour difference\n- constraints - important!!!\n- misc - missing lines in result\n- rules - different order of returned records\n- temp - crash when doing \"\\c regression\" ;-(",
"msg_date": "Mon, 22 Mar 1999 12:08:53 +0100",
"msg_from": "Horak Daniel <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] dynamic loading on NT works!"
},
{
"msg_contents": "[Charset iso-8859-2 unsupported, filtering to ASCII...]\n> Hi,\n> \n> I have solved some problems with dynamic loading on NT. It is possible to\n> run succesfully both trigger and plpgsql regression tests. The patch is in\n> the included file \"diff\".\n> \n> \t\t\tDan\n> \n> PS: current regress.out and regression.diff are included\n> some notes:\n> - int2, int4, float8 - different error messages from libc\n> - geometry - differences in float numbers (mostly least significant digits)\n> - date & time - 1 hour difference\n> - constraints - important!!!\n> - misc - missing lines in result\n> - rules - different order of returned records\n> - temp - crash when doing \"\\c regression\" ;-(\n\nApplied. Hopefully someone will have fixes for the regression problems.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Mar 1999 11:44:55 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic loading on NT works!"
}
] |
[
{
"msg_contents": "I'd like to make the automatic transaction starting of ecpg a function\nadjustable on a per connection base. My best bet is to add a variable and\nallow something like:\n\nexec sql [at connection] set autotrans = 1;\n\nThe question now is how do I name this variable? It seems to me that there\nis not much in the standard that would limit my choice, is it?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Mon, 22 Mar 1999 19:53:47 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "SET AUTO TRANSACTION"
},
{
"msg_contents": "> I'd like to make the automatic transaction starting of ecpg a function\n> adjustable on a per connection base. My best bet is to add a variable \n> and allow something like:\n> exec sql [at connection] set autotrans = 1;\n> The question now is how do I name this variable? It seems to me that \n> there is not much in the standard that would limit my choice, is it?\n\nThe Ingres syntax is\n\n SET AUTOCOMMIT ON;\n\nwhich is clearer than setting a variable. We do allow the syntax\n\n SET item TO value;\n\nwhich is almost as clear:\n\n SET AUTOCOMMIT TO ON;\n\nBut I think it would be appropriate to not require quotes around a\nstring or an integer argument for this important option.\n\n - Tom\n",
"msg_date": "Fri, 26 Mar 1999 18:15:07 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SET AUTO TRANSACTION"
}
] |
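For illustration, the proposed switch might look like this in an embedded-SQL program. This is a sketch only: the thread had not settled on a final name, so the AUTOCOMMIT spelling (borrowed from Thomas's Ingres example) and the table name are assumptions:

```c
#include <stdio.h>

EXEC SQL INCLUDE sqlca;

int
main(void)
{
    EXEC SQL CONNECT TO testdb AS con1;

    /* hypothetical per-connection switch, following Michael's
       "exec sql [at connection]" proposal */
    EXEC SQL AT con1 SET AUTOCOMMIT TO ON;

    /* with autocommit on, each statement would commit on its own;
       log_tab is a placeholder table for the example */
    EXEC SQL AT con1 INSERT INTO log_tab VALUES (1);

    EXEC SQL DISCONNECT con1;
    return 0;
}
```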
[
{
"msg_contents": "\n[last week aggregation, this week, the optimizer]\n\nI have a somewhat general optimizer question/problem that I would like\nto get some input on - i.e. I'd like to know what is \"supposed\" to\nwork here and what I should be expecting. Sadly, I think the patch\nfor this is more involved than my last message.\n\nUsing my favorite table these days:\n\nTable = lineitem\n+------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+------------------------+----------------------------------+-------+\n| l_orderkey | int4 not null | 4 |\n| l_partkey | int4 not null | 4 |\n| l_suppkey | int4 not null | 4 |\n| l_linenumber | int4 not null | 4 |\n| l_quantity | float4 not null | 4 |\n| l_extendedprice | float4 not null | 4 |\n| l_discount | float4 not null | 4 |\n| l_tax | float4 not null | 4 |\n| l_returnflag | char() not null | 1 |\n| l_linestatus | char() not null | 1 |\n| l_shipdate | date | 4 |\n| l_commitdate | date | 4 |\n| l_receiptdate | date | 4 |\n| l_shipinstruct | char() not null | 25 |\n| l_shipmode | char() not null | 10 |\n| l_comment | char() not null | 44 |\n+------------------------+----------------------------------+-------+\nIndex: lineitem_index_\n\nand the query:\n\n--\n-- Query 1\n--\nexplain select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, \nsum(l_extendedprice) as sum_base_price, \nsum(l_extendedprice*(1-l_discount)) as sum_disc_price, \nsum(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge, \navg(l_quantity) as avg_qty, avg(l_extendedprice) as avg_price, \navg(l_discount) as avg_disc, count(*) as count_order \nfrom lineitem \nwhere l_shipdate <= '1998-09-02'::date \ngroup by l_returnflag, l_linestatus \norder by l_returnflag, l_linestatus;\n\n\nnote that I have eliminated the date calculation in my query of last\nweek and manually replaced it with a constant (since this wasn't\nhappening automatically - but let's not worry about that for now).\nAnd this is only an explain, we care about the optimizer. So we get:\n\nSort (cost=34467.88 size=0 width=0)\n -> Aggregate (cost=34467.88 size=0 width=0)\n -> Group (cost=34467.88 size=0 width=0)\n -> Sort (cost=34467.88 size=0 width=0)\n -> Seq Scan on lineitem (cost=34467.88 size=200191 width=44)\n\nso let's think about the selectivity that is being chosen for the\nseq scan (the where l_shipdate <= '1998-09-02').\n\nTurns out the optimizer is choosing \"33%\", even though the real answer\nis somewhere in 90+% (that's how the query is designed). So, why does\nit do that?\n\nTurns out that selectivity in this case is determined via\nplancat::restriction_selectivity() which calls into functionOID = 103\n(intltsel) for operatorOID = 1096 (date \"<=\") on relation OID = 18663\n(my lineitem).\n\nThis all follows because of the description of 1096 (date \"<=\") in\npg_operator. Looking at local1_template1.bki.source near line 1754\nshows:\n\ninsert OID = 1096 ( \"<=\" PGUID 0 <...> date_le intltsel intltjoinsel )\n\nwhere we see that indeed, it thinks \"intltsel\" is the right function\nto use for \"oprrest\" in the case of dates.\n\nQuestion 1 - is intltsel the right thing for selectivity on dates?\n\nHope someone is still with me.\n\nSo now we're running selfuncs::intltsel() where we make a further call\nto selfuncs::gethilokey(). The job of gethilokey is to determine the\nmin and max values of a particular attribute in the table, which will\nthen be used with the constant in my where clause to estimate the\nselectivity. 
It is going to search the pg_statistic relation with\nthree key values:\n\nAnum_pg_statistic_starelid 18663 (lineitem)\nAnum_pg_statistic_staattnum 11 (l_shipdate)\nAnum_pg_statistic_staop 1096 (date \"<=\")\n\nthis finds no tuples in pg_statistic. Why is that? The only nearby\ntuple in pg_statistic is:\n\nstarelid|staattnum|staop|stalokey |stahikey \n--------+---------+-----+----------------+----------------\n 18663| 11| 0|01-02-1992 |12-01-1998\n\nand the reason the query doesn't match anything? Because 1096 != 0.\nBut why is it 0 in pg_statistic? Statistics are determined near line\n1844 in vacuum.c (assuming a 'vacuum analyze' run at some point)\n\n i = 0;\n values[i++] = (Datum) relid; /* 1 */\n values[i++] = (Datum) attp->attnum; /* 2 */\n====> values[i++] = (Datum) InvalidOid; /* 3 */\n fmgr_info(stats->outfunc, &out_function);\n out_string = <...min...>\n values[i++] = (Datum) fmgr(F_TEXTIN, out_string);\n pfree(out_string);\n out_string = <...max...>\n values[i++] = (Datum) fmgr(F_TEXTIN, out_string);\n pfree(out_string);\n stup = heap_formtuple(sd->rd_att, values, nulls);\n\nthe \"offending\" line is setting the staop to InvalidOid (i.e. 0).\n\nQuestion 2 - is this right? Is the intent for 0 to serve as a\n\"wildcard\", or should it be inserting an entry for each operation\nindividually?\n\nIn the \"wildcard\" case, gethilokey() should allow a match for\n\nAnum_pg_statistic_staop 0\n\ninstead of requiring the more restrictive 1096. In the current code,\nwhat happens next is gethilokey() returns \"not found\" and intltsel()\nreturns the default 1/3, which I see in the resultant query plan (size\n= 200191 is 1/3 of the number of lineitem tuples).\n\nQuestion 3 - is there any inherent reason it couldn't get this right?\nThe statistic is in the table 1992 to 1998, so the '1998-09-02' date\nshould be 90-some% selectivity, a much better guess than 33%.\n\nIt doesn't make a difference for this particular query, of course,\nbecause the seq scan must proceed anyhow, but it could easily affect\nother queries where selectivities matter (and it affects the\nmodifications I am trying to test in the optimizer to be \"smarter\"\nabout selectivities - my overall context is to understand/improve the\nbehavior that the underlying storage system sees from queries like this).\n\nOK, so let's say we treat 0 as a \"wildcard\" and stop checking for\n1096. Now we let gethilokey() return the two dates from the statistic\ntable. The immediate next thing that intltsel() does, near line 122\nin selfuncs.c, is to call atol() on the strings from gethilokey(). And\nguess what it comes up with?\n\nlow = 1\nhigh = 12\n\nbecause it calls atol() on '01-02-1992' and '12-01-1998'. This\nclearly isn't right; it should get some large integer that includes\nthe year and day in the result. Then it should compare reasonably\nwith my constant from the where clause and give a decent selectivity\nvalue. This leads to a re-visit of Question 1.\n\nQuestion 4 - should date \"<=\" use a dateltsel() function instead of\nintltsel() as oprrest?\n\nIf anyone is still with me, could you tell me if this makes sense, or\nif there is some other location where the appropriate type conversion\ncould take place so that intltsel() gets something reasonable when it\ndoes the atol() calls?\n\nCould someone also give me a sense for how far out-of-whack the whole\ncurrent selectivity-handling structure is? It seems that most of the\noperators in pg_operator actually use intltsel() and would have\ntype-specific problems like that described. 
Or is the problem in the\nway attribute values are stored in pg_statistic by vacuum analyze? Or\nis there another layer where type conversion belongs?\n\nPhew. Enough typing, hope someone can follow this and address at\nleast some of the questions.\n\nThanks.\n\nErik Riedel\nCarnegie Mellon University\nwww.cs.cmu.edu/~riedel\n\n",
"msg_date": "Mon, 22 Mar 1999 18:27:15 -0500 (EST)",
"msg_from": "Erik Riedel <[email protected]>",
"msg_from_op": true,
"msg_subject": "optimizer and type question"
},
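The atol() behavior Erik describes is easy to confirm outside the backend; a tiny standalone sketch:

```c
#include <stdio.h>
#include <stdlib.h>

/* atol() parses leading digits and stops at the first non-digit, so a
 * formatted date string collapses to its leading field -- exactly the
 * low = 1, high = 12 values Erik saw inside intltsel(). */
int
main(void)
{
    printf("%ld\n", atol("01-02-1992"));  /* prints 1  */
    printf("%ld\n", atol("12-01-1998"));  /* prints 12 */
    return 0;
}
```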
{
"msg_contents": "On Mon, 22 Mar 1999, you wrote:\n>Question 1 - is intltsel the right thing for selectivity on dates?\n\nI think so... dates are really special integers.\n\n>Question 2 - is this right? Is the intent for 0 to serve as a\n>\"wildcard\", or should it be inserting an entry for each operation\n>individually?\n\nThis looks wrong... but I'm not proficient enough to know.\n\n>Question 3 - is there any inherent reason it couldn't get this right?\n>The statistic is in the table 1992 to 1998, so the '1998-09-02' date\n>should be 90-some% selectivity, a much better guess than 33%.\n\nI would imagine that 33% is a result due to the lack of the statistics match.\n\n>OK, so let's say we treat 0 as a \"wildcard\" and stop checking for\n>1096. Not we let gethilokey() return the two dates from the statistic\n>table. The immediate next thing that intltsel() does, near lines 122\n>in selfuncs.c is call atol() on the strings from gethilokey(). And\n>guess what it comes up with?\n\nThis is ridiculous... why does gethilokey() return a string for a field that is\ninternally stored as an integer?\n\n*sigh* Just more questions...\n\nTaral\n",
"msg_date": "Mon, 22 Mar 1999 17:53:59 -0600",
"msg_from": "Taral <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] optimizer and type question"
},
{
"msg_contents": "Erik Riedel <[email protected]> writes:\n> [ optimizer doesn't find relevant pg_statistic entry ]\n\nIt's clearly a bug that the selectivity code is not finding this tuple.\nIf your analysis is correct, then selectivity estimation has *never*\nworked properly, or at least not in recent memory :-(. Yipes.\nBruce and I found a bunch of other problems in the optimizer recently,\nso it doesn't faze me to assume that this is broken too.\n\n> the \"offending\" line is setting the staop to InvalidOid (i.e. 0).\n> Question 2 - is this right? Is the intent for 0 to serve as a\n> \"wildcard\",\n\nMy thought is that what the staop column ought to be is the OID of the\ncomparison function that was used to determine the sort order of the\ncolumn. Without a sort op the lowest and highest keys in the column are\nnot well defined, so it makes no sense to assert \"these are the lowest\nand highest values\" without providing the sort op that determined that.\n(For sufficiently complex data types one could reasonably have multiple\nordering operators. A crude example is sorting on \"circumference\" and\n\"area\" for polygons.) But typically the sort op will be the \"<\"\noperator for the column data type.\n\nSo, the vacuum code is definitely broken --- it's not storing the sort\nop that it used. The code in gethilokey might be broken too, depending\non how it is producing the operator it's trying to match against the\ntuple. For example, if the actual operator in the query is any of\n< <= > >= on int4, then int4lt ought to be used to probe the pg_statistic\ntable. I'm not sure if we have adequate info in pg_operator or pg_type\nto let the optimizer code determine the right thing to probe with :-(\n\n> The immediate next thing that intltsel() does, near lines 122\n> in selfuncs.c is call atol() on the strings from gethilokey(). And\n> guess what it comes up with?\n> low = 1\n> high = 12\n> because it calls atol() on '01-02-1992' and '12-01-1998'. This\n> clearly isn't right, it should get some large integer that includes\n> the year and day in the result. Then it should compare reasonably\n> with my constant from the where clause and give a decent selectivity\n> value. This leads to a re-visit of Question 1.\n> Question 4 - should date \"<=\" use a dateltsel() function instead of\n> intltsel() as oprrest?\n\nThis is clearly busted as well. I'm not sure that creating dateltsel()\nis the right fix, however, because if you go down that path then every\nsingle datatype needs its own selectivity function; that's more than we\nneed.\n\nWhat we really want here is to be able to map datatype values into\nsome sort of numeric range so that we can compute what fraction of the\nlow-key-to-high-key range is on each side of the probe value (the\nconstant taken from the query). This general concept will apply to\nmany scalar types, so what we want is a type-specific mapping function\nand a less-specific fraction-computing-function. Offhand I'd say that\nwe want intltsel() and floatltsel(), plus conversion routines that can\nproduce either int4 or float8 from a data type as seems appropriate.\nAnything that couldn't map to one or the other would have to supply its\nown selectivity function.\n\n> Or is the problem in the\n> way attribute values are stored in pg_statistic by vacuum analyze?\n\nLooks like it converts the low and high values to text and stores them\nthat way. Ugly as can be :-( but I'm not sure there is a good\nalternative. 
We have no \"wild card\" column type AFAIK, which is what\nthese columns of pg_statistic would have to be to allow storage of\nunconverted min and max values.\n\nI think you've found a can of worms here. Congratulations ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Mar 1999 20:12:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] optimizer and type question "
},
{
"msg_contents": "> Erik Riedel <[email protected]> writes:\n> > [ optimizer doesn't find relevant pg_statistic entry ]\n> \n> It's clearly a bug that the selectivity code is not finding this tuple.\n> If your analysis is correct, then selectivity estimation has *never*\n> worked properly, or at least not in recent memory :-(. Yipes.\n> Bruce and I found a bunch of other problems in the optimizer recently,\n> so it doesn't faze me to assume that this is broken too.\n\nYes. Originally, pg_statistic was always empty, and there was no\npg_attribute.attdisbursion.\n\nI added proper pg_attribute.attdisbursion processing. In fact, our TODO\nlist has(you can see it on our web page under documentation, or in\n/doc/TODO):\n\n\t* update pg_statistic table to remove operator column\n\nWhat I did not realize is that the selectivity code was still addressing\nthat column. We either have to populate is properly, or throw it away. \nThe good thing is that we only use \"<\" and \">\" to compute min/max, so we\nreally don't need that operator column, and I don't know what I would\nput in there anyway.\n\nI realized \"<\" optimization processing was probably pretty broken, so\nthis is no surprise.\n\nWhat we really need is some way to determine how far the requested value\nis from the min/max values. With int, we just do (val-min)/(max-min). \nThat works, but how do we do that for types that don't support division.\nStrings come to mind in this case. Maybe we should support string too,\nand convert all other types to string representation to do the\ncomparison, though things like date type will fail badly.\n\nMy guess is that 1/3 is a pretty good estimate for these types. Perhaps\nwe should just get int types and float8 types to work, and punt on the\nrest.\n\n> I think you've found a can of worms here. Congratulations ;-)\n\nI can ditto that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Mar 1999 21:25:45 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] optimizer and type question"
},
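The (val-min)/(max-min) rule reduces to a few lines once all three values are numeric. A sketch of the interpolation, with the same 1/3 fallback the current code uses (illustrative only, not the backend's actual intltsel()):

```c
#define DEFAULT_SEL (1.0 / 3.0)

/* Estimated fraction of rows satisfying "col <= val", given numeric
 * min/max statistics for the column.  Clamping covers probes that
 * fall outside the observed range. */
double
ltsel(double min, double max, double val)
{
    double frac;

    if (max <= min)              /* missing or degenerate statistics */
        return DEFAULT_SEL;
    frac = (val - min) / (max - min);
    if (frac < 0.0) frac = 0.0;
    if (frac > 1.0) frac = 1.0;
    return frac;
}
```

Erik's follow-up below reports low -2921, high -396 for the probe -486; plugging those in gives (-486 - (-2921)) / (-396 - (-2921)) = 2435 / 2525, about 0.9644, matching the 0.964356 his debug output shows.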
{
"msg_contents": "\nOK, building on your high-level explanation, I am attaching a patch that\nattempts to do something \"better\" than the current code. Note that I\nhave only tested this with the date type and my particular query. I\nhaven't run it through the regression, so consider it \"proof of concept\"\nat best. Although hopefully it will serve my purposes.\n\n> My thought is that what the staop column ought to be is the OID of the\n> comparison function that was used to determine the sort order of the\n> column. Without a sort op the lowest and highest keys in the column are\n> not well defined, so it makes no sense to assert \"these are the lowest\n> and highest values\" without providing the sort op that determined that.\n>\n> (For sufficiently complex data types one could reasonably have multiple\n> ordering operators. A crude example is sorting on \"circumference\" and\n> \"area\" for polygons.) But typically the sort op will be the \"<\"\n> operator for the column data type.\n> \nI changed vacuum.c to do exactly that. oid of the lt sort op.\n\n> So, the vacuum code is definitely broken --- it's not storing the sort\n> op that it used. The code in gethilokey might be broken too, depending\n> on how it is producing the operator it's trying to match against the\n> tuple. For example, if the actual operator in the query is any of\n> < <= > >= on int4, then int4lt ought to be used to probe the pg_statistic\n> table. I'm not sure if we have adequate info in pg_operator or pg_type\n> to let the optimizer code determine the right thing to probe with :-(\n> \nThis indeed seems like a bigger problem. I thought about somehow using\ntype-matching from the sort op and the actual operator in the query - if\nboth the left and right type match, then consider them the same for\npurposes of this probe. That seemed complicated, so I punted in my\nexample - it just does the search with relid and attnum and assumes that\nonly returns one tuple. This works in my case (maybe in all cases,\nbecause of the way vacuum is currently written - ?).\n\n> What we really want here is to be able to map datatype values into\n> some sort of numeric range so that we can compute what fraction of the\n> low-key-to-high-key range is on each side of the probe value (the\n> constant taken from the query). This general concept will apply to\n> many scalar types, so what we want is a type-specific mapping function\n> and a less-specific fraction-computing-function. Offhand I'd say that\n> we want intltsel() and floatltsel(), plus conversion routines that can\n> produce either int4 or float8 from a data type as seems appropriate.\n> Anything that couldn't map to one or the other would have to supply its\n> own selectivity function.\n> \nThis is what my example then does. Uses the stored sort op to get the\ntype and then uses typinput to convert from the string to an int4.\n\nThen puts the int4 back into string format because that's what everyone\nwas expecting.\n\nIt seems to work for my particular query. 
I now get:\n\n(selfuncs) gethilokey() obj 18663 attr 11 opid 1096 (ignored)\n(selfuncs) gethilokey() found op 1087 in pg_proc\n(selfuncs) gethilokey() found type 1082 in pg_type\n(selfuncs) gethilokey() going to use 1084 to convert type 1082\n(selfuncs) gethilokey() have low -2921 high -396\n(selfuncs) intltsel() high -396 low -2921 val -486\n(plancat) restriction_selectivity() for func 103 op 1096 rel 18663 attr\n11 const -486 flag 3 returns 0.964356\nNOTICE: QUERY PLAN:\n\nSort (cost=34467.88 size=0 width=0)\n -> Aggregate (cost=34467.88 size=0 width=0)\n -> Group (cost=34467.88 size=0 width=0)\n -> Sort (cost=34467.88 size=0 width=0)\n -> Seq Scan on lineitem (cost=34467.88 size=579166 width=44)\n\nincluding my printfs, which exist in the patch as well.\n\nSelectivity is now the expected 96% and the size estimate for the seq\nscan is much closer to correct.\n\nAgain, not tested with anything besides date, so caveat not-tested.\n\nHope this helps.\n\nErik\n\n----------------------[optimizer_fix.sh]------------------------\n\n#! /bin/sh\n# This is a shell archive, meaning:\n# 1. Remove everything above the #! /bin/sh line.\n# 2. Save the resulting text in a file.\n# 3. Execute the file with /bin/sh (not csh) to create:\n#\tselfuncs.c.diff\n#\tvacuum.c.diff\n# This archive created: Mon Mar 22 22:58:14 1999\nexport PATH; PATH=/bin:/usr/bin:$PATH\nif test -f 'selfuncs.c.diff'\nthen\n\techo shar: \"will not over-write existing file 'selfuncs.c.diff'\"\nelse\ncat << \\SHAR_EOF > 'selfuncs.c.diff'\n***\n/afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/611/src/backend/utils/adt\n/selfuncs.c\tThu Mar 11 23:59:35 1999\n---\n/afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/615/src/backend/utils/adt\n/selfuncs.c\tMon Mar 22 22:57:25 1999\n***************\n*** 32,37 ****\n--- 32,40 ----\n #include \"utils/lsyscache.h\"\t/* for get_oprrest() */\n #include \"catalog/pg_statistic.h\"\n \n+ #include \"catalog/pg_proc.h\" /* for Form_pg_proc */\n+ #include \"catalog/pg_type.h\" /* for Form_pg_type */\n+ \n /* N is not a valid var/constant or relation id */\n #define NONVALUE(N)\t\t((N) == -1)\n \n***************\n*** 103,110 ****\n \t\t\t\tbottom;\n \n \tresult = (float64) palloc(sizeof(float64data));\n! \tif (NONVALUE(attno) || NONVALUE(relid))\n \t\t*result = 1.0 / 3;\n \telse\n \t{\n \t\t/* XXX\t\t\tval = atol(value); */\n--- 106,114 ----\n \t\t\t\tbottom;\n \n \tresult = (float64) palloc(sizeof(float64data));\n! \tif (NONVALUE(attno) || NONVALUE(relid)) {\n \t\t*result = 1.0 / 3;\n+ \t}\n \telse\n \t{\n \t\t/* XXX\t\t\tval = atol(value); */\n***************\n*** 117,130 ****\n \t\t}\n \t\thigh = atol(highchar);\n \t\tlow = atol(lowchar);\n \t\tif ((flag & SEL_RIGHT && val < low) ||\n \t\t\t(!(flag & SEL_RIGHT) && val > high))\n \t\t{\n \t\t\tfloat32data nvals;\n \n \t\t\tnvals = getattdisbursion(relid, (int) attno);\n! \t\t\tif (nvals == 0)\n \t\t\t\t*result = 1.0 / 3.0;\n \t\t\telse\n \t\t\t{\n \t\t\t\t*result = 3.0 * (float64data) nvals;\n--- 121,136 ----\n \t\t}\n \t\thigh = atol(highchar);\n \t\tlow = atol(lowchar);\n+ \t\tprintf(\"(selfuncs) intltsel() high %d low %d val %d\\n\",high,low,val);\n \t\tif ((flag & SEL_RIGHT && val < low) ||\n \t\t\t(!(flag & SEL_RIGHT) && val > high))\n \t\t{\n \t\t\tfloat32data nvals;\n \n \t\t\tnvals = getattdisbursion(relid, (int) attno);\n! 
\t\t\tif (nvals == 0) {\n \t\t\t\t*result = 1.0 / 3.0;\n+ \t\t\t}\n \t\t\telse\n \t\t\t{\n \t\t\t\t*result = 3.0 * (float64data) nvals;\n***************\n*** 336,341 ****\n--- 342,353 ----\n {\n \tRelation\trel;\n \tHeapScanDesc scan;\n+ \t/* this assumes there is only one row in the statistics table for any\nparticular */\n+ \t/* relid, attnum pair - could be more complicated if staop is also\nused. */\n+ \t/* at the moment, if there are multiple rows, this code ends up\npicking the */\n+ \t/* \"first\" one \n - er1p */\n+ \t/* the actual \"ignoring\" is done in the call to heap_beginscan()\nbelow, where */\n+ \t/* we only mention 2 of the 3 keys in this array \n - er1p */\n \tstatic ScanKeyData key[3] = {\n \t\t{0, Anum_pg_statistic_starelid, F_OIDEQ, {0, 0, F_OIDEQ}},\n \t\t{0, Anum_pg_statistic_staattnum, F_INT2EQ, {0, 0, F_INT2EQ}},\n***************\n*** 344,355 ****\n \tbool\t\tisnull;\n \tHeapTuple\ttuple;\n \n \trel = heap_openr(StatisticRelationName);\n \n \tkey[0].sk_argument = ObjectIdGetDatum(relid);\n \tkey[1].sk_argument = Int16GetDatum((int16) attnum);\n \tkey[2].sk_argument = ObjectIdGetDatum(opid);\n! \tscan = heap_beginscan(rel, 0, SnapshotNow, 3, key);\n \ttuple = heap_getnext(scan, 0);\n \tif (!HeapTupleIsValid(tuple))\n \t{\n--- 356,377 ----\n \tbool\t\tisnull;\n \tHeapTuple\ttuple;\n \n+ \tHeapTuple tup;\n+ \tForm_pg_proc proc;\n+ \tForm_pg_type typ;\n+ \tOid which_op;\n+ \tOid which_type;\n+ \tint32 low_value;\n+ \tint32 high_value;\n+ \n \trel = heap_openr(StatisticRelationName);\n \n \tkey[0].sk_argument = ObjectIdGetDatum(relid);\n \tkey[1].sk_argument = Int16GetDatum((int16) attnum);\n \tkey[2].sk_argument = ObjectIdGetDatum(opid);\n! \tprintf(\"(selfuncs) gethilokey() obj %d attr %d opid %d (ignored)\\n\",\n! \t key[0].sk_argument,key[1].sk_argument,key[2].sk_argument);\n! \tscan = heap_beginscan(rel, 0, SnapshotNow, 2, key);\n \ttuple = heap_getnext(scan, 0);\n \tif (!HeapTupleIsValid(tuple))\n \t{\n***************\n*** 376,383 ****\n--- 398,461 ----\n \t\t\t\t\t\t\t\t&isnull));\n \tif (isnull)\n \t\telog(DEBUG, \"gethilokey: low key is null\");\n+ \n \theap_endscan(scan);\n \theap_close(rel);\n+ \n+ \t/* now we deal with type conversion issues \n */\n+ \t/* when intltsel() calls this routine (who knows what other callers\nmight do) */\n+ \t/* it assumes that it can call atol() on the strings and then use\ninteger */\n+ \t/* comparison from there. what we are going to do here, then, is try\nto use */\n+ \t/* the type information from Anum_pg_statistic_staop to convert the\nhigh */\n+ \t/* and low values \n- er1p */\n+ \n+ \t/* WARNING: this code has only been tested with the date type and has\nNOT */\n+ \t/* been regression tested. consider it \"sample\" code of what might\nbe the */\n+ \t/* right kind of thing to do \n- er1p */\n+ \n+ \t/* get the 'op' from pg_statistic and look it up in pg_proc */\n+ \twhich_op = heap_getattr(tuple,\n+ \t\t\t\tAnum_pg_statistic_staop,\n+ \t\t\t\tRelationGetDescr(rel),\n+ \t\t\t\t&isnull);\n+ \tif (InvalidOid == which_op) {\n+ \t /* ignore all this stuff, try conversion only if we have a valid staop */\n+ \t /* note that there is an accompanying change to 'vacuum analyze' that */\n+ \t /* gets this set to something useful. 
*/\n+ \t} else {\n+ \t /* staop looks valid, so let's see what we can do about conversion */\n+ \t tup = SearchSysCacheTuple(PROOID, ObjectIdGetDatum(which_op), 0, 0, 0);\n+ \t if (!HeapTupleIsValid(tup)) {\n+ \t elog(ERROR, \"selfuncs: unable to find op in pg_proc %d\", which_op);\n+ \t }\n+ \t printf(\"(selfuncs) gethilokey() found op %d in pg_proc\\n\",which_op);\n+ \t \n+ \t /* use that to determine the type of stahikey and stalokey via pg_type */\n+ \t proc = (Form_pg_proc) GETSTRUCT(tup);\n+ \t which_type = proc->proargtypes[0]; /* XXX - use left and right\nseparately? */\n+ \t tup = SearchSysCacheTuple(TYPOID, ObjectIdGetDatum(which_type), 0, 0, 0);\n+ \t if (!HeapTupleIsValid(tup)) {\n+ \t elog(ERROR, \"selfuncs: unable to find type in pg_type %d\", which_type);\n+ \t }\n+ \t printf(\"(selfuncs) gethilokey() found type %d in pg_type\\n\",which_type);\n+ \t \n+ \t /* and use that type to get the conversion function to int4 */\n+ \t typ = (Form_pg_type) GETSTRUCT(tup);\n+ \t printf(\"(selfuncs) gethilokey() going to use %d to convert type\n%d\\n\",typ->typinput,which_type);\n+ \t \n+ \t /* and convert the low and high strings */\n+ \t low_value = (int32) fmgr(typ->typinput, *low, -1);\n+ \t high_value = (int32) fmgr(typ->typinput, *high, -1);\n+ \t printf(\"(selfuncs) gethilokey() have low %d high\n%d\\n\",low_value,high_value);\n+ \t \n+ \t /* now we have int4's, which we put back into strings because\nthat's what out */\n+ \t /* callers (intltsel() at least) expect \n - er1p */\n+ \t pfree(*low); pfree(*high); /* let's not leak the old strings */\n+ \t *low = int4out(low_value);\n+ \t *high = int4out(high_value);\n+ \n+ \t /* XXX - this probably leaks the two tups we got from\nSearchSysCacheTuple() - er1p */\n+ \t}\n }\n \n float64\nSHAR_EOF\nfi\nif test -f 'vacuum.c.diff'\nthen\n\techo shar: \"will not over-write existing file 'vacuum.c.diff'\"\nelse\ncat << \\SHAR_EOF > 'vacuum.c.diff'\n***\n/afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/611/src/backend/commands/\nvacuum.c\tThu Mar 11 23:59:09 1999\n---\n/afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/615/src/backend/commands/\nvacuum.c\tMon Mar 22 21:23:15 1999\n***************\n*** 1842,1848 ****\n \t\t\t\t\ti = 0;\n \t\t\t\t\tvalues[i++] = (Datum) relid;\t\t/* 1 */\n \t\t\t\t\tvalues[i++] = (Datum) attp->attnum; /* 2 */\n! \t\t\t\t\tvalues[i++] = (Datum) InvalidOid;\t/* 3 */\n \t\t\t\t\tfmgr_info(stats->outfunc, &out_function);\n \t\t\t\t\tout_string = (*fmgr_faddr(&out_function)) (stats->min,\nstats->attr->atttypid);\n \t\t\t\t\tvalues[i++] = (Datum) fmgr(F_TEXTIN, out_string);\n--- 1842,1848 ----\n \t\t\t\t\ti = 0;\n \t\t\t\t\tvalues[i++] = (Datum) relid;\t\t/* 1 */\n \t\t\t\t\tvalues[i++] = (Datum) attp->attnum; /* 2 */\n! \t\t\t\t\tvalues[i++] = (Datum) stats->f_cmplt.fn_oid;\t/* 3 */ /* get the\n'<' oid, instead of 'invalid' - er1p */\n \t\t\t\t\tfmgr_info(stats->outfunc, &out_function);\n \t\t\t\t\tout_string = (*fmgr_faddr(&out_function)) (stats->min,\nstats->attr->atttypid);\n \t\t\t\t\tvalues[i++] = (Datum) fmgr(F_TEXTIN, out_string);\nSHAR_EOF\nfi\nexit 0\n#\tEnd of shell archive\n\n",
"msg_date": "Mon, 22 Mar 1999 23:14:55 -0500 (EST)",
"msg_from": "Erik Riedel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] optimizer and type question"
},
{
"msg_contents": "Erik, if you can, please stick around and keep digging into the code.\n\nI am working on fixing the memory allocation problems you had with\nexpressions right now.\n\nYou are obviously coming up with some good ideas. And with Tom Lane and\nI, you are also in Pennsylvania.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Mar 1999 00:37:46 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] optimizer and type question"
},
{
"msg_contents": "Erik Riedel <[email protected]> writes:\n> OK, building on your high-level explanation, I am attaching a patch that\n> attempts to do something \"better\" than the current code. Note that I\n> have only tested this with the date type and my particular query.\n\nGlad to see you working on this. I don't like the details of your\npatch too much though ;-). Here are some suggestions for making it\nbetter.\n\n1. I think just removing staop from the lookup in gethilokey is OK for\nnow, though I'm dubious about Bruce's thought that we could delete that\nfield entirely. As you observe, vacuum will not currently put more\nthan one tuple for a column into pg_statistic, so we can just do the\nlookup with relid and attno and leave it at that. But I think we ought\nto leave the field there, with the idea that vacuum might someday\ncompute more than one statistic for a data column. Fixing vacuum to\nput its sort op into the field is a good idea in the meantime.\n\n2. The type conversion you're doing in gethilokey is a mess; I think\nwhat you ought to make it do is simply the inbound conversion of the\nstring from pg_statistic into the internal representation for the\ncolumn's datatype, and return that value as a Datum. It also needs\na cleaner success/failure return convention --- this business with\n\"n\" return is ridiculously type-specific. Also, the best and easiest\nway to find the type to convert to is to look up the column type in\nthe info for the given relid, not search pg_proc with the staop value.\n(I'm not sure that will even work, since there are pg_proc entries\nwith wildcard argument types.)\n\n3. The atol() calls currently found in intltsel are a type-specific\ncheat on what is conceptually a two-step process:\n * Convert the string stored in pg_statistic back to the internal\n form for the column data type.\n * Generate a numeric representation of the data value that can be\n used as an estimate of the range of values in the table.\nThe second step is trivial for integers, which may obscure the fact\nthat there are two steps involved, but nonetheless there are. If\nyou think about applying selectivity logic to strings, say, it\nbecomes clear that the second step is a necessary component of the\nprocess. Furthermore, the second step must also be applied to the\nprobe value that's being passed into the selectivity operator.\n(The probe value is already in internal form, of course; but it is\nnot necessarily in a useful numeric form.)\n\nWe can do the first of these steps by applying the appropriate \"XXXin\"\nconversion function for the column data type, as you have done. The\ninteresting question is how to do the second one. A really clean\nsolution would require adding a column to pg_type that points to a\nfunction that will do the appropriate conversion. I'd be inclined to\nmake all of these functions return \"double\" (float8) and just have one\ntop-level selectivity routine for all data types that can use\nrange-based selectivity logic.\n\nWe could probably hack something together that would not use an explicit\nconversion function for each data type, but instead would rely on\ntype-specific assumptions inside the selectivity routines. We'd need many\nmore selectivity routines though (at least one for each of int, float4,\nfloat8, and text data types) so I'm not sure we'd really save any work\ncompared to doing it right.\n\nBTW, now that I look at this issue it's real clear that the selectivity\nentries in pg_operator are horribly broken. 
The intltsel/intgtsel\nselectivity routines are currently applied to 32 distinct data types:\n\nregression=> select distinct typname,oprleft from pg_operator, pg_type\nregression-> where pg_type.oid = oprleft\nregression-> and oprrest in (103,104);\ntypname |oprleft\n---------+-------\n_aclitem | 1034\nabstime | 702\nbool | 16\nbox | 603\nbpchar | 1042\nchar | 18\ncidr | 650\ncircle | 718\ndate | 1082\ndatetime | 1184\nfloat4 | 700\nfloat8 | 701\ninet | 869\nint2 | 21\nint4 | 23\nint8 | 20\nline | 628\nlseg | 601\nmacaddr | 829\nmoney | 790\nname | 19\nnumeric | 1700\noid | 26\noid8 | 30\npath | 602\npoint | 600\npolygon | 604\ntext | 25\ntime | 1083\ntimespan | 1186\ntimestamp| 1296\nvarchar | 1043\n(32 rows)\n\nmany of which are very obviously not compatible with integer for *any*\npurpose. It looks to me like a lot of data types were added to\npg_operator just by copy-and-paste, without paying attention to whether\nthe selectivity routines were actually correct for the data type.\n\nAs the code stands today, the bogus entries don't matter because\ngethilokey always fails, so we always get 1/3 as the selectivity\nestimate for any comparison operator (except = and != of course).\nI had actually noticed that fact and assumed that it was supposed\nto work that way :-(. But, clearly, there is code in here that\nis *trying* to be smarter.\n\nAs soon as we fix gethilokey so that it can succeed, we will start\ngetting essentially-random selectivity estimates for those data types\nthat aren't actually binary-compatible with integer. That will not do;\nwe have to do something about the issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Mar 1999 12:01:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] optimizer and type question "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> What we really need is some way to determine how far the requested value\n> is from the min/max values. With int, we just do (val-min)/(max-min). \n> That works, but how do we do that for types that don't support division.\n> Strings come to mind in this case.\n\nWhat I'm envisioning is that we still apply the (val-min)/(max-min)\nlogic, but apply it to numeric values that are produced in a\ntype-dependent way.\n\nFor ints and floats the conversion is trivial, of course.\n\nFor strings, the first thing that comes to mind is to return 0 for a\nnull string and the value of the first byte for a non-null string.\nThis would give you one-part-in-256 selectivity which is plenty good\nenough for what the selectivity code needs to do. (Actually, it's\nonly that good if the strings' first bytes are pretty well spread out.\nIf you have a table containing English words, for example, you might\nonly get about one part in 26 this way, since the first bytes will\nprobably only run from A to Z. Might be better to use the first two\ncharacters of the string to compute the selectivity representation.)\n\nIn general, you can apply this logic as long as you can come up with\nsome numerical approximation to the data type's sorting order. It\ndoesn't have to be exact.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Mar 1999 12:09:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] optimizer and type question "
},
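Tom's byte-mapping idea for strings fits in a few lines as well. A sketch under his stated assumptions (first bytes reasonably spread out; two bytes used, as he suggests, for finer than one-part-in-256 resolution):

```c
#include <stddef.h>

/* Map a string to a double that approximately preserves sort order:
 * 0 for an empty string, otherwise a value built from the first two
 * bytes.  Only the ordering of the results matters, since the caller
 * just interpolates between the mapped min and max. */
double
string_sel_key(const char *s)
{
    if (s == NULL || s[0] == '\0')
        return 0.0;
    return (unsigned char) s[0] * 256.0
         + (s[1] != '\0' ? (unsigned char) s[1] : 0);
}
```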
{
"msg_contents": "\n[The changes discussed below are more longer-term issues than my two\nprevious posts, which is why I am not attaching the patch for these\nchanges. My hope is that the description of this experiment will\nserve as input to whoever works on the optimizer in the future.\n\nThis should be considered a long-term \"think about\" and \"probably\nignore as too specific\" optimization. You have been warned. I just\nwanted to brain-dump this, in the chance that it is useful somehow in\nthe future. - Erik ]\n\nContext: cost and size estimates inside the optimizer, particularly\nAggregate/Group nodes.\n\nBasic idea is as follows: we look at the min and max values for all\nthe GROUP BY attributes and if they have a small dynamic range then we\n_know_ the result of the aggregation will have a small number of rows\n(bounded by the range of these attributes). This information is then\nused to make better cost estimates during optimization.\n\ne.g. if we have two attributes 'a' and 'b' and we know (from\npg_statistics) that all 'a' values are in the range 25 to 35 and all\n'b' values are in the range 75 to 100 then a GROUP BY on columns 'a'\nand 'b' cannot result in more than 250 [ (100 - 75) * (35 - 25) = 25 *\n10 ] rows, no matter how many rows are input. We might be wrong if\nthe statistics are out of date, but we are just doing size/cost\nestimation, so that's ok. Since aggregations are often very selective\n(i.e. much smaller outputs than inputs), this can make a big\ndifference further up the tree.\n\nGory details are as follows:\n\nTable = lineitem\n+------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+------------------------+----------------------------------+-------+\n| l_orderkey | int4 not null | 4 |\n| l_partkey | int4 not null | 4 |\n| l_suppkey | int4 not null | 4 |\n| l_linenumber | int4 not null | 4 |\n| l_quantity | float4 not null | 4 |\n| l_extendedprice | float4 not null | 4 |\n| l_discount | float4 not null | 4 |\n| l_tax | float4 not null | 4 |\n| l_returnflag | char() not null | 1 |\n| l_linestatus | char() not null | 1 |\n| l_shipdate | date | 4 |\n| l_commitdate | date | 4 |\n| l_receiptdate | date | 4 |\n| l_shipinstruct | char() not null | 25 |\n| l_shipmode | char() not null | 10 |\n| l_comment | char() not null | 44 |\n+------------------------+----------------------------------+-------+\nIndex: lineitem_index_\n\n--\n-- Query 1\n--\nselect l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, \nsum(l_extendedprice) as sum_base_price, \nsum(l_extendedprice*(1-l_discount)) as sum_disc_price, \nsum(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge, \navg(l_quantity) as avg_qty, avg(l_extendedprice) as avg_price, \navg(l_discount) as avg_disc, count(*) as count_order \nfrom lineitem \nwhere l_shipdate <= '1998-09-02'\ngroup by l_returnflag, l_linestatus \norder by l_returnflag, l_linestatus;\n\nFor our favorite query, the current code (with my selectivity fixes of\nyesterday, but they aren't crucial to this discussion) produces the\noriginal plan:\n\nSort (cost=35623.88 size=0 width=0)\n -> Aggregate (cost=35623.88 size=0 width=0)\n -> Group (cost=35623.88 size=0 width=0)\n -> Sort (cost=35623.88 size=0 width=0)\n -> Seq Scan on lineitem (cost=35623.88 size=579166 width=60)\n\n[Problem 1 - no cost estimates are done for sort nodes, and costs are\nnot passed up the query tree for Sorts, Groups, or Aggregates]\n\nNow, let's say we have it actually add a cost for the sorts and pass\nthe size and width up the 
tree as we go:\n\nSort (cost=713817.69 size=579166 width=60)\n -> Aggregate (cost=374720.78 size=579166 width=60)\n -> Group (cost=374720.78 size=579166 width=60)\n -> Sort (cost=374720.78 size=579166 width=60)\n -> Seq Scan on lineitem (cost=35623.88 size=579166 width=60)\n\nwe do this by adding several bits of code. For sort nodes near line\n470 of planmain.c, something like:\n\n sortplan->plan.cost = subplan->cost + \n cost_sort(XXX, subplan->plan_size, subplan->plan_width, false);\n\n /* sorting doesn't change size or width */\n sortplan->plan.plan_size = subplan->plan_size;\n sortplan->plan.plan_width = subplan->plan_width;\n\nand for group nodes near line 480 of planmain.c, something like:\n\n /* pretty easy, right? the size and width will not change */\n /* from the sort node, so we can just carry those along */\n /* and group has (practically) no cost, so just keep that too */\n grpplan->plan.cost = sortplan->plan.cost;\n grpplan->plan.plan_size = sortplan->plan.plan_size;\n grpplan->plan.plan_width = sortplan->plan.plan_width;\n\nand something similar for the other sort nodes near line 410 of\nplanner.c, and a similar pass along of the cost values from the group\nnode through the aggregation node near line 260 of planner.c\n\nAnd, of course, we'd have to check hash nodes, join nodes and all the\nothers to do the right thing, but I only looked at the ones necessary\nfor my query.\n\nAdding all those parts, we now have the plan:\n\nSort (cost=713817.69 size=579166 width=60)\n -> Aggregate (cost=374720.78 size=579166 width=60)\n -> Group (cost=374720.78 size=579166 width=60)\n -> Sort (cost=374720.78 size=579166 width=60)\n -> Seq Scan on lineitem (cost=35623.88 size=579166 width=60)\n\nwhich actually includes the cost of the sorts and passes the size and\nwidth up the tree.\n\n[End discussion of Problem 1 - no sort costs or cost-passing]\n\n[Craziness starts here]\n\n[Problem 2 - the Aggregate node is way over-estimated by assuming the\noutput is as large as the input, but how can you do better? ]\n\nIt turns out the above plan estimate isn't very satisfactory, because\nthe Aggregate actually significantly reduces the size and the\nestimation of the final Sort is way higher than it needs to be. But\nhow can we estimate what happens at an Aggregation node?\n\nFor general types in the GROUP BY, there is probably not much we can\ndo, but if we can somehow \"know\" that the GROUP BY columns have a\nlimited dynamic range - e.g. the char(1) fields in my lineitem, or\ninteger fields with small ranges - then we should be able to do some\n\"back of the envelope\" estimation and get a better idea of costs\nhigher up in the plan. A quick look shows that this would be\napplicable in 5 of the 17 queries from TPC-D that I am looking at, not\nhuge applicability, but not single-query specific either. What are\nthe \"normal\" or \"expected\" types and columns for all the GROUP BYs out\nthere - I don't know.\n\nAnyway, so we do the estimation described above and get to the much\nmore accurate plan:\n\nSort (cost=374734.87 size=153 width=60)\n -> Aggregate (cost=374720.78 size=153 width=60)\n -> Group (cost=374720.78 size=579166 width=60)\n -> Sort (cost=374720.78 size=579166 width=60)\n -> Seq Scan on lineitem (cost=35623.88 size=579166 width=60)\n\nthis works because pg_statistic knows that l_returnflag has at most 17\npossible values (min 'A' to max 'R') and l_linestatus has at most 9\npossible values (min 'F' to max 'O'). 
In fact, l_returnflag and\nl_linestatus are categorical and have only 3 possible values each, but\nthe estimate of 153 (17 * 9) is much better than the 579166 that the\nnode must otherwise assume, and much closer to the actual result,\nwhich has only 4 rows.\n\nI get the above plan by further modification after line 260 of\nplanner.c. I am not attaching the patch because it is ugly in spots\nand I just did \"proof of concept\"; full implementation of this idea\nwould require much more thinking about generality.\n\nBasically I go through the target list of the agg node and identify\nall the Var expressions. I then figure out what columns in the\noriginal relations these refer to. I then look up the range of the\nattributes in each of these columns (using the min and max values for\nthe attributes obtained via gethilokey() as discussed in my last post\nand doing the \"strings map to ASCII value of their first letter\" that\nTom suggested). I multiply all these values together (i.e. the cross\nproduct of all possible combinations that the GROUP BY on all n\ncolumns could produce - call this x). The size of the agg result is\nthen the smaller of this value or the input size (can't be bigger than\nthe input, and if there are only x unique combinations of the GROUP BY\nattributes, then there will never be more than x rows in the output of\nthe GROUP BY).\n\nI do this very conservatively: if I can't get statistics for one of\nthe columns, then I forget it and go with the old estimate (or could\ngo with an estimate of 0, if one were feeling optimistic. It is not\nclear what a \"good\" default estimate is here; maybe the 1/3 that is\nused for selectivities would be as good as anything else).\n\n[End discussion of Problem 2 - aggregate/group estimation]\n\nAs with my previous posts, this is most likely not a general solution,\nit's just an idea that works (very well) for the query I am looking\nat, and has some general applicability. I am sure that the above\nignores a number of \"bigger picture\" issues, but it does help the\nparticular query I care about.\n\nAlso note that none of this actually speeds up even my query, it only\nmakes the optimizer estimate much closer to the actual query cost\n(which is what I care about for the work I am doing).\n\nMaybe this will be of help in any future work on the optimizer. Maybe\nit is simply the rantings of a lunatic.\n\nEnjoy.\n\nErik Riedel\nCarnegie Mellon University\nwww.cs.cmu.edu/~riedel\n\n",
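A minimal sketch of the estimation step Erik describes, with invented names (agg_size_estimate and the ranges[] array are illustrative only; the real hook would sit near line 260 of planner.c and pull min/max from pg_statistic via gethilokey()):

    /*
     * ranges[i] is the estimated number of distinct values for the i-th
     * GROUP BY column, derived from the pg_statistic min/max; <= 0 means
     * no statistics were available.  Double math avoids integer overflow
     * when many wide-range columns are multiplied together.
     */
    double
    agg_size_estimate(double input_size, int ngroupcols, double *ranges)
    {
        double      combos = 1.0;
        int         i;

        for (i = 0; i < ngroupcols; i++)
        {
            if (ranges[i] <= 0)
                return input_size;  /* be conservative: keep the old estimate */
            combos *= ranges[i];
        }

        /* the aggregate can never emit more rows than it reads */
        return (combos < input_size) ? combos : input_size;
    }

Fed the per-column ranges for l_returnflag and l_linestatus, this reproduces the 153-row cap in the plan above.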
"msg_date": "Tue, 23 Mar 1999 22:12:56 -0500 (EST)",
"msg_from": "Erik Riedel <[email protected]>",
"msg_from_op": true,
"msg_subject": "longer-term optimizer musings"
},
{
"msg_contents": "> As with my previous posts, this is most likely not a general solution,\n> it's just an idea that works (very well) for the query I am looking\n> at, and has some general applicability. I am sure that the above\n> ignores a number of \"bigger picture\" issues, but it does help the\n> particular query I care about.\n> \n> Also note that none of this actually speeds up even my query, it only\n> makes the optimizer estimate much closer to the actual query cost\n> (which is what I care about for the work I am doing).\n> \n> Maybe this will be of help in any future work on the optimizer. Maybe\n> it is simply the rantings of a lunatic.\n\nInteresting. The problem I see is that trying to do a char(20) column\nwith min(A) and max(B) can have 256^19 possible unique values from A to\nB, so it kind if kills many general cases. Floats have the same\nproblem.\n\nA nice general fix would be to assume GROUP BY/AGG returns only 10% of\nthe existing rows. I don't even know if an Aggregate without a group by\nknows it only returns one row. Oops, I guess not:\n\n\ttest=> explain select max(relpages) from pg_class;\n\tNOTICE: QUERY PLAN:\n\t\n\tAggregate (cost=2.58 size=0 width=0)\n\t -> Seq Scan on pg_class (cost=2.58 size=48 width=4)\n\t\nBasically, there are some major issues with this optimizer. Only in pre\n6.5 have we really dug into it and cleaned up some glaring problems. \nProblems that were so bad, if I had know how bad they were, I would\ncertainly have started digging in there sooner.\n\nWe have even general cases that are not being handled as well as they\nshould be. We just fixed a bug where \"col = -3\" was never using an\nindex, because -3 was being parsed as prefix \"-\" with an operand of 3,\nand the index code can only handle constants.\n\nYes, we have some major things that need cleaning. I have updated\noptimizer/README to better explain what is happening in there, and have\nrenamed many of the structures/variables to be clearer. I hope it\nhelps someone, someday.\n\nSo I guess I am saying that your ideas are good, but we need to walk\nbefore we can run with this optimizer.\n\nI am not saying the optimizer is terrible, just that it is complex, and\nhas not had the kind of code maintenance it needs.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Mar 1999 23:08:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] longer-term optimizer musings"
},
{
"msg_contents": "> Interesting. The problem I see is that trying to do a char(20) column\n> with min(A) and max(B) can have 256^19 possible unique values from A to\n> B, so it kind if kills many general cases. Floats have the same\n> problem.\n> \n> A nice general fix would be to assume GROUP BY/AGG returns only 10% of\n> the existing rows. I don't even know if an Aggregate without a group by\n> knows it only returns one row. Oops, I guess not:\n> \n> \ttest=> explain select max(relpages) from pg_class;\n> \tNOTICE: QUERY PLAN:\n> \t\n> \tAggregate (cost=2.58 size=0 width=0)\n> \t -> Seq Scan on pg_class (cost=2.58 size=48 width=4)\n> \t\n> Basically, there are some major issues with this optimizer. Only in pre\n> 6.5 have we really dug into it and cleaned up some glaring problems. \n> Problems that were so bad, if I had know how bad they were, I would\n> certainly have started digging in there sooner.\n> \n> We have even general cases that are not being handled as well as they\n> should be. We just fixed a bug where \"col = -3\" was never using an\n> index, because -3 was being parsed as prefix \"-\" with an operand of 3,\n> and the index code can only handle constants.\n> \n> Yes, we have some major things that need cleaning. I have updated\n> optimizer/README to better explain what is happening in there, and have\n> renamed many of the structures/variables to be clearer. I hope it\n> helps someone, someday.\n> \n> So I guess I am saying that your ideas are good, but we need to walk\n> before we can run with this optimizer.\n> \n> I am not saying the optimizer is terrible, just that it is complex, and\n> has not had the kind of code maintenance it needs.\n\nAlso, let me not discourage you.\n\nWe are just learning about the optimizer, we welcome any ideas that you\nmay have. I also enjoy discussing the issues, becuase it give me a\nsounding-board for future coding.\n\nKeep it up.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Mar 1999 23:13:24 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] longer-term optimizer musings"
},
{
"msg_contents": "\n> Interesting. The problem I see is that trying to do a char(20) column\n> with min(A) and max(B) can have 256^19 possible unique values from A to\n> B, so it kind if kills many general cases. Floats have the same\n> problem.\n> \nRight, in most general cases, there isn't much you can do.\n\nAlthough, if this seemed like an important thing, one could imagine an\nextension to 'vacuum analyze' and pg_statistic that tried to track the\nnumber of unique values while it finds the min and max. Maybe tracking\nsome fixed number (10?) of unique attr values and stop searching once it\nexceeds 10 different values (or maybe some tiny fraction of the tuples\nin the relation, whatever gives a decent balance of memory and CPU at\nanalyze time). Basically to find out if it might be .01% instead of the\n10% default you suggest below.\n\nThis would work for a database that tracks all the CDs owned by \"Bob\"\nand \"Alice\" even with char(20) first names. For floats, it wouldn't be\nvery good for prices at Tiffany's, but should work pretty well for the\nEverything's $1 store.\n\n> A nice general fix would be to assume GROUP BY/AGG returns only 10% of\n> the existing rows. I don't even know if an Aggregate without a group by\n> knows it only returns one row. Oops, I guess not:\n> \n> test=> explain select max(relpages) from pg_class;\n> NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=2.58 size=0 width=0)\n> -> Seq Scan on pg_class (cost=2.58 size=48 width=4)\n> \nYup, this would be easy to add (both the 10% and 1 for non-group aggs). \nThe current code just passes along the cost and zeros the size and width\nin all Sort, Group, and Aggregate nodes (this was the issue flagged as\nProblem 1 in my message - and I tried to give line numbers where that\nwould have to be fixed). Note that cost_sort() seems to work reasonably\nenough, but has this non-obvious \"sortkeys\" argument that it does\nnothing with.\n\n> So I guess I am saying that your ideas are good, but we need to walk\n> before we can run with this optimizer.\n> \nUnderstood. I am not discouraged and will continue throwing these\nthings out as I see them and think I have a reasonable explanation.\n\nErik\n\n",
"msg_date": "Wed, 24 Mar 1999 00:13:13 -0500 (EST)",
"msg_from": "Erik Riedel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] longer-term optimizer musings"
},
{
"msg_contents": "Erik Riedel <[email protected]> writes:\n> This should be considered a long-term \"think about\" and \"probably\n> ignore as too specific\" optimization.\n\nNot at all --- it looks like useful stuff to work on.\n\nAs Bruce pointed out, the current generation of Postgres developers\ndon't understand the optimizer very well. (Bruce and I have both been\ndigging into it a little, but we certainly welcome anyone else who\nwants to study it.) The optimizer has been suffering from software rot\nfor several releases now, and as far as I can tell there were a lot of\nthings that it never really did right in the first place. So take what\nyou see with a grain of salt.\n\n> [Problem 1 - no cost estimates are done for sort nodes, and costs are\n> not passed up the query tree for Sorts, Groups, or Aggregates]\n\nThese things probably need to be fixed. I have noticed that there are\nplaces where the code does not bother to fill in estimates, for example\nin a hash join the hash subnode never gets filled in, but it probably\ndoesn't matter as long as the top hash node does get filled in. The\nimportant thing is to propagate reasonable estimates upwards.\n\n> [Problem 2 - the Aggregate node is way over-estimated by assuming the\n> output is as large as the input, but how can you do better? ]\n\nAn excellent point. Bruce's idea of a default 10% estimate seems\nreasonable to me (and of course, recognize the non-group-by case).\n\n> [ get column min/max values and ] multiply all these values together\n\nYou'd have to watch out for integer overflow in this calculation ---\nwould be safer to do it in floating point I think. A more serious\nissue is how do you know what the granularity of the type is. For\nexample, with a float8 column the min and max values might be 1 and 4,\nbut that doesn't entitle you to assume that there are only 4 values.\nYou could really only apply this optimization to int, bool, and char(1)\ncolumns, I think. Of course, there are plenty of those out there, so\nit might still be worth doing.\n\n> Also note that none of this actually speeds up even my query, it only\n> makes the optimizer estimate much closer to the actual query cost\n> (which is what I care about for the work I am doing).\n\nWell, that could result in a more intelligently chosen plan further up\nthe tree, so it *could* lead to a faster query. However this would\nonly be true if there were important choices to be made at higher tree\nlevels. I suppose you would have to be looking at a subselect involving\nGROUP BY for this to really make much difference in practice.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Mar 1999 10:31:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] longer-term optimizer musings "
},
{
"msg_contents": "\n> > Also note that none of this actually speeds up even my query, it only\n> > makes the optimizer estimate much closer to the actual query cost\n> > (which is what I care about for the work I am doing).\n> \n> Well, that could result in a more intelligently chosen plan further up\n> the tree, so it *could* lead to a faster query. However this would\n> only be true if there were important choices to be made at higher tree\n> levels. I suppose you would have to be looking at a subselect involving\n> GROUP BY for this to really make much difference in practice.\n> \nRight, if there are still choices higher up. In particular, the case\nthat I was looking at was the possible combination of Aggregation and\nSort nodes that I'd mentioned before. Having the proper estimate at\nthat point would tell you if it were worthwhile doing the aggregation\n(or duplicate elimination) while sorting. Which could save lots of\nmemory and writing/reading of run files for out-of-core mergesort.\n\nWhile I'm at it, I should note that this combination of aggr and sort is\nnot my invention by a long shot. The paper:\n\n\"Fast Algorithms for Universal Quantification in Large Databases\"\n\nreferenced at:\n\nhttp://www.cse.ogi.edu/DISC/projects/ereq/papers/graefe-papers.html\n\nclaims this as the \"obvious optimization\" and provides pointers to the\n(by now ancient) papers that discuss both this and combination of\naggregation and hashing, which should be even cheaper in general. \nSection 2.2, page 10.\n\nI guess those things just never found their way into Stonebraker's\nversion of the code. Maybe they are of interest in the future.\n\nErik\n\n",
"msg_date": "Wed, 24 Mar 1999 14:45:52 -0500 (EST)",
"msg_from": "Erik Riedel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] longer-term optimizer musings"
}
] |
[
{
"msg_contents": "\nThe docs say that the text type is the \"Best choice\" of the character data\ntypes. It was always my understanding that the text type was stored else-\nwhere in the database and a pointer to it was stored in the actual table.\nDoes PostgreSQL do this or is a more efficient way been found? It seems\nthat it'd be too expensive to store it elsewhere. I'm making some changes\nto a table and wanted to optomize it (varchar vs text).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 22 Mar 1999 18:41:22 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "to text or not to text"
},
{
"msg_contents": "> \n> The docs say that the text type is the \"Best choice\" of the character data\n> types. It was always my understanding that the text type was stored else-\n> where in the database and a pointer to it was stored in the actual table.\n> Does PostgreSQL do this or is a more efficient way been found? It seems\n> that it'd be too expensive to store it elsewhere. I'm making some changes\n> to a table and wanted to optomize it (varchar vs text).\n\nInternally, text is stored just like char(). Only large objects are\nstored outside the database.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Mar 1999 18:54:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] to text or not to text"
}
] |
[
{
"msg_contents": "\nI notice you used portals for vacuum, rather than a separate memory\ncontext. Can I ask why?\n\nI am considering creating an expression portal or memory context to\nprevent the memory leaks from the utils/adt functions.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Mar 1999 21:51:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "portals vs. memory contexts"
},
{
"msg_contents": "> \n> I notice you used portals for vacuum, rather than a separate memory\n> context. Can I ask why?\n> \n> I am considering creating an expression portal or memory context to\n> prevent the memory leaks from the utils/adt functions.\n> \n\nI am also confused about varaible portal memory vs heap portal memory.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Mar 1999 22:09:45 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] portals vs. memory contexts"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> I notice you used portals for vacuum, rather than a separate memory\n> context. Can I ask why?\n\nNot me really -:)\n /*\n * Create a portal for safe memory across transctions. We need to\n\nVacuum uses separate transaction for each of relations to be\nvacuumed. VACPNAME is special portal name that is not cleaned\nat commit/abort.\n\n> \n> I am considering creating an expression portal or memory context to\n> prevent the memory leaks from the utils/adt functions.\n\nWill you try to fix problems with WHERE a = lower(b) ?\n\nVadim\n",
"msg_date": "Tue, 23 Mar 1999 10:17:09 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: portals vs. memory contexts"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > I notice you used portals for vacuum, rather than a separate memory\n> > context. Can I ask why?\n> >\n> > I am considering creating an expression portal or memory context to\n> > prevent the memory leaks from the utils/adt functions.\n> >\n> \n> I am also confused about varaible portal memory vs heap portal memory.\n\nportalmem.c:\n\n * Node\n * |\n * MemoryContext___\n * / \\\n * GlobalMemory PortalMemoryContext\n * / \\\n * PortalVariableMemory PortalHeapMemory\n *\n * Flushed at Flushed at Checkpoints\n * Transaction Portal\n * Commit Close\n *\n * GlobalMemory n n n\n * PortalVariableMemory n y n\n * PortalHeapMemory y y y *\n\nVadim\n",
"msg_date": "Tue, 23 Mar 1999 10:33:44 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] portals vs. memory contexts"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > I notice you used portals for vacuum, rather than a separate memory\n> > context. Can I ask why?\n> \n> Not me really -:)\n> /*\n> * Create a portal for safe memory across transctions. We need to\n> \n> Vacuum uses separate transaction for each of relations to be\n> vacuumed. VACPNAME is special portal name that is not cleaned\n> at commit/abort.\n> \n> > \n> > I am considering creating an expression portal or memory context to\n> > prevent the memory leaks from the utils/adt functions.\n> \n> Will you try to fix problems with WHERE a = lower(b) ?\n\nYes, this will fix that too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Mar 1999 23:27:29 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: portals vs. memory contexts"
},
{
"msg_contents": "> portalmem.c:\n> \n> * Node\n> * |\n> * MemoryContext___\n> * / \\\n> * GlobalMemory PortalMemoryContext\n> * / \\\n> * PortalVariableMemory PortalHeapMemory\n> *\n> * Flushed at Flushed at Checkpoints\n> * Transaction Portal\n> * Commit Close\n> *\n> * GlobalMemory n n n\n> * PortalVariableMemory n y n\n> * PortalHeapMemory y y y *\n\nYes, I saw that. Is that the only difference?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Mar 1999 23:28:10 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] portals vs. memory contexts"
}
] |
[
{
"msg_contents": "\nThis afternoon CVS download.\n\nTable created via\ncreate table\n customer (\n uid varchar(30) primary key,\n id varchar(30) unique,\n name_first varchar(30) not null,\n name_middle varchar(30),\n name_last varchar(30) not null,\n company varchar(80),\n address1 varchar(50) not null,\n address2 varchar(50),\n city varchar(30) not null,\n state char(2),\n country varchar(80),\n zip int4,\n phone_work varchar(12),\n phone_home varchar(12),\n phone_fax varchar(12),\n email varchar(30),\n date_entered date not null,\n billing_terms varchar(15) not null,\n confirmation_method varchar(10)\n );\n\n\n\n\\d reports\n\nobe=> \\d customer\n\n\nTable = customer\n+----------------------------------+----------------------------------+-------+\n| Field | Type |Length|\n+----------------------------------+----------------------------------+-------+\n| uid | varchar() not null |30 |\n| id | varchar() |30 |\n| name_first | varchar() not null |30 |\n| name_middle | varchar() |30 |\n| name_last | varchar() not null |30 |\n| company | varchar() |80 |\n| address1 | varchar() not null |50 |\n| address2 | varchar() |50 |\n| city | varchar() not null |30 |\n| date_entered | date not null |4 |\n+----------------------------------+----------------------------------+-------+\nIndices: customer_id_key\n customer_pkey\n\n\n\nBut select * shows the columns exist\n\nobe=> select * from customer;\nuid|id|name_first|name_middle|name_last|company|address1|address2|city|state|country|zip|phone_work|phone_home|phone_fax|\n---+--+----------+-----------+---------+-------+--------+--------+----+-----+-------+---+----------+----------+---------+\nemail|date_entered|billing_terms|confirmation_method\n-----+------------+-------------+-------------------\n(0 rows)\n\n\nI deleted my data dir and started over with initdb with the same results.\nAll of the tables where created with a script that was not generated by\npgdump.\n\nSystem tables and views seem to be affected as well\n\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\nJames Thompson 138 Cardwell Hall Manhattan, Ks 66506 785-532-0561 \nKansas State University Department of Mathematics\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\n\n\n",
"msg_date": "Mon, 22 Mar 1999 21:29:03 -0600 (CST)",
"msg_from": "James Thompson <[email protected]>",
"msg_from_op": true,
"msg_subject": "CVS 3-22-99 \\d broken?"
},
{
"msg_contents": "James Thompson <[email protected]> writes:\n> [ psql's \\d not working right ]\n\nHmm, it works OK for me on sources from Sunday --- and a quick check\nshows no interesting changes since then. Either you've found a\nplatform-specific bug, or you didn't rebuild correctly after cvs update.\n\nDoing a partial rebuild after an update is risky because the Postgres\nmakefiles are sloppy about declaring all dependencies. (You can\nimprove matters by doing \"make depend\", if you are using gcc, but I'm\nnot sure that will solve the problem entirely.) I usually play the\ngame conservatively by doing make distclean and a full rebuild after\nan update, unless I see that the changes were very localized.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Mar 1999 10:47:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS 3-22-99 \\d broken? "
},
{
"msg_contents": "On Tue, 23 Mar 1999, Tom Lane wrote:\n\n> James Thompson <[email protected]> writes:\n> > [ psql's \\d not working right ]\n> \n> Hmm, it works OK for me on sources from Sunday --- and a quick check\n> shows no interesting changes since then. Either you've found a\n> platform-specific bug, or you didn't rebuild correctly after cvs update.\n>\n\nMoved my old copy of the pgsql tree and checked out the entire thing\nyestereday afternoon to insure a fresh copy. Used all defaults\n\n./configure \nmake \nmake install\n\n\\d still does not display complete table listings. Everything (select,\ninsert, update, delete) seems to work though. Regression test pass well\nenough (error messages different, a few rounding errors, etc)\n\nI have noticed a lot of little oddities.\n\nI've noticed the backend is not stable. I think it has something to do\nwith permissions/passwords. I don't have exact details but if I change\npasswords, create users, or do a large quantity of grants the backend\nseems to die when the db superuser exits psql. At least the next login\nfails due to no backend process running. I must remove the /tmp/.s.* file\nand restart the backend. (Is there an error I can look at somewhere?)\n\nThe psql command create user jamest;\ndoes not work. I must use createuser from the shell.\n\nGroups do not work. I can create the group using the insert command in \nthe manual, I can add people to the group. But those people cannot select\nfrom the tables, some type of group 0 error occurs.\n\nI apologize for being vague, my net connection for home was hideous last\nnight so I'm doing this from memory at work.\n\nI've tried this on a \"stock\" RedHat 5.2 system (gcc and friends are the\nRPMs that came with the system). A modified RedHat 5.1 system with latest\ngcc 2.7.x series compiler. Both behave in the exact same mannor.\n\nMy previous CVS code from about a week ago didn't have the \\d problem. I\ncan't say on the other problems as I started using these features after my\nrecent cvs update.\n\nAny ideas on where I should look for clues as to what has went wrong?\n\nTIA\n\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\nJames Thompson 138 Cardwell Hall Manhattan, Ks 66506 785-532-0561 \nKansas State University Department of Mathematics\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\n\n\n\n\n",
"msg_date": "Wed, 24 Mar 1999 08:41:39 -0600 (EST)",
"msg_from": "James Thompson <[email protected]>",
"msg_from_op": true,
"msg_subject": "backend unstable, \\d broken, groups broken was CVS 3-22-99 \\d broken?"
},
{
"msg_contents": "James Thompson <[email protected]> writes:\n> [ many things very broken despite full rebuild ]\n\nSounds like you've hit some kind of platform-specific breakage, then.\n\nI'd suggest chasing the \\d failure, simply because that's apparently\nthe easiest thing to reproduce. Look at the source code for psql.c,\nand try issuing by hand the same queries it uses to obtain the system\ntable info for \\d. Use a debugger to look at the data coming back\nfrom the backend during \\d --- in other words, is the lossage in psql\nor in the backend? Most likely it's the backend but you ought to make\nsure.\n\nI'm not enough of a backend guru to suggest where to look for the fault\ninside the backend... anyone?\n\n> I've noticed the backend is not stable. I think it has something to do\n> with permissions/passwords. I don't have exact details but if I change\n> passwords, create users, or do a large quantity of grants the backend\n> seems to die when the db superuser exits psql. At least the next login\n> fails due to no backend process running.\n\nYou mean no postmaster process running. Is there a corefile? (The\npostmaster would drop core in the directory you started it in, IIRC;\nor it might be the top /usr/local/pgsql/data/ directory.) If so,\nwhat backtrace do you get from it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Mar 1999 11:16:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend unstable, \\d broken,\n\tgroups broken was CVS 3-22-99 \\d broken?"
}
] |
[
{
"msg_contents": "> \n> Bruce, for the most part, I have worked around the problems with\n> pg_dump.\n> \n> But I am getting this error now, and it is worrisome:\n> \n> NOTICE: Can't truncate multi-segments relation tbl_mail_archive\n> ERROR: Tuple is too big: size 10024\n> \n\nCan someone comment on this? It is coming from\n./backend/storage/smgr/md.c in mdtruncate(). Is this table very large?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Mar 1999 09:55:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] \"CANNOT EXTEND\" -"
}
] |
[
{
"msg_contents": "Is this from vacuum or pg_dump?\n\nI'm just wondering if vacuum cannot cope with a table that is >1Gb\n(hence in two segments), and when cleaned down, is <1Gb (one segment).\n\nSaying that, why is it saying the tuple is too big? What is creating\nthat tuple?\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: Tuesday, March 23, 1999 2:55 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: [HACKERS] \"CANNOT EXTEND\" -\n\n\n> \n> Bruce, for the most part, I have worked around the problems with\n> pg_dump.\n> \n> But I am getting this error now, and it is worrisome:\n> \n> NOTICE: Can't truncate multi-segments relation tbl_mail_archive\n> ERROR: Tuple is too big: size 10024\n> \n\nCan someone comment on this? It is coming from\n./backend/storage/smgr/md.c in mdtruncate(). Is this table very large?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n",
"msg_date": "Tue, 23 Mar 1999 15:15:43 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] \"CANNOT EXTEND\" -"
},
{
"msg_contents": "> Is this from vacuum or pg_dump?\n\nI'm not sure about the original message. In my case, I got \n\n\tNOTICE: Can't truncate multi-segments relation tbl_mail_archive\n\nwhile doing vacuum on a multi-segment relation.\n\nHowever I didn't get:\n\nERROR: Tuple is too big: size 10024\n\nI'm not sure if I was just lucky.\n\n> I'm just wondering if vacuum cannot cope with a table that is >1Gb\n> (hence in two segments), and when cleaned down, is <1Gb (one segment).\n\nI don't think vacuum is currently usable for a segmented relation.\n\n> Saying that, why is it saying the tuple is too big? What is creating\n> that tuple?\n\nSeems that comes from RelationPutHeapTupleAtEnd(). It's saying\nrequested tuple length is too big to fit into a page. (10024 is\napparently bigger than 8192) Someting very weird is going on...\n---\nTatsuo Ishii\n",
"msg_date": "Wed, 24 Mar 1999 21:21:13 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"CANNOT EXTEND\" - "
}
] |
[
{
"msg_contents": "Try a named pipe.\n\nD.\n\n-----Original Message-----\nFrom: secret [mailto:[email protected]]\nSent: Tuesday, March 23, 1999 11:36 AM\nTo: Tom Lane\nCc: Bruce Momjian; [email protected]\nSubject: Re: [HACKERS] Re: postmaster dies (was Re: Very disappointing\nperformance)\n\n\nTom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > I can't imagine he has enough disk space for truss/ktrace output for a\n> > full day of backend activity, does he?\n>\n> That's why I was encouraging him to set up a playpen and actively\n> work at crashing it, rather than waiting around to see whether it'd\n> happen before his disk fills up ;-)\n>\n> regards, tom lane\n\n I've built a simple program to record the last N lines(currently\n5000...Suggestions?) of input... What I'd like to do is pipe STDIN and\nSTDERR to this program, but \"|\" doesn't do this, do you all have a\nsuggestion on how to do this? If I can then I can get you the system trace\nand hopefully get this crash bug fixed.\n\n\n\n",
"msg_date": "Tue, 23 Mar 1999 12:04:50 -0500",
"msg_from": "Dan Gowin <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Re: postmaster dies (was Re: Very disappointing pe\n\trformance)"
}
] |
[
{
"msg_contents": "\nPlatform: Alpha, Digital UNIX 4.0D \nSoftware: PostgreSQL 6.5 snaphot (11 March 1999)\n\nI have two tables as follows:\n\nTable = orderfoo\n+-------------------------+----------------------------+-------+\n| Field | Type | Length|\n+-------------------------+----------------------------+-------+\n| o_orderkey | int4 not null | 4 |\n| o_custkey | int4 not null | 4 |\n| o_orderstatus | char() not null | 1 |\n| o_totalprice | float8 not null | 8 |\n| o_orderdate | date | 4 |\n| o_orderpriority | char() not null | 15 |\n| o_clerk | char() not null | 15 |\n| o_shippriority | int4 not null | 4 |\n| o_comment | char() not null | 79 |\n+-------------------------+----------------------------+-------+\nIndex: orderfoo_index_\n\n\nTable = customer\n+-------------------------+----------------------------+-------+\n| Field | Type | Length|\n+-------------------------+----------------------------+-------+\n| c_custkey | int4 not null | 4 |\n| c_name | char() not null | 25 |\n| c_address | char() not null | 40 |\n| c_nationkey | int4 not null | 4 |\n| c_phone | char() not null | 15 |\n| c_acctbal | float8 not null | 8 |\n| c_mktsegment | char() not null | 10 |\n| c_comment | char() not null | 117 |\n+-------------------------+----------------------------+-------+\nIndex: customer_index_\n\nand a query:\n\n--\n-- Query 3\n--\nselect l_orderkey, sum(l_extendedprice*(1-l_discount)) as revenue,\no_orderdate, o_shippriority\nfrom customer, orderfoo, lineitem\nwhere c_mktsegment = 'BUILDING'\nand c_custkey = o_custkey\nand l_orderkey = o_orderkey\nand o_orderdate < '1995-03-15'\nand l_shipdate > '1995-03-15'\ngroup by l_orderkey, o_orderdate, o_shippriority\norder by revenue desc, o_orderdate;\n\nwhose plan includes the segment that hash joins the two tables listed\nabove:\n\n -> Hash Join (cost=12268.29 size=5284 width=20)\n -> Seq Scan on orderfoo (cost=8609.00 size=72911 width=16)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on customer (cost=1031.00 size=1087 width=4)\n\nand which crashes the backend during the Hash Join with:\n\n Unaligned access pid=28933 <postgres> va=0x14027e934 \n pc=0x120099430 ra=0x14027e93c inst=0xb74e0010\n\nfollowed swiftly by:\n\n Bus error (core dumped)\n\n(note that the unaligned access and the bus error may be unrelated\nevents, but I suspect not - more below)\n\nwe then have a dbx backtrace that shows:\n\n 0 ExecScanHashBucket(hjstate = 0x140238478, bucket = 0x14027e8d8,\ncurtuple = 0x11fffd000, hjclauses = 0x140234118, econtext = 0x140238588)\n[\"nodeHash.c\":706, 0x120099434]\n 1 ExecHashJoin(node = 0x140234258) [\"nodeHashjoin.c\":288, 0x120099f00]\n 2 ExecProcNode(node = 0x140234258, parent = 0x1402347d8)\n[\"execProcnode.c\":315, 0x120090ffc]\n 3 ExecNestLoop(node = 0x1402347d8, parent = 0x140236108)\n[\"nodeNestloop.c\":160, 0x12009d104]\n 4 ExecProcNode(node = 0x1402347d8, parent = 0x140236108)\n[\"execProcnode.c\":279, 0x120090e7c]\n 5 createfirstrun(node = 0x140236108) [\"psort.c\":409, 0x1201762fc]\n 6 initialrun(node = 0x140236108) [\"psort.c\":291, 0x120176024]\n 7 psort_begin(node = 0x140236108, nkeys = 5159, key = 0x14023c560)\n[\"psort.c\":150, 0x120175e64]\n 8 ExecSort(node = 0x140236108) [\"nodeSort.c\":156, 0x12009e1c4]\n 9 ExecProcNode(node = 0x140236108, parent = 0x1402366d8)\n[\"execProcnode.c\":295, 0x120090f0c]\n 10 ExecGroupEveryTuple(node = 0x1402366d8) [\"nodeGroup.c\":104, 0x12009fab4]\n 11 ExecGroup(node = (nil)) [\"nodeGroup.c\":56, 0x12009fa0c]\n 12 ExecProcNode(node = 0x1402366d8, parent = 
0x140237020)\n[\"execProcnode.c\":303, 0x120090f6c]\n 13 ExecAgg(node = 0x140237020) [\"nodeAgg.c\":243, 0x120097064]\n 14 ExecProcNode(node = 0x140237020, parent = 0x140237478)\n[\"execProcnode.c\":307, 0x120090f9c]\n 15 createfirstrun(node = 0x140237478) [\"psort.c\":409, 0x1201762fc]\nMore (n if no)?\n 16 initialrun(node = 0x140237478) [\"psort.c\":291, 0x120176024]\n 17 psort_begin(node = 0x140237478, nkeys = 5159, key = 0x14023cd80)\n[\"psort.c\":150, 0x120175e64]\n 18 ExecSort(node = 0x140237478) [\"nodeSort.c\":156, 0x12009e1c4]\n 19 ExecProcNode(node = 0x140237478, parent = 0x140237478)\n[\"execProcnode.c\":295, 0x120090f0c]\n 20 ExecutePlan(estate = 0x140237ea8, plan = 0x140237478, direction =\nForwardScanDirection, destfunc = 0x14001b118) [\"execMain.c\":985,\n0x12008f2bc]\n 21 ExecutorRun(queryDesc = (nil), estate = 0x140237ea8, limoffset =\n0x1, limcount = 0x14001b118) [\"execMain.c\":360, 0x12008e780]\n 22 ProcessQueryDesc(queryDesc = 0x140237e78, limoffset = (nil),\nlimcount = (nil)) [\"pquery.c\":334, 0x120123a1c] 23\nProcessQuery(parsetree = 0x1401f3a88, plan = 0x140237478, dest =\n536858624) [\"pquery.c\":377, 0x120123b1c]\n 24 pg_exec_query_dest(query_string = 0x11fffd7c0 = \"select l_orderkey,\nsum(l_extendedprice*(1-l_discount)) as revenue, o_orderdate,\no_shippriority from customer, orderfoo, lineitem where c\\...\",\naclOverride = '^@') [\"postgres.c\":805, 0x1201217bc]\n 25 PostgresMain(argv = 0x11ffff808, real_argv = 0x11ffff808)\n[\"postgres.c\":703, 0x12012302c]\n 26 main(argv = 0x11ffff808) [\"main.c\":103, 0x1200ae28c]\n(dbx) \n\nthe offending code in nodeHash.c is near line 700 as:\n\n...\n\n if (curtuple == NULL)\n heapTuple = (HeapTuple) \n LONGALIGN(ABSADDR(bucket->top));\n else\n heapTuple = (HeapTuple)\n LONGALIGN(((char *) curtuple + curtuple->t_len + HEAPTUPLESIZE));\n\n while (heapTuple < (HeapTuple) ABSADDR(bucket->bottom))\n {\n\n heapTuple->t_data = (HeapTupleHeader) \n ((char *) heapTuple + HEAPTUPLESIZE);\n\n inntuple = ExecStoreTuple(heapTuple, /* tuple to store */\n hjstate->hj_HashTupleSlot, /* slot */\n InvalidBuffer, /* tuple has no buffer */\n false); /* do not pfree this tuple */\n\n...\n\nit crashes at the ExecStoreTuple().\n\nSince it gives that unaligned access error beforehand, I suspect the\nLONGALIGN() macros in the lines above are the culprits somehow. This\ntakes me to include/utils/memutils.h which has:\n\n\n#if (defined(sun) && ! defined(sparc)) || defined(m68k)\n#define LONGALIGN(LEN) SHORTALIGN(LEN)\n#elif defined (__alpha)\n\n /*\n * even though \"long alignment\" should really be on 8-byte boundaries for\n * linuxalpha, we want the strictest alignment to be on 4-byte (int)\n * boundaries, because otherwise things break when they try to use the\n * FormData_pg_* structures. --djm 12/12/96\n */\n#define LONGALIGN(LEN)\\\n (((long)(LEN) + (sizeof (int) - 1)) & ~(sizeof (int) -1))\n#else\n#define LONGALIGN(LEN)\\\n (((long)(LEN) + (sizeof (long) - 1)) & ~(sizeof (long) -1))\n#endif\n\n\nsince I am __alpha (but __osf__, not linux), I get the version with\nthe sizeof(int) instead of the sizeof(long).\n\nCan someone explain the comment from djm to me (or is djm still\nlistening somewhere?). At first blush, I suspect that I actually\n_want_ it to do the latter version of LONGALIGN(), since my longs\nreally are 8 bytes. 
But when I try to do that instead, I am unable to\neven run \"initdb\" - dies with an error like \"attribute not\nfound/invalid\" (sorry, scrolled away the window with the actual error\n- I can re-create if the exact message would help anyone).\n\nAnyone have suggestions on how I might proceed? Are there known\nproblems (or known-workings) of HashJoin on 64-bit platforms?\n\nErik Riedel\nCarnegie Mellon University\nwww.cs.cmu.edu/~riedel\n\n",
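For anyone without an Alpha handy, the failure mode can be shown in isolation (on Alpha, where long is 8 bytes, the store below raises the same "Unaligned access" trap seen above; on most other platforms it merely runs):

    #include <stdio.h>

    int
    main(void)
    {
        char    buf[16];
        /* 4-byte-offset pointer that is, in general, not 8-byte aligned */
        long   *p = (long *) (buf + 4);

        *p = 42;            /* 8-byte store through a misaligned pointer */
        printf("%ld\n", *p);
        return 0;
    }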
"msg_date": "Tue, 23 Mar 1999 23:21:22 -0500 (EST)",
"msg_from": "Erik Riedel <[email protected]>",
"msg_from_op": true,
"msg_subject": "64-bit hashjoins"
},
{
"msg_contents": "> since I am __alpha (but __osf__, not linux), I get the version with\n> the sizeof(int) instead of the sizeof(long).\n> \n> Can someone explain the comment from djm to me (or is djm still\n> listening somewhere?). At first blush, I suspect that I actually\n> _want_ it to do the latter version of LONGALIGN(), since my longs\n> really are 8 bytes. But when I try to do that instead, I am unable to\n> even run \"initdb\" - dies with an error like \"attribute not\n> found/invalid\" (sorry, scrolled away the window with the actual error\n> - I can re-create if the exact message would help anyone).\n> \n> Anyone have suggestions on how I might proceed? Are there known\n> problems (or known-workings) of HashJoin on 64-bit platforms?\n> \n\nGood analysis. I never have understood the alpha issues. I realize\nthat initdb does not work in those cases, but never understood why.\n\nNot sure who djm is.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Mar 1999 23:33:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 64-bit hashjoins"
},
{
"msg_contents": "Erik Riedel <[email protected]> writes:\n> Platform: Alpha, Digital UNIX 4.0D \n> [ memutils.h says ]\n> /*\n> * even though \"long alignment\" should really be on 8-byte boundaries for\n> * linuxalpha, we want the strictest alignment to be on 4-byte (int)\n> * boundaries, because otherwise things break when they try to use the\n> * FormData_pg_* structures. --djm 12/12/96\n> */\n\nI remember looking at that code and saying \"Huh? You can't do that!\".\nI kept my fingers off it because I didn't have direct proof that it\nwas broken ... but it sounds like you do.\n\n> Can someone explain the comment from djm to me (or is djm still\n> listening somewhere?). At first blush, I suspect that I actually\n> _want_ it to do the latter version of LONGALIGN(), since my longs\n> really are 8 bytes. But when I try to do that instead, I am unable to\n> even run \"initdb\" - dies with an error like \"attribute not\n> found/invalid\"\n\nYeah, that's about what I'd expect. The point is that the struct\nlayouts found in include/catalog/pg_*.h for system table records\nhave to match the actual physical layout of tuples on disk. What\nyou are probably running into is that the attribute size/alignment\ncalculations done by the heaptuple code using the declared column data\ntypes fail to match up with the struct field alignment done by the\ncompiler.\n\nMy guess is that either a struct field is being declared \"long\" when\nit really oughta be \"int\", or some part of the tuple storage routines\nis applying LONGALIGN() when it only oughta apply INTALIGN(). This\nis something that would be difficult to track down or verify without\na box on which sizeof(int) != sizeof(long), so I haven't gone after it.\nIf you have time, please leave memutils.h with the more reasonable\nlooking definition of LONGALIGN() and go looking to find out which\nsystem table has the sizing conflict.\n\nBTW, we'd run into this same problem if any of the system tables had\na float8 column, since the alignment of those is platform-dependent.\nMemo to hackers: stay away from float8 in sys tables.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Mar 1999 10:53:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 64-bit hashjoins "
},
{
"msg_contents": "> My guess is that either a struct field is being declared \"long\" when\n> it really oughta be \"int\", or some part of the tuple storage routines\n> is applying LONGALIGN() when it only oughta apply INTALIGN(). This\n> is something that would be difficult to track down or verify without\n> a box on which sizeof(int) != sizeof(long), so I haven't gone after it.\n> If you have time, please leave memutils.h with the more reasonable\n> looking definition of LONGALIGN() and go looking to find out which\n> system table has the sizing conflict.\n\nYes. If you can tell us the column, by running initdb in debug mode\n(somehow), I think we can figure out the problem.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Mar 1999 11:21:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 64-bit hashjoins"
},
{
"msg_contents": "\n> Yes. If you can tell us the column, by running initdb in debug mode\n> (somehow), I think we can figure out the problem.\n> \nI changed LONGALIGN to be the \"more correct\" version, and got the\nfollowing trace tidbit from initdb:\n\n<...startup elided...>\n\n+ mkdir /mnt/pgsql/data/base \n+ [ 0 -ne 0 ] \n+ rm -rf /mnt/pgsql/data/base/template1 \n+ mkdir /mnt/pgsql/data/base/template1 \n+ [ 0 -eq 1 ] \nBACKEND_TALK_ARG=-Q\nBACKENDARGS=-boot -C -F -D/mnt/pgsql/data -Q\n+ echo Creating template database in /mnt/pgsql/data/base/template1 \nCreating template database in /mnt/pgsql/data/base/template1\n+ [ 0 -ne 0 ] \n+ postgres -boot -C -F -D/mnt/pgsql/data -Q template1 \n+ cat /usr/pdl/lib/pgsql/lib/local1_template1.bki.source \n+ sed -e s/postgres PGUID/er1p 5555/ -e s/PGUID/5555/ \nERROR: create index: type for attribute 'attrelid' undefined\nERROR: create index: type for attribute 'attrelid' undefined\n/usr/pdl/lib/pgsql/bin/initdb: 2300 Quit - core dumped\n+ [ 131 -ne 0 ] \n+ echo initdb: could not create template database \ninitdb: could not create template database\n+ [ 0 -eq 0 ] \n+ echo initdb: cleaning up by wiping out /mnt/pgsql/data/base/template1 \ninitdb: cleaning up by wiping out /mnt/pgsql/data/base/template1\n+ rm -rf /mnt/pgsql/data/base/template1 \n+ exit 1 \n\nthe crash is near the ERROR statements, core dumped. The dbx\nbacktrace then shows:\n\nsignal Quit at >*[__kill, 0x120185aa8] \tbeq\tr19, 0x120185ac0\n(dbx)\n(dbx) where\n> 0 __kill(0x0, 0x140035d50, 0x12016e508, 0xffffffffffffffff,\n0x1400545b8) [0x120185aa8]\n 1 elog(fmt = 0x1400067b0 = \"create index: type for attribute '%s'\nundefined\") [\"elog.c\":224, 0x12016e530]\n 2 NormIndexAttrs(attList = 0x8fc, attNumP = 0x1401e91e2, classOidP =\n0x1401e91f8) [\"indexcmds.c\":500, 0x12007ae5c]\n 3 DefineIndex(heapRelationName = 0x1401991b0 = \"pg_attribute\",\nindexRelationName = 0x1401c6e00 = \"pg_attribute_relid_attnam_index\",\naccessMethodName = 0x1401e91f8 = \"^B\", attributeList = 0x1400a48d0,\nparameterList = (nil), primary = '^@', predicate = (nil), rangetable =\n(nil)) [\"indexcmds.c\":198, 0x12007a2b4]\n 4 Int_yyparse()\n[\"/usr0/kosak/tmp/bisontestinstall/share/bison.simple\":700, 0x120062c14]\n 5 BootstrapMain(argv = 0x14007da20) [\"bootstrap.c\":430, 0x1200654c4]\n 6 main(argv = 0x11ffff5c8) [\"main.c\":100, 0x1200ae190]\n(dbx) \n\nit seems to be ok until it goes to build the indices.\n\nDoes this help identify where to look?\n\nErik\n\n\n\n",
"msg_date": "Wed, 24 Mar 1999 13:56:55 -0500 (EST)",
"msg_from": "Erik Riedel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 64-bit hashjoins"
},
{
"msg_contents": "Erik Riedel <[email protected]> writes:\n> 1 elog(fmt = 0x1400067b0 = \"create index: type for attribute '%s'\n> undefined\") [\"elog.c\":224, 0x12016e530]\n> 2 NormIndexAttrs(attList = 0x8fc, attNumP = 0x1401e91e2, classOidP =\n> 0x1401e91f8) [\"indexcmds.c\":500, 0x12007ae5c]\n\n> Does this help identify where to look?\n\nJust from looking at that chunk of the source, it seems that the problem\nis in either pg_type or pg_attribute, since it is using a field from a\npg_attribute tuple to look for a pg_type tuple... probably\npg_attribute... which starts out with\n Oid attrelid;\n NameData attname;\n Oid atttypid;\nCould it be NameData? A few minutes later: YUP!\n\n>From pg_type, type \"name\" is declared as having fixed length 32 and\nint-alignment (typalign = 'i'). This is dubious enough, since a\ncompiler is likely to treat a char array as having only byte alignment;\ntypalign = 'c' seems more correct. But it's been working so far and\nprolly wouldn't break on an Alpha.\n\nBut in src/include/access/tupmacs.h we find the code that actually\nimplements attribute alignment calculations, and it reads:\n\n#define att_align(cur_offset, attlen, attalign) \\\n( \\\n ((attlen) < sizeof(int32)) ? \\\n ( \\\n ((attlen) == -1) ? \\\n ( \\\n ((attalign) == 'd') ? DOUBLEALIGN(cur_offset) : \\\n INTALIGN(cur_offset) \\\n ) \\\n : \\\n ( \\\n ((attlen) == sizeof(char)) ? \\\n ( \\\n (long)(cur_offset) \\\n ) \\\n : \\\n ( \\\n AssertMacro((attlen) == sizeof(short)), \\\n SHORTALIGN(cur_offset) \\\n ) \\\n ) \\\n ) \\\n : \\\n ( \\\n ((attlen) == sizeof(int32)) ? \\\n ( \\\n INTALIGN(cur_offset) \\\n ) \\\n : \\\n ( \\\n AssertMacro((attlen) > sizeof(int32)), \\\n ((attalign) == 'd') ? DOUBLEALIGN(cur_offset) : \\\n LONGALIGN(cur_offset) \\\n ) \\\n ) \\\n)\n\nWalk through that with attlen = 32, attalign = 'i', and guess what:\nit applies LONGALIGN(). Which is different on Alpha than everywhere\nelse, not to mention flat-out wrong for the given arguments.\n\nErik, try changing that last LONGALIGN to INTALIGN and see if it\nworks any better on the Alpha.\n\nWe probably really ought to bag this entire logic and replace it\nwith something like\n\t\tattalign 'c' -> no alignment\n\t\tattalign 's' -> SHORTALIGN\n\t\tattalign 'i' -> INTALIGN\n\t\tattalign 'd' -> DOUBLEALIGN\nwith possibly some AssertMacro cross-checks that the given attlen\nmakes sense for the attalign. But driving the logic primarily off\nattlen rather than attalign makes little sense to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Mar 1999 18:52:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 64-bit hashjoins "
},
{
"msg_contents": "I have committed changes that I believe eliminate the need for the\nbogus LONGALIGN() definition Erik was complaining of.\n\nThe modified sources pass regression (as well as before anyway ;-))\non my machine, but I have not got an Alpha to test with. Erik,\nwould you pull the current CVS sources and see if it works for you?\nNOTE: please run a full install including initdb, in order to make sure\nI didn't break bootstrap...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Mar 1999 22:59:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 64-bit hashjoins "
},
{
"msg_contents": "> Walk through that with attlen = 32, attalign = 'i', and guess what:\n> it applies LONGALIGN(). Which is different on Alpha than everywhere\n> else, not to mention flat-out wrong for the given arguments.\n> \n> Erik, try changing that last LONGALIGN to INTALIGN and see if it\n> works any better on the Alpha.\n> \n> We probably really ought to bag this entire logic and replace it\n> with something like\n> \t\tattalign 'c' -> no alignment\n> \t\tattalign 's' -> SHORTALIGN\n> \t\tattalign 'i' -> INTALIGN\n> \t\tattalign 'd' -> DOUBLEALIGN\n> with possibly some AssertMacro cross-checks that the given attlen\n> makes sense for the attalign. But driving the logic primarily off\n> attlen rather than attalign makes little sense to me.\n\nI can agree that the type alignment is clearly wrong. I did some\ncleanup of the pg_* files to they at least match, but that was obviously\nnot the whole picture, and this flakeyness has given us problems on\nalpha.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Mar 1999 23:40:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 64-bit hashjoins"
},
{
"msg_contents": "\n> The modified sources pass regression (as well as before anyway ;-))\n> on my machine, but I have not got an Alpha to test with. Erik,\n> would you pull the current CVS sources and see if it works for you?\n> NOTE: please run a full install including initdb, in order to make sure\n> I didn't break bootstrap...\n> \nDone. Looks like it works.\n\nIt compiles fine, initdb runs fine, and my query runs correctly with\nHashJoin (more than 100x improvement over the Nested Loops version!).\n\nThanks!\n\nErik\n\n",
"msg_date": "Thu, 25 Mar 1999 11:57:02 -0500 (EST)",
"msg_from": "Erik Riedel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 64-bit hashjoins"
},
{
"msg_contents": "Now that the XXXALIGN() macros are supposed to reflect reality instead\nof arbitrary decisions ;-), I have revised memutils.h to eliminate all\nthat \"#if defined(platform)\" cruft. Instead, the actual alignment values\nbeing used by the compiler are discovered by the configure script.\nThis should make things considerably more robust on machines where the\nalignment requirement of the basic C datatypes is not the same as their\nsize. In particular, we should no longer see any problems with the\nstruct declarations in include/catalog/pg_*.h not matching the way that\nthe tuple access code lays out the tuples.\n\nWARNING: if you are on a machine where this actually makes a difference,\nyou may have to do an initdb after your next CVS update, because the\npadding in your tables may change. I think this would be most likely\nto affect tables containing float8 or int8 data --- some machines\nrequire 8-byte alignment of doubles, but some don't, and the padding of\nfloat data will now reflect that.\n\nRight now the system is still making an assumption that I consider\ncrufty: it uses typalign = 'd' (ie, DOUBLE alignment) for int8 data\n(long long int). As things stand, this would only cause problems on\nmachines where long long actually has stronger alignment requirements\nthan double. I've never heard of such a platform, but maybe they are\nout there --- has anyone heard of one? A more likely cause of trouble\nis that if any int8 columns are ever added to system tables, the code\nwill risk failure unless int8 and double have exactly the same alignment\nrequirement (because the catalog structs could get laid out differently\nthan the tuple code would expect).\n\nIs it worth adding a new typalign value specifically for int8, in order\nto make the world safe for int8 columns in system tables?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Mar 1999 14:25:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 64-bit hashjoins "
},
{
"msg_contents": "> Right now the system is still making an assumption that I consider\n> crufty: it uses typalign = 'd' (ie, DOUBLE alignment) for int8 data\n> (long long int). As things stand, this would only cause problems on\n> machines where long long actually has stronger alignment requirements\n> than double. I've never heard of such a platform, but maybe they are\n> out there --- has anyone heard of one?\n\nNo. At least not in our list of supported platforms; don't know what\nCray mainframes require, since machines like that are not necessarily\n8-bit-bytes, 4-byte-longword machines.\n\n> Is it worth adding a new typalign value specifically for int8, in \n> order to make the world safe for int8 columns in system tables?\n\nI would be comfortable making the same assumptions about int8 as for\ndouble.\n\n - Thomas\n",
"msg_date": "Fri, 26 Mar 1999 19:12:02 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 64-bit hashjoins"
},
{
"msg_contents": "\nTom, you fixed this, right?\n\n\n> Erik Riedel <[email protected]> writes:\n> > Platform: Alpha, Digital UNIX 4.0D \n> > [ memutils.h says ]\n> > /*\n> > * even though \"long alignment\" should really be on 8-byte boundaries for\n> > * linuxalpha, we want the strictest alignment to be on 4-byte (int)\n> > * boundaries, because otherwise things break when they try to use the\n> > * FormData_pg_* structures. --djm 12/12/96\n> > */\n> \n> I remember looking at that code and saying \"Huh? You can't do that!\".\n> I kept my fingers off it because I didn't have direct proof that it\n> was broken ... but it sounds like you do.\n> \n> > Can someone explain the comment from djm to me (or is djm still\n> > listening somewhere?). At first blush, I suspect that I actually\n> > _want_ it to do the latter version of LONGALIGN(), since my longs\n> > really are 8 bytes. But when I try to do that instead, I am unable to\n> > even run \"initdb\" - dies with an error like \"attribute not\n> > found/invalid\"\n> \n> Yeah, that's about what I'd expect. The point is that the struct\n> layouts found in include/catalog/pg_*.h for system table records\n> have to match the actual physical layout of tuples on disk. What\n> you are probably running into is that the attribute size/alignment\n> calculations done by the heaptuple code using the declared column data\n> types fail to match up with the struct field alignment done by the\n> compiler.\n> \n> My guess is that either a struct field is being declared \"long\" when\n> it really oughta be \"int\", or some part of the tuple storage routines\n> is applying LONGALIGN() when it only oughta apply INTALIGN(). This\n> is something that would be difficult to track down or verify without\n> a box on which sizeof(int) != sizeof(long), so I haven't gone after it.\n> If you have time, please leave memutils.h with the more reasonable\n> looking definition of LONGALIGN() and go looking to find out which\n> system table has the sizing conflict.\n> \n> BTW, we'd run into this same problem if any of the system tables had\n> a float8 column, since the alignment of those is platform-dependent.\n> Memo to hackers: stay away from float8 in sys tables.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 00:35:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 64-bit hashjoins"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, you fixed this, right?\n\nI believe this is fixed, but it'd be nice to have some confirmation from\nsomeone with a platform where long != int ... Erik, have you tried it\nrecently?\n\n\t\t\tregards, tom lane\n\n\n>> Erik Riedel <[email protected]> writes:\n>>>> Platform: Alpha, Digital UNIX 4.0D \n>>>> [ memutils.h says ]\n>>>> /*\n>>>> * even though \"long alignment\" should really be on 8-byte boundaries for\n>>>> * linuxalpha, we want the strictest alignment to be on 4-byte (int)\n>>>> * boundaries, because otherwise things break when they try to use the\n>>>> * FormData_pg_* structures. --djm 12/12/96\n>>>> */\n>> \n>> I remember looking at that code and saying \"Huh? You can't do that!\".\n>> I kept my fingers off it because I didn't have direct proof that it\n>> was broken ... but it sounds like you do.\n>> \n>>>> Can someone explain the comment from djm to me (or is djm still\n>>>> listening somewhere?). At first blush, I suspect that I actually\n>>>> _want_ it to do the latter version of LONGALIGN(), since my longs\n>>>> really are 8 bytes. But when I try to do that instead, I am unable to\n>>>> even run \"initdb\" - dies with an error like \"attribute not\n>>>> found/invalid\"\n>> \n>> Yeah, that's about what I'd expect. The point is that the struct\n>> layouts found in include/catalog/pg_*.h for system table records\n>> have to match the actual physical layout of tuples on disk. What\n>> you are probably running into is that the attribute size/alignment\n>> calculations done by the heaptuple code using the declared column data\n>> types fail to match up with the struct field alignment done by the\n>> compiler.\n>> \n>> My guess is that either a struct field is being declared \"long\" when\n>> it really oughta be \"int\", or some part of the tuple storage routines\n>> is applying LONGALIGN() when it only oughta apply INTALIGN(). This\n>> is something that would be difficult to track down or verify without\n>> a box on which sizeof(int) != sizeof(long), so I haven't gone after it.\n>> If you have time, please leave memutils.h with the more reasonable\n>> looking definition of LONGALIGN() and go looking to find out which\n>> system table has the sizing conflict.\n>> \n>> BTW, we'd run into this same problem if any of the system tables had\n>> a float8 column, since the alignment of those is platform-dependent.\n>> Memo to hackers: stay away from float8 in sys tables.\n>> \n>> regards, tom lane\n>> \n>> \n\n\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 12:31:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 64-bit hashjoins "
},
{
"msg_contents": "Excerpts from mail: 10-May-99 Re: [HACKERS] 64-bit hashjo.. by Tom\[email protected] \n> I believe this is fixed, but it'd be nice to have some confirmation from\n> someone with a platform where long != int ... Erik, have you tried it\n> recently?\n> \nSorry for the slow response.\n\nI tried this when the fix was first done, and I thought I reported to\nthe list that it worked fine.\n\nI actually have not updated my tree since then, so I don't know about\nchanges after 25 March. My logs say:\n\n630 snapshot from postgresql CVS (25 March 1999)\n631 fix for 64-bit LONGALIGN (works!)\n\nI have been using that version since March without problems (well, at\nleast no problems with 64-bit ints and hashjoins...).\n\nErik\n\n",
"msg_date": "Mon, 17 May 1999 09:14:35 -0400 (EDT)",
"msg_from": "Erik Riedel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 64-bit hashjoins"
},
{
"msg_contents": "Erik Riedel <[email protected]> writes:\n> I actually have not updated my tree since then, so I don't know about\n> changes after 25 March. My logs say:\n> 630 snapshot from postgresql CVS (25 March 1999)\n> 631 fix for 64-bit LONGALIGN (works!)\n> I have been using that version since March without problems (well, at\n> least no problems with 64-bit ints and hashjoins...).\n\nOK, but I've done some considerable hacking on the hashjoin code since\nthen. I don't *think* I broke anything ... but ... if you have the\ntime to pull current sources and check again, it'd be appreciated.\n(There have been a lot of other bugs fixed since March, too.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 May 1999 10:22:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 64-bit hashjoins "
}
] |
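The typalign and LONGALIGN discussion in the thread above boils down to one arithmetic fact: alignment macros defined in terms of sizeof(long) evaluate differently on 32-bit and 64-bit platforms, so tuple offsets computed with them must agree with how the compiler lays out the FormData_pg_* structs. A minimal standalone sketch in C, assuming a TYPEALIGN-style rounding macro in the spirit of memutils.h (illustrative only, not the actual PostgreSQL source):

    #include <stdio.h>

    /* Round LEN up to the next multiple of ALIGNVAL (assumed a power of 2);
     * modeled on the TYPEALIGN-style macros discussed in the thread. */
    #define TYPEALIGN(ALIGNVAL, LEN) \
        (((long) (LEN) + ((ALIGNVAL) - 1)) & ~((long) ((ALIGNVAL) - 1)))
    #define INTALIGN(LEN)   TYPEALIGN(sizeof(int), (LEN))
    #define LONGALIGN(LEN)  TYPEALIGN(sizeof(long), (LEN))

    int main(void)
    {
        /* On an ILP32 box (sizeof(long) == 4) LONGALIGN(9) is 12; on an
         * Alpha (sizeof(long) == 8) it is 16.  If the heap-tuple code
         * rounds an attribute offset one way while the compiler pads the
         * catalog struct the other way, initdb fails as described above. */
        printf("sizeof(int)=%lu sizeof(long)=%lu\n",
               (unsigned long) sizeof(int), (unsigned long) sizeof(long));
        printf("INTALIGN(9)=%ld LONGALIGN(9)=%ld\n",
               (long) INTALIGN(9), (long) LONGALIGN(9));
        return 0;
    }

Compiled on both kinds of box, the second printed line differs, which is exactly the divergence the "fix for 64-bit LONGALIGN" had to reconcile.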
[
{
"msg_contents": "Not sure if I should post this here, but it seemed kinda appropriate.\nAnyway, I'm using 6.4.2 and execute the following query in psql, piping the\nresults to a file:\n\"select autos.*, owners.name, owners.email, owners.dphone, owners.ephone,\nowners.zip, owners.country from autos, owners where autos.ownerid =\nowners.id;\"\nThis takes about 60 seconds at 0% idle CPU, with the backend taking all the\ntime. The file ends up about 3MB. Both tables have between 1200 and 1600\nrows with about 25 and 7 columns respectively.\nA simpler query like:\n\"select * from autos;\" takes about a second at about 50% idle, and produces\na similiar amount of data in a 3MB file.\nAny hints on speeding this up?\nOS: Redhat Linux 5.1, Dual-PPro 266.\n\nThe table definitions are below if anyone is interested:\n(Also, the cdate default value doesn't get set properly to the current date.\nAny hints on that would\nbe appreciated as well.)\nThanks,\nRich.\n\n\nTable = owners\n+----------------------------------+----------------------------------+-----\n--+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-----\n--+\n| id | float8 |\n8 |\n| name | varchar() |\n0 |\n| email | varchar() |\n0 |\n| dphone | varchar() |\n0 |\n| ephone | varchar() |\n0 |\n| zip | varchar() |\n0 |\n| country | varchar() |\n0 |\n| password | varchar() |\n0 |\n| isdealer | bool |\n1 |\n| cdate | date default datetime 'now' |\n4 |\n+----------------------------------+----------------------------------+-----\n--+\n\nTable = autos\n+----------------------------------+----------------------------------+-----\n--+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-----\n--+\n| id | float8 |\n8 |\n| ownerid | float8 |\n8 |\n| city | varchar() |\n0 |\n| region | varchar() |\n0 |\n| year | varchar() |\n0 |\n| mileage | int8 |\n8 |\n| make | varchar() |\n0 |\n| model | varchar() |\n0 |\n| price | money |\n4 |\n| bo | bool |\n1 |\n| ecolor | varchar() |\n0 |\n| icolor | varchar() |\n0 |\n| condition | varchar() |\n0 |\n| trans | varchar() |\n0 |\n| drivetrain | varchar() |\n0 |\n| cylinders | varchar() |\n0 |\n| power_steering | varchar() |\n0 |\n| power_windows | varchar() |\n0 |\n| power_locks | varchar() |\n0 |\n| pwr_driver_seat | varchar() |\n0 |\n| abs | varchar() |\n0 |\n| driver_air_bag | varchar() |\n0 |\n| dual_air_bag | varchar() |\n0 |\n| leather | varchar() |\n0 |\n| air | varchar() |\n0 |\n| radio | varchar() |\n0 |\n| cassette | varchar() |\n0 |\n| cd | varchar() |\n0 |\n| extra_cab | varchar() |\n0 |\n| tow_pkg | varchar() |\n0 |\n| sun_roof | varchar() |\n0 |\n| roof_rack | varchar() |\n0 |\n| description | varchar() |\n0 |\n| cdate | date default datetime 'now' |\n4 |\n+----------------------------------+----------------------------------+-----\n--+\n\n\n\n\n\n",
"msg_date": "Tue, 23 Mar 1999 23:37:27 -0800",
"msg_from": "\"Postgres mailing lists\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Really slow query on 6.4.2"
},
{
"msg_contents": "\"Postgres mailing lists\" <[email protected]> writes:\n> Anyway, I'm using 6.4.2 and execute the following query in psql, piping the\n> results to a file:\n> \"select autos.*, owners.name, owners.email, owners.dphone, owners.ephone,\n> owners.zip, owners.country from autos, owners where autos.ownerid =\n> owners.id;\"\n> This takes about 60 seconds at 0% idle CPU, with the backend taking all the\n> time. The file ends up about 3MB. Both tables have between 1200 and 1600\n> rows with about 25 and 7 columns respectively.\n\nHave you done a \"vacuum analyze\" lately? Sounds like the thing is using\na nested loop query plan, which is appropriate for tiny tables but not\nfor large ones. You could check this by seeing what EXPLAIN says.\n\nUnfortunately, if you haven't done a vacuum, the system effectively\nassumes that all your tables are tiny. I think this is a brain-dead\ndefault, but haven't had much luck convincing anyone else that the\ndefault should be changed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Mar 1999 10:09:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Really slow query on 6.4.2 "
},
{
"msg_contents": "The vacuum analyze did it. It's fast now. Thanks a bunch.\nrich.\n\n\nOn Wed, 24 Mar 1999, Tom Lane wrote:\n\n> \"Postgres mailing lists\" <[email protected]> writes:\n> > Anyway, I'm using 6.4.2 and execute the following query in psql, piping the\n> > results to a file:\n> > \"select autos.*, owners.name, owners.email, owners.dphone, owners.ephone,\n> > owners.zip, owners.country from autos, owners where autos.ownerid =\n> > owners.id;\"\n> > This takes about 60 seconds at 0% idle CPU, with the backend taking all the\n> > time. The file ends up about 3MB. Both tables have between 1200 and 1600\n> > rows with about 25 and 7 columns respectively.\n> \n> Have you done a \"vacuum analyze\" lately? Sounds like the thing is using\n> a nested loop query plan, which is appropriate for tiny tables but not\n> for large ones. You could check this by seeing what EXPLAIN says.\n> \n> Unfortunately, if you haven't done a vacuum, the system effectively\n> assumes that all your tables are tiny. I think this is a brain-dead\n> default, but haven't had much luck convincing anyone else that the\n> default should be changed.\n> \n> \t\t\tregards, tom lane\n> \n",
"msg_date": "Wed, 24 Mar 1999 11:55:21 -0800 (PST)",
"msg_from": "RHS Linux User <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Really slow query on 6.4.2 "
}
] |
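Tom's diagnosis and Rich's fix can be checked mechanically: look at the plan, run VACUUM ANALYZE, look again. A minimal libpq sketch of that procedure follows; the database and table names are taken from the thread and are placeholders for your own setup. Note that in this era EXPLAIN output arrives as NOTICE messages, which libpq's default notice handler prints to stderr:

    #include <stdio.h>
    #include "libpq-fe.h"

    /* Execute one statement; the EXPLAIN plan itself is emitted as
     * NOTICEs on stderr by libpq's default notice handler. */
    static void run(PGconn *conn, const char *sql)
    {
        PGresult *res = PQexec(conn, sql);

        fprintf(stderr, "-- %s\n", sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK &&
            PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=autos");   /* hypothetical dbname */

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }
        run(conn, "EXPLAIN SELECT * FROM autos, owners"
                  " WHERE autos.ownerid = owners.id");   /* nested loop */
        run(conn, "VACUUM ANALYZE autos");
        run(conn, "VACUUM ANALYZE owners");
        run(conn, "EXPLAIN SELECT * FROM autos, owners"
                  " WHERE autos.ownerid = owners.id");   /* better plan */
        PQfinish(conn);
        return 0;
    }

The same three steps work from psql, of course; the point is only that comparing the before and after plans makes the effect of the statistics visible.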
[
{
"msg_contents": "I tried to create a new user-defined type. I use palloc() to create\nstorage for this type so I include palloc.h however, this generates\nthe following error.\n\n[postgres@druid:contrib/chkpass] $ make\ncc -g -O -fPIC -I/usr/local/pgsql/include -c chkpass.c\nIn file included from /usr/local/pgsql/include/postgres.h:44,\n from chkpass.c:9:\n/usr/local/pgsql/include/utils/palloc.h:30: utils/mcxt.h: No such file or directory\n\nIf I copy mcxt.h there I get this error.\n\n[postgres@druid:contrib/chkpass] $ make\ncc -g -O -fPIC -I/usr/local/pgsql/include -c chkpass.c\nIn file included from /usr/local/pgsql/include/utils/palloc.h:30,\n from /usr/local/pgsql/include/postgres.h:44,\n from chkpass.c:9:\n/usr/local/pgsql/include/utils/mcxt.h:25: syntax error before `MemoryContext'\n\nIf I remove the include from palloc.h I have problems with the use of\npalloc because the macro expands to use CurrentMemoryContext which is\ndeclared in mcxt.h. Anyone know what the correct answer to this problem\nmight be?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 24 Mar 1999 08:47:39 -0500 (EST)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "mcxt.h"
},
{
"msg_contents": "How long since you updated your source code? I took care of this\na couple weeks ago ... or so I thought ...\n\nYou do need to repeat the \"make install\" step to get the right stuff\nput into /usr/local/pgsql/include/.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Mar 1999 10:13:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] mcxt.h "
}
] |
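For context on why chkpass.c drags in palloc.h at all: a user-defined type's input and output functions return freshly allocated values, and backend extension code allocates them with palloc() so the memory lives in the current memory context. A minimal sketch in the old-style calling convention of the day, compiled against the backend headers; the type name matches D'Arcy's module, but the struct and function bodies here are purely illustrative:

    #include <string.h>
    #include "postgres.h"    /* pulls in palloc() via utils/palloc.h */

    typedef struct chkpass
    {
        char password[16];
    } chkpass;

    /* Input function: external text form -> palloc'd internal form. */
    chkpass *
    chkpass_in(char *str)
    {
        chkpass *result = (chkpass *) palloc(sizeof(chkpass));

        strncpy(result->password, str, sizeof(result->password) - 1);
        result->password[sizeof(result->password) - 1] = '\0';
        return result;
    }

    /* Output function: internal form -> palloc'd text form. */
    char *
    chkpass_out(chkpass *pw)
    {
        char *result = (char *) palloc(strlen(pw->password) + 1);

        strcpy(result, pw->password);
        return result;
    }

It is exactly these palloc() calls that expand to use CurrentMemoryContext, which is why a broken mcxt.h include chain stops such a module from compiling.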
[
{
"msg_contents": "> If I copy mcxt.h there I get this error.\n> \n> [postgres@druid:contrib/chkpass] $ make\n> cc -g -O -fPIC -I/usr/local/pgsql/include -c chkpass.c\n> In file included from /usr/local/pgsql/include/utils/palloc.h:30,\n> from /usr/local/pgsql/include/postgres.h:44,\n> from chkpass.c:9:\n> /usr/local/pgsql/include/utils/mcxt.h:25: syntax error before \n> `MemoryContext'\n\nThis can be caused by the DLLIMPORT symbol in declaration of\nCurrentMemoryContext in mcxt.h. DLLIMPORT is defined in c.h as nothing for\nall platforms other than win32 (I hope) and c.h is included by postgres.h.\n\n\t\tDan\n",
"msg_date": "Wed, 24 Mar 1999 15:00:16 +0100",
"msg_from": "Horak Daniel <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] mcxt.h"
}
] |
[
{
"msg_contents": "You don't mention any indexes. Make sure you have indexes in stalled in\nautos.ownerid and owners.id.\n\n\t-----Original Message-----\n\tFrom:\tPostgres mailing lists [SMTP:[email protected]]\n\tSent:\tWednesday, March 24, 1999 12:37 AM\n\tTo:\[email protected]\n\tSubject:\t[HACKERS] Really slow query on 6.4.2\n\n\tNot sure if I should post this here, but it seemed kinda\nappropriate.\n\tAnyway, I'm using 6.4.2 and execute the following query in psql,\npiping the\n\tresults to a file:\n\t\"select autos.*, owners.name, owners.email, owners.dphone,\nowners.ephone,\n\towners.zip, owners.country from autos, owners where autos.ownerid =\n\towners.id;\"\n\tThis takes about 60 seconds at 0% idle CPU, with the backend taking\nall the\n\ttime. The file ends up about 3MB. Both tables have between 1200 and\n1600\n\trows with about 25 and 7 columns respectively.\n\tA simpler query like:\n\t\"select * from autos;\" takes about a second at about 50% idle, and\nproduces\n\ta similiar amount of data in a 3MB file.\n\tAny hints on speeding this up?\n\tOS: Redhat Linux 5.1, Dual-PPro 266.\n\n\tThe table definitions are below if anyone is interested:\n\t(Also, the cdate default value doesn't get set properly to the\ncurrent date.\n\tAny hints on that would\n\tbe appreciated as well.)\n\tThanks,\n\tRich.\n\n\n\tTable = owners\n\t\n+----------------------------------+----------------------------------+-----\n\t--+\n\t| Field | Type\n|\n\tLength|\n\t\n+----------------------------------+----------------------------------+-----\n\t--+\n\t| id | float8\n|\n\t8 |\n\t| name | varchar()\n|\n\t0 |\n\t| email | varchar()\n|\n\t0 |\n\t| dphone | varchar()\n|\n\t0 |\n\t| ephone | varchar()\n|\n\t0 |\n\t| zip | varchar()\n|\n\t0 |\n\t| country | varchar()\n|\n\t0 |\n\t| password | varchar()\n|\n\t0 |\n\t| isdealer | bool\n|\n\t1 |\n\t| cdate | date default datetime 'now'\n|\n\t4 |\n\t\n+----------------------------------+----------------------------------+-----\n\t--+\n\n\tTable = autos\n\t\n+----------------------------------+----------------------------------+-----\n\t--+\n\t| Field | Type\n|\n\tLength|\n\t\n+----------------------------------+----------------------------------+-----\n\t--+\n\t| id | float8\n|\n\t8 |\n\t| ownerid | float8\n|\n\t8 |\n\t| city | varchar()\n|\n\t0 |\n\t| region | varchar()\n|\n\t0 |\n\t| year | varchar()\n|\n\t0 |\n\t| mileage | int8\n|\n\t8 |\n\t| make | varchar()\n|\n\t0 |\n\t| model | varchar()\n|\n\t0 |\n\t| price | money\n|\n\t4 |\n\t| bo | bool\n|\n\t1 |\n\t| ecolor | varchar()\n|\n\t0 |\n\t| icolor | varchar()\n|\n\t0 |\n\t| condition | varchar()\n|\n\t0 |\n\t| trans | varchar()\n|\n\t0 |\n\t| drivetrain | varchar()\n|\n\t0 |\n\t| cylinders | varchar()\n|\n\t0 |\n\t| power_steering | varchar()\n|\n\t0 |\n\t| power_windows | varchar()\n|\n\t0 |\n\t| power_locks | varchar()\n|\n\t0 |\n\t| pwr_driver_seat | varchar()\n|\n\t0 |\n\t| abs | varchar()\n|\n\t0 |\n\t| driver_air_bag | varchar()\n|\n\t0 |\n\t| dual_air_bag | varchar()\n|\n\t0 |\n\t| leather | varchar()\n|\n\t0 |\n\t| air | varchar()\n|\n\t0 |\n\t| radio | varchar()\n|\n\t0 |\n\t| cassette | varchar()\n|\n\t0 |\n\t| cd | varchar()\n|\n\t0 |\n\t| extra_cab | varchar()\n|\n\t0 |\n\t| tow_pkg | varchar()\n|\n\t0 |\n\t| sun_roof | varchar()\n|\n\t0 |\n\t| roof_rack | varchar()\n|\n\t0 |\n\t| description | varchar()\n|\n\t0 |\n\t| cdate | date default datetime 'now'\n|\n\t4 |\n\t\n+----------------------------------+----------------------------------+-----\n\t--+\n\n\n\n\n\t\n",
"msg_date": "Wed, 24 Mar 1999 09:01:19 -0600",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Really slow query on 6.4.2"
}
] |
[
{
"msg_contents": "Hi,\n\nI need some static oids to add new datatypes and procs to the system catalog.\nCan I just grab the first unused oids or should I choose oids from specific\nranges for types and procs.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Wed, 24 Mar 1999 18:21:43 +0100 (MET)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": true,
"msg_subject": "static oid"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> Hi,\n> \n> I need some static oids to add new datatypes and procs to the system catalog.\n> Can I just grab the first unused oids or should I choose oids from specific\n> ranges for types and procs.\n\nSee unused_oids in pg/include. If you need a sizable range, I recommend\ngetting a continious range of them from the end. There is no\npartitioning of oids.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Mar 1999 13:11:54 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] static oid"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> Hi,\n> \n> I need some static oids to add new datatypes and procs to the system catalog.\n> Can I just grab the first unused oids or should I choose oids from specific\n> ranges for types and procs.\n> \n\nCan you give me a large city that you are near for our developers map?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Mar 1999 13:12:15 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] static oid"
},
{
"msg_contents": "On Wed, 24 Mar 1999, Bruce Momjian wrote:\n>Can you give me a large city that you are near for our developers map?\n\nWhat developers map?\n\nTaral\n",
"msg_date": "Wed, 24 Mar 1999 19:11:13 -0600",
"msg_from": "Taral <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] static oid"
},
{
"msg_contents": "> On Wed, 24 Mar 1999, Bruce Momjian wrote:\n> >Can you give me a large city that you are near for our developers map?\n> \n> What developers map?\n> \n> Taral\n> \n\nThe one under help us/developers on the web page. Jan is working on\nimproving it. We don't have everyone there, but people who have been\naround a long time.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Mar 1999 23:41:48 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] static oid"
}
] |
[
{
"msg_contents": "Hi,\n\narray_in is defined in the system catalog as taking two arguments while it\nactually takes three. Please apply the following patch.\n\n*** src/include/catalog/pg_proc.h.orig\tMon Dec 14 01:14:53 1998\n--- src/include/catalog/pg_proc.h\tWed Mar 24 12:11:22 1999\n***************\n*** 984,990 ****\n DESCR(\"array\");\n DATA(insert OID = 749 ( array_ref\t\t PGUID 11 f t f 7 f 23 \"0 23 0 23 23 23 0\" 100 0 0 100 foo bar));\n DESCR(\"array\");\n! DATA(insert OID = 750 ( array_in\t\t PGUID 11 f t f 2 f 23 \"0 0\" 100 0 0 100\tfoo bar ));\n DESCR(\"array\");\n DATA(insert OID = 751 ( array_out\t\t PGUID 11 f t f 2 f 23 \"0 0\" 100 0 0 100\tfoo bar ));\n DESCR(\"array\");\n--- 992,998 ----\n DESCR(\"array\");\n DATA(insert OID = 749 ( array_ref\t\t PGUID 11 f t f 7 f 23 \"0 23 0 23 23 23 0\" 100 0 0 100 foo bar));\n DESCR(\"array\");\n! DATA(insert OID = 750 ( array_in\t\t PGUID 11 f t f 3 f 23 \"0 0 23\" 100 0 0 100\tfoo bar ));\n DESCR(\"array\");\n DATA(insert OID = 751 ( array_out\t\t PGUID 11 f t f 2 f 23 \"0 0\" 100 0 0 100\tfoo bar ));\n DESCR(\"array\");\n\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Wed, 24 Mar 1999 18:29:35 +0100 (MET)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": true,
"msg_subject": "incorret definition of array_in"
}
] |
[
{
"msg_contents": "Have you done an initdb. Just a thought.\nI'd do:\n delete/move old pgsql tree including data/bin/include/lib directories\n make distclean - because there could still be some date problems (i.e.\ngram.c)\n make\n make install\n initdb\n start postmaster\n run regression\n try \\d on the regression database\n\nSorry if you already said that you ran initdb.\n\t-DEJ\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Wednesday, March 24, 1999 10:16 AM\n> To: James Thompson\n> Cc: [email protected]\n> Subject: Re: [HACKERS] backend unstable, \\d broken, groups broken was\n> CVS 3-22-99 \\d broken? \n> \n> \n> James Thompson <[email protected]> writes:\n> > [ many things very broken despite full rebuild ]\n> \n> Sounds like you've hit some kind of platform-specific breakage, then.\n> \n> I'd suggest chasing the \\d failure, simply because that's apparently\n> the easiest thing to reproduce. Look at the source code for psql.c,\n> and try issuing by hand the same queries it uses to obtain the system\n> table info for \\d. Use a debugger to look at the data coming back\n> from the backend during \\d --- in other words, is the lossage in psql\n> or in the backend? Most likely it's the backend but you ought to make\n> sure.\n> \n> I'm not enough of a backend guru to suggest where to look for \n> the fault\n> inside the backend... anyone?\n> \n> > I've noticed the backend is not stable. I think it has \n> something to do\n> > with permissions/passwords. I don't have exact details but \n> if I change\n> > passwords, create users, or do a large quantity of grants \n> the backend\n> > seems to die when the db superuser exits psql. At least \n> the next login\n> > fails due to no backend process running.\n> \n> You mean no postmaster process running. Is there a corefile? (The\n> postmaster would drop core in the directory you started it in, IIRC;\n> or it might be the top /usr/local/pgsql/data/ directory.) If so,\n> what backtrace do you get from it?\n> \n> \t\t\tregards, tom lane\n> \n",
"msg_date": "Wed, 24 Mar 1999 13:34:48 -0600",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] backend unstable, \\d broken, groups broken was CVS\n\t3-22-99 \\d broken?"
}
] |
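Tom's suggestion in the thread above, replaying the queries psql issues for \d outside of psql to see whether psql or the backend is at fault, looks roughly like this as a libpq client. The query text is only an approximation of the relation listing a 6.x psql sends; check psql.c for the exact string in your tree:

    #include <stdio.h>
    #include "libpq-fe.h"

    int main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=template1");
        PGresult *res;
        int       i;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connect: %s", PQerrorMessage(conn));
            return 1;
        }

        /* Roughly what \d uses to list user relations. */
        res = PQexec(conn,
                     "SELECT relname FROM pg_class "
                     "WHERE relname !~ '^pg_' AND relkind = 'r' "
                     "ORDER BY relname");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        else
            for (i = 0; i < PQntuples(res); i++)
                printf("%s\n", PQgetvalue(res, i, 0));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }

If this program misbehaves the same way \d does, the lossage is in the backend; if it behaves, the problem is on the psql side.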
[
{
"msg_contents": "> Unfortunately, if you haven't done a vacuum, the system effectively\n> assumes that all your tables are tiny. I think this is a brain-dead\n> default, but haven't had much luck convincing anyone else that the\n> default should be changed.\n> \nI totally agree with Tom Lane here. Let me try to give some arguments.\n\t\n\n1. If you have a user that does vacuum analyze regularly, we can\nconvince him to do vacuum analyze right after table creation, if he\nknows the table will be tiny.\n\n2. We have an application where the size of 20 tables changes from \n0 to ~200000 rows in 3 hours. To have accurate statistics during the day we\nwould need to analyze at least every 20 min.\nThis was not acceptable during those 3 hours.\nSo we took the approach to tune the sql to work properly without ever\ndoing statistics.\nThis works perfectly on our Informix installation, since Informix has\na tuning parameter, that tells it, that an index has to be used iff\npossible even if cost is higher, and the default for table size is 100.\n\n3. There are two types of popular optimizers, rule and cost based. \nA good approach is to behave rule based lacking statistics and cost\nbased with statistics. An easy way to achieve this is to choose\nreasonable defaults for the statistics before accurate statistics \nare made.\n\n4. Those doing statistics will most likely not leave out a few tables, thus\ncreating an undefined state where the optimizer would behave rule\nand cost based.\n\n5. Actually postgresql has behaved in this manner because of certain\n\"bugs\" in the optimizer. Recently a lot of those \"bugs\" have been\nidentified and \"fixed\", thus destroying the defacto rule based\nbehavior.\n\nIf the defaults are not changed, behavior of the overall system will\nactually be changed for the case where statistics are lacking, when the\noptimizer is improved to actually behave cost based under all \ncircumstances.\n\nAndreas\n\n\n\n\n\nAW: [HACKERS] Really slow query on 6.4.2 \n\n\n\n\nUnfortunately, if you haven't done a vacuum, the system effectively\nassumes that all your tables are tiny. I think this is a brain-dead\ndefault, but haven't had much luck convincing anyone else that the\ndefault should be changed.\n\n\nI totally agree with Tom Lane here. Let me try to give some arguments.\n \n1. If you have a user that does vacuum analyze regularly, we can\nconvince him to do vacuum analyze right after table creation, if he\nknows the table will be tiny.\n\n2. We have an application where the size of 20 tables changes from \n0 to ~200000 rows in 3 hours. To have accurate statistics during the day we would need to analyze at least every 20 min.\nThis was not acceptable during those 3 hours.\nSo we took the approach to tune the sql to work properly without ever\ndoing statistics.\nThis works perfectly on our Informix installation, since Informix has\na tuning parameter, that tells it, that an index has to be used iff\npossible even if cost is higher, and the default for table size is 100.\n\n3. There are two types of popular optimizers, rule and cost based. \nA good approach is to behave rule based lacking statistics and cost\nbased with statistics. An easy way to achieve this is to choose\nreasonable defaults for the statistics before accurate statistics \nare made.\n\n4. Those doing statistics will most likely not leave out a few tables, thus creating an undefined state where the optimizer would behave rule\nand cost based.\n\n5. 
Actually postgresql has behaved in this manner because of certain\n\"bugs\" in the optimizer. Recently a lot of those \"bugs\" have been\nidentified and \"fixed\", thus destroying the defacto rule based\nbehavior.\n\nIf the defaults are not changed, behavior of the overall system will\nactually be changed for the case where statistics are lacking, when the\noptimizer is improved to actually behave cost based under all \ncircumstances.\n\nAndreas",
"msg_date": "Thu, 25 Mar 1999 08:40:09 +0100",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Really slow query on 6.4.2 "
},
{
"msg_contents": "Zeugswetter Andreas IZ5 <[email protected]> writes:\n> 5. Actually postgresql has behaved in this manner because of certain\n> \"bugs\" in the optimizer. Recently a lot of those \"bugs\" have been\n> identified and \"fixed\", thus destroying the defacto rule based\n> behavior.\n\nThat's a real good point --- I think we've already heard a couple of\ncomplaints about the new optimizer doing \"silly\" things that it didn't\nuse to do.\n\nI repeat my proposal: CREATE TABLE should insert a default size (say\nabout 1000 tuples) into pg_class.reltuples, rather than inserting 0.\nThat way, the optimizer will only choose small-table-oriented plans\nif the table has actually been verified to be small by vacuum.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Mar 1999 10:23:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Really slow query on 6.4.2 "
},
{
"msg_contents": "On Thu, 25 Mar 1999, Zeugswetter Andreas IZ5 wrote:\n\n> \n> > Unfortunately, if you haven't done a vacuum, the system effectively\n> > assumes that all your tables are tiny. I think this is a brain-dead\n> > default, but haven't had much luck convincing anyone else that the\n> > default should be changed.\n> > \n> I totally agree with Tom Lane here. Let me try to give some arguments.\n\nMaybe I've missed something here, but I don't think anyone disagree's that\nour stats aren't the best, but I also don't think anyone has step'd up and\nprovided an alternative...have they?\n\nPersonally, I'd like to see some method where stats can, to a certain\nextent, be updated automagically, when changes are made to the table. The\ngenerated stats wouldn't *replace* vacuum, just reduce the overall need\nfor them.\n\nI'm not sure what is all contained in the stats, but the easiest one, I\nthink, to have done automagically is table sizes...add a tuple, update the\ntable of number of rows automatically. If that numbers gets \"off\", at\nleast it will be more reasonable then not doing anything...no?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 25 Mar 1999 11:58:01 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Really slow query on 6.4.2 "
},
{
"msg_contents": "> Zeugswetter Andreas IZ5 <[email protected]> writes:\n> > 5. Actually postgresql has behaved in this manner because of certain\n> > \"bugs\" in the optimizer. Recently a lot of those \"bugs\" have been\n> > identified and \"fixed\", thus destroying the defacto rule based\n> > behavior.\n> \n> That's a real good point --- I think we've already heard a couple of\n> complaints about the new optimizer doing \"silly\" things that it didn't\n> use to do.\n> \n> I repeat my proposal: CREATE TABLE should insert a default size (say\n> about 1000 tuples) into pg_class.reltuples, rather than inserting 0.\n> That way, the optimizer will only choose small-table-oriented plans\n> if the table has actually been verified to be small by vacuum.\n\nOK. Sounds good to me.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 Mar 1999 11:29:12 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Really slow query on 6.4.2"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> I'm not sure what is all contained in the stats, but the easiest one, I\n> think, to have done automagically is table sizes...add a tuple, update the\n> table of number of rows automatically. If that numbers gets \"off\", at\n> least it will be more reasonable then not doing anything...no?\n\nThe number of tuples is definitely the most important stat; updating it\nautomatically would make the optimizer work better. The stuff in\npg_statistics is not nearly as important.\n\nThe only objection I can think of to auto-updating reltuples is that\nit'd mean additional computation (to access and rewrite the pg_class\nentry) and additional disk I/O (to write back pg_class) for every INSERT\nand DELETE. There's also a potential problem of multiple backends all\ntrying to write pg_class and being delayed or even deadlocked because of\nit. (Perhaps the MVCC code will help here.)\n\nI'm not convinced that accurate stats are worth that cost, but I don't\nknow how big the cost would be anyway. Anyone have a feel for it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Mar 1999 14:34:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Really slow query on 6.4.2 "
},
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> I'm not convinced that accurate stats are worth that cost, but I don't\n> know how big the cost would be anyway. Anyone have a feel for it?\n\nThey are definitely *not* worth the cost. Especially since no table\nwill have the default 0 rows entry after a single vacuum analyze of\nthat table. Let's be honest: if you aren't interested in doing a\nvacuum, then really aren't interested in performance, anyway.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "25 Mar 1999 16:02:40 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Really slow query on 6.4.2 "
},
{
"msg_contents": "On 25 Mar 1999 [email protected] wrote:\n\n> Then <[email protected]> spoke up and said:\n> > I'm not convinced that accurate stats are worth that cost, but I don't\n> > know how big the cost would be anyway. Anyone have a feel for it?\n> \n> They are definitely *not* worth the cost. Especially since no table\n> will have the default 0 rows entry after a single vacuum analyze of\n> that table. Let's be honest: if you aren't interested in doing a\n> vacuum, then really aren't interested in performance, anyway.\n\nWhat I personally am not interested in is having to spend 20 minute per\nday with a totally locked up database because I want my queries to be\nfaster, when there are other ways of doing it...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 29 Mar 1999 14:37:12 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Really slow query on 6.4.2 "
},
{
"msg_contents": "On Thu, 25 Mar 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > I'm not sure what is all contained in the stats, but the easiest one, I\n> > think, to have done automagically is table sizes...add a tuple, update the\n> > table of number of rows automatically. If that numbers gets \"off\", at\n> > least it will be more reasonable then not doing anything...no?\n> \n> The number of tuples is definitely the most important stat; updating it\n> automatically would make the optimizer work better. The stuff in\n> pg_statistics is not nearly as important.\n> \n> The only objection I can think of to auto-updating reltuples is that\n> it'd mean additional computation (to access and rewrite the pg_class\n> entry) and additional disk I/O (to write back pg_class) for every INSERT\n> and DELETE. There's also a potential problem of multiple backends all\n> trying to write pg_class and being delayed or even deadlocked because of\n> it. (Perhaps the MVCC code will help here.)\n> \n> I'm not convinced that accurate stats are worth that cost, but I don't\n> know how big the cost would be anyway. Anyone have a feel for it?\n\nWe're not looking for perfect numbers here, how about something just\nstored in cache and periodically written out to disk? We already have the\nshard memory pool to work with...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 29 Mar 1999 14:40:00 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Really slow query on 6.4.2 "
},
{
"msg_contents": "On Thu, 25 Mar 1999, Tom Lane wrote:\n\n> Zeugswetter Andreas IZ5 <[email protected]> writes:\n> > 5. Actually postgresql has behaved in this manner because of certain\n> > \"bugs\" in the optimizer. Recently a lot of those \"bugs\" have been\n> > identified and \"fixed\", thus destroying the defacto rule based\n> > behavior.\n> \n> That's a real good point --- I think we've already heard a couple of\n> complaints about the new optimizer doing \"silly\" things that it didn't\n> use to do.\n> \n> I repeat my proposal: CREATE TABLE should insert a default size (say\n> about 1000 tuples) into pg_class.reltuples, rather than inserting 0.\n> That way, the optimizer will only choose small-table-oriented plans\n> if the table has actually been verified to be small by vacuum.\n\ninserting 0 is an accurate number, not 1000 ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 29 Mar 1999 14:49:25 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Really slow query on 6.4.2 "
},
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> On 25 Mar 1999 [email protected] wrote:\n> > They are definitely *not* worth the cost. Especially since no table\n> > will have the default 0 rows entry after a single vacuum analyze of\n> > that table. Let's be honest: if you aren't interested in doing a\n> > vacuum, then really aren't interested in performance, anyway.\n> \n> What I personally am not interested in is having to spend 20 minute per\n> day with a totally locked up database because I want my queries to be\n> faster, when there are other ways of doing it...\n\nUhm, no. The specific case we are talking about here is creation of a\ntable, inserting rows into it, and NEVER running vacuum analyze on\nit. This would not lock up your database for 20 minutes unless you\nare dropping and re-creating a bunch of tables. Even that case could\nbe scripted creatively[0], though. Further, you don't have to run it\non a whole database every night. Just the tables of interest.\n\nWe run a multi-gigabyte Ingres database her for our student systems.\nWhen we want to make sure that good plans are chosen, we sysmod and\noptimizedb it. Since we always want good plans, but rarely inload\nmassive amounts of data, we do this once a week.\n\nOne of the things to be kept in mind with performance tuning is\ntradeoffs. Does it make sense to penalize every transaction for the\nsake of updating statistics? (the answer is \"maybe\") Does it make\nsense to penalize every transaction to provide a recovery mechanism?\n(yes) Does it make sense to penalize every transaction to prevent any\none transaction from using up more than 1MB/s of bandwidth? (no)\nShould you extract the data to a binary flat file, read it in C,\ncollect the information of interest and then do something interesting\nwith it? (maybe)\n\n[0] Assuming the data are laid out \"sequentially\" on the index fields:\ncreate the table, chop off and insert only the first and last\nthousand[1] rows, vacuum, and then insert the rest.\n\n[1] Or perhaps a slightly bigger number. Or a sampling of the file\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================\n\n",
"msg_date": "29 Mar 1999 15:07:20 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Really slow query on 6.4.2 "
}
] |
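The statistic the whole argument above turns on is pg_class.reltuples: zero after CREATE TABLE, and updated only by VACUUM. Tom's proposal amounts to not trusting that zero. A hypothetical sketch of the idea in C; the names here are invented for illustration and this is not the actual planner code:

    #include <stdio.h>

    /* Default row count to assume for a never-vacuumed relation,
     * per Tom's proposal of "say about 1000 tuples". */
    #define DEFAULT_RELTUPLES 1000.0

    /* Hypothetical helper: what the planner should believe about a
     * relation's size given the raw pg_class.reltuples value. */
    static double
    estimated_reltuples(double reltuples)
    {
        /* Zero here almost always means "never vacuumed", not "known
         * to be empty", so fall back to a conservative default that
         * steers the optimizer away from tiny-table plans. */
        if (reltuples <= 0)
            return DEFAULT_RELTUPLES;
        return reltuples;
    }

    int main(void)
    {
        printf("never vacuumed: assume %.0f tuples\n",
               estimated_reltuples(0));
        printf("after vacuum of a 150000-row table: %.0f tuples\n",
               estimated_reltuples(150000));
        return 0;
    }

Whether the fallback lives in CREATE TABLE (storing 1000 into pg_class) or in the planner's read path is the open design question in the thread; the effect on plan choice is the same.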
[
{
"msg_contents": "Are it is good idea to add to PQsetdb parameters \n parameter \"USER to conect as\" ?\n\n(Or probably there is other way to specify postgres USER, different from\n pogram euid?)\n\n Thank you!\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n",
"msg_date": "Thu, 25 Mar 1999 15:06:03 +0300 (MSK)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "connection user"
},
{
"msg_contents": "Dmitry Samersoff <[email protected]> writes:\n> Are it is good idea to add to PQsetdb parameters \n> parameter \"USER to conect as\" ?\n\nYup. PQsetdb() is deprecated these days --- you should really be\ncalling PQsetdbLogin() or even better PQconnectdb() instead.\n\n> (Or probably there is other way to specify postgres USER, different from\n> pogram euid?)\n\nIf you don't provide a user ID parameter, libpq will look first for\nan environment variable PGUSER, and failing that use getpwuid(geteuid())\nto find out who you are.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Mar 1999 10:18:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] connection user "
}
] |
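In code, Tom's answer comes down to using the newer connection entry points, where the user is an explicit parameter instead of being inferred from PGUSER or the process euid. A minimal sketch; the host, database, and user values are placeholders:

    #include <stdio.h>
    #include "libpq-fe.h"

    int main(void)
    {
        /* Preferred: PQconnectdb() with a conninfo string that names
         * the user explicitly. */
        PGconn *conn = PQconnectdb("host=localhost dbname=test user=dmitry");

        /* Equivalent with the older PQsetdbLogin(), which takes the
         * login and password as its last two arguments:
         *
         *   conn = PQsetdbLogin("localhost", NULL, NULL, NULL,
         *                       "test", "dmitry", NULL);
         */
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }
        printf("connected\n");
        PQfinish(conn);
        return 0;
    }

With neither form supplying a user, libpq falls back to PGUSER and then to getpwuid(geteuid()), exactly as described above.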
[
{
"msg_contents": "Hello!\n\n I've just submitted locale-patch.tar.gz to patches list. Please someone\nextract the archive and apply the patch file locale-patch.\n Also, please remove executable bits off the file koi8-r/test-koi8.sql.in.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.",
"msg_date": "Thu, 25 Mar 1999 17:42:53 +0300 (MSK)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Locale patch (Greek locale and koi8-to-win1251 tests)"
},
{
"msg_contents": "> I've just submitted locale-patch.tar.gz to patches list. Please someone\n>extract the archive and apply the patch file locale-patch.\n\nI have applied your patches.\n\n> Also, please remove executable bits off the file koi8-r/test-koi8.sql.in.\n\nI don't know how to take care of this. Maybe I should do chmod in the \nrepository?\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 29 Mar 1999 18:04:44 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Locale patch (Greek locale and koi8-to-win1251 tests) "
},
{
"msg_contents": "Hello!\n\nOn Mon, 29 Mar 1999, Tatsuo Ishii wrote:\n> > I've just submitted locale-patch.tar.gz to patches list. Please someone\n> >extract the archive and apply the patch file locale-patch.\n> \n> I have applied your patches.\n\n Thanks!\n\n> > Also, please remove executable bits off the file koi8-r/test-koi8.sql.in.\n> \n> I don't know how to take care of this. Maybe I should do chmod in the \n> repository?\n\n Don't know - never worked with CSV.\n\n> --\n> Tatsuo Ishii\n> \n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Mon, 29 Mar 1999 14:38:59 +0400 (MSD)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Locale patch (Greek locale and koi8-to-win1251 tests) "
},
{
"msg_contents": "Hi!\n\nOn Mon, 29 Mar 1999, Tatsuo Ishii wrote:\n> > I've just submitted locale-patch.tar.gz to patches list. Please someone\n> >extract the archive and apply the patch file locale-patch.\n> \n> I have applied your patches.\n\n Now with these patches, I want to declare RECODE obsolete. (Oleg\nBartunov, any objection?)\n\n How can I include in documenation the warning: \"RECODE now declared\nobsolete and soon to be removed\"?\n\n Actually I want to remove RECODE at version 6.6.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Mon, 29 Mar 1999 14:42:12 +0400 (MSD)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Locale patch (Greek locale and koi8-to-win1251 tests) "
},
{
"msg_contents": "On Mon, 29 Mar 1999, Oleg Broytmann wrote:\n\n> Date: Mon, 29 Mar 1999 14:42:12 +0400 (MSD)\n> From: Oleg Broytmann <[email protected]>\n> Reply-To: [email protected]\n> To: PostgreSQL-development <[email protected]>\n> Subject: Re: [HACKERS] Locale patch (Greek locale and koi8-to-win1251 tests) \n> \n> Hi!\n> \n> On Mon, 29 Mar 1999, Tatsuo Ishii wrote:\n> > > I've just submitted locale-patch.tar.gz to patches list. Please someone\n> > >extract the archive and apply the patch file locale-patch.\n> > \n> > I have applied your patches.\n> \n> Now with these patches, I want to declare RECODE obsolete. (Oleg\n> Bartunov, any objection?)\n\nI vote for removing RECODE feature.\n\n> \n> How can I include in documenation the warning: \"RECODE now declared\n> obsolete and soon to be removed\"?\n> \n> Actually I want to remove RECODE at version 6.6.\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 29 Mar 1999 15:41:25 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Locale patch (Greek locale and koi8-to-win1251 tests) "
}
] |
[
{
"msg_contents": "Content-Type: text/plain; charset=us-ascii\nContent-Transfer-Encoding: 7bit\n\ncould someone give me an idea on how to find out the\nmeaning of the following :\n\nNOTICE: SIReadEntryData: cache state reset\nNOTICE: SIReadEntryData: cache state reset\nNOTICE: trying to delete a reldesc that does not exist.\nNOTICE: trying to delete a reldesc that does not exist.\n\nbest regards,\n\n-- \n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n David O'Farrell AerSoft Limited \n mailto:[email protected] 2 Northumberland Avenue,\n Dun Laoghaire,Co. Dublin\n\tDirect Phone 353-1-2145950\n Phone: 01-2301166 Fax: 01-2301167\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n",
"msg_date": "Thu, 25 Mar 1999 17:02:37 +0000",
"msg_from": "\"David O'Farrell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "SIReadEntryData: cache state reset"
}
] |
[
{
"msg_contents": "I'm leaving shortly for a two week trip with my family. If bugs in ecpg come\nup I will fix them as soon as I'm back. \n\nI haven't finished working on the docs. Tom, could you please check the file\necpg.sgml for correct syntax? Thanks.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Thu, 25 Mar 1999 20:09:37 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Going on vacation"
},
{
"msg_contents": "On Thu, 25 Mar 1999, Michael Meskes wrote:\n\n> I'm leaving shortly for a two week trip with my family. If bugs in ecpg come\n> up I will fix them as soon as I'm back. \n\nEnjoy your holidays... :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 25 Mar 1999 19:23:20 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Going on vacation"
},
{
"msg_contents": "> I haven't finished working on the docs. Tom, could you please check \n> the file ecpg.sgml for correct syntax? Thanks.\n\nOK, will check next week...\n\n - Tom\n",
"msg_date": "Fri, 26 Mar 1999 19:36:50 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Going on vacation"
}
] |
[
{
"msg_contents": "My vote is 'YES'.\n\tDEJ\n \n> Is it worth adding a new typalign value specifically for \n> int8, in order\n> to make the world safe for int8 columns in system tables?\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Mar 1999 14:20:19 -0600",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] 64-bit hashjoins "
}
] |
[
{
"msg_contents": "Does anyone know what the system table pg_parg is, was, or might be\nused for?\n\nA comment at the head of include/catalog/pg_parg.h says\n\n * [whatever this relation was, it doesn't seem to be used anymore --djm]\n\nand as far as I can tell this is true --- the source code contains\nno references to pg_parg or any of the field names therein. The table\nis utterly undocumented, but it looks like it might once in the distant\npast have represented proc argument types, which we now keep elsewhere.\n\nThe table is suffering from bit rot, in that its \"parproid\" field\ncontains OIDs of rows in both pg_proc and pg_operator. Rather than\ntrying to intuit what it is for enough to fix this, I propose just\ndeleting the darn thing. Do I hear any objections?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Mar 1999 10:35:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_parg system table is suffering from software rot"
},
{
"msg_contents": "> Does anyone know what the system table pg_parg is, was, or might be\n> used for?\n> \n> A comment at the head of include/catalog/pg_parg.h says\n> \n> * [whatever this relation was, it doesn't seem to be used anymore --djm]\n> \n> and as far as I can tell this is true --- the source code contains\n> no references to pg_parg or any of the field names therein. The table\n> is utterly undocumented, but it looks like it might once in the distant\n> past have represented proc argument types, which we now keep elsewhere.\n> \n> The table is suffering from bit rot, in that its \"parproid\" field\n> contains OIDs of rows in both pg_proc and pg_operator. Rather than\n> trying to intuit what it is for enough to fix this, I propose just\n> deleting the darn thing. Do I hear any objections?\n> \n\nHouse-clean away.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 26 Mar 1999 11:33:55 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_parg system table is suffering from software rot"
}
] |
[
{
"msg_contents": "I like it!\n\n\t-----Original Message-----\n\tFrom:\tD'Arcy\" \"J.M.\" Cain [SMTP:[email protected]]\n\tSent:\tFriday, March 26, 1999 10:45 AM\n\tTo:\[email protected]\n\tCc:\[email protected]; [email protected];\[email protected]; [email protected]; [email protected];\[email protected]; [email protected]; [email protected]\n\tSubject:\tRe: [HACKERS] PostgreSQL LOGO (was: Developers Globe\n(FINAL))\n\n\tThus spake Jan Wieck\n\t> I've removed that jaundice one and slightly polished up the\n\t> jewel. There's now a \"Carried by\" logo too. And I've added\n\t> another idea to the whole thing - just take a look.\n\n\tI know that \"Powered by\" isn't quite right but \"Carried by\" doesn't\n\tseem right either. I understand the elephant reference in \"Carried\n\tby\" but I bet most people won't get it right off. How does this\n\tone sound? \"Empowered by.\"\n\n\t-- \n\tD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three\nwolves\n\thttp://www.druid.net/darcy/ | and a sheep voting on\n\t+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 26 Mar 1999 13:17:02 -0600",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] PostgreSQL LOGO (was: Developers Globe (FINAL))"
},
{
"msg_contents": "\nI don't like the \"Empowered by\"...reminds me of some feminist group ... :)\n\n\nOn Fri, 26 Mar 1999, Michael Davis wrote:\n\n> I like it!\n> \n> \t-----Original Message-----\n> \tFrom:\tD'Arcy\" \"J.M.\" Cain [SMTP:[email protected]]\n> \tSent:\tFriday, March 26, 1999 10:45 AM\n> \tTo:\[email protected]\n> \tCc:\[email protected]; [email protected];\n> [email protected]; [email protected]; [email protected];\n> [email protected]; [email protected]; [email protected]\n> \tSubject:\tRe: [HACKERS] PostgreSQL LOGO (was: Developers Globe\n> (FINAL))\n> \n> \tThus spake Jan Wieck\n> \t> I've removed that jaundice one and slightly polished up the\n> \t> jewel. There's now a \"Carried by\" logo too. And I've added\n> \t> another idea to the whole thing - just take a look.\n> \n> \tI know that \"Powered by\" isn't quite right but \"Carried by\" doesn't\n> \tseem right either. I understand the elephant reference in \"Carried\n> \tby\" but I bet most people won't get it right off. How does this\n> \tone sound? \"Empowered by.\"\n> \n> \t-- \n> \tD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three\n> wolves\n> \thttp://www.druid.net/darcy/ | and a sheep voting on\n> \t+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 26 Mar 1999 15:49:01 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] PostgreSQL LOGO (was: Developers Globe (FINAL))"
},
{
"msg_contents": "> \n> I don't like the \"Empowered by\"...reminds me of some feminist group ... :)\n> \n\nDitto.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 26 Mar 1999 19:22:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL LOGO (was: Developers Globe (FINAL))"
}
] |
[
{
"msg_contents": "Backed by PostgreSQL\n\n\t-Michael Robinson\n\n",
"msg_date": "Sat, 27 Mar 1999 13:28:10 +0800 (CST)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL LOGO"
},
{
"msg_contents": "\nHrmmmm...that one almost sounds perfect...its accurate, no? :)\n\n\nOn Sat, 27 Mar 1999, Michael Robinson wrote:\n\n> Backed by PostgreSQL\n> \n> \t-Michael Robinson\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 27 Mar 1999 15:23:22 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL LOGO"
},
{
"msg_contents": "> \n> Hrmmmm...that one almost sounds perfect...its accurate, no? :)\n> \n> \n> On Sat, 27 Mar 1999, Michael Robinson wrote:\n> \n> > Backed by PostgreSQL\n> > \n> > \t-Michael Robinson\n> > \n> > \n\nYes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 28 Mar 1999 16:59:12 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL LOGO"
},
{
"msg_contents": ">\n> >\n> > Hrmmmm...that one almost sounds perfect...its accurate, no? :)\n> >\n> >\n> > On Sat, 27 Mar 1999, Michael Robinson wrote:\n> >\n> > > Backed by PostgreSQL\n> > >\n> > > -Michael Robinson\n> > >\n> > >\n>\n> Yes.\n\n I like it too - done.\n\n Is that all now the final? Should the logo go onto all the\n menu pages for the left frame?\n\n Also I think we should include the \"Backed by\" GIF in the\n v6.5 distribution. Along with a small README telling that\n our preferred style to get embedded is with an HREF to us but\n with BORDER=0.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 29 Mar 1999 10:35:13 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL LOGO"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n\n> Is that all now the final? Should the logo go onto all the\n> menu pages for the left frame?\n\nI think the new logo looks very good and fits PostgreSQL nicely.\n\nBut, I wouldn't put it at the top of every menu. At the top of the\nhome page, yes. And at the bottom of a few key content pages.\n\nI hope the purpose of the web site is not to get more people to think\nabout elephants and diamonds when they see \"PostgreSQL\" and vice\nversa. The Internet is already awash in logos.\n\n> Also I think we should include the \"Backed by\" GIF in the\n> v6.5 distribution. Along with a small README telling that\n> our preferred style to get embedded is with an HREF to us but\n> with BORDER=0.\n\nWhy not include full sample HTML in the readme, e.g.\n[stolen from Jan's example]\n\n<a href=\"http://www.postgresql.org/\">\n<img src=\"pg_backed.gif\" width=180 height=40 border=0 alt=\"Carried by PostgreSQL\">\n</a>\n",
"msg_date": "29 Mar 1999 03:28:04 -0600",
"msg_from": "Hal Snyder <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL LOGO"
},
{
"msg_contents": "On 29 Mar 1999, Hal Snyder wrote:\n\n> [email protected] (Jan Wieck) writes:\n> \n> > Is that all now the final? Should the logo go onto all the\n> > menu pages for the left frame?\n> \n> I think the new logo looks very good and fits PostgreSQL nicely.\n> \n> But, I wouldn't put it at the top of every menu. At the top of the\n> home page, yes. And at the bottom of a few key content pages.\n> \n> I hope the purpose of the web site is not to get more people to think\n> about elephants and diamonds when they see \"PostgreSQL\" and vice\n> versa. The Internet is already awash in logos.\n\nI don't know...I think the menu should always have the same look and feel,\nno matter what page you are on. Might need to be scaled down a little\nbit, but having that elephant/diamond combination at the top of the menu\nno matter where you are on the web page, I think, led's uniformity to the\npages...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 29 Mar 1999 08:54:48 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL LOGO"
}
] |
[
{
"msg_contents": "Hi to all\nI have a problem to join a table and a view. My table and view each has\n150000\nrecords. When I join these it takes about 30sec. How can improve\nperformance.\nRegards\nHossein\n\n",
"msg_date": "Sat, 27 Mar 1999 12:00:07 +0430",
"msg_from": "pourreza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Low Performance"
},
{
"msg_contents": "pourreza wrote:\n> \n> Hi to all\n> I have a problem to join a table and a view. My table and view each has\n> 150000\n> records. When I join these it takes about 30sec. How can improve\n> performance.\n\n1. Use EXPLAIN on the join and add indexes where appropriate\n2. Use a join that returns less rows\n3. Use a faster computer or increase buffer size\n4. Redesign your tables.\n5. Tell us more about your setup and tables/views\n\n----------------\nHannu\n",
"msg_date": "Sat, 24 Apr 1999 16:51:33 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Low Performance"
}
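A minimal sketch of the diagnostic sequence Hannu suggests in point 1, using hypothetical table, view, and column names since the original poster's schema is not shown:

    EXPLAIN SELECT o.id, v.total
    FROM orders o, order_totals v
    WHERE o.id = v.order_id;

    -- If the plan shows sequential scans on the join columns,
    -- index the underlying base tables (views cannot be indexed):
    CREATE INDEX orders_id_idx ON orders (id);
    VACUUM ANALYZE orders;

VACUUM ANALYZE refreshes the statistics the optimizer uses when choosing between sequential and index scans, so it is worth running after creating the index.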
] |
[
{
"msg_contents": "I have just come across an weird datetime representation problem:\n\nUnder two BSD/OS machines, running 6.4.2. I get these two results:\n\ncustomer=> select '28-03-1999'::datetime;\n?column? \n----------------------------\nSun 28 Mar 00:00:00 1999 EET\n(1 row)\n\nand\n\ncustomer=> select '28-03-1999'::datetime;\n?column? \n-----------------------------\nSun 28 Mar 02:40:50 1999 EEST\n(1 row)\n\nThe difference is... in the /etc/localtime file. The 'right' file is actually \nquite older (the system has been upgraded from earlier versions several times \nand the /etc/localtime is dated Oct 8 1996). The 'wrong' file is what recent \nversions of BSD/OS install (at least after BSD/OS 3.0) for 'Europe/Sofia'.\n\nCould some expect in datetime/timezone representation look at this - I am \nattaching the two /etc/localtime files.\n\nRegards,\nDaniel",
"msg_date": "Sat, 27 Mar 1999 15:04:43 +0200",
"msg_from": "Daniel Kalchev <[email protected]>",
"msg_from_op": true,
"msg_subject": "timezone problem"
}
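One low-tech way to narrow this down is to bracket the suspected DST transition with plain casts and compare the two machines line by line; the specific times below are only illustrative:

    SELECT '27-03-1999 23:30'::datetime;
    SELECT '28-03-1999 00:30'::datetime;
    SELECT '28-03-1999 03:30'::datetime;

If the machines agree on some of these and disagree on others, the divergence is in the transition rule encoded in the two /etc/localtime files rather than in PostgreSQL's datetime code.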
] |
[
{
"msg_contents": "With a CVS update from this morning, I'm finding that the 3-way join\n\nSELECT p1.oid, p2.oid, p2.oprname, p3.oid, p3.opcname\nFROM pg_amop AS p1, pg_operator AS p2, pg_opclass AS p3\nWHERE p1.amopopr = p2.oid AND p1.amopclaid = p3.oid;\n\nis only returning a small fraction of the rows that it should.\nThe problem is obvious from EXPLAIN:\n\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=64.12 size=507 width=84)\n -> Seq Scan on pg_operator p2 (cost=24.73 size=507 width=36)\n -> Hash Join (cost=17.34 size=161 width=48)\n -> Seq Scan on pg_opclass p3 (cost=1.86 size=26 width=36)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on pg_amop p1 (cost=7.31 size=161 width=12)\n\nThe optimizer seems to have forgotten that Merge Join needs sorted\ninputs... either that or it now believes that a sequential scan\nwill yield sorted data...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Mar 1999 16:43:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Merge joins badly broken in current sources"
}
] |
[
{
"msg_contents": "I have added a new regression test \"type_sanity\" to perform mechanical\ncross-checks on pg_type, pg_class, and related tables. I also greatly\nexpanded \"opr_sanity\" to check pg_proc, pg_aggregate, pg_am & friends\nas well as checking pg_operator more thoroughly.\n\nThis turned up the usual quota of small errors in the tables,\nof course ;-)\n\nAn annoying limitation on these tests is that they can't enforce\noperand type cross-checks (like whether a pg_operator entry has\nthe same operand and result types as its underlying pg_proc) because\nof binary-compatible-type cheats that are being done in a few existing\ncases. For now, I have left those checks out of the regression tests.\n\nIt would be nice to develop an SQL representation of binary type\ncompatibility so that those checks could be made cleanly.\n\nAlternatively, we could make multiple entries in pg_proc for any\nprocedure being used to support multiple types (for example, if\nint4eq is also used to implement equality of \"oid\" then it would need\nto be listed again with oid as the input data type). This would only\ncost another couple dozen entries in pg_proc, so it might be the best\nnear-term solution.\n\n(A very short-term answer would be to turn on the checks anyway, and\nput the known exception cases into the expected outputs for the tests.\nThat's pretty ugly, not to mention a pain to maintain, but it might be\na reasonable thing to do if we aren't going to implement a better\nsolution soon...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Mar 1999 21:27:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Another new regress test"
},
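For illustration, the operand-type cross-check described above could look something like this sketch (not the actual regression query); with the current binary-compatible cheats in place it returns the known exception rows, which is why it is left out of the tests for now:

    SELECT o.oid, o.oprname
    FROM pg_operator o, pg_proc p
    WHERE o.oprcode = p.oid
      AND o.oprresult != p.prorettype;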
{
"msg_contents": "> (A very short-term answer would be to turn on the checks anyway, and\n> put the known exception cases into the expected outputs for the tests.\n> That's pretty ugly, not to mention a pain to maintain, but it might be\n> a reasonable thing to do if we aren't going to implement a better\n> solution soon...)\n\nOr, you could eliminate those types in the WHERE clause. That may be\neasier to maintain.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Mar 1999 10:51:03 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Another new regress test"
}
] |
[
{
"msg_contents": "I noticed just now that there are a lot of SQL-language procedures in\npg_proc whose only purpose is to provide alternative names for built-in\nfunctions. For example, none of the seven functions named \"float8\"\nis actually a built-in; they are all SQL aliases for built-in functions\nlike i4tod().\n\nIt strikes me that this is pretty inefficient. For example, converting\nan int4 column to float seems to take about twice as long if you do it\nwith float8(int4column) as if you do it with i4tod(int4column), because\nthe former involves a level of SQL overhead.\n\nI am thinking about fixing this by decoupling the user-level name of\nan internal function from its C-language name. The simplest way seems\nto be to modify pg_proc.h and Gen_fmgrtab.sh.in so that the C-language\nname of an internal function is stored in pg_proc's \"prosrc\" field\n(which is currently unused for internal functions) rather than being\ntaken from \"proname\". Then, all of the SQL functions that are simply\naliases for internal functions could be converted to plain internal\nfunction entries that have proname different from prosrc.\n\nAnyone have an objection to this? I suppose we'd need to check that\nthe regression tests still exercise SQL functions ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Mar 1999 22:20:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Speedup idea: avoid using SQL procedures as aliases"
},
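The aliases in question are easy to inspect in the catalog; for example (a sketch, with output omitted and the exact rows varying by version):

    SELECT p.proname, l.lanname, p.prosrc
    FROM pg_proc p, pg_language l
    WHERE p.prolang = l.oid
      AND p.proname = 'float8';

Each 'sql'-language row is a one-call wrapper along the lines of 'select i4tod($1)'; under the proposed scheme those rows become 'internal'-language entries whose prosrc names the C function directly.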
{
"msg_contents": "> I noticed just now that there are a lot of SQL-language procedures in\n> pg_proc whose only purpose is to provide alternative names for \n> built-in functions.\n\nYeah. Neat hack, eh? Edmund Mergl found this, and allowed us to use\ngeneric names for functions for the first time. But...\n\n> It strikes me that this is pretty inefficient.\n> I am thinking about fixing this by decoupling the user-level name of\n> an internal function from its C-language name. The simplest way seems\n> to be to modify pg_proc.h and Gen_fmgrtab.sh.in so that the C-language\n> name of an internal function is stored in pg_proc's \"prosrc\" field\n> (which is currently unused for internal functions) rather than being\n> taken from \"proname\". Then, all of the SQL functions that are simply\n> aliases for internal functions could be converted to plain internal\n> function entries that have proname different from prosrc.\n> Anyone have an objection to this? I suppose we'd need to check that\n> the regression tests still exercise SQL functions ;-)\n\nNo objection; I've toyed with the idea of doing this, but didn't have\nthe guts to touch the layout of system tables. You seem to have no\nsuch qualms ;)\n\nI'd be happy to help with the conversion, though I'd suggest that this\nwould perhaps be a great topic for v6.6 since it does involve some\nfundamental (hopefully isolated) changes. Also, for my participation\n(certainly not required) I'd have more time during the next cycle...\n\n - Tom\n",
"msg_date": "Mon, 29 Mar 1999 17:28:34 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Speedup idea: avoid using SQL procedures as aliases"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I'd be happy to help with the conversion, though I'd suggest that this\n> would perhaps be a great topic for v6.6 since it does involve some\n> fundamental (hopefully isolated) changes.\n\nActually, the changes were *extremely* isolated, being visible nowhere\nother than fmgr. Editing pg_proc.h was the only hard part (fortunately\nI was able to do the bulk of the work with a sed script).\n\nI see I just got it squeezed in under the wire before Marc declared\nbeta freeze, however ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Mar 1999 12:46:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Speedup idea: avoid using SQL procedures as aliases "
},
{
"msg_contents": "> Thomas Lockhart <[email protected]> writes:\n> > I'd be happy to help with the conversion, though I'd suggest that this\n> > would perhaps be a great topic for v6.6 since it does involve some\n> > fundamental (hopefully isolated) changes.\n> \n> Actually, the changes were *extremely* isolated, being visible nowhere\n> other than fmgr. Editing pg_proc.h was the only hard part (fortunately\n> I was able to do the bulk of the work with a sed script).\n> \n> I see I just got it squeezed in under the wire before Marc declared\n> beta freeze, however ;-)\n> \n\nAh, sounds like me.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Mar 1999 12:54:16 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Speedup idea: avoid using SQL procedures as aliases"
}
] |
[
{
"msg_contents": "Has this come up before? 6.4.2 and current sources both have\nthis problem:\n\n\tselect count(*) from pg_proc where pg_proc.proargtypes[0] = 701;\nworks, but\n\tselect count(*) from pg_proc where proargtypes[0] = 701;\nfails with ERROR: Unable to locate type name 'proargtypes' in catalog\n\nThe grammar doesn't seem to have a case that allows for a subscripted\nattribute name without a relation name in front of it.\n\nIt looks like fixing this might be as easy as making the \"ColId\"\ncases in a_expr, b_expr, possibly other places include an\nopt_indirection item like columnElem does. But maybe there's\nmore to it than meets the eye?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Mar 1999 23:16:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parser doesn't grok unqualified array element"
},
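The same workaround applies to user tables; assuming a table created as below, only the qualified form currently gets past the parser:

    CREATE TABLE arrtest (vals int4[]);

    SELECT * FROM arrtest WHERE arrtest.vals[1] = 42;   -- works
    SELECT * FROM arrtest WHERE vals[1] = 42;           -- fails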
{
"msg_contents": "> Has this come up before? 6.4.2 and current sources both have\n> this problem:\n> \n> \tselect count(*) from pg_proc where pg_proc.proargtypes[0] = 701;\n> works, but\n> \tselect count(*) from pg_proc where proargtypes[0] = 701;\n> fails with ERROR: Unable to locate type name 'proargtypes' in catalog\n> \n> The grammar doesn't seem to have a case that allows for a subscripted\n> attribute name without a relation name in front of it.\n> \n> It looks like fixing this might be as easy as making the \"ColId\"\n> cases in a_expr, b_expr, possibly other places include an\n> opt_indirection item like columnElem does. But maybe there's\n> more to it than meets the eye?\n\nNo, it is that easy. For some reason, no one has done it yet. Our TODO\nlist has:\n\n\t* array index references without table name cause problems\n\nI am sure the complexity of yacc grammar rules have kept some away from\nfixing this. \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Mar 1999 10:59:13 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parser doesn't grok unqualified array element"
},
{
"msg_contents": "> Has this come up before? 6.4.2 and current sources both have\n> this problem:\n> select count(*) from pg_proc where pg_proc.proargtypes[0] = 701;\n> works, but\n> select count(*) from pg_proc where proargtypes[0] = 701;\n> fails with\n> ERROR: Unable to locate type name 'proargtypes' in catalog\n> \n> The grammar doesn't seem to have a case that allows for a subscripted\n> attribute name without a relation name in front of it.\n> It looks like fixing this might be as easy as making the \"ColId\"\n> cases in a_expr, b_expr, possibly other places include an\n> opt_indirection item like columnElem does. But maybe there's\n> more to it than meets the eye?\n\nIt has been reported, and is probably on the ToDo list as something. I\nhave been carrying it on my personal ToDo for a while, just to make\nsure it doesn't get lost.\n\nI would try your solution if I were fixing it, which I'm not yet. Go\nfer it dude!\n\n - Tom\n",
"msg_date": "Mon, 29 Mar 1999 17:31:53 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parser doesn't grok unqualified array element"
}
] |
[
{
"msg_contents": "Hello,\n\nI have downloaded the latest snapshot-version from 27th, compiled and\ninstalled it onto a Linux 2.1.131, libc6 I have the following table:\n\nCREATE TABLE \"west0\" (\n \"lfnr\" int8,\n \"kdnr\" int8,\n \"artnr\" int8,\n \"eknumsatz\" float8,\n \"ekumsatz\" float8,\n \"vkumsatz\" float8,\n \"lvkumsatz\" float8,\n \"menge\" float8,\n \"anz\" int2,\n \"datum\" date);\n\n\nDoing the following is quite fast and memory usage of the postmaster is\nok (abt 3MB).\n\nstamm=> select count(*) from west0;\n count\n--------\n12290703\n(1 row)\n\n\nBut doing the following aggregate on the same table will crash the\nbackend:\n\nstamm=> select sum(ekumsatz), sum(vkumsatz),sum(lvkumsatz),count(*) from\nwest0;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\nTake a look at the output of top after about 2 minutes:\n\n 6:43pm up 25 days, 10:19, 1 user, load average: 1.97, 0.71, 0.42\n70 processes: 68 sleeping, 2 running, 0 zombie, 0 stopped\nCPU states: 25.6% user, 11.1% system, 1.5% nice, 63.4% idle\nMem: 257244K av, 254048K used, 3196K free, 6116K shrd, 13100K buff\n\nSwap: 130748K av, 122264K used, 8484K free 18812K\ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME\nCOMMAND\n12253 postgres 16 0 304M 205M 1148 R 0 33.8 81.7 2:00\npostmaster\n ^^^^^^^^^^\n\n\nAny idea?\n\nKind regards\n\nMichael Contzen\n\nDohle Handelsgruppe Systemberatung GmbH, Germany\nE-Mail [email protected]\n\n\n",
"msg_date": "Sun, 28 Mar 1999 17:46:58 +0200",
"msg_from": "Michael Contzen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Memory grows without bounds in aggregates!"
},
{
"msg_contents": "Michael Contzen <[email protected]> writes:\n> [ out of memory for ]\n> stamm=> select sum(ekumsatz), sum(vkumsatz),sum(lvkumsatz),count(*) from\n> west0;\n\nRight, this is an instance of a known problem (palloc'd temporaries for\naggregate functions aren't freed until end of statement). I think\nsomeone was looking into a quick-hack patch for aggregates, but there\nare comparable problems in evaluation of WHERE expressions, COPY, etc.\nWe really need a general-purpose solution, and that probably won't\nhappen till 6.6.\n\nIn the meantime, I expect that doing only one float8 sum() per select\nwould take a third as much memory --- you might find that that's an\nadequate workaround for the short run.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 28 Mar 1999 12:14:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Memory grows without bounds in aggregates! "
},
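Spelled out, the workaround splits the query from the original report into separate statements, each of which releases its palloc'd temporaries at statement end:

    SELECT sum(ekumsatz) FROM west0;
    SELECT sum(vkumsatz) FROM west0;
    SELECT sum(lvkumsatz) FROM west0;
    SELECT count(*) FROM west0;

count(*) accumulates a pass-by-value integer rather than palloc'd float8 temporaries, so it could stay combined with one of the sums if a single round trip matters.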
{
"msg_contents": "> Michael Contzen <[email protected]> writes:\n> > [ out of memory for ]\n> > stamm=> select sum(ekumsatz), sum(vkumsatz),sum(lvkumsatz),count(*) from\n> > west0;\n> \n> Right, this is an instance of a known problem (palloc'd temporaries for\n> aggregate functions aren't freed until end of statement). I think\n> someone was looking into a quick-hack patch for aggregates, but there\n> are comparable problems in evaluation of WHERE expressions, COPY, etc.\n> We really need a general-purpose solution, and that probably won't\n> happen till 6.6.\n> \n> In the meantime, I expect that doing only one float8 sum() per select\n> would take a third as much memory --- you might find that that's an\n> adequate workaround for the short run.\n> \n\nI thought we fixed this recently with that aggregate patch? \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Mar 1999 11:03:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Memory grows without bounds in aggregates!"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Right, this is an instance of a known problem (palloc'd temporaries for\n>> aggregate functions aren't freed until end of statement).\n\n> I thought we fixed this recently with that aggregate patch? \n\nNo, we backed out said patch because it was busted (tried to free temp\neven for pass-by-value types :-(). Anyone want to try again?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Mar 1999 12:40:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Memory grows without bounds in aggregates! "
}
] |
[
{
"msg_contents": "So, beta can be started...\n\nMy vacation begins today.\nI'll come back Apr 5th and run mooore tests.\n\nThere are also some small things to do.\n\nBTW, btrees are changed -> initdb is required...\n\nHave a good days!\n\nVadim\n",
"msg_date": "Mon, 29 Mar 1999 05:35:21 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum updated..."
},
{
"msg_contents": "\nAlmight Webmaster(s)...please propogate the news!!\n\nWe are now officially in Beta.\n\nRelease date is officially set for May 1st, no new features are to be\nadded, only bug fixes from here until then. This time through, I'm going\nto be really really harsh...I pull CVS access to anyone adding new\nfeatures :) We're already 2 months late on this release, and I'd like for\nit to be *smoother* then the last one :)\n\nOn May 1st, v6.5 will be tag'd so that work can resume...\n\nVadim/Bruce...enjoy your holidays and look forward to seeing you both back\n:)\n\n\n\nOn Mon, 29 Mar 1999, Vadim Mikheev wrote:\n\n> So, beta can be started...\n> \n> My vacation begins today.\n> I'll come back Apr 5th and run mooore tests.\n> \n> There are also some small things to do.\n> \n> BTW, btrees are changed -> initdb is required...\n> \n> Have a good days!\n> \n> Vadim\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 29 Mar 1999 09:00:29 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum updated..."
},
{
"msg_contents": "On Mon, 29 Mar 1999, The Hermit Hacker wrote:\n\n> \n> Almight Webmaster(s)...please propogate the news!!\n\nThe announcement is now on the main page. Updates to follow.\n\nVince.\n\n> \n> We are now officially in Beta.\n> \n> Release date is officially set for May 1st, no new features are to be\n> added, only bug fixes from here until then. This time through, I'm going\n> to be really really harsh...I pull CVS access to anyone adding new\n> features :) We're already 2 months late on this release, and I'd like for\n> it to be *smoother* then the last one :)\n> \n> On May 1st, v6.5 will be tag'd so that work can resume...\n> \n> Vadim/Bruce...enjoy your holidays and look forward to seeing you both back\n> :)\n> \n> \n> \n> On Mon, 29 Mar 1999, Vadim Mikheev wrote:\n> \n> > So, beta can be started...\n> > \n> > My vacation begins today.\n> > I'll come back Apr 5th and run mooore tests.\n> > \n> > There are also some small things to do.\n> > \n> > BTW, btrees are changed -> initdb is required...\n> > \n> > Have a good days!\n> > \n> > Vadim\n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 29 Mar 1999 08:17:41 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum updated..."
},
{
"msg_contents": ">Almight Webmaster(s)...please propogate the news!!\n>\n>We are now officially in Beta.\n>\n>Release date is officially set for May 1st, no new features are to be\n>added, only bug fixes from here until then. This time through, I'm going\n>to be really really harsh...I pull CVS access to anyone adding new\n>features :) We're already 2 months late on this release, and I'd like for\n>it to be *smoother* then the last one :)\n>\n>On May 1st, v6.5 will be tag'd so that work can resume...\n\nI have some patches contributed by a user corresponding to one of our\nTODO list:\n\n* Add version number in startup banners for psql and postmaster\n\n(actually the patches only add banners to psql, not to postmaster)\n\nAlso these include tiny fixes to psql. Do I have any chance to apply\nthem for 6.5beta?\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 30 Mar 1999 10:40:32 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum updated... "
},
{
"msg_contents": "On Tue, 30 Mar 1999, Tatsuo Ishii wrote:\n\n> >Almight Webmaster(s)...please propogate the news!!\n> >\n> >We are now officially in Beta.\n> >\n> >Release date is officially set for May 1st, no new features are to be\n> >added, only bug fixes from here until then. This time through, I'm going\n> >to be really really harsh...I pull CVS access to anyone adding new\n> >features :) We're already 2 months late on this release, and I'd like for\n> >it to be *smoother* then the last one :)\n> >\n> >On May 1st, v6.5 will be tag'd so that work can resume...\n> \n> I have some patches contributed by a user corresponding to one of our\n> TODO list:\n> \n> * Add version number in startup banners for psql and postmaster\n> \n> (actually the patches only add banners to psql, not to postmaster)\n> \n> Also these include tiny fixes to psql. Do I have any chance to apply\n> them for 6.5beta?\n\nHow will this affect user scripts that use psql? \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 29 Mar 1999 21:49:56 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum updated... "
},
{
"msg_contents": ">> I have some patches contributed by a user corresponding to one of our\n>> TODO list:\n>> \n>> * Add version number in startup banners for psql and postmaster\n>> \n>> (actually the patches only add banners to psql, not to postmaster)\n>> \n>> Also these include tiny fixes to psql. Do I have any chance to apply\n>> them for 6.5beta?\n>\n>How will this affect user scripts that use psql? \n\nNothing except new option \"-E\" which shows actual queries issued by\nsome \\ commands (Example session follows). Sorry, I forgot to mention\nabout it. I believe this will greatly reduce beginners questions: \"How\ncan I list tables in my database?\":-)\n\n./psql -E regression Welcome to the POSTGRESQL interactive sql\nmonitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.0 on i386-unknown-freebsd2.2.6, compiled by gcc 2.7.2.]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: regression\n\nregression=> \\d\nQUERY: SELECT usename, relname, relkind, relhasrules FROM pg_class, pg_user WHERE ( relkind = 'r' OR relkind = 'i' OR relkind = 'S') and relname !~ '^pg_' and usesysid = relowner ORDER BY relname \n\nDatabase = regression\n +------------------+----------------------------------+----------+\n | Owner | Relation | Type |\n +------------------+----------------------------------+----------+\n | t-ishii | a_star | table |\n | t-ishii | abstime_tbl | table |\n[snip]\n\n\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 30 Mar 1999 12:57:41 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum updated... "
},
{
"msg_contents": "On Tue, 30 Mar 1999, Tatsuo Ishii wrote:\n\n> >> I have some patches contributed by a user corresponding to one of our\n> >> TODO list:\n> >> \n> >> * Add version number in startup banners for psql and postmaster\n> >> \n> >> (actually the patches only add banners to psql, not to postmaster)\n> >> \n> >> Also these include tiny fixes to psql. Do I have any chance to apply\n> >> them for 6.5beta?\n> >\n> >How will this affect user scripts that use psql? \n> \n> Nothing except new option \"-E\" which shows actual queries issued by\n> some \\ commands (Example session follows). Sorry, I forgot to mention\n> about it. I believe this will greatly reduce beginners questions: \"How\n> can I list tables in my database?\":-)\n\nSounds and looks reasonable...as it doesn't affect the backend at all,\nplease go for it :)\n\n\n> \n> ./psql -E regression Welcome to the POSTGRESQL interactive sql\n> monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> [PostgreSQL 6.5.0 on i386-unknown-freebsd2.2.6, compiled by gcc 2.7.2.]\n> \n> type \\? for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: regression\n> \n> regression=> \\d\n> QUERY: SELECT usename, relname, relkind, relhasrules FROM pg_class, pg_user WHERE ( relkind = 'r' OR relkind = 'i' OR relkind = 'S') and relname !~ '^pg_' and usesysid = relowner ORDER BY relname \n> \n> Database = regression\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | t-ishii | a_star | table |\n> | t-ishii | abstime_tbl | table |\n> [snip]\n> \n> \n> --\n> Tatsuo Ishii\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 30 Mar 1999 00:07:12 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum updated... "
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> On Tue, 30 Mar 1999, Tatsuo Ishii wrote:\n> \n> > >> I have some patches contributed by a user corresponding to one of our\n> > >> * Add version number in startup banners for psql and postmaster\n> Sounds and looks reasonable...as it doesn't affect the backend at all,\n> please go for it :)\n\nOh good, scrappy gave the right answer :) That is a nice feature...\n\n - Tom\n",
"msg_date": "Tue, 30 Mar 1999 06:09:40 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum updated..."
},
{
"msg_contents": "> The Hermit Hacker wrote:\n> > \n> > On Tue, 30 Mar 1999, Tatsuo Ishii wrote:\n> > \n> > > >> I have some patches contributed by a user corresponding to one of our\n> > > >> * Add version number in startup banners for psql and postmaster\n> > Sounds and looks reasonable...as it doesn't affect the backend at all,\n> > please go for it :)\n> \n> Oh good, scrappy gave the right answer :) That is a nice feature...\n\nThomas is definitely back!\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 30 Mar 1999 10:57:17 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum updated..."
},
{
"msg_contents": "On Tue, 30 Mar 1999, Bruce Momjian wrote:\n\n> > The Hermit Hacker wrote:\n> > > \n> > > On Tue, 30 Mar 1999, Tatsuo Ishii wrote:\n> > > \n> > > > >> I have some patches contributed by a user corresponding to one of our\n> > > > >> * Add version number in startup banners for psql and postmaster\n> > > Sounds and looks reasonable...as it doesn't affect the backend at all,\n> > > please go for it :)\n> > \n> > Oh good, scrappy gave the right answer :) That is a nice feature...\n> \n> Thomas is definitely back!\n\nThat's okay...Tatsuo found a place in my heart...he actually *asked*\nbefore adding the code after Beta started *grin*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 30 Mar 1999 13:38:21 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum updated..."
}
] |
[
{
"msg_contents": "Hi folks,\n\nI am trying to get a version (any version !) of PostgreSQL running\non OpenBSD 2.4 upwards, but I am getting some weird behaviour with\nthe SIGQUIT in elog():\n\nfrom kdump:\n\n21633 postgres RET write 56/0x38\n21633 postgres CALL sendto(0x4,0x125640,0x3a,0,0,0)\n21633 postgres GIO fd 4 wrote 58 bytes\n \"EERROR: destroydb: database 'regression' does not exist\n \\0\"\n21633 postgres RET sendto 58/0x3a\n21633 postgres CALL kill(0x5481,0x3)\n21633 postgres RET kill -1 errno 1 Operation not permitted\n21633 postgres CALL sigprocmask(0x1,0)\n21633 postgres RET sigprocmask 0\n21633 postgres CALL sigsuspend(0)\n\nFor those who cannot read hex, FYI 0x5481 == 21633. What this basically\nmeans that the process is getting permission denied sending a signal\nto itself. Hmm. This is with a snapshot from a few days agao, but\nthe results are identical for 6.4.2 as well.\n\nOh, the result - the regression tests hang waiting for the postgres\nprocess to receive the SIGQUIT to abort transaction that never\ncomes.\n\nThe OpenBSD folks don't seem to be bothered. I have RTFM, APUE and \nkern_sig.c in OpenBSD - no joy. Any ideas anyone ? Seen this before ?\n\nIs it some bizarre interaction of sigprocmask() or whatever ?\n\nRegards,\n-- \nPeter Galbavy\nKnowledge Matters Ltd\nhttp://www.knowledge.com /http://www.wonderland.org/ http://www.literature.org/\n",
"msg_date": "Mon, 29 Mar 1999 11:03:49 +0100",
"msg_from": "Peter Galbavy <[email protected]>",
"msg_from_op": true,
"msg_subject": "signal weirdness"
},
{
"msg_contents": "On Mon, Mar 29, 1999 at 11:03:49AM +0100, Peter Galbavy wrote:\n> Oh, the result - the regression tests hang waiting for the postgres\n> process to receive the SIGQUIT to abort transaction that never\n> comes.\n\nMy quick fix has been to replace the kill() with a direct call\nto siglongjmp() - not sure if this is safe, but it works. I\nam being a pragmatist today.\n\nI will try to get more debugging done on the signal problem at\nsome stage, but I seem to have a \"working\" solution.\n\nIs their any reason not to replace the kill() with a longjmp()\nin the long run - no pun intended.\n\nRegards,\n-- \nPeter Galbavy\nKnowledge Matters Ltd\nhttp://www.knowledge.com /http://www.wonderland.org/ http://www.literature.org/\n",
"msg_date": "Mon, 29 Mar 1999 12:17:12 +0100",
"msg_from": "Peter Galbavy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] signal weirdness"
},
{
"msg_contents": "Peter Galbavy <[email protected]> writes:\n> Is their any reason not to replace the kill() with a longjmp()\n\nI always wondered why elog uses such a bizarre approach to transferring\ncontrol back to the main loop, myself.\n\tkill() self -> SIGQUIT signal catcher -> longjmp -> main loop.\nSeems to me two of these steps could be eliminated ;-)\n\nSo far there hasn't been a reason to touch the code (if it ain't broke\ndon't fix it) ... but if it is broken on at least one platform, the\nsituation is different.\n\nI'd say OpenBSD is definitely broken, however. A process should be\nallowed to signal itself. File a bug report...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Mar 1999 10:42:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] signal weirdness "
}
] |
[
{
"msg_contents": "Problem caused by incorrect const tag usage still exist\n\nIt can be solved \n whether to check all calls for const modifyier\n or remove all const tags \n or (worst) patch configure to add -std to CFLAGS\n\nkernigan(dms)~/pgsql/pgsql/src/interfaces/libpq>uname -a\nOSF1 kernigan.wplus.net V4.0 878 alpha\n\ncc: Error: fe-connect.c, line 173: In this declaration, parameter 1 has a\ndifferent type than specified in an earlier declaration of this function.\nPQconnectdb(const char *conninfo)\n^\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n",
"msg_date": "Mon, 29 Mar 1999 14:46:40 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "incorrect const usage"
},
{
"msg_contents": "Dmitry Samersoff <[email protected]> writes:\n> Problem caused by incorrect const tag usage still exist\n> cc: Error: fe-connect.c, line 173: In this declaration, parameter 1 has a\n> different type than specified in an earlier declaration of this function.\n> PQconnectdb(const char *conninfo)\n\nUgh. Apparently you're getting bit by c.h's gratuitous\n\t#define const\nwhen _STDC_ is not set. This really ought to be handled by a direct\ntest whether CONST is usable; in fact that whole \"#ifdef __STDC__\"\nsection of c.h is pretty bogus. I will work on it.\n\nThe problem wouldn't have come up, except that libpq-fe.h doesn't\ninclude config.h or c.h anymore (to avoid polluting application\nnamespace with huge amounts of Postgres-internal symbols). So the uses\nof \"const\" in libpq-fe.h are seen by the compiler before it sees the\n#define, and then when it hits the actual function definitions inside\nlibpq, it (rightly) complains.\n\nWhat that means is that no one has yet tried to compile 6.4.* on a\ncompiler that doesn't recognize \"const\" (or if anyone did, they didn't\nreport its failure). Considering that we require compilers to support\nANSI-style function definitions, and \"const\" predates ANSI, it seems\nunlikely that there'd be any hope of building Postgres with such a\ncompiler anyway. In short: we could probably just lose the \"#define const\"\nentirely, as well as a lot of the other alleged support for pre-ANSI\ncompilers in c.h.\n\n> or (worst) patch configure to add -std to CFLAGS\n\nWhat's worst about that? If your compiler is not really ANSI without it,\nit seems like a good idea to me. What is your platform exactly?\nDoes it have a template file? Adding -std to CFLAGS in the template\nwould be simple enough...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Mar 1999 11:06:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] incorrect const usage "
}
] |
[
{
"msg_contents": "\nHi all,\n\nI'm using postgres 6.3.2 as built by RedHat 5.2.\n\nEvery time one of my programs tries to read the _2nd_ large object it\ngets an error. Well actually, closing the descriptor on the 1st large\nobject fails as does retrieving the 2nd large object. The error is....\n\nPQfn: expected a 'V' from the backend. Got 'N' instead\n\nI have got a code extract below. It is simply a perl program using\nPg-0.91 that opens the database and tries to read two large objects\ngiven on the command line.\n\nWhat is the best bet for getting around this? Is upgrading to a later\nversion of postgres likely to help? Has anyone seen this before?\n\nThe large objects I'm using aren't very large. Only a few hundred bytes.\n\nAre large objects well supported? Are they considered very stable to\nuse?\n\nHere is the code....\n\npackage techmod;\nrequire Exporter;\n\nuse DBI;\nuse Pg;\n\nsub pgdbconnect\n{\n $pgdbh ||= Pg::connectdb(\"dbname=httpd\");\n die unless $pgdbh;\n $pgdbh->trace(STDOUT);\n return $pgdbh;\n}\n\n\nsub getlarge\n{\n my ($name,$lobjId)=@_;\n my $buf;\n my $mode = PGRES_INV_READ;\n if (0 <= ($lobj_fd = $pgdbh->lo_open($lobjId, $mode)))\n {\n print \"open\\n\";\n while (0 < ($nbytes = $pgdbh->lo_read($lobj_fd, $b, 100000)))\n {\n $buf = $buf . $b;\n }\n if ($nbytes < 0)\n { print \"read fail\\n\", $pgdbh->errorMessage; }\n if ($pgdbh->lo_close($lobj_fd) < 0)\n { print \"close fail\\n\", $pgdbh->errorMessage; }\n }\n else\n {\n print \"notopen $lobjId\\n\", $pgdbh->errorMessage;\n }\n return $buf;\n}\n\n#!/usr/bin/perl\nuse techmod;\ntechmod->pgdbconnect();\n$lobjId=$ARGV[0];\nprint techmod->getlarge($lobjId);\nprint techmod->getlarge($ARGV[1]);\n\nHere is an extract from the trace.\n\nTo backend> F \nTo backend (4#)> 954\nTo backend (4#)> 2\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 4\nTo backend (4#)> 100000\n>From backend> V\n>From backend> G\n>From backend (#4)> 33\n>From backend (33)> This is some data stored in a large object.\n\n>From backend> 0\nTo backend> F \nTo backend (4#)> 954\nTo backend (4#)> 2\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 4\nTo backend (4#)> 100000\n>From backend> V\n>From backend> G\n>From backend (#4)> 0\n>From backend (0)> \n>From backend> 0\nTo backend> F \nTo backend (4#)> 953\nTo backend (4#)> 1\nTo backend (4#)> 4\nTo backend (4#)> 0\n>From backend> N\nclose fail\nPQfn: expected a 'V' from the backend. Got 'N' insteadThis is some data\nstored in a large object\nTo backend> F \nTo backend (4#)> 952\nTo backend (4#)> 2\nTo backend (4#)> 4\nTo backend (4#)> 21008\nTo backend (4#)> 4\nTo backend (4#)> 262144\n>From backend> N\nnotopen 21008\nPQfn: expected a 'V' from the backend. Got 'N' instead\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Mon, 29 Mar 1999 10:58:12 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Large objects error - expected a 'V' from the backend"
},
{
"msg_contents": "Chris Bitmead wrote:\n> \n> Hi all,\n> \n> I'm using postgres 6.3.2 as built by RedHat 5.2.\n> \n> Every time one of my programs tries to read the _2nd_ large object it\n> gets an error. Well actually, closing the descriptor on the 1st large\n> object fails as does retrieving the 2nd large object. The error is....\n> \n> PQfn: expected a 'V' from the backend. Got 'N' instead\n> \n> I have got a code extract below. It is simply a perl program using\n> Pg-0.91 that opens the database and tries to read two large objects\n> given on the command line.\n\n\nthis will most probably not solve your problem, but for DBD-Pg-0.91 you need\npostgresql-6.4.2.\n\nEdmund\n\n\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Mon, 29 Mar 1999 19:19:50 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Large objects error - expected a 'V' from the backend"
},
{
"msg_contents": "\n> I have tried to use the lo interface and it appears to\n> work ok (although there is a fix required for solaris).\n> There is also a memory leak in the back end so several\n> thousand large objects will probably cause the backend\n> to fail .\n\nOuch.\n\nWell perhaps if I tell you PG hackers what I want to do, if you could\ntell me the best way to do it.\n\nI want to have a comment database storying ascii text comments. These\ncould be over 8000 bytes, and my understanding is that conventional PG\nrows can't be bigger than 8000 bytes. On the other hand most of them\nwill probably be much smaller than 8000 bytes. I will certainly have\nmore than \"several thousand\" of them.\n\nIs large objects the right way to go here? What are the disk usage /\nspeed tradeoffs of using large objects here, perhaps compared to\nstraight UNIX files? The main reasons I don't use the file system is\nthat I might run out of inodes, and also it's probably not that fast or\nefficient.\n",
"msg_date": "Mon, 29 Mar 1999 23:31:22 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are large objects well supported? Are they considered very\n\tstableto use?"
},
{
"msg_contents": "\nFYI, on a standard RedHat 5.2 system the current PG snapshot fails the\nfollowing regessions...\n\nint2 .. failed\nint4 .. failed\ngeometry .. failed\n\nIf anyone wants more info, let me know.\n\ndiff results/int4.out expected/int4.out \n10c10\n< ERROR: pg_atoi: error reading \"1000000000000\": Numerical result out\nof range\n---\n> ERROR: pg_atoi: error reading \"1000000000000\": Math result not representable\ndiff results/int2.out expected \n10c10\n< ERROR: pg_atoi: error reading \"100000\": Numerical result out of range\n---\n> ERROR: pg_atoi: error reading \"100000\": Math result not representable\n",
"msg_date": "Tue, 30 Mar 1999 08:30:18 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regression failures"
},
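The two expected strings differ only in how each libc spells strerror(ERANGE); the underlying failure is easy to reproduce by hand (a sketch, with the error text depending on the platform):

    SELECT '100000'::int2;
    -- ERROR: pg_atoi: error reading "100000": <platform's strerror(ERANGE) text>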
{
"msg_contents": "> > I have tried to use the lo interface and it appears to\n> > work ok (although there is a fix required for solaris).\n> > There is also a memory leak in the back end so several\n> > thousand large objects will probably cause the backend\n> > to fail .\n\nThis was reported some times ago but I don't have time to fix.\n\n> Ouch.\n> \n> Well perhaps if I tell you PG hackers what I want to do, if you could\n> tell me the best way to do it.\n> \n> I want to have a comment database storying ascii text comments. These\n> could be over 8000 bytes, and my understanding is that conventional PG\n> rows can't be bigger than 8000 bytes. On the other hand most of them\n> will probably be much smaller than 8000 bytes. I will certainly have\n> more than \"several thousand\" of them.\n\nI thought the problem stated above was in that creating lots of large\nobjects in a session could be a trouble. On the other hand, if you\nread/or write not so much in a session, you could avoid the problem, I\nguess.\n\n> Is large objects the right way to go here? What are the disk usage /\n> speed tradeoffs of using large objects here, perhaps compared to\n> straight UNIX files? The main reasons I don't use the file system is\n> that I might run out of inodes, and also it's probably not that fast or\n> efficient.\n\nIf you are short of inodes, forget about large objects. Creating a\nlarge object consumes 2 inodes (one is for holding data itself,\nanother is for an index for faster access) and problably this is not\ngood news for you.\n\nI think we could implement large objects in a different way, for\nexample packing many of them into a single table. This is just a\nthought, though.\n---\nTatsuo Ishii\n",
"msg_date": "Tue, 30 Mar 1999 22:58:32 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Are large objects well supported? Are they\n\tconsidered very stableto use?"
},
{
"msg_contents": "\nThanks for all the suggestions about large objects. To me they sound\nnearly a waste of time, partly because they take 2 unix files for each\none, and partly because the minimum size is 16k.\n\nFor the moment I think I will use text type in a regular class and just\nput up with the restriction of less than 8k. Maybe I will use an \"oid\nmore,\" link for chaining.\n\nI think the only real solution to this is to remove the arbitrary limits\nin postgres as in the 8k record limit and the 8k query buffer limit.\n\nHas anybody thought much about this yet?\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Tue, 30 Mar 1999 14:07:52 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are large objects well supported? Are they considered very\n\tstableto use?"
},
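A minimal sketch of the chaining idea with hypothetical names; here an explicit sequence number stands in for the self-referencing "oid more" pointer, which makes reassembly a single ORDER BY instead of a pointer walk:

    CREATE TABLE comment_chunk (
        comment_id int4,  -- one comment spans many rows
        seq        int4,  -- 0, 1, 2 ... within a comment
        body       text   -- each piece kept safely under 8k
    );

    SELECT body FROM comment_chunk
    WHERE comment_id = 42
    ORDER BY seq;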
{
"msg_contents": "> FYI, on a standard RedHat 5.2 system the current PG snapshot fails the\n> following regessions...\n\nOK. I have the \"reference platform\" for the regression tests, and it\nhas recently had a (forced) upgrade to RH5.2 after losing some disks.\nI'd expect the regression tests to start matching your installation\nvery soon now; certainly before we release v6.5.\n\nThanks for the info.\n\n - Tom\n",
"msg_date": "Tue, 30 Mar 1999 16:37:50 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Regression failures"
}
] |
[
{
"msg_contents": "New www mirror of postgresql site is ready \n\nhttp://postgresql.wplus.net\n\n(St.Petersburg, Russia)\n\nThis mirror updates daily by rsync\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n",
"msg_date": "Mon, 29 Mar 1999 14:34:40 +0300 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "New mirror"
}
] |
[
{
"msg_contents": "\nWho are maintainer of libpq++?\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n",
"msg_date": "Mon, 29 Mar 1999 16:33:13 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "libpq++"
},
{
"msg_contents": "On Mon, 29 Mar 1999, Dmitry Samersoff wrote:\n\n> \n> Who are maintainer of libpq++?\n\nI was going to when things settled down, but a couple of months ago\nsomeone else posted to hackers that they had an immediate need and was\ngoing to so I backed off.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 29 Mar 1999 08:26:37 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq++"
},
{
"msg_contents": "Vince Vielhaber <[email protected]> writes:\n>> Who are maintainer of libpq++?\n\n> I was going to when things settled down, but a couple of months ago\n> someone else posted to hackers that they had an immediate need and was\n> going to so I backed off.\n\nI haven't noticed any patches getting posted, however, so whoever-it-was\nseems to have lost interest. Vince, if you still are interested in\nadopting libpq++, it desperately needs a caring parent ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Mar 1999 10:45:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq++ "
},
{
"msg_contents": "On Mon, 29 Mar 1999, Tom Lane wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n> >> Who are maintainer of libpq++?\n> \n> > I was going to when things settled down, but a couple of months ago\n> > someone else posted to hackers that they had an immediate need and was\n> > going to so I backed off.\n> \n> I haven't noticed any patches getting posted, however, so whoever-it-was\n> seems to have lost interest. Vince, if you still are interested in\n> adopting libpq++, it desperately needs a caring parent ;-)\n\nLemme see what it'll take to get up to speed. Yesterday while I was \nBBQ'in I got the last three items finished up that were sitting on the\ntop of the pile for the last four months (they weren't small) so I've\nactually got some time. I'll let ya know.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 29 Mar 1999 11:32:38 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq++ "
},
{
"msg_contents": "\nOn 29-Mar-99 Tom Lane wrote:\n> Vince Vielhaber <[email protected]> writes:\n>>> Who are maintainer of libpq++?\n> \n>> I was going to when things settled down, but a couple of months ago\n>> someone else posted to hackers that they had an immediate need and was\n>> going to so I backed off.\n> \n> I haven't noticed any patches getting posted, however, so whoever-it-was\n> seems to have lost interest. Vince, if you still are interested in\n> adopting libpq++, it desperately needs a caring parent ;-)\n\nLooking at it now. I see the first thing I'm going to have to do is \nfix the docs - second actually, I need to find out why it didn't build\nin my tree in the first place back when I went thru the docs in December.\n\nAre there any glaring problems that require immediate attention that you\nknow of? I see it hasn't been touched since around '97.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 29 Mar 1999 16:15:33 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq++"
},
{
"msg_contents": "On 29-Mar-99 Vince Vielhaber wrote:\n> \n> On 29-Mar-99 Tom Lane wrote:\n>> Vince Vielhaber <[email protected]> writes:\n>>>> Who are maintainer of libpq++?\n>> \n> \n> Are there any glaring problems that require immediate attention that you\n> know of? I see it hasn't been touched since around '97.\n\n Probably, official libpq need to go to templates and exceptions,\nstop supporting unusable g++ 2.7.2 \n\n\n( I use self-written completely different version of C++ interface. \n It use exceptions for error reporting and some convenient overloads like\n [\"field name\"]. )\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n",
"msg_date": "Tue, 30 Mar 1999 14:58:51 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] libpq++"
},
{
"msg_contents": "Vince Vielhaber <[email protected]> writes:\n> Are there any glaring problems that require immediate attention that you\n> know of? I see it hasn't been touched since around '97.\n\nThe connection-related routines desperately require work; the only\ninterface is equivalent to PQsetdb(), which means there's no way to\nspecify username/password. (Except by passing them via environment\nvariables, which is a kluge.) The code should be calling PQsetdbLogin\nand there need to be a couple more connection parameters for username/\npassword.\n\nI'd also like to see a connection method that interfaces to\nPQconnectdb() and passes everything in a string, forgetting the\npgEnv stuff entirely. That's the only way that won't require\nfurther attention if more connection parameters are added to libpq.\n\nAlso, if the routines immediately around the connection code are\nany indication, the library is just crawling with bugs. A few\nexamples:\n\n1. PgConnection::Connect tests \"if (errorMessage)\" ... where\nerrorMessage is a locally declared char array. In other words\nthe if() tests to see whether the address of errorMessage is non-null.\nThis means that PgConnection::Connect *always* thinks it has failed.\nThe symptom that has been complained of is that if PQsetdb actually\ndoes fail, no useful error message is available, because\nPgConnection::Connect has overwritten it with the usually-null\nmessage from fe_setauthsvc().\n\n2. If it were coded as probably intended, ie \"if (errorMessage[0])\",\nthen it would not be testing the *connection* status but only whether\nfe_setauthsvc was happy or not. The test should really be looking at\nthe returned pgConn.\n\n3. If it's gonna call fe_setauthsvc, one would think it should not go\nahead trying to make the connection if fe_setauthsvc fails. But it does.\n\n4. It probably shouldn't be calling fe_setauthsvc in the first place,\nthat routine being obsolete and deprecated.\n\n5. Why are the caller(s) of PgConnection::Connect not noticing its\nfailure return status?\n\nI got sufficiently discouraged after deconstructing that one routine\nthat I didn't go looking for more problems. Five bugs in ten lines\nof code is not promising...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Mar 1999 19:07:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq++ "
},
{
"msg_contents": "\nOn 01-Apr-99 Tom Lane wrote:\n> Vince Vielhaber <[email protected]> writes:\n>> Are there any glaring problems that require immediate attention that you\n>> know of? I see it hasn't been touched since around '97.\n> \n> The connection-related routines desperately require work; the only\n> interface is equivalent to PQsetdb(), which means there's no way to\n> specify username/password. (Except by passing them via environment\n> variables, which is a kluge.) The code should be calling PQsetdbLogin\n> and there need to be a couple more connection parameters for username/\n> password.\n\nLast nite I started looking at it a bit closer (docs are fixed and\nsent to Tom). Without trying to use any of it I was asking many of\nthe same questions that you're mentioning here.\n \n> I'd also like to see a connection method that interfaces to\n> PQconnectdb() and passes everything in a string, forgetting the\n> pgEnv stuff entirely. That's the only way that won't require\n> further attention if more connection parameters are added to libpq.\n \nPrior to eliminating anything (like the pgEnv stuff), do we know how\nmany people are using libpq++? I'm wondering which would be better,\nclean break or a phase out.\n\nRemaining items noted - as well as the items that Dmitry mentioned the\nother day.\n\nI can see some directions I can go in with this. The library looks like\nit has a home/\"caring parent\" now. :) I'll be a bit slow starting, \nespecially with the holiday coming up and a convention following that, so\nmaking it before 6.5 release is doubtful (hopeful, but doubtful, but \nprobably not wise anyway). \n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Wed, 31 Mar 1999 19:30:20 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq++"
},
{
"msg_contents": "Vince Vielhaber <[email protected]> writes:\n> On 01-Apr-99 Tom Lane wrote:\n>> I'd also like to see a connection method that interfaces to\n>> PQconnectdb() and passes everything in a string, forgetting the\n>> pgEnv stuff entirely. That's the only way that won't require\n>> further attention if more connection parameters are added to libpq.\n \n> Prior to eliminating anything (like the pgEnv stuff), do we know how\n> many people are using libpq++? I'm wondering which would be better,\n> clean break or a phase out.\n\nI'd say phase out: there's no reason not to support both styles for a\nwhile (just as libpq is still supporting PQsetdb). But in the long run\nI'd like to encourage apps to move towards using the PQconnectdb\ninterface. The idea is to avoid exactly the problem we see in libpq++:\ninterface layers that know about a specific set of connection parameters\nand have to be fixed anytime more are added.\n\nTo answer your question, there are at least some people using libpq++,\nsince we get bug reports and inquiries about it. Hard to say how many.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Mar 1999 19:58:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq++ "
}
] |
[
{
"msg_contents": "> Update of /usr/local/cvsroot/pgsql/src/backend/access/heap\n> In directory hub.org:/tmp/cvs-serv88724/backend/access/heap\n> \n> Modified Files:\n> \theapam.c \n> Log Message:\n> 1. Vacuum is updated for MVCC.\n> 2. Much faster btree tuples deletion in the case when first on page\n> index tuple is deleted (no movement to the left page(s)).\n> 3. Remember blkno of new root page in BTPageOpaque of\n> left/right siblings when root page is splitted.\n\nGreat news!\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Mar 1999 11:04:46 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [COMMITTERS] 'pgsql/src/backend/access/heap heapam.c't"
}
] |
[
{
"msg_contents": "> Update of /usr/local/cvsroot/pgsql/src/backend/utils\n> In directory hub.org:/tmp/cvs-serv11411/src/backend/utils\n> \n> Modified Files:\n> \tGen_fmgrtab.sh.in \n> Log Message:\n> Modify fmgr so that internal name (compiler name) of a built-in\n> function is found in prosrc field of pg_proc, not proname. This allows\n> multiple aliases of a built-in to all be implemented as direct builtins,\n> without needing a level of indirection through an SQL function. Replace\n> existing SQL alias functions with builtin entries accordingly.\n> Save a few K by not storing string names of builtin functions in fmgr's\n> internal table (if you really want 'em, get 'em from pg_proc...).\n> Update opr_sanity with a few more cross-checks.\n> \n> \n\nThis is great. I think we get memory leaks from SQL functions too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Mar 1999 11:58:17 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [COMMITTERS] 'pgsql/src/backend/utils Gen_fmgrtab.sh.in'"
}
] |
[
{
"msg_contents": "I would like to have a C function and/or stored procedure that can accept a\nnull parameter value and return a non-null value. \n\nThanks, Michael\n\n\t-----Original Message-----\n\tFrom:\tD'Arcy\" \"J.M.\" Cain [SMTP:[email protected]]\n\tSent:\tMonday, March 29, 1999 10:27 AM\n\tTo:\[email protected]\n\tCc:\[email protected]; [email protected]\n\tSubject:\tRe: [HACKERS] NULL handling question\n\n\tThus spake Thomas Lockhart\n\t> > I don't seek this in the source, but i think, all function, who\ntake a \n\t> > NULL value as parameter can't return with a NOT NULL value.\n\t> > But why?\n\t> \n\t> Postgres assumes that a NULL input will give a NULL output, and\nnever\n\t> calls your routine at all. Since NULL means \"don't know\", there is\na\n\n\tActually, the problem is that it does call the function. After it\n\treturns it throws away the result and so the effect is that the\nfunction\n\tnever gets called but in the meantime, the function has to deal with\n\tNULL inputs for nothing. This has been hanging around since the\nlast\n\trelease. I looked at the dispatch code but it wasn't very clear\nwhere\n\twe have to put the test to do this correctly. Maybe we can get it\ncleaned\n\tup before release this time.\n\n\n\t> strong argument that this is correct behavior.\n\n\tI agree but recently I said that there was no stored procedures in\nPostgreSQL\n\tand someone corrected me pointing out that functions with no return\nwere\n\tin effect stored procedures. Do the same arguments apply? If a\nprocedure\n\tis passed a NULL argument, should the side effects be bypassed?\n\n\t-- \n\tD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three\nwolves\n\thttp://www.druid.net/darcy/ | and a sheep voting on\n\t+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 29 Mar 1999 11:53:10 -0600",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] NULL handling question"
}
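A minimal sketch of the behavior under discussion (the function name null_to_text and its body are illustrative, not from the thread):

    CREATE FUNCTION null_to_text(text) RETURNS text AS '
    BEGIN
        IF $1 ISNULL THEN
            RETURN ''(none)'';
        END IF;
        RETURN $1;
    END;' LANGUAGE 'plpgsql';

If the dispatch code discards the result for a NULL input, as described above, then SELECT null_to_text(NULL) would still yield NULL despite the function body, which is exactly the behavior Michael is asking to avoid.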
] |
[
{
"msg_contents": "\nGood morning all...\n\n\tAs was discussed briefly a short while ago, we are slowly work up\ninto the next level for PostgreSQL ... Commercial Support.\n\n\tGod, what a horrific word, eh? Commercial? Not meant to be,\nwhich is why I'm posting this...\n\n\tThree years ago (or was it four?), shortly after Jolly and Andrew\ngraduated from University of California at Berkeley, I rose up and took\nover the task of co-ordinating the evolution of what has been come to be\nknown as PostgreSQL. What started off as a simple project involving\nBruce, Thomas, Vadim and myself evolved into one where the developers span\nall the continents (except Australia), and where we have a product that we\nare all proud to be apart of.\n\n\tSeveral weeks back, \"The Core\" began talking about going to the\naforementioned \"next step\"...offering Commercial Support. This, like the\noriginal project, is meant to evolve as we go along, so anything that I\nstate here is subject to change as situations require it...except for\n*one* thing: we are *not* evolving this into a Commercial Product!!\nCommercially *Viable*, yes...but, as it has always been, it always will be\nOpen and Freely available Software.\n\n\tWe have alot of plans as to what we want to do, and where we want\nto go with it, but nothing, as of yet, has been set in stone. Our start\ndate is planned for June 1st of this year...\n\n\tThe reason for this email, at this time, is to get a feel for who\nis interested in being apart of this. Plans are to feed revenues back\ninto the project, as a whole, towards development of features that have\n\"sat by the wayside\" because they aren't particular glamorous, as well as\nseveral other things over time.\n\n\tThe Support side of this will be run under http://www.pgsql.com,\nwhich has a plain \"place-holder\" in place while we build the real site. \n\n\tWhat we are looking for, right now, is a pseudo-resume from those\ninterested...ppl that have experience with the internals that we can\ncontract out to, as required, as well as those who consider themselves\nproficient at using PostgreSQL from an admin/user standpoint.\n\n\tWhat we are looking for, mainly, is such things as:\n\t\n\tAreas of proficiency.\n\tOperating Systems Experienced in.\n\tPrimary spoken language, as well as any other language.\n\n\tInformation on projects that you are proud of should be included\nalso...its kinda nice to know *why* you are proficient :)\n\n\tAll and any correspondance should go to [email protected], since\nsending it to my personal mailbox has a very good chance of getting it\nlost...;) Anyone with a reasonably active mailbox will understand what\nI'm saying :)\n\n\tAs June 1st approaches, there will be more specific information\nposted about this...\n\nThanks...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 29 Mar 1999 20:25:04 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commercial Support ..."
}
] |
[
{
"msg_contents": "Try it on current snapshot. Any comment Jan?\n\t-DEJ\n> -----Original Message-----\n> From: Clark Evans [mailto:[email protected]]\n> Sent: Monday, March 29, 1999 6:01 PM\n> To: Jackson, DeJuan\n> Cc: Eduardo Noeda; [email protected]\n> Subject: Re: [SQL] IIF..\n> \n> \n> \"Jackson, DeJuan\" wrote:\n> > It would be:\n> > SELECT (CASE Number\n> > WHEN 1 THEN 'First'\n> > ELSE 'Other'\n> > END) AS Description\n> > FROM Table\n> \n> Way cool. A new trick for me. \n> This is similar to DECODE, I assume.\n> \n> However, I tried this test on a snapshot\n> about a week old, and this is the result:\n> \n> > Welcome to the POSTGRESQL interactive sql monitor:\n> > Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> > \n> > type \\? for help on slash commands\n> > type \\q to quit\n> > type \\g or terminate with semicolon to execute query\n> > You are currently connected to the database: clark\n> > \n> > clark=> create table test (a text, b int4 );\n> > CREATE\n> > clark=> insert into test values ( 'one', 1);\n> > INSERT 18634 1\n> > clark=> insert into test values ( 'two', 2);\n> > INSERT 18635 1\n> > clark=> insert into test values ( null, null);\n> > INSERT 18636 \n> > clark=> select ( case b when 1 then 'first' else 'other' \n> end ) as xx from test;\n> > ERROR: copyObject: don't know how to copy 704\n> > clark=> \n> > \n> \n> Did this work in earlier versions? If so, then perhaps this\n> should be added to a regression test if it isn't already.\n> \n> Best,\n> \n> Clark\n> \n",
"msg_date": "Mon, 29 Mar 1999 18:31:19 -0600",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [SQL] IIF.."
},
{
"msg_contents": "I'm receiving the following error testing SELECT (CASE with\nthe recent cvs replication ( 29MAR99 , 8:30PM EST).\n\n\n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> \n> type \\? for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: clark\n> \n> clark=> create table test (a text, b int4 );\n> CREATE\n> clark=> insert into test values ( 'one', 1);\n> INSERT 18634 1\n> clark=> insert into test values ( 'two', 2);\n> INSERT 18635 1\n> clark=> insert into test values ( null, null);\n> INSERT 18636 \n> clark=> select ( case b when 1 then 'first' else 'other' end ) as xx from test;\n> ERROR: copyObject: don't know how to copy 704\n> clark=> \n>\n",
"msg_date": "Tue, 30 Mar 1999 02:43:00 +0000",
"msg_from": "Clark Evans <[email protected]>",
"msg_from_op": false,
"msg_subject": "SELECT (CASE ... ) gives copyObject error in current CVS build."
}
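Until the copyObject failure above is fixed, one possible workaround is to avoid CASE entirely (a sketch against the same test table; untested on this snapshot):

    SELECT 'first' AS xx FROM test WHERE b = 1
    UNION ALL
    SELECT 'other' FROM test WHERE b <> 1 OR b IS NULL;

This loses the single-pass evaluation of CASE but produces the same labelling, one row per table row.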
] |
[
{
"msg_contents": "Part of the problem is that PostgreSQL Assumes that a functions value will\nchange each time it is required, therefore automatic table scan and the\nfunction is called for each row.\nTry using 'now'::date instead of now()::date\nYou index creation syntax is good but there's a bug in function indexes\nwhich require you to specify the ops. Try:\n create index when_ndx3 on notes (date(when) date_ops);\n\nWhich won't work because the date(datetime) function isn't trusted.\nYou can change this yourself in the system tables or you can use PL/PGSQL\n(the only trustable PL in PostgreSQL that I've found) to create another\nconversion function and use it instead. Or you can as Thomas Lockhart (or\nis it Tom Lane) if he'd create a trusted function for the conversions in\n6.5.\nDISCLAIMER: I haven't tested this on the current CSV(?CVS I just can't think\ntonight) so it might already be fixed.\n\t-DEJ\n\n> -----Original Message-----\n> From: Andrew Merrill [mailto:[email protected]]\n> Sent: Monday, March 29, 1999 9:28 PM\n> To: [email protected]\n> Subject: [SQL] indexing a datetime by date\n> \n> \n> I have a table with a field, \"when\", of type \"datetime\". I can't use\n> \"date\" because I need the times as well. I'm using PostgreSQL 6.4.2.\n> \n> I'd like to identify all of the records with today's date, as in:\n> \n> select when from notes where when::date = now()::date;\n> \n> The query works, but is very slow. Explain confirms that a sequential\n> scan is being used.\n> \n> I've tried indexing on when:\n> \n> create index when_ndx1 on notes (when);\n> \n> But that doesn't help, as (I suppose) the optimizer can't match\n> when::date with this index.\n> \n> Neither of these works:\n> \n> db=> create index when_ndx2 on notes (when::date);\n> ERROR: parser: parse error at or near \"::\"\n> \n> db=> create index when_ndx3 on notes (date(when));\n> ERROR: DefineIndex: class not found\n> \n> As a workaround, I've been using this:\n> \n> select when from notes where when >= '3/29/1999 0:0:0' and when <=\n> '3/29/1999 23:59:59';\n> \n> but that's ugly and requires hardcoding today's date each time, rather\n> than using now().\n> \n> So, the question is, is there a way to index a datetime field by date?\n> \n> Andrew Merrill\n> \n> \n",
"msg_date": "Mon, 29 Mar 1999 22:07:20 -0600",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [SQL] indexing a datetime by date"
},
{
"msg_contents": "> Your index creation syntax is good but there's a bug in function \n> indexes which require you to specify the ops. Try:\n> create index when_ndx3 on notes (date(when) date_ops);\n> Which won't work because the date(datetime) function isn't trusted.\n> You can change this yourself in the system tables or you can use \n> PL/PGSQL (the only trustable PL in PostgreSQL that I've found) to \n> create another conversion function and use it instead. Or you can as \n> Thomas Lockhart (or is it Tom Lane) if he'd create a trusted function \n> for the conversions in 6.5.\n\nTom, does this ring a bell with you? istm that (almost) all builtin\nfunctions should be trusted, but I haven't done anything explicit\nabout it that I can remember.\n\nIn your new role as System Table Berserker, perhaps you would want to\nfix this? :)\n\n - Tom\n",
"msg_date": "Tue, 30 Mar 1999 06:14:13 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] indexing a datetime by date"
},
{
"msg_contents": "At 06:07 +0200 on 30/03/1999, Jackson, DeJuan wrote:\n\n\n>\n> Part of the problem is that PostgreSQL Assumes that a functions value will\n> change each time it is required, therefore automatic table scan and the\n> function is called for each row.\n> Try using 'now'::date instead of now()::date\n\nHow about using the ANSI standard CURRENT_DATE instead of either? It's\nalready of type date. Or is it considered a function call?\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n",
"msg_date": "Tue, 30 Mar 1999 13:59:14 +0200",
"msg_from": "Herouth Maoz <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [SQL] indexing a datetime by date"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> create index when_ndx3 on notes (date(when) date_ops);\n>> Which won't work because the date(datetime) function isn't trusted.\n\n> Tom, does this ring a bell with you?\n\nNo, and in fact datetime_date *is* marked trusted in pg_proc,\nboth current sources and 6.4.2.\n\nI see the problem DeJuan is getting at:\n\nplay=> create table notes (when datetime);\nCREATE\nplay=> create index when_ndx3 on notes (date(when) date_ops);\nCREATE\nplay=> insert into notes values ('now');\nERROR: internal error: untrusted function not supported.\n\nThis is either a bug or a very poorly worded error message.\nI'll look into it.\n\nIn the meantime, a workaround is to call the function using its\nbuiltin name:\n\nplay=> create table notes (when datetime);\nCREATE\nplay=> create index when_ndx3 on notes (datetime_date(when) date_ops);\nCREATE\nplay=> insert into notes values ('now');\nINSERT 1086489 1\n\nIn 6.4.2, date() on a datetime is an SQL-language function that just\ncalls the builtin function datetime_date(). It would seem that 6.4.2\ncan't cope with an SQL-language function as an index generator. This\nmight be a minor bug or it might be difficult to change; I dunno.\n\nIn 6.5, date() on a datetime is a true builtin, on par with\ndatetime_date(), so you'll be able to use either name interchangeably in\nthat release. But we may still not be able to do anything with other\nSQL-language functions as index generators.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Mar 1999 11:31:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] indexing a datetime by date "
},
{
"msg_contents": "Tom Lane wrote:\n\n> In the meantime, a workaround is to call the function using its\n> builtin name:\n>\n> play=> create table notes (when datetime);\n> CREATE\n> play=> create index when_ndx3 on notes (datetime_date(when) date_ops);\n> CREATE\n\nThanks, that helps - I can now index a datetime field by date.But the index\ndoesn't appear to be used:\n\ndb=> create index ndx3 on notes (datetime_date(when) date_ops);\nCREATE\ndb=> vacuum analyze notes;\nVACUUM\ndb=> explain select when from notes where when::date = 'now'::date;\nNOTICE: QUERY PLAN:\n\nSeq Scan on notes (cost=4330.37 size=43839 width=8)\n\nEXPLAIN\n\nSo it appears that the optimizer doesn't like this index. (This is with\nversion 6.4.2.)\nThe table has about 90,000 rows, of which between 10 and 100 might match a\ngiven date, so an index would really help.\n\nAm I missing something simple here? Thanks again for all your help.\n\nAndrew Merrill\n\n",
"msg_date": "Tue, 30 Mar 1999 09:25:39 -0800",
"msg_from": "Andrew Merrill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] indexing a datetime by date"
}
] |
[
{
"msg_contents": "I grabbed the latest code (6.5) from the cvs source tree. The following\nregression tests failed:\n\nint2 .. failed\nint4 .. failed\ngeometry .. failed\ntriggers .. failed\nmisc .. failed\nplpgsql .. failed\n\nI am running Red Hat 5.1 on intel. Should I be concerned about these? I am\neager to get up and running on 6.5 but always seem to run into one obstacle\nor another.\n\nThanks, Michael\n\n",
"msg_date": "Mon, 29 Mar 1999 22:50:50 -0600",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Some 6.5 regression tests are failing"
},
{
"msg_contents": "> int2 .. failed\n> int4 .. failed\n\nBoth int2 and int4 have been consistently failing\nfor me, but it is due to a wording problem:\n\n pg_atoi: error reading \"100000\": Math result not representable\n instead of \n pg_atoi: error reading \"100000\": Numerical result out of range\n\nAlso, float8 usually fails for me, due to the expected \nbeing a 'result is out of range' error, and the result being:\n\n > ! bad|?column?\n > ! ---+--------\n > ! |0\n > ! |NaN\n etc.\n\n> geometry .. failed\n\nThis almost always fails due to a rounding differences.\nIs there a way to build the rounding into the test\nso that it would fail if something is _really_ wrong?\n\n> triggers .. failed\n\nFor me, \"check_primary_key: even number of arguments should be specified\"\nis shown as an error, which is not 'expected', then, as a result, \neverything else in the test fails.\n\nThis is definately a problem.\n\n> misc .. failed\n> plpgsql .. failed\n\nI'm not too sure about these.. I had a host of other failures, but\nthis was due to me running the test in the wrong account....\nrunning it in the right account now. :)\n\n> I am running Red Hat 5.1 on intel. Should I be concerned about these? I am\n> eager to get up and running on 6.5 but always seem to run into one obstacle\n> or another.\n\nI have the same configuration, Pentium Pro, RH 5.1 almost stock configuration.\n\nClark\n",
"msg_date": "Tue, 30 Mar 1999 05:04:32 +0000",
"msg_from": "Clark Evans <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some 6.5 regression tests are failing"
},
{
"msg_contents": "\nJust a quick answer on this, but if you look in the test/regress/expected\nsubdirectory, there are \"system specific\" .out files for a few systems.\nIt isn't perfect, but it does help to remove system specific differences\nin the regressio tests...\n\nIf your OS doesn't have an appropriate file there, please feel free to\nconfirm that the problem is benign and submit an appropraite file for\ninclusion...if your OS does have an appropriate file and is still having a\nproblem, then it is something that we shoudl look at...\n\n\nOn Tue, 30 Mar 1999, Clark Evans wrote:\n\n> > int2 .. failed\n> > int4 .. failed\n> \n> Both int2 and int4 have been consistently failing\n> for me, but it is due to a wording problem:\n> \n> pg_atoi: error reading \"100000\": Math result not representable\n> instead of \n> pg_atoi: error reading \"100000\": Numerical result out of range\n> \n> Also, float8 usually fails for me, due to the expected \n> being a 'result is out of range' error, and the result being:\n> \n> > ! bad|?column?\n> > ! ---+--------\n> > ! |0\n> > ! |NaN\n> etc.\n> \n> > geometry .. failed\n> \n> This almost always fails due to a rounding differences.\n> Is there a way to build the rounding into the test\n> so that it would fail if something is _really_ wrong?\n> \n> > triggers .. failed\n> \n> For me, \"check_primary_key: even number of arguments should be specified\"\n> is shown as an error, which is not 'expected', then, as a result, \n> everything else in the test fails.\n> \n> This is definately a problem.\n> \n> > misc .. failed\n> > plpgsql .. failed\n> \n> I'm not too sure about these.. I had a host of other failures, but\n> this was due to me running the test in the wrong account....\n> running it in the right account now. :)\n> \n> > I am running Red Hat 5.1 on intel. Should I be concerned about these? I am\n> > eager to get up and running on 6.5 but always seem to run into one obstacle\n> > or another.\n> \n> I have the same configuration, Pentium Pro, RH 5.1 almost stock configuration.\n> \n> Clark\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 30 Mar 1999 01:20:11 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some 6.5 regression tests are failing"
}
] |
[
{
"msg_contents": "\nI'd stay away from PostgreSQL large objects for now.\n\nTwo big problems:\n\n1) Minimum size is 16K\n2) They all end up in the same directory as your regular\n tables.\n\nIf you need to store a lot of files in the 10-20-30K size, I'd\nsuggest first trying the unix file system, but hash them into some\nsort of subdirectory structure so as to have not so many in each\ndirectory. 256 per directory is nice, so give each file a 32 bit\nid, store the id and the key information in postgresql, and when\nyou need file 0x12345678, go to 12/34/56/12345678.txt. You could\nbe smarter about the hashing so the bins filled evenly. Either way\nyou can spread the load out over different file systems with\nsoft links.\n\nIf space is at a preimum, and your files are compressable, you can\ndo what we did on one project: batch the files up into batches of,\nsay, about 32k (i.e. keep adding files till the aggregate gets over\n32k), store start and end offsets for each file in the database, and \ngzip each batch. gzip -d -c can tear through whatever your 32K compresses\ndown to pretty quickly, and a little bit of C or perl can discard the unwanted\nleading part of the file pretty quickly too. You can store the blocks \nthemselves hashed as described above.\n\nHave fun,\nDrop me a line if I can help.\n-- cary\[email protected]\n\n\n\n\n",
"msg_date": "Mon, 29 Mar 1999 23:52:58 -0500 (EST)",
"msg_from": "\"Cary O'Brien\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are large objects well supported? Are they considered very\n\tstableto use?"
}
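A minimal sketch of the metadata table this scheme implies (table, column names, and types are illustrative, not from the post):

    CREATE TABLE file_store (
        file_id   int4,  -- 32-bit id used to build the on-disk path
        file_key  text,  -- the application's key information
        batch_id  int4,  -- which gzipped batch the file was packed into
        start_off int4,  -- byte offset of the file within the batch
        end_off   int4   -- byte offset just past the end of the file
    );

File 0x12345678 would then live at 12/34/56/12345678.txt, or inside the correspondingly hashed batch file when using the gzip variant.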
] |
[
{
"msg_contents": "CREATE USER sarah;\t\nERROR: Bad abstime external representation ''\n\nThe back end will fail with the next sql call:\n\nCREATE USER sarah;\nNOTICE: (transaction aborted): all queries ignored until end of transaction\nblock\n*ABORT STATE*\n\nAm I missing something?\n\nThanks, Michael\n",
"msg_date": "Mon, 29 Mar 1999 23:04:23 -0600",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Create user is failing under 6.5"
}
] |
[
{
"msg_contents": "Anytime I create a view that includes a call to a C function in a library I\ncreated, I get the following errors in pgsql:\n\nselect * from pg_views;\nERROR: cache lookup of attribute 0 in relation 148801 failed\n\n\nSelect * from pg_tables;\nWorks\n\n\\dt works okay.\n\nRed Hat 5.2, Intel, psql and PostgreSQL version 6.5. This error also\nappeared in version 6.4.2.\n\nThanks, Michael\n",
"msg_date": "Mon, 29 Mar 1999 23:20:18 -0600",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "select * from pg_veiw; errors in 6.5 and 6.4.2"
},
{
"msg_contents": ">\n> Anytime I create a view that includes a call to a C function in a library I\n> created, I get the following errors in pgsql:\n>\n> select * from pg_views;\n> ERROR: cache lookup of attribute 0 in relation 148801 failed\n\n Must be in utils/adt/ruleutils.c - I'll take a look.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 30 Mar 1999 10:01:17 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] select * from pg_veiw; errors in 6.5 and 6.4.2"
}
] |
[
{
"msg_contents": "I'd advise you get 6.4.2 or better still wait until 6.5 is out.\n\nWhen working on large object support for JDBC (at the time I was using\n6.3.x) I came across this. The problem is caused by the order that libpq\nexpects the packets from the backend, and if memory serves (which it\nisn't at the moment) a patch was submitted for it.\n\nPeter\n\n--\nPeter T Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as the\nofficial words of Maidstone Borough Council\n\n-----Original Message-----\nFrom: Chris Bitmead [mailto:[email protected]]\nSent: Monday, March 29, 1999 11:58 AM\nTo: [email protected]\nSubject: [HACKERS] Large objects error - expected a 'V' from the backend\n\n\n\nHi all,\n\nI'm using postgres 6.3.2 as built by RedHat 5.2.\n\nEvery time one of my programs tries to read the _2nd_ large object it\ngets an error. Well actually, closing the descriptor on the 1st large\nobject fails as does retrieving the 2nd large object. The error is....\n\nPQfn: expected a 'V' from the backend. Got 'N' instead\n\nI have got a code extract below. It is simply a perl program using\nPg-0.91 that opens the database and tries to read two large objects\ngiven on the command line.\n\nWhat is the best bet for getting around this? Is upgrading to a later\nversion of postgres likely to help? Has anyone seen this before?\n\nThe large objects I'm using aren't very large. Only a few hundred bytes.\n\nAre large objects well supported? Are they considered very stable to\nuse?\n\nHere is the code....\n\npackage techmod;\nrequire Exporter;\n\nuse DBI;\nuse Pg;\n\nsub pgdbconnect\n{\n $pgdbh ||= Pg::connectdb(\"dbname=httpd\");\n die unless $pgdbh;\n $pgdbh->trace(STDOUT);\n return $pgdbh;\n}\n\n\nsub getlarge\n{\n my ($name,$lobjId)=@_;\n my $buf;\n my $mode = PGRES_INV_READ;\n if (0 <= ($lobj_fd = $pgdbh->lo_open($lobjId, $mode)))\n {\n print \"open\\n\";\n while (0 < ($nbytes = $pgdbh->lo_read($lobj_fd, $b, 100000)))\n {\n $buf = $buf . $b;\n }\n if ($nbytes < 0)\n { print \"read fail\\n\", $pgdbh->errorMessage; }\n if ($pgdbh->lo_close($lobj_fd) < 0)\n { print \"close fail\\n\", $pgdbh->errorMessage; }\n }\n else\n {\n print \"notopen $lobjId\\n\", $pgdbh->errorMessage;\n }\n return $buf;\n}\n\n#!/usr/bin/perl\nuse techmod;\ntechmod->pgdbconnect();\n$lobjId=$ARGV[0];\nprint techmod->getlarge($lobjId);\nprint techmod->getlarge($ARGV[1]);\n\nHere is an extract from the trace.\n\nTo backend> F \nTo backend (4#)> 954\nTo backend (4#)> 2\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 4\nTo backend (4#)> 100000\n>From backend> V\n>From backend> G\n>From backend (#4)> 33\n>From backend (33)> This is some data stored in a large object.\n\n>From backend> 0\nTo backend> F \nTo backend (4#)> 954\nTo backend (4#)> 2\nTo backend (4#)> 4\nTo backend (4#)> 0\nTo backend (4#)> 4\nTo backend (4#)> 100000\n>From backend> V\n>From backend> G\n>From backend (#4)> 0\n>From backend (0)> \n>From backend> 0\nTo backend> F \nTo backend (4#)> 953\nTo backend (4#)> 1\nTo backend (4#)> 4\nTo backend (4#)> 0\n>From backend> N\nclose fail\nPQfn: expected a 'V' from the backend. Got 'N' insteadThis is some data\nstored in a large object\nTo backend> F \nTo backend (4#)> 952\nTo backend (4#)> 2\nTo backend (4#)> 4\nTo backend (4#)> 21008\nTo backend (4#)> 4\nTo backend (4#)> 262144\n>From backend> N\nnotopen 21008\nPQfn: expected a 'V' from the backend. Got 'N' instead\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Tue, 30 Mar 1999 09:53:19 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Large objects error - expected a 'V' from the backe nd"
}
] |
[
{
"msg_contents": "Because I'm re-wiring the power supply at home, I'm temporarily offline\nfrom there. If anyone is trying to contact me, please cc my work's\nemail.\n\nThis should only affect me for this week only.\n\nThanks, Peter\n\n--\nPeter T Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as the\nofficial words of Maidstone Borough Council\n\n",
"msg_date": "Tue, 30 Mar 1999 09:55:17 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "Contacting me"
}
] |
[
{
"msg_contents": "I have seen this before. I've simply got round it by adding the valid\nuntil date.\n\nPeter\n\n--\nPeter T Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as the\nofficial words of Maidstone Borough Council\n\n-----Original Message-----\nFrom: Michael Davis [mailto:[email protected]]\nSent: Tuesday, March 30, 1999 6:04 AM\nTo: [email protected]\nSubject: [HACKERS] Create user is failing under 6.5\n\n\nCREATE USER sarah;\t\nERROR: Bad abstime external representation ''\n\nThe back end will fail with the next sql call:\n\nCREATE USER sarah;\nNOTICE: (transaction aborted): all queries ignored until end of\ntransaction\nblock\n*ABORT STATE*\n\nAm I missing something?\n\nThanks, Michael\n",
"msg_date": "Tue, 30 Mar 1999 11:00:09 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Create user is failing under 6.5"
},
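For reference, a concrete form of the workaround Peter describes looks like this (the date itself is arbitrary):

    CREATE USER sarah VALID UNTIL 'Jan 1 2000';

Supplying VALID UNTIL explicitly sidesteps the bad-abstime-external-representation error.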
{
"msg_contents": "\nI hit it too. You can create the user with the createuser utility in a\nshell. Might still explain some backend crashes for me though.\n\nOn Tue, 30 Mar 1999, Peter Mount wrote:\n\n> I have seen this before. I've simply got round it by adding the valid\n> until date.\n> \n> Peter\n\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\nJames Thompson 138 Cardwell Hall Manhattan, Ks 66506 785-532-0561 \nKansas State University Department of Mathematics\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\n\n\n",
"msg_date": "Tue, 30 Mar 1999 09:19:59 -0600 (EST)",
"msg_from": "James Thompson <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Create user is failing under 6.5"
}
] |
[
{
"msg_contents": "While trying to fix a program of mine (still didn't get there) I came\nacross:\n\nDllist *\nDLNewList(void)\n{\n Dllist *l;\n\n l = malloc(sizeof(Dllist));\n l->dll_head = 0;\n l->dll_tail = 0;\n\n return l;\n}\n\nin src/backend/lib/dllist.c, and for a while thought that my problem\nhad something to do with malloc failing, returning 0, and the next\nline trying to write a zero at memory location zero leading to a\nsegfault. What is the right way of dealing with this? Error message,\nor returning NULL a la:\n\nDllist *\nDLNewList(void)\n{\n Dllist *l;\n\n l = (Dllist *)malloc(sizeof(Dllist));\n if (l != (Dllist *)NULL) {\n\t l->dll_head = 0;\n\t\tl->dll_tail = 0;\n\t}\n return l;\n}\n\nAgain, this wasn't the problem (still looking), but I thought it might\nbe worth mentioning (same probably applies to DLNewElem()).\n\nCheers,\n\nPatrick\n",
"msg_date": "Tue, 30 Mar 1999 11:09:11 +0100 (BST)",
"msg_from": "[email protected] (Patrick Welche)",
"msg_from_op": true,
"msg_subject": "dllist.c"
}
] |
[
{
"msg_contents": "\nmaybe I'm missing something but how do I get this version ?\n\n-- \n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n David O'Farrell AerSoft Limited \n mailto:[email protected] 2 Northumberland Avenue,\n Dun Laoghaire,Co. Dublin\n\tDirect Phone 353-1-2145950\n Phone: 01-2301166 Fax: 01-2301167\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n",
"msg_date": "Tue, 30 Mar 1999 15:16:11 +0100",
"msg_from": "\"David O'Farrell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Announcing 6.5 Beta!"
},
{
"msg_contents": "On Tue, 30 Mar 1999, David O'Farrell wrote:\n\n> \n> maybe I'm missing something but how do I get this version ?\n\nThe source code is now considered Beta...there is no \"beta\" package built\nyet, nor will it be for another week. For now, only the snaps\nexist...next Monday, I will build a beta package up, and announce that for\ntesting...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 30 Mar 1999 10:30:56 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Announcing 6.5 Beta!"
}
] |
[
{
"msg_contents": "I have a view:\n\nDROP VIEW InvoiceSum;\nCREATE VIEW InvoiceSum as\n SELECT i.InvoiceID, i.DatePrinted, il.MemberID, \nsum((il.UnitPrice * il.Quantity) + il.ShippingHandling) AS AmountOfInvoice\n FROM Invoice i, InvoiceLines il\n WHERE i.InvoiceID = il.InvoiceID \n group by i.InvoiceID, i.DatePrinted, il.MemberID;\n\n\nThe following works great:\n\nselect * from invoicesum where memberid = 685;\n\nThe following fails:\n\nselect MemberID, sum(AmountOfInvoice) as InvAmt \n\t\tfrom InvoiceSum \n\t\twhere memberid = 685;\n\nERROR: Illegal use of aggregates or non-group column in target list\n\nThe following also fails:\n\nselect MemberID, sum(AmountOfInvoice) as InvAmt \n\t\tfrom InvoiceSum \n\t\twhere memberid = 685\n\t\tgroup by memberid;\n\nERROR: ExecAgg: Bad Agg->Target for Agg 0\n\tI get this error with or without the where clause.\n\nI have many complex queries like this that I would like (need) to port to\nPostgreSQL. As a result, this limitation will be difficult for me to work\naround. I would be willing to explore fixing this for 6.6 if someone would\nbe willing to point me in the right direction and tell me where to start\nlooking in the code and possibly what to look for. The more information the\nbetter.\n\nI would also like to make views updateable without having to add rules.\n\nThe other limitation that is presenting some challenges is the lack of outer\njoins. Is any portion of outer join supported? Could I find out when outer\njoin support is planned for implementation?\n\nThanks, Michael\n\n",
"msg_date": "Tue, 30 Mar 1999 10:11:57 -0600",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Views, aggregations, and errors"
}
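One possible interim workaround for the aggregate-on-view failures above is to inline the view by hand and aggregate against its base tables (a sketch using the InvoiceSum definition given above; untested here):

    SELECT il.MemberID,
           sum((il.UnitPrice * il.Quantity) + il.ShippingHandling) AS InvAmt
      FROM Invoice i, InvoiceLines il
     WHERE i.InvoiceID = il.InvoiceID
       AND il.MemberID = 685
     GROUP BY il.MemberID;

Since AmountOfInvoice is itself a sum over invoice lines, summing it per member is equivalent to summing the line amounts directly, which is roughly what the rewriter would need to do to make the failing queries work.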
] |
[
{
"msg_contents": "\nselect id \n from clients \n where id = ( select id \n from clients \n where count(id) = 1 ) ;\n\nThe error I get is that you can't do the AGGREGATE int he WHERE clause,\nbut this is with a pre-v6.5 server too...technically, should the above be\npossible?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 30 Mar 1999 13:18:17 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Should the following work...?"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> select id\n> from clients\n> where id = ( select id\n> from clients\n> where count(id) = 1 ) ;\n> \n\nWhat are you trying to do, grab the id \nof the first row in the table?\n\nIf this is so, try:\n\n\tselect id from clients limit 1;\n\nOtherwise, I can't figure out what\nthe above code is trying to accomplish.\n\nBest,\n\nClark\n",
"msg_date": "Tue, 30 Mar 1999 18:34:40 +0000",
"msg_from": "Clark Evans <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Should the following work...?"
},
{
"msg_contents": "The Hermit Hacker wrote:\n>\n> select id\n> from clients\n> where id = ( select id\n> from clients\n> where count(id) = 1 ) ;\n>\n\nHmm. If you are trying to identify \nduplicate id's then try :\n\nselect distinct id from client x\nwhere 1 < \n ( select count(id) \n from client y\n where y.id = x.id );\n\n\nIdeally, this would be done as:\n\nselect a from \n ( select a, count(a) cnt \n from test \n group by a ) where cnt < 2;\n\nHowever, PostgreSQL dosn't support\ndynamic views. This, btw, is a \nvery useful feature.\n\nHope this helps,\n\nClark\n",
"msg_date": "Tue, 30 Mar 1999 18:44:35 +0000",
"msg_from": "Clark Evans <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Should the following work...?"
}
] |
[
{
"msg_contents": "try:\n explain select when from notes where datetime_date(when) = 'now'::date;\n\t-DEJ\n\n> Tom Lane wrote:\n> \n> > In the meantime, a workaround is to call the function using its\n> > builtin name:\n> >\n> > play=> create table notes (when datetime);\n> > CREATE\n> > play=> create index when_ndx3 on notes (datetime_date(when) \n> date_ops);\n> > CREATE\n> \n> Thanks, that helps - I can now index a datetime field by \n> date.But the index\n> doesn't appear to be used:\n> \n> db=> create index ndx3 on notes (datetime_date(when) date_ops);\n> CREATE\n> db=> vacuum analyze notes;\n> VACUUM\n> db=> explain select when from notes where when::date = 'now'::date;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on notes (cost=4330.37 size=43839 width=8)\n> \n> EXPLAIN\n> \n> So it appears that the optimizer doesn't like this index. \n> (This is with\n> version 6.4.2.)\n> The table has about 90,000 rows, of which between 10 and 100 \n> might match a\n> given date, so an index would really help.\n> \n> Am I missing something simple here? Thanks again for all your help.\n> \n> Andrew Merrill\n> \n",
"msg_date": "Tue, 30 Mar 1999 11:33:46 -0600",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [SQL] indexing a datetime by date"
},
{
"msg_contents": "Jackson, DeJuan wrote:\n\n> try:\n> explain select when from notes where datetime_date(when) = 'now'::date;\n> -DEJ\n\nAha. That does the trick. Thanks!\n\nAndrew Merrill\n\n",
"msg_date": "Tue, 30 Mar 1999 10:11:19 -0800",
"msg_from": "Andrew Merrill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] indexing a datetime by date"
}
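Putting the pieces of this thread together, the working recipe is (6.4.2-era syntax):

    CREATE INDEX ndx3 ON notes (datetime_date(when) date_ops);
    VACUUM ANALYZE notes;
    SELECT when FROM notes WHERE datetime_date(when) = 'now'::date;

The function spelling in the query has to match the one used in the index definition for the optimizer to consider the index.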
] |
[
{
"msg_contents": "> select id \n> from clients \n> where id = ( select id \n> from clients \n> where count(id) = 1 ) ;\n> The error I get is that you can't do the AGGREGATE int he \n> WHERE clause,\n> but this is with a pre-v6.5 server too...technically, should \n> the above be\n> possible?\nI believe instead of WHERE that should be a HAVING clause.\nBut I'm not sure PostgreSQL can handle a HAVING in a sub-select.\n\n\t-DEJ\n",
"msg_date": "Tue, 30 Mar 1999 11:42:21 -0600",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Should the following work...?"
},
{
"msg_contents": "\nUsing:\n\nselect id\n from clients\n where id = ( select id\n from clients\n group by id\n having count(id) = 1 ) ;\n\n\nI get:\n\nERROR: rewrite: aggregate column of view must be at rigth side in qual\n\n\n\nOn Tue, 30 Mar 1999, Jackson, DeJuan wrote:\n\n> > select id \n> > from clients \n> > where id = ( select id \n> > from clients \n> > where count(id) = 1 ) ;\n> > The error I get is that you can't do the AGGREGATE int he \n> > WHERE clause,\n> > but this is with a pre-v6.5 server too...technically, should \n> > the above be\n> > possible?\n> I believe instead of WHERE that should be a HAVING clause.\n> But I'm not sure PostgreSQL can handle a HAVING in a sub-select.\n> \n> \t-DEJ\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 30 Mar 1999 14:27:20 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Should the following work...?"
},
{
"msg_contents": "\nJust talked to one of our Oracle guru's here at hte office, and he had to\nshake his head a bit :)\n\nTo find duplicate records, or, at least, data in a particular field, he\nsuggests just doing:\n\n SELECT id,count(1)\n FROM clients\n GROUP BY id\n HAVING count(1) > 1;\n\nA nice, clean, simple solution :)\n\nOn Tue, 30 Mar 1999, The Hermit Hacker wrote:\n\n> \n> Using:\n> \n> select id\n> from clients\n> where id = ( select id\n> from clients\n> group by id\n> having count(id) = 1 ) ;\n> \n> \n> I get:\n> \n> ERROR: rewrite: aggregate column of view must be at rigth side in qual\n> \n> \n> \n> On Tue, 30 Mar 1999, Jackson, DeJuan wrote:\n> \n> > > select id \n> > > from clients \n> > > where id = ( select id \n> > > from clients \n> > > where count(id) = 1 ) ;\n> > > The error I get is that you can't do the AGGREGATE int he \n> > > WHERE clause,\n> > > but this is with a pre-v6.5 server too...technically, should \n> > > the above be\n> > > possible?\n> > I believe instead of WHERE that should be a HAVING clause.\n> > But I'm not sure PostgreSQL can handle a HAVING in a sub-select.\n> > \n> > \t-DEJ\n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 30 Mar 1999 14:44:34 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Should the following work...?"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> To find duplicate records, or, at least,\n> data in a particular field, he suggests \n> just doing:\n> \n> SELECT id,count(1)\n> FROM clients\n> GROUP BY id\n> HAVING count(1) > 1;\n> \n> A nice, clean, simple solution :)\n\nYa. That's pretty. For some\nreason I always forget using the\n'HAVING' clause, and end up using\na double where clause. \n\n:) Clark\n",
"msg_date": "Tue, 30 Mar 1999 18:47:58 +0000",
"msg_from": "Clark Evans <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Should the following work...?"
},
{
"msg_contents": "\nYa, that's what I forgot too :( Its not something I use everyday, so\nnever think about it :)\n\n\nOn Tue, 30 Mar 1999, Clark Evans wrote:\n\n> The Hermit Hacker wrote:\n> > To find duplicate records, or, at least,\n> > data in a particular field, he suggests \n> > just doing:\n> > \n> > SELECT id,count(1)\n> > FROM clients\n> > GROUP BY id\n> > HAVING count(1) > 1;\n> > \n> > A nice, clean, simple solution :)\n> \n> Ya. That's pretty. For some\n> reason I always forget using the\n> 'HAVING' clause, and end up using\n> a double where clause. \n> \n> :) Clark\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 30 Mar 1999 15:08:33 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Should the following work...?"
}
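For Marc's original goal (ids that appear exactly once), one way around the failing subselect might be to materialize the counts first (a sketch, untested; id_counts is an illustrative name):

    SELECT id, count(1) AS cnt
      INTO TABLE id_counts
      FROM clients
     GROUP BY id;

    SELECT id FROM id_counts WHERE cnt = 1;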
] |
[
{
"msg_contents": "select sum(TotShippingHandling) from Invoice where 1 = 1 and\nTotShippingHandling <> 0;\n---\n \n(1 row)\n\n\nselect sum(TotShippingHandling) from Invoice where TotShippingHandling <> 0;\nsum\n-------\n6781.05\n(1 row)\n\n\nRed Hat 5.1, intel, PostgreSQL 6.5 (downloaded last night).\n\nThanks, Michael\n",
"msg_date": "Tue, 30 Mar 1999 15:23:59 -0600",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Interesting failure when selecting aggregates"
}
] |
[
{
"msg_contents": "Could one of you kinds soul point me to the PostgreSQL code for determining\nTimezones and Daylight Savings. If I can assess the OS's database that\nwould be best. Thanks\n\t-DEJ\n",
"msg_date": "Tue, 30 Mar 1999 18:02:21 -0600",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[OT] Timezones and Daylight savings."
},
{
"msg_contents": "> Could one of you kinds soul point me to the PostgreSQL code for \n> determining Timezones and Daylight Savings. \n\nbackend/utils/adt/{dt.c,nabstime.c}\n\n> If I can assess the OS's database that would be best.\n\nNot sure what you mean here. As a guess, you should look at the\nutility \"zdump\", which will show you the transition times for ST/DST.\nYou can set the TZ environment variable (or PGTZ envar when running\nPostgres) to test out different time zones, which is how I can test\nbug reports from other parts of the world.\n\n - Tom\n",
"msg_date": "Wed, 31 Mar 1999 15:47:04 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [OT] Timezones and Daylight savings."
}
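If memory serves, the zone can also be switched per-session from SQL in this era, which is handy for quick checks without touching the environment (an assumption worth verifying against your version):

    SET TIME ZONE 'PST8PDT';
    SELECT 'now'::datetime;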
] |
[
{
"msg_contents": "============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name\t\t:\tBilly G. Allie\nYour email address\t:\[email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t: Pentium\n\n Operating System (example: Linux 2.0.26 ELF) \t: UnixWare 7.0.1\n\n PostgreSQL version (example: PostgreSQL-6.4) : Current CVS version\n\n Compiler used (example: gcc 2.8.0)\t\t: Optimizing C Compilation\n\t\t\t\t\t\t System (CCS) 3.2 08/18/98\n\t\t\t\t\t\t (u701)\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\nCompiling 'vacuum.c' produces the following errors:\n\n UX:acomp: ERROR: \"vacuum.c\", line 2424: cannot do pointer arithmetic on\n operand of unknown size\n UX:acomp: ERROR: \"vacuum.c\", line 2428: cannot do pointer arithmetic on\n operand of unknown size\n UX:acomp: ERROR: \"vacuum.c\", line 2431: cannot do pointer arithmetic on\n operand of unknown size\n UX:acomp: ERROR: \"vacuum.c\", line 2433: cannot do pointer arithmetic on\n operand of unknown size\n UX:acomp: ERROR: \"vacuum.c\", line 2448: cannot do pointer arithmetic on\n operand of unknown size\n\nCompiling 'shmem.c' produces the following error:\n\n UX:acomp: ERROR: \"shmem.c\", line 740: void function cannot return value\n\n\nPlease describe a way to repeat the problem. Please try to provide a\nconcise reproducible example, if at all possible: \n----------------------------------------------------------------------\nCompile the program on a strict ANSI C compiler :-)\n\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\nThe attached patch will fix the problems.\n\nIn vacuumc.c, pointer arithmatic was performed on a pointer of type void. The \npatch casts the void pointer to a character pointer, does the arithmatic, and \nthen casts the result back to a void pointer.\n\nIn shmem.c, a function of type void returned a value. The patch removes the \noffending return statement.\n\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Wed, 31 Mar 1999 03:06:41 -0500",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug Report - Compile errors in vacuum.c and shmem.c"
}
] |
[
{
"msg_contents": "Hi,\nI still have a problem with one query not returning all matching queries.\nBut it get's better than a few weeks before :-)\nThe symptom now:\nAs long as I do NOT create statisics wit 'VACUUM ANALYZE' I get all matching\nqueries. After the first 'VACCUM ANALYZE' the query malfunctions.\n\nAs a attachment: a.txt a EXPLAIN SELECT before the VACUUM,\n b.txt a EXPLAIN SELECT after the VACUUM.\n\nFor the used queries please search my last mail to hackers..\n\nBye!\n----\nMichael Reifenberger\nPlaut Software GmbH, R/3 Basis",
"msg_date": "Wed, 31 Mar 1999 11:37:08 +0200 (CEST)",
"msg_from": "Michael Reifenberger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with complexer join still persists sometimes"
},
{
"msg_contents": "Michael Reifenberger <[email protected]> writes:\n> I still have a problem with one query not returning all matching queries.\n\nYou didn't say what version you're using, but if it's a 6.5 prerelease\nthen I think this is the same problem I reported on Sunday: the\noptimizer is generating mergejoin plans that don't sort the input.\n\n> NOTICE: QUERY PLAN:\n> \n> Merge Join (cost=10.96 size=149 width=184)\n> -> Seq Scan on kunden k (cost=2.39 size=42 width=28)\n> -> Merge Join (cost=7.06 size=4 width=156)\n> -> Seq Scan on emp e (cost=1.07 size=2 width=28)\n> -> Nested Loop (cost=5.86 size=2 width=128)\n> -> Seq Scan on pausch p (cost=1.07 size=2 width=52)\n> -> Index Scan using reise_2 on reise r (cost=2.40 size=8 width=76)\n\nThe sequential scans wouldn't necessarily produce sorted output,\nbut MergeJoin depends on having sorted input, so this plan looks\npretty bogus to me.\n\nBTW, Charles Hornberger also sees this problem in the 29-Mar snapshot,\nbut reports that the 23-Mar snapshot doesn't have the bug. (Does that\nagree with your results, Michael?) Apparently it was broken by some\nrecent change, not the large optimizer changes Bruce and I made a few\nweeks ago.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Mar 1999 20:06:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with complexer join still persists sometimes "
}
] |
[
{
"msg_contents": "--- Situation Description ---\n\nI'm writing a quota system to a web-by-mail robot, on a database\nwhich consists of\n email text,\n url text,\n action varchar(8),\t-- in \"sent\", \"toobig\", \"refused\", ...\n bytes int4,\n at <undeterimined>\n\nWhat I'm wanting to say is:\n\n SELECT COUNT(bytes) AS requests, SUM(bytes) as datasent\n FROM tbl_robot_log\n WHERE email = '$email' AND action = 'sent'\n AND { it was less than an hour ago }\n\nThat'll give me the user's usage statistics for the last hour;\nrequests -> number of requests processed, datasent -> bytes they\nhave received.\n\nThen I do another request to get their 24 hour stats.\n\n--- Problem ---\n\nAssuming 'at' is a datetime (it's currently an int4 with a unix timestamp in it - yeuch :)\n\nQuestion 1:\n\n SELECT COUNT(bytes), SUM(bytes)\n FROM tbl_robot_log\n WHERE email = '$email' AND action = 'sent'\n AND at >= datetime('now' + reltime('-60 mins'::timespan));\n\nWhen is the datetime(...) expression evaluated?\n\nIs it evaluated per line of data that matches the previous two\nexpressions? Or is it evaluated once? If so, then it is probably\nas efficient as my current operations.\n\n\nQuestion 2:\n\nWhat would be the most efficient way to get the combination of\n1 and 24 hour logs? Should I get all entries within the last 24\nhours and use a 'group by' statement? If so; how would I do the\ngroup by? Is there a way to say :\n\n GROUP BY age(at, 'hours')?\n\nOliver\n-- \nIf at first you don't succeed, skydiving is not for you...\n",
"msg_date": "Wed, 31 Mar 1999 11:57:40 +0100",
"msg_from": "Oliver Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Date operations"
},
{
"msg_contents": "On Wed, Mar 31, 1999 at 11:57:40AM +0100, Oliver Smith wrote:\n> Question 2:\n\nOh - and \n\nQuestion 3:\n\nBecause this query gets hit a lot, would it make sense to create an\nindex\n\n CREATE INDEX tbl_robot_logs_at_age_idx ON tbl_robot_logs\n ( AGE(at) );\n\nIf so - how should I word the query to use this?\n\n SELECT * FROM tbl_robot_logs\n WHERE email = '$email' AND action = 'sent'\n AND AGE(at) > '-1 hour'::timespan;\n\nWill the query optimizer recognise that it has an index to suit this?\n\n\nOliver\n-- \nIf at first you don't succeed, skydiving is not for you...\n",
"msg_date": "Wed, 31 Mar 1999 12:30:01 +0100",
"msg_from": "Oliver Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Date operations"
},
{
"msg_contents": "> Question 1:\n> SELECT COUNT(bytes), SUM(bytes)\n> FROM tbl_robot_log\n> WHERE email = '$email' AND action = 'sent'\n> AND at >= datetime('now' + reltime('-60 mins'::timespan));\n> When is the datetime(...) expression evaluated?\n\nThe function calls are probably evaluated once per query. And you can\nsimplify your query a bit:\n\n AND at >= ('now'::datetime + '-60 mins'::timespan);\n\n> Question 2:\n> What would be the most efficient way to get the combination of\n> 1 and 24 hour logs? Should I get all entries within the last 24\n> hours and use a 'group by' statement? If so; how would I do the\n> group by? Is there a way to say :\n> GROUP BY age(at, 'hours')?\n\nThe function you want is date_part('hours', at). Not sure if the GROUP\nBY would be happy with it, but istm that you might be able to select a\ncolumn of date_part() and then group on that...\n\n - Tom\n",
"msg_date": "Wed, 31 Mar 1999 16:19:54 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Date operations"
},
{
"msg_contents": "> Question 3:\n> Because this query gets hit a lot, would it make sense to create an\n> index\n> CREATE INDEX tbl_robot_logs_at_age_idx ON tbl_robot_logs\n> ( AGE(at) );\n\nNope, since \"age\" is calculated from \"now\", which is alway changing.\nYou can't make a useful index on a non-constant expression or\nfunction.\n\n - Tom\n",
"msg_date": "Wed, 31 Mar 1999 16:21:35 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Date operations"
}
] |
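Putting Thomas's answers together, a sketch of the quota queries might look like the following; the plain index on the raw column is my suggestion (since an index on age() won't work), not something confirmed in the thread:

```sql
-- Last-hour totals, using the simplified constant expression:
SELECT count(bytes) AS requests, sum(bytes) AS datasent
FROM tbl_robot_log
WHERE email = '$email' AND action = 'sent'
  AND at >= ('now'::datetime + '-60 mins'::timespan);

-- Per-hour breakdown of the last 24 hours; per Thomas, select
-- date_part() as a column and group on that (he was not certain
-- GROUP BY would accept it, so treat this as untested):
SELECT date_part('hour', at) AS hr, count(bytes), sum(bytes)
FROM tbl_robot_log
WHERE email = '$email' AND action = 'sent'
  AND at >= ('now'::datetime + '-24 hours'::timespan)
GROUP BY hr;

-- An ordinary index on the column itself lets the range tests above
-- use an index even though age(at) cannot be indexed:
CREATE INDEX tbl_robot_log_at_idx ON tbl_robot_log (at);
```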
[
{
"msg_contents": "============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name\t\t:\tClark Evans\nYour email address\t:\[email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t: Pentium\n\n Operating System (example: Linux 2.0.26 ELF) \t: Linux RH 5.2\n\n PostgreSQL version (example: PostgreSQL-6.4) : Current CVS version\n\n Compiler used (example: gcc 2.8.0)\t\t: \n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\nAlter table does not seem to operate on tempoary tables\nand does not throw an error. I would like to be able\nto add columns to temp tables to do a cross-tab query.\n\nPlease describe a way to repeat the problem. Please try to provide a\nconcise reproducible example, if at all possible: \n----------------------------------------------------------------------\n\nclark=> create temp table x ( x text );\nCREATE\nclark=> insert into x values ( 'x' );\nINSERT 270675 1\nclark=> select * from x;\nx\n-\nx\n(1 row)\n\nclark=> alter table x add column y text;\nADD\nclark=> select * from x;\nx\n-\nx\n(1 row)\n\nclark=> insert into x values ('a','b');\nERROR: INSERT has more expressions than target columns.\n\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\n\nThe only work-around I know is to use arrays. I'm trying that.",
"msg_date": "Wed, 31 Mar 1999 16:22:40 +0000",
"msg_from": "Clark Evans <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug Report - alter table does not work on temporary tables"
}
] |
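Until ALTER TABLE works on temp tables (or at least complains), one hedged workaround besides the arrays Clark mentions is to rebuild the temp table with the extra column; the names follow the bug report, but the approach itself is only a suggestion, not something confirmed in the thread:

```sql
-- Recreate the temp table with the new column, then copy the rows over:
CREATE TEMP TABLE x2 (x text, y text);
INSERT INTO x2 (x) SELECT x FROM x;
-- x2 now accepts two-column inserts where x did not:
INSERT INTO x2 VALUES ('a', 'b');
```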
[
{
"msg_contents": "Hi,\n\nI have some patches for version 6.5 (postgresql.snapshot.tar.gz dated Mar 26):\n\nvarchar-array.patch\tthis patch adds support for arrays of char() and\n\t\t\tvarchar(), which where always missing from postgres,\n\t\t\tand also fixes a bug in the catalog definition of\n\t\t\tarray_in(). You can now define filds as:\n\n\t\t\t\tx char(12)[],\n\t\t\t\ty varchar(9)[],\n\nblock-size.patch\tthis patch fixes many errors in the parser and other\n\t\t\tprogram which happen with very large query statements\n\t\t\t(> 8K) when using a page size other than 8192.\n\t\t\tThe patch also replaces all the occurrences of `8192'\n\t\t\tand `1<<13' in the sources with the proper constants\n\t\t\tdefined in various include files.\n\n\t\t\tI found a little problem with this patch: flex defines\n\t\t\tan internal buffer size of 16K which is obviously\n\t\t\tinadequate with large queries. I solved the problem\n\t\t\treplacing 16K with 64K with sed, but this introduces\n\t\t\tsome constants hardwired in the Makefiles, which I\n\t\t\tdon't like very much. I don't even know if this fix\n\t\t\tis portable with other versions of flex. If someone\n\t\t\thas a better idea it is welcome.\n\nIn the .tgz attachment there are also two sql file I used for testing.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+",
"msg_date": "Wed, 31 Mar 1999 18:39:10 +0200 (MET DST)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": true,
"msg_subject": "patches for 6.5"
}
] |
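For anyone trying out the varchar-array patch, a small illustration of what it enables (the table and values here are mine, not taken from Massimo's test files):

```sql
-- With the patch applied, char() and varchar() columns can be arrays:
CREATE TABLE codes (
    x char(12)[],
    y varchar(9)[]
);

INSERT INTO codes VALUES ('{"alpha","beta"}', '{"one"}');

-- Array subscripts are 1-based:
SELECT x[1], y[1] FROM codes;
```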
[
{
"msg_contents": "Hi,\n\nreading \"The practical SQL handbook\" by Bowman et.al (third edition)\nI tried to do some examples (there is CD comes with book) and after\nporting example bookbiz database to 6.4.2 I found a problem with view:\n\ncreate view categories\nas select type as Category, avg(price) as Average_Price\nfrom titles\ngroup by Category;\n\n\n\nbookbiz=> \\d categories\n\nTable = categories\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| category | char() | 12 |\n| average_price | money | 4 |\n+----------------------------------+----------------------------------+-------+\n\nbookbiz=> select * from categories;\ncategory |average_price\n------------+-------------\nbusiness |$13.73\nmod_cook |$2.99\npopular_comp|$22.95\npsychology |$13.50\ntrad_cook |$16.45\n |\n(6 rows)\n\nbookbiz=> select category,average_price from categories;\ncategory |average_price\n------------+-------------\nbusiness |$13.73\nmod_cook |$2.99\npopular_comp|$22.95\npsychology |$13.50\ntrad_cook |$16.45\n |\n(6 rows)\n\nbookbiz=> select average_price from categories;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or while pr\nocessing the request.\nWe have lost the connection to the backend, so further processing is impossible.\n\n\nLast query doesn't works ! I don't see possible reason if previous queries\nwork ok. \nMy setup: \nbookbiz=> select version();\nversion\n------------------------------------------------------------------\nPostgreSQL 6.4.2 on i586-pc-linux-gnu, compiled by gcc egcs-2.91.6\n(1 row)\n\n\tRegards,\n\n\t\tOleg\n\nBTW, Does somebody already ported to postgreSQL an example script, which created\nbookbiz database from the book ?\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 1 Apr 1999 00:11:08 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "problem with view"
}
] |
[
{
"msg_contents": "Here's my problem I need to display times from a Web application for a\nspecific timezone(very variable). What I need to know is when that specific\nlocation switches to Daylight savings and what the new timezone abbreviation\nwould be, so that I can adjust their input datetimes accordingly as well as\nmy output datetimes. i.e. CDT to CST \n\n> -----Original Message-----\n> > Could one of you kinds soul point me to the PostgreSQL code for \n> > determining Timezones and Daylight Savings. \n> \n> backend/utils/adt/{dt.c,nabstime.c}\n> \n> > If I can assess the OS's database that would be best.\n> \n> Not sure what you mean here. As a guess, you should look at the\n> utility \"zdump\", which will show you the transition times for ST/DST.\n> You can set the TZ environment variable (or PGTZ envar when running\n> Postgres) to test out different time zones, which is how I can test\n> bug reports from other parts of the world.\n> \n> - Tom\n",
"msg_date": "Wed, 31 Mar 1999 15:33:13 -0600",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] [OT] Timezones and Daylight savings."
}
] |
[
{
"msg_contents": "It seems that one topic of discussion regarding the optimizer\nis that it dosen't know how many rows are in the table untill\nan analyze is done. Doing SELECT COUNT(*) seems like something\noften done that also requires the number of rows in the table\nto be known. Perhaps it would be prudent to keep a running\ntotal of the number of active rows for each table cashed?\n\nClark\n\nJordan Krushen wrote:\n> I know that MySQL doesn't actually hit the tables when doing a SELECT\n> COUNT(*), and it returns very quickly as a result.. On large Postgres\n> tables, COUNT(*) is horrendously slow. What I'm wondering is if anybody\n> here knows a faster way to get the row count, be it an internal Postgres\n> variable one can access, or if a COUNT(small_primary_key) would be faster.\n> Any ideas?\n>\n",
"msg_date": "Thu, 01 Apr 1999 00:21:51 +0000",
"msg_from": "Clark Evans <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PHP3] [OT]: SELECT COUNT(*) in Postgres"
}
] |
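As a hedged aside, something close to Clark's cached row count already exists: the planner's estimate in pg_class.reltuples. It is only refreshed by VACUUM (ANALYZE), so it is approximate rather than a running total, but it is readable without scanning the table:

```sql
-- Approximate row count as of the last vacuum; 'mytable' is a placeholder:
SELECT relname, reltuples
FROM pg_class
WHERE relname = 'mytable';
```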
[
{
"msg_contents": "Configure no longer recognizes UnixWare 7 as a supported OS. The attached \npatches will correct this.\n\nWith these patches, config.guess will produce:\n\n\ti?86-pc-unixware7.0.1\t(where ? is 3 or 5 depending on the chipset)\n\nwhich config.sub will accept and pass on to configure. Configure was changed \nto recognize 'unixware*' as a valid type instead of 'sysv5'. I belive these \nchanges brings the recognition of the unixware systems in line with the intent \nof the config.guess and config.sub routines (Intel x86 machines identified by \niX86-pc constructs followed by -OSname). The other option would be to have \nconfig.guess output 'i?86-pc-sysv5-unixware7.0.1', but I did not have time to \nget it to work in that form (yet).\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Sat, 17 Apr 1999 16:39:20 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "UnixWare 7 patches for current CVS code."
},
{
"msg_contents": "\nI don't think we can apply these because config.guess comes from GNU\nconfigure, not from us. Is there something we can do to\ntemplate/.similar to fix this? Can someone tell Billie how to get the\nproper .similar string for Unixware so we can add it, and rerun configure?\n\n\n> Configure no longer recognizes UnixWare 7 as a supported OS. The attached \n> patches will correct this.\n> \n> With these patches, config.guess will produce:\n> \n> \ti?86-pc-unixware7.0.1\t(where ? is 3 or 5 depending on the chipset)\n> \n> which config.sub will accept and pass on to configure. Configure was changed \n> to recognize 'unixware*' as a valid type instead of 'sysv5'. I belive these \n> changes brings the recognition of the unixware systems in line with the intent \n> of the config.guess and config.sub routines (Intel x86 machines identified by \n> iX86-pc constructs followed by -OSname). The other option would be to have \n> config.guess output 'i?86-pc-sysv5-unixware7.0.1', but I did not have time to \n> get it to work in that form (yet).\nContent-Description: uw7.config.patch\n\n[Attachment, skipping...]\n*** src/config.guess.orig\tSun Apr 11 16:12:39 1999\n--- src/config.guess\tSun Apr 11 16:15:54 1999\n***************\n*** 709,715 ****\n \t (/bin/uname -X|egrep '^Machine.*Pentium' >/dev/null) \\\n \t && UNAME_MACHINE=i586\n \tfi\n! \techo ${UNAME_MACHINE}-unixware-${UNAME_RELEASE}-${UNAME_VERSION}\n \texit 0 ;;\n pc:*:*:*)\n # uname -m prints for DJGPP always 'pc', but it prints nothing about\n--- 709,715 ----\n \t (/bin/uname -X|egrep '^Machine.*Pentium' >/dev/null) \\\n \t && UNAME_MACHINE=i586\n \tfi\n! \techo ${UNAME_MACHINE}-pc-unixware${UNAME_VERSION}\n \texit 0 ;;\n pc:*:*:*)\n # uname -m prints for DJGPP always 'pc', but it prints nothing about\n*** src/config.sub.orig\tSat Apr 10 20:05:46 1999\n--- src/config.sub\tSun Apr 11 16:57:02 1999\n***************\n*** 692,697 ****\n--- 692,699 ----\n \t-svr4*)\n \t\tos=-sysv4\n \t\t;;\n+ \t-unixware7*)\n+ \t\t;;\n \t-unixware*)\n \t\tos=-sysv4.2uw\n \t\t;;\n*** /tmp/configure.in\tSat Apr 17 16:20:30 1999\n--- src/configure.in\tSat Apr 17 16:20:47 1999\n***************\n*** 44,50 ****\n \t\t *) os=unknown need_tas=no ;;\n esac ;;\n sysv4*) os=svr4 need_tas=no ;;\n! sysv5*) os=unixware need_tas=no ;;\n *) echo \"\"\n echo \"*************************************************************\"\n echo \"configure does not currently recognize your operating system,\"\n--- 44,50 ----\n \t\t *) os=unknown need_tas=no ;;\n esac ;;\n sysv4*) os=svr4 need_tas=no ;;\n! unixware*) os=unixware need_tas=no ;;\n *) echo \"\"\n echo \"*************************************************************\"\n echo \"configure does not currently recognize your operating system,\"\n*** /tmp/configure\tSat Apr 17 16:17:35 1999\n--- src/configure\tFri Apr 16 13:54:22 1999\n***************\n*** 649,654 ****\n--- 649,655 ----\n \t\t *) os=unknown need_tas=no ;;\n esac ;;\n sysv4*) os=svr4 need_tas=no ;;\n+ unixware*) os=unixware need_tas=no ;;\n *) echo \"\"\n echo \"*************************************************************\"\n echo \"configure does not currently recognize your operating system,\"\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 12:08:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] UnixWare 7 patches for current CVS code."
}
] |
[
{
"msg_contents": "LIVE PERSONAL PSYCHIC! (as seen on T.V.)\n\nLEARN TODAY WHAT YOUR FUTURE HOLDS FOR\nLOVE, MONEY, MARRIAGE, JOB, & HEALTH\n\nASTROLOGY CLAIRVOYANCY\nNUMEROLOGY TAROT\n\nALL QUESTIONS ANSWERED IMMEDIATELY!\n\nREALIZE YOUR DESTINY! CALL RIGHT NOW!\n\n1-900-226-4140 or 1-800-372-3384 for VISA, MC, & AMEX\n\n(These are not sex lines!)\n\nThis message is intended for Psychic Readers, Psychic Users and people who are involved in the $1 Billion a year Psychic Industry. If this message has reached you in error, please disregard it and accept our apoligies. To be removed from this list, please respond with the subject \"remove\". Thank You.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nLIVE PERSONAL PSYCHIC! (as seen on T.V.)\n\n\nLEARN TODAY WHAT YOUR FUTURE HOLDS FOR\n\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Mon, 19 Apr 99 00:45:01 Pacific Daylight Time",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "adv: Important Psychic Message For You..."
}
] |
[
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n>>>>>>> select title, summary, time from story t where time = (select\n>>>>>>> max(s.time) from story s GROUP BY s.title);\n>> \n>>>> Why doesn't replacing \"=\" with \"IN\" produce a result? It wouldn't be the\n>>>> desired result, but I thought this was legal.\n>> \n>> I thought so too (on both counts). Are you saying it doesn't work?\n>> What happens? Which version are you using?\n\n> httpd=> select title, summary, time from story t where time IN (select\n> max(s.time) from story s GROUP BY s.title);\n> ERROR: parser: Subselect has too many or too few fields.\n\n> I'm using postgresql-snap-990329.tgz\n\nYeah, I see it too. This looks like a definite bug to me, but I have\nother bugs to squash right now :-(. Anyone else want to jump on this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Apr 1999 12:06:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] Finding the \"most recent\" rows "
},
{
"msg_contents": "> Chris Bitmead <[email protected]> writes:\n> >>>>>>> select title, summary, time from story t where time = (select\n> >>>>>>> max(s.time) from story s GROUP BY s.title);\n> >> \n> >>>> Why doesn't replacing \"=\" with \"IN\" produce a result? It wouldn't be the\n> >>>> desired result, but I thought this was legal.\n> >> \n> >> I thought so too (on both counts). Are you saying it doesn't work?\n> >> What happens? Which version are you using?\n> \n> > httpd=> select title, summary, time from story t where time IN (select\n> > max(s.time) from story s GROUP BY s.title);\n> > ERROR: parser: Subselect has too many or too few fields.\n\nThis is not legal. If you use GROUP BY, the field must be in the target\nlist. In this case, s.title is not in the target list of the subselect.\nI realize it can't be in the subselect target list because you can only\nhave one column in the target list, but that is the case.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Apr 1999 15:43:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Finding the \"most recent\" rows"
}
] |
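For what it's worth, the per-title "most recent row" result Chris was after can be written without a GROUP BY in the subselect at all, using a correlated subquery; a sketch with the column names from the thread (not verified against the snapshot in question):

```sql
-- One row per title, keeping only that title's latest time:
SELECT title, summary, time
FROM story t
WHERE time = (SELECT max(s.time)
              FROM story s
              WHERE s.title = t.title);
```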
[
{
"msg_contents": "i've posted to general mailing list, but no clear answer, so i ask to\nyou. i hope this is right.\n\nwhat are max db and table size under Linux x86 and Linux Alpha.\n\nAre there other Limitations? If yes, what are.\nI've a big project in my mind (may be 100 Gb of data) so i must know the\nnumbers\n\nThank you for your answer and good job!\n\nvalter mazzola, http://naturalismedicina.com/linswap, The Linux & GPL\napps banner exchange program.\n\n",
"msg_date": "Fri, 23 Apr 1999 20:43:01 +0200",
"msg_from": "valter <[email protected]>",
"msg_from_op": true,
"msg_subject": "what are postgresql limits?"
},
{
"msg_contents": "On Fri, 23 Apr 1999, valter wrote:\n\n> i've posted to general mailing list, but no clear answer, so i ask to\n> you. i hope this is right.\n> \n> what are max db and table size under Linux x86 and Linux Alpha.\n> \n> Are there other Limitations? If yes, what are.\n> I've a big project in my mind (may be 100 Gb of data) so i must know the\n> numbers\n> \n> Thank you for your answer and good job!\n> \n> valter mazzola, http://naturalismedicina.com/linswap, The Linux & GPL\n> apps banner exchange program.\n\nThere are no maximum sizes that we are currently aware of *except* for the\nnumber of rows that can be in a database, which, I believe, is limited\nathtis time to a 32bit int (oid) ...\n\nOther then that, we had a bug in pre-v6.5 servers that due to the 2gig\nlimitation on files in some OSs, a problem did occur, but this bug is\nfixed in v6.5 ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 23 Apr 1999 16:25:15 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] what are postgresql limits?"
},
{
"msg_contents": "> i've posted to general mailing list, but no clear answer, so i ask to\n> you. i hope this is right.\n> \n> what are max db and table size under Linux x86 and Linux Alpha.\n> \n> Are there other Limitations? If yes, what are.\n> I've a big project in my mind (may be 100 Gb of data) so i must know the\n> numbers\n\nNo limits.\n\n100GB is fine. Use 6.5 beta for testing. Some platforms had bugs in\n2gig files, though I believe alpha was OK for that.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Apr 1999 15:55:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] what are postgresql limits?"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > i've posted to general mailing list, but no clear answer, so i ask to\n> > you. i hope this is right.\n> >\n> > what are max db and table size under Linux x86 and Linux Alpha.\n> >\n> > Are there other Limitations? If yes, what are.\n> > I've a big project in my mind (may be 100 Gb of data) so i must know the\n> > numbers\n>\n> No limits.\n>\n\nwrite this in the documentation , Unlimited is a good point!\n\n>\n> 100GB is fine. Use 6.5 beta for testing. Some platforms had bugs in\n> 2gig files, though I believe alpha was OK for that.\n>\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Fri, 23 Apr 1999 22:03:03 +0200",
"msg_from": "valter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] what are postgresql limits?"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> > > i've posted to general mailing list, but no clear answer, so i ask to\n> > > you. i hope this is right.\n> > >\n> > > what are max db and table size under Linux x86 and Linux Alpha.\n> > >\n> > > Are there other Limitations? If yes, what are.\n> > > I've a big project in my mind (may be 100 Gb of data) so i must know the\n> > > numbers\n> >\n> > No limits.\n> >\n> \n> write this in the documentation , Unlimited is a good point!\n\nI will write this in the FAQ. This is a good point.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Apr 1999 16:56:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] what are postgresql limits?"
}
] |
[
{
"msg_contents": "\nPlease note that Advocacy is what http://www.pgsql.com is being formed\naround...offering Commercial Support is just one aspect of it...\n\nOn Fri, 23 Apr 1999, Daniel Lundin wrote:\n\n> On Thu, 22 Apr 1999, Hannu Krosing wrote:\n> > I think you should try to produce a high-contrast image, that then \n> > someone with artistic amitions could draw clean. \n> > \n> > Though more likely a good clear image could be produced by just \n> > hand-drawing the thing from the definition.\n> > \n> > ----------------------\n> > Hannu\n> \n> I posted some time ago a first quick draft of a high contrast, hand drawn,\n> stylized elephant. It doesn't have the diamond in the image as of writing,\n> but of course that's easily fixed.\n> \n> http://test1.umc.se/\n> \n> The web design used was also a quick draft done real quick. I abandonded\n> the idea when I saw Dmitry Samersoff's proposal, which I felt was very\n> well done.\n> \n> The PostgreSQL logo design and website should be redone and done in a\n> professional way before the release of 6.5. From my perspective alone, my\n> clients don't take PostgreSQL seriously when I direct them to the webpages\n> for reference, and that is highly unwanted. \n> \n> Regarding this, and the adovacy issue,\n> I have some thoughts on the subject:\n> \n> My wish is to set up an postgresql-adovacy group responsible for exposure\n> issues, publishing comparisons and benchmarks, web design and document\n> consistency (not the documentation itself but rather making sure it's well\n> written and consistent to the end user/developer).\n> Writing papers on the philosophy around the database, the movement and the\n> use of postgresql in different applications.\n> Banners, web buttons and merchandise also falls under the responsibility\n> of postgresql-advocacy.\n> this is not a trivial part of succesful software, and it _definitely_\n> shouldn't be ignored (as seems somewhat the case now at times).\n> \n> As it is now, everyone I've actually _personally_ showed postgresql have\n> been amazed what it can do (the extensibility chord is a good one), but\n> not a SINGLE ONE knew anything about its features or sometimes even that\n> it existed before demonstrated before them.\n> And these are MySQL/Oracle/Sybase/Informix/FooDBMS users who after all\n> in varying degree are in the trade. \n> \n> This is what I'd like to work on changing. \n> \n> Any comments/thoughts on this?\n> \n> /Daniel\n> \n> _________________________________________________________________ /\\__ \n> \\/ \n> Daniel Lundin \n> MediaCenter, UNIX and BeOS Developer \n> \n> \"Blessed is the mind not capable of doubt\" \n> \n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 23 Apr 1999 15:52:33 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL Webpage -> PGSQL Advocacy"
},
{
"msg_contents": "> The web design used was also a quick draft done real quick. I abandonded\n> the idea when I saw Dmitry Samersoff's proposal, which I felt was very\n> well done.\n\nPerhaps we can have the nice graphic elephant for our main page, and a\ndrawn elephant for small logos if the graphic elephant does not look\ngood in small sizes.\n\n> Regarding this, and the adovacy issue,\n> I have some thoughts on the subject:\n> \n> My wish is to set up an postgresql-adovacy group responsible for exposure\n> issues, publishing comparisons and benchmarks, web design and document\n> consistency (not the documentation itself but rather making sure it's well\n> written and consistent to the end user/developer).\n> Writing papers on the philosophy around the database, the movement and the\n> use of postgresql in different applications.\n> Banners, web buttons and merchandise also falls under the responsibility\n> of postgresql-advocacy.\n> this is not a trivial part of succesful software, and it _definitely_\n> shouldn't be ignored (as seems somewhat the case now at times).\n\nGood idea. Yes we certainly need this.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Apr 1999 15:41:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL Webpage -> PGSQL Advocacy"
}
] |
[
{
"msg_contents": "\nLast night, we *f8inally* got the new CPU and hard drive installed on the\nserver...I'm currently moving over everything dealign with postgresql onto\nit, both http://www.postgresql.org *and* http://www.pgsql.com, and will be\nrebuilding all the databases and archives as soon as that is done...\n\n\nOn Fri, 23 Apr 1999, Andrew Merrill wrote:\n\n> Since April 15, the mailing list archives do not seem to have been\n> getting updated.\n> \n> For example, the web page\n> http://www.postgresql.org/mhonarc/pgsql-hackers/1999-04/index.html\n> has no messages posted since April 15.\n> \n> (I am not subscribed to the hackers list, and since I cannot read the\n> archives, I will not see any replies to this message. If you need more\n> info from me, please email me directly.)\n> \n> Andrew Merrill\n> [email protected]\n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 23 Apr 1999 15:54:36 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] mailing list archives not updating"
}
] |
[
{
"msg_contents": "\nOn 23-Apr-99 \"D'Arcy\" \"J.M.\" Cain wrote:\n> When I try to start the postmaster from the current sources, I get the\n> following error.\n> \n> IpcSemaphoreCreate: semget failed (No space left on device) key=5432015, num=16,\n> permission=600\n> \n> My command is as follows.\n> \n> /usr/local/postgres/bin/postmaster -S -D /usr/local/postgres/data\n> \n> The same command works with PostgreSQL 6.5.0 build from sources in\n> Feb. I did a full distclean, reconfigure, build and initdb. There\n> is plenty of space on the system so I assume that the above error is\n> bogus but I can't imagine what has changed to make this fail. Any ideas?\n\nHopefully this won't bounce 'cuze of your quotes that my mailer for some\nreason doesn't like.. but anyway...\n\nI had something like this earlier today. Assuming your system has something\nlike ipcs, what does it say? (if you don't have that command, try apropos\non ipc to find out what the status command is) I had a bunch of old items\nhanging on to pieces of shared memory causing my problem.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Fri, 23 Apr 1999 15:06:43 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Can't start postmaster"
},
{
"msg_contents": "> When I try to start the postmaster from the current sources, I get the\n> following error.\n> \n> IpcSemaphoreCreate: semget failed (No space left on device) key=5432015, num=16, permission=600\n> \n> My command is as follows.\n> \n> /usr/local/postgres/bin/postmaster -S -D /usr/local/postgres/data\n> \n> The same command works with PostgreSQL 6.5.0 build from sources in\n> Feb. I did a full distclean, reconfigure, build and initdb. There\n> is plenty of space on the system so I assume that the above error is\n> bogus but I can't imagine what has changed to make this fail. Any ideas?\n> \n\nUse pgsql/bin/ipcclean. You have old shared memory around.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Apr 1999 15:54:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Can't start postmaster"
},
{
"msg_contents": "Thus spake Vince Vielhaber\n> On 23-Apr-99 \"D'Arcy\" \"J.M.\" Cain wrote:\n> > When I try to start the postmaster from the current sources, I get the\n> > following error.\n> > \n> > IpcSemaphoreCreate: semget failed (No space left on device) key=5432015, num=16,\n> > permission=600\n> \n> Hopefully this won't bounce 'cuze of your quotes that my mailer for some\n> reason doesn't like.. but anyway...\n\nThat's what happens when you have punctuation in your name I suppose. :-)\n\n> I had something like this earlier today. Assuming your system has something\n> like ipcs, what does it say? (if you don't have that command, try apropos\n> on ipc to find out what the status command is) I had a bunch of old items\n> hanging on to pieces of shared memory causing my problem.\n\nHere's the output.\n\nMessage Queues:\nT ID KEY MODE OWNER GROUP\n\nShared Memory:\nT ID KEY MODE OWNER GROUP\nm 786432 5432010 --rwa------ postgres postgres\nm 720897 5432001 --rw------- postgres postgres\nm 327682 5432007 --rw------- postgres postgres\nm 65539 296499 --rw-rw-rw- root wheel\n\nSemaphores:\nT ID KEY MODE OWNER GROUP\ns 65536 1426359859 --rw-rw-rw- root wheel\n\nI assume the postgres ones will go away when I stop the old postmaster.\nThe thing is that the old one still works so there must be something\nthat's changed in the current one.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 23 Apr 1999 16:16:21 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Can't start postmaster"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > When I try to start the postmaster from the current sources, I get the\n> > following error.\n> > \n> > IpcSemaphoreCreate: semget failed (No space left on device) key=5432015, num=16, permission=600\n> > \n> > My command is as follows.\n> > \n> > /usr/local/postgres/bin/postmaster -S -D /usr/local/postgres/data\n> > \n> > The same command works with PostgreSQL 6.5.0 build from sources in\n> > Feb. I did a full distclean, reconfigure, build and initdb. There\n> > is plenty of space on the system so I assume that the above error is\n> > bogus but I can't imagine what has changed to make this fail. Any ideas?\n> > \n> \n> Use pgsql/bin/ipcclean. You have old shared memory around.\n\nHmm. That didn't do it. I did reboot and it came up. I did have to do\nan initdb before using it though.\n\nThanks.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 23 Apr 1999 16:38:49 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Can't start postmaster"
},
{
"msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n>>>> When I try to start the postmaster from the current sources, I get the\n>>>> following error.\n>>>> \n>>>> IpcSemaphoreCreate: semget failed (No space left on device) key=5432015, num=16,\n>>>> permission=600\n\nYou're out of semaphores, not disk space. Evidently your kernel is only\nconfigured to allow a few dozen semaphores --- you'd be wise to increase\nthat parameter, especially if you want to be able to run more than one\npostmaster at once. (See the current FAQ, item 2.13.)\n\n> The thing is that the old one still works so there must be something\n> that's changed in the current one.\n\nWell, yes, things change... the system is now designed to grab all the\nsemaphores it wants at postmaster startup, rather than risking not being\nable to grab them later.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Apr 1999 11:46:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Can't start postmaster "
}
] |
[
{
"msg_contents": "\nFor those that have been bitching and griping about\nhttp://www.postgresql.org ... STOP IT!! :)\n\nThere is a reason why there have been no changes to it...\n\nDmitry and Vince are currently working on replacing it with the one that\nDmitry worked up. The one that is there is merely a placement marker\nwhile they work ou the kinks...\n\nIf you want to see what its going to, check out\nhttp://devnull.wplus.net/pub/postgres ... Its cleaner then what is there\nnow, *and* doesn't use frames, which should make just about everyone\nhappy. Even I like the new one without the frames...:)\n\nIf you are going to reply to this email, *please* reply to\[email protected], as it doesn't belong on -hackers...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 23 Apr 1999 16:13:33 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL WWW Site ..."
}
] |
[
{
"msg_contents": "\n\nThis is not something that can be added in for v6.5...it doesn't fix a\nbug...sounds like something good for v6.6 though ...\n\n\nOn Thu, 22 Apr 1999, Theo Kramer wrote:\n\n> Vadim Mikheev wrote:\n> > \n> > Theo Kramer wrote:\n> > >\n> > > Hi\n> > >\n> > > I am having a problem with MVCC on Postgres 6.5 Beta\n> > >\n> > > Assume a table with a field called name with a unique index thereon\n> Vadim wrote\n> > > I have two questions\n> > >\n> > > 2. Is some sort of configurable timeout mechanism available?\n> > \n> > No, currently.\n> \n> Sounds like it would something really useful for 6.5. It would be\n> really useful in interactive system to detect a unique constraint\n> being violated. I would be happy to do it if someone could point\n> me to the relevant areas in the code.\n> \n> --------\n> Regards\n> Theo\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 23 Apr 1999 16:30:50 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] MVCC Question"
}
] |
[
{
"msg_contents": "\n...are currently being re-built and indexed. Please let me know of any\nproblems with them and I'll get to it asap...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 23 Apr 1999 16:31:51 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "List Archives ..."
}
] |
[
{
"msg_contents": "> I wonder what happened with :\n> \n> progi=> create index mot_idx on mot (no_piece,no_ref,no_fab_marq_mod,\n> \t\t\t\t annee_deb,annee_fin);\n> ERROR: mot_idx: cannot extend\n> \n> Is there a limit in the index size ?\n> \n> \n\nAre you out of disk space?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Apr 1999 15:34:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ERROR: mot_idx: cannot extend"
},
{
"msg_contents": "> > I wonder what happened with :\n> > \n> > progi=> create index mot_idx on mot (no_piece,no_ref,no_fab_marq_mod,\n> > \t\t\t\t annee_deb,annee_fin);\n> > ERROR: mot_idx: cannot extend\n> > \n> > Is there a limit in the index size ?\n> > \n> > \n> \n> Are you out of disk space?\n> \n\nI have updated the message in 6.5beta to read\n\n\tERROR: mot_idx: cannot extend. Check free disk space.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Apr 1999 15:58:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ERROR: mot_idx: cannot extend"
}
] |
[
{
"msg_contents": "Hello,\nI am trying to run the ImageViewer.java example that is part of the\nsrc/interfaces/jdbc/ examples. I have a database test which has two\nimages loaded into it with the ImageVIewer app.\n\nI am running a cvs snapshot as of 4/22, java1.2, Solaris 7, cc: WorkShop\nCompilers 5.0 98/12/15 C 5.0.\n\nI envoke the postmaster as:\n su bpm -c \"${PGSQLHOME}/bin/postmaster -i -d -D ${PGDATA} 2>&1 >\n${PGDATA}/trace.log\"\n\nto get a trace.log file.\n\nI start the java app as:\nvlad: java -classpath $MYCLASSPATH example.ImageViewer\njdbc:postgresql:test bpm foo\nConnecting to Database URL = jdbc:postgresql:test\nSelecting oid for item_4.gif\nGot oid 149387\nImport complete\nSelecting oid for item_2.gif\nGot oid 149441\n\nAt this point I get a SQLException: \"Fastpath: index_rescan: invalid\namrescan regproc\"\n\nIn the trace.log, I have:\nvlad: tail -f trace.log\nread_pg_options: all=1,verbose=2,query\ndebug info:\n User = bpm\n RemoteHost = 127.0.0.1\n RemotePort = 33694\n DatabaseName = test\n Verbose = 2\n Noversion = f\n timings = f\n dates = Normal\n bufsize = 64\n sortmem = 512\n query echo = f\nInitPostgres\nStartTransactionCommand\nquery: show datestyle\nProcessUtility: show datestyle\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nquery: select proname, oid from pg_proc where proname = 'lo_open' or\nproname = 'lo_close' or proname = 'lo_creat' or proname = 'lo_unlink' or\nproname = 'lo_lseek' or proname = 'lo_tell' or proname = 'loread' or\nproname = 'lowrite'\nProcessQuery\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nquery: select imgname from images order by imgname\nProcessQuery\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nquery: select imgoid from images where imgname='item_4.gif'\nProcessQuery\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nquery: select imgoid from images where imgname='item_2.gif'\nProcessQuery\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: 
done\nStartTransactionCommand\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nCommitTransactionCommand\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\nStartTransactionCommand\nAbortCurrentTransaction\nLockReleaseAll: lockmethod=1, pid=4369\nLockReleaseAll: reinitializing lockQueue\nLockReleaseAll: done\n\n\nWhich doesn't show an error :-(\n\n>From the commandline where I envoked postmaster, I have:\n# /etc/init.d/postgress.init start\nFindExec: found \"/opt/pgsql/bin/postgres\" using argv[0]\nFindExec: found \"/opt/pgsql/bin/postgres\" using argv[0]\n/opt/pgsql/bin/postmaster: BackendStartup: pid 4481 user bpm db test\nsocket 5\nNOTICE: DateStyle is Postgres with US (NonEuropean) conventions\nERROR: index_rescan: invalid amrescan regproc\n\n\nThen the java app is basically dead & I have to exit.\n\nAny ideas?\n\nThanks.\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n",
"msg_date": "Fri, 23 Apr 1999 14:35:41 -0500",
"msg_from": "Brian P Millett <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: index_rescan: invalid amrescan regproc ???"
},
{
"msg_contents": "On Fri, 23 Apr 1999, Brian P Millett wrote:\n\n> Hello,\n> I am trying to run the ImageViewer.java example that is part of the\n> src/interfaces/jdbc/ examples. I have a database test which has two\n> images loaded into it with the ImageVIewer app.\n> \n> I am running a cvs snapshot as of 4/22, java1.2, Solaris 7, cc: WorkShop\n> Compilers 5.0 98/12/15 C 5.0.\n\n[snip]\n\n> At this point I get a SQLException: \"Fastpath: index_rescan: invalid\n> amrescan regproc\"\n\nThis looks like something in the backend has died while a fastpath\nfunction was being used. I'll see if I can recreate it here.\n\n[snip]\n\n> Then the java app is basically dead & I have to exit.\n> \n> Any ideas?\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Fri, 23 Apr 1999 22:23:37 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: index_rescan: invalid amrescan regproc ???"
},
{
"msg_contents": "Peter T Mount <[email protected]> writes:\n> On Fri, 23 Apr 1999, Brian P Millett wrote:\n>> At this point I get a SQLException: \"Fastpath: index_rescan: invalid\n>> amrescan regproc\"\n\n> This looks like something in the backend has died while a fastpath\n> function was being used. I'll see if I can recreate it here.\n\nJust to save you repeating the look-around I just did: that's\ncoming out of backend/access/index/indexam.c, and it's apparently\ncomplaining that a pg_am table row has a null amrescan field.\nThat shouldn't be true of any of the pg_am rows in a correct\ninstallation.\n\nBrian, what does 'select * from pg_am' show? Did you do an initdb\nwhile installing the 6.5 snapshot you're using?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Apr 1999 13:46:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ERROR: index_rescan: invalid amrescan regproc ??? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Peter T Mount <[email protected]> writes:\n> > On Fri, 23 Apr 1999, Brian P Millett wrote:\n> >> At this point I get a SQLException: \"Fastpath: index_rescan: invalid\n> >> amrescan regproc\"\n>\n> > This looks like something in the backend has died while a fastpath\n> > function was being used. I'll see if I can recreate it here.\n>\n> Just to save you repeating the look-around I just did: that's\n> coming out of backend/access/index/indexam.c, and it's apparently\n> complaining that a pg_am table row has a null amrescan field.\n> That shouldn't be true of any of the pg_am rows in a correct\n> installation.\n>\n> Brian, what does 'select * from pg_am' show? Did you do an initdb\n\n> while installing the 6.5 snapshot you're using?\n\nNo I did not. :-( I had done everything else but that. Sorry.\n\nI did remove the old & do an \"initdb\", \"createdb test\", then reran the :\njava -classpath $MYCLASSPATH example.ImageViewer jdbc:postgresql:test bpm\nfoo\n\nThen I got the following SQLexception.\n\nERROR: index_beginscan: invalid ambeginscan regproc\n\n\nThen I did the\ntest=> select * from pg_am;\namname|amowner|amkind|amstrategies|amsupport|amgettuple |aminsert\n|amdelete |amgetattr|amsetlock|amsettid|amfreetuple|ambeginscan\n|amrescan |amendscan |ammarkpos |amrestrpos |amopen|amclose|ambuild\n|amcreate|amdestroy\n------+-------+------+------------+---------+------------+----------+----------+---------+---------+--------+-----------+-------------+----------+-----------+-----------+------------+------+-------+---------+--------+---------\n\nrtree | 159|o | 8| 3|rtgettuple |rtinsert\n|rtdelete |- |- |- |- |rtbeginscan\n|rtrescan |rtendscan |rtmarkpos |rtrestrpos |- |- |rtbuild\n|- |-\nbtree | 159|o | 5| 1|btgettuple |btinsert\n|btdelete |- |- |- |- |btbeginscan\n|btrescan |btendscan |btmarkpos |btrestrpos |- |- |btbuild\n|- |-\nhash | 159|o | 1|\n1|hashgettuple|hashinsert|hashdelete|- |- |-\n|-\n|hashbeginscan|hashrescan|hashendscan|hashmarkpos|hashrestrpos|-\n|- |hashbuild|- |-\ngist | 159|o | 100|\n7|gistgettuple|gistinsert|gistdelete|- |- |-\n|-\n|gistbeginscan|gistrescan|gistendscan|gistmarkpos|gistrestrpos|-\n|- |gistbuild|- |-\n(4 rows)\n\n\nThanks.\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n\n",
"msg_date": "Mon, 26 Apr 1999 09:18:38 -0500",
"msg_from": "Brian P Millett <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: ERROR: index_rescan: invalid amrescan regproc ???"
},
{
"msg_contents": "> I did remove the old & do an \"initdb\", \"createdb test\", then reran the :\n> java -classpath $MYCLASSPATH example.ImageViewer jdbc:postgresql:test bpm\n> foo\n\n> Then I got the following SQLexception.\n> ERROR: index_beginscan: invalid ambeginscan regproc\n\n> Then I did the\n> test=> select * from pg_am;\n> [ perfectly normal-looking pg_am table ... ]\n\nHmm. Nothing wrong with the table that I can see; conclusion is that\nits cache image in memory must be messed up. Perhaps you are indeed\ndealing with a platform-specific bug. Or it could be a memory-clobber\nkind of problem (but I'd think lots of people would be reporting strange\nbehavior if we had one of those on the loose).\n\nYou might try building the backend with assert checking turned on\n(--enable-cassert) to see if any problems are detected.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 26 Apr 1999 10:54:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ERROR: index_rescan: invalid amrescan regproc ??? "
}
] |
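For anyone hitting the same error, the catalog check Tom suggested is a one-liner; in a healthy installation every access method row shows a procedure name rather than '-' in the scan columns:

```sql
-- A null regproc ('-') in ambeginscan or amrescan for any row
-- would explain the 'invalid ... regproc' errors:
SELECT amname, ambeginscan, amrescan, amendscan
FROM pg_am;
```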
[
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> This is messy, but I think it beats the alternatives --- unless someone\n> >> has a better idea? (I don't think redoing the FE/BE protocol yet again\n> >> is very attractive at this stage.)\n> \n> > That's a tough one. Why are elog(NOTICE) being sent? Is there a way to\n> > buffer those instead?\n> \n> I thought about that, but gave it up when I realized that it doesn't\n> offer a solution to the elog(ERROR) case. The only way not to choke\n> for elog(ERROR) is not to start sending the data message until you've\n> constructed it completely --- or to have a way of aborting the partially\n> sent message, which is feasible for COPY OUT but not really practical\n> for SELECT data messages.\n\nIf you get elog(ERROR), can't you just abort the current message, and\nsend the elog(), or is it very involved.\n\n> The particular case I saw last night involved get_groname() complaining\n> during an attempt to display an incorrect ACL, but that's not very\n> interesting. The real reason that I'm so exercised about this is that\n> intermittent NOTICE-embedded-in-data-message failures are exactly the\n> thing that forced my company to give up using 6.3.2 last summer and\n> put our production application on pre-alpha 6.4. Talk about nervous.\n> We lived to tell the tale, but I didn't like it one bit.\n\nHmmm.\n\n> At the time I didn't understand what was causing the problem, but now\n> I realize that 6.4 had simply masked the fundamental protocol problem\n> by eliminating the particular NOTICE condition (sorry I don't remember\n> exactly what that NOTICE was). That bug is still there waiting to bite\n> me again, and I aim to squash it before it can.\n\nYes, I vaguelly remember that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Apr 1999 15:53:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: light dawns: serious bug in FE/BE protocol handling"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> If you get elog(ERROR), can't you just abort the current message, and\n> send the elog(), or is it very involved.\n\nYou can do that in the COPY case (good thing too), but not in the SELECT\ncase; if part of a D or B (data) message has already been flushed out to\nthe frontend then there's no way to stay in sync except to complete the\nmessage.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Apr 1999 17:38:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: light dawns: serious bug in FE/BE protocol handling"
}
] |
[
{
"msg_contents": "\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Apr 1999 15:58:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "I am feeling overwhelmed"
}
] |
[
{
"msg_contents": "I have been on two vacations in the past two months, and I installed a\nnew PC upstairs and an Ethernet network in my home.\n\nThis has left me little time for PostgreSQL, and I have fallen behind.\n\nI hope to get together a 6.5beta bugs list soon, and mark the source\nas 6.5 like README files and stuff.\n\nLet me also say Tom Lane is doing a fantastic job. His tremendous\nperformance has allowed me to relax.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Apr 1999 16:45:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Just catching up"
}
] |
[
{
"msg_contents": "Let me also say that the amount of effort being poured into PostgreSQL\nhas left me dizzy. I can no longer keep track of everything going on,\nand no longer understand many of the issues involved.\n\nThanks again to Tom Lane for taking up many of these issues.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Apr 1999 16:53:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Keeping up"
}
] |
[
{
"msg_contents": "The JDBC driver included with the current 6.5 betas appears to have the\nsame problems with time zones in getTimestamp() that 6.4.2 had. This\nhas caused problems with my JDBC application.\n\nDiscussion and suggested fixes are below.\n\nThe relevant code is in getTimestamp(int) in ResultSet.java:\n\n1 public Timestamp getTimestamp(int columnIndex) throws SQLException\n2 {\n3 String s = getString(columnIndex);\n4 SimpleDateFormat df = new SimpleDateFormat(\"yyyy-MM-dd\nHH:mm:sszzz\");\n5 if (s != null)\n6 {\n7 int TZ = new Float(s.substring(19)).intValue();\n8 TZ = TZ * 60 * 60 * 1000;\n9 TimeZone zone = TimeZone.getDefault();\n10 zone.setRawOffset(TZ);\n11 String nm = zone.getID();\n12 s = s.substring(0,18) + nm;\n\n\nThere are two problems, as I see it.\n\n1) The last line should be:\n s = s.substring(0,19) + nm;\n\nIn Java, the second argument to substring() is 1 more than the last\nindex to include in the substring. Using 18 will drop the last digit of\nthe seconds field of the time. Not only does this lose data, but some\nversions of Java will fail to parse the resulting malformed time string.\n\n2) I believe that everything from line 7 to 12 (inclusive) should be\ndeleted. It appears that what is going on here is that the time zone,\nexpressed as an offset in hours from GMT, is extracted from the supplied\ntimestamp (line 7), and used to set the raw offset of the TimeZone\nobject \"zone\" (line 10). Then, on line 11, the ID field from the zone\nis extracted, and appended to the supplied string on line 12.\n\nThe problem is that setting the raw offset of a TimeZone object does not\nchange the ID field. Thus, what this code does is appends the current\nlocal time zone to all supplied timestamp strings, which is clearly not\ncorrect.\n\nThe Java time/date parser can directly handle time/date strings that\nexpress the timezone in hours offset from GMT, which is the format that\nis being generated by PostgreSQL in the first place. Thus, deleting\nlines 7 through 12 will restore correct timezone handling.\n\nOf course, implementing fix #2 eliminates the need for fix #1.\n\nI made these changes on my copy, and found that they worked, so timezone\nhandling worked as it ought to. (My application is used by users in a\nvariety of time zones, and records the time&date of various events, so\nit is essential that correct time zone info be preserved.)\n\nI'll submit this as a patch if needed, but I wanted to make sure that\nI'm not missing something important before doing so.\n\nAndrew Merrill\[email protected]\n\n",
"msg_date": "Fri, 23 Apr 1999 13:53:25 -0700",
"msg_from": "Andrew Merrill <[email protected]>",
"msg_from_op": true,
"msg_subject": "problems with time zones in JDBC"
}
] |
[
{
"msg_contents": "Hey folks, still looking for a little help on this. I can insert data\nand (obviously create tables). Further, I've discovered PHP with pg\nsupport yields an unknown symbol 'lo_unlink' when Apache tries to load\nit. Any help/pointers would be really appreciated.\n\n-JP\n\nI have checked the bakc archives and can find several references to the\nfollowing error, but know fixes.\n\nOS: Linux (kernel v2.2.2)\nDistro: YellowDogLinux (Red Hat based)\nArch: PowerPC (not a G3)\nPG: 6.4.2\n\nThe error is that any operation having to do with oid type's results in an\nerror, ie:\n\ncreatedb test\ndestroydb test\nERROR: typeidTypeRelid: Invalid type - oid = 0\nERROR: typeidTupeRelid: Invalid type - oid = 0\n\nSame thing if I load psql and do a \\d <tablename>. I can create databases\nand tables and such however.\n\nIdeas? pg 6.3.x worked on linux ppc platforms previously, but the newer\ndistros might be using the new glibc. Would this make a difference?\n\n-JP",
"msg_date": "Sat, 24 Apr 1999 22:28:58 +0000",
"msg_from": "JP Rosevear <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Fwd: [HACKERS] typeidTypeRelid Error]"
},
{
"msg_contents": "Subject changed.\n\n> Hey folks, still looking for a little help on this. I can insert data\n> and (obviously create tables). Further, I've discovered PHP with pg\n> support yields an unknown symbol 'lo_unlink' when Apache tries to load\n> it. Any help/pointers would be really appreciated.\n\nWhat kind of version of apache/PHP/PostgreSQL are you using? Do you\nuse DSO? What was the exact error message when you found lo_unlink was\nan unknown symbol? What kind of platform are you running?\n\nI myself have been using apache 1.3.6(DSO)/PHP 3.0.7/PostgreSQL 6.4.2\nor current on x86 Linux/LinuxPPC/mips Linux/FreeBSD for a while but\nhave never seen problems you mentioned.\n---\nTatsuo Ishii\n\n",
"msg_date": "Tue, 27 Apr 1999 10:32:36 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "unknown symbol 'lo_unlink'"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n\n> Subject changed.\n>\n> > Hey folks, still looking for a little help on this. I can insert data\n> > and (obviously create tables). Further, I've discovered PHP with pg\n> > support yields an unknown symbol 'lo_unlink' when Apache tries to load\n> > it. Any help/pointers would be really appreciated.\n>\n> What kind of version of apache/PHP/PostgreSQL are you using? Do you\n> use DSO? What was the exact error message when you found lo_unlink was\n> an unknown symbol? What kind of platform are you running?\n>\n> I myself have been using apache 1.3.6(DSO)/PHP 3.0.7/PostgreSQL 6.4.2\n> or current on x86 Linux/LinuxPPC/mips Linux/FreeBSD for a while but\n> have never seen problems you mentioned.\n\nThanks for your reply. I'm using apache 1.3.6 (DSO), PHP 3.0.6 and PG\n6.4.2. I was able to successfully run apache/php/pg under LinuxPPC R4 and\nhaven't had any probs under i386 Linux distros, however I have tried the\nYDL and R5 rpms (as well as compiling from tarballs) and run in to this\nerror every time on YDL. I think its related to the other problems but I'm\n\nnot sure. The error message is \"Unknown symbol lo_unlink\". It might be\nhelpful if you let me know the options you pass to the configure script,\nthen I could try with those options.\n\nTIA\n-JP\n\n\n\n\n",
"msg_date": "Tue, 27 Apr 1999 13:29:19 +0000",
"msg_from": "JP Rosevear <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] unknown symbol 'lo_unlink'"
},
{
"msg_contents": "> > > Hey folks, still looking for a little help on this. I can insert data\n> > > and (obviously create tables). Further, I've discovered PHP with pg\n> > > support yields an unknown symbol 'lo_unlink' when Apache tries to load\n> > > it. Any help/pointers would be really appreciated.\n> >\n> > What kind of version of apache/PHP/PostgreSQL are you using? Do you\n> > use DSO? What was the exact error message when you found lo_unlink was\n> > an unknown symbol? What kind of platform are you running?\n> >\n> > I myself have been using apache 1.3.6(DSO)/PHP 3.0.7/PostgreSQL 6.4.2\n> > or current on x86 Linux/LinuxPPC/mips Linux/FreeBSD for a while but\n> > have never seen problems you mentioned.\n> \n> Thanks for your reply. I'm using apache 1.3.6 (DSO), PHP 3.0.6 and PG\n> 6.4.2. I was able to successfully run apache/php/pg under LinuxPPC R4 and\n> haven't had any probs under i386 Linux distros, however I have tried the\n> YDL and R5 rpms (as well as compiling from tarballs) and run in to this\n> error every time on YDL. I think its related to the other problems but I'm\n> \n> not sure. The error message is \"Unknown symbol lo_unlink\". It might be\n> helpful if you let me know the options you pass to the configure script,\n> then I could try with those options.\n\nPostgreSQL:\tconfigure --with-mb=EUC_JP\nApache:\t\tOPTIM=\"-O2\" /configure --enable-module=so\nphp:\t\tconfigure --with-pgsql --with-apache=../apache_1.3.6 --enable-track-vars --with-apxs=/usr/local/apache/bin/apxs\n\nI also use LinuxPPC R4. So I guess there's something special with YDL\nand R5, not with your builds. (BTW, what is YDL?) I'm going to try R5\nafter official R5 is released.\n---\nTatsuo Ishii\n\n",
"msg_date": "Wed, 28 Apr 1999 00:02:45 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] unknown symbol 'lo_unlink' "
},
{
"msg_contents": "At 18:02 +0300 on 27/04/1999, Tatsuo Ishii wrote:\n\n\n>\n> I also use LinuxPPC R4. So I guess there's something special with YDL\n> and R5, not with your builds. (BTW, what is YDL?) I'm going to try R5\n> after official R5 is released.\n\nYellow Dog Linux is yet another version of linux for PPC. It appears that\nit has many common features with R5, but will not have the installer and\nNetscape Communicator.\n\nIt seems that the most important (and relevant?) feature of these two\nflavors of linux is that they use glibc2 instead of libc5, which was the C\nlibrary on R4. This means everything relating to dynamically loaded code is\nchanged.\n\nDisclaimer: This is just a superficial observation based on reading the\nLinuxPPC website. I have no actual experience with LinuxPPC, as I intend to\ninstall it for the first time on my mac when R5 is out.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n",
"msg_date": "Thu, 29 Apr 1999 14:42:18 +0300",
"msg_from": "Herouth Maoz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] unknown symbol 'lo_unlink'"
}
] |
[
{
"msg_contents": "Does anyone know offhand what are the implications of calling palloc()\nin the backend's outer loop? (That is, in postgres.c, but outside the\nStart/CommitTransactionCommand calls?) How long will such memory\nremain allocated if not explicitly pfree'd?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Apr 1999 20:56:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "palloc at outer loop?"
}
] |
[
{
"msg_contents": "PostgreSQL 6.3.2 said:\n\nNOTICE: BlowawayRelationBuffers(places, 322): block 336 is referenced\n(private 0, last 0, global 1)\nFATAL 1: VACUUM (vc_rpfheap): BlowawayRelationBuffers returned -2\n\nAnd now I need to go find the vacuum lock file and blow it away, but I\ndon't really understand the above message, so wanna ask if I need to worry\nabout it...\n\n-- \"TANSTAAFL\" Rich [email protected] webmaster@ and www. all of:\nR&B/jazz/blues/rock - jademaze.com music industry org - chatmusic.com\nacoustic/funk/world-beat - astrakelly.com sculptures - olivierledoux.com\nmy own nascent company - l-i-e.com cool coffeehouse - uncommonground.com\n\n\n",
"msg_date": "Sat, 24 Apr 1999 21:17:58 -0500",
"msg_from": "[email protected] (Richard Lynch)",
"msg_from_op": true,
"msg_subject": "Vacuum Crash"
},
{
"msg_contents": "Richard Lynch wrote:\n> \n> PostgreSQL 6.3.2 said:\n> \n> NOTICE: BlowawayRelationBuffers(places, 322): block 336 is referenced\n> (private 0, last 0, global 1)\n> FATAL 1: VACUUM (vc_rpfheap): BlowawayRelationBuffers returned -2\n> \n> And now I need to go find the vacuum lock file and blow it away, but I\n> don't really understand the above message, so wanna ask if I need to worry\n> about it...\n\nRestart postmaster, remove pg_vlock file and run vacuum again.\n\nVadim\n",
"msg_date": "Sun, 25 Apr 1999 23:58:47 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Vacuum Crash"
},
{
"msg_contents": "Using pg_dump with 6.5b1 on solaris sparc, crashes with a core dump.\n\nThis means I can't keep backups and I can't upgrade my data model without\nbeing able to export the old data.\n\nIf one of the developers wants debug info let me know what you need (e.g.,\nwhat commands to run in gdb--though I'll have to install this or get run\npermissions from the sysadmin).\n\n-- Ari Halberstadt mailto:[email protected] <http://www.magiccookie.com/>\nPGP public key available at <http://www.magiccookie.com/pgpkey.txt>\n\n\n",
"msg_date": "Mon, 24 May 1999 13:45:24 -0500",
"msg_from": "Ari Halberstadt <[email protected]>",
"msg_from_op": false,
"msg_subject": "pg_dump core dumps"
},
{
"msg_contents": "Beta1 is kind of old. Also, this is clearly a hackers list question. I\nrecommend you try the most recent snapshot from ftp.postgresql.org.\n\n\n> Using pg_dump with 6.5b1 on solaris sparc, crashes with a core dump.\n> \n> This means I can't keep backups and I can't upgrade my data model without\n> being able to export the old data.\n> \n> If one of the developers wants debug info let me know what you need (e.g.,\n> what commands to run in gdb--though I'll have to install this or get run\n> permissions from the sysadmin).\n> \n> -- Ari Halberstadt mailto:[email protected] <http://www.magiccookie.com/>\n> PGP public key available at <http://www.magiccookie.com/pgpkey.txt>\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 May 1999 17:24:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] pg_dump core dumps"
}
] |
[
{
"msg_contents": "\nIs the LIMIT feature very efficient? I want to start using it for quite\na few things, but I'm wondering, what happens when I have a zillion\nrecords and I want the first 10, is that going to be an efficient thing\nto do?\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Sun, 25 Apr 1999 08:16:58 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Efficiency of LIMIT ?"
},
{
"msg_contents": "On Sun, Apr 25, 1999 at 08:16:58AM +0000, Chris Bitmead wrote:\n> \n> Is the LIMIT feature very efficient? I want to start using it for quite\n> a few things, but I'm wondering, what happens when I have a zillion\n> records and I want the first 10, is that going to be an efficient thing\n> to do?\n\n\tI am curious about this myself. As far as I can tell, it doesn't\ngive anything that cursors don't provide, but introduces more \"features\"\ninto the parser. Do we need this?\n\n",
"msg_date": "Sun, 25 Apr 1999 11:49:46 -0700",
"msg_from": "Adam Haberlach <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Efficiency of LIMIT ?"
},
{
"msg_contents": "> On Sun, Apr 25, 1999 at 08:16:58AM +0000, Chris Bitmead wrote:\n> > \n> > Is the LIMIT feature very efficient? I want to start using it for quite\n> > a few things, but I'm wondering, what happens when I have a zillion\n> > records and I want the first 10, is that going to be an efficient thing\n> > to do?\n> \n> \tI am curious about this myself. As far as I can tell, it doesn't\n> give anything that cursors don't provide, but introduces more \"features\"\n> into the parser. Do we need this?\n\nThis is pretty correct, though it stops the executor from completing all\nthe result queries, while cursors don't. The complete the entire query\nand store the result for later fetches.. We support it because MySQL\nusers and others asked for it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 25 Apr 1999 15:41:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Efficiency of LIMIT ?"
},
{
"msg_contents": "\nHere is Tom's comment on the issue.\n\n\n> Chris Bitmead <[email protected]> writes:\n> > Following is I believe evidence of a pretty bad bug in postgres. This is\n> > the 990329 snapshot.\n> \n> > httpd=> insert into category(name, image, parent) SELECT 'boo', 'boo',\n> > oid FROM category* where name = 'foo';\n> > INSERT 158370 1\n> > httpd=> select * from category;\n> > name |image|url|parent\n> > --------+-----+---+------\n> > foo |foo | | 0\n> > bar |bar | |158321\n> > Products|.gif | | \n> > (3 rows)\n> \n> > Ok, what's going on here. The 'boo' record did not appear!\n> \n> Interesting. You'll notice the INSERT response claims that a tuple was\n> inserted, and even gives you the OID for it. Wonder where it went?\n> \n> Anyway, that sure suggests that the SELECT part worked, and the problem\n> is that the new tuple got dropped on the floor later. (Or could it have\n> been inserted into another table? Are there other tables that category*\n> includes?)\n> \n> I'm guessing this has something to do with Vadim's recent work, but\n> I don't pretend to know what's wrong...\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 11:49:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Pretty bad bug in Postgres."
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n>> Following is I believe evidence of a pretty bad bug in postgres. This is\n>> the 990329 snapshot.\n\n>> httpd=> insert into category(name, image, parent) SELECT 'boo', 'boo',\n>> oid FROM category* where name = 'foo';\n>> INSERT 158370 1\n>> httpd=> select * from category;\n>> name |image|url|parent\n>> --------+-----+---+------\n>> foo |foo | | 0\n>> bar |bar | |158321\n>> Products|.gif | | \n>> (3 rows)\n\n>> Ok, what's going on here. The 'boo' record did not appear!\n\nChris, I can't reproduce this here anymore --- do you still see it with\ncurrent sources?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Jun 1999 18:52:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Pretty bad bug in Postgres. "
}
] |
[
{
"msg_contents": "This is on the TODO list.\n\nI actually have a solution that seems to work fine, but I wanted to run it past \nthe backend guru's after we have finished the 6.5 beta.\n\nSorry I din't get it finished before the beta started.\n\n-Ryan\n\n\n> Hello!\n> \n> VIEW on 6.4.2 ignores DISTINCT. Is it a bug? known?\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n> ---------- Forwarded message ----------\n> Date: Fri, 23 Apr 1999 13:33:00 +0400 (MSD)\n> From: Artem Chuprina <[email protected]>\n> To: Oleg Broytmann <[email protected]>\n> Subject: create view as select distinct\n> \n> pirit=> select distinct value_at from www_counter_store;\n> value_at\n> ----------\n> 04-22-1999\n> (1 row)\n> \n> pirit=> create view www_counter_store_dates as select distinct value_at from \nwww_counter_store;\n> CREATE\n> pirit=> select * from www_counter_store_dates;\n> ----------\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> (15 rows)\n> \n> -- \n> Artem Chuprina E-mail: [email protected]\n> Network Administrator FIDO: 2:5020/371.32\n> PIRIT Corp. Phone: +7(095) 115-7101\n> \n",
"msg_date": "Sun, 25 Apr 1999 13:32:02 -0600 (MDT)",
"msg_from": "Ryan Bradetich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] create view as select distinct (fwd)"
},
{
"msg_contents": ">\n> This is on the TODO list.\n>\n> I actually have a solution that seems to work fine, but I wanted to run it past\n> the backend guru's after we have finished the 6.5 beta.\n>\n> Sorry I din't get it finished before the beta started.\n>\n> -Ryan\n\n I wonder how it does!\n\n Have the following:\n\n CREATE TABLE t1 (a int4, b text);\n CREATE TABLE t2 (c int4, d text);\n CREATE VIEW v2 AS SELECT DISTINCT ON c * FROM t2;\n\n Populate them with:\n\n t1:\n 1 'one'\n 1 'ena'\n 2 'two'\n 2 'thio'\n 3 'three'\n 3 'tria'\n 4 'four'\n 4 'tessera'\n\n t2:\n 1 'I'\n 1 'eins'\n 2 'II'\n 2 'zwei'\n 3 'III'\n 3 'drei'\n\n Now you do\n\n SELECT t1.a, t1.b, v2.d FROM t1, v2\n WHERE t1.a = v2.c;\n\n Does that work and produce the correct results? Note that\n there are more than one correct results. The DISTINCT SELECT\n from t2 already has. But in any case, the above SELECT should\n present 6 rows (all the rows of t1 from 1 to 33 in english\n and greek) and column d must show either the roman or german\n number.\n\n To make it more complicated, add table t3 and populate it\n with more languages. Then setup\n\n CREATE VIEW v3 AS SELECT DISTINCT ON e * FROM t3;\n\n and expand the above SELECT to a join over t1, v2, v3.\n\n Finally, think about a view that is a DISTINCT SELECT over\n multiple tables. Now you build another view as SELECT from\n the first plus some other table and make the new view\n DISTINCT again.\n\n The same kind of problem causes that views currently cannot\n have ORDER BY or GROUP BY clauses. All these clauses can only\n appear once per query, so there is no room where the rewrite\n system can place multiple different ones. Implementing this\n requires first dramatic changes to the querytree layout and I\n think it needs subselecting RTE's too.\n\n\nSorry - Jan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 26 Apr 1999 17:35:29 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create view as select distinct (fwd)"
}
] |
[
{
"msg_contents": "The following works in psql:\n\n SELECT MemberID, InvoiceID, sum(quantity * unitprice) as InvAmount,\nsi_InventoryCategory(InventoryID) as CategoriesID\n FROM InvoiceLines\n WHERE memberid = 685\n GROUP BY MemberID, InvoiceID, InventoryID;\n\nThe following causes psql to abort:\n\n SELECT MemberID, InvoiceID, sum(quantity * unitprice) as InvAmount,\nsi_InventoryCategory(InventoryID) as CategoriesID\n FROM InvoiceLines\n WHERE memberid = 685\n GROUP BY MemberID, InvoiceID, CategoriesID;\n\nHere is the abort message:\n\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\nThere is nothing in the postgreSQL log files.\n\nInvoiceLines is a table. Here is si_InventoryCategory():\n\nCREATE FUNCTION si_InventoryCategory(int4) RETURNS int4 AS '\n 'select it.CategoriesID from Inventory i, InventoryType it where\ni.InventoryID = $1 and i.InventoryTypeID = it.InventoryTypeID' LANGUAGE\n'sql';\n\n\nI am using Red Hat 5.1, PostgreSQL version 6.5 as of this morning. Any\nsuggestions on what I can do to work around this?\n\nThanks, Michael\n",
"msg_date": "Sun, 25 Apr 1999 16:08:54 -0500",
"msg_from": "Michael J Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Functions with aggregations (i.e. group by) causes an abort"
},
{
"msg_contents": "Michael J Davis <[email protected]> writes:\n> The following causes psql to abort:\n\n> SELECT MemberID, InvoiceID, sum(quantity * unitprice) as InvAmount,\n> si_InventoryCategory(InventoryID) as CategoriesID\n> FROM InvoiceLines\n> WHERE memberid = 685\n> GROUP BY MemberID, InvoiceID, CategoriesID;\n\nThe proximate cause of the coredump is that replace_agg_refs is finding\na variable that isn't in its target list. I've added a test for that\ncondition, so that you get an error rather than a coredump; but that's\nnot much help for Michael.\n\nIt's possible to duplicate the problem with a much simpler test case.\nAll you need is a GROUP BY on an expression. For example:\n\nregression=> create table aggtest1 (ID int4, quantity float8);\nCREATE\nregression=> select sum(quantity), ID+1\nregression-> from aggtest1 group by ID;\nsum|?column?\n---+--------\n |\n(1 row)\n\nregression=> select sum(quantity), ID+1\nregression-> from aggtest1 group by ID+1;\nERROR: replace_agg_clause: variable not in target list\n\n(That last converts to a coredump if your sources are older than this\nemail...)\n\nI think the answer is that we need to add ID to the target list for the\nagg node, but I'm not really sure. Maybe the target list is OK and the\nreal problem is that replace_agg_clause needs to be able to recognize\ntargetlist matches on whole expressions (so that it would do something\nwith the \"ID+1\" expression instead of recursing down to \"ID\"). Anyone\nunderstand this stuff?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 25 Apr 1999 20:46:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Functions with aggregations (i.e. group by) causes an\n\tabort"
}
] |
[
{
"msg_contents": "Any objection to the pacthes below? Seems they solve problems\nreported by a user in Japan (both on 6.4.2 and current).\n--\nTatsuo Ishii\n\n>From: \"Hiroshi Inoue\" <[email protected]>\n>To: \"pgsql-hackers\" <[email protected]>\n>Subject: [HACKERS] A patch for FATAL 1:btree: BTP_CHAIN flag was expected\n>Date: Tue, 13 Apr 1999 19:00:57 +0900\n>Message-ID: <[email protected]>\n\n>Hello all,\n>\n>There exists the bug that causes elog() FATAL 1:btree: \n>BTP_CHAIN flag was expected.\n>The following patch would solve the bug partially. \n>\n>It seems that the bug is caused by _bt_split() in nbtinsert.c.\n>BTP_CHAIN flags of buf/rbuf are always off immediately after \n>_bt_split(),so the pages may be in inconsistent state.\n>Though the flags are chagned correctly before _bt_relbuf(),\n>buf/rbuf are not _bt_wrt(norel)buf()'d after the change\n>(buf/rbuf are already _bt_wrtnorelbuf()'d in _bt_split() ). \n>\n>Comments ?\n>\n>Thanks.\n>\n>Hiroshi Inoue\n>[email protected]\n>\n>*** backend/access/nbtree/nbtinsert.c.orig\tMon Mar 29 17:00:48 1999\n>--- backend/access/nbtree/nbtinsert.c\tMon Apr 12 11:41:33 1999\n>***************\n>*** 679,686 ****\n> \t\t\t\t{\n> \t\t\t\t\t_bt_updateitem(rel, keysz, pbuf,\n> \t\t\t\t\t\t\t\t stack->bts_btitem, lowLeftItem);\n>! \t\t\t\t\t_bt_relbuf(rel, buf, BT_WRITE);\n>! \t\t\t\t\t_bt_relbuf(rel, rbuf, BT_WRITE);\n> \t\t\t\t}\n> \t\t\t\telse\n> \t\t\t\t{\n>--- 679,686 ----\n> \t\t\t\t{\n> \t\t\t\t\t_bt_updateitem(rel, keysz, pbuf,\n> \t\t\t\t\t\t\t\t stack->bts_btitem, lowLeftItem);\n>! \t\t\t\t\t_bt_wrtbuf(rel, buf);\n>! \t\t\t\t\t_bt_wrtbuf(rel, rbuf);\n> \t\t\t\t}\n> \t\t\t\telse\n> \t\t\t\t{\n>***************\n>*** 705,712 ****\n> \t\t\t\t\t *\n> \t\t\t\t\t * Mmm ... I foresee problems here. - vadim 06/10/97\n> \t\t\t\t\t */\n>! \t\t\t\t\t_bt_relbuf(rel, buf, BT_WRITE);\n>! \t\t\t\t\t_bt_relbuf(rel, rbuf, BT_WRITE);\n> \n> \t\t\t\t\t/*\n> \t\t\t\t\t * A regular _bt_binsrch should find the right place\n>--- 705,712 ----\n> \t\t\t\t\t *\n> \t\t\t\t\t * Mmm ... I foresee problems here. - vadim 06/10/97\n> \t\t\t\t\t */\n>! \t\t\t\t\t_bt_wrtbuf(rel, buf);\n>! \t\t\t\t\t_bt_wrtbuf(rel, rbuf);\n> \n> \t\t\t\t\t/*\n> \t\t\t\t\t * A regular _bt_binsrch should find the right place\n>***************\n>*** 731,738 ****\n> \t\t\t}\n> \t\t\telse\n> \t\t\t{\n>! \t\t\t\t_bt_relbuf(rel, buf, BT_WRITE);\n>! \t\t\t\t_bt_relbuf(rel, rbuf, BT_WRITE);\n> \t\t\t}\n> \n> \t\t\tnewskey = _bt_mkscankey(rel, &(new_item->bti_itup));\n>--- 731,738 ----\n> \t\t\t}\n> \t\t\telse\n> \t\t\t{\n>! \t\t\t\t_bt_wrtbuf(rel, buf);\n>! \t\t\t\t_bt_wrtbuf(rel, rbuf);\n> \t\t\t}\n> \n> \t\t\tnewskey = _bt_mkscankey(rel, &(new_item->bti_itup));\n>\n>Hiroshi Inoue\n>[email protected]\n>\n\n",
"msg_date": "Mon, 26 Apr 1999 11:53:04 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] A patch for FATAL 1:btree: BTP_CHAIN flag was expected "
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> Any objection to the pacthes below? Seems they solve problems\n> reported by a user in Japan (both on 6.4.2 and current).\n> --\n> Tatsuo Ishii\n> \n> >From: \"Hiroshi Inoue\" <[email protected]>\n> >To: \"pgsql-hackers\" <[email protected]>\n> >Subject: [HACKERS] A patch for FATAL 1:btree: BTP_CHAIN flag was expected\n> >Date: Tue, 13 Apr 1999 19:00:57 +0900\n> >Message-ID: <[email protected]>\n> \n> >Hello all,\n> >\n> >There exists the bug that causes elog() FATAL 1:btree:\n> >BTP_CHAIN flag was expected.\n> >The following patch would solve the bug partially.\n> >\n> >It seems that the bug is caused by _bt_split() in nbtinsert.c.\n> >BTP_CHAIN flags of buf/rbuf are always off immediately after\n> >_bt_split(),so the pages may be in inconsistent state.\n> >Though the flags are chagned correctly before _bt_relbuf(),\n> >buf/rbuf are not _bt_wrt(norel)buf()'d after the change\n> >(buf/rbuf are already _bt_wrtnorelbuf()'d in _bt_split() ).\n> >\n\nLet me check it...\nI'll commit it myself...\n\nVadim\n",
"msg_date": "Mon, 26 Apr 1999 14:57:03 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A patch for FATAL 1:btree: BTP_CHAIN flag was expected"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> >\n> >There exists the bug that causes elog() FATAL 1:btree:\n> >BTP_CHAIN flag was expected.\n> >The following patch would solve the bug partially.\n> >\n> >It seems that the bug is caused by _bt_split() in nbtinsert.c.\n> >BTP_CHAIN flags of buf/rbuf are always off immediately after\n> >_bt_split(),so the pages may be in inconsistent state.\n> >Though the flags are chagned correctly before _bt_relbuf(),\n> >buf/rbuf are not _bt_wrt(norel)buf()'d after the change\n> >(buf/rbuf are already _bt_wrtnorelbuf()'d in _bt_split() ).\n\nExactly! If left/right pages would be flushed by other transaction\nbefore setting BTP_CHAIN flag then this flag would be lost!\nWhere were my eyes! -:)\n\nThanks, Hiroshi!\nCommitted.\n\nAll versions >= 6.1 are affected by this bug...\n\nI'll make patch for 6.4.2 in a few days if no one else...\n\nVadim\n",
"msg_date": "Sun, 02 May 1999 00:16:07 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A patch for FATAL 1:btree: BTP_CHAIN flag was expected"
}
] |