[ { "msg_contents": "> Hello!\n> \n> I rewrote my 4-tables join to use subselects:\n> \n> SELECT DISTINCT subsec_id FROM positions\n> WHERE pos_id IN\n> (SELECT DISTINCT pos_id\n> FROM central\n> WHERE shop_id IN\n> (SELECT shop_id FROM shops\n> WHERE distr_id IN\n> (SELECT distr_id FROM districts\n> WHERE city_id = 2)\n> )\n> )\n> ;\n> \n> This does not work, either - postgres loops forever, until I cancel\n> psql.\n> \n> I split it - I ran\n> \n> (SELECT DISTINCT pos_id\n> FROM central\n> WHERE shop_id IN\n> (SELECT shop_id FROM shops\n> WHERE distr_id IN\n> (SELECT distr_id FROM districts\n> WHERE city_id = 2)\n> )\n> )\n> \n> and stored the result in a file. Then I substituted the \n> subselect with the\n> file:\n> \n> SELECT DISTINCT subsec_id FROM positions\n> WHERE pos_id IN\n> (1, 2, 3, 6, 22, 25, 26, 27, 28, 29, 31, 33, 34, 35, 38, 41, \n> 42, 44, 45,\n> 46, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 60, 61, 62, 63, 64)\n> \n> and got the desired result within a second.\n> \n> This finally solves my problem, but I had to go a long \n> way to find\n> that postgres cannot handle such not-too-complex joins and subselects.\n\nIf you think about the query long enough you'll realize that several\nthings have to be assumed for that query to be efficient.\nLooking at your final query first, and assuming that you have an index on\npositions(pos_id):\n SELECT DISTINCT subsec_id FROM positions\n WHERE pos_id IN\n (1, 2, 3, 6, 22, 25, 26, 27, 28, 29, 31, 33, 34, 35, 38, 41, \n 42, 44, 45, 46, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, \n 60, 61, 62, 63, 64)\n\nThis turns into 36 OR clauses in the backend, which as you have\nexpressed is not a problem for PostgreSQL. But as soon as you make that\nIN clause a subselect Postgres can't assume that the results will be\nstatic for each row that it's evaluating, therefore you have:\n X rows in positions compared to Y rows from the subselect = X*Y\ncompares\nLet's assume that there are only 10 rows in positions, and that each\nselect of central returns 36 rows. We get up to 360 comparisons for that\nquery, and none of the compares is likely to use the index because they\nare OR'ed together. \nNow let's throw in the other subselect and assume that each table only\nhas 10 rows besides for central which obviously has to have more so\nwe'll assume 40.\n 10 rows from district\n indexed on city_id (*YAY*)\t(btree index (maybe 2 compares)\n -------------------------\n\t 2 rows results\n OR= 10 rows from shops\t\t+ 2) * 10\n -------------------------\n 5 rows results\n OR= 40 rows from central\t\t+ 5) * 40\n -------------------------\n 36 rows results\n OR= 10 rows from position + 36) * 10)\n ------------------------- ------------------\n 5 rows results 18360 comparisons for this query\nAnd you have to remember that only the innermost subselect is likely to\nuse an index. (My math could be wrong, but) I think you get the idea.\n\nTry your query this way:\n SELECT DISTINCT subsec_id\n FROM positions p\n WHERE EXISTS(SELECT 1\n FROM central c, shops s, districts d\n WHERE p.pos_id = c.pos_id AND \n c.shop_id = s.shop_id AND\n s.distr_id = d.distr_id AND\n d.city_id = 2);\nMake sure you have indexes on pos_id, shop_id, distr_id, and city_id.\n", "msg_date": "Wed, 10 Mar 1999 14:18:15 -0600", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Bug on complex subselect (was: Bug on complex join)" }, { "msg_contents": "Forgive what may be a dumb question. 
When I subscribed to this group,\ninstructions for unsubscribing were as indicated in the excerpt below:\n\n >\n >-- \n >\n >Welcome to the pgsql-hackers mailing list!\n >\n >Please save this message for future reference. Thank you.\n >\n >If you ever want to remove yourself from this mailing list,\n >you can send mail to <[email protected]> with the following\n >command in the body of your email message:\n >\n > unsubscribe pgsql-hackers\n >\n >or from another account, besides [email protected]:\n >\n > unsubscribe pgsql-hackers [email protected]\n\nThis doesn't work. The response was\n\n >>>> unsubscribe psql-hackers\n **** unsubscribe: unknown list 'psql-hackers'.\n **** Help for [email protected]:\n\n\nSo I read the instructions on the PostgreSQL page\nfor the hackers mailing list, and it says\n\n \"To subscribe or unsubscribe from the list, send mail to\n [email protected]. The body of the message\nshould \n contain the single line \"subscribe\" or \"unsubscribe\". \n\nWhen I do this, I get\n\n ----- The following addresses had permanent fatal errors -----\n <[email protected]>\n\n ----- Transcript of session follows -----\n ... while talking to postgresql.org.:\n >>> RCPT To:<[email protected]>\n <<< 550 <[email protected]>... User unknown\n 550 <[email protected]>... User unknown\n\nNot to be ungrateful or anything, but I would like to get myself off\nthis\nlist. (I don't have time to filter through 50-100 messages/day).\n\nAny suggestions on how I get myself off the list? (Perhaps it is\nworthwhile\neither updating the instructions on your website, or including an\n_up_to_date_\ncopy in the signatures being sent out on the list?)\n\nThomas\n", "msg_date": "Wed, 10 Mar 1999 16:09:56 -0500", "msg_from": "Thomas Reinke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug on complex subselect (was: Bug on complex join)" }, { "msg_contents": "Hello!\n\n Vadim already gave the idea to use EXISTS. Will try it.\n Thanks to all who replied!\n\nOn Wed, 10 Mar 1999, Jackson, DeJuan wrote:\n> Try your query this way:\n> SELECT DISTINCT subsec_id\n> FROM positions p\n> WHERE EXISTS(SELECT 1\n> FROM central c, shops s, districts d\n> WHERE p.pos_id = c.pos_id AND \n> c.shop_id = s.shop_id AND\n> s.distr_id = d.distr_id AND\n> d.city_id = 2);\n\n> Make sure you have indexes on pos_id, shop_id, distr_id, and city_id.\n\n All these are primary keys in corresponding tables, and hence have\nUNIQUE indices. Is it enough?\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 11 Mar 1999 15:48:12 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Bug on complex subselect (was: Bug on complex join)" } ]
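A quick way to verify that the rewritten query actually uses the indexes called for in the thread above is EXPLAIN, which is available in this era's psql as well. A minimal sketch against the quoted schema (nothing here beyond the thread's own table and column names):

    EXPLAIN SELECT DISTINCT subsec_id
    FROM positions p
    WHERE EXISTS(SELECT 1
                 FROM central c, shops s, districts d
                 WHERE p.pos_id = c.pos_id AND
                       c.shop_id = s.shop_id AND
                       s.distr_id = d.distr_id AND
                       d.city_id = 2);
    -- look for Index Scan rather than Seq Scan nodes in the plan output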
[ { "msg_contents": "The problem is that you keep leaving out the 'g' in pgsql\nTake a closer look at the instructions.\n\t-DEJ\n\n> -----Original Message-----\n> From: Thomas Reinke [mailto:[email protected]]\n> Sent: Wednesday, March 10, 1999 3:10 PM\n> To: [email protected]\n> Subject: Re: [HACKERS] Bug on complex subselect (was: Bug on complex\n> join)\n> \n> \n> Forgive what may be a dumb question. When I subscribed to this group,\n> instructions for unsubscribing were as indicated in the \n> excerpt below:\n> \n> >\n> >-- \n> >\n> >Welcome to the pgsql-hackers mailing list!\n> >\n> >Please save this message for future reference. Thank you.\n> >\n> >If you ever want to remove yourself from this mailing list,\n> >you can send mail to <[email protected]> with the following\n> >command in the body of your email message:\n> >\n> > unsubscribe pgsql-hackers\n> >\n> >or from another account, besides [email protected]:\n> >\n> > unsubscribe pgsql-hackers [email protected]\n> \n> This doesn't work. The response was\n> \n> >>>> unsubscribe psql-hackers\n> **** unsubscribe: unknown list 'psql-hackers'.\n> **** Help for [email protected]:\n> \n> \n> So I read the instructions on the PostgreSQL page\n> for the hackers mailing list, and it says\n> \n> \"To subscribe or unsubscribe from the list, send mail to\n> [email protected]. The body of the message\n> should \n> contain the single line \"subscribe\" or \"unsubscribe\". \n> \n> When I do this, I get\n> \n> ----- The following addresses had permanent fatal errors -----\n> <[email protected]>\n> \n> ----- Transcript of session follows -----\n> ... while talking to postgresql.org.:\n> >>> RCPT To:<[email protected]>\n> <<< 550 <[email protected]>... User unknown\n> 550 <[email protected]>... User unknown\n> \n> Not to be ungrateful or anything, but I would like to get myself off\n> this\n> list. (I don't have time to filter through 50-100 messages/day).\n> \n> Any suggestions on how I get myself off the list? (Perhaps it is\n> worthwhile\n> either updating the instructions on your website, or including an\n> _up_to_date_\n> copy in the signatures being sent out on the list?)\n> \n> Thomas\n> \n", "msg_date": "Wed, 10 Mar 1999 15:29:20 -0600", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Bug on complex subselect (was: Bug on complex join)" } ]
[ { "msg_contents": "Hi,\nusing the structure from a.sql, populating it with some entries, and using the query in\nsel_reise.sh (which hits some tuples), I get only a subset (1 entry!) of the\nmatching tuples. Deleting all lines in sel_reise.sh where 'e.*' is used\n(3 lines), I get all tuples. The same script works without errors on pgsql 6.3\nusing the same data. \n\nSome stats with my original data:\n\nabr=> select count(*) from reise where begt > 'Feb 01 00:00:00 1999';\ncount\n-----\n 13\n(1 row)\n \nabr=> select distinct mid from reise where begt > 'Feb 01 00:00:00 1999';\nmid\n---\n605\n(1 row)\n\nabr=> select distinct mid from emp;\nmid\n---\n 0\n605\n(2 rows)\n \nAny clues?\n\n\nBye!\n----\nMichael Reifenberger\nPlaut Software GmbH, R/3 Basis", "msg_date": "Wed, 10 Mar 1999 23:46:21 +0100 (CET)", "msg_from": "Michael Reifenberger <[email protected]>", "msg_from_op": true, "msg_subject": "query corruption for complexer queries on -current?" } ]
[ { "msg_contents": "Oleg Broytmann <[email protected]> wrote:\n\n> I rewrote my 4-tables join to use subselects:\n> \n> SELECT DISTINCT subsec_id FROM positions\n> WHERE pos_id IN\n> (SELECT DISTINCT pos_id\n> FROM central\n> WHERE shop_id IN\n> (SELECT shop_id FROM shops\n> WHERE distr_id IN\n> (SELECT distr_id FROM districts\n> WHERE city_id = 2)\n> )\n> )\n> ;\n> \n> This does not work, either - postgres loops forever, until I cancel\n> psql.\n\nYes, it's a very ancient bug; I have known about it from the time when subselects first\nappeared.\n\n> This finally solves my problem, but I had to go a long way to find\n> that postgres cannot handle such not-too-complex joins and subselects.\n\nPostgres cannot quickly handle even a simpler subselect on a small enough\ndatabase (~ 1000 records), even when the subselect returns only one value.\nExecuting a query like \"SELECT ... WHERE ... IN ( SELECT ...\",\nPostgres eats memory and takes too long to complete.\nWhen it eats too much memory, FreeBSD kills it.\nThe only way to resolve it is to rewrite the subselect using EXISTS.\n\nWith best regards,\nIgor Sysoev\nhttp://www.nitek.ru/~igor/\n\n", "msg_date": "Thu, 11 Mar 1999 11:50:06 +0300", "msg_from": "\"Igor Sysoev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Bug on complex subselect (was: Bug on complex join)" } ]
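The IN-to-EXISTS rewrite recommended in the threads above follows a mechanical pattern. A generic sketch with hypothetical table and column names (orders and customers are not from the threads):

    -- the form this era's executor handles poorly: the subselect is
    -- effectively re-evaluated for each outer row
    SELECT o.item FROM orders o
    WHERE o.cust_id IN (SELECT c.cust_id FROM customers c
                        WHERE c.city = 'Moscow');

    -- equivalent correlated EXISTS form; the join qual inside the
    -- subselect can use an index on customers(cust_id)
    SELECT o.item FROM orders o
    WHERE EXISTS (SELECT 1 FROM customers c
                  WHERE c.cust_id = o.cust_id
                    AND c.city = 'Moscow');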
[ { "msg_contents": "Included are patches, made by Mr. Toshimi Aoki\n([email protected]), to addd support for NetBSD/macppc box.\n\nWe have tested the patches on three platforms:\n\nNetBSD/macppc\nLinuxPPC\nFreeBSD 2.2.6-RELEASE\n\nAll of them seem happy with the regression test. Note that, however,\ncompiling with optimization enabled on NetBSD/macppc causes an initdb\nfailure (other two platforms are ok). After checking the asm code, we\nare suspecting that might be a compiler(egcs) bug.\n--\nTatsuo Ishii\n\nbegin 644 netbsd-macppc.patch.gz\nM'XL(\".F9YS8``VYE=&)S9\"UM86-P<&,N<&%T8V@`Y5G-;^/&%3]K_XJW7B?1\nM!_5!?5@R%0.Q+3JKA2R[HIPF:0N\"(D<2;8ID.:1K8W?_F`#)K;>B:(OTD&,2\nM(`5ZZK7''GKOM6^&I\"A*E-?*MND\"U8'FS+SWF_=^[\\V;&;I8+(([H[^VJM33\nMJQ--OR&V474=SZ\\:][;E:`;QJA-J5/2*XYFSW#@@H!`70`2Q(35J4JL)XN%A\nMYTFY7'XD4&X\\#^!<\\T!$C*;4J$O-%L,X?%),_U@;VH=\"IPF\\R:9@S0/`M_(3\nM@&?F%`PR-6UBY%5U8;I450OPZA7DD]XA\\4^4'NM___VE\\*UV5R@@0(ZZGFG[\nMTSSQ/,=3%X12;48$V#,L>K^`_'NT`%/-M(BQ)X\"M+4BAR[0\\X@>>#<.KP:#[\nMI`3/B)4R1!Z<X7PX$`MRM/Q<LPV+K.\"@'B4,[]8QC1P`%&]=CJ_/-2^7FP33\nM7YQ<G2G]SW_5Y51LTDM]QT.#JR@Z17JI:CGZ31PJ);#AC$R08ZAU)+$CU>HA\nMS=FAVH*U6[1$L2.(]2A>`-Q)VS\"G.:@6`4-TT+E152A6V=C35/PN+T]91%!L\nMZ/A$`IL0`S2P39W`3-=!=VSJ>QJ3!8TNX)9XU'1LH`Z8/NB:#1,\"IFUQ`3X!\nM]37?U(&1BRU?HZH1+!;W^0(G(+8T2J4=+%55U_D-\\5R=!?DGLCB3ZV9-$)L'\nM,=>Y/9Y4KR.#$V>0V<B/K!536)N7<Q,A1]QL15YA8H<9LE,Y\\$V+5C4#RX6_\nM)8,/)9:#VS,X#;%;XC9;C8:`C\\.8SM<Y=%\"V=<<@8W-!%%>S\\X6EEVLI@3$,\nM[E(5)DIG+\"[X-#2?^`BBFE0EKJ//\\X833\"P\"UTSF)7<G,2!B_2T-6,O2QQF2\nM#HUIZU9@D+\"(\\QDJ\\R@PF@\\]HB.=4*M+]9:$9F?L`ED`NX4%U^BRF#P+78,K\nM158O+Y3^I^JX?RZO##P_5M2QK(S5XV%/5>3QUB+CW[L$NR\"PJ3ECJP_9`<KJ\nMG>IWXV*0KNF:Y<ZUL')PF^(=Z.UM6@O4C[<MB[]&4TA2&O,I,>L3657Z'Z-%\nM+\\XO<3/G>14N[:VT/0ML9AG7[0_'W-7/+X9RHLGH\":>,^/G14ZZQ\\N:I'TA=\nM=NJ8)Q7E16\"QQ!5KK**T&F],7*Z^6]K6D8-:<FAAS8/UE,G(C(2,TG8R2ML1\nM2F_(GE(R0^;1*0I8+B8[#9\\*6&C@T\\<F,3O19-B1';2PCF,-7PD:HQ[:4&M+\nMK<Z6;6!3?<>@U3M\"O;%1;GK'8YFEF]I7U.&%/.SEKPNY_#4<'4%O'/84HG6Y\nMKG%Z-1K)PS%3R+U$#:80]75?/W8+(7<^P3,D!C*W6K[UP/.([2<%O+O%\"#1[\nMQ8XLB.NPKL7^K^7I_\\S_M9SZ27C(S(M&6Z@W&\\N\\6\"[2C$GDRXO3YRE_><];\nM1WMMLW[`QZ4%F^IQG\"-_XCC_=_QY7/3^4WZE\"\\E\"NT$`B]#J>?16844\\=72I\nMUZ#&#I22R'>`]EHQR8;(*\"@'#UR&A)6+JRB(M9#R$IA35AS9-5'Y3!G+Y]@U\nMZ)T-CC]6H'0$Y9];0EG&OC@P[U6H(^'3R5;-[><'O0*4[Z!\\0K'.$HV1679@\nM_R/8_Y#A1-?,1,[CPQ]6G,DU$\\&QCX@^=V`2F)9AVC/@.`;@.-%]!.(BW@+*\nM4VQ47%//=+@M'*XXW!#$Y`JQ5$X;LFKPR=3Q=*)Y^MR\\)?!+)@BQ&^&<,27I\nM:/N$^E6/S#R\\O5?)G8L6$Z-*`EV]=BM.X\"<?+HY=#^H==F3%S;\\A9NW\\;P);\nMRX!67:H]F`&MN&S\\[$H>?2:!X3DNWNR8PW_XV]=?__Z+'_[ZQW]V<37)H]'%\nM2`(8$0NO2GA-3(:AYQ#*+ID@WYG4?YJ@Z<B=3S;P(,^?/BXZ`;[_[5_^_M7O\nMOOCS5]_!+2-7\\P3X]E]_^D$\\_O*;+__!=^:\\>%#@:R\\-:^(QX&X%EK=%2-L6\nM4)8O$]\\C))SVL4#U3*\"Y1N>03VQ&M'#UM.)4VIG(#Y+Q#\\!@5-I()6%4_I\\Q\nMF5XWR\\)&YY8Y63D?:S:(;1!;_)M1)^NHE5;=K2JV&T)[>26!J*\"Q3W3*\\T'_\nMA/>QZRKU-<LJ<_RR0=R<=)3NC`1#`]@H_MW/#X_/Y<)^OC=0KL[.^I\\6*OMY\nMY4(]/WYQ,5(_D4=*_V(8]_6'*WV8-^P756%5&7#(!RM49,`IU\\CEL'#OYT]C\nM]0(?C0MX<CE\"YSO+G?<=<WYC:PF[4ZQDD$(=]D$3G>?6Q&!\\SWD<0)K52#UB\nM;C>&T_FM._;4G`4>J9AVF-WG3G21Z+\"/UV);PDM:1G:O*NZ6V_6Z@/O+,K<!\nM\\,Q@%@O@T\"/VQC\\1JKY&CVP'NFQUPQ17.PXUBJ^BMV8H'K52&L2:'MUC^4II\nMBDO-^G;-;I?S:A,?1XJO')?8[\"4V+,,N_!FSX\"X486_9,J\"9D0R^;(K86#>I\nM3]Q0)&ZMRX67#Z&QLC#>4>98=N<V.6.]ND8)[.W/'>JKNAOLX8)EW0#1,;BP\nM:@0.$*KI,?H[%H^L9529!7@BVOR(@LOH0&IVLHY2JXH['ISKAX+86#T\\\\X[V\nM:GXL-/V@<R-=('58/Z6B5.3_T.''6/;QOJRYKD7*$;7[+Z]8=51'\\D`^5N37\nM7/3.]*'&'\"[%B*ZK2^$_C$+`4@08Q3#\"#)-A`[(40RZC$:E)B#SWB9TV,HT9\n.B3QLY[\\!.L&B0:T;```_\n`\nend\n", "msg_date": "Thu, 11 Mar 1999 19:29:20 +0900", 
"msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "[CURRENT] NetBSD/macppc porting patch" }, { "msg_contents": "Applied\n\n\n> Included are patches, made by Mr. Toshimi Aoki\n> ([email protected]), to addd support for NetBSD/macppc box.\n> \n> We have tested the patches on three platforms:\n> \n> NetBSD/macppc\n> LinuxPPC\n> FreeBSD 2.2.6-RELEASE\n> \n> All of them seem happy with the regression test. Note that, however,\n> compiling with optimization enabled on NetBSD/macppc causes an initdb\n> failure (other two platforms are ok). After checking the asm code, we\n> are suspecting that might be a compiler(egcs) bug.\n> --\n> Tatsuo Ishii\n> \n> begin 644 netbsd-macppc.patch.gz\n> M'XL(\".F9YS8``VYE=&)S9\"UM86-P<&,N<&%T8V@`Y5G-;^/&%3]K_XJW7B?1\n> M!_5!?5@R%0.Q+3JKA2R[HIPF:0N\"(D<2;8ID.:1K8W?_F`#)K;>B:(OTD&,2\n> M(`5ZZK7''GKOM6^&I\"A*E-?*MND\"U8'FS+SWF_=^[\\V;&;I8+(([H[^VJM33\n> MJQ--OR&V474=SZ\\:][;E:`;QJA-J5/2*XYFSW#@@H!`70`2Q(35J4JL)XN%A\n> MYTFY7'XD4&X\\#^!<\\T!$C*;4J$O-%L,X?%),_U@;VH=\"IPF\\R:9@S0/`M_(3\n> M@&?F%`PR-6UBY%5U8;I450OPZA7DD]XA\\4^4'NM___VE\\*UV5R@@0(ZZGFG[\n> MTSSQ/,=3%X12;48$V#,L>K^`_'NT`%/-M(BQ)X\"M+4BAR[0\\X@>>#<.KP:#[\n> MI`3/B)4R1!Z<X7PX$`MRM/Q<LPV+K.\"@'B4,[]8QC1P`%&]=CJ_/-2^7FP33\n> M7YQ<G2G]SW_5Y51LTDM]QT.#JR@Z17JI:CGZ31PJ);#AC$R08ZAU)+$CU>HA\n> MS=FAVH*U6[1$L2.(]2A>`-Q)VS\"G.:@6`4-TT+E152A6V=C35/PN+T]91%!L\n> MZ/A$`IL0`S2P39W`3-=!=VSJ>QJ3!8TNX)9XU'1LH`Z8/NB:#1,\"IFUQ`3X!\n> M]37?U(&1BRU?HZH1+!;W^0(G(+8T2J4=+%55U_D-\\5R=!?DGLCB3ZV9-$)L'\n> M,=>Y/9Y4KR.#$V>0V<B/K!536)N7<Q,A1]QL15YA8H<9LE,Y\\$V+5C4#RX6_\n> M)8,/)9:#VS,X#;%;XC9;C8:`C\\.8SM<Y=%\"V=<<@8W-!%%>S\\X6EEVLI@3$,\n> M[E(5)DIG+\"[X-#2?^`BBFE0EKJ//\\X833\"P\"UTSF)7<G,2!B_2T-6,O2QQF2\n> M#HUIZU9@D+\"(\\QDJ\\R@PF@\\]HB.=4*M+]9:$9F?L`ED`NX4%U^BRF#P+78,K\n> M158O+Y3^I^JX?RZO##P_5M2QK(S5XV%/5>3QUB+CW[L$NR\"PJ3ECJP_9`<KJ\n> MG>IWXV*0KNF:Y<ZUL')PF^(=Z.UM6@O4C[<MB[]&4TA2&O,I,>L3657Z'Z-%\n> M+\\XO<3/G>14N[:VT/0ML9AG7[0_'W-7/+X9RHLGH\":>,^/G14ZZQ\\N:I'TA=\n> M=NJ8)Q7E16\"QQ!5KK**T&F],7*Z^6]K6D8-:<FAAS8/UE,G(C(2,TG8R2ML1\n> M2F_(GE(R0^;1*0I8+B8[#9\\*6&C@T\\<F,3O19-B1';2PCF,-7PD:HQ[:4&M+\n> MK<Z6;6!3?<>@U3M\"O;%1;GK'8YFEF]I7U.&%/.SEKPNY_#4<'4%O'/84HG6Y\n> MKG%Z-1K)PS%3R+U$#:80]75?/W8+(7<^P3,D!C*W6K[UP/.([2<%O+O%\"#1[\n> MQ8XLB.NPKL7^K^7I_\\S_M9SZ27C(S(M&6Z@W&\\N\\6\"[2C$GDRXO3YRE_><];\n> M1WMMLW[`QZ4%F^IQG\"-_XCC_=_QY7/3^4WZE\"\\E\"NT$`B]#J>?16844\\=72I\n> MUZ#&#I22R'>`]EHQR8;(*\"@'#UR&A)6+JRB(M9#R$IA35AS9-5'Y3!G+Y]@U\n> MZ)T-CC]6H'0$Y9];0EG&OC@P[U6H(^'3R5;-[><'O0*4[Z!\\0K'.$HV1679@\n> M_R/8_Y#A1-?,1,[CPQ]6G,DU$\\&QCX@^=V`2F)9AVC/@.`;@.-%]!.(BW@+*\n> M4VQ47%//=+@M'*XXW!#$Y`JQ5$X;LFKPR=3Q=*)Y^MR\\)?!+)@BQ&^&<,27I\n> M:/N$^E6/S#R\\O5?)G8L6$Z-*`EV]=BM.X\"<?+HY=#^H==F3%S;\\A9NW\\;P);\n> MRX!67:H]F`&MN&S\\[$H>?2:!X3DNWNR8PW_XV]=?__Z+'_[ZQW]V<37)H]'%\n> M2`(8$0NO2GA-3(:AYQ#*+ID@WYG4?YJ@Z<B=3S;P(,^?/BXZ`;[_[5_^_M7O\n> MOOCS5]_!+2-7\\P3X]E]_^D$\\_O*;+__!=^:\\>%#@:R\\-:^(QX&X%EK=%2-L6\n> M4)8O$]\\C))SVL4#U3*\"Y1N>03VQ&M'#UM.)4VIG(#Y+Q#\\!@5-I()6%4_I\\Q\n> MF5XWR\\)&YY8Y63D?:S:(;1!;_)M1)^NHE5;=K2JV&T)[>26!J*\"Q3W3*\\T'_\n> MA/>QZRKU-<LJ<_RR0=R<=)3NC`1#`]@H_MW/#X_/Y<)^OC=0KL[.^I\\6*OMY\n> MY4(]/WYQ,5(_D4=*_V(8]_6'*WV8-^P756%5&7#(!RM49,`IU\\CEL'#OYT]C\n> M]0(?C0MX<CE\"YSO+G?<=<WYC:PF[4ZQDD$(=]D$3G>?6Q&!\\SWD<0)K52#UB\n> M;C>&T_FM._;4G`4>J9AVF-WG3G21Z+\"/UV);PDM:1G:O*NZ6V_6Z@/O+,K<!\n> M\\,Q@%@O@T\"/VQC\\1JKY&CVP'NFQUPQ17.PXUBJ^BMV8H'K52&L2:'MUC^4II\n> MBDO-^G;-;I?S:A,?1XJO')?8[\"4V+,,N_!FSX\"X486_9,J\"9D0R^;(K86#>I\n> M3]Q0)&ZMRX67#Z&QLC#>4>98=N<V.6.]ND8)[.W/'>JKNAOLX8)EW0#1,;BP\n> 
M:@0.$*KI,?H[%H^L9529!7@BVOR(@LOH0&IVLHY2JXH['ISKAX+86#T\\\\X[V\n> M:GXL-/V@<R-=('58/Z6B5.3_T.''6/;QOJRYKD7*$;7[+Z]8=51'\\D`^5N37\n> M7/3.]*'&'\"[%B*ZK2^$_C$+`4@08Q3#\"#)-A`[(40RZC$:E)B#SWB9TV,HT9\n> .B3QLY[\\!.L&B0:T;```_\n> `\n> end\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Mar 1999 11:02:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] [CURRENT] NetBSD/macppc porting patch" } ]
[ { "msg_contents": "Greetings,\nthere seems to be a slight bug in the parsing of the\nnextval function, tested under 6.4.2.\nIt also affects the SERIAL type.\n\nSymptom :\n\nCREATE SEQUENCE \"AA\";\n-- Correct, quoted identifier is allowed;\nSELECT NEXTVAL('AA');\n--Produces Error\n--aa.nextval: sequence does not exist\n\n\nProbable source of the problem: the Argument to nextval is\nnot handled correctly as a Table Identifier.\n\nE.g. nextval('\"AA\"') generates \n\"aa\".nextval: sequence does not exist\n\nNote the lowercase between the quotes.\n\nI quickly browsed the sources, but have not found the\nplace where the conversion to lowercase occurs.\n\nPlease check this against 6.5 and 6.4.3 too.\n\nWith Regards,\n\tStefan Wehner\n\n\n", "msg_date": "Thu, 11 Mar 1999 14:22:12 +0100 (MET)", "msg_from": "[email protected] (Stefan Wehner)", "msg_from_op": true, "msg_subject": "Bug with sequences in 6.4.2" }, { "msg_contents": "> Greetings,\n> there seems to be a slight bug in the parsing of the\n> nextval function, tested under 6.4.2.\n> It also affects the SERIAL type.\n> \n> Symptom :\n> \n> CREATE SEQUENCE \"AA\";\n> -- Correct, quoted identifier is allowed;\n> SELECT NEXTVAL('AA');\n> --Produces Error\n> --aa.nextval: sequence does not exist\n> \n\nLet me comment on this. In the first statement, \"AA\" is used in an\nSQL command, and we handle this correctly. In the second case, NEXTVAL\nis a function, called with a string.\n\nNow in parse_func.c, we convert 'AA' to lower to try and find the\nsequence table. My assumption is that we should attempt to find the\ntable without doing a lower(), and if that fails, try lower.\n\nDoes that make sense to people? We can't just lower it in every case.\n\n\n\n> \n> Probable source of the problem: the Argument to nextval is\n> not handled correctly as a Table Identifier.\n> \n> E.g. nextval('\"AA\"') generates \n> \"aa\".nextval: sequence does not exist\n> \n> Note the lowercase between the quotes.\n> \n> I quickly browsed the sources, but have not found the\n> place where the conversion to lowercase occurs.\n> \n> Please check this against 6.5 and 6.4.3 too.\n> \n> With Regards,\n> \tStefan Wehner\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 10:55:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Bug with sequences in 6.4.2" }, { "msg_contents": "\n<problem with case of sequence name snipped>\n\n> Now in parse_func.c, we convert 'AA' to lower to try and find the\n> sequence table. My assumption is that we should attempt to find the\n> table without doing a lower(), and if that fails, try lower.\n> \n> Does that make sense to people? We can't just lower it in every case.\n> \n\nSounds like exactly the right fix to me. Oh, and a quick scan for other\ncases of this assumption (auto lower() of field and attribute names) would\nbe nice, too.\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Mon, 15 Mar 1999 10:46:14 -0600 (CST)", "msg_from": "[email protected] (Ross J. 
Reedstrom)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Bug with sequences in 6.4.2" }, { "msg_contents": "> Greetings,\n> there seems to be a slight bug in the parsing of the\n> nextval function, tested under 6.4.2.\n> It affects also the SERIAL type.\n> \n> Symptom :\n> \n> CREATE SEQUENCE \"AA\";\n> -- Correct, quoted identifier is allowed;\n> SELECT NEXTVAL('AA');\n> --Produces Error\n> --aa.nextval: sequence does not exist\n> \n> \n> Probable source of problem, the Argument to nextval is\n> not handled correctly as an Table Identifier.\n> \n> E.g. nextval('\"AA\"') is generates \n> \"aa\".nextval: sequence does not exist\n> \n> Note the lowercase between the quotes.\n> \n> I quickly browsed the sources, but have not found the\n> place where the conversion to lowercase occurs.\n\nOK, I have made the following change to allow case-sensitive handling of\nnextval. It tries the exact case first, then tries lowercase.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: parse_func.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/parse_func.c,v\nretrieving revision 1.39\nretrieving revision 1.40\ndiff -c -r1.39 -r1.40\n*** parse_func.c\t1999/02/23 07:51:53\t1.39\n--- parse_func.c\t1999/03/15 16:48:34\t1.40\n***************\n*** 7,13 ****\n *\n *\n * IDENTIFICATION\n! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/parser/parse_func.c,v 1.39 1999/02/23 07:51:53 thomas Exp $\n *\n *-------------------------------------------------------------------------\n */\n--- 7,13 ----\n *\n *\n * IDENTIFICATION\n! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/parser/parse_func.c,v 1.40 1999/03/15 16:48:34 momjian Exp $\n *\n *-------------------------------------------------------------------------\n */\n***************\n*** 21,26 ****\n--- 21,27 ----\n #include \"access/relscan.h\"\n #include \"access/sdir.h\"\n #include \"catalog/catname.h\"\n+ #include \"catalog/heap.h\"\n #include \"catalog/indexing.h\"\n #include \"catalog/pg_inherits.h\"\n #include \"catalog/pg_proc.h\"\n***************\n*** 440,446 ****\n \n \t\tif (nodeTag(pair) == T_Ident && ((Ident *) pair)->isRel)\n \t\t{\n- \n \t\t\t/*\n \t\t\t * a relation\n \t\t\t */\n--- 441,446 ----\n***************\n*** 573,588 ****\n \t\tchar\t *seqrel;\n \t\ttext\t *seqname;\n \t\tint32\t\taclcheck_result = -1;\n- \t\textern text *lower(text *string);\n \n \t\tAssert(length(fargs) == ((funcid == F_SETVAL) ? 2 : 1));\n \t\tseq = (Const *) lfirst(fargs);\n \t\tif (!IsA((Node *) seq, Const))\n \t\t\telog(ERROR, \"Only constant sequence names are acceptable for function '%s'\", funcname);\n! \t\tseqname = lower((text *) DatumGetPointer(seq->constvalue));\n! \t\tpfree(DatumGetPointer(seq->constvalue));\n! \t\tseq->constvalue = PointerGetDatum(seqname);\n! \t\tseqrel = textout(seqname);\n \n \t\tif ((aclcheck_result = pg_aclcheck(seqrel, GetPgUserName(),\n \t\t\t\t\t (((funcid == F_NEXTVAL) || (funcid == F_SETVAL)) ?\n--- 573,593 ----\n \t\tchar\t *seqrel;\n \t\ttext\t *seqname;\n \t\tint32\t\taclcheck_result = -1;\n \n \t\tAssert(length(fargs) == ((funcid == F_SETVAL) ? 2 : 1));\n \t\tseq = (Const *) lfirst(fargs);\n \t\tif (!IsA((Node *) seq, Const))\n \t\t\telog(ERROR, \"Only constant sequence names are acceptable for function '%s'\", funcname);\n! \n! 
\t\tseqrel = textout((text *) DatumGetPointer(seq->constvalue));\n! \t\tif (RelnameFindRelid(seqrel) == InvalidOid)\n! \t\t{\n! \t\t\tpfree(seqrel);\n! \t\t\tseqname = lower((text *) DatumGetPointer(seq->constvalue));\n! \t\t\tpfree(DatumGetPointer(seq->constvalue));\n! \t\t\tseq->constvalue = PointerGetDatum(seqname);\n! \t\t\tseqrel = textout(seqname);\n! \t\t}\n \n \t\tif ((aclcheck_result = pg_aclcheck(seqrel, GetPgUserName(),\n \t\t\t\t\t (((funcid == F_NEXTVAL) || (funcid == F_SETVAL)) ?", "msg_date": "Mon, 15 Mar 1999 11:50:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Bug with sequences in 6.4.2" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> CREATE SEQUENCE \"AA\";\n>> -- Correct, quoted identifier is allowed;\n>> SELECT NEXTVAL('AA');\n>> --Produces Error\n\n> Let me comment on this. In the first statement, \"AA\" is used in an\n> SQL command, and we handle this correctly. In the second case, NEXTVAL\n> is a function, called with a string.\n> Now in parse_func.c, we convert 'AA' to lower to try and find the\n> sequence table. My assumption is that we should attempt to find the\n> table without doing a lower(), and if that fails, try lower.\n> Does that make sense to people. We can't just lower it in every case.\n\nThat would create an ambiguity that is better avoided. I think nextval\nought to duplicate the parser's behavior --- if possible, actually call\nthe same routine the parser uses for looking up a sequence name.\nI suggest that it operate like this:\n\n(1)\tnextval('AA')\t\toperates on sequence aa\n\n\tAA is lowercased, same as unquoted AA would be by the parser.\n\n(2)\tnextval('\"AA\"')\t\toperates on sequence AA\n\n\tQuoted \"AA\" is treated as AA, same as parser would do it.\n\nThis should be fully backward compatible with existing SQL code, since\nthe existing nextval() code implements case (1). I doubt anyone has\ntried putting double quotes into their nextval arguments, so adding\nthe case (2) behavior shouldn't break anything.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Mar 1999 22:33:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Bug with sequences in 6.4.2 " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> CREATE SEQUENCE \"AA\";\n> >> -- Correct, quoted identifier is allowed;\n> >> SELECT NEXTVAL('AA');\n> >> --Produces Error\n> \n> > Let me comment on this. In the first statement, \"AA\" is used in an\n> > SQL command, and we handle this correctly. In the second case, NEXTVAL\n> > is a function, called with a string.\n> > Now in parse_func.c, we convert 'AA' to lower to try and find the\n> > sequence table. My assumption is that we should attempt to find the\n> > table without doing a lower(), and if that fails, try lower.\n> > Does that make sense to people. We can't just lower it in every case.\n> \n> That would create an ambiguity that is better avoided. 
I think nextval\n> ought to duplicate the parser's behavior --- if possible, actually call\n> the same routine the parser uses for looking up a sequence name.\n> I suggest that it operate like this:\n> \n> (1)\tnextval('AA')\t\toperates on sequence aa\n> \n> \tAA is lowercased, same as unquoted AA would be by the parser.\n> \n> (2)\tnextval('\"AA\"')\t\toperates on sequence AA\n> \n> \tQuoted \"AA\" is treated as AA, same as parser would do it.\n> \n> This should be fully backward compatible with existing SQL code, since\n> the existing nextval() code implements case (1). I doubt anyone has\n> tried putting double quotes into their nextval arguments, so adding\n> the case (2) behavior shouldn't break anything.\n\nI can do that. It looked kind of strange. Usually they do:\n\n\tcreate sequence \"Aa\";\n\nBecause nextval is a function with a parameter, it would be\nnextval('\"Aa\"'). I don't think we want to do this for all function\nparameters, but just for nextval.\n\nI can do that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 23:07:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Bug with sequences in 6.4.2" }, { "msg_contents": "> That would create an ambiguity that is better avoided. I think nextval\n> ought to duplicate the parser's behavior --- if possible, actually call\n> the same routine the parser uses for looking up a sequence name.\n> I suggest that it operate like this:\n> \n> (1)\tnextval('AA')\t\toperates on sequence aa\n> \n> \tAA is lowercased, same as unquoted AA would be by the parser.\n> \n> (2)\tnextval('\"AA\"')\t\toperates on sequence AA\n> \n> \tQuoted \"AA\" is treated as AA, same as parser would do it.\n> \n> This should be fully backward compatible with existing SQL code, since\n> the existing nextval() code implements case (1). I doubt anyone has\n> tried putting double quotes into their nextval arguments, so adding\n> the case (2) behavior shouldn't break anything.\n\nGood idea. Done.\n\ntest=> select nextval('\"Aa\"');\nnextval\n-------\n 3\n(1 row)\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 23:32:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Bug with sequences in 6.4.2" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Good idea. Done.\n> \n> test=> select nextval('\"Aa\"');\n> nextval\n> -------\n> 3\n> (1 row)\n\nselect currval('\"Aa\"');\n\n?\n\nVadim\n", "msg_date": "Tue, 16 Mar 1999 11:39:55 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Bug with sequences in 6.4.2" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > Good idea. Done.\n> > \n> > test=> select nextval('\"Aa\"');\n> > nextval\n> > -------\n> > 3\n> > (1 row)\n> \n> select currval('\"Aa\"');\n\nSomeone complained they could not get a sequence named \"Aa\" because the\ncode auto-lowercased the nextval parameter. The new code accepts\nnextval('\"Aa\"'), and preserves the case. 
Other cases are\nauto-lowercased.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 23:50:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Bug with sequences in 6.4.2" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > select currval('\"Aa\"');\n> \n> Someone complained they could not get a sequence named \"Aa\" because the\n> code auto-lowercased the nextval parameter. The new code accepts\n> nextval('\"Aa\"'), and preserves the case. Other cases are\n> auto-lowercased.\n\nNo, just looked in new code and seems that your changes in \nparse_func.c handle currval() and setval() as well, sorry.\n\nVadim\n", "msg_date": "Tue, 16 Mar 1999 11:54:25 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Bug with sequences in 6.4.2" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > > select currval('\"Aa\"');\n> > \n> > Someone complained they could not get a sequence named \"Aa\" because the\n> > code auto-lowercased the nextval parameter. The new code accepts\n> > nextval('\"Aa\"'), and preserves the case. Other cases are\n> > auto-lowercased.\n> \n> No, just looked in new code and seems that your changes in \n> parse_func.c handle currval() and setval() as well, sorry.\n\nI meant that non-double quoted cases are auto-lowered. I didn't realize\nI was also dealing with other *val cases. I guess that is good.\n\nI am on IRC now.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 23:57:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Bug with sequences in 6.4.2" } ]
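The behavior the thread settles on can be restated as a short psql session; a sketch based on the examples above (the unquoted variant follows rule (1) of Tom's proposal and is an extrapolation, not output from the thread):

    CREATE SEQUENCE "Aa";      -- mixed-case name, preserved because it is double-quoted
    SELECT nextval('"Aa"');    -- inner double quotes keep the case: operates on sequence Aa
    SELECT nextval('aa');      -- no inner quotes: argument is lowercased, looks for sequence aa
    SELECT currval('"Aa"');    -- currval and setval go through the same parse_func.c path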
[ { "msg_contents": "I have checked in fixes for all of the genuine bugs that I found in\npg_operator and pg_proc by means of mechanical consistency checks.\n\nI would like to add these consistency checks to the regression tests,\nbut right now they still produce some bogus \"failures\":\n\nQUERY: SELECT p1.oid, p1.oprname, p2.oid, p2.proname\nFROM pg_operator AS p1, pg_proc AS p2\nWHERE p1.oprcode = p2.oid AND\n p1.oprkind = 'b' AND\n (p2.pronargs != 2 OR\n p1.oprresult != p2.prorettype OR\n (p1.oprleft != p2.proargtypes[0] AND p2.proargtypes[0] != 0) OR\n (p1.oprright != p2.proargtypes[1] AND p2.proargtypes[1] != 0));\n oid|oprname| oid|proname \n----+-------+----+-------------\n 609|< | 66|int4lt \n 610|> | 147|int4gt \n 611|<= | 149|int4le \n 612|>= | 150|int4ge \n 974||| |1258|textcat \n 979||| |1258|textcat \n1055|~ |1254|textregexeq \n1056|!~ |1256|textregexne \n1063|~ |1254|textregexeq \n1064|!~ |1256|textregexne \n1211|~~ | 850|textlike \n1212|!~~ | 851|textnlike \n1213|~~ | 850|textlike \n1214|!~~ | 851|textnlike \n1232|~* |1238|texticregexeq\n1233|!~* |1239|texticregexne\n1234|~* |1238|texticregexeq\n1235|!~* |1239|texticregexne\n 820|= | 920|network_eq \n 821|<> | 925|network_ne \n 822|< | 921|network_lt \n 823|<= | 922|network_le \n 824|> | 923|network_gt \n 825|>= | 924|network_ge \n 826|<< | 927|network_sub \n 827|<<= | 928|network_subeq\n 828|>> | 929|network_sup \n1004|>>= | 930|network_supeq\n(28 rows)\n\nAll of these mismatches occur because pg_operator contains more than\none entry for each of the underlying procs. For example, oid 974\nis the operator for \"bpchar || bpchar\", which is implemented by\nthe same proc as \"text || text\". That's OK because the two types are\nbinary-compatible. But there's no good way for an automated test to\nknow that it's OK.\n\nI see a couple of different ways to deal with this:\n\n1. Drop all of the above pg_operator entries. They are all redundant\nanyway, given that in each case the data types named by the operator\nare considered binary-compatible with those named by the underlying\nproc. If these entries were not present, the parser would still find\nthe operator, it'd just match against the pg_operator entry that names\nthe underlying type.\n\n2. Make additional entries in pg_proc so that all of the above operators\ncan point to pg_proc entries that agree with them as to datatypes.\n(These entries could still point at the same underlying C function,\nof course.)\n\n3. Extend the pg_type catalog to provide info about binary compatibility\nof different types, so that the opr_sanity regress test could discover\nwhether a type mismatch is really a problem or not.\n\n\nI like option #1 because it is the least work ;-). The only real\nobjection to it is that if we go down that path, we're essentially\nsaying that the only way to use the same proc to operate on multiple\ndata types is to declare the data types binary-equivalent --- that is,\nto allow the data types to be substituted for each other in *every*\noperation on those types. I can imagine having a couple of types that\nyou want to share one or two operations for, but not go so far as to\nmark them binary-equivalent. 
But we have no examples of this --- all of\nthe existing cases of overloaded operators are for types that actually\nare declared binary-equivalent.\n\nOption #2 is nothing but a hack; it would get the job done, but not\nelegantly.\n\nOption #3 is the most work, and it would also imply making the regress\ntest a lot slower since it'd have to join more tables to discover\nwhether there is a problem or not. But conceptually it's the cleanest\nanswer, if we can figure out exactly what info has to be stored.\n\n\nI think we might as well remove the above-named operators from\npg_operator in any case; they're just dead weight given the existence\nof binary-compatibility declarations for the underlying data types.\nThe question is whether we need to allow for future operators to link\nto pg_proc entries that name different data types that are *not* marked\nfully binary compatible. And if so, how could we teach the regress test\nnot to complain? (Maybe we don't have to; just add the exceptions to\nthe expected output from the test. That's an ugly answer though...)\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Mar 1999 10:56:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Binary-compatible types vs. overloaded operators" }, { "msg_contents": "\nI'm currently using the CVS code as part of a project I'm working on and\nhave not run into any problems. Today at work, looking at my oracle\nprocesses which look like\n\n PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND\n 6174 oradba 23 0 33M 12M sleep 0:00 0.18% 0.12% oracle\n 6166 oradba 33 0 35M 12M sleep 0:00 0.16% 0.11% oracle\n 6168 oradba 33 0 33M 10M sleep 0:00 0.13% 0.08% oracle\n 6170 oradba 33 0 33M 10M sleep 0:00 0.13% 0.08% oracle\n 6176 oradba 27 0 33M 10M sleep 0:00 0.08% 0.05% oracle\n 6172 oradba 33 0 33M 10M sleep 0:00 0.08% 0.05% oracle\n 351 oradba 33 0 11M 2944K sleep 0:00 0.03% 0.02% tnslsnr\n\nI've started to wonder why bother with this bloated beast at all. We are\njust now moving to an RDBMS and so it's the perfect time to switch (Oracle\ndoesn't cost us a cent so no money lost). Problem is, I'd rather not go\nback to 6.4.2. I'm thinking of moving the Oracle stuff to CVS postgresql and\njust dealing with the problems if any come up. I'd hope that would\nprovide you guys with additional feedback and thus possibly help improve\nthe code base prior to 6.5.\n\nSo, are there any show stoppers in CVS postgresql? I myself haven't hit\nanything, but I'm not really pounding on it yet.\n\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\nJames Thompson 138 Cardwell Hall Manhattan, Ks 66506 785-532-0561 Kansas\nState University Department of Mathematics\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\n\n\n", "msg_date": "Thu, 11 Mar 1999 10:32:32 -0600 (EST)", "msg_from": "James Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "What unresolved issues are in CVS?" }, { "msg_contents": "> I've started to wonder why bother with this bloated beast at all. We are\n> just now moving to an RDBMS and so it's the perfect time to switch (Oracle\n> doesn't cost us a cent so no money lost). Problem is, I'd rather not go\n> back to 6.4.2. I'm thinking of moving the Oracle stuff to CVS postgresql and\n> just dealing with the problems if any come up. 
I'd hope that would\n> provide you guys with additional feedback and thus possibly help improve\n> the code base prior to 6.5.\n> \n> So, are there any show stoppers in CVS postgresql? I myself haven't hit\n> anything, but I'm not really pounding on it yet.\n\nThe only thing I know of is that the new MVCC code doesn't vacuum\nproperly yet. It does vacuum, I just suspect it does not know how to\nprevent vacuuming of rows that are being viewed by other backends.\n\nPerhaps Vadim can comment on that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Mar 1999 12:18:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What unresolved issues are in CVS?" }, { "msg_contents": "On Thu, 11 Mar 1999, Bruce Momjian wrote:\n\n> The only thing I know of is that the new MVCC code doesn't vacuum\n> properly yet. It does vacuum, I just suspect it does not know how to\n> prevent vacuuming of rows that are being viewed by other backends.\n> \n> Perhaps Vadim can comment on that.\n\nSo if other backends are not active then vacuum works OK? Nightly vacuums\non inactive databases are OK? If so then it's \"good enough for government\nwork\" :)\n\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\nJames Thompson 138 Cardwell Hall Manhattan, Ks 66506 785-532-0561 \nKansas State University Department of Mathematics\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\n\n\n", "msg_date": "Thu, 11 Mar 1999 11:28:27 -0600 (EST)", "msg_from": "James Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What unresolved issues are in CVS?" }, { "msg_contents": "> On Thu, 11 Mar 1999, Bruce Momjian wrote:\n> \n> > The only thing I know of is that the new MVCC code doesn't vacuum\n> > properly yet. It does vacuum, I just suspect it does not know how to\n> > prevent vacuuming of rows that are being viewed by other backends.\n> > \n> > Perhaps Vadim can comment on that.\n> \n> So if other backends are not active then vacuum works OK? Nightly vacuums\n> on inactive databases are OK? If so then it's \"good enough for government\n> work\" :)\n\nMy guess is that this is true.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Mar 1999 12:37:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What unresolved issues are in CVS?" }, { "msg_contents": "Questions:\n\na) Parameterized Types\n\nI was wondering if a parameter mechanism is/will be\navailable in 6.5 for user defined types. Currently,\nVARCHAR(10), NUMBER(6,2) are parameterized types; \nthis is more or less what I'm after (only it'd be\ngreat if strings could be parameters too)\n\nWhy? I'm creating types from abstract algebra\n(group theory) and there are an infinite number \nof them, thus I need several parameters to specify \nthe information. 
\n\nEven if it allowed one integer, this would be cool,\nsince I could load a table with the options, and\nthen pass the OID of the tuple.\n\nb) Session Memory\n\nI was wondering how I can get to the memory area associated\nwith a user's session, similar to the package level \nvariables in Oracle.\n\nThanks!\n\nClark\n", "msg_date": "Thu, 11 Mar 1999 17:58:35 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Parameterized Types and Session Memory" }, { "msg_contents": "\nToday's CVS\nSolaris 2.5.1\nGCC 2.7.2.2\n\nmake[3]: Entering directory `/home/postgres/pgsql/src/backend/utils/adt'\ngcc -I../../../include -I../../../backend -Wall -Wmissing-prototypes\n-I../.. -c int8.c -o int8.o\nint8.c: In function `int8out':\nint8.c:83: `INT64_FORMAT' undeclared (first use this function)\nint8.c:83: (Each undeclared identifier is reported only once\nint8.c:83: for each function it appears in.)\nmake: *** [int8.o] Error 1\n\n\nFrom the configure.in file I assume this is the part you'd need to see\n\nchecking for snprintf... (cached) no\nchecking for vsnprintf... (cached) no\nchecking for isinf... (cached) no\nchecking for getrusage... (cached) yes\nchecking for srandom... (cached) yes\nchecking for gethostname... (cached) yes\nchecking for random... (cached) yes\nchecking for inet_aton... (cached) no\nchecking for strerror... (cached) yes\nchecking for strdup... (cached) yes\nchecking for strtol... (cached) yes\nchecking for strtoul... (cached) yes\nchecking for strcasecmp... (cached) yes\nchecking for cbrt... (cached) yes\nchecking for rint... (cached) yes\nchecking whether 'long int' is 64 bits... no\nchecking whether 'long long int' is 64 bits... yes\n\n\nThe (cached) is due to my running it a second time to clip this output.\n\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\nJames Thompson 138 Cardwell Hall Manhattan, Ks 66506 785-532-0561 \nKansas State University Department of Mathematics\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\n\n\n\n", "msg_date": "Thu, 11 Mar 1999 16:41:38 -0600 (CST)", "msg_from": "James Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "INT64_FORMAT missing" }, { "msg_contents": "James Thompson <[email protected]> writes:\n> Today's CVS\n> int8.c: In function `int8out':\n> int8.c:83: `INT64_FORMAT' undeclared (first use this function)\n> int8.c:83: (Each undeclared identifier is reported only once\n> int8.c:83: for each function it appears in.)\n\nFixed, I hope.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Mar 1999 20:48:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] INT64_FORMAT missing " }, { "msg_contents": "On Thu, 11 Mar 1999, Bruce Momjian wrote:\n\n> > On Thu, 11 Mar 1999, Bruce Momjian wrote:\n> > \n> > > The only thing I know of is that the new MVCC code doesn't vacuum\n> > > properly yet. It does vacuum, I just suspect it does not know how to\n> > > prevent vacuuming of rows that are being viewed by other backends.\n> > > \n> > > Perhaps Vadim can comment on that.\n> > \n> > So if other backends are not active then vacuum works OK? Nightly vacuums\n> > on inactive databases are OK? If so then it's \"good enough for government\n> > 
If so then its \"good enough for government\n> > work\" :)\n> \n> My guess is that this is true.\n\nIt is true for me ;-)\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Mon, 15 Mar 1999 18:55:37 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What unresolved issues are in CVS?" }, { "msg_contents": "> I have checked in fixes for all of the genuine bugs that I found in\n> pg_operator and pg_proc by means of mechanical consistency checks.\n> I would like to add these consistency checks to the regression tests,\n> but right now they still produce some bogus \"failures\":\n<snip results>\n> All of these mismatches occur because pg_operator contains more than\n> one entry for each of the underlying procs. For example, oid 974\n> is the operator for \"bpchar || bpchar\", which is implemented by\n> the same proc as \"text || text\". That's OK because the two types are\n> binary-compatible. But there's no good way for an automated test to\n> know that it's OK.\n> I see a couple of different ways to deal with this:\n> 1. Drop all of the above pg_operator entries. They are all redundant\n> anyway, given that in each case the data types named by the operator\n> are considered binary-compatible with those named by the underlying\n> proc. If these entries were not present, the parser would still find\n> the operator, it'd just match against the pg_operator entry that names\n> the underlying type.\n\nJust a comment: types which are brute-force allowed to be binary\ncompatible (brute-force because it is compiled into the code rather\nthan entered into a table) would not be handled exactly the same as if\nthere were an explicit entry for them. With explicit entries there is\nan exact match on for the operator found by the parser on its first\ntry. With binary compatibility but no explicit entry then the parser\ntries first for that explicit match, fails, and then tries some\nheuristics to get a good alternative. I would think that for anything\nother than *very* small queries and tables the extra time would be\nnegligible.\n\n> 3. Extend the pg_type catalog to provide info about binary \n> compatibility of different types, so that the opr_sanity regress test \n> could discover whether a type mismatch is really a problem or not.\n\nThis is the elegant solution of course, and probably a lot of work :)\n\n - Tom\n", "msg_date": "Sun, 21 Mar 1999 07:04:18 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Binary-compatible types vs. overloaded operators" } ]
[ { "msg_contents": "Hello,\n\nI'm trying to compile postgresql on a Siemens Nixdorf RS600 running \nSINIX-Y using the gnu toolchain. \n\nI am new to programming SINIX.\n\nAs I've read that this version should be able to run the same binaries \nas IRIX, I tried to build it using --with-template=irix5\n\nconfigure went fine, after I added --without-CXX, but make gave me \nthe following:\n\n-------------\n[postgres@zeus src]$make\nmake lexverify\nmake[1]: Entering directory `/dbs/vol2/home/postgres/pgsql/src'\nmake -C lextest all\nmake[2]: Entering directory `/dbs/vol2/home/postgres/pgsql/src/lextest'\nmake[2]: Nothing to be done for `all'.\nmake[2]: Leaving directory `/dbs/vol2/home/postgres/pgsql/src/lextest'\nmake[1]: Leaving directory `/dbs/vol2/home/postgres/pgsql/src'\nmake -C utils all\nmake[1]: Entering directory `/dbs/vol2/home/postgres/pgsql/src/utils'\nmake[1]: Nothing to be done for `all'.\nmake[1]: Leaving directory `/dbs/vol2/home/postgres/pgsql/src/utils'\nmake -C backend all\nmake[1]: Entering directory `/dbs/vol2/home/postgres/pgsql/src/backend'\nmake -C access all \nmake[2]: Entering directory\n`/dbs/vol2/home/postgres/pgsql/src/backend/access'\nmake -C common SUBSYS.o\nmake[3]: Entering directory\n`/dbs/vol2/home/postgres/pgsql/src/backend/access/common'\nld -r -o SUBSYS.o heaptuple.o heapvalid.o indextuple.o indexvalid.o\nprinttup.o scankey.o tupdesc.o \ncollect2: ld returned 1 exit status\nld: heaptuple.o: fatal error: symbolic tables not ordered\n\nmake[3]: *** [SUBSYS.o] Error 1\nmake[3]: Leaving directory\n`/dbs/vol2/home/postgres/pgsql/src/backend/access/common'\nmake[2]: *** [submake] Error 2\nmake[2]: Leaving directory\n`/dbs/vol2/home/postgres/pgsql/src/backend/access'\nmake[1]: *** [access.dir] Error 2\nmake[1]: Leaving directory `/dbs/vol2/home/postgres/pgsql/src/backend'\nmake: *** [all] Error 2\n--------------\n\nI'm quite puzzled by the \"symbolic tables not ordered\" message.\n\nCan anyone tell me what it could possibly mean and how to avoid it?\n\nIt may not be specifically a postgres question but a gcc/ld one, \nbut I was able to compile apache 1.3.3 by just doing ./configure; make\n\nAny help (including pointers to better places to ask SINIX questions)\nis much welcome.\n\nOr does anyone have a compiled binary for irix5 that they could let me\ntry?\n\n----------------\nHannu\n", "msg_date": "Thu, 11 Mar 1999 19:58:15 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Compiling PostgreSQL on SINIX" }, { "msg_contents": "> I'm quite puzzled by the \"symbolic tables not ordered\" message.\n> \n> Can anyone tell me what it could possibly mean and how to avoid it?\n> \n> It may not be specifically a postgres question but a gcc/ld one, \n> but I was able to compile apache 1.3.3 by just doing ./configure; make\n\nIt usually means it wants tsort run on the library. You may find other\nOS's that have that configured, and if you add that, it may fix the\nproblem. Send us a patch, and we will try and get it into the\ndistribution.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Mar 1999 13:06:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Compiling PostgreSQL on SINIX" } ]
[ { "msg_contents": "> Hello!\n> \n> Vadim already gave the idea to use EXISTS. Will try it.\n> Thanks to all who replied!\n> \n> On Wed, 10 Mar 1999, Jackson, DeJuan wrote:\n> > Try your query this way:\n> > SELECT DISTINCT subsec_id\n> > FROM positions p\n> > WHERE EXISTS(SELECT 1\n> > FROM central c, shops s, districts d\n> > WHERE p.pos_id = c.pos_id AND \n> > c.shop_id = s.shop_id AND\n> > s.distr_id = d.distr_id AND\n> > d.city_id = 2);\n> \n> > Make sure you have indexes on pos_id, shop_id, distr_id, \n> and city_id.\n> \n> All these are primary keys in corresponding tables, and hence have\n> UNIQUE indices. Is it enough?\n> \n> Oleg.\nYou should have indexes on both the primary and the referenced table.\n(i.e. positions.pos_id and central.pos_id). It gives PostgreSQL more\noptions on which join methods to use while still having an index to\nreference.\n\n\t-DEJ\n", "msg_date": "Thu, 11 Mar 1999 12:00:32 -0600", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Bug on complex subselect (was: Bug on complex join)" }, { "msg_contents": "Hi!\n\nOn Thu, 11 Mar 1999, Jackson, DeJuan wrote:\n> > > Make sure you have indexes on pos_id, shop_id, distr_id, \n> > and city_id.\n> > \n> > All these are primary keys in corresponding tables, and hence have\n> > UNIQUE indices. Is it enough?\n> > \n> > Oleg.\n> You should have indexes on both the primary and the referenced table.\n> (i.e. positions.pos_id and central.pos_id). It gives PostgreSQL more\n> options on which join methods to use while still having an index to\n> reference.\n\n Understand.\n Thank you.\n\n> \n> \t-DEJ\n> \n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 11 Mar 1999 21:00:54 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Bug on complex subselect (was: Bug on complex join)" } ]
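A compact sketch of the dual-sided indexing advice above, using the thread's tables (the index names are hypothetical; the primary-key side typically already exists as a UNIQUE index):

    -- primary-key side, normally created with the table
    CREATE UNIQUE INDEX positions_pos_id_key ON positions (pos_id);
    -- matching index on the referencing column, so either table can drive the join
    CREATE INDEX central_pos_id_idx ON central (pos_id);
    -- the same pattern applies to the remaining join columns
    CREATE INDEX central_shop_id_idx ON central (shop_id);
    CREATE INDEX shops_distr_id_idx ON shops (distr_id);
    CREATE INDEX districts_city_id_idx ON districts (city_id);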
[ { "msg_contents": "\nThis is a very urgent problem for me to resolve, if there is anything I can do to help figure this out,\nplease let me know.\n\nPeter has my older test code that failed out 6.3.2 and 6.4.2.\nI have compiled my postgres with --enable-cassert.\n\nall of these are done through the jdbc driver.\n\n\n\n1)\tI have two problems with the jdbc driver.\n\t1)\tthe linux (blackdown) jdk 1.1.6 has a version string\n\tof Linux_JDK_1.1.6_v1, not 1.1.6, so the jdbc driver\n\ttries to load the 1.2 driver which of course isn't present..\n\tMinor fix to add a if || getProperty( \"java.version\").startsWith( LinuxJDK_1.1 )\n\tto the 1.1 test\n\n\t2)\tthe LARGE OBJECT selector in database metadata is\n {\"LARGE OBJECT\",\t\"(relkind='r' and relname ~ '^xinv')\"},\n\n\tand for this version of postgres appears to need to be\n {\"LARGE OBJECT\",\t\"(relkind='l' and relname ~ '^xinv')\"},\n\n\n\nI haven't checked into the data handling stuff that is so\nproblematic...\n\n1)\tI can't fetch large objects except within transactions, this\nis new, It used to be that it didn't matter\n2)\tI can't store more that 1 large object in a transaction,\n\tif i have multiple lo's i have to store them using\n\tautocommit(false).\n\n3)\tafter uploading 280 lo's in in autocommit(true) mode\n\tI get the following message in the postmaster log file\n\nNOTICE: DateStyle is Postgres with US (NonEuropean) conventionsN\nOTICE: SIReadEntryData: cache state reset\nTRAP: Failed Assertion(\"!(RelationNameCache->hctl->nkeys == 10):\", File: \"relcache.c\", Line: 1467)\n\n!(RelationNameCache->hctl->nkeys == 10) (0) [No such file or directory]\nNOTICE: Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going to terminate your database system connection and exit.\n\tPlease reconnect to the database system and repeat your query.\nNOTICE: Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going to terminate your database system connection and exit.\n\tPlease reconnect to the database system and repeat your query.\n\n\n-- the message I get if I try to do multiple lo's inside of a transaction is:\nNOTICE: DateStyle is Postgres with US (NonEuropean) conventions\nTRAP: Bad Argument to Function Call(\"!(AllocSetContains(set, pointer)):\", File: \"aset.c\", Line: 292)\n\n!(AllocSetContains(set, pointer)) (0) [No such file or directory]\nNOTICE: Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going to terminate your database system connection and exit.\n\tPlease reconnect to the database system and repeat your query.\nNOTICE: Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am going to terminate your database system connection and exit.\n\tPlease reconnect to the database system and repeat your query.\n\n", "msg_date": "Thu, 11 Mar 1999 21:57:09 -0800", "msg_from": "Jason Venner <[email protected]>", "msg_from_op": true, "msg_subject": "more on large objects using postgresql.snapshot from today (March 11)" } ]
[ { "msg_contents": "I created functions to add comments based on the table and column name\nrather than using the oids. I include these calls as part of my table\ncreation scripts. I assume that by initdb you mean when the database is\ncreated than when Postgres is started. If the oids can change every time\nyou start Posgtres, this would be a problem. \n\nI can upload my functions for integrating into Postgres as a standard way to\nadd comments (if requested). Maybe we should consider attaching\ndescriptions to a table and or column name rather than an oid?\n\nOn another note, I have been following the thread about passing NULL\nparameters into a C function call. What is happening with this? If a\nparameter of a C function call contains NULL, the function is either not\ncalled or returns NULL. I would like to change this so that my C function\ncan return a value even though its parameters are NULL.\n\nThanks, Michael\n\n\t-----Original Message-----\n\tFrom:\tBruce Momjian [SMTP:[email protected]]\n\tSent:\tFriday, March 12, 1999 11:31 AM\n\tTo:\tMichael Davis\n\tCc:\[email protected]\n\tSubject:\tRe: [GENERAL] Comments on columns?\n\n\t> I did that and it works great. I would like \\dd tablename to show\nthe table\n\t> comment and its column comments. \n\n\tYes, I would like this too, and have considered it. The problem I\nhave\n\thad with system tables is that the oid of the table and columns is\n\tgenerated at initdb time, and not assigned constant values as the\n\tpg_proc entries are.\n\n\tI could modify psql to show comments, but I can't figure out how to\nget\n\tthe system tables to show this.\n\n\tIn fact, the TODO list has:\n\n\t * allow pg_descriptions when creating types, tables, columns, and\nfunctions\n\n\t-- \n\t Bruce Momjian | http://www.op.net/~candle\n\t [email protected] | (610) 853-3000\n\t + If your life is a hard drive, | 830 Blythe Avenue\n\t + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Fri, 12 Mar 1999 12:46:58 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [GENERAL] Comments on columns?" }, { "msg_contents": "> I created functions to add comments based on the table and column name\n> rather than using the oids. I include these calls as part of my table\n> creation scripts. I assume that by initdb you mean when the database is\n> created than when Postgres is started. If the oids can change every time\n> you start Posgtres, this would be a problem. \n> \n> I can upload my functions for integrating into Postgres as a standard way to\n> add comments (if requested). Maybe we should consider attaching\n> descriptions to a table and or column name rather than an oid?\n\nI like to keep using the oid because it allows comments on functions and\noperators, which would make a 'name'-based system work badly.\n\nI plan to add system tablename/column descriptions by doing some kind of\nname/oid mapping function for this. This will happen after 6.5.\n\nI will also try to add some standard way of adding descriptions.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 May 1999 16:09:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Comments on columns?" } ]
[ { "msg_contents": "\nFor me, using the postgres snap, I got yesterday from (march 11)\nftp.postgresql.org,\nthe following test program cut from test/examples/lotest.c\nfailes on the 16th file.\n\nThe dummy files are named after their size/512 bytes.\nThe log file message I get is, included in the tar file.\n\nNote: I don't seem to have any problems if I use lo_import and lo_export\nThe problem of course is that I don't want my data to have to go through tmp files\non it's way in or out of postgres.\n\nthe program only requires libpq and the postgres include directory.\nUsage:\nlotest [-u dbuser (null)] [-p password(null)] [-P port (null)] [-d dbname(prod)] [-h host(null)] dummy/*\nI use it\ncreatedb tmp\nlotest -u jason -d tmp dummy/*\nlotest -u jason -d tmp dummy/*\ncan't create large objectimport of dummy/01 failed\ncan't create large objectimport of dummy/02 failed\nimported dummy/03 to 88837\nimported dummy/04 to 88852\nimported dummy/05 to 88867\nimported dummy/06 to 88883\nimported dummy/07 to 88899\nimported dummy/08 to 88915\nimported dummy/09 to 88931\nimported dummy/10 to 88948\nimported dummy/11 to 88965\nimported dummy/12 to 88982\nimported dummy/13 to 89000\nimported dummy/14 to 89018\nimported dummy/15 to 89036\nerror while writing lo for \"dummy/16\", got back -1, wanted 2048 : No such file or directory\nimport of dummy/16 failed\ncan't create large objectimport of dummy/17 failed\ncan't create large objectimport of dummy/18 failed\ncan't create large objectimport of dummy/19 failed\ncan't create large objectimport of dummy/20 failed\nEND command failed\n\n\n\nbegin 600 testfile.tar.gz\nM'XL(\"&M\\Z38``W1E<W1F:6QE+G1A<@#MW?U3VT8:!_#\\:OT53YQ)D8G!\\CLU\nMI3,I(3WF6B`DZ<T=81A96F,ULN1*,I\"V^=_OV=5*MK$<DS;J3*??S]T0O-IW\nMK=:KE9KX82+B9-=Q'I6G:5F]3H<>$5&WTU9_$EGZ3VI:O6Z;J-?KMUK];K/7\nMYZ!FN]M\\1%:)=<K-XL2.B![];,=A\\(EXMV,A_+^B0G^M)U[@^#-7T#=QXOK>\nM<'?\\K;$4YH7W@R(ON%X.&SE!XB\\'S0*/$R^%53G[Z2\\[([$[KMX/;>AC<7IL\nM1(]=,?(\"X9K?O7WY^OA_1S7C21I\".H!:5F?/>\"(\"UQL91F/;H&WR)M,P2EYZ\nMOJ`=_EPA'4(C&53U@BOY2V!/1)6\\(`G)M1-[:,>\"[)A\\.[H6%`Y_%D[\"M>)?\nM3CVWRMGP_QL&CY+$<XB#C'DIYMGW3A@$M,T_Z^2,>2!M9R74C-\\,DO$KE8K,\nM[-C=Y\\]<JOY\\-5(!,E&E,IR-+G2[+N?1@N$'OCCKAKQ*DLET?D`FY0^RT<3-\nM#J<BH&0LTF9RLX:\"(F&['%M%:/#/D4L'*J*95;!.IU?G+TY/?OAOG:Q>KU=3\nM^8_(Y*C?D%53Q?Y6236V2411&*692:,ICX-D9/)9YB-UJCIVL)6D=>&3?Z<J\nM\\Z[Z-'Y7?1=4ZY3WR[[.(!+)+`K(2C]_7&R0P[5/A&K2XEG)&Y/V)S?(#Z]4\nM7%[email 
protected][.CYZ_H-_5K_\\Y/WYSE+<J2W/`3:/?C$\\T09>^6')5UWJQSA]U\nM.XCFO^BZ78UTY52'J[JEQ==7*F;.$QS03I/NUXWRRAW.^S<0MYR,GOHSU;7F\nM+(B]:[Y8.#\"XKF7=4UCGA5[60X1&43A1??TV.VUD!R[=1EZB1I,\\Y`4W(HJ]\nM,%#'\\Q-Q.Y:Q33,=J=QFF2</H#KQB*YGEVJM1M_F`TIW5J.QVL);S_=UL4_=\nMM%H\\?&0+]960#R+*1Q%?%VE7JW3SOK[**Y&FS1/(L2`3?9,=R*M4,\"#2,9^V\nM4A;`,Y_L^!$'ID.[3M=A0D/;><]5KM.M'21\\&IZZU7J>:9YY?MUQ\\5FUY@TA\nMFJK23*KR_Q:\"EZ^3;-RI,^GX82RXNU5L>2VHSXM]4%,3A<XBFX8X+3>3)K87\nMF')&(1[I3CI_;6_+#S<D9R].*.X2P0G5$1YZ_$UYO3\\/EDDYT`NRJ8V<;$K;\nMIEDL(CXSUCQD:L?Q;1BYBZ&T[0[5\"3V@ZC0*W>K\\@)JXEZ*.PS@/R0:?2:;#\nM8=<BX9J8NB6R\"7SVQH/98#IP!]R=-7I\\0$>G+_/K*[[U$F=LDI.'./)[8&N\\\nM-=#]K$N;MUH:\\@!_O[\\0?Y;'UPW>$'^:QU_HC@UISN9ITC[9$-_-X^>=NS8%\nM?ZG:,S_)$JQ>E$_CP7%P8_N>*[MU-A%\\FI]R'\\]B^UH,^`*EBYV9:OPE_S;-\nMFR4_N5R!RW1&N9`_=W=W+U>O#'FV+JQ+'H#U[/?Y^!=W'I_69A;P44^\\<K#Q\nMY:1&P_T+K='(6D7RNSV=9N79S,ZT3/CL@)L8\\6]F51XZV,JON>6C*ETVW7,^\nM:8^NR2D]N#:O+.VSEKHJ=8ZRZ];D)P^MS2U-)_/*L\\KZ?DUVV>&U6>;IEW/E\nM0;<N1SZT/C>5;IZ5NHCE>HF_4\"\\XPK.F6NMD(7S6#VCKG;6UOWK&.$>'O^6S\nMN/+:7CYI*Q'4U;ON8'4KFV(+S^IJ_)43NQ)%7VD/+G'QK*_&OG?B5R*HN>;!\nM9=T?%JLI\"D;&2J1\\OGIXN0L#IZ#,Y;&S6IZ<ZQY2%O_@]:E1X25-+!*:3=6*\nM14?E)8L\\TI\"QTM7ZMCS$0^WLE8[B#DW]6[XF4S%X;J'??^=X<NT_BV4D]35R\nM>'IR<G3XYOCTY.KTWQL7;#IK6KS;N/%L.7'RZB$[JMN9E?U8ECU?F:SD??9*\nMK15^%+&<A=,TM?FD>?:*[Y2\\>*P/S*?.>_/IQ[17(A'S9\"D7A+'J%W$GG'05\nM4?WNZ/OCDVJ^B#Y[E<9]G?9())=/7->S[\\^/7E\\=GO[XX_.3%]PI:Q?8*CNN\nMU&0B5Y@CF[\\37%[@Z1KR&?&%':ELLQ#=$-GW^T4-F\"]GXW$X\\]TLD[Q9O$P0\nM@>#5*WD)>3$%H5HE\\^=`\")<7:WQB[)N0[^M4+A,Q\":,//)/9[^-\\F;M<,4/=\nMUI&^0Y35&,EEV[Y>\"_'*4JY\"LH_/GLU7&&H9I+X.#])ONC3*I9X]TPSYV,(=\nMICH-*LG\"$,EC;A@F57W[&X[D<,N[>SG#CR1\\'I-%F82S),]$+FMCV5G978?,\nM8_7>(Z_;O2_L@K%U=/+BRXTLSNSO/JX*JI56RI*5^FB4LO_CA]?R1):2=T;N\nM_UG6VOV_3L?J9_M_O6:K*_?_.`'V__X*#;X9;23MQM`+&E->,TWLF._M!O0=\nMW]**P.7+,4IFTP%-^9)N[O7WTJ6'ZBQ>\\ZC;[SCDJ`EUC>*\\>-KS?/=\")KX<\nMD.QM=1O-]U]C,N\\GN8[D;?X.WQ_ON&W^<=;E'S?-=M/JM_2-<\\UXR7/F$<\\C\nM`YYX9WR]5XMRJ7)-93GZCL)PQ7!VS?<-HW!@5-[*1F0.TN88E7.^1A/Q+[ER\nM)+6IX-B^7$=F1\\[D7\"J/6$;EA?XN/TGOK[@\"1N4G$0WY!CS+M6U43L)LVT2%\nMC(Q*XDVX5G$6AT-X52#B>5U.PFAB^X;<$HR]7_/,>AVC$G/Y/(]D0=UFRZC\\\nM,A,\\K0AG'.K\\C@,O.=-]8*C3]R:R@]A62Z'#=(8T5\"H^R?+KV#B+0H<7$F\\3\nMS_>2/%1&]8K2;L@S%KY<T/#-?+K9P3-@NI4SO;[B0$?.FY'(CLLEO]XGVZ(P\nMNA>L]C.*PN4.74'X+/\"]X'W!`?Z\"$T7AB?#]U6\"YB;4:JC:8MK+>>B5;^_F=\nM='1^?GH^D#?J01#F6XUW7G\"SM[?7;AO/AWR.#V=1Q+?9\"\\G_8'[=S\\[OLQN$\nM!$B`!$B`!$B`!$B`!$B`!$B`!$B`!$B`!%\\LP9OSYV<#>JD>;M#S.!:1/&I6\nM'YLFF?))0\\TTAV'HU\\@T'Y.M(ER)P!YR_)I\\DLFACW444R78KIG^5#UG23_3\nM=LVJ,17YZ,X14UF\"[1^&@>OEA6W.H%JGKTQ=T[RBM3J9Z@E8;3&R?)0O[*D]\nMV74X5=-J-F4%.&K6$EG`SK?^]&KDV]<Q?476G=5415HUJM4&G$H^*!LLY?.#\nM%W\"(RLTP_M']0R;_<7$24CQSQNESOS`BUXN$DX31A\\LUF_8#^2;?5.Z?N_(M\nMP6FZZ2GBW=W=M2D.?6$'LZG<(%UX7B`?8?%X5=O]Z8-TZCTLBTAX`?>J[7N_\nMRHK$W#><D7Z0QA<$Q6)B3\\>AW.B.QQQ^I1Z7R?8^L2X-SMN5Z5[+0X=J8]1\\\nM+SX<=%NBU^G62>ZN'^RU]WI?]VKE/%O[.W!GD\\F'1KEE4,?J=[OKGO^E'[+W\nM_SO-COQ/`KKMSB/JEENMU#_\\^5]Z_JUFF65\\^K__X+-M+9[_M@SI]UIX_@M0\nM-GW]M\\HL8\\/UWRJ\\_CNX_@$`RJ3G_W:996R8_]N%\\W\\/\\S\\``$!9]/=_I\\PR\nM-GS_=XJ^__MX_Q\\```\"@%'K]5^JCU@WKOV[A^@_/_P``````OCB]_N^56<:&\nM]7^O</V/]W\\```````\"^*'W_UR^SC`WW?_W\"^S^\\_PL```````#PQ>C[_[TR\nMR]CT[_^NWO_+O^<*]_\\``````````%^\"WO_YNLPR-NS_%/S]OTVKV<;^#P``\nM````````P)^5[O\\U2]UJV[#_5_#O_W`*_/T?`````````````'^*WO]MEEG&\nMAOW?@G__5Z;`_B\\`````````````P!^E]_];99:Q8?^_L[S_W\\'?_PD`````\nM`````````/`GZ.<_[3++V/#\\IUOX_`=__S<```````````````#`9]//_SIE\nMEK'A^5^O\\/E?%\\__```````````````````^AW[^VRVSC`W/?_N%SW_[>/X+\nM``````````````````#P4/KY?Z_,,C[]_+]E%3[_;^'Y/P````````
``````\nM`````,!#Z/<_^F66L>']CV;A^Q\\=O/\\!`````````````````````+\")?O]G\nMK\\PR-KS_TRI\\_Z>']W\\````````````````````````^1;__]76996QX_ZM=\nM]/Y7R\\+[7P````````````````````````#KI.__M4I]U6[#^W^=PK__K8GW\nD_P````````````````````````````````#^#U%&F=X`\"`(`\n`\nend\n", "msg_date": "Fri, 12 Mar 1999 12:48:05 -0800", "msg_from": "Jason Venner <[email protected]>", "msg_from_op": true, "msg_subject": "Simple C++ program using libpq that failes very quickly using the lo\n\tinterface" } ]
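A workaround-style sketch for the failure above, assuming the lo_creat breakage is tied to transaction state as the earlier report suggests: give every import its own transaction, which is roughly what lo_import does for you internally. This mirrors the shape of lotest's import loop but is not the original test code:

    #include <stdio.h>
    #include <string.h>
    #include "libpq-fe.h"
    #include "libpq/libpq-fs.h"         /* INV_READ, INV_WRITE */

    /* Import one buffer as a large object inside its own transaction;
       returns the new object's oid, or 0 on failure. */
    static Oid import_buffer(PGconn *conn, const char *data, int len)
    {
        Oid lobj;
        int fd;

        PQclear(PQexec(conn, "BEGIN"));
        lobj = lo_creat(conn, INV_READ | INV_WRITE);
        if (lobj == 0)
        {
            fprintf(stderr, "lo_creat: %s", PQerrorMessage(conn));
            PQclear(PQexec(conn, "ROLLBACK"));
            return 0;
        }
        fd = lo_open(conn, lobj, INV_WRITE);
        if (fd < 0 || lo_write(conn, fd, (char *) data, len) != len)
        {
            fprintf(stderr, "lo_open/lo_write: %s", PQerrorMessage(conn));
            PQclear(PQexec(conn, "ROLLBACK"));
            return 0;
        }
        lo_close(conn, fd);
        PQclear(PQexec(conn, "END"));
        return lobj;
    }

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=tmp");   /* db name from the test above */
        char buf[2048];

        memset(buf, 'x', sizeof(buf));
        printf("imported oid %u\n", import_buffer(conn, buf, sizeof(buf)));
        PQfinish(conn);
        return 0;
    }

Whether a transaction per object actually sidesteps the assertion failures is exactly what a harness like this can establish.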
[ { "msg_contents": "Sorry that many of you have had trouble getting my webcam to work.\n\nThe locking code was not reliable, and there were no proper messages\nwhen the camera was in use, or a non-Netscape browser was used. I have\nfixed these items, so it should work reliably from now on.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Mar 1999 22:55:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "My webcam" } ]
[ { "msg_contents": "A couple days ago I asked what happens when I hit 2GB and I was told \"no\nworries\". Well, I hit 2GB and it barfed on me. It looks like postgres is\ntrying to create a new file, but it isn't able to use it....\n\n/dev/sda is where my database resides and it has 38% free.\n\nRight now, geocrawler is still up and running, but no new mail can be\ninserted into it until I can get this figured out....\n\nPlease respond directly to [email protected]\n\nThanks,\n\nTim\n\n\njava.sql.SQLException: ERROR: tbl_mail_archive: cannot extend <-- BAD NEWS\n\n at postgresql.Connection.ExecSQL(Compiled Code)\n at postgresql.Statement.execute(Compiled Code)\n at postgresql.Statement.executeUpdate(Compiled Code)\n at com.geocrawler.mail.News2SQL.processFile(Compiled Code)\n at com.geocrawler.mail.News2SQL.<init>(Compiled Code)\n at com.geocrawler.mail.News2SQL.main(Compiled Code)\njava.sql.SQLException: IOError while reading from backend:\njava.io.IOException:\nThe backend has broken the connection. Possibly the action you have\nattempted ha\ns caused it to close.\n at postgresql.PG_Stream.ReceiveChar(Compiled Code)\n at postgresql.Connection.ExecSQL(Compiled Code)\n at postgresql.Statement.execute(Compiled Code)\n at postgresql.Statement.executeUpdate(Compiled Code)\n at com.geocrawler.mail.News2SQL.processFile(Compiled Code)\n at com.geocrawler.mail.News2SQL.<init>(Compiled Code)\n at com.geocrawler.mail.News2SQL.main(Compiled Code)\n\n\nls -l\n\n-rw------- 1 postgres postgres 2147482624 Mar 13 22:43 tbl_mail_archive\n-rw------- 1 postgres postgres 0 Mar 13 22:41 tbl_mail_archive.1\n\n\ndf\n\nFilesystem 1024-blocks Used Available Capacity Mounted on\n/dev/sdb3 1931651 725575 1106236 40% /\n/dev/sda 8573425 5077127 3051717 62% /fireball\n\n\n\n", "msg_date": "Sat, 13 Mar 1999 22:46:49 -0600", "msg_from": "\"Tim Perdue\" <[email protected]>", "msg_from_op": true, "msg_subject": "URGENT - \"CANNOT EXTEND\"" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> A couple days ago I asked what happens when I hit 2GB and I was told \"no\n> worries\". Well, I hit 2GB and it barfed on me. It looks like postgres is\n> trying to create a new file, but it isn't able to use it....\n> \n> /dev/sda is where my database resides and it has 38% free.\n> \n> Right now, geocrawler is still up and running, but no new mail can be\n> inserted into it until I can get this figured out....\n> \n> Please respond directly to [email protected]\n\nOK, the problem is that some OS's have trouble with tables that are\nexactly 2gig. While PostgreSQL does not have a problem, and many OS's\ndon't, some can't handle a file that is exactly 2 gigs, so we have\nmodified the 6.5 unreleased code to stop growing the table at about 1\ngig, and create a new table file so that is not a problem. \n\nPeter Mount made the change. I don't see anything on our FTP server\nabout that patch, so perhaps Peter will have to supply one, or we will\nhave to grab one. If you search the hackers archive on the ftp site,\nyou will see discussion about it, and perhaps a patch.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Mar 1999 00:07:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] URGENT - \"CANNOT EXTEND\"" }, { "msg_contents": "On Sun, 14 Mar 1999, Bruce Momjian wrote:\n\n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > A couple days ago I asked what happens when I hit 2GB and I was told \"no\n> > worries\". Well, I hit 2GB and it barfed on me. It looks like postgres is\n> > trying to create a new file, but it isn't able to use it....\n> > \n> > /dev/sda is where my database resides and it has 38% free.\n> > \n> > Right now, geocrawler is still up and running, but no new mail can be\n> > inserted into it until I can get this figured out....\n> > \n> > Please respond directly to [email protected]\n> \n> OK, the problem is that some OS's have trouble with tables that are\n> exactly 2gig. While PostgreSQL does not have a problem, and many OS's\n> don't, some can't handle a file that is exactly 2 gigs, so we have\n> modified the 6.5 unreleased code to stop growing the table at about 1\n> gig, and create a new table file so that is not a problem. \n> \n> Peter Mount made the change. I don't see anything on our FTP server\n> about that patch, so perhaps Peter will have to supply one, or we will\n> have to grab one. If you search the hackers archive on the ftp site,\n> you will see discussion about it, and perhaps a patch.\n\nI did post a patch when I found the problem to the patches list. That one\nset it to 1 block short of 2Gig, and afterwards, we decided to change it\nto 1Gig. I thought the changes had been made already.\n\nI'm finally about to get back into the swing of things, and I'll post the\npatch again as soon as I resync my sources.\n\nAs for it working with 6.4, it should do, although you probably have to\napply the patch manually.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Sun, 14 Mar 1999 10:03:26 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] URGENT - \"CANNOT EXTEND\"" }, { "msg_contents": "On Sun, 14 Mar 1999, Bruce Momjian wrote:\n\n[snip]\n> Peter Mount made the change. I don't see anything on our FTP server\n> about that patch, so perhaps Peter will have to supply one, or we will\n> have to grab one. If you search the hackers archive on the ftp site,\n> you will see discussion about it, and perhaps a patch.\n\nI've dug this out of my mail archives. It splits the relations at 1Gb.\nThis diff is from this mornings cvs.\n\nGood job I don't delete what I send out ;-)\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf", "msg_date": "Sun, 14 Mar 1999 11:00:53 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] URGENT - \"CANNOT EXTEND\"" } ]
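For readers following along: the segmented-relation fix means block numbers are mapped onto a chain of files (tbl_mail_archive, tbl_mail_archive.1, ...), each holding at most a fixed number of blocks. A small self-contained sketch of that arithmetic, assuming the stock 8K block size and the 1GB-per-segment limit discussed in this thread (the real logic lives in the md storage manager, and the blocks-per-segment constant is RELSEG_SIZE):

    #include <stdio.h>

    #define BLCKSZ      8192                    /* stock PostgreSQL block size */
    #define SEG_BYTES   (1024 * 1024 * 1024)    /* 1GB per segment after the patch */
    #define RELSEG_SIZE (SEG_BYTES / BLCKSZ)    /* blocks per segment file */

    int main(void)
    {
        unsigned int blocknum = 131072;         /* first block that spills into .1 */
        unsigned int segno  = blocknum / RELSEG_SIZE;
        unsigned int segoff = (blocknum % RELSEG_SIZE) * BLCKSZ;

        if (segno == 0)
            printf("block %u -> tbl_mail_archive, byte offset %u\n",
                   blocknum, segoff);
        else
            printf("block %u -> tbl_mail_archive.%u, byte offset %u\n",
                   blocknum, segno, segoff);
        return 0;
    }

Capping segments at 1GB rather than exactly 2GB leaves headroom below the signed 32-bit file-offset limit that trips up some operating systems.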
[ { "msg_contents": "Very cool!\n\n\t-----Original Message-----\n\tFrom:\tBruce Momjian [SMTP:[email protected]]\n\tSent:\tSaturday, March 13, 1999 10:14 PM\n\tTo:\tMichael Davis\n\tCc:\[email protected]\n\tSubject:\tRe: [HACKERS] parser enhancement request for 6.5\n\n\tApplied.\n\n\n\t[Charset iso-8859-1 unsupported, filtering to ASCII...]\n\t> I have a problem with Access97 not working properly when entering\nnew\n\t> records using a sub form, i.e. entering a new order/orderlines or\nmaster and\n\t> detail tables. The problem is caused by a SQL statement that\nAccess97 makes\n\t> involving NULL. The syntax that fails is \"column_name\" = NULL.\nThe\n\t> following attachment was provided by -Jose'-. It contains a very\nsmall\n\t> enhancement to gram.y that will allow Access97 to work properly\nwith sub\n\t> forms. Can this enhancement be added to release 6.5?\n\t> \n\t> <<gram.patch>> \n\t> Thanks, Michael\n\t> \n\n\t[Attachment, skipping...]\n\n\n\t-- \n\t Bruce Momjian | http://www.op.net/~candle\n\t [email protected] | (610) 853-3000\n\t + If your life is a hard drive, | 830 Blythe Avenue\n\t + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Sat, 13 Mar 1999 23:24:37 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] parser enhancement request for 6.5" } ]
[ { "msg_contents": "Hi,\n\n I didn't think that patching libpq was the corret answer either. I just\ndidn't want to go messing around with the backend :-)\n\nJerry\n\n-----Original Message-----\nFrom: Tom Lane <[email protected]>\nTo: Bruce Momjian <[email protected]>\nCc: [email protected] <[email protected]>\nDate: Monday, March 15, 1999 2:34 AM\nSubject: Re: [HACKERS] libpq and SPI\n\n\n>Uh, I didn't actually believe that that patch was a good idea. Hacking\n>libpq to survive a protocol violation committed by the backend is *not*\n>a solution; the correct answer is to fix the backend. Otherwise we will\n>have to discover similar workarounds for other clients that do not\n>use libpq (ODBC, for example).\n>\n>Please reverse out that patch until someone can find some time to look\n>at the issue. (I will, if no one else does, but it would probably be\n>more efficient for someone who already knows something about SPI to\n>fix it...)\n>\n> regards, tom lane\n\n", "msg_date": "Mon, 15 Mar 1999 07:52:03 +0900", "msg_from": "\"Gerald L. Gay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] libpq and SPI " } ]
[ { "msg_contents": "I notice that some of the people committing configure fixes are using\nautoconf 2.13 while some are still on 2.12. This is a Bad Thing ---\nit's not only generating huge diffs at each commit, but we don't know\nwhich script version we've got day to day.\n\nWe need to standardize what version is being used. 2.13 is probably\nthe right choice, unless anyone knows of serious bugs in it. (I'm\nstill on 2.12 myself but am willing to upgrade.)\n\nAn alternative possibility is to stop keeping configure in the CVS\nrepository, but that would mean expecting everyone who uses the CVS\nsources to have autoconf installed ... I suspect that's a bad idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Mar 1999 20:28:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Autoconf versions" }, { "msg_contents": "On Sun, 14 Mar 1999, Tom Lane wrote:\n\n> I notice that some of the people committing configure fixes are using\n> autoconf 2.13 while some are still on 2.12. This is a Bad Thing ---\n> it's not only generating huge diffs at each commit, but we don't know\n> which script version we've got day to day.\n> \n> We need to standardize what version is being used. 2.13 is probably\n> the right choice, unless anyone knows of serious bugs in it. (I'm\n> still on 2.12 myself but am willing to upgrade.)\n> \n> An alternative possibility is to stop keeping configure in the CVS\n> repository, but that would mean expecting everyone who uses the CVS\n> sources to have autoconf installed ... I suspect that's a bad idea.\n\nWell, you've totally lost me here, on what exactly the problem\nis...especially with you last statement. If there is a problem with\nvarious users using 2.13 vs 2.12, how is that fixed by removing configure\nfrom CVS and relying on ppl having autoconf installed?\n\nWhat sort of problems are you noticing? I'm running 2.13 at home and 2.12\non hub, so I interchangeably commit depending on the machine I'm on\n*shrug*\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 15 Mar 1999 00:24:46 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Autoconf versions" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Well, you've totally lost me here, on what exactly the problem\n> is...especially with you last statement. If there is a problem with\n> various users using 2.13 vs 2.12, how is that fixed by removing configure\n> from CVS and relying on ppl having autoconf installed?\n\nWell, it wouldn't do much to help in debugging configure failures,\ntrue. (But at least we'd be able to ask \"what autoconf version have you\ngot?\" and expect a useful answer --- right now, if someone reports a\nconfigure failure and doesn't say exactly when he last updated, we\nmight have a dickens of a time figuring out whether he had a 2.12 or\n2.13 script. If he does another update, the evidence would be gone.)\n\nMostly I just want to cut down the overhead of massive diffs in the\nconfigure script and ensure that we know which version of autoconf\nwill be in the release.\n\n> What sort of problems are you noticing?\n\nI have not observed any problems --- yet. But considering the length\nof time between 2.12 and 2.13, I assume there are some significant\ndifferences in their behavior ;-). 
We should make sure we have the\nright version in place for our 6.5 release.\n\n> I'm running 2.13 at home and 2.12 on hub, so I interchangeably commit\n> depending on the machine I'm on\n\nI've been using autoconf for a long time, and I've never yet seen two\nreleases that could safely be treated as interchangeable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Mar 1999 00:21:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Autoconf versions " }, { "msg_contents": "On Mon, 15 Mar 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Well, you've totally lost me here, on what exactly the problem\n> > is...especially with you last statement. If there is a problem with\n> > various users using 2.13 vs 2.12, how is that fixed by removing configure\n> > from CVS and relying on ppl having autoconf installed?\n> \n> Well, it wouldn't do much to help in debugging configure failures,\n> true. (But at least we'd be able to ask \"what autoconf version have you\n> got?\" and expect a useful answer --- right now, if someone reports a\n> configure failure and doesn't say exactly when he last updated, we\n> might have a dickens of a time figuring out whether he had a 2.12 or\n> 2.13 script. If he does another update, the evidence would be gone.)\n> \n> Mostly I just want to cut down the overhead of massive diffs in the\n> configure script and ensure that we know which version of autoconf\n> will be in the release.\n> \n> > What sort of problems are you noticing?\n> \n> I have not observed any problems --- yet. But considering the length\n> of time between 2.12 and 2.13, I assume there are some significant\n> differences in their behavior ;-). We should make sure we have the\n> right version in place for our 6.5 release.\n> \n> > I'm running 2.13 at home and 2.12 on hub, so I interchangeably commit\n> > depending on the machine I'm on\n> \n> I've been using autoconf for a long time, and I've never yet seen two\n> releases that could safely be treated as interchangeable.\n\nWell, I've been using autoconf since...since we moved everything over to\nit, what, two years ago? I have yet to see a problem using one version\nover the next. If you can show a problem, please feel free to point it\nout, but until we can do that, requiring 2.12 or 2.13 explicitly, IMHO, is\nridiculous. flex 2.54+ made sense, because of an acknowledged\nproblem...autoconf versions, though, there are no acknowledged problems\nbetween each that I'm aware of...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 15 Mar 1999 03:01:20 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Autoconf versions " } ]
[ { "msg_contents": "secret <[email protected]> writes:\n>>>> PostgreSQL is also crashing 1-2 times a day on me, although I have a\n>>>> handy perl script to keep it alive now <grin>...\n\n> basically the server randomly dies with a:\n> ERROR: postmaster: StreamConnection: accept: Invalid argument\n> pmdie 3\n> (then signals all children to drop dead)\n\nHmm. That shouldn't happen, especially not randomly; if the accept\nworks the first time then it should work forever after, since the\narguments being passed in never change.\n\nThe error is coming from StreamConnection() in\npgsql/src/backend/libpq/pqcomm.c. Could you maybe add some debugging\ncode to the routine to see what the server_fd and port arguments are\nwhen accept() fails? I think just changing the first elog() to\n\nelog(ERROR,\n \"postmaster: StreamConnection: accept: %m\\nserver_fd = %d, port = %p\",\n server_fd, port);\n\nwould do for starters. This would let us eliminate the possibility that\nthe routine is getting passed bad arguments.\n\nAn alternative possibility is to run the postmaster under truss so you\ncan see what arguments are passed to the kernel on every kernel call,\nbut that'd generate a pretty verbose logfile.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Mar 1999 21:35:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postmaster dies (was Re: Very disappointing performance)" }, { "msg_contents": "Tom Lane wrote:\n\n> secret <[email protected]> writes:\n> >>>> PostgreSQL is also crashing 1-2 times a day on me, although I have a\n> >>>> handy perl script to keep it alive now <grin>...\n>\n> > basically the server randomly dies with a:\n> > ERROR: postmaster: StreamConnection: accept: Invalid argument\n> > pmdie 3\n> > (then signals all children to drop dead)\n>\n> Hmm. That shouldn't happen, especially not randomly; if the accept\n> works the first time then it should work forever after, since the\n> arguments being passed in never change.\n>\n> The error is coming from StreamConnection() in\n> pgsql/src/backend/libpq/pqcomm.c. Could you maybe add some debugging\n> code to the routine to see what the server_fd and port arguments are\n> when accept() fails? I think just changing the first elog() to\n>\n> elog(ERROR,\n> \"postmaster: StreamConnection: accept: %m\\nserver_fd = %d, port = %p\",\n> server_fd, port);\n>\n> would do for starters. This would let us eliminate the possibility that\n> the routine is getting passed bad arguments.\n>\n> An alternative possibility is to run the postmaster under truss so you\n> can see what arguments are passed to the kernel on every kernel call,\n> but that'd generate a pretty verbose logfile.\n>\n> regards, tom lane\n\n Done. I'll install the new binaries at the end of the day when no one is\nusing the database and give you a copy of the logs when it dies again. 
Thank\nyou for the help on this, it's very much appreciated.\n\nDavid Secret\nMIS Director\nKearney Development Co., Inc.\n\n\n", "msg_date": "Mon, 15 Mar 1999 13:51:54 -0500", "msg_from": "secret <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster dies (was Re: Very disappointing performance)" }, { "msg_contents": "Tom Lane wrote:\n\n> secret <[email protected]> writes:\n> >>>> PostgreSQL is also crashing 1-2 times a day on me, although I have a\n> >>>> handy perl script to keep it alive now <grin>...\n>\n> > basically the server randomly dies with a:\n> > ERROR: postmaster: StreamConnection: accept: Invalid argument\n> > pmdie 3\n> > (then signals all children to drop dead)\n>\n> Hmm. That shouldn't happen, especially not randomly; if the accept\n> works the first time then it should work forever after, since the\n> arguments being passed in never change.\n>\n> The error is coming from StreamConnection() in\n> pgsql/src/backend/libpq/pqcomm.c. Could you maybe add some debugging\n> code to the routine to see what the server_fd and port arguments are\n> when accept() fails? I think just changing the first elog() to\n>\n> elog(ERROR,\n> \"postmaster: StreamConnection: accept: %m\\nserver_fd = %d, port = %p\",\n> server_fd, port);\n>\n> would do for starters. This would let us eliminate the possibility that\n> the routine is getting passed bad arguments.\n>\n> An alternative possibility is to run the postmaster under truss so you\n> can see what arguments are passed to the kernel on every kernel call,\n> but that'd generate a pretty verbose logfile.\n>\n> regards, tom lane\n\nquery: SELECT \"material_id\" ,\"name\" ,\"short_name\" ,\"legacy\" FROM \"material\"\nORDE\nR BY \"legacy\" DESC,\"name\"\nProcessQuery\n! system usage stats:\n! 0.017961 elapsed 0.020000 user 0.000000 system sec\n! [0.050000 user 0.020000 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 6/24 [127/201] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 0/0 [0/0] voluntary/involuntary context switches\n! postgres usage stats:\n! Shared blocks: 0 read, 0 written, buffer hit rate =\n10\n0.00%\n! Local blocks: 0 read, 0 written, buffer hit rate =\n0.\n00%\n! Direct blocks: 0 read, 0 written\nCommitTransactionCommand\nERROR: postmaster: StreamConnection: accept: Invalid argument\nserver_fd = 3, port = 0x816aa70\npmdie 3\nSignalChildren: sending signal 15 to process 16943\nSignalChildren: sending signal 15 to process 16942\nSignalChildren: sending signal 15 to process 16941\n\n There we go, it crashed this morning...(interestingly it went all of\nyesterday without crashing)... Does this shed some light? If not what would\nyou like me to do next? I have 700M+ to keep a log file, as long as it doesn't\ngenerate that much in a day we should be okay with a very verbose log.\n\n Just tell me what code mods or runtime options to use...\n\nDavid Secret\nMIS Director\nKearney Development Co., Inc.\n\n\n", "msg_date": "Tue, 16 Mar 1999 09:05:02 -0500", "msg_from": "secret <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster dies (was Re: Very disappointing performance)" }, { "msg_contents": "secret <[email protected]> writes:\n> ERROR: postmaster: StreamConnection: accept: Invalid argument\n> server_fd = 3, port = 0x816aa70\n\n> There we go, it crashed this morning...(interestingly it went all of\n> yesterday without crashing)... Does this shed some light?\n\nNot much ... 
it shows pretty much what we expected, ie, nothing\nobviously wrong.\n\nWhat I would suggest doing next is running the postmaster under 'truss'\nor some similar utility that can generate a logfile of all the kernel\ncalls made by the postmaster. I can't give you any details on how to do\nthat --- perhaps some other reader can help? What we're looking for is\nanything that might have changed the state of file descriptor 3 shortly\nbefore the crash.\n\nBTW, some tips on debugging this. Maybe these are obvious, maybe not:\n\n1. This accept call is not associated with normal query processing, but\nwith receiving connection requests from new clients. Almost certainly\nthe bug is not triggered by processing queries but by connection\nattempts. You probably could make the crash happen sooner by starting\nand stopping clients in a steady stream (not that you want a crash\nsooner on your real system, of course, but for debugging it'd be nice\nnot to have to wait for long).\n\n2. You might want to build a playpen system that you can stress into\ncrashing without taking out your live server. The easiest way to do\nthat is just to duplicate your installation on another machine, but if\nno other machine is handy (or if you suspect a platform-dependent bug,\nwhich I do here) the best bet is to build a debugging version of\nPostgres that has nonstandard values for the installation directory\nand server's port address. For example I usually build trial versions\nwith\n\n./configure --with-pgport=5440 --prefix=/users/postgres/testversion\n\n(plus any options you normally use, of course). I think it might also\nbe possible to set these values while running initdb and starting the\ntest postmaster, without having to recompile; but I don't know the\nexact incantations to use to do it that way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Mar 1999 10:48:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postmaster dies (was Re: Very disappointing performance) " }, { "msg_contents": "Tom Lane wrote:\n\n> secret <[email protected]> writes:\n> > ERROR: postmaster: StreamConnection: accept: Invalid argument\n> > server_fd = 3, port = 0x816aa70\n>\n> > There we go, it crashed this morning...(interestingly it went all of\n> > yesterday without crashing)... Does this shed some light?\n>\n> Not much ... it shows pretty much what we expected, ie, nothing\n> obviously wrong.\n>\n> What I would suggest doing next is running the postmaster under 'truss'\n> or some similar utility that can generate a logfile of all the kernel\n> calls made by the postmaster. I can't give you any details on how to do\n> that --- perhaps some other reader can help? What we're looking for is\n> anything that might have changed the state of file descriptor 3 shortly\n> before the crash.\n>\n> BTW, some tips on debugging this. Maybe these are obvious, maybe not:\n>\n> 1. This accept call is not associated with normal query processing, but\n> with receiving connection requests from new clients. Almost certainly\n> the bug is not triggered by processing queries but by connection\n> attempts. You probably could make the crash happen sooner by starting\n> and stopping clients in a steady stream (not that you want a crash\n> sooner on your real system, of course, but for debugging it'd be nice\n> not to have to wait for long).\n>\n> 2. You might want to build a playpen system that you can stress into\n> crashing without taking out your live server. 
The easiest way to do\n> that is just to duplicate your installation on another machine, but if\n> no other machine is handy (or if you suspect a platform-dependent bug,\n> which I do here) the best bet is to build a debugging version of\n> Postgres that has nonstandard values for the installation directory\n> and server's port address. For example I usually build trial versions\n> with\n>\n> ./configure --with-pgport=5440 --prefix=/users/postgres/testversion\n>\n> (plus any options you normally use, of course). I think it might also\n> be possible to set these values while running initdb and starting the\n> test postmaster, without having to recompile; but I don't know the\n> exact incantations to use to do it that way.\n>\n> regards, tom lane\n\n Would strace work instead of truss? I have strace... Will you be able to\ninterpret the strace files & determine the problem do you think?\n\n You've been the only one to respond on this, so I'm a tad worried about\nbeing left out in the cold on this one... I'd be glad to pay for support if\nthere is a place I can do that, heck I pay for support on other software\nproducts, why not PostgreSQL?\n\n Please let me know. I'll begin an strace tonight...\n\nDavid\n\n\n", "msg_date": "Tue, 16 Mar 1999 11:36:39 -0500", "msg_from": "secret <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster dies (was Re: Very disappointing performance)" }, { "msg_contents": "> Would strace work instead of truss? I have strace... Will you be able to\n> interpret the strace files & determine the problem do you think?\n> \n> You've been the only one to respond on this, so I'm a tad worried about\n> being left out in the cold on this one... I'd be glad to pay for support if\n> there is a place I can do that, heck I pay for support on other software\n> products, why not PostgreSQL?\n> \n> Please let me know. I'll begin an strace tonight...\n\nI can't imagine he has enough disk space for truss/ktrace output for a\nfull day of backend activity, does he?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Mar 1999 13:50:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: postmaster dies (was Re: Very disappointing\n\tperformance)" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > Would strace work instead of truss? I have strace... Will you be able to\n> > interpret the strace files & determine the problem do you think?\n> >\n> > You've been the only one to respond on this, so I'm a tad worried about\n> > being left out in the cold on this one... I'd be glad to pay for support if\n> > there is a place I can do that, heck I pay for support on other software\n> > products, why not PostgreSQL?\n> >\n> > Please let me know. I'll begin an strace tonight...\n>\n> I can't imagine he has enough disk space for truss/ktrace output for a\n> full day of backend activity, does he?\n>\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n Ur, I'll postpone this to Thursday, when I can monitor the disk space very\ncarefully, how much space are we talking about here? 1G? 2G? 3G? 10G?\n\n Maybe I can temporarily install a hard disk just for that purpose.... 
There\nare only a few users on the database, it really isn't *THAT* active.\n\n--David\n\n\n", "msg_date": "Tue, 16 Mar 1999 17:20:49 -0500", "msg_from": "secret <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: postmaster dies (was Re: Very disappointing\n\tperformance)" }, { "msg_contents": "> \n> Ur, I'll postpone this to Thursday, when I can monitor the disk space very\n> carefully, how much space are we talking about here? 1G? 2G? 3G? 10G?\n> \n> Maybe I can temporarily install a hard disk just for that purpose.... There\n> are only a few users on the database, it really isn't *THAT* active.\n\nHard to say. I would turn it on for 15 minutes and see. ktrace can\ngenerate a 1MB files in a minute.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Mar 1999 17:33:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: postmaster dies (was Re: Very disappointing\n\tperformance)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I can't imagine he has enough disk space for truss/ktrace output for a\n> full day of backend activity, does he?\n\nThat's why I was encouraging him to set up a playpen and actively\nwork at crashing it, rather than waiting around to see whether it'd\nhappen before his disk fills up ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Mar 1999 19:32:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: postmaster dies (was Re: Very disappointing\n\tperformance)" }, { "msg_contents": "Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > I can't imagine he has enough disk space for truss/ktrace output for a\n> > full day of backend activity, does he?\n>\n> That's why I was encouraging him to set up a playpen and actively\n> work at crashing it, rather than waiting around to see whether it'd\n> happen before his disk fills up ;-)\n>\n> regards, tom lane\n\n I've built a simple program to record the last N lines(currently\n5000...Suggestions?) of input... What I'd like to do is pipe STDIN and\nSTDERR to this program, but \"|\" doesn't do this, do you all have a\nsuggestion on how to do this? If I can then I can get you the system trace\nand hopefully get this crash bug fixed.\n\n\n\n", "msg_date": "Tue, 23 Mar 1999 11:35:39 -0500", "msg_from": "secret <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: postmaster dies (was Re: Very disappointing\n\tperformance)" }, { "msg_contents": "On Tue, 23 Mar 1999, secret wrote:\n> I've built a simple program to record the last N lines(currently\n>5000...Suggestions?) of input... What I'd like to do is pipe STDIN and\n>STDERR to this program, but \"|\" doesn't do this, do you all have a\n>suggestion on how to do this? If I can then I can get you the system trace\n>and hopefully get this crash bug fixed.\n\nstrace ... 2>&1 | tail -5000\n\nNote that tail is a standard *nix program.\n\nTaral\n", "msg_date": "Tue, 23 Mar 1999 10:41:39 -0600", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: postmaster dies (was Re: Very disappointing\n\tperformance)" } ]
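Until the strace run happens, one low-tech instrument for the accept() failure in this thread is a wrapper that retries EINTR and, on a hard failure, probes what kind of descriptor the listen fd still is -- Tom's hypothesis being that fd 3 gets closed or replaced under the postmaster. A hedged, self-contained sketch (this is not the pqcomm.c code; the demo main deliberately closes the socket to show the diagnostic output):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Accept with a retry on EINTR and a diagnostic dump on other errors. */
    static int accept_verbose(int server_fd)
    {
        struct sockaddr_in sa;
        socklen_t len;
        int fd;

        for (;;)
        {
            len = sizeof(sa);
            fd = accept(server_fd, (struct sockaddr *) &sa, &len);
            if (fd >= 0)
                return fd;
            if (errno == EINTR)
                continue;                       /* interrupted by a signal; retry */
            /* Dump what we can about the listen socket before giving up. */
            fprintf(stderr, "accept(%d) failed: %s\n", server_fd, strerror(errno));
            if (fcntl(server_fd, F_GETFL) < 0)
                fprintf(stderr, "fd %d is no longer a valid descriptor\n", server_fd);
            else
            {
                int type = 0;
                socklen_t tlen = sizeof(type);
                if (getsockopt(server_fd, SOL_SOCKET, SO_TYPE, &type, &tlen) < 0)
                    fprintf(stderr, "fd %d is open but not a socket\n", server_fd);
                else
                    fprintf(stderr, "fd %d is a socket of type %d\n", server_fd, type);
            }
            return -1;
        }
    }

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa;

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;                /* any address, any port */
        bind(s, (struct sockaddr *) &sa, sizeof(sa));
        listen(s, 5);
        close(s);                               /* simulate the fd going bad */
        accept_verbose(s);                      /* should report an invalid fd */
        return 0;
    }

Dropping a check like this into the accept path would distinguish "fd closed", "fd reused for a non-socket", and "socket arguments wrong" without the disk cost of a full system-call trace.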
[ { "msg_contents": "Tom Lane <[email protected]> writes:\n>An alternative possibility is to stop keeping configure in the CVS\n>repository, but that would mean expecting everyone who uses the CVS\n>sources to have autoconf installed ... I suspect that's a bad idea.\n\nI don't see why that's a bad idea. That's exactly what the Gnome project\ndoes, and it works for them. I would submit that anyone who can't be bothered\nto take 10 minutes to install autoconf has no business mucking around with\na development tree anyway.\n\n\t-Michael Robinson\n\n", "msg_date": "Mon, 15 Mar 1999 12:26:45 +0800 (CST)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Autoconf versions" } ]
[ { "msg_contents": "I reposted the patch from home yesterday, as bruce pointed it out in\nanother thread.\n\nPeter\n\n--\nPeter T Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as the\nofficial words of Maidstone Borough Council\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Sunday, March 14, 1999 5:52 PM\nTo: [email protected]\nSubject: Re: [HACKERS] Problems with >2GB tables on Linux 2.0 \n\n\nSay guys,\n\nI just noticed that RELSEG_SIZE still hasn't been reduced per the\ndiscussion from early February. Let's make sure that doesn't slip\nthrough the cracks, OK?\n\nI think Peter Mount was supposed to be off testing this issue.\nPeter, did you learn anything further?\n\nWe should probably apply the patch to REL6_4 as well...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Mar 1999 09:03:22 -0000", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Problems with >2GB tables on Linux 2.0 " }, { "msg_contents": "Just a question. Does your patch let vacuum handle segmented tables?\n--\nTatsuo Ishii\n\n>I reposted the patch from home yesterday, as bruce pointed it out in\n>another thread.\n>\n>Peter\n>\n>--\n>Peter T Mount, IT Section\n>[email protected]\n>Anything I write here are my own views, and cannot be taken as the\n>official words of Maidstone Borough Council\n>\n>-----Original Message-----\n>From: Tom Lane [mailto:[email protected]]\n>Sent: Sunday, March 14, 1999 5:52 PM\n>To: [email protected]\n>Subject: Re: [HACKERS] Problems with >2GB tables on Linux 2.0 \n>\n>\n>Say guys,\n>\n>I just noticed that RELSEG_SIZE still hasn't been reduced per the\n>discussion from early February. Let's make sure that doesn't slip\n>through the cracks, OK?\n>\n>I think Peter Mount was supposed to be off testing this issue.\n>Peter, did you learn anything further?\n>\n>We should probably apply the patch to REL6_4 as well...\n>\n>\t\t\tregards, tom lane\n>\n\n", "msg_date": "Tue, 16 Mar 1999 10:40:59 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0 " } ]
[ { "msg_contents": "You can always find me on IRC (ircnet) most Sunday evenings after 2000UT\non #astronomy.\n\n--\nPeter T Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as the\nofficial words of Maidstone Borough Council\n\n-----Original Message-----\nFrom: Clark Evans [mailto:[email protected]]\nSent: Sunday, March 14, 1999 9:13 PM\nCc: [email protected]\nSubject: [HACKERS] ICQ?\n\n\nWhich hackers are on ICQ?\nThanks!\nClark\n", "msg_date": "Mon, 15 Mar 1999 09:04:33 -0000", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] ICQ?" }, { "msg_contents": "Thus spake Peter Mount\n> You can always find me on IRC (ircnet) most Sunday evenings after 2000UT\n> on #astronomy.\n\nIf your client can handle being in multiple channels, why not drop one\ninto #PostgreSQL? Marc and I are generally there all alone and I am\nafraid people will start talking. :-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Mon, 15 Mar 1999 08:05:55 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ICQ?" } ]
[ { "msg_contents": "Hello hackers...\n\nI've spent the last couple of evening tracing through the drop table/sequence \ncode trying to figure out the best to drop the sequence when the table is \ndropped.\n\nHere is what I am proposing to do. I just wanted to throw out my idea and get \nsome feedback since I am just beginning to understand how the backend works.\n\nTake the following example:\nCREATE TABLE foo (i SERIAL, t text);\n\nThis creates table foo, index foo_i_key, and the sequence foo_i_seq.\n\nThe sequence ocuppies three of the system tables: pg_class, pg_attribute, and \npg_attrdef. When the table gets dropped, the table foo and foo_i_key are \nremoved. The default portion of the sequence is also removed from the \npg_attrdef system table, because the attrelid matches the table's oid. \n\nI believe this is incorrect ... I think the attrelid should match the seqences \noid instead of the table's oid to prevent the following error:\n\nryan=> CREATE TABLE foo (i SERIAL, t text);\nNOTICE: CREATE TABLE will create implicit sequence foo_i_seq for SERIAL column \nfoo.i\nNOTICE: CREATE TABLE/UNIQUE will create implicit index foo_i_key for table foo\nCREATE\n\nryan=> \\d\n\nDatabase = ryan\n +------------------+----------------------------------+----------+\n | Owner | Relation | Type |\n +------------------+----------------------------------+----------+\n | rbrad | foo | table |\n | rbrad | foo_i_key | index |\n | rbrad | foo_i_seq | sequence |\n +------------------+----------------------------------+----------+\n\nryan=> \\d foo;\n\nTable = foo\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| i | int4 not null default nextval('f | 4 |\n| t | text | var |\n+----------------------------------+----------------------------------+-------+\nIndex: foo_i_key\n\nryan=> drop sequence foo_i_seq;\nDROP\n\nryan=> \\d\n\nDatabase = ryan\n +------------------+----------------------------------+----------+\n | Owner | Relation | Type |\n +------------------+----------------------------------+----------+\n | rbrad | foo | table |\n | rbrad | foo_i_key | index |\n +------------------+----------------------------------+----------+\nryan=> \\d foo;\n\nTable = foo\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| i | int4 not null default nextval('f | 4 |\n| t | text | var |\n+----------------------------------+----------------------------------+-------+\nIndex: foo_i_key\n\nryan=> insert into foo (t) values ('blah');\nERROR: foo_i_seq.nextval: sequence does not exist\n\nryan=>\n\nThis looks pretty easy to fix.\n\nBack to my origional point .. I think we need another system table to map the \nsequence oid to the table's oid. I've noticed this done with the inheritance, \nindexes, etc ... but I don't see a pg_sequence table.\n\nI would be glad to try and finish this in the next couple of evenings if this \nlooks like the correct approach to the problem, otherwise could someone point me \nin the right direction :)\n\nThanks,\n-Ryan\n", "msg_date": "Mon, 15 Mar 1999 03:12:20 -0700 (MST)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Sequences...." 
}, { "msg_contents": "\nCan I ask where we are with this.\n\n> Hello hackers...\n> \n> I've spent the last couple of evening tracing through the drop table/sequence \n> code trying to figure out the best to drop the sequence when the table is \n> dropped.\n> \n> Here is what I am proposing to do. I just wanted to throw out my idea and get \n> some feedback since I am just beginning to understand how the backend works.\n> \n> Take the following example:\n> CREATE TABLE foo (i SERIAL, t text);\n> \n> This creates table foo, index foo_i_key, and the sequence foo_i_seq.\n> \n> The sequence ocuppies three of the system tables: pg_class, pg_attribute, and \n> pg_attrdef. When the table gets dropped, the table foo and foo_i_key are \n> removed. The default portion of the sequence is also removed from the \n> pg_attrdef system table, because the attrelid matches the table's oid. \n> \n> I believe this is incorrect ... I think the attrelid should match the seqences \n> oid instead of the table's oid to prevent the following error:\n> \n> ryan=> CREATE TABLE foo (i SERIAL, t text);\n> NOTICE: CREATE TABLE will create implicit sequence foo_i_seq for SERIAL column \n> foo.i\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index foo_i_key for table foo\n> CREATE\n> \n> ryan=> \\d\n> \n> Database = ryan\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | rbrad | foo | table |\n> | rbrad | foo_i_key | index |\n> | rbrad | foo_i_seq | sequence |\n> +------------------+----------------------------------+----------+\n> \n> ryan=> \\d foo;\n> \n> Table = foo\n> +----------------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +----------------------------------+----------------------------------+-------+\n> | i | int4 not null default nextval('f | 4 |\n> | t | text | var |\n> +----------------------------------+----------------------------------+-------+\n> Index: foo_i_key\n> \n> ryan=> drop sequence foo_i_seq;\n> DROP\n> \n> ryan=> \\d\n> \n> Database = ryan\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | rbrad | foo | table |\n> | rbrad | foo_i_key | index |\n> +------------------+----------------------------------+----------+\n> ryan=> \\d foo;\n> \n> Table = foo\n> +----------------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +----------------------------------+----------------------------------+-------+\n> | i | int4 not null default nextval('f | 4 |\n> | t | text | var |\n> +----------------------------------+----------------------------------+-------+\n> Index: foo_i_key\n> \n> ryan=> insert into foo (t) values ('blah');\n> ERROR: foo_i_seq.nextval: sequence does not exist\n> \n> ryan=>\n> \n> This looks pretty easy to fix.\n> \n> Back to my origional point .. I think we need another system table to map the \n> sequence oid to the table's oid. I've noticed this done with the inheritance, \n> indexes, etc ... 
but I don't see a pg_sequence table.\n> \n> I would be glad to try and finish this in the next couple of evenings if this \n> looks like the correct approach to the problem, otherwise could someone point me \n> in the right direction :)\n> \n> Thanks,\n> -Ryan\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 May 1999 20:42:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences...." } ]
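The mapping catalog Ryan proposes does not exist yet; a rough sketch makes the idea concrete. Everything below is hypothetical: the pg_sequence name and both columns are invented for illustration only.

```sql
-- Hypothetical catalog tying an implicit SERIAL sequence to its table.
CREATE TABLE pg_sequence (
    seqrelid oid,   -- oid of the sequence relation, e.g. foo_i_seq
    deprelid oid    -- oid of the table whose SERIAL column created it
);

-- DROP TABLE could then locate (and drop) any owned sequences with:
SELECT seqrelid
FROM pg_sequence
WHERE deprelid = (SELECT oid FROM pg_class WHERE relname = 'foo');
```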
[ { "msg_contents": "\nTom Lane writes...\n>secret <[email protected]> writes:\n>>>>> PostgreSQL is also crashing 1-2 times a day on me, although I have a\n>>>>> handy perl script to keep it alive now <grin>...\n>\n>> basically the server randomly dies with a:\n>> ERROR: postmaster: StreamConnection: accept: Invalid argument\n>> pmdie 3\n>> (then signals all children to drop dead)\n>\n>Hmm. That shouldn't happen, especially not randomly; if the accept\n>works the first time then it should work forever after, since the\n>arguments being passed in never change.\n>\n>[snip]\n>\n>An alternative possibility is to run the postmaster under truss so you\n>can see what arguments are passed to the kernel on every kernel call,\n>but that'd generate a pretty verbose logfile.\n>\n\nFWIW...\n\nIf your (secret's) system uses strace, you can tell it to filter\njust specific calls or groups of calls. For example, \n\n strace -f -s 256 -e trace=network -o /tmp/strace.log -p <postmaster pid>\n\nShould trace all the network operations of the postmaster and\nall the children. I'm not sure if the socket reads/writes will\nbe included or not. The -s sets the 'snapshot' length. \n\nTruss probably has similar options that can be enabled in some\nbaroque manner.\n\n-- cary\n\n\n\n", "msg_date": "Mon, 15 Mar 1999 08:09:01 -0500 (EST)", "msg_from": "\"Cary O'Brien\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postmaster dies (was Re: Very disappointing performance)" } ]
[ { "msg_contents": "\nDear Hackers!\n\nDomain to make mirror of postgresql site \nin St.Petersburg (RUSSIA) allocated\n\n(postgresql.wplus.net)\n\n1. Is there any mirroring policy\n2. What size postgresql site have \n3. What url is best to start mirroring\n\n\n\n-- \nDmitry Samersoff\n DM\\S, [email protected], ICQ: 3161705 \n http://devnull.wplus.net\n\n", "msg_date": "Mon, 15 Mar 1999 16:36:55 +0300 (MSK)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": true, "msg_subject": "new mirror" }, { "msg_contents": "> \n> Dear Hackers!\n> \n> Domain to make mirror of postgresql site \n> in St.Petersburg (RUSSIA) allocated\n> \n> (postgresql.wplus.net)\n> \n> 1. Is there any mirroring policy\n> 2. What size postgresql site have \n> 3. What url is best to start mirroring\n> \n\nSee helping us/mirrors at www.postgresql.org for info.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 10:31:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] new mirror" } ]
[ { "msg_contents": "I have been in discussion with Vadim, and expect he will have his MVCC\nvacuum changes done within the next week or two.\n\nSo, we should prepare for beta starting soon. I will make the 6.5\nCHANGES list. If people are sitting on patches/changes, please try and\ncomplete them in the next week.\n\nThanks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 09:14:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "6.5 beta soon" } ]
[ { "msg_contents": "I'll have to ;-)\n\n--\nPeter T Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as the\nofficial words of Maidstone Borough Council\n\n-----Original Message-----\nFrom: D'Arcy\" \"J.M.\" Cain [mailto:[email protected]]\nSent: Monday, March 15, 1999 1:06 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: [HACKERS] ICQ?\n\n\nThus spake Peter Mount\n> You can always find me on IRC (ircnet) most Sunday evenings after\n2000UT\n> on #astronomy.\n\nIf your client can handle being in multiple channels, why not drop one\ninto #PostgreSQL? Marc and I are generally there all alone and I am\nafraid people will start talking. :-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Mon, 15 Mar 1999 14:32:12 -0000", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] ICQ?" } ]
[ { "msg_contents": "Here is a project that indexes source code into HTML:\n\n\thttp://lxr.linux.no/blurb.html\n\nAn example of the FreeBSD kernel source code is at:\n\n\thttp://lxr.linux.no/freebsd/source\n\nMay be interesting for us.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 12:23:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "HTML index of source code" }, { "msg_contents": "\nInstalling it now...having a slight config problem, but the URL will be:\n\n\thttp://www.postgresql.org/xref\n\nRight now, there is stuff there, just doesn't do anything...\n\nEverything is pure perl scripts, so this should work pretty native on the\nmirror sites *cross figners*\n\n\n\nOn Mon, 15 Mar 1999, Bruce Momjian wrote:\n\n> Here is a project that indexes source code into HTML:\n> \n> \thttp://lxr.linux.no/blurb.html\n> \n> An example of the FreeBSD kernel source code is at:\n> \n> \thttp://lxr.linux.no/freebsd/source\n> \n> May be interesting for us.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 15 Mar 1999 16:14:13 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] HTML index of source code" }, { "msg_contents": "> \n> Installing it now...having a slight config problem, but the URL will be:\n> \n> \thttp://www.postgresql.org/xref\n> \n> Right now, there is stuff there, just doesn't do anything...\n> \n> Everything is pure perl scripts, so this should work pretty native on the\n> mirror sites *cross figners*\n> \n\nMan, that was fast. You must have liked it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 15:24:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] HTML index of source code" }, { "msg_contents": "On Mon, 15 Mar 1999, Bruce Momjian wrote:\n\n> > \n> > Installing it now...having a slight config problem, but the URL will be:\n> > \n> > \thttp://www.postgresql.org/xref\n> > \n> > Right now, there is stuff there, just doesn't do anything...\n> > \n> > Everything is pure perl scripts, so this should work pretty native on the\n> > mirror sites *cross figners*\n> > \n> \n> Man, that was fast. You must have liked it.\n\nNah, just like to keep ppl happy :)\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 15 Mar 1999 23:59:02 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] HTML index of source code" }, { "msg_contents": "> On Mon, 15 Mar 1999, Bruce Momjian wrote:\n> \n> > > \n> > > Installing it now...having a slight config problem, but the URL will be:\n> > > \n> > > \thttp://www.postgresql.org/xref\n> > > \n> > > Right now, there is stuff there, just doesn't do anything...\n> > > \n> > > Everything is pure perl scripts, so this should work pretty native on the\n> > > mirror sites *cross figners*\n> > > \n> > \n> > Man, that was fast. You must have liked it.\n> \n> Nah, just like to keep ppl happy :)\n\nDon't know if we will like it, but it looked interesting.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 23:08:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] HTML index of source code" } ]
[ { "msg_contents": "\nA friend just asked me a question...he just mentioned that someone he knew\nhad just landed a job at AOL, working with a database that was described\nas being *bigger* then Oracle, with the substring \"red\" in the name...and\nearning something like $150+K US...\n\nAnyone have any ideas?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 15 Mar 1999 15:12:08 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Odd question..." }, { "msg_contents": "On Mon, 15 Mar 1999, The Hermit Hacker wrote:\n\n> A friend just asked me a question...he just mentioned that someone he knew\n> had just landed a job at AOL, working with a database that was described\n> as being *bigger* then Oracle, with the substring \"red\" in the name...and\n> earning something like $150+K US...\n\nLast I heard, AOL was a hard-core Oracle shop. Brad Knowles has stated\npublicly that they use a big RDBMS as the back-end for their mail system\n(and having helped build a mail system for 1m customers, I believe it);\nI think that he even mentioned Oracle by name, and I know that Oracle\nbrags about them.\n\nThey might have a special build of Oracle to deal with their very high-end\nrequirements, or maybe they have something else entirely. I suspect the\nformer.\n\n--\nTodd Graham Lewis 32�49'N,83�36'W (800) 719-4664, x22804\n******Linux****** MindSpring Enterprises [email protected]\n\n\"A pint of sweat will save a gallon of blood.\" -- George S. Patton\n\n", "msg_date": "Mon, 15 Mar 1999 14:19:40 -0500 (EST)", "msg_from": "Todd Graham Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Odd question..." } ]
[ { "msg_contents": "Could be referring to Red Brick. I believe they provide a third party index\n(and other tools or utilities) that is much faster than Oracles native\nindex.\n\n\t-----Original Message-----\n\tFrom:\tThe Hermit Hacker [SMTP:[email protected]]\n\tSent:\tMonday, March 15, 1999 12:12 PM\n\tTo:\[email protected]\n\tSubject:\t[HACKERS] Odd question...\n\n\n\tA friend just asked me a question...he just mentioned that someone\nhe knew\n\thad just landed a job at AOL, working with a database that was\ndescribed\n\tas being *bigger* then Oracle, with the substring \"red\" in the\nname...and\n\tearning something like $150+K US...\n\n\tAnyone have any ideas?\n\n\tMarc G. Fournier \n\tSystems Administrator @ hub.org \n\tprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n\t\n", "msg_date": "Mon, 15 Mar 1999 13:23:00 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Odd question..." }, { "msg_contents": "\nOn 15-Mar-99 Michael Davis wrote:\n> Could be referring to Red Brick. I believe they provide a third party index\n> (and other tools or utilities) that is much faster than Oracles native\n> index.\n\nI was just about to say the same thing. Informix bought Red Brick in\nJanuary ('99).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Mon, 15 Mar 1999 14:30:33 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Odd question..." } ]
[ { "msg_contents": "Should I try to finish this before 6.5? Or wait for the next release?\n\n-Ryan\n\n> Hello hackers...\n> \n> I've spent the last couple of evening tracing through the drop table/sequence \n> code trying to figure out the best to drop the sequence when the table is \n> dropped.\n> \n> Here is what I am proposing to do. I just wanted to throw out my idea and get \n> some feedback since I am just beginning to understand how the backend works.\n> \n> Take the following example:\n> CREATE TABLE foo (i SERIAL, t text);\n> \n> This creates table foo, index foo_i_key, and the sequence foo_i_seq.\n> \n> The sequence ocuppies three of the system tables: pg_class, pg_attribute, and \n> pg_attrdef. When the table gets dropped, the table foo and foo_i_key are \n> removed. The default portion of the sequence is also removed from the \n> pg_attrdef system table, because the attrelid matches the table's oid. \n> \n> I believe this is incorrect ... I think the attrelid should match the seqences \n> oid instead of the table's oid to prevent the following error:\n> \n> ryan=> CREATE TABLE foo (i SERIAL, t text);\n> NOTICE: CREATE TABLE will create implicit sequence foo_i_seq for SERIAL \ncolumn \n> foo.i\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index foo_i_key for table \nfoo\n> CREATE\n> \n> ryan=> \\d\n> \n> Database = ryan\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | rbrad | foo | table |\n> | rbrad | foo_i_key | index |\n> | rbrad | foo_i_seq | sequence |\n> +------------------+----------------------------------+----------+\n> \n> ryan=> \\d foo;\n> \n> Table = foo\n> \n+----------------------------------+----------------------------------+-------+\n> | Field | Type | \nLength|\n> \n+----------------------------------+----------------------------------+-------+\n> | i | int4 not null default nextval('f | 4 \n|\n> | t | text | var \n|\n> \n+----------------------------------+----------------------------------+-------+\n> Index: foo_i_key\n> \n> ryan=> drop sequence foo_i_seq;\n> DROP\n> \n> ryan=> \\d\n> \n> Database = ryan\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | rbrad | foo | table |\n> | rbrad | foo_i_key | index |\n> +------------------+----------------------------------+----------+\n> ryan=> \\d foo;\n> \n> Table = foo\n> \n+----------------------------------+----------------------------------+-------+\n> | Field | Type | \nLength|\n> \n+----------------------------------+----------------------------------+-------+\n> | i | int4 not null default nextval('f | 4 \n|\n> | t | text | var \n|\n> \n+----------------------------------+----------------------------------+-------+\n> Index: foo_i_key\n> \n> ryan=> insert into foo (t) values ('blah');\n> ERROR: foo_i_seq.nextval: sequence does not exist\n> \n> ryan=>\n> \n> This looks pretty easy to fix.\n> \n> Back to my origional point .. I think we need another system table to map the \n> sequence oid to the table's oid. I've noticed this done with the inheritance, \n> indexes, etc ... 
but I don't see a pg_sequence table.\n> \n> I would be glad to try and finish this in the next couple of evenings if this \n> looks like the correct approach to the problem, otherwise could someone point \nme \n> in the right direction :)\n> \n> Thanks,\n> -Ryan\n> \n", "msg_date": "Mon, 15 Mar 1999 15:07:52 -0700 (MST)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Sequences...." }, { "msg_contents": "Ryan Bradetich wrote:\n> > Back to my origional point .. I think we need another system table to map the\n> > sequence oid to the table's oid. I've noticed this done with the inheritance,\n> > indexes, etc ... but I don't see a pg_sequence table.\n\nSounds good.\n\nAs long as a sequence can point to more than \none table/column combination.\n\nOr, I guess, you can have the relationship 1-1 for\nthe SERIAL type, but this should not prevent using\nsequences across more than one table if you don't use SERIAL.\nI often use a sequence for 3 or more tables in a system\nso that I can use 'generic' functions on the tables\nand produce reports without conficting primary keys..\n\n:) Clark\n", "msg_date": "Mon, 15 Mar 1999 22:59:10 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences...." }, { "msg_contents": "Ryan Bradetich wrote:\n> >\n> > Take the following example:\n> > CREATE TABLE foo (i SERIAL, t text);\n> >\n> > This creates table foo, index foo_i_key, and the sequence foo_i_seq.\n> >\n> > The sequence ocuppies three of the system tables: pg_class, pg_attribute, and\n> > pg_attrdef. When the table gets dropped, the table foo and foo_i_key are\n> > removed. The default portion of the sequence is also removed from the\n> > pg_attrdef system table, because the attrelid matches the table's oid.\n> >\n> > I believe this is incorrect ... I think the attrelid should match the seqences\n> > oid instead of the table's oid to prevent the following error:\n\npg_attrdef->attrelid is used to store DEFAULT definition\nfor particular attribute -> DEFAULT part of SERIAL definition \nwill not work after this...\n\n> >\n> > Back to my origional point .. I think we need another system table to map the\n> > sequence oid to the table's oid. I've noticed this done with the inheritance,\n> > indexes, etc ... but I don't see a pg_sequence table.\n\nSequences and tables are independent things - that's why\nthere was no pg_sequence table. Currently each sequence\nhas row in pg_class, i.e. sequences are special tables.\nBut I agreed that we need in new table to reflect\nSERIAL <--> sequence dependencies.\n\nVadim\n", "msg_date": "Tue, 16 Mar 1999 11:22:01 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences...." } ]
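For concreteness, the multi-table usage Clark describes looks something like the sketch below (table names invented). None of it goes through SERIAL, which is why a sequence-to-table map must not assume a 1-1 relationship:

```sql
-- One standalone sequence feeding several tables, so their keys never
-- collide and 'generic' reporting functions can span all three.
CREATE SEQUENCE master_id_seq;

CREATE TABLE customers (id int4 DEFAULT nextval('master_id_seq'), name text);
CREATE TABLE vendors   (id int4 DEFAULT nextval('master_id_seq'), name text);
CREATE TABLE carriers  (id int4 DEFAULT nextval('master_id_seq'), name text);
```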
[ { "msg_contents": "Man, are those people packed into Great Britain, or what?\n\nVadim, good thing you are there, or we would have a huge gap in Asia. \nSee, you were meant to work on this project.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 18:28:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Globe" }, { "msg_contents": ">\n> Man, are those people packed into Great Britain, or what?\n\n And I'm sure Peter Mount isn't placed right since I haven't\n found Maidstone on my map's, nor on any lat/long source in\n the internet. Where should that dot sit really?\n\n>\n> Vadim, good thing you are there, or we would have a huge gap in Asia.\n> See, you were meant to work on this project.\n\n Yepp - after all I was really surprised how widely we are\n spread out.\n\n But there are some bad, bad, big gaps. South-America, Africa\n and Australia.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 16 Mar 1999 11:05:23 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Globe" }, { "msg_contents": "> >\n> > Man, are those people packed into Great Britain, or what?\n> \n> And I'm sure Peter Mount isn't placed right since I haven't\n> found Maidstone on my map's, nor on any lat/long source in\n> the internet. Where should that dot sit really?\n> \n> >\n> > Vadim, good thing you are there, or we would have a huge gap in Asia.\n> > See, you were meant to work on this project.\n> \n> Yepp - after all I was really surprised how widely we are\n> spread out.\n> \n> But there are some bad, bad, big gaps. South-America, Africa\n> and Australia.\n\nYes, we need to work on that. I am surprised too how spread out we are.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Mar 1999 13:29:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Globeu" }, { "msg_contents": "> >\n> > Man, are those people packed into Great Britain, or what?\n> \n> And I'm sure Peter Mount isn't placed right since I haven't\n> found Maidstone on my map's, nor on any lat/long source in\n> the internet. Where should that dot sit really?\n\nAsk and you shall receive:\n\n\tMaidstone:\t51.17N, 0.32 E\n\nThere is also a political/neighborhood of the same name listed at:\n\n\tMaidstone:\t51.17N, 0.35 E\n\nOf course, both would appear in the same spot on our map.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Mar 1999 13:38:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Globe]" }, { "msg_contents": "On Tue, 16 Mar 1999, Jan Wieck wrote:\n\n> >\n> > Man, are those people packed into Great Britain, or what?\n> \n> And I'm sure Peter Mount isn't placed right since I haven't\n> found Maidstone on my map's, nor on any lat/long source in\n> the internet. Where should that dot sit really?\n\nMaidstone is about 25 miles south east of London.\n\nAccording to my GPS:\n\n\tN 51'13.422'\tE000'34.168'\n\nWhen I looked, it looked ok.\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Tue, 16 Mar 1999 19:47:32 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Globe" } ]
[ { "msg_contents": "> Ryan Bradetich wrote:\n> > > Back to my origional point .. I think we need another system table to map \nthe\n> > > sequence oid to the table's oid. I've noticed this done with the \ninheritance,\n> > > indexes, etc ... but I don't see a pg_sequence table.\n> \n> Sounds good.\n> \n> As long as a sequence can point to more than \n> one table/column combination.\n> \n> Or, I guess, you can have the relationship 1-1 for\n> the SERIAL type, but this should not prevent using\n> sequences across more than one table if you don't use SERIAL.\n> I often use a sequence for 3 or more tables in a system\n> so that I can use 'generic' functions on the tables\n> and produce reports without conficting primary keys..\n> \n> :) Clark\n\nHmm.. Good points.... I'll make sure that doesn't happen. Thanks for the tips. \n:)\n\n-Ryan\n", "msg_date": "Mon, 15 Mar 1999 18:06:18 -0700 (MST)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Sequences...." }, { "msg_contents": ">> As long as a sequence can point to more than \n>> one table/column combination.\n\nDoesn't seem like a problem to me --- as far as I understood Ryan,\nthe new table he's proposing would only contain entries for sequences\ncreated to implement SERIAL keywords. For those, I think there should\nindeed be a 1-1 mapping between parent table (+column) and resulting\nsequence.\n\nBut yeah, don't break usage of ordinary standalone sequences ;-).\n\nAnother thing to think about is what it's going to take to dump and\nreload this structure in pg_dump. We need to be able to reconstitute\nthe system tables' contents and the current value of the SERIAL sequence\nafter a reload.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Mar 1999 22:23:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences.... " }, { "msg_contents": "BTW, has anyone thought twice about the interaction of SERIAL columns\nwith inheritance? If I create a table having a SERIAL column and then\ncreate a child table that inherits from the first, what happens? Does\nthe child share the use of the parent's sequence (implying that serial\nnumber assignments are unique across the parent and all its children)?\nOr does the child get a new sequence object of its very own --- and if\nso, what does that sequence object start out at?\n\nWe ought to find out what the current code actually does and then think\nabout whether we like it or not; I'll bet that the current behavior was\nnot designed but just fell out of the implementation.\n\nIf we do want shared use of a parent's sequence, that's going to\ncomplicate Ryan's new system table considerably --- probably it needs\nto have a row for each table using a particular sequence-created-to-\nimplement-SERIAL, and the sequence object can be deleted only when the\nlast reference to it goes away. Life may become even more interesting\nfor pg_dump, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Mar 1999 22:45:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences.... " } ]
[ { "msg_contents": "I still cannot get pg_dump to work since I fixed the system yesterday. This\nis a real mess and I need to make sure I have a current backup.\n\n-I upgraded from 6.4 -> 6.4.2 and applied the 2GB patch\n-I did \"initdb\" from the postgres user account\n-I cannot get pg_dump to work:\n\n\n-------\n[tim@db /]$ pg_dump db_domain > /fireball/pg_dumps/db_domain.dump\npg_dump error in finding the template1 database\n-------\n\n\nAt this point, the postmaster dies and restarts.\n\nI think I'm getting to where I need some real help getting this thing to\ndump again.\n\nThe database is up and running just fine - I just cannot dump.\n\nAny tips or advice is GREATLY needed and appreciated at this point. All I\nneed is it to take a dump ;-)\n\nTim\n\[email protected]\n\n\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian <[email protected]>\nTo: [email protected] <[email protected]>\nDate: Monday, March 15, 1999 11:57 AM\nSubject: Re: [SQL] Re: [HACKERS] URGENT -\n\n\n>>\n>>\n>> >From 6.4 -> 6.4.2\n>>\n>> The production database is working well, but pg_dump doesn't work. Now\n>> I'm worried that my database will corrupt again and I won't have it\n>> backed up.\n>\n>6.4 to 6.4.2 should work just fine, and the patch should not change\n>that. Are you saying the application of the patch caused the system to\n>be un-dumpable? Or perhaps was it the stopping of the postmaster. I\n>can work with you to get it dump-able if needed?\n>\n>--\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n\n", "msg_date": "Mon, 15 Mar 1999 19:25:58 -0600", "msg_from": "\"Tim Perdue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] \"CANNOT EXTEND\" -" } ]
[ { "msg_contents": "While running the regression test on current, I noticed some tests\nfaild due to a change of error message for dropping non existing\ntables.\n\nRelation foo Does Not Exist!\n\n\t--->\n\nRelation foo does not exist\n\nI just want to know the reason for those changes.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 16 Mar 1999 10:52:53 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "non existing table error message changed?" }, { "msg_contents": "> While running the regression test on current, I noticed some tests\n> faild due to a change of error message for dropping non existing\n> tables.\n> \n> Relation foo Does Not Exist!\n> \n> \t--->\n> \n> Relation foo does not exist\n> \n> I just want to know the reason for those changes.\n\nNot sure, but the second is better. I have modified the regression\ntests.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 21:56:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] non existing table error message changed?" }, { "msg_contents": "> While running the regression test on current, I noticed some tests\n> faild due to a change of error message for dropping non existing\n> tables.\n> Relation foo Does Not Exist!\n> --->\n> Relation foo does not exist\n> I just want to know the reason for those changes.\n\nTo make the error messages consistant in style. I hope this is not\ncausing trouble for apps...\n\n - Tom\n", "msg_date": "Sun, 21 Mar 1999 15:53:55 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] non existing table error message changed?" } ]
[ { "msg_contents": "Hello all,\n\nCREATE USER/ALTER USER doesn't work well for \n99/02/23 snapshot;\n\n=> create user user1;\nERROR: Bad abstime external representation ''\n\nI didn't understand the reason.\n \n=> alter user fred createuser;\nERROR: parser: parse error at or near \"where\"\n\nI found it's because of the use snprintf() instead \nof sprintf(). Different from sprintf(),snprintf() \nclears its target first. \nAlterUser() function uses the statement such as\n \n\tsnprintf(sql, \"....\", sql, ...) \n\nIn this case,the content of sql which is also a \nsource of snprintf is cleared before execution.\n\nThanks. \n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 16 Mar 1999 11:44:05 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "snprintf() instead of sprintf() ?" }, { "msg_contents": "I have gone through the code, and added pstrdup() to cases where\nsnprintf was uses with the same string on input and output. Should fix\nthese problems.\n\n\n---------------------------------------------------------------------------\n\nHello all,\n\nCREATE USER/ALTER USER doesn't work well for \n99/02/23 snapshot;\n\n=> create user user1;\nERROR: Bad abstime external representation ''\n\nI didn't understand the reason.\n \n=> alter user fred createuser;\nERROR: parser: parse error at or near \"where\"\n\nI found it's because of the use snprintf() instead \nof sprintf(). Different from sprintf(),snprintf() \nclears its target first. \nAlterUser() function uses the statement such as\n \n\tsnprintf(sql, \"....\", sql, ...) \n\nIn this case,the content of sql which is also a \nsource of snprintf is cleared before execution.\n\nThanks. \n\nHiroshi Inoue\[email protected]\n\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Mar 1999 00:01:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] snprintf() instead of sprintf() ?" } ]
[ { "msg_contents": "contrib/array_iterator/array_iterator.c won't compile.\nIncluded patches should fix the problem.\n--\nTatsuo Ishii\n\n*** pgsql/contrib/array/array_iterator.c~\tFri Jan 22 07:40:16 1999\n--- pgsql/contrib/array/array_iterator.c\tTue Mar 16 11:31:40 1999\n***************\n*** 28,37 ****\n \n #include \"array_iterator.h\"\n \n array_iterator(Oid elemtype, Oid proc, int and, ArrayType *array, Datum value)\n {\n \tHeapTuple\ttyp_tuple;\n! \tTypeTupleForm typ_struct;\n \tbool\t\ttypbyval;\n \tint\t\t\ttyplen;\n \tfunc_ptr\tproc_fn;\n--- 28,38 ----\n \n #include \"array_iterator.h\"\n \n+ static int32\n array_iterator(Oid elemtype, Oid proc, int and, ArrayType *array, Datum value)\n {\n \tHeapTuple\ttyp_tuple;\n! \tForm_pg_type typ_struct;\n \tbool\t\ttypbyval;\n \tint\t\t\ttyplen;\n \tfunc_ptr\tproc_fn;\n***************\n*** 43,48 ****\n--- 44,50 ----\n \t\t\t *dim;\n \tchar\t *p;\n \tFmgrInfo finf; /*Tobias Gabele Jan 18 1999*/\n+ \t\n \n \t/* Sanity checks */\n \tif ((array == (ArrayType *) NULL)\n***************\n*** 67,73 ****\n \t\telog(ERROR, \"array_iterator: cache lookup failed for type %d\", elemtype);\n \t\treturn 0;\n \t}\n! \ttyp_struct = (TypeTupleForm) GETSTRUCT(typ_tuple);\n \ttyplen = typ_struct->typlen;\n \ttypbyval = typ_struct->typbyval;\n \n--- 69,75 ----\n \t\telog(ERROR, \"array_iterator: cache lookup failed for type %d\", elemtype);\n \t\treturn 0;\n \t}\n! \ttyp_struct = (Form_pg_type) GETSTRUCT(typ_tuple);\n \ttyplen = typ_struct->typlen;\n \ttypbyval = typ_struct->typbyval;\n \n[srapc451.sra.co.jp]t-ishii{123} \n", "msg_date": "Tue, 16 Mar 1999 11:45:36 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "[CURRENT] contrib/array_iterator patch" }, { "msg_contents": "Applied.\n\n\n> contrib/array_iterator/array_iterator.c won't compile.\n> Included patches should fix the problem.\n> --\n> Tatsuo Ishii\n> \n> *** pgsql/contrib/array/array_iterator.c~\tFri Jan 22 07:40:16 1999\n> --- pgsql/contrib/array/array_iterator.c\tTue Mar 16 11:31:40 1999\n> ***************\n> *** 28,37 ****\n> \n> #include \"array_iterator.h\"\n> \n> array_iterator(Oid elemtype, Oid proc, int and, ArrayType *array, Datum value)\n> {\n> \tHeapTuple\ttyp_tuple;\n> ! \tTypeTupleForm typ_struct;\n> \tbool\t\ttypbyval;\n> \tint\t\t\ttyplen;\n> \tfunc_ptr\tproc_fn;\n> --- 28,38 ----\n> \n> #include \"array_iterator.h\"\n> \n> + static int32\n> array_iterator(Oid elemtype, Oid proc, int and, ArrayType *array, Datum value)\n> {\n> \tHeapTuple\ttyp_tuple;\n> ! \tForm_pg_type typ_struct;\n> \tbool\t\ttypbyval;\n> \tint\t\t\ttyplen;\n> \tfunc_ptr\tproc_fn;\n> ***************\n> *** 43,48 ****\n> --- 44,50 ----\n> \t\t\t *dim;\n> \tchar\t *p;\n> \tFmgrInfo finf; /*Tobias Gabele Jan 18 1999*/\n> + \t\n> \n> \t/* Sanity checks */\n> \tif ((array == (ArrayType *) NULL)\n> ***************\n> *** 67,73 ****\n> \t\telog(ERROR, \"array_iterator: cache lookup failed for type %d\", elemtype);\n> \t\treturn 0;\n> \t}\n> ! \ttyp_struct = (TypeTupleForm) GETSTRUCT(typ_tuple);\n> \ttyplen = typ_struct->typlen;\n> \ttypbyval = typ_struct->typbyval;\n> \n> --- 69,75 ----\n> \t\telog(ERROR, \"array_iterator: cache lookup failed for type %d\", elemtype);\n> \t\treturn 0;\n> \t}\n> ! 
\ttyp_struct = (Form_pg_type) GETSTRUCT(typ_tuple);\n> \ttyplen = typ_struct->typlen;\n> \ttypbyval = typ_struct->typbyval;\n> \n> [srapc451.sra.co.jp]t-ishii{123} \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 22:10:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [CURRENT] contrib/array_iterator patch" } ]
[ { "msg_contents": "Thanks Bruce.\n\nDo you have any idea what happens with pg_dump when it hits 2GB??? Is it set\nup to segment the files on linux? If not, I may have hit a brick wall here,\nand have no way to back this baby up.\n\nTim\n\n\n-----Original Message-----\nFrom: Bruce Momjian <[email protected]>\nTo: [email protected] <[email protected]>\nCc: [email protected] <[email protected]>\nDate: Monday, March 15, 1999 7:53 PM\nSubject: Re: [SQL] Re: [HACKERS] URGENT -\n\n\n>Call me at my signature phone number.\n>\n>\n>[Charset iso-8859-1 unsupported, filtering to ASCII...]\n>> I still cannot get pg_dump to work since I fixed the system yesterday.\nThis\n>> is a real mess and I need to make sure I have a current backup.\n>>\n>> -I upgraded from 6.4 -> 6.4.2 and applied the 2GB patch\n>> -I did \"initdb\" from the postgres user account\n>> -I cannot get pg_dump to work:\n>>\n>>\n>> -------\n>> [tim@db /]$ pg_dump db_domain > /fireball/pg_dumps/db_domain.dump\n>> pg_dump error in finding the template1 database\n>> -------\n>>\n>>\n>> At this point, the postmaster dies and restarts.\n>>\n>> I think I'm getting to where I need some real help getting this thing to\n>> dump again.\n>>\n>> The database is up and running just fine - I just cannot dump.\n>>\n>> Any tips or advice is GREATLY needed and appreciated at this point. All I\n>> need is it to take a dump ;-)\n>>\n>> Tim\n>>\n>> [email protected]\n>>\n>>\n>>\n>>\n>>\n>> -----Original Message-----\n>> From: Bruce Momjian <[email protected]>\n>> To: [email protected] <[email protected]>\n>> Date: Monday, March 15, 1999 11:57 AM\n>> Subject: Re: [SQL] Re: [HACKERS] URGENT -\n>>\n>>\n>> >>\n>> >>\n>> >> >From 6.4 -> 6.4.2\n>> >>\n>> >> The production database is working well, but pg_dump doesn't work. Now\n>> >> I'm worried that my database will corrupt again and I won't have it\n>> >> backed up.\n>> >\n>> >6.4 to 6.4.2 should work just fine, and the patch should not change\n>> >that. Are you saying the application of the patch caused the system to\n>> >be un-dumpable? Or perhaps was it the stopping of the postmaster. I\n>> >can work with you to get it dump-able if needed?\n>> >\n>> >--\n>> > Bruce Momjian | http://www.op.net/~candle\n>> > [email protected] | (610) 853-3000\n>> > + If your life is a hard drive, | 830 Blythe Avenue\n>> > + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n>> >\n>>\n>>\n>\n>\n>--\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Mon, 15 Mar 1999 21:05:02 -0600", "msg_from": "\"Tim Perdue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] \"CANNOT EXTEND\" -" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Thanks Bruce.\n> \n> Do you have any idea what happens with pg_dump when it hits 2GB??? Is it set\n> up to segment the files on linux? If not, I may have hit a brick wall here,\n> and have no way to back this baby up.\n> \n\npg_dump only dumps a flat unix file. That can be any size your OS\nsupports. It does not segment. However, a 2gig table will dump to a\nmuch smaller version than 2gig because of the overhead for every record.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 22:08:12 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] \"CANNOT EXTEND\" -" }, { "msg_contents": "On Mon, 15 Mar 1999, Bruce Momjian wrote:\n\n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > Thanks Bruce.\n> > \n> > Do you have any idea what happens with pg_dump when it hits 2GB??? Is it set\n> > up to segment the files on linux? If not, I may have hit a brick wall here,\n> > and have no way to back this baby up.\n> > \n> \n> pg_dump only dumps a flat unix file. That can be any size your OS\n> supports. It does not segment. However, a 2gig table will dump to a\n> much smaller version than 2gig because of the overhead for every record.\n\nHmmm, I think that, as some people are now using >2Gig tables, we should\nthink of adding segmentation to pg_dump as an option, otherwise this is\ngoing to become a real issue at some point.\n\nAlso, I think we could do with having some standard way of dumping and\nrestoring large objects.\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Wed, 17 Mar 1999 23:17:11 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] \"CANNOT EXTEND\" -" }, { "msg_contents": "> > pg_dump only dumps a flat unix file. That can be any size your OS\n> > supports. It does not segment. However, a 2gig table will dump to a\n> > much smaller version than 2gig because of the overhead for every record.\n> \n> Hmmm, I think that, as some people are now using >2Gig tables, we should\n> think of adding segmentation to pg_dump as an option, otherwise this is\n> going to become a real issue at some point.\n\nSo the OS doesn't get a table over 2 gigs. Does anyone have a table\nthat dumps a flat file over 2gig's, whose OS can't support files over 2\ngigs. Never heard of a complaint.\n\n\n> \n> Also, I think we could do with having some standard way of dumping and\n> restoring large objects.\n\nI need to add a separate large object type.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 18:23:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] \"CANNOT EXTEND\" -" } ]
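On the large-object half of Peter's point, the only option today is ad hoc. A hedged sketch using the server-side lo_export() function; the images table and its raster column of large-object OIDs are invented here, and the output file is written on the server host:

```sql
-- Manually exporting one large object alongside a pg_dump, since
-- pg_dump itself does not handle large objects.
SELECT lo_export(raster, '/tmp/beautiful_image.lo')
FROM images
WHERE name = 'beautiful image';
```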
[ { "msg_contents": "> BTW, has anyone thought twice about the interaction of SERIAL columns\n> with inheritance? If I create a table having a SERIAL column and then\n> create a child table that inherits from the first, what happens? Does\n> the child share the use of the parent's sequence (implying that serial\n> number assignments are unique across the parent and all its children)?\n> Or does the child get a new sequence object of its very own --- and if\n> so, what does that sequence object start out at?\n> \n> We ought to find out what the current code actually does and then think\n> about whether we like it or not; I'll bet that the current behavior was\n> not designed but just fell out of the implementation.\n\nCurrently the parent and child share the same sequence.\n\nryan=> CREATE TABLE parent (i SERIAL);\nNOTICE: CREATE TABLE will create implicit sequence parent_i_seq for SERIAL column parent.i\nNOTICE: CREATE TABLE/UNIQUE will create implicit index parent_i_key for table parent\nCREATE\nryan=> CREATE TABLE child (t text) INHERITS (parent);\nCREATE\nryan=> INSERT INTO parent VALUES (NEXTVAL('parent_i_seq'));\nINSERT 18731 1\nryan=> INSERT INTO child (t) values ('test');\nINSERT 18732 1\nryan=> INSERT INTO parent VALUES (NEXTVAL('parent_i_seq'));\nINSERT 18733 1\nryan=> SELECT * FROM parent;\ni\n-\n1\n3\n(2 rows)\n\nryan=> SELECT * FROM child;\ni|t\n-+----\n2|test\n(1 row)\n\nryan=>\n\n> If we do want shared use of a parent's sequence, that's going to\n> complicate Ryan's new system table considerably --- probably it needs\n> to have a row for each table using a particular sequence-created-to-\n> implement-SERIAL, and the sequence object can be deleted only when the\n> last reference to it goes away. Life may become even more interesting\n> for pg_dump, too.\n> \n> \t\t\tregards, tom lane\n\nI'm not sure about the pg_dump, but I do not see the need to added complexity to the system table because of the shared sequence. The \nparent table can not be dropped while the child table exists, which would be the last reference to the serial-sequence.\n\nryan=> drop table parent;\nERROR: Relation '18718' inherits 'parent'\nryan=>\n\nThis is the behavior I would expect, but then again I'm pretty new to databases ... and know nothing about the standard :)\n\n-Ryan\n\nP.S. I hope to finish the patch tonight for the system table, but I will probably need some help/input on the pg_dump issues.\n\n", "msg_date": "Tue, 16 Mar 1999 00:34:33 -0700 (MST)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Sequences...." } ]
[ { "msg_contents": "I see below in the current TODO file.\n\n* Add REGEX internationalization\n\nI thougt this has been already done with the multi-byte support since\n6.3.1?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 16 Mar 1999 16:41:10 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "TODO item" }, { "msg_contents": "> I see below in the current TODO file.\n> \n> * Add REGEX internationalization\n> \n> I thougt this has been already done with the multi-byte support since\n> 6.3.1?\n> --\n> Tatsuo Ishii\n> \n> \n\nThanks. Removed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Mar 1999 13:09:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO item" } ]
[ { "msg_contents": "unsubscribe\n\n____________________________________________________________________\nMore than just email--Get your FREE Netscape WebMail account today at http://home.netscape.com/netcenter/mail\n", "msg_date": "16 Mar 99 07:46:17 PST", "msg_from": "Dan Hrabarchuk <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "It simply reduces the size of each segment from 2Gb to 1Gb. The problem\nwas that some OS's (Linux in my case) don't like files exactly 2Gb in\nsize. I don't know how vacuum interacts with the storage manager, but in\ntheory it should be transparent.\n\n--\nPeter T Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as the\nofficial words of Maidstone Borough Council\n\n-----Original Message-----\nFrom: Tatsuo Ishii [mailto:[email protected]]\nSent: Tuesday, March 16, 1999 1:41 AM\nTo: Peter Mount\nCc: Tom Lane; [email protected]\nSubject: Re: [HACKERS] Problems with >2GB tables on Linux 2.0 \n\n\nJust a question. Does your patch let vacuum handle segmented tables?\n--\nTatsuo Ishii\n\n>I reposted the patch from home yesterday, as bruce pointed it out in\n>another thread.\n>\n>Peter\n>\n>--\n>Peter T Mount, IT Section\n>[email protected]\n>Anything I write here are my own views, and cannot be taken as the\n>official words of Maidstone Borough Council\n>\n>-----Original Message-----\n>From: Tom Lane [mailto:[email protected]]\n>Sent: Sunday, March 14, 1999 5:52 PM\n>To: [email protected]\n>Subject: Re: [HACKERS] Problems with >2GB tables on Linux 2.0 \n>\n>\n>Say guys,\n>\n>I just noticed that RELSEG_SIZE still hasn't been reduced per the\n>discussion from early February. Let's make sure that doesn't slip\n>through the cracks, OK?\n>\n>I think Peter Mount was supposed to be off testing this issue.\n>Peter, did you learn anything further?\n>\n>We should probably apply the patch to REL6_4 as well...\n>\n>\t\t\tregards, tom lane\n>\n", "msg_date": "Tue, 16 Mar 1999 07:52:14 -0000", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Problems with >2GB tables on Linux 2.0 " }, { "msg_contents": ">Just a question. Does your patch let vacuum handle segmented tables?\n>--\n>Tatsuo Ishii\n\n>It simply reduces the size of each segment from 2Gb to 1Gb. The problem\n>was that some OS's (Linux in my case) don't like files exactly 2Gb in\n>size. I don't know how vacuum interacts with the storage manager, but in\n>theory it should be transparent.\n\nOk. So we still have following problem:\n\ntest=> vacuum smallcat;\nNOTICE: Can't truncate multi-segments relation smallcat\nVACUUM\n\nMaybe this should be added to TODO if it's not already there.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 17 Mar 1999 13:45:35 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0 " } ]
[ { "msg_contents": "> unsubscribe\n> \n", "msg_date": "Tue, 16 Mar 1999 10:48:19 -0500", "msg_from": "\"Nugent, Michael P (SAIC)\" <[email protected]>", "msg_from_op": true, "msg_subject": "" } ]
[ { "msg_contents": "There would be value is creating something similar to this for users of\nPostgreSQL. I would be great to see a summary of how many users there are,\nwhere they are, and brief statement of how they are using PostgreSQL.\n\n\t-----Original Message-----\n\tFrom:\tBruce Momjian [SMTP:[email protected]]\n\tSent:\tTuesday, March 16, 1999 11:41 AM\n\tTo:\[email protected]\n\tCc:\[email protected]; [email protected]; [email protected];\[email protected]; [email protected]\n\tSubject:\tRe: [HACKERS] Re: Developers Globe (FINAL)\n\n\t> > I have to agree on this.\n\t> \n\t> Man - first it is too flat, now the logo detracs, the flashes\n\t> cause problems and ppl vote agains. Next someone want's it\n\t> back rotating :-)\n\t> \n\t> OK, ok - but slow motion now. A version that works for NS4\n\t> and IE4 is in place. Let's leave it there until we have\n\t> something better. I really wanted to get my fingers back on\n\t> raytracing for a long time and that time is now. I'll produce\n\t> some different map's the next days and place them all into\n\t> ~wieck/index.html, then we can have a voting.\n\n\tNow you know how I felt with the globe(too fast, hard to see), and\n\tstatic map(bad dots).\n\n\n\t-- \n\t Bruce Momjian | http://www.op.net/~candle\n\t [email protected] | (610) 853-3000\n\t + If your life is a hard drive, | 830 Blythe Avenue\n\t + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Tue, 16 Mar 1999 12:50:13 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: Developers Globe (FINAL)" } ]
[ { "msg_contents": "My copy of parse.h may have gotten out of sync, so I'm taking\nanother snapshot, but this is a recent build error that I got\nfrom building a 1:45 PM EST snapshot.\n\n-----------------\n\ngcc -I../../include -I../../backend -O2 -Wall -Wmissing-prototypes -I.. -c outfuncs.c -o outfuncs.o\nIn file included from outfuncs.c:45:\n../parse.h:19: parse error before `JoinUsing'\n../parse.h:19: warning: no semicolon at end of struct or union\n../parse.h:31: parse error before `}'\n../parse.h:31: warning: data definition has no type or storage class\n../parse.h:257: parse error before `yylval'\n../parse.h:257: warning: data definition has no type or storage class\nmake[2]: *** [outfuncs.o] Error 1\nmake[2]: Leaving directory `/usr/local/src/pgsql/CURRENT/src/backend/nodes'\nmake[1]: *** [nodes.dir] Error 2\nmake[1]: Leaving directory `/usr/local/src/pgsql/CURRENT/src/backend'\nmake: *** [all] Error 2\n", "msg_date": "Tue, 16 Mar 1999 19:25:40 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": true, "msg_subject": "Current Tree Build Failure" } ]
[ { "msg_contents": "> \n> Bruce, it is failing on the first select:\n> \n> -->findLastBuiltinOid(void) {\n> --> SELECT oid from pg_database where datname = 'template1'\n> }\n> \n> template1=> SELECT oid from pg_database where datname = 'template1';\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally before\n> or while pr\n> ocessing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible.\n> Terminating.\n> \n> \n> That's where it bombs out.\n> \n> Tim\n> \n\nCan someone suggest why this would be happening? The SELECT looks\npretty simple.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Mar 1999 15:11:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] \"CANNOT EXTEND\" -" }, { "msg_contents": "Thus spake Bruce Momjian\n> > template1=> SELECT oid from pg_database where datname = 'template1';\n> > pqReadData() -- backend closed the channel unexpectedly.\n> > This probably means the backend terminated abnormally before\n> \n> Can someone suggest why this would be happening? The SELECT looks\n> pretty simple.\n\nIt works for me but I have the same thing happening on a database\nwhere I do the following.\n\n SELECT * INTO TABLE tempset FROM trash WHERE \"Shape\" ~* 'Prow' ;\n\nThe tempset table doesn't exist and trash is just a table that the\nuser is trying to grab a subselect from. I don't know if there is\na connection here but it brought to mind a question I have been meaning\nto ask, how do I read the pg_log file? I can't seem to find anything\nthat reads this. Also, what sort of stuff gets put into that log?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 16 Mar 1999 15:54:39 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] \"CANNOT EXTEND\" -" } ]
[ { "msg_contents": "Seth McQuale pointed out that the follwing does not work:\n SELECT LASTNAME || ',' || FIRSTNAME [AS] NAME FROM FRIENDS;\n\nThe solution, was:\n SELECT ( LASTNAME || ',' ) || FIRSTNAME AS NAME FROM FRIENDS;\n\nI looked at pg_operator and didn't see any flag to mark\nan operator as 'associative'. Perhaps if we added a flag\nlike this, the re-write system could be modified to handle\ncases like this.\n\nThoughts?\n\nClark Evans\n", "msg_date": "Tue, 16 Mar 1999 21:29:55 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": true, "msg_subject": "Associative Operators? (Was: Re: [NOVICE] Out of frying pan,\n\tinto fire)" }, { "msg_contents": "> Seth McQuale pointed out that the follwing does not work:\n> SELECT LASTNAME || ',' || FIRSTNAME [AS] NAME FROM FRIENDS;\n> \n> The solution, was:\n> SELECT ( LASTNAME || ',' ) || FIRSTNAME AS NAME FROM FRIENDS;\n> \n> I looked at pg_operator and didn't see any flag to mark\n> an operator as 'associative'. Perhaps if we added a flag\n> like this, the re-write system could be modified to handle\n> cases like this.\n> \n> Thoughts?\n> \n> Clark Evans\n> \n> \n\nMy guess is that we should auto-left-associate functions like || if no\nparens are present. It would be a small change to the parser.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Mar 1999 17:23:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Associative Operators? (Was: Re: [NOVICE] Out of frying\n\tpan, into fire)" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> My guess is that we should auto-left-associate functions like || if no\n> parens are present. It would be a small change to the parser.\n\nI was trying to describe a more general solution,\nwhere the operator is marked if it is associative\nwhen it is created. This would allow the same\nmechanism to be used for user defined types.\n\n:) Clark\n", "msg_date": "Tue, 16 Mar 1999 22:34:57 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Associative Operators? (Was: Re: [NOVICE] Out of frying\n\tpan, into fire)" }, { "msg_contents": "Thus spake Bruce Momjian\n> > I looked at pg_operator and didn't see any flag to mark\n> > an operator as 'associative'. Perhaps if we added a flag\n> > like this, the re-write system could be modified to handle\n> > cases like this.\n> \n> My guess is that we should auto-left-associate functions like || if no\n> parens are present. It would be a small change to the parser.\n\nAnd wouldn't require a dump/reload.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 16 Mar 1999 17:36:58 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Associative Operators? (Was: Re: [NOVICE] Out of frying\n\tpan, into fire)" }, { "msg_contents": "\"D'Arcy J.M. Cain\" wrote:\n> And wouldn't require a dump/reload.\n\nWhy would this require a dump/reload? It would seem\nto me that this would only be needed if you changed\nthe database storage system? 
Am I missing something?\n\nClark\n", "msg_date": "Tue, 16 Mar 1999 22:42:04 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Associative Operators? (Was: Re: [NOVICE] Out of frying\n\tpan, into fire)" }, { "msg_contents": "\"D'Arcy J.M. Cain\" wrote:\n> And wouldn't require a dump/reload.\n\nOoh. If you change pg_operators and add a column. Ok.\nFor the new version, don't they have to do this anyway?\n\nClark\n", "msg_date": "Tue, 16 Mar 1999 22:45:22 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Associative Operators? (Was: Re: [NOVICE] Out of frying\n\tpan, into fire)" }, { "msg_contents": "\nThis is fixed in 6.5 beta.\n\n\n> Seth McQuale pointed out that the follwing does not work:\n> SELECT LASTNAME || ',' || FIRSTNAME [AS] NAME FROM FRIENDS;\n> \n> The solution, was:\n> SELECT ( LASTNAME || ',' ) || FIRSTNAME AS NAME FROM FRIENDS;\n> \n> I looked at pg_operator and didn't see any flag to mark\n> an operator as 'associative'. Perhaps if we added a flag\n> like this, the re-write system could be modified to handle\n> cases like this.\n> \n> Thoughts?\n> \n> Clark Evans\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 May 1999 20:51:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Associative Operators? (Was: Re: [NOVICE] Out of frying\n\tpan, into fire)" } ]
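A minimal SQL sketch of the thread's point, reusing its own example. The 'associative' flag is only Clark's proposal here, not an existing pg_operator column, and the CREATE OPERATOR line is an approximation of how || is declared rather than a command to run as-is:

	-- fails on a pre-6.5 parser; works once || auto-left-associates
	SELECT LASTNAME || ',' || FIRSTNAME AS NAME FROM FRIENDS;

	-- portable workaround: make the grouping explicit
	SELECT ( LASTNAME || ',' ) || FIRSTNAME AS NAME FROM FRIENDS;

	-- the operator itself is an ordinary catalog entry, roughly:
	CREATE OPERATOR || (leftarg = text, rightarg = text, procedure = textcat);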
[ { "msg_contents": "In this QUERY:\n\n\tSELECT keyname\n\tFROM markmain\n\tWHERE mark_id NOT IN(SELECT mark_id \n\t\t\t FROM markaty)\n\nI have an index on markaty.mark_id, and have vacuum analyzed. EXPLAIN\nshows:\n\n\tSeq Scan on markmain (cost=2051.43 size=45225 width=12)\n\t SubPlan\n\t -> Seq Scan on markaty (cost=2017.41 size=52558 width=4)\n\nVadim, why isn't this using the index? Each table has 50k rows. Is it\nNOT IN that is causing the problem? IN produces the same plan, though. \nIf I do a traditional join: \n\n\tSELECT keyname\n FROM markmain , markaty\n WHERE markmain.mark_id = markaty.mark_id\n\nI then get a hash join plan:\n\t\n\tHash Join (cost=10768.51 size=90519 width=20)\n\t -> Seq Scan on markmain (cost=2051.43 size=45225 width=16)\n\t -> Hash (cost=0.00 size=0 width=0)\n\t -> Seq Scan on markaty (cost=2017.41 size=52558 width=4)\n\nSeems the optimizer could either hash the subquery, or us an index. \nCertainly would be faster than a sequental scan, no?\n\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life IS a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Mar 1999 17:50:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Subqueries and indexes" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> In this QUERY:\n> \n> SELECT keyname\n> FROM markmain\n> WHERE mark_id NOT IN(SELECT mark_id\n> FROM markaty)\n> \n> I have an index on markaty.mark_id, and have vacuum analyzed. EXPLAIN\n> shows:\n> \n> Seq Scan on markmain (cost=2051.43 size=45225 width=12)\n> SubPlan\n> -> Seq Scan on markaty (cost=2017.41 size=52558 width=4)\n> \n> Vadim, why isn't this using the index? Each table has 50k rows. Is it\n> NOT IN that is causing the problem? IN produces the same plan, though.\n....\n> \n> Seems the optimizer could either hash the subquery, or us an index.\n> Certainly would be faster than a sequental scan, no?\n\nOptimizer should hash the subquery, but I didn't implement this -:(\nTry to rewrite query using NOT EXISTS and index will be used.\n\nVadim\n", "msg_date": "Wed, 17 Mar 1999 08:42:15 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Subqueries and indexes" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > In this QUERY:\n> > \n> > SELECT keyname\n> > FROM markmain\n> > WHERE mark_id NOT IN(SELECT mark_id\n> > FROM markaty)\n> > \n> > I have an index on markaty.mark_id, and have vacuum analyzed. EXPLAIN\n> > shows:\n> > \n> > Seq Scan on markmain (cost=2051.43 size=45225 width=12)\n> > SubPlan\n> > -> Seq Scan on markaty (cost=2017.41 size=52558 width=4)\n> > \n> > Vadim, why isn't this using the index? Each table has 50k rows. Is it\n> > NOT IN that is causing the problem? IN produces the same plan, though.\n> ....\n> > \n> > Seems the optimizer could either hash the subquery, or us an index.\n> > Certainly would be faster than a sequental scan, no?\n> \n> Optimizer should hash the subquery, but I didn't implement this -:(\n> Try to rewrite query using NOT EXISTS and index will be used.\n\nHow hard would it be to implement it? I know you are deep into MVCC,\nbut doing a nested loop to join a subquery is really bad.\n\nNow, in our defense, I tried this with commercial Ingres 6.4, and it\ntook so long I copied the data into PostgreSQL and tried to run it\nthere. 
Eventually, I copied the data into a second table, and did a\nDELETE FROM using two tables in the WHERE clause, and the rows left\nwhere my NOT IN result. It did use a hash join in that case.\n\nObviously, Ingres was doing a nested loop do, but I want to do better\nthan Ingres.\n\nI think we really need to get that hash enabled. Is there something I\ncan do to enable it, or can I do something to help you enable it?\n\nAll queries can't be rewritten as EXISTS.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 00:48:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Subqueries and indexes" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > >\n> > > Seems the optimizer could either hash the subquery, or us an index.\n> > > Certainly would be faster than a sequental scan, no?\n> >\n> > Optimizer should hash the subquery, but I didn't implement this -:(\n> > Try to rewrite query using NOT EXISTS and index will be used.\n> \n> How hard would it be to implement it? I know you are deep into MVCC,\n> but doing a nested loop to join a subquery is really bad.\n\nNot very hard, for un-correlated subqueries at least.\nI have no time to do this for 6.5...\n\n> \n...\n> \n> All queries can't be rewritten as EXISTS.\n\nAll except of subqueries with aggregates in target list.\n\nVadim\n", "msg_date": "Wed, 17 Mar 1999 14:07:36 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Subqueries and indexes" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > > >\n> > > > Seems the optimizer could either hash the subquery, or us an index.\n> > > > Certainly would be faster than a sequental scan, no?\n> > >\n> > > Optimizer should hash the subquery, but I didn't implement this -:(\n> > > Try to rewrite query using NOT EXISTS and index will be used.\n> > \n> > How hard would it be to implement it? I know you are deep into MVCC,\n> > but doing a nested loop to join a subquery is really bad.\n> \n> Not very hard, for un-correlated subqueries at least.\n> I have no time to do this for 6.5...\n\n\nIs it possible before 6.5 final?\n\n\n> > All queries can't be rewritten as EXISTS.\n> \n> All except of subqueries with aggregates in target list.\n\nI am confused. How do I rewrite this to use exists?\n\n SELECT keyname\n FROM markmain\n WHERE mark_id NOT IN(SELECT mark_id\n FROM markaty)\n\n\nEven if I use IN instead of NOT IN, I don't see how to do it without\nmaking it a correlated subquery.\n\n SELECT keyname\n FROM markmain\n WHERE EXISTS (SELECT mark_id\n FROM markaty\n\t\t WHERE markmain.mark_id = markaty.mark_id)\n\nThis is a correlated subquery. It did not use hash, but it did use the\nindex on markaty:\n\n\tSeq Scan on markmain (cost=16.02 size=334 width=12)\n\t SubPlan\n\t -> Index Scan using i_markaty on markaty (cost=2.10 size=3 width=4)\n\nWhile the index usage is good, the fact is the subquery is executed for\nevery row of markmain, isn't it? That's one query executed for each row\nin markmain, isn't it?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 13:10:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Subqueries and indexes" } ]
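For reference, a sketch of the NOT EXISTS rewrite Vadim suggests, applied to Bruce's original NOT IN query. It is a correlated subquery, so the inner query still runs once per markmain row, but it lets the planner use the index on markaty.mark_id; note also that NOT IN and NOT EXISTS behave differently if markaty.mark_id can contain NULLs:

	SELECT keyname
	FROM markmain
	WHERE NOT EXISTS (SELECT mark_id
	                  FROM markaty
	                  WHERE markaty.mark_id = markmain.mark_id);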
[ { "msg_contents": "However it is implemented, I would really enjoy this enhancement.\nAdditionally, it would be nice if I could create a new operator using C and\nhave this new operator be associative if desired.\n\nSpeaking of this, If either LASTNAME or FIRSTNAME is NULL then the result of\n((LASTNAME || ',' ) || FIRSTNAME) will return NULL. I would like to be able\nto alter this such that the result will contain what ever is not NULL. I\ntried to create a C function to overcome this but noticed that if any\nparameter in my C function is NULL then the C function always returns NULL.\nI saw some references in the archives about this issue but was unable to\ndetermine where it was left. What is the status of this issue?\n\nThanks, Michael\n\n\t-----Original Message-----\n\tFrom:\tBruce Momjian [SMTP:[email protected]]\n\tSent:\tTuesday, March 16, 1999 3:24 PM\n\tTo:\[email protected]\n\tCc:\[email protected]\n\tSubject:\tRe: [HACKERS] Associative Operators? (Was: Re:\n[NOVICE] Out of frying pan, into fire)\n\n\t> Seth McQuale pointed out that the follwing does not work:\n\t> SELECT LASTNAME || ',' || FIRSTNAME [AS] NAME FROM FRIENDS;\n\t> \n\t> The solution, was:\n\t> SELECT ( LASTNAME || ',' ) || FIRSTNAME AS NAME FROM FRIENDS;\n\t> \n\t> I looked at pg_operator and didn't see any flag to mark\n\t> an operator as 'associative'. Perhaps if we added a flag\n\t> like this, the re-write system could be modified to handle\n\t> cases like this.\n\t> \n\t> Thoughts?\n\t> \n\t> Clark Evans\n\t> \n\t> \n\n\tMy guess is that we should auto-left-associate functions like || if\nno\n\tparens are present. It would be a small change to the parser.\n\n\t-- \n\t Bruce Momjian | http://www.op.net/~candle\n\t [email protected] | (610) 853-3000\n\t + If your life is a hard drive, | 830 Blythe Avenue\n\t + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Tue, 16 Mar 1999 17:14:33 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Associative Operators? (Was: Re: [NOVICE] Out of f\n\trying pan, into fire)" }, { "msg_contents": "Michael Davis wrote:\n> Speaking of this, If either LASTNAME or FIRSTNAME is NULL then the result of\n> ((LASTNAME || ',' ) || FIRSTNAME) will return NULL. I would like to be able\n> to alter this such that the result will contain what ever is not NULL. I\n> tried to create a C function to overcome this but noticed that if any\n> parameter in my C function is NULL then the C function always returns NULL.\n> I saw some references in the archives about this issue but was unable to\n> determine where it was left. What is the status of this issue?\n\nAlthough I feel initial opposition to this idea, on second \nconsideration, I guess it is reasonable behavior, in Oracle, \nthe NVL function and the DECODE function both handle NULL \narguments without having the result be NULL.\n\nHowever, I'm unaware of any other exceptions in the Oracle\ndatabase on this issue. I believe that user defined functions \nare not allowed to have special NULL treatment -- perhaps \nOracle has DECODE and NVL hard coded deep in the guts of \ntheir query processor, while the other functions arn't.\n\nWould a compromise be to add DECODE and NVL ?\n\nClark\n", "msg_date": "Tue, 16 Mar 1999 23:22:40 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Associative Operators? 
(Was: Re: [NOVICE] Out of frying\n\tpan, into fire)" }, { "msg_contents": "Clark Evans wrote:\n> \n<snipped discussion of 'something || NULL ' returning non-NULL>\n\n> However, I'm unaware of any other exceptions in the Oracle\n> database on this issue. I believe that user defined functions\n> are not allowed to have special NULL treatment -- perhaps\n> Oracle has DECODE and NVL hard coded deep in the guts of\n> their query processor, while the other functions arn't.\n> \n> Would a compromise be to add DECODE and NVL ?\n\nWhat do DECODE and NVL do?\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Tue, 16 Mar 1999 17:58:33 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Associative Operators? (Was: Re: [NOVICE] Out of frying\n\tpan, into fire)" }, { "msg_contents": "\"Ross J. Reedstrom\" wrote:\n> What do DECODE and NVL do?\n\nTABLE_A\nCOLROW COLVALUE\n----- --------\nROW1 a\nROW2 b\nROW3 <- NULL\nROW4 d\nROW5 <- NULL\n\n5 rows\n\n\n-- NVL function:\n--\n-- This function takes a value of any time, and checks\n-- to see if the value IS NULL. If argument is not null, \n-- then it simply returns it's argument. Otherwise, it\n-- returns what is provided as the second argument.\n--\n\nSELECT COLROW, NVL(COLVALUE,'XX') AS NOT_NULL_COLVALUE \n FROM TABLE_A\n\nCOLROW NOT_NULL_COLVALUE\n----- --------\nROW1 a\nROW2 b\nROW3 XX\nROW4 d\nROW5 XX\n\n5 rows\n\nval,lookup,val,default\n\n-- DECODE function ( CASE/SWITCH like function )\n--\n-- This function takes an even number of arguments, N\n--\n-- The function compaires the first argument against each\n-- even numbered argument in the argumet list. If it is\n-- a match, then it returns the following value in the\n-- argument list. If there is no match, then the last\n-- argument (the default value) is returned. For matching\n-- purposes a NULL = NULL. The first argument and the\n-- middle even arguments must all be the same type, as well\n-- as the last argument and the middle odd arguments.\n--\n\nSELECT COLROW, DECODE(COLVAL,\n\t\t\t'd',4,\n\t\t\t'e',0,\n\t\t\tNULL,9,\n\t\t\t1\n ) AS DECODE_COLVALUE\n FROM TABLE_A\n\nCOLROW DECODE_COLVALUE\n----- --------\nROW1 1\nROW2 1\nROW3 9\nROW4 4\nROW5 9\n\n5 rows\n\n\nHope this helps!\n\nClark\n", "msg_date": "Wed, 17 Mar 1999 00:39:54 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Oracle's DECODE and NVL " } ]
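Both Oracle functions can be expressed with the SQL92 searched CASE expression; a sketch against the TABLE_A example above, assuming a backend with CASE support (being added for 6.5). A simple CASE cannot handle the NULL arm, since NULL = NULL does not evaluate to true in SQL, so the NULL test is written out explicitly:

	-- NVL(COLVALUE,'XX')
	SELECT COLROW,
	       CASE WHEN COLVALUE IS NULL THEN 'XX' ELSE COLVALUE END
	         AS NOT_NULL_COLVALUE
	FROM TABLE_A;

	-- DECODE(COLVALUE, 'd',4, 'e',0, NULL,9, 1)
	SELECT COLROW,
	       CASE WHEN COLVALUE = 'd' THEN 4
	            WHEN COLVALUE = 'e' THEN 0
	            WHEN COLVALUE IS NULL THEN 9
	            ELSE 1
	       END AS DECODE_COLVALUE
	FROM TABLE_A;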
[ { "msg_contents": "I would like for you to also consider adding the following to gram.y for\nversion 6.5:\n\n| NULL_P '=' a_expr\n { $$ = makeA_Expr(ISNULL, NULL, $3,\nNULL); }\n\nI know there was some discussion about this earlier including comments\nagainst this. Access 97 is now generating the following statement and\nerror:\n\nSQLDriverConnect(out)='DSN=PostgreSQL;DATABASE=mp;SERVER=192.168.97.2;PORT=5\n432;UID=kari;PWD=;READONLY=0;PROTOCOL=6.4;FAKEOIDINDEX=0;SHOWOIDCOLUMN=0;ROW\nVERSIONING=0;SHOWSYSTEMTABLES=0;CONNSETTINGS='\nconn=154616224, \nquery='SELECT \"RentalOrders\".\"rentalorderlinesid\" FROM \"rentalorderlines\"\n\"RentalOrders\" WHERE ( NULL = \"rentalorderid\" ) '\nERROR from backend during send_query: 'ERROR: parser: parse error at or\nnear \"=\"'\n\n\nThe above code changed allows Access 97 to work correctly. I would be happy\nto consider any other possible alternatives.\n\nThanks, Michael\n\n\n\t-----Original Message-----\n\tFrom:\tBruce Momjian [SMTP:[email protected]]\n\tSent:\tSaturday, March 13, 1999 10:14 PM\n\tTo:\tMichael Davis\n\tCc:\[email protected]\n\tSubject:\tRe: [HACKERS] parser enhancement request for 6.5\n\n\tApplied.\n\n\n\t[Charset iso-8859-1 unsupported, filtering to ASCII...]\n\t> I have a problem with Access97 not working properly when entering\nnew\n\t> records using a sub form, i.e. entering a new order/orderlines or\nmaster and\n\t> detail tables. The problem is caused by a SQL statement that\nAccess97 makes\n\t> involving NULL. The syntax that fails is \"column_name\" = NULL.\nThe\n\t> following attachment was provided by -Jose'-. It contains a very\nsmall\n\t> enhancement to gram.y that will allow Access97 to work properly\nwith sub\n\t> forms. Can this enhancement be added to release 6.5?\n\t> \n\t> <<gram.patch>> \n\t> Thanks, Michael\n\t> \n\n\t[Attachment, skipping...]\n\n\n\t-- \n\t Bruce Momjian | http://www.op.net/~candle\n\t [email protected] | (610) 853-3000\n\t + If your life is a hard drive, | 830 Blythe Avenue\n\t + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Tue, 16 Mar 1999 17:55:04 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] parser enhancement request for 6.5" }, { "msg_contents": "> I would like for you to also consider adding the following to gram.y \n> for version 6.5:\n\nI had the same problem (patch not complete, working on more complete\nchanges, screwed up now that I've got to resolve changes) for this set\nof patches as I did for the int8 stuff.\n\nYour suggested feature should have been in the original patch, and I\nhave patches on my machine which would have done it correctly. btw,\nthere is a fundamental shift/reduce conflict trying for \"where NULL =\nvalue\", though \"where value = NULL\" seems to be OK. This is *such* a\nkludge! Thanks to M$...\n\nWonder what else I'll find as I wade through 1000 e-mails? 
:/\n\n - Tom\n", "msg_date": "Sun, 21 Mar 1999 15:06:24 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] parser enhancement request for 6.5" }, { "msg_contents": "Hello!\n\nI tried to create a simple function, to \"variable value validate\" :)\nHere:\ntext *default_text(text* input) {\n\tchar *ret;\n\tchar def[20];\n\tif (input) ret=input;\n\tstrcpy((def+4),\"Default\");\n\t(*((int4*)def)) = strlen(def+4)+4;\n\tret=def;\n\telog(NOTICE,\"Here:%i\", (int4)(*def))\n}\nThis retunrs with the text \"Default\", if input value IS NULL, and the\nwith original value if not.\nSo try it with postgres:\ntron=> create table test (v text);\ntron=> insert into test values(NULL);\ntron=> insert into test values('1');\nCREATE INSERT INSERT\ntron=> select default_text(v) from test;\nNOTICE: Here: 11\nNOTICE: Here: 5\n?column?\n--------\n\n 1\nI don't seek this in the source, but i think, all function, who take a NULL\nvalue as parameter can't return with a NOT NULL value.\nBut why? Ooops... And can i check about an int4 if IS NULL ?\n??\n--\n // [email protected] // http://lsc.kva.hu/ //\n\n", "msg_date": "Fri, 26 Mar 1999 20:37:48 +0100 (NFT)", "msg_from": "\"Vazsonyi Peter[ke]\" <[email protected]>", "msg_from_op": false, "msg_subject": "NULL handling question" }, { "msg_contents": "> I don't seek this in the source, but i think, all function, who take a \n> NULL value as parameter can't return with a NOT NULL value.\n> But why?\n\nPostgres assumes that a NULL input will give a NULL output, and never\ncalls your routine at all. Since NULL means \"don't know\", there is a\nstrong argument that this is correct behavior.\n\n> And can i check about an int4 if IS NULL ?\n\nNot as cleanly as the pass-by-reference data types. I vaguely recall\nthat the input and output routines can look for a second or third\nargument, one of which is a NULL indicator. But that mechanism is not\ngenerally usable in other contexts afaik.\n\n - Tom\n", "msg_date": "Mon, 29 Mar 1999 15:47:15 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NULL handling question" }, { "msg_contents": "Thus spake Thomas Lockhart\n> > I don't seek this in the source, but i think, all function, who take a \n> > NULL value as parameter can't return with a NOT NULL value.\n> > But why?\n> \n> Postgres assumes that a NULL input will give a NULL output, and never\n> calls your routine at all. Since NULL means \"don't know\", there is a\n\nActually, the problem is that it does call the function. After it\nreturns it throws away the result and so the effect is that the function\nnever gets called but in the meantime, the function has to deal with\nNULL inputs for nothing. This has been hanging around since the last\nrelease. I looked at the dispatch code but it wasn't very clear where\nwe have to put the test to do this correctly. Maybe we can get it cleaned\nup before release this time.\n\n\n> strong argument that this is correct behavior.\n\nI agree but recently I said that there was no stored procedures in PostgreSQL\nand someone corrected me pointing out that functions with no return were\nin effect stored procedures. Do the same arguments apply? If a procedure\nis passed a NULL argument, should the side effects be bypassed?\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Mon, 29 Mar 1999 12:27:05 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NULL handling question" }, { "msg_contents": "On Mon, 29 Mar 1999, Thomas Lockhart wrote:\n> Postgres assumes that a NULL input will give a NULL output,\n\nBut why? That is not true in all case, i mean so like: \"FALSE && dont'know\"\nis always FALSE.\n\n> and never calls your routine at all.\n\nBut! I see the output of elogs in function.\nI don't sure about 6.5, i test it not for a long time. The 6.4.x calls my\nfunctions always (with one or more NULL parameters).\n\nThen if the return value has \"pass-by-reference\" type, can i give a NULL or\na NOT NULL value. I don't now realy, but i think it's posible to give NULL\nindicator with int4, bool, etc like type results.\n\nI mean this feature is necessary... Not? ;)\nAny opinion?\n\nSo thans for all.\n\n--\n NeKo@(kva.hu|Kornel.szif.hu) the servant of Crash\n hu:http://lsc.kva.hu en:-- (sorry, my english is...)\n\n", "msg_date": "Mon, 29 Mar 1999 18:32:43 +0100 (NFT)", "msg_from": "\"Vazsonyi Peter[ke]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NULL handling question" }, { "msg_contents": "> > Postgres assumes that a NULL input will give a NULL output,\n> But why? That is not true in all case, i mean so like: \"FALSE && \n> dont'know\" is always FALSE.\n\nYour example shows a flaw in the Postgres premise on this topic,\nperhaps.\n\n> > and never calls your routine at all.\n> But! I see the output of elogs in function.\n> The 6.4.x calls my\n> functions always (with one or more NULL parameters).\n\nIt's been discussed before, and as you and others note it seems the\nbehavior has changed so that functions are called even with NULL\ninput. But the job wasn't finished since the results are ignored.\n\n - Tom\n", "msg_date": "Mon, 29 Mar 1999 18:09:28 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NULL handling question" }, { "msg_contents": "\nAdded to 6.5 beta.\n\n\n> I would like for you to also consider adding the following to gram.y for\n> version 6.5:\n> \n> | NULL_P '=' a_expr\n> { $$ = makeA_Expr(ISNULL, NULL, $3,\n> NULL); }\n> \n> I know there was some discussion about this earlier including comments\n> against this. Access 97 is now generating the following statement and\n> error:\n> \n> SQLDriverConnect(out)='DSN=PostgreSQL;DATABASE=mp;SERVER=192.168.97.2;PORT=5\n> 432;UID=kari;PWD=;READONLY=0;PROTOCOL=6.4;FAKEOIDINDEX=0;SHOWOIDCOLUMN=0;ROW\n> VERSIONING=0;SHOWSYSTEMTABLES=0;CONNSETTINGS='\n> conn=154616224, \n> query='SELECT \"RentalOrders\".\"rentalorderlinesid\" FROM \"rentalorderlines\"\n> \"RentalOrders\" WHERE ( NULL = \"rentalorderid\" ) '\n> ERROR from backend during send_query: 'ERROR: parser: parse error at or\n> near \"=\"'\n> \n> \n> The above code changed allows Access 97 to work correctly. 
I would be happy\n> to consider any other possible alternatives.\n> \n> Thanks, Michael\n> \n> \n> \t-----Original Message-----\n> \tFrom:\tBruce Momjian [SMTP:[email protected]]\n> \tSent:\tSaturday, March 13, 1999 10:14 PM\n> \tTo:\tMichael Davis\n> \tCc:\[email protected]\n> \tSubject:\tRe: [HACKERS] parser enhancement request for 6.5\n> \n> \tApplied.\n> \n> \n> \t[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> \t> I have a problem with Access97 not working properly when entering\n> new\n> \t> records using a sub form, i.e. entering a new order/orderlines or\n> master and\n> \t> detail tables. The problem is caused by a SQL statement that\n> Access97 makes\n> \t> involving NULL. The syntax that fails is \"column_name\" = NULL.\n> The\n> \t> following attachment was provided by -Jose'-. It contains a very\n> small\n> \t> enhancement to gram.y that will allow Access97 to work properly\n> with sub\n> \t> forms. Can this enhancement be added to release 6.5?\n> \t> \n> \t> <<gram.patch>> \n> \t> Thanks, Michael\n> \t> \n> \n> \t[Attachment, skipping...]\n> \n> \n> \t-- \n> \t Bruce Momjian | http://www.op.net/~candle\n> \t [email protected] | (610) 853-3000\n> \t + If your life is a hard drive, | 830 Blythe Avenue\n> \t + Christ can be your backup. | Drexel Hill, Pennsylvania\n> 19026\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 May 1999 20:54:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] parser enhancement request for 6.5" }, { "msg_contents": "> > I would like for you to also consider adding the following to gram.y for\n> > version 6.5:\n> > | NULL_P '=' a_expr\n> > { $$ = makeA_Expr(ISNULL, NULL, $3, NULL); }\n> > I know there was some discussion about this earlier including comments\n> > against this. Access 97 is now generating the following statement and\n> > error...\n\nI'm not certain that this patch should survive. There are at least two\nother places in the parser which should be modified for symmetry (the\n\"b_expr\" and the default expressions) and I recall that these lead to\nmore shift/reduce conflicts. 
Remember that shift/reduce conflicts\nindicate that some portion of the parser logic can *never* be reached,\nwhich means that some feature (perhaps the new one, or perhaps an\nexisting one) is disabled.\n\nThere is currently a single shift/reduce conflict in gram.y, and I'm\nsuprised to find that it is *not* due to the \"NULL_P '=' a_expr\" line.\nI'm planning on touching gram.y to hunt down the shift/reduce conflict\n(from previous work I think it in Stefan's \"parens around selects\"\nmods), and I'll look at the NULL_P issue again also.\n\nI'll reiterate something which everyone probably knows: \"where NULL =\nexpr\" is *not* standard SQL92, and any company selling products which\nimplement this rather than the standard \"where expr is NULL\" should\nmake your \"don't buy\" list, rather than your \"only buy\" list, which is\nwhat they are trying to force you to do :(\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 10 May 1999 14:53:20 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] parser enhancement request for 6.5" }, { "msg_contents": "Thomas Lockhart wrote:\n\n> There is currently a single shift/reduce conflict in gram.y, and I'm\n> suprised to find that it is *not* due to the \"NULL_P '=' a_expr\" line.\n> I'm planning on touching gram.y to hunt down the shift/reduce conflict\n> (from previous work I think it in Stefan's \"parens around selects\"\n> mods), and I'll look at the NULL_P issue again also.\n\n No - not the parens.\n\n Looking at the y.output (produced with -v) I see that the\n conflict is at state 266 when in the SelectStmt the FOR\n keyword of FOR UPDATE has been seen. The SelectStmt is also\n used in CursorStmt.\n\n The rule cursor_clause in CursorStmt results in an\n elog(ERROR) telling that cursors for update are not\n supported. But in fact a\n\n DECLARE x1 CURSOR FOR SELECT * FROM x FOR UPDATE OF x;\n\n doesn't throw an error. So it is the CursorStmt's\n cursor_clause that is currently unreachable in the parser.\n Instead the SelectStmt's for_update_clause has already eaten\n up the FOR UPDATE.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 10 May 1999 18:28:49 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] parser enhancement request for 6.5" } ]
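For reference, the new production does not add "NULL = x" as a real comparison; the makeA_Expr(ISNULL, ...) action rewrites it into the standard null test. Using the query Access 97 generates above, the two spellings are:

	-- what Access 97 sends (non-standard)
	SELECT "rentalorderlinesid" FROM "rentalorderlines"
	WHERE ( NULL = "rentalorderid" );

	-- the SQL92 form the parser maps it to
	SELECT "rentalorderlinesid" FROM "rentalorderlines"
	WHERE "rentalorderid" IS NULL;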
[ { "msg_contents": "I am not suggesting that all functions or operators work this way, because I\nbelieve the current implementation is more standard. I would like to be\nable to over ride this default behavior and change a NULL value into return\nvalue. A great example of this is the Access function nz(int_value). If\nint_value is null, nz() returns 0, it is it not null then int_value is\nreturned. I used this function frequently in Access and would like to take\nadvantage of this in PostgreSQL.\n\nThanks, Michael\t\n\n\t-----Original Message-----\n\tFrom:\tClark Evans [SMTP:[email protected]]\n\tSent:\tTuesday, March 16, 1999 4:23 PM\n\tTo:\tMichael Davis\n\tCc:\[email protected]\n\tSubject:\tRe: [HACKERS] Associative Operators? (Was: Re:\n[NOVICE] Out of frying pan, into fire)\n\n\tMichael Davis wrote:\n\t> Speaking of this, If either LASTNAME or FIRSTNAME is NULL then the\nresult of\n\t> ((LASTNAME || ',' ) || FIRSTNAME) will return NULL. I would like\nto be able\n\t> to alter this such that the result will contain what ever is not\nNULL. I\n\t> tried to create a C function to overcome this but noticed that if\nany\n\t> parameter in my C function is NULL then the C function always\nreturns NULL.\n\t> I saw some references in the archives about this issue but was\nunable to\n\t> determine where it was left. What is the status of this issue?\n\n\tAlthough I feel initial opposition to this idea, on second \n\tconsideration, I guess it is reasonable behavior, in Oracle, \n\tthe NVL function and the DECODE function both handle NULL \n\targuments without having the result be NULL.\n\n\tHowever, I'm unaware of any other exceptions in the Oracle\n\tdatabase on this issue. I believe that user defined functions \n\tare not allowed to have special NULL treatment -- perhaps \n\tOracle has DECODE and NVL hard coded deep in the guts of \n\ttheir query processor, while the other functions arn't.\n\n\tWould a compromise be to add DECODE and NVL ?\n\n\tClark\n", "msg_date": "Tue, 16 Mar 1999 18:02:06 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Associative Operators? (Was: Re: [NOVICE] Out of f\n\trying pan, into fire)" } ]
[ { "msg_contents": "Speaking of decode, this would be another valuable function to have.\n\n\t-----Original Message-----\n\tFrom:\tRoss J. Reedstrom [SMTP:[email protected]]\n\tSent:\tTuesday, March 16, 1999 4:59 PM\n\tTo:\[email protected]\n\tSubject:\tRe: [HACKERS] Associative Operators? (Was: Re:\n[NOVICE] Out of frying pan, into fire)\n\n\tClark Evans wrote:\n\t> \n\t<snipped discussion of 'something || NULL ' returning non-NULL>\n\n\t> However, I'm unaware of any other exceptions in the Oracle\n\t> database on this issue. I believe that user defined functions\n\t> are not allowed to have special NULL treatment -- perhaps\n\t> Oracle has DECODE and NVL hard coded deep in the guts of\n\t> their query processor, while the other functions arn't.\n\t> \n\t> Would a compromise be to add DECODE and NVL ?\n\n\tWhat do DECODE and NVL do?\n\n\tRoss\n\t-- \n\tRoss J. Reedstrom, Ph.D., <[email protected]> \n\tNSBRI Research Scientist/Programmer\n\tComputer and Information Technology Institute\n\tRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Tue, 16 Mar 1999 18:02:50 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Associative Operators? (Was: Re: [NOVICE] Out of f\n\trying pan, into fire)" } ]
[ { "msg_contents": "I've finished that patch for the serial->sequences but I am not happy with it. \n\nI don't think belive my proposed idea is the correct approach. I still want to work on this \nidea, but I'm not sure how to proceed.\n\nHere are my thoughts:\n\n1. If I use my proposed idea, everything works great until you dump/reload the table. The dump \ndoesn't store the data type as serial, but as int with a default and a sequence. So when the \ntable gets reloaded, the relationship nolonger exists. (I think I finally understand the dump \nissues now Tom :))\n\n2. Tom suggested reference counting the seqence in one of his email's (I know it was for this \npurpose, but I still liked the idea). I thought about this for a while, and concluded that this \nis probably not the correct solution since the sequence can be accessed from something else \nbesides the default values. \n\n3. Vadim pointed out that sequences and tables are really seperate entities and now that I \nunderstand them better, I agree with him. The serial type is not really a type, but a shortcut \nto create a int4 data type, a key, and a sequence in one command.\n\nI have two current thoughts on how to proceed from here... I want to toss them out, get some \nmore feedback ... rediscover that I haven't really thought them out well enought and do this all \nover again :)\n\n1. Leave it as it is now. It works, just explain to people that sequences and tables are \nseperate entities, and the serial type is just a shortcut.\n\n2. Create a new data type serial. I haven't thought idea out much, and need to investigate it \nsome more. I'm thinking it would be binary equivilent with the int4 type, and use most of the \nexisting seqence code, but not put the seqence name entry in the pg_class system table. Does \nthis sound like a feasible idea? \n\nThanks for your input.\n\n-Ryan\n", "msg_date": "Tue, 16 Mar 1999 21:49:40 -0700 (MST)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Sequences...." }, { "msg_contents": "Wow. Serious effort here.\n\nRyan Bradetich wrote:\n> 1. Leave it as it is now. It works, just explain to \n> people that sequences and tables are seperate entities, \n> and the serial type is just a shortcut.\n\nI dislike this approach. It seems that it is hiding detail\nthat is necessary for proper maintence. It isn't that\nhard to type in the code. IMHO, the shortcut causes more\nconfusion that it helps. So, I propose a third option:\n\n0. Remove the SERIAL construct.\n\n\n> 2. Create a new data type serial. I haven't thought idea \n> out much, and need to investigate it some more. I'm thinking \n> it would be binary equivilent with the int4 type, and use \n> most of the existing seqence code, but not put the seqence \n> name entry in the pg_class system table. Does this sound \n> like a feasible idea?\n\nThis dosn't sound all that bad... but I really\nwonder what the advantage is. \n\nI vote for \"0\". Sorry to dissappoint.\n\nClark\n", "msg_date": "Wed, 17 Mar 1999 05:36:07 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences...." }, { "msg_contents": "Ryan Bradetich wrote:\n> \n> I've finished that patch for the serial->sequences but I am not happy with it.\n> \n> I don't think belive my proposed idea is the correct approach. I still want to work on this\n> idea, but I'm not sure how to proceed.\n> \n> Here are my thoughts:\n> \n> 1. 
If I use my proposed idea, everything works great until you dump/reload the table. The dump\n> doesn't store the data type as serial, but as int with a default and a sequence. So when the\n> table gets reloaded, the relationship nolonger exists. (I think I finally understand the dump\n> issues now Tom :))\n\nAdd attnum - attribute number of SERIAL column in table -\nto new relation: using this pg_dump will know what\ncolumns are SERIAL ones and what are not...\n\nVadim\n", "msg_date": "Wed, 17 Mar 1999 13:57:48 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences...." }, { "msg_contents": "Thus spake Ryan Bradetich\n> 2. Create a new data type serial. I haven't thought idea out much, and need to investigate it \n> some more. I'm thinking it would be binary equivilent with the int4 type, and use most of the \n> existing seqence code, but not put the seqence name entry in the pg_class system table. Does \n> this sound like a feasible idea? \n\nI like it. I always wondered why serial wasn't a real datatype. I never\nwondered out loud for fear of finding myself with a new project. :-)\n\nIf we are thinking about it, I wonder if we could enhance it somewhat.\nUnder the current system, the attribute values and underlying method\nare divorced but if we make serial a first class type then we can look\nat the values, perhaps, when getting the next number. For example, if\nwe add a row and specify a number, the system can note that and skip\nthat number later. That would certainly fix the dump/restore problem.\n\nAlternatively, maybe we can enforce the serialism of the type. Even\nif the user specifies a value, ignore it and put the next number in\nanyway. Of course, that just brings us back to the dump/restore\nproblem so...\n\nDo as above but allow the user to specify a number as long as it is\navailable and is lower than the next number in the series. When\nthe restore happens, you need to set the start value to the previous\nnext value then the inserts can restore with all the old values. You\ncan even safely insert new records while the restore is happening.\n\nIf we decide to leave things more or less as they are, how about a new\nflag for sequences and indexes that sets a row as system generated\nrather than user specified? We can then set that field when a sequence\nor index is generated by the system such as for the serial type or\nprimary keys. Dump could then be written to ignore these rows on\noutput. That would deal with the current issue.\n\nJust some random thought from someone who woke up too early.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 17 Mar 1999 05:03:00 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences...." }, { "msg_contents": "Thus spake Clark Evans\n> 0. Remove the SERIAL construct.\n\nAck! No! Once we ship a feature I think we better be very careful\nabout dropping it.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 17 Mar 1999 05:08:24 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences...." }, { "msg_contents": "> Wow. 
Serious effort here.\n> \n> Ryan Bradetich wrote:\n> > 1. Leave it as it is now. It works, just explain to \n> > people that sequences and tables are seperate entities, \n> > and the serial type is just a shortcut.\n> \n> I dislike this approach. It seems that it is hiding detail\n> that is necessary for proper maintence. It isn't that\n> hard to type in the code. IMHO, the shortcut causes more\n> confusion that it helps. So, I propose a third option:\n> \n> 0. Remove the SERIAL construct.\n\nWhen they create a serial, we tell them:\n\n\tltest=> create table testx(x serial);\n\tNOTICE: CREATE TABLE will create implicit sequence testx_x_seq for\n\tSERIAL column testx.x\n\tNOTICE: CREATE TABLE/UNIQUE will create implicit index testx_x_key for\n\ttable testx\n\tCREATE\n\nSo it is not so terrible to tell them they have to delete it when\nfinished. We could also add a column to pg_class which tells us this\nsequence was auto-created from oid 12, and remove it when we delete a\ntable. Or, we could name just try to delete a sequence that has the\nname of the table with _seq at the end.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 11:18:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences...." }, { "msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> Thus spake Ryan Bradetich\n>> 2. Create a new data type serial. I haven't thought idea out much,\n>> and need to investigate it some more. I'm thinking it would be binary\n>> equivilent with the int4 type, and use most of the existing seqence\n>> code, but not put the seqence name entry in the pg_class system\n>> table. Does this sound like a feasible idea?\n\n> I like it.\n\nA binary-equivalent type does seem like a handier way to represent\nSERIAL than what we are doing. You still need a way to find the\nassociated sequence object, though, so a table mapping from\ntable-and-column to sequence OID is still necessary. (Unless we\nwere to use the atttypmod field for the column to hold the sequence\nobject's OID? Seems a tad fragile though, since there's no way to\nupdate an atttypmod field in an existing table.)\n\nI don't like the idea of not putting the sequence name into pg_class.\nThat would mean that the sequence object is not directly accessible\nfor housekeeping operations. If you do that, you'd have to invent\nall-new ways to do the following:\n\t* currval, setval, nextval (yes there are scenarios where a\n\t direct nextval on the sequence is useful)\n\t* dump and reload the sequence in pg_dump\n\n> Alternatively, maybe we can enforce the serialism of the type. Even\n> if the user specifies a value, ignore it and put the next number in\n> anyway.\n\nI don't like that at *all*.\n\n> Do as above but allow the user to specify a number as long as it is\n> available and is lower than the next number in the series.\n\nI think better would be that the sequence value is silently forced to\nbe at least as large as the inserted number, whenever a specific number\nis inserted into a SERIAL field. That would ensure we never generate\nduplicates, but not require keeping any extra state.\n\n> If we decide to leave things more or less as they are, how about a new\n> flag for sequences and indexes that sets a row as system generated\n> rather than user specified? 
We can then set that field when a sequence\n> or index is generated by the system such as for the serial type or\n> primary keys.\n\nYes, it'd be nice to think about fixing up primary-key implicit indexes\nwhile we are at it --- they have some of the same problems as SERIAL ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Mar 1999 11:54:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences.... " }, { "msg_contents": "> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > Thus spake Ryan Bradetich\n> >> 2. Create a new data type serial. I haven't thought idea out much,\n> >> and need to investigate it some more. I'm thinking it would be binary\n> >> equivilent with the int4 type, and use most of the existing seqence\n> >> code, but not put the seqence name entry in the pg_class system\n> >> table. Does this sound like a feasible idea?\n> \n> > I like it.\n> \n> A binary-equivalent type does seem like a handier way to represent\n> SERIAL than what we are doing. You still need a way to find the\n> associated sequence object, though, so a table mapping from\n> table-and-column to sequence OID is still necessary. (Unless we\n> were to use the atttypmod field for the column to hold the sequence\n> object's OID? Seems a tad fragile though, since there's no way to\n> update an atttypmod field in an existing table.)\n\natttypmod seems like a perfect idea. We also need a unique type for\nlarge objects, so oid's and large objects can be distinguished. We\ncould do both at the same time, and with Thomas's new type coersion\nstuff, we don't need to create tons of functions for each new type.\n\n> \n> I don't like the idea of not putting the sequence name into pg_class.\n> That would mean that the sequence object is not directly accessible\n> for housekeeping operations. If you do that, you'd have to invent\n> all-new ways to do the following:\n> \t* currval, setval, nextval (yes there are scenarios where a\n> \t direct nextval on the sequence is useful)\n> \t* dump and reload the sequence in pg_dump\n\nYes, let's keep it in pg_class. No reason not to.\n\n> > If we decide to leave things more or less as they are, how about a new\n> > flag for sequences and indexes that sets a row as system generated\n> > rather than user specified? We can then set that field when a sequence\n> > or index is generated by the system such as for the serial type or\n> > primary keys.\n> \n> Yes, it'd be nice to think about fixing up primary-key implicit indexes\n> while we are at it --- they have some of the same problems as SERIAL ...\n\nMy guess is that 6.5 is too close to be making such sweeping changes,\nthough the pg_dump problems with SERIAL certainly make this an important\nissue.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 13:14:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences...." }, { "msg_contents": "Sorry for the igorance, but I'm not quite\nunderstanding. Assuming a new SERIAL type\nis made. What would be the difference \nbetween the new type and an OID?\n\nThanks!\n\nClark\n", "msg_date": "Wed, 17 Mar 1999 23:43:30 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "OID vs SERIAL? 
(Was: Re: [HACKERS] Sequences....)" }, { "msg_contents": "Bruce Momjian wrote:\n> It would only mark the the column as an OID that is used for Serial.\n> The same thing with large objects, so it is an oid, and used for large\n> objects. It allows pg_dump and other programs to understand the use of\n> the oid.\n\nSo, the serial column and the OID column would be one and \nthe same? Why is there a sequence problem then?\n\nClark\n", "msg_date": "Wed, 17 Mar 1999 23:49:09 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OID vs SERIAL? (Was: Re: [HACKERS] Sequences....)" }, { "msg_contents": "> Sorry for the igorance, but I'm not quite\n> understanding. Assuming a new SERIAL type\n> is made. What would be the difference \n> between the new type and an OID?\n\nIt would only mark the the column as an OID that is used for Serial. \nThe same thing with large objects, so it is an oid, and used for large\nobjects. It allows pg_dump and other programs to understand the use of\nthe oid.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 18:49:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OID vs SERIAL? (Was: Re: [HACKERS] Sequences....)" }, { "msg_contents": "> Bruce Momjian wrote:\n> > It would only mark the the column as an OID that is used for Serial.\n> > The same thing with large objects, so it is an oid, and used for large\n> > objects. It allows pg_dump and other programs to understand the use of\n> > the oid.\n> \n> So, the serial column and the OID column would be one and \n> the same? Why is there a sequence problem then?\n\nIt just marks the oid column as being a serial column. We also have\natttypmod, and could easily use that for marking oid's used for serial,\nand those used for large objects. Would be nice.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 18:54:28 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OID vs SERIAL? (Was: Re: [HACKERS] Sequences....)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> [ several of us like making SERIAL a new data type ]\n\n> My guess is that 6.5 is too close to be making such sweeping changes,\n\nI agree, we should probably not expect to squeeze such a change in for\n6.5.\n\nAlthough we've been hand-waving about how this could be done, I think\nit would require either ugly hackery or some nontrivial extensions to\nthe system. AFAIR, for example, there is no data-type-specific code\nthat gets executed when NULL is assigned to a column, therefore no\neasy way for a SERIAL data type to get control and substitute a suitable\ndefault value. Probably we'd end up still having to use a \"DEFAULT\"\nclause to make that happen. It seems to need some thought, anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Mar 1999 20:50:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences.... " }, { "msg_contents": "Clark Evans <[email protected]> writes:\n> Sorry for the igorance, but I'm not quite\n> understanding. 
Assuming a new SERIAL type\n> is made. What would be the difference \n> between the new type and an OID?\n\nThe new type would have an identifying OID, namely the OID assigned\nto its row in pg_type. This OID would be the data type indicator for\nall SERIAL columns.\n\nHowever, for each SERIAL column there would need to be a sequence\nobject, and this sequence object would have its *own* unique OID\n(the OID assigned to its row in pg_class, IIRC).\n\nTo manipulate a SERIAL column you need to be able to look up the OID\nof its sequence, so that you can do things to the sequence. I suggested\nthat storing a copy of the sequence's OID in the column's atttypmod\nfield would be worthwhile, because it could be accessed directly when\nworking on the table containing the SERIAL column, without having to do\na lookup in a system table.\n\nI think it'd still be a good idea to have a system table containing the\nmapping from SERIAL columns to (OIDs of) their associated sequences.\nThe atttypmod idea is just a trick to bypass having to do lookups in\nthis table for the most common operations on a SERIAL column.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Mar 1999 21:25:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OID vs SERIAL? (Was: Re: [HACKERS] Sequences....) " }, { "msg_contents": "> \n> I think it'd still be a good idea to have a system table containing the\n> mapping from SERIAL columns to (OIDs of) their associated sequences.\n> The atttypmod idea is just a trick to bypass having to do lookups in\n> this table for the most common operations on a SERIAL column.\n\nBut what use would a new system table be, if the atttypmod can do it for\nus?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 00:16:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OID vs SERIAL? (Was: Re: [HACKERS] Sequences....)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I think it'd still be a good idea to have a system table containing the\n>> mapping from SERIAL columns to (OIDs of) their associated sequences.\n>> The atttypmod idea is just a trick to bypass having to do lookups in\n>> this table for the most common operations on a SERIAL column.\n\n> But what use would a new system table be, if the atttypmod can do it for\n> us?\n\nTo handle the less common cases --- in particular, the reverse lookup:\ngiven a sequence, is it the implementation of a SERIAL column somewhere?\n(DROP SEQUENCE ought to refuse to drop it if so, I think.)\n\nAlso, assuming that we want inherited tables to share the parent's\nsequence, I suspect there are cases where you need to be able to\nfind all the tables sharing a given sequence. This would be rather\ndifficult if the only representation was atttypmod fields. (You\ncould probably work out something reasonably efficient based on\nthe assumption that all the tables involved must be related by\ninheritance --- but it wouldn't be as easy as a single SELECT,\nand it *could not* be done in pure SQL because atttypmod isn't\nan SQL concept.)\n\nBasically I think that all this structural information ought to be\nexplicitly represented in regular SQL data structures where you can\nmanipulate it (SELECT on it and so forth). 
We can use atttypmod as an\ninternal-to-the-implementation cache to avoid the most common lookup\nthat we'd otherwise need, but it'd be a design mistake not to have the\ninformation represented in a more conventional form.\n\nIt might help to compare this issue to index and inheritance\nrelationships. We have explicit representations in the system tables\nof the inherits-from and is-an-index-of relationships; if we did not,\nmany tasks would become much harder. The backend plays some tricks\ninternally to avoid constantly having to do fullblown lookups in those\ntables, but the tables need to be there anyway. I say we need to add\nan explicit representation of the is-sequence-for-SERIAL relationship\nfor the same reasons, even if we can install an internal shortcut\nthat's used by some backend operations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Mar 1999 00:51:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OID vs SERIAL? (Was: Re: [HACKERS] Sequences....) " }, { "msg_contents": "Thus spake Tom Lane\n> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > Alternatively, maybe we can enforce the serialism of the type. Even\n> > if the user specifies a value, ignore it and put the next number in\n> > anyway.\n> \n> I don't like that at *all*.\n\nI'm not entirely crazy about it myself. I included it as an option because\nit seemed to follow from the definition of serial number. However, in\npractice I imagine that people would find it overly restrictive.\n\n> > Do as above but allow the user to specify a number as long as it is\n> > available and is lower than the next number in the series.\n> \n> I think better would be that the sequence value is silently forced to\n> be at least as large as the inserted number, whenever a specific number\n> is inserted into a SERIAL field. That would ensure we never generate\n> duplicates, but not require keeping any extra state.\n\nI see your point but that could cause problems if you start your sequence\ntoo high. I guess the answer to that is, \"Don't do that.\"\n\nHmm. Are you suggesting that if I insert a number higher than the next\nsequence that the intervening numbers are never available?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 18 Mar 1999 22:38:59 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences...." }, { "msg_contents": "\"D'Arcy J.M. Cain\" wrote:\n> Thus spake Tom Lane\n> > \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > > Alternatively, maybe we can enforce the serialism of the type. Even\n> > > if the user specifies a value, ignore it and put the next number in\n> > > anyway.\n> > I don't like that at *all*.\n> I'm not entirely crazy about it myself. I included it as an option because\n> it seemed to follow from the definition of serial number. However, in\n> practice I imagine that people would find it overly restrictive.\n> \n\nWell, I'd do it a little different. If a sequence is bound\nto a column, and the user provides a value, throw an error! \n\nThis is what I did in Oracle when I implemented system \nassigned keys in a large project that I worked on. For \nnormal operations, this is the way you want it. Any other \nway will be a nightmare! (I added the trigger to find the \nclient application that was being .. 
let's say .. very bad)\n\nNow... for table loading, you have a different issue:\n\n\"D'Arcy J.M. Cain\" wrote:\n> > > Do as above but allow the user to specify a number as long as it is\n> > > available and is lower than the next number in the series.\n> > I think better would be that the sequence value is silently forced to\n> > be at least as large as the inserted number, whenever a specific number\n> > is inserted into a SERIAL field. That would ensure we never generate\n> > duplicates, but not require keeping any extra state.\n> \n> I see your point but that could cause problems if you start your sequence\n> too high. I guess the answer to that is, \"Don't do that.\"\n> \n> Hmm. Are you suggesting that if I insert a number higher than the next\n> sequence that the intervening numbers are never available?\n\nIf you are loading a table with records that are out of sequence,\nthen there is a manual issue involved.\n\nPerhaps what is needed is a \"bind\" function:\n\nFUNCTION bind( TABLE, COLUMN, SEQUENCE ) RETURNS OLD_SEQUENCE;\n\n This procedure binds a table, column to auto-populate\n with a given sequence. It returns the old sequence\n (possibly null) associated with the TABLE/COLUMN. \n The column, of course, must be an INT4 'compatible' type,\n and the SEQUENCE cannot be bound to any other TABLE/COLUMN.\n Also, the max(COLUMN) > currval(SEQUENCE) \n If any of the conditions are false, then the BIND throws\n an error, i.e., don't force the BIND to work.\n Bind, of course, could use the atttypmod field in pg_attribute.\n\n If a sequence is associated with the TABLE/COLUMN during\n dump, then DUMP will automatically treat them together \n as a single unit. If the column appears in an INSERT\n or an UPDATE, and the bound sequence is not null, then\n an error is issued. Likewise, if nextval('sequence') is\n called on a bound sequence, then an error is issued.\n\n unbind(TABLE,COLUMN) is short for bind(TABLE,COLUMN,NULL);\n \"CREATE TABLE x ( y SERIAL );\"\n becomes short for \n \"CREATE TABLE x ( y INT4 ); CREATE SEQUENCE xys; BIND(x,y,xys);\"\n \n\nThis gives you the best of both worlds. If you want to treat\nthe sequence and table/column separately, unbind them. Otherwise,\nyou may bind them together. So, if you are going to manually \nmess with the column, then you must UNBIND the sequence, \ndo your alterations, and then REBIND the sequence back \nto the table.\n\nThoughts? \n\nClark\n", "msg_date": "Fri, 19 Mar 1999 04:36:44 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "FUNCTION bind (TABLE, COLUMN,\n SEQUENCE) returns OLD_SEQUENCE? (Was:\n\tRe: [HACKERS] Sequences....)" }, { "msg_contents": "Caution: Random Thoughts & Trigger Tangent\n\nThis whole discussion got me to thinking about triggers.\nAre we making, in this case, a specialized trigger that \npopulates a table column from a sequence on insert? \nPerhaps it may be instructive to look at the \ngeneral case for enlightenment.\n\nAs an aside, I really don't like Oracle's trigger concept:\n\"CREATE TRIGGER xxx ON INSERT OF tablename AS\"\n\nI'd rather see the trigger object as a stand alone \nblock of code that is \"bound\" to one or more tables.\nThus, the above would be a short hand for:\n\n\"CREATE TRIGGER xxx AS .... ; BIND xxx TO tablename ON INSERT;\"\n\nNow.. if you wanted to _way_ generalize this...\nYou can think of \"INSERT/UPDATE/DELETE\" as mutating actions\ntaken on a table object. What mutating actions does a \nsequence object have? NEXTVAL\n\nSo... 
perhaps the trigger concept could be extended\npast tables to any object that has mutating actions?\n(you have to excuse the lack of rule system knowledge here)\n\nAnd... if you want to go further into the muck, perhaps\nwe could have triggers that have a binding with more\nthan one object.. \n\n> FUNCTION bind( TABLE, COLUMN, SEQUENCE ) RETURNS OLD_SEQUENCE;\n\nBecomes,\n\n FUNCTION bind( SEQUENCE_TRIGGER, TABLE, COLUMN, SEQUENCE ) \n RETURNS OLD_SEQUENCE;\n\nHmm. Oh well, I thought I was going somewhere.... better\nre-name this a tangent.\n\n:) Clark\n", "msg_date": "Fri, 19 Mar 1999 05:20:41 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Trigger Tangent (Was: bind (Was: sequences ))" }, { "msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n>> I think better would be that the sequence value is silently forced to\n>> be at least as large as the inserted number, whenever a specific number\n>> is inserted into a SERIAL field. That would ensure we never generate\n>> duplicates, but not require keeping any extra state.\n\n> Hmm. Are you suggesting that if I insert a number higher than the next\n> sequence that the intervening numbers are never available?\n\nRight. Seems to me that the cost of keeping track of \"holes\" in the\nassignment sequence would vastly exceed the value of not wasting any\nsequence numbers. (Unless you have some brilliant idea for how to do\nit with a minimal amount of storage?)\n\nAlso, the major real use for loading specific values into a SERIAL\ncolumn is for a database dump and reload, where you need to be able\nto preserve the original serial number assignments. In this situation,\ntrying to keep track of \"holes\" would be counterproductive for two reasons:\n\n 1. During the incoming COPY we'd very likely not see the tuples in\n their original order of creation; so a lot of cycles would be\n wasted keeping track of apparent holes that would get filled in\n shortly later. \n\n 2. After we're done loading, any remaining gaps in the usage of\n serial numbers very likely reflect tuples that once existed and\n were deleted. If we re-use those serial values, we may do fatal\n damage to the application's logic, since we have then violated\n the fundamental guarantee of a SERIAL column: never generate any\n duplicate serial numbers.\n\nYou could get around problem #2 if the extra state needed to keep track\nof holes could itself be saved and reloaded by pg_dump. But this is\ngetting way past the point of being an attractive alternative, and the\nimplementation no longer looks very much like a SEQUENCE object...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Mar 1999 09:58:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences.... " } ]
[ { "msg_contents": "> \n> >> I tried to create a C function to overcome this but noticed that if any\n> >> parameter in my C function is NULL then the C function always returns\n> NULL.\n> >> I saw some references in the archives about this issue but was unable\n> to\n> >> determine where it was left. What is the status of this issue?\n> \nYes, this is current behavior.\n\n\t> Would a compromise be to add DECODE and NVL ?\n\nThe Standard has the more flexible function COALESCE, which is already\nimplemented in postgresql.\nSimply say coalesce(field_that_can_be_null,value_to_return_insteadof_null)\n\nAndreas \n", "msg_date": "Wed, 17 Mar 1999 09:29:00 +0100", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Associative Operators? (Was: Re: [NOVICE] Out of f\n\trying pan, into fire)" } ]
[ { "msg_contents": "Hello!\n\n I ran the query\nupdate producers SET cor_id = producer_id % 9 + 1;\n\n and found that result is eqiuvalent to\nupdate producers SET cor_id = producer_id % 9;\n\n I added parens:\nupdate producers SET cor_id = (producer_id % 9) + 1;\n\n and got what I needed.\n\n Is it a bug, a feature, or I just misinterpreted the syntax?\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Wed, 17 Mar 1999 11:49:55 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Modulo syntax" }, { "msg_contents": "> Hello!\n> \n> I ran the query\n> update producers SET cor_id = producer_id % 9 + 1;\n> \n> and found that result is eqiuvalent to\n> update producers SET cor_id = producer_id % 9;\n> \n> I added parens:\n> update producers SET cor_id = (producer_id % 9) + 1;\n> \n> and got what I needed.\n\nLooks like a bug. We have associativity for +, -, * and /, but not %.\n>From gram.y:\n\n\t%left '+' '-'\n\t%left '*' '/'\n\nI will add '%' to that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 11:21:41 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Modulo syntax" }, { "msg_contents": "Fixed now.\n\n> Hello!\n> \n> I ran the query\n> update producers SET cor_id = producer_id % 9 + 1;\n> \n> and found that result is eqiuvalent to\n> update producers SET cor_id = producer_id % 9;\n> \n> I added parens:\n> update producers SET cor_id = (producer_id % 9) + 1;\n> \n> and got what I needed.\n> \n> Is it a bug, a feature, or I just misinterpreted the syntax?\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 17:01:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Modulo syntax" }, { "msg_contents": "> > I ran the query\n> > update producers SET cor_id = producer_id % 9 + 1;\n> > and found that result is eqiuvalent to\n> > update producers SET cor_id = producer_id % 9;\n> > I added parens:\n> > update producers SET cor_id = (producer_id % 9) + 1;\n> > and got what I needed.\n> Looks like a bug. We have associativity for +, -, * and /, but not %.\n> From gram.y:\n> %left '+' '-'\n> %left '*' '/'\n> I will add '%' to that.\n\nThis will not fix the associativity problem, unless you *also* go\nthrough and add the explicit syntax *throughout* gram.y, as is\ncurrently done for '+', '-', etc.\n\nI'm pretty sure that we don't want to do this, since there are way too\nmany other operators which would need the same treatment.\n\nThe correct solution will be to identify the operator as a particular\nclass in scan.l, include that class in the associativity declarations,\nand then handle that class in the body of gram.y. Sort of like we do\nfor generic operators already, but with some discrimination between\nthem. 
To be done right, we should look up the precedence in a db\ntable, to allow new operators to participate in the scheme. In any\ncase, gram.y will become more complex...\n\nUnless we are going to solve this, I would suggest backing out the\nchange in gram.y.\n\n - Tom\n", "msg_date": "Fri, 26 Mar 1999 15:57:06 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Modulo syntax" }, { "msg_contents": "> > > I ran the query\n> > > update producers SET cor_id = producer_id % 9 + 1;\n> > > and found that result is equivalent to\n> > > update producers SET cor_id = producer_id % 9;\n> > > I added parens:\n> > > update producers SET cor_id = (producer_id % 9) + 1;\n> > > and got what I needed.\n> > Looks like a bug. We have associativity for +, -, * and /, but not %.\n> > From gram.y:\n> > %left '+' '-'\n> > %left '*' '/'\n> > I will add '%' to that.\n> \n> This will not fix the associativity problem, unless you *also* go\n> through and add the explicit syntax *throughout* gram.y, as is\n> currently done for '+', '-', etc.\n> \n> I'm pretty sure that we don't want to do this, since there are way too\n> many other operators which would need the same treatment.\n\nI did this for %. I felt it was common enough and similar to / that\npeople should expect it to have / associativity. I did not play with\nany other operators.\n\n> \n> The correct solution will be to identify the operator as a particular\n> class in scan.l, include that class in the associativity declarations,\n> and then handle that class in the body of gram.y. Sort of like we do\n> for generic operators already, but with some discrimination between\n> them. To be done right, we should look up the precedence in a db\n> table, to allow new operators to participate in the scheme. In any\n> case, gram.y will become more complex...\n\nYikes. Don't think we want to go there.\n> \n> Unless we are going to solve this, I would suggest backing out the\n> change in gram.y.\n\nI would like to keep % as a special case like /.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 Mar 1999 11:47:06 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Modulo syntax" } ]
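A quick SQL check of the precedence fix, assuming a backend built with the patched gram.y:

    SELECT (13 % 9) + 1;   -- 5, with explicit grouping
    SELECT 13 % 9 + 1;     -- also 5 once % binds at the same level as * and /
    -- before the fix, the unparenthesized form was not grouped this way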
[ { "msg_contents": "unsubscribe\n\n", "msg_date": "Wed, 17 Mar 1999 14:30:40 +-4-30", "msg_from": "\"R. Jalili\" <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "Hello!\n\n A friend of me got errors with float8, so I forward his questions here.\n AFAIK last error (float8-to-date) had been fixed in CURRENT, but what\nabout other errors?\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n---------- Forwarded message ----------\nDate: Wed, 17 Mar 1999 12:34:05 +0300 (MSK)\nFrom: Artem Chuprina <[email protected]>\nTo: [email protected]\nSubject: Re: PostgreSQL 6.4.2: float8::text\n\nran=> create table test (f float4, ff float8);\nCREATE\nran=> insert into test values (0,0);\nINSERT 149524 1\nran=> select f||'$' from test;\nERROR: There is more than one possible operator '||' for types 'float4' and 'unknown'\n You will have to retype this query using an explicit cast\nran=> select ff||'$' from test;\nERROR: There is more than one possible operator '||' for types 'float8' and 'unknown'\n You will have to retype this query using an explicit cast\nran=> select f::text from test;\nERROR: Function 'text(float4)' does not exist\n There is more than one function that satisfies the given argument types\n You will have to retype your query using explicit typecasts\nran=> select ff::text from test;\ntext \n----------------------------\nSat Jan 01 03:00:00 2000 MSK\n(1 row)\n\n", "msg_date": "Wed, 17 Mar 1999 13:05:48 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 6.4.2: float8::text (fwd)" } ]
[ { "msg_contents": "I am still not able to pg_dump my data after recovering from this disaster.\nMy files are now segmented at 1GB, vacuuming is fine, but pg_dump has a\nproblem \"locating the template1 database\".\n\nI sure hope someone can help me come up with a way to safely backup this\ndata. Someone sent me a patch for the Linux Kernel that will allow it to\nhandle files > 2GB, but that won't help me with my backup problems.\n\nThanks,\n\nTim Perdue\ngeocrawler.com\n\n\n\n\n-----Original Message-----\nFrom: Tatsuo Ishii <[email protected]>\nTo: Peter Mount <[email protected]>\nCc: [email protected] <[email protected]>; Tom Lane <[email protected]>;\[email protected] <[email protected]>\nDate: Tuesday, March 16, 1999 10:45 PM\nSubject: Re: [HACKERS] Problems with >2GB tables on Linux 2.0\n\n\n>>Just a question. Does your patch let vacuum handle segmented tables?\n>>--\n>>Tatsuo Ishii\n>\n>>It simply reduces the size of each segment from 2Gb to 1Gb. The problem\n>>was that some OS's (Linux in my case) don't like files exactly 2Gb in\n>>size. I don't know how vacuum interacts with the storage manager, but in\n>>theory it should be transparent.\n>\n>Ok. So we still have following problem:\n>\n>test=> vacuum smallcat;\n>NOTICE: Can't truncate multi-segments relation smallcat\n>VACUUM\n>\n>Maybe this should be added to TODO if it's not already there.\n>--\n>Tatsuo Ishii\n\n", "msg_date": "Wed, 17 Mar 1999 07:08:52 -0600", "msg_from": "\"Tim Perdue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0 " }, { "msg_contents": "Thus spake Tim Perdue\n> I am still not able to pg_dump my data after recovering from this disaster.\n> My files are now segmented at 1GB, vacuuming is fine, but pg_dump has a\n> problem \"locating the template1 database\".\n\nI recall once creating a shell script that dumped a table. I can't\nremember why I didn't use pg_dump but the shell script was pretty simple.\nIf you can read the data, you should be able to make this work. Worst\ncase you may have to handcraft each table dump but beyond that it should\nwork pretty good.\n\nHere's how I would do it in Python. Requires the PostgreSQL module\nfound at http://www.druid.net/pygresql/.\n\n\n#! /usr/bin/env python\nfrom pg import DB\ndb = DB() # opens default database on local machine - adjust as needed\n\nfor t in db.query(\"SELECT * FROM mytable\").dictresult():\n print \"\"\"INSERT INTO mytable (id, desc, val)\n VALUES (%(id), '%(desc)', %(val));\"\"\" % t\n\nThen feed that back into a new database. I assume you have the schema\nbacked up somewhere. You may have to get fancy with formatting special\ncharacters and stuff. If you can still read the schema you may be able\nto automate this more. It would depend on the amount of tables in your\ndatabase and the complexity. See the get_tables() and get_attnames()\nmethods in PyGreSQL.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 17 Mar 1999 08:47:17 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems with >2GB tables on Linux 2.0" } ]
[ { "msg_contents": "I think this was intended for the list. :) Clark\n\n-------- Original Message --------\nFrom: Terry Mackintosh <[email protected]>\nSubject: Re: [HACKERS] Sequences....\nTo: Clark Evans <[email protected]>\n\nHi all\n\nOn Wed, 17 Mar 1999, Clark Evans wrote:\n\n> > 0. Remove serial type.\n> >\n> > 2. Create a new data type serial. I haven't thought idea \n> > out much, and need to investigate it some more. I'm thinking \n> > it would be binary equivilent with the int4 type, and use \n> > most of the existing seqence code, but not put the seqence \n> > name entry in the pg_class system table. Does this sound \n> > like a feasible idea?\n> \n> This dosn't sound all that bad... but I really\n> wonder what the advantage is. \n\nWell, as I'm starting to use the serial \"type\" a fair amount, I feel I\nshould pop up here, for what it's worth:)\n\nI kind of like option 2, maybe serial can even take some paramaters so\nthat when a dump/reload is done it will know where to take up? and it can\nbe put in the dump as SERIAL with parameters, instead of as INT4?\nOr maybe at the end of the reload it's value can be set, that might be\ncleaner.\n\nThis brings up a related issue, the fact that a dump file does NOT look\nlike the origenal script that made the database, that is things like\nSERIAL, PRIMARY KEY, REFERENCES table (field), VIEW, and probably some\nother things that I missed, none of these things are reconstruced in the\ndump file in any intuative way.\n\nAll these things can be done in a more round about, more obtuse way, but\nthe whole point of them (seems to me any way) is to make the source file\neasier to read and understand. Am I off base here? if so then what is the\npoint?\n\nSo, working off the last point, should not the dump file, aside from it's\ndata, look like the origanal script that made the database? so if a table\nhas a PRIMARY KEY, then instead of an index at the bottom of the dump\nfile, the table should have ... PRIMARY KEY... in it.\n\nThe main reasones I use such constructs are 1. readability and 2.\nconvienance. As relates to a dump file, #1 is lost and #2 does not\nmatter, unless maybe one wants to hand edit the dump file for some\nreason.\n\nJust my thoughts,\nHave a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner I'm excited about life! How about YOU!?\nProfessional Web Hosting and site design to include programming\nProudly powered by R H Linux 4.2, Apache 1.3.x, PHP 3.x, PostgreSQL 6.x\n-----------------------------------------------------------------------\nOnly if you know where you're going can you get there.\n", "msg_date": "Wed, 17 Mar 1999 18:09:27 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: Re: [HACKERS] Sequences....]" }, { "msg_contents": "Hi Clark and all\n\nOn Wed, 17 Mar 1999, Clark Evans wrote:\n\n> I think this was intended for the list. :) Clark\n\nYes, thanks, I've gotten used to another list were the list address is in\nthe ReplyTo mail header:)\n\nHave a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner I'm excited about life! 
How about YOU!?\nProfessional Web Hosting and site design to include programming\nProudly powered by R H Linux 4.2, Apache 1.3.x, PHP 3.x, PostgreSQL 6.x\n-----------------------------------------------------------------------\nOnly if you know where you're going can you get there.\n", "msg_date": "Wed, 17 Mar 1999 18:09:27 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: Re: [HACKERS] Sequences....]" }, { "msg_contents": "Hi Clark and all\n\nOn Wed, 17 Mar 1999, Clark Evans wrote:\n\n> I think this was intended for the list. :) Clark\n\nYes, thanks, I've gotten used to another list where the list address is in\nthe ReplyTo mail header:)\n\nHave a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner I'm excited about life! How about YOU!?\nProfessional Web Hosting and site design to include programming\nProudly powered by R H Linux 4.2, Apache 1.3.x, PHP 3.x, PostgreSQL 6.x\n-----------------------------------------------------------------------\nOnly if you know where you're going can you get there.\n\n", "msg_date": "Wed, 17 Mar 1999 14:34:14 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: [HACKERS] Sequences....]" } ]
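For reference, roughly what the two forms of the schema look like; the implicit sequence and index names below are illustrative, since the exact names are generated by the backend:

    -- what the original script says:
    CREATE TABLE x (y SERIAL);

    -- approximately what a dump reconstructs today:
    CREATE SEQUENCE x_y_seq;
    CREATE TABLE x (y int4 DEFAULT nextval('x_y_seq') NOT NULL);
    CREATE UNIQUE INDEX x_y_key ON x (y);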
[ { "msg_contents": "\nI just deleted all 50,000 rows from a table that has one int4 and one text\nfield.\n\nWhy does vacuum take so long? If all the rows are superceeded, so no\nrows actually have to be moved, should it take so long for vacuum to\nrun?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 13:25:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum slowness" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I just deleted all 50,000 rows from a table that has one int4 and one text\n> field.\n\n> Why does vacuum take so long? If all the rows are superceeded, so no\n> rows actually have to be moved, should it take so long for vacuum to\n> run?\n\nDo you have any indexes on the table? I've noticed (and complained in\nthe past ;-)) that vacuuming a table takes unreasonably long if there\nare a lot of dead index entries to be cleaned. It seems faster to drop\nand recreate the index in a case like that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Mar 1999 20:52:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum slowness " }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I just deleted all 50,000 rows from a table that has one int4 and one text\n> field.\n> \n> Why does vacuum take so long? If all the rows are superceeded, so no\n> rows actually have to be moved, should it take so long for vacuum to\n> run?\n\nIndices?\n\nVadim\n", "msg_date": "Thu, 18 Mar 1999 08:53:41 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum slowness" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > I just deleted all 50,000 rows from a table that has one int4 and one text\n> > field.\n> > \n> > Why does vacuum take so long? If all the rows are superceeded, so no\n> > rows actually have to be moved, should it take so long for vacuum to\n> > run?\n> \n> Indices?\n\nYes. That seems to be the problem. 45k lines, COPY is fast, DELETE is\nfast if there are no indexes. With an index, it takes a long time. \nBummer. Ideas?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 21:45:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: vacuum slowness" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Indices?\n> \n> Yes. That seems to be the problem. 45k lines, COPY is fast, DELETE is\n> fast if there are no indexes. With an index, it takes a long time.\n> Bummer. Ideas?\n\nI hope to implement space re-using and address vacuum slowness\nin 6.6\n\nVadim\n", "msg_date": "Thu, 18 Mar 1999 12:06:42 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: vacuum slowness" }, { "msg_contents": "Vadim Mikheev wrote:\n> I hope to implement space re-using and address vacuum slowness in 6.6\n\nAre you intending to keep it so that you could still run PostgreSQL\non top of a WORM (Write once Read Many) device? I'm plannng to\nput some databases directly on these new write-only DVD drives\ncoming out.... 
I'd want to keep the indexes on a (WMRM) hard drive though.\n\n\n:) Clark\n", "msg_date": "Thu, 18 Mar 1999 05:36:34 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: vacuum slowness" }, { "msg_contents": "Clark Evans wrote:\n> \n> Vadim Mikheev wrote:\n> > I hope to implement space re-using and address vacuum slowness in 6.6\n> \n> Are you intending to keep it so that you could still run PostgreSQL\n> on top of a WORM (Write once Read Many) device? I'm planning to\n> put some databases directly on these new write-once DVD drives\n> coming out.... I'd want to keep the indexes on a (WMRM) hard drive though.\n\nIs it possible to use WORM now?\n\nVadim\n", "msg_date": "Thu, 18 Mar 1999 13:46:20 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: vacuum slowness" }, { "msg_contents": "Vadim Mikheev wrote:\n> \n> Is it possible to use WORM now?\n> \n\nI don't know, but it's on my to-try list. I'm hoping it\nwill work (got all excited when I was reading the academic papers)\nThis was one of the goals of the database... \n\nIt just seems for situations where a high degree of auditability is \nneeded that running the database on top of a WORM is a fantastic idea.\nI'm writing a bookkeeping system, and think it would be a very\nvaluable reason to move to 'free software'. It's the killer feature\nOracle doesn't have. Well, academically it sounds nice. *smirk*\n\nIt's all speculation, but fun speculation anyway...\n\n:) Clark\n\nP.S. Perhaps it's not all that great of an idea. I intend to journal\nall of the interactions with the database to a CDR, I was just hoping\nto get it for free.... *evil grin*\n", "msg_date": "Thu, 18 Mar 1999 07:32:27 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: vacuum slowness" }, { "msg_contents": "\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Vadim Mikheev\n> Sent: Thursday, March 18, 1999 2:07 PM\n> To: Bruce Momjian\n> Cc: [email protected]\n> Subject: Re: [HACKERS] Re: vacuum slowness\n> \n> \n> Bruce Momjian wrote:\n> > \n> > >\n> > > Indices?\n> > \n> > Yes. That seems to be the problem. 45k lines, COPY is fast, DELETE is\n> > fast if there are no indexes. With an index, it takes a long time.\n> > Bummer. Ideas?\n> \n> I hope to implement space re-using and address vacuum slowness\n> in 6.6\n>\n\nWill we be able to vacuum without blocking same-table writers in v6.5?\nOr will VACUUM block same-table readers, as it does currently?\n \nThanks.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 18 Mar 1999 17:19:40 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re: vacuum slowness" }, { "msg_contents": "> > Is it possible to use WORM now?\n> I don't know, but it's on my to-try list. I'm hoping it\n> will work (got all excited when I was reading the academic papers)\n> This was one of the goals of the database...\n\n... which we probably gave up when we removed time travel, quite a\nwhile ago.\n\n - Tom\n", "msg_date": "Fri, 26 Mar 1999 16:38:30 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: vacuum slowness" } ]
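A sketch of the drop-and-recreate workaround Tom mentions earlier in this thread, with invented table and index names; note that if the backend dies between the steps, the index has to be rebuilt by hand, which is exactly the risk of this approach:

    DROP INDEX i_big;                -- dead index entries go away with the index
    VACUUM ANALYZE big;              -- now only the heap has to be scanned
    CREATE INDEX i_big ON big (id);  -- rebuild from the surviving rows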
[ { "msg_contents": "> > \n> > All except of subqueries with aggregates in target list.\n> \n> I am confused. How do I rewrite this to use exists?\n> \n> SELECT keyname\n> FROM markmain\n> WHERE mark_id NOT IN(SELECT mark_id\n> FROM markaty)\n> \n> \n> Even if I use IN instead of NOT IN, I don't see how to do it without\n> making it a correlated subquery.\n> \n> SELECT keyname\n> FROM markmain\n> WHERE EXISTS (SELECT mark_id\n> FROM markaty\n> \t\t WHERE markmain.mark_id = markaty.mark_id)\n> \n> This is a correlated subquery. It did not use hash, but it did use the\n> index on markaty:\n> \n> \tSeq Scan on markmain (cost=16.02 size=334 width=12)\n> \t SubPlan\n> \t -> Index Scan using i_markaty on markaty (cost=2.10 size=3 width=4)\n> \n> While the index usage is good, the fact is the subquery is executed for\n> every row of markmain, isn't it? That's one query executed for each row\n> in markmain, isn't it?\n\nI just tried this with NOT EXISTS, and it was VERY fast. Can we discuss\nthe issues, and perhaps auto-rewrite these as exists. Is that always\nbetter than hash?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 13:32:28 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Subqueries and indexes" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > While the index usage is good, the fact is the subquery is executed for\n> > every row of markmain, isn't it? That's one query executed for each row\n> > in markmain, isn't it?\n> \n> I just tried this with NOT EXISTS, and it was VERY fast. Can we discuss\n> the issues, and perhaps auto-rewrite these as exists. Is that always\n> better than hash?\n\nNot always, but there is no hashing currently, so you could try\nre-writing for IN/NOT IN subqueries without aggregates...\n\nKeep in mind that expression subqueries must return <= 1 rows,\nso it's probably better don't rewrite them (or you'll have to\nadd this check to EXISTS code).\n\nVadim\n", "msg_date": "Thu, 18 Mar 1999 09:34:16 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Subqueries and indexes" } ]
[ { "msg_contents": "I have made a change so that all operators except \"+-*/%\" are now\nleft-associative, rather than being non-associative:\n\n\tselect 'a' || 'b' || 'c';\n\nThe old code does:\n\n\ttest=> select 'a' || 'b' || 'c';\n\tERROR: parser: parse error at or near \"||\"\n\nIs this a problem for people? It will now not complain about missing\nparens, but left-associate all these operations. Any problems with\nthat?\n\nThe code still associates \"+-*/%\" so that \"*/%\" is done first, then\n\"+-\".\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 15:57:50 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Use of multiple || and precidence" } ]
[ { "msg_contents": "\nNOTICE: SIAssignBackendId: discarding tag 2147483505\n/extra/env1/logs/postmaster.log: Wed Mar 17 13:07:55 PST 1999: FATAL 1: Backend cache invalidation initialization failed\n\nIt seems to be happening when my databases cross some size threshold.\n\nProblem occurrs under freebsd and linux.\n\n\n", "msg_date": "Wed, 17 Mar 1999 13:09:59 -0800", "msg_from": "Jason Venner <[email protected]>", "msg_from_op": true, "msg_subject": "What does this mean: SIAssignBackendId: discarding tag 2147483505 in\n\t6.3.4" } ]
[ { "msg_contents": "Yes!!!!\n\n\t-----Original Message-----\n\tFrom:\tBruce Momjian [SMTP:[email protected]]\n\tSent:\tWednesday, March 17, 1999 1:58 PM\n\tTo:\[email protected]\n\tSubject:\t[HACKERS] Use of multiple || and precidence\n\n\tI have made a change so that all operators except \"+-*/%\" are now\n\tleft-associative, rather than being non-associative:\n\n\t\tselect 'a' || 'b' || 'c';\n\n\tThe old code does:\n\n\t\ttest=> select 'a' || 'b' || 'c';\n\t\tERROR: parser: parse error at or near \"||\"\n\n\tIs this a problem for people? It will now not complain about\nmissing\n\tparens, but left-associate all these operations. Any problems with\n\tthat?\n\n\tThe code still associates \"+-*/%\" so that \"*/%\" is done first, then\n\t\"+-\".\n\n\t-- \n\t Bruce Momjian | http://www.op.net/~candle\n\t [email protected] | (610) 853-3000\n\t + If your life is a hard drive, | 830 Blythe Avenue\n\t + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Wed, 17 Mar 1999 15:46:29 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Use of multiple || and precidence" } ]
[ { "msg_contents": "\nSELECT 12 & 4;\n\nAny reason we don't support this? I know Sybase and Mysql do. \n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Wed, 17 Mar 1999 17:13:48 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "&" } ]
[ { "msg_contents": "I have added a new postgres -O option to override restrictions, so\nsystem tables can be directly manipulated.\n\nI have also modified initdb, so it uses this new option, and removed the\nhacks needed to change xpg_ to pg_.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 17:51:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "New postgres -O option for system tables" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I have added a new postgres -O option to override restrictions, so\n> system tables can be directly manipulated.\n> \n>\n\nThis seems very useful.\n\n:) Clark\n", "msg_date": "Wed, 17 Mar 1999 23:00:37 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New postgres -O option for system tables" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > I have added a new postgres -O option to override restrictions, so\n> > system tables can be directly manipulated.\n> > \n> >\n> \n> This seems very useful.\n\nYea. The old initdb would create xpg_, then do system table updates and\n'mv' to make it a pg_ table.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 18:09:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New postgres -O option for system tables" } ]
[ { "msg_contents": "To load the cache in several cases, we have to do it with sequential\nscans, because there is no index on these tables, partially because they\ndidn't have multi-key indexes in the old days. I want to add system\nindexes and have the cache use them before we go beta. I should finish\nby the end of the weekend.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 18:08:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "cache lookups" } ]
[ { "msg_contents": "Remember, folks, there is a pg_attribute.atttypmod field that is an\nint32 that is ready for use by other types.\n\nFor example, Clark and I were discussing putting the oid of the sequence\ninto that column, destruction of the sequence could be automatic. Would\nalso make pg_dump understand which sequences are valid for dumping.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 19:12:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "atttypmod usage" } ]
[ { "msg_contents": "It won't be long for me until that happens. Not long at all. Considering\nI've amassed 2.2 GB in just 3-4 weeks....\n\nI'm really surprised to see that Linux has such a lame file limitation. I\nthink even the macintosh can handle single files in the terabyte range now.\n\nTim\n\n\n-----Original Message-----\nFrom: Bruce Momjian <[email protected]>\nTo: Peter T Mount <[email protected]>\nCc: [email protected] <[email protected]>; [email protected]\n<[email protected]>\nDate: Wednesday, March 17, 1999 5:29 PM\nSubject: Re: [HACKERS] \"CANNOT EXTEND\" -\n\n\n>> > pg_dump only dumps a flat unix file. That can be any size your OS\n>> > supports. It does not segment. However, a 2gig table will dump to a\n>> > much smaller version than 2gig because of the overhead for every\nrecord.\n>>\n>> Hmmm, I think that, as some people are now using >2Gig tables, we should\n>> think of adding segmentation to pg_dump as an option, otherwise this is\n>> going to become a real issue at some point.\n>\n>So the OS doesn't get a table over 2 gigs. Does anyone have a table\n>that dumps a flat file over 2gig's, whose OS can't support files over 2\n>gigs. Never heard of a complaint.\n>\n>\n>>\n>> Also, I think we could do with having some standard way of dumping and\n>> restoring large objects.\n>\n>I need to add a separate large object type.\n>--\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Wed, 17 Mar 1999 18:42:47 -0600", "msg_from": "\"Tim Perdue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] \"CANNOT EXTEND\" -" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> It won't be long for me until that happens. Not long at all. Considering\n> I've amassed 2.2 GB in just 3-4 weeks....\n\nHow large are your flat files. Also, the postgresql problems are with\nfiles that are exactly 2gig. It is possible files over that will be ok.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 21:38:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] \"CANNOT EXTEND\" -" } ]
[ { "msg_contents": "> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > > Thus spake Ryan Bradetich\n> > >> 2. Create a new data type serial. I haven't thought idea out much,\n> > >> and need to investigate it some more. I'm thinking it would be binary\n> > >> equivilent with the int4 type, and use most of the existing seqence\n> > >> code, but not put the seqence name entry in the pg_class system\n> > >> table. Does this sound like a feasible idea?\n> > \n> > > I like it.\n> > \n> > A binary-equivalent type does seem like a handier way to represent\n> > SERIAL than what we are doing. You still need a way to find the\n> > associated sequence object, though, so a table mapping from\n> > table-and-column to sequence OID is still necessary. (Unless we\n> > were to use the atttypmod field for the column to hold the sequence\n> > object's OID? Seems a tad fragile though, since there's no way to\n> > update an atttypmod field in an existing table.)\n> \n> atttypmod seems like a perfect idea. We also need a unique type for\n> large objects, so oid's and large objects can be distinguished. We\n> could do both at the same time, and with Thomas's new type coersion\n> stuff, we don't need to create tons of functions for each new type.\n\nI'll play around with this idea for a while and see what I come up with. I'm \nnot sure if I completely understand, but I'll form questions as I continue to \ndig into the source code. :)\n\n> > \n> > I don't like the idea of not putting the sequence name into pg_class.\n> > That would mean that the sequence object is not directly accessible\n> > for housekeeping operations. If you do that, you'd have to invent\n> > all-new ways to do the following:\n> > \t* currval, setval, nextval (yes there are scenarios where a\n> > \t direct nextval on the sequence is useful)\n> > \t* dump and reload the sequence in pg_dump\n> \n> Yes, let's keep it in pg_class. No reason not to.\n\nOk, you convicned me.\n\n> > > If we decide to leave things more or less as they are, how about a new\n> > > flag for sequences and indexes that sets a row as system generated\n> > > rather than user specified? We can then set that field when a sequence\n> > > or index is generated by the system such as for the serial type or\n> > > primary keys.\n> > \n> > Yes, it'd be nice to think about fixing up primary-key implicit indexes\n> > while we are at it --- they have some of the same problems as SERIAL ...\n\nI'm not following this... When a table is dropped, all the indexes for that \ntable get dropped. The indexes are associated with a table, whereas the \nsequences are just sequences not associated with a table. Am I understanding \nthe issue correctly?\n\n> My guess is that 6.5 is too close to be making such sweeping changes,\n> though the pg_dump problems with SERIAL certainly make this an important\n> issue.\n\nDo you want me to try and get the serial stuff finished before 6.5? or should we \nwait?\n\n-Ryan\n", "msg_date": "Wed, 17 Mar 1999 18:11:24 -0700 (MST)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Sequences...." }, { "msg_contents": "Ryan Bradetich <[email protected]> writes:\n>>>> Yes, it'd be nice to think about fixing up primary-key implicit indexes\n>>>> while we are at it --- they have some of the same problems as SERIAL ...\n\n> I'm not following this... When a table is dropped, all the indexes for\n> that table get dropped. 
The indexes are associated with a table,\n> whereas the sequences are just sequences not associated with a table.\n> Am I understanding the issue correctly?\n\nIt's mainly a pg_dump issue: can pg_dump identify such an index as\nhaving come from a PRIMARY KEY spec rather than a separate CREATE INDEX\ncommand? This goes back to the complaint about pg_dump not being able\nto fully reconstruct the logical connections in a database.\n\nA related issue is inheritance: if I say PRIMARY KEY in the definition\nof a table, and then make a child table that inherits from that table,\nI'd expect the child's field to act like a PRIMARY KEY too --- in other\nwords it should have a unique index created for it. Right now I don't\nbelieve that that happens.\n\nWhat it all comes down to is that mapping these structures into \"lower\nlevel\" objects without remembering the higher-level structure isn't\nfully satisfactory. We need an explicit, persistent representation of\nthe PRIMARY KEY attribute. In that way it's the same problem as SERIAL.\nThe best solutions might be different, however.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Mar 1999 21:33:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences.... " }, { "msg_contents": "> > > Yes, it'd be nice to think about fixing up primary-key implicit indexes\n> > > while we are at it --- they have some of the same problems as SERIAL ...\n> \n> I'm not following this... When a table is dropped, all the indexes for that \n> table get dropped. The indexes are associated with a table, whereas the \n> sequences are just sequences not associated with a table. Am I understanding \n> the issue correctly?\n\nNot sure. It just seems to relate.\n\n> \n> > My guess is that 6.5 is too close to be making such sweeping changes,\n> > though the pg_dump problems with SERIAL certainly make this an important\n> > issue.\n> \n> Do you want me to try and get the serial stuff finished before 6.5? Or should we \n> wait?\n\nProbably wait, unless we can do it easily.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Mar 1999 21:40:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences...."}, { "msg_contents": "Thus spake Ryan Bradetich\n> > > > If we decide to leave things more or less as they are, how about a new\n> > > > flag for sequences and indexes that sets a row as system generated\n> > > > rather than user specified? We can then set that field when a sequence\n> > > > or index is generated by the system such as for the serial type or\n> > > > primary keys.\n> > > \n> > > Yes, it'd be nice to think about fixing up primary-key implicit indexes\n> > > while we are at it --- they have some of the same problems as SERIAL ...\n\nI was thinking more for pg_dump. If it is a system index, don't dump it.\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 17 Mar 1999 23:04:40 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequences...." } ]
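To make the pg_dump point concrete: the two scripts below define the same low-level objects, but only the first records that the index implements a primary key (the dumped index name is illustrative):

    -- original schema
    CREATE TABLE t (id int4 PRIMARY KEY);

    -- what a dump reconstructs instead
    CREATE TABLE t (id int4 NOT NULL);
    CREATE UNIQUE INDEX t_pkey ON t (id);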
[ { "msg_contents": ">\n> Jan Wieck wrote:\n> > Why not - where do you have height information of the entire\n> > earth?\n>\n> I have no idea where you'd get that info. I was just\n> joking, you are going way beyond the call of duty.\n> It's cool.\n>\n> :) clark\n\n That's no joke!\n\n Well - you wanted mountains - there they are (Example 2).\n But please don't tell me next you want planes in the air,\n smog over NY and dolphins in the sea :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 18 Mar 1999 03:48:23 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "One more globe" }, { "msg_contents": "On Thu, 18 Mar 1999, Jan Wieck wrote:\n\n> >\n> > Jan Wieck wrote:\n> > > Why not - where do you have height information of the entire\n> > > earth?\n> >\n> > I have no idea where you'd get that info. I was just\n> > joking, you are going way beyond the call of duty.\n> > It's cool.\n> >\n> > :) clark\n> \n> That's no joke!\n> \n> Well - you wanted mountains - there they are (Example 2).\n> But please don't tell me next you want planes in the air,\n> smog over NY and dolphins in the sea :-)\n\nI love it...\n\nCan ya really do the dolphins though? :) \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 18 Mar 1999 00:00:11 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] One more globe" }, { "msg_contents": "> >\n> > Jan Wieck wrote:\n> > > Why not - where do you have height information of the entire\n> > > earth?\n> >\n> > I have no idea where you'd get that info. I was just\n> > joking, you are going way beyond the call of duty.\n> > It's cool.\n> >\n> > :) clark\n> \n> That's no joke!\n> \n> Well - you wanted mountains - there they are (Example 2).\n> But please don't tell me next you want planes in the air,\n> smog over NY and dolphins in the sea :-)\n\nJan, is there no limit to what you can do?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 00:19:28 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] One more globe" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > Well - you wanted mountains - there they are (Example 2).\n> > But please don't tell me next you want planes in the air,\n> > smog over NY and dolphins in the sea :-)\n>\n> Jan, is there no limit to what you can do?\n\n Rayshade has no wireframe objects like POVray. A wireframe is\n an elegant way to describe complex things like e.g. screws.\n In rayshade you have only some primitves like sphere, box,\n cone, cylinder and some flat things like triangle, disc and\n polygon. The problem with missing wireframes is that things\n like characters are really hard to define.\n\n Anything to build must be described as combinations of such\n primitive objects. This process is called Constructive Solid\n Geometry (CSG). 
For example, to make a hole in a wall you take\n a box and scale it to 5.0, 0.2, 3.0 (x,y,z). Now you take\n another box, scale it to 1.0, 0.21, 0.5, move it into the\n center and subtract it from the wall. There is now the hole\n where you can put in the window. If you build four walls\n don't forget the hole for the door :-)\n\n Another powerful primitive is the heightfield (what I've used\n to build the mountains on the map). It uses a special file of\n raw floating point values that describe the altitude of a\n point on a square plane. I've used the etopo5 topography data\n (altitudes in meters for every 5 minutes of the earth,\n 4320x2160 points though) and converted that into such a\n heightfield plus a color image ranging from deep blue at\n ocean bottom to white on the altitude of the Himalaya.\n\n Look here for some other examples of what's possible with these\n few features:\n\n http://www-graphics.stanford.edu/~cek/rayshade/gallery/gallery.html\n\n My favorites are Chem, Trees, Magic Chain and of course the\n scenes from Nathan Obrien! The final PostgreSQL developers\n globe might be another candidate for the gallery :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 18 Mar 1999 13:07:14 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] One more globe" }, { "msg_contents": "> Bruce Momjian wrote:\n> \n> > > Well - you wanted mountains - there they are (Example 2).\n> > > But please don't tell me next you want planes in the air,\n> > > smog over NY and dolphins in the sea :-)\n> >\n> > Jan, is there no limit to what you can do?\n\nI think the major problem I have with globe #3 is the white shaft of the\npins. It is the first thing I see when I look at the image. Can the\nshafts be silver, black, grey, or red? Then maybe they would not stick\nout. The eye should be drawn to the map, and the pin heads.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Mar 1999 23:07:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] One more globe" }, { "msg_contents": "\nOn 20-Mar-99 Bruce Momjian wrote:\n> The eye should be drawn to the map, and the pin heads.\n\n*giggle*\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Fri, 19 Mar 1999 23:12:44 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] One more globe" }, { "msg_contents": "> \n> On 20-Mar-99 Bruce Momjian wrote:\n> > The eye should be drawn to the map, and the pin heads.\n> \n> *giggle*\n> \n\nI knew I was going to get a giggle on this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Mar 1999 23:15:28 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] One more globe" } ]
[ { "msg_contents": "The flat files are sitting at 1.9 GB right now (the primary email table).\n\nSo I'm approaching the point where I can't backup using the COPY command.\n\nI'm probably using the wrong OS for this. But the performance of postgres is\nstill dazzling me (some people complain about the performance, but when I\nsee it pick 10,000 rows out of 1.4 million, sort them, and return them in a\nsecond, I'm blown away).\n\nTim\n\n-----Original Message-----\nFrom: Bruce Momjian <[email protected]>\nTo: [email protected] <[email protected]>\nCc: [email protected] <[email protected]>\nDate: Wednesday, March 17, 1999 8:46 PM\nSubject: Re: [HACKERS] \"CANNOT EXTEND\" -\n\n\n>[Charset iso-8859-1 unsupported, filtering to ASCII...]\n>> It won't be long for me until that happens. Not long at all. Considering\n>> I've amassed 2.2 GB in just 3-4 weeks....\n>\n>How large are your flat files. Also, the postgresql problems are with\n>files that are exactly 2gig. It is possible files over that will be ok.\n>\n>\n>--\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Wed, 17 Mar 1999 21:10:00 -0600", "msg_from": "\"Tim Perdue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] \"CANNOT EXTEND\" -" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> The flat files are sitting at 1.9 GB right now (the primary email table).\n> \n> So I'm approaching the point where I can't backup using the COPY command.\n> \n> I'm probably using the wrong OS for this. But the performance of postgres is\n> still dazzling me (some people complain about the performance, but when I\n> see it pick 10,000 rows out of 1.4 million, sort them, and return them in a\n> second, I'm blown away).\n\nPlease see if you can create files over 2 gig. I believe is it only OS\nbugs in dealing with exactly 2gig files that is the problem.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 00:20:25 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] \"CANNOT EXTEND\" -" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > The flat files are sitting at 1.9 GB right now (the primary email table).\n> >\n> > So I'm approaching the point where I can't backup using the COPY command.\n> >\n> > I'm probably using the wrong OS for this. But the performance of postgres is\n> > still dazzling me (some people complain about the performance, but when I\n> > see it pick 10,000 rows out of 1.4 million, sort them, and return them in a\n> > second, I'm blown away).\n> \n> Please see if you can create files over 2 gig. 
I believe is it only OS\n> bugs in dealing with exactly 2gig files that is the problem.\n\nalso note that pg_dump can write to a pipe, so you can use it thus :\n\n> pg_dump megabase | split -b 500000k - megabase.dump.\n\n> createdb new_megabase\n\n> cat megabase.dump.* | psql new_megabase\n\nto achieve space-saving, you can also pipe the thing through g(un)zip \n\n---------------------\nHannu\n", "msg_date": "Thu, 18 Mar 1999 08:48:16 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] \"CANNOT EXTEND\" -" } ]
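A minimal sketch of the full round trip Hannu describes, assuming GNU split's -b size suffix and hypothetical database names mydb/newdb; because the pieces are cut after gzip, cat reassembles a single compressed stream that gunzip decodes in one pass:

    # dump, compress, and cut into pieces safely below the 2 gig limit
    pg_dump mydb | gzip --fast | split -b512m - mydb.dump.gz.

    # restore: reassemble the pieces in glob order, decompress, reload
    createdb newdb
    cat mydb.dump.gz.* | gunzip | psql newdb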
[ { "msg_contents": "\nTom Lane writes:\n> Bruce Momjian <[email protected]> writes:\n> > I just deleted all 50,000 rows from a table that has one int4 and one\ntext\n> > field.\n>\n> > Why does vacuum take so long? If all the rows are superceeded, so no\n> > rows actually have to be moved, should it take so long for vacuum to\n> > run?\n>\n> Do you have any indexes on the table? I've noticed (and complained in\n> the past ;-)) that vacuuming a table takes unreasonably long if there\n> are a lot of dead index entries to be cleaned. It seems faster to drop\n> and recreate the index in a case like that.\n\nHi everyone,\n\nI am working on a large project right now which involves the use of a\ntable that has thousands of inserts and updates performed each day. (At\nthe end of the day, about 20000 inserts have occured, and each inserted\nrow gets modified 2 or 3 times) Vacuums take absolutely ages and\nunfortunately the system must run continuously 24 hours per day so I can't\nafford to have the table locked for ages while it is being vacuumed.\n\nI've played around with vacuum quite a bit, and I've found that if I do\none huge vacuum every so often, it takes longer than if I do lots of\nvacuum's during the day, this way the tables are kept more 'compacted' and\nthere is less moving around of data required, and so it runs a bit faster.\n\nAs the number of days of new data stored increases, the size of the tables\ngrows to the point where a vacuum can take 10 minutes or so, and this is\nunacceptable considering it occurs in a few seconds without indexes. To\nget around this, once every day, I grab entries which are in the active\ntable that are older than two days, and move them into an archive table\nwhich never changes. This way, I can keep the active table small and do\nvacuums within a minute or so, allowing me to keep my software from\nwaiting too long. I'd really like to avoid doing this though, because it\ncauses complications - lately I've found that vacuuming is becoming a\nmajor hassle which I'd rather not have to do at all :)\n\nWhat I was wanting to know if there was a way of temporarily disabling\nindexes while the vacuum is occuring, and then update it all in one hit\nonce the update is completely finished. This would be equivalent to\ndropping and recreating them, but I don't want to do that in case\nsomething dies during the vacuum and my tables are left without indexes on\nthem.\n\nOr perhaps telling Postgres to do a partial vacuum, with a time limit set\nto say 20 seconds and it will do it in stages over the period of a day.\nThis way the database can still run and we can keep the dbms cleaned. From\nwhat I understand, the new MVCC support in 6.5 will be able to do vacuum's\nin the background, or is this for the future?\n\nAlso, I had a look at the src/commands/vacuum.c code, and had a bit of a\nread through it. One thing I wasn't sure about is the method it uses to\nmove the rows around while it is doing the index. Lets say that we have\n100 rows, and the first one is deleted and so is empty. 
Does every single\nrow get moved back one, or does only one row get moved to fill in the\nempty gap?\n\nIs the vacuum code moving tons of rows around the table, causing the\nindexes to be updated lots of times and slowing things down?\n\n\nIf someone could give me some hints about how to best handle my tables to\nget good vacuum times I would really appreciate it.\n\n\nbtw, keep up the good work everyone, I've been following this mailing list\nand developing with Postgres since the days of pre-6.0 and I'm very\nimpressed with all the great improvements that have been made to Postgres\nover the years!\n\nThanks,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n", "msg_date": "Thu, 18 Mar 1999 15:16:52 +1030 (CST)", "msg_from": "Wayne Piekarski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] vacuum slowness" }, { "msg_contents": "\n\n6.5beta has a much faster vacuumer when indexes are used. Please try\nthat when you can.\n\n> \n> Tom Lane writes:\n> > Bruce Momjian <[email protected]> writes:\n> > > I just deleted all 50,000 rows from a table that has one int4 and one\n> text\n> > > field.\n> >\n> > > Why does vacuum take so long? If all the rows are superceeded, so no\n> > > rows actually have to be moved, should it take so long for vacuum to\n> > > run?\n> >\n> > Do you have any indexes on the table? I've noticed (and complained in\n> > the past ;-)) that vacuuming a table takes unreasonably long if there\n> > are a lot of dead index entries to be cleaned. It seems faster to drop\n> > and recreate the index in a case like that.\n> \n> Hi everyone,\n> \n> I am working on a large project right now which involves the use of a\n> table that has thousands of inserts and updates performed each day. (At\n> the end of the day, about 20000 inserts have occured, and each inserted\n> row gets modified 2 or 3 times) Vacuums take absolutely ages and\n> unfortunately the system must run continuously 24 hours per day so I can't\n> afford to have the table locked for ages while it is being vacuumed.\n> \n> I've played around with vacuum quite a bit, and I've found that if I do\n> one huge vacuum every so often, it takes longer than if I do lots of\n> vacuum's during the day, this way the tables are kept more 'compacted' and\n> there is less moving around of data required, and so it runs a bit faster.\n> \n> As the number of days of new data stored increases, the size of the tables\n> grows to the point where a vacuum can take 10 minutes or so, and this is\n> unacceptable considering it occurs in a few seconds without indexes. To\n> get around this, once every day, I grab entries which are in the active\n> table that are older than two days, and move them into an archive table\n> which never changes. This way, I can keep the active table small and do\n> vacuums within a minute or so, allowing me to keep my software from\n> waiting too long. I'd really like to avoid doing this though, because it\n> causes complications - lately I've found that vacuuming is becoming a\n> major hassle which I'd rather not have to do at all :)\n> \n> What I was wanting to know if there was a way of temporarily disabling\n> indexes while the vacuum is occuring, and then update it all in one hit\n> once the update is completely finished. 
This would be equivalent to\n> dropping and recreating them, but I don't want to do that in case\n> something dies during the vacuum and my tables are left without indexes on\n> them.\n> \n> Or perhaps telling Postgres to do a partial vacuum, with a time limit set\n> to say 20 seconds and it will do it in stages over the period of a day.\n> This way the database can still run and we can keep the dbms cleaned. From\n> what I understand, the new MVCC support in 6.5 will be able to do vacuum's\n> in the background, or is this for the future?\n> \n> Also, I had a look at the src/commands/vacuum.c code, and had a bit of a\n> read through it. One thing I wasn't sure about is the method it uses to\n> move the rows around while it is doing the index. Lets say that we have\n> 100 rows, and the first one is deleted and so is empty. Does every single\n> row get moved back one, or does only one row get moved to fill in the\n> empty gap?\n> \n> Is the vacuum code moving tons of rows around the table, causing the\n> indexes to be updated lots of times and slowing things down?\n> \n> \n> If someone could give me some hints about how to best handle my tables to\n> get good vacuum times I would really appreciate it.\n> \n> \n> btw, keep up the good work everyone, I've been following this mailing list\n> and developing with Postgres since the days of pre-6.0 and I'm very\n> impressed with all the great improvements that have been made to Postgres\n> over the years!\n> \n> Thanks,\n> Wayne\n> \n> ------------------------------------------------------------------------------\n> Wayne Piekarski Tel: (08) 8221 5221\n> Research & Development Manager Fax: (08) 8221 5220\n> SE Network Access Pty Ltd Mob: 0407 395 889\n> 222 Grote Street Email: [email protected]\n> Adelaide SA 5000 WWW: http://www.senet.com.au\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 May 1999 21:05:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum slowness" } ]
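A minimal sketch of the drop-and-recreate approach suggested above, with hypothetical table and index names. The window between the drop and the create is exactly the risk Wayne describes: if the session dies in between, the index has to be rebuilt by hand.

    -- drop the index so vacuum need not clean dead index entries
    drop index i_active__stamp;
    vacuum active;
    -- rebuild the index in a single pass over the compacted table
    create index i_active__stamp on active (stamp);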
[ { "msg_contents": "\n============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name\t\t: Andriy I Pilipenko\nYour email address\t: [email protected]\n\nCategory\t\t: runtime: back-end: SQL\nSeverity\t\t: serious\n\nSummary: Bug in optimizer\n\nSystem Configuration\n--------------------\n Operating System : FreeBSD 2.2.6, FreeBSD 3.1, Linux 2.0.36\n\n PostgreSQL version : 6.4.2\n\n Compiler used : gcc 2.7.2.1\n\nHardware:\n---------\nPentium, AMD K6, 256M RAM, 64M RAM\n\nVersions of other tools:\n------------------------\ngmake 3.76.1, flex 2.5.4\n\n--------------------------------------------------------------------------\n\nProblem Description:\n--------------------\nBackend forgets about indexes if WHERE clause includes \nnegative number. This causes great slowdown in queries\non large tables.\n\n--------------------------------------------------------------------------\n\nTest Case:\n----------\nHere is an example session. Note that in first SELECT\nbackend uses index scan, and in second one it uses \nsequental scan.\n\n== cut ==\nbamby=> create table table1 (field1 int);\nCREATE\nbamby=> create index i_table1__field1 on table1 (field1);\nCREATE\nbamby=> explain select * from table1 where field1 = 1;\nNOTICE: QUERY PLAN:\n\nIndex Scan using i_table1__field1 on table1 (cost=0.00 size=0 width=4)\n\nEXPLAIN\nbamby=> explain select * from table1 where field1 = -1;\nNOTICE: QUERY PLAN:\n\nSeq Scan on table1 (cost=0.00 size=0 width=4)\n\nEXPLAIN\n== cut ==\n\n--------------------------------------------------------------------------\n\nSolution:\n---------\n\n\n--------------------------------------------------------------------------\n\n", "msg_date": "Thu, 18 Mar 1999 08:34:57 -0500 (EST)", "msg_from": "Unprivileged user <nobody>", "msg_from_op": true, "msg_subject": "General Bug Report: Bug in optimizer" }, { "msg_contents": "Unprivileged user wrote:\n> \n> PostgreSQL version : 6.4.2\n>\n...\n> \n> Here is an example session. Note that in first SELECT\n> backend uses index scan, and in second one it uses\n> sequental scan.\n> \n> == cut ==\n> bamby=> create table table1 (field1 int);\n> CREATE\n> bamby=> create index i_table1__field1 on table1 (field1);\n> CREATE\n> bamby=> explain select * from table1 where field1 = 1;\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using i_table1__field1 on table1 (cost=0.00 size=0 width=4)\n ^^^^^^\nHmmm... Seems that vacuum wasn't run for table1.\nWhy is index used ?!!!\nIt's bug!\n\n> EXPLAIN\n> bamby=> explain select * from table1 where field1 = -1;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on table1 (cost=0.00 size=0 width=4)\n\nRun \n\nvacuum table1\n\nVadim\n", "msg_date": "Thu, 18 Mar 1999 20:55:22 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "On Thu, 18 Mar 1999, Vadim Mikheev wrote:\n\n> Unprivileged user wrote:\n> > \n> > PostgreSQL version : 6.4.2\n> >\n> ...\n> > \n> > Here is an example session. 
Note that in first SELECT\n> > backend uses index scan, and in second one it uses\n> > sequental scan.\n> > \n> > == cut ==\n> > bamby=> create table table1 (field1 int);\n> > CREATE\n> > bamby=> create index i_table1__field1 on table1 (field1);\n> > CREATE\n> > bamby=> explain select * from table1 where field1 = 1;\n> > NOTICE: QUERY PLAN:\n> > \n> > Index Scan using i_table1__field1 on table1 (cost=0.00 size=0 width=4)\n> ^^^^^^\n> Hmmm... Seems that vacuum wasn't run for table1.\n> Why is index used ?!!!\n> It's bug!\n\nWhy I need to vacuum immediately after creating table? \n\nHere is another example from live system:\n\n== cut ==\n\nstatserv=> select count(*) from ctime;\ncount\n-----\n94256\n(1 row)\n\nstatserv=> explain select * from ctime where ctg=-1;\nNOTICE: QUERY PLAN:\n\nSeq Scan on ctime (cost=3646.86 size=8412 width=54)\n\nEXPLAIN\nstatserv=> explain select * from ctime where ctg=1;\nNOTICE: QUERY PLAN:\n\nIndex Scan using i_ctime__ctg on ctime (cost=2.05 size=2 width=54)\n\nEXPLAIN\n\n== cut ==\n\n> \n> > EXPLAIN\n> > bamby=> explain select * from table1 where field1 = -1;\n> > NOTICE: QUERY PLAN:\n> > \n> > Seq Scan on table1 (cost=0.00 size=0 width=4)\n> \n> Run \n> \n> vacuum table1\n\nDid it. Doesn't help.\n\n\n Andriy I Pilipenko\n PAI1-RIPE\n\n\n", "msg_date": "Thu, 18 Mar 1999 17:22:14 +0200 (EET)", "msg_from": "Andriy I Pilipenko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "Andriy I Pilipenko wrote:\n> \n> Why I need to vacuum immediately after creating table?\n\nOh, sorry, I missed this -:)\nNevertheless, using index for \n\nselect * from table1 where field1 = 1;\n\nis bug!\n\n> \n> Here is another example from live system:\n> \n> statserv=> explain select * from ctime where ctg=-1;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on ctime (cost=3646.86 size=8412 width=54)\n\nAs well as this one.\nShould be fixed easy... Could someone address this? -:)\n\nVadim\n", "msg_date": "Thu, 18 Mar 1999 23:29:56 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "> Andriy I Pilipenko wrote:\n> > \n> > Why I need to vacuum immediately after creating table?\n> \n> Oh, sorry, I missed this -:)\n> Nevertheless, using index for \n> \n> select * from table1 where field1 = 1;\n> \n> is bug!\n\nIt is possible the new optimizer fixes this. He needs to try the new\nsnapshot to see.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 12:07:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Andriy I Pilipenko wrote:\n> > >\n> > > Why I need to vacuum immediately after creating table?\n> >\n> > Oh, sorry, I missed this -:)\n> > Nevertheless, using index for\n> >\n> > select * from table1 where field1 = 1;\n> >\n> > is bug!\n> \n> It is possible the new optimizer fixes this. 
He needs to try the new\n> snapshot to see.\n\nvac=> create table table1 (field1 int);\nCREATE\nvac=> create index i_table1__field1 on table1 (field1);\nCREATE\nvac=> explain select * from table1 where field1 = 1;\nNOTICE: QUERY PLAN:\n\nIndex Scan using i_table1__field1 on table1 (cost=0.00 size=0 width=4)\n\nUnfixed...\n\nVadim\n", "msg_date": "Fri, 19 Mar 1999 00:47:34 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "> > It is possible the new optimizer fixes this. He needs to try the new\n> > snapshot to see.\n> \n> vac=> create table table1 (field1 int);\n> CREATE\n> vac=> create index i_table1__field1 on table1 (field1);\n> CREATE\n> vac=> explain select * from table1 where field1 = 1;\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using i_table1__field1 on table1 (cost=0.00 size=0 width=4)\n> \n> Unfixed...\n> \n\nLet me tell you why I don't think this is a bug. The optimizer will\nchoose ordered results over unordered results if the costs are the same.\nIn this case, the cost of the query is zero, so it chose to use the\nindex because the index produces an ordered result.\n\nThis works well for un-vacuumed tables, because it thinks everything is\nzero cost, and chooses the index.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 13:36:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Let me tell you why I don't think this is a bug. The optimizer will\n> choose ordered results over unordered results if the costs are the same.\n> In this case, the cost of the query is zero, so it chose to use the\n> index because the index produces an ordered result.\n> \n> This works well for un-vacuumed tables, because it thinks everything is\n> zero cost, and chooses the index.\n\nAgreed, this is ok as long as\n\nvac=> create table table1 (field1 int);\nCREATE\nvac=> insert into table1 values (1);\nINSERT 1583349 1\nvac=> create index i_table1__field1 on table1 (field1);\nCREATE\nvac=> explain select * from table1 where field1 = 1;\nNOTICE: QUERY PLAN:\n\nSeq Scan on table1 (cost=1.03 size=1 width=4)\n\n- SeqScan is used for small tables.\n\nSo, only bug reported is left.\n\nVadim\n", "msg_date": "Fri, 19 Mar 1999 01:42:12 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > Let me tell you why I don't think this is a bug. 
The optimizer will\n> > choose ordered results over unordered results if the costs are the same.\n> > In this case, the cost of the query is zero, so it chose to use the\n> > index because the index produces an ordered result.\n> > \n> > This works well for un-vacuumed tables, because it thinks everything is\n> > zero cost, and chooses the index.\n> \n> Agreed, this is ok as long as\n> \n> vac=> create table table1 (field1 int);\n> CREATE\n> vac=> insert into table1 values (1);\n> INSERT 1583349 1\n> vac=> create index i_table1__field1 on table1 (field1);\n> CREATE\n> vac=> explain select * from table1 where field1 = 1;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on table1 (cost=1.03 size=1 width=4)\n> \n> - SeqScan is used for small tables.\n> \n> So, only bug reported is left.\n> \n\nCan you get on IRC now? Why are you up so late?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 13:46:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "> Agreed, this is ok as long as\n> \n> vac=> create table table1 (field1 int);\n> CREATE\n> vac=> insert into table1 values (1);\n> INSERT 1583349 1\n> vac=> create index i_table1__field1 on table1 (field1);\n> CREATE\n> vac=> explain select * from table1 where field1 = 1;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on table1 (cost=1.03 size=1 width=4)\n> \n> - SeqScan is used for small tables.\n> \n> So, only bug reported is left.\n\nSo, yes, I suppose there is an inconsistency there. Zero-sized\ntables(according to vacuum), use index, while tables with some data\ndon't use index.\n\nHow does the system know there is a row in there if you didn't run\nvacuum? That confuses me.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 13:53:38 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "> Agreed, this is ok as long as\n> \n> vac=> create table table1 (field1 int);\n> CREATE\n> vac=> insert into table1 values (1);\n> INSERT 1583349 1\n> vac=> create index i_table1__field1 on table1 (field1);\n> CREATE\n> vac=> explain select * from table1 where field1 = 1;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on table1 (cost=1.03 size=1 width=4)\n> \n> - SeqScan is used for small tables.\n> \n> So, only bug reported is left.\n\nMy guess is that the creation of the index updates the table size\nstatistics.\n\nHowever, when I see zero size, I don't know if it is accurate, or if\nsomeone has added rows since the last vacuum/index creation, so I think\nit is correct to use an index on a zero-length table if it is\nappropriate. If the size is 1, I will assume that number is accurate,\nand do a sequential scan.\n\nDoes that make sense?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 13:56:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> So, yes, I suppose there is an inconsistency there. Zero-sized\n> tables(according to vacuum), use index, while tables with some data\n> don't use index.\n> \n> How does the system know there is a row in there if you didn't run\n> vacuum? That confuses me.\n\nCreate index updates ntuples & npages in pg_class...\n\nVadim\n", "msg_date": "Fri, 19 Mar 1999 01:56:32 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> My guess is that the creation of the index updates the table size\n> statistics.\n\nYes.\n\n> However, when I see zero size, I don't know if it is accurate, or if\n> someone has added rows since the last vacuum/index creation, so I think\n> it is correct to use an index on a zero-length table if it is\n> appropriate. If the size is 1, I will assume that number is accurate,\n> and do a sequential scan.\n> \n> Does that make sense?\n\nYes. But we have to fix SeqScan for field1 = -1...\n\nVadim\n", "msg_date": "Fri, 19 Mar 1999 01:58:22 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > My guess is that the creation of the index updates the table size\n> > statistics.\n> \n> Yes.\n> \n> > However, when I see zero size, I don't know if it is accurate, or if\n> > someone has added rows since the last vacuum/index creation, so I think\n> > it is correct to use an index on a zero-length table if it is\n> > appropriate. If the size is 1, I will assume that number is accurate,\n> > and do a sequential scan.\n> > \n> > Does that make sense?\n> \n> Yes. But we have to fix SeqScan for field1 = -1...\n\nWoh, I just tried it myself, and was able to reproduce it. I will check\ninto it now. Gee, that is very strange.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 14:00:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > My guess is that the creation of the index updates the table size\n> > statistics.\n> \n> Yes.\n> \n> > However, when I see zero size, I don't know if it is accurate, or if\n> > someone has added rows since the last vacuum/index creation, so I think\n> > it is correct to use an index on a zero-length table if it is\n> > appropriate. If the size is 1, I will assume that number is accurate,\n> > and do a sequential scan.\n> > \n> > Does that make sense?\n> \n> Yes. 
But we have to fix SeqScan for field1 = -1...\n\nThe basic problem is that the -1 is stored as:\n\t\n\t{ EXPR \n\t :typeOid 0 \n\t :opType op \n\t :oper \n\t { OPER \n\t :opno 558 \n\t :opid 0 \n\t :opresulttype 23 \n\t }\n\t \n\t :args (\n\t { CONST \n\t :consttype 23 \n\t :constlen 4 \n\t :constisnull false \n\t :constvalue 4 [ 4 0 0 0 ] \n\t :constbyval true \n\t }\n\t )\n\nThis is clearly undesirable, and causes the optimizer to think it can't\nuse the index. \n\nIs this bug report for 6.4.*, or did are you running the current\ndevelopment tree? I assume you are running 6.4.*, and am surprised this\ndid not show as a larger problem.\n\nI will look in the grammer for a fix. This should come across as a\nsingle -4 constant.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 14:59:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "> Agreed, this is ok as long as\n> \n> vac=> create table table1 (field1 int);\n> CREATE\n> vac=> insert into table1 values (1);\n> INSERT 1583349 1\n> vac=> create index i_table1__field1 on table1 (field1);\n> CREATE\n> vac=> explain select * from table1 where field1 = 1;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on table1 (cost=1.03 size=1 width=4)\n> \n> - SeqScan is used for small tables.\n> \n> So, only bug reported is left.\n\nI see it now. The -4 is coming over as a unary minus, and a 4. That is\nOK, because the executor knows how to deal with a unary minus, but the\noptimizer thinks it is a operator and a constant, which it is, but it\ndoes not know how to index an operator with a constant.\n\nUnary minus is probably the not only operator that can be auto-folded\ninto the constant. In fact, it may be valuable to auto-fold all\noperator-constant pairs into just constants.\n\nIn fact, that may not be necessary. If we code so that we check that\nthe right-hand side is totally constants, and make the change in the\nexecutor(if needed), we can just pass it all through. However, we need\nthe constant for optimizer min/max comparisons when using >, but we\ncould do without that if needed, so we don't have to evaluate operators\nand functions outside the executor.\n\nThe quick fix may be to just make sure -4 does not use unary minus in\nthe parser.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 15:13:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > Let me tell you why I don't think this is a bug. 
The optimizer will\n> > choose ordered results over unordered results if the costs are the same.\n> > In this case, the cost of the query is zero, so it chose to use the\n> > index because the index produces an ordered result.\n> > \n> > This works well for un-vacuumed tables, because it thinks everything is\n> > zero cost, and chooses the index.\n> \n> Agreed, this is ok as long as\n> \n> vac=> create table table1 (field1 int);\n> CREATE\n> vac=> insert into table1 values (1);\n> INSERT 1583349 1\n> vac=> create index i_table1__field1 on table1 (field1);\n> CREATE\n> vac=> explain select * from table1 where field1 = 1;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on table1 (cost=1.03 size=1 width=4)\n> \n> - SeqScan is used for small tables.\n> \n> So, only bug reported is left.\n> \n> Vadim\n> \n\nFixed:\n\t\n\ttest=> explain select * from table1 where field1 = 1;\n\tNOTICE: QUERY PLAN:\n\t\n\tIndex Scan using i_table1__field1 on table1 (cost=0.00 size=0 width=4)\n\t\n\tEXPLAIN\n\ttest=> explain select * from table1 where field1 = -1;\n\tNOTICE: QUERY PLAN:\n\t\n\tIndex Scan using i_table1__field1 on table1 (cost=0.00 size=0 width=4)\n\nThe function fixing it is in gram.y:\n\t\n\tstatic Node *doNegate(Node *n)\n\t{\n\t if (IsA(n, A_Const))\n\t {\n\t A_Const *con = (A_Const *)n;\n\t\n\t if (con->val.type == T_Integer)\n\t {\n\t con->val.val.ival = -con->val.val.ival;\n\t return n;\n\t }\n\t if (con->val.type == T_Float)\n\t {\n\t con->val.val.dval = -con->val.val.dval;\n\t return n;\n\t }\n\t }\n\t\n\t return makeA_Expr(OP, \"-\", NULL, n);\n\t}\n\n\nIt tries to merge the negative into the constant. We already had\nspecial '-' handling in the grammer, so I just call this function,\nrather than doing makeA_Expr in all cases.\n\nCommitted.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 16:33:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "On Thu, 18 Mar 1999, Bruce Momjian wrote:\n\n> > Bruce Momjian wrote:\n> > > \n> > > Let me tell you why I don't think this is a bug. The optimizer will\n> > > choose ordered results over unordered results if the costs are the same.\n> > > In this case, the cost of the query is zero, so it chose to use the\n> > > index because the index produces an ordered result.\n> > > \n> > > This works well for un-vacuumed tables, because it thinks everything is\n> > > zero cost, and chooses the index.\n> > \n> > Agreed, this is ok as long as\n> > \n> > vac=> create table table1 (field1 int);\n> > CREATE\n> > vac=> insert into table1 values (1);\n> > INSERT 1583349 1\n> > vac=> create index i_table1__field1 on table1 (field1);\n> > CREATE\n> > vac=> explain select * from table1 where field1 = 1;\n> > NOTICE: QUERY PLAN:\n> > \n> > Seq Scan on table1 (cost=1.03 size=1 width=4)\n> > \n> > - SeqScan is used for small tables.\n> > \n> > So, only bug reported is left.\n> > \n> \n> Can you get on IRC now? Why are you up so late?\n\nThat's something we need on our globe...timezones :) \n\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 19 Mar 1999 09:32:16 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" }, { "msg_contents": "> > Can you get on IRC now? Why are you up so late?\n> \n> That's something we need on our globe...timezones :) \n\nHe is always 12 hours ahead of me. Here is my Vadim command:\n\n\tTZ=Asia/Krasnoyarsk\n\texport TZ\n\tdate\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Mar 1999 13:43:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: Bug in optimizer" } ]
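For anyone stuck on an unpatched 6.4.x backend, one possible workaround (an assumption here, not verified against 6.4) is to quote the negative literal so the parser sees a single constant rather than a unary minus applied to a constant:

    -- unary minus defeats the index on an unpatched backend
    explain select * from ctime where ctg = -1;    -- Seq Scan
    -- a quoted literal may be coerced to a plain int4 constant instead
    explain select * from ctime where ctg = '-1';  -- Index Scan, if the coercion folds it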
[ { "msg_contents": "> So the OS doesn't get a table over 2 gigs. Does anyone have a table\n> that dumps a flat file over 2gig's, whose OS can't support files over 2\n> gigs. Never heard of a complaint.\n> \nProbably because people dump to tape, or to pipes that compress the dump \nwith gzip and split it with split -b? That is what I would do with my\nbackups:\n\npg_dump | gzip --fast | split -b512m - backup.monday.gz.\n\nAndreas\n\n", "msg_date": "Thu, 18 Mar 1999 16:21:54 +0100", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] \"CANNOT EXTEND\" -" } ]
[ { "msg_contents": "Hello,\ncan someone please tell me how to delete a table in a database and how\nto delete one row of data in a table, for a PostgreSQL database?\nThank you very much.\nRegards...... lch\n\n", "msg_date": "Fri, 19 Mar 1999 00:41:00 +0800", "msg_from": "hoelc <[email protected]>", "msg_from_op": true, "msg_subject": "delete data" }, { "msg_contents": "Hi\n\nOn Fri, 19 Mar 1999, hoelc wrote:\n\n> Hello,\n> can someone please tell me how to delete a table in a database and how\n> to delete one row of data in a table, for a PostgreSQL database?\n> Thank you very much.\n> Regards...... lch\nSure, in psql:\ndrop table tablename;\ndelete from tablename where ...whatever...;\n\nBut what it sounds like you really need is to get/read the book\n\"The Practical SQL Handbook\" ISBN 0-201-44787-8\nMine is the 3rd edition, check if there is a newer one.\n\nHave a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner I'm excited about life! How about YOU!?\nProfessional Web Hosting and site design to include programming\nProudly powered by R H Linux 4.2, Apache 1.3.x, PHP 3.x, PostgreSQL 6.x\n-----------------------------------------------------------------------\nOnly if you know where you're going can you get there.\n\n", "msg_date": "Thu, 18 Mar 1999 13:14:36 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] delete data" } ]
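Spelled out against a hypothetical table, the two operations look like this; the where clause controls which rows go away, and leaving it off deletes every row in the table:

    -- remove the table itself, definition and data
    drop table customers;

    -- remove only the rows matching the condition
    delete from customers where cust_id = 42;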
[ { "msg_contents": "\n> copy tbl_name to 'file';\n> \n> Have any ideas for splitting that?\n> \n> \nI was going to answer with the following:\n\n#!/bin/sh\nmkfifo tapepipe.$$\n( gzip --fast -c < tapepipe.$$ | split -b512m - tbl_name.unl. ) &\npsql -c \"copy tbl_name to 'tapepipe.$$'\" regression \nrm tapepipe.$$\n\nbut it does not work, since psql does not like the named pipe. So use:\n\npsql -qc 'copy onek to stdout' regression | gzip --fast -c | split -b512m - onek.unl.\n\nAndreas\n", "msg_date": "Thu, 18 Mar 1999 18:56:39 +0100", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] \"CANNOT EXTEND\" -" } ]
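The reload direction is a sketch along the same lines, assuming the onek.unl.* pieces produced above and an existing empty target table; depending on the psql version the copy data may also need a terminating \. line:

    cat onek.unl.* | gunzip | psql -qc 'copy onek from stdin' regression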
[ { "msg_contents": "Hi all\n\nWoops, sent this to scrappy instead of the list, sorry scrappy, here it is\nfor the list.\n\nBTW, bye for a while, thank you all for such a great job.\n\nOn Wed, 17 Mar 1999, The Hermit Hacker wrote:\n> Jan...\n> \n> \tAny way of coming up with a suitable Logo to replace what we have\n> on the main screen? Something tasteful that jumps out at you? I like the\n> one we do have, but I don't think it's very \"strong\"? How about it? :)\n\nI've been thinking about that also, but am not very artistic so no good\nideas have come to mind. However, one person (forget who) suggested a big\ntruck hauling a heavy load. I did not really care for it, but I forward\nthe idea anyway.\n\nAnother idea, but I'm not sure how to implement it, I once saw an old\nsci-fi movie where all knowledge known to man was kept in this big crystal,\nall optical storage, crystals can be very pretty, but I'm not sure how to\nshow one storing data? lines of 1's and 0's reflecting around off the\ninside walls? maybe with streams coming out the points to connect to\ncomputers all over the world to symbolize the multiuser aspect of\nPostgreSQL? glowing fiber optic lines maybe? Maybe with some people\ngrinding and polishing on the crystal to symbolize the developers?\nMaybe the crystal floats in the sky?\n\nWell, it may not be a great idea, but I threw it out there as food for\nthought and leave it to someone more artistically inclined than I to\nimplement it if they want.\n\nHave a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner I'm excited about life! How about YOU!?\nProfessional Web Hosting and site design to include programming\nProudly powered by R H Linux 4.2, Apache 1.3.x, PHP 3.x, PostgreSQL 6.x\n-----------------------------------------------------------------------\nOnly if you know where you're going can you get there.\n\n\n", "msg_date": "Thu, 18 Mar 1999 14:18:54 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Developers Globe (FINAL) (fwd)" } ]
[ { "msg_contents": "I have been looking into what it would take to remove derived files\nfrom the CVS repository, and it doesn't look bad at all. I propose\nwe do so before 6.5 beta.\n\nIn case anyone's forgotten: the issue is derived files, such as gram.c,\nwhich we currently keep in the CVS repository even though they are not\nmaster source files. Doing so causes a number of headaches, including\nwasted time to check in and check out updates to both master and derived\nfiles, unreasonable bulk of the CVS files for these derived files,\nerrors due to timestamp skew (after checking out, it can look like you\nhave an up-to-date derived file when you do not), etc etc.\n\nThe only reason for keeping these files in CVS is so that users who\nobtain the source distribution don't have to have tools that can rebuild\nthese files. But there's a better way to handle that: generate the derived\nfiles while preparing tarballs. That way we can remove the derived\nfiles from CVS. We'll also eliminate the other time skew problem that's\nbeen seen in more than one past release tarball: the derived files will\nbe certain to have newer timestamps than their masters in the tarballs.\n\nThe most reliable way to do this is just to have a script that does\n\tconfigure\n\t\"make\" all the derived files\n\tmake distclean\nand invoke this script as part of the tarball generation procedure.\nConfiguring in order to find out which yacc and lex to use may seem\na tad expensive ;-) but this way will work, whereas taking shortcuts\nwould have a tendency to break. Doing the make distclean also ensures\nthat the tarball will not contain any extraneous files, which seems like\na good idea.\n\nI have just tested this procedure and determined that it takes less than\n2 minutes on hub.org, which seems well within the realm of acceptability\nfor a nightly batch job.\n\nSo, a few questions for the list:\n\n1. Does anyone object to removing these files from the CVS repository and\nhandling them as above:\n\tsrc/backend/parser/gram.c\n\tsrc/backend/parser/parse.h\n\tsrc/backend/parser/scan.c\n\tsrc/interfaces/ecpg/preproc/preproc.c\n\tsrc/interfaces/ecpg/preproc/preproc.h\n\tsrc/interfaces/ecpg/preproc/pgc.c\n\n2. Should we also handle src/configure this way? That would mean that\npeople who obtain the code straight from CVS would have to have autoconf\ninstalled. It's probably a good idea but I'm not certain.\n\n3. src/pl/plpgsql/src/ also contains yacc and lex output files that are\nchecked into CVS. We definitely should remove them from CVS, but should\nwe leave them to be generated by recipients of the distribution, or\nshould we handle them like the big grammar files? I don't think they\nare big enough to break anyone's yacc, but...\n\n4. Currently, a recipient must have at least minimally working yacc/lex\ncapability anyway, because the bootstrap files in src/backend/bootstrap/\nare not pre-built in the distribution. If we used the same procedure\nfor the bootstrap and plpgsql files as for the bigger parsers, then it\nwould be possible to build Postgres without a local yacc or lex. 
Is\nthis worth doing, or would it just bloat the distribution to no purpose?\nAs far as I know we have not gotten complaints about the need for\nyacc/lex for these files; it's only that the parser and ecpg grammars\nare too big for some vendor versions...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Mar 1999 21:40:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Removing derived files from CVS" }, { "msg_contents": "> I have been looking into what it would take to remove derived files\n> from the CVS repository, and it doesn't look bad at all. I propose\n> we do so before 6.5 beta.\n> \n> In case anyone's forgotten: the issue is derived files, such as gram.c,\n> which we currently keep in the CVS repository even though they are not\n> master source files. Doing so causes a number of headaches, including\n> wasted time to check in and check out updates to both master and derived\n> files, unreasonable bulk of the CVS files for these derived files,\n> errors due to timestamp skew (after checking out, it can look like you\n> have an up-to-date derived file when you do not), etc etc.\n\nWe have not been able to reliably make releases with the proper\ntimestamps on gram.c, which is critical for end-users, so any change\nthat will make this gram.c more automatic is welcomed by me.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 21:44:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Removing derived files from CVS" }, { "msg_contents": "Tom Lane wrote:\n> \n> I have been looking into what it would take to remove derived files\n> from the CVS repository, and it doesn't look bad at all. I propose\n> we do so before 6.5 beta.\n\nSure, as long as it is clear what additional tools are need, what are \ntheir versions, and where do I get them for common platforms.\n\n:) Clark\n", "msg_date": "Fri, 19 Mar 1999 02:50:59 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Removing derived files from CVS" }, { "msg_contents": "On Thu, 18 Mar 1999, Bruce Momjian wrote:\n\n> > I have been looking into what it would take to remove derived files\n> > from the CVS repository, and it doesn't look bad at all. I propose\n> > we do so before 6.5 beta.\n> > \n> > In case anyone's forgotten: the issue is derived files, such as gram.c,\n> > which we currently keep in the CVS repository even though they are not\n> > master source files. Doing so causes a number of headaches, including\n> > wasted time to check in and check out updates to both master and derived\n> > files, unreasonable bulk of the CVS files for these derived files,\n> > errors due to timestamp skew (after checking out, it can look like you\n> > have an up-to-date derived file when you do not), etc etc.\n> \n> We have not been able to reliably make releases with the proper\n> timestamps on gram.c, which is critical for end-users, so any change\n> that will make this gram.c more automatic is welcomed by me.\n\nAgreed here too...someone at one point mentioned that there might be a\nway, inside of CVS, to have it auto-generate these files as its being\nchecked out (ie. 
if file is configure.in, run autoconf)...\n\nI just scan'd through the cvs info file, and couldn't find\nanything...anyone know about something like this?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 19 Mar 1999 09:40:59 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Removing derived files from CVS" }, { "msg_contents": "Then <[email protected]> spoke up and said:\n> Agreed here too...someone at one point mentioned that there might be a\n> way, inside of CVS, to have it auto-generate these files as its being\n> checked out (ie. if file is configure.in, run autoconf)...\n\n>From the info file:\nModule options\n--------------\n\n Either regular modules or ampersand modules can contain options,\nwhich supply additional information concerning the module.\n[snip]\n`-i PROG'\n Specify a program PROG to run whenever files in a module are\n committed. PROG runs with a single argument, the full pathname of\n the affected directory in a source repository. The `commitinfo',\n `loginfo', and `verifymsg' files provide other ways to call a\n program on commit.\n\n`-o PROG'\n Specify a program PROG to run whenever files in a module are\n checked out. PROG runs with a single argument, the module name.\n\n>From my reading, it looks like the easiest thing to do is set up\ncommit rules such that committing gram.y automatically generates\ngram.c. It looks like it might be difficult to have gram.c generated\ncompletely \"on the fly\" and then passed to the CVS client.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================", "msg_date": "19 Mar 1999 08:55:36 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Removing derived files from CVS" }, { "msg_contents": "Clark Evans <[email protected]> writes:\n> Tom Lane wrote:\n>> I have been looking into what it would take to remove derived files\n\n> Sure, as long as it is clear what additional tools are need, what are \n> their versions, and where do I get them for common platforms.\n\nYou already need yacc (or bison) and lex (or flex). The only new\nthing would be autoconf, and that only if we choose to remove\nsrc/configure from the CVS fileset. You get autoconf from any\nGNU archive site. 2.13 is the current release, I believe.\n\nIIRC autoconf depends on GNU m4, so that's actually two tools not one,\nbut the installation is straightforward. If you're on Linux you\nprobably have GNU m4 anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Mar 1999 09:46:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Removing derived files from CVS " }, { "msg_contents": "[email protected] writes:\n> Then <[email protected]> spoke up and said:\n>> Agreed here too...someone at one point mentioned that there might be a\n>> way, inside of CVS, to have it auto-generate these files as its being\n>> checked out (ie. 
if file is configure.in, run autoconf)...\n\n> From my reading, it looks like the easiest thing to do is set up\n> commit rules such that committing gram.y automatically generates\n> gram.c.\n\nI thought about that, but it only solves *one* of the problems we've\nrun into: developers forgetting to commit a derived file when they\ncommit the master. We'd still have these problems:\n * excessive CVS traffic for the derived files (check the\n version-to-version diffs for gram.c or configure to see what I'm\n talking about: a small change to the master often generates huge\n diffs on the derived). That costs everyone who downloads from\n CVS. It's probably faster to generate gram.c or configure locally\n than to pull these diffs from CVS.\n * unreliable timestamps after a \"cvs update\": the derived may or may\n not look newer than the master, depending on what order cvs updates\n them in. So you may end up rebuilding locally anyway.\n * unreliable timestamps in tarball drops: same as above.\n\nIf we could run a program during check *out* not check in then we might\nhave something, but I see no facility for that in cvs. There'd be\nsevere portability problems anyway (how do you know what incantation to\nmutter to run yacc/bison, when you haven't done configure yet?).\n\nSo I think removing the deriveds from CVS altogether is a much better\nanswer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Mar 1999 10:42:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Removing derived files from CVS " }, { "msg_contents": "On 19 Mar 1999 [email protected] wrote:\n\n> Then <[email protected]> spoke up and said:\n> > Agreed here too...someone at one point mentioned that there might be a\n> > way, inside of CVS, to have it auto-generate these files as its being\n> > checked out (ie. if file is configure.in, run autoconf)...\n> \n> >From the info file:\n> Module options\n> --------------\n> \n> Either regular modules or ampersand modules can contain options,\n> which supply additional information concerning the module.\n> [snip]\n> `-i PROG'\n> Specify a program PROG to run whenever files in a module are\n> committed. PROG runs with a single argument, the full pathname of\n> the affected directory in a source repository. The `commitinfo',\n> `loginfo', and `verifymsg' files provide other ways to call a\n> program on commit.\n> \n> `-o PROG'\n> Specify a program PROG to run whenever files in a module are\n> checked out. PROG runs with a single argument, the module name.\n> \n> >From my reading, it looks like the easiest thing to do is set up\n> commit rules such that committing gram.y automatically generates\n> gram.c. It looks like it might be difficult to have gram.c generated\n> completely \"on the fly\" and then passed to the CVS client.\n\nCan you provide an exampmle of using/doing this? It sounds like the\nbetter solution of them all, if it can be done this way..\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 19 Mar 1999 14:45:07 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Removing derived files from CVS" }, { "msg_contents": "I have installed a script (src/tools/release_prep) that generates the\nparser and ecpg/preproc derived files on-the-fly, and removed said files\nfrom CVS.\n\n(I didn't do anything about src/configure --- how do people feel about\nthat? I'd want to see hub's autoconf updated to 2.13 anyway, if it is\ngoing to start generating configure locally.)\n\nIn order to generate snapshot tarballs that contain these derived files,\nyou need to replace ~pgsql/bin/mk-snapshot at hub.org with the attached\nscript. (You can find a copy in ~tgl/bin/mk-snapshot at hub, if you'd\nrather copy that file than cut-n-paste.) It doesn't look like I have\nwrite permission on that file, so it's up to you.\n\nYou'll need to make a comparable mod in whatever script you use for\npreparing releases, too, but I didn't find that one in looking around.\n\nBTW: in testing this script, I produced a tarball of 5894631 bytes,\nwhereas last night's snapshot is 5974070 bytes. It would appear that\nthere's 80k (compressed) worth of cruft in the ~pgsql/pgsql tree that\nCVSup is not cleaning out. Indeed the *,v files in that toplevel\ndirectory are not there in a fresh checkout. I'd suggest rm -rf'ing\nthe whole tree and making CVSup do a fresh checkout.\n\n\t\t\tregards, tom lane\n\n\n#!/bin/sh\nPATH=/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin\ncd /home/projects/pgsql\n# check out tree\n/usr/local/bin/cvsup -L 1 -g -Z README.cvsup\n# perform prerelease cleanup\ncd pgsql\nsrc/tools/release_prep\ncd ..\n# make the snapshot tarfile\ntar czpf tmp/postgresql.snapshot.tar.gz pgsql\nrm -f ftp/pub/postgresql.snapshot.tar.gz\nmv -f tmp/postgresql.snapshot.tar.gz ftp/pub/postgresql.snapshot.tar.gz\n", "msg_date": "Sat, 20 Mar 1999 13:57:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Removing derived files from CVS " }, { "msg_contents": "\nScript looks good...regenerating a new snapshot right now ...\n\n\nOn Sat, 20 Mar 1999, Tom Lane wrote:\n\n> I have installed a script (src/tools/release_prep) that generates the\n> parser and ecpg/preproc derived files on-the-fly, and removed said files\n> from CVS.\n> \n> (I didn't do anything about src/configure --- how do people feel about\n> that? I'd want to see hub's autoconf updated to 2.13 anyway, if it is\n> going to start generating configure locally.)\n> \n> In order to generate snapshot tarballs that contain these derived files,\n> you need to replace ~pgsql/bin/mk-snapshot at hub.org with the attached\n> script. (You can find a copy in ~tgl/bin/mk-snapshot at hub, if you'd\n> rather copy that file than cut-n-paste.) It doesn't look like I have\n> write permission on that file, so it's up to you.\n> \n> You'll need to make a comparable mod in whatever script you use for\n> preparing releases, too, but I didn't find that one in looking around.\n> \n> BTW: in testing this script, I produced a tarball of 5894631 bytes,\n> whereas last night's snapshot is 5974070 bytes. It would appear that\n> there's 80k (compressed) worth of cruft in the ~pgsql/pgsql tree that\n> CVSup is not cleaning out. Indeed the *,v files in that toplevel\n> directory are not there in a fresh checkout. 
I'd suggest rm -rf'ing\n> the whole tree and making CVSup do a fresh checkout.\n> \n> \t\t\tregards, tom lane\n> \n> \n> #!/bin/sh\n> PATH=/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin\n> cd /home/projects/pgsql\n> # check out tree\n> /usr/local/bin/cvsup -L 1 -g -Z README.cvsup\n> # perform prerelease cleanup\n> cd pgsql\n> src/tools/release_prep\n> cd ..\n> # make the snapshot tarfile\n> tar czpf tmp/postgresql.snapshot.tar.gz pgsql\n> rm -f ftp/pub/postgresql.snapshot.tar.gz\n> mv -f tmp/postgresql.snapshot.tar.gz ftp/pub/postgresql.snapshot.tar.gz\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 21 Mar 1999 23:36:48 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Removing derived files from CVS " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\n>>>>> \"Tom\" == Tom Lane <[email protected]> writes:\n\n Tom> IIRC autoconf depends on GNU m4, so that's actually two tools\n Tom> not one, but the installation is straightforward. If you're\n Tom> on Linux you probably have GNU m4 anyway.\n\nI actually thought parts of autoconf use Perl, too.... Or maybe that\nwas automake?\n\nroland\n- -- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. Roberts, PhD Custom Software Solutions\[email protected] 76-15 113th Street, Apt 3B\[email protected] Forest Hills, NY 11375\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.2\nComment: Processed by Mailcrypt 3.4, an Emacs/PGP interface\n\niQCVAwUBNwGcneoW38lmvDvNAQHVPQP/V0oR0cvbr7kVjXKqhMm+eeMaV4UpDgAG\nI1QxjNXoXM/RQC1x7mFglKKm+2T9KV99elAWxWZ9cQpRMBGYsfR+LpO7mwX6CRFq\n+ePc0rGvLKqjt4PpGLa5+i5186fz40VR3dowS6xSeyCqLLtntV+njJyMX89QH4VM\n6LAHK6yGIaY=\n=JGbl\n-----END PGP SIGNATURE-----\n", "msg_date": "30 Mar 1999 22:55:10 -0500", "msg_from": "Roland Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Removing derived files from CVS" }, { "msg_contents": "Roland Roberts <[email protected]> writes:\n>>>>>> \"Tom\" == Tom Lane <[email protected]> writes:\nTom> IIRC autoconf depends on GNU m4,\n\n> I actually thought parts of autoconf use Perl, too.... Or maybe that\n> was automake?\n\nNope, no Perl in autoconf. I'm less sure about automake.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 Mar 1999 11:30:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Removing derived files from CVS " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\n>>>>> \"Tom\" == Tom Lane <[email protected]> writes:\n\n Tom> Roland Roberts <[email protected]> writes:\n >>>>>>> \"Tom\" == Tom Lane <[email protected]> writes:\n Tom> IIRC autoconf depends on GNU m4,\n\n >> I actually thought parts of autoconf use Perl, too.... Or\n >> maybe that was automake?\n\n Tom> Nope, no Perl in autoconf. I'm less sure about automake.\n\nI found it; it does use Perl in the optional `autoscan' script. But\nthat's not really relevant for Postgres....\n\nroland\n- -- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. 
Roberts, PhD Custom Software Solutions\[email protected] 76-15 113th Street, Apt 3B\[email protected] Forest Hills, NY 11375\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.2\nComment: Processed by Mailcrypt 3.4, an Emacs/PGP interface\n\niQCVAwUBNwLlCOoW38lmvDvNAQEgfwQAkw/T4BCtEmtl88F+ci2plkvPdPyQdl3u\nTa6/hQKDaP11L/mp+DiNjDXtTk+9q0wEdwIVRZlPoxxnlaa2x0itxnETvzLMV24D\n7R78iDyxgQ7yf067zblFrPUnp+tp7lrZfpP1TTCrduSGO1vbP8npX4K7Hwo4lj1f\n3UdFHtbuc8g=\n=p63d\n-----END PGP SIGNATURE-----\n", "msg_date": "31 Mar 1999 22:16:25 -0500", "msg_from": "Roland Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Removing derived files from CVS" } ]
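Tom's question about src/configure was left open above; if it were also dropped from CVS, release_prep could regenerate it with something along these lines (a sketch only -- it assumes autoconf 2.13 and GNU m4 are installed, which is exactly what the thread says hub would need first):

#!/bin/sh
# regenerate src/configure from src/configure.in before rolling a tarball
cd src
autoconf --version	# should report 2.13, per the discussion above
autoconf		# reads configure.in, writes a fresh configure
cd ..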
[ { "msg_contents": "Here is the list. It is up-to-date as of Monday. Let me know if I have\nmissed anything, or attributed something incorrectly to someone.\n\nI will shortly add some descriptive text to the release.\n\n---------------------------------------------------------------------------\n\nAdd \"vacuumdb\" utility\nFix text<->float8 and text<->float4 conversion functions(Tom)\nFix for creating tables with mixed-case constraints(Billy)\nSpeed up libpq by allocating memory better(Tom)\nEXPLAIN all indices used(Tom)\nImprove port matching(Tom)\nPortability fixes for SunOS\nChange exp()/pow() behavior to generate error on underflow/overflow(Jan)\nImplement CASE expression(Thomas)\nFix bug in pg_dump -z\nNew pg_dump table output format(Constantin)\nAdd string min()/max() functions(Thomas)\nExtend new type coersion techniques to aggregates(Thomas)\nNew moddatetime contrib(Terry)\nUpdate to pgaccess(Constantin)\nFix problems in the muti-byte code(Tatsuo)\nFix case where executor evaluates functions twice(Tatsuo)\nMemory overrun cleanups(Tatsuo)\nFix for lo_import crash(Tatsuo)\nAdjust handling of data type names to suppress double quotes(Thomas)\nAdd routines for single-byte \"char\" type(Thomas)\nImproved substr() function(Thomas)\nUse type coersion for matching columns and DEFAULT(Thomas)\nAdd CASE statement support(Thomas)\nImproved multi-byte handling(Tatsuo)\nAdd NT/Win32 backend port and enable dynamic loading(Magnus and Horak Daniel)\nMulti-version concurrency control/MVCC(Vadim)\nNew Serialized mode(Vadim)\nUpgrade to Pygress(D'Arcy)\nNew SET TRANSACTION ISOLATION LEVEL(Vadim)\nNew LOCK TABLE IN ... MODE(Vadim)\nNew port to Cobalt Qube(Mips) running Linux(Tatsuo)\nFix deadlock so it only checks once after one second of sleep(Bruce)\nPort to NetBSD/m68k(Mr. Mutsuki Nakajima)\nPort to NetBSD/sun3(Mr. 
Mutsuki Nakajima)\nUpdate odbc version\nNew NUMERIC data type(Jan)\nNew SELECT FOR UPDATE(Vadim)\nHandle \"NaN\" and \"Infinity\" for input values(Jan)\nBetter date/year handling(Thomas)\nImproved handling of backend connections(Magnus)\nNew options ELOG_TIMESTAMPS and USE_SYSLOG options for log files(Massimo)\nNew TCL_ARRAYS option(Massimo)\nNew INTERSECT and EXCEPT(Stefan)\nNew pg_index.indisprimary for primary key tracking(D'Arcy)\nNew pg_dump option to allow dropping of tables before creation(Brook)\nFixes for aggregates and PL/pgsql(Hiroshi)\nSpeedup of row output routines(Tom)\nJDBC improvements(Peter)\nFix for subquery crash(Vadim)\nNew READ COMMITTED isolation level(Vadim)\nNew TEMP tables/indexes(Bruce)\nPrevent sorting of result is already sorted(Jan)\nFix for libpq function PQfnumber and case-insensitive names(Bahman Rafatjoo)\nFix for large object write-into-middle, remove extra block(Tatsuo)\nNew memory allocation optimization(Jan)\nAllow psql to do \\p\\g(Bruce)\nAllow multiple rule actions(Jan)\nFix for pg_dump -d or -D and quote special characters in INSERT\nAdded LIMIT/OFFSET functionality(Jan)\nRemoved CURRENT keyword for rule queries(Jan)\nImprove optimizer when joining a large number of tables(Bruce)\nAddition of Stefan Simkovics' Master's Thesis to docs(Stefan)\nNew routines to convert between int8 and text/varchar types(Thomas)\nNew bushy plans, where meta-tables are joined(Bruce)\nEnable right-hand queries by default(Bruce)\nAllow reliable maximum number of backends to be set at configure time\n (--with-maxbackends and postmaster switch (-N backends))(Tom)\nRepair serious problems with dynahash(Tom)\nFix INET/CIDR portability problems\nFix problem with selectivity error in ALTER TABLE ADD COLUMN(Bruce)\nFix executor so mergejoin of different column types works(Tom)\nGEQO default now 11 tables because of optimizer speedups(Tom)\nFix for Alpha OR selectivity bug\nFix OR index selectivity problem(Bruce)\nAllow btree/hash index on the int8 type(Ryan)\nAllow Var = NULL for MS-SQL portability(Michael)\nFix so \\d shows proper length for char()/varchar()(Ryan)\nFix tutorial code(Clark)\nImprove destroyuser checking(Oliver)\nFix for Kerberos(Rodney McDuff)\nModify contrib check_primary_key() so either \"automatic\" or \"dependent\"(Anand)\nAllow psql \\d on a view show query(Ryan)\nSpeedup for LIKE(Bruce)\nFix for dropping database while dirty buffers(Bruce)\nFix so sequence nextval() can be case-sensitive(Bruce)\nFix for tcl/tk configuration(Vince)\nSTOP: 1999/03/15\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 22:39:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "6.5 Features list" }, { "msg_contents": "> Fix case where executor evaluates functions twice(Tatsuo)\n\nI don't think above was done by me. Maybe Hirosi?\n\n>Port to NetBSD/macppc(Toshimi Aoki)\n\nAlso could you add above?\n\nBTW, I'm going to add a new Cyrillic support to the multi-byte support\nnext week (around 3/23-3/35) if this is not too late. Yes, I know we\nalready have the rcode support. But it seems to have some difficulties\nwith on-the-fly encoding conversion. (Oleg, is this explanation\ncorrect?)\n\nAnyway, the new Cyrillic support will not kill existing rcode support. 
\nIn another word, users can choose whatever Cyrillic support they like.\n---\nTatsuo Ishii\n\n\n", "msg_date": "Fri, 19 Mar 1999 23:05:52 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 Features list " }, { "msg_contents": "Hi!\n\nOn Fri, 19 Mar 1999, Tatsuo Ishii wrote:\n> BTW, I'm going to add a new Cyrillic support to the multi-byte support\n> next week (around 3/23-3/35) if this is not too late. Yes, I know we\n> already have the rcode support. But it seems to have some difficulties\n> with on-the-fly encoding conversion. (Oleg, is this explanation\n> correct?)\n\n RECODE has no difficulties - it just does not allow to choose what\nencoding a client want to get. RECODE chooses client encoding by client IP\naddress, what is very unflexible.\n\n> Anyway, the new Cyrillic support will not kill existing rcode support. \n> In another word, users can choose whatever Cyrillic support they like.\n\n I think new cyrillic support makes RECODE obsolete. Somewhere in the\nfuture we should remove RECODE, I think.\n\n> ---\n> Tatsuo Ishii\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Fri, 19 Mar 1999 17:18:10 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 Features list " }, { "msg_contents": "On Fri, 19 Mar 1999, Tatsuo Ishii wrote:\n\n> Date: Fri, 19 Mar 1999 23:05:52 +0900\n> From: Tatsuo Ishii <[email protected]>\n> To: Bruce Momjian <[email protected]>\n> Cc: PostgreSQL-development <[email protected]>\n> Subject: Re: [HACKERS] 6.5 Features list \n> \n> > Fix case where executor evaluates functions twice(Tatsuo)\n> \n> I don't think above was done by me. Maybe Hirosi?\n> \n> >Port to NetBSD/macppc(Toshimi Aoki)\n> \n> Also could you add above?\n> \n> BTW, I'm going to add a new Cyrillic support to the multi-byte support\n> next week (around 3/23-3/35) if this is not too late. Yes, I know we\n> already have the rcode support. But it seems to have some difficulties\n> with on-the-fly encoding conversion. (Oleg, is this explanation\n> correct?)\n\nHere is Oleg Bartunov not Oleg Broytman :-) Several months ago we discussed\ncyrillic support with mb code, but I was too busy and thanks to Oleg Broytmann\nwho kindly agreed to communnicate with you. Short question:\nWill ALTER TABLE has support for changing of encoding for existing\ndata ?\nNot really difficulties but your mb support is an elegant and flexible way to\ndo on-fly encoding support. rcode requires some fixed pre-configuration\nbased on host/net definitions.\n\n\n> \n> Anyway, the new Cyrillic support will not kill existing rcode support. \n> In another word, users can choose whatever Cyrillic support they like.\n\nrcode IMHO could be removed from the source if mb will works ok.\n\n\tBest regards,\n\t\tOleg\n\n> ---\n> Tatsuo Ishii\n> \n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 19 Mar 1999 18:15:59 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 Features list " }, { "msg_contents": "Hi,\n\n> Here is Oleg Bartunov not Oleg Broytman :-)\n\nWao. 
Is Oleg a pupular first name?\n\n> Several months ago we discussed\n> cyrillic support with mb code, but I was too busy and thanks to Oleg Broytmann\n> who kindly agreed to communnicate with you. Short question:\n> Will ALTER TABLE has support for changing of encoding for existing\n> data ?\n\nNo. The backend encoding can be defined for each database but not for\neach table. To have a different encoding for each table, we need an\nencoding/chaset attribute in the pg_class table. Someday I would try\nthis (probably with support for NATIONAL CHARACTER).\n\nIs this a serious issue for you?\n\n> Not really difficulties but your mb support is an elegant and flexible way to\n> do on-fly encoding support. rcode requires some fixed pre-configuration\n> based on host/net definitions.\n\nThanks for choosing MB support.\n\nBTW, mb(multi-byte support) may not be a suitable naming, since\ncyrillic is definitely a single byte encoding:-)\n---\nTatsuo Ishii\n", "msg_date": "Sat, 20 Mar 1999 11:04:24 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 Features list " }, { "msg_contents": "On Thu, Mar 18, 1999 at 10:39:40PM -0500, Bruce Momjian wrote:\n> Here is the list. It is up-to-date as of Monday. Let me know if I have\n> missed anything, or attributed something incorrectly to someone.\n> ...\n\nI think I forgot to submit the ecpg changes list. Hopefully I'm not too late\nfor this. Here we go:\n\nAdded the following commands:\n\t- exec sql whenever sqlwarning\n\t- exec sql prepare\n\t- exec sql execute\n\t- exec sql deallocate prepare\n\t- exec sql type\n\t- exec sql var\n\t- exec sql free\n\t- exec sql declare statement\nAdded ECPGstatus() function.\nAdded support for different connections in one program.\nAdded support for unions.\nAdded auto-allocating for host arrays.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Sat, 20 Mar 1999 20:19:47 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 Features list" }, { "msg_contents": "> On Thu, Mar 18, 1999 at 10:39:40PM -0500, Bruce Momjian wrote:\n> > Here is the list. It is up-to-date as of Monday. Let me know if I have\n> > missed anything, or attributed something incorrectly to someone.\n> > ...\n> \n> I think I forgot to submit the ecpg changes list. Hopefully I'm not too late\n> for this. Here we go:\n\nAdded this:\n\n\tEcpg fixes/features, see src/interfaces/ecpg/ChangeLog file(Michael)\n\tJdbc fixes/features, see src/interfaces/jdbc/CHANGELOG(Peter)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Mar 1999 21:07:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5 Features list" } ]
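For readers who have not tried the new ecpg statements Michael lists, a dynamic-statement fragment would look roughly like this (a hedged sketch: the statement text and names are invented, this is only part of a larger .pgc function, and the exact 6.5 syntax should be checked against the ecpg documentation):

exec sql begin declare section;
	char stmt[64];
exec sql end declare section;

/* build a statement at run time, then use the new prepare/execute support */
strcpy(stmt, "DELETE FROM test WHERE i = 1");
exec sql prepare s from :stmt;
exec sql execute s;
exec sql deallocate prepare s;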
[ { "msg_contents": "Here it is with the summary at the top:\n\n\n---------------------------------------------------------------------------\n\n POSTGRESQL 6.5\n\n\nThis release marks the development team's final mastery of the source\ncode we inherited from Berkeley. You will see we are now easily adding\nmajor features, thanks to the increasing size and experience of our\nworld-wide development team:\n\nMulti-version concurrency control(MVCC): This removes our old\ntable-level locking, and replaces it with a locking system that is\nsuperior to most commercial database systems. In a traditional system,\neach row that is modified is locked until committed, preventing reads by\nother users. MVCC uses the natural multi-version nature of PostgreSQL\nto allow readers to continue reading consistent data during writer\nactivity. Writers continue to use the compact pg_log transaction\nsystem. This is all preformed without having to allocate a lock for\nevery row like traditional database systems. So, basically, we no\nlonger have table-level locking, we have something better than row-level\nlocking.\n\nNumeric data type: We now have a true numeric data type, with\nuser-specified precision.\n\nTemporary tables: Temporary tables are guaranteed to have unique names\nwithin a database session, and are destroyed on session exit.\n\nNew SQL features: We now have CASE, INTERSECT, and EXCEPT statement\nsupport. We have new LIMIT/OFFSET, SET TRANSACTION ISOLATION LEVEL,\nSELECT ... FOR UPDATE, and an improved LOCK command.\n\nSpeedups: We continue to speed up PostgreSQL, thanks to the variety of\ntalents within our team. We have sped up memory allocation,\noptimization, and row transfers routines.\n\nOther: We continue to expand our port list, this time including\nWin32/NT. Most interfaces have new versions, and existing functionality\nhas been improved.\n\nPlease look through the list to see the full extent of our changes in\nthis PostgreSQL 6.5 release.\n\n---------------------------------------------------------------------------\n\nAdd \"vacuumdb\" utility\nFix text<->float8 and text<->float4 conversion functions(Tom)\nFix for creating tables with mixed-case constraints(Billy)\nSpeed up libpq by allocating memory better(Tom)\nEXPLAIN all indices used(Tom)\nImprove port matching(Tom)\nPortability fixes for SunOS\nChange exp()/pow() behavior to generate error on underflow/overflow(Jan)\nImplement CASE expression(Thomas)\nFix bug in pg_dump -z\nNew pg_dump table output format(Constantin)\nAdd string min()/max() functions(Thomas)\nExtend new type coersion techniques to aggregates(Thomas)\nNew moddatetime contrib(Terry)\nUpdate to pgaccess(Constantin)\nFix problems in the muti-byte code(Tatsuo)\nFix case where executor evaluates functions twice(Tatsuo)\nMemory overrun cleanups(Tatsuo)\nFix for lo_import crash(Tatsuo)\nAdjust handling of data type names to suppress double quotes(Thomas)\nAdd routines for single-byte \"char\" type(Thomas)\nImproved substr() function(Thomas)\nUse type coersion for matching columns and DEFAULT(Thomas)\nAdd CASE statement support(Thomas)\nImproved multi-byte handling(Tatsuo)\nAdd NT/Win32 backend port and enable dynamic loading(Magnus and Horak Daniel)\nMulti-version concurrency control/MVCC(Vadim)\nNew Serialized mode(Vadim)\nUpgrade to Pygress(D'Arcy)\nNew SET TRANSACTION ISOLATION LEVEL(Vadim)\nNew LOCK TABLE IN ... MODE(Vadim)\nNew port to Cobalt Qube(Mips) running Linux(Tatsuo)\nFix deadlock so it only checks once after one second of sleep(Bruce)\nPort to NetBSD/m68k(Mr. 
Mutsuki Nakajima)\nPort to NetBSD/sun3(Mr. Mutsuki Nakajima)\nUpdate odbc version\nNew NUMERIC data type(Jan)\nNew SELECT FOR UPDATE(Vadim)\nHandle \"NaN\" and \"Infinity\" for input values(Jan)\nBetter date/year handling(Thomas)\nImproved handling of backend connections(Magnus)\nNew options ELOG_TIMESTAMPS and USE_SYSLOG options for log files(Massimo)\nNew TCL_ARRAYS option(Massimo)\nNew INTERSECT and EXCEPT(Stefan)\nNew pg_index.indisprimary for primary key tracking(D'Arcy)\nNew pg_dump option to allow dropping of tables before creation(Brook)\nFixes for aggregates and PL/pgsql(Hiroshi)\nSpeedup of row output routines(Tom)\nJDBC improvements(Peter)\nFix for subquery crash(Vadim)\nNew READ COMMITTED isolation level(Vadim)\nNew TEMP tables/indexes(Bruce)\nPrevent sorting of result is already sorted(Jan)\nFix for libpq function PQfnumber and case-insensitive names(Bahman Rafatjoo)\nFix for large object write-into-middle, remove extra block(Tatsuo)\nNew memory allocation optimization(Jan)\nAllow psql to do \\p\\g(Bruce)\nAllow multiple rule actions(Jan)\nFix for pg_dump -d or -D and quote special characters in INSERT\nAdded LIMIT/OFFSET functionality(Jan)\nRemoved CURRENT keyword for rule queries(Jan)\nImprove optimizer when joining a large number of tables(Bruce)\nAddition of Stefan Simkovics' Master's Thesis to docs(Stefan)\nNew routines to convert between int8 and text/varchar types(Thomas)\nNew bushy plans, where meta-tables are joined(Bruce)\nEnable right-hand queries by default(Bruce)\nAllow reliable maximum number of backends to be set at configure time\n (--with-maxbackends and postmaster switch (-N backends))(Tom)\nRepair serious problems with dynahash(Tom)\nFix INET/CIDR portability problems\nFix problem with selectivity error in ALTER TABLE ADD COLUMN(Bruce)\nFix executor so mergejoin of different column types works(Tom)\nGEQO default now 11 tables because of optimizer speedups(Tom)\nFix for Alpha OR selectivity bug\nFix OR index selectivity problem(Bruce)\nAllow btree/hash index on the int8 type(Ryan)\nAllow Var = NULL for MS-SQL portability(Michael)\nFix so \\d shows proper length for char()/varchar()(Ryan)\nFix tutorial code(Clark)\nImprove destroyuser checking(Oliver)\nFix for Kerberos(Rodney McDuff)\nModify contrib check_primary_key() so either \"automatic\" or \"dependent\"(Anand)\nAllow psql \\d on a view show query(Ryan)\nSpeedup for LIKE(Bruce)\nFix for dropping database while dirty buffers(Bruce)\nFix so sequence nextval() can be case-sensitive(Bruce)\nFix for tcl/tk configuration(Vince)\nSTOP: 1999/03/15\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Mar 1999 23:03:58 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "6.5 Feature list and summary" }, { "msg_contents": "Hi!\n\n There are duplicates (and probably minor misspelling).\n\nOn Thu, 18 Mar 1999, Bruce Momjian wrote:\n> Implement CASE expression(Thomas)\n> Add CASE statement support(Thomas)\n\n> Prevent sorting of result is already sorted(Jan)\n\n \"iF already sorted\"?\n\n> Fix for Alpha OR selectivity bug\n> Fix OR index selectivity problem(Bruce)\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Fri, 19 Mar 1999 12:36:32 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 Feature list and summary" } ]
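As a quick illustration of the new statements named in the summary (table and column names here are made up):

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;
UPDATE accounts SET balance = balance - 10 WHERE id = 42;
COMMIT;

SELECT name,
       CASE WHEN balance < 0 THEN 'overdrawn' ELSE 'ok' END
FROM accounts
ORDER BY name
LIMIT 10 OFFSET 20;

Under MVCC, another session's plain SELECTs on accounts keep reading a consistent snapshot while the row is locked FOR UPDATE.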
[ { "msg_contents": "Hello all\n\nI sent this to the GENERAL list but have not received a reply in 2 days.\nSo, I decided to send this to the HACKERS list.\n\n--------------------------------------------------------------------------------\n\nHello\n\nWe have two different machines both running Red Hat 5.2, but one machine\n\nhas PostgreSQL version 6.3 while the other machine has PostgreSQL\nversion 6.4.2. Our problem is that we have a table which we want to\nstore political boundaries as polygons. Taking into account the 8K limit\n\nper tuple and the storage space for a polygon (4 + 32n) we determined\nthat we can only store 255 points for each polygon. Therefore, we\nensured that all our polygons have less than 250 points.\n\nOur problem is with the insert command when we insert a large polygon\n(~200 points). With version 6.3 we have no problem inserting the\npolygon. In fact all the polygons were inserted, verifying that they\nwere less than the 8K limit. However with version 6.4.2, the backend\ncloses the connection during the insert.\n\nThe first time we noticed this, we had a lot of trailing zeros on the\nvalues of the points and eliminating the zeros allowed a smaller polygon\n\n(that previously failed with ~ 170 points) to be inserted in version\n6.4.2. This seems to indicate that there is some kind of limit on the\nlength of the query string. Isolating the query string that failed\nindicates that the string is about 4500 bytes.\n\nDoes anyone have any idea what happened between 6.3 and 6.4.2 and what\nwe can do to solve this problem? We checked the archives but only found\nreferences to the 8K limit. Any help would be greatly appreciated.\n\nThank you for your time and effort in this matter.\n\nBest regards\n\nTaravudh\n\n", "msg_date": "Fri, 19 Mar 1999 11:26:36 +0700", "msg_from": "Taravudh Tipdecho <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with query length " }, { "msg_contents": "Taravudh Tipdecho <[email protected]> writes:\n> Our problem is with the insert command when we insert a large polygon\n> (~200 points). With version 6.3 we have no problem inserting the\n> polygon. In fact all the polygons were inserted, verifying that they\n> were less than the 8K limit. However with version 6.4.2, the backend\n> closes the connection during the insert.\n> The first time we noticed this, we had a lot of trailing zeros on the\n> values of the points and eliminating the zeros allowed a smaller polygon\n> (that previously failed with ~ 170 points) to be inserted in version\n> 6.4.2. This seems to indicate that there is some kind of limit on the\n> length of the query string. Isolating the query string that failed\n> indicates that the string is about 4500 bytes.\n\nThat's really odd; it's hard to believe that the maximum query length\ngot shorter.\n\nI presume the backend dropped a corefile when it crashed; can you use\ngdb on the corefile to provide a backtrace? Alternatively, can you\nprovide a fairly short psql script that demonstrates the problem?\n(If you're right about the problem, just a CREATE TABLE and INSERT\noughta do it...) I will look into it if I can reproduce it here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Mar 1999 10:46:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with query length " } ]
[ { "msg_contents": "Is there a CVS target for docs? I don't like the idea to check the whole\nsource out just to change ecpg.sgml. Yes, I am actively editing some stuff\nin this file. :-)\n\nI'd like to commit my changes frequently just to make sure nothing is lost.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Fri, 19 Mar 1999 08:54:57 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "CVS target for docs" }, { "msg_contents": "Michael Meskes <[email protected]> writes:\n> Is there a CVS target for docs? I don't like the idea to check the whole\n> source out just to change ecpg.sgml.\n\nIf you're using cvs it's easy to update any particular subtree of the\ndistribution. You do have to do a \"cvs checkout pgsql\" to populate the\nwhole tree once, but you don't have to update all of it frequently if\nyou don't want to --- just do\n\tcd pgsql/doc\n\tcvs update\nto update everything under the doc subdirectory.\n\nI'm not sure that it's real safe to update just part of the src tree\nthis way, since there are so often related changes in different parts\nof the source. But it oughta work just fine for tracking the docs\nwithout the source.\n\nAFAIK, CVSup has no comparable facility.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Mar 1999 11:29:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS target for docs " }, { "msg_contents": "On Sat, Mar 20, 1999 at 11:29:55AM -0500, Tom Lane wrote:\n> If you're using cvs it's easy to update any particular subtree of the\n> distribution. You do have to do a \"cvs checkout pgsql\" to populate the\n> whole tree once, but you don't have to update all of it frequently if\n> you don't want to --- just do\n\nIt's not that I have problems with the update. Currently I have ecpg checked\nout on postgresql.org while working with my old cvsup setup at home. I then\ntransfer my patch to postgresql.org, apply and commit. \n\nI'm currently thinking about moving to cvs completely but wonder how much\nmore network traffic this will cause.\n\nThe idea with the docs was just to not checkout the whole source on Marc's\nmachine. But if I move it home it shouldn't matter.\n\nMichael\n\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Sat, 20 Mar 1999 21:10:52 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVS target for docs" }, { "msg_contents": "> I'm currently thinking about moving to cvs completely but wonder how much\n> more network traffic this will cause.\n\nFWIW, I've been using remote cvs from my home machine and it seems to\nwork very well, and reasonably speedily. I ran a \"cvs update\" on the\nPostgres tree just now, while watching hub's CPU load via \"top\" in\nanother window. Elapsed time was 2m 45s, and the server's CPU usage\non hub never got above 3%. This run only had to pull a couple of files,\nsince I'd just updated yesterday --- a typical run probably takes more\nlike 4m or so. 
Network bandwidth doesn't seem to be the limiting factor\nin an update (to judge from das blinkenlights on my router), though it\nis the bottleneck in a full checkout.\n\nIf what you're currently doing is cvs or cvsup into a local directory\nat hub, then transferring the files to home via tar and ftp, I've got\nto think that remote cvs is a vastly more efficient and less error-prone\nsolution.\n\nBTW, I recommend putting\n\tcvs -z3\n\tupdate -d -P\n\tcheckout -P\nin your ~/.cvsrc. The first of these invokes gzip -3 compression for\nall cvs network transfers; that should take care of bandwidth problems.\nThe other two make the default handling of subdirectories more\nreasonable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Mar 1999 10:40:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS target for docs " }, { "msg_contents": "On Fri, 19 Mar 1999, Michael Meskes wrote:\n\n> Is there a CVS target for docs? I don't like the idea to check the whole\n> source out just to change ecpg.sgml. Yes, I am actively editing some stuff\n> in this file. :-)\n> \n> I'd like to commit my changes frequently just to make sure nothing is lost.\n\ncvs checkout -P pgsql-doc\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 21 Mar 1999 23:39:12 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS target for docs" }, { "msg_contents": "\nTom, Michael doesn't work on anything else other then docs and ecpg...why\nwould he want to update everything?\n\nRight now, to get at ecpg, he check's out 'pgsql-ecpg'...and pgsql-doc is\nalways an available module...\n\nNot sure why such a simple thing needed to be so complicated :(\n\n\nOn Sun, 21 Mar 1999, Tom Lane wrote:\n\n> > I'm currently thinking about moving to cvs completely but wonder how much\n> > more network traffic this will cause.\n> \n> FWIW, I've been using remote cvs from my home machine and it seems to\n> work very well, and reasonably speedily. I ran a \"cvs update\" on the\n> Postgres tree just now, while watching hub's CPU load via \"top\" in\n> another window. Elapsed time was 2m 45s, and the server's CPU usage\n> on hub never got above 3%. This run only had to pull a couple of files,\n> since I'd just updated yesterday --- a typical run probably takes more\n> like 4m or so. Network bandwidth doesn't seem to be the limiting factor\n> in an update (to judge from das blinkenlights on my router), though it\n> is the bottleneck in a full checkout.\n> \n> If what you're currently doing is cvs or cvsup into a local directory\n> at hub, then transferring the files to home via tar and ftp, I've got\n> to think that remote cvs is a vastly more efficient and less error-prone\n> solution.\n> \n> BTW, I recommend putting\n> \tcvs -z3\n> \tupdate -d -P\n> \tcheckout -P\n> in your ~/.cvsrc. The first of these invokes gzip -3 compression for\n> all cvs network transfers; that should take care of bandwidth problems.\n> The other two make the default handling of subdirectories more\n> reasonable.\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 21 Mar 1999 23:40:20 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS target for docs " }, { "msg_contents": "On Sun, Mar 21, 1999 at 11:40:20PM -0400, The Hermit Hacker wrote:\n> \n> Tom, Michael doesn't work on anything else other then docs and ecpg...why\n> would he want to update everything?\n\nIn fact I do update everything via cvsup as soon as I go online. Kind of\nalpha testing. :-)\n\n> Right now, to get at ecpg, he check's out 'pgsql-ecpg'...and pgsql-doc is\n> always an available module...\n\nThanks Marc.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Mon, 22 Mar 1999 06:56:29 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVS target for docs" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Tom, Michael doesn't work on anything else other then docs and ecpg...why\n> would he want to update everything?\n\nSo he has an up-to-date backend to test with? I dunno, maybe he doesn't\nneed that.\n\n> Right now, to get at ecpg, he check's out 'pgsql-ecpg'...and pgsql-doc is\n> always an available module...\n\nI wasn't aware that there were CVS module names for subsets of the\ndistribution, actually. Is there a list posted somewhere of all the\navailable module names? The FAQ_CVS page probably ought to include\nthat info.\n\n> Not sure why such a simple thing needed to be so complicated :(\n\nSeems to me that which module he checks out doesn't affect the\ncomplexity of the process much... remote cvs has still gotta be\nsimpler than using an intermediate staging area on hub.org.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Mar 1999 10:13:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS target for docs " }, { "msg_contents": "On Mon, 22 Mar 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Tom, Michael doesn't work on anything else other then docs and ecpg...why\n> > would he want to update everything?\n> \n> So he has an up-to-date backend to test with? I dunno, maybe he doesn't\n> need that.\n> \n> > Right now, to get at ecpg, he check's out 'pgsql-ecpg'...and pgsql-doc is\n> > always an available module...\n> \n> I wasn't aware that there were CVS module names for subsets of the\n> distribution, actually. Is there a list posted somewhere of all the\n> available module names? The FAQ_CVS page probably ought to include\n> that info.\n\n\tcvs checkout CVSROOT/modules\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 22 Mar 1999 13:12:45 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS target for docs " } ]
[ { "msg_contents": "> >psql -qc 'copy onek to stdout' regression | gzip --fast -c | split -b512m\n> -\n> >onek.unl.\n> \n> Is there any danger when you split these files? I'm worried about\n> corruption, etc.\n> \nOn the contrary, gzip will notice file corruption when decompressing,\nsince it checks the CRC. You won't notice otherwise. We do our Informix\nbackups this way, that are about 1 - 15 Gb (compression factor ~ 1:4).\nWe haven't had problems since using this method (3 years now).\n\n> So what is the command to pull it back from the segments?\n\nYou get files with suffix aa ab ac and so on.\nIf you make sure no other files lurk around, that match the following mask,\nyou simply restore with:\n\ncat onek.unl.* | gzip -cd | psql -qc 'copy onek from stdin' regression\n\n\tAndreas\n", "msg_date": "Fri, 19 Mar 1999 09:13:45 +0100", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] \"CANNOT EXTEND\" -" } ]
[ { "msg_contents": "Hello all,\n\nWhile testing Index scan,I found the following phonomenon.\n \n\tSELECT id from xxxxxx\n\twhere id=10 or id=11;\n\nis very fast.\n\nBut \n\tSELECT id from xxxxxx\n\twhere (id>=10 and id<=10)\n\tor (id>=11 and id<=11);\n\nis very slow.\nWhy ?\n\nThe EXPLAIN(not verbose) output of both SQL are same \nexcept cost and size.\n\n\tNOTICE: QUERY PLAN:\n\n\tIndex Scan using xxxxxx_pkey, xxxxxx_pkey on xxxxxx \n\t (cost=1136.17 size=197 width=4)\n\n\nIt seems that (id>=..) is included in indexqual but (id<=.. )\nis not.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Fri, 19 Mar 1999 18:26:23 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "indexes with OR clauses is slow ?" } ]
[ { "msg_contents": "> This brings up a related issue, the fact that a dump file does NOT look\n> like the origenal script that made the database, that is things like\n> SERIAL, PRIMARY KEY, REFERENCES table (field), VIEW, and probably some\n> other things that I missed, none of these things are reconstruced in the\n> dump file in any intuative way.\n>\n> All these things can be done in a more round about, more obtuse way, but\n> the whole point of them (seems to me any way) is to make the source file\n> easier to read and understand. Am I off base here? if so then what is the\n> point?\n> \n> So, working off the last point, should not the dump file, aside from it's\n> data, look like the origanal script that made the database? so if a table\n> has a PRIMARY KEY, then instead of an index at the bottom of the dump\n> file, the table should have ... PRIMARY KEY... in it.\n> \n> The main reasones I use such constructs are 1. readability and 2.\n> convienance. As relates to a dump file, #1 is lost and #2 does not\n> matter, unless maybe one wants to hand edit the dump file for some\n> reason.\n\nI don't think we want to make the dump look like the script that created. The \ndump is setup to reload the database quickly. That is why the tables and \nsequences are created first, then the data is inserted into the tables and \nfinally the indexes are added last. If we make dump look like the database \nscript, then the dump-reload will take considerably longer.\n\nI do not have any emperical data at this time to show the above is true, but I \ncan gather some if someone is interested in it.\n\n-Ryan\n\nP.S. I'm working on a pro's and con's list for the sequences.. then I'm planning \non implimenting a few different options and see what everyone prefers. Not sure \nwhen I'll have this done, this project has become more complex then I first \nimagined :)\n", "msg_date": "Fri, 19 Mar 1999 02:27:46 -0700 (MST)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: Re: [HACKERS] Sequences....]" } ]
[ { "msg_contents": "Hi,\n\nI suggest the following portability patch, which does not\nchange functionality, but makes the code more ANSI C'ish.\nMy AIX xlc compiler barfs on all of these. Can someone please\nreview and apply to current.\n\n <<port.patch>> \nThanks\nAndreas", "msg_date": "Fri, 19 Mar 1999 18:26:20 +0100", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "portability patch" }, { "msg_contents": "Applied. It certainly clears up some funny-looking code.\n\n\n> Hi,\n> \n> I suggest the following portability patch, which does not\n> change functionality, but makes the code more ANSI C'ish.\n> My AIX xlc compiler barfs on all of these. Can someone please\n> review and apply to current.\n> \n> <<port.patch>> \n> Thanks\n> Andreas\n> \n> \n> \n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Mar 1999 13:56:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] portability patch" } ]
[ { "msg_contents": "Hi all,\n\nI have problems with SELECTing a view that I had been able to create.\nThe postmaster just keeps dropping the line as I'm trying to SELECT from\nthat view I had already defined (SQL statements/definititions attached).\n\nI made a workaround (without using views) since I saw something about\nbuggy views in the new documentation and I thought I had found\nsomething. This try was a bigger join which included everything (and did\nmore filtering):\n\nSELECT subject, type\nFROM students, subjects, examtypes\nWHERE students.profession = subjects.profession AND\n students.pk = myid() AND\n subjects.exams[ mysemester() ] = examtypes.pk AND\n examtypes.admin = 'yes';\n\nI was glad because it worked but after a few tries the postmaster\ndropped me out again... I did not give up and recreated the database but\nit unfortunately it came up with the same bug.\n\nI have a Linux (2.0.34) and Postgres 6.4.2. Please, if anyone knows\nanything that I can do, share it with me! I'd be very grateful!\n\nThanks in advance,\nPeter Blazso", "msg_date": "Fri, 19 Mar 1999 20:07:14 +0100", "msg_from": "Peter Blazso <[email protected]>", "msg_from_op": true, "msg_subject": "problems with a view" } ]
[ { "msg_contents": "\nPlatform: Alpha, Digital UNIX 4.0D\nSoftware: PostgreSQL 6.4.2 and 6.5 snaphot (11 March 1999)\n\nI have a table as follows:\n\nTable = lineitem\n+------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+------------------------+----------------------------------+-------+\n| l_orderkey | int4 not null | 4 |\n| l_partkey | int4 not null | 4 |\n| l_suppkey | int4 not null | 4 |\n| l_linenumber | int4 not null | 4 |\n| l_quantity | float4 not null | 4 |\n| l_extendedprice | float4 not null | 4 |\n| l_discount | float4 not null | 4 |\n| l_tax | float4 not null | 4 |\n| l_returnflag | char() not null | 1 |\n| l_linestatus | char() not null | 1 |\n| l_shipdate | date | 4 |\n| l_commitdate | date | 4 |\n| l_receiptdate | date | 4 |\n| l_shipinstruct | char() not null | 25 |\n| l_shipmode | char() not null | 10 |\n| l_comment | char() not null | 44 |\n+------------------------+----------------------------------+-------+\nIndex: lineitem_index_\n\nthat ends up having on the order of 500,000 rows (about 100 MB on disk). \n\nI then run an aggregation query as:\n\n--\n-- Query 1\n--\nselect l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, \nsum(l_extendedprice) as sum_base_price, \nsum(l_extendedprice*(1-l_discount)) as sum_disc_price, \nsum(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge, \navg(l_quantity) as avg_qty, avg(l_extendedprice) as avg_price, \navg(l_discount) as avg_disc, count(*) as count_order \nfrom lineitem \nwhere l_shipdate <= ('1998-12-01'::datetime - interval '90 day')::date \ngroup by l_returnflag, l_linestatus \norder by l_returnflag, l_linestatus;\n\n\nwhen I run this against 6.4.2, the postgres process grows to upwards of\n1 GB of memory (at which point something overflows and it dumps core) -\nI watch it grow through 200 MB, 400 MB, 800 MB, dies somewhere near 1 GB\nof allocated memory).\n\nIf I take out a few of the \"sum\" expressions it gets better, removing\nsum_disk_price and sum_charge causes it to be only 600 MB and the query\nactually (eventually) completes. Takes about 10 minutes on my 500 MHz\nmachine with 256 MB core and 4 GB of swap.\n\nThe problem seems to be the memory allocation mechanism. Looking at a\ncall trace, it is doing some kind of \"sub query\" plan for each row in\nthe database. That means it does ExecEval and postquel_function and\npostquel_execute and all their friends for each row in the database. \nAllocating a couple hundred bytes for each one.\n\nThe problem is that none of these allocations are freed - they seem to\ndepend on the AllocSet to free them at the end of the transaction. This\nmeans it isn't a \"true\" leak, because the bytes are all freed at the\n(very) end of the transaction, but it does mean that the process grows\nto unreasonable size in the meantime. There is no need for this,\nbecause the individual expression results are aggregated as it goes\nalong, so the intermediate nodes can be freed.\n\nI spent half a day last week chasing down the offending palloc() calls\nand execution stacks sufficiently that I think I found the right places\nto put pfree() calls.\n\nAs a result, I have changes in the files:\n\nsrc/backend/executor/execUtils.c\nsrc/backend/executor/nodeResult.c \nsrc/backend/executor/nodeAgg.c \nsrc/backend/executor/execMain.c \n\npatches to these files are attached at the end of this message. 
These\nfiles are based on the 6.5.0 snapshot downloaded from ftp.postgreql.org\non 11 March 1999.\n\nApologies for sending patches to a non-released version. If anyone has\nproblems applying the patches, I can send the full files (I wanted to\navoid sending a 100K shell archive to the list). If anyone cares about\nreproducing my exact problem with the above table, I can provide the 100\nMB pg_dump file for download as well.\n\nSecondary Issue: the reason I did not use the 6.4.2 code to make my\nchanges is because the AllocSet calls in that one were particularly\negregious - they only had the skeleton of the allocsets code that exists\nin the 6.5 snapshots, so they were calling malloc() for all of the 8 and\n16 byte allocations that the above query causes.\n\nUsing the fixed code reduces the maximum memory requirement on the above\nquery to about 210 MB, and reduces the runtime to (an acceptable) 1.5\nminutes - a factor of more than 6x improvement on my 256 MB machine.\n\nNow the biggest part of the execution time is in the sort before the\naggregation (which isn't strictly needed, but that is an optimization\nfor another day).\n\nOpen Issue: there is still a small \"leak\" that I couldn't eliminate, I\nthink I chased it down to the constvalue allocated in\nexecQual::ExecTargetList(), but I couldn't figure out where to properly\nfree it. 8 bytes leaked was much better than 750 bytes, so I stopped\nbanging my head on that particular item.\n\nSecondary Open Issue: what I did have to do to get down to 210 MB of\ncore was reduce the minimum allocation size in AllocSet to 8 bytes from\n16 bytes. That reduces the 8 byte leak above to a true 8 byte, rather\nthan a 16 byte leak. Otherwise, I think the size was 280 MB (still a\nbig improvement on 1000+ MB). I only changed this in my code and I am\nnot including a changed mcxt.c for that.\n\nI hope my changes are understandable/reasonable. Enjoy.\n\nErik Riedel\nCarnegie Mellon University\nwww.cs.cmu.edu/~riedel\n\n--------------[aggregation_memory_patch.sh]-----------------------\n\n#! /bin/sh\n# This is a shell archive, meaning:\n# 1. Remove everything above the #! /bin/sh line.\n# 2. Save the resulting text in a file.\n# 3. Execute the file with /bin/sh (not csh) to create:\n#\texecMain.c.diff\n#\texecUtils.c.diff\n#\tnodeAgg.c.diff\n#\tnodeResult.c.diff\n# This archive created: Fri Mar 19 15:47:17 1999\nexport PATH; PATH=/bin:/usr/bin:$PATH\nif test -f 'execMain.c.diff'\nthen\n\techo shar: \"will not over-write existing file 'execMain.c.diff'\"\nelse\ncat << \\SHAR_EOF > 'execMain.c.diff'\n583c\n\t\n.\n398a\n\n.\n396a\n\t/* XXX - clean up some more from ExecutorStart() - er1p */\n\tif (NULL == estate->es_snapshot) {\n\t /* nothing to free */\n\t} else {\n\t if (estate->es_snapshot->xcnt > 0) { \n\t pfree(estate->es_snapshot->xip);\n\t }\n\t pfree(estate->es_snapshot);\n\t}\n\n\tif (NULL == estate->es_param_exec_vals) {\n\t /* nothing to free */\n\t} else {\n\t pfree(estate->es_param_exec_vals);\n\t estate->es_param_exec_vals = NULL;\n\t}\n\n.\nSHAR_EOF\nfi\nif test -f 'execUtils.c.diff'\nthen\n\techo shar: \"will not over-write existing file 'execUtils.c.diff'\"\nelse\ncat << \\SHAR_EOF > 'execUtils.c.diff'\n368a\n}\n\n/* ----------------\n *\t\tExecFreeExprContext\n * ----------------\n */\nvoid\nExecFreeExprContext(CommonState *commonstate)\n{\n\tExprContext *econtext;\n\n\t/* ----------------\n\t *\tget expression context. 
if NULL then this node has\n\t *\tnone so we just return.\n\t * ----------------\n\t */\n\tecontext = commonstate->cs_ExprContext;\n\tif (econtext == NULL)\n\t\treturn;\n\n\t/* ----------------\n\t *\tclean up memory used.\n\t * ----------------\n\t */\n\tpfree(econtext);\n\tcommonstate->cs_ExprContext = NULL;\n}\n\n/* ----------------\n *\t\tExecFreeTypeInfo\n * ----------------\n */\nvoid\nExecFreeTypeInfo(CommonState *commonstate)\n{\n\tTupleDesc tupDesc;\n\n\ttupDesc = commonstate->cs_ResultTupleSlot->ttc_tupleDescriptor;\n\tif (tupDesc == NULL)\n\t\treturn;\n\n\t/* ----------------\n\t *\tclean up memory used.\n\t * ----------------\n\t */\n\tFreeTupleDesc(tupDesc);\n\tcommonstate->cs_ResultTupleSlot->ttc_tupleDescriptor = NULL;\n.\n274a\n\n.\nSHAR_EOF\nfi\nif test -f 'nodeAgg.c.diff'\nthen\n\techo shar: \"will not over-write existing file 'nodeAgg.c.diff'\"\nelse\ncat << \\SHAR_EOF > 'nodeAgg.c.diff'\n376a\n\t\t\t\t\t\tpfree(oldVal); /* XXX - new, let's free the old datum - er1p */\n.\n374a\n\t\t\t\t\t\toldVal = value1[aggno]; /* XXX - save so we can free later - er1p */\n.\n112a\n\tDatum oldVal = (Datum) NULL; /* XXX - so that we can save and free on\neach iteration - er1p */\n.\nSHAR_EOF\nfi\nif test -f 'nodeResult.c.diff'\nthen\n\techo shar: \"will not over-write existing file 'nodeResult.c.diff'\"\nelse\ncat << \\SHAR_EOF > 'nodeResult.c.diff'\n278a\n\tpfree(resstate); node->resstate = NULL; /* XXX - new for us - er1p */\n.\n265a\n\tExecFreeExprContext(&resstate->cstate); /* XXX - new for us - er1p */\n\tExecFreeTypeInfo(&resstate->cstate); /* XXX - new for us - er1p */\n.\nSHAR_EOF\nfi\nexit 0\n#\tEnd of shell archive\n\n", "msg_date": "Fri, 19 Mar 1999 15:58:29 -0500 (EST)", "msg_from": "Erik Riedel <[email protected]>", "msg_from_op": true, "msg_subject": "aggregation memory leak and fix" }, { "msg_contents": "> when I run this against 6.4.2, the postgres process grows to upwards of\n> 1 GB of memory (at which point something overflows and it dumps core) -\n> I watch it grow through 200 MB, 400 MB, 800 MB, dies somewhere near 1 GB\n> of allocated memory).\n> \n> If I take out a few of the \"sum\" expressions it gets better, removing\n> sum_disk_price and sum_charge causes it to be only 600 MB and the query\n> actually (eventually) completes. Takes about 10 minutes on my 500 MHz\n> machine with 256 MB core and 4 GB of swap.\n\nWow, that is large.\n\n> The problem seems to be the memory allocation mechanism. Looking at a\n> call trace, it is doing some kind of \"sub query\" plan for each row in\n> the database. That means it does ExecEval and postquel_function and\n> postquel_execute and all their friends for each row in the database. \n> Allocating a couple hundred bytes for each one.\n\nI will admit we really haven't looked at executor issues recently. We\nhave had few bug reports in that area, so we normally have just left it\nalone. I was aware of some memory allocation issues with aggregates,\nbut I thought I fixed them in 6.5. Obviously not.\n\n> The problem is that none of these allocations are freed - they seem to\n> depend on the AllocSet to free them at the end of the transaction. This\n> means it isn't a \"true\" leak, because the bytes are all freed at the\n> (very) end of the transaction, but it does mean that the process grows\n> to unreasonable size in the meantime. 
There is no need for this,\n> because the individual expression results are aggregated as it goes\n> along, so the intermediate nodes can be freed.\n\nYes, but a terrible over-allocation.\n\n> I spent half a day last week chasing down the offending palloc() calls\n> and execution stacks sufficiently that I think I found the right places\n> to put pfree() calls.\n> \n> As a result, I have changes in the files:\n> \n> src/backend/executor/execUtils.c\n> src/backend/executor/nodeResult.c \n> src/backend/executor/nodeAgg.c \n> src/backend/executor/execMain.c \n> \n> patches to these files are attached at the end of this message. These\n> files are based on the 6.5.0 snapshot downloaded from ftp.postgreql.org\n> on 11 March 1999.\n> \n> Apologies for sending patches to a non-released version. If anyone has\n> problems applying the patches, I can send the full files (I wanted to\n> avoid sending a 100K shell archive to the list). If anyone cares about\n> reproducing my exact problem with the above table, I can provide the 100\n> MB pg_dump file for download as well.\n\nNo apologies necessary. Glad to have someone digging into that area of\nthe code. We will gladly apply your patches to 6.5. However, I request\nthat you send context diffs(diff -c). Normal diffs are just too\nerror-prone in application. Send them, and I will apply them right\naway.\n\n> Secondary Issue: the reason I did not use the 6.4.2 code to make my\n> changes is because the AllocSet calls in that one were particularly\n> egregious - they only had the skeleton of the allocsets code that exists\n> in the 6.5 snapshots, so they were calling malloc() for all of the 8 and\n> 16 byte allocations that the above query causes.\n\nGlad you used 6.5. Makes it easier to merge them into our next release.\n\n\n> Using the fixed code reduces the maximum memory requirement on the above\n> query to about 210 MB, and reduces the runtime to (an acceptable) 1.5\n> minutes - a factor of more than 6x improvement on my 256 MB machine.\n> \n> Now the biggest part of the execution time is in the sort before the\n> aggregation (which isn't strictly needed, but that is an optimization\n> for another day).\n\nNot sure why that is there? Perhaps for GROUP BY processing?\n\n> \n> Open Issue: there is still a small \"leak\" that I couldn't eliminate, I\n> think I chased it down to the constvalue allocated in\n> execQual::ExecTargetList(), but I couldn't figure out where to properly\n> free it. 8 bytes leaked was much better than 750 bytes, so I stopped\n> banging my head on that particular item.\n\nCan you give me the exact line? Is it the palloc(1)?\n\n\n> Secondary Open Issue: what I did have to do to get down to 210 MB of\n> core was reduce the minimum allocation size in AllocSet to 8 bytes from\n> 16 bytes. That reduces the 8 byte leak above to a true 8 byte, rather\n> than a 16 byte leak. Otherwise, I think the size was 280 MB (still a\n> big improvement on 1000+ MB). I only changed this in my code and I am\n> not including a changed mcxt.c for that.\n\nMaybe Jan, our memory optimizer, can discuss the 8 vs. 16 byte issue.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Mar 1999 17:22:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix" }, { "msg_contents": "\n> No apologies necessary. Glad to have someone digging into that area of\n> the code. We will gladly apply your patches to 6.5. However, I request\n> that you send context diffs(diff -c). Normal diffs are just too\n> error-prone in application. Send them, and I will apply them right\n> away.\n> \nContext diffs attached. This was due to my ignorance of diff. When I\nmade the other files, I though \"hmm, these could be difficult to apply\nif the code has changed a bit, wouldn't it be good if they included a\nfew lines before and after the fix\". Now I know \"-c\".\n\n> Not sure why that is there? Perhaps for GROUP BY processing?\n> \nRight, it is a result of the Group processing requiring sorted input. \nJust that it doesn't \"require\" sorted input, it \"could\" be a little more\nflexible and the sort wouldn't be necessary. Essentially this would be\na single \"AggSort\" node that did the aggregation while sorting (probably\nwith replacement selection rather than quicksort). This definitely\nwould require some code/smarts that isn't there today.\n\n> > think I chased it down to the constvalue allocated in\n> > execQual::ExecTargetList(), but I couldn't figure out where to properly\n> > free it. 8 bytes leaked was much better than 750 bytes, so I stopped\n> > banging my head on that particular item.\n> \n> Can you give me the exact line? Is it the palloc(1)?\n> \nNo, the 8 bytes seem to come from the ExecEvalExpr() call near line\n1530. Problem was when I tried to free these, I got \"not in AllocSet\"\nerrors, so something more complicated was going on.\n\nThanks.\n\nErik\n\n-----------[aggregation_memory_patch.sh]----------------------\n\n#! /bin/sh\n# This is a shell archive, meaning:\n# 1. Remove everything above the #! /bin/sh line.\n# 2. Save the resulting text in a file.\n# 3. Execute the file with /bin/sh (not csh) to create:\n#\texecMain.c.diff\n#\texecUtils.c.diff\n#\tnodeAgg.c.diff\n#\tnodeResult.c.diff\n# This archive created: Fri Mar 19 19:35:42 1999\nexport PATH; PATH=/bin:/usr/bin:$PATH\nif test -f 'execMain.c.diff'\nthen\n\techo shar: \"will not over-write existing file 'execMain.c.diff'\"\nelse\ncat << \\SHAR_EOF > 'execMain.c.diff'\n***\n/afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/611/src/backend/executor/\nexecMain.c\tThu Mar 11 23:59:11 1999\n---\n/afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/612/src/backend/executor/\nexecMain.c\tFri Mar 19 15:03:28 1999\n***************\n*** 394,401 ****\n--- 394,419 ----\n \n \tEndPlan(queryDesc->plantree, estate);\n \n+ \t/* XXX - clean up some more from ExecutorStart() - er1p */\n+ \tif (NULL == estate->es_snapshot) {\n+ \t /* nothing to free */\n+ \t} else {\n+ \t if (estate->es_snapshot->xcnt > 0) { \n+ \t pfree(estate->es_snapshot->xip);\n+ \t }\n+ \t pfree(estate->es_snapshot);\n+ \t}\n+ \n+ \tif (NULL == estate->es_param_exec_vals) {\n+ \t /* nothing to free */\n+ \t} else {\n+ \t pfree(estate->es_param_exec_vals);\n+ \t estate->es_param_exec_vals = NULL;\n+ \t}\n+ \n \t/* restore saved refcounts. */\n \tBufferRefCountRestore(estate->es_refcount);\n+ \n }\n \n void\n***************\n*** 580,586 ****\n \t/*\n \t *\tinitialize result relation stuff\n \t */\n! 
\n \tif (resultRelation != 0 && operation != CMD_SELECT)\n \t{\n \t\t/*\n--- 598,604 ----\n \t/*\n \t *\tinitialize result relation stuff\n \t */\n! \t\n \tif (resultRelation != 0 && operation != CMD_SELECT)\n \t{\n \t\t/*\nSHAR_EOF\nfi\nif test -f 'execUtils.c.diff'\nthen\n\techo shar: \"will not over-write existing file 'execUtils.c.diff'\"\nelse\ncat << \\SHAR_EOF > 'execUtils.c.diff'\n***\n/afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/611/src/backend/executor/\nexecUtils.c\tThu Mar 11 23:59:11 1999\n---\n/afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/612/src/backend/executor/\nexecUtils.c\tFri Mar 19 14:55:59 1999\n***************\n*** 272,277 ****\n--- 272,278 ----\n #endif\n \t\ti++;\n \t}\n+ \n \tif (len > 0)\n \t{\n \t\tExecAssignResultType(commonstate,\n***************\n*** 366,371 ****\n--- 367,419 ----\n \n \tpfree(projInfo);\n \tcommonstate->cs_ProjInfo = NULL;\n+ }\n+ \n+ /* ----------------\n+ *\t\tExecFreeExprContext\n+ * ----------------\n+ */\n+ void\n+ ExecFreeExprContext(CommonState *commonstate)\n+ {\n+ \tExprContext *econtext;\n+ \n+ \t/* ----------------\n+ \t *\tget expression context. if NULL then this node has\n+ \t *\tnone so we just return.\n+ \t * ----------------\n+ \t */\n+ \tecontext = commonstate->cs_ExprContext;\n+ \tif (econtext == NULL)\n+ \t\treturn;\n+ \n+ \t/* ----------------\n+ \t *\tclean up memory used.\n+ \t * ----------------\n+ \t */\n+ \tpfree(econtext);\n+ \tcommonstate->cs_ExprContext = NULL;\n+ }\n+ \n+ /* ----------------\n+ *\t\tExecFreeTypeInfo\n+ * ----------------\n+ */\n+ void\n+ ExecFreeTypeInfo(CommonState *commonstate)\n+ {\n+ \tTupleDesc tupDesc;\n+ \n+ \ttupDesc = commonstate->cs_ResultTupleSlot->ttc_tupleDescriptor;\n+ \tif (tupDesc == NULL)\n+ \t\treturn;\n+ \n+ \t/* ----------------\n+ \t *\tclean up memory used.\n+ \t * ----------------\n+ \t */\n+ \tFreeTupleDesc(tupDesc);\n+ \tcommonstate->cs_ResultTupleSlot->ttc_tupleDescriptor = NULL;\n }\n \n /* ----------------------------------------------------------------\nSHAR_EOF\nfi\nif test -f 'nodeAgg.c.diff'\nthen\n\techo shar: \"will not over-write existing file 'nodeAgg.c.diff'\"\nelse\ncat << \\SHAR_EOF > 'nodeAgg.c.diff'\n***\n/afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/611/src/backend/executor/\nnodeAgg.c\tThu Mar 11 23:59:11 1999\n---\n/afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/612/src/backend/executor/\nnodeAgg.c\tFri Mar 19 15:01:21 1999\n***************\n*** 110,115 ****\n--- 110,116 ----\n \t\t\t\tisNull2 = FALSE;\n \tbool\t\tqual_result;\n \n+ \tDatum oldVal = (Datum) NULL; /* XXX - so that we can save and free\non each iteration - er1p */\n \n \t/* ---------------------\n \t *\tget state info from node\n***************\n*** 372,379 ****\n--- 373,382 ----\n \t\t\t\t\t\t */\n \t\t\t\t\t\targs[0] = value1[aggno];\n \t\t\t\t\t\targs[1] = newVal;\n+ \t\t\t\t\t\toldVal = value1[aggno]; /* XXX - save so we can free later - er1p */\n \t\t\t\t\t\tvalue1[aggno] =\t(Datum) fmgr_c(&aggfns->xfn1,\n \t\t\t\t\t\t\t\t\t\t (FmgrValues *) args, &isNull1);\n+ \t\t\t\t\t\tpfree(oldVal); /* XXX - new, let's free the old datum - er1p */\n \t\t\t\t\t\tAssert(!isNull1);\n \t\t\t\t\t}\n \t\t\t\t}\nSHAR_EOF\nfi\nif test -f 'nodeResult.c.diff'\nthen\n\techo shar: \"will not over-write existing file 'nodeResult.c.diff'\"\nelse\ncat << \\SHAR_EOF > 'nodeResult.c.diff'\n***\n/afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/611/src/backend/executor/\nnodeResult.c\tThu Mar 11 23:59:12 
1999\n---\n/afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/612/src/backend/executor/\nnodeResult.c\tFri Mar 19 14:57:26 1999\n***************\n*** 263,268 ****\n--- 263,270 ----\n \t *\t\t is freed at end-transaction time. -cim 6/2/91\n \t * ----------------\n \t */\n+ \tExecFreeExprContext(&resstate->cstate); /* XXX - new for us - er1p */\n+ \tExecFreeTypeInfo(&resstate->cstate); /* XXX - new for us - er1p */\n \tExecFreeProjectionInfo(&resstate->cstate);\n \n \t/* ----------------\n***************\n*** 276,281 ****\n--- 278,284 ----\n \t * ----------------\n \t */\n \tExecClearTuple(resstate->cstate.cs_ResultTupleSlot);\n+ \tpfree(resstate); node->resstate = NULL; /* XXX - new for us - er1p */\n }\n \n void\nSHAR_EOF\nfi\nexit 0\n#\tEnd of shell archive\n\n", "msg_date": "Fri, 19 Mar 1999 19:43:02 -0500 (EST)", "msg_from": "Erik Riedel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix" }, { "msg_contents": "> \n> > No apologies necessary. Glad to have someone digging into that area of\n> > the code. We will gladly apply your patches to 6.5. However, I request\n> > that you send context diffs(diff -c). Normal diffs are just too\n> > error-prone in application. Send them, and I will apply them right\n> > away.\n> > \n> Context diffs attached. This was due to my ignorance of diff. When I\n> made the other files, I though \"hmm, these could be difficult to apply\n> if the code has changed a bit, wouldn't it be good if they included a\n> few lines before and after the fix\". Now I know \"-c\".\n\nApplied.\n\n> > Not sure why that is there? Perhaps for GROUP BY processing?\n> > \n> Right, it is a result of the Group processing requiring sorted input. \n> Just that it doesn't \"require\" sorted input, it \"could\" be a little more\n> flexible and the sort wouldn't be necessary. Essentially this would be\n> a single \"AggSort\" node that did the aggregation while sorting (probably\n> with replacement selection rather than quicksort). This definitely\n> would require some code/smarts that isn't there today.\n\nI think you will find make_groupPlan adds the sort as needed by the\nGROUP BY. I assume you are suggesting to do the aggregate/GROUP on unsorted\ndata, which is hard to do in a flexible way.\n\n> > > think I chased it down to the constvalue allocated in\n> > > execQual::ExecTargetList(), but I couldn't figure out where to properly\n> > > free it. 8 bytes leaked was much better than 750 bytes, so I stopped\n> > > banging my head on that particular item.\n> > \n> > Can you give me the exact line? Is it the palloc(1)?\n> > \n> No, the 8 bytes seem to come from the ExecEvalExpr() call near line\n> 1530. Problem was when I tried to free these, I got \"not in AllocSet\"\n> errors, so something more complicated was going on.\n\nYes, if you look inside ExecEvalExpr(), you will see it tries to get a\nvalue for the expression(Datum). It may return an int, float4, or a\nstring. In the last case, that is actually a pointer and not a specific\nvalue.\n\nSo, in some cases, the value can just be thrown away, or it may be a\npointer to memory that can be freed after the call to heap_formtuple()\nlater in the function. The trick is to find the function call in\nExecEvalExpr() that is allocating something, and conditionally free\nvalues[] after the call to heap_formtuple(). 
If you don't want to find it,\nperhaps you can send me enough info so I can see it here.\n\nI wonder whether it is the call to CreateTupleDescCopy() inside\nExecEvalVar()?\n\nAnother problem I just fixed is that fjIsNull was not being pfree'ed if\nit was used with >64 targets, but I don't think that affects you.\n\nI also assume you have run your recent patch through the\ntest/regression tests to see that it does not cause some other area to fail,\nright?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Mar 1999 20:58:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix" }, { "msg_contents": "> Yes, if you look inside ExecEvalExpr(), you will see it tries to get a\n> value for the expression(Datum). It may return an int, float4, or a\n> string. In the last case, that is actually a pointer and not a specific\n> value.\n> \n> So, in some cases, the value can just be thrown away, or it may be a\n> pointer to memory that can be freed after the call to heap_formtuple()\n> later in the function. The trick is to find the function call in\n> ExecEvalExpr() that is allocating something, and conditionally free\n> values[] after the call to heap_formtuple(). If you don't want find it,\n> perhaps you can send me enough info so I can see it here.\n> \n> I wonder whether it is the call to CreateTupleDescCopy() inside\n> ExecEvalVar()?\n\nI am now not totally sure about what I said above. The general plan,\nthough, is accurate: perhaps something is being allocated. I need\nto see the query again, which I don't have anymore.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Mar 1999 21:03:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix" }, { "msg_contents": "> \n> Platform: Alpha, Digital UNIX 4.0D\n> Software: PostgreSQL 6.4.2 and 6.5 snaphot (11 March 1999)\n> \n> I have a table as follows:\n> \n> Table = lineitem\n> +------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +------------------------+----------------------------------+-------+\n> | l_orderkey | int4 not null | 4 |\n> | l_partkey | int4 not null | 4 |\n> | l_suppkey | int4 not null | 4 |\n> | l_linenumber | int4 not null | 4 |\n> | l_quantity | float4 not null | 4 |\n> | l_extendedprice | float4 not null | 4 |\n> | l_discount | float4 not null | 4 |\n> | l_tax | float4 not null | 4 |\n> | l_returnflag | char() not null | 1 |\n> | l_linestatus | char() not null | 1 |\n> | l_shipdate | date | 4 |\n> | l_commitdate | date | 4 |\n> | l_receiptdate | date | 4 |\n> | l_shipinstruct | char() not null | 25 |\n> | l_shipmode | char() not null | 10 |\n> | l_comment | char() not null | 44 |\n> +------------------------+----------------------------------+-------+\n> Index: lineitem_index_\n> \n> that ends up having on the order of 500,000 rows (about 100 MB on disk). 
\n> \n> I then run an aggregation query as:\n> \n> --\n> -- Query 1\n> --\n> select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, \n> sum(l_extendedprice) as sum_base_price, \n> sum(l_extendedprice*(1-l_discount)) as sum_disc_price, \n> sum(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge, \n> avg(l_quantity) as avg_qty, avg(l_extendedprice) as avg_price, \n> avg(l_discount) as avg_disc, count(*) as count_order \n> from lineitem \n> where l_shipdate <= ('1998-12-01'::datetime - interval '90 day')::date \n> group by l_returnflag, l_linestatus \n> order by l_returnflag, l_linestatus;\n> \n\nOK, I do have the query. Please try removing the (1+l_tax) so it is\njust l_tax, and change the 1998... to just a simple date string, and see\nif the problem goes away. If we can find something specific in the\nquery that is causing the memory over-allocation, it is that much easier\nto find the cause.\n\nAlso, try removing all the arithmetic in the query or simplify the query\nto see if there is a certain part that is causing it. If it is really\nan 8-byte issue, it must be very small indeed, and only visible because\nyou have so much data, and are attentive.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Mar 1999 21:07:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix" }, { "msg_contents": "> select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, \n> sum(l_extendedprice) as sum_base_price, \n> sum(l_extendedprice*(1-l_discount)) as sum_disc_price, \n> sum(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge, \n> avg(l_quantity) as avg_qty, avg(l_extendedprice) as avg_price, \n> avg(l_discount) as avg_disc, count(*) as count_order \n> from lineitem \n> where l_shipdate <= ('1998-12-01'::datetime - interval '90 day')::date \n> group by l_returnflag, l_linestatus \n> order by l_returnflag, l_linestatus;\n\n\nOK, I have researched this. I think you will find that:\n\n\t('1998-12-01'::datetime - interval '90 day')::date \n\nis the cause. In datetime_mi(), you will see:\n\t\n\t dt1 = *datetime1;\n\t dt2 = *datetime2;\n\t \n\t result = palloc(sizeof(TimeSpan));\n ^^^^^^\n\t if (DATETIME_IS_RELATIVE(dt1))\n\t dt1 = SetDateTime(dt1);\n\t if (DATETIME_IS_RELATIVE(dt2))\n\t dt2 = SetDateTime(dt2);\n\nThis obviously shows us allocating a return value that probably is not\nfree'ed until the end of the query. TimeSpan is:\n\t \n\ttypedef struct\n\t{\n\t double time; /* all time units other than months and\n\t * years */\n\t int4 month; /* months and years, after time for\n\t * alignment */\n\t} TimeSpan;\n\nNow, we certainly could pre-compute this constant once before doing the\nquery, but we don't, and even if we did, this would not fix the case\nwhere a Var is involved in the expression.\n\nWhen we grab values directly from tuples, like Var, the tuples are\nauto-free'ed at the end, because they exist in tuples that we track. \nValues computed inside very deep functions are tough for us to free. 
In\nfact, this could be a very complex expression, with a variety of\ntemporary palloc'ed values used during the process.\n\nMy only quick solution would seem to be to add a new \"expression\" memory\ncontext, that can be cleared after every tuple is processed, clearing\nout temporary values allocated inside an expression. This probably\ncould be done very easily, because the entry/exit locations into the\nexpression system are very limited.\n\nIdeas, people?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Mar 1999 21:33:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix" }, { "msg_contents": "> \n> > No apologies necessary. Glad to have someone digging into that area of\n> > the code. We will gladly apply your patches to 6.5. However, I request\n> > that you send context diffs(diff -c). Normal diffs are just too\n> > error-prone in application. Send them, and I will apply them right\n> > away.\n> > \n> Context diffs attached. This was due to my ignorance of diff. When I\n> made the other files, I though \"hmm, these could be difficult to apply\n> if the code has changed a bit, wouldn't it be good if they included a\n> few lines before and after the fix\". Now I know \"-c\".\n\nWe are seeing regression failure on aggregates after the patches. It is\nhappening in nodeAgg.c, line 379:\n\n\n pfree(oldVal); /* XXX - new, let's free the old datum -$\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Mar 1999 07:50:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix" }, { "msg_contents": "> \n> > No apologies necessary. Glad to have someone digging into that area of\n> > the code. We will gladly apply your patches to 6.5. However, I request\n> > that you send context diffs(diff -c). Normal diffs are just too\n> > error-prone in application. Send them, and I will apply them right\n> > away.\n> > \n> Context diffs attached. This was due to my ignorance of diff. When I\n> made the other files, I though \"hmm, these could be difficult to apply\n> if the code has changed a bit, wouldn't it be good if they included a\n> few lines before and after the fix\". Now I know \"-c\".\n\nI have had to back out the nodeAgg.c part of the patch. The rest looks\nOK, partly because it is dealing with freeing expression context, and\npartly because I don't understand it all.\n\nThe nodeAgg.c part of the patch is clearly trying to free memory\nallocated as intermediate parts of the expression, and has to be solved\nby a more general solution as I discussed.\n\nIf you look in backend/utils/adt/*.c, you will see lots of intermediate\nmemory allocated. The allocations as part of the *in/*out functions\nare not a problem, because they are called only in other areas, and are\nfreed, but the other ones are probably not free'ed until statement\ntermination. I may need to specifically put those pfree's in their own\ncontext, and free them on tuple completion.\n\nI am waiting to hear what others say about my ideas. 
Feel free to keep\ndigging.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Mar 1999 08:17:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> My only quick solution would seem to be to add a new \"expression\" memory\n> context, that can be cleared after every tuple is processed, clearing\n> out temporary values allocated inside an expression.\n\nRight, this whole problem of growing backend memory use during a large\nSELECT (or COPY, or probably a few other things) is one of the things\nthat we were talking about addressing by revising the memory management\nstructure.\n\nI think what we want inside the executor is a distinction between\nstorage that must live to the end of the statement and storage that is\nonly needed while processing the current tuple. The second kind of\nstorage would go into a separate context that gets flushed every so\noften. (It could be every tuple, or every dozen or hundred tuples\ndepending on what seems the best tradeoff of cycles against memory\nusage.)\n\nI'm not sure that just two contexts is enough, either. For example in\n\tSELECT field1, SUM(field2) GROUP BY field1;\nthe working memory for the SUM aggregate could not be released after\neach tuple, but perhaps we don't want it to live for the whole statement\neither --- in that case we'd need a per-group context. (This particular\nexample isn't very convincing, because the same storage for the SUM\n*could* be recycled from group to group. But I don't know whether it\nactually *is* reused or not. If fresh storage is palloc'd for each\ninstantiation of SUM then we have a per-group leak in this scenario.\nIn any case, I'm not sure all aggregate functions have constant memory\nrequirements that would let them recycle storage across groups.)\n\nWhat we need to do is work out what the best set of memory context\ndefinitions is, and then decide on a strategy for making sure that\nlower-level routines allocate their return values in the right context.\nIt'd be nice if the lower-level routines could still call palloc() and\nnot have to worry about this explicitly --- otherwise we'll break not\nonly a lot of our own code but perhaps a lot of user code. 
(User-\nspecific data types and SPI code all use palloc, no?)\n\nI think it is too late to try to fix this for 6.5, but it ought to be a\ntop priority for 6.6.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Mar 1999 11:48:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > My only quick solution would seem to be to add a new \"expression\" memory\n> > context, that can be cleared after every tuple is processed, clearing\n> > out temporary values allocated inside an expression.\n> \n> Right, this whole problem of growing backend memory use during a large\n> SELECT (or COPY, or probably a few other things) is one of the things\n> that we were talking about addressing by revising the memory management\n> structure.\n> \n> I think what we want inside the executor is a distinction between\n> storage that must live to the end of the statement and storage that is\n> only needed while processing the current tuple. The second kind of\n> storage would go into a separate context that gets flushed every so\n> often. (It could be every tuple, or every dozen or hundred tuples\n> depending on what seems the best tradeoff of cycles against memory\n> usage.)\n> \n> I'm not sure that just two contexts is enough, either. For example in\n> \tSELECT field1, SUM(field2) GROUP BY field1;\n> the working memory for the SUM aggregate could not be released after\n> each tuple, but perhaps we don't want it to live for the whole statement\n> either --- in that case we'd need a per-group context. (This particular\n> example isn't very convincing, because the same storage for the SUM\n> *could* be recycled from group to group. But I don't know whether it\n> actually *is* reused or not. If fresh storage is palloc'd for each\n> instantiation of SUM then we have a per-group leak in this scenario.\n> In any case, I'm not sure all aggregate functions have constant memory\n> requirements that would let them recycle storage across groups.)\n> \n> What we need to do is work out what the best set of memory context\n> definitions is, and then decide on a strategy for making sure that\n> lower-level routines allocate their return values in the right context.\n> It'd be nice if the lower-level routines could still call palloc() and\n> not have to worry about this explicitly --- otherwise we'll break not\n> only a lot of our own code but perhaps a lot of user code. (User-\n> specific data types and SPI code all use palloc, no?)\n> \n> I think it is too late to try to fix this for 6.5, but it ought to be a\n> top priority for 6.6.\n\nLet me make an argument here.\n\nLet's suppose that we want to free all the memory used as expression\nintermediate values after each row is processed.\n\nIt is my understanding that all these are created in utils/adt/*.c\nfiles, and that the entry point to all those functions is via\nfmgr()/fmgr_c().\n\nSo, if we go into an expression memory context before calling\nfmgr/fmgr_c in the executor, and return to the normal context after the\nfunction call, all our intermediates are trapped in the expression\nmemory context.\n\nAt the end of each row, we just free the expression memory context. In\nalmost all cases, the data is stored in tuples, and we can free it. In\na few cases like aggregates, we have to save off the value we need to\nkeep before freeing the expression context. In fact, you could even\noptimize the cleanup to only do free'ing if\nsome expression memory was allocated. 
In most cases, it is not.\n\nIn fact the nodeAgg.c patch that I backed out attempted to do that,\nthough because there wasn't code that checked if the Datum was\npg_type.typbyval, it didn't work 100%.\n\nIn fact, a quick look at the executor shows:\n\t\n\t#$ grep \"fmgr(\" *.c |detab -t 4\n\texecUtils.c: predString = fmgr(F_TEXTOUT, &indexStruct->indpred);\n\tnodeGroup.c: val1 = fmgr(typoutput, attr1, typelem,\n\tnodeGroup.c: val2 = fmgr(typoutput, attr2, typelem,\n\tnodeUnique.c: val1 = fmgr(typoutput, attr1, typelem,\n\tnodeUnique.c: val2 = fmgr(typoutput, attr2, typelem,\n\tspi.c: return (fmgr(foutoid, val, typelem,\n\n\t#$ grep \"fmgr_c(\" *.c |detab -t 4\n\texecQual.c: return (Datum) fmgr_c(&fcache->func, (FmgrValues *)argV, isNull);\n\tnodeAgg.c: value1[aggno] = (Datum)\tfmgr_c(&aggfns->xfn1,\n\tnodeAgg.c: value2[aggno] = (Datum) fmgr_c(&aggfns->xfn2,\n\tnodeAgg.c: value1[aggno] = (Datum) fmgr_c(&aggfns->finalfn,\n\nThe fmgr(out*) calls are probably not an issue, because they are already\ncleaned up. The only issue are the fmgr_c calls. execQual is the MAJOR\none for all expressions, and the nodeAgg calls would have to have some\nsaving of the last entry done.\n\nSeems pretty straight-forward to me. The fact is we have a pretty clean\nmemory allocation system. Not sure if a redesign is necessary if we can\nclean up any per-tuple allocations, and I think this would do the trick.\n\nComments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 21 Mar 1999 14:20:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> What we need to do is work out what the best set of memory context\n>> definitions is, and then decide on a strategy for making sure that\n>> lower-level routines allocate their return values in the right context.\n\n> Let's suppose that we want to free all the memory used as expression\n> intermediate values after each row is processed.\n> It is my understanding that all these are created in utils/adt/*.c\n> files, and that the entry point to all those functions via\n> fmgr()/fmgr_c().\n\nThat's probably the bulk of the specific calls of palloc(). Someone\n(Jan?) did a scan of the code a while ago looking for palloc() calls,\nand there aren't that many outside of the data-type-specific functions.\nBut we'd have to look individually at all the ones that are elsewhere.\n\n> So, if we go into an expression memory context before calling\n> fmgr/fmgr_c in the executor, and return to the normal context after the\n> function call, all our intermediates are trapped in the expression\n> memory context.\n\nOK, so you're saying we leave the data-type-specific functions as is\n(calling palloc() to allocate their result areas), and make each call\nsite specifically responsible for setting the context that palloc() will\nallocate from? That could work, I think. We'd need to see what side\neffects it'd have on other uses of palloc().\n\nWhat we'd probably want is to use a stack discipline for the current\npalloc-target memory context: when you set the context, you get back the\nID of the old context, and you are supposed to restore that old context\nbefore returning.\n\n> At the end of each row, we just free the expression memory context. 
In\n> almost all cases, the data is stored in tuples, and we can free it. In\n> a few cases like aggregates, we have to save off the value we need to\n> keep before freeing the expression context.\n\nActually, nodeAgg would just have to set an appropriate context before\ncalling fmgr to execute the aggregate's transition functions, and then\nit wouldn't need an extra copy step. The results would come back in the\nright context already.\n\n> In fact, you could even optimize the cleanup to only do free'ing if\n> some expression memory was allocated. In most cases, it is not.\n\nJan's stuff should already fall through pretty quickly if there's\nnothing in the context, I think. Note that what we want to do between\ntuples is a \"context clear\" of the expression context, not a \"context\ndelete\" and then \"context create\" a new expression context. Context\nclear should be a pretty quick no-op if nothing's been allocated in that\ncontext...\n\n> In fact the nodeAgg.c patch that I backed out attempted to do that,\n> though because there wasn't code that checked if the Datum was\n> pg_type.typbyval, it didn't work 100%.\n\nRight. But if we approach it this way (clear the context at appropriate\ntimes) rather than thinking in terms of explicitly pfree'ing individual\nobjects, life gets much simpler. Also, if we insist on being able to\npfree individual objects inside a context, we can't use Jan's faster\nallocator! Remember, the reason it is faster and lower overhead is that\nit doesn't keep track of individual objects, only pools.\n\nI'd like to see us head in the direction of removing most of the\nexplicit pfree calls that exist now, and instead rely on clearing\nmemory contexts at appropriate times in order to manage memory.\nThe fewer places where we need pfree, the more contexts can be run\nwith the low-overhead space allocator. Also, the fewer explicit\npfrees we need, the simpler and more reliable the code gets.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Mar 1999 15:50:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> What we need to do is work out what the best set of memory context\n> >> definitions is, and then decide on a strategy for making sure that\n> >> lower-level routines allocate their return values in the right context.\n> \n> > Let's suppose that we want to free all the memory used as expression\n> > intermediate values after each row is processed.\n> > It is my understanding that all these are created in utils/adt/*.c\n> > files, and that the entry point to all those functions via\n> > fmgr()/fmgr_c().\n> \n> That's probably the bulk of the specific calls of palloc(). Someone\n> (Jan?) did a scan of the code a while ago looking for palloc() calls,\n> and there aren't that many outside of the data-type-specific functions.\n> But we'd have to look individually at all the ones that are elsewhere.\n> \n> > So, if we go into an expression memory context before calling\n> > fmgr/fmgr_c in the executor, and return to the normal context after the\n> > function call, all our intermediates are trapped in the expression\n> > memory context.\n> \n> OK, so you're saying we leave the data-type-specific functions as is\n> (calling palloc() to allocate their result areas), and make each call\n> site specifically responsible for setting the context that palloc() will\n> allocate from? That could work, I think. 
We'd need to see what side\n> effects it'd have on other uses of palloc().\n> \n> What we'd probably want is to use a stack discipline for the current\n> palloc-target memory context: when you set the context, you get back the\n> ID of the old context, and you are supposed to restore that old context\n> before returning.\n> \n> > At the end of each row, we just free the expression memory context. In\n> > almost all cases, the data is stored in tuples, and we can free it. In\n> > a few cases like aggregates, we have to save off the value we need to\n> > keep before freeing the expression context.\n> \n> Actually, nodeAgg would just have to set an appropriate context before\n> calling fmgr to execute the aggregate's transition functions, and then\n> it wouldn't need an extra copy step. The results would come back in the\n> right context already.\n> \n> > In fact, you could even optimize the cleanup to only do free'ing if\n> > some expression memory was allocated. In most cases, it is not.\n> \n> Jan's stuff should already fall through pretty quickly if there's\n> nothing in the context, I think. Note that what we want to do between\n> tuples is a \"context clear\" of the expression context, not a \"context\n> delete\" and then \"context create\" a new expression context. Context\n> clear should be a pretty quick no-op if nothing's been allocated in that\n> context...\n> \n> > In fact the nodeAgg.c patch that I backed out attempted to do that,\n> > though because there wasn't code that checked if the Datum was\n> > pg_type.typbyval, it didn't work 100%.\n> \n> Right. But if we approach it this way (clear the context at appropriate\n> times) rather than thinking in terms of explicitly pfree'ing individual\n> objects, life gets much simpler. Also, if we insist on being able to\n> pfree individual objects inside a context, we can't use Jan's faster\n> allocator! Remember, the reason it is faster and lower overhead is that\n> it doesn't keep track of individual objects, only pools.\n> \n> I'd like to see us head in the direction of removing most of the\n> explicit pfree calls that exist now, and instead rely on clearing\n> memory contexts at appropriate times in order to manage memory.\n> The fewer places where we need pfree, the more contexts can be run\n> with the low-overhead space allocator. Also, the fewer explicit\n> pfrees we need, the simpler and more reliable the code gets.\n\nTom, are you saying you agree with my approach, and I should give it a\ntry?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Mar 1999 00:07:50 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, are you saying you agree with my approach, and I should give it a\n> try?\n\nIf what I said was the same as what you were thinking, then yeah ;-)\n\nBut I still think it's too late to try to fit this into 6.5, unless\nthe changes turn out to be way more localized than I expect.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Mar 1999 10:17:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Tom, are you saying you agree with my approach, and I should give it a\n> > try?\n> \n> If what I said was the same as what you were thinking, then yeah ;-)\n> \n> But I still think it's too late to try to fit this into 6.5, unless\n> the changes turn out to be way more localized than I expect.\n\nYes, very localized. Only a few lines.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Mar 1999 11:50:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix" }, { "msg_contents": "> --\n> -- Query 1\n> --\n> select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, \n> sum(l_extendedprice) as sum_base_price, \n> sum(l_extendedprice*(1-l_discount)) as sum_disc_price, \n> sum(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge, \n> avg(l_quantity) as avg_qty, avg(l_extendedprice) as avg_price, \n> avg(l_discount) as avg_disc, count(*) as count_order \n> from lineitem \n> where l_shipdate <= ('1998-12-01'::datetime - interval '90 day')::date \n> group by l_returnflag, l_linestatus \n> order by l_returnflag, l_linestatus;\n> \n> \n> when I run this against 6.4.2, the postgres process grows to upwards of\n> 1 GB of memory (at which point something overflows and it dumps core) -\n> I watch it grow through 200 MB, 400 MB, 800 MB, dies somewhere near 1 GB\n> of allocated memory).\n> \n\nHere is my first attempt at fixing the expression memory leak you\nmentioned. I have run it through the regression tests, and it seems to\nbe harmless there.\n\nI am interested to see if it fixes the expression leak you saw. I have\nnot committed this yet. I want to look at it some more.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: execQual.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/executor/execQual.c,v\nretrieving revision 1.50\ndiff -c -r1.50 execQual.c\n*** execQual.c\t1999/03/20 02:07:31\t1.50\n--- execQual.c\t1999/03/24 06:09:25\n***************\n*** 51,56 ****\n--- 51,57 ----\n #include \"utils/fcache2.h\"\n #include \"utils/mcxt.h\"\n #include \"utils/memutils.h\"\n+ #include \"utils/portal.h\"\n \n \n /*\n***************\n*** 65,70 ****\n--- 66,74 ----\n bool\t\texecConstByVal;\n int\t\t\texecConstLen;\n \n+ bool\t\tallocatedQualContext;\n+ bool\t\tallocatedQualNesting;\n+ \n /* static functions decls */\n static Datum ExecEvalAggref(Aggref *aggref, ExprContext *econtext, bool *isNull);\n static Datum ExecEvalArrayRef(ArrayRef *arrayRef, ExprContext *econtext,\n***************\n*** 801,814 ****\n \telse\n \t{\n \t\tint\t\t\ti;\n \n \t\tif (isDone)\n \t\t\t*isDone = true;\n \t\tfor (i = 0; i < fcache->nargs; i++)\n \t\t\tif (fcache->nullVect[i] == true)\n \t\t\t\t*isNull = true;\n \n! \t\treturn (Datum) fmgr_c(&fcache->func, (FmgrValues *) argV, isNull);\n \t}\n }\n \n--- 805,852 ----\n \telse\n \t{\n \t\tint\t\t\ti;\n+ \t\tDatum\t\td;\n+ \t\tchar\t\tpname[64];\n+ \t\tPortal \t\tqual_portal;\n+ \t\tMemoryContext oldcxt;\n \n \t\tif (isDone)\n \t\t\t*isDone = true;\n \t\tfor (i = 0; i < fcache->nargs; i++)\n \t\t\tif (fcache->nullVect[i] == true)\n \t\t\t\t*isNull = true;\n+ \n+ \t\t/*\n+ \t\t * Assign adt *.c memory in separate context to prevent\n+ \t\t * unbounded memory growth in large queries that use functions.\n+ \t\t * We clear this memory after the qual has been completed.\n+ \t\t * bjm 1999/03/24\n+ \t\t */\n+ \t\tstrcpy(pname, \"<Qual manager>\");\n+ \t\tqual_portal = GetPortalByName(pname);\n+ \t\tif (!PortalIsValid(qual_portal))\n+ \t\t{\n+ \t\t\tqual_portal = CreatePortal(pname);\n+ \t\t\tAssert(PortalIsValid(qual_portal));\n \n! \t\t\toldcxt = MemoryContextSwitchTo(\n! \t\t\t\t\t(MemoryContext) PortalGetHeapMemory(qual_portal));\n! \t\t\tStartPortalAllocMode(DefaultAllocMode, 0);\n! \t\t\tMemoryContextSwitchTo(oldcxt);\n! \n! \t \t\tallocatedQualContext = true;\n! \t \t\tallocatedQualNesting = 0;\n! \t\t}\n! \t\tallocatedQualNesting++;\n! \n! \t\toldcxt = MemoryContextSwitchTo(\n! \t\t\t\t(MemoryContext) PortalGetHeapMemory(qual_portal));\n! \n! \t\td = (Datum) fmgr_c(&fcache->func, (FmgrValues *) argV, isNull);\n! \n! \t\tMemoryContextSwitchTo(oldcxt);\n! \t\tallocatedQualNesting--;\n! \t\treturn d;\n \t}\n }\n \n***************\n*** 1354,1359 ****\n--- 1392,1399 ----\n {\n \tList\t *clause;\n \tbool\t\tresult;\n+ \tchar\t\tpname[64];\n+ \tPortal \t\tqual_portal;\n \n \t/*\n \t *\tdebugging stuff\n***************\n*** 1387,1396 ****\n \t\t\tbreak;\n \t}\n \n \t/*\n \t *\tif result is true, then it means a clause failed so we\n \t *\treturn false. if result is false then it means no clause\n! 
\t *\tfailed so we return true.\n \t */\n \tif (result == true)\n \t\treturn false;\n--- 1427,1459 ----\n \t\t\tbreak;\n \t}\n \n+ \tif (allocatedQualContext && allocatedQualNesting == 0)\n+ \t{\n+ \t\tMemoryContext oldcxt;\n+ \n+ \t\tstrcpy(pname, \"<Qual manager>\");\n+ \t\tqual_portal = GetPortalByName(pname);\n+ \t\t/*\n+ \t\t *\tallocatedQualContext may have been improperly set from\n+ \t\t *\tfrom a previous run.\n+ \t\t */\n+ \t\tif (PortalIsValid(qual_portal))\n+ \t\t{\n+ \t\t\toldcxt = MemoryContextSwitchTo(\n+ \t\t\t\t\t(MemoryContext) PortalGetHeapMemory(qual_portal));\n+ \t\t \t\n+ EndPortalAllocMode();\n+ StartPortalAllocMode(DefaultAllocMode, 0);\n+ \n+ \t\t\tMemoryContextSwitchTo(oldcxt);\n+ \t\t}\n+ \t\tallocatedQualContext = false;\n+ \t}\n+ \n \t/*\n \t *\tif result is true, then it means a clause failed so we\n \t *\treturn false. if result is false then it means no clause\n! \t *\tfailed so we return true. ...Yikes, who wrote that?\n \t */\n \tif (result == true)\n \t\treturn false;", "msg_date": "Wed, 24 Mar 1999 01:11:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix" }, { "msg_contents": "\n> I am interested to see if it fixes the expression leak you saw. I have\n> not committed this yet. I want to look at it some more.\n> \nI'm afraid that this doesn't seem to have any effect on my query.\n\nLooking at your code, I think the problem is that most of the\nallocations in my query are on the top part of the if statement that\nyou modified (i.e. the == SQLlanguageId part). Below is a snippet of\na trace from my query, with approximate line numbers for execQual.c\nwith your patch applied:\n\n(execQual) language == SQLlanguageId (execQual.c:757)\n(execQual) execute postquel_function (execQual.c:759)\n(mcxt) MemoryContextAlloc 32 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 16 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 528 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 56 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 88 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 24 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 8 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 65 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 48 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 8 bytes in ** Blank Portal **-heap\n(execQual) else clause NOT SQLlanguageId (execQual.c:822)\n(execQual) install qual memory context (execQual.c:858)\n(execQual) exit qual context (execQual.c:862)\n(mcxt) MemoryContextAlloc 60 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextFree in ** Blank Portal **-heap freed 16 bytes\n(mcxt) MemoryContextFree in ** Blank Portal **-heap freed 64 bytes\n(mcxt) MemoryContextFree in ** Blank Portal **-heap freed 64 bytes\n(mcxt) MemoryContextFree in ** Blank Portal **-heap freed 528 bytes\n(mcxt) MemoryContextFree in ** Blank Portal **-heap freed 16 bytes\n(execQual) return from postquel_function (execQual.c:764)\n(execQual) return from ExecEvalFuncArgs (execQual.c:792)\n(execQual) else clause NOT SQLlanguageId (execQual.c:822)\n(execQual) install qual memory context (execQual.c:858)\n(execQual) exit qual context (execQual.c:862)\n(mcxt) MemoryContextAlloc 108 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 108 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextFree in ** Blank Portal **-heap freed 128 bytes\n(execQual) else clause NOT 
SQLlanguageId (execQual.c:822)\n(execQual) install qual memory context (execQual.c:858)\n(mcxt) MemoryContextAlloc 8 bytes in <Qual manager>-heap\n(execQual) exit qual context (execQual.c:862)\n\n<pattern repeats>\n\n(execQual) language == SQLlanguageId (execQual.c:757)\n(execQual) execute postquel_function (execQual.c:759)\n(mcxt) MemoryContextAlloc 32 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 16 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 528 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 56 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 88 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 24 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 8 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 65 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 48 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 8 bytes in ** Blank Portal **-heap\n(execQual) else clause NOT SQLlanguageId (execQual.c:822)\n(execQual) install qual memory context (execQual.c:858)\n(execQual) exit qual context (execQual.c:862)\n(mcxt) MemoryContextAlloc 60 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextFree in ** Blank Portal **-heap freed 16 bytes\n(mcxt) MemoryContextFree in ** Blank Portal **-heap freed 64 bytes\n(mcxt) MemoryContextFree in ** Blank Portal **-heap freed 64 bytes\n(mcxt) MemoryContextFree in ** Blank Portal **-heap freed 528 bytes\n(mcxt) MemoryContextFree in ** Blank Portal **-heap freed 16 bytes\n(execQual) return from postquel_function (execQual.c:764)\n(execQual) return from ExecEvalFuncArgs (execQual.c:792)\n(execQual) else clause NOT SQLlanguageId (execQual.c:822)\n(execQual) install qual memory context (execQual.c:858)\n(execQual) exit qual context (execQual.c:862)\n(mcxt) MemoryContextAlloc 108 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextAlloc 108 bytes in ** Blank Portal **-heap\n(mcxt) MemoryContextFree in ** Blank Portal **-heap freed 128 bytes\n(execQual) else clause NOT SQLlanguageId (execQual.c:822)\n(execQual) install qual memory context (execQual.c:858)\n(mcxt) MemoryContextAlloc 8 bytes in <Qual manager>-heap\n(execQual) exit qual context (execQual.c:862)\n\n\nthe MemoryContext lines give the name of the portal where each\nallocation is happening - you see that your Qual manager only captures\na very small number (one) of the allocations, the rest are in the\nupper part of the if statement.\n\nNote that I also placed a printf next to your EndPortalAllocMode() and\nStartPortalAllocMode() fix in ExecQual() - I believe this is what is\nsupposed to clear the portal and free the memory - and that printf\nnever appears in the above trace.\n\nSorry if the trace is a little confusing, but I hope that it helps you\nzero in.\n\nErik\n\n\n\n\n\n", "msg_date": "Wed, 24 Mar 1999 13:05:58 -0500 (EST)", "msg_from": "Erik Riedel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] aggregation memory leak and fix" } ]
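A note on the approach this thread converges on: the point of a per-tuple "expression" context is that nothing inside the expression code ever has to pfree individual Datums; whatever was palloc'd while evaluating one tuple is released in a single cheap reset between tuples. The stand-alone C sketch below illustrates only that pattern. ArenaContext, arena_alloc() and arena_reset() are invented stand-ins for illustration, not PostgreSQL's actual MemoryContext or portal API.

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for a memory context: just a linked list of blocks. */
typedef struct Block
{
	struct Block *next;
} Block;

typedef struct
{
	Block	   *blocks;
} ArenaContext;

static void *
arena_alloc(ArenaContext *cxt, size_t size)
{
	/* alignment is ignored here for brevity */
	Block	   *b = malloc(sizeof(Block) + size);

	b->next = cxt->blocks;		/* chain the block so reset can find it */
	cxt->blocks = b;
	return (char *) b + sizeof(Block);
}

/* The analogue of clearing the per-tuple context: free everything at once. */
static void
arena_reset(ArenaContext *cxt)
{
	while (cxt->blocks != NULL)
	{
		Block	   *next = cxt->blocks->next;

		free(cxt->blocks);
		cxt->blocks = next;
	}
}

int
main(void)
{
	ArenaContext per_tuple = {NULL};
	int			tuple;

	for (tuple = 0; tuple < 1000000; tuple++)
	{
		/*
		 * Stands in for pass-by-reference function results, e.g. the
		 * TimeSpan palloc'd inside datetime_mi() for every row above.
		 */
		char	   *tmp = arena_alloc(&per_tuple, 32);

		sprintf(tmp, "result for tuple %d", tuple);

		/* one call per tuple; no per-Datum bookkeeping anywhere */
		arena_reset(&per_tuple);
	}
	printf("all tuples processed without growing memory\n");
	return 0;
}

The sketch also shows why Tom's remark about Jan's allocator matters: a reset that drops whole pools at once is only safe when no caller expects to pfree individual objects inside that context.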
[ { "msg_contents": "I have not been able to track down any information on coalesce nor get\nPostgreSQL execute it. Where is this documented?\n\n\t-----Original Message-----\n\tFrom:\tZeugswetter Andreas IZ5\n[SMTP:[email protected]]\n\tSent:\tWednesday, March 17, 1999 1:29 AM\n\tTo:\[email protected]\n\tSubject:\tRe: [HACKERS] Associative Operators? (Was: Re:\n[NOVICE] Out of f rying pan, into fire)\n\n\t> \n\t> >> I tried to create a C function to overcome this but noticed\nthat if any\n\t> >> parameter in my C function is NULL then the C function always\nreturns\n\t> NULL.\n\t> >> I saw some references in the archives about this issue but was\nunable\n\t> to\n\t> >> determine where it was left. What is the status of this issue?\n\t> \n\tYes, this is current behavior.\n\n\t\t> Would a compromise be to add DECODE and NVL ?\n\n\tThe Standard has the more flexible function COALESCE, which is\nalready\n\timplemented in postgresql.\n\tSimply say\ncoalesce(field_that_can_be_null,value_to_return_insteadof_null)\n\n\tAndreas \n", "msg_date": "Fri, 19 Mar 1999 16:16:06 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Associative Operators? (Was: Re: [NOVICE] Out of f\n\trying pan, into fire)" }, { "msg_contents": "> I have not been able to track down any information on coalesce nor get\n> PostgreSQL execute it. Where is this documented?\n\nHmm. I seem to not have put it into the docs yet. Look at any SQL book\nwritten in the last few years for details on its usage, or ask here.\n\nbtw, COALESCE() is a special case of CASE, which is also implemented,\nbut which currently has problems with handling results from multiple\ntables.\n\n - Tom\n", "msg_date": "Fri, 26 Mar 1999 15:30:23 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Associative Operators? (Was: Re: [NOVICE] Out of frying\n\tpan, into fire)" } ]
[ { "msg_contents": "The list of enhancements for version 6.5 is amazing. My thanks to everyone\nfor the support they are giving this database. I am finding a pleasure to\nwork with PostgreSQL. Please excuse me for bringing these two issues up\nagain but they are still cause me some problems:\n\n1)\tif a C function call in a SQL statement contains a null parameter\nvalue, the C function returns NULL. I would like to be able to accept a\nNULL parameter value in my C function and still return a value. The plpgsql\nlanguage supports this. I am wondering if the C interface could work the\nsame way the plpgsql language works and allow null values to be converted to\nnon null values. \n\n2)\tAccess 97 is generating the following SQL statement: 'select\n\"orderlines\".\"orderlinesid\" from \"orderlines\" where (NULL = \"orderid\")'.\nThis fails in PostgreSQL and causes an error in my Access97 application.\nThis happens when I try an insert a new record using a form that contains a\nparent child relationship on orders and orderlines. What are others doing\nto work around this problem? One suggested work around was the following\nenhancement to gram.y:\n\n\t| NULL_P '=' a_expr\n\t { $$ = makeA_Expr(ISNULL,\nNULL, $3,\n\tNULL); }\n\n\tI have made this enhancement to my version 6.4.2 and it solved my\nproblem. I would like to see this issue resolved in version 6.5 of\nPostgreSQL?\n\nThanks, Michael\n", "msg_date": "Fri, 19 Mar 1999 16:32:48 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Additional requests for version 6.5" }, { "msg_contents": "> 2)\tAccess 97 is generating the following SQL statement: 'select\n> \"orderlines\".\"orderlinesid\" from \"orderlines\" where (NULL = \"orderid\")'.\n> This fails in PostgreSQL and causes an error in my Access97 application.\n> This happens when I try an insert a new record using a form that contains a\n> parent child relationship on orders and orderlines. What are others doing\n> to work around this problem? One suggested work around was the following\n> enhancement to gram.y:\n> \n> \t| NULL_P '=' a_expr\n> \t { $$ = makeA_Expr(ISNULL,\n> NULL, $3,\n> \tNULL); }\n> \n> \tI have made this enhancement to my version 6.4.2 and it solved my\n> problem. I would like to see this issue resolved in version 6.5 of\n> PostgreSQL?\n\nIt is in the 6.5 tree already. It is generating a shift/reduce\nconflict, so someone is going to figure out how to fix that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Mar 1999 17:44:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Additional requests for version 6.5" } ]
[ { "msg_contents": "I pulled the latest version of the source tree yesterday and complied\nPosgtreSQL 6.5. The get the following error on any select statement\ncontaining min() and max():\n\nmp=> select max(addressid) from addresses;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\n\nI am running Red Hat 5.1.\n\nFYI, I also noticed failures in the regressions tests for int2 and int4 and\nsome others (int8 was okay). I did not dig very deep into this but the only\nerror I could see with int2 and int4 occurred when the value that was being\ninserted into the table was too large for the field. For example, inserting\n100000 into an int2 field.\n\nThanks, Michael\n", "msg_date": "Sat, 20 Mar 1999 01:08:33 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "min() and max() causing aborts" }, { "msg_contents": "> I pulled the latest version of the source tree yesterday and complied\n> PosgtreSQL 6.5. The get the following error on any select statement\n> containing min() and max():\n> \n> mp=> select max(addressid) from addresses;\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally before or\n> while processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n> \n> \n> I am running Red Hat 5.1.\n> \n> FYI, I also noticed failures in the regressions tests for int2 and int4 and\n> some others (int8 was okay). I did not dig very deep into this but the only\n> error I could see with int2 and int4 occurred when the value that was being\n> inserted into the table was too large for the field. For example, inserting\n> 100000 into an int2 field.\n> \n\nI recommend a clean compile and initdb to see if that fixes it. Do you\nsee anything in the postmaster log file?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Mar 1999 07:41:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] min() and max() causing aborts" }, { "msg_contents": "> I pulled the latest version of the source tree yesterday and complied\n> PosgtreSQL 6.5. The get the following error on any select statement\n> containing min() and max():\n> \n> mp=> select max(addressid) from addresses;\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally before or\n> while processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n\nIt was my commit of someone's memory cleanups yesterday that broke it. \nI will check into it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Mar 1999 07:49:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] min() and max() causing aborts" }, { "msg_contents": "> I pulled the latest version of the source tree yesterday and complied\n> PosgtreSQL 6.5. The get the following error on any select statement\n> containing min() and max():\n> \n> mp=> select max(addressid) from addresses;\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally before or\n> while processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n\nI have backed out the part of the patch I think was the problem.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Mar 1999 08:17:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] min() and max() causing aborts" } ]
[ { "msg_contents": "> I'm a bit surprised. I thought there would be a 6.4.3. When will 6.5 be\n> released?\n> \n> > system. This is all preformed without having to allocate a lock for\n> \n> ....performed...\n> \n> > Numeric data type: We now have a true numeric data type, with\n> > user-specified precision.\n> \n> I thought this was not in 6.5. I was communicating with Jan Wieck who\n> said he didn't have the time to do it for release 6.5.\n\nWe are not releasing a 6.4.3. We decided that if we didn't do a lot of\ntesting, it may be less stable than 6.4.2, and we didn't want to expend\ntime on testing when we should be working on 6.5.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Mar 1999 07:48:29 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5 Feature list and summary" }, { "msg_contents": ">\n> > I'm a bit surprised. I thought there would be a 6.4.3. When will 6.5 be\n> > released?\n> >\n> > > system. This is all preformed without having to allocate a lock for\n> >\n> > ....performed...\n> >\n> > > Numeric data type: We now have a true numeric data type, with\n> > > user-specified precision.\n> >\n> > I thought this was not in 6.5. I was communicating with Jan Wieck who\n> > said he didn't have the time to do it for release 6.5.\n>\n> We are not releasing a 6.4.3. We decided that if we didn't do a lot of\n> testing, it may be less stable than 6.4.2, and we didn't want to expend\n> time on testing when we should be working on 6.5.\n\n NUMERIC is in v6.5. But it is not the final implementation I\n wanted. Sometimes I'll implement it again from scratch. The\n new implementation will be a true replacement where only the\n internal storage format changes. It will store the value as\n short integers with base 10000 what's looking strange for\n human beeings but is much better for fast computations.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 22 Mar 1999 12:05:52 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 Feature list and summary" } ]
[ { "msg_contents": "I recompiled everything several time, did an initdb each time, reloaded my\ndatabase, and did a vacuum. No I did not see anything in the postmaster log\nfile. \n\nI updated my cvs tree this morning. And rebuild everything again. I\ndeleted my data directory and in initdb followed by:\n\ncreatedb mp\npsql -e < mp.out\t\t\t-- dump from 6.4.2\nConnection to database 'postgres' failed.\nFATAL 1: Database postgres does not exist in pg_database\n\nI cant even load data with the lastest set of changes. There are no\nmessages in the postgres .log file or .err files.\n\nThere are still failures in the regression tests:\n\nboolean .. ok\nchar .. ok\nname .. ok\nvarchar .. ok\ntext .. ok\nstrings .. ok\nint2 .. failed\nint4 .. failed\nint8 .. ok\noid .. ok\nfloat4 .. ok\nfloat8 .. failed\nnumerology .. ok\npoint .. ok\nlseg .. ok\nbox .. ok\npath .. ok\npolygon .. ok\ncircle .. ok\ngeometry .. failed\ntimespan .. ok\ndatetime .. ok\nreltime .. ok\nabstime .. ok\ntinterval .. ok\nhorology .. ok\ninet .. ok\ncomments .. ok\nopr_sanity .. ok\ncreate_function_1 .. ok\ncreate_type .. ok\ncreate_table .. ok\ncreate_function_2 .. ok\nconstraints .. ok\ntriggers .. failed\ncopy .. ok\ncreate_misc .. ok\ncreate_aggregate .. ok\ncreate_operator .. ok\ncreate_view .. ok\ncreate_index .. ok\nsanity_check .. ok\nerrors .. ok\nselect .. ok\nselect_into .. ok\nselect_distinct .. ok\nselect_distinct_on .. ok\nselect_implicit .. ok\nselect_having .. failed\nsubselect .. ok\nunion .. ok\ncase .. ok\njoin .. ok\naggregates .. failed\ntransactions .. ok\nrandom .. ok\nportals .. ok\nmisc .. failed\narrays .. ok\nbtree_index .. ok\nhash_index .. ok\nselect_views .. ok\nalter_table .. ok\nportals_p2 .. ok\nrules .. ok\nlimit .. ok\ninstall_plpgsql .. ok\nplpgsql .. ok\ntemp .. ok\n\n\n\t-----Original Message-----\n\tFrom:\tBruce Momjian [SMTP:[email protected]]\n\tSent:\tSaturday, March 20, 1999 5:41 AM\n\tTo:\tMichael Davis\n\tCc:\[email protected]\n\tSubject:\tRe: [HACKERS] min() and max() causing aborts\n\n\t> I pulled the latest version of the source tree yesterday and\ncomplied\n\t> PosgtreSQL 6.5. The get the following error on any select\nstatement\n\t> containing min() and max():\n\t> \n\t> mp=> select max(addressid) from addresses;\n\t> pqReadData() -- backend closed the channel unexpectedly.\n\t> This probably means the backend terminated abnormally\nbefore or\n\t> while processing the request.\n\t> We have lost the connection to the backend, so further processing\nis\n\t> impossible. Terminating.\n\t> \n\t> \n\t> I am running Red Hat 5.1.\n\t> \n\t> FYI, I also noticed failures in the regressions tests for int2 and\nint4 and\n\t> some others (int8 was okay). I did not dig very deep into this\nbut the only\n\t> error I could see with int2 and int4 occurred when the value that\nwas being\n\t> inserted into the table was too large for the field. For example,\ninserting\n\t> 100000 into an int2 field.\n\t> \n\n\tI recommend a clean compile and initdb to see if that fixes it. Do\nyou\n\tsee anything in the postmaster log file?\n\n\t-- \n\t Bruce Momjian | http://www.op.net/~candle\n\t [email protected] | (610) 853-3000\n\t + If your life is a hard drive, | 830 Blythe Avenue\n\t + Christ can be your backup. 
| Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Sat, 20 Mar 1999 15:29:37 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] min() and max() causing aborts" }, { "msg_contents": "Fixed this morning at 8am EST.\n\n\n> I recompiled everything several time, did an initdb each time, reloaded my\n> database, and did a vacuum. No I did not see anything in the postmaster log\n> file. \n> \n> I updated my cvs tree this morning. And rebuild everything again. I\n> deleted my data directory and in initdb followed by:\n> \n> createdb mp\n> psql -e < mp.out\t\t\t-- dump from 6.4.2\n> Connection to database 'postgres' failed.\n> FATAL 1: Database postgres does not exist in pg_database\n> \n> I cant even load data with the lastest set of changes. There are no\n> messages in the postgres .log file or .err files.\n> \n> There are still failures in the regression tests:\n> \n> boolean .. ok\n> char .. ok\n> name .. ok\n> varchar .. ok\n> text .. ok\n> strings .. ok\n> int2 .. failed\n> int4 .. failed\n> int8 .. ok\n> oid .. ok\n> float4 .. ok\n> float8 .. failed\n> numerology .. ok\n> point .. ok\n> lseg .. ok\n> box .. ok\n> path .. ok\n> polygon .. ok\n> circle .. ok\n> geometry .. failed\n> timespan .. ok\n> datetime .. ok\n> reltime .. ok\n> abstime .. ok\n> tinterval .. ok\n> horology .. ok\n> inet .. ok\n> comments .. ok\n> opr_sanity .. ok\n> create_function_1 .. ok\n> create_type .. ok\n> create_table .. ok\n> create_function_2 .. ok\n> constraints .. ok\n> triggers .. failed\n> copy .. ok\n> create_misc .. ok\n> create_aggregate .. ok\n> create_operator .. ok\n> create_view .. ok\n> create_index .. ok\n> sanity_check .. ok\n> errors .. ok\n> select .. ok\n> select_into .. ok\n> select_distinct .. ok\n> select_distinct_on .. ok\n> select_implicit .. ok\n> select_having .. failed\n> subselect .. ok\n> union .. ok\n> case .. ok\n> join .. ok\n> aggregates .. failed\n> transactions .. ok\n> random .. ok\n> portals .. ok\n> misc .. failed\n> arrays .. ok\n> btree_index .. ok\n> hash_index .. ok\n> select_views .. ok\n> alter_table .. ok\n> portals_p2 .. ok\n> rules .. ok\n> limit .. ok\n> install_plpgsql .. ok\n> plpgsql .. ok\n> temp .. ok\n> \n> \n> \t-----Original Message-----\n> \tFrom:\tBruce Momjian [SMTP:[email protected]]\n> \tSent:\tSaturday, March 20, 1999 5:41 AM\n> \tTo:\tMichael Davis\n> \tCc:\[email protected]\n> \tSubject:\tRe: [HACKERS] min() and max() causing aborts\n> \n> \t> I pulled the latest version of the source tree yesterday and\n> complied\n> \t> PosgtreSQL 6.5. The get the following error on any select\n> statement\n> \t> containing min() and max():\n> \t> \n> \t> mp=> select max(addressid) from addresses;\n> \t> pqReadData() -- backend closed the channel unexpectedly.\n> \t> This probably means the backend terminated abnormally\n> before or\n> \t> while processing the request.\n> \t> We have lost the connection to the backend, so further processing\n> is\n> \t> impossible. Terminating.\n> \t> \n> \t> \n> \t> I am running Red Hat 5.1.\n> \t> \n> \t> FYI, I also noticed failures in the regressions tests for int2 and\n> int4 and\n> \t> some others (int8 was okay). I did not dig very deep into this\n> but the only\n> \t> error I could see with int2 and int4 occurred when the value that\n> was being\n> \t> inserted into the table was too large for the field. For example,\n> inserting\n> \t> 100000 into an int2 field.\n> \t> \n> \n> \tI recommend a clean compile and initdb to see if that fixes it. 
Do\n> you\n> \tsee anything in the postmaster log file?\n> \n> \t-- \n> \t Bruce Momjian | http://www.op.net/~candle\n> \t [email protected] | (610) 853-3000\n> \t + If your life is a hard drive, | 830 Blythe Avenue\n> \t + Christ can be your backup. | Drexel Hill, Pennsylvania\n> 19026\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Mar 1999 20:16:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] min() and max() causing aborts" } ]
[ { "msg_contents": "These used to work:\n\nregression=> select %f.f1 FROM FLOAT8_TBL f;\nERROR: parser: parse error at or near \"%\"\nregression=> select f.f1 % FROM FLOAT8_TBL f;\nERROR: parser: parse error at or near \"from\"\n\nThis is causing the float8 regress test to fail.\n\nI suspect this has to do with Bruce's recent hacking on operator\nassociativity.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Mar 1999 20:34:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Unary % operator is broken in current sources" }, { "msg_contents": "> These used to work:\n> \n> regression=> select %f.f1 FROM FLOAT8_TBL f;\n> ERROR: parser: parse error at or near \"%\"\n> regression=> select f.f1 % FROM FLOAT8_TBL f;\n> ERROR: parser: parse error at or near \"from\"\n> \n> This is causing the float8 regress test to fail.\n> \n> I suspect this has to do with Bruce's recent hacking on operator\n> associativity.\n\nI see. I see the same problem with / and +:\n\t\n\ttest=> select %f.f1 FROM FLOAT8_TBL f;\n\tERROR: parser: parse error at or near \"%\"\n\ttest=> select /f.f1 FROM FLOAT8_TBL f;\n\tERROR: parser: parse error at or near \"/\"\n\ttest=> select +f.f1 FROM FLOAT8_TBL f;\n\tERROR: parser: parse error at or near \"+\"\n\n\\do % shows:\n\n\ttest=> \\do %\n\top|left_arg|right_arg|result |description \n\t--+--------+---------+-------+-------------------\n\t% | |float8 |float8 |truncate to integer\n\t% |float8 | |float8 |truncate to integer\n\t% |int2 |int2 |int2 |modulus \n\t% |int2 |int4 |int4 |modulus \n\t% |int4 |int2 |int4 |modulus \n\t% |int4 |int4 |int4 |modulus \n\t% |numeric |numeric |numeric|modulus \n\t(7 rows)\n\t\n\nOK, I made the change. It works now with special entries for %4 and 4%\nin the grammer, similar to our handling of -4:\n\t\n\tregression=> select %f.f1 FROM FLOAT8_TBL f;\n\t?column? \n\t---------------------\n\t0 \n\t-34 \n\t-1004 \n\t-1.2345678901234e+200\n\t0 \n\t(5 rows)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Mar 1999 21:25:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unary % operator is broken in current sources" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, I made the change. It works now with special entries for %4 and 4%\n> in the grammer, similar to our handling of -4:\n\nHmm, is that an adequate solution? Are you sure there are no other\noperators like % ?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Mar 1999 21:51:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Unary % operator is broken in current sources " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > OK, I made the change. It works now with special entries for %4 and 4%\n> > in the grammer, similar to our handling of -4:\n> \n> Hmm, is that an adequate solution? Are you sure there are no other\n> operators like % ?\n\nNot sure. I know I only changed % to have precedence like /. No one is\ncomplaining, and I think the problems are restricted to +,-,*,/, and %. \nShould I fix any of these other ones?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Mar 1999 22:13:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unary % operator is broken in current sources" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Not sure. I know I only changed % to have precedence like /. No one is\n> complaining, and I think the problems are restricted to +,-,*,/, and %. \n> Should I fix any of these other ones?\n\nRight now I think % is the only problem, since it's the only operator\nthat has all three syntaxes (infix, prefix, postfix):\n\nregression=> select distinct p1.oprname, p1.oprkind, p2.oprkind from\nregression-> pg_operator as p1, pg_operator as p2\nregression-> where p1.oprname = p2.oprname and p1.oprkind < p2.oprkind;\noprname|oprkind|oprkind\n-------+-------+-------\n# |b |l\n% |b |l\n% |b |r\n% |l |r\n- |b |l\n?- |b |l\n?| |b |l\n@ |b |l\n(8 rows)\n\nHaving both infix and prefix syntaxes doesn't seem to confuse the\nparser --- at least, we have regress tests of both prefix @ and\ninfix @ (likewise #) and they're not complaining. Probably you need\na postfix syntax plus one or both of the other syntaxes to yield an\nambiguity that will confuse the parser. I haven't tried to track it\ndown in the grammar, however.\n\nMy concern with hacking in a special case for '%' in the grammar\nis that we'll need to do it again anytime someone adds an operator\nwith the right set of syntaxes. It'd be better to understand *why*\nthe parser is having a hard time with this all of a sudden, and fix it\nwithout reference to any particular operator. Postgres is supposed to\nbe extensible after all...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Mar 1999 10:55:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Unary % operator is broken in current sources " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Not sure. I know I only changed % to have precedence like /. No one is\n> > complaining, and I think the problems are restricted to +,-,*,/, and %. \n> > Should I fix any of these other ones?\n> \n> Right now I think % is the only problem, since it's the only operator\n> that has all three syntaxes (infix, prefix, postfix):\n> \n> regression=> select distinct p1.oprname, p1.oprkind, p2.oprkind from\n> regression-> pg_operator as p1, pg_operator as p2\n> regression-> where p1.oprname = p2.oprname and p1.oprkind < p2.oprkind;\n> oprname|oprkind|oprkind\n> -------+-------+-------\n> # |b |l\n> % |b |l\n> % |b |r\n> % |l |r\n> - |b |l\n> ?- |b |l\n> ?| |b |l\n> @ |b |l\n> (8 rows)\n> \n> Having both infix and prefix syntaxes doesn't seem to confuse the\n> parser --- at least, we have regress tests of both prefix @ and\n> infix @ (likewise #) and they're not complaining. Probably you need\n> a postfix syntax plus one or both of the other syntaxes to yield an\n> ambiguity that will confuse the parser. I haven't tried to track it\n> down in the grammar, however.\n> \n> My concern with hacking in a special case for '%' in the grammar\n> is that we'll need to do it again anytime someone adds an operator\n> with the right set of syntaxes. It'd be better to understand *why*\n> the parser is having a hard time with this all of a sudden, and fix it\n> without reference to any particular operator. Postgres is supposed to\n> be extensible after all...\n\nI can tell you what I think. +,-,*,/,% have special precedence so */ is\ndone before +-. 
This is causing infix/prefix to break. When % did not\nbehave with precidence like /, it worked fine.\n\nSo, I would only have to add cases for +,-,/,*. We already have \"-\"\nprefix done for negative numbers.\n\nComments on how to proceed?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 21 Mar 1999 13:57:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unary % operator is broken in current sources" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> So, I would only have to add cases for +,-,/,*. We already have \"-\"\n> prefix done for negative numbers.\n\n> Comments on how to proceed?\n\nTom Lockhart probably knows this stuff better than anyone else.\nI vote we put the issue on \"hold\" until he's caught up with his\nemail ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Mar 1999 15:30:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Unary % operator is broken in current sources " } ]
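The fix Bruce describes in the thread above (special grammar entries for %4 and 4%, with '%' given the same precedence as '/') amounts to spelling each unary form out as its own production, so yacc can resolve the resulting conflicts against the precedence declarations. The sketch below shows what such gram.y entries might look like; it is illustrative only, assuming the era's makeA_Expr() helper, and is not the committed source:

    /* precedence declarations: '%' now binds like '*' and '/' */
    %left   '+' '-'
    %left   '*' '/' '%'

    a_expr:   '%' a_expr
                    { $$ = makeA_Expr(OP, "%", NULL, $2); }   /* prefix:  %4    */
            | a_expr '%'
                    { $$ = makeA_Expr(OP, "%", $1, NULL); }   /* postfix: 4%    */
            | a_expr '%' a_expr
                    { $$ = makeA_Expr(OP, "%", $1, $3); }     /* infix:   4 % 2 */
            ;

As Tom Lane's objection above anticipates, the cost of this approach is that any future operator acquiring all three syntaxes (infix, prefix, postfix) would need the same hand-written special case in the grammar.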
[ { "msg_contents": "> The 'triggers' regress test is failing in current sources,\n> producing a bunch of unexpected error messages like this:\n> \n> ERROR: check_primary_key: even number of arguments should be specified\n> \n> I traced that error string to contrib/spi/refint.c, and find\n> that you changed it recently ...\n> \n> \t\t\tregards, tom lane\n> \n\nYes, it was some enhancement. Perhaps someone can comment on this? I\nremember someone knowing about this thing.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Mar 1999 22:12:50 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: check_primary_key is busted too" } ]
[ { "msg_contents": "I just did another cvs update and nothing was updated. So, I have already\napplied this patch, the path has not been committed in to cvs, or the patch\ndid not do it for me :-).\n\n\t-----Original Message-----\n\tFrom:\tBruce Momjian [SMTP:[email protected]]\n\tSent:\tSaturday, March 20, 1999 6:17 PM\n\tTo:\tMichael Davis\n\tCc:\[email protected]\n\tSubject:\tRe: [HACKERS] min() and max() causing aborts\n\n\tFixed this morning at 8am EST.\n\n\n\t> I recompiled everything several time, did an initdb each time,\nreloaded my\n\t> database, and did a vacuum. No I did not see anything in the\npostmaster log\n\t> file. \n\t> \n\t> I updated my cvs tree this morning. And rebuild everything\nagain. I\n\t> deleted my data directory and in initdb followed by:\n\t> \n\t> createdb mp\n\t> psql -e < mp.out\t\t\t-- dump from 6.4.2\n\t> Connection to database 'postgres' failed.\n\t> FATAL 1: Database postgres does not exist in pg_database\n\t> \n\t> I cant even load data with the lastest set of changes. There are\nno\n\t> messages in the postgres .log file or .err files.\n\t> \n\t> There are still failures in the regression tests:\n\t> \n\t> boolean .. ok\n\t> char .. ok\n\t> name .. ok\n\t> varchar .. ok\n\t> text .. ok\n\t> strings .. ok\n\t> int2 .. failed\n\t> int4 .. failed\n\t> int8 .. ok\n\t> oid .. ok\n\t> float4 .. ok\n\t> float8 .. failed\n\t> numerology .. ok\n\t> point .. ok\n\t> lseg .. ok\n\t> box .. ok\n\t> path .. ok\n\t> polygon .. ok\n\t> circle .. ok\n\t> geometry .. failed\n\t> timespan .. ok\n\t> datetime .. ok\n\t> reltime .. ok\n\t> abstime .. ok\n\t> tinterval .. ok\n\t> horology .. ok\n\t> inet .. ok\n\t> comments .. ok\n\t> opr_sanity .. ok\n\t> create_function_1 .. ok\n\t> create_type .. ok\n\t> create_table .. ok\n\t> create_function_2 .. ok\n\t> constraints .. ok\n\t> triggers .. failed\n\t> copy .. ok\n\t> create_misc .. ok\n\t> create_aggregate .. ok\n\t> create_operator .. ok\n\t> create_view .. ok\n\t> create_index .. ok\n\t> sanity_check .. ok\n\t> errors .. ok\n\t> select .. ok\n\t> select_into .. ok\n\t> select_distinct .. ok\n\t> select_distinct_on .. ok\n\t> select_implicit .. ok\n\t> select_having .. failed\n\t> subselect .. ok\n\t> union .. ok\n\t> case .. ok\n\t> join .. ok\n\t> aggregates .. failed\n\t> transactions .. ok\n\t> random .. ok\n\t> portals .. ok\n\t> misc .. failed\n\t> arrays .. ok\n\t> btree_index .. ok\n\t> hash_index .. ok\n\t> select_views .. ok\n\t> alter_table .. ok\n\t> portals_p2 .. ok\n\t> rules .. ok\n\t> limit .. ok\n\t> install_plpgsql .. ok\n\t> plpgsql .. ok\n\t> temp .. ok\n\t> \n\t> \n\t> \t-----Original Message-----\n\t> \tFrom:\tBruce Momjian [SMTP:[email protected]]\n\t> \tSent:\tSaturday, March 20, 1999 5:41 AM\n\t> \tTo:\tMichael Davis\n\t> \tCc:\[email protected]\n\t> \tSubject:\tRe: [HACKERS] min() and max() causing aborts\n\t> \n\t> \t> I pulled the latest version of the source tree yesterday\nand\n\t> complied\n\t> \t> PosgtreSQL 6.5. The get the following error on any select\n\t> statement\n\t> \t> containing min() and max():\n\t> \t> \n\t> \t> mp=> select max(addressid) from addresses;\n\t> \t> pqReadData() -- backend closed the channel unexpectedly.\n\t> \t> This probably means the backend terminated\nabnormally\n\t> before or\n\t> \t> while processing the request.\n\t> \t> We have lost the connection to the backend, so further\nprocessing\n\t> is\n\t> \t> impossible. 
Terminating.\n\t> \t> \n\t> \t> \n\t> \t> I am running Red Hat 5.1.\n\t> \t> \n\t> \t> FYI, I also noticed failures in the regressions tests for\nint2 and\n\t> int4 and\n\t> \t> some others (int8 was okay). I did not dig very deep into\nthis\n\t> but the only\n\t> \t> error I could see with int2 and int4 occurred when the\nvalue that\n\t> was being\n\t> \t> inserted into the table was too large for the field. For\nexample,\n\t> inserting\n\t> \t> 100000 into an int2 field.\n\t> \t> \n\t> \n\t> \tI recommend a clean compile and initdb to see if that fixes\nit. Do\n\t> you\n\t> \tsee anything in the postmaster log file?\n\t> \n\t> \t-- \n\t> \t Bruce Momjian |\nhttp://www.op.net/~candle\n\t> \t [email protected] | (610) 853-3000\n\t> \t + If your life is a hard drive, | 830 Blythe Avenue\n\t> \t + Christ can be your backup. | Drexel Hill,\nPennsylvania\n\t> 19026\n\t> \n\t> \n\n\n\t-- \n\t Bruce Momjian | http://www.op.net/~candle\n\t [email protected] | (610) 853-3000\n\t + If your life is a hard drive, | 830 Blythe Avenue\n\t + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Sat, 20 Mar 1999 23:12:07 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] min() and max() causing aborts" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> I just did another cvs update and nothing was updated. So, I have already\n> applied this patch, the path has not been committed in to cvs, or the patch\n> did not do it for me :-).\n\nAll I can say is that I was getting the exact pattern of failures you\ngot until I backed out the patch.\n\nAre other people now seeing aggregate failure. If so, I will post the\nremainder of the patch that is still installed to see if removing that\nwill help, thought the other part deals more with expressions than\naggregates.\n\nI just tested and it shows as applied. I am confused.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 21 Mar 1999 01:29:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] min() and max() causing aborts" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Are other people now seeing aggregate failure. If so, I will post the\n> remainder of the patch that is still installed to see if removing that\n> will help, thought the other part deals more with expressions than\n> aggregates.\n\nYesterday evening (after you partially backed out that patch) I updated\nand rebuilt and ran regression test. I didn't see any regress failures\ninvolving aggregates, and a quick hand smoke-test of max and min looks\nOK:\n\nregression=> select max(f1) from float8_tbl;\nmax\n---\n 0\n(1 row)\n\nregression=> select min(f1) from float8_tbl;\nmin\n---------------------\n-1.2345678901234e+200\n(1 row)\n\nThere were a couple of other things broken yesterday, but they didn't\nseem related to Michael's problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Mar 1999 12:00:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] min() and max() causing aborts " }, { "msg_contents": "> Yesterday evening (after you partially backed out that patch) I updated\n> and rebuilt and ran regression test. 
I didn't see any regress failures\n> involving aggregates, and a quick hand smoke-test of max and min looks\n> OK:\n\nI am attaching the patch I BACKED HOW, so the user can see if it is in\ntheir tree. It should not be ther.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n***\n/afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/611/src/backend/executor/nodeAgg.c\nThu Mar 11 23:59:11 1999\n---\n/afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/612/src/backend/executor/nodeAgg.c\nFri Mar 19 15:01:21 1999\n***************\n*** 110,115 ****\n--- 110,116 ----\n \t\t\t\tisNull2 = FALSE;\n \tbool\t\tqual_result;\n \n+ \tDatum oldVal = (Datum) NULL; /* XXX - so that we can save and free on each iteration - er1p */\n \n \t/* ---------------------\n \t *\tget state info from node\n***************\n*** 372,379 ****\n--- 373,382 ----\n \t\t\t\t\t\t */\n \t\t\t\t\t\targs[0] = value1[aggno];\n \t\t\t\t\t\targs[1] = newVal;\n+ \t\t\t\t\t\toldVal = value1[aggno]; /* XXX - save so we can free later - er1p */\n \t\t\t\t\t\tvalue1[aggno] =\t(Datum) fmgr_c(&aggfns->xfn1,\n \t\t\t\t\t\t\t\t\t\t (FmgrValues *) args, &isNull1);\n+ \t\t\t\t\t\tpfree(oldVal); /* XXX - new, let's free the old datum - er1p */\n \t\t\t\t\t\tAssert(!isNull1);\n \t\t\t\t\t}\n \t\t\t\t}", "msg_date": "Sun, 21 Mar 1999 13:58:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] min() and max() causing aborts" }, { "msg_contents": "> I am attaching the patch I BACKED HOW, so the user can see if it is in\n> their tree. It should not be ther.\n\nI see the problem with that patch: it's assuming that the old value is\nnecessarily a pointer to something. For pass-by-value types (eg int4),\nit ain't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Mar 1999 14:15:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] min() and max() causing aborts " } ]
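Tom's diagnosis explains why the backed-out patch aborted min() and max() on integer columns: for a pass-by-value type the Datum holds the value itself rather than a pointer, so handing it to pfree() dereferences garbage. Any corrected version would have to consult the transition type's typbyval property before freeing. The sketch below is illustrative only; the by-value flag would have to be looked up from the aggregate's type information, and the array name used for it here is hypothetical:

    args[0] = value1[aggno];
    args[1] = newVal;
    oldVal = value1[aggno];     /* remember the previous transition value */
    value1[aggno] = (Datum) fmgr_c(&aggfns->xfn1,
                                   (FmgrValues *) args, &isNull1);
    /* only by-reference transition values point at palloc'd memory */
    if (!transtype_byval[aggno] && oldVal != value1[aggno])
        pfree((void *) oldVal);

Even this is not sufficient in every case, since a transition function may return its input pointer unchanged; the oldVal != value1[aggno] test guards against freeing the value that was just stored back.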
[ { "msg_contents": "In case anyone is waiting for something from me...\n\nI've rebuilt my home machine after losing two disks in one day. Just\ngot back on the air, but I am leaving town tomorrow for several days\nfor work. Will catch up over the next week or so.\n\nI was also shocked, just shocked, at how sloppy my sysadmin was at\ndoing backups :/ So, a side effect of the crash is that my linux/libc5\nmachine is now a linux/glibc2 machine. I won't be able to directly\ntest libc5 any longer, but should do better with the glibc2 stuff than\npreviously.\n\nI've lost all mail sent before March 11, and any recent bookmarks and\nmailing addresses. Oh well, that's one way to simplify life...\n\nRegards.\n\n - Tom\n", "msg_date": "Sun, 21 Mar 1999 07:27:40 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Back on line, sort of" } ]
[ { "msg_contents": "Is there a reason why the special behaviour of '%' was only added to a_expr\nand not to b_expr?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Sun, 21 Mar 1999 13:06:52 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Operator '%'" }, { "msg_contents": "> Is there a reason why the special behaviour of '%' was only added to a_expr\n> and not to b_expr?\n> \n\nSorry, missed that. Adding now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Mar 1999 00:06:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Operator '%'" } ]