[
{
"msg_contents": "Hi,\n\n Bruce: Would you please apply the patch at the end?\n\n Terry: Sorry, but dumping views is covered completely by this\n approach for dumping rewrite rules.\n\n Playing around with pg_dump for a while resulted in some\n fixes, enhancements and some found bugs not yet fixed. After\n all I was able to get useful results when dumping/reloading\n the regression database.\n\n The reload didn't recreated the full regression database.\n Table f_star couldn't be restored correctly (see below) and I\n don't know if the operators <% and >=% for the widget type\n have been recreated correctly. Anything else except for some\n datetime data went in successful :-).\n\n Bugs first:\n\n o Something in the datetime type seems to be broken.\n regression=> select 'Tue Feb 11 02:32:01.00 1997 MET'::datetime;\n ?column?\n ----------------------------\n Tue Feb 11 02:32:01 1997 MET\n (1 row)\n\n Is it O.K. that '.00' after time is omitted?\n\n regression=> select 'Sun May 11 10:59:12 1947 MET DDST'::datetime;\n ?column?\n ---------------------------------\n Sun May 11 12:59:12 1947 MET DDST\n (1 row)\n\n But this is definitely 2 hours ahead!\n\n o Dumping inheritance doesn't produce the correct queries\n to recreate the tables. After the regression test, table\n f_star has attributes (aa, cc, ee, ff, f, e, a). But\n after recreation from dump file it reads (aa, a, cc, ee,\n e, ff, f). Then, the copy to reinsert the data fails\n (pg_atoi fails to parse ((1,3),(2,4)) as data for column\n ee ,-).\n\n o Dumping operators needs to be checked. It outputs an\n operators commutator in CREATE OPERATOR before that is\n defined. I haven't checked if that is legal, but remember\n that there is something in the code that\n commutator/negator should only be defined on the second\n one of an operator pair.\n\n During reload of the regression dump, first the <%\n operator is created with telling COMMUTATOR = >=%. 
The\n following CREATE OPERATOR for >=% fails then with\n\n ERROR: OperatorDef: operator \">=%\" already defined\n\n The two dumps from a dump/reload/dump sequence show up\n the same CREATE OPERATOR statements. So it might be O.K.\n\n\n Fixes in the patch below:\n\n o rewriteDefine now checks that view rules are named\n _RETviewname. pg_dump depends on that when deciding if a\n table is a view or not (to omit the data in the dump).\n\n o The rule backparsing utility functions now double quote\n all identifiers. This makes the system views a little\n lesser readable. But pg_dump'ed rules succeed even if\n identifiers contain upper case.\n\n\n Enhancements to pg_dump:\n\n o User defined procedural languages are dumped as CREATE\n PROCEDURAL LANGUAGE statements.\n\n o In functional indexes the function names get formatted by\n fmtId() to support upper case function names.\n\n o The check for ClanguageId in the lookup for trigger\n procedures is removed. Triggers could also be defined in\n procedural languages!\n\n o User defined functions in procedural languages are dumped\n correctly.\n\n o All views and rewrite rules get dumped after triggers.\n Views are installed first as regular tables without data\n and later turned into real views via CREATE RULE.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\nbegin 644 dump_rules.diff.gz\nM'XL(\"`4`&38\"`V1U;7!?<G5L97,N9&EF9@#-7&E3&TG2_BS_BK(B;.L\"U+H%\nM\"QL,R![%BP468F<W#*%HU\"71MFC)?=@0L_[O;V9=7=TJ7?A@'3.H55U'9E96\nMYI-957+<\\9CLC'P2^*/=F>].]F[MT6?J.7L^_>:[(96?IW3L>G1WA!77U7E1\nM*!0V[C#S?N:1\\U%(2)U8UGZUNE\\O$ZO=;KW8V=G99+1$!_7]2GF_WN(=%)+_\nM&%F56K54J;4(*\\`1>$&;P//.\"T(RF;<S_WXXGPSM,/3=VRBD&7PZ8.]&=[:?\nMR60*4.+9]Y07NEX(9>[!BV)<@S[,Z2BDSA\"K?>P=O^^<'@^.SSH]4B3U&VS'\nMFNX5V`<ID,L9\">^H3\\G(]KQ92&XIZ?8N!YWC4](['_S9[;TKD=W=73-/C7:I\nMTJQSGA9ZAE;8-0EM?T+#J1N$9#9F)9>=L\\[)@-BCT`41WD=!^.(E;T,?H'#Z\nM2.[M<'3'ZM*OU`N)3Z<VJXRD\\/[WV\"=[/52O#\\D=M>?#V9QZ?HZ_F]U^VCF\"\nM\"BB0O!#<F.32#0])[^KL+,^GIED&OAIR:GX#7Z5?SYAQ`MOU4K5<D1.8R7S7\nMV'T9LVM[#@&EMZ=\"5Y`U(K3%AD\\H).<]P;]LZ$=3>#OUJ>T\\)GE;)!,8<8;8\nM(\"`O!<'D;U8W,Y[Y).>\"`,H'Q\"7_(,L:[AQYT?W9;/0Y@'K%(K9G<]EN`(O5\nM>)DM8?&)C\"W,V^_BS32=U7*[5+5JL8VI6N52M5(S,,^T:32=!31%;1[-\";,H\nM()^B005`BU%_F1\"8N$!4J((.&?8[@Z\\N_8;?=D7;/?89S'TP5N-<PCJ52!9;\nMO`JR)6+4Z*(09Q#ZH_MYNG$0WH=0&\\A@U5&V910,MLK0Z6R2Z_3[YWT8!6GB\nM]**\\7P4ILI$`WBJS2`:0I@\\KJ$(Q*EF\"G)REWBP*W6FP9SOA'A+`OJ5\\F;&&\nMV9,9JQK\\6,WLQ]8W;^Z#(ZM6EWNQFE4OU2J6,ACH0,`.ADPE7,^A#\\1!]^BB\nM*HD*.^H?+]C#=20UXC8:PPR=]#O'@P[,1+=WVODW3A$L.OA[=0G^!Q]R,$5L\nM=3D//AWM',%0;A!Y[I>(DG^2[%6O^^&J0[)DGV1%39RM613F7N=8FREK)G4K\nMOU#'<PQU4(\"2XYC^7\\'Q=?95<)U%KL73,W-NG/MJNU0#P**L\"RNHQZ)!UH\\=\nM)Y8,19-!IPY;:+)&4CBQB8!%;H>YS_21\"^@ZF]6<FL8]U/@(_WNS&_1MY[>?\nM8'5VG6,)FGK1_2T8[1UBY;F-37<\\<QW9,YT&U%@)_'H,P]C\"URD`.;D.ER/^\nM,]*6%Y9B&5O\"`RU*@W3'S`'99!QY#$B`UV72+!%;\"!=0@&^'8,M&4SL(N'2-\nM,]:HE&J-=NS>00G=81C-P>-<7G3Q*;1OIW3GZ*L]#3Z6;PZT6N%HH18\\.308\nM:;7&WDS4&GM,\\#G1%@4]'R%ER.]+TT00.0WI-]@=S`#0%-&<H!CLO>Q8C)L7\nMK6-;S'2R42O5FI58)_^W6(ZU^NE,%TT]QQW'\\C\"J1`M4HJ6K1,)5CNP1Z-=T\nM-OL,$D-O.?=G(_(*UK'M3JD#GEK3=GP7*S-^NPS]\"%S)(<G)8(;5(>\\Z@\\M!\nM_^IDD,/OP)F@57#!6(A-4MS3SA$\\<YLD1*JWR()Q-'3$96)XD<T+E>.:T@+#\nMWM:`X;-)HIBF,Y[,'Q30]=-%9%2>=JM4+VOXLEZNE.I6.9;A1HMMH[6VQ5+;\nM4'[;K3!
SAVQQ2;>1--\\F@34JK5*CWM#ATA^1.W44@@XT\\$!\"^A\"N1!!`TOPQ\nM\"2#Z5V<=$KL4(&D@P3F*1T0E.C,*,1_(/N/U='R)$&2A.QZK*M3O!N#6?73N\nML\"S$`-]<#&PAEAB&CW/*7\"^/<49V0,D;Z\\U^PB+R\\42\\/#B/7<$M1%6?U4KB\nMC2O&QE<7IRB`=8VKQL;=WF6GOW[DFK'Q*9\"]<F284SN:AOMLC32JM5*C4=$!\nMY*_0`1U8_)@67#^C'FC^\\2F:L*;Y.EU8TWR=-BQMKO3!:\",:5JG1+.M)-#E_\nM,H>S*'$H7)Q31*VR#0>M,`T,(@JLQ',3#-F2(XB6^>PL>(U=D_4T0&+9.X;O\nM[%T^-I*\"C>XX)O[.#HCM\"27Z$ME3=^R.1/8+D:T;(D]LR32:()*6GG_[)2(I\nMDB5F?E-!F1W-IJ):[6:>+$&3DK5K5JDML\\\\B],\",CC,CP&LPT[-7(9U.AW;`\nMPZ>_#1J/YB$-8]F;$+VX3P-G=L\\^M<QD$J6W:^U2N]Z,<<./DK.(JU<15%SH\nM8U/XW`;-;#>;B8QWLJN`2G`+Y121$$#%[(&AIA_2=/XVD>_2WZO:8RW9)9H(\nMF:2EH@1B''6<&/4[__PN),\"FJ-DHM5OM1`Y\\,TZ+\"S4U\\:X4P9J6/T\\V>J^K\nMI%,T--;;IN1FTABKW+1*\\*>E@\\!%UPX?,#;&\"WW;F]#![;3CA?XC*>2)%][E\nMOD34?V0J#'ZD+XW@#K%*1+YB(#IVYPHD\"/?6[<5X9:46\"KLC$SE\\EX.P;0YA\nM7_E<0V1QP#3%*K>JP&$[D2-[+@[5[*Q6,^.,;L6Y>:XANH8_VBX?R*5>LJQX\nM`V+!6\\3+*+V(-HAI5ABX%0',YA&?9=6`(ZO63EB\\]>1O2.=B1'](EGE4Z&)(\nM'^9^[LM=K!();8E3<ID,CE@B$&EW%\"AATV'5F\\!/H_Y;I\\,4E_]\"1LT3V6P#\nMXZW&<YDA`9/?]L_?;VN&;/+7GYU^![.=$>!P<`,3%R\"0H!>]@A@<(9':8Q/S\nMC<NOHNW_/2/;/\\DV/5$<1JVH5&%Y5ZIZ/E!L36:.)Q.?3C*$D((]F:!L>`D3\nMRLRA!V94`'5WCN\"/IOHIY<^9L0\"#ZI&N^WXXY(EO3=TS)->#P9&('!N*6^E\\\nMB7RU_3EX;O<A;^P^F\\=AF4Y4`'U:%1U^;L?T%@!GO32NGU4>1J6H5EHEJUIM\nM:DJ!9TU(P!)[T\"_F\\_PIA\"%3*-+TX-N=.Z4Y*`,I8]A$OMQ!@R]W.T=SVX>8\nM!>3/ZJU::[)[;R86V!TJ<G*1P8`O%2*,NY=G'<CKUPM4\\C!.@,$-QR^N&#]&\nMI\"^3D'0LMK8+O<Y?A6P^;T*968]^VU7J6*TU0-QU/1/]L\\2M!/43)/YS.#8K\nM7*V*$JCI\"J>6Y)-PO]Y\\%<!_::H3)T`TC!\\_?%^R6%,AOR35=4IJ$O'XEMBX\nM$IWX-(Q\\C\\RA,R>:Y^*LO`0LU3H8JVHSH1T_+INBD>]$6/0$N5VOE=P6IG-;\nMD:[H>@-9&_6R5@6]K-7*B_E[FT`GKC?!DU]B!R8'AI=G*3;)W<IMF\\7TJW0'\nM`GYR,#V>^=0>W>6F[#P*<*X&XZ><K%H=''FM45Y,,_\\PJ4*,*3J7,W\"]/0M&\nMZ=<;$#C7&[$;H@^X>8-V#`\\[,I=7&-]/_.'8=AP_]WKL>N/9<!:%\\RC,P\\O1\nMS`NPQ<Z1?(IHPG?BO]=NX$53(&K'4IY%'M5`!CE/^_OLF%*\\!8?9;;D%!\\]J\nM\"TX@,+$D]0XB[[,W^^9!O'6H%J-X_[%\\`UR]N2Z_X6)+G!1Y\\RIXP\\](,?8Y\nM$5QWV<S76RBF5HQFGE5,_/C*_YZH#\">T7&]O/ADZT?U\\;S2[OY
]Y\\F\"6X47J\nM/):AAN$85CMU#&M=J_I^N;9?JRP_?54M-TM5JZ*=[\\.\"2KSHF7=F/IMM^P`.\nM&L,<:+LP6&$R_$K]VUE`\\]QFCH4,@]\"AO@]B?!40))+9C\"@D$&3X.VR+BCIL\nM%YPZD0\\*-@4P$=D3&A\"8<D^=H<M,AL@='J0+\\*`46')50#U'6&H<X`+Z.H-.\nM@AQ262),,6'2HONWD3<*()Q6!0/0H8`W_2[.)R[CM4B$?TOQ2K;F51[`40R2\nM#1DD@D'&QN;,&:?<JL,,QP?NMIWAE[^?:XB8)-,0D2@N>2!EA]3(N\\K05\"M5\nM8+C]9)7>GF%;$?;[.5YMEL2GP2ZI-\\L-DZJRG64R-ZOOUYO[H(9+35.K4FHU\nM8\\.$7]4<@O1\"=T2^SER'S<+QR5GN;?>L0PI<9@.<W2[(AX2W*\"8FG'2C`?`W\nMH7Z0:,D<%N%N3NF(F#^MVT)X.^7RQ[@*YP!?<7N2'J>/._*;#%+<>!#%#.]C\nM=$='G]_._`_1+*0YO?=@@?/1E-J^&B.GC<:&,4KJW*-H8A(\\8`%O-XYI=,WG\nM/*UVN62UZUH.NPW8LIW8CI5^E\\6)L+X\".J4C=IV!K=*Q/[LGH$EBSQ.>Q-T9\nMDN63D_W&CMO+*KM(^R&1NZ10BTDWBWN\"<>-=<:H``8#U1G6%E>1Q!GS%#Y6_\nM`1B0SB?ZL,0/R<4'^D!'.5S,GB<R>$NL;Z/=+#7+FL-MELM0H&Z&\"!.$4K^$\nM:;VWN\\X#=,QE'NL*4:J15`O>4FGVI@VE_^2JNL5P<EMY/)Y&P9UHN2PHKS?*\nM\"+\\;>E#.[BN`!CD/_,X\"?_R'\\FJBI%B,]]^$+=60'M/`CZSB#4X\\T#M!I\\/1\nM'EAXD26\"X?2:\"#>PQHG$'5T'ZQ)370]C\"]6=K&(:'_`,A(6H5:#%Y03BS,3G\nM.#C0;M10(,*L&:_3E$$]X(^V=N!;%8K:\"J.).6\"7*R#Z2N`@7B3^,94/F+L*\nM9\\SIL5TI5%<7RJ',IR,@#Q;56H3&.F9_\\+`U6@KX2$*PM=9\"(1=^LX.@'Q.&\nM+VGVF'^#2@R%7;SC6?-,I@`/3'?%336V[CZ^/_[WAZM._S]GG=X->\\LOM7EA\nM-`^T[^X0>.$WWY)E\\RE$%D%(G<47(WLZ'0.KZ3?@P^?NE/HZ,06]>ZUL65US\nM[Q(\"CUWG05ZC,=J<[\"T%F\"#.+C*$@]7^^U_6_N(#%]DEV/4HR,$7EK^Y>-?O\nM7`Y/SM^_/^Z=#L__3T.Z\"W#GC\\Z[;H\\@6&'7=MB)U6LY'@2(;CCTW!&=/@J*\nMDL@Z;=S%\\:P\"W\\4`>RSU2MEIOB$!Q6XPGZK2\\_YIIT_^^`\\11^Z+:RSPMK(8\nM7%V<P<=J422U/+\\O+^EM(Q2NCHQP?$*KZPM+S(B6RIG!&O*HJH^P+RO>R*E.\nMJ*RYNGJ=;\"/U;5DC^5YO);77W$:^S2I&TM?1^\"J4-\\Y02HFAF$#4B5K6KPO_\nM)>D5XA6>(W8<\"WZ#NXTB4>E)/=4L;;9R&<DQ1\"0KS36_HB5C0#;6T:$:+#'(\nM>GVY@R4$4E)VE=\\B4TL`(@N\\K@&&TT-M*BT5\"<][B#SE,FU3:S\"C([E$!(GL\nMF$-C.4,\"\"\"5QYB9D:4JQ<7O9(!^3(&7*R8ZO.EWTST\\ZIU?]XS-R=MQ[=W7\\\nMKD,P9:,,QI]@V<[`9(!,H0*8NHLN?L4Z!W&&(;=<Z<3*R;,L$4#!\\$T>+TD-\nM^E>7@\\ZINB7%NA&\\BV_C^[#K)-1,)CCCZHK5F%.?TIP4XD&B*%%93>O%!X;H\nM$^;#[\"$@PA1+.=U&],4N)`L$P?2:?<UL!1Q@_;!WY@0`ZW`)T*DT`=54-61<\nML:Q*\"?ZHJ(]!/^XROZ3</)$>\\Y/
F5PLX[A\"(T,I8$2ZUU$UV2]UDYPL<9\\V]\nMV451P.PS],:3_0=+R*_6@7R9V.?D5Y#\\.(V*G=Q2F$5Z!<+1ER\".!`*+(XMB\nMB@R)5+N]0:??.SZ+`:ONJ`2W,/6J(:@<A./Q:9/Y8TY)@)USXOUE-<^$=Q&(\nM<?23;8:%N'OIL\">;C7?YX:<Q\"EVEQXS[4XA2`4JN3!XVU@N2F%&J(D.=E7KC\nM1BWB]0AM`99LA=%6NAIIXL'1K`9MJUU&QFA9EB$YZ2.6X3D)Z+B/?^5D8V\"K\nM3;H8?AVDVU)X25\"WJ>PDJ)OYIOAG:V%R75H*^!A+H@X0;FU%JI$^H%N:W45$\nMH2\\;_6#J>A\"AE@!9@TR?N%+3GKB\\`\"J6Z^8ZM[?0BK.E)Y]B</'VJG<RZ/)+\nMXSD4F>;.-<'EY4XDR7WB0/03H%!5C:4-H$SD+_Y>XCSJ&.0W+\"TQ\\D7D'.,1\nMP?T$-&3H`Y1S</XV<44[AAL.PK<_'L]=)Y>\"=,EY%TD*O)O],CG2\"F\\#HZ<F\nMDNPO:WZRO!TX!]&.K+7[IB%)5F[H;4E\\[/$VIAN\\U?*Z\"Z0R3Z,3J%*%XWD4\nM!JAA(D4F+BY4K#8`'_T\\X7-,OERH):+6XBK\"S3]'A+_=4VLV-`C4L*Q2I:%M\nM)[+T':=YKT`N/[MS\\J]NYR]U\"R>0E_?EK^/PN_K!OUSZ#9.3.9&(Y#(1/RX@\nM<FNPWH'G2#CG##N\\R%_QPTM]\\/JP1+4>XG)Q:@.$=<'*4A7C\\B7PKUD&1IME\nM_8`;@S0@3$9C\\DR@;FY>!<S\"O`*/!9]Y'I7(PS!?^'SP'*P=LI\\&DJ:(Y;79\nM;S*5;^(#>2Q84,,F2A,-$D>\"*JT:H->6_L,+/XU\\KIFRHU_%A?FGB-H`P-OM\nMIO[K-=4&;GYJ\"KD(R;5YUT$Y<\\2X$K3WH4BT?_QTHRT/X?._QY%5<E=(9$JW\nMVQ=:ORV43I!\"K!4C6@YHV>70^+M,+X9:@&1.HL9-AO'5T@-MWSQU!&#==BF_\nMI9K8Y=]@DQ]K\\M\\0PM0A^%P\\_L.W*<@WR@;@+_=4[BGD3CGDJ2$N*?@J$D/Q\nMMGYB/UBDB.1$A\\K6:'LB,D.4M#K/AOQ_`=2/?ZWI'0U98*_=+19W=,,[-^`R\nM,?XZ4RI*@,!`GD&&3G+:/ES\\HTLJ8#B^U&\\QR\\!\"-\"BIC;YTA*$V`.7.X6$B\nM(P3]]DY3.X\"\\G\\/DWJ&JKW+.6B/,BDJ]7=229P]CN&5)9Z;YA+'%DCQ>LY2#\nM#>(!;E*,<4W26!CBA?AEUJ1UI\\`)YI+NT6(DU&LAJ\\SM&DLKFX0CTH:<:W,Z\nM-:8EGV;Q9\\0:8JN4;=5M=D+C;NDYBKOU)S3NGG9\"(]FLME^U]BLK#H]5,,%5\nMT=PK+U#Y.OH04M\\C7<_A?@O$#L^@34$./5<!E$%\\9Q(2U>/S&0A;M]E%)!OL\nM(N+DI`?Z95N6)J[X\":Z?SM7B0.P(D3X.%/`>U(DBT8=VJNC%_P,9I`P-1E4`\n!````\n`\nend\n",
"msg_date": "Mon, 5 Oct 1998 19:27:14 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "pg_dump and more"
},
{
"msg_contents": "Hi Jan and all\n\nOn Mon, 5 Oct 1998, Jan Wieck wrote:\n\n> Hi,\n> \n> Bruce: Would you please apply the patch at the end?\n> \n> Terry: Sorry, but dumping views is covered completely by this\n> approach for dumping rewrite rules.\n\nSounds good, I really don't mind that what I did does not get used.\nFor it seems that I blow on the right coal and a fire started.\nDumping of views and such has been lacking for WAY TOO LONG.\nAnd it sounds like even more came out of all of this then any of us would\nhave thought.\n\nAlso good because I have not had time to do any more with it:-)\n\nThanks for all you've done, sounds like you've done a greate job.\n\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n",
"msg_date": "Tue, 6 Oct 1998 17:18:11 -0400 (EDT)",
"msg_from": "Terry Mackintosh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and more"
},
{
"msg_contents": "> Hi,\n> \n> Bruce: Would you please apply the patch at the end?\n> \n> Terry: Sorry, but dumping views is covered completely by this\n> approach for dumping rewrite rules.\n> \n> Playing around with pg_dump for a while resulted in some\n> fixes, enhancements and some found bugs not yet fixed. After\n> all I was able to get useful results when dumping/reloading\n> the regression database.\n\nI have backed out Terry's patch, and applied this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 6 Oct 1998 18:08:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and more"
},
{
"msg_contents": "> Hi Jan and all\n> \n> On Mon, 5 Oct 1998, Jan Wieck wrote:\n> \n> > Hi,\n> > \n> > Bruce: Would you please apply the patch at the end?\n> > \n> > Terry: Sorry, but dumping views is covered completely by this\n> > approach for dumping rewrite rules.\n> \n> Sounds good, I really don't mind that what I did does not get used.\n> For it seems that I blow on the right coal and a fire started.\n\nI like this sentence.\n\n> Dumping of views and such has been lacking for WAY TOO LONG.\n> And it sounds like even more came out of all of this then any of us would\n> have thought.\n\nYes, though I knew Jan had this up his sleeve, and it was on the open\nitems list.\n\nI have removed the mention of the rules limitation in pg_dump man pages\nand sgml.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 6 Oct 1998 22:50:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and more"
}
]
[
{
"msg_contents": "Michael Meskes wrote:\n> \n> On Sat, Oct 03, 1998 at 06:58:04PM +0000, Thomas G. Lockhart wrote:\n> > I'm trying to fix up initlocation to accept an environment variable as\n> > an input parameter (in addition to the absolute path name it already\n> > accepts). \n> \n> And how shall it dsitinguish between a path and a variable?\n> \n> > I'd like to be able to say:\n> > \n> > setenv PGDATA2 /home/postgres/data\n> > initlocation PGDATA2\n> \n> But this could mean the path PGDATA2 too, couldn't it?\n\ninitlocation $PGDATA2\n\n?\n",
"msg_date": "Mon, 5 Oct 1998 18:31:07 +0100 (BST)",
"msg_from": "[email protected] (Patrick Welche)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] need help with csh"
},
{
"msg_contents": "Patrick Welche wrote:\n> \n> > And how shall it dsitinguish between a path and a variable?\n> > > setenv PGDATA2 /home/postgres/data\n> > > initlocation PGDATA2\n> > But this could mean the path PGDATA2 too, couldn't it?\n\nI test that the argument, when translated, has a non-empty result. If it\ndoesn't, I assume that it is an actual path.\n\n> initlocation $PGDATA2\n\nYes, that's how it has always worked. I wanted it to also function\ncorrectly when invoked with the environment variable as an argument (if\nsomeone forgets to put in the \"dollar sign\").\n\nI have a working version committed to the cvs tree which uses printenv\nto extract the contents of the argument. It correctly distinguishes\nbetween paths and envars, and does the right thing with both.\n\nI'd appreciate any reports of problems on any platforms or environments.\nThanks all for the tips.\n\n - Tom\n",
"msg_date": "Tue, 06 Oct 1998 01:34:07 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] need help with csh"
}
]
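The argument handling Thomas describes in the thread above — treat the argument as an environment variable name when it expands to something non-empty, otherwise as a literal path — can be pictured in a few lines. This is a hypothetical sketch, not the actual initlocation script (which is shell code using printenv); `resolve_location` is an invented name:

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* Hypothetical sketch of the argument handling described above: if the
 * argument names a set, non-empty environment variable, use its value;
 * otherwise treat the argument itself as the path. */
static const char *
resolve_location(const char *arg)
{
    const char *val = getenv(arg);

    if (val != NULL && val[0] != '\0')
        return val;             /* argument was an envar name */
    return arg;                 /* argument was a literal path */
}
```

With PGDATA2 set to /home/postgres/data, both `resolve_location("PGDATA2")` and `resolve_location("/home/postgres/data")` yield a usable path, which is the behavior the thread settles on.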
[
{
"msg_contents": "Additions\n---------\ntest new cidr/IP address type(Tom Helbekkmo)\ncomplete rewrite system changes(Jan)\nCREATE TABLE test (x text, s serial) fails if no database creation permission\nregression test all platforms\n\nSerious Items\n------------\nchange pg args for platforms that don't support argv changes\n\t(setproctitle()?, sendmail hack?)\n\nDocs\n----\nman pages/sgml synchronization\ngenerate html/postscript documentation\nmake sure all changes are documented properly\n\nMinor items\n-----------\ncnf-ify still can exhaust memory, make SET KSQO more generic\npermissions on indexes: what do they do? should it be prevented?\nmulti-version concurrency control - work-in-progress for 6.5\nimprove reporting of syntax errors by showing location of error in query\nuse index with constants on functions\nallow chaining of pages to allow >8k tuples\nallow multiple generic operators in expressions without the use of parentheses\ndocument/trigger/rule so changes to pg_shadow create pg_pwd\nlarge objects orphanage\nimprove group handling\nno min/max for oid type\nimprove PRIMARY KEY handling\nhave psql dump out rules text with new function\ngenerate postmaster pid file and remove flock/fcntl lock code\n\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 6 Oct 1998 00:33:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Open 6.4 items"
}
]
[
{
"msg_contents": "Hi,\n\nI compiled Sundays snapshot in preparation of a SVR4/SINIX Readme. A\ncouple of the problems, which I saw, have already been addressed by\nthe recent UnixWare patch.\n\nAdmittedly my operating system platform is somewhat old, however I\nwould think, that there are more os versions out there, which don't\nhave a 'long long' data type.\n\nIn particular my compiler didn't like the following line in\nsrc/backend/port/snprintf.c:\n\n/* IRIX doesn't do 'long long' in va_arg(), so use a typedef */\ntypedef long long long_long;\n\nI'm on a simple 32bit architecture with no long long support, so I\nchanged 'long long' to 'long' and everything was okay.\n\nAny comments?\n\nMfG/Regards\n--\n /==== Siemens AG\n / Ridderbusch / , ICP CS XS QM4\n / /./ Heinz Nixdorf Ring\n /=== /,== ,===/ /,==, // 33106 Paderborn, Germany\n / // / / // / / \\ Tel.: (49) 5251-8-15211\n/ / `==/\\ / / / \\ Email: [email protected]\n\nSince I have taken all the Gates out of my computer, it finally works!!\n",
"msg_date": "Tue, 6 Oct 1998 09:41:31 +0200 (MDT)",
"msg_from": "Frank Ridderbusch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Portability Issue in src/backend/port/snprintf.c (I think)"
},
{
"msg_contents": "> Hi,\n> \n> I compiled Sundays snapshot in preparation of a SVR4/SINIX Readme. A\n> couple of the problems, which I saw, have already been addressed by\n> the recent UnixWare patch.\n> \n> Admittedly my operating system platform is somewhat old, however I\n> would think, that there are more os versions out there, which don't\n> have a 'long long' data type.\n> \n> In particular my compiler didn't like the following line in\n> src/backend/port/snprintf.c:\n> \n> /* IRIX doesn't do 'long long' in va_arg(), so use a typedef */\n> typedef long long long_long;\n> \n> I'm on a simple 32bit architecture with no long long support, so I\n> changed 'long long' to 'long' and everything was okay.\n\nGood point.\n\nI have fixed snprintf.c so it properly works on machines that don't do\n'long long'. I used HAVE_LONG_INT_64 defines around the proper areas.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Wed, 7 Oct 1998 13:14:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Portability Issue in src/backend/port/snprintf.c (I\n\tthink)"
},
{
"msg_contents": "> I compiled Sundays snapshot in preparation of a SVR4/SINIX Readme. A\n> couple of the problems, which I saw, have already been addressed by\n> the recent UnixWare patch.\n\nHi Frank. Can I consider this a report of success? Bruce has fixed the\nproblem you mentioned, and I'd like to update the ports list for this\nrelease. Let me know...\n\nAlso, can everyone else who has done testing recently please report on\nthe regression test for their platform? We'll assume that the current\nproblems on Bruce's release list will be addressed before release, but\nwe should still get a passing regression test without those fixes.\n\nI can report recent success on Linux-2.0.30 (and since that is the\nregression reference machine it will stay good :)\n\nTIA\n\n - Tom\n",
"msg_date": "Sat, 10 Oct 1998 04:37:37 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Portability Issue in src/backend/port/snprintf.c (I\n\tthink)"
}
]
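Bruce's fix can be pictured as a configure-guarded typedef. A minimal sketch, assuming a configure symbol along the lines of the HAVE_LONG_INT_64 define he mentions (the exact symbol names and placement in snprintf.c may differ):

```c
/* Hypothetical sketch of the portability guard: where no 64-bit
 * "long long" exists, fall back to plain "long" so the va_arg()
 * calls in snprintf.c still compile on 32-bit-only SVR4 compilers. */
#ifdef HAVE_LONG_LONG_INT_64
typedef long long long_long;
#else
typedef long long_long;
#endif
```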
[
{
"msg_contents": "Additions\n---------\ntest new cidr/IP address type(Tom Helbekkmo)\ncomplete rewrite system changes(Jan)\nCREATE TABLE test (x text, s serial) fails if no database creation permission\nregression test all platforms\nvacuum crash\n\nSerious Items\n------------\nchange pg args for platforms that don't support argv changes\n\t(setproctitle()?, sendmail hack?)\n\nDocs\n----\nman pages/sgml synchronization\ngenerate html/postscript documentation\nmake sure all changes are documented properly\n\nMinor items\n-----------\ncnf-ify still can exhaust memory, make SET KSQO more generic\npermissions on indexes: what do they do? should it be prevented?\nmulti-version concurrency control - work-in-progress for 6.5\nimprove reporting of syntax errors by showing location of error in query\nuse index with constants on functions\nallow chaining of pages to allow >8k tuples\nallow multiple generic operators in expressions without the use of parentheses\ndocument/trigger/rule so changes to pg_shadow create pg_pwd\nlarge objects orphanage\nimprove group handling\nno min/max for oid type\nimprove PRIMARY KEY handling\ngenerate postmaster pid file and remove flock/fcntl lock code\n\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 6 Oct 1998 23:20:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Open 6.4 items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> test new cidr/IP address type(Tom Helbekkmo)\n\nLooks good to me. I haven't done really heavy testing yet, and I'd\nalso like to update the regression test -- what I included was just a\nvery quick sequence to see that it basically worked, but we should\nhave a more comprehensive test. No great hurry there, though: for\nnow, I'd say it's ready for shipping, modulo the renaming of IPADDR to\nINET, for which I'm sending a separate patch kit.\n\nOne problem though, seemingly lately introduced: It's nice to be able\nto input IP addresses in various formats, for compatibility with other\nsoftware. One of the common formats is a network byte order hex\nstring, like 0x12345678. Until very recently, I could check what the\nheck the actual address behind such a representation was, by going\n\"select '0x12345678'::ipaddr;\". This no longer works, because the\nsystem now helpfully transforms the hex into a long int or something\nand then tries to treat the result as an ipaddr. Uh-oh.\n\nThe intuitively correct thing would be for the hex string to be read\nand converted as a numeric value only if the type it is being cast to\nis, indeed, numeric in nature. In the given case, it should be up to\nipaddr_in() to make sense of the character string. Or what do you say?\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "07 Oct 1998 11:26:20 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> \n> > test new cidr/IP address type(Tom Helbekkmo)\n> \n> Looks good to me. I haven't done really heavy testing yet, and I'd\n> also like to update the regression test -- what I included was just a\n> very quick sequence to see that it basically worked, but we should\n> have a more comprehensive test. No great hurry there, though: for\n> now, I'd say it's ready for shipping, modulo the renaming of IPADDR to\n> INET, for which I'm sending a separate patch kit.\n\nOK. I think Thomas is adding it to the regression tests, particularly\nso people can see how it works. Your readme is now in the manual.\n\n\n> One problem though, seemingly lately introduced: It's nice to be able\n> to input IP addresses in various formats, for compatibility with other\n> software. One of the common formats is a network byte order hex\n> string, like 0x12345678. Until very recently, I could check what the\n> heck the actual address behind such a representation was, by going\n> \"select '0x12345678'::ipaddr;\". This no longer works, because the\n> system now helpfully transforms the hex into a long int or something\n> and then tries to treat the result as an ipaddr. Uh-oh.\n> \n> The intuitively correct thing would be for the hex string to be read\n> and converted as a numeric value only if the type it is being cast to\n> is, indeed, numeric in nature. In the given case, it should be up to\n> ipaddr_in() to make sense of the character string. Or what do you say?\n\nThomas?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Wed, 7 Oct 1998 20:12:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "> > Looks good to me. I haven't done really heavy testing yet, and I'd\n> > also like to update the regression test\n> OK. I think Thomas is adding it to the regression tests, particularly\n> so people can see how it works. Your readme is now in the manual.\n\nIt would be great if you could gin up a real regression test, since I'm\nup to my eyeballs in other Postgres projects. But if you can't, or want\nsome help, let me know. I'd be happy to help with integration of the\ntest...\n\n> > One problem though, seemingly lately introduced: It's nice \n> > to input IP addresses in various formats...\n> > One of the common formats is a network byte order hex\n> > string, like 0x12345678. Until very recently, I could check what \n> > the heck the actual address behind such a representation was, by \n> > going \"select '0x12345678'::ipaddr;\". This no longer works, because \n> > the system now helpfully transforms the hex into a long int or \n> > something and then tries to treat the result as an ipaddr. Uh-oh.\n\nThere is a problem, but this is not the explanation. See below...\n\n> > The intuitively correct thing would be for the hex string to be read\n> > and converted as a numeric value only if the type it is being cast \n> > to is, indeed, numeric in nature. In the given case, it should be \n> > up to ipaddr_in() to make sense of the character string.\n\nWhat you describe as \"no longer works\" is actually a core dump on my\nmachine:\n\npostgres=> select 'x12345678'::ipaddr;\nERROR: could not parse \"x12345678\"\npostgres=> select '0x12345678'::ipaddr;\npqReadData() -- backend closed the channel unexpectedly.\n\nAny single-quote-delimited string is passed to the corresponding _in()\nroutine without change. So the system is not changing anything before\nipaddr_in() sees it for input. 
It appears that the _in() code tries to\nhandle strings starting with \"X\", but fails to do so (though I'm not\nexactly sure what a correct value would look like; I would have guessed\nthat something like 'x12121212' might be legal, but it fails). It also\nappears that the code crashes if there is a leading zero and an\notherwise hex-looking value following, even though it at least sometimes\nis checking that each character is valid.\n\nI haven't looked at it further, but the core dump feature should be a\n\"must-fix\" and I would put the \"hex input\" in the same category since\nthe code claims to handle it but doesn't. Volunteers?\n\n - Tom\n",
"msg_date": "Thu, 08 Oct 1998 07:13:37 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "\"Thomas G. Lockhart\" <[email protected]> writes:\n\n(Oh, and \"::ipaddr\" is now \"::inet\", of course...)\n\n> postgres=> select 'x12345678'::ipaddr;\n> ERROR: could not parse \"x12345678\"\n\nThis is as it should be -- it demands the leading 0 to work, as in:\n\n> postgres=> select '0x12345678'::ipaddr;\n\n...which should return:\n\n?column? \n---------------\n18.52.86.120/32\n(1 row)\n\nbut in your case gives:\n\n> pqReadData() -- backend closed the channel unexpectedly.\n\n...or a number of other, weird errors, with or without crashes, but\nmostly with. It also changes with time, so it looks very much like\na memory corruption. Now, I've read and re-read my code, and I'm\npretty damn sure that I don't write outside the palloc()ed area at\nall, and I do palloc() the sum of VARHDRSZ and the size of the data\nstructure I'm storing. I also set VARSIZE(dst) to the right number\n(the same as the number of bytes I palloc()).\n\nLooking at debug output from the backend, it is obvious that inet_in()\ndoes the right thing: the expected 12 byte data structure looks the\nway it ought to, and since all written data ends up inside it (that\nis, it's filled with what I expect to see there), it's hard to imagine\nwhat might end up outside. It's also very strange that this happens\nonly when the '0x12345678'::inet format is used. 
It would be natural\nfrom this to suspect that inet_net_pton() is at fault, but my tests of\nthat function don't show up any problems.\n\nWeirdly, this patch fixes the problem:\n\nIndex: inet.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/utils/adt/inet.c,v\nretrieving revision 1.2\ndiff -r1.2 inet.c\n52c52\n< \tdst = palloc(VARHDRSZ + sizeof(inet_struct));\n---\n> \tdst = palloc(VARHDRSZ + sizeof(inet_struct) + 1);\n\nNow I'm explicitly asking for at least one byte more than I need, and\nI'm pretty damn sure that I never touch that extra byte, but something\nseems to, since the problem goes away. It's arrogant of me, I know,\nbut barring a complete misunderstanding on my part of how variable\nsize records work (or a stupid bug that I've been staring at for hours\nwithout seeing, of course), I'd say that something outside my code is\nat fault. Any ideas as to how to try to find out?\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "09 Oct 1998 10:50:00 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "Tom Ivar Helbekkmo <[email protected]> writes:\n> Now I'm explicitly asking for at least one byte more than I need, and\n> I'm pretty damn sure that I never touch that extra byte, but something\n> seems to, since the problem goes away. It's arrogant of me, I know,\n> but barring a complete misunderstanding on my part of how variable\n> size records work (or a stupid bug that I've been staring at for hours\n> without seeing, of course), I'd say that something outside my code is\n> at fault. Any ideas as to how to try to find out?\n\nWell, I hate to ruin your day, but coming pre-armed with the knowledge\nthat the code is writing one byte too many, it's pretty obvious that the\nfirst loop in inet_net_pton_ipv4 does indeed do that. Specifically at\n\t\t\telse if (size-- > 0)\n\t\t\t\t*++dst = 0, dirty = 0;\nwhere, when size (the number of remaining destination bytes) is reduced\nto 0, the code nonetheless advances dst and clears the next byte.\nThe loop logic is fundamentally faulty: you can't check for emsgsize\noverflow until you get a digit that is supposed to go into another byte.\nI'd try something like\n\n\ttmp = 0;\n\tndigits = 0; // ndigits is # of hex digits seen for cur byte\n\twhile (ch = next hex digit)\n\t{\n\t\tn = numeric equivalent of ch;\n\t\tassert(n >= 0 && n <= 15);\n\t\ttmp = (tmp << 4) | n;\n\t\tif (++ndigits == 2)\n\t\t{\n\t\t\tif (size-- <= 0)\n\t\t\t\tgoto emsgsize;\n\t\t\t*dst++ = (u_char) tmp;\n\t\t\ttmp = 0, ndigits = 0;\n\t\t}\n\t}\n\tif (ndigits)\n\t\tgoto enoent;\t// odd number of hex digits is bogus\n\n\nBTW, shouldn't this routine clear out all \"size\" bytes of the\ndestination, even if the given data doesn't fill it all? A memset\nat the top might be worthwhile.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Oct 1998 11:01:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.4 items "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Well, I hate to ruin your day,\n\nYou sure didn't! You made my day! :-)\n\n> but coming pre-armed with the knowledge\n> that the code is writing one byte too many, it's pretty obvious that the\n> first loop in inet_net_pton_ipv4 does indeed do that. Specifically at\n> \t\t\telse if (size-- > 0)\n> \t\t\t\t*++dst = 0, dirty = 0;\n> where, when size (the number of remaining destination bytes) is reduced\n> to 0, the code nonetheless advances dst and clears the next byte.\n\nYou're quite right, and I should have caught this one myself, only I\nmust have goofed when I tested the function. (It is written in a\nstyle that's just obscure enough that I chose testing it instead of\nstudying it until I understood in detail what it did.) As far as I\ncan tell, the only actual error in Paul Vixie's code is that the two\nlines you quote above should be:\n\n\t\t\telse if (--size > 0)\n\t\t\t\t*++dst = 0, dirty = 0;\n\nThat is, size should be predecremented instead of postdecremented.\nI'm cc-ing Paul on this, as I assume he wants to get the fix into\nBIND, which is where inet_net_ntop() and inet_net_pton() came from.\n\n> BTW, shouldn't this routine clear out all \"size\" bytes of the\n> destination, even if the given data doesn't fill it all? A memset\n> at the top might be worthwhile.\n\nIt does: there is code further down that handles that. Another\nexample of the obscure programming style: this function really shows\nwhat a low-level language C is! :-)\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "09 Oct 1998 22:38:37 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "Tom Ivar Helbekkmo <[email protected]> writes:\n> As far as I\n> can tell, the only actual error in Paul Vixie's code is that the two\n> lines you quote above should be:\n> \t\t\telse if (--size > 0)\n> \t\t\t\t*++dst = 0, dirty = 0;\n\nNo, that's still wrong, because it will error out (jump to emsgsize)\none byte sooner than it should. The loop is fundamentally broken\nbecause it wants to grab and zero a byte before it knows whether there\nare any digits for the byte.\n\nI think its behavior for an odd number of digits is wrong too, or at\nleast pretty nonintuitive. I like my code a lot better ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Oct 1998 17:20:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.4 items "
},
{
"msg_contents": "i'll take a tiny exception to the declaration that the code is obscure --\nsince that just means you have not looked at the rest of bind :-). but\nthanks for the bug report, i'll fix this in bind 8.1.3.\n",
"msg_date": "Fri, 09 Oct 1998 22:34:08 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.4 items "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> No, that's still wrong, [...]\n\nI'll leave that to you and Paul. Right now, the code works as I\nexpect it to, and that's enough for me. :-)\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "10 Oct 1998 12:28:19 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "I wrote:\n\n> As far as I can tell, the only actual error in Paul Vixie's code is\n> that the two lines you quote above should be:\n> \n> \t\t\telse if (--size > 0)\n> \t\t\t\t*++dst = 0, dirty = 0;\n\n\nThat is, the patch to apply to fix the INET type is:\n\nIndex: inet_net_pton.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/utils/adt/inet_net_pton.c,v\nretrieving revision 1.2\ndiff -r1.2 inet_net_pton.c\n120c120\n< \t\t\telse if (size-- > 0)\n---\n> \t\t\telse if (--size > 0)\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "10 Oct 1998 14:12:40 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Tom Ivar Helbekkmo <[email protected]> writes:\n> > As far as I\n> > can tell, the only actual error in Paul Vixie's code is that the two\n> > lines you quote above should be:\n> > \t\t\telse if (--size > 0)\n> > \t\t\t\t*++dst = 0, dirty = 0;\n> \n> No, that's still wrong, because it will error out (jump to emsgsize)\n> one byte sooner than it should. The loop is fundamentally broken\n> because it wants to grab and zero a byte before it knows whether there\n> are any digits for the byte.\n\nDamn! You're right! I swear I tested that change at home, and found\nit to do what I wanted -- but now that I've put it into my production\nsystem at work, it doesn't work here. Heck, I even read the code very\ncarefully to verify that it was all that was needed! I think I really\nneed the holiday I'm taking next week!\n\nTo keep this in sync with BIND, maybe Paul could take a look, and fix\nthe code the way he wants it there?\n\nWhoops. It just hit me. When I tested my \"quick fix\" last night, I\njust looked at what happened to the data, and ignored the return\nvalue. Silly of me...\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "10 Oct 1998 14:38:25 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> I think its behavior for an odd number of digits is wrong too, or at\n> least pretty nonintuitive. I like my code a lot better ;-)\n\nWell, I like your code better too on closer inspection. I changed it\na bit, though, because I don't feel that an odd number of digits in\nthe hex string is wrong at all -- it's just a representation of a bit\nstring, with the representation padded to an even multiple of four\nbits.\n\nAnyway, here's what I ended up with for the code in question:\n\n if (ch == '0'&& (src[0] == 'x'|| src[0] == 'X')\n && isascii(src[1]) && isxdigit(src[1]))\n {\n /* Hexadecimal: Eat nybble string. */\n if (size <= 0)\n goto emsgsize;\n tmp = 0;\n dirty = 0;\n src++; /* skip x or X. */\n while ((ch = *src++) != '\\0'&&\n isascii(ch) && isxdigit(ch))\n {\n if (isupper(ch))\n ch = tolower(ch);\n n = strchr(xdigits, ch) - xdigits;\n assert(n >= 0 && n <= 15);\n tmp = (tmp << 4) | n;\n if (++dirty == 2) {\n if (size-- <= 0)\n goto emsgsize;\n *dst++ = (u_char) tmp;\n tmp = 0, dirty = 0;\n }\n }\n if (dirty) {\n if (size-- <= 0)\n goto emsgsize;\n tmp <<= 4;\n *dst++ = (u_char) tmp;\n }\n }\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "10 Oct 1998 20:14:51 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "Applied\n\n> I wrote:\n> \n> > As far as I can tell, the only actual error in Paul Vixie's code is\n> > that the two lines you quote above should be:\n> > \n> > \t\t\telse if (--size > 0)\n> > \t\t\t\t*++dst = 0, dirty = 0;\n> \n> \n> That is, the patch to apply to fix the INET type is:\n> \n> Index: inet_net_pton.c\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/backend/utils/adt/inet_net_pton.c,v\n> retrieving revision 1.2\n> diff -r1.2 inet_net_pton.c\n> 120c120\n> < \t\t\telse if (size-- > 0)\n> ---\n> > \t\t\telse if (--size > 0)\n> \n> -tih\n> -- \n> Popularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Sun, 11 Oct 1998 21:31:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "> Applied\n>\n> > I wrote:\n> >\n> > > As far as I can tell, the only actual error in Paul Vixie's code is\n> > > that the two lines you quote above should be:\n> > >\n> > > \t\t\telse if (--size > 0)\n> > > \t\t\t\t*++dst = 0, dirty = 0;\n\n[snip]\n\n> > < \t\t\telse if (size-- > 0)\n> > ---\n> > > \t\t\telse if (--size > 0)\n\nDidn't we agree this WASN'T the fix for the problem? Something about return\nvalues? I seem to remember that this piece of code needed to be rewritten...\n\nTaral\n\n",
"msg_date": "Sun, 11 Oct 1998 21:16:17 -0500",
"msg_from": "\"Taral\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "> Didn't we agree this WASN'T the fix for the problem? Something about \n> return values? I seem to remember that this piece of code needed to be \n> rewritten...\n\nYup. Bruce, Tom has another patch he posted which will behave better...\n\n - Tom\n",
"msg_date": "Mon, 12 Oct 1998 06:03:22 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "> > Didn't we agree this WASN'T the fix for the problem? Something about \n> > return values? I seem to remember that this piece of code needed to be \n> > rewritten...\n> \n> Yup. Bruce, Tom has another patch he posted which will behave better...\n> \n> - Tom\n> \n\nI have lost it. Could someone resend it, and the patch to reverse too.\n\nThanks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Mon, 12 Oct 1998 09:47:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> I have lost it. Could someone resend it, and the patch to reverse too.\n\nThis is the one that shouldn't have been applied:\n\nIndex: inet_net_pton.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/utils/adt/inet_net_pton.c,v\nretrieving revision 1.2\ndiff -r1.2 inet_net_pton.c\n120c120\n< \t\t\telse if (size-- > 0)\n---\n> \t\t\telse if (--size > 0)\n\nThis is the one that works:\n\nIndex: inet_net_pton.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/utils/adt/inet_net_pton.c,v\nretrieving revision 1.2\ndiff -r1.2 inet_net_pton.c\n108c108,109\n< \t\t*dst = 0, dirty = 0;\n---\n> \t\ttmp = 0;\n> \t\tdirty = 0;\n117,122c118,127\n< \t\t\t*dst |= n;\n< \t\t\tif (!dirty++)\n< \t\t\t\t*dst <<= 4;\n< \t\t\telse if (size-- > 0)\n< \t\t\t\t*++dst = 0, dirty = 0;\n< \t\t\telse\n---\n> \t\t\ttmp = (tmp << 4) | n;\n> \t\t\tif (++dirty == 2) {\n> \t\t\t\tif (size-- <= 0)\n> \t\t\t\t\tgoto emsgsize;\n> \t\t\t\t*dst++ = (u_char) tmp;\n> \t\t\t\ttmp = 0, dirty = 0;\n> \t\t\t}\n> \t\t}\n> \t\tif (dirty) {\n> \t\t\tif (size-- <= 0)\n123a129,130\n> \t\t\ttmp <<= 4;\n> \t\t\t*dst++ = (u_char) tmp;\n125,126d131\n< \t\tif (dirty)\n< \t\t\tsize--;\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "12 Oct 1998 17:16:41 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "Both applied, with the first one reverse applied. Thanks.\n\n\n> Bruce Momjian <[email protected]> writes:\n> \n> > I have lost it. Could someone resend it, and the patch to reverse too.\n> \n> This is the one that shouldn't have been applied:\n> \n> Index: inet_net_pton.c\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/backend/utils/adt/inet_net_pton.c,v\n> retrieving revision 1.2\n> diff -r1.2 inet_net_pton.c\n> 120c120\n> < \t\t\telse if (size-- > 0)\n> ---\n> > \t\t\telse if (--size > 0)\n> \n> This is the one that works:\n> \n> Index: inet_net_pton.c\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/backend/utils/adt/inet_net_pton.c,v\n> retrieving revision 1.2\n> diff -r1.2 inet_net_pton.c\n> 108c108,109\n> < \t\t*dst = 0, dirty = 0;\n> ---\n> > \t\ttmp = 0;\n> > \t\tdirty = 0;\n> 117,122c118,127\n> < \t\t\t*dst |= n;\n> < \t\t\tif (!dirty++)\n> < \t\t\t\t*dst <<= 4;\n> < \t\t\telse if (size-- > 0)\n> < \t\t\t\t*++dst = 0, dirty = 0;\n> < \t\t\telse\n> ---\n> > \t\t\ttmp = (tmp << 4) | n;\n> > \t\t\tif (++dirty == 2) {\n> > \t\t\t\tif (size-- <= 0)\n> > \t\t\t\t\tgoto emsgsize;\n> > \t\t\t\t*dst++ = (u_char) tmp;\n> > \t\t\t\ttmp = 0, dirty = 0;\n> > \t\t\t}\n> > \t\t}\n> > \t\tif (dirty) {\n> > \t\t\tif (size-- <= 0)\n> 123a129,130\n> > \t\t\ttmp <<= 4;\n> > \t\t\t*dst++ = (u_char) tmp;\n> 125,126d131\n> < \t\tif (dirty)\n> < \t\t\tsize--;\n> \n> -tih\n> -- \n> Popularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Mon, 12 Oct 1998 11:57:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.4 items"
},
{
"msg_contents": "Wow.  Tough room!  I've got to admit that I don't have any test cases\nwith hex strings in them, and had the hex portion of inet_net_pton_ipv4()\never been tested here it would have shown every error you folks found:\n\n> From: Tom Ivar Helbekkmo <[email protected]>\n> Date: 09 Oct 1998 22:38:37 +0200\n> \n> As far as I can tell, the only actual error in Paul Vixie's code is that\n> the two lines you quote above should be:\n> \n> \t\t\telse if (--size > 0)\n> \t\t\t\t*++dst = 0, dirty = 0;\n> \n> That is, size should be predecremented instead of postdecremented.\n> I'm cc-ing Paul on this, as I assume he wants to get the fix into\n> BIND, which is where inet_net_ntop() and inet_net_pton() came from.\n\nAs later bespoken by someone else, this is not yet the right answer.\n\n> > BTW, shouldn't this routine clear out all \"size\" bytes of the\n> > destination, even if the given data doesn't fill it all?  A memset\n> > at the top might be worthwhile.\n\nNo.  \"size\" is the maximum I'm allowed to fill -- but I only touch the\nbits I need, and I return the number of bits I touched.\n\n> It does: there is code further down that handles that.  Another\n> example of the obscure programming style: this function really shows\n> what a low-level language C is!  :-)\n\nC is not a language, it is a very smart macro assembler.  Modula-3 is a\nlanguage.  I'm told that Eiffel is a language.  Scheme is definitely a\nlanguage.  But C is not a language.  It's just what we all use :-).\n\nMore seriously, this function is called in the inner loop and speed was\nimportant enough to trade off some simplicity for.  The amazing thing is,\nbecause I was blasting the same byte of *dst twice per loop iteration, it\nwas actually sped up considerably by the patches suggested here.  I've got\negg on my face this time fershure man.\n\nAnd as I said, if you think this routine is obscure, that just shows that\nyou've not looked at the rest of BIND.  This routine is at least self\ncontained and its bugs don't lean on other bugs.\n\n> Date: Fri, 09 Oct 1998 17:20:22 -0400\n> From: Tom Lane <[email protected]>\n> \n> > As far as I can tell, the only actual error in Paul Vixie's code is\n> > that the two lines you quote above should be:\n> >\n> > \t\t\telse if (--size > 0)\n> > \t\t\t\t*++dst = 0, dirty = 0;\n> \n> No, that's still wrong, because it will error out (jump to emsgsize)\n> one byte sooner than it should.  The loop is fundamentally broken\n> because it wants to grab and zero a byte before it knows whether there\n> are any digits for the byte.\n\nYes.\n\n> From: Tom Ivar Helbekkmo <[email protected]>\n> Date: 10 Oct 1998 20:14:51 +0200\n> \n> > I think its behavior for an odd number of digits is wrong too, or at\n> > least pretty nonintuitive.  I like my code a lot better ;-)\n> \n> Well, I like your code better too on closer inspection.  I changed it\n> a bit, though, because I don't feel that an odd number of digits in\n> the hex string is wrong at all -- it's just a representation of a bit\n> string, with the representation padded to an even multiple of four\n> bits.\n\nI agree with this interpretation.  I'm now wracking my brain trying to\nremember what other source file I got this code from, in what other \npackage, in what year, done for what project, because it's possible that\nthe bug was inherited.\n\n> Anyway, here's what I ended up with for the code in question:\n> \n>     if (ch == '0'&& (src[0] == 'x'|| src[0] == 'X')\n>         && isascii(src[1]) && isxdigit(src[1]))\n>     {\n>         /* Hexadecimal: Eat nybble string. */\n>         if (size <= 0)\n>             goto emsgsize;\n>         tmp = 0;\n>         dirty = 0;\n>         src++;                  /* skip x or X. */\n>         while ((ch = *src++) != '\\0'&&\n>                isascii(ch) && isxdigit(ch))\n>         {\n>             if (isupper(ch))\n>                 ch = tolower(ch);\n>             n = strchr(xdigits, ch) - xdigits;\n>             assert(n >= 0 && n <= 15);\n>             tmp = (tmp << 4) | n;\n>             if (++dirty == 2) {\n>                 if (size-- <= 0)\n>                     goto emsgsize;\n>                 *dst++ = (u_char) tmp;\n>                 tmp = 0, dirty = 0;\n>             }\n>         }\n>         if (dirty) {\n>             if (size-- <= 0)\n>                 goto emsgsize;\n>             tmp <<= 4;\n>             *dst++ = (u_char) tmp;\n>         }\n>     }\n\nSince that made some stylistic changes that had nothing to do with the\nalgorithm, it caught my eye (that's a hint.)  In keeping with its spirit,\nI came up with the following.  Comments please?\n\n        if (ch == '0' && (src[0] == 'x' || src[0] == 'X')\n            && isascii(src[1]) && isxdigit(src[1])) {\n                /* Hexadecimal: Eat nybble string. */\n                if (size <= 0)\n                        goto emsgsize;\n                dirty = 0;\n                src++;  /* skip x or X. */\n                while ((ch = *src++) != '\\0' && isascii(ch) && isxdigit(ch)) {\n                        if (isupper(ch))\n                                ch = tolower(ch);\n                        n = strchr(xdigits, ch) - xdigits;\n                        assert(n >= 0 && n <= 15);\n                        if (dirty == 0)\n                                tmp = n;\n                        else\n                                tmp = (tmp << 4) | n;\n                        if (++dirty == 2) {\n                                if (size-- <= 0)\n                                        goto emsgsize;\n                                *dst++ = (u_char) tmp;\n                                dirty = 0;\n                        }\n                }\n                if (dirty) {  /* Odd trailing nybble? */\n                        if (size-- <= 0)\n                                goto emsgsize;\n                        *dst++ = (u_char) (tmp << 4);\n                }\n        }\n\nFor the record, I found my original code completely impenetrable, and I'd\nlike to both thank those of you who slogged through it, and apologize for\nnot testing every code path before I released it.  I oughta know better,\nand you fine folks deserve better.\n",
"msg_date": "Tue, 13 Oct 1998 00:37:28 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.4 items "
}
] |
[
{
"msg_contents": "I think I mentioned this before, but no-one fixed it... The variable 'value'\nis being assigned a long_long when %ll is being used, but value is only a\n'long'... Don'tcha wish there were type-checking for va_arg?\n\nEnjoy.\n\nTaral\n\n*** ./backend/port/snprintf.c.orig Wed Oct 7 17:15:04 1998\n--- ./backend/port/snprintf.c Wed Oct 7 17:16:07 1998\n***************\n*** 130,136 ****\n--- 130,140 ----\n dopr(char *buffer, const char *format,...)\n {\n int ch;\n+ #ifdef HAVE_LONG_INT_64\n+ long_long value;\n+ #else\n long value;\n+ #endif\n int longflag = 0;\n int longlongflag = 0;\n int pointflag = 0;\n\n",
"msg_date": "Wed, 7 Oct 1998 16:19:01 -0500",
"msg_from": "\"Taral\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Oops in snprintf.c"
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> I think I mentioned this before, but no-one fixed it... The variable 'value'\n> is being assigned a long_long when %ll is being used, but value is only a\n> 'long'... Don'tcha wish there were type-checking for va_arg?\n> \n> Enjoy.\n> \n> Taral\n> \n> *** ./backend/port/snprintf.c.orig Wed Oct 7 17:15:04 1998\n> --- ./backend/port/snprintf.c Wed Oct 7 17:16:07 1998\n> ***************\n> *** 130,136 ****\n> --- 130,140 ----\n> dopr(char *buffer, const char *format,...)\n> {\n> int ch;\n> + #ifdef HAVE_LONG_INT_64\n> + long_long value;\n> + #else\n> long value;\n> + #endif\n> int longflag = 0;\n> int longlongflag = 0;\n> int pointflag = 0;\n> \n> \n> \n\nIt is more complicated than that.  You have to fix fmtnum too.  I have\napplied a patch.  Let me know how it works.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Wed, 7 Oct 1998 20:35:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Oops in snprintf.c"
}
] |
[
{
"msg_contents": "There still is an empty test/regress/expected/numerology-NetBSD.out\nfile floating around. It should be deleted from the tree as there is\nno need for a special case.\n\nCheers,\nBrook\n",
"msg_date": "Wed, 7 Oct 1998 15:35:45 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "extra regression expected file"
}
] |
[
{
"msg_contents": "The ChangeLog file is missing. I didn't spot this until I went in to add\nan entry to it. Luckily, I still had the mail I sent back in August, which\nwas when it was created.\n\nIt should go in the src/interfaces/jdbc directory, along side the\nMakefile.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf",
"msg_date": "Wed, 7 Oct 1998 22:51:26 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "Missing file from JDBC Driver"
},
{
"msg_contents": "Done.\n\n\n> \n> The ChangeLog file is missing. I didn't spot this until I went in to add\n> an entry to it. Luckily, I still had the mail I sent back in August, which\n> was when it was created.\n> \n> It should go in the src/interfaces/jdbc directory, along side the\n> Makefile.\n> \n> Peter\n> \n> -- \n> Peter T Mount [email protected]\n> Main Homepage: http://www.retep.org.uk\n> PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n> Java PDF Generator: http://www.retep.org.uk/pdf\nContent-Description: src/interfaces/jdbc/ChangeLog\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Wed, 7 Oct 1998 20:40:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Missing file from JDBC Driver"
}
] |
[
{
"msg_contents": "Here is a modified version of the vacuum crash I am studying.\n\nInteresting is that if I do a 'vacuum getting' instead of 'select * from\ngetting;vacuum;', I see a different error message.  Rather than a crash\ndue to an Assert(), psql shows:\n\n\tNOTICE:  AbortTransaction and not in in-progress state \n\tNOTICE:  AbortTransaction and not in in-progress state \n\nand the postmaster log file shows:\n\n\tERROR:  cannot write block -1 of [] blind\n\tAbortCurrentTransaction\n\tNOTICE:  AbortTransaction and not in in-progress state \n\nThis can be debugged by commenting out the psql command at the end,\nrunning the script, and then running a backend from gdb and doing\n'vacuum getting'.\n\nThis looks like it may be easier to track down.  Vadim, any chance you\ncan bail me out here, again.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n\n\n#!/bin/sh\n\nDBNAME=ptest\n\ndestroydb $DBNAME\ncreatedb $DBNAME\npsql -e $DBNAME <<EOF\ncreate table header\n(\n\thost\ttext\tnot null,\n\tport\tint\tnot null,\n\tpath\ttext\tnot null,\n\tfile\ttext\tnot null,\n\textra\ttext\tnot null,\n\tname\ttext\tnot null,\n\tvalue\ttext\tnot null\n);\ncreate index header_url_idx on header (host, port, path, file, extra);\ncreate unique index header_uniq_idx on header (host, port, path, file, extra, name);\n\ncreate table reference\n(\n\tf_url\ttext\tnot null,\n\tt_url\ttext\tnot null\n);\ncreate index reference_from_idx on reference (f_url);\ncreate index reference_to_idx on reference (t_url);\ncreate unique index reference_uniq_idx on reference (f_url, t_url);\n\ncreate table extension\n(\n\text\ttext\tnot null,\n\tnote\ttext\n);\ncreate unique index extension_ext_idx on extension (ext);\n\ncreate table getting\n(\n\thost\ttext\tnot null,\n\tport\tint\tnot null,\n\tip\ttext\tnot null,\n\twhen\tdatetime\tnot null\n);\n--create unique index getting_ip_idx on getting (ip);\nEOF\n#psql -c \"delete from getting; vacuum;\" $DBNAME\n psql -c \"vacuum getting;\" $DBNAME\n# psql -c \"select * from getting; vacuum;\" $DBNAME\n#psql -c \"delete from getting;\" $DBNAME\n#psql -c \"select * from getting;\" $DBNAME\n#psql -c \"vacuum;\" $DBNAME\n#psql -c \"vacuum; vacuum;\" $DBNAME",
"msg_date": "Wed, 7 Oct 1998 20:02:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum crash"
},
{
"msg_contents": "> Here is a modified version of the vacuum crash I am studying.\n> Interesting is that if I do a 'vacuum getting' instead of 'select * \n> from getting;vacuum;', I see a different error message. Rather than a \n> crash due to an Assert(), psql shows:\n> NOTICE: AbortTransaction and not in in-progress state\n> NOTICE: AbortTransaction and not in in-progress state\n> and the postmaster log file shows:\n> ERROR: cannot write block -1 of [] blind\n> AbortCurrentTransaction\n> NOTICE: AbortTransaction and not in in-progress state\n> This can be debugged by commenting out the psql command at the end,\n> running the script, and then running a backend from gdb and doing\n> 'vacuum getting'.\n\nWell Oleg, you are not alone now. Others are seeing your problem, so it\nis much more likely to be fixed than when it was just one person\nreporting the symptom. If you are familiar with backend code then\nperhaps you could help track down the problem. If you aren't familiar\nwith it, you may be interested in volunteering to test patches from\nBruce. This would be especially helpful since you have a real talent for\ndemonstrating this problem :)\n\n - Tom\n",
"msg_date": "Thu, 08 Oct 1998 06:33:50 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum crash"
},
{
"msg_contents": "On Thu, 8 Oct 1998, Thomas G. Lockhart wrote:\n\n> Date: Thu, 08 Oct 1998 06:33:50 +0000\n> From: \"Thomas G. Lockhart\" <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Bruce Momjian <[email protected]>,\n> PostgreSQL-development <[email protected]>\n> Subject: Re: [HACKERS] vacuum crash\n> \n> > Here is a modified version of the vacuum crash I am studying.\n> > Interesting is that if I do a 'vacuum getting' instead of 'select * \n> > from getting;vacuum;', I see a different error message. Rather than a \n> > crash due to an Assert(), psql shows:\n> > NOTICE: AbortTransaction and not in in-progress state\n> > NOTICE: AbortTransaction and not in in-progress state\n> > and the postmaster log file shows:\n> > ERROR: cannot write block -1 of [] blind\n> > AbortCurrentTransaction\n> > NOTICE: AbortTransaction and not in in-progress state\n> > This can be debugged by commenting out the psql command at the end,\n> > running the script, and then running a backend from gdb and doing\n> > 'vacuum getting'.\n> \n> Well Oleg, you are not alone now. Others are seeing your problem, so it\n> is much more likely to be fixed than when it was just one person\n> reporting the symptom. If you are familiar with backend code then\n> perhaps you could help track down the problem. If you aren't familiar\n> with it, you may be interested in volunteering to test patches from\n> Bruce. This would be especially helpful since you have a real talent for\n> demonstrating this problem :)\n> \n> - Tom\n> \n\nUnfortunately I'm not familiar with backend code but would happy\nto test everything as soon as patches will be available.\n\n\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 8 Oct 1998 11:45:34 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum crash"
}
] |
[
{
"msg_contents": "I think I am on to something.\n\nThe funny way I update the system tables with vacuum optimization\ninformation is wrong, I think, and the new cache code is properly\ncomplaining about it.\n\nI will keep testing. If I comment out the following code, the vacuum works.\nIs there a better way to do this?\n\n---------------------------------------------------------------------------\n\n /* XXX -- after write, should invalidate relcache in other backends */\n WriteNoReleaseBuffer(ItemPointerGetBlockNumber(&rtup->t_ctid));\n \n /*\n * invalidating system relations confuses the function cache of\n * pg_operator and pg_opclass, bjm\n */\n if (!IsSystemRelationName(pgcform->relname.data))\n RelationInvalidateHeapTuple(rd, rtup);\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Wed, 7 Oct 1998 22:23:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum crash"
},
{
"msg_contents": "I am wrong here. There is something else wrong.\n\n\n> I think I am on to something.\n> \n> The funny way I update the system tables with vacuum optimization\n> information is wrong, I think, and the new cache code is properly\n> complaining about it.\n> \n> I will keep testing. If I comment out the following code, the vacuum works.\n> Is there a better way to do this?\n> \n> ---------------------------------------------------------------------------\n> \n> /* XXX -- after write, should invalidate relcache in other backends */\n> WriteNoReleaseBuffer(ItemPointerGetBlockNumber(&rtup->t_ctid));\n> \n> /*\n> * invalidating system relations confuses the function cache of\n> * pg_operator and pg_opclass, bjm\n> */\n> if (!IsSystemRelationName(pgcform->relname.data))\n> RelationInvalidateHeapTuple(rd, rtup);\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Thu, 8 Oct 1998 00:44:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] vacuum crash"
}
] |
[
{
"msg_contents": "subscribe\n\n\n",
"msg_date": "Thu, 8 Oct 1998 13:00:51 +0800",
"msg_from": "bisheng <[email protected]>",
"msg_from_op": true,
"msg_subject": "None"
}
] |
[
{
"msg_contents": "> > changes are version independent. The main difference from \n> other port is\n> > the renamed system table pg_version (vs. PG_VERSION) to \n> pg_ver - Windows\n> \n> I thought Windows allowed any case, so you could open a file with\n> \"PG_VERSION\" or \"pg_version\" and it will open any file of any matching\n> case.\n\nThe problem is that there exists file PG_VERSION where is the current\nversion stored (now 6.4) in the directory ./data/base/template1 and when\nthe bootstrap code wants to create pg_version system table it stops\nbecause the file with the \"same\" name already exists.\n\n> What would you like done with this patch? Should it merged into the\n> tree, or just used for people testing things on NT, and later \n> merged in\n> as you feel more comfortable? You can make a 6.4 final \n> patch, perhaps.\n\nI think we should wait for the final 6.4 version (I hope it will be soon\navailable) and than make a patch against it and include it also in the\n6.5 development tree. There are some open issues yet.\n\nI run the regression tests yesterday, the results are here:\n=============== running regression queries... =================\nboolean .. ok\nchar .. ok\nname .. ok\nvarchar .. ok\ntext .. ok\nstrings .. ok\nint2 .. failed\nint4 .. failed\nint8 .. failed\noid .. ok\nfloat4 .. ok\nfloat8 .. failed\nnumerology .. failed\npoint .. ok\nlseg .. ok\nbox .. ok\npath .. ok\npolygon .. ok\ncircle .. ok\ngeometry .. failed\ntimespan .. ok\ndatetime .. failed\nreltime .. ok\nabstime .. failed\ntinterval .. failed\nhorology .. failed\ncomments .. ok\ncreate_function_1 .. ok\ncreate_type .. ok\ncreate_table .. ok\ncreate_function_2 .. failed\nconstraints .. failed\ntriggers .. failed\ncopy .. ok\ncreate_misc .. ok\ncreate_aggregate .. ok\ncreate_operator .. ok\ncreate_view .. ok\ncreate_index .. ok\nsanity_check .. ok\nerrors .. ok\nselect .. ok\nselect_into .. ok\nselect_distinct .. ok\nselect_distinct_on .. ok\nselect_implicit .. 
ok\nselect_having .. ok\nsubselect .. ok\nunion .. failed\naggregates .. ok\ntransactions .. ok\nrandom .. ok\nportals .. ok\nmisc .. failed\narrays .. ok\nbtree_index .. ok\nhash_index .. ok\nselect_views .. ok\nalter_table .. ok\nportals_p2 .. ok\nsetup_ruletest .. ok\nrun_ruletest .. failed\n\nnow some explanations:\n- int2, int4 - these are OK but there is a problem with the error\nmessages from libc\n- int8 - the libc does probably have no support for long long ints in\nprintf()\n- run_ruletest - the difference is only in the name that is selected\nfrom the tables\n- many other tests failed due to not having the dynamically loaded code in\nDLLs\n\n\t\t\t\t\tDan\n",
"msg_date": "Thu, 8 Oct 1998 11:26:15 +0200 ",
"msg_from": "Horak Daniel <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] NT port of PGSQL - success"
},
{
"msg_contents": "> The problem is that there exists file PG_VERSION where is the current\n> version stored (now 6.4) in the directory ./data/base/template1 and \n> when the bootstrap code wants to create pg_version system table it \n> stops because the file with the \"same\" name already exists.\n> I think we should wait for the final 6.4 version (I hope it will be \n> soon available) and than make a patch against it and include it also \n> in the 6.5 development tree.\n\nMost of us aren't NT propellerheads, but now that a port might be\navailable I'm sure the mailing lists will get more folks who are. Then a\ntremendous step forward such as you've take will be greeted with more\nenthusiasm :)\n\n> There are some open issues yet.\n> now some explanations:\n> - int8 - the libc does probably have no support for long long ints in\n> printf()\n\nThere is a local definition for snprintf() which might have this support\nfor you. Look in backend/port/snprintf.c\n\n> - run_ruletest - the difference is only in the name that is selected\n> from the tables\n> - many other tests failed due to not having the dynamicly loaded code \n> in DLLs\n\nIs DLL support so different that it will never work, or have you not had\ntime to look at it?\n\nI would like to list NT as being \"supported with patches, see web site\"\nfor the next release (or \"partially supported...\"). Is it premature to\ndo that?\n\nGood work btw...\n\n - Tom\n",
"msg_date": "Thu, 08 Oct 1998 15:36:09 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NT port of PGSQL - success"
},
{
"msg_contents": "> > > changes are version independent. The main difference from \n> > other port is\n> > > the renamed system table pg_version (vs. PG_VERSION) to \n> > pg_ver - Windows\n> > \n> > I thought Windows allowed any case, so you could open a file with\n> > \"PG_VERSION\" or \"pg_version\" and it will open any file of any matching\n> > case.\n> \n> The problem is that there exists file PG_VERSION where is the current\n> version stored (now 6.4) in the directory ./data/base/template1 and when\n> the bootstrap code wants to create pg_version system table it stops\n> because the file with the \"same\" name already exists.\n\nThat is a good point. Actually template1/pg_version is never used,\nbecause it is used for a 'version' command we don't support, and I don't\nthink ever worked. Perhaps I will remove the file.\n\n> \n> > What would you like done with this patch? Should it merged into the\n> > tree, or just used for people testing things on NT, and later \n> > merged in\n> > as you feel more comfortable? You can make a 6.4 final \n> > patch, perhaps.\n> \n> I think we should wait for the final 6.4 version (I hope it will be soon\n> available) and than make a patch against it and include it also in the\n> 6.5 development tree. There are some open issues yet.\n> \n> I run the regression tests yesterday, the results are here:\n> =============== running regression queries... =================\n> boolean .. ok\n> char .. ok\n> name .. ok\n> varchar .. ok\n> text .. ok\n> strings .. ok\n> int2 .. failed\n> int4 .. failed\n> int8 .. failed\n> oid .. ok\n> float4 .. ok\n> float8 .. failed\n> numerology .. failed\n> point .. ok\n> lseg .. ok\n> box .. ok\n> path .. ok\n> polygon .. ok\n> circle .. ok\n> geometry .. failed\n> timespan .. ok\n> datetime .. failed\n> reltime .. ok\n> abstime .. failed\n> tinterval .. failed\n> horology .. failed\n> comments .. ok\n> create_function_1 .. ok\n> create_type .. ok\n> create_table .. ok\n> create_function_2 .. 
failed\n> constraints .. failed\n> triggers .. failed\n> copy .. ok\n> create_misc .. ok\n> create_aggregate .. ok\n> create_operator .. ok\n> create_view .. ok\n> create_index .. ok\n> sanity_check .. ok\n> errors .. ok\n> select .. ok\n> select_into .. ok\n> select_distinct .. ok\n> select_distinct_on .. ok\n> select_implicit .. ok\n> select_having .. ok\n> subselect .. ok\n> union .. failed\n> aggregates .. ok\n> transactions .. ok\n> random .. ok\n> portals .. ok\n> misc .. failed\n> arrays .. ok\n> btree_index .. ok\n> hash_index .. ok\n> select_views .. ok\n> alter_table .. ok\n> portals_p2 .. ok\n> setup_ruletest .. ok\n> run_ruletest .. failed\n> \n> now some explanations:\n> - int2, int4 - there are OK but there is a problem with the error\n> messages from libc\n> - int8 - the libc does probably have no support for long long ints in\n> printf()\n> - run_ruletest - the difference is only in the name that is selected\n> from the tables\n> - many other tests failed due to not having the dynamicly loaded code in\n> DLLs\n\nWe have all these problems on most platforms, except for the last one.\n\nGood job.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Thu, 8 Oct 1998 11:36:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NT port of PGSQL - success"
},
{
"msg_contents": "> > - run_ruletest - the difference is only in the name that is selected\n> > from the tables\n> > - many other tests failed due to not having the dynamicly loaded code \n> > in DLLs\n> \n> Is DLL support so different that it will never work, or have you not had\n> time to look at it?\n> \n> I would like to list NT as being \"supported with patches, see web site\"\n> for the next release (or \"partially supported...\"). Is it premature to\n> do that?\n\nGood questions. I think for NT, we may have to just supply a binary on\nthe web site, as I think the tools need for the port may not be\navailable for normal NT sites. That is OK, because there is only on NT\nbinary (for i386, at least, and NT 4.0).\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Thu, 8 Oct 1998 11:46:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NT port of PGSQL - success]"
}
] |
[
{
"msg_contents": "> DeJuan, could you ask on the more general lists whether anyone has\n> tried\n> the tutorials with success, and solicit comments or corrections? TIA\n> \n> - Tom\n> \nSo, has anyone tried the tutorials (with/with out success)? Let me know\nwhat did and didn't work for you. We're trying to re-vamp the docs\nwhile we have the chance, so vent. But, remember be constructive.\n\n\t-DEJ\n\nP.S. Sorry to those of you who receive many copies of this. I'll get\nabout 6 myself.\n",
"msg_date": "Thu, 8 Oct 1998 11:29:23 -0500",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Re: [COMMITTERS] 'pgsql/doc/src/sgml protocol.sgml'"
},
{
"msg_contents": "What tutorials?\n\nOn Thu, 8 Oct 1998, Jackson, DeJuan wrote:\n\n> So, has anyone tried the tutorials (with/with out success)? Let me know\n> what did and didn't work for you. We're trying to re-vamp the docs\n> while we have the chance, so vent. But, remember be constructive.\n> \n> \t-DEJ\n> \n> P.S. Sorry to those of you who receive many copies of this. I'll get\n> about 6 myself.\n> \n\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n",
"msg_date": "Thu, 8 Oct 1998 14:06:00 -0400 (EDT)",
"msg_from": "Terry Mackintosh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] RE: [HACKERS] Re: [COMMITTERS] 'pgsql/doc/src/sgml\n\tprotocol.sgml'"
}
] |
[
{
"msg_contents": "\nSince you listed your address on a RDBMS web site, I thought \nyou might be willing to help.\n\nI represent a leading edge technology firm an seek people \nTo join the professional services division. The most immediate \nneed is for qualified people in the Atlanta, GA area, although we \nneed people along the entire East Coast.\n\nThe following is the job description:\n\nSenior/Principle Consultant\n\nDue to tremendous growth in our Sales and Professional Services \nBusinesses, we have the need to add a key technical resource to our \nProfessional Services business unit. This individual is the \"go to\" technical \nperson for the Professional Services team. We need experience \ndeveloping with C++ in both UNIX and NT environments, experience \nin design and in all phases of the software development cycle, \nRDBMS experience with Oracle, Sybase and NT SQL Server and the \nability to provide both pre and post sales support as needed. \n\nThe position includes: Very competitive compensation package (up to \nlow six figures plus bonus) Excellent benefits A chance to join a very \nprofitable, leading edge, fast growing organization.\n\nIf you know anyone who might be interested please forward this to them \nor contact me today.\n\nDave Eide\nQuest_IT a division of Diedre Moire Corporation\n510 Horizon Center\nRobbbinsville, NJ 08691\nPhone: 609-584-9000 ext 273\nFax: 609-584-9575\nEmail: [email protected]\n\n\n\n",
"msg_date": "Thu, 8 Oct 1998 12:35:49 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "JOBOP Software Development Consultant"
}
] |
[
{
"msg_contents": "> On 07-Oct-98 Jackson, DeJuan wrote:\n> >> Is there any way to select a random row from a table using an SQL\n> >> query?\n> >> I'm using Postgresql 6.3.2.\n> >> \n> > I'd look at using cursors and random().\n> \n> It's a good idea. Do you (or someone else) know how to use the\n> following\n> PostgreSQL functions: oidrand(oid,int4) ,oidsrand(int4)? What are they\n> intended for and what is their result?\n> \nNever seen them before but it looks like:\n oidsrand(int4) -- seeds the random number generator for oidrand to\nuse\n oidrand(oid, int4) -- returns a psudo-random oid\n\nThe parameters to oidrand I can't figure out.\nAnybody else?\n\t\t-DEJ\n> ---\n> \n> ------------------------------------\n> Mauro Bartolomeoli\n> e-mail: [email protected]\n> ICQ#: 9602542\n> ------------------------------------\n",
"msg_date": "Thu, 8 Oct 1998 11:46:49 -0500",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [SQL] random tuple"
},
{
"msg_contents": "> > >> Is there any way to select a random row from a table?\n> > > I'd look at using cursors and random().\n> > Do you (or someone else) know how to use the following\n> > PostgreSQL functions: oidrand(oid,int4) ,oidsrand(int4)?\n> oidsrand(int4) -- seeds the random number generator for oidrand\n> oidrand(oid, int4) -- returns a psudo-random oid\n\nThe regression test uses oidrand(), which is where I stumbled across it.\nThe behavior is that oidrand() returns a boolean true/false with an\ninverse probability specified by the second argument. For example, given\na table t with 100 entries, and the query\n\n select * from t where oidrand(oid, 10)\n\nwill return, on average, 10 (10%) of the entries at random. The function\nis called 100 times in the course of the query, and uses random() or\nsomething similar to decide whether to return true or false for any\nparticular instance.\n\n select * from t where oidrand(oid, 1)\n\nwill, on average, return all entries (1/1 = 100%).\n\n select * from t where oidrand(oid, 100)\n\nwill, on average, return 1 entry (1/100 = 1%) so sometimes will return\none, zero, or two entries, and occasionally return more than two\nentries.\n\nIt's pretty random, probably with a Poisson distribution depending on\nwhat you are asking for.\n\nPresumably oidsrand() allows one to change the seed to keep the\npseudo-random results from repeating from one run to the next. But I\nhaven't looked into it.\n\n - Tom\n",
"msg_date": "Fri, 09 Oct 1998 04:43:55 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: [SQL] random tuple"
}
] |
[
{
"msg_contents": "It seems that moving the installation of man pages from\nsrc/GNUmakefile to docs/Makefile did not leave an appropriate target\nin the original Makefile to refer to. That is, docs/Makefile refers\nto a nonexistent install-man target in the src/GNUmakefile. The\nfollowing patch adds that needed target.\n\nNOTE: there are still man pages installed by the install target in\nsrc/GNUmakefile. See the install and doc targets in\ninterfaces/libpq++/Makefile. Should this be removed? Should a\n(mostly do nothing) install-man target be added to all the makefiles\nso that everything could be traversed from the top as for other\ntargets?\n\nCheers,\nBrook\n\n===========================================================================\n\n--- GNUmakefile.in.orig\tWed Oct 7 01:00:16 1998\n+++ GNUmakefile.in\tThu Oct 8 10:44:45 1998\n@@ -43,6 +43,9 @@\n \t$(MAKE) -C pl install\n \tcat ../register.txt\n \n+install-man:\n+\t$(MAKE) -C man install\n+\n lexverify:\n \t$(MAKE) -C lextest all\n \t@if test ! -f lextest/lextest; then \\\n",
"msg_date": "Thu, 8 Oct 1998 11:06:21 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "man page installation patch"
},
{
"msg_contents": "> It seems that moving the installation of man pages from\n> src/GNUmakefile to docs/Makefile did not leave an appropriate target\n> in the original Makefile to refer to. That is, docs/Makefile refers\n> to a nonexistent install-man target in the src/GNUmakefile. The\n> following patch adds that needed target.\n\nThanks for the patch. I'll apply it as I clean up other loose ends.\n\n> NOTE: there are still man pages installed by the install target in\n> src/GNUmakefile. See the install and doc targets in\n> interfaces/libpq++/Makefile. Should this be removed? Should a\n> (mostly do nothing) install-man target be added to all the makefiles\n> so that everything could be traversed from the top as for other\n> targets?\n\nYeah, I noticed that hiding man page source the other day. I'm planning\non converting it to sgml and yanking the nroff version. Should be gone\nby release day.\n\nI'll look at the makefiles again and try to get them self-consistant.\n\n - Tom\n",
"msg_date": "Fri, 09 Oct 1998 05:00:32 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] man page installation patch"
}
] |
[
{
"msg_contents": "I have had a few problems with the PL stuff recently committed. The\nfollowing patches fix the problems (i.e., all regression tests pass)\nin what I hope to be a platform-independent fashion. The accomplish\nthe following:\n\n1. Allow configure to check for the existence of the tclConfig.sh\n script needed to configure the tcl component of PL. Configure\n already checks other parts of the tcl installation, so it might\n as well check for this script, too, so that paths need not be\n hard-coded into mkMakefile.tcldefs.\n\n A few extra files are created by configure from templates.\n\n Configure is also cleaned up a bit at the end so the long list of\n output files is easier to deal with.\n\nNOTE: run autoconf.\n\n2. A new script mkMakefile.tcldefs.sh is created from the template\n mkMakefile.tcldefs.sh.in so that the substitution for tclConfig.sh\n can be inserted. The script is simplified and renamed to reflect\n the fact that it is a sh script.\n\nNOTE: pl/tcl/mkMakefile.tcldefs should be removed from the tree.\n\n3. The Makefile executes /bin/sh on the new script rather than\n directly executing the script (hence the name change to make it\n more explicit).\n\n4. There are shared library problems with the plpgsql/src/Makefile.\n The port-specific code was taken from the interfaces/tcl?/Makefile\n so that shared libraries should work on all platforms. 
This means \n that that Makefile must be a template for configure.\n\nNOTE: pl/plpgsql/src/Makefile should be removed from the tree.\n\nNOTE: should we be including libtool in our distribution to simplify\nshared library (and other stuff) support?\n\nCheers,\nBrook\n\n===========================================================================\n--- configure.in.orig\tWed Oct 7 01:00:23 1998\n+++ configure.in\tThu Oct 8 08:22:40 1998\n@@ -799,6 +799,30 @@\n \tAC_SUBST(TCL_LIB)\n fi\n \n+dnl Check for Tcl configuration script tclConfig.sh\n+if test \"$USE_TCL\"; then\n+\tAC_MSG_CHECKING(for tclConfig.sh)\n+\tlibrary_dirs=\"/usr/lib $LIBRARY_DIRS\"\n+\tTCL_CONFIG_SH=\n+\tfor dir in $library_dirs; do\n+\t\tfor tcl_dir in $tcl_dirs; do\n+\t\t\tif test -z \"$TCL_CONFIG_SH\"; then\n+\t\t\t\tif test -d \"$dir/$tcl_dir\" -a -r \"$dir/$tcl_dir/tclConfig.sh\"; then\n+\t\t\t\t\tTCL_CONFIG_SH=$dir/$tcl_dir/tclConfig.sh\n+\t\t\t\tfi\n+\t\t\tfi\n+\t\tdone\n+\tdone\n+\tif test -z \"$TCL_CONFIG_SH\"; then\n+\t\tAC_MSG_RESULT(no)\n+\t\tAC_MSG_WARN(tcl support disabled; Tcl configuration script missing)\n+\t\tUSE_TCL=\n+\telse\n+\t\tAC_MSG_RESULT($TCL_CONFIG_SH)\n+\t\tAC_SUBST(TCL_CONFIG_SH)\n+\tfi\n+fi\n+\n dnl Check for location of Tk support (only if Tcl used)\n dnl Disable Tcl support if Tk not found\n \n@@ -883,4 +907,21 @@\n \n AC_CONFIG_HEADER(interfaces/odbc/config.h)\n \n-AC_OUTPUT(GNUmakefile Makefile.global backend/port/Makefile bin/pg_version/Makefile bin/psql/Makefile bin/pg_dump/Makefile backend/utils/Gen_fmgrtab.sh interfaces/libpq/Makefile interfaces/libpq++/Makefile interfaces/libpgtcl/Makefile interfaces/ecpg/lib/Makefile include/version.h interfaces/odbc/Makefile.global interfaces/odbc/GNUmakefile)\n+AC_OUTPUT(\n+ GNUmakefile\n+ Makefile.global\n+ backend/port/Makefile\n+ backend/utils/Gen_fmgrtab.sh\n+ bin/pg_dump/Makefile\n+ bin/pg_version/Makefile\n+ bin/psql/Makefile\n+ include/version.h\n+ interfaces/ecpg/lib/Makefile\n+ interfaces/libpgtcl/Makefile\n+ 
interfaces/libpq++/Makefile\n+ interfaces/libpq/Makefile\n+ interfaces/odbc/GNUmakefile\n+ interfaces/odbc/Makefile.global\n+ pl/plpgsql/src/Makefile\n+ pl/tcl/mkMakefile.tcldefs.sh\n+)\n===========================================================================\n--- pl/tcl/mkMakefile.tcldefs.sh.in.orig\tWed Oct 7 14:45:20 1998\n+++ pl/tcl/mkMakefile.tcldefs.sh.in\tWed Oct 7 14:40:37 1998\n@@ -0,0 +1,12 @@\n+\n+if [ -f @TCL_CONFIG_SH@ ]; then\n+ . @TCL_CONFIG_SH@\n+else\n+ echo \"@TCL_CONFIG_SH@ not found\"\n+ echo \"I need this file! Please make a symbolic link to this file\"\n+ echo \"and start make again.\"\n+ exit 1\n+fi\n+\n+set | grep '^TCL' > Makefile.tcldefs\n+exit 0\n===========================================================================\n--- pl/tcl/Makefile.orig\tThu Apr 9 17:02:53 1998\n+++ pl/tcl/Makefile\tWed Oct 7 15:52:48 1998\n@@ -77,7 +77,7 @@\n all: $(INFILES)\n \n Makefile.tcldefs:\n-\t./mkMakefile.tcldefs\n+\t/bin/sh mkMakefile.tcldefs.sh\n \n #\n # Clean \n===========================================================================\n--- pl/plpgsql/src/Makefile.in.orig\tThu Oct 8 08:18:46 1998\n+++ pl/plpgsql/src/Makefile.in\tThu Oct 8 08:21:07 1998\n@@ -0,0 +1,131 @@\n+#-------------------------------------------------------------------------\n+#\n+# Makefile\n+# Makefile for the plpgsql shared object\n+#\n+# IDENTIFICATION\n+# $Header: /usr/local/cvsroot/pgsql/src/pl/plpgsql/src/Makefile,v 1.1 1998/09/25 15:50:02 momjian Exp $\n+#\n+#-------------------------------------------------------------------------\n+\n+#\n+# Tell make where the postgresql sources live\n+#\n+SRCDIR= ../../..\n+\n+#\n+# Include the global and port specific Makefiles\n+#\n+include $(SRCDIR)/Makefile.global\n+\n+PORTNAME=@PORTNAME@\n+\n+CFLAGS+= -I$(LIBPQDIR) -I$(SRCDIR)/include\n+LFLAGS+= -i -l\n+\n+# For fmgr.h\n+CFLAGS+= -I$(SRCDIR)/backend\n+\n+LDADD+= -L$(LIBPQDIR) -lpq\n+\n+ifeq ($(PORTNAME), linux)\n+ CFLAGS\t\t+= $(CFLAGS_SL)\n+ LDFLAGS_SL\t\t= 
-shared\n+endif\n+\n+ifeq ($(PORTNAME), bsd)\n+ ifdef BSD_SHLIB\n+ LDFLAGS_SL\t\t= -x -Bshareable -Bforcearchive\n+ CFLAGS\t\t+= $(CFLAGS_SL)\n+ endif\n+endif\n+\n+ifeq ($(PORTNAME), bsdi)\n+ ifdef BSD_SHLIB\n+ ifeq ($(LDSUFFIX), .so)\n+ LD\t\t:= shlicc\n+ LDFLAGS_SL\t+= -O -shared\n+ CFLAGS\t\t+= $(CFLAGS_SL)\n+ endif\n+ ifeq ($(LDSUFFIX), .o)\n+ LD\t\t:= shlicc\n+ LDFLAGS_SL\t+= -O -r\n+ CFLAGS\t\t+= $(CFLAGS_SL)\n+ endif\n+ endif\n+endif\n+\n+ifeq ($(PORTNAME), solaris)\n+ LDFLAGS_SL\t\t:= -G -z text\n+ CFLAGS\t\t+= $(CFLAGS_SL)\n+endif\n+\n+ifeq ($(PORTNAME), unixware)\n+ LDFLAGS_SL\t\t:= -G -z text\n+ CFLAGS\t\t+= $(CFLAGS_SL)\n+endif\n+\n+ifeq ($(PORTNAME), univel)\n+ LDFLAGS_SL\t\t:= -G -z text\n+ CFLAGS\t\t+= $(CFLAGS_SL)\n+endif\n+\n+ifeq ($(PORTNAME), hpux)\n+ LDFLAGS_SL\t\t:= -b\n+ CFLAGS\t\t+= $(CFLAGS_SL)\n+endif\n+\n+#\n+# DLOBJ is the dynamically-loaded object file.\n+#\n+DLOBJ= plpgsql$(DLSUFFIX)\n+\n+OBJS=\tpl_parse.o pl_handler.o pl_comp.o pl_exec.o pl_funcs.o\n+\n+ALL=\t$(DLOBJ)\n+\n+#\n+# Build the shared object\n+#\n+all: $(ALL)\n+\n+$(DLOBJ):\t$(OBJS)\n+\n+#\n+# Clean \n+#\n+clean:\n+\trm -f $(ALL)\n+\trm -f *.o y.tab.h pl.tab.h pl_gram.c gram.c pl_scan.c scan.c\n+\n+install: all\n+\t$(INSTALL) $(INSTL_LIB_OPTS) $(DLOBJ) $(DESTDIR)$(LIBDIR)/$(DLOBJ)\n+\n+$(DLOBJ):\t$(OBJS)\n+\t$(LD) $(LDFLAGS_SL) -o $@ $(OBJS)\n+\n+\n+\n+pl_handler.o:\tpl_handler.c plpgsql.h pl.tab.h\n+\n+pl_comp.o:\tpl_comp.c plpgsql.h pl.tab.h\n+\n+pl_exec.o:\tpl_exec.c plpgsql.h pl.tab.h\n+\n+pl_funcs.o:\tpl_funcs.c plpgsql.h pl.tab.h\n+\n+pl_parse.o:\tpl_gram.c pl_scan.c plpgsql.h\n+\t$(CC) $(CFLAGS) -c -o $@ pl_gram.c\n+\n+pl_gram.c:\tgram.c\n+\tsed -e 's/yy/plpgsql_yy/g' -e 's/YY/PLPGSQL_YY/g' <gram.c >pl_gram.c\n+\tsed -e 's/yy/plpgsql_yy/g' -e 's/YY/PLPGSQL_YY/g' <y.tab.h >pl.tab.h\n+\n+pl_scan.c:\tscan.c\n+\tsed -e 's/yy/plpgsql_yy/g' -e 's/YY/PLPGSQL_YY/g' <scan.c >pl_scan.c\n+\n+gram.c:\t\tgram.y\n+\n+scan.c:\t\tscan.l\n+\n+pl.tab.h:\tpl_gram.c\n",
"msg_date": "Thu, 8 Oct 1998 11:27:22 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "PL patches"
},
{
"msg_contents": "Brook Milligan <[email protected]> writes:\n> NOTE: should we be including libtool in our distribution to simplify\n> shared library (and other stuff) support?\n\nThis is probably a reasonable thing to think about for the next release\n(I think it's too late to risk it for 6.4). It'd be a nice way of\ngetting rid of that platform-specific Makefile cruft I was complaining\nabout the other day.\n\nBut ... libtool isn't completely ready for prime time. I've been\ndistributing the latest release of libjpeg with libtool-based shared\nlib support, but I was not brave enough to make that the default\nconfiguration, let alone depend on its working correctly to be able\nto build at all. And I've gotten enough trouble reports to convince\nme this was a wise choice. If we do use libtool, we had better make\nsure that there is a a way to fall back to a simple no-shared-libraries\nbuild process.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Oct 1998 18:40:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PL patches "
},
{
"msg_contents": "Applied, and Makefile.in added.\n\n> I have had a few problems with the PL stuff recently committed. The\n> following patches fix the problems (i.e., all regression tests pass)\n> in what I hope to be a platform-independent fashion. The accomplish\n> the following:\n> \n> 1. Allow configure to check for the existence of the tclConfig.sh\n> script needed to configure the tcl component of PL. Configure\n> already checks other parts of the tcl installation, so it might\n> as well check for this script, too, so that paths need not be\n> hard-coded into mkMakefile.tcldefs.\n> \n> A few extra files are created by configure from templates.\n> \n> Configure is also cleaned up a bit at the end so the long list of\n> output files is easier to deal with.\n> \n> NOTE: run autoconf.\n> \n> 2. A new script mkMakefile.tcldefs.sh is created from the template\n> mkMakefile.tcldefs.sh.in so that the substitution for tclConfig.sh\n> can be inserted. The script is simplified and renamed to reflect\n> the fact that it is a sh script.\n> \n> NOTE: pl/tcl/mkMakefile.tcldefs should be removed from the tree.\n> \n> 3. The Makefile executes /bin/sh on the new script rather than\n> directly executing the script (hence the name change to make it\n> more explicit).\n> \n> 4. There are shared library problems with the plpgsql/src/Makefile.\n> The port-specific code was taken from the interfaces/tcl?/Makefile\n> so that shared libraries should work on all platforms. This means \n> that that Makefile must be a template for configure.\n> \n> NOTE: pl/plpgsql/src/Makefile should be removed from the tree.\n> \n> NOTE: should we be including libtool in our distribution to simplify\n> shared library (and other stuff) support?\n> \n> Cheers,\n> Brook\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Thu, 8 Oct 1998 19:45:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] PL patches"
},
{
"msg_contents": " Bruce, please apply this too.\n\n>\n> I have had a few problems with the PL stuff recently committed. The\n> following patches fix the problems (i.e., all regression tests pass)\n> in what I hope to be a platform-independent fashion. The accomplish\n> the following:\n\n Thanks for assisting in that area, Brook. It all really needs\n to become platform independent.\n\n There where a few more problems fixed by the patch below.\n\n o configure.in\n\n The tclConfig.sh file here doesn't reside in the tcl\n subdirectory. It is sitting in /usr/lib directly. I\n added another check for that.\n\n NOTE: run autoconf\n\n o pl/tcl/mkMakefile.tcldefs.sh.in\n\n At least one bash I'm using on one of my systems single\n quotes the values in the output of the set command. But\n make interprets CC=gcc -O2 different from CC='gcc -O2'.\n\n o pl/tcl/pltcl.c\n\n Return values where allocated in SPI memory context and\n got freed on SPI_finish().\n\n o pl/pgsql/Makefile.in\n\n David Hartwig had some bad problems compiling PL/pgSQL on\n AIX. I found that the AIX specific mkldexport.sh doesn't\n support multiple object files. I added another linking\n step where all the objects are combined first into\n plpgsql.o and only this one is then linked into a shared\n object.\n\n David (or someone else with access to AIX), could you\n please check if this works now?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\ndiff -cr src.orig/configure.in src/configure.in\n*** src.orig/configure.in\tFri Oct 9 09:13:14 1998\n--- src/configure.in\tFri Oct 9 09:54:18 1998\n***************\n*** 812,817 ****\n--- 812,822 ----\n \t\t\t\tfi\n \t\t\tfi\n \t\tdone\n+ \t\tif test -z \"$TCL_CONFIG_SH\"; then\n+ \t\t\tif test -d \"$dir\" -a -r \"$dir/tclConfig.sh\"; then\n+ \t\t\t\tTCL_CONFIG_SH=$dir/tclConfig.sh\n+ \t\t\tfi\n+ \t\tfi\n \tdone\n \tif test -z \"$TCL_CONFIG_SH\"; then\n \t\tAC_MSG_RESULT(no)\ndiff -cr src.orig/pl/plpgsql/src/Makefile.in src/pl/plpgsql/src/Makefile.in\n*** src.orig/pl/plpgsql/src/Makefile.in\tFri Oct 9 09:13:42 1998\n--- src/pl/plpgsql/src/Makefile.in\tFri Oct 9 09:26:59 1998\n***************\n*** 79,85 ****\n #\n DLOBJ= plpgsql$(DLSUFFIX)\n \n! OBJS=\tpl_parse.o pl_handler.o pl_comp.o pl_exec.o pl_funcs.o\n \n ALL=\t$(DLOBJ)\n \n--- 79,87 ----\n #\n DLOBJ= plpgsql$(DLSUFFIX)\n \n! OBJS=\tplpgsql.o\n! \n! PLOBJS=\tpl_parse.o pl_handler.o pl_comp.o pl_exec.o pl_funcs.o\n \n ALL=\t$(DLOBJ)\n \n***************\n*** 87,92 ****\n--- 89,97 ----\n # Build the shared object\n #\n all: $(ALL)\n+ \n+ $(OBJS):\t$(PLOBJS)\n+ \t$(LD) -r -o $(OBJS) $(PLOBJS)\n \n $(DLOBJ):\t$(OBJS)\n \ndiff -cr src.orig/pl/tcl/mkMakefile.tcldefs.sh.in src/pl/tcl/mkMakefile.tcldefs.sh.in\n*** src.orig/pl/tcl/mkMakefile.tcldefs.sh.in\tFri Oct 9 09:13:41 1998\n--- src/pl/tcl/mkMakefile.tcldefs.sh.in\tFri Oct 9 09:15:44 1998\n***************\n*** 8,12 ****\n exit 1\n fi\n \n! set | grep '^TCL' > Makefile.tcldefs\n exit 0\n--- 8,15 ----\n exit 1\n fi\n \n! for v in `set | grep '^TCL' | sed -e 's/=.*//'` ; do\n! echo $v = `eval \"echo \\\\$$v\"`\n! done >Makefile.tcldefs\n! 
\n exit 0\ndiff -cr src.orig/pl/tcl/pltcl.c src/pl/tcl/pltcl.c\n*** src.orig/pl/tcl/pltcl.c\tFri Oct 9 09:13:41 1998\n--- src/pl/tcl/pltcl.c\tFri Oct 9 10:40:08 1998\n***************\n*** 417,428 ****\n \n \tpltcl_call_level--;\n \n- \t/************************************************************\n- \t * Disconnect from SPI manager\n- \t ************************************************************/\n- \tif (SPI_finish() != SPI_OK_FINISH)\n- \t\telog(ERROR, \"pltcl: SPI_finish() failed\");\n- \n \treturn retval;\n }\n \n--- 417,422 ----\n***************\n*** 731,736 ****\n--- 725,739 ----\n \t\tsiglongjmp(Warn_restart, 1);\n \t}\n \n+ \t/************************************************************\n+ \t * Disconnect from SPI manager and then create the return\n+ \t * values datum (if the input function does a palloc for it\n+ \t * this must not be allocated in the SPI memory context\n+ \t * because SPI_finish would free it).\n+ \t ************************************************************/\n+ \tif (SPI_finish() != SPI_OK_FINISH)\n+ \t\telog(ERROR, \"pltcl: SPI_finish() failed\");\n+ \n \tretval = (Datum) (*fmgr_faddr(&prodesc->result_in_func))\n \t\t(pltcl_safe_interp->result,\n \t\t prodesc->result_in_elem,\n***************\n*** 1051,1058 ****\n \t * The return value from the procedure might be one of\n \t * the magic strings OK or SKIP or a list from array get\n \t ************************************************************/\n! \tif (strcmp(pltcl_safe_interp->result, \"OK\") == 0)\n \t\treturn rettup;\n \tif (strcmp(pltcl_safe_interp->result, \"SKIP\") == 0)\n \t{\n \t\treturn (HeapTuple) NULL;;\n--- 1054,1065 ----\n \t * The return value from the procedure might be one of\n \t * the magic strings OK or SKIP or a list from array get\n \t ************************************************************/\n! \tif (SPI_finish() != SPI_OK_FINISH)\n! \t\telog(ERROR, \"pltcl: SPI_finish() failed\");\n! \n! 
\tif (strcmp(pltcl_safe_interp->result, \"OK\") == 0) {\n \t\treturn rettup;\n+ \t}\n \tif (strcmp(pltcl_safe_interp->result, \"SKIP\") == 0)\n \t{\n \t\treturn (HeapTuple) NULL;;\n***************\n*** 1309,1315 ****\n \tint\t\t\tloop_rc;\n \tint\t\t\tntuples;\n \tHeapTuple *tuples;\n! \tTupleDesc\ttupdesc;\n \tsigjmp_buf\tsave_restart;\n \n \tchar\t *usage = \"syntax error - 'SPI_exec \"\n--- 1316,1322 ----\n \tint\t\t\tloop_rc;\n \tint\t\t\tntuples;\n \tHeapTuple *tuples;\n! \tTupleDesc\ttupdesc = NULL;\n \tsigjmp_buf\tsave_restart;\n \n \tchar\t *usage = \"syntax error - 'SPI_exec \"\n",
"msg_date": "Fri, 9 Oct 1998 11:00:21 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "PL patches (one more)"
},
{
"msg_contents": "> o pl/tcl/mkMakefile.tcldefs.sh.in\n> \n> At least one bash I'm using on one of my systems single\n> quotes the values in the output of the set command. But\n> make interprets CC=gcc -O2 different from CC='gcc -O2'.\n\nistm that perhaps\n\n make CC=gcc CFLAGS+=-O2\n\nwould be the best choice for achieving this. (And it works :).\n\n - Tom\n",
"msg_date": "Fri, 09 Oct 1998 13:46:46 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PL patches (one more)"
},
{
"msg_contents": ">\n> > o pl/tcl/mkMakefile.tcldefs.sh.in\n> >\n> > At least one bash I'm using on one of my systems single\n> > quotes the values in the output of the set command. But\n> > make interprets CC=gcc -O2 different from CC='gcc -O2'.\n>\n> istm that perhaps\n>\n> make CC=gcc CFLAGS+=-O2\n>\n> would be the best choice for achieving this. (And it works :).\n>\n> - Tom\n>\n\n Right - but that's not the point.\n\n If (as it is on one of my systems) the shells set command\n outputs\n\n TCL_LIBS='-ldl -lieee -lm'\n\n instead of\n\n TCL_LIBS=-ldl -lieee -lm\n\n and we put this exactly into the Makefile.tcldefs, then gmake\n will put the whole string into one single argv element in the\n linker call. But then the linker will not find the library\n \"libdl -lieee -lm.a\" or it's shared version.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 9 Oct 1998 16:07:21 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PL patches (one more)"
},
{
"msg_contents": "Applied.\n\n\n> Bruce, please apply this too.\n> \n> >\n> > I have had a few problems with the PL stuff recently committed. The\n> > following patches fix the problems (i.e., all regression tests pass)\n> > in what I hope to be a platform-independent fashion. The accomplish\n> > the following:\n> \n> Thanks for assisting in that area, Brook. It all really needs\n> to become platform independent.\n> \n> There where a few more problems fixed by the patch below.\n> \n> o configure.in\n> \n> The tclConfig.sh file here doesn't reside in the tcl\n> subdirectory. It is sitting in /usr/lib directly. I\n> added another check for that.\n> \n> NOTE: run autoconf\n> \n> o pl/tcl/mkMakefile.tcldefs.sh.in\n> \n> At least one bash I'm using on one of my systems single\n> quotes the values in the output of the set command. But\n> make interprets CC=gcc -O2 different from CC='gcc -O2'.\n> \n> o pl/tcl/pltcl.c\n> \n> Return values where allocated in SPI memory context and\n> got freed on SPI_finish().\n> \n> o pl/pgsql/Makefile.in\n> \n> David Hartwig had some bad problems compiling PL/pgSQL on\n> AIX. I found that the AIX specific mkldexport.sh doesn't\n> support multiple object files. I added another linking\n> step where all the objects are combined first into\n> plpgsql.o and only this one is then linked into a shared\n> object.\n> \n> David (or someone else with access to AIX), could you\n> please check if this works now?\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Fri, 9 Oct 1998 12:43:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] PL patches (one more)"
}
] |
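The failure mode Jan describes in this thread — a single-quoted value copied verbatim from `set` output into Makefile.tcldefs ends up as one argv element on the link line — can be modeled with a small Python sketch of the shell's word splitting. This is only an illustration of the quoting problem, not part of the patch itself; `linker_argv` is a hypothetical helper name.

```python
import shlex

# What `set` prints on a bash that quotes values, vs. one that doesn't.
quoted = "TCL_LIBS='-ldl -lieee -lm'"
unquoted = "TCL_LIBS=-ldl -lieee -lm"

def linker_argv(makefile_line):
    """Roughly what happens to a link command built from Makefile.tcldefs:
    the value is pasted into a shell command line, and the shell's word
    splitting (with quote removal) decides how many argv elements the
    linker sees."""
    value = makefile_line.split("=", 1)[1]
    return shlex.split(value)

print(linker_argv(quoted))    # ['-ldl -lieee -lm']  <- one bogus "library"
print(linker_argv(unquoted))  # ['-ldl', '-lieee', '-lm']
```

The fixed mkMakefile.tcldefs.sh in the patch avoids this by re-echoing each variable through the shell, so the quotes are stripped before the value lands in the Makefile.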
[
{
"msg_contents": "I have made more functions status or NOT_USED to reflect our source tree\nchanges in the past 6/9 months.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Thu, 8 Oct 1998 20:42:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "more static/NOT_USED functions"
},
{
"msg_contents": "> I have made more functions static or NOT_USED to reflect our source tree\n ^^^^^^\n> changes in the past 6/9 months.\n\nOops.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Thu, 8 Oct 1998 21:10:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] more static/NOT_USED functions"
}
] |
[
{
"msg_contents": "Just curious, what functions did we finally settle on for this type?\nI ask because I just thought of another function that might be useful.\n\nselect block(cidr1, cidr2);\n\nto return 1 if cidr2 is part of the cidr1 block, -1 if cidr1 is part\nof the cidr2 block and 0 if neither.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 9 Oct 1998 00:15:23 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "CIDR type and functions"
},
{
"msg_contents": "> Just curious, what functions did we finally settle on for this type?\n> I ask because I just thought of another function that might be useful.\n> \n> select block(cidr1, cidr2);\n> \n> to return 1 if cidr2 is part of the cidr1 block, -1 if cidr1 is part\n> of the cidr2 block and 0 if neither.\n> \n\nCheck \\df and \\fo. There is a >> and << operators that do subnet\ntesting.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Fri, 9 Oct 1998 01:00:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CIDR type and functions"
},
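The subnet test behind the `<<` and `>>` operators Bruce mentions (and D'Arcy's proposed `block()` function) is a masked comparison of the two addresses under the wider block's netmask. A minimal Python sketch of the idea — the `parse` helper is hypothetical and not the parser the backend actually uses:

```python
def parse(cidr):
    """'a.b.c.d/len' -> (32-bit int, masklen).  Hypothetical helper for
    illustration; missing octets are zero-padded, a missing /len means /32."""
    addr, _, bits = cidr.partition('/')
    octets = (addr.split('.') + ['0', '0', '0'])[:4]
    n = 0
    for o in octets:
        n = (n << 8) | int(o)
    return n, int(bits) if bits else 32

def contains(outer, inner):
    """True if inner falls entirely inside the outer block."""
    oval, obits = parse(outer)
    ival, ibits = parse(inner)
    if ibits < obits:  # a wider block can never fit inside a narrower one
        return False
    mask = (0xFFFFFFFF << (32 - obits)) & 0xFFFFFFFF
    return (oval & mask) == (ival & mask)

def block(cidr1, cidr2):
    """D'Arcy's proposed three-way containment test."""
    if contains(cidr1, cidr2):
        return 1
    if contains(cidr2, cidr1):
        return -1
    return 0

print(block('192.3.4.0/24', '192.3.4.128/25'))  # 1
print(block('192.3.4.128/25', '192.3.4.0/24'))  # -1
print(block('192.3.4.0/24', '192.3.5.0/24'))    # 0
```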
{
"msg_contents": "Thus spake Bruce Momjian\n> Check \\df and \\fo. There is a >> and << operators that do subnet\n> testing.\n\n[I assume you meant \\do]\n\nCool. I checked out what other functions as well. I don't see a couple\nthat were discussed early on though. Specifically, how about functions\nto extract the host, the network or the netmask from an address?\n\nHere are the functions I had suggested.\n\n netmask('192.3.4.5/24::cidr') == 255.255.255.0\n masklen('192.3.4.5/24::cidr') == 24\n host('192.3.4.5/24::cidr') == 192.3.4.5\n network('192.3.4.5/24::cidr') == 192.3.4.0\n\nand perhaps;\n\n class('192.3.4.5/24::cidr') == C\n classnet('192.3.4.5/24::cidr') == 192.3.4\n\nCan I help code up some of this stuff?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 9 Oct 1998 14:16:09 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CIDR type and functions"
},
{
"msg_contents": "> Thus spake Bruce Momjian\n> > Check \\df and \\fo. There is a >> and << operators that do subnet\n> > testing.\n> \n> [I assume you meant \\do]\n> \n> Cool. I checked out what other functions as well. I don't see a couple\n> that were discussed early on though. Specifically, how about functions\n> to extract the host, the network or the netmask from an address?\n> \n> Here are the functions I had suggested.\n> \n> netmask('192.3.4.5/24::cidr') == 255.255.255.0\n> masklen('192.3.4.5/24::cidr') == 24\n> host('192.3.4.5/24::cidr') == 192.3.4.5\n> network('192.3.4.5/24::cidr') == 192.3.4.0\n> \n> and perhaps;\n> \n> class('192.3.4.5/24::cidr') == C\n> classnet('192.3.4.5/24::cidr') == 192.3.4\n> \n> Can I help code up some of this stuff?\n\nYes, we need those. Code them up, and I will add them as standard\ntypes.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Fri, 9 Oct 1998 15:35:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CIDR type and functions"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > Here are the functions I had suggested.\n> > \n> > netmask('192.3.4.5/24::cidr') == 255.255.255.0\n> > masklen('192.3.4.5/24::cidr') == 24\n> > host('192.3.4.5/24::cidr') == 192.3.4.5\n> > network('192.3.4.5/24::cidr') == 192.3.4.0\n\nI forgot\n broadcast('192.3.4.5/24::cidr') == 192.3.4.255\n\n> > and perhaps;\n> > \n> > class('192.3.4.5/24::cidr') == C\n> > classnet('192.3.4.5/24::cidr') == 192.3.4\n\nI'll leave these for the moment as I'm not sure what to do with invalid\nclassful addresses such as 192.3.4.5/16. I'll bring it up again after\n6.4 is released.\n\n> > Can I help code up some of this stuff?\n> \n> Yes, we need those. Code them up, and I will add them as standard\n> types.\n\nOK, I started but I could use a small change to inet_net_ntop.c which\nI think impliments something we discussed anyway. I just need to know\nif this is going to affect anything else. Basically it allows for\nthe number of bits to be -1 which is interpreted as a host with\nunspecified netmask. The changes cause all outputs to leave of\nthe netmask part if it is -1. I realize that I will have to change\nsome of the other functions in inet.c but is there anything else\nthat might bite me?\n\nIf there is no problem I'll resubmit this to the patches list.\n\n*** ../src.original/./backend/utils/adt/inet_net_ntop.c\tFri Oct 9 17:37:27 1998\n--- ./backend/utils/adt/inet_net_ntop.c\tFri Oct 9 17:39:05 1998\n***************\n*** 85,90 ****\n--- 85,97 ----\n \tchar\t *t;\n \tu_int\t\tm;\n \tint\t\t\tb;\n+ \tint\t\t\tprint_bits = 1;\n+ \n+ \tif (bits == -1)\n+ \t{\n+ \t\tbits = 32;\n+ \t\tprint_bits = 0;\n+ \t}\n \n \tif (bits < 0 || bits > 32)\n \t{\n***************\n*** 129,137 ****\n \t}\n \n \t/* Format CIDR /width. */\n! \tif (size < sizeof \"/32\")\n! \t\tgoto emsgsize;\n! \tdst += SPRINTF((dst, \"/%u\", bits));\n \treturn (odst);\n \n emsgsize:\n--- 136,147 ----\n \t}\n \n \t/* Format CIDR /width. */\n! \tif (print_bits)\n! \t{\n! 
\t\tif (size < sizeof \"/32\")\n! \t\t\tgoto emsgsize;\n! \t\tdst += SPRINTF((dst, \"/%u\", bits));\n! \t}\n \treturn (odst);\n \n emsgsize:\n\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 9 Oct 1998 17:51:41 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CIDR type and functions"
},
{
"msg_contents": "D'Arcy J.M. Cain:\n\n> netmask('192.3.4.5/24::cidr') == 255.255.255.0\n> masklen('192.3.4.5/24::cidr') == 24\n> host('192.3.4.5/24::cidr') == 192.3.4.5\n> network('192.3.4.5/24::cidr') == 192.3.4.0\n> broadcast('192.3.4.5/24::cidr') == 192.3.4.255\n> and perhaps;\n> class('192.3.4.5/24::cidr') == C\n> classnet('192.3.4.5/24::cidr') == 192.3.4\n\nBruce Momjian:\n\n> Yes, we need those. Code them up, and I will add them as standard\n> types.\n\nThis is really all contrary to the concept of CIDR notation. While I\ndid end up calling the type INET instead of CIDR (which seemed to be\nthe consensus when the discussion was going on, because INET would be\nmore intuitively understandable by users than CIDR), I still stuck to\nthe behavior that Paul Vixie wanted: CIDR notation and representation.\nIt seemed to me that this was a good compromise, as it gives us a\nclean, standards-based notation where host addresses are a functioning\nspecial case. However, networks, netmasks, broadcast addresses and\ninterface addresses all need to be stored in separate INET values.\n\nIf what we actually want is what D'Arcy shows above, then we should\ndrop CIDR notation, stop using Paul Vixie's functions for dealing with\nthe same, and change the storage format to include more information,\nand a different and more flexible input format. Just off the top of\nmy head, 'inet 158.37.96.1 netmask 0xffffff00 broadcast 158.37.96.255'\nwould be a cool input (and output) format to support. :-)\n\nD'Arcy J.M. Cain:\n\n> OK, I started but I could use a small change to inet_net_ntop.c [...]\n\nI have to side with Paul on this one: that file and its companion with\nthe similar name were pulled in from the BIND distribution simply for\nconvenience, so we wouldn't have to insist that people must install\nBIND from source to be able to install PostgreSQL. We really, really\nshouldn't change them. 
If what we want is not what they implement, we\nshould drop them and implement what we want.\n\nMy vote (surprise! surprise!) goes toward keeping what we've got right\nnow. As Paul says, there's an RFC describing the behavior of the data\ntype as it now stands, and it's not as if it were difficult to do all\nthe other things one might want to do with it.\n\nHowever, adding utility functions to do some of the things D'Arcy\nsuggests sounds good. I would think that they should be defined to\ntake INET parameters and return INET results, and the most useful ones\nseem to me to be the following three (shown with various inputs):\n\nnetmask('158.37.96')\t\t==> '255.255.255.0/32'\nnetmask('158.37.96/21')\t\t==> '255.255.248.0/32'\nbroadcast('158.37')\t\t==> '158.37.255.255/32'\nbroadcast('158.37.96')\t\t==> '158.37.96.255/32'\nbroadcast('158.37.96/21')\t==> '158.37.103.255/32'\nnetwork('158.37.96.15')\t\t==> '158.37/16'\n\nNote that the last one has to assume the old class A/B/C/D/E stuff.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "10 Oct 1998 13:57:54 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CIDR type and functions"
},
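The convenience functions being discussed are straightforward bit arithmetic on the (address, masklen) pair. As a rough illustration only — the real versions would be C functions in the backend, and the names here simply mirror D'Arcy's proposal — a Python sketch reproducing his sample values:

```python
def _split(cidr):
    # hypothetical parser for 'a.b.c.d/len'; assumes all four octets given
    addr, _, bits = cidr.partition('/')
    n = 0
    for o in addr.split('.'):
        n = (n << 8) | int(o)
    return n, int(bits) if bits else 32

def _dotted(n):
    return '.'.join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

def masklen(cidr):
    return _split(cidr)[1]

def netmask(cidr):
    bits = masklen(cidr)
    return _dotted((0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF)

def host(cidr):
    return _dotted(_split(cidr)[0])

def network(cidr):
    n, bits = _split(cidr)
    m = (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF
    return _dotted(n & m)

def broadcast(cidr):
    n, bits = _split(cidr)
    m = (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF
    return _dotted(n | (m ^ 0xFFFFFFFF))

print(netmask('192.3.4.5/24'))    # 255.255.255.0
print(masklen('192.3.4.5/24'))    # 24
print(host('192.3.4.5/24'))       # 192.3.4.5
print(network('192.3.4.5/24'))    # 192.3.4.0
print(broadcast('192.3.4.5/24'))  # 192.3.4.255
```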
{
"msg_contents": "Thus spake Tom Ivar Helbekkmo\n> D'Arcy J.M. Cain:\n> > netmask('192.3.4.5/24::cidr') == 255.255.255.0\n> > masklen('192.3.4.5/24::cidr') == 24\n> > host('192.3.4.5/24::cidr') == 192.3.4.5\n> > network('192.3.4.5/24::cidr') == 192.3.4.0\n> > broadcast('192.3.4.5/24::cidr') == 192.3.4.255\n> > and perhaps;\n> > class('192.3.4.5/24::cidr') == C\n> > classnet('192.3.4.5/24::cidr') == 192.3.4\n> \n> Bruce Momjian:\n> \n> > Yes, we need those. Code them up, and I will add them as standard\n> > types.\n> \n> This is really all contrary to the concept of CIDR notation. While I\n> did end up calling the type INET instead of CIDR (which seemed to be\n> the consensus when the discussion was going on, because INET would be\n> more intuitively understandable by users than CIDR), I still stuck to\n\nI thought there was also a suggestion that it would handle more than\njust straight CIDR.\n\n> the behavior that Paul Vixie wanted: CIDR notation and representation.\n> It seemed to me that this was a good compromise, as it gives us a\n> clean, standards-based notation where host addresses are a functioning\n> special case. However, networks, netmasks, broadcast addresses and\n> interface addresses all need to be stored in separate INET values.\n\nIf you mean that a particular column in a table can only hold one\nof the above types, I agree. However, I see no problem with a basic\ntype that can handle conceptually different objects. I made an analogy\nin an earlier post to integers. They can be ordinals (ID, part number,\netc.) or they can be cardinals such as quantity. We would never\nconsider creating a field that sometimes has a part number and sometimes\nhas a quantity but the same base type can be used for both. 
Similarly,\nthere should be a type that handles different types associated with\nIP numbers.\n\n> If what we actually want is what D'Arcy shows above, then we should\n> drop CIDR notation, stop using Paul Vixie's functions for dealing with\n> the same, and change the storage format to include more information,\n> and a different and more flexible input format. Just off the top of\n> my head, 'inet 158.37.96.1 netmask 0xffffff00 broadcast 158.37.96.255'\n> would be a cool input (and output) format to support. :-)\n\nI'm not sure I understand this. Given a host and a netmask I can infer\nthe network and the broadcast. Why specify them separately? That's\nwhat my functions were for.\n\n> D'Arcy J.M. Cain:\n> > OK, I started but I could use a small change to inet_net_ntop.c [...]\n> I have to side with Paul on this one: that file and its companion with\n> the similar name were pulled in from the BIND distribution simply for\n> convenience, so we wouldn't have to insist that people must install\n> BIND from source to be able to install PostgreSQL. We really, really\n> shouldn't change them. If what we want is not what they implement, we\n> should drop them and implement what we want.\n\nOr implement both.\n\n> My vote (surprise! surprise!) goes toward keeping what we've got right\n> now. As Paul says, there's an RFC describing the behavior of the data\n> type as it now stands, and it's not as if it were difficult to do all\n> the other things one might want to do with it.\n> \n> However, adding utility functions to do some of the things D'Arcy\n> suggests sounds good. 
I would think that they should be defined to\n> take INET parameters and return INET results, and the most useful ones\n> seem to me to be the following three (shown with various inputs):\n> \n> netmask('158.37.96')\t\t==> '255.255.255.0/32'\n> netmask('158.37.96/21')\t\t==> '255.255.248.0/32'\n> broadcast('158.37')\t\t==> '158.37.255.255/32'\n> broadcast('158.37.96')\t\t==> '158.37.96.255/32'\n> broadcast('158.37.96/21')\t==> '158.37.103.255/32'\n> network('158.37.96.15')\t\t==> '158.37/16'\n\nI'm not sure why the /32 on the end of the netmask and broadcast functions.\nThese are convenience functions I thought. Of course, I am also assuming\nthat an unspecified netmask has a special designation (-1) so it would\nbe possible to convert back and forth.\n\nThe two you left out (ignoring class functions for the moment) are\nsimple convenience functions too. The masklen function simply pulls\nthe bits value out of the underlying structure and the host function\ndrops the netmask part of the output. I suppose there should also be\na function like this.\n\ncombine('158.37.96', 24) ==> '158.37.96/24'\n\nIt would be nice if this also worked.\n\ncombine('158.37.96', '255.255.255.0') ==> '158.37.96/24'\n\nBut we have to decide what to do with invalid inputs like '255.255.0.255'.\nDo we raise an exception, treat it as /16 or something else?\n\nAssuming that casting would make the following coercions, the rest, other\nthan the last one, should work as I envision it.\n\n'158.37.96' ==> '158.37.96.0/24'\n'158.37.96/21' ==> '158.37.96.0/21'\n'158.37' ==> '158.37.0.0/16'\n'158.37.96' ==> '158.37.96.0/24'\n'158.37.96/21' ==> '158.37.96.0/21'\n\nThe coercion rule is pretty simple. If there are less than 4 octets\nthen pad it out to zeros and make the masklen 32 - (8 * X) where X\nis the number of zeroes added.\n\nHmm. There's one place that separate types for networks and hosts\nwould be useful. 
You could disallow those coercions on the host type.\nThat way you catch errors like people entering '158.37.11' when they\nmeant to type '158.37.1.1' but otherwise they would be treated the\nsame.\n\n> Note that the last one has to assume the old class A/B/C/D/E stuff.\n\nNot unreasonable. I was simply going to ignore class specific stuff\nuntil after 6.4. However, I don't see a big problem with it. Assume\nthe following coercion.\n\n'158.37.96.15' ==> '158.37.96.15/-1'\n\nThen we make the network function use the old classful rules when the\nmask length is unspecified (-1).\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 10 Oct 1998 09:58:08 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CIDR type and functions"
}
] |
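The coercion rule spelled out at the end of this thread — pad missing octets with zeros, set the masklen to 32 - 8 * X where X is the number of octets added, and treat a full four-octet address without an explicit /len as having the special "unspecified" length -1 — can be sketched as follows. This is only an illustration of the proposal, not the backend's actual parser; `coerce_cidr` and `classful_len` are hypothetical names, with the latter showing the old class A/B/C fallback mentioned for unspecified masks.

```python
def coerce_cidr(text):
    """D'Arcy's proposed coercion: pad missing octets with zeros and set
    masklen to 32 - 8 * X (X = octets added); a full four-octet address
    without an explicit /len keeps the special 'unspecified' length -1."""
    addr, _, bits = text.partition('/')
    octets = addr.split('.')
    pad = 4 - len(octets)
    full = '.'.join(octets + ['0'] * pad)
    if bits:
        return full, int(bits)
    return full, (32 - 8 * pad) if pad else -1

def classful_len(first_octet):
    """Old class A/B/C boundaries, used only when the masklen is the
    unspecified -1 (the classful assumption noted in the thread)."""
    if first_octet < 128:
        return 8
    if first_octet < 192:
        return 16
    return 24

print(coerce_cidr('158.37.96'))     # ('158.37.96.0', 24)
print(coerce_cidr('158.37'))        # ('158.37.0.0', 16)
print(coerce_cidr('158.37.96/21'))  # ('158.37.96.0', 21)
print(coerce_cidr('158.37.96.15'))  # ('158.37.96.15', -1)
print(classful_len(158))            # 16, so network('158.37.96.15') => 158.37/16
```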
[
{
"msg_contents": "Additions\n---------\ntest new cidr/IP address type(Tom Helbekkmo)\ncomplete rewrite system changes(Jan)\nCREATE TABLE test (x text, s serial) fails if no database creation permission\nregression test all platforms\nvacuum crash\n\nSerious Items\n------------\nchange pg args for platforms that don't support argv changes\n\t(setproctitle()?, sendmail hack?)\n\nDocs\n----\nman pages/sgml synchronization\ngenerate html/postscript documentation\nmake sure all changes are documented properly\n\nMinor items\n-----------\ncnf-ify still can exhaust memory, make SET KSQO more generic\npermissions on indexes: what do they do? should it be prevented?\nmulti-verion concurrency control - work-in-progress for 6.5\nimprove reporting of syntax errors by showing location of error in query\nuse index with constants on functions\nallow chaining of pages to allow >8k tuples\nallow multiple generic operators in expressions without the use of parentheses\ndocument/trigger/rule so changes to pg_shadow create pg_pwd\nlarge objects orphanage\nimprove group handling\nno min/max for oid type\nimprove PRIMARY KEY handling\ngenerate postmaster pid file and remove flock/fcntl lock code\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Fri, 9 Oct 1998 00:29:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Open 6.4 items"
}
] |
[
{
"msg_contents": "> Most of us aren't NT propellerheads, but now that a port might be\n> available I'm sure the mailing lists will get more folks who \n> are. Then a\n> tremendous step forward such as you've take will be greeted with more\n> enthusiasm :)\n\nMy primary development is Linux/i386 but NT with Cugwin is almost Unix -\ncommand line utilities, gcc, ... And the clients want M$...\n\n> > - many other tests failed due to not having the dynamicly \n> loaded code \n> > in DLLs\n> \n> Is DLL support so different that it will never work, or have \n> you not had\n> time to look at it?\n\nIt is only not so simple to create shared library in Cygwin as in Linux.\nThere are some makefiles that creates DLL \"automagicly\". It needs 5\nstep, 2 or 3 temporary files but it possible... There can be \"DLL in\nCygwin\" gurus ...\n\n\t\t\t\tDan Horak\n",
"msg_date": "Fri, 9 Oct 1998 13:15:01 +0200 ",
"msg_from": "Horak Daniel <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] NT port of PGSQL - success"
}
] |
[
{
"msg_contents": "> > I would like to list NT as being \"supported with patches, \n> see web site\"\n> > for the next release (or \"partially supported...\"). Is it \n> premature to\n> > do that?\n> \n> Good questions. I think for NT, we may have to just supply a \n> binary on\n> the web site, as I think the tools need for the port may not be\n> available for normal NT sites. That is OK, because there is \n> only on NT\n> binary (for i386, at least, and NT 4.0).\nThe tools are from:\nCygwin B19 (http://www.cygnus.com/misc/gnu-win32/) - it makes Unix from\nWindows :-)\nEGCS 1.1 (http://www.xraylith.wisc.edu/~khan/software/gnu-win32/)\nCygIPC library v 1.01 from Ludovic Lange\n(http://www.multione.capgemini.fr/tools/pack_ipc/)\nSome headers missing in Cygwin I took from Linux/Intel - can be\ndistributed as patch for standard Cygwin enviroment\n\nI have the files (~0.9 MB) created after \"make install\" ready but I\ndon't know now what else is needed (in addition to cygwinb19.dll,\n/etc/passwd and SYSV IPC stuff) - it has to be first experimentaly found\n:-)\n\n\t\t\t\tDan Horak\n",
"msg_date": "Fri, 9 Oct 1998 13:29:05 +0200 ",
"msg_from": "Horak Daniel <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] NT port of PGSQL - success]"
}
] |
[
{
"msg_contents": "> I just built the latest CVSup on my Solaris 2.7\n> beta machine using the 5.0 64bit compilers.\n\nSo what is the latest CVSup? I'm running v15.2 (current back when\nPostgreSQL started using it) but haven't kept in touch with the\ndistribution...\n\n - Tom\n",
"msg_date": "Fri, 09 Oct 1998 13:54:50 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] plpgsql patch.."
}
] |
[
{
"msg_contents": "I am sorry, I meant that I built the lastest version of\npostgres that I cvsuped this morning. I was just trying to\nshow people that it looks like very little if any changes\nwill need to be made to compile and run on the 64bit Solaris 2.7.\n\nHowever, I did build the lastes CVSup and it is version 15.4.2.\n\nMatt\n>> I just built the latest CVSup on my Solaris 2.7\n>> beta machine using the 5.0 64bit compilers.\n>\n>So what is the latest CVSup? I'm running v15.2 (current back when\n>PostgreSQL started using it) but haven't kept in touch with the\n>distribution...\n>\n> - Tom\n\n----------\nMatthew C. Aycock\nOperating Systems Analyst/Admin, Senior\nDept Math/CS\nEmory University, Atlanta, GA \nInternet: [email protected] \t\t\n\n\n",
"msg_date": "Fri, 9 Oct 1998 09:58:10 -0400 (EDT)",
"msg_from": "\"Matthew C. Aycock\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] plpgsql patch.."
}
] |
[
{
"msg_contents": "How are people handling the fact that libpq is dynamic, and psql needs\nto find it. I don't see people using -rpath as a link option.\n\nAre people setting LD_LIBRARY_PATH to the PostgreSQL library directory?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Fri, 9 Oct 1998 13:10:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "dynamic libraries"
},
{
"msg_contents": " How are people handling the fact that libpq is dynamic, and psql needs\n to find it. I don't see people using -rpath as a link option.\n\n Are people setting LD_LIBRARY_PATH to the PostgreSQL library directory?\n\nUsing ldconfig and /etc/ld.so.conf:\n\n # ld.so.conf\n /usr/X11R6/lib\n /usr/pkg/lib\n /usr/local/lib\n /usr/local/pgsql/lib\n\nThis is on NetBSD 1.3.2.\n\nCheers,\nBrook\n",
"msg_date": "Fri, 9 Oct 1998 11:30:41 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": ">\n> How are people handling the fact that libpq is dynamic, and psql needs\n> to find it. I don't see people using -rpath as a link option.\n>\n> Are people setting LD_LIBRARY_PATH to the PostgreSQL library directory?\n>\n\n I have a /usr/local/pgsql/lib line in the /etc/ld.so.conf\n file and run ldconfig(8) when I get any error from the\n dynamic linker (e.g. pg_id fails to load libpq.so.2 :-).\n Usually the problem disappears then.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 9 Oct 1998 19:44:55 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "> How are people handling the fact that libpq is dynamic, and psql needs\n> to find it. I don't see people using -rpath as a link option.\n> \n> Are people setting LD_LIBRARY_PATH to the PostgreSQL library directory?\n> \n> Using ldconfig and /etc/ld.so.conf:\n> \n> # ld.so.conf\n> /usr/X11R6/lib\n> /usr/pkg/lib\n> /usr/local/lib\n> /usr/local/pgsql/lib\n> \n> This is on NetBSD 1.3.2.\n\nELF shared libraries are new to BSDI 4.0, so I was a little confused. I\neditied ld.so.conf, but did not know I needed to run ldconfig, which I\nhave done figured out, and it works.\n\nMy larger question is why we don't get more reports of problems like\nthis. Do novices just know to go edit ld.so.conf, and run ldconfig?\n\nIt is probably in the Linux FAQ, but is everyone reading that when they\nget the error?\n\nI am trying to figure out how to deal with this for BSDI 4.0 users. I\nam sure they are going to be confused.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Fri, 9 Oct 1998 13:46:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "On Fri, 9 Oct 1998, Bruce Momjian wrote:\n\n> How are people handling the fact that libpq is dynamic, and psql needs\n> to find it. I don't see people using -rpath as a link option.\n> \n> Are people setting LD_LIBRARY_PATH to the PostgreSQL library directory?\n\nI've got the directory defined in /etc/ld.co.conf, and ran ldconfig. Works\nfine since then.\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Fri, 9 Oct 1998 19:48:30 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "On UnixWare 7.0 (and Solaris systems, also) I export LD_RUN_PATH which \ncontains the paths to the dynamic libraries. When the linker runs, it \nincorporates the paths into the output file so that LD_LIBRARY_PATH is not \nneeded to find the needed dynamic libraries.\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n",
"msg_date": "Fri, 09 Oct 1998 22:24:16 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries "
},
{
"msg_contents": "On Fri, 9 Oct 1998, Billy G. Allie wrote:\n> On UnixWare 7.0 (and Solaris systems, also) I export LD_RUN_PATH which \n> contains the paths to the dynamic libraries. When the linker runs, it \n> incorporates the paths into the output file so that LD_LIBRARY_PATH is not \n> needed to find the needed dynamic libraries.\n\nDamnit, I'm really getting annoyed by all of this... An ELF system should\nnot be using ldconfig or LD_LIBRARY_PATH to find its libraries.\n\nELF executables are told where to find their binaries at compile time. On\nSolaris this involves using '-R/path/to/libs' to add the a path to be\ncompiled into the binary. I believe this works for Linux/ELF as well.\nFreeBSD/ELF is using -rpath I think, but someone should check. (I'm\nconverting my 3.0-current system to ELF at the moment but its only a\n486dx50 so its kind of slow.)\n\nIf PostgreSQL is not doing this IT IS BROKEN.\n\nRegardless, do whatever you want; I keep on fixing this myself when I\ncompile new releases so I'm not likely to notice any further brokeness.\n\nHave a good one.\n\n-- \n| Matthew N. Dodd | 78 280Z | 75 164E | 84 245DL | FreeBSD/NetBSD/Sprite/VMS |\n| [email protected] | This Space For Rent | ix86,sparc,m68k,pmax,vax |\n| http://www.jurai.net/~winter | Are you k-rad elite enough for my webpage? |\n\n",
"msg_date": "Mon, 12 Oct 1998 01:57:37 -0400 (EDT)",
"msg_from": "\"Matthew N. Dodd\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries "
},
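The compiled-in search path Matthew describes would look roughly like this at link time (a sketch only; the paths, and the `-Wl,` spelling for GNU toolchains, are illustrative and not taken from the PostgreSQL makefiles):

```shell
# Solaris ld: -R embeds a run-time library search path in the binary
cc -o psql psql.o -L/usr/local/pgsql/lib -R/usr/local/pgsql/lib -lpq

# GNU ld (Linux/FreeBSD ELF): the equivalent is -rpath, usually
# passed through the compiler driver as -Wl,-rpath,<dir>
cc -o psql psql.o -L/usr/local/pgsql/lib -Wl,-rpath,/usr/local/pgsql/lib -lpq
```

With either flag the directory is recorded in the executable's dynamic section, so neither LD_LIBRARY_PATH nor an ldconfig run is needed to locate libpq at run time.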
{
"msg_contents": "> > On UnixWare 7.0 (and Solaris systems, also) I export LD_RUN_PATH \n> > which contains the paths to the dynamic libraries. When the linker \n> > runs, it incorporates the paths into the output file so that \n> > LD_LIBRARY_PATH is not needed to find the needed dynamic libraries.\n<gratuitous griping snipped :)>\n> ...An ELF system should\n> not be using ldconfig or LD_LIBRARY_PATH to find its libraries.\n\nIt sounds like you have a strong opinion on this, but I'll need more\ninfo to help convince/educate me...\n\n> ELF executables are told where to find their binaries at compile time. \n> On Solaris this involves using '-R/path/to/libs' to add a path to \n> be compiled into the binary. I believe this works for Linux/ELF as \n> well. FreeBSD/ELF is using -rpath I think, but someone should check. \n> (I'm converting my 3.0-current system to ELF at the moment but its \n> only a 486dx50 so its kind of slow.)\n\nA nice feature of putting libraries into /etc/ld.so.conf is that the\nlibraries are found automatically as a system resource. Hard-linking the\npaths (or possible paths) in the executable seems to be a bit\nrestrictive.\n\nSince ld.so.conf is a very useful feature for linking with at least some\nkinds of libraries, perhaps you can suggest or point to the guidelines a\nsystem builder would use to choose what mechanism to use for a specific\ncase? I could image guidelines that would say to put system-wide\nresources into ld.so.conf, and user-installed resources into\nLD_LIBRARY_PATH or the \"-R/r\" flags.\n\nThe recent bump in libpq version number (entirely appropriate imho)\nillustrated the downside to using ld.so.conf in that my root account had\nto rerun ldconfig to make the new library known to the system. otoh it\nwas really easy to do...\n\n - Tom\n",
"msg_date": "Mon, 12 Oct 1998 14:58:28 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "On Mon, 12 Oct 1998, Thomas G. Lockhart wrote:\n> A nice feature of putting libraries into /etc/ld.so.conf is that the\n> libraries are found automatically as a system resource. Hard-linking\n> the paths (or possible paths) in the executable seems to be a bit\n> restrictive.\n\nI'm not sure how that is a feature at all. Having loads of junk in your\nlibrary search path really slows things down.\n\nAn ELF system does not have an ld.so.conf. (Note that FreeBSD/ELF does\nhave an ld.so.conf but I believe this is only for transition purposes.)\n\nIf you (the system administrator) install a package, you know where it is\ninstalled. You are able let the binary take care of tracking where its\nlibraries are supposed to be, not the system.\n\n> Since ld.so.conf is a very useful feature for linking with at least some\n> kinds of libraries, perhaps you can suggest or point to the guidelines a\n> system builder would use to choose what mechanism to use for a specific\n> case? I could image guidelines that would say to put system-wide\n> resources into ld.so.conf, and user-installed resources into\n> LD_LIBRARY_PATH or the \"-R/r\" flags.\n\nHaving to set LD_LIBRARY_PATH to make things work is bogus; what if\nsomeone forgets to set it? What if a user can't edit ld.so.conf (ignoring\nthe fact that it won't exist on a real ELF system).\n\nCompiling the information into the binary is much prefered. If for some\nreason you have to move the libraries, using LD_LIBRARY_PATH to keep them\nrunning is a good bandaid until you can recimpile or edit the compiled in\npaths (if your system supports such tools.)\n\n> The recent bump in libpq version number (entirely appropriate imho)\n> illustrated the downside to using ld.so.conf in that my root account had\n> to rerun ldconfig to make the new library known to the system. otoh it\n> was really easy to do...\n\nElf systems have no 'major' version number. On an a.out system you'd get\nsomething like 'libpq.so.1.1'. 
ELF would call this library 'libpq1.so'\nwhich would be a link to 'libpq1.so.1'. If the 'major' number is to be\nchanged (ie: an incompatible interface change was made) you must change\nthe name of the library. For a.out it would become 'libpq.so.2.0' and ELF\n'libpq2.so -> libpq2.so.0'.\n\nAnyhow, in summary, depending on enviornment variables or a hacked linkrer\nthat supports 'ld.so.conf' is a bad thing on a real ELF system. ELF\nprovides for compiled in search paths and they should be used. This\nreduces the additional steps a user must take to have a running system and\ndoes not violate the POLA. Since the compile/build process knows where\nthe install destination will be, nothing prevents it from doing the right\nthing and using '-R' or '-rpath' ld(1) directives to set the search path.\n\nI've done the whole LD_LIBRARY_PATH and it sucks; I had one that was\nnearly a page long. How the heck do you maintain such a thing and make\nsure nobody else introduces a trojaned library that appears earlier in\nyour path?\n\n-- \n| Matthew N. Dodd | 78 280Z | 75 164E | 84 245DL | FreeBSD/NetBSD/Sprite/VMS |\n| [email protected] | This Space For Rent | ix86,sparc,m68k,pmax,vax |\n| http://www.jurai.net/~winter | Are you k-rad elite enough for my webpage? |\n\n",
"msg_date": "Mon, 12 Oct 1998 11:36:07 -0400 (EDT)",
"msg_from": "\"Matthew N. Dodd\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "> Anyhow, in summary, depending on enviornment variables or a hacked linkrer\n> that supports 'ld.so.conf' is a bad thing on a real ELF system. ELF\n> provides for compiled in search paths and they should be used. This\n> reduces the additional steps a user must take to have a running system and\n> does not violate the POLA. Since the compile/build process knows where\n> the install destination will be, nothing prevents it from doing the right\n> thing and using '-R' or '-rpath' ld(1) directives to set the search path.\n\nJust to comment. If we use -R or -rpath, people need to use that for\n_every_ application that uses libpq, etc. That seems like a pain to me.\n\nB1ecause people have not had problems in the past using ld.so.conf, and I\ncan see them having problems with -R or -rpath, I would hesistate to\nchange it, though I can see why some installations would prefer the\n-R/-rpath.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Mon, 12 Oct 1998 12:01:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "On Mon, 12 Oct 1998, Bruce Momjian wrote:\n> Just to comment. If we use -R or -rpath, people need to use that for\n> _every_ application that uses libpq, etc. That seems like a pain to me.\n\nThe alternative is more painful. If PostgreSQL were the only application\npackage installed on a system your LD_LIBRARY_PATH would be really short.\n\n> B1ecause people have not had problems in the past using ld.so.conf, and I\n> can see them having problems with -R or -rpath, I would hesistate to\n> change it, though I can see why some installations would prefer the\n> -R/-rpath.\n\nI'll continue to ignore the fact that some ELF systems do have a\nbastardized runtime linker and use ld.so.conf when I state that ELF\nsystems have no ld.so.conf, so its LD_LIBRARY_PATH or -R/--rpath (I looked\nup the flag finally.)\n\nld.so.conf or ldconfig with various directories on the command line is\nnecessary for a non-ELF system; this is the way you do things. ELF fixes\nthis (the problem is when you have a zillion different directories to\nsearch for libraries in and it starts taking a long time to start\ndynamically linked programs on a loaded system. I'll assume everyoen sees\nthe security problems with a system wide library path.) So for a.out or\nother non-ELF systems, I'm proposing no change; do whatever works. For\nELF, the specification supports compiled in library search paths; lets use\nthem. Asking the system administrator to keep track of another library\npath is most assuming. -R/--rpath also makes it simpler for non-root\nusers to install PostgreSQL.\n\n-- \n| Matthew N. Dodd | 78 280Z | 75 164E | 84 245DL | FreeBSD/NetBSD/Sprite/VMS |\n| [email protected] | This Space For Rent | ix86,sparc,m68k,pmax,vax |\n| http://www.jurai.net/~winter | Are you k-rad elite enough for my webpage? |\n\n",
"msg_date": "Mon, 12 Oct 1998 12:18:13 -0400 (EDT)",
"msg_from": "\"Matthew N. Dodd\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "\"Matthew N. Dodd\" wrote:\n > For\n >ELF, the specification supports compiled in library search paths; lets use\n >them. Asking the system administrator to keep track of another library\n >path is most assuming. -R/--rpath also makes it simpler for non-root\n >users to install PostgreSQL.\n \nIf you do this, Debian Linux will consider it a bug and I shall have to take\nit out for the Debian package. From Debian documentation:\n\n \"`-rpath' can cause big problems if the referenced\n libraries get updated. Therefore, no Debian package should use the\n `-rpath' option.\"\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Blessed is the man who makes the LORD his trust, \n who does not look to the proud, to those who turn \n aside to false gods.\" Psalms 40:4 \n\n\n",
"msg_date": "Mon, 12 Oct 1998 21:20:59 +0100",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries "
},
{
"msg_contents": "On Mon, 12 Oct 1998, Oliver Elphick wrote:\n> \"Matthew N. Dodd\" wrote:\n> > For\n> >ELF, the specification supports compiled in library search paths; lets use\n> >them. Asking the system administrator to keep track of another library\n> >path is most assuming. -R/--rpath also makes it simpler for non-root\n> >users to install PostgreSQL.\n> \n> If you do this, Debian Linux will consider it a bug and I shall have to take\n> it out for the Debian package. From Debian documentation:\n>\n> \"`-rpath' can cause big problems if the referenced\n> libraries get updated. Therefore, no Debian package should use the\n> `-rpath' option.\"\n\nYes, since Debian distributes binary packages where users can install the\npackage anywhere they like, compiling in a search path causes problems.\n\nLet me ask you though, when was the last time you updated the shared libs\nand didn't update the utils that used them? Regardless, under ELF, a\nmajor number change will require relinking anyway as ELF has no 'major\nrevision number'.\n\nA solution would be to compile in multiple probable locations for the\nlibrary in to the binary. Another solution is to beat users up until they\nno longer have the desire to install packages in non-standard places.\n\nRegardless, just because Debian or any other system can't figure out how\nto do library versioning doesn't mean it should handycap any correct ELF\nlibrary solution. The little warning you pased about -rpath is bogus; if\nthe library changes and the minor version is bumped, no problems will be\nexperienced because, by definition, such changes do not alter behavior.\nA change that would cause problems will require a relink anyway as you're\nno longer linking against the same library. (libpq1.so.0 vs libpq2.so.0).\n\nFor those of you coming from an a.out or other background, your libraries\naren't going to be named the same. 
I am not proposing any changes to the\na.out library naming.\n\nELF provides compiled in library search paths for a reason; it is the\ncorrect thing to do. How it effects binary packages of a particluar OS\n(FreeBSD, NetBSD, Debian or whatever) is beyond the scope of the\npostgresql development project. I'm pretty sure postgresql is provided in\nsource form for that reason. :)\n\nBTW, you misspelled 'Debian GNU/Linux'.\n\n-- \n| Matthew N. Dodd | 78 280Z | 75 164E | 84 245DL | FreeBSD/NetBSD/Sprite/VMS |\n| [email protected] | This Space For Rent | ix86,sparc,m68k,pmax,vax |\n| http://www.jurai.net/~winter | Are you k-rad elite enough for my webpage? |\n\n",
"msg_date": "Mon, 12 Oct 1998 16:40:05 -0400 (EDT)",
"msg_from": "\"Matthew N. Dodd\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries "
},
{
"msg_contents": "\"Matthew N. Dodd\" wrote:\n >Let me ask you though, when was the last time you updated the shared libs\n >and didn't update the utils that used them? \n\nWell, using Debian's packaging system, I don't have to worry about it; the\ndependencies should take care of such things...\n > ...\n >Regardless, just because Debian or any other system can't figure out how\n >to do library versioning doesn't mean it should handycap any correct ELF\n >library solution. The little warning you pased about -rpath is bogus; if\n >the library changes and the minor version is bumped, no problems will be\n >experienced because, by definition, such changes do not alter behavior.\n >A change that would cause problems will require a relink anyway as you're\n >no longer linking against the same library. (libpq1.so.0 vs libpq2.so.0).\n\nI've been trying to find any more information than the single sentence I\nquoted; there was nothing in the documentation on my own system. \n\n >BTW, you misspelled 'Debian GNU/Linux'.\n \nSheer laziness!\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Blessed is the man who makes the LORD his trust, \n who does not look to the proud, to those who turn \n aside to false gods.\" Psalms 40:4 \n\n\n",
"msg_date": "Mon, 12 Oct 1998 22:52:17 +0100",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries "
},
{
"msg_contents": "> On Mon, 12 Oct 1998, Bruce Momjian wrote:\n> > Just to comment. If we use -R or -rpath, people need to use that for\n> > _every_ application that uses libpq, etc. That seems like a pain to me.\n> \n> The alternative is more painful. If PostgreSQL were the only application\n> package installed on a system your LD_LIBRARY_PATH would be really short.\n> \n> > B1ecause people have not had problems in the past using ld.so.conf, and I\n> > can see them having problems with -R or -rpath, I would hesistate to\n> > change it, though I can see why some installations would prefer the\n> > -R/-rpath.\n> \n> I'll continue to ignore the fact that some ELF systems do have a\n> bastardized runtime linker and use ld.so.conf when I state that ELF\n> systems have no ld.so.conf, so its LD_LIBRARY_PATH or -R/--rpath (I looked\n> up the flag finally.)\n> \n> ld.so.conf or ldconfig with various directories on the command line is\n> necessary for a non-ELF system; this is the way you do things. ELF fixes\n> this (the problem is when you have a zillion different directories to\n> search for libraries in and it starts taking a long time to start\n> dynamically linked programs on a loaded system. I'll assume everyoen sees\n> the security problems with a system wide library path.) So for a.out or\n> other non-ELF systems, I'm proposing no change; do whatever works. For\n> ELF, the specification supports compiled in library search paths; lets use\n> them. Asking the system administrator to keep track of another library\n> path is most assuming. -R/--rpath also makes it simpler for non-root\n> users to install PostgreSQL.\n\nMatthew:\n\nI am running UnixWare 7.01, a System V Release 4 based system. It is an ELF based system with roots back to the first ELF based systems. It's linker does not have a -R or --rpath option. To have UnixWare's ld command embed the location of the shared libraries into the executable, you set the LD_RUN_PATH to the path(s) containing the libraries. 
From the syntax of the --rpath option, it is apparent you are running the GNU C compiler with ELF support (an upstart, late commer in the world of ELF support). You should know that the one true path of ELF support is to use the LD_RUN_PATH environment variable, not -R/--rpath :-> I find it much easier to set LD_RUN_PATH then to have configure figure out that the a system is running GNU C with ELF support and for that system only, use -R/--rpath. Check out your ld command. If it supports LD_LIBRARY_PATH, it probably supprorts LD_RUN_PATH. If it does, then use it to embed the library locations into your executable.\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n",
"msg_date": "Tue, 13 Oct 1998 02:03:09 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries "
},
{
"msg_contents": "Folks,\n\n this debate is becoming more and more a philosophic\n discussion about \"if it is right to force end users to use\n -rpath or ld.so.conf\". I think it's not the PostgreSQL\n developers teams subject to make a decision about it. And\n even if, I think we cannot make such a decision until release\n schedule of 6.4.\n\n PostgreSQL should be easily installable out of the box. On\n systems where ld.so.conf is the defacto standard, forcing\n -rpath will be IMHO a drawback against PostgreSQL (the user\n already made his OS decision). If using a search path means a\n loss of performance or security, systems where this is the\n standard way have other problems than those coming with\n PostgreSQL.\n\n We can clearify in the installation instructions that using\n ld.so.conf requires root permissions any time the library\n interface changes or LD_LIBRARY_PATH can be used (if a non\n privileged user wants to play around with it).\n\n For 6.5 we could discuss if using ld.so.conf, LD_LIBRARY_PATH\n or -rpath could become a configure option.\n\n What we never should do is to be arrogant and say \"PostgreSQL\n MUST be installed using the ONE and ONLY correct way of\n shared library usage\". This would only become a pseudo\n argument against PostgreSQL.\n\n Let's all calm down and release. There are end users waiting\n for the capabilities of 6.4. They don't care about how the\n shared libs are used as long as it's easy to use them.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 13 Oct 1998 09:50:07 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "> Let's all calm down and release. There are end users waiting\n> for the capabilities of 6.4. They don't care about how the\n> shared libs are used as long as it's easy to use them.\n\nDon't panic Jan! I took up the discussion because Matthew seemed to have\nstrong opinions on a subject that afaik is not an issue really. So I was\nhoping to learn more about the fine points, and I think I have.\n\nIt looks like there may be pros and cons to each method, but for me the\n\"old style\" of using ld.conf.so allows some independence between apps\nand library location that -rpath/-R may not.\n\nI would expect that, as Jan suggests, it is best to leave the choice to\nthe installer.\n\nAnyway, if Matthew wants to write up the way one would put an entry for\nLDFLAGS or LDFLAGS_SO or ?? in a Makefile.custom to get the behavior he\nis advocating I would be happy to include it in the Admin/installation\ndocs as an installation tip or alternative.\n\nMatthew?\n\n - Tom\n",
"msg_date": "Tue, 13 Oct 1998 14:26:23 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "On Tue, 13 Oct 1998, Billy G. Allie wrote:\n> I am running UnixWare 7.01, a System V Release 4 based system. It is\n> an ELF based system with roots back to the first ELF based systems. \n> It's linker does not have a -R or --rpath option. To have UnixWare's\n> ld command embed the location of the shared libraries into the\n> executable, you set the LD_RUN_PATH to the path(s) containing the\n> libraries.\n\nOk.\n\n> From the syntax of the --rpath option, it is apparent you are running\n> the GNU C compiler with ELF support (an upstart, late commer in the\n> world of ELF support). \n\nActually, while I did mention --rpath (In the context of a FreeBSD/ELF or\nLinux/ELF system) I am running Solaris which uses the -R flag to tell\nld(1) where things are. -R takes prescedence over LD_RUN_PATH according\nto my doc.\n\n> You should know that the one true path of ELF support is to use the\n> LD_RUN_PATH environment variable, not -R/--rpath :-> I find it much\n> easier to set LD_RUN_PATH then to have configure figure out that the a\n> system is running GNU C with ELF support and for that system only, use\n> -R/--rpath. Check out your ld command. If it supports\n> LD_LIBRARY_PATH, it probably supprorts LD_RUN_PATH. If it does, then\n> use it to embed the library locations into your executable.\n\nI'm pretty sure all ELF systems support LD_LIBRARY_PATH and LD_RUN_PATH.\nUsing -R/--rpath allows us to have better control of what search paths are\ncompiled in. Who knows what the user has LD_RUN_PATH set to. Should\nconfigure ask them if they want to use LD_RUN_PATH as well? Should we\nfind all the libraries we are to link with and construct our own\n-R/--rpath? For systems that don't support -R/--rpath we'll have to do\nthis anyway as we'll be messing with LD_RUN_PATH.\n\n-- \n| Matthew N. 
Dodd | 78 280Z | 75 164E | 84 245DL | FreeBSD/NetBSD/Sprite/VMS |\n| [email protected] | This Space For Rent | ix86,sparc,m68k,pmax,vax |\n| http://www.jurai.net/~winter | Are you k-rad elite enough for my webpage? |\n\n",
"msg_date": "Tue, 13 Oct 1998 11:15:43 -0400 (EDT)",
"msg_from": "\"Matthew N. Dodd\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries "
},
{
"msg_contents": "> > Let's all calm down and release. There are end users waiting\n> > for the capabilities of 6.4. They don't care about how the\n> > shared libs are used as long as it's easy to use them.\n> \n> Don't panic Jan! I took up the discussion because Matthew seemed to have\n> strong opinions on a subject that afaik is not an issue really. So I was\n> hoping to learn more about the fine points, and I think I have.\n> \n> It looks like there may be pros and cons to each method, but for me the\n> \"old style\" of using ld.conf.so allows some independence between apps\n> and library location that -rpath/-R may not.\n> \n> I would expect that, as Jan suggests, it is best to leave the choice to\n> the installer.\n> \n> Anyway, if Matthew wants to write up the way one would put an entry for\n> LDFLAGS or LDFLAGS_SO or ?? in a Makefile.custom to get the behavior he\n> is advocating I would be happy to include it in the Admin/installation\n> docs as an installation tip or alternative.\n\nFrankly, I think the environment variable LD_RUN_PATH is the only way to\ngo(see man ld.so). Setting the flag on every link, and for user apps\ntoo, seems too painful for regular use.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 13 Oct 1998 11:24:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "On Tue, 13 Oct 1998, Thomas G. Lockhart wrote:\n> Anyway, if Matthew wants to write up the way one would put an entry for\n> LDFLAGS or LDFLAGS_SO or ?? in a Makefile.custom to get the behavior he\n> is advocating I would be happy to include it in the Admin/installation\n> docs as an installation tip or alternative.\n> \n> Matthew?\n\nWhen I install 6.4 on the systems here I'll see if I can make clean\npatches and submit them.\n\nLike I said in my first message this is a sore subject only because I run\ninto it so much and few software packages seem to deal with it correctly.\nWhat PostgreSQL does won't really affect me as I'll just keep doing what\nI've been doing (along with lots of cursing). If my patches are of any\nuse then maybe PostgreSQL won't be on my list of things to fix shared libs\nbefore compiling.\n\nAnyhow, getting 6.4 out is of paramount importance so this discussion is\nacademic at this point.\n\n-- \n| Matthew N. Dodd | 78 280Z | 75 164E | 84 245DL | FreeBSD/NetBSD/Sprite/VMS |\n| [email protected] | This Space For Rent | ix86,sparc,m68k,pmax,vax |\n| http://www.jurai.net/~winter | Are you k-rad elite enough for my webpage? |\n\n",
"msg_date": "Tue, 13 Oct 1998 12:05:32 -0400 (EDT)",
"msg_from": "\"Matthew N. Dodd\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "> Anyhow, getting 6.4 out is of paramount importance so this discussion \n> is academic at this point.\n\nWell, I was proposing that you document how to use the -rpath/-R style\nof building the v6.4beta. If you can do that in the next few days then\nit can appear in the v6.4 docs. If not, then it will wait for sometime\nlater. \n\nIn either case, we aren't proposing to change the current methods, which\nare independent of loader configuration and options (for example, those\ninstalling into /usr/lib just need to reboot to get ldconfig run), but\nrather allowing you to document the way you would suggest doing it.\n\nYour choice...\n\n - Tom\n",
"msg_date": "Tue, 13 Oct 1998 18:34:44 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "On Tue, 13 Oct 1998, Thomas G. Lockhart wrote:\n> Well, I was proposing that you document how to use the -rpath/-R style\n> of building the v6.4beta. If you can do that in the next few days then\n> it can appear in the v6.4 docs. If not, then it will wait for sometime\n> later. \n> \n> In either case, we aren't proposing to change the current methods, which\n> are independent of loader configuration and options (for example, those\n> installing into /usr/lib just need to reboot to get ldconfig run), but\n> rather allowing you to document the way you would suggest doing it.\n\nWell, for Solaris I've always added '-R' flags that correspond to the\nvarious -L flags in the appropriate make files. Since $(LIBDIR) is equal\nto $(POSTGRESDIR)/lib which is the final installation directory it kind of\nmakes sense to use '-R' as we're only specifying additional linker search\ndirectories as supplied to us by the user. I suppose for Unixware we\ncould 'setenv LD_RUN_PATH $(LIBDIR)' or something.\n\n-- \n| Matthew N. Dodd | 78 280Z | 75 164E | 84 245DL | FreeBSD/NetBSD/Sprite/VMS |\n| [email protected] | This Space For Rent | ix86,sparc,m68k,pmax,vax |\n| http://www.jurai.net/~winter | Are you k-rad elite enough for my webpage? |\n\n",
"msg_date": "Tue, 13 Oct 1998 14:39:29 -0400 (EDT)",
"msg_from": "\"Matthew N. Dodd\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
}
] |
[
{
"msg_contents": "> > How are people handling the fact that libpq is dynamic, and psql\n> needs\n> > to find it. I don't see people using -rpath as a link option.\n> > \n> > Are people setting LD_LIBRARY_PATH to the PostgreSQL library\n> directory?\n> > \n> > Using ldconfig and /etc/ld.so.conf:\n> > \n> > # ld.so.conf\n> > /usr/X11R6/lib\n> > /usr/pkg/lib\n> > /usr/local/lib\n> > /usr/local/pgsql/lib\n> > \n> > This is on NetBSD 1.3.2.\n> \n> ELF shared libraries are new to BSDI 4.0, so I was a little confused.\n> I\n> editied ld.so.conf, but did not know I needed to run ldconfig, which I\n> have done figured out, and it works.\n> \n> My larger question is why we don't get more reports of problems like\n> this. Do novices just know to go edit ld.so.conf, and run ldconfig?\n> \n> It is probably in the Linux FAQ, but is everyone reading that when\n> they\n> get the error?\n> \n> I am trying to figure out how to deal with this for BSDI 4.0 users. I\n> am sure they are going to be confused.\nIt's in the INSTALL file. Or at least it was.\n\t\t-DEJ\n",
"msg_date": "Fri, 9 Oct 1998 14:41:04 -0500",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] dynamic libraries"
},
{
"msg_contents": "> > have done figured out, and it works.\n> > \n> > My larger question is why we don't get more reports of problems like\n> > this. Do novices just know to go edit ld.so.conf, and run ldconfig?\n> > \n> > It is probably in the Linux FAQ, but is everyone reading that when\n> > they\n> > get the error?\n> > \n> > I am trying to figure out how to deal with this for BSDI 4.0 users. I\n> > am sure they are going to be confused.\n> It's in the INSTALL file. Or at least it was.\n\nI have updated the INSTALL and sgml/INSTALL files to recommend ldconfig\non Linux/ELF and any ELF-based system.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Fri, 9 Oct 1998 15:52:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": " I have updated the INSTALL and sgml/INSTALL files to recommend ldconfig\n on Linux/ELF and any ELF-based system.\n\nIt's not an ELF thing! This applies to any (I think) shared library\nsystem. At least I know my NetBSD a.out system uses ldconfig in this\nway. Your note should be more general.\n\nCheers,\nBrook\n",
"msg_date": "Fri, 9 Oct 1998 15:04:08 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "> I have updated the INSTALL and sgml/INSTALL files to recommend ldconfig\n> on Linux/ELF and any ELF-based system.\n> \n> It's not an ELF thing! This applies to any (I think) shared library\n> system. At least I know my NetBSD a.out system uses ldconfig in this\n> way. Your note should be more general.\n\nYes, I wanted to improve it too. ELF came with SVr4, and the Linux\nmention, while valuable for a large base of users, was misleading to others.\n\nHere is the new text:\n\n If necessary, tell UNIX how to find your shared libraries. If you\n are using an ELF-based system, such as Linux, do ONE of the following,\n preferably the first:\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Fri, 9 Oct 1998 17:41:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": " Yes, I wanted to improve it too. ELF came with SVr4, and the Linux\n mention, while valuable for a large base of users, was misleading to others.\n\n Here is the new text:\n\n If necessary, tell UNIX how to find your shared libraries. If you\n\tare using an ELF-based system, such as Linux, do ONE of the following,\n\tpreferably the first:\n\nI'm not sure what your list is that follows this, but if it includes\neither LD_LIBRARY_PATH or /etc/ld.so.conf and ldconfig, then your\nrestriction to ELF is not correct. (And why single out Linux for\nspecial notes when postgresql applies MUCH more broadly and ELF is\nMUCH more widely used than on Linux boxes?)\n\nPerhaps you can say something like\n\n If necessary, tell UNIX how to find your shared libraries.\n Commonly, ONE of the following (preferably the first) is\n sufficient:\n\n\t- mention /etc/ld.so.conf and ldconfig\n\t- mention LD_LIBRARY_PATH\n\t ...\n\nIf there really are system-specific things, they can be added to the\nlist with a caveat about their applicability.\n\nCheers,\nBrook\n",
"msg_date": "Fri, 9 Oct 1998 16:20:39 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "> If necessary, tell UNIX how to find your shared libraries.\n> Commonly, ONE of the following (preferably the first) is\n> sufficient:\n> \n> \t- mention /etc/ld.so.conf and ldconfig\n> \t- mention LD_LIBRARY_PATH\n> \t ...\n> \n> If there really are system-specific things, they can be added to the\n> list with a caveat about their applicability.\n\nI have changed it to:\n\n 14) If necessary, tell UNIX how to find your shared libraries. You can\n do ONE of the following, preferably the first:\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Fri, 9 Oct 1998 23:08:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
},
{
"msg_contents": "On Fri, 9 Oct 1998, Bruce Momjian wrote:\n\n> > > have done figured out, and it works.\n> > > \n> > > My larger question is why we don't get more reports of problems like\n> > > this. Do novices just know to go edit ld.so.conf, and run ldconfig?\n> > > \n> > > It is probably in the Linux FAQ, but is everyone reading that when\n> > > they\n> > > get the error?\n> > > \n> > > I am trying to figure out how to deal with this for BSDI 4.0 users. I\n> > > am sure they are going to be confused.\n> > It's in the INSTALL file. Or at least it was.\n> \n> I have updated the INSTALL and sgml/INSTALL files to recommend ldconfig\n> on Linux/ELF and any ELF-based system.\n\nldconfig was there before ELF became popular.\n\nWhen I originally started using Linux back in 94, I was using Slackware,\nand it used aout. ldconfig was in use then. \n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Sat, 10 Oct 1998 10:33:40 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] dynamic libraries"
}
] |
[
{
"msg_contents": "With the new PL code in pl/tcl I am getting a bunch of warnings like\nthe following:\n\n\tpltcl.c:443: warning: variable `prodesc' might be clobbered by `longjmp' or `vfork' \n\nI have never seen this before in any of my programming. What do they\nmean? Is anyone else seeing these? Does it matter?\n\nCheers,\nBrook\n",
"msg_date": "Fri, 9 Oct 1998 15:01:32 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "PL compile warning messages"
},
{
"msg_contents": ">\n> With the new PL code in pl/tcl I am getting a bunch of warnings like\n> the following:\n>\n> pltcl.c:443: warning: variable `prodesc' might be clobbered by `longjmp' or `vfork'\n>\n> I have never seen this before in any of my programming. What do they\n> mean? Is anyone else seeing these? Does it matter?\n>\n> Cheers,\n> Brook\n>\n>\n\n I get them too and don't know exactly what they mean. I think\n gcc is just telling that after returning to the setjmp\n location via longjmp the variable might not contain what\n there was before (the longjmp mechanism only restores the\n stack pointers, not the variable contents on the stack).\n\n But the longjmp's are there only to clean up the Tcl's\n interpreter nesting and allocations in the case of an\n elog(ERROR) before really jumping back into the postgres main\n loop. Tcl doesn't allocate via palloc, so there would be\n memory allocations never free'd otherwise. The mentioned\n variables aren't accessed after the longjumping session began\n (it's really a longjmp party if Tcl triggers use in turn Tcl\n functions where the queries invoke other functions/triggers\n and so on :-).\n\n Since PL/Tcl is designed to be used as trigger language,\n raising errors with elog would be a normal operation from\n inside of PL/Tcl.\n\n Nothing I tested so far is really broken. I assume it doesn't\n really matter.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 9 Oct 1998 23:37:58 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PL compile warning messages"
},
{
"msg_contents": "> I get them too and don't know exactly what they mean. I think\n> gcc is just telling that after returning to the setjmp\n> location via longjmp the variable might not contain what\n> there was before (the longjmp mechanism only restores the\n> stack pointers, not the variable contents on the stack).\n> \n> But the longjmp's are there only to clean up the Tcl's\n> interpreter nesting and allocations in the case of an\n> elog(ERROR) before really jumping back into the postgres main\n> loop. Tcl doesn't allocate via palloc, so there would be\n> memory allocations never free'd otherwise. The mentioned\n> variables aren't accessed after the longjumping session began\n> (it's really a longjmp party if Tcl triggers use in turn Tcl\n> functions where the queries invoke other functions/triggers\n> and so on :-).\n\nSee postgres.c. We used to have that warning in postgres.c, but someone\nchanged something to fix it. I can't see what was changed, now that I\nam looking at it.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Fri, 9 Oct 1998 17:58:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PL compile warning messages"
},
{
"msg_contents": "Bruce wrote:\n>\n> > I get them too and don't know exactly what they mean. I think\n> > gcc is just telling that after returning to the setjmp\n> > location via longjmp the variable might not contain what\n> > there was before (the longjmp mechanism only restores the\n> > stack pointers, not the variable contents on the stack).\n> >\n> > But the longjmp's are there only to clean up the Tcl's\n> > interpreter nesting and allocations in the case of an\n> > elog(ERROR) before really jumping back into the postgres main\n> > loop. Tcl doesn't allocate via palloc, so there would be\n> > memory allocations never free'd otherwise. The mentioned\n> > variables aren't accessed after the longjumping session began\n> > (it's really a longjmp party if Tcl triggers use in turn Tcl\n> > functions where the queries invoke other functions/triggers\n> > and so on :-).\n>\n> See postgres.c. We used to have that warning in postgres.c, but someone\n> changed something to fix it. I can't see what was changed, now that I\n> am looking at it.\n\n I took a lood at it and didn't saw the changes either. Then I\n played around with the code.\n\n In some cases only a strange workaround could prevent that\n warning. Creating another variable of the same type and\n somewhere in the function doing var2 = var1; and then using\n var2 instead (doesn't do anything useful and makes the code\n more obscure).\n\n From the gcc manpage:\n\n -W Print extra warning messages for these events:\n\n � A nonvolatile automatic variable might be changed\n by a call to longjmp. These warnings are possible\n only in optimizing compilation.\n\n The compiler sees only the calls to setjmp. It\n cannot know where longjmp will be called; in fact,\n a signal handler could call it at any point in the\n code. 
As a result, you may get a warning even when\n there is in fact no problem because longjmp cannot\n in fact be called at the place which would cause a\n problem.\n\n In fact I think it's legal to ignore these warnings because\n there is in fact no place which would cause a problem.\n\n And in fact I love this snippet of the manpage :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 12 Oct 1998 20:31:42 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PL compile warning messages"
}
] |
[
{
"msg_contents": "Looking over the list of open items, it seems we are on target for a\nNovember 1 release. I think that is the date Marc had in mind, and it\nlooks like we are in good shape, assuming I can fix the \"vacuum crash\"\nitem.\n\nI don't think we can release without the 'ps' args fix, and we can't\nturn that off because it caused by the fork() removal I did long ago.\n\n---------------------------------------------------------------------------\n\n\nAdditions\n---------\ntest new cidr/IP address type(Tom Helbekkmo)\ncomplete rewrite system changes(Jan)\nCREATE TABLE test (x text, s serial) fails if no database creation permission\nregression test all platforms\nvacuum crash\n\nSerious Items\n------------\nchange pg args for platforms that don't support argv changes\n\t(setproctitle()?, sendmail hack?)\n\nDocs\n----\nman pages/sgml synchronization\ngenerate html/postscript documentation\nmake sure all changes are documented properly\n\nMinor items\n-----------\ncnf-ify still can exhaust memory, make SET KSQO more generic\npermissions on indexes: what do they do? should it be prevented?\nmulti-verion concurrency control - work-in-progress for 6.5\nimprove reporting of syntax errors by showing location of error in query\nuse index with constants on functions\nallow chaining of pages to allow >8k tuples\nallow multiple generic operators in expressions without the use of parentheses\ndocument/trigger/rule so changes to pg_shadow create pg_pwd\nlarge objects orphanage\nimprove group handling\nno min/max for oid type\nimprove PRIMARY KEY handling\ngenerate postmaster pid file and remove flock/fcntl lock code\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Fri, 9 Oct 1998 18:03:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Release schedule"
},
{
"msg_contents": "On Fri, 9 Oct 1998, Bruce Momjian wrote:\n\n> Looking over the list of open items, it seems we are on target for a\n> November 1 release. I think that is the date Marc had in mind, and it\n> looks like we are in good shape, assuming I can fix the \"vacuum crash\"\n> item.\n> \n> I don't think we can release without the 'ps' args fix, and we can't\n> turn that off because it caused by the fork() removal I did long ago.\n\n\tMost humble apologies...I have a bad hard drive in my machine that\nI'm just recovering from (and awaiting replacement for *sigh*)...am\nspending a good portion of today at the office trying to get things back\non track...\n\n\tCurrently am doing a build on Solaris x86/Sparc 2.6, as well as\nFreeBSD, but the FreeBSD one is kind of a munged machine until my home\nmachine is fixed *sigh*\n\n\tIf I get a clean build off of both x86/Sparc of Solaris, and\nunless there are any major objections, I'm going to put out a\nbeta2...'ps' args fix to follow shortly...\n\n\t > \n> ---------------------------------------------------------------------------\n> \n> \n> Additions\n> ---------\n> test new cidr/IP address type(Tom Helbekkmo)\n> complete rewrite system changes(Jan)\n> CREATE TABLE test (x text, s serial) fails if no database creation permission\n> regression test all platforms\n> vacuum crash\n> \n> Serious Items\n> ------------\n> change pg args for platforms that don't support argv changes\n> \t(setproctitle()?, sendmail hack?)\n> \n> Docs\n> ----\n> man pages/sgml synchronization\n> generate html/postscript documentation\n> make sure all changes are documented properly\n> \n> Minor items\n> -----------\n> cnf-ify still can exhaust memory, make SET KSQO more generic\n> permissions on indexes: what do they do? 
should it be prevented?\n> multi-verion concurrency control - work-in-progress for 6.5\n> improve reporting of syntax errors by showing location of error in query\n> use index with constants on functions\n> allow chaining of pages to allow >8k tuples\n> allow multiple generic operators in expressions without the use of parentheses\n> document/trigger/rule so changes to pg_shadow create pg_pwd\n> large objects orphanage\n> improve group handling\n> no min/max for oid type\n> improve PRIMARY KEY handling\n> generate postmaster pid file and remove flock/fcntl lock code\n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> \n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Mon, 12 Oct 1998 15:59:40 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Release schedule"
}
] |
[
{
"msg_contents": "i would very much like inet_net_pton to not be changed in this way,\neven though it's an internal server function the way postgres 6.4\nwill be packaged. there is an RFC specifying what this function does.\n",
"msg_date": "Fri, 09 Oct 1998 21:09:53 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hackers-digest V1 #1013 "
},
{
"msg_contents": "Thus spake Paul A Vixie\n> i would very much like inet_net_pton to not be changed in this way,\n\nYou mean inet_net_ntop, right?\n\n> even though it's an internal server function the way postgres 6.4\n> will be packaged. there is an RFC specifying what this function does.\n\nAre you talking about RFC2133? That one doesn't even specify bits as\nan argument so this is already different. Is there another one I\nshould be looking at?\n\nIf a function is based on a standards document like this, shouldn't\nwe include that as a comment in the file?\n\nAnyway, I seem to be mistaken about the whole cidr or inet type.\nBased on the discussions we had earlier I am surprised by the\nfollowing.\n\ndarcy=> select '198.1.2.3/8'::inet;\n?column?\n--------\n198/8 \n(1 row)\n\nI would have expected it to print what I entered. If the above is\ncorrect then perhaps we still need a cidr type that behaves differently\nor rename this to cidr and write a new inet type.\n\nHere is what I thought we were talking about taken from postings in\nthis list back in July.\n\n>From Bruce Momjian:\n> My guess is that it is going to output x.x.x.x/32, but we should supply\n> a function so they can get just the IP or the mask from the type. That\n> way, people who don't want the cidr format can pull out the part they\n> want.\n\nThis suggests that the whole address is stored and by default would be\noutput. Are we outputting just the network part now and expecting my\nfunctions to get the host part?\n\nI said:\n> Perhaps there is an underlying difference of assumptions about what\n> the actual type is. Let me take a stab at defining it (without\n> naming it) and see if we're all on the same bus.\n> \n> I see the underlying data type storing two things, a host address\n> (which can hold an IPv4 or IPv6 IP) and a netmask which can be\n> stored as a small int, 8 bits is plenty. 
The input function would\n> read IP numbers as follows (I'm making some of this up as I go.)\n> \n> x.x.x.x/y IP x.x.x.x with masklen y\n> x.x.x.x:y.y.y.y IP x.x.x.x with masklen determined by examining\n> y.y.y.y raising an exception if it is an invalid\n> mask (defined as all ones followed by all zeroes)\n> x.x.x.x IP x.x.x.x masklen of 32\n> \n> The output functions would print in a standard way, possibly allowing\n> alternate representations like we do for money. Also, there would\n> be functions to extract the host, the network or the netmask.\n> \n> Is this close to what everyone thinks or are we talking about completely\n> different things?\n\nNo one contradicted me so I assumed that there was agreement.\n\n>From Bruce Momjian:\n> That way, if they specify cidr bits, we store it. If they don't we make\n> the bits field equal -1, and print/sort appropriately. The addr len is\n> usually 3, but ip6 is also easy to add by making the addr len equal 6.\n\nSupporting the idea of setting bits to -1 to mean an unspecified netmask.\n\nI also checked doc/README.inet. It seems to support what I expect\nalthough it doesn't mention setting bits to -1.\n\nSo what do I do? Should I redo the inet functions without using the\ninet_net_* functions as described above or is the current behaviour the\none we wanted?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 10 Oct 1998 09:08:37 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: hackers-digest V1 #1013"
},
{
"msg_contents": "[email protected] (D'Arcy J.M. Cain) writes:\n> Based on the discussions we had earlier I am surprised by the\n> following.\n\n> darcy=> select '198.1.2.3/8'::inet;\n> ?column?\n> --------\n> 198/8 \n> (1 row)\n\n> I would have expected it to print what I entered.\n\nWhy? You told it to truncate the data to 8 bits, so it did. (At least,\nthat's my understanding of what the /n notation means, but maybe I'm\nmistaken.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 10 Oct 1998 11:41:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: hackers-digest V1 #1013 "
},
{
"msg_contents": "Thus spake Tom Lane\n> [email protected] (D'Arcy J.M. Cain) writes:\n> > Based on the discussions we had earlier I am surprised by the\n> > following.\n> \n> > darcy=> select '198.1.2.3/8'::inet;\n> > ?column?\n> > --------\n> > 198/8 \n> > (1 row)\n> \n> > I would have expected it to print what I entered.\n> \n> Why? You told it to truncate the data to 8 bits, so it did. (At least,\n> that's my understanding of what the /n notation means, but maybe I'm\n> mistaken.)\n\nAs I explained, I was surprised based on my understanding of the type\nbased on previous postings.\n\nBTW, for a real world example of the usage I was expecting, look at an\nAscend router. In a connection profile you can specify an IP for the\nremote side as, e.g., 198.96.119.225/28. The Ascend pulls out all\nthe information it needs to set up that connection. It assigns\n198.96.119.225 to the remote host, it routes the 16 addresses in that\nsubnet to that interface and, if RIP is enabled (a bad idea but allowed)\nthen it knows to announce on 198.96.119.239.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 10 Oct 1998 12:32:13 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: hackers-digest V1 #1013"
},
{
"msg_contents": "my take on this is that (1) inet_net_pton() is definitely broken in that\nit is willing to write one more byte of output than its arguments allow,\nand (2) inet_net_{pton,ntop}() is not suitable for postgresql's use here.\n\ni can fix (1) in bind 8.1.3, and postgresql already has it fixed in its\nversion of bind's library.\n\nthere is no way to fix (2) without an api change. we need, in order to\nmeet the stated needs of folks here who have uses for an inet-like type,\na way to have the prefix length be less than the size of the bitstring\nbeing introduced. \"16.1.0.2/28\" was given as an example, that being a\nway to give both a host address and its netmask on some wire. i've \nwanted this functionality from time to time in the past and i can see\nwhy for postgresql's purposes that's what should be provided for the\n\"inet\" (or, given this change, more properly the \"cidr\") type.\n\nbut inet_net_ntop() only returns one thing (the prefix length) and the\ncaller is expected to know how many octets of mantissa were generated\nbased on this returned prefix length. i propose a new interface, which\nwould have a different name, a different argument list, and a different\nuse:\n\n\tint\n\tinet_cidr_pton(af, src, dst, size, int *used)\n\nthis would work very much the same as inet_net_pton() except for three\nthings: (1) nonzero trailing mantissas (host parts) would be allowed; (2)\nthe number of consumed octets of dst would be set into *used; and (3) there\nwould be absolutely no classful assumptions made if the pfxlen is not\nspecified in the input.\n\n\tint\n\tinet_cidr_ntop(ag, src, len, bits, dst, size)\n\nthis would work very much the same as inet_net_ntop() except that the\nsize (in octets) of src's mantissa would be given in the new \"len\" argument\nand not imputed from \"bits\" as occurs now. 
\"bits\" would just be output\nas the \"/NN\" at the end of the string, and would never be optional.\n\nif this is agreeable, i'll code it up and submit it here for testing before\ni push it out in bind 8.1.3. i really do agree with the functionality we've\ndrifted toward in this type -- if folks want to express the netmask of a\nhost address without getting in trouble for a nonzero host part that's\nlost during parsing and storage, then they jolly well ought to be able to\ndo that.\n",
"msg_date": "Sun, 11 Oct 1998 01:59:34 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: inet/cidr/bind"
},
{
"msg_contents": "Paul A Vixie <[email protected]> writes:\n\n> if this is agreeable, i'll code it up and submit it here for testing\n> before i push it out in bind 8.1.3. i really do agree with the\n> functionality we've drifted toward in this type -- if folks want to\n> express the netmask of a host address without getting in trouble for\n> a nonzero host part that's lost during parsing and storage, then\n> they jolly well ought to be able to do that.\n\nI certainly agree. We should then be able to switch very easily and\ncleanly to the use of these new input and output functions, and add\nthe various utility functions that D'Arcy has outlined. Should be\nquick work.\n\nWe should leave the type name as INET, I think.\n\nBruce/Marc: we're OK for 6.4 with this still, right? Even if it takes\na couple more days to get everything to fall into place?\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "11 Oct 1998 13:06:26 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: inet/cidr/bind"
},
{
"msg_contents": "> Paul A Vixie <[email protected]> writes:\n> \n> > if this is agreeable, i'll code it up and submit it here for testing\n> > before i push it out in bind 8.1.3. i really do agree with the\n> > functionality we've drifted toward in this type -- if folks want to\n> > express the netmask of a host address without getting in trouble for\n> > a nonzero host part that's lost during parsing and storage, then\n> > they jolly well ought to be able to do that.\n> \n> I certainly agree. We should then be able to switch very easily and\n> cleanly to the use of these new input and output functions, and add\n> the various utility functions that D'Arcy has outlined. Should be\n> quick work.\n> \n> We should leave the type name as INET, I think.\n> \n> Bruce/Marc: we're OK for 6.4 with this still, right? Even if it takes\n> a couple more days to get everything to fall into place?\n\nI think so. It does not affect people testing other things.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Sun, 11 Oct 1998 13:05:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: inet/cidr/bind"
},
{
"msg_contents": "Thus spake Tom Ivar Helbekkmo\n> > if this is agreeable, i'll code it up and submit it here for testing\n> > before i push it out in bind 8.1.3. i really do agree with the\n> > functionality we've drifted toward in this type -- if folks want to\n> > express the netmask of a host address without getting in trouble for\n> > a nonzero host part that's lost during parsing and storage, then\n> > they jolly well ought to be able to do that.\n\nOK, I have cobbled up some functions and modified others in inet.c\nbased on the suggested cidr utility functions. I have made a few\nassumptions which I am sure we will have to look at so that everyone\nis comfortable with the code. I have added the following new functions.\n\nchar *inet_netmask(inet * addr);\nint4 inet_masklen(inet * addr);\nchar *inet_host(inet * addr);\nchar *inet_network_no_bits(inet * addr);\nchar *inet_network_and_bits(inet * addr);\nchar *inet_broadcast(inet * addr);\n\nThe difference between inet_network_no_bits and inet_network_and_bits is\nthat for a.b.c.d/24, the former will return a.b.c and the latter will\nreturn a.b.c/24. I couldn't use inet_network as a name anyway since\nit conflicts with something in arpa/inet.h (On NetBSD -current).\n\nIf someone will add the necessary entries to the catalogues, I'll\nstart testing them. I'll post the changes to inet.c and builtins.h\nto the patches list. I can't guarantee that they are bug-free yet\nbut it does compile and shouldn't interfere with anything anyone\nelse is doing.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sun, 11 Oct 1998 15:24:12 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: inet/cidr/bind"
}
] |
[
{
"msg_contents": "I am no longer able to use CVS update from anoncvs.\n\n$ cvs -z3 -d :pserver:[email protected]:/usr/local/cvsroot update -d -P\ncvs server: Updating .\ncvs server: Updating CVSROOT\ncvs server: failed to create lock directory in repository \n`/usr/local/cvsroot/CVSROOT': Permission denied\ncvs server: failed to obtain dir lock in repository \n`/usr/local/cvsroot/CVSROOT'\ncvs [server aborted]: read lock failed - giving up\n\n\nAm i using the wrong command, or is it a server problem?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"He that covereth his sins shall not prosper; but whoso\n confesseth and forsaketh them shall have mercy.\" \n Proverbs 28:13 \n\n\n",
"msg_date": "Sat, 10 Oct 1998 08:44:01 +0100",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "CVS update"
}
] |
[
{
"msg_contents": "\nOK, i finally got some time. I know I should have gotten\nthis out long ago\n\nI had three actual bugs I had to fix with 6.3.1 for our\nproduction application. \n\n1) can't enter float .001 (FIXED)\n2) Can't dump/restore varchar fields (See attached bug report/patch)\n3) Problems with tcl interface, storing/retrieving lists (under discussion)\n\nIs this the right fix? How do I get the patch applied? \nI haven't supplied patches before. The patch is against\nthe latest snapshot.\n\n-- cary\n\nBug report, can't restore varchar fields\n---------------------------------------\n\nVersion: postgresql snapshot dated oct 9 (well, that's when I downloaded it).\n\nProblem: Pg dump dumps varchar fields as varchar(-5).\n\ncary=> create table fred (id int, name varchar, salary float);\nCREATE\ncary=> \\q\n[cary@jason new]$ pg_dump cary -t fred > /tmp/fred.sql\n[cary@jason new]$ cat /tmp/fred.sql\nCREATE TABLE \"fred\" (\"id\" \"int4\", \"name\" varchar(-5), \"salary\" \"float8\");\nCOPY \"fred\" FROM stdin;\n\\.\n[cary@jason new]$ psql < /tmp/fred.sql\nCREATE TABLE \"fred\" (\"id\" \"int4\", \"name\" varchar(-5), \"salary\" \"float8\");\nERROR: parser: parse error at or near \"-\"\nCOPY \"fred\" FROM stdin;\nEOF\n\nSolution: fix pg_dump\n---------------------------------------- start patch ----------------------\n[cary@jason pg_dump]$ rcsdiff -C 5 pg_dump.c\n===================================================================\nRCS file: RCS/pg_dump.c,v\nretrieving revision 1.1\ndiff -C 5 -r1.1 pg_dump.c\n*** pg_dump.c\t1998/10/10 11:24:22\t1.1\n--- pg_dump.c\t1998/10/10 11:34:47\n***************\n*** 2647,2660 ****\n \t\t\t\t\t\tsprintf(q, \"%s%s%s %s\",\n \t\t\t\t\t\t\t\tq,\n \t\t\t\t\t\t\t\t(actual_atts > 0) ? \", \" : \"\",\n \t\t\t\t\t\t\t\tfmtId(tblinfo[i].attnames[j]),\n \t\t\t\t\t\t\t\ttblinfo[i].typnames[j]);\n! \n! 
\t\t\t\t\t\tsprintf(q, \"%s(%d)\",\n \t\t\t\t\t\t\t\tq,\n \t\t\t\t\t\t\t\ttblinfo[i].atttypmod[j] - VARHDRSZ);\n \t\t\t\t\t\tactual_atts++;\n \t\t\t\t\t}\n \t\t\t\t\telse\n \t\t\t\t\t{\n \t\t\t\t\t\tstrcpy(id1, fmtId(tblinfo[i].attnames[j]));\n--- 2647,2664 ----\n \t\t\t\t\t\tsprintf(q, \"%s%s%s %s\",\n \t\t\t\t\t\t\t\tq,\n \t\t\t\t\t\t\t\t(actual_atts > 0) ? \", \" : \"\",\n \t\t\t\t\t\t\t\tfmtId(tblinfo[i].attnames[j]),\n \t\t\t\t\t\t\t\ttblinfo[i].typnames[j]);\n! \t\t\t\t\t\tif(tblinfo[i].atttypmod[j] != -1) {\n! \t\t\t\t\t\t sprintf(q, \"%s(%d)\",\n \t\t\t\t\t\t\t\tq,\n \t\t\t\t\t\t\t\ttblinfo[i].atttypmod[j] - VARHDRSZ);\n+ \t\t\t\t\t\t}\n+ \t\t\t\t\t\telse {\n+ \t\t\t\t\t\t sprintf(q, \"%s\", q);\n+ \t\t\t\t\t\t}\n \t\t\t\t\t\tactual_atts++;\n \t\t\t\t\t}\n \t\t\t\t\telse\n \t\t\t\t\t{\n \t\t\t\t\t\tstrcpy(id1, fmtId(tblinfo[i].attnames[j]));\n---------------------- end of patch ------------------------------------------\n",
"msg_date": "Sat, 10 Oct 1998 11:31:35 -0400 (EDT)",
"msg_from": "\"Cary B. O'Brien\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug and Patch for dump/restore of varchars"
},
{
"msg_contents": "Applied\n\n\n> \n> OK, i finally got some time. I know I should have gotten\n> this out long ago\n> \n> I had three actual bugs I had to fix with 6.3.1 for our\n> production application. \n> \n> 1) can't enter float .001 (FIXED)\n> 2) Can't dump/restore varchar fields (See attached bug report/patch)\n> 3) Problems with tcl interface, storing/retrieving lists (under discussion)\n> \n> Is this the right fix? How do I get the patch applied? \n> I haven't supplied patches before. The patch is against\n> the latest snapshot.\n> \n> -- cary\n> \n> Bug report, can't restore varchar fields\n> ---------------------------------------\n> \n> Version: postgresql snapshot dated oct 9 (well, that's when I downloaded it).\n> \n> Problem: Pg dump dumps varchar fields as varchar(-5).\n> \n> cary=> create table fred (id int, name varchar, salary float);\n> CREATE\n> cary=> \\q\n> [cary@jason new]$ pg_dump cary -t fred > /tmp/fred.sql\n> [cary@jason new]$ cat /tmp/fred.sql\n> CREATE TABLE \"fred\" (\"id\" \"int4\", \"name\" varchar(-5), \"salary\" \"float8\");\n> COPY \"fred\" FROM stdin;\n> \\.\n> [cary@jason new]$ psql < /tmp/fred.sql\n> CREATE TABLE \"fred\" (\"id\" \"int4\", \"name\" varchar(-5), \"salary\" \"float8\");\n> ERROR: parser: parse error at or near \"-\"\n> COPY \"fred\" FROM stdin;\n> EOF\n> \n> Solution: fix pg_dump\n> ---------------------------------------- start patch ----------------------\n> [cary@jason pg_dump]$ rcsdiff -C 5 pg_dump.c\n> ===================================================================\n> RCS file: RCS/pg_dump.c,v\n> retrieving revision 1.1\n> diff -C 5 -r1.1 pg_dump.c\n> *** pg_dump.c\t1998/10/10 11:24:22\t1.1\n> --- pg_dump.c\t1998/10/10 11:34:47\n> ***************\n> *** 2647,2660 ****\n> \t\t\t\t\t\tsprintf(q, \"%s%s%s %s\",\n> \t\t\t\t\t\t\t\tq,\n> \t\t\t\t\t\t\t\t(actual_atts > 0) ? \", \" : \"\",\n> \t\t\t\t\t\t\t\tfmtId(tblinfo[i].attnames[j]),\n> \t\t\t\t\t\t\t\ttblinfo[i].typnames[j]);\n> ! \n> ! \t\t\t\t\t\tsprintf(q, \"%s(%d)\",\n> \t\t\t\t\t\t\t\tq,\n> \t\t\t\t\t\t\t\ttblinfo[i].atttypmod[j] - VARHDRSZ);\n> \t\t\t\t\t\tactual_atts++;\n> \t\t\t\t\t}\n> \t\t\t\t\telse\n> \t\t\t\t\t{\n> \t\t\t\t\t\tstrcpy(id1, fmtId(tblinfo[i].attnames[j]));\n> --- 2647,2664 ----\n> \t\t\t\t\t\tsprintf(q, \"%s%s%s %s\",\n> \t\t\t\t\t\t\t\tq,\n> \t\t\t\t\t\t\t\t(actual_atts > 0) ? \", \" : \"\",\n> \t\t\t\t\t\t\t\tfmtId(tblinfo[i].attnames[j]),\n> \t\t\t\t\t\t\t\ttblinfo[i].typnames[j]);\n> ! \t\t\t\t\t\tif(tblinfo[i].atttypmod[j] != -1) {\n> ! \t\t\t\t\t\t sprintf(q, \"%s(%d)\",\n> \t\t\t\t\t\t\t\tq,\n> \t\t\t\t\t\t\t\ttblinfo[i].atttypmod[j] - VARHDRSZ);\n> + \t\t\t\t\t\t}\n> + \t\t\t\t\t\telse {\n> + \t\t\t\t\t\t sprintf(q, \"%s\", q);\n> + \t\t\t\t\t\t}\n> \t\t\t\t\t\tactual_atts++;\n> \t\t\t\t\t}\n> \t\t\t\t\telse\n> \t\t\t\t\t{\n> \t\t\t\t\t\tstrcpy(id1, fmtId(tblinfo[i].attnames[j]));\n> ---------------------- end of patch ------------------------------------------\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Sun, 11 Oct 1998 22:06:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug and Patch for dump/restore of varchars"
}
] |
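A note on what the applied patch above actually changes: `atttypmod` is -1 when a varchar has no declared length, and pg_dump subtracted `VARHDRSZ` from it unconditionally, yielding `varchar(-5)`. The corrected decision can be sketched as below — a standalone illustration, not the pg_dump source; the helper name and buffer handling are invented for this sketch, and `VARHDRSZ` is 4 as in the backend headers:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define VARHDRSZ 4              /* varlena length-header size, as in postgres.h */

/* Hypothetical helper mirroring the patched pg_dump logic: append a
 * "(n)" length suffix only when the column actually declared one.
 * An unsized varchar is stored with atttypmod == -1. */
static void format_column_type(char *buf, size_t bufsize,
                               const char *typname, int atttypmod)
{
    if (atttypmod != -1)
        snprintf(buf, bufsize, "%s(%d)", typname, atttypmod - VARHDRSZ);
    else
        snprintf(buf, bufsize, "%s", typname);   /* bare type, no "(-5)" */
}
```

Without the `!= -1` guard, the first branch always runs and the dump emits `varchar(-5)`, which the parser then rejects on reload.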
[
{
"msg_contents": "Something seems to be wrong here unless I am doing something wrong that\nI can't see. Here is a dump of one session.\n\nvex=> select version();\nversion \n-------------------------------------------------------------------------\nPostgreSQL 6.3.2 on i386-unknown-netbsd1.3B, compiled by gcc 2.7.2.2+myc1\n(1 row)\n\nvex=> SELECT relname, relacl FROM pg_class, pg_user\nvex-> WHERE ( relkind = 'r' OR relkind = 'i' OR relkind = 'S') AND\nvex-> relname = 'client' AND\nvex-> usesysid = relowner\nvex-> ORDER BY relname;\nrelname|relacl \n-------+--------------\nclient |{\"=\",\"root=r\"}\n(1 row)\n\nvex=> select user;\ngetpgusername\n-------------\ncarol \n(1 row)\n\nvex=> select * from client;\nERROR: client: Permission denied.\n\nExactly what I expected. Now, here is another.\n\ntrends=> select version();\nversion \n--------------------------------------------------------------------------\nPostgreSQL 6.4.0 on i386-unknown-netbsd1.3.2, compiled by gcc 2.7.2.2+myc1\n(1 row)\n\ntrends=> SELECT relname, relacl FROM pg_class, pg_user\ntrends-> WHERE ( relkind = 'r' OR relkind = 'i' OR relkind = 'S') AND\ntrends-> relname = 'client' AND\ntrends-> usesysid = relowner\ntrends-> ORDER BY relname;\nrelname|relacl \n-------+---------------\nclient |{\"=\",\"db=arwR\"}\n(1 row)\n\ntrends=> select user;\ngetpgusername\n-------------\ndarcy \n(1 row)\n\ntrends=> select * from client;\n[Rows returned as if I had permissions]\n\nAm I missing something or did table permission protections get lost on\nthe way to 6.4?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 10 Oct 1998 11:58:37 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Permissions not working?"
},
{
"msg_contents": ">\n> Something seems to be wrong here unless I am doing something wrong that\n> I can't see. Here is a dump of one session.\n>\n> version\n> -------------------------------------------------------------------------\n> PostgreSQL 6.3.2 on i386-unknown-netbsd1.3B, compiled by gcc 2.7.2.2+myc1\n>\n> relname|relacl\n> -------+--------------\n> client |{\"=\",\"root=r\"}\n>\n> getpgusername\n> -------------\n> carol\n>\n> vex=> select * from client;\n> ERROR: client: Permission denied.\n>\n> Exactly what I expected. Now, here is another.\n\n    I assume user carol does not have usesuper set in pg_shadow.\n\n> version\n> --------------------------------------------------------------------------\n> PostgreSQL 6.4.0 on i386-unknown-netbsd1.3.2, compiled by gcc 2.7.2.2+myc1\n>\n> relname|relacl\n> -------+---------------\n> client |{\"=\",\"db=arwR\"}\n>\n> getpgusername\n> -------------\n> darcy\n>\n> trends=> select * from client;\n> [Rows returned as if I had permissions]\n\n    Here I assume user darcy has usesuper set in pg_shadow. Check\n    and correct me if I'm wrong. The superuser flag is set if you\n    allow darcy to create users on createuser time.\n\n> Am I missing something or did table permission protections get lost on\n> the way to 6.4?\n\n    The only thing changed is that relations accessed due to\n    rewrite rules get checked against the owner of the relation,\n    the rules are fired for (ev_class attribute of pg_rewrite).\n    This was only done for read access due to rules in 6.3 and\n    now does also check for append/write access since we open all\n    rules to regular users in 6.4.\n\n    Someone can now setup a view from tables she has access to\n    and then grant access to the view but does not need to grant\n    access to the tables the view is made of. This is how we make\n    pg_user (a view) publicly readable but protect pg_shadow (the\n    selected table) from public access.\n\n    Or someone can setup rules on insert, update and delete to\n    one table (granted) that do logging of these events into a\n    log table (not granted).\n\n    All the required permissions are checked during the actual\n    query rewriting. Thus, later ACL changes will correctly be\n    in effect. Example:\n\n        Table t1 owner user_a granted select to user_b\n        View v1 owner user_b granted select to user_c\n\n        user_c can select from v1\n\n        Now user_a revokes select on t1 from user_b\n\n        user_c gets 't1: permission denied' on select from v1\n\n        But if user_b is a superuser (usesuper set)\n\n        user_c can still select from v1\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 12 Oct 1998 13:12:21 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Permissions not working?"
},
{
"msg_contents": "Thus spake Jan Wieck\n> I assume user carol does not have usesuper set in pg_shadow.\n\nCorrect.\n\n> Here I assume user darcy has usesuper set in pg_shadow. Check\n> and correct me if I'm wrong. The superuser flag is set if you\n> allow darcy to create users on createuser time.\n\nCorrect again. I half suspected something like this. Perhaps the\nprompt in createuser should be changed to reflect that the user is\nbeing granted full superuser privileges rather than just being able\nto create more users.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 12 Oct 1998 09:03:52 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Permissions not working?"
}
] |
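Jan's t1/v1 walkthrough in the thread above condenses to a small decision rule. The sketch below is a toy model, not PostgreSQL code — every struct and name is invented purely to restate the described 6.4 behaviour: the caller needs read access on the view itself, and the base table is then checked against the view owner's rights at rewrite time, with a superuser owner (usesuper) bypassing that second check:

```c
#include <assert.h>

/* Toy model of the rewrite-time permission check described above.
 * A relation has an owner, a tiny read-ACL, and a flag saying the
 * owner is a superuser; none of this mirrors the real catalog layout. */
typedef struct
{
    int owner;             /* user id of the owner */
    int acl[8];            /* user ids granted read access */
    int nacl;              /* number of entries in acl */
    int owner_is_super;    /* usesuper set for the owner? */
} Rel;

static int has_read(const Rel *r, int user)
{
    int i;
    if (user == r->owner)
        return 1;
    for (i = 0; i < r->nacl; i++)
        if (r->acl[i] == user)
            return 1;
    return 0;
}

/* May 'caller' select from 'view', whose rewrite rule reads 'base'? */
static int view_select_allowed(const Rel *view, const Rel *base, int caller)
{
    if (!has_read(view, caller))
        return 0;                        /* no grant on the view itself */
    if (view->owner_is_super)
        return 1;                        /* superuser owner skips ACL checks */
    return has_read(base, view->owner);  /* checked during query rewriting */
}
```

Running Jan's example through the model: user_c can read v1 while t1 still grants user_b; after the revoke the rewrite-time check fails; and a superuser view owner restores access — which is exactly what D'Arcy observed.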
[
{
"msg_contents": "Hi hackers,\n\nI have trouble understanding the message I got in response to CREATE INDEX:\n\nCREATE INDEX ecix ON ec USING btree ( ec ec_code_ops );\nERROR: RelationInvokeStrategy: cannot evaluate strategy 5\n\nThe strategy in question is defined as:\n\nINSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy, \n amopselect, amopnpages)\n SELECT am.oid, opcl.oid, c.opoid, 5,\n 'btreesel'::regproc, 'btreenpage'::regproc\n FROM pg_am am, pg_opclass opcl, ec_code_ops_tmp c\n WHERE amname = 'btree' and opcname = 'ec_code_ops' \n and c.oprname = '>';\n\nwhere ec_code_ops_tmp is the result of:\n\nSELECT o.oid AS opoid, o.oprname\nINTO TABLE ec_code_ops_tmp\nFROM pg_operator o, pg_type t\nWHERE o.oprleft = t.oid and o.oprright = t.oid\n and t.typname = 'ec_code';\n\nBut I can evaluate this function as an operator without problems:\n\nemp=> select * from test_ec where ec > '5';\n ec\n---------\n 6.2.-.-\n 5.4.1.9\n 5.4.3.9\n5.2.1.114\n(4 rows)\n\n\nThis did not happen to me in older versions, including 6.3.2-release. I am wondering whether something changed in btree after 6.3.2, and how do I accommodate my extensions if it did.\n\nThank you,\n\nGene\n\n(I would appreciate a direct reply, as I am not a member of the hackers list)\n\n",
"msg_date": "Sat, 10 Oct 1998 16:06:16 -0400",
"msg_from": "\"Gene Selkov, Jr.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "a btree strategy hosed in 6.4.BETA1 and in current snapshot"
}
] |
[
{
"msg_contents": "\nI think I have another problem similar to what I just reported. This time it is in GiST. It does not complain when it builds the index but any attempt to use the table with existing GiST indices causes this error:\n\nemp=> select count(*) from pho;\nERROR: index_info: no amop 783 18630 1\n\n--Gene\n",
"msg_date": "Sat, 10 Oct 1998 18:17:36 -0400",
"msg_from": "\"Gene Selkov, Jr.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "yet another problem in recent builds, GIST this time"
}
] |
[
{
"msg_contents": "The attached patches affect the following areas:\n\n1. The make file for 'bin/pgtclsh' was failing on my system because the tcl/tk\n libraries could not be found. I have changed the make file so that it uses\n the contents of tclConfig.sh and tkConfig.sh to obtain the information it\n needs to link the programs.\n\n Affected files (relative to the PGSQL source directory):\n\n src/configure, src/configure.in, src/bin/pgtclsh/Makefile\n\n New files (relative to the PGSQL source directory):\n\n src/bin/pgtclsh/mkMakefile.tcltkdefs.sh.in\n\n2. On my system, '/usr/lib' contains the information for tcl7.6 and tk4.2,\n which is used for the SCO System Administration tool. '/opt/lib' contains\n the information tcl8.0 and tk8.0, which I used for my development purposes.\n Configure was finding the wrong one even if I used '--with-libs=/opt/lib'\n to specify the directory to use. This patch corrects this by changing the\n order in which directories are searched for the [tcl|tk]Config.sh files so\n that '/usr/lib' is searched last. This change is in keeping with the help\n message that states that '--with-libs' is used to specify the site library\n directories for TCL/TK, etc.\n\n Affected files (relative to the PGSQL source directory):\n\n src/configure, src/configure.in\n\n3. The file that creates the 'Makefile.tcldefs' file in 'pl/tcl' left unexpanded\n variable references in the created file. For example:\n\n TCL_LIB_FILE = libtcl8.0${TCL_DBGX}.so\n\n This patch corrects the problem.\n\n4. The installation of 'libpgtcl.so' was failing because 'libpgtcl.so' already\n existed as a symbolic link to a file. This patch corrects the problem by\n explicitly removing libpgtcl.so from the destination directory.\n\n Affected files (relative to the PGSQL source directory):\n\n src/interfaces/libpgtcl/Makefile.in\n\nWith these changes, the only manual changes I make after running configure is\nto add '-o' and '-g' options to the INST_EXE_OPTS, INSTL_LIB_OPTS, and\nINSTL_SHLIB_OPTS variables in 'Makefile.global'. I do this so that the correct\nowner and group are assigned when I install postgreSQL (as root).\n\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Sat, 10 Oct 1998 20:27:06 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 6.4 patches - portability related."
},
{
"msg_contents": "Applied.\n\n\n> The attached patches affect the following areas:\n> \n> 1. The make file for 'bin/pgtclsh' was failing on my system because the tcl/tk\n> libraries could not be found. I have changed the make file so that it uses\n> the contents of tclConfig.sh and tkConfig.sh to obtain the information it\n> needs to link the programs.\n> \n> Affected files (relative to the PGSQL source directory):\n> \n> src/configure, src/configure.in, src/bin/pgtclsh/Makefile\n> \n> New files (relative to the PGSQL source directory):\n> \n> src/bin/pgtclsh/mkMakefile.tcltkdefs.sh.in\n> \n> 2. On my system, '/usr/lib' contains the information for tcl7.6 and tk4.2,\n> which is used for the SCO System Administration tool. '/opt/lib' contains\n> the information tcl8.0 and tk8.0, which I used for my development purposes.\n> Configure was finding the wrong one even if I used '--with-libs=/opt/lib'\n> to specifiy the directory to use. This patch corrects this by changing the\n> order in which directories are searched for the [tcl|tk]Config.sh files so\n> that '/usr/lib' is searched last. This change is in keeping with the help\n> message that states that '--with-libs' is used to specify the site library\n> directories for TCL/TK, etc.\n> \n> Affected files (relative to the PGSQL source directory):\n> \n> src/configure, src/configure.in\n> \n> 3. The file that creates the 'Makefile.tcldefs' file in 'pl/tcl' left unex-\n> panded variable references the created file. For example:\n> \n> TCL_LIB_FILE = libtcl8.0${TCL_DBGX}.so\n> \n> This patch corrects the problem.\n> \n> 4. The installation of 'libpgtcl.so' was failing because 'libpgtcl.so' already\n> existed as a symbolic link to a file. This patch corrects the problem by\n> explicitly removing libpgtcl.so from the destination directory.\n> \n> Affected files (relative to the PGSQL source directory):\n> \n> src/interfaces/libpgtcl/Makefile.in\n> \n> With these changes, the only manual changes I make after running configure is\n> to add '-o' and '-g' options to the INST_EXE_OPTS, INSTL_LIB_OPTS, and\n> INSTL_SHLIB_OPTS variables in 'Makefile.global'. I do this so that the correct\n> owner and group are assigned when I install postgreSQL (as root).\n> \nContent-Description: uw7-1.patch\n\n[Attachment, skipping...]\n\n> ____ | Billy G. Allie | Domain....: [email protected]\n> | /| | 7436 Hartwell | Compuserve: 76337,2061\n> |-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n> |/ |LLIE | (313) 582-1540 | \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Sun, 11 Oct 1998 22:41:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] PostgreSQL 6.4 patches - portability related."
}
] |
[
{
"msg_contents": "After looking into the issue of using PID file locks vs. flock/unlock, I have \ncome to the following conclusions:\n\n1. It is generally agreed that a PID lock file should replace the current me-\n thod of locking (fcntl based locking). (See the message thread with\n '[HACKERS] flock patch breaks things here' in the subject).\n\n2. The purpose of the lock file is to prevent multiple postmasters from run-\n ning on the same port and database.\n\n3. Two PID files will be necessary, one to prevent multiple instances of post-\n masters from running against the same data base, and one to prevent multiple\n instances from using the same port.\n\n4. The database lock will be located in the DATA directory being locked.\n\n5. The port lock will be kept in '/var/opt/pgsql/lock/'.\n\nComments, questions, concerns?\n\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n",
"msg_date": "Sat, 10 Oct 1998 20:47:35 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "postmaster locking issues."
},
{
"msg_contents": "> After looking into the issue of using PID file locks vs. flock/unlock, I have \n> come to the following conclusions:\n> \n> 1. It is generally agreed that a PID lock file should replace the current me-\n> thod of locking (fcntl based locking). (See the message thread with\n> '[HACKERS] flock patch breaks things here' in the subject).\n> \n> 2. The purpose of the lock file is to prevent multiple postmasters from run-\n> ning on the same port and database.\n> \n> 3. Two PID files will be necessary, one to prevent mulitple instances of post-\n> masters from running against the same data base, and one to prevent \n> multiple\n> instances from using the same port.\n> \n> 4. The database lock will be located in the DATA directory being locked.\n> \n> 5. The port lock will be kept in '/var/opt/pgsql/lock/'.\n\nYes, except lock file should be kept in /tmp. I don't have\n/var/opt/..., and I doubt others do either.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Sat, 10 Oct 1998 21:35:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster locking issues."
},
{
"msg_contents": "> 5. The port lock will be kept in '/var/opt/pgsql/lock/'.\n\nWouldn't the bind() call fail for one of the postmasters if they both try to\nuse the same port?\n\nTaral\n\n",
"msg_date": "Sat, 10 Oct 1998 23:51:58 -0500",
"msg_from": "\"Taral\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] postmaster locking issues."
},
{
"msg_contents": "On Sat, 10 Oct 1998, Bruce Momjian wrote:\n\n> > After looking into the issue of using PID file locks vs. flock/unlock, I have \n> > come to the following conclusions:\n> > \n> > 1. It is generally agreed that a PID lock file should replace the current me-\n> > thod of locking (fcntl based locking). (See the message thread with\n> > '[HACKERS] flock patch breaks things here' in the subject).\n> > \n> > 2. The purpose of the lock file is to prevent multiple postmasters from run-\n> > ning on the same port and database.\n> > \n> > 3. Two PID files will be necessary, one to prevent mulitple instances of post-\n> > masters from running against the same data base, and one to prevent \n> > multiple\n> > instances from using the same port.\n> > \n> > 4. The database lock will be located in the DATA directory being locked.\n> > \n> > 5. The port lock will be kept in '/var/opt/pgsql/lock/'.\n> \n> Yes, except lock file should be kept in /tmp. I don't have\n> /var/opt/..., and I doubt others do either.\n\nMy RedHat system doesn't have /var/opt either. I'd agree with /tmp as\nthat's been in every unix style system I've used so far.\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Sun, 11 Oct 1998 09:47:31 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster locking issues."
}
] |
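The PID-file approach agreed on in the thread above (one lock per data directory, one per port) is the classic exclusive-create lock. A hedged sketch follows — not the eventual postmaster code; the stale-lock policy and error handling are illustrative only, and the race between the stale check and the re-create is deliberately ignored here:

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Try to take a PID-file lock.  Returns 0 on success, -1 if another
 * live process already holds it.  kill(pid, 0) delivers no signal; it
 * only reports through its return value whether the process exists. */
static int acquire_pid_lock(const char *path)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0644);

    if (fd < 0 && errno == EEXIST)
    {
        FILE *f = fopen(path, "r");
        long  old = 0;

        if (f)
        {
            if (fscanf(f, "%ld", &old) != 1)
                old = 0;
            fclose(f);
        }
        if (old > 0 && kill((pid_t) old, 0) == 0)
            return -1;              /* a live postmaster holds the lock */
        unlink(path);               /* stale file from a dead process */
        fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0644);
    }
    if (fd < 0)
        return -1;

    {
        char buf[32];
        int  n = snprintf(buf, sizeof buf, "%ld\n", (long) getpid());

        if (n > 0 && write(fd, buf, (size_t) n) < 0)
            perror("write pid file");
    }
    close(fd);
    return 0;
}
```

A second postmaster calling this against the same data directory (or the same port's lock file) sees the live pid and backs off; after a crash, the stale file is reclaimed because `kill(pid, 0)` fails for a dead process.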
[
{
"msg_contents": "The SQL script that installs the PL/pgSQL language has the location of the\npostgres library directory hard coded. This patch corrects the problem by\nhaving configure generate the SQL script.\n\nAffected Files (relative to the PGSQL base directory):\n\n src/configure, src/configure.in\n\nDeleted Files (relative to the PGSQL base directory):\n\n src/pl/plpgsql/src/mklang.sql\n\nNew Files (relative to the PGSQL base directory):\n\n src/pl/plpgsql/src/mklang.sql.in\n\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Sat, 10 Oct 1998 23:16:01 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 6.4 patches - portability related."
},
{
"msg_contents": "Applied\n\n\n\n> The SQL script that installs the PL/pgSQL language has the location of the\n> postgres library directory hard coded. This patch corrects the problem by\n> having configure generate the SQL script.\n> \n> Affected Files (relative to the PGSQL base directory):\n> \n> src/configure, src/configure.in\n> \n> Deleted Files (relative to the PGSQL base directory):\n> \n> src/pl/plpgsql/src/mklang.sql\n> \n> New Files (relative to the PGSQL base directory):\n> \n> src/pl/plgsql/src/mklang.sql.in\n> \nContent-Description: uw7-2.patch\n\n[Attachment, skipping...]\n\n> ____ | Billy G. Allie | Domain....: [email protected]\n> | /| | 7436 Hartwell | Compuserve: 76337,2061\n> |-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n> |/ |LLIE | (313) 582-1540 | \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Sun, 11 Oct 1998 22:44:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.4 patches - portability related."
}
] |
[
{
"msg_contents": "\nI've just done a cvs update, and there seem to be very old copies of files\nfrom different directories in with the main core package.\n\nThe following files (all under the src/interfaces/jdbc/postgresql\ndirectory) need to be deleted:\n\n\tChangeLog\n\tPG_Object.java\n\tPGbox.java\n\tPGcircle.java\n\tPGlseg.java\n\tPGpath.java\n\tPGpoint.java\n\tPGpolygon.java\n\tPGtokenizer.java\n\nI'll check that everything else is ok, but these files will just confuse\nthings a little bit.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Sun, 11 Oct 1998 10:44:47 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "Files in wrong place with latest cvs update"
}
] |
[
{
"msg_contents": "Some time in the last 48 hours, a problem was introduced that causes\nany attempt at an error message to crash the backend:\n\n| barsoom:tih> createdb\n| barsoom:tih> psql\n| Welcome to the POSTGRESQL interactive sql monitor:\n| Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n| \n| type \\? for help on slash commands\n| type \\q to quit\n| type \\g or terminate with semicolon to execute query\n| You are currently connected to the database: tih\n| \n| tih=> dfsasdfa;\n| pqReadData() -- backend closed the channel unexpectedly.\n| This probably means the backend terminated abnormally before\n| or while processing the request.\n| We have lost the connection to the backend, so further processing is\n| impossible. Terminating.\n| barsoom:tih> \n\nI did 'gmake distclean', 'cvs update', 'configure', 'gmake all',\n'gmake install', 'initdb', 'postmaster -i -S' and 'createuser tih'\njust before what's logged above, so it was in a pristine environment.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "11 Oct 1998 13:11:48 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Error messages crash backend"
},
{
"msg_contents": "> Some time in the last 48 hours, a problem was introduced that causes\n> any attempt at an error message to crash the backend:\n> \n> | barsoom:tih> createdb\n> | barsoom:tih> psql\n> | Welcome to the POSTGRESQL interactive sql monitor:\n> | Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> | \n> | type \\? for help on slash commands\n> | type \\q to quit\n> | type \\g or terminate with semicolon to execute query\n> | You are currently connected to the database: tih\n> | \n> | tih=> dfsasdfa;\n> | pqReadData() -- backend closed the channel unexpectedly.\n> | This probably means the backend terminated abnormally before\n> | or while processing the request.\n> | We have lost the connection to the backend, so further processing is\n> | impossible. Terminating.\n> | barsoom:tih> \n> \n> I did 'gmake distclean', 'cvs update', 'configure', 'gmake all',\n> 'gmake install', 'initdb', 'postmaster -i -S' and 'createuser tih'\n> just before what's logged above, so it was in a pristine environment.\n\nVery strange. I have made some backend function parameter fixes, but\nthat should not change anything.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Sun, 11 Oct 1998 13:06:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Error messages crash backend"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> Very strange. I have made some backend function parameter fixes, but\n> that should not change anything.\n\nFalse alarm -- sorry. It turns out to be a bug in the signal handling\nin NetBSD-current: when the backend tries to signal(myself,SIGQUIT), it\ngoes down in flames instead. I'll report it through channels.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "12 Oct 1998 10:01:31 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Error messages crash backend"
}
] |
[
{
"msg_contents": "> \"Gene Selkov, Jr.\" <[email protected]> wrote:\n> > \n> > I think I have another problem similar to what I just reported. This time it is in GiST. It does not\n> > complain when it builds the index but any attempt to use the table with exesisting GiST indices causes\n> > this error:\n> \n> You seem to be the only one who actually uses GiST <grin>\n\nSee manuals. There is a web site for gist.\n\n> \n> During last 2-3 years I have posted the question about the usability\n> of GiST indexes to various Postgres lists about 3 times with absolutely \n> no reaction, so assumed that they didn't work at all ;(\n> \n> Could you point me to any information (FAQs, TFMs, ...) about their\n> usage ?\n> \n> I have been under an impression that the easiest way of adding new \n> indexing strategies (I personally need full-text) to postgres would\n> be thru GiST, but as I have had no luck in getting them to waork as \n> they were, I assumed that they were in fact unsupported remnants of \n> a long-forgotten project.\n\nFull text can now be done with the new /contrib/fulltextindex functions.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Sun, 11 Oct 1998 13:07:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: yet another problem in recent builds,\n GIST this time"
},
{
"msg_contents": "\nHannu Krosing <[email protected]> wrote:\n\n> \"Gene Selkov, Jr.\" <[email protected]> wrote:\n> > \n> > I think I have another problem similar to what I just reported. This time it is in GiST. It does not\n> > complain when it builds the index but any attempt to use the table with exesisting GiST indices causes\n> > this error:\n> \n> You seem to be the only one who actually uses GiST <grin>\n>\n> During last 2-3 years I have posted the question about the usability\n> of GiST indexes to various Postgres lists about 3 times with absolutely \n> no reaction, so assumed that they didn't work at all ;(\n\nMaybe I came to know postgres too late, but I got the same feeling shortly after I subscribed to the lists. I had a few people ask about it but I doubt they made it to the point where it becomes useful. Maybe because it is not exactly a plug and play thing.\n\n> Could you point me to any information (FAQs, TFMs, ...) about their\n> usage ?\n\nIf only you and me ask about it, there would hardly be any FAQ. However, please refer to my earlier postings on the subject. I did make some changes to the code but I believe the description of my experience with GiST is up to date. Not sure how to quote the exact hyperlink (because of frames), but you can search the old pgsql-questions list for 'selkov gist'. The first two messages that come up are the most relevant (those dated Thu, 19 Feb 1998 13:40:18 and Wed, 08 Apr 1998 10:25:11)\n\n\n> I have been under an impression that the easiest way of adding new \n> indexing strategies (I personally need full-text) to postgres would\n> be thru GiST, but as I have had no luck in getting them to work as \n> they were, I assumed that they were in fact unsupported remnants of \n> a long-forgotten project.\n\nThis is an abandoned project, but I would be happy to have it preserved in at least the state it was in 6.3.2 and before. It appears screwed up in 6.4.x\n\nTo put the long story short, GiST uses the strategies of R-tree, and it is, in fact, a version of R-tree. Its current implementation does not actually allow you to add new strategies, but it helps you reuse those defined for R-tree with various data types, unlike the postgres R-tree itself, which can only be used with built-ins, such as 2D geo types (boxes, polygons, etc.). There is an example of a GiST over text in Joe Hellerstein's source, http://selkov-7.mcs.anl.gov/pggist-patched.tgz, it might be close to what you need.\n\nI asked Joe about further development and he told me part of it moved to the project referred to as PREDATOR, http://simon.cs.cornell.edu/Info/Projects/PREDATOR/predator.html\n\nI did not yet look into it, but I was told that PREDATOR is a practical test bed for the most advanced indexing technologies. It is also an open source ORDBMS software. I have no idea what it's worth as a database server, but I was advised that it can be used as a development platform for new types and indexing algorithms (even those developed for other systems, such as postgres). \n\nAs to the GiST in postgres, we're on our own here. It is possible to get help from the original developers (in the form of questions and answers), but they are unlikely to do work on it actively.\n\n> I feel happy and revitalised to know the contrary.\n> \n> --------------\n> Hannu Krosing\n\nLikewise, I am pleased to know someone else is thinking about it. Although I am amazed at the rate of progress postgreSQL is making, I wish it remained as science-oriented as it originally was. I believe the extensibility continues to be its major virtue. I witnessed numerous infertile attempts to use commercial business-oriented software for scientific databasing. There is very little you can do with money, int, float, date and text. It is the extensibility of types and access methods that makes any real-world database a gold mine for a researcher.
See, for example, this site (http://wit.mcs.anl.gov/EMP/), where I am trying to put together a retrieval interface to the enzymology database, EMP. In particular, this example illustrates the use of extensions indexed with GiST: http://wit.mcs.anl.gov/EMP/select_emp_advanced.cgi?E1.ec_code=ec&E1.ec_code.op=%7E%09is+in+range&E1.ec_code.patt=2.1&ec_code.count=1&T1.text=tax&T1.text.op=%7E*%09matches+regex.%2C+case-insensitive&T1.tex!\n t.patt=mammalia%7Crodent%7Cprimat%7Caves&T2.text=phd&T2.text.op=%7E%09matches+regex.%2C+case-sensitive&T2.text.patt=v%7CVM%7CMA%7CKC&T3.text=sl&T3.text.op=%7E*%09matches+regex.%2C+case-insensitive&T3.text.patt=cytosol&text.count=1&N1.seg=pho&N1.seg.op=%7E%09contained+in&N1.seg.patt=7+..+7.5&seg.count=1&constraint=%28N1+%26%26+T2%29+%26+E1+%26+T1+%26+T3&do=Run+the+query\n\nAlthough this is still a very young project (as far as databasing goes), it is considered to be a unique achievement. Currently, my life depends on it: postgres and extensions are the only tools in their kind that allow me to accomplish my job before I am fired. You don't have to be familiar with enzymology to figure out that this kind of data can't be successfully used with Oracle or Sybase and clones.\n\nHope this does not scare you off...\n\n--Gene\n",
"msg_date": "Mon, 12 Oct 1998 02:39:14 -0400",
"msg_from": "\"Gene Selkov Jr.\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: yet another problem in recent builds, GIST this time "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > \"Gene Selkov, Jr.\" <[email protected]> wrote:\n> > >\n> > > I think I have another problem similar to what I just reported. This time it is in GiST. It does not\n> > > complain when it builds the index but any attempt to use the table with exesisting GiST indices causes\n> > > this error:\n> >\n> > You seem to be the only one who actually uses GiST <grin>\n> \n> See manuals. There is a web site for gist.\n\nI know. It was put there after I pointed it out ;)\n\nThe website is about GiST, but has very little info about \nactually using GiST with PostgreSQL\n\n> \n> Full text can now be done with the new /contrib/fulltextindex functions.\n> \n\nDoes the code there work with 6.3.2 also, or have I to wait for 6.4 ?\n\n-------------\nHannu\n",
"msg_date": "Mon, 12 Oct 1998 20:33:29 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: yet another problem in recent builds,\n GIST this time"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > \"Gene Selkov, Jr.\" <[email protected]> wrote:\n> > > >\n> > > > I think I have another problem similar to what I just reported. This time it is in GiST. It does not\n> > > > complain when it builds the index but any attempt to use the table with exesisting GiST indices causes\n> > > > this error:\n> > >\n> > > You seem to be the only one who actually uses GiST <grin>\n> > \n> > See manuals. There is a web site for gist.\n> \n> I know. It was put there after I pointed it out ;)\n> \n> The website is about GiST, but has very little info about \n> actually using GiST with PostgreSQL\n> \n> > \n> > Full text can now be done with the new /contrib/fulltextindex functions.\n> > \n> \n> Does the code there work with 6.3.2 also, or have I to wait for 6.4 ?\n\nI think it works for both.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Mon, 12 Oct 1998 15:45:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: yet another problem in recent builds,\n GIST this time"
}
] |
[
{
"msg_contents": "I saw some talk on this subject and thought this might help.\nbye\n\n I suggest that you check out \nHellerstein's Generalized Search Tree (GIST) at \nhttp://gist.CS.Berkeley.EDU:8000/gist/\n\n\n\n",
"msg_date": "Sun, 11 Oct 1998 14:00:59 -0400 (EDT)",
"msg_from": "Wayne <[email protected]>",
"msg_from_op": true,
"msg_subject": "GIST"
}
] |
[
{
"msg_contents": "\n============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name\t\t:\tFrank Ridderbusch\nYour email address\t:\[email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t: Intel Pentium II\n\n Operating System (example: Linux 2.0.26 ELF) \t: Solaris 2.7beta\n\n PostgreSQL version (example: PostgreSQL-6.4) : PostgreSQL-6.4\n\t\t\t\t\t\t Snapshot from 12.Oct.98\n\n Compiler used (example: gcc 2.8.0)\t\t: egcs 1.1\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\nBuild process does not create a shared library in\nsrc/pl/plpgsql/src/Makefile. \n\n\n\nPlease describe a way to repeat the problem. Please try to provide a\nconcise reproducible example, if at all possible: \n----------------------------------------------------------------------\n./configure --prefix=/usr/local/pgsql-6.4\nmake\n\nThe compilation process stops in the directory src/pl/plpgsql/src with \nundefined symbols while linking.\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\nAt the top of src/pl/plpgsql/src/Makefile there is a definition \n\nPORTNAME=solaris_sparc\n\nHowever, this is solaris_i386. And later there are these four lines:\n\nifeq ($(PORTNAME), solaris)\n LDFLAGS_SL := -G -z text\n CFLAGS += $(CFLAGS_SL)\nendif\n\nThere, the LDFLAGS_SL are not correctly set up to build a shared library.\n\nI removed the _sparc from PORTNAME and '-z text' from\nLDFLAGS_SL.
After this everything compiled fine.\n\nMfG/Regards\n--\n /==== Siemens AG\n / Ridderbusch / , ICP CS XS QM4\n / /./ Heinz Nixdorf Ring\n /=== /,== ,===/ /,==, // 33106 Paderborn, Germany\n / // / / // / / \\ Tel.: (49) 5251-8-15211\n/ / `==/\\ / / / \\ Email: [email protected]\n\nSince I have taken all the Gates out of my computer, it finally works!!\n",
"msg_date": "Sun, 11 Oct 1998 21:52:54 +0200 (MDT)",
"msg_from": "Frank Ridderbusch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Minor problem with Solaris 2.7beta"
},
{
"msg_contents": "> \n> Please enter a FULL description of your problem:\n> ------------------------------------------------\n> \n> Build process does not create a shared library in\n> src/pl/plpgsql/src/Makefile. \n> \n> \n> \n> Please describe a way to repeat the problem. Please try to provide a\n> concise reproducible example, if at all possible: \n> ----------------------------------------------------------------------\n> ./configure --prefix=/usr/local/pgsql-6.4\n> make\n> \n> The compilation process stops in the directory src/pl/plpgsql/src with \n> undefined symbols while linking.\n> \n> If you know how this problem might be fixed, list the solution below:\n> ---------------------------------------------------------------------\n> At the top of src/pl/plpgsql/src/Makefile there is a definition \n> \n> PORTNAME=solaris_sparc\n\nThis should be solaris_i386 for you. Why did it choose solaris_sparc? \nDid configure guess this value?\n\n> \n> However this is solaris_i386. And later there are these four lines:\n> \n> ifeq ($(PORTNAME), solaris)\n> LDFLAGS_SL := -G -z text\n> CFLAGS += $(CFLAGS_SL)\n> endif\n> \n> There LDFLAGS_SL are not correctly setup to build a shared library.\n> \n> I removed the _sparc from PORTNAME and '-z text' from\n> LDFLAGS_SL. After this everything compiled fine.\n\nOK, the configuration Makefiles looked for 'solaris' while we now have\n'solaris_i386' and 'solaris_sparc'. I have fixed these files, and\nremoved the '-z text' from the solaris_i386 makefiles. Get a new\nversion of postgresql and let me know how it goes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Sun, 11 Oct 1998 21:04:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Minor problem with Solaris 2.7beta"
},
{
"msg_contents": "Bruce Momjian writes:\n .....\n > > PORTNAME=solaris_sparc\n > \n > This should be solaris_i386 for you. Why did it choose solaris_sparc? \n > Did configure guess this value?\n > \n\nHere is the relevant section from config.status.\n\n...\ns%@host@%i386-pc-solaris2.7%g\ns%@host_alias@%i386-pc-solaris2.7%g\ns%@host_cpu@%i386%g\ns%@host_vendor@%pc%g\ns%@host_os@%solaris2.7%g\ns%@TAS@%tas.o%g\ns%@ODBCINSTDIR@%%g\ns%@CC@%gcc%g\ns%@CC_VERSION@%pgcc-2.91.57%g\ns%@CPP@%gcc -E%g\ns%@PORTNAME@%solaris_sparc%g\n...\n\nApparently configure already got it wrong.\n--\nRegards,\n\tFrank\n",
"msg_date": "Mon, 12 Oct 1998 08:53:49 +0200 (MDT)",
"msg_from": "Frank Ridderbusch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Minor problem with Solaris 2.7beta"
},
{
"msg_contents": "> Bruce Momjian writes:\n> .....\n> > > PORTNAME=solaris_sparc\n> > \n> > This should be solaris_i386 for you. Why did it choose solaris_sparc? \n> > Did configure guess this value?\n> > \n> \n> Here is the relevant section from config.status.\n> \n> ...\n> s%@host@%i386-pc-solaris2.7%g\n> s%@host_alias@%i386-pc-solaris2.7%g\n> s%@host_cpu@%i386%g\n> s%@host_vendor@%pc%g\n> s%@host_os@%solaris2.7%g\n> s%@TAS@%tas.o%g\n> s%@ODBCINSTDIR@%%g\n> s%@CC@%gcc%g\n> s%@CC_VERSION@%pgcc-2.91.57%g\n> s%@CPP@%gcc -E%g\n> s%@PORTNAME@%solaris_sparc%g\n> ...\n> \n> Apparently configure already got it wrong.\n\nI am confused. Please add 'set -x' to the second line of configure and\nrun:\n\n\n\tconfigure >/tmp/x 2>&1\n\nand send me /tmp/x privately. \tI am looking to see why the variable\nhost_no_ver did not match the proper entry in template/.similar.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Mon, 12 Oct 1998 10:06:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Minor problem with Solaris 2.7beta"
}
] |
[
{
"msg_contents": "[email protected] (D'Arcy J.M. Cain) writes:\n\n> I have another question. What is the point of \"used?\" Can't I just\n> assume 4 octets for ipv4 and 6 for ipv6? Can I set it to NULL if I\n> don't care about the value?\n\nNot an answer to your question, but IPV6 does not use 6 byte\naddresses; they are 16 bytes long.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "11 Oct 1998 21:55:45 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: inet/cidr/bind"
}
] |
[
{
"msg_contents": "\n============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name\t\t:\tFrank Ridderbusch\nYour email address\t:\[email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t: MIPS R3000\n\n Operating System (example: Linux 2.0.26 ELF) \t: SINIX 5.42A10\n\t\t\t\t\t\t (SVR4)\n\n PostgreSQL version (example: PostgreSQL-6.4) : PostgreSQL-6.4\n\t\t\t\t\t\t Snapshot from 12.Oct.98\n\n Compiler used (example: gcc 2.8.0)\t\t: cc\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\nMissing section for creating shared library for SVR4 in \nsrc/pl/plpgsql/src/Makefile\n\n\n\nPlease describe a way to repeat the problem. Please try to provide a\nconcise reproducible example, if at all possible: \n----------------------------------------------------------------------\n\n\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\nAdd a section in src/pl/plpgsql/src/Makefile, which should look like\nthis (or in src/pl/plpgsql/src/Makefile.in?):\n\nifeq ($(PORTNAME), svr4)\n LDFLAGS_SL := -G\n CFLAGS += $(CFLAGS_SL)\nendif\n\nMfG/Regards\n--\n /==== Siemens AG\n / Ridderbusch / , ICP CS XS QM4\n / /./ Heinz Nixdorf Ring\n /=== /,== ,===/ /,==, // 33106 Paderborn, Germany\n / // / / // / / \\ Tel.: (49) 5251-8-15211\n/ / `==/\\ / / / \\ Email: [email protected]\n\nSince I have taken all the Gates out of my computer, it finally works!!\n\n",
"msg_date": "Sun, 11 Oct 1998 22:43:43 +0200 (MDT)",
"msg_from": "Frank Ridderbusch <[email protected]>",
"msg_from_op": true,
"msg_subject": "No Shared Libs for SVR4 in src/pl/plpgsql/src/Makefile"
},
{
"msg_contents": "> Please enter a FULL description of your problem:\n> ------------------------------------------------\n> \n> Missing section for creating shared library for SVR4 in \n> src/pl/plpgsql/src/Makefile\n> \n> \n> \n> Please describe a way to repeat the problem. Please try to provide a\n> concise reproducible example, if at all possible: \n> ----------------------------------------------------------------------\n> \n> \n> \n> If you know how this problem might be fixed, list the solution below:\n> ---------------------------------------------------------------------\n> Add a section in src/pl/plpgsql/src/Makefile, which should look like\n> this (or in src/pl/plpgsql/src/Makefile.in?):\n> \n> ifeq ($(PORTNAME), svr4)\n> LDFLAGS_SL := -G\n> CFLAGS += $(CFLAGS_SL)\n> endif\n\nAdded svr4 to the tree. Let me know how it works.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Sun, 11 Oct 1998 21:10:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] No Shared Libs for SVR4 in src/pl/plpgsql/src/Makefile"
}
] |
[
{
"msg_contents": "Can someone on Linux check to see if -export-dynamic and -Bdynamic do\nthe same thing?\n\nThanks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Sun, 11 Oct 1998 17:31:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Linux and -export-dynamic"
},
{
"msg_contents": "> Can someone on Linux check to see of -export-dynamic and -Bdynamic do\n> the same thing?\n> \n> Thanks.\n\nI am also interested if -Bdynamic is the same as -export-dynamic.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Sun, 11 Oct 1998 18:13:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Linux and -export-dynamic"
},
{
"msg_contents": ">> Can someone on Linux check to see of -export-dynamic and -Bdynamic do\n>> the same thing?\n\nI think they are different. From the man page of ld:\n\n-export-dynamic \n When creating an ELF file, add all symbols to the\n dynamic symbol table. Normally, the dynamic symbol\n table contains only symbols which are used by a dy-\n namic object. This option is needed for some uses\n of dlopen.\n\n-Bdynamic\n Link against dynamic libraries. This is only mean-\n ingful on platforms for which shared libraries are\n supported. This option is normally the default on\n such platforms.\n--\nTatsuo Ishii\[email protected]\n",
"msg_date": "Mon, 12 Oct 1998 10:07:59 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux and -export-dynamic "
},
{
"msg_contents": "> >> Can someone on Linux check to see of -export-dynamic and -Bdynamic do\n> >> the same thing?\n> \n> I think they are different. From the man page of ld:\n> \n> -export-dynamic \n> When creating an ELF file, add all symbols to the\n> dynamic symbol table. Normally, the dynamic symbol\n> table contains only symbols which are used by a dy-\n> namic object. This option is needed for some uses\n> of dlopen.\n> \n> -Bdynamic\n> Link against dynamic libraries. This is only mean-\n> ingful on platforms for which shared libraries are\n> supported. This option is normally the default on\n> such platforms.\n\nI figured that out as I read some more. Thanks. BSDI mentions\n-rdynamic for such uses, but seems to understand -export-dynamic too,\nso I am using that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Sun, 11 Oct 1998 21:12:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Linux and -export-dynamic"
}
] |
[
{
"msg_contents": "The current (as of 8 Oct) parser will not let me select a field named\n\"timestamp\" from a table. It worked just fine a week or so ago.\nI presume the problem is that timestamp is also a data type name.\n\nIs this a bug, or can I expect that addition of data types may break\ntables that used to work?\n\ntree=> \\d stringdataseriesclass\n\nTable = stringdataseriesclass\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| label | text not null | var |\n| timestamp | datetime not null | 8 |\n| value | text not null | var |\n+----------------------------------+----------------------------------+-------+\n\ntree=> select timestamp from stringdataseriesclass;\nERROR: parser: parse error at or near \"from\"\n\ntree=> select label from stringdataseriesclass;\nlabel\n-----\n(0 rows)\n\ntree=>\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 11 Oct 1998 18:13:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parser breakage: \"timestamp\" has become a reserved word"
},
{
"msg_contents": "> The current (as of 8 Oct) parser will not let me select a field named\n> \"timestamp\" from a table. It worked just fine a week or so ago.\n> I presume the problem is that timestamp is also a data type name.\n\nThat worked just fine, but declaring a field\n\n TIMESTAMP WITH TIME ZONE\n\ndid not, even though I had thought I'd implemented it quite a while ago.\nApparently the declaration for TIMESTAMP in keywords.c didn't make it\ninto the code, even though it was a token declared in gram.y.\n \nTIMESTAMP needed to be a key word declared in the parser since it\nappears in a multi-word phrase in SQL92.\n\n> Is this a bug, or can I expect that addition of data types may break\n> tables that used to work?\n\nIn principle you can expect that using an SQL92 reserved word for a\ncolumn name will lead to trouble and grief, or at least portability\nproblems ;)\n\nBut it looks like gram.y will allow TIMESTAMP as a column name, so I'll\nput it into the code soon (the next day or so).\n\nThe new docs have several lists of key words, reserved and unreserved,\nand how they relate to Postgres, SQL92, and SQL3. Hopefully that will\nhelp, if one reads the docs. Most people who know what they are doing\ndon't do much reading, so I don't know if it will help much.\n\n - Tom\n",
"msg_date": "Mon, 12 Oct 1998 06:53:11 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parser breakage: \"timestamp\" has become a reserved word"
},
{
"msg_contents": "btw, you can patch the problem yourself in the meantime by adding a line\nto backend/parser/gram.y:\n\n1) in an editor, look for the line starting with \"ColId\"\n (near line # 4639)\n2) add a line in the block of code immediately following the \"ColId\"\nwhich references \"timestamp\" in the same way that the other lines\nreference those reserved words.\n3) re-install the backend (you must have bison available on your system\nto rebuild the gram.y).\n\n - Tom\n",
"msg_date": "Mon, 12 Oct 1998 15:03:55 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parser breakage: \"timestamp\" has become a reserved word"
}
] |
[
{
"msg_contents": "My patches to 'mkMakeFile.tcldefs.sh.in' and 'mkMakefile.tcltkdefs.sh.in', \nwhile they fixed the problem with unexpanded variable references, \nre-introduced a bug that Jan Wieck had squashed. The problem has to do with \nsome shells (ksh95 and some (newer) versions of bash) quoting the output of \nthe set command. This patch corrects that problem while still ensuring that \nall variable references are expanded.\n\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Sun, 11 Oct 1998 19:32:25 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 6.4 patches - portability related."
},
{
"msg_contents": "Applied.\n\n\n> My patches to 'mkMakeFile.tcldefs.sh.in' and 'mkMakefile.tcltkdefs.sh.in', \n> while they fixed the problem with unexpanded variable references, \n> re-introduced a bug that Jan Wieck had squashed. The problem has to do with \n> some shells (ksh95 and some (newer) versions of bash) quoting the output of \n> the set command. This patch corrects that problem while still insuring that \n> all variable references are expanded.\n> \nContent-Description: uw7-3.patch\n\n[Attachment, skipping...]\n\n> ____ | Billy G. Allie | Domain....: [email protected]\n> | /| | 7436 Hartwell | Compuserve: 76337,2061\n> |-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n> |/ |LLIE | (313) 582-1540 | \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Mon, 12 Oct 1998 00:45:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.4 patches - portability related."
}
] |
[
{
"msg_contents": "Hi.\n\nI'm sorry for posting this to the [Hackers] group but I couldn't find an expert\nin the [General] group who can help me. So to speak.\n\nI have a question regarding programming in C/C++ and postgreSQL.\nI'm a C/C++ programmer but not an expert.\n\nMy question is:\nIs there any way that I can save a couple of results to one PGresult * ?\n\nThis is precisely what I'm going to do..\nI'm developing a search engine.\nLet's assume a user enters keywords like \"Search Engine\".\nMy program will separate each token, and postgres will search as\nbelow..\n\nSomething like this\n------------------------\nBEGIN;\nDECLARE portal1 CURSOR FOR select * from db where description ~* '^search\nengine$';\nDECLARE portal2 CURSOR FOR select * from db where description ~* 'search\nengine';\nDECLARE portal3 CURSOR FOR select * from db where description ~* = 'search'\nand description ~* 'engine';\nDECLARE portal4 CURSOR FOR select * from db where description ~* = 'search'\nor description ~* 'engine';\n\nres = PQexec(conn,\"FETCH ALL in portal*\"); // Like this\nEND;\n------------------------\nportal1 should go first, portal4 should go last.\nAnd I don't wanna use any insert/copy command\n(Don't wanna make any new temporary table, because there will be lots of\nqueries)\n\nI really need this function.\n\nIf anyone has any idea please let me know..\n\nThanks in advance.\n--\n// http://korea.co.kr All you need to know about KOREA\n// SeeHyun Lee <[email protected]> ICQ# 3413400\n\n\n",
"msg_date": "Sun, 11 Oct 1998 20:39:31 -0400",
"msg_from": "\"Agape\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question regarding PGresult *"
}
] |
[
{
"msg_contents": "Additions\n---------\ntest new cidr/IP address type (Tom Helbekkmo)\ncomplete rewrite system changes (Jan)\nCREATE TABLE test (x text, s serial) fails if no database creation permission\nregression test all platforms\n\nSerious Items\n------------\nchange pg args for platforms that don't support argv changes\n\t(setproctitle()?, sendmail hack?)\n\nDocs\n----\nman pages/sgml synchronization\ngenerate html/postscript documentation\nmake sure all changes are documented properly\n\nMinor items\n-----------\ncnf-ify still can exhaust memory, make SET KSQO more generic\npermissions on indexes: what do they do? should it be prevented?\nmulti-version concurrency control - work-in-progress for 6.5\nimprove reporting of syntax errors by showing location of error in query\nuse index with constants on functions\nallow chaining of pages to allow >8k tuples\nallow multiple generic operators in expressions without the use of parentheses\ndocument/trigger/rule so changes to pg_shadow create pg_pwd\nlarge objects orphanage\nimprove group handling\nno min/max for oid type\nimprove PRIMARY KEY handling\ngenerate postmaster pid file and remove flock/fcntl lock code\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Mon, 12 Oct 1998 01:01:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Open 6.4 issues"
}
] |
[
{
"msg_contents": "> >> 5. The port lock will be kept in '/var/opt/pgsql/lock/'.\n\nAs everyone has pointed out, we can't count on a specific convention for\nwhere the lock file should go. However, we can probably count on a\nconvention to be followed on a particular OS. That would seem to be\nsomething to define in Makefile.port, which could then be overridden by\na configure option or Makefile.custom entry.\n\nOn my RedHat Linux systems, the appropriate place seems to be /var/lock\nor /var/lock/pgsql.\n\n - Tom\n",
"msg_date": "Mon, 12 Oct 1998 06:16:50 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postmaster locking issues."
},
{
"msg_contents": "> > >> 5. The port lock will be kept in '/var/opt/pgsql/lock/'.\n> \n> As everyone has pointed out, we can't count on a specific convention for\n> where the lock file should go. However, we can probably count on a\n> convention to be followed on a particular OS. That would seem to be\n> something to define in Makefile.port, which could then be overridden by\n> a configure option or Makefile.custom entry.\n> \n> On my RedHat Linux systems, the appropriate place seems to be /var/lock\n> or /var/lock/pgsql.\n\nI have talked to Billy. I see no real value to try chasing around\nOS-specific lock file locations. /tmp is fine, and if they don't have\nsticky bits in /tmp, they have much larger problems than the location of\nthe lock file.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Mon, 12 Oct 1998 09:48:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster locking issues."
},
{
"msg_contents": "> I have talked to Billy. I see no real value to try chasing around\n> OS-specific lock file locations. /tmp is fine, and if they don't have\n> sticky bits in /tmp, they have much larger problems than the location \n> of the lock file.\n\nThe OS-specific areas would be a nice feature, but not for this release\nof course. We could either put in the hooks to specify the area now, and\ndefault to /tmp, or we could put that off until December.\n\nPerhaps Billy and I can work it through then...\n\n - Tom\n",
"msg_date": "Mon, 12 Oct 1998 14:38:44 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postmaster locking issues."
},
{
"msg_contents": "> > I have talked to Billy. I see no real value to try chasing around\n> > OS-specific lock file locations. /tmp is fine, and if they don't have\n> > sticky bits in /tmp, they have much larger problems than the location \n> > of the lock file.\n> \n> The OS-specific areas would be a nice feature, but not for this release\n> of course. We could either put in the hooks to specify the area now, and\n> default to /tmp, or we could put that off until December.\n> \n> Perhaps Billy and I can work it through then...\n\nYes, let's wait. I don't want another round of 'My OS has ...' at this\npoint in the release cycle. I think Marc would have a heart attack.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Mon, 12 Oct 1998 10:44:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster locking issues."
},
{
"msg_contents": "On Mon, 12 Oct 1998, Thomas G. Lockhart wrote:\n\n> > >> 5. The port lock will be kept in '/var/opt/pgsql/lock/'.\n> \n> As everyone has pointed out, we can't count on a specific convention for\n> where the lock file should go. However, we can probably count on a\n> convention to be followed on a particular OS. That would seem to be\n> something to define in Makefile.port, which could then be overridden by\n> a configure option or Makefile.custom entry.\n> \n> On my RedHat Linux systems, the appropriate place seems to be /var/lock\n> or /var/lock/pgsql.\n\n\tFreeBSD has a /var/spool/lock...Solaris has /var/spool/locks\n\n\tHrmmmm...there are a couple of ways of doing this...we could put\nit into the template for the OS as a -DLOCK_DIR=\"\", which would probably\nbe the simplest (instead of having configure search all\npossibilities)...\n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Thu, 15 Oct 1998 05:56:50 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster locking issues."
},
{
"msg_contents": "On Mon, 12 Oct 1998, Bruce Momjian wrote:\n\n> > > >> 5. The port lock will be kept in '/var/opt/pgsql/lock/'.\n> > \n> > As everyone has pointed out, we can't count on a specific convention for\n> > where the lock file should go. However, we can probably count on a\n> > convention to be followed on a particular OS. That would seem to be\n> > something to define in Makefile.port, which could then be overridden by\n> > a configure option or Makefile.custom entry.\n> > \n> > On my RedHat Linux systems, the appropriate place seems to be /var/lock\n> > or /var/lock/pgsql.\n> \n> I have talked to Billy. I see no real value to try chasing around\n> OS-specific lock file locations. /tmp is fine, and if they don't have\n> sticky bits in /tmp, they have much larger problems than the location of\n> the lock file.\n\n\tI personally do not like putting stuff like that into /tmp.../tmp,\nIMHO, is such that if i need to, I can run 'find /tmp -mtime +1 -exec rm\n{} \\;\" without feeling a twinge of worry or guilt...putting lock and/or\npid files in there is just plain *wrong*...\n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Thu, 15 Oct 1998 05:59:35 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster locking issues."
},
{
"msg_contents": "> > On my RedHat Linux systems, the appropriate place seems to be \n> > /var/lock or /var/lock/pgsql.\n> FreeBSD has a /var/spool/lock...Solaris has /var/spool/locks\n> we could put it into the template for the OS as a -DLOCK_DIR=\"\", which \n> would probably be the simplist (instead of having configure searching \n> all possibilities)...\n\nI like this. And one can override it with Makefile.custom if desired.\n\nThere are other items appearing in some of the Makefiles which would do\nbetter to go in the templates. In particular, some of the library\nMakefiles have big chunks of \"ifeq ($(PORTNAME), xxx)\" code containing\nlots of duplicate or repetitive info. We can/should move all of this\nstuff out into the templates and into configure.\n\nSomething to do for v6.5...\n\n - Tom\n",
"msg_date": "Thu, 15 Oct 1998 14:34:43 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postmaster locking issues."
}
] |
[
{
"msg_contents": "> I strongly disagree. If we wish postgresql to please people, we do not want\n> to start out by annoying them with the install. Specifically, some of us\n> (I happen to be one), feel that the FHS is important and that packages\n> should follow it. \n> \n> Also, /tmp is a security risk. Why make it harder for someone trying to\n> use postgresql, say for a real application involving money or confidential\n> information or something, to run a tightly nailed down system. \n> \n> This item should follow platform convention by default (ie, FHS on Linux\n> and other \"progressive\" systems) but be configurable at install time.\n\nOK, I agree, we should allow locks to be placed wherever people want\nthem, and the periodic cleaning of /tmp is a good argument.\n\nPerhaps, rather than using OS-specific stuff, we can just test from\nconfigure for various directories that are writeable, and choose the\nbest one. That seems nicer to me, and fewer configuration headaches.\n\nI assume this would apply to lock and socket files? If so, each client\nfor unix-domain sockets would have to know about the location chosen at\nconfigure time. That sounds messy. I am adding this item to the\nTODO/Open Items list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Mon, 12 Oct 1998 12:06:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postmaster locking issues."
}
] |
[
{
"msg_contents": "Why not just create a ~postgres/locks directory and place the\nsocket file and postmaster locks there. Then we could place a\nlock in each DATA directory signifing the pid of the postmaster\nthat contains the lock?\n\nMatt\n>> I strongly disagree. If we wish postgresql to please people, we do not want\n>> to start out by annoying them with the install. Specifically, some of us\n>> (I happen to be one), feel that the FHS is important and that packages\n>> should follow it. \n>> \n>> Also, /tmp is a security risk. Why make it harder for someone trying to\n>> use postgresql, say for a real application involving money or confidential\n>> information or something, to run a tightly nailed down system. \n>> \n>> This item should follow platform convention by default (ie, FHS on Linux\n>> and other \"progressive\" systems) but be configurable at install time.\n>\n>OK, I agree, we should allow locks to be placed wherever people want\n>them, and the periodic cleaning of /tmp is a good argument.\n>\n>Perhaps, rather than using OS-specific stuff, we can just test from\n>configure for various directories that are writeable, and choose the\n>best one. That seems nicer to me, and fewer configuration headaches.\n>\n>I assume this would apply to lock and socket files? If so, each client\n>for unix-domain sockets would have to know about the location chosen at\n>configure time. That sounds messy. I am adding this item to the\n>TODO/Open Items list.\n>\n>-- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n>\n\n----------\nMatthew C. Aycock\nOperating Systems Analyst/Admin, Senior\nDept Math/CS\nEmory University, Atlanta, GA \nInternet: [email protected] \t\t\n\n\n",
"msg_date": "Mon, 12 Oct 1998 12:23:31 -0400 (EDT)",
"msg_from": "\"Matthew C. Aycock\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postmaster locking issues."
}
] |
[
{
"msg_contents": "The configure script is not correctly substituting the TCL/TK\nlibraries it finds. Please remember that setting variables in the\nconfigure script is not enough to get them substituted into Makefiles\nand such.\n\nPlease apply the following patch and rerun autoconf.\n\nCheers,\nBrook\n\n===========================================================================\n--- configure.in.orig\tMon Oct 12 01:00:20 1998\n+++ configure.in\tMon Oct 12 11:08:29 1998\n@@ -800,6 +800,7 @@\n \t\tUSE_TCL=\n \telse\n \t\tTCL_LIB=-l$TCL_LIB\n+\t\tAC_SUBST(TCL_LIB)\n \tfi\n fi\n \n@@ -883,6 +884,7 @@\n \t\tUSE_TCL=\n \telse\n \t\tTK_LIB=-l$TK_LIB\n+\t\tAC_SUBST(TK_LIB)\n \tfi\n \n \tLIBS=\"$ice_save_LIBS\"\n",
"msg_date": "Mon, 12 Oct 1998 11:27:43 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "TCL/TK library glitches in configure.in"
},
{
"msg_contents": ">\n> The configure script is not correctly substituting the TCL/TK\n> libraries it finds. Please remember that setting variables in the\n> configure script is not enough to get them substituted into Makefiles\n> and such.\n>\n> Please apply the following patch and rerun autoconf.\n\n The funny thing is that commenting out TCL_LIB and TK_LIB\n from Makefile.global works too (even --with-tcl).\n\n>\n> Cheers,\n> Brook\n>\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 12 Oct 1998 20:34:54 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TCL/TK library glitches in configure.in"
},
{
"msg_contents": "I have removed the mention of TCL_LIB from Makefile.global. The new\ncode looks in tcconfig.sh, and gets the values there. I just fixed it\ntoday. Please check for the commented out entries in Makefile.global\nand test it to see if it works. Seems to work here.\n\nHowever, I will apply the patch and remove the Makefile.global comments\nof TCL_LIB, just so we can keep it around. I think Billy is working on\nremoval of that hole section.\n\nThanks.\n\n> The configure script is not correctly substituting the TCL/TK\n> libraries it finds. Please remember that setting variables in the\n> configure script is not enough to get them substituted into Makefiles\n> and such.\n> \n> Please apply the following patch and rerun autoconf.\n> \n> Cheers,\n> Brook\n> \n> ===========================================================================\n> --- configure.in.orig\tMon Oct 12 01:00:20 1998\n> +++ configure.in\tMon Oct 12 11:08:29 1998\n> @@ -800,6 +800,7 @@\n> \t\tUSE_TCL=\n> \telse\n> \t\tTCL_LIB=-l$TCL_LIB\n> +\t\tAC_SUBST(TCL_LIB)\n> \tfi\n> fi\n> \n> @@ -883,6 +884,7 @@\n> \t\tUSE_TCL=\n> \telse\n> \t\tTK_LIB=-l$TK_LIB\n> +\t\tAC_SUBST(TK_LIB)\n> \tfi\n> \n> \tLIBS=\"$ice_save_LIBS\"\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Mon, 12 Oct 1998 15:43:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TCL/TK library glitches in configure.in"
},
{
"msg_contents": "> >\n> > The configure script is not correctly substituting the TCL/TK\n> > libraries it finds. Please remember that setting variables in the\n> > configure script is not enough to get them substituted into Makefiles\n> > and such.\n> >\n> > Please apply the following patch and rerun autoconf.\n> \n> The funny thing is that commenting out TCL_LIB and TK_LIB\n> from Makefile.global works too (even --with-tcl).\n\nYes, this is Billy's fix of yesterday, by using tclconfig.sh.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Mon, 12 Oct 1998 15:44:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TCL/TK library glitches in configure.in"
},
{
"msg_contents": "> > >\n> > > The configure script is not correctly substituting the TCL/TK\n> > > libraries it finds. Please remember that setting variables in the\n> > > configure script is not enough to get them substituted into Makefiles\n> > > and such.\n> > >\n> > > Please apply the following patch and rerun autoconf.\n> > \n> > The funny thing is that commenting out TCL_LIB and TK_LIB\n> > from Makefile.global works too (even --with-tcl).\n> \n> Yes, this is Billy's fix of yesterday, by using tclconfig.sh.\n\nSigh...\n\nI REALLY must not make patches in the wee hours of the morning. I forgot to \ncopy the original Makefile.global.in to Makefile.global.in.orig before I \nremoved the references to TCL_LIB and TK_LIB. Hence the changes did not make \nit into the patch.\n\nMy most humble apologies.\n\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n",
"msg_date": "Tue, 13 Oct 1998 02:21:41 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TCL/TK library glitches in configure.in "
},
{
"msg_contents": ">\n> I have removed the mention of TCL_LIB from Makefile.global. The new\n> code looks in tcconfig.sh, and gets the values there. I just fixed it\n> today. Please check for the commented out entries in Makefile.global\n> and test it to see if it works. Seems to work here.\n>\n> However, I will apply the patch and remove the Makefile.global comments\n> of TCL_LIB, just so we can keep it around. I think Billy is working on\n> removal of that hole section.\n>\n> Thanks.\n>\n\n Checked out at Oct. 13 09:52 MET DST (it's +0200 so should be\n 03:52 in your TZ), configured with switches --enable-shared\n and --with-tcl.\n\n Compiled clean and regression went through without a single\n 'failed'.\n\n Linux i486 ELF 2.1.88\n\n BTW: You still mention the rewrite system on the todo list.\n What was left where replacing some functions (OffsetVarNodes\n and the like) by the new versions that are now static in the\n rewriteHandler.\n\n But the old versions are used by the parser somewhere and I'm\n not totally sure if switching there to use the new ones would\n have any side effects.\n\n Things are stable now and working for what we currently have.\n I'll not touch the rewrite system (except for bugs) again\n before 6.4 is released. Please close the item.\n\n When we go on and develop for 6.5 I think we will work on\n more capabilities in the parser/planner (e.g. subselects in\n the targetlist, outer joins etc.). This will require\n additions in the rewrite system and then it's time to get on\n it again.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 13 Oct 1998 12:12:26 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TCL/TK library glitches in configure.in"
},
{
"msg_contents": "> Sigh...\n> \n> I REALLY must not make patches in the wee hours of the morning. I forgot to \n> copy the original Makefile.global.in to Makefile.global.in.orig before I \n> removed the references to TCL_LIB and TK_LIB. Hence the changes did not make \n> it into the patch.\n> \n> My most humble apologies.\n\nI had done the same thing, and only removed the defines in\nMakefile.global.in yesterday.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 13 Oct 1998 11:13:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TCL/TK library glitches in configure.in"
},
{
"msg_contents": "> >\n> > I have removed the mention of TCL_LIB from Makefile.global. The new\n> > code looks in tcconfig.sh, and gets the values there. I just fixed it\n> > today. Please check for the commented out entries in Makefile.global\n> > and test it to see if it works. Seems to work here.\n> >\n> > However, I will apply the patch and remove the Makefile.global comments\n> > of TCL_LIB, just so we can keep it around. I think Billy is working on\n> > removal of that hole section.\n> >\n> > Thanks.\n> >\n> \n> Checked out at Oct. 13 09:52 MET DST (it's +0200 so should be\n> 03:52 in your TZ), configured with switches --enable-shared\n> and --with-tcl.\n\nCool. I think with Billy's help, we can remove the while tcl testing\nsection from configure.in, at least the part that was bound to specific\ntcl/tk directories and version numbers.\n\n> \n> Compiled clean and regression went through without a single\n> 'failed'.\n> \n> Linux i486 ELF 2.1.88\n> \n> BTW: You still mention the rewrite system on the todo list.\n> What was left where replacing some functions (OffsetVarNodes\n> and the like) by the new versions that are now static in the\n> rewriteHandler.\n> \n> But the old versions are used by the parser somewhere and I'm\n> not totally sure if switching there to use the new ones would\n> have any side effects.\n> \n> Things are stable now and working for what we currently have.\n> I'll not touch the rewrite system (except for bugs) again\n> before 6.4 is released. Please close the item.\n\nOK, this is all I was waiting for. I suspected that was the case, but\nneeded to hear it from you.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 13 Oct 1998 11:27:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TCL/TK library glitches in configure.in"
}
] |
[
{
"msg_contents": "Greetings...\n Having finally gotten an upgrade to the web server hardware here,\nI've installed 6.4 on the new system and want to dump over the first\ndatabase I've chosen to migrate. The old system is the release\nversion of 6.2. I did a pg_dump of the database with the 6.2 version\nof pg_dump, but got errors when I tried to load it into 6.4 on the new\nsystem. So, I figured I should do both the dump and the load with the\n6.4 version of pg_dump. When I try, however, I get the following\nmessage:\n\tConnection to database 'rrc' failed.\n\tFailed to authenticate client as Postgres user 'postgres'\n\tusing <unknown authentication type>: be_recvauth:\n\tunrecognized message type: 131072\n This is with the Oct. 11 snapshot. Any suggestions on how to\nproceed? TIA...\n\n-Brandon :)\n",
"msg_date": "Mon, 12 Oct 1998 16:55:30 -0500 (CDT)",
"msg_from": "Brandon Ibach <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.2 -> 6.4 snapshot"
}
] |
[
{
"msg_contents": "Does somebody have solution for this problem that was discussed here a month ago?\n\n>> \n>> the stream functions on AIX need a size_t for addrlen's in fe-connect.c and pqcomm.c.\n>>This has come up before. AIX wants size_t for certain structures like\n>getsockname(). I believe the third parameter on AIX is size_t, while it\n>used to be int on my machine, but is not socklen_t. Is this correct? \n>The 'int' code works fine for me, but I can see why AIX is having a\n>problem, and perhaps it is time for configure to check on the various\n>types.\n>\n>\tgetsockname(int s, struct sockaddr *name, socklen_t *namelen);\n\nOk, so this gets tricky. In 4.2.1 it is size_t and in 4.3.1 it is as above with socklen_t :-(\n\n\nPeter Gucwa\n\n",
"msg_date": "Mon, 12 Oct 1998 18:52:43 -0400",
"msg_from": "Peter Gucwa <[email protected]>",
"msg_from_op": true,
"msg_subject": "compilation problem on AIX"
},
{
"msg_contents": "On Mon, 12 Oct 1998, Peter Gucwa wrote:\n\n> Does somebody have solution for this problem that was discussed here a month ago?\n> \n> >> \n> >> the stream functions on AIX need a size_t for addrlen's in fe-connect.c and pqcomm.c.\n> >>This has come up before. AIX wants size_t for certain structures like\n> >getsockname(). I believe the third parameter on AIX is size_t, while it\n> >used to be int on my machine, but is not socklen_t. Is this correct? \n> >The 'int' code works fine for me, but I can see why AIX is having a\n> >problem, and perhaps it is time for configure to check on the various\n> >types.\n> >\n> >\tgetsockname(int s, struct sockaddr *name, socklen_t *namelen);\n> \n> Ok, so this gets tricky. In 4.2.1 it is size_t and in 4.3.1 it is as\n> above with socklen_t :-(\n\nIf someone can make me a *short* code stub that fails to compile depending\non which is used, I can add this to configure...\n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Mon, 12 Oct 1998 20:37:59 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] compilation problem on AIX"
}
] |
[
{
"msg_contents": "Does anyone know what asynchronous/synchronous clients in libpq means?\nThere's a real vague reference to it in the HISTORY file for the latest\nbeta, something about a bug being fixed. I didn't see any docs\non this aspect of libpq and was wondering if some wizard could \ntake it upon himself to educate me (this affects the reputation of\nPostgres95 at our company which is going to be installing Postgres95\nat various hospitals around the world.)\n\nThanks,\n\nBrian\n\n",
"msg_date": "Mon, 12 Oct 1998 20:18:13 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: pgsql-interfaces-digest V1 #105"
}
] |
[
{
"msg_contents": "I've just had this sent to me.\n\nIn this case, the correct method that should have been used (in JDBC) is\nstmnt.executeUpdate(), but the fact that he couldn't remove the sequence\nis worrying.\n\nWas this an old problem now fixed?\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n---------- Forwarded message ----------\nDate: Mon, 12 Oct 1998 21:25:21 -0700\nFrom: Jason Venner <[email protected]>\nTo: Peter T Mount <[email protected]>\nSubject: Sequences and jdbc 6.3.2\n\n\nI have an interesting problem.\nI would like to create sequences via the jdbc interface. The problem\nis that if I try to create a sequence via jdbc\n\nStatement stmnt = connection.createStatement();\nstmnt.execute( \"create sequence image_seq start 50\" );\n\nThe creation failes\nexecute returns false and\nthe postmaster spits out\nERROR: cannot create image_seq\n\nThen, forever after, even by psql, I can't create or drop that\nsequence name.\nI have to destroy the database.\n\n",
"msg_date": "Tue, 13 Oct 1998 06:51:39 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sequences and jdbc 6.3.2 (fwd)"
}
] |
[
{
"msg_contents": "> From: [email protected] (D'Arcy J.M. Cain)\n> Date: Sun, 11 Oct 1998 07:17:58 -0400 (EDT)\n> \n> Thus spake Paul A Vixie\n> > \tint\n> > \tinet_cidr_pton(af, src, dst, size, int *used)\n> > \n> > this would work very much the same as inet_net_pton() except for three\n> > things: (1) nonzero trailing mantissas (host parts) would be allowed; (2)\n> > the number of consumed octets of dst would be set into *used; and (3) there\n> > would be absolutely no classful assumptions made if the pfxlen is not\n> > specified in the input.\n> \n> Is there also agreement on the use of -1 to mean unspecified netmask?\n\nok. this means we have to return octets and use an argument for *bits.\n\n> How about the optional input form h.h.h.h:m.m.m.m to specify netmask?\n\ni'd rather avoid this, since cidr does not allow noncontiguous netmasks\nand i'd rather not create another error return case unless it's REALLY\nimportant. is it? as currently specified:\n\n/*\n * static int\n * inet_cidr_pton(af, src, dst, size, *bits)\n * convert network address from presentation to network format.\n * accepts hex octets, hex strings, decimal octets, and /CIDR.\n * \"size\" is in bytes and describes \"dst\". \"bits\" is set to the\n * /CIDR prefix length if one was specified, or -1 otherwise.\n * return:\n * number of octets consumed of \"dst\", or -1 if some failure occurred\n * (check errno). 
ENOENT means it was not a valid network address.\n * note:\n * 192.5.5.1/28 has a nonzero host part, which means it isn't a network\n * as called for by inet_net_pton() but it can be a host address with\n * an included netmask.\n * author:\n * Paul Vixie (ISC), October 1998\n */\nint\ninet_net_pton(int af, const char *src,\n void *dst, size_t size,\n int *bits)\n{\n switch (af) {\n case AF_INET:\n return (inet_cidr_pton_ipv4(src, dst, size, bits));\n default:\n errno = EAFNOSUPPORT;\n return (-1);\n }\n}\n\n> > \tint\n> > \tinet_cidr_ntop(ag, src, len, bits, dst, size)\n> > \n> > this would work very much the same as inet_net_ntop() except that the\n> > size (in octets) of src's mantissa would be given in the new \"len\" argument\n> > and not imputed from \"bits\" as occurs now. \"bits\" would just be output\n> > as the \"/NN\" at the end of the string, and would never be optional.\n> \n> And if bits is -1 then don't print the /NN part, right?\n\nok. here's what that looks like, for comments before i write it:\n\n/*\n * char *\n * inet_cidr_ntop(af, src, len, bits, dst, size)\n * convert network address from network to presentation format.\n * generates \"/CIDR\" style result unless \"bits\" is -1.\n * return:\n * pointer to dst, or NULL if an error occurred (check errno).\n * note:\n * 192.5.5.1/28 has a nonzero host part, which means it isn't a network\n * as called for by inet_net_pton() but it can be a host address with\n * an included netmask.\n * author:\n * Paul Vixie (ISC), October 1998\n */\nchar *\ninet_cidr_ntop(int af, const void *src, size_t len, int bits,\n char *dst, size_t size)\n{\n switch (af) {\n case AF_INET:\n return (inet_cidr_ntop_ipv4(src, len, bits, dst, size));\n default:\n errno = EAFNOSUPPORT;\n return (NULL);\n }\n}\n\n> From: [email protected] (D'Arcy J.M. 
Cain)\n> Date: Sun, 11 Oct 1998 07:40:41 -0400 (EDT)\n> \n> Thus spake Paul A Vixie\n> > \tint\n> > \tinet_cidr_pton(af, src, dst, size, int *used)\n> > \n> > this would work very much the same as inet_net_pton() except for three\n> > things: (1) nonzero trailing mantissas (host parts) would be allowed; (2)\n> > the number of consumed octets of dst would be set into *used; and (3) there\n> > would be absolutely no classful assumptions made if the pfxlen is not\n> > specified in the input.\n> \n> I have another question. What is the point of \"used?\" Can't I just\n> assume 4 octets for ipv4 and 6 for ipv6? Can I set it to NULL if I\n> don't care about the value?\n\nwe probably could have done this until we had to return octets and fill *used\nwith the bits. but more importantly, i think we should still only touch the\noctets in *dst that are nec'y. this is consistent with the _ntop() as well.\n\n> From: [email protected] (D'Arcy J.M. Cain)\n> Date: Sun, 11 Oct 1998 20:22:25 -0400 (EDT)\n> \n> ... [One] more thing. I built my stuff on the assumption that the\n> inet_cidr_ntop function returned char *, not int. I assume that was\n> just an error in your message. In fact, here is the way I added the\n> prototypes to builtins.h.\n\nYes.\n\n> char *inet_cidr_ntop(int af, const void *src, size_t len, int bits, char *dst, size_t size);\n> int inet_cidr_pton(int af, const void *src, void *dst, size_t size, int *used);\n> \n> Is this what you had in mind?\n\nYes. But note that as now proposed, inet_cidr_pton() returns octets not bits\nas earlier proposed, and sets *used to the bits not the octets as earlier\nproposed.\n\nIf there are no further comments?\n\n(In case y'all are wondering, this is how BIND's other library functions\ngot specified, though the driving application wasn't PostGreSQL last time.)\n",
"msg_date": "Tue, 13 Oct 1998 01:11:47 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: inet/cidr/bind "
},
{
"msg_contents": "Thus spake Paul A Vixie\n> > From: [email protected] (D'Arcy J.M. Cain)\n> > How about the optional input form h.h.h.h:m.m.m.m to specify netmask?\n> \n> i'd rather avoid this, since cidr does not allow noncontiguous netmasks\n> and i'd rather not create another error return case unless it's REALLY\n> important. is it? as currently specified:\n\nNot that important I think. It was just a leftover though from earlier\ndiscussions. I just wanted to make sure we considered it. The issue\nof the extra error return came up back then too.\n\n> /*\n> * static int\n> * inet_cidr_pton(af, src, dst, size, *bits)\n> * convert network address from presentation to network format.\n> * accepts hex octets, hex strings, decimal octets, and /CIDR.\n> * \"size\" is in bytes and describes \"dst\". \"bits\" is set to the\n> * /CIDR prefix length if one was specified, or -1 otherwise.\n> * return:\n> * number of octets consumed of \"dst\", or -1 if some failure occurred\n> * (check errno). ENOENT means it was not a valid network address.\n\nSo if it is a network we don't have to fill the whole structure, right?\nWhat happens on these calls?\n\n inet_cidr_pton(af, \"192.5/16\", dst, sizeof dst, &bits);\n inet_cidr_pton(af, \"192.5/24\", dst, sizeof dst, &bits);\n inet_cidr_pton(af, \"192.5.5.1/16\", dst, sizeof dst, &bits);\n\nI'm guessing that the return and bits for each would be (2, 16), (3, 24)\nand (4, 16). Is that correct or since they are all ipv4 addresses would\nthe size always be 4?\n\n> * note:\n> * 192.5.5.1/28 has a nonzero host part, which means it isn't a network\n> * as called for by inet_net_pton() but it can be a host address with\n> * an included netmask.\n> * author:\n> * Paul Vixie (ISC), October 1998\n> */\n> int\n> inet_net_pton(int af, const char *src,\n\ninet_cidr_pton?\n\n> ok. 
here's what that looks like, for comments before i write it:\n> \n> /*\n> * char *\n> * inet_cidr_ntop(af, src, len, bits, dst, size)\n> * convert network address from network to presentation format.\n> * generates \"/CIDR\" style result unless \"bits\" is -1.\n\nSounds right.\n\n> > I have another question. What is the point of \"used?\" Can't I just\n> > assume 4 octets for ipv4 and 6 for ipv6? Can I set it to NULL if I\n> > don't care about the value?\n> \n> we probably could have done this until we had to return octets and fill *used\n> with the bits. but more importantly, i think we should still only touch the\n> octets in *dst that are nec'y. this is consistent with the _ntop() as well.\n\nDoes this mean we need to add a size element to the inet structure?\n\n> Yes. But note that as now proposed, inet_cidr_pton() returns octets not bits\n> as earlier proposed, and sets *used to the bits not the octets as earlier\n> proposed.\n\nOK. I'll wait till your stuff has been added to fix my stuff. That way\nI can test it and send in the final changes once (hopefully.)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 13 Oct 1998 10:32:01 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: inet/cidr/bind"
},
{
"msg_contents": "> So if it is a network we don't have to fill the whole structure, right?\n\nright.\n\n> What happens on these calls?\n> \n> inet_cidr_pton(af, \"192.5/16\", dst, sizeof dst, &bits);\n> inet_cidr_pton(af, \"192.5/24\", dst, sizeof dst, &bits);\n> inet_cidr_pton(af, \"192.5.5.1/16\", dst, sizeof dst, &bits);\n> \n> I'm guessing that the return and bits for each would be (2, 16), (3, 24)\n> and (4, 16). Is that correct or since they are all ipv4 addresses would\n> the size always be 4?\n\nyes. :-). i mean, the former. {2,16}, {3,24}, and {4,16}. ipv4 is the\nfamily of the address but does not dictate the size of the prefix. i still\ndon't want to touch octets which aren't specified, any more than i would\nwant to emit them in _ntop(). but that's my preference speaking -- what is\nyours?\n\n> > int\n> > inet_net_pton(int af, const char *src,\n> \n> inet_cidr_pton?\n\noops, yeah. you can see where i copied this stuff from.\n\n> Does this mean we need to add a size element to the inet structure?\n\ni think so, yes.\n",
"msg_date": "Tue, 13 Oct 1998 09:08:23 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: inet/cidr/bind "
},
{
"msg_contents": "Thus spake Paul A Vixie\n> > inet_cidr_pton(af, \"192.5/16\", dst, sizeof dst, &bits);\n> > inet_cidr_pton(af, \"192.5/24\", dst, sizeof dst, &bits);\n> > inet_cidr_pton(af, \"192.5.5.1/16\", dst, sizeof dst, &bits);\n> > \n> > I'm guessing that the return and bits for each would be (2, 16), (3, 24)\n> > and (4, 16). Is that correct or since they are all ipv4 addresses would\n> > the size always be 4?\n> \n> yes. :-). i mean, the former. {2,16}, {3,24}, and {4,16}. ipv4 is the\n> family of the address but does not dictate the size of the prefix. i still\n> don't want to touch octets which aren't specified, any more than i would\n> want to emit them in _ntop(). but that's my preference speaking -- what is\n> yours?\n\nWell, I don't mind filling in the whole structure. It would simplify\na few things and we wouldn't need to add a size element to the structure.\nThe network function will output it correctly, I think.\n\ninet_network_with_bits('192.5/16') => '192.5/16'\ninet_network_with_bits('192.5.5.1/16') => '192.5/16'\ninet_network_with_bits('192.5/24') => '192.5.0/16'\n\nDoes this seem right?\n\n> > Does this mean we need to add a size element to the inet structure?\n> i think so, yes.\n\nUnless we zero-pad, right?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 13 Oct 1998 12:58:03 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: inet/cidr/bind"
},
{
"msg_contents": "> From: [email protected] (D'Arcy J.M. Cain)\n> Date: Tue, 13 Oct 1998 12:58:03 -0400 (EDT)\n> \n> Well, I don't mind filling in the whole structure. It would simplify\n> a few things and we wouldn't need to add a size element to the structure.\n\nok.\n\n> The network function will output it correctly, I think.\n> \n> inet_network_with_bits('192.5/16') => '192.5/16'\n> inet_network_with_bits('192.5.5.1/16') => '192.5/16'\n> inet_network_with_bits('192.5/24') => '192.5.0/16'\n> \n> Does this seem right?\n\nfor networks, yes.\n\n> > > Does this mean we need to add a size element to the inet structure?\n> > i think so, yes.\n> \n> Unless we zero-pad, right?\n\nok. here's the current proposal. any further comments?\n\n/*\n * char *\n * inet_cidr_ntop(af, src, bits, dst, size)\n * convert network address from network to presentation format.\n * generates \"/CIDR\" style result unless \"bits\" is -1. \"src\"'s\n * size is determined from its \"af\".\n * return:\n * pointer to dst, or NULL if an error occurred (check errno).\n * note:\n * 192.5.5.1/28 has a nonzero host part, which means it isn't a network\n * as called for by inet_net_pton() but it can be a host address with\n * an included netmask.\n * author:\n * Paul Vixie (ISC), October 1998\n */\nchar *\ninet_cidr_ntop(int af, const void *src, int bits, char *dst, size_t size) {\n switch (af) {\n case AF_INET:\n return (inet_cidr_ntop_ipv4(src, bits, dst, size));\n default:\n errno = EAFNOSUPPORT;\n return (NULL);\n }\n}\n\n...\n\n/*\n * int\n * inet_cidr_pton(af, src, dst, *bits)\n * convert network address from presentation to network format.\n * accepts hex octets, hex strings, decimal octets, and /CIDR.\n * \"dst\" is assumed large enough for its \"af\". 
\"bits\" is set to the\n * /CIDR prefix length if one was specified, or -1 otherwise.\n * return:\n * 0 on success, or -1 if some failure occurred (check errno).\n * ENOENT means it was not a valid network address.\n * note:\n * 192.5.5.1/28 has a nonzero host part, which means it isn't a network\n * as called for by inet_net_pton() but it can be a host address with\n * an included netmask.\n * author:\n * Paul Vixie (ISC), October 1998\n */\nint\ninet_cidr_pton(int af, const char *src, void *dst, int *bits) {\n switch (af) {\n case AF_INET:\n return (inet_cidr_pton_ipv4(src, dst, bits));\n default:\n errno = EAFNOSUPPORT;\n return (-1);\n }\n}\n",
"msg_date": "Tue, 13 Oct 1998 11:27:31 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: inet/cidr/bind "
},
{
"msg_contents": "Thus spake Paul A Vixie\n> > The network function will output it correctly, I think.\n> > \n> > inet_network_with_bits('192.5/16') => '192.5/16'\n> > inet_network_with_bits('192.5.5.1/16') => '192.5/16'\n> > inet_network_with_bits('192.5/24') => '192.5.0/16'\n> > \n> > Does this seem right?\n> \n> for networks, yes.\n\nHmm. It _is_ the network function I was talking about. The same inputs\nwhould give the following results.\n\nInput Network (with) Network (without) Host Broadcast\n192.5/16 192.5/16 192.5 192.5.0.0 192.5.255.255\n192.5.5.1/16 192.5/16 192.5 192.5.0.0 192.5.255.255\n192.5/24 192.5.0/16 192.5.0 192.5.0.0 192.5.0.255\n\nOf course, you wouldn't expect the first and last to have the host function\napplied to it. They are probably in a field used to store networks.\n\n> ok. here's the current proposal. any further comments?\n> \n> /*\n> * char *\n> * inet_cidr_ntop(af, src, bits, dst, size)\n> * convert network address from network to presentation format.\n> * generates \"/CIDR\" style result unless \"bits\" is -1. \"src\"'s\n> * size is determined from its \"af\".\n\nAnd size is the available space in dst, right? Perfect.\n\n> /*\n> * int \n> * inet_cidr_pton(af, src, dst, *bits)\n> * convert network address from presentation to network format.\n> * accepts hex octets, hex strings, decimal octets, and /CIDR.\n> * \"dst\" is assumed large enough for its \"af\". \"bits\" is set to the\n> * /CIDR prefix length if one was specified, or -1 otherwise.\n\nThis sounds bang-on to me. How soon before your functions are in the\ntree? I'll start modifying my code based on this but I won't send it\nin until I have tested it against your functions.\n\nBy George! I think we've got it. :-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 13 Oct 1998 22:36:21 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: inet/cidr/bind"
},
{
"msg_contents": "> This sounds bang-on to me. How soon before your functions are in the\n> tree? I'll start modifying my code based on this but I won't send it\n> in until I have tested it against your functions.\n\nI have not seen any patch yet. Paul, was your earlier posting supposed\nto be applied? If so, let me know.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 13 Oct 1998 23:21:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: inet/cidr/bind"
},
{
"msg_contents": "[email protected] (D'Arcy J.M. Cain) writes:\n\n> By George! I think we've got it. :-)\n\nYup! Great work, guys! I like what I see in the tree so far -- just\nwaiting for the transition to complete so I can use my network data\nagain! :-)\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "14 Oct 1998 08:09:26 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: inet/cidr/bind"
},
{
"msg_contents": "> > This sounds bang-on to me. How soon before your functions are in the\n> > tree? I'll start modifying my code based on this but I won't send it\n> > in until I have tested it against your functions.\n> \n> I have not seen any patch yet. Paul, was your earlier posting supposed\n> to be applied? If so, let me know.\n\nno. i will supply new source files to replace and augment the bind-based\nsource files in your current pool. (i'll want the new $Id:$'s for example.)\n",
"msg_date": "Tue, 13 Oct 1998 23:14:24 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: inet/cidr/bind "
},
{
"msg_contents": "> > By George! I think we've got it. :-)\n> Yup! Great work, guys! I like what I see in the tree so far -- just\n> waiting for the transition to complete so I can use my network data\n> again! :-)\n\nI've forgotten who volunteered to write or update docs for this. I need\nto freeze the User's Guide fairly soon (~4 days?), and need to add a\nmention of the CIDR data type in \"datatype.sgml\".\n\nI assume that the README.inet which is currently in the tree is not an\naccurate document now? I would be happy to transcribe a plain text\ndescription into sgml, and would then expect last-minute updates to\nhappen in the sgml source rather than the original plain text. OK?\n\n - Tom\n",
"msg_date": "Wed, 14 Oct 1998 06:42:57 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: inet/cidr/bind"
},
{
"msg_contents": "> > > By George! I think we've got it. :-)\n> > Yup! Great work, guys! I like what I see in the tree so far -- just\n> > waiting for the transition to complete so I can use my network data\n> > again! :-)\n> \n> I've forgotten who volunteered to write or update docs for this. I need\n> to freeze the User's Guide fairly soon (~4 days?), and need to add a\n> mention of the CIDR data type in \"datatype.sgml\".\n> \n> I assume that the README.inet which is currently in the tree is not an\n> accurate document now? I would be happy to transcribe a plain text\n> description into sgml, and would then expect last-minute updates to\n> happen in the sgml source rather than the original plain text. OK?\n\nThere were no volunteers, and in fact, it is still changing. When it is\ndone, one of the few people who have followed the features will have to\nwrite something up.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Oct 1998 12:33:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: inet/cidr/bind"
}
] |
[
{
"msg_contents": "> Does somebody have solution for this problem that was discussed here a month ago?\n> \n> >> \n> >> the stream functions on AIX need a size_t for addrlen's in fe-connect.c and pqcomm.c.\n> >>This has come up before. AIX wants size_t for certain structures like\n> >getsockname(). I believe the third parameter on AIX is size_t, while it\n> >used to be int on my machine, but is not socklen_t. Is this correct? \n> >The 'int' code works fine for me, but I can see why AIX is having a\n> >problem, and perhaps it is time for configure to check on the various\n> >types.\n> >\n> >\tgetsockname(int s, struct sockaddr *name, socklen_t *namelen);\n> \n> Ok, so this gets tricky. In 4.2.1 it is size_t and in 4.3.1 it is as above with socklen_t :-(\n\nI would simply do:\n\n#ifndef size_t\ntypedef int size_t \n#endif\n\n#ifndef socklen_t\ntypedef size_t socklen_t\n#endif\n\nand use socklen_t which is now standard for socket functions\n\nAndreas\n\nPS.: I am back from \"vacation\" and am now happy father of our 16 day old daughter Hannah :-)\n\n\n",
"msg_date": "Tue, 13 Oct 1998 13:52:38 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: compilation problem on AIX"
},
{
"msg_contents": "> PS.: I am back from \"vacation\" and am now happy father of our 16 day \n> old daughter Hannah :-)\n\nCongratulations! We'll smoke a virtual cigar to celebrate.\n\n - Tom\n",
"msg_date": "Tue, 13 Oct 1998 14:14:01 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] AW: compilation problem on AIX"
},
{
"msg_contents": "Andreas Zeugswetter <[email protected]> writes:\n> I would simply do:\n\n> #ifndef size_t\n> typedef int size_t \n> #endif\n\n> #ifndef socklen_t\n> typedef size_t socklen_t\n> #endif\n\nThat has no hope of working, since typedefs generally are not macros.\n\nMarc had the right idea: a configure test is the only real way to\ndiscover how getsockname() is declared. A small problem is that\nconfigure can only detect outright compilation failures, not warnings.\nThat's probably good enough, but people with nonstandard definitions\nof getsockname may have to live with looking at warnings.\n\n> and use socklen_t which is now standard for socket functions\n\nIt is? The machines I have access to think the parameter is plain,\nunvarnished \"int\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Oct 1998 10:38:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] AW: compilation problem on AIX "
},
{
"msg_contents": "> Marc had the right idea: a configure test is the only real way to\n> discover how getsockname() is declared. A small problem is that\n> configure can only detect outright compilation failures, not warnings.\n> That's probably good enough, but people with nonstandard definitions\n> of getsockname may have to live with looking at warnings.\n\nJust redeclare the function with the parameters you expect. Most compilers\nwill fail if you redeclare with parameters of different types or different\nnumber of parameters, but silently ignore functionally identical prototype\nlines.\n\nTaral\n\n",
"msg_date": "Tue, 13 Oct 1998 12:36:31 -0500",
"msg_from": "\"Taral\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] AW: compilation problem on AIX "
}
] |
[
{
"msg_contents": "Hello all,\n\none of these days I discovered that my Tcl/Tk application has memory\nleaks.\nTracing them I discovered that the Tcl interface, libpgtcl , especially\npgtclCmds.c has bugs in pg_select function.\n\nStrange thing, I removed once that bugs and send a diff to developers,\nand I'm positively sure that 6.2.1 version came out with the bug free\nversion.\n\nIt seems that someone has done some changes in pgtclCmds.c but never\ntook the good version from PostgreSQL.org !!!\n\nI cannot find now the changes that I have made then but I can remember\nthat :\n\n1. in pg_select function, there is no PQclear(result) at the end of the\nloop causing memory leaks with every pg_select command that is executed\n2. in pg_select function, looping through the result, in case of\nTCL_BREAK or TCL_ERROR , there is a straight return without releasing\ninfo structure, nor PQClear-ing the result.\n\nPLEASE, who is maintaining libpgtcl, could you contact me directly (\[email protected] ) in order to check that errors and to modify them in order\nto deliver a proper tcl library with 6.4 version ?\n\nThe RedHat 5.0 and 5.1 distribution , including PgAccess is suffering\nfrom the same bug.\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n",
"msg_date": "Tue, 13 Oct 1998 17:46:25 +0300",
"msg_from": "Constantin Teodorescu <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgtclCmds.c has bugs that have been removed once !!!!!!"
}
] |
[
{
"msg_contents": "I am using a i386 machine with RedHat 5.1 installed.\n\nTrying to recompile the 6.3.2 version of PostgreSQL I found the\nfollowing errors :\n\ngcc -I../../../include -I../../../backend -Wall -Wmissing-prototypes\n-I../.. -c ipc.c -o ipc.o\nIn file included from /usr/include/sys/sem.h:31,\n from ipc.c:38:\n/usr/include/sys/sem_buf.h:57: redefinition of `union semun'\nmake[3]: *** [ipc.o] Error 1 \n\n\nand also\n\ngcc -I../../../include -I../../../backend -Wall -Wmissing-prototypes\n-I../.. -c proc.c -o proc.o\nIn file included from /usr/include/sys/sem.h:31,\n from proc.c:71:\n/usr/include/sys/sem_buf.h:57: redefinition of `union semun'\nmake[3]: *** [proc.o] Error 1 \n\n\n>From that point, the compilation was aborted.\n\n\nAny clues ?\n\nPlease cc: me directly to [email protected]\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n",
"msg_date": "Tue, 13 Oct 1998 17:47:07 +0300",
"msg_from": "Constantin Teodorescu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Compiling errors on RedHat 5.1"
}
] |
[
{
"msg_contents": "Getting smaller. I moved some stuff to the TODO list:\n\nAdditions\n---------\nnew functoins and test INET address type(Tom Helbekkmo)\nCREATE TABLE test (x text, s serial) fails if no database creation permission\nregression test all platforms\n\nSerious Items\n------------\nchange pg args for platforms that don't support argv changes\n\t(setproctitle()?, sendmail hack?)\n\nDocs\n----\nman pages/sgml synchronization\ngenerate html/postscript documentation\nmake sure all changes are documented properly\n\nMinor items\n-----------\ncnf-ify still can exhaust memory, make SET KSQO more generic\npermissions on indexes: what do they do? should it be prevented?\nallow multiple generic operators in expressions without the use of parentheses\ndocument/trigger/rule so changes to pg_shadow create pg_pwd\nlarge objects orphanage\nimprove group handling\nimprove PRIMARY KEY handling\ngenerate postmaster pid file and remove flock/fcntl lock code\nadd ability to specifiy location of lock/socket files\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 13 Oct 1998 11:40:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Open 6.4 items"
}
] |
[
{
"msg_contents": "I have removed mentions of TCL_INCDIR,TCL_LIB, TK_INCDIR, and TK_LIB\nfrom the system, and we no longer check for specific tcl/tk versions.\n\nWith Billy's changes, we not use tclConfig.sh and tkConfig.sh. We\nsearch for them in the _normal_ places, and use those for the defines.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 13 Oct 1998 12:45:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "TCL_LIB, TCL_INCDIR removed"
}
] |
[
{
"msg_contents": "Hi,\n\nI took a look at mysql and was very impressed with possibility\nto limit number of rows returned from select. This is very useful\nfeature for Web applications when user need to browse results of\nselection page by page. In my application I have to do full\nselect every time user press button [Next] and show requested page\nusing perl. This works more or less ok for several thousands rows but\ntotally unusable for large selections. But now I'm about to work\nwith big database and I don't know how I'll stay with postgres :-)\nIt'll just doesn't work if customer will wait several minutes just browse\nnext page. Mysql lacks some useful features postgres has \n(subselects, transaction ..) but for most Web applications I need\njust select :-) I dont' know how LIMIT is implemented in Mysql and\nI know it's not in SQL92 standart, but this makes Mysql very popular.\n\nIs it difficult to implement this feature in postgres ?\n\n\tRegards,\n\n\t\tOleg\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 13 Oct 1998 21:31:51 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "What about LIMIT in SELECT ?"
},
{
"msg_contents": "Hi, my 2 cents...\n\nI agree completely, LIMIT would be VERY usefull in web based apps, which\nis all I run. It does not matter to me if it is not part of a formal\nstandard. The idea is so common that it is a defacto standard.\n\nI would not expect it for this release, but could it get put on the TODO\nlist for next time? I am even willing to work at an apprentise level on\nthis with a more expeireanced person that knows this stuff.\n\nA note on implimentation:\nI *used to* :) work with VFP on NT's :(\nAnd the way VFP did LIMIT, it would only return the number of rows asked\nfor, BUT it still did the WHOLE search!\nSo on a larger table, which we had (property tax database for the county),\nif some one put in too vague a query, it would try to collect ALL of the\nrows as the initial result set, then give you the first x rows of that.\n\nThis did save on pushing mass amounts of data out to the browser, but it\nwould have been even better if it could have simply aborted the select\nafter having found x rows.\n\nAlso, it did not have the concept of an offset, so one could not select\n100 rows, starting 200 rows in, which would be REALLY usefull for \"paging\"\nthrough data. I do not know if mySQL or any other has such a concept\neither, but it would be nice.\n\nSo a properly implemented \"LIMIT\" could:\n1. Save pushing mass amounts of data across the web, that no one wants\nany way.\n2. Stop vague queries from bogging down the server.\n(On very larg tables this could be critical!)\n3. Enable \"Paging\" of data. (easyer then now (app. level))\n4. Would be a very nice feather in PostgreSQL's cap that could make it\neven more attractive to those looking at all sorts of databases out there.\n\nHave a great day.\n\nOn Tue, 13 Oct 1998, Oleg Bartunov wrote:\n\n> Hi,\n> \n> I took a look at mysql and was very impressed with possibility\n> to limit number of rows returned from select. 
This is very useful\n> feature for Web applications when user need to browse results of\n> selection page by page. In my application I have to do full\n> select every time user press button [Next] and show requested page\n> using perl. This works more or less ok for several thousands rows but\n> totally unusable for large selections. But now I'm about to work\n> with big database and I don't know how I'll stay with postgres :-)\n> It'll just doesn't work if customer will wait several minutes just browse\n> next page. Mysql lacks some useful features postgres has \n> (subselects, transaction ..) but for most Web applications I need\n> just select :-) I dont' know how LIMIT is implemented in Mysql and\n> I know it's not in SQL92 standart, but this makes Mysql very popular.\n> \n> Is it difficult to implement this feature in postgres ?\n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n\n",
"msg_date": "Tue, 13 Oct 1998 14:53:22 -0400 (EDT)",
"msg_from": "Terry Mackintosh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "\nWhat is wrong with the already implemented FETCH command?\n\n\nOn Tue, 13 Oct 1998, Terry Mackintosh wrote:\n\n> Hi, my 2 cents...\n> \n> I agree completely, LIMIT would be VERY usefull in web based apps, which\n> is all I run. It does not matter to me if it is not part of a formal\n> standard. The idea is so common that it is a defacto standard.\n> \n> I would not expect it for this release, but could it get put on the TODO\n> list for next time? I am even willing to work at an apprentise level on\n> this with a more expeireanced person that knows this stuff.\n> \n> A note on implimentation:\n> I *used to* :) work with VFP on NT's :(\n> And the way VFP did LIMIT, it would only return the number of rows asked\n> for, BUT it still did the WHOLE search!\n> So on a larger table, which we had (property tax database for the county),\n> if some one put in too vague a query, it would try to collect ALL of the\n> rows as the initial result set, then give you the first x rows of that.\n> \n> This did save on pushing mass amounts of data out to the browser, but it\n> would have been even better if it could have simply aborted the select\n> after having found x rows.\n> \n> Also, it did not have the concept of an offset, so one could not select\n> 100 rows, starting 200 rows in, which would be REALLY usefull for \"paging\"\n> through data. I do not know if mySQL or any other has such a concept\n> either, but it would be nice.\n> \n> So a properly implemented \"LIMIT\" could:\n> 1. Save pushing mass amounts of data across the web, that no one wants\n> any way.\n> 2. Stop vague queries from bogging down the server.\n> (On very larg tables this could be critical!)\n> 3. Enable \"Paging\" of data. (easyer then now (app. level))\n> 4. 
Would be a very nice feather in PostgreSQL's cap that could make it\n> even more attractive to those looking at all sorts of databases out there.\n> \n> Have a great day.\n> \n> On Tue, 13 Oct 1998, Oleg Bartunov wrote:\n> \n> > Hi,\n> > \n> > I took a look at mysql and was very impressed with possibility\n> > to limit number of rows returned from select. This is very useful\n> > feature for Web applications when user need to browse results of\n> > selection page by page. In my application I have to do full\n> > select every time user press button [Next] and show requested page\n> > using perl. This works more or less ok for several thousands rows but\n> > totally unusable for large selections. But now I'm about to work\n> > with big database and I don't know how I'll stay with postgres :-)\n> > It'll just doesn't work if customer will wait several minutes just browse\n> > next page. Mysql lacks some useful features postgres has \n> > (subselects, transaction ..) but for most Web applications I need\n> > just select :-) I dont' know how LIMIT is implemented in Mysql and\n> > I know it's not in SQL92 standart, but this makes Mysql very popular.\n> > \n> > Is it difficult to implement this feature in postgres ?\n> > \n> > \tRegards,\n> > \n> > \t\tOleg\n> > \n> > \n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: [email protected], http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> > \n> > \n> \n> Terry Mackintosh <[email protected]> http://www.terrym.com\n> sysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n> \n> Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n> -------------------------------------------------------------------\n> Success Is A Choice ... book by Rick Patino, get it, read it!\n> \n> \n> \n\nMarc G. 
Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Tue, 13 Oct 1998 15:03:48 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> Hi, my 2 cents...\n> \n> I agree completely, LIMIT would be VERY usefull in web based apps, which\n> is all I run. It does not matter to me if it is not part of a formal\n> standard. The idea is so common that it is a defacto standard.\n> \n> I would not expect it for this release, but could it get put on the TODO\n> list for next time? I am even willing to work at an apprentise level on\n> this with a more expeireanced person that knows this stuff.\n\nI assume everyone has read the FAQ item:\n\n\tHow do I <I>select</I> only the first few rows of a query?\n\n\n\tSee the fetch manual page.<P>\n\t\n\tThis only prevents all row results from being transfered to the client.\n\tThe entire query must be evaluated, even if you only want just the first\n\tfew rows. Consider a query that has an order by. There is no way\n\tto return any rows until the entire query is evaluated and sorted.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 13 Oct 1998 17:05:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "On Tue, 13 Oct 1998, Marc G. Fournier wrote:\n\n> \n> What is wrong with the already implemented FETCH command?\n> \n\nAh ... I did not know about it :)\nGuess I should RTFM.\n\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n",
"msg_date": "Wed, 14 Oct 1998 13:07:16 -0400 (EDT)",
"msg_from": "Terry Mackintosh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "On Tue, 13 Oct 1998, Bruce Momjian wrote:\n\n> \tSee the fetch manual page.<P>\n\nOK, I will.\n \t\n> \tThis only prevents all row results from being transfered to the client.\n\nYes, this is good, but this is only half the problem ...\n\n> \tThe entire query must be evaluated, even if you only want just the first\n\n... this is the other half.\n\n> \tfew rows. Consider a query that has an order by. There is no way\n> \tto return any rows until the entire query is evaluated and sorted.\n\nThis is where I was hoping one of you guru types might have some insight,\n-- how to stop short a query at X rows, even if it has an order by.\n\nNo way?\n\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n",
"msg_date": "Wed, 14 Oct 1998 13:33:33 -0400 (EDT)",
"msg_from": "Terry Mackintosh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "Hi all.\nI didn't follow all the posts about this thread.\nSo this post may be out of center.\n\nI think current PostgreSQL lacks the concern to the response to get first\nrows quickly.\nFor example,queries with ORDER BY clause necessarily include sort steps\nand process all target rows to get first rows only.\nSo I modified my code for ORDER BY cases and use on trial.\nI don't understand PostgreSQL sources,so my code is not complete.\n\nI modified my code for the following 2 cases.\n\n1.In many cases the following query uses index scan.\n SELECT * from ... where key > ...; (where (key) is an index)\n If so,we can omit sort steps from the access plan for the following\nquery.\n SELECT * from ... where key > ... order by key;\n\n Currently cursors without sort steps may be sensitive diffrent from\n cursors with sort steps. But no one mind it.\n\n2.In many cases the following query uses index scan same as case 1.\n SELECT * from ... where key < ...;(where (key) is an index)\n If so and if we scan the index backward,we can omit sort steps from\n the access plan for the following query.\n SELECT * from ... where key < ... order by key desc;\n\n To achive this(backward scan),I used hidden(provided for the future ?)code\n that is never executed and is not necessarily correct.\n\nIn the following cases I didn't modify my code to use index scan,\nbecause I couldn't formulate how to tell PostgreSQL optimizer whether\nthe response to get first rows is needed or the throughput to process\nsufficiently many target rows is needed.\n\n3.The access plan made by current PostgreSQL optimizer for a query with\n ORDER BY clause doesn't include index scan.\n\nI thought the use of Tatsuo's QUERY_LIMIT to decide that the responce\nis needed. It is sufficient but not necessary ?\nIn Oracle the hints FIRST_ROWS,ALL_ROWS are used.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Thu, 15 Oct 1998 13:52:32 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] What about LIMIT in SELECT ?"
}
] |
[
{
"msg_contents": "\nI'm still investigating, but here is what initdb is givign me right now...\nif anyone has any suggestions while I debug at this end, great...:\n\n./initdb --pglib=/home/centre/marc/pgsql/lib --pgdata=/home/centre/marc/pgsql/data\n\nWe are initializing the database system with username marc (uid=1416).\nThis user will own all the files and must also own the server process.\n\nCreating Postgres database system directory /home/centre/marc/pgsql/data\n\nCreating Postgres database system directory\n/home/centre/marc/pgsql/data/base\n\nCreating template database in /home/centre/marc/pgsql/data/base/template1\n\nCreating global classes in /home/centre/marc/pgsql/data/base\n\nAdding template1 database to pg_database...\n\nVacuuming template1\nSegmentation Fault - core dumped\nSegmentation Fault - core dumped\nCreating public pg_user view\nSegmentation Fault - core dumped\nSegmentation Fault - core dumped\nSegmentation Fault - core dumped\nmv: cannot access /home/centre/marc/pgsql/data/base/template1/xpg_user\nSegmentation Fault - core dumped\nSegmentation Fault - core dumped\nCreating view pg_rules\nSegmentation Fault - core dumped\nSegmentation Fault - core dumped\nSegmentation Fault - core dumped\nmv: cannot access /home/centre/marc/pgsql/data/base/template1/xpg_rules\nSegmentation Fault - core dumped\nCreating view pg_views\nSegmentation Fault - core dumped\nSegmentation Fault - core dumped\nSegmentation Fault - core dumped\nmv: cannot access /home/centre/marc/pgsql/data/base/template1/xpg_views\nSegmentation Fault - core dumped\nCreating view pg_tables\nSegmentation Fault - core dumped\nSegmentation Fault - core dumped\nSegmentation Fault - core dumped\nmv: cannot access /home/centre/marc/pgsql/data/base/template1/xpg_tables\nSegmentation Fault - core dumped\nCreating view pg_indexes\nSegmentation Fault - core dumped\nSegmentation Fault - core dumped\nSegmentation Fault - core dumped\nmv: cannot access 
/home/centre/marc/pgsql/data/base/template1/xpg_indexes\nSegmentation Fault - core dumped\nLoading pg_description\nSegmentation Fault - core dumped\nSegmentation Fault - core dumped\nSegmentation Fault - core dumped\n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Tue, 13 Oct 1998 14:45:36 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "initdb: major failures under Solaris 2.6/gcc 2.8.1"
}
] |
[
{
"msg_contents": ">Hi, my 2 cents...\n>\n>I agree completely, LIMIT would be VERY usefull in web based apps, which\n>is all I run. It does not matter to me if it is not part of a formal\n>standard. The idea is so common that it is a defacto standard.\n\ni'm not familiar with mysql and using \"LIMIT\" but wouldn't this same effect\nbe achieved by declaring a cursor and fetching however many records in the\ncursor? it's a very noticeable improvement when you only want the first 20\nout of 500 in a 200k record database, at least.\n\njeff\n\n",
"msg_date": "Tue, 13 Oct 1998 14:23:35 -0500",
"msg_from": "\"Jeff Hoffmann\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "On Tue, 13 Oct 1998, Jeff Hoffmann wrote:\n> >I agree completely, LIMIT would be VERY usefull in web based apps, which\n> >is all I run. It does not matter to me if it is not part of a formal\n> >standard. The idea is so common that it is a defacto standard.\n> \n> i'm not familiar with mysql and using \"LIMIT\" but wouldn't this same effect\n> be achieved by declaring a cursor and fetching however many records in the\n> cursor? it's a very noticeable improvement when you only want the first 20\n> out of 500 in a 200k record database, at least.\n\nThe problem with declaring a cursor vs. the \"LIMIT\" clause is that the\n\"LIMIT\" clause, if used properly by the database engine (along with the\ndatabase engine using indexes in \"ORDER BY\" clauses) allows the database\nengine to short-circuit the tail end of the query. That is, if you have 25\nnames and the last one ends with BEAVIS, the database engine doesn't have\nto go through the BUTTHEADS and KENNYs and etc. \n\nTheoretically a cursor is superior to the \"LIMIT\" clause because you're\neventually going to want the B's and K's and etc. anyhow -- but only in a\nstateful enviornment. In the stateless web environment, a cursor is\nuseless because the connection can close at any time even when you're\nusing \"persistent\" connections (and of course when the connection closes\nthe cursor closes). \n\nI wanted very badly to use PostgreSQL for a web project I'm working on,\nbut it just wouldn't do the job :-(. \n\n--\nEric Lee Green [email protected] http://www.linux-hw.com/~eric\n\"To call Microsoft an innovator is like calling the Pope Jewish ...\" \n -- James Love (Consumer Project on Technology)\n\n",
"msg_date": "Tue, 13 Oct 1998 16:24:20 -0400 (EDT)",
"msg_from": "Eric Lee Green <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "On Tue, 13 Oct 1998, Eric Lee Green wrote:\n\n> On Tue, 13 Oct 1998, Jeff Hoffmann wrote:\n> > >I agree completely, LIMIT would be VERY usefull in web based apps, which\n> > >is all I run. It does not matter to me if it is not part of a formal\n> > >standard. The idea is so common that it is a defacto standard.\n> > \n> > i'm not familiar with mysql and using \"LIMIT\" but wouldn't this same effect\n> > be achieved by declaring a cursor and fetching however many records in the\n> > cursor? it's a very noticeable improvement when you only want the first 20\n> > out of 500 in a 200k record database, at least.\n> \n> The problem with declaring a cursor vs. the \"LIMIT\" clause is that the\n> \"LIMIT\" clause, if used properly by the database engine (along with the\n> database engine using indexes in \"ORDER BY\" clauses) allows the database\n> engine to short-circuit the tail end of the query. That is, if you have 25\n> names and the last one ends with BEAVIS, the database engine doesn't have\n> to go through the BUTTHEADS and KENNYs and etc. \n> \n> Theoretically a cursor is superior to the \"LIMIT\" clause because you're\n> eventually going to want the B's and K's and etc. anyhow -- but only in a\n> stateful enviornment. In the stateless web environment, a cursor is\n> useless because the connection can close at any time even when you're\n> using \"persistent\" connections (and of course when the connection closes\n> the cursor closes). \n\nOokay, I'm sorry, butyou lost me here. I haven't gotten into using\nCURSORs/FETCHs yet, since I haven't need it...but can you give an example\nof what you would want to do using a LIMIT? I may be missing something,\nbut wha is the different between using LIMIT to get X records, and\ndefiniing a cursor to FETCH X records?\n\nPractical example of *at least* the LIMIT side would be good, so that we\ncan at least see a physical example of what LIMIT can do that\nCURSORs/FETCH can't...\n\nMarc G. 
Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Tue, 13 Oct 1998 16:48:25 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> Theoretically a cursor is superior to the \"LIMIT\" clause because you're\n> eventually going to want the B's and K's and etc. anyhow -- but only in a\n> stateful enviornment. In the stateless web environment, a cursor is\n> useless because the connection can close at any time even when you're\n> using \"persistent\" connections (and of course when the connection closes\n> the cursor closes). \n> \n> I wanted very badly to use PostgreSQL for a web project I'm working on,\n> but it just wouldn't do the job :-(. \n\nSee my other posting mentioning the FAQ item on this subject. If you\nare going after only one table(no joins), and have no ORDER BY, we could\nshort-circuit the evaluation, but how many queries could use LIMIT in\nthat case? Zero, I think.\n\nWhat we could do is _if_ there is only one table(no joins), and an index\nexists that matches the ORDER BY, we could use the index to\nshort-circuit the query.\n\nI have added this item to the TODO list:\n\n* Allow LIMIT ability on single-table queries that have no ORDER BY or\n a matching index\n \nThis looks do-able, and a real win. Would this make web applications\nhappier? If there is an ORDER BY and no index, or a join, I can't\nfigure out how we would short-circuit the query.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 13 Oct 1998 17:16:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
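Bruce's proposed short-circuit — single table, no join, an index matching the ORDER BY — can be made concrete. PostgreSQL had no LIMIT at the time of this thread, so the sketch below uses SQLite (Python's bundled engine, illustration only); the table and names are invented:

```python
# Sketch of the short-circuit Bruce describes: with an index matching the
# ORDER BY, a LIMIT query can stop after producing the first N index
# entries instead of scanning and sorting the whole table.  SQLite (from
# the Python stdlib) stands in for the backend purely for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE names (lname TEXT)")
conn.execute("CREATE INDEX names_lname ON names (lname)")
conn.executemany("INSERT INTO names VALUES (?)",
                 [("KENNY",), ("BEAVIS",), ("BUTTHEAD",), ("ANDERSON",)])

# Only the first 2 names in index order are produced; the engine never
# needs to visit the KENNYs at the tail end of the index.
rows = conn.execute(
    "SELECT lname FROM names ORDER BY lname LIMIT 2").fetchall()
print(rows)   # [('ANDERSON',), ('BEAVIS',)]
```

Whether the planner can actually walk the index in ORDER BY order (rather than sort after the fact) is exactly the question Jan raises later in the thread.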
{
"msg_contents": "> Ookay, I'm sorry, butyou lost me here. I haven't gotten into using\n> CURSORs/FETCHs yet, since I haven't need it...but can you give an example\n> of what you would want to do using a LIMIT? I may be missing something,\n> but wha is the different between using LIMIT to get X records, and\n> definiing a cursor to FETCH X records?\n> \n> Practical example of *at least* the LIMIT side would be good, so that we\n> can at least see a physical example of what LIMIT can do that\n> CURSORs/FETCH can't...\n\nMy guess in a web application is that the transaction is started for\nevery new page, so you can't have transactions spanning SQL sessions.\n\nLIMIT theoretically would allow you to start up where you left off.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 13 Oct 1998 17:18:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "On Tue, 13 Oct 1998, Marc G. Fournier wrote:\n> On Tue, 13 Oct 1998, Eric Lee Green wrote:\n> > Theoretically a cursor is superior to the \"LIMIT\" clause because you're\n> > eventually going to want the B's and K's and etc. anyhow -- but only in a\n> > stateful enviornment. In the stateless web environment, a cursor is\n> > useless because the connection can close at any time even when you're\n> Ookay, I'm sorry, butyou lost me here. I haven't gotten into using\n> CURSORs/FETCHs yet, since I haven't need it...but can you give an example\n> of what you would want to do using a LIMIT? I may be missing something,\n\nWhoops! Sorry, I goofed in my post (typing faster than my brain :-).\nWhat I *MEANT* to say was that this superiority of cursors was not \napplicable in a web environment.\n\n> but wha is the different between using LIMIT to get X records, and\n> definiing a cursor to FETCH X records?\n\n>From a logical point of view, none. From an implementation point of\nview, it is a matter of speed. Declaring a cursor four times, doing a\nquery four times, and fetching X records four times takes more time\nthan just doing a query with a LIMIT clause four times (assuming your\nquery results in four screenfulls of records).\n\n> Practical example of *at least* the LIMIT side would be good, so that we\n> can at least see a physical example of what LIMIT can do that\n> CURSORs/FETCH can't...\n\nYou can do everything with CURSORs/FETCH that you can do with LIMIT.\nIn a non-web environment, where you have stateful connections, a FETCH\nis always going to be faster than a SELECT...LIMIT statement. (Well,\nit would be if implemented correctly, but I'll leave that to others to\nhaggle over). 
However: In a CGI-type environment, cursors are a huge\nperformance drain because in the example above you end up doing this\nhuge query four times, with its results stored in the cursor four\ntimes, and only a few values are ever fetched from the cursor before it\nis destroyed by the end of the CGI script. \n\nWhereas with the SELECT...LIMIT paradigm, the database engine does NOT\nprocess the entire huge query, it quits processing once it reaches the\nlimit. (Well, at least MySQL does so, if you happen to be using an\n\"ORDER BY\" supported by an index). Obviously doing 1/4th the work four times\nis better than doing the whole tamale four times :-}. \n\n--\nEric Lee Green [email protected] http://www.linux-hw.com/~eric\n\"To call Microsoft an innovator is like calling the Pope Jewish ...\" \n -- James Love (Consumer Project on Technology)\n\n",
"msg_date": "Tue, 13 Oct 1998 18:39:01 -0400 (EDT)",
"msg_from": "Eric Lee Green <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
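Eric's contrast between the two styles can be sketched with Python's DB-API, where `cursor.fetchmany()` plays the role of FETCH on an open cursor; SQLite again stands in for the backend, and the table is invented for illustration:

```python
# The two paging styles from Eric's mail, side by side.  This is an
# illustration with SQLite over the Python DB-API, not PostgreSQL's
# frontend/backend protocol.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(500)])

# Cursor style: one query, then fetch a screenful at a time -- ideal
# while the connection (and with it the cursor) stays open.
cur = conn.execute("SELECT n FROM t ORDER BY n")
page1 = cur.fetchmany(25)

# LIMIT style: each stateless CGI hit runs a fresh, self-limiting query.
page1_again = conn.execute(
    "SELECT n FROM t ORDER BY n LIMIT 25").fetchall()

print(page1 == page1_again)   # True
```

Both deliver the same 25 rows; the dispute in the thread is only about where the remaining 475 get paid for — once per open cursor, or never, if LIMIT can short-circuit.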
{
"msg_contents": "On Tue, 13 Oct 1998, Eric Lee Green wrote:\n\n> Whoops! Sorry, I goofed in my post (typing faster than my brain :-).\n> What I *MEANT* to say was that this superiority of cursors was not \n> applicable in a web environment.\n\n\tS'alright...now please backup your statement with the *why*...\n\n> > but wha is the different between using LIMIT to get X records, and\n> > definiing a cursor to FETCH X records?\n> \n> >From a logical point of view, none. From an implementation point of\n> view, it is a matter of speed. Declaring a cursor four times, doing a\n> query four times, and fetching X records four times takes more time\n> than just doing a query with a LIMIT clause four times (assuming your\n> query results in four screenfulls of records).\n\n\tI'm going to be brain-dead here, since, as I've disclaimered\nbefore, I've not used CURSORs/FETCHs as of yet...one person came back\nalready and stated that, for him, CURSOR/FETCH results were near\ninstantaneous with a 167k+ table...have you tested the two to ensure that,\nin fact, one is/isn't faster then the other?\n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Tue, 13 Oct 1998 18:48:35 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "On Tue, 13 Oct 1998, Bruce Momjian wrote:\n> > Theoretically a cursor is superior to the \"LIMIT\" clause because you're\n> > eventually going to want the B's and K's and etc. anyhow -- but only in a\n> > stateful enviornment. In the stateless web environment, a cursor is\n> > useless because the connection can close at any time even when you're\n> > using \"persistent\" connections (and of course when the connection closes\n> What we could do is _if_ there is only one table(no joins), and an index\n> exists that matches the ORDER BY, we could use the index to\n> short-circuit the query.\n\nThis is exactly what MySQL does in this situation, except that it can use\nthe ORDER BY to do the short circuiting even if there is a join involved\nif all of the elements of the ORDER BY belong to one table. Obviously if\nI'm doing an \"ORDER BY table1.foo table2.bar\" that isn't going to work!\nBut \"select table1.fsname,table1.lname,table2.receivables where \ntable2.receivables > 0 and table1.custnum=table2.custnum order by\n(table1.lname,table1.fsname) limit 50\" can be short-circuited by fiddling\nwith the join order -- table1.fsname table1.lname have to be the first two\nthings in the join order. \n\nWhether this is feasible in PostgreSQL I have no earthly idea. This would\nseem to conflict with the join optimizer.\n\n> happier? If there is an ORDER BY and no index, or a join, I can't\n> figure out how we would short-circuit the query.\n\nIf there is an ORDER BY and no index you can't short-circuit the query.\nMySQL doesn't either. Under certain circumstances (such as above) you can\nshort-circuit a join, but it's unclear whether it'd be easy to add such\na capability to PostgreSQL given the current structure of the query\noptimizer. (And I certainly am not in a position to tackle it, at the\nmoment MySQL is sufficing for my project despite the fact that it is \nquite limited compared to PostgreSQL, I need to get my project finished\nfirst). 
\n\n--\nEric Lee Green [email protected] http://www.linux-hw.com/~eric\n\"To call Microsoft an innovator is like calling the Pope Jewish ...\" \n -- James Love (Consumer Project on Technology)\n\n",
"msg_date": "Tue, 13 Oct 1998 18:55:22 -0400 (EDT)",
"msg_from": "Eric Lee Green <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "On Tue, 13 Oct 1998, Marc G. Fournier wrote:\n> On Tue, 13 Oct 1998, Eric Lee Green wrote:\n> > Whoops! Sorry, I goofed in my post (typing faster than my brain :-).\n> > What I *MEANT* to say was that this superiority of cursors was not \n> > applicable in a web environment.\n> \n> \tS'alright...now please backup your statement with the *why*...\n\nOkay. It is because CGI is a stateless environment. You cannot just keep a\ncursor open and walk up and down it, which is the superiority of cursors\n(it is always faster to walk up and down a pre-SELECT'ed list than it is\nto perform additional SELECTs). You have to destroy it upon exiting the\nCGI script (which presumably just fetched 25 items or so to display on\nan HTML page -- think DejaNews). \n\nCreating a cursor and destroying a cursor take time. Less time, in a\nnormal environment, than it would take to make multiple SELECT statements,\nwhich is the superiority of cursors in a normal environment. But, like I\nsaid, CGI isn't normal -- the CGI script exits at the end of displaying 25\nitems, at which point the cursor is destroyed, thus destroying any benefit\nyou could have gotten while adding additional overhead.\n\nIn addition there is the possibility of short-circuiting the SELECT if\nthere is a LIMIT clause and there is no ORDER BY clause or the ORDER BY\nclause is walking down an index (the later being a possibility only if\nthere is no 'join' involved or if the 'join' is simple enough that it can\nbe done without running afoul of the join optimizer). Cursors, by their\nnature, require performing the entire tamale first. \n\n> > >From a logical point of view, none. From an implementation point of\n> > view, it is a matter of speed. 
Declaring a cursor four times, doing a\n>\n> already and stated that, for him, CURSOR/FETCH results were near\n> instantaneous with a 167k+ table...have you tested the two to ensure that,\n> in fact, one is/isn't faster then the other?\n\nCURSOR/FETCH *SHOULD* be nearly instantaneous, because you're merely\nfetching values from a pre-existing query result. As I said, in normal\n(non-CGI) use, a cursor is FAR superior to a \"LIMIT\" clause. \n\nBut the question of whether declaring a cursor four times and destroying\nit four times takes a sizable amount of time compared to a LIMIT\nclause... it's really hard to test, unfortunately, due to the differing\nnatures of MySQL and PostgreSQL. MySQL starts up a connection very fast\nwhile PostgreSQL takes a while (has anybody done work on the \"herd of\nservers\" concept to tackle that?). It is hard, in a CGI environment, to\ndetermine if the poor speed (in terms of number of hits the server can\ntake) is due to the slow connection startup time or due to the cursor\noverhead. I could write a benchmark program that kept the connection open\nand did just the cursor timings, but I'm not particularly motivated.\n\nI think RMS has a point when he decries the fact that non-free software is\nbecoming more available for Linux (MySQL is definitely non-free)... i.e.,\nthat it takes away people's motivation to improve the free software. The\nonly good part there is that MySQL is hardly suitable for normal database\nwork -- it is very much optimized for web serving and other applications\nof that sort where speed and CGI-friendliness are more important than\nfunctionality.\n\n--\nEric Lee Green [email protected] http://www.linux-hw.com/~eric\n\"To call Microsoft an innovator is like calling the Pope Jewish ...\" \n -- James Love (Consumer Project on Technology)\n\n",
"msg_date": "Tue, 13 Oct 1998 19:17:20 -0400 (EDT)",
"msg_from": "Eric Lee Green <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "I can't speak to the relative efficiencies of the methods, but I do\nperform queries that present data subsets to web browsers using postgresql\nwith the following method:\n\n\t1) collect data input; do cgi query; write tuples to temporary file\n\t2) html page index sent back to browser contains page specific\n\t\treferences to temporary file name and tuple range.\n\t3) Subsequent data retrievals reference temporary file using sed and\n\t\ttuple range\n\t4) temporary file is destroyed 15min after last access time by a\n\t\tbackground process.\nThis consumes disk space, but I assume it conserves memory compared to\na cursor/fetch sequence performed in a persistent db connection.\n\nFor a general purpose query, I'm not sure if there is any other\nalternative to this method unless you wish to reperform the query\nfor each retrieved html page.\n\nMarc Zuckman\[email protected]\n\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n_ Visit The Home and Condo MarketPlace\t\t _\n_ http://www.ClassyAd.com\t\t\t _\n_\t\t\t\t\t\t\t _\n_ FREE basic property listings/advertisements and searches. _\n_\t\t\t\t\t\t\t _\n_ Try our premium, yet inexpensive services for a real\t _\n_ selling or buying edge!\t\t\t\t _\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n\n",
"msg_date": "Tue, 13 Oct 1998 19:27:04 -0400 (EDT)",
"msg_from": "Marc Howard Zuckman <[email protected]>",
"msg_from_op": false,
"msg_subject": "[HACKERS] Alternative to LIMIT in SELECT ?"
},
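The temp-file scheme above can be sketched in a few lines. The helper names, the tab-separated format, and the immediate cleanup are all invented for illustration (the original uses sed for step 3 and a 15-minute background reaper for step 4):

```python
# Sketch of Marc Zuckman's scheme: run the query once, spill all tuples
# to a temporary file, and let later page hits slice the file by tuple
# range (his step 3 uses sed; a line slice does the same job here).
import os
import tempfile

def spill_result(rows):
    """Step 1: write one tuple per line to a temp file, return its name."""
    fd, path = tempfile.mkstemp(suffix=".result")
    with os.fdopen(fd, "w") as f:
        for row in rows:
            f.write("\t".join(map(str, row)) + "\n")
    return path

def fetch_range(path, first, last):
    """Step 3: a later CGI hit pulls just tuples [first, last] by number."""
    with open(path) as f:
        lines = f.readlines()
    return [tuple(l.rstrip("\n").split("\t")) for l in lines[first - 1:last]]

path = spill_result([(i, "name%d" % i) for i in range(1, 101)])
page2 = fetch_range(path, 26, 50)   # tuples 26..50 for page two
os.unlink(path)                     # step 4: cleanup (here: immediately)
print(page2[0], len(page2))         # ('26', 'name26') 25
```

As the mail says, this trades disk space for not re-running the query on every page hit; the query itself runs exactly once.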
{
"msg_contents": "> Whereas with the SELECT...LIMIT paradigm, the database engine does NOT\n> process the entire huge query, it quits processing once it reaches the\n> limit. (Well, at least MySQL does so, if you happen to be using an\n> \"ORDER BY\" supported by an index). Obviously doing 1/4th the work four times\n> is better than doing the whole tamale four times :-}. \n\nAnd no join, I assume.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 13 Oct 1998 20:47:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "This might be off-topic, but...\n\nI've found ExecutorLimit() (in executor/execMain.c) is useful for me\nespecially when issuing an ad-hock query via psql. I personally use\nthe function with customized set command.\n\nset query_limit to 'num';\n\n\tlimit the max number of results returned by the backend\n\nshow query_limit;\n\n\tdisplay the current query limit\n\nreset query_limit;\n\n\tdisable the query limit (unlimited number of results allowed)\n--\nTatsuo Ishii\[email protected]\n",
"msg_date": "Wed, 14 Oct 1998 10:40:41 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Alternative to LIMIT in SELECT ? "
},
{
"msg_contents": "> I've found ExecutorLimit() (in executor/execMain.c) is useful for me\n> especially when issuing an ad-hock query via psql. I personally use\n> the function with customized set command.\n\nLooks interesting. So where are the patches? :)\n\n - Tom\n",
"msg_date": "Wed, 14 Oct 1998 02:11:06 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Alternative to LIMIT in SELECT ?"
},
{
"msg_contents": ">> I've found ExecutorLimit() (in executor/execMain.c) is useful for me\n>> especially when issuing an ad-hock query via psql. I personally use\n>> the function with customized set command.\n>\n>Looks interesting. So where are the patches? :)\n\nI'll post pacthes within 24 hours:-)\n--\nTatsuo Ishii\[email protected]\n",
"msg_date": "Wed, 14 Oct 1998 11:20:09 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Alternative to LIMIT in SELECT ? "
},
{
"msg_contents": ">>> I've found ExecutorLimit() (in executor/execMain.c) is useful for me\n>>> especially when issuing an ad-hock query via psql. I personally use\n>>> the function with customized set command.\n>>\n>>Looks interesting. So where are the patches? :)\n>\n>I'll post pacthes within 24 hours:-)\n\nHere it is.\n--\nTatsuo Ishii\[email protected]\n----------------------------------------------------------------\n*** backend/commands/variable.c.orig\tFri Oct 9 09:56:51 1998\n--- backend/commands/variable.c\tWed Oct 14 13:06:15 1998\n***************\n*** 18,23 ****\n--- 18,27 ----\n #ifdef MULTIBYTE\n #include \"mb/pg_wchar.h\"\n #endif\n+ #ifdef QUERY_LIMIT\n+ #include \"executor/executor.h\"\n+ #include \"executor/execdefs.h\"\n+ #endif\n \n static bool show_date(void);\n static bool reset_date(void);\n***************\n*** 40,45 ****\n--- 44,54 ----\n static bool show_ksqo(void);\n static bool reset_ksqo(void);\n static bool parse_ksqo(const char *);\n+ #ifdef QUERY_LIMIT\n+ static bool show_query_limit(void);\n+ static bool reset_query_limit(void);\n+ static bool parse_query_limit(const char *);\n+ #endif\n \n extern Cost _cpu_page_wight_;\n extern Cost _cpu_index_page_wight_;\n***************\n*** 546,551 ****\n--- 555,600 ----\n }\t/* reset_timezone() */\n \n /*-----------------------------------------------------------------------*/\n+ #ifdef QUERY_LIMIT\n+ static bool\n+ parse_query_limit(const char *value)\n+ {\n+ int32 limit;\n+ \n+ if (value == NULL) {\n+ reset_query_limit();\n+ return(TRUE);\n+ }\n+ limit = pg_atoi(value, sizeof(int32), '\\0');\n+ if (limit <= -1) {\n+ elog(ERROR, \"Bad value for # of query limit (%s)\", value);\n+ }\n+ ExecutorLimit(limit);\n+ return(TRUE);\n+ }\n+ \n+ static bool\n+ show_query_limit(void)\n+ {\n+ int limit;\n+ \n+ limit = ExecutorGetLimit();\n+ if (limit == ALL_TUPLES) {\n+ elog(NOTICE, \"No query limit is set\");\n+ } else {\n+ elog(NOTICE, \"query limit is %d\",limit);\n+ }\n+ return(TRUE);\n+ }\n+ \n+ static 
bool\n+ reset_query_limit(void)\n+ {\n+ ExecutorLimit(ALL_TUPLES);\n+ return(TRUE);\n+ }\n+ #endif\n+ /*-----------------------------------------------------------------------*/\n struct VariableParsers\n {\n \tconst char *name;\n***************\n*** 584,589 ****\n--- 633,643 ----\n \t{\n \t\t\"ksqo\", parse_ksqo, show_ksqo, reset_ksqo\n \t},\n+ #ifdef QUERY_LIMIT\n+ \t{\n+ \t\t\"query_limit\", parse_query_limit, show_query_limit, reset_query_limit\n+ \t},\n+ #endif\n \t{\n \t\tNULL, NULL, NULL, NULL\n \t}\n*** backend/executor/execMain.c.orig\tThu Oct 1 11:03:58 1998\n--- backend/executor/execMain.c\tWed Oct 14 11:24:06 1998\n***************\n*** 83,94 ****\n #undef ALL_TUPLES\n #define ALL_TUPLES queryLimit\n \n- int\t\t\tExecutorLimit(int limit);\n- \n int\n ExecutorLimit(int limit)\n {\n \treturn queryLimit = limit;\n }\n \n #endif\n--- 83,98 ----\n #undef ALL_TUPLES\n #define ALL_TUPLES queryLimit\n \n int\n ExecutorLimit(int limit)\n {\n \treturn queryLimit = limit;\n+ }\n+ \n+ int\n+ ExecutorGetLimit()\n+ {\n+ \treturn queryLimit;\n }\n \n #endif\n*** include/executor/executor.h.orig\tFri Oct 9 10:02:07 1998\n--- include/executor/executor.h\tWed Oct 14 11:24:07 1998\n***************\n*** 86,91 ****\n--- 86,95 ----\n extern TupleTableSlot *ExecutorRun(QueryDesc *queryDesc, EState *estate, int feature, int count);\n extern void ExecutorEnd(QueryDesc *queryDesc, EState *estate);\n extern HeapTuple ExecConstraints(char *caller, Relation rel, HeapTuple tuple);\n+ #ifdef QUERY_LIMIT\n+ extern int ExecutorLimit(int limit);\n+ extern int ExecutorGetLimit(void);\n+ #endif\n \n /*\n * prototypes from functions in execProcnode.c\n",
"msg_date": "Wed, 14 Oct 1998 13:11:20 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Alternative to LIMIT in SELECT ? "
},
{
"msg_contents": "Applied, with one question.\n\n\n> >>> I've found ExecutorLimit() (in executor/execMain.c) is useful for me\n> >>> especially when issuing an ad-hock query via psql. I personally use\n> >>> the function with customized set command.\n> >>\n> >>Looks interesting. So where are the patches? :)\n> >\n> >I'll post pacthes within 24 hours:-)\n> \n> Here it is.\n> + #ifdef QUERY_LIMIT\n> + static bool\n> + parse_query_limit(const char *value)\n> + {\n> + int32 limit;\n> + \n> + if (value == NULL) {\n> + reset_query_limit();\n> + return(TRUE);\n> + }\n\nAny idea how 'value' could be null? I could not see how that would\nhappen. I can see how GEQO could have a NULL when you say ON, and no\nvalue. Same with rplans.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Oct 1998 01:12:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Alternative to LIMIT in SELECT ?"
},
{
"msg_contents": ">> + #ifdef QUERY_LIMIT\n>> + static bool\n>> + parse_query_limit(const char *value)\n>> + {\n>> + int32 limit;\n>> + \n>> + if (value == NULL) {\n>> + reset_query_limit();\n>> + return(TRUE);\n>> + }\n>\n>Any idea how 'value' could be null? I could not see how that would\n>happen.\n\nNot sure. I just followed the way other set commands are doing.\n\n>I can see how GEQO could have a NULL when you say ON, and no\n>value. Same with rplans.\n\nHmm... I think in that case, 'value' would be 'ON', not NULL. right?\n--\nTatsuo Ishii\[email protected]\n",
"msg_date": "Wed, 14 Oct 1998 14:53:54 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Alternative to LIMIT in SELECT ? "
},
{
"msg_contents": "> >> + if (value == NULL) {\n> >> + reset_query_limit();\n> >> + return(TRUE);\n> >> + }\n> >Any idea how 'value' could be null? I could not see how that would\n> >happen.\n> Not sure. I just followed the way other set commands are doing.\n\nThis is how RESET is implemented.\n\n - Tom\n",
"msg_date": "Wed, 14 Oct 1998 06:19:11 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Alternative to LIMIT in SELECT ?"
},
{
"msg_contents": "On Tue, 13 Oct 1998, Bruce Momjian wrote:\n> My guess in a web application is that the transaction is started for\n> every new page, so you can't have transactions spanning SQL sessions.\n> \n> LIMIT theoretically would allow you to start up where you left off.\n\nIt really does depend largly on the architecuture of the website doesn't\nit.\n\nLIMIT probably allows web site developers a quick and dirty way to do what\nshould properly be done with a web-DB proxy. I seem to remember mod_perl\nhaving a solution for this sort of thing.\n\n--\n| Matthew N. Dodd | 78 280Z | 75 164E | 84 245DL | FreeBSD/NetBSD/Sprite/VMS |\n| [email protected] | This Space For Rent | ix86,sparc,m68k,pmax,vax |\n| http://www.jurai.net/~winter | Are you k-rad elite enough for my webpage? |\n\n",
"msg_date": "Wed, 14 Oct 1998 02:25:31 -0400 (EDT)",
"msg_from": "\"Matthew N. Dodd\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "On Tue, 13 Oct 1998, Eric Lee Green wrote:\n> it's really hard to test, unfortunately, due to the differing natures\n> of MySQL and PostgreSQL. MySQL starts up a connection very fast while\n> PostgreSQL takes awhile (has anybody done work on the \"herd of\n> servers\" concept to tackle that?).\n\nIs MySQL really all that much faster? I've got a large number of CLI\nutilities that pull data from a DB on a central server and I'm lagging on\nscroll speed in my xterms for the most part. I've yet to see any\nmeasureable lag in connection setup.\n\nMy hardware isn't all that fast either. (Ultra5 client, Ultra1/170E\nserver.)\n\n--\n| Matthew N. Dodd | 78 280Z | 75 164E | 84 245DL | FreeBSD/NetBSD/Sprite/VMS |\n| [email protected] | This Space For Rent | ix86,sparc,m68k,pmax,vax |\n| http://www.jurai.net/~winter | Are you k-rad elite enough for my webpage? |\n\n",
"msg_date": "Wed, 14 Oct 1998 02:32:30 -0400 (EDT)",
"msg_from": "\"Matthew N. Dodd\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "Eric Lee Green wrote:\n>\n> On Tue, 13 Oct 1998, Jeff Hoffmann wrote:\n> > >I agree completely, LIMIT would be VERY usefull in web based apps, which\n> > >is all I run. It does not matter to me if it is not part of a formal\n> > >standard. The idea is so common that it is a defacto standard.\n> >\n> > i'm not familiar with mysql and using \"LIMIT\" but wouldn't this same effect\n> > be achieved by declaring a cursor and fetching however many records in the\n> > cursor? it's a very noticeable improvement when you only want the first 20\n> > out of 500 in a 200k record database, at least.\n>\n> The problem with declaring a cursor vs. the \"LIMIT\" clause is that the\n> \"LIMIT\" clause, if used properly by the database engine (along with the\n> database engine using indexes in \"ORDER BY\" clauses) allows the database\n> engine to short-circuit the tail end of the query. That is, if you have 25\n> names and the last one ends with BEAVIS, the database engine doesn't have\n> to go through the BUTTHEADS and KENNYs and etc.\n>\n> Theoretically a cursor is superior to the \"LIMIT\" clause because you're\n> eventually going to want the B's and K's and etc. anyhow -- but only in a\n> stateful enviornment. In the stateless web environment, a cursor is\n> useless because the connection can close at any time even when you're\n> using \"persistent\" connections (and of course when the connection closes\n> the cursor closes).\n\n I'm missing something. Well it's right that in the stateless\n web environment a cursor has to be declared and closed for\n any single CGI call. But even if you have a LIMIT clause,\n your CGI must know with which key to start.\n\n So your query must look like\n\n SELECT ... 
WHERE key > 'last processed key' ORDER BY key;\n\n And your key must be unique (or at least contain no duplicate\n entries) or you might miss some rows between the pages (have\n 100 Brown's in the table and last processed key was a Brown\n while using LIMIT).\n\n In postgres you could actually do the following (but read on\n below - it's not optimized correct)\n\n BEGIN;\n DECLARE c CURSOR FOR SELECT ... WHERE key > 'last' ORDER BY key;\n FETCH 20 IN c;\n (process the 20 rows in CGI)\n CLOSE c;\n COMMIT;\n\n Having LIMIT looks more elegant and has less overhead in CGI-\n backend communication. But the cursor version is SQL\n standard and portable.\n\n>\n> I wanted very badly to use PostgreSQL for a web project I'm working on,\n> but it just wouldn't do the job :-(.\n\n I've done some tests and what I found out might be a bug in\n PostgreSQL's query optimizer. Having a table with 25k rows\n where key is a text field with a unique index. Now I used\n EXPLAIN for some queries\n\n SELECT * FROM tab;\n\n results in a seqscan - expected.\n\n SELECT * FROM tab ORDER BY key;\n\n results in a sort->seqscan - I would have\n expected an indexscan!\n\n SELECT * FROM tab WHERE key > 'G';\n\n results in an indexscan - expected.\n\n SELECT * FROM tab WHERE key > 'G' ORDER BY key;\n\n results in a sort->indexscan - hmmm.\n\n These results stay the same even if I blow up the table by\n duplicating all rows (now with a non-unique index) to 100k\n rows and have them presorted in the table.\n\n Needless to say that everything is vacuum'd for statistics.\n\n The last one is the query we would need in the web\n environment used over a cursor as in the example above. But\n due to the sort, the backend selects until the end of the\n table, sorts them and then returns only the first 20 rows\n (out of sorts result).\n\n This is very painful if the qualification (key > ...) 
points\n to the beginning of the key list.\n\n Looking at planner.c I can see, that if there is a sortClause\n in the parsetree, the planner creates a sort node and does\n absolutely not check if there is an index that could be used\n to do it. In the examples above, the sort is absolutely\n needless because the index scan will already return the\n tuples in the right order :-).\n\n Somewhere deep in my brain I found a statement that sorting\n sorted data isn't only unnecessary (except the order\n changes), it is slow too compared against sorting randomly\n ordered data.\n\n Can we fix this before 6.4 release, will it be a past 6.4 or\n am I doing something wrong here? I think it isn't a fix (it's\n a planner enhancement) so it should really be a past 6.4\n item.\n\n For now, the only possibility is to omit the ORDER BY in the\n query and hope the planner will always generate an index scan\n (because of the qualification 'key > ...'). Doing so I\n selected multiple times 20 rows (with the last key qual like\n a CGI would do) in separate transactions. Using cursor and\n fetch speeds up the access by a factor of 1000! But it is\n unsafe and thus NOT RECOMMENDED! It's only a test if cursors\n can do the LIMIT job - and they could if the planner would do\n a better job.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 14 Oct 1998 13:09:21 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
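Jan's `WHERE key > 'last processed key' ORDER BY key` pattern — the stateless replacement for an open cursor — can be sketched as follows. SQLite (Python stdlib) stands in for the backend purely for illustration, with invented key values; as Jan notes, the pattern requires the key to be unique:

```python
# Jan's stateless paging pattern: each CGI hit restarts at the last key
# it delivered, so no cursor has to survive between hits.  SQLite is
# used here only as an illustrative stand-in for the backend.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (key TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO tab VALUES (?)",
                 [("k%03d" % i,) for i in range(100)])

def page(last_key, n=20):
    """One CGI hit: the n keys after last_key, in index order."""
    return [r[0] for r in conn.execute(
        "SELECT key FROM tab WHERE key > ? ORDER BY key LIMIT ?",
        (last_key, n))]

first = page("")            # '' sorts before every key: page one
second = page(first[-1])    # the next hit resumes after the last key shown
print(first[0], second[0])  # k000 k020
```

This is exactly the query shape whose plan Jan complains about: it only pays off if the `key > ...` qualification plus ORDER BY turns into an index scan without a redundant sort on top.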
{
"msg_contents": "On Wed, 14 Oct 1998, Jan Wieck wrote:\n\n> Date: Wed, 14 Oct 1998 13:09:21 +0200 (MET DST)\n> From: Jan Wieck <[email protected]>\n> To: Eric Lee Green <[email protected]>\n> Cc: [email protected], [email protected]\n> Subject: Re: [HACKERS] What about LIMIT in SELECT ?\n> \n> Eric Lee Green wrote:\n> >\n> > On Tue, 13 Oct 1998, Jeff Hoffmann wrote:\n> > > >I agree completely, LIMIT would be VERY usefull in web based apps, which\n> > > >is all I run. It does not matter to me if it is not part of a formal\n> > > >standard. The idea is so common that it is a defacto standard.\n> > >\n> > > i'm not familiar with mysql and using \"LIMIT\" but wouldn't this same effect\n> > > be achieved by declaring a cursor and fetching however many records in the\n> > > cursor? it's a very noticeable improvement when you only want the first 20\n> > > out of 500 in a 200k record database, at least.\n> >\n> > The problem with declaring a cursor vs. the \"LIMIT\" clause is that the\n> > \"LIMIT\" clause, if used properly by the database engine (along with the\n> > database engine using indexes in \"ORDER BY\" clauses) allows the database\n> > engine to short-circuit the tail end of the query. That is, if you have 25\n> > names and the last one ends with BEAVIS, the database engine doesn't have\n> > to go through the BUTTHEADS and KENNYs and etc.\n> >\n> > Theoretically a cursor is superior to the \"LIMIT\" clause because you're\n> > eventually going to want the B's and K's and etc. anyhow -- but only in a\n> > stateful enviornment. In the stateless web environment, a cursor is\n> > useless because the connection can close at any time even when you're\n> > using \"persistent\" connections (and of course when the connection closes\n> > the cursor closes).\n> \n> I'm missing something. Well it's right that in the stateless\n> web environment a cursor has to be declared and closed for\n> any single CGI call. 
But even if you have a LIMIT clause,\n> your CGI must know with which key to start.\n> \n This is not a problem for the CGI-script to know which key to start.\n Without LIMIT every CGI call backend will do *FULL* selection\n and cursor helps just in fetching a definite number of rows,\n in principle I can do this with CGI-script. Also, cursor\n returns data back in ASCII format (man l declare) and this requires\n additional job for backend to convert data from intrinsic (binary)\n format. Right implementation of LIMIT offset,number_of_rows could be\n a great win and make postgres a superior free database engine for\n Web applications. Many colleagues of mine used mysql instead of\n postgres just because of lacking LIMIT. Tatsuo posted a patch\n for set query_limit to 'num', I just tested it and it seems to\n work fine. Now, we only need the possibility to specify offset,\n say \n set query_limit to 'offset,num'\n ( Tatsuo, How difficult to do this ?)\n and the LIMIT problem will be gone.\n\n I wonder how many useful patches could be hidden from people :-),\n \n\tRegards,\n\n\t\tOleg\n\nPS.\n\n\tTatsuo, do you have patch for 6.3.2 ?\n I can't wait for 6.4 :-)\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n",
"msg_date": "Wed, 14 Oct 1998 16:53:53 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> I've done some tests and what I found out might be a bug in\n> PostgreSQL's query optimizer.\n> SELECT * FROM tab ORDER BY key;\n> results in a sort->seqscan - I would have\n> expected an indexscan!\n\nGiven that a table _could_ be completely unsorted on disk, it is\nprobably reasonable to suck the data in for a possible in-memory sort\nrather than skipping around the disk to pick up individual tuples via\nthe index. Don't know if vacuum has a statistic on \"orderness\"...\n\n> SELECT * FROM tab WHERE key > 'G' ORDER BY key;\n> results in a sort->indexscan - hmmm.\n> The last one is the query we would need in the web\n> environment used over a cursor as in the example above. But\n> due to the sort, the backend selects until the end of the\n> table, sorts them and then returns only the first 20 rows\n> (out of sorts result).\n\nSo you are saying that for this last case the sort was unnecessary? Does\nthe backend traverse the index in the correct order to guarantee that\nthe tuples are coming out already sorted? Does a hash index give the\nsame plan (I would expect a sort->seqscan for a hash index)?\n\n - Tom\n",
"msg_date": "Wed, 14 Oct 1998 13:59:56 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "Oleg Bartunov wrote:\n> This is not a problem for CGI-script to know which key to start.\n\n Never meant that would be a problem. A FORM variable will of\n course do this.\n\n> Without LIMIT every CGI call backend will do *FULL* selection\n> and cursor helps just in fetching a definite number of rows,\n> in principle I can do this with CGI-script. Also, cursor\n> returns data back in ASCII format (man l declare) and this requires\n> additional job for backend to convert data from intrinsic (binary)\n> format. Right implementation of LIMIT offset,number_of_rows could be\n> a great win and make postgres superior free database engine for\n> Web applications. Many colleagues of mine used mysql instead of\n\n That's the point I was missing. The offset!\n\n> postgres just because of lacking LIMIT. Tatsuo posted a patch\n> for set query_limit to 'num', I just tested it and seems it\n> works fine. Now, we need only possibility to specify offset,\n> say\n> set query_limit to 'offset,num'\n> ( Tatsuo, How difficult to do this ?)\n> and LIMIT problem will ne gone.\n\n Think you haven't read my posting completely. Even with the\n executor limit, the complete scan into the sort is done by\n the backend. You need to specify ORDER BY to get the same\n list again (without the offset doesn't make sense). But\n currently, ORDER BY forces a sort node into the query plan.\n\n What the executor limit tells is how many rows will be\n returned from the sorted data. Not what goes into the sort.\n Filling the sort and sorting the data consumes the most time\n of the queries execution.\n\n I haven't looked at Tatsuo's patch very well. But if it\n limits the amount of data going into the sort (on ORDER BY),\n it will break it! The requested ordering could be different\n from what the choosen index might return. 
The used index is\n chosen by the planner upon the qualifications given, not the\n ordering wanted.\n\n So if you select WHERE b = 1 ORDER BY a, then it will use an\n index on attribute b to match the qualification. The complete\n result of that index scan goes into the sort to get ordered\n by a. If now the executor limit stops sort filling after the\n limit is exceeded, only the same tuples will go into the sort\n every time. But they have nothing to do with the requested\n order by a.\n\n What LIMIT first needs is a planner enhancement. In file\n backend/optimizer/plan/planner.c line 284 it must be checked\n if the actual plan is an indexscan, if the indexed attributes\n are all the same as those in the given sort clause and that\n the requested sort order (operator) is what the index\n will return. If that all matches, it can ignore the sort\n clause and return the index scan itself.\n\n Second enhancement must be the handling of the offset. In\n the executor, the index scan must skip offset index tuples\n before returning the first. But NOT if the plan isn't a\n 1-table-index-scan. In that case the result tuples (from the\n topmost unique/join/whatever node) have to be skipped.\n\n With these enhancements, the index tuples to be skipped\n (offset) will still be scanned, but not the data tuples they\n point to. Index scanning might be somewhat faster.\n\n This all will only speed up simple 1-table-queries, no joins\n or if the requested order isn't exactly what the index\n returns.\n\n Anyway, I'll take a look if I can change the planner to omit\n the sort if the tests described above are true. I think it\n would be good anyway.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 14 Oct 1998 16:24:56 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
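The failure mode Jan describes (limiting the tuples fed *into* the sort rather than taken *out* of it) can be illustrated outside the backend. This is a hypothetical Python sketch, not PostgreSQL code; the row values are made up:

```python
# Rows in the order a hypothetical index on column b (qualification b = 1)
# happens to return them -- this order says nothing about ORDER BY a.
scanned = [{"a": 9, "b": 1}, {"a": 3, "b": 1}, {"a": 7, "b": 1}, {"a": 1, "b": 1}]

limit = 2

# WRONG: stop filling the sort after `limit` tuples -- we only ever sort
# the first two tuples the scan happened to produce.
broken = sorted(scanned[:limit], key=lambda r: r["a"])

# RIGHT: sort the complete scan output, then cut the result.
correct = sorted(scanned, key=lambda r: r["a"])[:limit]

print([r["a"] for r in broken])   # [3, 9] -- not the smallest values of a
print([r["a"] for r in correct])  # [1, 3]
```

The two results differ, which is exactly why a query_limit applied before the sort node would break ORDER BY.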
{
"msg_contents": ">\n> > SELECT * FROM tab WHERE key > 'G' ORDER BY key;\n> > results in a sort->indexscan - hmmm.\n> > The last one is the query we would need in the web\n> > environment used over a cursor as in the example above. But\n> > due to the sort, the backend selects until the end of the\n> > table, sorts them and then returns only the first 20 rows\n> > (out of sorts result).\n>\n> So you are saying that for this last case the sort was unnecessary? Does\n> the backend traverse the index in the correct order to guarantee that\n> the tuples are coming out already sorted? Does a hash index give the\n> same plan (I would expect a sort->seqscan for a hash index)?\n\n Good point! As far as I can see, the planner chooses index\n usage only depending on the WHERE clause. A hash index is\n only usable when the given qualification uses = on the\n indexed attribute(s).\n\n If the sortClause exactly matches the indexed attributes of\n the ONE used btree index and all operators request ascending\n order I think the index scan already returns the correct\n order. Who knows definitely?\n\n Addition to my last posting: ... and if the index scan is\n using a btree index ...\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 14 Oct 1998 16:34:47 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
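The condition Jan spells out (a btree index, a sortClause that matches the indexed attributes exactly, and all operators requesting ascending order) can be written as a small predicate. This is an illustrative Python model of the check, not the actual planner code; the function name and argument shapes are invented:

```python
def can_skip_sort(index_type, index_keys, order_by):
    """Return True when an index scan already yields the requested order.
    order_by is a list of (column, direction) pairs.  A sketch of the
    planner check discussed above, not real PostgreSQL source."""
    if index_type != "btree":        # a hash index returns no useful order
        return False
    if len(index_keys) != len(order_by):
        return False                 # must match the key list exactly
    for key, (col, direction) in zip(index_keys, order_by):
        if key != col or direction != "asc":   # btree scan is ascending
            return False
    return True

print(can_skip_sort("btree", ["key"], [("key", "asc")]))    # True
print(can_skip_sort("hash",  ["key"], [("key", "asc")]))    # False
print(can_skip_sort("btree", ["key"], [("other", "asc")]))  # False
```

When the predicate holds, the Sort node above the IndexScan is redundant and can be dropped.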
{
"msg_contents": "> >> + #ifdef QUERY_LIMIT\n> >> + static bool\n> >> + parse_query_limit(const char *value)\n> >> + {\n> >> + int32 limit;\n> >> + \n> >> + if (value == NULL) {\n> >> + reset_query_limit();\n> >> + return(TRUE);\n> >> + }\n> >\n> >Any idea how 'value' could be null? I could not see how that would\n> >happen.\n> \n> Not sure. I just followed the way other set commands are doing.\n> \n> >I can see how GEQO could have a NULL when you say ON, and no\n> >value. Same with rplans.\n> \n> Hmm... I think in that case, 'value' would be 'ON', not NULL. right?\n\nYes, I think so, value would be ON. I will look into it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Oct 1998 12:18:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Alternative to LIMIT in SELECT ?"
},
{
"msg_contents": "> > >> + if (value == NULL) {\n> > >> + reset_query_limit();\n> > >> + return(TRUE);\n> > >> + }\n> > >Any idea how 'value' could be null? I could not see how that would\n> > >happen.\n> > Not sure. I just followed the way other set commands are doing.\n> \n> This is how RESET is implemented.\n\nOh.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Oct 1998 12:24:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Alternative to LIMIT in SELECT ?"
},
{
"msg_contents": "> > I've done some tests and what I found out might be a bug in\n> > PostgreSQL's query optimizer.\n> > SELECT * FROM tab ORDER BY key;\n> > results in a sort->seqscan - I would have\n> > expected an indexscan!\n> \n> Given that a table _could_ be completely unsorted on disk, it is\n> probably reasonable to suck the data in for a possible in-memory sort\n> rather than skipping around the disk to pick up individual tuples via\n> the index. Don't know if vacuum has a statistic on \"orderness\"...\n\nThomas is correct on this. Vadim has run some tests, and with our\noptimized psort() code, the in-memory sort is often faster than using\nthe index to get the tuple, because you are jumping all over the drive. \nI don't remember, but obviously there is a break-even point where\ngetting X rows using the index on a table of Y rows is faster, but\nfor X+1 rows on a table of Y rows it is faster to get all the rows\nsequentially, and do the sort.\n\nYou would have to pick only certain queries (no joins, index matches\nORDER BY), take the number of rows requested, and the number of rows\nselected, and figure out if it is faster to use the index, or a\nsequential scan and do the ORDER BY yourself.\n\n\nAdd to this the OFFSET capability. I am not sure how you are going to\nget into the index and start at the n-th entry, unless perhaps you just\nsequentially scan the index.\n\nIn fact, many queries just get columns already indexed, and we could just\npull the data right out of the index.\n\nI have added this to the TODO list:\n\n\t* Pull requested data directly from indexes, bypassing heap data \n\nI think this has to be post-6.4 work, but I think we need to work in\nthis direction. I am holding off any cnfify fixes for post-6.4, but a\n6.4.1 performance release certainly is possible.\n\n\nBut, you are correct that in certain cases where an index is already being\nused on a query, you could just skip the sort IF you used the index to\nget the rows from the base table.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Oct 1998 13:21:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
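Bruce's break-even point can be modeled with a toy cost function: few rows fetched through the index pay one random access each, while the seqscan-plus-sort pays for the whole table plus the sort's comparisons. All constants and the function itself are illustrative guesses, not PostgreSQL's actual cost model:

```python
import math

def cheaper_plan(rows_wanted, table_rows, random_cost=40.0, seq_cost=1.0):
    """Toy break-even model: per-row random index fetches vs. a full
    sequential scan followed by an O(N log N) sort.  The cost constants
    are made up for illustration only."""
    index_cost = rows_wanted * random_cost
    sort_cost = table_rows * seq_cost + table_rows * math.log2(max(table_rows, 2))
    return "indexscan" if index_cost < sort_cost else "seqscan+sort"

print(cheaper_plan(20, 200_000))       # indexscan: few rows wanted
print(cheaper_plan(200_000, 200_000))  # seqscan+sort: the whole table
```

A real planner would need statistics (the "orderness" Thomas mentions, selectivity, cache behavior) to place the crossover, but the shape of the decision is the same.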
{
"msg_contents": "On Tue, 13 Oct 1998, Jeff Hoffmann wrote:\n\n> >Hi, my 2 cents...\n> >\n> >I agree completely, LIMIT would be VERY usefull in web based apps, which\n> >is all I run. It does not matter to me if it is not part of a formal\n> >standard. The idea is so common that it is a defacto standard.\n> \n> i'm not familiar with mysql and using \"LIMIT\" but wouldn't this same effect\n> be achieved by declaring a cursor and fetching however many records in the\n> cursor? it's a very noticeable improvement when you only want the first 20\n> out of 500 in a 200k record database, at least.\n\nYes, while this is an improvement, it still has to do the entire query.\nIt would be nice if the query could be terminated after a designated number\nof rows were found, thus freeing system resources that are otherwise\nconsumed. \nI have seen web users run ridiculous queries, like a search for the\nletter 'a' that happens to be a substring search; now the box goes ape\nshit for 5 or 10 minutes while it basically gets the whole db as the search\nresult. All this before you can do a 'FETCH', at least as I understand\nFETCH; I will need to read up on it.\n\nNote that I do not have any databases that large on my box, I was thinking\nback to my VFP/NT experiences.\n\nHave a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n",
"msg_date": "Wed, 14 Oct 1998 13:21:51 -0400 (EDT)",
"msg_from": "Terry Mackintosh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "On Tue, 13 Oct 1998, Bruce Momjian wrote:\n\n> What we could do is _if_ there is only one table(no joins), and an index\n> exists that matches the ORDER BY, we could use the index to\n> short-circuit the query.\n> \n> I have added this item to the TODO list:\n> \n> * Allow LIMIT ability on single-table queries that have no ORDER BY or\n> a matching index\n> \n> This looks do-able, and a real win. Would this make web applications\n> happier? If there is an ORDER BY and no index, or a join, I can't\n> figure out how we would short-circuit the query.\n> \nYes, this would do for most of my apps.\nIt may just be my lack of sophistication, but I find that most web apps\nare very simple in nature/table layout, and thus queries are often on only\na single table.\n\nThanks\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n",
"msg_date": "Wed, 14 Oct 1998 13:41:24 -0400 (EDT)",
"msg_from": "Terry Mackintosh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "On Tue, 13 Oct 1998, Bruce Momjian wrote:\n\n> My guess in a web application is that the transaction is started for\n> every new page, so you can't have transactions spanning SQL sessions.\n> \n> LIMIT theoretically would allow you to start up where you left off.\n\n************ EXACTLY !-)\nPlus, it could also be used to limit bogus-run-away queries.\n\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n",
"msg_date": "Wed, 14 Oct 1998 13:45:40 -0400 (EDT)",
"msg_from": "Terry Mackintosh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> Thomas is correct on this. Vadim has run some tests, and with our\n> optimized psort() code, the in-memory sort is often faster than using\n> the index to get the tuple, because you are jumping all over the drive.\n> I don't remember, but obviously there is a break-even point where\n> getting X rows using the index on a table of Y rows is faster , but\n> getting X+1 rows on a table of Y rows is faster getting all the rows\n> sequentailly, and doing the sort.\n>\n> You would have to pick only certain queries(no joins, index matches\n> ORDER BY), take the number of rows requested, and the number of rows\n> selected, and figure out if it is faster to use the index, or a\n> sequential scan and do the ORDER BY yourself.\n\nSince a sort loads the data into memory anyway, how about speeding up the\nsort by using the index? Or does that take up too much memory? (approx 40%\nmore than the data alone, I think)\n\nTAra\n\n",
"msg_date": "Wed, 14 Oct 1998 12:56:00 -0500",
"msg_from": "\"Taral\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> But, you are correct that certain cases where in index is already being\n> used on a query, you could just skip the sort IF you used the index to\n> get the rows from the base table.\n\n Especially in the case where\n\n SELECT ... WHERE key > 'val' ORDER BY key;\n\n creates a Sort->IndexScan plan. The index scan already jumps\n around on the disc to collect the sort's input and the sort\n finally returns exactly the same output (if the used index is\n only on key).\n\n And this is the case for large tables. The planner first\n decides to use an index scan due to the WHERE clause and\n later it notices the ORDER BY clause and creates a sort over\n the scan.\n\n I'm actually hacking around on it to see what happens if I\n suppress the sort node in some cases.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 14 Oct 1998 19:57:27 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > Thomas is correct on this. Vadim has run some tests, and with our\n> > optimized psort() code, the in-memory sort is often faster than using\n> > the index to get the tuple, because you are jumping all over the drive.\n> > I don't remember, but obviously there is a break-even point where\n> > getting X rows using the index on a table of Y rows is faster , but\n> > getting X+1 rows on a table of Y rows is faster getting all the rows\n> > sequentailly, and doing the sort.\n> >\n> > You would have to pick only certain queries(no joins, index matches\n> > ORDER BY), take the number of rows requested, and the number of rows\n> > selected, and figure out if it is faster to use the index, or a\n> > sequential scan and do the ORDER BY yourself.\n> \n> Since a sort loads the data into memory anyway, how about speeding up the\n> sort by using the index? Or does that take up too much memory? (approx 40%\n> more than the data alone, I think)\n\nNot sure you can do that. The index points to heap tuples/tids, and\nthough there are tids in the rows, you can't access them as tids in\nmemory.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Oct 1998 14:01:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> Thomas is correct on this. Vadim has run some tests, and with our\n> optimized psort() code, the in-memory sort is often faster than using\n> the index to get the tuple, because you are jumping all over the drive. \n> I don't remember, but obviously there is a break-even point where\n> getting X rows using the index on a table of Y rows is faster , but\n> getting X+1 rows on a table of Y rows is faster getting all the rows\n> sequentailly, and doing the sort.\n> \n> You would have to pick only certain queries(no joins, index matches\n> ORDER BY), take the number of rows requested, and the number of rows\n> selected, and figure out if it is faster to use the index, or a\n> sequential scan and do the ORDER BY yourself.\n> \n> Add to this the OFFSET capability. I am not sure how you are going to\n> get into the index and start at the n-th entry, unless perhaps you just\n> sequential scan the index.\n> \n> In fact, many queries just get column already indexed, and we could just\n> pull the data right out of the index.\n> \n> I have added this to the TODO list:\n> \n> \t* Pull requested data directly from indexes, bypassing heap data \n> \n> I think this has to be post-6.4 work, but I think we need to work in\n> this direction. I am holding off any cnfify fixes for post-6.4, but a\n> 6.4.1 performance release certainly is possible.\n> \n> \n> But, you are correct that certain cases where in index is already being\n> used on a query, you could just skip the sort IF you used the index to\n> get the rows from the base table.\n\nI have had more time to think about this. Basically, for pre-sorted\ndata, our psort code is very fast, because it does not need to sort\nanything. It just moves the rows in and out of the sort memory. 
Yes,\nit could be removed in some cases, and probably should be, but it is not\ngoing to produce great speedups.\n\nThe more general case I will describe below.\n\nFirst, let's look at a normal query:\n\n\tSELECT *\n\tFROM tab\n\tORDER BY col1\n\nThis is not going to use an index, and probably should not because it is\nfaster for large tables to sort them in memory, rather than moving all\nover the disk. For small tables, if the entire table fits in the buffer\ncache, it may be faster to use the index, but on a small table the sort\ndoesn't take very long either, and the buffer cache effectiveness is\naffected by other backends using it, so it may be better not to count on\nit for a speedup.\n\nHowever, if you only want the first 10 rows, that is a different story. \nWe pull all the rows into the backend, sort them, then return 10 rows. \nThe query, if we could do it, should be written as:\n\n\tSELECT *\n\tFROM tab\n\tWHERE col1 < some_unknown_value\n\tORDER BY col1\n\nIn this case, the optimizer looks at the column statistics, and properly\nuses an index to pull only a small subset of the table. This is the\ntype of behavior people want for queries returning only a few values.\n\nBut, unfortunately, we don't know that mystery value.\n\nNow, everyone agrees we need an index matching the ORDER BY to make this\nquery quick, but we don't know that mystery value, so currently we\nexecute the whole query, and do a fetch.\n\nWhat I am now thinking is that maybe we need a way to walk around that\nindex. Someone months ago asked how to do that, and we told him he\ncouldn't, because this not a C-ISAM/dbm type database. 
However, if we\ncould somehow pass into the query the index location we want to start\nat, and how many rows we need, that would solve our problem, and perhaps\neven allow joined queries to work, assuming the table in the ORDER BY is\nin an outer join loop.\n\n\tSELECT *\n\tFROM tab\n\tWHERE col1 < some_unknown_value\n\tORDER BY col1\n\tUSING INDEX tab_idx(452) COUNT 100\n\nwhere 452 is an 452th index entry, and COUNT is the number of index rows\nyou want to process. The query may return more or less than 100 rows if\nthere is a join and it joins to zero or more than one row in the joined\ntable, but this seems like perhaps a good way to go at it. We need to\ndo it this way because if a single index row returns 4 result rows, and\nonly two of the four rows fit in the number of rows returnd as set by the\nuser, it is hard to re-start the query at the proper point, because you\nwould have to process the index rows a second time, and return just part\nof the result, and that is hard.\n\nIf the index changes, or rows are added, the results are going to be\nunreliable, but that is probably going to be true of any state-less\nimplementation we can devise.\n\nI think this may be fairly easy to implement. We could sequential scan\nthe index to get to the 452th row. That is going to be quick. We can\npass the 452 into the btree index code, so only a certain range of index\ntuples are returned, and the system believes it has processed the entire\nquery, while we know it hasn't. Doesn't really work with hash, so we\nwill not allow it for those indexes. \n\nTo make it really easy, we could implement it as a 'SET' command, so we\ndon't actually have it as part of the query, and have to pass it around\nthrough all the modules. You would do the proper 'SET' before running\nthe query. 
Optimizer would look at 'SET' value to force index use.\n\n\tSET INDEX TO tab_idx START 452 COUNT 100\n\nor\n\n\tSET INDEX TO tab_idx FROM 452 COUNT 451\n\nThere would have to be some way to signal that the end of the index had\nbeen reached, because returning zero rows is not enough of a guarantee\nin a joined SELECT.\n\nComments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Oct 1998 14:27:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
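The proposed "start at the 452nd index entry, take 100" traversal can be sketched as a generator. The USING INDEX / SET INDEX syntax above was only a proposal; the code below is a hypothetical Python illustration of its semantics, not anything that was implemented:

```python
def index_range(index_entries, start, count):
    """Walk a sorted index, skipping the first `start` entries and
    yielding at most `count`.  Sketch of the proposed
    'USING INDEX tab_idx(452) COUNT 100' behavior."""
    for pos, entry in enumerate(index_entries):
        if pos < start:
            continue                 # skipped entries are still scanned...
        if pos >= start + count:
            break                    # ...but the heap is only visited here
        yield entry

idx = [f"key{i:03d}" for i in range(1000)]   # stand-in for a btree
page = list(index_range(idx, 452, 100))
print(page[0], page[-1], len(page))          # key452 key551 100
```

As Bruce notes, this only makes sense for an ordered (btree) index; a hash index has no meaningful "452nd entry".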
{
"msg_contents": "> I have had more time to think about this. Basically, for pre-sorted\n> data, our psort code is very fast, because it does not need to sort\n> anything. It just moves the rows in and out of the sort memory. Yes,\n> it could be removed in some cases, and probably should be, but it is not\n> going to produce great speedups.\n\n And I got the time to hack around about this.\n\n I hacked a little check into the planner that compares\n the sortClause against the key field list of an index scan\n and just suppresses the sort node if it exactly matches and\n all sort operators are \"<\".\n\n I tested with a 10k row table where key is a text field. The\n base query is a\n\n SELECT ... WHERE key > 'val' ORDER BY key;\n\n The used 'val' is always a key that is close to the first of\n all keys in the table ('' on the first query and the last\n selected value on subsequent ones).\n\n Scenario 1 (S1) uses exactly the above query but processes\n only the first 20 rows from the result buffer. Thus the\n frontend receives nearly the whole table.\n\n Scenario 2 (S2) uses a cursor and FETCH 20. But it closes the\n cursor and creates a new one for the next selection (only\n with another 'val') as it would occur in a web application.\n\n If there is no index on key, the backend will always do a\n Sort->SeqScan and due to the 'val' close to the lowest\n existing key nearly all tuples get scanned and put into the\n sort. S1 here runs about 10 seconds and S2 about 6 seconds.\n The speedup in S2 results from the reduced overhead of\n sending unwanted tuples into the frontend.\n\n Now with a btree index on key and an unpatched backend.\n The produced plan is always a Sort->IndexScan. S1 needs 16\n seconds and S2 needs 12 seconds. Again nearly all data is put\n into the sort but this time over the index scan and that is\n slower.\n\n Last with the btree index on key and the patched backend.\n This time the plan is a plain IndexScan because the ORDER BY\n clause exactly matches the sort order of the chosen index.\n S1 needs 13 seconds and S2 less than 0.2! This dramatic\n speedup comes from the fact that this time the index scan is\n the toplevel executor node and the executor run is stopped\n after 20 tuples have been selected.\n\n Analysis of the above timings:\n\n If there is an ORDER BY clause, using an index scan is the\n clever way if the indexqual dramatically reduces the\n amount of data selected and sorted. I think this is the\n normal case (who really selects nearly all rows from a 5M row\n table?). So choosing the index path is correct. This will\n hurt if someone really selects most of the rows and the index\n scan jumps over the disc. But here the programmer should use\n an unqualified query to perform a seqscan and do the\n qualification in the frontend application.\n\n The speedup for the cursor/fetch scenario is so impressive\n that I'll create a post-6.4 patch. I don't want it in 6.4\n because there is absolutely no query in the whole regression\n test where it suppresses the sort node. So we have\n absolutely no check that it doesn't break anything.\n\n For a web application that can use a unique key to select\n the next amount of rows, it will be a big win.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 14 Oct 1998 23:05:07 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
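Jan's scenario 2 is the pattern now commonly called keyset pagination: restart the scan at the last key seen, so each page costs only one short index descent. A minimal Python sketch of the idea (bisect standing in for the btree descent; the key values are made up):

```python
import bisect

keys = sorted(f"row{i:05d}" for i in range(10_000))   # stand-in for the btree

def next_page(last_seen, page_size=20):
    """Keyset pagination as in Jan's scenario 2:
    WHERE key > last_seen ORDER BY key, then FETCH page_size."""
    start = bisect.bisect_right(keys, last_seen)      # first key > last_seen
    return keys[start:start + page_size]

first = next_page("")          # '' on the first query, as in Jan's test
second = next_page(first[-1])  # subsequent pages start at the last key
print(first[0], second[0])     # row00000 row00020
```

With the sort node suppressed, each call touches only page_size index entries instead of feeding nearly the whole table into a sort, which is where the sub-0.2-second timing comes from.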
{
"msg_contents": ">> postgres just because of lacking LIMIT. Tatsuo posted a patch\n>> for set query_limit to 'num', I just tested it and seems it\n>> works fine. Now, we need only possibility to specify offset,\n>> say\n>> set query_limit to 'offset,num'\n>> ( Tatsuo, How difficult to do this ?)\n>> and LIMIT problem will ne gone.\n>\n> Think you haven't read my posting completely. Even with the\n> executor limit, the complete scan into the sort is done by\n> the backend. You need to specify ORDER BY to get the same\n> list again (without the offset doesn't make sense). But\n> currently, ORDER BY forces a sort node into the query plan.\n\nI think we have understood your point. set query_limit is just an\neasy alternative to using a cursor and fetch.\n\n> I haven't looked at Tatsuo's patch very well. But if it\n> limits the amount of data going into the sort (on ORDER BY),\n> it will break it! The requested ordering could be different\n> from what the choosen index might return. The used index is\n> choosen by the planner upon the qualifications given, not the\n> ordering wanted.\n\nI think it limits the final result. When query_limit is set,\nthe arg \"numberTuples\" of ExecutePlan() is set to it instead of 0\n(which means no limit).\n\nTalking about \"offset,\" it shouldn't be very difficult. I guess all we\nhave to do is add a new arg \"offset\" to ExecutePlan() and then make the\nobvious modifications. (and of course we have to modify the set\nquery_limit syntax, but that's trivial)\n\nHowever, before going ahead, I would like to ask other hackers about\nthis direction. This might be convenient for some users, but still the \nessential performance issue would remain. In other words, this is a\nshort-term solution, not an intrinsic one, IMHO.\n--\nTatsuo Ishii\[email protected]\n",
"msg_date": "Thu, 15 Oct 1998 11:34:54 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ? "
},
{
"msg_contents": "> > I have had more time to think about this. Basically, for pre-sorted\n> > data, our psort code is very fast, because it does not need to sort\n> > anything. It just moves the rows in and out of the sort memory. Yes,\n> > it could be removed in some cases, and probably should be, but it is not\n> > going to produce great speedups.\n> \n> And I got the time to hack around about this.\n> \n> I hacked in a little check into the planner, that compares\n> the sortClause against the key field list of an index scan\n> and just suppresses the sort node if it exactly matchs and\n> all sort operators are \"<\".\n> \n> I tested with a 10k row table where key is a text field. The\n> base query is a\n> \n> SELECT ... WHERE key > 'val' ORDER BY key;\n> \n> The used 'val' is always a key that is close to the first of\n> all keys in the table ('' on the first query and the last\n> selected value on subsequent ones).\n\nThis is good stuff. I want to think about it for a day. Sounds very\npromising.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Oct 1998 01:52:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n\n> I think we have understanded your point. set query_limit is just a\n> easy alternative of using cursor and fetch.\n>\n> > I haven't looked at Tatsuo's patch very well. But if it\n> > limits the amount of data going into the sort (on ORDER BY),\n> > it will break it! The requested ordering could be different\n> > from what the choosen index might return. The used index is\n> > choosen by the planner upon the qualifications given, not the\n> > ordering wanted.\n>\n> I think it limits the final result. When query_limit is set,\n> the arg \"numberTuples\" of ExecutePlan() is set to it instead of 0\n> (this means no limit).\n>\n> Talking about \"offset,\" it shouldn't be very difficult. I guess all we\n> have to do is adding a new arg \"offset\" to ExecutePlan() then making\n> obvious modifications. (and of course we have to modify set\n> query_limit syntax but it's trivial)\n\n The offset could become\n\n FETCH n IN cursor [OFFSET n];\n\n and\n\n SELECT ... [LIMIT offset,count];\n\n The FETCH command already calls ExecutorRun() with the given\n count (the tuple limit). Telling it the offset too is really\n simple. And ExecutorRun() could check if the toplevel\n executor node is an index scan. Skipping tuples during the\n index scan requires, that all qualifications are in the\n indexqual, thus any tuple returned by it will become a final\n result row (as it would be in the simple 1-table-queries we\n discussed). If that isn't the case, the executor must\n fallback to skip the final result tuples and that is after an\n eventually processed sort/merge of the complete result set.\n That would only reduce communication to the client and memory\n required there to buffer the result set (not a bad thing\n either).\n\n ProcessQueryDesc() in tcop/pquery.c also calls ExecutorRun()\n but with a constant 0 tuple count. Having offset and count in\n the parsetree would make it without any state variables or\n SET command. 
And it's the only clean way to restrict LIMIT to\n SELECT queries. Any thrown in LIMIT to ExecutorRun() from\n another place could badly hurt the rewrite system. Remember\n that non-instead actions on insert/update/delete are\n processed before the original query! And what about SQL\n functions that get processed during the evaluation of another\n query (view using an SQL function for count(*))?\n\n A little better would it be to make the LIMIT values able to\n be parameter nodes. C or PL functions use the prepared plan\n feature of the SPI manager for performance reasons.\n Especially the offset value might there need to be a\n parameter that the executor has to pick out first. If we\n change the count argument of ExecutorRun to a List *limit,\n this one could be NIL (to mean the old 0 count 0 offset\n behaviour) or a list of two elements that both can be either\n a Const or a Param of type int4. Easy for the executor to\n evaluate.\n\n The only places where ExecutorRun() is called are\n tcop/pquery.c (queries from frontend), commands/command.c\n (FETCH command), executor/functions.c (SQL functions) and\n executor/spi.c (SPI manager). So it is easy to change the\n call interface too.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 15 Oct 1998 14:23:43 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "This is a little bit off-topic,\nI did some timings with latest cvs on my real database \n( all output redirected to /dev/null ), table contains 8798 records,\n31 columns, order key have indices.\n\n1.select count(*) from work_flats;\n0.02user 0.00system 0:00.18elapsed 10%CPU (0avgtext+0avgdata 0maxresident)k\n0inputs+0outputs (131major+21minor)pagefaults 0swaps\n\n2.select * from work_flats order by rooms, metro_id;\n2.35user 0.25system 0:10.11elapsed 25%CPU (0avgtext+0avgdata 0maxresident)k\n0inputs+0outputs (131major+2799minor)pagefaults 0swaps\n\n3.set query_limit to '150';\nSET VARIABLE\nselect * from work_flats order by rooms, metro_id;\n0.06user 0.00system 0:02.75elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k\n0inputs+0outputs (131major+67minor)pagefaults 0swaps\n\n4.begin;\ndeclare tt cursor for\nselect * from work_flats order by rooms, metro_id;\nfetch 150 in tt;\nend;\n0.05user 0.01system 0:02.76elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k\n0inputs+0outputs (131major+67minor)pagefaults 0swaps\n\nAs you can see timings for query_limit and cursor are very similar,\nI didn't expected this. So, in principle, enhanced version of fetch\n(with offset) would cover all we need from LIMIT, but query_limit would be\nstill useful, for example to restrict loadness of server.\nWill all enhancements you discussed go to the 6.4 ?\nI'm really interested in testing this stuff because I begin new project\nand everything we discussed here are badly needed.\n\n\n\tRegards,\n\n\t Oleg\n\n\n\nOn Thu, 15 Oct 1998, Jan Wieck wrote:\n\n> Date: Thu, 15 Oct 1998 14:23:43 +0200 (MET DST)\n> From: Jan Wieck <[email protected]>\n> To: [email protected]\n> Cc: [email protected], [email protected], [email protected]\n> Subject: Re: [HACKERS] What about LIMIT in SELECT ?\n> \n> Tatsuo Ishii wrote:\n> \n> > I think we have understanded your point. 
set query_limit is just a\n> > easy alternative of using cursor and fetch.\n> >\n> > > I haven't looked at Tatsuo's patch very well. But if it\n> > > limits the amount of data going into the sort (on ORDER BY),\n> > > it will break it! The requested ordering could be different\n> > > from what the choosen index might return. The used index is\n> > > choosen by the planner upon the qualifications given, not the\n> > > ordering wanted.\n> >\n> > I think it limits the final result. When query_limit is set,\n> > the arg \"numberTuples\" of ExecutePlan() is set to it instead of 0\n> > (this means no limit).\n> >\n> > Talking about \"offset,\" it shouldn't be very difficult. I guess all we\n> > have to do is adding a new arg \"offset\" to ExecutePlan() then making\n> > obvious modifications. (and of course we have to modify set\n> > query_limit syntax but it's trivial)\n> \n> The offset could become\n> \n> FETCH n IN cursor [OFFSET n];\n> \n> and\n> \n> SELECT ... [LIMIT offset,count];\n> \n> The FETCH command already calls ExecutorRun() with the given\n> count (the tuple limit). Telling it the offset too is really\n> simple. And ExecutorRun() could check if the toplevel\n> executor node is an index scan. Skipping tuples during the\n> index scan requires, that all qualifications are in the\n> indexqual, thus any tuple returned by it will become a final\n> result row (as it would be in the simple 1-table-queries we\n> discussed). If that isn't the case, the executor must\n> fallback to skip the final result tuples and that is after an\n> eventually processed sort/merge of the complete result set.\n> That would only reduce communication to the client and memory\n> required there to buffer the result set (not a bad thing\n> either).\n> \n> ProcessQueryDesc() in tcop/pquery.c also calls ExecutorRun()\n> but with a constant 0 tuple count. Having offset and count in\n> the parsetree would make it without any state variables or\n> SET command. 
And it's the only clean way to restrict LIMIT to\n> SELECT queries. Any thrown in LIMIT to ExecutorRun() from\n> another place could badly hurt the rewrite system. Remember\n> that non-instead actions on insert/update/delete are\n> processed before the original query! And what about SQL\n> functions that get processed during the evaluation of another\n> query (view using an SQL function for count(*))?\n> \n> A little better would it be to make the LIMIT values able to\n> be parameter nodes. C or PL functions use the prepared plan\n> feature of the SPI manager for performance reasons.\n> Especially the offset value might there need to be a\n> parameter that the executor has to pick out first. If we\n> change the count argument of ExecutorRun to a List *limit,\n> this one could be NIL (to mean the old 0 count 0 offset\n> behaviour) or a list of two elements that both can be either\n> a Const or a Param of type int4. Easy for the executor to\n> evaluate.\n> \n> The only places where ExecutorRun() is called are\n> tcop/pquery.c (queries from frontend), commands/command.c\n> (FETCH command), executor/functions.c (SQL functions) and\n> executor/spi.c (SPI manager). So it is easy to change the\n> call interface too.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #======================================== [email protected] (Jan Wieck) #\n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 15 Oct 1998 20:01:23 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "OK, I have had my day of thinking, and will address this specific\nposting first, because it is the most fundamental concerning the future\ndirection of the optimization.\n\n> \n> And I got the time to hack around about this.\n> \n> I hacked in a little check into the planner, that compares\n> the sortClause against the key field list of an index scan\n> and just suppresses the sort node if it exactly matchs and\n> all sort operators are \"<\".\n> \n> I tested with a 10k row table where key is a text field. The\n> base query is a\n> \n> SELECT ... WHERE key > 'val' ORDER BY key;\n> \n> The used 'val' is always a key that is close to the first of\n> all keys in the table ('' on the first query and the last\n> selected value on subsequent ones).\n> \n> Scenario 1 (S1) uses exactly the above query but processes\n> only the first 20 rows from the result buffer. Thus the\n> frontend receives nearly the whole table.\n\nOK.\n\n> \n> Scenario 2 (S2) uses a cursor and FETCH 20. But closes the\n> cursor and creates a new one for the next selection (only\n> with another 'val') as it would occur in a web application.\n> \n> If there is no index on key, the backend will allways do a\n> Sort->SeqScan and due to the 'val' close to the lowest\n> existing key nearly all tuples get scanned and put into the\n> sort. S1 here runs about 10 seconds and S2 about 6 seconds.\n> The speedup in S2 results from the reduced overhead of\n> sending not wanted tuples into the frontend.\n\nMakes sense. All rows are processed, but not sent to client.\n\n> \n> Now with a btree index on key and an unpatched backend.\n> Produced plan is always a Sort->IndexScan. S1 needs 16\n> seconds and S2 needs 12 seconds. Again nearly all data is put\n> into the sort but this time over the index scan and that is\n> slower.\n\nVACUUM ANALYZE could affect this. 
Because it had no stats, it thought\nindex use would be faster, but in fact because 'val' was near the lowest\nvalue, it was selecting 90% of the table, and would have been better with\na sequential scan. pg_statistics's low/hi values for a column could\nhave told that to the optimizer.\n\nI know the good part of the posting is coming.\n\n> Last with the btree index on key and the patched backend.\n> This time the plan is a plain IndexScan because the ORDER BY\n> clause exactly matches the sort order of the chosen index.\n> S1 needs 13 seconds and S2 less than 0.2! This dramatic\n> speedup comes from the fact, that this time the index scan is\n> the toplevel executor node and the executor run is stopped\n> after 20 tuples have been selected.\n\nOK, seems like in the S1 case, the use of the psort/ORDER BY code on top\nof the index was taking an extra 3 seconds, which is 23%. That is a\nlot more than I thought for the psort code, and shows we could gain a\nlot by removing unneeded sorts from queries that are already using\nmatching indexes.\n\nJust for clarity, added to TODO. I think everyone is clear on this one,\nand its magnitude is a surprise to me:\n\n * Prevent psort() usage when query already using index matching ORDER BY\n\n\n> Analysis of the above timings:\n> \n> If there is an ORDER BY clause, using an index scan is the\n> clever way if the indexqual dramatically reduces the the\n> amount of data selected and sorted. I think this is the\n> normal case (who really selects nearly all rows from a 5M row\n> table?). So choosing the index path is correct. This will\n> hurt if someone really selects most of the rows and the index\n> scan jumps over the disc. But here the programmer should use\n> an unqualified query to perform a seqscan and do the\n> qualification in the frontend application.\n\nFortunately, the optimizer already does the index selection for us, and\nguesses pretty well if the index or sequential scan is better.
Once we\nimplement the above removal of psort(), we will have to change the\ntimings because now you have to compare index scan against sequential\nscan AND psort(), because in the index scan situation, you don't need\nthe psort(), assuming the ORDER BY matches the index exactly.\n\n> The speedup for the cursor/fetch scenario is so impressive\n> that I'll create a post 6.4 patch. I don't want it in 6.4\n> because there is absolutely no query in the whole regression\n> test, where it suppresses the sort node. So we have\n> absolutely no check that it doesn't break anything.\n> \n> For a web application, that can use a unique key to select\n> the next amount of rows, it will be a big win.\n\nOK, I think the reason the regression test did not show your code being\nused is important.\n\nFirst, most of the tables are small in the regression test, so sequential\nscans are faster. Second, most queries using indexes are either joins,\nwhich do the entire table, or equality tests, like col = 3, where there\nis no matching ORDER BY because all the col values are 3. Again, your\ncode can't help with these.\n\nThe only regression-type code that would use it would be a 'col > 3'\nqualification with a col ORDER BY, and there aren't many of those.\n\nHowever, if we think of the actual application you are addressing, it is\na major win. If we are going after only one row of the index, it is\nfast. If we are going after the entire table, it is faster to\nsequential scan and psort(). Your big win is with the partial queries,\nwhere you end up doing a full sequential scan or index scan, then an\nORDER BY, while you really only need a few rows from the query, and if\nyou deal directly with the index, you can prevent many rows from being\nprocessed.
It is the ability to skip processing those extra rows that\nmakes it a big win, not so much the removal of the ORDER BY, though that\nhelps too.\n\nYour solution really is tailored for this 'partial' query application,\nand I think it is a big need for certain applications that can't use\ncursors, like web apps. Most other apps have long-time connections to\nthe database, and are better off with cursors.\n\nI did profiling to improve startup time, because the database\nrequirements of web apps are different from normal db apps, and we have\nto adjust to that.\n\nSo, to reiterate, full queries are not benefited as much from the new\ncode, because sequential scan/psort is faster, or because the index only\nretrieves a small number of rows because the qualification of values is\nvery specific.\n\nThose open-ended, 'give me the rows from 100 to 199' queries really need\nyour modifications.\n\nOK, we have QUERY_LIMIT, and that allows us to throw any query at the\nsystem, and it will return that many of the first rows for the ORDER BY.\nNo fancy stuff required. If we can get a matching index, we may be able\nto remove the requirement of scanning all the rows (with Jan's patch),\nand that is a big win. If not, we at least prevent the rows from being\nreturned to the client.\n\nHowever, there is the OFFSET issue. This is really a case where the\nuser wants to _restart_ the query where they left off. That is a\ndifferent problem. All of a sudden, we need to evaluate more of the\nquery, and return a segment from the middle of the result set.\n\nI think we need to decide how to handle such a restart. Do we\nre-evaluate the entire query, skipping all the rows up to OFFSET, and\nreturn the number of rows they requested after OFFSET? I would think we\ndon't want to do that, do we? It would be much easier to code.
If it\nis a single table, skipping forward has to be done anyway, because we\ncan't just _jump_ to the 100th entry in the index, unless we pass some\n_tid_ to the user, and expect them to pass that back to start the query.\nI don't think we want to do that. It is ugly, and the row may have\nmoved since we started. So, for a single table, adding a QUERY_OFFSET\nwould do exactly what we need, with Jan's patches.\n\nFor a joined query, I think you will have to do the entire _join_ before\nreturning anything.\n\nYou can't just process all the joins up to the OFFSET location, and you\ncan't just jump to the 100th index location, because you don't know that\nthe 100th index location produced the 100th result just returned to the\nuser. You have to process the whole query, because with the join you\ndon't know which data row from each table is going to make which entry\nin the final result. If you are really crafty, and the ORDER BY table is\nin the outer part of the join loop, you could start processing the table\nthat is part of the outer loop in _index_ order, because you know that\nthe rows processed in index order are going to produce the output in\nresult order. You then could process and throw away the results up to\noffset, and generate the needed rows and stop.\n\nThe other way of doing it is to specify a query limit based on specific\nindex entries, so you say I want the query returned by the first 20\nindex entries matching the ORDER BY, or entries 100-199, and the query\nis limited to using only those entries in the index. In that case,\nthough, in joins, you could return more or fewer rows in the result\ndepending on the other tables, and that may be unacceptable. However,\nfor this case, the advantage is that you don't need to process the rows\nfrom 1 to 99 because you have been told the user only wants rows from\ncertain index slots.
If the user requests rows 50000-50100, this would\nbe much faster because you don't have to process the 50000 rows before\nreturning any data. However, I question how often people grab stuff\nfrom the center of large data sets. Seems the QUERY_OFFSET idea may be\neasier for users.\n\nI will be commenting on the rest of the optimization postings tomorrow.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Oct 1998 01:34:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> This is a little bit off-topic,\n> I did some timings with latest cvs on my real database \n> ( all output redirected to /dev/null ), table contains 8798 records,\n> 31 columns, order key have indices.\n> \n> 1.select count(*) from work_flats;\n> 0.02user 0.00system 0:00.18elapsed 10%CPU (0avgtext+0avgdata 0maxresident)k\n> 0inputs+0outputs (131major+21minor)pagefaults 0swaps\n> \n> 2.select * from work_flats order by rooms, metro_id;\n> 2.35user 0.25system 0:10.11elapsed 25%CPU (0avgtext+0avgdata 0maxresident)k\n> 0inputs+0outputs (131major+2799minor)pagefaults 0swaps\n> \n> 3.set query_limit to '150';\n> SET VARIABLE\n> select * from work_flats order by rooms, metro_id;\n> 0.06user 0.00system 0:02.75elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k\n> 0inputs+0outputs (131major+67minor)pagefaults 0swaps\n> \n> 4.begin;\n> declare tt cursor for\n> select * from work_flats order by rooms, metro_id;\n> fetch 150 in tt;\n> end;\n> 0.05user 0.01system 0:02.76elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k\n> 0inputs+0outputs (131major+67minor)pagefaults 0swaps\n> \n> As you can see timings for query_limit and cursor are very similar,\n> I didn't expected this. So, in principle, enhanced version of fetch\n> (with offset) would cover all we need from LIMIT, but query_limit would be\n> still useful, for example to restrict loadness of server.\n> Will all enhancements you discussed go to the 6.4 ?\n> I'm really interested in testing this stuff because I begin new project\n> and everything we discussed here are badly needed.\n> \n\nWhen you say output to /dev/null, is that on the client, on the backend?\nI will assume the client, because of the timings you are reporting.\n\nWhat is the time of this, which has no ORDER BY?\n\n\tselect * from work_flats;\n\n\nAs far as I can tell, the timing differences you are seeing are based on\nthe fact that the data is not being transfered to the client. This is\nthe current sole use of query_limit, and a good one. 
The web-app need\nis to prevent processing of the entire table for just a few rows, and\ncurrently query_limit does not do this, though Jan's patches do this.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Oct 1998 02:15:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> Your solution really is tailored for this 'partial' query application,\n> and I think it is a big need for certain applications that can't use\n> cursors, like web apps. Most other apps have long-time connections to\n> the database, and are better off with cursors.\n\nAnd there are persistant web servers available too, to help work around\nthis \"stateless connection problem\"? Let's remember that we are solving\na problem which has few requirements for data integrity, and which is\nstarting to get out of the realm of Postgres' strengths (almost any\nscheme can barf data up to a client if it doesn't care whether it is\nrepeatable or complete).\n\nNeat stuff though :)\n\n - Tom\n",
"msg_date": "Fri, 16 Oct 1998 06:52:31 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": ">\n> I know the good part of the posting is coming.\n>\n> > Last with the btree index on key and the patched backend.\n> > This time the plan is a plain IndexScan because the ORDER BY\n> > clause exactly matches the sort order of the chosen index.\n> > S1 needs 13 seconds and S2 less than 0.2! This dramatic\n> > speedup comes from the fact, that this time the index scan is\n> > the toplevel executor node and the executor run is stopped\n> > after 20 tuples have been selected.\n>\n> OK, seems like in the S1 case, the use of the psort/ORDER BY code on top\n> of the index was taking and extra 3 seconds, which is 23%. That is a\n> lot more than I thought for the psort code, and shows we could gain a\n> lot by removing unneeded sorts from queries that are already using\n> matching indexes.\n>\n> Just for clarity, added to TODO. I think everyone is clear on this one,\n> and its magnitude is a surprise to me:\n>\n> * Prevent psort() usage when query already using index matching ORDER BY\n>\n>\n\nI can't find the reference to descending order cases except my posting.\nIf we use an index scan to remove sorts in those cases,backward positioning\nand scanning are necessary.\n\n> > Analysis of the above timings:\n> >\n> > If there is an ORDER BY clause, using an index scan is the\n> > clever way if the indexqual dramatically reduces the the\n> > amount of data selected and sorted. I think this is the\n> > normal case (who really selects nearly all rows from a 5M row\n> > table?). So choosing the index path is correct. This will\n> > hurt if someone really selects most of the rows and the index\n> > scan jumps over the disc. But here the programmer should use\n> > an unqualified query to perform a seqscan and do the\n> > qualification in the frontend application.\n>\n> Fortunately, the optimizer already does the index selection for us, and\n> guesses pretty well if the index or sequential scan is better. 
Once we\n> implement the above removal of psort(), we will have to change the\n> timings because now you have to compare index scan against sequential\n> scan AND psort(), because in the index scan situation, you don't need\n> the psort(), assuming the ORDER BY matches the index exactly.\n>\n\nLet t be a table with 2 indices, index1(key1,key2), index2(key1,key3).\ni.e. key1 is common to index1 and index2.\n\nAnd for the query\n select * from t where key1>....;\n\nIf PosgreSQL optimizer choose [ index scan on index1 ] we can't remove\nsorts from the following query.\n\tselect * from t where key1>... order by key1,key3;\n\nSimilarly if [ index scan on index2 ] are chosen we can't remove sorts\nfrom the following query.\n\tselect * from t where key1>... order by key1,key2;\n\nBut in both cases (clever) optimizer can choose another index for scan.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Tue, 20 Oct 1998 17:24:09 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n\n> > * Prevent psort() usage when query already using index matching ORDER BY\n> >\n> >\n>\n> I can't find the reference to descending order cases except my posting.\n> If we use an index scan to remove sorts in those cases,backward positioning\n> and scanning are necessary.\n\n I think it's only thought as a reminder that the optimizer\n needs some optimization.\n\n That topic, and the LIMIT stuff too I think, is past 6.4 work\n and may go into a 6.4.1 performance release. So when we are\n after 6.4, we have enough time to work out a real solution,\n instead of just throwing in a patch as a quick shot.\n\n What we two did where steps in the same direction. Your one\n covers more situations, but after all if multiple people have\n the same idea there is a good chance that it is the right\n thing to do.\n\n>\n> Let t be a table with 2 indices, index1(key1,key2), index2(key1,key3).\n> i.e. key1 is common to index1 and index2.\n>\n> And for the query\n> select * from t where key1>....;\n>\n> If PosgreSQL optimizer choose [ index scan on index1 ] we can't remove\n> sorts from the following query.\n> select * from t where key1>... order by key1,key3;\n>\n> Similarly if [ index scan on index2 ] are chosen we can't remove sorts\n> from the following query.\n> select * from t where key1>... order by key1,key2;\n>\n> But in both cases (clever) optimizer can choose another index for scan.\n\n Right. As I remember, your solution does basically the same\n as my one. It does not change the optimizers decision about\n the index or if an index at all is used. So I assume they\n hook into the same position where depending on the order by\n clause the sort node is added. And that is at the very end of\n the optimizer.\n\n What you describe above requires changes in upper levels of\n optimization. Doing that is far away from my knowledge about\n the optimizer. 
And some of your earlier statements let me\n think you aren't familiar enough with it too. We need at\n least help from others to do it well.\n\n I don't want to dive that deep into the optimizer. There was\n a far too long time where the rule system was broken and got\n out of sync with the parser/optimizer capabilities. I fixed\n many things in it for 6.4. My first priority now is, not to\n let such a situation come up again.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 20 Oct 1998 11:25:22 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> >\n> > I know the good part of the posting is coming.\n> >\n> > > Last with the btree index on key and the patched backend.\n> > > This time the plan is a plain IndexScan because the ORDER BY\n> > > clause exactly matches the sort order of the chosen index.\n> > > S1 needs 13 seconds and S2 less than 0.2! This dramatic\n> > > speedup comes from the fact, that this time the index scan is\n> > > the toplevel executor node and the executor run is stopped\n> > > after 20 tuples have been selected.\n> >\n> > OK, seems like in the S1 case, the use of the psort/ORDER BY code on top\n> > of the index was taking and extra 3 seconds, which is 23%. That is a\n> > lot more than I thought for the psort code, and shows we could gain a\n> > lot by removing unneeded sorts from queries that are already using\n> > matching indexes.\n> >\n> > Just for clarity, added to TODO. I think everyone is clear on this one,\n> > and its magnitude is a surprise to me:\n> >\n> > * Prevent psort() usage when query already using index matching ORDER BY\n> >\n> >\n\nIn a multi-column ORDER BY, the direction of the sorts will have to be\nidentical too. That is assumed, I think. If all are descending, I\nthink we can traverse the index in reverse order, or can't we do that. \nI am not sure, but if we can't, descending would fail, and require a\npsort.\n\n\n> \n> I can't find the reference to descending order cases except my posting.\n> If we use an index scan to remove sorts in those cases,backward positioning\n> and scanning are necessary.\n> \n> > > Analysis of the above timings:\n> > >\n> > > If there is an ORDER BY clause, using an index scan is the\n> > > clever way if the indexqual dramatically reduces the the\n> > > amount of data selected and sorted. I think this is the\n> > > normal case (who really selects nearly all rows from a 5M row\n> > > table?). So choosing the index path is correct. 
This will\n> > > hurt if someone really selects most of the rows and the index\n> > > scan jumps over the disc. But here the programmer should use\n> > > an unqualified query to perform a seqscan and do the\n> > > qualification in the frontend application.\n> >\n> > Fortunately, the optimizer already does the index selection for us, and\n> > guesses pretty well if the index or sequential scan is better. Once we\n> > implement the above removal of psort(), we will have to change the\n> > timings because now you have to compare index scan against sequential\n> > scan AND psort(), because in the index scan situation, you don't need\n> > the psort(), assuming the ORDER BY matches the index exactly.\n> >\n> \n> Let t be a table with 2 indices, index1(key1,key2), index2(key1,key3).\n> i.e. key1 is common to index1 and index2.\n> \n> And for the query\n> select * from t where key1>....;\n> \n> If PostgreSQL optimizer choose [ index scan on index1 ] we can't remove\n> sorts from the following query.\n> \tselect * from t where key1>... order by key1,key3;\n> \n> Similarly if [ index scan on index2 ] are chosen we can't remove sorts\n> from the following query.\n> \tselect * from t where key1>... order by key1,key2;\n> \n> But in both cases (clever) optimizer can choose another index for scan.\n\nYes, the optimizer is going to have to be smart by looking at the ORDER\nBY, and nudging the code to favor a certain index. This is also true in\na join, where we will want to use an index in cases we would normally\nnot use it, and prefer a certain index over others.\n\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Oct 1998 12:44:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> Right. As I remember, your solution does basically the same\n> as my one. It does not change the optimizers decision about\n> the index or if an index at all is used. So I assume they\n> hook into the same position where depending on the order by\n> clause the sort node is added. And that is at the very end of\n> the optimizer.\n> \n> What you describe above requires changes in upper levels of\n> optimization. Doing that is far away from my knowledge about\n> the optimizer. And some of your earlier statements let me\n> think you aren't familiar enough with it too. We need at\n> least help from others to do it well.\n> \n> I don't want to dive that deep into the optimizer. There was\n> a far too long time where the rule system was broken and got\n> out of sync with the parser/optimizer capabilities. I fixed\n> many things in it for 6.4. My first priority now is, not to\n> let such a situation come up again.\n\nI agree. Another good thing is that the LIMIT thing will not require a\ndump/reload, so it is a good candidate for a minor release.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Oct 1998 12:45:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> >\n> > I agree. Another good thing is that the LIMIT thing will not require a\n> > dump/reload, so it is a good candidate for a minor release.\n> \n> That's wrong, sorry.\n> \n> The limit thing as I implemented it adds 2 new variables to\n> the Query structure. Rewrite rules are stored as querytrees\n> and in the existing pg_rewrite entries that would be missing.\n\nOh, sorry. I forgot. That could be tough.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Oct 1998 13:02:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": ">\n> I agree. Another good thing is that the LIMIT thing will not require a\n> dump/reload, so it is a good candidate for a minor release.\n\n That's wrong, sorry.\n\n The limit thing as I implemented it adds 2 new variables to\n the Query structure. Rewrite rules are stored as querytrees\n and in the existing pg_rewrite entries that would be missing.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 20 Oct 1998 19:12:19 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> \n> > >\n> > > I agree. Another good thing is that the LIMIT thing will not require a\n> > > dump/reload, so it is a good candidate for a minor release.\n> > \n> > That's wrong, sorry.\n> > \n> > The limit thing as I implemented it adds 2 new variables to\n> > the Query structure. Rewrite rules are stored as querytrees\n> > and in the existing pg_rewrite entries that would be missing.\n> \n> Oh, sorry. I forgot. That could be tough.\n\n But it wouldn't hurt to add them now to have them in\n place. The required out-, read- and copyfuncs are in\n my patch too. This would prevent dump/load when we\n later add the real LIMIT functionality. And it does\n not change anything now.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n",
"msg_date": "Tue, 20 Oct 1998 19:22:40 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> >\n> > I agree. Another good thing is that the LIMIT thing will not require a\n> > dump/reload, so it is a good candidate for a minor release.\n> \n> That's wrong, sorry.\n> \n> The limit thing as I implemented it adds 2 new variables to\n> the Query structure. Rewrite rules are stored as querytrees\n> and in the existing pg_rewrite entries that would be missing.\n\nNot sure how to address this. Perhaps we could write a query as part of\nthe upgrade that added these to the existing rules, or we could require\nan initdb of all beta users.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Oct 1998 13:26:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> > \n> > > >\n> > > > I agree. Another good thing is that the LIMIT thing will not require a\n> > > > dump/reload, so it is a good candidate for a minor release.\n> > > \n> > > That's wrong, sorry.\n> > > \n> > > The limit thing as I implemented it adds 2 new variables to\n> > > the Query structure. Rewrite rules are stored as querytrees\n> > > and in the existing pg_rewrite entries that would be missing.\n> > \n> > Oh, sorry. I forgot. That could be tough.\n> \n> But it wouldn't hurt to add them now to have them in\n> place. The required out-, read- and copyfuncs are in\n> my patch too. This would prevent dump/load when we\n> later add the real LIMIT functionality. And it does\n> not change anything now.\n> \n\nJan, we found that I am having to require an initdb for the INET/CIDR\ntype, so if you want stuff to change the views/rules for the limit\naddition post 6.4, please send them in and I will apply them.\n\nYou clearly have the syntax down, so I think you should go ahead.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Oct 1998 02:09:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> Jan, we found that I am having to require an initdb for the INET/CIDR\n> type, so if you want stuff to change the views/rules for the limit\n> addition post 6.4, please send them in and I will apply them.\n> \n> You clearly have the syntax down, so I think you should go ahead.\n\n This is the part that will enable post 6.4 add of the\n LIMIT stuff without initdb.\n\n Regression tested.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\ndiff -cr src.orig/backend/nodes/copyfuncs.c src/backend/nodes/copyfuncs.c\n*** src.orig/backend/nodes/copyfuncs.c\tFri Oct 16 11:53:40 1998\n--- src/backend/nodes/copyfuncs.c\tFri Oct 16 13:32:35 1998\n***************\n*** 1578,1583 ****\n--- 1578,1586 ----\n \t\tnewnode->unionClause = temp_list;\n \t}\n \n+ \tNode_Copy(from, newnode, limitOffset);\n+ \tNode_Copy(from, newnode, limitCount);\n+ \n \treturn newnode;\n }\n \ndiff -cr src.orig/backend/nodes/outfuncs.c src/backend/nodes/outfuncs.c\n*** src.orig/backend/nodes/outfuncs.c\tFri Oct 16 11:53:40 1998\n--- src/backend/nodes/outfuncs.c\tFri Oct 16 13:30:50 1998\n***************\n*** 259,264 ****\n--- 259,268 ----\n \tappendStringInfo(str, (node->hasSubLinks ? 
\"true\" : \"false\"));\n \tappendStringInfo(str, \" :unionClause \");\n \t_outNode(str, node->unionClause);\n+ \tappendStringInfo(str, \" :limitOffset \");\n+ \t_outNode(str, node->limitOffset);\n+ \tappendStringInfo(str, \" :limitCount \");\n+ \t_outNode(str, node->limitCount);\n }\n \n static void\ndiff -cr src.orig/backend/nodes/readfuncs.c src/backend/nodes/readfuncs.c\n*** src.orig/backend/nodes/readfuncs.c\tFri Oct 16 11:53:40 1998\n--- src/backend/nodes/readfuncs.c\tFri Oct 16 13:31:43 1998\n***************\n*** 163,168 ****\n--- 163,174 ----\n \ttoken = lsptok(NULL, &length);\t\t/* skip :unionClause */\n \tlocal_node->unionClause = nodeRead(true);\n \n+ \ttoken = lsptok(NULL, &length);\t\t/* skip :limitOffset */\n+ \tlocal_node->limitOffset = nodeRead(true);\n+ \n+ \ttoken = lsptok(NULL, &length);\t\t/* skip :limitCount */\n+ \tlocal_node->limitCount = nodeRead(true);\n+ \n \treturn local_node;\n }\n \ndiff -cr src.orig/include/nodes/parsenodes.h src/include/nodes/parsenodes.h\n*** src.orig/include/nodes/parsenodes.h\tFri Oct 16 11:53:58 1998\n--- src/include/nodes/parsenodes.h\tFri Oct 16 13:35:32 1998\n***************\n*** 60,65 ****\n--- 60,67 ----\n \n \tList\t *unionClause;\t/* unions are linked under the previous\n \t\t\t\t\t\t\t\t * query */\n+ \tNode\t *limitOffset;\t/* # of result tuples to skip */\n+ \tNode\t *limitCount;\t\t/* # of result tuples to return */\n \n \t/* internal to planner */\n \tList\t *base_rel_list;\t/* base relation list */\n***************\n*** 639,644 ****\n--- 641,648 ----\n \tchar\t *portalname;\t\t/* the portal (cursor) to create */\n \tbool\t\tbinary;\t\t\t/* a binary (internal) portal? */\n \tbool\t\tunionall;\t\t/* union without unique sort */\n+ \tNode\t *limitOffset;\t/* # of result tuples to skip */\n+ \tNode\t *limitCount;\t\t/* # of result tuples to return */\n } SelectStmt;\n \n \n",
"msg_date": "Thu, 22 Oct 1998 10:53:10 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> > Jan, we found that I am having to require an initdb for the INET/CIDR\n> > type, so if you want stuff to change the views/rules for the limit\n> > addition post 6.4, please send them in and I will apply them.\n> > \n> > You clearly have the syntax down, so I think you should go ahead.\n> \n> This is the part that will enable post 6.4 add of the\n> LIMIT stuff without initdb.\n> \n> Regression tested.\n\nApplied.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Oct 1998 09:52:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "Jan, any luck in getting the older elephant picture where the elephant's\ncolor was more natural and not so gray?\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 14 Jun 1999 18:36:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "elephant picture"
},
{
"msg_contents": "\nOn 14-Jun-99 Bruce Momjian wrote:\n> Jan, any luck in getting the older elephant picture where the elephant's\n> color was more natural and not so gray?\n\nI wouldn't worry about it. The new one has a blue tint to it :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH   email: [email protected]   flame-mail: /dev/null\n       # include <std/disclaimers.h>                   TEAM-OS2\n  Online Campground Directory    http://www.camping-usa.com\n  Online Giftshop Superstore     http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 14 Jun 1999 20:23:58 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [DOCS] elephant picture"
},
{
"msg_contents": "> \n> \n> On 14-Jun-99 Bruce Momjian wrote:\n> > Jan, any luck in getting the older elephant picture where the elephant's\n> > color was more natural and not so gray?\n> \n> I wouldn't worry about it. The new one has a blue tint to it :)\n\n    Like an HP49G :-)\n\n\tI like it too.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#========================================= [email protected] (Jan Wieck) #\n\n",
"msg_date": "Tue, 15 Jun 1999 12:02:24 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] elephant picture"
}
] |
[
{
"msg_contents": "\nI reported earlier a SegFault when doing an initdb, and have narrowed it\ndown somewhat...still probing, but figured I'd see if I'm overlooking\nsomething obvious...\n\n>From the command line, I'm running:\n\necho \"vacuum\" | postgres -o /dev/null -F -Q -D/home/centre/marc/pgsql/data\n\nThis is the stage that the first SegFault happens in initdb.\n\nI added some debugging, and it turns out that the 'DataDir' variable isn't\nbeing initialized at this point:\n\nvoid\nread_pg_options(SIGNAL_ARGS)\n{\n int fd;\n int n;\n int verbose;\n char buffer[BUF_SIZE];\n char c;\n char *s,\n *p;\n \n printf(\"before sprintf()\\n\");\n printf(\"%s\\n\", DataDir);\n sprintf(buffer, \"%s/%s\", DataDir, \"pg_options\");\n printf(\"after sprintf()\\n\");\n if ((fd = open(buffer, O_RDONLY)) < 0)\n return;\n\n=====================\n\nStill looking, but uncovered a slight bug:\n\ndiff -cr postgres.c.orig postgres.c\n*** postgres.c.orig Tue Oct 13 16:47:00 1998\n--- postgres.c Tue Oct 13 16:47:33 1998\n***************\n*** 1052,1057 ****\n--- 1052,1058 ----\n \n case 'D': /* PGDATA directory */\n DataDir = optarg;\n+ break;\n \n case 'd': /* debug level */\n flagQ = false;\n\n\nFurther into it...if I do a setenv PGDATA, it gets around the 'bug'...back\nlater...\n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Tue, 13 Oct 1998 15:51:07 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd problem with read_pg_options ..."
},
{
"msg_contents": "> \n> \n> I reported earlier a SegFault when doing an initdb, and have narrowed it\n> down somewhat...still probing, but figured I'd see if I'm overlooking\n> something obvious...\n> \n> echo \"vacuum\" | postgres -o /dev/null -F -Q -D/home/centre/marc/pgsql/data\n> \n> This is the stage that the first SegFault happens in initdb.\n> \n> I added some debugging, and it turns out that the 'DataDir' variable isn't\n> being initialized at this point:\n> \n> void\n> read_pg_options(SIGNAL_ARGS)\n> {\n>     int fd;\n>     int n;\n>     int verbose;\n>     char buffer[BUF_SIZE];\n>     char c;\n>     char *s,\n>     *p;\n> \n>     printf(\"before sprintf()\\n\");\n>     printf(\"%s\\n\", DataDir);\n>     sprintf(buffer, \"%s/%s\", DataDir, \"pg_options\");\n>     printf(\"after sprintf()\\n\");\n>     if ((fd = open(buffer, O_RDONLY)) < 0)\n>         return;\n> \n> =====================\n> \n> Still looking, but uncovered a slight bug:\n> \n> diff -cr postgres.c.orig postgres.c\n> *** postgres.c.orig Tue Oct 13 16:47:00 1998\n> --- postgres.c Tue Oct 13 16:47:33 1998\n> ***************\n> *** 1052,1057 ****\n> --- 1052,1058 ----\n> \n> case 'D': /* PGDATA directory */\n> DataDir = optarg;\n> + break;\n> \n> case 'd': /* debug level */\n> flagQ = false;\n> \n> \n> Further into it...if I do a setenv PGDATA, it gets around the 'bug'...back\n> later...\n> \n> Marc G. Fournier                   [email protected]\n> Systems Administrator @ hub.org \n> scrappy@{postgresql|isc}.org           ICQ#7615664\n> \n\nThe problem is that read_pg_options needs DataDir to read its file but\nDataDir is set after read_pg_options if postgres is called interactively.\nIf postgres is forked by postgres DataDir is read from the PGDATA environment\nvariable set by the postmaster and this explains why the bug disappears.\nI have written this patch but I don't like it. 
Any better idea?\n\n*** src/backend/utils/init/globals.c.orig\tThu Oct 15 00:13:03 1998\n--- src/backend/utils/init/globals.c\tThu Oct 15 00:13:07 1998\n***************\n*** 46,52 ****\n struct Port *MyProcPort;\n long\t\tMyCancelKey;\n \n! char\t *DataDir;\n \n /*\n * The PGDATA directory user says to use, or defaults to via environment\n--- 46,52 ----\n struct Port *MyProcPort;\n long\t\tMyCancelKey;\n \n! char\t *DataDir = NULL;\n \n /*\n * The PGDATA directory user says to use, or defaults to via environment\n*** src/backend/utils/misc/trace.c.orig\tThu Sep 3 09:00:39 1998\n--- src/backend/utils/misc/trace.c\tThu Oct 15 00:18:49 1998\n***************\n*** 343,348 ****\n--- 343,353 ----\n \tchar\t *s,\n \t\t\t *p;\n \n+ \tif (!DataDir) {\n+ \t fprintf(stderr, \"read_pg_options: DataDir not defined\\n\");\n+ \t return;\n+ \t}\n+ \n \tsprintf(buffer, \"%s/%s\", DataDir, \"pg_options\");\n \tif ((fd = open(buffer, O_RDONLY)) < 0)\n \t\treturn;\n*** src/backend/tcop/postgres.c.orig\tTue Sep 1 09:01:27 1998\n--- src/backend/tcop/postgres.c\tThu Oct 15 00:23:24 1998\n***************\n*** 1049,1055 ****\n--- 1049,1061 ----\n \t\t\t\tbreak;\n \n \t\t\tcase 'D':\t\t\t/* PGDATA directory */\n+ \t\t\t if (!DataDir) {\n+ \t\t\t\t DataDir = optarg;\n+ \t\t\t\t /* must be done after DataDir is defined */\n+ \t\t\t\t read_pg_options(0);\n+ \t\t\t\t}\n \t\t\t\tDataDir = optarg;\n+ \t\t\t\tbreak;\n \n \t\t\tcase 'd':\t\t\t/* debug level */\n \t\t\t\tflagQ = false;\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-461-534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Thu, 15 Oct 1998 00:37:05 +0200 (MET DST)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Odd problem with read_pg_options ..."
},
{
"msg_contents": "> The problem is that read_pg_options needs DataDir to read its file but\n> DataDir is set after read_pg_options if postgres is called interactively.\n> If postgres is forked by postgres DataDir is read from the PGDATA enviromnent\n> variable set by the postmaster and this explains while the bug disappears.\n> I have written this patch but I don't like it. Any better idea?\n\nI have applied it. You can improve it later, if you wish.\n\n> \n> *** src/backend/utils/init/globals.c.orig\tThu Oct 15 00:13:03 1998\n> --- src/backend/utils/init/globals.c\tThu Oct 15 00:13:07 1998\n> ***************\n> *** 46,52 ****\n> struct Port *MyProcPort;\n> long\t\tMyCancelKey;\n> \n> ! char\t *DataDir;\n> \n> /*\n> * The PGDATA directory user says to use, or defaults to via environment\n> --- 46,52 ----\n> struct Port *MyProcPort;\n> long\t\tMyCancelKey;\n> \n> ! char\t *DataDir = NULL;\n> \n> /*\n> * The PGDATA directory user says to use, or defaults to via environment\n> *** src/backend/utils/misc/trace.c.orig\tThu Sep 3 09:00:39 1998\n> --- src/backend/utils/misc/trace.c\tThu Oct 15 00:18:49 1998\n> ***************\n> *** 343,348 ****\n> --- 343,353 ----\n> \tchar\t *s,\n> \t\t\t *p;\n> \n> + \tif (!DataDir) {\n> + \t fprintf(stderr, \"read_pg_options: DataDir not defined\\n\");\n> + \t return;\n> + \t}\n> + \n> \tsprintf(buffer, \"%s/%s\", DataDir, \"pg_options\");\n> \tif ((fd = open(buffer, O_RDONLY)) < 0)\n> \t\treturn;\n> *** src/backend/tcop/postgres.c.orig\tTue Sep 1 09:01:27 1998\n> --- src/backend/tcop/postgres.c\tThu Oct 15 00:23:24 1998\n> ***************\n> *** 1049,1055 ****\n> --- 1049,1061 ----\n> \t\t\t\tbreak;\n> \n> \t\t\tcase 'D':\t\t\t/* PGDATA directory */\n> + \t\t\t if (!DataDir) {\n> + \t\t\t\t DataDir = optarg;\n> + \t\t\t\t /* must be done after DataDir is defined */\n> + \t\t\t\t read_pg_options(0);\n> + \t\t\t\t}\n> \t\t\t\tDataDir = optarg;\n> + \t\t\t\tbreak;\n> \n> \t\t\tcase 'd':\t\t\t/* debug level */\n> \t\t\t\tflagQ = false;\n> 
\n> -- \n> Massimo Dal Zotto\n> \n> +----------------------------------------------------------------------+\n> | Massimo Dal Zotto email: [email protected] |\n> | Via Marconi, 141 phone: ++39-461-534251 |\n> | 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n> | Italy pgp: finger [email protected] |\n> +----------------------------------------------------------------------+\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Oct 1998 02:05:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Odd problem with read_pg_options ..."
}
] |
[
{
"msg_contents": "\nFound it...we do a read_pg_options *before* doing the getopt():\n\n> echo \"vacuum\" | postgres -o /dev/null -F -Q -D\n/home/centre/marc/pgsql/data\nbefore sprintf()\n/home/centre/marc/pgsql/data\nafter sprintf()\n-D option\n\nSo, when DataDir isn't set by the ENV variable, of course, this doesn't\nwork...\n\nHate to be picky here, but there isn't even any documentation on the\npg_options file:\n\nPattern not found\n> grep pg_option *\n> pwd\n/usr/local/src/marc/pgsql/src/man\n\n\nI just added a check to postgres.c so that if(DataDir) is set, *then* do\nread_pg_options, else skip it. Whoever added this code in, would it make\nbetter sense to have this parsed *after* getopt() is finished? And is\nthere any documentation for this feature?\n\nMarc G. Fournier                   [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org           ICQ#7615664\n\n",
"msg_date": "Tue, 13 Oct 1998 16:06:44 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ordering problem in backend/tcop/postgres.c ..."
}
] |
[
{
"msg_contents": "Here are patches needed to complie under AIX 4.2.\nI changed configure.in, pqcomm.c, config.h.in, and fe-connect.c.\nAlso I had to install flex because lex did not want to translate pgc.l.\n\n*** configure.in\tTue Oct 13 14:05:50 1998\n--- configure.in.orig\tTue Oct 13 14:02:18 1998\n***************\n*** 444,458 ****\n AC_HEADER_TIME\n AC_STRUCT_TM\n \n- AC_MSG_CHECKING(for type of last arg to accept)\n- AC_TRY_COMPILE([#include <stdlib.h>\n- #include <sys/types.h>\n- #include <sys/socket.h>\n- ],\n- [int a = accept(1, (struct sockaddr *) 0, (size_t *) 0);],\n- [AC_DEFINE(SOCKET_SIZE_TYPE, size_t) AC_MSG_RESULT(size_t)],\n- [AC_DEFINE(SOCKET_SIZE_TYPE, int) AC_MSG_RESULT(int)])\n- \n dnl Check for any \"odd\" conditions\n AC_MSG_CHECKING(for int timezone)\n AC_TRY_LINK([#include <time.h>],\n--- 444,449 ----\n\n*** pqcomm.c\tTue Oct 13 15:11:56 1998\n--- pqcomm.c.orig\tTue Oct 13 14:48:44 1998\n***************\n*** 660,667 ****\n int\n StreamConnection(int server_fd, Port *port)\n {\n! \tint\t\t\tlen;\n! \tSOCKET_SIZE_TYPE\taddrlen;\n \tint\t\t\tfamily = port->raddr.sa.sa_family;\n \n \t/* accept connection (and fill in the client (remote) address) */\n--- 660,667 ----\n int\n StreamConnection(int server_fd, Port *port)\n {\n! \tint\t\t\tlen,\n! \t\t\t\taddrlen;\n \tint\t\t\tfamily = port->raddr.sa.sa_family;\n \n \t/* accept connection (and fill in the client (remote) address) */\n***************\n*** 732,739 ****\n int\n StreamOpen(char *hostName, short portName, Port *port)\n {\n! \tSOCKET_SIZE_TYPE\tlen;\n! \tint\t\t\terr;\n \tstruct hostent *hp;\n \textern int\terrno;\n \n--- 732,739 ----\n int\n StreamOpen(char *hostName, short portName, Port *port)\n {\n! \tint\t\t\tlen,\n! 
\t\t\t\terr;\n \tstruct hostent *hp;\n \textern int\terrno;\n \n\n*** config.h.in\tTue Oct 13 14:32:40 1998\n--- config.h.in.orig\tTue Oct 13 14:33:28 1998\n***************\n*** 210,218 ****\n /* Set to 1 if you want to Enable ASSERT CHECKING */\n #undef USE_ASSERT_CHECKING\n \n- /* Define as the base type of the last arg to accept */\n- #undef SOCKET_SIZE_TYPE\n- \n /*\n * Code below this point should not require changes\n */\n--- 210,215 ----\n\n*** fe-connect.c\tTue Oct 13 15:20:14 1998\n--- fe-connect.c.orig\tTue Oct 13 15:19:10 1998\n***************\n*** 472,478 ****\n \n \tStartupPacket sp;\n \tAuthRequest areq;\n! \tSOCKET_SIZE_TYPE\tladdrlen = sizeof(SockAddr);\n \tint\t\t\tportno,\n \t\t\t\tfamily,\n \t\t\t\tlen;\n--- 472,478 ----\n \n \tStartupPacket sp;\n \tAuthRequest areq;\n! \tint\t\t\tladdrlen = sizeof(SockAddr);\n \tint\t\t\tportno,\n \t\t\t\tfamily,\n \t\t\t\tlen;\n\n\n\n-----Original Message-----\nFrom:\tMarc G. Fournier [SMTP:[email protected]]\nSent:\tMonday, October 12, 1998 8:38 PM\nTo:\tPeter Gucwa\nCc:\t'[email protected]'; '[email protected]'\nSubject:\tRe: [HACKERS] compilation problem on AIX\n\nOn Mon, 12 Oct 1998, Peter Gucwa wrote:\n\n> Does somebody have solution for this problem that was discussed here a month ago?\n> \n> >> \n> >> the stream functions on AIX need a size_t for addrlen's in fe-connect.c and pqcomm.c.\n> >>This has come up before. AIX wants size_t for certain structures like\n> >getsockname(). I believe the third parameter on AIX is size_t, while it\n> >used to be int on my machine, but is not socklen_t. Is this correct? \n> >The 'int' code works fine for me, but I can see why AIX is having a\n> >problem, and perhaps it is time for configure to check on the various\n> >types.\n> >\n> >\tgetsockname(int s, struct sockaddr *name, socklen_t *namelen);\n> \n> Ok, so this gets tricky. 
In 4.2.1 it is size_t and in 4.3.1 it is as\n> above with socklen_t :-(\n\nIf someone can make me a *short* code stub that fails to compile depending\non which is used, I can add this to configure...\n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Tue, 13 Oct 1998 16:22:47 -0400",
"msg_from": "Peter Gucwa <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] compilation problem on AIX"
},
{
"msg_contents": "> > >\tgetsockname(int s, struct sockaddr *name, socklen_t *namelen);\n> >\n> > Ok, so this gets tricky. In 4.2.1 it is size_t and in 4.3.1 it is as\n> > above with socklen_t :-(\n>\n> If someone can make me a *short* code stub that fails to compile depending\n> on which is used, I can add this to configure...\n\n--- cut here ---\n#include <sys/socket.h>\n\nint getsockname(int, struct sockaddr *, socklen_t *);\n\nint x() {return 0;}\n--- cut here ---\n\nIf your compiler insists on size_t instead of socklen_t, this will fail with\nan error.\n\nMy test (since linux doesn't care):\n\ntest.c:\nextern int x(int);\n\nextern int x(long);\n\n% gcc -c test.c\n~\ntest.c:3: conflicting types for `x'\ntest.c:1: previous declaration of `x'\n% echo $?\n~\n1\n\nEnjoy.\n\nTaral\n\n",
"msg_date": "Tue, 13 Oct 1998 17:08:57 -0500",
"msg_from": "\"Taral\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] compilation problem on AIX"
}
] |
[
{
"msg_contents": ">On Tue, 13 Oct 1998, Eric Lee Green wrote:\n>\n>> On Tue, 13 Oct 1998, Jeff Hoffmann wrote:\n>> > >I agree completely, LIMIT would be VERY useful in web based apps,\nwhich\n>> > >is all I run. It does not matter to me if it is not part of a formal\n>> > >standard. The idea is so common that it is a defacto standard.\n>> >\n>> > i'm not familiar with mysql and using \"LIMIT\" but wouldn't this same\neffect\n>> > be achieved by declaring a cursor and fetching however many records in\nthe\n>> > cursor? it's a very noticeable improvement when you only want the\nfirst 20\n>> > out of 500 in a 200k record database, at least.\n>>\n>> The problem with declaring a cursor vs. the \"LIMIT\" clause is that the\n>> \"LIMIT\" clause, if used properly by the database engine (along with the\n>> database engine using indexes in \"ORDER BY\" clauses) allows the database\n>> engine to short-circuit the tail end of the query. That is, if you have\n25\n>> names and the last one ends with BEAVIS, the database engine doesn't have\n>> to go through the BUTTHEADS and KENNYs and etc.\n>>\n>> Theoretically a cursor is superior to the \"LIMIT\" clause because you're\n>> eventually going to want the B's and K's and etc. anyhow -- but only in a\n>> stateful environment. In the stateless web environment, a cursor is\n>> useless because the connection can close at any time even when you're\n>> using \"persistent\" connections (and of course when the connection closes\n>> the cursor closes).\n>\n>Ookay, I'm sorry, but you lost me here. I haven't gotten into using\n>CURSORs/FETCHs yet, since I haven't needed it...but can you give an example\n>of what you would want to do using a LIMIT? 
I may be missing something,\n>but what is the difference between using LIMIT to get X records, and\n>defining a cursor to FETCH X records?\n>\n>Practical example of *at least* the LIMIT side would be good, so that we\n>can at least see a physical example of what LIMIT can do that\n>CURSORs/FETCH can't...\n>\n\n\nfetch with cursors should work properly (i.e., you can short circuit it by\njust ending your transaction) my understanding on how this works is exactly\nhow you explained LIMIT to work. here's some empirical proof from one of my\nsample databases:\n\nthe sample table i'm using has 156k records (names of people)\ni'm using a PP180 with 128MB RAM and some old slow SCSI drives.\n\npublic_mn=> select count(*) from public_ramsey;\n count\n------\n156566\n(1 row)\n\ni did the following query:\npublic_mn=> select * from public_ramsey where ownerlname ~ 'SMITH';\n\nwhich returned 711 matches and took about 12 seconds.\n\ni did the same thing with a cursor:\n\npublic_mn=> begin;\nBEGIN\npublic_mn=> declare test cursor for select * from public_ramsey where\nownerlname ~ 'SMITH';\nSELECT\n\nthe select was instantaneous.\n\npublic_mn=> fetch 20 in test;\n\nreturns 20 records almost instantaneously. each additional 20 took less\nthan a second, as well.\n\nif this isn't what you're talking about, i don't understand what you're\nsaying.\n\njeff\n",
"msg_date": "Tue, 13 Oct 1998 16:56:48 -0500",
"msg_from": "\"Jeff Hoffmann\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
}
] |
[
{
"msg_contents": "\nI just built and regression tested the current source tree on both Solaris\nx86 and Solaris Sparc, and other than a few bugs that I've fixed, it was\nsmooth...\n\nAny arguments against getting a BETA2 out tomorrow afternoon? \n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Tue, 13 Oct 1998 18:50:05 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "> Any arguments against getting a BETA2 out tomorrow afternoon?\n\nNone, though I've just stumbled across some config stuff which would be\nnice to clean up.\n\nIt came up when I tried upgrading compilers. The new one omitted an\nexplicit cpp, the preprocessor. Builds failed because references to it\nare hardcoded, along with paths to find it, in at least two script files\nfor the backend. \n\nIt also turns out that autoconf already checks for cpp, or the\nequivalent, but the result wasn't being used. So, fine, but...\n\nautoconf concludes that \"gcc -E\" is equivalent to cpp on my system. And\nit is, except that it needs an explicit bare \"-\" argument to try reading\nfrom a pipe, which is how cpp was being used. I can test for \"gcc\" being\nin the command, and add the argument, _or_ can change the scripts to\nwrite a temporary file instead (they already write some temp files).\n\nComments? Suggestions??\n\n - Tom\n\nOh, I'm probably going to revert back to the compiler package which\nincludes cpp...\n",
"msg_date": "Wed, 14 Oct 1998 01:33:45 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "I will be posting a configure.in tonight with further clean ups to the TCL/TK \nconfiguration. TCL/TK compile information will now be obtained from the \ntclConfig.sh and tkConfig.sh, without having to have a list of different \nversions of TCL and TK to search for.\n\nTwo items I would like to bring up for discussion are:\n\n1. Currently TCL/TK support is disabled if TCL is present, but TK is not. This\n is not a good thing because the PL/tcl language is only dependent on TCL,\n not TK. Also pgtclsh only requires TCL. I propose changing configure so\n that TCL and TK support are separate and the TCL dependent parts of the\n postgreSQL distribution will still build even if TK is not present.\n\n2. There are currently duplicate Makefile.tcldefs files. I propose creating a\n Makefile.tcldefs and Makefile.tkdefs at the same directory level as\n Makefile.global and have any Makefile that needs them include them from\n that location. This would then become the 'standard' for compiling TCL\n and TK based packages in the postgreSQL distribution.\n\nAny comments and/or concerns?\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n",
"msg_date": "Tue, 13 Oct 1998 22:11:15 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ... "
},
{
"msg_contents": "> I will be posting a configure.in tonight with further clean ups to the TCL/TK \n> configuration. TCL/TK compile information will now be obtained from the \n> tclConfig.sh and tkConfig.sh, without having to have a list of different \n> versions of TCL and TK to search for.\n> \n> Two item I would like to bring up for discussion is:\n> \n> 1. Currently TCL/TK support is disabled if TCL is present, but TK is not. This\n> is not a good thing because the PL/tcl language is only dependant on TCL,\n> not TK. Also pgtclsh only requires TCL. I propose changing configure so\n> that TCL and TK support are seperate and the TCL dependant parts of the\n> postgreSQL distribution will still build even if TK is not present.\n> \n> 2. There is currently duplicate Makefile.tcldefs. I propose create a\n> Makefile.tcldefs and Makefile.tkdefs at the same directory level as\n> Makefile.global and have any Makefile that needs them include them from\n> that location. This would then become the 'standard' for compiling TCL\n> and TK based packages in the postgreSQL distribution.\n> \n> Any comments and/or concerns?\n\nBe aware I removed the TCL_LIB, ... stuff from configure.in this\nmorning. That was the old stuff, and your new stuff made it\nunnecessary.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Tue, 13 Oct 1998 22:43:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "> autoconf concludes that \"gcc -E\" is equivalent to cpp on my system. And\n> it is, except that it needs an explicit bare \"-\" argument to try reading\n> from a pipe, which is how cpp was being used. I can test for \"gcc\" being\n> in the command, and add the argument, _or_ can change the scripts to\n> write a temporary file instead (they already write some temp files).\n\nAdd the following to configure.in after AC_PROG_CPP and use the resultant\nflag from CPPSTDIN: (either nothing or -)\n\n--- cut here ---\nAC_DEFUN(AC_TRY_CPPSTDIN,\n[AC_REQUIRE_CPP()dnl\ncat > conftest.$ac_ext <<EOF\n[#]line __oline__ \"configure\"\n#include \"confdefs.h\"\n[$1]\nEOF\nac_try=\"$ac_cpp $CPPSTDIN <conftest.$ac_ext >/dev/null 2>conftest.out\"\nAC_TRY_EVAL(ac_try)\nac_err=`grep -v '^ *+' conftest.out`\nif test -z \"$ac_err\"; then\n ifelse([$2], , :, [rm -rf conftest*\n $2])\nelse\n echo \"$ac_err\" >&AC_FD_CC\n echo \"configure: failed program was:\" >&AC_FD_CC\n cat conftest.$ac_ext >&AC_FD_CC\nifelse([$3], , , [ rm -rf conftest*\n $3\n])dnl\nfi\nrm -f conftest*])\n\nAC_MSG_CHECKING(how to use cpp with stdin)\nif test -z \"$CPPSTDIN\"; then\nAC_CACHE_VAL(ac_cv_cpp_stdin,\n[ CPPSTDIN=\"\"\n AC_TRY_CPPSTDIN([#include <assert.h>\nSyntax Error], , CPPSTDIN=\"-\")\n ac_cv_cpp_stdin=\"$CPPSTDIN\"])\n CPPSTDIN=\"$ac_cv_cpp_stdin\"\nelse\n ac_cv_cpp_stdin=\"$CPPSTDIN\"\nfi\nAC_MSG_RESULT($CPP $CPPSTDIN)\nAC_SUBST(CPPSTDIN)\n--- cut here ---\n\nTaral\n\nP.S. Yes, do use the DEFUN just in case we want to add more broken variants\nlater :)\n\n",
"msg_date": "Tue, 13 Oct 1998 21:56:39 -0500",
"msg_from": "\"Taral\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "On Tue, 13 Oct 1998, Billy G. Allie wrote:\n\n> I will be posting a configure.in tonight with further clean ups to the TCL/TK \n> configuration. TCL/TK compile information will now be obtained from the \n> tclConfig.sh and tkConfig.sh, without having to have a list of different \n> versions of TCL and TK to search for.\n> \n> Two item I would like to bring up for discussion is:\n> \n> 1. Currently TCL/TK support is disabled if TCL is present, but TK is not. This\n> is not a good thing because the PL/tcl language is only dependant on TCL,\n> not TK. Also pgtclsh only requires TCL. I propose changing configure so\n> that TCL and TK support are seperate and the TCL dependant parts of the\n> postgreSQL distribution will still build even if TK is not present.\n> \n> 2. There is currently duplicate Makefile.tcldefs. I propose create a\n> Makefile.tcldefs and Makefile.tkdefs at the same directory level as\n> Makefile.global and have any Makefile that needs them include them from\n> that location. This would then become the 'standard' for compiling TCL\n> and TK based packages in the postgreSQL distribution.\n> \n> Any comments and/or concerns?\n> -- \n\n\tThese changes sound great for post-v6.4...\n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Tue, 13 Oct 1998 23:18:32 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ... "
},
{
"msg_contents": ">\n>\n> I just built and regression tested the current source tree on both Solaris\n> x86 and Solaris Sparc, and other then a few bugs that I've fixed, it was\n> smooth...\n>\n> Any arguments against getting a BETA2 out tomorrow afternoon?\n\n Have a crashing backend after a huge transaction on the next\n insert into a table with indices. Crash is reproducable and\n seems to be due to a corrupted index file.\n\n Recompiling with COPT=-g now...\n\n>\n> Marc G. Fournier [email protected]\n> Systems Administrator @ hub.org\n> scrappy@{postgresql|isc}.org ICQ#7615664\n>\n>\n>\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 14 Oct 1998 09:51:36 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "That was me:\n>\n> >\n> >\n> > I just built and regression tested the current source tree on both Solaris\n> > x86 and Solaris Sparc, and other then a few bugs that I've fixed, it was\n> > smooth...\n> >\n> > Any arguments against getting a BETA2 out tomorrow afternoon?\n>\n> Have a crashing backend after a huge transaction on the next\n> insert into a table with indices. Crash is reproducable and\n> seems to be due to a corrupted index file.\n>\n> Recompiling with COPT=-g now...\n>\n\n Harrr - using text_ops on an int4 field in CREATE INDEX\n doesn't make much sense.\n\n Bruce, please add 6.5 TODO:\n\n Parser must check on CREATE INDEX that the opcdeftype of the\n used operator class is compatible with the indexed field or\n the index functions return type.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 14 Oct 1998 16:42:41 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": " I will be posting a configure.in tonight with further clean ups to the TCL/TK \n configuration. TCL/TK compile information will now be obtained from the \n tclConfig.sh and tkConfig.sh, without having to have a list of different \n versions of TCL and TK to search for.\n\nThese ideas sound great, but I don't understand one thing.\n\nHow do you locate the *Config.sh scripts without looking in a bunch of\ndirectories until you find them?\n\nUnless there is a general means of finding these which 1) doesn't\ninvolve checking directories associated with different versions of\ntcl/tk, and 2) does allow for the possibility that tcl/tk may not be\ninstalled in a particular filesystem (/usr/local, for example), I\nstrongly recommend keeping the part of configure that searches for the\nlocation of tcl/tk. Perhaps we don't need to store the location of\ninclude/library files based on the configure script, but I think we do\nneed to use essentially the same mechanism to find the *Config.sh\nscripts.\n\nCheers,\nBrook\n",
"msg_date": "Wed, 14 Oct 1998 08:44:53 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "> [...] TCL/TK compile information will now be obtained from the \n> tclConfig.sh and tkConfig.sh, without having to have a list of different \n> versions of TCL and TK to search for.\n> \n> These ideas sound great, but I don't understand one thing.\n> \n> How do you locate the *Config.sh scripts without looking in a bunch of\n> directories until you find them?\n> \n> Unless there is a general means of finding these which 1) doesn't\n> involve checking directories associated with different versions of\n> tcl/tk, and 2) does allow for the possibility that tcl/tk may not be\n> installed in a particular filesystem (/usr/local, for example), I\n> strongly recommend keeping the part of configure that searches for the\n> location of tcl/tk. Perhaps we don't need to store the location of\n> include/library files based on the configure script, but I think we do\n> need to use essentially the same mechanism to find the *Config.sh\n> scripts.\n\nI wasn't clear enough in my explanation. I still search directories for the\n*Config.sh files, but I generalized it so that a list of TCL and TK versions does not have to be maintained. Here is the segment of code that performs the search for tclConfig.sh:\n\n library_dirs=\"$LIBRARY_DIRS /usr/lib\"\n TCL_CONFIG_SH=\n for dir in $library_dirs; do\n if test -d \"$dir\" -a -r \"$dir/tclConfig.sh\"; then\n TCL_CONFIG_SH=$dir/tclConfig.sh\n break\n fi\n for tcl_dir in $dir/tcl[0-9]*.[0-9]*\n do\n if test -d \"$tcl_dir\" -a -r \"$tcl_dir/tclConfig.sh\"\n then\n TCL_CONFIG_SH=$tcl_dir/tclConfig.sh\n break 2\n fi\n done\n done\n\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n",
"msg_date": "Wed, 14 Oct 1998 11:26:47 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ... "
},
{
"msg_contents": " I wasn't clear enough in my explaination. I still search\n directories for the *Config.sh files, but I generalized it so that\n a list of TCL and TK versions do not have to be maintained. \n\nOK. That clears things up, now. But ...\n\n Here is the segment of code that performs the search for\n tclConfig.sh:\n\n\t for tcl_dir in $dir/tcl[0-9]*.[0-9]*\n\t do\n\t ...\n\t done\n\nI think that this will fail by finding the LOWER version of tcl\nbefore a higher version. For example, if I have both 7.6 and 8.0\ninstalled, won't this find 7.6 first?\n\nMight there not be installations with an old version lying around?\n\nShould the script be enhanced to at least report on ALL versions\nfound?\n\nShould the script simply assign for each version found, rather than\nbreaking out of the loop? That might have a better chance at catching\nthe highest version; although, it won't order 8.9 and 8.10\ncorrectly.\n\nCheers,\nBrook\n",
"msg_date": "Wed, 14 Oct 1998 10:43:57 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "> \n> Harrr - using text_ops on an int4 field in CREATE INDEX\n> doesn't make much sense.\n> \n> Bruce, please add 6.5 TODO:\n> \n> Parser must check on CREATE INDEX that the opcdeftype of the\n> used operator class is compatible with the indexed field or\n> the index functions return type.\n\nBut, we don't require ops* anymore. Should we prevent people from using\nwhatever ops they want?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Oct 1998 13:01:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "> I think that this will fail by finding the LOWER version of tcl\n> before a higher version. For example, if I have both 7.6 and 8.0\n> installed, won't this find 7.6 first?\n\nSorry, I've lost track of the discussion. Is it the case that people\nknow that pgtcl does not install at the moment? On my machine the\ninstallation procedure has trouble finding one of the shell files under\ndiscussion...\n\n - Tom\n",
"msg_date": "Wed, 14 Oct 1998 17:59:41 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "On Wed, 14 Oct 1998, Jan Wieck wrote:\n\n> >\n> >\n> > I just built and regression tested the current source tree on both Solaris\n> > x86 and Solaris Sparc, and other then a few bugs that I've fixed, it was\n> > smooth...\n> >\n> > Any arguments against getting a BETA2 out tomorrow afternoon?\n> \n> Have a crashing backend after a huge transaction on the next\n> insert into a table with indices. Crash is reproducable and\n> seems to be due to a corrupted index file.\n> \n> Recompiling with COPT=-g now...\n\n\tI saw Bruce's last on this...will wait until you guys have this\nsorted out before I build the snapshot...\n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Wed, 14 Oct 1998 15:29:03 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "> \n> On Wed, 14 Oct 1998, Jan Wieck wrote:\n> \n> > >\n> > >\n> > > I just built and regression tested the current source tree on both Solaris\n> > > x86 and Solaris Sparc, and other then a few bugs that I've fixed, it was\n> > > smooth...\n> > >\n> > > Any arguments against getting a BETA2 out tomorrow afternoon?\n> > \n> > Have a crashing backend after a huge transaction on the next\n> > insert into a table with indices. Crash is reproducable and\n> > seems to be due to a corrupted index file.\n> > \n> > Recompiling with COPT=-g now...\n> \n> \tI saw Bruce's last on this...will wait until you guys have this\n> sorted out before I build the snapshot...\n> \n\n Was my fault - not a bug.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n",
"msg_date": "Wed, 14 Oct 1998 23:06:19 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "> > I think that this will fail by finding the LOWER version of tcl\n> > before a higher version. For example, if I have both 7.6 and 8.0\n> > installed, won't this find 7.6 first?\n>\n> Sorry, I've lost track of the discussion. Is it the case that people\n> know that pgtcl does not install at the moment? On my machine the\n> installation procedure has trouble finding one of the shell files under\n> discussion...\n>\nCan you provide more details, such as the error messages generated by make?\n--\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n",
"msg_date": "Wed, 14 Oct 1998 17:33:49 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ... "
},
{
"msg_contents": "> > I think that this will fail by finding the LOWER version of tcl\n> > before a higher version. For example, if I have both 7.6 and 8.0\n> > installed, won't this find 7.6 first?\n> \n> Sorry, I've lost track of the discussion. Is it the case that people\n> know that pgtcl does not install at the moment? On my machine the\n> installation procedure has trouble finding one of the shell files under\n> discussion...\n\nThis is news to me. The current code looks for tclConfig.sh and\ntkConfig.sh in the various standard directories. In my case, they are\nin /usr/contrib/lib. Can you add the directory that has those file to\nyour search path include dirs.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Oct 1998 19:37:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "> On Wed, 14 Oct 1998, Jan Wieck wrote:\n> \n> > >\n> > >\n> > > I just built and regression tested the current source tree on both Solaris\n> > > x86 and Solaris Sparc, and other then a few bugs that I've fixed, it was\n> > > smooth...\n> > >\n> > > Any arguments against getting a BETA2 out tomorrow afternoon?\n> > \n> > Have a crashing backend after a huge transaction on the next\n> > insert into a table with indices. Crash is reproducable and\n> > seems to be due to a corrupted index file.\n> > \n> > Recompiling with COPT=-g now...\n> \n> \tI saw Bruce's last on this...will wait until you guys have this\n> sorted out before I build the snapshot...\n\nFixed. He had the wrong ops_ on the index.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Oct 1998 19:39:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\n\n library_dirs=\"$LIBRARY_DIRS /usr/lib\"\n TCL_CONFIG_SH=\n for dir in $library_dirs; do\n [...]\n done\n\nIf you can assume that Tcl is installed and the version the user wants\nis first in their path, you should be able to limit this to\n\n library_dirs=`echo 'puts $auto_path' | tclsh`\n\nroland\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.2\nComment: Processed by Mailcrypt 3.4, an Emacs/PGP interface\n\niQCVAwUBNiVfs+oW38lmvDvNAQFpPgP9Gsk515V66r0BHUk9hIE8atCyxu08QIpo\nkRWGLd3gO5vs04Y56OrNAwCZuddfr1lx+S01MP6G5HKdHWQ9z1mDGixODYrdW9K2\n39HT3OHJ9YEzgoQV77m1Ef9OPmLpuboXMg1iEd4+Wv/PJrTvVVmLHCD98wMjgpgF\nBenOPn5wV0w=\n=NiLZ\n-----END PGP SIGNATURE-----\n-- \nRoland B. Roberts, PhD Custom Software Solutions\[email protected] 101 West 15th St #4NN\n New York, NY 10011\n\n",
"msg_date": "14 Oct 1998 22:36:39 -0400",
"msg_from": "Roland Roberts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "> I think that this will fail by finding the LOWER version of tcl\n> before a higher version. For example, if I have both 7.6 and 8.0\n> installed, won't this find 7.6 first?\n> \n> Might there not be installations with an old version lying around?\n> \n> Should the script be enhanced to at least report on ALL versions\n> found?\n> \n> Should the script simply assign for each version found, rather than\n> breaking out of the loop? That might have a better chance at catching\n> the highest version; although, it doesn't won't order 8.9 and 8.10\n> correctly.\n\nActually, I've looked at the configure for TCL and came to the \nfollowing conclusions:\n\n1. By default, the *Config.sh files go in the same directory as the library\n files (default: /usr/local/lib).\n\n2. If you built more than 1 version of TCL/TK, the *Config.sh will reflect\n the last version built unless you override the default location of the\n libraries.\n\n3. If you have more than one version of TCL/TK and want to use a specific version,\n you need to tell configure where the *Config.sh files you want to use are\n with the --with-libs or --with-library option.\n\nBearing these points in mind, I will be removing the code that looks in tclX.Y \n(where X.Y is the version) and tkX.Y directories. It will only look for \n*Config.sh files in the following directories (in the order given):\n\n\t$LIBRARY_DIRS\t\t(set with --with-libs or --with-libraries)\n\t/usr/local/lib\n\t/usr/contrib/lib\n\t/opt/lib\n\t/usr/lib\n\nThis method will make searching for various versions of TCL/TK unnecessary while \nstill being able to find the correct TCL/TK in most cases.\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n",
"msg_date": "Thu, 15 Oct 1998 00:46:18 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ... "
},
{
"msg_contents": "> library_dirs=\"$LIBRARY_DIRS /usr/lib\"\n> TCL_CONFIG_SH=\n> for dir in $library_dirs; do\n> [...]\n> done\n> \n> If you can assume that Tcl is installed and the version the user wants\n> is first in their path, you should be able to limit this to\n> \n> library_dirs=`echo 'puts $auto_path' | tclsh`\n> \n\nThis is one of those cases where you sit back and think \"Now why didn't I think of that!\"\n\nThe only thing I would add is to allow $LIBRARY_DIR to be searched first to allow an alternate version of TCL/TK to be specified. Also, I would add a check to see if tclsh is in the path.\n\nCan anyone think of a good reason not to use this method?\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n",
"msg_date": "Thu, 15 Oct 1998 01:00:15 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ... "
},
{
"msg_contents": "> Actually, I've looked at the configure for TCL and concluded came to the \n> following conclusions:\n> \n> 1. By default, the *Config.sh files go in the same directory as the library\n> files (default: /usr/local/lib).\n> \n> 2. If you built more than 1 version of TCL/TK, the *Config.sh will reflect\n> the last version built unless you override the default location of the\n> libraries.\n> \n> 3. If you more than one version of TCL/TK and want to use a specific version,\n> you need to tell configure where the *Config.sh file you want to use are\n> with the --with-libs or --with-library option.\n> \n> Bearing these points in mind, I will be removing the code that looks in tclX.Y \n> (where X.Y is the version) and tkX.Y directories. It will only look for \n> *Config.sh files in the following directories (in the order given):\n\n\nFYI, I may have already done that in configure.in.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Oct 1998 01:49:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
},
{
"msg_contents": "> > Sorry, I've lost track of the discussion. Is it the case that people\n> > know that pgtcl does not install at the moment? On my machine the\n> > installation procedure has trouble finding one of the shell files \n> > under discussion...\n> Can you provide more details, such as the error messages generated by \n> make?\n\nSorry, false alarm. I had seen it fail recently, but it is working great\nnow.\n\n - Tom\n",
"msg_date": "Thu, 15 Oct 1998 14:19:31 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2 ..."
}
] |
[
{
"msg_contents": "Hey, the odbc stuff no longer installs. There is a softlink from what\nused to be the shared library to itself. What were you trying to\naccomplish with the patch? I'm not sure I'll fix it correctly if I don't\nknow where you are headed with it...\n\nIn general, it's ok to have the actual library be named with an explicit\nmajor and minor version number, then have softlinks with a name having\nthe major version and no version number, right?\n\n - Tom\n\nrevision 1.4\ndate: 1998/10/09 21:28:50; author: momjian; state: Exp; lines: +5 -5\nmajor/minor shared name cleanup\n",
"msg_date": "Wed, 14 Oct 1998 02:24:25 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "odbc installation broken"
},
{
"msg_contents": "I've committed changes to the configure scripts which make a more\nuniform naming and softlinking of installed shared libraries for the\nvarious packages. The naming convention is that which has been used by\nPostgres in the past as well as that used on my Linux machine (i.e.\nlibrary name with full major and minor version appended to the name,\nthen softlinks for just the library name and the name with the major\nversion only).\n\nSo, libpq is named libpq.so.2.0, and there are softlinks libpq.so.2 and\nlibpq.so. I've adjusted the other libraries (or cleaned up code which\nalready did it) to do the same thing. In the same cleanup, I have the\ncode now using the existing $(DLSUFFIX) parameter from Makefile.global\nas intended for naming the shared libraries.\n\nAlso, I've adjusted the configure.in (and configure) to detect all flags\nnecessary for cpp (from Taral) and adjusted Gen_fmgrtab.sh.in to use\nthem. I added a genbki.sh.in so that the genbki.sh script can also use\nthe auto-detected cpp configuration. Formerly, these were hardcoded into\nthe scripts.\n\nThis will require a \"configure\" for an updated tree to start using the\nnew genbki.sh.in .\n\nPlease report any problems asap...\n\n - Tom\n",
"msg_date": "Wed, 14 Oct 1998 16:44:33 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] odbc installation broken"
}
] |
[
{
"msg_contents": "I am having problems getting the perl interface to work (but see below\nfor a fix). If I install postgresql on a bare system (i.e., one with\nno previous postgresql installed), the perl interface gets installed,\nbut the test.pl script fails with an undefined symbol error (libpq\ncannot be found). However, if I rebuild the perl interface AFTER\npostgresql (actually, I think libpq is all that is necessary here) is\ninstalled, the test.pl script works fine.\n\nIt seems that the installation procedure should be the following:\n\n - build postgresql (including the perl interface)\n - install postgresql, particularly libpq\n - rebuild the perl interface\n - reinstall the perl interface\n\nThe following patches accomplish the following:\n\n- change interfaces/Makefile to reflect the order above by doing a\n make clean in perl5 before installing the perl interface; this\n forces a rebuild after libpq has been installed. Note that\n perl-makefile-dep is no longer required under this scheme.\n\n- create Makefile.PL.in for perl5 so that the installed path for libpq\n (as determined by configure) can be incorporated into the ld path\n when the perl interface library is built.\n\n- fix configure.in to create Makefile.PL from Makefile.PL.in\n\nNOTE: rerun autoconf\n\nNOTE: delete interfaces/perl5/Makefile.PL\n\nCheers,\nBrook\n\n===========================================================================\n--- interfaces/Makefile.orig\tWed Oct 7 01:00:23 1998\n+++ interfaces/Makefile\tTue Oct 13 16:29:16 1998\n@@ -15,13 +15,11 @@\n include $(SRCDIR)/Makefile.global\n \n \n-perl-makefile-dep :=\n-ifeq ($(USE_PERL), true)\n- perl-makefile-dep := perl5/Makefile\n-endif\n+PERL_CLEAN := DO_NOTHING\n+install: PERL_CLEAN := clean\n \n \n-.DEFAULT all install clean dep depend distclean: $(perl-makefile-dep)\n+.DEFAULT all install clean dep depend distclean:\n \t$(MAKE) -C libpq $@\n \t$(MAKE) -C ecpg $@\n ifeq ($(HAVE_Cplusplus), true)\n@@ -33,6 +31,8 @@\n \t$(MAKE) -C 
libpgtcl $@\n endif\n ifeq ($(USE_PERL), true)\n+\t-$(MAKE) -C perl5 $(PERL_CLEAN)\n+\t$(MAKE) perl5/Makefile\n \t$(MAKE) -C perl5 $@\n endif\n ifeq ($(USE_ODBC), true)\n===========================================================================\n--- interfaces/perl5/Makefile.PL.in.orig\tMon Oct 12 16:25:23 1998\n+++ interfaces/perl5/Makefile.PL.in\tMon Oct 12 16:25:00 1998\n@@ -0,0 +1,68 @@\n+#-------------------------------------------------------\n+#\n+# $Id: Makefile.PL,v 1.9 1998/09/27 19:12:21 mergl Exp $\n+#\n+# Copyright (c) 1997, 1998 Edmund Mergl\n+#\n+#-------------------------------------------------------\n+\n+use ExtUtils::MakeMaker;\n+use Config;\n+use strict;\n+\n+# because the perl5 interface is always contained in the source tree,\n+# we can be sure about the location of the include files and libs.\n+# For development and testing we still test for POSTGRES_HOME.\n+#\n+#print \"\\nConfiguring Pg\\n\";\n+#print \"Remember to actually read the README file !\\n\";\n+#die \"\\nYou didn't read the README file !\\n\" unless ($] >= 5.002);\n+#\n+#if (! $ENV{POSTGRES_HOME}) {\n+# warn \"\\$POSTGRES_HOME not defined. Searching for PostgreSQL...\\n\";\n+# foreach(qw(../../../ /usr/local/pgsql /usr/pgsql /home/pgsql /opt/pgsql /usr/local/postgres /usr/postgres /home/postgres /opt/postgres)) {\n+# if (-d \"$_/lib\") {\n+# $ENV{POSTGRES_HOME} = $_;\n+# last;\n+# }\n+# }\n+#}\n+#\n+#if (-d \"$ENV{POSTGRES_HOME}/lib\") {\n+# print \"Found PostgreSQL in $ENV{POSTGRES_HOME}\\n\";\n+#} else {\n+# die \"Unable to determine PostgreSQL\\n\";\n+#}\n+\n+my %opts;\n+\n+if (! 
$ENV{POSTGRES_HOME}) {\n+\n+ my $cwd = `pwd`;\n+ chop $cwd;\n+\n+ %opts = (\n+ NAME => 'Pg',\n+ VERSION_FROM => 'Pg.pm',\n+ INC => \"-I$cwd/../libpq -I$cwd/../../include\",\n+ OBJECT => \"Pg\\$(OBJ_EXT)\",\n+ LIBS => [\"-L@prefix@/lib -L$cwd/../libpq -lpq\"],\n+ );\n+\n+} else {\n+\n+ %opts = (\n+ NAME => 'Pg',\n+ VERSION_FROM => 'Pg.pm',\n+ INC => \"-I$ENV{POSTGRES_HOME}/include\",\n+ OBJECT => \"Pg\\$(OBJ_EXT)\",\n+ LIBS => [\"-L$ENV{POSTGRES_HOME}/lib -lpq\"],\n+ );\n+}\n+\n+\n+WriteMakefile(%opts);\n+\n+exit(0);\n+\n+# end of Makefile.PL\n===========================================================================\n--- configure.in.orig\tMon Oct 12 01:00:20 1998\n+++ configure.in\tMon Oct 12 16:26:35 1998\n@@ -959,6 +961,7 @@\n \tinterfaces/libpgtcl/Makefile\n \tinterfaces/odbc/GNUmakefile\n \tinterfaces/odbc/Makefile.global\n+\tinterfaces/perl5/Makefile.PL\n \tpl/plpgsql/src/Makefile\n \tpl/plpgsql/src/mklang.sql\n \tpl/tcl/mkMakefile.tcldefs.sh\n",
"msg_date": "Wed, 14 Oct 1998 10:55:38 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "perl interface bug?"
},
{
"msg_contents": "Brook Milligan <[email protected]> writes:\n> I am having problems getting the perl interface to work (but see below\n> for a fix). If I install postgresql on a bare system (i.e., one with\n> no previous postgresql installed), the perl interface gets installed,\n> but the test.pl script fails with an undefined symbol error (libpq\n> cannot be found). However, if I rebuild the perl interface AFTER\n> postgresql (actually, I think libpq is all that is necessary here) is\n> installed, the test.pl script works fine.\n\nThis is a longstanding problem in the Perl module. It's fairly closely\nrelated to the recent flamefest about LD_LIBRARY_PATH, btw --- the\ntrouble is that if shared lib A refers to shared lib B, then lib A may\nneed to contain a hardwired path to lib B. At least on some platforms.\nI think your intended fix of rebuilding the Perl shared lib after libpq\nhas been installed might solve the problem nicely for these platforms.\n\nHPUX is one of the places where there's a problem, so I'll try out your\nfix as soon as I get a chance.\n\nIf this flies, we should rip out the commented-out cruft in\nperl5/Makefile.PL.in, and probably also remove its use of POSTGRES_HOME\nenvironment variable...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Oct 1998 17:14:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl interface bug? "
},
{
"msg_contents": " Brook Milligan <[email protected]> writes:\n > I am having problems getting the perl interface to work (but see below\n > for a fix). If I install postgresql on a bare system (i.e., one with\n > no previous postgresql installed), the perl interface gets installed,\n > but the test.pl script fails with an undefined symbol error (libpq\n > cannot be found). However, if I rebuild the perl interface AFTER\n > postgresql (actually, I think libpq is all that is necessary here) is\n > installed, the test.pl script works fine.\n\n This is a longstanding problem in the Perl module. It's fairly closely\n related to the recent flamefest about LD_LIBRARY_PATH, btw --- the\n trouble is that if shared lib A refers to shared lib B, then lib A may\n need to contain a hardwired path to lib B. At least on some platforms.\n I think your intended fix of rebuilding the Perl shared lib after libpq\n has been installed might solve the problem nicely for these platforms.\n\nYes, that seems to be the problem, though I don't quite recognize why\nlibpgtcl works (it has the same libpgtcl refers to libpq feature, but\nI tried it quite awhile ago with pgaccess and all was fine). Perhaps\nthere is a subtle difference between the ld command for it and the\nonce constructed for perl.\n\nI chose this approach (i.e., make clean, rebuild perl stuff after\ninstall) because I didn't want to gut the automatic perl Makefile.PL\nstuff, which I don't understand completely. To me this seemed like a\nfairly clean solution, but anything better from the perl wizards is\nwelcome.\n\n HPUX is one of the places where there's a problem, so I'll try out your\n fix as soon as I get a chance.\n\nPlease let me know how it works. I'd like to see this fixed in 6.4\nsince it will remove a substantial bug in the perl interface.\n\n If this flies, we should rip out the commented-out cruft in\n perl5/Makefile.PL.in, and probably also remove its use of POSTGRES_HOME\n environment variable...\n\nAbsolutely. 
That stuff really isn't necessary, I don't think. I just\ndidn't want to trash the file so dramatically. :)\n\nCheers,\nBrook\n",
"msg_date": "Wed, 14 Oct 1998 16:07:25 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] perl interface bug?"
},
{
"msg_contents": "Brook Milligan <[email protected]> writes:\n> Yes, that seems to be the problem, though I don't quite recognize why\n> libpgtcl works (it has the same libpgtcl refers to libpq feature, but\n> I tried it quite awhile ago with pgaccess and all was fine).\n\nOn HPUX, the reason libpgtcl works as a shared lib is that it gets\ninstalled into the same library directory as libpq. HPUX uses a search\npath for shared libs (typically embedded into the executable, though you\ncan arrange to look at an environment variable instead if you're so\ninclined). So, if the application was able to find libpgtcl, it'll\nfind libpq too.\n\nI imagine that generally the same story holds for other systems,\nbut haven't looked closely.\n\nThe reason the Perl module fails is that it gets installed somewhere\nelse, viz. the perl library tree. The Perl executable knows about\nlooking in the library tree for shlibs, but it's never heard of\n/usr/local/pgsql/lib/ unless you hack it specially.\n\nIt might be that installing a copy of libpq.so into the same directory\nthat the perl module shlib goes into would make it work. I haven't\ntried that, but if it works it might be a better answer than this\nrebuild-afterwards approach.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Oct 1998 19:25:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl interface bug? "
},
{
"msg_contents": "> Please let me know how it works. I'd like to see this fixed in 6.4\n> since it will remove a substantial bug in the perl interface.\n> \n> If this flies, we should rip out the commented-out cruft in\n> perl5/Makefile.PL.in, and probably also remove its use of POSTGRES_HOME\n> environment variable...\n> \n> Absolutely. That stuff really isn't necessary, I don't think. I just\n> didn't want to trash the file so dramatically. :)\n\nAdded to Open 6.4 items list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Oct 1998 21:49:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl interface bug?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Brook Milligan <[email protected]> writes:\n> > I am having problems getting the perl interface to work (but see below\n> > for a fix). If I install postgresql on a bare system (i.e., one with\n> > no previous postgresql installed), the perl interface gets installed,\n> > but the test.pl script fails with an undefined symbol error (libpq\n> > cannot be found). However, if I rebuild the perl interface AFTER\n> > postgresql (actually, I think libpq is all that is necessary here) is\n> > installed, the test.pl script works fine.\n> \n> This is a longstanding problem in the Perl module. It's fairly closely\n> related to the recent flamefest about LD_LIBRARY_PATH, btw --- the\n> trouble is that if shared lib A refers to shared lib B, then lib A may\n> need to contain a hardwired path to lib B. At least on some platforms.\n> I think your intended fix of rebuilding the Perl shared lib after libpq\n> has been installed might solve the problem nicely for these platforms.\n> \n> HPUX is one of the places where there's a problem, so I'll try out your\n> fix as soon as I get a chance.\n> \n> If this flies, we should rip out the commented-out cruft in\n> perl5/Makefile.PL.in, and probably also remove its use of POSTGRES_HOME\n> environment variable...\n> \n> regards, tom lane\n\n\nI think the best solution is just not to build the perl interface\ntogether with postgresql. It will never work, because 'make test' \nneeds the postmaster to be up and running. \n\nDon't change Makefile.PL, POSTGRES_HOME is needed when building the perl\nmodule outside the postgresql source-tree.\n\n\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Thu, 15 Oct 1998 07:58:16 +0000",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl interface bug?"
},
{
"msg_contents": " It might be that installing a copy of libpq.so into the same directory\n that the perl module shlib goes into would make it work. I haven't\n tried that, but if it works it might be a better answer than this\n rebuild-afterwards approach.\n\nNO, NO, NO! We don't want TWO copies of libpq floating around. The\ncorrect solution, I think, is to make the libraries know about each\nother. My patch does that, though there may be other solutions that\ndo the same thing.\n\nWhat we need is to determine if there are cases in which that rebuild\nafter install method does not work.\n\nCheers,\nBrook\n",
"msg_date": "Thu, 15 Oct 1998 08:16:01 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] perl interface bug?"
},
{
"msg_contents": " I think the best solution is just not to build the perl interface\n together with postgresql. It will never work, because 'make test' \n needs the postmaster to be up and running. \n\nI will argue strongly that we DO want to build the perl interface\nstraight out of the box. Ideally we will have a COMPLETE system that\ncan be built and installed in one pass, such that every part of it\nwill work. I have demonstrated that it is possible (at least on\nNetBSD) to do this and still resolve the inter-library references\nbetween the perl interface and libpq.\n\nWe cannot kill this idea because 'make test' doesn't run without the\npostmaster. After all, NONE of the regression tests do either!\nNevertheless, we still build and install the complete system; then run\nthe tests. The same is applicable to the perl interface.\n\n Don't change Makefile.PL, POSTGRES_HOME is needed when building the perl\n module outside the postgresql source-tree.\n\nThis may be the case and is a good reason to include the conditional\nin Makefile.PL. Nothing is really lost by doing so.\n\nCheers,\nBrook\n",
"msg_date": "Thu, 15 Oct 1998 08:21:13 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] perl interface bug?"
},
{
"msg_contents": "Brook Milligan <[email protected]> writes:\n> We cannot kill this idea because 'make test' doesn't run without the\n> postmaster. After all, NONE of the regression tests do either!\n\nEdmund is thinking of a different situation. The perl5 interface is\nalso a \"Perl module\", which means that it is supposed to build and\ninstall under very rigid rules --- this is supposed to work:\n\t\tperl Makefile.PL\n\t\tmake\n\t\tmake test\n\t\tmake install\n\nIf you were installing the perl5 module separately from Postgres proper,\nand already had a running Postgres server, that should indeed work.\n(With the caveat that you first have to set POSTGRES_HOME environment\nvariable to /usr/local/pgsql or local equivalent.)\n\nIt's not *entirely* clear to me that that's a very plausible scenario,\nsince we distribute the perl5 module along with Postgres. But it does\nwork and we probably shouldn't break it.\n\nCome to think of it, Brook's proposed changes do break the perl5\ndirectory as a standalone Perl module, because it no longer includes a\nMakefile.PL, only a Makefile.PL.in. Can we avoid that by finding some\nother way to get the install target directory name into the perl5\nmakefile? Maybe, instead of being rewritten by autoconf, Makefile.PL\ncould actively go look for the directory name, say by looking to see if\n../../Makefile.global exists and contains a POSTGRESDIR= line. If not,\nfall back to requiring POSTGRES_HOME to be set.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Oct 1998 11:39:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl interface bug? "
},
{
"msg_contents": " Edmund is thinking of a different situation. The perl5 interface is\n also a \"Perl module\", which means that it is supposed to build and\n install under very rigid rules --- this is supposed to work:\n\t\t perl Makefile.PL\n\t\t make\n\t\t make test\n\t\t make install\n\nOk, I see now. So, we need the following:\n\n- a Makefile.PL that works out of the box for the above sequence,\n given that POSTGRES_HOME is set properly.\n\n- a Makefile.PL that works with the Postgres installation and gets the\n right shared library, so if we run `make test' later (or use the\n interface in any way) it works immediately upon Postgres\n installation.\n\nA solution:\n\n- a default Makefile.PL that works\n\n- a Makefile.PL replaced from Makefile.PL.in that still works with the\n above, but satisfies the second criterion.\n\nThe patches below do this and I think satisfy all considerations\n(you'll let me know if they don't, right? :)\n\n- Makefile.PL has an explanation and if POSTGRES_HOME is not set will\n print a useful message.\n\n- Makefile.PL.in replaces that message with the right thing for\n finding the installed shared library based on configure.\n\n- Makefile: same as before; do the clean before installing to pick up\n the right shared library. Note that no clean is done for any other\n targets.\n\n- configure.in: output Makefile.PL along with the rest. Note also,\n that in the current version $WHOAMI is never set so the perl test\n always fails. 
Perhaps this should be fixed by setting the variable\n correctly, but in any case the comments no longer agree with this\n stuff anyway.\n\nCheers,\nBrook\n\n===========================================================================\n--- Makefile.PL.orig\tThu Oct 15 11:13:55 1998\n+++ Makefile.PL\tThu Oct 15 12:16:33 1998\n@@ -1,4 +1,3 @@\n-# Generated automatically from Makefile.PL.in by configure.\n #-------------------------------------------------------\n #\n # $Id: Makefile.PL,v 1.9 1998/09/27 19:12:21 mergl Exp $\n@@ -11,29 +10,25 @@\n use Config;\n use strict;\n \n-# because the perl5 interface is always contained in the source tree,\n-# we can be sure about the location of the include files and libs.\n-# For development and testing we still test for POSTGRES_HOME.\n-#\n-#print \"\\nConfiguring Pg\\n\";\n-#print \"Remember to actually read the README file !\\n\";\n-#die \"\\nYou didn't read the README file !\\n\" unless ($] >= 5.002);\n-#\n-#if (! $ENV{POSTGRES_HOME}) {\n-# warn \"\\$POSTGRES_HOME not defined. Searching for PostgreSQL...\\n\";\n-# foreach(qw(../../../ /usr/local/pgsql /usr/pgsql /home/pgsql /opt/pgsql /usr/local/postgres /usr/postgres /home/postgres /opt/postgres)) {\n-# if (-d \"$_/lib\") {\n-# $ENV{POSTGRES_HOME} = $_;\n-# last;\n-# }\n-# }\n-#}\n-#\n-#if (-d \"$ENV{POSTGRES_HOME}/lib\") {\n-# print \"Found PostgreSQL in $ENV{POSTGRES_HOME}\\n\";\n-#} else {\n-# die \"Unable to determine PostgreSQL\\n\";\n-#}\n+# This Makefile.PL is intended for standalone use when PostgreSQL is\n+# already installed. 
In that case, install the perl module as follows:\n+# \n+# setenv POSTGRES_HOME /path/to/root/of/installed/postgres\n+# perl Makefile.PL\n+# make\n+# make test\n+# make install\n+\n+# During normal installation of PostgreSQL, this file will be replaced\n+# by one derived from Makefile.PL.in so that the installed shared\n+# library libpq.so will be found during installation of this module.\n+# As a result, the POSTGRES_HOME environment variable need not be set\n+# during PostgreSQL installation. Note that ../Makefile takes care of\n+# the `perl Makefile.PL' command. Note also that it is still possible\n+# to follow the standalone installation procedure, even after\n+# configuring and installing PostgreSQL, because the `else'\n+# conditional branch below is identical in both Makefile.PL and\n+# Makefile.PL.in.\n \n my %opts;\n \n@@ -42,13 +37,16 @@\n my $cwd = `pwd`;\n chop $cwd;\n \n- %opts = (\n- NAME => 'Pg',\n- VERSION_FROM => 'Pg.pm',\n- INC => \"-I$cwd/../libpq -I$cwd/../../include\",\n- OBJECT => \"Pg\\$(OBJ_EXT)\",\n- LIBS => [\"-L/usr/pkg/pgsql/lib -L$cwd/../libpq -lpq\"],\n- );\n+ print \"To install the perl interface for PostgreSQL do the following:\\n\";\n+ print \" - install PostgreSQL\\n\";\n+ print \" - set the POSTGRES_HOME environment variable appropriately\\n\";\n+ print \" - in this directory ($cwd):\\n\";\n+ print \" perl Makefile.PL\\n\";\n+ print \" make\\n\";\n+ print \" make test\t[ with a postmaster running ]\\n\";\n+ print \" make install\\n\";\n+\n+ exit(1);\n \n } else {\n \n===========================================================================\n--- interfaces/perl5/Makefile.PL.in.orig\tThu Oct 15 11:09:15 1998\n+++ interfaces/perl5/Makefile.PL.in\tThu Oct 15 11:57:17 1998\n@@ -0,0 +1,44 @@\n+#-------------------------------------------------------\n+#\n+# $Id: Makefile.PL,v 1.9 1998/09/27 19:12:21 mergl Exp $\n+#\n+# Copyright (c) 1997, 1998 Edmund Mergl\n+#\n+#-------------------------------------------------------\n+\n+use 
ExtUtils::MakeMaker;\n+use Config;\n+use strict;\n+\n+my %opts;\n+\n+if (! $ENV{POSTGRES_HOME}) {\n+\n+ my $cwd = `pwd`;\n+ chop $cwd;\n+\n+ %opts = (\n+ NAME => 'Pg',\n+ VERSION_FROM => 'Pg.pm',\n+ INC => \"-I$cwd/../libpq -I$cwd/../../include\",\n+ OBJECT => \"Pg\\$(OBJ_EXT)\",\n+ LIBS => [\"-L@prefix@/lib -L$cwd/../libpq -lpq\"],\n+ );\n+\n+} else {\n+\n+ %opts = (\n+ NAME => 'Pg',\n+ VERSION_FROM => 'Pg.pm',\n+ INC => \"-I$ENV{POSTGRES_HOME}/include\",\n+ OBJECT => \"Pg\\$(OBJ_EXT)\",\n+ LIBS => [\"-L$ENV{POSTGRES_HOME}/lib -lpq\"],\n+ );\n+}\n+\n+\n+WriteMakefile(%opts);\n+\n+exit(0);\n+\n+# end of Makefile.PL\n===========================================================================\n--- interfaces/Makefile.orig\tWed Oct 7 01:00:23 1998\n+++ interfaces/Makefile\tTue Oct 13 16:29:16 1998\n@@ -15,13 +15,11 @@\n include $(SRCDIR)/Makefile.global\n \n \n-perl-makefile-dep :=\n-ifeq ($(USE_PERL), true)\n- perl-makefile-dep := perl5/Makefile\n-endif\n+PERL_CLEAN := DO_NOTHING\n+install: PERL_CLEAN := clean\n \n \n-.DEFAULT all install clean dep depend distclean: $(perl-makefile-dep)\n+.DEFAULT all install clean dep depend distclean:\n \t$(MAKE) -C libpq $@\n \t$(MAKE) -C ecpg $@\n ifeq ($(HAVE_Cplusplus), true)\n@@ -33,6 +31,8 @@\n \t$(MAKE) -C libpgtcl $@\n endif\n ifeq ($(USE_PERL), true)\n+\t-$(MAKE) -C perl5 $(PERL_CLEAN)\n+\t$(MAKE) perl5/Makefile\n \t$(MAKE) -C perl5 $@\n endif\n ifeq ($(USE_ODBC), true)\n===========================================================================\n--- configure.in.orig\tThu Oct 15 01:00:20 1998\n+++ configure.in\tThu Oct 15 11:54:09 1998\n@@ -266,17 +266,6 @@\n [ USE_PERL=false; AC_MSG_RESULT(disabled) ]\n )\n \n-#dnl Verify that postgres is already installed\n-#dnl per instructions for perl interface installation\n-if test \"$USE_PERL\" = \"true\"\n-then\n-\tif test \"$WHOAMI\" != \"root\"\n-\tthen\tAC_MSG_WARN(perl support disabled; must be root to install)\n-\t\tUSE_PERL=\n-\tfi\n-fi\n-export USE_PERL\n-\n dnl We include 
odbc support unless we disable it with --with-odbc=false\n AC_MSG_CHECKING(setting USE_ODBC)\n AC_ARG_WITH(\n@@ -917,6 +906,7 @@\n \tinterfaces/libpgtcl/Makefile\n \tinterfaces/odbc/GNUmakefile\n \tinterfaces/odbc/Makefile.global\n+\tinterfaces/perl5/Makefile.PL\n \tpl/plpgsql/src/Makefile\n \tpl/plpgsql/src/mklang.sql\n \tpl/tcl/mkMakefile.tcldefs.sh\n",
"msg_date": "Thu, 15 Oct 1998 12:28:33 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] perl interface bug?"
},
{
"msg_contents": "> Edmund is thinking of a different situation. The perl5 interface is\n> also a \"Perl module\", which means that it is supposed to build and\n> install under very rigid rules --- this is supposed to work:\n> \t\t perl Makefile.PL\n> \t\t make\n> \t\t make test\n> \t\t make install\n> \n> Ok, I see now. So, we need the following:\n> \n> - a Makefile.PL that works out of the box for the above sequence,\n> given that POSTGRES_HOME is set properly.\n> \n> - a Makefile.PL that works with the Postgres installation and gets the\n> right shared library, so if we run `make test' later (or use the\n> interface in any way) it works immediately upon Postgres\n> installation.\n> \n> A solution:\n> \n> - a default Makefile.PL that works\n> \n> - a Makefile.PL replaced from Makefile.PL.in that still works with the\n> above, but satisfies the second criterion.\n> \n> The patches below do this and I think satisfy all considerations\n> (you'll let me know if they don't, right? :)\n> \n> - Makefile.PL has an explanation and if POSTGRES_HOME is not set will\n> print a useful message.\n> \n> - Makefile.PL.in replaces that message with the right thing for\n> finding the installed shared library based on configure.\n> \n> - Makefile: same as before; do the clean before installing to pick up\n> the right shared library. Note that no clean is done for any other\n> targets.\n> \n> - configure.in: output Makefile.PL along with the rest. Note also,\n> that in the current version $WHOAMI is never set so the perl test\n> always fails. Perhaps this should be fixed by setting the variable\n> correctly, but in any case the comments no longer agree with this\n> stuff anyway.\n> \n> Cheers,\n> Brook\n\nThis patch does not apply properly. Please resubmit.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Oct 1998 15:26:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl interface bug?"
},
{
"msg_contents": " > Ok, I see now. So, we need the following:\n > \n > - a Makefile.PL that works out of the box for the above sequence,\n > given that POSTGRES_HOME is set properly.\n > \n > - a Makefile.PL that works with the Postgres installation and gets the\n > right shared library, so if we run `make test' later (or use the\n > interface in any way) it works immediately upon Postgres\n > installation.\n\n let's forget about the make test. In order to get the right\n libpq.so it should be sufficient to change the Makefile in the \n interfaces directory in a way, that 'make' and 'make install'\n for perl5 is called after 'make install' in libpq. Of course\n I would have to adapt Makefile.PL in order to use pgsql/lib\n instead of pgsql/src/interfaces/libpq as linkpath for libpq.so.\n\nI don't think we need to give up on make test. Either the installer\nalready has postgresql installed and running (in which case the\nstandard perl procedure with POSTGRES_HOME set will work) or he/she\ndoesn't and is doing this as part of the main postgresql\ninstallation. In that case we just repeat the build after libpq is\ninstalled; no problem.\n\n But: for 'make install' in the perl directory, you need to be \n root, because the perl installation usually is owned by root.\n How do you want to solve this problem ? Those people without\n root access can say 'perl Makefile.PL PREFIX=/my/perl_directory'\n to install the module into a private directory. Again this\n is not possible with a hard coded 'perl Makefile'.\n\nThis is a complication. Perhaps to be solved secondarily. For my\ninformation so I can think about solutions, in your command what\nexactly is PREFIX pointing to? Directly to the root of the perl\nlibrary tree?\n\nWould a solution be to enhance the --with-perl option to point to the\ndirectory of interest unless configure is run by root? In that case\nthe interfaces/Makefile could include the prefix argument if\nnecessary and things would just work. 
If one does the perl stuff\nstandalone, they can always issue the command with a prefix\nthemselves.\n\nLet's get the rest of this done right first, though, and worry about\nthis root/nonroot install problem next. I goofed my earlier patches,\nso I'll resubmit them and go from there.\n\nCheers,\nBrook\n",
"msg_date": "Thu, 15 Oct 1998 15:15:58 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] perl interface bug?"
},
{
"msg_contents": " This patch does not apply properly. Please resubmit.\n\nSorry. Here is a set that works against my copy of the BETA2\ntarball. Hope that works now.\n\nCheers,\nBrook\n\n===========================================================================\n--- interfaces/perl5/Makefile.PL.orig\tMon Sep 28 01:00:21 1998\n+++ interfaces/perl5/Makefile.PL\tThu Oct 15 12:55:19 1998\n@@ -10,29 +10,25 @@\n use Config;\n use strict;\n \n-# because the perl5 interface is always contained in the source tree,\n-# we can be sure about the location of the include files and libs.\n-# For development and testing we still test for POSTGRES_HOME.\n-#\n-#print \"\\nConfiguring Pg\\n\";\n-#print \"Remember to actually read the README file !\\n\";\n-#die \"\\nYou didn't read the README file !\\n\" unless ($] >= 5.002);\n-#\n-#if (! $ENV{POSTGRES_HOME}) {\n-# warn \"\\$POSTGRES_HOME not defined. Searching for PostgreSQL...\\n\";\n-# foreach(qw(../../../ /usr/local/pgsql /usr/pgsql /home/pgsql /opt/pgsql /usr/local/postgres /usr/postgres /home/postgres /opt/postgres)) {\n-# if (-d \"$_/lib\") {\n-# $ENV{POSTGRES_HOME} = $_;\n-# last;\n-# }\n-# }\n-#}\n-#\n-#if (-d \"$ENV{POSTGRES_HOME}/lib\") {\n-# print \"Found PostgreSQL in $ENV{POSTGRES_HOME}\\n\";\n-#} else {\n-# die \"Unable to determine PostgreSQL\\n\";\n-#}\n+# This Makefile.PL is intended for standalone use when PostgreSQL is\n+# already installed. In that case, install the perl module as follows:\n+# \n+# setenv POSTGRES_HOME /path/to/root/of/installed/postgres\n+# perl Makefile.PL\n+# make\n+# make test\n+# make install\n+\n+# During normal installation of PostgreSQL, this file will be replaced\n+# by one derived from Makefile.PL.in so that the installed shared\n+# library libpq.so will be found during installation of this module.\n+# As a result, the POSTGRES_HOME environment variable need not be set\n+# during PostgreSQL installation. Note that ../Makefile takes care of\n+# the `perl Makefile.PL' command. 
Note also that it is still possible\n+# to follow the standalone installation procedure, even after\n+# configuring and installing PostgreSQL, because the `else'\n+# conditional branch below is identical in both Makefile.PL and\n+# Makefile.PL.in.\n \n my %opts;\n \n@@ -41,14 +37,17 @@\n my $cwd = `pwd`;\n chop $cwd;\n \n- %opts = (\n- NAME => 'Pg',\n- VERSION_FROM => 'Pg.pm',\n- INC => \"-I$cwd/../libpq -I$cwd/../../include\",\n- OBJECT => \"Pg\\$(OBJ_EXT)\",\n- LIBS => [\"-L$cwd/../libpq -lpq\"],\n- );\n+ print \"To install the perl interface for PostgreSQL do the following:\\n\";\n+ print \" - install PostgreSQL\\n\";\n+ print \" - set the POSTGRES_HOME environment variable appropriately\\n\";\n+ print \" - in this directory ($cwd):\\n\";\n+ print \" perl Makefile.PL\\n\";\n+ print \" make\\n\";\n+ print \" make test\t[ with a postmaster running ]\\n\";\n+ print \" make install\\n\";\n \n+ exit(1);\n+ \n } else {\n \n %opts = (\n===========================================================================\n--- interfaces/perl5/Makefile.PL.in.orig\tThu Oct 15 11:09:15 1998\n+++ interfaces/perl5/Makefile.PL.in\tThu Oct 15 11:57:17 1998\n@@ -0,0 +1,44 @@\n+#-------------------------------------------------------\n+#\n+# $Id: Makefile.PL,v 1.9 1998/09/27 19:12:21 mergl Exp $\n+#\n+# Copyright (c) 1997, 1998 Edmund Mergl\n+#\n+#-------------------------------------------------------\n+\n+use ExtUtils::MakeMaker;\n+use Config;\n+use strict;\n+\n+my %opts;\n+\n+if (! 
$ENV{POSTGRES_HOME}) {\n+\n+ my $cwd = `pwd`;\n+ chop $cwd;\n+\n+ %opts = (\n+ NAME => 'Pg',\n+ VERSION_FROM => 'Pg.pm',\n+ INC => \"-I$cwd/../libpq -I$cwd/../../include\",\n+ OBJECT => \"Pg\\$(OBJ_EXT)\",\n+ LIBS => [\"-L@prefix@/lib -L$cwd/../libpq -lpq\"],\n+ );\n+\n+} else {\n+\n+ %opts = (\n+ NAME => 'Pg',\n+ VERSION_FROM => 'Pg.pm',\n+ INC => \"-I$ENV{POSTGRES_HOME}/include\",\n+ OBJECT => \"Pg\\$(OBJ_EXT)\",\n+ LIBS => [\"-L$ENV{POSTGRES_HOME}/lib -lpq\"],\n+ );\n+}\n+\n+\n+WriteMakefile(%opts);\n+\n+exit(0);\n+\n+# end of Makefile.PL\n===========================================================================\n--- interfaces/Makefile.orig\tWed Oct 7 01:00:23 1998\n+++ interfaces/Makefile\tTue Oct 13 16:29:16 1998\n@@ -15,13 +15,11 @@\n include $(SRCDIR)/Makefile.global\n \n \n-perl-makefile-dep :=\n-ifeq ($(USE_PERL), true)\n- perl-makefile-dep := perl5/Makefile\n-endif\n+PERL_CLEAN := DO_NOTHING\n+install: PERL_CLEAN := clean\n \n \n-.DEFAULT all install clean dep depend distclean: $(perl-makefile-dep)\n+.DEFAULT all install clean dep depend distclean:\n \t$(MAKE) -C libpq $@\n \t$(MAKE) -C ecpg $@\n ifeq ($(HAVE_Cplusplus), true)\n@@ -33,6 +31,8 @@\n \t$(MAKE) -C libpgtcl $@\n endif\n ifeq ($(USE_PERL), true)\n+\t-$(MAKE) -C perl5 $(PERL_CLEAN)\n+\t$(MAKE) perl5/Makefile\n \t$(MAKE) -C perl5 $@\n endif\n ifeq ($(USE_ODBC), true)\n===========================================================================\n--- configure.in.orig\tThu Oct 15 01:00:20 1998\n+++ configure.in\tThu Oct 15 11:54:09 1998\n@@ -266,17 +266,6 @@\n [ USE_PERL=false; AC_MSG_RESULT(disabled) ]\n )\n \n-#dnl Verify that postgres is already installed\n-#dnl per instructions for perl interface installation\n-if test \"$USE_PERL\" = \"true\"\n-then\n-\tif test \"$WHOAMI\" != \"root\"\n-\tthen\tAC_MSG_WARN(perl support disabled; must be root to install)\n-\t\tUSE_PERL=\n-\tfi\n-fi\n-export USE_PERL\n-\n dnl We include odbc support unless we disable it with --with-odbc=false\n 
AC_MSG_CHECKING(setting USE_ODBC)\n AC_ARG_WITH(\n@@ -917,6 +906,7 @@\n \tinterfaces/libpgtcl/Makefile\n \tinterfaces/odbc/GNUmakefile\n \tinterfaces/odbc/Makefile.global\n+\tinterfaces/perl5/Makefile.PL\n \tpl/plpgsql/src/Makefile\n \tpl/plpgsql/src/mklang.sql\n \tpl/tcl/mkMakefile.tcldefs.sh\n",
"msg_date": "Thu, 15 Oct 1998 15:23:55 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] perl interface bug?"
},
{
"msg_contents": "Brook Milligan wrote:\n> \n> Edmund is thinking of a different situation. The perl5 interface is\n> also a \"Perl module\", which means that it is supposed to build and\n> install under very rigid rules --- this is supposed to work:\n> perl Makefile.PL\n> make\n> make test\n> make install\n> \n> Ok, I see now. So, we need the following:\n> \n> - a Makefile.PL that works out of the box for the above sequence,\n> given that POSTGRES_HOME is set properly.\n> \n> - a Makefile.PL that works with the Postgres installation and gets the\n> right shared library, so if we run `make test' later (or use the\n> interface in any way) it works immediately upon Postgres\n> installation.\n> \n\n\nlet's forget about the make test. In order to get the right\nlibpq.so it should be sufficient to change the Makefile in the \ninterfaces directory in a way, that 'make' and 'make install'\nfor perl5 is called after 'make install' in libpq. Of course\nI would have to adapt Makefile.PL in order to use pgsql/lib\ninstead of pgsql/src/interfaces/libpq as linkpath for libpq.so.\n\nBut: for 'make install' in the perl directory, you need to be \nroot, because the perl installation usually is owned by root.\nHow do you want to solve this problem ? Those people without\nroot access can say 'perl Makefile.PL PREFIX=/my/perl_directory'\nto install the module into a private directory. Again this\nis not possible with a hard coded 'perl Makefile'.\n\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Thu, 15 Oct 1998 22:09:14 +0000",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl interface bug?"
},
{
"msg_contents": "Applied.\n\n> This patch does not apply properly. Please resubmit.\n> \n> Sorry. Here is a set that works against my copy of the BETA2\n> tarball. Hope that works now.\n> \n> Cheers,\n> Brook\n> \n> ===========================================================================\n> --- interfaces/perl5/Makefile.PL.orig\tMon Sep 28 01:00:21 1998\n> +++ interfaces/perl5/Makefile.PL\tThu Oct 15 12:55:19 1998\n> @@ -10,29 +10,25 @@\n> use Config;\n> use strict;\n> \n> -# because the perl5 interface is always contained in the source tree,\n> -# we can be sure about the location of the include files and libs.\n> -# For development and testing we still test for POSTGRES_HOME.\n> -#\n> -#print \"\\nConfiguring Pg\\n\";\n> -#print \"Remember to actually read the README file !\\n\";\n> -#die \"\\nYou didn't read the README file !\\n\" unless ($] >= 5.002);\n> -#\n> -#if (! $ENV{POSTGRES_HOME}) {\n> -# warn \"\\$POSTGRES_HOME not defined. Searching for PostgreSQL...\\n\";\n> -# foreach(qw(../../../ /usr/local/pgsql /usr/pgsql /home/pgsql /opt/pgsql /usr/local/postgres /usr/postgres /home/postgres /opt/postgres)) {\n> -# if (-d \"$_/lib\") {\n> -# $ENV{POSTGRES_HOME} = $_;\n> -# last;\n> -# }\n> -# }\n> -#}\n> -#\n> -#if (-d \"$ENV{POSTGRES_HOME}/lib\") {\n> -# print \"Found PostgreSQL in $ENV{POSTGRES_HOME}\\n\";\n> -#} else {\n> -# die \"Unable to determine PostgreSQL\\n\";\n> -#}\n> +# This Makefile.PL is intended for standalone use when PostgreSQL is\n> +# already installed. 
In that case, install the perl module as follows:\n> +# \n> +# setenv POSTGRES_HOME /path/to/root/of/installed/postgres\n> +# perl Makefile.PL\n> +# make\n> +# make test\n> +# make install\n> +\n> +# During normal installation of PostgreSQL, this file will be replaced\n> +# by one derived from Makefile.PL.in so that the installed shared\n> +# library libpq.so will be found during installation of this module.\n> +# As a result, the POSTGRES_HOME environment variable need not be set\n> +# during PostgreSQL installation. Note that ../Makefile takes care of\n> +# the `perl Makefile.PL' command. Note also that it is still possible\n> +# to follow the standalone installation procedure, even after\n> +# configuring and installing PostgreSQL, because the `else'\n> +# conditional branch below is identical in both Makefile.PL and\n> +# Makefile.PL.in.\n> \n> my %opts;\n> \n> @@ -41,14 +37,17 @@\n> my $cwd = `pwd`;\n> chop $cwd;\n> \n> - %opts = (\n> - NAME => 'Pg',\n> - VERSION_FROM => 'Pg.pm',\n> - INC => \"-I$cwd/../libpq -I$cwd/../../include\",\n> - OBJECT => \"Pg\\$(OBJ_EXT)\",\n> - LIBS => [\"-L$cwd/../libpq -lpq\"],\n> - );\n> + print \"To install the perl interface for PostgreSQL do the following:\\n\";\n> + print \" - install PostgreSQL\\n\";\n> + print \" - set the POSTGRES_HOME environment variable appropriately\\n\";\n> + print \" - in this directory ($cwd):\\n\";\n> + print \" perl Makefile.PL\\n\";\n> + print \" make\\n\";\n> + print \" make test\t[ with a postmaster running ]\\n\";\n> + print \" make install\\n\";\n> \n> + exit(1);\n> + \n> } else {\n> \n> %opts = (\n> ===========================================================================\n> --- interfaces/perl5/Makefile.PL.in.orig\tThu Oct 15 11:09:15 1998\n> +++ interfaces/perl5/Makefile.PL.in\tThu Oct 15 11:57:17 1998\n> @@ -0,0 +1,44 @@\n> +#-------------------------------------------------------\n> +#\n> +# $Id: Makefile.PL,v 1.9 1998/09/27 19:12:21 mergl Exp $\n> +#\n> +# Copyright (c) 1997, 1998 
Edmund Mergl\n> +#\n> +#-------------------------------------------------------\n> +\n> +use ExtUtils::MakeMaker;\n> +use Config;\n> +use strict;\n> +\n> +my %opts;\n> +\n> +if (! $ENV{POSTGRES_HOME}) {\n> +\n> + my $cwd = `pwd`;\n> + chop $cwd;\n> +\n> + %opts = (\n> + NAME => 'Pg',\n> + VERSION_FROM => 'Pg.pm',\n> + INC => \"-I$cwd/../libpq -I$cwd/../../include\",\n> + OBJECT => \"Pg\\$(OBJ_EXT)\",\n> + LIBS => [\"-L@prefix@/lib -L$cwd/../libpq -lpq\"],\n> + );\n> +\n> +} else {\n> +\n> + %opts = (\n> + NAME => 'Pg',\n> + VERSION_FROM => 'Pg.pm',\n> + INC => \"-I$ENV{POSTGRES_HOME}/include\",\n> + OBJECT => \"Pg\\$(OBJ_EXT)\",\n> + LIBS => [\"-L$ENV{POSTGRES_HOME}/lib -lpq\"],\n> + );\n> +}\n> +\n> +\n> +WriteMakefile(%opts);\n> +\n> +exit(0);\n> +\n> +# end of Makefile.PL\n> ===========================================================================\n> --- interfaces/Makefile.orig\tWed Oct 7 01:00:23 1998\n> +++ interfaces/Makefile\tTue Oct 13 16:29:16 1998\n> @@ -15,13 +15,11 @@\n> include $(SRCDIR)/Makefile.global\n> \n> \n> -perl-makefile-dep :=\n> -ifeq ($(USE_PERL), true)\n> - perl-makefile-dep := perl5/Makefile\n> -endif\n> +PERL_CLEAN := DO_NOTHING\n> +install: PERL_CLEAN := clean\n> \n> \n> -.DEFAULT all install clean dep depend distclean: $(perl-makefile-dep)\n> +.DEFAULT all install clean dep depend distclean:\n> \t$(MAKE) -C libpq $@\n> \t$(MAKE) -C ecpg $@\n> ifeq ($(HAVE_Cplusplus), true)\n> @@ -33,6 +31,8 @@\n> \t$(MAKE) -C libpgtcl $@\n> endif\n> ifeq ($(USE_PERL), true)\n> +\t-$(MAKE) -C perl5 $(PERL_CLEAN)\n> +\t$(MAKE) perl5/Makefile\n> \t$(MAKE) -C perl5 $@\n> endif\n> ifeq ($(USE_ODBC), true)\n> ===========================================================================\n> --- configure.in.orig\tThu Oct 15 01:00:20 1998\n> +++ configure.in\tThu Oct 15 11:54:09 1998\n> @@ -266,17 +266,6 @@\n> [ USE_PERL=false; AC_MSG_RESULT(disabled) ]\n> )\n> \n> -#dnl Verify that postgres is already installed\n> -#dnl per instructions for perl interface 
installation\n> -if test \"$USE_PERL\" = \"true\"\n> -then\n> -\tif test \"$WHOAMI\" != \"root\"\n> -\tthen\tAC_MSG_WARN(perl support disabled; must be root to install)\n> -\t\tUSE_PERL=\n> -\tfi\n> -fi\n> -export USE_PERL\n> -\n> dnl We include odbc support unless we disable it with --with-odbc=false\n> AC_MSG_CHECKING(setting USE_ODBC)\n> AC_ARG_WITH(\n> @@ -917,6 +906,7 @@\n> \tinterfaces/libpgtcl/Makefile\n> \tinterfaces/odbc/GNUmakefile\n> \tinterfaces/odbc/Makefile.global\n> +\tinterfaces/perl5/Makefile.PL\n> \tpl/plpgsql/src/Makefile\n> \tpl/plpgsql/src/mklang.sql\n> \tpl/tcl/mkMakefile.tcldefs.sh\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Oct 1998 00:37:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl interface bug?"
},
{
"msg_contents": "Brook Milligan wrote:\n> \n> > Ok, I see now. So, we need the following:\n> >\n> > - a Makefile.PL that works out of the box for the above sequence,\n> > given that POSTGRES_HOME is set properly.\n> >\n> > - a Makefile.PL that works with the Postgres installation and gets the\n> > right shared library, so if we run `make test' later (or use the\n> > interface in any way) it works immediately upon Postgres\n> > installation.\n> \n> let's forget about the make test. In order to get the right\n> libpq.so it should be sufficient to change the Makefile in the\n> interfaces directory in a way, that 'make' and 'make install'\n> for perl5 is called after 'make install' in libpq. Of course\n> I would have to adapt Makefile.PL in order to use pgsql/lib\n> instead of pgsql/src/interfaces/libpq as linkpath for libpq.so.\n> \n> I don't think we need to give up on make test. Either the installer\n> already has postgresql installed and running (in which case the\n> standard perl procedure with POSTGRES_HOME set will work) or he/she\n> doesn't and is doing this as part of the main postgresql\n> installation. In that case we just repeat the build after libpq is\n> installed; no problem.\n> \n> But: for 'make install' in the perl directory, you need to be\n> root, because the perl installation usually is owned by root.\n> How do you want to solve this problem ? Those people without\n> root access can say 'perl Makefile.PL PREFIX=/my/perl_directory'\n> to install the module into a private directory. Again this\n> is not possible with a hard coded 'perl Makefile'.\n> \n> This is a complication. Perhaps to be solved secondarily. For my\n> information so I can think about solutions, in your command what\n> exactly is PREFIX pointing to? Directly to the root of the perl\n> library tree?\n> \n> Would a solution be to enhance the --with-perl option to point to the\n> directory of interest unless configure is run by root? 
In that case\n> the interfaces/Makefile could include the prefix argument if\n> necessary and things would just work. If one does the perl stuff\n> standalone, they can always issue the command with a prefix\n> themselves.\n> \n> Let's get the rest of this done right first, though and worry about\n> this root/nonroot install problem next. I goofed my earlier patches,\n> so I'll resubmit them and go from there.\n> \n> Cheers,\n> Brook\n\nthe standard path for installing perl modules is .../lib/perl5/site_perl/...\nOnly in case someone has no root access and no possibility to make a\nprivate perl installation, he will use the PREFIX option. The disadvantage\nof installing a module in a private directory is, in every perl script using \nthis module the user will have to add the searchpath, so that perl is able to \nfind the module at this non-standard place. Obviously this is only a work-around \nand should not be used unless the user has no root access and the user is forced \nto use the perl installation of the sys-admin. \n\nI guess in 95% the user/sys-admin will have to install the perl-module manually,\nwhich means \n\n - cd-ing into interfaces/perl5 \n - make install\n\n\nWhat's the big difference to:\n\n - cd-ing into interfaces/perl5 \n - perl Makefile.PL\n - make\n - make install\n\nIn other words, I still prefer the solution not to build\nthe perl module together with postgresql. I think it is\nsufficient to mention the perl module and the commands\nneeded in order to install it in the INSTALL file and\nthat's it.\n\n\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Fri, 16 Oct 1998 10:40:55 +0000",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl interface bug?"
},
{
"msg_contents": "I have thought some more about the Perl-module-install problem,\nand concluded that we still don't have it quite right. But I think\nthat I know how to do it right.\n\nThere are two problems remaining in Brook's latest patch:\n\n1. Allowing Makefile.PL to be overwritten by configure from\nMakefile.PL.in is a no-no. A cardinal rule of software source\ndistributions is that the build procedure must not modify any of the\nfiles that are part of the distributed fileset. To do otherwise\ncreates a trap for developers (who might accidentally check in a\nlocally-customized copy instead of a pristine original), and a trap for\ninstallers too (who cannot get back to the original fileset by doing\n\"make clean\").\n\n(I see that Bruce's actual installation of Brook's patch removed\nMakefile.PL from the fileset. That avoids the above traps, but it\nbreaks the original goal of keeping the perl5 subdirectory a valid\nstand-alone-installable Perl module.)\n\n2. As it stands, when building in the Postgres source tree the Perl\nmodule's search paths for include and lib files will be /usr/local/pgsql\n(or local equivalent) *first*, the source tree *second*. This is no\ngood if you are installing a new Postgres version on a machine that\ncurrently has an older one installed. The build phase will compile\nand link the Perl module against the installed version in\n/usr/local/pgsql, not against the new version in the surrounding source\ntree. Sure, it'll get rebuilt correctly during the install pass, but\nthis approach offers no security that you actually have a working tree\nuntil you do the install. If it's gonna fail, you'd like to know sooner\nthan that...\n\nSo, what we really want is a two-phase Perl module build/install\nprocess: during the build phase, compile and link against the files\nin the surrounding tree, ignoring /usr/local/pgsql entirely. 
In\nthe install phase (and not before then!), blow away that version and\nrebuild using the already-installed include files and libpq library\nfile that now reside in /usr/local/pgsql. This ensures that the\ncompleted Perl module will have the right path to the libpq library\n(if the platform demands one).\n\nFortunately, it's easy to do this in a way that's compatible with also\nsupporting stand-alone installation of the Perl module. We make the\nPerl module's Makefile.PL use relative paths (and *only* relative paths)\nif POSTGRES_HOME is not defined, and we make it use POSTGRES_HOME\n(*only*) if that environment variable is set. Then we tweak the\nsrc/interfaces Makefile so that during \"make install\", we do \"make\nclean\" and then re-execute Makefile.PL *with POSTGRES_HOME set*.\nSo during the install phase, the Perl module is rebuilt the same way\nit would be if being installed standalone.\n\nThis is actually a lot simpler than what we had before.\n\nA free side benefit is that our regular builds will test the\nstandalone-install code path, so we'll know if it breaks.\n\nI'll test and check in the necessary changes this evening, unless\nI hear loud protests...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Oct 1998 20:35:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] perl interface bug? "
},
{
"msg_contents": "I envision 2 general classes of postgresql installers: 1) root, and\n2) nonroot users wishing to try/use postgresql.\n\nIn the case of root sysadmins, everything (including perl) should\ninstall out of the box. It does so now without affecting the\nstandalone perl interface installation.\n\nIn the case of nonroot installers, there are two subcases: 2a) perl is\ninstalled in system directories with only root access, and 2b) perl\nwas installed in some other place by the postgresql installer.\n\nIn case 2b, again there is no problem. The install (with a suitable\n--prefix=... argument to configure) should proceed unimpeded.\n\nIn case 2a, postgresql is installable under control of the\n--prefix=... argument, but there will be a conflict when perl is\ninstalled do to lack of access to the perl filesystem for the perl\ninterface shared library. In this case, the installer can install\npostgresql WITHOUT the --with-perl option to configure. Later,\nsomeone with root permission can do the \n\n\tcd interfaces/perl5\n\tperl Makefile.PL\n\tmake\n\tmake test\n\tmake install\n\nsequence. I don't see any situations that lose here. Am I missing\nsomething?\n\nIn conclusion, I see our current perl interface handling as addressing\nall the relevant conditions (thanks to Tom Lane for finishing it\nup!).\n\nCheers,\nBrook\n",
"msg_date": "Mon, 19 Oct 1998 16:58:31 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] perl interface bug?"
}
]
[
{
"msg_contents": "OK, I've installed new beta html docs on the web page, and put them into\ntar files in ftp://postgresql.org/pub/patches.\n\nI've also set my umask to allow group write permissions, so that the\npgsql group on hub.org can use my directories. Once we've gotten the\njade problem solved (see below) you should be able to regenerate docs at\nwill with a simple \"gmake postgres.html\" from my cvs tree.\n\n> > There is a fully functioning jade/docbook installation on hub.org\n> > (postgresql.org).\n\nNote to scrappy:\n\njade seems to have gone away, or I've got my paths wrong:\n\n$ gmake user.html\n(rm -rf *.htm)\njade -D ref -D ../graphics -V %use-id-as-filename% -d\n/home/users/t/thomas/db118.d/docbook/html/docbook.dsl -t sgml user.sgml\ngmake: jade: Command not found\ngmake: *** [user.html] Error 127\n\nHelp!\n\n - Tom\n",
"msg_date": "Wed, 14 Oct 1998 18:13:48 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Re: [HACKERS] Alternative to LIMIT in SELECT ?"
}
]
[
{
"msg_contents": ">On Tue, 13 Oct 1998, Bruce Momjian wrote:\n>\n>> My guess in a web application is that the transaction is started for\n>> every new page, so you can't have transactions spanning SQL sessions.\n>>\n>> LIMIT theoretically would allow you to start up where you left off.\n\n\nslightly off topic question, but how would this work? by doing an order by\nand knowing the last record? or is this handled automatically in MySQL?\n\n>\n>************ EXACTLY !-)\n>Plus, it could also be used to limit bogus-run-away queries.\n\n\ni'm not really following this thread too closely (i'm reading but not always\nthinking while i read) so i probably missed some things, but it seems to me\nthe query limit env. variable hack is a *good thing* (i would use it, at\nleast -- specifically to eliminate runaway queries, not so much to limit the\n# of matches) and the cursor approach would be the *right way* but not\nalways the easiest way (in # of steps) or fastest way (because of the order\nby/sort business -- i've been using cursors in web apps for a while now, so\ni never noticed that this causes things to get ugly or think about why). i\ncan understand things getting ugly with joins and sorting, but on a plain,\nsingle table select with no indexes (or even with indexes), which is\napparently the target of the todo item, i still don't get the difference.\nyou'd still be stuck scanning, which i would think would be the time\nconsuming part.\n\ni guess that's why i'm just a lurker and not really a hacker, though.\n\n",
"msg_date": "Wed, 14 Oct 1998 14:16:54 -0500",
"msg_from": "\"Jeff Hoffmann\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "MySQL question (was Re: [HACKERS] What about LIMIT in SELECT ?)"
}
]
[
{
"msg_contents": "Hi,\n\nI'm writing some documentation about my recent changes to the backend\nbut I'm having problems with the sgml format used in the doc sources.\nThe makefile in doc/src references some files in /usr/lib/sgml/stylesheets\nwhich aren't installed in my system. I would like to know where to find\nthose files and if it is possible to convert the sgml format to and from\nsome format easier to edit. I've found converters to lyx and html, which\ncould easily be edited with proper programs, but none to convert back one\nof these formats into sgml. Any suggestion?\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-461-534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Wed, 14 Oct 1998 23:08:24 +0200 (MET DST)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": true,
"msg_subject": "sgml"
},
{
"msg_contents": "Hi,\nI have cvs'ed PostgreSQL \n\ncd /usr/local/pgsql\ncvs -z3 update -d\n\nand when I do:\n\ncd src\nautoconf (all fine)\nautoheader (lot's of harmless warning messages)\n./configure --with-tcl --with-perl\n\nconfigure: warning: perl support disabled; must be root to install\n\nmake\n\nsome harmless messages\nAll of PostgreSQL is successfully made. Ready to install.\n\nmake install\n\nThank you for choosing PostgreSQL, the most advanced open source database\nengine.\n\ncd ../doc\n\nmake\nif [ ! -d /usr/local/pgsql/doc ]; then mkdir /usr/local/pgsql/doc; fi\n\ncd src\n\noh mei (18.000 lines with docs on my screen!!)\n\ncd /usr/lib/sgml\nmkdir stylesheets\ncd stylesheets\nmkdir jade\nln -s /usr/lib/dsssl/stylesheets/docbook /usr/lib/sgml/stylesheets/jade\n\ncd /usr/local/pgsql/doc/src\nmake\n\nonly two warnings\njade:ref/set.sgml:455:5:E: document type does not allow element \"PARA\"\nhere\njade:start.sgml:229:26:X: reference to non-existent ID \"PROGRAMMERS-GUIDE\"\n\nWow, looks great! Many thanks to Oliver.\n\n-Egon\n\n\n",
"msg_date": "Thu, 15 Oct 1998 01:13:24 +0200 (MET DST)",
"msg_from": "Egon Schmid <[email protected]>",
"msg_from_op": false,
"msg_subject": "Problems"
},
{
"msg_contents": "> I'm writing some documentation about my recent changes to the backend\n> but I'm having problems with the sgml format used in the doc sources.\n\nOK, first off, just remember that the words are the most important, and\nthe markup can be fixed later. And I'll help as much as you want.\n\n> The makefile in doc/src references some files in \n> /usr/lib/sgml/stylesheets\n> which aren't installed in my system. I would like to know where to \n> find those files\n\n http://www.nwalsh.com/docbook/dsssl/index.html\n\nI'm using the v1.18 experimental release at the moment. Look in the\nappendix on docs inside the postgres integrated documentation for a\ndescription of what your Makefile.custom would look like.\n\nThe docs also discuss how to install jade and the other pieces used to\nconvert the sgml into html and hardcopy.\n\n> ... and if it is possible to convert the sgml format to \n> and from some format easier to edit. I've found converters to lyx and \n> html, which could easily be edited with proper programs, but none to \n> convert back one of these formats into sgml. Any suggestion?\n\nThe problem with both lyx (latex) and html is that these formats mark up\nappearance, not content. So, there is no easy way to go back to the\ncomplete content-markup of the original sgml. I've heard that the lyx\nfolks are working toward understanding DocBook sgml, but would guess\nthat it isn't coming soon.\n\nI have the impression that FrameMaker can use sgml for input and output,\nbut that is a commercial product.\n\nemacs has an sgml editing mode, but I haven't been able to get it to\nwork completely for me yet. It has trouble finding or reading the DTDs\nto teach itself how to understand the DocBook definitions.\n\nI just type the stuff in emacs or vi, which is pretty primitive.\n\n - Tom\n",
"msg_date": "Thu, 15 Oct 1998 06:47:02 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] sgml"
}
]
[
{
"msg_contents": "Tom Lane wrote:\n> \n> 6.4-BETA1 doesn't work with Tcl 8.*. The current snapshot does though.\n> Also I think a BETA2 tarball should be out today or tomorrow.\n\nIt's just fine. I'll wait for it. \nI just wanted to test 6.4 BETA against PgAccess and maybe make some\nimprovements to PgAccess in order to release a new version of PgAccess\nwith 6.4\n\nIt's there something \"Libpgtcl now gets async notifies from libpq(Tom)\".\n\nWhat means that ? Is there something that could improve PgAccess ?\nThe most important thing that could improve PgAccess would be another\nimplementation of pg_select.\n\nFor the moment, pg_select is issuing a \"select\" statement to backend IS\nWAITING FOR ALL DATA TO COME, and then process the tcl loop.\nIt would be smarter to do something like that :\n\npg_select : \"command sent to backend\" \nloop\n wait for another tuple to come\n if (end_of_records) break\n process tcl loop script\nendofloop\n\nany chance to implement such a behavior in the future ?\n\nUnder a real multitasking environment and working with a server across a\nnetwork, this would reduce the time of processing selects that could be\nexecuted in the same time that data is coming through the wire. Also,\nwill be memory consumption reduction, because every tuple can be\nreleased after tcl loop processing, instead of buffering the WHOLE query\nresult in the local memory.\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n",
"msg_date": "Thu, 15 Oct 1998 08:13:22 +0300",
"msg_from": "Constantin Teodorescu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wishes for PostgreSQL 6.4"
},
{
"msg_contents": "Constantin Teodorescu <[email protected]> writes:\n> It's there something \"Libpgtcl now gets async notifies from libpq(Tom)\".\n> What means that ? Is there something that could improve PgAccess ?\n\nIf you want to listen for notifies, you just have to do \"pg_listen\".\nThe bit of code you specify is automatically executed from the Tcl idle\nloop whenever a matching notify message arrives, just like button\ncallbacks and such. The old \"pg_notifies\" (sp?) command is not needed\nanymore.\n\n> For the moment, pg_select is issuing a \"select\" statement to backend IS\n> WAITING FOR ALL DATA TO COME, and then process the tcl loop.\n\nWe speculated a while back about extending PQgetResult to be able to\nreturn partial result sets, say by specifying an upper limit on the\nnumber of tuples per result set. (Setting the upper limit to be one,\nas you imply, is probably not very efficient ... I'd guess a few dozen\ntuples per cycle might be reasonable.) Once that was done pg_select\ncould be rewritten to make use of it. It's not going to happen for\n6.4, obviously. Maybe for 6.5.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Oct 1998 11:51:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wishes for PostgreSQL 6.4 "
}
]
[
{
"msg_contents": "Additions\n---------\nnew functoins and test INET address type(Tom Helbekkmo)\nCREATE TABLE test (x text, s serial) fails if no database creation permission\nregression test all platforms\nfix perl/tcl compile/library issues\n\nSerious Items\n------------\nchange pg args for platforms that don't support argv changes\n\t(setproctitle()?, sendmail hack?)\n\nDocs\n----\ngenerate html/postscript documentation\nmake sure all changes are documented properly\n\nMinor items\n-----------\ncnf-ify still can exhaust memory, make SET KSQO more generic\npermissions on indexes: what do they do? should it be prevented?\nallow multiple generic operators in expressions without the use of parentheses\ndocument/trigger/rule so changes to pg_shadow create pg_pwd\nlarge objects orphanage\nimprove group handling\nimprove PRIMARY KEY handling\ngenerate postmaster pid file and remove flock/fcntl lock code\nadd ability to specifiy location of lock/socket files\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Oct 1998 01:59:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Open 6.4 items"
}
]
[
{
"msg_contents": "The attached patches clean-up the TCL/TK configuration as follows:\n\n1. Removed unnecessary code relating to TCL/TK configuration from configure.\n\n2. Change TCL/TK enabling to be dependant on the existance of an executable\n tclsh (locatable via $PATH) and the existance of tclConfig.sh and\n tkConfig.sh.\n\n3. The directories that are searched for the *Config.sh file is determined by\n the contents of $LIBRARY_DIRS (set by '--with-libs' or '--with-libraries')\n and the output generated by executing 'echo \"puts $auto_path\" | tchsh\". \n [Thanks Roland!]\n\n Note: If TK is installed in a different location the TCL, you must use the\n --with-libs (or --with-libraries) option of configure to specify it's\n location.\n\n4. Added \"USE_TK\" to Makefile.global which is set if TK support is available\n (as determined by the existance of tkConfig.sh). USE_TK will only be set\n true if USE_TCL is true, and TK support is available. This will allow\n features/programs that only depend on TCL to compile and install even if\n TK support is missing.\n\n5. Modified the pgtclsh Makefile so that pgtclsh will compile and install even\n if TK support is missing. pgtksh will not be built unless TK support is\n available.\n\nNOTE: The file, bin/pgtclsh/mkMakefile.tcltkdefs.sh.in, is no longer needed and\n can be removed.\n\nNOTE: With these changes (and earlier ones), manually setting USE_TCL and \n USE_TK in Makefile.global becomes a very bad idea and will cause the\n build to fail.\n\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Thu, 15 Oct 1998 04:02:27 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "TCL/TK configuration clean-up patches"
},
{
"msg_contents": " 1. Removed unnecessary code relating to TCL/TK configuration from configure.\n\nGenerally a good idea, but what is unnecessary depends on perspective.\nSee below.\n\n 2. Change TCL/TK enabling to be dependant on the existance of an executable\n tclsh (locatable via $PATH) and the existance of tclConfig.sh and\n tkConfig.sh.\n\nWhat if the executable is not named tclsh? This is not just idle\nspeculation, because of the incompatibilities between different tcl\nversions. There is real reason to maintain multiple versions if tcl\napplications require the features of one version that are not in\nanother. In such cases it is natural to name the executable tclsh7.6\nor tclsh8.0 or some such thing in order to distinguish them. \n\nThis really is the same problem as looking for the include/library\ndirectories only now you are looking for (perhaps) versioned\nexecutables.\n\nIt seems to me that the following might be a better solution:\n\n - new configure argument to set TCL variable\n --with-tcl=tclsh8.0\t\t[ with default = tclsh ]\n\n - USE_TCL disabled if the named tcl is not found\n\n - TclConfig.sh found via 'echo \"puts $auto_path\" | ${TCL}'\n\n - use TclConfig.sh to generate Makefile.tcl\n\nThis would give full flexibility to specify the tcl executable while\nmaintaining a useful default. Is there any need to search the whole\nlibrary list for TclConfig.sh?\n\n 3. The directories that are searched for the *Config.sh file is determined by\n the contents of $LIBRARY_DIRS (set by '--with-libs' or '--with-libraries')\n and the output generated by executing 'echo \"puts $auto_path\" | tchsh\". \n [Thanks Roland!]\n\nSee above.\n\n Note: If TK is installed in a different location the TCL, you must use the\n\t --with-libs (or --with-libraries) option of configure to specify it's\n\t location.\n\nWould it make sense to have a --with-tk-lib=/usr/xxx/tk configure\noption? 
I suggest this because the only purpose is to find the\ntkConfig.sh during configuration, not to influence all linking.\nSimilarly, it might make sense to have --with-tcl-lib=/usr/xxx/tcl to\noverride the mechanism mentioned above that relies on the tcl\nexecutable.\n\nThese ideas would lead to the following configure arguments:\n\n --with-tcl=... specifies tcl excutable name (or path); used to\n find tcl libs which are searched for tclConfig.sh\n\n --with-tcl-lib=... specifies a list of tcl lib directories;\n used to find tclConfig.sh if tcl executable unavailable\n\n --with-tk-lib=... specifies a list of tk lib directories;\n used to find tkConfig.sh\n\nWith suitable defaults for these, the behavior you are proposing could\neasily be obtained, while still maintaining rather general\nconfiguration capability.\n\n 4. Added \"USE_TK\" to Makefile.global which is set if TK support is available\n (as determined by the existance of tkConfig.sh). USE_TK will only be set\n true if USE_TCL is true, and TK support is available. This will allow\n features/programs that only depend on TCL to compile and install even if\n TK support is missing.\n\nGood.\n\n 5. Modified the pgtclsh Makefile so that pgtclsh will compile and install even\n if TK support is missing. pgtksh will not be built unless TK support is\n available.\n\nGood.\n\nBe sure to modify the INSTALL document to reflect the arguments to\nconfigure.\n\nCheers,\nBrook\n",
"msg_date": "Thu, 15 Oct 1998 08:48:45 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] TCL/TK configuration clean-up patches"
},
{
"msg_contents": "Applied.\n\n> The attached patches clean-up the TCL/TK configuration as follows:\n> \n> 1. Removed unnecessary code relating to TCL/TK configuration from configure.\n> \n> 2. Change TCL/TK enabling to be dependant on the existance of an executable\n> tclsh (locatable via $PATH) and the existance of tclConfig.sh and\n> tkConfig.sh.\n> \n> 3. The directories that are searched for the *Config.sh file is determined by\n> the contents of $LIBRARY_DIRS (set by '--with-libs' or '--with-libraries')\n> and the output generated by executing 'echo \"puts $auto_path\" | tchsh\". \n> [Thanks Roland!]\n> \n> Note: If TK is installed in a different location the TCL, you must use the\n> --with-libs (or --with-libraries) option of configure to specify it's\n> location.\n> \n> 4. Added \"USE_TK\" to Makefile.global which is set if TK support is available\n> (as determined by the existance of tkConfig.sh). USE_TK will only be set\n> true if USE_TCL is true, and TK support is available. This will allow\n> features/programs that only depend on TCL to compile and install even if\n> TK support is missing.\n> \n> 5. Modified the pgtclsh Makefile so that pgtclsh will compile and install even\n> if TK support is missing. pgtksh will not be built unless TK support is\n> available.\n> \n> NOTE: The file, bin/pgtclsh/mkMakefile.tcltkdefs.sh.in, is no longer needed and\n> can be removed.\n> \n> NOTE: With these changes (and earlier ones), manually setting USE_TCL and \n> USE_TK in Makefile.global becomes a very bad idea and will cause the\n> build to fail.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Oct 1998 11:58:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TCL/TK configuration clean-up patches"
}
] |
[
{
"msg_contents": "\nIs now available, based on the source tree as of ~3am on Oct 15th. \n\nWith this BETA, the source code is frozen...there is *absolutely* no\nchanges to be made except that which is required to fix a bug that\npertains to the building and/or running on a particular platform, or to\nfix a crucial bug.\n\nIf any BETA can stand, on its own, for 1 week, without a change being\nrequired to the source code (docs don't count in this), then it will\nbecome the released version.\n\nAgain, unless it is a *crucial* fix, please withhold all patches until\nafter we get this released and out to the public...\n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Thu, 15 Oct 1998 04:39:07 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL v6.4 BETA2..."
},
{
"msg_contents": "Thus spake Marc G. Fournier\n> With this BETA, the source code is frozen...there is *absolutely* no\n> changes to be made except that which is required to fix a bug that\n> pertains to the building and/or running on a particular platform, or to\n> fix a crucial bug.\n\nWhat about the inet stuff. I'm just waiting for Paul's inet_cidr*\nfunctions to test and submit the final stuff. As it stand it will\ncrash the backend if you use it. I guess that makes it a bug fix,\nright?\n\nI'll submit the builtins.h changes in the meantime. Perhaps I can\ncobble up my own version of the inet_cidr* functions at least to\nbe able to test my functions.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 15 Oct 1998 07:38:04 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2..."
},
{
"msg_contents": "On Thu, 15 Oct 1998, D'Arcy J.M. Cain wrote:\n\n> Thus spake Marc G. Fournier\n> > With this BETA, the source code is frozen...there is *absolutely* no\n> > changes to be made except that which is required to fix a bug that\n> > pertains to the building and/or running on a particular platform, or to\n> > fix a crucial bug.\n> \n> What about the inet stuff. I'm just waiting for Paul's inet_cidr*\n> functions to test and submit the final stuff. As it stand it will\n> crash the backend if you use it. I guess that makes it a bug fix,\n> right?\n\n\tBut of course :) Anything that affects the backend, especially\nsomething that can relatively easily reproduce a crash, is a definite bug\nthat needs to be fixed...\n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Thu, 15 Oct 1998 10:34:27 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2..."
},
{
"msg_contents": "> Thus spake Marc G. Fournier\n> > With this BETA, the source code is frozen...there is *absolutely* no\n> > changes to be made except that which is required to fix a bug that\n> > pertains to the building and/or running on a particular platform, or to\n> > fix a crucial bug.\n> \n> What about the inet stuff. I'm just waiting for Paul's inet_cidr*\n> functions to test and submit the final stuff. As it stand it will\n> crash the backend if you use it. I guess that makes it a bug fix,\n> right?\n\nI like your thinking.\n \n> I'll submit the builtins.h changes in the meantime. Perhaps I can\n> cobble up my own version of the inet_cidr* functions at least to\n> be able to test my functions.\n\nJust wait and submit the whole thing. It is more likely to work in that\ncase.\n\nYes, I think that will have to be slipped in. Hope Marc agrees.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Oct 1998 12:06:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2..."
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > I'll submit the builtins.h changes in the meantime. Perhaps I can\n> > cobble up my own version of the inet_cidr* functions at least to\n> > be able to test my functions.\n> \n> Just wait and submit the whole thing. It is more likely to work in that\n> case.\n\nSure. I meant that I would cobble up a rough approximation of the\nfunctions locally just so I could test my stuff. I'm still waiting\nfor Paul to build the real ones.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 15 Oct 1998 12:28:20 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v6.4 BETA2..."
}
] |
[
{
"msg_contents": "Jan wrote:\n> If there is an ORDER BY clause, using an index scan is the\n> clever way if the indexqual dramatically reduces the the\n> amount of data selected and sorted. I think this is the\n> normal case\n\nyes\n \n> (who really selects nearly all rows from a 5M row\n> table?).\n\nData Warehouse apps\n\n> This will hurt if someone really selects most of the rows and the index\n> scan jumps over the disc. \n\nI think this is a non issue, since if a qual is not restrictive enough,\nthe optimizer should choose a seq scan anyway. Doesn' t it do this already ? \n\n> But here the programmer should use\n> an unqualified query to perform a seqscan and do the\n> qualification in the frontend application.\n\nI would reformulate this to:\nHere the backend should do a seq scan and use the qualification to eliminate \nnot wanted rows.\n\nResumee:\nYou have to look at this from the cost point of view. If there is an order by that can be \ndone with an index, this will make the index a little more preferrable than for the same \nquery without the order by, but it should not force the index.\nYou have to give the sort a cost, so that the index access can be compared to the\nseq scan and sort path.\n\nAndreas\n\n\n",
"msg_date": "Thu, 15 Oct 1998 10:56:50 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: order by and index path"
},
{
"msg_contents": "> Jan wrote:\n> > If there is an ORDER BY clause, using an index scan is the\n> > clever way if the indexqual dramatically reduces the the\n> > amount of data selected and sorted. I think this is the\n> > normal case\n> \n> yes\n> \n> > (who really selects nearly all rows from a 5M row\n> > table?).\n> \n> Data Warehouse apps\n> \n> > This will hurt if someone really selects most of the rows and the index\n> > scan jumps over the disc. \n> \n> I think this is a non issue, since if a qual is not restrictive enough,\n> the optimizer should choose a seq scan anyway. Doesn' t it do this already ? \n\nYes it does.\n\n> \n> > But here the programmer should use\n> > an unqualified query to perform a seqscan and do the\n> > qualification in the frontend application.\n> \n> I would reformulate this to:\n> Here the backend should do a seq scan and use the qualification to eliminate \n> not wanted rows.\n> \n> Resumee:\n> You have to look at this from the cost point of view. If there is an order by that can be \n> done with an index, this will make the index a little more preferrable than for the same \n> query without the order by, but it should not force the index.\n> You have to give the sort a cost, so that the index access can be compared to the\n> seq scan and sort path.\n\nThis cost is compared. The optimizer uses the min-max values for the\ncolumn stored in pg_statistic to see how much of the table is being\nrequested, and decides on an index or not.\n\nDoing the restriction on the fontend sounds kind of cheesy to me.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Oct 1998 12:01:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: order by and index path"
}
] |
[
{
"msg_contents": "Andread wrote:\n> > (who really selects nearly all rows from a 5M row\n> > table?).\n>\n> Data Warehouse apps\n>\n> > This will hurt if someone really selects most of the rows and the index\n> > scan jumps over the disc.\n>\n> I think this is a non issue, since if a qual is not restrictive enough,\n> the optimizer should choose a seq scan anyway. Doesn' t it do this already ?\n> [...]\n\n Right and wrong.\n\n Right - it is the optimizers job to decide if an index should\n be used or not. And the decision has to be made based on the\n cost.\n\n Wrong - PostgreSQL's query optimizer doesn't do it already.\n It assumes that a qualification is always restrictive enough\n and chooses an index scan any time if the qualification can\n be thrown into the indexqual.\n\n In the following I only discuss the situation where\n qualifications can be used in the indexqual.\n\n Calculating the cost of a query is easy. Have N tuples in P\n data-pages where the given qualification will match M.\n Assuming that the tuples are not in ascending order in the\n data pages, the cost fetching one tuple by its TID raises\n with P (more seeking necessary). Now you can calculate the\n cost of an index scan by C=M/N*P*F where F is some constant\n factor to make C comparable to a current seqscan cost value\n (I know, it must be smarter, but for this description a\n simple calculation is enough).\n\n The only problem is that the optimizer has absolutely no\n chance to estimate M (the mystic value as Bruce called it).\n In a given qualification\n\n WHERE key > 0 AND key <= 100\n\n it cannot know if this would result in 0 or 100% of the rows.\n To estimate that, it needs statistical information about the\n key ranges that are present. 
Assume there would be 11 keys\n remembered by the last vacuum run, that break up the whole\n present key range of 10000 tuples into 10 chunks and they\n read\n\n 5 40 70 90 500 600 1000 1100 1400 1500 2000\n\n where 5 is the lowest key at all, 40 is the key of tuple 1000\n (in key order), 70 is the key of tuple 2000 and so on. Now\n looking at the qualification and this key range information\n would tell, that the absolute limit of rows returned by an\n index scan would be 3999 (which still could have a key value\n of 100). But the qualification\n\n WHERE key >= 50\n\n could return at max 8999 tuples and\n\n WHERE key > 50 AND key < 70\n\n has a maximum of 998 result tuples. This would be the\n information required to make the right decision for the case\n where all rows selected are wanted.\n\n We do not have this statistical information. So the whole\n thing is at this time academic.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 15 Oct 1998 13:35:09 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: order by and index path"
},
{
"msg_contents": "> We do not have this statistical information. So the whole\n> thing is at this time academic.\n\nI recall that Commercial Ingres made the assumption that one row (or 1%\nof rows? My memory of Ingres is fading :) would be returned from a\nqualified query if no statistics were available to suggest otherwise.\n\nIt did collect statistics on data distribution to try to help make those\noptimizer choices.\n\nIt may be reasonable to assume that if there is an index, then using it\nwith any qualified query would be a win. Since the alternative is to\ndecide to _not_ use an index, a decision for which we have no support\nwith existing statistics.\n\n - Tom\n",
"msg_date": "Thu, 15 Oct 1998 14:28:36 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: order by and index path"
},
{
"msg_contents": ">\n> > We do not have this statistical information. So the whole\n> > thing is at this time academic.\n>\n> I recall that Commercial Ingres made the assumption that one row (or 1%\n> of rows? My memory of Ingres is fading :) would be returned from a\n> qualified query if no statistics were available to suggest otherwise.\n>\n> It did collect statistics on data distribution to try to help make those\n> optimizer choices.\n>\n> It may be reasonable to assume that if there is an index, then using it\n> with any qualified query would be a win. Since the alternative is to\n> decide to _not_ use an index, a decision for which we have no support\n> with existing statistics.\n\n It may be also reasonable to collect statistic information\n and use that to quantify the cost of an index scan.\n\n The vacuum cleaner scans all indices on a relation vacuum'd\n completely. And at that time it already knows the number of\n pages and tuples in the heap relation (has that in the\n vcrelstats).\n\n Based on this it could decide to take every n'th index tuple\n while scanning and drop them somewhere where other backends\n can find them. This would be the statistical information\n needed by the optimizer to estimate the real cost of an index\n scan. It is only of interest for big tables, where hopping\n from block to block will make an index scan a looser against\n a seqscan in a many row matching scan. So it's up to the\n optimizer do decide based on the # of pages if statistical\n information is really required for cost calculation.\n\n Having the final indexqual along with the statistical\n information it will be a little tricky to figure out how many\n rows it might return, but not impossible.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 15 Oct 1998 17:03:51 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Statistics on key distribution (was: Re: order by and index path)"
},
{
"msg_contents": "> could return at max 8999 tuples and\n> \n> WHERE key > 50 AND key < 70\n> \n> has a maximum of 998 result tuples. This would be the\n> information required to make the right decision for the case\n> where all rows selected are wanted.\n> \n> We do not have this statistical information. So the whole\n> thing is at this time academic.\n\nBut we do have statistical information in pg_statistic if you run vacuum\nanalyze.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Oct 1998 12:04:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: order by and index path"
},
{
"msg_contents": "> > We do not have this statistical information. So the whole\n> > thing is at this time academic.\n> \n> I recall that Commercial Ingres made the assumption that one row (or 1%\n> of rows? My memory of Ingres is fading :) would be returned from a\n> qualified query if no statistics were available to suggest otherwise.\n> \n> It did collect statistics on data distribution to try to help make those\n> optimizer choices.\n> \n> It may be reasonable to assume that if there is an index, then using it\n> with any qualified query would be a win. Since the alternative is to\n> decide to _not_ use an index, a decision for which we have no support\n> with existing statistics.\n\nFor =, the assumion is 1 row, for > the assumption is 1/3 of the table.\nWith pg_statistic, it uses that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Oct 1998 12:12:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: order by and index path"
},
{
"msg_contents": ">\n> > could return at max 8999 tuples and\n> >\n> > WHERE key > 50 AND key < 70\n> >\n> > has a maximum of 998 result tuples. This would be the\n> > information required to make the right decision for the case\n> > where all rows selected are wanted.\n> >\n> > We do not have this statistical information. So the whole\n> > thing is at this time academic.\n>\n> But we do have statistical information in pg_statistic if you run vacuum\n> analyze.\n\n Nice (forgot that - pardon), anyway only having lowest and\n highest key values isn't enough to make a useful estimation\n about how many rows an indexqual will return. If we change\n pg_statistic in a way that more keys can get stored per\n relation/attribute, then the optimizer would have a real\n chance on it.\n\n I have\n\n starelid\n staattnum\n staitupno\n staop\n stakey\n\n in mind, where staitupno tells the position of the key in a\n complete index scan. Then it becomes the place to fill in the\n key range information as described in my posting.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 15 Oct 1998 18:36:26 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: order by and index path"
}
] |
[
{
"msg_contents": "Hello!\n\n There are examples below and can anybody explain me - how to use indexes\n in PostgreSQL for best perfomance? Look here:\n\n create table aaa (num int2, name text);\n create index ax on aaa (num);\n\n explain select * from aaa where num = 5;\n Index Scan on aaa (cost=0.00 size=0 width=14)\n\n explain select * from aaa where num > 5;\n Seq Scan on aaa (cost=0.00 size=0 width=14)\n\n Why PostgreSQL in the first case uses index, but in the second - doesn't ?\n As I understand, there is no big difference between queries. Are there\n general recommendations on creating indexes?\n\n This questions because I'm relatively new to SQL and hope somebody can\n help me :) Thank you.\n\n---\nVladimir Litovka <[email protected]>\n\n",
"msg_date": "Thu, 15 Oct 1998 15:35:53 +0300 (EEST)",
"msg_from": "Vladimir Litovka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing perfomance using indexes"
},
{
"msg_contents": "Hello,\n\nWhat version of PostgreSQL you're talking about ?\nI also noticed such behaivour in 6.4 beta, try \nint4 instead of int2 and see what happens. I don't know the reason\nbut in my case it works. 6.3.2 uses indices in both cases !\n\n\tRegards,\n \tOleg\n\nOn Thu, 15 Oct 1998, Vladimir Litovka wrote:\n\n> Date: Thu, 15 Oct 1998 15:35:53 +0300 (EEST)\n> From: Vladimir Litovka <[email protected]>\n> Reply-To: [email protected]\n> To: PgSQL-sql <[email protected]>\n> Subject: [SQL] Optimizing perfomance using indexes\n> \n> Hello!\n> \n> There are examples below and can anybody explain me - how to use indexes\n> in PostgreSQL for best perfomance? Look here:\n> \n> create table aaa (num int2, name text);\n> create index ax on aaa (num);\n> \n> explain select * from aaa where num = 5;\n> Index Scan on aaa (cost=0.00 size=0 width=14)\n> \n> explain select * from aaa where num > 5;\n> Seq Scan on aaa (cost=0.00 size=0 width=14)\n> \n> Why PostgreSQL in the first case uses index, but in the second - doesn't ?\n> As I understand, there is no big difference between queries. Are there\n> general recommendations on creating indexes?\n> \n> This questions because I'm relatively new to SQL and hope somebody can\n> help me :) Thank you.\n> \n> ---\n> Vladimir Litovka <[email protected]>\n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Thu, 15 Oct 1998 19:09:27 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Optimizing perfomance using indexes"
},
{
"msg_contents": "> What version of PostgreSQL you're talking about ?\n> I also noticed such behaivour in 6.4 beta, try\n> int4 instead of int2 and see what happens. I don't know the reason\n> but in my case it works. 6.3.2 uses indices in both cases !\n> > There are examples below and can anybody explain me - how to use \n> > indexes in PostgreSQL for best perfomance? Look here:\n> > create table aaa (num int2, name text);\n> > create index ax on aaa (num);\n> > explain select * from aaa where num = 5;\n> > Index Scan on aaa (cost=0.00 size=0 width=14)\n> > explain select * from aaa where num > 5;\n> > Seq Scan on aaa (cost=0.00 size=0 width=14)\n> > Why PostgreSQL in the first case uses index, but in the second - \n> > doesn't ?\n\nFor Postgres (all versions), the \"5\" is read as an int4 in the scanner\n(before parsing). Your column is int2. In v6.3.2 and before, the _only_\nmechanism for implicit type conversion/coersion was to convert constants\nto strings and then convert the strings back to constants. No other\nsituation was handled, so implicit conversion between any non-constant\nwas not allowed.\n\nVladimir is probably running v6.3.2 or before?\n\nFor v6.4, the Postgres parser looks for _functions_ to convert types,\nfor constants and for every other situation. Also, there needs to be a\n\"promotion\" of types so that, for example, int4's are not forced to\nbecome int2's, with the risk of overflow (another drawback with the old\nscheme: \"where num < 100000\" would fail or overflow since the 100000 was\nforced to be an int2).\n\nSo with v6.4 your query\n select * from aaa where num = 5;\n\nbecomes\n select * from aaa where int4(num) = 5;\n\nwhich has a hard time using an int2 index. I plan on increasing support\nfor function calls and indices in v6.5. In the meantime, you can specify\nyour query as\n select * from aaa where num = '5';\n\nwhich will choose the type for the string constant from the other\nargument \"num\". 
Or you can be explicit:\n select * from aaa where num = int2 '5'; -- SQL92\n select * from aaa where num = '5'::int2; -- old Postgres\n\nThere is a chapter in the User's Guide (\"Type Conversion\") in the v6.4\ndocs which discusses this; if you want to look at the beta docs let me\nknow if it needs more info...\n\n - Tom\n",
"msg_date": "Fri, 16 Oct 1998 06:10:54 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Optimizing perfomance using indexes"
}
] |
[
{
"msg_contents": "The problem is probably that the functions incorrectly return text *\nbut use it like char *. text * is a varlena\n\ne.g. text *inet_netmask in inet.c\n\teighter must be char *inet_netmask or correctly use macros VARDATA and VARSIZE\n \nAndreas\n\n----------\nVon: \tD'Arcy J.M. Cain[SMTP:[email protected]]\nGesendet: \tDonnerstag, 15. Oktober 1998 13:54\nAn: \tMarc G. Fournier\nCc: \[email protected]\nBetreff: \tRe: [HACKERS] PostgreSQL v6.4 BETA2...\n\nThus spake Marc G. Fournier\n> With this BETA, the source code is frozen...there is *absolutely* no\n> changes to be made except that which is required to fix a bug that\n> pertains to the building and/or running on a particular platform, or to\n> fix a crucial bug.\n\nWhat about the inet stuff. I'm just waiting for Paul's inet_cidr*\nfunctions to test and submit the final stuff. As it stand it will\ncrash the backend if you use it. I guess that makes it a bug fix,\nright?\n\nI'll submit the builtins.h changes in the meantime. Perhaps I can\ncobble up my own version of the inet_cidr* functions at least to\nbe able to test my functions.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n\n\n",
"msg_date": "Thu, 15 Oct 1998 14:37:49 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] PostgreSQL v6.4 BETA2..."
},
{
"msg_contents": "Thus spake Andreas Zeugswetter\n> The problem is probably that the functions incorrectly return text *\n> but use it like char *. text * is a varlena\n\nYes, Bruce explained the change but I don't want to submit patches now\nuntil the other functions have been added and I can test the module.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 15 Oct 1998 11:10:23 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] PostgreSQL v6.4 BETA2..."
},
{
"msg_contents": "> The problem is probably that the functions incorrectly return text *\n> but use it like char *. text * is a varlena\n> \n> e.g. text *inet_netmask in inet.c\n> \teighter must be char *inet_netmask or correctly use macros VARDATA and VARSIZE\n\nNo, they are missing sections as we wait for additional inet functions.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Oct 1998 12:09:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] PostgreSQL v6.4 BETA2..."
}
] |
[
{
"msg_contents": "> Date: Thu, 15 Oct 1998 10:56:50 +0200\nAndreas Zeugswetter wrote:\n> \n> Resumee:\n> You have to look at this from the cost point of view. If there is an order by that can be \n> done with an index, this will make the index a little more preferrable than for the same \n> query without the order by, but it should not force the index.\n> You have to give the sort a cost, so that the index access can be compared to the\n> seq scan and sort path.\n\nBut if we finally get only 10 first rows, it may still be _much_ faster \nto do just the index scan, which, being the toplevel executor will get \nonly the needed rows.\n\nSo here we come back to the LIMIT clause (or something like it) because \nthe optimiser has no knowledge whatever about the use of DECLARE/FETCH \n(I assume).\n\n\n\n------------------\nHannu Krosing\n",
"msg_date": "Thu, 15 Oct 1998 17:08:12 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: order by and index path (was: What about LIMIT in SELECT ?)"
}
] |
[
{
"msg_contents": "LAtest cvs has a little bug in src/interfaces/ecpg/lib/Makefile.in\n\n $(LD) $(LDFLAGS_SL) -o $@ ecpglib.sho typename.sho.o\n\nmust be \n $(LD) $(LDFLAGS_SL) -o $@ ecpglib.sho.o typename.sho.o\n ^^\n\n\tRegards,\n\t\t\n\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 15 Oct 1998 19:05:31 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "small bug in src/interfaces/ecpg/lib/Makefile.in"
},
{
"msg_contents": "Applied.\n\n> LAtest cvs has a little bug in src/interfaces/ecpg/lib/Makefile.in\n> \n> $(LD) $(LDFLAGS_SL) -o $@ ecpglib.sho typename.sho.o\n> \n> must be \n> $(LD) $(LDFLAGS_SL) -o $@ ecpglib.sho.o typename.sho.o\n> ^^\n> \n> \tRegards,\n> \t\t\n> \tOleg\n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Oct 1998 12:15:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] small bug in src/interfaces/ecpg/lib/Makefile.in"
},
{
"msg_contents": "On Thu, Oct 15, 1998 at 07:05:31PM +0400, Oleg Bartunov wrote:\n> LAtest cvs has a little bug in src/interfaces/ecpg/lib/Makefile.in\n> \n> $(LD) $(LDFLAGS_SL) -o $@ ecpglib.sho typename.sho.o\n> \n> must be \n\nHere's a patch. It also includes the latest parser changes.\n\nMichael\n\ndiff -ruN ecpg/ChangeLog ecpg.mm/ChangeLog\n--- ecpg/ChangeLog\tSat Oct 3 07:47:10 1998\n+++ ecpg.mm/ChangeLog\tThu Oct 15 19:41:20 1998\n@@ -348,4 +348,11 @@\n \t- Synced preproc.y with gram.y yet again.\n \t- Set version to 2.4.3\n \n+Mon Okt 12 12:36:04 CEST 1998\n \n+\t- Synced preproc.y with gram.y yet again.\n+\n+Thu Okt 15 10:05:04 CEST 1998\n+\n+\t- Synced preproc.y with gram.y yet again.\n+ - Set version to 2.4.4\ndiff -ruN ecpg/lib/Makefile ecpg.mm/lib/Makefile\n--- ecpg/lib/Makefile\tThu Oct 15 19:05:00 1998\n+++ ecpg.mm/lib/Makefile\tThu Oct 15 19:22:07 1998\n@@ -75,8 +75,8 @@\n \n all: lib$(NAME).a $(shlib)\n \n-$(shlib): ecpglib.sho.o typename.sho.o\n-\t$(LD) $(LDFLAGS_SL) -o $@ ecpglib.sho typename.sho.o\n+$(shlib): ecpglib.sho typename.sho\n+\t$(LD) $(LDFLAGS_SL) -o $@ ecpglib.sho typename.sho\n \n clean:\n \trm -f *.o *.sho *.a core a.out *~ $(shlib) lib$(NAME)$(DLSUFFIX)\n@@ -105,7 +105,7 @@\n typename.o : typename.c ../include/ecpgtype.h\n \t$(CC) $(CFLAGS) -I../include $(PQ_INCLUDE) -c $< -o $@\n \n-ecpglib.sho.o : ecpglib.c ../include/ecpglib.h ../include/ecpgtype.h\n+ecpglib.sho : ecpglib.c ../include/ecpglib.h ../include/ecpgtype.h\n \t$(CC) $(CFLAGS) $(CFLAGS_SL) -I../include $(PQ_INCLUDE) -c $< -o $@\n-typename.sho.o : typename.c ../include/ecpgtype.h\n+typename.sho : typename.c ../include/ecpgtype.h\n \t$(CC) $(CFLAGS) $(CFLAGS_SL) -I../include $(PQ_INCLUDE) -c $< -o $@\ndiff -ruN ecpg/lib/Makefile.in ecpg.mm/lib/Makefile.in\n--- ecpg/lib/Makefile.in\tWed Oct 14 20:14:53 1998\n+++ ecpg.mm/lib/Makefile.in\tThu Oct 15 19:22:16 1998\n@@ -74,8 +74,8 @@\n \n all: lib$(NAME).a $(shlib)\n \n-$(shlib): ecpglib.sho.o typename.sho.o\n-\t$(LD) 
$(LDFLAGS_SL) -o $@ ecpglib.sho typename.sho.o\n+$(shlib): ecpglib.sho typename.sho\n+\t$(LD) $(LDFLAGS_SL) -o $@ ecpglib.sho typename.sho\n \n clean:\n \trm -f *.o *.sho *.a core a.out *~ $(shlib) lib$(NAME)$(DLSUFFIX)\n@@ -104,7 +104,7 @@\n typename.o : typename.c ../include/ecpgtype.h\n \t$(CC) $(CFLAGS) -I../include $(PQ_INCLUDE) -c $< -o $@\n \n-ecpglib.sho.o : ecpglib.c ../include/ecpglib.h ../include/ecpgtype.h\n+ecpglib.sho : ecpglib.c ../include/ecpglib.h ../include/ecpgtype.h\n \t$(CC) $(CFLAGS) $(CFLAGS_SL) -I../include $(PQ_INCLUDE) -c $< -o $@\n-typename.sho.o : typename.c ../include/ecpgtype.h\n+typename.sho : typename.c ../include/ecpgtype.h\n \t$(CC) $(CFLAGS) $(CFLAGS_SL) -I../include $(PQ_INCLUDE) -c $< -o $@\ndiff -ruN ecpg/preproc/Makefile ecpg.mm/preproc/Makefile\n--- ecpg/preproc/Makefile\tSat Oct 3 07:47:10 1998\n+++ ecpg.mm/preproc/Makefile\tThu Oct 15 19:41:10 1998\n@@ -3,7 +3,7 @@\n \n MAJOR_VERSION=2\n MINOR_VERSION=4\n-PATCHLEVEL=3\n+PATCHLEVEL=4\n \n CFLAGS+=-I../include -DMAJOR_VERSION=$(MAJOR_VERSION) \\\n \t-DMINOR_VERSION=$(MINOR_VERSION) -DPATCHLEVEL=$(PATCHLEVEL) \\\ndiff -ruN ecpg/preproc/preproc.y ecpg.mm/preproc/preproc.y\n--- ecpg/preproc/preproc.y\tSat Oct 3 07:47:14 1998\n+++ ecpg.mm/preproc/preproc.y\tThu Oct 15 10:11:17 1998\n@@ -2153,6 +2153,10 @@\n \t\t\t\t{\n \t\t\t\t\t$$ = cat2_str(make1_str(\"unlisten\"), $2);\n }\n+\t\t| UNLISTEN '*'\n+\t\t\t\t{\n+\t\t\t\t\t$$ = make1_str(\"unlisten *\");\n+ }\n ;\n \n /*****************************************************************************\n@@ -3796,9 +3800,9 @@\n \t\t\t\t}\n \t\t;\n \n-ParamNo: PARAM\n+ParamNo: PARAM opt_indirection\n \t\t\t\t{\n-\t\t\t\t\t$$ = make_name();\n+\t\t\t\t\t$$ = cat2_str(make_name(), $2);\n \t\t\t\t}\n \t\t;\n \n@@ -3896,6 +3900,7 @@\n \t\t| STDIN { $$ = make1_str(\"stdin\"); }\n \t\t| STDOUT { $$ = make1_str(\"stdout\"); }\n \t\t| TIME\t\t\t\t{ $$ = make1_str(\"time\"); }\n+\t\t| TIMESTAMP\t\t\t{ $$ = make1_str(\"timestamp\"); }\n \t\t| 
TIMEZONE_HOUR { $$ = make1_str(\"timezone_hour\"); }\n | TIMEZONE_MINUTE { $$ = make1_str(\"timezone_minute\"); }\n \t\t| TRIGGER\t\t\t{ $$ = make1_str(\"trigger\"); }\n\n-- \nDr. Michael Meskes | Th.-Heuss-Str. 61, D-41812 Erkelenz | Go SF49ers!\nSenior-Consultant | business: [email protected] | Go Rhein Fire!\nMummert+Partner | private: [email protected] | Use Debian\nUnternehmensberatung AG | [email protected] | GNU/Linux!\n",
"msg_date": "Thu, 15 Oct 1998 19:42:21 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] small bug in src/interfaces/ecpg/lib/Makefile.in"
},
{
"msg_contents": "Applied.\n\n\n> On Thu, Oct 15, 1998 at 07:05:31PM +0400, Oleg Bartunov wrote:\n> > LAtest cvs has a little bug in src/interfaces/ecpg/lib/Makefile.in\n> > \n> > $(LD) $(LDFLAGS_SL) -o $@ ecpglib.sho typename.sho.o\n> > \n> > must be \n> \n> Here's a patch. It also includes the latest parser changes.\n> \n> Michael\n> \n> diff -ruN ecpg/ChangeLog ecpg.mm/ChangeLog\n> --- ecpg/ChangeLog\tSat Oct 3 07:47:10 1998\n> +++ ecpg.mm/ChangeLog\tThu Oct 15 19:41:20 1998\n> @@ -348,4 +348,11 @@\n> \t- Synced preproc.y with gram.y yet again.\n> \t- Set version to 2.4.3\n> \n> +Mon Okt 12 12:36:04 CEST 1998\n> \n> +\t- Synced preproc.y with gram.y yet again.\n> +\n> +Thu Okt 15 10:05:04 CEST 1998\n> +\n> +\t- Synced preproc.y with gram.y yet again.\n> + - Set version to 2.4.4\n> diff -ruN ecpg/lib/Makefile ecpg.mm/lib/Makefile\n> --- ecpg/lib/Makefile\tThu Oct 15 19:05:00 1998\n> +++ ecpg.mm/lib/Makefile\tThu Oct 15 19:22:07 1998\n> @@ -75,8 +75,8 @@\n> \n> all: lib$(NAME).a $(shlib)\n> \n> -$(shlib): ecpglib.sho.o typename.sho.o\n> -\t$(LD) $(LDFLAGS_SL) -o $@ ecpglib.sho typename.sho.o\n> +$(shlib): ecpglib.sho typename.sho\n> +\t$(LD) $(LDFLAGS_SL) -o $@ ecpglib.sho typename.sho\n> \n> clean:\n> \trm -f *.o *.sho *.a core a.out *~ $(shlib) lib$(NAME)$(DLSUFFIX)\n> @@ -105,7 +105,7 @@\n> typename.o : typename.c ../include/ecpgtype.h\n> \t$(CC) $(CFLAGS) -I../include $(PQ_INCLUDE) -c $< -o $@\n> \n> -ecpglib.sho.o : ecpglib.c ../include/ecpglib.h ../include/ecpgtype.h\n> +ecpglib.sho : ecpglib.c ../include/ecpglib.h ../include/ecpgtype.h\n> \t$(CC) $(CFLAGS) $(CFLAGS_SL) -I../include $(PQ_INCLUDE) -c $< -o $@\n> -typename.sho.o : typename.c ../include/ecpgtype.h\n> +typename.sho : typename.c ../include/ecpgtype.h\n> \t$(CC) $(CFLAGS) $(CFLAGS_SL) -I../include $(PQ_INCLUDE) -c $< -o $@\n> diff -ruN ecpg/lib/Makefile.in ecpg.mm/lib/Makefile.in\n> --- ecpg/lib/Makefile.in\tWed Oct 14 20:14:53 1998\n> +++ ecpg.mm/lib/Makefile.in\tThu Oct 15 19:22:16 1998\n> @@ 
-74,8 +74,8 @@\n> \n> all: lib$(NAME).a $(shlib)\n> \n> -$(shlib): ecpglib.sho.o typename.sho.o\n> -\t$(LD) $(LDFLAGS_SL) -o $@ ecpglib.sho typename.sho.o\n> +$(shlib): ecpglib.sho typename.sho\n> +\t$(LD) $(LDFLAGS_SL) -o $@ ecpglib.sho typename.sho\n> \n> clean:\n> \trm -f *.o *.sho *.a core a.out *~ $(shlib) lib$(NAME)$(DLSUFFIX)\n> @@ -104,7 +104,7 @@\n> typename.o : typename.c ../include/ecpgtype.h\n> \t$(CC) $(CFLAGS) -I../include $(PQ_INCLUDE) -c $< -o $@\n> \n> -ecpglib.sho.o : ecpglib.c ../include/ecpglib.h ../include/ecpgtype.h\n> +ecpglib.sho : ecpglib.c ../include/ecpglib.h ../include/ecpgtype.h\n> \t$(CC) $(CFLAGS) $(CFLAGS_SL) -I../include $(PQ_INCLUDE) -c $< -o $@\n> -typename.sho.o : typename.c ../include/ecpgtype.h\n> +typename.sho : typename.c ../include/ecpgtype.h\n> \t$(CC) $(CFLAGS) $(CFLAGS_SL) -I../include $(PQ_INCLUDE) -c $< -o $@\n> diff -ruN ecpg/preproc/Makefile ecpg.mm/preproc/Makefile\n> --- ecpg/preproc/Makefile\tSat Oct 3 07:47:10 1998\n> +++ ecpg.mm/preproc/Makefile\tThu Oct 15 19:41:10 1998\n> @@ -3,7 +3,7 @@\n> \n> MAJOR_VERSION=2\n> MINOR_VERSION=4\n> -PATCHLEVEL=3\n> +PATCHLEVEL=4\n> \n> CFLAGS+=-I../include -DMAJOR_VERSION=$(MAJOR_VERSION) \\\n> \t-DMINOR_VERSION=$(MINOR_VERSION) -DPATCHLEVEL=$(PATCHLEVEL) \\\n> diff -ruN ecpg/preproc/preproc.y ecpg.mm/preproc/preproc.y\n> --- ecpg/preproc/preproc.y\tSat Oct 3 07:47:14 1998\n> +++ ecpg.mm/preproc/preproc.y\tThu Oct 15 10:11:17 1998\n> @@ -2153,6 +2153,10 @@\n> \t\t\t\t{\n> \t\t\t\t\t$$ = cat2_str(make1_str(\"unlisten\"), $2);\n> }\n> +\t\t| UNLISTEN '*'\n> +\t\t\t\t{\n> +\t\t\t\t\t$$ = make1_str(\"unlisten *\");\n> + }\n> ;\n> \n> /*****************************************************************************\n> @@ -3796,9 +3800,9 @@\n> \t\t\t\t}\n> \t\t;\n> \n> -ParamNo: PARAM\n> +ParamNo: PARAM opt_indirection\n> \t\t\t\t{\n> -\t\t\t\t\t$$ = make_name();\n> +\t\t\t\t\t$$ = cat2_str(make_name(), $2);\n> \t\t\t\t}\n> \t\t;\n> \n> @@ -3896,6 +3900,7 @@\n> \t\t| STDIN { $$ = 
make1_str(\"stdin\"); }\n> \t\t| STDOUT { $$ = make1_str(\"stdout\"); }\n> \t\t| TIME\t\t\t\t{ $$ = make1_str(\"time\"); }\n> +\t\t| TIMESTAMP\t\t\t{ $$ = make1_str(\"timestamp\"); }\n> \t\t| TIMEZONE_HOUR { $$ = make1_str(\"timezone_hour\"); }\n> | TIMEZONE_MINUTE { $$ = make1_str(\"timezone_minute\"); }\n> \t\t| TRIGGER\t\t\t{ $$ = make1_str(\"trigger\"); }\n> \n> -- \n> Dr. Michael Meskes | Th.-Heuss-Str. 61, D-41812 Erkelenz | Go SF49ers!\n> Senior-Consultant | business: [email protected] | Go Rhein Fire!\n> Mummert+Partner | private: [email protected] | Use Debian\n> Unternehmensberatung AG | [email protected] | GNU/Linux!\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Oct 1998 00:40:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] small bug in src/interfaces/ecpg/lib/Makefile.in"
}
] |
[
{
"msg_contents": "Jan Wieck wrote:\n\n> And I got the time to hack around about this.\n>\n <...\n a lovely explanation of a patch to achiev 60X speedup \n for a typical web query\n ...>\n\n> The speedup for the cursor/fetch scenario is so impressive\n> that I'll create a post 6.4 patch. I don't want it in 6.4\n> because there is absolutely no query in the whole regression\n> test, where it suppresses the sort node.\n\nGood, then it works as expected ;)\n\nMore seriously, it is not within powers of current regression test \nframework to test speed improvements (only the case where \nperformance-wise bad implementation will actually crash the backend, \nas in the cnfify problem, but AFAIK we dont test for those now)\n\n> So we have absolutely no check that it doesn't break anything.\n\nIf it did pass the regression, then IMHO it did not break anything.\n\nI would vote for putting it in (maybe with a \n'set fix_optimiser_stupidity on' safeguard to enable it). I see no \nreason to postpone it to 6.4.1 and force almost everybody to first \npatch their copy and then upgrade very soon.\n\nI would even go far enough to call it a bugfix, as it does not really \nintroduce any new functionality only fixes some existing functionality \nso that much bigger databases can be actually used.\n\nI would compare it in this sense to finding the places where \nusername/password get truncated below their actual values in pg_passwd\n;)\n\n---------------\n Hannu Krosing\n",
"msg_date": "Thu, 15 Oct 1998 19:08:23 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> I would even go far enough to call it a bugfix, as it does not really \n> introduce any new functionality only fixes some existing functionality \n> so that much bigger databases can be actually used.\n> \n> I would compare it in this sense to finding the places where \n> username/password get truncated below their actual values in pg_passwd\n> ;)\n\nWe just can't test is on the wide variation of people's queries, though\npassing the regression test is a good indication it is OK.\n\nHowever, we are very close to release. Yes, I know it is a pain to\nwait, but we are not even done discussion all the options yet, and I\nstill have the cnfify fix to look at.\n\nI am sure we will have post 6.4 releases, just like we have everyone\nruning 6.3.2 rather than 6.3. There will be other now-undiscovered\nfixes in post 6.4 cleanup releases, and we will hopefully have a _full_\nsolution to the problem at that point.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Oct 1998 12:42:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "Hannu Krosing wrote:\n\n> Jan Wieck wrote:\n> > The speedup for the cursor/fetch scenario is so impressive\n> > that I'll create a post 6.4 patch. I don't want it in 6.4\n> > because there is absolutely no query in the whole regression\n> > test, where it suppresses the sort node.\n>\n> Good, then it works as expected ;)\n>\n> More seriously, it is not within powers of current regression test\n> framework to test speed improvements (only the case where\n> performance-wise bad implementation will actually crash the backend,\n> as in the cnfify problem, but AFAIK we dont test for those now)\n>\n> > So we have absolutely no check that it doesn't break anything.\n>\n> If it did pass the regression, then IMHO it did not break anything.\n\n Thats the point. The check if the sort node is required\n returns TRUE for ALL queries of the regression. So the\n behaviour when it returns FALSE is absolutely not tested.\n\n>\n> I would vote for putting it in (maybe with a\n> 'set fix_optimiser_stupidity on' safeguard to enable it). I see no\n> reason to postpone it to 6.4.1 and force almost everybody to first\n> patch their copy and then upgrade very soon.\n>\n> I would even go far enough to call it a bugfix, as it does not really\n> introduce any new functionality only fixes some existing functionality\n> so that much bigger databases can be actually used.\n\n I can't call it a bugfix because it is only a performance win\n in some situations. And I feel the risk is too high to put\n untested code into the backend at BETA2 time. The max we\n should do is to take this one and the LIMIT thing (maybe\n implemented as I suggested lately), and put out a Web-\n Performance-Release at the same time we release 6.4.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 15 Oct 1998 19:01:33 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> Hannu Krosing wrote:\n> \n> > Jan Wieck wrote:\n> > > The speedup for the cursor/fetch scenario is so impressive\n> > > that I'll create a post 6.4 patch. I don't want it in 6.4\n> > > because there is absolutely no query in the whole regression\n> > > test, where it suppresses the sort node.\n> >\n> > Good, then it works as expected ;)\n> >\n> > More seriously, it is not within powers of current regression test\n> > framework to test speed improvements (only the case where\n> > performance-wise bad implementation will actually crash the backend,\n> > as in the cnfify problem, but AFAIK we dont test for those now)\n> >\n> > > So we have absolutely no check that it doesn't break anything.\n> >\n> > If it did pass the regression, then IMHO it did not break anything.\n> \n> Thats the point. The check if the sort node is required\n> returns TRUE for ALL queries of the regression. So the\n> behaviour when it returns FALSE is absolutely not tested.\n\nThe only way to find out is to make a new test, maybe by comparing\n\nselect * from t where key > 1 order by key;\n\nwhere sort node can be dropped\n\nand \n\nselect * from t where (key+1) > 2 order by key;\n\nwhere it probably can't (I guess the optimiser is currently not smart\nenough)\n \n> I can't call it a bugfix because it is only a performance win\n> in some situations.\n\nIn the extreme case the situation can be exhaustion of swap and disk \nspace resulting in a frozen computer, just trying to get 10 first rows \nfrom a table. Its not exactly a bug, but it's not the expected \nbehaviour either.\n\n> And I feel the risk is too high to put\n> untested code into the backend at BETA2 time. 
The max we\n> should do is to take this one and the LIMIT thing (maybe\n> implemented as I suggested lately), and put out a Web-\n> Performance-Release at the same time we release 6.4.\n\nOr perhaps have the patches in /contrib in 6.4 distribution \n(preferably with an option to configure to apply them ;)\n\n-----------------\nHannu\n",
"msg_date": "Thu, 15 Oct 1998 21:13:08 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
},
{
"msg_contents": "> More seriously, it is not within powers of current regression test\n> framework to test speed improvements (only the case where\n> performance-wise bad implementation will actually crash the backend,\n> as in the cnfify problem, but AFAIK we dont test for those now)\n\nActually, I keep informal track of run-times for the entire regression\ntest, which gives me an indication of how things are going. For much of\nthe v6.4 development, I was seeing runtimes of around 2:30 on my system\n(a bit less, a bit more depending on the phase of the moon).\n\nThe runtime is currently up at 3:48 (3:40 with egcs-1.1b and\n-mpentiumpro rather than the usual gcc-2.7.1 and -m486). I am hoping\nthat most of that recent increase in runtime is from the recent\nadditions of Jan's rules and embedded programming language tests.\n\nAlthough the times aren't directly comparable with older releases (e.g.\nwe used to have char2,4,8,16 tests and we now have Jan's new tests)\nthere has been a distinct downward trend in runtimes.\n\nBut you're correct in that these timing tests are fairly insensitive to\nmajor improvements in only one query scenerio, since that makes a\nrelatively small change in the total runtime.\n\n - Tom\n",
"msg_date": "Fri, 16 Oct 1998 06:32:40 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What about LIMIT in SELECT ?"
}
] |
[
{
"msg_contents": "\n> > (who really selects nearly all rows from a 5M row\n> > table?).\n\nI have to comment here from a practical standpoint. My company does the above\ndozens of times per day using Oracle 7 under Solaris. This is a *VERY*\ncommon operation in data processing organizations (like mine) running large\nbatch jobs in tight time windows. At least under Oracle 7 (not sure about\nProgreSQL, still running tests), it is impractical to sort an extraction\nusing ORDER BY, but *MUCH* cheaper to index the fields wanted and then\nartificially constrain the query\nto convince the oprimizer to use the index for extraction. You can\nget the whole dataset in any order you want with few resources. Again, we\ndo it all the time.\n\nMarty\n",
"msg_date": "Thu, 15 Oct 1998 10:32:52 -0600",
"msg_from": "Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: order by and index path"
}
] |
[
{
"msg_contents": "I went to test my functions but got the following.\n\ndarcy=> select '198.1.2.3/8'::inet;\nERROR: type name lookup of inet failed\n\nWas the type backed out while waiting for completion? Any chance of\ngetting it put back so I can make the tests?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 15 Oct 1998 12:58:37 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Did the inet type get backed out?"
},
{
"msg_contents": "> I went to test my functions but got the following.\n> \n> darcy=> select '198.1.2.3/8'::inet;\n> ERROR: type name lookup of inet failed\n> \n> Was the type backed out while waiting for completion? Any chance of\n> getting it put back so I can make the tests?\n\nIt's in there:\n\n\ttest=> create table testv (x inet);\n\nNot sure why your test doesn't work. I think there needs to be a\nfunction named inet().\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Oct 1998 13:51:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Did the inet type get backed out?"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > darcy=> select '198.1.2.3/8'::inet;\n> > ERROR: type name lookup of inet failed\n> > \n> > Was the type backed out while waiting for completion? Any chance of\n> > getting it put back so I can make the tests?\n> \n> It's in there:\n> \n> \ttest=> create table testv (x inet);\n> \n> Not sure why your test doesn't work. I think there needs to be a\n> function named inet().\n\nBut it worked before. In fact it still works on another system with\nan earlier compile.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 15 Oct 1998 17:15:27 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Did the inet type get backed out?"
},
{
"msg_contents": "> Thus spake Bruce Momjian\n> > > darcy=> select '198.1.2.3/8'::inet;\n> > > ERROR: type name lookup of inet failed\n> > > \n> > > Was the type backed out while waiting for completion? Any chance of\n> > > getting it put back so I can make the tests?\n> > \n> > It's in there:\n> > \n> > \ttest=> create table testv (x inet);\n> > \n> > Not sure why your test doesn't work. I think there needs to be a\n> > function named inet().\n> \n> But it worked before. In fact it still works on another system with\n> an earlier compile.\n\nThat is strange. I haven't done anything in a long while that would\naffect this. I applied your patch to add the functions. That is the\nonly thing I can think of.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Oct 1998 02:17:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Did the inet type get backed out?"
},
{
"msg_contents": "[email protected] (D'Arcy J.M. Cain) writes:\n\n> But it worked before. In fact it still works on another system with\n> an earlier compile.\n\n\"Works\" for me, using a cvs update from yesterday morning (the morning\nafter the BETA 2 freeze), modulo the fact that someone committed\nchanges to #ifdef out (\"#ifdef BAD\") all the calls to the actual inet\nparser routines, effectively causing all data to be rejected. Since\nwe had an implementation that actually worked, and the changes that we\nwanted to make were compatible with currently stored data, it would\nhave been smarter to leave it working until the changes were ready to\nbe committed. It's better to be able to keep testing something that\ndoesn't have all the wanted functionality than to disable it until an\nunknown time in the future! :-)\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "16 Oct 1998 08:21:59 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Did the inet type get backed out?"
},
{
"msg_contents": "Thus spake Tom Ivar Helbekkmo\n> [email protected] (D'Arcy J.M. Cain) writes:\n> > But it worked before. In fact it still works on another system with\n> > an earlier compile.\n> \n> \"Works\" for me, using a cvs update from yesterday morning (the morning\n> after the BETA 2 freeze), modulo the fact that someone committed\n> changes to #ifdef out (\"#ifdef BAD\") all the calls to the actual inet\n> parser routines, effectively causing all data to be rejected. Since\n\nThat's odd. I know that Bruce #ifdef'd out the core of the _new_\nfunctions I sent in but I didn't realize that he took the existing\nones out too.\n\n> we had an implementation that actually worked, and the changes that we\n> wanted to make were compatible with currently stored data, it would\n> have been smarter to leave it working until the changes were ready to\n> be committed. It's better to be able to keep testing something that\n> doesn't have all the wanted functionality than to disable it until an\n> unknown time in the future! :-)\n\nYes, I agree. Bruce, can we put the inet_in and inet_out functions\nback the way they were?\n\nHowever, I have put all the code back in locally for testing so that\nisn't why mine isn't working. I'll try with today's sup.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 16 Oct 1998 08:11:53 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Did the inet type get backed out?"
},
{
"msg_contents": "> Thus spake Tom Ivar Helbekkmo\n> > [email protected] (D'Arcy J.M. Cain) writes:\n> > > But it worked before. In fact it still works on another system with\n> > > an earlier compile.\n> > \n> > \"Works\" for me, using a cvs update from yesterday morning (the morning\n> > after the BETA 2 freeze), modulo the fact that someone committed\n> > changes to #ifdef out (\"#ifdef BAD\") all the calls to the actual inet\n> > parser routines, effectively causing all data to be rejected. Since\n> \n> That's odd. I know that Bruce #ifdef'd out the core of the _new_\n> functions I sent in but I didn't realize that he took the existing\n> ones out too.\n> \n> > we had an implementation that actually worked, and the changes that we\n> > wanted to make were compatible with currently stored data, it would\n> > have been smarter to leave it working until the changes were ready to\n> > be committed. It's better to be able to keep testing something that\n> > doesn't have all the wanted functionality than to disable it until an\n> > unknown time in the future! :-)\n> \n> Yes, I agree. Bruce, can we put the inet_in and inet_out functions\n> back the way they were?\n> \n> However, I have put all the code back in locally for testing so that\n> isn't why mine isn't working. I'll try with today's sup.\n\nI just ifdef'ed out the calls to the non-existant functions. That is\nall.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Oct 1998 11:21:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Did the inet type get backed out?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> I just ifdef'ed out the calls to the non-existant functions. That is\n> all.\n\nYou probably meant to. What you did was to #ifdef out all the calls\nto the missing inet_cidr_*() functions in the code that D'Arcy added,\n_and_ the ones to the existing inet_net_*() ones that are needed for\nthe code to work at all. Easy mistake to make when things are named\nso similarly.\n\nIf you remove the erroneous #ifdef BAD stuff, and apply the following\npatch to the (current) inet_net_pton.c, we'll have a working INET type\nagain, only missing the improvements that D'Arcy and Paul cooperated\nto hash out.\n\nOh, and D'Arcy: about documentation: should you and I maybe bounce a\nfile of SGML back and forth a couple of times, getting the type and\nfunctions properly described? I can start it off if you like, but I'm\nnot sure I'll be able to find time to do it until monday...\n\n-tih\n\n*** inet_net_pton.c.old\tFri Oct 16 19:44:25 1998\n--- inet_net_pton.c.new\tFri Oct 16 19:45:40 1998\n***************\n*** 100,133 ****\n \n \tch = *src++;\n \tif (ch == '0' && (src[0] == 'x' || src[0] == 'X')\n! \t\t&& isascii(src[1]) && isxdigit(src[1]))\n! \t{\n \t\t/* Hexadecimal: Eat nybble string. */\n \t\tif (size <= 0)\n \t\t\tgoto emsgsize;\n- \t\ttmp = 0;\n \t\tdirty = 0;\n! \t\tsrc++;\t\t\t\t\t/* skip x or X. */\n! \t\twhile ((ch = *src++) != '\\0' &&\n! \t\t\t isascii(ch) && isxdigit(ch))\n! \t\t{\n \t\t\tif (isupper(ch))\n \t\t\t\tch = tolower(ch);\n \t\t\tn = strchr(xdigits, ch) - xdigits;\n \t\t\tassert(n >= 0 && n <= 15);\n! \t\t\ttmp = (tmp << 4) | n;\n \t\t\tif (++dirty == 2) {\n \t\t\t\tif (size-- <= 0)\n \t\t\t\t\tgoto emsgsize;\n \t\t\t\t*dst++ = (u_char) tmp;\n! \t\t\t\ttmp = 0, dirty = 0;\n \t\t\t}\n \t\t}\n! \t\tif (dirty) {\n \t\t\tif (size-- <= 0)\n \t\t\t\tgoto emsgsize;\n! \t\t\ttmp <<= 4;\n! 
\t\t\t*dst++ = (u_char) tmp;\n \t\t}\n \t}\n \telse if (isascii(ch) && isdigit(ch))\n--- 100,131 ----\n \n \tch = *src++;\n \tif (ch == '0' && (src[0] == 'x' || src[0] == 'X')\n! \t && isascii(src[1]) && isxdigit(src[1])) {\n \t\t/* Hexadecimal: Eat nybble string. */\n \t\tif (size <= 0)\n \t\t\tgoto emsgsize;\n \t\tdirty = 0;\n! \t\tsrc++;\t/* skip x or X. */\n! \t\twhile ((ch = *src++) != '\\0' && isascii(ch) && isxdigit(ch)) {\n \t\t\tif (isupper(ch))\n \t\t\t\tch = tolower(ch);\n \t\t\tn = strchr(xdigits, ch) - xdigits;\n \t\t\tassert(n >= 0 && n <= 15);\n! \t\t\tif (dirty == 0)\n! \t\t\t\ttmp = n;\n! \t\t\telse\n! \t\t\t\ttmp = (tmp << 4) | n;\n \t\t\tif (++dirty == 2) {\n \t\t\t\tif (size-- <= 0)\n \t\t\t\t\tgoto emsgsize;\n \t\t\t\t*dst++ = (u_char) tmp;\n! \t\t\t\tdirty = 0;\n \t\t\t}\n \t\t}\n! \t\tif (dirty) { /* Odd trailing nybble? */\n \t\t\tif (size-- <= 0)\n \t\t\t\tgoto emsgsize;\n! \t\t\t*dst++ = (u_char) (tmp << 4);\n \t\t}\n \t}\n \telse if (isascii(ch) && isdigit(ch))\n\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "16 Oct 1998 19:51:45 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Did the inet type get backed out?"
},
{
"msg_contents": "Thus spake Tom Ivar Helbekkmo\n> Bruce Momjian <[email protected]> writes:\n> > I just ifdef'ed out the calls to the non-existant functions. That is\n> > all.\n> \n> You probably meant to. What you did was to #ifdef out all the calls\n> to the missing inet_cidr_*() functions in the code that D'Arcy added,\n> _and_ the ones to the existing inet_net_*() ones that are needed for\n> the code to work at all. Easy mistake to make when things are named\n> so similarly.\n\nAh. I thought I had left the inet_in and inet_out alone before sending\nin the patches but I wasn't sure and I have made more drastic changes\nlocally since then so I couldn't tell for sure.\n\nHowever, I have removed the comments but it still thinks that there is\nno inet type at all.\n\n> If you remove the erroneous #ifdef BAD stuff, and apply the following\n> patch to the (current) inet_net_pton.c, we'll have a working INET type\n> again, only missing the improvements that D'Arcy and Paul cooperated\n> to hash out.\n\nWhile I'm posting anyway, Paul; do you have an ETA yet?\n\n> Oh, and D'Arcy: about documentation: should you and I maybe bounce a\n> file of SGML back and forth a couple of times, getting the type and\n> functions properly described? I can start it off if you like, but I'm\n> not sure I'll be able to find time to do it until monday...\n\nSure. Is there an existing file for the existing inet type that I can\nstart working on?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 16 Oct 1998 18:44:53 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Did the inet type get backed out?"
},
{
"msg_contents": "Applied.\n\nBAD defines removed for existing functions. Oops.\n\n> Bruce Momjian <[email protected]> writes:\n> \n> > I just ifdef'ed out the calls to the non-existant functions. That is\n> > all.\n> \n> You probably meant to. What you did was to #ifdef out all the calls\n> to the missing inet_cidr_*() functions in the code that D'Arcy added,\n> _and_ the ones to the existing inet_net_*() ones that are needed for\n> the code to work at all. Easy mistake to make when things are named\n> so similarly.\n> \n> If you remove the erroneous #ifdef BAD stuff, and apply the following\n> patch to the (current) inet_net_pton.c, we'll have a working INET type\n> again, only missing the improvements that D'Arcy and Paul cooperated\n> to hash out.\n> \n> Oh, and D'Arcy: about documentation: should you and I maybe bounce a\n> file of SGML back and forth a couple of times, getting the type and\n> functions properly described? I can start it off if you like, but I'm\n> not sure I'll be able to find time to do it until monday...\n> \n> -tih\n> \n> *** inet_net_pton.c.old\tFri Oct 16 19:44:25 1998\n> --- inet_net_pton.c.new\tFri Oct 16 19:45:40 1998\n> ***************\n> *** 100,133 ****\n> \n> \tch = *src++;\n> \tif (ch == '0' && (src[0] == 'x' || src[0] == 'X')\n> ! \t\t&& isascii(src[1]) && isxdigit(src[1]))\n> ! \t{\n> \t\t/* Hexadecimal: Eat nybble string. */\n> \t\tif (size <= 0)\n> \t\t\tgoto emsgsize;\n> - \t\ttmp = 0;\n> \t\tdirty = 0;\n> ! \t\tsrc++;\t\t\t\t\t/* skip x or X. */\n> ! \t\twhile ((ch = *src++) != '\\0' &&\n> ! \t\t\t isascii(ch) && isxdigit(ch))\n> ! \t\t{\n> \t\t\tif (isupper(ch))\n> \t\t\t\tch = tolower(ch);\n> \t\t\tn = strchr(xdigits, ch) - xdigits;\n> \t\t\tassert(n >= 0 && n <= 15);\n> ! \t\t\ttmp = (tmp << 4) | n;\n> \t\t\tif (++dirty == 2) {\n> \t\t\t\tif (size-- <= 0)\n> \t\t\t\t\tgoto emsgsize;\n> \t\t\t\t*dst++ = (u_char) tmp;\n> ! \t\t\t\ttmp = 0, dirty = 0;\n> \t\t\t}\n> \t\t}\n> ! 
\t\tif (dirty) {\n> \t\t\tif (size-- <= 0)\n> \t\t\t\tgoto emsgsize;\n> ! \t\t\ttmp <<= 4;\n> ! \t\t\t*dst++ = (u_char) tmp;\n> \t\t}\n> \t}\n> \telse if (isascii(ch) && isdigit(ch))\n> --- 100,131 ----\n> \n> \tch = *src++;\n> \tif (ch == '0' && (src[0] == 'x' || src[0] == 'X')\n> ! \t && isascii(src[1]) && isxdigit(src[1])) {\n> \t\t/* Hexadecimal: Eat nybble string. */\n> \t\tif (size <= 0)\n> \t\t\tgoto emsgsize;\n> \t\tdirty = 0;\n> ! \t\tsrc++;\t/* skip x or X. */\n> ! \t\twhile ((ch = *src++) != '\\0' && isascii(ch) && isxdigit(ch)) {\n> \t\t\tif (isupper(ch))\n> \t\t\t\tch = tolower(ch);\n> \t\t\tn = strchr(xdigits, ch) - xdigits;\n> \t\t\tassert(n >= 0 && n <= 15);\n> ! \t\t\tif (dirty == 0)\n> ! \t\t\t\ttmp = n;\n> ! \t\t\telse\n> ! \t\t\t\ttmp = (tmp << 4) | n;\n> \t\t\tif (++dirty == 2) {\n> \t\t\t\tif (size-- <= 0)\n> \t\t\t\t\tgoto emsgsize;\n> \t\t\t\t*dst++ = (u_char) tmp;\n> ! \t\t\t\tdirty = 0;\n> \t\t\t}\n> \t\t}\n> ! \t\tif (dirty) { /* Odd trailing nybble? */\n> \t\t\tif (size-- <= 0)\n> \t\t\t\tgoto emsgsize;\n> ! \t\t\t*dst++ = (u_char) (tmp << 4);\n> \t\t}\n> \t}\n> \telse if (isascii(ch) && isdigit(ch))\n> \n> -- \n> Popularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Oct 1998 23:59:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Did the inet type get backed out?"
},
{
"msg_contents": "> Thus spake Tom Ivar Helbekkmo\n> > Bruce Momjian <[email protected]> writes:\n> > > I just ifdef'ed out the calls to the non-existant functions. That is\n> > > all.\n> > \n> > You probably meant to. What you did was to #ifdef out all the calls\n> > to the missing inet_cidr_*() functions in the code that D'Arcy added,\n> > _and_ the ones to the existing inet_net_*() ones that are needed for\n> > the code to work at all. Easy mistake to make when things are named\n> > so similarly.\n> \n> Ah. I thought I had left the inet_in and inet_out alone before sending\n> in the patches but I wasn't sure and I have made more drastic changes\n> locally since then so I couldn't tell for sure.\n> \n> However, I have removed the comments but it still thinks that there is\n> no inet type at all.\n> \n> > If you remove the erroneous #ifdef BAD stuff, and apply the following\n> > patch to the (current) inet_net_pton.c, we'll have a working INET type\n> > again, only missing the improvements that D'Arcy and Paul cooperated\n> > to hash out.\n> \n> While I'm posting anyway, Paul; do you have an ETA yet?\n> \n> > Oh, and D'Arcy: about documentation: should you and I maybe bounce a\n> > file of SGML back and forth a couple of times, getting the type and\n> > functions properly described? I can start it off if you like, but I'm\n> > not sure I'll be able to find time to do it until monday...\n> \n> Sure. Is there an existing file for the existing inet type that I can\n> start working on?\n\nI have again re-added the BAD defines because there are calls to\nexisting function are causing errors. Basically, inet is broken.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Oct 1998 00:09:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Did the inet type get backed out?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> I have again re-added the BAD defines because there are calls to\n> existing function are causing errors. Basically, inet is broken.\n\nWhoops. Looks like D'Arcy changed those function calls in the patch\nfrom him that you applied -- probably because you were planning to\nchange the inet_net_*() functions, right, D'Arcy? I don't have time\nto look at them right now, but if monday comes around and we don't\nhave the new version of the INET type in place, I'll have to do the\nwork locally to get it back to the working state it was in, anyway,\nand I'll submit complete patches then. I'm using the current state\nof the PostgreSQL code in production here, and I really, really need\na working INET type, like, right now. :-)\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "17 Oct 1998 17:10:12 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Did the inet type get backed out?"
},
{
"msg_contents": "Thus spake Tom Ivar Helbekkmo\n> Bruce Momjian <[email protected]> writes:\n> > I have again re-added the BAD defines because there are calls to\n> > existing function are causing errors. Basically, inet is broken.\n\nDamn! I guess I did change those functions. Sorry about that.\n\n> Whoops. Looks like D'Arcy changed those function calls in the patch\n> from him that you applied -- probably because you were planning to\n> change the inet_net_*() functions, right, D'Arcy? I don't have time\n> to look at them right now, but if monday comes around and we don't\n> have the new version of the INET type in place, I'll have to do the\n> work locally to get it back to the working state it was in, anyway,\n> and I'll submit complete patches then. I'm using the current state\n> of the PostgreSQL code in production here, and I really, really need\n> a working INET type, like, right now. :-)\n\nPaul said that he expects to have his stuff in this weekend. I promise\nto test and submit my stuff the minute I see it.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 17 Oct 1998 13:18:23 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Did the inet type get backed out?"
}
] |
[
{
"msg_contents": "Outch,\n\n insert and insensitive have the wrong order in keywords.c!\n\n I ran against that while adding the LIMIT keyword - just to\n notice that I'm working on an implementation.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 16 Oct 1998 15:27:48 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Bug in keywords.c"
},
{
"msg_contents": "> Outch,\n> \n> insert and insensitive have the wrong order in keywords.c!\n> \n> I ran against that while adding the LIMIT keyword - just to\n> notice that I'm working on an implementation.\n\nI have resorted the keywords in my editor, and applied the fix.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Oct 1998 19:28:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in keywords.c"
}
] |
[
{
"msg_contents": "To report any other bug, fill out the form below and e-mail it to\[email protected].\n============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\nYour name : Jose' Soares\nYour email address : [email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) : Intel Pentium\n\n Operating System (example: Linux 2.0.26 ELF) : Linux 2.0.34 Elf\n\n PostgreSQL version (example: PostgreSQL-6.1) : PostgreSQL-6.4-BETA2\n\n Compiler used (example: gcc 2.7.2) : gcc 2.7.2.1\n\n\nPlease enter a FULL description of your problem\n\n\nI found this bug:\n\n$ psql prova\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: prova\n\nprova=> \\d \nCouldn't find any tables, sequences or indices!\nprova=> create table prova(a text);\nCREATE\nprova=> \\d prova\n\nTable = prova\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| a | text | var |\n+----------------------------------+----------------------------------+-------+\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or while pr\nocessing the request. \n\nBest regards,\n Jose' mailto:[email protected]\n\n\n",
"msg_date": "Fri, 16 Oct 1998 15:58:22 +0200",
"msg_from": "Sferacarta Software <[email protected]>",
"msg_from_op": true,
"msg_subject": "bug on BETA2"
}
] |
[
{
"msg_contents": "Here we go,\n\n this is up to now only for discussion, do not apply to CVS!\n\n Those involved into the LIMIT discussion please comment.\n\n Here is what I had in mind for the SELECT ... LIMIT. It adds\n\n SELECT ... [LIMIT count [, offset]]\n\n to the parser and arranges that these values are passed down\n to the executor.\n\n It is a clean implementation of LIMIT (regression tested) and\n the open items on it are to enable parameters and handle it\n in SQL functions and SPI stuff (currently ignored in both).\n Optimizing the executor would require the other sort node\n stuff discussion first to come to a conclusion. For now it\n skips final result rows - but that's already one step forward\n since it reduces the rows sent to the frontend to exactly\n that what LIMIT requested.\n\n I've seen the queryLimit by SET variable stuff and that\n really can break rewrite rules, triggers or functions. This\n is because the query limit will be inherited by any query\n (inserts, updates, deletes too) done by them. Have a rule for\n constraint deletes of referencing tuples\n\n CREATE RULE del_table1 AS ON DELETE TO table1 DO\n DELETE FROM table2 WHERE ref = OLD.key;\n\n If the user now sets the query limit to 1 via SET and deletes\n a row from table1, only the first found record in table2 will\n be constraint deleted, not all of them.\n\n This is a feature where users can get around rules that\n ensure data integrity.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\nbegin 644 opt_limit.diff.gz\nM'XL(`$]=)S8\"`^4\\:W?B1K*?R:_H82=98&1;$F^<>`^+F1DV&!S`N<DGCBP:\nM6SM\"(I*PQSOQ?[]5_9!:(`$S<1Y[+V<&I.[JZJ[NZGIUM1?.<DE.[(\"$@7WJ\nM!\\[=V:UE?Z#>XLSV5RO+6X3RX=1&F#W57U4JE6/0%-X&#AG;$3$:Q#`Z]6JG\nMVB)&N]WZZN3DY$`?J;9FI][HU&J\\;27]88.IMC6H9J^(&E_K!)Y.OB+D;XYG\nMNYL%)<5-Y+CAV<K^&)W>%S-JUGX066YV7?@4VI9]3['VC5(;1H'CW?$V\\.^L\nMPKI5/U!**H7\"-4/>_TCM3>0'[ZFUOJ(K/W@B8;19+C.I,G1#,_1&0I>AFU#0\nMDI05'\"\\J%`I+:D6;@)YCR0\\;&CQ=TM\"&/G^1SZR&]];SO8A^C(C-?\\^!E@*4\nMA8#'=5:VO_&B<TY)(9N4`M`26IX3/1&8#?M#F#URHZH91DL9N5'7C&HM'OGS\nMGD[>L$YZ`9!%B85##2/B^3#;R\\!?D>B>DCOG@7J$#9<\\6.Z&\\D8YN,[P9T57\nM(8U*WT@R-:)K)'3^0_UE29:5RVQ\"Y.MI]+2FA<)W9#9G<Y2N9.-B$-^1P6A6\nM&P\\N,P!<ZD&]Z`?6JU;.`&(D`%CI$E9R51;+L`/FA-[&=0'N;7<X[6<`W#X!\nM)ASOY*:?V1YF8$]S!U@\\3`$@3#X?W-&(K0;?-\\GJB'?/6E$.F8/@+)MWF@W-\nM:)F<=P`N=^/`?*68NHS0`OP=C1+($A]0F7'V*U*0N\"8;KQ1O$HW$+:<1<)YL\nMI!&QOS3.;V6Y/YPE*2THL.9WWY&1[]$R;$68JZOQCWTD#<>R7@:4)EU`4[87\nMVK\"+V\\DN_NL06!KA+JN41S?#8?(6;YG?1OHB5P%1,5SV<&4YWI8&RJC/5D$9\nM@,?KH$.-&QU#[]1;^4JH4=,:[43DP6M3EXL<PHP[-GGPG07I>XMKU_)*^$4J\nM:_C62)\\M\":E0!*1LHD63V6;MTIEUZ]*IZT>DPM>6,@SI5K#`\"4:V\"%P=0(D5\nMA'0&\"Z*1WFHQ`YE%_#4-`+\\/D&\\0$D03\\9=+$!\"LPU`@P&)OL[JE@2@F4]OR\nM+IV`VMB8+.23@&<$EBIK4(K1<N/995(JJ]3P\"0`2)A04)WV@I6WZ0OC62!8>\nM+5M<-$#5R'E'5;LC;#[WP[`@%V\\/#;:6NK,275M1-MGVFN`,QAL,7_A6`O2?\nM<,K$>A0*\\8(D>KR``XG73@@/8\"LCX:N_-KE<@(#XX)RE%'SY)&2K##\"4FJJA\nMU`01VVS'(G:;M(\"&&Y=9.F#],(OS$L49RC16B.Q7*)0J6.!X;$3(@4QG,J.+\nM$P026D_*N#G\"BJ1QL_,YWH)J`TEME:1V33/K\"4E\\PD\\N:#A?![Y-0;<O9.=J\nMI0LJ'3<3&\"D>6`?.8NPLV`#?9`^06U\\@X;F-)76\\(-A?LK?AX&HP([9K;4)A\nM>N4@8J87JHN8\"<@KT!F@7\\I8\\XD)'V&`5FP.P684QL9^KJW`6D'=6JUC:-E3\nM^.A$]CTIH7DXL^Z2;LH,O^B@`'8-E59<AQ<5['@-2ZP<-%W<^ES`)\"\"PPN62\nM:')RD5AM90EZ\"_;J!VDSQ3VRX<L>J>O?E?J3R7BBD>(::VA$`V*%!#IVI/P%\nM4S<BS@H8=D5![2_($XV*>=TLZ-(\"3L[L8>/1CVN0SH\"\"6<]HJY*O%]O]%362\nM,7N\\OV?9$2ZA&-ZW1.=SF^XM10+H\"*3BEA*/WL'V>:\"\"@N?8J#R:\\_BV^BV,\nMQS
'LX;O$X-YA.Z5J+]<)_^$@TTDAH?*<TD<B103'L=??B^%X7W\\8O['NTNRF\nMNEUI;N-CV\\ML'&0_KX$PE*LEU!*SHE'MY,C=MF;J]=C[$!^B&%6B(+:IU%+%\nMK'J5E':'P_GLYGK8GZ:`W_K!HQ4L4A95\"D!5/^>\\0JP':@2S86IF(]$(+S)4\nMH:G5(NXNO]#`L^;<U%N::50376>\"^C:;K1>@[,TN9>0%*<NDQH3!FTV%FE93\nM,]O5/X&:?X)K\\YO)J<+H:Z827-.K6E5OORPY>@IF/.KS_?+2:],`JZJA6E7-\nM>DMK-FH*,8=]*%[\\6QTI7OP%WE0F8<V:J35K\"M.U#%UKF<HJ?>(HN>5+N\"Y,\nM;\\V4\",Y1T*BA81U`A#^21]3-J#<BZCX1[CY3'@#\".02CFH0?G+5L)GJ.&/D$\nM1'Q`P8\\(45W</@G%SB?N-.X)5M,%W-!3>.]OW`6)8\"DBGRQ\\`O+?@1FC%.:?\nMN.!)NJ%[email protected]*#`5J`7=\"/)(3I+DNP$_(ORQ,O><2>;5D_?/'(\nMA51+0M6?G*C50G=BC-7Q-CPP&T<\\,V?VB-@(KC^R1I@7'%$`#D1'%,@O\"(_D\nMM:YV=*.CF_GQ$5/70;4VI&K%R1`J&<V=87<*ULMX.OOAIC^<]\\975]W198F&\nMY3+Y!^G_U._-)_T9\"`;2$6\\W(QGN\"B@@\\5+.*@U/+GY9:`1_I6L:NZ4ZTTEB\nM093`!)?68I1BX_QYH\\P.QC$3-F?X1W!1N';R^(=5'>`<!I/!,^T#/+/;#KBE\nMU3';^=S2!%'=3(EJ5M\",)=JM[[NP-4,921IXD3^A+I/%L%1+RPWYUK/OK8!%\nM$4\"*^B.P@57IQX]0`K%I<XY\"I`49RWIA0V:-NV76M5955R2QV=!:R7$4ZXUQ\nMPO1Z,&<&[7Q\\?3/Z?C3^'\\8K7%*\\(?*3?402?RK\\YXCCDBA]7J)@V(O_3'D[\nM_A!%?E)G*0E6DCI4V0&.SU8DL'+`D@WL`H$2>/NT);L%FP?10AR\\1+$GEMV$\nM'\\#P)O$9238H.XJ1E(KCF#RDZ#$?A90?T&R!RH`\\DQ\\L(H\\B@D?DF2)>!Q1]\nM/[8O1'1><\"!C>SSK7()?Q_D1A-;-#%AR.NO.IMD,7@,&KQN)AZ2P\\_C[>>]F\nM,AU/$D;./XX0`H_)R;?HUD7I4X8Y8K0W04#1`U:#7!FAKW,Y#?$>Q:GH75W.\nMI_UAOS<CWWQ#XKY/+N3I!711YKNT`?92HY'LTA>CZMA#E3^,W'PU@5(#S^#7\nM3ZCDMRV-G=IL9;$#MJLO:GJVOMC?%/2,V:G6\\U6&46^V-*/>4CQ)6:0LK$<?\nML9^3BXT'T]9CH228Y(BNUG/7$0%@(8,+N%H@I]9/)12F&A&--1[1&/-@V?EA\nMR)Y8:KY=!6\\)D%B;'UH8?Q/EKTM2N6]9$JC/796<EK`H>J>N[['ZZFW-;\"C)\nM&;P@<>ZM]1IZF;(\\BH&W]$&0!1J/K9U<W%OA=',[=+P/(1A8Q2C8T\"+85D6F\nMW8O\\:\"D'`\\\"I*USDL'.@`Y>*P^PP`E_+7(3*JA,19<I\"N,,<^Q$RYCB(KQ=+\nMB\\\\R_?CJ@76PR.<=I78?\\RA@G\\L]>4V!?8Q.K;IG3[/C/363!`N:B<<>^=`-\nM[%\\W7,-CB4O9;\\`<N(ONR^=,\"Z(7FF8&K@9=W[;<>98PP+()#+F$#%>6QR5'\nM=Z6R\"7<AU:[4VMVNWGQ!5YR!<GKJB8!R1D>)*$I:'2&-6\"@D.+/`.7_Z#]WB\nMJ.W*;(;:ALK@)R.;G_:VK()+T3&;>]BII6M&2SU6PX*VKKI[,I0^I2ZUHVFT\nMBCHBA@/Z]E6IE)2#'D^B7&50VBRC`O->1-!:!%V2J$L46%ZX](-5@
J2T%B9#\nM+F(9B^=8MAEH:T#J>!3`3!R2-0ZAZ\"D'%,^<)`KR-Y>XWB8(_>!8XM08U$&F\nMNPNLU>E3%D?PFKWLQD$R>,W<RVL9S>H=L[HW(Q&,Y+KB!.*KL9N1:%O`+_[=\nM&?PBUV2G'N+)QW:-9:,=>/;1LJ/MC$2)<WTW1P\\J3DT45O[5S7`V^.?/LWX*\nMW^H6X1_15<8&1R1:%`JPQ-&<'\\6!V0LOMT_,@-)XL&W.#IQ\"^<9KD$K^R%\"@\nMJ12C8\"\\<S%]'<RL(K*?Y+;#>(D1S*MPJ8@CHQW4@VEA1%``@<.,\\LH([&HGR\nMK0*3GVW`5\"A]P1#CT\"NY\"_S-.A[5#/CHC@9O07EU@[N0S^77S#?]%N7E10'U\nMVIS+T:0*HQ/4\\BX*L/?FT(62O&$DMNC_RRGDK;%-R&3\"G,F8+Y[8[%,\\3+%K\nM;F5/Z(8BY@4Z!]QTZ(E&]OW\\WG^<KRSOB:O[U%#4D<YE!\"7!@M-P41BOHRG]\nM90C/20U,$G7CNKY+5WDGCV:CJ9G-:K*_F(<_O];(V_ZL]QY^AN/N#'[0J82O\nM_N#=\"!XFXROX1ON`-7HWZ8X`Z-UD?`,MWW=_'(S>P>_X9@*8&,0`6@U&HSX6\nMP,.T/YH.9H,?^_QE,L/?67_R8W?(GL;P/>5+]J\\QMOV^_[-&AMW1NYON.V@T\nM['<O61_#_EMH.QQ\\CX7C7E<,Z*K+!G\\U&-W,9X]'L/3Z,NE?R0'34G0W&\nM(^P1GFXF[*'WO@N$COH_`=;1&/_C`Q#*VMY<]2>#'F\\]?JN1\\0C_#V%HXVO$\nM!;\\3_'_9QQ_HFM'/#Q<-F.<DCOA?/<]X:O+7G>X<?]#`1*9&RB.L@TNH*X*Q\nM0(15H8@W\\GA/`RI>!)0J:,B]]0#^51J`V?420!&WW)@A&5*HH-AM!=5J07M\\\nM97V@S#M+*J0%4RAXS(WX98,>Q&LSYSBUH>.IO:Z>#\\,;%)EFZDC50^<7Z8F=\nMDM>MK9X4A^5U6ZU#.I,J0Y?&'[-@7QL&RX49#,NR^%-\\%.OMN\"?1?4G7`(D1\nMFZ$)5.Q:`)\"Q!?0L'YBE>+@C'I'/ZR%5^RQ)??T:C5<14?/D#/#JO.G'_(ZF\nMKL8CF@;HAJ8:H?^5=*<]>:Q<^$18/\\5OB^?D6=1?]C,`+A2`LTK_ZGKV<^4L\nM\"\\M91>315,X(GR?A]VVS8Z?`ST5WU0_YN_9WDJF59$\\N#SJ4Y\"\\L.\"Q1K0Q?\nM9OE<]/LKR>O@$\"869=[\"!5XIHUJXHJ(Y@)^GR=SMK5,8L`BW=)OX2O/CF/3&\nM8V4QFWDB-2KC'H92FW,)0X'8NH'QVM@&R+A[H=3N7+Q(M=R^=9&JW+YRD<77\nMPO'BD\\PRO$9^>J9R,KW`&1.Y5]SBVYOK)?>-N,ATV!?[0)\\>_6`19D<`DMJ]\nM/ED\"]ME^64[3.K3KF'M\"D@9L>,-,3C`^%9E]740]?-G_Z5F3A:!OG\"ADY>]!\nM_\\VF295'@V*LWZ'XA!?#L**BU/`)=$B]T&&I:2E30`&(J+7@E3-0]$D%K\"*P\nM5C$Q%I0JGQ>/GS5QQXJ3)<782Y!U</QO]M/]&\\G*7#VSI1E5-<;/\"FH*W>`@\nM+$!S%F.K26)WZ3(J<A,J+G(^(%5H3TERV(XI\"NLJ@0-B/%8*M(SB8I\\1\"\";C\nM95)D,]*850:%^?LHH(^P$E3^7M*EXVW'TW)@LO=4#G#&SFID[ZQC$$#K5J>^\nMYQRF:IA:U5`L'%9@*AEP]]1\"J\\T/:8D^@\"0\"+\\]-L@5%FHQ(P1$:RO'(@T,?\nMB<.%6+A98[2-+G82==BQ5]JZ$*F_Y-=?B5K;V\\D+WA*D:K9QNE<YG+`8AVQQ\nMQ(2/^/3TE*4_+47^$I[!!QN7DM4FY)FJ()\\7!'-%$`D+_
8BV_'PVY\"E?)9E0\nM.T<0&!\"V^!IW+I\\U__;?)Q<P=2SBN\"]X%MG^^FS-:-]BKU1--E.E0#)8J9G-\nM2OG-:AW=[%0;>X)GNE:+\\X24R-;V+6$9VA(G(1B](A6>\"3&6YZ*8:,QR\\N(\"\nM,!5@LEZEKB1=\\X/5^(Y+YFT7R9S9]WP_^]X-BPLR2G?C@G\\RI8>O[KSL5&2G\nMBX%3:!B_Q\\4N]>3LU9%K+^\\J95Y*PEN<KN7Q8G@0\"=J\"@M_AKM874'#\\=:PC\nM2<P4_55=JU;CO8MIEL2RHPT3@\\'&B]-#3T\\/7`T^+L-B<C.2^7SB.GEH/5\"0\nMSDL_9-9OWUN(&UY\"MMZL%]!<E+&SV\"2#E^Q<F]*RDB_$/5Y)JW(C^O>G55F]\nMG0R2/XSZS)5O5K5JLYF$+YD*FR/])7['E3%3%)_Y,'>#!0<PYK7#PFFI^\\QG\nM7/01VQ$OU8?2,&4Y[):KI_\"[^E8(\\/1=8GPXO6>Z<4]]6O?N`=S5P/4M8^[8\nMQF9'KW6,/:>EK:K64F_6`@]$/KK:(<\\MC%.!T1Q2+F=+*44_@IWO\\8NE3!I)\nM+I]&5A`==:.4Z2\\5S\\Y5Z!>XC'N>C#6^H(R(8?L<.\\@8`=[*9P-E6%BT(H!9\nMB<(25]K@&[@TT$B<J@KVFZ:T8OGQ924M[X>;_N3G.3-%>:8:6Y1$I_S?6I1C\nM[([_BJ7*%PX\\(X8)%O:X)1MVJ[-%PRY<AF1H94N&`VVKG6J]4]V3RM_0M49=\nM^5,'\\-I44RCP#(Q9\"DJ4^ASU$WL/B1504%_>!W\"\"-MZ\"!EQ5!O3!\\3>A>J6G\nMPATVX=\\A+S\"T:F(#HOT;W@]-7S\")?)XAD]62YS,4\\EN*K!BV7X1F93$+<.>P\nM&G6-!Z/F.C4F]M8**7JR/(T0&V$)D;XMP>*\\OWK2J+:UAOHWC!HU`PJ2-+DX\nMBSU),N$4*']QI62SQ(LRCM'F\">%\\B#Q7_A;<T>#IG*<&6X2_LHNFC+*R0/./\nJ5\".V8K`1>&?LC3PZT;V_B8@X[<#CAC]SA9Y)<AHC_9+_!=<!%F+O2@``\n`\nend\n",
"msg_date": "Fri, 16 Oct 1998 17:35:39 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "SELECT ... LIMIT (trial implementation)"
},
{
"msg_contents": "I said:\n\n> It is a clean implementation of LIMIT (regression tested) and\n> the open items on it are to enable parameters and handle it\n> in SQL functions and SPI stuff (currently ignored in both).\n> Optimizing the executor would require the other sort node\n> stuff discussion first to come to a conclusion. For now it\n> skips final result rows - but that's already one step forward\n> since it reduces the rows sent to the frontend to exactly\n> that what LIMIT requested.\n\n Parameters - done\n\n SPI stuff - done\n\n SQL functions - no LIMIT (cannot work)\n\n For SPI calls, a LIMIT clause in the query will take\n precedence over the tcount argument to\n SPI_exec()/SPI_execp(). So SPI functions stay 100% backward\n compatible, but LIMIT is also available for C and PL\n functions.\n\n Unfortunately code is frozen. And since this is feature, it\n is past 6.4. Or can we get it out of the refrigerator for a\n moment, Marc?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 16 Oct 1998 20:36:08 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT ... LIMIT (trial implementation)"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> I've seen the queryLimit by SET variable stuff and that\n> really can break rewrite rules, triggers or functions. This\n> is because the query limit will be inherited by any query\n> (inserts, updates, deletes too) done by them.\n> [ example snipped ]\n> This is a feature where users can get around rules that\n> ensure data integrity.\n\nOuch. I think this point is a *fatal* objection to implementing\nquery limit as a SET variable. That might be a quick-and-dirty way\nof getting some functionality going, but we can't let it loose on the\nworld like that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Oct 1998 13:21:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT ... LIMIT (trial implementation) "
},
{
"msg_contents": "> [email protected] (Jan Wieck) writes:\n> > I've seen the queryLimit by SET variable stuff and that\n> > really can break rewrite rules, triggers or functions. This\n> > is because the query limit will be inherited by any query\n> > (inserts, updates, deletes too) done by them.\n> > [ example snipped ]\n> > This is a feature where users can get around rules that\n> > ensure data integrity.\n> \n> Ouch. I think this point is a *fatal* objection to implementing\n> query limit as a SET variable. That might be a quick-and-dirty way\n> of getting some functionality going, but we can't let it loose on the\n> world like that.\n\nOK, I assume you are saying that you like LIMIT/OFFSET in the query, but\nnot as a SET command that could be unreliable.\n\nJan has already coded a much more reliable, user-friently way, by\nputting the LIMIT/OFFSET in the query, and I think that is the way to\ngo too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Oct 1998 23:47:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT ... LIMIT (trial implementation)"
},
{
"msg_contents": "> Here we go,\n> \n> this is up to now only for discussion, do not apply to CVS!\n> \n> Those involved into the LIMIT discussion please comment.\n> \n> Here is what I had in mind for the SELECT ... LIMIT. It adds\n> \n> SELECT ... [LIMIT count [, offset]]\n> \n> to the parser and arranges that these values are passed down\n> to the executor.\n\nMy only suggestion is that I don't like syntax where you have value\n'a,b', and a and b have different meanings.\n\nI would prefer:\n\n SELECT ... [LIMIT count [OFFSET offset]]\n\nThis makes things much clearer for people reading the query.\n\nWhat if someone wants the rows from 500 to the end. Should we allow\nthe syntax to be:\n\n SELECT ... [LIMIT count] [OFFSET offset]\n\nLIMIT and OFFSET are independent.\n\n> It is a clean implementation of LIMIT (regression tested) and\n> the open items on it are to enable parameters and handle it\n> in SQL functions and SPI stuff (currently ignored in both).\n> Optimizing the executor would require the other sort node\n> stuff discussion first to come to a conclusion. For now it\n> skips final result rows - but that's already one step forward\n> since it reduces the rows sent to the frontend to exactly\n> that what LIMIT requested.\n> \n> I've seen the queryLimit by SET variable stuff and that\n> really can break rewrite rules, triggers or functions. This\n> is because the query limit will be inherited by any query\n> (inserts, updates, deletes too) done by them. 
Have a rule for\n> constraint deletes of referencing tuples\n> \n> CREATE RULE del_table1 AS ON DELETE TO table1 DO\n> DELETE FROM table2 WHERE ref = OLD.key;\n> \n> If the user now sets the query limit to 1 via SET and deletes\n> a row from table1, only the first found record in table2 will\n> be constraint deleted, not all of them.\n> \n> This is a feature where users can get around rules that\n> ensure data integrity.\n\n\nOK, I am all for removal of SET QUERY_LIMIT, especially if we think we\ncan get something better in a post-6.4 release.\n\nI assume the current strategy for impelemting LIMIT..OFFSET is:\n\n\tFor single-table queries, if the index matches the ORDER BY, use\nthe index to do the LIMIT..OFFSET. Large offset value require a\nsequential scan of the index until it reaches the OFFSET.\n\n\tFor joins, if an index matches the ORDER BY, and the indexed\ntable is on the outside of a join loop, use the index to force the query\nto execute in ORDER BY order, and reduce the number of values in the\nquery.\n\nIt would be nifty if we could peek into the index and change LIMIT to an\nactual range of value that would automatically match an index, then you\nhave to force the optimizer to use the index, i.e.\n\n\n\tSELECT * FROM tab LIMIT 100\n\nbecomes:\n\n\tSELECT * FROM tab WHERE x < 732\n\nbut that is very strange to do, and I would prefer not to approach it\nthat way. Seems like Jan has already done it better than that.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Oct 1998 00:42:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT ... LIMIT (trial implementation)"
},
{
"msg_contents": "Hi all,\ncurrently CVS is not working. It gives the following error message:\n\nmarliesle:/usr/local/pgsql# cvs -z3 update -d\nFatal error, aborting.\n: no such user\n\n-Egon\n\n",
"msg_date": "Sun, 18 Oct 1998 07:45:15 +0200 (MET DST)",
"msg_from": "Egon Schmid <[email protected]>",
"msg_from_op": false,
"msg_subject": "CVS not working"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, I assume you are saying that you like LIMIT/OFFSET in the query, but\n> not as a SET command that could be unreliable.\n\nPrecisely. Adding LIMIT/OFFSET options to SELECT sounds like a good idea\nfor all the reasons previously given. But the SET QUERYLIMIT command\nis positively dangerous --- I think we should take it out.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Oct 1998 12:01:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT ... LIMIT (trial implementation) "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> What if someone wants the rows from 500 to the end. Should we allow\n> the syntax to be:\n> SELECT ... [LIMIT count] [OFFSET offset]\n> LIMIT and OFFSET are independent.\n\nI like that syntax the best, but remember we are not inventing in\na green field here. Isn't this a feature that already exists in\nother DBMs? We should probably copy their syntax, unless it's\ntruly spectacularly awful...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Oct 1998 12:04:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT ... LIMIT (trial implementation) "
},
{
"msg_contents": "On Sun, 18 Oct 1998, Tom Lane wrote:\n\n> Date: Sun, 18 Oct 1998 12:04:49 -0400\n> From: Tom Lane <[email protected]>\n> To: Bruce Momjian <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] SELECT ... LIMIT (trial implementation) \n> \n> Bruce Momjian <[email protected]> writes:\n> > What if someone wants the rows from 500 to the end. Should we allow\n> > the syntax to be:\n> > SELECT ... [LIMIT count] [OFFSET offset]\n> > LIMIT and OFFSET are independent.\n> \n> I like that syntax the best, but remember we are not inventing in\n> a green field here. Isn't this a feature that already exists in\n> other DBMs? We should probably copy their syntax, unless it's\n> truly spectacularly awful...\n> \n> \t\t\tregards, tom lane\n> \n\nMysql uses LIMIT [offset,] rows \n>From documentation:\n\n LIMIT takes one or two numeric arguments. A single argument\n represents the maximum number of rows to return in a result. If two\n arguments are given the first argument is the offset to the first row to\n return, while the second is the maximum number of rows to return in the\n result. \n\nWhat would be nice if somehow total number of rows could be returned.\nThis is often needed for altavista-like application.\nOf course, I can do\nselect count(*) from sometable ... LIMIT offset, rows\nand then\nselect ... from sometable ... LIMIT offset, rows\nbut this seems not elegant solution.\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sun, 18 Oct 1998 21:45:24 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT ... LIMIT (trial implementation) "
},
{
"msg_contents": "Jan,\n\nI tested your patch on my Linux box and it works ok, except\naggregates functions doesn't work properly, for example\ncount(*) always produces 0\n\nkdo=> select count(*) from work_flats limit 10,1000;\ncount\n-----\n(0 rows)\n\nwhile\n\nkdo=> select rooms from work_flats limit 10,1000;\nrooms\n-----\n 3\n 3\n 3\n 3\n 3\n 3\n 3\n 3\n 3\n 3\n(10 rows)\n\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sun, 18 Oct 1998 21:58:41 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT ... LIMIT (trial implementation)"
},
{
"msg_contents": "Oleg Bartunov wrote:\n\n>\n> Jan,\n>\n> I tested your patch on my Linux box and it works ok, except\n> aggregates functions doesn't work properly, for example\n> count(*) always produces 0\n\n They work absolutely properly :-)\n\n>\n> kdo=> select count(*) from work_flats limit 10,1000;\n> count\n> -----\n> (0 rows)\n\n As I wrote, the executor skips final result rows. In the\n obove query, there is only one result row (the one returned\n by the aggregate function). You asked the executor to skip it\n and it did.\n\n We cannot limit selections in deeper levels than the top one.\n This would give unpredictable results in joins.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 18 Oct 1998 21:22:42 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT ... LIMIT (trial implementation)"
},
{
"msg_contents": "Oleg Bartunov wrote:\n\n> On Sun, 18 Oct 1998, Tom Lane wrote:\n>\n> > Bruce Momjian <[email protected]> writes:\n> > > What if someone wants the rows from 500 to the end. Should we allow\n> > > the syntax to be:\n> > > SELECT ... [LIMIT count] [OFFSET offset]\n> > > LIMIT and OFFSET are independent.\n> >\n> > I like that syntax the best, but remember we are not inventing in\n> > a green field here. Isn't this a feature that already exists in\n> > other DBMs? We should probably copy their syntax, unless it's\n> > truly spectacularly awful...\n> >\n> > regards, tom lane\n> >\n>\n> Mysql uses LIMIT [offset,] rows\n> >From documentation:\n>\n> LIMIT takes one or two numeric arguments. A single argument\n> represents the maximum number of rows to return in a result. If two\n> arguments are given the first argument is the offset to the first row to\n> return, while the second is the maximum number of rows to return in the\n> result.\n\n Simple change, just flip them in gram.y.\n\n And for the 500 to end:\n\n SELECT ... LIMIT 500, 0 (after flipped)\n\n The 0 has the same meaning as ALL. And that could also be\n added to the parser easily so one can say\n\n SELECT ... LIMIT 500, ALL\n\n too.\n\n>\n> What would be nice if somehow total number of rows could be returned.\n> This is often needed for altavista-like application.\n> Of course, I can do\n> select count(*) from sometable ... LIMIT offset, rows\n> and then\n> select ... from sometable ... LIMIT offset, rows\n> but this seems not elegant solution.\n\n Absolutely makes no sense for me. As said in the other\n posting, aggregates do the counting scan in a deeper level\n and thus cannot get limited. So if you invoke an aggregate,\n the whole scan is always done.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 18 Oct 1998 21:29:43 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT ... LIMIT (trial implementation)"
},
{
"msg_contents": "On Sun, 18 Oct 1998, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > What if someone wants the rows from 500 to the end. Should we allow\n> > the syntax to be:\n> > SELECT ... [LIMIT count] [OFFSET offset]\n> > LIMIT and OFFSET are independent.\n> \n> I like that syntax the best, but remember we are not inventing in\n> a green field here. Isn't this a feature that already exists in\n> other DBMs? We should probably copy their syntax, unless it's\n> truly spectacularly awful...\n> \n> \t\t\tregards, tom lane\n\nNone that I have used (VFP, M$ SQL Server) that had 'LIMIT', had 'OFFSET'.\nSo it would seem that the very idea of OFFSET is to break with what others\nare doing.\n\nI too like the above syntax. \nWhy mimic, when you can do better? Go for it!\n\nJust my vote, have a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n",
"msg_date": "Sun, 18 Oct 1998 15:58:57 -0400 (EDT)",
"msg_from": "Terry Mackintosh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT ... LIMIT (trial implementation) "
},
{
"msg_contents": "Hi folks,\nFatal error, aborting.\n: no such user\n\n-Egon\n\n\n",
"msg_date": "Sun, 18 Oct 1998 22:00:39 +0200 (MET DST)",
"msg_from": "Egon Schmid <[email protected]>",
"msg_from_op": false,
"msg_subject": "CVS ..."
},
{
"msg_contents": ">\n> On Sun, 18 Oct 1998, Tom Lane wrote:\n>\n> > Bruce Momjian <[email protected]> writes:\n> > > What if someone wants the rows from 500 to the end. Should we allow\n> > > the syntax to be:\n> > > SELECT ... [LIMIT count] [OFFSET offset]\n> > > LIMIT and OFFSET are independent.\n> >\n> > I like that syntax the best, but remember we are not inventing in\n> > a green field here. Isn't this a feature that already exists in\n> > other DBMs? We should probably copy their syntax, unless it's\n> > truly spectacularly awful...\n> >\n> > regards, tom lane\n>\n> None that I have used (VFP, M$ SQL Server) that had 'LIMIT', had 'OFFSET'.\n> So it would seem that the very idea of OFFSET is to break with what others\n> are doing.\n>\n> I too like the above syntax.\n> Why mimic, when you can do better? Go for it!\n>\n\n We have a powerful parser. So we can provide\n\n ... [ LIMIT { rows | ALL } ] [ OFFSET skip ]\n\n or\n\n ... [ LIMIT [ skip , ] { rows | ALL } ]\n\n at the same time.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 18 Oct 1998 22:05:31 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT ... LIMIT (trial implementation)"
},
{
"msg_contents": "> We have a powerful parser. So we can provide\n> [...]\n\n This version now accepts all of the following\n\n ... [ LIMIT rows ] [ OFFSET skip ]\n ... [ OFFSET skip ] [ LIMIT rows ]\n ... [ LIMIT [ skip , ] rows ]\n\n rows can be a positive integer constant greater that 0, a $n\n parameter (in SPI_prepare()) or the keyword ALL. 0 isn't\n accepted as constant to force ALL in that case making clear\n that this is wanted. In the parameter version the integer\n value 0 still is used to mean ALL.\n\n skip can be a positive integer constant greater or equal to 0\n or a $n parameter for SPI_prepare.\n\n If any of these syntaxes is used in SPI_prepare()'d plans,\n the given tcount argument for SPI_execp() is ignored and the\n plan or parameter values are used.\n\n Anyone happy now?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\nbegin 644 opt_limit.diff.gz\nM'XL(\")%0*C8\"`V]P=%]L:6UI=\"YD:69F`.4\\:W?;-K*?U5^!:-.NY-\"V*-EZ\nMN?$>5:83;67)U2-MSSWWZ-`29/-&(E62BN--_=_OS``@08F4E4>;[JY/&Y'`\nM8(`9#&8&@P%GSGS.#J<^\"_SID><[M\\<W]O0M=V?'4V^YM-U9H!Z.I@BSH_J;\nM@X.#?=#D+GV']:<A,ZO,-)NGE6:ESLQ&H_[-X>'A$WTDVI:;I]7FR8EH>Y#\\\nMH\\%4&@94TRNBQM=3!D^'WS#V-\\>=+M8SSO+KT%D$Q\\OI^_#H+I]2L_+\\T%ZD\nMUP4/P=2>WG&L?:'5!J'ON+>B#?QW?$#=ZG]0R@YRN6M\";KWGTW7H^:^YO;KB\nM2\\]_8$&XGL]3J3)+IF&6JC%=9JD,!75%6<YQPUPN-^=VN/;Y&9;\\M.;^PP4/\nMIM#G;^J9:D1O;<\\-^?N03<7O&=\"2@[(`\\\"R<Y=1;N^&9H\"273DH.:`ELUPD?\nM&'!C^C9('[E9,4RSKHW</#7,RDDT\\L<=G;R@3MH^D,69C4,-0N9ZP.VY[RU9\nM>,?9K?..NXR&R][9BS47C3)P'>//DB\\#'A:^4V0:K&2PP/D7]^8%558L$D/4\nMZU'XL.*YW$LVFA\"/DI4T+H)XR3J]T4F_<Y$\"L.`NU,M^8+Y.BBE`1`*`%2Y@\nM)I=%.0U;8$[@KA<+@+ML=8=6\"L#-`V#\"\\0[&5FI[X,\".Y@Z(>)``0)AL.;CE\nM(<V&6#?Q[,AWUUYR`9F!X#A==FI5PZR7A>P`7.;\"`7XEA+J(T!+\\%0]CR((8\nM4)$D^QG+*5R#M5N(%HG!HI;#$\"1/-3*87%^&D+>B6A_.G!5F'$3SY4O6\\UQ>\nMA*4(O+KJO[&0-!S+:NYS'G<!36DM-&`5-^)5_-<AL-##5790[(V[W?@M6C*?\nM1_HLTP!Q.5QZN+(==\\,\"I=2GFZ`4P/UMT!.-RZ5FZ;1I-K*-4/7$J#9BE0>O\nMM9*:Y``X[DS9.\\^9,<N=72]LMX#_L(,5_&LPBZ:$'7`$Y,1HV62T7BWXR+Y9\nM\\.'\"\"]F!F%M.&)*M8()CC#0)PAQ`B>T'?`038K#V<C8\"G<6\\%?<!OP>0+Q`2\nM5!/SYG-0$-1A(!%@L;M>WG!?%K/AU'8O')]/L3&;J2<)3P06#E9@%,/YVIT6\nM6:&H4R,8`\"0,.!A._HX7-ND+X%^#I>$QTM5%%4R-XCN:VBUE\\[%_A`6E>'-H\nML+3TE17;V@-MD6W.\"7(P6F#X(I82H/^`+)/SD<M%$Q+;\\1P.))H[J3Q`K,Q8\nMKO[:Y`H%`NI#2)96\\.E,2#<9X\"C5=$>I!BJVUHA4[\"9I/@_6\"_)TP/LAC_,\"\nMU1GJ-\"I$\\<OE\"@=8X+@T(I1`LIGD=`F\"0$.7XC+ACE\"1<FZV_O;WH!I`4D,G\nMJ7%BE.LQ28+AA^<\\F*Q\\;\\K!ML]4YWKE`DPZ+B9P4ESP#IQ9WYG1`%^D#U!X\nM7Z#AA8^E;+PDV)O36[=SU1FQZ<)>!]+URD!$KA>:BT@(V#.P&6!?BECS@92/\nM=$`/I@*\".)J[MGU[\"86KK<*N$X0==^[E5O36$54T!8YR6G+!O1-.[U@!_<:1\nM?1OW7Z2.9<\\Y<'BX<N^:HB@WC2:W0.5@`J/69Q(F!H%^BP79Y/`\\=N>*\"O0&\nM'-FWT;A4CT2)ZG$5HZ/RM!XEL0\"CSSP63A;`D(D#'(E[(9Y'332.XQ]?>+<%\nM:S#H#PR6)R`><I_-/9]!MXXR!.!SA[\">F3*,I,-Y/B(,X0L.R1R37?V/\\[]'\
nM;QUWAI-\\W1JTKB:=WIM6%SQCYKQXH0;P00U$&V34\\J5JV1M?L>^^TS$[5+M2\nMO*8:9Q:1%7,:7QZW&;'=AQS=E^;,1I_\";T_O)($63\"MBON$T7V)G$V/=D#FM\nM@UC@TF5NQN<VZ#LE;HD!K%W^?@4V'+0'[;%P1\\.^G3$[2)\"<-UC*4A)#>U0=\nM(=URE-^SDB1X'W)=?@M*]ITB]C':>NRMGX3R_1SU)##LT$[QMBQ63IMEGZJ;\nMY/;S2=6D;(RNF;0QQ$9(ZB5Z_62U%\"'3M%*BMS]3*8G!_$5UDN3SUU!)>_'E\nMHS62P+I;(25E[<_21]1K4AWIP9ND-A)#W*F,-BE-TT7@4JE%*YU;VHNC\\YKA\nMO36,<NDTBF'(/Z9MS61!M#/32[7-V;.XM-7M3D;CZZXU3`!?>OZ][<\\2^[($\nM@.[$GHD*.1_H5Y8;9:/<B/W*+S)4Z>_K12+H]H4&GL;S<JENE,V*%@4NE8V*\nM%BW]=,I>;%/&OB!EJ=2485K*-8T:LV94RI6O0,T/]O3M9Y-3J1GED[)&3J5B\nM5\"J-+TM.*0'3[UEBO7SIN:G\"WJRJ[\\UJ];I1:YQHQ#P=B1'%GQN.$<6?$)-)\nM):QV4C9J)YK0U4]*1OU4FZ4/`J78/S-API-+,Z&\",QPX].!@'D\"%W[-[]-V6\nM0$S(%P_2?G$11D8>PM:<!6^=E6HF>PZ)?`8JWN>_K6'&P%S</$C'3S#N*.H)\nM9G,!N*&GX,Y;+V8LA*D(/3;SF$-6<\\8Y\\)\\M^#N^\"%0SP+WR@L\"Y67!6F*WQ\nMF`=@9_P]\"X#=105VR/YIN_(EB]CC#>]83!X[5V9).A^'AWJUM+)X4N.X:W&\\\nM$YV;I')VCP@KSC^*1I`58M4`GHBQ:I\"?$&3-:MUHELQFJ9P=934K)<.LG&J!\nM$2K84O,Y$9D6;!/>*<BK.%5282T1!R^J<`C-$,6V<(4>GI-_T$]$+=COO[--\nMB/;6QF'#S]\"W(^*49/A3ET4<(-_M`?IP<!DLN0O\"+)P/&CL.RK7]6RDOVBH4\nMQW].ECDL@0M252X(MI\"N\"WILW=80G/W^</33V.I.VOVKJU;OHL\"#8I']@UF_\nM6.W)P!J!`F5-^3;NJ<,%GP,2-Q$:Y,'A^6\\S@^&O\"@1&0<`2\\5<*KA8&%MY'\nMJ0*CC!3,UQME^M$'S6C&\\/=8;<'*R5IG5/7$\"B.8E+75>&)M;;=K-,NGS4HE\nM>U75P*35$B:-\"N)5=>-Y\"Y\"U0,7M.V[H#?B\";!9,U=Q>!$)%3>]LGV*V()U>\nM#W8KNI40$NM+Y29WUB&Y')-I=,2*S*=PH-A>J-:Z%Q[92^F'I]%4+Y\\:]4I)\nMLV;EFE&OGFJ:0DK)\\+HSH<4ZZ5^/>S_V^C^3'`EM^V+7*?4K&7X@G2#-#]DF\nMX4<]?2PM-V;\"*<A4*!^D-4W%A#W07TOMD(2F`<-&9A2QAX\"=>>^X[SO@WV\"Y\nMLE^\"^RS:62J,&3T)4Z:F)G7DT;8)M*\"(O^PW_)Z7\"-S@^*.Q'[%Q@,D`\"V]J\nM+U@[2@F(J/#8:@U3(0:&H@>F?2]25%Z`+H5Z;H!>KK:8>ADE\";#X+YDMD`2-\nM4@84J)XWD`*ZX&Z$=2N%(`6>]MT27N82A%J\\9KN!B`.(!G%60`H@918H^E1V\nM02I\"-)9[(!2Y!AN`\"=F*CI^W](/:DN/:D89=KA5Q#DV.X\\KG*-.DG^29M%SM\nMI'XPPV<^XW.Q]L%XC$>P_(>CUFB8KDQ.0)F<FO&.7E,=_1\\G[?%@V!_$2B/[\nM$%X:'K)7E^@>A,FS]0EBG*Y]GU-`23O:23GP.5-LB/0ALJ)]=3$96EVK/<)@\nM5=3WX;DZLX<NBD(CUL&_KU=CC?C%J-K4#'UY`OB5B,TVUJA%,.]L]8`.V
:9?\nMO%6;;K*WP+:M]DDIW6KO;@K6OMP$;S?;'3ZMU0WSM*[%/521-JTNO\\=^#L_7\nM+K\"M+?3K2Q;RY8H\"M0EKA^L.5-CJH8\"^JL%D8X-I$UD\\>QJR+:=:+%8I61(D\nM\\JF>FAAO'6;/2URY:UIBJ(^=E8R6,\"FEYFDI>U+*IPVC7-42$D5![$W9JQ7T\nM,J3<03PN`#7F&^)`X/#\\S@Z&ZYNNX[X-P,W-A_Z:Y\\'#S9./E1?I%!D8`$Z?\nMX;R`G0`=.%4\"9DL0Q%QF(M1W03(FFH9P2SAV(Q0^SE/XVI&V^\"@'7,P>;/5F\nMV;*CU>X2'@WL8Z4GJRF(C]D\\V>&,BY06/7L2\"VIQ?\"GTH!M8OXM@!8\\%L6?Y\nM#KR%V_\"N>$8V$&,F26$01I`<J$F:,L\"R`0RY@`(7[8GW[DH7$^%:Z5WIM=M=\nMO?B$KH0`9?34ECY$2D>Q*HI;[:&-R)[YQ[9K+Q[^Q3<D:K,R7:`VH5+DR4R7\nMIYTM*\\TR;.]J.\\0)C+Q9UU-)L*!1TC?=ZOQOR!=\\&@[#9=B4,0:PM\\\\*A;@<\nM/++8N!?!:%,6(>9ZRM\"'#$[$,<+0M]U@[OG+&$EA)1V&3,3JC$E@V12@C0'I\nMX]$`4W$HT7@*15L[YWP4)-%V)HNX]MH/,**T'W%ZQ/1)H;N%K=G10YI$B)J=\nMXB9`4F2MO%/69+/AVA7-ZJQ<;I9+S4HU6]#`13[5MMOX:FYGX4]MD!?O]AA^\nM46K2T^TQ?K998T_1#SQ^;T_#S2Q\\A7-U.\\'M592.+WW\\JW%WU/GAUY&5P+>\\\nM0?A[#%A@@SV2\"W,YF&+8B9#*!*<77FX>R($R1&A8G'T'ZDW4()7BD5\"@JQ2A\nMH!<!YJW\"B>W[]L/D!D1O%J`[%6P4B6#F^Y4OV]AAZ`,@2.,DM/U;'LKRC8*R\nM.(D#5FA]P1\"C@P)VZWOK532J$<C1+?<OP7BU_-M`\\/);VKA^C_KR/(=V;2+T\nM:%R%,2)NN^<Y6'L3Z$)+6#1C7_2_DH6B-;8)2\"=,2,=\\,F/3SYPQK;RVD3%8\nM,C4U+]$YL)>'GG@XO9O<>?>3I>T^\"'.?&(H^THD(+<@B<4PQ43<O8LS(FO-<\nM?Q4.^6^8\"A/7`./X(JJS%GR9=79>KM:,<JT2KSD*#TRN#79IC=JOX:?;;XW@\nM![>9\\(_5>=6#AT'_\"OY%GX$:O1JT>@#T:M`?0\\O7K3>=WBOX[8\\'@([email 
protected]\"J\nMT^M96``/0ZLW[(PZ;RSQ,ACA[\\@:O&EUZ:D/_P[%-/ZSCVU_M'XU6+?5>S5N\nMO8)&7:MU07UTK4MHV^W\\B(7]=DL.Z*I%@[_J],8CHN:JWQN]QH=>ZTH=Z?=:\nMHTZ_ASW\"TWA`#^W7+2\"T9_T\"6'M]_!\\?@%!J.[ZR!IVV:-V_-%B_A_]W86C]\nM:\\0%OP/\\_\\+\"'^B:Z*<]2<T$/M?BA?GOS&>,&_YUV9TJYHV&4=%31#J]]L\"Z\nMLGK$D`OK%_QY#;V-AL2I$=\"-K.E%K`>>M/M7UYTN]M7M`$0/68!@R`@Q''CZ\nMD<AO_0(<'EO$$O74?R//QGO6ST@I#*`ULBY^B)_'0T3>Z_<LXL1KXCP\\=\"Y_\nMI5\\Q&IRL?N=BB%RP!JT1\\N\"Z-1S^#(R`IT&_;5T0>ZFS@=7N7`.^@853@;]#\nM:X0_,`4]P#'HPV`&8W5N/[1^&EN]-@#\"6#HX1P\".O!^^1L#AJ(4RA)$[R;WA\nMZ`+E!GY@`@P,5P)K8!R$;=Q3G!KW1AU`]J;5'H^O\\+?;`:`WUN\"'_M\"BAR%P\nM4&:25&\"NM&/P?_>YNKPDEO^GSEE&3`;TW:E^Q@5OIT:Y6M*<DQR3GKWF8K#[\nM.^YS^2*A=&//[NQWCGN;!*\"]M0+07!ZQH6`IGD!.VSOE])T#[HF7]EM.$9*X\nM0NTB<CF7MO*_K7$7_[R<<1Q<+9615LT_P#<H*I<31^<N!J\"0GB@P\\+R^T9,6\nM-'C>T.N0SKC*+)WI297/39/.M#IQSF>4=(F--T($X5VA9``2,]H*QE#1]AZ`\nMS`V@QSBI47%[9T?B=#&KAT3MHR+U^7/M?,)5'!#56>S'C,!:28\\)UDSPSQ)I\nM0[^SUK`=)3!\\8-1/_OO\\&7N4]1=6\"L\"Y!G!\\8%U=CWX].$[#<GP@,R\\/CIG@\nMDXR];(IC,R=.XE+\\/?9WX^]LVS6D8S[1U4)$_@KJMX=+]GFY\"/^<%,]DQ[^S\nM1`^ZDRFT4X:ON;N3DR+U]'0G3V&BTZ0-7-GC^N1NMKGR&<0C+I&P$&$[/F`D\nM#C).)IL#^%ER_K='WLQUZ,!-Q73$$A`G]DF-1&7%C2SOYR;[GIGI\"<:B,SR^\nM$<?5RW5`B;>M;I=Y/K,Q[<K!#%P\\Q^6PB1)YQYC^DM?[<65>>\\H=;*TVXP*V\nM!K%Q^_JYN0F0<N]:J]VZ=)UHN7GC.E&Y>=TZ3;?(`)283V#1QTS('\\BETE^7\nM291'GV23N*^19!.5Z612X$$DX\\?I_IOU=),N(2)43)O<+>[N&&9RX>GK_#-6\nMWG_ABOC+3[;P\">3'09Z.];[E#_>>/PO23QCBVITQWQCLH^.^6M-D[)?.&78<\nM,H`S8Y;C_(@/>8K?Y>4&Z=%0A>!+.V&0CW=,<97+_7P4*X#B0U$,PPKS*EH0\nM0P?<%68BGP@K:``AMV?Y:$,65X3<!P'-QX$'K<H3Q?U'0WZW1)\"E7+0O0=:3\nMXW^QF^[/)\"MU]LIUPZSH.014<*+1O8!>85>0CR(P\"ON\"S\\.\\\",=$1<Y;I`IC\nM,XH<,O5Y&:F)X8`8-Z]VQ%&Q1P3B]C@NFA)I%.')(N*T:IA5,Y$(`045C0C4\nM6GD5SE&XW?42)F^:C\\([JL*;0UG_4I$07804/EH$Y<QPWG$O'14M</SM\\0\"V\nMRS$<$MI'(K.5@,_O08RX^KW@<\\?=/&S,@$E7\"!G`*6JAFJX6]D$`K>O-TQU)\nM*A6S;%3,Q&66<O(RRQVW<3OM!;S`WW$WG/@RMS1*%<H=JVQZZ6T[+GOG\\'OF\nMB`3J8+W\"HT@^V\\JYIYR@W=G<GY3)G>A5#2?07-1C$=B#$1\\='=%-AKF\\BH!Y\nMF?YZP2/?%P\\X9@S3F1$)G8O)MB)U+1\"W-P
KJ;MP$06!`V.);%#_!->_F_P[/\nM@75T'+OK9#&<>JOC%=&^(5Z)FG2A2H\"DB%(M792RFYTT2^7=)XLEXR1*9=>.\nM_38_&Z;._62:\"![ML0.1\\]]726-X9Y\"NUT0%8,Z!6<\\2WRBY%EEGT4<O4C]_\nMH80S_<-?'_TA#CHT)4JW#TV_,J5/?\\OCR[(B_4;#J5$VS3_B2R]Z6M&S/>=>\nM?;PD]2LE^%FGA>V*8GB0=RTE!7_`QUL^@8+]O\\^R)XFIJK]2,BJ5:.UBKC:S\nMI^&:U*\"_=J.;7D='3WPK;+_DT\\&XIZZ<R._+!39%$N9>0)>F+7<F/_DB=>MX\nM-8/FLHP2U>++>&SK.RI&6F:J_+\"7HE7[1-H?3ZLV>UO?Z?K3J$^=^5K%J-1J\nM\\0$7F;`)TE\\0'[V*LON+4?ZIB-KB&<66\"\">U[J/@N.PC\\B.^5!]:PXT$YLUR\nM/45QV]Y*!9[\\N!@^'-V1;=Q1G[2].P\"W+?#IAC.W;^-RLW32-'>DDM4K1EW_\nMU!;(0.CA=C@0]]KB.VWR6P3R:VU*2_'WL$EQQ9>F2!MI-_'\\<*]/3)']TO%L\nM?1OM\"WR=ZRP>:_3%,D0,RV??048(\\#-]-%#\"0C$;'[@2!@5AM&%CL^\"^P:+;\nM5.\"_&5HKNDY4U&XL_#2V!K].R!452?PT*;%-^<^:E'W\\CG^+J<I6#B)=F!0+\nM/6[HANWJ=-6P#9>B&>KIFN&)MI5FY;19V7$KMUHRJMJ=7'RMZ?FEF`Q$GH)V\nM?'B&]HG>`V;[',R7^Q8V06MWQGUA*GW^SO'6@7X[_T!>M1/[.[HIB&CUK$]$\nM^S?\\%%#RKGCHB?3AM)8BV3.7W5*F#--ZD9:5`BZPG<-JM#4NC%K8U(C8&SO@\nMN),5=RRP$98PM;=E6)SU&=1JI6%4]8\\:5T],*(CO$$07+>,,7$&!]@G6PI2R\nM4HLXQJGXH*X8HKC.>0/;4?_A3-R:LIEXI<^Y$&5%B>8?B48T8[`01&?TQNZ=\n>\\,Y;ATP>0^,Y\\-><H4<6'Y.K?<G_`Z+L!%(`6P``\n`\nend\n",
"msg_date": "Sun, 18 Oct 1998 22:44:15 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT ... LIMIT (trial implementation)"
}
] |
[
{
"msg_contents": "Hi,\n\n I finally had success in compiling jade 1.2 (damned thing\n :-). Now I'm looking for the stylesheet files needed to\n create PostgreSQL documentations.\n\n Thomas, you should have them. Can you mail them to me or at\n least point me to where I can grab 'em?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 16 Oct 1998 17:39:25 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "where to get stylesheet?"
},
{
"msg_contents": "> I finally had success in compiling jade 1.2 (damned thing\n> :-). Now I'm looking for the stylesheet files needed to\n> create PostgreSQL documentations.\n\nI'll mail you the style sheet I'm working with now (and am planning on\nusing for the v6.4 release). For future reference, the maintainer\ndistributes the sheets from\n\n http://www.nwalsh.com/docbook/dsssl/index.html\n\n(which you can get to from www.nwalsh.com).\n\nLook in the appendix in the Developer's Guide or the integrated docs in\nthe chapter on documentation for suggestions on an appropriate\nMakefile.custom to point to the right style sheets.\n\n - Tom\n",
"msg_date": "Sat, 17 Oct 1998 00:46:40 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] where to get stylesheet?"
}
] |
[
{
"msg_contents": "I wrote a JDBC based web app that I had been using an older version of\nPostgreSQL and recently updated it to 6.3.2. In my new install PostgreSQL\nwill grow to enormous sizes (+100M) and then shrink back down without\nany regard to the intensity of queries hitting it. Oddly, beating the snot\nout of the server with lots of queries does _not_ seem to make the server\ngrow. It only shows the behaivior when it is getting hit by real net \nconnections. There are several other new elements involved (new libc6, new\nApache, new JServ, etc) but I am befoggulated by the memory growth of\nPostgreSQL. Running Postmaster with -d 2 doesn't indicate anything\nspecial when the memory bounces up and down and -d 3 proved too much for\nmy tiny brain pan.\n\nI am thouroughly confused and don't really know where to start to try\nand debug the problem.\n\nAny hints?!?\nE\n\nps. 6.4b2 shows the same behavior.\n\n-- \n___________________________________________________________________\nEan Schuessler Freak\nNovare International Inc. Freak Central\n*** WARNING: This signature may contain jokes.\n",
"msg_date": "Fri, 16 Oct 1998 13:40:48 -0500",
"msg_from": "\"Ean R . Schuessler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL grows to enormous size."
},
{
"msg_contents": "On Fri, 16 Oct 1998, Ean R . Schuessler wrote:\n\n> I wrote a JDBC based web app that I had been using an older version of\n> PostgreSQL and recently updated it to 6.3.2. In my new install PostgreSQL\n> will grow to enormous sizes (+100M) and then shrink back down without\n> any regard to the intensity of queries hitting it. Oddly, beating the snot\n> out of the server with lots of queries does _not_ seem to make the server\n> grow. It only shows the behaivior when it is getting hit by real net \n> connections. There are several other new elements involved (new libc6, new\n> Apache, new JServ, etc) but I am befoggulated by the memory growth of\n> PostgreSQL. Running Postmaster with -d 2 doesn't indicate anything\n> special when the memory bounces up and down and -d 3 proved too much for\n> my tiny brain pan.\n\nI ran a test earlier in the week, and saw a similar problem. It doesn't\nlook like JDBC's fault, as I got similar results using copy in psql.\n\nI haven't had chance to look at it deeper, as I've been finishing off the\nJDBC documentation.\n\n> I am thouroughly confused and don't really know where to start to try\n> and debug the problem.\n\nWhat version of postgresql were you using before you upgraded?\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Sat, 17 Oct 1998 14:51:32 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL grows to enormous size."
}
] |
[
{
"msg_contents": "Umm... I dunno why this happened, but someone please apply this:\n\n*** ./src/GNUmakefile.in.orig Fri Oct 16 19:38:07 1998\n--- ./src/GNUmakefile.in Fri Oct 16 19:38:31 1998\n***************\n*** 78,91 ****\n bin/psql/Makefile \\\n bin/pgtclsh/mkMakefile.tcltkdefs.sh \\\n bin/pgtclsh/Makefile.tcltkdefs \\\n- \n- ? pgsql/src/bin/pgtclsh/mkMakefile.tcldefs.sh\n- ? pgsql/src/bin/pgtclsh/mkMakefile.tkdefs.sh.in\n- ? pgsql/src/bin/pgtclsh/mkMakefile.tcldefs.sh.in\n- ? pgsql/src/bin/pgtclsh/mkMakefile.tkdefs.sh\n- ? pgsql/src/bin/pgtclsh/Makefile.tkdefs\n- ? pgsql/src/bin/pgtclsh/Makefile.\n- \n bin/pg_dump/Makefile \\\n bin/pg_version/Makefile \\\n include/config.h \\\n--- 78,83 ----\n\nTaral\n",
"msg_date": "Fri, 16 Oct 1998 18:40:54 -0500",
"msg_from": "\"Taral\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Junk in GNUmakefile.in"
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Umm... I dunno why this happened, but someone please apply this:\n> \n> *** ./src/GNUmakefile.in.orig Fri Oct 16 19:38:07 1998\n> --- ./src/GNUmakefile.in Fri Oct 16 19:38:31 1998\n> ***************\n> *** 78,91 ****\n> bin/psql/Makefile \\\n> bin/pgtclsh/mkMakefile.tcltkdefs.sh \\\n> bin/pgtclsh/Makefile.tcltkdefs \\\n> - \n> - ? pgsql/src/bin/pgtclsh/mkMakefile.tcldefs.sh\n> - ? pgsql/src/bin/pgtclsh/mkMakefile.tkdefs.sh.in\n> - ? pgsql/src/bin/pgtclsh/mkMakefile.tcldefs.sh.in\n> - ? pgsql/src/bin/pgtclsh/mkMakefile.tkdefs.sh\n> - ? pgsql/src/bin/pgtclsh/Makefile.tkdefs\n> - ? pgsql/src/bin/pgtclsh/Makefile.\n> - \n> bin/pg_dump/Makefile \\\n> bin/pg_version/Makefile \\\n> include/config.h \\\n> --- 78,83 ----\n\nYep, that's me. I fixed it about 1PM EST today.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Oct 1998 22:25:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Junk in GNUmakefile.in"
},
{
"msg_contents": "> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > Umm... I dunno why this happened, but someone please apply this:\n> > \n> > *** ./src/GNUmakefile.in.orig Fri Oct 16 19:38:07 1998\n> > --- ./src/GNUmakefile.in Fri Oct 16 19:38:31 1998\n> > ***************\n> > *** 78,91 ****\n> > bin/psql/Makefile \\\n> > bin/pgtclsh/mkMakefile.tcltkdefs.sh \\\n> > bin/pgtclsh/Makefile.tcltkdefs \\\n> > - \n> > - ? pgsql/src/bin/pgtclsh/mkMakefile.tcldefs.sh\n> > - ? pgsql/src/bin/pgtclsh/mkMakefile.tkdefs.sh.in\n> > - ? pgsql/src/bin/pgtclsh/mkMakefile.tcldefs.sh.in\n> > - ? pgsql/src/bin/pgtclsh/mkMakefile.tkdefs.sh\n> > - ? pgsql/src/bin/pgtclsh/Makefile.tkdefs\n> > - ? pgsql/src/bin/pgtclsh/Makefile.\n> > - \n> > bin/pg_dump/Makefile \\\n> > bin/pg_version/Makefile \\\n> > include/config.h \\\n> > --- 78,83 ----\n> \n> Yep, that's me. I fixed it about 1PM EST today.\n\nNope. I was editing GNUmakefile instead of GNUmakefile.in. I am fixing\nit now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Oct 1998 23:37:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Junk in GNUmakefile.in"
}
] |
[
{
"msg_contents": "> If you remove the erroneous #ifdef BAD stuff, and apply the following\n> patch to the (current) inet_net_pton.c, we'll have a working INET type\n> again, only missing the improvements that D'Arcy and Paul cooperated\n> to hash out.\n\ninet_net_pton.c won't be part of the final solution anyway. but the\npatch you included is the same thing we're putting in bind 8.next.\n\n> While I'm posting anyway, Paul; do you have an ETA yet?\n\ni was waiting for an answer to my \"why hex?\" question but i then worked\nit out on my own. look for something over the weekend from here.\n\n",
"msg_date": "Fri, 16 Oct 1998 21:53:43 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hackers-digest V1 #1030 "
}
] |
[
{
"msg_contents": "Hello!\n\n There is problem - I don't know where is it.\n\n I'm using PostgreSQL 6.3.2 under Linux-2.0 with official patches:\n\n . linux_elf.patch-980421.gz\n . gram.c.patch-980428\n . configure-980430.gz\n . btree_adj-980730.gz\n \n There are two tables in my test database:\n\ncreate table AAA (anum int2, aname char(16), ata int4 default 0);\ninsert into AAA values (0, '0');\ninsert into AAA values (1, '1');\n-- Note: ata hasn't initialized!\n\ncreate table BBB (bnum int2, bname char(16));\ninsert into BBB values (0, '0');\ninsert into BBB values (1, '1');\n\n Now try some queries:\n\ntest=> select * from aaa, bbb where bnum = anum;\nanum| aname|ata|bnum| bname\n----+----------------+---+----+----------------\n 0|0 | 0| 0|0 \n 1|1 | 0| 1|1 \n(2 rows)\n\n It's OK, in both AAA and BBB all fields as expected.\n\ntest=> select * from aaa, bbb where bnum = ata;\nNOTICE: ExecInitMergeJoin: left and right sortop's are unequal!\nanum| aname|ata|bnum| bname\n----+----------------+---+----+----------------\n 0|0 | 0| 0|0 \n 1|1 | 0| 0|0 \n(2 rows)\n\n Why bnum and bname are 0 and '0' respectively? I'm agree - ata and bnum\nhas different types, but why illegal comparison resets BBB's fields? In the\nlast query I'm expect for one row in result, where bnum == ata == 0!\n\n Who is wrong - I'm or PostgreSQL? And why?\n\n---\nVladimir Litovka <[email protected]>\n\n",
"msg_date": "Sat, 17 Oct 1998 16:19:15 +0300 (EEST)",
"msg_from": "Vladimir Litovka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is this BUG or FEATURE?"
},
{
"msg_contents": "> create table AAA (anum int2, aname char(16), ata int4 default 0);\n> insert into AAA values (0, '0');\n> insert into AAA values (1, '1');\n> -- Note: ata hasn't initialized!\n\nYes it has. ata = 0 in both because you set a default.\n\n> test=> select * from aaa, bbb where bnum = ata;\n> NOTICE: ExecInitMergeJoin: left and right sortop's are unequal!\n> anum| aname|ata|bnum| bname\n> ----+----------------+---+----+----------------\n> 0|0 | 0| 0|0\n> 1|1 | 0| 0|0\n> (2 rows)\n\nYou asked for all cases where bnum = ata. I assume select * from aaa,bbb;\nwould have returned:\n\nanum| aname|ata|bnum| bname\n----+----------------+---+----+----------------\n 0|0 | 0| 0|0\n 0|0 | 0| 1|1\n 1|1 | 0| 0|0\n 1|1 | 0| 1|1\n\nNow filter for bnum=ata and you get two rows.\n\nAm I wrong here?\n\nTaral\n\n",
"msg_date": "Sat, 17 Oct 1998 12:44:33 -0500",
"msg_from": "\"Taral\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [SQL] Is this BUG or FEATURE?"
},
{
"msg_contents": "Hi!\n\nOn Sat, 17 Oct 1998, Taral wrote:\n\n> anum| aname|ata|bnum| bname\n> ----+----------------+---+----+----------------\n> 0|0 | 0| 0|0\n> 0|0 | 0| 1|1\n> 1|1 | 0| 0|0\n> 1|1 | 0| 1|1\n> \n> Now filter for bnum=ata and you get two rows.\n> \n> Am I wrong here?\n\n Egghhhhh....... X-/\n\n Thank you :)\n\n-- \nVladimir Litovka <[email protected]>\n\n",
"msg_date": "Sat, 17 Oct 1998 20:48:31 +0300 (EEST)",
"msg_from": "Vladimir Litovka <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [SQL] Is this BUG or FEATURE?"
}
] |
[
{
"msg_contents": "The make file for interfaces/ecpg/lib cuased the following problem:\n\ngmake[3]: *** No rule to make target `ecpglib.sho.o', needed by `libecpg.so.1'.\n\nThe following patch will fix the problem. It works by removing the need for \nthe *.sho files, which seems to be only used to create the shared libraries. \nThese files are not needed since the ecpglib.o and typename.o files will be \nbuilt correctly for inclusion to the shared libraries because CFLAGS_SL are \nadded to CFLAGS if the systems is one of the systems supporting shared \nlibraries.\n\n\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Sat, 17 Oct 1998 14:29:06 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with interfaces/ecpg/lib/Makefile.in in PostgreSQL 6.4."
},
{
"msg_contents": "On Sat, Oct 17, 1998 at 02:29:06PM -0400, Billy G. Allie wrote:\n> The make file for interfaces/ecpg/lib cuased the following problem:\n> \n> gmake[3]: *** No rule to make target `ecpglib.sho.o', needed by `libecpg.so.1'.\n\nIs this still in there? I had a fix for this in m ylast patch. \n\n> The following patch will fix the problem. It works by removing the need for \n> the *.sho files, which seems to be only used to create the shared libraries. \n\nThe *.sho are needed. It seems much easier to just remove the .o after the\n.sho.\n\n> These files are not needed since the ecpglib.o and typename.o files will be \n> built correctly for inclusion to the shared libraries because CFLAGS_SL are \n> added to CFLAGS if the systems is one of the systems supporting shared \n> libraries.\n\nI'm afraid that's not true. The .o files are for the static library. The .sho\nfiles are compiled with -fpic instead so they are better suited for the\nshared library.\n\nMichael\n\n-- \nDr. Michael Meskes | Th.-Heuss-Str. 61, D-41812 Erkelenz | Go SF49ers!\nSenior-Consultant | business: [email protected] | Go Rhein Fire!\nMummert+Partner | private: [email protected] | Use Debian\nUnternehmensberatung AG | [email protected] | GNU/Linux!\n",
"msg_date": "Sat, 17 Oct 1998 21:22:56 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with interfaces/ecpg/lib/Makefile.in in\n\tPostgreSQL 6.4."
},
{
"msg_contents": "On Sat, 17 Oct 1998 21:22:56 EDT, Michael Meskes wrote:\n> On Sat, Oct 17, 1998 at 02:29:06PM -0400, Billy G. Allie wrote:\n>\t[...] \n> > These files are not needed since the ecpglib.o and typename.o files will be \n> > built correctly for inclusion to the shared libraries because CFLAGS_SL are \n> > added to CFLAGS if the systems is one of the systems supporting shared \n> > libraries.\n> \n> I'm afraid that's not true. The .o files are for the static library. The .sho\n> files are compiled with -fpic instead so they are better suited for the\n> shared library.\n> \n\nIf the system is one that supports building shared libraries, then the \nvariable CFLAGS is modified as follows: CFLAGS += $(CFLAGS_SL). This occurs \nwithin the IF statement for setting up support for shared libraries for the \nparticular system. Therefore, the *.o files are also compiled with the flag \nfor position independant code (PIC). This does not affect there use in a \nstatic library, but does make it possible to use them in the dynamic \nlibraries. If the system for which postgreSQL is being built is not one of \nthe systems for which shared library support is included, then the *.o files \nare compiled without the flag for PIC support. So as I said, the *.sho files \nnot necessary.\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n",
"msg_date": "Sat, 17 Oct 1998 19:18:22 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Problem with interfaces/ecpg/lib/Makefile.in in\n\tPostgreSQL 6.4."
},
{
"msg_contents": "On Sat, Oct 17, 1998 at 07:18:22PM -0400, Billy G. Allie wrote:\n> If the system is one that supports building shared libraries, then the \n> variable CFLAGS is modified as follows: CFLAGS += $(CFLAGS_SL). This occurs \n> within the IF statement for setting up support for shared libraries for the \n> particular system. Therefore, the *.o files are also compiled with the flag \n> for position independant code (PIC). This does not affect there use in a \n\nCorrect so far.\n\n> static library, but does make it possible to use them in the dynamic \n> libraries. If the system for which postgreSQL is being built is not one of \n\nYes, they can be used. But the static library is better off without the\n-fpic flag.\n\nMichael\n\n-- \nDr. Michael Meskes | Th.-Heuss-Str. 61, D-41812 Erkelenz | Go SF49ers!\nSenior-Consultant | business: [email protected] | Go Rhein Fire!\nMummert+Partner | private: [email protected] | Use Debian\nUnternehmensberatung AG | [email protected] | GNU/Linux!\n",
"msg_date": "Mon, 19 Oct 1998 08:17:03 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with interfaces/ecpg/lib/Makefile.in in\n\tPostgreSQL 6.4."
}
] |
[
{
"msg_contents": "The TCL/TK configuration cleanup patches I submitted have the following \nproblem:\n\n 'tclsh' still had to be found even if --with-libs (or --with-libraries) was\n specified to configure.\n\n --with-libs is really an overloaded option. It really should only be used\n to specify additional directories to search in order to find needed\n libraries. It was also being used to locate the *Config.sh files.\n\nThis patch addresses these problems by:\n\n1. Creating a new option (--with-tclconfig) which is used to specify the\n location of the *Config.sh files. If the tkConfig.sh is located in a\n different location than the tclConfig.sh, you can give both locations\n separated by a space. For example:\n \n --with-tclconfig=\"/opt/lib/tcl8.0 /opt/lib/tk8.0\"\n\n2. Changing the search logic so that if '--with-tclconfig' is specified, then\n the tcl shell program is not used to obtain a list of directories to search\n for the *Config.sh files. It is assumed that the directories given with\n the --with-tclconfig option will contain the *Config.sh files.\n\n3. Adding 'tcl' as a name of the tcl shell program to be searched for if\n 'tclsh' was not found in the PATH. This seems to be another common name\n for the tcl shell program.\n\nThis patch also moves the clean-up of the generated Makefile.tcldefs and \nMakefile.tkdefs in bin/pgtclsh from GNUmakefile.in to the Makefile in \nbin/pgtclsh (where, IMHO, they belong).\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Sat, 17 Oct 1998 14:49:04 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "TCL/TK configuration fixes for PostgreSQL 6.4"
},
{
"msg_contents": "Applied.\n\n> The TCL/TK configuration cleanup patches I submitted have the following \n> problem:\n> \n> 'tclsh' still had to be found even if --with-libs (or --with-libraries) was\n> specified to configure.\n> \n> --with-libs is really an overloaded option. It really should only be used\n> to specify additional directories to search in order to find needed\n> libraries. It was also being used to locate the *Config.sh files.\n> \n> This patch addresses these problems by:\n> \n> 1. Creating a new option (--with-tclconfig) which is used to specify the\n> location of the *Config.sh files. If the tkConfig.sh is located in a\n> different location than the tclConfig.sh, you can give both locations\n> separated by a space. For example:\n> \n> --with-tclconfig=\"/opt/lib/tcl8.0 /opt/lib/tk8.0\"\n> \n> 2. Changing the search logic so that if '--with-tclconfig' is specified, then\n> the tcl shell program is not used to obtain a list of directories to search\n> for the *Config.sh files. It is assumed that the directories given with\n> the --with-tclconfig option will contain the *Config.sh files.\n> \n> 3. Adding 'tcl' as a name of the tcl shell program to be searched for if\n> 'tclsh' was not found in the PATH. This seems to be another common name\n> for the tcl shell program.\n> \n> This patch also moves the clean-up of the generated Makefile.tcldefs and \n> Makefile.tkdefs in bin/pgtclsh from GNUmakefile.in to the Makefile in \n> bin/pgtclsh (where, IMHO, they belong).\nContent-Description: uw7-1.patch\n\n[Attachment, skipping...]\n\n> ____ | Billy G. Allie | Domain....: [email protected]\n> | /| | 7436 Hartwell | Compuserve: 76337,2061\n> |-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n> |/ |LLIE | (313) 582-1540 | \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Oct 1998 00:11:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] TCL/TK configuration fixes for PostgreSQL 6.4"
},
{
"msg_contents": "Applied.\n\n> The TCL/TK configuration cleanup patches I submitted have the following \n> problem:\n> \n> 'tclsh' still had to be found even if --with-libs (or --with-libraries) was\n> specified to configure.\n> \n> --with-libs is really an overloaded option. It really should only be used\n> to specify additional directories to search in order to find needed\n> libraries. It was also being used to locate the *Config.sh files.\n> \n> This patch addresses these problems by:\n> \n> 1. Creating a new option (--with-tclconfig) which is used to specify the\n> location of the *Config.sh files. If the tkConfig.sh is located in a\n> different location than the tclConfig.sh, you can give both locations\n> separated by a space. For example:\n> \n> --with-tclconfig=\"/opt/lib/tcl8.0 /opt/lib/tk8.0\"\n> \n> 2. Changing the search logic so that if '--with-tclconfig' is specified, then\n> the tcl shell program is not used to obtain a list of directories to search\n> for the *Config.sh files. It is assumed that the directories given with\n> the --with-tclconfig option will contain the *Config.sh files.\n> \n> 3. Adding 'tcl' as a name of the tcl shell program to be searched for if\n> 'tclsh' was not found in the PATH. This seems to be another common name\n> for the tcl shell program.\n> \n> This patch also moves the clean-up of the generated Makefile.tcldefs and \n> Makefile.tkdefs in bin/pgtclsh from GNUmakefile.in to the Makefile in \n> bin/pgtclsh (where, IMHO, they belong).\nContent-Description: uw7-1.patch\n\n[Attachment, skipping...]\n\n> ____ | Billy G. Allie | Domain....: [email protected]\n> | /| | 7436 Hartwell | Compuserve: 76337,2061\n> |-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n> |/ |LLIE | (313) 582-1540 | \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Oct 1998 00:18:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] TCL/TK configuration fixes for PostgreSQL 6.4"
}
] |
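[Editorial note] The search logic described in point 2 of the thread above can be sketched as a small shell helper: when `--with-tclconfig` directories are given, only those are searched for `tclConfig.sh`, and `tclsh` is never consulted. The helper name and demo paths are hypothetical, not from the actual configure script:

```shell
# Sketch: trust the --with-tclconfig directories if given; a non-zero
# return would make the caller fall back to probing via tclsh.
find_tclconfig() {
    for dir in $1; do                       # $1: space-separated directories
        if [ -f "$dir/tclConfig.sh" ]; then
            echo "$dir/tclConfig.sh"
            return 0
        fi
    done
    return 1
}

demo=$(mktemp -d)
mkdir -p "$demo/tcl8.0" "$demo/tk8.0"
: > "$demo/tcl8.0/tclConfig.sh"             # pretend tcl is installed here
found=$(find_tclconfig "$demo/tcl8.0 $demo/tk8.0")
echo "$found"
```

Passing two directories mirrors the `--with-tclconfig="/opt/lib/tcl8.0 /opt/lib/tk8.0"` example, where `tclConfig.sh` and `tkConfig.sh` may live in different places.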
[
{
"msg_contents": "Somebody didn't take into account the possibility that $(shlib)\nis the same name as lib$(NAME)$(DLSUFFIX).\n\ninstall-shlib: $(shlib)\n\t$(INSTALL) $(INSTL_SHLIB_OPTS) $(shlib) $(LIBDIR)/$(shlib)\n\trm -f $(LIBDIR)/lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION)\n\trm -f $(LIBDIR)/lib$(NAME)$(DLSUFFIX)\n\tcd $(LIBDIR) && $(LN_S) -f $(shlib) lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION)\n\tcd $(LIBDIR) && $(LN_S) -f $(shlib) lib$(NAME)$(DLSUFFIX)\n\nThis deletes the actual shared lib and replaces it with a symlink\npointing to itself. Grumble. The debris:\n\n$ ls -lF /usr/local/pgsql/lib\ntotal 2296\n-r--r--r-- 1 postgres users 624 Oct 17 21:56 global1.bki.source\n-r--r--r-- 1 postgres users 0 Oct 17 21:56 global1.description\n-r--r--r-- 1 postgres users 34882 Oct 17 21:56 libecpg.a\n-rw-r--r-- 1 postgres users 90220 Oct 17 21:56 libpgtcl.a\nlrwxr-xr-x 1 postgres users 11 Oct 17 21:56 libpgtcl.sl@ -> libpgtcl.sl\nlrwxr-xr-x 1 postgres users 11 Oct 17 21:56 libpgtcl.sl.2@ -> libpgtcl.sl\n-rw-r--r-- 1 postgres users 158224 Oct 17 21:56 libpq++.a\n-rw-r--r-- 1 postgres users 196120 Oct 17 21:56 libpq.a\nlrwxr-xr-x 1 postgres users 8 Oct 17 21:56 libpq.sl@ -> libpq.sl\nlrwxr-xr-x 1 postgres users 8 Oct 17 21:56 libpq.sl.2@ -> libpq.sl\n-r--r--r-- 1 postgres users 160633 Oct 17 21:56 local1_template1.bki.source\n-r--r--r-- 1 postgres users 17622 Oct 17 21:56 local1_template1.description\n-r--r--r-- 1 postgres users 2838 Oct 17 21:56 pg_geqo.sample\n-r--r--r-- 1 postgres users 5192 Oct 17 21:56 pg_hba.conf.sample\n-rw-r--r-- 1 postgres users 452568 Oct 17 21:57 plpgsql.sl\n\n\nBTW, the install for plpgsql.sl is not right either --- it doesn't have\nthis symlink problem, but the permissions on the file are wrong. HPUX\nwants shlibs to be executable.\n\n\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Oct 1998 22:25:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Latest shared-lib makefile revisions fail on HPUX"
}
] |
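[Editorial note] The HP-UX failure above comes from `install-shlib` unconditionally doing `rm -f` followed by `ln -s`: on platforms where `$(shlib)` carries no version suffix, the link name equals the library's own name, so the rm deletes the freshly installed file and the ln creates a symlink pointing at itself. A guarded sketch (variable names loosely mirror the Makefile's; this is not the actual fix applied):

```shell
# Sketch: only create the compatibility symlink when its name differs
# from the installed library's name.
LIBDIR=$(mktemp -d)
shlib=libpq.sl                       # HP-UX: DLSUFFIX=.sl, no version in shlib
linkname=libpq.sl                    # lib$(NAME)$(DLSUFFIX) -- same name here

printf 'fake shared object\n' > "$LIBDIR/$shlib"
chmod 755 "$LIBDIR/$shlib"           # HP-UX wants shared libs executable

if [ "$shlib" != "$linkname" ]; then # skip the link when the names collide
    rm -f "$LIBDIR/$linkname"
    (cd "$LIBDIR" && ln -s "$shlib" "$linkname")
fi
```

With the guard, the installed library survives as a regular file instead of becoming the `libpq.sl@ -> libpq.sl` debris shown in the `ls -lF` listing; the `chmod 755` addresses the second complaint about permissions.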