[
{
"msg_contents": "On Thu, 11 Jun 1998,\n\n> >Bruce Momjian:\n> >> PG_VERSION is now 6.4. initdb everyone. Or did we decide not to do\n> >> this if we could help it. I think we will still need to run initdb,\n> and\n> >> move the data files.\n> >\n> >I had thought we were going to avoid changing this unless there were\n> changes\n> >to persistant structures. Do you know what changed to require this?\n> \n> Humm... I think:\n> \n> \tEven if catalogs would not be changed, initdb is required\n> \tsince we have added a new function octet_length().\n> \n> Please correct me if I'm wrong.\n\nWith the bits I've claimed, were going to have even more functions, and a\nnew data type (large objects), so initdb is certain IMO.\n\n--\nPeter Mount, [email protected] \nPostgres email to [email protected] & [email protected]\nRemember, this is my work email, so please CC my home address, as I may \nnot always have time to reply from work.\n\n\n",
"msg_date": "Thu, 11 Jun 1998 08:02:18 +0100 (BST)",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FW: [HACKERS] now 6.4 "
}
] |
[
{
"msg_contents": "I am trying to use Libpq++ for PG6.3.2 and Red Hat 5.1, and am having\ntrouble\ncompiling programs that use it. I tried #include <libpq++.h> and didn't\nlink\nand libpq dirs and I got some linker errors (not able to find such and\nsuch\nfunctions), then I tried linking the way I used to with just plain old\nlibpq,\nbut to avail. Everyone says that libpq is better, but I like the C++\ninterface better, and want to try to get that to work.\n\nIf anyone has had any luck with libpq++ in general and could send me the\nbasics on what command line options I need to use to compile and what\nheader\nfiles to include I would greatly appreciate it.\n\nPlease cc me any responses.\n\nThanks,\n\nSidney Traynham\n\[email protected]\n\n******************************************************\nSidney Traynham\t\t\tPhone:\t(888) 932-7780\n206 N. Oakland St.\t\t\t(703) 527-4672\nArlington, VA, 22203\t\tFax: \t(703) 527-5004\nEmail: [email protected]\tPager:\t(703) 469-7441\n******************************************************\n\n",
"msg_date": "Thu, 11 Jun 1998 17:11:12 -0400 (EDT)",
"msg_from": "Sidney Traynham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Libpq++ and RH5.1"
},
{
"msg_contents": "On Thu, 11 Jun 1998, Sidney Traynham wrote:\n\n> I am trying to use Libpq++ for PG6.3.2 and Red Hat 5.1, and am having\n> trouble\n> compiling programs that use it. I tried #include <libpq++.h> and didn't\n> link\n> and libpq dirs and I got some linker errors (not able to find such and\n> such\n> functions), then I tried linking the way I used to with just plain old\n> libpq,\n> but to avail. Everyone says that libpq is better, but I like the C++\n> interface better, and want to try to get that to work.\n> \n> If anyone has had any luck with libpq++ in general and could send me the\n> basics on what command line options I need to use to compile and what\n> header\n> files to include I would greatly appreciate it.\n\nIIRC you have to include both libpq++.h as well as libpq-fe.h, and you'll \nhave to link with -lpq -lpq++ and possibly with -lcrypt\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Fri, 12 Jun 1998 09:47:21 +0200 (MET DST)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Libpq++ and RH5.1"
}
] |
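To make Maarten's recipe above concrete: a minimal sketch of a libpq++ test program and its compile line, assuming the default /usr/local/pgsql install prefix. The PgDatabase class and method names are recollections of the libpq++ interface of that era rather than something stated in the thread, so check the installed libpq++.h if anything differs.

    /* pqtest.cc -- minimal libpq++ check (hypothetical file name).
       Class and method names are assumptions; see libpq++.h. */
    #include <stdio.h>
    #include <libpq++.h>       /* C++ interface */
    #include <libpq-fe.h>      /* underlying C interface, per Maarten's note */

    int main()
    {
        PgDatabase db("template1");            /* database to connect to */
        if (db.ConnectionBad())
        {
            fprintf(stderr, "connection failed\n");
            return 1;
        }
        if (db.ExecTuplesOk("SELECT datname FROM pg_database"))
            for (int i = 0; i < db.Tuples(); i++)
                printf("%s\n", db.GetValue(i, 0));
        return 0;
    }

Compile and link (paths depend on where PostgreSQL was installed):

    g++ -I/usr/local/pgsql/include -o pqtest pqtest.cc \
        -L/usr/local/pgsql/lib -lpq++ -lpq -lcrypt

Listing -lpq++ before -lpq matters when linking statically, since libpq++ calls into libpq.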
[
{
"msg_contents": "Here is a list of usenet articles about inlining that just appeared in\ncomp.compilers.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n\nPath: readme1.op.net!op.net!cezanne.op.net!op.net!newsfeed.direct.ca!news-peer.sprintlink.net!news-backup-west.sprintlink.net!news.sprintlink.net!208.31.42.33!iecc.com!iecc.com!not-for-mail\nFrom: Mark Sanvitale <[email protected]>\nNewsgroups: comp.compilers\nSubject: why not inline all functions?\nDate: 9 Jun 1998 12:03:34 -0400\nOrganization: Teradyne, Incorporated\nLines: 57\nSender: [email protected]\nApproved: [email protected]\nMessage-ID: <[email protected]>\nNNTP-Posting-Host: ivan.iecc.com\nKeywords: performance\nXref: readme1.op.net comp.compilers:5583 \n\nFunctions are great for making written code (C, C++, etc.) mode\nreadable and structured, however, they do not seem to make much sense\nwhen you get down to the raw machine code which actually is executed\nby a processor.\n\nAs far as my understanding of the matter goes, the most basic way to\nslow down a processor is to make it execute an instruction besides the\none immediately following the current instruction, thus, why not make\na compiler which turns every function into an inline function? This\nwould save you the overhead inherent in a traditional function call\n(push everything defining the current state of the processor on the\nstack, make fresh copies of the parameters for the function, and,\nafterwards, pop things off the stack to return the processor to the\npre-function state, not to mention losing the chance to take advantage\nof any instruction prefetching the processor might do).\n\nThe output of such a compiler would be larger binary files (since\nevery call to a function would expand to the entire function body)\nhowever the execution time for such a program should be improved\n(relative to a non-inlining compiler) by a factor proportional to the\nnumber of function calls in the program.\n\nNow, a \"inline everything\" scheme might run into some roadblocks when\nit comes to external functions which are resolved at link time and the\nnotion of dynamic linking is not compatible with such a method.\nStill, I think compilers should try to inline every function it can\nwithout depending on the programmer to specify a function as \"inline\"\n(C++).\n\nPerhaps compilers already take advantage of the idea I have outlined or\nperhaps there are some problems with the idea which I don't know about\n(an old C++ book I have says, \"Compiler limits prevent complicated\nfunctions from being inlined,\" but no further explanation is given.\n\nWhat do you all think?\n\nNOTE - During my quest for a CS degree I did not have the chance to take\na compilers course (it was a tech elective which never fit in my\nschedule) so if my assertion is totally pointless please just point out\nthe reason I am wrong and spare me the \"Are you stupid/crazy/lost\"\ncomments. In the area of compilers I admit to being fairly ignorant (as\nfar as professional programmers go).\n\nMark S\n\"computer geek and movie freak\"\n[Well, if you ignore recursive and mutually recursive functions which\ncan't be in-lined, the main problem is code bloat. 
Remember that since\ncomputers have fairly small caches and finite main memory, big code can\nrun slower due to cache reloads and page faults, even though you aviod\nrelatively slow procedure calls and returns. Aggressive optimizing\ncompilers (usually for Fortran) do indeed do a lot of in-lining already,\nthough. -John]\n\n--\nSend compilers articles to [email protected], meta-mail to\[email protected]. Archives at http://www.iecc.com/compilers\n\nPath: readme1.op.net!op.net!cezanne.op.net!op.net!nntp-out.monmouth.com!newspeer.monmouth.com!news-peer-east.sprintlink.net!news-peer.sprintlink.net!news-backup-west.sprintlink.net!news.sprintlink.net!208.31.42.33!iecc.com!iecc.com!not-for-mail\nFrom: Joerg Schoen <[email protected]>\nNewsgroups: comp.compilers\nSubject: Re: why not inline all functions?\nDate: 11 Jun 1998 16:11:47 -0400\nOrganization: University of Heidelberg, Germany\nLines: 64\nSender: [email protected]\nApproved: [email protected]\nMessage-ID: <[email protected]>\nReferences: <[email protected]>\nNNTP-Posting-Host: ivan.iecc.com\nKeywords: performance, practice\nXref: readme1.op.net comp.compilers:5604 \n\nMark Sanvitale <[email protected]> wrote:\n: Functions are great for making written code (C, C++, etc.) mode\n: readable and structured, however, they do not seem to make much sense\n: when you get down to the raw machine code which actually is executed\n: by a processor.\n\n: As far as my understanding of the matter goes, the most basic way to\n: slow down a processor is to make it execute an instruction besides the\n: one immediately following the current instruction, thus, why not make\n: a compiler which turns every function into an inline function? This\n: would save you the overhead inherent in a traditional function call\n: (push everything defining the current state of the processor on the\n\nIn my experience \"pushing the current state on the stack\" refers only\nto variables that reside in processor registers. If the function is\nsmall, it will be inlined (at sufficiently high optimization level)\nand no pushs are necessary. If the function is big and a function call\nis done, it is more likely that some time is spent in the function and\nthe processors registers are used therein for performance. Thus it is\nreasonable to \"free\" them for reusage in the function by pushing away\nthem.\n\n: stack, make fresh copies of the parameters for the function, and,\n: afterwards, pop things off the stack to return the processor to the\n: pre-function state, not to mention losing the chance to take advantage\n: of any instruction prefetching the processor might do).\n\n: The output of such a compiler would be larger binary files (since\n: every call to a function would expand to the entire function body)\n: however the execution time for such a program should be improved\n: (relative to a non-inlining compiler) by a factor proportional to the\n: number of function calls in the program.\n\nNo, that's not true. Consider a long function or a short one with a\nloop that is executed a couple of times. You then can neglect the cost\nof calling the function versus the time spent in the function itself.\n\n: Now, a \"inline everything\" scheme might run into some roadblocks when\n: it comes to external functions which are resolved at link time and the\n: notion of dynamic linking is not compatible with such a method.\n\nI know that some compilers have an \"ucode\" format that is different to\nthe usual object file format (which is used in libraries). 
As far as\nmy understanding goes, compilers can do much more with the ucode\nformat in the linking stage, I think they can also do inlining.\n\n: Still, I think compilers should try to inline every function it can\n: without depending on the programmer to specify a function as \"inline\"\n: (C++).\n\nAs our moderator pointed out, you have to consider the cost of\nloading new instructions into the cache. If the function is a separate\ncode, it will be in the cache after the first call and probably stay\nthere. That improves performance compared to the case of inlined\nfunctions that consist of separate code blocks that have all to be\nloaded into the cache.\n\n Joerg Schoen\nE-mail: Joerg.Schoen AT tc DOT pci DOT uni-heidelberg DOT de\nWeb-Page: http://www.pci.uni-heidelberg.de/tc/usr/joerg\n--\nSend compilers articles to [email protected], meta-mail to\[email protected]. Archives at http://www.iecc.com/compilers\n\nPath: readme1.op.net!op.net!cezanne.op.net!op.net!nntp-out.monmouth.com!newspeer.monmouth.com!news-peer-east.sprintlink.net!news-peer.sprintlink.net!news-backup-west.sprintlink.net!news.sprintlink.net!208.31.42.33!iecc.com!iecc.com!not-for-mail\nFrom: Ben Elliston <[email protected]>\nNewsgroups: comp.compilers\nSubject: Re: why not inline all functions?\nDate: 11 Jun 1998 16:13:27 -0400\nOrganization: Cygnus Solutions\nLines: 52\nSender: [email protected]\nApproved: [email protected]\nMessage-ID: <[email protected]>\nReferences: <[email protected]>\nNNTP-Posting-Host: ivan.iecc.com\nKeywords: optimize, practice\nXref: readme1.op.net comp.compilers:5602 \n\nMark Sanvitale <[email protected]> writes:\n\n> Functions are great for making written code (C, C++, etc.) mode\n> readable and structured, however, they do not seem to make much sense\n> when you get down to the raw machine code which actually is executed\n> by a processor.\n\nThere is still one purpose for retaining this structure in the\nexecutable. I'll get to this below.\n\n> The output of such a compiler would be larger binary files (since\n> every call to a function would expand to the entire function body)\n> however the execution time for such a program should be improved\n> (relative to a non-inlining compiler) by a factor proportional to the\n> number of function calls in the program.\n\nI remember reading somewhere that, in fact, by inlining functions like\nthis, the potential for optimisations is greater. My intuition can\nsee why. But code bloat would still be significant.\n\n> Perhaps compilers already take advantage of the idea I have outlined or\n> perhaps there are some problems with the idea which I don't know about\n> (an old C++ book I have says, \"Compiler limits prevent complicated\n> functions from being inlined,\" but no further explanation is given.\n\nIn my opinion, the major drawback to inline functions is that they are\nmuch harder to debug. Suppose you have code which frequently calls:\n\n\tmy_sqrt(..)\n\nAnd you suspect a bug in your square root function. If my_sqrt() were\ninlined at every point that it was used, it would be impossible to set\na breakpoint at the start of my_sqrt() and to examine an invocation of\nthis function. It becomes a real headache.\n\nFurthermore, it helps a great deal to know the calling sequence at\nruntime in case, say, an assertion fails and you need to know how your\nfunction was called and what the arguments were.\n\n---\nBen Elliston\[email protected]\n[That's not a very persuasive argument. 
There's no reason the debugger\ncan't know all the places the inline function was called and put breakpoints\non all of them. It has to do something similar in compilers that unwind\nloops already. -John]\n\n\n--\nSend compilers articles to [email protected], meta-mail to\[email protected]. Archives at http://www.iecc.com/compilers\n\nPath: readme1.op.net!op.net!cezanne.op.net!op.net!nntp-out.monmouth.com!newspeer.monmouth.com!news-peer-east.sprintlink.net!news-peer.sprintlink.net!news-backup-west.sprintlink.net!news.sprintlink.net!208.31.42.33!iecc.com!iecc.com!not-for-mail\nFrom: Andy Ayers <[email protected]>\nNewsgroups: comp.compilers\nSubject: Re: why not inline all functions?\nDate: 11 Jun 1998 16:15:13 -0400\nOrganization: MIT Laboratory for Computer Science\nLines: 59\nSender: [email protected]\nApproved: [email protected]\nMessage-ID: <[email protected]>\nReferences: <[email protected]>\nNNTP-Posting-Host: ivan.iecc.com\nKeywords: optimize, performance\nXref: readme1.op.net comp.compilers:5603 \n\nMark Sanvitale <[email protected]> writes:\n\n> ... why not make\n> a compiler which turns every function into an inline function? This\n> would save you the overhead inherent in a traditional function call\n> (push everything defining the current state of the processor on the\n> stack, make fresh copies of the parameters for the function, and,\n> afterwards, pop things off the stack to return the processor to the\n> pre-function state, not to mention losing the chance to take advantage\n> of any instruction prefetching the processor might do).\n> ...\n> Perhaps compilers already take advantage of the idea I have outlined or\n> perhaps there are some problems with the idea which I don't know about\n> (an old C++ book I have says, \"Compiler limits prevent complicated\n> functions from being inlined,\" but no further explanation is given.\n>\n> What do you all think?\n\nWhile working on high-level optimization at HP, some colleagues and I\nessentially pursued a similar idea. Our goal was not so much to remove\ncall overhead as it was to open up larger expanses of code to\naggressive intraprocedural optimization. Some of the results are\nsummarized in a paper \"Aggressive Inlining\" which we presented at PLDI\nlast year. HP-UX compilers from 9.x onwards are capable of aggressive,\nprofile-driven, cross-module inlining.\n\nInterprocedural optimization was (and is) often hampered by the\ncomplexity and scale of the supporting interprocedural analyses and\nwhole-program information is often required to get good results. Once\nyou've done the analysis you often end up duplicating callee code to\ntake advantage of some specialized calling context. Inlining does this\nduplication and specialization for you without requiring much\nheavyweight analysis.\n\nWe saw very good results on most codes -- benchmarks, certainly, but\nalso many user and production codes and some parts of the HPUX\nkernel. Fortran was actually trickier than C or C++ because the formal\nparameter aliasing properties are difficult to describe accurately\nonce a subroutine is inlined. The only program I remember us actually\n\"flattening\" completely was the spec benchmark fpppp.\n\nOn the flip side, getting good results with inlining requires\nprofiling, but this is quickly becoming a must for any aggressive\noptimizing compiler. The code placement tool (ala Pettis & Hanson)\nneeds to be inlining-aware. Code growth is not that big of a problem\nin many codes. Many very large codes have relatively small dynamic hot\nspots. 
Database codes are a notable exception. Another big downside\nis that aggressive cross-module inlining builds in cross-module\ndependences, so you lose the quick build-time benefits of separate\ncompilation. Debugging runtime failures in heavily inlined code is a\nhuge challenge. And finally, many user codes are not interprocedurally\nclean, making it hard to inline at some sites and preserve whatever\ntwisted semantics were implied by the original call.\n\n\t-- Andy Ayers\n--\nSend compilers articles to [email protected], meta-mail to\[email protected]. Archives at http://www.iecc.com/compilers\n\nPath: readme1.op.net!op.net!cezanne.op.net!op.net!newsfeed.direct.ca!news-peer.sprintlink.net!news-backup-east.sprintlink.net!news.sprintlink.net!208.31.42.33!iecc.com!iecc.com!not-for-mail\nFrom: [email protected] (Sean McDirmid)\nNewsgroups: comp.compilers\nSubject: Re: why not inline all functions?\nDate: 11 Jun 1998 16:59:26 -0400\nOrganization: Computer Science & Engineering, U of Washington, Seattle\nLines: 46\nSender: [email protected]\nApproved: [email protected]\nMessage-ID: <[email protected]>\nReferences: <[email protected]>\nNNTP-Posting-Host: ivan.iecc.com\nKeywords: optimize\nXref: readme1.op.net comp.compilers:5607 \n\nMark Sanvitale ([email protected]) wrote:\n\n: Now, a \"inline everything\" scheme might run into some roadblocks when\n: it comes to external functions which are resolved at link time and the\n: notion of dynamic linking is not compatible with such a method.\n: Still, I think compilers should try to inline every function it can\n: without depending on the programmer to specify a function as \"inline\"\n: (C++).\n\nHmm, you forgot to mention recursive functions and virtual methods in\nC++. In C++, sometimes you execute a method that can only be determined\nat runtime. That's why, if you \"inline\" a method that can be invoked\nvirtually, the compiler will probably just compile it to a function for\nexternal accesses.\n\nOf course, there are tools like Vortex that are attempting to solve this\nproblem through extreme global analysis. See:\n\nhttp://www.cs.washington.edu/research/projects/cecil/\n\n: Perhaps compilers already take advantage of the idea I have outlined or\n: perhaps there are some problems with the idea which I don't know about\n: (an old C++ book I have says, \"Compiler limits prevent complicated\n: functions from being inlined,\" but no further explanation is given.\n\nOver \"inlining\" leads to greater code bloat. This could adversely affect\nyour instruction cache (or maybe not...). Whenever you optimize here, you\nmight deoptimize somewhere else (isn't computer science fun!).\n\n: What do you all think?\n\n: NOTE - During my quest for a CS degree I did not have the chance to take\n: a compilers course (it was a tech elective which never fit in my\n: schedule) so if my assertion is totally pointless please just point out\n: the reason I am wrong and spare me the \"Are you stupid/crazy/lost\"\n: comments. In the area of compilers I admit to being fairly ignorant (as\n: far as professional programmers go).\n\nA compilers course probably would not have gone into these issues. Maybe\nan architecture course...\n\nSean\n--\nSend compilers articles to [email protected], meta-mail to\[email protected]. 
Archives at http://www.iecc.com/compilers\n\nPath: readme1.op.net!op.net!out2.nntp.cais.net!in1.nntp.cais.net!nntp.abs.net!news-peer-east.sprintlink.net!news-peer.sprintlink.net!news-backup-east.sprintlink.net!news.sprintlink.net!208.31.42.33!iecc.com!iecc.com!not-for-mail\nFrom: Thomas Niemann <[email protected]>\nNewsgroups: comp.compilers\nSubject: Re: why not inline all functions?\nDate: 11 Jun 1998 17:00:10 -0400\nOrganization: Compilers Central\nLines: 26\nSender: [email protected]\nApproved: [email protected]\nMessage-ID: <[email protected]>\nReferences: <[email protected]>\nNNTP-Posting-Host: ivan.iecc.com\nKeywords: optimize, practice\nXref: readme1.op.net comp.compilers:5613 \n\nI worked on compilers for Prime and Apollo in the 80's, and implemented\nprocedure inlining for both. For testing, I tried to inline\neverything. I recall one program that seemingly \"hung\" the compiler.\nBeing persistent, I let it compile overnight. It finally did, and\nexecuted properly. On inspection, the call graph for the code resembled\na binary tree, and quickly resulted in a huge program that took hours to\ncompile. The final released product tried to determine good candidates\nfor inlining (another topic). As I recall, we didn't expand recursive\nroutines, though we could have inlined a few times.\n\nLarge programs not only take a long time to compile, but may incurr\nexcessive paging, as there may be insufficient memory to hold the\nexecutable. The big win obtained from inlining is due to optimization.\nInlining a procedure exposes the procedure's body to the surrounding\ncode, allowing for classical optimizations to take place. One of the\nbenchmarks in our test suite terminated with a divide-by-zero exception\nafter inlining. A matrix-multiplication function was inlined. The\noptimizer could then determine that there was no use of the calculation,\nso the code was eliminated. Thus, it took zero time to do the\ncalculation. The divide-by-zero error occurred when dividing elapsed\ntime into a constant. At any rate, we had qualms about reporting\nbenchmark figures for such a result.\n--\nSend compilers articles to [email protected], meta-mail to\[email protected]. Archives at http://www.iecc.com/compilers",
"msg_date": "Thu, 11 Jun 1998 22:58:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "inlining"
},
{
"msg_contents": "> \n> Here is a list of usenet articles about inlining that just appeared in\n> comp.compilers.\n\nGood discussion and I am happy to see you post it. I follow comp.arch\nregularly and there are often very interesting hints there too amid\nthe dross. Actually it is not a high traffic group except for the\noccasional \"sunspot cycle\".\n\n> optimizing compiler. The code placement tool (ala Pettis & Hanson)\n> needs to be inlining-aware. Code growth is not that big of a problem\n> in many codes. Many very large codes have relatively small dynamic hot\n> spots. Database codes are a notable exception. Another big downside\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nDatabase codes are the mothers heartbreak of both the compiler design and\nhardware architecture communities. They blow up caches, there are never\n5 instructions in a row before a branch, they whack at the whole working\nset (which blows up the tlb and bus), they have poor locality so when they\nmiss cache you can't fix it with bandwith. Everything depends on everything\nso you can't parallize at small scales. Hopeless really.\n\nBtw, I sure wish someone would comment on the S_LOCK analysis even if only\nto tell me not to make such long posts as it wastes bandwidth. Or was it just\ntoo long to read?\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n",
"msg_date": "Thu, 11 Jun 1998 22:08:28 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] inlining"
},
{
"msg_contents": "> Btw, I sure wish someone would comment on the S_LOCK analysis even if only\n> to tell me not to make such long posts as it wastes bandwidth. Or \n> was it just too long to read?\n\nI read it all! Great analysis of the situation and not a waste, IMHO.\n\nOne comment...when you ran the tests in succession, could the cache be\nresponsible for the timing groupings in the same test? Should a\nlittle program be run in between to \"flush\" the cache full of garbage\nso each real run will miss? Seem to recall a little program, in CUJ,\nI think, that set up a big array and then iterated over it to trash\nthe cache.\n\nDarren aka [email protected]\n",
"msg_date": "Fri, 12 Jun 1998 07:50:34 -0400",
"msg_from": "\"Stupor Genius\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] inlining"
},
{
"msg_contents": "> \n> > Btw, I sure wish someone would comment on the S_LOCK analysis even if only\n> > to tell me not to make such long posts as it wastes bandwidth. Or \n> > was it just too long to read?\n> \n> I read it all! Great analysis of the situation and not a waste, IMHO.\n> \n> One comment...when you ran the tests in succession, could the cache be\n> responsible for the timing groupings in the same test? Should a\n> little program be run in between to \"flush\" the cache full of garbage\n> so each real run will miss? Seem to recall a little program, in CUJ,\n> I think, that set up a big array and then iterated over it to trash\n> the cache.\n\nYes, that is a good point. When testing in a loop, the function is in\nthe cache, while in normal use, the function may not be in the cache\nbecause of intervening instructions.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 12 Jun 1998 08:21:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] inlining"
},
{
"msg_contents": "At 4:50 AM -0700 6/12/98, Stupor Genius wrote:\n>One comment...when you ran the tests in succession, could the cache be\n>responsible for the timing groupings in the same test? Should a\n>little program be run in between to \"flush\" the cache full of garbage\n>so each real run will miss? Seem to recall a little program, in CUJ,\n>I think, that set up a big array and then iterated over it to trash\n>the cache.\n\nObviously I'm commenting at second hand, and perhaps this problem is\nhandled properly, but:\n\nMany CPU's have independent data and instruction caches. Setting up a big\narray and moving through it will flush the data cache, but most benchmark\nanomalies are likely to be due to the instruction cache, aren't they?\n\nAlso, if you have a process (program) stop and then restart is the OS smart\nenough to reconnect the VM state in such a way that the cache isn't flushed\nanyway? Can it even preserve cache coherence through a fork (when the VM\nstate is mostly preserved)? I doubt it.\n\nThat said if you are testing multiple SQL statements within a single\nconnection (so the backend doesn't fork a new process) then I could see\nsome anomalies. Otherwise I doubt it.\n\nAnyone know better?\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n\n\n",
"msg_date": "Fri, 12 Jun 1998 13:19:56 -0700",
"msg_from": "\"Henry B. Hotz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] inlining"
}
] |
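Darren's cache-trashing idea is simple to sketch in C. The 8 MB size and the file name below are invented for illustration, and, as Henry notes, walking a big array only evicts the data cache, so instruction-cache effects would survive this trick.

    /* flushcache.c -- rough cache "trasher" in the spirit of the CUJ
     * program mentioned above.  Run it between benchmark runs so the
     * next run starts with a cold data cache.  SIZE is a guess; it only
     * needs to be much larger than the largest cache on the machine. */
    #include <stdio.h>

    #define SIZE (8 * 1024 * 1024)    /* 8 MB, assumed >> L2 cache in 1998 */

    int main(void)
    {
        static char junk[SIZE];
        long sum = 0;
        int  i;

        for (i = 0; i < SIZE; i++)    /* touch every byte... */
            junk[i] = (char) i;
        for (i = 0; i < SIZE; i++)    /* ...and read it all back */
            sum += junk[i];
        printf("%ld\n", sum);         /* keep the optimizer honest */
        return 0;
    }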
[
{
"msg_contents": " I hate to open a potential can of worms here, but here's another\npossibility. I recall someone telling me about a database (InterBase,\nI believe it was) that could have rows with different structures all\nin the same table. In other words, they could add a field to the\ntable, and any new rows would have it, while the old ones would not,\nand the database would deal with it on the fly.\n Could we implement some type of \"version\" field in the table\nstructure which would allow this type of thing? If this is\nreasonable, we could have it in 6.4 and potentially never have to\nworry too much about reloading tables after that. With version info\nin each tuple, we could convert the table \"as we go\" or in some other\ngradual way.\n Of course, the question of how much work it would take to have the\nbackend support this needs to be considered, as well as the issue of\nhow this would impact performance.\n\n-Brandon :)\n",
"msg_date": "Thu, 11 Jun 1998 22:47:34 -0500 (CDT)",
"msg_from": "Brandon Ibach <[email protected]>",
"msg_from_op": true,
"msg_subject": "Upgrading (was: now 6.4)"
},
{
"msg_contents": "> \n> I hate to open a potential can of worms here, but here's another\n> possibility. I recall someone telling me about a database (InterBase,\n> I believe it was) that could have rows with different structures all\n> in the same table. In other words, they could add a field to the\n> table, and any new rows would have it, while the old ones would not,\n> and the database would deal with it on the fly.\n> Could we implement some type of \"version\" field in the table\n> structure which would allow this type of thing? If this is\n> reasonable, we could have it in 6.4 and potentially never have to\n> worry too much about reloading tables after that. With version info\n> in each tuple, we could convert the table \"as we go\" or in some other\n> gradual way.\n> Of course, the question of how much work it would take to have the\n> backend support this needs to be considered, as well as the issue of\n> how this would impact performance.\n\nActually, we already have that. When you add a column to a table, it\ndoes not re-structure the old rows. However, system tables do not\nalways add columns. Sometimes we change them. Also there is lots\nmore/different rows for tables, and keeping that straight would be\nterrible.\n\nIf we can keep the original data files and require initdb and a\nre-index, that would be good.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 12 Jun 1998 00:40:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Upgrading (was: now 6.4)"
}
] |
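Bruce's point that ADD COLUMN leaves existing rows untouched is easy to see from psql. A small illustration with an invented table; the pre-existing row simply reads back NULL in the new column because its on-disk tuple is never rewritten.

    CREATE TABLE demo (id int4, name text);
    INSERT INTO demo VALUES (1, 'old row');
    ALTER TABLE demo ADD COLUMN flavour text;
    INSERT INTO demo VALUES (2, 'new row', 'vanilla');
    SELECT * FROM demo;
    -- row 1 shows flavour as NULL; only new rows carry the column's data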
[
{
"msg_contents": "> Yep, I think this is do'able, UNLESS Vadim decides he needs to change\n> the structure of the data/index files. At that point, we are lost.\n\n> In the past, we have made such changes, and they were very much needed. \n> Not sure about the 6.4 release, but no such changes have been made yet.\n\nI thought Vadim was going to change the oid in btree index files to ctid,\nin my opinion a very useful change. (Or was he intending to add it ?)\nThen a btree index rebuild would be necessary.\n\nAndreas\n\n\n",
"msg_date": "Fri, 12 Jun 1998 09:30:49 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] now 6.4"
},
{
"msg_contents": "Andreas Zeugswetter wrote:\n> \n> > Yep, I think this is do'able, UNLESS Vadim decides he needs to change\n> > the structure of the data/index files. At that point, we are lost.\n> \n> > In the past, we have made such changes, and they were very much needed.\n> > Not sure about the 6.4 release, but no such changes have been made yet.\n> \n> I thought Vadim was going to change the oid in btree index files to ctid,\n> in my opinion a very useful change. (Or was he intending to add it ?)\n> Then a btree index rebuild would be necessary.\n\nOID was removed from btree tuples ~ year ago.\nNow I want to use heap tuple ID (referenced by index tuple)\nas (last) part of index key and get rid of BT_CHAIN flag:\nall keys will be UNIQUE and there will be no problems with\nhandling duplicate keys any more (idea (C) Oracle -:)\n\nBut this means that heap tuple id will be added to index\ntuples on internal pages, not on the leaf ones...\n\nVadim\n",
"msg_date": "Fri, 12 Jun 1998 15:53:02 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] now 6.4"
}
] |
[
{
"msg_contents": "Hi, \n\nI have a table for instance boci (row_number int, col2 float, col3\nfloat). The boci table has about 10000 rows. \nI want to update col3 column of the table in every row with different\nvalues. Desired values are in an array has 10000 elements. I want to be\nthe value of the col3 column in the first row equal to the first element\nof the array and so on. \nI've done it with a 'for' cycle, but it is very slow. I want to do\ntheese updates with one transaction. \n\nHere is a little piece of my program:\n\nPQexec(conn, \"BEGIN\");\nfor (i=0; i<10000; i++)\n{\n sprintf(buff, \"UPDATE boci SET col3 = %f WHERE row_number=%d\",\narray[i], i);\n PQexec(conn, buff);\n}\nPQexec(conn, \"END\");\n\nI can't solve this problem with COPY command becouse I have to update\ncol3 column in every minute, but I don't want to change row_number and\ncol2 columns. My problem is the updating and not the creation of the\ntable. Creation is fast enough.\nThis program is very, very slow. Is there any way making this program\nmuch faster (for instance with CURSOR or 'block write' or something\nelse)? Please write me a little program that describes your ideas!\n\nThanks for your help in advance!\n\nPlease help me!!!\n",
"msg_date": "Fri, 12 Jun 1998 10:30:17 +0200",
"msg_from": "Lendvary Gyorgy <[email protected]>",
"msg_from_op": true,
"msg_subject": "update by one transaction"
},
{
"msg_contents": "> I have a table for instance boci (row_number int, col2 float, col3\n> float). The boci table has about 10000 rows. \n> I want to update col3 column of the table in every row with different\n> values. Desired values are in an array has 10000 elements. I want to be\n> the value of the col3 column in the first row equal to the first element\n> of the array and so on. \n> I've done it with a 'for' cycle, but it is very slow. I want to do\n> theese updates with one transaction. \n> \n> Here is a little piece of my program:\n> \n> PQexec(conn, \"BEGIN\");\n> for (i=0; i<10000; i++)\n> {\n> sprintf(buff, \"UPDATE boci SET col3 = %f WHERE row_number=%d\",\n> array[i], i);\n> PQexec(conn, buff);\n> }\n> PQexec(conn, \"END\");\n> \n> I can't solve this problem with COPY command becouse I have to update\n> col3 column in every minute, but I don't want to change row_number and\n> col2 columns. My problem is the updating and not the creation of the\n> table. Creation is fast enough.\n> This program is very, very slow. Is there any way making this program\n> much faster (for instance with CURSOR or 'block write' or something\n> else)? Please write me a little program that describes your ideas!\n\nTry creating an index on row_number. Right now to do the update the whole\ntable has to be scanned. With an index only the matching rows will be\nscanned. \n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n",
"msg_date": "Sat, 13 Jun 1998 00:11:22 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] update by one transaction"
}
] |
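Following David's advice, the only change to the fragment above is a one-time CREATE INDEX on row_number, so each UPDATE can use an index scan instead of scanning all 10000 rows. A sketch; the connection parameters, index name and test data are invented, and error handling is pared down.

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQsetdbLogin(NULL, NULL, NULL, NULL, "test", NULL, NULL);
        double  array[10000];
        char    buff[256];
        int     i;

        for (i = 0; i < 10000; i++)
            array[i] = i * 0.5;        /* stand-in for the real values */

        /* one-time setup: without this, every UPDATE scans the whole table */
        PQclear(PQexec(conn,
            "CREATE INDEX boci_row_number_idx ON boci (row_number)"));

        /* same loop as in the original post, wrapped in one transaction */
        PQclear(PQexec(conn, "BEGIN"));
        for (i = 0; i < 10000; i++)
        {
            sprintf(buff, "UPDATE boci SET col3 = %f WHERE row_number = %d",
                    array[i], i);
            PQclear(PQexec(conn, buff));
        }
        PQclear(PQexec(conn, "END"));
        PQfinish(conn);
        return 0;
    }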
[
{
"msg_contents": "Hi all,\n\nI'm looking for a function like COALESCE() or the Oracle NVL(),\nto returns a ZERO value instead of a NULL value.\nTo have the result: NULL+1 = 1 instead of NULL+1 = NULL\nHave PostgreSQL something like this ?\nI tried to write it on C but I can't realize the beavior of NULLs,\nI can't get that my program returns a zero instead of a null.\nI'm not a C programmer, could somebody help me ?\n\nSELECT * FROM emp;\nname |salary|age|dept\n-----------+------+---+-----\nSam | 1200| 16|toy\nClaire | 5000| 32|shoe\nBill | 4200| 36|shoe\nGinger | 4800| 30|candy\nNULL VALUES| | |\n(5 rows)\n\nSELECT name,NVL(salary)+100 AS dream FROM emp;\nname |dream\n-----------+-----\nSam | 1300\nClaire | 5100\nBill | 4300\nGinger | 4900\nNULL VALUES| <--- I expected 100 here.\n(5 rows)\n Thanks, Jose'\n | |\n~~~~~~~~~~~~~~~~~~~~~~~~ | | ~~~~~~~~~~~~~~~~~~~~~~~~\n Progetto HYGEA ---- ---- www.sferacarta.com\n Sfera Carta Software ---- ---- [email protected]\n Via Bazzanese, 69 | | Fax. ++39 51 6131537\nCasalecchio R.(BO) Italy | | Tel. ++39 51 591054\n\n",
"msg_date": "Fri, 12 Jun 1998 10:53:12 +0000 (UTC)",
"msg_from": "\"Jose' Soares Da Silva\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "COALESCE() or NVL()"
},
{
"msg_contents": "I got sum(money attribute) to return 0.00 instead of NULL when there\nare zero tuples in class, by redefining the sum() aggregate to set\ninitcond1 to 0.00. Perhaps you do something similar with your AVL().\n\n -- Replace existing sum(money) to return $0.00\n -- for zero instances\n \n drop aggregate sum money;\n create aggregate sum (sfunc1 = cash_pl, -- sum\n basetype = money,\n stype1 = money,\n initcond1 = '0.00');\n\n\n\nJose' Soares Da Silva writes:\n > Hi all,\n > \n > I'm looking for a function like COALESCE() or the Oracle NVL(),\n > to returns a ZERO value instead of a NULL value.\n > To have the result: NULL+1 = 1 instead of NULL+1 = NULL\n > Have PostgreSQL something like this ?\n > I tried to write it on C but I can't realize the beavior of NULLs,\n > I can't get that my program returns a zero instead of a null.\n > I'm not a C programmer, could somebody help me ?\n > \n > SELECT * FROM emp;\n > name |salary|age|dept\n > -----------+------+---+-----\n > Sam | 1200| 16|toy\n > Claire | 5000| 32|shoe\n > Bill | 4200| 36|shoe\n > Ginger | 4800| 30|candy\n > NULL VALUES| | |\n > (5 rows)\n > \n > SELECT name,NVL(salary)+100 AS dream FROM emp;\n > name |dream\n > -----------+-----\n > Sam | 1300\n > Claire | 5100\n > Bill | 4300\n > Ginger | 4900\n > NULL VALUES| <--- I expected 100 here.\n > (5 rows)\n > Thanks, Jose'\n > | |\n > ~~~~~~~~~~~~~~~~~~~~~~~~ | | ~~~~~~~~~~~~~~~~~~~~~~~~\n > Progetto HYGEA ---- ---- www.sferacarta.com\n > Sfera Carta Software ---- ---- [email protected]\n > Via Bazzanese, 69 | | Fax. ++39 51 6131537\n > Casalecchio R.(BO) Italy | | Tel. ++39 51 591054\n > \n\n-- \n------------------------------------------------------------\nRex McMaster [email protected] \n [email protected]\n PGP Public key: http://www.compsoft.com.au/~rmcm/pgp-pk\n",
"msg_date": "Sat, 13 Jun 1998 13:37:03 +1000 (AEST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] COALESCE() or NVL()"
},
{
"msg_contents": "On Sat, 13 Jun 1998 [email protected] wrote:\n\n> I got sum(money attribute) to return 0.00 instead of NULL when there\n> are zero tuples in class, by redefining the sum() aggregate to set\n> initcond1 to 0.00. Perhaps you do something similar with your AVL().\n> \n> -- Replace existing sum(money) to return $0.00\n> -- for zero instances\n> \n> drop aggregate sum money;\n> create aggregate sum (sfunc1 = cash_pl, -- sum\n> basetype = money,\n> stype1 = money,\n> initcond1 = '0.00');\n> \nWhat I need is a scalar function that, unfortunatelly hasn't an initcond1.\nI don't know how to make a select like:\n\nSELECT COALESCE(field) FROM table;\nor \nSELECT CASE\n WHEN field IS NOT NULL THEN field\n ELSE 0\n END CASE \nFROM table;\n\n> Jose' Soares Da Silva writes:\n> > Hi all,\n> > \n> > I'm looking for a function like COALESCE() or the Oracle NVL(),\n> > to returns a ZERO value instead of a NULL value.\n> > To have the result: NULL+1 = 1 instead of NULL+1 = NULL\n> > Have PostgreSQL something like this ?\n> > I tried to write it on C but I can't realize the beavior of NULLs,\n> > I can't get that my program returns a zero instead of a null.\n> > I'm not a C programmer, could somebody help me ?\n> > \n> > SELECT * FROM emp;\n> > name |salary|age|dept\n> > -----------+------+---+-----\n> > Sam | 1200| 16|toy\n> > Claire | 5000| 32|shoe\n> > Bill | 4200| 36|shoe\n> > Ginger | 4800| 30|candy\n> > NULL VALUES| | |\n> > (5 rows)\n> > \n> > SELECT name,NVL(salary)+100 AS dream FROM emp;\n> > name |dream\n> > -----------+-----\n> > Sam | 1300\n> > Claire | 5100\n> > Bill | 4300\n> > Ginger | 4900\n> > NULL VALUES| <--- I expected 100 here.\n> > (5 rows)\n> > Thanks, Jose'\n\n",
"msg_date": "Wed, 17 Jun 1998 12:03:43 +0000 (UTC)",
"msg_from": "\"Jose' Soares Da Silva\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] COALESCE() or NVL()"
},
{
"msg_contents": "At 12:03 PM 6/17/98 +0000, Jose' Soares Da Silva wrote:\n>> Jose' Soares Da Silva writes:\n>> > SELECT name,NVL(salary)+100 AS dream FROM emp;\n>> > name |dream\n>> > -----------+-----\n>> > Sam | 1300\n>> > Claire | 5100\n>> > Bill | 4300\n>> > Ginger | 4900\n>> > NULL VALUES| <--- I expected 100 here.\n>> > (5 rows)\n\nSELECT name, NVL(salary, 0) + 100 AS dream FROM emp;\n\nNVL() takes two values: the column/variable, and the value to use if NULL.\n\n--\nRobin Thomas\[email protected]\n",
"msg_date": "Thu, 18 Jun 1998 13:44:48 -0700",
"msg_from": "Robin Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] COALESCE() or NVL()"
},
{
"msg_contents": "Hello Robin,\n\ngioved�, 18 giugno 98, you wrote:\n\nRT> At 12:03 PM 6/17/98 +0000, Jose' Soares Da Silva wrote:\n>>> Jose' Soares Da Silva writes:\n>>> > SELECT name,NVL(salary)+100 AS dream FROM emp;\n>>> > name |dream\n>>> > -----------+-----\n>>> > Sam | 1300\n>>> > Claire | 5100\n>>> > Bill | 4300\n>>> > Ginger | 4900\n>>> > NULL VALUES| <--- I expected 100 here.\n>>> > (5 rows)\n\nRT> SELECT name, NVL(salary, 0) + 100 AS dream FROM emp;\n\nRT> NVL() takes two values: the column/variable, and the value to use if NULL.\n\nRT> --\nRT> Robin Thomas\nRT> [email protected]\nI don't think this work Robin, because there isn't such function on\nPostgreSQL.\nthe only thing that I have is:\n\n function nvl(int4, int4) does not exist\n\nDo you know how to implement it on PostgreSQL ?\n\nBest regards,\n Jose' mailto:[email protected]\n\n\n",
"msg_date": "Wed, 8 Jul 1998 15:35:06 +0200",
"msg_from": "Sferacarta Software <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re[2]: [GENERAL] COALESCE() or NVL()"
},
{
"msg_contents": "\nI am trying to use the nvl function with no success\nwhen I say:\n select bar, NVL(foo, 0) from nulltest;\nI get the error:\nERROR: function nvl(int4, int4) does not exist\n\nDoes anyone have any suggestions?\nSummer\n\n\nOn Thu, 18 Jun 1998, Robin Thomas wrote:\n\n> At 12:03 PM 6/17/98 +0000, Jose' Soares Da Silva wrote:\n> >> Jose' Soares Da Silva writes:\n> >> > SELECT name,NVL(salary)+100 AS dream FROM emp;\n> >> > name |dream\n> >> > -----------+-----\n> >> > Sam | 1300\n> >> > Claire | 5100\n> >> > Bill | 4300\n> >> > Ginger | 4900\n> >> > NULL VALUES| <--- I expected 100 here.\n> >> > (5 rows)\n> \n> SELECT name, NVL(salary, 0) + 100 AS dream FROM emp;\n> \n> NVL() takes two values: the column/variable, and the value to use if NULL.\n> \n> --\n> Robin Thomas\n> [email protected]\n> \n> \n\n",
"msg_date": "Thu, 16 Jul 1998 15:15:21 -0600 (MDT)",
"msg_from": "Summer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] COALESCE() or NVL()"
}
] |
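Since this PostgreSQL vintage has neither NVL() nor COALESCE(), and the aggregate trick only covers sums, one practical fallback is to substitute the default on the client side. A sketch in C with libpq, using the emp table from the thread; PQgetisnull is assumed to be available in the libpq at hand (if it is not, the empty string returned by PQgetvalue is the usual, though ambiguous, sign of a NULL).

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    /* Print name and salary+100, treating a NULL salary as 0 --
     * the client-side equivalent of NVL(salary, 0) + 100. */
    int main(void)
    {
        PGconn   *conn = PQsetdbLogin(NULL, NULL, NULL, NULL, "test", NULL, NULL);
        PGresult *res  = PQexec(conn, "SELECT name, salary FROM emp");
        int       i;

        for (i = 0; i < PQntuples(res); i++)
        {
            long salary = PQgetisnull(res, i, 1)        /* NULL?            */
                            ? 0                         /* ...use 0 instead */
                            : atol(PQgetvalue(res, i, 1));
            printf("%-12s %ld\n", PQgetvalue(res, i, 0), salary + 100);
        }
        PQclear(res);
        PQfinish(conn);
        return 0;
    }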
[
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > I have no problem with catalog changes and dumping the schema if we can\n> > write a script to help them do it. I would hope we can avoid having to\n> > make\n> > someone dump and reload their own data. I am thinking that it could be\n> > pretty inconvenient to dump/load and reindex something like a 50GB table\n> > with\n> > 6 indexes.\n\n1) It did take me about 14 hours to reload a dumpped database last\nweek. I have a Sun Ultra with UW SCSI disks, (no fsync big buffer\ncache and all) I have about 16M rows overall. By end of year I \ncould have 100M rows.\n\n2) If you can fix Postgres so that it _could_ in fact handle a 50GB\ntable I would be very happy to reload even 100M rows. My current \ndatabase has tables called \"tablename_001, tablename_002, \ntablename_003 and so on, all <2GB slices of what should go into one \nlarger table.\n\nI said a few weeks ago that I would report on performence of a\nPent. 200 with IDE vs. a Sun Ultra with UW SCSI. Here is an\nobservation. Both PC and Sun had same data and software.\n\nWhile index building on the Linux PC it is CPU bound. Top shows\nmy 200Mhz non-MMX pentium maxed out near 85%\n\nOn the Sun (Solaris) the process is I/O bound. Even with the much\nfaster disk.\n\nIt apears that UW SCSI disks are about double the speed of\nIDE while a 200Mhz Ultra SPARC CPU is maybe five or six\ntimes faster then a 200Mhz Pentium. Overall database operations\nare two to four times faster on the Sun.\n\nConclusions: 1) the Sun's CPU power is overkill. We could do \nwith a dual CPU Pentium. II 400Mhz system, 256MB RAM and save\nsome $$. 2) UW SCSI disks are not all that fast. What is needed\nis a hardware RAID box with N-way disk striping. \n-- \n \n-- \n--Chris Albertson\n\n [email protected] Voice: 626-351-0089 X127\n Logicon RDA, Pasadena California Fax: 626-351-0699\n",
"msg_date": "Fri, 12 Jun 1998 11:30:31 -0700",
"msg_from": "Chris Albertson <[email protected]>",
"msg_from_op": true,
"msg_subject": "dump/load"
}
] |
[
{
"msg_contents": "I have generated new, more consistent template names:\n\nCVS/\t\t\tgeneric\t\t\tsco\naix_325\t\t\thpux_cc\t\t\tsolaris_i386_cc\naix_41\t\t\thpux_gcc\t\tsolaris_i386_gcc\naix_gcc\t\t\tirix5\t\t\tsolaris_sparc_cc\nalpha\t\t\tlinux_alpha\t\tsolaris_sparc_gcc\nbsdi_2.0\t\tlinux_i386\t\tsunos4_cc\nbsdi_2.1\t\tlinux_sparc\t\tsunos4_gcc\nbsdi_3.0\t\tnetbsd\t\t\tsvr4\ndgux\t\t\tnextstep\t\tultrix4\nfreebsd\t\t\topenbsd\t\t\tunivel\n\nlinux is gone, replaced by linux_i386. The others are just renamed to\nbe more consistent. /template/.similar was also updated.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 12 Jun 1998 19:17:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "template names"
}
] |
[
{
"msg_contents": "Hello all,\n\ne.g.\n\n I want to delete a large object with this table\n\nCREATE TABLE image (\n name text,\n raster oid\n);\n\n-- from programmer's guide\n\nin the psql\n\nfoo=> select lo_unlink(raster) from image;\nERROR: function int4(oid) does not exist\n\nWhy builtin \"lo_unlink\" is defined as accepting int4 not oid? Then do I\nhave to do\nfoo=> select lo_unlink(int4(oid_text(raster))) from image;\nOR\ndefine \"raster\" as int4? I don't think all these are good idea... Then\nhow to delete \"lo\" in the \"psql\"?\n\nBest Regards,\nC.S.Park\n\n",
"msg_date": "Sat, 13 Jun 1998 11:18:30 +0900",
"msg_from": "\"Park, Chul-Su\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[QUESTIONS] builtin lo_unlink(int4)? why int4 not oid?"
},
{
"msg_contents": "On Sat, 13 Jun 1998, Park, Chul-Su wrote:\n\n> Hello all,\n> \n> e.g.\n> \n> I want to delete a large object with this table\n> \n> CREATE TABLE image (\n> name text,\n> raster oid\n> );\n> \n> -- from programmer's guide\n> \n> in the psql\n> \n> foo=> select lo_unlink(raster) from image;\n> ERROR: function int4(oid) does not exist\n> \n> Why builtin \"lo_unlink\" is defined as accepting int4 not oid? Then do I\n> have to do\n> foo=> select lo_unlink(int4(oid_text(raster))) from image;\n> OR\n> define \"raster\" as int4? I don't think all these are good idea... Then\n> how to delete \"lo\" in the \"psql\"?\n\nI've just tested this, and I get the same thing (on 6.3.2, and yesterdays \nCVS versions).\n\nlo_unlink should be defined with oid (which I thought was the case). \n\nA temporary way round is:\n\n\tselect lo_unlink(raster::int4) from image;\n\nHackers: Is there any reason why it's defined as an int4?\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Sat, 13 Jun 1998 10:58:12 +0100 (BST)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [QUESTIONS] builtin lo_unlink(int4)? why int4 not oid?"
},
{
"msg_contents": "> > foo=> select lo_unlink(raster) from image;\n> > ERROR: function int4(oid) does not exist\n> > \n> > Why builtin \"lo_unlink\" is defined as accepting int4 not oid? Then do I\n> > have to do\n> > foo=> select lo_unlink(int4(oid_text(raster))) from image;\n> > OR\n> > define \"raster\" as int4? I don't think all these are good idea... Then\n> > how to delete \"lo\" in the \"psql\"?\n> \n> I've just tested this, and I get the same thing (on 6.3.2, and yesterdays \n> CVS versions).\n> \n> lo_unlink should be defined with oid (which I thought was the case). \n> \n> A temporary way round is:\n> \n> \tselect lo_unlink(raster::int4) from image;\n> \n> Hackers: Is there any reason why it's defined as an int4?\n> \n> -- \n> Peter T Mount [email protected] or [email protected]\n\n\nfoo=> select count(lo_unlink(raster::int4)) from bar;\nERROR: function int4(oid) does not exist\n\nI'm using v6.3.2(patched) on SunSolaris/Redhat5.0\n\nBest Regards, C.S.Park\n\n",
"msg_date": "Sat, 13 Jun 1998 19:20:35 +0900 (JST)",
"msg_from": "Chul Su Park <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [QUESTIONS] builtin lo_unlink(int4)? why int4 not oid?"
},
{
"msg_contents": "On Sat, 13 Jun 1998, Chul Su Park wrote:\n\n> > > foo=> select lo_unlink(raster) from image;\n> > > ERROR: function int4(oid) does not exist\n> > > \n> > > Why builtin \"lo_unlink\" is defined as accepting int4 not oid? Then do I\n> > > have to do\n> > > foo=> select lo_unlink(int4(oid_text(raster))) from image;\n> > > OR\n> > > define \"raster\" as int4? I don't think all these are good idea... Then\n> > > how to delete \"lo\" in the \"psql\"?\n> > \n> > I've just tested this, and I get the same thing (on 6.3.2, and yesterdays \n> > CVS versions).\n> > \n> > lo_unlink should be defined with oid (which I thought was the case). \n> > \n> > A temporary way round is:\n> > \n> > \tselect lo_unlink(raster::int4) from image;\n> > \n> > Hackers: Is there any reason why it's defined as an int4?\n> > \n> > -- \n> > Peter T Mount [email protected] or [email protected]\n> \n> \n> foo=> select count(lo_unlink(raster::int4)) from bar;\n> ERROR: function int4(oid) does not exist\n> \n> I'm using v6.3.2(patched) on SunSolaris/Redhat5.0\n\nWhat patches have you applied?\n\nI'm running Redhat 4.1 on two machines. One has 6.3.2 (unpatched), the\nother the current CVS version, and the workaround worked.\n\nInfact on the current CVS machine, I just re-ran initdb, and tried the\nworkaround again, and it worked.\n\nI'm just wondering if one of the patches has removed int4(oid).\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Sat, 13 Jun 1998 12:31:33 +0100 (BST)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [QUESTIONS] builtin lo_unlink(int4)? why int4 not oid?"
},
{
"msg_contents": "> I've just tested this, and I get the same thing (on 6.3.2, and yesterdays \n> CVS versions).\n> \n> lo_unlink should be defined with oid (which I thought was the case). \n> \n> A temporary way round is:\n> \n> \tselect lo_unlink(raster::int4) from image;\n> \n> Hackers: Is there any reason why it's defined as an int4?\n\nHere is a patch for to make lo_unlink use oid, not int4.\n\n---------------------------------------------------------------------------\n\nIndex: src/include/catalog/pg_proc.h\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/include/catalog/pg_proc.h,v\nretrieving revision 1.59\ndiff -c -r1.59 pg_proc.h\n*** pg_proc.h\t1998/05/29 13:36:31\t1.59\n--- pg_proc.h\t1998/06/13 20:23:49\n***************\n*** 1174,1180 ****\n DATA(insert OID = 963 ( close_lb\t\t PGUID 11 f t f 2 f 600 \"628 603\" 100 0 10 100 foo bar ));\n DESCR(\"closest point to line on box\");\n \n! DATA(insert OID = 964 ( lo_unlink\t\t PGUID 11 f t f 1 f 23 \"23\" 100 0 0 100\tfoo bar ));\n DESCR(\"large object unlink(delete)\");\n DATA(insert OID = 972 ( regproctooid\t PGUID 11 f t f 1 f 26 \"24\" 100 0 0 100\tfoo bar ));\n DESCR(\"get oid for regproc\");\n--- 1174,1180 ----\n DATA(insert OID = 963 ( close_lb\t\t PGUID 11 f t f 2 f 600 \"628 603\" 100 0 10 100 foo bar ));\n DESCR(\"closest point to line on box\");\n \n! DATA(insert OID = 964 ( lo_unlink\t\t PGUID 11 f t f 1 f 23 \"26\" 100 0 0 100\tfoo bar ));\n DESCR(\"large object unlink(delete)\");\n DATA(insert OID = 972 ( regproctooid\t PGUID 11 f t f 1 f 26 \"24\" 100 0 0 100\tfoo bar ));\n DESCR(\"get oid for regproc\");\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 13 Jun 1998 16:26:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [QUESTIONS] builtin lo_unlink(int4)? why int4 not oid?"
}
] |
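Until the catalog entry is fixed, another way around the int4/oid mismatch is to unlink the large objects from the client, where libpq's lo_unlink() takes the oid directly. A sketch using the image table from the original post; the header path and connection parameters are assumptions for a default install, and error handling is minimal.

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>    /* large-object declarations; adjust path */

    /* Unlink every large object referenced by image.raster using the
     * client-side interface instead of the server-side lo_unlink(int4). */
    int main(void)
    {
        PGconn   *conn = PQsetdbLogin(NULL, NULL, NULL, NULL, "foo", NULL, NULL);
        PGresult *res;
        int       i;

        PQclear(PQexec(conn, "BEGIN"));
        res = PQexec(conn, "SELECT raster FROM image");
        for (i = 0; i < PQntuples(res); i++)
        {
            Oid lobj = (Oid) atol(PQgetvalue(res, i, 0));
            if (lo_unlink(conn, lobj) < 0)
                fprintf(stderr, "could not unlink large object %u\n", lobj);
        }
        PQclear(res);
        PQclear(PQexec(conn, "END"));
        PQfinish(conn);
        return 0;
    }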
[
{
"msg_contents": "Hi. I was able to receive mail yesterday (and mostly got caught up), but\nam off the air again, probably at least through the weekend. So if this\nhas already been brought up, sorry...\n\nI've been working on more patches for automatic type conversion, and\nthey fall on top of some of the recent \"junkfilter\" patches. So I've\nbeen waiting for that to settle down before finalizing my patches,\nparticularly since I've been off the air for e-mail for the last week :(\n\nAnyway, the current behavior from the source tree is that the new\n\"junkfilter\" regression test fails (as does the \"random\" test, but I\nhaven't looked into that yet). However, not only does the regression\ntest fail by crashing the backend, it seems to take the postmaster with\nit. This seems to happen repeatably on my i686/linux system with the\ncurrent source tree as well as with the same tree with my patches. The\nfact that the junkfilter test crashes doesn't bother me much (since I'm\nalready working around there I can probably track it down) but the\npostmaster getting taken out is more worrisome.\n\nIs it possible that the recent change from fork/exec to just fork leaves\nthe postmaster more exposed? I can imagine that it might, but don't have\nany direct experience with it so am just guessing. Any other ideas? Do\npeople see this on other platforms? This is the first time I can recall\nseeing the postmaster go away on a crash of a backend (but of course my\nmemory isn't what it should be :)\n\n - Tom\n",
"msg_date": "Sat, 13 Jun 1998 05:06:05 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd behavior in regression test?"
},
{
"msg_contents": "> \n> Hi. I was able to receive mail yesterday (and mostly got caught up), but\n> am off the air again, probably at least through the weekend. So if this\n> has already been brought up, sorry...\n> \n> I've been working on more patches for automatic type conversion, and\n> they fall on top of some of the recent \"junkfilter\" patches. So I've\n> been waiting for that to settle down before finalizing my patches,\n> particularly since I've been off the air for e-mail for the last week :(\n> \n> Anyway, the current behavior from the source tree is that the new\n> \"junkfilter\" regression test fails (as does the \"random\" test, but I\n> haven't looked into that yet). However, not only does the regression\n> test fail by crashing the backend, it seems to take the postmaster with\n> it. This seems to happen repeatably on my i686/linux system with the\n> current source tree as well as with the same tree with my patches. The\n> fact that the junkfilter test crashes doesn't bother me much (since I'm\n> already working around there I can probably track it down) but the\n> postmaster getting taken out is more worrisome.\n\nRandom is now seeded automatically, so you will need to seed it with a\nfixed value before using it.\n\n> \n> Is it possible that the recent change from fork/exec to just fork leaves\n> the postmaster more exposed? I can imagine that it might, but don't have\n> any direct experience with it so am just guessing. Any other ideas? Do\n> people see this on other platforms? This is the first time I can recall\n> seeing the postmaster go away on a crash of a backend (but of course my\n> memory isn't what it should be :)\n\nMy guess is that the postmaster can no longer restart its backends after\none of them aborts. Something I need to check into perhaps.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 13 Jun 1998 01:22:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd behavior in regression test?"
},
{
"msg_contents": "> > \n> > Is it possible that the recent change from fork/exec to just fork leaves\n> > the postmaster more exposed? I can imagine that it might, but don't have\n> > any direct experience with it so am just guessing. Any other ideas? Do\n> > people see this on other platforms? This is the first time I can recall\n> > seeing the postmaster go away on a crash of a backend (but of course my\n> > memory isn't what it should be :)\n> \n> My guess is that the postmaster can no longer restart its backends after\n> one of them aborts. Something I need to check into perhaps.\n> \n\nI just tried killing a running backend, and could not get the postmaster\nto disappear.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 13 Jun 1998 01:32:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Odd behavior in regression test?"
},
{
"msg_contents": "On Sat, 13 Jun 1998, Bruce Momjian wrote:\n> > Is it possible that the recent change from fork/exec to just fork leaves\n> > the postmaster more exposed? I can imagine that it might, but don't have\n> > any direct experience with it so am just guessing. Any other ideas? Do\n> > people see this on other platforms? This is the first time I can recall\n> > seeing the postmaster go away on a crash of a backend (but of course my\n> > memory isn't what it should be :)\n> \n> My guess is that the postmaster can no longer restart its backends after\n> one of them aborts. Something I need to check into perhaps.\n\nYesterday, while I was working on the Large Object Orphaning problem, I\nwas having similar problems. I had to stop and restart the postmaster\nbefore I could do anything afterwards.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Sat, 13 Jun 1998 11:04:32 +0100 (BST)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Odd behavior in regression test?"
},
{
"msg_contents": "On Sat, 13 Jun 1998, Bruce Momjian wrote:\n\n> > > \n> > > Is it possible that the recent change from fork/exec to just fork leaves\n> > > the postmaster more exposed? I can imagine that it might, but don't have\n> > > any direct experience with it so am just guessing. Any other ideas? Do\n> > > people see this on other platforms? This is the first time I can recall\n> > > seeing the postmaster go away on a crash of a backend (but of course my\n> > > memory isn't what it should be :)\n> > \n> > My guess is that the postmaster can no longer restart its backends after\n> > one of them aborts. Something I need to check into perhaps.\n> > \n> \n> I just tried killing a running backend, and could not get the postmaster\n> to disappear.\n\nTry generating a segmentation fault in a loadable module... works\neverytime here.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Sat, 13 Jun 1998 11:05:19 +0100 (BST)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Odd behavior in regression test?"
},
{
"msg_contents": "> > Is it possible that the recent change from fork/exec to just fork leaves\n> > the postmaster more exposed? I can imagine that it might, but don't have\n> > any direct experience with it so am just guessing. Any other ideas? Do\n> > people see this on other platforms? This is the first time I can recall\n> > seeing the postmaster go away on a crash of a backend (but of course my\n> > memory isn't what it should be :)\n> \n> My guess is that the postmaster can no longer restart its backends after\n> one of them aborts. Something I need to check into perhaps.\n\nDoes your postmaster stop running, or does it crash any backend that is\nstarted. I am seeing the latter, and the cause appears to be that the\npostmaster environment after the restart of the shared memory is not\nproper for a backend. I am looking into it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 14 Jun 1998 18:11:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Odd behavior in regression test?"
},
{
"msg_contents": "> > Is it possible that the recent change from fork/exec to just fork leaves\n> > the postmaster more exposed? I can imagine that it might, but don't have\n> > any direct experience with it so am just guessing. Any other ideas? Do\n> > people see this on other platforms? This is the first time I can recall\n> > seeing the postmaster go away on a crash of a backend (but of course my\n> > memory isn't what it should be :)\n> \n> My guess is that the postmaster can no longer restart its backends after\n> one of them aborts. Something I need to check into perhaps.\n> \n\nThis is now fixed.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 27 Jun 1998 10:46:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Odd behavior in regression test?"
}
] |
[
{
"msg_contents": "> Please enter a FULL description of your problem:\n> ------------------------------------------------\n> Dropping table after aborting a transanction makes PosgresSQL unsable.\n> \n> \n> Please describe a way to repeat the problem. Please try to provide a\n> concise reproducible example, if at all possible: \n> ----------------------------------------------------------------------\n> [srashd]t-ishii{67} psql -e test < b\n> QUERY: drop table test;\n> WARN:Relation test Does Not Exist!\n> QUERY: create table test (i int4);\n> QUERY: create index iindex on test using btree(i);\n> QUERY: begin;\n> QUERY: insert into test values (100);\n> QUERY: select * from test;\n> i\n> ---\n> 100\n> (1 row)\n> \n> QUERY: rollback;\n> QUERY: drop table test;\n> NOTICE:AbortTransaction and not in in-progress state \n> NOTICE:AbortTransaction and not in in-progress state \n> \n> Note that if I do not make an index, it would be ok.\n\nCan someone comment on the cause of the above problem? Is it a bug to\nadd to the TODO list? I have verified it still exists in the current\nsources.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 13 Jun 1998 01:17:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUGS] NOTICE:AbortTransaction and not in in-progress state"
}
] |
[
{
"msg_contents": "\nhello!\n\nI tried to make a function, like strftime in C\ncalled pstrtime( format, date )\n\nhere's the source... but when I compile it and create the function\nunder 6.3.2 first time it gives back an emty string.. after the backend \nstops ;(\n\nplease correct my faults,\n----------------------------------------------------------------------\n#include <stdio.h> /* for sprintf() */\n#include <string.h>\n#include <limits.h>\n#include \"postgres.h\"\n#include \"miscadmin.h\"\n#include \"utils/builtins.h\"\n#include \"utils/nabstime.h\"\n#include \"utils/datetime.h\"\n#include \"access/xact.h\"\n\n#define JDATE_2000 2451545\n\nchar *pstrtime(char *format, DateADT val);\n\n\nchar *\npstrtime( char *format , DateADT val)\n{\n int year,\n month,\n day;\n struct tm * time1;\n char *sometext;\n sometext = malloc(100);\n time1 = malloc(sizeof(struct tm));\n\n j2date(val + JDATE_2000, &year, &month, &day);\n\n time1->tm_year=year;\n time1->tm_mon=month-1;\n time1->tm_mday=day;\n\n strftime(sometext,90,format,time1 );\n free(time1);\n return( sometext );\n}\n\n\nanyway how to create this function under psql ?\nC type postgres type\n-----------------------------------\nDateADT\t\t\tdate\nchar * \t\t\t???????????\n\nthanks,\n\n\tBest regards, \n\t\tRedax\n.----------------------------------------------------------.\n|Zsolt Varga | tel/fax: +36 36 422811 |\n| AgriaComputer LTD | email: [email protected] |\n| System Administrator | URL: http://www.agria.hu/ |\n`----------------------------------------------------------'\n\n",
"msg_date": "Sat, 13 Jun 1998 09:08:38 +0200 (CEST)",
"msg_from": "Zsolt Varga <[email protected]>",
"msg_from_op": true,
"msg_subject": "my strftime func doesn't work. please help."
},
{
"msg_contents": "Zsolt Varga writes:\n> I tried to make a function, like strftime in C\n> called pstrtime( format, date )\n> \n> here's the source... but when I compile it and create the function\n> under 6.3.2 first time it gives back an emty string.. after the backend \n> stops ;(\n> \n> please correct my faults,\n> ----------------------------------------------------------------------\n> #include <stdio.h> /* for sprintf() */\n> #include <string.h>\n> #include <limits.h>\n> #include \"postgres.h\"\n> #include \"miscadmin.h\"\n> #include \"utils/builtins.h\"\n> #include \"utils/nabstime.h\"\n> #include \"utils/datetime.h\"\n> #include \"access/xact.h\"\n> \n> #define JDATE_2000 2451545\n> \n> char *pstrtime(char *format, DateADT val);\n> \n> \n> char *\n> pstrtime( char *format , DateADT val)\n> {\n> int year,\n> month,\n> day;\n> struct tm * time1;\n> char *sometext;\n> sometext = malloc(100);\n> time1 = malloc(sizeof(struct tm));\n> \n> j2date(val + JDATE_2000, &year, &month, &day);\n> \n> time1->tm_year=year;\n> time1->tm_mon=month-1;\n> time1->tm_mday=day;\n> \n> strftime(sometext,90,format,time1 );\n> free(time1);\n> return( sometext );\n> }\n\n\nI don't see what is causing your failure, perhaps you might want to test\nit under gdb. But, you are using malloc() way too much which will make this\nquite slow and wastful of memory. Also, you probably should be using palloc()\nnot malloc() or it will cause the backend to leak memory. For example:\n\nchar *\npstrtime( char *format , DateADT val)\n{\n\tint \tyear,\n\t\tmonth,\n\t\tday;\n\tstruct tm time1;\n\tchar\tbuf[256];\n\tchar\t*result;\n\n\tj2date(val + JDATE_2000, &year, &month, &day);\n\tmemset(time1, 0, sizeof(time1));\n\ttime1.tm_year = year;\n\ttime1.tm_mon = month-1;\n\ttime1.tm_mday = day;\n\n\tstrftime(buf, 90, format, &time1);\n\n\tresult = palloc(1 + strlen(buf));\n\tif (result)\n\t strcpy(result, buf, sizeof(buf))\n\treturn result;\n}\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n",
"msg_date": "Sat, 13 Jun 1998 13:49:50 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] my strftime func doesn't work. please help."
}
] |
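A note on the pstrtime() thread above: neither version posted there quite works as shown. The original passes a struct tm with most fields uninitialized to strftime(), and the suggested rewrite still contains a few slips (strcpy() called with three arguments and no terminating semicolon, memset() given the struct instead of its address, and tm_year not offset from 1900, which matters for %Y). The sketch below is an editorial illustration only, assuming the 6.3-era backend interfaces named in the thread (DateADT, j2date(), palloc()) behave the way the posters describe; it does not answer the poster's open question of how to declare the char * return type to CREATE FUNCTION.

#include <time.h>
#include <string.h>

#include "postgres.h"
#include "utils/builtins.h"
#include "utils/datetime.h"

#define JDATE_2000 2451545

char *
pstrtime(char *format, DateADT val)
{
	int			year,
				month,
				day;
	struct tm	tm;
	char		buf[256];
	char	   *result;

	j2date(val + JDATE_2000, &year, &month, &day);

	memset(&tm, 0, sizeof(tm));		/* zero every field strftime() might read */
	tm.tm_year = year - 1900;		/* struct tm counts years from 1900 */
	tm.tm_mon = month - 1;
	tm.tm_mday = day;

	strftime(buf, sizeof(buf), format, &tm);

	result = (char *) palloc(strlen(buf) + 1);	/* palloc, not malloc, so the backend does not leak */
	strcpy(result, buf);
	return result;
}

Formatting into a stack buffer and copying the result into a single palloc() block keeps the allocation pattern recommended in the reply while avoiding the malloc() leak in the original.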
[
{
"msg_contents": "Hi,\n\ncurrently I'm writing a Web application, which should \nbe able to interface to any SQL database. Using perl this\nseems to be straigt forward, mainly due to the DBI module\nof Tim Bunce. What makes this task really difficult are\nthe SQL-dialects of every database. Although SQL is\nstandardized, there are many, subtle differences which have\nto be taken into account. After stripping down my application\nto an absolut basic syntax, there is still one problem left.\n\nPostgreSQL understands the following syntax:\n\n select count(SUBSTR(var,1,5)), SUBSTR(var,1,5) from t group by 2;\n select count(SUBSTR(var,1,5)) as x, SUBSTR(var,1,5) as y from t group by y;\n\nUnfortunately other databases - like Oracle - are not able to\nhandle these statements. Oracle understands only the following syntax:\n\n select count(SUBSTR(var,1,5)), SUBSTR(var,1,5) from t group by SUBSTR(var,1,5);\n\nwhich gives an error with PostgreSQL !\n\n\nI don't know if any of these variants are standard or non-standard,\nbut it would be very helpful, if PostgreSQL would be able to\nhandle all of these examples. From the functional point of view,\nthere is no difference. I guess, only the parser has to be adapted.\n\n\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Sat, 13 Jun 1998 20:07:42 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wishlist for next version: group by clause"
},
{
"msg_contents": "\nI didn't realize PG could not do\n\n\tgroup by [function on column]\n\nOuch!\n\nI *think* all the \"real\" RDBMS can do this. If Oracle and \nSybase both support it, that makes it more or less a de facto \nstandard :-) I'm sure we use this syntax in several places in \nour apps -- certainly in our time-series analysis package. \nImplication is that 6.3 is still not functional enough to \nreplace an existing commercial SQL server such as Oracle or \nSybase for production apps, without expensive manual proofing \nand rewriting of embedded SQL statements.\n\nDoes anyone know whether this group by syntax is ANSI SQL92?\n\n\t\t\t---------------\n\nThere must be many sites in the same boat with mine: running \nan outmoded version of one of the Big Guys' engines, unwilling \nto pay the outrageous support and upgrade fees required to get \ncurrent, wanting full Linux support, yet unable to switch to \nPG because of small gotchas like this one. It's a small \ngotcha if you are writing a brand new app, but it's a large \ngotcha if you have to comb through thousands of embedded SQL \nstatements in hundreds of production apps and manually fix it \nin each instance. \n\nIs there a list of the \"PG is different\" gotchas like this, \nand is their elimination being given a high priority? I think \n\"plug-n-play\" replacement of existing servers with PG is a \ngood practical goal -- so long as the app writers have wisely \navoided vendor-specific syntax in their SQL, of course :-) \nI think conversions of this sort would be good publicity \nfor PG, and I would be willing to write up a public report\non mine if and when PG evolves to the point where I can\ndo it!\n\nWhat do you all think about the PR value of new PG-driven\napps vs conversion of existing production apps?\n\nde\n\n.............................................................................\n:De Clarke, Software Engineer UCO/Lick Observatory, UCSC:\n:Mail: [email protected] | \"There is no problem in computer science that cannot: \n:Web: www.ucolick.org | be solved by another level of indirection\" --J.O. :\n\n\n\n",
"msg_date": "Tue, 16 Jun 1998 11:33:39 -0700 (PDT)",
"msg_from": "De Clarke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Wishlist for next version: group by clause "
},
{
"msg_contents": "> \n> \n> I didn't realize PG could not do\n> \n> \tgroup by [function on column]\n> \n> Ouch!\n> \n> I *think* all the \"real\" RDBMS can do this. If Oracle and \n> Sybase both support it, that makes it more or less a de facto \n> standard :-) I'm sure we use this syntax in several places in \n> our apps -- certainly in our time-series analysis package. \n> Implication is that 6.3 is still not functional enough to \n> replace an existing commercial SQL server such as Oracle or \n> Sybase for production apps, without expensive manual proofing \n> and rewriting of embedded SQL statements.\n> \n> Does anyone know whether this group by syntax is ANSI SQL92?\n\nAdded to TODO. Vadim may have a comment on this, and how hard it is to\ndo. I know we allow functional indexes, but am not sure how that\nrelates to this problem.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 16 Jun 1998 15:09:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Wishlist for next version: group by clause"
},
{
"msg_contents": "Do you mean in a select statement? Such as:\n\n SELECT func(date) as month, count(*) FROM foo GROUP BY month;\n\nOr even:\n\n SELECT count(*) FROM foo GROUP BY func(date);\n\nThe first is supported. The second would require some changes to the parser.\n\nDe Clarke wrote:\n\n> I didn't realize PG could not do\n>\n> group by [function on column]\n>\n> Ouch!\n>\n\n\n\n",
"msg_date": "Tue, 16 Jun 1998 16:27:35 -0400",
"msg_from": "David Hartwig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Wishlist for next version: group by clause"
},
{
"msg_contents": "\nDavid Hartwig said:\n\n>> Do you mean in a select statement? Such as:\n>> \n>> SELECT func(date) as month, count(*) FROM foo GROUP BY month;\n>> \n>> Or even:\n>> \n>> SELECT count(*) FROM foo GROUP BY func(date);\n>> \n>> The first is supported. The second would require some changes to the parser.\n\n#2 was what I had in mind...\n\nThis is a pointless query, but it demonstrates a couple of\nthings that the sybase SQL interpreter supports:\n\n\tselect avg(datepart(minute,date)) from hires_events \n\t\tgroup by datepart(hour,date)\n\n1. you can apply stat functions such as avg and sum to\n\tfunctions on columns as well as to raw columns\n\n2. you can group by a function on a column\n\nI think Oracle will do this also...\n\nde\n\n.............................................................................\n:De Clarke, Software Engineer UCO/Lick Observatory, UCSC:\n:Mail: [email protected] | \"There is no problem in computer science that cannot: \n:Web: www.ucolick.org | be solved by another level of indirection\" --J.O. :\n\n\n\n",
"msg_date": "Tue, 16 Jun 1998 14:25:18 -0700 (PDT)",
"msg_from": "De Clarke <[email protected]>",
"msg_from_op": false,
"msg_subject": "group by : syntactic example (sybase)"
}
] |
[
{
"msg_contents": "Hello,\n\nI was wondering what was the status of PL/Perl.\n\nThanks,\n\nEdwin S. Ramirez\n\n",
"msg_date": "Sat, 13 Jun 1998 17:30:19 -0400",
"msg_from": "\"Edwin S. Ramirez\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PL/Perl"
}
] |
[
{
"msg_contents": "\nI have been playing a little with the performance tests found in\npgsql/src/tests/performance and have a few observations that might be of\nminor interest.\n\nThe tests themselves are simple enough although the result parsing in the\ndriver did not work on Linux. I am enclosing a patch below to fix this. I\nthink it will also work better on the other systems.\n\nA summary of results from my testing are below. Details are at the bottom\nof this message.\n\nMy test system is 'leslie':\n\n linux 2.0.32, gcc version 2.7.2.3\n P133, HX chipset, 512K L2, 32MB mem\n NCR810 fast scsi, Quantum Atlas 2GB drive (7200 rpm).\n\n\n Results Summary (times in seconds)\n\n Single txn 8K txn Create 8K idx 8K random Simple\nCase Description 8K insert 8K insert Index Insert Scans Orderby\n=================== ========== ========= ====== ====== ========= =======\n1 From Distribution\n P90 FreeBsd -B256 39.56 1190.98 3.69 46.65 65.49 2.27\n IDE\n\n2 Running on leslie\n P133 Linux 2.0.32 15.48 326.75 2.99 20.69 35.81 1.68\n SCSI 32M\n\n3 leslie, -o -F\n no forced writes 15.90 24.98 2.63 20.46 36.43 1.69\n\n4 leslie, -o -F\n no ASSERTS 14.92 23.23 1.38 18.67 33.79 1.58\n\n5 leslie, -o -F -B2048\n more buffers 21.31 42.28 2.65 25.74 42.26 1.72\n\n6 leslie, -o -F -B2048\n more bufs, no ASSERT 20.52 39.79 1.40 24.77 39.51 1.55\n\n\n\n\n Case to Case Difference Factors (+ is faster)\n\n Single txn 8K txn Create 8K idx 8K random Simple\nCase Description 8K insert 8K insert Index Insert Scans Orderby\n=================== ========== ========= ====== ====== ========= =======\n\nleslie vs BSD P90. 2.56 3.65 1.23 2.25 1.83 1.35\n\n(noflush -F) vs no -F -1.03 13.08 1.14 1.01 -1.02 1.00\n\nNo Assert vs Assert 1.05 1.07 1.90 1.06 1.07 1.09\n\n-B256 vs -B2048 1.34 1.69 1.01 1.26 1.16 1.02\n\n\nObservations:\n\n - leslie (P133 linux) appears to be about 1.8 times faster than the\n P90 BSD system used for the test result distributed with the source, not\n counting the 8K txn insert case which was completely disk bound.\n\n - SCSI disks make a big (factor of 3.6) difference. During this test the\n disk was hammering and cpu utilization was < 10%.\n\n - Assertion checking seems to cost about 7% except for create index where\n it costs 90%\n\n - the -F option to avoid flushing buffers has tremendous effect if there are\n many very small transactions. Or, another way, flushing at the end of the\n transaction is a major disaster for performance.\n\n - Something is very wrong with our buffer cache implementation. Going from\n 256 buffers to 2048 buffers costs an average of 25%. In the 8K txn case\n it costs about 70%. I see looking at the code and profiling that in the 8K\n txn case this is in BufferSync() which examines all the buffers at commit\n time. I don't quite understand why it is so costly for the single 8K row\n txn (35%) though.\n\nIt would be nice to have some more tests. Maybe the Wisconsin stuff will\nbe useful.\n\n\n\n----------------- patch to test harness. apply from pgsql ------------\n*** src/test/performance/runtests.pl.orig\tSun Jun 14 11:34:04 1998\n\nDifferences %\n\n\n----------------- patch to test harness. apply from pgsql ------------\n*** src/test/performance/runtests.pl.orig\tSun Jun 14 11:34:04 1998\n--- src/test/performance/runtests.pl\tSun Jun 14 12:07:30 1998\n***************\n*** 84,123 ****\n open (STDERR, \">$TmpFile\") or die;\n select (STDERR); $| = 1;\n \n! for ($i = 0; $i <= $#perftests; $i++)\n! 
{\n \t$test = $perftests[$i];\n \t($test, $XACTBLOCK) = split (/ /, $test);\n \t$runtest = $test;\n! \tif ( $test =~ /\\.ntm/ )\n! \t{\n! \t\t# \n \t\t# No timing for this queries\n- \t\t# \n \t\tclose (STDERR);\t\t# close $TmpFile\n \t\topen (STDERR, \">/dev/null\") or die;\n \t\t$runtest =~ s/\\.ntm//;\n \t}\n! \telse\n! \t{\n \t\tclose (STDOUT);\n \t\topen(STDOUT, \">&SAVEOUT\");\n \t\tprint STDOUT \"\\nRunning: $perftests[$i+1] ...\";\n \t\tclose (STDOUT);\n \t\topen (STDOUT, \">/dev/null\") or die;\n \t\tselect (STDERR); $| = 1;\n! \t\tprintf \"$perftests[$i+1]: \";\n \t}\n \n \tdo \"sqls/$runtest\";\n \n \t# Restore STDERR to $TmpFile\n! \tif ( $test =~ /\\.ntm/ )\n! \t{\n \t\tclose (STDERR);\n \t\topen (STDERR, \">>$TmpFile\") or die;\n \t}\n- \n \tselect (STDERR); $| = 1;\n \t$i++;\n }\n--- 84,116 ----\n open (STDERR, \">$TmpFile\") or die;\n select (STDERR); $| = 1;\n \n! for ($i = 0; $i <= $#perftests; $i++) {\n \t$test = $perftests[$i];\n \t($test, $XACTBLOCK) = split (/ /, $test);\n \t$runtest = $test;\n! \tif ( $test =~ /\\.ntm/ ) {\n \t\t# No timing for this queries\n \t\tclose (STDERR);\t\t# close $TmpFile\n \t\topen (STDERR, \">/dev/null\") or die;\n \t\t$runtest =~ s/\\.ntm//;\n \t}\n! \telse {\n \t\tclose (STDOUT);\n \t\topen(STDOUT, \">&SAVEOUT\");\n \t\tprint STDOUT \"\\nRunning: $perftests[$i+1] ...\";\n \t\tclose (STDOUT);\n \t\topen (STDOUT, \">/dev/null\") or die;\n \t\tselect (STDERR); $| = 1;\n! \t\tprint \"$perftests[$i+1]: \";\n \t}\n \n \tdo \"sqls/$runtest\";\n \n \t# Restore STDERR to $TmpFile\n! \tif ( $test =~ /\\.ntm/ ) {\n \t\tclose (STDERR);\n \t\topen (STDERR, \">>$TmpFile\") or die;\n \t}\n \tselect (STDERR); $| = 1;\n \t$i++;\n }\n***************\n*** 128,138 ****\n open (TMPF, \"<$TmpFile\") or die;\n open (RESF, \">$ResFile\") or die;\n \n! while (<TMPF>)\n! {\n! \t$str = $_;\n! \t($test, $rtime) = split (/:/, $str);\n! \t($tmp, $rtime, $rest) = split (/[ \t]+/, $rtime);\n! \tprint RESF \"$test: $rtime\\n\";\n }\n \n--- 121,130 ----\n open (TMPF, \"<$TmpFile\") or die;\n open (RESF, \">$ResFile\") or die;\n \n! while (<TMPF>) {\n! if (m/^(.*: ).* ([0-9:.]+) *elapsed/) {\n! \t ($test, $rtime) = ($1, $2);\n! \t print RESF $test, $rtime, \"\\n\";\n! }\n }\n\n------------------------------------------------------------------------\n\n \n------------------------- testcase detail --------------------------\n \n1. from distribution\n DBMS:\t\tPostgreSQL 6.2b10\n OS:\t\tFreeBSD 2.1.5-RELEASE\n HardWare:\ti586/90, 24M RAM, IDE\n StartUp:\tpostmaster -B 256 '-o -S 2048' -S\n Compiler:\tgcc 2.6.3\n Compiled:\t-O, without CASSERT checking, with\n \t\t-DTBL_FREE_CMD_MEMORY (to free memory\n \t\tif BEGIN/END after each query execution)\n DB connection startup: 0.20\n 8192 INSERTs INTO SIMPLE (1 xact): 39.58\n 8192 INSERTs INTO SIMPLE (8192 xacts): 1190.98\n Create INDEX on SIMPLE: 3.69\n 8192 INSERTs INTO SIMPLE with INDEX (1 xact): 46.65\n 8192 random INDEX scans on SIMPLE (1 xact): 65.49\n ORDER BY SIMPLE: 2.27\n \n \n2. 
run on leslie with asserts\n DBMS:\t\tPostgreSQL 6.3.2 (plus changes to 98/06/01)\n OS:\t\tLinux 2.0.32 leslie\n HardWare:\ti586/133 HX 512, 32M RAM, fast SCSI, 7200rpm\n StartUp:\tpostmaster -B 256 '-o -S 2048' -S\n Compiler:\tgcc 2.7.2.3\n Compiled:\t-O, WITH CASSERT checking, with\n \t\t-DTBL_FREE_CMD_MEMORY (to free memory\n \t\tif BEGIN/END after each query execution)\n DB connection startup: 0.10\n 8192 INSERTs INTO SIMPLE (1 xact): 15.48\n 8192 INSERTs INTO SIMPLE (8192 xacts): 326.75\n Create INDEX on SIMPLE: 2.99\n 8192 INSERTs INTO SIMPLE with INDEX (1 xact): 20.69\n 8192 random INDEX scans on SIMPLE (1 xact): 35.81\n ORDER BY SIMPLE: 1.68\n \n \n3. with -F to avoid forced i/o\n DBMS:\t\tPostgreSQL 6.3.2 (plus changes to 98/06/01)\n OS:\t\tLinux 2.0.32 leslie\n HardWare:\ti586/133 HX 512, 32M RAM, fast SCSI, 7200rpm\n StartUp:\tpostmaster -B 256 '-o -S 2048 -F' -S\n Compiler:\tgcc 2.7.2.3\n Compiled:\t-O, WITH CASSERT checking, with\n \t\t-DTBL_FREE_CMD_MEMORY (to free memory\n \t\tif BEGIN/END after each query execution)\n DB connection startup: 0.10\n 8192 INSERTs INTO SIMPLE (1 xact): 15.90\n 8192 INSERTs INTO SIMPLE (8192 xacts): 24.98\n Create INDEX on SIMPLE: 2.63\n 8192 INSERTs INTO SIMPLE with INDEX (1 xact): 20.46\n 8192 random INDEX scans on SIMPLE (1 xact): 36.43\n ORDER BY SIMPLE: 1.69\n \n \n4. no asserts, -F to avoid forced I/O\n DBMS:\t\tPostgreSQL 6.3.2 (plus changes to 98/06/01)\n OS:\t\tLinux 2.0.32 leslie\n HardWare:\ti586/133 HX 512, 32M RAM, fast SCSI, 7200rpm\n StartUp:\tpostmaster -B 256 '-o -S 2048' -S\n Compiler:\tgcc 2.7.2.3\n Compiled:\t-O, No CASSERT checking, with\n \t\t-DTBL_FREE_CMD_MEMORY (to free memory\n \t\tif BEGIN/END after each query execution)\n DB connection startup: 0.10\n 8192 INSERTs INTO SIMPLE (1 xact): 14.92\n 8192 INSERTs INTO SIMPLE (8192 xacts): 23.23\n Create INDEX on SIMPLE: 1.38\n 8192 INSERTs INTO SIMPLE with INDEX (1 xact): 18.67\n 8192 random INDEX scans on SIMPLE (1 xact): 33.79\n ORDER BY SIMPLE: 1.58\n \n \n5. with more buffers (2048 vs 256) and -F to avoid forced i/o\n DBMS:\t\tPostgreSQL 6.3.2 (plus changes to 98/06/01)\n OS:\t\tLinux 2.0.32 leslie\n HardWare:\ti586/133 HX 512, 32M RAM, fast SCSI, 7200rpm\n StartUp:\tpostmaster -B 2048 '-o -S 2048 -F' -S\n Compiler:\tgcc 2.7.2.3\n Compiled:\t-O, WITH CASSERT checking, with\n \t\t-DTBL_FREE_CMD_MEMORY (to free memory\n \t\tif BEGIN/END after each query execution)\n DB connection startup: 0.11\n 8192 INSERTs INTO SIMPLE (1 xact): 21.31\n 8192 INSERTs INTO SIMPLE (8192 xacts): 42.28\n Create INDEX on SIMPLE: 2.65\n 8192 INSERTs INTO SIMPLE with INDEX (1 xact): 25.74\n 8192 random INDEX scans on SIMPLE (1 xact): 42.26\n ORDER BY SIMPLE: 1.72\n \n \n6. 
No Asserts, more buffers (2048 vs 256) and -F to avoid forced i/o\n DBMS:\t\tPostgreSQL 6.3.2 (plus changes to 98/06/01)\n OS:\t\tLinux 2.0.32 leslie\n HardWare:\ti586/133 HX 512, 32M RAM, fast SCSI, 7200rpm\n StartUp:\tpostmaster -B 2048 '-o -S 2048 -F' -S\n Compiler:\tgcc 2.7.2.3\n Compiled:\t-O, No CASSERT checking, with\n \t\t-DTBL_FREE_CMD_MEMORY (to free memory\n \t\tif BEGIN/END after each query execution)\n DB connection startup: 0.11\n 8192 INSERTs INTO SIMPLE (1 xact): 20.52\n 8192 INSERTs INTO SIMPLE (8192 xacts): 39.79\n Create INDEX on SIMPLE: 1.40\n 8192 INSERTs INTO SIMPLE with INDEX (1 xact): 24.77\n 8192 random INDEX scans on SIMPLE (1 xact): 39.51\n ORDER BY SIMPLE: 1.55\n---------------------------------------------------------------------\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n",
"msg_date": "Sun, 14 Jun 1998 15:35:13 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": true,
"msg_subject": "performance tests, initial results"
},
{
"msg_contents": "First, thanks for fixing runtests.pl, David (I'm not so cool perl\nprogrammer as I would like to be -:)\n\n> - Assertion checking seems to cost about 7% except for create index where\n> it costs 90%\n\nWow! This should be discovered!\n\n> - Something is very wrong with our buffer cache implementation. Going from\n> 256 buffers to 2048 buffers costs an average of 25%. In the 8K txn case\n> it costs about 70%. I see looking at the code and profiling that in the 8K\n> txn case this is in BufferSync() which examines all the buffers at commit\n ^^^^^^^^^^^^^^^^^^^^^^^^\n> time. I don't quite understand why it is so costly for the single 8K row\n> txn (35%) though.\n\nThis one is in my plans for 1 - 1.5 year :)\n\nVadim\n",
"msg_date": "Mon, 15 Jun 1998 09:09:18 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] performance tests, initial results"
}
] |
[
{
"msg_contents": "subscribe\n\n",
"msg_date": "Mon, 15 Jun 1998 13:11:22 +0300",
"msg_from": "Vassilis Papadimos <[email protected]>",
"msg_from_op": true,
"msg_subject": "subscribe"
}
] |
[
{
"msg_contents": "This message received no replies from the SQL list and I forward\nit to hackers looking for additional thoughts.\n\nEXECUTIVE SUMMARY:\n\nI have two tables with identical structure.\nOne table has a unique index on 5 of the \n6 table attributes.\n\nWhen attempting to insert from the non-indexed\ntable into the uniquely indexed table, the\ninsert fails due to \"duplicate key\" error. (index definition below)\n\nHowever, this query, which tries to identify tuples with identical keys,\nreturns 0 rows. Each attribute included in the multifield index\nis qualified in the where clause. Why doesn't the\nselect show the duplicate tuples?\n\n select newpropsales.* from newpropsales n, propsales p\n where n.city=p.city and n.county=p.county and\n n.street=p.street and n.streetno=p.streetno and\n n.closingdate=p.closingdate ;\n\nclosingdate|county|city|streetno|street|price\n- -----------+------+----+--------+------+-----\n(0 rows)\n\n\n---------- Forwarded message ----------\nDate: Fri, 5 Jun 1998 19:42:21 -0400 (EDT)\nFrom: Marc Howard Zuckman <[email protected]>\nSubject: Need help understanding unique indices\n\nI have a need to incrementally add new data to a table with this\nstructure:\nTable = propsales\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| closingdate | date | 4 |\n| county | varchar() | 50 |\n| city | varchar() | 50 |\n| streetno | varchar() | 10 |\n| street | varchar() | 70 |\n| price | float8 | 8 |\n+----------------------------------+----------------------------------+-------+\n\nA second table, newpropsales, exists with identical structure.\n\nThe original table, propsales has a unique index that includes all of the\nrecord fields except the price field. The index is defined as follows:\n\nCREATE UNIQUE INDEX propsales_key on propsales using btree ( city varchar_ops, \nstreet varchar_ops, streetno varchar_ops,\ncounty varchar_ops, closingdate date_ops );\n\nWhen loading new data into the database, it is loaded into table\nnewpropsales. An effort to remvove duplicate tuples is then made\nusing this series of queries:\n\ndelete from recentpropsales; --temporary table with identical structure to those above.\n- -- get rid of any duplicates contained solely within newpropsales\ninsert into recentpropsales select distinct * from newpropsales; \ndelete from newpropsales;\ninsert into newpropsales select * from recentpropsales;\ndelete from recentpropsales;\ndelete from newminclosingdate;\ninsert into newminclosingdate select min(closingdate) from newpropsales;\n- -- get tuples from accumulated data that are in same time frame as new data.\ninsert into recentpropsales select propsales.* from propsales,newminclosingdate where \nclosingdate >= newminclosingdate.min;\n\n- -- attempt to eliminate duplicates tuples that are present in\n- -- both tables considered together\n- -- This will NOT eliminate all index duplicates because\n- -- price is not indexed. Therefore, tuples that are identical\n- -- in every way but have different price values will not be\n- -- deleted from the new data set.\n\ndelete from newpropsales where exists (\nselect city from recentpropsales r where\nr.county=newpropsales.county and r.price=newpropsales.price and\nr.city=newpropsales.city and r.closingdate=newpropsales.closingdate\nand r.street=newpropsales.street and r.streetno=newpropsales.streetno);\n\nAll of this seems to work ok. 
But, this fails\n\ninsert into propsales select * from newpropsales;\n\nbecause a duplicate key is encountered.\n\nHowever, this query, which tries to identify tuples with identical keys,\nreturns 0 rows. Why?\n\n select newpropsales.* from newpropsales n, propsales p\n where n.city=p.city and n.county=p.county and\n n.street=p.street and n.streetno=p.streetno and\n n.closingdate=p.closingdate ;\n\nclosingdate|county|city|streetno|street|price\n- -----------+------+----+--------+------+-----\n(0 rows)\n\n\nMarc Zuckman\[email protected]\n\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n_ Visit The Home and Condo MarketPlace\t\t _\n_ http://www.ClassyAd.com\t\t\t _\n_\t\t\t\t\t\t\t _\n_ FREE basic property listings/advertisements and searches. _\n_\t\t\t\t\t\t\t _\n_ Try our premium, yet inexpensive services for a real\t _\n_ selling or buying edge!\t\t\t\t _\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n\n\n",
"msg_date": "Mon, 15 Jun 1998 11:53:16 -0400 (EDT)",
"msg_from": "Marc Howard Zuckman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need help understanding unique indices (fwd)"
}
] |
[
{
"msg_contents": "> \n> Hi,\n> since the chance for the new template names I can configure, make and\n> start the postmaster. psql report's \"cant't resolve symbol\n> 'PQsetdbLogin'. I have solved it now by myself. In\n> src/interfaces/libpq/Makefile.in I changed\n> \n> \tifeq ($(PORTNAME), linux)\n> to\n> \tifeq ($(PORTNAME), linux_i386)\n\nI think I have to make every linux port define the symbol 'linux'. I\nwill check into this.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 15 Jun 1998 13:46:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] template portname problems"
},
{
"msg_contents": "> \n> Hi,\n> since the chance for the new template names I can configure, make and\n> start the postmaster. psql report's \"cant't resolve symbol\n> 'PQsetdbLogin'. I have solved it now by myself. In\n> src/interfaces/libpq/Makefile.in I changed\n> \n> \tifeq ($(PORTNAME), linux)\n> to\n> \tifeq ($(PORTNAME), linux_i386)\n> \n> -Egon\n> \n> \n\nOK, made the change you suggested. Please give it a try.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 15 Jun 1998 17:00:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] template portname problems"
}
] |
[
{
"msg_contents": "I have written a script to remove braces around single statements, if\nthe statement is only one line in length.\n\nThe macro fixup context diff was 1,200 lines, and this diff is 12k\nlines.\n\nHope no one is sitting on patches.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 15 Jun 1998 15:16:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "removal of braces"
},
{
"msg_contents": "> \n> I have written a script to remove braces around single statements, if\n> the statement is only one line in length.\n> \n> The macro fixup context diff was 1,200 lines, and this diff is 12k\n> lines.\n> \n> Hope no one is sitting on patches.\n\nThey had things like:\n\n\tif (test != 0)\n\t\tmacro;\n\nwhile the macro was:\n\n#define macro() \\\n\tstmt1; \\\n\tstmt2; \\\n\tstmt3;\n\nOf course, only the stmt1 is conditional. The rest are always executed.\nI am sure there were some bugs fixed by this cleanup.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 15 Jun 1998 16:55:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] removal of braces"
},
{
"msg_contents": "Bruce Momjian chortles ominiously:\n> I have written a script to remove braces around single statements, if\n> the statement is only one line in length.\n> \n> The macro fixup context diff was 1,200 lines, and this diff is 12k\n> lines.\n> \n> Hope no one is sitting on patches.\n\nIs this trip necessary? While I am a strong believer in aesthetics when\nit comes to code (make it pretty first, making pretty code work is easy),\nI am not sure I support wholesale changes (12,000 lines of diff) for\nthe sake of purely cosmetic issues.\n\nIt is somewhat costly to the developers as we will all have to pull a complete\nnew source tree from CVS.\n\nIt is also somewhat risky. Suppose the script makes an error some\nwhere due to a tricky macro or suchlike. If this is not in something that\ngets checked by the regression test how likely are we to find it?\n\nAnd, for those of us contemplating larger projects where we might change a\nlarge number of files over a period of weeks or months, it presents a\nreally scary merge problem.\n\nThat said, if you get my patch in before you whack the braces, I don't have\nanything right now that would be harmed.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n",
"msg_date": "Mon, 15 Jun 1998 13:59:58 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] removal of braces"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I am sure there were some bugs fixed by this cleanup.\n\nAnd some introduced, most likely. Just how much does your script know\nabout C syntax, macros, etc?\n\nI concur with David Gould that this is a risky, expensive exercise\nwith very little potential gain.\n\nIf you were able to identify any spots with the sort of macro-induced\nbug you illustrated, great! Fix 'em by hand. But I doubt that a\n12000-line wholesale modification is worth the trouble. Heck, half\nthe code I've looked at in pgsql doesn't even conform to indentation/\ntab-width conventions ... and *those* we have well-tested tools to fix.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Jun 1998 17:24:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] removal of braces "
},
{
"msg_contents": "> \n> Bruce Momjian chortles ominiously:\n> > I have written a script to remove braces around single statements, if\n> > the statement is only one line in length.\n> > \n> > The macro fixup context diff was 1,200 lines, and this diff is 12k\n> > lines.\n> > \n> > Hope no one is sitting on patches.\n> \n> Is this trip necessary? While I am a strong believer in aesthetics when\n> it comes to code (make it pretty first, making pretty code work is easy),\n> I am not sure I support wholesale changes (12,000 lines of diff) for\n> the sake of purely cosmetic issues.\n> \n> It is somewhat costly to the developers as we will all have to pull a complete\n> new source tree from CVS.\n\nYes, it is a tradeoff in making so much of a change, but I reviewed the\ncode, and patch file, and it looked good. I thought it was broken\nbecause I could not recompile after it, but then realized the macros\nwere broken.\n\n> It is also somewhat risky. Suppose the script makes an error some\n> where due to a tricky macro or suchlike. If this is not in something that\n> gets checked by the regression test how likely are we to find it?\n\nIf it is broken, we back out the changes.\n\n> \n> And, for those of us contemplating larger projects where we might change a\n> large number of files over a period of weeks or months, it presents a\n> really scary merge problem.\n\nThe code was very conservative:\n\n\tawk '\t{\tline3 = $0; /* remove un-needed braces around single statements */\n\t\t\tif (skips > 0)\n\t\t\t\tskips--;\n\t\t\tif (line1 ~ \"\t\t*{$\" &&\n\t\t\t line2 ~ \"\t\t*[^;{}]*;$\" &&\n\t\t\t line3 ~ \"\t\t*}$\")\n\t\t\t{\n\t\t\t\tprint line2;\n\t\t\t\tline1 = \"\";\n\t\t\t\tline2 = \"\";\n\t\t\t\tline3 = \"\";\n\t\t\t\tskips = 3;\n\t\t\t}\n\t\t\telse\n\t \t\t\tif (skips == 0 && NR >= 3)\n\t\t\t\t\tprint line1;\n\t\t\tline1 = line2;\n\t\t\tline2 = line3;\n\t\t\tline3 = \"\";\n\t\t}\n\t\tEND {\n\t\t\tif (skips <= 1)\n\t\t\t\tprint line1;\n\t\t\tif (skips <= 2)\n\t\t\t\tprint line2;\n\t}'\n\n\n> \n> That said, if you get my patch in before you whack the braces, I don't have\n> anything right now that would be harmed.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 16 Jun 1998 03:22:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] removal of braces"
},
{
"msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > I am sure there were some bugs fixed by this cleanup.\n> \n> And some introduced, most likely. Just how much does your script know\n> about C syntax, macros, etc?\n\nI have posted the patch to the list.\n\n> \n> I concur with David Gould that this is a risky, expensive exercise\n> with very little potential gain.\n> \n> If you were able to identify any spots with the sort of macro-induced\n> bug you illustrated, great! Fix 'em by hand. But I doubt that a\n> 12000-line wholesale modification is worth the trouble. Heck, half\n> the code I've looked at in pgsql doesn't even conform to indentation/\n> tab-width conventions ... and *those* we have well-tested tools to fix.\n\nThat is more work than fixing all the macros. I think 98% should be\nproperly formatted for 4-space tabs. I run pgindent just before every\nbeta period, and have added the brace removal code to pgindent.\n\nI got tired of removing them as I saw them. Now, we have a standard.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 16 Jun 1998 03:23:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] removal of braces"
}
] |
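The macro pitfall described in the brace-removal thread above (only the first statement of a multi-statement macro ends up guarded by an unbraced if) is the textbook case for wrapping such macros in do { ... } while (0). The fragment below is an editorial illustration with made-up names, not code from the PostgreSQL tree:

#include <stdio.h>

/* Unsafe: under an unbraced if, only the first printf is conditional. */
#define UNSAFE_MACRO() \
	printf("stmt1\n"); \
	printf("stmt2\n"); \
	printf("stmt3\n");

/* Safe: the do { ... } while (0) wrapper expands to a single statement. */
#define SAFE_MACRO() \
	do { \
		printf("stmt1\n"); \
		printf("stmt2\n"); \
		printf("stmt3\n"); \
	} while (0)

int
main(void)
{
	int		test = 0;

	if (test != 0)
		SAFE_MACRO();		/* prints nothing, as intended */

	if (test != 0)
		UNSAFE_MACRO();		/* "stmt2" and "stmt3" still print */

	return 0;
}

Macros written this way stay correct whether or not the caller uses braces, so a brace-stripping pass like the one discussed above cannot silently change which statements are conditional.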
[
{
"msg_contents": "\n(14 Jun 1998)\n\nHere is the long awaited optimized version of the S_LOCK patch. This\nincorporates all the precedeing patches and emailed suggestions and the\nresults of the performance testing I posted last week. I would like\nto get this tested on as many platforms as possible so I can verify it went\nin correctly (as opposed to the horrorshow last time I sent in a patch).\n\nOnce this is confirmed, I will make a tarball of files that can be dropped\ninto a 6.3.2 source tree as a few people have asked for this in 6.3.2 as\nwell.\n\n\nI have changed the portability model a bit from the last patch. Noticing\nthat the *BSD platforms and Linux all seem to use GCC, I have separated\nthe platforms in GCC platforms and non-GCC platforms. This was more or\nless the case before because the GCC platforms were using the GCC specific\n__asm__() syntax, so I have merely made it explicit. Having done so, I then\nfelt free to use the GCC __inline__ feature as well which made the code\nprettier.\n\nPossibly someone with access to the BSD platforms could get rid of the\nNEED_I386_TAS_ASM and NEED_SPARC_TAS_ASM in the include/port/bsd.h and\ninclude/port/bsdi.h as they are redundant now.\n\nThe patch is base on the pgsql CVS tree as of midday 14 Jun 1998.\n\nTo apply the patch, (assuming you unpacked the tar file in /tmp):\n\ncd pgsql\npatch < /tmp/s_lock.patch\n\n\nI have tested this on i386 Linux, but as it involves platform specific code\nit would be helpful if someone on each of the other platforms could test it.\nTo make it easy to test, I have included a simple test case:\n\n cd pgsql/src/backend/storage/buffer\n make s_lock_test\n\nThis will hang for a couple minutes and then abort with some alarming messages\nabout \"Stuck Spinlock\". This is what is supposed to happen. If it exits\nquickly, your spinlocks are broken.\n\nIf it fails, please add -S to the compile flags in the s_lock_test target\nin src/storage/buffer/Makefile and make s_lock_test again. This will\ngenerate a s_lock.s file containing the assembler code. You can examine this\nand see if something is wrong with the expansion. Or you can send it to me\n([email protected]) and I will have a look. \n\n\nI ran regressions and everything that used to fail still failed ... ;-)\nBtw, the failures I see on glibc linux are:\nint2 .. failed\nint4 .. failed\noidint2 .. failed\noidint4 .. failed\nfloat8 .. failed\ngeometry .. failed\nrandom .. failed\njunkfilter .. failed\n\n-dg\n\n\nIndex: src/backend/storage/buffer/Makefile\ndiff -c src/backend/storage/buffer/Makefile:1.1.1.2 src/backend/storage/buffer/Makefile:1.2\n*** src/backend/storage/buffer/Makefile:1.1.1.2\tSun May 24 23:55:04 1998\n--- src/backend/storage/buffer/Makefile\tSun Jun 14 19:37:44 1998\n***************\n*** 27,33 ****\n \trm -f SUBSYS.o $(OBJS) s_lock_test\n \n s_lock_test: s_lock.c\n! \t$(CC) $(CFLAGS) -DS_LOCK_TEST=1 -g s_lock.c -o s_lock_test\n \t./s_lock_test\n \n ifeq (depend,$(wildcard depend))\n--- 27,33 ----\n \trm -f SUBSYS.o $(OBJS) s_lock_test\n \n s_lock_test: s_lock.c\n! 
\t$(CC) $(CFLAGS) -DS_LOCK_TEST=1 s_lock.c -o s_lock_test\n \t./s_lock_test\n \n ifeq (depend,$(wildcard depend))\nIndex: src/backend/storage/buffer/s_lock.c\ndiff -c src/backend/storage/buffer/s_lock.c:1.1.1.2 src/backend/storage/buffer/s_lock.c:1.2\n*** src/backend/storage/buffer/s_lock.c:1.1.1.2\tSun May 24 23:55:05 1998\n--- src/backend/storage/buffer/s_lock.c\tSun Jun 14 19:37:44 1998\n***************\n*** 11,18 ****\n *\n *-------------------------------------------------------------------------\n */\n \n #include <stdio.h>\n \n #include \"config.h\"\n #include \"c.h\"\n--- 11,19 ----\n *\n *-------------------------------------------------------------------------\n */\n \n #include <stdio.h>\n+ #include <sys/time.h>\n \n #include \"config.h\"\n #include \"c.h\"\n***************\n*** 22,46 ****\n /*\n * Each time we busy spin we select the next element of this array as the\n * number of microseconds to wait. This accomplishes pseudo random back-off.\n! * Values are not critical and are weighted to the low end of the range. They\n! * were chosen to work even with different select() timer resolutions on\n! * different platforms.\n! * note: total time to cycle through all 16 entries might be about .1 second.\n! */\n! int\t\t\ts_spincycle[S_NSPINCYCLE] =\n! {0, 0, 0, 1000, 5000, 0, 10000, 3000,\n! 0, 10000, 0, 15000, 9000, 21000, 6000, 30000\n };\n \n \n- #if defined(S_LOCK_DEBUG)\n /*\n! * s_lock(lock) - take a spinlock\n! * add intrumentation code to this and define S_LOCK_DEBUG\n! * instead of hacking up the macro in s_lock.h\n */\n void\n! s_lock(slock_t *lock, char *file, int line)\n {\n \tint\t\t\tspins = 0;\n \n--- 23,63 ----\n /*\n * Each time we busy spin we select the next element of this array as the\n * number of microseconds to wait. This accomplishes pseudo random back-off.\n! * Values are not critical but 10 milliseconds is a common platform\n! * granularity.\n! * note: total time to cycle through all 16 entries might be about .07 sec.\n! */\n! #define S_NSPINCYCLE 20\n! #define S_MAX_BUSY 500 * S_NSPINCYCLE\n! \n! int\ts_spincycle[S_NSPINCYCLE] =\n! { 0, 0, 0, 0, 10000, 0, 0, 0, 10000, 0,\n! 0, 10000, 0, 0, 10000, 0, 10000, 0, 10000, 10000\n };\n \n \n /*\n! * s_lock_stuck(lock) - complain about a stuck spinlock\n! */\n! static void\n! s_lock_stuck(volatile slock_t *lock, const char *file, const int line)\n! {\n! \tfprintf(stderr,\n! \t\t\t\"\\nFATAL: s_lock(%08x) at %s:%d, stuck spinlock. Aborting.\\n\",\n! \t\t\t(unsigned int) lock, file, line);\n! \tfprintf(stdout,\n! \t\t\t\"\\nFATAL: s_lock(%08x) at %s:%d, stuck spinlock. Aborting.\\n\",\n! \t\t\t(unsigned int) lock, file, line);\n! \tabort();\n! }\n! \n! \n! \n! /*\n! * s_lock(lock) - take a spinlock with backoff\n */\n void\n! s_lock(volatile slock_t *lock, const char *file, const int line)\n {\n \tint\t\t\tspins = 0;\n \n***************\n*** 49,162 ****\n \t\tstruct timeval delay;\n \n \t\tdelay.tv_sec = 0;\n! \t\tdelay.tv_usec = s_spincycle[spins++ % S_NSPINCYCLE];\n \t\t(void) select(0, NULL, NULL, NULL, &delay);\n! \t\tif (spins > S_MAX_BUSY)\n \t\t{\n! \t\t\t/* It's been well over a minute... */\n \t\t\ts_lock_stuck(lock, file, line);\n \t\t}\n \t}\n }\n- #endif /* S_LOCK_DEBUG */\n \n \n- /*\n- * s_lock_stuck(lock) - deal with stuck spinlock\n- */\n- void\n- s_lock_stuck(slock_t *lock, char *file, int line)\n- {\n- \tfprintf(stderr,\n- \t\t\t\"\\nFATAL: s_lock(%08x) at %s:%d, stuck spinlock. 
Aborting.\\n\",\n- \t\t\t(unsigned int) lock, file, line);\n- \tfprintf(stdout,\n- \t\t\t\"\\nFATAL: s_lock(%08x) at %s:%d, stuck spinlock. Aborting.\\n\",\n- \t\t\t(unsigned int) lock, file, line);\n- \tabort();\n- }\n- \n \n \n /*\n! * Various TAS implementations moved from s_lock.h to avoid redundant\n! * definitions of the same routine.\n! * RESOLVE: move this to tas.c. Alternatively get rid of tas.[cso] and fold\n! * all that into this file.\n */\n \n \n! #if defined(linux)\n /*************************************************************************\n! * All the Linux flavors\n */\n \n \n- #if defined(__alpha)\n- int\n- tas(slock_t *lock)\n- {\n- \tslock_t\t\t_res;\n- \n- __asm__(\" ldq $0, %0 \\n\\\n- bne $0, already_set \\n\\\n- ldq_l $0, %0\t \\n\\\n- bne $0, already_set \\n\\\n- or $31, 1, $0 \\n\\\n- stq_c $0, %0\t \\n\\\n- beq $0, stqc_fail \\n\\\n- success: bis $31, $31, %1 \\n\\\n- mb\t\t \\n\\\n- jmp $31, end\t \\n\\\n- stqc_fail: or $31, 1, $0\t \\n\\\n- already_set: bis $0, $0, %1\t \\n\\\n- end: nop \": \"=m\"(*lock), \"=r\"(_res): :\"0\");\n- \n- \treturn (_res != 0);\n- }\n- #endif /* __alpha */\n- \n- \n- \n- #if defined(i386)\n- int\n- tas(slock_t *lock)\n- {\n- \tslock_t\t\t_res = 1;\n- \n- __asm__(\"lock; xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1));\n- \treturn (_res != 0);\n- }\n- #endif /* i386 */\n- \n- \n- \n- #if defined(sparc)\n- \n- int\n- tas(slock_t *lock)\n- {\n- \tslock_t\t\t_res;\n- \tslock_t *tmplock = lock;\n- \n- __asm__(\"ldstub [%1], %0\" \\\n- :\t\t\t\"=&r\"(_res), \"=r\"(tmplock) \\\n- :\t\t\t\"1\"(tmplock));\n- \treturn (_res != 0);\n- }\n- \n- #endif /* sparc */\n- \n- \n \n #if defined(PPC)\n! \n! static int\n! tas_dummy()\n {\n! \t__asm__(\"\t\t\t\t\\n\\\n! tas:\t\t\t\t\t\t\\n\\\n! \t\t\tlwarx\t5,0,3\t\\n\\\n \t\t\tcmpwi\t5,0\t\t\\n\\\n \t\t\tbne\t\tfail\t\\n\\\n \t\t\taddi\t5,5,1\t\\n\\\n--- 66,104 ----\n \t\tstruct timeval delay;\n \n \t\tdelay.tv_sec = 0;\n! \t\tdelay.tv_usec = s_spincycle[spins % S_NSPINCYCLE];\n \t\t(void) select(0, NULL, NULL, NULL, &delay);\n! \t\tif (++spins > S_MAX_BUSY)\n \t\t{\n! \t\t\t/* It's been over a minute... */\n \t\t\ts_lock_stuck(lock, file, line);\n \t\t}\n \t}\n }\n \n \n \n \n /*\n! * Various TAS implementations that cannot live in s_lock.h as no inline\n! * definition exists (yet).\n! * In the future, get rid of tas.[cso] and fold it into this file.\n */\n \n \n! #if defined(__GNUC__)\n /*************************************************************************\n! * All the gcc flavors that are not inlined\n */\n \n \n \n #if defined(PPC)\n! /* Note: need a nice gcc constrained asm version so it can be inlined */\n! int\n! tas(volatile slock_t *lock)\n {\n! \t__asm__(\"lwarx\t5,0,3\t\\n\\\n \t\t\tcmpwi\t5,0\t\t\\n\\\n \t\t\tbne\t\tfail\t\\n\\\n \t\t\taddi\t5,5,1\t\\n\\\n***************\n*** 169,189 ****\n \tblr\t\t\t\t\\n\\\n \t\");\n }\n- \n #endif /* PPC */\n \n \n \n! #else /* defined(linux) */\n /***************************************************************************\n! * All Non-Linux\n */\n \n \n \n #if defined(sun3)\n static void\n! tas_dummy()\t\t\t\t\t\t/* really means: extern int tas(slock_t *lock); */\n {\n \tasm(\"LLA0:\");\n \tasm(\" .data\");\n--- 111,130 ----\n \tblr\t\t\t\t\\n\\\n \t\");\n }\n #endif /* PPC */\n \n \n \n! #else /* defined(__GNUC__) */\n /***************************************************************************\n! * All non gcc\n */\n \n \n \n #if defined(sun3)\n static void\n! 
tas_dummy()\t\t\t\t/* really means: extern int tas(slock_t *lock); */\n {\n \tasm(\"LLA0:\");\n \tasm(\" .data\");\n***************\n*** 208,233 ****\n \n #if defined(NEED_SPARC_TAS_ASM)\n /*\n! * bsd and bsdi sparc machines\n */\n- \n- /* if we're using -ansi w/ gcc, use __asm__ instead of asm */\n- #if defined(__STRICT_ANSI__)\n- #define asm(x)\t__asm__(x)\n- #endif /* __STRICT_ANSI__ */\n- \n static void\n! tas_dummy()\t\t\t\t\t\t/* really means: extern int tas(slock_t *lock); */\n {\n \tasm(\".seg \\\"data\\\"\");\n \tasm(\".seg \\\"text\\\"\");\n \tasm(\"_tas:\");\n- \n \t/*\n \t * Sparc atomic test and set (sparc calls it \"atomic load-store\")\n \t */\n \tasm(\"ldstub [%r8], %r8\");\n- \n \tasm(\"retl\");\n \tasm(\"nop\");\n }\n--- 149,166 ----\n \n #if defined(NEED_SPARC_TAS_ASM)\n /*\n! * sparc machines not using gcc\n */\n static void\n! tas_dummy()\t\t\t\t/* really means: extern int tas(slock_t *lock); */\n {\n \tasm(\".seg \\\"data\\\"\");\n \tasm(\".seg \\\"text\\\"\");\n \tasm(\"_tas:\");\n \t/*\n \t * Sparc atomic test and set (sparc calls it \"atomic load-store\")\n \t */\n \tasm(\"ldstub [%r8], %r8\");\n \tasm(\"retl\");\n \tasm(\"nop\");\n }\n***************\n*** 237,312 ****\n \n \n \n- #if defined(NEED_VAX_TAS_ASM)\n- /*\n- * VAXen -- even multiprocessor ones\n- * (thanks to Tom Ivar Helbekkmo)\n- */\n- typedef unsigned char slock_t;\n- \n- int\n- tas(slock_t *lock)\n- {\n- \tregister\tret;\n- \n- \tasm(\"\tmovl $1, r0\n- \t\tbbssi $0, (%1), 1f\n- \t\tclrl r0\n- 1:\tmovl r0, %0 \"\n- :\t\t\"=r\"(ret)\t\t\t\t/* return value, in register */\n- :\t\t\"r\"(lock)\t\t\t\t/* argument, 'lock pointer', in register */\n- :\t\t\"r0\");\t\t\t\t\t/* inline code uses this register */\n- \n- \treturn ret;\n- }\n- \n- #endif /* NEED_VAX_TAS_ASM */\n- \n- \n- \n #if defined(NEED_I386_TAS_ASM)\n! /*\n! * i386 based things\n! */\n! \n! #if defined(USE_UNIVEL_CC)\n! asm int\n! tas(slock_t *s_lock)\n! {\n! \t%lab locked;\n! /* Upon entry, %eax will contain the pointer to the lock byte */\n! \tpushl % ebx\n! \txchgl % eax, %ebx\n! \txor % eax, %eax\n! \tmovb $255, %al\n! \tlock\n! \txchgb % al, (%ebx)\n! \tpopl % ebx\n! }\n! \n! \n! #else /* USE_UNIVEL_CC */\n! \n! int\n! tas(slock_t *lock)\n! {\n! \tslock_t\t\t_res = 1;\n \n- __asm__(\"lock; xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1));\n- \treturn (_res != 0);\n- }\n \n- #endif /* USE_UNIVEL_CC */\n \n! #endif /* NEED_I386_TAS_ASM */\n \n \n- #endif /* linux */\n \n \n #if defined(S_LOCK_TEST)\n \n! slock_t\t\ttest_lock;\n \n void\n main()\n--- 170,194 ----\n \n \n \n #if defined(NEED_I386_TAS_ASM)\n! /* non gcc i386 based things */\n! #endif /* NEED_I386_TAS_ASM */\n \n \n \n! #endif /* not __GNUC__ */\n \n \n \n \n+ /*****************************************************************************/\n #if defined(S_LOCK_TEST)\n \n! /*\n! * test program for verifying a port.\n! */\n! \n! volatile slock_t\ttest_lock;\n \n void\n main()\n***************\n*** 330,336 ****\n \tprintf(\"S_LOCK_TEST: this will hang for a few minutes and then abort\\n\");\n \tprintf(\" with a 'stuck spinlock' message if S_LOCK()\\n\");\n \tprintf(\" and TAS() are working.\\n\");\n! \tS_LOCK(&test_lock);\n \n \tprintf(\"S_LOCK_TEST: failed, lock not locked~\\n\");\n \texit(3);\n--- 212,218 ----\n \tprintf(\"S_LOCK_TEST: this will hang for a few minutes and then abort\\n\");\n \tprintf(\" with a 'stuck spinlock' message if S_LOCK()\\n\");\n \tprintf(\" and TAS() are working.\\n\");\n! 
\ts_lock(&test_lock, __FILE__, __LINE__);\n \n \tprintf(\"S_LOCK_TEST: failed, lock not locked~\\n\");\n \texit(3);\n***************\n*** 338,340 ****\n--- 220,223 ----\n }\n \n #endif /* S_LOCK_TEST */\n+ \nIndex: src/include/storage/s_lock.h\ndiff -c src/include/storage/s_lock.h:1.1.1.2 src/include/storage/s_lock.h:1.2\n*** src/include/storage/s_lock.h:1.1.1.2\tSun May 24 23:57:20 1998\n--- src/include/storage/s_lock.h\tSun Jun 14 19:37:47 1998\n***************\n*** 11,19 ****\n *\n *-------------------------------------------------------------------------\n */\n /*\n *\t DESCRIPTION\n! *\t\tThe public functions that must be provided are:\n *\n *\t\tvoid S_INIT_LOCK(slock_t *lock)\n *\n--- 11,20 ----\n *\n *-------------------------------------------------------------------------\n */\n+ \n /*\n *\t DESCRIPTION\n! *\t\tThe public macros that must be provided are:\n *\n *\t\tvoid S_INIT_LOCK(slock_t *lock)\n *\n***************\n*** 45,55 ****\n *\t\t#define TAS(lock) tas(lock)\n *\t\tint tas(slock_t *lock)\t\t// True if lock already set\n *\n *\t\tIf none of this can be done, POSTGRES will default to using\n *\t\tSystem V semaphores (and take a large performance hit -- around 40%\n *\t\tof its time on a DS5000/240 is spent in semop(3)...).\n *\n- *\tNOTES\n *\t\tAIX has a test-and-set but the recommended interface is the cs(3)\n *\t\tsystem call. This provides an 8-instruction (plus system call\n *\t\toverhead) uninterruptible compare-and-set operation. True\n--- 46,60 ----\n *\t\t#define TAS(lock) tas(lock)\n *\t\tint tas(slock_t *lock)\t\t// True if lock already set\n *\n+ *\t\tThere are default implementations for all these macros at the bottom\n+ *\t\tof this file. Check if your platform can use these or needs to\n+ *\t\toverride them.\n+ *\n+ *\tNOTES\n *\t\tIf none of this can be done, POSTGRES will default to using\n *\t\tSystem V semaphores (and take a large performance hit -- around 40%\n *\t\tof its time on a DS5000/240 is spent in semop(3)...).\n *\n *\t\tAIX has a test-and-set but the recommended interface is the cs(3)\n *\t\tsystem call. This provides an 8-instruction (plus system call\n *\t\toverhead) uninterruptible compare-and-set operation. True\n***************\n*** 57,66 ****\n *\t\tregression test suite by about 25%. I don't have an assembler\n *\t\tmanual for POWER in any case.\n *\n- *\t\tThere are default implementations for all these macros at the bottom\n- *\t\tof this file. Check if your platform can use these or needs to\n- *\t\toverride them.\n- *\n */\n #if !defined(S_LOCK_H)\n #define S_LOCK_H\n--- 62,67 ----\n***************\n*** 69,91 ****\n \n #if defined(HAS_TEST_AND_SET)\n \n! #if defined(linux)\n! /***************************************************************************\n! * All Linux\n */\n \n #if defined(__alpha)\n! \n #define S_UNLOCK(lock) { __asm__(\"mb\"); *(lock) = 0; }\n \n #endif /* __alpha */\n \n \n \n \n! #else /* linux */\n /***************************************************************************\n! * All non Linux\n */\n \n #if defined (nextstep)\n--- 70,173 ----\n \n #if defined(HAS_TEST_AND_SET)\n \n! \n! #if defined(__GNUC__)\n! /*************************************************************************\n! * All the gcc inlines\n */\n \n #if defined(__alpha)\n! 
#define TAS(lock) tas(lock)\n #define S_UNLOCK(lock) { __asm__(\"mb\"); *(lock) = 0; }\n \n+ static __inline__ int\n+ tas(volatile slock_t *lock)\n+ {\n+ \tregister slock_t\t_res;\n+ \n+ __asm__(\" ldq $0, %0 \\n\\\n+ bne $0, already_set \\n\\\n+ ldq_l $0, %0\t \\n\\\n+ bne $0, already_set \\n\\\n+ or $31, 1, $0 \\n\\\n+ stq_c $0, %0\t \\n\\\n+ beq $0, stqc_fail \\n\\\n+ success: bis $31, $31, %1 \\n\\\n+ mb\t\t \\n\\\n+ jmp $31, end\t \\n\\\n+ stqc_fail: or $31, 1, $0\t \\n\\\n+ already_set: bis $0, $0, %1\t \\n\\\n+ end: nop \" : \"=m\"(*lock), \"=r\"(_res) : : \"0\");\n+ \n+ \treturn (int) _res;\n+ }\n #endif /* __alpha */\n \n \n \n+ #if defined(i386)\n+ #define TAS(lock) tas(lock)\n+ \n+ static __inline__ int\n+ tas(volatile slock_t *lock)\n+ {\n+ \tregister slock_t\t_res = 1;\n+ \n+ __asm__(\"lock; xchgb %0,%1\" : \"=q\"(_res), \"=m\"(*lock) : \"0\"(_res) );\n+ \treturn (int) _res;\n+ }\n+ #endif /* i386 */\n+ \n+ \n+ \n+ #if defined(sparc)\n+ #define TAS(lock) tas(lock)\n+ \n+ static __inline__ int\n+ tas(volatile slock_t *lock)\n+ {\n+ \tregister slock_t\t_res = 1;\n+ \n+ __asm__(\"ldstub [%1], %0\" \\\n+ : \"=r\"(_res), \"=m\"(*lock) \\\n+ : \"1\"(lock));\n+ \treturn (int) _res;\n+ }\n+ #endif /* sparc */\n+ \n+ \n+ \n+ #if defined(NEED_VAX_TAS_ASM)\n+ /*\n+ * VAXen -- even multiprocessor ones\n+ * (thanks to Tom Ivar Helbekkmo)\n+ */\n+ #define TAS(lock) tas(lock)\n+ \n+ typedef unsigned char slock_t;\n+ \n+ static __inline__ int\n+ tas(volatile slock_t *lock)\n+ {\n+ \tregister\t_res;\n+ \n+ \t__asm__(\"\tmovl $1, r0\n+ \t\tbbssi $0, (%1), 1f\n+ \t\tclrl r0\n+ 1:\tmovl r0, %0 \"\n+ :\t\t\"=r\"(_res)\t\t\t\t/* return value, in register */\n+ :\t\t\"r\"(lock)\t\t\t\t/* argument, 'lock pointer', in register */\n+ :\t\t\"r0\");\t\t\t\t\t/* inline code uses this register */\n+ \treturn (int) _res;\n+ }\n+ #endif /* NEED_VAX_TAS_ASM */\n+ \n+ \n \n! \n! #else /* __GNUC__ */\n /***************************************************************************\n! * All non gcc\n */\n \n #if defined (nextstep)\n***************\n*** 95,108 ****\n */\n \n #define S_LOCK(lock)\tmutex_lock(lock)\n- \n #define S_UNLOCK(lock)\tmutex_unlock(lock)\n- \n #define S_INIT_LOCK(lock)\tmutex_init(lock)\n- \n /* For Mach, we have to delve inside the entrails of `struct mutex'. Ick! */\n #define S_LOCK_FREE(alock)\t((alock)->lock == 0)\n- \n #endif /* nextstep */\n \n \n--- 177,186 ----\n***************\n*** 118,130 ****\n * for the R3000 chips out there.\n */\n #define TAS(lock)\t(!acquire_lock(lock))\n- \n #define S_UNLOCK(lock)\trelease_lock(lock)\n- \n #define S_INIT_LOCK(lock)\tinit_lock(lock)\n- \n #define S_LOCK_FREE(lock)\t(stat_lock(lock) == UNLOCKED)\n- \n #endif /* __sgi */\n \n \n--- 196,204 ----\n***************\n*** 137,149 ****\n * (see storage/ipc.h).\n */\n #define TAS(lock)\t(msem_lock((lock), MSEM_IF_NOWAIT) < 0)\n- \n #define S_UNLOCK(lock)\tmsem_unlock((lock), 0)\n- \n #define S_INIT_LOCK(lock)\tmsem_init((lock), MSEM_UNLOCKED)\n- \n #define S_LOCK_FREE(lock)\t(!(lock)->msem_state)\n- \n #endif /* __alpha */\n \n \n--- 211,219 ----\n***************\n*** 156,162 ****\n * (see storage/ipc.h).\n */\n #define TAS(lock)\tcs((int *) (lock), 0, 1)\n- \n #endif /* _AIX */\n \n \n--- 226,231 ----\n***************\n*** 175,235 ****\n {-1, -1, -1, -1};\n \n #define S_UNLOCK(lock)\t(*(lock) = clear_lock)\t/* struct assignment */\n- \n #define S_LOCK_FREE(lock)\t( *(int *) (((long) (lock) + 15) & ~15) != 0)\n- \n #endif /* __hpux */\n \n \n \n! 
#endif /* else defined(linux) */\n \n \n \n \n- /****************************************************************************\n- * Default Definitions - override these above as needed.\n- */\n- \n- #if !defined(S_LOCK)\n- \n- #include <sys/time.h>\n \n! #define S_NSPINCYCLE\t16\n! #define S_MAX_BUSY\t\t1000 * S_NSPINCYCLE\n \n- extern int\ts_spincycle[];\n- extern void s_lock_stuck(slock_t *lock, char *file, int line);\n \n! #if defined(S_LOCK_DEBUG)\n \n- extern void s_lock(slock_t *lock);\n \n- #define S_LOCK(lock) s_lock(lock, __FILE__, __LINE__)\n \n- #else /* S_LOCK_DEBUG */\n \n! #define S_LOCK(lock) if (1) { \\\n! \tint spins = 0; \\\n! \twhile (TAS(lock)) { \\\n! \t\tstruct timeval\tdelay; \\\n! \t\tdelay.tv_sec = 0; \\\n! \t\tdelay.tv_usec = s_spincycle[spins++ % S_NSPINCYCLE]; \\\n! \t\t(void) select(0, NULL, NULL, NULL, &delay); \\\n! \t\tif (spins > S_MAX_BUSY) { \\\n! \t\t\t/* It's been well over a minute... */ \\\n! \t\t\ts_lock_stuck(lock, __FILE__, __LINE__); \\\n! \t\t} \\\n! \t} \\\n! } else\n \n! #endif /* S_LOCK_DEBUG */\n #endif /* S_LOCK */\n \n- \n- \n #if !defined(S_LOCK_FREE)\n! #define S_LOCK_FREE(lock)\t((*lock) == 0)\n #endif /* S_LOCK_FREE */\n \n #if !defined(S_UNLOCK)\n--- 244,299 ----\n {-1, -1, -1, -1};\n \n #define S_UNLOCK(lock)\t(*(lock) = clear_lock)\t/* struct assignment */\n #define S_LOCK_FREE(lock)\t( *(int *) (((long) (lock) + 15) & ~15) != 0)\n #endif /* __hpux */\n \n \n \n! #if defined(NEED_I386_TAS_ASM)\n! /* non gcc i386 based things */\n \n \n+ #if defined(USE_UNIVEL_CC)\n+ #define TAS(lock)\ttas(lock)\n \n+ asm int\n+ tas(slock_t *s_lock)\n+ {\n+ \t%lab locked;\n+ \t/* Upon entry, %eax will contain the pointer to the lock byte */\n+ \tpushl % ebx\n+ \txchgl % eax, %ebx\n+ \txor % eax, %eax\n+ \tmovb $255, %al\n+ \tlock\n+ \txchgb % al, (%ebx)\n+ \tpopl % ebx\n+ }\n+ #endif /* USE_UNIVEL_CC */\n \n \n! #endif /* NEED_I386_TAS_ASM */\n \n \n! #endif /* else defined(__GNUC__) */\n \n \n \n \n! /****************************************************************************\n! * Default Definitions - override these above as needed.\n! */\n \n! #if !defined(S_LOCK)\n! extern void s_lock(volatile slock_t *lock, const char *file, const int line);\n! #define S_LOCK(lock) \\\n! if (TAS((volatile slock_t *) lock)) {\\\n! s_lock((volatile slock_t *) lock, __FILE__, __LINE__); \\\n! } else\n #endif /* S_LOCK */\n \n #if !defined(S_LOCK_FREE)\n! #define S_LOCK_FREE(lock)\t(*(lock) == 0)\n #endif /* S_LOCK_FREE */\n \n #if !defined(S_UNLOCK)\n***************\n*** 241,252 ****\n #endif /* S_INIT_LOCK */\n \n #if !defined(TAS)\n! int\t\t\ttas(slock_t *lock); /* port/.../tas.s, or s_lock.c */\n! \n! #define TAS(lock)\t\ttas(lock)\n #endif /* TAS */\n \n \n #endif /* HAS_TEST_AND_SET */\n- \n #endif /* S_LOCK_H */\n--- 305,315 ----\n #endif /* S_INIT_LOCK */\n \n #if !defined(TAS)\n! int\ttas(volatile slock_t *lock); /* port/.../tas.s, or s_lock.c */\n! #define TAS(lock)\t\ttas((volatile slock_t *) lock)\n #endif /* TAS */\n \n \n #endif /* HAS_TEST_AND_SET */\n #endif /* S_LOCK_H */\n+ \n",
"msg_date": "Mon, 15 Jun 1998 13:26:14 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": true,
"msg_subject": "Revised Optimized S_LOCK patch"
}
] |
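The patch above reduces S_LOCK to a single inline TAS() attempt and only falls back to s_lock()'s select()-based backoff loop on contention. Below is a small, self-contained sketch of that acquire pattern, assuming gcc: my_tas/my_s_lock/MY_S_LOCK are placeholder names rather than the actual backend symbols, the __sync builtin stands in for the per-platform assembly TAS, and the delay table is only loosely modelled on s_spincycle[].

```cpp
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/select.h>

typedef unsigned char slock_t;

/* backoff delays in microseconds, loosely modelled on s_spincycle[] */
static int spincycle[] = {0, 0, 500, 1000, 5000, 10000, 50000, 100000};
#define NSPINCYCLE ((int) (sizeof(spincycle) / sizeof(spincycle[0])))
#define MAX_SPINS  (1000 * NSPINCYCLE)

/* stand-in for the per-platform TAS(); returns nonzero if the lock was already held */
static int
my_tas(volatile slock_t *lock)
{
    return __sync_lock_test_and_set(lock, 1) != 0;
}

/* slow path: spin with increasing select() delays, abort after roughly a minute */
static void
my_s_lock(volatile slock_t *lock, const char *file, int line)
{
    int spins = 0;

    while (my_tas(lock))
    {
        struct timeval delay;

        delay.tv_sec = 0;
        delay.tv_usec = spincycle[spins % NSPINCYCLE];
        (void) select(0, NULL, NULL, NULL, &delay);
        if (++spins > MAX_SPINS)
        {
            fprintf(stderr, "stuck spinlock at %s:%d, aborting\n", file, line);
            abort();
        }
    }
}

/* fast path: only drop into the slow path when the inline TAS fails */
#define MY_S_LOCK(lock)   if (my_tas(lock)) my_s_lock(lock, __FILE__, __LINE__); else
#define MY_S_UNLOCK(lock) (*(lock) = 0)

int
main(void)
{
    volatile slock_t lock = 0;

    MY_S_LOCK(&lock);
    /* ... critical section ... */
    MY_S_UNLOCK(&lock);
    printf("lock acquired and released once\n");
    return 0;
}
```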
[
{
"msg_contents": "I didn't sit down and analyze what you did wrong, but this test worked\nfor me:\n\nDROP TABLE propsales;\nCREATE TABLE propsales (\n closingdate date,\n county varchar(50),\n city varchar(50),\n streetno varchar(10),\n street varchar(70),\n price float8\n);\nCREATE UNIQUE INDEX propsales_key on propsales using btree ( city\nvarchar_ops, \nstreet varchar_ops, streetno varchar_ops,\ncounty varchar_ops, closingdate date_ops );\nDROP TABLE newpropsales;\nCREATE TABLE newpropsales (\n closingdate date,\n county varchar(50),\n city varchar(50),\n streetno varchar(10),\n street varchar(70),\n price float8\n);\nINSERT INTO propsales VALUES('6/15/98', 'Dallas', 'Dallas', '9859',\n'Valley Ranch Pkwy.', 10830.73);\nINSERT INTO propsales VALUES('6/16/98', 'Dallas', 'Dallas', '9859',\n'Valley Ranch Pkwy.', 10830.73);\nINSERT INTO propsales VALUES('6/17/98', 'Dallas', 'Dallas', '9859',\n'Valley Ranch Pkwy.', 10830.73);\nINSERT INTO propsales VALUES('6/18/98', 'Dallas', 'Dallas', '9859',\n'Valley Ranch Pkwy.', 10830.73);\nINSERT INTO newpropsales VALUES('6/15/98', 'Dallas', 'Dallas', '9859',\n'Valley Ranch Pkwy.', 10830.73);\nINSERT INTO newpropsales VALUES('6/16/98', 'Dallas', 'Dallas', '9859',\n'Valley Ranch Pkwy.', 10830.73);\nINSERT INTO newpropsales VALUES('6/29/98', 'Dallas', 'Dallas', '9859',\n'Valley Ranch Pkwy.', 10830.73);\nINSERT INTO newpropsales VALUES('6/30/98', 'Dallas', 'Dallas', '9859',\n'Valley Ranch Pkwy.', 10830.73);\nINSERT INTO propsales \nSELECT n.*\n FROM newpropsales AS n\n WHERE NOT EXISTS (SELECT p.*\n FROM propsales AS p\n WHERE n.city = p.city AND\n n.street = p.street AND\n n.streetno = p.streetno AND\n n.county = p.county AND\n n.closingdate = p.closingdate);\nSELECT * FROM propsales;\n\n\tEnjoy,\n\t-DEJ\n\n\n\n> -----Original Message-----\n> This message received no replies from the SQL list and I forward\n> it to hackers looking for additional thoughts.\n> \n> EXECUTIVE SUMMARY:\n> \n> I have two tables with identical structure.\n> One table has a unique index on 5 of the \n> 6 table attributes.\n> \n> When attempting to insert from the non-indexed\n> table into the uniquely indexed table, the\n> insert fails due to \"duplicate key\" error. (index definition below)\n> \n> However, this query, which tries to identify tuples with identical\n> keys,\n> returns 0 rows. Each attribute included in the multifield index\n> is qualified in the where clause. 
Why doesn't the\n> select show the duplicate tuples?\n> \n> select newpropsales.* from newpropsales n, propsales p\n> where n.city=p.city and n.county=p.county and\n> n.street=p.street and n.streetno=p.streetno and\n> n.closingdate=p.closingdate ;\n> \n> closingdate|county|city|streetno|street|price\n> - -----------+------+----+--------+------+-----\n> (0 rows)\n> \n> \n> ---------- Forwarded message ----------\n> Date: Fri, 5 Jun 1998 19:42:21 -0400 (EDT)\n> From: Marc Howard Zuckman <[email protected]>\n> Subject: Need help understanding unique indices\n> \n> I have a need to incrementally add new data to a table with this\n> structure:\n> Table = propsales\n> +----------------------------------+----------------------------------\n> +-------+\n> | Field | Type\n> | Length|\n> +----------------------------------+----------------------------------\n> +-------+\n> | closingdate | date\n> | 4 |\n> | county | varchar()\n> | 50 |\n> | city | varchar()\n> | 50 |\n> | streetno | varchar()\n> | 10 |\n> | street | varchar()\n> | 70 |\n> | price | float8\n> | 8 |\n> +----------------------------------+----------------------------------\n> +-------+\n> \n> A second table, newpropsales, exists with identical structure.\n> \n> The original table, propsales has a unique index that includes all of\n> the\n> record fields except the price field. The index is defined as\n> follows:\n> \n> CREATE UNIQUE INDEX propsales_key on propsales using btree ( city\n> varchar_ops, \n> street varchar_ops, streetno varchar_ops,\n> county varchar_ops, closingdate date_ops );\n> \n> When loading new data into the database, it is loaded into table\n> newpropsales. An effort to remvove duplicate tuples is then made\n> using this series of queries:\n> \n> delete from recentpropsales; --temporary table with identical\n> structure to those above.\n> - -- get rid of any duplicates contained solely within newpropsales\n> insert into recentpropsales select distinct * from newpropsales; \n> delete from newpropsales;\n> insert into newpropsales select * from recentpropsales;\n> delete from recentpropsales;\n> delete from newminclosingdate;\n> insert into newminclosingdate select min(closingdate) from\n> newpropsales;\n> - -- get tuples from accumulated data that are in same time frame as\n> new data.\n> insert into recentpropsales select propsales.* from\n> propsales,newminclosingdate where \n> closingdate >= newminclosingdate.min;\n> \n> - -- attempt to eliminate duplicates tuples that are present in\n> - -- both tables considered together\n> - -- This will NOT eliminate all index duplicates because\n> - -- price is not indexed. Therefore, tuples that are identical\n> - -- in every way but have different price values will not be\n> - -- deleted from the new data set.\n> \n> delete from newpropsales where exists (\n> select city from recentpropsales r where\n> r.county=newpropsales.county and r.price=newpropsales.price and\n> r.city=newpropsales.city and r.closingdate=newpropsales.closingdate\n> and r.street=newpropsales.street and\n> r.streetno=newpropsales.streetno);\n> \n> All of this seems to work ok. But, this fails\n> \n> insert into propsales select * from newpropsales;\n> \n> because a duplicate key is encountered.\n> \n> However, this query, which tries to identify tuples with identical\n> keys,\n> returns 0 rows. 
Why?\n> \n> select newpropsales.* from newpropsales n, propsales p\n> where n.city=p.city and n.county=p.county and\n> n.street=p.street and n.streetno=p.streetno and\n> n.closingdate=p.closingdate ;\n> \n> closingdate|county|city|streetno|street|price\n> - -----------+------+----+--------+------+-----\n> (0 rows)\n> \n> \n> Marc Zuckman\n> [email protected]\n> \n> _\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n> _ Visit The Home and Condo MarketPlace\t\t _\n> _ http://www.ClassyAd.com\t\t\t _\n> _\t\t\t\t\t\t\t _\n> _ FREE basic property listings/advertisements and searches. _\n> _\t\t\t\t\t\t\t _\n> _ Try our premium, yet inexpensive services for a real\t _\n> _ selling or buying edge!\t\t\t\t _\n> _\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n> \n> \n",
"msg_date": "Mon, 15 Jun 1998 16:21:40 -0500",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Need help understanding unique indices (fwd)"
},
{
"msg_contents": "On Mon, 15 Jun 1998, Jackson, DeJuan wrote:\n\n> I didn't sit down and analyze what you did wrong, but this test worked\n> for me:\n> \n> DROP TABLE propsales;\n> CREATE TABLE propsales (\n> closingdate date,\n> county varchar(50),\n> city varchar(50),\n> streetno varchar(10),\n> street varchar(70),\n> price float8\n> );\n> CREATE UNIQUE INDEX propsales_key on propsales using btree ( city\n> varchar_ops, \n> street varchar_ops, streetno varchar_ops,\n> county varchar_ops, closingdate date_ops );\n> DROP TABLE newpropsales;\n> CREATE TABLE newpropsales (\n> closingdate date,\n> county varchar(50),\n> city varchar(50),\n> streetno varchar(10),\n> street varchar(70),\n> price float8\n> );\n> INSERT INTO propsales VALUES('6/15/98', 'Dallas', 'Dallas', '9859',\n> 'Valley Ranch Pkwy.', 10830.73);\n> INSERT INTO propsales VALUES('6/16/98', 'Dallas', 'Dallas', '9859',\n> 'Valley Ranch Pkwy.', 10830.73);\n> INSERT INTO propsales VALUES('6/17/98', 'Dallas', 'Dallas', '9859',\n> 'Valley Ranch Pkwy.', 10830.73);\n> INSERT INTO propsales VALUES('6/18/98', 'Dallas', 'Dallas', '9859',\n> 'Valley Ranch Pkwy.', 10830.73);\n> INSERT INTO newpropsales VALUES('6/15/98', 'Dallas', 'Dallas', '9859',\n> 'Valley Ranch Pkwy.', 10830.73);\n> INSERT INTO newpropsales VALUES('6/16/98', 'Dallas', 'Dallas', '9859',\n> 'Valley Ranch Pkwy.', 10830.73);\n> INSERT INTO newpropsales VALUES('6/29/98', 'Dallas', 'Dallas', '9859',\n> 'Valley Ranch Pkwy.', 10830.73);\n> INSERT INTO newpropsales VALUES('6/30/98', 'Dallas', 'Dallas', '9859',\n> 'Valley Ranch Pkwy.', 10830.73);\n> INSERT INTO propsales \n> SELECT n.*\n> FROM newpropsales AS n\n> WHERE NOT EXISTS (SELECT p.*\n> FROM propsales AS p\n> WHERE n.city = p.city AND\n> n.street = p.street AND\n> n.streetno = p.streetno AND\n> n.county = p.county AND\n> n.closingdate = p.closingdate);\n> SELECT * FROM propsales;\n> \n> \tEnjoy,\n> \t-DEJ\n> \nWhile this query makes just as much sense as the ones that I tried,\nit also fails on my database. Once again, I do not understand why.\nBug???\n\nrealestate=> begin;\nBEGIN\nrealestate=> INSERT INTO propsales \nrealestate-> SELECT n.*\nrealestate-> FROM newpropsales AS n\nrealestate-> WHERE NOT EXISTS (SELECT p.*\nrealestate-> FROM propsales AS p\nrealestate-> WHERE n.city = p.city AND\nrealestate-> n.street = p.street AND\nrealestate-> n.streetno = p.streetno AND\nrealestate-> n.county = p.county AND\nrealestate-> n.closingdate = p.closingdate);\nERROR: Cannot insert a duplicate key into a unique index\nrealestate=> abort;\nABORT\n\n\n\n\n\n> \n> \n> > -----Original Message-----\n> > This message received no replies from the SQL list and I forward\n> > it to hackers looking for additional thoughts.\n> > \n> > EXECUTIVE SUMMARY:\n> > \n> > I have two tables with identical structure.\n> > One table has a unique index on 5 of the \n> > 6 table attributes.\n> > \n> > When attempting to insert from the non-indexed\n> > table into the uniquely indexed table, the\n> > insert fails due to \"duplicate key\" error. (index definition below)\n> > \n> > However, this query, which tries to identify tuples with identical\n> > keys,\n> > returns 0 rows. Each attribute included in the multifield index\n> > is qualified in the where clause. 
Why doesn't the\n> > select show the duplicate tuples?\n> > \n> > select newpropsales.* from newpropsales n, propsales p\n> > where n.city=p.city and n.county=p.county and\n> > n.street=p.street and n.streetno=p.streetno and\n> > n.closingdate=p.closingdate ;\n> > \n> > closingdate|county|city|streetno|street|price\n> > - -----------+------+----+--------+------+-----\n> > (0 rows)\n> > \n> > \n> > ---------- Forwarded message ----------\n> > Date: Fri, 5 Jun 1998 19:42:21 -0400 (EDT)\n> > From: Marc Howard Zuckman <[email protected]>\n> > Subject: Need help understanding unique indices\n> > \n> > I have a need to incrementally add new data to a table with this\n> > structure:\n> > Table = propsales\n> > +----------------------------------+----------------------------------\n> > +-------+\n> > | Field | Type\n> > | Length|\n> > +----------------------------------+----------------------------------\n> > +-------+\n> > | closingdate | date\n> > | 4 |\n> > | county | varchar()\n> > | 50 |\n> > | city | varchar()\n> > | 50 |\n> > | streetno | varchar()\n> > | 10 |\n> > | street | varchar()\n> > | 70 |\n> > | price | float8\n> > | 8 |\n> > +----------------------------------+----------------------------------\n> > +-------+\n> > \n> > A second table, newpropsales, exists with identical structure.\n> > \n> > The original table, propsales has a unique index that includes all of\n> > the\n> > record fields except the price field. The index is defined as\n> > follows:\n> > \n> > CREATE UNIQUE INDEX propsales_key on propsales using btree ( city\n> > varchar_ops, \n> > street varchar_ops, streetno varchar_ops,\n> > county varchar_ops, closingdate date_ops );\n> > \n> > When loading new data into the database, it is loaded into table\n> > newpropsales. An effort to remvove duplicate tuples is then made\n> > using this series of queries:\n> > \n> > delete from recentpropsales; --temporary table with identical\n> > structure to those above.\n> > - -- get rid of any duplicates contained solely within newpropsales\n> > insert into recentpropsales select distinct * from newpropsales; \n> > delete from newpropsales;\n> > insert into newpropsales select * from recentpropsales;\n> > delete from recentpropsales;\n> > delete from newminclosingdate;\n> > insert into newminclosingdate select min(closingdate) from\n> > newpropsales;\n> > - -- get tuples from accumulated data that are in same time frame as\n> > new data.\n> > insert into recentpropsales select propsales.* from\n> > propsales,newminclosingdate where \n> > closingdate >= newminclosingdate.min;\n> > \n> > - -- attempt to eliminate duplicates tuples that are present in\n> > - -- both tables considered together\n> > - -- This will NOT eliminate all index duplicates because\n> > - -- price is not indexed. Therefore, tuples that are identical\n> > - -- in every way but have different price values will not be\n> > - -- deleted from the new data set.\n> > \n> > delete from newpropsales where exists (\n> > select city from recentpropsales r where\n> > r.county=newpropsales.county and r.price=newpropsales.price and\n> > r.city=newpropsales.city and r.closingdate=newpropsales.closingdate\n> > and r.street=newpropsales.street and\n> > r.streetno=newpropsales.streetno);\n> > \n> > All of this seems to work ok. But, this fails\n> > \n> > insert into propsales select * from newpropsales;\n> > \n> > because a duplicate key is encountered.\n> > \n> > However, this query, which tries to identify tuples with identical\n> > keys,\n> > returns 0 rows. 
Why?\n> > \n> > select newpropsales.* from newpropsales n, propsales p\n> > where n.city=p.city and n.county=p.county and\n> > n.street=p.street and n.streetno=p.streetno and\n> > n.closingdate=p.closingdate ;\n> > \n> > closingdate|county|city|streetno|street|price\n> > - -----------+------+----+--------+------+-----\n> > (0 rows)\n> > \n> > \n> > Marc Zuckman\n> > [email protected]\n> > \n> > _\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n> > _ Visit The Home and Condo MarketPlace\t\t _\n> > _ http://www.ClassyAd.com\t\t\t _\n> > _\t\t\t\t\t\t\t _\n> > _ FREE basic property listings/advertisements and searches. _\n> > _\t\t\t\t\t\t\t _\n> > _ Try our premium, yet inexpensive services for a real\t _\n> > _ selling or buying edge!\t\t\t\t _\n> > _\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n> > \n> > \n> \n\nMarc Zuckman\[email protected]\n\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n_ Visit The Home and Condo MarketPlace\t\t _\n_ http://www.ClassyAd.com\t\t\t _\n_\t\t\t\t\t\t\t _\n_ FREE basic property listings/advertisements and searches. _\n_\t\t\t\t\t\t\t _\n_ Try our premium, yet inexpensive services for a real\t _\n_ selling or buying edge!\t\t\t\t _\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n\n",
"msg_date": "Fri, 19 Jun 1998 00:24:03 -0400 (EDT)",
"msg_from": "Marc Howard Zuckman <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Need help understanding unique indices (fwd)"
},
{
"msg_contents": "Marc Howard Zuckman wrote:\n> \n> While this query makes just as much sense as the ones that I tried,\n> it also fails on my database. Once again, I do not understand why.\n> Bug???\n> \n> realestate=> begin;\n> BEGIN\n> realestate=> INSERT INTO propsales\n> realestate-> SELECT n.*\n> realestate-> FROM newpropsales AS n\n> realestate-> WHERE NOT EXISTS (SELECT p.*\n> realestate-> FROM propsales AS p\n> realestate-> WHERE n.city = p.city AND\n> realestate-> n.street = p.street AND\n> realestate-> n.streetno = p.streetno AND\n> realestate-> n.county = p.county AND\n> realestate-> n.closingdate = p.closingdate);\n> ERROR: Cannot insert a duplicate key into a unique index\n\nI can't reproduce this! (6.3.2 on Solaris 2.5 (sparc),\n6.4-current on FreeBSD 2.2.6)\n\nVadim\n",
"msg_date": "Fri, 19 Jun 1998 22:47:47 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Need help understanding unique indices (fwd)"
}
] |
[
{
"msg_contents": "The postodbc you mention is way out of date and is no longer being\nsupported.\n\nGo to http://www.insightdist.com/psqlodbc for the latest (updated Jun\n13th) odbc driver.\nAvailable as self-extracting install (postdrv.exe) OR source code\n(postsrc.zip).\n\nAlso, you should use the \"interfaces\" mailing list when it concerns the\nodbc driver.\n\nByron\n\n\nDmitry Samersoff wrote:\n\n> Dear All,\n>\n> When my Postgres 6.2.1 has upgraded to 6.3.2\n> (hba enabled, pg_hba.conf contain to lines:\n> local all trust\n> host piter 195.242.10.174 255.255.255.255 trust\n> )\n>\n> I have trouble to connect it by my MS Access through ODBC\n> (postodbc package 0.31 from postodbc.magenet.com)\n>\n> Postgres report error \"User authentification failed\"\n>\n> It message is result missing result of\n> old_be_recvauth in backend/libpq/auth.c\n>\n> To resolve problem immediatly, i just modyfy this function\n> to return always true.\n> IT IS TERRIBLE!\n>\n> Is there a normal way to correct this problem?\n>\n> Thank you !\n>\n> ---\n> DM\\S [email protected]\n> http://www.piter-press.ru/webmaster\n\n\n\n",
"msg_date": "Mon, 15 Jun 1998 17:38:12 -0400",
"msg_from": "Byron Nikolaidis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] User authentification failed"
},
{
"msg_contents": "Dear All,\n\nWhen my Postgres 6.2.1 has upgraded to 6.3.2\n(hba enabled, pg_hba.conf contain to lines:\n local all trust\n host piter 195.242.10.174 255.255.255.255 trust\n )\n\n\nI have trouble to connect it by my MS Access through ODBC\n(postodbc package 0.31 from postodbc.magenet.com)\n\nPostgres report error \"User authentification failed\"\n\nIt message is result missing result of\n old_be_recvauth in backend/libpq/auth.c\n\nTo resolve problem immediatly, i just modyfy this function\nto return always true.\n IT IS TERRIBLE!\n\nIs there a normal way to correct this problem?\n\nThank you !\n\n\n---\n DM\\S [email protected]\n http://www.piter-press.ru/webmaster\n\n\n",
"msg_date": "Tue, 16 Jun 1998 00:59:53 +0300",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "User authentification failed"
},
{
"msg_contents": "May I suggust upgrading your driver. Check out:\n\n http://www.insightdist.com/psqlodbc\n\nAlso use list pgsql-interfaces for ODBC topics.\n\nDmitry Samersoff wrote:\n\n> Dear All,\n>\n> When my Postgres 6.2.1 has upgraded to 6.3.2\n> (hba enabled, pg_hba.conf contain to lines:\n> local all trust\n> host piter 195.242.10.174 255.255.255.255 trust\n> )\n>\n> I have trouble to connect it by my MS Access through ODBC\n> (postodbc package 0.31 from postodbc.magenet.com)\n>\n> Postgres report error \"User authentification failed\"\n>\n> It message is result missing result of\n> old_be_recvauth in backend/libpq/auth.c\n>\n> To resolve problem immediatly, i just modyfy this function\n> to return always true.\n> IT IS TERRIBLE!\n>\n> Is there a normal way to correct this problem?\n>\n> Thank you !\n\n\n\n",
"msg_date": "Mon, 15 Jun 1998 19:15:37 -0400",
"msg_from": "David Hartwig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] User authentification failed"
},
{
"msg_contents": "On Tue, 16 Jun 1998, Dmitry Samersoff wrote:\n\nIs your postmaster running with -i parameter ?\n> Dear All,\n> \n> When my Postgres 6.2.1 has upgraded to 6.3.2\n> (hba enabled, pg_hba.conf contain to lines:\n> local all trust\n> host piter 195.242.10.174 255.255.255.255 trust\n> )\n> \n> \n> I have trouble to connect it by my MS Access through ODBC\n> (postodbc package 0.31 from postodbc.magenet.com)\n> \n> Postgres report error \"User authentification failed\"\n> \n> It message is result missing result of\n> old_be_recvauth in backend/libpq/auth.c\n> \n> To resolve problem immediatly, i just modyfy this function\n> to return always true.\n> IT IS TERRIBLE!\n> \n> Is there a normal way to correct this problem?\n> \n> Thank you !\n> \n> \n> ---\n> DM\\S [email protected]\n> http://www.piter-press.ru/webmaster\n> \n> \n> \n> \n\n | |\n~~~~~~~~~~~~~~~~~~~~~~~~ | | ~~~~~~~~~~~~~~~~~~~~~~~~\n Progetto HYGEA ---- ---- www.sferacarta.com\n Sfera Carta Software ---- ---- [email protected]\n Via Bazzanese, 69 | | Fax. ++39 51 6131537\nCasalecchio R.(BO) Italy | | Tel. ++39 51 591054\n\n",
"msg_date": "Tue, 16 Jun 1998 11:00:05 +0000 (UTC)",
"msg_from": "\"Jose' Soares Da Silva\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] User authentification failed"
}
] |
[
{
"msg_contents": "\nsas=> explain update user set usrid = 'aaaaaaaa' where usrseqid=usrseqid('zlb');\n\nNOTICE: QUERY PLAN:\n\nSeq Scan on user (cost=344.07 size=658 width=154)\n\nEXPLAIN\nsas=> explain update user set usrid = 'aaaaaaaa' where usrseqid=(select usrseqid('zlb'));\nNOTICE: QUERY PLAN:\n\nIndex Scan on user (cost=1.05 size=1 width=154)\n InitPlan\n -> Result (cost=0.00 size=0 width=0)\n\nEXPLAIN\nsas=> \n\n\nas you can see, it uses the index when the RHS of the comparison in the where\nclause is a subquery, but a sequential scan when it isn't. is this a bug?\n",
"msg_date": "Mon, 15 Jun 1998 16:44:31 -0700",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "seq scan only when function not in subquery (bug?)"
},
{
"msg_contents": "Brett McCormick wrote:\n> \n> sas=> explain update user set usrid = 'aaaaaaaa' \n> where usrseqid=usrseqid('zlb');\n> \n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on user (cost=344.07 size=658 width=154)\n\n...\n \n> as you can see, it uses the index when the RHS of the comparison \n> in the where clause is a subquery, but a sequential scan when it \n> isn't. is this a bug?\n\nYes, and very old :)\n\nThis is from my recent posting:\n---\nAnother issue - handling of functions with constant args \nin queries - for query\n\nselect * from T where A = upper ('bbb')\n\nfunction upper ('bbb') will be executed for each tuple in T!\nMore of that - if there is index on T(A) then this index will\nnot be used for this query!\nObviously, upper ('bbb') should be executed (by Executor, not\nparser/planner) once: new Param type (PARAM_EXEC) implemented \nfor subselects could help here too...\n---\n\nActually, this is easy to fix...\n\nVadim\n",
"msg_date": "Tue, 16 Jun 1998 10:32:39 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] seq scan only when function not in subquery (bug?)"
},
{
"msg_contents": "\nOn Tue, 16 June 1998, at 10:32:39, Vadim Mikheev wrote:\n\n> Another issue - handling of functions with constant args \n> in queries - for query\n> \n> select * from T where A = upper ('bbb')\n> \n> function upper ('bbb') will be executed for each tuple in T!\n> More of that - if there is index on T(A) then this index will\n> not be used for this query!\n> Obviously, upper ('bbb') should be executed (by Executor, not\n> parser/planner) once: new Param type (PARAM_EXEC) implemented \n> for subselects could help here too...\n> ---\n> \n> Actually, this is easy to fix...\n\nI was going to reply to this but never did -- how do you tell if it\nneeds to be executed once per query or once per tuple? What if you\nwanted to call a function which returned a different value for each\ntuple, like random()?\n",
"msg_date": "Tue, 16 Jun 1998 14:03:04 -0700 (PDT)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] seq scan only when function not in subquery (bug?)"
},
{
"msg_contents": "> On Tue, 16 June 1998, at 10:32:39, Vadim Mikheev wrote:\n> \n> > Another issue - handling of functions with constant args \n> > in queries - for query\n> > \n> > select * from T where A = upper ('bbb')\n> > \n> > function upper ('bbb') will be executed for each tuple in T!\n> > More of that - if there is index on T(A) then this index will\n> > not be used for this query!\n> > Obviously, upper ('bbb') should be executed (by Executor, not\n> > parser/planner) once: new Param type (PARAM_EXEC) implemented \n> > for subselects could help here too...\n> > ---\n> > \n> > Actually, this is easy to fix...\n> \n> I was going to reply to this but never did -- how do you tell if it\n> needs to be executed once per query or once per tuple? What if you\n> wanted to call a function which returned a different value for each\n> tuple, like random()?\n\nTo make this work, you need an attribute in the functions table (and\ninternal info about the function) that tells if the function is \"variant\"\nor not. A variant function can return different results with the same\narguments eg random(), or has side effects. A non variant function returns\nthe same result for the same arguments and has no side-effects.\n\nIf you have a non-variant function, then the easy way to optimize it is\nto memoize the arguments and result of the last time you called it. Then\nthe next time you want to call it, check if the arguments are the same and\nif so, merely return the previously saved result instead of calling the\nfunction.\n\nExample:\n\ncreate function city_from_zipcode(integer) returns varchar not variant;\n\nselect name, street, city_from_zipcode(zipcode), zipcode\n from (select * from customers order by zipcode);\n\nIf customers was sorted by zipcode, this would only call city_from_zipcode()\neach time the zipcode changed instead of for each row.\n\nIt would also cover the case of \"function('constant');\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n",
"msg_date": "Tue, 16 Jun 1998 14:40:48 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] seq scan only when function not in subquery (bug?)"
}
] |
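A minimal sketch of the single-entry memoization described in the last message: remember the most recent argument and result of a "not variant" function and reuse them when the argument repeats, as it would for rows already sorted by zipcode. The function, its output, and the zip codes are made up for illustration; only the caching pattern matters.

```cpp
#include <stdio.h>
#include <string.h>

/* stand-in for the expensive, non-variant user function */
static const char *
city_from_zipcode(int zipcode)
{
    static char buf[64];

    snprintf(buf, sizeof(buf), "city-for-%d", zipcode);
    return buf;
}

/* wrapper that caches the last argument/result pair */
static const char *
city_from_zipcode_memo(int zipcode)
{
    static int  have_last = 0;
    static int  last_arg;
    static char last_result[64];

    if (have_last && last_arg == zipcode)
        return last_result;                 /* same argument: skip the call */

    strncpy(last_result, city_from_zipcode(zipcode), sizeof(last_result) - 1);
    last_result[sizeof(last_result) - 1] = '\0';
    last_arg = zipcode;
    have_last = 1;
    return last_result;
}

int
main(void)
{
    int zips[] = {10001, 10001, 10001, 94612, 94612};

    for (int i = 0; i < 5; i++)
        printf("%d -> %s\n", zips[i], city_from_zipcode_memo(zips[i]));
    return 0;
}
```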
[
{
"msg_contents": "> \n> The basic problem is that PostgreSQL doesn't understand that Null match any\n> datatype. \n> \n> Please describe a way to repeat the problem. Please try to provide a\n> concise reproducible example, if at all possible: \n> ----------------------------------------------------------------------\n> \n> If you have two table created this way:\n> \n> create table test00\n> (\n> posizione int4 not null primary key,\n> testo varchar(50),\n> campo float8,\n> dataeora datetime\n> );\n> \n> create table test01\n> (\n> posizione int4 not null primary key,\n> testo varchar(50),\n> campo float8,\n> dataeora datetime\n> );\n> \n> and you try to implement an outer join (not yet supported) using the union\n> clause this way:\n> \n> SELECT \n> \ttest00.posizione, \n> \ttest01.posizione\n> FROM \n> \ttest00, \n> \ttest01\n> WHERE \n> \ttest00.posizione = test01.posizione \n> UNION\n> SELECT \n> \ttest00.posizione, \n> \tNull\n> FROM test00\n> WHERE \n> \tNOT EXISTS (SELECT * FROM test01 WHERE test01.posizione = test00.posizione);\n> \n> postgres reports the following error:\n> \n> ERROR: Each UNION query must have identical target types.\n> \n> If you replace Null with an integer everything works well, so the datatype\n> mismatch is detected on the Null.\n> \n> \n> If you know how this problem might be fixed, list the solution below:\n> ---------------------------------------------------------------------\n> \n> The problem is in src/backend/parser/parse_clause.c in function:\n> \n> List * transformUnionClause(List *unionClause, List *targetlist) \n> \n> Near the end there's a check on data types that looks like:\n> \n> if (((TargetEntry *)lfirst(prev_target))->resdom->restype !=\n> ((TargetEntry *)lfirst(next_target))->resdom->restype)\n> elog(ERROR,\"Each UNION query must have identical target types.\");\n> \n> this check should be performed only when both entry are not a Null costant,\n> else it should be ignored because Null should match any datatype. I don't\n> know how PostgreSQL handles Null internally else I had changed the code\n> myself. Anyway I'm sure you PostgreSQL gurus will know how to do it in few\n> seconds.\n> \n> Hope it helps !\n> \n> P.S. My compliments to all the development staff. Just few more\n> enhancements (outer join support, slightly better optimizer and few things\n> more) and PostgreSQL will compare to (and sometimes beat) most commercial\n> high quality DBMS.\n> \n> \tDr. Sbragion Denis\n> \tInfoTecna\n> \tTel, Fax: +39 39 2324054\n> \tURL: http://space.tin.it/internet/dsbragio\n> \n> \n\nThomas, we now get:\n\n\tselect usesysid from pg_user union select null ;\n\tERROR: type id lookup of 0 failed\n\nwhich not good either. Can you address this issue?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 15 Jun 1998 22:58:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUGS] Small bug in union"
}
] |
[
{
"msg_contents": "I would not worry too much about the kernel parameter change.\nSystems with such low values will need those parameter changes for most \nother databases too (since they all use SYS V shm). \nInstallations where the administrator is not involved will most likely be test/university\ninstallations, where memory is usually sparse anyway (buffer increase not wanted). \n\nI would not change to mmap if that would actually require a file on some systems.\n(systems with Gb of RAM are starting to gain ground is only one argument)\n\nAndreas\n\n> I am going through my mailbox and I believe we never found a portable\n> way to allocate shared memory other than system V shared memory. \n> Increasing the amount of buffers beyond a certain amount requires a\n> kernel change.\n> \n> Has anyone come up with a better way?\n\n\n",
"msg_date": "Tue, 16 Jun 1998 13:56:26 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Re: [QUESTIONS] How to use memory instead of hd?"
}
] |
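For reference, the System V path under discussion looks roughly like this at the C level: a single shmget() for the whole buffer pool, which starts failing once the request exceeds the kernel's SHMMAX/SHMALL limits and so forces the retuning mentioned above. The 8 MB request below is an arbitrary example size.

```cpp
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int
main(void)
{
    size_t request = 8 * 1024 * 1024;   /* e.g. a larger buffer pool */
    int    shmid   = shmget(IPC_PRIVATE, request, IPC_CREAT | 0600);

    if (shmid < 0)
    {
        /* typically SHMMAX is smaller than the requested size */
        fprintf(stderr, "shmget(%lu) failed: %s\n",
                (unsigned long) request, strerror(errno));
        return 1;
    }

    void *base = shmat(shmid, NULL, 0);

    if (base == (void *) -1)
        fprintf(stderr, "shmat failed: %s\n", strerror(errno));
    else
    {
        printf("attached %lu bytes at %p\n", (unsigned long) request, base);
        shmdt(base);
    }

    /* remove the segment so the sketch cleans up after itself */
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}
```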
[
{
"msg_contents": "I have applied all outstanding patches, except certain ones that I have\ncommented to the authors.\n\nIf you don't see your patch, please let me know.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 16 Jun 1998 11:15:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "All patches applied"
}
] |
[
{
"msg_contents": "Just a reminder. Those people who have submitted patches that change\nthe user-visible behavior of PostgreSQL will need to submit changes to\nthe manual pages and/or sgml manual so the documenation remains current.\n\nI usually start asking specific people as am creating the HISTORY file,\nbut people may want to submit them before I start asking.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 16 Jun 1998 11:24:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "documentation reminder"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm wondering if we can use btree index to sort the results in a\ncertain condition. The idea is, if the order-items in the order by\nclause have a btree index, then why we need to sort them again?\n\nI'm starting to look at the executor code. However this kind of\n\"optimization\" might be better to be done in the optimizer, I don't\nknow.\n\nSuggestion?\n--\nTatsuo Ishii\[email protected]\n\n",
"msg_date": "Wed, 17 Jun 1998 11:13:11 +0900",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "using a btree index in order by clause?"
},
{
"msg_contents": "> \n> Hi,\n> \n> I'm wondering if we can use btree index to sort the results in a\n> certain condition. The idea is, if the order-items in the order by\n> clause have a btree index, then why we need to sort them again?\n> \n> I'm starting to look at the executor code. However this kind of\n> \"optimization\" might be better to be done in the optimizer, I don't\n> know.\n\nFYI, often, using an index to sort a tables is SLOWER than loading all the\nrows into psort and sorting them, because the index causes random seeks\nall over the table.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 17 Jun 1998 06:44:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] using a btree index in order by clause?"
}
] |
[
{
"msg_contents": "> I'm wondering if we can use btree index to sort the results in a\n> certain condition. The idea is, if the order-items in the order by\n> clause have a btree index, then why we need to sort them again?\n\nReal life tests done by bruce (and I also did some on Informix) showed \nthat sorting is cheaper/faster than doing the index access, if the index does not\nreduce the result set substantially.\nThe index will currently already be used if the where restriction suggests it.\nThis leads to presorted data. \nIt would be nice if the optimizer could eliminate the sort in this case,\neven though the sort routine behaves well with presorted data,\nbut here it does not actually do anything.\n\nI think the index access for order by would actually be a gain for certain cases:\n1. Interactive browsing of data (I want the first row very fast)\n2. Large result sets, that won't fit on temporary disk space.\n\nThe biggies also use this access method.\n\nGreetings\nAndreas\n\n",
"msg_date": "Wed, 17 Jun 1998 09:33:43 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] using a btree index in order by clause?"
},
{
"msg_contents": ">> I'm wondering if we can use btree index to sort the results in a\n>> certain condition. The idea is, if the order-items in the order by\n>> clause have a btree index, then why we need to sort them again?\n>\n>Real life tests done by bruce (and I also did some on Informix) showed \n>that sorting is cheaper/faster than doing the index access, if the index does not\n>reduce the result set substantially.\n>The index will currently already be used if the where restriction suggests it.\n>This leads to presorted data. \n>It would be nice if the optimizer could eliminate the sort in this case,\n>even though the sort routine behaves well with presorted data,\n>but here it does not actually do anything.\n>\n>I think the index access for order by would actually be a gain for certain cases:\n>1. Interactive browsing of data (I want the first row very fast)\n>2. Large result sets, that won't fit on temporary disk space.\n\nI think these are big win too.\n\nBy the way, max(), min() would be optimized in the same way, I guess.\n\n>The biggies also use this access method.\n ~~~~~~~do you mean commercial RDBMSs?\n--\nTatsuo Ishii\[email protected]\n",
"msg_date": "Wed, 17 Jun 1998 17:18:52 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] using a btree index in order by clause? "
},
{
"msg_contents": "> \n> I think these are big win too.\n> \n> By the way, max(), min() would be optimized in the same way, I guess.\n> \n> >The biggies also use this access method.\n> ~~~~~~~do you mean commercial RDBMSs?\n\nI have modified the TODO list:\n\n* Use indexes in ORDER BY for restrictive data sets, min(), max()(Costin Oproiu)\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 17 Jun 1998 18:43:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] using a btree index in order by clause?"
}
] |
[
{
"msg_contents": "> Can someone comment on this error message? It certainly looks like a\n> bug, but I can't figure out why he is getting these problems.\nWell, I comment... :) That message is old... I sent that before 6.3.2\ncame out, but 6.3.2 suffers from the same problem. Just tested it on a\n6.3.2 tar build (no patches) on a Red Hat 5.0 system. Statements run:\ncreate table list (k int2);\ninsert into list values (1);\ninsert into list select max(k)+1;\ninsert into list select max(k)+1;\ninsert into list select max(k)+1;\ninsert into list select max(k)+1;\ninsert into list select max(k)+1;\nselect * from list;\ncreate table list2 (k1 int2 NOT NULL, k2 int2 NOT NULL);\ncreate UNIQUE INDEX l1 ON list2(k1, k2);\ncreate UNIQUE INDEX l2 ON list2(k2, k1); \ninsert into list2 select l1.k, l2.k from list as l1, list as l2;\nselect * from list2;\nvacuum verbose analyze list2;\ncluster l1 on list2;\ncluster l2 on list2;\n\nTry it, try it. }8-> (I'm a devil or a cow can't remember which.)\n\n\t\t-DEJ\n\n> ----------------------------------------------------------------------\n> -----\n> \n> \n> > \n> > Just thought I'd try the cluster command. What am I doing wrong.\n> > ReadHat 5.0\n> > 6.3.1 rpm's\n> > \n> > [djackson@www]$ psql template1\n> > Welcome to the POSTGRESQL interactive sql monitor:\n> > Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> > \n> > type \\? for help on slash commands\n> > type \\q to quit\n> > type \\g or terminate with semicolon to execute query\n> > You are currently connected to the database: template1\n> > \n> > template1=> \\d\n> > Couldn't find any tables, sequences or indices!\n> > template1=> \\l\n> > datname |datdba|datpath \n> > ---------+------+---------\n> > template1| 100|template1\n> > postgres | 100|postgres \n> > (2 rows)\n> > \n> > template1=> create database test;\n> > CREATEDB\n> > template1=> \\connect test \n> > connecting to new database: test\n> > test=> create table list (k int2);\n> > CREATE\n> > test=> insert into list values (1);\n> > INSERT 33769 1\n> > test=> insert into list select max(k)+1;\n> > .\n> > .\n> > .\n> > test=> select * from list;\n> > k\n> > -\n> > 1\n> > 2\n> > 3\n> > 4\n> > 5\n> > 6\n> > (6 rows)\n> > \n> > test=> create table list2 (k1 int2 NOT NULL, k2 int2 NOT NULL);\n> > CREATE\n> > test=> create UNIQUE INDEX l1 ON list2(k1, k2);\n> > CREATE\n> > test=> create UNIQUE INDEX l2 ON list2(k2, k1); \n> > CREATE\n> > test=> insert into list2 select l1.k, l2.k from list as l1, list as\n> l2;\n> > INSERT 0 36\n> > test=> select * from list2;\n> > k1|k2\n> > --+--\n> > 1| 1\n> > 2| 1\n> > 3| 1\n> > .\n> > .\n> > .\n> > 4| 6\n> > 5| 6\n> > 6| 6\n> > (36 rows)\n> > \n> > test=> vacuum verbose analyze list2;\n> > NOTICE: Rel list2: Pages 1: Changed 0, Reapped 0, Empty 0, New 0;\n> Tup\n> > 36: Vac 0, Crash 0, UnUsed 0, MinLen 44, MaxLen 44; Re-using:\n> > Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\n> > NOTICE: Ind l2: Pages 2; Tuples 36. Elapsed 0/0 sec.\n> > NOTICE: Ind l1: Pages 2; Tuples 36. Elapsed 0/0 sec.\n> > VACUUM\n> > test=> cluster l1 on list2;\n> > ERROR: Cannot create unique index. 
Table contains non-unique values\n> > test=> cluster l2 on list2; \n> > PQexec() -- Request was sent to backend, but backend closed the\n> channel\n> > before responding.\n> > This probably means the backend terminated abnormally before\n> or\n> > while processing the request.\n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania\n> 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 17 Jun 1998 09:54:49 -0500",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Bug or Short between my brain and the keyboard?"
}
] |
[
{
"msg_contents": "I accidentally lost my e-mail folder, and am restoring it from last\nnight's backup.\n\nIf anyone sent me e-mail today directly, rather than to the list, please\nre-send it. If you CC'ed it to the list, I will see it in the archives.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 17 Jun 1998 15:27:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "lost my e-mail"
}
] |
[
{
"msg_contents": "As I have noticed this message appears if you try query like \n\nupdate table1 set a=table2.a where b=table2.b;\n\nand there are 2 or more strings in table that contain the same \"b\". \n\nAleksey.\n\n",
"msg_date": "Thu, 18 Jun 1998 12:56:13 +0300 (IDT)",
"msg_from": "Aleksey Dashevsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Non-functional update, only first update is performed"
}
] |
[
{
"msg_contents": "Dear Postgres developers,\n\n My problem is writing speed, when I want to update more rows in one\ntrnsaction. In ORACLE this can be solved by cursor or 'bind array'. Is\nit possible to update changed data from cursor to the database. If not,\nwhy not. Can you tell me other method for solving this problem? I have\nalready written a letter: 'Update by one transaction', but I haven't got\nany useful answer.\n\nThank you for your help:\n Gyorgy Lendvary ([email protected])\n\n",
"msg_date": "Thu, 18 Jun 1998 15:43:13 +0200",
"msg_from": "Lendvary Gyorgy <[email protected]>",
"msg_from_op": true,
"msg_subject": "cursors and other dragons"
},
{
"msg_contents": "On Thu, 18 Jun 1998, Lendvary Gyorgy wrote:\n\n> Dear Postgres developers,\n> \n> My problem is writing speed, when I want to update more rows in one\n> trnsaction. In ORACLE this can be solved by cursor or 'bind array'. Is\n> it possible to update changed data from cursor to the database. If not,\n> why not. Can you tell me other method for solving this problem? I have\n> already written a letter: 'Update by one transaction', but I haven't got\n> any useful answer.\n> \n> Thank you for your help:\n> Gyorgy Lendvary ([email protected])\n> \n> \n\nPlease, write example of your sql-code by using ORACLE.\n\n\n\n\n",
"msg_date": "Mon, 22 Jun 1998 10:12:22 +0300 (EEST)",
"msg_from": "Alexzander Blashko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cursors and other dragons"
}
] |
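The thread gives no worked answer, but the "many updates in one transaction" idea from the question can be sketched with plain libpq: send BEGIN once, issue the individual UPDATEs, and COMMIT at the end. This is only statement batching, not an equivalent of ORACLE bind arrays, and the database name, table, and column names are invented for the example.

```cpp
#include <stdio.h>
#include <libpq-fe.h>

static void
run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "%s: %s", sql, PQerrorMessage(conn));
    PQclear(res);
}

int
main(void)
{
    PGconn *conn = PQsetdb(NULL, NULL, NULL, NULL, "test");

    if (PQstatus(conn) == CONNECTION_BAD)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    run(conn, "BEGIN");
    for (int i = 0; i < 1000; i++)
    {
        char sql[128];

        snprintf(sql, sizeof(sql),
                 "UPDATE prices SET amount = amount * 1.1 WHERE id = %d", i);
        run(conn, sql);
    }
    run(conn, "COMMIT");

    PQfinish(conn);
    return 0;
}
```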
[
{
"msg_contents": "Hi y'all,\n\nto unify the framework of the look an feeling of c++ programs relying on\nlibpq++ I propose to migrate to a consistant use of 'string' to represent any\ntext rather than 'char *'.\n\nEspecially I refer to PgConnection::Exec(char *), ExecCommandOk() and\nExecTuplesOk(). It is very easy to switch over to the use of strings: in the\nheaderfiles only (char * --> string) has to be changed and in the\ncorresponding c++ files, the call to the c-functions becomes e.g.\nPQexec(pgConn, query) --> PQexec(pgConn, query.c_str()).\n\nThe old style call via 'char *' would still be possible, since 'char *' is\nautomatically convertet to 'string'. \n\nAlthough I could easily change this for myself, I think it would be\nappropriate to include it in an upcoming release since it definitely results\nin a improved and more consistent handling of libpq++.\n\nPlease cc: any comments or replys to '[email protected]'.\n\nCheers, Andreas\n\n\n//////////////////////////////////////////////////////////////////////////////\n\n . . . . . . ... Andreas Hauck\n ____ : Inst. fuer Theor. Physik\n __nn__ _______ ________ ____ :_:____U Auf der Morgenstelle 14\n :____:-:_____:-:______:-:___|-:_______) 72076 Tuebingen\n__oo__oo_oo___oo_oo____oo_oo_oo_ooOOOOoo|\\___________________________________\n\n phone : +49 (0) 7071/29-74131 * fax : +49 (0) 7071/29-5850\n e-mail : [email protected]\n http://homepages.uni-tuebingen.de/andreas.hauck\n\n",
"msg_date": "Thu, 18 Jun 1998 17:48:32 +0200 (MET DST)",
"msg_from": "Andreas Hauck <[email protected]>",
"msg_from_op": true,
"msg_subject": "minor improvement to libpq++ ..."
},
{
"msg_contents": "On Thu, 18 Jun 1998, Andreas Hauck wrote:\n\n> Hi y'all,\n> \n> to unify the framework of the look an feeling of c++ programs relying on\n> libpq++ I propose to migrate to a consistant use of 'string' to represent any\n> text rather than 'char *'.\n> \n> Especially I refer to PgConnection::Exec(char *), ExecCommandOk() and\n> ExecTuplesOk(). It is very easy to switch over to the use of strings: in the\n> headerfiles only (char * --> string) has to be changed and in the\n> corresponding c++ files, the call to the c-functions becomes e.g.\n> PQexec(pgConn, query) --> PQexec(pgConn, query.c_str()).\n> \n> The old style call via 'char *' would still be possible, since 'char *' is\n> automatically convertet to 'string'. \n\nI'm all for it, but are you sure about the automatic conversion? I don't \nthink it does that.... I just did a grep on bastring.h, and it doesn't \nshow a char* operator. And string.c_str() returns a const char*, which \ngives all sorts of errors/warning during compile.....\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Thu, 18 Jun 1998 22:59:29 +0200 (MET DST)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] minor improvement to libpq++ ..."
}
] |
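A sketch of what the proposed change could look like: an Exec() taking std::string that forwards to PQexec() via c_str(). The class and member names here are illustrative, not the actual libpq++ declarations; the cast is only needed where the local PQexec() prototype still takes a non-const char*, which is what produces the warnings mentioned in the reply. The implicit char* to string conversion the proposal relies on does work, provided the parameter is declared as a const std::string reference.

```cpp
#include <string>
#include <libpq-fe.h>

/* illustrative wrapper, not the real PgConnection declaration */
class MyPgConnection
{
public:
    explicit MyPgConnection(PGconn *conn) : pgConn(conn) {}

    /* callers can pass std::string, or a char* which converts implicitly */
    PGresult *Exec(const std::string &query)
    {
        /* cast only needed if the local PQexec() still takes a plain char* */
        return PQexec(pgConn, (char *) query.c_str());
    }

private:
    PGconn *pgConn;
};
```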
[
{
"msg_contents": "\nThe default RedHat install doesn't include pidentd, which\napparently is now required by PG (?). When I install the\nRPM for pidentd, I get past the \"unrecognized authentication\"\nproblem.\n\n\nBut I still have a configuration problem of some kind:\nI cannot connect to the postmaster.\n\n\n86) musashi.ucolick.org.postgres: psql -d template1\nConnection to database 'template1' failed.\nconnectDB() failed: Is the postmaster running and accepting TCP/IP(with -i) connections at 'localhost' on port '5432'?\n\n\n87) musashi.ucolick.org.postgres: telnet localhost 5432\nTrying 127.0.0.1...\ntelnet: Unable to connect to remote host: Connection refused\n88) musashi.ucolick.org.postgres: \n\n\nI can connect to other services on localhost (the usual inetd stuff).\n\nThis feels like a slight Linux misconfig problem.... has anyone else\nbeen here?\n\nde\n\n",
"msg_date": "Thu, 18 Jun 1998 21:56:20 -0700 (PDT)",
"msg_from": "De Clarke <[email protected]>",
"msg_from_op": true,
"msg_subject": "RedHat 5.1 Postgres 6.3.2 problem resolved"
},
{
"msg_contents": "On Thu, 18 Jun 1998, De Clarke wrote:\n\n> \n> The default RedHat install doesn't include pidentd, which\n> apparently is now required by PG (?). When I install the\n> RPM for pidentd, I get past the \"unrecognized authentication\"\n> problem.\n\ncheck the pg_hba.conf file...ident authentication is an option but it\nisn't \"shipped\" by us with it enabled...\n\n> 86) musashi.ucolick.org.postgres: psql -d template1\n> Connection to database 'template1' failed.\n> connectDB() failed: Is the postmaster running and accepting TCP/IP(with -i) connections at 'localhost' on port '5432'?\n\nThis one sounds like you have PGHOST set to a value, so its trying to use\na TCP/IP vs Unix Domain Socket for the connection. Either restart\npostmaster with the -i option, or set PGHOST to NULL (environment\nvariable)\n\n> 87) musashi.ucolick.org.postgres: telnet localhost 5432\n> Trying 127.0.0.1...\n> telnet: Unable to connect to remote host: Connection refused\n> 88) musashi.ucolick.org.postgres: \n\n\tSame as above...without the -i option, it doesn't listen on port\n5432...\n\n\n",
"msg_date": "Fri, 19 Jun 1998 07:38:53 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RedHat 5.1 Postgres 6.3.2 problem resolved"
}
] |
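The Unix-socket-versus-TCP distinction described above can also be seen from the libpq client side. A small sketch, assuming only what the thread states (null host with PGHOST unset means the local Unix domain socket; naming a host means TCP, which needs postmaster -i); the database and port values are illustrative:

    #include <stdio.h>
    #include <libpq-fe.h>

    int main()
    {
        // Null host (and PGHOST unset): libpq uses the local Unix domain socket,
        // so this works even when the postmaster was started without -i.
        PGconn *local = PQsetdbLogin(NULL, NULL, NULL, NULL, "template1", NULL, NULL);
        if (PQstatus(local) != CONNECTION_OK)
            fprintf(stderr, "unix socket: %s", PQerrorMessage(local));
        PQfinish(local);

        // Naming a host forces a TCP connection, which is refused (as in the
        // telnet test above) unless the postmaster listens with -i.
        PGconn *tcp = PQsetdbLogin("localhost", "5432", NULL, NULL, "template1", NULL, NULL);
        if (PQstatus(tcp) != CONNECTION_OK)
            fprintf(stderr, "tcp: %s", PQerrorMessage(tcp));
        PQfinish(tcp);
        return 0;
    }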
[
{
"msg_contents": "> \n> Hello Bruce!\n> \n> I don't know exactly if the following bug has already been corrected. Nor\n> do I know who is responsible for the code around prune_joinrel() and that's\n> the reason why I send this to you. Could you please forward it to the\n> appropriate mailinglist and (or) to the responsible developer?\n> \n> I detected the bug when I wanted to test my HAVING code. It is already\n> finished (including the use of subselects in the HavingQual) and all\n> the bugs from April are covered. But I don't want to release a patch \n> without having tested it very well. So I hope I will be able to send\n> a patch next week!\n\nGreat.\n\n> \n> Regards Stefan\n> ------------------------------------------------------------------------\n> Here's the description of the prune_joinrel() bug:\n> \n> When I formulated the following query (using a subselect) executing it\n> on postgresql-6.3.2 (original, not the snapshot) on linux I observed\n> the following strange behaviour: (The table definitions are attached \n> below!) \n\n\nThe prune_joinrels function had a bug in 6.3.* releases that was fixed\nin 6.3.2:\n\n\trevision 1.13\n\tdate: 1998/04/02 07:27:15; author: vadim; state: Exp; lines: +11 -25\n\tFix merging pathes of pruned rels (\"indices are unused\" problem).\n\nThe basic problem is that certain indexes were not being used when they\nshould have been. I introduced the bug when I tried to fix some\nrecursion in the optimizer, and Vadim found my errors and fixed them.\n\nI would suggest that the current 6.3.2 is correct, and that there is a\nbug somewhere in your code. I believe the 6.3 version is working for\nyou because it is buggy and is not using certain indexes that it\nnormally should be using. The error in psort() sould seem to confirm my\nsuspision. You may try the EXPLAIN command to see how the different\nversions are executing your query. That is how we found out about the\n'missing index' problem in the first place.\n\n---------------------------------------------------------------------------\n\n-\n> \n> psql client:\n> ------------\n> stefan=> select s.sid\n> stefan-> from supplier s\n> stefan-> where s.sid in (select se1.pid\n> stefan-> from supplier s1, sells se1, part p1\n> stefan-> where s1.sid=se1.sid and se1.sid=s.sid and se1.pid=p1.pid);\n> \n> FATAL: unrecognized data from the backend. It probably dumped core.\n> \n> postmaster server:\n> ------------------\n> postgres:/home/postgres# postmaster \n> Failed Assertion(\"!(((Psortstate *)node->psortstate) != \n> (Psortstate *) ((void *)0)):\", File: \"psort.c\", Line: 778)\n> !(((Psortstate *)node->psortstate) != \n> (Psortstate *) ((void *)0)) (0) [No such file or directory]\n> \n> \n> I found out that this error did *NOT* occur with postgresql-6.3 and \n> looked at the changes since 6.3. It seems that the new function\n> \n> prune_joinrel() in the file\n> postgresql-6.3.2/src/backend/optimizer/path/prune.c\n> \n> is the reason for the error. 
When I replaced the new function by the\n> old one from version 6.3, the error did not occur any more.\n> \n> \n> Table definition:\n> -----------------\n> create table supplier (sid int4,\n> sname char(20),\n> city char(20));\n> \n> create table sells (pid int4,\n> sid int4);\n> \n> create table part (pname char(20),\n> pid int4, \n> cost int4); \n> \n> insert into supplier (sid, sname, city)\n> values (1,'stefan','wien');\n> insert into supplier (sid, sname, city)\n> values (2,'richi','breitenfurt');\n> insert into supplier (sid, sname, city)\n> values (3,'eva','breitenfurt');\n> insert into supplier (sid, sname, city)\n> values (4,'walter','wien');\n> insert into supplier (sid, sname, city)\n> values (5,'edith','moedling');\n> insert into supplier (sid, sname, city)\n> values (6,'manu','breitenfurt');\n> insert into supplier (sid, sname, city)\n> values (7,'hugo','moedling');\n> \n> insert into sells (pid, sid)\n> values (1,1);\n> insert into sells (pid, sid)\n> values (1,2);\n> insert into sells (pid, sid)\n> values (2,3);\n> insert into sells (pid, sid)\n> values (2,4);\n> insert into sells (pid, sid)\n> values (3,5);\n> insert into sells (pid, sid)\n> values (4,6);\n> insert into sells (pid, sid)\n> values (5,2);\n> insert into sells (pid, sid)\n> values (6,1);\n> \n> insert into part (pname, pid, cost)\n> values ('kabel',1,100);\n> insert into part (pname, pid, cost)\n> values ('patrone',2,200);\n> insert into part (pname, pid, cost)\n> values ('maus',3,500);\n> insert into part (pname, pid, cost)\n> values ('tastatur',4,750);\n> insert into part (pname, pid, cost)\n> values ('bildschirm',5,8430);\n> insert into part (pname, pid, cost)\n> values ('festplatte',6,3450);\n> \n> \n> Destroy tables:\n> ---------------\n> drop table part;\n> drop table supplier;\n> drop table sells;\n> \n> Regards \n> Stefan\n> -- \n> +------------------------------------------------------------------------+\n> + Simkovics Stefan +\n> + Student an der TU Wien (Informatik) +\n> + Tel.: 02239/3367 +\n> + email: [email protected] | [email protected] +\n> +------------------------------------------------------------------------+\n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 19 Jun 1998 22:58:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bug with prune_joinrel() in 6.3.2"
}
] |
[
{
"msg_contents": "Hi hackers.\n\nI am having problems with the new spinlock code on the SPARC linux\nplatform. (Latest CVSup)\n\nThe compiler doesn't seem to like the \"asm\" part of s_lock.h for (sparc)\n\nHere's one of the compiles that fails.\n\nmake[3]: Entering directory `/usr/local/pgsql/src/backend/storage/ipc'\ngcc -I../../../include -I../../../backend -O2 -Wall -Wmissing-prototypes \n-I../.. -c spin.c -o spin.o\n../../../include/storage/s_lock.h: In function `SpinAcquire':\n../../../include/storage/s_lock.h:131: inconsistent operand constraints in an \n`asm'\n../../../include/storage/s_lock.h:131: inconsistent operand constraints in an \n`asm'\n../../../include/storage/s_lock.h:131: inconsistent operand constraints in an \n`asm'\n../../../include/storage/s_lock.h:131: inconsistent operand constraints in an \n`asm'\n../../../include/storage/s_lock.h: In function `SpinRelease':\n../../../include/storage/s_lock.h:131: inconsistent operand constraints in an \n`asm'\n../../../include/storage/s_lock.h:131: inconsistent operand constraints in an \n`asm'\nmake[3]: *** [spin.o] Error 1 \n\nTh ccsym information may be useful.\n\n[postgres@sparclinux pgsql]$ src/tools/ccsym\n__GNUC__=2\n__GNUC_MINOR__=7\n__ELF__\nunix\nsparc\nlinux\n__ELF__\n__unix__\n__sparc__\n__linux__\n__unix\n__sparc\n__linux\nsystem=unix\nsystem=posix\ncpu=sparc\nmachine=sparc\n[postgres@sparclinux pgsql]$\n\nKeith.\n\n",
"msg_date": "Sat, 20 Jun 1998 14:06:30 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "s_lock.h problem on S/Linux"
},
{
"msg_contents": "> \n> Hi hackers.\n> \n> I am having problems with the new spinlock code on the SPARC linux\n> platform. (Latest CVSup)\n> \n> The compiler doesn't seem to like the \"asm\" part of s_lock.h for (sparc)\n> \n> Here's one of the compiles that fails.\n\nThank you for testing and reporting this. It is my fault of course, but as\nI don't have access to a sparc for testing I just did what I could. I am\nguessing here, but please apply the following to your pgsql and let me know\nwhat happens. Also, cd to src/storage/buffer and do 'make s_lock_test' as\nwell.\n\n \n*** src/include/storage/s_lock.h.orig\tSun Jun 14 19:37:47 1998\n--- src/include/storage/s_lock.h\tSat Jun 20 18:01:13 1998\n***************\n*** 130,136 ****\n \n __asm__(\"ldstub [%1], %0\" \\\n : \"=r\"(_res), \"=m\"(*lock) \\\n! : \"1\"(lock));\n \treturn (int) _res;\n }\n #endif /* sparc */\n--- 130,136 ----\n \n __asm__(\"ldstub [%1], %0\" \\\n : \"=r\"(_res), \"=m\"(*lock) \\\n! : \"0\"(_res));\n \treturn (int) _res;\n }\n #endif /* sparc */\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n",
"msg_date": "Sat, 20 Jun 1998 18:07:27 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] s_lock.h problem on S/Linux"
},
{
"msg_contents": "Patch applied.\n\n> > \n> > Hi hackers.\n> > \n> > I am having problems with the new spinlock code on the SPARC linux\n> > platform. (Latest CVSup)\n> > \n> > The compiler doesn't seem to like the \"asm\" part of s_lock.h for (sparc)\n> > \n> > Here's one of the compiles that fails.\n> \n> Thank you for testing and reporting this. It is my fault of course, but as\n> I don't have access to a sparc for testing I just did what I could. I am\n> guessing here, but please apply the following to your pgsql and let me know\n> what happens. Also, cd to src/storage/buffer and do 'make s_lock_test' as\n> well.\n> \n> \n> *** src/include/storage/s_lock.h.orig\tSun Jun 14 19:37:47 1998\n> --- src/include/storage/s_lock.h\tSat Jun 20 18:01:13 1998\n> ***************\n> *** 130,136 ****\n> \n> __asm__(\"ldstub [%1], %0\" \\\n> : \"=r\"(_res), \"=m\"(*lock) \\\n> ! : \"1\"(lock));\n> \treturn (int) _res;\n> }\n> #endif /* sparc */\n> --- 130,136 ----\n> \n> __asm__(\"ldstub [%1], %0\" \\\n> : \"=r\"(_res), \"=m\"(*lock) \\\n> ! : \"0\"(_res));\n> \treturn (int) _res;\n> }\n> #endif /* sparc */\n> \n> -dg\n> \n> David Gould [email protected] 510.628.3783 or 510.305.9468 \n> Informix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n> \"Don't worry about people stealing your ideas. If your ideas are any\n> good, you'll have to ram them down people's throats.\" -- Howard Aiken\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 18 Jul 1998 10:37:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] s_lock.h problem on S/Linux"
},
{
"msg_contents": "I haven't seen any followups to this, but I finally got around to\ncompiling the system again myself, and David's fix is not quite right.\nHe says to apply this:\n\n> *** src/include/storage/s_lock.h.orig\tSun Jun 14 19:37:47 1998\n> --- src/include/storage/s_lock.h\tSat Jun 20 18:01:13 1998\n> ***************\n> *** 130,136 ****\n> \n> __asm__(\"ldstub [%1], %0\" \\\n> : \"=r\"(_res), \"=m\"(*lock) \\\n> ! : \"1\"(lock));\n> \treturn (int) _res;\n> }\n> #endif /* sparc */\n> --- 130,136 ----\n> \n> __asm__(\"ldstub [%1], %0\" \\\n> : \"=r\"(_res), \"=m\"(*lock) \\\n> ! : \"0\"(_res));\n> \treturn (int) _res;\n> }\n> #endif /* sparc */\n\nHowever, the reference to the lock pointer as \"1\" was closer to being\ncorrect that then \"0\" is! The trouble is that the compiler doesn't\nlike the mixed indirection in the references for the lock pointer with\nthe \"1\" there in the original. Changing the input parameter as shown\nto indicate _res fixes this, but is wrong, since that's not the input.\nIn the current sources, the \"1\" has been changed to a \"0\", erroneously\ncalling _res an input, but the name of the variable to use is still\n'lock', making it really confusing by fetching the right input (the\npointer), and stuffing it into the wrong register -- and causing the\nassembler to join in the chorus of complaints when it sees the double\ndereferencing brackets in its source... :-)\n\nMuch better is to actually specify the constraints individually, and\nthen simply refer to the input parameter in the instruction. Here's\nthe patch I have to apply to the current sources to get it to compile\nand work right (I've tested it with s_lock_test, of course):\n\nIndex: s_lock.h\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/include/storage/s_lock.h,v\nretrieving revision 1.39\ndiff -r1.39 s_lock.h\n131,133c131,133\n< __asm__(\"ldstub [%1], %0\" \\\n< : \"=r\"(_res), \"=m\"(*lock) \\\n< : \"0\"(lock));\n---\n> __asm__(\"ldstub [%2], %0\"\n> : \"=r\" (_res), \"=m\" (*lock)\n> : \"r\" (lock));\n\nOh, yeah, I guess I didn't have to remove the backslashes, but this is\nthe GCC section, and they're not needed.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "19 Jul 1998 11:16:52 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] s_lock.h problem on S/Linux"
},
{
"msg_contents": "Patch applied.\n\n\n> I haven't seen any followups to this, but I finally got around to\n> compiling the system again myself, and David's fix is not quite right.\n> He says to apply this:\n> \n> > *** src/include/storage/s_lock.h.orig\tSun Jun 14 19:37:47 1998\n> > --- src/include/storage/s_lock.h\tSat Jun 20 18:01:13 1998\n> > ***************\n> > *** 130,136 ****\n> > \n> > __asm__(\"ldstub [%1], %0\" \\\n> > : \"=r\"(_res), \"=m\"(*lock) \\\n> > ! : \"1\"(lock));\n> > \treturn (int) _res;\n> > }\n> > #endif /* sparc */\n> > --- 130,136 ----\n> > \n> > __asm__(\"ldstub [%1], %0\" \\\n> > : \"=r\"(_res), \"=m\"(*lock) \\\n> > ! : \"0\"(_res));\n> > \treturn (int) _res;\n> > }\n> > #endif /* sparc */\n> \n> However, the reference to the lock pointer as \"1\" was closer to being\n> correct that then \"0\" is! The trouble is that the compiler doesn't\n> like the mixed indirection in the references for the lock pointer with\n> the \"1\" there in the original. Changing the input parameter as shown\n> to indicate _res fixes this, but is wrong, since that's not the input.\n> In the current sources, the \"1\" has been changed to a \"0\", erroneously\n> calling _res an input, but the name of the variable to use is still\n> 'lock', making it really confusing by fetching the right input (the\n> pointer), and stuffing it into the wrong register -- and causing the\n> assembler to join in the chorus of complaints when it sees the double\n> dereferencing brackets in its source... :-)\n> \n> Much better is to actually specify the constraints individually, and\n> then simply refer to the input parameter in the instruction. Here's\n> the patch I have to apply to the current sources to get it to compile\n> and work right (I've tested it with s_lock_test, of course):\n> \n> Index: s_lock.h\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/include/storage/s_lock.h,v\n> retrieving revision 1.39\n> diff -r1.39 s_lock.h\n> 131,133c131,133\n> < __asm__(\"ldstub [%1], %0\" \\\n> < : \"=r\"(_res), \"=m\"(*lock) \\\n> < : \"0\"(lock));\n> ---\n> > __asm__(\"ldstub [%2], %0\"\n> > : \"=r\" (_res), \"=m\" (*lock)\n> > : \"r\" (lock));\n> \n> Oh, yeah, I guess I didn't have to remove the backslashes, but this is\n> the GCC section, and they're not needed.\n> \n> -tih\n> -- \n> Popularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 19 Jul 1998 05:44:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] s_lock.h problem on S/Linux"
}
] |
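For reference, the sparc tas() in s_lock.h reads roughly as follows once Tom's constraint fix is applied. This is only a restatement of the patch above for readability; the slock_t typedef is an assumption here (shown as an unsigned char), not a quote from the header.

    typedef unsigned char slock_t;      /* assumption for this sketch */

    static int
    tas(slock_t *lock)
    {
        register slock_t _res;

        /*
         * ldstub atomically loads the byte at [lock] and stores 0xFF there.
         * With the fixed constraints, %0 (_res) is a register output,
         * %1 (*lock) is a memory output, and %2 (lock) is a register input
         * carrying the address, so the address is no longer tied to another
         * operand and the assembler no longer sees doubled brackets.
         */
        __asm__("ldstub [%2], %0"
                : "=r"(_res), "=m"(*lock)
                : "r"(lock));
        return (int) _res;
    }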
[
{
"msg_contents": "David,\n\nAfter making your suggested changes and then doing a \"make s_lock_test\"\nI get the following error messages.\n\n\n[postgres@sparclinux buffer]$ make s_lock_test\ngcc -g -I../../../include -I../../../backend -O2 -I../.. -DS_LOCK_TEST=1 \ns_lock.c -o s_lock_test\n/tmp/cca10794.s: Assembler messages:\n/tmp/cca10794.s:290: Error: Bad expression\n/tmp/cca10794.s:290: Error: Missing ')' assumed\n/tmp/cca10794.s:290: Error: Bad expression\n/tmp/cca10794.s:290: Error: Missing ')' assumed\n/tmp/cca10794.s:290: Error: Illegal operands\n/tmp/cca10794.s:440: Error: Bad expression\n/tmp/cca10794.s:440: Error: Missing ')' assumed\n/tmp/cca10794.s:440: Error: Bad expression\n/tmp/cca10794.s:440: Error: Missing ')' assumed\n/tmp/cca10794.s:440: Error: Illegal operands\nmake: *** [s_lock_test] Error 1 \n\nIf I compile with -save-temps and look at the s_lock.s file I see, on\nline 290:-\n\n.stabn 68,0,131,.LM9-s_lock\n.LM9:\n.LL45:\n ldstub [[%i0]], %o0 <-------\n.stabn 68,0,134,.LM10-s_lock\n.LM10:\n and %o0,0xff,%o0 \n \n\nThe double square braces look strange to me so I removed the single\nbraces in s_lock.h.\n\nThe modified file compiles OK and in s_lock.s I can see:-\n\n.stabn 68,0,131,.LM9-s_lock\n.LM9:\n.LL45:\n ldstub [%i0], %o0\n.stabn 68,0,134,.LM10-s_lock\n.LM10:\n and %o0,0xff,%o0 \n \nNow I know absoloutely nothing whatsoever about SPARC (or gnu)\nassembler so the code changes could result in nothing like a\ntest and set function but...\n\n[postgres@sparclinux buffer]$ make s_lock_test\ngcc -g -I../../../include -I../../../backend -O2 -I../.. -DS_LOCK_TEST=1 \ns_lock.c -o s_lock_test\n./s_lock_test\nS_LOCK_TEST: this will hang for a few minutes and then abort\n with a 'stuck spinlock' message if S_LOCK()\n and TAS() are working.\n\nFATAL: s_lock(00020be8) at s_lock.c:215, stuck spinlock. Aborting.\n\nFATAL: s_lock(00020be8) at s_lock.c:215, stuck spinlock. Aborting.\nmake: *** [s_lock_test] IOT trap/Abort (core dumped)\nmake: *** Deleting file `s_lock_test' \n\nRunning s_lock_test does result in a hang for a few minutes\nand then a \"stuck spinlock\" message so perhaps it's not all\nthat bad. (Not sure about the core dump though :-( )\n\nKeith.\n\n\n\n\[email protected] (David Gould)\n> \n> > \n> > Hi hackers.\n> > \n> > I am having problems with the new spinlock code on the SPARC linux\n> > platform. (Latest CVSup)\n> > \n> > The compiler doesn't seem to like the \"asm\" part of s_lock.h for (sparc)\n> > \n> > Here's one of the compiles that fails.\n> \n> Thank you for testing and reporting this. It is my fault of course, but as\n> I don't have access to a sparc for testing I just did what I could. I am\n> guessing here, but please apply the following to your pgsql and let me know\n> what happens. Also, cd to src/storage/buffer and do 'make s_lock_test' as\n> well.\n\n\n",
"msg_date": "Sun, 21 Jun 1998 16:59:04 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] s_lock.h problem on S/Linux"
},
{
"msg_contents": "> \n> David,\n> \n> After making your suggested changes and then doing a \"make s_lock_test\"\n> I get the following error messages.\n> \n> \n> [postgres@sparclinux buffer]$ make s_lock_test\n> gcc -g -I../../../include -I../../../backend -O2 -I../.. -DS_LOCK_TEST=1 \n> s_lock.c -o s_lock_test\n> /tmp/cca10794.s: Assembler messages:\n> /tmp/cca10794.s:290: Error: Bad expression\n> /tmp/cca10794.s:290: Error: Missing ')' assumed\n> /tmp/cca10794.s:290: Error: Bad expression\n> /tmp/cca10794.s:290: Error: Missing ')' assumed\n> /tmp/cca10794.s:290: Error: Illegal operands\n> /tmp/cca10794.s:440: Error: Bad expression\n> /tmp/cca10794.s:440: Error: Missing ')' assumed\n> /tmp/cca10794.s:440: Error: Bad expression\n> /tmp/cca10794.s:440: Error: Missing ')' assumed\n> /tmp/cca10794.s:440: Error: Illegal operands\n> make: *** [s_lock_test] Error 1 \n> \n> If I compile with -save-temps and look at the s_lock.s file I see, on\n> line 290:-\n> \n> .stabn 68,0,131,.LM9-s_lock\n> .LM9:\n> .LL45:\n> ldstub [[%i0]], %o0 <-------\n> .stabn 68,0,134,.LM10-s_lock\n> .LM10:\n> and %o0,0xff,%o0 \n> \n> \n> The double square braces look strange to me so I removed the single\n> braces in s_lock.h.\n> \n> The modified file compiles OK and in s_lock.s I can see:-\n> \n> .stabn 68,0,131,.LM9-s_lock\n> .LM9:\n> .LL45:\n> ldstub [%i0], %o0\n> .stabn 68,0,134,.LM10-s_lock\n> .LM10:\n> and %o0,0xff,%o0 \n> \n> Now I know absoloutely nothing whatsoever about SPARC (or gnu)\n> assembler so the code changes could result in nothing like a\n> test and set function but...\n> \n> [postgres@sparclinux buffer]$ make s_lock_test\n> gcc -g -I../../../include -I../../../backend -O2 -I../.. -DS_LOCK_TEST=1 \n> s_lock.c -o s_lock_test\n> ./s_lock_test\n> S_LOCK_TEST: this will hang for a few minutes and then abort\n> with a 'stuck spinlock' message if S_LOCK()\n> and TAS() are working.\n> \n> FATAL: s_lock(00020be8) at s_lock.c:215, stuck spinlock. Aborting.\n> \n> FATAL: s_lock(00020be8) at s_lock.c:215, stuck spinlock. Aborting.\n> make: *** [s_lock_test] IOT trap/Abort (core dumped)\n> make: *** Deleting file `s_lock_test' \n> \n> Running s_lock_test does result in a hang for a few minutes\n> and then a \"stuck spinlock\" message so perhaps it's not all\n> that bad. (Not sure about the core dump though :-( )\n> \n> Keith.\n\nThe core dump is expected. When a stuck spinlock is detected abort() is called\nwhich makes the core dump. So it looks like it is working.\n\nIf you would, please send me the s_lock.s file and a diff for you changes\nI will look at the generated code and if it looks ok will submit the patch.\n\nThanks,\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n",
"msg_date": "Sun, 21 Jun 1998 13:50:45 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] s_lock.h problem on S/Linux"
}
] |
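The behaviour described above (spin for a while, then report a stuck spinlock and call abort(), which is what leaves the core file) comes down to a bounded retry loop around the platform tas(). The following is only a sketch of that shape, not the code in s_lock.c; the stand-in tas, the retry count and the sleep interval are all made up for illustration.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Illustrative, NON-atomic stand-in for the real platform tas();
     * on sparc the real one is the ldstub sequence shown earlier. */
    static int
    tas_stub(unsigned char *lock)
    {
        int old = *lock;
        *lock = 1;
        return old;
    }

    static void
    s_lock_sketch(unsigned char *lock, const char *file, int line)
    {
        int spins = 0;

        while (tas_stub(lock))          /* non-zero: the lock was already held */
        {
            usleep(1000);               /* back off briefly before retrying */
            if (++spins > 120000)       /* "hang for a few minutes"... */
            {
                fprintf(stderr, "FATAL: stuck spinlock at %s:%d. Aborting.\n",
                        file, line);
                abort();                /* ...then abort(), producing the core dump */
            }
        }
    }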
[
{
"msg_contents": "> \n> On Sat, 20 Jun 1998, Bruce Momjian wrote:\n> \n> > > \n> > > On Fri, 19 Jun 1998, Bruce Momjian wrote:\n> > > \n> > > > > \n> > > > > Hi,\n> > > > > \n> > > > > I just send out a mail that I got a compile-error ona bsdi 2.1 machine. I \n> > > > > wanted to let you know I fixed it.\n> > > > > \n> > > > > The problem was that sys/un.h was included twice (directly from pqcomm.c \n> > > > > and indirectly through libpq/auth.h and down). On my system, this \n> > > > > include-file (sys/un.h) is not guarded by #ifndef's. I removed the \n> > > > > explicit #include in pqcomm.c and things compiled.\n> > > > \n> > > > Oh. good\n> > > \n> > > Same thing happened in interfaces/libpq/fe-connect.c, I also fixed that \n> > > locally. Should this be fixed upstream also?\n> > \n> > Sure. Send it over and I will check it against 3.1.\n> \n> Well, my fix was very simple, and might not be that portable. I just \n> commented out the lines that said '#include <sys/un.h>', since this file \n> was also included from libpq/auth.h. I think it's save to just remove \n> these lines from pqcomm.c and fe-connect.c.\n\nOK, removed include in both files.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 21 Jun 1998 12:38:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUGS] About failed compile on BSDI 2.1"
}
] |
[
{
"msg_contents": "I have tried to build Jun 20 snapshot on my linux-ppc box and failed\nto run configure. It seems the way for providing platform specific\ntemplates is changed, and Linux or MkLnux/PPC specific template is now\nrequired. Just copying the linux_sparc template as linux_ppc solved my\nproblem. I don't this is the correct solution, however.\n--\nTatsuo Ishii\[email protected]\n\n\n",
"msg_date": "Mon, 22 Jun 1998 10:50:10 +0900",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "linux-elf-ppc template?"
},
{
"msg_contents": "> \n> I have tried to build Jun 20 snapshot on my linux-ppc box and failed\n> to run configure. It seems the way for providing platform specific\n> templates is changed, and Linux or MkLnux/PPC specific template is now\n> required. Just copying the linux_sparc template as linux_ppc solved my\n> problem. I don't this is the correct solution, however.\n\nAdded to /templates.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 21 Jun 1998 22:05:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] linux-elf-ppc template?"
}
] |
[
{
"msg_contents": "I have built Jun 20 snapshot on my FreeBSD box. I have tried to run\ntest/bench and got following result.\n\nERROR: destroydb: database bench does not exist.\n\nThis is normal. Problem is the make command is aborted by a signal. My\nguesss is elog(ERROR) raises HUP signal, but for some reason nobody\ncatches it. Here is the place the problem occurs in runwisc.sh:\n\necho \"drop database bench\" | postgres -D${1} template1 > /dev/null\n\nComments?\n--\nTatsuo Ishii\[email protected]\n\n[srapc451.sra.co.jp]t-ishii{107} gmake all runtest\nif [ -z \"$USER\" ]; then USER=$LOGNAME; fi; \\\nif [ -z \"$USER\" ]; then USER=`whoami`; fi; \\\nif [ -z \"$USER\" ]; then echo 'Cannot deduce $USER.'; exit 1; fi; \\\nrm -f create.sql; \\\nC=`pwd`; \\\nsed -e \"s:_CWD_:$C:g\" \\\n -e \"s:_OBJWD_:$C:g\" \\\n -e \"s:_SLSUFF_::g\" \\\n -e \"s/_USER_/$USER/g\" < create.source > create.sql\nx=1; \\\nfor i in `ls query[0-9][0-9]`; do \\\n echo \"select $x as x\" >> bench.sql; \\\n cat $i >> bench.sql; \\\n x=`expr $x + 1`; \\\ndone\nrm -f bench.out bench.out.perquery\n/bin/sh ./create.sh $PGDATA && \\\n/bin/sh ./runwisc.sh $PGDATA >bench.out 2>&1\n=============== destroying old bench database... =================\nERROR: destroydb: database bench does not exist.\nERROR: destroydb: database bench does not exist.\ngmake: *** [bench.out] Hangup\nHangup\n",
"msg_date": "Mon, 22 Jun 1998 11:34:41 +0900",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "HUP signal handling?"
}
] |
[
{
"msg_contents": "If anybody has any solid ways of tracking this person and finding out the\ntrue originating hosts/domain, please email me privately. I get this\nmessage no less 4-5x/week and it comes with COMPLETEY different routes and\nhosts as they are apparently using relays.\n\nSorry for the off-topic post, but it will get worse before it gets better. :(\nEnrico\n\n\nOn 6/22/98, lj644f allegedly wrote:\n>Authenticated sender is <[email protected]>\n>Subject: Bull's Eye Targeting Software\n>Mime-Version: 1.0\n>Content-Type: text/plain; charset=\"us-ascii\"\n>Content-Transfer-Encoding: 7bit\n>\n>EMAIL MARKETING WORKS!!\n>\nsnip\n>CALL FOR MORE INFORMATION 213-427-5820\n>CALL FOR MORE INFORMATION 213-427-5820\nsnip\n> PLEASE CALL 213-427-5820 to process your order\n> 9am-5pm Pacific Time\n> Checks or Money Orders send to:\n> WorldTouch Network Inc.\n>5670 Wilshire Blvd. Suite 2170 Los Angeles, CA 90036\n>Please note: Allow 5 business days for all checks to\n>clear before order is shipped.\n\n\n--\nEnrico Cantu <[email protected]> http://www.bchs.uh.edu/~ecantu/\n Bioinformatics Programmer and Database Mangler\n Depts. of Biochemical & Biophysical Sciences and Biology\n University of Houston ; PGP key and GC at [email protected]\n \n\n\n",
"msg_date": "Sun, 21 Jun 1998 22:03:59 -0500",
"msg_from": "Enrico Cantu <[email protected]>",
"msg_from_op": true,
"msg_subject": "spam"
}
] |
[
{
"msg_contents": "I sent this to general:\n> I have written an aggregate function which returns a \"varlena\" type whose\n> length is proportional to the number of rows passed to the function.\n> \n> My question is what is the consensus on the best way to handle the case \n> where the input to the aggregate causes the length of the variable to \n> exceed the 8k limit? Abort the query? Generate an error message, and\n> discard subsequent data? Silently discard data? Something else?\n\nApparently, not only is there no consensus, but nobody seems to have any\nopinions either. So, I would like to solicit opinions.\n\nWhat is the proper thing for aggregate functions and operators to do when \nthey discover they need more than 8k for a variable length data type?\n\n\t-Michael Robinson\n\n",
"msg_date": "Mon, 22 Jun 1998 14:59:53 +0800 (GMT)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Gracefully handling variable-length data types at the 8k limit?"
}
] |
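Whichever policy is preferred, the mechanics inside the aggregate's transition function are the same: compute the would-be size before growing the varlena result and bail out when it no longer fits. Below is a sketch of the "report an error and abort the query" option; the 8192 figure and the helper's name are assumptions for illustration, not constants taken from the headers.

    #include <string.h>
    #include "postgres.h"       /* struct varlena, VARHDRSZ, VARSIZE, VARDATA;
                                 * palloc() and elog() assumed visible from here */

    #define SKETCH_MAX_RESULT 8192      /* assumed tuple-size ceiling */

    /*
     * Append "len" bytes to an existing varlena state value, refusing to
     * grow past the assumed limit.
     */
    static struct varlena *
    append_bytes_checked(struct varlena *state, const char *bytes, int len)
    {
        int             oldsize = VARSIZE(state);
        int             newsize = oldsize + len;
        struct varlena *result;

        if (newsize > SKETCH_MAX_RESULT)
            elog(ERROR, "aggregate result would exceed %d bytes", SKETCH_MAX_RESULT);

        result = (struct varlena *) palloc(newsize);
        VARSIZE(result) = newsize;
        memcpy(VARDATA(result), VARDATA(state), oldsize - VARHDRSZ);
        memcpy(VARDATA(result) + (oldsize - VARHDRSZ), bytes, len);
        return result;
    }

Silently discarding data would keep the query running but hides the truncation from the caller, so raising an error (or at least an elog(NOTICE)) seems the safer default.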
[
{
"msg_contents": "On Mon, 22 Jun 1998, David Schanen wrote:\n\n> Hi Marc & Mike,\n> \n> I wanted to check with you to see if you had seen my latest post to\n> pgsql-questions? \n\n\tpgsql-questions has been a discontinued mailing list for over a\nmonth now, actually...and, from the topic, this should be discussed on\npgsql-hackers anyway :)\n\n> Looking at the backtrace from the debug.core It seems to me like we are\n> still getting the BTP_CHAIN errors we saw in previous versions. \n\n\tYou are using v6.3.2+patches currently?\n\n\n> \n> The cause seems to be a corruption in a single record of a btree index\n> in very large tables(indices). If we simply restart the postgres\n> backends and try to query on the same record where it crashed before we\n> cause the crash again. If we query on any other record there is no\n> problem. If we reindex the problem goes away. Unfortunately this is a\n> high volume realtime telephony application and taking the system out of\n> service for twenty minutes to reindex could cause the loss of too much\n> data for thousands of calls and prevention of service to thousands more. \n> \n> I think the bug must be in writing of the index record or (more likely)\n> an adjacent index record but I don't know how how to find it. \n> \n> Marc I have been reluctant to include Vadim in these emails so far. Do\n> you think it is okay to bring him in on this one? I haven't had any\n> response from the post to the list. \n> \n> Mike have you tried compiling with debug? \n> \n> Below is the backtrace output from my debug.core. I see the BTP_CHAIN\n> error, am I missing something else that you can see? \n> \n> Thanks for your help!\n> \n> Best regards,\n> \n> -Dave\n> \n> PS. Marc, I haven't heard back from Mike since my earliest email. Have you heard\n> anything fom him?\n> \n> David Schanen wrote:\n> \n> > Maybe we are still having btree problems, but we no longer see the BTP_CHAIN\n> > error. Now we get:\n> >\n> > > IpcMemoryCreate: memKey=5432101 , size=2361552 ,\n> > > permission=384IpcMemoryCreate: shmget(..., create, ...) failed:\n> > > Cannot allocate memory\n> >\n> > Here is the backtrace output. Let me know if you need the core file.\n> >\n> > Thanks,\n> >\n> > -Dave\n> > ----------------\n> > Postgres 6.3.2\n> > Pentium II / 200 - 128M\n> > FreeBSD 3.0-981007-SNAP\n> >\n> > # gdb postgres postgres.core.save\n> > GDB is free software and you are welcome to distribute copies of it\n> > under certain conditions; type \"show copying\" to see the conditions.\n> > There is absolutely no warranty for GDB; type \"show warranty\" for details.\n> > GDB 4.16 (i386-unknown-freebsd),\n> > Copyright 1996 Free Software Foundation, Inc...\n> > Core was generated by `postgres'.\n> > Program terminated with signal 11, Segmentation fault.\n> > Cannot access memory at address 0x40ff080.\n> > #0 0x41a256a in ?? ()\n> > (gdb) bt\n> > #0 0x41a256a in ?? ()\n> > #1 0x41b7060 in ?? ()\n> > #2 0x415b5e5 in ?? 
()\n> > #3 0xc7578 in elog (lev=1,\n> > fmt=0x1381e \"btree: BTP_CHAIN flag was expected in %s (access = %s)\")\n> > at elog.c:121\n> > #4 0x1397f in _bt_moveright (rel=0x225290, buf=153, keysz=1,\n> > scankey=0x21b3d0, access=0) at nbtsearch.c:222\n> > #5 0x137e9 in _bt_searchr (rel=0x225290, keysz=1, scankey=0x21b3d0,\n> > bufP=0xefbfb664, stack_in=0x2405f0) at nbtsearch.c:127\n> > #6 0x136e7 in _bt_search (rel=0x225290, keysz=1, scankey=0x21b3d0,\n> > bufP=0xefbfb664) at nbtsearch.c:55\n> > #7 0x1014e in _bt_doinsert (rel=0x225290, btitem=0x21b390, index_is_unique=0,\n> > heapRel=0x21dd90) at nbtinsert.c:63\n> > #8 0x12f84 in btinsert (rel=0x225290, datum=0x2405b0, nulls=0x2405d0 \" \\002\",\n> > ht_ctid=0x1dd228, heapRel=0x21dd90) at nbtree.c:377\n> > #9 0xc8445 in fmgr_c (finfo=0xefbfb6f4, values=0xefbfb704,\n> > isNull=0xefbfb6f3 \"\") at fmgr.c:119\n> > #10 0xc8834 in fmgr (procedureId=331) at fmgr.c:290\n> > #11 0xc6d5 in index_insert (relation=0x225290, datum=0x2405b0,\n> > nulls=0x2405d0 \" \\002\", heap_t_ctid=0x1dd228, heapRel=0x21dd90)\n> > at indexam.c:180\n> > #12 0x3a178 in ExecInsertIndexTuples (slot=0x1cbc10, tupleid=0x1dd228,\n> > estate=0x1d8310, is_update=0) at execUtils.c:1156\n> > #13 0x36fa9 in ExecAppend (slot=0x1cbc10, tupleid=0x0, estate=0x1d8310)\n> > at execMain.c:1010\n> > #14 0x36dfe in ExecutePlan (estate=0x1d8310, plan=0x1d8210,\n> > parseTree=0x225910, operation=CMD_INSERT, numberTuples=0,\n> > direction=ForwardScanDirection, printfunc=0x3520 <printtup>)\n> > at execMain.c:814\n> > #15 0x36751 in ExecutorRun (queryDesc=0x230f50, estate=0x1d8310, feature=3,\n> > count=0) at execMain.c:236\n> > #16 0xa01db in ProcessQueryDesc (queryDesc=0x230f50) at pquery.c:332\n> > #17 0xa0246 in ProcessQuery (parsetree=0x225910, plan=0x1d8210, argv=0x0,\n> > typev=0x0, nargs=0, dest=Remote) at pquery.c:378\n> > #18 0x9e3dd in pg_exec_query_dest (\n> > query_string=0xefbfb934 \"insert into acct_history (acct_no, activity_date,\n> > origination, destination, duration, amount, balance, changed_by, changed_on)\n> > VALUES ( '126587291393', 'Wed Jun 17 18:38:06 1998', '0213906996', '79028\"...,\n> > argv=0x0, typev=0x0, nargs=0, dest=Remote) at postgres.c:699\n> > #19 0x9e290 in pg_exec_query (\n> > query_string=0xefbfb934 \"insert into acct_history (acct_no, activity_date,\n> > origination, destination, duration, amount, balance, changed_by, changed_on)\n> > VALUES ( '126587291393', 'Wed Jun 17 18:38:06 1998', '0213906996', '79028\"...,\n> > argv=0x0, typev=0x0, nargs=0) at postgres.c:601\n> > #20 0x9fa31 in PostgresMain (argc=9, argv=0xefbfd978) at postgres.c:1382\n> > #21 0x49bfa in main (argc=9, argv=0xefbfd978) at main.c:106\n> > (gdb)\n> >\n> > The Hermit Hacker wrote:\n> >\n> > > On Mon, 8 Jun 1998, David Schanen wrote:\n> > >\n> > > > a) I compiled 6.3.2 with CASSERT as recommended by vadim in one of\n> > > > his posts. What does this do for me exactly? Could this be the reason\n> > > > we aren't seeing the error report any longer?\n> > >\n> > > CASSERT shouldn't be used in production, only in development...can\n> > > you send in a trace of what the core shows?\n> > >\n> > > > b) Can someone explain what causes the BTP_CHAIN error above?\n> > >\n> > > all I know is that its an index corruption only fixed by dropping\n> > > and recreating the index. 
v6.3.2 tells you which table is generating the\n> > > BTP_CHAIN error as part of its error message...\n> > >\n> > > > b) How dangerous do you think it is to continue to run the database\n> > > > in this condition?\n> > >\n> > > My experience: the index is useless when the condition is\n> > > triggered...\n> > >\n> > > Marc G. Fournier\n> > > Systems Administrator @ hub.org\n> > > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> \n> \n> \n\n",
"msg_date": "Mon, 22 Jun 1998 07:49:18 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: btree: BTP_CHAIN flag was expected (revisited)"
},
{
"msg_contents": "\n\nThe Hermit Hacker wrote:\n\n> On Mon, 22 Jun 1998, David Schanen wrote:\n>\n> > Hi Marc & Mike,\n> >\n> > Looking at the backtrace from the debug.core It seems to me like we are\n> > still getting the BTP_CHAIN errors we saw in previous versions.\n>\n> You are using v6.3.2+patches currently?\n\n Actually, we are using the basic 6.3.2, no patches applied yet. I'll look into that.\nIs this a known bug fix in the current patch release?\n\n -Dave\n\n",
"msg_date": "Tue, 23 Jun 1998 03:55:17 -0700",
"msg_from": "David Schanen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: btree: BTP_CHAIN flag was expected (revisited)"
},
{
"msg_contents": "On Tue, 23 Jun 1998, David Schanen wrote:\n\n> \n> \n> The Hermit Hacker wrote:\n> \n> > On Mon, 22 Jun 1998, David Schanen wrote:\n> >\n> > > Hi Marc & Mike,\n> > >\n> > > Looking at the backtrace from the debug.core It seems to me like we are\n> > > still getting the BTP_CHAIN errors we saw in previous versions.\n> >\n> > You are using v6.3.2+patches currently?\n> \n> Actually, we are using the basic 6.3.2, no patches applied yet. I'll look into that.\n> Is this a known bug fix in the current patch release?\n\n\tI haven't seen it since upgrading my server(s) to v6.3.2+patches,\nwhere I saw it relatively often before hand...but I won't guarantee that\nthat just hasn't been luck either...\n\n\n",
"msg_date": "Tue, 23 Jun 1998 14:59:14 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: btree: BTP_CHAIN flag was expected (revisited)"
}
] |
[
{
"msg_contents": "Since the removal of exec(), Thomas has seen, and I have confirmed that\nif a backend crashes, and the postmaster must reset the shared memory,\nno backends can connect anymore. One way to reproduce it is to run the\nregression tests, which on their last test will crash for an un-related\nreason. However, it will not allow you to restart any more backends.\n\nThe error it gets is:\n\nFailed Assertion(\"!((((unsigned long)nextElem) > ShmemBase)):\", File: \"shmqueue.\nc\", Line: 83)\n!((((unsigned long)nextElem) > ShmemBase)) (0) [No such file or directory]\n\nIn this case nextElem = ShmemBase, so it is not greater. Removing the\nAssert() still does not make things work, so there must be something\nelse.\n\nNow, the problem is probably not at that exact spot, but somewhere\ndeeper. There are two differences between the old non-exec() behavior\nand new behavior. In the old setup, the backend had all its global\nvariables initialized, while in the new no-exec case, they take the\nglobal variable values from the postmaster. Second, the old setup had\neach backend attaching to the shared memory, while the new setup has\nthem inheriting the shared memory from the fork().\n\nMy guess is that there is something buggy about the reset code in\npostmaster.c that was not resetting completely, but the initialization\nof the global variables in the backend was masking the bug, or the\nattach() operation did some extra work that we now need to do when\nresetting the shared memory:\n\t\n\tstatic void\n\treset_shared(short port)\n\t{\n\t ipc_key = port * 1000 + shmem_seq * 100;\n\t CreateSharedMemoryAndSemaphores(ipc_key);\n\t ActiveBackends = FALSE;\n\t shmem_seq += 1;\n\t if (shmem_seq >= 10)\n\t shmem_seq -= 10;\n\t}\n\n\nI am stumped on this.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 22 Jun 1998 10:45:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem after removal of exec(), help"
},
{
"msg_contents": "> \n> Since the removal of exec(), Thomas has seen, and I have confirmed that\n> if a backend crashes, and the postmaster must reset the shared memory,\n> no backends can connect anymore. One way to reproduce it is to run the\n> regression tests, which on their last test will crash for an un-related\n> reason. However, it will not allow you to restart any more backends.\n> \n> The error it gets is:\n> \n> Failed Assertion(\"!((((unsigned long)nextElem) > ShmemBase)):\", File: \"shmqueue.\n> c\", Line: 83)\n> !((((unsigned long)nextElem) > ShmemBase)) (0) [No such file or directory]\n> \n> In this case nextElem = ShmemBase, so it is not greater. Removing the\n> Assert() still does not make things work, so there must be something\n> else.\n> \n> Now, the problem is probably not at that exact spot, but somewhere\n> deeper. There are two differences between the old non-exec() behavior\n> and new behavior. In the old setup, the backend had all its global\n> variables initialized, while in the new no-exec case, they take the\n> global variable values from the postmaster. Second, the old setup had\n> each backend attaching to the shared memory, while the new setup has\n> them inheriting the shared memory from the fork().\n> \n> My guess is that there is something buggy about the reset code in\n> postmaster.c that was not resetting completely, but the initialization\n> of the global variables in the backend was masking the bug, or the\n> attach() operation did some extra work that we now need to do when\n> resetting the shared memory:\n> \t\n> \tstatic void\n> \treset_shared(short port)\n> \t{\n> \t ipc_key = port * 1000 + shmem_seq * 100;\n> \t CreateSharedMemoryAndSemaphores(ipc_key);\n> \t ActiveBackends = FALSE;\n> \t shmem_seq += 1;\n> \t if (shmem_seq >= 10)\n> \t shmem_seq -= 10;\n> \t}\n> \n> \n> I am stumped on this.\n\nNo help here, but a request:\n\nCould we have an option to do the fork()/exec() the old way as well as the\nnew sleek fork() only. I want to do some performance testing under gprof and\nwant to be able to replace my postgres binary with a shell script to save\nthe gmon.out file eg:\n\n#!/bin/sh\npostgres.bin $*\nmv gmon.out gmon.$$\n\nThis won't work unless and exec() is done.\n\n-dg\n \nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n",
"msg_date": "Mon, 22 Jun 1998 18:51:39 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem after removal of exec(), help"
},
{
"msg_contents": "> No help here, but a request:\n> \n> Could we have an option to do the fork()/exec() the old way as well as the\n> new sleek fork() only. I want to do some performance testing under gprof and\n> want to be able to replace my postgres binary with a shell script to save\n> the gmon.out file eg:\n> \n> #!/bin/sh\n> postgres.bin $*\n> mv gmon.out gmon.$$\n> \n> This won't work unless and exec() is done.\n\nI am confused. What doesn't work without the exec()?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 23 Jun 1998 00:23:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Problem after removal of exec(), help"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Since the removal of exec(), Thomas has seen, and I have confirmed that\n> if a backend crashes, and the postmaster must reset the shared memory,\n> no backends can connect anymore. One way to reproduce it is to run the\n> regression tests, which on their last test will crash for an un-related\n> reason. However, it will not allow you to restart any more backends.\n> \n> The error it gets is:\n> \n> Failed Assertion(\"!((((unsigned long)nextElem) > ShmemBase)):\", File: \"shmqueue.\n> c\", Line: 83)\n> !((((unsigned long)nextElem) > ShmemBase)) (0) [No such file or directory]\n> \n> In this case nextElem = ShmemBase, so it is not greater. Removing the\n> Assert() still does not make things work, so there must be something\n> else.\n> \n> Now, the problem is probably not at that exact spot, but somewhere\n> deeper. There are two differences between the old non-exec() behavior\n> and new behavior. In the old setup, the backend had all its global\n> variables initialized, while in the new no-exec case, they take the\n> global variable values from the postmaster. Second, the old setup had\n> each backend attaching to the shared memory, while the new setup has\n> them inheriting the shared memory from the fork().\n\nBruce,\nI have not look into it the specifics yet,\nbut I suggest looking into what is done when\nthe child process exits.\nThis (the pg_exit() et al.) caused some bugs\nwhen we introduced unix domain sockets and\nit is not the first place one looks. :-(\n\n\tregards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n",
"msg_date": "Tue, 23 Jun 1998 13:11:26 +0200",
"msg_from": "Goran Thyni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem after removal of exec(), help"
},
{
"msg_contents": "> Bruce,\n> I have not look into it the specifics yet,\n> but I suggest looking into what is done when\n> the child process exits.\n> This (the pg_exit() et al.) caused some bugs\n> when we introduced unix domain sockets and\n> it is not the first place one looks. :-(\n\nAre you suggesting that because one of the backends did not exit\ncleanly, that there is some problem?\n\nBecause the postmaster is resetting all shared memory at that point, I\nam not sure that is the area. I have been thinking about it, and my\nguess is that one of the initialization functions (lock?) just appends\nto the lock queue on restart, instead of clearing it first, and a\nbackend that does exec() starts out with clean global variables, which\nthey now do not.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 23 Jun 1998 11:29:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Problem after removal of exec(), help"
},
{
"msg_contents": "> \n> > No help here, but a request:\n> > \n> > Could we have an option to do the fork()/exec() the old way as well as the\n> > new sleek fork() only. I want to do some performance testing under gprof and\n> > want to be able to replace my postgres binary with a shell script to save\n> > the gmon.out file eg:\n> > \n> > #!/bin/sh\n> > postgres.bin $*\n> > mv gmon.out gmon.$$\n> > \n> > This won't work unless and exec() is done.\n> \n> I am confused. What doesn't work without the exec()?\n\nReplacing the postgres binary with a shell script that executes the real\npostgres binary and then moves the gmon.out file out of the way.\n\n $ mv postgres postgres.bin\n $cat > postgres\n #!/bin/sh\n postgres.bin $*\n mv gmon.out gmon.$$\n ^D\n $ postmaster ...\n $ psql template1\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n",
"msg_date": "Tue, 23 Jun 1998 11:22:27 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem after removal of exec(), help"
},
{
"msg_contents": "> Replacing the postgres binary with a shell script that executes the real\n> postgres binary and then moves the gmon.out file out of the way.\n> \n> $ mv postgres postgres.bin\n> $cat > postgres\n> #!/bin/sh\n> postgres.bin $*\n> mv gmon.out gmon.$$\n> ^D\n> $ postmaster ...\n> $ psql template1\n> \n\nAh, I see. Re-enabling exec() is not a trivial job. Perhaps you can put a\nsystem(\"mv ...\") call in the postmaster backend cleanup code.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 23 Jun 1998 14:42:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Problem after removal of exec(), help"
},
{
"msg_contents": "> \n> Since the removal of exec(), Thomas has seen, and I have confirmed that\n> if a backend crashes, and the postmaster must reset the shared memory,\n> no backends can connect anymore. One way to reproduce it is to run the\n> regression tests, which on their last test will crash for an un-related\n> reason. However, it will not allow you to restart any more backends.\n> \n> The error it gets is:\n> \n> Failed Assertion(\"!((((unsigned long)nextElem) > ShmemBase)):\", File: \"shmqueue.\n> c\", Line: 83)\n> !((((unsigned long)nextElem) > ShmemBase)) (0) [No such file or directory]\n> \n> In this case nextElem = ShmemBase, so it is not greater. Removing the\n> Assert() still does not make things work, so there must be something\n> else.\n> \n> Now, the problem is probably not at that exact spot, but somewhere\n> deeper. There are two differences between the old non-exec() behavior\n> and new behavior. In the old setup, the backend had all its global\n> variables initialized, while in the new no-exec case, they take the\n> global variable values from the postmaster. Second, the old setup had\n> each backend attaching to the shared memory, while the new setup has\n> them inheriting the shared memory from the fork().\n\nI have fixed the problem. The problem was that InitMultiLevelLocks()\nwas not re-initializing the LockTable, which was still pointing to the\nold shared memory lock structures, not the new ones in the new shared\nmemory segment.\n\nI had to change InitMultiLevelLocks so it always reset the memory, and\nforce LockTableInit to set Numtables in lock.c to 1 on startup, so it\nre-creates the LOCKTAB entries that do not point to the old shared\nmemory stuff.\n\nI also replaces on_exitpg with new on_proc_exit and on_shmem_exit() to\nclarify when these are being run, and removed quasi_exit().\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 27 Jun 1998 01:14:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Problem after removal of exec(), help"
}
] |
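Bruce's suggestion near the end of the thread, doing the rename from inside the backend's cleanup path instead of through an exec'd wrapper script, would look roughly like the following. Where exactly to hook it in (and whether the profiler has flushed gmon.out by that point) is left open; this is only a sketch of the idea, not code from the tree.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* When profiling, call this from the backend exit path so each
     * process keeps its own profile as gmon.<pid>. */
    static void
    save_gmon_out(void)
    {
        char cmd[64];

        sprintf(cmd, "mv gmon.out gmon.%d", (int) getpid());
        system(cmd);
    }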
[
{
"msg_contents": "Hi All,\n\nIn the latest CVS I can get the backend to terminate quite\neasily with a divide by 0.\n\npostgres=> select 1/0;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or while \nprocessing the request.\nWe have lost the connection to the backend, so further processing is impossible. \n Terminating.\n[postgres@sparclinux pgsql]$ \n\nA bt on the core shows:-\n\nProgram received signal SIGILL, Illegal instruction.\n0xe0155718 in .div ()\n(gdb) bt \n#0 0xe0155718 in .div ()\n#1 0xc6bcc in int4div (arg1=1, arg2=0) at int.c:523\n\nI don't know if this is recently introduced behaviour or if\nit's platform dependant. I can't recall trying this before\nso maybe it's always happened on S/Linux.\n\nMy immediate thought is to include a check for divide by 0\nin the intXXdiv() functions and do something like an elog(WARN,...)\n\nFirstly, what do other people get on their platform?\n\nKeith.\n\n",
"msg_date": "Mon, 22 Jun 1998 20:40:15 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Divide by zero error on SPARC/Linux."
},
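A sketch of the guard Keith proposes, patterned on the float8div check quoted in the follow-up messages; treat it as a paraphrase of what such a change to utils/adt/int.c could look like, not as a patch against the actual source.

    #include "postgres.h"       /* int32 */
    #include "utils/elog.h"     /* elog(), if not already pulled in */

    /*
     * Hypothetical zero-divide guard for int4div(): raise an error
     * before the hardware ever executes the division, mirroring the
     * existing "if (*arg2 == 0.0) elog(ERROR, ...)" test in
     * utils/adt/float.c.
     */
    int32
    int4div(int32 arg1, int32 arg2)
    {
        if (arg2 == 0)
            elog(ERROR, "int4div: divide by zero error");
        return arg1 / arg2;
    }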
{
"msg_contents": "> \n> Hi All,\n> \n> In the latest CVS I can get the backend to terminate quite\n> easily with a divide by 0.\n> \n> postgres=> select 1/0;\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally before or while \n> processing the request.\n> We have lost the connection to the backend, so further processing is impossible. \n> Terminating.\n> [postgres@sparclinux pgsql]$ \n> \n> A bt on the core shows:-\n> \n> I don't know if this is recently introduced behaviour or if\n> it's platform dependant. I can't recall trying this before\n> so maybe it's always happened on S/Linux.\n> \n> My immediate thought is to include a check for divide by 0\n> in the intXXdiv() functions and do something like an elog(WARN,...)\n> \n> Firstly, what do other people get on their platform?\n\nI get:\n\nttest=> select 1/0;\nERROR: floating point exception! The last floating point operation\neither exceeded legal ranges or was a divide by zero\n\nso it looks like the signal. Check these lines:\n\n#$ gid FloatExceptionHandler\nbackend/postmaster/postmaster.c:1247: pqsignal(SIGFPE, FloatExceptionHandler);\nbackend/tcop/postgres.c:772: FloatExceptionHandler(SIGNAL_ARGS)\ninclude/tcop/tcopprot.h:37: extern void FloatExceptionHandler(SIGNAL_ARGS);\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 22 Jun 1998 17:07:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Divide by zero error on SPARC/Linux."
}
] |
[
{
"msg_contents": "Bruce,\n\nThat's what I would have half expected too but am I actually using\nany floating point arithmetic?\n\nThe function called is int4div() which simply does a, return (arg1/arg2),\nso I would have expected integer arithmetic.\n\nIf I cast to float I get:-\n\npostgres=> select 1::float8/0::float8;\nERROR: float8div: divide by zero error\npostgres=>\n\nThe error coming courtesy of :-\n\n if (*arg2 == 0.0)\n elog(ERROR, \"float8div: divide by zero error\");\n \nin backend/utils/adt/float.c:604\n\nI'm still puzzled but lean towards a signal problem too.\n\nI do get FP exceptions as there's one in the float8.out regression\ntest, where there shouldn't be one!!\n\nQUERY: SELECT '' AS bad, : (f.f1) from FLOAT8_TBL f;\nERROR: floating point exception! The last floating point operation either \nexceeded legal ranges or was a divide by zero\nQUERY: SELECT '' AS bad, f.f1 / '0.0' from FLOAT8_TBL f;\nERROR: float8div: divide by zero error \n\n\nKeith.\n\n\nBruce Momjian <[email protected]>\n> > \n> > Hi All,\n> > \n> > In the latest CVS I can get the backend to terminate quite\n> > easily with a divide by 0.\n> > \n> > postgres=> select 1/0;\n> > pqReadData() -- backend closed the channel unexpectedly.\n> > This probably means the backend terminated abnormally before or \nwhile \n> > processing the request.\n> > We have lost the connection to the backend, so further processing is \nimpossible. \n> > Terminating.\n> > [postgres@sparclinux pgsql]$ \n> > \n> > A bt on the core shows:-\n> > \n> > I don't know if this is recently introduced behaviour or if\n> > it's platform dependant. I can't recall trying this before\n> > so maybe it's always happened on S/Linux.\n> > \n> > My immediate thought is to include a check for divide by 0\n> > in the intXXdiv() functions and do something like an elog(WARN,...)\n> > \n> > Firstly, what do other people get on their platform?\n> \n> I get:\n> \n> ttest=> select 1/0;\n> ERROR: floating point exception! The last floating point operation\n> either exceeded legal ranges or was a divide by zero\n> \n> so it looks like the signal. Check these lines:\n> \n> #$ gid FloatExceptionHandler\n> backend/postmaster/postmaster.c:1247: pqsignal(SIGFPE, \nFloatExceptionHandler);\n> backend/tcop/postgres.c:772: FloatExceptionHandler(SIGNAL_ARGS)\n> include/tcop/tcopprot.h:37: extern void FloatExceptionHandler(SIGNAL_ARGS);\n> \n\n",
"msg_date": "Mon, 22 Jun 1998 23:52:26 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Divide by zero error on SPARC/Linux."
},
{
"msg_contents": "> \n> Bruce,\n> \n> That's what I would have half expected too but am I actually using\n> any floating point arithmetic?\n> \n> The function called is int4div() which simply does a, return (arg1/arg2),\n> so I would have expected integer arithmetic.\n> \n> If I cast to float I get:-\n> \n> postgres=> select 1::float8/0::float8;\n> ERROR: float8div: divide by zero error\n> postgres=>\n\nI recommend you put debugs around where it is crashing, and the\nreproduce it in a C program, with the same signal() call to catch it,\nand see if it works.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 23 Jun 1998 00:16:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Divide by zero error on SPARC/Linux."
}
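In that spirit, here is a small standalone test (nothing PostgreSQL-specific in it; the handler is only a stand-in for FloatExceptionHandler): it installs a SIGFPE handler the same way and then forces an integer divide by zero. If the program dies without printing anything, the kernel delivered some other signal, such as the SIGILL seen in the SPARC/Linux backtrace, and a SIGFPE handler in the backend would never get the chance to run.

    #include <stdio.h>
    #include <stdlib.h>
    #include <signal.h>

    /* Stand-in for the backend's FloatExceptionHandler. */
    static void
    fpe_handler(int sig)
    {
        printf("caught signal %d (SIGFPE is %d)\n", sig, SIGFPE);
        exit(0);
    }

    int
    main(void)
    {
        volatile int numerator = 1;
        volatile int denominator = 0;   /* volatile: force a real run-time divide */

        signal(SIGFPE, fpe_handler);

        printf("%d\n", numerator / denominator);    /* should trap here */

        printf("no trap at all: the divide by zero simply returned\n");
        return 0;
    }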
] |
[
{
"msg_contents": "RE: ability to access tables from different DB in same SQL query\n\nI have a good example right now of why this is a good feature, \nand why I suspect it is impractical or impossible to implement \none of my bread-n-butter sybase-based apps using PG. This is \nrather a fraught question, because NOT migrating to PG may cost \nus thousands of dollars. More like tens of thousands, since we \nwould need to bring our old licenses up to date (I just found a \nhorrendous bug in Sybase SQL Server 4.9 date arithmetic) and \nbuy a couple of new licenses for our project. So a lot is\nriding on our ability to port existing apps to PG.\n\nI don't think this feature is in SQL92, so I'm making a plea for\nan \"extra\", another way in which PG can be \"better than\" the bare\nstandard.\n\n-------\n\nThere is a telemetry gathering daemon (several actually) dumping\ntelemetry into tables. The tables are large (within the limits\nof Sybase, which are 250 fields per record and 3700-some bytes).\nSay about 220 fields per record, in the high hundreds of bytes,\nall ints and floats, and the tables are ever growing in length.\nSay between 10K and 250K records depending on how long the logger\nhas been running and at what frequency telemetry has been gathered.\n\nThe engineering staff person wants to do some analysis on these\ndata. The front end app provides an easy, friendly way to select\na date range (or other RSE) from the huge mass of telemetry,\nand to refine the FSE.\n\nThe selected data are then copied into a temp table using a \nSELECT INTO.\n\nHere's the point, then. The user's temp table wants to live in\na DB with generous permissions: ordinary users can create and\ndelete tables! But the original telemetry data want to live in\na very protected DB where users absolutely cannot mess with the\noriginal tables OR go creating tables of their own that compete\nwith the originals for storage space.\n\nThis is where two important features of Sybase come in handy, and\nI don't think Oracle does this (correct me, O Oracle users, if I'm\nslandering the product): Different DB can be located on different\npartitions, or different inviolable physical chunks of one\npartition. One server can \"see\" multiple DB. And Sybase SQL queries \ncan span databases, that is, the database name is part of the FQON \n(fully qual object name). So it's as easy as\n\n\tselect * into sandbox.guest.DMyn_de897082595_D1 from\n\t\ttelem.dbo.hires_Log_1 where logstamp between\n\t\t'Jun 19 1998 03:00' and 'Jun 19 1988 08:00'\n\nThe user has no privs other than 'select' anywhere in the telem DB.\nHe/she now works freely with the smaller, lighter table in the\nsandbox DB (*not* repeating expensive queries against the potentially\nvery, very large table in 'telem'). The user could, of course, \nspecify a very large range of data and create quite a large temp table \n(there are some safety limits in the app, but they are generous). \nThe worst result of this would be that the user would get bored waiting \nfor the query to return :-), the dataset would be too big to plot easily, \nand other users of the sandbox DB would be annoyed when all the disk \nspace was used up.\n\nNow, as I understand PG, the user would have to create the temp table \nin the incoming telemetry db, because the SELECT query could not\nreference tables in 2 different DB. 
So he/she would have to be granted \ntable create/delete in a place where creation of large tables could \neasily interfere with the essential job of logging the incoming telemetry.\nNot acceptable. The user playpen has to be separate from the incoming\nproduction data.\n\nIf PG offers some other way of \n\n\tguaranteeing inviolable space for tables that grow with a \n\t\trealtime feed, \n\tyet making those tables accessible -- via SQL query --\n\t\tto nonpriv users who want to grab chunks of the data into \n\t\ttemp tables created on the fly, \n\nI would very much like to know how to do that. Yeah, I could do it \nby buffering the data row by row in the app and then inserting it from \nthe app into the new table, but what the heck is SQL for if not to do \nthat job more efficiently and concisely? SELECT INTO is the right\nsyntax. But the limitations of PG prevent my using it.\n\nYeah, I could mirror the whole telemetry db to a \"public\" server \nperiodically, and let the users query that. But the engineers want \nup-to-the-last-sample data to analyze; they can change the sampling \nspeed in real time, if something interesting is happening; they don't \nwant to wait for the 24-hourly or 12-hourly flush to the warehouse.\n\nSolutions using psql special commands are not acceptable; it has to\nbe SQL.\n\nI hope this makes the case for cross-DB queries, which I have agitated\nfeebly for in the past but never really justified. If not (if there is\na good way to achieve the same result in PG without x-db queries) then\nplease do tell!\n\nde\n\n",
"msg_date": "Mon, 22 Jun 1998 16:02:00 -0700 (PDT)",
"msg_from": "De Clarke <[email protected]>",
"msg_from_op": true,
"msg_subject": "SQL queries accessing tables in more than one db"
},
{
"msg_contents": "De Clarke wrote:\n> \n> RE: ability to access tables from different DB in same SQL query\n\n[snip]\n\n> Here's the point, then. The user's temp table wants to live in\n> a DB with generous permissions: ordinary users can create and\n> delete tables! But the original telemetry data want to live in\n> a very protected DB where users absolutely cannot mess with the\n> original tables OR go creating tables of their own that compete\n> with the originals for storage space.\n> \n> \tselect * into sandbox.guest.DMyn_de897082595_D1 from\n> \t\ttelem.dbo.hires_Log_1 where logstamp between\n> \t\t'Jun 19 1998 03:00' and 'Jun 19 1988 08:00'\n\nA couple of comments on this. Sybase does have quite an elegant\nsystem for this <db>.<owner>.<table>.<column> for simple queries,\ntable can be omitted, when not ambiguous, user can be omitted and the\nfull form can be used in all queries if the permissions are right. So\nyou could just as easily say:\n\nselect * from telem..hires_Log_1 where logstamp=sandbox..table1.timestamp\n\nNow we could take it one step further and put the name of the database\nserver before all of this, so you could say:\n\nselect * from office1.sales..saleslog,\n office2.sales..saleslog,\n... etc\n\nI wouldn't vouch for performance in this case though.\n\nOcie\n \n",
"msg_date": "Mon, 22 Jun 1998 16:24:39 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SQL queries accessing tables in more than one db"
}
] |
[
{
"msg_contents": "\n\n subscribe\n\n\n",
"msg_date": "Mon, 22 Jun 1998 16:19:10 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "None"
}
] |
[
{
"msg_contents": "\nThe machine has 128Meg of RAM, 256Meg of SWAP, and SWAP isn't even being\ntouched...\n\n\nacctng=> insert into radhist select * from radhist_old;\nFATAL 1: palloc failure: memory exhausted\nacctng=> \\q\n> pstat -s\nDevice 1K-blocks Used Avail Capacity Type\n/dev/sd0s1b 256000 0 255872 0% Interleaved\n> psql\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: acctng\n\nacctng=> select count(start_time) from radhist_old;\n count\n------\n295850\n(1 row)\n\nacctng=>\n\n",
"msg_date": "Tue, 23 Jun 1998 14:10:29 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "I thought we had fixed this for v6.3.2 ...?"
},
{
"msg_contents": "> \n> \n> The machine has 128Meg of RAM, 256Meg of SWAP, and SWAP isn't even being\n> touched...\n> \n> \n> acctng=> insert into radhist select * from radhist_old;\n> FATAL 1: palloc failure: memory exhausted\n\nTry this before starting postmaster:\n\n\t:\n\tulimit -d 65536 2>/dev/null\n\tulimit -c 0 2>/dev/null\n\tlimit datasize 64m 2>/dev/null\n\tlimit cordumpsize 0 2>/dev/null\n\nSome proc limit is being exceeded.\n\n> acctng=> \\q\n> > pstat -s\n> Device 1K-blocks Used Avail Capacity Type\n> /dev/sd0s1b 256000 0 255872 0% Interleaved\n> > psql\n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> \n> type \\? for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: acctng\n> \n> acctng=> select count(start_time) from radhist_old;\n> count\n> ------\n> 295850\n> (1 row)\n> \n> acctng=>\n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 23 Jun 1998 14:38:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I thought we had fixed this for v6.3.2 ...?"
},
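Those ulimit/limit lines just raise the per-process data-segment limit in the shell that starts the postmaster; every backend then inherits it. The same limit can be inspected, and raised up to the hard limit, from C with getrlimit()/setrlimit(). A small sketch, handy for checking what a given account actually gets:

    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    /*
     * Print the current data-segment limit and try to raise the soft
     * limit to 64 MB, the same thing "ulimit -d 65536" does in sh.
     */
    int
    main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_DATA, &rl) < 0)
        {
            perror("getrlimit");
            return 1;
        }
        printf("data segment: soft=%ld hard=%ld\n",
               (long) rl.rlim_cur, (long) rl.rlim_max);

        rl.rlim_cur = 64L * 1024L * 1024L;              /* 64 MB */
        if (rl.rlim_max != RLIM_INFINITY && rl.rlim_cur > rl.rlim_max)
            rl.rlim_cur = rl.rlim_max;                  /* cannot exceed the hard limit */

        if (setrlimit(RLIMIT_DATA, &rl) < 0)
            perror("setrlimit");
        else
            printf("soft data limit is now %ld bytes\n", (long) rl.rlim_cur);
        return 0;
    }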
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> The machine has 128Meg of RAM, 256Meg of SWAP, and SWAP isn't even being\n> touched...\n> \n> acctng=> insert into radhist select * from radhist_old;\n> FATAL 1: palloc failure: memory exhausted\n\nSize of radhist_old file ?\nI'll try to reproduce...\n\nVadim\n",
"msg_date": "Wed, 24 Jun 1998 03:02:42 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I thought we had fixed this for v6.3.2 ...?"
},
{
"msg_contents": "On Wed, 24 Jun 1998, Vadim Mikheev wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > The machine has 128Meg of RAM, 256Meg of SWAP, and SWAP isn't even being\n> > touched...\n> > \n> > acctng=> insert into radhist select * from radhist_old;\n> > FATAL 1: palloc failure: memory exhausted\n> \n> Size of radhist_old file ?\n> I'll try to reproduce...\n\npalos> ls -lt radhist_old\n-rw------- 1 pgsql wheel 40198144 Jun 23 13:11 radhist_old\n\n~300 000 tuples ...\n\n\n",
"msg_date": "Tue, 23 Jun 1998 15:04:04 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] I thought we had fixed this for v6.3.2 ...?"
},
{
"msg_contents": "On Tue, 23 Jun 1998, Bruce Momjian wrote:\n\n> > \n> > \n> > The machine has 128Meg of RAM, 256Meg of SWAP, and SWAP isn't even being\n> > touched...\n> > \n> > \n> > acctng=> insert into radhist select * from radhist_old;\n> > FATAL 1: palloc failure: memory exhausted\n> \n> Try this before starting postmaster:\n> \n> \t:\n> \tulimit -d 65536 2>/dev/null\n> \tulimit -c 0 2>/dev/null\n> \tlimit datasize 64m 2>/dev/null\n> \tlimit cordumpsize 0 2>/dev/null\n> \n> Some proc limit is being exceeded.\n\n\tJust an FYI...this did fix it. Should we update the FAQ with\nthis, or was it already there and I couldn't find it? :(\n\n> \n> > acctng=> \\q\n> > > pstat -s\n> > Device 1K-blocks Used Avail Capacity Type\n> > /dev/sd0s1b 256000 0 255872 0% Interleaved\n> > > psql\n> > Welcome to the POSTGRESQL interactive sql monitor:\n> > Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> > \n> > type \\? for help on slash commands\n> > type \\q to quit\n> > type \\g or terminate with semicolon to execute query\n> > You are currently connected to the database: acctng\n> > \n> > acctng=> select count(start_time) from radhist_old;\n> > count\n> > ------\n> > 295850\n> > (1 row)\n> > \n> > acctng=>\n> > \n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 27 Jun 1998 20:17:39 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] I thought we had fixed this for v6.3.2 ...?"
},
{
"msg_contents": "> \n> On Tue, 23 Jun 1998, Bruce Momjian wrote:\n> \n> > > \n> > > \n> > > The machine has 128Meg of RAM, 256Meg of SWAP, and SWAP isn't even being\n> > > touched...\n> > > \n> > > \n> > > acctng=> insert into radhist select * from radhist_old;\n> > > FATAL 1: palloc failure: memory exhausted\n> > \n> > Try this before starting postmaster:\n> > \n> > \t:\n> > \tulimit -d 65536 2>/dev/null\n> > \tulimit -c 0 2>/dev/null\n> > \tlimit datasize 64m 2>/dev/null\n> > \tlimit cordumpsize 0 2>/dev/null\n> > \n> > Some proc limit is being exceeded.\n> \n> \tJust an FYI...this did fix it. Should we update the FAQ with\n> this, or was it already there and I couldn't find it? :(\n\nAdded to the FAQ.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 27 Jun 1998 22:14:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I thought we had fixed this for v6.3.2 ...?"
}
] |
[
{
"msg_contents": "\nI'm sending this one to both hackers and interfaces (not sure which one\nit really pertains to) but I have the reply-to set to interfaces.\n\nAnyway...\n\nI'm trying to set up dbd/dbi and all went well until I got to test DBD.\nThe README says it's for 6.2+, but fails when trying to create a table\nlike so:\n\nCREATE TABLE builtin (\n bool bool,\n char char,\n char16 char16,\n text text,\n date date,\n int4 int4,\n int4_ int4[],\n float8 float8,\n point point,\n lseg lseg,\n box box\n)\n\nI get the error: ERROR: parser: parse error at or near \"char\"\n\nSame thing when I try to create it from psql. Since I know I can\ncreate tables all day long with the user I am at the time I have to\nthink there's something changed in PostgreSQL, although I didn't \nfind anything that would point to the problem in any of the docs.\nThat's why I'm sending this to both lists. Was there a change in \ndatatypes or is there something wrong with the perl script that does\nthe test; I'm rather perl un-savvy! I'm running 6.3 and perl 5.00404\non FreeBSD 2.2.6.\n\nAnyone have any suggestions?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2 \n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n",
"msg_date": "Tue, 23 Jun 1998 15:16:54 -0400 (edt)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "DBI/DBD anyone?"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> \n> I'm sending this one to both hackers and interfaces (not sure which one\n> it really pertains to) but I have the reply-to set to interfaces.\n> \n> Anyway...\n> \n> I'm trying to set up dbd/dbi and all went well until I got to test DBD.\n> The README says it's for 6.2+, but fails when trying to create a table\n> like so:\n> \n> CREATE TABLE builtin (\n> bool bool,\n> char char,\n> char16 char16,\n> text text,\n> date date,\n> int4 int4,\n> int4_ int4[],\n> float8 float8,\n> point point,\n> lseg lseg,\n> box box\n> )\n> \n> I get the error: ERROR: parser: parse error at or near \"char\"\n> \n> Same thing when I try to create it from psql. Since I know I can\n> create tables all day long with the user I am at the time I have to\n> think there's something changed in PostgreSQL, although I didn't\n> find anything that would point to the problem in any of the docs.\n> That's why I'm sending this to both lists. Was there a change in\n> datatypes or is there something wrong with the perl script that does\n> the test; I'm rather perl un-savvy! I'm running 6.3 and perl 5.00404\n> on FreeBSD 2.2.6.\n> \n> Anyone have any suggestions?\n> \n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n> # include <std/disclaimers.h> TEAM-OS2\n> Online Searchable Campground Listings http://www.camping-usa.com\n> \"There is no outfit less entitled to lecture me about bloat\n> than the federal government\" -- Tony Snow\n> ==========================================================================\n\n\nuse DBD-Pg-0.73.tar.gz and please report the version of DBD-Pg if you\nhave problems with this module.\n\n\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Tue, 23 Jun 1998 21:49:36 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DBI/DBD anyone?"
},
{
"msg_contents": "On Tue, 23 Jun 1998, Edmund Mergl wrote:\n\n> use DBD-Pg-0.73.tar.gz and please report the version of DBD-Pg if you\n> have problems with this module.\n\nLooks like I found a site with an old version. It's 0.63 and I just\nftp'd it yesterday! Where do I find 0.73? I don't see any reference\nin the README.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2 \n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n",
"msg_date": "Tue, 23 Jun 1998 15:58:12 -0400 (edt)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] DBI/DBD anyone?"
},
{
"msg_contents": "On Tue, 23 Jun 1998, Vince Vielhaber wrote:\n\n> On Tue, 23 Jun 1998, Edmund Mergl wrote:\n> \n> > use DBD-Pg-0.73.tar.gz and please report the version of DBD-Pg if you\n> > have problems with this module.\n> \n> Looks like I found a site with an old version. It's 0.63 and I just\n> ftp'd it yesterday! Where do I find 0.73? I don't see any reference\n> in the README.\n\nI just looked back to where I'd been and I see that Hermetica had the\nreference to 0.73 so I just grabbed it. Now I'm not sure where I found\n0.63.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2 \n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n",
"msg_date": "Tue, 23 Jun 1998 16:03:37 -0400 (edt)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] DBI/DBD anyone?"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> \n> On Tue, 23 Jun 1998, Edmund Mergl wrote:\n> \n> > use DBD-Pg-0.73.tar.gz and please report the version of DBD-Pg if you\n> > have problems with this module.\n> \n> Looks like I found a site with an old version. It's 0.63 and I just\n> ftp'd it yesterday! Where do I find 0.73? I don't see any reference\n> in the README.\n\n\nAs any other perl module from CPAN. Try \n\n http://www.perl.com/CPAN/modules/by-module/DBD/\n\nThe multiplex dispatcher will automatically route your\nrequest to the nearest CPAN site.\n\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Tue, 23 Jun 1998 22:11:21 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] DBI/DBD anyone?"
},
{
"msg_contents": "On Tue, 23 Jun 1998, Vince Vielhaber wrote:\n\n> \n> I'm sending this one to both hackers and interfaces (not sure which one\n> it really pertains to) but I have the reply-to set to interfaces.\n> \n> Anyway...\n> \n> I'm trying to set up dbd/dbi and all went well until I got to test DBD.\n> The README says it's for 6.2+, but fails when trying to create a table\n> like so:\n> \n> CREATE TABLE builtin (\n> bool bool,\n> char char,\n> char16 char16,\n> text text,\n> date date,\n> int4 int4,\n> int4_ int4[],\n> float8 float8,\n> point point,\n> lseg lseg,\n> box box\n> )\n> \n> I get the error: ERROR: parser: parse error at or near \"char\"\n> \n> Same thing when I try to create it from psql. Since I know I can\n\nHmmmm, isn't 'char' a reserved word? try to create this table with \ndifferent column names....\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Wed, 24 Jun 1998 10:19:03 +0200 (MET DST)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DBI/DBD anyone?"
},
{
"msg_contents": "Hi,\n\nafter installing Jun 20 snapshot I 'm unable to compile DBD-Pg v 0.73\nIt seems that there were many changes in libpq. \nEdmund, did you try latest snapshots ?\n\n\tRegards,\n\n\t\tOleg\n\ncc -c -I/usr/local/pgsql/include -I/usr/local/include/pgsql -I/usr/include/pgsql -I/usr/lib/perl5/i686-linux/5.00404/DBI -I/usr/lib/perl5/site_perl/i686-linux/auto/DBI -Dbool=char -DHAS_BOOL -I/usr/local/include -O2 -DVERSION=\\\"0.73\\\" -DXS_VERSION=\\\"0.\n73\\\" -fpic -I/usr/lib/perl5/i686-linux/5.00404/CORE dbdimp.c\nIn file included from /usr/local/pgsql/include/libpq/pqcomm.h:22,\n from /usr/local/pgsql/include/libpq-fe.h:28,\n from Pg.h:12,\n from dbdimp.c:12:\n/usr/local/pgsql/include/c.h:66: warning: useless keyword or type name in empty declaration\n/usr/local/pgsql/include/c.h:66: warning: empty declaration\ndbdimp.c: In function \u0004bd_db_ping':\ndbdimp.c:140: structure has no member named \u0010fout'\ndbdimp.c:140: too many arguments to function \u0010qPuts'\ndbdimp.c:146: structure has no member named \u0010fin'\ndbdimp.c:146: warning: passing arg 2 of \u0010qGetc' from incompatible pointer type\ndbdimp.c:146: structure has no member named \u0010fin'\ndbdimp.c:146: warning: passing arg 2 of \u0010qGetc' from incompatible pointer type\nmake: *** [dbdimp.o] Error 1\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 1 Jul 1998 17:52:24 +0400 (MSK DST)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "DBD-Pg is broken from Jun 20 snapshot ?"
}
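The errors above come from dbdimp.c touching PGconn members (Pfout, Pfin) and internal routines that the June snapshot reworked. A ping that goes only through the documented libpq entry points should survive such changes; the function below is a guess at what a replacement could look like, not the actual DBD-Pg fix.

    #include <stdio.h>
    #include "libpq-fe.h"

    /*
     * Portable "is the backend still there?" check: run a trivial query
     * and look at the connection and result status instead of reading
     * or writing the PGconn structure directly.
     */
    static int
    backend_alive(PGconn *conn)
    {
        PGresult   *res;
        int         alive;

        if (conn == NULL || PQstatus(conn) == CONNECTION_BAD)
            return 0;

        res = PQexec(conn, "SELECT 1");
        alive = (res != NULL &&
                 (PQresultStatus(res) == PGRES_TUPLES_OK ||
                  PQresultStatus(res) == PGRES_COMMAND_OK));
        if (res)
            PQclear(res);
        return alive;
    }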
] |
[
{
"msg_contents": "Hello,\n\nthis problem appeared quite frequently in the last two years: \nthe libpq-function lo_export gives a segmentation fault.\n\nThis happend with the current snapshot (I don't remember the\ndate, it was begin of June, filesize: 3980592) on Linux-2.0.34.\nIt worked with Postgresql-6.3.2.\n\nEdmund \n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Tue, 23 Jun 1998 22:06:15 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Segmentation fault with lo_export"
},
{
"msg_contents": "Edmund, has this been fixed?\n\n> Hello,\n> \n> this problem appeared quite frequently in the last two years: \n> the libpq-function lo_export gives a segmentation fault.\n> \n> This happend with the current snapshot (I don't remember the\n> date, it was begin of June, filesize: 3980592) on Linux-2.0.34.\n> It worked with Postgresql-6.3.2.\n> \n> Edmund \n> -- \n> Edmund Mergl mailto:[email protected]\n> Im Haldenhau 9 http://www.bawue.de/~mergl\n> 70565 Stuttgart fon: +49 711 747503\n> Germany\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 21 Aug 1998 22:32:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Segmentation fault with lo_export"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Edmund, has this been fixed?\n> \n> > Hello,\n> >\n> > this problem appeared quite frequently in the last two years:\n> > the libpq-function lo_export gives a segmentation fault.\n> >\n> > This happend with the current snapshot (I don't remember the\n> > date, it was begin of June, filesize: 3980592) on Linux-2.0.34.\n> > It worked with Postgresql-6.3.2.\n> >\n> > Edmund\n\n\nHi Bruce,\n\nit looks like the last time I tested lo_export it worked\njust by chance.\n\nThe bug seems to be in interfaces/libpq/fe-lobj.c line 424.\nThe two functions lo_import and lo_export are somehow\nsimilar when exchanging the read/write for Unix file\nand inv file. But whereas read() reads exactly BUFSIZ\nbytes, lo_read() appends a '\\0' after having read\nBUFSIZ bytes. So line 424 should be:\n\n \tchar\t\tbuf[LO_BUFSIZE+1];\n\ninstead of:\n\n\tchar\t\tbuf[LO_BUFSIZE];\n\nCould you please apply this bug-fix ?\nAlso the code example in the man page for\nlarge-objects needs to be corrected. \n\nI can't do it by myself because I am on \nvacation on a Greek island, and the only \nmessage I'm seeing frequently is \n'modem hangup, no CARRIER' ...\n\nthanks\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Wed, 26 Aug 1998 20:06:07 +0300",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Segmentation fault with lo_export"
},
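For readers following along, here is roughly the shape of the export loop in question, written against the client-side large-object calls. It is a simplified sketch, not the fe-lobj.c source; the point is that the buffer only needs the extra byte if something NUL-terminates it behind your back, which is exactly the behaviour described above.

    #include <fcntl.h>
    #include <unistd.h>
    #include "libpq-fe.h"
    #include "libpq/libpq-fs.h"         /* INV_READ */

    #define BUFSIZE 1024

    /*
     * Simplified lo_export-style loop: copy a large object into a local
     * file, writing only the byte count lo_read() reports.  The +1 on
     * the buffer guards against a library that appends a stray '\0'
     * after the bytes it read.
     */
    static int
    export_lobj(PGconn *conn, Oid lobjId, const char *filename)
    {
        char        buf[BUFSIZE + 1];
        int         lobj_fd,
                    file_fd,
                    nbytes;

        lobj_fd = lo_open(conn, lobjId, INV_READ);
        if (lobj_fd < 0)
            return -1;

        file_fd = open(filename, O_CREAT | O_WRONLY | O_TRUNC, 0666);
        if (file_fd < 0)
        {
            lo_close(conn, lobj_fd);
            return -1;
        }

        while ((nbytes = lo_read(conn, lobj_fd, buf, BUFSIZE)) > 0)
            write(file_fd, buf, nbytes);        /* write only what was read */

        close(file_fd);
        lo_close(conn, lobj_fd);
        return 0;
    }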
{
"msg_contents": "> Hi Bruce,\n> \n> it looks like the last time I tested lo_export it worked\n> just by chance.\n> \n> The bug seems to be in interfaces/libpq/fe-lobj.c line 424.\n> The two functions lo_import and lo_export are somehow\n> similar when exchanging the read/write for Unix file\n> and inv file. But whereas read() reads exactly BUFSIZ\n> bytes, lo_read() appends a '\\0' after having read\n> BUFSIZ bytes. So line 424 should be:\n> \n> \tchar\t\tbuf[LO_BUFSIZE+1];\n> \n> instead of:\n> \n> \tchar\t\tbuf[LO_BUFSIZE];\n> \n> Could you please apply this bug-fix ?\n> Also the code example in the man page for\n> large-objects needs to be corrected. \n> \n> I can't do it by myself because I am on \n> vacation on a Greek island, and the only \n> message I'm seeing frequently is \n> 'modem hangup, no CARRIER' ...\n\nI am going to fix lo_read(). I see no reason not to have it function\nlike read().\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 26 Aug 1998 13:46:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Segmentation fault with lo_export"
},
{
"msg_contents": "> > Hi Bruce,\n> > \n> > it looks like the last time I tested lo_export it worked\n> > just by chance.\n> > \n> > The bug seems to be in interfaces/libpq/fe-lobj.c line 424.\n> > The two functions lo_import and lo_export are somehow\n> > similar when exchanging the read/write for Unix file\n> > and inv file. But whereas read() reads exactly BUFSIZ\n> > bytes, lo_read() appends a '\\0' after having read\n> > BUFSIZ bytes. So line 424 should be:\n> > \n> > \tchar\t\tbuf[LO_BUFSIZE+1];\n> > \n> > instead of:\n> > \n> > \tchar\t\tbuf[LO_BUFSIZE];\n> > \n> > Could you please apply this bug-fix ?\n> > Also the code example in the man page for\n> > large-objects needs to be corrected. \n> > \n> > I can't do it by myself because I am on \n> > vacation on a Greek island, and the only \n> > message I'm seeing frequently is \n> > 'modem hangup, no CARRIER' ...\n> \n> I am going to fix lo_read(). I see no reason not to have it function\n> like read().\n\nOK. It was actually libpq that was placing the NULL, through PQfn() and\npqGetnchar(). Shouldn't have.\n\nFixed now.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 28 Aug 1998 22:13:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Segmentation fault with lo_export"
},
{
"msg_contents": "> Hello,\n> \n> this problem appeared quite frequently in the last two years: \n> the libpq-function lo_export gives a segmentation fault.\n> \n> This happend with the current snapshot (I don't remember the\n> date, it was begin of June, filesize: 3980592) on Linux-2.0.34.\n> It worked with Postgresql-6.3.2.\n\nI suspect my large objects change have fix it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 28 Aug 1998 22:17:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Segmentation fault with lo_export"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > Edmund, has this been fixed?\n> > \n> > > Hello,\n> > >\n> > > this problem appeared quite frequently in the last two years:\n> > > the libpq-function lo_export gives a segmentation fault.\n> > >\n> > > This happend with the current snapshot (I don't remember the\n> > > date, it was begin of June, filesize: 3980592) on Linux-2.0.34.\n> > > It worked with Postgresql-6.3.2.\n> > >\n> > > Edmund\n> \n> \n> Hi Bruce,\n> \n> it looks like the last time I tested lo_export it worked\n> just by chance.\n> \n> The bug seems to be in interfaces/libpq/fe-lobj.c line 424.\n> The two functions lo_import and lo_export are somehow\n> similar when exchanging the read/write for Unix file\n> and inv file. But whereas read() reads exactly BUFSIZ\n> bytes, lo_read() appends a '\\0' after having read\n> BUFSIZ bytes. So line 424 should be:\n> \n> \tchar\t\tbuf[LO_BUFSIZE+1];\n> \n> instead of:\n> \n> \tchar\t\tbuf[LO_BUFSIZE];\n> \n> Could you please apply this bug-fix ?\n> Also the code example in the man page for\n> large-objects needs to be corrected. \n> \n> I can't do it by myself because I am on \n> vacation on a Greek island, and the only \n> message I'm seeing frequently is \n> 'modem hangup, no CARRIER' ...\n\nExtra null removed.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 29 Aug 1998 00:08:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Segmentation fault with lo_export"
}
] |
[
{
"msg_contents": "\n",
"msg_date": "Tue, 23 Jun 1998 17:02:36 -0400",
"msg_from": "Jeffrey Napolitano <[email protected]>",
"msg_from_op": true,
"msg_subject": "subscribe"
}
] |
[
{
"msg_contents": "I'm living in outside US and am running the export version of FreeBSD\ncoming without DES. Problem is that if I enable the crypt password\nauthentication, the FE on any platform other than FreeBSD will not\ntalk to the BE on the FreeBSD box (Of course FreeBSD can talk to\nFreeBSD). The export version of FreeBSD's crypt() is implemented using\nMD5, and it does not compatible with the traditional crypt(). This is\nthe source of the problem, I guess. I have looked into backend/libpq\nand interfaces/libpq, but I couldn't find any portable solution for\nthat so far.\n\nAs far as I know, there are at least 2 workarounds:\n\n1. install \"des\" package\n (ftp://ftp.internat.freebsd.org/pub/FreeBSD/2.2.6-RELEASE/des/)\n\n2. link the BE with libcrypt.a coming with SSLeay\n (see http://www.psy.uq.oz.au/~ftp/Crypto/ for more info about SSLeay)\n\nShould we document these in somewhere?\n--\nTatsuo Ishii\[email protected]\n\n",
"msg_date": "Wed, 24 Jun 1998 11:03:55 +0900",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "crypt password authentication does not work in cross platform env"
},
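A quick local test (not part of PostgreSQL) for telling which crypt() a libc provides: the traditional DES crypt() returns a 13-character hash that begins with the two-character salt, while MD5-style hashes, such as the ones produced by the export FreeBSD libcrypt, begin with "$1$". Link with -lcrypt where needed.

    #define _XOPEN_SOURCE           /* for crypt() on many systems */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* two-character, DES-style salt */
        const char *hash = crypt("apassword", "ab");

        if (hash == NULL)
            return 1;

        printf("crypt(\"apassword\", \"ab\") = %s\n", hash);
        if (strncmp(hash, "$1$", 3) == 0)
            printf("MD5-style crypt: not interoperable with DES crypt() peers\n");
        else if (strlen(hash) == 13)
            printf("traditional DES crypt\n");
        else
            printf("some other crypt() variant\n");
        return 0;
    }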
{
"msg_contents": "On Wed, 24 Jun 1998 [email protected] wrote:\n\n> I'm living in outside US and am running the export version of FreeBSD\n> coming without DES. Problem is that if I enable the crypt password\n> authentication, the FE on any platform other than FreeBSD will not\n> talk to the BE on the FreeBSD box (Of course FreeBSD can talk to\n> FreeBSD). The export version of FreeBSD's crypt() is implemented using\n> MD5, and it does not compatible with the traditional crypt(). This is\n> the source of the problem, I guess. I have looked into backend/libpq\n> and interfaces/libpq, but I couldn't find any portable solution for\n> that so far.\n\nI thought there was only one implementation of crypt()?\n\nWhen I added crypt support into the JDBC driver, I used an existing java\nimplementation as a baseline.\n\nNow this works for postgres running on Linux() (& java running on Linux &\nWin95), but I haven't heared of a problem with it on other Unixes.\n\n> As far as I know, there are at least 2 workarounds:\n> \n> 1. install \"des\" package\n> (ftp://ftp.internat.freebsd.org/pub/FreeBSD/2.2.6-RELEASE/des/)\n> \n> 2. link the BE with libcrypt.a coming with SSLeay\n> (see http://www.psy.uq.oz.au/~ftp/Crypto/ for more info about SSLeay)\n> \n> Should we document these in somewhere?\n\nHow accessible is the source, and is it in C?\n\nI'm asking this, because we would have to convert it into Java for the\nJDBC driver, and I know the ODBC guys would have to convert it as they\ndon't use libpq either.\n\n--\nPeter Mount, [email protected] \nPostgres email to [email protected] & [email protected]\nRemember, this is my work email, so please CC my home address, as I may \nnot always have time to reply from work.\n\n\n",
"msg_date": "Wed, 24 Jun 1998 08:42:10 +0100 (BST)",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] crypt password authentication does not work in cross\n\tplatform env"
},
{
"msg_contents": "At 8:42 AM 98.6.24 +0100, Peter Mount wrote:\n>On Wed, 24 Jun 1998 [email protected] wrote:\n>\n>> I'm living in outside US and am running the export version of FreeBSD\n>> coming without DES. Problem is that if I enable the crypt password\n>> authentication, the FE on any platform other than FreeBSD will not\n>> talk to the BE on the FreeBSD box (Of course FreeBSD can talk to\n>> FreeBSD). The export version of FreeBSD's crypt() is implemented using\n>> MD5, and it does not compatible with the traditional crypt(). This is\n>> the source of the problem, I guess. I have looked into backend/libpq\n>> and interfaces/libpq, but I couldn't find any portable solution for\n>> that so far.\n>\n>I thought there was only one implementation of crypt()?\n>\n>When I added crypt support into the JDBC driver, I used an existing java\n>implementation as a baseline.\n>\n>Now this works for postgres running on Linux() (& java running on Linux &\n>Win95), but I haven't heared of a problem with it on other Unixes.\n\nLinux's crypt() is fine. Only FreeBSD has that problem (I'm not sure\nabout other BSDish boxes, though). Anyway, I will check the JDBC driver with\ncrypt authentication enabled next week (sorry, I'm too busy on the\n\"real world\" work this week:-)\n\n>> As far as I know, there are at least 2 workarounds:\n>> \n>> 1. install \"des\" package\n>> (ftp://ftp.internat.freebsd.org/pub/FreeBSD/2.2.6-RELEASE/des/)\n>> \n>> 2. link the BE with libcrypt.a coming with SSLeay\n>> (see http://www.psy.uq.oz.au/~ftp/Crypto/ for more info about SSLeay)\n>> \n>> Should we document these in somewhere?\n>\n>How accessible is the source, and is it in C?\n\nSure, they are free and are written in C.\n\n>I'm asking this, because we would have to convert it into Java for the\n>JDBC driver, and I know the ODBC guys would have to convert it as they\n>don't use libpq either.\n\nDoes the Java crypt class call native crypt lib? or is that written in\n100% pure Java?\n--\nTatsuo Ishii\[email protected]\n\n",
"msg_date": "Wed, 24 Jun 1998 23:38:10 +0900",
"msg_from": "[email protected] (Tatsuo Ishii)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] crypt password authentication does not work in cross\n\tplatform env"
},
{
"msg_contents": "On Wed, 24 Jun 1998, Tatsuo Ishii wrote:\n\n> At 8:42 AM 98.6.24 +0100, Peter Mount wrote:\n\n[snip]\n\n> >\n> >I thought there was only one implementation of crypt()?\n> >\n> >When I added crypt support into the JDBC driver, I used an existing java\n> >implementation as a baseline.\n> >\n> >Now this works for postgres running on Linux() (& java running on Linux &\n> >Win95), but I haven't heared of a problem with it on other Unixes.\n> \n> Linux's crypt() is fine. Only FreeBSD has that problem (I'm not sure\n> about other BSDish boxes, though). Anyway, I will check the JDBC driver with\n> crypt authentication enabled next week (sorry, I'm too busy on the\n> \"real world\" work this week:-)\n\nYour'e not the only one at the moment ;-)\n\n> Does the Java crypt class call native crypt lib? or is that written in\n> 100% pure Java?\n\nIt's 100% pure Java.\n\n--\nPeter Mount, [email protected] \nPostgres email to [email protected] & [email protected]\nRemember, this is my work email, so please CC my home address, as I may \nnot always have time to reply from work.\n\n\n",
"msg_date": "Wed, 24 Jun 1998 15:53:11 +0100 (BST)",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] crypt password authentication does not work in cross\n\tplatform env"
}
] |
[
{
"msg_contents": "\nAnyone else here ever check out Slashdot? http://www.slashdot.org/\n\nLike a web news group of sorts...readers submit computer related stuff\nand other readers can then comment on them. Worth a peek once in a while.\n\nBUT, one post there today in particular caught my eye though...\n\n\"Mysql/PHP win DB award\" -\nhttp://www.slashdot.org/articles/9806241451235.shtml\n\ndarrenk\n\n",
"msg_date": "Thu, 25 Jun 1998 00:44:52 -0400",
"msg_from": "\"Stupor Genius\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Rasmus has his day."
},
{
"msg_contents": "On Thu, 25 Jun 1998, Stupor Genius wrote:\n\n> \n> Anyone else here ever check out Slashdot? http://www.slashdot.org/\n> \n> Like a web news group of sorts...readers submit computer related stuff\n> and other readers can then comment on them. Worth a peek once in a while.\n> \n> BUT, one post there today in particular caught my eye though...\n> \n> \"Mysql/PHP win DB award\" -\n> http://www.slashdot.org/articles/9806241451235.shtml\n\nDoes anyone have a list of URLs that I can add to my Netscape where stuff\nlike this can be submit'd? ie. when we go to v6.4, are there sites like\nslashdot.org and freshmeat.net that I should go around and post an\nannouncement to?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 27 Jun 1998 20:18:40 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Rasmus has his day."
}
] |
[
{
"msg_contents": "\nokay -- I've seriously updated the SSL patch (practically rewritten)\nand we're even using it here @ work, and it works great. If you used\nit before, be sure to read the info page as the behavior has changed\ndrastically.\n\nit is two patches now, one to funnel all be/fe communication through\nmy functions pq_read/write and one to add SSL on top of that. i'm not\ngoing to bother releasing the comm patch as I'll have to rewrite it\nfor the snapshot version of pgsql.\n\nI plan on including a beta version pl/perl with the 6.4 (can beta\nsoftware exist in contrib?) for those who are wondering. I'm short on\ntime, but I am dedicated to this project.\n\nthat's about it -- check out the SSL stuff if you like, at the\nfollowing spacetime coordinate and listen to me beg for a job:\n\nhttp://www.chicken.org/pgsql/ssl/\n\nbtw -- thanks for the link on postgresql.org\n\nalso, does anyone know if i'm illegally distributing encryption\nsoftware? not that I give a rats ass, but I am curious.\n\ni'll announce SSL patch to the general list soon if that is\nappropriate..\n",
"msg_date": "Thu, 25 Jun 1998 21:42:01 -0700 (PDT)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "SSL patch updated & pl/perl slated for 6.4"
},
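For anyone wondering what funnelling all be/fe traffic through a single pq_read/pq_write pair buys: once every byte goes through two functions, an encrypted transport can be dropped in by swapping those two functions after the handshake. The sketch below shows only the indirection, with made-up names and plain read()/write() underneath; it is not the patch itself, and an SSL build would point the two pointers at wrappers around SSL_read()/SSL_write() instead.

    #include <stdio.h>
    #include <unistd.h>

    /* All traffic goes through these two pointers. */
    static ssize_t (*pq_read_impl) (int fd, void *buf, size_t len);
    static ssize_t (*pq_write_impl) (int fd, const void *buf, size_t len);

    /* Plain-socket implementations; an SSL build would supply others. */
    static ssize_t
    plain_read(int fd, void *buf, size_t len)
    {
        return read(fd, buf, len);
    }

    static ssize_t
    plain_write(int fd, const void *buf, size_t len)
    {
        return write(fd, buf, len);
    }

    static ssize_t
    pq_read(int fd, void *buf, size_t len)
    {
        return pq_read_impl(fd, buf, len);
    }

    static ssize_t
    pq_write(int fd, const void *buf, size_t len)
    {
        return pq_write_impl(fd, buf, len);
    }

    int
    main(void)
    {
        char        buf[16];

        /* no SSL negotiated: use the bare descriptor */
        pq_read_impl = plain_read;
        pq_write_impl = plain_write;

        pq_write(STDOUT_FILENO, "hello via pq_write\n", 19);
        pq_read(STDIN_FILENO, buf, sizeof(buf));    /* reads go the same way */
        return 0;
    }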
{
"msg_contents": "Where are we this this in relation to 6.4?\n\n> \n> okay -- I've seriously updated the SSL patch (practically rewritten)\n> and we're even using it here @ work, and it works great. If you used\n> it before, be sure to read the info page as the behavior has changed\n> drastically.\n> \n> it is two patches now, one to funnel all be/fe communication through\n> my functions pq_read/write and one to add SSL on top of that. i'm not\n> going to bother releasing the comm patch as I'll have to rewrite it\n> for the snapshot version of pgsql.\n> \n> I plan on including a beta version pl/perl with the 6.4 (can beta\n> software exist in contrib?) for those who are wondering. I'm short on\n> time, but I am dedicated to this project.\n> \n> that's about it -- check out the SSL stuff if you like, at the\n> following spacetime coordinate and listen to me beg for a job:\n> \n> http://www.chicken.org/pgsql/ssl/\n> \n> btw -- thanks for the link on postgresql.org\n> \n> also, does anyone know if i'm illegally distributing encryption\n> software? not that I give a rats ass, but I am curious.\n> \n> i'll announce SSL patch to the general list soon if that is\n> appropriate..\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 21 Aug 1998 22:33:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SSL patch updated & pl/perl slated for 6.4"
},
{
"msg_contents": "\nIt has a few semi-serious bugs. One is that it can't initdb (xpg_user\nno such file or directory) and another is that copy from doesn't seem\nto work all the time for more than a few lines. I haven't been able\nto track down the xpg_user. I could modify the patch to use\n--with-ssl in configure -- i've also updated the patch so that the\nunix socket doesn't use SSL and added support for an SSL port and a\nnon-SSL port.\n\nI'd like to work on certificate verification, but i'm not sure exactly\nhow that might work yet.\n\nOn Fri, 21 August 1998, at 22:33:16, Bruce Momjian wrote:\n\n> Where are we this this in relation to 6.4?\n> \n> > \n> > okay -- I've seriously updated the SSL patch (practically rewritten)\n> > and we're even using it here @ work, and it works great. If you used\n> > it before, be sure to read the info page as the behavior has changed\n> > drastically.\n> > \n> > it is two patches now, one to funnel all be/fe communication through\n> > my functions pq_read/write and one to add SSL on top of that. i'm not\n> > going to bother releasing the comm patch as I'll have to rewrite it\n> > for the snapshot version of pgsql.\n> > \n> > I plan on including a beta version pl/perl with the 6.4 (can beta\n> > software exist in contrib?) for those who are wondering. I'm short on\n> > time, but I am dedicated to this project.\n> > \n> > that's about it -- check out the SSL stuff if you like, at the\n> > following spacetime coordinate and listen to me beg for a job:\n> > \n> > http://www.chicken.org/pgsql/ssl/\n> > \n> > btw -- thanks for the link on postgresql.org\n> > \n> > also, does anyone know if i'm illegally distributing encryption\n> > software? not that I give a rats ass, but I am curious.\n> > \n> > i'll announce SSL patch to the general list soon if that is\n> > appropriate..\n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 22 Aug 1998 01:58:36 -0700 (PDT)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SSL patch updated & pl/perl slated for 6.4"
}
] |
[
{
"msg_contents": "\nHi !\n\nWandering on Slashdot ( http://slashdot.org ), I saw a post from someone\nwhich said he has convinced a publisher to do a book to help people\nmigrate to free software (from Windows background, for ex.). What is\ninteresting to us, is that he wrote that he will focus on GTK for the GUI\npart, and PostgreSQL for the DB part :\n\nhttp://slashdot.org/articles/980625090209.shtml\n\nMaybe there is something to do to help, or so... :)\n\nPatrice\n\n--\nPatrice H�D� --------------------------------- [email protected] -----\nNous sommes au monde. [...] La croyance en un esprit absolu ou en un\nmonde en soi d�tach� de nous n'est qu'une rationalisation de cette\nfoi primordiale. --- Merleau-Ponty, Ph�nom�nologie de la Perception\n----- http://www.idf.net/patrice/ ----------------------------------\n\n",
"msg_date": "Fri, 26 Jun 1998 10:59:18 +0200 (MET DST)",
"msg_from": "=?ISO-8859-1?Q?Patrice_H=E9d=E9?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Potential Book"
},
{
"msg_contents": "Hello,\nI'am trying to subscribe to list, but i'am not getting a response.\nhere is the address i use. is this correct?\[email protected]\nin the subject line i put subscribe.\nTIA.\nWayne\n\n",
"msg_date": "Wed, 1 Jul 1998 08:23:43 -0400 (EDT)",
"msg_from": "wward <[email protected]>",
"msg_from_op": false,
"msg_subject": "None"
},
{
"msg_contents": "On Wed, 1 Jul 1998, wward wrote:\n\n> Hello,\n> I'am trying to subscribe to list, but i'am not getting a response.\n> here is the address i use. is this correct?\n> [email protected]\n\nNo, should be [email protected]\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 1 Jul 1998 12:05:31 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: your mail"
}
] |
[
{
"msg_contents": "Hello all\n\nSorry for the off topic post, but we upgraded our sendmail program this week\nand I haven't received any posts from the Hackers list since (over 2 days now).\nI did receive one from the Announce list though, and I am receiving mail from\nother lists I've subscribed to. So, is the hackers list been \"unusually quiet\"\nlately, or should I be looking into our sendmail configuration for a problem\nwith receiving mail from the hackers list.\n\nBTW, I was able to receive a reply from [email protected] as well.\n\nPlease CC a reply to me directly, since if there's a problem at my end I won't\nsee the post to the hackers list.\n\nThank you for your time and cooperation in this matter.\n\n-- \nSincerely,\n\nJazzman (a.k.a. Justin Hickey) e-mail: [email protected]\nHigh Performance Computing Center\nNational Electronics and Computer Technology Center (NECTEC)\nBangkok, Thailand\n==================================================================\nPeople who think they know everything are very irritating to those\nof us who do. ---Anonymous\n\nJazz and Trek Rule!!!\n==================================================================\n",
"msg_date": "Fri, 26 Jun 1998 10:11:44 +0000",
"msg_from": "\"Justin Hickey\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Quiet List???"
}
] |
[
{
"msg_contents": "Is there an isnull function avaliable in postgres, such as:\n\n select isnull(dt_field, 'now') ...\n\nThanks\n",
"msg_date": "Fri, 26 Jun 1998 10:42:03 -0700",
"msg_from": "Kachun Lee <[email protected]>",
"msg_from_op": true,
"msg_subject": "isnull function"
},
{
"msg_contents": "> \n> Is there an isnull function avaliable in postgres, such as:\n> \n> select isnull(dt_field, 'now') ...\n> \n> Thanks\n> \n> \n\nI don't tbink so, but it would be nice. I think Ingres has it. It also\nwould not be hard to write as a loadable function.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 26 Jun 1998 15:13:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] isnull function]"
},
{
"msg_contents": "At 03:13 PM 6/26/98 -0400, you wrote:\n>> \n>> Is there an isnull function avaliable in postgres, such as:\n>> \n>> select isnull(dt_field, 'now') ...\n>> \n>> Thanks\n>> \n>> \n>\n>I don't tbink so, but it would be nice. I think Ingres has it. It also\n>would not be hard to write as a loadable function.\n>\n>-- \n>Bruce Momjian | 830 Blythe Avenue\n>[email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n>\n\nI tried, but the following code does not seem to work:\n\n#include <string.h>\n#include <stdio.h>\n#include \"postgres.h\"\n#include \"libpq-fe.h\" \n#include \"utils/dt.h\"\n\nDateTime * is_null(DateTime *, DateTime *);\n\nDateTime * is_null (DateTime * dt, DateTime * def)\n{\n return dt ? dt : def;\n}\n\nI searched the doc/maillist, and I could not find how to test an arg for\nNULL value. Should I post this to HACKER?\n",
"msg_date": "Fri, 26 Jun 1998 13:33:17 -0700",
"msg_from": "Kachun Lee <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] isnull function]"
},
{
"msg_contents": "I posted this question a while back and received a response of 'is null'\nsuch as:\n\nselect * from try where field is null;\n\nWill give you all entries where the field 'field' does not yet contain a\nvalue. Works great and I hope will answer your original question.\n(Now if only I could find an easy answer for mine :)\n\n--\nColin Dick\nOn Call Internet Services\[email protected]\n\n\nOn Fri, 26 Jun 1998, Kachun Lee wrote:\n\n> At 03:13 PM 6/26/98 -0400, you wrote:\n> >> \n> >> Is there an isnull function avaliable in postgres, such as:\n> >> \n> >> select isnull(dt_field, 'now') ...\n> >> \n> >> Thanks\n> >> \n> >> \n> >\n> >I don't tbink so, but it would be nice. I think Ingres has it. It also\n> >would not be hard to write as a loadable function.\n> >\n> >-- \n> >Bruce Momjian | 830 Blythe Avenue\n> >[email protected] | Drexel Hill, Pennsylvania 19026\n> > + If your life is a hard drive, | (610) 353-9879(w)\n> > + Christ can be your backup. | (610) 853-3000(h)\n> >\n> \n> I tried, but the following code does not seem to work:\n> \n> #include <string.h>\n> #include <stdio.h>\n> #include \"postgres.h\"\n> #include \"libpq-fe.h\" \n> #include \"utils/dt.h\"\n> \n> DateTime * is_null(DateTime *, DateTime *);\n> \n> DateTime * is_null (DateTime * dt, DateTime * def)\n> {\n> return dt ? dt : def;\n> }\n> \n> I searched the doc/maillist, and I could not find how to test an arg for\n> NULL value. Should I post this to HACKER?\n> \n\n",
"msg_date": "Fri, 26 Jun 1998 13:41:22 -0700 (PDT)",
"msg_from": "Colin Dick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] isnull function]"
},
{
"msg_contents": "> \n> I tried, but the following code does not seem to work:\n> \n> #include <string.h>\n> #include <stdio.h>\n> #include \"postgres.h\"\n> #include \"libpq-fe.h\" \n> #include \"utils/dt.h\"\n> \n> DateTime * is_null(DateTime *, DateTime *);\n> \n> DateTime * is_null (DateTime * dt, DateTime * def)\n> {\n> return dt ? dt : def;\n> }\n> \n> I searched the doc/maillist, and I could not find how to test an arg for\n> NULL value. Should I post this to HACKER?\n> \n> \n\nHaving DateTime* be a NULL does not represent a NULL value in SQL\ninternally.\n\nNot sure how to do it, because I don't rememeber where the NULL checking\nis done.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 26 Jun 1998 22:11:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] isnull function]"
},
{
"msg_contents": "\n> > Is there an isnull function avaliable in postgres, such as:\n> >\n> > select isnull(dt_field, 'now') ...\n> >\n> > Thanks\n> >\n> >\n>\n> I don't tbink so, but it would be nice. I think Ingres has it. It also\n> would not be hard to write as a loadable function.\n>\n\nIs it possible to do something like:\n\n select * from blah where x = ''\n\nand get what he is asking for?\n\n...james\n\n",
"msg_date": "Sat, 27 Jun 1998 06:29:30 -0400",
"msg_from": "James Olin Oden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] isnull function]"
}
] |
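For reference, the behaviour asked about in this thread is covered by the standard SQL COALESCE expression, which later PostgreSQL releases support; this is only a sketch, and some_table is a hypothetical table holding the dt_field column from the example.

    -- COALESCE returns its first non-NULL argument, so NULL dt_field
    -- values fall back to the current timestamp.
    SELECT COALESCE(dt_field, now()) FROM some_table;

    -- The 'is null' test mentioned in the replies is still the way to
    -- filter rows whose dt_field has no value.
    SELECT * FROM some_table WHERE dt_field IS NULL;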
[
{
"msg_contents": "\nDoes anyone announce PGSQL releases to Linux Weekly News?\nI noticed that the only SQL DB mentioned in the LWN is\nMySQL :-(\n\nde \n\n.............................................................................\n:De Clarke, Software Engineer UCO/Lick Observatory, UCSC:\n:Mail: [email protected] | \"There is no problem in computer science that cannot: \n:Web: www.ucolick.org | be solved by another level of indirection\" --J.O. :\n\n\n\n",
"msg_date": "Fri, 26 Jun 1998 12:42:20 -0700 (PDT)",
"msg_from": "De Clarke <[email protected]>",
"msg_from_op": true,
"msg_subject": "lwn.net"
},
{
"msg_contents": "On Fri, 26 Jun 1998, De Clarke wrote:\n\n> \n> Does anyone announce PGSQL releases to Linux Weekly News?\n> I noticed that the only SQL DB mentioned in the LWN is\n> MySQL :-(\n\n\tUntil now, I never knew about it :( Added to my bookmarks for\nv6.4...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 27 Jun 1998 20:19:32 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] lwn.net"
}
] |
[
{
"msg_contents": "I tried to implement an is_null function, such as:\n\n select is_null(dt, 'now'::datetime) ...\n\nThe following code does not seem to work:\n\n#include <string.h>\n#include <stdio.h>\n#include \"postgres.h\"\n#include \"libpq-fe.h\" \n#include \"utils/dt.h\"\n\nDateTime * is_null(DateTime *, DateTime *);\n\nDateTime * is_null (DateTime * dt, DateTime * def)\n{\n return dt ? dt : def;\n}\n\nI searched the doc/maillist, and I could not find how to test an arg for\nNULL value. Any info or help will be greately appreciated.\n",
"msg_date": "Fri, 26 Jun 1998 17:32:33 -0700",
"msg_from": "Kachun Lee <[email protected]>",
"msg_from_op": true,
"msg_subject": "isnull function"
}
] |
[
{
"msg_contents": "\nthe query is: update process_order set processresult = 't'::bool and processmsg = '328024' where oid = 25647;\n\nit causes the backend to core dump.\n\nwhen I don't cast it to 't'::bool, I get this:\n\n ERROR: left-hand side of AND is type 'unknown', not bool\n\nvery weird. I wouldn't expect that error when doing an update like\nthis.\n\nokay, I just realized that i'm using \"and\" instead of a comma to join\nthe list of updated value pairs, doh!\n\nafter banging by head on a few random objects..\n",
"msg_date": "Fri, 26 Jun 1998 18:35:44 -0700 (PDT)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "operator error"
}
] |
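For reference, a corrected form of the UPDATE from this thread: assignments in the SET list are separated by commas, not joined with AND. The table, columns, and values are taken directly from the original query.

    UPDATE process_order
       SET processresult = 't',      -- comma between assignments
           processmsg    = '328024'
     WHERE oid = 25647;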
[
{
"msg_contents": "Dearest PQSL Hackers,\n\nDoes anyone know of some one who has developed a url type ?\n\nmaybe something like a composite type of two varchars, one for the\nurl, one for the common name. (e.g. \"http://www.postgresql.org/\" &\n\"Postgres Home Page\".) that prints out \"<A\nHREF=http://www.postgresql.org/>Postgress Home Page</A>\"\nwhen the -H option of psql is used...???\n\nAny help/advice/etc. much appreciated.\n\nThanks.\n\n\n\n-- \nYour time is appreciated,\n James Richard Werwath \n +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+\n | 3-7-10 Ichiokamotomachi | Phone: (011-81-) 6-586-3052 |\n | Minato-ku, Osaka 552 Japan | FAX: (011-81-) 6-586-3968 |\n | Nippon Motorola Ltd. 5th floor | Mobile:(011-81-) 030-715-9610|\n | |\n | Homepage: http://www.geocities.com/ResearchTriangle/Lab/7300/ |\n | Motorola Intranet homepage http://www.cig.nml.mot.com/~werwath|\n +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+\n | Any opinions expressed here are my own, not that of Motorola. |\n | Kindly ignore any spelling or grammar errors in the above email.|\n +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+\n",
"msg_date": "Sat, 27 Jun 1998 15:38:32 +0900",
"msg_from": "James Werwath <[email protected]>",
"msg_from_op": true,
"msg_subject": "URL type ?"
}
] |
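One possible sketch of the requested url type uses a composite type, which later PostgreSQL releases support; the type, table, and column names below are purely illustrative assumptions, not an existing implementation, and formatting the HTML anchor from the two fields would still be up to the client.

    CREATE TYPE url AS (href varchar, label varchar);

    CREATE TABLE bookmark (
        id   serial,
        link url
    );

    INSERT INTO bookmark (link)
        VALUES (ROW('http://www.postgresql.org/', 'Postgres Home Page'));

    -- The client can build '<A HREF="...">...</A>' from these two fields.
    SELECT (link).href, (link).label FROM bookmark;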
[
{
"msg_contents": "I have renamed the BindingTable to ShmemIndex. Binding table made no\nsense to me, and the table is an index of shared memory structures, so\nthe new name should be clearer.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 27 Jun 1998 11:48:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "binding table"
},
{
"msg_contents": "> \n> I have renamed the BindingTable to ShmemIndex. Binding table made no\n> sense to me, and the table is an index of shared memory structures, so\n> the new name should be clearer.\n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n> \n\n\nI kinda wish you hadn't. These kinds of style changes impose a certain cost\nin terms of merge issues and add a (perhaps small) bit of risk of introducing\nbugs without any improvement in the product that can be detected by the\nuser.\n\nBtw, the name \"BindingTable\" is derived from the lisp terminology where\na variable is composed of a storage location and a \"binding\" of a symbol to\nrefer to that location. As much of postgres was initially written in lisp,\nthis is a fairly natural name.\n\nAs it happens, the shared memory structure (formerly) known as BindingTable\nreally is used in a way that closely resembles lisp style bindings. When\nyou want to create something in shared memory (eg the lock table), you\nallocate a name in the binding table and then hang all the datastructures\nonto the name.\n\nFinally, I think the name ShmemIndex lends itself to confusion with table\nindexes etc...\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n",
"msg_date": "Sat, 27 Jun 1998 16:05:11 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] binding table"
},
{
"msg_contents": "> I kinda wish you hadn't. These kinds of style changes impose a certain cost\n> in terms of merge issues and add a (perhaps small) bit of risk of introducing\n> bugs without any improvement in the product that can be detected by the\n> user.\n\nI normally would not start changing things like this, and take a 'if it\nain't broke, don't fix it' metality, but the original code was so\nriddled with problems and needed changes, I got used to making massive\nchanges. pgindent is a great example. Very risky initially, it was\nabsolutely necessary to make things easier for new people looking at the\ncode. Now that I have become experienced making such changes, I have\nnot hesitated to make them. If someone is working in a certain area, I\ntend to stay away, though.\n\nI have developed a certain confidence in making these changes, partially\nbecause mkid allows such easy identification of the problem areas. (See\ndevelopers FAQ for mkid.) For example, the /parser directory was a\nterrible mess. I had worked on it, and still could not keep the stuff\nstraight. I redesigned the entire file layout, grouping functions into\nsmaller files. It was risky, and a massive diff, but I am certain it\nhas allowed Thomas and others to more clearly understand and change the\nexisting code.\n\nBasically, I would like to continue cleaning up the code where I see\nareas of improvement, as long as it does not introduce problems for\nother developers.\n\n> Btw, the name \"BindingTable\" is derived from the lisp terminology where\n> a variable is composed of a storage location and a \"binding\" of a symbol to\n> refer to that location. As much of postgres was initially written in lisp,\n> this is a fairly natural name.\n> \n> As it happens, the shared memory structure (formerly) known as BindingTable\n> really is used in a way that closely resembles lisp style bindings. When\n> you want to create something in shared memory (eg the lock table), you\n> allocate a name in the binding table and then hang all the datastructures\n> onto the name.\n\nThe confusion I had is that Binding is really what a hash structure\ndoes, allowing you to supply a value, and get a matching value. So a\nBindingTable seemed like a normal hash structure, not a real shared\nmemory index.\n\nI don't know lisp, so I did not know it actually had a meaning. \nUnfortunately, very few other developers know lisp, so the name had\nlittle meaning for them either.\n\n> \n> Finally, I think the name ShmemIndex lends itself to confusion with table\n> indexes etc...\n\nI can change the name again. What should it be? (Please don't say\n\"BindingTable\"). :-)\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 27 Jun 1998 21:58:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] binding table"
}
] |
[
{
"msg_contents": "I have renamed lockt/LOCKT to locktype/LOCKTYPE for clarity.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 28 Jun 1998 17:17:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "renaming"
}
] |
[
{
"msg_contents": "\nIs there any way to get user's name when user uses trigger? \n\nThank you for your answer.\n\n\n\n",
"msg_date": "Mon, 29 Jun 1998 10:14:46 +0300 (EEST)",
"msg_from": "Alexzander Blashko <[email protected]>",
"msg_from_op": true,
"msg_subject": "Getting username from trigger? "
},
{
"msg_contents": " Is there any way to get user's name when user uses trigger? \n\nsee contrib/spi/username.c or something like that.\n\nCheers,\nBrook\n",
"msg_date": "Mon, 29 Jun 1998 09:39:23 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Getting username from trigger?"
}
] |
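A short sketch of the contrib/spi approach mentioned in the reply: its insert_username() trigger function stamps the current user's name into a column. The table and column names here are made-up examples.

    CREATE TABLE note (
        body      text,
        edited_by text      -- filled in by the trigger
    );

    CREATE TRIGGER note_username
        BEFORE INSERT OR UPDATE ON note
        FOR EACH ROW
        EXECUTE PROCEDURE insert_username (edited_by);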
[
{
"msg_contents": "I have renamed most of the variables/structures in\nbackend/storage/lmgr/lock.c. The old names were very confusing.\n\nNames like LOCKTAB, LOCKCTL, ltable, ntypes, AllTables, locktype,\nLockTableInit, tableId, tableID, lockName are gone. There are two\nprimary items in lock.c. One is the locking method. Currently we only\nsupport multi-level locks. The second is lock mode (read/write). These\nwere very unclear in the code. Now, I can understand what is happening,\nand I hope others can too.\n\nI have attatched the contents of the README in that directory, to show\nthat someone in 1992 also agrees with me, in case some want to complain\nthat I changed some variable names.\n\n---------------------------------------------------------------------------\n\nThis file is an attempt to save me (and future code maintainers) some\ntime and a lot of headaches. The existing lock manager code at the time\nof this writing (June 16 1992) can best be described as confusing. The\ncomplexity seems inherent in lock manager functionality, but variable\nnames chosen in the current implementation really confuse me everytime\nI have to track down a bug. Also, what gets done where and by whom isn't\nalways clear....\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 29 Jun 1998 22:09:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "new lock structure names"
}
] |
[
{
"msg_contents": "I have seriously updated the backend flowchart. In fact, the name is\ngone, and it is now called \"How PostgreSQL Processes a Query\".\n\nI have added several paragraphs describing the flow of data from one\nmodule to another, and have added a section describing 'exactly' what is\nstored in shared memory, and how it is used.\n\nIf you have current source tree, point your browser at:\n\n\tfile:/usr/local/pgsql/src/tools/backend\n\nor the proper directory for your system. You can also view it on our\nwww site from the docs page.\n\nI am interested in comments, suggestions, and any possible errors.\n\nThis will certainly help people get started coding, because at least\nthey will understand how all the parts fit together.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 30 Jun 1998 00:40:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "flowchart updated"
}
] |
[
{
"msg_contents": ">> Approved: [email protected]\n>> \n>> The latest version of Rx, 1.9, is available on the web at:\n>> \n>> \thttp://users.lanminds.com/~lord\n>> \tftp://emf.net/users/lord/src/rx-1.9.tar.gz\n>> and at ftp://ftp.gnu.org/pub/gnu/rx-1.9.tar.gz and mirrors of that \n>> site (see list below).\n>\n>The reason that we do not use this particular Regex package is that *it*\n>falls under the \"Almighty GPL\", which conflicts with our Berkeley\n>Copyright...\n\nI thought we had an agreement that GNU software could be \nincluded in postgresql without imposing a GPL on postgresql,\nas long as it is delivered in it's whole, probably in it's own subdirectory,\nwith the GPL only applying to this subdirectory.\n\nAndreas\n\n",
"msg_date": "Tue, 30 Jun 1998 08:54:16 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] regular expressions from hell"
},
{
"msg_contents": "On Tue, 30 Jun 1998, Andreas Zeugswetter wrote:\n\n> >> Approved: [email protected]\n> >> \n> >> The latest version of Rx, 1.9, is available on the web at:\n> >> \n> >> \thttp://users.lanminds.com/~lord\n> >> \tftp://emf.net/users/lord/src/rx-1.9.tar.gz\n> >> and at ftp://ftp.gnu.org/pub/gnu/rx-1.9.tar.gz and mirrors of that \n> >> site (see list below).\n> >\n> >The reason that we do not use this particular Regex package is that *it*\n> >falls under the \"Almighty GPL\", which conflicts with our Berkeley\n> >Copyright...\n> \n> I thought we had an agreement that GNU software could be \n> included in postgresql without imposing a GPL on postgresql,\n> as long as it is delivered in it's whole, probably in it's own subdirectory,\n> with the GPL only applying to this subdirectory.\n\n\tOnly those that fall under LGPL, not GPL...\n\n",
"msg_date": "Tue, 30 Jun 1998 07:54:31 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] regular expressions from hell"
},
{
"msg_contents": "> \n> On Tue, 30 Jun 1998, Andreas Zeugswetter wrote:\n> \n> > >> Approved: [email protected]\n> > >> \n> > >> The latest version of Rx, 1.9, is available on the web at:\n> > >> \n> > >> \thttp://users.lanminds.com/~lord\n> > >> \tftp://emf.net/users/lord/src/rx-1.9.tar.gz\n> > >> and at ftp://ftp.gnu.org/pub/gnu/rx-1.9.tar.gz and mirrors of that \n> > >> site (see list below).\n> > >\n> > >The reason that we do not use this particular Regex package is that *it*\n> > >falls under the \"Almighty GPL\", which conflicts with our Berkeley\n> > >Copyright...\n> > \n> > I thought we had an agreement that GNU software could be \n> > included in postgresql without imposing a GPL on postgresql,\n> > as long as it is delivered in it's whole, probably in it's own subdirectory,\n> > with the GPL only applying to this subdirectory.\n> \n> \tOnly those that fall under LGPL, not GPL...\n> \n\nAs it happens, rx-1.9 is not under either the GPL, or the LGPL. The author\noffers a range of free and commercial licenses. It appears that we could\nuse this software, but the safest bet would be to contact the author and ask\nas the license documents are not as well spelled out as the Gnu licences and\nI may have misunderstood them.\n\n-dg\n \nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\nQ: Someone please enlighten me, but what are they packing into NT5 to\nmake it twice the size of NT4/EE?\nA: A whole chorus line of dancing paperclips. -- [email protected]\n",
"msg_date": "Tue, 30 Jun 1998 12:32:55 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] regular expressions from hell"
},
{
"msg_contents": "> As it happens, rx-1.9 is not under either the GPL, or the LGPL. The author\n> offers a range of free and commercial licenses. It appears that we could\n> use this software, but the safest bet would be to contact the author and ask\n> as the license documents are not as well spelled out as the Gnu licences and\n> I may have misunderstood them.\n\nDavid, I received this from the maintainer of our current regex library.\nSeems like rx may be too buggy for us. But we do need a faster regex\nlibrary.\n\n\n---------------------------------------------------------------------------\n\n> Henry, will the new code you write be in the public domain, or only part\n> of BSDI?\n\nThe new regex code will be under essentially the same redistribution terms\nas the old stuff (in fact, slightly more generous). BSDI didn't end up\ncontributing to this particular project, and the folks who did were all \nhappy with open redistribution.\n\nI should clarify that this code isn't \"to be written\" -- it already exists,\nalthough I'm not entirely happy with it yet and want to limit distribution\nuntil it's tidied up somewhat.\n\n> Would you recommend we replace our regex stuff with something else?\n\nMy only real competitor :-) right now appears to be the GNU rx package,\nand I have heard enough grumbling about it that I hesitate to recommend\nit for general use. It's faster but it has problems, is my impression;\nI have not examined it closely.\n\n> Do you have any patches you would like us to test?\n\nNothing quite yet. Incidentally, the new code is a from-scratch\nreimplementation, not just patches to the 4.4 one.\n\n Henry Spencer\n [email protected]\n\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 19 Jul 1998 00:22:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] regular expressions from hell"
}
] |
[
{
"msg_contents": ">> \n>> It takes too long time to reload big tables...\n>\n> I have to agree here...the one application that *I* really use\n>this for is an accounting server...any downtime is unacceptable, because\n>the whole system revolves around the database backend.\n>\n> Take a look at Michael Richards application (a search engine)\n>where it has several *million* rows, and that isn't just one table.\n>Michael, how long would it take to dump and reload that? \n\n\nI just did a dump and reload. The reload took about 18 hours\non a Sun Ultra sparc, dual CPU 256MB RAM and UWSCSI disks. The\ndatabase was not that big either. After gzipping the dump file\nit was only a few hundred megabytes. After reload it is about\n12 million rows total.\n\nThat said, if you guys would reduce to per-tupple overhead and/\nor make the thing go faster I'd be happy to dump and reload.\n\nDown time is an issue with some people. My suggestion is to\ndump the database but _don't drop it_ keep running the old\nversion while the new version is being rebuilt. This does\nrequire running both version 6.3 and 6.4 servers either on \ndiferent port numbers and data directories or a second computer.\nIn any case, there is no need to be down except for a few minutes\nno matter how big your database.\n\nSo as a user my request to the development team is Performence,\nPerformence, Performence. Don't trade away performence for anything.\nYou can code around a messing feature but a slow DBMS forces you\nback to using flat files.\n-- \n--Chris Albertson\n\n [email protected] Voice: 626-351-0089 X127\n Logicon RDA, Pasadena California Fax: 626-351-0699\n",
"msg_date": "Tue, 30 Jun 1998 11:13:53 -0700",
"msg_from": "Chris Albertson <[email protected]>",
"msg_from_op": true,
"msg_subject": "dump/reload"
}
] |
[
{
"msg_contents": "Can an SQL database (PostgreSQL) be connected to shell in a way that if\na directory is changed on shell it will update the database with the\nchanges? If so how?\nA prompt reply would be appreciated.\n\nThank in advance,\nGreg\n\n",
"msg_date": "Wed, 01 Jul 1998 10:13:26 -0400",
"msg_from": "Gregory Holston <[email protected]>",
"msg_from_op": true,
"msg_subject": "DB-shell connect"
},
{
"msg_contents": "> \n> Can an SQL database (PostgreSQL) be connected to shell in a way that if\n> a directory is changed on shell it will update the database with the\n> changes? If so how?\n> A prompt reply would be appreciated.\n> \n> Thank in advance,\n> Greg\n> \n> \n> \n\nThat is tough. It is normally very difficult to trigger 'anything' if\nsomeone changes a directory in their shell. It would have to be\ntriggered in the kernel, and out of the kernel up to an SQL process. \nVery tough.\n\nTheir are Unix tools that monitor the everyone's current directory, but\nthey are polling tools, not some automatic thing on every directory\nchange.\n\nIt is the chdir() kernel call you want to trigger on.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 1 Jul 1998 11:30:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DB-shell connect"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\n> Can an SQL database (PostgreSQL) be connected to shell in a way that if\n> a directory is changed on shell it will update the database with the\n> changes? If so how?\n> A prompt reply would be appreciated.\n\nOn most *nix systems, this feature is called \"auditing.\" You may not\nrealize this, Greg, but what you *really* want to do is:\n1) Turn on auditing (if auditing is not available, try new OS).\n2) Write a script which grovels through the audit log periodically\nand inserts the appropriate rows in your database.\n\nActually, there *is* another alternative:\n1) Make everybody's shell /bin/bash (or translate the following)\n2) Create the following alias in /etc/profile\nalias cd=\"\\cd $1;psql database -c \\\"insert into cdtable values (\\'$PWD\\');\\\\g\\\"\"\n3) Hope your users don't unalias that.\n\nBTW, the alternative offered above is BAD:\n1) Every cd invokes psql == suckful performance.\n2) Can be trivially broken by users.\n3) Will not notice chdir()'s in shell-scripts and executables.\n\nAn improvement can be attained by grabbing the source to bash, and\nmodifying both the init code (attach to database) and the \"cd\" code\n(insert row, COMMIT). However, this shell should *never* be used by\nroot (never futz with the root shell) or PostgreSQL superuser\n(chicken-and-egg).\n\n- -- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.2\n\niQBVAwUBNZpaRYdzVnzma+gdAQFASQH+NaiN0JnlDGxd5mP7gjmQ/7mXqS8bvds9\nGiYHmTuOZ1gTJeW1zWiz60nc4xHQYz6jTNE/JqLaIBqP44NKHwQiUA==\n=gWRI\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "1 Jul 1998 11:48:21 -0400",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DB-shell connect"
},
{
"msg_contents": ">> Can an SQL database (PostgreSQL) be connected to shell in a way that if\n>> a directory is changed on shell it will update the database with the\n>> changes? If so how?\n\nI assumed what he really meant was detecting additions or deletions of\nfiles in a particular Unix directory, not trapping chdir's all over\nthe system...\n\nIf instant response isn't too critical, having a daemon process scan\nthe directory every five minutes or so seems like the least expensive/\nintrusive way to do it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Jul 1998 12:40:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DB-shell connect "
}
] |
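For the alias-based workaround discussed above, the database side could be a single table; this is only an illustration of the hypothetical cdtable the alias inserts into, with a user column and a timestamp added as assumptions.

    CREATE TABLE cdtable (
        username text      DEFAULT CURRENT_USER,
        dir      text,
        at       timestamp DEFAULT now()
    );

    -- Each cd would then effectively record a row like this:
    INSERT INTO cdtable (dir) VALUES ('/home/greg/projects');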
[
{
"msg_contents": "Hi,\n\nI have a little bit confused about permissions:\nIf I work as a user who have permissions to create databases but\nnot to create users I can't create view in my database which I own.\nIs it my 'glyuk' or postgres feature ?\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 1 Jul 1998 18:33:02 +0400 (MSK DST)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Permissions for create view "
}
] |
[
{
"msg_contents": "unsubscribe\n\n",
"msg_date": "Fri, 03 Jul 1998 04:09:26 +1100 (VSD)",
"msg_from": "Roman Volkoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PORTS] Re: [HACKERS] no answer to Solaris 2.6 failure to bu"
},
{
"msg_contents": "\nI still cannot get postgres 6.3.2 libpq to build under \nSolaris 2.6 ... this means I can't test any of my apps\nusing postgres on my deployment platform. can anyone\nhelp?\n\ngcc version 2.7.2.3\n\nlooks like too many warnings (on make in libpq dir) -- \nI'm not expert at Posix/bsd porting, which is sorta what\nthis looks like...\n\ngcc -I../../include -I../../backend -I/opt/tcl/include -I/usr/include// -I/opt/tcl/include// -Wall -Wmissing-prototypes -DFRONTEND -fPIC -c fe-connect.c \nfe-connect.c: In function `PQconnectdb':\nfe-connect.c:204: warning: implicit declaration of function `strdup'\nfe-connect.c:204: warning: assignment makes pointer from integer without a cast\nfe-connect.c:206: warning: assignment makes pointer from integer without a cast\nfe-connect.c:208: warning: assignment makes pointer from integer without a cast\nfe-connect.c:210: warning: assignment makes pointer from integer without a cast\nfe-connect.c:212: warning: assignment makes pointer from integer without a cast\nfe-connect.c:214: warning: assignment makes pointer from integer without a cast\nfe-connect.c:216: warning: assignment makes pointer from integer without a cast\nfe-connect.c: In function `PQsetdbLogin':\nfe-connect.c:339: warning: assignment makes pointer from integer without a cast\nfe-connect.c:342: warning: assignment makes pointer from integer without a cast\nfe-connect.c:348: warning: assignment makes pointer from integer without a cast\nfe-connect.c:351: warning: assignment makes pointer from integer without a cast\nfe-connect.c:357: warning: assignment makes pointer from integer without a cast\nfe-connect.c:360: warning: assignment makes pointer from integer without a cast\nfe-connect.c:366: warning: assignment makes pointer from integer without a cast\nfe-connect.c:369: warning: assignment makes pointer from integer without a cast\nfe-connect.c:374: warning: assignment makes pointer from integer without a cast\nfe-connect.c:379: warning: assignment makes pointer from integer without a cast\nfe-connect.c:399: warning: assignment makes pointer from integer without a cast\nfe-connect.c:403: warning: assignment makes pointer from integer without a cast\nfe-connect.c:406: warning: assignment makes pointer from integer without a cast\nfe-connect.c:412: warning: assignment makes pointer from integer without a cast\nfe-connect.c:414: warning: assignment makes pointer from integer without a cast\nfe-connect.c: In function `connectDB':\nfe-connect.c:561: warning: passing arg 4 of `setsockopt' from incompatible pointer type\nfe-connect.c:579: warning: implicit declaration of function `fdopen'\nfe-connect.c:579: warning: assignment makes pointer from integer without a cast\nfe-connect.c:580: warning: assignment makes pointer from integer without a cast\nfe-connect.c: In function `PQsetenv':\nfe-connect.c:694: warning: implicit declaration of function `strcasecmp'\nfe-connect.c: In function `closePGconn':\nfe-connect.c:743: storage size of `ignore_action' isn't known\nfe-connect.c:749: storage size of `oldaction' isn't known\nfe-connect.c:756: warning: implicit declaration of function `sigemptyset'\nfe-connect.c:758: warning: implicit declaration of function `sigaction'\nfe-connect.c:749: warning: unused variable `oldaction'\nfe-connect.c:743: warning: unused variable `ignore_action'\nfe-connect.c: In function `conninfo_parse':\nfe-connect.c:866: warning: assignment makes pointer from integer without a cast\nfe-connect.c:1012: warning: assignment makes pointer from integer without a 
cast\nfe-connect.c:1035: warning: assignment makes pointer from integer without a cast\nfe-connect.c:1047: warning: assignment makes pointer from integer without a cast\nfe-connect.c:1060: warning: assignment makes pointer from integer without a cast\nfe-connect.c:1073: warning: assignment makes pointer from integer without a cast\nmake: *** [fe-connect.o] Error 1\n\n\nde\n\n.............................................................................\n:De Clarke, Software Engineer UCO/Lick Observatory, UCSC:\n:Mail: [email protected] | \"There is no problem in computer science that cannot: \n:Web: www.ucolick.org | be solved by another level of indirection\" --J.O. :\n\n\n\n",
"msg_date": "Thu, 2 Jul 1998 14:45:23 -0700 (PDT)",
"msg_from": "De Clarke <[email protected]>",
"msg_from_op": false,
"msg_subject": "no answer to Solaris 2.6 failure to build 6.3.2?"
},
{
"msg_contents": "\ngoing to try it now, but is this befoer or after the patches?\n\n\n\nOn Thu, 2 Jul 1998, De Clarke wrote:\n\n> \n> I still cannot get postgres 6.3.2 libpq to build under \n> Solaris 2.6 ... this means I can't test any of my apps\n> using postgres on my deployment platform. can anyone\n> help?\n> \n> gcc version 2.7.2.3\n> \n> looks like too many warnings (on make in libpq dir) -- \n> I'm not expert at Posix/bsd porting, which is sorta what\n> this looks like...\n> \n> gcc -I../../include -I../../backend -I/opt/tcl/include -I/usr/include// -I/opt/tcl/include// -Wall -Wmissing-prototypes -DFRONTEND -fPIC -c fe-connect.c \n> fe-connect.c: In function `PQconnectdb':\n> fe-connect.c:204: warning: implicit declaration of function `strdup'\n> fe-connect.c:204: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:206: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:208: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:210: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:212: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:214: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:216: warning: assignment makes pointer from integer without a cast\n> fe-connect.c: In function `PQsetdbLogin':\n> fe-connect.c:339: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:342: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:348: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:351: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:357: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:360: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:366: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:369: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:374: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:379: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:399: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:403: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:406: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:412: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:414: warning: assignment makes pointer from integer without a cast\n> fe-connect.c: In function `connectDB':\n> fe-connect.c:561: warning: passing arg 4 of `setsockopt' from incompatible pointer type\n> fe-connect.c:579: warning: implicit declaration of function `fdopen'\n> fe-connect.c:579: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:580: warning: assignment makes pointer from integer without a cast\n> fe-connect.c: In function `PQsetenv':\n> fe-connect.c:694: warning: implicit declaration of function `strcasecmp'\n> fe-connect.c: In function `closePGconn':\n> fe-connect.c:743: storage size of `ignore_action' isn't known\n> fe-connect.c:749: storage size of `oldaction' isn't known\n> fe-connect.c:756: warning: implicit declaration of function `sigemptyset'\n> fe-connect.c:758: warning: implicit declaration of function `sigaction'\n> fe-connect.c:749: warning: unused variable `oldaction'\n> fe-connect.c:743: warning: unused variable 
`ignore_action'\n> fe-connect.c: In function `conninfo_parse':\n> fe-connect.c:866: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:1012: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:1035: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:1047: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:1060: warning: assignment makes pointer from integer without a cast\n> fe-connect.c:1073: warning: assignment makes pointer from integer without a cast\n> make: *** [fe-connect.o] Error 1\n> \n> \n> de\n> \n> .............................................................................\n> :De Clarke, Software Engineer UCO/Lick Observatory, UCSC:\n> :Mail: [email protected] | \"There is no problem in computer science that cannot: \n> :Web: www.ucolick.org | be solved by another level of indirection\" --J.O. :\n> \n> \n> \n> \n\n",
"msg_date": "Thu, 2 Jul 1998 18:08:43 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] no answer to Solaris 2.6 failure to build 6.3.2?"
},
{
"msg_contents": "De Clarke <[email protected]> writes:\n> I still cannot get postgres 6.3.2 libpq to build under \n> Solaris 2.6 ... this means I can't test any of my apps\n> using postgres on my deployment platform. can anyone\n> help?\n\n> gcc version 2.7.2.3\n\n> fe-connect.c: In function `PQconnectdb':\n> fe-connect.c:204: warning: implicit declaration of function `strdup'\n> fe-connect.c:204: warning: assignment makes pointer from integer without a cast\n\n[ snip a whole lot of similar errors, all apparently arising from the\nlack of prototypes for strdup() and other functions... ]\n\ngcc is unhappy because it hasn't seen any declaration for strdup, and\nlater fdopen, strcasecmp, etc. All the other complaints follow from\nthat.\n\nEither Solaris 2.6 has incredibly brain-damaged system include files,\nor (more likely) you have a misconfigured gcc that is not reading the\ncorrect version of <stdio.h>, <string.h>, etc. One way that that can\nhappen is if you try to copy a gcc installation from another system\nrather than configuring and compiling it on exactly the target system.\n(gcc tends to like to make \"patched\" copies of some of the system\ninclude files, and if those don't match up with the real ones you are\nin deep trouble.)\n\nSurely libpq is not the only area where things are failing to build\nbecause of these problems? Or is that the only subdirectory you have\ntried to build?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Jul 1998 21:23:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] no answer to Solaris 2.6 failure to build 6.3.2? "
},
{
"msg_contents": ">De Clarke <[email protected]> writes:\n>> I still cannot get postgres 6.3.2 libpq to build under \n>> Solaris 2.6 ... this means I can't test any of my apps\n>> using postgres on my deployment platform. can anyone\n>> help?\n>\n>> gcc version 2.7.2.3\n>\n>> fe-connect.c: In function `PQconnectdb':\n>> fe-connect.c:204: warning: implicit declaration of function `strdup'\n>> fe-connect.c:204: warning: assignment makes pointer from integer without a cast\n>\n>[ snip a whole lot of similar errors, all apparently arising from the\n>lack of prototypes for strdup() and other functions... ]\n>\n>gcc is unhappy because it hasn't seen any declaration for strdup, and\n>later fdopen, strcasecmp, etc. All the other complaints follow from\n>that.\n[snip]\n\nI have successfully built 6.3.2 under Sparc/Solaris 2.6 with gcc\n2.7.2.2. Here are the outputs from gmake.\n\ngcc -I../../include -I../../backend -I/usr/local/include/tcl7.6jp -I/usr/local/include/tk4.2jp -Wall -Wmissing-prototypes -DFRONTEND -c fe-auth.c -o fe-auth.o\ngcc -I../../include -I../../backend -I/usr/local/include/tcl7.6jp -I/usr/local/include/tk4.2jp -Wall -Wmissing-prototypes -DFRONTEND -c fe-connect.c -o fe-connect.o\nfe-connect.c: In function `connectDB':\nfe-connect.c:561: warning: passing arg 4 of `setsockopt' from incompatible pointer type\ngcc -I../../include -I../../backend -I/usr/local/include/tcl7.6jp -I/usr/local/include/tk4.2jp -Wall -Wmissing-prototypes -DFRONTEND -c fe-exec.c -o fe-exec.o\ngcc -I../../include -I../../backend -I/usr/local/include/tcl7.6jp -I/usr/local/include/tk4.2jp -Wall -Wmissing-prototypes -DFRONTEND -c fe-misc.c -o fe-misc.o\ngcc -I../../include -I../../backend -I/usr/local/include/tcl7.6jp -I/usr/local/include/tk4.2jp -Wall -Wmissing-prototypes -DFRONTEND -c fe-lobj.c -o fe-lobj.o\nln -s ../../backend/lib/dllist.c .\ngcc -I../../include -I../../backend -I/usr/local/include/tcl7.6jp -I/usr/local/include/tk4.2jp -Wall -Wmissing-prototypes -DFRONTEND -c dllist.c -o dllist.o\ngcc -I../../include -I../../backend -I/usr/local/include/tcl7.6jp -I/usr/local/include/tk4.2jp -Wall -Wmissing-prototypes -DFRONTEND -c pqsignal.c -o pqsignal.o\nln -s ../../backend/libpq/pqcomprim.c .\ngcc -I../../include -I../../backend -I/usr/local/include/tcl7.6jp -I/usr/local/include/tk4.2jp -Wall -Wmissing-prototypes -DFRONTEND -c pqcomprim.c -o pqcomprim.o\nar crs libpq.a `lorder fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-lobj.o dllist.o pqsignal.o pqcomprim.o | tsort`\nUX tsort: INFORM: cycle in data\n\tfe-connect.o\n\tfe-auth.o\nUX tsort: INFORM: cycle in data\n\tfe-exec.o\n\tfe-connect.o\nranlib libpq.a\nrm -f c.h\necho \"#undef PORTNAME\" > c.h\necho \"#define PORTNAME sparc_solaris\" >> c.h\ncat ../../include/c.h >> c.h\n--\nTatsuo Ishii\[email protected]\n",
"msg_date": "Fri, 03 Jul 1998 10:35:04 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] no answer to Solaris 2.6 failure to build 6.3.2? "
},
{
"msg_contents": "[email protected] writes:\n> I have successfully built 6.3.2 under Sparc/Solaris 2.6 with gcc\n> 2.7.2.2. Here are the outputs from gmake.\n\nOK, that seems sufficient evidence that Solaris 2.6 *does* have proper\ndeclarations for all these functions in the system include files.\nSo we're left with the hypothesis of a botched gcc installation\non De Clarke's machine. I'm not quite foolish enough to say that\nit couldn't be anything else, but that sure seems like the thing\nto investigate first.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Jul 1998 21:50:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] no answer to Solaris 2.6 failure to build 6.3.2? "
},
{
"msg_contents": "On Thu, 2 Jul 1998, Tom Lane wrote:\n\n> De Clarke <[email protected]> writes:\n> > I still cannot get postgres 6.3.2 libpq to build under \n> > Solaris 2.6 ... this means I can't test any of my apps\n> > using postgres on my deployment platform. can anyone\n> > help?\n> \n> > gcc version 2.7.2.3\n> \n> > fe-connect.c: In function `PQconnectdb':\n> > fe-connect.c:204: warning: implicit declaration of function `strdup'\n> > fe-connect.c:204: warning: assignment makes pointer from integer without a cast\n> \n> [ snip a whole lot of similar errors, all apparently arising from the\n> lack of prototypes for strdup() and other functions... ]\n> \n> gcc is unhappy because it hasn't seen any declaration for strdup, and\n> later fdopen, strcasecmp, etc. All the other complaints follow from\n> that.\n> \n> Either Solaris 2.6 has incredibly brain-damaged system include files,\n> or (more likely) you have a misconfigured gcc that is not reading the\n> correct version of <stdio.h>, <string.h>, etc. One way that that can\n> happen is if you try to copy a gcc installation from another system\n> rather than configuring and compiling it on exactly the target system.\n> (gcc tends to like to make \"patched\" copies of some of the system\n> include files, and if those don't match up with the real ones you are\n> in deep trouble.)\n\nIsn't it possible to just run 'fixincludes'? This is what's done when you \ninstall gcc from scratch. Don't actually know where to find fixincludes \nthough.\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Fri, 3 Jul 1998 09:44:24 +0200 (MET DST)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] no answer to Solaris 2.6 failure to build 6.3.2?"
}
] |
[
{
"msg_contents": "\ndoes anyone have any idea how difficult it would be to implement, and\nperhaps point me in the right direction?\n",
"msg_date": "Thu, 2 Jul 1998 19:35:40 -0700 (PDT)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "implementing outer joins"
},
{
"msg_contents": "> \n> does anyone have any idea how difficult it would be to implement, and\n> perhaps point me in the right direction?\n> \n> \n\nGood question. My guess is that you have to set a flag in the\nRangeTblEntry for OUTER, and have this flag read by all the join\nmethods. Most(all?) of them have an inner and outer loop. I think\nOUTER has to be in the outer part of the loop, and as you are spinning\nthrough the outer loop, if you don't find any matches in the inner loop,\nyou output the outer loop with NULLs for the inner values. Now if you\nhave only two tables joined, and they are both outer, I am not sure how\nto handle that.\n\nCheck out the new backend \"How PostgreSQL Processes a Query\" for hints\non where to make changes. (I have gotten no comments on the new\nversion.) Also check out the Developers FAQ for ideas too.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 3 Jul 1998 00:08:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] implementing outer joins"
},
{
"msg_contents": "> \n> does anyone have any idea how difficult it would be to implement, and\n> perhaps point me in the right direction?\n> \n\nI forget where, but there is a place in the code where the two values\nof the join are compared to see whether the row qualifies. Around\nthis spot there is a check for one of the values being null and if it\nis, the function doesn't include the data. Seems that modifying that\nspot or using it as a basis for your code could be a quick start for\nthe left join. The right could perhaps then be converted beforehand\nin the optimizer to a left to be handled there.\n\nBut there's prolly a great and sane reason why this wouldn't work.\n\nHaven't thought about the full outer join yet. Don't have my machine\ncompletely configured yet and it's hard to stay in to do that with it\nbeing summer and all.\n\nLater,\ndarrenk\n",
"msg_date": "Fri, 3 Jul 1998 10:49:51 -0400",
"msg_from": "\"Stupor Genius\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] implementing outer joins"
}
] |
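For reference, the SQL92 outer join syntax under discussion, which PostgreSQL eventually supports; the table and column names are illustrative only.

    -- Every row of t1 appears; where no t2 row matches, the t2 columns
    -- come back as NULL, as described in the replies above.
    SELECT t1.id, t1.name, t2.val
      FROM t1
      LEFT OUTER JOIN t2 ON t1.id = t2.t1_id;

    -- A FULL OUTER JOIN keeps unmatched rows from both sides.
    SELECT t1.id, t2.id
      FROM t1
      FULL OUTER JOIN t2 ON t1.id = t2.t1_id;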
[
{
"msg_contents": "\nokay.. in order to build my understanding of the postgres code (wow,\nthis compiles?) I'm going to print some of it out. This helped me\nlearn C many years back, by printing out the code to MOO server. of\ncourse, I could print out *all* of it, which clearly is not possible\nwith postgres, as there are about 170k lines of code in the backend.\nIt would still take about 1300 pages with really small type.\n\nSo, any suggestions? I'm interested in learning more about the\nplanning & execution of queries -- scary stuff. What should I print\nout? Or should I just scrap that entirely and browse the code with a\ndecent TAGS editor? The only problem with that is I have to be at the\ncomputer...\n\nI'm committed to postgresql, but my time is pretty limited. There are\nmany things I want to do, and gaining a thourough (hah!) knowledge of\nthe code will help facilitate that.\n\n--brett\n",
"msg_date": "Thu, 2 Jul 1998 20:23:58 -0700 (PDT)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "you are lost in a maze of twisting code, all alike"
}
] |
[
{
"msg_contents": "> Hello!\n> \n> Through some minor changes, I have been able to compile the libpq client\n> libraries on the Win32 platform. Since the libpq communications part has\n> been rewritten, this has become much easier. Enclosed is a patch that\n> will allow at least Microsoft Visual C++ to compile libpq into both a\n> static and a dynamic library.\n> I will take a look at porting the psql frontend as well, but I figured\n> it was a good idea to send in these patches first - so no major changes\n> are done to the files before it gets applied (if it does).\n> \n> The patches patch clean against the snapshot from Jun 20 (I don't have\n> cvs, so I got it from the ftp site). It modifies only the\n> interfaces/libpq/* files and the include/postgres.h file.\n> It also creates the following new files:\n> interfaces/libpq/win32.mak\t- Makefile for Win32 (Visual C)\n> interfaces/libpq/libpqdll.c - Routine required to compile dynamic\n> library\n> interfaces/libpq/lubpqdll.def - List of functions to export from dynamic\n> lib\n> interfaces/libpq/win32.h - Misc header stuff req. to work in win32\n> win32.mak\t\t\t\t- Top level make file. Will only\n> make libpq now.\n> \n> To compile it, you just issue the command\n> \"nmake /F win32.mak\"\n> src directory. A static and a dynamic version will be created.\n> \n> When I have finished checking over the possiblities of doing the psql\n> part too, I will look at writing a short README on how to do it.\n> \n> \n> I hope this is interesting, and gets incorporated in the source soon.\n> \n> \n> Regards,\n> Magnus Hagander\n> \n> \n> \n\nI have applied this fine patch to allow WIN32 compiles of libpq. Nice\njob, and a good feature for us.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 3 Jul 1998 00:30:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: (postgres) Libpq Win32"
}
] |
[
{
"msg_contents": "Hi,\n\nare BLOBs deleted that are not referenced any more?\n\nImagine, I have a table with a column\n\n | text | oid | 4 |\n\nwhen this row is deleted, will postgres throw the BLOB away?\n\n---\n _ _\n _(_)(_)_ David Wetzel, Turbocat's Development,\n(_) __ (_) Buchhorster Strasse, D-16567 Muehlenbeck/Berlin, FRG,\n _/ \\_ Fax +49 33056 82835 NeXTmail [email protected]\n (______) http://www.turbocat.de/\n DEVELOPMENT * CONSULTING * ADMINISTRATION\n",
"msg_date": "Fri, 3 Jul 98 11:28:08 +0200",
"msg_from": "David Wetzel <[email protected]>",
"msg_from_op": true,
"msg_subject": "are BLOBs deleted that are not referenced?"
},
{
"msg_contents": "On Fri, 3 Jul 1998, David Wetzel wrote:\n\n> Hi,\n> \n> are BLOBs deleted that are not referenced any more?\n\nNo, they remain in the database, and become 'orphaned'. We are looking at\nways of removing this problem, which occurs mainly with JDBC & ODBC based\nclients.\n\nThere is one method in src/contrib/lo which goes some way in solving this,\nbut there was also discussion about using vacuum to remove the orphaned\nblobs.\n\n> Imagine, I have a table with a column\n> \n> | text | oid | 4 |\n> \n> when this row is deleted, will postgres throw the BLOB away?\n\nThe lo type (src/contrib/lo) handles row deletion (as well as updates) by\ndefining a trigger which deletes the associated blob as required.\n\nHowever, DELETE TABLE doesn't fire a trigger, so if the table's contents\ndon't get deleted first, then the blobs are again orphaned.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Sun, 5 Jul 1998 11:06:50 +0100 (BST)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] are BLOBs deleted that are not referenced?"
}
] |
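A sketch of the trigger-based cleanup described above, along the lines of the lo type in src/contrib/lo: its lo_manage() trigger function unlinks the old large object when a row is updated or deleted. The table below and the exact names follow that module's documented usage, so treat them as assumptions to check against your contrib version.

    CREATE TABLE image (
        title  text,
        raster lo            -- large-object reference managed by the trigger
    );

    CREATE TRIGGER t_raster
        BEFORE UPDATE OR DELETE ON image
        FOR EACH ROW
        EXECUTE PROCEDURE lo_manage (raster);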
[
{
"msg_contents": "Hi all,\nI have this problem:\n\nThis sample query create with Access\n\nSELECT figure.*, utenti.ragione_sociale\nFROM figure INNER JOIN utenti ON figure.codice_figura = utenti.azienda;\n\nvia-psqlODBC reports this\n\nERROR: The field being ordered by must appear in the target list'\nSTATEMENT ERROR: func=SC_execute, desc='', errnum=1, errmsg='Error while executing the query'\n\nPostgres executes this sql query (I find this in Psqlodbc.log file) :\n\nSELECT figure.oid,utenti.azienda\nFROM utenti,figure WHERE (figure.codice_figura = utenti.azienda ) ORDER BY figure.codice_figura\n\nWhy is there ORDER BY CLAUSE ?\nWhy is there only \"figure.oid\" instead of \"figure.*\" in SELECT\narguments ??????\n\nBest regards,\n Marco Pollachini\n Sferacarta mailto:[email protected]\n\n\n",
"msg_date": "Fri, 3 Jul 1998 12:02:13 +0200",
"msg_from": "Sferacarta Software <[email protected]>",
"msg_from_op": true,
"msg_subject": "Access & Postgres"
},
{
"msg_contents": "This question may be better posted on the interfaces list.\n\nSee: http://www.insightdist.com/psqlodbc\nVisit the FAQ. It is one of the last items listed.\n\nSferacarta Software wrote:\n\n> Hi all,\n> I have this problem:\n>\n> This sample query create with Access\n>\n> SELECT figure.*, utenti.ragione_sociale\n> FROM figure INNER JOIN utenti ON figure.codice_figura = utenti.azienda;\n>\n> via-psqlODBC reports this\n>\n> ERROR: The field being ordered by must appear in the target list'\n> STATEMENT ERROR: func=SC_execute, desc='', errnum=1, errmsg='Error while executing the query'\n>\n> Postgres executes this sql query (I find this in Psqlodbc.log file) :\n>\n> SELECT figure.oid,utenti.azienda\n> FROM utenti,figure WHERE (figure.codice_figura = utenti.azienda ) ORDER BY figure.codice_figura\n>\n> Why is there ORDER BY CLAUSE ?\n> Why is there only \"figure.oid\" instead of \"figure.*\" in SELECT\n> arguments ??????\n>\n> Best regards,\n> Marco Pollachini\n> Sferacarta mailto:[email protected]\n\n\n\n",
"msg_date": "Mon, 06 Jul 1998 16:30:19 -0400",
"msg_from": "David Hartwig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Access & Postgres"
},
{
"msg_contents": "The backend patch is downloadable text file, linked to our FAQ. I think the direct URL is\nhttp://www.insightdist.com/psqlodbc/junkfilter_patch.txt\n\nI know the code works. Let me know, though, if you have trouble retrieving or applying it. The\npatch was originally 2 patches. I just cat'ed them together. It should apply properly.\n\nPS: This is the first time I have distributed the patch this way. So could I get an affirmation from\nyou even if it goes smoothy.\n\nSferacarta Software wrote:\n\n> Hello David,\n>\n> Do you remember our problem with ORDER BY not in target list ?\n> Well, seems that now we can't go on without your fix.\n> Please may you send me it ? Thanks\n>\n> luned�, 6 luglio 98, you wrote:\n>\n> DH> This question may be better posted on the interfaces list.\n> DH> See: http://www.insightdist.com/psqlodbc\n> DH> Visit the FAQ. It is one of the last items listed.\n> H> Sferacarta Software wrote:\n>\n> >> Hi all,\n> >> I have this problem:\n> >>\n> >> This sample query create with Access\n> >>\n> >> SELECT figure.*, utenti.ragione_sociale\n> >> FROM figure INNER JOIN utenti ON figure.codice_figura = utenti.azienda;\n> >>\n> >> via-psqlODBC reports this\n> >>\n> >> ERROR: The field being ordered by must appear in the target list'\n> >> STATEMENT ERROR: func=SC_execute, desc='', errnum=1, errmsg='Error while executing the query'\n> >>\n> >> Postgres executes this sql query (I find this in Psqlodbc.log file) :\n> >>\n> >> SELECT figure.oid,utenti.azienda\n> >> FROM utenti,figure WHERE (figure.codice_figura = utenti.azienda ) ORDER BY figure.codice_figura\n> >>\n> >> Why is there ORDER BY CLAUSE ?\n> >> Why is there only \"figure.oid\" instead of \"figure.*\" in SELECT\n> >> arguments ??????\n> >>\n> >> Best regards,\n> >> Marco Pollachini\n> >> Sferacarta mailto:[email protected]\n>\n> Best regards,\n> Jose' mailto:[email protected]\n\n\n\n",
"msg_date": "Wed, 08 Jul 1998 12:01:01 -0400",
"msg_from": "David Hartwig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Access & Postgres"
},
{
"msg_contents": "> http://www.insightdist.com/psqlodbc/junkfilter_patch.txt\n> I know the code works.\n\nHi David. Do you have access to the latest cvs development tree for\nPostgres? I'm seeing backend crashes from the \"junkfilter\" regression\ntest, and have been for some time. I (just this morning) applied some\npatches to the tree to fix up type coersion for inserts from other\ntables or columns, and this changed code in the \"junkfilter\" area.\nHowever, the regression test behavior is still bad.\n\nIf you have access to the latest tree, and had time to look at this it\nwould be great. Otherwise, I'll start poking at it. Talk to you soon...\n\n - Tom\n",
"msg_date": "Thu, 09 Jul 1998 04:49:06 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] Access & Postgres"
},
{
"msg_contents": "> I'm seeing backend crashes from the \"junkfilter\" regression test\n\nI think I have a patch for the problem:\n\npostgres=> select c, count(*) from test_missing_target group by 3;\nERROR: ORDER/GROUP BY position 3 is not in target list\n\nPreviously this query provoked a core dump. Will do some regression\ntesting and then commit to the source tree...\n\n - Tom\n",
"msg_date": "Thu, 09 Jul 1998 13:30:43 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] Access & Postgres"
},
{
"msg_contents": "Tom,\n\nYes, I do have access to the cvs tree from home. And, I will check it out\nwhen I get home.\n\nI am curious though. Did you apply the patch, mentioned below, to your\nlatest 6.4 tree? This patch should have been already part of the latest\n6.4. And, to my knowledge, it was already applied completely and\nsuccessfully.\n\nAnyway, the intent of the patch on our web site was to allow 6.3 users to\nbe able to use MS Access cleanly. Now that I think of it, though, this\nversion of the patch does not include the new \"output\" and \"sql\" files\nnecessary for the regression test. In any case, I should clean that up.\n\n\nThomas G. Lockhart wrote:\n\n> > http://www.insightdist.com/psqlodbc/junkfilter_patch.txt\n> > I know the code works.\n>\n> Hi David. Do you have access to the latest cvs development tree for\n> Postgres? I'm seeing backend crashes from the \"junkfilter\" regression\n> test, and have been for some time. I (just this morning) applied some\n> patches to the tree to fix up type coersion for inserts from other\n> tables or columns, and this changed code in the \"junkfilter\" area.\n> However, the regression test behavior is still bad.\n>\n> If you have access to the latest tree, and had time to look at this it\n> would be great. Otherwise, I'll start poking at it. Talk to you soon...\n>\n> - Tom\n\n\n\n",
"msg_date": "Thu, 09 Jul 1998 09:42:38 -0400",
"msg_from": "David Hartwig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] Access & Postgres"
},
{
"msg_contents": "> I am curious though. Did you apply the patch, mentioned below, to \n> your latest 6.4 tree? This patch should have been already part of \n> the latest 6.4. And, to my knowledge, it was already applied \n> completely and successfully.\n\nI did my final tests using a fresh cvs tree two days ago just before\ncommitting my big \"patch wad\". In the last two or three weeks I did have\na \"merge problem\" with my patches in the resjunk area, which might have\nbeen the patches you are talking about.\n\n> Now that I think of it, though, this version of the patch does not \n> include the new \"output\" and \"sql\" files necessary for the regression \n> test. In any case, I should clean that up.\n\nI am in the process of renaming and moving the \"resjunk\" test to\n\"select_implicit\". Will try to commit the changes this morning so it is\navailable to you.\n\nAt the moment, all regression tests (except \"random\") pass on my Linux\nbox :)\n\n - Tom\n",
"msg_date": "Thu, 09 Jul 1998 14:25:30 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] Access & Postgres"
},
{
"msg_contents": "> > http://www.insightdist.com/psqlodbc/junkfilter_patch.txt\n> > I know the code works.\n> \n> Hi David. Do you have access to the latest cvs development tree for\n> Postgres? I'm seeing backend crashes from the \"junkfilter\" regression\n> test, and have been for some time. I (just this morning) applied some\n> patches to the tree to fix up type coersion for inserts from other\n> tables or columns, and this changed code in the \"junkfilter\" area.\n> However, the regression test behavior is still bad.\n> \n> If you have access to the latest tree, and had time to look at this it\n> would be great. Otherwise, I'll start poking at it. Talk to you soon...\n\nI have now made the junk filter only run when needed. Perhaps this will\nfix the problem.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 19 Jul 1998 00:23:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] Access & Postgres"
}
] |
[
{
"msg_contents": "Hi!\n\nI was looking for a libpq version to compile in Win32 platform. \n\nI'm porting my applications from mSQL to PostgreSQL 6.3.2 most of it are CGI and run at UNIX box but I will develop a Engeneering Eletric Calculation System for the Win32 platform with all data stored into PostgreSQL at Unix BOX.\n\nIs it possible to get this patch, or it will be available only in PostgreSQL 6.4?\n\n\n>> Hello!\n>> \n>> Through some minor changes, I have been able to compile the libpq client\n>> libraries on the Win32 platform. Since the libpq communications part has\n>> been rewritten, this has become much easier. Enclosed is a patch that\n>> will allow at least Microsoft Visual C++ to compile libpq into both a\n>> static and a dynamic library.\n>> I will take a look at porting the psql frontend as well, but I figured\n>> it was a good idea to send in these patches first - so no major changes\n>> are done to the files before it gets applied (if it does).\n>> \n>> The patches patch clean against the snapshot from Jun 20 (I don't have\n>> cvs, so I got it from the ftp site). It modifies only the\n>> interfaces/libpq/* files and the include/postgres.h file.\n>> It also creates the following new files:\n>> interfaces/libpq/win32.mak\t- Makefile for Win32 (Visual C)\n>> interfaces/libpq/libpqdll.c - Routine required to compile dynamic\n>> library\n>> interfaces/libpq/lubpqdll.def - List of functions to export from dynamic\n>> lib\n>> interfaces/libpq/win32.h - Misc header stuff req. to work in win32\n>> win32.mak\t\t\t\t- Top level make file. Will only\n>> make libpq now.\n>> \n>> To compile it, you just issue the command\n>> \"nmake /F win32.mak\"\n>> src directory. A static and a dynamic version will be created.\n>> \n>> When I have finished checking over the possiblities of doing the psql\n>> part too, I will look at writing a short README on how to do it.\n>> \n>> \n>> I hope this is interesting, and gets incorporated in the source soon.\n>> \n>> \n>> Regards,\n>> Magnus Hagander\n>> \n>> \n>> \n>\n>I have applied this fine patch to allow WIN32 compiles of libpq. Nice\n>job, and a good feature for us.\n>\n>\n>-- \n>Bruce Momjian | 830 Blythe Avenue\n>[email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n>\n>\n---------------------------------------------------------------------------------------------------\nEng. Roberto Joao Lopes Garcia E-mail: [email protected] \nF. 55 11 848 9906 FAX 55 11 848 9955\n\nMHA Engenharia de Projetos Ltda \nE-mail: [email protected] WWW: http://www.mha.com.br\n\nAv Maia Coelho Aguiar, 215 Bloco D 2 Andar\nCentro Empresarial de Sao Paulo\nSao Paulo - BRASIL - 05805 000\n---------------------------------------------------------------------------------------------------\n",
"msg_date": "Fri, 03 Jul 1998 12:57:05 -0200",
"msg_from": "[email protected] (Roberto Joao Lopes Garcia)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: (postgres) Libpq Win32"
},
{
"msg_contents": "[email protected] (Roberto Joao Lopes Garcia) writes:\n> Is it possible to get this patch, or it will be available only in\n> PostgreSQL 6.4?\n\nSince Magnus indicated that his changes depended on the recent libpq\nrewrite (with accompanying protocol changes), you'd have to do some\nsignificant surgery to get it to work with a 6.3.2 pgsql server.\n\nIf you don't mind running alpha-quality software you could pick up\nthe current sources from the cvs server ... otherwise better wait\nfor 6.4.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Jul 1998 11:09:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: (postgres) Libpq Win32 "
}
] |
[
{
"msg_contents": "> [email protected] (Roberto Joao Lopes Garcia) writes:\n> > Is it possible to get this patch, or it will be available only in\n> > PostgreSQL 6.4?\n> \n> Since Magnus indicated that his changes depended on the recent libpq\n> rewrite (with accompanying protocol changes), you'd have to do some\n> significant surgery to get it to work with a 6.3.2 pgsql server.\n\nYes, that is correct.\nThe \"old libpq\" was much based on using fdopen() on the sockets,\nwhich is not at all supported under Win32. The new code uses send()\nand recv(), which made the porting really easy.\n\n\n//Magnus\n",
"msg_date": "Fri, 3 Jul 1998 18:20:43 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Re: (postgres) Libpq Win32 "
}
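A rough sketch of the send()/recv() style of socket I/O Magnus refers to, as opposed to wrapping the socket with fdopen()/stdio. The socketpair() demo below is POSIX-only and purely illustrative; on Win32 the same send()/recv() loops work against a connected winsock socket, with closesocket() in place of close().

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Write the whole buffer, coping with short sends. */
static int
sock_write_all(int sock, const char *buf, size_t len)
{
    while (len > 0)
    {
        ssize_t n = send(sock, buf, len, 0);

        if (n < 0)
            return -1;
        buf += n;
        len -= (size_t) n;
    }
    return 0;
}

int
main(void)
{
    int         sv[2];
    char        buf[64];
    const char *msg = "hello backend\n";
    ssize_t     n;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return 1;

    if (sock_write_all(sv[0], msg, strlen(msg)) < 0)
        return 1;

    n = recv(sv[1], buf, sizeof(buf) - 1, 0);
    if (n < 0)
        return 1;
    buf[n] = '\0';
    printf("received: %s", buf);

    close(sv[0]);
    close(sv[1]);
    return 0;
}

The practical point is that a short-send loop like sock_write_all() ports essentially unchanged, whereas FILE-based buffering on a socket descriptor has no winsock equivalent.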
] |
[
{
"msg_contents": "Nice to see they have added a mention of subselect(\"recursive\nsubqueries\"?), union, view, and transactions to the CrashMe comparison.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 3 Jul 1998 16:09:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "MySQL and PostgreSQL"
}
] |
[
{
"msg_contents": "> > The simple way is to reduce the capabilities of the Postgres\n> > rule system. Changing the syntax of the CREATE RULE to\n> > \n> > CREATE RULE rule AS { BEFORE | AFTER } event TO table\n> > DO [ INSTEAD ] { action | NOTHING }\n> > \n> > would tell the rewrite handler, if the rule actions have to\n> > be applied before or after the query. Respectively only NEW\n> > or CURRENT could be referenced in the rules qualification and\n> > actions. I don't like this simple way, but I think it's the\n> > only possibility that would work without reincarnating the\n> > instance rule system.\n> \n> Good description of the problem. If adding BEFORE/AFTER makes it easier\n> to program, do it. I don't think we are losing any functionality by\n> doing this, and the rule system is advertized as being broken, so there\n> is probably not a huge installed base.\n> \n> We now have something that doesn't work. If you can get it working, do\n> it. The change makes it clearer to the user exactly how the rule is\n> going to behave, too.\n\nJan, I remember you steering away from rewrite cleanups because we were\nso close to 6.3. Any chance you can make those changes for 6.4, or are\nyou already working on them? We really need cleanup in the rewrite\narea, and I understand you can do it.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 4 Jul 1998 19:57:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PG95-DEV] Rule system"
},
{
"msg_contents": ">\n> > > The simple way is to reduce the capabilities of the Postgres\n> > > rule system. Changing the syntax of the CREATE RULE to\n> > >\n> > > CREATE RULE rule AS { BEFORE | AFTER } event TO table\n> > > DO [ INSTEAD ] { action | NOTHING }\n> > >\n> > > would tell the rewrite handler, if the rule actions have to\n> > > be applied before or after the query. Respectively only NEW\n> > > or CURRENT could be referenced in the rules qualification and\n> > > actions. I don't like this simple way, but I think it's the\n> > > only possibility that would work without reincarnating the\n> > > instance rule system.\n> >\n> > Good description of the problem. If adding BEFORE/AFTER makes it easier\n> > to program, do it. I don't think we are losing any functionality by\n> > doing this, and the rule system is advertized as being broken, so there\n> > is probably not a huge installed base.\n> >\n> > We now have something that doesn't work. If you can get it working, do\n> > it. The change makes it clearer to the user exactly how the rule is\n> > going to behave, too.\n>\n> Jan, I remember you steering away from rewrite cleanups because we were\n> so close to 6.3. Any chance you can make those changes for 6.4, or are\n> you already working on them? We really need cleanup in the rewrite\n> area, and I understand you can do it.\n\n I hoped to get pl/pgsql finished before 6.4, but since I got\n some private trouble and much serious work, I fear to miss\n that target too. Haven't had my fingers on PostgreSQL at all\n for the last weeks.\n\n But you're right, the rewrite handler still needs work and\n since I spent much time to understand the code it would be\n wasted time if someone else has to do it.\n\n Let's see if I find some time sometimes :-)\n\n>\n>\n> --\n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n>\n\n\nUntil later, Jan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 8 Jul 1998 20:34:45 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [PG95-DEV] Rule system"
},
{
"msg_contents": "> I hoped to get pl/pgsql finished before 6.4, but since I got\n> some private trouble and much serious work, I fear to miss\n> that target too. Haven't had my fingers on PostgreSQL at all\n> for the last weeks.\n> \n> But you're right, the rewrite handler still needs work and\n> since I spent much time to understand the code it would be\n> wasted time if someone else has to do it.\n> \n> Let's see if I find some time sometimes :-)\n\nThat would be good. The pl/pgsql would be nice, but the rewrite system\nis more significant to users right now.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 9 Jul 1998 11:05:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [PG95-DEV] Rule system"
},
{
"msg_contents": "> >\n> > > > The simple way is to reduce the capabilities of the Postgres\n> > > > rule system. Changing the syntax of the CREATE RULE to\n> > > >\n> > > > CREATE RULE rule AS { BEFORE | AFTER } event TO table\n> > > > DO [ INSTEAD ] { action | NOTHING }\n> > > >\n> > > > would tell the rewrite handler, if the rule actions have to\n> > > > be applied before or after the query. Respectively only NEW\n> > > > or CURRENT could be referenced in the rules qualification and\n> > > > actions. I don't like this simple way, but I think it's the\n> > > > only possibility that would work without reincarnating the\n> > > > instance rule system.\n> > >\n> > > Good description of the problem. If adding BEFORE/AFTER makes it easier\n> > > to program, do it. I don't think we are losing any functionality by\n> > > doing this, and the rule system is advertized as being broken, so there\n> > > is probably not a huge installed base.\n> > >\n> > > We now have something that doesn't work. If you can get it working, do\n> > > it. The change makes it clearer to the user exactly how the rule is\n> > > going to behave, too.\n> >\n> > Jan, I remember you steering away from rewrite cleanups because we were\n> > so close to 6.3. Any chance you can make those changes for 6.4, or are\n> > you already working on them? We really need cleanup in the rewrite\n> > area, and I understand you can do it.\n> \n> I hoped to get pl/pgsql finished before 6.4, but since I got\n> some private trouble and much serious work, I fear to miss\n> that target too. Haven't had my fingers on PostgreSQL at all\n> for the last weeks.\n> \n> But you're right, the rewrite handler still needs work and\n> since I spent much time to understand the code it would be\n> wasted time if someone else has to do it.\n> \n> Let's see if I find some time sometimes :-)\n> \n\nJust curious if you have had any time to look into cleaning up the\nrewrite system for 6.4?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 3 Aug 1998 23:45:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PG95-DEV] Rule system"
}
] |
[
{
"msg_contents": "\nLast Tuesday evening I was upgrading my machine from RedHat 4.1 to 5.0\nwhen I started to smell burning. The power supplies fan had failed, and\nhad gone unnoticed until the inside of the machine was pretty warm (this\nmachine is on 24 hours a day).\n\nAnyhow, I removed the power pretty quickly, but as the upgrade was in\nprogress at the time, the machine wasn't in too good a state. So since\nthen, after replacing the psu, and adding a new disk, I've only just got\nback online.\n\nAs for JDBC, I've just got java back up and running. Next I have to get\nsome patches to bring the various libraries up to date, then get CVS\nand postgres back up and in sync. This should take me a couple of days.\nOnce that has been done, I'm going to resume with the JDBC Documentation.\n\nSo, if anyone has sent me any emails, esp. for the JDBC Driver, can they\nresend it, as I may have lost some mail between my last backup and coming\nback online.\n\nAlso, I'm in the process of merging where the email lists are delivered\nto, so anything should be sent to:\n\n\[email protected]\n\nThanks, Peter.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Sun, 5 Jul 1998 11:38:25 +0100 (BST)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "JDBC Driver Development Hitch"
}
] |
[
{
"msg_contents": "> > Also, are these items completed? How about our cancel query key? I\n> > think it is random/secure enough for our purposes. Can you make the\n> > changes, or do you need changes from me?\n> \n> After pulling down the current cvs sources and taking a quick look\n> around, I guess nothing has happened on this front whilst I wasn't\n> paying attention. There is code in postmaster.c to compute a cancel\n> key for each newly-spawned backend, but nothing ever gets done with it.\n> \n> What we still need to do, I think:\n> \n> 1. Define the client-to-postmaster message that requests a cancel.\n\nI was totally lost on where to implement this. I looked and could not\nfind the proper place for it. I am sure you will be able to figure it\nout.\n\n\n> 2. Implement code in postmaster.c that receives this message and sends\n> a signal to the appropriate backend. (What signal did you want to use?\n> I assume SIGURG would be a bad idea, probably SIGUSR1 or 2...)\n\nSure, either of these, and replace the current use of SIGURG with the\nnew one.\n\n> \n> 3. Modify postgres.c to treat that signal not SIGURG as requesting a\n> cancel; no real change in how the signal is handled, only what signal\n> number it is.\n\nYep, just search/replace. I recommend mkid, (see FAQ_DEV). Very nice\nfor this type of thing, though overkill in this case.\n\n> 4. However, we can and probably should rip out *all* OOB-related code\n> in the backend. It's dead code and probably always will be, given the\n> portability issues.\n\n\nYes, other interfaces probably don't support OOB. That was the most\nconvincing argument to scrap OOB.\n\n> \n> 5. Define an additional protocol message that tells the client what the\n> secret cancel key is; have postgres.c send this at some appropriate\n> point during startup.\n\nYes.\n\n> \n> 6. Modify libpq to save the cancel key and issue cancels via the message\n> to the postmaster instead of OOB.\n\nYep, and you said you want to save the PID too, which makes sense. \nPostmaster has to spin through its structure, find the proper pid, and\ncheck against the secret key. If someone sends a bogus request, please\ndump something to the postmaster log file. Will be helpful for\ndebugging.\n\n> \n> 7. Document these protocol changes. (Do we still call this protocol\n> 2.0, or should we go to 3.0?? Officially 2.0 isn't out yet, so I think\n> we can just change it.)\n\nStill 2.0. No one is using it in production yet, I hope.\n\n> \n> Anything I've missed? How much of this do you feel like doing?\n> I'm happy to do 5,6,7, but am less sure of my competence for 1-4.\n\nI am confused by 1-4 myself, especially the place in the protocol to\npass the secret key to the backend, and the place to receive the\nprotocol change. I can help with the postmaster/backend logic flow, but\nI don't have a concept of the various protocol packets that get sent\nusing various authentication methods.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 5 Jul 1998 21:34:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Revised proposal for libpq and FE/BE protocol changes"
}
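A sketch of the postmaster-side step described above: walk the backend list, match the PID from the cancel packet, verify the secret key, and only then signal the backend, logging bogus requests. The Backend structure, list handling, and the demo in main() are hypothetical illustrations, not the actual postmaster.c code.

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>

/* Hypothetical bookkeeping -- not the real postmaster.c structures. */
typedef struct Backend
{
    pid_t           pid;        /* process id of the backend */
    long            cancel_key; /* secret key sent to the frontend at startup */
    struct Backend *next;
} Backend;

static Backend *backend_list = NULL;

/* Called when the postmaster reads a cancel-request packet from a client. */
static void
handle_cancel_request(pid_t pid, long key)
{
    Backend    *bp;

    for (bp = backend_list; bp != NULL; bp = bp->next)
    {
        if (bp->pid == pid)
        {
            if (bp->cancel_key == key)
                kill(bp->pid, SIGINT);  /* ask that backend to cancel */
            else
                fprintf(stderr, "bad cancel key for backend %d -- ignored\n",
                        (int) pid);
            return;
        }
    }
    fprintf(stderr, "cancel request for unknown backend %d\n", (int) pid);
}

static void
cancel_handler(int sig)
{
    /* a real backend would abort the current query here */
}

int
main(void)
{
    Backend     self;

    /* Register this demo process as the only "backend". */
    self.pid = getpid();
    self.cancel_key = 987654321L;
    self.next = NULL;
    backend_list = &self;

    signal(SIGINT, cancel_handler);                    /* make the demo kill() harmless */
    handle_cancel_request(self.pid, 1L);               /* wrong key: logged and ignored */
    handle_cancel_request(self.pid, self.cancel_key);  /* right key: SIGINT delivered */
    return 0;
}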
] |
[
{
"msg_contents": "Hi there\n\nIs the snapshot on ftp://ftp.postgresql.org/pub/ version 4 of\npostgresql. The timestamp is Jul 4 07:01\n\nCheers\n",
"msg_date": "Mon, 06 Jul 1998 08:15:07 +0200",
"msg_from": "David Maclean <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql snapshot on ftp://ftp.postgresql.org/pub/"
}
] |
[
{
"msg_contents": "\nEnybody knows as to receive the information\nabout the users transmitting inquiries as:\n- username,\n- string of sql-request, transmitted by him.\n\nThank you for your answer.\n \n\n",
"msg_date": "Mon, 6 Jul 1998 12:49:34 +0300 (EEST)",
"msg_from": "Alexzander Blashko <[email protected]>",
"msg_from_op": true,
"msg_subject": "Information about user`s procces:username and his sql-request?"
},
{
"msg_contents": "> \n> Enybody knows as to receive the information\n> about the users transmitting inquiries as:\n> - username,\n> - string of sql-request, transmitted by him.\n> \n> Thank you for your answer.\n> \n> \n> \n> \n\nNot sure if that shows the username too.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 6 Jul 1998 05:57:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Information about user`s procces:username and his\n\tsql-request?"
},
{
"msg_contents": "Hi all.\n\nAs nobody answered my previous post, I decided to correct PostgreSQL\nbuffer leaks caused by large objects methods. \n\nTheses buffer leaks are caused by indexes that are kept open between\ncalls. Outside a transaction, the backend detects them as buffer leaks; it\nsends a NOTICE, and frees them. This sometimes cause a segmentation fault\n(at least on Linux). These indexes are initialized on the first\nlo_read/lo_write/lo_tell call, and (normally) closed on a lo_close call.\nThus the buffer leaks appear when lo direct access functions are used, and\nnot with lo_import/lo_export functions (libpq version calls lo_close\nbefore ending the command, and the backend version uses another path).\n\nThe included patches (against recent snapshot, and against 6.3.2) cause\nindexes to be closed on transaction end (that is on explicit 'END'\nstatment, or on command termination outside trasaction blocks), thus\npreventing the buffer leaks while increasing performance inside\ntransactions. Some (all?) 'classic' memory leaks are also removed.\n \nI hope it will be ok.\n \n---\nPascal ANDRE, graduated from Ecole Centrale Paris\[email protected]\n\"Use the source, Luke. Be one with the Code.\" -- Linus Torvalds",
"msg_date": "Mon, 20 Jul 1998 11:30:41 +0200 (MEST)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Large Objects buffer leak patch(es)"
},
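For reference, a minimal libpq sketch of the access pattern this discussion concerns: all large-object calls wrapped in a single BEGIN ... END block, with lo_close() issued before the transaction ends. The lo_* calls are the standard libpq large-object interface; the database name and payload are made up.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "libpq-fe.h"
#include "libpq/libpq-fs.h"     /* INV_READ / INV_WRITE */

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=test");
    PGresult   *res;
    char        payload[] = "hello, large object";
    Oid         lobj;
    int         fd;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        exit(1);
    }

    res = PQexec(conn, "BEGIN");        /* keep all lo_* calls in one transaction */
    PQclear(res);

    lobj = lo_creat(conn, INV_READ | INV_WRITE);
    fd = lo_open(conn, lobj, INV_WRITE);
    if (fd < 0 || lo_write(conn, fd, payload, strlen(payload)) < 0)
        fprintf(stderr, "large object write failed: %s", PQerrorMessage(conn));
    lo_close(conn, fd);                 /* close before the transaction ends */

    res = PQexec(conn, "END");
    PQclear(res);

    PQfinish(conn);
    return 0;
}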
{
"msg_contents": "Patch applied.\n\n> \n> \tHi all.\n> \n> As nobody answered my previous post, I decided to correct PostgreSQL\n> buffer leaks caused by large objects methods. \n> \n> Theses buffer leaks are caused by indexes that are kept open between\n> calls. Outside a transaction, the backend detects them as buffer leaks; it\n> sends a NOTICE, and frees them. This sometimes cause a segmentation fault\n> (at least on Linux). These indexes are initialized on the first\n> lo_read/lo_write/lo_tell call, and (normally) closed on a lo_close call.\n> Thus the buffer leaks appear when lo direct access functions are used, and\n> not with lo_import/lo_export functions (libpq version calls lo_close\n> before ending the command, and the backend version uses another path).\n> \n> The included patches (against recent snapshot, and against 6.3.2) cause\n> indexes to be closed on transaction end (that is on explicit 'END'\n> statment, or on command termination outside trasaction blocks), thus\n> preventing the buffer leaks while increasing performance inside\n> transactions. Some (all?) 'classic' memory leaks are also removed.\n> \n> I hope it will be ok.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 00:17:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large Objects buffer leak patch(es)"
}
] |
[
{
"msg_contents": "... because the conditional structure assumes that pgsql will only be\nbuilt with non-gcc compilers on HPUX.\n\nThis is an entirely bogus assumption not only for HPUX, but for any\nother architecture that has gcc available.\n\nTo be able to compile, I just duplicated the \"#if defined(__hpux)\"\nblock into the \"#if defined(__GNUC__)\" part of the file, but that's\na pretty grotty hack. I think that the right way to structure the\nfile is just this:\n\n\n#if defined(HAS_TEST_AND_SET)\n\n#if defined(somearchitecture)\n\n#if defined(__GNUC__)\n// inline definition of tas here\n#else\n// non-inline definition of tas here, if default isn't adequate\n#endif\n\n// machine-dependent-but-compiler-independent definitions here\n\n#endif /* somearchitecture */\n\n// ... repeat above structure for each architecture supported ...\n\n\n#if !defined(S_LOCK)\n// default definition of S_LOCK\n#endif\n\n// default definitions of other macros done in the same way\n\n#endif /* HAS_TEST_AND_SET */\n\n\nOn architectures where we don't have any special inline code for GCC,\nthe inner \"#if defined(__GNUC__)\" can just be omitted in that\narchitecture's block.\n\nThe existing arrangement with an outer \"#if defined(__GNUC__)\" doesn't\nhave any obvious benefit, and it encourages missed cases like this one.\n\n\nBTW, I'd suggest making the definition of clear_lock for HPUX be\n\nstatic const slock_t clear_lock =\n{{-1, -1, -1, -1}};\n\nThe extra braces are needed to suppress warnings from gcc, and declaring\nit const just seems like good practice.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Jul 1998 16:02:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "New s_lock.h fails on HPUX with gcc"
},
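To make the proposed layout concrete, here is a compilable sketch of one architecture block arranged that way. The architecture is a stand-in (FAKE_ARCH) and the tas() bodies are non-atomic placeholders; only the structure, plus the doubly-braced const clear_lock suggested above, are the point.

#include <string.h>

#define HAS_TEST_AND_SET
#define FAKE_ARCH               /* stand-in for a real architecture test */

#if defined(HAS_TEST_AND_SET)

#if defined(FAKE_ARCH)

typedef struct
{
    int         sema[4];
} slock_t;

#if defined(__GNUC__)
/* the gcc inline-asm tas() for this architecture would go here */
static __inline__ int
tas(volatile slock_t *lock)
{
    int         busy = (lock->sema[0] == 0);    /* placeholder, NOT atomic */

    lock->sema[0] = 0;
    return busy;
}
#else
/* the vendor-compiler tas() (or a reference to s_lock.c) would go here */
static int
tas(volatile slock_t *lock)
{
    int         busy = (lock->sema[0] == 0);    /* placeholder, NOT atomic */

    lock->sema[0] = 0;
    return busy;
}
#endif

/* compiler-independent, machine-dependent definitions */
static const slock_t clear_lock =
{{-1, -1, -1, -1}};

#define S_UNLOCK(lock)    memcpy((void *) (lock), &clear_lock, sizeof(slock_t))
#define S_INIT_LOCK(lock) S_UNLOCK(lock)

#endif   /* FAKE_ARCH */

/* ... one such block per architecture, then shared defaults ... */

#if !defined(S_LOCK)
#define S_LOCK(lock)      do { while (tas(lock)) ; } while (0)
#endif

#endif   /* HAS_TEST_AND_SET */

int
main(void)
{
    slock_t     lk;

    S_INIT_LOCK(&lk);
    S_LOCK(&lk);
    S_UNLOCK(&lk);
    return 0;
}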
{
"msg_contents": "\nSorry not to reply sooner, the people who pay me wanted me to work on their\nstuff... sigh.\n \n> ... because the conditional structure assumes that pgsql will only be\n> built with non-gcc compilers on HPUX.\n\nMy fault. But, see below. \n\n> This is an entirely bogus assumption not only for HPUX, but for any\n> other architecture that has gcc available.\n\nNot quite. Only for architectures whose S_LOCK code is identical under\nboth GCC and non GCC. Solaris for example has different code for the GCC\ncase vs the non GCC case.\n\n> To be able to compile, I just duplicated the \"#if defined(__hpux)\"\n> block into the \"#if defined(__GNUC__)\" part of the file, but that's\n> a pretty grotty hack. I think that the right way to structure the\n> file is just this:\n> \n> #if defined(HAS_TEST_AND_SET)\n> \n> #if defined(somearchitecture)\n> \n> #if defined(__GNUC__)\n> // inline definition of tas here\n> #else\n> // non-inline definition of tas here, if default isn't adequate\n> #endif\n> \n> // machine-dependent-but-compiler-independent definitions here\n> \n> #endif /* somearchitecture */\n> \n> // ... repeat above structure for each architecture supported ...\n> \n> On architectures where we don't have any special inline code for GCC,\n> the inner \"#if defined(__GNUC__)\" can just be omitted in that\n> architecture's block.\n> \n> The existing arrangement with an outer \"#if defined(__GNUC__)\" doesn't\n> have any obvious benefit, and it encourages missed cases like this one.\n\nI see your point and apologize for my oversight in the cases where the GCC\nimplementation is identical to the non gcc implementation.\n\nHowever, I do think the current \"outer\" __GNUC__ block has some advantages\nthat as you say may not be obvious. It also, as you found, has some problems\nthat I did not notice.\n\n - It works fine on platforms that don't have GCC.\n\n - It works fine on platforms that have only GCC.\n\n - It works fine on platforms that have both GCC and non-GCC but\n have _different_ implementations of S_LOCK (eg Solaris).\n\n - It requires duplicating a code block to make it work on platforms that\n have both GCC and non-GCC and also have identical implementations of\n S_LOCK for both compilers (eg HPUX). It might be \n\nThe main advantage of using _GCC_ as the outer condition is to unify\nall the X86 unix flavors (bar SCO and Solaris) and eliminate a bunch of\nthe previously almost but not quite identical x86 gcc asm blocks in *BSD\nand linux specific segments. These could also perhaps have been written:\n\n#if defined(ix86) && (defined(linux) || defined(FreeBSD) || defined(NetBSD) || defined(BSDI) || defined(BeOS) ...)\n\n GCC specific ix86 asm inline here\n\n#endif /* defined(ix86) && (defined(linux) || defined(FreeBSD) || defined(NetBSD) || defined(BSDI) || defined(BeOS) ...) */\n\nBut I really hate complex platform conditionals as they are a fine source\nof error.\n\nSecondly, by testing for __GNUC__ it makes the unifying feature of\nthe various platforms explicit.\n\nWe also have a couple of platforms that potentially could have other\ncompilers than GCC (eg alpha, VMS), but our current S_LOCK code was clearly\nwritten for GCC, so I stuck them in the GCC block.\n\nPerhaps a look at the original unpatched 6.3.2 code will help explain\nwhat I was trying to accomplish.\n\nStill, your point about making it easy to miss some possibilities is well\ntaken. 
On the other hand, the #if block per platform gets pretty clumsy\nwhen you have a half dozen major platforms that use the same compiler.\n\nPerhaps something this would be better:\n\n\n#if defined(__GNUC)\n #if defined(x86)\n x86 gcc stuff\n #elsif defined(sparc)\n sparc gcc stuff\n #endif\n#elsif defined(GCC_same_as_other_platform)\n stuff that works for both\n#elsif defined(some_doesn't_even_have_gcc_platform)\n stuff that only works for proprietary C\n#elsif defined(non_gcc_is_different_than_gcc_platform)\n stuff that only works for proprietary C\n#endif\n\n\nI have worked a lot harder on this silly spinlock thing than I ever intended.\nFirst there was a merge problem. Then I got sidetracked into checking out\nthe performance issue Bruce was concerned about, which was interesting\nbut time consuming, and now this. At this point I really want to do\n\"the right thing\", but I also really want to move on.\n\nSo if you have a better idea than I outlined just above, or an objection,\nI am very happy to hear it and try to make it right. But, let me know soon\notherwise I will put together a patch using the above method this weekend.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\nQ: Someone please enlighten me, but what are they packing into NT5 to\nmake it twice the size of NT4/EE? A: A whole chorus line of dancing\npaperclips. -- [email protected]\n",
"msg_date": "Thu, 9 Jul 1998 00:07:37 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New s_lock.h fails on HPUX with gcc"
},
{
"msg_contents": "[email protected] (David Gould) writes:\n>> This is an entirely bogus assumption not only for HPUX, but for any\n>> other architecture that has gcc available.\n\n> Not quite. Only for architectures whose S_LOCK code is identical under\n> both GCC and non GCC. Solaris for example has different code for the GCC\n> case vs the non GCC case.\n\nQuite true. But it seems to me that the default assumption should be\nthat an architecture has the same code for GCC and other compilers;\nfirst you get it going, then maybe you think about using asm inline\nto optimize under GCC. With the existing structure of s_lock.h, the\neasiest path is to miss out one case or the other completely.\n\nYour example seems to be that all the x86/GCC platforms can be\nconsolidated, but AFAICS that's true whether you make the outer\nconditional be GCC or platform.\n\n> Still, your point about making it easy to miss some possibilities is well\n> taken. On the other hand, the #if block per platform gets pretty clumsy\n> when you have a half dozen major platforms that use the same compiler.\n\nBut cutting&pasting to start adding support for a new platform is pretty\nsimple and straightforward if there is only one block of code to look at.\nThere might be a tad more duplicated code after a while, but it'd be\neasy to understand and hence easy to modify. I think the direction you\nare headed reduces code duplication at a very high price in confusion\nand fragility.\n\n> So if you have a better idea than I outlined just above, or an objection,\n> I am very happy to hear it and try to make it right. But, let me know soon\n> otherwise I will put together a patch using the above method this weekend.\n\nSince I'm not doing the work, I'll shut up now and let you do what you\nthink best ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Jul 1998 11:16:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New s_lock.h fails on HPUX with gcc "
},
{
"msg_contents": "> ... because the conditional structure assumes that pgsql will only be\n> built with non-gcc compilers on HPUX.\n> \n> This is an entirely bogus assumption not only for HPUX, but for any\n> other architecture that has gcc available.\n> \n> To be able to compile, I just duplicated the \"#if defined(__hpux)\"\n> block into the \"#if defined(__GNUC__)\" part of the file, but that's\n> a pretty grotty hack. I think that the right way to structure the\n> file is just this:\n> \n> \n> #if defined(HAS_TEST_AND_SET)\n> \n> #if defined(somearchitecture)\n> \n> #if defined(__GNUC__)\n> // inline definition of tas here\n> #else\n> // non-inline definition of tas here, if default isn't adequate\n> #endif\n> \n> // machine-dependent-but-compiler-independent definitions here\n> \n> #endif /* somearchitecture */\n> \n> // ... repeat above structure for each architecture supported ...\n> \n> \n> #if !defined(S_LOCK)\n> // default definition of S_LOCK\n> #endif\n> \n> // default definitions of other macros done in the same way\n> \n> #endif /* HAS_TEST_AND_SET */\n> \n> \n> On architectures where we don't have any special inline code for GCC,\n> the inner \"#if defined(__GNUC__)\" can just be omitted in that\n> architecture's block.\n> \n> The existing arrangement with an outer \"#if defined(__GNUC__)\" doesn't\n> have any obvious benefit, and it encourages missed cases like this one.\n> \n> \n> BTW, I'd suggest making the definition of clear_lock for HPUX be\n> \n> static const slock_t clear_lock =\n> {{-1, -1, -1, -1}};\n> \n> The extra braces are needed to suppress warnings from gcc, and declaring\n> it const just seems like good practice.\n> \n> \t\t\tregards, tom lane\n> \n> \n\nPatch applied. I just moved hpux out of the gcc/nogcc ifdef, so it\nalways gets hit. Also changed the clear_lock stuff.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 18 Jul 1998 10:50:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New s_lock.h fails on HPUX with gcc"
},
{
"msg_contents": "> ... because the conditional structure assumes that pgsql will only be\n> built with non-gcc compilers on HPUX.\n> \n> This is an entirely bogus assumption not only for HPUX, but for any\n> other architecture that has gcc available.\n> \n> To be able to compile, I just duplicated the \"#if defined(__hpux)\"\n> block into the \"#if defined(__GNUC__)\" part of the file, but that's\n> a pretty grotty hack. I think that the right way to structure the\n> file is just this:\n\nI have moved platforms that have have common code for gcc and non-gcc to\ntheir own section of s_lock.h. Should make things easier.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 18 Jul 1998 10:59:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New s_lock.h fails on HPUX with gcc"
}
] |
[
{
"msg_contents": "Hmm. I find that SIGUSR1 and SIGUSR2 are both already in use for\ncommunication between backends. We can't really commandeer SIGURG,\neither, because it's apparently the same as SIGUSR1 on SCO\n(see src/include/port/sco.h ... so OOB wouldn't work there anyway!).\n\nAll three of SIGINT, SIGHUP, SIGTERM currently do the same thing in a\nbackend, so it looks like our best choice is to redefine one of those\nas the cancel request signal. Any preference?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Jul 1998 18:18:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Which signal to use for CANCEL from postmaster to backend?"
},
{
"msg_contents": "> Hmm. I find that SIGUSR1 and SIGUSR2 are both already in use for\n> communication between backends. We can't really commandeer SIGURG,\n> either, because it's apparently the same as SIGUSR1 on SCO\n> (see src/include/port/sco.h ... so OOB wouldn't work there anyway!).\n> \n> All three of SIGINT, SIGHUP, SIGTERM currently do the same thing in a\n> backend, so it looks like our best choice is to redefine one of those\n> as the cancel request signal. Any preference?\n> \n> \t\t\tregards, tom lane\n> \n> \n\nI like SIGQUIT. Looks to be unused. SIGINT is used too much from the\ncommand line, and SIGTERM used too much from scripts (the default kill\narg.)\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 7 Jul 1998 13:27:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Which signal to use for CANCEL from postmaster to\n\tbackend?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> All three of SIGINT, SIGHUP, SIGTERM currently do the same thing in a\n>> backend, so it looks like our best choice is to redefine one of those\n>> as the cancel request signal. Any preference?\n\n> I like SIGQUIT. Looks to be unused. SIGINT is used too much from the\n> command line, and SIGTERM used too much from scripts (the default kill\n> arg.)\n\nI've been thinking about it a little more, and on the whole I like using\nSIGINT the best. Here's my reasoning:\n\n1. SIGINT is not normally generated for non-interactive processes,\nso for the ordinary case of a backend running under the postmaster\nit wouldn't be generated by accident. (As you point out, the default\nsignal from kill(1) is SIGTERM not SIGINT.)\n\n2. The only case where it *would* be easy to direct SIGINT to a backend\nis in the case of a backend hand-invoked from the command line. In this\ncase I think that having control-C do a Cancel rather than kill the\nprocess is a fine idea --- it is exactly parallel to the behavior that\npsql now offers. People will probably get used enough to psql's\nbehavior that having an interactive backend work differently would be\na bad idea.\n\n3. SIGQUIT normally generates a coredump, and is one of the few\nnon-error signals that do so. If we use it for the cancel signal we\nwill eliminate the easiest, most standard way of externally forcing a\nbackend to coredump. That seems like a loss of a useful debug tool.\nAlso, in all Unix applications that I'm familiar with, SIGQUIT is a\n\"more severe\" signal than SIGINT --- one expects that an app may trap\nSIGINT and return keyboard control, but SIGQUIT is supposed to kill it.\nInverting the severity of SIGINT and SIGQUIT for a pgsql backend doesn't\nsound like a good plan.\n\n\nSo I think we should leave SIGTERM and SIGQUIT alone, and redefine\nSIGINT to perform a cancel.\n\n\nBTW, I realized that SIGHUP is not really free, it's what is used to\nreturn control from elog(ERROR) to the main loop. The code probably\nshould refer to it as SIGHUP not signal \"1\".\n\n\t\t\tregards, tom lane\n\nPS: I've got these changes coded up, and am about to start testing.\n",
"msg_date": "Tue, 07 Jul 1998 16:56:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Which signal to use for CANCEL from postmaster to\n\tbackend?"
},
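As background to the signal choice, a generic sketch of one common way a cancel signal gets wired up: the handler only sets a flag, and the main loop tests it at safe points. The names are illustrative and this is not the actual postgres.c mechanism, just the textbook pattern.

#include <stdio.h>
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t cancel_pending = 0;

static void
cancel_handler(int sig)
{
    cancel_pending = 1;         /* do the real work outside the handler */
}

int
main(void)
{
    int         step;

    signal(SIGINT, cancel_handler);

    kill(getpid(), SIGINT);     /* simulate a cancel request arriving */

    /* stand-in for the query execution loop, checking at safe points */
    for (step = 0; step < 1000; step++)
    {
        if (cancel_pending)
        {
            cancel_pending = 0;
            printf("query canceled at step %d\n", step);
            break;
        }
    }
    return 0;
}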
{
"msg_contents": "> I've been thinking about it a little more, and on the whole I like using\n> SIGINT the best. Here's my reasoning:\n> \n> 1. SIGINT is not normally generated for non-interactive processes,\n> so for the ordinary case of a backend running under the postmaster\n> it wouldn't be generated by accident. (As you point out, the default\n> signal from kill(1) is SIGTERM not SIGINT.)\n> \n> 2. The only case where it *would* be easy to direct SIGINT to a backend\n> is in the case of a backend hand-invoked from the command line. In this\n> case I think that having control-C do a Cancel rather than kill the\n> process is a fine idea --- it is exactly parallel to the behavior that\n> psql now offers. People will probably get used enough to psql's\n> behavior that having an interactive backend work differently would be\n> a bad idea.\n> \n> 3. SIGQUIT normally generates a coredump, and is one of the few\n> non-error signals that do so. If we use it for the cancel signal we\n> will eliminate the easiest, most standard way of externally forcing a\n> backend to coredump. That seems like a loss of a useful debug tool.\n> Also, in all Unix applications that I'm familiar with, SIGQUIT is a\n> \"more severe\" signal than SIGINT --- one expects that an app may trap\n> SIGINT and return keyboard control, but SIGQUIT is supposed to kill it.\n> Inverting the severity of SIGINT and SIGQUIT for a pgsql backend doesn't\n> sound like a good plan.\n> \n> \n> So I think we should leave SIGTERM and SIGQUIT alone, and redefine\n> SIGINT to perform a cancel.\n\nAll very good points. SIGINT is good.\n\n> \n> \n> BTW, I realized that SIGHUP is not really free, it's what is used to\n> return control from elog(ERROR) to the main loop. The code probably\n> should refer to it as SIGHUP not signal \"1\".\n\nPatch applied.\n\n> \n> \t\t\tregards, tom lane\n> \n> PS: I've got these changes coded up, and am about to start testing.\n> \n\nCool.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 7 Jul 1998 18:01:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Which signal to use for CANCEL from postmaster to\n\tbackend?"
},
{
"msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> >> All three of SIGINT, SIGHUP, SIGTERM currently do the same thing in a\n> >> backend, so it looks like our best choice is to redefine one of those\n> >> as the cancel request signal. Any preference?\n> \n> > I like SIGQUIT. Looks to be unused. SIGINT is used too much from the\n> > command line, and SIGTERM used too much from scripts (the default kill\n> > arg.)\n> \n> I've been thinking about it a little more, and on the whole I like using\n> SIGINT the best. Here's my reasoning:\n> \n> 1. SIGINT is not normally generated for non-interactive processes,\n> so for the ordinary case of a backend running under the postmaster\n> it wouldn't be generated by accident. (As you point out, the default\n> signal from kill(1) is SIGTERM not SIGINT.)\n> \n> 2. The only case where it *would* be easy to direct SIGINT to a backend\n> is in the case of a backend hand-invoked from the command line. In this\n> case I think that having control-C do a Cancel rather than kill the\n> process is a fine idea --- it is exactly parallel to the behavior that\n> psql now offers. People will probably get used enough to psql's\n> behavior that having an interactive backend work differently would be\n> a bad idea.\n> \n> 3. SIGQUIT normally generates a coredump, and is one of the few\n> non-error signals that do so. If we use it for the cancel signal we\n> will eliminate the easiest, most standard way of externally forcing a\n> backend to coredump. That seems like a loss of a useful debug tool.\n> Also, in all Unix applications that I'm familiar with, SIGQUIT is a\n> \"more severe\" signal than SIGINT --- one expects that an app may trap\n> SIGINT and return keyboard control, but SIGQUIT is supposed to kill it.\n> Inverting the severity of SIGINT and SIGQUIT for a pgsql backend doesn't\n> sound like a good plan.\n> \n> \n> So I think we should leave SIGTERM and SIGQUIT alone, and redefine\n> SIGINT to perform a cancel.\n> \n> \n> BTW, I realized that SIGHUP is not really free, it's what is used to\n> return control from elog(ERROR) to the main loop. The code probably\n> should refer to it as SIGHUP not signal \"1\".\n> \n> \t\t\tregards, tom lane\n> \n> PS: I've got these changes coded up, and am about to start testing.\n\nThese are the signals I'm using in my own current version of pgsql.\n\nThe main changes to the old implementation are SIGQUIT instead of\nSIGHUP to handle warns, SIGHUP to reread pg_options and redirection\nto all backends of SIGHUP, SIGINT, SIGTERM, SIGUSR1 and SIGUSR2.\nIn this way some of the signals sent to the postmaster can be sent\nautomatically to all the backends. To shut down postgres one needs\nonly to send a SIGTERM to postmaster and it will stop automatically\nall the backends. 
This new signal handling mechanism is also used\nto prevent SI cache table overflows: when a backend detects the SI\ntable full at 70% it simply sends a signal to the postmaster which\nwill wake up all idle backends and make them flush the cache.\n\nThe use of SIGINT for the new functionality seems ok to me.\nGiven your considerations I would also use SIGCHLD instead of SIGQUIT\nto handle warn from elog.\n\nsignal\t\tpostmaster\t\t\t\tbackend\n\nSIGHUP\t\tkill(*,sighup)\t\t\t\tread_pg_options\nSIGINT\t\tkill(*,sigint), die\t\t\tdie\nSIGCHLD\t\treaper\t\t\t\t\t-\nSIGTTIN\t\tignored\t\t\t\t\t-\nSIGTTOU\t\tignored\t\t\t\t\t-\nSIGQUIT\t\tdie\t\t\t\t\thandle_warn\nSIGTERM\t\tkill(*,sigterm), kill(*,9), die\t\tdie\nSIGCONT\t\tdumpstatus\t\t\t\t-\nSIGPIPE\t\tignored\t\t\t\t\tdie\nSIGFPE\t\t-\t\t\t\t\tFloatExceptionHandler\nSIGTSTP\t\t-\t\t\t\t\tignored (is_alive ?)\nSIGUSR1\t\tkill(*,sigusr1), die\t\t\tquickdie\nSIGUSR2\t\tkill(*,sigusr2)\t\t\t\tAsync_NotifyHandler\n\t\t\t\t\t\t\t(also SI buffer full)\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto e-mail: [email protected] |\n| Via Marconi, 141 phone: ++39-461-534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Thu, 9 Jul 1998 23:01:12 +0200 (MET DST)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Which signal to use for CANCEL from postmaster to\n\tbackend?"
}
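A toy illustration of the kill(*,sig) fan-out idea in the table above: a postmaster-like parent signalling all of its backend children, so that a single SIGTERM to the parent shuts everything down. The process bookkeeping here is deliberately minimal and purely illustrative.

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define NCHILDREN 2

static pid_t children[NCHILDREN];

/* Send the given signal to every live child, as in "kill(*,sigterm)". */
static void
signal_all_children(int sig)
{
    int         i;

    for (i = 0; i < NCHILDREN; i++)
        if (children[i] > 0)
            kill(children[i], sig);
}

int
main(void)
{
    int         i,
                status;

    for (i = 0; i < NCHILDREN; i++)
    {
        pid_t       pid = fork();

        if (pid == 0)
        {
            pause();            /* "backend": wait for the parent's signal */
            _exit(0);
        }
        children[i] = pid;
    }

    sleep(1);                   /* let the children reach pause() */
    signal_all_children(SIGTERM);

    for (i = 0; i < NCHILDREN; i++)
        if (children[i] > 0)
            waitpid(children[i], &status, 0);
    printf("all backends stopped\n");
    return 0;
}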
] |
[
{
"msg_contents": "\nHi Rasool...\n\n\tI'm forwarding this into the pgsql-hackers mailing list...I'm\nbound to be overlooking *someone*'s hard work, but right now, the only one\nthat I can think of who's done any work on concurrency issues (assuming\nI'm current in that this is the 'shared/multi-user' aspect we are\nreferring here) is Bruce Momjian...as for recovery, I don't think *anyone*\nhas dived into that area yet...\n\n\tAs for your co-operating with us in enhancing and extending...we\nlook very much forward to it. Keeping our academic ties, as much and as\nfar as possible, is in everyone's best interests, as it tends to provide a\nfountain of \"younger\" ideas that us old-timers tend to overlook or not be\naware of :)\n\n\nOn Tue, 7 Jul 1998, Rasool Jalili wrote:\n\n> Dear Marc,\n> \n> I am an Assistant Professor in the Department of Computer Engineering,\n> Sharif University of Technology, Tehran, Iran. As a new research field,\n> we intend to define some (currently one) Msc projects in concurrency\n> control or recovery of Postgresql. This will help us to share our\n> research ability here with you in enhancing/extending postgresql as an\n> academic shareware DBMS. Unfortunately, we have been unable to find\n> useful documentation saying in detail how the Postgresql transaction\n> management has been formed and which algorithms have been implemented. \n> I appreciate you if you let me know: \n\n> \t- which (and at which level) of research have been done on these\n> aspects of Postgresql? \n\n> \t- how do you think of our cooperating in enhancing/extending\n> Postgresql? \n\n> \t- how can we have more information to initiate such projects? \n\n> \t- is there any known database benchmarker tools on Linux for\n> evaluating our modifications? \n\n",
"msg_date": "Tue, 7 Jul 1998 08:58:36 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Need your comments/help"
},
{
"msg_contents": "\n\tHi.\n\nI wrote some time ago about a buffer leak appearing with PostgreSQL large\nobjects, calling for hints about where to look. As I did not get any\nanswer, I dived a little bit more in the code.\n\nThe problem is simple. For performance reasons (as far as I can tell), PG\nlarge objects keep the object internal scan index open as long as the\nobject is not closed. The problem is that this index (may) keep pinned\nbuffers.\nIn the CommitTransaction() function, these buffers are examined and\nreleased if necessary, with an error notice. For long objects this causes\na segmentation fault in postmaster (and this is present in current public\nrelease).\nWhenever all large objects operations are done inside a transaction (begin\n- open lo - ... - close lo - end), this problem does not appear\n(CommitTransaction() is only called on the END statement, when the index \nis closed). \n\nIn order to correct this, I see two solutions:\n - close the index after every operation on large objects\n - clean up large objects opened indexes in CommitTransaction()\nI prefer the second one, that offers speed-up inside transactions. But as\nI do not know all the evolutions in progress, I would like to know which \nshould be used in order to be coherent with the current works. \n\nYet another question, does someone work on large objects ? If not, I can\ncode this bug fix and submit a patch.\n\n\tThanks.\n\n---\nPascal ANDRE, graduated from Ecole Centrale Paris\[email protected]\n\"Use the source, Luke. Be one with the Code.\" -- Linus Torvalds\n\n",
"msg_date": "Thu, 16 Jul 1998 16:59:33 +0200 (MEST)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Large objects buffer leak "
}
] |
[
{
"msg_contents": "> Thomas, we now get:\n> select usesysid from pg_user union select null ;\n> ERROR: type id lookup of 0 failed\n> which not good either. Can you address this issue?\n\nI'm almost back on-line (I hope) after massive hacker activity took out\nthe alumni pop server at Caltech. From looking through the hackers\nmhonarc archive (hmm, don't much like that name \"hackers\" anymore; it's\ncost us a _lot_ at work and taken me out of Postgres for more than a\nmonth :( I see two issues for me to look at: the one above and the one\nfrom Brett regarding a core dump from a mal-formed query. Will look at\nboth.\n\nI have some additional patches in the parser area which continue the\ntype matching/coersion work from the last two months.\n\nI've got patches to put 64-bit integers into the backend; they should\nwork for Alphas and at least some gcc-compiled 32-bit machines, but\nwe'll need beta testers to help get the configuration for our other\nsupported platforms. Once we have that for v6.4, we can also use this\ntype internally to implement additional types like numeric() and\ndecimal().\n\nAll mail sent to me from June 12 to now has been lost (I lost access to\nthe pop server and procmail went away, and so my .forward file reference\nto procmail was broken and unfixable). It is not yet fixed, but should\nbe in a day or so. Talk to you then. In the meantime, I will be looking\nat the mhonarc archives to keep up (great feature scrappy!)...\n\n - Tom\n",
"msg_date": "Tue, 07 Jul 1998 15:15:39 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUGS] Small bug in union"
},
{
"msg_contents": "> > Thomas, we now get:\n> > select usesysid from pg_user union select null ;\n> > ERROR: type id lookup of 0 failed\n> > which not good either. Can you address this issue?\n` > \n> I'm almost back on-line (I hope) after massive hacker activity took out\n\nYea, that is on my list too. One of us will have it fixed for 6.4. The\nrest sounds good to me. I am not particularly happy with 'hackers' name\neither.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 7 Jul 1998 14:47:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Small bug in union"
},
{
"msg_contents": "\nAww. Hackers is a great name. Devel might be more appropriate, but\nhackers isn't so bad.\n\nOn Tue, 7 July 1998, at 14:47:08, Bruce Momjian wrote:\n\n> Yea, that is on my list too. One of us will have it fixed for 6.4. The\n> rest sounds good to me. I am not particularly happy with 'hackers' name\n> either.\n> \n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n> \n",
"msg_date": "Tue, 7 Jul 1998 15:27:43 -0700 (PDT)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] Small bug in union"
},
{
"msg_contents": "Brett McCormick <[email protected]> writes:\n> Aww. Hackers is a great name. Devel might be more appropriate, but\n> hackers isn't so bad.\n\nMy two cents: \"hackers\" != \"crackers\".\n\n\"Hacker\" is an ancient and honorable term for a dedicated programmer,\nand the PostgreSQL group ought to wear it proudly.\n\nThe sort of common thieves and vandals who attacked Caltech's system\ndon't deserve the name \"hacker\"; that crowd is trying to appropriate\na term they don't have the right to aspire to. Bad enough that these\nlow-lifes cause us everyday grief, but to steal the hacking community's\nself-label is an intolerable insult. Don't give in to it.\n\nBTW, if you somehow are not familiar with the history of the term\n\"hacker\", you might care to visit the Jargon File\n(try http://sagan.earthspace.net/jargon/), and/or Eric Raymond's\npage about hacker history and culture\n(http://tuxedo.org/~esr/faqs/index.html).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Jul 1998 19:04:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] Small bug in union "
},
{
"msg_contents": "> BTW, if you somehow are not familiar with the history of the term\n> \"hacker\", you might care to visit the Jargon File\n> (try http://sagan.earthspace.net/jargon/), and/or Eric Raymond's\n> page about hacker history and culture\n> (http://tuxedo.org/~esr/faqs/index.html).\n\nActually, for some reason we used to use the term (circa mid '70s) to\nrefer to an unaccomplished, undisciplined coder. Sort of like a bad\ngolfer is called a hacker (you know, hacking and slashing at the ball).\nI haven't found the usage in anything recent though, and the folks who\nanswered my mail at jargon hadn't heard of that usage either (boy, I\nmust have way too much time on my hands...). Still have trouble applying\nthe term to coders with skill :)\n\n - Tom\n",
"msg_date": "Wed, 08 Jul 1998 01:39:23 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [BUGS] Small bug in union"
}
] |
[
{
"msg_contents": ">> I have some embedded spaces in data that should be stripped out.\n>>\n>> In the course of trying to find a way to do this, I found out some stuff:\n>>\n>> #1a\n>> \\df would be a whole lot nicer if I could do:\n>>\n>> \\df <type> and get only functions that have the given return type, or\n>> \\df <string> it grepped all the lines for something matching string\n>>\n>\n>Try:\n>\n> echo \"\\df\" | psql | grep <type>\n>\n>or\n>\n> echo \"\\df\" | psql | grep <string>\n>\n>I realize you want it inside the psql program, but the above should get the\n>same results pretty quick...james\n\nThanks for that James... I've been trying to work out how to do it for ages!\n\nI have to agree with Richard though, it would be very nice to have it\nwithin psql. Furthermore, it would be great if such functionality\ncould be extended to all \\d? queries. In fact, and I'm sure there actually\nis a way to this if you're more knowledgeable than I, since most of these\n\\d? queries basically yield a 'SQL' table, wouldn't be nice to be able to\nperform SQL on the (e.g. select * from \\df where function = 'int2_text';\netc.)\n\nOr am I just getting carried away! 8)\n\nStuart\n\n\n+-------------------------+--------------------------------------+\n| Stuart Rison | Ludwig Institute for Cancer Research |\n+-------------------------+ University College London |\n| Tel. (0171) 878 4041 | 91 Riding House Street |\n| Fax. (0171) 878 4040 | London, W1P 8BT, UNITED KINGDOM. |\n+-------------------------+--------------------------------------+\n| stuart@NOJUNK_ludwig.ucl.ac.uk [Remove NOJUNK_ for it to work] |\n+----------------------------------------------------------------+\n\n\n",
"msg_date": "Tue, 7 Jul 1998 19:12:45 +0100",
"msg_from": "Stuart Rison <stuart@NOJUNK_ludwig.ucl.ac.uk>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] translate \"bug\"?"
},
{
"msg_contents": "\n> Thanks for that James... I've been trying to work out how to do it for ages!\n>\n> I have to agree with Richard though, it would be very nice to have it\n> within psql. Furthermore, it would be great if such functionality\n> could be extended to all \\d? queries. In fact, and I'm sure there actually\n> is a way to this if you're more knowledgeable than I, since most of these\n> \\d? queries basically yield a 'SQL' table, wouldn't be nice to be able to\n> perform SQL on the (e.g. select * from \\df where function = 'int2_text';\n> etc.)\n>\n> Or am I just getting carried away! 8)\n>\n\nYou know, I think there might be a way to do that, because I believe in some SQL\nDatabases, you can actually do queries against the catalogs; perhaps you can do\nthat with Postgress. I don't know for sure though...james\n\n",
"msg_date": "Tue, 7 Jul 1998 17:07:25 -0400",
"msg_from": "James Olin Oden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] translate \"bug\"?"
},
{
"msg_contents": "> >I realize you want it inside the psql program, but the above should get the\n> >same results pretty quick...james\n> \n> Thanks for that James... I've been trying to work out how to do it for ages!\n\nI will look into implementing \\df kjasdf, and do it as a regex!\n\n\n> \n> I have to agree with Richard though, it would be very nice to have it\n> within psql. Furthermore, it would be great if such functionality\n> could be extended to all \\d? queries. In fact, and I'm sure there actually\n> is a way to this if you're more knowledgeable than I, since most of these\n> \\d? queries basically yield a 'SQL' table, wouldn't be nice to be able to\n> perform SQL on the (e.g. select * from \\df where function = 'int2_text';\n> etc.)\n> \n> Or am I just getting carried away! 8)\n\nThat is a little strange.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 7 Jul 1998 17:48:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] translate \"bug\"?"
},
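For context on the "\df <regex>" idea above, here is a minimal sketch of the kind of pattern filtering being proposed, assuming POSIX regcomp()/regexec(); the function and variable names (show_matching_functions, names) are illustrative stand-ins, not the actual psql describe code:

#include <regex.h>
#include <stdio.h>

/*
 * Print only the function names that match a user-supplied pattern.
 * "names" stands in for whatever list psql builds from pg_proc; the
 * real \df code would filter its query results the same way.
 */
static void
show_matching_functions(const char **names, int nnames, const char *pattern)
{
    regex_t re;
    int     i;

    /* case-insensitive extended regex, as a user would expect from \df <regex> */
    if (regcomp(&re, pattern, REG_EXTENDED | REG_ICASE | REG_NOSUB) != 0)
    {
        fprintf(stderr, "bad pattern: %s\n", pattern);
        return;
    }

    for (i = 0; i < nnames; i++)
        if (regexec(&re, names[i], 0, NULL, 0) == 0)
            printf("%s\n", names[i]);

    regfree(&re);
}

With one helper like this, \dt <regex> and \do <regex> fall out of the same code, which is essentially what gets agreed to further down the thread.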
{
"msg_contents": "Yeah, I've rethough about it and it does seem a bit out-there ;) ... I'm\nsure there was one time where I encountered a situation where you would\nwant to use the \\d? commands in an SQL statement environment but I can't\neven begin to remember... so maybe it wasn't that important afterall and\nregex'ing will do just fine.\n\nI do feel though that the \\df <regex> functionality would be useful in\nother catalog queries (thank james, I didn't know that what they were\ncalled) e.g. \\do <regex> or \\dt <regex> (which would get you all tables\ncontaining <regex> e.g. titles, tileauthors, titleditors etc, as opposed to\n\\d titles which would get you just table titles).\n\nor am I being dim again? [that dreaded feeling of posting something really\nstupid...]\n\nCheers,\n\nS.\n\n>> >I realize you want it inside the psql program, but the above should get the\n>> >same results pretty quick...james\n>>\n>> Thanks for that James... I've been trying to work out how to do it for ages!\n>\n>I will look into implementing \\df kjasdf, and do it as a regex!\n\n\n>\n>>\n>> I have to agree with Richard though, it would be very nice to have it\n>> within psql. Furthermore, it would be great if such functionality\n>> could be extended to all \\d? queries. In fact, and I'm sure there actually\n>> is a way to this if you're more knowledgeable than I, since most of these\n>> \\d? queries basically yield a 'SQL' table, wouldn't be nice to be able to\n>> perform SQL on the (e.g. select * from \\df where function = 'int2_text';\n>> etc.)\n>>\n>> Or am I just getting carried away! 8)\n>\n>That is a little strange.\n>\n\n\n\n\n+-------------------------+--------------------------------------+\n| Stuart Rison | Ludwig Institute for Cancer Research |\n+-------------------------+ University College London |\n| Tel. (0171) 878 4041 | 91 Riding House Street |\n| Fax. (0171) 878 4040 | London, W1P 8BT, UNITED KINGDOM. |\n+-------------------------+--------------------------------------+\n| stuart@NOJUNK_ludwig.ucl.ac.uk [Remove NOJUNK_ for it to work] |\n+----------------------------------------------------------------+\n\n\n",
"msg_date": "Wed, 8 Jul 1998 12:36:56 +0100",
"msg_from": "Stuart Rison <stuart@NOJUNK_ludwig.ucl.ac.uk>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] translate \"bug\"?"
},
{
"msg_contents": "At 14:36 +0300 on 8/7/98, Stuart Rison wrote:\n\n\n> Yeah, I've rethough about it and it does seem a bit out-there ;) ... I'm\n> sure there was one time where I encountered a situation where you would\n> want to use the \\d? commands in an SQL statement environment but I can't\n> even begin to remember... so maybe it wasn't that important afterall and\n> regex'ing will do just fine.\n>\n> I do feel though that the \\df <regex> functionality would be useful in\n> other catalog queries (thank james, I didn't know that what they were\n> called) e.g. \\do <regex> or \\dt <regex> (which would get you all tables\n> containing <regex> e.g. titles, tileauthors, titleditors etc, as opposed to\n> \\d titles which would get you just table titles).\n\nIn my (humble?) opinion, instead of having this as a specialized command in\npsql, there should rather have been a read-only view in the system which\nwould make it possible to look at only part of the information.\n\nThis would make the information readily available from other interfaces\nbeside psql, and will not force the interface developers to rely on \"inside\ninformation\" telling them which tables to query and what the internal oids\nare to look for. Thus, if internal things like that are changed, the view\nwould stay alive and easily accessible.\n\nSo, instead of \\d we would have something like SELECT * from SYSRELS_VIEW.\nAnd instead of \\dt, SELECT * from SYSRELS_VIEW WHERE relation_type =\n'table' (which would imply that the view will have this in ascii, where the\ninternal table has it in some coding). Instead of \\d my_table, we'll have\nSELECT * from SYSCOLS_VIEW WHERE relation = 'my_table';\n\nI think that's more or less how it's done in Oracle and the like, isn't it?\nAnd it relies only on something we all know - SQL queries.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n",
"msg_date": "Wed, 8 Jul 1998 15:27:15 +0300",
"msg_from": "Herouth Maoz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] translate \"bug\"?"
},
{
"msg_contents": "> Yeah, I've rethough about it and it does seem a bit out-there ;) ... I'm\n> sure there was one time where I encountered a situation where you would\n> want to use the \\d? commands in an SQL statement environment but I can't\n> even begin to remember... so maybe it wasn't that important afterall and\n> regex'ing will do just fine.\n> \n> I do feel though that the \\df <regex> functionality would be useful in\n> other catalog queries (thank james, I didn't know that what they were\n> called) e.g. \\do <regex> or \\dt <regex> (which would get you all tables\n> containing <regex> e.g. titles, tileauthors, titleditors etc, as opposed to\n> \\d titles which would get you just table titles).\n> \n> or am I being dim again? [that dreaded feeling of posting something really\n> stupid...]\n\nI think these are good ideas. I will implement them for 6.4. Glad you\nare using the new features.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 9 Jul 1998 00:11:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] translate \"bug\"?"
}
] |
[
{
"msg_contents": "Well, I've got this new code running, and it works. Sort of.\nThe postmaster and backend seem to be fine ... but psql has a tendency\nto coredump right after sending a cancel request.\n\nAfter digging into it, I realized that the problem is that psql.c is\ncoded to invoke PQrequestCancel() directly from its SIGINT signal\nhandler. That was cool when the only thing PQrequestCancel() did\nwas to invoke send().\n\nBut now, PQrequestCancel requires allocating memory, opening a new\nconnection, sending some data, closing the connection, and freeing\nmemory.\n\nOn my machine, the C library is not reentrant, and if you try to do\nthis sort of stuff from a signal handler that has interrupted a call\nto malloc() or printf() or some such, you can expect to crash.\n\nI can see several alternatives, none very attractive:\n\n1. Try to code the new PQrequestCancel so that it doesn't invoke\nany likely-non-reentrant part of the C library. Difficult at best,\nmaybe impossible (is gethostbyname reentrant? I doubt it if malloc\nisn't).\n\n2. Live with PQrequestCancel not being reentrant: code apps using it\nto invoke it from main line not a signal handler. The trouble is that\nthis makes it *substantially* harder to use. In psql.c, for example,\nwe could no longer use plain PQexec; we'd have to write some kind of\nloop around the more primitive libpq functions, so that control would\nblock out in psql.c while waiting for a backend response, and not down\nin the guts of libpq.\n\n3. Keep a connection to the postmaster open at all times so that\nPQrequestCancel only needs to do a send() and not any of the hard\nstuff. This is not good because it risks overflowing the number of\nopen files the postmaster process can have at one time. It also means\nestablishing two IPC connections not one during backend startup, which\nis clearly a performance loss.\n\n4. Stick with OOB-based cancels and live with the portability\nlimitations thereof.\n\nI will work on #1 but I am not very hopeful of success. Has anyone\ngot a better idea?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Jul 1998 18:46:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trouble in paradise: cancel via postmaster ain't so cool"
},
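To make alternative #1 above concrete: the crash comes from doing malloc()/gethostbyname()/connect() work inside a SIGINT handler, so the idea is to precompute everything at connect time and leave only async-signal-safe kernel calls in the handler. A minimal sketch under that assumption; the struct, field names, and the bytes written are illustrative, not the actual libpq code or wire format:

#include <signal.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Filled in once, at connect time, while it is still safe to call
 * gethostbyname() and friends.  Nothing here needs malloc() later. */
struct cancel_info
{
    struct sockaddr_in addr;    /* postmaster address, already resolved */
    int     be_pid;             /* backend pid saved from the startup handshake */
    int     be_key;             /* cancel key saved from the startup handshake */
};

static struct cancel_info cancel;   /* global so the handler can reach it */

/* SIGINT handler: socket/connect/write/close are plain system calls,
 * so this stays safe even if it interrupts malloc() or printf(). */
static void
handle_sigint(int signo)
{
    int sock;

    (void) signo;
    sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0)
        return;
    if (connect(sock, (struct sockaddr *) &cancel.addr, sizeof(cancel.addr)) == 0)
    {
        /* The real bytes are whatever the fe/be protocol defines for a
         * cancel request; the point is they were prepared ahead of time. */
        (void) write(sock, &cancel.be_pid, sizeof(cancel.be_pid));
        (void) write(sock, &cancel.be_key, sizeof(cancel.be_key));
    }
    close(sock);
}

The catch, as the message above notes, is keeping every non-reentrant library routine out of the handler (even address resolution has to happen earlier), which is exactly why alternatives 2 through 4 are on the table.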
{
"msg_contents": "At 3:46 PM -0700 7/7/98, Tom Lane wrote:\n>I can see several alternatives, none very attractive:\n>\n>1. Try to code the new PQrequestCancel so that it doesn't invoke\n>any likely-non-reentrant part of the C library. Difficult at best,\n>maybe impossible (is gethostbyname reentrant? I doubt it if malloc\n>isn't).\n...\n\n>I will work on #1 but I am not very hopeful of success. Has anyone\n>got a better idea?\n\nIdea A: precompute everything you need to do a cancel as part of sending\nthe request in the first place so #1 above takes minimum effort (i.e. no\nmalloc(), no gethostbyname(), no nothing).\n\nIdea B: spawn (vfork()/exec()) a cancel process so all the funny stuff\nhappens in a different address space.\n\nIdea C: look at what some standard network clients do to handle similar\nproblems. What does ftp do for example? It also seems like some network\nprogramming textbooks, like Stevens, should discuss this problem.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n\n\n",
"msg_date": "Tue, 7 Jul 1998 17:04:03 -0700",
"msg_from": "\"Henry B. Hotz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Trouble in paradise: cancel via postmaster ain't so\n cool"
},
{
"msg_contents": "\"Henry B. Hotz\" <[email protected]> writes:\n> Idea A: precompute everything you need to do a cancel as part of sending\n> the request in the first place so #1 above takes minimum effort (i.e. no\n> malloc(), no gethostbyname(), no nothing).\n\nYeah, that's what I planned to try. It'll mean making the PGconn\nstructure a little bigger to hold the info, but that seems OK.\n\n> Idea B: spawn (vfork()/exec()) a cancel process so all the funny stuff\n> happens in a different address space.\n\nHadn't thought of that one... it'd have to be a real fork not a vfork,\nconsequently could be pretty expensive for a large application.\nStill it might be a better answer than living with a nonreentrant\nPQrequestCancel().\n\nAfter sleeping on it, I feel like it should be possible to solve the\nproblem along the lines Henry mentions: have connectDB save everything\nthat it needs to get from C library routines, so that only kernel calls\nare needed to open a new postmaster connection in PQrequestCancel.\nWill work on that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Jul 1998 09:56:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Trouble in paradise: cancel via postmaster ain't so\n\tcool"
}
] |
[
{
"msg_contents": "Hello\n\nSorry for asking the hackers directly, but I think my question will be very implementation\n\nspecific so I think only one of yor can give me the answer or thow me out the door.\n\nFirst some introductionary explanations:\n\nWe are developing a SCADA system for process automation since the last 6 years. In 1991\n\nwe changed the platform from a commercial Unix to Linux and are very happy with the choice.\n\nWithin the SCADA system we have a programming environment (called STX, similar to the IEC1131 Structured Text\n\nlanguage) for doing sophisticated controlling or analysis needed to run the process.\n\n>From within this programming environment we also have the need to access a relational database to store\n\nany data from the SCADE system into. This to give the end user also the possibilty to access this data\n\nfor offline analysis purpose e.g from a MS/Windows application.\n\nWe have implemented the PostgreSQL database during the last weeks, by using the libpq. This works fine.\n\nNearly all needed functionality and tools are available for it (pgaccess, Web access, ODBC, ...).\n\nThe problem is now as following:\n\n We have some customers, who want to have the database interface within the SCADA system (same API),\n\nbut want to have the data itself stored in a Oracle database (one customer on Digital Unix, the other\n\non Win/NT).\n\nCause there is no Oracle client SW available under Linux and I don't want to change the API in our\n\nprogramming environment I would like to have a modified PostgreSQL Backend that runs under the above\n\nmentioned platforms and simply acts as a SW gateway to the Oracle Sever (using e.g the Oracle OCI interaface\n\nor the PRO/C interafce). Such a solution would preserve my interface completely (still using the libpg).\n\nNow after this lengthy introduction my questions:\n\n1 Could you give me more information on how and where to hook into the backend code to implement this\n\n stuff.\n\n2 Is there anyone outside that will be interested also in this approach.\n\n3 Are you interested in getting back the new stuff\n\nThanks a lot for your patience!!\n\nGreetings,\n\nDr. Armin. Schloesser\n\n==============================================================================\nPhilips Automation Projects Phone: +49 561 501 1395\nMiramstr. 87 Fax: +49 561 501 1688\n34123 Kassel Email: [email protected]\nGermany\n==============================================================================\n\n\n\n",
"msg_date": "Wed, 08 Jul 1998 09:37:41 +0200",
"msg_from": "\"Dr. Armin Schloesser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL Backend as SW Gateway to Oracle"
},
{
"msg_contents": "On Wed, Jul 08, 1998 at 09:37:41AM +0200, Dr. Armin Schloesser wrote:\n> Now after this lengthy introduction my questions:\n\nJust to be sure. You�d like to have a connected table as in M$ Access?\n\n> 2 Is there anyone outside that will be interested also in this approach.\n\nYes, me.\n\n> 3 Are you interested in getting back the new stuff\n\nOf course.\n\nGru� nach Kassel.\n\nMichael\n\n-- \nDr. Michael Meskes\t\[email protected], [email protected]\nGo SF49ers! Go Rhein Fire!\tUse Debian GNU/Linux! \n",
"msg_date": "Wed, 8 Jul 1998 10:36:12 +0200",
"msg_from": "\"Dr. Michael Meskes\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL Backend as SW Gateway to Oracle"
},
{
"msg_contents": "(still using the libpg). > > Now after this lengthy introduction my\nquestions: > > 1 Could you give me more information on how and where to\nhook into the backend code to implement this > > stuff. > > 2 Is\nthere anyone outside that will be interested also in this approach. > >\n3 Are you interested in getting back the new stuff > > Thanks a lot for\nyour patience!! > > Greetings, > > Dr. Armin. Schloesser > >\n\n[See what happens when you send lines >80.]\n\nYou want to use libpq to send that through the backend, and then pass it\nto Oracle. Some people have asked for this, but I know of know way to\naccomplish this. Your best be would be to create a fake libpq, that has\nthe same function names/behavior, but calls native Oracle C functions.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 8 Jul 1998 23:32:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL Backend as SW Gateway to Oracle"
},
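To make the "fake libpq" idea concrete: such a shim would export the libpq entry points the application already calls and translate them onto some other client API. A heavily simplified sketch; other_db_connect()/other_db_exec() are placeholder stubs for whatever the target database's client library (OCI, for example) actually provides, and the PGconn/PGresult handling here is nothing like the real libpq internals:

#include <stdlib.h>

/* Opaque handles carrying the same names the application expects from libpq. */
typedef struct pg_conn   { void *other_handle; } PGconn;
typedef struct pg_result { int status; /* rows, fields, ... */ } PGresult;

/* Stubs standing in for the target database's real client library; this is
 * the part that has to be written once per foreign backend. */
static void *other_db_connect(const char *conninfo) { (void) conninfo; return (void *) 1; }
static int   other_db_exec(void *handle, const char *query) { (void) handle; (void) query; return 0; }

PGconn *
PQconnectdb(const char *conninfo)
{
    PGconn *conn = calloc(1, sizeof(PGconn));

    if (conn)
        conn->other_handle = other_db_connect(conninfo);
    return conn;
}

PGresult *
PQexec(PGconn *conn, const char *query)
{
    PGresult *res = calloc(1, sizeof(PGresult));

    if (res)
        res->status = other_db_exec(conn->other_handle, query);
    return res;
}

The drawback Armin raises in the next message still applies: the application must be linked against this shim instead of the real libpq, and all translation happens in-process rather than behind a network boundary.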
{
"msg_contents": "Hello Bruce,\n\nthanks for your reply.\n\nI already have anticipated that there is no off the shelf solution for my\nproblem.\n\nA wrapper linpq I dont't want to use, cause then I would have to link\nagainst different stuff coneccting to PostgreSQL and Oracle. I want to\nhave this switching done not in my application, but in a separate process.\n\nAfter all the Oracle client SW is not available under Linux.\n\nNevertheless I would like to implement this Oracle Gateway using the\nPostgeSQL stuff.\n\nNow the problem for me is how to get used to the backend architecture as\nfast as possible. That means is there any more deeper technical doku\navailable, describing the backend control flow and architekture.\nEspecially the layout of list and nodes structure of the parser.\n\nCause using the Oracle OCI there has to be some preprocessing done to\nparse the libpq SQL strings for binding input and output variables to. So\na simple SELECT call is not mappable to a single OCI call.\n\nIf there are also other guys outside interested in this approach, it would\nbe perhaps worth to discuss also their requirements to get a proper\nfunctional spec to implement the SW gateway. \n\nSorry for the long lines. I used the netscape to generate the mail and he\ndoen't complain about long lines. I will use the good old pine in the\nfuture.\n\nGreetings,\n\nDr. Armin. Schloesse\n\n==============================================================================\nPhilips Automation Projects Phone: +49 561 501 1395\nMiramstr. 87 Fax: +49 561 501 1688\n34123 Kassel Email: [email protected]\nGermany \n==============================================================================\n\nOn Wed, 8 Jul 1998, Bruce Momjian wrote:\n\n> (still using the libpg). > > Now after this lengthy introduction my\n> questions: > > 1 Could you give me more information on how and where to\n> hook into the backend code to implement this > > stuff. > > 2 Is\n> there anyone outside that will be interested also in this approach. > >\n> 3 Are you interested in getting back the new stuff > > Thanks a lot for\n> your patience!! > > Greetings, > > Dr. Armin. Schloesser > >\n> \n> [See what happens when you send lines >80.]\n> \n> You want to use libpq to send that through the backend, and then pass it\n> to Oracle. Some people have asked for this, but I know of know way to\n> accomplish this. Your best be would be to create a fake libpq, that has\n> the same function names/behavior, but calls native Oracle C functions.\n> \n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n> \n\n",
"msg_date": "Thu, 9 Jul 1998 08:13:48 +0200 (MEST)",
"msg_from": "Armin Schloesser <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL Backend as SW Gateway to Oracle"
},
{
"msg_contents": "On Thu, Jul 09, 1998 at 08:13:48AM +0200, Armin Schloesser wrote:\n> Now the problem for me is how to get used to the backend architecture as\n> fast as possible. That means is there any more deeper technical doku\n> available, describing the backend control flow and architekture.\n> Especially the layout of list and nodes structure of the parser.\n\nDo you want to implement a special solution for your problem, or add a table\ntype that is a link to a different table. With ODBC you can do this in M$\nAccess for instance.\n\nIMO a link to a different databse would be a very nice feature to have. We\ncould add the code to link an external postgres database as well.\n\n> Cause using the Oracle OCI there has to be some preprocessing done to\n> parse the libpq SQL strings for binding input and output variables to. So\n> a simple SELECT call is not mappable to a single OCI call.\n\nYes, there should a an adapter for each different DB system.\n\n> If there are also other guys outside interested in this approach, it would\n> be perhaps worth to discuss also their requirements to get a proper\n> functional spec to implement the SW gateway. \n\nInterested yes. But I'm afraid I have neither time to work on it, nor enough\ninside knowledge. But then I'm working to get more knowledge anyway.\n\nMichael \n\n-- \nDr. Michael Meskes\t\[email protected], [email protected]\nGo SF49ers! Go Rhein Fire!\tUse Debian GNU/Linux! \n",
"msg_date": "Thu, 9 Jul 1998 11:42:47 +0200",
"msg_from": "\"Dr. Michael Meskes\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL Backend as SW Gateway to Oracle"
},
{
"msg_contents": "> Nevertheless I would like to implement this Oracle Gateway using the\n> PostgeSQL stuff.\n> Cause using the Oracle OCI there has to be some preprocessing done to\n> parse the libpq SQL strings for binding input and output variables to. \n> So a simple SELECT call is not mappable to a single OCI call.\n> If there are also other guys outside interested in this approach, it \n> would be perhaps worth to discuss also their requirements to get a \n> proper functional spec to implement the SW gateway.\n\nI don't have a particular interest in the Oracle gw, but am interested\nin getting simultaneous multiple db access within Postgres. I had been\nthinking of trying to implement this as a Postgres \"master database\"\nwith hooks deeper in the backend to call out to a remote database as a\nseparate session. Sort of like Ingres implemented their distributed\ndatabases. Haven't done anything with it though...\n\n - Tom\n",
"msg_date": "Thu, 09 Jul 1998 13:21:27 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL Backend as SW Gateway to Oracle"
},
{
"msg_contents": "Hello Tom,\n\nexactly what I think should be done first.\n\nBuild a clear interface between the parsing and I think the execution step\nof the postgres backend to hook in own stuff to interface to an\narbritrary other database or even other implemention of the real\nhard word of storing and getting the data. \n\nGreetings,\n\narmin\n\n\n==============================================================================\nPhilips Automation Projects Phone: +49 561 501 1395\nMiramstr. 87 Fax: +49 561 501 1688\n34123 Kassel Email: [email protected]\nGermany \n==============================================================================\n\nOn Thu, 9 Jul 1998, Thomas G. Lockhart wrote:\n\n> \n> I don't have a particular interest in the Oracle gw, but am interested\n> in getting simultaneous multiple db access within Postgres. I had been\n> thinking of trying to implement this as a Postgres \"master database\"\n> with hooks deeper in the backend to call out to a remote database as a\n> separate session. Sort of like Ingres implemented their distributed\n> databases. Haven't done anything with it though...\n> \n> - Tom\n> \n\n",
"msg_date": "Thu, 9 Jul 1998 15:56:52 +0200 (MEST)",
"msg_from": "Armin Schloesser <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL Backend as SW Gateway to Oracle"
},
{
"msg_contents": "On Thu, 9 Jul 1998, Thomas G. Lockhart wrote:\n\n> > Nevertheless I would like to implement this Oracle Gateway using the\n> > PostgeSQL stuff.\n> > Cause using the Oracle OCI there has to be some preprocessing done to\n> > parse the libpq SQL strings for binding input and output variables to. \n> > So a simple SELECT call is not mappable to a single OCI call.\n> > If there are also other guys outside interested in this approach, it \n> > would be perhaps worth to discuss also their requirements to get a \n> > proper functional spec to implement the SW gateway.\n> \n> I don't have a particular interest in the Oracle gw, but am interested\n> in getting simultaneous multiple db access within Postgres. I had been\n> thinking of trying to implement this as a Postgres \"master database\"\n> with hooks deeper in the backend to call out to a remote database as a\n> separate session. Sort of like Ingres implemented their distributed\n> databases. Haven't done anything with it though...\n\nCewl... wouldn't this enable us to run PostgreSQL on beowolf-like \nclusters? :) (for those of you unknown to them, see \nhttp://cesdis.gsfc.nasa.gov/beowulf/consortium/consortium.html, this one \nis also very nice :) http://cnls.lanl.gov/avalon/ ).\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Thu, 9 Jul 1998 16:31:24 +0200 (MET DST)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL Backend as SW Gateway to Oracle"
},
{
"msg_contents": "> Hello Bruce,\n> \n> thanks for your reply.\n> \n> I already have anticipated that there is no off the shelf solution for my\n> problem.\n> \n> A wrapper linpq I dont't want to use, cause then I would have to link\n> against different stuff coneccting to PostgreSQL and Oracle. I want to\n> have this switching done not in my application, but in a separate process.\n> \n> After all the Oracle client SW is not available under Linux.\n> \n> Nevertheless I would like to implement this Oracle Gateway using the\n> PostgeSQL stuff.\n> \n> Now the problem for me is how to get used to the backend architecture as\n> fast as possible. That means is there any more deeper technical doku\n> available, describing the backend control flow and architekture.\n> Especially the layout of list and nodes structure of the parser.\n\nCheck the web site documentation. Under developers, there is all the\nstuff you should need. Description/flowchart, and developers FAQ.\n\n> \n> Cause using the Oracle OCI there has to be some preprocessing done to\n> parse the libpq SQL strings for binding input and output variables to. So\n> a simple SELECT call is not mappable to a single OCI call.\n\nYou would have to put code into the server to call pass the query and\nreturned data to oracle. Not easy.\n> Sorry for the long lines. I used the netscape to generate the mail and he\n> doen't complain about long lines. I will use the good old pine in the\n> future.\n\nYou can set your netscape window size in your .Xdefaults file.\n\n\tNetscape.Composition.geometry: =750x650\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 9 Jul 1998 11:24:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL Backend as SW Gateway to Oracle"
},
{
"msg_contents": "\nOn Wed, 8 Jul 1998, Dr. Armin Schloesser wrote:\n\n> Cause there is no Oracle client SW available under Linux and I don't want to change the API in our\n> \n> programming environment I would like to have a modified PostgreSQL Backend that runs under the above\n\n You can use the Openlink ODBC broker (basically an ODBC proxy). You run\nit the broker on a platform that does have an ODBC driver available\n(the broker is available for _many_ platforms). Then you use an Openlink\nODBC driver to connect to the broker (there are Openlink ODBC drivers\navailable for _many_ platforms, including Linux). The broker just\nredirects the requests for you.\n\n See www.openlinksw.com Trial versions of this are available. Basically\nOpenlink offers almost complete \"access any database, anywhere\" solutions.\n\nTom\n\n",
"msg_date": "Thu, 9 Jul 1998 10:38:05 -0700 (PDT)",
"msg_from": "Tom <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL Backend as SW Gateway to Oracle"
},
{
"msg_contents": "Hi,\n\n>\n> Hello Bruce,\n>\n> thanks for your reply.\n>\n> I already have anticipated that there is no off the shelf solution for my\n> problem.\n>\n> A wrapper linpq I dont't want to use, cause then I would have to link\n> against different stuff coneccting to PostgreSQL and Oracle. I want to\n> have this switching done not in my application, but in a separate process.\n>\n> After all the Oracle client SW is not available under Linux.\n>\n> Nevertheless I would like to implement this Oracle Gateway using the\n> PostgeSQL stuff.\n>\n> Now the problem for me is how to get used to the backend architecture as\n> fast as possible. That means is there any more deeper technical doku\n> available, describing the backend control flow and architekture.\n> Especially the layout of list and nodes structure of the parser.\n>\n> Cause using the Oracle OCI there has to be some preprocessing done to\n> parse the libpq SQL strings for binding input and output variables to. So\n> a simple SELECT call is not mappable to a single OCI call.\n\n Hmmm - why? From looking at the Oratcl package from Tom\n Poindexter (Tcl extension to access Oracle DB) I know, that\n Oracles OCI interface accepts mainly the same SQL strings\n sent to a PostgreSQL backend. Column names and data types of\n the result can be fingered out some way using odescr().\n\n Using this information would make it possible, to build up\n the data structures sent from a PostgreSQL backend to the\n frontend.\n\n A little server, running on the system where Oracle resides,\n could behave like a PostgreSQL postmaster and backend.\n Accepting connections on PGPORT, receiving query strings and\n sending back results in the fe-be protocol. The client\n shouldn't matter that the DB server it connects to isn't a\n real PostgreSQL.\n\n For every SQL statement recieved, the pseudo Postmaster just\n calls Oracle using OCI and sends back the results in libpq\n format.\n\n Every client program that doesn't use PostgreSQL specific\n stuff rather than standard SQL queries should be able to\n access Oracle over libpq than.\n\n>\n> If there are also other guys outside interested in this approach, it would\n> be perhaps worth to discuss also their requirements to get a proper\n> functional spec to implement the SW gateway.\n>\n> Sorry for the long lines. I used the netscape to generate the mail and he\n> doen't complain about long lines. I will use the good old pine in the\n> future.\n>\n> Greetings,\n>\n> Dr. Armin. Schloesse\n>\n\n\nUntil later, Jan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 9 Jul 1998 20:10:06 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL Backend as SW Gateway to Oracle"
},
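A skeleton of the "little server" Jan describes: a process that listens on the PostgreSQL port, forks per connection, and would then speak the fe/be protocol to the client while forwarding statements to Oracle. Only the socket plumbing is sketched here; handle_client() is a stub marking where the startup-packet handling, query forwarding, and libpq-format result encoding would go, since that is the real work:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define PGPORT 5432

/* Stub: read the startup packet, then loop reading query strings, calling
 * the other database (e.g. through OCI), and writing result tuples back in
 * libpq's wire format.  All the gateway logic lives here. */
static void handle_client(int sock) { (void) sock; }

int
main(void)
{
    int                 lsock, csock;
    struct sockaddr_in  addr;

    signal(SIGCHLD, SIG_IGN);               /* let exited children be reaped */

    lsock = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PGPORT);

    if (lsock < 0 ||
        bind(lsock, (struct sockaddr *) &addr, sizeof(addr)) < 0 ||
        listen(lsock, 5) < 0)
    {
        perror("pseudo-postmaster");
        return 1;
    }

    for (;;)
    {
        csock = accept(lsock, NULL, NULL);
        if (csock < 0)
            continue;
        if (fork() == 0)                    /* one child per client, like the postmaster */
        {
            close(lsock);
            handle_client(csock);
            _exit(0);
        }
        close(csock);
    }
}

Any client that sticks to plain SQL over libpq would then talk to this process as if it were a real postmaster, which is the attraction of this approach over relinking every application against a replacement client library.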
{
"msg_contents": "> > in getting simultaneous multiple db access within Postgres...\n> > thinking of trying to implement this as a Postgres \"master database\"\n> > with hooks deeper in the backend to call out to a remote database as \n> > a separate session. Sort of like Ingres implemented their \n> > distributed databases. Haven't done anything with it though...\n> Cewl... wouldn't this enable us to run PostgreSQL on beowolf-like\n> clusters? :)\n\nWell, no. I haven't coded on a beowolf system (and my Linux Journal with\na writeup on it has gone wandering :( but a beowolf must be a MIMD\nsystem with (perhaps) a shared file system. So, we would need a\nmedium-grained or coarse-grained decomposition of the backend to\ndistribute a single session across a cluster.\n\nHowever, what I proposed would allow a single database, or parts of a\nsingle logical database, to reside on one host, with access from a\nclient hitting multiple hosts to find all the tables, so one could\ndistribute the load if several pieces or many databases were involved.\nDoesn't need to be a beowolf, just a networked set of servers.\n\n - Tom\n",
"msg_date": "Fri, 10 Jul 1998 05:37:32 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL Backend as SW Gateway to Oracle"
}
] |
[
{
"msg_contents": "Things I'd like to see get in there before 6.4:\n\n1. On HPUX, the new no-exec method of starting the backend means that\nall the backends claim to be \"postmaster\"; there's no way to tell 'em\napart via ps(1). There is some code in postgres.c that tries to update\nthe process title information by tweaking the original argv[] array, but\nthat just plain doesn't work under HPUX, nor under quite a few other\nUnix variants. I'm finding out that not being able to tell which\nprocess is which is a real pain in the neck; I don't think it will be\nacceptable for production use. I think we are going to have to bite the\nbullet and borrow the process-title-setting code from sendmail --- I\nknow it's ugly, but it *works* on many many Unixes.\n\n2. I'm starting to get annoyed by the inability to \"unlisten\" from\na particular relation. Is there a reason that there is not an UNLISTEN\ncommand? (Like maybe it's not in ANSI SQL?) Or is it just something\nthat never got to the top of the to-do queue? I see that the low-level\ncode for a backend to unlisten itself is in there, but there's no way\nfor the frontend to command it to happen.\n\nIf no one else feels like working on these, maybe I will. I could use\nsome pointers for #2 though ... what needs to be done to add a new SQL\nstatement?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Jul 1998 19:26:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Some items for the TODO list"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Things I'd like to see get in there before 6.4:\n> \n> 1. On HPUX, the new no-exec method of starting the backend means that\n> all the backends claim to be \"postmaster\"; there's no way to tell 'em\n> apart via ps(1). There is some code in postgres.c that tries to update\n> the process title information by tweaking the original argv[] array, but\n> that just plain doesn't work under HPUX, nor under quite a few other\n> Unix variants. I'm finding out that not being able to tell which\n> process is which is a real pain in the neck; I don't think it will be\n> acceptable for production use. I think we are going to have to bite the\n> bullet and borrow the process-title-setting code from sendmail --- I\n> know it's ugly, but it *works* on many many Unixes.\n \nOne way to tell them apart is to see which of them is the parent of\nthe rest. Not as nice as being able to set the title, but it might be\na serviceable workaround in some cases.\n\nOcie\n",
"msg_date": "Wed, 8 Jul 1998 18:04:49 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some items for the TODO list"
},
{
"msg_contents": "> Is there a reason that there is not an UNLISTEN\n> command? (Like maybe it's not in ANSI SQL?)\n> If no one else feels like working on these, maybe I will. I could use\n> some pointers for #2 though ... what needs to be done to add a new SQL\n> statement?\n\nI'll add the new statement if you can get the backend to do something\nwith it. Usually, there is a parse tree structure corresponding to the\ncommand, with any parameters included in it. In this case, the\n\"unlisten\" block should probably look like the \"listen\" block, since\nboth probably have similar arguments.\n\nLet me know if you have time to work on it...\n\n - Tom\n",
"msg_date": "Thu, 09 Jul 1998 04:41:21 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some items for the TODO list"
},
{
"msg_contents": "\"Thomas G. Lockhart\" <[email protected]> writes:\n>> Is there a reason that there is not an UNLISTEN\n>> command? (Like maybe it's not in ANSI SQL?)\n\n> I'll add the new statement if you can get the backend to do something\n> with it.\n\nDoing something with it is trivial: duplicate the LISTEN code and then\nchange the call to Async_Listen to Async_Unlisten. (Async_Unlisten\nalready exists in src/backend/commands/async.c, though for some reason\nit's not declared in src/include/commands/async.h.)\n\nI'd do it if I knew exactly what-all has to be copied and pasted to make\na new SQL statement.\n\nProbably the main question is whether the correct statement name is\n\"UNLISTEN\", or whether ANSI specifies some other spelling (\"STOP\nLISTEN\", maybe? SQL seems rather Cobol-ish in syntax choices, so I'd\nkind of expect a phrase rather than a made-up word).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Jul 1998 11:25:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some items for the TODO list "
},
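For the "what needs to be done" part: besides the grammar rule, the utility-statement dispatcher needs a case for the new node that calls the existing routine in async.c, mirroring the LISTEN handling. A rough, self-contained sketch; the node shape, the Async_Unlisten(relname, pid) signature, and the stub bodies are guesses at the pattern, not copies of the actual source:

#include <stdio.h>

/* Rough stand-in for the parse-tree node the grammar would build;
 * the real definition would live with the other statement nodes. */
typedef struct UnlistenStmt
{
    char   *relname;            /* relation name to stop listening on */
} UnlistenStmt;

static int MyProcPid = 12345;   /* the backend's pid, normally a global */

/* Stub for the routine already present in commands/async.c, which
 * deregisters this backend's listen entry for the relation. */
static void
Async_Unlisten(char *relname, int pid)
{
    printf("unlisten %s for pid %d\n", relname, pid);
}

/* The shape of the new case the utility processor would need. */
static void
ProcessUnlisten(UnlistenStmt *stmt)
{
    Async_Unlisten(stmt->relname, MyProcPid);
}

int
main(void)
{
    UnlistenStmt stmt = { "mytable" };

    ProcessUnlisten(&stmt);
    return 0;
}

Whether the keyword should literally be UNLISTEN, as the message above asks, is independent of this plumbing.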
{
"msg_contents": "\nOn Wed, 8 Jul 1998, Tom Lane wrote:\n\n> Things I'd like to see get in there before 6.4:\n> \n> 1. On HPUX, the new no-exec method of starting the backend means that\n> all the backends claim to be \"postmaster\"; there's no way to tell 'em\n> apart via ps(1). There is some code in postgres.c that tries to update\n> the process title information by tweaking the original argv[] array, but\n> that just plain doesn't work under HPUX, nor under quite a few other\n> Unix variants. I'm finding out that not being able to tell which\n> process is which is a real pain in the neck; I don't think it will be\n> acceptable for production use. I think we are going to have to bite the\n> bullet and borrow the process-title-setting code from sendmail --- I\n> know it's ugly, but it *works* on many many Unixes.\n\n Many UNIXes have a setproctitle() function, either in libc or libutil.\nI think a native function should be used if exists.\n\nTom\n\n",
"msg_date": "Thu, 9 Jul 1998 10:48:01 -0700 (PDT)",
"msg_from": "Tom <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some items for the TODO list"
},
{
"msg_contents": "> \n> On Wed, 8 Jul 1998, Tom Lane wrote:\n> \n> > Things I'd like to see get in there before 6.4:\n> > \n> > 1. On HPUX, the new no-exec method of starting the backend means that\n> > all the backends claim to be \"postmaster\"; there's no way to tell 'em\n> > apart via ps(1). There is some code in postgres.c that tries to update\n> > the process title information by tweaking the original argv[] array, but\n> > that just plain doesn't work under HPUX, nor under quite a few other\n> > Unix variants. I'm finding out that not being able to tell which\n> > process is which is a real pain in the neck; I don't think it will be\n> > acceptable for production use. I think we are going to have to bite the\n> > bullet and borrow the process-title-setting code from sendmail --- I\n> > know it's ugly, but it *works* on many many Unixes.\n> \n> Many UNIXes have a setproctitle() function, either in libc or libutil.\n> I think a native function should be used if exists.\n\nWhat is Linux doing with my changes. Do you see all the process names\nas postmaster? Do you see the query type displayed as part of the ps\noutput. We can use setproctitle/sendmail hack to change the process\nname from postmaster to postgres, but are these sufficiently quick to\nbe run for every query to display the query type, i.e. SELECT.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 9 Jul 1998 14:40:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some items for the TODO list"
},
{
"msg_contents": "Tom <[email protected]> writes:\n> Many UNIXes have a setproctitle() function, either in libc or libutil.\n> I think a native function should be used if exists.\n\nRight, that is one of the implementation strategies found in sendmail's\ncode. Basically, what sendmail has is an emulation routine that\nprovides the setproctitle() API on systems where there is no such libc\nfunction.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Jul 1998 15:16:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some items for the TODO list "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> We can use setproctitle/sendmail hack to change the process\n> name from postmaster to postgres, but are these sufficiently quick to\n> be run for every query to display the query type, i.e. SELECT.\n\nThat's something I'm a little worried about too. The sendmail code\noffers these implementation strategies for setproctitle:\n\n#define SPT_NONE\t0\t/* don't use it at all */\n#define SPT_REUSEARGV\t1\t/* cover argv with title information */\n#define SPT_BUILTIN\t2\t/* use libc builtin */\n#define SPT_PSTAT\t3\t/* use pstat(PSTAT_SETCMD, ...) */\n#define SPT_PSSTRINGS\t4\t/* use PS_STRINGS->... */\n#define SPT_SYSMIPS\t5\t/* use sysmips() supported by NEWS-OS 6 */\n#define SPT_SCO\t\t6\t/* write kernel u. area */\n#define SPT_CHANGEARGV\t7\t/* write our own strings into argv[] */\n\nIt looks like our existing code in postgres.c corresponds to the last\nof these (CHANGEARGV). REUSEARGV and PSSTRINGS are variants on this\nwith probably not much worse performance. PSTAT is a kernel call,\nwhile the SCO method involves lseek and write on /dev/kmem --- at least\ntwo kernel calls, and likely it doesn't even work without special privs.\nBUILTIN means use libc's setproctitle, and SYSMIPS calls some other libc\nsubroutine. The performance of those two methods is indeterminate, but\nprobably the library routines turn around and do one of these same\nsorts of things.\n\nI'm inclined to think that another kernel call per SQL statement is\nnot something to be overly worried about, but maybe you see it\ndifferently.\n\nHere's a thought: we could implement an additional function, say\n\"fast_setproctitle\", which is defined to do nothing unless the\nsetproctitle implementation strategy is one we know to be fast.\nThen we'd use the regular setproctitle to set up the initial\n\"postgres user database\" display, and call fast_setproctitle to\nmunge the display (or not) for each statement. This would have the\nadditional hack value that it would be easy for a particular\ninstallation to override our choice about whether updating the\nprocess title for every statement is worthwhile.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Jul 1998 15:46:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some items for the TODO list "
},
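A sketch of the wrapper being discussed: use the native setproctitle() where configure finds one, otherwise fall back to scribbling over the argv area the way postgres.c already does. HAVE_SETPROCTITLE and the save_ps_display_args()/set_ps_display() names are assumptions about how the configure glue might look, not existing code; setproctitle() itself is declared in <unistd.h> or <stdlib.h> depending on the platform:

#include <stdio.h>
#include <string.h>
#include <stdarg.h>

/* Remembered from main(), where the argv area and its extent are known. */
static char  *ps_buffer;        /* start of the original argv[0] */
static size_t ps_buffer_size;   /* contiguous space available for the title */

void
save_ps_display_args(int argc, char **argv)
{
    /* assume the argv strings are contiguous; the last byte of the last
     * argument (plus its terminator) ends the reusable area */
    ps_buffer = argv[0];
    ps_buffer_size = (argv[argc - 1] + strlen(argv[argc - 1]) + 1) - argv[0];
}

void
set_ps_display(const char *fmt, ...)
{
    char    buf[256];
    va_list args;

    va_start(args, fmt);
    vsnprintf(buf, sizeof(buf), fmt, args);
    va_end(args);

#ifdef HAVE_SETPROCTITLE
    setproctitle("%s", buf);            /* BSD-style library call */
#else
    /* overwrite the argv area in place; cheap enough to do per query */
    memset(ps_buffer, 0, ps_buffer_size);
    strncpy(ps_buffer, buf, ps_buffer_size - 1);
#endif
}

The fast_setproctitle idea above then reduces to compiling set_ps_display() down to the #else branch (or to nothing) on platforms where the library call is known or suspected to be expensive.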
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > We can use setproctitle/sendmail hack to change the process\n> > name from postmaster to postgres, but are these sufficiently quick to\n> > be run for every query to display the query type, i.e. SELECT.\n> \n> That's something I'm a little worried about too. The sendmail code\n> offers these implementation strategies for setproctitle:\n> \n> #define SPT_NONE\t0\t/* don't use it at all */\n> #define SPT_REUSEARGV\t1\t/* cover argv with title information */\n> #define SPT_BUILTIN\t2\t/* use libc builtin */\n> #define SPT_PSTAT\t3\t/* use pstat(PSTAT_SETCMD, ...) */\n> #define SPT_PSSTRINGS\t4\t/* use PS_STRINGS->... */\n> #define SPT_SYSMIPS\t5\t/* use sysmips() supported by NEWS-OS 6 */\n> #define SPT_SCO\t\t6\t/* write kernel u. area */\n> #define SPT_CHANGEARGV\t7\t/* write our own strings into argv[] */\n> \n> It looks like our existing code in postgres.c corresponds to the last\n> of these (CHANGEARGV). REUSEARGV and PSSTRINGS are variants on this\n> with probably not much worse performance. PSTAT is a kernel call,\n> while the SCO method involves lseek and write on /dev/kmem --- at least\n> two kernel calls, and likely it doesn't even work without special privs.\n> BUILTIN means use libc's setproctitle, and SYSMIPS calls some other libc\n> subroutine. The performance of those two methods is indeterminate, but\n> probably the library routines turn around and do one of these same\n> sorts of things.\n> \n> I'm inclined to think that another kernel call per SQL statement is\n> not something to be overly worried about, but maybe you see it\n> differently.\n> \n> Here's a thought: we could implement an additional function, say\n> \"fast_setproctitle\", which is defined to do nothing unless the\n> setproctitle implementation strategy is one we know to be fast.\n> Then we'd use the regular setproctitle to set up the initial\n> \"postgres user database\" display, and call fast_setproctitle to\n> munge the display (or not) for each statement. This would have the\n> additional hack value that it would be easy for a particular\n> installation to override our choice about whether updating the\n> process title for every statement is worthwhile.\n\nWhat I think we need to do is add a configure check for setproctitle(),\nand if we find it, we call it after we do the fork() to set our process\nname to postgres. We also change argv, and continue changing it as a\nstatus display.\n\nChanging the argv pointer is cheap, and I don't know how we are going to\nknow if setproctitle is cheap or not, but we need to make the call if it\nexists.\n\nI don't have the function on BSDI, so someone else is going to have to\nadd the code.\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 21 Aug 1998 23:19:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some items for the TODO list"
}
] |
[
{
"msg_contents": "(I'm back on-line, I think...)\nI committed several changes to the development source tree this morning\n(~15 hours ago). The changes affect the following areas:\n\n1) There is an 8-byte integer data type. It needs some testing and\npossibly configuration help on most of the supported platforms. Works on\ni686/Linux and should work on Alpha.\n\n2) pg_dump now surrounds table and column names with double-quotes, to\npreserve case and funny characters through a dump/reload operation. Hope\nthis is OK with you Bruce; let me know... btw, last time I tested this\ncode (two weeks ago?) it was still slightly off of a perfect dump/reload\nof the regression tests. The test I am doing is to dump the regression\ntest database, then reload that into a new database, then dump the new\ndatabase. The resulting pg_dump output should be the same as the dump of\nthe original database.\n\n3) some docs sources have been updated.\n\n4) some additional regression tests have been defined to cover the\nHAVING clause and the int8 data type.\n\n5) automatic data type conversion now happens in every place it needs\nto, I think. The last changes are to get source columns to match target\ncolumns in simple INSERT/FROM statements.\n\nExcept for the \"random\" and \"resjunk\" regression tests, things look good\non my development machine; I've done a build and regression test\ndirectly from the CVS source tree after these changes with no other\nerrors noted.\n\n - Tom\n",
"msg_date": "Thu, 09 Jul 1998 05:44:58 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Recent updates"
},
{
"msg_contents": "> (I'm back on-line, I think...)\n> I committed several changes to the development source tree this morning\n> (~15 hours ago). The changes affect the following areas:\n> \n> 1) There is an 8-byte integer data type. It needs some testing and\n> possibly configuration help on most of the supported platforms. Works on\n> i686/Linux and should work on Alpha.\n\nGood.\n\n> \n> 2) pg_dump now surrounds table and column names with double-quotes, to\n> preserve case and funny characters through a dump/reload operation. Hope\n> this is OK with you Bruce; let me know... btw, last time I tested this\n> code (two weeks ago?) it was still slightly off of a perfect dump/reload\n> of the regression tests. The test I am doing is to dump the regression\n> test database, then reload that into a new database, then dump the new\n> database. The resulting pg_dump output should be the same as the dump of\n> the original database.\n\nYep, that's the ticket.\n\n> \n> 3) some docs sources have been updated.\n> \n> 4) some additional regression tests have been defined to cover the\n> HAVING clause and the int8 data type.\n\nStephan has fixes for the remaining HAVING problems. He has not sent\nthem yet because he is getting problems with psort.\n\n> \n> 5) automatic data type conversion now happens in every place it needs\n> to, I think. The last changes are to get source columns to match target\n> columns in simple INSERT/FROM statements.\n> \n> Except for the \"random\" and \"resjunk\" regression tests, things look good\n> on my development machine; I've done a build and regression test\n> directly from the CVS source tree after these changes with no other\n> errors noted.\n\nGood. I assume UNION NULL is still an issue one of us needs to fix.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 9 Jul 1998 11:19:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> > 5) automatic data type conversion now happens in every place it \n> > needs to, I think.\n> Good. I assume UNION NULL is still an issue one of us needs to fix.\n\nNot anymore :)\n\npostgres=> select text 'hi' union select NULL;\n?column?\n--------\nhi\n\n(2 rows)\n\nIt was a one-liner addition to check for NULL columns and do nothing for\nconversions. Will keep the patch here for a couple of days while doing\nmore testing...\n\n - Tom\n",
"msg_date": "Sat, 11 Jul 1998 13:55:18 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> Good. I assume UNION NULL is still an issue one of us needs to fix.\n\nOK, I just committed changes to parse_clause.c which fix the most\nobvious \"UNION SELECT NULL\" problems:\n\npostgres=> select 1 union select null;\n?column?\n--------\n 1\n\n(2 rows)\n\nOther permutations work too. The code also gets the types right if there\nare multiple UNION clauses (remember the type of the result should be\nthe same as the type of the first clause in the UNION):\n\npostgres=> select 1.1 as \"float\" union select NULL\npostgres-> union select 2 union select 3.3;\nfloat\n-----\n 1.1 <--- (this value determined the types)\n 2 <--- (this was an int after a null)\n 3.3 <--- (this double came after an int)\n <--- (null is here)\n(4 rows)\n\nIn testing I found at least one remaining case with problems. It\ninvolves a UNION ALL in clauses other than the first:\n\npostgres=> select 1 union select 2 union all select null;\nBackend message type 0x44 arrived while idle\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or while \n processing the request.\nWe have lost the connection to the backend, so further processing is\n impossible. Terminating.\n\nAt the same time, the backend printed the following:\n\nToo Large Allocation Request(\"!(0 < (size)\n && (size) <= (0xfffffff)):size=-2 [0xfffffffe]\",\n File: \"mcxt.c\", Line: 228)\n !(0 < (size) && (size) <= (0xfffffff)) (0)\n\nDo you want to look at this Bruce? I haven't looked at it yet, but think\nit might be deeper into the backend than the parser (haven't run into\nmcxt.c before).\n\nI am testing on a Friday's version of the cvs tree.\n\n - Tom\n",
"msg_date": "Tue, 14 Jul 1998 04:14:58 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> At the same time, the backend printed the following:\n> \n> Too Large Allocation Request(\"!(0 < (size)\n> && (size) <= (0xfffffff)):size=-2 [0xfffffffe]\",\n> File: \"mcxt.c\", Line: 228)\n> !(0 < (size) && (size) <= (0xfffffff)) (0)\n> \n> Do you want to look at this Bruce? I haven't looked at it yet, but think\n> it might be deeper into the backend than the parser (haven't run into\n> mcxt.c before).\n> \n> I am testing on a Friday's version of the cvs tree.\n\nTypically a bad malloc. I can check.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 14 Jul 1998 00:35:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> Do you want to look at this Bruce? I haven't looked at it yet, but think\n> it might be deeper into the backend than the parser (haven't run into\n> mcxt.c before).\n> \n> I am testing on a Friday's version of the cvs tree.\n\nCan you check:\n\n\ttest=> select null union select null;\n\tERROR: type id lookup of 0 failed\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 14 Jul 1998 00:55:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> Do you want to look at this Bruce? I haven't looked at it yet, but think\n> it might be deeper into the backend than the parser (haven't run into\n> mcxt.c before).\n> \n> I am testing on a Friday's version of the cvs tree.\n\nLook at this:\n\n\ttest=> select 4 union select 5 union all select null;\n\t?column?\n\t--------\n\t\n\t�\n\t\n\t(3 rows)\n\nAnd the character randomly changes depending on the constant you use.\nStrange.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 14 Jul 1998 00:58:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> Do you want to look at this Bruce? I haven't looked at it yet, but think\n> it might be deeper into the backend than the parser (haven't run into\n> mcxt.c before).\n> \n> I am testing on a Friday's version of the cvs tree.\n\nThe problem appears to be in the sorting of nulls, which is used by\nUNION ALL:\n\n\ttest=> select null order by 1;\n\tERROR: type id lookup of 0 failed\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 14 Jul 1998 01:02:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> test=> select 4 union select 5 union all select null;\n> ?column?\n> --------\n> \n> ¼\n> \n> (3 rows)\n> \n> And the character randomly changes depending on the constant you use.\n> Strange.\n\nWell, this is just another symptom of a lost or uninitialized memory\narea; I get\n\npostgres=> select 4 union select 5 union all select null;\n?column?\n--------\n\n\n\n(3 rows)\n\nwith apparently blank or null results.\n\nI'll check on the \"NULL UNION NULL\" problem. Since there is _absolutely\nno context_ to assign a type it's a bit strange...\n\n - Tom\n",
"msg_date": "Tue, 14 Jul 1998 05:21:18 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> The problem appears to be in the sorting of nulls, which is used by\n> UNION ALL:\n> test=> select null order by 1;\n> ERROR: type id lookup of 0 failed\n\nHmm. And I've got trouble with the following when I assigned the type\n\"UNKNOWNOID\" to the null fields:\n\npostgres=> select null union select null;\nERROR: Unable to find an ordering operator '<' for type unknown.\n Use an explicit ordering operator or modify the query.\n\nWith \"UNION ALL\" it works, since no sorting needs to happen:\n\npostgres=> select null union all select null;\n?column?\n--------\n\n\n(2 rows)\n\nAn additional problem is that the UNION parsing is done recursively, so\nthe routine which does the type matching does not see a list of all the\nclauses all at once.\n\nAny ideas?\n\n - Tom\n",
"msg_date": "Tue, 14 Jul 1998 13:38:53 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> postgres=> select null union all select null;\n> ?column?\n> --------\n> \n> \n> (2 rows)\n> \n> An additional problem is that the UNION parsing is done recursively, so\n> the routine which does the type matching does not see a list of all the\n> clauses all at once.\n> \n> Any ideas?\n\nI knew there was a reason I did not support NULL in union. :-)\n\nJust kidding. I never thought of testing it, and a good thing too. :-)\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 14 Jul 1998 09:44:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> > Any ideas?\n> I knew there was a reason I did not support NULL in union. :-)\n\nYeah, it's sticky. Where in the code does the sorting get set up?\n\n - Tom\n",
"msg_date": "Tue, 14 Jul 1998 15:12:20 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> > > Any ideas?\n> > I knew there was a reason I did not support NULL in union. :-)\n> \n> Yeah, it's sticky. Where in the code does the sorting get set up?\n> \n> - Tom\n> \n\nSee optimizer/prep/prepunion.c::plan_union_queries(). You will see me\ncalling transformSortClause() from there to set up a query using\nUNION/UNION ALL. I think that is where the problem is happening. \nWhatever you did in the parser to get these types converted is not in\nthat function. Can you check into it? Should I be doing that in\nanother place. I am unsure, but it looks like the best place for it.\n\nI think the major problem is the way I am re-ordering the UNION sort to\nhandle the placement of UNION and UNION ALL. I think I need some more\ncode, or perhaps grab some structure you are already populating in the\nparser in this place.\n\nWhen I required all the types to be the same, it didn't matter how I\nre-ordered things.\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 14 Jul 1998 13:01:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> > Yeah, it's sticky. Where in the code does the sorting get set up?\n> See optimizer/prep/prepunion.c::plan_union_queries(). You will see me\n> calling transformSortClause() from there to set up a query using\n> UNION/UNION ALL. I think that is where the problem is happening.\n> Whatever you did in the parser to get these types converted is not in\n> that function. Can you check into it? Should I be doing that in\n> another place. I am unsure, but it looks like the best place for it.\n\nOK, made a change to transformSortClause() called from\nplan_union_queries():\n\npostgres=> select null union select null;\n?column?\n--------\n\n(1 row)\n\npostgres=> select null union select null union all select null;\n?column?\n--------\n\n\n(2 rows)\n\nI decided to use the int4 sorting routines when the type is\n\"InvalidOid\", the type apparently assigned to null constants. The sort\nroutines probably don't get called anyway since everything is a null,\nand if they did the \"pass by value\" int4 routines are probably safest.\n\nWill continue testing, and need to look into this still:\n\npostgres=> select 1 union select null union all select null;\nBackend message type 0x44 arrived while idle\n...\n\n - Tom\n",
"msg_date": "Wed, 15 Jul 1998 13:15:41 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> I decided to use the int4 sorting routines when the type is\n> \"InvalidOid\", the type apparently assigned to null constants. The sort\n> routines probably don't get called anyway since everything is a null,\n> and if they did the \"pass by value\" int4 routines are probably safest.\n\nGood. That was my suspicion on how to do it.\n\nWhat does 'select null order by 1;' do now?\n\nI have renamed the append struct names just now as part of an EXPLAIN\nfix. Should not affect you.\n\n> \n> Will continue testing, and need to look into this still:\n> \n> postgres=> select 1 union select null union all select null;\n> Backend message type 0x44 arrived while idle\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 15 Jul 1998 11:32:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> What does 'select null order by 1;' do now?\n\npostgres=> select null order by 1;\nERROR: type id lookup of 0 failed\n\nDarn. That doesn't touch the UNION code, so will need to look elsewhere\nI guess.\n\n> I have renamed the append struct names just now as part of an EXPLAIN\n> fix. Should not affect you.\n\nOK.\n\n - Tom\n",
"msg_date": "Wed, 15 Jul 1998 16:26:29 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> > What does 'select null order by 1;' do now?\n> \n> postgres=> select null order by 1;\n> ERROR: type id lookup of 0 failed\n> \n> Darn. That doesn't touch the UNION code, so will need to look elsewhere\n> I guess.\n> \n> > I have renamed the append struct names just now as part of an EXPLAIN\n> > fix. Should not affect you.\n> \n> OK.\n> \n> - Tom\n> \n\nIt is from here:\n\t\n\t#2 0x9aca9 in typeidType (id=0) at parse_type.c:69\n\t#3 0x99d19 in oper (opname=0x8ce13 \"<\", ltypeId=0, rtypeId=0, \n\t noWarnings=0 '\\000') at parse_oper.c:614\n\t#4 0x95a18 in transformSortClause (pstate=0x129f50, orderlist=0x130650, \n\t sortlist=0x0, targetlist=0x130690, uniqueFlag=0x0) at parse_clause.c:330\n\t#5 0x7daed in transformSelectStmt (pstate=0x129f50, stmt=0x2dfb90)\n\t at analyze.c:802\n\t#6 0x7cb99 in transformStmt (pstate=0x129f50, parseTree=0x2dfb90)\n\t at analyze.c:190\n\t#7 0x7c91c in parse_analyze (pl=0x130670, parentParseState=0x0)\n\t at analyze.c:76\n\nLooks easy to fix. The code is:\n\n /* check for exact match on this operator... */\n if (HeapTupleIsValid(tup = oper_exact(opname, ltypeId, rtypeId, NULL, NULL,$\n {\n }\n /* try to find a match on likely candidates... */\n else if (HeapTupleIsValid(tup = oper_inexact(opname, ltypeId, rtypeId, NULL$\n { \n }\n else if (!noWarnings)\n {\n elog(ERROR, \"Unable to find binary operator '%s' for types %s and %s\",\n opname, typeTypeName(typeidType(ltypeId)), typeTypeName(typeidType$\n }\n\nIt can't find operators for NULL, and is bombing when trying to print\nthe error message. I think we need to handle this query properly,\nbecause some sql's generated by other programs will auto-order by all\nthe fields. I think your fix that you did with sort perhaps can be done\nhere.\n\nBut actually, the call is coming from transformSortClause(), so parhaps\nyou can do a NULL test there and prevent oper() from even being called.\n\nGlad you are back on-line.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 15 Jul 1998 12:31:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> > > > What does 'select null order by 1;' do now?\n> > you can do a NULL test there and prevent oper() from even being \n> > called.\n> \n> postgres=> select null order by 1;\n> ?column?\n> --------\n> \n> (1 row)\n> \n> There are three or four cases in transformSortClause() and I had fixed\n> only one case for UNION. A second case is now fixed, in the same way; I\n> assigned INT4OID to the column type for the \"won't actually happen\"\n> sort. Didn't want to skip the code entirely, since the backend needs to\n> _try_ a sort to get the NULLs right. I'm not certain under what\n> circumstances the other cases are invoked; will try some more testing...\n> \n> Off to work now :)\n\nGood. Yes, I agree we need to put something in that place.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 15 Jul 1998 13:04:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
},
{
"msg_contents": "> > > What does 'select null order by 1;' do now?\n> you can do a NULL test there and prevent oper() from even being \n> called.\n\npostgres=> select null order by 1;\n?column?\n--------\n\n(1 row)\n\nThere are three or four cases in transformSortClause() and I had fixed\nonly one case for UNION. A second case is now fixed, in the same way; I\nassigned INT4OID to the column type for the \"won't actually happen\"\nsort. Didn't want to skip the code entirely, since the backend needs to\n_try_ a sort to get the NULLs right. I'm not certain under what\ncircumstances the other cases are invoked; will try some more testing...\n\nOff to work now :)\n\n - Tom\n",
"msg_date": "Wed, 15 Jul 1998 17:06:42 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recent updates"
}
] |
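The last several messages in the thread above settle on substituting the int4 ordering routines when a sort column's type comes back as InvalidOid (a bare NULL constant). The following is a minimal standalone C sketch of that guard, not the committed PostgreSQL patch; the type constants and the lookup function are simplified stand-ins for the real catalog code referenced in parse_type.c and parse_clause.c.

    /*
     * Standalone sketch (not PostgreSQL source): the failure mode and the
     * fix discussed above.  A bare NULL constant has no resolved type, so
     * its type id is 0 (InvalidOid); looking that id up to find an
     * ordering operator aborts.  The fix substitutes a harmless
     * pass-by-value type before the lookup.
     */
    #include <stdio.h>
    #include <stdlib.h>

    typedef unsigned int Oid;
    #define InvalidOid ((Oid) 0)
    #define INT4OID    ((Oid) 23)

    /* stand-in for the catalog lookup that fails in the thread above */
    static const char *typeidTypeName(Oid id)
    {
        if (id == InvalidOid)
        {
            /* corresponds to: ERROR: type id lookup of 0 failed */
            fprintf(stderr, "ERROR: type id lookup of %u failed\n", id);
            exit(1);
        }
        return (id == INT4OID) ? "int4" : "other";
    }

    int main(void)
    {
        Oid restype = InvalidOid;   /* result type of a bare NULL sort column */

        /* the guard described in the thread: fall back to int4 */
        if (restype == InvalidOid)
            restype = INT4OID;

        printf("using '<' ordering operator for type %s\n",
               typeidTypeName(restype));
        return 0;
    }

Without the two guarded lines in main() the lookup aborts exactly as in the reported "type id lookup of 0 failed" error; with them, the sort setup gets a safe pass-by-value type and an all-NULL column can be sorted without touching real data.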
[
{
"msg_contents": "Please add to the reliability list:\n\n GRANT will not work with usernames such as \"www-data\"\n\n(unless that is already fixed?)\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"And this is the confidence that we have in him, that,\n if we ask any thing according to his will, he heareth\n us; And if we know that he hears us, whatsoever we\n ask, we know that we have the petitions that we\n desired of him.\" I John 5:14,15\n\n\n",
"msg_date": "Thu, 09 Jul 1998 14:02:03 +0100",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some items for the TODO list "
},
{
"msg_contents": "> Please add to the reliability list:\n> \n> GRANT will not work with usernames such as \"www-data\"\n> \n> (unless that is already fixed?)\n> \nAdded:\n\n\t* allow usernames with dashes(GRANT fails)\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 9 Jul 1998 11:29:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some items for the TODO list"
}
] |
[
{
"msg_contents": "I'm using the latest CVS and PHP can't get a connection. Starting the\npostmaster with or without the -i flag is not the problem. Also psql\nworks fine. \n\nIf I try to use PHP to access postgres I get the following:\n\n-------------------------------------------------------------------------------\npostmaster: ServerLoop: handling reading 5\npostmaster: ServerLoop: handling reading 5\npostmaster: ServerLoop: handling writing 5\npostmaster: BackendStartup: environ dump:\n-----------------------------------------\n HOSTNAME=marliesle\n LOGNAME=postgres\n MACHTYPE=i586-debian-linux\n TERM=xterm\n HOSTTYPE=i586\n \nPATH=/usr/local/pgsql/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/bin/X11\n HOME=/home/postgres\n SHELL=/bin/bash\n PS1=\\h\\$ \n PGLIB=/usr/local/pgsql/lib\n USER=postgres\n PGDATA=/usr/local/pgsql/data\n MANPATH=/usr/local/pgsql/man\n LANG=de_DE\n OSTYPE=linux\n SHLVL=1\n _=/usr/local/pgsql/bin/postmaster\n _652_GNU_nonoption_argv_flags_=0000\n POSTPORT=5432\n POSTID=2147483646\n PG_USER=nobody\n IPC_KEY=5432000\n-----------------------------------------\npostmaster: BackendStartup: pid 658 user nobody db verlag socket 5\npostmaster child[658]: starting with (/usr/local/pgsql/bin/postgres, -p,\n-d3, -P5, -v131072, verlag, )\nFindExec: found \"/usr/local/pgsql/bin/postgres\" using argv[0]\n ---debug info---\n Quiet = f\n Noversion = f\n timings = f\n dates = Normal\n bufsize = 64\n sortmem = 512\n query echo = f\n DatabaseName = [verlag]\n ----------------\n\n InitPostgres()..\nAn connection error occured. # <- A PHP message #\npostmaster: reaping dead processes...\npostmaster: CleanupProc: pid 658 exited with status 0\nmarliesle$ \n--------------------------------------------------------------------------------\n\nBetween InitPostgres()... and \"An connection error occured\" 'psql\nverlag' show's a \"Welcome to the POSTGRESQL interactive sql monitor:\"\n\nI append here the last lines from a strace (postmaster startet with -i):\n\n--------------------------------------------------------------------------------\nmunmap(0x40158000, 4096) = 0\nsocket(PF_UNIX, SOCK_STREAM, 0) = 5\nconnect(5, {sun_family=AF_UNIX, sun_path=\"/tmp/.s.PGSQL.5432\"}, 20) = 0\nfcntl(5, F_SETFL, O_RDONLY|O_NONBLOCK) = 0\ngetsockname(5, {sun_family=AF_UNIX, sun_path=\"\"}, [3]) = 0\nsend(5, \"\\0\\0\\1(\\0\\2\\0\\0verlag\\0\\0\\0\\0\\0\\0\"..., 296, 0) = 296\noldselect(6, [5], [], NULL, NULL) = 1 (in [5])\nrecv(5, \"R\\0\\0\\0\\0\", 8192, 0) = 5\noldselect(6, [5], [], NULL, NULL) = 1 (in [5])\nrecv(5, \"K\\0\\0\\2_\\262\\263\\247\\212Z\", 8192, 0) = 10\nclose(5) = 0\nwrite(1, \"An connection error occured.\\n\", 29) = 29\nmunmap(0x40126000, 200704) = 0\nclose(4) = 0\nmunmap(0x40009000, 4096) = 0\nsetitimer(ITIMER_PROF, {it_interval={0, 0}, it_value={0, 0}}, NULL) = 0\n_exit(0) = ?\n--------------------------------------------------------------------------------\n\n-Egon\n",
"msg_date": "Thu, 09 Jul 1998 16:17:10 +0200",
"msg_from": "Egon Schmid <[email protected]>",
"msg_from_op": true,
"msg_subject": "InitPostgres() fails through libpq"
}
] |
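In the trace above the server answers the startup packet and the client then closes the socket itself, which suggests the problem is on the client side rather than in the postmaster. For anyone wanting to reproduce the connection attempt outside PHP, a minimal libpq check might look like the sketch below; the database name "verlag" is taken from the report, everything else falls back to libpq defaults, and this is only an illustration, not the PHP module's actual code.

    /*
     * Minimal libpq connection check (illustrative only).
     * Build with something like: cc test.c -I<pgsql>/include -L<pgsql>/lib -lpq
     */
    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* host, port, options, tty left NULL -> libpq defaults */
        PGconn *conn = PQsetdb(NULL, NULL, NULL, NULL, "verlag");

        if (PQstatus(conn) == CONNECTION_BAD)
        {
            fprintf(stderr, "connection to %s failed: %s",
                    PQdb(conn), PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        printf("connected to database %s\n", PQdb(conn));
        PQfinish(conn);
        return 0;
    }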
[
{
"msg_contents": "I�m going ot leave for vacation later on. So don�t expect much input from me\nuntil July 27th the earliest. :-)\n\nMichael\n-- \nDr. Michael Meskes\t\[email protected], [email protected]\nGo SF49ers! Go Rhein Fire!\tUse Debian GNU/Linux! \n",
"msg_date": "Fri, 10 Jul 1998 00:10:50 +0200",
"msg_from": "\"Dr. Michael Meskes\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Leaving for vacation"
},
{
"msg_contents": "On Fri, 10 Jul 1998, Dr. Michael Meskes wrote:\n\n> I���m going ot leave for vacation later on. So don���t expect much input from me\n> until July 27th the earliest. :-)\n\n\tEnjoy :)\n\n\n",
"msg_date": "Fri, 10 Jul 1998 07:36:19 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Leaving for vacation"
}
] |