[
{
"msg_contents": " > As just said, this is a good thing from the point of view of\n > encouraging participation and commercial success. As long as the open\n > source version of postgresql remains a well-designed, solid product it\n > behooves any commercial distributor to aid in its maintenance rather\n > than take on the whole thing. Ideally, they will contribute any fixes\n > they make so that all can benefit and perhaps more importantly so they\n > don't have to maintain the separate fixes any longer.\n\n this assumes an ideal world. \n\nNot entirely. It really assumes that people do a cost-benefit\nanalysis and recognize that the cost of maintaining a separate\ndistribution with patches, etc. of something as complex as PostgreSQL\ngenerally far outweighs the benefit of having a global network of\nprogrammers do it for you. Simple economics, not idealism.\n\n look at BSD itself and how it has fragmented, or the plethora of unices and how\n the forking there nearly closed the door on wide spread use. the fact of the\n matter is, eventually someone/someidiot feels slighted by the community or feels\n their ideas are better, no matter WHAT. so they go their own way. they decide to\n take the BSD'd source and wander off to their little corner of the world, taking\n some of the developers with them.\n\nLook also at Linux and how it has fragmented despite the GPL. There\nis not one Linux release, at all, despite the common usage of the term\n\"Linux\" as if that referred to a single entity. In contrast, the\nterm \"NetBSD\", for example, refers to exactly one public release, a\nstandard and well defined entity that can be readily duplicated.\nIndeed, there are likely many more versions of Linux in standard\ndistribution and use (at least 17 in a recent count) than ALL of the\npublicly-released *BSD OSes combined (3).\n\n whilst BSD is less restrictive, its so unrestrictive that it allows people to\n set up barriers to the furtherance of the source. 
\n\nBut as long as the publicly available source remains a viable\nenterprise (e.g., people see strengths in it and are willing to\ncontribute time rather than pay for a commercial version) it doesn't\nmatter if there exist other versions that have been commercialized.\n\nThe issue is whether or not PostgreSQL should ALLOW release of binary\nversions or not. There are many viable situations in which it is not\nfeasible or desirable to ship source code, even if the product is\nidentical to the public source. For example, clients may not care and\nmay not want to be burdened with it. Or the database may be embedded\nwithin something else in a streamlined system for which there is no\nspace for source. With the GPL, the producer is required to either\nship the source or maintain all the relevant copies for at least 3\nyears. That is quite a burden when no one cares to have the source.\nIn this case the GPL ultimately restricts what can be done in an\neconomically realistic manner and can lead to stagnation.\n\nIn the interests of maximizing the potential marketplace for\nPostgreSQL, it seems like the BSD license is in fact superior.\n\n finally, does the BSD liscence further the possibility of commercial adoption?\n well. look at Apache. apache has commercial products built on top of\n it. PHP4 w/commercial optimizer leaps to mind. and commercial acceptance (e.g.\n bundling with other packages that are closed, open, free, for sale, etc) is\n quite high.\n\nBoth Apache and PHP (both 3 and 4) have BSD-style (not GPL) licences\n(though you can choose to abide by the GPL for PHP4, if you want, you\nare not required to). 
Hence, commercial vendors can release binary\nproducts built upon either of these without also releasing the source.\nThat is likely the main reason these products have been adopted in the\ncommercial world (given the prerequisite that they are both extremely\nhigh quality products to start with).\n\n the ability to sell a core product directly does NOT create commercial\n success. instead, i feel it encourages forking and the removal of that product\n from the user base. which is us. remember: greed and pride cause people to do\n stupid things.\n\nClearly the ability to sell something does not create a success. Nor\ndoes it necessarily encourage the forking you seem to think is\ninevitable. Your examples above both counter this claim.\n\n that being said.... is it possible to GPL postgresql? probably not, i'd\n imagine. how much of the original Berkley source code is there left?\n\nNor is it desirable, I would argue.\n\nCheers,\nBrook\n",
"msg_date": "Thu, 21 Oct 1999 11:43:01 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Readline use in trouble?]"
}
]
[
{
"msg_contents": "Hi,\n\ntoday I decided to look at postgres log file ( -d 2).\nWe use postgres as a database backend to apache+modperl server.\nI notice messages like:\npq_recvbuf: unexpected EOF on client connection\nWhat does it mean?\n\nStartTransactionCommand\nquery: select a.msg_id, h.status_set_date, a.title, a.msg_path, c.name from mes\nProcessQuery\nCommitTransactionCommand\npq_recvbuf: unexpected EOF on client connection\n~~~~~~~~~~~~~~\n\nAlso I'm curious about postmaster's activity:\n\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 21507 exited with status 0\n\nThis message appears too often - I have in httpd.conf\nMaxRequestsPerChild 5000, so I expect new httpd children after 5000 requests \nand new postgres process accordingly (I use persistent connection\nbetween httpd and postgres).\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 21 Oct 1999 23:19:04 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "pq_recvbuf: unexpected EOF on client connection"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> I notice messages like:\n> pq_recvbuf: unexpected EOF on client connection\n> What does it means ?\n\nMeans your client closed the connection without sending a \"terminate\"\nmessage first, ie, you didn't close down libpq gracefully. It's harmless\nenough, although I think having the log message is a good idea. (If you\nuse clients that are careful to do PQfinish() then you can use the\npostmaster log to check for client crashes.)\n\n> Also I'm curious about postmaster's activity:\n\n> proc_exit(0) [#0]\n> shmem_exit(0) [#0]\n> exit(0)\n> /usr/local/pgsql/bin/postmaster: reaping dead processes...\n> /usr/local/pgsql/bin/postmaster: CleanupProc: pid 21507 exited with status 0\n\n> This message appears too often - I have in httpd.conf\n> MaxRequestsPerChild 5000, so I expect new httpd children after 5000 requests\n> and new postgress process accordingly (I use persistent connection\n> between httpd and postgres).\n\nWell, that's certainly the trace of a backend quitting. I'd say your\nhttpd stuff isn't working the way you think it is...\n\n\t\t\tregards, tom lane\n\nPS: I didn't hear back from you about INTERSECT/LIMIT --- is that still\nbroken for you? I can't find anything wrong with it here.\n",
"msg_date": "Thu, 21 Oct 1999 18:47:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pq_recvbuf: unexpected EOF on client connection "
}
]
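Tom's diagnosis in this thread (a client that closes its socket without first sending the Terminate message that PQfinish() produces) can be illustrated with plain sockets. The sketch below is a toy model, not PostgreSQL code: `backend_read` is a hypothetical stand-in for pq_recvbuf, and the single 'X' byte mimics the Terminate message of the frontend/backend protocol.

```python
import socket

def backend_read(conn):
    # Mimic pq_recvbuf: recv() returning b'' means the peer closed the
    # connection; with no prior Terminate ('X') that is an "unexpected EOF".
    data = conn.recv(1)
    if data == b'':
        return "pq_recvbuf: unexpected EOF on client connection"
    if data == b'X':  # what a graceful PQfinish() would have sent first
        return "client disconnected gracefully"
    return "got message type %r" % data

# Rude client (e.g. an httpd child that Apache simply kills):
# the socket is closed with no Terminate message.
backend, client = socket.socketpair()
client.close()
print(backend_read(backend))   # -> pq_recvbuf: unexpected EOF on client connection

# Polite client: send Terminate, then close.
backend, client = socket.socketpair()
client.sendall(b'X')
client.close()
print(backend_read(backend))   # -> client disconnected gracefully
```

The first case is what a persistent connection torn down by a dying httpd child looks like from the backend's side, which is why the log message is harmless but worth keeping around as a crash indicator.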
[
{
"msg_contents": "Well, I'm doing the pg_dump thing at the moment. If nobody has stepped up\nto the plate by the time I'm finished, and if nobody objects to my style too\nmuch (Tom, comments from the psql changes?), I'll have a shot. This was the\nreason that I started on the query string limit in the first place, anyway.\n\nI'll be slowing right up on my day job at the end of next week, so I should\nhave a fair amount of time to stick into it for a couple of weeks after\nthat, which should be enough to get pg_dump finished. \n\nMikeA\n\n>> -----Original Message-----\n>> From: Tom Lane [mailto:[email protected]]\n>> Sent: Friday, October 22, 1999 8:04 AM\n>> To: Vadim Mikheev\n>> Cc: Bruce Momjian; [email protected]\n>> Subject: Re: [HACKERS] Planning final assault on query length limits \n>> \n>> \n>> Vadim Mikheev <[email protected]> writes:\n>> > Bruce Momjian wrote:\n>> >>>> Any volunteers out there? It'd be a shame to not have \n>> this problem\n>> >>>> area completely licked for 7.0.\n>> >> \n>> >> Welcome to the small club, Tom. For the first 2 & 1/2 \n>> years, the only\n>> >> person who could tackle those big jobs was Vadim. Now \n>> you are in the\n>> >> club too.\n>> >> \n>> >> The problem is that there are no more. I can't imagine \n>> anyone is going\n>> >> to be able to jump out of the woodwork and take on a job \n>> like that. We\n>> >> will just have to do the best job we can, and maybe save \n>> something for\n>> >> 7.1.\n>> \n>> > There is Jan!...\n>> > But he's busy too -:)\n>> \n>> > Let's wait for 7.0 beta - \"big tuples\" seems as work for 2 weeks...\n>> \n>> Thing is, if Vadim could do it in two weeks (sounds about \n>> right), then\n>> maybe I could do it in three or four (I'd have to spend time studying\n>> parts of the backend that Vadim already knows, but I don't). 
\n>> It seems\n>> to me that some aspiring hacker who's already a little bit familiar\n>> with backend coding could do it in a month or two, with \n>> suitable study,\n>> and would in the process make great strides towards gurudom. This is\n>> a fairly localized task, if I'm not greatly mistaken about it. And\n>> there's plenty of time left before 7.0. So this seems like a perfect\n>> project for someone who wants to learn more about the backend and has\n>> some time to spend doing so.\n>> \n>> A year ago I didn't know a darn thing about the backend, so I'm a bit\n>> bemused to find myself being called a member of \"the small club\".\n>> Programming skills don't come out of nowhere, they come out of study\n>> and practice. (See http://www.tuxedo.org/~esr/faqs/loginataka.html)\n>> \n>> In short, I'd like to see the club get bigger...\n>> \n>> \t\t\tregards, tom lane\n>> \n>> ************\n>> \n",
"msg_date": "Fri, 22 Oct 1999 09:43:01 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Planning final assault on query length limits "
}
]
[
{
"msg_contents": "With current sources,\n\nregression=> create table xx (f1 int, f2 serial, f3 serial);\nNOTICE: CREATE TABLE will create implicit sequence 'xx_f2_seq' for SERIAL column 'xx.f2'\nNOTICE: CREATE TABLE will create implicit sequence 'xx_f3_seq' for SERIAL column 'xx.f3'\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'xx_f2_key' for table 'xx'\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'xx_f3_key' for table 'xx'\nCREATE\nregression=> insert into xx values(1);\nERROR: Relation 'xx_f2_seq' does not exist\nregression=>\n\n6.5.2 fails to do the CREATE TABLE at all. I'm betting this is related\nto the multiple-unique-index bug that you thought you had fixed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Oct 1999 12:01:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Current sources fail if two 'serial' columns in one table"
}
]
[
{
"msg_contents": "I apologize if this is the wrong group for this message, but I'm not\nsure where else this would go.\n\nI don't have a specific problem, but I would like to ask some questions\nabout how postgres works.\n\nBut first, some background info:\nI have two identical servers each running postgres 6.5.1 and each has an\nidentical database called zipfind. This is a pretty static, mostly read\nonly database with 700,000 rows. A few days ago I got some updated\ninformation for the database, 1,400,000 rows worth, almost double the\ndata in ascii format.\n\nSo, I got the new rows inserted with a perl script which read the ascii\nfile line by line and inserted the data. This took quite a while, in\nfact, it took more than 24 hours. So, I decided I would update the\nsecond database in a different way.\n\nI realized I could pg_dump the new zipfind database, and read it back in\nusing psql on the other machine, but I decided to try it a little\ndifferently, just to see what would happen.\n\nWhat I tried was to move the actual data files in the data/base/zipfind\ndirectory from the newly updated database directly to the machine still\nin need of updating. I shutdown postmaster on the machine that I was\nmoving the files to, replaced all of the files in the zipfind directory\nwith the files from the machine with all the new rows, reset all the\npermissions, and restarted postmaster.\n\nThe strange thing is, even though the old files were removed and\nreplaced with the new files using identical file names, psql seemed to\nbe reading data from the old database as if it had not been removed.\nissuing a \"select count(*) from zips;\" returned the old row count 666730\ninstead of the new row count ca 1400000 ... if anything I expected to\nget some kind of error ..not the old row count!\n\nI checked the filesizes in the zipfind directory to make sure I hadn't\nmade a mistake while putting the new data in place. Everything was\ncorrect. 
I then vacuumed the database and rechecked the file sizes. ..\nthe \"zips\" table entry now reported the old file size!\n\nIt occurred to me that there may be some system tables which were\ncausing the erratic behaviour, I searched for something relevant but\nfound nothing.\n\nThe only theory that I could come up with was that postgres latched on\nto an inode for the original files ..but how it would keep that inode\ninfo across daemon invocations seems a mystery to me.\n\nExplanations appreciated!\n\nThanks,\nBryan\n\n\n\n\n\n\n -----------== Posted via Newsfeeds.Com, Uncensored Usenet News ==----------\n http://www.newsfeeds.com The Largest Usenet Servers in the World!\n------== Over 73,000 Newsgroups - Including Dedicated Binaries Servers ==-----\n",
"msg_date": "Fri, 22 Oct 1999 15:21:42 -0500",
"msg_from": "Bryan Ingram <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres inode q's"
},
{
"msg_contents": "Bryan Ingram wrote:\n\n> I apologize if this is the wrong group for this message, but I'm not\n> sure where else this would go.\n\n Topshot - absolutely the right group.\n\n> I don't have a specific problem, but I would like to ask some questions\n> about how postgres works.\n>\n> But first, some backfground info:\n> I have two identical servers each running postgres 6.5.1 and each has an\n> identical database called zipfind. This is a pretty static, mostly read\n> ...\n\n> It occurred to me that there may be some system tables which were\n> causing the erratic behaviour, I searched for something relevant but\n> found nothing.\n\n Warm, warm, hot - missed!\n\n> The only theory that I could come up with was that postgres latched on\n> to an inode for the original files ..but how it would keep that inode\n> info across daemon invocations seems a mystery to me.\n\n Deep frozen :-)\n\n I assume from this description, that one of the servers is\n created more or less by a similar copy operation, but that\n time it was the entire ./data directory that got copied, or\n maybe the entire installation directory - right? If not, the\n two installations must have been treated absolutely identical\n until all the data was inserted into the zipfind databases.\n\n Anyway, the system file causing this is pg_log. It's not a\n table, it's a bitmap telling which transaction have committed\n and which ones not. There are some transaction ID fields in\n the header information of each data tuple in PostgreSQL. One\n tells in which transaction this tuple appeared, and the other\n when it disappeared. But they are ignored if the transaction\n in question isn't marked as committed in pg_log. So on a\n DELETE operation, the deleted tuples simply get the DELETE's\n transaction ID stamped into the ending field, and on an\n UPDATE, the same is done and a new tuple with this XID as the\n beginning is appended at the end of the table. 
Can you\n imagine now, what a ROLLBACK in PostgreSQL means? Simple -\n eh? Just mark the transaction in pg_log as rolled back and\n the stamps will get ignored. So the old tuple is still valid\n and the new tuple at the end is ignored.\n\n Vacuum now is the utility, that (dramatically simplified)\n wipes out all the tuples with a committed XID in the ending\n field and truncates the datafile.\n\n Since you didn't copy pg_log (AND DON'T DO SO, IT WOULD\n CORRUPT ALL DATABASES IN THE INSTALLATION) from PostgreSQL's\n point of view all the UPDATES/INSERTS found in the copied\n zipfind database files never committed, so they were ignored.\n\n Either you copy the entire ./data directory, or you do it\n with pg_dump. No other chance.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 22 Oct 1999 23:08:52 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postgres inode q's"
},
{
"msg_contents": "Jan,\n\nThanks for the explanation, that does help to explain, and adds a lot to my\npostgres knowledge in general ..\n\nBased on your explanation, I understand how running VACUUM wiped out the new\ntuples that did not have a corresponding XID in pg_log.\n\nHowever, there is one aspect of this I still do not quite grasp ..\n\nWhat happens if the INSERT/DELETE is done without a transaction\n(BEGIN/COMMIT)? Is an XID still generated for that particular tuple, or is the\ntuple instantly committed with no XID stamped into the beginning/ending fields?\n\nAlso, I don't understand why vacuum didn't wipe out all tuples in the\ndatabase, rather than just the new ones. Here's why:\n\nWhen I updated the \"new\" database with the new records I used the DELETE then\nINSERT trick to avoid having to write logic to first see if there was an\nexisting record and then to update only the changing fields. Since I actually\ndeleted, then inserted, I'm guessing that the XID would change so that when I\nmoved the database over to the other server, ALL of the XIDs would be\ndifferent, not just the newly added rows. In which case, I would expect\nVACUUM to wipe everything. Instead, it only wiped the new rows, which tells\nme that even though I DELETED/INSERTED all existing rows, that somehow the\nXID's still sync with the XID's on the other server.\n\nAssuming the XIDs did change, I'd guess that though I had exactly the same\nnumber of rows I started with (666730 instead of +1400000) it is because the\nXIDs happened to correspond, but not necessarily with their original\nrelationships. Which would mean that I still had 666730 rows, but not the\noriginal ones. Probably a smattering of new and old ones.\n\nI'm just theorizing off of the top of my head .. 
please let me know where I\nhave gone wrong!\n\nMuch Thanks,\nBryan\n",
"msg_date": "Fri, 22 Oct 1999 17:38:05 -0500",
"msg_from": "Bryan Ingram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postgres inode q's"
},
{
"msg_contents": "Bryan Ingram <[email protected]> writes:\n> What happens if the INSERT/DELETE is done without a transaction\n> (BEGIN/COMMIT)? Is an XID still generated for that particular tuple,\n> or is the tuple instantly commited with no XID stamped into the\n> beginning/ending fields?\n\nThere is always a transaction. Postgres effectively generates an\nimplicit BEGIN and END around any query that's not inside an explicit\ntransaction block. This is why failing statements don't cause trouble;\ntheir transactions get aborted.\n\n> When I updated the \"new\" database with the new records I used the DELETE then\n> INSERT trick to avoid having to write logic to first see if there was an\n> existing record and then to update only the changing fields. Since I actually\n> deleted, then inserted, I'm guessing that the XID would change so that when I\n> moved the database over to the other server, ALL of the XIDs would be\n> different, not just the newly added rows. In which case, I would expect\n> VACUUM to wipe everything. Instead, it only wiped the new rows, which tells\n> me that even though I DELETED/INSERTED all existing rows, that somehow the\n> XID's still sync with the XID's on the other server.\n\nYeah, but the old tuples are *still there*. They are marked as having\nbeen deleted by transaction XID so-and-so. When you moved the files,\nthose transaction numbers are no longer thought to be committed, so\nthe old tuples come back to life (just as the new tuples are no longer\nconsidered valid, because their inserting transaction is not known to\nbe committed).\n\nThere is a potential hole in this theory, which relates to a point Jan\ndidn't make in his otherwise excellent discussion. A tuple normally\ndoesn't stay marked with its creating or deleting XID number for all\nthat long, because we don't really want to pay the overhead of\nconsulting pg_log for every single tuple. 
So, as soon as any backend\nchecks a tuple and sees that its inserting transaction did commit,\nit rewrites the tuple with a new state \"INSERT KNOWN COMMITTED\" (which\nis represented by inserting XID = 0 or some such). After that, no one\nhas to check pg_log anymore for that tuple; it's good. Similarly, the\ndeleting XID only stays on the tuple until someone verifies that the\ndeleting transaction committed; after that the tuple is marked KNOWN\nDEAD, and it'll stay dead no matter what's in pg_log. VACUUM is really\nonly special in that it reclaims space occupied by known-dead tuples;\nwhen it checks/updates the state of a tuple, it's not doing anything\nthat's not done by a plain SELECT.\n\nSo, AFAICT, you could only have seen the problem for tuples that were\nnot scanned by any SELECT or UPDATE operation subsequent to having been\ninserted/deleted and committed. If you did all the deletes/inserts\ninside a transaction, committed, and then immediately copied the files,\nthen for sure you'd have gotten burnt. If you did any sort of SELECT\nfrom the table after committing the changes, I'd have expected the tuple\nstates to get frozen --- at least for the tuples that SELECT visited,\nwhich might not have been all of them if the SELECT was able to use an\nindex.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Oct 1999 21:15:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postgres inode q's "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> ................ So, as soon as any backend\n> checks a tuple and sees that its inserting transaction did commit,\n> it rewrites the tuple with a new state \"INSERT KNOWN COMMITTED\" (which\n> is represented by inserting XID = 0 or some such). .........\n> \n\nThe way concurrency is supported in PostgreSQL is really cool, and I\nthink not widely understood. The tuple uses flags stored in the\nt_infomask field of the HeapTupleHeader structure to 'cache' the status\nof the creating and deleting transactions for each tuple. \n\nCheck out backend/utils/time/tqual.c and include/utils/tqual.h for\nthe details of the algorithms. (Not recommended if you have been\ndrinking at all)\n\nUllman \"Principles of Database and Knowledge-Base Systems, Vol 1\" Has a\npretty good discussion of time based and lock based schemes for\nconcurrency control.\n \nBernie Frankpitt\n",
"msg_date": "Sat, 23 Oct 1999 17:49:16 +0000",
"msg_from": "Bernard Frankpitt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postgres inode q's"
},
{
"msg_contents": "Tom Lane wrote:\n> Yeah, but the old tuples are *still there*. They are marked as having\n> been deleted by transaction XID so-and-so. When you moved the files,\n> those transaction numbers are no longer thought to be committed, so\n> the old tuples come back to life (just as the new tuples are no longer\n> considered valid, because their inserting transaction is not known to\n> be committed).\n> \n> There is a potential hole in this theory, which relates to a point Jan\n> didn't make in his otherwise excellent discussion. A tuple normally\n> doesn't stay marked with its creating or deleting XID number for all\n> that long, because we don't really want to pay the overhead of\n> consulting pg_log for every single tuple. So, as soon as any backend\n> checks a tuple and sees that its inserting transaction did commit,\n> it rewrites the tuple with a new state \"INSERT KNOWN COMMITTED\" (which\n> is represented by inserting XID = 0 or some such). After that, no one\n> has to check pg_log anymore for that tuple; it's good. Similarly, the\n> deleting XID only stays on the tuple until someone verifies that the\n> deleting transaction committed; after that the tuple is marked KNOWN\n> DEAD, and it'll stay dead no matter what's in pg_log. VACUUM is really\n> only special in that it reclaims space occupied by known-dead tuples;\n> when it checks/updates the state of a tuple, it's not doing anything\n> that's not done by a plain SELECT.\n> \n> So, AFAICT, you could only have seen the problem for tuples that were\n> not scanned by any SELECT or UPDATE operation subsequent to having been\n> inserted/deleted and committed. If you did all the deletes/inserts\n> inside a transaction, committed, and then immediately copied the files,\n> then for sure you'd have gotten burnt. 
If you did any sort of SELECT\n> from the table after committing the changes, I'd have expected the tuple\n> states to get frozen --- at least for the tuples that SELECT visited,\n> which might not have been all of them if the SELECT was able to use an\n> index.\n\nSounds like good material for the manual... and the book.\n--------\nRegards\nTheo\n",
"msg_date": "Sat, 30 Oct 1999 14:11:35 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postgres inode q's"
}
]
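Jan's pg_log explanation and Tom's follow-up in this thread can be condensed into a toy visibility model: each tuple carries the XID that created it and, once deleted, the XID that deleted it, and a lookup in the commit log decides what a scan sees. The sketch below is deliberately simplified (no hint bits, no snapshots, and the names `Tuple`, `visible`, and the dict-based `pg_log` are invented for illustration); it only shows why table files copied without pg_log appear to revert.

```python
# Toy commit log: xid -> 'committed' or 'aborted' (the bitmap Jan describes).
pg_log = {}

class Tuple:
    def __init__(self, value, xmin):
        self.value = value
        self.xmin = xmin   # XID of the inserting transaction
        self.xmax = None   # XID of the deleting transaction, if any

def visible(t):
    # A tuple is seen only if its inserting XID committed and it was not
    # deleted by a transaction that also committed.
    if pg_log.get(t.xmin) != 'committed':
        return False
    if t.xmax is not None and pg_log.get(t.xmax) == 'committed':
        return False
    return True

# Transaction 1 inserts a row and commits.
old = Tuple('old row', xmin=1)
pg_log[1] = 'committed'

# Transaction 2 deletes it and inserts a replacement (DELETE-then-INSERT),
# then commits. The old tuple is still physically there, just stamped.
old.xmax = 2
new = Tuple('new row', xmin=2)
pg_log[2] = 'committed'

table = [old, new]
print([t.value for t in table if visible(t)])   # ['new row']

# Copying the data files WITHOUT pg_log = losing the commit record for XID 2:
pg_log.pop(2)
print([t.value for t in table if visible(t)])   # ['old row']
```

Dropping XID 2's commit record plays the role of moving the table files to a machine whose pg_log never saw that transaction: the delete and the insert both become invisible, so the old rows "come back to life" exactly as described above.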
[
{
"msg_contents": "Request For Comments: Towards an industrial-strength\nlogging facility\n\n1999-10-19, Tim Holloway [email protected]\n\nIntroduction.\n\nPostgreSQL is a commercial-quality DBMS. However, one item\ngenerally found in commercial DBMS's that\nPostgreSQL has so far lacked has been a logging facility.\nYes, it has a debugging facility that can\nspit out reams of useful information, but debugging is not\nlogging - it has different goals and constraints. This,\nthen, is an attempt to provide that missing item.\n\nWhat should a log look like?\n\nThis depends. I like a console-style listing, as my needs\nare simple. Others would prefer that the\nlog be itself a database. Happily, I think that what I have\ndeveloped so far can be used for both.\nWhile it's perilous to attempt to be all things to all\npeople, my experience in working with the Amiga was that\nsimplicity doesn't have to mean rigidity or lack of ability.\nPreliminary design efforts have resulted in a plan that I\nthink will satisfy the majority of DBA's. Time will tell.\n\nDesign goals\n\n1. Robustness. Adding logging should not cause the system to\nbecome unstable.\n2. Performance. Unless you're IBM at least, logging is a\nmeans, not an end. The performance of the system\nmust not be degraded.\n3. Security. Since logging is often part of a security\neffort, it's only reasonable that the log itself\nbe secure. As of this writing, security is that of the\nPostgreSQL backend and/or syslog facilities.\n4. Routability. Preliminary design permits routing any or\nall events to multiple destinations, each of which is\nindividually controllable as to format. Abuse of this power\nmay impact 2), above, however.\n5. Locale support. Not everyone's preferred language is\nEnglish. Because each and every log message is fully\nconfigurable, and because care is given to formatting based\non locale, the DBA can customize logging to the convenience\nof his or her own culture. 
I hope that those who benefit from\nthis will keep me on the proper path.\n\nImplementation\n\n\"Simple things should be simple and complex things should be\npossible\"\n\n Alan Kay\n\nI've seen far too many systems where simple things were\ncomplex and complex things were simple and other\nvariations on that theme. I HOPE that's not what I'm\nproducing. If I am, PLEASE LET ME KNOW!!!\n\nAlthough the same mechanism is at work at all times, the\ndefaults are set to display just enough information to let\nyou know that more is possible:\n\nPostgres [123] 900 - Logging configuration file\n\"/usr/local/pgsql/data/postgresql.conf\" was not found or\ndenied access. Using default logging.\nPostgres [123] 101 - Server started\nPostgres [123] 102 - Server shutdown\n\nThese messages are routed to stderr (if available) for the\nbackend process AND to syslog (if available).\nIf there are other worthy default channels, I'd like to know\nthem.\n\nThe Next Level\n\nThe logging configuration file allows customizing of\nlogging. At one extreme, it can be used to suppress ALL\nlogging - even the default items listed above. At the other\n- suffice it to say that you can get VERY elaborate.\n\nThere are 3 types of information in the logging\nconfiguration file (which may, but likely won't, be part of\npg_hba.conf) Logging info is read at startup. There may\nexist signals that cause it to be reread, but not just yet.\n\nThey are:\n\n1. General log control. For example, suppression of\nhigh-demand activities BEFORE they get run, formatted and\nsent to the log subsystem.\n\n2. Message format. Allows definition/override of message\nformats on a class (see below) and individual basis. This is\nboth how formatting for database load and locale are done.\nMultiple message formats are supported!\n\n3. Message routing. Allows definition of the destination(s)\nof log messages. Each destination (channel) can select which\nmessages it will format and report. 
To avoid potential loss\nof critical info, any message not explicitly routed at least\nonce gets reported on the default channel - stderr/syslog,\nunless otherwise configured.\n\nMessage classes\n\nImplicit in the desire for logging into a database is the\nunderstanding that some types of messages may have identical\nformats but different content. To facilitate this (and to\naid in locale support) each possible message has a unique\nidentifier, and related messages (those which would route to\nthe same table) are grouped into classes, identified by\ncentury, as in the http and other familiar protocols.\n\nClasses for PostgreSQL logging are not grouped by severity,\nhowever, but by their affinity for a given\nstatistical table. Tentatively:\n\n1xx - The PostgreSQL server\n2xx - User-related information\n3xx - Transaction information\n4xx - EXPLAIN results (???)\n9xx - General system alerts\n\nRight now, the following are considered likely candidates,\nsubject to user feedback:\n\nserver info\n Server name, signal ID\n101 - Server started\n102 - Server shutdown\n103 - Signal xxx received\n104 - Server ABEND\n\nuser session\n userid, port or terminal ID, authentication scheme name\n(e.g. md5). session ID\n201 - User xxxx connected via port/terminal xxxxxxxx\nauthenticated by aaaaa\n202 - User xxxx disconnected\n203 - FORBIDDEN - connection denied for user xxxx via\nport/terminal xxxxxxxxxx rejected by aaaaaaa\n\nshow commands\n Session ID, command text\n301 - SELECT text\n302 - INSERT text\n303 - UPDATE text\n304 - DELETE text\n\nshow results\n session ID, count or OID. 
primary/first/only table ID\naffected\n401 - SUCCESS - nnn records retrieved\n402 - SUCCESS - record inserted at OID\n403 - SUCCESS - nnn records updated\n404 - SUCCESS - nnn records deleted\n405 - FORBIDDEN - action xxxxxx denied to user xxxx on table\nxxxxxxxx\n\nexplain\n as below:\n500 EXPLAIN transaction ID sequence cost rows bytes\n\nmiscellaneous\n explanatory text\n900 - Logging configuration file \"ffff\" was not found or\ndenied read access. Using default logging.\n901 - Logging configuration file \"ffff\" could not be\nprocessed - invalid text at line nnn.\n902 - User overrides non-existent message ID nnn\n903 - Channel requests non-existent message ID nnn\n904 - end of section starting on line nnn was not found\n905 - start of section ending on line nnn was not found\n906 - (message from logging configuration file)\n\nMultiple message format tables may be defined. Each of these\noverrides or disables one or more of the messages listed\nabove - or its \"final\" equivalent. Messages which aren't\noverridden display in their default (English text) mode.\nBecause this could be VERY rude to a table loader, each\nchannel must explicitly request which messages are\nacceptable (this also facilitates routing of message\nclasses). The default channel catches unsolicited messages\nas a safeguard. To make it easier, both common message\nformats in the message format tables and their\ncorresponding solicitations in the channel definitions\nallow ranges to be defined. E.g., you can define a logfile\nformat for messages 301-304 rather than having to replicate\nthe same format for each.\n\nA brief example --- SUBJECT TO RADICAL CHANGE! 
---\n\n; One possible implementation of logging configuration:\n; an SQL style - verb attribute(value[s]) might be better?\n;\n<logging>\n\n<options>\nlevel = warning ; one of INFO, NOTICE, DEBUG, WARNING, ERROR, or\nFATAL\nlog directory = /var/log/postgresql\nstartmessage = \" This is a sample log configuration\" ;\noutput via message 906\nendmessage = \"Have a Nice Day\" ; output via message 906\n</options>\n\n<format name=info>\n101 INFO \"%n [%p] fing an\" \n102 INFO \"%n [%p] ist zu Ende\" \n</format>\n\n<format name=database>\n201-203 INFO \"%u %p\"\n301-304 INFO \"'%2s'\" ; quote with sql escapes\n</format>\n\n<channel>\nformat name : info\noutput : syslog( localhost )\nlevel : INFO\nsolicit : 101-104, default\n</channel>\n\n<channel>\nformat name : database-user\ntimestamp: local\nfile : user.log\nsolicit : 201-203\n</channel>\n\n<channel>\nformat name : database-session\nfile : session.log\nsolicit : 301-304\n</channel>\n\n; *** The default message channel ***\n<channel>\noutput : syslog( dba.mousetech.com )\nsolicit : 1**, 9**, default\n</channel>\n\n</logging>\n\nApology\n\nAlthough this scheme may appear elaborate, the internal\nrealization is fairly simple. I have far more concern that\nit may overwhelm someone who is new to the entire PostgreSQL\nsystem and is FAR more interested at that time in learning\nPOSTGRES! The plus side is that it's possible to amass a\nlibrary of mix-and-match blocks so that the more sordid\ndetails need not be recreated endlessly by every DBA in the\nworld.\n\nCredit where it's due\n\nAstute observers may have noticed that the user-definable\nmessage format is a blatant ripoff from Apache. The concept\nof logging channels I lifted from bind, the DNS utility.\nSome folding, spindling, stapling and/or mutilation may have\noccurred in the process.\n\nFeedback Needed\n\nThe details are still very much fluid, so your opinion\ncounts!!!! What's good? 
What's bad?\nWhat can be improved, and what should be immediately hauled\noff to the nearest toxic waste disposal center? Especially\nof interest is what the shape of the config file should be.\nIs the pseudo-HTML format shown good? Would an SQL\nstatement form be preferable? Maybe something like LISP or\nC? Or something entirely different? Please tell me! All I\nask is that it be YACC-parseable. Email your thoughts to me\nat [email protected], subject: PostgreSQL logging. Results\nwill be posted to pgsql-admin and pgsql-hackers mailing\nlists. Thank You.\n\n Tim Holloway\n MTS Associates, Inc.\n",
"msg_date": "Fri, 22 Oct 1999 20:05:46 -0400",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": true,
"msg_subject": "RFC: Industrial-strength logging (long message)"
},
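The solicitation and fallback rules in Tim's RFC above (channels subscribe to message-ID ranges; anything unsolicited is caught by the default channel) can be sketched in a few lines. This is illustrative Python only — the real implementation would live in the backend's C code — and the channel names are borrowed from the sample configuration:

```python
# Illustrative sketch of the RFC's routing rules: each channel
# "solicits" message-ID ranges; any message matched by no channel
# falls through to the default channel (stderr/syslog).

class Channel:
    def __init__(self, name, ranges):
        self.name = name
        self.ranges = ranges          # list of (low, high) message-ID ranges

    def solicits(self, msg_id):
        return any(lo <= msg_id <= hi for lo, hi in self.ranges)

def route(channels, default, msg_id):
    """Return the names of every channel that accepts msg_id."""
    hits = [ch.name for ch in channels if ch.solicits(msg_id)]
    # Lossless routing: anything unsolicited falls to the default channel.
    return hits if hits else [default.name]

channels = [
    Channel("info",    [(101, 104)]),   # server lifecycle (1xx)
    Channel("user",    [(201, 203)]),   # session messages (2xx)
    Channel("session", [(301, 304)]),   # statement messages (3xx)
]
default = Channel("default", [])        # catches everything unsolicited

print(route(channels, default, 102))    # ['info']
print(route(channels, default, 900))    # ['default'] -- unrouted 9xx
```

Routing to every soliciting channel, rather than stopping at the first match, is what lets one message feed both a human-readable log and a table-loader format at the same time.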
{
"msg_contents": "Thus spake Tim Holloway\n> Request For Comments: Towards an industrial-strength\n> logging facility\n\n<COMMENT>Woo hoo!</COMMENT>\n\nYes please. As soon as possible. I have been trying to figure out all\nsorts of kluges for this. I even considered putting something in PyGreSQL\nbut this is much better.\n\n> Design goals\n> \n> 1. Robustness. Adding logging should not cause the system to\n> become unstable.\n\nAbsolutely.\n\n> 2. Performance. Unless you're IBM at least, logging is a\n> means, not an end. The performance of the system\n> must not be degraded.\n\nIf performance takes a hit, could it be turned on and off with a flag\nor by the existence of the config file itself? That way people willing\nto pay for the logging can and those that need performance above all\ncan get it.\n\n> Postgres [123] 900 - Logging configuration file\n> \"/usr/local/pgsql/data/postgresql.conf\" was not found or\n> denied access. Using default logging.\n\nOr don't log - see above.\n\nThe one thing I would suggest is make sure that logs get date and time stamped.\n\nHow about the ability to send the log to a process instead of a file? I\nwould like to log on a separate machine but there are firewall considerations.\n\n> Although this scheme may appear elaborate, the internal\n> realization is fairly simple. I have far more concern that\n> it may overwhelm someone who is new to the entire PostgreSQL\n> system and is FAR more interested at that time in learning\n\nI don't see a problem here. Logging would be an advanced subject. No\none would have to deal with it. I think it is important that it not\nlog (or log to /dev/null) by default so that new users don't suddenly\nfind their disk space disappearing. Logging (especially if you log\nSELECTs) can use a lot of space.\n\nGood work. This was definitely needed.\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 22 Oct 1999 20:57:07 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RFC: Industrial-strength logging (long message)"
},
{
"msg_contents": "Tim Holloway <[email protected]> writes:\n> Request For Comments: Towards an industrial-strength\n> logging facility\n\nSounds pretty good overall.\n\n> This depends. I like a console-style listing, as my needs are\n> simple. Others would prefer that the log be itself a database.\n\nNote that logging into a table is harder than you might think.\nFor example: (1) The postmaster cannot touch tables at all, so\nno events detected in the postmaster could be logged that way.\n(2) No events detected during a transaction that ultimately\naborts will be successfully logged. (3) Logging events such as\n\"server failure\" would be quite risky --- if the server has\nsuffered internal state corruption then it could manage to\ncorrupt the log table entirely while it's trying to add its\nlast-dying-breath report.\n\nFortunately none of these problems apply to stderr, syslog,\nor plain-text-file reporting channels.\n\n> There are 3 types of information in the logging\n> configuration file (which may, but likely won't, be part of\n> pg_hba.conf)\n\nNo, it should definitely not be part of pg_hba.conf, it should\nbe a separate configuration file. (pg_hba.conf has a syntax too\nsimple to be easily extensible.)\n\nAnother possibility is to keep the config info in a system table, but\nthat would have a number of drawbacks (the postmaster cannot read\ntables, nor can a backend that's just starting up and hasn't finished\ninitialization). On the whole, a plain text file in the database\ndirectory is probably the best bet.\n\n> Logging info is read at startup. 
There may\n> exist signals that cause it to be reread, but not just yet.\n\nThere MUST exist a way to alter the logging level on-the-fly;\nIMHO this is a rock bottom, non negotiable requirement.\nA production system can't restart the postmaster just to tweak\nthe logging level up or down for investigation of a problem.\n\nWhether it's a signal or something else remains to be determined.\nWe have pretty nearly used up all the available signal numbers :-(.\nI suppose that whichever signal is currently used to trigger\nrereading of the pg_options configuration file could also trigger\nre-reading of the logging config file.\n\n> To avoid potential loss\n> of critical info, any message not explicitly routed at least\n> once gets reported on the default channel - stderr/syslog,\n> unless otherwise configured.\n\nHmm, so I'd have to explicitly discard every message I didn't want to\nhear about? I think that \"forced display\" like this should only happen\nfor high-severity messages, not for routine junk. There doesn't seem to\nbe any notion of message importance in your design, but I think there\nshould be. Most people would probably prefer to filter on an importance\nlevel, only occasionally resorting to calling out specific message types.\n\n> Especially of interest is what the shape of the config file should be.\n> Is the the pseudo-HTML format shown good?\n\nYou could do worse than to borrow BIND's syntax for log control.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Oct 1999 13:25:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RFC: Industrial-strength logging (long message) "
},
{
"msg_contents": "hi...\n\n> > This depends. I like a console-style listing, as my needs are\n> > simple. Others would prefer that the log be itself a database.\n> \n> Note that logging into a table is harder than you might think.\n\nunless i misunderstand, the concept is to design the logs such that it is\ntrivial to convert them into a database, even including tools to do this, not\nto actually create a database on the fly.\n\n i'm also supportive of including tools to run standard reports out-of-the-box\non these logs. using pgsql to massage its own logs is pretty sexy, imo =)\n\n> No, it should definitely not be part of pg_hba.conf, it should\n> be a separate configuration file. (pg_hba.conf has a syntax too\n> simple to be easily extensible.)\n> initialization). On the whole, a plain text file in the database\n> directory is probably the best bet.\n\nagreed...\n\n> There MUST exist a way to alter the logging level on-the-fly;\n> IMHO this is a rock bottom, non negotiable requirement.\n\nwhilst i don't think this is a MUST, it is EXTREMELY desirable and would make the\nlogging actually useful for large installations =)\n\n> Whether it's a signal or something else remains to be determined.\n> We have pretty nearly used up all the available signal numbers :-(.\n> I suppose that whichever signal is currently used to trigger\n> rereading of the pg_options configuration file could also trigger\n> re-reading of the logging config file.\n\nwhy not use pg_options for logging config?\n\n> Hmm, so I'd have to explicitly discard every message I didn't want to\n> hear about? I think that \"forced display\" like this should only happen\n> for high-severity messages, not for routine junk. There doesn't seem to\n> be any notion of message importance in your design, but I think there\n> should be. 
Most people would probably prefer to filter on an importance\n> level, only occasionally resorting to calling out specific message types.\n\nsystems i've dealt with in the past prioritize (as you mentioned) on a numeric\nscale to reflect \"importance\" and the logging level is by default set at a\ncertain \"height\"... increasing logging is as easy as changing the threshold..\nthis has been effective in the past...\n\nthe only problem with ONLY relying on thresholds is that it's a very coarse\ngrain method... so when you request a lower level of logging, you often get\ninundated with all SORTS of stuff you don't really care/want...\n\na hybridization between the two approaches might be best.. e.g., each type of\nmessage gets assigned a \"criticality\", between 1 and 10 perhaps. with 1 being\nmost critical, and 10 being least.. therefore, the higher you set your logging\nlevel, the more messages you get (logical?)\n\nthe default behaviour for each level (1..10) is to let everything through on\nthat level. but you can apply filters to limit the output...\nso you get the ability to define grossly with thresholds (1..10) and finely\n(and optionally) with message filters...\n\n> You could do worse than to borrow BIND's syntax for log control.\n\nmuch worse. =)\n\n-- \nAaron J. Seigo\nSys Admin\n",
"msg_date": "Sat, 23 Oct 1999 11:54:40 -0600",
"msg_from": "\"Aaron J. Seigo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Re: [HACKERS] RFC: Industrial-strength logging (long\n\tmessage)"
},
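Aaron's hybrid above — a coarse 1..10 criticality threshold plus optional fine-grained, per-message filters — might behave like this. A Python sketch only; the scale, function names, and message IDs are hypothetical illustrations:

```python
# Hybrid filtering sketch: a message passes if its criticality is at
# or above the configured threshold (1 = most critical, 10 = least),
# unless an explicit per-message filter suppresses it.

def passes(msg_id, criticality, threshold, suppressed=frozenset()):
    # Coarse grain: a higher threshold admits less-critical messages.
    if criticality > threshold:
        return False
    # Fine grain: optional filters drop specific IDs within the level.
    return msg_id not in suppressed

# Threshold 5: a criticality-3 message passes, a criticality-8 doesn't.
print(passes(301, 3, 5))                      # True
print(passes(301, 8, 5))                      # False
# Same threshold, but message 301 explicitly filtered out:
print(passes(301, 3, 5, suppressed={301}))    # False
```

The threshold does the gross selection and the filter set trims the flood, which is exactly the two-layer behaviour Aaron describes.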
{
"msg_contents": "On Sat, 23 Oct 1999, Aaron J. Seigo wrote:\n\n> > There MUST exist a way to alter the logging level on-the-fly;\n> > IMHO this is a rock bottom, non negotiable requirement.\n> \n> whilst i don't think this is MUST, it is EXTREMELY desirable and would make the\n> logging actually useful for large installations =)\n\nLet's re-iterate Tom here: There MUST exist a way ... someone *MUST* be\nable to change their configuration without having to physically stop/start\nthe server to effect the changes ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 23 Oct 1999 16:23:57 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Re: [HACKERS] RFC: Industrial-strength logging (long\n\tmessage)"
},
{
"msg_contents": "\n\nTom Lane wrote:\n> \n> Tim Holloway <[email protected]> writes:\n> > Request For Comments: Towards an industrial-strength\n> > logging facility\n> \n> Sounds pretty good overall.\n> \n> > This depends. I like a console-style listing, as my needs are\n> > simple. Others would prefer that the log be itself a database.\n> \n> Note that logging into a table is harder than you might think.\n> For example: (1) The postmaster cannot touch tables at all, so\n> no events detected in the postmaster could be logged that way.\n> (2) No events detected during a transaction that ultimately\n> aborts will be successfully logged. (3) Logging events such as\n> \"server failure\" would be quite risky --- if the server has\n> suffered internal state corruption then it could manage to\n> corrupt the log table entirely while it's trying to add its\n> last-dying-breath report.\n> \n> Fortunately none of these problems apply to stderr, syslog,\n> or plain-text-file reporting channels.\n\nThanks, Tom! I'll file this collection of wisdom to help keep\nme on the straight and\nnarrow. I guess I should have mentioned - at least in its\ninitial incarnation, cowardice\nforbids me to attempt reading or writing PostgreSQL tables\ndirectly. The logfile format is\ndesigned to be text and customizable. If one of those custom\nformats just happens to look\nlike loadable data, well..... :) \n\nBTW, cowardice also forbids me to attempt message filtering\nexcept by message ID or severity\njust yet (no \"log all requests from Cleveland to channel 2\"\nstuff). I will try to provide a\nstub for the adventurous, though. For everyone else, there's\nPerl.\n\n> > There are 3 types of information in the logging\n> > configuration file (which may, but likely won't, be part of\n> > pg_hba.conf)\n> \n> No, it should definitely not be part of pg_hba.conf, it should\n> be a separate configuration file. 
(pg_hba.conf has a syntax too\n> simple to be easily extensible.)\n\nOf more concern to me was that I THINK I saw pg_hba.conf\nbeing rescanned whenever security\nwas tested. I don't want to slow that down with a lot of\n\"one-time\" (see below) data.\n> \n> Another possibility is to keep the config info in a system table, but\n> that would have a number of drawbacks (the postmaster cannot read\n> tables, nor can a backend that's just starting up and hasn't finished\n> initialization). On the whole, a plain text file in the database\n> directory is probably the best bet.\n\nI think so too -- you just reinforced my feelings. There's\nno intrinsic\nbenefit, since the error messages and channel definition get\ncompiled\ninto memory, anyhow.\n> \n> > Logging info is read at startup. There may\n> > exist signals that cause it to be reread, but not just yet.\n> \n> There MUST exist a way to alter the logging level on-the-fly;\n> IMHO this is a rock bottom, non negotiable requirement.\n> A production system can't restart the postmaster just to tweak\n> the logging level up or down for investigation of a problem.\n\nOK, I'm convinced!\n> \n> Whether it's a signal or something else remains to be determined.\n> We have pretty nearly used up all the available signal numbers :-(.\n> I suppose that whichever signal is currently used to trigger\n> rereading of the pg_options configuration file could also trigger\n> re-reading of the logging config file.\n\nHow about via psql or other facilities passing a message\npacket?\nCan you think of any cases where this would fail? BETTER\nYET!\nIs there any reason why pg_options cannot be extended? 
It\nseems like\na natural fit to me - the only reason I didn't suggest it\noriginally was that\nit's been so low-key, I forgot it was there!\n\n> \n> > To avoid potential loss\n> > of critical info, any message not explicitly routed at least\n> > once gets reported on the default channel - stderr/syslog,\n> > unless otherwise configured.\n> \n> Hmm, so I'd have to explicitly discard every message I didn't want to\n> hear about? I think that \"forced display\" like this should only happen\n> for high-severity messages, not for routine junk. There doesn't seem to\n> be any notion of message importance in your design, but I think there\n> should be. Most people would probably prefer to filter on an importance\n> level, only occasionally resorting to calling out specific message types.\n\nGood point, but it was the second item on the message\noverride line:\n\n101 INFO \"Server started\"\n A\n-----+\n\nThe intent was actually more to ensure that if a new release\nadded new messages, they\nwouldn't suddenly pop up in places they'd cause the receptor\nto get indigestion (e.g. table loader)\nor have critical messages get lost entirely. I did the\npseudocode for lossless message routing\ntoday -- adding a dropout threshold doesn't look like a\nmajor problem.\n> \n> > Especially of interest is what the shape of the config file should be.\n> > Is the the pseudo-HTML format shown good?\n> \n> You could do worse than to borrow BIND's syntax for log control.\n\n*I* like it (I must - I stole almost everything else from\nthere!). That's what I meant\nby \"is a \"C\" format good?\". It works well as an extension to\npg_options. Just wanted to see what\nothers would be most comfortable with.\n\nAgain, thanks! This is a big help!\n\n Tim Holloway\n",
"msg_date": "Sat, 23 Oct 1999 20:47:13 -0400",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] RFC: Industrial-strength logging (long message)"
},
{
"msg_contents": "\n\nThe Hermit Hacker wrote:\n> \n> On Sat, 23 Oct 1999, Aaron J. Seigo wrote:\n> \n> > > There MUST exist a way to alter the logging level on-the-fly;\n> > > IMHO this is a rock bottom, non negotiable requirement.\n> >\n> > whilst i don't think this is MUST, it is EXTREMELY desirable and would make the\n> > logging actually useful for large installations =)\n> \n> Let's re-iterate Tom here: There MUST exist a way ... someone *MUST* be\n> able to change their configuration without having to physically stop/start\n> the server to affect the changes ...\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\nI think we have a consensus. Destroy and recreate logging\ndata structures/tasks on receipt of a\nsuitable event.\n\nFor simple things like log levels, though, I'd still like\nfeedback on\ndesirability and feasibility of altering basic logging\noptions through\n(authorized!) frontends. As a user, I get nervous when I\nhave to thread\nmy way past possibly-fragile unrelated items in a config\nfile when I'm trying\nto do a panic diagnosis. As an administrator, I get even\nMORE nervous if one\nof the less careful people I know were to be entrusted with\nthat task.\n\nAnother possible mode of controlling what's logged is to\nassign mask bits to various\nclasses of messages and allow the administrator to alter\nthe filter mask.\nAlthough, in truth, the channel design is pretty much the\nsame thing.\n\n Tim Holloway\n",
"msg_date": "Sat, 23 Oct 1999 21:07:52 -0400",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ADMIN] Re: [HACKERS] RFC: Industrial-strength logging\n\t(longmessage)"
},
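The mask-bit scheme Tim floats at the end of his message could be as simple as one bit per message class, keyed off the "century" grouping from the RFC. A Python sketch; the bit assignments are invented for illustration:

```python
# One bit per message class; the administrator's filter mask selects
# which classes get logged.  Bit positions are hypothetical.
CLASS_SERVER  = 1 << 0   # 1xx - server lifecycle
CLASS_USER    = 1 << 1   # 2xx - user sessions
CLASS_TXN     = 1 << 2   # 3xx - transactions
CLASS_EXPLAIN = 1 << 3   # 4xx - EXPLAIN results
CLASS_SYSTEM  = 1 << 4   # 9xx - general system alerts

def class_bit(msg_id):
    # Map a message ID to its class bit via its "century" digit.
    return {1: CLASS_SERVER, 2: CLASS_USER, 3: CLASS_TXN,
            4: CLASS_EXPLAIN, 9: CLASS_SYSTEM}[msg_id // 100]

def logged(msg_id, mask):
    return bool(class_bit(msg_id) & mask)

mask = CLASS_SERVER | CLASS_SYSTEM        # log only 1xx and 9xx
print(logged(101, mask))                  # True
print(logged(301, mask))                  # False
```

As Tim notes, this is really the channel-solicitation design in miniature: flipping mask bits at runtime is just a cheap way to alter which classes a channel accepts.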
{
"msg_contents": "Tim Holloway <[email protected]> writes:\n>> Note that logging into a table is harder than you might think.\n\n> I guess I should have mentioned - at least in its initial incarnation,\n> cowardice forbids me to attempt reading or writing PostgreSQL tables\n> directly. The logfile design is designed to be text and\n> customizable. If one of those custom formats just happens to look like\n> loadable data, well..... :)\n\nYeah, someone else suggested writing to a textfile and then having a\ncron task or something like that load the data into a table later on.\nThat seems workable, but you'd need some answer to the following\nproblem. Suppose that for some reason (like trying to diagnose a\ntransient problem) the log level is currently set high enough so that\nevery \"insert\" command generates a log entry. The appointed hour\ncomes 'round and your cron task fires off. Each time it copies data\nout of the logfile into the database, it is itself adding at least one\nmore entry to the logfile. Can you say \"infinite loop\"?\n\nI can think of a couple of possible workarounds, but the one that seems\nmost natural is to let the logging task override the system-wide logging\nlevel and set its own log level to something low. That ties right in\nwith your followup comment:\n\n> For simple things like log levels, though, I'd still like feedback on\n> desirablility and feasibility of altering basic logging options though\n> (authorized!) frontends.\n\nI think you were thinking here of altering the system-wide level through\na frontend command, but what I'm envisioning is allowing an SQL client\nto alter the log level for its own particular backend *without* any\nsystem-wide effects.\n\nEven that ability might need to be restricted to suitably-privileged\nusers, else it could be used to \"fly under the radar\" of an admin who\nwas using logging for security purposes. Perhaps I'm being overly\nparanoid, though. 
There are probably only a few message types that\nmight be of interest for security purposes, so maybe we could define the\nfiltering commands in such a way that those messages are not disablable\nfrom a client. Anyone else have strong feelings about this?\n\n>> No, it should definitely not be part of pg_hba.conf, it should\n>> be a separate configuration file. (pg_hba.conf has a syntax too\n>> simple to be easily extensible.)\n\n> Of more concern to me was that I THINK I saw pg_hba.conf being\n> rescanned whenever security was tested.\n\nThat's true, pg_hba.conf is currently reread on each connection attempt.\nWe probably ought to try to avoid that... but in any case I think there\nare obvious security reasons for keeping access authorization info\nstrictly separate from other configuration data.\n\n> Good point, but it was the second item on the message\n> override line:\n\n> 101 INFO \"Server started\"\n> A\n> -----+\n\nOops, I missed that...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 24 Oct 1999 12:15:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RFC: Industrial-strength logging (long message) "
},
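Tom's workaround for the loader loop — letting the log-loading session override the system-wide level for its own backend only — amounts to a per-session lookup in front of the global setting. Illustrative Python; in reality this would be backend-local state in the server, and the level names are just the ones from the RFC's sample config:

```python
# Sketch of the per-backend override: the log-loading session lowers
# its own log level so its INSERTs don't generate fresh log entries,
# regardless of the (possibly very noisy) system-wide setting.

system_level = "debug"                # system-wide: log everything
session_levels = {}                   # hypothetical per-backend overrides

def effective_level(session_id):
    return session_levels.get(session_id, system_level)

def should_log(session_id, msg_level):
    order = ["error", "warning", "notice", "info", "debug"]
    return order.index(msg_level) <= order.index(effective_level(session_id))

# An ordinary backend inherits the system-wide level:
print(should_log("backend-1", "info"))        # True
# The loader overrides its own level and stops echoing itself:
session_levels["loader"] = "error"
print(should_log("loader", "info"))           # False
```

Tom's security caveat applies here too: whether an arbitrary client may write its own entry into `session_levels` is exactly the "fly under the radar" question.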
{
"msg_contents": "\nWhy not do something similar to what we are doing with pg_shadow? If I\nremember the logic right, when you update pg_shadow, one of its \"steps\" is\nto dump it to a text file so that postmaster can read it? this should\nmake it easy for one user/database to have one logging set, while another\ndoesn't have it set at all...and should make it so that each database\n*should* theoretically log to different files/mechanisms?\n\nOn Sat, 23 Oct 1999, Tim Holloway wrote:\n\n> \n> \n> The Hermit Hacker wrote:\n> > \n> > On Sat, 23 Oct 1999, Aaron J. Seigo wrote:\n> > \n> > > > There MUST exist a way to alter the logging level on-the-fly;\n> > > > IMHO this is a rock bottom, non negotiable requirement.\n> > >\n> > > whilst i don't think this is MUST, it is EXTREMELY desirable and would make the\n> > > logging actually useful for large installations =)\n> > \n> > Let's re-iterate Tom here: There MUST exist a way ... someone *MUST* be\n> > able to change their configuration without having to physically stop/start\n> > the server to affect the changes ...\n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org\n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> \n> I think we have a consensus. Destroy and recreate logging\n> data structures/tasks on receipt of\n> suitable event.\n> \n> For simple things like log levels, though, I'd still like\n> feedback on\n> desirablility and feasibility of altering basic logging\n> options though\n> (authorized!) frontends. As a user, I get nervous when I\n> have to thread\n> my way past possibly-fragile unrelated items in a config\n> file when I'm trying\n> to do a panic diagnosis. 
As an administrator, I get even\n> MORE nervous if one\n> of the less careful people I know were to be entrusted with\n> that task.\n> \n> Another possible mode of controlling what's logged is to\n> assign mask bits to various\n> classes of messaages and allow the administrator to alter\n> the filter mask.\n> Although, in truth, the channel design is pretty much the\n> same thing.\n> \n> Tim Holloway\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 24 Oct 1999 14:12:27 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Re: [HACKERS] RFC: Industrial-strength logging\n\t(longmessage)"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Why not do something similar to what we are doing with pg_shadow? If I\n> remember the logic right, when you update pg_shadow, one ofits \"steps\" is\n> to dump it to a text file so that postmaster can read it?\n\nI thought about suggesting that, but IIRC the pg_shadow stuff doesn't\nreally *work* very well --- CREATE USER and friends know that they\nare supposed to dump the table to a textfile after modifying it,\nbut heaven help you if you try poking pg_shadow with vanilla SQL\ncommands. And I bet aborting a transaction after it does a CREATE USER\ndoesn't undo the changes to the flat file, either.\n\nSo, unless someone is feeling inspired to go rework the way the pg_shadow\nstuff is handled, I don't think it's a good model to emulate.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 24 Oct 1999 13:19:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Re: [HACKERS] RFC: Industrial-strength logging\n\t(longmessage)"
},
{
"msg_contents": "\n\nTom Lane wrote:\n>\n> Tim Holloway <[email protected]> writes:\n> >> Note that logging into a table is harder than you might think.\n>\n> > I guess I should have mentioned - at least in its initial incarnation,\n> > cowardice forbids me to attempt reading or writing PostgreSQL tables\n> > directly. The logfile design is designed to be text and\n> > customizable. If one of those custom formats just happens to look like\n> > loadable data, well..... :)\n>\n> Yeah, someone else suggested writing to a textfile and then having a\n> cron task or something like that load the data into a table later on.\n> That seems workable, but you'd need some answer to the following\n> problem. Suppose that for some reason (like trying to diagnose a\n> transient problem) the log level is currently set high enough so that\n> every \"insert\" command generates a log entry. The appointed hour\n> comes 'round and your cron task fires off. Each time it copies data\n> out of the logfile into the database, it is itself adding at least one\n> more entry to the logfile. Can you say \"infinite loop\"?\n\nYou noticed that too, eh? You might want to take a look at the archived\npgsql-admin postings about the middle of last week. Since I'm working on the\npremise that all log files are text files and there's already been the desire\nexpressed that they be rotatable, it's simplest to piggyback the load function\nonto rotation: start a new file and load the prior one. It introduces\nsome latency into the log tables (forcing rotation can obviously cure this),\nbut should eliminate the log recursion by deferring the entries.\nHmmmm. Maybe the initial log filter WON'T be just a stub!\n\n>\n> > For simple things like log levels, though, I'd still like feedback on\n> > desirablility and feasibility of altering basic logging options though\n> > (authorized!) 
frontends.\n>\n> I think you were thinking here of altering the system-wide level through\n> a frontend command, but what I'm envisioning is allowing an SQL client\n> to alter the log level for its own particular backend *without* any\n> system-wide effects.\n>\n> Even that ability might need to be restricted to suitably-privileged\n> users, else it could be used to \"fly under the radar\" of an admin who\n> was using logging for security purposes. Perhaps I'm being overly\n> paranoid, though. There are probably only a few message types that\n> might be of interest for security purposes, so maybe we could define the\n> filtering commands in such a way that those messages are not disablable\n> from a client. Anyone else have strong feelings about this?\n\nI seem to read a desire to log frontend action in what you're saying.\nI guess I should define my ambitions. Initially, at least, all I'm trying\nto log is server activity, and that from an administrative point of view.\nI don't plan to subsume the debugging system, because:\n\n\t1. IMHO, it works fine as is (excepting the lack of timestamping)\n\t2. The debugging messages weren't designed to fit the constraints of\n\tthe logger. That would require reworking dozens of messages all over\n\tthe program. I'd almost certainly break something critical.\n\nOf course, the line between event logging and debugging is pretty fuzzy and\napt to change, depending on what you need at the moment.\n\nI didn't consider logging as related to front-ends, since: they're more\nof a programmer's problem; there exist a multitude of them, and some -\nlike ODBC - already have their own logging. I'm open to suggestion, though\nI think that's too big a bite to chew just yet.\n",
"msg_date": "Sun, 24 Oct 1999 13:48:20 -0400",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] RFC: Industrial-strength logging (long message)"
},
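The rotate-then-load scheme Tim describes can be mocked up as follows (Python sketch; the file names and the load callback are hypothetical stand-ins for whatever would really COPY into a log table). Because the load reads only the closed file, any log entries the load itself generates land in the fresh active file and are deferred to the next rotation, breaking the recursion:

```python
# Sketch of rotate-then-load: new entries go to a fresh file, and
# only the *closed* file is loaded into the database, so entries
# generated by the load itself are deferred to the next rotation.
import os
import tempfile

def rotate_and_load(log_path, load):
    closed = log_path + ".1"
    os.rename(log_path, closed)        # retire the active log
    with open(log_path, "w"):          # recreate an empty active log
        pass
    with open(closed) as f:
        load(f.read())                 # e.g. COPY into a log table

tmp = tempfile.mkdtemp()
active = os.path.join(tmp, "session.log")
with open(active, "w") as f:
    f.write("301 SELECT ...\n")

loaded = []
rotate_and_load(active, loaded.append)
print(loaded)                          # ['301 SELECT ...\n']
print(open(active).read() == "")       # True -- active log is fresh
```

The latency Tim mentions is visible here: an entry written just after rotation sits unloaded until the next cycle, unless rotation is forced.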
{
"msg_contents": "On Oct 23, Tim Holloway mentioned:\n\n> I think we have a consensus. Destroy and recreate logging data\n> structures/tasks on receipt of suitable event.\n> \n> For simple things like log levels, though, I'd still like feedback on\n> desirablility and feasibility of altering basic logging options though\n> (authorized!) frontends. As a user, I get nervous when I have to\n> thread my way past possibly-fragile unrelated items in a config file\n> when I'm trying to do a panic diagnosis. As an administrator, I get\n> even MORE nervous if one of the less careful people I know were to be\n> entrusted with that task.\n\nWhat about\nSET LOGLEVEL TO <something>;\nSET LOGDETAIL TO <something>;\nor the like. You could use pg_shadow.usesuper as a security stipulation.\nUsing something like a signal to do this is probably overkill, especially\nsince there are hardly any left, and it's also infinitely less intuitive\nand flexible.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sun, 24 Oct 1999 22:15:51 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RFC: Industrial-strength logging"
},
{
"msg_contents": "hi...\n\n> What about\n> SET LOGLEVEL TO <something>;\n> SET LOGDETAIL TO <something>;\n> or the like. You could use pg_shadow.usesuper as a security stipulation.\n> Using something like a signal to do this is probably overkill, especially\n> since there are hardly any left, and it's also infinitely less intuitive\n> and flexible.\n\nthis would be done from psql? if so, here's a query i have: are there any plans\nto separate the admin functions out of psql and into another separate tool? \n\ni have a queasiness with general users having access to a tool that can\ndo admin tasks (even if they supposedly don't have a superuser account).\n\nit also seems much cleaner to have admin in one tool and data interaction in\nanother.\n\nam i off track here?\n\n-- \nAaron J. Seigo\nSys Admin\n",
"msg_date": "Mon, 25 Oct 1999 15:50:59 -0600",
"msg_from": "\"Aaron J. Seigo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RFC: Industrial-strength logging"
},
{
"msg_contents": "On Mon, 25 Oct 1999, Aaron J. Seigo wrote:\n\n> hi...\n> \n> > What about\n> > SET LOGLEVEL TO <something>;\n> > SET LOGDETAIL TO <something>;\n> > or the like. You could use pg_shadow.usesuper as a security stipulation.\n> > Using something like a signal to do this is probably overkill, especially\n> > since there are hardly any left, and it's also infinitely less intuitive\n> > and flexible.\n> \n> this would be done from psql? if so, here's a query i have: are there any plans\n> to seperate the admin functions out of psql and into another seperate tool? \n> \n> i have a queasyness with general users having access to a tool that can\n> do admin takss (even if they supposedly don't have a superuser account).\n\nThere is no such thing, actually...all \"admin commands\" are separate SQL\nqueries...psql is merely one of many interfaces that allows one to talk to\nand send queries to the backend...what the backend then does with the\nquery is where the security lies...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 25 Oct 1999 22:01:05 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RFC: Industrial-strength logging"
},
{
"msg_contents": "> I think we have a consensus. Destroy and recreate logging\n> data structures/tasks on receipt of\n> suitable event.\n> \n> For simple things like log levels, though, I'd still like\n> feedback on\n> desirablility and feasibility of altering basic logging\n> options though\n> (authorized!) frontends. As a user, I get nervous when I\n> have to thread\n> my way past possibly-fragile unrelated items in a config\n> file when I'm trying\n> to do a panic diagnosis. As an administrator, I get even\n> MORE nervous if one\n> of the less careful people I know were to be entrusted with\n> that task.\n\nOne more item I have not heard is that you can create virtual tables that\nlook like tables but return data about users, queries on SELECT.\n\nInformix does this. Allows you to get info, without the need for\nstorage. Not good for every case, but an interesting idea sometimes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 01:27:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Re: [HACKERS] RFC: Industrial-strength logging\n\t(longmessage)"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> \n> One more item I have not heard is that you can create virtual table that\n> look like tables but return data about users, queries on SELECT.\n> \n> Informix does this. Allows you to get info, without the need for\n> storage. Not good for every case, but an interesting idea sometimes.\n> \n\nThis should come naturally after the function interface is updated\nto enable it to return cursors.\n\nA very desirable feature, but I'm not sure anyone is actually working on it.\n\n-------------------\nHannu\n",
"msg_date": "Tue, 26 Oct 1999 07:27:55 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Re: [HACKERS] RFC: Industrial-strength logging\n\t(longmessage)"
}
] |
[
{
"msg_contents": "Whilst cleaning up query-length dependencies, I noticed that our\nhandling of maximum file pathname lengths is awfully messy.\n\nDifferent parts of the system rely on no fewer than four different\nsymbols that they import from several different system header\nfiles (any one of which might not exist on a particular platform):\n\tMAXPATHLEN, _POSIX_PATH_MAX, MAX_PATH, PATH_MAX\nAnd on top of that, postgres.h defines MAXPGPATH which is used\nby yet other places.\n\nOn my system, _POSIX_PATH_MAX = 255, PATH_MAX = 1023, MAXPATHLEN = 1024\n(a nearby Linux box is almost but not quite the same) whereas MAXPGPATH\nis 128. So there is absolutely no consistency to the pathname length\nlimits being imposed in different parts of Postgres.\n\nAFAIK, most or all flavors of Unix have kernel limits on the maximum\nlength of a pathname that will be accepted by the kernel's file-access\ncalls (it's 1024 on my box). So I don't feel any need to remove\nhardwired limits on pathname lengths in favor of indefinitely-expansible\nbuffers. But it does seem that a little more consistency in the\nhardwired limits is called for.\n\nFrom the information I have, it seems that the various allegedly-\nstandard #defines for max pathname length are not too standard,\nand I don't think that Postgres internal buffers ought to constrain\npath lengths to much less than the kernel limit (so using the\nseemingly \"standard\" _POSIX_PATH_MAX symbol would be a loser).\nSo my inclination is to define MAXPGPATH as 1024 in config.h, and\nremove all uses of the other four symbols in favor of MAXPGPATH.\nThat would at least provide a single point of tweaking for anyone\nwho didn't like the value of 1024.\n\nDoes anyone have a better idea? Is it worth trying to extract a\nsystem limit on pathlength during configure, rather than leaving\nMAXPGPATH as a manual configuration item --- and if so, exactly how\nshould configure go about it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Oct 1999 00:46:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Path-length follies"
},
{
"msg_contents": "> Whilst cleaning up query-length dependencies, I noticed that our\n> handling of maximum file pathname lengths is awfully messy.\n> \n> Different parts of the system rely on no fewer than four different\n> symbols that they import from several different system header\n> files (any one of which might not exist on a particular platform):\n> \tMAXPATHLEN, _POSIX_PATH_MAX, MAX_PATH, PATH_MAX\n> And on top of that, postgres.h defines MAXPGPATH which is used\n> by yet other places.\n> \n> On my system, _POSIX_PATH_MAX = 255, PATH_MAX = 1023, MAXPATHLEN = 1024\n> (a nearby Linux box is almost but not quite the same) whereas MAXPGPATH\n> is 128. So there is absolutely no consistency to the pathname length\n> limits being imposed in different parts of Postgres.\n> \n> AFAIK, most or all flavors of Unix have kernel limits on the maximum\n> length of a pathname that will be accepted by the kernel's file-access\n> calls (it's 1024 on my box). So I don't feel any need to remove\n> hardwired limits on pathname lengths in favor of indefinitely-expansible\n> buffers. But it does seem that a little more consistency in the\n> hardwired limits is called for.\n> \n> >From the information I have, it seems that the various allegedly-\n> standard #defines for max pathname length are not too standard,\n> and I don't think that Postgres internal buffers ought to constrain\n> path lengths to much less than the kernel limit (so using the\n> seemingly \"standard\" _POSIX_PATH_MAX symbol would be a loser).\n> So my inclination is to define MAXPGPATH as 1024 in config.h, and\n> remove all uses of the other four symbols in favor of MAXPGPATH.\n> That would at least provide a single point of tweaking for anyone\n> who didn't like the value of 1024.\n> \n> Does anyone have a better idea? 
Is it worth trying to extract a\n> system limit on pathlength during configure, rather than leaving\n> MAXPGPATH as a manual configuration item --- and if so, exactly how\n> should configure go about it?\n\nI don't like the 128 or 256 numbers, but isn't there a predefined place\nfor this value in standard system headers?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 23 Oct 1999 15:38:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Path-length follies"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Does anyone have a better idea? Is it worth trying to extract a\n>> system limit on pathlength during configure, rather than leaving\n>> MAXPGPATH as a manual configuration item --- and if so, exactly how\n>> should configure go about it?\n\n> I don't like the 128 or 256 numbers, but isn't there a predefined place\n> for this value in standard system headers?\n\nThere are too many of 'em, actually --- I had never realized this\nbefore, but there are three or four *different* \"standard\" symbols that\nall purport to be max pathlength. On my box they actually have three\ndifferent values, which doesn't leave a warm feeling in the stomach.\n\nAs I was just commenting off-list, we do not need to enforce the local\nkernel's pathlength limit --- it's perfectly capable of doing that for\nitself. All we really need to do is make sure we are not a bottleneck\npreventing reasonable usage. So, although I was thinking last night\nthat a configure test might be a good idea, I now believe it's a waste\nof cycles. (It could even be counterproductive, if it seized on a\nbogusly small value, as _POSIX_PATH_MAX appears to be on both of the\nsystems I've checked.) Let's just set the value at something generous\nlike 1K and forget it. But we should use a consistent, tweakable-in-\none-place value, just in case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Oct 1999 18:28:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Path-length follies "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Does anyone have a better idea? Is it worth trying to extract a\n> >> system limit on pathlength during configure, rather than leaving\n> >> MAXPGPATH as a manual configuration item --- and if so, exactly how\n> >> should configure go about it?\n> \n> > I don't like the 128 or 256 numbers, but isn't there a predefined place\n> > for this value in standard system headers?\n> \n> There are too many of 'em, actually --- I had never realized this\n> before, but there are three or four *different* \"standard\" symbols that\n> all purport to be max pathlength. On my box they actually have three\n> different values, which doesn't leave a warm feeling in the stomach.\n\nCouldn't we pick one of the standard ones for use in setting a value for\nour own define, or at least test one of the standard ones against ours\nto see that it is either equal or greater than the 1024 we chose?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 25 Oct 1999 23:55:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Path-length follies"
},
{
"msg_contents": "This came and went already but I did some research on it and it doesn't\nlook as bad as it seems.\n\nOn 1999-10-23, Tom Lane mentioned:\n\n> Different parts of the system rely on no fewer than four different\n> symbols that they import from several different system header\n> files (any one of which might not exist on a particular platform):\n> \tMAXPATHLEN, _POSIX_PATH_MAX, MAX_PATH, PATH_MAX\n> And on top of that, postgres.h defines MAXPGPATH which is used\n> by yet other places.\n> \n> On my system, _POSIX_PATH_MAX = 255, PATH_MAX = 1023, MAXPATHLEN = 1024\n> (a nearby Linux box is almost but not quite the same) whereas MAXPGPATH\n> is 128. So there is absolutely no consistency to the pathname length\n> limits being imposed in different parts of Postgres.\n\nThe Posix.1 symbol is PATH_MAX, which, in theory, describes the \"uniform\nsystem limit\". The symbol _POSIX_PATH_MAX defines the minimum which\nPATH_MAX is required to be on any Posix system, therefore that value\nshould be fixed at 255 in the whole world. (Which yields code such as\nthis:\n#ifndef MAXPATHLEN\n#define MAXPATHLEN _POSIX_PATH_MAX \n#endif\n--from the actual source-- conceptually incorrect.)\n\n>From my linux/limits.h (which propagates through to limits.h):\n#define PATH_MAX 4095 /* # chars in a path name */\n\nIn addition there is FILENAME_MAX, which is even defined if there is, in\nfact, no limit on the filename length, in which case it is set to some\nreally large number. (Thus it is no good for allocating fixed size\nbuffers.) This seems to be an ANSI C symbol for stdio sort of stuff, not a\nkernel thing. (And of course in the GNU \"Any Day Now\" System, there is no\nsuch limit. ;)\n\nMAXPATHLEN is the BSD name for PATH_MAX. From my sys/param.h:\n/* BSD names for some <limits.h> values. */\n . . 
.\n#define MAXPATHLEN PATH_MAX\n\nAlthough this seems to be the most popular thing to use, I can hardly see\nit referenced in any documentation at all on this machine.\n\nIf one wishes to be anally proper one could use pathconf() to find out the\nlimits on the fly as they apply to a particular file system.\n\nFinally, the symbol MAX_PATH is not described anywhere and I didn't find\nit in the source either.\n\nWhich would lead one to suggest the following as portable as possible way\nout:\n\n#if defined(PATH_MAX)\n #define MAXPGPATH PATH_MAX\n#else\n #if defined(MAXPATHLEN)\n #define MAXPGPATH MAXPATHLEN\n #else\n #define MAXPGPATH 255 /* because this is the lowest common\n\t\t\t denominator on Posix systems */\n #endif\n#endif\n\nThat ought to cover all bases really. And if your system doesn't have\neither Posix or BSD includes (whoops!) you can tweak it yourself. Put that\nin config.h and everyone is happy.\n\nThen again, I would be even happier if we just used PATH_MAX and not\ninvent a PostgreSQL-specific constant for everything in the world, but I'm\nnot sure about the Posix'ness of other systems in the crowd out there. How\nabout simply:\n\n#ifndef PATH_MAX\n#define PATH_MAX 255\n#endif\n\nin c.h (not config.h) -- end of story.\n\n(Of course the code would actually have to use this as well. Currently,\nMAXPATHLEN is most widespread.)\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 5 Nov 1999 01:35:50 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Path-length follies"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Which would lead one to suggest the following as portable as possible way\n> out:\n\n> #if defined(PATH_MAX)\n> #define MAXPGPATH PATH_MAX\n> #else\n> #if defined(MAXPATHLEN)\n> #define MAXPGPATH MAXPATHLEN\n> #else\n> #define MAXPGPATH 255 /* because this is the lowest common\n> \t\t\t denominator on Posix systems */\n> #endif\n> #endif\n\nI don't think this would be an improvement. The main problem with it is\nthat the above code could yield different values for MAXPGPATH *on the\nsame system* depending on which system include file(s) you had included\nbefore reading config.h. Of course it would be a very bad thing if\ndifferent Postgres source files had different ideas about the value of\nMAXPGPATH --- it could lead to different interpretations of a struct\nlayout, for example. (I'm not sure that we actually have any such\nstructs, but there's obviously potential for trouble.)\n\nIf it were really important to have MAXPGPATH exactly equal to the\nlocal filename length limit, I'd be more interested in trying to\nconfigure it just so. One possibility would be to have the configure\nscript do the equivalent of the above logic once at configure time,\nand then put the nailed-down value into config.h. But I can't see\nthat it's worth the trouble. As long as we are not getting in people's\nway with an unreasonably small limit on pathlengths, it doesn't much\nmatter exactly what the limit is. IMHO anyway.\n\nHowever, this line of thought does lead to something that maybe we\nshould change: right now, most of the source files are set up as\n\n\t#include <all necessary system header files>\n\n\t#include \"postgres.h\"\n\n\t#include \"necessary postgres headers\"\n\nwhere config.h is read as part of postgres.h. I wonder whether it's\nsuch a good idea to have different source files reading different\nsets of system headers before config.h. 
Maybe the standard order\nought to be\n\n\t#include \"postgres.h\"\n\n\t#include <all necessary system header files>\n\n\t#include \"necessary postgres headers\"\n\nso that config.h is always read in a uniform context.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 06 Nov 1999 12:11:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Path-length follies "
},
{
"msg_contents": "On 1999-11-06, Tom Lane mentioned:\n\n> Peter Eisentraut <[email protected]> writes:\n> > Which would lead one to suggest the following as portable as possible way\n> > out:\n> \n> > #if defined(PATH_MAX)\n> > #define MAXPGPATH PATH_MAX\n> > #else\n> > #if defined(MAXPATHLEN)\n> > #define MAXPGPATH MAXPATHLEN\n> > #else\n> > #define MAXPGPATH 255 /* because this is the lowest common\n> > \t\t\t denominator on Posix systems */\n> > #endif\n> > #endif\n> \n> I don't think this would be an improvement. The main problem with it is\n\nThat's why I suggested:\n\n#ifndef PATH_MAX\n#define PATH_MAX 255\n#endif\n\ninstead. Then remove all references to MAXPATHLEN and MAXPGPATH. That can\nbe done rather quickly. The above is standardized and then we'll have a\nuniform limit throughout the source, that should be equal to the actual\nsystem limit on 99% of all systems. And it makes the source simpler along\nthe way. As it is right now, the vast majority of files doesn't use\nMAXPGPATH anyway.\n\nOf course, this is a stupid topic to discuss, but please consider the\npoint.\n\n\n> However, this line of thought does lead to something that maybe we\n> should change: right now, most of the source files are set up as\n> \n> \t#include <all necessary system header files>\n> \n> \t#include \"postgres.h\"\n> \n> \t#include \"necessary postgres headers\"\n> \n> where config.h is read as part of postgres.h. I wonder whether it's\n> such a good idea to have different source files reading different\n> sets of system headers before config.h. Maybe the standard order\n> ought to be\n> \n> \t#include \"postgres.h\"\n> \n> \t#include <all necessary system header files>\n> \n> \t#include \"necessary postgres headers\"\n> \n> so that config.h is always read in a uniform context.\n\nDefinitely.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 8 Nov 1999 22:19:50 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Path-length follies "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> As it is right now, the vast majority of files doesn't use\n> MAXPGPATH anyway.\n\n?? I think you are looking at out-of-date sources, because I changed\neverything to use MAXPGPATH a week or two ago...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Nov 1999 21:34:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Path-length follies "
}
] |
[
{
"msg_contents": "Hello!\n\n Eric Raymond responds to Bill Joy... RMS said its words too:\n http://www.upsidetoday.com/texis/mvm/richard_brandt?id=380f44bb0\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Sat, 23 Oct 1999 12:44:35 +0000 (GMT)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "GPL vs BSD vs SCSL"
}
] |
[
{
"msg_contents": ">> \n>> Oops! :)\n>> \n>> Okay, I guess the motivation behind this was the question \"Where is\n>> that damn COPYRIGHT file?\", or maybe I've just been reading the\n>> appendix to the GPL too often.\nThe thing is this: it's not GPL'd ;-)\n\nMikeA\n",
"msg_date": "Sat, 23 Oct 1999 22:09:30 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] New psql startup banner"
}
] |
[
{
"msg_contents": "\n\tIs there something syslog does not do? multi level is\nbuilt in. Use a cron task to record back to a postgres db if needed/wanted.\nsyslog.conf lets you decide what to log (to some degree).\n\n\tWhy re-invent things? Most admins sort-of understand syslog.\n\n-- \n\tStephen N. Kogge\n\[email protected]\n\thttp://www.uimage.com\n\n\n",
"msg_date": "Sat, 23 Oct 1999 16:30:51 -0400",
"msg_from": "Stephen Kogge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] RFC: Industrial-strength logging "
}
] |
[
{
"msg_contents": "This is a followup to a message I wrote in June about reworking the fmgr\ninterface. I've had a couple of good ideas (I think ;-)) since then,\nbut there are some parts of the proposal that still need work before\nimplementation can begin.\n\nI could particularly use some feedback from Jan and anyone else who's\nworked with function-call handlers: does this design eliminate the\nkluges that you've had to use in the past? If not, what else is needed?\n\n\t\t\tregards, tom lane\n\n\nProposal for function-manager redesign\n--------------------------------------\n\nWe know that the existing mechanism for calling Postgres functions needs\nto be redesigned. It has portability problems because it makes\nassumptions about parameter passing that violate ANSI C; it fails to\nhandle NULL arguments and results cleanly; and \"function handlers\" that\nsupport a class of functions (such as fmgr_pl) can only be done via a\nreally ugly, non-reentrant kluge. (Global variable set during every\nfunction call, forsooth.) Here is a proposal for fixing these problems.\n\nIn the past, the major objections to redoing the function-manager\ninterface have been (a) it'll be quite tedious to implement, since every\nbuilt-in function and everyplace that calls such functions will need to\nbe touched; (b) such wide-ranging changes will be difficult to make in\nparallel with other development work; (c) it will break existing\nuser-written loadable modules that define \"C language\" functions. While\nI have no solution to the \"tedium\" aspect, I believe I see an answer to\nthe other problems: by use of function handlers, we can support both old\nand new interfaces in parallel for both callers and callees, at some\nsmall efficiency cost for the old styles. That way, most of the changes\ncan be done on an incremental file-by-file basis --- we won't need a\n\"big bang\" where everything changes at once. 
Support for callees\nwritten in the old style can be left in place indefinitely, to provide\nbackward compatibility for user-written C functions.\n\n\nThe new function-manager interface\n----------------------------------\n\nThe core of the new design is revised data structures for representing\nthe result of a function lookup and for representing the parameters\npassed to a specific function invocation. (We want to keep function\nlookup separate, since many parts of the system apply the same function\nover and over; the lookup overhead should be paid once per query, not\nonce per tuple.)\n\n\nWhen a function is looked up in pg_proc, the result is represented as\n\ntypedef struct\n{\n PGFunction fn_addr; /* pointer to function or handler to be called */\n Oid fn_oid; /* OID of function (NOT of handler, if any) */\n int fn_nargs; /* 0..MAXFMGRARGS, or -1 if variable arg count */\n void *fn_extra; /* extra space for use by handler */\n} FunctionLookupInfoData;\ntypedef FunctionLookupInfoData* FunctionLookupInfo;\n\nFor an ordinary built-in function, fn_addr is just the address of the C\nroutine that implements the function. Otherwise it is the address of a\nhandler for the class of functions that includes the target function.\nThe handler can use the function OID and perhaps also the fn_extra slot\nto find the specific code to execute. (fn_oid = InvalidOid can be used\nto denote a not-yet-initialized FunctionLookupInfoData struct. fn_extra\nwill always be NULL when a FunctionLookupInfoData is first filled by the\nfunction lookup code, but a function handler could set it to avoid\nmaking repeated lookups of its own when the same FunctionLookupInfoData\nis used repeatedly during a query.) 
fn_nargs is the number of arguments\nexpected by the function.\n\nFunctionLookupInfo replaces the present FmgrInfo structure (but I'm\ngiving it a new name so that the old struct definition can continue\nto exist during the transition phase).\n\n\nDuring a call of a function, the following data structure is created\nand passed to the function:\n\ntypedef struct\n{\n FunctionLookupInfo flinfo; /* ptr to lookup info used for this call */\n bool isnull; /* input/output flag, see below */\n int nargs; /* # arguments actually passed */\n Datum arg[MAXFMGRARGS]; /* Arguments passed to function */\n bool argnull[MAXFMGRARGS]; /* T if arg[i] is actually NULL */\n} FunctionCallInfoData;\ntypedef FunctionCallInfoData* FunctionCallInfo;\n\nNote that all the arguments passed to a function (as well as its result\nvalue) will now uniformly be of type Datum. As discussed below, callers\nand callees should apply the standard Datum-to-and-from-whatever macros\nto convert to the actual argument types of a particular function. The\nvalue in arg[i] is unspecified when argnull[i] is true.\n\nIt is generally the responsibility of the caller to ensure that the\nnumber of arguments passed matches what the callee is expecting: except\nfor callees that take a variable number of arguments, the callee will\ntypically ignore the nargs field and just grab values from arg[].\n\nThe meaning of the struct elements should be pretty obvious with the\nexception of isnull. isnull must be set by the caller to the logical OR\nof the argnull[i] flags --- ie, isnull is true if any argument is NULL.\n(Of course, isnull is false if nargs == 0.) On return from the\nfunction, isnull is the null flag for the function result: if it is true\nthe function's result is NULL, regardless of the actual function return\nvalue. 
Overlapping the input and output flags in this way provides a\nsimple, convenient, fast implementation for the most common case of a\n\"strict\" function (whose result is NULL if any input is NULL):\n\n\tif (finfo->isnull)\n\t return (Datum) 0; /* specific value doesn't matter */\n\n\t... else do normal calculation ignoring argnull[] ...\n\nNon-strict functions can easily be implemented; they just need to check\nthe individual argnull[] flags and set the appropriate isnull value\nbefore returning.\n\nFunctionCallInfo replaces FmgrValues plus a bunch of ad-hoc parameter\nconventions.\n\n\nCallees, whether they be individual functions or function handlers,\nshall always have this signature:\n\nDatum function (FunctionCallInfo finfo);\n\nwhich is represented by the typedef\n\ntypedef Datum (*PGFunction) (FunctionCallInfo finfo);\n\nThe function is responsible for setting finfo->isnull appropriately\nas well as returning a result represented as a Datum. Note that since\nall callees will now have exactly the same signature, and will be called\nthrough a function pointer declared with exactly that signature, we\nshould have no portability or optimization problems.\n\nWhen the function's result type is pass-by-reference, the result value\nmust always be stored in freshly-palloc'd space (it can't be a constant\nor a copy of an input pointer). This rule will eventually allow\nautomatic reclamation of storage space during expression evaluation.\n\n\nFunction coding conventions\n---------------------------\n\nAs an example, int4 addition goes from old-style\n\nint32\nint4pl(int32 arg1, int32 arg2)\n{\n return arg1 + arg2;\n}\n\nto new-style\n\nDatum\nint4pl(FunctionCallInfo finfo)\n{\n if (finfo->isnull)\n return (Datum) 0; /* value doesn't really matter ... 
*/\n /* we can ignore flinfo, nargs and argnull */\n\n return Int32GetDatum(DatumGetInt32(finfo->arg[0]) +\n DatumGetInt32(finfo->arg[1]));\n}\n\nThis is, of course, much uglier than the old-style code, but we can\nimprove matters with some well-chosen macros for the boilerplate parts.\nWhat we actually end up writing might look something like\n\nDatum\nint4pl(PG_FUNCTION_ARGS)\n{\n PG_STRICT_FUNCTION;\t\t\t/* encapsulates null check */\n {\n PG_ARG1_INT32;\n PG_ARG2_INT32;\n\n PG_RESULT_INT32( arg1 + arg2 );\n }\n}\n\nwhere the macros expand to things like\n\"int32 arg1 = DatumGetInt32(finfo->arg[0])\"\nand \"return Int32GetDatum( x )\". I don't yet have a detailed proposal\nfor convenience macros for function authors, but I think it'd be well\nworth while to define some.\n\nFor the standard pass-by-reference types (int8, float4, float8) these\nmacros should also hide the indirection and space allocation involved,\nso that the function's code is not explicitly aware that the types are\npass-by-ref. This will allow future conversion of these types to\npass-by-value on machines where it's feasible to do that. (For example,\non an Alpha it's pretty silly to make int8 be pass-by-ref, since Datum\nis going to be 64 bits anyway.)\n\n\nCall-site coding conventions\n----------------------------\n\nThere are many places in the system that call either a specific function\n(for example, the parser invokes \"textin\" by name in places) or a\nparticular group of functions that have a common argument list (for\nexample, the optimizer invokes selectivity estimation functions with\na fixed argument list). 
These places will need to change, but we should\ntry to avoid making them significantly uglier than before.\n\nPlaces that invoke an arbitrary function with an arbitrary argument list\ncan simply be changed to fill a FunctionCallInfoData structure directly;\nthat'll be no worse and possibly cleaner than what they do now.\n\nWhen invoking a specific built-in function by name, we have generally\njust written something like\n\tresult = textin ( ... args ... )\nwhich will not work after textin() is converted to the new call style.\nI suggest that code like this be converted to use \"helper\" functions\nthat will create and fill in a FunctionCallInfoData struct. For\nexample, if textin is being called with one argument, it'd look\nsomething like\n\tresult = DirectFunctionCall1(textin, PointerGetDatum(argument));\nThese helper routines will have declarations like\n\tDatum DirectFunctionCall2(PGFunction func, Datum arg1, Datum arg2);\nNote it will be the caller's responsibility to convert to and from\nDatum; appropriate conversion macros should be used.\n\nThe DirectFunctionCallN routines will not bother to fill in\nfinfo->flinfo (indeed cannot, since they have no idea about an OID for\nthe target function); they will just set it NULL. This is unlikely to\nbother any built-in function that could be called this way. Note also\nthat this style of coding cannot check for a NULL result (it couldn't\nbefore, either!). We could reasonably make the helper routines elog an\nerror if they see that the function returns a NULL.\n\n(Note: direct calls like this will have to be changed at the same time\nthat the called routines are changed to the new style. But that will\nstill be a lot less of a constraint than a \"big bang\" conversion.)\n\nWhen invoking a function that has a known argument signature, we have\nusually written either\n\tresult = fmgr(targetfuncOid, ... args ... );\nor\n\tresult = fmgr_ptr(FmgrInfo *finfo, ... args ... 
);\ndepending on whether an FmgrInfo lookup has been done yet or not.\nThis kind of code can be recast using helper routines, in the same\nstyle as above:\n\tresult = OidFunctionCall1(funcOid, PointerGetDatum(argument));\n\tresult = FunctionCall2(funcCallInfo,\n\t PointerGetDatum(argument),\n\t Int32GetDatum(argument));\n\nAgain, this style of coding does not recognize the possibility of a\nnull result. We could provide variant helper routines that allow\na null return rather than raising an error, which could be called in\na style like\n\tif (FunctionCall1IsNull(&result, funcCallInfo,\n\t PointerGetDatum(argument)))\n\t{\n\t\t... cope with null result ...\n\t}\n\telse\n\t{\n\t\t... OK, use 'result' here ...\n\t}\nBut I'm unsure that there are enough places in the system that need this\nto justify the extra set of helpers. If there are only a few places\nthat need a non-error response to a null result, they could just be\nchanged to fill and examine a FunctionCallInfoData structure directly.\n\nAs with the callee-side situation, I am strongly inclined to add\nargument conversion macros that hide the pass-by-reference nature of\nint8, float4, and float8, with an eye to making those types relatively\npainless to convert to pass-by-value.\n\nThe existing helper functions fmgr(), fmgr_c(), etc will be left in\nplace until all uses of them are gone. Of course their internals will\nhave to change in the first step of implementation, but they can\ncontinue to support the same external appearance.\n\n\nNotes about function handlers\n-----------------------------\n\nHandlers for classes of functions should find life much easier and\ncleaner in this design. The OID of the called function is directly\nreachable from the passed parameters; we don't need the global variable\nfmgr_pl_finfo anymore. Also, by modifying finfo->flinfo->fn_extra,\nthe handler can cache lookup info to avoid repeat lookups when the same\nfunction is invoked many times. 
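To make the fn_extra idea concrete: a standalone sketch of a handler that caches an expensive per-function lookup in flinfo->fn_extra on first call. The structures are toy stand-ins and the "lookup" is reduced to a counter; the point is only that repeated calls through the same FmgrInfo pay the lookup cost once:

```c
/* Toy model of a call handler caching per-function lookup data in
 * flinfo->fn_extra.  All structures are simplified stand-ins. */
#include <stddef.h>
#include <stdint.h>

typedef uintptr_t Datum;

typedef struct FmgrInfo
{
    unsigned fn_oid;          /* OID of the function to call */
    void    *fn_extra;        /* handler's private cache, NULL at start */
} FmgrInfo;

typedef struct FunctionCallInfoData
{
    FmgrInfo *flinfo;
    short     nargs;
    Datum     arg[8];
} FunctionCallInfoData;

static int lookups_done = 0;  /* counts the "expensive" lookups */

typedef struct PLCompiled
{
    unsigned fn_oid;          /* which function this cache is for */
    int      body;            /* stand-in for a compiled function body */
} PLCompiled;

/* A PL handler: "compile" on first call, reuse the cache afterwards.
 * The oid recheck is the hint-only discipline: if someone reused the
 * FmgrInfo for a different function, the stale cache is rebuilt. */
static Datum
toy_pl_handler(FunctionCallInfoData *fcinfo)
{
    PLCompiled *cache = fcinfo->flinfo->fn_extra;

    if (cache == NULL || cache->fn_oid != fcinfo->flinfo->fn_oid)
    {
        static PLCompiled slot;        /* static only to avoid malloc here */
        lookups_done++;
        slot.fn_oid = fcinfo->flinfo->fn_oid;
        slot.body = 21;                /* pretend compilation result */
        fcinfo->flinfo->fn_extra = &slot;
        cache = &slot;
    }
    return (Datum) (cache->body * 2);
}
```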
(fn_extra can only be used as a hint,\nsince callers are not required to re-use a FunctionLookupInfo struct.\nBut in performance-critical paths they normally will do so.)\n\nI observe that at least one other global variable, CurrentTriggerData,\nis being used as part of the call convention for some function handlers.\nThat's just as grotty as fmgr_pl_finfo, so I'd like to get rid of it.\nAny comments on the cleanest way to do so?\n\nAre there any other things needed by the call handlers for PL/pgsql and\nother languages?\n\nDuring the conversion process, support for old-style builtin functions\nand old-style user-written C functions will be provided by appropriate\nfunction handlers. For example, the handler for old-style builtins\nwill look roughly like fmgr_c() does now.\n\n\nSystem table updates\n--------------------\n\nIn the initial phase, pg_language type 11 (\"builtin\") will be renamed\nto \"old_builtin\", and a new language type named \"builtin\" will be\ncreated with a new OID. Then pg_proc entries will be changed from\nlanguage code 11 to the new code piecemeal, as the associated routines\nare rewritten. (This will imply several rounds of forced initdbs as\nthe contents of pg_proc change. It would be a good idea to add a\n\"catalog contents version number\" to the database version info checked\nat startup before we begin this process.)\n\nThe existing pg_language entry for \"C\" functions will continue to\ndescribe user functions coded in the old style, and we will need to add\na new language name for user functions coded in the new style. (Any\nsuggestions for what the new name should be?) We should deprecate\nold-style functions because of their portability problems, but the\nsupport for them will only be one small function handler routine,\nso we can leave them in place for as long as necessary.\n\nThe expected calling convention for PL call handlers will need to change\nall-at-once, but fortunately there are not very many of them to fix.\n",
"msg_date": "Sat, 23 Oct 1999 19:52:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Function-manager redesign: second draft (long)"
},
{
"msg_contents": "> This is a followup to a message I wrote in June about reworking the fmgr\n> interface. I've had a couple of good ideas (I think ;-)) since then,\n> but there are some parts of the proposal that still need work before\n> implementation can begin.\n> \n> I could particularly use some feedback from Jan and anyone else who's\n> worked with function-call handlers: does this design eliminate the\n> kluges that you've had to use in the past? If not, what else is needed?\n\nSounds good. My only question is whether people need backward\ncompatibility, and whether we can remove the compatibility part of the\ninterface and its small overhead after 7.1 or later?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 01:25:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": ">\n> > This is a followup to a message I wrote in June about reworking the fmgr\n> > interface. I've had a couple of good ideas (I think ;-)) since then,\n> > but there are some parts of the proposal that still need work before\n> > implementation can begin.\n> >\n> > I could particularly use some feedback from Jan and anyone else who's\n> > worked with function-call handlers: does this design eliminate the\n> > kluges that you've had to use in the past? If not, what else is needed?\n>\n> Sounds good. My only question is whether people need backward\n> compatibility, and whether we can remove the compatibility part of the\n> interface and small overhead after 7.1 or later?\n\n Backward compatibility is a common source of problems, and I\n don't like it generally. In the case of the fmgr it is quite\n difficult, and I have already thought about it too. I\n like the current interface for its simplicity from the user\n function developer's point of view. And converting ALL\n internal functions plus most of the ones in the contrib\n directories to something new is really a huge project.\n\n All function calls through the fmgr use the FmgrInfo\n structure, but there are a lot of calls to internal functions\n like textout() etc. too. Changing their interface would IMHO\n introduce many problems. And there are only a few internal\n functions where a new fmgr interface really is required due\n to incomplete NULL handling or the like.\n\n Therefore I would prefer an interface extension that doesn't\n require changes to existing functions. What about adding a\n proifversion to pg_proc, telling the fmgr which call\n interface the function uses? This is then held in the\n FmgrInfo struct too so the fmgr can call a function using\n either the old or the new interface.\n\n First fmgr_info() is extended to put the interface version\n into the FmgrInfo too. 
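Once the version is recorded in the FmgrInfo, one call point can dispatch on it. A toy model follows; the version codes, names and signatures are invented for the sketch, and the old style is reduced to a fixed two-argument signature so the fragment stays legal, portable C:

```c
/* Toy model of per-function interface versions recorded at
 * fmgr_info() time and dispatched on at call time.  All names and
 * signatures here are illustrative stand-ins, not backend code. */
#include <stdint.h>
#include <stdbool.h>

typedef uintptr_t Datum;

typedef struct FunctionCallInfoData
{
    short nargs;
    Datum arg[8];
    bool  isnull;
} FunctionCallInfoData;

typedef int32_t (*OldStyleFn)(int32_t a, int32_t b);   /* "v0" style */
typedef Datum (*NewStyleFn)(FunctionCallInfoData *);   /* "v1" style */

typedef struct FmgrInfo
{
    int fn_version;           /* which convention fn_addr uses */
    union
    {
        OldStyleFn v0;
        NewStyleFn v1;
    } fn_addr;
} FmgrInfo;

/* One call point handles both conventions, so the conversion of
 * individual functions can proceed piecemeal instead of big-bang. */
static Datum
fmgr_call2(FmgrInfo *finfo, Datum a, Datum b)
{
    if (finfo->fn_version == 0)
        return (Datum) (uint32_t)
            finfo->fn_addr.v0((int32_t) a, (int32_t) b);

    FunctionCallInfoData fcinfo = {0};
    fcinfo.nargs = 2;
    fcinfo.arg[0] = a;
    fcinfo.arg[1] = b;
    return finfo->fn_addr.v1(&fcinfo);
}

static int32_t old_add(int32_t a, int32_t b) { return a + b; }

static Datum new_add(FunctionCallInfoData *fcinfo)
{
    return (Datum) (uint32_t)
        ((int32_t) fcinfo->arg[0] + (int32_t) fcinfo->arg[1]);
}
```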
Then fmgr_faddr() is renamed to\n fmgr_faddr_v1() and it has to check that only functions using\n the old interface are called through it (there aren't as\n many calls to it as you might think). After that you have\n all the time in the world to implement another interface and\n add a switch into fmgr() and its sisters handling the different\n versions.\n\n My thoughts for the new interface:\n\n o Each function argument must have its own separate NULL flag.\n\n o The function's result must have another NULL flag too.\n\n o Argument names and default values for omitted ones aren't\n IMHO something to go into the interface itself. The\n function is always called with all arguments positional;\n the parser must provide this list.\n\n o The new interface must at least be designed for a later\n implementation of tuple set returns. I think this must be\n implemented as a temp table, collecting all return tuples,\n since for a procedural language it might be impossible to\n implement a real RETURN AND RESUME (as it already is\n for PL/Tcl and would be for PL/Perl). Therefore\n another STRUCT kind of relation must be added too,\n providing the tupdesc for the returned set. This temp\n table is filled by calling the function the first time a\n tuple is needed, and from then on it is simply another RTE.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 26 Oct 1999 11:06:35 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "> System table updates\n> --------------------\n>\n> In the initial phase, pg_language type 11 (\"builtin\") will be renamed\n> to \"old_builtin\", and a new language type named \"builtin\" will be\n> created with a new OID. Then pg_proc entries will be changed from\n> language code 11 to the new code piecemeal, as the associated routines\n> are rewritten. (This will imply several rounds of forced initdbs as\n> the contents of pg_proc change. It would be a good idea to add a\n> \"catalog contents version number\" to the database version info checked\n> at startup before we begin this process.)\n>\n> The existing pg_language entry for \"C\" functions will continue to\n> describe user functions coded in the old style, and we will need to add\n> a new language name for user functions coded in the new style. (Any\n> suggestions for what the new name should be?) We should deprecate\n> old-style functions because of their portability problems, but the\n> support for them will only be one small function handler routine,\n> so we can leave them in place for as long as necessary.\n>\n> The expected calling convention for PL call handlers will need to change\n> all-at-once, but fortunately there are not very many of them to fix.\n\n This approach nearly matches all my thoughts about the\n redesign of the fmgr. In the system table section I miss\n named arguments.\n\n I think we need a new system table\n\n pg_proargs (\n Oid pargprooid,\n int2 pargno,\n name pargname,\n bool pargdflnull,\n text pargdefault\n );\n\n plus another flag in pg_proc that tells if this function\n prototype information is available.\n\n The parser then has to handle function calls like\n\n ... myfunc(userid = 123, username = 'hugo');\n\n and build a complete function argument list that has all the\n arguments in the correct order and defaults for omitted\n arguments filled in as const nodes. 
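The parser-side rewrite described here, folding named actuals into a complete positional list with defaults for omitted arguments, is essentially one matching loop. A sketch follows, with the catalog reduced to an in-memory array and defaults reduced to plain ints instead of const nodes; all names are hypothetical:

```c
/* Sketch of folding named arguments into a positional list, using
 * per-argument prototype info such as the proposed pg_proargs
 * catalog would hold.  Limited to 16 arguments for simplicity. */
#include <string.h>
#include <stdbool.h>

typedef struct ProArgInfo
{
    const char *argname;      /* pargname */
    bool        has_default;  /* roughly !pargdflnull */
    int         default_val;  /* stand-in for pargdefault */
} ProArgInfo;

/* Build the complete positional list in out[]; returns false on an
 * unknown argument name or an omitted argument without a default. */
static bool
resolve_named_args(const ProArgInfo *proto, int nproto,
                   const char **names, const int *values,
                   int nactual, int *out)
{
    bool filled[16] = {false};

    /* place each named actual at its positional slot */
    for (int i = 0; i < nactual; i++)
    {
        int j;
        for (j = 0; j < nproto; j++)
            if (strcmp(names[i], proto[j].argname) == 0)
                break;
        if (j == nproto)
            return false;             /* no such argument name */
        out[j] = values[i];
        filled[j] = true;
    }

    /* fill the gaps with defaults */
    for (int j = 0; j < nproto; j++)
    {
        if (filled[j])
            continue;
        if (!proto[j].has_default)
            return false;             /* omitted and no default */
        out[j] = proto[j].default_val;
    }
    return true;
}
```

Once this runs in the parser, everything downstream (fmgr, PL handlers) still sees a plain positional argument list, which is why the feature need not touch the call interface itself.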
This new prototype\n information then must also be used in the PL handlers to\n choose the given names for arguments.\n\n In addition, we could add an argument type at this time\n (INPUT, OUTPUT and INOUT) but only support INPUT ones for now\n from the SQL level. PL functions calling other functions\n could eventually use these argument types in the future.\n\n Also I miss the interface for tuple set returns. I know that\n this requires much more in other sections than only the fmgr,\n but we need to cover it now or we'll not be able to do it\n without another change to the fmgr interface at the time we\n want to support real stored procedures. As I said in another\n mail, I think it must be done via some temp table, since most\n interpreter languages will not be able to do RETURN AND\n RESUME in any other way - not even PL/pgSQL will be easy: it\n would need a total internal redesign of the bytecode\n interpreter, since it otherwise needs to recover the internal\n call stack, possibly across recursive calls.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 26 Oct 1999 12:19:16 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Sounds good. My only question is whether people need backward\n> compatibility, and whether we can remove the compatiblity part of the\n> interface and small overhead after 7.1 or later?\n\nI think we could drop it after a decent interval, but I don't see any\nreason to be in a hurry. I do think that we'll get complaints if 7.0\ndoesn't have any backward compatibility for existing user functions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Oct 1999 10:01:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long) "
},
{
"msg_contents": "[ responding to both of Jan's messages in one ]\n\[email protected] (Jan Wieck) writes:\n> I like the current interface for it's simpleness from the user\n> function developers point of view.\n\nThere is that; even with a good set of convenience macros there will be\nmore to learn about writing user functions. OTOH, the way it is now\nis not exactly transparent --- in particular, NULL handling is so easy\nto forget about/get wrong. We can make that much easier. We can also\nsimplify the interface noticeably for standard types like float4/float8.\n\nBTW: I am not in nearly as big a hurry as Bruce is to rip out support\nfor the current user-function interface. I want to get rid of old-style\nbuiltin functions before 7.0 because of the portability issue (see\nbelow). But if a particular user is using old-style user functions\nand isn't having portability problems on his machine, there's no need\nto force him to convert, it seems to me.\n\n> And converting ALL\n> internal plus most of the contrib directories ones to\n> something new is really a huge project.\n\nIt is. I was hoping to get some help ;-) ... but I will do it myself\nif I have to.\n\n> ... And there are only a few internal\n> functions where a new fmgr interface really is required due\n> to incomplete NULL handling or the like.\n\nIf your goal is *only* to deal with the NULL issue, or *only* to get\nthings working on a specific platform like Alpha, then yes we could\npatch in a few dozen places and not undertake a complete changeover.\nBut I believe that we really need to fix this right, because it will\nkeep coming back to haunt us until we do. I will not be happy as\nlong as we have ports that have to compile \"-O0\" because of fmgr\nbrain-damage. I fear there are going to be more and more such ports\nas people install smarter compilers that assume they are working with\nANSI-compliant source code. 
We're giving up performance system-wide\nbecause of fmgr.\n\n> Therefore I would prefer an interface extension, that doesn't\n> require changes to existing functions. What about adding a\n> proifversion to pg_proc, telling the fmgr which call\n> interface the function uses?\n\nI was going to let the prolang column tell that, by having different\nlanguage codes for old vs. new builtin function and old vs. new\ndynamic-linked C function. We could add a version column instead,\nbut that seems like unnecessary complication.\n\n> First fmgr_info() is extended to put the interface version\n> into the FmgrInfo too. Then fmgr_faddr() is renamed to\n> fmgr_faddr_v1() and it has to check that only functions using\n> the old interface are called through it (there aren't that\n> many calls to it as you might think).\n\nI think it should be possible to make fmgr_faddr call both old and\nnew functions --- I haven't actually written code yet, but I think\nI can do it. You're right that that's important in order to spread\nout the repair work instead of having a \"big bang\".\n\n> This approach nearly matches all my thoughts about the\n> redesign of the fmgr. In the system table section I miss\n> named arguments.\n\nAs you said in your earlier message, that is a parser-level feature\nthat has nothing to do with what happens at the fmgr level. I don't\nwant to worry about it for this revision.\n\n> Also I miss the interface for tuple set returns. I know that\n> this requires much more in other sections than only the fmgr,\n> but we need to cover it now or we'll not be able to do it\n> without another change to the fmgr interface at the time we\n> want to support real stored procedures.\n\nOK, I'm willing to worry about that. But what, exactly, needs to\nbe provided in the fmgr interface?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Oct 1999 11:14:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long) "
},
{
"msg_contents": ">\n> [ responding to both of Jan's messages in one ]\n>\n> [email protected] (Jan Wieck) writes:\n> > I like the current interface for it's simpleness from the user\n> > function developers point of view.\n>\n> There is that; even with a good set of convenience macros there will be\n> more to learn about writing user functions. OTOH, the way it is now\n> is not exactly transparent --- in particular, NULL handling is so easy\n> to forget about/get wrong. We can make that much easier. We can also\n> simplify the interface noticeably for standard types like float4/float8.\n>\n> BTW: I am not in nearly as big a hurry as Bruce is to rip out support\n> for the current user-function interface. I want to get rid of old-style\n> builtin functions before 7.0 because of the portability issue (see\n> below). But if a particular user is using old-style user functions\n> and isn't having portability problems on his machine, there's no need\n> to force him to convert, it seems to me.\n\n Personally, I could live with dropping the entire old\n interface. That's not the problem. But at least Bruce and his\n book need to know the final programming conventions if we\n ought to change it at all, so it can be covered in his\n manuscript when it is sent down to the paper.\n\n> I was going to let the prolang column tell that, by having different\n> language codes for old vs. new builtin function and old vs. new\n> dynamic-linked C function. We could add a version column instead,\n> but that seems like unnecessary complication.\n\n Right - language is all needed to tell.\n\n> > This approach nearly matches all my thoughts about the\n> > redesign of the fmgr. In the system table section I miss\n> > named arguments.\n>\n> As you said in your earlier message, that is a parser-level feature\n> that has nothing to do with what happens at the fmgr level. I don't\n> want to worry about it for this revision.\n\n Right too. 
I just hoped to expand the scope of this change so\n there would be only ONE change to the PL handlers, covering both\n the new interface with proper NULL handling and the enhanced function\n prototypes.\n\n>\n> > Also I miss the interface for tuple set returns. I know that\n> > this requires much more in other sections than only the fmgr,\n> > but we need to cover it now or we'll not be able to do it\n> > without another change to the fmgr interface at the time we\n> > want to support real stored procedures.\n>\n> OK, I'm willing to worry about that. But what, exactly, needs to\n> be provided in the fmgr interface?\n\n First we need another relation type in pg_class. It's like a\n table or view, but none of the NORMAL SQL statements can be\n used with it (e.g. INSERT, SELECT, ...). It just describes a\n row structure.\n\n Then, a function returning a SET of any row type (this time,\n regular relations or views too) can be used in the rangetable\n as\n\n SELECT A.col1, B.col2 FROM mysetfunc() A, anothertab B\n WHERE A.col1 = B.col1;\n\n Of course, it requires some new fields in the rangetable\n entry. Anyway, at the beginning of execution for such a\n query, the executor initializes those RTE's by creating a\n temp table with the schema of the specified structure or\n relation. Then it calls the user function, passing in some\n handle to the temp table, and the user function fills in the\n tuples. Now the rest of query execution proceeds as if it were\n a regular table. After execution, the temp table is dropped.\n\n Isn't that all required for true stored procedures?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 26 Oct 1999 17:36:00 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Sounds good. My only question is whether people need backward\n> > compatibility, and whether we can remove the compatiblity part of the\n> > interface and small overhead after 7.1 or later?\n> \n> I think we could drop it after a decent interval, but I don't see any\n> reason to be in a hurry. I do think that we'll get complaints if 7.0\n> doesn't have any backward compatibility for existing user functions.\n\nTrue.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 12:46:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "> >\n> > [ responding to both of Jan's messages in one ]\n> >\n> > [email protected] (Jan Wieck) writes:\n> > > I like the current interface for it's simpleness from the user\n> > > function developers point of view.\n> >\n> > There is that; even with a good set of convenience macros there will be\n> > more to learn about writing user functions. OTOH, the way it is now\n> > is not exactly transparent --- in particular, NULL handling is so easy\n> > to forget about/get wrong. We can make that much easier. We can also\n> > simplify the interface noticeably for standard types like float4/float8.\n> >\n> > BTW: I am not in nearly as big a hurry as Bruce is to rip out support\n> > for the current user-function interface. I want to get rid of old-style\n> > builtin functions before 7.0 because of the portability issue (see\n> > below). But if a particular user is using old-style user functions\n> > and isn't having portability problems on his machine, there's no need\n> > to force him to convert, it seems to me.\n> \n> Personally, I could live with dropping the entire old\n> interface. That's not the problem. But at least Bruce and his\n> book need to know the final programming conventions if we\n> ought to change it at all, so it can be covered in his\n> manuscript when it is sent down to the paper.\n\nNo. My coverage of that is going to be more conceptual than actual\nprogramming. Do whatever you think is best.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 12:49:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n>>>> Also I miss the interface for tuple set returns. I know that\n>>>> this requires much more in other sections than only the fmgr,\n>>>> but we need to cover it now or we'll not be able to do it\n>>>> without another change to the fmgr interface at the time we\n>>>> want to support real stored procedures.\n>> \n>> OK, I'm willing to worry about that. But what, exactly, needs to\n>> be provided in the fmgr interface?\n\n> First we need another relation type in pg_class. It's like a\n> table or view, but none of the NORMAL SQL statements can be\n> used with it (e.g. INSERT, SELECT, ...). It just describes a\n> row structure.\n\nWhy bother? Just create a table --- if you don't want to put any\nrows in it, you don't have to.\n\n> Of course, it requires some new fields in the rangetable\n> entry. Anyway, at the beginning of execution for such a\n> query, the executor initializes those RTE's by creating a\n> temptable with the schema of the specified structure or\n> relation. Then it calls the user function, passing in some\n> handle to the temp table and the user function fills in the\n> tuples. Now the rest of query execution is if it is a regular\n> table. After execution, the temp table is dropped.\n\nOK, but this doesn't answer the immediate problem of what needs to\nappear in the fmgr interface.\n\nI have been thinking about this some, and I think maybe our best bet is\nnot to commit to *exactly* what needs to pass through fmgr, but rather\nto put in a couple of generic hooks. I can see two hooks that are\nneeded: one is for passing \"context\" information, such as information\nabout the current trigger event when a function is called from the\ntrigger manager. (This'd replace the present CurrentTriggerData global,\nwhich I hope you'll agree shouldn't be a global...) 
The other is for\npassing and/or returning information about the function result --- maybe\nreturning a tuple descriptor, maybe passing in the name of a temp table\nto put a result set in, maybe other stuff. So, I am thinking that we\nshould add two fields like this to the FunctionCallInfo struct:\n\n\tNode\t*resultinfo; /* pass or return extra info about result */\n\tNode\t*context; /* pass info about context of call */\n\nWe would restrict the usage of these fields only to the extent of saying\nthat they must point to some kind of Node --- that lets callers and\ncallees check the node tag to make sure that they understand what they\nare looking at. Different node types can be used to handle different\nsituations. For an \"ordinary\" function call expecting a scalar return\nvalue, both fields will be set to NULL. Other conventions for their use\nwill be developed as time goes on.\n\nDoes this look good to you, or am I off the track entirely?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Oct 1999 10:55:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long) "
},
{
"msg_contents": "> \n> \tNode\t*resultinfo; /* pass or return extra info about result */\n> \tNode\t*context; /* pass info about context of call */\n> \n> Does this look good to you, or am I off the track entirely?\n> \n> \t\t\tregards, tom lane\n> \n\n Looks perfect. I appreciate getting rid of globals whenever\n\tpossible.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n",
"msg_date": "Wed, 27 Oct 1999 19:52:43 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > Sounds good. My only question is whether people need backward\n> > compatibility, and whether we can remove the compatiblity part of the\n> > interface and small overhead after 7.1 or later?\n>\n> I think we could drop it after a decent interval, but I don't see any\n> reason to be in a hurry. I do think that we'll get complaints if 7.0\n> doesn't have any backward compatibility for existing user functions.\n\n Right. A major release is what it is. And porting\n applications to a new major release too, it is a conversion,\n not an upgrade. Therefore a major release should drop as much\n backward compatibility code for minor releases as possible.\n\n Thus, we should think about getting rid of the broken design\n for functions returning tuple sets in 7.0. As far as I\n understand the books I have, there are a couple of different\n types of functions/procedures out, and not every database\n implements all types, nor do they all name one and the same\n type equally so that something called function in one\n database is a stored procedure in another. Anyway, the\n different types are:\n\n 1. Functions returning a scalar value taking only input-\n arguments.\n\n 2. Functions returning a scalar value taking input-, output-\n and in/out-arguments.\n\n 3. Functions returning nothing taking only input-arguments.\n\n 4. Functions returning nothing taking input-, output- and\n in/out-arguments.\n\n 5. Functions returning a set of result rows taking only\n input-arguments.\n\n 6. Functions returning a set of result rows taking input-,\n output- and in/out-arguments.\n\n I don't think that we have to implement everything, and since\n we don't have host variables, output- and in/out-arguments\n would make sense only for calls from procedural languages.\n OTOH they would cause much trouble so they are one detail to\n let out for PostgreSQL.\n\n Three cases left. Type number 1. we have already. 
And it is\n advanced, because the arguments can be either single values\n or single rows.\n\n And type number 3. is easy, because invoking something that\n returns a dummy that is thrown away is absolutely no work.\n\n So the only thing that's really left is number 5. The funny\n detail is that those functions or procedures can't be used\n inside regular SELECT queries. Instead a CALL FUNCTION or\n EXECUTE PROCEDURE statement is used from the client\n application or inside a PL block. CALL FUNCTION then returns\n a tuple set as a SELECT does. The result in our world\n therefore has a tuple descriptor and depending on the invoker\n is sent to the client or stored in an SPI tuple table.\n\n So we do not need to call functions returning sets through\n the normal function manager. It could completely deny calls to\n set functions, and the interface for them can be a totally\n different one. I have something in mind that could work\n without temp tables, but it requires a redesign for PL/pgSQL\n and causes some limitations for PL/Tcl. Let's leave that for\n a post-7.0 release.\n\n I correct my previous statements and vote to deny calls to\n set functions through the default function manager in 7.0.\n\n And there is another detail I found while browsing through\n the books. Functions can be defined as [NOT] NULL CALL (IBM\n DB2). Functions defined as NOT NULL CALL will be called only\n if all their arguments aren't NULL. So we can prevent much\n NULL handling inside the functions if we simply define that a\n function that is NOT NULL CALL will always return NULL if\n any of its input arguments is NULL. This case can then be\n handled at the function manager level without calling the\n function itself. Nearly all our builtin functions behave that\n way but have all the tests inside.\n\n Another detail I'm missing now is a new, properly defined\n interface for type input/output functions. 
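The NOT NULL CALL rule described above can live entirely in the call mechanism, so the callee never sees a NULL argument. A standalone sketch, with all structures and names as toy stand-ins for the real fmgr definitions:

```c
/* Standalone sketch of handling a NOT NULL CALL (strict) function at
 * the function-manager level: if any argument is NULL, return NULL
 * without calling the function at all. */
#include <stdint.h>
#include <stdbool.h>

typedef uintptr_t Datum;

typedef struct FunctionCallInfoData
{
    short nargs;
    Datum arg[8];
    bool  argnull[8];
    bool  isnull;             /* NULL-result flag */
} FunctionCallInfoData;

typedef Datum (*PGFunction)(FunctionCallInfoData *);

typedef struct FmgrInfo
{
    PGFunction fn_addr;
    bool       fn_strict;     /* NOT NULL CALL, in DB2 terms */
} FmgrInfo;

static Datum
FunctionCallInvoke(FmgrInfo *flinfo, FunctionCallInfoData *fcinfo)
{
    if (flinfo->fn_strict)
    {
        for (int i = 0; i < fcinfo->nargs; i++)
        {
            if (fcinfo->argnull[i])
            {
                fcinfo->isnull = true;   /* NULL in, NULL out */
                return (Datum) 0;
            }
        }
    }
    return flinfo->fn_addr(fcinfo);
}

/* The callee can now omit all NULL tests on its arguments. */
static Datum
strict_add(FunctionCallInfoData *fcinfo)
{
    return (Datum) (uint32_t)
        ((int32_t) fcinfo->arg[0] + (int32_t) fcinfo->arg[1]);
}
```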
The fact that they\n are defined taking one opaque (yepp, should be something\n different as already discussed) argument but in fact get more\n information from the attribute is ugly.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 30 Oct 1999 22:42:33 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n...\n> \n> 5. Functions returning a set of result rows taking only\n> input-arguments.\n... \n> So the only thing that's really left is number 5. The funny\n> detail is that those functions or procedures can't be used\n> inside regular SELECT queries. Instead a CALL FUNCTION or\n> EXECUTE PROCEDURE statement is used from the client\n> application or inside a PL block. CALL FUNCTION then returns\n> a tuple set as a SELECT does. The result in our world\n> therefore has a tuple descriptor and depending on the invoker\n> is sent to the client or stored in an SPI tuple table.\n> \n> So we do not need to call functions returning sets through\n> the normal function manager. It could completely deny calls to\n> set functions, and the interface for them can be a totally\n> different one. I have something in mind that could work\n> without temp tables, but it requires a redesign for PL/pgSQL\n> and causes some limitations for PL/Tcl. Let's leave that for\n> a post-7.0 release.\n> \n> I correct my previous statements and vote to deny calls to\n> set functions through the default function manager in 7.0.\n> \n\nIt would be very nice if we could use the tuple-set-returning \nfunctions in place of tables/views,\n\nSELECT * FROM LOGGED_IN_USERS_INFO_PROC;\n\nor at least define views on them: \n\nCREATE VIEW LOGGED_IN_USERS AS CALL FUNCTION LOGGED_IN_USERS_INFO_PROC;\n\nWe would not need to call them in place of functions that return\neither single-value or tuple.\n\nOn the topic of the 2x3=6 kinds of functions you mentioned, I think we \ncould use yet another type of function - the one returning a \ntuple/row, as is often the case in Python and possibly other languages \nthat do automatic tuple packing/unpacking. 
\n\nIt could be used in cases like this:\n\nINSERT INTO MY_TABLE CALL FUNCTION MY_ROW_VALUE;\n\nor \n\nDELETE FROM MY_TABLE WHERE * = CALL FUNCTION MY_ROW_VALUE;\n\n(The last example is not ANSI and does not work currently.)\n\nOTOH, these examples would just be redundant cases for your 5th case.\n\nOTOOH, all the functions returning less than a set of rows are \nredundant cases of the functions that do ;)\n\n\n-----------\nHannu\n",
"msg_date": "Sat, 30 Oct 1999 21:32:26 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> Another detail I'm missing now is a new, really defined\n> interface for type input/output functions. The fact that they\n> are defined taking one opaque (yepp, should be something\n> different as already discussed) argument but in fact get more\n> information from the attribute is ugly.\n\nCan we currently return a list of the same type ?\n\nI guess we can, as lists (or arrays) are fundamental types in \nPostgreSQL, but I'm not sure.\n\nI would like to define aggregate functions list() and set()\n\nCould I define them just once and specify that they return an array \nof their input type ?\n\nHalf of that is currently done for count() - i.e. it can take any \ntype of argument, but I guess the return-array-of-input-type is more \ncomplicated.\n\n\n\n\nAlso (probably off topic) how hard would it be to add another type \nof aggregate functions that operate on pairs of values ?\n\nI would like to have FOR_MIN and FOR_MAX (and possibly MIN_MIN and\nMAX_MAX) functions that return _another_ field from a table for a \nminimal value in one field.\n\n-------------\nHannu\n",
"msg_date": "Sat, 30 Oct 1999 21:39:54 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "I think that the proposals for the revision of the function interface\nare all an improvement on what is there, and I hope to try to find time\nto help implement whatever is decided. Here are some more thoughts in\nrelation to the question of set valued and tuple valued (or complex\ntype?) arguments.\n\nAnother place that user defined functions are used in the PostgreSQL\nbackend is in association with access methods. Both as boolean\noperators for search predicates, and as auxiliary functions for the\naccess methods. Allowing set valued and tuple valued arguments and\nreturn values for functions and operators in this setting can be\nvaluable. \n\nFor instance, suppose I have a table t that stores geometric objects in\nsome space, and I have a spatial index such as R*-tree, or even a GIST\nindex. Given a set of points pts, I want to do a query of the form\n\nSELECT * FROM t WHERE t.object <intersects> pts;\n\nUnder these circumstances it would be really nice to be able to pass a\nset of objects (as an SPI tuple table for instance) into the index.\n\nCurrently, the way I do this (with a custom access method) is to create\na temp table, put the key set into the temp table, and pass the name of\nthe temp table to the access method in the search key. The access\nmethod then does an SPI select on the temp table and stores the returned\nitems into the private scan state for use during the scan. \n\nWhile I realize that implementing this example requires much more than a\nchange to the function interface, I hope that it illustrates that it is\nperhaps a good idea to keep as much flexibility in the function\ninterface as possible. \n\nBernie Franpitt\n",
"msg_date": "Sat, 30 Oct 1999 18:05:53 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "Hannu Krosing wrote:\n>\n> Jan Wieck wrote:\n> >\n> > I correct my previous statements and vote to deny calls to\n> > set functions through the default function manager in 7.0.\n> >\n>\n> It would be very nice if we could use the tuple-set-returning\n> functions in place of tables/views,\n>\n> SELECT * FROM LOGGED_IN_USERS_INFO_PROC;\n\n Exactly that's something I want for long now. Sticking\n another querytree, that returns a tuple set, into a\n rangetable entry. This other querytree could be either a\n SELECT as in\n\n SELECT A.x, A.y, B.z FROM table1 A, (SELECT ...) B\n WHERE A.x = B.x;\n\n or a function returning a set as in\n\n SELECT A.x, A.y, B.z FROM table1 A, setfunc('arg') B\n WHERE A.x = B.x;\n\n Finally, the form\n\n CALL setfunc('arg');\n\n would be equivalent to a\n\n SELECT * FROM setfunc('arg');\n\n but closer to the IBM DB2 calling syntax. The first one is\n required to get rid of some problems in the rule system,\n especially views with aggregate columns that need their own\n GROUP BY clause. The other ones are what we need to implement\n stored procedures.\n\n>\n> or at least define views on them:\n>\n> CREATE VIEV LOGGED_IN_USERS AS CALL FUNCTION LOGGED_IN_USERS_INFO_PROC;\n\n Wrong syntax since the statement after AS must be a SELECT.\n But a\n\n CREATE VIEW v AS SELECT * FROM setfunc();\n\n would do the trick.\n\n>\n> We would not need to call them in place of functions that return\n> either single-value or tuple.\n>\n> On the topic of 2x3=6 kinds of functions you mentioned I think we\n> could use jet another type of functions - the one returning a\n> tuple/row as is ofteh case in python and possibly other languages\n> that do automatic tuple packing/unpacking.\n>\n> It could be used in cases like this:\n>\n> INSERT INTO MY_TABLE CALL FUNCTION MY_ROW_VALUE;\n\n Let's clearly distinguish between scalar, row and set return\n values. A scalar return value is one single datum. 
A row\n return value is exactly one tuple of 1...n datums and a set\n return value is a collection of 0...n rows.\n\n What we have now (at least what works properly) are only\n scalar return values from functions. And I don't see the\n point of a row return, so I think we don't need them.\n\n>\n> or\n>\n> DELETE FROM MY_TABLE WHERE * = CALL FUNCTION MY_ROW_VALUE;\n>\n> (The last example is not ansi and does not work currently),\n>\n> OTOH, these exaples would jus be redundant cases for your 5th case.\n>\n> OTOOH, all the functions returning less than a set of rows are\n> redundadnt cases of the functions that do ;)\n\n But please don't forget that it isn't enough to write down\n the syntax and specify the behaviour with some english words.\n We must define the behaviour in C too, and in that language\n it's a little more than a redundant case of something,\n because we don't have that something.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 31 Oct 1999 00:19:10 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> Hannu Krosing wrote:\n> >\n> > Jan Wieck wrote:\n> What we have now (at least what works properly) are only\n> scalar return values from functions. And I don't see the\n> point of a row return, so I think we don't need them.\n\nThat's what I meant by the OTOH below.\n\n> >\n> > (The last example is not ANSI and does not work currently),\n> >\n> > OTOH, these examples would just be redundant cases for your 5th case.\n> >\n> > OTOOH, all the functions returning less than a set of rows are\n> > redundant cases of the functions that do ;)\n> \n> But please don't forget that it isn't enough to write down\n> the syntax and specify the behaviour with some English words.\n> We must define the behaviour in C too, and in that language\n> it's a little more than a redundant case of something,\n> because we don't have that something.\n\nYes, that's the hard part.\n\n---------\nHannu\n",
"msg_date": "Sat, 30 Oct 1999 22:58:46 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "> Tom Lane wrote:\n> \n> > Bruce Momjian <[email protected]> writes:\n> > > Sounds good. My only question is whether people need backward\n> > > compatibility, and whether we can remove the compatibility part of the\n> > > interface and small overhead after 7.1 or later?\n> >\n> > I think we could drop it after a decent interval, but I don't see any\n> > reason to be in a hurry. I do think that we'll get complaints if 7.0\n> > doesn't have any backward compatibility for existing user functions.\n> \n> Right. A major release is what it is. And porting\n> applications to a new major release too, it is a conversion,\n> not an upgrade. Therefore a major release should drop as much\n> backward compatibility code for minor releases as possible.\n\nIf this were simple stuff, we could keep compatibility with little\nproblem. But, with complex stuff like this, keeping backward\ncompatibility sometimes makes things more confusing. They can code\nthings two ways, and that makes people get confused as to which one to\nfollow.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 30 Oct 1999 21:57:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "Bernie Franpitt wrote:\n\n> SELECT * FORM t WHERE t.object <intersects> pts;\n>\n> Under these circumstances it would be really nice to be able to pass a\n> set of objects (as an SPI tuple table for instance) into the index.\n>\n> Currently, the way I do this (with a custom access method) is to create\n> a temp table, put the key set into the temp table, and pass the name of\n> the temp table to the access method in the search key. The access\n> method then does an SPI select on the temp table and stores the returned\n> items into the private scan state for use during the scan.\n>\n> While I realize that implementing this example requires much more than a\n> change to the function interface, I hope that it illustrates that it is\n> perhaps a good idea to keep as much flexibility in the function\n> interface as possible.\n\n Uhhh - it is a good idea, but passing tuple sets as arguments\n to functions, that will cause headaches, man.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 31 Oct 1999 03:29:39 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "On Oct 30, Jan Wieck mentioned:\n\n> Right. A major release is what it is. And porting\n> applications to a new major release too, it is a conversion,\n> not an upgrade. Therefore a major release should drop as much\n> backward compatibility code for minor releases as possible.\n\nCertainly true. But that would also mean that we'd have to keep\nmaintaining the 6.* series as well. At least bug-fixing and releasing one\nor two more 6.5.x versions. Up until now the usual answer to a bug was\n\"upgrade to latest version\". But if you break compatibility you can't do\nthat any more. (Compare to Linux 2.0 vs 2.2) I'm just wondering what the\nplans are in that regard.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 1 Nov 1999 00:46:19 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> On Oct 30, Jan Wieck mentioned:\n>\n> > Right. A major release is what it is. And porting\n> > applications to a new major release too, it is a conversion,\n> > not an upgrade. Therefore a major release should drop as much\n> > backward compatibility code for minor releases as possible.\n>\n> Certainly true. But that would also mean that we'd have to keep\n> maintaining the 6.* series as well. At least bug-fixing and releasing one\n> or two more 6.5.x versions. Up until now the usual answer to a bug was\n> \"upgrade to latest version\". But if you break compatibility you can't do\n> that any more. (Compare to Linux 2.0 vs 2.2) I'm just wondering what the\n> plans are in that regard.\n\n Some time ago, I already pointed out that we have to support\n one or two older releases for some time. Not only because we\n might drop some compatibility code. Each release usually\n declares one or another new keyword, probably making existing\n applications fail with the new release. And no\n amount of compatibility code would help in that case! It's a\n deadlock trap: an application that cannot be easily ported to\n a newer release because of incompatibilities in the\n query language cannot use the last release it is compatible\n with because of a bug.\n\n There is a new aspect in this discussion since then. The new\n corporation PostgreSQL Inc. offers commercial support for our\n database (look at www.pgsql.com). If they offer support, they\n must support older releases as well, so they need to\n backpatch already.\n\n Wouldn't it be a good idea if what they return to our project\n were bugfix releases of older versions (created by\n backpatching release branches)? In the case of a customer's\n accident, they have to do it anyway. 
And doing it for\n critical bugs during idle time could avoid accidents, so it's\n good customer service.\n\n Marc, what do you think about such an agreement?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 2 Nov 1999 02:45:45 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long)"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> There is a new aspect in this discussion since then. The new\n> corporation PostgreSQL Inc. offers commercial support for our\n> database (look at www.pgsql.com). If they offer support, they\n> must support older releases as well, so they need to\n> backpatch already.\n\nYes, but who's the \"them\" here? If PostgreSQL Inc. has any warm\nbodies other than the existing group of developers, I sure haven't\nheard from them...\n\nI agree 100% with Jan's basic point: we must provide a degree of\nbackwards compatibility from release to release. In some cases\nthat might create enough pain to be worth debating, but in this\nparticular case it seems like the choice is a no-brainer. We just\nleave in the transition fmgr code that we're going to write anyway.\nI don't understand why it even got to be a topic of discussion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Nov 1999 22:15:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Function-manager redesign: second draft (long) "
}
] |
[
{
"msg_contents": "I use the following functions in psql which are found in backend/port.\nThat implies that there is at least one system that doesn't have these so\nit would be a good idea if they were available to the entire code tree.\nCould someone move them to a better place in the tree or would you like me\nto do it?\n\nstrcasecmp\nstrtol\nputenv (from nextstep/port.c)\n\nIn addition, as I already mentioned a while back, I would really like to\nuse\n\nsnprintf\n\nCould that be done?\n\nThanks,\n\tPeter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sun, 24 Oct 1999 14:14:53 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "mv backend/port ../../"
},
{
"msg_contents": "> I use the following functions in psql which are found in backend/port.\n> That implies that there is at least one system that doesn't have these so\n> it would be a good idea if they were available to the entire code tree.\n> Could someone move them to a better place in the tree or would you like me\n> to do it?\n> \n> strcasecmp\n> strtol\n> putenv (from nextstep/port.c)\n\nWe have them because only a few platforms don't have them. They are\nnormally part of the OS library.\n\nIt is counter productive to move them out of port. We want to reduce\nwhat is in there, not move it into the main tree.\n\n> \n> In addition, as I already mentioned a while back, I would really like to\n> use\n> \n> snprintf\n\nI think so, but it only handles strings. You have to interface that to\nthe pgsql character types.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 00:16:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] mv backend/port ../../"
}
] |
[
{
"msg_contents": "Following is an updated list of the messages to be channeled by the proposed logging system.\n\nTHESE AND *ONLY* THESE are slated for implementation. If you have items you want\nincluded, PLEASE LET ME KNOW! As it stands, this is a pretty minimal set.\n\nBear in mind that the logger is NOT a debugger. Logged messages are expected to be related to administrative events,\nincluding, but not limited to - server status (including load-balancing and fault\nreporting), security, user connections, and service requests. Added is the 1xxx class, which is designed to assist the\nexisting debugging system by providing a linkage between the free-form debugging messages and the formalized log system\nvia the LOGBUG macro, which writes to both debugging output AND logging.\n\nHint 1: If you are about to emit a plethora of debugging messages and you want a timestamp and/or entry in a log\ndatabase, use LOGBUG and include a unique event ID in the message so that the two can be reconciled.\n\nHint 2: If you want to log specific debugging info into a database table, format it as appropriate for a table load and\nroute it to a channel destined for that table. E.g.:\n\n sprintf( tracebuffer, \"'%ld|MEMSHORTAGE'|%d|%d\", ++event_id, bytes_used, max_bytes);\n LOGBUG( 1003, tracebuffer );\n\nFEEDBACK NEEDED! THANK YOU!\n\nLogging classes:\n---------------\n1xx - The PostgreSQL server\n2xx - User-related information\n3xx - Transaction information\n4xx - EXPLAIN results (???)\n9xx - General system alerts\n1000-1999 debugging events\n\nRight now, the following are considered likely candidates,\nsubject to user feedback:\n\nserver info\n Server name, signal ID\n101 - Server started\n102 - Server shutdown\n103 - Signal xxx received\n104 - Server ABEND\n\nuser session\n userid, port or terminal ID, authentication scheme name\n(e.g. md5). 
session ID\n201 - User xxxx connected via port/terminal xxxxxxxx\nauthenticated by aaaaa\n202 - User xxxx disconnected\n203 - FORBIDDEN - connection denied for user xxxx via\nport/terminal xxxxxxxxxx rejected by aaaaaaa\n\nshow commands\n Session ID, command text\n301 - SELECT text\n302 - INSERT text\n303 - UPDATE text\n304 - DELETE text\n\nshow results\n session ID, count or OID. primary/first/only table ID\naffected\n401 - SUCCESS - nnn records retrieved\n402 - SUCCESS - record inserted at OID\n403 - SUCCESS - nnn records updated\n404 - SUCCESS - nnn records deleted\n405 - FORBIDDEN - action xxxxxx denied to user xxxx on table\nxxxxxxxx\n\nexplain\n as below:\n500 EXPLAIN transaction ID sequence cost rows bytes\n\nmiscellaneous\n explanatory text\n900 - Logging configuration file \"ffff\" was not found or\ndenied read access. Using default logging.\n901 - Logging configuration file \"ffff\" could not be\nprocessed - invalid text at line nnn.\n902 - User overrides non-existent message ID nnn\n903 - Channel requests non-existent message ID nnn\n904 - end of section starting on line nnn was not found\n905 - start of section ending on line nnn was not found\n906 - (message from logging configuration file)\n\n1000-1999 - LOGBUG macro \n text - message text\nuser defines as needed - not standardized\n",
"msg_date": "Sun, 24 Oct 1999 14:22:34 -0400",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Logging - events supported"
},
{
"msg_contents": "> Following is an updated list of the messages to be channeled by the proposed logging system.\n> \n> THESE AND *ONLY* THESE are slated for implementation. If you have items you want\n> included, PLEASE LET ME KNOW! As it stands, this is a pretty minimal set.\n> \n> Bear in mind that the logger is NOT a debugger. Logged messages are expected to be related to administrative events,\n> including, but not limited to - server status (including load-balancing and fault\n> reporting), security, user connections, and service requests. Added are the 1xxx class, which is designed to assist the\n> existing debugging system by providing a linkage between the free-form debugging messages and the formalized log system.\n> via the LOGBUG macro, which write to both debugging output AND logging.\n> \n> Hint 1: If you are about to emit a plethora of debugging messages and you want a timestamp and/or entry in a log\n> database, use LOGBUG and include a unique event ID in the message so that the two can be reconciled.\n> \n> Hint 2: If you want to log specific debugging info into a database table, format it as appropriate for a table load and\n> route it to a channel destined for that table. E.g.:\n> \n> sprintf( tracebuffer, \"'%l|MEMSHORTAGE'|%d|%d\", ++event_id, bytes_used, max_bytes);\n> LOGBUG( 1003, tracebuffer );\n> \n> FEEDBACK NEEDED! THANK YOU!\n> \n> Logging classes:\n> ---------------\n> 1xz - The PostgreSQL server\n> 2xx - User-related information\n> 3xx - Transaction information\n> 4xx - EXPLAIN results (???)\n> 9xx - General system alerts\n> 1000-1999 debugging events\n> \n> Right now, the following are considered likely candidates,\n> subject to user feedback:\n> \n> server info\n> Server name, signal ID\n> 101 - Server started\n> 102 - Server shutdown\n> 103 - Signal xxx received\n> 104 - Server ABEND\n ^^^^^\n\nThis reminds too much the old IBM dinosaurs. 
Maybe `crash' is more modern.\n\n\n> user session\n> userid, port or terminal ID, authentication scheme name\n> (e.g. md5). session ID\n> 201 - User xxxx connected via port/terminal xxxxxxxx\n> authenticated by aaaaa\n> 202 - User xxxx disconnected\n> 203 - FORBIDDEN - connection denied for user xxxx via\n> port/terminal xxxxxxxxxx rejected by aaaaaaa\n> \n> show commands\n> Session ID, command text\n> 301 - SELECT text\n> 302 - INSERT text\n> 303 - UPDATE text\n> 304 - DELETE text\n\n\nUtility commands? Sequences? Table alteration commands?\n\n\n> show results\n> session ID, count or OID. primary/first/only table ID\n> affected\n> 401 - SUCCESS - nnn records retrieved\n> 402 - SUCCESS - record inserted at OID\n> 403 - SUCCESS - nnn records updated\n> 404 - SUCCESS - nnn records deleted\n> 405 - FORBIDDEN - action xxxxxx denied to user xxxx on table\n> xxxxxxxx\n> \n> explain\n> as below:\n> 500 EXPLAIN transaction ID sequence cost rows bytes\n> \n> miscellaneous\n> explanatory text\n> 900 - Logging configuration file \"ffff\" was not found or\n> denied read access. Using default logging.\n> 901 - Logging configuration file \"ffff\" could not be\n> processed - invalid text at line nnn.\n> 902 - User overrides non-existent message ID nnn\n> 903 - Channel requests non-existent message ID nnn\n> 904 - end of section starting on line nnn was not found\n> 905 - start of section ending on line nnn was not found\n> 906 - (message from logging configuration file)\n> \n> 1000-1999 - LOGBUG macro \n> text - message text\n> user defines as needed - not standardized\n> \n> ************\n> \n\n\nI suggest also the following things:\n\n1)\teach log entry should be a single line. This would greatly simplify\n\tthe automatic processing of log files using standard unix tools,\n\tincluding loading entries into a database table.\n\n2)\teach entry should be prefixed by a timestamp and the backend pid,\n\tmore or less like the syslog entries. 
I suggest the following\n\tformat, which is the one currently implemented by elog_timestamp()\n\n\t991020.14:29:56.699 [7172] started: host=127.0.0.1 user=dz database=dz\n\t991020.14:31:02.723 [7172] query: select * from pg_user\n\n3)\tthe logging level can be changed on-the-fly by sending a SIGHUP to\n\tthe postmaster and then automatically to all the backends. Currently\n\tit reloads the pg_options file, which was originally designed exactly\n\tfor controlling the debug and log messages without restarting the\n\tpostmaster and all backends, but it could also reload any other\n\tconfiguration file.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Sun, 24 Oct 1999 23:35:24 +0200 (MEST)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Logging - events supported"
},
{
"msg_contents": "\n\nMassimo Dal Zotto wrote:\n> \n...\n> > 104 - Server ABEND\n> ^^^^^\n> \n> This reminds too much the old IBM dinosaurs. Maybe `crash' is more modern.\n> \n\nMy past lies exposed! But that's locale-specific. You can just as easily make it report\n\"La comedia es finito\". Or whatever.\n\n> \n> I suggest also the following things:\n> \n> 1) each log entry should be a single line. This would greatly simplify\n> the automatic processing of log files using standard unix tools,\n> including loading entries into a database table.\n> \n> 2) each entry should be prefixed by a timestamp and the backend pid,\n> more or less like the syslog entries. I suggest the following\n> format, which is the one currently implemented by elog_timestamp()\n> \n> 991020.14:29:56.699 [7172] started: host=127.0.0.1 user=dz database=dz\n> 991020.14:31:02.723 [7172] query: select * from pg_user\n>\n\nWell, again, the format of the log output is under the administrator's control. If you\nlook at how Apache does it, you'll see the idea. Only the \"magic codes\" have changed to\nreflect the differing types of data.\n \n> 3) the logging level can be changed on-the-fly by sending a SIGHUP to\n> the postmaster and then automatically to all the backends. Currently\n> it reloads the pg_options file, which was originally designed exactly\n> for controlling the debug and log messages without restarting the\n> postmaster and all backends, but it could also reload any other\n> configuration file.\n>\n\nThis was Tom's suggestion as well. It seems good. Unless something prevents it,\nthat is how it shall work.\n\n Thanks,\n\n Tim Holloway\n",
"msg_date": "Sun, 24 Oct 1999 19:08:24 -0400",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Logging - events supported"
},
{
"msg_contents": "\nNot sure if I missed something, but it would be nice to be able to log\nperformance information such as \"query 'XYZ' performed a table scan on\na 3,000,000 row table\", \"query 'XYZ' took 3000 seconds to complete\",\n\"query 'XYZ' forced a sort of a 4,000,000 row table\", etc., where the\nthresholds could be set by the administrator. This would allow you to\nperiodically audit your server to make sure that there were sufficient\nindices and that users/programmers were not writing really bad\nqueries.\n\nAlthough I am not sure how difficult adding this to the backend is,\nI would love to be able to hook a tool onto the logfile and see what\nbad queries were being run while I ran an application against the\nserver. This is especially useful if my application allows dynamic\nqueries.\n\n\t\t-ben\n",
"msg_date": "Mon, 25 Oct 1999 07:39:02 -0400",
"msg_from": "Ben Bennett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Logging - events supported"
},
{
"msg_contents": "At 20:22 +0200 on 24/10/1999, Tim Holloway wrote:\n\n\n> show commands\n> Session ID, command text\n> 301 - SELECT text\n> 302 - INSERT text\n> 303 - UPDATE text\n> 304 - DELETE text\n\nFWIW, don't forget CREATE, ALTER, DROP - DDL items in general. Nor COPY in\nand out, perhaps SET.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n",
"msg_date": "Wed, 27 Oct 1999 17:30:16 +0200",
"msg_from": "Herouth Maoz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Logging - events supported"
}
] |
[
{
"msg_contents": "On Oct 23, Mike Mascari mentioned:\n\n> The following patch extends the COMMENT ON functionality to the\n> rest of the database objects beyond just tables, columns, and views. The\n> grammar of the COMMENT ON statement now looks like:\n> \n> COMMENT ON [ \n> [ DATABASE | INDEX | RULE | SEQUENCE | TABLE | TYPE | VIEW ] <objname> |\n> \n> COLUMN <relation>.<attribute> |\n> AGGREGATE <aggname> <aggtype> | \n> FUNCTION <funcname> (arg1, arg2, ...) | \n> OPERATOR <op> (leftoperand_typ rightoperand_typ) | \n> TRIGGER <triggername> ON <relname> \n> ] IS 'text'\n\nIn related news I'd like to point out that psql's \\dd command now supports\naggregates, functions, operators, types, relations (tables, views,\nindices, sequences), rules, and triggers. In addition, all the other \\d?\ncommands (\\da, \\df, \\dT, \\do, \\dtvsiS), as well as \\l, have comments\ndisplay switchable. Attribute comments can be seen in \\d in a similar\nfashion. You can also give a comment on \\lo_import, which can then be seen\nin \\lo_list (=\\dl). Seems like all the bases are covered.\n\nJust to confirm a few things here: Are you keying rule comments on\npg_rewrite.oid? Are operator comments keyed on the oid of the underlying\nfunction? (Perhaps that could even be changed so you can put a comment on\nthe operator and a note like \"implementation of %^*& operator\" on the\nfunction. Just a thought.)\n\nNow we just have to stick a whole bunch of comments on all system stuff.\nWhere would be a good place to do this? Where are all the comments on the\nbuilt-in operators generated?\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sun, 24 Oct 1999 20:57:51 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] COMMENT ON patch"
},
{
"msg_contents": "> In related news I'd like to point out that psql's \\dd command now supports\n> aggregates, functions, operators, types, relations (tables, views,\n> indices, sequences), rules, and triggers. In addition all the other \\d?\n> commands (\\da, \\df, \\dT, \\do, \\dtvsiS), as well as \\l, have comments\n> display switchable. Attribute comments can be seen in \\d in a similar\n> fashion. You can also give a comment on \\lo_import which can then be seen\n> in \\lo_list (=\\dl). Seems like all the bases are covered.\n\nOK, I think we need help on this. I have added documentation in\npsqlHelp.c and comment.sgml. You are mentioning some new psql flags\nthat I don't know we had. Can you send info on that. psql.c and\npsql-ref.sgml are two areas that need additions based on what you said.\n\n> Now we just have to stick a whole bunch of comments on all system stuff.\n> Where would be a good place to do this? Where are all the comments on the\n> built-in operators generated?\n\nOK, right now, comments are in src/include/catalog as DESC entries. \nThese are pulled out by OID during creation of the *.bki files, and\ninitdb does a COPY to load the description table.\n\nOne limitation now is that we can only comment objects that have a fixed\noid in the system tables because we define the oid at compile time\ncoming from the system table.\n\nOne idea I had in the past was to store the object type and object name\ninstead in a file during compile, and run some UPDATE during initdb that\nlooked up the oid of the object type and name, and stuffed that\ninitdb-supplied oid into the pg_description table. I think that is the\nonly way you are going to be able to do this properly.\n\nSeems your COMMENT command already has this done. You could just dump a\nfile containing COMMENT lines as part of the *.bki compile process, and have\ninitdb run the COMMENT file. That is the best way, I think.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 00:34:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] COMMENT ON patch"
},
{
"msg_contents": "On Tue, 26 Oct 1999, Bruce Momjian wrote:\n\n> > In related news I'd like to point out that psql's \\dd command now supports\n> > aggregates, functions, operators, types, relations (tables, views,\n> > indices, sequences), rules, and triggers. In addition all the other \\d?\n> > commands (\\da, \\df, \\dT, \\do, \\dtvsiS), as well as \\l, have comments\n> > display switchable. Attribute comments can be seen in \\d in a similar\n> > fashion. You can also give a comment on \\lo_import which can then be seen\n> > in \\lo_list (=\\dl). Seems like all the bases are covered.\n> \n> OK, I think we need help on this. I have added documentation in\n> psqlHelp.c and comment.sgml. You are mentioning some new psql flags\n> that I don't know we had. Can you send info on that. psql.c and\n> psql-ref.sgml are two areas that need additions based on what you said.\n\nI implemented sort of shell variables into psql (I mentioned it in the\nchangelogs, but those were admittedly quite long), so you can set\nvariables like:\n\\set foo 'bar'\n\\echo $foo\n\\echo \"foo is now ${foo}\"\netc.\n\nThe initial motivation was that I would run out of mnemonic flags pretty\nsoon, so most psql state is now in a variable:\n\\set quiet on (-q switch)\n\\set echo on (-e switch)\n\\set echo_secret on (-E switch)\netc.\n(In fact you don't have to set them to \"on\", anything works. To unset them\njust write \\set varname)\nThe cmd line switches are unaffected, but this way you can also set them\nwithin psql. There are also a few variables representing new\nfunctionality:\n\\set description on\nwill turn on the display of the object descriptions. There are a few\nothers, too.\n\nThat's just what I meant with the above. Of course one fine day very soon\nI'll formally document all of this in DocBook. There is _a lot_ of new\nstuff, so I might actually end up doing a lot of new documenting. You\nmight want to save yourself the work right now.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 26 Oct 1999 13:33:08 +0200 (MET DST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] COMMENT ON patch"
}
] |
[
{
"msg_contents": "I have added a new feature that I suggested a few weeks ago (and didn't\nget a lot of feedback about --- if you didn't like the idea, you shoulda\ncomplained then ;-)). To wit, there is now an internal version number\nthat can be bumped anytime anyone makes an initdb-forcing change.\n\nIf we are faithful about changing this number when necessary, then\ndevelopers will not get burnt by failing to notice \"you need to initdb\"\nmessages in the pghackers list. I know some people have wasted hours\nthat way in the past.\n\nThe new number lives in src/include/catalog/catversion.h, and I think\nI will just copy the comments in that file:\n\n * catversion.h\n *\t \"Catalog version number\" for Postgres.\n *\n * The catalog version number is used to flag incompatible changes in\n * the Postgres system catalogs. Whenever anyone changes the format of\n * a system catalog relation, or adds, deletes, or modifies standard\n * catalog entries in such a way that an updated backend wouldn't work\n * with an old database (or vice versa), the catalog version number\n * should be changed. The version number stored in pg_control by initdb\n * is checked against the version number compiled into the backend at\n * startup time, so that a backend can refuse to run in an incompatible\n * database.\n *\n * The point of this feature is to provide a finer grain of compatibility\n * checking than is possible from looking at the major version number\n * stored in PG_VERSION. It shouldn't matter to end users, but during\n * development cycles we usually make quite a few incompatible changes\n * to the contents of the system catalogs, and we don't want to bump the\n * major version number for each one. What we can do instead is bump\n * this internal version number. This should save some grief for\n * developers who might otherwise waste time tracking down \"bugs\" that\n * are really just code-vs-database incompatibilities.\n *\n * The rule for developers is: if you commit a change that requires\n * an initdb, you should update the catalog version number (as well as\n * notifying the pghackers mailing list, which has been the informal\n * practice for a long time).\n *\n * The catalog version number is placed here since modifying files in\n * include/catalog is the most common kind of initdb-forcing change.\n * But it could be used to protect any kind of incompatible change in\n * database contents or layout, such as altering tuple headers.\n\nNaturally, you need to initdb after retrieving this update, but the\nsystem will now tell you so if you forget! For example:\n\nFATAL 2: database was initialized with CATALOG_VERSION_NO 0,\n but the backend was compiled with CATALOG_VERSION_NO 199910241.\n looks like you need to initdb.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 24 Oct 1999 16:44:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Catalog version numbering added (committers READ THIS)"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Naturally, you need to initdb after retrieving this update, but the\n> system will now tell you so if you forget! For example:\n> \n> FATAL 2: database was initialized with CATALOG_VERSION_NO 0,\n> but the backend was compiled with CATALOG_VERSION_NO 199910241.\n> looks like you need to initdb.\n> \n> regards, tom lane\n\nWill the backend really tell me \"regards, tom lane\" ;)\n\n----------\nHannu\n",
"msg_date": "Sun, 24 Oct 1999 21:17:02 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Catalog version numbering added (committers READ THIS)"
},
{
"msg_contents": "\nGood idea.\n\n\n> I have added a new feature that I suggested a few weeks ago (and didn't\n> get a lot of feedback about --- if you didn't like the idea, you shoulda\n> complained then ;-)). To wit, there is now an internal version number\n> that can be bumped anytime anyone makes an initdb-forcing change.\n> \n> If we are faithful about changing this number when necessary, then\n> developers will not get burnt by failing to notice \"you need to initdb\"\n> messages in the pghackers list. I know some people have wasted hours\n> that way in the past.\n> \n> The new number lives in src/include/catalog/catversion.h, and I think\n> I will just copy the comments in that file:\n> \n> * catversion.h\n> *\t \"Catalog version number\" for Postgres.\n> *\n> * The catalog version number is used to flag incompatible changes in\n> * the Postgres system catalogs. Whenever anyone changes the format of\n> * a system catalog relation, or adds, deletes, or modifies standard\n> * catalog entries in such a way that an updated backend wouldn't work\n> * with an old database (or vice versa), the catalog version number\n> * should be changed. The version number stored in pg_control by initdb\n> * is checked against the version number compiled into the backend at\n> * startup time, so that a backend can refuse to run in an incompatible\n> * database.\n> *\n> * The point of this feature is to provide a finer grain of compatibility\n> * checking than is possible from looking at the major version number\n> * stored in PG_VERSION. It shouldn't matter to end users, but during\n> * development cycles we usually make quite a few incompatible changes\n> * to the contents of the system catalogs, and we don't want to bump the\n> * major version number for each one. What we can do instead is bump\n> * this internal version number. This should save some grief for\n> * developers who might otherwise waste time tracking down \"bugs\" that\n> * are really just code-vs-database incompatibilities.\n> *\n> * The rule for developers is: if you commit a change that requires\n> * an initdb, you should update the catalog version number (as well as\n> * notifying the pghackers mailing list, which has been the informal\n> * practice for a long time).\n> *\n> * The catalog version number is placed here since modifying files in\n> * include/catalog is the most common kind of initdb-forcing change.\n> * But it could be used to protect any kind of incompatible change in\n> * database contents or layout, such as altering tuple headers.\n> \n> Naturally, you need to initdb after retrieving this update, but the\n> system will now tell you so if you forget! For example:\n> \n> FATAL 2: database was initialized with CATALOG_VERSION_NO 0,\n> but the backend was compiled with CATALOG_VERSION_NO 199910241.\n> looks like you need to initdb.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 00:49:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Catalog version numbering added (committers READ THIS)"
}
] |
[
{
"msg_contents": "--- Peter Eisentraut <[email protected]> wrote:\n...\n> Just to confirm a few things here: Are you keying rule comments on\n> pg_rewrite.oid? Are operator comments keyed on the oid of the underlying\n> function? (Perhaps that could even be changed so you can put a comment\n> on\n> the operator and a note like \"implementation of %^*& operator\" on the\n> function. Just a thought.)\n> \n> Now we just have to stick a whole bunch of comments on all system stuff.\n> Where would be a good place to do this? Where are all the comments on\n> the\n> built-in operators generated?\n\n...\nHmm, this is where I'm getting the oid's:\n\nDATABASE -- pg_database\nINDEX -- pg_class\nRULE -- pg_rewrite\nSEQUENCE -- pg_class\nTABLE -- pg_class\nTYPE -- pg_type\nVIEW -- pg_class\nCOLUMN -- pg_attribute\nAGGREGATE -- pg_aggregate\nFUNCTION -- pg_proc\nOPERATOR -- pg_operator\nTRIGGER -- pg_trigger\n\nSo in the example you gave above, you could put a comment\non each of the two functions which compose the operator\nand a command on the operator itself.\n\nI still need to write the SGML and change pg_dump to\ngenerate COMMENT ON statements, and also regression tests,\nbut the functionality should be complete. Just glancing\nover the Win32 ODBC driver, it appears that SQLTables() and\nSQLColumns() is not currently fetching the associated\ndescription from pg_description for the REMARKS parameter\nto the call. Perhaps this could be changed? \n\nHope that helps, \n\nMike Mascari (looking forward to the new psql...)\n([email protected])\n\n\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n",
"msg_date": "Sun, 24 Oct 1999 14:54:56 -0700 (PDT)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] COMMENT ON patch"
},
{
"msg_contents": "\n\nMike Mascari wrote:\n> \n> --- Peter Eisentraut <[email protected]> wrote:\n> ...\n> > Just to confirm a few things here: Are you keying rule comments on\n> > pg_rewrite.oid? Are operator comments keyed on the oid of the underlying\n> > function? (Perhaps that could even be changed so you can put a comment\n> > on\n> > the operator and a note like \"implementation of %^*& operator\" on the\n> > function. Just a thought.)\n> >\n> > Now we just have to stick a whole bunch of comments on all system stuff.\n> > Where would be a good place to do this? Where are all the comments on\n> > the\n> > built-in operators generated?\n> \n> ...\n> Hmm, this is where I'm getting the oid's:\n> \n> DATABASE -- pg_database\n> INDEX -- pg_class\n> RULE -- pg_rewrite\n> SEQUENCE -- pg_class\n> TABLE -- pg_class\n> TYPE -- pg_type\n> VIEW -- pg_class\n> COLUMN -- pg_attribute\n> AGGREGATE -- pg_aggregate\n> FUNCTION -- pg_proc\n> OPERATOR -- pg_operator\n> TRIGGER -- pg_trigger\n> \n> So in the example you gave above, you could put a comment\n> on each of the two functions which compose the operator\n> and a command on the operator itself.\n> \n> I still need to write the SGML and change pg_dump to\n> generate COMMENT ON statements, and also regression tests,\n> but the functionality should be complete. Just glancing\n> over the Win32 ODBC driver, it appears that SQLTables() and\n> SQLColumns() is not currently fetching the associated\n> description from pg_description for the REMARKS parameter\n> to the call. Perhaps this could be changed?\n> \n> Hope that helps,\n> \n> Mike Mascari (looking forward to the new psql...)\n> ([email protected])\n> \n> =====\n> \n> __________________________________________________\n> Do You Yahoo!?\n> Bid and sell for free at http://auctions.yahoo.com\n\n\nIt wouldn't be hard to add the pg_description to the remarks.\nDoes this field exist for all previous postgres releases (specifically,\n6.2,6.3, and 6.4) ??\n\nByron\n",
"msg_date": "Sun, 24 Oct 1999 21:09:11 -0400",
"msg_from": "Byron Nikolaidis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] COMMENT ON patch"
},
{
"msg_contents": "On Sun, 24 Oct 1999, Mike Mascari wrote:\n\n> Hmm, this is where I'm getting the oid's:\n> \n> DATABASE -- pg_database\n> INDEX -- pg_class\n> RULE -- pg_rewrite\n> SEQUENCE -- pg_class\n> TABLE -- pg_class\n> TYPE -- pg_type\n> VIEW -- pg_class\n> COLUMN -- pg_attribute\n> AGGREGATE -- pg_aggregate\n> FUNCTION -- pg_proc\n> OPERATOR -- pg_operator\n> TRIGGER -- pg_trigger\n> \n> So in the example you gave above, you could put a comment\n> on each of the two functions which compose the operator\n> and a command on the operator itself.\n\nVery nice, BUT: In the old psql the assumption was that operator comments\nare keyed on the underlying function(s?). Since a lot of operators seem to\nhave comments on them by default, one would have to change this somehow.\nTry \\do and see for yourself. The fix should be rather simple but I'm not\nsure where those descriptions are generated actually.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 25 Oct 1999 10:49:41 +0200 (MET DST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] COMMENT ON patch"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On Sun, 24 Oct 1999, Mike Mascari wrote:\n>> So in the example you gave above, you could put a comment\n>> on each of the two functions which compose the operator\n>> and a command on the operator itself.\n\nTwo functions? An operator only has one underlying function.\n(Aggregates have as many as three though.)\n\n> Try \\do and see for yourself. The fix should be rather simple but I'm not\n> sure where those descriptions are generated actually.\n\nThe default contents of pg_description come from the DESCR() macros in\ninclude/catalog/*.h. It looks like only pg_proc and pg_type have any\nuseful info in them in the current state of the source. I'm guessing\nthat psql's \\do actually looks for a description attached to the\nunderlying function, rather than one attached to the operator.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 25 Oct 1999 11:02:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] COMMENT ON patch "
},
{
"msg_contents": "> Hmm, this is where I'm getting the oid's:\n> \n> DATABASE -- pg_database\n> INDEX -- pg_class\n> RULE -- pg_rewrite\n> SEQUENCE -- pg_class\n> TABLE -- pg_class\n> TYPE -- pg_type\n> VIEW -- pg_class\n> COLUMN -- pg_attribute\n> AGGREGATE -- pg_aggregate\n> FUNCTION -- pg_proc\n> OPERATOR -- pg_operator\n> TRIGGER -- pg_trigger\n> \n> So in the example you gave above, you could put a comment\n> on each of the two functions which compose the operator\n> and a command on the operator itself.\n> \n> I still need to write the SGML and change pg_dump to\n\nPlease update the sgml and psqlHelp.c files I have already modified for\nyou. I thought you didn't want to do it, so I did it.\n\n\n> generate COMMENT ON statements, and also regression tests,\n> but the functionality should be complete. Just glancing\n> over the Win32 ODBC driver, it appears that SQLTables() and\n> SQLColumns() is not currently fetching the associated\n> description from pg_description for the REMARKS parameter\n> to the call. Perhaps this could be changed? \n\nHmm. Never heard of that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 00:52:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] COMMENT ON patch"
},
{
"msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > On Sun, 24 Oct 1999, Mike Mascari wrote:\n> >> So in the example you gave above, you could put a comment\n> >> on each of the two functions which compose the operator\n> >> and a command on the operator itself.\n> \n> Two functions? An operator only has one underlying function.\n> (Aggregates have as many as three though.)\n> \n> > Try \\do and see for yourself. The fix should be rather simple but I'm not\n> > sure where those descriptions are generated actually.\n> \n> The default contents of pg_description come from the DESCR() macros in\n> include/catalog/*.h. It looks like only pg_proc and pg_type have any\n> useful info in them in the current state of the source. I'm guessing\n> that psql's \\do actually looks for a description attached to the\n> underlying function, rather than one attached to the operator.\n\nI can only get oids that are fixed in the include files. Not sure if it\nlooks at the function behind the operator. I just don't remember.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 01:04:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] COMMENT ON patch"
}
] |
[
{
"msg_contents": "I have finished another round of work for indefinitely-long queries.\nWe can now do things like SELECT textlen(' ... 200K string here ... ')\n--- and get the right answer :-). Still can't actually *store* that\n200K string in a table though.\n\nHere are the other loose ends I'm aware of:\n\npg_dump has a whole bunch of fixed-size buffers, which means it will\nfail to dump extremely complex table definitions &etc. This is\ndefinitely a \"must fix\" item. Michael Ansley is working on it.\n\necpg's lexer still causes YY_USES_REJECT to be defined, even though the\nmain lexer does not. Per previous discussions, this means it's unable\nto deal with individual lexical tokens exceeding 16K or so. I am not\nsure this is worth worrying about. For example, if you break up a\nstring constant into multiple lines,\n\t'here is a'\n\t' really really'\n\t' really really long string'\nthen the 16K limit only applies to each line individually (I think).\nAnd data values that you aren't writing literally in the ECPG source\ncode aren't constrained either. Still, if it's easy to alter the ECPG\nlexical definition to avoid using REJECT, it might be worth doing.\n\nThe ODBC interface contains a lot of apparently-no-longer-valid\nassumptions about maximum query length; these need to be looked at\nby someone who's familiar with ODBC, which I am not. Note that some\nof its limits are associated with maximum tuple length, which means\nthey're not broken quite yet --- but it would be a good idea to\nflag the changes that will be needed when we have long tuples.\nThese symbols in ODBC need to be looked at and possibly eliminated:\nSQL_PACKET_SIZE MAX_MESSAGE_LEN MAX_QUERY_SIZE ERROR_MESSAGE_LENGTH\nMAX_STATEMENT_LEN TEXT_FIELD_SIZE MAX_VARCHAR_SIZE DRV_VARCHAR_SIZE\nDRV_LONGVARCHAR_SIZE MAX_CONNECT_STRING MAX_FIELDS\n\nThe Python interface needs to eliminate its fixed-size query buffers\n(look for MAX_BUFFER_SIZE). I'm not touching this since I don't\nhave Python installed to test with.\n\nAnd that's about it. Hard limits on query length are history!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 24 Oct 1999 23:30:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Status report: long-query changes"
}
] |
[
{
"msg_contents": "I have fixed a problem with the PDF file not properly displaying certain\ncharacters. New copy uploaded. \n\nThe book is on our web site now in HTML and PDF formats. It will be\nupdated automatically every night.\n\nGo to:\n\n http://www.postgresql.org/docs\n\nUnder documentation, you will see the entry \"Published Book\".\n\nFrom our main web site, it is under Info Central/Documentation.\n\nComments welcomed.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 24 Oct 1999 23:47:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Book on web site"
}
] |
[
{
"msg_contents": "--- Byron Nikolaidis <[email protected]> wrote:\n...\n> > I still need to write the SGML and change pg_dump to\n> > generate COMMENT ON statements, and also regression tests,\n> > but the functionality should be complete. Just glancing\n> > over the Win32 ODBC driver, it appears that SQLTables() and\n> > SQLColumns() is not currently fetching the associated\n> > description from pg_description for the REMARKS parameter\n> > to the call. Perhaps this could be changed?\n> \n> It wouldn't be hard to add the pg_description to the remarks.\n> Does this field exist for all previous postgres releases (specifically,\n> 6.2,6.3, and 6.4) ??\n> \n> Byron\n...\n\nIt appears, just from a spot check of the initial database structure\ncreated from old RPMS on rpmfind.net that pg_description was added\nafter 6.2 whose \"provides\" looks like this (for 6.2.1):\n\n...\n/var/lib/postgresql/data/base/template1/pg_attrdefind\n/var/lib/postgresql/data/base/template1/pg_attrelidind\n/var/lib/postgresql/data/base/template1/pg_attribute\n/var/lib/postgresql/data/base/template1/pg_class\n/var/lib/postgresql/data/base/template1/pg_classnameind\n/var/lib/postgresql/data/base/template1/pg_classoidind\n/var/lib/postgresql/data/base/template1/pg_index\n/var/lib/postgresql/data/base/template1/pg_inheritproc\n/var/lib/postgresql/data/base/template1/pg_inherits\n/var/lib/postgresql/data/base/template1/pg_internal.init\n...\n\nwhile for 6.3.1, the initial database structure looks like:\n\n...\n/var/lib/pgsql/base/template1/pg_class\n/var/lib/pgsql/base/template1/pg_class_oid_index\n/var/lib/pgsql/base/template1/pg_class_relname_index\n/var/lib/pgsql/base/template1/pg_description\n/var/lib/pgsql/base/template1/pg_description_objoid_index\n/var/lib/pgsql/base/template1/pg_index\n/var/lib/pgsql/base/template1/pg_inheritproc\n...\n\nAnd of course, it appears also in 6.4.x, so I assume that it was added \nbetween the 6.2 and 6.3 releases. Is that going to be a problem?\n\nHope that helps,\n\nMike Mascari\n([email protected])\n\n\n\n\n\n\n\n\n\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n",
"msg_date": "Sun, 24 Oct 1999 22:18:15 -0700 (PDT)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] COMMENT ON patch"
},
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n>> Does this field exist for all previous postgres releases (specifically,\n>> 6.2,6.3, and 6.4) ??\n\n> And of course, it appears also in 6.4.x, so I assume that it was added \n> between the 6.2 and 6.3 releases. Is that going to be a problem?\n\nFor Peter's purposes, it's unnecessary to worry about anything older\nthan 6.4, since he's depending on an up-to-date libpq and current libpq\nwon't talk to anything older than 6.4.\n\nByron might still care about 6.2 ... I dunno whether ODBC currently\nreally works with 6.2 or not, or whether it needs to keep doing so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 25 Oct 1999 01:36:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] COMMENT ON patch "
},
{
"msg_contents": "\n\nTom Lane wrote:\n> \n> Mike Mascari <[email protected]> writes:\n> >> Does this field exist for all previous postgres releases (specifically,\n> >> 6.2,6.3, and 6.4) ??\n> \n> > And of course, it appears also in 6.4.x, so I assume that it was added\n> > between the 6.2 and 6.3 releases. Is that going to be a problem?\n> \n> For Peter's purposes, it's unnecessary to worry about anything older\n> than 6.4, since he's depending on an up-to-date libpq and current libpq\n> won't talk to anything older than 6.4.\n> \n> Byron might still care about 6.2 ... I dunno whether ODBC currently\n> really works with 6.2 or not, or whether it needs to keep doing so.\n> \n> regards, tom lane\n\n\nIt still really works with 6.2! But whether it needs to, is another\nquestion!\n\nI'm not sure if anyone cares if it works with 6.2 (even 6.3 for that\nmatter) or not.\n\nByron\n",
"msg_date": "Mon, 25 Oct 1999 19:36:52 -0400",
"msg_from": "Byron Nikolaidis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] COMMENT ON patch"
},
{
"msg_contents": "> --- Byron Nikolaidis <[email protected]> wrote:\n> ...\n> > > I still need to write the SGML and change pg_dump to\n> > > generate COMMENT ON statements, and also regression tests,\n> > > but the functionality should be complete. Just glancing\n> > > over the Win32 ODBC driver, it appears that SQLTables() and\n> > > SQLColumns() is not currently fetching the associated\n> > > description from pg_description for the REMARKS parameter\n> > > to the call. Perhaps this could be changed?\n> > \n> > It wouldn't be hard to add the pg_description to the remarks.\n> > Does this field exist for all previous postgres releases (specifically,\n> > 6.2,6.3, and 6.4) ??\n\nHISTORY file says added in release 6.3.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 01:00:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] COMMENT ON patch"
}
] |
[
{
"msg_contents": "--- Byron Nikolaidis <[email protected]> wrote:\n...\n> > I still need to write the SGML and change pg_dump to\n> > generate COMMENT ON statements, and also regression tests,\n> > but the functionality should be complete. Just glancing\n> > over the Win32 ODBC driver, it appears that SQLTables() and\n> > SQLColumns() is not currently fetching the associated\n> > description from pg_description for the REMARKS parameter\n> > to the call. Perhaps this could be changed?\n> \n> It wouldn't be hard to add the pg_description to the remarks.\n> Does this field exist for all previous postgres releases (specifically,\n> 6.2,6.3, and 6.4) ??\n> \n> Byron\n...\n\nIt appears, just from a spot check of the initial database structure\ncreated from old RPMS on rpmfind.net that pg_description was added\nafter 6.2 whose \"provides\" looks like this (for 6.2.1):\n\n...\n/var/lib/postgresql/data/base/template1/pg_attrdefind\n/var/lib/postgresql/data/base/template1/pg_attrelidind\n/var/lib/postgresql/data/base/template1/pg_attribute\n/var/lib/postgresql/data/base/template1/pg_class\n/var/lib/postgresql/data/base/template1/pg_classnameind\n/var/lib/postgresql/data/base/template1/pg_classoidind\n/var/lib/postgresql/data/base/template1/pg_index\n/var/lib/postgresql/data/base/template1/pg_inheritproc\n/var/lib/postgresql/data/base/template1/pg_inherits\n/var/lib/postgresql/data/base/template1/pg_internal.init\n...\n\nwhile for 6.3.1, the initial database structure looks like:\n\n...\n/var/lib/pgsql/base/template1/pg_class\n/var/lib/pgsql/base/template1/pg_class_oid_index\n/var/lib/pgsql/base/template1/pg_class_relname_index\n/var/lib/pgsql/base/template1/pg_description\n/var/lib/pgsql/base/template1/pg_description_objoid_index\n/var/lib/pgsql/base/template1/pg_index\n/var/lib/pgsql/base/template1/pg_inheritproc\n...\n\nAnd of course, it appears also in 6.4.x, so I assume that it was added \nbetween the 6.2 and 6.3 releases. Is that going to be a problem?\n\nHope that helps,\n\nMike Mascari\n([email protected])\n\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n",
"msg_date": "Sun, 24 Oct 1999 22:18:57 -0700 (PDT)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] COMMENT ON patch"
}
] |
[
{
"msg_contents": "unsubscribe\n",
"msg_date": "Mon, 25 Oct 1999 09:38:38 -0400",
"msg_from": "\"Nguyen, Thuan X\" <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "Hi Peter,\n\nCould you explain me the use of these 3 functions:\n\nstrcasecmp\nstrtol\nputenv (from nextstep/port.c)\n\n\nRegards,\n______________________________\nStéphane FILLON\nmailto:[email protected]\n\n\n",
"msg_date": "Tue, 26 Oct 1999 04:11:08 +1100",
"msg_from": "=?iso-8859-1?Q?St=E9phane_FILLON?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: mv backend/port ../../"
}
] |
[
{
"msg_contents": "--- Tom Lane <[email protected]> wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > On Sun, 24 Oct 1999, Mike Mascari wrote:\n> >> So in the example you gave above, you could put a comment\n> >> on each of the two functions which compose the operator\n> >> and a command on the operator itself.\n> \n> Two functions? An operator only has one underlying function.\n> (Aggregates have as many as three though.)\n\nI'm sorry...it was late a night. I meant you could comment on left\nand right hand sides of the operator (the types) as well as the function\nand also on the operator itself. I also spelled comment as command)...\n\n> \n> > Try \\do and see for yourself. The fix should be rather simple but I'm\n> not\n> > sure where those descriptions are generated actually.\n> \n> The default contents of pg_description come from the DESCR() macros in\n> include/catalog/*.h. It looks like only pg_proc and pg_type have any\n> useful info in them in the current state of the source. I'm guessing\n> that psql's \\do actually looks for a description attached to the\n> underlying function, rather than one attached to the operator.\n\nPerhaps this behavior should continue. But I thought it would be \nnice to comment on the function of the operator without respect to the\nfunction.\n\nMike Mascari\n([email protected])\n\n\n\n\n\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n",
"msg_date": "Mon, 25 Oct 1999 12:30:56 -0700 (PDT)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] COMMENT ON patch "
}
] |
[
{
"msg_contents": "Would it be objectionable if I altered the format of the pg_options file slightly?\nI feel the need to handle a somewhat more complex syntax for the logging subsystem.\n\nWhat I'm proposing is to wrap the existing stuff in a backwards-compatible manner,\nbut extend it. Like so:\n\n---------------------------------------------------\n# postgresql options\n\ndebugging {\n\tfooparam+\n\tbarswitch\n\tdumplevel = 11\n}\n\nlogging {\n\t# details to follow\n}\n---------------------------------------------------\n\nAlso, is YACC sufficently thread-safe that if a SIGHUP starts\nparsing options it won't collide with another task's in-progress\nparsing of, say a SELECT statement?\n\n Thanks,\n\n Tim Holloway\n",
"msg_date": "Mon, 25 Oct 1999 20:01:15 -0400",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Logging - pg_options format change?"
},
{
"msg_contents": "Tim Holloway <[email protected]> writes:\n> Would it be objectionable if I altered the format of the pg_options\n> file slightly? I feel the need to handle a somewhat more complex\n> syntax for the logging subsystem.\n\nWhile I'm not particularly wedded to the pg_options format, I wonder\nwhether it wouldn't be a better idea to create a separate file for\nthe logging control data. If I'm reading your proposal correctly,\nthe backend would no longer parse existing pg_options files --- and\nthat's certain to make dbadmins unhappy, even if the fix is easy.\nUpgrades are always stressful enough, even without added complications\nlike forced changes to config files.\n\nYou could probably tweak the syntax so that an existing pg_options\nfile is still valid, but that might be a bit too klugy. What's\nwrong with having two separate files? We can assume that this isn't\na performance-critical path, I think.\n\n> Also, is YACC sufficently thread-safe that if a SIGHUP starts\n> parsing options it won't collide with another task's in-progress\n> parsing of, say a SELECT statement?\n\nDon't even think of going there. Even if yacc/bison code itself can be\nmade reentrant (which I doubt; it's full of static variables) you'd also\nhave to assume that large chunks of libc are reentrant --- malloc() and\nstdio in particular --- and I know for a fact that you *cannot* assume\nthat. There might be some platforms where it will work, but many others\nwon't.\n\nBasically, the only thing that's really safe for a signal handler to do\nis set an int flag to TRUE for a test in the main control paths to\nnotice at some later (hopefully not too much later) time. The\nQueryCancel flag in the existing Postgres code is an example.\n\nFor the purposes of logging, I see no reason why it wouldn't be\ngood enough to reread the config file at the start of the next\nquery-execution cycle. There's no need to take the risks of doing\nanything unportable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Oct 1999 02:16:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Logging - pg_options format change? "
},
{
"msg_contents": "> Tim Holloway <[email protected]> writes:\n> > Would it be objectionable if I altered the format of the pg_options\n> > file slightly? I feel the need to handle a somewhat more complex\n> > syntax for the logging subsystem.\n> \n> While I'm not particularly wedded to the pg_options format, I wonder\n> whether it wouldn't be a better idea to create a separate file for\n> the logging control data. If I'm reading your proposal correctly,\n> the backend would no longer parse existing pg_options files --- and\n> that's certain to make dbadmins unhappy, even if the fix is easy.\n> Upgrades are always stressful enough, even without added complications\n> like forced changes to config files.\n> \n> You could probably tweak the syntax so that an existing pg_options\n> file is still valid, but that might be a bit too klugy. What's\n> wrong with having two separate files? We can assume that this isn't\n> a performance-critical path, I think.\n\nWith a 7.0 release, I think we can revamp that file without too many\ncomplaints. pg_options file is fairly new, and it is an administrator's\nthing, and only has to be done once. Seems like a revamp to make it\nclear for all users would help. Having two files would mean explaining\nthat to people for ever.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 12:24:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Logging - pg_options format change?"
},
{
"msg_contents": "\n\nBruce Momjian wrote:\n> \n> > Tim Holloway <[email protected]> writes:\n> > > Would it be objectionable if I altered the format of the pg_options\n> > > file slightly? I feel the need to handle a somewhat more complex\n> > > syntax for the logging subsystem.\n> >\n> > While I'm not particularly wedded to the pg_options format, I wonder\n> > whether it wouldn't be a better idea to create a separate file for\n> > the logging control data. If I'm reading your proposal correctly,\n> > the backend would no longer parse existing pg_options files --- and\n> > that's certain to make dbadmins unhappy, even if the fix is easy.\n> > Upgrades are always stressful enough, even without added complications\n> > like forced changes to config files.\n> >\n> > You could probably tweak the syntax so that an existing pg_options\n> > file is still valid, but that might be a bit too klugy. What's\n> > wrong with having two separate files? We can assume that this isn't\n> > a performance-critical path, I think.\n> \n> With a 7.0 release, I think we can revamp that file without too many\n> complaints. pg_options file is fairly new, and it is an administrator's\n> thing, and only has to be done once. Seems like a revamp to make it\n> clear for all users would help. Having two files would mean explaining\n> that to people for ever.\n> \n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nNot to worry - the operative word was \"wrap\". In fact, I planned to leave the existing\ndebug parser intact and just jump into it if the proper trigger for extended syntax isn't\nseen (also as a subprocessor if it IS seen). 
I've been on the receiving end of trauma\ntoo many times myself.\n\nI had considered making a \"postgresql.conf\" file with an option for debugging statements,\nbut the net effect would just be the same anyway. Besides, Apache went the multi-config\nfile route and regretted it. I'd rather not repeat history if a little advance planning\ncan avoid it.\n\nThere's another consideration. If a SIGHUP rescanned the ENTIRE configuration and there were\ntwo config files, BOTH of them would end up being processed anyway.\n\n Tim Holloway\n",
"msg_date": "Tue, 26 Oct 1999 19:14:01 -0400",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Logging - pg_options format change?"
},
{
"msg_contents": "\n\nTom Lane wrote:\n> \n> > Also, is YACC sufficently thread-safe that if a SIGHUP starts\n> > parsing options it won't collide with another task's in-progress\n> > parsing of, say a SELECT statement?\n> \n> Don't even think of going there. Even if yacc/bison code itself can be\n> made reentrant (which I doubt; it's full of static variables) you'd also\n> have to assume that large chunks of libc are reentrant --- malloc() and\n> stdio in particular --- and I know for a fact that you *cannot* assume\n> that. There might be some platforms where it will work, but many others\n> won't.\n> \n> Basically, the only thing that's really safe for a signal handler to do\n> is set an int flag to TRUE for a test in the main control paths to\n> notice at some later (hopefully not too much later) time. The\n> QueryCancel flag in the existing Postgres code is an example.\n> \n> For the purposes of logging, I see no reason why it wouldn't be\n> good enough to reread the config file at the start of the next\n> query-execution cycle. There's no need to take the risks of doing\n> anything unportable.\n> \n> regards, tom lane\n\nDarn. I thought newer YACC programs had gotten rid of that kind of mess. But then\nI thought the same of the C libs, too. Oh well.\n \nI trust that it IS safe to use do the config reread using YACC if I do it as a\n\"query-execution\" task? \n\n Thanks!\n Tim Holloway\n",
"msg_date": "Tue, 26 Oct 1999 19:33:44 -0400",
"msg_from": "Tim Holloway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Logging - pg_options format change?"
}
] |
[
{
"msg_contents": "Hi All,\n\nA make on a \"cvs update\" from this morning fails with the following\nerror message.\n\nmake -C commands all \nmake[2]: Entering directory `/usr/local/pgsql/src/backend/commands'\nmake[2]: *** No rule to make target `../parse.h', needed by `comment.o'. Stop.\nmake[2]: Leaving directory `/usr/local/pgsql/src/backend/commands'\nmake[1]: *** [commands.dir] Error 2\nmake[1]: Leaving directory `/usr/local/pgsql/src/backend'\nmake: *** [install] Error 2\n\nThis looks to have been broken by the COMMENT patch.\n\nKeith\n\n",
"msg_date": "Tue, 26 Oct 1999 10:57:55 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Current source from CVS won't compile."
}
] |
[
{
"msg_contents": "Yes.\n\nThis is my fault. Sorry. Attached is a patch which fixes the problem.\nI missed adding the rule to make parse.h to the Makefile for \nthe ../backend/commands. It also allows for comments to be dropped\nusing IS NULL as well as IS '';\n\nAgain, sorry.\n\nMike Mascari\n([email protected])\n\n--- Keith Parks <[email protected]> wrote:\n> Hi All,\n> \n> A make on a \"cvs update\" from this morning fails with the following\n> error message.\n> \n> make -C commands all \n> make[2]: Entering directory `/usr/local/pgsql/src/backend/commands'\n> make[2]: *** No rule to make target `../parse.h', needed by `comment.o'.\n> Stop.\n> make[2]: Leaving directory `/usr/local/pgsql/src/backend/commands'\n> make[1]: *** [commands.dir] Error 2\n> make[1]: Leaving directory `/usr/local/pgsql/src/backend'\n> make: *** [install] Error 2\n> \n> This looks to have been broken by the COMMENT patch.\n> \n> Keith\n> \n> \n> ************\n> \n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com",
"msg_date": "Tue, 26 Oct 1999 04:18:56 -0700 (PDT)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Current source from CVS won't compile."
},
{
"msg_contents": "\nApplied. Thanks.\n\n\n> Yes.\n> \n> This is my fault. Sorry. Attached is a patch which fixes the problem.\n> I missed adding the rule to make parse.h to the Makefile for \n> the ../backend/commands. It also allows for comments to be dropped\n> using IS NULL as well as IS '';\n> \n> Again, sorry.\n> \n> Mike Mascari\n> ([email protected])\n> \n> --- Keith Parks <[email protected]> wrote:\n> > Hi All,\n> > \n> > A make on a \"cvs update\" from this morning fails with the following\n> > error message.\n> > \n> > make -C commands all \n> > make[2]: Entering directory `/usr/local/pgsql/src/backend/commands'\n> > make[2]: *** No rule to make target `../parse.h', needed by `comment.o'.\n> > Stop.\n> > make[2]: Leaving directory `/usr/local/pgsql/src/backend/commands'\n> > make[1]: *** [commands.dir] Error 2\n> > make[1]: Leaving directory `/usr/local/pgsql/src/backend'\n> > make: *** [install] Error 2\n> > \n> > This looks to have been broken by the COMMENT patch.\n> > \n> > Keith\n> > \n> > \n> > ************\n> > \n> \n> \n> =====\n> \n> __________________________________________________\n> Do You Yahoo!?\n> Bid and sell for free at http://auctions.yahoo.com\nContent-Description: patchfile\n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 12:28:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Re: [HACKERS] Current source from CVS won't compile."
},
{
"msg_contents": "> Yes.\n> \n> This is my fault. Sorry. Attached is a patch which fixes the problem.\n> I missed adding the rule to make parse.h to the Makefile for \n> the ../backend/commands. It also allows for comments to be dropped\n> using IS NULL as well as IS '';\n> \n\nCan we use only NULL, and not '' please? Seems clearer. I don't like\nus of '' for any special handling. Thanks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 12:31:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current source from CVS won't compile."
}
] |
[
{
"msg_contents": "will some one tell me how to use telnet and what is it .\nthe-virus\n\n",
"msg_date": "Tue, 26 Oct 1999 07:17:37 -0700",
"msg_from": "saad <[email protected]>",
"msg_from_op": true,
"msg_subject": "what the hell telnet is"
}
] |
[
{
"msg_contents": "Hi\n\nSorry if it's the wrong place to post this. Please let me know where is the\ncorrect place.\n\nI'm upgrading from PostgreSQL 6.4 to 6.5.2. I compiled 6.5.2 and installed\nit in an machine (that was not running 6.4) for tests but Postmaster want\nnot run.\n\nOs: Sun sparc solaris 2.5\ncompiled with GCC 2.8.1, FLEX 2.5.4\n\nCompilation and install runs OK! \nInitidb could not tell what username to use (I was su postgres) so I tried\ninitdb -u postgres and works, see bellow:\n\n$ LD_LIBRARY_PATH=/usr/local/pgsql/lib\n$ export LD_LIBRARY_PATH\n$ initdb\n\nCan't tell what username to use. You don't have the USER\nenvironment variable set to your username and didn't specify the\n--username option\n$ initdb -u postgres\n\nWe are initializing the database system with username postgres (uid=1156).\nThis user will own all the files and must also own the server process.\n\nCreating Postgres database system directory /usr/local/pgsql/data\n\nCreating Postgres database system directory /usr/local/pgsql/data/base\n\nCreating template database in /usr/local/pgsql/data/base/template1\n\nCreating global classes in /usr/local/pgsql/data/base\n\nAdding template1 database to pg_database...\n\nVacuuming template1\nCreating public pg_user view\nCreating view pg_rules\nCreating view pg_views\nCreating view pg_tables\nCreating view pg_indexes\nLoading pg_description\n$\n\nI tried to start postmaster and it report an error, see bellow:\n\n$ nohup postmaster -i > pserver.log 2>&1 &\n[1]\t17192\n$ ps -ef | grep post\npostgres 6006 529 0 10:22:05 pts/1 0:00 ksh\npostgres 17225 6006 1 11:01:41 pts/1 0:00 grep post\n[1] + Done nohup postmaster -i > pserver.log 2>&1 &\n$ cat pserver.log\nIpcMemoryCreate: shmget failed (Invalid argument) key=5432001,\nsize=1073152, permission=600\nFATAL 1: ShmemCreate: cannot create region\n$ \n \nI have tried 3 times this installation and the result is the same. 
I\ninstalled 6.5.2 in my house linux machine and work well.\n\nThank you for your attention\n\nRoberto\n\n",
"msg_date": "Tue, 26 Oct 1999 12:25:15 -0200",
"msg_from": "Roberto Joao Lopes Garcia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Error: shmget failed"
},
{
"msg_contents": "Roberto Joao Lopes Garcia <[email protected]> writes:\n> I tried to start postmaster and it report an error, see bellow:\n> IpcMemoryCreate: shmget failed (Invalid argument) key=5432001,\n> size=1073152, permission=600\n> FATAL 1: ShmemCreate: cannot create region\n\nAs a quick hack you can start the postmaster with smaller-than-\nnormal -B and -N (say -B 40 -N 20). Long term solution is to\nincrease your kernel's SHMMAX limit to more than 1 megabyte.\n\nI thought we had adjusted the default -B and -N to stay just\nunder a meg, which is the default SHMMAX value on many kernels.\nBut it looks like someone chewed up some more shared memory when\nI wasn't looking :-(.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Oct 1999 12:22:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Error: shmget failed "
},
{
"msg_contents": "> Roberto Joao Lopes Garcia <[email protected]> writes:\n> > I tried to start postmaster and it report an error, see bellow:\n> > IpcMemoryCreate: shmget failed (Invalid argument) key=5432001,\n> > size=1073152, permission=600\n> > FATAL 1: ShmemCreate: cannot create region\n> \n> As a quick hack you can start the postmaster with smaller-than-\n> normal -B and -N (say -B 40 -N 20). Long term solution is to\n> increase your kernel's SHMMAX limit to more than 1 megabyte.\n> \n> I thought we had adjusted the default -B and -N to stay just\n> under a meg, which is the default SHMMAX value on many kernels.\n> But it looks like someone chewed up some more shared memory when\n> I wasn't looking :-(.\n\n7.0 backend will point them to FAQ on such errors.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 12:51:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Error: shmget failed"
},
{
"msg_contents": "Hi\n\nTo change maximun Shared Memory segment size in my Sparc Solaris 2.5, I\nhave to add the following line into /etc/system file and then reboot the\nsystem\n\n\tset shmsys:shminfo_shmmax=268435456\n\nIt solved the problem. \n\nAlso, Solaris ANSWER BOOK recomends to change the follow. Please note that\nI do not test the lines bellow in my system, only the above change in\n/etc/system solved the problem.\n\n\tset semsys:seminfo_semmap=250\n\tset semsys:seminfo_semmni=500\n\tset semsys:seminfo_semmns=500\n\tset semsys:seminfo_semmsl=500\n\tset semsys:seminfo_semmnu=500\n\tset semsys:seminfo_semume=100\n\tset semsys:seminfo_shmmin=200\n\tset semsys:seminfo_shmmni=200\n\tset semsys:seminfo_shmseg=200\n\nTo see the actual system sets one can use the command sysdef that, in my\nsystem produce the follow:\n\n#sysdef\n\t.\n\t.\n\t. ...\n*\n* IPC Semaphores\n*\n 10\tentries in semaphore map (SEMMAP)\n 10\tsemaphore identifiers (SEMMNI)\n 60\tsemaphores in system (SEMMNS)\n 30\tundo structures in system (SEMMNU)\n 25\tmax semaphores per id (SEMMSL)\n 10\tmax operations per semop call (SEMOPM)\n 10\tmax undo entries per process (SEMUME)\n 32767\tsemaphore maximum value (SEMVMX)\n 16384\tadjust on exit max value (SEMAEM)\n*\n* IPC Shared Memory\n*\n268435456\tmax shared memory segment size (SHMMAX)\n 1\tmin shared memory segment size (SHMMIN)\n 100\tshared memory identifiers (SHMMNI)\n 6\tmax attached shm segments per process (SHMSEG)\n*\n* Time Sharing Scheduler Tunables\n*\n60\tmaximum time sharing user priority (TSMAXUPRI)\nSYS\tsystem class name (SYS_NAME)\n# \n\nPlease read the man pages before execute this command or the changes showed\nabove.\n\nThank you for you help\n\nRoberto\n\nAt 12:51 26/10/99 -0400, you wrote:\n>> Roberto Joao Lopes Garcia <[email protected]> writes:\n>> > I tried to start postmaster and it report an error, see bellow:\n>> > IpcMemoryCreate: shmget failed (Invalid argument) key=5432001,\n>> > size=1073152, 
permission=600\n>> > FATAL 1: ShmemCreate: cannot create region\n>> \n>> As a quick hack you can start the postmaster with smaller-than-\n>> normal -B and -N (say -B 40 -N 20). Long term solution is to\n>> increase your kernel's SHMMAX limit to more than 1 megabyte.\n>> \n>> I thought we had adjusted the default -B and -N to stay just\n>> under a meg, which is the default SHMMAX value on many kernels.\n>> But it looks like someone chewed up some more shared memory when\n>> I wasn't looking :-(.\n>\n>7.0 backend will point them to FAQ on such errors.\n>\n>-- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n>************\n>\n>\n\n",
"msg_date": "Wed, 27 Oct 1999 10:20:57 -0200",
"msg_from": "Roberto Joao Lopes Garcia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Error: shmget failed - SOLVED!"
}
] |
[
{
"msg_contents": "Alrighty, this is it. I submit this to your scrutiny in the hope that it\nwill prove useful and reliable.\n\nThe source is at <http://www.pathwaynet.com/~peter/psql-final.tar.gz>\n(49k). In a perfect world you could just drop the directory into your\nsource tree and configure and compile again. Whether or not it is a\nperfect world we will find out soon enough, I suppose.\n\nThree patches are included in the tarball. One is a minor libpq fix which\nI submitted the other day already and which is mandatory. (I now see it is\nin the current tree already). Two more are to put a test of getopt_long\nin the autoconf business. Other than that the changes are restricted to\nthe psql directory.\n\nI'm going to do some more work on it but those should be localized\nchanges. Here are a few unresolved issues:\n\n* It is now consistently possible to put several slash commands on a line,\neven mixed with SQL, such as:\n=> select * from \\t \\o file.out \\x \\\\ my_table \\g\nThis might cause a problem in Windows, if you need to write, for example,\n\\o \\temp\\dir. The fix would be to write \\o '\\temp\\dir' (as opposed to \\o\n\"\\temp\\dir\", because that would be subject to substitutions like \\t =>\ntab). As this might be cumbersome I give the option to the Windows\ncommunity: disable things like the above command line completely or quote\nyour stuff. It's a tradeoff.\n\t(On the other hand, I was at some point under the impression that\nin C on Windows you could actually use forward slashes in your file names\nwhich would be converted by some magic layer, thus making this a\nnon-issue.(?))\n\n* Slash commands can only have up to 16 options. This is purely my own\nlaziness. Of course, no single slash command actually uses more than three\noptions, but it sure is unsatisfying.\n\n* The \\d* command silently disappeared. 
It's previous semantics where\n\"show everything\" but I'm not quite sure what that should be short of\nrewriting pg_dump.\n\n* Heaven help you if you want to compile this under Windows. I don't have\nWindows, so some porter will have to take care of that.\n\nI am writing DocBook documentation right now and an updated version should\nbe available within 48 hours. For a starter here is a session that\nattempts to illustrate a couple of the quoting and substitution features:\n\nplay=> \\set foo 'bar'\nplay=> \\echo $foo\nbar\nplay=> \\echo bla$foo\nbla$foo\nplay=> \\echo \"bla${foo}bla\"\nblabarbla\nplay=> \\echo 'bla${foo}bla'\nbla${foo}bla\nplay=> \\echo \"a\\nb\"\na\nb\nplay=> \\echo 'a\\nb'\na\\nb\nplay=> \\echo `uname -rms`\nLinux 2.2.12 i586\nplay=> \\set sql_interpol '#'\nplay=> \\set singlestep on\nplay=> \\set blah `/usr/games/fortune`\nplay=> insert into foo values (0, '#blah#');\n***(Single step mode: Verify query)*********************************************\nQUERY: insert into foo values (0, 'Everything you read in newspapers is\nabsolutely true, except for that\nrare story of which you happen to have first-hand knowledge.\n -- Erwin Knoll')\n***(press return to proceed or enter x and return to cancel)********************\nx\n\nIf you use the large object operations, your previous transaction will be\nrolled back. To change this do \\set lo_transaction 'commit' or \\set\nlo_transaction 'nothing' (in which case you must provide your own\nBEGIN/END block).\n\nMost other stuff can be determined from \\? or -?, respectively, and also\nfrom the changelogs I posted in the past.\n\n\nEnjoy!\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Tue, 26 Oct 1999 18:43:15 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql Week 4"
}
] |
[
{
"msg_contents": "--- Bruce Momjian <[email protected]> wrote:\n> > Yes.\n> > \n> > This is my fault. Sorry. Attached is a patch which fixes the problem.\n> > I missed adding the rule to make parse.h to the Makefile for \n> > the ../backend/commands. It also allows for comments to be dropped\n> > using IS NULL as well as IS '';\n> > \n> \n> Can we use only NULL, and not '' please? Seems clearer. I don't like\n> us of '' for any special handling. Thanks.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n> 19026\n> \n\nI agree with you, an empty string is a goofy way to drop a comment.\nThe only reason I did it that way was because that's how Oracle does\nit (heck, why not create comment on, drop comment on). The question is,\nwhat should the behavior be when a user supplies an empty string?\n\nCOMMENT ON TABLE employees IS '';\n\nShould the above add an empty comment to pg_description?\n\nIts up to you... :-)\n\nMike Mascari\n([email protected])\n\n\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n",
"msg_date": "Tue, 26 Oct 1999 10:16:19 -0700 (PDT)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Current source from CVS won't compile."
},
{
"msg_contents": "> > Can we use only NULL, and not '' please? Seems clearer. I don't like\n> > us of '' for any special handling. Thanks.\n> > \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania\n> > 19026\n> > \n> \n> I agree with you, an empty string is a goofy way to drop a comment.\n> The only reason I did it that way was because that's how Oracle does\n> it (heck, why not create comment on, drop comment on). The question is,\n> what should the behavior be when a user supplies an empty string?\n> \n> COMMENT ON TABLE employees IS '';\n> \n> Should the above add an empty comment to pg_description?\n\nOK, I like the compatability issue. Let's leave both as dropping\ncomments, but only document the NULL case. SGML already updated.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Oct 1999 13:38:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current source from CVS won't compile."
}
] |
[
{
"msg_contents": "\n\nHi,\n\nI try dump (via pg_dump) my database, but if I write dumped data back to DB,\nviews (as select on inherit table) not work.\n\nThe bug is in routine pg_get_ruledef(pg_rewrite.rulename), which _not_\ndiscern between select on standard table and select on inherit table.\nSelect on inherit table is \"SELECT * FROM table*\", but pg_get_ruledef()\nreturn this view definition without asterisk: \"SELECT * FROM table\".\n\nSee example:\n----------- \nabil=> create table mother_tab (aaa int);\nCREATE\n\nabil=> create table son () inherits(mother_tab);\nCREATE\n\nabil=> create view v_mother as select * from mother_tab*;\nCREATE\n\nabil=> insert into son values (111);\nINSERT 4946878 1\n\nabil=> select * from v_mother;\naaa\n---\n111\n(1 row)\n\nabil=> SELECT pg_get_ruledef(pg_rewrite.rulename) FROM pg_rewrite WHERE\nrulename ='_RETv_mother';\nCREATE RULE \"_RETv_mother\" AS ON SELECT TO \"v_mother\" DO INSTEAD SELECT\n\"mother_tab\".\"aaa\" FROM \"mother_tab\";\n(1 row)\n ^^^^^^^^^^^^\n\t\t\tright is \"mother_tab*\"\n\n-----\n\n Is it a bug?\n\n (It is probably fatal bug if somebody backup database via pg_dump and views\nfrom dump is unavailable.) \n\n\n\t\t\t\t\t\tKarel Z.\n\n\n------------------------------------------------------------------------------\n<[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n------------------------------------------------------------------------------\n ...and cathedral dilapidate\n\n",
"msg_date": "Wed, 27 Oct 1999 12:37:44 +0200 (CEST)",
"msg_from": "Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug(?) in pg_get_ruledef()"
},
{
"msg_contents": "\n\nHi,\n\nmy first question was without answer, I try it again:\n\nIMHO is a problem with the routine pg_get_ruledef(), this routine is used in \nany query in the pg_dump for view dumping. But the pg_get_ruledef() not discern \ncontrast between view rules defined as 'select * table' and rules defined as \n'select * table*' (the query should be run over all classes in the \ninheritance hierarchy). \n\n Is it a bug or a limitation? (The pg_dump is unworkable for a views tables \nrunnig over the inheritance hierarchy?) \n \n Problem example:\n --------------- \n abil=> create table mother_tab (aaa int);\n CREATE\n \n abil=> create table son () inherits(mother_tab);\n CREATE\n \n abil=> create view v_mother as select * from mother_tab*;\n CREATE\n \n abil=> insert into son values (111);\n INSERT 4946878 1\n \n abil=> select * from v_mother;\n aaa\n ---\n 111\n (1 row)\n \n abil=> SELECT pg_get_ruledef(pg_rewrite.rulename) FROM pg_rewrite WHERE\n rulename ='_RETv_mother';\n\n CREATE RULE \"_RETv_mother\" AS ON SELECT TO \"v_mother\" DO INSTEAD SELECT\n \"mother_tab\".\"aaa\" FROM \"mother_tab\";\n (1 row)\n ^^^^^^^^^^^^\n \t\t\tbut right is \"mother_tab*\"\n\n---\n \n Any comments? (Please)\n\n\t\t\t\t\t\t\tKarel Z.\n\n",
"msg_date": "Fri, 29 Oct 1999 11:23:54 +0200 (CEST)",
"msg_from": "Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "view vs. inheritance hierarchy (was: Bug(?) in pg_get_ruledef())"
},
{
"msg_contents": ">\n>\n>\n> Hi,\n>\n> my first question was without answer, I try it again:\n>\n> IMHO is a problem with the routine pg_get_ruledef(), this routine is used in\n> any query in the pg_dump for view dumping. But the pg_get_ruledef() not discern\n> contrast between view rules defined as 'select * table' and rules defined as\n> 'select * table*' (the query should be run over all classes in the\n> inheritance hierarchy).\n>\n> Is it a bug or a limitation? (The pg_dump is unworkable for a views tables\n> runnig over the inheritance hierarchy?)\n\n Surely a bug!\n\n Unfortunately I'm too busy at the moment to track it down.\n The location where the inheritance is ignored is\n\n src/backend/utils/adt/ruleutils.c\n\n or a similar name - you'll find that file - it's the source\n where that damned pg_get_ruledef() is defined. Whether you can\n locate and fix the problem therein depends on how familiar\n you are with interpreting querytrees. At some place the table\n name is printed, but I don't know if it is possible to tell\n from the data at hand whether it is an inheritance. Maybe another\n catalog lookup is required there.\n\n Oh man, this little 'piece of magic' (as someone else called\n it) was only intended to demonstrate that it is POSSIBLE AT\n ALL to translate a querytree back into its original SQL\n statement. Why the hell did I assist in making use of it in\n pg_dump?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 29 Oct 1999 14:55:24 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] view vs. inheritance hierarchy (was: Bug(?) in\n\tpg_get_ruledef())"
},
{
"msg_contents": "\n\nOn Fri, 29 Oct 1999, Jan Wieck wrote:\n\n> > Is it a bug or a limitation? (The pg_dump is unworkable for a views tables\n> > runnig over the inheritance hierarchy?)\n> \n> Surely a bug!\n> \n> Unfortunately I'm too busy at the moment to tackle it down.\n> The location where the inheritance is ignored is\n> \n> src/backend/utils/adt/ruleutils.c\n> \n> or a similar name - you'll find that file - it's the source\n> where that damned pg_get_ruledef() is defined. If you can\n> loacate and fix the problem therein depends on how familiar\n> you are with interpreting querytrees. At some place the table\n> name is printed, but I don't know if it is possible to tell\n> from the data at hand if it is an inheritance. Maybe another\n> catalog lookup is required there.\n\nWell, I will try to look into the source and fix it.\n\n> Oh man, this little 'piece of magic' (as someone else called\n\n But more good details make a very good PostgreSQL :-)) \n\n> it) was only intended to demonstrate that it is POSSIBLE AT\n> ALL to translate a querytree back into it's original SQL\n> statement. Why the hell did I assist in making use of it in\n> pg_dump?\n\nIf the handle exists, why not open the door? Pg_dump is a backup utility which allows\ndumping _all_ definitions and data; we need it to work right if we provide it. \n\n(I use pg_dump only for data backup.)\n\nThanks, Jan!\n\t\t\t\t\t\t\tKarel Z.\t\t\n\n\n",
"msg_date": "Fri, 29 Oct 1999 15:23:21 +0200 (CEST)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] view vs. inheritance hierarchy (was: Bug(?) in\n\tpg_get_ruledef())"
},
{
"msg_contents": "Zakkr <[email protected]> writes:\n> But the pg_get_ruledef() not discern contrast between view rules\n> defined as 'select * table' and rules defined as 'select * table*'\n> (the query should be run over all classes in the inheritance\n> hierarchy).\n\n> Is it a bug or a limitation?\n\nSounds like a bug to me too. The fix is probably just a small addition\nof code, but I haven't had time to look into it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Oct 1999 09:47:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] view vs. inheritance hierarchy (was: Bug(?) in\n\tpg_get_ruledef())"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> Oh man, this little 'piece of magic' (as someone else called\n> it) was only intended to demonstrate that it is POSSIBLE AT\n> ALL to translate a querytree back into it's original SQL\n> statement. Why the hell did I assist in making use of it in\n> pg_dump?\n\nBecause it solved a necessary problem. Don't beat yourself up about\nit...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Oct 1999 10:25:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] view vs. inheritance hierarchy (was: Bug(?) in\n\tpg_get_ruledef())"
},
{
"msg_contents": "\n\n\n\nOn Fri, 29 Oct 1999, Tom Lane wrote:\n\n> Zakkr <[email protected]> writes:\n> > But the pg_get_ruledef() not discern contrast between view rules\n> > defined as 'select * table' and rules defined as 'select * table*'\n> > (the query should be run over all classes in the inheritance\n> > hierarchy).\n> \n> > Is it a bug or a limitation?\n> \n> Sounds like a bug to me too. The fix is probably just a small addition\n> of code, but I haven't had time to look into it.\n> \n> \t\t\tregards, tom lane\n\n Yes, I have fixed this bug. Here is my patch (for src/backend/utils/adt/ruleutils.c) \nfor it:\n\n*** ruleutils.c.org\tMon Sep 6 00:55:28 1999\n--- ruleutils.c\tSun Sep 31 13:37:42 1999\n***************\n*** 968,971 ****\n--- 968,973 ----\n \t\t\t\tstrcat(buf, \"\\\"\");\n \t\t\t\tstrcat(buf, rte->relname);\n+ \t\t\t\tif (rte->inh)\n+ \t\t\t\t\tstrcat(buf, \"*\");\n \t\t\t\tstrcat(buf, \"\\\"\");\n \t\t\t\tif (strcmp(rte->relname, rte->refname) != 0)\n***************\n*** 973,976 ****\n--- 975,980 ----\n \t\t\t\t\tstrcat(buf, \" \\\"\");\n \t\t\t\t\tstrcat(buf, rte->refname);\n+ \t\t\t\t\tif (rte->inh)\n+ \t\t\t\t\t\tstrcat(buf, \"*\");\n \t\t\t\t\tstrcat(buf, \"\\\"\");\n \t\t\t\t}\n\n\n Could you (Jan or Tom) add this code to the PostgreSQL source tree? (Please.)\n\n\t\t\t\t\t\t\tKarel\n\n------------------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n------------------------------------------------------------------------------\n\n",
"msg_date": "Sun, 31 Oct 1999 15:26:51 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Patch - Re: [HACKERS] view vs. inheritance hierarchy "
},
{
"msg_contents": "Karel Zak - Zakkr <[email protected]> writes:\n> *** ruleutils.c.org\tMon Sep 6 00:55:28 1999\n> --- ruleutils.c\tSun Sep 31 13:37:42 1999\n> ***************\n> *** 968,971 ****\n> --- 968,973 ----\n> \t\t\t\tstrcat(buf, \"\\\"\");\n> \t\t\t\tstrcat(buf, rte->relname);\n> + \t\t\t\tif (rte->inh)\n> + \t\t\t\t\tstrcat(buf, \"*\");\n> \t\t\t\tstrcat(buf, \"\\\"\");\n> \t\t\t\tif (strcmp(rte->relname, rte->refname) != 0)\n> ***************\n> *** 973,976 ****\n> --- 975,980 ----\n> \t\t\t\t\tstrcat(buf, \" \\\"\");\n> \t\t\t\t\tstrcat(buf, rte->refname);\n> + \t\t\t\t\tif (rte->inh)\n> + \t\t\t\t\t\tstrcat(buf, \"*\");\n> \t\t\t\t\tstrcat(buf, \"\\\"\");\n> \t\t\t\t}\n\n> Add we (Jan or Tom) this code to PostgreSQL source main? (Pease).\n\nThat looks about like the right thing to do, but I wonder whether the\n\"*\" doesn't need to go *outside* the quote marks around the table name?\nSeems like it would be taken as a name character if inside...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 31 Oct 1999 11:59:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patch - Re: [HACKERS] view vs. inheritance hierarchy "
},
{
"msg_contents": "\n\n\nOn Sun, 31 Oct 1999, Tom Lane wrote:\n> \n> That looks about like the right thing to do, but I wonder whether the\n> \"*\" doesn't need to go *outside* the quote marks around the table name?\n> Seems like it would be taken as a name character if inside...\n\ngrrr! - it is (my) novice's idiocy...\n\n Sorry Tom, I forgot that the table name is between the quotes... next time I\nwill test & check my patches first :-) Is it good now? (I tested it this time.) \n\n\t\t\t\t\t Karel\n\n\n*** ruleutils.c.org\tMon Sep 6 00:55:28 1999\n--- ruleutils.c\tMon Nov 1 09:26:03 1999\n***************\n*** 969,972 ****\n--- 969,974 ----\n \t\t\t\tstrcat(buf, rte->relname);\n \t\t\t\tstrcat(buf, \"\\\"\");\n+ \t\t\t\tif (rte->inh)\n+ \t\t\t\t\tstrcat(buf, \"*\");\n \t\t\t\tif (strcmp(rte->relname, rte->refname) != 0)\n \t\t\t\t{\n***************\n*** 974,977 ****\n--- 976,981 ----\n \t\t\t\t\tstrcat(buf, rte->refname);\n \t\t\t\t\tstrcat(buf, \"\\\"\");\n+ \t\t\t\t\tif (rte->inh)\n+ \t\t\t\t\t\tstrcat(buf, \"*\");\n \t\t\t\t}\n \t\t\t}\n\n",
"msg_date": "Mon, 1 Nov 1999 10:09:51 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patch - Re: [HACKERS] view vs. inheritance hierarchy "
},
{
"msg_contents": "Karel Zak - Zakkr <[email protected]> writes:\n> *** ruleutils.c.org\tMon Sep 6 00:55:28 1999\n> --- ruleutils.c\tMon Nov 1 09:26:03 1999\n> ***************\n> *** 969,972 ****\n> --- 969,974 ----\n> \t\t\t\tstrcat(buf, rte->relname);\n> \t\t\t\tstrcat(buf, \"\\\"\");\n> + \t\t\t\tif (rte->inh)\n> + \t\t\t\t\tstrcat(buf, \"*\");\n> \t\t\t\tif (strcmp(rte->relname, rte->refname) != 0)\n> \t\t\t\t{\n\nI applied this part --- I don't think adding a second '*' after the\nrefname is correct.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Nov 1999 09:52:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patch - Re: [HACKERS] view vs. inheritance hierarchy "
}
] |
[
{
"msg_contents": "There's a missing quote in psqlHelp.h in the latest CVS.\t\n\nHere's a patch:-\n\n*** src/bin/psql/psqlHelp.h.orig Tue Oct 26 09:34:18 1999\n--- src/bin/psql/psqlHelp.h Wed Oct 27 11:54:25 1999\n***************\n*** 60,66 ****\n FUNCTION <func_name> (arg1, arg2, ...)|\\n\\\n OPERATOR <op> (leftoperand_type rightoperand_type) |\\n\\\n TRIGGER <trigger_name> ON <table_name>\\n\\\n! ] IS 'text'},\n {\"commit work\",\n \"commit a transaction\",\n \"\\\n--- 60,66 ----\n FUNCTION <func_name> (arg1, arg2, ...)|\\n\\\n OPERATOR <op> (leftoperand_type rightoperand_type) |\\n\\\n TRIGGER <trigger_name> ON <table_name>\\n\\\n! ] IS 'text'\"},\n {\"commit work\",\n \"commit a transaction\",\n \"\\\n\n",
"msg_date": "Wed, 27 Oct 1999 12:02:21 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Syntax error in psqlHelp.h"
},
{
"msg_contents": "\nApplied. Thanks. That was my bug.\n\n\n> There's a missing quote in psqlHelp.h in the latest CVS.\t\n> \n> Here's a patch:-\n> \n> *** src/bin/psql/psqlHelp.h.orig Tue Oct 26 09:34:18 1999\n> --- src/bin/psql/psqlHelp.h Wed Oct 27 11:54:25 1999\n> ***************\n> *** 60,66 ****\n> FUNCTION <func_name> (arg1, arg2, ...)|\\n\\\n> OPERATOR <op> (leftoperand_type rightoperand_type) |\\n\\\n> TRIGGER <trigger_name> ON <table_name>\\n\\\n> ! ] IS 'text'},\n> {\"commit work\",\n> \"commit a transaction\",\n> \"\\\n> --- 60,66 ----\n> FUNCTION <func_name> (arg1, arg2, ...)|\\n\\\n> OPERATOR <op> (leftoperand_type rightoperand_type) |\\n\\\n> TRIGGER <trigger_name> ON <table_name>\\n\\\n> ! ] IS 'text'\"},\n> {\"commit work\",\n> \"commit a transaction\",\n> \"\\\n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Oct 1999 12:30:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Syntax error in psqlHelp.h"
}
] |
[
{
"msg_contents": "Oops, the stuff I posted yesterday contained a few pretty funky bugs. I\nhave put up a new tarball.\n\nAlso, I have written updated DocBook documentation. It's not a great work\nof literature yet but it's for those that want to get started. I will\nupdate it several times during the next few days.\n\nAgain, those URLs are:\nhttp://www.pathwaynet.com/~peter/psql-final.tar.gz (49k)\nhttp://www.pathwaynet.com/~peter/psql-ref.sgml.gz (16k)\n\nAnd for those who don't want to rebuild the whole documentation:\nhttp://www.pathwaynet.com/~peter/psql-ref.html\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 27 Oct 1999 18:13:04 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql Week 4.142857"
},
{
"msg_contents": "\nLet me know when you want these applied to the main tree.\n\n> Oops, the stuff I posted yesterday contained a few pretty funky bugs. I\n> have put up a new tarball.\n> \n> Also, I have written updated DocBook documentation. It's not a great work\n> of literature yet but it's for those that want to get started. I will\n> update it several times during the next few days.\n> \n> Again, those URLs are:\n> http://www.pathwaynet.com/~peter/psql-final.tar.gz (49k)\n> http://www.pathwaynet.com/~peter/psql-ref.sgml.gz (16k)\n> \n> And for those who don't want to rebuild the whole documentation:\n> http://www.pathwaynet.com/~peter/psql-ref.html\n> \n> \t-Peter\n> \n> -- \n> Peter Eisentraut Sernanders vaeg 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Oct 1999 17:03:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql Week 4.142857"
},
{
"msg_contents": "> Let me know when you want these applied to the main tree.\n\nDoes his work replace psql? Or will it be placed under contrib/?\nI just want to know what 7.0 will look like.\n---\nTatsuo Ishii\n",
"msg_date": "Thu, 28 Oct 1999 10:07:16 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql Week 4.142857 "
},
{
"msg_contents": "> > Let me know when you want these applied to the main tree.\n> \n> Does his work replace psql? Or will it be placed under contrib/?\n> I just want to know how 7.0 would be look like.\n\nThis is going into the main tree. psql will still be there, but will be\nimproved.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Oct 1999 21:38:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql Week 4.142857"
},
{
"msg_contents": "On Wed, 27 Oct 1999, Bruce Momjian wrote:\n\n> Let me know when you want these applied to the main tree.\n\n(At the risk of turning this into a C mailing list . . .)\n\nAs soon as I have this problem fixed:\n\ngcc -o psql -L../../interfaces/libpq command.o common.o help.o input.o\nstringutils.o mainloop.o copy.o startup.o prompt.o variables.o large_obj.o\nprint.o describe.o -lpq -L/usr/sup/gnu/lib -lgen -lcrypt -lnsl -lsocket\n-ldl -lm -lreadline -ltermcap -lcurses \nld: fatal: symbol `xmalloc' is multiply defined:\n (file common.o and file /usr/sup/gnu/lib/libreadline.a(xmalloc.o));\nld: fatal: File processing errors. No output written to psql\nmake: *** [psql] Error 1\n\nThis happens on a (particular) Solaris box but not on a (particular) Linux\nbox. Beats the heck out of me. ;(\n\nSorry for OT.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 28 Oct 1999 14:58:14 +0200 (MET DST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql Week 4.142857"
},
{
"msg_contents": "> On Wed, 27 Oct 1999, Bruce Momjian wrote:\n> \n> > Let me know when you want these applied to the main tree.\n> \n> (At the risk of turning this into a C mailing list . . .)\n> \n> As soon as I have this problem fixed:\n> \n> gcc -o psql -L../../interfaces/libpq command.o common.o help.o input.o\n> stringutils.o mainloop.o copy.o startup.o prompt.o variables.o large_obj.o\n> print.o describe.o -lpq -L/usr/sup/gnu/lib -lgen -lcrypt -lnsl -lsocket\n> -ldl -lm -lreadline -ltermcap -lcurses \n> ld: fatal: symbol `xmalloc' is multiply defined:\n> (file common.o and file /usr/sup/gnu/lib/libreadline.a(xmalloc.o));\n> ld: fatal: File processing errors. No output written to psql\n> make: *** [psql] Error 1\n> \n> This happens on a (particular) Solaris box but not on a (particular) Linux\n> box. Beats the heck out of me. ;(\n\nI can't find xmalloc in the source, and can't figure out how it could be\n_defined_ in common.o. Strange.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 28 Oct 1999 10:57:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql Week 4.142857"
},
{
"msg_contents": "On Oct 28, Bruce Momjian mentioned:\n\n> > gcc -o psql -L../../interfaces/libpq command.o common.o help.o input.o\n> > stringutils.o mainloop.o copy.o startup.o prompt.o variables.o large_obj.o\n> > print.o describe.o -lpq -L/usr/sup/gnu/lib -lgen -lcrypt -lnsl -lsocket\n> > -ldl -lm -lreadline -ltermcap -lcurses \n> > ld: fatal: symbol `xmalloc' is multiply defined:\n> > (file common.o and file /usr/sup/gnu/lib/libreadline.a(xmalloc.o));\n> > ld: fatal: File processing errors. No output written to psql\n> > make: *** [psql] Error 1\n> > \n> > This happens on a (particular) Solaris box but not on a (particular) Linux\n> > box. Beats the heck out of me. ;(\n> \n> I can't find xmalloc in the source, and can't figure out how it could be\n> _defined_ in common.o. Strange.\n\nOh, I defined it in that file; it's used in psql. Unfortunately, it seems\nto be used internally in readline as well. I've now figured out that this\ncauses problems only with static linking, not with dynamic linking. I'm no expert\non linking; does anyone have an idea, or do I have to make up a different\nname?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 28 Oct 1999 18:46:39 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] psql Week 4.142857"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Oh, I defined it in that file, it's used in psql. Unfortunately, it seems\n> to be used internally in readline as well. In now figured out that this\n> causes problems only with static linking, but not dynamic. I'm no expert\n> on linking, does anyone have an idea or do I have to make up a different\n> name?\n\nPick another name --- you're just *asking* for trouble with that one.\nPeople tend to invent macros with names like that...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 28 Oct 1999 18:58:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql Week 4.142857 "
},
{
"msg_contents": "> Oh, I defined it in that file, it's used in psql. Unfortunately, it seems\n> to be used internally in readline as well. In now figured out that this\n> causes problems only with static linking, but not dynamic. I'm no expert\n> on linking, does anyone have an idea or do I have to make up a different\n> name?\n\nYes, very different name. We are supporting tons of platforms. You\nneed something very unique.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 28 Oct 1999 20:27:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql Week 4.142857"
},
{
"msg_contents": "> Oops, the stuff I posted yesterday contained a few pretty funky bugs. I\n> have put up a new tarball.\n> \n> Also, I have written updated DocBook documentation. It's not a great work\n> of literature yet but it's for those that want to get started. I will\n> update it several times during the next few days.\n> \n> Again, those URLs are:\n> http://www.pathwaynet.com/~peter/psql-final.tar.gz (49k)\n> http://www.pathwaynet.com/~peter/psql-ref.sgml.gz (16k)\n> \n> And for those who don't want to rebuild the whole documentation:\n> http://www.pathwaynet.com/~peter/psql-ref.html\n> \n> \t-Peter\n\nApplied. Configure changes also applied.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Nov 1999 16:55:56 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql Week 4.142857"
},
{
"msg_contents": "OK, new version of psql installed. Only problem I see is that \\h shows\nTRUNCATE as the first help item. I assume the directory contents are\nnot being sorted. Peter?\n\nSecond, the new psql prompt is #, so it shows as:\n\n\ttest-#\n\nNot sure I like that. I liked the > better, I think, unless # grows on\nme.\n\nComments?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Nov 1999 17:22:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "New version of psql"
},
{
"msg_contents": "> > \n> > OK, new version of psql installed. Only problem I see is that \\h shows\n> > TRUNCATE as the first help item. I assume the directory contents are\n> > not being sorted. Peter?\n> > \n> > Second, the new psql prompt is #, so it shows as:\n> > \n> > \ttest-#\n> > \n> > Not sure I like that. I liked the > better, I think, unless # grows on\n> > me.\n> > \n> > Comments?\n> > \n> \n> Surely make it # if the user is a postgres superuser or > if not\n> \n> Makes it consistent and functional.\n\nI see now. Thanks. Makes sense.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Nov 1999 17:46:56 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version of psql"
},
{
"msg_contents": "> > Surely make it # if the user is a postgres superuser or > if not\n> > \n> > Makes it consistent and functional.\n> \n> I see now. Thanks. Makes sense.\n\nPeter, are you working on libpq as well?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Nov 1999 18:05:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version of psql"
},
{
"msg_contents": "At 05:22 PM 11/4/99 -0500, Bruce Momjian wrote:\n>OK, new version of psql installed. Only problem I see is that \\h shows\n>TRUNCATE as the first help item. I assume the directory contents are\n>not being sorted. Peter?\n>\n>Second, the new psql prompt is #, so it shows as:\n>\n>\ttest-#\n>\n>Not sure I like that. I liked the > better, I think, unless # grows on\n>me.\n>\n>Comments?\n\nWas there a reason for changing it?\n\nI guess I'm from the old school, why make changes needlessly?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 04 Nov 1999 15:34:35 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version of psql"
},
{
"msg_contents": "> OK, new version of psql installed. Only problem I see is that \\h shows\n> TRUNCATE as the first help item. I assume the directory contents are\n> not being sorted. Peter?\n> \n> Second, the new psql prompt is #, so it shows as:\n> \n> \ttest-#\n> \n> Not sure I like that. I liked the > better, I think, unless # grows on\n> me.\n> \n> Comments?\n> \n\nIt appears the PAGER is not working. If I do a SELECT, the result\nscrolls off my screen.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Nov 1999 19:43:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version of psql"
},
{
"msg_contents": "> > OK, new version of psql installed. Only problem I see is that \\h shows\n> > TRUNCATE as the first help item. I assume the directory contents are\n> > not being sorted. Peter?\n> > \n> > Second, the new psql prompt is #, so it shows as:\n> > \n> > \ttest-#\n> > \n> > Not sure I like that. I liked the > better, I think, unless # grows on\n> > me.\n> > \n> > Comments?\n> > \n> \n> It appears the PAGER is not working. If I do a SELECT, the result\n> scrolls off my screen.\n\nSeems pager is off by default. Enable with \\pset pager. Any reason we\ndon't enable it by default, Peter?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Nov 1999 19:59:50 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version of psql"
},
{
"msg_contents": "> OK, new version of psql installed. Only problem I see is that \\h shows\n> TRUNCATE as the first help item. I assume the directory contents are\n> not being sorted. Peter?\n> \n> Second, the new psql prompt is #, so it shows as:\n> \n> \ttest-#\n> \n> Not sure I like that. I liked the > better, I think, unless # grows on\n> me.\n\nI have changed psql so the pager is in by default. Very easy to do with\nthe new code.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Nov 1999 20:37:03 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version of psql"
},
{
"msg_contents": "> \n> OK, new version of psql installed. Only problem I see is that \\h shows\n> TRUNCATE as the first help item. I assume the directory contents are\n> not being sorted. Peter?\n> \n> Second, the new psql prompt is #, so it shows as:\n> \n> \ttest-#\n> \n> Not sure I like that. I liked the > better, I think, unless # grows on\n> me.\n> \n> Comments?\n> \n\nSurely make it # if the user is a postgres superuser or > if not\n\nMakes it consistent and functional.\n\n\t\t\t\t\t\t~Michael\n",
"msg_date": "Fri, 5 Nov 1999 10:35:47 +0000 (GMT)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version of psql"
},
{
"msg_contents": "On 1999-11-04, Bruce Momjian mentioned:\n\n> OK, new version of psql installed. Only problem I see is that \\h shows\n> TRUNCATE as the first help item. I assume the directory contents are\n> not being sorted. Peter?\n> \n> Second, the new psql prompt is #, so it shows as:\n> \n> \ttest-#\n> \n> Not sure I like that. I liked the > better, I think, unless # grows on\n> me.\n\nThe new prompt is the same as the old one. *** Scratches head ***\n\nThe '>' is replaced by a '#' if you are the superuser. (Correction: if the\nusername is \"postgres\". I'm still thinking about ways to make this a\nlittle more elegant though. Ideas welcome.)\n\nThe '-' shouldn't be there unless you're in continue mode, as usual. Works\nfor me:\n\ntestdb=> select *\ntestdb-> from foo;\nERROR: foo: Table does not exist.\ntestdb=> \\c - postgres\nYou are now connected as new user postgres.\ntestdb=# select *\ntestdb-# from foo;\nERROR: foo: Table does not exist.\n\nOf course you can also completely customize your prompt. See the docs for\nthat.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 6 Nov 1999 14:27:44 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New version of psql"
},
{
"msg_contents": "On 1999-11-04, Bruce Momjian mentioned:\n\n> > It appears the PAGER is not working. If I do a SELECT, the result\n> > scrolls off my screen.\n> \n> Seems pager is off by default. Enable with \\pset pager. Any reason we\n> don't enable it by default, Peter?\n\nNo.\n\nPerhaps I'm too used to using scrollbars on the {x|k}terms. Or perhaps it\nwas some \"everything is off by default\" idea. No reason really though.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 6 Nov 1999 14:29:50 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New version of psql"
},
{
"msg_contents": "On 1999-11-04, Bruce Momjian mentioned:\n\n> OK, new version of psql installed. Only problem I see is that \\h shows\n> TRUNCATE as the first help item. I assume the directory contents are\n> not being sorted. Peter?\n\nTry the attached patch. Perhaps this is a file system thing. On my box they\nare sorted beautifully.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden",
"msg_date": "Sat, 6 Nov 1999 14:40:53 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New version of psql"
},
{
"msg_contents": "On 1999-11-04, Bruce Momjian mentioned:\n\n> > > Surely make it # if the user is a postgres superuser or > if not\n> > > \n> > > Makes it consistent and functional.\n> > \n> > I see now. Thanks. Makes sense.\n> \n> Peter, are you working on libpq as well?\n\nNope.\n\nMy next projects are to put in the tab completion I have lying around\nhere, to implement some sort of \\if \\else \\endif so you can write *really*\ncool scripts, and to adjust the createdb etc. scripts a little.\n\nThen I would be available for doing a little libpq stuff, but I'm not even\nsure what needs to be done. Bring it on . . .\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 6 Nov 1999 14:46:29 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New version of psql"
},
{
"msg_contents": "On 1999-11-04, Bruce Momjian mentioned:\n\n> OK, new version of psql installed.\n\nThis takes care of these TODO items:\n* Allow psql \\copy to allow delimiters\n* Allow psql to print nulls as distinct from \"\" (see TODO.detail/null)\n\nAlso, this is already taken care of but it's still in my TODO:\n* Make configure --enable-debug add -g on compile line\n\nIn unrelated news, I found out the other day that there is actually a Lisp\nbinding for Pgsql. It says it is for version 6.4.2. I'm going to see if I\ncan track down who wrote it.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 6 Nov 1999 14:59:22 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New version of psql"
},
{
"msg_contents": "> On 1999-11-04, Bruce Momjian mentioned:\n> \n> > OK, new version of psql installed. Only problem I see is that \\h shows\n> > TRUNCATE as the first help item. I assume the directory contents are\n> > not being sorted. Peter?\n> > \n> > Second, the new psql prompt is #, so it shows as:\n> > \n> > \ttest-#\n> > \n> > Not sure I like that. I liked the > better, I think, unless # grows on\n> > me.\n> \n> The new prompt is the same as the the old one. *** Scratches head ***\n> \n> The '>' is replaced by a '#' if you are the superuser. (Correction: if the\n> username is \"postgres\". I'm still thinking about ways to make this a\n> little more elegant though. Ideas welcome.)\n\nThanks. Yes, nice new feature.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Nov 1999 11:46:11 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New version of psql"
},
{
"msg_contents": "> On 1999-11-04, Bruce Momjian mentioned:\n> \n> > > It appears the PAGER is not working. If I do a SELECT, the result\n> > > scrolls off my screen.\n> > \n> > Seems pager is off by default. Enable with \\pset pager. Any reason we\n> > don't enable it by default, Peter?\n> \n> No.\n> \n> Perhaps I'm too used to using scrollbars on the {x|k}terms. Or perhaps it\n> was some \"everything is off by default\" idea. No reason really though.\n\nI have made the change to enable pager by default. Thanks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Nov 1999 11:46:43 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version of psql"
},
{
"msg_contents": "> On 1999-11-04, Bruce Momjian mentioned:\n> \n> > OK, new version of psql installed. Only problem I see is that \\h shows\n> > TRUNCATE as the first help item. I assume the directory contents are\n> > not being sorted. Peter?\n> \n> Try attached patch. Perhaps this is a file system thing. On my box they\n> are are sorted beautifully.\n\nThanks. Patch applied. That did the trick.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Nov 1999 11:48:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New version of psql"
},
{
"msg_contents": "> On 1999-11-04, Bruce Momjian mentioned:\n> \n> > > > Surely make it # if the user is a postgres superuser or > if not\n> > > > \n> > > > Makes it consistent and functional.\n> > > \n> > > I see now. Thanks. Makes sense.\n> > \n> > Peter, are you working on libpq as well?\n> \n> Nope.\n\nI just asked because you were mentioning a lot of old stuff, and I\nthought you meant in libpq. Now I see you meant old stuff in the psql\ncode about our old 'monitor' program and stuff.\n\nOne amazing trick is your ability to pull the help right out of the sgml\nsources. I never expected that would be possible. Makes maintaining\nthings a lot easier.\n\n> \n> My next projects where to put in the tab completion I have lying around\n> here, to implement some sort if \\if \\else \\endif so you can write *really*\n> cool scripts, and to adjust the createdb etc. scripts a little.\n\nSounds good.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Nov 1999 11:50:35 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New version of psql"
},
{
"msg_contents": "> On 1999-11-04, Bruce Momjian mentioned:\n> \n> > OK, new version of psql installed.\n> \n> This takes care of these TODO items:\n> * Allow psql \\copy to allow delimiters\n> * Allow psql to print nulls as distinct from \"\"(see TODO.detail/null)\n\nThanks. That helps. TODO updated.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Nov 1999 11:51:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New version of psql"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The '>' is replaced by a '#' if you are the superuser. (Correction: if the\n> username is \"postgres\". I'm still thinking about ways to make this a\n> little more elegant though. Ideas welcome.)\n\nYou should be looking at the usesuper column from pg_user if you really\nwant a correct indication of whether the user has superuser privs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 06 Nov 1999 12:16:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: New version of psql "
}
] |
[
{
"msg_contents": "Hello,\n\nI'm running into what appears to be some hard-coded limits of postgres.\nI've got a table with a text column that I need to insert large \namounts of text into. I quickly found these two things out: \n\nFirst, the MAX_QUERY_SIZE, which is BLCKSZ*2 (or 16384 bytes), prevents\nme from running the query since my query is much larger than 16384\nbytes. After discovering this, I decided to create a test query just\nsmaller than 16384 to see what would happen. \n\nThe second query returns \"Tuple is too big: size 12508\". I didn't bother\nto look into this one because I'd probably spend a lot of time looking;\ninstead I am bringing the issue here.\n\nI haven't done any development work on postgres and don't want \nto get involved in doing so before discussing it here and making sure that \nmy efforts won't be in vain. Without looking at the code, I expect this\nproblem is not *simple* to fix. I'm operating on the assumption that\ntuple sizes cannot be changed very easily. Even if they could, a bigger\nproblem is probably the MAX_QUERY_SIZE. Simply increasing the MAX_QUERY_SIZE\nwill only address a short-term problem, and there is a practical limit\nto the size it can be increased. Ideally, some sort of dynamic query buffer\nwould be better, but that likely challenges the way the communication\nbetween the client and server works.\n\nSo, has anyone looked into this limitation? Is it something that the\ndevelopment team wants to be addressed? If this is a known problem, is \nthere some sort of agreement on how it should be solved? If so, is\nsomeone working on it? If not, possibly I could help. \n\nOver the last 9 months, I've been using postgres more and more. It's gotten\nto the point where our project is becoming quite dependent on it. I am \nquite happy with the software and want to stick with it. I've run into \nseveral minor things this year that I have been able to avoid, but this \none may be a show stopper. 
Rather than ditch this fine software, I'd\nrather help out if the help is wanted and I am capable of giving it the\ntime required.\n\nMore details on the problem I am having:\n\tRedhat 6.0 / x86\n\tPostgres 6.5.1\n\tInserts tried with psql and libpq via Perl DBD/DBI\n\n-Brian\n",
"msg_date": "Wed, 27 Oct 1999 14:25:18 -0400",
"msg_from": "Brian Hirt <[email protected]>",
"msg_from_op": true,
"msg_subject": "text datatype and tuple size limits."
},
{
"msg_contents": "As far as I could follow it, the query size limit is all but gone, and it\nwill be for sure in 7.0. Regarding the tuple size limit, we are still\nlooking for volunteers to tackle that. You should find relevant messages\non this list a few days back.\n\n\t-Peter\n\nOn Wed, 27 Oct 1999, Brian Hirt wrote:\n\n> I'm running into what appears to be some hard coded limits of postgres.\n> I've got a table with with a text column that I need to insert large \n> amounts of text into. I quickly found these two things out: \n> \n> First, the MAX_QUERY_SIZE which is BLCKSZ*2 (or 16384 bytes), prevents\n> me from from running the query since my query is much larger than 16384 i\n> bytes. After discovering this, I decided to create a test query just\n> smaller than 16384 to see what would happen. \n> \n> The second query returns \"Tuple is too big: size 12508\". I didn't bother\n> to look into this one because I'd probably spend a lot of time looking,\n> instead I am bringing the issue here.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 28 Oct 1999 12:17:01 +0200 (MET DST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] text datatype and tuple size limits."
}
] |
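A common workaround at the time for the tuple-size limit discussed in the thread above was to split large text across several rows and reassemble it in the client; a minimal sketch, with hypothetical table and column names:

```sql
-- Hypothetical chunking scheme: keep each part well under the ~8K
-- block size and reassemble in the client, ordered by seq.
CREATE TABLE doc_chunk (
    doc_id int,   -- which document this chunk belongs to
    seq    int,   -- chunk order within the document
    part   text   -- one piece of the large text
);

-- Client-side reassembly:
SELECT part FROM doc_chunk WHERE doc_id = 1 ORDER BY seq;
```

(The tuple-size limit itself was eventually removed by TOAST in PostgreSQL 7.1.)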
[
{
"msg_contents": "It was a good one, the compiler threw out an error for\na line about 100 lines further down the file that made\nno sense whatsoever to me!!\n\nBruce Momjian <[email protected]>\n>\n>\n>Applied. Thanks. That was my bug.\n>\n>\n>> There's a missing quote in psqlHelp.h in the latest CVS.\t\n>> \n>> Here's a patch:-\n>> \n>> *** src/bin/psql/psqlHelp.h.orig Tue Oct 26 09:34:18 1999\n>> --- src/bin/psql/psqlHelp.h Wed Oct 27 11:54:25 1999\n>> ***************\n>> *** 60,66 ****\n>> FUNCTION <func_name> (arg1, arg2, ...)|\\n\\\n>> OPERATOR <op> (leftoperand_type rightoperand_type) |\\n\\\n>> TRIGGER <trigger_name> ON <table_name>\\n\\\n>> ! ] IS 'text'},\n>> {\"commit work\",\n>> \"commit a transaction\",\n>> \"\\\n>> --- 60,66 ----\n>> FUNCTION <func_name> (arg1, arg2, ...)|\\n\\\n>> OPERATOR <op> (leftoperand_type rightoperand_type) |\\n\\\n>> TRIGGER <trigger_name> ON <table_name>\\n\\\n>> ! ] IS 'text'\"},\n>> {\"commit work\",\n>> \"commit a transaction\",\n>> \"\\\n>> \n>> \n>> ************\n>> \n>\n>\n>-- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n>************\n\n",
"msg_date": "Wed, 27 Oct 1999 21:43:34 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Syntax error in psqlHelp.h"
},
{
"msg_contents": "> It was a good one, the compiler threw out an error for\n> a line about 100 lines further down the file that made\n> no sense whatsoever to me!!\n\nI should always recompile before cvs commit, but I am often too busy.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Oct 1999 17:03:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Syntax error in psqlHelp.h"
}
] |
[
{
"msg_contents": "\nWhen are we packaging 6.5.3 with the pgaccess addition?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Oct 1999 17:39:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.5.3"
},
{
"msg_contents": "On Wed, 27 Oct 1999, Bruce Momjian wrote:\n\n> \n> When are we packaging 6.5.3 with the pgaccess addition?\n\nMonday, 4:30ADT?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 28 Oct 1999 22:45:37 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 6.5.3"
}
] |
[
{
"msg_contents": "Bruce,\n\nWe (PostgreSQL users in Japan) have formed a non-profit,\nnon-commercial PostgreSQL user's group, called JPUG (Japan PostgreSQL\nUser's Group) this July. We have over 240 members now. You can visit\nour web page at http://www.jp.postgresql.org/ (all contents are\nwritten in Japanese).\n\nWhat we are thinking about is translating your PostgreSQL book into\nJapanese. We want not only to have the result published by an appropriate\npublisher, but also to have the whole contents viewable by anyone on the\nInternet as well. I'm not sure whether this would conflict with the contract\nyou have made with your publisher, though.\n\nWhat do you think?\n---\nTatsuo Ishii\n",
"msg_date": "Thu, 28 Oct 1999 10:06:53 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL book translation"
},
{
"msg_contents": "> Bruce,\n> \n> We (PostgreSQL users in Japan) have formed a non-profit,\n> non-commercial PostgreSQL user's group, called JPUG (Japan PostgreSQL\n> User's Group) this July. We have over 240 members now. You can visit\n> our web page at http://www.jp.postgresql.org/ (all contents are\n> written in Japanese).\n\nSure. Addison-Wesley has the foreign publishing rights. I am CC'ing\nmy publisher contact on this. You can reach him directly. This\npublisher has extensive experience with foreign publishing.\n\n\n> What we are thinking about is translating your PostgreSQL book into\n> Japanese. We want not only the result be published from an appropriate\n> publisher, but also the whole contents could be viewed by anyone on the\n> Internet as well. I'm not sure this would conflict with the contract\n> you have made with your publisher though.\n\nAs you know, the book is on the Internet now, and will remain there\nafter I complete it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Oct 1999 21:38:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL book translation"
},
{
"msg_contents": "> > Bruce,\n> > \n> > We (PostgreSQL users in Japan) have formed a non-profit,\n> > non-commercial PostgreSQL user's group, called JPUG (Japan PostgreSQL\n> > User's Group) this July. We have over 240 members now. You can visit\n> > our web page at http://www.jp.postgresql.org/ (all contents are\n> > written in Japanese).\n> \n> Sure. Addison, Wesley has the foreign publishing rights. I am CC'ing\n> my publisher contact on this. You can reach him directly. This\n> publisher has extensive experience with foreign publishing.\n\nThanks. I will contact him.\n\n> > What we are thinking about is translating your PostgreSQL book into\n> > Japanese. We want not only the result be published from an appropriate\n> > publisher, but also the whole contents could be viewed by anyone on the\n> > Internet as well. I'm not sure this would conflict with the contract\n> > you have made with your publisher though.\n> \n> As you know, the book is on the Internet now, and will remain there\n> after I complete it.\n\nGreat!\n---\nTatsuo Ishii\n",
"msg_date": "Fri, 29 Oct 1999 09:56:56 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL book translation "
}
] |
[
{
"msg_contents": "Is anyone working on support for BLOB fields and corresponding JDBC:\n\tPreparedStatement.setBinaryStream\n\nI would be willing to contribute in this area.\n\n\t-Troy A. Griffitts",
"msg_date": "Wed, 27 Oct 1999 22:33:59 -0700",
"msg_from": "Troy Griffitts <[email protected]>",
"msg_from_op": true,
"msg_subject": "BLOB fields / JDBC"
}
] |
[
{
"msg_contents": "Is anyone working on support for BLOB fields and corresponding JDBC:\n\tPreparedStatement.setBinaryStream\n\nI would be willing to contribute in this area.\n\n\t-Troy A. Griffitts",
"msg_date": "Thu, 28 Oct 1999 00:34:32 -0700",
"msg_from": "Troy Griffitts <[email protected]>",
"msg_from_op": true,
"msg_subject": "BLOB fields / JDBC"
}
] |
[
{
"msg_contents": "Yes. I'm aiming to get BLOB & Array support completed for the next\nrelease (7.0?).\n\nCurrently using standard JDBC, setBytes() and getBytes() handle BLOBS.\nObviously there's the LargeObject and LargeObjectManager classes as well.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Troy Griffitts [mailto:[email protected]]\nSent: 28 October 1999 08:35\nTo: [email protected]\nSubject: [HACKERS] BLOB fields / JDBC\n\n\nIs anyone working on support for BLOB fields and corresponding JDBC:\n\tPreparedStatement.setBinaryStream\n\nI would be willing to contribute in this area.\n\n\t-Troy A. Griffitts\n",
"msg_date": "Thu, 28 Oct 1999 08:49:14 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] BLOB fields / JDBC"
}
] |
[
{
"msg_contents": "This is a Debian bug report, which needs upstream attention.\n\n------- Forwarded Message\n\nDate: Thu, 28 Oct 1999 13:45:18 -0400\nFrom: Brian Ristuccia <[email protected]>\nTo: [email protected]\nSubject: Bug#48582: psql spends hours computing results it already knows\n\nPackage: postgresql\nVersion: 6.5.2-3\n\nmassive_db=> explain select count(*) from huge_table;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=511.46 rows=9923 width=12)\n -> Seq Scan on huge_table (cost=511.46 rows=9923 width=12)\n\nEXPLAIN\n\nIf huge_table really is huge -- like 9,000,000 rows instead of 9923, after\npostgresql already knows the number of rows (that's how it determines the\ncost), it proceeds to do a very long and CPU/IO intensive seq scan to\ndetermine the count().\n\n- -- \nBrian Ristuccia\[email protected]\[email protected]\[email protected]\n\n\n------- End of Forwarded Message\n\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Cast thy burden upon the LORD, and he shall sustain \n thee; he shall never allow the righteous to fall.\" \n Psalms 55:22 \n\n\n",
"msg_date": "Thu, 28 Oct 1999 20:53:31 +0100",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug#48582: psql spends hours computing results it already knows (fwd)"
},
{
"msg_contents": "Oliver (& Brian) - \nHmm, that happens to not be the case. The rows=XXXX number is drawn\nfrom the statistics for the table, which are only updated on VACUUM\nANALYZE of that table. Easily tested: just INSERT a couple rows and do\nthe EXPLAIN again. The rows=XXX won't change. Ah, here's an example:\na table I've never vacuumed, until now (it's in a test copy of a db)\n\ntest=> select count(*) from \"Personnel\";\ncount\n-----\n 177\n(1 row)\n\ntest=> explain select count(*) from \"Personnel\";\nNOTICE: QUERY PLAN:\n\nAggregate (cost=43.00 rows=1000 width=4)\n -> Seq Scan on Personnel (cost=43.00 rows=1000 width=4)\n\nEXPLAIN\ntest=> vacuum analyze \"Personnel\";\nVACUUM\ntest=> explain select count(*) from \"Personnel\";\nNOTICE: QUERY PLAN:\n\nAggregate (cost=7.84 rows=177 width=4)\n -> Seq Scan on Personnel (cost=7.84 rows=177 width=4)\n\nEXPLAIN\ntest=> \n\nRoss\n\nOn Thu, Oct 28, 1999 at 08:53:31PM +0100, Oliver Elphick wrote:\n> This is a Debian bug report, which needs upstream attention.\n> \n> ------- Forwarded Message\n> \n> Date: Thu, 28 Oct 1999 13:45:18 -0400\n> From: Brian Ristuccia <[email protected]>\n> To: [email protected]\n> Subject: Bug#48582: psql spends hours computing results it already knows\n> \n> Package: postgresql\n> Version: 6.5.2-3\n> \n> massive_db=> explain select count(*) from huge_table;\n> NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=511.46 rows=9923 width=12)\n> -> Seq Scan on huge_table (cost=511.46 rows=9923 width=12)\n> \n> EXPLAIN\n> \n> If huge_table really is huge -- like 9,000,000 rows instead of 9923, after\n> postgresql already knows the number of rows (that's how it determines the\n> cost), it proceeds to do a very long and CPU/IO intensive seq scan to\n> determine the count().\n> \n> - -- \n> Brian Ristuccia\n> [email protected]\n> [email protected]\n> [email protected]\n> \n",
"msg_date": "Thu, 28 Oct 1999 15:32:17 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug#48582: psql spends hours computing results it\n\talready knows (fwd)"
},
{
"msg_contents": "On Thu, Oct 28, 1999 at 03:32:17PM -0500, Ross J. Reedstrom wrote:\n> Oliver (& Brian) - \n> Hmm, that happens to not be the case. The rows=XXXX number is drawn\n> from the statistics for the table, which are only updated on VACUUM\n> ANALYZE of that table. Easily tested: just INSERT a couple rows and do\n> the EXPLAIN again. The rows=XXX won't change. Ah, here's an example:\n> a table I've never vacuumed, until now (it's in a test copy of a db)\n\nAah.. Is there any other more efficient way of determining the number of\nrows in a table? It seems a sequential scan takes forever, but the database\nmust already have some idea (somewhere) of how many records are in the table\notherwise how would it know where to start/stop the sequential scan? \n\n> \n> test=> select count(*) from \"Personnel\";\n> count\n> -----\n> 177\n> (1 row)\n> \n> test=> explain select count(*) from \"Personnel\";\n> NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=43.00 rows=1000 width=4)\n> -> Seq Scan on Personnel (cost=43.00 rows=1000 width=4)\n> \n> EXPLAIN\n> test=> vacuum analyze \"Personnel\";\n> VACUUM\n> test=> explain select count(*) from \"Personnel\";\n> NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=7.84 rows=177 width=4)\n> -> Seq Scan on Personnel (cost=7.84 rows=177 width=4)\n> \n> EXPLAIN\n> test=> \n> \n> Ross\n> \n> On Thu, Oct 28, 1999 at 08:53:31PM +0100, Oliver Elphick wrote:\n> > This is a Debian bug report, which needs upstream attention.\n> > \n> > ------- Forwarded Message\n> > \n> > Date: Thu, 28 Oct 1999 13:45:18 -0400\n> > From: Brian Ristuccia <[email protected]>\n> > To: [email protected]\n> > Subject: Bug#48582: psql spends hours computing results it already knows\n> > \n> > Package: postgresql\n> > Version: 6.5.2-3\n> > \n> > massive_db=> explain select count(*) from huge_table;\n> > NOTICE: QUERY PLAN:\n> > \n> > Aggregate (cost=511.46 rows=9923 width=12)\n> > -> Seq Scan on huge_table (cost=511.46 rows=9923 width=12)\n> > \n> > EXPLAIN\n> > \n> > If 
huge_table really is huge -- like 9,000,000 rows instead of 9923, after\n> > postgresql already knows the number of rows (that's how it determines the\n> > cost), it proceeds to do a very long and CPU/IO intensive seq scan to\n> > determine the count().\n> > \n> > - -- \n> > Brian Ristuccia\n> > [email protected]\n> > [email protected]\n> > [email protected]\n> > \n\n-- \nBrian Ristuccia\[email protected]\[email protected]\[email protected]\n",
"msg_date": "Thu, 28 Oct 1999 16:35:53 -0400",
"msg_from": "Brian Ristuccia <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug#48582: psql spends hours computing results it\n\talready knows (fwd)"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> Hmm, that happens to not be the case. The rows=XXXX number is drawn\n> from the statistics for the table, which are only updated on VACUUM\n> ANALYZE of that table. Easily tested: just INSERT a couple rows and do\n> the EXPLAIN again. The rows=XXX won't change.\n\nThe short answer to this is that maintaining a perfectly accurate tuple\ncount on-the-fly would almost certainly cost more, totalled over all\noperations that modify a table, than we could ever hope to make back\nby short-circuiting \"select count(*)\" operations. (Consider\nconcurrent transactions running in multiple backends, some of which\nmay abort instead of committing, and others of which may already have\ncommitted but your transaction is not supposed to be able to see their\neffects...)\n\nThe optimizer is perfectly happy with approximate tuple counts, so it\nmakes do with stats recorded at the last VACUUM.\n\nThis has been discussed quite recently on pg-hackers; see the archives\nfor more info.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 28 Oct 1999 19:05:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug#48582: psql spends hours computing results it\n\talready knows (fwd)"
},
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> The short answer to this is that maintaining a perfectly accurate tuple\n> count on-the-fly would almost certainly cost more, totalled over all\n> operations that modify a table, than we could ever hope to make back\n> by short-circuiting \"select count(*)\" operations. (Consider\n> concurrent transactions running in multiple backends, some of which\n> may abort instead of committing, and others of which may already have\n> committed but your transaction is not supposed to be able to see their\n> effects...)\n\nSo, does the planner allow counting from a unique index (if one\nexists)? In general, an index scan on a unique index should be faster\nthan a table scan. Of course, I'm sure someone already thought of this...\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "28 Oct 1999 22:44:12 -0400",
"msg_from": "Brian E Gallew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug#48582: psql spends hours computing results it\n\talready knows (fwd)"
},
{
"msg_contents": "Brian E Gallew <[email protected]> writes:\n> So, does the planner allow counting from a unique index (if one\n> exists)? In general, an index scan on a unique index should be faster\n> than a table scan. Of course, I'm sure someone already thought of this...\n\nVadim will have to check me on this, but I believe that index entries\ndon't contain transaction information --- that is, you can determine\nwhether a tuple matches a specified search key by examining the index,\nbut in order to discover whether the tuple is actually *valid*\n(according to your transaction's worldview) you must fetch the tuple\nitself from the main table. So scanning an index cannot be cheaper than\na sequential scan of the main table, except when the index allows you to\navoid visiting most of the tuples in the main table.\n\nThere has been some discussion of allowing scans of indexes without\nfetching the underlying tuples, but AFAICS that would mean replicating\nthe tuple transaction status information into (each!) index, which'd\nbe a big hit in both disk space and number of disk writes implied by\ncommitting a tuple. I've got my doubts about it being a win...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 28 Oct 1999 23:57:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug#48582: psql spends hours computing results it\n\talready knows (fwd)"
},
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> Vadim will have to check me on this, but I believe that index entries\n> don't contain transaction information --- that is, you can determine\n> whether a tuple matches a specified search key by examining the index,\n> but in order to discover whether the tuple is actually *valid*\n> (according to your transaction's worldview) you must fetch the tuple\n> itself from the main table. So scanning an index cannot be cheaper than\n> a sequential scan of the main table, except when the index allows you to\n> avoid visiting most of the tuples in the main table.\n\nRight. As usual, I've overlooked something obvious. So, this really\nwouldn't work unless we had an exclusive table lock ('cause then there\nwouldn't be any other transactions to worry about, except for our\nown). Feh.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "29 Oct 1999 00:51:51 -0400",
"msg_from": "Brian E Gallew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug#48582: psql spends hours computing results it\n\talready knows (fwd)"
}
] |
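As the thread above explains, the planner's row estimate comes from statistics recorded at the last VACUUM. When an approximate count is acceptable, that estimate can be read straight from the system catalog instead of paying for the sequential scan; a sketch (the table name is a placeholder):

```sql
-- Approximate row count, as of the last VACUUM [ANALYZE]:
SELECT reltuples FROM pg_class WHERE relname = 'huge_table';

-- An exact, transaction-consistent count still requires the scan:
SELECT count(*) FROM huge_table;
```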
[
{
"msg_contents": "> The optimizer is perfectly happy with approximate tuple counts, so it\n> makes do with stats recorded at the last VACUUM.\n> \n> This has been discussed quite recently on pg-hackers; see the archives\n> for more info.\n\nYes, the problem is not the optimizer. The problem is the select count(*).\nA lot of DB's (like Informix) have a shortcut for this, and even though they\nhave it, they don't use it for the optimizer.\n\nIf our btrees had an accurate count (deleted rows?), scanning the smallest\nindex\nwould also be a lot faster.\n\nAndreas\n",
"msg_date": "Fri, 29 Oct 1999 11:34:44 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Bug#48582: psql spends hours computing results it a\n\tlready knows (fwd)"
}
] |
[
{
"msg_contents": "Hi,\n\n I have released pgbash-1.2.1.\n http://www.psn.co.jp/PostgreSQL/pgbash/index-e.html\n\n The main updates are these. \n\n 1. Improved interactive operation. \n 2. An original COPY (with -y option) function. \n 3. It is no longer necessary to change the Makefile. \n\n# Pgbash was already better than psql for shell programming, \n but not in the interactive environment.\n With this improvement, Pgbash should be better\n than psql in the interactive environment too.\n\n1. Improved interactive operation. \n \n Type 'pgbash'.\n pgbash> l -------------------- list databases\n pgbash> sel test ------------- select * from test \n pgbash> ins test col1,col2 --- copy test(col1,col2) from stdin\n 111\tabc\tefg \n \\. \n pgbash> dt ------------------- equal to \"psql \\dt\"\n pgbash> d table_name --------- equal to \"psql \\d \"\n\n2. An original COPY (with -y option) function.\n\n pgbash> exec_sql -y \"copy test(col1,col2) from /tmp/oo\"\n\n COPY with the -y option makes it possible to designate the\ncolumns, and the line number and an error message are displayed\nwhen an error arises.\n\n3. It is no longer necessary to change the Makefile. \n\n Until now, the Makefile had to be changed to pick up the bash\ninclude files whenever the bash version changed. In pgbash-1.2.1\nthis is no longer necessary.\n\n \n--\nRegards.\n\nSAKAIDA Masaaki -- Osaka, Japan\n\n",
"msg_date": "Fri, 29 Oct 1999 19:45:05 +0900",
"msg_from": "SAKAIDA <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgbash-1.2.1 released"
}
] |
[
{
"msg_contents": "I just received a message from someone complaining about SERIAL/sequence. I\nthink there is a problem:\n\t\n\ttest=> create table test (x int, y serial);\n\tNOTICE: CREATE TABLE will create implicit sequence 'test_y_seq' for SERIAL column 'test.y'\n\tNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_y_key' for table 'test'\n\tCREATE\n\ttest=> insert into test (x) values (100);\n\tINSERT 19359 1\n\ttest=> insert into test (x) values (100);\n\tINSERT 19360 1\n\nThese work fine, but why does this fail:\n\n\ttest=> insert into test values (100, null);\n\tERROR: ExecAppend: Fail to add null value in not null attribute y\n\ttest=> insert into test values (100, 0);\n\tINSERT 19363 1\n\ttest=> insert into test values (100, 0);\n\tERROR: Cannot insert a duplicate key into a unique index\n\nCan't they use zero or null, and have the sequence value be computed?\nIs there some design decision we made to prevent this?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 29 Oct 1999 17:29:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Serial and NULL values"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> \ttest=> create table test (x int, y serial);\n> \tCREATE\n> \ttest=> insert into test values (100, null);\n> \tERROR: ExecAppend: Fail to add null value in not null attribute y\n\ngram.y thinks SERIAL is defined to mean NOT NULL:\n\n | ColId SERIAL ColPrimaryKey\n {\n ColumnDef *n = makeNode(ColumnDef);\n n->colname = $1;\n n->typename = makeNode(TypeName);\n n->typename->name = xlateSqlType(\"integer\");\n n->raw_default = NULL;\n n->cooked_default = NULL;\n=================> n->is_not_null = TRUE;\n n->is_sequence = TRUE;\n n->constraints = $3;\n\n $$ = (Node *)n;\n }\n\nOffhand I don't see any fundamental reason why serial columns should\nbe restricted to be nonnull, but evidently someone did at some point.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Oct 1999 19:44:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Serial and NULL values "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > \ttest=> create table test (x int, y serial);\n> > \tCREATE\n> > \ttest=> insert into test values (100, null);\n> > \tERROR: ExecAppend: Fail to add null value in not null attribute y\n> \n> gram.y thinks SERIAL is defined to mean NOT NULL:\n> \n> | ColId SERIAL ColPrimaryKey\n> {\n> ColumnDef *n = makeNode(ColumnDef);\n> n->colname = $1;\n> n->typename = makeNode(TypeName);\n> n->typename->name = xlateSqlType(\"integer\");\n> n->raw_default = NULL;\n> n->cooked_default = NULL;\n> =================> n->is_not_null = TRUE;\n> n->is_sequence = TRUE;\n> n->constraints = $3;\n> \n> $$ = (Node *)n;\n> }\n> \n> Offhand I don't see any fundamental reason why serial columns should\n> be restricted to be nonnull, but evidently someone did at some point.\n\nThe actual null is not the issue. The issue is that if we have a\nSERIAL column, and we try to put a NULL in there, shouldn't it put the\ndefault sequence number in there?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 29 Oct 1999 20:20:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Serial and NULL values"
},
{
"msg_contents": "On Fri, Oct 29, 1999 at 08:20:30PM -0400, Bruce Momjian wrote:\n> > \n> > Offhand I don't see any fundamental reason why serial columns should\n> > be restricted to be nonnull, but evidently someone did at some point.\n> \n> The actual null is not the issue. The issue is that if we have a\n> SERIAL column, and we try to put a NULL in there, shouldn't it put the\n> default sequence number in there?\n> \n\nIt seems logical that if a value was supplied for a serial column that \nit would override the default. After all, SERIAL is just an int column \nwith a default based on a sequence, right? If the default were always \nused (even when a value is supplied) then that would be a REAL BIG problem. \n\nWithout making SERIAL a distinctly different datatype, I can't see how \na default sequence could behave differently for two tables created with \ndifferent syntax.\n\nMy 2 cents is that the current behavior is the correct behavior.\n\nAs far as the NULL goes, since the SERIAL column is assumed to be a \nkey and a unique index is created, having it NOT NULL seems like a\ngood idea. I don't know anyone who would have a key value be NULL,\nand even if it could be NULL, you would only be allowed one NULL.\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n http://www.mobygames.com\n",
"msg_date": "Fri, 29 Oct 1999 21:26:42 -0400",
"msg_from": "Brian Hirt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Serial and NULL values"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Offhand I don't see any fundamental reason why serial columns should\n>> be restricted to be nonnull, but evidently someone did at some point.\n\n> The actual null is not the issue. The issue is that if we have a\n> SERIAL column, and we try to put a NULL in there, shouldn't it put the\n> default sequence number in there?\n\nNo, I wouldn't expect that at all. A default is inserted when you\ndon't supply anything at all for the column. Inserting an explicit\nNULL means you want a NULL, and barring a NOT NULL constraint on\nthe column, that's what the system ought to insert. I can see no\npossible justification for creating a type-specific exception to\nthat behavior.\n\nIf the original asker really wants to substitute something else for\nan explicit null insertion, he could do it with a rule or a trigger.\nBut I don't think SERIAL ought to act that way all by itself.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Oct 1999 22:28:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Serial and NULL values "
},
{
"msg_contents": ">\n> Bruce Momjian <[email protected]> writes:\n> >> Offhand I don't see any fundamental reason why serial columns should\n> >> be restricted to be nonnull, but evidently someone did at some point.\n>\n> > The actual null is not the issue. The issue is that if we have a\n> > SERIAL column, and we try to put a NULL in there, shouldn't it put the\n> > default sequence number in there?\n>\n> No, I wouldn't expect that at all. A default is inserted when you\n> don't supply anything at all for the column. Inserting an explicit\n> NULL means you want a NULL, and barring a NOT NULL constraint on\n> the column, that's what the system ought to insert. I can see no\n> possible justification for creating a type-specific exception to\n> that behavior.\n>\n> If the original asker really wants to substitute something else for\n> an explicit null insertion, he could do it with a rule or a trigger.\n> But I don't think SERIAL ought to act that way all by itself.\n>\n> regards, tom lane\n\n I agree with tom.\n\n If you don't want the user to be able to insert NULL, specify\n NOT NULL explicitly. And if you want to force a default\n behaviour, use a trigger (a rule can't do - sorry).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 30 Oct 1999 15:36:55 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Serial and NULL values"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Offhand I don't see any fundamental reason why serial columns should\n> >> be restricted to be nonnull, but evidently someone did at some point.\n> \n> > The actual null is not the issue. The issue is that if we have a\n> > SERIAL column, and we try to put a NULL in there, shouldn't it put the\n> > default sequence number in there?\n> \n> No, I wouldn't expect that at all. A default is inserted when you\n> don't supply anything at all for the column. Inserting an explicit\n> NULL means you want a NULL, and barring a NOT NULL constraint on\n> the column, that's what the system ought to insert. I can see no\n> possible justification for creating a type-specific exception to\n> that behavior.\n> \n> If the original asker really wants to substitute something else for\n> an explicit null insertion, he could do it with a rule or a trigger.\n> But I don't think SERIAL ought to act that way all by itself.\n\nOK, I see now. In Informix, if you insert 0 into a serial column, you\nget the nextval assigned.\n\nHowever, I can see that is not logical. We have serial which defines a\ndefault for nextval().\n\nThanks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 31 Oct 1999 08:15:12 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Serial and NULL values"
},
{
"msg_contents": "> > No, I wouldn't expect that at all. A default is inserted when you\n> > don't supply anything at all for the column. Inserting an explicit\n> > NULL means you want a NULL, and barring a NOT NULL constraint on\n> > the column, that's what the system ought to insert. I can see no\n> > possible justification for creating a type-specific exception to\n> > that behavior.\n> >\n> > If the original asker really wants to substitute something else for\n> > an explicit null insertion, he could do it with a rule or a trigger.\n> > But I don't think SERIAL ought to act that way all by itself.\n> >\n> > regards, tom lane\n> \n> I agree with tom.\n> \n> If you don't want the user to be able to insert NULL, specify\n> NOT NULL explicitly. And if you want to force a default\n> behaviour, use a trigger (a rule can't do - sorry).\n\nI thought Informix put the nextval with NULL, but I now see they do it\nwith zero, which is pretty strange.\n\nNever mind.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 31 Oct 1999 08:17:32 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Serial and NULL values"
}
] |
[
{
"msg_contents": "It looks like we're going to give a go at a native interface between\nMATLAB and PostgreSQL. I think if we can just define the enumerated\ntypes for PGresult and PGconnect plus some of the enumerated constants\n(PGRES_TUPLES_OK, PGRES_COMMAND_OK), then we can just write cmex wrapper\nfunctions from the existing C function library of PostgreSQL.\n\nDoes anyone happen to have a nice PGresult and PGconnect definition\n(i.e. no references to other enumerated types)? The way these look now\n(in the header file libpq-int.h) it may take some time to go through all\nof the embedded enumerated types.\n\nAlso, just so we don't have to re-invent the wheel: Are there any\nMATLAB/PostgreSQL interfaces already written?\n\nThanks.\n-Tony Reina\n\n\n",
"msg_date": "Fri, 29 Oct 1999 15:10:44 -0700",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "MATLAB PostgreSQL interface"
},
{
"msg_contents": "> It looks like we're going to give a go at a native interface between\n> MATLAB and PostgreSQL. I think if we can just define the enumerated\n> types for PGresult and PGconnect plus some of the enumerated constants\n> (PGRES_TUPLES_OK, PGRES_COMMAND_OK), then we can just write cmex wrapper\n> functions from the existing C function library of PostgreSQL.\n\nGrab them from the source or include file.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Fri, 29 Oct 1999 18:18:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MATLAB PostgreSQL interface"
}
] |
[
{
"msg_contents": " http://www.PostgreSQL.org/~wieck/\n\n The new developers globe is updated with all photos I got so\n far. Still missing are:\n\n Andrew Martin\n Bruce Momjian\n Byron Nikolaidis\n Constantin Teodorescu\n Edmund Mergl\n Goran Thyni\n Hiroshi Inoue\n Massimo dal Zotto\n Michael Meskes\n Tatsuo Ishii\n Thomas Lockhart\n Tom Lane\n\n Since I don't expect that we get all of them soon, I reverted\n and took out the photos from the text part again.\n\n Who should maintain this page finally? If it's me, I would go\n ahead and update the content of the main site now. If not, tell me\n who and when to transfer.\n\n BTW: There are a lot of files on the main site where cvs stat\n doesn't report Up-to-Date. Some are locally modified but not\n committed, some need a merge.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 30 Oct 1999 17:33:00 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "missing mugshots"
},
{
"msg_contents": "> http://www.PostgreSQL.org/~wieck/\n> \n> The new developers globe is updated with all photos I got so\n> far. Still missing are:\n> \n> Andrew Martin\n> Bruce Momjian\n> Byron Nikolaidis\n> Constantin Teodorescu\n> Edmund Mergl\n> Goran Thyni\n> Hiroshi Inoue\n> Massimo dal Zotto\n> Michael Meskes\n> Tatsuo Ishii\n> Thomas Lockhart\n> Tom Lane\n\nOK, mine is attached. As I mentioned on the phone, I was waiting for a\nhaircut, but it seems it is not coming soon enough. Here is a recent\nphoto I liked.\n\n> \n> Since I don't expect that we get all of them soon, I reverted\n> and took out the photos from the text part again.\n> \n> Who should maintain this page finally? If it's me, I would\n> and update the content of the main site now. If not, tell me\n> who and when to transfer.\n\nIf you would be willing to, that would be great.\n\n> \n> BTW: There are alot of files on the main site where cvs stat\n> doesn't tell up-to-date. Some are locally modified but not\n> committed, some need a merge.\n\nYou mean files on the site that are not in cvs, or are newer than the\ncvs copies. Either Vince or I did that, I suppose. How did you find\nwhich ones they were? Vince, any ideas?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026",
"msg_date": "Sat, 30 Oct 1999 11:50:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] missing mugshots"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> >\n> > BTW: There are alot of files on the main site where cvs stat\n> > doesn't tell up-to-date. Some are locally modified but not\n> > committed, some need a merge.\n>\n> You mean files on the site that are not in cvs, or are newer than the\n> cvs copies. Either Vince or I did that, I suppose. How did you find\n> which ones they were? Vince, any ideas?\n\n [wieck@hub] ~pgsql/ftp/www/html > cvs stat |& less\n\n The locally modified ones don't seem to be a problem to me\n since they are what's visible outside and simply need a\n commit.\n\n But the ones reporting Needs Merge are! This status means\n that the working file is a locally modified copy of an older\n revision in the CVS. A cvs update would try to merge both\n modifications into the new working file and this is required\n before a commit! Let's take devel-contrib.html as an example:\n\n [wieck@hub] ~pgsql/ftp/www/html > cvs stat devel-contrib.html\n ===================================================================\n File: devel-contrib.html Status: Needs Merge\n\n Working revision: 1.29 Mon Mar 29 16:43:09 1999\n Repository revision: 1.44 /usr/local/cvsroot/www/html/devel-contrib.html,v\n Sticky Tag: (none)\n Sticky Date: (none)\n Sticky Options: (none)\n\n The last cvs update brought the working copy (the one\n actually on the web site) into sync with 1.29. Since then,\n there have been 15 commits from other working directories\n plus local modifications to the file. Looking at the diff it\n seems that the authors (momjian and vev :-) have a checkout\n of the repository on their home system, commit modifications\n there and instead of checking them out on the main site they\n just copy their working files onto the site, ignoring that\n the main site is another working directory of the same\n repository. That's just what I guess from what I see.\n\n Just to be complete (don't know if there are any): the state\n Needs Patch means that the working file is an unmodified one\n needing a cvs update because there were modifications to the\n CVS from somewhere else.\n\n To fix this situation I think we should first commit the ones\n that are modified locally via\n\n cvs commit <file>\n\n and do a\n\n cvs update <file>\n\n for any one that needs a patch.\n\n Finally we have all those left that need a merge. I'm not\n sure, but can we say that what's actually on the site is\n correct? If so, we could take the files one by one, move the\n actual file out of the way and do a\n\n cvs update <file>\n\n This will check out the file in the state the repository\n thinks is correct. Then we move back what we know is current\n and do a commit. After this move-update-moveback cvs thinks\n it's a locally modified one and the new cvs revision is then\n what anybody can see on the web.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 30 Oct 1999 18:30:27 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] missing mugshots"
},
{
"msg_contents": "Thus spake Jan Wieck\n> The new developers globe is updated with all photos I got so\n\nOne thing I notice is that mine and the server share a pin (reasonable\nas we are in the same place) but the way it is presented is a little\nconfusing. It will get more confusing as time goes on and we get more\nthan one person in the same city. At least in this case it's obvious that\nthe server didn't work on stuff. There should be a cleaner delineation\nbetween people sharing the same location.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 30 Oct 1999 16:42:32 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] missing mugshots"
},
{
"msg_contents": ">\n> Thus spake Jan Wieck\n> > The new developers globe is updated with all photos I got so\n>\n> One thing I notice is that mine and the server share a pin (reasonable\n> as we are in the same place) but the way it is presented is a little\n> confusing. It will get more confusing as time goes on and we get more\n> than one person in the same city. At least in this case it's obvious that\n> the server didn't work on stuff. There should be a cleaner delineation\n> between people sharing the same location.\n\nHmmmm,\n\n does that mean the server is only close to you, not where you\n are?\n\n From the wording in the original page \"... and location of\n hub.org\" I assumed that it is the SAME location, not just\n another one in Toronto. Therefore I just colored the pin a\n little differently. But if it's a separate location it must be\n a separate pin with its own popup.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sat, 30 Oct 1999 22:48:13 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] missing mugshots"
},
{
"msg_contents": "\nOn 30-Oct-99 Jan Wieck wrote:\n> Bruce Momjian wrote:\n> \n>> >\n>> > BTW: There are alot of files on the main site where cvs stat\n>> > doesn't tell up-to-date. Some are locally modified but not\n>> > committed, some need a merge.\n>>\n>> You mean files on the site that are not in cvs, or are newer than the\n>> cvs copies. Either Vince or I did that, I suppose. How did you find\n>> which ones they were? Vince, any ideas?\n> \n> [wieck@hub] ~pgsql/ftp/www/html > cvs stat |& less\n> \n> The locally modified ones don't seem to be a problem to me\n> since they are what's visible outside and simply need a\n> commit.\n> \n> But the ones telling Needs Merge are! This status means,\n> that the working file is a locally modified copy of an older\n> revision in the CVS. A cvs update would try to merge both\n> modifications into the new working file and this is required\n> before a commit! Let's take devel-contrib.html as an example:\n> \n> [wieck@hub] ~pgsql/ftp/www/html > cvs stat devel-contrib.html\n> ===================================================================\n> File: devel-contrib.html Status: Needs Merge\n> \n> Working revision: 1.29 Mon Mar 29 16:43:09 1999\n> Repository revision: 1.44 /usr/local/cvsroot/www/html/devel-contrib.html,v\n> Sticky Tag: (none)\n> Sticky Date: (none)\n> Sticky Options: (none)\n> \n> The last cvs update brought the working copy (that one\n> actually on the web site) into sync with 1.29. Since then,\n> there have been 15 commits from other working directories\n> plus local modifications to the file. Looking at the diff it\n> seems that the authors (momjian and vev :-) have a checkout\n> of the repository on their home system, commit modifications\n> there and instead of checking them out on the main site they\n> just copy their working files onto the site, ignoring that\n> the main site is another working directory of the same\n> repository. 
That's just what I guess from what I see.\n\nMust have been a long time ago. Can I just do a cvs release or \nsomething like that? I do all my work on hub.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Sat, 30 Oct 1999 16:58:53 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] missing mugshots"
},
{
"msg_contents": "Vince Vielhaber wrote:\n\n> Must have been a long time ago. Can I just do a cvs release or\n> something like that? I do all my work on hub.\n\n I don't know about such a command. And the real problem seems\n to me that there are multiple maintainers who treat it\n differently.\n\n The content provided on the net should NOT be a checked out\n working copy. Any maintainer of the files should have his\n own local checkout, ideally at a location where a web server\n can present it (like an apache virtual host). The files on\n the web site should be updated automatically by CVS rules at\n commit time.\n\n In this setup, one could test changes locally until everything is\n O.K. and do a commit that automatically presents all the\n changes at once to the world.\n\n An advantage is that if you want to do heavy modifications to\n a couple of files, you could do a\n\n cvs admin -l <file> [...]\n\n and be sure no one else can do a commit until you're done.\n Well, another user could explicitly break the lock with\n another \"cvs admin -l\", but then you, as the original locker,\n are notified via mail about it. If you ensure that there is\n at least a little modification in the locked file (adding a\n space somewhere immediately after locking it), you are sure\n that the next commit will check in a new revision. At that\n point the lock is implicitly released and the file is unlocked\n again. Otherwise you must do an explicit\n\n cvs admin -u <file> [...]\n\n to release the lock with no changes made. And that bears the\n risk of forgetting locks.\n\n I'll play around a little with CVS rules (don't know exactly\n how to set them up but I know that they exist). Will tell you\n later if this is something worth trying.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 31 Oct 1999 01:35:58 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] missing mugshots"
},
{
"msg_contents": "\nOn 31-Oct-99 Jan Wieck wrote:\n> Vince Vielhaber wrote:\n> \n>> Must have been a long time ago. Can I just do a cvs release or\n>> something like that? I do all my work on hub.\n> \n> I don't know about such a command. And the real problem seems\n> to me that there are multiple maintainers that treat it\n> differently.\n> \n> The content provided on the net should NOT be a checked out\n> working copy. Any maintainter of the files should have it's\n> own checkout local, ideally at a location where a web server\n> can present it (like an apache virtual host). The files on\n> the web site should be updated automatically by CVS rules at\n> commit time.\n> \n> In this setup, one could test changes local until anything is\n> O.K. and do a commit that automatically presents all the\n> changes at once to the world.\n> \n> An advance is, that if you want to do heavy modifications to\n> a couple of files, you could do a\n> \n> cvs admin -l <file> [...]\n> \n> and be sure, noone else can do a commit until you're done.\n> Well, another user could explicitly break the lock with\n> another \"cvs admin -l\", but then you, as the original locker,\n> are notified via mail about it. If you ensure that there is\n> at least a little modification in the locked file (adding a\n> space somewhere immediately after locking it), you are sure\n> that the next commit will checkin a new revision. At this\n> time, the lock implicitly is done and the file is unlocked\n> again. Otherwise you must do an explicit\n> \n> cvs admin -u <file> [...]\n> \n> to release the lock with no changes made. And that bears the\n> risk of forgetting locks.\n> \n> I'll play around a little with CVS rules (don't know exactly\n> how to set them up but I know that they exist). Will tell you\n> later if this is something worth to try.\n\nCongrats Jan. You lost me. AFAIK that's how the website works - at least\nas long as I've been maintaining it. 
The thing that's not maintained that\nway is the news and announcements which I maintain via a web based tool.\nBruce and Tom (Lockhart) maintain the docs pages. Or am I missing something\ntotally obvious?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Sat, 30 Oct 1999 20:19:50 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] missing mugshots"
},
{
"msg_contents": "Thus spake Jan Wieck\n> > One thing I notice is that mine and the server share a pin (reasonable\n> > as we are in the same place) but the way it is presented is a little\n> > confusing. It will get more confusing as time goes on and we get more\n> > than one person in the same city. At least in this case it's obvious that\n> > the server didn't work on stuff. There should be a cleaner delineation\n> > between people sharing the same location.\n> \n> Hmmmm,\n> \n> does that mean the server is only close to you, not where you\n> are?\n> \n> From the wording in the original page \"... and location of\n> hub.org\" I assumed that it is the SAME location, not just\n> another one in Toronto. Therefore I just colored the pin a\n> little different. But if it's a separate location it must be\n> a separate pin with it's own popup.\n\nI just assumed that there wasn't enough room to stick two pins in the same\ncity. Well, we aren't in exactly the same place. However, I connect\ndirectly to the system that provides hub.org its bandwidth so logically\nwe are next door neighbours.\n\nHowever, it really is a different place but, as I said, we'll have to\ndeal with it when we get big enough that one city might get crowded.\nWhy not just have the popup be a list of developers rather than one\nand allow for more than one?\n\nThen again, more pins looks better. In any case, perhaps the server\nshould have its own pin.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 30 Oct 1999 20:45:50 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] missing mugshots"
},
{
"msg_contents": "> I'll play around a little with CVS rules (don't know exactly\n> how to set them up but I know that they exist). Will tell you\n> later if this is something worth to try.\n\n Wow, never thought it would be that easy!\n\n If we place a tiny little script\n\n #!/bin/sh\n cd /home/projects/pgsql/ftp/www\n cvs update </dev/null >/dev/null 2>&1 &\n echo \"\"\n echo \"Main WEBsite will be updated automatically.\"\n echo \"The appropriate 'cvs update' is already waiting\"\n echo \"in background for your locks to be released.\"\n echo \"\"\n\n into the same directory it cd's to, we only need to add the\n line\n\n www -i /home/projects/pgsql/ftp/www/<script> www\n\n to the modules file in the CVSROOT and anyone with a local\n checkout must do a new checkout. Also we check out the actual\n working directory of the real site again and voila, anytime\n someone does a commit, a 'cvs update' is automatically started\n in the web site's root directory. It must run in the background\n to avoid a deadlock, so be it. The echo messages are visible to\n the one doing the commit, just to remind him that he's\n doing something visible to the world.\n\n As said, anybody should WORK in his private checked out\n working directory at home. This would prevent side effects if\n multiple maintainers work on the real files.\n\n But for very small changes, it would be O.K. to do it on the\n main site and commit them there, because the cvs update\n started in the background would in fact be a noop.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 31 Oct 1999 02:55:01 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] missing mugshots"
},
{
"msg_contents": "> Then again, more pins looks better. In any case, perhaps the server\n> should have its own pin.\n\n Not always, at least if more than 50% of the area not\n occupied by water gets hidden by pins :-).\n\n Anyway, I'll make it a separate one and will work on the list\n approach when we need it because of space shortage for new\n pins.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 31 Oct 1999 02:00:07 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] missing mugshots"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> >\n> > I'll play around a little with CVS rules (don't know exactly\n> > how to set them up but I know that they exist). Will tell you\n> > later if this is something worth to try.\n>\n> Congrats Jan. You lost me. AFAIK that's how the website works - at least\n> as long as I've been maintaining it. The thing that's not maintained that\n> way is the news and announcements which I maintain via a web based tool.\n> Bruce and Tom (Lockhart) maintain the docs pages. Or am I missing something\n> totally obvious?\n\nHmmm,\n\n maybe I lost you, but then again what you knew can't be\n right. The only CVSROOT files, where the module www occurs,\n are the history and the loginfo. And loginfo is just\n something that send's logging info via mail. It is not\n intended to do other automatic jobs, and the one entry for\n www in it in fact only sends them to\n [email protected].\n\n If it works that way, it must be very tricky hidden and not\n implemented the way it should be. At least it doesn't work\n anymore. I cannot imagine how it otherwise could be that from\n 58 files in the www repository\n\n 19 have state Up-to-Date\n 24 have state Needs Merge\n and\n 4 have state Locally Modified.\n\n And there are files, like overlib.*, that are totally unknown\n to the repository!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 31 Oct 1999 02:23:55 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] missing mugshots"
},
{
"msg_contents": "> Vince Vielhaber wrote:\n> \n> > Must have been a long time ago. Can I just do a cvs release or\n> > something like that? I do all my work on hub.\n> \n> I don't know about such a command. And the real problem seems\n> to me that there are multiple maintainers that treat it\n> differently.\n> \n> The content provided on the net should NOT be a checked out\n> working copy. Any maintainter of the files should have it's\n> own checkout local, ideally at a location where a web server\n> can present it (like an apache virtual host). The files on\n> the web site should be updated automatically by CVS rules at\n> commit time.\n\nI am sorry, but I am totally confused.\n\nI have a cvs copy here. I do cvs commits, and I ftp the files to the\nweb site when I make a change. Yes, it would be nice to have cvs commit\nautomatically install them on the web site.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 30 Oct 1999 22:04:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] missing mugshots"
},
{
"msg_contents": "> > I'll play around a little with CVS rules (don't know exactly\n> > how to set them up but I know that they exist). Will tell you\n> > later if this is something worth to try.\n> \n> Congrats Jan. You lost me. AFAIK that's how the website works - at least\n> as long as I've been maintaining it. The thing that's not maintained that\n> way is the news and announcements which I maintain via a web based tool.\n> Bruce and Tom (Lockhart) maintain the docs pages. Or am I missing something\n> totally obvious?\n\nYea, now we are both lost.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 30 Oct 1999 22:05:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] missing mugshots"
}
] |
[
{
"msg_contents": "Still problems with pgaccess.\n\nWhile the new Makefile I made properly copies pgaccess to the bin\ndirectory, it does not deal with PGACCESS_HOME properly, and I believe\nconfigure should be able to set PATH_TO_WISH.\n\nPlease hold 6.5.3 until someone comes up with a good solution to this.\n\nDo we want to copy the entire pgaccess tree to the pgsql install\ndirectory?\n\nCan someone suggest a line for PATH_TO_WISH that can be set by configure?\n\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 30 Oct 1999 17:43:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgaccess for 6.5.3"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Still problems with pgaccess.\n> \n> While the new Makefile I made properly copies pgaccess to the bin\n> directory, it does not deal with PGACCESS_HOME properly, and I believe\n> configure should be table to set PATH_TO_WISH.\n> \n> Please hold 6.5.3 until someone comes up with a good solution to this.\n> \n> Do we want to copy the entire pgaccess tree to the pgsql install\n> directory?\n> \n> Can someone suggest a line for PATH_TO_WISH that can be set by configure?\n\nHmmm.... Under the RPM installation, there is installed, by the build\nscript in the spec file, a shell script into /usr/bin called pgaccess --\nthis script is as follows:\n-----------\n#!/bin/sh\n\nPATH_TO_WISH=/usr/bin/wish\nPGACCESS_HOME=/usr/lib/pgsql/pgaccess\n\nexport PATH_TO_WISH\nexport PGACCESS_HOME\n\nexec ${PATH_TO_WISH} ${PGACCESS_HOME}/main.tcl \"$@\"\n----------\n\nPGACCESS_HOME should, under the standard installation, be set to\nsomething more inline with the standard installation's idea of where\npgaccess/main.tcl is located.\n\nPATH_TO_WISH should likely be /usr/bin/wish on most systems.\n\nThe RPM packages have not relied upon the tarball's inclusion of\npgaccess -- rather, a separate tarball of just the latest pgaccess is\ngrafted in -- so there may be some other differences that I am not aware\nof.\n\n--\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Sat, 30 Oct 1999 18:57:08 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess for 6.5.3"
},
{
"msg_contents": ">> Can someone suggest a line for PATH_TO_WISH that can be set by configure?\n\nYou should use autoconf's standard mechanism for testing for a\nprogram's existence and location. To wit,\n\tAC_PATH_PROG(PATH_TO_WISH, wish)\nor one of its variants.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 31 Oct 1999 01:54:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess for 6.5.3 "
},
{
"msg_contents": "> >> Can someone suggest a line for PATH_TO_WISH that can be set by configure?\n> \n> You should use autoconf's standard mechanism for testing for a\n> program's existence and location. To wit,\n> \tAC_PATH_PROG(PATH_TO_WISH, wish)\n> or one of its variants.\n\nThanks. That's what I used. See my other message outlining my changes\nto get this working.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 31 Oct 1999 07:49:04 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pgaccess for 6.5.3"
}
] |
[
{
"msg_contents": "Hi, all\n\nIn pg_dump there is a file called common.c. This file has some string\nhandling routines in it that return a pointer to a fixed-length, static\nstring (char *). I need to remove the fixed-length bit (besides the fact\nthat this is horrendously un-threadsafe). So, what is the best\nmechanism to use on replacement? There seem to be two fairly standard\nmethods to use, a) make the calling function allocate the memory it\nrequires, and pass that in to the called function, or b) the called function\nallocates memory using a documented call (say, malloc), and hands\nresponsibility for freeing the memory to the calling function. Given the\nnon-fixed-length constraint, the second option would appear better, but does\nanybody out there have any other ideas?\n\nMikeA\n",
"msg_date": "Sun, 31 Oct 1999 00:09:38 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump, and strings"
},
{
"msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> In pg_dump there is a file called common.c. This file has some string\n> handling routines in it that return a pointer to a fixed-length, static\n> string (char *). I need to remove the fixed-length bit (besides the fact\n> fact that this is horrendously un-threadsafe). So, what is the best\n> mechanism to use on replacement? There seem to be two fairly standard\n> methods to use, a) make the calling function allocate the memory it\n> requires, and pass that in to the called function, or b) the called function\n> allocates memory using a documented call (say, malloc), and hands\n> responsibility for freeing the memory to the calling function. Given the\n> non-fixed-length constraint, the second option would appear better, but does\n> any body out there have any other ideas?\n\nThe first approach requires that the caller know in advance how much\nresult space the callee will need; unless the return type is fixed-size\nthat's usually a bad design.\n\nAnother possibility is to do something like the backend's palloc()\nstuff, wherein responsibility for eventually cleaning up the garbage\nis handed to a third party. But that's probably overkill for pg_dump.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 30 Oct 1999 19:32:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump, and strings "
},
{
"msg_contents": "On Oct 31, Ansley, Michael mentioned:\n\n> Hi, all\n> \n> In pg_dump there is a file called common.c. This file has some string\n> handling routines in it that return a pointer to a fixed-length, static\n> string (char *). I need to remove the fixed-length bit (besides the fact\n> fact that this is horrendously un-threadsafe). So, what is the best\n> mechanism to use on replacement? There seem to be two fairly standard\n> methods to use, a) make the calling function allocate the memory it\n> requires, and pass that in to the called function, or b) the called function\n> allocates memory using a documented call (say, malloc), and hands\n> responsibility for freeing the memory to the calling function. Given the\n> non-fixed-length constraint, the second option would appear better, but does\n> any body out there have any other ideas?\n> \n> MikeA\n\nOf course malloc is not very thread-safe either.\n\nBut as far as I'm concerned, the difference between a) and b) is cosmetic.\n\n-- \nPeter Eisentraut                  Sernanders vaeg 10:115\[email protected]            75262 Uppsala\nhttp://yi.org/peter-e/            Sweden\n\n",
"msg_date": "Sun, 31 Oct 1999 01:45:21 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump, and strings"
}
] |
[
{
"msg_contents": "I have been doing some profiling this weekend in response to Vadim's\nchallenge to reduce the amount of overhead in a simple INSERT command.\nI've found a number of simple improvements that I hope to check in\nshortly. I came across something in the time code that I thought I'd\nbetter check with you before changing.\n\nIn utils/adt/nabstime.c, the function GetCurrentAbsoluteTime() is called\nduring each StartTransaction in order to save the transaction's start\ntime. It shows up unreasonably high in my profile (> 1% of runtime):\n\n                0.62   10.22  100001/100001      StartTransaction [65]\n[91]     1.4    0.62   10.22  100001         GetCurrentAbsoluteTime [91]\n                0.92    8.30  100001/100001      localtime [105]\n                0.88    0.00  100001/100004      time [305]\n                0.12    0.00  100001/104713      strcpy [479]\n\nNow the interesting thing about this is that the essential part of the\nfunction is just the time() call, AFAICS, and that's quite cheap. More\nthan 90% of the runtime is being spent in the \"if (!HasCTZSet)\" branch.\nI see no reason for that code to be run during every single transaction.\nIt sets the following variables:\n\n\tCTimeZone\n\tCDayLight\n\tCTZName\n\nCDayLight is not used *anywhere* except for debug printouts, and could\ngo away completely. CTZName is not used if USE_POSIX_TIME is defined,\nwhich is true on most platforms. CTimeZone is not quite as useless, but\nthere are only a couple places where it's used when USE_POSIX_TIME is\ntrue, and they don't look like critical-path stuff to me.\n\nWe could almost say that these variables need only be set once per\nbackend startup, but I suppose that would do the wrong thing in a\nbackend that's left running over a daylight-savings transition.\n\nWhat I'm inclined to do is arrange for these variables to be calculated\nonly on-demand, at most once per transaction. It'd be even nicer to\nget rid of them entirely, but I don't think I understand the time code\nwell enough to venture that.\n\nDo you have any comments pro or con on this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 30 Oct 1999 18:59:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance glitch in GetCurrentAbsoluteTime()"
},
{
"msg_contents": "(back online after a week of downtime)\n\n> In utils/adt/nabstime.c, the function GetCurrentAbsoluteTime() is called\n> during each StartTransaction in order to save the transaction's start\n> time. It shows up unreasonably high in my profile (> 1% of runtime):\n> 0.62 10.22 100001/100001 StartTransaction [65]\n> [91] 1.4 0.62 10.22 100001 GetCurrentAbsoluteTime [91]\n> 0.92 8.30 100001/100001 localtime [105]\n> 0.88 0.00 100001/100004 time [305]\n> 0.12 0.00 100001/104713 strcpy [479]\n> Now the interesting thing about this is that the essential part of the\n> function is just the time() call, AFAICS, and that's quite cheap. More\n> than 90% of the runtime is being spent in the \"if (!HasCTZSet)\" branch.\n> I see no reason for that code to be run during every single transaction.\n> It sets the following variables:\n> CTimeZone\n> CDayLight\n> CTZName\n> CDayLight is not used *anywhere* except for debug printouts, and could\n> go away completely.\n\nOK, let's kill it.\n\n> CTZName is not used if USE_POSIX_TIME is defined,\n> which is true on most platforms.\n\nOK, it should be #ifndef'd\n\n> CTimeZone is not quite as useless, but\n> there are only a couple places where it's used when USE_POSIX_TIME is\n> true, and they don't look like critical-path stuff to me.\n> We could almost say that these variables need only be set once per\n> backend startup, but I suppose that would do the wrong thing in a\n> backend that's left running over a daylight-savings transition.\n\nRight. If we were only supporting WinDoze, then we wouldn't need to\nworry. But my linux box stays up forever, so daylight savings time\ntransitions are important ;)\n\n> What I'm inclined to do is arrange for these variables to be calculated\n> only on-demand, at most once per transaction. It'd be even nicer to\n> get rid of them entirely, but I don't think I understand the time code\n> well enough to venture that.\n\nAt most once per transaction is what I was hoping the behavior already\nis. Anyway, if we can take the time() result and *later* figure out\nthe other values, then we could:\n\n1) clear a flag when time() is called\n2) use a wrapper around a stripped GetCurrentAbsoluteTime() for\ndate/time support\n3) if the flag in (1) is clear, then evaluate the other parameters\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 04 Nov 1999 15:35:17 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance glitch in GetCurrentAbsoluteTime()"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> (back online after a week of downtime)\n\nI was wondering why you were so quiet. Hardware trouble?\n\n>> What I'm inclined to do is arrange for these variables to be calculated\n>> only on-demand, at most once per transaction.\n\n> At most once per transaction is what I was hoping the behavior already\n> is.\n\nActually, my gripe is that it's done in every transaction whether\nneeded or not...\n\n> Anyway, if we can take the time() result and *later* figure out\n> the other values, then we could:\n\n> 1) clear a flag when time() is called\n> 2) use a wrapper around a stripped GetCurrentAbsoluteTime() for\n> date/time support\n> 3) if the flag in (1) is clear, then evaluate the other parameters\n\nRight, that was pretty much what I was thinking too. As long as\nCTimeZone &etc are evaluated using the time value saved at the\nstart of the transaction, the behavior will be the same.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Nov 1999 11:31:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance glitch in GetCurrentAbsoluteTime() "
},
{
"msg_contents": "> > (back online after a week of downtime)\n> I was wondering why you were so quiet. Hardware trouble?\n\nIn a sense. The server-side modem/ppp setup I use was dead, and I was\nso busy on other stuff I didn't bug anyone to fix it...\n\n> Right, that was pretty much what I was thinking too. As long as\n> CTimeZone &etc are evaluated using the time value saved at the\n> start of the transaction, the behavior will be the same.\n\nShould I put it on my ToDo? Not sure how to test that it helps with\nexecution time, but I should be able to get to it before 7.0...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 04 Nov 1999 16:40:56 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance glitch in GetCurrentAbsoluteTime()"
}
] |
[
{
"msg_contents": "\n\n",
"msg_date": "Sun, 31 Oct 1999 10:49:55 +0100",
"msg_from": "\"Mario Simeone\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "subscribe"
}
] |
[
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > Do we want to copy the entire pgaccess tree to the pgsql install\n> > directory?\n> \n> Why not? Of course, on UNIX systems without DLL's\n\nOK, I have installed it in the current tree and stable tree. I will\nkeep pgaccess up-to-date in both trees in the future.\n\nThe tricky part is that pgaccess must know the 'wish' path that is\ndetermined by configure, and the POSTGRESDIR path which comes from\nMakefile.global, so I had to create a Makefile.in, and a pgaccess.sh. \nMakefile.in is set at configure time, and pgaccess.sh is set at compile\ntime. The final script pgaccess has hardcoded in it the path to wish,\nand the pgaccess directory inside the install directory.\n\nThe only problem I see is that wish is determined by a crude directory\nsearch, while our tcl/tk stuff has the ability to target certain\nversions of tcl/tk. Any idea how to do that cleanly? On my machine, I\nget wish 7.6 because it sees it in /usr/contrib/bin first, while for\ntcl/tk I tell configure to look in /usr/local/lib first, so I get tcl/tk\n8.0.\n\nI don't see a 'wish' path defined in any of the tcl/tk config files.\n\nI copied the entire pgaccess/lib and pgaccess/images trees into the\ninstall directory. The other stuff seemed like it should stay just in\nthe source tree.\n\nWould someone please test this?\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 31 Oct 1999 07:50:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgaccess 0.98"
},
{
"msg_contents": "> On Sun, 31 Oct 1999, Bruce Momjian wrote:\n> > Would someone please test this?\n> \n> If I can figure out how (or if someone will tell me how) to check out the\n> 6.5.3-candidate tree, I'll be glad to. After all, I've got to get a jump on\n> the RPM's for 6.5.3.\n\nThe developement tree has the same treatment, though I would like to\nhave the stable tree checked too.\n\nI seem to mess this area up often.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 31 Oct 1999 15:31:19 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pgaccess 0.98"
},
{
"msg_contents": "> On Sun, 31 Oct 1999, Bruce Momjian wrote:\n> > > On Sun, 31 Oct 1999, Bruce Momjian wrote:\n> > > > Would someone please test this?\n> > > \n> > > If I can figure out how (or if someone will tell me how) to check out the\n> > > 6.5.3-candidate tree, I'll be glad to. After all, I've got to get a jump on\n> > > the RPM's for 6.5.3.\n> > \n> > The developement tree has the same treatment, though I would like to\n> > have the stable tree checked too.\n> > \n> > I seem to mess this area up often.\n> \n> As I will be building RPM's for both trees eventually, I am in the midst of a\n> dual-branch checkout. Painful over my home's 33.6K modem, but necessary.\n> \n> I did (am doing, actually) a cvs checkout with -r REL6_5 -- that is the correct\n> tag for the stable branch, right??\n\nSadly, no. It is REL6_5_PATCHES.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 31 Oct 1999 15:52:54 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pgaccess 0.98"
},
{
"msg_contents": "> On Sun, 31 Oct 1999, Bruce Momjian wrote:\n> > > On Sun, 31 Oct 1999, Bruce Momjian wrote:\n> > > > Would someone please test this?\n> > > \n> > > If I can figure out how (or if someone will tell me how) to check out the\n> > > 6.5.3-candidate tree, I'll be glad to. After all, I've got to get a jump on\n> > > the RPM's for 6.5.3.\n> > \n> > The developement tree has the same treatment, though I would like to\n> > have the stable tree checked too.\n> > \n> > I seem to mess this area up often.\n> \n> As I will be building RPM's for both trees eventually, I am in the midst of a\n> dual-branch checkout. Painful over my home's 33.6K modem, but necessary.\n> \n> I did (am doing, actually) a cvs checkout with -r REL6_5 -- that is the correct\n> tag for the stable branch, right??\n\nYou may want to skip the separate pgaccess rpm now. I have wish found\nvia configure, and the support files loaded into the install directory\nunder pgaccess/, and the pgaccess startup script pointing there, so\nthere isn't any more manual handling of pgaccess.\n\nIt should just install and work now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 31 Oct 1999 15:54:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pgaccess 0.98"
},
{
"msg_contents": "On Sun, 31 Oct 1999, Bruce Momjian wrote:\n> Would someone please test this?\n\nIf I can figure out how (or if someone will tell me how) to check out the\n6.5.3-candidate tree, I'll be glad to. After all, I've got to get a jump on\nthe RPM's for 6.5.3.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sun, 31 Oct 1999 16:29:21 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess 0.98"
},
{
"msg_contents": "On Sun, 31 Oct 1999, Bruce Momjian wrote:\n> > On Sun, 31 Oct 1999, Bruce Momjian wrote:\n> > > Would someone please test this?\n> > \n> > If I can figure out how (or if someone will tell me how) to check out the\n> > 6.5.3-candidate tree, I'll be glad to. After all, I've got to get a jump on\n> > the RPM's for 6.5.3.\n> \n> The developement tree has the same treatment, though I would like to\n> have the stable tree checked too.\n> \n> I seem to mess this area up often.\n\nAs I will be building RPM's for both trees eventually, I am in the midst of a\ndual-branch checkout. Painful over my home's 33.6K modem, but necessary.\n\nI did (am doing, actually) a cvs checkout with -r REL6_5 -- that is the correct\ntag for the stable branch, right??\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sun, 31 Oct 1999 16:45:34 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess 0.98"
},
{
"msg_contents": "On Sun, 31 Oct 1999, Bruce Momjian wrote:\n> > I did (am doing, actually) a cvs checkout with -r REL6_5 -- that is the correct\n> > tag for the stable branch, right??\n> \n> Sadly, no. It is REL6_5_PATCHES.\n\nI'm glad for the quick reply. I aborted the REL6_5 co, and am now checking out\nREL6_5_PATCHES. It'll just take a half hour to do the checkout (I miss my T1\nat work right about now....)\n\nThanks!\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sun, 31 Oct 1999 16:57:18 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess 0.98"
},
{
"msg_contents": "Bruce Momjian wrote:\n \n> It should just install and work now.\n\nIn the RPM context, it doesn't. I pulled a cvs update of\nREL6_5_PATCHES, tarballed the tree into 'postgresql-6.5.3.tar.gz', and\ngave it the 'rpm -ba' treatment after a patch-building and\nspec-file-editing session that I'd rather forget (due to some of my\nstupid errors) (ask Thomas about the rpm spec file....).\n\n/usr/bin/pgaccess (in the standard install, this is the result of your\nmunged src/bin/pgaccess/pgaccess.sh) has the following line:\n\nPATH_TO_WISH=@WISH@\n\nWhich confuses things. The line should read\n'PATH_TO_WISH=/usr/bin/wish' -- which, when I edit pgaccess to read\nthus, everything works.\n\nSystem: RedHat Linux 5.2.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 01 Nov 1999 18:30:30 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess 0.98"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> > It should just install and work now.\n> \n> In the RPM context, it doesn't. I pulled a cvs update of\n> REL6_5_PATCHES, tarballed the tree into 'postgresql-6.5.3.tar.gz', and\n> gave it the 'rpm -ba' treatment after a patch-building and\n> spec-file-editing session that I'd rather forget (due to some of my\n> stupid errors) (ask Thomas about the rpm spec file....).\n> \n> /usr/bin/pgaccess (in the standard install, this is the result of your\n> munged src/bin/pgaccess/pgaccess.sh) has the following line:\n> \n> PATH_TO_WISH=@WISH@\n> \n> Which confuses things. The line should read\n> 'PATH_TO_WISH=/usr/bin/wish' -- which, when I edit pgaccess to read\n> thus, everything works.\n\nThanks for testing this. I have fixed the problem. Please try it again\nand let me know.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Nov 1999 19:12:29 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pgaccess 0.98"
},
{
"msg_contents": "Bruce Momjian wrote:\n \n> Thanks for testing this. I have fixed the problem. Please try it again\n> and let me know.\n\nI'll let you know as soon as I rebuild -- which won't be tonight (gotta\nspend some time with my wife). First thing in the morning.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 01 Nov 1999 20:21:06 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess 0.98"
},
{
"msg_contents": "On Mon, 01 Nov 1999, Lamar Owen wrote:\n> Bruce Momjian wrote:\n> \n> > Thanks for testing this. I have fixed the problem. Please try it again\n> > and let me know.\n> \n> I'll let you know as soon as I rebuild -- which won't be tonight (gotta\n> spend some time with my wife). First thing in the morning.\n\nWell, turns out I did have time tonight to rebuild -- and it works.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 1 Nov 1999 22:49:38 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess 0.98"
},
{
"msg_contents": "> On Mon, 01 Nov 1999, Lamar Owen wrote:\n> > Bruce Momjian wrote:\n> > \n> > > Thanks for testing this. I have fixed the problem. Please try it again\n> > > and let me know.\n> > \n> > I'll let you know as soon as I rebuild -- which won't be tonight (gotta\n> > spend some time with my wife). First thing in the morning.\n> \n> Well, turns out I did have time tonight to rebuild -- and it works.\n\nOK, 6.5.3 is ready for packaging.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Nov 1999 23:05:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pgaccess 0.98"
},
{
"msg_contents": "> The tricky part is that pgaccess must know the 'wish' path that is\n> determined by configure, and the POSTGRESDIR path which comes from\n> Makefile.global, so I had to create a Makefile.in, and a pgaccess.sh.\n> Makefile.in is set a configure time, and pgaccess.sh is set at compile\n> time. The final script pgaccess has hardcoded in it the path to wish,\n> and the pgaccess directory inside the install directory.\n\nHmm. There is a common trick to finding wish (and presumably other\n\"shells\") for a shell script; it involves an \"exec\" as the first\nexecutable line of the script. Did this just recently. I'll look it up\nwhen I get to work.\n\nYour path must of course be set properly; is that not acceptable?\nCan't we tolerate minor changes in wish version without\nrebuilding/reinstalling from Postgres sources??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 04 Nov 1999 15:43:51 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess 0.98"
},
{
"msg_contents": "> > The tricky part is that pgaccess must know the 'wish' path that is\n> > determined by configure, and the POSTGRESDIR path which comes from\n> > Makefile.global, so I had to create a Makefile.in, and a pgaccess.sh.\n> > Makefile.in is set a configure time, and pgaccess.sh is set at compile\n> > time. The final script pgaccess has hardcoded in it the path to wish,\n> > and the pgaccess directory inside the install directory.\n> \n> Hmm. There is a common trick to finding wish (and presumably other\n> \"shells\") for a shell script; it involves an \"exec\" as the first\n> executable line of the script. Did this just recently. I'll look it up\n> when I get to work.\n> \n\nYes, I know the trick, but since we are already doing the search in\nconfigure, we may as well use it rather than doing a search for wish at\nruntime. The reason is that we now have a WISH variable in\nMakefile.global that can be set to any value the user wants. This makes\nit consistent with the way we handle other tcl/tk things.\n\n\n> Your path must of course be set properly; is that not acceptable?\n> Can't we tolerate minor changes in wish version without\n> rebuilding/reinstalling from Postgres sources??\n\nIt does not look for any particular version of wish, but just the first\nwish in the path found by configure. However, it can be easily changed.\nMy Makefile.custom has WISH=another_path because I want wish8 and not\nthe wish that is first in the path.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Nov 1999 11:01:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pgaccess 0.98"
}
] |
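The "path search" that configure performs for wish, as discussed in the thread above, is just a first-match walk over $PATH. A minimal sketch in Python (the function name and exact behavior are illustrative, not the actual autoconf macro):

```python
import os

def find_program(name, path=None):
    """Return the first executable called `name` on the given search
    path (defaults to $PATH), mimicking what a configure-style
    program check does; None if nothing is found."""
    path = path if path is not None else os.environ.get("PATH", "")
    for directory in path.split(os.pathsep):
        if not directory:
            continue
        candidate = os.path.join(directory, name)
        # A hit must be a regular file with the execute bit set.
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

if __name__ == "__main__":
    print(find_program("sh"))
```

The runtime alternative Thomas alludes to is the classic Tcl idiom of starting a script with an `exec wish "$0" "$@"` line hidden behind a backslash-continued shell comment, so the script finds whatever wish is on the user's PATH at execution time instead of the one configure found at build time.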
[
{
"msg_contents": "Hi.\nI have the following problem:\nI have a table with a primary key, and in a trigger I try to insert into this\ntable data which violates the constraint (not unique). When actually executing\nSPI_execp I get a message \"ERROR: cannot insert a duplicate key into a unique index\" and\ntrigger execution is aborted. What should I do in order to get this error\nas a result from SPI_execp and continue trigger execution?\n\nThanks in advance,\nAndriy Korud, Lviv, Ukraine\n\n",
"msg_date": "31 Oct 1999 21:34:23 +0200",
"msg_from": "\"Andrij Korud\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trigger aborted on error"
},
{
"msg_contents": ">\n> Hi.\n> I've such problem :\n> I table with primary key and in trigger try to insert into this table data\n> wich violate constrain (not uniq). When ectually executing SPI_execp I got\n> a message \"ERROR: cannot insert a duplicate key into a unique index\" and\n> trigger executing is aborted. What should I do in order to get this error\n> as a result from SPI_execp and continue trigger execution?\n>\n> Thanks in advance,\n> Andriy Korud, Lviv, Ukraine\n\n No chance, ERROR messages cannot be caught in any way by a\n trigger. They abort the entire transaction.\n\n The only possibility you have is to check via SELECT prior to\n the INSERT. Unfortunately you would need an exclusive table\n lock to avoid race conditions.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Sun, 31 Oct 1999 22:32:51 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Trigger aborted on error"
},
{
"msg_contents": "\n\nOn Sun, 31 Oct 1999, Jan Wieck wrote:\n\n> >\n> > Hi.\n> > I've such problem :\n> > I table with primary key and in trigger try to insert into this table data\n> > wich violate constrain (not uniq). When ectually executing SPI_execp I got\n> > a message \"ERROR: cannot insert a duplicate key into a unique index\" and\n> > trigger executing is aborted. What should I do in order to get this error\n> > as a result from SPI_execp and continue trigger execution?\n> >\n> > Thanks in advance,\n> > Andriy Korud, Lviv, Ukraine\n> \n> No chance, ERROR messages cannot be caught in any way by a\n> trigger. They abort the entire transaction.\n> \n> The only possibility you have is to check via SELECT prior to\n> the INSERT. Unfortunately you would need an exclusive table\n> lock to avoid race conditions.\n> \n> \n> Jan\n> \nLet me ask another question: Is there some way to insert unique data into a\ntable without first checking with a SELECT? This table contains >1M\nrecords and a SELECT on it is very slow. If there is no way of doing it I\nshould consider moving from Postgres to another database :(\n\nAndriy Korud\n \n\n",
"msg_date": "1 Nov 1999 09:03:56 +0200",
"msg_from": "\"Andrij Korud\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Trigger aborted on error"
},
{
"msg_contents": "Thus spake Andrij Korud\n> > The only possibility you have is to check via SELECT prior to\n> > the INSERT. Unfortunately you would need an exclusive table\n> > lock to avoid race conditions.\n> > \n> Let's make another question: Is there some way to insert uniq data into\n> table without first cheking using SELECT. Because this table contain >1M\n> records and SELECT on it is very slow. If there is no way of doing it I\n> should consider moving from Postgres to other database :(\n\nHave you put an index on the field in question? It shouldn't matter how\nmany records you have if you do. If you don't, no other database will\nhelp you any better.\n\nThe following declaration will create the field, give it the default\nand put a unique index on it. How are you declaring the field now?\n\n CREATE TABLE t (pk SERIAL PRIMARY KEY, ...\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 1 Nov 1999 06:55:16 -0500 (EST)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Trigger aborted on error"
}
] |
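Jan's suggested workaround in the thread above (check with a SELECT before the INSERT) can be sketched as follows. This uses SQLite purely so the example is self-contained; in Postgres the check and the insert would additionally need a table lock to close the race condition he mentions, and D'Arcy's advice about indexing the column still applies to make the check fast:

```python
import sqlite3

def insert_if_absent(conn, word):
    """Insert `word` only if it is not already present.
    Returns True if a row was actually inserted."""
    # The pre-check replaces catching the duplicate-key error,
    # which (in the Postgres of this era) would abort the transaction.
    cur = conn.execute("SELECT 1 FROM t1 WHERE word = ?", (word,))
    if cur.fetchone() is not None:
        return False
    conn.execute("INSERT INTO t1 (word) VALUES (?)", (word,))
    return True

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (word TEXT PRIMARY KEY)")
inserted_first = insert_if_absent(conn, "xxx")
inserted_again = insert_if_absent(conn, "xxx")
```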
[
{
"msg_contents": "I have re-done pgaccess again, both trees.\n\nBecause wish is configured via a path search, I moved the WISH= line\ninto Makefile.global, so people can change it easily there, or in\nMakefile.custom. Having it embedded in pgaccess/Makefile was too\ngoofy.\n\nThere is no pgaccess/Makefile.in anymore. It gets its WISH from\nMakefile.global, like PERL gets defined.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 31 Oct 1999 15:34:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgaccess 0.98"
}
] |
[
{
"msg_contents": "OK, my 6.5.3 cvs copy works fine for pgaccess.\n\nActually, we never installed pgaccess this cleanly in earlier releases.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 31 Oct 1999 15:46:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgaccess 0.98"
},
{
"msg_contents": "On Sun, 31 Oct 1999, Bruce Momjian wrote:\n> OK, my 6.5.3 cvs copy works fine for pgaccess.\n> \n> Actually, we never installed pgaccess this cleanly in earlier releases.\n\nWhich is why previous RPM's had such special handling for pgaccess. Your\nchanges are going to hopefully make things much easier on the RPM side. I'll\nknow here in a little while.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sun, 31 Oct 1999 17:43:49 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess 0.98"
}
] |
[
{
"msg_contents": "www.hack.co.za has been updated today.\n\n[[-Sun 31 October-]]\n Added sendmail-8.9.3.tar.gz by icesk. (0-day!)\n Added sperl4.036.c FreeBSD 2.2.8 exploit by OVX. (old exploit)\n Added tcpdump.c Linux/misc exploit BLADI. (old exploit)\n Fixed up the CGI section, thanks to dv8 for submitting the bug. (lots of\n500 errors)\n Fixed OS colour scheme and exploit outlay. (took hours to do)\n Fixed frame border outlays. (invisible now)\n\n++gb\n\n\n",
"msg_date": "Sun, 31 Oct 1999 23:33:07 +0200",
"msg_from": "\"gov-boi\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "exploit update."
}
] |
[
{
"msg_contents": "Hi\n\nI am seeing the backend crash on a table I am using for a\nsearch engine, and I cannot find any answers in the list archives.\n\nWelcome to the POSTGRESQL interactive sql monitor:\n  Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\n   type \\? for help on slash commands\n   type \\q to quit\n   type \\g or terminate with semicolon to execute query\n You are currently connected to the database: search\n\nsearch=> update search_url set stale=941424005 where lowerurl='http://criswell.bizland.com';\npqReadData() -- backend closed the channel unexpectedly.\n        This probably means the backend terminated abnormally\n        before or while processing the request.\nWe have lost the connection to the backend, so further processing is impossible.  Terminating.\n\npsql search\nWelcome to the POSTGRESQL interactive sql monitor:\n  Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\n   type \\? for help on slash commands\n   type \\q to quit\n   type \\g or terminate with semicolon to execute query\n You are currently connected to the database: search\n\nsearch=> select count(*) from search_url; \ncount\n-----\n20334\n(1 row)\n\nThe postmaster stderr says:\n\n/usr/bin/postmaster: reaping dead processes...\n/usr/bin/postmaster: CleanupProc: pid 9788 exited with status 0\nFATAL 1:  my bits moved right off the end of the world!\n/usr/bin/postmaster: reaping dead processes...\n/usr/bin/postmaster: CleanupProc: pid 9792 exited with status 0\n\nThe postmaster stdout says:\n\nStartTransactionCommand\nProcessQuery\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\nCommitTransactionCommand\n\nThe stdout IS moving by very quickly so I can't be sure this message matches up\nwith this error, but it certainly seems to (all the rest are\nCommitTransactionCommand/StartTransactionCommand/ProcessQuery)\n\nMy biggest problem is that I am using the C libraries, and PQexec()\ndoes not return, gdb shows it is sitting in a select() inside\n#0  0xc91954e in __select ()\n#1  0xc851428 in pgresStatus ()\n#2  0xc84a9ea in PQgetResult ()\n#3  0xc84ab77 in PQexec ()\n\nand it hangs there forever (well, 10 minutes so far)\n\nNow, the line I am doing an update on, I can select quite happily, and\nit returns the value I expect.\n\nExtra info:\n#uname -a\nLinux ewtoo.org 2.2.12 #2 SMP Fri Oct 1 21:50:14 BST 1999 i686 unknown\n\n#free -k\n             total       used       free     shared    buffers     cached\nMem:        387476     381112       6364     761628      52904     166224\n-/+ buffers/cache:     161984     225492\nSwap:       526168      19816     506352\n\nAnyone else seen this?\n\n\t\t\t\t\t\t~Michael\n",
"msg_date": "Mon, 1 Nov 1999 04:12:09 +0000 (GMT)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backend crashes (6.5.2 linux)"
},
{
"msg_contents": "Michael Simms <[email protected]> writes:\n> The postmaster stderr says:\n> FATAL 1: my bits moved right off the end of the world!\n\nHmm. That error is coming out of the btree index code. Vadim knows\nthat code better than anyone else, so he might have something to say\nhere, but my past-midnight recollection is that we've seen that error\nbeing triggered when there are oversize entries in the index (where\n\"oversize\" = \"more than half a disk page\"). It's a bug, for sure,\nbut what you probably want right now is a workaround. Do you have any\nentries in indexed columns that are over 4K, and can you get rid of them?\n\n> My biggest problem is that I am using the C libraries, and PQexec()\n> does not return, gdb shows it is sitting in a select() inside\n> #0 0xc91954e in __select ()\n> #1 0xc851428 in pgresStatus ()\n> #2 0xc84a9ea in PQgetResult ()\n> #3 0xc84ab77 in PQexec ()\n\nHuh? PQgetResult does not call pgresStatus ... not least because the\nlatter is an array, not a function. Your gdb is lying to you. Maybe\nyou have a problem with gdb looking at a different version of the\nlibrary than what's actually executing?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Nov 1999 00:43:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend crashes (6.5.2 linux) "
},
{
"msg_contents": "> Michael Simms <[email protected]> writes:\n> > The postmaster stderr says:\n> > FATAL 1: my bits moved right off the end of the world!\n> \n\nThat's my favorite error message. Can we make it print more often?\nPru-Hahaha... :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Nov 1999 00:53:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend crashes (6.5.2 linux)"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Michael Simms <[email protected]> writes:\n> > The postmaster stderr says:\n> > FATAL 1: my bits moved right off the end of the world!\n> \n> Hmm. That error is coming out of the btree index code. Vadim knows\n> that code better than anyone else, so he might have something to say\n> here, but my past-midnight recollection is that we've seen that error\n> being triggered when there are oversize entries in the index (where\n> \"oversize\" = \"more than half a disk page\"). It's a bug, for sure,\n> but what you probably want right now is a workaround. Do you have any\n> entries in indexed columns that are over 4K, and can you get rid of them?\n\nThis FATAL means that index is broken (some prev insertion\nwas interrupted by elog(ERROR) or backend crash) - try to rebuild...\nWAL should fix this bug.\n\nVadim\n",
"msg_date": "Mon, 01 Nov 1999 14:02:04 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend crashes (6.5.2 linux)"
}
] |
[
{
"msg_contents": "Is this correct?\n\n\ttest=> SET DATESTYLE = 'European';\n\tSET VARIABLE\n\ttest=> SELECT date('2/1/1983'::date);\n\t date\n\t----------\n\t02-01-1983\n\t(1 row)\n\nI have found US, Postgres, European, and NonEuropean as showing the same\ndate output. Is that correct?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Nov 1999 01:17:45 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Datestyle"
}
] |
[
{
"msg_contents": "I see now. The input is reversed in and out, so it looks the same when\nit is different. Never mind.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Nov 1999 01:20:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "DateStyle"
}
] |
[
{
"msg_contents": "I think it's essential in WAL to have log file(s) on separate disk\ndrives, so that the database can be recovered when the drive that holds\nthe tables crashes. How could we do this in 7.0?\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 01 Nov 1999 16:23:40 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Log on separate disk?"
}
] |
[
{
"msg_contents": "Hi,\nIs there any way to get the oid of a record just inserted by SPI_execp without\ndoing a SELECT?\n\nThanks in advance,\nAndriy Korud, Lviv, Ukraine\n\n",
"msg_date": "1 Nov 1999 09:51:51 +0200",
"msg_from": "\"Andrij Korud\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Getting oid of just inserted record"
}
] |
[
{
"msg_contents": "Hey, I just found that if I \"BEGIN\", make a lot of inserts (1000), and on\nthe 1001st insert I get an error (for example \"duplicate key\"), ALL of the\nprevious 1000 inserts are LOST.\nPlease tell me the author of this %$%$^&%# idea!!! It's really STUPID.\n\nAndriy Korud, Lviv, Ukraine\nP.S. Sorry for this letter but I'm really angry. Can postgres give me a way\nto store unique data in a table without checking before each insert whether the\ndata already exists or not? Why can't it give me an error about duplicate keys\nwhich I will (or will not) take care of?\n\n",
"msg_date": "1 Nov 1999 17:09:15 +0200",
"msg_from": "\"Andrij Korud\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Whos idea was this"
},
{
"msg_contents": "Andrij Korud writes:\n > Hey, I just found that if I \"BEGIN\", making a lot of inserts (1000) and on\n > 1001 insert I get an error (for example \"duplicate key\") ALL prev 1000\n > insertes I LOST.\n > Give me please author of this %$%$^&%# idea!!! It's really STUPID.\n\n\nActually, that's the way it's supposed to work. Most modern\nrelational databases support what are called transactions. A\ntransaction is an indivisible unit of work for the database engine.\n\nBy starting a transaction (using the 'BEGIN') you are telling the\ndatabase engine that you want it to make the updates if and only if\n_all_ of the updates can succeed. If any update would fail, the\ndatabase engine will make sure that none of the updates take place.\n\nTransactions come in really handy if we have updates on a set of mutually\ndependent tables. Frequently we don't want to update any of the\ntables unless we can succeed in updating all of the tables.\n\nIf you have a set of updates that aren't mutually dependent, just use\na 'COMMIT TRANSACTION' between each update.\n\n\n-- \n=======================================================================\n Life is short.            | Craig Spannring  \n Bike hard, ski fast.      | [email protected]\n --------------------------------+------------------------------------\n Any sufficiently horrible technology is indistinguishable from Perl.\n=======================================================================\n\n",
"msg_date": "Mon, 1 Nov 1999 10:17:42 -0700 (MST)",
"msg_from": "Craig Spannring <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Whos idea was this"
},
{
"msg_contents": "Sorry for the earlier angry letter; let's close this topic.\nSorry once more.\n\nAndriy Korud, Lviv, Ukraine\n\n",
"msg_date": "1 Nov 1999 19:20:01 +0200",
"msg_from": "\"Andrij Korud\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Whos idea was this"
}
] |
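Craig's point above, that a transaction is all-or-nothing, and the behavior Andrij was hoping for, per-statement recovery, can both be demonstrated. The sketch below uses SQLite so it runs stand-alone, with SAVEPOINT standing in for the subtransaction support that Postgres itself lacked at the time of this thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage transactions explicitly
conn.execute("CREATE TABLE t (k INTEGER PRIMARY KEY)")

# One big transaction: the duplicate key forces a full rollback,
# discarding the rows that had already been inserted.
conn.execute("BEGIN")
try:
    for k in (1, 2, 3, 3):           # the second 3 violates the key
        conn.execute("INSERT INTO t VALUES (?)", (k,))
    conn.execute("COMMIT")
except sqlite3.IntegrityError:
    conn.execute("ROLLBACK")
after_rollback = conn.execute("SELECT count(*) FROM t").fetchone()[0]

# Savepoint per insert: only the failing statement is undone,
# and the good rows survive to the COMMIT.
conn.execute("BEGIN")
for k in (1, 2, 3, 3):
    conn.execute("SAVEPOINT sp")
    try:
        conn.execute("INSERT INTO t VALUES (?)", (k,))
        conn.execute("RELEASE sp")
    except sqlite3.IntegrityError:
        conn.execute("ROLLBACK TO sp")
        conn.execute("RELEASE sp")
conn.execute("COMMIT")
after_savepoints = conn.execute("SELECT count(*) FROM t").fetchone()[0]
```

The first loop ends with an empty table (everything rolled back); the second keeps the three distinct keys.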
[
{
"msg_contents": "Seems like there is (was) a leak of file descriptors somewhere. The\ndescriptors are being used up like crazy. After a week of work on a small\ndatabase (6 tables, 20 or so indexes) Postgres used up well over 800\ndescriptors. Is this something known/fixed?\n\n[PostgreSQL 6.6.0 on i386-unknown-freebsd3.2, compiled by gcc 2.7.2.1]\n\ndownloaded and built about the time of 6.5.2 release, sometime in\nmid-September.\n\nGene Sokolov.\n\n\n\n",
"msg_date": "Mon, 1 Nov 1999 18:55:07 +0300",
"msg_from": "\"Gene Sokolov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "file descriptors leak?"
},
{
"msg_contents": "\"Gene Sokolov\" <[email protected]> writes:\n> Seems like there is (was) a leak of file descriptors somewhere. The\n> descriptors are being used up like crazy.\n\nI fixed some problems along that line during the 6.5 cycle, and thought\nthe issue closed. Perhaps the problem's come back.\n\n> After a week of work on a small\n> database (6 tables, 20 or so indexes) Postgres used up well over 800\n> descriptors.\n\nHmm, there must be multiple descriptors open for the same file then?\nThat's really weird. Can you obtain a listing of just what is open,\nusing lsof or some similar tool? Even better, can you provide a\nreproducible test case that will cause descriptor leakage?\n\nAlso, exactly what do you mean by \"Postgres used up...\" --- is this\none backend, or a total across the whole system (if so, how many\nbackends are we talking about here?).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Nov 1999 16:17:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] file descriptors leak? "
},
{
"msg_contents": "From: Tom Lane <[email protected]>\n> \"Gene Sokolov\" <[email protected]> writes:\n> > Seems like there is (was) a leak of file descriptors somewhere. The\n> > descriptors are being used up like crazy.\n>\n> I fixed some problems along that line during the 6.5 cycle, and thought\n> the issue closed. Perhaps the problem's come back.\n>\n> > After a week of work on a small\n> > database (6 tables, 20 or so indexes) Postgres used up well over 800\n> > descriptors.\n>\n> Hmm, there must be multiple descriptors open for the same file then?\n> That's really weird. Can you obtain a listing of just what is open,\n> using lsof or some similar tool? Even better, can you provide a\n> reproducible test case that will cause descriptor leakage?\n\nWe disconnected all clients and the number of descriptors dropped from 800\nto about 200, which is reasonable. We currently have 3 connections and ~300\nused descriptors. The \"lsof -u postgres\" is attached. It seems ok except for\na large number of open /dev/null. If I hit the problem again, I'll collect\nthe list of open descriptors.\n\n> Also, exactly what do you mean by \"Postgres used up...\" --- is this\n> one backend, or a total across the whole system (if so, how many\n> backends are we talking about here?).\n\n1 postmaster, 4-5 backends. If I understand correctly, that is: 1 connection\n== 1 backend.\n\nGene Sokolov\n\n\n",
"msg_date": "Tue, 2 Nov 1999 12:01:46 +0300",
"msg_from": "\"Gene Sokolov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] file descriptors leak? "
},
{
"msg_contents": "\"Gene Sokolov\" <[email protected]> writes:\n> We disconnected all clients and the number of descriptors dropped from 800\n> to about 200, which is reasonable. We currently have 3 connections and ~300\n> used descriptors. The \"lsof -u postgres\" is attached.\n\nHmm, I see a postmaster with 8 open files and one backend with 34.\nDoesn't look out of the ordinary to me.\n\n> It seems ok except for a large number of open /dev/null.\n\nI see /dev/null at the stdin/stdout/stderr positions, which I suppose\nmeans that you started the postmaster with -S instead of directing its\noutput to a logfile.\n\nIt is true that on a system that'll let individual processes have as\nmany open file descriptors as they want, Postgres can soak up a lot.\nOver time I'd expect each backend to acquire an FD for practically\nevery file in the database directory (including system tables and\nindexes). So in a large installation you could be looking at thousands\nof open files. But the situation you're describing doesn't seem like\nit should reach those kinds of numbers.\n\nThe number of open files per backend can be constrained by fd.c, but\nAFAIK there isn't any way to set a manually-specified upper limit; it's\nall automatic. Perhaps there should be a configuration option to add\na limit.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Nov 1999 10:18:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] file descriptors leak? "
},
{
"msg_contents": "On Tue, 2 Nov 1999, Tom Lane wrote:\n\n> Date: Tue, 02 Nov 1999 10:18:15 -0500\n> From: Tom Lane <[email protected]>\n> To: Gene Sokolov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] file descriptors leak? \n> \n> \"Gene Sokolov\" <[email protected]> writes:\n> > We disconnected all clients and the number of descriptors dropped from 800\n> > to about 200, which is reasonable. We currently have 3 connections and ~300\n> > used descriptors. The \"lsof -u postgres\" is attached.\n> \n> Hmm, I see a postmaster with 8 open files and one backend with 34.\n> Doesn't look out of the ordinary to me.\n\nI see 617 open files (using lsof| grep post | wc).\nThis is a Linux 2.0.37, postgres 6.5.3, 1 postmaster and\n10 backends. I already complained about this and would be glad\nto understand whether this is OK, or whether postgres just wastes fds.\n\n\n\n> \n> > It seems ok except for a large number of open /dev/null.\n> \n> I see /dev/null at the stdin/stdout/stderr positions, which I suppose\n> means that you started the postmaster with -S instead of directing its\n> output to a logfile.\n\nIn my case most files are just /dev/sda.....\n\n> \n> It is true that on a system that'll let individual processes have as\n> many open file descriptors as they want, Postgres can soak up a lot.\n> Over time I'd expect each backend to acquire an FD for practically\n> every file in the database directory (including system tables and\n> indexes). So in a large installation you could be looking at thousands\n> of open files. But the situation you're describing doesn't seem like\n> it should reach those kinds of numbers.\n> \n> The number of open files per backend can be constrained by fd.c, but\n> AFAIK there isn't any way to set a manually-specified upper limit; it's\n> all automatic. Perhaps there should be a configuration option to add\n> a limit.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 2 Nov 1999 21:13:42 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] file descriptors leak? "
},
{
"msg_contents": "> On Tue, 2 Nov 1999, Tom Lane wrote:\n> \n> > Date: Tue, 02 Nov 1999 10:18:15 -0500\n> > From: Tom Lane <[email protected]>\n> > To: Gene Sokolov <[email protected]>\n> > Cc: [email protected]\n> > Subject: Re: [HACKERS] file descriptors leak? \n> > \n> > \"Gene Sokolov\" <[email protected]> writes:\n> > > We disconnected all clients and the number of descriptors dropped from 800\n> > > to about 200, which is reasonable. We currently have 3 connections and ~300\n> > > used descriptors. The \"lsof -u postgres\" is attached.\n> > \n> > Hmm, I see a postmaster with 8 open files and one backend with 34.\n> > Doesn't look out of the ordinary to me.\n> \n> I see 617 open files (using lsof| grep post | wc).\n> This is a Linux 2.0.37, postgres 6.5.3, 1 postamster and\n> 10 backends. I already complained about this and would glad\n> to understand now is it ok or postgres just wast fd ?\n\nPostgreSQL caches up to 64 file descriptors per backend, meaning it keeps\nthem open in case it needs them later. The 617 number for 10 backends\nsounds just about right.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 2 Nov 1999 13:35:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] file descriptors leak?"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n>> Hmm, I see a postmaster with 8 open files and one backend with 34.\n>> Doesn't look out of the ordinary to me.\n\n> I see 617 open files (using lsof| grep post | wc).\n> This is a Linux 2.0.37, postgres 6.5.3, 1 postamster and\n> 10 backends.\n\nAbout 60 FDs per backend, then. That sounds fairly reasonable to me;\nthat's probably what fd.c thinks it should limit its usage to on your\nplatform.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Nov 1999 14:20:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] file descriptors leak? "
}
] |
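The per-backend descriptor cache described above (fd.c keeps a bounded set of real file descriptors open and transparently closes the least recently used one when it needs another) amounts to an LRU cache keyed by file path. A toy model of that idea, with illustrative names only; the real mechanism lives in the backend's storage/file code and is considerably more involved:

```python
from collections import OrderedDict

class FdCache:
    """Toy LRU cache of open files, capped at `limit` entries."""

    def __init__(self, limit):
        self.limit = limit
        self.open_files = OrderedDict()   # path -> open file object

    def get(self, path):
        if path in self.open_files:
            # Already open: mark it most recently used and reuse it.
            self.open_files.move_to_end(path)
            return self.open_files[path]
        if len(self.open_files) >= self.limit:
            # At the cap: close the least recently used file.
            _, victim = self.open_files.popitem(last=False)
            victim.close()
        f = open(path, "rb")
        self.open_files[path] = f
        return f
```

With a limit of 64 per backend, ten backends holding their caches full accounts for roughly the 617 open files Oleg observed.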
[
{
"msg_contents": "Hi,\nIs there any way to obtain the OID of a record just inserted by SPI_execp?\n\nThanks in advance, \nAndriy Korud, Lviv, Ukraine \n\n",
"msg_date": "1 Nov 1999 19:10:56 +0200",
"msg_from": "\"Andrij Korud\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Get OID of just inserted record"
},
{
"msg_contents": "\n\nOn 1 Nov 1999, Andrij Korud wrote:\n\n> Hi,\n> Is there any way to obtain an OID of record just inserted by SPI_execp?\n> \n\nSELECT max(oid) ... which is not implemented now :-) \n\nIf there is any way to do this, it is probably a good idea to add it to the\nSPI API (as SPI_oidStatus()). What do you think?\n\n\t\t\t\t\t\tKarel\n\n------------------------------------------------------------------------------\nKarel Zak <[email protected]>              http://home.zf.jcu.cz/~zakkr/\n\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/    (process manager)\nFTP:         ftp://ftp2.zf.jcu.cz/users/zakkr/    (C/ncurses/PgSQL)\n------------------------------------------------------------------------------\n\n",
"msg_date": "Mon, 1 Nov 1999 18:39:09 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Get OID of just inserted record"
},
{
"msg_contents": ">\n> Hi,\n> Is there any way to obtain an OID of record just inserted by SPI_execp?\n\n    How should that work consistently? What do you expect as\n    return if the query executed was an\n\n        INSERT INTO t2 SELECT * FROM t1;\n\n    The first, a random one or the last of the two million rows\n    inserted?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 2 Nov 1999 02:05:48 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Get OID of just inserted record"
},
{
"msg_contents": "\n\nOn Tue, 2 Nov 1999, Jan Wieck wrote:\n\n> >\n> > Hi,\n> > Is there any way to obtain an OID of record just inserted by SPI_execp?\n> \n> How should that work consistenty? What do you expect as\n> return if the query executed was an\n> \n> INSERT INTO t2 SELECT * FROM t1;\n> \n> The first, a random one or the last of the two million rows\n> inserted?\n> \n> \nMy question is:\n\"CREATE TABLE t1 (word text)\"\n\"INSERT INTO t1 VALUES('xxx')\" (using SPI_execp)\n\nSo, is there any way to obtain OID of word 'xxx' just after insertion\nwithout doing \"SELECT oid FROM t1 WHERE word='xxx'\"?\n\nThanks in advance,\nAndriy Korud, Lviv, Ukraine\n\n\n",
"msg_date": "2 Nov 1999 09:15:14 +0200",
"msg_from": "\"Andrij Korud\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Get OID of just inserted record"
},
{
"msg_contents": "hi...\n\n> My question is:\n> \"CREATE TABLE t1 (word text)\"\n> \"INSERT INTO t1 VALUES('xxx')\" (using SPI_execp)\n> \n> So, is there any way to obtain OID of word 'xxx' just after insertion\n> without doing \"SELECT oid FROM t1 WHERE word='xxx'\"?\n\ni've been watching this thread and it has caused this thought rumble forth:\n\nwould it be possible to add a RETURN clause to INSERT? e.g.\n\nINSERT into t1 VALUES('xxx') RETURN oid;\n\ni could see where this would be useful in many different circumstances.. i\nknow this isn't standards compliant, but would be very cool =) i know that with\ntriggers, you have access to the current/old/new information, could this be\nharnessed to supply a RETURN facility?\n\njust a thought.. probably ignorable.\n\n-- \nAaron J. Seigo\nSys Admin\n",
"msg_date": "Tue, 2 Nov 1999 11:13:40 -0700",
"msg_from": "\"Aaron J. Seigo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Get OID of just inserted record"
},
{
"msg_contents": "On Tue, 2 Nov 1999, Aaron J. Seigo wrote:\n\n> would it be possible to add a RETURN clause to INSERT? e.g.\n> \n> INSERT into t1 VALUES('xxx') RETURN oid;\n> \n> i could see where this would be useful in many different circumstances.. i\n> know this isn't standards compliant, but would be very cool =) i know that with\n> triggers, you have access to the current/old/new information, could this be\n> harnessed to supply a RETURN facility?\n\nI'm not sure what I'm missing here:\n\n=> insert into foo values (4, 'aaa');\nINSERT 7998067 1\n\nThis line is generated by libpq's PQcmdStatus(). You can also just get the\noid part by using PQoidStatus(). Is that what you wanted or do you need a\nwrapper or binding for a certain environment?\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 2 Nov 1999 19:48:41 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Get OID of just inserted record"
},
{
"msg_contents": "\"Aaron J. Seigo\" <[email protected]> writes:\n> would it be possible to add a RETURN clause to INSERT? e.g.\n>\n> INSERT into t1 VALUES('xxx') RETURN oid;\n\nNot necessary --- the backend already does return the OID of the\ninserted tuple (if just one is inserted). You can see it in psql,\nfor example. The problem here is just that not all frontend libraries\nmake it possible to get at that value :-(.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Nov 1999 14:14:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Get OID of just inserted record "
},
{
"msg_contents": "hi...\n\n> > i could see where this would be useful in many different circumstances.. i\n> > know this isn't standards compliant, but would be very cool =) i know that with\n> > triggers, you have access to the current/old/new information, could this be\n> > harnessed to supply a RETURN facility?\n> \n> I'm not sure what I'm missing here:\n> \n> => insert into foo values (4, 'aaa');\n> INSERT 7998067 1\n> \n> This line is generated by libpq's PQcmdStatus(). You can also just get the\n> oid part by using PQoidStatus(). Is that what you wanted or do you need a\n> wrapper or binding for a certain environment?\n> \n> \t-Peter\n\nthis assumes that one is using libpq.. it would be nice to have access to this\nfrom psql or anywhere for that matter.. and not just oids.. but, say for\ninstance, default values in tables that are generated dynamically... etc\n\n-- \nAaron J. Seigo\nSys Admin\n",
"msg_date": "Tue, 2 Nov 1999 12:39:54 -0700",
"msg_from": "\"Aaron J. Seigo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Get OID of just inserted record"
},
{
"msg_contents": "On Nov 2, Aaron J. Seigo mentioned:\n\n> > => insert into foo values (4, 'aaa');\n> > INSERT 7998067 1\n> > \n> > This line is generated by libpq's PQcmdStatus(). You can also just get the\n> > oid part by using PQoidStatus(). Is that what you wanted or do you need a\n> > wrapper or binding for a certain environment?\n> > \n> > \t-Peter\n> \n> this assumes that one is using libpq.. it would be nice to have access to this\n> from psql or anywhere for that matter.. and not just oids.. but, say for\n\nYou can access it right there :) How exactly do you wish to access it in\npsql though? (I'm writing around in psql at the moment, so I might\nactually implement it!)\n\n> instance, default values in tables that are generated dynamically... etc\n\nWell, now you're saying \"I want all this complex data from the database\nbut I don't want to use SELECT\". That doesn't make much sense. The point of\ndefaults is that you don't need to worry about them. If you need to read\nback a record right after you insert it, perhaps you should rethink your\napplication. Admittedly, I know of several interfaces that make this sort\nof thing a royal pain, but you can't get everything for free.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 2 Nov 1999 22:29:00 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Get OID of just inserted record"
},
{
"msg_contents": "Aaron J. Seigo wrote:\n\n> > This line is generated by libpq's PQcmdStatus(). You can also just get the\n> > oid part by using PQoidStatus(). Is that what you wanted or do you need a\n> > wrapper or binding for a certain environment?\n> >\n> > -Peter\n>\n> this assumes that one is using libpq.. it would be nice to have access to this\n> from psql or anywhere for that matter.. and not just oids.. but, say for\n> instance, default values in tables that are generated dynamically... etc\n\n Where should I place the information about the final queries\n the rule system changed the original one into? During\n rewrite, one INSERT could be rewritten into several\n different, conditional INSERT, UPDATE and DELETE statements.\n I think this would be of interest for you too!\n\n I'm not serious right now (as the ppl knowing me should have\n seen already between the lines). I can see the point of\n getting the last inserted OID, but I absolutely don't see it\n on something like generated default values or the like. This\n would finally mean, that an INSERT returns a result set of\n the values it inserted. And the same then must happen (to be\n consistent) for UPDATE and DELETE statements, where the\n UPDATE returns pairs of OLD/NEW rows and DELETE reports which\n rows got deleted. All this data has to be sent to the client\n (to be thrown away usually).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 3 Nov 1999 02:10:28 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Get OID of just inserted record"
},
{
"msg_contents": "Thus spake Aaron J. Seigo\n> > => insert into foo values (4, 'aaa');\n> > INSERT 7998067 1\n> \n> this assumes that one is using libpq.. it would be nice to have access to this\n> from psql or anywhere for that matter.. and not just oids.. but, say for\n> instance, default values in tables that are generated dynamically... etc\n\nJust to see if I understand you, is this what you want to be able to do?\n\nUPDATE t1 SET other_oid =\n (INSERT INTO t2 VALUES (1, 'aaa') RETURN OID)\n WHERE something = 'something';\n\nor\n\nSELECT (INSERT INTO t2 (f1, f2) VALUES (1, 'aaa') RETURN f3);\n\nIn other words, sub-inserts. It is kind of a neat idea. I don't know\nthat it is worth spending much time on but it would be a neat feature\nthat no one else has.\n\nJust wondering, how would you handle insert-only tables? That is, you\nhave insert privileges but not select. Would you still return the field\nor fields requested (surprising the database designer), accept the insert\nbut return an error, or refuse the insert entirely since the task could\nnot be completed?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 2 Nov 1999 22:04:06 -0500 (EST)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Get OID of just inserted record"
},
{
"msg_contents": "hi...\n\n> Just to see if I understand you, is this what you want to be able to do?\n> \n> UPDATE t1 SET other_oid =\n> (INSERT INTO t2 VALUES (1, 'aaa') RETURN OID)\n> WHERE someting = 'something';\n> \n> or\n> \n> SELECT (INSERT INTO t2 (f1, f2) VALUES (1, 'aaa') RETURN f3);\n> \n> In other words, sub-inserts. It is kind of a neat idea. I don't know\n> that it is worth spending much time on but it would be a neat feature\n> that no one else has.\n\nboth actually, though the former has much greater interest than the latter in\nterms of implications... the second example allows a streamlining of tasks (not\nhaving to do a select on a newly inserted piece of data, therefore cutting down\non statements and perhaps even back-end processing)... the first example\nthough, would only be possible otherwise with several lines of code, and if it\nis viewed as an implicit mini-transaction (see below) then it would add\nsome rather new functionality.\n\n the reason this sparked in my head originaly was that the fellow who posted\nthe first question was wondering about output (select) from an input function\n(insert) .. i started thinking about it ... having statements able to operate\nboth ways (input/output) simultaneously would be quite nice, especially from a\npower user's point of view... intuitively it makes sense (to me anyways =) and\nwould allow more complex tasks to be handled with less\ncode/statements/processing\n\n> Just wondering, how would you handle insert only tables? That is, you\n> have insert privleges but not select. Would you still return the field\n> or fields requested surprising the database designer, accept the insert\n> but return an error or refuse the insert entirely since the task could\n> not be completed?\n\ni think the task should be refused in entirety so as not to cause unexpected\nresults. 
performing insert/select tasks would require more permissions to\nthe system in general than someone just wanting to do an insert, but that is not\nunusual in any way and should be expected... \n\nfurther, if any part of the query broke, the entire thing should fail... it\nshould act as an implicit mini-transaction, consisting of exactly one\nstatement... so that if a piece of it failed, any and all remaining parts (outer\n'loops') of the query are not processed and any previous parts (inner 'loops')\nare rolled-back.. and of course an error would come spittering forth. the\nimplications this holds towards data integrity and conglomerating/atomizing\nchanges to the dataset are obvious.\n\nas you mentioned, i haven't seen this anywhere else, ... how\nmuch use would it get? well. i know i'd use it if it were available.. i use\ntriggers/rules, procedures and external code to do what i need now.. so\n\"sub-inserts\" (as you aptly called them) wouldn't really push the bounds of\nwhat is possible, but i think they would push the bounds of what is easily and\ndependably possible. \n\nmy 0.02 (and that's canadian.. so..)\n\n-- \nAaron J. Seigo\nSys Admin\n",
"msg_date": "Tue, 2 Nov 1999 21:47:04 -0700",
"msg_from": "\"Aaron J. Seigo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Get OID of just inserted record"
},
{
"msg_contents": "Thus spake Aaron J. Seigo\n> > Just wondering, how would you handle insert only tables? That is, you\n> > have insert privleges but not select. Would you still return the field\n> > or fields requested surprising the database designer, accept the insert\n> > but return an error or refuse the insert entirely since the task could\n> > not be completed?\n> \n> i think the task should be refused in entirety so as not to cause unexpected\n> results. performing insert/select tasks would require more permissions to\n> the system in general than someone just wanting to do an insert, but that is not\n> unusual in any way and should be expected... \n\nExactly. The reason I ask my question is that in PyGreSQL I already fake\nthis behaviour by doing a select * immediately after an insert and if it\nsucceeds I load the caller's dictionary with the data so that they have\nthe oid and any triggered or defaulted fields. This function would be\nuseful for me except that I have to be able to deal with tables with\ninsert only access and still let the insert go through. My problem is\nthat it is a generic function so I can't hard code the decision and need\nto have some way to check each time.\n\n> as you mentioned, i haven't seen this anywhere else, ... how\n> much use would it get? well. i know i'd use it if it were available.. i use\n> triggers/rules, procedures and external code to do what i need now.. so\n> \"sub-inserts\" (as you aptly called them) wouldn't really push the bounds of\n> what is possible, but i think they would push the bounds of what is easily and\n> dependly possible. \n\nI hope we also allow the following if we do it.\n\nINSERT INTO foo VALUES (1, 'aaa') RETURN f1, f2;\n\nor\n\nINSERT INTO foo VALUES (1, 'aaa') RETURN *;\n\n> my 0.02 (and that's canadian.. so..)\n\nDollarettes?\nDollar Lite?\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 3 Nov 1999 07:10:23 -0500 (EST)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Get OID of just inserted record"
},
{
"msg_contents": "hi...\n\n>Well, autocommit would only matter if it was decided that it wasn't an atomic\n>transaction. If, as seems both sensible and consensed (look, I made up another\n>word :-) the transaction should be atomic, then the state of autocommit\n>shouldn't matter.\n\nexactly... i would be most comfortable with it if it were an implied\ntransaction.\n\n> The reason I ask my question is that in PyGreSQL I already fake\n> this behaviour by doing a select * immediately after an insert and if it\n> succeeds I load the caller's dictionary with the data so that they have\n> the oid and any triggered or defaulted fields. This function would be\n\nso i'm not the only one doing this! nice to know =)\n\n> useful for me except that I have to be able to deal with tables with\n> insert only access and still let the insert go through. My problem is\n> that it is a generic function so I can't hard code the decision and need\n> to have some way to check each time.\n\n>feature that I could have used in a database I have. Instead I had to\n>give SELECT perms to a user on a table that I would have preferred to\n>otherwise keep hidden.\n\nthis is an issue that doesn't really come up until you put a database with\nsensitive information on a (semi-)public network... subinserts and RETURNs\nwould allay many security concerns i deal with on a daily basis at our\ninstallation... \n\ni like the idea of another permission, such as ISELECT to allow this\nbehaviour...\n\n> I hope we also allow the following if we do it.\n> \n> INSERT INTO foo VALUES (1, 'aaa') RETURN f1, f2;\n> \n> or\n> \n> INSERT INTO foo VALUES (1, 'aaa') RETURN *;\n\ndoes anybody know if there would be a processing time improvement with this\nscheme? isn't the tuple (re)written during an INSERT or UPDATE, implying that\nit is, at least temporarily, in memory? 
this seems to say to me that allowing an\nimmediate RETURN of data on an INSERT/UPDATE would be faster and easier on the\nback end than an INSERT/UPDATE followed by a SELECT... can anyone with a deeper\nunderstanding of the guts of pgsql verify/deny this?\n \n> > my 0.02 (and that's canadian.. so..)\n> \n> Dollarettes?\n> Dollar Lite?\n\nless filling! buys less!\n\n-- \nAaron J. Seigo\nSys Admin\n",
"msg_date": "Wed, 3 Nov 1999 12:21:00 -0700",
"msg_from": "\"Aaron J. Seigo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Get OID of just inserted record"
}
] |
[
{
"msg_contents": "I don't remember who is the author of the patch for 6.5.2 which\nallows using indices in ORDER BY ... DESC.\nThe patch was posted to the hackers mailing list and I'm using it\nwithout any problem. Today, after cvs'ing 6_5 I had a problem\nwith this patch. If this patch will not go into 6.5.3,\nis it possible to create a new one? Also, is it already\napplied to current?\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 1 Nov 1999 22:42:16 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "using indices in ORDER BY ... DESC"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Oleg Bartunov\n> Sent: Tuesday, November 02, 1999 4:42 AM\n> To: [email protected]\n> Subject: [HACKERS] using indices in ORDER BY ... DESC\n> \n> \n> I don't remember who is the author of patch for 6.5.2 which\n> allows to use indices in ORDER BY ... DESC\n\nIt's me.\nHowever,it was not for 6.5.2 because it isn't a bug fix.\n\n> The patch was posted to hackers mailing list and I'm using it\n> without any problem. Today, after cvs'ing 6_5 I had a problem\n> with this patch. If this patch will not go to 6.5.3\n> is't possible to create new one ? Also, does it already\n\nHmm,I don't have REL_.. cvs branch on my machine.\nBut I may be able to help you. \nWhat kind of problem do you have ?\n\n> applied to current ?\n>\n\nYes,it's already applied to current.\n\nRegards.\n \nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 2 Nov 1999 09:36:35 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] using indices in ORDER BY ... DESC"
}
] |
[
{
"msg_contents": "The installed name of perl is needed in interfaces/Makefile. This\nseems to have been changed from perl to perl5 sometime between 6.5 and\n6.5.2. I missed discussion of this, I guess.\n\nWhat was the rationale for making this change?\n\nCheers,\nBrook\n",
"msg_date": "Mon, 1 Nov 1999 13:16:08 -0700 (MST)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "change in name of perl?"
},
{
"msg_contents": "> The installed name of perl in needed in interfaces/Makefile. This\n> seems to have been changed from perl to perl5 sometime between 6.5 and\n> 6.5.2. I missed discussion of this, I guess.\n\nYou mean the directory name was changed? I don't see any perl-mentioned\nchanges in these releases.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Nov 1999 18:04:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] change in name of perl?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > The installed name of perl in needed in interfaces/Makefile. This\n> > seems to have been changed from perl to perl5 sometime between 6.5 and\n> > 6.5.2. I missed discussion of this, I guess.\n> \n> You mean the directory name was changed? I don't see any perl-mentioned\n> changes in these releases.\n\nsrc/interfaces/Makefile:\n-----------\nperl5/Makefile: perl5/Makefile.PL\n\tcd perl5 && perl5 Makefile.PL\n\ninstall-perl5: perl5/Makefile\n\t$(MAKE) -C perl5 clean\n\tcd perl5 && POSTGRES_HOME=\"$(POSTGRESDIR)\" perl5 Makefile.PL\n-----------\nShould be:\n-----------\nperl5/Makefile: perl5/Makefile.PL\n\tcd perl5 && perl Makefile.PL\n\ninstall-perl5: perl5/Makefile\n\t$(MAKE) -C perl5 clean\n\tcd perl5 && POSTGRES_HOME=\"$(POSTGRESDIR)\" perl Makefile.PL\n-----------\nWhich is the way it read in 6.5.1, IIANM. I know that I didn't have to\npatch this in 6.5.1.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 01 Nov 1999 20:15:53 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] change in name of perl?"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > The installed name of perl in needed in interfaces/Makefile. This\n> > > seems to have been changed from perl to perl5 sometime between 6.5 and\n> > > 6.5.2. I missed discussion of this, I guess.\n> > \n> > You mean the directory name was changed? I don't see any perl-mentioned\n> > changes in these releases.\n> \n> src/interfaces/Makefile:\n> -----------\n> perl5/Makefile: perl5/Makefile.PL\n> \tcd perl5 && perl5 Makefile.PL\n> \n> install-perl5: perl5/Makefile\n> \t$(MAKE) -C perl5 clean\n> \tcd perl5 && POSTGRES_HOME=\"$(POSTGRESDIR)\" perl5 Makefile.PL\n> -----------\n> Should be:\n> -----------\n> perl5/Makefile: perl5/Makefile.PL\n> \tcd perl5 && perl Makefile.PL\n> \n> install-perl5: perl5/Makefile\n> \t$(MAKE) -C perl5 clean\n> \tcd perl5 && POSTGRES_HOME=\"$(POSTGRESDIR)\" perl Makefile.PL\n> -----------\n> Which is the way it read in 6.5.1, IIANM. I know that I didn't have to\n> patch this in 6.5.1.\n\nThanks again.\n\nI had applied someone's patch so perl5 would be used, but that broke\nmany things. I fixed the current tree to use $(PERL) as defined in\nMakefile.global. I just patched stable the same way. Please test. \nThanks.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Nov 1999 21:35:48 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] change in name of perl?"
},
{
"msg_contents": "On Mon, 01 Nov 1999, Bruce Momjian wrote:\n\n> I had applied someone's patch so perl5 would be used, but that broke\n> many things. I fixed the current tree to use $(PERL) as defined in\n> Makefile.global. I just patched stable the same way. Please test. \n> Thanks.\n\nNo joy. the $(PERL) variable is apparently not being set -- the make goes out\nat the line:\ncd perl5 && Makefile.PL\n\n(which should have that $(PERL) in there.....)\n\nIt is beginning to storm here, so I think I'd better shut down and unplug. \nWill have at it at work tomorrow.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 1 Nov 1999 23:30:35 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] change in name of perl?"
},
{
"msg_contents": "> On Mon, 01 Nov 1999, Bruce Momjian wrote:\n> \n> > I had applied someone's patch so perl5 would be used, but that broke\n> > many things. I fixed the current tree to use $(PERL) as defined in\n> > Makefile.global. I just patched stable the same way. Please test. \n> > Thanks.\n> \n> No joy. the $(PERL) variable is apparently not being set -- the make goes out\n> at the line:\n> cd perl5 && Makefile.PL\n> \n> (which should have that $(PERL) in there.....)\n> \n> It is beginning to storm here, so I think I'd better shut down and unplug. \n> Will have at it at work tomorrow.\n\nOK, fix applied. Seems like that fix wasn't in stable tree either.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Nov 1999 23:44:55 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] change in name of perl?"
},
{
"msg_contents": " OK, fix applied. Seems like that fix wasn't in stable tree either.\n\nDoes this mean that 6.5.3-to-be and the current branch are both fixed?\n\nCheers,\nBrook\n",
"msg_date": "Tue, 2 Nov 1999 08:11:20 -0700 (MST)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] change in name of perl?"
},
{
"msg_contents": "> OK, fix applied. Seems like that fix wasn't in stable tree either.\n> \n> Does this mean that 6.5.3-to-be and the current branch are both fixed?\n\nYes, the current branch had already been fixed. I had not fixed the\n6.5.* branch because I thought the breakage was only in current.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 2 Nov 1999 10:13:45 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] change in name of perl?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > OK, fix applied. Seems like that fix wasn't in stable tree either.\n> >\n> > Does this mean that 6.5.3-to-be and the current branch are both fixed?\n> \n> Yes, the current branch had already been fixed. I had not fixed the\n> 6.5.* branch because I thought the breakage was only in current.\n\nThat was my error, as I should have specified that I am currently only\nbuilding REL6_5_PATCHES in preparation for the release of 6.5.3, so that\nI can have testing rpms out close to concurrently with the tarball\nrelease.\n\nBuilding REL6_5_PATCHES now.... and the make has successfully built the\nperl client. If something else goes wrong either before the binary\nRPM's are written or after they are installed and tested, I'll let you\nknow. But, it looks like 6.5.3 is ready.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 02 Nov 1999 11:43:57 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] change in name of perl?"
}
] |
[
{
"msg_contents": "\nHow can i add a user (root for example) in pg_shadow?\n\nAtte.\n\n=======================================\nCarlos A. Vicente Altamirano\nCentro de Asistencia Tecnica de RedUNAM\nDepto. de Operacion de la Red\nDGSCA-UNAM\nTel. 6228526-28\n=======================================\n\n\n",
"msg_date": "Mon, 1 Nov 1999 15:41:58 -0600 (CST)",
"msg_from": "Carlos Vicente Altamirano <[email protected]>",
"msg_from_op": true,
"msg_subject": "users in Postgresql"
},
{
"msg_contents": "Use the command createuser in your pgsql/bin directory. Not sure how to \ndo it in SQL, maybe alter user or create user?\n\nAt 05:41 PM 11/1/99, Carlos Vicente Altamirano wrote:\n\n>How can i add a user (root for example) in pg_shadow?\n>\n>Atte.\n>\n>=======================================\n>Carlos A. Vicente Altamirano\n>Centro de Asistencia Tecnica de RedUNAM\n>Depto. de Operacion de la Red\n>DGSCA-UNAM\n>Tel. 6228526-28\n>=======================================\n>\n>\n>\n>************\n\n",
"msg_date": "Mon, 01 Nov 1999 23:33:04 -0400",
"msg_from": "Charles Tassell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] users in Postgresql"
},
{
"msg_contents": "The \"SQL\" command (it isn't really part of the standard) is CREATE USER.\nThe bin/createuser is a script that calls psql and issues a create user\ncommand. (Another confused user on the ever-growing list. How can we make\nthis clearer?)\n\nBtw., although direct UPDATEs to pg_shadow will seemingly succeed, you do\nnot want to do that. That's a bug.\n\n\t-Peter\n\nOn Mon, 1 Nov 1999, Charles Tassell wrote:\n\n> Use the command createuser in your pgsql/bin directory. Not sure how to \n> do it in SQL, maybe alter user or create user?\n> \n> At 05:41 PM 11/1/99, Carlos Vicente Altamirano wrote:\n> \n> >How can i add a user (root for example) in pg_shadow?\n> >\n> >Atte.\n> >\n> >=======================================\n> >Carlos A. Vicente Altamirano\n> >Centro de Asistencia Tecnica de RedUNAM\n> >Depto. de Operacion de la Red\n> >DGSCA-UNAM\n> >Tel. 6228526-28\n> >=======================================\n> >\n> >\n> >\n> >************\n> \n> \n> ************\n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 2 Nov 1999 10:20:33 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] users in Postgresql"
},
{
"msg_contents": "> Btw., although direct UPDATEs to pg_shadow will seemingly succeed, you do\n> not want to do that. That's a bug.\n\nPeter, would you explain your statement please!\n\nWhy is somebody able to UPDATE pg_shadow to create a user if that's a bug?\nAnd _why_ is that a bug?\n\nGerald\n\n",
"msg_date": "Tue, 02 Nov 1999 17:34:34 +0100 (CET)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] users in Postgresql"
},
{
"msg_contents": "May Tom (or anyone) correct me if I'm wrong, but I think this is what's\ngoing on:\n\nThe contents of the pg_shadow table are written through to a file on disk\ncalled pg_pwd, so all the backends can easily access it. However, this\nwrite through is not automatic. The create user and alter user commands\ntake care of that, but if you update pg_shadow directly, your changes will\nnot be seen by currently active backends.\n\n\t-Peter\n\nOn Tue, 2 Nov 1999 [email protected] wrote:\n\n> > Btw., although direct UPDATEs to pg_shadow will seemingly succeed, you do\n> > not want to do that. That's a bug.\n> \n> Peter, would you explain your statement please!\n> \n> Why somebody is able to UPDATE pg_shadow to create an user if that's a bug?\n> And _why_ that's a bug?\n> \n> Gerald\n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 2 Nov 1999 18:34:17 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] users in Postgresql"
},
{
"msg_contents": "IMHO because this is a _current_ implementation issue,\nyou cannot be sure that some/all system catalogs will not change\nin the future - making your program _very_ unhappy :)\n\n\[email protected] wrote:\n> \n> > Btw., although direct UPDATEs to pg_shadow will seemingly succeed, you do\n> > not want to do that. That's a bug.\n> \n> Peter, would you explain your statement please!\n> \n> Why somebody is able to UPDATE pg_shadow to create an user if that's a bug?\n> And _why_ that's a bug?\n> \n> Gerald\n> \n> ************\n\n-- \nMarcin Grondecki\[email protected]\n\n*** I'm not a complete idiot - some parts are missing\n",
"msg_date": "Tue, 02 Nov 1999 18:49:35 +0100",
"msg_from": "Marcin Grondecki <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] users in Postgresql"
},
{
"msg_contents": "> May Tom (or anyone) correct me if I'm wrong, but I think this is what's\n> going on:\n> \n> The contents of the pg_shadow table are written through to a file on disk\n> called pg_pwd, so all the backends can easily access it. However, this\n> write through is not automatic. The create user and alter user commands\n> take care of that, but if you update pg_shadow directly, your changes will\n> not be seen by currently active backends.\n\nYour changes never get to the file, ever, not just current backends.\n\nThe CREATE USER sql command updates the file, but an UPDATE on pg_shadow\ndoes not.\n\nWe use a file because the postmaster does the password authentication,\nand we don't have any database connection in the postmaster.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 2 Nov 1999 13:21:35 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] users in Postgresql"
},
{
"msg_contents": ">> The contents of the pg_shadow table are written through to a file on disk\n>> called pg_pwd, so all the backends can easily access it. However, this\n>> write through is not automatic. The create user and alter user commands\n>> take care of that, but if you update pg_shadow directly, your changes will\n>> not be seen by currently active backends.\n> \n> Your changes never get to the file, ever, not just current backends.\n> \n> CREATE USER sql command updates the file, but an UPDATE on pg_shadow\n> does not.\n\nIMHO, that's a bug:\nIt's not forbidden to update or insert into pg_shadow by rule, but if\nI do that I will get inconsistent authentication data.\nWhy not revoke INSERT and UPDATE on pg_shadow?\nOr better:\nWhy not use a trigger on pg_shadow, to handle pg_pwd correctly?\nThe trigger code is already in the \"create/alter user\" command handler.\n\nThe code should be as near as possible to the data!\n\n> We use a file because the postmaster does the password authentication,\n> and we don't have any database connection the postmaster.\n\npg_shadow is a file too, but not in text format like pg_pwd.\n\nGerald.\n",
"msg_date": "Thu, 04 Nov 1999 09:09:29 +0100 (CET)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] users in Postgresql"
},
{
"msg_contents": "On Thu, 4 Nov 1999 [email protected] wrote:\n\n> > CREATE USER sql command updates the file, but an UPDATE on pg_shadow\n> > does not.\n> \n> IMHO, that's a bug:\n> It's not forbidden to update or insert into pg_shadow by rule, but if\n> I do that I will get inconsistent authentication data.\n> Why not revoke INSERT and UPDATE on pg_shadow?\n\nThat way the postgres superuser (the one that would ideally be\nadding/removing users) can still access it. Access control doesn't apply\nto table owners. And I'm not even sure if the CREATE USER command doesn't\ndepend on the insert privilege existing (vs the create user privilege of\nthe one that's executing it). It's not all that clear.\n\n> Or better:\n> Why not use a trigger on pg_shadow, to handle pg_pwd correctly?\n> The trigger code is allways in \"create/alter user\" command handler.\n\nI was thinking about some sort of internal hook that sees any access to\npg_shadow and redirects it to a file. Don't even have the table anymore.\nSort of like /dev/* devices are handled by the kernel.\n\nI was going about looking into this a little, but since I have never\nplayed with the backend I cannot promise a result in finite time.\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 4 Nov 1999 10:26:24 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] users in Postgresql"
},
{
"msg_contents": "On 1999-11-02, Bruce Momjian mentioned:\n\n> CREATE USER sql command updates the file, but an UPDATE on pg_shadow\n> does not.\n\nHow about INSERT INTO pg_shadow? Or how do you judge the following excerpt\nfrom the createuser script:\n\nQUERY=\"insert into pg_shadow \\\n (usename, usesysid, usecreatedb, usetrace, usesuper, usecatupd) \\\n values \\\n ('$NEWUSER', $SYSID, '$CANCREATE', 'f', '$CANADDUSER','f')\"\n\nFortunately (perhaps), I am getting rid of this as we're speaking. The one\nfeature the createuser script has over the CREATE USER \"SQL\" command is\nthat you can pick your system ID. Ignoring the question whether or not\nthis has any real purpose, it seems this is almost like rolling dice since\nyou cannot ever reliably change that later. (And I'm not even talking\nabout the fact that the sysid is a primary key and there is no referential\nintegrity enforced.)\n\nSo is anyone strictly opposed to yanking that feature? And perhaps\nremoving all references to user sysids in the (user) documentation?\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sun, 7 Nov 1999 17:44:10 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] users in Postgresql"
}
] |
[
{
"msg_contents": "\"nicks.emails\" <[email protected]> writes:\n> select l.s # p.ss from LSEG_TBL l, LSEG_TBL1 p;\n\n> when I try to execute this intersection on the tables that are in the\n> regression queries the following message is printed to the screen,\n\nI'm confused --- I don't see any table named LSEG_TBL1 in the standard\nregression tests. Are you sure you didn't create that one yourself?\n\nI tried\n\tselect l.s # p.s from LSEG_TBL l, LSEG_TBL p;\nusing the table that is there, and it gave me answers (I have no idea\nif they're right though ;-)).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Nov 1999 18:17:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Backend terminated abnormally "
},
{
"msg_contents": "> \"nicks.emails\" wrote:\n> \n> Hello,\n> \n> I not sure if I have got the right person for this but if you could\n> help me it would be extremely helpful;\n> \n> The problem is with the following query,\n> \n> select l.s # p.ss from LSEG_TBL l, LSEG_TBL1 p;\n\nDuplicated with PostgreSQL 6.5.2 on RedHat 6.0, egcs 1.1.2.\n\nWill get more details as I have them.\n\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 01 Nov 1999 20:07:20 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend terminated abnormally"
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n>> The problem is with the following query,\n>> select l.s # p.ss from LSEG_TBL l, LSEG_TBL1 p;\n\n> Duplicated with PostgreSQL 6.5.2 on RedHat 6.0, egcs 1.1.2.\n\nHmm, does everyone but me have a table named LSEG_TBL1 in the\nregress tests? I've grepped both current and REL6_5 and I'll\nbe durned if I can find any use of that name. I'd look into it\nif I had a reproducible test case...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Nov 1999 22:18:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Backend terminated abnormally "
},
{
"msg_contents": "On Mon, 01 Nov 1999, Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> >> The problem is with the following query,\n> >> select l.s # p.ss from LSEG_TBL l, LSEG_TBL1 p;\n> \n> > Duplicated with PostgreSQL 6.5.2 on RedHat 6.0, egcs 1.1.2.\n> \n> Hmm, does everyone but me have a table named LSEG_TBL1 in the\n> regress tests? I've grepped both current and REL6_5 and I'll\n> be durned if I can find any use of that name. I'd look into it\n> if I had a reproducible test case...\n> \n> \t\t\tregards, tom lane\n\nOh, sorry, Tom. SELECT * INTO TABLE LSEG_TBL1 FROM LSEG_TBL; first.\n\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 1 Nov 1999 22:46:20 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend terminated abnormally"
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> On Mon, 01 Nov 1999, Tom Lane wrote:\n>> Hmm, does everyone but me have a table named LSEG_TBL1 in the\n>> regress tests? I've grepped both current and REL6_5 and I'll\n>> be durned if I can find any use of that name. I'd look into it\n>> if I had a reproducible test case...\n\n> Oh, sorry, Tom. SELECT * INTO TABLE LSEG_TBL1 FROM LSEG_TBL; first.\n\nActually, the missing clue seems to be that it's cool on HPUX and\ncoredumps on Linux. Bigendian vs. littleendian bug maybe? I'm on\nit...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Nov 1999 23:04:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Backend terminated abnormally "
},
{
"msg_contents": "I wrote:\n> Actually, the missing clue seems to be that it's cool on HPUX and\n> coredumps on Linux. Bigendian vs. littleendian bug maybe? I'm on\n> it...\n\nWell, isn't *this* special: it seems that memmove(dest, NULL, n)\ndoesn't cause a coredump on HPUX, it just silently does nothing.\nSheesh. I hardly ever use memmove, or I would've found this out\nbefore (and complained about it!).\n\n\nAnyway, the answer to the original complaint is that geo_ops.c\nis brimful of operators that think they can return a NULL pointer\nand it'll be interpreted as returning an SQL NULL. They are\nsadly misinformed. In the present state of fmgr() there isn't\nany way for a binary operator to return NULL when its operands\nare not null. Another reason we gotta redo the fmgr interface.\n\nNick, I'm afraid '#' is pretty seriously broken: it'll coredump\nwhenever presented non-intersecting segments, unless you are able\nto recompile the system so that dereferencing a NULL pointer is\nnot a fatal error. Several of the other geometric operators have\nsimilar problems. AFAICS there is not much that can be done to\npatch around this; a proper fix will require some major changes\nthat we are planning for release 7.0. Sorry the news isn't better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Nov 1999 00:19:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Backend terminated abnormally "
},
{
"msg_contents": "Hello,\n\nI not sure if I have got the right person for this but if you could help me it would be extremely helpful;\n\nThe problem is with the following query,\n\nselect l.s # p.ss from LSEG_TBL l, LSEG_TBL1 p;\n\nwhen I try to execute this intersection on the tables that are in the regression queries the following message is printed to the screen,\n\nI created two tables with the exact lseg coordinates in both to see what happened and also with disimilar lseg points so that there was lines intersecting,\nand recieved the same message\n\npgReadData() --backend closed the channel unexpectedly.\n\nThis problem means the backend terminated abnormally \nbefore or while processing the request\n\nWe have lost the connection to the backend, so further processing is impossible.\nTerminating.\n \nPlease could give me some pointers as to where I could look or if necesarry download any patches to fix it.\n\nI am running Redhat 5.2 on an i486-pc-linux-gnu\nversion of postgres is 6.5.2\nversion of gcc is 2.7.2.3\n\nThank you in advance.\n\nNick O'Malley [email protected] \n\n\n\n\n\n\n\n\n\nHello,\n \nI not sure if I have got the right person for this \nbut if you could help me it would be extremely helpful;\n \nThe problem is with the following \nquery,\n \nselect l.s # p.ss from LSEG_TBL l, LSEG_TBL1 \np;\n \nwhen I try to execute this intersection on the \ntables that are in the regression queries the following message is printed \nto the screen,\n \nI created two tables with the exact lseg coordinates in both to see what \nhappened and also with disimilar lseg points so that there was lines \nintersecting,\nand recieved the same message\n \npgReadData() --backend closed the channel \nunexpectedly.\n \nThis problem means the backend terminated \nabnormally \nbefore or while processing the \nrequest\n \nWe have lost the connection to the backend, \nso further processing is impossible.\nTerminating.\n \nPlease could give me some pointers as to where I \ncould 
look or if necesarry download any patches to fix it.\n \nI am running Redhat 5.2 on an \ni486-pc-linux-gnu\nversion of postgres is 6.5.2\nversion of gcc is 2.7.2.3\n \nThank you in advance.\n \nNick O'Malley [email protected]",
"msg_date": "Mon, 1 Nov 1999 22:18:34 -0800",
"msg_from": "\"nicks.emails\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Backend terminated abnormally"
},
{
"msg_contents": "On Tue, 2 Nov 1999, Tom Lane wrote:\n\n> I wrote:\n> > Actually, the missing clue seems to be that it's cool on HPUX and\n> > coredumps on Linux. Bigendian vs. littleendian bug maybe? I'm on\n> > it...\n> \n> Well, isn't *this* special: it seems that memmove(dest, NULL, n)\n> doesn't cause a coredump on HPUX, it just silently does nothing.\n> Sheesh. I hardly ever use memmove, or I would've found this out\n> before (and complained about it!).\n\nHP is notorious for this. Pass a null pointer to its atoi() and it'll\nreturn zero. Same thing on IRIX64. Do this on a *BSD or Linux machine\nand it'll segfault.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 2 Nov 1999 06:24:26 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend terminated abnormally "
}
] |
[
{
"msg_contents": "Hi\n\nThanks to tom and vadim for the ideas so far. Here is my responses to\nthese:\n\nTom wrote:\n> Hmm. That error is coming out of the btree index code. Vadim knows\n> that code better than anyone else, so he might have something to say\n> here, but my past-midnight recollection is that we've seen that error\n> being triggered when there are oversize entries in the index (where\n> \"oversize\" = \"more than half a disk page\"). It's a bug, for sure,\n> but what you probably want right now is a workaround. Do you have any\n> entries in indexed columns that are over 4K, and can you get rid of them?\n\nOkee, Ive looked at all of the text lengths (I didnt bother with the ints)\nand got the following\n\nbestadssearch=> select length(url)+length(hostname)+length(title)+length(brief)+length(lowerurl) as length from search_url where title notnull and brief notnull order by length desc;\n\nlength\n------\n 841\n 827\n 826\n 825\n...\n\nSo my maximum length is probably under 1K\n\n> Huh? PQgetResult does not call pgresStatus ... not least because the\n> latter is an array, not a function. Your gdb is lying to you. Maybe\n> you have a problem with gdb looking at a different version of the\n> library than what's actually executing?\n\nNope, I just checked, I only have one version of the libraries.\n\nVadim Wrote:\n\n> This FATAL means that index is broken (some prev insertion\n> was interrupted by elog(ERROR) or backend crash) - try to rebuild...\n> WAL should fix this bug.\n>\n> Vadim\n\nThats curious cos look at this explain...\n\nbestadssearch=> explain update search_url set stale=941424005 where lowerurl='http://criswell.bizland.com';\nNOTICE: QUERY PLAN:\n\nSeq Scan on search_url (cost=1546.06 rows=2 width=122)\n\nEXPLAIN\n\nThat does a seq scan not an index scan.\n\nThis came to light when I realised my initialisation script for the database\nadded a bogus index (oops). 
However, that is probably not a good sign if\nthe seq scan fails for this reason.\n\nThe only index associated with this table is the one created with a serial\ntype.\n\nI am going to rebuild the table (dump and reload) but before I do,\ndoes anyone want me to try anything on it to see if you can get any\nmore information you may need for debugging? If so, let me know asap\n{:-)\n\n\t\t\t\t\t\t~Michael\n",
"msg_date": "Tue, 2 Nov 1999 01:20:34 +0000 (GMT)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Backend crashes (6.5.2 linux) "
},
{
"msg_contents": "Michael Simms <[email protected]> writes:\n> Vadim Wrote:\n>> This FATAL means that index is broken (some prev insertion\n>> was interrupted by elog(ERROR) or backend crash) - try to rebuild...\n\n> Thats curious cos look at this explain...\n\n> bestadssearch=> explain update search_url set stale=941424005 where lowerurl='http://criswell.bizland.com';\n> NOTICE: QUERY PLAN:\n\n> Seq Scan on search_url (cost=1546.06 rows=2 width=122)\n\n> That does a seq scan not an index scan.\n\nNo, the elog message is coming out of btree index *update*, not\nbtree scan. Since you're doing an update, new index entries have\nto be made for the tuple being updated, regardless of what kind\nof scan was used to find it. And the message comes out because\nthe attempt to insert an index entry is finding that the index is\nalready corrupt.\n\nVadim's advice is probably the best: drop and recreate the index(es)\non that table. You shouldn't need to dump the table itself, unless\nthere's more going on than is apparent from the info so far.\n\n> I am going to rebuild the table (dump and reload) but before I do,\n> does anyone want me to try anything on it to see if you can get any\n> more information you may need for debugging? If so, let me know asap\n\nIf you can reproduce the sequence of events that led to the index\nbecoming corrupted, that'd be *really* useful debugging info. But\nit's probably too late to reconstruct anything very helpful from\nthe current state of the index.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Nov 1999 22:06:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend crashes (6.5.2 linux) "
}
] |
[
{
"msg_contents": "--- Tom Lane <[email protected]> wrote:\n> Tatsuo Ishii <[email protected]> writes:\n> > RedHat Linux 6.0 (kernel 2.2.5-smp)\n> > Pentium III 500MHz x 2\n> > RAM: 512MB\n> > Disk: Ultra Wide SCSI 9GB x 4 + Hardware RAID (RAID 5).\n> \n> OK, no problem with inadequate hardware anyway ;-). Bruce's concern\n> about simplistic read-ahead algorithm in Linux may apply though.\n> \n> > Also, I could provide testing scripts to reproduce my tests.\n> \n> Please. That would be very handy so that we can make sure we are all\n> comparing the same thing. I assume the scripts can be tweaked to vary\n> the amount of disk space used? I can't scare up more than a couple\n> hundred meg at the moment. (The natural state of a disk drive is\n> \"full\" ...)\n> \n> > I think it depends on the disk space available. Ideally it should be\n> > able to choice the sort algorithm.\n> \n> I was hoping to avoid that, because of the extra difficulty of testing\n> and maintenance. But it may be the only answer.\n> \n> \t\t\tregards, tom lane\n\nI know this is a VERY long shot, but... what were the READ/WRITE ratios\nbetween the old version and the new version? Perhaps the computation\nof the checksum (sic) blocks under RAID5 caused the unexpected behavior. \nWith RAID 5 increasing read performance but decreasing writes, one might \nexpect a new algorithm which say, halves reads, but increases writes \nslightly to not realize the same benefits as under a normal disk system or\na RAID 1 (or, better yet, a RAID 0+1) array.\n\nLike I said...a VERY long shot theory.\n\nMike Mascari\n([email protected])\n\n\n\n\n\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n",
"msg_date": "Mon, 1 Nov 1999 18:24:03 -0800 (PST)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] sort on huge table "
},
{
"msg_contents": "At 06:24 PM 11/1/99 -0800, Mike Mascari wrote:\n\n>I know this is a VERY long shot, but... what were the READ/WRITE ratios\n>between the old version and the new version? Perhaps the computation\n>of the checksum (sic) blocks under RAID5 caused the unexpected behavior. \n>With RAID 5 increasing read performance but decreasing writes, one might \n>expect a new algorithm which say, halves reads, but increases writes \n>slightly to not realize the same benefits as under a normal disk system or\n>a RAID 1 (or, better yet, a RAID 0+1) array.\n\nRAID 5, not the operating system, might be getting in the way...it\nwould be interesting to test this on a Linux 2.2 kernel without\nthe RAID 5 complication.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 01 Nov 1999 18:35:00 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] sort on huge table "
},
{
"msg_contents": "> At 06:24 PM 11/1/99 -0800, Mike Mascari wrote:\n>> I know this is a VERY long shot, but... what were the READ/WRITE ratios\n>> between the old version and the new version? Perhaps the computation\n>> of the checksum (sic) blocks under RAID5 caused the unexpected behavior. \n\nGood try but no cigar --- we're dealing with a merge algorithm here,\nand it's inherently the same amount of data in and out. You write\na block once, you read the same block once later on. But...\n\nDon Baccus <[email protected]> writes:\n> RAID 5, not the operating system, might be getting in the way...it\n> would be interesting to test this on a Linux 2.2 kernel without\n> the RAID 5 complication.\n\n... I agree this'd be worth trying. There could be some subtle effect\nsomewhere in RAID5 that's tripping things up. It'd also be useful if\nsomeone could try it on similar RAID hardware with a non-Linux kernel.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Nov 1999 22:37:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] sort on huge table "
},
{
"msg_contents": ">> At 06:24 PM 11/1/99 -0800, Mike Mascari wrote:\n>>> I know this is a VERY long shot, but... what were the READ/WRITE ratios\n>>> between the old version and the new version? Perhaps the computation\n>>> of the checksum (sic) blocks under RAID5 caused the unexpected behavior. \n>\n>Good try but no cigar --- we're dealing with a merge algorithm here,\n>and it's inherently the same amount of data in and out. You write\n>a block once, you read the same block once later on. But...\n>\n>Don Baccus <[email protected]> writes:\n>> RAID 5, not the operating system, might be getting in the way...it\n>> would be interesting to test this on a Linux 2.2 kernel without\n>> the RAID 5 complication.\n>\n>... I agree this'd be worth trying. There could be some subtle effect\n>somewhere in RAID5 that's tripping things up. It'd also be useful if\n>someone could try it on similar RAID hardware with a non-Linux kernel.\n\nI have compared current with 6.5 using 1000000 tuple-table (243MB) (I\nwanted to try 2GB+ table but 6.5 does not work in this case). The\nresult was strange in that current is *faster* than 6.5!\n\nRAID5\n\tcurrent\t2:29\n\t6.5.2\t3:15\n\nnon-RAID\n\tcurrent\t1:50\n\t6.5.2\t2:13\n\nSeems my previous testing was done in wrong way or the behavior of\nsorting might be different if the table size is changed?\n\nAnyway, here is my test script.\nFirst, edit Makefile to set DB and number of tuples. Then run type\nmake. 
That's all.\n--\nTatsuo Ishii\n--------------------------------------------------------------------\nbegin 644 sort.tar.gz\nM'XL(`(-F'C@``^V737/:,!\"&N5:_8@MD@@D8VQ@\\`R$S!=)V.J3I).TIR<'8\nM,H@:F]@BDTQ+?WNU_H#2-L.ED![T<$`?[Z[6R\"LM<1CQQH7]E7K,IX7]H&M:\nMVS2A`-!LZDW\\UJV6AM\\I+5,#L#3-,LVV9NIBVA\"?`FA[BF>+9<SM\"*#`ZRR>\nM,O:L;CZ)#A'.H2GM@I3`I9Z]]#G$E',63&+PP@@XC;&CDF$?>DF/?/S\\Y=/H\nM_%ITQ8XC9+=W,A@(_<1QR.#MZ,T[-*Y?&N3Z_?EH)-J-,0L:\\91<7>2=:`YU\nMCUQ?#5`ZH8'JD,O^A[P3$F+[?@>B99\"$1+)&!V?)JW(E<:R`VG`B:G.JQE,H\nM5[+`%=$<]A5\"A+8CVNA70:/!`*?2`+&%JRM0#Q.GQ/&I'710=W6AX%!N\"M4?\nMY*7W=Q<QYG_R,^YOC9WYWVZE^6\\8>E/'_->;EBGS_Q\"46.#X2Y?\":<Q=%JK3\nM,T+F-@[email protected]!LY4_$#5JN@\\*.0;`<`I5IMULV:0)WR7X(@'%;2#,]`5\nM0#DD\"IN'#\"<>;O0[!4U7J/86D7#A5<3B-(IJ1=P,/%;@R`4G7#P5:T$B1JDX\nM=\"JLIW79:=!E)R>Y\\\\Q#\\<B]Y<4:2^29>B;4LU-#1#;;Z#<6C\\5,O-IR=!OD\nMX_@L[\"AY-NCU0-NX^#UN]5=7J[\\\\F!L&5$T]K_ZG0R')__PLO/?WLL:N_(>6\nME>6_WFZU+;S_35.3^7\\(W\"A<`+?'/@6N=TGZ)JP'*@PSO`9<7/\"/7+R[+QVN\nMY!^SE?_3_:RQ*_]-2\\_J_Z:E-S7,?Z-MR/P_!*77ZQI;W'8W4\"Y!/:!@P%V7\nM3T5UB_LS[/>28AK;6:G<2RY%0OV8;@V7]=RB;!\"/$;(0EXHHV&%SQ4!YV\"=J\nM(ZF3,[/OJ<J!8[SSQ;D#7A3.`2N2X#C14V<:0K&'P(/M+)=S2#I%DO;<,=3K\nM^:GUA\\6ZJKA?TN@IM^1L3F&]<DQ]ZG\"HIDL+)V$D;FX8/P$3?X!B!WPV9QP+\nFG32BE]XXB40BD4@D$HE$(I%()!*)1\"*12\"229_@)GWZ*5``H````\n`\nend\n",
"msg_date": "Tue, 02 Nov 1999 13:07:32 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] sort on huge table "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I have compared current with 6.5 using 1000000 tuple-table (243MB) (I\n> wanted to try 2GB+ table but 6.5 does not work in this case). The\n> result was strange in that current is *faster* than 6.5!\n\n> RAID5\n> \tcurrent\t2:29\n> \t6.5.2\t3:15\n\n> non-RAID\n> \tcurrent\t1:50\n> \t6.5.2\t2:13\n\n> Seems my previous testing was done in wrong way or the behavior of\n> sorting might be different if the table size is changed?\n\nWell, I feel better now, anyway ;-). I thought that my first cut\nought to have been about the same speed as 6.5, and after I added\nthe code to slurp up multiple tuples in sequence, it should've been\nfaster than 6.5. The above numbers seem to be in line with that\ntheory. Next question: is there some additional effect that comes\ninto play once the table size gets really huge? I am thinking maybe\nthere's some glitch affecting performance once the temp file size\ngoes past one segment (1Gb). Tatsuo, can you try sorts of say\n0.9 and 1.1 Gb to see if something bad happens at 1Gb? I could\ntry rebuilding here with a small RELSEG_SIZE, but right at the\nmoment I'm not certain I'd see the same behavior you do...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Nov 1999 00:31:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] sort on huge table "
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> >... I agree this'd be worth trying. There could be some subtle effect\n> >somewhere in RAID5 that's tripping things up. It'd also be useful if\n> >someone could try it on similar RAID hardware with a non-Linux kernel.\n> \n> I have compared current with 6.5 using 1000000 tuple-table (243MB) (I\n> wanted to try 2GB+ table but 6.5 does not work in this case). The\n> result was strange in that current is *faster* than 6.5!\n> \n> RAID5\n> current 2:29\n> 6.5.2 3:15\n> \n> non-RAID\n> current 1:50\n> 6.5.2 2:13\n> \n> Seems my previous testing was done in wrong way or the behavior of\n> sorting might be different if the table size is changed?\n\nOr the behaviour of RAID5 changes at some size. \n\nI have set up an IBM Netfinity server with specs similar to yours, except \nthat it has 1G memory and 5x9GB disks. The RAID controller is IBM ServeRAID.\n\nIt seems that when I try to write over 60 MB sequentially, the write \nperformance drops from over 50MB/s to under 2MB/s.\n\nMaybe such behaviour would suggest that building an index and traversing \nthat could be faster than full sort ?\n\nThe same tests on my single Celeron 450 produced ~10MB/s writes \nwhatever the size.\n\n> Anyway, here is my test script.\n> First, edit Makefile to set DB and number of tuples. Then run type\n> make. That's all.\n\nI'll try to run it tonight (in GMT+2 tz). \n\nCan't run it earlyer as it is a production site and a highly \nvisible web-server.\n\nIf I have the time I'll even try my index theory.\n\n--------\nHannu\n",
"msg_date": "Tue, 02 Nov 1999 07:03:55 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] sort on huge table"
},
{
"msg_contents": ">\n>Tatsuo Ishii <[email protected]> writes:\n>> I have compared current with 6.5 using 1000000 tuple-table (243MB) (I\n>> wanted to try 2GB+ table but 6.5 does not work in this case). The\n>> result was strange in that current is *faster* than 6.5!\n>\n>> RAID5\n>> \tcurrent\t2:29\n>> \t6.5.2\t3:15\n>\n>> non-RAID\n>> \tcurrent\t1:50\n>> \t6.5.2\t2:13\n>\n>> Seems my previous testing was done in wrong way or the behavior of\n>> sorting might be different if the table size is changed?\n>\n>Well, I feel better now, anyway ;-). I thought that my first cut\n>ought to have been about the same speed as 6.5, and after I added\n>the code to slurp up multiple tuples in sequence, it should've been\n>faster than 6.5. The above numbers seem to be in line with that\n>theory. Next question: is there some additional effect that comes\n>into play once the table size gets really huge? I am thinking maybe\n>there's some glitch affecting performance once the temp file size\n>goes past one segment (1Gb). Tatsuo, can you try sorts of say\n>0.9 and 1.1 Gb to see if something bad happens at 1Gb? I could\n>try rebuilding here with a small RELSEG_SIZE, but right at the\n>moment I'm not certain I'd see the same behavior you do...\n\nOk. I have run some testings with various amount of data.\n\nRedHat Linux 6.0\nKernel 2.2.5-smp\n512MB RAM\nSort mem: 80MB\nRAID5\n\n100 million tuples\t1:31\n200\t\t\t4:24\n300\t\t\t7:27\n400\t\t\t11:11 <-- 970MB\n500\t\t\t14:01 <-- 1.1GB (segmented files)\n600\t\t\t18:31\n700\t\t\t22:24\n800\t\t\t24:36\n900\t\t\t28:12\n1000\t\t\t32:14\n\nI didn't see any bad thing at 1.1GB (500 million).\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 04 Nov 1999 17:30:00 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] sort on huge table "
}
] |
[
{
"msg_contents": "I have hit an unexpected error in regression testing the REL6_5_PATCHES branch\nin preparing for 6.5.3. For the first time, a test other than float or\ngeometry failed -- with a weird result. Datetime failed. Here is the section\nof regression.diffs for datetime:\n----------------------------\n*** expected/datetime.out\tWed Apr 14 22:19:01 1999\n--- results/datetime.out\tMon Nov 1 22:40:45 1999\n***************\n*** 1,7 ****\n QUERY: SELECT ('today'::datetime = ('yesterday'::datetime + '1 day'::timespan)) as \"True\";\n True\n ----\n! t \n (1 row)\n \n QUERY: SELECT ('today'::datetime = ('tomorrow'::datetime - '1 day'::timespan)) as \"True\";\n--- 1,7 ----\n QUERY: SELECT ('today'::datetime = ('yesterday'::datetime + '1 day'::timespan)) as \"True\";\n True\n ----\n! f \n (1 row)\n \n QUERY: SELECT ('today'::datetime = ('tomorrow'::datetime - '1 day'::timespan)) as \"True\";\n***************\n*** 13,19 ****\n QUERY: SELECT ('tomorrow'::datetime = ('yesterday'::datetime + '2 days'::timespan)) as \"True\";\n True\n ----\n! t \n (1 row)\n \n QUERY: SELECT ('current'::datetime = 'now'::datetime) as \"True\";\n--- 13,19 ----\n QUERY: SELECT ('tomorrow'::datetime = ('yesterday'::datetime + '2 days'::timespan)) as \"True\";\n True\n ----\n! f \n (1 row)\n \n QUERY: SELECT ('current'::datetime = 'now'::datetime) as \"True\";\n***************\n*** 69,75 ****\n QUERY: SELECT count(*) AS one FROM DATETIME_TBL WHERE d1 = 'today'::datetime - '1 day'::timespan;\n one\n ---\n! 1\n (1 row)\n \n QUERY: SELECT count(*) AS one FROM DATETIME_TBL WHERE d1 = 'now'::datetime;\n--- 69,75 ----\n QUERY: SELECT count(*) AS one FROM DATETIME_TBL WHERE d1 = 'today'::datetime - '1 day'::timespan;\n one\n ---\n! 
0\n (1 row)\n \n QUERY: SELECT count(*) AS one FROM DATETIME_TBL WHERE d1 = 'now'::datetime;\n\n-------------------------------\nMisc also failed -- but that was due to the pgaccess relations from my pgaccess\ntesting.\n\nSystem: RedHat Linux 6.1 -- kernel 2.2.12 w/ glibc 2.1.2.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 1 Nov 1999 22:56:14 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regression Testing on REL6_5_PATCHES"
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> I have hit an unexpected error in regression testing the\n> REL6_5_PATCHES branch in preparing for 6.5.3. For the first time, a\n> test other than float or geometry failed -- with a weird result.\n> Datetime failed.\n\nYup, it's the biannual daylight-savings-time transition weirdness.\nIf you look closely, all those tests assume that today midnight to\ntomorrow midnight is 24 hours. Guess what: at this time of year it\nain't. In another day or so the results will be back to normal, at\nleast in the US of A. We'll likely see another gripe or two from\noverseas before the DST-switch season is over.\n\nI've been around this project for a year and a half now, and we've\nheard complaints like this at each of the three DST transitions that\nI remember. (I sent in some alarmed reports myself, first time I\nsaw it.) I've tried to interest Thomas in DST-proofing the regress\ntests, but he doesn't seem to think it's worth fixing.\n\nThere should be something about this in the \"expected failures\"\nsection of the regress test docs, but right at the moment I don't\nsee anything there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Nov 1999 00:43:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Regression Testing on REL6_5_PATCHES "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Lamar Owen <[email protected]> writes:\n> > I have hit an unexpected error in regression testing the\n> > REL6_5_PATCHES branch in preparing for 6.5.3. For the first time, a\n> > test other than float or geometry failed -- with a weird result.\n> > Datetime failed.\n\n> There should be something about this in the \"expected failures\"\n> section of the regress test docs, but right at the moment I don't\n> see anything there.\n\nNow it's my turn to feel much better. The tests that failed were float8\n(which is the NaN glibc weirdness we already know about -- I forget the\nspecifics, but it was tracked down -- although, I had thought that would\nbe fixed for 6.5.3, but I guess I was mistaken), datetime (which you've\nexplained), geometry (which failed due to a floating point roundoff\nerror), and misc (which didn't like the pgaccess preferences tables\nbeing in there).\n\nI'm glad it was something that simple.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 02 Nov 1999 11:04:38 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Regression Testing on REL6_5_PATCHES"
},
{
"msg_contents": "> I've been around this project for a year and a half now, and we've\n> heard complaints like this at each of the three DST transitions that\n> I remember. (I sent in some alarmed reports myself, first time I\n> saw it.) I've tried to interest Thomas in DST-proofing the regress\n> tests, but he doesn't seem to think it's worth fixing.\n\nHmm. I'm probably pretty unimaginative, but when the test is\n\n select datetime 'tomorrow' - datetime 'today';\n\nit is pretty hard to make this \"DST proof\". The alternative was to not\ntest this case at all, which I definitely was *not* interested in.\n\nIdeas? At least with the test as-is, I get nice fan mail twice a year\n;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 04 Nov 1999 16:09:00 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Regression Testing on REL6_5_PATCHES"
}
] |
[
{
"msg_contents": "Marc, 6.5.3 is ready for packaging.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Nov 1999 23:05:47 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.5.3 is ready"
},
{
"msg_contents": "Hi,\n\nPleas post this to approproiate lists also, as I'm currently \nrejected from lists due to address change.\n\nI am running 6.5.2 on RH Linux 6.0 and I have a following bug\n(the dump of two tables involved is attached)\n\nhannu=> select title from document where subject not in (\nhannu-> select full_path from group_directory);\ntitle\n-----\n(0 rows)\n \nhannu=> select title from document where not subject in (\nhannu-> select full_path from group_directory);\ntestcert\n.\n.\n.\ntester\ntester\nlugu\nhlhkllk\n(26 rows)\n\nWhat's even more scary is that a little after trying to get it \nwork right and doing the first query a lot, I got a server crash \nwith corrupted shared memory, that had to be cured with a reboot\n(was faster than finding docs for ipcclean)\n\n-----------------\nHannu",
"msg_date": "Tue, 02 Nov 1999 08:11:55 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "A bug in NOT IN (SELECT ..."
},
{
"msg_contents": "\nSorry, will do that this afternoon...work has been a...battleground\nrecently, and arms/legs have been flying :) Its been great ... :)\n\nOn Mon, 1 Nov 1999, Bruce Momjian wrote:\n\n> Marc, 6.5.3 is ready for packaging.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 2 Nov 1999 12:37:13 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.3 is ready"
},
{
"msg_contents": "> \n> Sorry, will do that this afternoon...work has been a...battleground\n> recently, and arms/legs have been flying :) Its been great ... :)\n> \n\nGood thing you are late. Lamar found problems with pgaccess and perl\nthat I have since fixed.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 2 Nov 1999 11:44:18 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5.3 is ready"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Sorry, will do that this afternoon...work has been a...battleground\n> > recently, and arms/legs have been flying :) Its been great ... :)\n> >\n> \n> Good thing you are late. Lamar found problems with pgaccess and perl\n> that I have since fixed.\n\nJust to be complete, consider the pgaccess stuff and the perl issue\nresolved completely. REL6_5_PATCHES builds flawlessly here (on both\nRedHat 6.x and RedHat 5.2), passes regression (except float8, geometry,\nand misc -- all of which are known failures under Linux 2.xx with\nglibc2.x), pgaccess runs, and the perl test script passes.\n\nYou have a 'Go', Houston.....\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 02 Nov 1999 13:24:04 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.3 is ready"
},
{
"msg_contents": "\nCan you be more specific about the problem?\n\n> Hi,\n> \n> Pleas post this to approproiate lists also, as I'm currently \n> rejected from lists due to address change.\n> \n> I am running 6.5.2 on RH Linux 6.0 and I have a following bug\n> (the dump of two tables involved is attached)\n\n> \n> hannu=> select title from document where subject not in (\n> hannu-> select full_path from group_directory);\n> title\n> -----\n> (0 rows)\n> \n> hannu=> select title from document where not subject in (\n> hannu-> select full_path from group_directory);\n> testcert\n> .\n> .\n> .\n> tester\n> tester\n> lugu\n> hlhkllk\n> (26 rows)\n> \n> What's even more scary is that a little after trying to get it \n> work right and doing the first query a lot, I got a server crash \n> with corrupted shared memory, that had to be cured with a reboot\n> (was faster than finding docs for ipcclean)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 21:28:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A bug in NOT IN (SELECT ..."
}
] |
[
{
"msg_contents": "Sorry, forgot the attachment",
"msg_date": "Tue, 2 Nov 1999 12:04:14 +0300",
"msg_from": "\"Gene Sokolov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] file descriptors leak? "
}
] |
[
{
"msg_contents": "Hello.\nI have not got any help from SQL and GENERAL groups so I send my problem to\nyou.\n\n\nIn Postgres Users Guide, CREATE TABLE section, the following is stated:\n\n Postgres automatically allows the created table to inherit functions on\ntables above it in the inheritance hierarchy. \n\n Aside: Inheritance of functions is done according to the\nconventions of the Common Lisp Object System (CLOS). \n\nI have tried different constructs but I have not been able to create such a\nfunction. Can anybody send me an example of a function that will be\ninherited by inherited table? I. e.\ncreate table A (\n.\n.\n);\n\ncreate function F ...\n\ncreate table B (\n..\n) inherits (A);\n\nNow I assume that I can somehow use function F on table B \n\nThe specific example is given below !!\n\nThank you, \nRegards,\nAndrzej Mazurkiewicz\n\n\n> -----Original Message-----\n> From:\tAndrzej Mazurkiewicz \n> Sent:\t27 paYdziernika 1999 18:09\n> To:\t'[email protected]'\n> Subject:\tRE: [GENERAL] FW: inheritance of functions\n> \n> Hello.\n> Here is an example of my problem:\n> \n> ccbslin2:~/lipa$ psql -c \"drop database archimp0;\" template1\n> DESTROYDB\n> ccbslin2:~/lipa$ psql -c \"create database archimp0;\" template1\n> CREATEDB\n> ccbslin2:~/lipa$ psql -f funinh1.sql archimp0\n> BEGIN WORK;\n> BEGIN\n> CREATE TABLE A (\n> liczba float\n> );\n> CREATE\n> COMMIT WORK;\n> END\n> \n> BEGIN WORK;\n> BEGIN\n> CREATE FUNCTION suma (A) RETURNS float\n> AS 'SELECT $1.liczba AS suma;' LANGUAGE 'sql';\n> CREATE\n> COMMIT WORK;\n> END\n> \n> BEGIN WORK;\n> BEGIN\n> CREATE TABLE B (\n> liczwym float\n> ) INHERITS (A)\n> ;\n> CREATE\n> COMMIT WORK;\n> END\n> \n> BEGIN WORK;\n> BEGIN\n> INSERT INTO A (liczba) VALUES (1.56);\n> INSERT 71414 1\n> COMMIT WORK;\n> END\n> \n> BEGIN WORK;\n> BEGIN\n> INSERT INTO B (liczba, liczwym) VALUES (2.5, 3.2);\n> INSERT 71415 1\n> COMMIT WORK;\n> END\n> \n> select liczba, suma(A) from A;\n> liczba|suma\n> ------+----\n> 1.56|1.56\n> (1 
row)\n> \n> select liczba, suma(A) from A*;\n> liczba|suma\n> ------+----\n> 1.56|1.56\n> 2.5| 2.5\n> (2 rows)\n> \n[Andrzej Mazurkiewicz] --\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n> \n> select liczba, suma(B) from B; [Andrzej Mazurkiewicz] !!!!!!! \n> ERROR: Functions on sets are not yet supported [Andrzej Mazurkiewicz]\n> !!!!!!! \n> \n[Andrzej Mazurkiewicz] --\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! \n\n> EOF\n> \n> --------------------------------------------------------------------------\n> ----------------------------------------\n> \n> After invoking psql:\n> \n> \n> archimp0=> select * from pg_proc where proname = 'suma';\n> proname|proowner|prolang|proisinh|proistrusted|proiscachable|pronargs|pror\n> etset|prorettype|\n> proargtypes|probyte_pct|properbyte_cpu|propercall_cpu|proouti\n> n_ratio|prosrc |probin\n> -------+--------+-------+--------+------------+-------------+--------+----\n> -----+----------+-------------------+-----------+--------------+----------\n> ----+--------------+-------------------------+------\n> suma | 302| 14|f |t |f | 1|f\n> | 701|71393 0 0 0 0 0 0 0| 100| 0| 0|\n> 100|SELECT $1.liczba AS suma;|- \n> (1 row)\n> \n> archimp0=> \n> \n> I am looking for working example !!!!!\n> \n> Regards,\n> Andrzej Mazurkiewicz\n> -----Original Message-----\n> From:\tAaron J. Seigo [SMTP:[email protected]]\n> Sent:\t27 paYdziernika 1999 17:39\n> To:\tAndrzej Mazurkiewicz; '[email protected]'\n> Subject:\tRe: [GENERAL] FW: inheritance of functions\n> \n> hi...\n> \n> > > Postgres automatically allows the created table to inherit functions\n> on\n> > > tables above it in the inheritance hierarchy. 
\n> > > create table A (\n> > > .\n> > > .\n> > > );\n> > > \n> > > create function F ...\n> > > \n> > > create table B (\n> > > ..\n> > > ) inherits (A);\n> > > \n> > > Now I assume that I can somehow use function F on table B \n> \n> you would be able to use function F on table B even if it didn't inherit\n> A. \n> \n> however, if you construct rules, triggers, etc... on table A, these should\n> be\n> inherited by table B.\n> \n> the manual is, as far as my experience has led me to believe, referring to\n> functions \"bound\" (for lack of a better word) to the parent table....\n> \n> -- \n> Aaron J. Seigo\n> Sys Admin\n",
"msg_date": "Tue, 2 Nov 1999 10:11:43 +0100 ",
"msg_from": "Andrzej Mazurkiewicz <[email protected]>",
"msg_from_op": true,
"msg_subject": "inheritance of functions"
}
] |
[
{
"msg_contents": "> Have you put an index on the field in question? It shouldn't \n> matter how\n> many records you have if you do. If you don't, no other database will\n> help you any better.\n\nThe main problem is, that PostgreSQL will abort the transaction if it raises\nelog(ERROR...).\nNo other DB does this. Thus on other DB's the user program can check the\nreturn code,\nfix the error condition and still commit the transaction.\nI think behavior like this will be easier to provide with WAL's savepoints.\n\nAndreas\n",
"msg_date": "Tue, 2 Nov 1999 10:22:40 +0100 ",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Trigger aborted on error"
}
] |
[
{
"msg_contents": "> I have compared current with 6.5 using 1000000 tuple-table (243MB) (I\n> wanted to try 2GB+ table but 6.5 does not work in this case). The\n> result was strange in that current is *faster* than 6.5!\n> \n> RAID5\n> \tcurrent\t2:29\n> \t6.5.2\t3:15\n> \n> non-RAID\n> \tcurrent\t1:50\n> \t6.5.2\t2:13\n> \n> Seems my previous testing was done in wrong way or the behavior of\n> sorting might be different if the table size is changed?\n\nThis new test case is not big enough to show cache memory contention,\nand is thus faster with the new code.\nThe 2 Gb test case was good, because it shows what happens when \ncache memory becomes rare.\n\nAndreas\n",
"msg_date": "Tue, 2 Nov 1999 10:50:22 +0100 ",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] sort on huge table "
},
{
"msg_contents": "Zeugswetter Andreas SEV <[email protected]> writes:\n> This new test case is not big enough to show cache memory contention,\n> and is thus faster with the new code.\n\nCache memory contention? I don't think so. Take a look at the CPU\nversus elapsed times in Tatsuo's prior report on the 2Gb case.\nI'm not sure yet what's going on, but it's clear that the bottleneck is\nI/O operations not processor/memory speed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Nov 1999 10:23:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] sort on huge table "
}
] |
[
{
"msg_contents": "\n> Zeugswetter Andreas SEV <[email protected]> writes:\n> > This new test case is not big enough to show cache memory \n> contention,\n> > and is thus faster with the new code.\n> \n> Cache memory contention? I don't think so. Take a look at the CPU\n> versus elapsed times in Tatsuo's prior report on the 2Gb case.\n> I'm not sure yet what's going on, but it's clear that the \n> bottleneck is\n> I/O operations not processor/memory speed.\n\nYes, I doubt that the new test shows the same bottleneck situation.\nHe did not tell us the IO versus CPU time on the recent 250 Mb test.\nI suspect, that the CPU time now has a higher percentage on total time.\n\nAndreas\n",
"msg_date": "Tue, 2 Nov 1999 17:08:14 +0100 ",
"msg_from": "Zeugswetter Andreas SEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: [HACKERS] sort on huge table "
}
] |
[
{
"msg_contents": "Good to see I'm not the only one who's been going mad recently :-)\n\nPeter\n\n> -----Original Message-----\n> From:\tThe Hermit Hacker [SMTP:[email protected]]\n> Sent:\t02 November 1999 16:37\n> To:\tBruce Momjian\n> Cc:\tPostgreSQL-development\n> Subject:\tRe: [HACKERS] 6.5.3 is ready\n> \n> \n> Sorry, will do that this afternoon...work has been a...battleground\n> recently, and arms/legs have been flying :) Its been great ... :)\n> \n> On Mon, 1 Nov 1999, Bruce Momjian wrote:\n> \n> > Marc, 6.5.3 is ready for packaging.\n> > \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania\n> 19026\n> > \n> > ************\n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick:\n> Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org \n> \n> \n> ************\n",
"msg_date": "Tue, 2 Nov 1999 16:58:20 -0000 ",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] 6.5.3 is ready"
}
] |
[
{
"msg_contents": "hi...\n\nas per the question of linux getting in the way of reads/writes, i forwarded\nbruce's (?) message re: the ext2 code in the linux kernel... this is what i got\nback.. hope this helps....\n\n---------- Forwarded Message ----------\nSubject: Re: fs/etx2/file.c question from postgres developers...\nDate: Tue, 2 Nov 1999 14:40:51 +0000 (GMT)\nFrom: \"Stephen C. Tweedie\" <[email protected]>\n\n\nHi,\n\nOn Mon, 1 Nov 1999 12:12:12 -0700, \"Aaron J. Seigo\" <[email protected]> said:\n\n> in response to a performance issue with disk i/o when using the\n> postgresql database (maintained and developed mostly by bsd users,\n> though used by many linux users) the following has come up...\n\n> ____________________________________\n\n>> Next question is what to do about it. I don't suppose we have any way\n>> of turning off the OS' read-ahead algorithm :-(. \n\n> Look what I found. I downloaded Linux kernel source for 2.2.0, and\n> started looking for the word 'ahead' in the file system files. I found\n> that read-ahead seems to be controlled by f_reada, and look where I\n> found it being turned off? Seems like any seek turns off read-ahead on\n> Linux.\n\nIt's a lot more complex than that --- check the do_generic_file_read()\ncode in mm/filemap.c for the full algorithm. The readahead \"window\" is\ntuned dynamically based on sequential accesses, and any read outside the\nwindow clears the readahead.\n\n--Stephen\n-------------------------------------------------------\n\n-- \nAaron J. Seigo\nSys Admin\n",
"msg_date": "Tue, 2 Nov 1999 11:47:34 -0700",
"msg_from": "\"Aaron J. Seigo\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Re: fs/etx2/file.c question from postgres developers..."
}
] |
[
{
"msg_contents": "Hi Bruce,\n\nHere is a core dump file from my Linux laptop in the same situation...\n\nBruce Momjian wrote:\n> \n> Core dump. Strange. Can you pull up the core from gdb and do a\n> backtrace?\n> \n> [Charset x-user-defined unsupported, skipping...]\n> \n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \n=====================================================\n Dimitri KRAVTCHUK (dim) Sun Microsystems\n Benchmark Engineer France\n [email protected]\n=====================================================",
"msg_date": "Tue, 02 Nov 1999 21:31:59 +0100",
"msg_from": "Dimitri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL vs Mysql comparison"
}
] |
[
{
"msg_contents": "\nDoes Postgres have a cursor function similar to rownum in Oracle. I'm\ntrying to select just a certain number of rows.\n\nthanks\n\nTyson Oswald\n\n",
"msg_date": "Tue, 02 Nov 1999 18:17:42 -0500",
"msg_from": "Tyson Oswald <[email protected]>",
"msg_from_op": true,
"msg_subject": "selecting a certain number of rows"
}
] |
[
{
"msg_contents": "Well, with autocommit on, the statement would fail, and I would expect the\ninsert to then roll back, if the select part failed. No problem, really.\n\nMikeA\n\n-----Original Message-----\nFrom: D'Arcy\" \"J.M.\" Cain\nTo: [email protected]\nCc: [email protected]; [email protected]; [email protected]\nSent: 11/3/99 5:04 AM\nSubject: Re: [HACKERS] Get OID of just inserted record\n\nThus spake Aaron J. Seigo\n> > => insert into foo values (4, 'aaa');\n> > INSERT 7998067 1\n> \n> this assumes that one is using libpq.. it would be nice to have access\nto this\n> from psql or anywhere for that matter.. and not just oids.. but, say\nfor\n> instance, default values in tables that are generated dynamically...\netc\n\nJust to see if I understand you, is this what you want to be able to do?\n\nUPDATE t1 SET other_oid =\n (INSERT INTO t2 VALUES (1, 'aaa') RETURN OID)\n WHERE someting = 'something';\n\nor\n\nSELECT (INSERT INTO t2 (f1, f2) VALUES (1, 'aaa') RETURN f3);\n\nIn other words, sub-inserts. It is kind of a neat idea. I don't know\nthat it is worth spending much time on but it would be a neat feature\nthat no one else has.\n\nJust wondering, how would you handle insert only tables? That is, you\nhave insert privleges but not select. Would you still return the field\nor fields requested surprising the database designer, accept the insert\nbut return an error or refuse the insert entirely since the task could\nnot be completed?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n\n************\n",
"msg_date": "Wed, 3 Nov 1999 08:11:24 +0200 ",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Get OID of just inserted record"
},
{
"msg_contents": "Thus spake Ansley, Michael\n>Well, with autocommit on, the statement would fail, and I would expect the\n>insert to then roll back, if the select part failed. No problem, really.\n\nWell, autocommit would only matter if it was decided that it wasn't an\natomic transaction. If, as seems both sensible and consensed (look, I\nmade up another word :-) the transaction should be atomic, then the\nstate of autocommit shouldn't matter.\n\nHowever, it almost begs the question of whether there should be another\npermission that could be granted. We may want to allow someone to see\nthe value of just inserted data after adjustments but not on the table\nin general. This statement would give us that as well if we added a\nnew perm.\n\nGRANT INSERT, SELECT_ON_INSERT ...\n\nor\n\nGRANT INSERT, RSELECT... -- for Restricted SELECT. ISELECT perhaps?\n\nSo someone can get the serial number of an entry that they just inserted\nbut they wouldn't be able to look at the table in general. That's a\nfeature that I could have used in a database I have. Instead I had to\ngive SELECT perms to a user on a table that I would have preferred to\notherwise keep hidden.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 3 Nov 1999 07:24:13 -0500 (EST)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Get OID of just inserted record"
}
] |
[
{
"msg_contents": "Ok, help me here.\n\nWhenever I try to run Postmaster or I start the deamon it gives me the error\nmessage that he PGDATA directory cannot be found. I've set it in my\n.bash_profile file in /etc/skel : /home/user : /root directories.\n\nWhat am I doing wrong?\n\nThanx Sti\n\n\n",
"msg_date": "Wed, 3 Nov 1999 10:58:21 +0200",
"msg_from": "\"Stiaan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Installing Postgresql"
},
{
"msg_contents": "\n\"Stiaan\" <[email protected]> writes:\n\n> \n> Ok, help me here.\n> \n> Whenever I try to run Postmaster or I start the deamon it gives me the error\n> message that he PGDATA directory cannot be found. I've set it in my\n> .bash_profile file in /etc/skel : /home/user : /root directories.\n> \n> What am I doing wrong?\n> \n> Thanx Sti\n> \n> \n\n\nAs a sanity test, try setting the PGDATA and PGLIB manually and\nstarting the postmaster manually. Logged in as the postgres user and\nlooking at a bash prompt do:\n\nexport PGLIB=/wherever/you/keep/the/postgres/lib/files\nexport PGDATA=/wherever/you/keep/the/database\npostmaster \n\n\nCollin\n",
"msg_date": "03 Nov 1999 23:13:42 -0500",
"msg_from": "[email protected] (Collin W. Hitchcock)",
"msg_from_op": false,
"msg_subject": "Re: Installing Postgresql"
},
{
"msg_contents": "In article <[email protected]>,\nCollin W. Hitchcock <[email protected]> wrote:\n>\n>\"Stiaan\" <[email protected]> writes:\n>\n>> \n>> Ok, help me here.\n>> \n>> Whenever I try to run Postmaster or I start the deamon it gives me the error\n>> message that he PGDATA directory cannot be found. I've set it in my\n>> .bash_profile file in /etc/skel : /home/user : /root directories.\n>> \n>> What am I doing wrong?\n\n>\n>\n>As a sanity test, try setting the PGDATA and PGLIB manually and\n>starting the postmaster manually. Logged in as the postgres user and\n>looking at a bash prompt do:\n>\n>export PGLIB=/wherever/you/keep/the/postgres/lib/files\n>export PGDATA=/wherever/you/keep/the/database\n>postmaster \n\nAlso, don't forget to do the initial 'initdb' with these\nvalues set.\n\n Les Mikesell\n [email protected]\n",
"msg_date": "4 Nov 1999 20:01:08 -0600",
"msg_from": "[email protected] (Leslie Mikesell)",
"msg_from_op": false,
"msg_subject": "Re: Installing Postgresql"
},
{
"msg_contents": "you're on the right track,\ntry exporting the PGDATA variable manually from the shell.\nit's in the manual somewhere.\n\nGavin\n\n\n----------\n>From: \"Stiaan\" <[email protected]>\nIn article <[email protected]>, \"Stiaan\" <[email protected]>\nwrote:\n\n\n> Ok, help me here.\n>\n> Whenever I try to run Postmaster or I start the deamon it gives me the error\n> message that he PGDATA directory cannot be found. I've set it in my\n> .bash_profile file in /etc/skel : /home/user : /root directories.\n>\n> What am I doing wrong?\n>\n> Thanx Sti\n>\n> \n\n",
"msg_date": "Sun, 07 Nov 1999 02:33:31 -0800",
"msg_from": "\"Gavin Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Installing Postgresql"
}
] |
[
{
"msg_contents": "> \"nicks.emails\" wrote:\n\n[back on list -- use 'reply ALL' to keep it on list. This is the second\ntry -- I misspelled pgsql-hackers the first time ;-(]\n\n> tables to find any intersecting points, I read one of the replies\n> about having to recompile the postgres system so as to dereference a\n> NULL\n> pointer so it is not a null pointer, do you have any info\n> to help me to do this.\n\nI do not have the info to do this. Tom Lane, the one who replied with\nthat bit, does. However, according to Tom, that whole bit of code is a\nmess.\n\n> Also when will version 7 be available as I was hoping to use postgres\n> to complete a\n> University project which involves intersecting lsegs and time is\n> running out\n\nI certainly feel for you; however, I have absolutely no control over\nthat timing -- while I maintain the RPM's for PostgreSQL, the other\ndevelopers do most of the core work. I am wanting to help out with that\nwork, but I don't have sufficient knowledge of the backend yet to do so\n(maybe in six months I'll be able to contribute something along those\nlines). From what I gather, we're looking at the first quarter of 2000\nfor version 7 -- but that is not set in stone, wood, or anything else\nsolid.\n\nIf you want to do the lseg intersection yourself, you can:\n1.)\tRetrieve the lseg's and do the intersection in your client program;\n2.)\tWrite a PL/TCL or PL/PGSQL function to do the intersection;\n3.)\tWrite a C function to do the intersection;\n4.)\tFix the existing intersection code.\n\nNumber 4 is the most useful, but by far the most difficult of the four\noptions -- PostgreSQL has a very steep learning curve for developers! \nHowever, the developers here will be glad to help you understand what\nneeds fixing.\n\nIf and when you do fix this, your patches would be most welcome, of\ncourse.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 03 Nov 1999 10:37:49 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: incomplete info from original message"
},
{
"msg_contents": "With reference to trying to alter the source code would anyone be able to\npoint me towards\nthe offending code.\n\nas another quick(er) fix could someone comment on this\nIs it possible to create a function written in c/c++ that contains a try\nstatement and within it\ncall the # with the non intersecting lsegs, so when the backend finds that\nthe lsegs don't intersect returns back to the\ntry block and then have a default handler to ignores the error, would the\nbackend return a value before aborting or just abort\nbecause I am hoping to be able to spend some time with this function but\ndon't want to waste time unnecesarily.\n\nRegards\nNick\nps sorry to be a nuisance but I am getting desperate.\n----- Original Message -----\nFrom: Lamar Owen <[email protected]>\nTo: nicks.emails <[email protected]>; <[email protected]>\nSent: Wednesday, November 03, 1999 7:37 AM\nSubject: Re: incomplete info from original message\n\n\n> > \"nicks.emails\" wrote:\n>\n> [back on list -- use 'reply ALL' to keep it on list. This is the second\n> try -- I misspelled pgsql-hackers the first time ;-(]\n>\n> > tables to find any intersecting points, I read one of the replies\n> > about having to recompile the postgres system so as to dereference a\n> > NULL\n> > pointer so it is not a null pointer, do you have any info\n> > to help me to do this.\n>\n> I do not have the info to do this. Tom Lane, the one who replied with\n> that bit, does. However, according to Tom, that whole bit of code is a\n> mess.\n>\n> > Also when will version 7 be available as I was hoping to use postgres\n> > to complete a\n> > University project which involves intersecting lsegs and time is\n> > running out\n>\n> I certainly feel for you; however, I have absolutely no control over\n> that timing -- while I maintain the RPM's for PostgreSQL, the other\n> developers do most of the core work. 
I am wanting to help out with that\n> work, but I don't have sufficient knowledge of the backend yet to do so\n> (maybe in six months I'll be able to contribute something along those\n> lines). From what I gather, we're looking at the first quarter of 2000\n> for version 7 -- but that is not set in stone, wood, or anything else\n> solid.\n>\n> If you want to do the lseg intersection yourself, you can:\n> 1.) Retrieve the lseg's and do the intersection in your client program;\n> 2.) Write a PL/TCL or PL/PGSQL function to do the intersection;\n> 3.) Write a C function to do the intersection;\n> 4.) Fix the existing intersection code.\n>\n> Number 4 is the most useful, but by far the most difficult of the four\n> options -- PostgreSQL has a very steep learning curve for developers!\n> However, the developers here will be glad to help you understand what\n> needs fixing.\n>\n> If and when you do fix this, your patches would be most welcome, of\n> course.\n>\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n\n",
"msg_date": "Wed, 3 Nov 1999 21:36:22 -0800",
"msg_from": "\"nicks.emails\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "your commnts"
}
] |
[
{
"msg_contents": "On the topic of how to programatically get a just-inserted serial\nvalue, I propose the Sqlflex model for adoption into postgresql.\nIn that model, the return protocol for INSERT is altered to return\nthe serial value of the just-inserted record IFF the input value\nfor the serial column was 0. [Side rules: tables can only have one\nserial column, and db-generated serial values are always natural\nnumbers.] For example,\n\n create table mytable (id serial, name varchar);\n\n -- this returns # of rows inserted, as usual...\n insert into mytable (name) values ('John');\n\n -- this returns serial 'id' of inserted record...\n insert into mytable (id,name) values (0,'Mary');\n\nThis requires no syntax change to INSERT (a Good Thing),\nand does not require any additional higher-level processing to\nget the serial value. We have had good success with this\napproach on some relatively high-performance 7x24x365 dbs.\n\nPresently, I am performing an additional select to get the same\neffect (in perl DBI) immediately after $sth->execute() for the\noriginal insert query, e.g.,\n\n select id from mytable where oid = $sth->{pg_oid_status}\n\nSeems a waste to have to do this, but I'm not aware of another way.\n\n-Ed\n\n",
"msg_date": "Wed, 03 Nov 1999 12:43:04 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "[HACKERS] getting new serial value of serial insert"
},
{
"msg_contents": "hi...\n\n> create table mytable (id serial, name varchar);\n> \n> -- this returns # of rows inserted, as usual...\n> insert into mytable (name) values ('John');\n> \n> -- this returns serial 'id' of inserted record...\n> insert into mytable (id,name) values (0,'Mary');\n\nhm.. this is very elegant syntactically.. \n\nhowever, it would be nice to be able to have returned any number of fields of\nany types... (for example, i have a trigger that changes a field in a record\nwhenever it gets updated/inserted.. it would be nice to get this returned as\nwell...)\n\nalso, if possible, it would be nice to extend this to UPDATE... \n\ncan you think of a way to use this syntax approach that would meet the needs\nabove?\n\n> select id from mytable where oid = $sth->{pg_oid_status}\n> \n> Seems a waste to have to do this, but I'm not aware of another way.\n\n*nods* seems quite a few people are running into this.\n\n-- \nAaron J. Seigo\nSys Admin\n",
"msg_date": "Wed, 3 Nov 1999 13:16:13 -0700",
"msg_from": "\"Aaron J. Seigo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] getting new serial value of serial insert"
},
{
"msg_contents": "\"Aaron J. Seigo\" wrote:\n\n> > -- this returns serial 'id' of inserted record...\n> > insert into mytable (id,name) values (0,'Mary');\n>\n> hm.. this is very elegant syntactically..\n>\n> however, it would be nice to be able to have returned any number of fields of\n> any types... (for example, i have a trigger that changes a field in a record\n> whenever it gets updated/inserted.. it would be nice to get this returned as\n> well...)\n\n>\n> also, if possible, it would be nice to extend this to UPDATE...\n>\n> can you think of a way to use this syntax aproach that would meet the needs\n> above?\n\nNo, and I'm not sure it'd be good to couple the two operations syntactically\neven if one thought of a clever way to do it. Serial-insert value retrieval is\na very frequent lightweight operation that fits nicely within current INSERT\nsyntax, and thus it seems intuitively \"natural\" to stretch INSERT semantics\nin this way.\n\nIn the trigger scenario you mention, I'd be slightly more inclined to say it\ncrosses the fuzzy gray line into the area where a subsequent SELECT is in\norder, as opposed to modifying INSERT syntax/semantics to allow this\nSELECT functionality. How's that for fuzzy logic?\n\nCheers.\nEd\n\n\n\n\n\n\n",
"msg_date": "Wed, 03 Nov 1999 14:19:28 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] getting new serial value of serial insert"
},
{
"msg_contents": "hi...\n\n> \n> No, and I'm not sure it'd be good to couple the two operations syntactically\n> even if one thought of a clever way to do it. Serial-insert value retrieval is\n> a very frequent lightweight operation that fits nicely within current INSERT\n> syntax, and thus it seems intuitively \"natural\" to stretch INSERT semantics\n> in this way.\n\nput that way, i can see your point clearly and agree... =) \n\ni think this would be a nice addition to pgsql... \n \n> In the trigger scenario you mention, I'd be slightly more inclined to say it\n> crosses the fuzzy gray line into the area where a subsequent SELECT is in\n> order, as opposed to modifying INSERT syntax/semantics to allow this\n> SELECT functionality. How's that for fuzzy logic?\n\n*nods* this is where the RETURN clause we've been batting around comes in as a\nmore powerful and secure way of dealing with this... oh well, i was hoping that\nperhaps the serial return concept could be applied here as well...\n\n-- \nAaron J. Seigo\nSys Admin\n",
"msg_date": "Wed, 3 Nov 1999 16:15:15 -0700",
"msg_from": "\"Aaron J. Seigo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] getting new serial value of serial insert"
},
{
"msg_contents": "> *nods* this is where the RETURN clause we've been batting around comes\n> in as a more powerful and secure way of dealing with this... oh well,\n> i was hoping that perhaps the serial return concept could be applied\n> here as well...\n\nI don't like *any* of the proposals that have appeared in this thread.\nInventing nonstandard SQL syntax is a bad idea, and furthermore all\nof these solutions are extremely limited in capability: they only work\nfor \"serial\" columns, they only work for a single serial column, etc\netc.  If we're going to address this issue at all, we should invent\na general-purpose mechanism for passing back to the frontend application\nthe results of server-side operations that are performed as a side effect\nof SQL commands.\n\nThe idea that comes to my mind is to invent a new command, available in\n\"trigger\" procedures, that causes a message to be sent to the frontend\napplication.  This particular problem of returning a serial column's\nvalue could be handled in an \"after insert\" trigger procedure, with a\ncommand along the lines of\n\tSENDFE \"mytable.col1=\" + new.col1\nWe'd have to think about what restrictions to put on the message\ncontents, if any.  It might be sufficient just to counsel users\nto stick identification strings on the front of the message text\nas illustrated above.\n\nWith this approach we wouldn't be adding nonstandard SQL syntax (trigger\nprocedures are already nonstandard, and we'd be keeping the additions\nin there).  Also, since more than one message could be sent during a\ntransaction, there wouldn't be any artificial restriction to just\nreturning one or a fixed number of values.  Finally, we'd not be\ncreating data-type-specific behavior for SERIAL; the facility could\nbe used for many things.\n\nWe'd need to think about just how to make the messages available to\nclient applications.  For libpq, something similar to the existing\nNOTIFY handling might work.  Not sure how that would map into ODBC or\nother frontend libraries.\n\nAnother issue is what about transaction semantics?  If we send such\na message right away, and then later the transaction is aborted, then\nwe shouldn't have sent the message at all.  But if the application wants\nthe message so it can get a serial number to insert in another record,\nthen it doesn't want the message to be held off till end of transaction,\neither.  Maybe we need two sorts of SENDFE commands, one that sends\nimmediately and one that is queued until and unless the transaction\ncommits.  An application using the first kind would have to take\nresponsibility for not using the returned data in a way that would cause\ntransactional problems.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Nov 1999 18:41:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] getting new serial value of serial insert "
},
{
"msg_contents": ">\n> hi...\n>\n> >\n> > No, and I'm not sure it'd be good to couple the two operations syntactically\n> > even if one thought of a clever way to do it. Serial-insert value retrieval is\n> > a very frequent lightweight operation that fits nicely within current INSERT\n> > syntax, and thus it seems intuitively \"natural\" to stretch INSERT semantics\n> > in this way.\n>\n> put that way, i can see your point clearly and agree... =)\n>\n> i think this would be a nice addition to pgsql...\n>\n> > In the trigger scenario you mention, I'd be slightly more inclined to say it\n> > crosses the fuzzy gray line into the area where a subsequent SELECT is in\n> > order, as opposed to modifying INSERT syntax/semantics to allow this\n> > SELECT functionality. How's that for fuzzy logic?\n\n Don't forget about a BEFORE ROW trigger that decides to\n return a NULL tuple instead of a valid (maybe modified)\n tuple. Thus, it suppresses the entire INSERT, UPDATE or\n DELETE operation silently. You cannot access a plain value\n then without having a flag telling that there is a value at\n all.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 4 Nov 1999 04:18:30 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] getting new serial value of serial insert"
},
{
"msg_contents": "hi...\n\n> I don't like *any* of the proposals that have appeared in this thread.\n> Inventing nonstandard SQL syntax is a bad idea, and furthermore all\n\nagreed... at the same time though, just about every other database out there has\nnon-standard SQL statements to work around various limitations, perceived and\notherwise... also, a quick look through the user documentation for postgres\nwill show that there already are a lot of non-standard SQL statements..\n*shrug*\n\n> of these solutions are extremely limited in capability: they only work\n> for \"serial\" columns, they only work for a single serial column, etc\n> etc.  If we're going to address this issue at all, we should invent\n> a general-purpose mechanism for passing back to the frontend application\n> the results of server-side operations that are performed as a side effect\n> of SQL commands.\n\nthe RETURN clause concept isn't limited to serial columns or single columns...\nit would allow the return of any columns that were affected by the\nINSERT/UPDATE/DELETE...\n \n> The idea that comes to my mind is to invent a new command, available in\n> \"trigger\" procedures, that causes a message to be sent to the frontend\n> application.  This particular problem of returning a serial column's\n> value could be handled in an \"after insert\" trigger procedure, with a\n> command along the lines of\n> \tSENDFE \"mytable.col1=\" + new.col1\n> We'd have to think about what restrictions to put on the message\n> contents, if any.  It might be sufficient just to counsel users\n> to stick identification strings on the front of the message text\n> as illustrated above.\n\ni don't think this is leaps and bounds above what can already be done with\nfunctions, triggers and external code now. while this would probably create a\nspeed advantage (by skipping a select statement step) it would still leave the\nproblem of having to implement a trigger for every type of data you want back. \n\nand there are limitations inherent to this method: if\nyou wanted field1 returned when updating field2, but field3 when updating\nfield4... except that one time when you want both field1 and field3 returned...\n*takes a deep breath* it just isn't flexible enough... \n\nfor every possible return situation, you'd have to define it in a trigger...\nand there still would be limitations to what rules you could set up.. e.g. how\nwould you define in a trigger different returned values depending on the user\nthat is currently accessing the database? a real world example would be a user\ncoming in over the web and an admin coming in through the same method. unless\npgsql handles the user authentication (which in most web applications, it doesn't)\nthere would be no way to tell the difference without going through more work\nthan it takes to do it with current methods (e.g. select).\n\n> transaction, there wouldn't be any artificial restriction to just\n> returning one or a fixed number of values.  Finally, we'd not be\n> creating data-type-specific behavior for SERIAL; the facility could\n> be used for many things.\n\nthis is _exactly_ what i have said in several previous posts: that it should not\nbe limited just to serial fields... \n\n> We'd need to think about just how to make the messages available to\n> client applications.  For libpq, something similar to the existing\n> NOTIFY handling might work.  Not sure how that would map into ODBC or\n> other frontend libraries.\n\nif it was integrated into the INSERT/UPDATE/DELETE queries, it wouldn't need to\nbe implemented in each frontend library. it would just be output, much like the\nOID and # of records inserted that currently appears after an\nINSERT/UPDATE/DELETE.\n\nhowever, if it is so completely horrid to add functionality to the SQL\nstatements, i really can't think of another method that would provide the\nfunctionality that would actually make it useful outside of a limited number of\nsituations.... so unless someone can think of a way, maybe it's just better to\nleave it be.\n\n-- \nAaron J. Seigo\nSys Admin\n\nRule #1 of Software Design: Engineers are _not_ users\n",
"msg_date": "Wed, 3 Nov 1999 23:25:51 -0700",
"msg_from": "\"Aaron J. Seigo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] getting new serial value of serial insert"
}
] |
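Tom Lane's point in the thread above about transaction semantics (two delivery modes for a SENDFE-style message: send immediately, or queue until and unless the transaction commits) can be sketched as follows. This is a hypothetical illustration only; `Session` and all names are invented to make the queuing semantics concrete, and nothing here is PostgreSQL code.

```python
# Sketch of the two proposed SENDFE delivery modes:
#   immediate=True  -> message reaches the frontend right away
#                      (caller accepts the risk of a later abort)
#   immediate=False -> message is queued and delivered only on commit

class Session:
    def __init__(self):
        self.delivered = []    # messages the frontend has received
        self._pending = []     # on-commit messages of the open txn

    def sendfe(self, msg, immediate=False):
        if immediate:
            self.delivered.append(msg)
        else:
            self._pending.append(msg)

    def commit(self):
        # queued messages are released only when the txn commits
        self.delivered.extend(self._pending)
        self._pending = []

    def rollback(self):
        # an aborted txn's queued messages are never sent at all
        self._pending = []

s = Session()
s.sendfe("mytable.col1=42")   # queued
s.rollback()                  # aborted: message vanishes
s.sendfe("mytable.col1=43")   # queued
s.commit()                    # now delivered
print(s.delivered)            # ['mytable.col1=43']
```

The "identification string on the front of the message text" convention from the thread (e.g. `mytable.col1=`) is what would let a client sort out which value is which when several messages arrive in one transaction.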
[
{
"msg_contents": "Okay,\n\nI'm used to a feature that allows combining count and distinct in Sybase. \nI thought it was standard SQL and expected to see it in Postgres.  Whatever\nthe case might be, postgres does not seem to support this.  I keep running\ninto queries that I cannot write.  Here's the skinny:\n\n\"select count(distinct id) from table\" is not supported.  Getting this\ninformation without count(distinct id) support is a pain and always seems\nto require creating a temporary table and running queries later.  My first\nsolution was to create a view that just selected the distinct columns that\nI was interested in and then do a count on that table.  This too seems\nimpossible.\n\nFor both count(distinct) and distinct in views, I have this question:  Is \nthis something that needs to be supported but just never got implemented?\nOr, is it something that was consciously excluded?  If nobody is working on \nthese, I may take a look at them and pick the easier to implement.  From what\nlittle I know about the way postgres works, I expect the first one would be easier.\n\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n    http://www.mobygames.com\n",
"msg_date": "Wed, 3 Nov 1999 18:13:31 -0600",
"msg_from": "Brian Hirt <[email protected]>",
"msg_from_op": true,
"msg_subject": "VIEWS, DISTINCT and COUNT"
},
{
"msg_contents": "Brian Hirt wrote:\n >Okay,\n >\n >I'm used to a feature that allows combining count and distinct in Sybase. \n >I thought it was standard SQL and expected to see it in Postgres. Whatever\n >the case might be, postgres does not seem to support this. I keep running\n >into queries that I cannot write. Here's the skinny:\n >\n >\"select count(distinct id) from table\" is not supported. \n\nI'm not convinced I understand what your query would do, but it sounds as\nif you need to use GROUP BY. For example:\n\nlfix=> select custid, count(custid) from invoice group by custid;\ncustid |count\n--------+-----\nACECS | 1\nADG | 8\nFALKIRK | 1\nJEFSYS | 25\nSOLPORT | 15\n(5 rows)\n\nlfix=> select count(*) from invoice;\ncount\n-----\n 50\n(1 row)\n\nIs that what you want to achieve?\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Lo, children are an heritage of the LORD; and the \n fruit of the womb is his reward.\" Psalms 127:3 \n\n\n",
"msg_date": "Thu, 04 Nov 1999 00:52:44 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VIEWS, DISTINCT and COUNT "
},
{
"msg_contents": "Brian Hirt <[email protected]> writes:\n> \"select count(distinct id) from table\" is not supported.\n\nYup. It's on the TODO list:\n\t* Allow COUNT(DISTINCT col)\n\n> For both count(distinct) and distinct in views, I have this question: Is \n> this something that needs to be supported but just never got implemented?\n\nI'm not sure what Jan has in mind for views, but certainly\naggregate(DISTINCT ...) is an SQL-standard feature that we ought to\nsupport. I don't think it's a simple addition though :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Nov 1999 19:58:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VIEWS, DISTINCT and COUNT "
},
{
"msg_contents": ">\n> Brian Hirt <[email protected]> writes:\n> > \"select count(distinct id) from table\" is not supported.\n>\n> Yup.  It's on the TODO list:\n> \t* Allow COUNT(DISTINCT col)\n>\n> > For both count(distinct) and distinct in views, I have this question: Is\n> > this something that needs to be supported but just never got implemented?\n>\n> I'm not sure what Jan has in mind for views, but certainly\n> aggregate(DISTINCT ...) is an SQL-standard feature that we ought to\n> support.  I don't think it's a simple addition though :-(\n\n    All these DISTINCT, AGGREGATE etc. problems on views are\n    based on the fact that the planner still requires that the\n    rewriter's output is equivalent to a regular, allowed query.\n\n    I would like to be able to place a complete querytree (just\n    an entire SELECT's Query node) into a range table entry.\n    AFAIK, from the caller's point of view there is not much\n    difference between the join-, group-, sort-, aggregate- and\n    scan-nodes.  They are all just nodes returning some amount of\n    tuples.  All of them could be the toplevel executor node of a\n    SELECT - just something returning tuples.\n\n    Unfortunately my knowledge in the planner is very limited, so\n    I would need help to go for it.  Who has that knowledge?\n\n    The basic idea is this:\n\n    Let's have a view defined as\n\n        CREATE VIEW v1 AS SELECT a, count(*) AS n FROM t1 GROUP BY a;\n\n    The plan for such a query would be a\n\n        Aggregate\n          ->  Group\n                ->  Sort\n                      ->  Seq Scan on t1\n\n    Thus doing a\n\n        SELECT t2.a, v1.n FROM t2, v1 WHERE t2.a = v1.a;\n\n    could finally result in a\n\n        Merge Join\n          ->  Sort\n                ->  Seq Scan on t2\n          ->  Sort\n                ->  Aggregate\n                      ->  Group\n                            ->  Sort\n                                  ->  Seq Scan on t1\n\n    It's impossible to cause such an execution plan from a\n    standard SQL statement.  But why should it be impossible for\n    the rewriter too?  If v1 were a regular table (not a view),\n    the generated plan would have been a\n\n        Merge Join\n          ->  Sort\n                ->  Seq Scan on t2\n          ->  Sort\n                ->  Seq Scan on v1\n\n    so obviously the only difference is that the scan over the v1\n    relation has been replaced by the more complicated plan for\n    the plain definition of the view.  If the planner could do\n    that, I think we would get rid of all the limitations for\n    views very soon.\n\n    Again, who knows enough about the planner to be able to do\n    this kind of stuff?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 4 Nov 1999 04:59:39 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VIEWS, DISTINCT and COUNT"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> All these DISTINCT, AGGREGATE etc. problems on views are\n> based on the fact, that the planner still requires that the\n> rewriters output is equivalent to a regular, allowed query.\n\nRight, and there's no good reason for that.\n\n> I would like to be able to place a complete querytree (just\n> an entire SELECT's Query node) into a range table entry.\n\nI've been saying for some time that the parser ought to emit something\nclose to a plan-tree representation --- not committing to a particular\nquery implementation method, of course, but nonetheless a tree of\nquery nodes.  The planner wouldn't find that any harder to work on\nthan what it gets now.  The executor might need some work, but\nprobably not much.\n\n> Unfortunately my knowledge in the planner is very limited, so\n> I would need help to go for it.  Who has that knowledge?\n\nI know enough to be dangerous, and so does Bruce.  Do you think there\nis time to attack this for 7.0, or should we leave well enough alone\nfor now?\n\n> Let's have a view defined as\n\n> CREATE VIEW v1 AS SELECT a, count(*) AS n FROM t1 GROUP BY a;\n\n> The plan for such a query would be a\n\n> Aggregate\n>   ->  Group\n>         ->  Sort\n>               ->  Seq Scan on t1\n\nNot necessarily --- the aggregate and group nodes must be there, but\nwe don't want to commit to seqscan&sort vs. indexscan sooner than we\nhave to.  I think what's needed here is some notion of an abstract\nplan tree.  The trick is to pick the right level of abstraction.\nMaybe \"Aggregate -> Group -> OrderedTupleSource\" is the way to think\nabout it.\n\nBut your end point is valid: we want to be able to make a structure\nlike that be an input to a higher-level plan tree.  This is also\nnecessary for subselect in FROM clause, isn't it?\n\n> Again, who knows enough about the planner to be able to do\n> this kind of stuff?\n\nI could take it on, but I have a lot of other stuff I want to do for\n7.0.  Is this more important than fixing fmgr or improving the\nplanner's selectivity estimates?  I dunno...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Nov 1999 23:41:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VIEWS, DISTINCT and COUNT "
},
{
"msg_contents": "> Not necessarily --- the aggregate and group nodes must be there, but\n> we don't want to commit to seqscan&sort vs. indexscan sooner than we\n> have to. I think what's needed here is some notion of an abstract\n> plan tree. The trick is to pick the right level of abstraction.\n> Maybe \"Aggregate -> Group -> OrderedTupleSource\" is the way to think\n> about it.\n> \n> But your end point is valid: we want to be able to make a structure\n> like that be an input to a higher-level plan tree. This is also\n> necessary for subselect in FROM clause, isn't it?\n> \n> > Again, who knows enough about the planner to be able to do\n> > this kind of stuff?\n> \n> I could take it on, but I have a lot of other stuff I want to do for\n> 7.0. Is this more important than fixing fmgr or improving the\n> planner's selectivity estimates? I dunno...\n\nLet me make a comment. Seems like a whole host of problems will be\nfixed by this overhaul, but none of the problems is major.\n\nJan's foreign key support, Vadim's WAL, and Tom Lane's cleanups are of\nmajor importance for 7.0, so it seems we better focus on those, and if\nwe have time before 7.0, and all people involved have time, we can take\non that work. We will need to have most of us available to discuss and\nmerge the changes into all the affected areas.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Nov 1999 00:16:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VIEWS, DISTINCT and COUNT"
},
{
"msg_contents": "> > I could take it on, but I have a lot of other stuff I want to do for\n> > 7.0. Is this more important than fixing fmgr or improving the\n> > planner's selectivity estimates? I dunno...\n> Jan's foreign key support, Vadim's WAL, and Tom Lane's cleanups are of\n> major importance for 7.0, so it seems we better focus on those, and if\n> we have time before 7.0, and all people involved have time, we can take\n> on that work. We will need to have most of us available to discuss and\n> merge the changes into all the affected areas.\n\nOuter joins will likely require this. So far, I'm just working on the\njoin *syntax*, and (although stalled for the last week or two) will be\ntouching the rte structure to support table and column aliases in the\njoin syntax. But to move to outer joins, I need to be able to tie two\nrte's together, which will be easier to do if I'm allowed to include a\nquery tree (which would, in turn, include rte's for the join tables).\n\nSo, we get join syntax in 7.0 without major parser changes. Not sure\nwe can get outer joins without more, which is required for Jan to go\nfarther too...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 04 Nov 1999 16:34:50 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] VIEWS, DISTINCT and COUNT"
}
] |
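The GROUP BY workaround suggested in this thread rests on an equivalence: COUNT(DISTINCT id) is exactly the number of groups that GROUP BY id would produce. The equivalence can be illustrated outside the database with plain Python; the data values here are made up for the example.

```python
# COUNT(DISTINCT id) == number of GROUP BY id groups, shown on a
# plain list instead of a table (illustration only, not SQL).

from itertools import groupby

# a column of customer ids with duplicates
ids = ["ACECS", "ADG", "ADG", "JEFSYS", "JEFSYS", "JEFSYS", "SOLPORT"]

# what COUNT(DISTINCT id) would compute
count_distinct = len(set(ids))

# what "SELECT id, count(id) ... GROUP BY id" would return rows for;
# groupby needs sorted input to form one group per distinct key
group_count = len([key for key, _ in groupby(sorted(ids))])

print(count_distinct, group_count)   # 4 4
```

This is why, lacking COUNT(DISTINCT col), one can count the rows of a grouped query (or of a DISTINCT selection) and get the same answer, at the cost of an extra query or temporary table.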
[
{
"msg_contents": "\nCan someone else take a quick peak at the tarball before we release it,\njust to make sure that nothing is missing? Its in the normal place, but\nhaven't announced it yet...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 3 Nov 1999 21:50:09 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": ">Can someone else take a quick peak at the tarball before we release it,\n>just to make sure that nothing is missing? Its in the normal place, but\n>haven't announced it yet...\n\nOk, give me an hour. I will check it on LinuxPPC with MB and Tcl/Tk\nenabled.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 04 Nov 1999 11:05:41 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ... "
},
{
"msg_contents": "\nOn 04-Nov-99 The Hermit Hacker wrote:\n> \n> Can someone else take a quick peak at the tarball before we release it,\n> just to make sure that nothing is missing? Its in the normal place, but\n> haven't announced it yet...\n\nAny special highlights I should know about with this version or is it \nmainly fixes?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Wed, 03 Nov 1999 21:21:13 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "On Wed, Nov 03, 1999 at 09:50:09PM -0400, The Hermit Hacker wrote:\n> \n> Can someone else take a quick peak at the tarball before we release it,\n> just to make sure that nothing is missing?  Its in the normal place, but\n> haven't announced it yet...\n> \n\nI downloaded, compiled and installed with no problem on a RedHat 6.0/x86 \nmachine.  One thing I noticed is that the PG_SUBVERSION string is still\nreporting 1.  I was confused for a minute when the program reported itself\nas 6.5.1.  I thought the installation failed or something like that.  I \nchecked postgresql-6.5.3/src/include/version.h.in and sure enough, it's \ndefined as 6.5.1.  If this is intentional, I'll slap myself for you.\n\n[root@loopy]# /usr/bin/psql basement\nWelcome to the POSTGRESQL interactive sql monitor:\n  Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.1 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\n   type \\? for help on slash commands\n   type \\q to quit\n   type \\g or terminate with semicolon to execute query\n You are currently connected to the database: basement\n\nbasement=> \n \n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n    http://www.mobygames.com\n",
"msg_date": "Wed, 3 Nov 1999 20:25:37 -0600",
"msg_from": "Brian Hirt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "> \n> On 04-Nov-99 The Hermit Hacker wrote:\n> > \n> > Can someone else take a quick peak at the tarball before we release it,\n> > just to make sure that nothing is missing? Its in the normal place, but\n> > haven't announced it yet...\n> \n> Any special highlights I should know about with this version or is it \n> mainly fixes?\n\n\nHistory file has:\n\n\tUpdated version of pgaccess 0.98\n\tNT-specific patch\n\tFix dumping rules on inherited tables\n\nNot much.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Nov 1999 21:32:17 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": ">> On 04-Nov-99 The Hermit Hacker wrote:\n>> > \n>> > Can someone else take a quick peak at the tarball before we release it,\n>> > just to make sure that nothing is missing? Its in the normal place, but\n>> > haven't announced it yet...\n>> \n>> Any special highlights I should know about with this version or is it \n>> mainly fixes?\n>\n>\n>History file has:\n>\n>\tUpdated version of pgaccess 0.98\n\npgaccess coming with 6.5.3 seems 0.96?\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 04 Nov 1999 12:41:21 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ... "
},
{
"msg_contents": "> On Wed, Nov 03, 1999 at 09:50:09PM -0400, The Hermit Hacker wrote:\n> > \n> > Can someone else take a quick peak at the tarball before we release it,\n> > just to make sure that nothing is missing? Its in the normal place, but\n> > haven't announced it yet...\n> > \n> \n> I downloaded, compiled and installed with no problem and a RedHat 6.0/x86 \n> machine. One thing I noticed is that the PG_SUBVERSION string is still\n> reporting 1. I was confused for a minute when the program reported itself\n> as 6.5.1. I thought the installation failed or something like that. I \n> checked postgresql-6.5.3/src/include/version.h.in and sure enough, it's \n> defined a 6.5.1 If this is intentional, I'll slap myself for you.\n> \n> [root@loopy]# /usr/bin/psql basement\n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> [PostgreSQL 6.5.1 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\nThat seems totally wrong. My version.h.in file says 6.5.3. Either you\ngot the wrong file, or Marc has packaged the wrong files.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Nov 1999 22:44:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "> >> On 04-Nov-99 The Hermit Hacker wrote:\n> >> > \n> >> > Can someone else take a quick peak at the tarball before we release it,\n> >> > just to make sure that nothing is missing? Its in the normal place, but\n> >> > haven't announced it yet...\n> >> \n> >> Any special highlights I should know about with this version or is it \n> >> mainly fixes?\n> >\n> >\n> >History file has:\n> >\n> >\tUpdated version of pgaccess 0.98\n> \n> pgaccess coming with 6.5.3 seems 0.96?\n\nOK, that confirms it. Marc has probably packaged the wrong files.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Nov 1999 22:45:03 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": ">> [root@loopy]# /usr/bin/psql basement\n>> Welcome to the POSTGRESQL interactive sql monitor:\n>> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n>> [PostgreSQL 6.5.1 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n>\n>That seems totally wrong. My version.h.in file says 6.5.3. Either you\n>got the wrong file, or Marc has packaged the wrong files.\n\nI confirmed that too.\nftp.postgresql.org/pub/postgresql-6.5.3.tar.gz is 6.5.1 as long as I\ncan see from version.h.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 04 Nov 1999 13:02:27 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ... "
},
{
"msg_contents": "\nFixed, and tar file rebuilt ...\n\n\nOn Wed, 3 Nov 1999, Brian Hirt wrote:\n\n> On Wed, Nov 03, 1999 at 09:50:09PM -0400, The Hermit Hacker wrote:\n> > \n> > Can someone else take a quick peak at the tarball before we release it,\n> > just to make sure that nothing is missing? Its in the normal place, but\n> > haven't announced it yet...\n> > \n> \n> I downloaded, compiled and installed with no problem and a RedHat 6.0/x86 \n> machine. One thing I noticed is that the PG_SUBVERSION string is still\n> reporting 1. I was confused for a minute when the program reported itself\n> as 6.5.1. I thought the installation failed or something like that. I \n> checked postgresql-6.5.3/src/include/version.h.in and sure enough, it's \n> defined a 6.5.1 If this is intentional, I'll slap myself for you.\n> \n> [root@loopy]# /usr/bin/psql basement\n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> [PostgreSQL 6.5.1 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n> \n> type \\? for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: basement\n> \n> basement=> \n> \n> \n> -- \n> The world's most ambitious and comprehensive PC game database project.\n> \n> http://www.mobygames.com\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 4 Nov 1999 00:09:26 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "On Wed, 3 Nov 1999, Bruce Momjian wrote:\n\n> > On Wed, Nov 03, 1999 at 09:50:09PM -0400, The Hermit Hacker wrote:\n> > > \n> > > Can someone else take a quick peak at the tarball before we release it,\n> > > just to make sure that nothing is missing? Its in the normal place, but\n> > > haven't announced it yet...\n> > > \n> > \n> > I downloaded, compiled and installed with no problem and a RedHat 6.0/x86 \n> > machine. One thing I noticed is that the PG_SUBVERSION string is still\n> > reporting 1. I was confused for a minute when the program reported itself\n> > as 6.5.1. I thought the installation failed or something like that. I \n> > checked postgresql-6.5.3/src/include/version.h.in and sure enough, it's \n> > defined a 6.5.1 If this is intentional, I'll slap myself for you.\n> > \n> > [root@loopy]# /usr/bin/psql basement\n> > Welcome to the POSTGRESQL interactive sql monitor:\n> > Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> > [PostgreSQL 6.5.1 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n> \n> That seems totally wrong. My version.h.in file says 6.5.3. Either you\n> got the wrong file, or Marc has packaged the wrong files.\n\nNot bad, eh? I'm the one that creates the TAGS, and I don't even use the\nwrong one...glad I got a \"second opinion\" on this before I put out a\nrelease announcement *grin*\n\nFixing now ... there, new, and proper one, built ... *sheepish grin*\n\nit seems to be the \"in thing\" nowadays to include an MD5 signature, so\nhere it is:\n\nMD5(postgresql-6.5.3.tar.gz)= cf921a8aa2adc846a13e019ce920c83a\n\nits in the postgresql-6.5.3.tar.gz.md5 file on the ftp site too...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 4 Nov 1999 00:19:28 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": ">Fixing now ... there, new, and proper one, built ... *sheepish grin*\n>\n>it seems to be the \"in thing\" nowadays to include an MD5 signature, so\n>here it is:\n>\n>MD5(postgresql-6.5.3.tar.gz)= cf921a8aa2adc846a13e019ce920c83a\n>\n>its in the postgresql-6.5.3.tar.gz.md5 file on the ftp site too...\n\nOk. I have tested on my LinuxPPC box. Regression tests seem good,\npgaccess 0.98 works fine. Thanks.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 04 Nov 1999 15:09:53 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ... "
},
{
"msg_contents": "On Thu, 4 Nov 1999, The Hermit Hacker wrote:\n\n> On Wed, 3 Nov 1999, Bruce Momjian wrote:\n> \n> > > On Wed, Nov 03, 1999 at 09:50:09PM -0400, The Hermit Hacker wrote:\n> > > > \n> > > > Can someone else take a quick peak at the tarball before we release it,\n> > > > just to make sure that nothing is missing? Its in the normal place, but\n> > > > haven't announced it yet...\n> > > > \n> > > \n> > > I downloaded, compiled and installed with no problem and a RedHat 6.0/x86 \n> > > machine. One thing I noticed is that the PG_SUBVERSION string is still\n> > > reporting 1. I was confused for a minute when the program reported itself\n> > > as 6.5.1. I thought the installation failed or something like that. I \n> > > checked postgresql-6.5.3/src/include/version.h.in and sure enough, it's \n> > > defined a 6.5.1 If this is intentional, I'll slap myself for you.\n> > > \n> > > [root@loopy]# /usr/bin/psql basement\n> > > Welcome to the POSTGRESQL interactive sql monitor:\n> > > Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> > > [PostgreSQL 6.5.1 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n> > \n> > That seems totally wrong. My version.h.in file says 6.5.3. Either you\n> > got the wrong file, or Marc has packaged the wrong files.\n> \n> Not bad, eh? I'm the one that creates the TAGS, and I don't even use the\n> wrong one...glad I got a \"second opinion\" on this before I put out a\n> release announcement *grin*\n> \n> Fixing now ... there, new, and proper one, built ... *sheepish grin*\n> \n> it seems to be the \"in thing\" nowadays to include an MD5 signature, so\n> here it is:\n> \n> MD5(postgresql-6.5.3.tar.gz)= cf921a8aa2adc846a13e019ce920c83a\n> \n> its in the postgresql-6.5.3.tar.gz.md5 file on the ftp site too...\n\nWill there also be a patch 6.5.2 -> 6.5.3 made for this one?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 4 Nov 1999 05:48:02 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "On Wed, 3 Nov 1999, Bruce Momjian wrote:\n\n>> Any special highlights I should know about with this version or is it \n>> mainly fixes?\n>\n>\n>History file has:\n>\n>\tUpdated version of pgaccess 0.98\n>\tNT-specific patch\n>\tFix dumping rules on inherited tables\n\nIt should also mention the fix for alpha/cc. Too late I'm afraid, and not\nreally important.\n\n-- \n-------------------------------------------------------------------\nPedro José Lobo Perea Tel: +34 91 336 78 19\nCentro de Cálculo Fax: +34 91 331 92 29\nE.U.I.T. Telecomunicación e-mail: [email protected]\nUniversidad Politécnica de Madrid\nCtra. de Valencia, Km. 7 E-28031 Madrid - España / Spain\n\n",
"msg_date": "Thu, 4 Nov 1999 13:22:48 +0100 (MET)",
"msg_from": "\"Pedro J. Lobo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "On Thu, 4 Nov 1999, Vince Vielhaber wrote:\n\n> Will there also be a patch 6.5.2 -> 6.5.3 made for this one?\n\nIf so, please consider also making patches from 6.5.0 or 6.5.1 to 6.5.3\nbecause the 6.5.2 patch was somewhat messed up. Of course a patch against\nthe messed up 6.5.2 would work, but I'd kind of like a cleaner solution.\n\nThanks,\n\tPeter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 4 Nov 1999 14:24:17 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On Wed, 3 Nov 1999, Bruce Momjian wrote:\n> \n> >> Any special highlights I should know about with this version or is it \n> >> mainly fixes?\n> >\n> >\n> >History file has:\n> >\n> >\tUpdated version of pgaccess 0.98\n> >\tNT-specific patch\n> >\tFix dumping rules on inherited tables\n> \n> It should also mention the fix for alpha/cc. Too late I'm afraid, and not\n> really important.\n\nI don't remember seeing that in the cvs logs for 6.5.3.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Nov 1999 09:48:04 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "On Thu, 4 Nov 1999, Bruce Momjian wrote:\n\n>> >History file has:\n>> >\n>> >\tUpdated version of pgaccess 0.98\n>> >\tNT-specific patch\n>> >\tFix dumping rules on inherited tables\n>> \n>> It should also mention the fix for alpha/cc. Too late I'm afraid, and not\n>> really important.\n>\n>I don't remember seeing that in the cvs logs for 6.5.3.\n\nI sent a patch to the patches list two or three weeks ago. I think that\nyou didn't apply exactly that patch, but someting similar. The problem is\nindeed solved, because 6.5.2 didn't compile out of the box and 6.5.3 does.\n\n-- \n-------------------------------------------------------------------\nPedro José Lobo Perea Tel: +34 91 336 78 19\nCentro de Cálculo Fax: +34 91 331 92 29\nE.U.I.T. Telecomunicación e-mail: [email protected]\nUniversidad Politécnica de Madrid\nCtra. de Valencia, Km. 7 E-28031 Madrid - España / Spain\n\n",
"msg_date": "Thu, 4 Nov 1999 16:59:31 +0100 (MET)",
"msg_from": "\"Pedro J. Lobo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "On Thu, 4 Nov 1999, Peter Eisentraut wrote:\n\n> On Thu, 4 Nov 1999, Vince Vielhaber wrote:\n> \n> > Will there also be a patch 6.5.2 -> 6.5.3 made for this one?\n> \n> If so, please consider also making patches from 6.5.0 or 6.5.1 to 6.5.3\n> because the 6.5.2 patch was somewhat messed up. Of course a patch against\n> the messed up 6.5.2 would work, but I'd kind of like a cleaner solution.\n\nDoes someone want to give me a 'diff' command that they feel is good?\nNobody ever seems to like the one that I use :(\n\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 4 Nov 1999 12:30:55 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "On Wed, 3 Nov 1999, The Hermit Hacker wrote:\n\n> Can someone else take a quick peak at the tarball before we release it,\n> just to make sure that nothing is missing? Its in the normal place, but\n> haven't announced it yet...\n\n\tFor my two cents, the Linux/Alpha patches for 6.5.2 apply cleanly\nto the 6.5.3 tarball. I am compiling and running regression tests now, but\nI don't expect any problems.\n\tOnce there is a formal release annoucement, I will make an\nannouncement to pgsql-ports list (and elsewhere?) and update my web site\nto reflect that the 6.5.2 alpha patches apply fine to 6.5.3, and are all\nthat are needed to get pgsql 6.5.2 running on Linux/Alpha.\n\t...\n\tOk, with the 6.5.2 patches, pgsql compiled fine and ran regression\ntests with no problems. Only failures were the standard off by one in nth\ndecimal place geometry and the sort order difference in rules, both\nharmless (IMHO).\n\tYou have my go ahead (as if you needed it :) to release the 6.5.3\ntarball.\n\n\tPS. Was there any speed ups in this version, or is just due to the\nfact I finally have my Ultra disk on an Ultra SCSI controller that caused\nthe at least halving, if not not more of regression runtimes? :)\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n\n",
"msg_date": "Thu, 4 Nov 1999 19:38:47 -0600 (CST)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "On Thu, 04 Nov 1999, Ryan Kirkpatrick wrote:\n> On Wed, 3 Nov 1999, The Hermit Hacker wrote:\n> \n> > Can someone else take a quick peak at the tarball before we release it,\n> > just to make sure that nothing is missing? Its in the normal place, but\n> > haven't announced it yet...\n> \n> \tFor my two cents, the Linux/Alpha patches for 6.5.2 apply cleanly\n> to the 6.5.3 tarball. I am compiling and running regression tests now, but\n> I don't expect any problems.\n\nI'm glad you checked that.... I am now going to upgrade the version on the\nRPM's I'm building from 0.2 (testing version) to 1 (stable version). The Alpha\npatches were the only thing I couldn't test here. Thanks!\n\nRPM's will be available from my site tomorrow. Thomas or whoever can mirror\nthem over to ftp.postgresql.org. The release e-mail I will send out tomorrow\n(after the Official Scrappy-Approved Release (TM) of 6.5.3, that is.) will\ncontain the text of the news item in it (note for Vince).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 4 Nov 1999 21:22:44 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "> > For my two cents, the Linux/Alpha patches for 6.5.2 apply cleanly\n> > to the 6.5.3 tarball. I am compiling and running regression tests \n> > now, but I don't expect any problems.\n\nRyan, y'all have some final results? What version of linux are you\nrunning? I'll update the ports listing when I get the new info...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 06 Nov 1999 06:13:25 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "On Sat, 6 Nov 1999, Thomas Lockhart wrote:\n\n> > > For my two cents, the Linux/Alpha patches for 6.5.2 apply cleanly\n> > > to the 6.5.3 tarball. I am compiling and running regression tests \n> > > now, but I don't expect any problems.\n> \n> Ryan, y'all have some final results? What version of linux are you\n> running? I'll update the ports listing when I get the new info...\n\n\tYea, the compile was clean after applying the 6.5.2 Linux/Alpha\npatches, and regression test results were the same as they have been for\nsome time. Geometry failed due to off by one in nth decimal place and\nrules failed due to sort order issues. \n\tAs for my system specs, I am running stock Debian 2.1r2, on an\nXLT366. Kernel version is 2.0.36, but I see no reason it should not work\non 2.2.x kernels. I can't test 2.2.x kernels at the moment due to some\nindependent hardware issues between 2.2.x and an IBM SCSI disk. \n\tBasically, you can update the ports listing to state that with\nthe alpha patches on my web site (or in the pgsql-patches mailing list\narchive), pgsql runs great with no problems on Linux/Alpha save for a few\n(harmless) unaligned access now and then. I just looked at the ports list,\nand it does need to be updated as it states the last version tested\nagainst Linux/Alpha was 6.3.2. :) \n\tAlso, please update my email address to '[email protected]'. That\nis where all pgsql related emails should be sent to reach me. The\nremaining life span on the nag email address is undetermined at this time,\nbut probably less than seven months. Thanks.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n",
"msg_date": "Sat, 6 Nov 1999 09:52:11 -0600 (CST)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "> Basically, you can update the ports listing to state that with\n> the alpha patches on my web site (or in the pgsql-patches mailing list\n> archive), pgsql runs great with no problems on Linux/Alpha save for a few\n> (harmless) unaligned access now and then.\n\nSorry, the patches Ryan has posted are the same or different than the\nones Lamar is using/packaging?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 06 Nov 1999 21:20:17 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "On Sat, 6 Nov 1999, Thomas Lockhart wrote:\n\n> > Basically, you can update the ports listing to state that with\n> > the alpha patches on my web site (or in the pgsql-patches mailing list\n> > archive), pgsql runs great with no problems on Linux/Alpha save for a few\n> > (harmless) unaligned access now and then.\n> \n> Sorry, the patches Ryan has posted are the same or different than the\n> ones Lamar is using/packaging?\n\n\tUhh.. I think I lost you, or you lost me... I do not know which\npatches Lamar is using, but I assume they would be the pgsql 6.5.2\nlinux/alpha patches. And I have already stated twice that they apply\ncleanly to 6.5.3 pre-release, and work fine. Would it just be easier if I\ntook my patch file, renamed it to 6.5.3, and released it again? :)\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n",
"msg_date": "Sat, 6 Nov 1999 16:18:55 -0600 (CST)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
},
{
"msg_contents": "On Sat, 06 Nov 1999, Ryan Kirkpatrick wrote:\n> On Sat, 6 Nov 1999, Thomas Lockhart wrote:\n> > Sorry, the patches Ryan has posted are the same or different than the\n> > ones Lamar is using/packaging?\n> \n> \tUhh.. I think I lost you, or you lost me... I do not know which\n> patches Lamar is using, but I assume they would be the pgsql 6.5.2\n> linux/alpha patches. And I have already stated twice that they apply\n> cleanly to 6.5.3 pre-release, and work fine. Would it just be easier if I\n> took my patch file, renamed it to 6.5.3, and released it again? :)\n\nThe patches I am packaging with the 6.5.3-1 RPMS are the 6.5.2 patches Ryan\nreleased not long ago. Ryan, if you want to do it, it would be nice to have an\n'Official' test of the RPM packaging on RedHat/Alpha -- pick up the source rpm\n(http://www.ramifordistat.net/postgres/SRPMS/postgresql-6.5.3-1.src.rpm) and\nsee if the binary RPM's produced by a 'rpm --rebuild\npostgresql-6.5.3-1.src.rpm' are sane. That would be a nice thing. I am\ncontemplating buying a used slower AT-form-factor Alpha motherboard to do just\nthis myself -- but, if you can and will, that will suffice.\n\nYour first statement as to the cleanness of the patch application is what\nallowed me to name these RPM's as 'stable' and non-beta -- had you not made\nthat mention, the 6.5.3 RPM's would be beta until I got confirmation of the\nAlpha patches applicability.\n\nAnd, Ryan, THANKS for the packaging of those patches!\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 6 Nov 1999 23:41:52 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.3 built, but not released ..."
}
] |
[
{
"msg_contents": "Now that's a close to linear as you are going to get. Pretty good I think:\na sort of one billion rows in half an hour.\n\nMikea\n\n>> -----Original Message-----\n>> From: Tatsuo Ishii [mailto:[email protected]]\n>> Sent: Thursday, November 04, 1999 10:30 AM\n>> To: Tom Lane\n>> Cc: [email protected]; [email protected]\n>> Subject: Re: [HACKERS] sort on huge table \n>> \n>> \n>> >\n>> >Tatsuo Ishii <[email protected]> writes:\n>> >> I have compared current with 6.5 using 1000000 \n>> tuple-table (243MB) (I\n>> >> wanted to try 2GB+ table but 6.5 does not work in this case). The\n>> >> result was strange in that current is *faster* than 6.5!\n>> >\n>> >> RAID5\n>> >> \tcurrent\t2:29\n>> >> \t6.5.2\t3:15\n>> >\n>> >> non-RAID\n>> >> \tcurrent\t1:50\n>> >> \t6.5.2\t2:13\n>> >\n>> >> Seems my previous testing was done in wrong way or the behavior of\n>> >> sorting might be different if the table size is changed?\n>> >\n>> >Well, I feel better now, anyway ;-). I thought that my first cut\n>> >ought to have been about the same speed as 6.5, and after I added\n>> >the code to slurp up multiple tuples in sequence, it should've been\n>> >faster than 6.5. The above numbers seem to be in line with that\n>> >theory. Next question: is there some additional effect that comes\n>> >into play once the table size gets really huge? I am thinking maybe\n>> >there's some glitch affecting performance once the temp file size\n>> >goes past one segment (1Gb). Tatsuo, can you try sorts of say\n>> >0.9 and 1.1 Gb to see if something bad happens at 1Gb? I could\n>> >try rebuilding here with a small RELSEG_SIZE, but right at the\n>> >moment I'm not certain I'd see the same behavior you do...\n>> \n>> Ok. I have run some testings with various amount of data.\n>> \n>> RedHat Linux 6.0\n>> Kernel 2.2.5-smp\n>> 512MB RAM\n>> Sort mem: 80MB\n>> RAID5\n>> \n>> 100 million tuples\t1:31\n>> 200\t\t\t4:24\n>> 300\t\t\t7:27\n>> 400\t\t\t11:11 <-- 970MB\n>> 500\t\t\t14:01 <-- 1.1GB (segmented files)\n>> 600\t\t\t18:31\n>> 700\t\t\t22:24\n>> 800\t\t\t24:36\n>> 900\t\t\t28:12\n>> 1000\t\t\t32:14\n>> \n>> I didn't see any bad thing at 1.1GB (500 million).\n>> --\n>> Tatsuo Ishii\n>> \n>> ************\n>> \n",
"msg_date": "Thu, 4 Nov 1999 10:41:57 +0200 ",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] sort on huge table "
},
{
"msg_contents": ">Now that's a close to linear as you are going to get. Pretty good I think:\n>a sort of one billion rows in half an hour.\n\nOops! It's not one billion but 10 millions. Sorry.\n\n1 million tuples\t1:31\n2\t\t\t4:24\n3\t\t\t7:27\n4\t\t\t11:11 <-- 970MB\n5\t\t\t14:01 <-- 1.1GB (segmented files)\n6\t\t\t18:31\n7\t\t\t22:24\n8\t\t\t24:36\n9\t\t\t28:12\n10\t\t\t32:14\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 04 Nov 1999 18:10:19 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] sort on huge table "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> Now that's a close to linear as you are going to get. Pretty good I think:\n>> a sort of one billion rows in half an hour.\n\n> Oops! It's not one billion but 10 millions. Sorry.\n\nActually, the behavior ought to be O(N log N) not O(N).\n\nWith a little bit of arithmetic, we get\n\nMTuples\tTime\tSec\tDelta\tSec/\tMTuple/sec\n\t\t\tsec\tMTuple\n\n1\t1:31\t91\t91\t91\t0.010989\n2\t4:24\t264\t173\t132\t0.00757576\n3\t7:27\t447\t183\t149\t0.00671141\n4\t11:11\t671\t224\t167.75\t0.00596125\n5\t14:01\t841\t170\t168.2\t0.0059453\n6\t18:31\t1111\t270\t185.167\t0.00540054\n7\t22:24\t1344\t233\t192\t0.00520833\n8\t24:36\t1476\t132\t184.5\t0.00542005\n9\t28:12\t1692\t216\t188\t0.00531915\n10\t32:14\t1934\t242\t193.4\t0.00517063\n\nwhich is obviously nonlinear. Column 5 should theoretically be a log(N)\ncurve, and it's not too hard to draw one that matches it pretty well\n(see attached plot).\n\nIt's pretty clear that we don't have any specific problem with the\none-versus-two-segment issue, which is good (I looked again at the code\nand couldn't see any reason for such a problem to exist). But there's\nstill the question of whether the old code is faster.\n\nTatsuo, could you run another set of numbers using 6.5.* and the same\ntest conditions, as far up as you can get with 6.5? (I think you ought\nto be able to reach 2Gb, though not pass it, so most of this curve\ncan be compared to 6.5.)\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 04 Nov 1999 10:50:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] sort on huge table "
}
] |
[
{
"msg_contents": "Why can't this simply be done with a stored proc? Or am I missing the boat?\nStored proc accepts parameters to insert, and returns whatever value you\nwant it to.\n\nMikeA\n\n>> -----Original Message-----\n>> From: Aaron J. Seigo [mailto:[email protected]]\n>> Sent: Thursday, November 04, 1999 8:26 AM\n>> To: Tom Lane\n>> Cc: Ed Loehr; [email protected]\n>> Subject: Re: [HACKERS] getting new serial value of serial insert\n>> \n>> \n>> hi...\n>> \n>> > I don't like *any* of the proposals that have appeared in \n>> this thread.\n>> > Inventing nonstandard SQL syntax is a bad idea, and furthermore all\n>> \n>> agreed... at the same time though, just about every other \n>> database out there has\n>> non-standard SQL statements to work around various \n>> limitations, perceived and\n>> otherwise... also, a quick look through the user \n>> documentation for postgres\n>> will show that there already are a lot of non-standard SQL \n>> statements..\n>> *shrug*\n>> \n>> > of these solutions are extremely limited in capability: \n>> they only work\n>> > for \"serial\" columns, they only work for a single serial \n>> column, etc\n>> > etc. If we're going to address this issue at all, we should invent\n>> > a general-purpose mechanism for passing back to the \n>> frontend application\n>> > the results of server-side operations that are performed \n>> as a side effect\n>> > of SQL commands.\n>> \n>> the RETURN cluase concept isn't limited to serial columns or \n>> single columns...\n>> it would allow the return of any columns that were affected by the\n>> INSERT/UPDATE/DELETE...\n>> \n>> > The idea that comes to my mind is to invent a new command, \n>> available in\n>> > \"trigger\" procedures, that causes a message to be sent to \n>> the frontend\n>> > application. This particular problem of returning a \n>> serial column's\n>> > value could be handled in an \"after insert\" trigger \n>> procedure, with a\n>> > command along the lines of\n>> > \tSENDFE \"mytable.col1=\" + new.col1\n>> > We'd have to think about what restrictions to put on the message\n>> > contents, if any. It might be sufficient just to counsel users\n>> > to stick identification strings on the front of the message text\n>> > as illustrated above.\n>> \n>> i don't think this is leaps and bounds above what can \n>> already be done with\n>> functions, triggers and external code now. while this would \n>> probably create a\n>> speed adantage (by skipping a select statement step) it \n>> would still leave the\n>> problem of having to implement a trigger for every type of \n>> data you want back. \n>> \n>> and there are limitations inherent to this method: if\n>> you wanted field1 returned when updating feild2, but field3 \n>> when updating\n>> fielld4... except that one time when you want both field1 \n>> and field3 returned...\n>> *takes a deep breath* it just isn't flexible enough... \n>> \n>> for every possible return situation, you'd have to define it \n>> in a trigger...\n>> and there still would be limitations to what rules you could \n>> set up.. e.g. how\n>> would you define in a trigger different returned values \n>> depending on the user\n>> that is currently accessing the database? a real world \n>> example would be a user\n>> coming in over the web and an admin coming in through the \n>> same method. unless\n>> pgsql handles the user authentication (which in most \n>> webplications, it doesn't)\n>> there would be no way to tell the difference without going \n>> through more work\n>> than it takes to do it with current methods (e.g. select).\n>> \n>> > transaction, there wouldn't be any artificial restriction to just\n>> > returning one or a fixed number of values. Finally, we'd not be\n>> > creating data-type-specific behavior for SERIAL; the facility could\n>> > be used for many things.\n>> \n>> this is _exactly_ what i have said in several previous \n>> posts: that it should not\n>> be limited just to serial fields... \n>> \n>> > We'd need to think about just how to make the messages available to\n>> > client applications. For libpq, something similar to the existing\n>> > NOTIFY handling might work. Not sure how that would map \n>> into ODBC or\n>> > other frontend libraries.\n>> \n>> if it was integrated into the INSERT/UPDATE/DELETE queries, \n>> it wouldn't need to\n>> be implemented in each frontend library. it would just be \n>> output, much like the\n>> OID and # of records inserted that currently appears after an\n>> INSERT/UDPATE/DELETE.\n>> \n>> however, if it is so completely horrid to add functionality \n>> to the SQL\n>> statements, i really can't think of another method that \n>> would provide the\n>> functionality that would actually make it useful outside of \n>> a limited number of\n>> situations.... so unless someone can think of a way, maybe \n>> its just better to\n>> leave it be.\n>> \n>> -- \n>> Aaron J. Seigo\n>> Sys Admin\n>> \n>> Rule #1 of Software Design: Engineers are _not_ users\n>> \n>> ************\n>> \n",
"msg_date": "Thu, 4 Nov 1999 10:53:41 +0200 ",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] getting new serial value of serial insert"
},
{
"msg_contents": "I assume it is possible in pgsql to return the just-inserted serial value\nwith a stored procedure.\n\nStored procedures, though, would seem to be significantly more hassle vs.\nthe INSERT-returns-serial approach. I share the concern about non-std\nSQL, though it seems the pgsql system (like most other RDBMS) is\nalready loaded with non-std SQL precisely because the std has\nrepeatedly been judged lacking for itches that needed scratching.\n\nAs for concern about modifying INSERT semantics just for serial types,\nthat too, I would normally share. A generalized solution is better.\nHowever, the pg serial type is already a special case, constructed by\nPostgres from other existing components unlike other types. For that\nreason, I think the case of facilitating an atomic return of the\nserial value from a SQL insert statement would provide pragmatic\nsupport for the key access mode to a special-case (non-std?) extension\nalready present. For the same reason, it strikes me that the\ngeneralized ability to return any value from an INSERT should be\ntreated as largely orthogonal to the special case serial type.\n\nCheers.\nEd\n\n\"Ansley, Michael\" wrote:\n\n> Why can't this simply be done with a stored proc? Or am I missing the boat?\n> Stored proc accepts parameters to insert, and returns whatever value you\n> want it to.\n\n",
"msg_date": "Thu, 04 Nov 1999 10:36:50 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] getting new serial value of serial insert"
},
{
"msg_contents": "hi..\n\n> Why can't this simply be done with a stored proc? Or am I missing the boat?\n> Stored proc accepts parameters to insert, and returns whatever value you\n> want it to.\n> \n\nin an earlier post i mentioned that this doesn't do anything FUNCTIONALY new,\nit merely allows doing it with EASE and greater SPEED.. \n\nease, because you don't have to write a function (not really stored procedure\n=) to handle each specific insert and return pair you want.. with RETURN this\nwould be defined on a per query basis... \n\nspeed, because you would skip the SELECT to get the information.. it would tap\nthe tuple whilst still in memory during the read, like a tigger... you skip the\nSELECt...\n\nlast, it allows certain security possibilities: giving people access to the\ninformation they just inserted without giving them general SELECT permissions\non the table(s) involved...\n\nso, no.. you aren't missing the boat by thinking this sort of thing CAN be done\nvia other methods. the point is merely that the current methods are clumsy and\nslow and it seems a number of people are going through the current necessary\nhoops... \n\n-- \nAaron J. Seigo\nSys Admin\n",
"msg_date": "Thu, 4 Nov 1999 14:29:15 -0700",
"msg_from": "\"Aaron J. Seigo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] getting new serial value of serial insert"
}
] |
[
{
"msg_contents": "Still pretty good. That's five and a half thousand rows a second.\n\n>> -----Original Message-----\n>> From: Tatsuo Ishii [mailto:[email protected]]\n>> Sent: Thursday, November 04, 1999 11:10 AM\n>> To: Ansley, Michael\n>> Cc: '[email protected]'; Tom Lane; [email protected]\n>> Subject: Re: [HACKERS] sort on huge table \n>> \n>> \n>> >Now that's a close to linear as you are going to get. \n>> Pretty good I think:\n>> >a sort of one billion rows in half an hour.\n>> \n>> Oops! It's not one billion but 10 millions. Sorry.\n>> \n>> 1 million tuples\t1:31\n>> 2\t\t\t4:24\n>> 3\t\t\t7:27\n>> 4\t\t\t11:11 <-- 970MB\n>> 5\t\t\t14:01 <-- 1.1GB (segmented files)\n>> 6\t\t\t18:31\n>> 7\t\t\t22:24\n>> 8\t\t\t24:36\n>> 9\t\t\t28:12\n>> 10\t\t\t32:14\n>> --\n>> Tatsuo Ishii\n>> \n>> ************\n>> \n",
"msg_date": "Thu, 4 Nov 1999 11:12:48 +0200 ",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] sort on huge table "
}
] |
[
{
"msg_contents": "Hello.\nI have not got any help from SQL and GENERAL groups so I send my problem to\nyou.\n\n\nIn Postgres Users Guide, CREATE TABLE section, the following is stated:\n\n Postgres automatically allows the created table to inherit functions on\ntables above it in the inheritance hierarchy. \n\n Aside: Inheritance of functions is done according to the\nconventions of the Common Lisp Object System (CLOS). \n\nI have tried different constructs but I have not been able to create such a\nfunction. Can anybody send me an example of a function that will be\ninherited by inherited table? I. e.\ncreate table A (\n.\n.\n);\n\ncreate function F ...\n\ncreate table B (\n..\n) inherits (A);\n\nNow I assume that I can somehow use function F on table B \n\nThe specific example is given below !!\n\nThank you, \nRegards,\nAndrzej Mazurkiewicz\n\n\n-----Original Message-----\nFrom:\tAndrzej Mazurkiewicz \nSent:\t27 paYdziernika 1999 18:09\nTo:\t'[email protected]'\nSubject:\tRE: [GENERAL] FW: inheritance of functions\n\nHello.\nHere is an example of my problem:\n\nccbslin2:~/lipa$ psql -c \"drop database archimp0;\" template1\nDESTROYDB\nccbslin2:~/lipa$ psql -c \"create database archimp0;\" template1\nCREATEDB\nccbslin2:~/lipa$ psql -f funinh1.sql archimp0\nBEGIN WORK;\nBEGIN\nCREATE TABLE A (\n liczba float\n);\nCREATE\nCOMMIT WORK;\nEND\n\nBEGIN WORK;\nBEGIN\nCREATE FUNCTION suma (A) RETURNS float\n AS 'SELECT $1.liczba AS suma;' LANGUAGE 'sql';\nCREATE\nCOMMIT WORK;\nEND\n\nBEGIN WORK;\nBEGIN\nCREATE TABLE B (\n liczwym float\n) INHERITS (A)\n;\nCREATE\nCOMMIT WORK;\nEND\n\nBEGIN WORK;\nBEGIN\nINSERT INTO A (liczba) VALUES (1.56);\nINSERT 71414 1\nCOMMIT WORK;\nEND\n\nBEGIN WORK;\nBEGIN\nINSERT INTO B (liczba, liczwym) VALUES (2.5, 3.2);\nINSERT 71415 1\nCOMMIT WORK;\nEND\n\nselect liczba, suma(A) from A;\nliczba|suma\n------+----\n 1.56|1.56\n(1 row)\n\nselect liczba, suma(A) from A*;\nliczba|suma\n------+----\n 1.56|1.56\n 2.5| 2.5\n(2 rows)\n\n[Andrzej 
Mazurkiewicz] --\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n \nselect liczba, suma(B) from B; [Andrzej Mazurkiewicz] !!!!!!! \nERROR: Functions on sets are not yet supported [Andrzej Mazurkiewicz]\n!!!!!!! \n\n[Andrzej Mazurkiewicz] --\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! \n\nEOF\n\n----------------------------------------------------------------------------\n--------------------------------------\n\nAfter invoking psql:\n\n\narchimp0=> select * from pg_proc where proname = 'suma';\nproname|proowner|prolang|proisinh|proistrusted|proiscachable|pronargs|proret\nset|prorettype|\nproargtypes|probyte_pct|properbyte_cpu|propercall_cpu|proouti\nn_ratio|prosrc |probin\n-------+--------+-------+--------+------------+-------------+--------+------\n---+----------+-------------------+-----------+--------------+--------------\n+--------------+-------------------------+------\nsuma | 302| 14|f |t |f | 1|f\n| 701|71393 0 0 0 0 0 0 0| 100| 0| 0|\n100|SELECT $1.liczba AS suma;|- \n(1 row)\n\narchimp0=> \n\nI am looking for working example !!!!!\n\nRegards,\nAndrzej Mazurkiewicz\n\t-----Original Message-----\n\tFrom:\tAaron J. Seigo [SMTP:[email protected]]\n\tSent:\t27 paYdziernika 1999 17:39\n\tTo:\tAndrzej Mazurkiewicz; '[email protected]'\n\tSubject:\tRe: [GENERAL] FW: inheritance of functions\n\n\thi...\n\n\t> > Postgres automatically allows the created table to inherit\nfunctions on\n\t> > tables above it in the inheritance hierarchy. \n\t> > create table A (\n\t> > .\n\t> > .\n\t> > );\n\t> > \n\t> > create function F ...\n\t> > \n\t> > create table B (\n\t> > ..\n\t> > ) inherits (A);\n\t> > \n\t> > Now I assume that I can somehow use function F on table B \n\n\tyou would be able to use function F on table B even if it didn't\ninherit A. \n\n\thowever, if you construct rules, triggers, etc... 
on table A, these\nshould be\n\tinherited by table B.\n\n\tthe manual is, as far as my experience has led me to believe,\nreferring to\n\tfunctions \"bound\" (for lack of a better word) to the parent\ntable....\n\n\t-- \n\tAaron J. Seigo\nSys Admin\n",
"msg_date": "Thu, 4 Nov 1999 16:10:54 +0100 ",
"msg_from": "Andrzej Mazurkiewicz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inheritance of functions"
}
] |
[
{
"msg_contents": "Marc, I just made a change for bsd/os 4.1. Can you repackage the\ntarball before making an announcement?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Nov 1999 11:22:47 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "packaging of 6.5.3"
},
{
"msg_contents": "\nDone...try that one out and let me know...if that works, we'll announce\nit...\n\n\n\nOn Thu, 4 Nov 1999, Bruce Momjian wrote:\n\n> Marc, I just made a change for bsd/os 4.1. Can you repackage the\n> tarball before making an announcement?\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 4 Nov 1999 14:51:04 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: packaging of 6.5.3"
},
{
"msg_contents": "> \n> Done...try that one out and let me know...if that works, we'll announce\n> it...\n> \n\nI don't even have the beta here to test. Let's just hope it works when\nbsd/os 4.1 is released. I got my info from a beta tester. I am one too,\nbut have not installed it yet.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Nov 1999 13:54:13 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: packaging of 6.5.3"
},
{
"msg_contents": "\nUmmm...I more meant \"try out this tarball and make sure that I didn't miss\nanything\" *grin*\n\n\n\nOn Thu, 4 Nov 1999, Bruce Momjian wrote:\n\n> > \n> > Done...try that one out and let me know...if that works, we'll announce\n> > it...\n> > \n> \n> I don't even have the beta here to test. Let's just hope it works when\n> bsd/os 4.1 is released. I got my info from a beta tester. I am one too,\n> but have not installed it yet.\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 4 Nov 1999 15:50:01 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: packaging of 6.5.3"
},
{
"msg_contents": "Looks fine. Has all my recent changes.\n\n\n> \n> Ummm...I more meant \"try out this tarball and make sure that I didn't miss\n> anything\" *grin*\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Nov 1999 15:43:57 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: packaging of 6.5.3"
}
] |
[
{
"msg_contents": "Hmm, it looked distinctly straight-line on my graph. I must have screwed up\nsomething. Sorry...\n\n-----Original Message-----\nFrom: Tom Lane\nTo: [email protected]\nCc: Ansley, Michael; [email protected]\nSent: 11/4/99 5:50 PM\nSubject: Re: [HACKERS] sort on huge table \n\nTatsuo Ishii <[email protected]> writes:\n>> Now that's a close to linear as you are going to get. Pretty good I\nthink:\n>> a sort of one billion rows in half an hour.\n\n> Oops! It's not one billion but 10 millions. Sorry.\n\nActually, the behavior ought to be O(N log N) not O(N).\n\nWith a little bit of arithmetic, we get\n\nMTuples\tTime\tSec\tDelta\tSec/\tMTuple/sec\n\t\t\tsec\tMTuple\n\n1\t1:31\t91\t91\t91\t0.010989\n2\t4:24\t264\t173\t132\t0.00757576\n3\t7:27\t447\t183\t149\t0.00671141\n4\t11:11\t671\t224\t167.75\t0.00596125\n5\t14:01\t841\t170\t168.2\t0.0059453\n6\t18:31\t1111\t270\t185.167\t0.00540054\n7\t22:24\t1344\t233\t192\t0.00520833\n8\t24:36\t1476\t132\t184.5\t0.00542005\n9\t28:12\t1692\t216\t188\t0.00531915\n10\t32:14\t1934\t242\t193.4\t0.00517063\n\nwhich is obviously nonlinear. Column 5 should theoretically be a log(N)\ncurve, and it's not too hard to draw one that matches it pretty well\n(see attached plot).\n\nIt's pretty clear that we don't have any specific problem with the\none-versus-two-segment issue, which is good (I looked again at the code\nand couldn't see any reason for such a problem to exist). But there's\nstill the question of whether the old code is faster.\n\nTatsuo, could you run another set of numbers using 6.5.* and the same\ntest conditions, as far up as you can get with 6.5? (I think you ought\nto be able to reach 2Gb, though not pass it, so most of this curve\ncan be compared to 6.5.)\n\n\t\t\tregards, tom lane\n\n <<plot.gif>> \n",
"msg_date": "Thu, 4 Nov 1999 18:23:04 +0200 ",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] sort on huge table "
}
] |
[
{
"msg_contents": "> > unfortunately '^whatever.*' is what I'm trying to locate (ie: all words\n> > starting with whatever, but with nay trailing text), the problem seems to be in\n> > the termination of the index scan, not in the actual regex match (which actually\n> > seems very good, speed wise..) otherwise I could just use ='whatever', which\n> > runs very very fast.\n> \n> Isn't \"all words that start with whatever but without trailing text\" the\n> same as = 'whatever'? From a regex point of view '^whatever' and\n> '^whatever.*' are exactly equivalent, but I can see where one could fail\n> to optimize properly.\n\nOK, let's turn from speculations to facts (have just gotten off my\nrear end and verified each).:\n\n1. '^whatever.*' and '^whatever' are equivalent regular expressions.\n\n2. The version of regexp used in postgres is aware of this equivalence.\n\n3. Btree index is used in the queries involving anchored expressions:\n\nemp=> explain select * from ps where ps ~ '^EDTA';\nNOTICE: QUERY PLAN:\n\nIndex Scan using psix on ps (cost=2373.21 rows=1 width=62)\n\nemp=> explain select * from ps where ps ~ '^EDTA.*';\nNOTICE: QUERY PLAN:\n\nIndex Scan using psix on ps (cost=2373.21 rows=1 width=62)\n\n(ps is a 250k-row table; the result is returned immediately when\nindexed and in about 3 seconds when not)\n\nHowever,\n\n4. Hash index is never used\n===========================\n\nObservations made with 6.5 on RedHat 5.1.\n\n\n--Gene\n",
"msg_date": "Thu, 04 Nov 1999 13:27:50 -0500",
"msg_from": "\"Gene Selkov, Jr.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] indexed regex select optimisation missing? "
},
{
"msg_contents": "On Fri, 05 Nov 1999, Gene Selkov, Jr. wrote:\n> OK, let's turn from speculations to facts (have just gotten off my\n> rear end and verified each).:\n> \n> 1. '^whatever.*' and '^whatever' are equivalent regular expressions.\n\nyes, sorry, I was aware of this, although I was using .* for clarity and my\nmind got stuck in 'proper' regex mode where those are needed.., it unfortunately\nhas no effect on the outcome here.\n\n> 2. The version of regexp used in postgres is aware of this equivalence.\n\nsure seems that way.\n\n> 3. Btree index is used in the queries involving anchored expressions:\n> \n> emp=> explain select * from ps where ps ~ '^EDTA';\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using psix on ps (cost=2373.21 rows=1 width=62)\n> \n> emp=> explain select * from ps where ps ~ '^EDTA.*';\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using psix on ps (cost=2373.21 rows=1 width=62)\n> \n> (ps is a 250k-row table; the result is returned immediately when\n> indexed and in about 3 seconds when not)\n\nMy point is that, while the index (in 6.5.1 and 6.5.2, anyway) is used to locate\nthe start of the scan, the system is then index-scanning the *whole* rest of the\ntable (which takes minutes for my 1.6 million entry table if it is from near\nthe start), as opposed to using a better 'stop term' to stop scanning once the\nregex will no longer be able to match (ie: the static front of the regex is no\nlonger matching), so the ordered scan is only being half utilised, this makes a\nMASSIVE difference in performance.\n\nFor example, say one of the words in the table is 'alongword', and there is\nalso 'alongwords', but no other words with the root of 'alongword'\n\nIf I do a \"select key from inv_word_i where word='alongword'\" it will use the\nbtree index on inv_word_i, and locate the one match almost instantly.\n\nIf I do a \"select key from inv_word_i where word~'alongword' it will need to\nscan all the records (this takes some time, minutes, infact) - as it 
should!,\nand would match atleast the two entries detailed above.\n\nIf I do a 'select key from inv_word_i where word~'^alongword' it uses the\nindex to find 'alongword', then does an index scan of the *whole* rest of the\ntable check all the rest of the entries for regex matching, so it takes a long\ntime, and returns the two entries detailed above, it will take almost as long\nas the previous query.\n\nWhat it should do is stop as soon as the leftmost part of the regex match no\nlonger matches 'alongword' because, as it is scanning in indexed order, a match\nis no longer possible. The query will then run at nearly the speed of the first\nexample, while finding the required two entries. This method is extensible to\nany regex where there is a '^' followed by a length of static match, as soon as\nthe static part does not match in index scan order, the regex can never be\nmatched.\n\nThis makes a massive difference for searching large indexes of words when you\nwant to match a root words and all extensions of that word (for exmple, window,\nwindows, windowing, windowed, windowless, etc....) - this optimisation (if it\nis missing or broken) would make postgresql a much more powerful tool for this\njob for what would seem to be a quite simple addition.\n\n> \n> However,\n> \n> 4. Hash index is never used\n\nmakes a lot off sense, hash indexes do not supply ordering information, and are\ntherefore only usefull for equivanence location, not ordered scanning, which is\nrequired for the regex situation.\n\n > ===========================\n> \n> Observations made with 6.5 on RedHat 5.1.\n-- \n------------------------------------------------------------\nStuart Woolford, [email protected]\nUnix Consultant.\nSoftware Developer.\nSupra Club of New Zealand.\n------------------------------------------------------------\n",
"msg_date": "Fri, 5 Nov 1999 10:12:06 +1300",
"msg_from": "Stuart Woolford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] indexed regex select optimisation missing?"
},
{
"msg_contents": "Ah, your description just tripped a memory for me from the hackers list:\n\nThe behavior you describe has to do with the implementation of using an\nindex for regex matching, in the presence of the USE_LOCALE configuration\noption.\n\nInternally, the condition: WHERE word~'^alongword' is converted in the\nparser(!) to:\n\nWHERE word >= 'alongword' AND word < 'alongword\\377'\n\nsince the index needs inequalities to be used, not matches. Now, the\nproblem is the hack of tacking an octal \\377 on the string to create\nthe lexagraphically 'just bigger' value assumes ASCI sort order. If\nUSE_LOCALE is defined, this is dropped, since we don't have a good fix\nyet, and slow correct behavior is better than fast, incorrect behavior.\n\nSo, you have two options: if you don't need locale support, recompile\nwithout it. Otherwise, hand code your anchored matches as the pair of\nconditionals above Hmm, is there syntax for adding an arbitrary value to\na string constant in the SQL? I suppose you could use: word < 'alongwore',\ni.e. hand increment the last character, so it's larger than any match.\n\nYour point is correct, the developers are aware of it as a theoretical\nproblem, at least. Always helps to hear a real world case, though. I\nbelieve it's on the TODO list as is, otherwise, pester Bruce. ;-)\n\nReviewing my email logs from June, most of the work on this has to do with\npeople who needs locales, and potentially multibyte character sets. 
Tom\nLane is of the opinion that this particular optimization needs to be moved\nout of the parser, and deeper into the planner or optimizer/rewriter,\nso a good fix may be some ways out.\n\nRoss\n\nOn Fri, Nov 05, 1999 at 10:12:06AM +1300, Stuart Woolford wrote:\n> \n> My point is that, while the index (in 6.5.1 and 6.5.2, anyway) is used to locate\n> the start of the scan, the system is then index-scanning the *whole* rest of the\n> table (which takes minutes for my 1.6 million entry table if it is from near\n> the start), as opposed to using a better 'stop term' to stop scanning once the\n> regex will no longer be able to match (ie: the static front of the regex is no\n> longer matching), so the ordered scan is only being half utilised, this makes a\n> MASSIVE difference in performance.\n> \n> For example, say one of the words in the table is 'alongword', and there is\n> also 'alongwords', but no other words with the root of 'alongword'\n> \n\n[...]\n\n> \n> If I do a 'select key from inv_word_i where word~'^alongword' it uses the\n> index to find 'alongword', then does an index scan of the *whole* rest of the\n> table check all the rest of the entries for regex matching, so it takes a long\n> time, and returns the two entries detailed above, it will take almost as long\n> as the previous query.\n> \n> What it should do is stop as soon as the leftmost part of the regex match no\n> longer matches 'alongword' because, as it is scanning in indexed order, a match\n> is no longer possible. The query will then run at nearly the speed of the first\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Thu, 4 Nov 1999 16:06:21 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] indexed regex select optimisation missing?"
},
{
"msg_contents": "On Fri, 05 Nov 1999, you wrote:\n> Ah, your description just tripped a memory for me from the hackers list:\n> \n> The behavior you describe has to do with the implementation of using an\n> index for regex matching, in the presence of the USE_LOCALE configuration\n> option.\n> \n> Internally, the condition: WHERE word~'^alongword' is converted in the\n> parser(!) to:\n> \n> WHERE word >= 'alongword' AND word < 'alongword\\377'\n> \n> since the index needs inequalities to be used, not matches. Now, the\n> problem is the hack of tacking an octal \\377 on the string to create\n> the lexagraphically 'just bigger' value assumes ASCI sort order. If\n> USE_LOCALE is defined, this is dropped, since we don't have a good fix\n> yet, and slow correct behavior is better than fast, incorrect behavior.\n\nah, now this makes sense, I'm using the RPMs, and I bet they have lexical\nenabled by default (damb! perhaps another set should be produced without this\noption? it makes a BIG difference)\n\n > > So, you have two options: if you don't need locale support,\nrecompile > without it. Otherwise, hand code your anchored matches as the pair\nof > conditionals above Hmm, is there syntax for adding an arbitrary value to\n> a string constant in the SQL? I suppose you could use: word < 'alongwore',\n> i.e. hand increment the last character, so it's larger than any match.\n\nI've tried a test using \">='window' and <'windox'\", and it works perfectly, and\nvery very fast, so I think we have found your culprit.\n\n> \n> Your point is correct, the developers are aware of it as a theoretical\n> problem, at least. Always helps to hear a real world case, though. I\n> believe it's on the TODO list as is, otherwise, pester Bruce. ;-)\n> \n> Reviewing my email logs from June, most of the work on this has to do with\n> people who needs locales, and potentially multibyte character sets. 
Tom\n> Lane is of the opinion that this particular optimization needs to be moved\n> out of the parser, and deeper into the planner or optimizer/rewriter,\n> so a good fix may be some ways out.\n\nHmm, perhaps a 'good' initial fix would be to produce another set of RPMs,\nand/or add it to the FAQ in the 4.x section about the slow queries that say\nindexes are used for this type of search. using the >= AND < trick does seem to\nwork, but is a little non-obvious (and hard to code in some situations, it will\nmake quite a difference to how I need to implement my searching system)\n\n> \n> Ross\n\nthank you very very much for your assistance on this, it is greatly appreciated!\n\n-- \n------------------------------------------------------------\nStuart Woolford, [email protected]\nUnix Consultant.\nSoftware Developer.\nSupra Club of New Zealand.\n------------------------------------------------------------\n",
"msg_date": "Fri, 5 Nov 1999 12:09:19 +1300",
"msg_from": "Stuart Woolford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] indexed regex select optimisation missing?"
},
{
"msg_contents": "On Fri, 05 Nov 1999, Ross J. Reedstrom wrote:\n> Ah, your description just tripped a memory for me from the hackers list:\n> \n> The behavior you describe has to do with the implementation of using an\n> index for regex matching, in the presence of the USE_LOCALE configuration\n> option.\n> \n> Internally, the condition: WHERE word~'^alongword' is converted in the\n> parser(!) to:\n> \n> WHERE word >= 'alongword' AND word < 'alongword\\377'\n> \n> since the index needs inequalities to be used, not matches. Now, the\n> problem is the hack of tacking an octal \\377 on the string to create\n> the lexagraphically 'just bigger' value assumes ASCI sort order. If\n> USE_LOCALE is defined, this is dropped, since we don't have a good fix\n> yet, and slow correct behavior is better than fast, incorrect behavior.\n\njust to add to my previous reply, the 'hack' I am using now is:\n\nselect key from inv_word_i where word>='window' and word<'window\\372'\n\nwhich matches very nearly everything in my database (actually, I limit data to\nprintable characters, so it should be safe), and words with my normal queries\n(which are actually Zope queries, and therefore changing the actual search word\nis a little non-trivial)\n\nanyway, just a quick hack that helps performance by several orders of magnitude\nif you have locale enabled (ie: are using the standard RPMs)\nBTW, I assume that my databases will need requilding if I compile up a\nnon-locale aware version, which presents a problem currently :(\n\n------------------------------------------------------------ \nStuart Woolford,\[email protected] Unix Consultant.\nSoftware Developer.\nSupra Club of New Zealand.\n------------------------------------------------------------\n",
"msg_date": "Fri, 5 Nov 1999 12:59:03 +1300",
"msg_from": "Stuart Woolford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] indexed regex select optimisation missing?"
},
{
"msg_contents": "Stuart - \nI'm forwarding a version of your last message to the hackers list, and\nto Lamar Owen, who's the keeper of the RPMs. The short verson, for those\nwho haven't followed this thread over on GENERAL, is that Stuart is being\nbitten by the USE_LOCALE affect on the makeIndexable() function in the\nparser: anchored regex searches on a large table (a glossary, I believe)\ntake a long time, proportional to sort position of the anchoring text:\ni.e. searching for '^zoo' is quick, '^apple' is very slow.\n\nI seems to recall the packagers here (Lamar and Oliver) asking if defining\nUSE_LOCALE for the general RPM or deb would cause any problems for other\nusers, who don't need locale info. Here's a real world example.\n\nThe discussion about this was last June, and shifted focus into the\nmulti-byte problem, as far as I can tell. Bruce, some version of this\nis on the TODO list, right?\n\nRoss\n\nOn Fri, Nov 05, 1999 at 12:09:19PM +1300, Stuart Woolford wrote:\n> On Fri, 05 Nov 1999, you wrote:\n> > Ah, your description just tripped a memory for me from the hackers list:\n> > \n> > The behavior you describe has to do with the implementation of using an\n> > index for regex matching, in the presence of the USE_LOCALE configuration\n> > option.\n> > \n> > Internally, the condition: WHERE word~'^alongword' is converted in the\n> > parser(!) to:\n> > \n> > WHERE word >= 'alongword' AND word < 'alongword\\377'\n> > \n> > since the index needs inequalities to be used, not matches. Now, the\n> > problem is the hack of tacking an octal \\377 on the string to create\n> > the lexagraphically 'just bigger' value assumes ASCI sort order. If\n> > USE_LOCALE is defined, this is dropped, since we don't have a good fix\n> > yet, and slow correct behavior is better than fast, incorrect behavior.\n> \n> ah, now this makes sense, I'm using the RPMs, and I bet they have lexical\n> enabled by default (damb! perhaps another set should be produced without this\n> option? 
it makes a BIG difference)\n> \n> > > So, you have two options: if you don't need locale support,\n> recompile > without it. Otherwise, hand code your anchored matches as the pair\n> of > conditionals above Hmm, is there syntax for adding an arbitrary value to\n> > a string constant in the SQL? I suppose you could use: word < 'alongwore',\n> > i.e. hand increment the last character, so it's larger than any match.\n> \n> I've tried a test using \">='window' and <'windox'\", and it works perfectly, and\n> very very fast, so I think we have found your culprit.\n> \n> > \n> > Your point is correct, the developers are aware of it as a theoretical\n> > problem, at least. Always helps to hear a real world case, though. I\n> > believe it's on the TODO list as is, otherwise, pester Bruce. ;-)\n> > \n> > Reviewing my email logs from June, most of the work on this has to do with\n> > people who needs locales, and potentially multibyte character sets. Tom\n> > Lane is of the opinion that this particular optimization needs to be moved\n> > out of the parser, and deeper into the planner or optimizer/rewriter,\n> > so a good fix may be some ways out.\n> \n> Hmm, perhaps a 'good' initial fix would be to produce another set of RPMs,\n> and/or add it to the FAQ in the 4.x section about the slow queries that say\n> indexes are used for this type of search. using the >= AND < trick does seem to\n> work, but is a little non-obvious (and hard to code in some situations, it will\n> make quite a difference to how I need to implement my searching system)\n> \n> > \n> > Ross\n> \n> thank you very very much for your assistance on this, it is greatly appreciated!\n> \n> -- \n> ------------------------------------------------------------\n> Stuart Woolford, [email protected]\n> Unix Consultant.\n> Software Developer.\n> Supra Club of New Zealand.\n> ------------------------------------------------------------\n-- \nRoss J. 
Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Fri, 5 Nov 1999 09:36:55 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] indexed regex select optimisation missing?"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n> \n> Stuart -\n> I'm forwarding a version of your last message to the hackers list, and\n> to Lamar Owen, who's the keeper of the RPMs. The short verson, for those\n\n> > Hmm, perhaps a 'good' initial fix would be to produce another set of RPMs,\n\nThat is easy enough. I can build two versions -- with locale, and\nno-locale. No-locale RPM's would be named differently --\npostgresql-6.5.3-1nl.i386.rpm (that's 'one in ell').\n\nI have been helping another user figure out the regression results for\nlocales -- it's not fun. HOWEVER, I also need to follow the\nRedHat-originated standard, with is with locale support.\n\nIt'll take a little bit to rebuild, but not too long -- I could release\nno-locale RPM's as early as tomorrow for RedHat 6.x, and as early as an\nhour from now for RedHat 5.2 (both releases happening after the official\n6.5.3 release, of course).\n\nIn fact, if a user wants to build the no-locale RPM's themselves, it's\nnot too difficult:\n1.)\tget the postgresql-6.5.2-1.src.rpm source RPM (hereafter abbreviated\n'the SRPM')\n2.)\tInstall the SRPM with 'rpm -i'\n3.)\tBecome root, and cd to /usr/src/redhat/SPECS\n4.)\tOpen postgresql.spec with your favorite editor\n5.)\tRemove the configure option '--enable-locale' (if you use vi, and\nare comfortable with doing so, you can ':%s/--enable-locale//g' to good\neffect).\n6.)\tChange the string after the line 'Release:' to be '1nl' from 1.\n7.)\tSave and exit your editor.\n8.)\texecute the command 'rpm -ba postgresql.spec'\n9.)\tWhen it's done, install the new RPM's from the appropriate directory\nunder /usr/src/redhat/RPMS.\n10.)\tClean up by removing the files under SOURCES and the\npostgresql-6.5.2 build tree under BUILD.\n\nNOTE: You need a fairly complete development environment to do this --\nin particular, 'python-devel' must be installed (it's not by default,\neven under a 'C Development' and 'Development Libraries' enabled\ninstallation. 
You do need the C++ compiler installed as well.\n\nWould this help??\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 05 Nov 1999 11:33:36 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] indexed regex select optimisation missing?"
},
{
"msg_contents": "> Stuart - \n> I'm forwarding a version of your last message to the hackers list, and\n> to Lamar Owen, who's the keeper of the RPMs. The short verson, for those\n> who haven't followed this thread over on GENERAL, is that Stuart is being\n> bitten by the USE_LOCALE affect on the makeIndexable() function in the\n> parser: anchored regex searches on a large table (a glossary, I believe)\n> take a long time, proportional to sort position of the anchoring text:\n> i.e. searching for '^zoo' is quick, '^apple' is very slow.\n> \n> I seems to recall the packagers here (Lamar and Oliver) asking if defining\n> USE_LOCALE for the general RPM or deb would cause any problems for other\n> users, who don't need locale info. Here's a real world example.\n> \n> The discussion about this was last June, and shifted focus into the\n> multi-byte problem, as far as I can tell. Bruce, some version of this\n> is on the TODO list, right?\n\nI have beefed up the FAQ with a mention that locale disables regex\nindexing, and have added to TODO:\n\n\t* Allow LOCALE to use indexes in regular expression searches\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 5 Nov 1999 11:37:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] indexed regex select optimisation missing?"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> Reviewing my email logs from June, most of the work on this has to do with\n> people who needs locales, and potentially multibyte character sets. Tom\n> Lane is of the opinion that this particular optimization needs to be moved\n> out of the parser, and deeper into the planner or optimizer/rewriter,\n> so a good fix may be some ways out.\n\nActually, that part is already done: addition of the index-enabling\ncomparisons is gone from the parser and is now done in the optimizer,\nwhich has a whole bunch of benefits (one being that the comparison\nclauses don't get added to the query unless they are actually used\nwith an index!).\n\nBut the underlying LOCALE problem still remains: I don't know a good\ncharacter-set-independent method for generating a \"just a little bit\nlarger\" string to use as the righthand limit. If anyone out there is\nan expert on foreign and multibyte character sets, some help would\nbe appreciated. Basically, given that we know the LIKE or regex\npattern can only match values beginning with FOO, we want to generate\nstring comparisons that select out the range of values that begin with\nFOO (or, at worst, a slightly larger range). In USASCII locale it's not\nhard: you can do\n\tfield >= 'FOO' AND field < 'FOP'\nbut it's not immediately obvious how to make this idea work reliably\nin the presence of odd collation orders or multibyte characters...\n\nBTW: the \\377 hack is actually wrong for USASCII too, since it'll\nexclude a data value like 'FOO\\377x' which should be included.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Nov 1999 11:46:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] indexed regex select optimisation\n missing?"
},
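The "just a little bit larger" string that Tom wants for the righthand limit can, for single-byte charsets in "C" collation, be built by bumping the last byte of the prefix. The sketch below is a hypothetical illustration of that idea (the function name and shape are invented here, not quoted from the backend); it is valid only under exactly the single-byte, C-locale assumption the thread identifies as the limitation.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical sketch: given the fixed prefix extracted from an anchored
 * pattern (e.g. "FOO" from '^FOO'), build the smallest string greater than
 * every value starting with that prefix, so a planner could emit
 *     field >= prefix AND field < greater_string(prefix)
 * Only correct for single-byte charsets in "C" collation -- the very
 * restriction under discussion in the thread.
 */
static char *greater_string(const char *prefix)
{
    size_t len = strlen(prefix);
    char *work = malloc(len + 1);

    if (work == NULL)
        return NULL;
    memcpy(work, prefix, len + 1);
    while (len > 0)
    {
        unsigned char *last = (unsigned char *) &work[len - 1];

        if (*last < 0xFF)
        {
            (*last)++;              /* "FOO" -> "FOP" */
            return work;
        }
        work[--len] = '\0';         /* 0xFF can't be bumped: truncate, retry */
    }
    free(work);
    return NULL;                    /* all bytes 0xFF: no finite upper bound */
}
```

Note that a prefix ending in 0xFF is truncated rather than padded, which sidesteps the `\377` hack's problem of excluding values like `'FOO\377x'`. Under a non-C locale none of this works, because incrementing a byte no longer yields the next string in collation order, which is precisely the open problem in the message above.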
{
"msg_contents": "\nFirstly, damb you guys are good, please accept my strongest complements for the\nresponse time on this issue!\n\nOn Sat, 06 Nov 1999, Tom Lane wrote:\n> \"Ross J. Reedstrom\" <[email protected]> writes:\n> > Reviewing my email logs from June, most of the work on this has to do with\n> > people who needs locales, and potentially multibyte character sets. Tom\n> > Lane is of the opinion that this particular optimization needs to be moved\n> > out of the parser, and deeper into the planner or optimizer/rewriter,\n> > so a good fix may be some ways out.\n> \n> Actually, that part is already done: addition of the index-enabling\n> comparisons is gone from the parser and is now done in the optimizer,\n> which has a whole bunch of benefits (one being that the comparison\n> clauses don't get added to the query unless they are actually used\n> with an index!).\n> \n> But the underlying LOCALE problem still remains: I don't know a good\n> character-set-independent method for generating a \"just a little bit\n> larger\" string to use as the righthand limit. If anyone out there is\n> an expert on foreign and multibyte character sets, some help would\n> be appreciated. Basically, given that we know the LIKE or regex\n> pattern can only match values beginning with FOO, we want to generate\n> string comparisons that select out the range of values that begin with\n> FOO (or, at worst, a slightly larger range). 
In USASCII locale it's not\n> hard: you can do\n> \tfield >= 'FOO' AND field < 'FOP'\n> but it's not immediately obvious how to make this idea work reliably\n> in the presence of odd collation orders or multibyte characters...\n\nhow about something along the lines of:\n\nfile >='FOO' and field='FOO.*'\n\nie, terminate once the search fails on a match of the static left-hand-side\nfollowed by anything (although I have the feeling this does not fit into your\nexecution system..), and a simple regex type check be added to the scan\nvalidation code?\n\n> \n> BTW: the \\377 hack is actually wrong for USASCII too, since it'll\n> exclude a data value like 'FOO\\377x' which should be included.\n\nThat's why I pointed out that in my particular case, I only have alpha and\nnumeric data in the database, so it is safe, it's certainly no general solution.\n\n-- \n------------------------------------------------------------\nStuart Woolford, [email protected]\nUnix Consultant.\nSoftware Developer.\nSupra Club of New Zealand.\n------------------------------------------------------------\n",
"msg_date": "Sat, 6 Nov 1999 13:05:14 +1300",
"msg_from": "Stuart Woolford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] indexed regex select optimisation\n missing?"
},
{
"msg_contents": "I don't know much about the backend stuff, but wouldn't it reduce the \namount of records you go through to do a search for FO. and then do a \nanother check on each returned record to check that the last character \nmatches? More checks, but fewer total records.\n\nAnyway, just a thought.\n\nAt 12:46 PM 11/5/99, Tom Lane wrote:\n>[snip]\n>\n> Basically, given that we know the LIKE or regex\n>pattern can only match values beginning with FOO, we want to generate\n>string comparisons that select out the range of values that begin with\n>FOO (or, at worst, a slightly larger range). In USASCII locale it's not\n>hard: you can do\n> field >= 'FOO' AND field < 'FOP'\n>but it's not immediately obvious how to make this idea work reliably\n>in the presence of odd collation orders or multibyte characters...\n>\n>BTW: the \\377 hack is actually wrong for USASCII too, since it'll\n>exclude a data value like 'FOO\\377x' which should be included.\n>\n> regards, tom lane\n>\n>************\n\n",
"msg_date": "Sat, 06 Nov 1999 17:13:03 -0400",
"msg_from": "Charles Tassell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] indexed regex select optimisation\n missing?"
},
{
"msg_contents": "\nWell, I've improved my regex text searches to actually use the indexes properly\nnow for the basic case, but I have found another 'problem' (or feature, call it\nwhat you will ;) - to demonstrate:\nwith locale turned on (the default RPMS are like this):\n\nthe following takes a LONG time to run on 1.6 million records:\n-------------------------------------\nexplain select isbn, count from inv_word_i where\nword~'^foo'\norder by count\n\nSort (cost=35148.70 rows=353 width=16)\n -> Index Scan using i3 on inv_word_i (cost=35148.70 rows=353 width=16)\n-------------------------------------\nthe following runs instantly, and does (nearly) the same thing:\n-------------------------------------\nexplain select isbn, count from inv_word_i where\nword>='foo' and word<'fop'\norder by count\n\nSort (cost=11716.57 rows=183852 width=16)\n -> Index Scan using i3 on inv_word_i (cost=11716.57 rows=183852 width=16)\n-------------------------------------\nbut what about the following? :\n-------------------------------------\nexplain select isbn , sum(count) from inv_word_i where\n(word>='window' and word<'windox')\nor\n(word>='idiot' and word<'idiou')\ngroup by isbn\norder by sum(count) desc\n\nSort (cost=70068.84 rows=605525 width=16)\n -> Aggregate (cost=70068.84 rows=605525 width=16)\n -> Group (cost=70068.84 rows=605525 width=16)\n -> Sort (cost=70068.84 rows=605525 width=16)\n -> Seq Scan on inv_word_i (cost=70068.84 rows=605525 width=16)\n-------------------------------------\n\nthis is the fastest way I've found so far to do a multi-word search (window and\nidiot as the root words in this case), you note it does NOT use the indexes,\nbut falls back to a linear scan?!? 
it takes well over 30 seconds (much much too\nlong)\n\nI've tried a LOT of different combinations, and have yet to find a way of\ngetting the system to use the indexes correctly to do what I want, the closest\nI've ffound is using a select intersect select method to find all docs\ncontaining both word (what I really want, although the query above is a ranked\nor query), but it gets slow as soon as I select more than one field for the\nresults (I need to line isbn in this case to another database in the final\napplication)\n\nI assume there is some reason the system falls back to a linear scan in this\ncase? it seems two index lookups would be much much more efficient..\n\nam I missing something again?\n\n-- \n------------------------------------------------------------\nStuart Woolford, [email protected]\nUnix Consultant.\nSoftware Developer.\nSupra Club of New Zealand.\n------------------------------------------------------------\n",
"msg_date": "Mon, 8 Nov 1999 12:50:41 +1300",
"msg_from": "Stuart Woolford <[email protected]>",
"msg_from_op": false,
"msg_subject": "more] indexed regex select optimisations?"
}
]
[
{
"msg_contents": "--- Bruce Momjian <[email protected]> wrote:\n> OK, new version of psql installed. Only problem I see is that \\h\n> shows\n> TRUNCATE as the first help item. I assume the directory contents are\n ^^^^^^^^\n\nHey! Alright. Top billing... ;-)\n\n> not being sorted. Peter?\n> \n\nMike Mascari\n([email protected])\n\n=====\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n",
"msg_date": "Thu, 4 Nov 1999 16:58:22 -0800 (PST)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New version of psql"
}
]
[
{
"msg_contents": "I've been helping this fellow out -- and he drops this regression diff on me. \nI'm thinking that most, if not all, of the failures are due to locales and\nassociated collating order and money problems. Am I right, or off-base here?\n\n--\nLamar Owen\nWGCR Internet Radio\n1 PEter 4:11\n\n---------- Forwarded Message ----------\nSubject: test problem in english now ( excuse me ! ) suite regress test\nDate: Fri, 5 Nov 1999 01:40:35 +0100\nFrom: Franck MESNIER <[email protected]>\n\n\n\nLe ven, 29 oct 1999, Lamar Owen s'adressait � la foule en d�lire en ces termes :\n> You appear to be running the regression script as user 'franck' -- you\n> need to su to user 'postgres', then run the script (sorry I didn't see\n> that before....).\n> On my RedHat 6.1 devel machine, as user postgres, I was able to run the\n> regression just five minutes ago.\n\nHi\n\nI read in your web page you're born 29.01.68 so I have to speak respectfully\nto you because I'm born 30.01.68 -)\n\nI succed to do the regress test finally, but I have 8 tests who are failed I\nsend you the regression.diffs file and so, what can i do for this, is it\nimportant ?\n\nThanks \n\nFranck\n\n-------------------------------------------------------",
"msg_date": "Thu, 4 Nov 1999 21:16:02 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: test problem in english now ( excuse me ! ) suite regress test"
}
]
[
{
"msg_contents": "Hi all!\n\nOn a number of occasions over the past couple of weeks I have encountered\nthe following error during my nightly vacuum process. It repeats\ninfinitely until outside intervention. It seems to identify the fact that\nit is stuck in an infinite recursion but unable to deal with it. The\nrelevant portion of the log is at the end of the message. Any\nhints/suggestions are welcome.\n\nAlso, after exiting from the infinite loop I have the following errors\nappearing during vacuum and vacuum analyze:\n\nNOTICE: CreatePortal: portal <vacuum> already exists\nNOTICE: Index pg_attribute_attrelid_index: NUMBER OF INDEX' TUPLES (1203) IS NOT THE SAME AS HEAP' (1193)\nNOTICE: Index pg_attribute_relid_attnum_index: NUMBER OF INDEX' TUPLES (1203) IS NOT THE SAME AS HEAP' (1193)\nNOTICE: Index pg_attribute_relid_attnam_index: NUMBER OF INDEX' TUPLES (1203) IS NOT THE SAME AS HEAP' (1193)\nERROR: cannot find attribute 1 of relation pg_temp.13894.119\n\nAny suggestions on cleaning this up is appreciated...\n\n- Kristofer\n\nNov 4 03:37:00 mymailman logger: DEBUG: --Relation tblvalue-- \nNov 4 03:37:04 mymailman logger: DEBUG: Pages 945: Changed 83, Reapped 5, Empty 0, New 0; Tup 128483: Vac 13, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 53, MaxLen 85; Re-using: Free/Avail. Space 876/876; EndEmpty/Avail. Pages 0/5. Elapsed 0/0 sec. \nNov 4 03:37:08 mymailman logger: DEBUG: Index tblvalue_idx2: Pages 432; Tuples 128483: Deleted 13. Elapsed 0/0 sec. \nNov 4 03:37:10 mymailman logger: DEBUG: Index tblvalue_oid: Pages 289; Tuples 128483: Deleted 13. Elapsed 0/1 sec. 
\nNov 4 03:37:20 mymailman logger: ERROR: cannot find attribute 1 of relation pg_temp.13894.119 \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: FATAL 1: Socket command type \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: ERROR: unknown frontend message was received \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: NOTICE: AbortTransaction and not in in-progress state \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_recvbuf: recv() failed: Bad file descriptor\nNov 4 03:37:20 mymailman logger: FATAL 1: Socket command type ? unknown \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: ERROR: unknown frontend message was received \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: NOTICE: AbortTransaction and not in in-progress state \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_recvbuf: recv() failed: Bad file descriptor\nNov 4 03:37:20 mymailman logger: FATAL 1: Socket command type ? 
unknown \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: ERROR: unknown frontend message was received \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: NOTICE: AbortTransaction and not in in-progress state \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_recvbuf: recv() failed: Bad file descriptor\nNov 4 03:37:20 mymailman logger: FATAL 1: Socket command type ? unknown \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: ERROR: unknown frontend message was received \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: NOTICE: AbortTransaction and not in in-progress state \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_recvbuf: recv() failed: Bad file descriptor\nNov 4 03:37:20 mymailman logger: FATAL 1: Socket command type ? 
unknown \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: ERROR: unknown frontend message was received \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: NOTICE: AbortTransaction and not in in-progress state \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_recvbuf: recv() failed: Bad file descriptor\nNov 4 03:37:20 mymailman logger: ERROR: infinite recursion in proc_exit \n\nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: NOTICE: AbortTransaction and not in in-progress state \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_recvbuf: recv() failed: Bad file descriptor\nNov 4 03:37:20 mymailman logger: ERROR: infinite recursion in proc_exit \n\nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: NOTICE: AbortTransaction and not in in-progress state \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_recvbuf: recv() failed: Bad file descriptor\nNov 4 03:37:20 mymailman logger: ERROR: infinite recursion in proc_exit \n\nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: NOTICE: AbortTransaction and not in in-progress state \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \nNov 4 03:37:20 mymailman logger: 
pq_recvbuf: recv() failed: Bad file descriptor\nNov 4 03:37:20 mymailman logger: ERROR: infinite recursion in proc_exit \n\n- K\n\nKristofer Munn * KMI * 973-509-9414 * AIM KrMunn * ICQ 352499 * www.munn.com\n\n",
"msg_date": "Thu, 4 Nov 1999 22:44:55 -0500 (EST)",
"msg_from": "Kristofer Munn <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: infinite recursion in proc_exit"
},
{
"msg_contents": "Kristofer Munn <[email protected]> writes:\n> On a number of occasions over the past couple of weeks I have encountered\n> the following error during my nightly vacuum process. It repeats\n> infinitely until outside intervention. It seems to identify the fact that\n> it is stuck in an infinite recursion but unable to deal with it.\n\nWhat Postgres version are you using? This loop:\n\n> Nov 4 03:37:20 mymailman logger: pq_recvbuf: recv() failed: Bad file descriptor\n> Nov 4 03:37:20 mymailman logger: FATAL 1: Socket command type ? unknown \n> Nov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \n> Nov 4 03:37:20 mymailman logger: ERROR: unknown frontend message was received \n> Nov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \n> Nov 4 03:37:20 mymailman logger: NOTICE: AbortTransaction and not in in-progress state \n> Nov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \n> Nov 4 03:37:20 mymailman logger: pq_flush: send() failed: Bad file descriptor \n> Nov 4 03:37:20 mymailman logger: pq_recvbuf: recv() failed: Bad file descriptor\n\nlooks like we are trying to read a command from the frontend, failing\nbecause the socket descriptor's been clobbered, trying to send the\nelog() report to the client --- which also fails of course --- returning\nto the main loop and failing again. But as far as I can tell from\nlooking at either 6.5 or current code, that can't happen: a failure\nreturn from pq_recvbuf should lead to proc_exit *without* reaching the\n'Socket command type ? unknown' elog. So I think you are working with\n6.4 or older code, in which case an update would be your best bet.\n\nIf you want to try to recover without doing an update, I think you'll\nstill need to do a pg_dump/destroydb/createdb/reload. It looks like\nthe indexes on pg_attribute have been corrupted, and there's not any\neasier way to clean that up. 
(If it were a user table, you could just\ndrop and recreate the index, but don't try that on pg_attribute...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Nov 1999 00:41:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: infinite recursion in proc_exit "
},
{
"msg_contents": "\n> What Postgres version are you using?\n\nMy apologies, should have included that with my original request:\n\n[PostgreSQL 6.5.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\n> If you want to try to recover without doing an update, I think you'll\n> still need to do a pg_dump/destroydb/createdb/reload. It looks like\n> the indexes on pg_attribute have been corrupted, and there's not any\n> easier way to clean that up. (If it were a user table, you could just\n> drop and recreate the index, but don't try that on pg_attribute...)\n\nWhat are the ramifications of continuing with the corrupted indexes -\nundefined behavior? Filesystems have fsck to fix stuff - are there any\ntools on the docket to reconstruct the indexes or other recoverable\nthings? These would be useful for systems where the database is 500+ Megs\nand takes quite awhile to reload.\n\nThe application involved uses temp files (you see some sort of attribute\nfailure in the VACUUM before everything goes haywire - perhaps unrelated)\nas well as transactions. I don't create or drop temp tables inside\ntransactions to avoid the accompanying error messages. A database\nmaintenance program runs nightly to cull old data from the database and\nthen runs a vacuum. During that time, transactions continue unabated.\n\n- K\n\nKristofer Munn * KMI * 973-509-9414 * AIM KrMunn * ICQ 352499 * www.munn.com\n\n",
"msg_date": "Fri, 5 Nov 1999 01:37:35 -0500 (EST)",
"msg_from": "Kristofer Munn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ERROR: infinite recursion in proc_exit "
},
{
"msg_contents": "Kristofer Munn <[email protected]> writes:\n>> What Postgres version are you using?\n> My apologies, should have included that with my original request:\n> [PostgreSQL 6.5.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\nHmm. If that trace is from 6.5 code, then postgres.c should certainly\nbe calling proc_exit after the recv() fails. I wonder if proc_exit is\nreturning because proc_exit_inprogress is nonzero? proc_exit's use of\nelog(ERROR) does look mighty bogus to me --- that path could possibly\ncause a recursion just like this, but how did the code get into it to\nbegin with?\n\nBut that's not very relevant to your real problem, which is that\nthere must be something corrupted in pg_attribute's indexes.\n\n> What are the ramifications of continuing with the corrupted indexes -\n> undefined behavior?\n\nI wouldn't recommend it.\n\n> Filesystems have fsck to fix stuff - are there any\n> tools on the docket to reconstruct the indexes or other recoverable\n> things?\n\nI've thought for some time that vacuum ought to just rebuild the indexes\nfrom scratch. That'd provide a recovery path for this sort of problem,\nand I suspect it'd actually be faster than what vacuum does now. I'm\nnot volunteering to make it happen, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Nov 1999 03:01:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: infinite recursion in proc_exit "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Kristofer Munn <[email protected]> writes:\n> > [PostgreSQL 6.5.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n>\n> But that's not very relevant to your real problem, which is that\n> there must be something corrupted in pg_attribute's indexes.\n>\n> > What are the ramifications of continuing with the corrupted indexes -\n> > undefined behavior?\n>\n> I wouldn't recommend it.\n>\n> > Filesystems have fsck to fix stuff - are there any\n> > tools on the docket to reconstruct the indexes or other recoverable\n> > things?\n>\n> I've thought for some time that vacuum ought to just rebuild the indexes\n> from scratch. That'd provide a recovery path for this sort of problem,\n> and I suspect it'd actually be faster than what vacuum does now. I'm\n> not volunteering to make it happen, though.\n\n I don't know if you could drop/rebuild an index on a system\n catalog while the database is online.\n\n But there was sometimes a utility called reindexdb. That used\n the bootstrap processing mode interface of the backend to\n drop and recreate all system catalog indices. I don't know\n who removed that and why, neither do I know if it would still\n be possible to drop and reindex through the bootstrap\n interface.\n\n Does someone remember why it's gone?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 5 Nov 1999 13:48:41 +0100 (MET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: infinite recursion in proc_exit"
},
{
"msg_contents": "> \n> Hmm. If that trace is from 6.5 code, then postgres.c should certainly\n> be calling proc_exit after the recv() fails. I wonder if proc_exit is\n> returning because proc_exit_inprogress is nonzero? proc_exit's use of\n> elog(ERROR) does look mighty bogus to me --- that path could possibly\n> cause a recursion just like this, but how did the code get into it to\n> begin with?\n\nThe proc_exit_inprogress stuff was added by me after I found some backends\ndoing exactly that sort of infinite recursion after a socket recv error.\nIt doesn't correct the original error but at least il will exit the backend\nafter 10 iterations. The elog(ERROR) might be bogus in this context, but how\ncan you otherwise notify the error? Maybe a better solution could be this:\n\n\tif (proc_exit_inprogress++ == 9)\n\t\telog(ERROR, \"infinite recursion in proc_exit\");\n\tif (proc_exit_inprogress >= 9)\n\t\tgoto exit;\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Sat, 6 Nov 1999 01:45:48 +0100 (MET)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: infinite recursion in proc_exit"
},
{
"msg_contents": "> > \n> > Hmm. If that trace is from 6.5 code, then postgres.c should certainly\n> > be calling proc_exit after the recv() fails. I wonder if proc_exit is\n> > returning because proc_exit_inprogress is nonzero? proc_exit's use of\n> > elog(ERROR) does look mighty bogus to me --- that path could possibly\n> > cause a recursion just like this, but how did the code get into it to\n> > begin with?\n> \n> The proc_exit_inprogress stuff was added by me after I found some backends\n> doing exactly that sort of infinite recursion after a socket recv error.\n> It doesn't correct the original error but at least il will exit the backend\n> after 10 iterations. The elog(ERROR) might be bogus in this context, but how\n> can you otherwise notify the error? Maybe a better solution could be this:\n> \n> \tif (proc_exit_inprogress++ == 9)\n> \t\telog(ERROR, \"infinite recursion in proc_exit\");\n> \tif (proc_exit_inprogress >= 9)\n> \t\tgoto exit;\n\n\nFix applied:\n\n /*\n * If proc_exit is called too many times something bad is happening, so\n * exit immediately. This is crafted in two if's for a reason.\n */\n if (proc_exit_inprogress == 9)\n elog(ERROR, \"infinite recursion in proc_exit\");\n if (proc_exit_inprogress >= 9)\n goto exit;\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Nov 1999 11:57:38 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: infinite recursion in proc_exit"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Fix applied:\n\n> /*\n> * If proc_exit is called too many times something bad is happening, so\n> * exit immediately. This is crafted in two if's for a reason.\n> */\n> if (proc_exit_inprogress == 9)\n> elog(ERROR, \"infinite recursion in proc_exit\");\n> if (proc_exit_inprogress >= 9)\n> goto exit;\n\nThat isn't going to make things any better, because it's still laboring\nunder the same basic oversight: elog(ERROR) does not return to the\ncaller, it returns control to the main loop. Thus, having proc_exit\ncall elog(ERROR) is simply *guaranteed* to create a failure. proc_exit\nmust not allow control to return anywhere, under any circumstances,\nbecause it is used as the final recourse when things are too screwed up\nto consider continuing.\n\nAlthough it's not hard to remove this silliness from proc_exit itself,\nI suspect that the real source of the problem is probably elog(ERROR)\nbeing called by one of the on_shmem_exit or on_proc_exit routines that\nproc_exit is supposed to call. I don't think we can hunt down and\neliminate all possible paths where that could happen (even if we could\ndo it today, it's too likely that future code changes would make it\npossible again).\n\nI think what we must do instead is to fix things so that if an\non_shmem_exit/on_proc_exit routine elog's, control comes back to\nproc_exit and we carry on exiting with the remaining\non_shmem_exit/on_proc_exit routines (*not* calling the one that\nfailed again, of course).\n\nA sketch of the necessary changes is:\n\n1. proc_exit_inprogress becomes a global, and we change the\nsetjmp-catching code in postgres.c so that if proc_exit_inprogress is\nnonzero, it just calls proc_exit() immediately. This allows proc_exit\nto recover control after an on_shmem_exit/on_proc_exit routine fails via\nelog(). (Note: for elog(FATAL), it's already true that elog passes\ncontrol straight off to proc_exit.)\n\n2. 
proc_exit and shmem_exit should decrement on_proc_exit_index and\non_shmem_exit_index as they work through the lists of registered\nroutines --- in effect, each routine is removed from the list just\nbefore it is called. That prevents the same routine from being called\nagain if it fails.\n\n3. Ensure that the final exit() code is nonzero if we cycled through\nproc_exit more than once --- errors during proc_exit should always be\ntreated as the equivalent of an outright crash, forcing a postmaster\ncleanup cycle, I think.\n\nBefore I try to implement this scheme, can anyone spot a problem with\nit?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 06 Nov 1999 13:08:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: infinite recursion in proc_exit "
},
{
"msg_contents": "> That isn't going to make things any better, because it's still laboring\n> under the same basic oversight: elog(ERROR) does not return to the\n> caller, it returns control to the main loop. Thus, having proc_exit\n> call elog(ERROR) is simply *guaranteed* to create a failure. proc_exit\n> must not allow control to return anywhere, under any circumstances,\n> because it is used as the final recourse when things are too screwed up\n> to consider continuing.\n\nMassimo pointed out some problems in my fix. The new code is:\n\n if (++proc_exit_inprogress == 9) \n elog(ERROR, \"infinite recursion in proc_exit\");\n if (proc_exit_inprogress >= 9)\n goto exit;\n \n /* ----------------\n * if proc_exit_inprocess > 1, then it means that we\n * are being invoked from within an on_exit() handler\n * and so we return immediately to avoid recursion. \n * ----------------\n */\n if (proc_exit_inprogress > 1) \n return;\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Nov 1999 14:45:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: infinite recursion in proc_exit"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> /* ----------------\n> * if proc_exit_inprocess > 1, then it means that we\n> * are being invoked from within an on_exit() handler\n> * and so we return immediately to avoid recursion. \n> * ----------------\n> */\n> if (proc_exit_inprogress > 1) \n> return;\n\nNo, no, no, noooo!!!\n\nproc_exit MUST NOT RETURN. EVER, UNDER ANY CIRCUMSTANCES.\n\nIf it does, that means that elog(STOP) can return under some\ncircumstances. The callers of elog() are not expecting that,\nand they are likely to screw things up even worse if elog returns\ncontrol unexpectedly.\n\nAFAICS, this set of problems cannot be fixed by localized patching in\nproc_exit. We have to globally change the way in which errors are\nprocessed after proc_exit has begun execution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 06 Nov 1999 15:40:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: infinite recursion in proc_exit "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > /* ----------------\n> > * if proc_exit_inprocess > 1, then it means that we\n> > * are being invoked from within an on_exit() handler\n> > * and so we return immediately to avoid recursion. \n> > * ----------------\n> > */\n> > if (proc_exit_inprogress > 1) \n> > return;\n> \n> No, no, no, noooo!!!\n> \n> proc_exit MUST NOT RETURN. EVER, UNDER ANY CIRCUMSTANCES.\n> \n> If it does, that means that elog(STOP) can return under some\n> circumstances. The callers of elog() are not expecting that,\n> and they are likely to screw things up even worse if elog returns\n> control unexpectedly.\n> \n> AFAICS, this set of problems cannot be fixed by localized patching in\n> proc_exit. We have to globally change the way in which errors are\n> processed after proc_exit has begun execution.\n> \n\nOh, well. I tried. Code is better than it used to be at least.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Nov 1999 15:48:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: infinite recursion in proc_exit"
}
] |
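Tom Lane's three-point sketch in the thread above (pop each registered callback off the list *before* invoking it, and force a nonzero exit status if proc_exit is re-entered during cleanup) can be illustrated in miniature. This is a standalone sketch, not the actual PostgreSQL `ipc.c` code: the names `run_exit_callbacks`, `handler_a`, and `handler_b` are simplified stand-ins for the real `on_proc_exit_list` machinery, and the `calls` string exists only to make the handler ordering observable.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MAX_ON_EXITS 20

typedef void (*exit_fn)(int code);

static exit_fn on_proc_exit_list[MAX_ON_EXITS];
static int on_proc_exit_index = 0;
static int proc_exit_inprogress = 0;
static char calls[MAX_ON_EXITS + 1];    /* records handler order, for demonstration */

static void on_proc_exit(exit_fn fn)
{
    on_proc_exit_list[on_proc_exit_index++] = fn;
}

/*
 * Pop each callback off the list *before* invoking it.  If a callback
 * fails and control later re-enters here (as elog() would hand control
 * back via the main loop), the failed callback is already gone, so we
 * simply carry on with the remaining ones -- no callback runs twice.
 */
static void run_exit_callbacks(int code)
{
    while (--on_proc_exit_index >= 0)
        on_proc_exit_list[on_proc_exit_index](code);
}

static void proc_exit(int code)
{
    /* Re-entry means cleanup itself failed: force a nonzero status. */
    if (proc_exit_inprogress++ > 0)
        code = 1;
    run_exit_callbacks(code);
    exit(code);                 /* proc_exit never returns to its caller */
}

/*
 * Two demo handlers; handler_b simulates a failing handler whose
 * "error recovery" re-enters the cleanup loop.
 */
static void handler_a(int code) { (void) code; strcat(calls, "a"); }
static void handler_b(int code) { strcat(calls, "b"); run_exit_callbacks(code); }
```

With `handler_a` registered first and `handler_b` second, cleanup runs b then a (LIFO order), and even though `handler_b` re-enters the loop, each handler fires exactly once, because it was removed from the list before being called.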
[
{
"msg_contents": "Is there a fundamental rationale underlying this behavior, or is this \nmerely something no one has bothered to do:\n\n=> select min(xmin) from mytable where xmax = 0;\nERROR: Unable to select an aggregate function min(xid)\n\n=> select max(xmin) from mytable;\nERROR: Unable to select an aggregate function max(xid)\n\n=> select * from mytable where xmin > 150000;\nERROR: Unable to identify an operator '>' for types 'xid' and 'int4'\n You will have to retype this query using an explicit cast\n\n=> select * from mytable where xmin > 150000::xid;\nERROR: Unable to identify an operator '>' for types 'xid' and 'xid'\n You will have to retype this query using an explicit cast\n\n=> select * from mytable where xmin::int4 > 150000;\nERROR: No such function 'int4' with the specified attributes\n\nThe reason I ask is that it would be fairly straightforward to implement\na poor-man's database replication server if this worked.\n\n\t-Michael Robinson\n\n",
"msg_date": "Fri, 5 Nov 1999 12:01:25 +0800 (CST)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "xid type"
}
] |
[
{
"msg_contents": "Get Relational access without sacrificing Reliability\n\nBuy Pervasive.SQL 2000 for Solaris* and receive 2 FREE days of Professional\nServices installation and design consultation.\n\nTIME-SENSITIVE OFFER - ACT BY DECEMBER 31, 1999\n\nPervasive Software, creators of Btrieve databases, now offers Pervasive.SQL\n2000 for Solaris and Linux - the perfect DBMS to smoothly replace your\nlegacy ISAM database. Pervasive.SQL 2000 advantages include:\n\nParallel relational and transactional access provides the best of both\nworlds: the performance of transactional data access and the flexibility of\nrelational data access. \n\n* Extensive reporting and data manipulation without compromising performance\n* Flexibility associated with multiple access methods\nReliability and recovery features guarantee your critical data is always\nsafe and available.\n* 24 x 7 operations ensures availability even during backups\n* Archival logging with automatic roll forward capabilities\n* Transaction durability and caching of each individual step\nCross-platform support allows development and deployment across multiple\nplatforms including NT, NetWare, Solaris and Linux. \n* Binary compatibility file format to freely move your database from one\nplatform to another\n* Deploy against any engine without rewriting or recompiling your\napplication\n\nBest of all, when you purchase Pervasive.SQL 2000 Server Edition for\nSolaris, you'll get 2 FREE days of Pervasive Professional Services support\nto assist you with porting your application.** Designed to get you off the\nground with maximum speed and efficiency, Pervasive's Professional Services\nprogram includes installation assistance, design, requirements definition,\ntop-notch training, and much more. \nAct before December 31 to take advantage of this offer. 
Call\n1-800-287-4383, email: [email protected], or check out our web site\nwww.pervasive.com/products\n\n\n*minimum 50 user count\n**travel and expenses not included\n_______________________________________________________________________\nAt Pervasive we strive to add value to each and every communication\nwith you. If you feel you have received this e-mail in error or would\nlike to be removed from this list, send an email with the subject\n\"unsubscribe\" to [email protected]. [email protected]\n",
"msg_date": "Fri, 5 Nov 1999 05:17:19 -0600 ",
"msg_from": "Promotions <[email protected]>",
"msg_from_op": true,
"msg_subject": "Get Relational access without sacrificing Reliability"
}
] |