The full dataset viewer is not available. Only showing a preview of the rows.
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 11 new columns ({'lang_score', 'char_rep_ratio', 'perplexity', 'lang', 'flagged_words_ratio', 'special_char_ratio', 'avg_line_length', 'alnum_ratio', 'word_rep_ratio', 'num_words', 'max_line_length'})
This happened while the json dataset builder was generating data using
/tmp/hf-datasets-cache/medium/datasets/71920310133875-config-parquet-and-info-BAAI-IndustryCorpus_progr-5c36d49b/downloads/716e12e7c067ada597f46b27dd01cf6effce61aefd300f8624ca7145f283d930
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
text: string
alnum_ratio: double
avg_line_length: double
char_rep_ratio: double
flagged_words_ratio: double
industry_type: string
lang: string
lang_score: double
max_line_length: int64
num_words: int64
perplexity: double
special_char_ratio: double
word_rep_ratio: double
id: int64
to
{'id': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'industry_type': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1577, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1191, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 11 new columns ({'lang_score', 'char_rep_ratio', 'perplexity', 'lang', 'flagged_words_ratio', 'special_char_ratio', 'avg_line_length', 'alnum_ratio', 'word_rep_ratio', 'num_words', 'max_line_length'})
This happened while the json dataset builder was generating data using
/tmp/hf-datasets-cache/medium/datasets/71920310133875-config-parquet-and-info-BAAI-IndustryCorpus_progr-5c36d49b/downloads/716e12e7c067ada597f46b27dd01cf6effce61aefd300f8624ca7145f283d930
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
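As the message suggests, the mismatch can be handled either by declaring separate configurations for the differently shaped files or by forcing a common schema at load time. The snippet below is only an illustrative sketch of the second option, with hypothetical file names (they are not the actual file names in this repository); it uses the Hugging Face datasets library to drop the extra quality-metric columns and cast id so that every file matches the expected {'id', 'text', 'industry_type'} layout.

    # Illustrative sketch only (file names are hypothetical), using the
    # Hugging Face "datasets" library to force the richer JSON files down
    # to the simpler {'id', 'text', 'industry_type'} layout.
    from datasets import load_dataset, Value

    extra_cols = [
        "lang_score", "char_rep_ratio", "perplexity", "lang",
        "flagged_words_ratio", "special_char_ratio", "avg_line_length",
        "alnum_ratio", "word_rep_ratio", "num_words", "max_line_length",
    ]

    ds = load_dataset("json", data_files={"train": "rich_schema_part_*.jsonl"})
    train = ds["train"]

    # Drop the quality-metric columns that the simpler files do not have.
    train = train.remove_columns([c for c in extra_cols if c in train.column_names])

    # The error also shows id arriving as int64 where a string is expected.
    train = train.cast_column("id", Value("string"))

    print(train.features)  # should now list only id, text, industry_type

The other remedy named in the message, separate configurations, is declared in the dataset repository's README YAML rather than in code; the linked manual-configuration documentation describes that route.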
id | text | industry_type
---|---|---
string | string | string
2014-23/1025/en_head.json.gz/11242 | Developer Reading List Andrew Binstock, December 21, 2012 New books on C, C#, Node, Win8 Apps, Perl and Groovy.
Programming C# 5.0 by Ian Griffiths
This 800-page volume is a comprehensive tutorial and reference on C#. It's a thorough and complete discussion of the language and associated technologies written in a clear, if somewhat wordy, style. The book has been updated to cover more than the language per se. It also includes explanations of new features in .NET 4.5 and issues surrounding Windows 8 apps, such as packaging. Despite covering those technologies, the core theme is the language — not the libraries or the OS.

A useful chapter discusses, in considerable detail, the means of calling native code (both 32- and 64-bit), with lengthy coverage of COM and its specific requirements when called from C#. Both low-level calls to code written in C++ and to higher level languages such as VBscript or Jscript are discussed and thoughtfully explained.

My comment that the book can be used for reference is not an oblique suggestion that it contains numerous tables of APIs or anything of the sort. Rather, it contains a wealth of topics that are explained clearly and can serve as references and refreshers on how to do specific things, whether it's loading assemblies or determining which class library to write for, or figuring out how Windows 8's new stream for touch-based apps works. In all cases, you'll find several pages that lay out the material clearly.

The author fully expects you to have some background both in programming and in C#, so there is no primer. This choice of audience is possibly the reason for my only serious objection to the book, which is how little code it contains. Material is presented more verbally than I care for, but the clarity and thoroughness of the presentation make up for that. Recommended. | 编程 |
2014-23/1026/en_head.json.gz/5309 | Developer Reading List Dr. Dobb's Andrew Binstock, April 02, 2013 New books on Java, Erlang, Unit Testing, Windows and more.
Learn You Some Erlang for Great Good by Fred Hébert
Erlang is the functional language, developed at Swedish phone-maker Ericsson, for use in large-scale, fault-tolerant applications. The language, which embraces the functional model, relies on message passing (with actors) and is generally deployed on multiple nodes, in which the failure of any one instance simply requires a restart of the Erlang VM. Because of this use case, Erlang is frequently thought of as specializing in parallel programming contexts; but while it does handle concurrency well, it's really the elegant fault handling that delivers the language's principal value.
The key obstacle is that the language is hard to learn. The functional model and actor implementation will be new to many developers. The difficulties these features present are compounded by an opaque syntax and less-than-helpful error messages. Consequently, to get up and running with Erlang requires dedication and a long-term commitment. The previous widely used tutorial on Erlang was written by Joe Armstrong, the language's principal developer. However, it was in many ways as unapproachable as the language.
This new book by Fred Hébert changes the whole landscape. It's an enjoyable and approachable introduction to Erlang. The author presents the topics in an intelligent, easy-to-navigate sequence that reveals Erlang in small chunks that build successively on each other. The presentation is laced with humor that is at times a relief, even if it occasionally veers into gallows style. (A dead worker thread, for example, is represented by a cartoon of a corpse floating face-down in a pool.)
Hébert is well regarded in Erlang circles for this tutorial, which is available online at no cost. The high esteem for this material is well deserved, and this book is without a doubt the best way to get on board with Erlang. Recommended.
| 编程 |
2014-23/1026/en_head.json.gz/18260 | Ruby Conference Wrap-Up (Part 5)
This is the last installment of my long-winded commentary on the 2005 International Ruby Conference. It’s a short one, because I had leave late Sunday morning to catch my flight home from San Diego.
David Heinemeier Hansson opened the morning's talks with his presentation on "The State of Ruby on Rails". I missed last year's conference in Virginia, where David introduced Rails, and so this was my first opportunity to hear him speak. It's evident from his work on Rails and various Rails-based applications that he's a talented software developer, but this presentation also revealed that he's a natural salesman and a charismatic public speaker.
It was for the most part a non-technical talk, a kind of review of the progress that has been made over the past year. From the 9+ Slashdot headlines, to the phenomenal download statistics, to the publication of Agile Web Development with Rails (with over 20,000 copies sold since August), it’s been a huge year for Rails. David commented on some of the things that he believes helped to make Rails such a success. He said that having a strong framework is important, but public “poster child” applications (like 37signals‘ Basecamp, or the Robot Co-op‘s 43 Things) help to convince the public that Rails is the real deal. He also noted that for a project or technology to succeed, you’ve got to tell people stories that they’re ready to hear. In Rails’ case, that story was that Java and J2EE based web applications really are too complex, and that there’s a better way to do it.
David closed his presentation by announcing the (now available) Rails 1.0 release candidate, and then talking about plans for the next phase of development. The focus is going to shift from core Rails development to platform tools. One, the SwitchTower utility for automated Rails application deployment, is already available and ships as part of Rails. The 37signals crew is also working on Gauge, a tool for monitoring the health of a clustered Rails application nicely (and in real time), as well as Conductor, a web application inspired by Naked Objects that makes it easier to develop Rails applications (a.k.a. “instant scaffolding for your application”).
Side note. During the Q&A that followed, someone mentioned a couple of books by Clayton Christensen that have influenced him: The Innovator’s Dilemma and The Innovator’s Solution. I’ve added both to my Amazon.com wish list if you want to buy one of them for me. He also recommended K | 编程 |
2014-23/1027/en_head.json.gz/3276 | The History of Python
A series of articles on the history of the Python programming language and its community.
First-class Everything
[Folks, please don't use the comments section of this blog to ask questions. If you want to suggest a topic for a future blog entry, send me email. (Use Google to find my home page, which has my email address.) If you want to propose a change or discuss the merits of alternative designs, use the python-ideas mailing list at python.org.]

One of my goals for Python was to make it so that all objects were "first class." By this, I meant that I wanted all objects that could be named in the language (e.g., integers, strings, functions, classes, modules, methods, etc.) to have equal status. That is, they can be assigned to variables, placed in lists, stored in dictionaries, passed as arguments, and so forth.

The internal implementation of Python made this simple to do. All of Python's objects were based on a common C data structure that was used everywhere in the interpreter. Variables, lists, functions, and everything else just used variations of this one data structure---it just didn't matter if the structure happened to represent a simple object such as an integer or something more complicated such as a class.

Although the idea of having "first-class everything" is conceptually simple, there was still one subtle aspect of classes that I still needed to address---namely, the problem of making methods first class objects.

Consider this simple Python class (copied from last week's blog post):

    class A:
        def __init__(self, x):
            self.x = x
        def spam(self, y):
            print self.x, y

If methods are going to be first-class objects, then they can be assigned to other variables and used just like other objects in Python. For example, someone could write a Python statement such as "s = A.spam". In this case, the variable "s" refers to a method of a class, which is really just a function. However, a method is not quite the same as an ordinary function. Specifically, the first argument of a method is supposed to be an instance of the class in which a method was defined.

To deal with this, I created a type of callable object known as an "unbound method." An unbound method was really just a thin wrapper around the function object that implemented a method, but it enforced a restriction that the first argument had to be an instance of the class in which the method was defined. Thus, if someone wanted to call an unbound method "s" as a function, they would have to pass an instance of class "A" as the first argument. For example, "a = A(); s(a)". (*)

A related problem occurs if someone writes a Python statement that refers to a method on a specific instance of an object. For example, someone might create an instance using "a = A()" and then later write a statement such as "s = a.spam". Here, the variable "s" again refers to a method of a class, but the reference to that method was obtained through an instance "a". To handle this situation, a different callable object known as a "bound method" is used. This object is also a thin wrapper around the function object for the method. However, this wrapper implicitly stores the original instance that was used to obtain the method. Thus, a later statement such as "s()" will call the method with the instance "a" implicitly set as the first argument.

In reality, the same internal object type is used to represent bound and unbound methods. One of the attributes of this object contains a reference to an instance. If set to None, the method is unbound. Otherwise, the method is bound.

Although bound and unbound methods might seem like an unimportant detail, they are a critical part of how classes work underneath the covers. Whenever a statement such as "a.spam()" appears in a program, the execution of that statement actually occurs in two steps. First, a lookup of "a.spam" occurs. This returns a bound method--a callable object. Next, a function call operation "()" is applied to that object to invoke the method with user supplied arguments.

__________
(*) In Python 3000, the concept of unbound methods has been removed, and the expression "A.spam" returns a plain function object. It turned out that the restriction that the first argument had to be an instance of A was rarely helpful in diagnosing problems, and frequently an obstacle to advanced usages --- some have called it "duck typing self" which seems an appropriate name.
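For readers following along in a current interpreter, here is a small sketch added for illustration (it is not part of the original post, and it uses Python 3 syntax rather than the Python 2 of the post) showing both the first-class status of methods and the bound-method versus plain-function distinction that the footnote describes:

    # Illustrative sketch in modern Python 3 (not from the original post).
    class A:
        def __init__(self, x):
            self.x = x
        def spam(self, y):
            print(self.x, y)

    a = A("hello")

    s = A.spam        # in Python 3 this is a plain function, not an "unbound method"
    s(a, "world")     # the instance must be supplied explicitly -> prints: hello world

    t = a.spam        # a bound method: it remembers the instance "a"
    t("again")        # equivalent to A.spam(a, "again") -> prints: hello again

    # Methods are first-class objects like everything else:
    callables = [s, t]            # stored in a list
    table = {"bound": t}          # stored in a dictionary
    table["bound"]("once more")   # prints: hello once more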
carl (February 28, 2009 at 10:05 PM): Very glad to hear that unbound methods (and the self type restriction) is gone in Python 3000! Somehow I missed that in reading through the change notes.

Nick Fabry (March 2, 2009 at 8:42 AM): Thanks for the explanation of why bound and unbound methods existed. I had learned about both, and used them in my own code, but they never made sense - I never quite understood their purpose. Your historical explanation clicked for me and made it crystal clear! It also clarified the reason for ditching unbound methods in favor of plain functions. Have you ever thought of writing a python teaching book that approaches it from a historical point of view? You have a knack for conceiving & explaining things clearly...

Juanjo Conti (April 25, 2009 at 6:39 AM): Spanish translation here.
| 编程 |
2014-23/1027/en_head.json.gz/5175 | Developer Reading List Andrew Binstock, February 25, 2014 Java 8, JavaScript, Functional Programming, and Software Engineering
Java SE 8 for the Really Impatient by Cay S. Horstmann
Java 8, probably the most important release of the language since Java 2, is set to be launched next month. As such, it is attracting a slew of new books on the many features that will debut in this version. One of the first to hit the market is this 200-page volume by Cay Horstmann, who is the lead coauthor on the definitive Java reference, Core Java. In this book, which is intended for developers who already know Java, Horstmann goes through each of the major additions, explains its use and benefits, and demonstrates the syntax through short snippets of code. In addition, where necessary, he provides relevant details of what is happening under the hood. His sense of how much to explain and where is impeccable, so that the book is really perfectly matched to its target audience. And, as in Core Java, the explanations are crisp, lucid, and authoritative.
I was also impressed by Java SE 8 for the Really Impatient's choice of topics. All the principal advances -- lambdas (closures), the stream API, improved libraries and collections -- are covered, of course, but so are topics not typically associated with Java 8: the Nashorn JavaScript engine and JavaFX 2.0, both of which are now bundled with the new JDK. In my 2012 review of Horstmann's last book, Scala for the Impatient, I complained about his use of numerous small snippets to teach a new language. He uses a similar technique here (although the snippets are longer), but it works well in this context because he's feeding information on discrete topics to readers who already know Java. In fact, snippets are an ideal demonstration choice for new features, and Horstmann uses them here to good advantage.
I suspect that almost every publisher will be putting out some book summarizing the new features in Java 8 for the legions of Java programmers. But I doubt that many of the resulting works will be as informative and rewarding as this one. Highly recommended. | 编程 |
2014-23/1027/en_head.json.gz/17004 | C++ Reading List Andrew Binstock, May 28, 2013 The new C++11 standard has led to a flood of new books and updates to classics. These are the core books you need.
C++ Programming Language, 4th Edition by Bjarne Stroustrup
This book is rightfully viewed as the "Bible" of C++ programming. It's the authoritative exposition of the language, its features, and its peculiarities, all written with considerable clarity by Stroustrup, who designed C++. Some readers might view the ANSI C++ document as a more definitive source of information, but it is a rather terse reference resource intended for readers who already know the language. This book, in contrast, gives friendly explanations of new features, coupled with advice on things to do and practices to avoid, making it a more approachable choice for readers needing to understand specific features. In this sense, this book is more a reference than a tutorial. Some physical aspects detract from the book, especially the choice of printing code without using a monospaced font. No matter how aesthetically pleasant this might look to some readers, it throws off regular readers of code, who expect vertical alignments that no longer appear. Despite this, the typesetting of the code is much better than in previous editions. A second concern is one that has to do more with C++ itself than the book. This edition is 1328 pages long. That is roughly 1000 pages more than the original edition. As Stroustrup gives scant coverage of the libraries, these numbers are indicative of how much more complex C++ has become. These concerns notwithstanding, I don't see how serious C++ programmers looking to use the new features of the language can proceed without this work. Definitely recommended.
| 编程 |
2014-23/1028/en_head.json.gz/6329 | testingReflections.com
The mind-share information resource for software testing, agile testing and test-first/test-driven development
More IMVU comment followup: Timothy Fitz's reply
Submitted by Anonymous (not verified) on Sun, 08/03/2009 - 06:25

In response to my post on IMVU, I was delighted to receive a reply from Timothy Fitz, whose original blog entry triggered my investigation.

There are many things to like about Timothy's reply. First of all, it's honest and forthright. Second, he seems not to be taking personally the criticism of the product that he and his company have been working on for years. It's a rare skill not to take things personally. So, thank you, Timothy, for that.

He begins: I would like to clarify, we do have a Quality Assurance staff. There are numerous quality engineering tasks that only a human can do; exploratory testing and complaining when something is too complex come to mind. They're just not in the "take code and put it into production" process; it can't scale to require QA to look at every change so we don't bother. When working on new features (not fixing bugs, not refactoring, not making performance enhancements, not solving scalability bottlenecks, etc.), we'll have a controlled, deliberate roll-out plan that involves manual QE checks along the way, as well as a gradual roll-out and A/B testing.

When someone complains that something is too complex, I've been trained by Jerry Weinberg and his work to ask "too complex compared to what?" In the same way, when someone says that it doesn't scale to have testers look at every change, I ask why it can't scale. Programmers make every change, don't they? After all, "it can't scale" is one of the great myths about the Agile Manifesto. There's a difference between "it's too complex" and "I don't have a more useful model than the one I currently have"; between "it can't scale" and "I don't know how to solve the problem of making it scale."

One approach to solving complexity or scaling problems is to reconsider what testing is and where it happens. Pair programming in which no tests are written is a form of testing (we often call it "continuous review", but review is a form of testing). Behaviour-driven development, in which we check that each function at least to some degree does what it should do as we build it, is a form of testing. And continuous deployment is a form of testing, too.

One definition of testing is "the gathering of information with the intention of informing a decision" (that's paraphrased from Perfect Software and Other Illusions About Testing, by Jerry Weinberg). The complaint that something is too complex is information. Your testers are telling you something about the testability of the product, compared to their ability to test it. There are all kinds of details to the story to which we're not privy.
Maybe they believe that they have to retest everything in the product every time the programmers make a change; maybe they believe that testing means seeing the visible manifestation of every change from the user's point of view; maybe there are insufficient hooks for the kind of automation they want to apply; maybe they are being mandated to do other things that impinge on their ability to study and grasp the issues that they're facing; or maybe they're the creditors on some of the programmers' technical debt, and the number of bug reports that they have to investigate and report is taking time away from their test design and execution—that is, their test coverage.

There are constraints to every testing assignment, and as Jerry says (quoted in James Bach's article in The Gift of Time), it's the first responsibility of the tester to figure out ways to get around those constraints. But that may not always be possible, given the situation.

Another form of testing is asking questions, like "if you don't involve testers when you fix bugs, make performance enhancements, solve scalability bottlenecks, etc., how do you know that you've fixed, enhanced, or solved?" And others like, "What are your testers doing?" "Are they only testing new features?" "Are you aware of how useful skilled testers can be?" "Do you see any opportunities for adding efficiencies to your model of testing?"

Your point about the sheer number of bugs we have? You're right. Our software has way more bugs than I'd like. It's a byproduct of the choices made when the company was small: to prioritize determining what the customer actually wants at almost any cost. We would absolutely love to have a high quality client, and we're working every day towards that goal. Continuous Deployment lets you write software *regression free*; it sure doesn't gift you high quality software. As a start-up, we're faced with hard decisions every day about where to make our product higher quality; most of the complaints you have would be immediately ignored by the majority of computer users and so we can't in good faith prioritize them over the things that ARE causing our users trouble.

I'll respond to the things I disagree with in a moment, but I want to draw attention to the most important aspect of Timothy's reply: he highlights that developing software and services and products and systems is a constant set of tradeoffs, and that, just like the rest of us, he and the rest of the IMVU crew are making these decisions all the time. That's important because, as I'd like to emphasize, my notion of IMVU's quality doesn't matter. "Quality is value to some person(s)". When James and I teach Rapid Software Testing, we add something to that: "Quality is value to some person(s) who matter". I'm an outsider. I don't have any interest in using chat illustrated by anime-looking 3D avatars who teleport from place to place. I have no interest in handing IMVU my money for this service. I have no interest in the community this stuff supports. (So why did I even bother to look at the service? I am interested in software development and testing, and I wanted to investigate the relationship between a million test cases a day, a million dollars a month in revenue, and the system being tested thereby.)

I'm going to introduce something perhaps more controversial here. Even if I were working for IMVU, as a tester, I still wouldn't matter. How can I, a tester, say that?
It's because my role is not to push my values down the throats of the programmers and business people that I serve. Saying that I don't matter is a simplification; my values don't matter as much as the business values and the customer values do. I do matter, but only precisely to the degree that I deliver those people the information that they value to inform their decisions. I can find defects in the features offered by the product; I can tell them about things that I see as driving risk; I can tell them about things that I've discovered that represent a threat to the value of the product to someone who does matter. And at that point, the decision is up to them.

A couple of other points:

While I agree that continuous deployment doesn't give you high-quality software (in the sense of few bugs), I disagree that it lets you write software regression-free. It performs some tests on the software that might find regression if that regression happens to be covered by one of the tests. That's not a bad thing in itself; that's a good thing. The bad part is that, once again, it provides The Narcotic Comfort of the Green Bar. There's a big difference between "our tests find no regressions" and "there are no regressions".

Second, continuous deployment is Way Cool. As Elisabeth suggested, that IMVU can push out 50 deployments a day is impressive. But it reminds me of a story (I think it was Al Stevens) in which you go to the circus. Last year, they had a bear on roller skates, which impressed you. This year, the bear is on motorized roller skates. And you're dazzled for a moment, until you consider, "Wait a second... do I really want to see a bear on motorized roller skates?" 50 deployments a day is impressive, but 50 deployments of what? For what purpose?

Could it be that the focus on deploying 50 times a day represents opportunity cost against other equally or more desirable goals? Goals like, say, addressing the problem "Our software has way more bugs than I'd like"; or addressing the complaint from the testers that the testing mission is too complex; or investigating the functionality and security problems that the customers seem to be reporting and that might represent a serious threat to the value of the product? Is 50 deployments a day providing business value that can't be delivered any other way? Any other way? Would customers or IMVU itself suffer some loss if they had to wait 30 minutes for a code drop, instead of 15? I repeat: I don't know in this case. I don't know whether five deployments a day would be better, or five hundred a day, or one every five hundred days. I do know that part of testing is noticing the cost of one activity and its potential impact on the value of another.

The vast majority of developers take one look at what they think our product is and don't bother to give it a try; I'm happy to see a thoughtful open-minded dive into IMVU from a developer's perspective.

I'm a specific kind of a developer; I'm a tester. As such, it's my particular bent to investigate things, and not to take them on faith, and to report on what I've found. I genuinely appreciate Timothy's thoughtful and open-minded reply, and I thank him for triggering some of the questions and observations above.
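To make the "green bar" point concrete, here is a tiny illustrative sketch of my own (it comes from neither blog): the suite below passes, yet the function still mishandles an input that the tests never exercise, which is exactly the gap between "our tests find no regressions" and "there are no regressions".

    # Hypothetical example: a passing test suite that still hides a defect.
    def apply_discount(price, percent):
        # Defect: negative percentages silently raise the price,
        # but no test below ever tries one.
        return price - price * (percent / 100.0)

    def test_full_price():
        assert apply_discount(100.0, 0) == 100.0

    def test_half_price():
        assert apply_discount(100.0, 50) == 50.0

    if __name__ == "__main__":
        test_full_price()
        test_half_price()
        print("green bar: all tests passed")  # yet apply_discount(100.0, -10) == 110.0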
2014-23/1028/en_head.json.gz/6622 | Download Free Books on Yoga, Religion & Philosophy
THE BAAL SHEM TOV
By SRI SWAMI VENKATESANANDA
He was known as 'Israel ben Eliezer' and he was born around 1700 A.D. in a small
village Okop perhaps in the Ukraine. The place and the date of birth, and the poverty or
the affluence of parentage - these are of interest to scholars, not to men-of-God, the
mystics foremost among whom was Israel. Even he did not leave a clue to these. He was considered the Baal Shem Tov (the Master of the Good Name). Some even considered
that he was Moses. He was not the first Baal Shem (Master of the Name), nor was he the
first Hasid but he is perhaps the best known of them all. The Hasid were sworn to
asceticism and to obscurity: however, when Israel appeared, on the scene he became the
best-known Baal Shem. The Baal Shem Tov did not found a school of thought, though he is regarded as the
founder of modern Hasidism. When his own immediate disciples endeavoured to commit his
teachings to writing, he would gently but meaningfully chide them: "There is nothing
of me in your pages; you thought you heard what I didn't say." Yet soon, he became
something of a legend, thanks to the zeal and the devotion of his devotees. Very early in his life he was orphaned. Legend has it that his parents were a hundred
years old when he was born. However, his father had left him an invaluable legacy in the
admonition: "I leave before I can make you into a man who fears God and loves those
who fear him. Remember one thing: God is at your side and he alone is to be feared."
This, to young Israel was gospel, in every sense of the word. Israel was obliged to marry early in life, but lost his wife soon after marriage. He
eked out a meagre living doing odd jobs. Once again he was betrothed, this time to a girl
who was a baby. Soon after this, the girl's father died. Years later Israel went to the
girl's brother to claim her. Her brother tried to dissuade her; and even Israel warned her
that his life had a spiritual goal and that as his wife she, too, would have to face a
difficult life. But Hannah was prepared for all this. The brother-in-law was loath to let
the sister and her eccentric husband live near him, and he sent them away to the
Carpathian mountains where the young couple lived a miserable life. The spiritual radiance of the Baal Shem Tov grew in brilliance all the time. It is said
that on a Saturday a young man who was the Baal Shem's guest woke up at midnight with a
fright to find that there was a huge flame in the house; and he was wonderstruck to
discover that the flame issued from the body of Israel! It is said that when he was thirty-six years of age, he had a vision in which it was
revealed to him that it was his destiny to be a spiritual leader. In his inborn humility
and simplicity, he felt that he was unworthy of this and fasted for three days. But the
divine will inexorably led him along the path of leadership. He had many holy visions; and
the people recognised in him an unquestionable leader. The Baal Shem Tov was one with all, and everyone, however lowly and unworthy in the
eyes of the people had free access to him, and found in him a great helper. Mr. Elie
Wiesel says of the Baal Shem Tov: "To have his gaze rest on you meant feeling his
fire run through you. An old peasant protects him from the cold - in return, the peasant
will become rich and live for a hundred years. A boy recites his lesson with fervour - he
will reap glory among his peers. A thief has the misfortune to cross his path. Discovered,
he turns to the Master and says: 'Since you know how to look, why don't you rather try to
see the good?' And so, even the thief enters the enchanted garden of Hasidic legend."
There were those who criticised him; but he found no one worthy of his condemnation. He
stooped to conquer even the vilest among men; no one was beneath his attention. It is said
that he constantly travelled from one village to the other, giving everyone the feeling
that he was everywhere at the same time. The Baal Shem Tov's teachings were exceedingly simple. He did not condemn
scholasticism, but he pointed out: "God listens to the shepherd playing his flute as
readily as he listens to the saint renouncing his earthly attachments." He taught
that the daily life itself is divine life. His was a gospel of joy. He pointed out that it
was only a self-centred selfish man who was subject to unhappiness, whereas one who became
aware of humanity rejoiced and such joy itself led one to God. The Baal Shem Tov's vision of humanity included all human beings whether they were
regarded saintly or sinful; his vision of life included all aspects of it. He did not
attempt to convert anyone to Judaism. He encouraged
all to be faithful to one's own faith, to be faithful to one's own self. Mr. Wiesel says:
"The Baal Shem's major concern was to create links at every level. To him, everything
that brought people together and consolidated the community was good; everything that
sowed discord was bad." Many miracles are attributed to the Baal Shem Tov and it is said that he taught even
the angels, even as legend accounts angels among the disciples of Lord
Buddha. But the supreme beauty of his life lay in the utter simplicity of his
teachings and the radiant divinity of his life. Towards the end of his life, his ecstasies were even more intense than they had been
before; his behaviour became even more eccentric than before. The Baal Shem Tov was sixty years of age. And it was the passover. He took ill. After
seven weeks of this illness he sensed that his end was near. The disciples gathered around
him, grief written on their faces. He himself consoled them: "Why do you cry? I am
leaving by one door, only to enter by another."
2014-23/1028/en_head.json.gz/6799 | [p. 157]
The Zionist Movement
Translated by Eszter Andor and finished by Judie Goldstein
Trends Within the Movement and What They Supported

In the interwar period the most important activities of the Zionists in Krinki were to prepare the youth for aliya to Eretz Israel, educate children in a national-Hebrew spirit, and win people over to the Zionist idea and its realization. At the same time the Zionists were also ready to tackle the various needs of the Jews in Krinki. The most important elements of the Zionist movement in Krinki were the groupings called For the Labouring Eretz Israel, especially the Tsairei Tsion, which later united with the rightist Poalei Tsion. They pursued a wide variety of activities. They attracted the Zionist-socialist and pioneer youth and devoted themselves to Hebrew education in the shtetl. Beside this, they carried out a vigorous general Zionist activity in the Keren Hayesod, the Jewish national fund, the so-called Shekel campaigns, the Hebrew Tarbut societies, and so on.

The leadership of the Hashomer Hatzair organization
The elected representatives of the Tsairei Tsion were active in the town council as well as in the council of the Jewish community since the first democratic council elections, which had been carried out at the end of the First World War. "Despite the fact that many of our comrades in the Tsairei Tsion had no franchise because they were too young, and not only could they not be elected but they could not even vote in the elections to the town council, we had a great success thanks to the popularity of our leaders in the various domains of social life of the shtetl, especially in education," writes Bendet Nisht about the leaders of the above-mentioned trend. The Tsairei Tsion founded the Hekhalutz movement by 1919 in Krinki. They provided Hebrew evening classes for the workers who were preparing to make aliya to Eretz Israel and in 1921 they arranged locksmith courses for the olim (the new immigrants). In the same year -- similarly to other areas in Poland -- they carried out a successful collection of tools and money to buy tools for the workers in Eretz Israel in Krinki and the neighboring shtetls.

The council of the Poalei Tsion with the drama circle and the leadership of the Hebrew elementary school
[p.158] Committee of Poalei Tsion [Youths of Zion] Krynki 1929
In the same year they also opened general evening courses on Jewish history, geography, natural sciences, political economy, Yiddish and Hebrew. By the beginning of 1926, the Poalei Tsion Union had already set up a youth organization with 100 members and 80 adults. Our comrades, describes a report from Krinki, participate actively in various social institutions, like the People's Bank, the orphans' committee, and so on, and they have a great influence on the life of the local society.
In 1934, 500 workers, common Jews and young people from the Poalei Party, the Hekhalutz, the Ha-Oved (The Worker), the Freedom Party and the Ha-Poel sports club and the Hekhalutz Hatsair kibbutz, participated in the solemn First of May demonstration organized by the League for the Laboring Eretz Israel. But the movement flourished and reached its greatest influence in the last year before the outbreak of the Second World War. That year the Freedom-Hekhalutz Hatsair movement had 200 members and the Ha-Poel 50 in Krinki. And in the elections to the 21st Zionist Congress, the League for the Laboring Eretz Israel received 406 out of the 449 votes in Krinki.
The Poalei Tsion Party entered the elections to the Krinki town council with the slogan for or against Israel now at a time when the English government had just published its White Book against Jewish immigration to Eretz Israel and its colonization by the Jews. [The party] won 6 seats out of the 8 seats accorded to Jewish deputies (the other two seats were won by the Bund). The Jewish public of Krinki identified with the for Israel slogan. The Hekhalutz Aliya To Eretz Israel
As mentioned before, a Hekhalutz union was founded in Krinki in 1919. And it started immediately to prepare its members for manual work and communal life. They leased a huge garden near the bath-house and a group of young boys and girls started to learn agriculture there and to get as much practice in it as possible until the gates of Eretz Israel would be open to aliya again. In 1919-20, Krinki was a transit point for pioneers who arrived in the shtetl from the surrounding area in order to go on a hakhsharah and then make aliya to Eretz Israel. This is how Sheyme Kaplan describes this phenomenon:
Pioneers at work in the hakhsharah-kibbutz in Krinki, 1935
At that time Krinki was within the so-called Curzon line, which was considered a territory occupied by Poland where a number of Polish laws, such as compulsory military service, did not apply. 'Hares,' that is, young boys from the territories that were already annexed by Poland by law, used to come to our town. The young people arrived with a recommendation letter from their local Zionist organization in which we were kindly asked to help the bearer of the letter, pioneer candidates for aliya to Eretz Israel. The idea was that we would provide these boys with documents proving that they were residents in Krinki (that is, that they were not liable to military service) so that they could get a passport and an English visa to Eretz Israel.
First of all we memorized with each of them street names in Krinki and the names of some local residents so that the boys would be able to argue and prove to the authorities if necessary that they were really locals. At the same time, we invited the chairman of the town council to a feast at Heykl Olian's and made the gentleman rather drunk with liquor. And [Hebrew quotation] we would have him sign the appropriate certificates on the basis of which the pioneers who arrived in our town could get the necessary documents and make aliya to Eretz Israel.
In 1919 Bendet Nisht participated in the first conference of the Hekhalutz of Lithuania (strictly speaking, of the Grodno-Vilnius district), which assembled in Grodno, as the delegate from Krinki. He also represented Krinki on the national Hekhalutz conference organized in Warsaw a year later and he was elected to be a member of the central committee of the movement. The first group of pioneers from Krinki made aliya to Eretz Israel in the summer of 1920. Among them Sheyme Zak and Zvi Rotbart (Carmeli), may he rest in peace, Eyzik Ostrinski, and Avrom Neyman, Yofe Furman (a farmer today) and her brother Motke. They spent the first few years in Eretz Israel working in a group with the pioneers from Grodno on the forestation of Mount Carmel and in Atlit and in the citrus plantations in Petakh Tikva, and later in construction in Rishon LeTsion, Ramlah, Jerusalem and Motza. Then a part of them went into agriculture with the Geva Group in Jezereel valley where they were joined by Lea Nisht (Zak) and Lize Rotbart (who is now the wife of Dovid Tubiu, the first mayor of the reconstructed Beer Sheva and its builders). The first pioneers, including the young Krinki pioneers, laid the foundations for the subsequent wider aliya to Eretz Israel, which built a country for the Jewish people that would be independent until the end of time.
Krynki, Poland
2014-23/1713/en_head.json.gz/32328 | Understanding Programming
PHP, MySQL, JavaScript & HTML5 For Dummies Extras
Getting Started with Java Programming
By Barry Burd from Beginning Programming with Java For Dummies, 2nd Edition
The late 1980s saw several advances in software development, and by the early 1990s, many large programming projects were being written from prefab components. Java came along in 1995, so it was natural for the language's founders to create a library of reusable code. The library included about 250 programs, including code for dealing with disk files, code for creating windows, and code for passing information over the Internet. Since 1995, this library has grown to include more than 2,700 programs. This library is called the API — the Application Programming Interface.
Every Java program, even the simplest one, calls on code in the Java API. This Java API is both useful and formidable. It's useful because of all the things you can do with the API's programs. It's formidable because the API is so extensive. No one memorizes all the features made available by the Java API. Programmers remember the features that they use often, and look up the features that they need in a pinch.
So many ways to write computer programs
To write Java programs, you need three tools:
A Java compiler
A Java Virtual Machine
The Java API

You have at least two ways to get these tools:

You can download these tools from the Sun Microsystems Web site.
You can use the tools that come with a commercial product. If you own a copy of Borland JBuilder, Metrowerks CodeWarrior, IBM Visual Age for Java, or IBM WebSphere Studio Application Developer (WSAD), then you already have the tools that you need.
Two bags of goodies
Sun's Web site bundles the basic Java tools in two different ways:
The Java Runtime Environment (JRE): This bundle includes a Java Virtual Machine and the Application Programming Interface. With the JRE, you can run existing Java programs. That's all. You can't create new Java programs, because you don't have a Java compiler.
The Software Development Kit (SDK): This bundle includes all three tools — a Java compiler, a Java Virtual Machine, and the Application Programming Interface. With the SDK, you can create and run your own Java programs.
Note that an older name for the Java SDK is the JDK — the Java Development Kit. Some people still use the JDK acronym, even though the folks at Sun Microsystems don't use it anymore.
How do you type this stuff?
A computer program is a big piece of text. So to write a computer program, you need a text editor — a tool for creating text documents. A text editor is a lot like Microsoft Word, or like any other word processing program. The big difference is that the documents that you create with a text editor have no formatting whatsoever. They have no bold, no italic, no distinctions among fonts. They have nothing except plain old letters, numbers, and other familiar keyboard characters. That's good, because computer programs aren't supposed to have any formatting.
A document with no formatting is called a plain text document.
Documents without formatting are fairly simple things, so a typical text editor is easier to use than a word processing program. (Text editors are a lot cheaper than word processing programs, and they're lightning fast. Even better, text editors take very little space on your hard drive.)
You can use a word processor, like Microsoft Word, to create program files. But, by default, word processors insert formatting into your document. This formatting makes it impossible for a Java compiler to do its job. Using word processors to write Java programs isn't recommended. But, if you must use a word processor, be sure to save your source files with the .java extension. (Call a file SomeName.java.) Remember, also, to use the Save As command to save with the plain text file type.
Using a customized editor
Even if you don't use an integrated development environment, you can use other tools to make your programming life easy. Think, for a moment, about an ordinary text editor — an editor like Windows Notepad. With Notepad you can
Create a document that has no formatting
Find and replace characters, words, and other strings
Copy, cut, and paste
Not much else
Notepad is fine for writing computer programs. But if you plan to do a lot of programming, you may want to try a customized editor. These editors do more than Windows Notepad.
Shortcuts for compiling and running programs
Explorer-like views of your works in progress
When it comes to choosing a custom editor, two favorites are JCreator and TextPad. JCreator has lots of cool features, including tools to write some boilerplate Java code. TextPad has fewer Java-specific features, but TextPad is a great general-purpose text editor.
Beginning Programming with Java For Dummies, 4th Edition | 编程 |
2014-23/1713/en_head.json.gz/37673 | Microsoft stealth launches 'historic' programming language
Hidden F# strikes right note
Tim Anderson,
Launching a new language is easy - getting it used is hard. The combination of existing code and existing skills is a strong barrier to adoption, and even excellent languages like Ruby and Python have struggled to break out of their niches.

What hope is there for F#, the new language that Microsoft has sneaked into Visual Studio 2010, launched this month?
"I think it's an amazing moment," says its principal designer, Microsoft researcher Don Syme, an Australian now based in Cambridge. "It represents part of the history of programming language design and development here in the UK."Perhaps it does. But you would not know it from most of Microsoft's marketing effort for the new Visual Studio. F# tends to get lost in the fuss about other new features. I downloaded Microsoft's Why upgrade to Visual Studio 2010? white paper and not only is F# missing from the "Top ten reasons to buy" - it's not actually mentioned at all.That is a shame. F# is a functional programming language, and there are good reasons why functional programming deserves wider use, such as its suitability for the concurrent programming required for optimal performance on today's multi-core systems.F# is also succinct. During a talk at the recent QCon London programming conference, Syme showed a series of slides, headed Pleasure and Pain, showing how F# code can be shorter and more expressive than its C# equivalent, sometimes to the extreme.Following his QCon talk, I spoke to Syme about the new language. How did F# begin?"I've been doing functional programming since 1992. I had been using the ML family of languages, including | 编程 |
2014-23/1420/en_head.json.gz/10160 | 3/14/201405:15 PMAndrew BinstockNewsConnect Directly0 commentsComment NowLogin50%50%
The JavaScript AlternativesThree languages compete to make JavaScript easier to write and faster to execute. Which to choose?Two and a half years ago, in discussing JavaScript's ubiquity, I projected that this trait alone would make the language the continued target of new languages and compilers. And in fact, this has happened. Many languages now offer the ability to compile to JavaScript in addition to their original principal targets.
For example, among those that also compile to native code, there are Nimrod, which we discussed last month; Fantom; and the gaming language Haxe. In addition to these, there are many standalone tools that translate code from your favorite language to JavaScript. Of these, the most famous by far is the underappreciated Google Web Toolkit (GWT), which converts Java code to JavaScript. (I say underappreciated because the tool definitely has magical aspects to it. For example, you can live debug Java code, which is mapped behind the scenes to the actually executing JavaScript.)
There is another segment of the industry, though, where a lot of action is taking place. Entrants here aim to correct the perceived shortcomings of JavaScript by either extending or improving the language and offering code-to-JavaScript compilation. The most widely known players are CoffeeScript, Google's Dart, and Microsoft's TypeScript. Their approaches are rather different, but all aim at enabling JavaScript to be used in larger projects than it was ever intended for.
The first of these languages to come to market (in 2010), CoffeeScript is probably also the most established. It borrows concepts from both Ruby and Python to reduce clutter and remove some ragged aspects of JavaScript syntax. To be comfortable with CoffeeScript, you must be willing to forgo traditional mainstream programming constructs — curly braces, semicolons, etc. — and adopt new syntactical elements like meaningful white space. Users of Python and Ruby already have partially adopted those conventions, and so it's no wonder they in particular have embraced CoffeeScript. For example, it's now part of Ruby on Rails (as of v. 3.1). And at GitHub, it's the recommended language for doing Web development.
Read the rest of this article on Dr. Dobb's.
Prior to joining Dr. Dobb's Journal, Andrew Binstock worked as a technology analyst, as well as a columnist for SD Times, a reviewer for InfoWorld, and the editor of UNIX Review. Before that, he was a senior manager at Price Waterhouse. He began his career in software ... View Full BioComment | Email This | Print | RSSMore InsightsWebcasts
Convergence today, Hyperconvergence tomorrow? | 编程 |
2014-23/1420/en_head.json.gz/15059 | JavaScript training for every employee? One company says yes
Software firm FreeCause mandates that everyone learn JavaScript -- and they mean everyone.
Howard Baldwin (Computerworld (US)) on 19 September, 2012 11:39
Coding is all the rage these days, as everyone from New York City Mayor Michael Bloomberg to urban teenaged girls tries a hand at computer programming. But few organizations have taken the trend quite as seriously as FreeCause, a Boston-based developer of loyalty management software for retailers and affinity groups. Every FreeCause employee, from CEO Mike Jaconi on down, is learning JavaScript. Inspired by the dictate within its Japanese parent company Rakuten to have all its employees become fluent in English, Jaconi decided to have everyone, from himself down to the interns, learn to code. Given that edict, it would only be natural to assume Jaconi is a geek, eager to imprint his culture on the 7-year-old company. In fact, he has a degree in political science from the University of Southern California and spent some time working on John McCain's presidential campaign. Nevertheless, he is passionate about the benefits of group coding. FreeCause CEO Mike Jaconi: Having everyone learn JavaScript helps to "raise the level of intelligent dialogue and improve collaboration between the various teams within the company." "I felt it would only raise the level of intelligent dialogue and improve collaboration between the various teams within the company," Jaconi says. And so he announced in January that any of the 60 employees who didn't already know JavaScript, the language of its software development team, would take programming lessons, whether their job required it or not. "Our employees' livelihood is based on a complex technology," says Jaconi. "We wanted them to know more about the technology our customers are touching. Our 'codinization' program was important for both client dialogue and cross-departmental communication." Jaconi's announcement was met with both enthusiasm and skepticism, but the results -- even among the skeptics -- have been encouraging and enlightening, he says, and in at least one case, the gamble has paid off in ways that improve the bottom line. Could such an approach produce similar results at other companies? Josh Bersin, CEO of Bersin & Associates, an Oakland, Calif.-based analyst firm focusing on training and talent management, has never heard of a company training all its employees to program. He does cite tech firms like IBM and EDS (now part of Hewlett-Packard) that have trained large swaths of customer-facing employees on specific technologies in order to ensure a common level of institutionalized knowledge. "If you're in product support or a customer advocate, and you know how the product works because you've learned how it's coded, you can answer questions in a more valuable way," says Bersin. "And when clients ask for configuration and customization, everyone understands the implications. I've just never seen it done to this extent." At first, pushback is part of the package FreeCause uses online training from Codecademy to teach the basic levels of coding, asking each employee to spend two hours a week with it. Those online lessons are augmented with two weekly one-hour meetings with a lead programmer, who acts as a mentor, and a team of three or four others, during which lessons are reviewed. A monthly "boot camp" is designed to impart more general programming lessons. Each team is also responsible for development of a new coding project that it will present to the company later this year -- projects may involve creation of a new feature or improved functionality for a Web page within the FreeCause application. 
The company has not yet determined future activity, such as refresher courses or work on other languages. Jaconi admits to being initially nervous about pushback from the employees. "I didn't anticipate that we'd have as much buy-in as we did up front," he admits. That didn't happen immediately. Several employees, both technical and non-technical, report being skeptical at the outset. We [engineers] all went to school for years to learn [coding], so the idea of casually teaching it to employees was daunting. Kyle Gifford, implementation engineer One of the former is Kyle Gifford, an implementation engineer for FreeCause who was already well-versed in a variety of Web-development technologies, including JavaScript. "It's a big initiative with a lot of pieces," he says. "And learning to code is different from learning a foreign language. It's not just words and syntax, it's semantics. It's being able to analyze problems and come up with solutions. We [engineers] all went to school for years to learn this, so the idea of casually teaching it to employees was daunting." One of the latter is Len Fainer, former director of product management (who has since left FreeCause for a job with a shorter commute). Fainer had a background in marketing and business administration, but never studied computer science. He and his team found it difficult to keep up with the lessons given their workload, and he's not sure how much of the knowledge he'll retain over time. That said, he admits, "I understand the value to the company from a holistic point of view." He notes that product managers sometimes get software-related requests from customers that may not be as simple as they sound, and they now better understand the effort involved in building and maintaining them. CTO Antoine Hage expands upon that point: Previously, he says, a business person might promise a programming change to a customer, thinking it would be easy to update a feature. "Now they understand the challenges, so when they're selling a solution, they know how much time a new feature might take. [And] they can answer questions immediately without having to bring in a technical salesperson." Interestingly, the programming requirement hasn't limited hiring efforts. In its initial interviews, FreeCause highlights the ongoing cross-company programming requirement, and none of the six non-technical people it's hired in the last few months has balked at the idea, says Hage. Offloading engineering tasks One goal in implementing the program was to see how many tasks the company could offload from its engineering staff. As Jaconi explains, "In any company, engineers complain that not only does the business side not understand what they do, but they're overloaded with mundane tasks that never become a priority." That frustrates both sides. According to Hage, the company used to allocate about 30% of the engineering staff's time to fulfilling requests from the business side for new features in the company's software. Offloading even 20% of that time to let engineers focus on high-level tasks delivers a huge benefit, he says -- especially in a technology company, where engineering represents a high percentage of costs. Data analyst Corinne Salchunas: "Working with my coding mentor ... we were able to improve clickthroughs at least sixfold." Data analyst Corinne Salchunas is one employee who has taken up the challenge. Salchunas is responsible for analyzing the effectiveness of the company's loyalty management software.
With a degree in economics, she had not done any programming in school and none at FreeCause beyond Excel macros and limited database queries. "One of our features notifies people when they can earn points on a particular site, but I noticed that users weren't clicking on these notifications very frequently," says Salchunas. "I realized that we weren't notifying users clearly enough. Working with my coding mentor, I came up with some new versions of the notifications, including having the pop-ups appear sooner, and between the messaging and the timing, we were able to improve clickthroughs at least sixfold." "That was a great demonstration of how powerful codinization could be," says Jaconi. "We never anticipated that that kind of validation would happen so quickly." Is coding for everyone? Would something like FreeCause's cross-company codinization program work for every company? Very likely not. But the idea behind the program should resonate for both CEOs and CIOs. "As we become more dependent on technology," says CEO Jaconi, "it's tough to argue against people learning what their future work might be based on." Manufacturing jobs are turning into software engineering jobs, he says. "We're interacting with more technology than ever before, so having a fundamental understanding of what our future is built upon will make us better consumers and better professionals." Ultimately, Jaconi says, "If you understand the technology your company is built on, you can only become better at what you do." Bersin, the talent and training analyst, agrees on the importance of common vision and execution within an organization. "All well-run companies have a curriculum that they want employees to know. It's taught either through word-of-mouth, or reinforcement, or certification. It helps everyone speak the same language." Frequent contributor Howard Baldwin also wrote Should the CIO know how to code? He lives and works in Silicon Valley. Read more about it leadership in Computerworld's IT Leadership Topic Center.
| 编程 |
2014-23/1420/en_head.json.gz/17429 |
Turn a Parsnip into a Turnip with Edit Distance Algorithms
Edit distance algorithms tell you how different two strings are from each other. That lets you see the differences between different versions of a string or file, or add differencing tools to your applications.
by Rod Stephens
If you do as much writing as I do, then you're probably familiar with Microsoft Word's tracking features. They let you easily see what's changed in different versions of a Word file.
But what if you want to see what's changed in a plain text file? What if you want to compare different versions of data files? What if your project no longer passes its unit tests and you want to see what changed in the source code files in the last week?
If you have these files under change control, then you’re probably done because a decent change control system will highlight changes between different versions. If these files aren’t under change control, or you just like figuring out how these things work, you can build your own tool to see what’s changed.
This article explains how you can see what’s changed between two documents or two strings. It describes an algorithm that you can use to find differences and includes C# and Visual Basic examples in the source code download.
The eventual goal of this article is to see how two documents differ, but the algorithm I'm going to describe is easier to understand if you consider two strings instead, so I'll start there. Once you know how to find the difference between two strings, you can generalize it to find the difference between two documents, or two of anything that are made up of things like letters or paragraphs. When you ask for the difference between two strings, you really want the smallest difference. Obviously you could delete every letter from the first string and then insert every letter from the second to give the new string. That gives you the new string but doesn't really help you understand how the two are related. If the two strings share many letters, then this solution doesn't show you what has "changed" to get from the first string to the second. For example, to convert "cat" into "cart," you could delete the c, a, and t, and then insert c, a, r, and t, which would require seven changes. It's easy to see in this case that a much simpler solution is to simply insert the "r" in "cat" to get "cart" in a single change. That more accurately tells you what changes between the two strings. An edit distance is a measure of how different two strings are. There are several ways to define edit distance but for this article assume that it's simply the smallest number of deletions and additions needed to convert one string into another. For example, the edit distance between "cat" and "cart" is 1.
For a simple case like the cat/cart conversion it’s easy to guess the edit distance. When the strings are less similar, it’s a bit harder to find the best solution. For example, one way to transform “parsnip” into “turnip” is to:
1. Delete "p": arsnip
2. Delete "a": rsnip
3. Insert "t": trsnip
4. Insert "u": tursnip
5. Delete "s": turnip
This gives an edit distance of 5, but is that the best solution possible? Looking at the letters, it’s not always obvious which changes give the best result.
One way to make finding the edit distance easier is to look at an edit graph that shows the possible transformations from one string to another. Figure 1 shows an edit graph for the parsnip/turnip transformation. Figure 1. Turnip Transformation: The blue path through this edit graph shows the shortest way to transform “parsnip” into “turnip.”
To build the graph, make an array of nodes as shown in Figure 1. Write the letters of the starting string across the top and the letters in the finishing string down the left side. Draw links connecting each dot to those below and to the right.
Any point in the graph that corresponds to the same letter in both strings is called a match point. For example, “parsnip” and “turnip” both contain an “r” so the node below the “r” in “parsnip” and to the right of the “r” in “turnip” is a match point. In Figure 1, the match points are shaded pink.
To finish the edit graph, add a link leading to each match point from the node that is above and to the left, as shown in Figure 1.
The graph looks confusing at first but it’s actually fairly simple. The goal is to follow a path from the upper left to the lower right corner. Each move to the right corresponds to removing a letter from the original string. In Figure 1, the first two moves to the right along the blue path correspond to removing the letters “p” and “a” from “parsnip.” Each move down corresponds to inserting a letter in the new string. In Figure 1, the next two moves along the blue path correspond to inserting the letters “t” and “u” to the string. Diagonal moves correspond to leaving a letter unchanged. The next move along the blue path corresponds to leaving the “r” alone.
With these rules, finding the edit distance and the smallest series of changes to convert the string is easy. Simply find the shortest path through the edit graph with right and downward links costing one and diagonal links costing nothing. To think of this in another way, you must find the path through the graph that uses the most diagonals.
If you think in those terms, then it’s easy to see that the blue path represents the best solution.
(Note that there may be more than one path with the same shortest distance through the graph. In that case, there are multiple ways to convert the first string into the second with the same cost.)
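The article's downloadable examples are in C# and Visual Basic and are not reproduced here. Purely as an illustrative sketch in Java (a language choice made for this sketch, not taken from the download), the edit distance defined above, counting only deletions and insertions, can be computed with a small dynamic program that amounts to finding the cheapest path through the edit graph: right and down moves cost one, and diagonal moves at match points cost nothing.

public class EditDistance {
    // cost[i][j] is the cheapest way to turn the first i letters of "from"
    // into the first j letters of "to" using only deletions and insertions.
    static int distance(String from, String to) {
        int[][] cost = new int[from.length() + 1][to.length() + 1];
        for (int i = 0; i <= from.length(); i++) {
            for (int j = 0; j <= to.length(); j++) {
                if (i == 0) {
                    cost[i][j] = j;                              // insert every remaining letter
                } else if (j == 0) {
                    cost[i][j] = i;                              // delete every remaining letter
                } else if (from.charAt(i - 1) == to.charAt(j - 1)) {
                    cost[i][j] = cost[i - 1][j - 1];             // match point: free diagonal move
                } else {
                    cost[i][j] = 1 + Math.min(cost[i - 1][j],    // delete a letter (move right)
                                              cost[i][j - 1]);   // insert a letter (move down)
                }
            }
        }
        return cost[from.length()][to.length()];
    }

    public static void main(String[] args) {
        System.out.println(distance("cat", "cart"));        // prints 1
        System.out.println(distance("parsnip", "turnip"));  // prints 5
    }
}

Running it on the examples above gives 1 for cat/cart and 5 for parsnip/turnip, which matches the blue path through the edit graph in Figure 1.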
| 编程 |
2014-23/1420/en_head.json.gz/20952 |
Good Enough Software
A Conversation with Andy Hunt and Dave Thomas, Part III
by Bill Venners
Pragmatic Programmers Andy Hunt and Dave Thomas talk with Bill Venners about the myth of bug-free software, the importance of specifying level of quality as a
system requirement, and the need for every team member to inject quality
throughout the development cycle.
Andy Hunt and Dave Thomas are the Pragmatic Programmers,
recognized internationally as experts in the development of high-quality software.
Their best-selling book of software best practices, The Pragmatic
Programmer: From Journeyman to Master (Addison-Wesley, 1999), is filled with practical advice
on a wide range of software development issues. They also authored
Programming Ruby: A Pragmatic Programmer's Guide (Addison-Wesley,
2000), and helped to write the now famous Agile Manifesto.
In this interview, which is being published in ten weekly installments, Andy Hunt and
Dave Thomas discuss many aspects of software development:
In Part I. Don't Live with Broken Windows, they
discuss the importance of software craftsmanship
and the importance of staying on top of the small problems in your projects.
In Part II. Orthogonality and the DRY Principle, they discuss the importance of keeping your
system orthogonal, and the real meaning of the DRY, or Don't Repeat Yourself,
principle.
In this installment, they discuss the myth of bug-free software, the importance of specifying level of quality as a system requirement, and the need for every team member to inject quality throughout the development cycle.
The Myth of Bug-Free Software
Bill Venners: You say in your book, The Pragmatic Programmer, that "the real world won't let us produce much that's perfect, particularly not bug-free software." Why is that?
Andy Hunt: It's economics. Look at some very solidly crafted code, for example, the space shuttle. The cost per line of code for the space shuttle is something like a thousand dollars per line. It's so expensive because of the amount of care that goes into specifying the code, reviewing the code, the whole process they use. It is understandable that if you're shooting up billion dollar spacecraft with human lives at stake, you're going to put a little bit of care into that software. But everything has its cost.
The space program has had its share of bugs. Various Mars probes have flown off into the weeds. Rockets have crashed. But nevertheless the space program has a pretty good track record on software quality, but at tremendous cost. You can't spend a thousand dollars per line of code in a dot com or even most major corporations. You simply can't afford that.
People tend to think software is free, because it has no real-world presence. Software is not substantial like disk drives or automobiles—it is just people typing away at a keyboard. So, therefore, software must be free. But it's not.
Dave Thomas: Aside from economics, it is also very arrogant to assume you know what the user wants. You may say, "Each one of my programs is a testament to me. Therefore, I'm going to make each program perfect, so it reflects well on me." But the users may not want to spend the money, or invest the time, to achieve that perfection. For all you know there may be an expiring need. If the users don't get the software within X number of weeks, there's no point in having it. There's no point in writing the software for the polling booths in Florida, for example, if it's not available at the time of the election. So you have to be prepared to make the compromises.
Andy Hunt: Also, despite what the users say, it's very hard to judge what's actually important to them, because they themselves may not know. You may collect requirements and interview users. You may be certain that a particular feature is the most important. You put all your work into that important feature and ignore another minor feature that the user didn't seem to care much about. But later, you find out that in practice the users use this important feature only once every six months. The minor feature that you kind of ignored, they use six times a day. Now that's a huge problem.
What features are most important is not always clear up front. It's not even always clear to users. You need to be prepared to rock and roll and be flexible a bit. There's a kind of Heisenberg effect as you put a system into production and real users start using it. The act of introducing the system changes how the users work. It's almost impossible up front to be sure you know what the user wants, and then implement that perfectly. The very act of introducing your software into the user's world changes the game.
Dave Thomas: There is also a secondary impact to assuming you can actually write bug-free software. If you go in assuming that you can produce bug-free software, that attitude changes how you write the software. You tend to get arrogant, or at least complacent, about the actual code that you write. You'll say, "My code is going to be bug free. Therefore, I don't have to worry about this particular condition or that particular condition." The reality is that your code is not going to be bug free, because you don't control the entire environment.
Specifying Quality as a Requirement
Bill Venners: You also say in your book, "The quality of the system you produce should be specified as part of the system requirements." Why is that?
Dave Thomas: You say to a user, "You can have this software in two months. We anticipate it will be perfectly usable, though it may have a few rough edges. Or, we can polish it to perfection, and you can have it in seven years. Which would you prefer?" I think the user should get to make that choice. As a result, it is actually a part of the user's requirements to say what level of quality they want delivered.
Andy Hunt: And that's not an easy question. In the space shuttle, for instance, the correct answer probably is seven years. In the commercial sector it probably isn't.
Knowing When to Stop
Bill Venners: You say, "Don't spoil a perfectly good program by over-embellishment and over-refinement." How do I know when to stop?
Dave Thomas: Fundamentally, you know when to stop when the user says stop. One of the mistakes that a lot of developers seem to make is to develop in a vacuum. They'll have a project plan that says nine months from now we'll deliver something. Nine months later—or two years, whatever it ends up being—they deliver something and assume the project's gone away. But that approach is never going to work effectively.
If you work with the user more closely, if you work interactively with the user on a daily or weekly basis, then that user's going to be able to tell you when it's time to stop. You don't let the programmers keep adding features simply because they feel like it would be a good idea. Adding features should be a user decision, not a programmer decision.
Andy Hunt: In fact you can get the problem both ways. Sometimes programmers will want to keep piling features on after the program's done, but more often I think you get the opposite problem. Once the programmers have done most of what the user wants, they stop. They stop too early, when the program doesn't necessarily meet the user's needs. You can stop too early, or too late. And in both cases, the answer is feedback. As Dave said, if you work very closely with the user, you've got a much better way of judging if you're done yet.
Quality is What You Do, Not What You Measure
Bill Venners: You said, "Some methodologies have a quality officer, someone to whom the team delegates the responsibility for quality. This is clearly ridiculous. Quality can only come from the individual contributions of the team members." Why?
Dave Thomas: Because the notion of a quality officer implies that the rest of the team is out there to undermine quality. The quality officer is a policeman whose job is to catch and slap the wrists of the naughty little programmers, to send them back and tell them to do it again. Quality is not something you test after the fact. Quality is something you do all the time as you're actually doing the development. It's every individual's job to inject quality into what they're doing. Now you may have a coach or someone who can help you with the details of achieving quality, and you certainly want some kind of QA testing and acceptance testing. But you don't produce quality by installing a quality officer, ticking the quality checkbox and assuming you have quality taken care of because the quality officer is out there.
Andy Hunt: Having a quality officer is kind of like having a breathing officer.
Come back Monday, March 24 for Part IV of this conversation with
Pragmatic Programmers Andy Hunt and Dave Thomas.
Have an opinion on knowing how good is good enough, determining what features are most important, getting frequent user feedback,
or inspiring a team to create quality? Discuss this article in the News & Ideas Forum topic,
Good Enough Software.
Andy Hunt and Dave Thomas are authors of The Pragmatic Programmer, which is available on Amazon.com at:
http://www.amazon.com/exec/obidos/ASIN/020161622X/
The Pragmatic Programmer's home page is here:
http://www.pragmaticprogrammer.com/
Dave Thomas was not the first person I've interviewed who mentioned the arcade
game Whack-a-Mole. James Gosling also called upon the versatile Whack-a-Mole
metaphor while pointing out that it is sometimes hard in engineering to know if you've solved a problem or moved it:
http://www.artima.com/intv/gosling34.html
The Agile Manifesto is here:
http://agilemanifesto.org/
Ward's Wiki, the first WikiWikiWeb, created by Ward Cunningham, is here:
http://c2.com/cgi/wiki?WelcomeVisitors
A great article about the space shuttle software, They Write the Right Stuff:
http://www.fastcompany.com/online/06/writestuff.html | 编程 |
2014-23/1420/en_head.json.gz/39487 |
Genetic Algorithms in Java
By Michael Lacy
Starting about 3.5 billion years ago with bacteria, nature embarked on the grandest of all algorithms: the evolution of highly complex and dynamic machines capable of interacting with and adapting to their environments in order to solve problems. We know these machines as plants and animals.
One look at the genetic code of even the simplest living organism reveals a structure that's enormously complex and efficiently tuned, ensuring the survival of the organism in its environment. We might even use the terms fault-tolerant, highly parallel, high performance, and ubiquitous. Don't forget that nature accomplished this extraordinary programming feat without a single developer coding an exhaustive list of if-then rules and switch statements to account for all possible scenarios. It was simply based on a random set of interactions with the fittest organisms surviving to replicate their genetic code into the next generation.
With the advent of the internet over the past decade, an entirely digital world has arisen in which web sites and applications are the organisms fighting for survival in a highly complex, internetworked environment replete with computer viruses, server crashes, and the like - an environment in which only the fittest will survive. As such, it's my belief that more sophisticated means of software development are needed to build web applications capable of interacting with and adapting to the complexities of the new digital world thriving within our computers. One simple, yet extremely powerful, technique that will likely play a role in the evolution of the internet (and the web applications that live within it) borrows several concepts from the biological world and transforms them into bits and bytes with the goal of building adaptive software systems.
This article is the first of a two-part series that examines a technique from the AI community called genetic algorithms, which borrows concepts from biology to solve complex and often nonlinear problems encountered in the world of computer science. This article will introduce you to the concepts of genetic algorithms and discuss why Java is well suited to their implementation. The next installment will investigate the details of implementing these algorithms in Java. It's my hope that after reading these articles, you'll think a little differently about software development and its future. Genetic algorithms provide a problem-solving technique that's too powerful to ignore. Genetic Algorithms
First a little history. Genetic algorithms were born out of the idea of evolutionary programming introduced by I. Rechenberg in the 1960s. John Holland, a professor at the University of Michigan at the time, is credited with the invention of genetic algorithms following the publication of his 1975 book Adaptation in Natural and Artificial Systems. In his book Holland formulated the basics of genetic algorithms as models of machine learning that derive their behavior from concepts of biology's theory of evolution. It was one of Holland's students, David Goldberg, who popularized the use of genetic algorithms when he was able to solve a difficult problem involving gas-pipeline transmission for his dissertation in 1989.
That said, what exactly is a genetic algorithm? What are they used for? What are the benefits over traditional programming techniques? How does Java fit into this? I'll attempt to answer these questions so you'll have the foundation needed to start implementing genetic algorithms (see Figure 1).
Darwin in Your Computer
A genetic algorithm can be thought of as a model for machine learning in which a population of randomly created individuals goes through a simulated process of evolution - a digital survival of the fittest where each individual represents a point in the problem's solution search space. Using correct terminology, an individual is represented by a chromosome, which consists of several genes. Genes are essentially the parameters of the problem to be solved. A collection of chromosomes is considered a population and is the fundamental unit on which a genetic algorithm operates. Once the algorithm is set into motion, individuals are selected from a population and combined in a process called crossover to create a set of children. The children are randomly mutated to create a new set of chromosomes to be reinserted into the population. Once enough children chromosomes have been created to replace a population, a generation is said to have passed. With each generation, all the chromosomes are evaluated according to some fitness criterion that's a measure of the strength of the chromosome compared to the rest of the population. Only the fittest chromosomes survive into the next generation where the selection, crossover, and mutate process begins anew. After a number of generations have elapsed, the best chromosome is selected from the population and represents the optimal solution to the problem being solved. Essentially what's happening is that a random set of solutions to a problem within a given search space is created and evolved over an amount of time to find an optimal solution. A concrete example will help clarify the concepts described above.
The Traveling Salesman
The traveling salesman problem (TSP) is a classic computer science problem in which a salesman must traverse a number of cities, visiting each only once, while minimizing the distance traveled. For the case of 20 cities, an exhaustive search method that examines all possible routes dictates a search through over 2.4 billion billion (20!) permutations which, if evaluated at a rate of 500 million per second, would take over 150 years to complete. Employing a genetic algorithm reduces the amount of time to seconds (or a fraction thereof, depending on the computing power available) and produces the optimum solution in some cases and a near optimal solution in most others. The representation of this problem in the genetic algorithm domain consists of cities with their x and y coordinates serving as individual genes. A chromosome is a list of cities, in order, that represent one possible solution to the traveling salesman problem. The fitness of the chromosome is then the Cartesian distance between the cities when traversed in order, with the fittest chromosomes being those with the shortest overall distance (see Figure 2). Typically, genetic algorithms have been utilized in solving complex optimization problems when traditional programming techniques (such as exhaustive search, analytic optimization, and line minimization) fail to arrive at a solution in a reasonable amount of time. Genetic algorithms confer the following advantages:
They evaluate several solutions simultaneously, covering a large search space.
They work well in parallel implementation.
They optimize parameters with very complex cost functions.
They create a list of optimal solutions, not just a single solution.
They work with various data types.
This leads to the next question: Why use Java?
Why Java?
As you can see, genetic algorithms can become computationally expensive depending on a number of parameters (including the size of the population, the complexity of the fitness function, the size of the chromosome, and the time to converge on an optimal solution). Thus, in choosing a language for implementation, weighing the benefits of using Java versus using a compiled language such as C or C++ is essential. For Java to be a viable language for genetic algorithm implementation, it must present significant advantages to make up for its degraded performance as compared to other compiled languages. And it does! The advantages of Java are particularly evident in the area of distributed computing.
Simple and Object-Oriented
Given the dynamic memory requirements for a genetic algorithm, Java's garbage collector relieves us from having to allocate and deallocate memory for chromosomes in each generation. This allows us to focus specifically on coding the problem at hand and not worrying about memory management details. Also, the use of objects allows us to create an endless number of problem encodings and still use the genetic algorithm framework. This means that once the basic algorithm structure is developed, implementing a genetic algorithm to solve new problems becomes a matter of defining the problem and its encoding. Next month we'll take an in-depth look at what this means during implementation.
Robust and Secure
Java was designed for creating software that's highly reliable and capable of operating in distributed environments. As developers start to move genetic algorithms from a single CPU to a network of parallel and distributed CPUs, robustness and security are essential. Think of partitioning a genetic algorithm into a number of populations and letting them evolve separately in parallel, frequently distributing the most fit from each population into all the populations. JavaSpaces presents itself as an excellent candidate for moving genetic algorithms into a distributed environment.
Architecture-Neutral and Portable
As referenced above, the real power of genetic algorithms can be obtained in parallel and distributed environments. With Java's platform-neutrality, populations and the algorithm to evolve them can be distributed among a network of computers for processing, provided that a JVM is available. Don't worry about the implementations for different operating systems and CPUs. Think of the SETI@home project that utilized over two million PCs connected to the internet to churn through radar data in the search for extraterrestrial intelligence. Genetic algorithms are ideal candidates for use in such a distributed environment, with Java being the obvious language of choice given its portability. Computing power is no longer an issue; there will be more than enough to go around.
Now that we've briefly examined the nature of genetic algorithms and why Java makes sense as the development language of choice, let's take a more detailed look at the fundamental components that make up a genetic algorithm. For the sake of simplicity, we'll cover the most basic implementations of genetic algorithms and introduce the essential core concepts. I highly recommend further research and study if genetic algorithms spark a deeper curiosity. A number of resources are available on the web for such study.
A gene can be defined as the encoding of a single parameter in a genetic algorithm. A gene can take many forms depending on the problem definition. For the traveling salesman problem, a gene represents a city and its longitude and latitude coordinates. However, when solving a high-order, nonlinear, partial differential equation, a gene can represent one of the variables to solve for and its range of acceptable values.
This highlights the two main flavors of genetic algorithms: permutation-encoded versus real-parameter. In the former version, the goal is to find the optimal ordering of a set of genes such as in the TSP. As for the latter, an example of a real-parameter genetic algorithm is finding x and y such that the following function is minimized: f(x, y) = 2x * sin(3 * y) + 4y * cos (5 * x).
Historically, genes were represented as sequences of 1's and 0's. However, this approach has not been shown to yield better performance and introduces a layer of complexity as a translation is needed between the actual values of parameters and their binary representation. In addition, handling genes as objects in Java makes the implementation more intuitive and can be extended to make them reusable across different genetic algorithm implementations. (More on this in next month's article.)
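As a small, hypothetical illustration of the gene-as-object idea (this is not code from the article or its download), a TSP gene might be little more than a city that knows its own coordinates:

// A hypothetical TSP gene: one city and its position.
public class City {
    private final String name;
    private final double x;
    private final double y;

    public City(String name, double x, double y) {
        this.name = name;
        this.x = x;
        this.y = y;
    }

    public String getName() { return name; }

    // Cartesian distance to another city; used when scoring a whole tour.
    public double distanceTo(City other) {
        double dx = x - other.x;
        double dy = y - other.y;
        return Math.sqrt(dx * dx + dy * dy);
    }
}

A real-parameter gene, by contrast, would hold the variable it stands for together with its range of acceptable values.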
Gene Pool
Much like its biological equivalent, the gene pool for a genetic algorithm is a collection of all the available genes. From the gene pool, chromosomes are created at the beginning of a genetic algorithm by randomly drawing genes from the gene pool and assembling them to build a chromosome that represents one solution for the search space defined for the genetic algorithm.
Returning to the examples mentioned above, the gene pool for solving the traveling salesman problem consists of one gene per city to be traversed. For the case of 20 cities, there will be 20 genes in the gene pool from which random chromosomes will be created. For real parameter genetic algorithms, such as minimizing the function f(x, y), the gene pool will consist of two genes, one representing the variable x and the other representing the variable y.
Chromosomes
Continuing with definitions, a chromosome is a collection of genes representing a single point in the solution search space. The fitness of a chromosome is determined by a cost function determined prior to the execution of the genetic algorithm. Again, returning to the traveling salesman problem, the fitness of a given chromosome is the sum of the distances between the cities when traversed in the order specified by the chromosome. For the real parameter chromosome (f(x, y)), the fitness is the result of substituting the x and y values back into the original function and performing the calculation. Note that the fitness of a chromosome tells you nothing about its strength relative to other chromosomes; rather, it's a raw evaluation of the chromosome's fitness. It's at a higher level that fitnesses are compared and selection proceeds according to the rules of a genetic algorithm. This higher level is the population.
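To make the TSP case concrete, here is an illustrative sketch (again, not the article's code) of that raw fitness: the total Cartesian distance of the tour a chromosome encodes, with the cities stored as x/y pairs and the chromosome as a visiting order. Whether the tour closes back on its starting city is a modeling choice; this sketch closes it.

public class TourFitness {
    // Raw fitness of a TSP chromosome: the total distance of the tour when the
    // cities are visited in the order the chromosome specifies. Lower is fitter,
    // but comparisons against other chromosomes happen at the population level.
    static double tourLength(double[][] cities, int[] order) {
        double total = 0.0;
        for (int i = 0; i < order.length; i++) {
            double[] from = cities[order[i]];
            double[] to = cities[order[(i + 1) % order.length]]; // wrap back to the start
            double dx = from[0] - to[0];
            double dy = from[1] - to[1];
            total += Math.sqrt(dx * dx + dy * dy);
        }
        return total;
    }

    public static void main(String[] args) {
        double[][] cities = { { 0, 0 }, { 0, 3 }, { 4, 3 }, { 4, 0 } };
        System.out.println(tourLength(cities, new int[] { 0, 1, 2, 3 })); // prints 14.0
    }
}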
A population is a collection of all the chromosomes being evolved in a genetic algorithm. As new chromosomes are created and reinserted into the population, less fit chromosomes are replaced and only the most fit survive into the next generation. As mentioned previously, it's here that the process of digital evolution occurs, as the fitness of the competing chromosomes is compared in order to select parent chromosomes to reproduce.
Depending on the search space for a given problem, population size can range from a few dozen chromosomes to several hundred, several thousand, or more. Given the fact that a chromosome represents a single point in the solution search space, for problems with extremely large search spaces (such as the 20-city TSP), it makes sense that a large population size is needed to cover as much of the space as possible. Otherwise, the genetic algorithm may approach a local minimum and converge toward it, rather than the global minimum. Convergence is a core issue in genetic algorithm implementation, and I highly recommend further examination outside of this article to gain additional insight.
Genetic Algorithm Operations
Now that we've discussed the requisite components of a genetic algorithm, it's essential to understand how a genetic algorithm operates on each of the components to create a simulated evolutionary environment that combs a search space for an optimal solution. There are five elementary genetic algorithm operations:
Fitness evaluation: With the examination of a chromosome and its role within a population, we talked briefly about fitness evaluation and its importance. The proper definition and evaluation of a fitness function is critical to the success of the genetic algorithm. It's the means by which chromosomes are compared to one another to determine the most fit individuals. The primary goal here is differentiation between the more fit chromosomes and the less fit chromosomes. Remember, it's survival of the fittest.
Selection: This is the method by which chromosomes are chosen to reproduce in order to create children for the next generation. The goal of selection is to choose individuals that, on average, are more fit than others to pass on their genes to the next generation while, at the same time, maintaining genetic diversity. If a population consists of identical individuals, genetic diversity is lost and it's difficult for the genetic algorithm to explore different regions of a search space.
Several different methods are available for genetic algorithm selection, but for the sake of simplicity and brevity I'll focus on a technique labeled tournament selection. With this technique, a group of individuals is selected at random and the two most fit are selected for reproduction (i.e., they win the tournament). Keeping the tournament size small (4-8 chromosomes) ensures genetic diversity as the group is small, and what appears to be the most fit within the group may actually be a weak chromosome when compared with the entire population.
Crossover: Once two parent chromosomes are selected, they reproduce two child chromosomes via the crossover operation. One of the parameters of a genetic algorithm is the crossover probability (typically 75-90%) that represents the statistical chance that two given chromosomes will cross over. For each potential crossover, a random number between 0.0 and 1.0 is generated. If the number is greater than the crossover rate, then crossover doesn't occur and the children chromosomes are exact replicas of their parents. If crossover does occur, then the parents randomly exchange genes to create new chromosomes.
There are three types of crossover covering a wide range of problem encodings:
- Permutation encoding with unique genes: In this case, a gene can appear only once within a chromosome. One example is the TSP. Each city may appear only a single time within the chromosome.
- Crossover operating on the permutation encoding, with the exception that genes don't have to be unique: Let's imagine that we have a genetic algorithm that's evolving a musical piece within the key of C. All the notes in the key of C are viable and can be repeated indefinitely up to the size of the chromosome.
- Real parameter chromosome crossover: In a real parameter chromosome, each gene will represent a parameter to be applied to a given cost function. Building on the function, f(x, y) described earlier, two parent chromosomes will have genes for the x variable, both representing different values. A method for crossing over the two genes might involve creating a new gene for the x variable with the value being the average of the two parent genes.
Crossover is another essential genetic algorithm operator that ensures genetic diversity within a population. The conceptual goal of crossover is, over time, to combine the good portions of chromosomes into newer and better chromosomes. For a better understanding, see Figure 3. I highly recommend further exploration of the crossover operator before attempting to implement your own genetic algorithm.
Mutation: Similar to crossover in that it randomly modifies chromosomes, it operates on only a single chromosome at a time (see Figure 4). As with crossover, there's a probability associated with the occurrence of mutations, albeit a small one (typically 5-25%). Yet again, returning to the TSP, a typical mutation can include randomly selecting two endpoints within a chromosome and reversing the order of the genes. Several mutation techniques that can be utilized depending on the problem encoding won't be discussed here. It's important to remember that mutation is a fundamental operator for ensuring genetic diversity within a population, which translates into a better coverage of the search space.
Insertion: This is the final algorithmic step to conclude a generation in a genetic algorithm. Insertion is the process of introducing children chromosomes into a population and removing the less fit chromosomes. One common technique for insertion utilizes a technique called elitism in which the n best chromosomes of a population are kept for the next generation and the rest are replaced with new children. This ensures that the most fit chromosomes survive into the following generation and have the opportunity to reproduce again.
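The article's own implementation is promised for next month's installment; purely as an illustrative sketch of how the five operations above might fit together for the real-parameter example f(x, y) = 2x * sin(3 * y) + 4y * cos(5 * x) mentioned earlier, one generation could look something like the following. The tournament size, rates, averaging crossover, Gaussian nudge, and gene range are example choices, not prescriptions from the article.

import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

public class OneGeneration {
    static final Random RNG = new Random();
    static final double CROSSOVER_RATE = 0.85;  // typically 75-90%
    static final double MUTATION_RATE = 0.10;   // typically 5-25%
    static final int TOURNAMENT_SIZE = 6;       // kept small to preserve diversity
    static final int ELITE_COUNT = 2;           // elitism: the best survive untouched

    // Fitness evaluation: lower is better, since the goal is to minimize f(x, y).
    static double fitness(double[] c) {
        double x = c[0], y = c[1];
        return 2 * x * Math.sin(3 * y) + 4 * y * Math.cos(5 * x);
    }

    // Selection: pick a small random group and keep its fittest member.
    static double[] tournament(double[][] population) {
        double[] best = null;
        for (int i = 0; i < TOURNAMENT_SIZE; i++) {
            double[] candidate = population[RNG.nextInt(population.length)];
            if (best == null || fitness(candidate) < fitness(best)) {
                best = candidate;
            }
        }
        return best;
    }

    // Crossover: with some probability, average the parents' genes
    // (the real-parameter variant described above); otherwise copy a parent.
    static double[] crossover(double[] a, double[] b) {
        if (RNG.nextDouble() > CROSSOVER_RATE) {
            return a.clone();
        }
        return new double[] { (a[0] + b[0]) / 2, (a[1] + b[1]) / 2 };
    }

    // Mutation: occasionally nudge one gene, clamped to its allowed range.
    static void mutate(double[] c) {
        if (RNG.nextDouble() < MUTATION_RATE) {
            int g = RNG.nextInt(c.length);
            c[g] = Math.max(-5.0, Math.min(5.0, c[g] + RNG.nextGaussian()));
        }
    }

    // Insertion with elitism: keep the n best, replace the rest with children.
    static double[][] nextGeneration(double[][] population) {
        double[][] ranked = population.clone();
        Arrays.sort(ranked, Comparator.comparingDouble(OneGeneration::fitness));
        double[][] next = new double[population.length][];
        for (int i = 0; i < ELITE_COUNT; i++) {
            next[i] = ranked[i].clone();
        }
        for (int i = ELITE_COUNT; i < population.length; i++) {
            double[] child = crossover(tournament(population), tournament(population));
            mutate(child);
            next[i] = child;
        }
        return next;
    }

    public static void main(String[] args) {
        double[][] population = new double[40][];
        for (int i = 0; i < population.length; i++) {
            population[i] = new double[] { RNG.nextDouble() * 10 - 5, RNG.nextDouble() * 10 - 5 };
        }
        for (int generation = 0; generation < 100; generation++) {
            population = nextGeneration(population);
        }
        Arrays.sort(population, Comparator.comparingDouble(OneGeneration::fitness));
        System.out.printf("best f(x, y) = %.3f at x = %.3f, y = %.3f%n",
                fitness(population[0]), population[0][0], population[0][1]);
    }
}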
Genetic Algorithm Considerations
By now you should have a basic understanding of what a genetic algorithm is and how it works. Let's now quickly look at some considerations when implementing a genetic algorithm.
The goal of implementing any genetic algorithm is convergence on an optimal solution for a given search space. Convergence will be affected by numerous factors associated with the implementation of the genetic algorithm, such as parameter encoding, population size, crossover and mutation rates, and selection technique. Depending on the problem being solved, these factors are usually determined only by experience working with genetic algorithms of all flavors. My recommendation is to start coding!
Performance is an issue that has constantly plagued genetic algorithms due to their heavy-duty processing power requirements. With the combination of Moore's Law and the increased availability of highly parallel, distributed computing power, I don't think performance will be an issue in the near future.
Here's the number one barrier to acceptance of genetic algorithms as a practical programming technique: real-world applications. Genetic algorithms have resided primarily in academia solving classic computer science problems. Their use in business and commercial environments is highly unproven. As computing power becomes more readily available, I think we'll see an increase in adaptive software systems with genetic algorithms at their core.
One particular area of work that may break down the wall is security. Researchers have begun to develop operating systems modeled after the immune system of animals. As new viruses invade the system, strategies are evolved to combat the virus, remove it from the operating system, and identify similar attacks in the future. And with the proliferation of highly sophisticated attacks on internet sites, such an "immune" system offers a much better (and quicker) response than waiting for a human to recognize the attack and code a patch to fix it or close ports on a firewall to deny it.
Another interesting outbranching of genetic algorithms is the field of genetic programming pioneered by John Koza. Without getting into the details, genetic programming is essentially using genetic algorithms with the genes representing programmatic constructs (e.g., AND, OR, IF, THEN, +, and -). What's evolved are chromosomes representing computer programs. It's an exciting field that's worth a deeper look.
The goal of this article wasn't to encourage you to implement genetic algorithms in your code tomorrow, but rather to inform and educate you about one technique for building software capable of adaptation. As the Internet continues to grow at a furious pace, a new digital world is being created that operates in the language of 0's and 1's. The organisms fighting for survival are the web sites that you and I create on a daily basis. Whether fighting for survival in the sense of attracting new customers or warding off the latest computer hacker, adaptability will be crucial to survival in the complex digital world. Hopefully this article has sparked a newfound interest in software development and its future. If so, stay tuned for the next issue of JDJ, in which I'll demonstrate a simple implementation of a genetic algorithm. Published January 1, 2001 Reads 23,949 Copyright © 2001 SYS-CON Media, Inc. — All Rights Reserved.
More Stories By Michael Lacy
Michael Lacy is an engineer for the platform development group at Shutterfly, an online photo service, where he develops Web-based solutions for digital image printing, enhancing, and sharing. He's also a certified Java programmer and developer.
2014-23/1420/en_head.json.gz/44725 | Developer Reading List Andrew Binstock, December 21, 2012 New books on C, C#, Node, Win8 Apps, Perl and Groovy.
Programming C# 5.0 by Ian Griffiths
This 800-page volume is a comprehensive tutorial and reference on C#. It's a thorough and complete discussion of the language and associated technologies written in a clear, if somewhat wordy, style. The book has been updated to cover more than the language per se. It also includes explanations of new features in .NET 4.5 and issues surrounding Windows 8 apps, such as packaging. Despite covering those technologies, the core theme is the language — not the libraries or the OS. A useful chapter discusses, in considerable detail, the means of calling native code (both 32- and 64-bit), with lengthy coverage of COM and its specific requirements when called from C#. Both low-level calls to code written in C++ and to higher level languages such as VBscript or Jscript are discussed and thoughtfully explained. My comment that the book can be used for reference is not an oblique suggestion that it contains numerous tables of APIs or anything of the sort. Rather, it contains a wealth of topics that are explained clearly and can serve as references and refreshers on how to do specific things, whether it's loading assemblies or determining which class library to write for, or figuring out how Windows 8's new stream for touch-based apps works. In all cases, you'll find several pages that lay out the material clearly. The author fully expects you to have some background both in programming and in C#, so there is no primer. This choice of audience is possibly the reason for my only serious objection to the book, which is how little code it contains. Material is presented more verbally than I care for, but the clarity and thoroughness of the presentation make up for that. Recommended.
2014-23/1421/en_head.json.gz/1955 | C++ Reading List Andrew Binstock, May 28, 2013 The new C++11 standard has led to a flood of new books and updates to classics. These are the core books you need.
C++ Programming Language, 4th Edition by Bjarne Stroustrup
This book is rightfully viewed as the "Bible" of C++ programming. It's the authoritative exposition of the language, its features, and its peculiarities, all written with considerable clarity by Stroustrup, who designed C++. Some readers might view the ANSI C++ document as a more definitive source of information, but it is a rather terse reference resource intended for readers who already know the language. This book, in contrast, gives friendly explanations of new features, coupled with advice on things to do and practices to avoid, making it a more approachable choice for readers needing to understand specific features. In this sense, this book is more a reference than a tutorial. Some physical aspects detract from the book, especially the choice of printing code without using a monospaced font. No matter how aesthetically pleasant this might look to some readers, it throws off regular readers of code, who expect vertical alignments that no longer appear. Despite this, the typesetting of the code is much better than in previous editions. A second concern is one that has to do more with C++ itself than the book. This edition is 1328 pages long. That is roughly 1000 pages more than the original edition. As Stroustrup gives scant coverage of the libraries, these numbers are indicative of how much more complex C++ has become. These concerns notwithstanding, I don't see how serious C++ programmers looking to use the new features of the language can proceed without this work. Definitely recommended. | 编程 |
2014-23/1421/en_head.json.gz/16616 | Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language invented by Guy Lewis Steele Jr. and Gerald Jay Sussman. It was designed to have an exceptionally clear and simple semantics and few different ways to form expressions. A wide variety of programming paradigms, including imperative, functional, and message passing styles, find convenient expression in Scheme.
Programming languages should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary. Scheme demonstrates that a very small number of rules for forming expressions, with no restrictions on how they are composed, suffice to form a practical and efficient programming language that is flexible enough to support most of the major programming paradigms in use today.
Scheme was one of the first programming languages to incorporate first class procedures as in the lambda calculus, thereby proving the usefulness of static scope rules and block structure in a dynamically typed language. Scheme was the first major dialect of Lisp to distinguish procedures from lambda expressions and symbols, to use a single lexical environment for all variables, and to evaluate the operator position of a procedure call in the same way as an operand position. By relying entirely on procedure calls to express iteration, Scheme emphasized the fact that tail-recursive procedure calls are essentially goto’s that pass arguments. Scheme was the first widely used programming language to embrace first class escape procedures, from which all previously known sequential control structures can be synthesized. A subsequent version of Scheme introduced the concept of exact and inexact numbers, an extension of Common Lisp’s generic arithmetic. More recently, Scheme became the first programming language to support hygienic macros, which permit the syntax of a block-structured language to be extended in a consistent and reliable manner.
Scheme is defined by a series of Reports. The current standard is R6RS, but it is controversial, and an effort to replace it has already begun. Many implementations of Scheme still use the earlier R5RS standard [PDF] [HTML], and there are even some implementations of Scheme using the R4RS standard. A semi-official and widely-supported set of libraries is available as Scheme Requests for Implementation.
Students who are just learning to program and who are learning Scheme as their first programming language may be interested in the book How to Design Programs by Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, and Shriram Krishnamurthi. For many years the first course in computer science at Massachusetts Institute of Technology was based on the book Structure and Interpretation of Computer Programs by Harold Abelson and Gerald Jay Sussman with Julie Sussman. Experienced programmers may want to look at Teach Yourself Scheme in Fixnum Days by Dorai Sitaram or The Scheme Programming Language by R. Kent Dybvig; the latter book has an excellent two-chapter tutorial on programming with Scheme.
There are many implementations of Scheme. DrScheme provides an excellent environment for students learning to program in Scheme, and has a wealth of libraries. Chez Scheme, which is the Scheme system used by Programming Praxis, provides a high-quality and very fast commercial compiler; the book Chez Scheme User’s Guide by R. Kent Dybvig describes Chez-specific extensions to Scheme, and a free version of the compiler is available as Petite Chez Scheme. Larceny was the first version of Scheme to support the new R6RS standard, and maintains compatibility with R5RS. And there are many other implementations of Scheme available; see schemers.org for a gateway to these implementations, and much other good on-line material about Scheme.
Help is available on the Usenet newsgroup comp.lang.scheme or the IRC #scheme channel. Joe Marshall provides a custom Google All Scheme Search.
2014-15/2898/en_head.json.gz/4651 | Consumer Backlash
Feb 11th, '11 • Bloggers • by XXL Staff • 32 Comments
I’m not a fan of doing lists. Sure, they’re good for a quick fix and all, but most of the time I just view them as an unoriginal, passé and bland formula used when the writer or writing organization either runs out of ideas or cannot compete with today’s Ritalin-flavored model of writing.
No shots.
So of course I was thinking of compiling a list for today’s post, something along the lines of my personal choices for the definitive misogynistic songs of all time (just in time for Valentine’s Day, of course), because I’ve had about a weeklong hangover from my trip to Toronto last weekend. That idea became rather stagnant, though, after I got to Ghostface Killah’s “Wildflower,” so I deaded the entire idea and flipped to listen to something that spoke more to the California native in me: weed songs [1], eventually shifting into rap’s latest pot purveyor, Wiz Khalifa.
Or, known now as the latest addition to the “the milk’s gone bad,” alleged wack music legion of doom.
Over the past half-decade or so Wiz has slowly but surely morphed from an angry Pittsburgh spitfire to an easygoing universal hippie, eventually trading in the electro-bleeps of Alice Deejay and the first level of Sonic The Hedgehog for the cryptically smooth stylings of Frou Frou, Demi Lovato and Chrono Trigger, finally hitting a proverbial gold mine with “Black & Yellow.” Unfortunately for him, achieving a level of success translated into the legions of pre-Taylor Gang zealots denouncing him as the latest telltale “sell-out,” as is the norm for any rapster that makes it out of the muck of today’s YouTube and the Twitter tomfoolery and into the public’s collective consciousness.
The million-dollar question, however, is why so-called “fans of the sport” are just as quick to spit on the faces of the artist who, just a few weeks ago, they were championing as the one who will take rap to that “next level.” This isn’t like a Ja Rule, “already a washout starting from his Cash Money Click days” type of disdain either; it’s more an “on-off switch” type of backlash. Whether you’re 50 Cent, whose sing-song flow was evident as far back as his pre-Bullet Tooth Tony, Power Of The Dollar days, or Jay-Z, who simply got older and richer (while mating with R&B’s Venus de Milo in the process), it seems pre-destined that an artist will not be liked by their “original” fan base.
Isn't the point of rappers rapping to, well, become successful at their craft? [2] To me at least, rappers who rap in any form these days are trying to get noticed in some manner. Celebrity isn't remotely guaranteed for most of them, but that has never stopped any of them from trying to grasp it. Fans should not be ag
Wide Awake Developers
Glue Fleet and Compojure Together Using Protocols
Inspired by Glenn Vanderburg's article on Clojure templating frameworks, I decided to try using Fleet for my latest pet project. Fleet has a very nice interface. I can call a single function to create new Clojure functions for every template in a directory. That really makes the templates feel like part of the language. Unfortunately, Glenn's otherwise excellent article didn't talk about how to connect Fleet into Compojure or Ring. I chose to interpret that as a compliment, springing from his high esteem of our abilities.
My first attempt, just calling the template function directly as a route handler, resulted in the following:
java.lang.IllegalArgumentException: No implementation of method: :render of protocol: #'compojure.response/Renderable found for class: fleet.util.CljString
Ah, you've just got to love Clojure errors. After you understand the problem, you can always see that the error precisely described what was wrong. As an aid to helping you understand the problem... well, best not to dwell on that.
The clue is the protocol. Compojure knows how to turn many different things into valid response maps. It can handle nil, strings, maps, functions, references, files, seqs, and input streams. Not bad for 22 lines of code!
There's probably a simpler way that I can't see right now, but I decided to have CljString support the same protocol.
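The original post embedded a short code snippet at this point (the mentions of "lines 12 through 15", "line 17", and "line 10" below refer to that snippet), which isn't reproduced here. The following is a rough sketch of what the glue code might have looked like; the namespace names, the template directory, and the exact arguments to fleet-ns are my assumptions, not the author's original code.

(ns example.handler
  (:use compojure.core
        fleet)
  (:require [compojure.response]))

;; Create one Clojure function per template file found in the directory.
;; (The namespace prefix and path given to fleet-ns are assumed here.)
(fleet-ns views "resources/templates")

;; Teach Compojure how to turn Fleet's CljString into a Ring response map.
(extend-protocol compojure.response/Renderable
  fleet.util.CljString
  (render [body _request]
    {:status  200
     :headers {"Content-Type" "text/html"}
     :body    (str body)}))

;; With that in place, a template function works directly as a route handler.
(defroutes main-routes
  (GET "/" [] (views/index)))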
Take a close look at the call to extend-protocol on lines 12 through 15. I'm adding a protocol--which I didn't create--onto a Java class--which I also didn't create. My extension calls a function that was created at runtime, based on the template files in a directory. There's deep magic happening beneath those 3 lines of code.
Because I extended Renderable to cover CljString, I can use any template function directly as a route function, as in line 17. (The function views/index was created by the call to fleet-ns on line 10.)
So, I glued together two libraries without changing the code to either one, and without resorting to Factories, Strategies, or XML-configured injection.
Metaphoric Problems in REST Systems
I used to think that metaphor was just a literary technique, that it was something you could use to dress up some piece of creative writing. Reading George Lakoff's Metaphors We Live By, though has changed my mind about that.
I now see that metaphor is not just something we use in writing; it's actually a powerful technique for structuring thought. We use metaphor when we are creating designs. We say that a class is like a factory, that an object is a kind of a thing. The thing may be an animal, it may be a part of a whole, or it may be representative of some real world thing.
All those are uses of metaphor, but there is a deeper structure of metaphors that we use every day, without even realizing it. We don't think of them as metaphors because in a sense these are actually the ways that we think. Lakoff uses the example of "The tree is in front of the mountain." Perfectly ordinary sentence. We wouldn't think twice about saying it.
But the mountain doesn't actually have a front, neither does the tree. Or if the mountain has a front, how do we know it's facing us? What we actually mean, if we unpack that metaphor is something like, "The distance from me to the tree is less than the distance from me to the mountain." Or, "The tree is closer to me than the mountain is." That we assign that to being in front is actually a metaphoric construct.
When we say, "I am filled with joy," we are actually using a double metaphor, two different metaphors related structurally. One is "A Person Is A Container"; the other is "An Emotion Is A Physical Quantity." Together it makes sense to say that if a person is a container and emotion is a physical thing, then the person can be full of that emotion. In reality, of course, the person is no such thing. The person is full of all the usual things a person is full of: tissues, blood, bones, and other fluids that are best kept on the inside.
But we are embodied beings, we have an inside and an outside and so we think of ourselves as a container with something on the inside.
This notion of containers is actually really important.
Because we are embodied beings, we tend to view other things as containers as well. It would make perfect sense to you if I said, "I am in the room." The room is a container, the building is a container. The building contains the room. The room contains me. No problem.
It would also make perfect sense to you, if I said, "That program is in my computer." Or we might even say, "that video is on the Internet." As though the Internet itself were a container rather than a vast collection of wires and specialized computers.
None of these things are containers, but it's useful for us to think of them as such. Metaphorically, we can treat them as containers. This isn't just an abstraction about the choice of pronouns. Rather the use of the pronouns I think reflects the way that we think about these things.
We also tend to think about our applications as containers. The contents that they hold are the features they provide. This has provided a powerful way of thinking about and structuring our programs for a long time. In reality, no such thing is happening. The program source text doesn't contain features. It contains instructions to the computer. The features are actually sort of emergent properties of the source text.
Increasingly the features aren't even fully specified within the source text. We went through a period for a while where we could pretend that everything was inside of an application. Take web systems for example. We would pretend that the source text specified the program completely. We even talked about application containers. There was always a little bit of fuzziness around the edges. Sure, most of the behavior was inside the container. But there were always those extra bits. There was the web server, which would have some variety of rules in it about access control, rewrite rules, ways to present friendly URLs. There were load balancers and firewalls. These active components meant that it was really necessary to understand more than the program text, in order to fully understand what the program was doing.
The more the network devices edged into Layer 7, previously the domain of the application, the more false the metaphor of program as container became. Look at something like a web application firewall. Or the miniature programs you can write inside of an F5 load balancer. These are functional behavior. They are part of the program. However, you will never find them in the source text. And most of the time, you don't find them inside the source control systems either.
Consequently, systems today are enormously complex. It's very hard to tell what a system is going to do once you put it into production, especially in those edge cases within hard-to-reach sections of the state space. We are just bad at thinking about emergent properties. It's hard to design properties to emerge from simple rules. I think we'll find this most truly in RESTful architectures. In a fully mature REST architecture, the state of the system doesn't really exist in either the client or the server, but rather in the communication between the two of them. We say HATEOAS, "Hypertext As The Engine Of Application State" (which is a sort of shibboleth used to identify true RESTafarians from the rest of the world), but the truth is: what the client is allowed to do is told to it by the server at any point in time, and the next state transition is whatever the client chooses to invoke. Once we have that, then the true behavior of the system can't actually be known just by the service provider.
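To make that concrete, here is a small sketch in Clojure of a hypermedia-style representation in which the server tells the client which transitions are currently available. The resource, field names, and URIs are invented purely for illustration; they are not from any real API or from the talk itself.

;; The representation carries the allowed next actions as links.
(def order-representation
  {:id     42
   :status :awaiting-payment
   :links  [{:rel "self"    :href "/orders/42"}
            {:rel "payment" :href "/orders/42/payment"}
            {:rel "cancel"  :href "/orders/42/cancel"}]})

;; A client that only follows the links it is given never hard-codes the
;; workflow; when the server stops offering :cancel, that transition simply
;; disappears from the conversation.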
In a REST architecture we follow an open-world assumption. When we're designing the service provider, we don't actually know who all the consumers are going to be or what their individual and particular workflows may be. Therefore we have to design for a visible system, an open system that communicates what it can do, and what it has done, at any point in time. Once we do that, then the behavior is no longer just in the server. And in a sense it's not really in the client either. It's in the interaction between the two of them, in the collaborations. That means the features of our system are emergent properties of the communication between these several parts. They're externalized. They're no longer in anything. There is no container. One could almost say there's no application. The features exist somewhere in the white space between those boxes on the architecture diagram.
I think we lack some of the conceptual tools for that as well. We certainly don't have a good metaphorical structure for thinking about behavior as a hive-like property emerging from the collaboration of these relatively independent and self-directed pieces of software.
I don't know where the next set of metaphors will come from. I do know that the attempt to force web-shaped systems into the application-as-container metaphor simply won't work anymore. In truth, it never worked all that well. But now it's broken down completely.
Metaphoric Problems in REST Systems (audio)
Haiti: mass protests erupt over vote count By Jonathan Keane
www.wsws.org/articles/2006/feb2006/hait-f14.shtml
Nearly a week after Haitians went to the polls in the first election since the 2004 Washington-backed coup and subsequent US invasion, official results have yet to be announced, and the impoverished Caribbean country is spiraling into another intense political crisis.
More than 10,000 people poured into the streets of the capital of Port-au-Prince Sunday demanding that Rene Préval, the overwhelming winner of the election, be named president and denouncing the right-wing politicians controlling the vote counting for attempting to rig the results.
The protest saw large crowds march on the presidential palace from the city’s shantytowns as United Nations troops and Haitian police armed with automatic weapons took up positions to repress any potential upheavals. On Monday, as protesters erected barricades in a number of parts of the city, UN troops opened fire on demonstrators, reportedly killing one and wounding at least four.
The February 7 election represented a massive popular repudiation of the US-backed coup staged two years ago and the right-wing interim regime installed by US Marines and United Nations “peacekeepers.”
As of Monday, with ballots from 90 percent of the polls reportedly counted, the electoral council gave 48.7 percent of the votes to Préval, a former political ally of Jean Bertrand Aristide, the elected president who was ousted in the bloody coup of February 2004 and then forcibly removed from the country by US forces. Préval was the prime minister in Aristide's first government in 1991, succeeding him as president in 1996, and then turning the presidential palace back to Aristide in 2001.
Running second with just 11.8 percent was Leslie Manigat, who was briefly installed as president by the military following the collapse of the Duvalier dictatorship in 1988. Sweatshop-factory-owner Charles Henri Baker, who enjoyed the closest ties with the US Republican Party and the sections of the Haitian elite that engineered the 2004 coup, was said to have placed third with less than 8 percent. Finally, Guy Philippe, the death squad leader who led the coup, won only 1.69 percent.
If Préval fails to win more than 50 percent of the vote, he will be forced into a run-off election on March 19. Popular suspicion that those controlling the vote count are manipulating the results has grown as Préval's initial percentage of the total has shrunk and amid prolonged delays between announcements of new totals. The fall in the frontrunner's percentage was particularly suspicious given that the last votes that remained to be counted were from Port-au-Prince, considered a stronghold for the former president.
Some members of the electoral council have openly charged that the vote is being rigged. "There's a certain level of manipulation," council member Pierre Richard Duchemin told the Associated Press Sunday, adding that "there is an effort to stop people from asking questions." He said he was denied access to the tabulation proceedings and called for an investigation. Another member charged that Jacques Bernard, director-general of the nine-member cou
WikiJava:GFDL - WikiJava
GNU Free Documentation License Version 1.2, November 2002
Copyright (C) 2000,2001,2002 Free Software Foundation, Inc.
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
0. PREAMBLE The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
1. APPLICABILITY AND DEFINITIONS This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
2. VERBATIM COPYING You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
3. COPYING IN QUANTITY If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
4. MODIFICATIONS You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
O. Preserve any Warranty Disclaimers. If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements."
6. COLLECTIONS OF DOCUMENTS You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
7. AGGREGATION WITH INDEPENDENT WORKS A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
8. TRANSLATION Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
9. TERMINATION You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
10. FUTURE REVISIONS OF THIS LICENSE The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
How to use this License for your documents To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:
Copyright (c) YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled "GNU Free
Documentation License".
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the "with...Texts." line with this:
with the Invariant Sections being LIST THEIR TITLES, with the
Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
Why Walmart is using Node.js
J. O'Dell
Node.js has been the delight of San Francisco hackers for the past couple of years now, but startups and indie developers aren't the only ones using JavaScript on the server side.
At Node Summit today, Walmart executives talk about why the real-world retail giant has chosen to work with this relatively new, extremely trendy technology.
Any time a big company makes a decision — any business decision — it is balancing two factors: risk and profit. In their presentation, Walmart executives made it clear that the benefit of using Node.js was far greater than the risk — an assertion many other large companies (and providers of Node support systems) have been waiting to hear from a household name like Walmart.
Walmart’s vice president for mobile engineering, Ben Galbraith, and Dion Almaer, Walmart’s vice president for mobile architecture, took the stage to discuss the largest retailer in the world’s decision to use Node in its mobile applications.
In a nutshell, Walmart is able to serve some very sophisticated features to mobile users on the client side using Node. It’s saving mobile shoppers a ton of time by customizing content based on device type and browser capabilities.
“We’ve been fascinated for a long time by end-to-end JavaScript,” said Galbraith, who said his team wanted to create “a website that would be rich and dynamic… on devices that weren’t too powerful.”
Now, on Walmart’s re-engineered Node-powered mobile app, all the front-end code gets executed on the back end.
“We’re really excited to have a viable back end for that,” he continued. “That’s why Node really excited us, and at Walmart, we’re doing a lot with that kind of architecture right now.”
“We rely on services all over the world,” Almaer continued “We do not control all of those services. Node allows us to front all these services… and scale up very nicely. It’s perfect for what we’re doing in mobile.”
And of course, large-scale Node projects come in handy when you’re recruiting, Galbraith pointed out, since curious hacker-types are eager to work with the latest technologies. However, Almaer warns that many applicants with JavaScript knowledge will also claim Node expertise — and those two disciplines, while related, are hardly equivalent.
While the mobile team did consider using HTML5 for Walmart’s mobile apps, they found it wanting. “We haven’t seen people create what we want for retail in an HTML5 app,” said Galbraith. “For us, hybrid is more interesting in something like what the LinkedIn app has done… it’s the same UI across all platforms, but it has a native experience.”
(Galbraith is referencing LinkedIn’s Node-powered mobile app, which skillfully blends native-running shells with web-based pages and content.)
As previously noted, Node is a relative newcomer — especially to the enterprise, where legacy systems still rule the roost. While Node.js, an open-source technology, recently became the most popular repository on Github, its adoption has been understandably slower in larger companies.
Joyent, the foremost sponsor of Node programming and a provider of Node support for the enterprise, has good reason to promote the technology’s stability and worth to large businesses. In a recent conversation with VentureBeat, Joyent engineering VP Bryan Cantrill said, “Node.js is not simply a new programming environment or the latest shiny object, but rather a profound shift in server-side programming towards event-oriented systems…
“We believe that Node.js is a programming mega-event on the scale of Java or Ruby on Rails. [It is] not merely a new way of expressing existing ideas, but rather a new way of thinking about how software systems should be built.”
Even Node’s core contributors are beginning to say that the technology, though still new and hype-driven, is ready for the big-time — and if Walmart’s mobile app isn’t the big time, we don’t know what is.
Python gets a big data boost from DARPA
Continuum Analytics will extend the widely used NumPy library for distributed systems
Joab Jackson (IDG News Service)
DARPA (the U.S. Defense Advanced Research Projects Agency) has awarded $3 million to software provider Continuum Analytics to help fund the development of Python's data processing and visualization capabilities for big data jobs.

The money will go toward developing new techniques for data analysis and for visually portraying large, multi-dimensional data sets. The work aims to extend beyond the capabilities offered by the NumPy and SciPy Python libraries, which are widely used by programmers for mathematical and scientific calculations, respectively.

More mathematically centered languages such as the R statistical language might seem better suited for big-data number crunching, but Python offers the advantage of being easy to learn. "Python is a very easy language to learn for non-programmers," said Peter Wang, president of Continuum Analytics. That's important because most big-data analysts will probably not be programmers. If they can learn an easy language, they won't have to rely on an external software development group to complete their analysis, Wang said.

The work is part of DARPA's XData research program, a four-year, $100 million effort to give the Defense Department and other U.S. government agencies tools to work with large amounts of sensor data and other forms of big data. For the XData project, DARPA awarded funding to about two dozen organizations, including the University of Southern California, Stanford University and Lawrence Berkeley National Laboratory. The organizations are encouraged to use each other's technologies to further extend what can be done in big data, Wang said.

DARPA encouraged the funding recipients to release products based on their work and to release their code as open source, so the innovations can be widely used and supported outside of the military. The Defense Department is trying to avoid commissioning software that gets used only by the military, which may then become prohibitively time-consuming and expensive to update. "With big data systems, you find new things you want to look at every week. You can't wait for that process any more," Wang said.

Headquartered in Austin, Texas, Continuum Analytics offers add-on products and services that help organizations use Python for data analysis. The company will use the DARPA money to continue development of a number of add-on technologies it has been working on, including Blaze, Numba and Bokeh, all of which provide advanced features not offered in Python itself.

At the PyData 2012 conference in New York last November, Continuum engineer Stephen Diehl discussed how Blaze would operate, describing the library as a potential successor to NumPy. NumPy has limitations that Blaze seeks to correct, Diehl said. Most notably, NumPy only offers the ability to store a series of numbers as one continuous string of data. "It is a single buffer, a continuous block of memory. That may be OK for some uses, but the real world is more heterogenous," he said in a presentation.

Blaze can "endow [data] with structure," Diehl said. It will also allow programmers to establish multidimensional arrays and store these arrays in a distributed architecture, across multiple machines. Bokeh is a Python library that can visually render large data sets using the HTML5 Canvas tag, while Numba is a Python compiler that recognizes NumPy calls. Numba is included in Continuum's flagship product, Anaconda, a Python distribution with a number of premium data analysis features.
2014-15/2899/en_head.json.gz/4004 | Published on O'Reilly (http://oreilly.com/)
Proud to be a "Geekette"
by Julia Lerman
Julia Lerman is the leading independent authority on the Entity Framework and has been using and teaching the technology since its inception two years ago. She is well known in the .NET community as a Microsoft MVP, ASPInsider and INETA Speaker. She is a prolific blogger, a frequent presenter at technical conferences around the world, including DevConnections and TechEd and she writes articles for many well-known technical publications.
I recently received a shipment of t-shirts to distribute to the Vermont.NET User group from third party developer tool vendor, telerik. On top of the pile of t's, was a handful of powder-puff blue shirts that said "Geekette" on them (in pink letters) with an image of a little white kitten. Even the cut of the shirt was feminine, not the typical big, baggy men's t-shirt.
A few years ago, I would have probably said something very grown-up like "Oh Barf!" and dug around for something "black and baggy." Instead, I immediately took off the shirt I was wearing, put on this very girly shirt (in a medium, no less, which was definitely not baggy), ran into the kitchen to show my husband and declared, "Look honey, I'm a Geekette!"
I have been thinking about this t-shirt moment and realized that, in a funny way, it reflects that "I've come a long way, baby!" in my 20+ year career as a female software developer.
Although I did take one Basic class in college in the early '80s (a woman's college, so the entire class was female) and discovered a knack for figuring out how to make Lotus 1-2-3 turn me into a hero in the eyes of the corporate comptroller at a job in 1984, my programming career didn't really begin until a few years later when I was working for a small company where someone had left behind a copy of "dBase III PLUS: Advanced Applications for Nonprogrammers." The book encouraged and guided me to start writing code that interacted with data—something that helped me do my current job much more effectively. Little had I known that there was a data geek in me just waiting for such an opportunity to be let loose.
Over the years, I evolved up the XBase path from dBase III to dBase IV to Clipper and then to FoxPro 2.0 when it first appeared in 1991. Until that time, I was on my own as a completely self-taught programmer, had left my last full time job, and was consulting full-time. FoxPro not only brought us Rushmore technology, it introduced me to a whole community of programmers. A very interesting phenomenon here was that a small number of women had risen to the top as mentors and mavens to this community. These women were Pat Adams, Tamar Granor, and Ceil Silver. While Pat was a little intimidating (but still very kind to me), Tamar and Ceil really took me under their wings. Tamar was the editor of FoxPro advisor and even printed my very first article those many years ago. Tamar and Ceil are still prominent leaders in the FoxPro world, and we continue to keep tabs on each other and share our experiences.
Part of the FoxPro community revolved around user group meetings. Developers today, regardless of what tools they use, owe a lot to this community for all of the lessons that have been brought into the developer communities that exist today. | 编程 |
Developer's Reading List
Dr. Dobb's Staff, July 17, 2012
Windows Debugging, Web Apps, JavaScript, and Clojure Lead the List of New Titles
Programming Clojure, 2nd Edition by Stuart Halloway and Aaron Bedra

Clojure is a language that continues to gain fans. While it hasn't yet broken into the mainstream, that's not for lack of enthusiasm from its community of users. Clojure is an implementation of Lisp developed for the JVM, with special support for parallel programming. (Variants are under development for .NET and JavaScript platforms.) The language represents a renaissance of interest in S-expressions, the unique syntax that expresses Lisp's fundamental view of code as data. While the language is gaining fans from both the die-hard Lisp community and the folks who favor functional programming, it has engendered comparatively few books. The best introduction, in my estimation, is this volume, which was recently released in its second edition.

The primary author, Stuart Halloway, is a presenter I've admired for lucid, approachable explanations of a surprisingly wide variety of topics, and he plies his trade well in these pages. Because Lisp will appear foreign to many mainstream developers, it requires more careful explanation of basics than do imperative and procedural languages. The authors do this well and clearly without ever coming off as glib or condescending. Rather, you feel a colleague is leading you through the basics and then through more advanced material, such as transactional memory, concurrency, and finally Lisp/Clojure macros (through Clojure v. 1.3). Theoretical topics, such as recursion, that are fundamental to functional programming but comparatively rare in the mainstream, are explored in full detail, so that they become intuitive via substantial exposure. By the end of the book (less than 300 pages), you find yourself thinking functionally, which is an impressive feat.

My only objection to this otherwise excellent volume is that it presents mostly short examples, so that it never gives you the experience of reading and working through several pages of Clojure code. That notwithstanding, I highly recommend this book. — ALB
The A-Z of Programming Languages: C++
Bjarne Stroustrup of C++ fame dissects the history of his famed programming language
Naomi Hamilton (Computerworld)
Computerworld is undertaking a series of investigations into the most widely-used programming languages. Previously we have spoken to Alfred v. Aho of AWK fame, S. Tucker Taft on the Ada 1995 and 2005 revisions, Microsoft about its server-side script engine ASP, Chet Ramey about his experience maintaining Bash, and Charles H. Moore about the design and development of Forth.

In this interview, we chat to Bjarne Stroustrup of C++ fame about the design and development of C++, garbage collection and the role of facial hair in successful programming languages. Stroustrup is currently the College of Engineering Chair and Computer Science Professor at Texas A&M University, and is an AT&T Labs fellow.

What prompted the development of C++?

I needed a tool for designing and implementing a distributed version of the Unix kernel. At the time, 1979, no such tool existed. I needed something that could express the structure of a program, deal directly with hardware, and be sufficiently efficient and sufficiently portable for serious systems programming.

You can find more detailed information about the design and evolution of C++ in my HOPL (History of Programming Languages) papers, which you can find on my home pages, and in my book "The Design and Evolution of C++".

Was there a particular problem you were trying to solve?

The two problems that stick in my mind were to simulate the inter-process communication infrastructure for a distributed or shared-memory system (to determine which OS services we could afford to run on separate processors), and [the need] to write the network drivers for such a system. Obviously - since Unix was written in C - I also wanted a high degree of C compatibility. Very early, 1980 onwards, it was used by other people (helped by me) for simulations of various network protocols and traffic management algorithms.

Where does the name C++ come from?

As "C with Classes" (my ancestor to C++) became popular within Bell Labs, some people found that name too much of a mouthful and started to call it C. This meant that they needed to qualify what they meant when they wanted to refer to Dennis Ritchie's language, so they used "Old C", "Straight C", and such. Somebody found that disrespectful to Dennis (neither Dennis nor I felt that) and one day I received a "request" through Bell Labs management channels to find a better name. As a result, we referred to C++ as C84 for a while. That didn't do much good, so I asked around for suggestions and picked C++ from the resulting list. Everybody agreed that semantically ++C would have been even better, but I thought that would create too many problems for non-geeks.
C++ is my favorite language, the language I would spend all day working with without complaints. The problem is that today it is becoming very hard to find jobs for C++. I have to suffer doing sugar code in Java just to make some money...
C++ - A computer language that will remain with the computer technology evolution.
I think C++ is most common. I definitely use it the most. I also have friends that use it all the time plus use it at work.
I must say, it's very true, getting a job as a C++ programmer is very hard. Most of the companies jump on the .NET or Java bandwagon.
It's nigh impossible to get a C++ role these days.
Hi Bjarne, you got a nice haircut :)
Hi, if no jobs are available in C++, then what is the use of learning C++ at all?
2014-15/2899/en_head.json.gz/20025 | How Can One Test a Program's Average Performance?
The standard-library sort function. This function typically implements the Quicksort algorithm, which sorts an n-element sequence in O(n log n) time — on average.
Last week, I argued that testing a program's performance is harder than testing its functionality. Not only is it hard to verify that the performance is up to par, but it can be hard to define exactly what "par" means.
I would like to continue by looking at the standard-library sort function. This function typically implements the Quicksort algorithm, which sorts an n-element sequence in O(n log n) time — on average. Despite this average performance, input that is chosen unfortunately can cause a Quicksort implementation to run much more slowly than average; for example, in O(n²) time. I chose the word unfortunately on purpose, because it is not unusual for Quicksort implementations to use randomness to ensure that the quadratic-performance cases come along only very rarely.
Why is randomness important here? Quicksort starts by picking an element, called the pivot, of the array to be sorted. Quicksort then typically rearranges the elements of the sequence so that all elements less than or equal to the pivot come first, followed by all of the elements greater than the pivot. This rearrangement can always be done in O(n) time. Finally, Quicksort calls itself recursively to sort the elements of the two sections of the (now rearranged) array.
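As a toy illustration of that structure (written in Clojure rather than C++ purely for brevity, and building new sequences rather than rearranging an array in place the way a library sort does), here is a quicksort sketch that picks its pivot at random. Treat it as a sketch of the idea, not as any library's actual implementation.

(defn quicksort [coll]
  (if (<= (count coll) 1)
    (vec coll)
    (let [pivot   (rand-nth (vec coll))   ; random pivot guards the average case
          smaller (filter #(< % pivot) coll)
          equal   (filter #(= % pivot) coll)
          larger  (filter #(> % pivot) coll)]
      (vec (concat (quicksort smaller) equal (quicksort larger))))))

(quicksort [5 3 8 1 9 2])   ; => [1 2 3 5 8 9]

The rand-nth call is the randomness alluded to above; everything else is bookkeeping, and the rest of the discussion turns on what that single random choice buys you on average.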
Accordingly, Quicksort's running time is no worse than proportional to the number of elements times the maximum recursion depth. By implication, Quicksort's performance depends on having the recursion depth usually be no more than O(log n). This depth limit can be achieved so long as the pivot, on average, is not too close to the largest or smallest element.
How does Quicksort guarantee that the pivot is not too close to the endpoints? In general, it can't. Nevertheless, it can avoid performance problems most of the time by picking the pivot at random. Doing so ensures that Quicksort's average performance is reasonable, even though once in a while the pivot might happen to be close enough to an endpoint to cause performance problems. Such occasional problems aren't a big deal as long as they're rare. Right?
Well, that depends. Suppose your job is to write performance tests for an implementation of Quicksort.
How do you translate the vague "average performance" claim in the C++ standard into a requirement that is possible to test at all?
How do you test Quicksort in a way that gives you any confidence in the results?
What makes average performance so hard to test is that the very notion has an element of probability in it. If a program is required to produce a particular result, then you can say with certainty that the result of a particular test run is either right or wrong. In contrast, if you are testing a requirement on average performance, no single test can be said with certainty to be right or wrong. The best you can hope for is that by running more and more tests, you can increase your confidence that the program is working correctly; there is always the possibility that further testing may cause you to change your mind about the program's correctness.
In short, if the performance requirements include claims about average execution time, testing those claims is apt to require some kind of statistical analysis. Such analysis is not always easy, but certainly has a long tradition in engineering. As an example, consider American Airlines flight 191.
Flight 191 took off from O'Hare Airport on May 25, 1979. Just as the airplane was leaving the ground, the engine on the left wing seized up and separated from the wing. The engine was attached to the wing by shear pins that were designed to break rather than damage the wing. Nevertheless, because of faulty maintenance, the wing was damaged; that damage caused the airplane to go out of control and crash, killing everyone aboard.
In reading about the ensuing investigation, I saw a discussion of how a different aircraft manufacturer tested its shear pins in order to ensure that — assuming that the aircraft is maintained properly — the pins will allow the engine to leave the wing rather than damage it. It hadn't occurred to me before, but a major engineering problem in designing shear pins is that the purpose of a shear pin is to break if too much force is applied to it. There is no way to test whether a pin meets that requirement without destroying it. It follows, therefore, that the pins that are actually used in the airplane cannot be tested.
How can one possibly be confident in the safety of an airplane that is built this way? The answer is quite clever.
The engine is attached to the wing with several shear pins in such a way that even if one of them fails to break, the engine will still separate from the wing rather than damage the wing.
The shear pins are manufactured in batches of 100, all made at the same time in the same way.
From each batch of 100 pins, ten pins are selected at random and tested, thereby destroying them. If all ten pins pass the tests, the other 90 are assumed to be good enough to use. If even a single pin fails, the entire batch is discarded.
Obviously, this design involves not only clever mechanical engineering, but also sophisticated statistical reasoning. The limits on the pins must be chosen so that the probability of two randomly chosen pins being out of limits is very small once the 10% sample of the pins has passed its tests. I imagine that this probability can be made even smaller by making the limits on the tested pins narrower than the pins need to be in practice.
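As a rough illustration of the kind of reasoning involved (my own back-of-the-envelope sketch, not any actual aerospace procedure), one can ask: if a batch of 100 pins contains some number of bad ones, how likely is it that a random sample of ten misses all of them, so that the batch is wrongly accepted?

```python
from math import comb

def probability_batch_accepted(bad_pins, batch=100, sample=10):
    """Chance that a random sample of `sample` pins contains no bad pin,
    so the remaining pins in the batch are accepted."""
    good = batch - bad_pins
    if good < sample:
        return 0.0
    return comb(good, sample) / comb(batch, sample)

for bad in (1, 2, 5, 10):
    print(bad, round(probability_batch_accepted(bad), 3))
```

A single bad pin slips past a ten-pin sample about 90% of the time, which is why the redundancy of multiple pins and the narrower limits on the tested pins matter so much.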
I would not want to have to do this kind of statistical analysis in order to test the performance of a Quicksort implementation. Even if I were confident enough in my ability to get the statistics right, there is always the possibility that a future change to the specifications or to the test procedure might render the statistics invalid. Moreover, there is one important difference between algorithms such as Quicksort and mechanical devices such as shear pins, namely that algorithms are sometimes given difficult inputs on purpose. For example, Doug McIlroy wrote a paper in 1999 that detailed how one can construct input to Quicksort that will force it to take O(n²) operations to sort an n-element array. Does a Quicksort implementation that misbehaves in this way fail to meet its specifications? If so, it's hard to see how we can use Quicksort at all.
One way to simplify such performance-testing problems is to use white-box testing, which is testing that takes advantage of knowledge of the program's implementation details. I'll discuss such testing techniques in more detail next week.
Developer Reading List
Andrew Binstock, December 21, 2012
New books on C, C#, Node, Win8 Apps, Perl and Groovy.
Programming C# 5.0 by Ian Griffiths
This 800-page volume is a comprehensive tutorial and reference on C#. It's a thorough and complete discussion of the language and associated technologies written in a clear, if somewhat wordy, style. The book has been updated to cover more than the language per se. It also includes explanations of new features in .NET 4.5 and issues surrounding Windows 8 apps, such as packaging. Despite covering those technologies, the core theme is the language — not the libraries or the OS.

A useful chapter discusses, in considerable detail, the means of calling native code (both 32- and 64-bit), with lengthy coverage of COM and its specific requirements when called from C#. Both low-level calls to code written in C++ and to higher level languages such as VBscript or Jscript are discussed and thoughtfully explained.

My comment that the book can be used for reference is not an oblique suggestion that it contains numerous tables of APIs or anything of the sort. Rather, it contains a wealth of topics that are explained clearly and can serve as references and refreshers on how to do specific things, whether it's loading assemblies or determining which class library to write for, or figuring out how Windows 8's new stream for touch-based apps works. In all cases, you'll find several pages that lay out the material clearly.

The author fully expects you to have some background both in programming and in C#, so there is no primer. This choice of audience is possibly the reason for my only serious objection to the book, which is how little code it contains. Material is presented more verbally than I care for, but the clarity and thoroughness of the presentation make up for that. Recommended.
Computer Programming/Functional programming

A Wikibookian suggests that this book or chapter be merged into Programming Languages/Functional Languages. Please discuss whether or not this merge should happen on the discussion page.
Functional programming is a paradigm that treats computer programs as mathematical functions. When programming in a pure functional style, we do not manipulate states and variables (things that change value), but focus entirely on constants and functions (things that never change). Another distinguishing feature of functional programming (FP) is that functions are treated as first-class citizens. Programs written in a functional style often consist of functions that take other functions as input. This is a key feature of FP languages because it makes it very easy to build modular programs. The result is that software written in FP languages tends to be very concise. Indeed, one group of programmers at Utrecht University was able to build a tool for "constructing, editing and analyzing Bayesian networks" in only 10,000 lines of Haskell code, graphical interface included. An equivalent program in Java took 200,000 lines, twenty times as much.
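A tiny illustration of functions as first-class values, written in Python rather than Haskell for familiarity (my own example, unrelated to the Utrecht project):

```python
def compose(f, g):
    """Return a new function that applies g first, then f."""
    return lambda x: f(g(x))

double = lambda x: x * 2
increment = lambda x: x + 1

double_then_increment = compose(increment, double)
print(double_then_increment(10))  # 21: new behavior built by combining functions
```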
Want to learn more? You could
See functional programming At a glance just to have a quick overview
Visit the wikibook for a functional programming language, for example, Haskell (see also Scheme and Common Lisp).
Read the article below (which is aimed at programmers from an imperative background)
See also Procedural programming.
This is an import of the article Functional Programming For the Rest of Us on the (mostly) Public Domain defmacro blog
Introduction
Programmers are procrastinators. Get in, get some coffee, check the mailbox, read the RSS feeds, read the news, check out latest articles on technical websites, browse through political discussions on the designated sections of the programming forums. Rinse and repeat to make sure nothing is missed. Go to lunch. Come back, stare at the IDE for a few minutes. Check the mailbox. Get some coffee. Before you know it, the day is over.
The only thing is, every once in a while challenging articles actually do pop up. If you're looking in the right places, you'll find at least one of these every couple of days. These articles are hard to get through and take some time, so they start piling up. Before you know it, you have a list of links and a folder full of PDF files and you wish you had a year in a small hut in the middle of the forest with nobody around for miles so you could catch up. It would be nice if someone came in every morning while you're taking a walk down the river to bring some food and take out the garbage.
I don't know about your list, but a large chunk of the articles in mine are about functional programming. These generally are the hardest to get through. Written in a dry academic language, even the "ten year Wall Street industry veterans" don't understand what functional programming (also referred to as FP) articles are all about. If you ask a project manager in Citi Group or in Deutsche Bank why they chose to use JMS instead of Erlang, they'll say they can't use academic languages for industrial-strength applications. The problem is, some of the most complex systems with the most rigid requirements are written using functional programming elements. Something doesn't add up.
It's true that FP articles and papers are hard to understand, but they don't have to be. The reasons for the knowledge gap are purely historical. There is nothing inherently hard about FP concepts. Consider this article "an accessible guide to FP", a bridge from our imperative minds into the world of FP. Grab a coffee and keep on reading. With any luck your coworkers will start making fun of you for your FP comments in no time.
So what is FP? How did it come about? Is it edible? If it's as useful as its advocates claim, why isn't it being used more often in the industry? Why is it that only people with PhDs tend to use it? Most importantly, why is it so damn hard to learn? What is all this closure, continuation, currying, lazy evaluation and no side effects business? How can it be used in projects that don't involve a university? Why does it seem to be so different from everything good, and holy, and dear to our imperative hearts? We'll clear this up very soon. Let's start with explaining the reasons for the huge gap between the real world and academic articles. The answer is as easy as taking a walk in the park.
A Walk In The Park
Fire up the time machine. Our walk in the park took place more than two thousand years ago, on a beautiful sunny day of a long forgotten spring in 380 B.C. Outside the city walls of Athens, under the pleasant shade of olive trees Plato was walking towards the Academy with a beautiful slave boy. The weather was lovely, the dinner was filling, and the conversation turned to philosophy.
"Look at these two students", said Plato carefully picking words to make the question educational. "Who do you think is taller?" The slave boy looked towards the basin of water where two men were standing. "They're about the same height", he said. "What do you mean 'about the same'?", asked Plato. "Well, they look the same from here but I'm sure if I were to get closer I'd see that there is some difference."
Plato smiled. He was leading the boy in the right direction. "So you would say that there is nothing perfectly equal in our world?" After some thinking the boy replied: "I don't think so. Everything is at least a little different, even if we can't see it." The point hit home! "Then if nothing is perfectly equal in this world, how do you think you understand the concept of 'perfect' equality?" The slave boy looked puzzled. "I don't know", he replied.
So was born the first attempt to understand the nature of mathematics. Plato suggested that everything in our world is just an approximation of perfection. He also realized that we understand the concept of perfection even though we have never encountered it. He came to the conclusion that perfect mathematical forms must live in another world and that we somehow know about them by having a connection to that "alternative" universe. It's fairly clear that there is no perfect circle that we can observe. But we also understand what a perfect circle is and can describe it via equations. What is mathematics, then? Why is the universe described with mathematical laws? Can all of the phenomena of our universe be described by mathematics?
Philosophy of mathematics is a very complex subject. Like most philosophical disciplines it is far more adept at posing questions rather than providing answers. Much of the consensus revolves around the fact that mathematics is really a puzzle: we set up a set of basic non-conflicting principles and a set of rules on how to operate with these principles. We can then stack these rules together to come up with more complex rules. Mathematicians call this method a "formal system" or a "calculus". We can effectively write a formal system for Tetris if we wanted to. In fact, a working implementation of Tetris is a formal system, just specified using an unusual representation.
A civilization of furry creatures on Alpha Centauri would not be able to read our formalisms of Tetris and circles because their only sensory input might be an organ that senses smells. They likely will never find out about the Tetris formalism, but they very well might have a formalism for circles. We probably wouldn't be able to read it because our sense of smell isn't that sophisticated, but once you get past the representation of the formalism (via various sensory instruments and standard code breaking techniques to understand the language), the concepts underneath are understandable to any intelligent civilization.
Interestingly if no intelligent civilization ever existed in the universe the formalisms for Tetris and circles would still hold water, it's just that nobody would be around to find out about them. If an intelligent civilization popped up, it would likely discover some formalisms that help describe the laws of our universe. They also would be very unlikely to ever find out about Tetris because there is nothing in the universe that resembles it. Tetris is one of countless examples of a formal system, a puzzle, that has nothing to do with the real world. We can't even be sure that natural numbers have full resemblance to the real world, after all one can easily think of a number so big that it cannot describe anything in our universe since it might actually turn out to be finite.
A Bit of History
Let's shift gears in our time machine. This time we'll travel a lot closer, to the 1930s. The Great Depression was ravaging the New and the Old worlds. Almost every family from every social class was affected by the tremendous economic downturn. Very few sanctuaries remained where people were safe from the perils of poverty. Few people were fortunate enough to be in these sanctuaries, but they did exist. Our interest lies in mathematicians in Princeton University.
The new offices constructed in Gothic style gave Princeton an aura of a safe haven. Logicians from all over the world were invited to Princeton to build out a new department. While most of America couldn't find a piece of bread for dinner, high ceilings, walls covered with elaborately carved wood, daily discussions over a cup of tea, and walks in the forest were some of the conditions in Princeton.
One mathematician living in such lavish lifestyle was a young man named Alonzo Church. Alonzo received a B.S. degree from Princeton and was persuaded to stay for graduate school. Alonzo felt the architecture was fancier than necessary. He rarely showed up to discuss mathematics with a cup of tea and he didn't enjoy the walks in the woods. Alonzo was a loner: he was most productive when working on his own. Nevertheless Alonzo had regular contacts with other Princeton inhabitants. Among them were Alan Turing, John von Neumann, and Kurt Gödel.
The four men were interested in formal systems. They didn't pay much heed to the physical world, they were interested in dealing with abstract mathematical puzzles instead. Their puzzles had something in common: the men were working on answering questions about computation. If we had machines that had infinite computational power, what problems would we be able to solve? Could we solve them automatically? Could some problems remain unsolved and why? Would various machines with different designs be equal in power?
In cooperation with other men Alonzo Church developed a formal system called lambda calculus. The system was essentially a programming language for one of these imaginary machines. It was based on functions that took other functions as parameters and returned functions as results. The function was identified by a Greek letter lambda, hence the system's name. Using this formalism Alonzo was able to reason about many of the above questions and provide conclusive answers.
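The flavor of that formalism can be suggested with Church numerals, in which numbers themselves are functions that take and return other functions. The rendering below uses modern Python lambdas purely as an illustration; it is not Church's original notation.

```python
# A Church numeral n is a function that applies f to x exactly n times.
zero = lambda f: lambda x: x
successor = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Convert a Church numeral to an ordinary integer for display."""
    return n(lambda k: k + 1)(0)

two = successor(successor(zero))
print(to_int(add(two)(two)))  # 4: arithmetic expressed purely with functions
```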
Independently of Alonzo Church, Alan Turing was performing similar work. He developed a different formalism (now referred to as the Turing machine), and used it to independently come to similar conclusions as Alonzo. Later it was shown that Turing machines and lambda calculus were equivalent in power.
This is where the story would stop, I'd wrap up the article, and you'd navigate to another page, if not for the beginning of World War II. The world was in flames. The U.S. Army and Navy used artillery more often than ever. In attempts to improve accuracy the Army employed a large group of mathematicians to continuously calculate differential equations required for solving ballistic firing tables. It was becoming obvious that the task was too great for being solved manually and various equipment was developed in order to overcome this problem. The first machine to solve ballistic tables was a Mark I built by IBM - it weighed five tons, had 750,000 parts and could do three operations per second.
The race, of course, wasn't over. In 1949 an Electronic Discrete Variable Automatic Computer (EDVAC) was unveiled and had tremendous success. It was an early example of von Neumann's architecture and was effectively a real-world implementation of a Turing machine. For the time being Alonzo Church was out of luck.
In the late 1950s an MIT professor, John McCarthy (also a Princeton graduate), developed an interest in Alonzo Church's work. In 1958 he unveiled a List Processing language (Lisp). Lisp was an implementation of Alonzo's lambda calculus that worked on von Neumann computers! Many computer scientists recognized the expressive power of Lisp. In 1973 a group of programmers at MIT's Artificial Intelligence Lab developed hardware they called a Lisp machine - effectively a native hardware implementation of Alonzo's lambda calculus!
2014-15/4290/en_head.json.gz/8538 | The Coming Blowback
The Syrian Mousetrap
by AFSHIN RATTANSI

The current cast of the longest-running play in the world is ready for its changeover – and that doesn't just refer to hegemonic power-shifts in theatres of war. The Mousetrap, showing in the West End of London and currently starring Georgina Sutcliffe, was written by that old Syria resident, Agatha Christie. The cast of the play, which celebrates its diamond jubilee along with the British Queen this year, changes every ten months. When the present cast started, Syria became the first country ever to be saved by a third double veto cast by Russia and China at the UN Security Council.
Christie and her archaeologist husband, Max Mallowan, once stayed at the chimerical Baron Hotel near the old Orient Express terminus in Syria's second city of Aleppo. From the balcony of the Baron's room 215, the sham independence of Syria, itself, was declared by the colonial puppet King Faisal I of Iraq. I'm not sure, though, that Christie's room on the second floor even has the original art deco furniture I saw when I visited, eighteen months ago. Who knows what priceless artefacts have been ransacked in Aleppo…Palmyra…the Dead Cities? Aleppo's Baron Hotel, where spies drank under the Ottomans and which hosted Lawrence of Arabia, Charles de Gaulle, Egypt's Gamal Abdel Nasser and, of course, Syria's former president, Hafez Al Assad, is now rocked by the sounds of a lethal proxy war that may have already taken the lives of up to 20,000 people. NATO powers have shown themselves every bit as eager to prolong the conflict as they did in Yugoslavia or Palestine. At the UN Security Council, U.S., French and UK leaders would not even countenance peace talks between the warring parties as suggested by China and Russia.
Secular Syria, in the heart of the Middle East, is being slandered on mainstream news every day by a propaganda campaign all too easily coordinated by intelligence agents mandated by President Obama. The incompetence that led to Obama’s ‘secret’ intelligence authorization being leaked to the international media will not deter incompetent war reporters from singing the neoliberal party line on 24 hour television news channels. One wouldn’t put it past U.S. or UK networks for news bulletins to carry celebrations of rebels wearing actual “Al Qaeda” logos on their bandanas. It seems as if Washington desires to destroy Syria at whatever cost, in human life, regional chaos or 9/11 blowback. White House spokesperson Tommy Vietor merely declined to comment on reports of clandestine U.S. organisational support for the ‘secret’ base on Syria’s northern border established by Turkey, Saudi Arabia and Qatar. Will U.S.-taxpayers even be told their dollars are going into the hands of ‘Al Qaeda’? Will the U.S. Southern Bible Belt be told that a French Catholic Bishop on the ground is rep | 编程 |
Industry models play a crucial role in driving enterprise intelligence transformation and innovative development. High-quality industry data is key to improving the performance of large models and realizing industry applications. However, datasets currently used for industry model training generally suffer from issues such as insufficient data volume, low quality, and lack of domain expertise.
To address these problems, we constructed and applied 22 industry data processing operators to clean and filter 3.4TB of high-quality multi-industry classified Chinese and English language pre-training datasets from over 100TB of open-source datasets including WuDaoCorpora, BAAI-CCI, redpajama, and SkyPile-150B. The filtered data consists of 1TB of Chinese data and 2.4TB of English data. To facilitate user utilization, we annotated the Chinese data with 12 types of labels including alphanumeric ratio, average line length, language confidence score, maximum line length, and perplexity.
Furthermore, to validate the dataset's performance, we conducted continued pre-training, SFT, and DPO training on a medical industry demonstration model. The results showed a 20% improvement in objective performance and a subjective win rate of 82%.
- Industry categories: 18 categories including medical, education, literature, finance, travel, law, sports, automotive, news, etc.
- Rule-based filtering: Traditional Chinese conversion, email removal, IP address removal, link removal, Unicode repair, etc.
- Chinese data labels: Alphanumeric ratio, average line length, language confidence score, maximum line length, perplexity, toxicity character ratio, etc.
- Model-based filtering: Industry classification language model with 80% accuracy
- Data deduplication: MinHash document-level deduplication
- Data size: 1TB Chinese, 2.4TB English
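A minimal sketch of how a few of the rule-based labels above could be computed for a single document (my own illustration with a hypothetical function name; the exact operator definitions used to build this dataset may differ):

```python
import re

def compute_labels(text: str) -> dict:
    """Compute a handful of simple quality labels for one document."""
    lines = text.splitlines() or [""]
    line_lengths = [len(line) for line in lines]
    alnum = sum(ch.isalnum() for ch in text)
    special = sum((not ch.isalnum()) and (not ch.isspace()) for ch in text)
    words = re.findall(r"\w+", text)
    length = max(len(text), 1)
    return {
        "alnum_ratio": alnum / length,
        "special_char_ratio": special / length,
        "avg_line_length": sum(line_lengths) / len(line_lengths),
        "max_line_length": max(line_lengths),
        "num_words": len(words),
    }

print(compute_labels("def quicksort(xs):\n    return sorted(xs)"))
```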
Industry classification data size:
| Industry Category | Data Size (GB) | Industry Category | Data Size (GB) |
|---|---|---|---|
| Programming | 4.1 | Politics | 326.4 |
| Law | 274.6 | Mathematics | 5.9 |
| Education | 458.1 | Sports | 442 |
| Finance | 197.8 | Literature | 179.3 |
| Computer Science | 46.9 | News | 564.1 |
| Technology | 333.6 | Film & TV | 162.1 |
| Travel | 82.5 | Medicine | 189.4 |
| Agriculture | 41.6 | Automotive | 40.8 |
| Emotion | 31.7 | Artificial Intelligence | 5.6 |
| Total (GB) | 3386.5 | | |
To make the dataset easier to download and use, we have split it into sub-datasets for the 18 industries. This is the sub-dataset for the programming industry.
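A possible way to load the data with the Hugging Face `datasets` library, assuming the files are stored in JSON Lines format; the file pattern and column names below are placeholders, so adjust them to the actual layout of this repository:

```python
from datasets import load_dataset

# Placeholder pattern: point this at the actual data files in the repository.
dataset = load_dataset("json", data_files="data/*.jsonl", split="train")

example = dataset[0]
print(example["text"][:200])         # first 200 characters of the first document
print(example.get("industry_type"))  # industry label, if present
```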
Data processing workflow: