source_id | question | response | metadata
---|---|---|---|
23,280 | Do most programmers specialize in a single stack, leaving other things be, or are they experts in multiple languages at the same time? If it's more than one, how many is standard? By expert, I mean more than simply knowing the syntax of a language - an expert knows enough of standard libraries, tools, environments and syntax to be able to write non-trivial programs without having to constantly look things up or read books/tutorials. | The main benefit of knowing multiple languages isn't in writing them directly. All other things being equal, I'd rather work with a C# programmer who also knows C, Python and Lisp (for example) than one who's only ever hacked in C#. It's not that knowing more languages is better, it's that being able to think about problems at multiple levels and from multiple perspectives is really helpful. A programming language that doesn't change the way you think about programming is not worth knowing. -Alan Perlis It's not about checking off one more language, or putting it on your resume; you just need to understand its underlying concepts well enough to program in it to get the full benefit. You won't get that from having a basic understanding of the syntax. The more direct answer is "it depends". At larger companies you're expected/allowed to specialize, but as I said above, I believe there's still benefit to understanding things beyond your one favorite tool. At smaller places, you really can't get away with that. If nothing else, you typically need to maintain your app as well as build it, and you probably shouldn't use the same languages for running through logs/data munging as you do to actually build your app. I guess you could technically get away with knowing a single language, but the benefit of having a well-performing, strongly-typed (or at least assertion-capable), probably compiled language do the heavy lifting, and a scripting language for maintenance/setup/scripting tasks seems pretty big. I wouldn't want to do without it, certainly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23280",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4091/"
]
} |
23,313 | Do good programmers need to have syntax at the tip of their tongue when writing code? What do you make of them if they google for simple stuff online? Are they good or bad (maybe they just know where to look)? Should programmers have a good memory? Is this a trait of a good programmer? | My philosophy on programming is that it's a "state of mind" and the rest is "just syntax." (i.e. not (as) important) That said, one shouldn't have to look up the simple stuff. At least, not for the language(s) you work with regularly. There's nothing wrong with needing refreshers, and knowing how to find information is certainly a good skill to have. However, the core syntax should definitely be well known. Otherwise, you spend too much time searching and too little time programming. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23313",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8843/"
]
} |
23,351 | This is a fairly general question. I know a bit of Perl and Python and I am looking to learn programming in more depth so that once I get the hang of it I can start developing applications and then websites. I would like to know of an algorithm (sequence of steps :)) that could describe my approach towards learning programming in general. I have posted small questions on Perl/Python and I have received great help from everyone. Note: I am not in a hurry to learn. I know it takes time and that's fine. Please give any suggestions you think are valid. Also, please don't push me to learn Lisp, Haskell etc - I am a beginner. | The 11-step algorithm for learning a new programming language I'm currently in the process of learning Lisp , and I'd recommend the following algorithm: Ask around if the language is worth learning and where good resources can be found. If positive responses to the language are given by experts then proceed to step 2. Create an initial programming environment. Keep it simple: text editor and compiler/interpreter. The bare minimum. Consider a specific user account on your machine with a special colour scheme to cue the change of mindset. Create the "Hello, World!" application. Learn general syntax and control statements (if-then-else, repeat-until etc). Create a sandbox to verify simple control cases (true/false evaluations etc). Try out every primitive type (int, double, string etc). Perform currency calculations. The number guessing game (as suggested by @Jeremy ) is good for this. Create a class (if applicable) with several methods/functions. Make calls between functions. Apply control statements. Learn arrays and collections. Create suitably complex examples that create arrays and collections of each of the classes/functions/primitives that are available to you. Learn file IO. Create examples of reading, manipulating and writing binary and character based files. Ask more questions about idiomatic programming within the language (pointers, macros, monads, closures, support frameworks, build environments etc). Choose (or adapt your existing) IDE to work in the recommended idiom. Write a variety of applications that please you (or your boss). After 1 year return to step 1 for another language while maintaining your interest in the one you've just been learning. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23351",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9057/"
]
} |
23,535 | Should web developers continue to spend effort progressively enhancing our web applications with JavaScript, ensuring that features gracefully degrade, thereby ensuring accessibility? Or should we spend that time focused on new features or other areas of development? The subtext of that question would be: How many of our customers/clients/users utilize our websites or applications with JavaScript disabled? Do you have any projects with requirements that specifically demand JavaScript functionality (almost all of mine do), and do those requirements also demand graceful degradation? For the sake of asking this question, I pulled up programmers.stackexchange.com without JavaScript enabled, and I was greeted with this message: "Programmers - Stack Exchange works best with JavaScript enabled". It was difficult to log in, although the site seemed to generally work okay. (I wasn't able to vote up any questions.) I think this is a satisfactory approach to development. Imagine the effort involved in making all of the site's features work with plain old HTML and server-side logic. On the other hand, I wonder how many users have been alienated by this approach. We've all been trained (at least the good developers among us) to use progressive enhancement and to ensure our web applications' dynamic features degrade gracefully. Is this progressive enhancement just pissing into the wind, or do some of our customers actually utilize certain web services without JavaScript enabled? | I use NoScript but whitelist any site I actually intend to use. When you install NoScript, JavaScript, Java, Flash, Silverlight and possibly other executable content are blocked by default. You will be able to allow JavaScript/Java/... execution... selectively, on the sites you trust. You can allow a site to run scripts temporarily, if you're just surfing randomly, or permanently, when you visit it often and you really trust it. This means that NoScript learns from your own browser habits and tends to disappear in the background after a while, but it promptly comes back to save your day if you stumble upon a malicious web page. When you browse a site containing blocked scripts, a notification, similar to those issued by the popup blocker, is shown. Look at it or at the statusbar icon to know the current NoScript permissions... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23535",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5497/"
]
} |
23,683 | I would like to ask you some questions about dirty code. There are some beginners who coded on a medium project. The code is a very huge ball of mud. They are not advanced programmers. They just know how to use a keyboard and a little about Java. They just wrote code with 12,000 lines in their main class, though 6,000 of those lines belong to NetBeans itself. My job is to analyze the code and suggest a good way to maintain the code. My idea is to scrap the project and start a new one with OOP methodology. Recently I collected some notes and ideas about the problem, from this site and some others. Now, I have the following questions: Should we repair the code, and change it to OOP? We are now debugging it. The code has no comments, no documentation, no particular style of programming, and so forth. Changing it is really expensive and time-consuming. What can we do about this? How can I teach them to follow all the rules (commenting, OOP, good code quality, etc.)? The code is erroneous and error-prone. What can we do? Testing? We have already written two or three A4 pages of corrections, but it seems endless. I should also say that I am new to the team. I think I have broken the rules about adding people too late to the project, as well. Do you think I should leave them? | Step 0: Backup to SCM Because, as hinted at by JBRWilkinson in the comments, version control is your first line of defense against (irreversible) disaster. Also back up software configuration details, procedures to create deliverables, etc... Step 1: Test First Then start by writing tests : for what works, and for what fails. No matter what you decide to do, you're covered. You can now either: start from scratch and re-write , or fix it. My advice would be to start the general architecture from scratch , but extract from the mess the parts that validate checkpoints and to refactor these as you see fit. Step 2: Verify and Monitor Set up a Continuous Integration system (to complement step 0 and step 1 ) AND a Continuous Inspection system (to prepare for step 4 ). Step 3: Stand on the Shoulders of Giants (as you always should...) Refactoring to Patterns Refactoring: Improving the Design of Existing Code . Working Effectively with Legacy Code (as recommended by Jason Baker ) Refactoring Step 4: Clean That sort of goes without saying, but instead of skimming through the code yourself, you may want to simply run linters / static analyzers and other tools on the broken codebase to find errors in the design and in the implementation. Then you might also want to run a code formatter, which will already help a bit with the housekeeping. Step 5: Review It's easy to introduce tiny bugs by refactoring or cleaning things up. It only takes a wrong selection and a quick hit on a key, and you might delete something fairly important without realizing it at first. And sometimes the effect will appear only months later. Of course, the above steps help you to avoid this (especially by implementing a strong test harness), but you never know what can and will slip through. So make sure to have your refactorings reviewed by at least one other dedicated pair of eyeballs (and preferably more than that). Step 6: Future-Proof your Development Process Take all of the above, and make it an inherent part of your usual development process, if it already isn't. Don't let this happen again on your watch, and work together with your team to implement safeguards in your process and enforce this (if that's even possible) in your policies.
Make producing Clean Code a priority. But really, test. A lot. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23683",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9112/"
]
} |
23,691 | Leaving the whole pie to only a few of them, amplifying the huge differences between the two statuses. Pay is a (huge) one, not having to do overtime is another. I leave the question open to hopefully get many great answers on all the different subjects that affect that feeling and decision not to go. While this question is really global, I'll be interested in any studies, facts, articles, opinions regarding local markets such as the US, India and even Australia, which I'm in love with. | Don't know about others, but thinking about myself: I have a job that I'm currently happy with. I work regularly and get paid regularly. Of course there are always too many things to do, but still, the work is mostly interesting and the workload is approximately constant and predictable . Hardly so with freelancing (think of work requests as a Poisson process , and how the stability of frequency depends on the average frequency; a cafeteria with 10 customers and 1 toilet is not linearly proportional to a cafeteria with 100 customers and 10 toilets, i.e. the queue is not similar). Going freelance would require me to do all the marketing, selling, bureaucracy and other boring and scary (but admittedly important) stuff. Actually I don't think I could do it successfully. At least I would hate it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23691",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
23,760 | I'm graduating in a couple of weeks, and my resume (as expected) lists the languages that I've had experience with. Previously I've put "C/C++" , however back then I didn't have as much experience with these two languages as I do now. Now that I've formally learned these two languages, it has become evident to me (and anyone who really knows these languages) that they are similar, and completely dissimilar at the same time. Sure, most C code is compilable C++ code, but syntax and incorporation of library functions is pretty much where these similarities end. In most non-trivial problems, chances are that the desirable C++ solution will be different from the desirable C solution. My question: Will recruiters take note or care about whether you put "C/C++" as opposed to "C, C++" ? Will they assume a lack of knowledge of the workings of either because of the inclusion of the first form, or perhaps see the inclusion of the second form as a potential "resume beefer" (listing them as 2 languages, instead of "one")? Furthermore, for jobs that you've applied to that were particularly interested in these two languages, did the interview process include questions about the differences between C programming and C++ programming (so, about actual programming techniques, not only the extra paradigms in the latter)? | C, C++ I don't like C/C++, because though C++ is technically a superset of C, to do it right, you have to do things differently. C/C++ makes you look like someone who knows C and knows that a C++ compiler accepts C, too. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23760",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7356/"
]
} |
23,810 | Well I've been hitting the books wherever I can. I have an interview coming up, first one via phone, for a software engineer position. I've read all the blog posts, I've read all the accounts of interviews (some pretty old), and Google itself even suggested a reading list of books, none of which would surprise anyone here. Still, after some time preparing, I can't shake that feeling that there is so much ground to cover, and I'm never sure whether to go with depth or breadth. I've found myself re-learning a whole area of compsci, only to forget most of the nitty-gritty details as I move on to another. So, I don't know that there's a good answer to this question, but I'm looking for any practical advice on how to tackle the remaining weeks in advance of the interview. Part of my brain is tired from cramming, and of course the rest of it has to be utilized for some tough problems at my current place of employment. | Things you should know Google wants to hire you! The life-blood of any software company is its employees and Google is no different. It's looking to hire the best and the brightest, and the people conducting the interview(s) want you to succeed just as much as you do. Google will do its best to evaluate you as accurately as possible. It's their job. Google is a data-driven company. Hiring decisions are not decided by manager fiat. Instead, each interviewer takes extensive notes during the interview, which get combined into a packet. That packet will then get reviewed by a separate committee , which will ultimately make the decision. So if you just weren't 'gelling' with one of your interviewers, don't worry! What matters is how well you perform on the interview. Skills you should have Be sure to brush up on the following skills/techniques before your interview. Even if you don't get asked a question on these directly, reviewing them can certainly get your head into the right mindset. Data structures What is the difference between an Array and a Linked List? A Tree and a Graph? When would you use one over the other? How would that impact speed/memory trade-offs? An interview question doesn't end at a working solution. Be able to explain the runtime of your approach and what sorts of trade-offs you could make. For example, "if I cached everything it would take X gigs of RAM but would perform faster because...". Or, "if I kept the binary tree sorted while I performed the operations X would be slower, Y would be faster, etc." Algorithms Basic graph traversal algorithms, tree traversal algorithms, and two good approaches for sorting numbers. Make sure to practice solving a non-trivial problem using Dynamic Programming. That is your ace in the hole when it comes to tough interview questions! Hash tables This is huge. Know everything there is to know about hash tables, from being able to implement one yourself, to knowing about hashing functions, to why the number of buckets should be a prime number. The concepts involved with hash tables are relevant to just about every area of Computer Science. Talking points about yourself Those first few minutes of chit-chat with the interviewer are an important time to explain any sort of experience which sets you apart. Relevant projects, significant technical accomplishments, and the like. Remember, the person conducting the interview has interviewed dozens if not hundreds of smart people just like you. So what can you say that would surprise them?
For example, in an interview I spoke to the interviewer about a program I wrote to play the game of Go in college. It is very difficult to write an AI for the game of Go, and I have a horrible Go-bot to prove it! The bottom line is be yourself, and not just some smart person who knows how to program. Don't stress out too much, it's just an interview like any other. Rest assured that nobody will ask you stupid questions about manhole covers or Mt. Fuji. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23810",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8983/"
]
} |
23,845 | I was watching Bob Ross paint some "happy trees" tonight, and I've figured out what's been stressing me out about my code lately. The community of folks here and on Stack Overflow seem to reject any whiff of imperfection. My goal is to write respectable (and therefore maintainable and functioning) code, by improving my skills. Yet, I code creatively. Let me explain what I mean by "coding creatively": My first steps in a project are often to sit down and bash out some code. For bigger things, I plan a bit out here and there, but mostly I just dive in. I don't diagram any of my classes, unless I'm working with others who are creating other pieces in the project. Even then, it certainly isn't the first thing I do. I don't typically work on huge projects, and I don't find the visual very useful. The first round of code I write will get rewritten many, many times as I test, simplify, redo, and transform the original hack into something reusable, logical, and efficient. During this process, I am always cleaning. I remove unused code, and comment anything that isn't obvious. I test constantly. My process seems to go against the grain of what is acceptable in the professional developer community, and I would like to understand why. I know that most of the griping about bad code is that someone got stuck with a former employee's mess, and it cost a lot of time and money to fix. That I understand. What I don't understand is how my process is wrong, given that the end result is similar to what you would get with planning everything from the start. (Or at least, that's what I have found.) My anxiety over the issue has been so bad lately that I have stopped coding until I know everything there is about every method for solving the particular problem I am working on. In other words, I have mostly stopped coding altogether. I sincerely appreciate your input, no matter what your opinions are on the issue. Edit: Thank you all for your answers. I have learned something from each of them. You have all been most helpful. | There's nothing wrong with code-test-refactor-repeat, just tell people you're prototyping. On the other hand, for larger projects you will find that some thought given to the design up-front will save you a lot of time in the oh-crap-now-what loop! P.S.: Diagramming techniques help you to learn visual thinking skills, which are valuable even if no one but you ever sees your diagrams. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23845",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5194/"
]
} |
23,852 | I've seen others use Bitwise-OR to combine flags before: #define RUN 0x01
#define JUMP 0x02
#define SHOOT 0x04
const byte madPerson = RUN | JUMP | SHOOT; That's also the way I do it. But I've also seen some (not as many) combine flags using addition: #define RUN 0x01
#define JUMP 0x02
#define SHOOT 0x04
const byte madPerson = RUN + JUMP + SHOOT; Which one is more "readable"? (Which one do you think more people will recognize?) What is the "standard" way to do it? Which one do you prefer? | Bitwise-OR. Addition is dangerous. Consider an example where a bandit is a person, and an angry bandit is a bandit that speaks and shoots. Later, you decide all bandits should shoot, but you've forgotten about the angry bandit definition and don't remove its shooting flag. #define PERSON 1 << 0
#define SPEAKS 1 << 1
#define SHOOTS 1 << 2
#define INVINCIBLE 1 << 3
const byte bandit = PERSON | SHOOTS; // 00000101
const byte angryBandit_add = bandit + SPEAKS + SHOOTS; // 00001011 error
const byte angryBandit_or = bandit | SPEAKS | SHOOTS; // 00000111 ok If you used angryBandit_add your game would now have the perplexing logic error of having angry bandits that can't shoot or be killed. If you used angryBandit_or the worst you'd have is a redundant | SHOOTS . For similar reasons, bitwise NOT is safer than subtraction for removing flags. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23852",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6440/"
]
} |
24,077 | Are short identifiers bad? How does identifier length correlate with code comprehension? What other factors (besides code comprehension) might be of consideration when it comes to naming identifiers? Just to try to keep the quality of the answers up, please note that there is some research on the subject already! Edit Curious that everyone either doesn't think length is relevant or tends to prefer larger identifiers, when both links I provided indicate large identifiers are harmful! Broken Link The link below pointed to research on the subject, but it's now broken, I don't seem to have a copy of the paper with me, and I don't recall what it was. I'm leaving it here in case someone else figures it out. http://evergreen.loyola.edu/chm/www/Papers/SCP2009.pdf | The best "rule" I've heard is that name lengths should be proportional to the length of the scope of the variable. So an index i is fine if the body of the loop is a few lines long, but I like to use something a little more descriptive if it gets to be longer than 15ish lines. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/24077",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5719/"
]
} |
24,342 | I really enjoy programming games and puzzle creators/games. I find myself engineering a lot of these problems the same way and ultimately using similar techniques to program them that I'm really comfortable with. To give you a brief insight, I like to create graphs where nodes are represented with objects. These objects hold data such as coordinates, positions and of course references to other neighboring objects. I'll place them all in a data structure and make decisions on this information in a "game loop". While this is a brief example, it's not exact in all situations. It's just one way I feel really comfortable with. Is this bad? | No, it's fine. The point of practical programming is to find solutions that will possibly be useful in many similar developments. You just found one. You can't and shouldn't be creating different solutions just for the sake of them being different. But you definitely should have a critical look at your solutions each time and ask yourself whether they're still good or maybe the industry has progressed since and you need to align with it accordingly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/24342",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2638/"
]
} |
24,343 | Speaking as someone with an Electronic Engineering rather than Computer Science degree, what is the one bit of computer science I should know to make me a better real-world programmer? (By real world I mean something I'm going to use and benefit from in my day-to-day job as a programmer - for instance I'd suggest understanding database normalisation is of more practical use than understanding a quick sort, for which there are lots of libraries). | If I have to choose just one bit, which is a difficult decision, I'd say go for the Big O notation . Understanding the implications of O(n), O(ln n), O(n²), O(2^n), O(n!) helps you to avoid a lot of expensive mistakes, the kind that work well in the test environment but fail disastrously in production. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/24343",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5095/"
]
} |
24,378 | PHP, as most of us know, has weak typing . For those who don't, PHP.net says: PHP does not require (or support) explicit type definition in variable declaration; a variable's type is determined by the context in which the variable is used. Love it or hate it, PHP re-casts variables on-the-fly. So, the following code is valid: $var = "10";
$value = 10 + $var;
var_dump($value); // int(20) PHP also allows you to explicitly cast a variable, like so: $var = "10";
$value = 10 + $var;
$value = (string)$value;
var_dump($value); // string(2) "20" That's all cool... but, for the life of me, I cannot conceive of a practical reason for doing this. I don't have a problem with strong typing in languages that support it, like Java. That's fine, and I completely understand it. Also, I'm aware of - and fully understand the usefulness of - type hinting in function parameters. The problem I have with type casting is explained by the above quote. If PHP can swap types at-will , it can do so even after you force cast a type; and it can do so on-the-fly when you need a certain type in an operation. That makes the following valid: $var = "10";
$value = (int)$var;
$value = $value . ' TaDa!';
var_dump($value); // string(8) "10 TaDa!" So what's the point? Take this theoretical example of a world where user-defined type casting makes sense in PHP : You force cast variable $foo as int → (int)$foo . You attempt to store a string value in the variable $foo . PHP throws an exception!! ← That would make sense. Suddenly the reason for user defined type casting exists! The fact that PHP will switch things around as needed makes the point of user defined type casting vague. For example, the following two code samples are equivalent: // example 1
$foo = 0;
$foo = (string)$foo;
$foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;
// example 2
$foo = 0;
$foo = (int)$foo;
$foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo; A year after originally asking this question, guess who found himself using typecasting in a practical environment? Yours Truly. The requirement was to display money values on a website for a restaurant menu. The design of the site required that trailing zeros be trimmed, so that the display looked something like the following: Menu Item 1 .............. $ 4
Menu Item 2 .............. $ 7.5
Menu Item 3 .............. $ 3 The best way I found to do that was to cast the variable as a float: $price = '7.50'; // a string from the database layer.
echo 'Menu Item 2 .............. $ ' . (float)$price; PHP trims the float's trailing zeros, and then recasts the float as a string for concatenation. | In a weakly-typed language, type-casting exists to remove ambiguity in typed operations, when otherwise the compiler/interpreter would use order or other rules to make an assumption of which operation to use. Normally I would say PHP follows this pattern, but of the cases I've checked, PHP has behaved counter-intuitively in each. Here are those cases, using JavaScript as a comparison language. String Concatenation Obviously this is not a problem in PHP because there are separate string concatenation ( . ) and addition ( + ) operators. JavaScript var a = 5;
var b = "10"
var incorrect = a + b; // "510"
var correct = a + Number(b); // 15 String Comparison Often in computer systems "5" is greater than "10" because they aren't interpreted as numbers. Not so in PHP, which, even if both are strings, realizes they are numbers and removes the need for a cast: JavaScript console.log("5" > "10" ? "true" : "false"); // true PHP echo "5" > "10" ? "true" : "false"; // false! Function signature typing PHP implements bare-bones type-checking on function signatures, but unfortunately it's so flawed it's probably rarely usable. I thought I might be doing something wrong, but a comment on the docs confirms that built-in types other than array cannot be used in PHP function signatures - though the error message is misleading. PHP function testprint(string $a) {
echo $a;
}
$test = 5;
testprint((string)5); // "Catchable fatal error: Argument 1 passed to testprint()
// must be an instance of string, string given" WTF? And unlike any other language I know, even if you use a type it understands, null can no longer be passed to that argument ( must be an instance of array, null given ). How stupid. Boolean interpretation [ Edit ]: This one is new. I thought of another case, and again the logic is reversed from JavaScript. JavaScript console.log("0" ? "true" : "false"); // True, as expected. Non-empty string. PHP echo "0" ? "true" : "false"; // False! This one probably causes a lot of bugs. So in conclusion, the only useful case I can think of is... (drumroll) Type truncation In other words, when you have a value of one type (say string) and you want to interpret it as another type (int) and you want to force it to become one of the valid set of values in that type: $val = "test";
$val2 = "10";
$intval = (int)$val; // 0
$intval2 = (int)$val2; // 10
$boolval = (bool)$intval; // false
$boolval2 = (bool)$intval2; // true
$props = (array)$myobject; // associative array of $myobject's properties I can't see what upcasting (to a type that encompasses more values) would really ever gain you. So while I disagree with your proposed use of typing (you essentially are proposing static typing , but with the ambiguity that only if it was force-cast into a type would it throw an error — which would cause confusion), I think it's a good question, because apparently casting has very little purpose in PHP. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/24378",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5497/"
]
} |
24,398 | I'm a year away from graduating from university, and I'm really looking forward to solving practical problems. Especially non-trivial ones which require a bit of research and a lot of thinking. But at the same time, that is also my greatest fear - being faced with a problem that I'm unable to solve, no matter how hard I try. And with pressure to deliver code on impending deadlines just around the corner, it does look a bit scary when viewing it from the safe playground of uni (where the worst thing that can happen is that you have to redo a course or exam). So for those who have been in industry for any length of time, what would happen if you were told to solve a problem that you couldn't? Has it happened, and if so, what did happen? Did they just drop it and say "Oh well, guess we can make do with something else"? Were there consequences? Were you reprimanded, or even fired? | First of all, your fear is very healthy, and normal. Here are my musings after about 15 years in the software industry. Here are some questions to ask yourself: Do you understand the problem? Do you know that the problem is unsolvable (within your time/budget constraints)? Do you just not know how to solve the problem? Is your client/boss asking for the impossible? Is it worth it? (ROI) Is it a REAL problem? 1) First of all, make sure you understand the problem. There are no stupid questions.
Do you understand what your client/boss is asking you versus what they need? 2) This will happen. "Build me a bridge by tomorrow." Make sure that you know for a fact that a problem is unsolvable within your constraints. Your client/boss might be flexible on the time/budget and these can be modified to give you more time/budget. 3) If the problem is understandable and the constraints are within reason, and there is technology that can solve the problem, but you just don't know enough... that's what StackOverflow and the Internet are for. Make sure you do your research first. Try to ask explicit questions that have quantifiable answers. Ask your peers. Have a design session. 4) This is a variant of answer number 2. It seems like your client/boss is asking the impossible. Do some research. Never say that the problem is unsolvable, unless you know exactly why and you can clarify. 5) ROI stands for Return On Investment. This refers to an investment in time. Your time! Is the problem important enough to warrant the amount of time it will take you to research and solve it? Discuss this with your client/boss. 6) Is it a real problem? Clients oftentimes understand what they want, but don't necessarily understand what they need. Try to understand what your client/boss actually needs and discuss this with them. Hope these guidelines help you. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/24398",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
24,460 | I'm totally new to the Ruby world, and I'm a bit confused by the concept of Symbols. What's the difference between Symbols and Variables? Why not just use variables? Thanks. | Variables and symbols are different things. A variable points to different kinds of data. In Ruby, a symbol is more like a string than a variable. In Ruby, a string is mutable, whereas a symbol is immutable. That means that only one copy of a symbol needs to be created. Thus, if you have x = :my_str
y = :my_str :my_str will only be created once, and x and y point to the same area of memory. On the other hand, if you have x = "my_str"
y = "my_str" a string containing my_str will be created twice , and x and y will point to different instances. As a result, symbols are often used as the equivalent to enums in Ruby, as well as keys to a dictionary (hash). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/24460",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6651/"
]
} |
24,542 | I have read a lot of threads about functional programming languages lately (almost in the past year, in fact). I would really like to pick one and learn it thoroughly. Last [course] semester, I was introduced to Scheme. I loved it. Loved the extreme simplicity of syntax, the homoiconicity principle, the macros ( hygienic and non-hygienic), the n-arity of procedures, etc. The problem with Scheme is it's an academic language. I don't think it is really used in production environments. I also don't believe it is particularly good to have on a resume. So, I have been looking around for alternatives. There are many of them and they somehow all seem to have a similar level of popularity. Some thoughts about other functional languages I have considered so far: Clojure: It sounds great because it can access the Java world, it is oriented towards scalability and concurrency, but isn't the Java world on an edge right now? I already know Java pretty well, but would it be wise to invest even more energy in depending on the JVM? Haskell: Looks like a much-appreciated language, but from what I have read, it's also more of an academic language. Lisp: It's been around since forever. It seems to have most of what I like from Scheme. It has a big community. For what I [think I] know, it is probably the most widely used functional programming language in industry(?). F#: Didn't really consider it. I'm not a big fan of MS stuff. I don't have the money to pay for their software (I could get it free through university alliances, but I'm more inclined to go with community-driven solutions). Though... I guess it would be the best career-oriented choice. Tonight, I'm leaning towards Lisp. One week ago, it was Haskell. Before that it was Clojure. In the past year, I was doing some Scheme for fun, not pushing it for the reason you know. Now I would like to get serious (about learning one, about doing real projects with it, about maybe eventually professionally working with it). My problem is I would need to learn them all in depth before being able to choose one. | Since you want a practical language: Notice that Haskell and Lisp are used more than the others in industry, although there has been some recent interest in Clojure and F#. But look what happens when we add Scheme to the mix: Hmm, doesn't look so much like an academic language now, does it? Actually, the above graph is probably a lie; the word "scheme" can appear in help wanted ads in other contexts besides programming languages. :) So here is another graph that is probably (a little) more representative: If you want to explore a really kick-ass dialect of Scheme, have a look at Racket. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/24542",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6742/"
]
} |
24,558 | This is just something I wondered about while reading about interpreted and compiled languages. Ruby is no doubt an interpreted language, since the source code is processed by an interpreter at the point of execution. On the contrary, C is a compiled language, as one has to compile the source code for the machine first and then execute it. This results in much faster execution. Now coming to Python: a Python file ( somefile.py ), when imported, creates a file ( somefile.pyc ) in the same directory. Let us say the import is done in a Python shell or Django module. After the import I change the code a bit and execute the imported functions again, only to find that it is still running the old code. This suggests that *.pyc files are compiled Python files, similar to the executable created after compilation of a C file, though I can't execute a *.pyc file directly. When the Python file (somefile.py) is executed directly ( ./somefile.py or python somefile.py ) no .pyc file is created and the code is executed as is, indicating interpreted behavior. This suggests that Python code is compiled every time it is imported in a new process to create a .pyc, while it is interpreted when directly executed. So which type of language should I consider it as? Interpreted or Compiled?
And how does its efficiency compare to interpreted and compiled languages? According to wiki's Interpreted Languages page, it is listed as a language compiled to Virtual Machine Code; what is meant by that? | It's worth noting that languages are not interpreted or compiled, but rather language implementations either interpret or compile code. You noted that Ruby is an "interpreted language", but you can compile Ruby à la MacRuby , so it's not always an interpreted language. Pretty much every Python implementation consists of an interpreter (rather than a compiler). The .pyc files you see are byte code for the Python virtual machine (similar to Java's .class files). They are not the same as the machine code generated by a C compiler for a native machine architecture. Some Python implementations, however, do consist of a just-in-time compiler that will compile Python byte code into native machine code. (I say "pretty much every" because I don't know of any native machine compilers for Python, but I don't want to claim that none exist anywhere.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/24558",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7572/"
]
} |
24,583 | What is a situation while coding in C# where using pointers is a good or necessary option? I'm talking about unsafe pointers. | From the developer of C# himself: The use of pointers is rarely required in C#, but there are some situations that require them. As examples, using an unsafe context to allow pointers is warranted by the following cases: Dealing with existing structures on disk Advanced COM or Platform Invoke scenarios that involve structures with pointers in them Performance-critical code The use of unsafe context in other situations is discouraged. Specifically, an unsafe context should not be used to attempt to write C code in C#. Caution: "Code written using an unsafe context cannot be verified to be safe, so it will be executed only when the code is fully trusted. In other words, unsafe code cannot be executed in an untrusted environment. For example, you cannot run unsafe code directly from the Internet." You may go through this for reference | {
"source": [
"https://softwareengineering.stackexchange.com/questions/24583",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/963/"
]
} |
24,608 | What methods / libraries / tools would people suggest for generating license keys (those lovely AAAAA-AAAAA-AAAAA-AAAAA-AAAAA things you put in when you register software)? Any gotchas to look out for when implementing them? (At the moment I'm interested in this as a general thing rather than language specific so just state what language you're using if your solution is language specific). | It's about the same as when storing passwords. You should have a unique secret key known only to the generator and your program. Use this key to manipulate the details (user name, password, organization, etc.) and then hash it. You can then do some trivial transfer encoding, such as Base32, on the hash, or simply move it to a hex string if you don't care about a format. Any gotchas to look out for when implementing them? Keep secrets secret and separate. Make your implementation improvable. If someone breaks it, can you easily change the implementation? One common approach in desktop applications is to use a remote server to validate the license. This removes the possibility that someone could reverse-engineer a hash or the algorithm by inspecting the application itself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/24608",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5095/"
]
} |
24,615 | For Windows Development I mean. Looking over other questions, there are alternatives to VS, but they seem to be web-based ones, which is fine, or you could program an entire .net website in Notepad, should the urge drive you to. But is there more to it than just an IDE for Windows Development? I.e., is it possible for me to create an application in just Notepad? Is the compiler part of Visual Studio, or is it separate and callable via the command line or something? I don't want to stop using VS; I'm happy with it and it does what I need. It's just a facet I'm curious about. | Compilers are available separately. For C# it would be csc.exe . You could call it from the command line any time. Pass along the names of the source files to compile, the libraries to reference, the compilation options, and there you go. I believe Visual Studio itself calls the compiler over the command line when you ask it to build your project. The build output messages you see are what the command-line compiler returns. Apart from this, Visual Studio is more than just a GUI for a compiler. It has a nice text editor, debugger, designer tools, SQL browser, and it also integrates with test tools, version control and other instrumentation (it's extendable through plug-ins). You'd be hard-pressed to find an equivalent product (for the Microsoft stack) with a comparable level of consolidation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/24615",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9331/"
]
} |
24,910 | Having spent all morning trying to check something in - I now realise I've lost a couple of days' worth of work. It's happened before - and is apparently a common occurrence with SourceSafe. Can SourceSafe be used successfully, without problems, and if so, how? | My view is simple: migrate to something else ASAP. It won't take long (1-2 weeks WAG) and no matter how long the migration takes, it's easy to cost-justify that to management. A little time to migrate equates to solid source control and very little chance of lost source code. Do a quick Google search for "source safe horror stories" or similar if your boss is skeptical. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/24910",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4630/"
]
} |
24,940 | I'm having a discussion with a co-worker on Linq, I'll copy here:
Co-Worker: Let's be honest here. Linq syntax sucks. It's confusing and non-intuitive.
Me: Oh come on, more confusing than T-SQL?
Co-Worker: Uh, yes.
Me: It has the same basic parts: select, where, and from.
Co-Worker: Linq, to me, is a bastardization of relational + OO.
Co-Worker: Don't get me wrong - it's incredibly powerful, but they repurposed SQL to use against object collections.
I'm of the opinion that using Linq + lambdas is very powerful (he agrees), and also makes code easier to read (he disagrees on that point): pickFiles = from f in pickFolder.GetFiles("*.txt")
where ValidAuditFileName.IsMatch(f.Name)
select f; or var existing = from s in ActiveRecordLinq.AsQueryable<ScannedEntity>()
where s.FileName == f.FullName && s.DocumentType != "Unknown"
select s; or (VB code here) Dim notVerified = From image In images.AsParallel
Group Join verifyFile In verifyFolder.GetFiles("*.vfy").AsParallel.Where(
Function(v) v.Length > 0
).AsParallel
On image.Name.Replace(image.Extension, ".vfy") Equals verifyFile.Name
Into verifyList = Group
From verify In verifyList.DefaultIfEmpty
Where verify Is Nothing
Select verify To me this is clean and easy (at least easier than the alternatives) to read; what are your opinions on it? | I can't find the right post anymore, but Eric Lippert (and possibly several other softies) have opined on several occasions about how Linq is declarative , which, for several classes of problems, is far more intuitive than imperative syntax. Linq enables you to write code that expresses the intent , not the mechanism . You tell me which is easier to read. This: IEnumerable<Customer> GetVipCustomers(IEnumerable<Customer> source)
{
List<Customer> results = new List<Customer>();
foreach (Customer c in source)
{
if (c.FirstName == "Aaron")
{
results.Add(c);
}
}
results.Sort(new LastNameComparer());
return results;
}
class LastNameComparer : IComparer<Customer>
{
public int Compare(Customer a, Customer b)
{
return a.LastName.CompareTo(b.LastName);
}
} Or this? IEnumerable<Customer> GetVipCustomers(IEnumerable<Customer> source)
{
return from c in source
where c.FirstName == "Aaron"
orderby c.LastName
select c;
} Or even this? IEnumerable<Customer> GetVipCustomers(IEnumerable<Customer> source)
{
return source.Where(c => c.FirstName == "Aaron").OrderBy(c => c.LastName);
} The first example is just a bunch of pointless boilerplate in order to obtain the simplest of results. Anybody who thinks that it is more readable than the Linq versions needs to have his head examined. Not only that, but the first one wastes memory. You can't even write it using yield return because of the sorting. Your coworker can say what he wants; personally, I think Linq has improved my code readability immeasurably. There's nothing "relational" about Linq either. It may have some superficial similarities to SQL but it does not attempt in any shape or form to implement relational calculus. It's just a bunch of extensions that make it easier to query and project sequences. "Query" does not mean "relational", and there are in fact several non-relational databases that use SQL-like syntax. Linq is purely object-oriented, it just happens to work with relational databases through frameworks such as Linq to SQL because of some expression tree voodoo and clever design from the C# team, making anonymous functions implicitly convertible to expression trees. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/24940",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8451/"
]
} |
24,987 | What I think about Build Numbers is that whenever a new nightly build is created, a new BUILDNUMBER is generated and assigned to that build. So for my 7.0 version application the nightly builds will be 7.0.1, 7.0.2 and so on. Is it so? Then what is the use of a REVISION after the build number? Or is the REVISION part being incremented after each nightly build? I am a little confused here... do we refer to each nightly build as a BUILD ? The format is mentioned here: AssemblyVersion - MSDN | I've never seen it written out in that form. Where I work, we are using the form MAJOR.MINOR.REVISION.BUILDNUMBER, where:
MAJOR is a major release (usually many new features or changes to the UI or underlying OS)
MINOR is a minor release (perhaps some new features) on a previous major release
REVISION is usually a fix for a previous minor release (no new functionality)
BUILDNUMBER is incremented for each latest build of a revision.
For example, a revision may be released to QA (quality control), and they come back with an issue which requires a change. The bug would be fixed, and released back to QA with the same REVISION number, but an incremented BUILDNUMBER. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/24987",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2361/"
]
} |
25,023 | This article on technical debt has some good points, including: Working on the "technical matters" works best when it is driven by stories. The code base is probably in need of work everywhere, but the payoff will be received only where the code is going to be worked on for user-facing reasons. If no stories are going to pass through some crufty area, working on it is largely wasted. Therefore, I prefer the approach of taking stories as usual (but probably fewer of them), and following the "boy scout rule" of leaving the campground better than you found it. In other words, wherever the stories lead us, let's write more tests, let's refactor more aggressively. This approach has at least these advantages: maintains "best sensible" flow of stories; provides help from all team talent; provides for whole team to learn how to keep code clean; focuses improvement exactly where it is needed; does not waste improvement that "may" be needed; I've seen code quality have a very big effect on long-term productivity, so I am a believer that technical debt should be taken care of. I think the post above makes sense, but I'm not so sure about the last two points. I'm interested in finding out real experiences of benefits from cleaning technical debt, even if it was not related to user stories. What positive benefits have you seen from cleaning up your code base and ridding yourself of technical debt? What methods did you use to get the work done? | I can give you one example from my experience. About 10 or 12 years ago I inherited an application from a team of developers that ended up leaving the company (too long to get into here...). The system was a large home-grown middleware report generation system. It ran every weeknight and generated about 2 dozen Excel reports for senior executives of a Fortune 500 company. When I inherited it, it took about 5-6 hours to run and during any given week would fail at least 2 nights. I was not a happy camper to be given this mess. Initially my plan was just to stop the bleeding and fix the main cause of the failures. After I became more comfortable with the code base, I started looking for places where I could refactor and add stability and performance. Over the course of 2 years or so, I made many, many changes to the system. We retired that system a couple of years ago and at that point the whole process took 45 minutes to run and hadn't generated any issues in years. A lot of work went into paying down the technical debt but it was well, well worth it. It was nice not getting any phone calls in the middle of the night saying that the system had failed. It was nice coming into the office in the morning and seeing nothing but good news in the logs. (Aside... After a couple of years I ran into one of the main developers of this system. He asked me how it was doing and I told him how bad the system was. He actually apologized and told me he knew it would be a handful to support after he left and wished he had done a better job on it). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25023",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2314/"
]
} |
25,052 | The idea of recursion is not very common in the real world. So, it seems a bit confusing to novice programmers. Though, I guess, they become used to the concept gradually. So, what can be a nice explanation for them to grasp the idea easily? | To explain recursion , I use a combination of different explanations, usually trying to: explain the concept, explain why it matters, explain how to get it. For starters, Wolfram|Alpha defines it in more simple terms than Wikipedia : An expression such that each term is generated by repeating a particular mathematical operation. Maths If your student (or the person you explain to, from now on I'll say student) has at least some mathematical background, they've obviously already encountered recursion by studying series and their notion of recursivity and their recurrence relation . A very good way to start is then to demonstrate with a series and say that it's quite simply what recursion is about: a mathematical function... ... that calls itself to compute a value corresponding to an n-th element... ... and which defines some boundaries. Usually, you either get a "huh huh, whatev'" at best because they still do not use it, or more likely just a very deep snore. Coding Examples For the rest, it's actually a detailed version of what I presented in the Addendum of my answer for the question you pointed to regarding pointers (bad pun). At this stage, my students usually know how to print something to the screen. Assuming we are using C, they know how to print a single char using write or printf . They also know about control loops. I usually resort to a few repetitive and simple programming problems until they get it: the factorial operation, an alphabet printer, a reversed alphabet printer, the exponentiation operation. Factorial Factorial is a very simple math concept to understand, and the implementation is very close to its mathematical representation. However, they might not get it at first. Alphabets The alphabet version is interesting to teach them to think about the ordering of their recursive statements. Like with pointers, they will just throw lines randomly at you. The point is to bring them to the realization that a loop can be inverted by either modifying the conditions OR by just inverting the order of the statements in your function. That's where printing the alphabet helps, as it's something visual for them. Simply have them write a function that will print one character for each call, and calls itself recursively to write the next (or previous) one. FP fans, skip the fact that printing stuff to the output stream is a side effect for now... Let's not get too annoying on the FP-front. (But if you use a language with list support, feel free to concatenate to a list at each iteration and just print the final result. But usually I start them with C, which is unfortunately not the best for this sort of problem and concept). Exponentiation The exponentiation problem is slightly more difficult (at this stage of learning). Obviously the concept is exactly the same as for a factorial and there is no added complexity... except that you have multiple parameters. And that is usually enough to confuse people and throw them off at the beginning.
In its simple form, exponentiation can be expressed by the recurrence: x^0 = 1, and x^n = x * x^(n-1) for n > 0. Harder Once these simple problems have been shown AND re-implemented in tutorials, you can give slightly more difficult (but very classic) exercises: The Fibonacci numbers, The Greatest Common Divisor, The 8-Queens problem, The Towers of Hanoi game, And if you have a graphical environment (or can provide code stubs for it or for a terminal output or they can manage that already), things like: Koch's Snowflake Fractal, Sierpinski's Triangle. And for practical examples, consider writing: a tree traversal algorithm, a simple mathematical expression parser, a minesweeper game. Note: Again, some of these really aren't any harder... They just approach the problem from exactly the same angle, or a slightly different one. But practice makes perfect. Helpers A Reference Some reading never hurts. Well it will at first, and they'll feel even more lost. It's the sort of thing that grows on you and that sits in the back of your head until one day you realize that you finally get it. And then you think back on the stuff you read. The recursion, recursion in Computer Science and recurrence relation pages on Wikipedia would do for now. Level/Depth Assuming your students do not have much coding experience, provide code stubs. After the first attempts, give them a printing function that can display the recursion level. Printing the numerical value of the level helps. The Stack-as-Drawers Diagram Indenting a printed result (or the level's output) helps as well, as it gives another visual representation of what your program is doing, opening and closing stack contexts like drawers, or folders in a file system explorer. Recursive Acronyms If your student is already a bit versed in computer culture, they might already use some projects/software with names using recursive acronyms. It's been a tradition going around for some time, especially in GNU projects. Some examples include: Recursive: GNU - "GNU's Not Unix" Nagios - "Nagios Ain't Gonna Insist On Sainthood" PHP - "PHP Hypertext Preprocessor" (and originally "Personal Home Page") Wine - "Wine Is Not an Emulator" Zile - "Zile Is Lossy Emacs" Mutually Recursive: HURD - "HIRD of Unix-Replacing Daemons" (where HIRD is "HURD of Interfaces representing Depth") Have them try to come up with their own. Similarly, there are many occurrences of recursive humor, like Google's recursive search correction. For more information on recursion, read this answer. Pitfalls and Further Learning Some issues that people usually struggle with and for which you need to know answers. Why, oh God Why??? Why would you do that? A good but non-obvious reason is that it is often simpler to express a problem that way. A not-so-good but obvious reason is that it often takes less typing (don't make them feel soooo l33t for just using recursion though...). Some problems are definitely easier to solve when using a recursive approach. Typically, any problem you can solve using a Divide and Conquer paradigm will fit a multi-branched recursion algorithm. What's N again?? Why is my n (or whatever your variable's name is) different every time? Beginners usually have a problem understanding what a variable and a parameter are, and how two things named n in your program can have different values. So now if this value is in a control loop or recursion, that's even worse! Be nice and do not use the same variable names everywhere, and make it clear that parameters are just variables.
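One sketch that tends to settle both the level-printing helper and the "What's N again??" confusion at once is something along these lines (illustrative only, not from the original answer - the indentation trick and the extra depth parameter are just one possible way to do it):

```cpp
#include <cstdio>

// Recursive exponentiation, with the current recursion depth printed and
// indented so students can see that every call has its own copy of n.
long long power(long long x, unsigned int n, int depth = 0)
{
    std::printf("%*slevel %d: x = %lld, n = %u\n", depth * 2, "", depth, x, n);
    if (n == 0)                              // boundary: x^0 == 1
        return 1;
    return x * power(x, n - 1, depth + 1);   // x^n = x * x^(n-1)
}

int main()
{
    std::printf("2^4 = %lld\n", power(2, 4));   // prints 16 after the trace
    return 0;
}
```

Seeing a different n on every indented line usually answers the question faster than any abstract explanation of parameters.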
End Conditions How do I determine my end condition? That's easy, just have them say the steps out loud. For instance for the factorial, start from 5, then 4, then ... until 0. The Devil is in the Details Do not talk too early about things like tail call optimization. I know, I know, TCO is nice, but they don't care at first. Give them some time to wrap their heads around the process in a way that works for them. Feel free to shatter their world again later on, but give them a break. Similarly, don't talk straight from the first lecture about the call stack and its memory consumption and ... well... the stack overflow. I often tutor students privately who show me lectures where they have 50 slides about everything there's to know about recursion when they can barely write a loop correctly at this stage. That's a good example of how a reference will help later but right now just confuses you deeply. But please, in due time, make it clear that there are reasons to go the iterative or recursive route. Mutual Recursion We've seen that functions can be recursive, and even that they can have multiple call points (8-queens, Hanoi, Fibonacci or even an exploration algorithm for a minesweeper). But what about mutually recursive calls? Start with maths here as well. f(x) = g(x) + h(x) where g(x) = f(x) + l(x) and h and l just do stuff. Starting with just mathematical series makes it easier to write and implement as the contract is clearly defined by the expressions. For instance, the Hofstadter Female and Male Sequences: F(0) = 1, M(0) = 0, and for n > 0, F(n) = n - M(F(n-1)) and M(n) = n - F(M(n-1)). However, in terms of code, it is to be noted that the implementation of a mutually recursive solution often leads to code duplication and should rather be streamlined into a single recursive form (see Peter Norvig's Solving Every Sudoku Puzzle). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25052",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/963/"
]
} |
25,056 | I'm interested in better learning functional programming. To do so, it seems obvious that I should force myself to use the purest possible functional programming language. Hence, I'm here asking, more or less, for an ordering of functional programming languages according to their purity. It seems to me that it would be more practical to learn Lisp or Clojure (or Scheme, or Scala, etc.), but from what I've heard recently, Haskell would be very hard to beat at teaching functional programming principles to someone. I'm not sure about this yet, so I'm asking you: which is the purest functional programming language? An ordering would be great if several are competing for the magnificent title of the purest functional programming language. | There's no scale for assessing the degree of purity of functional languages. If the language allows side effects it's impure, otherwise it's pure. By this definition, Haskell, Mercury, Clean, etc. are pure functional languages, whereas Scala, Clojure, F#, OCaml, etc. are impure ones. EDIT: Maybe I should have phrased this as "if the language doesn't allow side effects without letting the type system know, it's pure; otherwise it's impure". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25056",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6742/"
]
} |
25,063 | Recently I was interviewed by a company and faced one question. The interviewer asked me something that I didn't know the answer to at the time, but if I had been asked it just 4 months earlier, I could have answered it. The question was about a new language that I had learned just 4 months ago, but I had only gotten an overview of the language and had just started working with it. Whenever I face difficulty, I google it. That means we do not have to memorize the whole programming language book! So in that situation I felt that Google cost me the job! Not talking subjectively: is it good to google all the time? | Well, ... human memory and instincts work faster than Google. But then again, we cannot commit everything to memory (the human brain just doesn't work that way - you keep at the front of your mind only the things you need momentarily). Google however is great as a replacement for paper books, manuals, everything really. And it makes searching those (like you would in the ol' days) a breeze. So yes, it is good. But one must remember, it is just a tool - like any other. No good programmer relies on it to just pop up solutions, so he can go like that from day to day. After all, what's on Google has already been done - and understanding of the matter leads to doing stuff that hasn't been done yet. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25063",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9421/"
]
} |
25,131 | So far I heard about : Lambda calculus Lambda programming Lambda expressions Lambda functions Which all seems to be related to functional programming... Apparently it will be integrated into C++1x, so I might better understand it now: http://en.wikipedia.org/wiki/C%2B%2B0x#Lambda_functions_and_expressions Can somebody defines briefly what are lambdas things and give an where it can be useful ? | Lambda calculus The lambda calculus is a computation model invented by Alonzo Church in the 30s. The syntax and semantics of most functional programming languages are directly or indirectly inspired by the lambda calculus. The lambda calculus in its most basic form has two operations: Abstraction (creating an (anonymous) function) and application (apply a function). Abstraction is performed using the λ operator, giving the lambda calculus its name. Lambda expressions Lambda functions Anonymous functions are often called "lambdas", "lambda functions" or "lambda expressions" because, as I said above, λ was the symbol to create anonymous functions in the lambda calculus (and the word lambda is used to create anonymous functions in many lisp-based languages for the same reason). Lambda programming This is not a commonly used term, but I assume it means programming using anonymous functions or programming using higher-order functions. A bit more information about lambdas in C++0x, their motivation and how they relate to function pointers (a lot of this is probably a repeat of what you already know, but I hope it helps explain the motivation of lambdas and how they differ from function pointers): Function pointers, which already existed in C, are quite useful to e.g. pass a comparison function to a sorting function. However there are limits to their usefulness: For example if you want to sort a vector of vectors by the i th element of each vector (where i is a run-time parameter), you can't solve this with a function pointer. A function that compares two vectors by their i th element, would need to take three arguments ( i and the two vectors), but the sorting function would need a function taking two arguments. What we'd need is a way to somehow supply the argument i to the function before passing it to the sorting function, but we can't do this with plain C functions. To solve this, C++ introduced the concept of "function objects" or "functors". A functor is basically an object which has an operator() method. Now we can define a class CompareByIthElement , which takes the argument i as a constructor argument and then takes the two vectors to be compared as arguments to the operator() method. To sort a vector of vectors by the i th element we can now create a CompareByIthElement object with i as an argument and then pass that object to the sorting function. Since function objects are just objects and not technically functions (even though they are meant to behave like them), you can't make a function pointer point to a function object (you can of course have a pointer to a function object, but it would have a type like CompareByIthElement* and thus not be a function pointer). Most functions in the C++ standard library which take functions as arguments are defined using templates so that they work with function pointers as well as function objects. Now to lambdas: Defining a whole class to compare by the i th element is a bit verbose if you're only ever going to use it once to sort a vector. 
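To make that verbosity gap concrete, here is a rough sketch (not part of the original answer) using the final C++11 form of the C++0x lambdas discussed here; CompareByIthElement is the class named above, and sort_by_ith is just an illustrative wrapper:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hand-written function object: compares two rows by their i-th element.
struct CompareByIthElement
{
    explicit CompareByIthElement(std::size_t i) : i_(i) {}
    bool operator()(const std::vector<int>& a, const std::vector<int>& b) const
    {
        return a[i_] < b[i_];
    }
    std::size_t i_;
};

void sort_by_ith(std::vector<std::vector<int>>& rows, std::size_t i)
{
    // Functor version: the whole class above exists only for this one call.
    std::sort(rows.begin(), rows.end(), CompareByIthElement(i));

    // Lambda version: the capture list [i] plays the role of the constructor
    // argument, and the compiler generates the equivalent class for us.
    std::sort(rows.begin(), rows.end(),
              [i](const std::vector<int>& a, const std::vector<int>& b) {
                  return a[i] < b[i];
              });
}
```

The lambda is exactly the kind of compiler-generated function object described below, minus the naming and boilerplate.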
Even in the case where you only need a function pointer, defining a named function is sub-optimal if it's only used once because a) it pollutes the namespace and b) the function is usually going to be very small and there isn't really a good reason to abstract the logic into its own function (other than that you can't have function pointers without defining a function). So to fix this lambdas were introduced. Lambdas are function objects, not function pointers. If you use a lambda literal like [x1, x2](y1,y2){bla} code is generated which basically does the following: Define a class which has two member variables ( x1 and x2 ) and an operator() with the arguments ( y1 and y2 ) and the body bla . Create an instance of the class, setting the member variables x1 and x2 to the values of the variables x1 and x2 currently in scope. So lambdas behave like function objects, except that you can't access the class that's generated to implement a lambda in any way other than using the lambda. Consequently any function that accepts functors as arguments (basically meaning any non-C function in the standard library), will accept lambdas, but any function only accepting function pointers will not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25131",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5024/"
]
} |
25,135 | If a colleague on your team is planning to quit his job and this may cause bad feelings in your company, would you notify your boss? | I probably would not, because 1) I assume it is not your job to manage people (since you refer to your boss), 2) this is one of the best ways to undermine the trust between yourself and your other colleagues. If you're concerned about the company, go to your colleague, express your concerns (DO NOT try to talk him out of quitting -- it is his life), and politely ask him to notify the boss in due time. [re. 1): i tend to keep to myself stuff that I learn accidentally, but which is basically none of my business] | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25135",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5934/"
]
} |
25,139 | Do you think being self-educated in software development is good? Please give an example of what you have learnt successfully by yourself. | Self-education is not just good , but essential if you want to be an above-average developer. The only person responsible for your professional progress is you . Sure, formal education, training courses, etc. can help, but at the end of the day, it is your career. I'm fortunate enough to have benefitted from a very good education, and I have had good employers who have supported my learning in all sorts of different ways. However, the vast majority of what I have learned about programming I have picked up myself - by reading lots and practicing more. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25139",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9437/"
]
} |
25,154 | I just read one of Joel's articles in which he says: In general, I have to admit that I’m a little bit scared of language features that hide things . When you see the code i = j * 5; … in C you know, at least, that j is being multiplied by five and the results stored in i. But if you see that same snippet of code in C++, you don’t know anything. Nothing. The only way to know what’s really happening in C++ is to find out what types i and j are, something which might be declared somewhere altogether else. That’s because j might be of a type that has operator* overloaded and it does something terribly witty when you try to multiply it. (Emphasis mine.) Scared of language features that hide things? How can you be scared of that? Isn't hiding things (also known as abstraction ) one of the key ideas of object-oriented programming? Everytime you call a method a.foo(b) , you don't have any idea what that might do. You have to find out what types a and b are, something which might be declared somewhere altogether else. So should we do away with object-oriented programming, because it hides too much things from the programmer? And how is j * 5 any different from j.multiply(5) , which you might have to write in a language that does not support operator overloading? Again, you would have to find out the type of j and peek inside the multiply method, because lo and behold, j might be of a type that has a multiply method that does something terribly witty. "Muahaha, I'm an evil programmer that names a method multiply , but what it actually does is totally obscure and non-intuitive and has absolutely nothing to do whatsoever with multiplying things." Is that a scenario we must take into consideration when designing a programming language? Then we have to abandon identifiers from programming languages on the grounds that they might be misleading! If you want to know what a method does, you can either glance at the documentation or peek inside the implementation. Operator overloading is just syntactic sugar, and I don't see how it changes the game at all. Please enlighten me. | Abstraction 'hides' code so you don't have to be concerned about the inner workings and often so you can't change them, but the intention was not to prevent you from looking at it. We just make assumptions about operators and like Joel said, it could be anywhere. Having a programming feature requiring all overloaded operators to be established in a specific location may help to find it, but I'm not sure it makes using it any easier. I don't see making * do something that doesn't closely resemble multiplication any better than a function called Get_Some_Data that deletes data. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25154",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3684/"
]
} |
25,276 | I've noticed a lot of questions lately relating to different abstraction techniques, and answers saying basically that the techniques in question are "too clever." I would think that part of our jobs as programmers is to determine the best solutions to the problems we are given to solve, and cleverness is helpful in doing that. So my question is: are the people who think certain abstraction techniques are too clever opposed to cleverness per se , or is there some other reason for the objection? EDIT: This parser combinator is an example of what I would consider to be clever code. I downloaded this and looked it over for about half an hour. Then I stepped through the macro expansion on paper and saw the light. Now that I understand it, it seems much more elegant than the Haskell parser combinator. | Simple solutions are better for long-term maintenance. And it's not always just about language familiarity. A complex line (or lines) takes time to figure out even if you're an expert in the given language. You open up a file and start reading: "ok, simple, simple, got it, yep, WTF?!" Your brain comes to a screeching halt and you now have to stop and decipher a complicated line. Unless there was a measurable, concrete reason for that implementation, it is "too clever". Figuring out what's going on gets progressively harder as complexity grows from a clever method to a clever class to a clever pattern. Aside from well-known approaches, you have to figure out the thought process that went into creating a "clever" solution, which can be quite difficult. That said, I hate avoiding a pattern (when its use is justified) just because someone might not understand it. It's up to us as developers to keep learning and if we don't understand something, it's a reason to learn it, not to avoid it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25276",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5621/"
]
} |
25,416 | The question might not be as simple as it sounds, as we have struggled with this a bit.
If there are 5 separate bugs, which can be taken care of with a single fix, then it is wasteful to take this approach. One bug might slip through the cracks, and would take away from a complete picture, or just be hidden for some time (we have several years worth of outstanding bugs, and our bug tracker sucks at searching :( ). Now, if you file a bug of 100+ parts (such as help tooltips are missing from all 234 dialogs), then it should be treated as a project and be broken down. However, if it is a bug of 3-4 parts, then the developer would attempt to fix all 3 or 4 at once. This is where it gets interesting. What if the coder only fixes 3 or 3.5 out of 4? Should after testing a new bug be filed and old one closed? If yes, then this invites sloppy programming practices, where close enough is good enough. If the bug is to fail, then all of it has to be re-done and re-tested. Now ... what if part 1 is the size of a breadbox in terms of size and risk, part 2 is more like a car, part 3 is like a house, and so on :) When sizing and prioritizing this bug (I must mention that we use Scrum), the house part was not noticed - who reads anything but a bug title anyway? So, it was deemed a low hanging fruit - low effort, low risk, happy user = high reward. But, we got bitten with one of them recently. What seemed to be logically the same area, was actually code in transitional state, where we are trying to deprecate one method of doing things, or one library for creating widgets with another, new and better. The problem is that we had to release mid-conversion, so we made two very different beasts look alike. Our QA folks did not know that, are not expected to know that, and so in their mind these are all the same issues. We need to be QA-friendly - not to push back too hard or too often, but also try to give a set of heuristics which would help to decide whether to file one bug or many. I suppose the problem might lie with weak tools, where splitting and merging bugs is hard to do. However, how do you deal with this stuff in general? P.S. In Scrum - once you committed to fixing something, it probably should be fixed. Break this rule too many times and discipline will degrade. | 5 bugs = 5 bug reports; the fact that they can all be fixed at once is not QA's concern. in other words, guessing that all 5 bugs stem from the same problem is putting the cart ahead of the horse. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25416",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/934/"
]
} |
25,432 | I'm working at my first programming job. My boss is a very smart software engineer, and I feel
like I have very little to offer compared to him. Problem is, he is always busy, and needs someone to help him out. I feel like I'm not good enough, but I still want to succeed. I want to be a great programmer. What can I do to impress him? Thank you. | Did I ever tell you about Ashton? Ashton was your classic corn-fed farm boy. His parents had been hippies who never really managed to get their acts together until his mother inherited 15 acres in a rural part of Michigan. The family moved out there, bought a couple of dairy goats, and struggled to make a living selling organic goat cheese to the yuppies at the Ann Arbor Farmer’s Market. From the time he was ten years old, Ashton had to wake up every morning at 4:00 a.m. and milk those damn goats, and it was exhausting. Ashton loved going to school because it meant he wasn’t working knee-deep in goat poop. Throughout high school, he studied his ass off, hoping that a scholarship to a good university would be his ticket out of the farm. He found college to be so much easier than farm life that he didn’t understand why everyone else didn’t get straight A’s like him. He majored in Software Engineering because he couldn’t imagine engineers ever being required to wake up at 4:00 a.m. Ashton graduated from school without knowing much about the software industry, really, so he went to the career fair, applied for three jobs, got accepted by all three, and picked the one that paid the most: something insane like $32,000 a year, working at a big furniture company in the southwestern part of the state that manufactured cubicle farms for corporations all over the world. He never wanted to see a farm again, so he was determined to make a good impression on his boss, Charlie Sherman. “That’s not going to be easy,” his cubicle-mate, Jeff, said. “She’s something of a legend here.” “What do you mean?” he asked. “Well, you remember a few years ago, when there was all that uproar about Y2K?” Ashton was probably too young. “Y2K?” “You know, nobody expected that all the old computer programs written in the 1960s would still be running in 2000, so they only had room for two digits for the year. Instead of storing 1999, they would store 99. And then when the year flipped over on January 1st, 2000, the computer systems crashed, because they tried to fit “100” in two digits. “Really? I thought that was a myth,” Ashton said. “At every other company in the world, nothing happened,” Jeff said. “They spent billions of dollars checking every line of code. But here, of course, they’re cheap bastards, so they didn’t bother doing any testing.” “Not at all?” “Zilch. Zero testing. Nada. And lo and behold, when people staggered back into work on January 2nd, not a single thing worked. They couldn’t print production schedules. They couldn’t get half of the assembly lines to even turn on. And nobody knew what shifts they were supposed to be working. The factory literally came to a standstill.” “You’re kidding,” Ashton said. “I shit you not. The factory was totally silent. Now, Charlie, she was new then. She had been working at Microsoft, or NASA, or something... nobody could figure out why someone like her would be working in our little armpit of a company. But she sat down, and she started coding. And coding. And coding. “Charlie coded for nine days straight. Nine days without sleeping, without eating, some people even claimed she never went to the bathroom. She went from system to system and literally fixed all of them. It was something to behold. 
My God, there were COBOL systems in there that needed to be fixed. The whole factory at a standstill, and Charlie is sending people to the university library in Ann Arbor to find old COBOL manuals. Assembly-line workers are standing around shivering, because even the thermostats had a Y2K bug. And Charlie is drinking cup after cup of coffee and typing like a madwoman.” “Wow. And she never went to the bathroom?” “Well, that part might be a little bit of an exaggeration. But she really did work 24 hours for nine days straight. Anyway, on January 11th, about five minutes before the day shift is supposed to start, she comes out of her cubicle, goes to the line printer, hits a button, and boom! out comes the production schedules, and the team schedules, and everything is perfect, perfectly formatted, using a slightly smaller font so that the “2000” fits where it used to say “99,” and she’s even written a new priority optimizing system that helps them catch up with 9 days of missed production without pissing off too many customers, and all the assembly lines start running like nothing was ever wrong, and the heat comes on, and the invoices come out printed with ‘2000’ as the year instead of ‘19100,’ and after that day, nobody found a single bug.” “Oh come on!” Ashton says. “Nobody writes code without bugs.” “She did. I saw it with my own eyes. The first day back they ran two days worth of cubicles without a hiccup.” Ashton was dumbstruck. “That’s epic. How can I live up to that?” “You can’t, buddy, nobody can,” Jeff said, turning back to his computer terminal, where he resumed an online flame war over who would win in a fight, Spock or Batman, which had been raging for over four months. Not one to give up, Ashton swore he would, one day, do something legendary. But the truth is, there never was another Y2K. And nobody, in that part of Michigan, gave a rat’s ass about good programming. There was almost nothing for the programmers to do, in fact. Ashton got dumb little projects assigned to him... at one point he spent three weeks working on handling a case where the sales tax in one particular county was wrong because some zip code spanned two different sales tax zones. The funny thing was, it was in some unpopulated part of New York State where nobody ever bought office cubicles, and they had never had a customer there, so his code would never run. Ever. For two years Ashton came into work enthusiastic and excited, and dying to make a difference and do something terrific and awesome, while his coworkers surfed the Internet, sent instant messages to their friends, and played computer solitaire for hours. Jeff, his cubicle-mate, only had one responsibility: updating the weekly Excel spreadsheet indicating how many people were hurt on the job that week. Nobody ever was. Once a week, Jeff opened the spreadsheet, went to the bottom of the page, entered the date and a zero, hit save, and that was that. Ashton even wrote a macro for Jeff that automated that one task. Jeff didn’t want to get caught, so he refused to install it. They weren’t on speaking terms after that. It was awkward. On the morning of his two year anniversary at the cubicle company, Ashton was driving to work when he realized something. Not one line of code that he had written had ever run. Not one thing he had done in two years of work made any impact on the world. 
And it was fucking 24 degrees in that part of Michigan, and it was gray, and smelly, and his Honda was a piece of crap, and he didn’t have any friends in town, and nothing he did mattered. As he drove down Lincoln Avenue, he saw the furniture company ahead on the left. Three flags fluttered in front of the corporate campus: an American flag, a flag of the great state of Michigan, and a white and red flag with the company logo. He got in the turning lane behind a long line of cars waiting to turn left. It always took four or five traffic light cycles, at rush hour, to make the turn, so Ashton had plenty of time to try to remember if any code he had ever written was ever used by anyone . And it hadn’t. And he fought back a tear. And instead of turning left, he went straight, almost causing an accident because he forgot that the left turn light didn’t mean you could go straight. And he drove right down Lincoln Avenue, and got onto the Gerald Ford freeway, and he just kept driving until he got to the airport over in Grand Rapids, and he left his crappy old Honda out right in front of the terminal, knowing perfectly well it would be towed, and didn’t even close the car door, and he walked right up to the Frontier Airlines counter and he bought himself a ticket on the very next flight to San Francisco, which was leaving in 20 minutes, and he got on the plane, and he left Michigan forever. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25432",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9520/"
]
} |
25,487 | I always hesitate when talking to professors about trying to improve the percentage of people who graduate with a CS type degree compared to the number that start out thinking that is what they want. On one hand I really do think it is important for professionals to be involved and give this feedback, on the other hand it would be better if less sub-par students ended up with CS degrees. I don't think every mind is built for this field and you have to be a good life long student. You have to have a high degree of patience and problem solving skills just to eek by. If you do have the "right" kind of brain, those hard problems are what drives you to continue. If you just get a long list of easy problems you get bored so these people are actually not good at more repetitive jobs. I don't need to go into all the details... if you are reading this you probably know what I'm getting at. So the question is: How do you find the balance of a degree program that is accessible to enough people to be funded and considered successful but also doesn't turn out people who aren't really cut out for the job? Maybe a better question is, what metric do you use to know if the changes you are making in a degree program are making it better? I don't know that a higher graduation rate is a good metric. And it seems that the feedback that you could try to capture many years later about the jobs that the graduates hold would be too far delayed. I've struggled with this question for a long time, mostly because I don't think there is an answer. But I thought I'd ask to see if anyone knows of any research that has actually been done on it. Addition: I recently had a very wise professor remind me that not everyone who graduates with a CS degree even wants to be a full-time programmer once they have actually discovered what that means. But, with the education that they received they could possibly make great Project Managers, Managers, system admins, etc. I think this was a very good point that I hadn't thought to consider here. There are a very high percentage of people who don't end up working in the field they majored in, CS isn't an exception to that. Having the extra folks helps not only in budget for the degree but also to expand the percentage of non-programmers who still know enough about it to work with programmers. | Ok, by popular demand ... Let the free market figure it out. You know, 95% of psychology majors end up doing something else. Not everyone with a CS degree/minor ends up programming, but they make better managers, analysts, project managers than those without. Do not carry the weight of the world on your shoulders. CS degree is just a piece of paper. Those with math, physics, chemistry, biology degrees go on to become programmers, and not everyone with a CS degree becomes a programmer. Without millions of kids aspiring to be the best baseball player, we would not have such great stars. The system is self-regulating. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25487",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3697/"
]
} |
25,564 | Possible Duplicate: Whats the difference between Entry Level/Jr/Sr developers? I'm curious what senior developer means because apparently the definition doesn't mean what I thought it would. I keep seeing these teens at 22-23 years old who call themselves senior X developer or senior Y developer. To me, a senior must have 10 years or so experience in programming to call himself 'senior'. I've seen a lot of these teens here (hence the question). Am I wrong? Why? | You can call yourself a Senior when: You can handle the entire software development life cycle, end to end You lead others, or others look to you for guidance. You can self manage your projects Software development is a curious creature unlike other fields. Sometimes, a fresh punk out of college can run circles around veterans who have 20+ years of "experience". Programming is a bizarre world where code is king . Some achieve the above in 2 years or less, others take 10 years. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25564",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9590/"
]
} |
25,836 | If you could ask a C++ programmer one question to measure their C++ skills, what would it be? The question I think is best is: Can you call "delete this;" inside a member function? (I put this as a link so you can think it through first, then go to The Best C++ Interview Question – Ever! to see the correct answer.) I don't ask this because I expect most people to know the answer. If they did it would not be that useful a question. I ask to see if they can work their way to the correct answer and how they do so. | The best C++ interview question would be a programming problem, not a quiz question. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25836",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9823/"
]
} |
25,888 | I thought I did okay in the interviews but apparently the interviewers didn't think so. Is it appropriate to ask for a reason after I have received the rejection email? After all I don't want to annoy the HR person. I am a student, so not so much job hunting experience so far. Bear with me if this question sounds stupid. | If you asked it like you just asked us, you'll have a much better shot at getting a useful answer. Emphasize that you are new to the workforce and really value the time the interviewer spent considering you. Ask what you lacked, perhaps that you could have predicted coming into the interview, that affected your consideration and eventual rejection. Most importantly, don't be defensive. You understand that the interviewer made a choice that was right by their own ends and you aren't trying change their minds. It's been my experience that about a third of interviewers, just give a cagey vibe once they've decided to reject you (or even sooner), and don't really answer any questions at all past that point. Another third are a bit more open, and will answer some questions, but not others, especially about matters that might reveal more than really necessary about their hiring process. the remaining third stay friendly and open for as long as you do the same, and are happy to help you in this way. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25888",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9026/"
]
} |
25,963 | Is there any hard data (studies, comparisons, not-just-gut-feel analysis) on the advantages and disadvantages of working from home? My devs asked about e.g. working from home one day per week; the boss doesn't like it for various reasons, some of which I agree with but I think they don't necessarily apply in this case. We have real offices (2..3 people each), distractions are still common. IMO it would be beneficial for focus, and with 1 day / week, there wouldn't be much loss in interaction and communication. In addition it would be a great perk, and save the commute. Related: Pros and cons of working remotely / from home (interesting points, but no hard facts) To clarify: it's not my decision to make, I agree that there are pros and cons depending on circumstances, and we are pushing for "just try it". I've asked this specific question because (a) facts are a good addition to thoughts in arguing with an engineer boss, and (b) we, as developers, should build upon facts like every respectable trade. | Research Material There are a few, but not all are current or apply exclusively to our field: Work-at-home and the quality of working life Substitution between working at home and out-of-home: The role of ICT and commuting costs The perils of working at home: IRB “mission creep” as context and content for an ethnography of disciplinary knowledges Working at home: An update Satisfaction and perceived productivity when professionals work from home Factors that Influence the Productivity of Software Developers in a Developer View Restructuring workplace cultures: the ultimate work-family challenge? NOTE: Some of the articles appeared in journals and aren't free. Update - Personal Experience I've now been doing this for about 4 months at the time of this update (2012-02-146), and here's a simple pro-con list: Pros time-table flexibility to pick the kids up from school, to take care of family emergencies, to walk to the kitchen to grab coffee productivity can sometimes be increased less disruption from co-workers or open-space annoyances own set of software and tools, workstation tweaking (as long as you don't indulge in it too much) savings on travel, though be careful that if you still need to travel occasionally, you factor that in and don't end up paying for more expensive travel cards in your area for one-off journeys, or other similar issues healthier eating (depends on the workplace and your cooking skills...) if your IT requirements permit, you can check in on work anytime you want, as you're always at your office Cons managing the time table is tricky keeping relatives away is harder; they don't (and may take a very long time to, by the look of it) understand that boundaries are needed and need to be respected, whether it's your children and partner barging in to ask for help with something or to discuss something important happening in their lives, which you could ignore at the office if you're in a meeting. Here, they barge into the meeting...
or your mother calling in the middle of the day "since you're working from home", or the unexpected (like paramedics calling me one day to check on an old lady in my building, which I had just moved into, because they didn't get her phone number right and were trying to reach someone in the building by using the white pages: you just can't make that up) productivity can also take a hit under some circumstances connectivity drops are a PITA if your IT department is a bit "annoying", you may end up with no local software kit, no decent hardware, and not even VPN access, but just an RDP gateway to your old workstation back at the office (this purely sucks, be warned) communication is more difficult, though possible: face time is harder to arrange; your colleagues skype- or phone-screen you on occasion, and so do you the other way around; the coolest and most modern gadgets and virtual office tools won't match the good ol' back and forth during a brainstorming session with a whiteboard, colored markers and a handful of sticky notes crappier eating: you can tend to fall into a cycle of eating snacks and things you have readily available in your kitchen (and then end up spending more than you would at an office, where you might focus more) you develop a tendency to check in out of office hours, which may not always be healthy (for your work habits, and for your family time) A lot of these are obviously linked: if you get into a non-productive cycle and take some time to snap out of it, you might be tempted to eat junk snacks and all that. There are also some variables that are neither pros nor cons, but will affect the experience: is your boss the more "give-you-some-leash" type or the "whatchya-been-doing-these-past-5-minutes" type? It's understandable, as you might (and I would assume you would) occasionally slack off, and it might actually keep you on your toes; but it also gets you down, disrupts your concentration, and eats time for nothing if you were actually working. Is your home more likely than your office to have environmental annoyances (for instance, I had roadworks for 2 weeks outside my window at home... but I had a pre-school under my window at the office)? Overall I am happy with the experience and have been trying to refine my process to work at home to the fullest extent of my productivity, but it takes some discipline at first and then whenever life gets you a bit down (I find that it's harder to shake than in an office). If I had the choice though, I'd much prefer to still work at the office with colleagues, but from the experience I'd say I wouldn't mind having subordinates requesting to telecommute (at least for a try-out). I could go on longer, but this isn't hard data, just personal feedback as originally promised. Update 2 - More Personal (Bad) Experience It's been longer now, and I have to say I've lost momentum on a few things and let myself get overworked for a given period and... it took me nearly 2 months to snap out of a near-depressive and vegetative state. Which, granted, is what I expected to happen eventually and why I didn't really want to work from home in the first place, as the environment is more prone to this type of thing and makes it harder to kick the burn-out feeling than if you're at the office with your peers. It's also very frustrating when you know exactly how to snap out of it (it's all detailed up there: follow the dots and things will be fine, but actually doing it takes some willpower, sometimes...) but just can't get yourself to actually do it.
If it does happen to you, grab a friend or co-worker and have them look over your shoulder every once in a while and get people to request more frequent status updates from you (not too many). Grab people that you know won't be (too) judgmental and won't make it a hassle for you, so that you have a motivator and a need to keep things going. Do force yourself to plan and time box your daily duties as much as possible. It really got pretty bad for me at some point, as I had a lengthy period of professional overwork and a crazy load of these fun things life can throw at you. Still not saying working from home is necessarily bad, but it does have its cons, and getting into this state is already bad enough, so you better have an environment that helps you shake it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25963",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7313/"
]
} |
25,969 | Should I bother to develop for JavaScript disabled? I feel that my time is better spent developing for the majority. | There's a web-design philosophy known as Progressive Enhancement which is one you should consider. The idea is you build a basic site that is usable and workable, and then you layer onto this enhancements like jQuery and browser-specific stuff to "enhance" it. This way you get a site that works for everybody and looks nice for the majority. If that doesn't convince you, then consider other reasons for having a site work without javascript: It is more SEO friendly. If your site relies on JS for content and links then chances are search-engines will be ignoring large chunks of it. Imagine you are an eCommerce site selling widgets. Now, even if only 5% of your customers disable javascript, that is a potential sales loss of 5%. Is it worth losing customers over? Don't discriminate against the disabled. Relying on javascript means your site is not accessible and, in some cases (such as government/public sector sites), you could be breaking the law by discriminating against people. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/25969",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/526/"
]
} |
26,179 | I asked a question yesterday, Should I Bother to Develop For JavaScript Disabled? I think the consensus is: Yes, I should develop for JavaScript Disabled. Now I just want to understand why users disable JS. It seems many developers (I guess the people who answered the question are developers) disable JS. Why is that? Why do users disable JS? For security? Speed? Or what? | One disables JavaScript in a browser environment because of the following considerations: Speed & Bandwidth Usability & Accessibility Platform Support Security Speed & Bandwidth A lot of applications use way too much JavaScript for their own good... Do you need parts of your interface to be refreshed by AJAX calls all the time? Maybe your interface feels great and fast when used with a broadband connection, but when you have to downgrade to slower connection speeds, a more streamlined interface is preferred. And switching off JavaScript is a good way of preventing dumb-struck web-apps from refreshing the world every 15 seconds or so for no good reason. (Ever looked at the amount of data Facebook sends through? It's scary. It's not only a JS-related issue though, but it's part of it). We also tend to off-load more and more of the processing to the client, and if you use minimalistic (or just outdated) hardware, it's painfully slow. Usability & Accessibility Not all user interfaces should be expressed in a dynamic fashion, and server-generated content
might be perfectly acceptable in many cases. Plus, some people simply don't want this type of interfaces. You cannot please everybody, but sometimes you have the chance to and the duty to satisfy all your users alike. Finally, some users have disabilities, and thou shalt not ignore them , ever!!! The worst-case scenarios here, in my opinion, are government websites that try to "modernize" their UIs to appear more friendly to the public, but end up leaving behind a big chunk of their intended audience. Similarly, it's a pity when a university student cannot access his course's content: because he/she is blind and his screen-reader doesn't support the site, or because the site is so heavy and requires ad-hoc modern plug-ins that he/she doesn't get to install on that refurbished laptop bought on e-bay 2 years ago, or again because he/she goes back home to another country for the spring break and the local bandwidth constraints cannot cope with the payload of the site. Not everybody lives in a perfect world. Platform Support This point relates to the 2 previous ones and tends to be less relevant nowadays, as browsers embed JavaScript engines that are a level of magnitude more efficient than they used to be, and this keeps getting better. However, there's no guarantee that all your users have the privilege of using modern browsers (either because of corporate constraints - which force us to support antediluvian browsers for no good reason, really - or other reasons which may or may not be valid). As mentioned by "Matthieu M." in the comments, you need to remember that a lot of people still use lower-quality hardware, and that not everybody uses the latest and coolest smartphone. As of today, there are still a significant portion of people using phones that have embedded browsers with limited support. But, as I mentioned, things do get better in this area. But then you still need to remember the previous points about bandwidth limitations if you keep polling very regularly (or your users will enjoy a nice phone bill). It's all very inter-related. Security While obviously you could think that nothing particularly dangerous can be done with JavaScript considering it runs in a browser environment, this is totally untrue. You do realize that when you visit P.SE and SO you are automatically logged in if you were logged on any other network, right? There's some JS in there. That bit is still harmless though, but it uses some concepts that can be exploited by some malevolent sites. It is completely possible for a website to use JavaScript to gather information about some things you do (or did) during your browsing session (or the past ones if you don't clear your session data every time you exit your browser or run the now common incognito/private browsing modes extensively) and then just upload them to a server. Recent vulnerabilities (working in major browsers at the time) included the ability to gather your saved input forms data (by trying out combinations for you on a malevolent page and recording the suggested texts for each possible starting letter combinations, possibly telling attackers who you are, where you work and live ) or to extract your browsing history and habits ( A very clever hack doing something as simple as injecting links into the page's DOM to match the color of the link and see if it's been visited . You just need to do this on a big enough table of known domain names. And your browser getting faster at processing JavaScript, this type of thing gets done quickly.) 
Plus let's not forget that if your browser's security model is flawed, or the websites you visit don't protect themselves well enough against XSS attacks, then one might use JavaScript to simply tap into your open sessions on remote websites. JavaScript is mostly harmless... if you use it for trusted websites. Gmail. Facebook (maybe... and not even...). Google Reader. StackExchange. But yeah sure, JavaScript cannot be that bad, right? And there are scarier things to fear online anyway. Like thinking you're anonymous when you really aren't that much, as shown by the Panopticlick experiment of the EFF. Which is also partly done using JavaScript. You can even read their reasons to disable JavaScript to avoid browser fingerprinting. All this being said, there might be perfectly good situations where you don't need to bother about supporting JavaScript-disabled clients. But if you offer a public-service website, do consider accepting both types of clients. Personally, I do think a lot of modern web-apps and websites would work just as well using the former server-generated content model with no JavaScript at all on the client side, and it would still be great and possibly a lot less resource-hungry. Your mileage may vary depending on your project. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/26179",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/526/"
]
} |
26,438 | Sometimes we have some business logic represented in the controller code of our applications. This is usually logic that differentiates what methods to call from the model and/or what arguments to pass them. Another example of this is a set of utility functions that exist in the controller that may work to format or sanitize data returned from the model, according to a set of business rules. This works, but I am wondering if its flirting with disaster. If there is business logic shared between controller and model the two layers are no longer separable, and someone inheriting the code may be confused by this unevenness in location of business logic related code. My question is how much business logic should be allowed in the controller and in what circumstances, if any? | Ideally none But that's not always possible. I can't give you hard numbers like 20% or 10 lines, that is subjective to the point it can't be answered. I can describe how I use design patterns and circumstances that necessitate bending them slightly. In my mind it's entirely up to the purpose of the application. Building a simple REST api to post to? Forget about clean separation or even a pattern. You can churn out a working version in under an hour. Building something bigger? Probably best to work on it. Building individually contained systems is the goal. If you start writing business logic that is specific to how two systems interact that is a problem. Without looking further into it I can't give an opinion. Design patterns are molds, some like to strictly adhere to them on the basis of principal and well-written code. Strictly adhering to a pattern probably won't give you bad code, but it could take more time and cause you to write much more code. Design patterns are flexible, adjust them to suit your needs. Bend them too much and they break though. Know what you need and pick a design pattern closest to that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/26438",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6518/"
]
} |
26,548 | If I release some code and binaries, but I don't include any license at all with it, what are the legal terms that apply by default (in the US, where I am). I know that I automatically have copyright without doing anything, but what restrictions are there on it? If I upload my code to github and announce it as a free download / contribute at will, then are people allowed to modify and close source my work? I haven't said that they cannot, as a GPL would, but I don't feel that it would by default be acceptable to steal my work either. So what can and cannot people do with code that is freely available, but has absolutely no licensing terms attached? | Without a license, companies and individuals may be reluctant to use your code, because you don't grant them specific rights to do so. Even when you put the code into the public domain, you are granting rights to use. So you might as well make a statement of acceptable use that is acceptable to you. Without such a statement or license, there is nothing preventing people from using your code in whatever way they see fit. There is, of course, nothing preventing bad people from violating your license, but most good people and companies will respect your terms if you tell them what those terms are. In short: you should have some form of license, even if that license grants unrestricted use. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/26548",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7974/"
]
} |
26,596 | I asked a question on lines of code per hour and got torn a new one. So my matured follow-up question is this: If not lines of code, then what is a good metric by which to measure (by the hour/day/unit-of-time) the effectiveness of remote programmers? | In 16 years I've never actually found a workable metric of the sort you're looking for. Essentially to be useful anything would need to be measurable, representative and ungameable (that is the system can't be played by clever developers). There are simply too many variables within software development to make it measurable as piece work in this way. The closest you get is progress against estimates - that is how many tasks are they completing within the agreed estimates. The trick here is (a) getting good and fair estimates and (b) understanding where estimates have been exceeded for good reasons for which the developer can not / should not be blamed (that is something was genuinely more complex than anticipated). Ultimately if you push developers too hard you're likely to find estimates gradually creeping up to a level where they're always met not because of increased productivity but because of padded timescales. Go the other way too much in terms of the estimates (reducing them to create pressure to deliver) and you create phoney deadlines which studies have shown don't increase productivity and are likely to have an impact on team morale (see Peopleware for more information). But essentially I wonder if you're looking at a slightly false problem. Why are remote programmers different to other programmers when it comes to measuring productivity? How do you measure the productivity of non-remote programmers? If it's about not trusting them to work remotely then that's basically a wider trust issue. If you don't trust them to work from home then you either need to establish that trust, not let them work from home, or find some way of satisfying yourself that they are indeed working when they're meant to be. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/26596",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10118/"
]
} |
26,771 | I want to go on vacation and not think about work at all but they want me to provide them a contact number in the event of an emergency (server goes down, web service malfunctions, etc). I am afraid that it will be abused (they will contact me before trying everything for example) but I also think that if I am on vacation I should not be bothered even if there is an outage. Does anyone have experience with situations like this? What's the tactful approach? Any creative solutions? | Hand them a phone number and a quadruple overtime rate. The goal here is not to make lots of money, the goal is to discourage needless annoyance. You're available, but only if you really need me . The database crashed, the users are compromised, you're getting multiple DDOS waves, that's when you pick up a phone. Can't find the username for some non-critical system? Don't bother . You have to draw a clear line on what is an emergency and what is not . Go over common cases and solutions prior to taking vacation. Make sure everyone is notified ahead of time so that they can run any critical systems / processes through you before you leave. Vacation is time off. Make sure it stays that way. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/26771",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/785/"
]
} |
26,775 | With so many ORM tools for most modern languages, is there still a use case for writing and executing SQL in a program, in a language/environment that supports them? If so why? For clarity: I'm not asking about if programmers need to know SQL, or if I should have a SQL tool on my desktop. I'm asking specifically why I might have it in code (or config, or whatever) as opposed to an ORM. | You're more comfortable writing SQL to query data than you are writing procedural code. Your ORM, while beautiful to look at, generates horrifically slow SQL in some critical area. You need to take advantage of functionality exposed by your RDBMS that is not / cannot be made available through your ORM. Every time you type SELECT * FROM... , God kills a kitten. And you hate kittens. You aren't using an ORM, OOP, or a modern language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/26775",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4422/"
]
} |
27,083 | Since they exist for several fields with varying degrees of usefulness, I'm curious as to the reputation of headhunters/recruiters that focus solely on IT professionals (e.g. programmers, software engineers, CIOs, etc) when it comes to finding a new job. Are they actually useful or is it fairly hit or miss? Does the old adage of only working with a limited number apply or can you safely give you resume/CV to several of them? | I think they can be helpful, but you have to be aware that their job isn't to find a good place for you, it's to sell you on a place that hired them to find people. I haven't dealt with many recruiters in my career (yet?) but the ones I did come across were fairly non-technical and were just parroting vague job description details to me and making promises about the high salary potential. So I'd say it's pretty hit or miss. I wouldn't ignore recruiters and headhunters entirely -- you never know when a good opportunity might present itself -- but I would definitely be careful to not fall for the optimistic job descriptions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/27083",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2471/"
]
} |
27,134 | As the Internet is pretty much ubiquitous, can we as developers assume that all users have Internet access? Now I don't mean that the code is written in such a way that if there is no connection then the whole program crashes due to lack of error code. What I mean is, can programs today be developed under the assumption that its users will always have access to the Internet? You may ask "What do we gain by assuming that?" The reason why I'm asking is because at uni we use quite a few programs which require Internet access due to the way it checks the licenses (it checks your IP address -- if it's not an address at campus, then you you're not allowed to use it). Note that the program itself should work fine without Internet access; it's just needed for license checking. EDIT: I'm talking about desktop applications here. EDIT2: From some of the answers I get a feeling of being accused of exploiting the users in unethical ways. I'm not endorsing what I've described in this question -- I'm just asking about it because the developers of some of the programs we use at uni have done this. Personally I think doing this is plain out stupid and wrong. | Bad idea, for three reasons. First off, even though everyone has Internet access these days, which is basically true, they don't always have it available at all times. My primary machine is a laptop, and it's connected a lot of the time, but not when I'm on the bus, for example. Second, and sort of related to the first, is your method of checking. What if a student gets a legitimate copy of the program, puts it on his laptop computer, and then goes to study with a friend who lives off-campus? You've just introduced a heck of a false-positive condition into your license checking. Third, there's an ethical problem with the license checking in the first place. If a person chooses to place a program on their computer, you have no right to cause their computer to treat it as invalid. In any other context that's called hacking and it could land you in all sorts of hot water, and just because our copyright laws have been hijacked by copyright owners to make a special-case legal exemption for this scenario, that doesn't make it right. Enforcing the law is the job of law enforcement, and private individuals are highly discouraged from taking law enforcement into their own hands (vigilantism) because they tend to do it all wrong. (Just look at the Sony rootkit!) Your best course of action would be to assume that the user has an Internet connection available for features that actually require it, but not require it for features that can get by without it, and certainly don't require it just to convince the program that it's not an illegitimate copy! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/27134",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
27,147 | Do you think it's worth it to use version control if you are an independent developer, and if so, why? Do you keep the repository on your own computer, or elsewhere, where it can serve as a backup? | If you use decentralized source control (Mercurial or Git or Bazaar or whatever), you get advantages over SVN/CVS that makes it easy, useful and powerful to use in case you're an indy: You commit locally : your project dir is your repo with FULL history. So you don't have to have a server, you commit directly in your repo, and you can have several repos in the same computer. Using a laptop that you open sometimes to continue working on your things? Great! You don't have to setup a server and if you need one later, it's easy and you just "push" and "pull" changes between repositories. It's made to ease experimentation : often you need to have an idea about a feature without polluting code. With SVN and CVS you can already use a branching system and ditch the branch if the feature is not as good as you wanted it to be. But if you want to merge the feature with the trunk version, you'll have a lot of hard to fix surprises. Git, Mercurial and Bazaar (at least) makes merges and branches really easy. You can even just duplicate a repo, work on it some time, still commit and kill it or push your changes in the main repo if you want. Flexibility of organisation : as pointed before, as you have repos that you organize as you need, it's easy to start alone and allow other people to work with you by changing your organization. No organization is imposed so you just have to set it up and voilà. I often just push/pull changes between my own computers (laptop/desktop/server) and I'm still alone on my devs. I use Mercurial and that help me duplicate my work but also work on features I thought about outside on my laptop, then continue to work on other features on my desktop, then push my laptop changes on my desktop or server and merge the whole desktop+laptop and put it (as backup and future teamwork repo) on my server. It helps setting up backups : if you setup a central repo (on GitHub if it's public, or in a private repo on BitBucket) you can easily write a script which will be executed each time a computer is booted, and then pass said script on to your friends so that it makes automatic backups of your work regularly. It's what I'm doing so now I'm sure it won't be easy to lose my work. In fact, currently, you have no excuse to not use a control source tool for any project. Because they are more powerful and flexible than before and scale with your needs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/27147",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9357/"
]
} |
27,164 | If you were hired into a new company as a team lead (say a team of 10) one of the important things to do is to earn the respect of the members of the team. In the early days the new team lead may know nothing of the team culture, code base and business domain: in other words, is a complete neophyte. How does one go about this? What are the do's and don'ts? | Lots of good answers already. I tried to do all these when I was a team lead.
Treat team with respect
Delegate to strengths, but also give tasks that may help team members improve on weaknesses
Protect the team from interruptions or distractions
Be willing to educate
Be willing to listen
Remove roadblocks and stay out of the team's way, but also be approachable when people need help
Be tactfully honest in assessing work. Some people do not respond well to even constructive criticism, but serious developers (or workers in any profession really) will want to know what to do to get better
Respect the team's personal time and allow for work/life balance (telecommuting, getting out an hour early to pick up kids or make a doctor's appointment), and when times dictate overtime, be prepared to reward the team for the sacrifice. Sometimes my company would authorize me buying lunch to celebrate a milestone, but when it didn't (cheap bastards!) I did something for the team on my dollar. If the team gave extra to help make a project successful, then it was the least I could do. It doesn't have to be extravagant or expensive, but a little appreciation helps; and lack of it can sap morale.
Work to be both a technical and subject matter expert. You don't have to do this, but in my experience a leader is more respected when the team members could both appreciate his/her knowledge and depend on him/her to help out when needed.
Do what it takes to make the team members more successful. That means educating and removing roadblocks, mentioned above, but also pitching in with documentation that the team can use when your brain isn't around for them to use, and writing some of the specs. Great specs will make developers more efficient and self sufficient, fill in knowledge gaps, and help them meet requirements with a minimum of later rework. Do this in conjunction with getting them to work directly with the client.
Build relationships between the team and the clients. It will help with filling in the gaps on missed requirements and open doors for other feedback you might not normally get.
Accept that mistakes are a part of learning
Be ready to act to remove a team member that is a detriment to the team's cohesion or productivity. This is sometimes hard to do, but you will encounter people that lack the work ethic, productivity, or competence of the other team members. You can be patient to a point, but there comes a time when you may have to work with management to make a change. Truly poor team members will cost the team more time and effort. Trying too hard to be Mr. Fair and Nice can work against you if the better team members feel the underperformer is getting special treatment or is being compensated the same for lesser work.
Be transparent. Communicate regularly. If you get information about the company or a recent layoff, or a merger, share it with them. People do respect honesty even though sometimes in this nutty world the opposite seems true. If you can't be honest with your subordinates, then you don't want to be a team lead, you want to be an executive.
Be quick to share credit when good things happen
Be prepared to take responsibility when bad things happen | {
"source": [
"https://softwareengineering.stackexchange.com/questions/27164",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/85/"
]
} |
27,207 | Python first appeared in 1991, but it was somewhat unknown until 2004, if the TIOBE rankings quantify anything meaningful. What happened? What caused the interest in this 13-year-old language to go through the roof? Is there a reason that Python wasn't considered a real competitor to Perl in its first decade of existence? Is there a reason that Python didn't continue in relative obscurity for another ten years? I personally think that Python is a very nice language, and I'm glad that I'm not the only one. But it doesn't have corporate backing or a killer feature that would explain a sudden rise to relevance. Does anyone know the story? | Google
Google started using Python heavily and reinvesting in development of the language.
"But it doesn't have corporate backing or a killer feature that would explain a sudden rise to relevance."
Google is the corporate backing. As for features, Python is an OOP, interpreted, cross-platform, fast [1] language. What's not to like? It's another excellent tool in the toolbox.
[1] Fast to develop, not fast to execute. Writing a general-purpose script in Python is much faster than, say, Java or C, disregarding the fact that those languages would execute faster. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/27207",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2329/"
]
} |
27,264 | I've been using underscore_case for about 2 years and I recently switched to camelCase because of the new job (been using the latter one for about 2 months and I still think underscore_case is better suited for large projects where there are a lot of programmers involved, mainly because the code is easier to read). Now everybody at work uses camelCase because (so they say) the code looks more elegant . What are your thoughts about camelCase or underscore_case? P.S. please excuse my bad English
Edit
Some updates first:
platform used is PHP (but I'm not expecting strictly PHP-platform-related answers; anybody can share their thoughts on which would be the best to use, that's why I came here in the first place)
I use camelCase just as everybody else in the team (just as most of you recommend)
we use Zend Framework, which also recommends camelCase
Some examples (related to PHP):
Codeigniter framework recommends underscore_case, and honestly the code is easier to read.
ZF recommends camelCase and I'm not the only one who thinks ZF code is a tad harder to follow through.
So my question would be rephrased: Let's take a case where you have the platform Foo which doesn't recommend any naming conventions and it's the team leader's choice to pick one. You are that team leader, why would you pick camelCase or why underscore_case?
P.S. thanks everybody for the prompt answers so far | I think you should use the naming convention adopted by your platform. underscore_case will look weird in C# code, as camelCase does in Ruby =) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/27264",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8849/"
]
} |
27,400 | It is clear that the knowledge of low level stuff is very important in our work. But in a situation where you're already developing commercial software on a high level, and when you already have a chosen direction but don't have any assembly skill, isn't it more reasonable to focus on studying stuff related to your direction? Or is there a reason you should spend some time to learn the low-level basics anyway? When is it too late, and when it is not? And if it is not too late, then how would one go about learning optimally (in the sense of not spending excessive time to get some depth and understanding)? | Can't believe no one has mentioned debugging ... I haven't written a line of assembly code in many years now. But I read it reasonably often. High-level debugging is great when you have the source and symbol information, but when your fancy library is throwing an unhandled exception on customer machines, it's too late to require that to be included in the license... But I can still open up the disassembler and see what your high-level logic eventually ended up doing , trace bad data back to its origin, find out who changed the FPU control register... This has saved my bacon more often than I care to think about. And it's never too late to learn - there are plenty of great references and tutorials out on the 'Net, and just about any program running on your machine can provide a hands-on environment. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/27400",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10598/"
]
} |
27,564 | Since it's the holiday season now and everybody's making wishes, I wonder - which language features you would wish PHP would have added? I am interested in some practical suggestions/wishes for the language. By practical I mean: Something that can be practically done (not: "I wish PHP would guess what my code means and fix bugs for me" or "I wish any code would execute under 5ms") Something that doesn't require changing PHP into another language (not: "I wish they'd drop $ signs and use space instead of braces" or "I wish PHP were compiled, statically typed and had # in it's name") Something that would not require breaking all the existing code (not: "Let's rename 500 functions and change parameter order for them") Something that does change the language or some interesting aspect of it (not: "I wish there was extension to support for XYZ protocol" or "I wish bug #12345 were finally fixed") Something that is more than a rant (not: "I wish PHP wouldn't suck so badly") Anybody has any good wishes? Mod edit: Stanislav Malyshev is a core PHP developer. | I wouldn't mind named parameters. getData(0, 10, filter => NULL, cache => true, removeDups => true);
// instead of:
getData(0, 10, NULL, true, true);
// or how about:
img(src => 'blah.jpg', alt => 'an albino platypus', title => 'Yowza!'); Unfortunately the PHP devs shot that idea down already. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/27564",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10080/"
]
} |
27,798 | Does function length affect the productivity of a programmer? If so, what is a good maximum number of lines to avoid productivity loss? Since this is a highly opinionated topic please back up the claim with some data. | Since I embarked on this crazy racket in 1970, I have seen exactly one module that really needed to be more than one printed page (about 60 lines). I have seen lots of modules that were longer. For that matter, I have written modules that were longer, but they were usually large finite state machines written as big switch-statements. Part of the problem appears to be that programmers these days are not taught to modularize things. Coding standards that maximize the waste of vertical space appear also to be part of the problem. (I have yet to meet a software manager who has read Gerald Weinberg 's " Psychology of Computer Programming ". Weinberg points out that multiple studies have shown that programmer comprehension is essentially limited to what the programmer can see at any given instant. If the programmer has to scroll, or turn a page, their comprehension drops significantly: they have to remember, and abstract.) I remain convinced that a lot of the well-documented programmer productivity gains from FORTH were due to the FORTH "block" system for source code: modules were hard-limited to an absolute maximum of 16 lines of 64 characters. You could factor infinitely, but you could not under any circumstances whatsoever write a 17-line routine. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/27798",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8073/"
]
} |
28,098 | Example here: What languages should I know if I'm interested in building web applications? Yes, I understand that HTML and CSS are not Turing-complete. Yes, I understand that they are declarative, not imperative languages. But why are people always clubbed over the head with this pedantic (and arguably obvious) fact when they ask a question about these languages? | What is the difference, really? The real and important difference between a programming language and these other languages is this: HTML and CSS describe presentation ,
whereas programming languages describe function I intend to illustrate why this difference matters, but that pedantry on this issue is sometimes misplaced. A true story : I once spent a few months developing a complex performance management system using a "proper" programming language. It automated the process of gathering data from various other systems, performed various manipulations on that data and then presented the results in a simple table. Once it was live, a senior manager saw a tool written for a similar business, and asked if we could replace what I had written using their alternative. Furthermore, he was upset that I'd spent weeks developing my solution, where this new app had been written in a matter of days. Further investigation revealed that the manager's preferred option was all presentation with no substance: there were lots of colours and icons and graphs, but there was absolutely no logic behind them. All the data had to be gathered and manipulated manually. Despite the pretty interface, the application was essentially useless. I'm happy to say that the manager in question was persuaded that my approach was the one that met his real business needs. The importance of presentation : There is often an implication that skills in HTML, CSS etc. are somehow inferior to skills in "real" programming languages. This is a serious mistake. In my story, the senior manager felt that design was very important to him, to the extent that he was initially prepared to overlook function in its favour. Now, if this were an isolated incident, I might suggest that the manager was just being silly. But it wasn't. Time and again, I've met users who are impressed by flashy graphics and whizzy widgets, but unimpressed by raw functionality and my technical achievements. I think that there are several lessons to learn here: People evaluate software on criteria that they understand. They often understand the difference between good-looking and ugly, but rarely appreciate technical nuances. People are fooled by appearances. This may not be a good thing, but it is a reality that we must live with. Appearances influence the way people feel about software. The way people feel about software is important to them. Indeed, people sometimes prefer software that makes them feel good over software that is functionally superior. Indeed, they might well be more productive with feel-good tools than with technically superior tools. To this extent, our users are not being fooled. They are actually making a wise and thoughtful choice. As programmers, we often neglect the role of presentation as we focus on function. To some extent, this is right and proper. However, it is important to recognize that there is another dimension to our work that is important to our customers. So, presentation-oriented languages (HTML, CSS) are important. The value added by those who can use these tools effectively should not be underestimated. The importance of real programming languages As the OP pointed out, "real" programming languages are Turing Complete. As a proper sad geek, I find this sublimely fascinating. It means that, for any program written in a T-C language, a functionally equivalent program can be written in any other T-C language. Of course, this isn't to say that all languages are the same. They each have their strengths and weaknesses that make them more or less suitable for certain tasks. However, I/O aside, this means that all programs can be written in all true programming languages. 
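To make the function-versus-presentation distinction concrete, here is a deliberately trivial sketch of my own (the PHP below is an added illustration: the numbers, the calculation and the .result class name are invented, not taken from anything above):
<?php
// Function: Turing-complete work (arithmetic, loops, conditions) that
// decides WHAT will be shown. Markup alone cannot express this step.
$scores  = array(72, 88, 95, 61);
$average = array_sum($scores) / count($scores);

// Presentation: the emitted markup (plus whatever CSS targets .result)
// only describes HOW the already-computed value should look.
echo '<p class="result">Average score: ' . round($average, 1) . '</p>';
The echo line's markup could have been written by hand in a static page; the computation above it could not, and that gap is exactly the difference being described here.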
(Incidentally, the important thing is T-C. The declarative vs imperative is a red-herring here. SQL, for example, is declarative but is also a proper programming language because it is T-C.) Of course, the same isn't true of a markup language like HTML or CSS. In fact, there are whole classes of problem that these languages simply can't solve . Where I can program anything I want in a true programming language - including layout engines - it just isn't possible to achieve the same things with languages that aren't T-C. As highlighted in my story, HTML and its ilk are used to produce presentation. Real programming languages are used to produce functionality. Why are programmers pedantic about it all? Programmers spend a great deal of time, effort and money developing their skills. People naturally value the things in which they invest ("your heart is where your money is"). Programmers often feel the need to justify the amount of time it takes to produce results compared to the rapid results achieved by UI designers. In order to do this, they need to draw a distinction between what the two groups actually do . Because employers need to apply the right people to the right jobs. Unless we clarify the (often technical) differences, managers easily make the wrong calls. Because there is a real and fundamental difference, as outlined above. Is it always appropriate to be pedantic? Let's face it, as programmers we're a naturally pedantic lot . It goes with the territory. It doesn't help that many of us have been burned when non-programmers have failed to understand what we do. Nevertheless (and to be honest, this goes against my natural instincts), I don't think we need to call people out whenever they slip over every little distinction . The important things here are context and perspective . I'm told that, from the perspective of a biologist, a tomato is a fruit. But when I buy them in the supermarket, I look for them amongst the vegetables. Why? Because the technical distinction doesn't matter in that particular context. Moreover, the distinction would actually get in the way of their usefulness: if I was daft enough to include tomatoes in a fruit salad, for example. It is the same with computer languages. There are times when the difference between programming languages and other languages really does matter . Quite often, however, we can all communicate perfectly effectively when just lump them all in together. In the case of the question linked by the OP, it really didn't matter what languages were true programming languages and which were not. Pointing out the distinction didn't advance the discussion in any way. Thankfully, other than adding a little noise (and becoming the stimulus for an interesting discussion!) the pedantry linked by the OP was of little consequence. At its worst, however, pedantry can stir up negative feelings and damages relationships... at least according to my wife. :-) How to deal with pedantry amongst programmers A preacher friend of mine once delivered a sermon entitled: is this a hill worth dying to be on? He was referring to generals who make a strategic assessment over which battles are worth fighting: are the gains worth the costs? Is it really worth interrupting the flow of the discussion to make this distinction? Does my pedantry stem from a sense of arrogance or from past hurt? Do my comments value the skills of others as well as my own? Of course, there are times when distinctions need to be made. 
My aim is that, when I make a contribution, it will add value to our collective endeavors. That is, after all, the job of every real programmer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28098",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1204/"
]
} |
28,187 | I'm working with php and sql. I think that my method of implementing functions is better than what my boss proposes. Just now he explained me how to do a check on a list of email addresses, and I do not like his idea. I proposed mine which is better and quicker to implement, but he disagreed. Now I think I will go ahead and implement my idea, because his idea was not clear enough to me. Do you think he will be mad? | Having been "the boss" and, as it turned out, actually better than my staff in all cases bar one - yes, he will be mad - or annoyed or frustrated and in any case, quite possibly, right in the first place. If you're genuinely better than him then you should be able to understand his proposed solution and to see why yours is better and then to explain why. But you state: because his idea was not clear enough to me In which case you need to go back and understand what he wants and why and whether - as has been the case both in me making suggestions to my staff and my staff proposing solutions to me - you or he has missed something. But don't assume that he's wrong and you're right unless and until you understand what he's asking for and whether he's covering something you haven't thought of (yet). Oh and in the one case - he's a better programmer but he's not so good a couple of steps back from the problem where I'm better and we had great fun working together for that very reason. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28187",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10282/"
]
} |
28,228 | The problem: When we were sending newsletters to customers, there was no way to confirm if the customer already received the mail. We have come up with two different approaches: (Boss's Idea) Each time mail was being sent, do an INSERT in a db with the title of the newsletter being sent and the email address which is receiving the email. To ensure that any email address does not receive the same email twice, do a SELECT in the table and find the title of the newsletter being sent: if (title of newsletter is found)
{
    check to see if the email we are sending mail to is already present. if it is, do not send mail
}
else
{
send mail
} (My idea) create a column called unique and mark it as UNIQUE. Each time mail was being sent, concatenate email + newsletter id and record it in the UNIQUE row. The next time we do a "mysql_affected_rows" check to see if our INSERT was successful, we send the mail, else, there is already a duplicate and no need to send it. Is either approach objectively superior? If yes, which one and why? | I prefer the boss solution because it's easier to comprehend.
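For concreteness, here is a rough sketch of the boss-style check written with the old mysql_* functions the question implies (the two checks from the description are folded into a single WHERE clause). Treat it as an illustration only: the newsletter_log table, its column names and the sendNewsletter() helper are placeholders I am inventing, and error handling is left out.
$title = mysql_real_escape_string($newsletterTitle);
$email = mysql_real_escape_string($recipientEmail);

// Has this address already been logged for this newsletter title?
$result = mysql_query(
    "SELECT 1 FROM newsletter_log
      WHERE newsletter_title = '$title' AND email = '$email'"
);

if (mysql_num_rows($result) === 0) {
    // Not sent before: send it, then record that we did.
    sendNewsletter($recipientEmail); // placeholder for the actual mailing code
    mysql_query(
        "INSERT INTO newsletter_log (newsletter_title, email)
         VALUES ('$title', '$email')"
    );
}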
I immediately understood it, while your own solution is less obvious. For the people that will have to maintain/alter this in the future, this is important. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28228",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10282/"
]
} |
28,238 | I've often come across bugs that have been caused by using the ELSE construct. A prime example is something along the lines of: If (passwordCheck() == false){
displayMessage();
}else{
letThemIn();
} To me this screams security problem. I know that passwordCheck is likely to be a boolean, but I wouldn't place my applications security on it. What would happen if its a string, int etc? I usually try to avoid using ELSE , and instead opt for two completely separate IF statements to test for what I expect. Anything else then either gets ignored OR is specifically handled. Surely this is a better way to prevent bugs / security issues entering your app. How do you guys do it? | The else block should always consist of what you want the default behaviour to be. There's no need to avoid them, just be careful to use them appropriately. In your example, the default state should be to not allow access. A little refactoring leaves you with: If (passwordCheck)
{
letThemIn();
}
else
{
displayMessage();
} i.e. if the password check works, let them in, otherwise it's always valid to show some error message. You can of course add additional checks to your logic by using else if rather than completely separate if statements. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28238",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4630/"
]
} |
28,257 | I'm searching for a web site to start a new open source project.
I want to start developing a "SPORT training log".
I have some development skills but not all I need (I don't know web coding). Does anyone know a good starting point to meet some people, propose the software and then (in another place also) start the development all together? For example, I have Python, C and C++ skills and a little training experience, but I want to make it for the web, so I can't do all the stuff alone. I don't think I'm the only one who wants to make a training log but doesn't have the necessary skills. I'm searching for a site to manage the idea behind a training log and get interested people to contribute to the idea and then eventually contribute to the code as well. What I mean is a step before the actual FOSS approach. The question is: why start to code software I need if I can start with global software requirements for a "sport training log" and, after a useful and global review, we can start all together to develop it? Why should I need to start developing the software with XYZ features while other people who need features YZQ have to start writing another one? Why not put the open source community a step earlier, in a more professional way, starting an open source project with a collaborative approach to the software requirements as well? | The else block should always consist of what you want the default behaviour to be. There's no need to avoid them, just be careful to use them appropriately. In your example, the default state should be to not allow access. A little refactoring leaves you with: If (passwordCheck)
{
letThemIn();
}
else
{
displayMessage();
} i.e. if the password check works, let them in, otherwise it's always valid to show some error message. You can of course add additional checks to you logic by using else if rather than completely separate if statements. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28257",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7771/"
]
} |
28,314 | For example, would you prefer this one-liner int median(int a, int b, int c) {
return (a<b) ? (b<c) ? b : (a<c) ? c : a : (a<c) ? a : (b<c) ? c : b;
} or an if/else solution involving multiple return statements? When is ?: appropriate, and when is it not? Should it be taught to or hidden from beginners? | Is the ternary operator evil? No it's a blessing. When is ?: appropriate? When it's something so simple you don't want to waste many lines for. and when is it not? When the readability and clearness of code suffers and a potential for a mistake through insufficient attention increases, for example, with many chained operators, just like in your example. The litmus test is when you begin to doubt your code is easily readable and maintainable in the long run. Then don't do it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28314",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3684/"
]
} |
28,331 | I'm a fairly young programmer, and I work at a medium sized company's IT department. I have a coworker, and he is a really good Visual Basic 6 programmer. And I mean really good. Honestly. He can deliver working applications, containing very few bugs, in the time I need to get my first cup of coffee, and boot my machine. He is just that good. Thing is, we are working with a team, and his working style is completely antiquated. He doesn't believe in versioning software (if you just make sure your code is correct, you don't need all that nonsense). Doesn't believe in deployment (I can deliver a working executable. How that is deployed is for the sysadmins to figure out). Doesn't believe in abstraction. ('if you want to create a subroutine, go ahead, but don't call any subroutines from that subroutine. It gets messy that way, and the code is hard to follow. This way every one can follow every step on the way.' or 'yeah, sure you can use that library to do that for you, but that way you don't really understand what's going on') and certainly doesn't believe in OOP. (we work in VB.net) He is so good in what he does, he can deliver applications way faster than I can. But it just doesn't work in a team. Our other team member is quiet, and doesn't like to speak out, though he tends to agree. Our manager thinks I make valid points, but is not a programmer. I have a really hard time maintaining programs he has written, and it doesn't make for a good team atmosphere. What do you think is the best thing for me to do? | This sounds like one of those "He's a good guy but..." where you know the truth is coming. And the truth sounds like he isn't really that good of an engineer. Good software isn't just about working and development speed. It's also about the other things you mentioned - maintainability, compatibility, abstraction (for future efficiency), etc. So, that being said, it sounds like you have a problem developer on your hands. What's worse for you is that he's proven and probably set in his ways. So what can you do? Work around him. Strive to produce your projects on as tight a schedule as he does. And if you can't, produce a better result. Over time those proven concepts which he dismisses will begin to pay off for you. On the other hand, if you are forced to work his way, leave. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28331",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11068/"
]
} |
28,434 | We are being forced to use Spartan programming on a project, to everybody's dismay. So I get it, it makes the methods really short and it handles the simple cases first. But is it really worth the price of the code looking like something out of the Obfuscated C Code contest? Can you see it being useful for something? | Many of the tenets of Spartan programming just seem like good practice to me. For example, keeping methods short, minimizing the scope of variables, minimizing the number of parameters to a method, or methods to a class, etc. These are all good things and exactly what you should be striving for. But then there's stuff like minimizing character count , minimizing token counts , ternarization (seriously?), etc., which really makes no sense. I think my main problem with it is exemplified by this quote : But, spartan programming is more than just a technical coding style, in that it has a single underlying, unifying principle---minimalism and simplicity taken to extremes. Anything "taken to extremes" rings alarm bells to me. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28434",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92/"
]
} |
28,484 | When is something language agnostic? Why is it called that? | Language agnostic refers to aspects of programming that are independent of any specific programming language. At least, that's how I've heard it used for the last thirty years. The word "agnostic" is derived from the ancient Greek for "don't know". So something which is "language agnostic" doesn't need to know about computer languages; it means the same thing as language independent . Things that would be language agnostic include algorithms, or Agile, or a runtime library with bindings to many languages. Some Mac OS X features are not language agnostic , because they're really designed to be used from Objective C, can only be used with difficulty from C or C++, and don't even have bindings for many languages. There can also be a subtext to using "language agnostic" rather than other terms. In colloquial English, someone who says they're "agnostic" means they are neither religious nor an atheist: they "don't know" about God. This is usually verbal code for "I don't like to talk about religion, so don't try to convert me." So sometimes when people talk about being "language agnostic", they're trying to stay out of arguments about what computer language is better. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28484",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8486/"
]
} |
28,551 | What do you do to understand some code that you didn't write? Or code that you wrote a long time ago and don't remember what it does anymore? Do you have some technique that you follow? Do you analyze the structures first, or the public methods, or do you draw flow charts, etc.? Or do you fire up the debugger and just step through it? Or do you just ad-hoc your way through until you understand it? | Asking the author
Going through it with the debugger in different scenarios
Saving discoveries in written form
Learning by trying to add/change something and seeing where it leads
Doing some pair programming with an experienced colleague or the author | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28551",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
28,603 | Exhibit 1 , Exhibit 2 , I guess you won't find it hard to recall other examples. Thing is: if there is more than one way to solve a problem, the PHP programmer (I usually browse the PHP tag on StackOverflow) will ask for help on the solution involving regular expressions. Even when it will be less economic, even when the php manual suggests ( link ) to use str_replace instead of any preg_* or ereg_* function when no fancy substitution rules are required. Does somebody have a clue about why this happens? Don't get me wrong, some of my best friends are regular expressions and I don't despise Perl. What I don't get is why there is no looking for alternatives whatsoever, even when the overkill is obvious (regex to switch strings) or the code complexity rises exponentially (regex for getting data from html in PHP ) | When the only tool you have is a regex, every problem looks like ^((?>[a-zA-Z\d!#$%&'*+\-/=?^_{|}~]+\x20*|"((?=[\x01-\x7f])[^"\\]|\\[\x01-\x7f])*"\x20*)*(?<angle><))?((?!\.)(?>\.?[a-zA-Z\d!#$%&'*+\-/=?^_{|}~]+)+|"((?=[\x01-\x7f])[^"\\]|\\[\x01-\x7f])*")@(((?!-)[a-zA-Z\d\-]+(?<!-)\.)+[a-zA-Z]{2,}|\[(((?(?<!\[)\.)(25[0-5]|2[0-4]\d|[01]?\d?\d)){4}|[a-zA-Z\d\-]*[a-zA-Z\d]:((?=[\x01-\x7f])[^\\\[\]]|\\[\x01-\x7f])+)\])(?(angle)>)$ | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28603",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11146/"
]
} |
28,651 | I decided to learn programming. I've been reading SO for few days, and I think I will start with C++, as I read some articles. I am aware of loops, arrays, program logic and objects a little and I need someone to look me over and help me with small questions I get when doing my first projects. So here is the question - where do I find such guy? I don't got any friends who program and all. EDIT: 2 years later, I am still looking for mentor. I did not actively code just started 3 months again. I work on Objective-C and iOS programming and game programming with Cocos2d. If you want to become my mentor, drop me a or comment. | Joining an open-source project is certainly one way to get started. However, I've been using open-source software for years, and quite frankly, the quality on almost all such projects is generally in the toilet. If you learn your programming and design skills entirely from them, you'll probably pick up some very poor ones along with the good ones, with no way to tell the difference between them. What do you want to learn programming for? The answer to that will determine what you should look for, and where. Here are some common answers, and my professional opinion on how to pursue them (keep in mind that it is just opinion, though IMHO, accurate): Just to say that you know how to do it. Then you don't really need a mentor, and C++ is a poor place to start. I love C++, it's my first choice for general programming, but play with another language instead. I'd suggest Python; it has a much gentler learning curve than C++, and unlike some languages (no names mentioned, I didn't wear my asbestos underwear today) you'll still learn a few useful skills in case you want to get into it further later. A lot of the concepts can translate directly to C++ if you decide to continue on that route. Just to try it out and see if you like it. An open-source project might be good enough for that. Pick a program that you like, but that you've found some problems or irritations with, and offer your help to whoever is running it. Most open-source projects are open to contributions, that's generally why they're open-source in the first place. However, in that case, do not try C++ as your first programming language. It's not hard to master the basics, but C++ is low-level enough that you can get some serious and very hard-to-find bugs in your programs. Unless you already know you love programming, or you're as stubborn as the proverbial ox, or have already found a mentor who can point you in the right direction, that will kill any budding interest you might have in the field. See the above answer about Python, it's better suited for that. Because you have an idea for a specific program you want to write. (I don't think that the OP is in this category, I'm putting it in for later readers.) Do you have any idea of the time required to master program design and implementation? As a hint, it's measured in years. You might be able to come up with a half-decent design after only a few months of study, if you're both smart and extremely lucky, but anyone with a little experience who has to work on it (including you, later) will wish that you'd never been born -- I speak from experience. :-) Unless the idea is so super-secret that no one else can know about it until it's done, don't bother. 
Hire an experienced programmer to do it for you, or if you can't afford one but still want the program badly enough, offer to partner with one -- you handle the business side and let him handle the programming part. Most good developers would prefer to be programming, so that kind of offer can be worth it to them. Because you already know that you're fascinated by programming and want to learn more. Then you're on exactly the right track. :-) Whether it's just as a hobby or is something you might turn into a career later, if you've got the kind of personality that finds it endlessly fascinating, the best thing you can do is to immerse yourself in it. C++ is as good a language as any, in that case, and a mentor will definitely help (and with more than just developing your skills; it can get lonely without friends who share your passion). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28651",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11193/"
]
} |
28,667 | So, there are a bunch of questions appearing asking is X evil, is Y evil. My view is that there are no language constructs, algorithms or whatever which are evil, just ones which are badly used. Hell, if you look hard enough there are even valid uses of goto . So does absolute evil, that is something which is utterly incompatible with best practice in all instances, exist in programming? And if so what is it? Or is it just bad programmers not knowing when something is appropriate? Edit: To be clear, I'm not talking about things programmers do (such as not checking return codes or not using version control - they're choices made by bad programmers), I mean tools, languages, statements, whatever which are just bad... | Magic numbers. Implicitness is inherently evil, and here's the reason why: | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28667",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5095/"
]
} |
28,927 | Several organisations I know use SMART goals for their programmers. SMART is an acronym for Specific, Measurable, Achievable, Relevant and Time-Bound. They are fairly common in large corporations. My own prior experience with SMART goals has not been all that positive. Have other programmers found them an effective way to measure performance? What are some examples of good SMART goals for programmers (if they exist). | In a word No First : I've never had my projects remain stable enough that I could establish the SMART goals with any meaning. The time scales between when my roles change on a project and when perf reviews are done are just too far out of sync. Second: Measuring individual performance is a great way to create a "not my job" mentality and negative competition between individuals and/or the various sub teams in an organization. It's very easy to game the system and make sure you're looking out for yourself and not really helping out the entire team. We should be encouraging people to be team players, but then our organizations do the exact opposite. Most of these sorts of systems are antithetical to team building. Mary Poppendieck's done a far better job of articulating this than I can ever do in LeanEssays: Team Compensation . Sue got a call from Janice in human resources. “Sue,” she said, “Great job your team did! And thanks for filling out all of those appraisal input forms. But really, you can’t give everyone a top rating. Your average rating should be ‘meets expectations’. You can only have one or two people who ‘far exceeded expectations’...” ... One of the greatest thought leaders of the 20th century, W Edwards Deming, wrote that un-measurable damage is created by ranking people, merit systems, and incentive pay. Deming believed that every business is a system and the performance of individuals is largely the result of the way the system operates. In his view, the system causes 80% of the problems in a business, and the system is management’s responsibility. He wrote that using exhortations and incentives to get individuals to solve management problems simply doesn’t work. Deming opposed ranking because it destroys pride in workmanship, and merit raises because they address the symptoms, rather than the causes, of problems. ...let’s take a deeper look into employee evaluation and reward systems, and explore what causes them to become dysfunctional... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28927",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8400/"
]
} |
28,947 | In ancient history, Brendan Eich had a language design, and in today's world, JavaScript is a popular language implemented and used in many different places. What caused the language to become popular? Was it the C-like syntax familiar to previous programmers? Did Netscape have enough control of the market to force it to be used? Or is there some deeper reason that JavaScript is popular and other languages are not? Particularly, if you had to make a language as popular as JavaScript, what initial conditions would you need to recreate its growth in popularity? | I was commenting on an earlier answer , but it was getting big, so I thought I'd spin this out. Any new language can only succeed if it capitalizes on an emerging frontier in computing. Previous examples:
C for Unix
Objective-C for iOS
Perl and PHP for back-end Web 1.0
Python and Ruby for back-end Web 2.0
Java for the back-end Internet-enabled enterprise
To answer your question, JavaScript was the language for Netscape Navigator back when that was the dominant browser. Specifically, it was the language for dynamic front-end development. The next big language will have to solve another frontier. There still seems to be a land grab in the back-end web development space. Plus, mobile computing isn't totally solved, despite Apple's current dominance. Also, there's the emergence of multi-core and cloud computing, which is something that many languages are attempting to capitalize on (like concurrent languages like Erlang and Go, or functional languages like Haskell and OCaml). Entrepreneurs have an expression along the lines of, "find someone on fire and sell him a fire hose". So if you want to introduce a new language, whose fire are you putting out? Every new frontier in computing brings a whole host of headaches; so supply some aspirin and you'll be golden. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28947",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11283/"
]
} |
28,950 | I'm studying Haskell for the purpose of understanding functional programming, with the expectation that I'll apply the insight that I gain in other languages (Groovy, Python, JavaScript mainly.) I choose Haskell because I had the impression that it is very purely functional, and wouldn't allow for any reliance on state. I did not choose to learn Haskell because I was interested in navigating an extremely rigid type system. My question is this: Is a strong type system a necessary by-product of an extremely pure functional language, or is this an unrelated design choice particular to Haskell? | I believe that understanding Haskell's type system is an amplifier to understanding functional programming. The thing about purely functional programming is that in the absence of side-effects, which allow you to do all sorts of things implicitly, purely functional programming makes the structure of your programs much more explicit. Haskell prevents you from shoving things under the carpet, forces you to deal with the structure of your program explicitly, and it teaches you a language to describe these structures: the language of types. Understanding types, particularly rich types as is Haskell, will make you a better programmer in any language. If Haskell wasn't strongly typed, concepts like monads, applicative functors and the like would never have been applied to programming. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/28950",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2329/"
]
} |
29,075 | Yes yes, I am aware that '\n' writes a newline in UNIX while for Windows there is the two-character sequence: '\r\n' . All this is very nice in theory, but my question is why ? Why is the carriage return character extra in Windows? If UNIX can do it with \n , why does it take Windows two characters to do this? I am reading David Beazley's Python book and he says:
For example, on Windows, writing the character '\n' actually outputs the two-character sequence '\r\n' (and when reading the file back, '\r\n' is translated back into a single '\n' character).
Why the extra effort? I will be honest. I have known the difference for a long time but have never bothered to ask WHY. I hope that is answered today. Thanks for your time. | Backward compatibility. Windows is backward compatible with MS-DOS (aggressively so, even) and MS-DOS used the CR-LF convention because MS-DOS was compatible with CP/M-80 (somewhat by accident) which used the CR-LF convention because that was how you drove a printer (because printers were originally computer controlled typewriters). Printers have a separate command to move the paper up one line to a new line, and a separate command for returning the carriage (where the paper was mounted) back to the left margin. That's why. And, yes, it is an annoyance, but it is part of the package deal that allowed MS-DOS to win over CP/M, and Windows 95 to win over all the other GUI's on top of DOS, and Windows XP to take over from Windows 98. (Note: Modern laser printers still have these commands because they too are backwards compatible with earlier printers - HP in particular do this well) For those unfamiliar with typewriters, here is a video showing how typing was done: http://www.youtube.com/watch?v=LJvGiU_UyEQ . Notice that the paper is first moved up, and then the carriage is returned, even if it happens in a simple movement. The ding notified the typist that the end was near, and to prepare for it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/29075",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11323/"
]
} |
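To make the translation described above concrete, here is a minimal C++ sketch (an illustration only, not code from the post). The size difference noted in the final comment assumes a Windows C/C++ runtime; on POSIX systems text and binary mode produce identical files.

#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

int main() {
    // Text mode (the default): on a Windows C/C++ runtime every '\n'
    // written to the stream is expanded to "\r\n" on disk.
    std::ofstream text("text_mode.txt");
    text << "line 1\nline 2\n";
    text.close();

    // Binary mode: bytes pass through untranslated on every platform.
    std::ofstream bin("binary_mode.txt", std::ios::binary);
    bin << "line 1\nline 2\n";
    bin.close();

    // Reading the text-mode file back in binary mode reveals the raw bytes,
    // so on Windows it is two bytes longer than the binary-mode file.
    std::ifstream raw("text_mode.txt", std::ios::binary);
    std::string bytes((std::istreambuf_iterator<char>(raw)),
                      std::istreambuf_iterator<char>());
    std::cout << "raw size: " << bytes.size() << '\n';
}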
29,109 | As a Linux (server side) developer, I don't know where and why should I use C++. When I'm going for performance, the first and last choice is C. When "performance" isn't the main issue, programming languages like Perl and Python would be good choices. Almost all open source applications I know in this area have been written in C, Perl, Python, Bash script, AWK or even PHP, but no one uses C++. I'm not discussing other areas like GUI or web applications, I'm just talking about Linux, CLI and daemons. Is there any satisfactory reason to use C++? | When I'm going to performance, the first and last choice is C. And that’s where you should back up. Now, I cannot, at all , speak for server development. Perhaps there is indeed no compelling reason to prefer C++ over the alternatives. But generally speaking, the reason to use C++ rather than other languages is indeed performance. The reason for that is that C++ offers a means of abstraction that has, unlike all other languages that I know, no performance overhead at runtime. This allows writing very efficient code that still has a very high abstraction level. Consider the usual abstractions: virtual functions, function pointers, and the PIMPL idiom. All of these rely on indirection that is at runtime resolved by pointer arithmetic. In other words, it incurs a performance cost (however small that may be). C++, on the other hand, offers an indirection mechanism that incurs no (performance) cost: templates. (This advantage is paid for with a (sometimes hugely) increased compile time.) Consider the example of a generic sort function. In C, the function qsort takes a function pointer that implements the logic by which elements are ordered relative to one another. Java’s Arrays.sort function comes in several variants; one of them sorts arbitrary objects and requires a Comparator object be passed to it that works much like the function pointer in C’s qsort . But there are several more overloads for the “native” Java types. And each of them has an own copy of the sort method – a horrible code duplication. Java illustrates a general dichotomy here: either you have code duplication or you incur a runtime overhead. In C++, the sort function works much like qsort in C, with one small but fundamental difference: the comparator that is passed into the function is a template parameter. That means that its call can be inlined . No indirection is necessary to compare two objects. In a tight loop (as is the case here) this can actually make a substantial difference. Not surprisingly, the C++ sort function outperforms C’s sort even if the underlying algorithm is the same. This is especially noticeable when the actual comparison logic is cheap. Now, I am not saying that C++ is a priori more efficient than C (or other languages), nor that it a priori offers a higher abstraction. What it does offer is an abstraction that is very high and incredibly cheap at the same time so that you often don’t need to choose between efficient and reusable code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/29109",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11006/"
]
} |
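As a concrete sketch of the qsort-versus-std::sort point made above (illustrative only; actual results depend on compiler, optimization flags and data):

#include <algorithm>
#include <cstdlib>
#include <vector>

// C-style comparator: qsort can only reach it through a function pointer,
// so every comparison is an indirect call the compiler cannot inline.
static int compare_ints(const void* a, const void* b) {
    const int x = *static_cast<const int*>(a);
    const int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

int main() {
    std::vector<int> a(1000000);
    for (int& v : a) v = std::rand();
    std::vector<int> b = a;

    // Comparison logic hidden behind void* and a function pointer.
    std::qsort(a.data(), a.size(), sizeof(int), compare_ints);

    // Comparison logic passed as a template argument (a lambda here), so its
    // body is visible at the call site and can be inlined into the sort loop.
    std::sort(b.begin(), b.end(), [](int x, int y) { return x < y; });
    return 0;
}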
29,170 | I have seen some programmers tweaking their code over and over again not only to make it 'work good', but also to make it 'look good'. IMO, 'clean code' is actually a compliment indicating your code is elegant, perfectly understandable and maintainable. And the difference comes out when you have to choose between an aesthetically appealing code vs. code that's stressful to look at. So, how many of you actually write 'clean code'? Is it a good practice? What are the other benefits or drawbacks of this? | I would argue that many of us do not write clean code . And generally, that's not our job . Our job as software developers is to deliver a product that works, on time. I am reminded of Joel Spolsky's blog post: The Duct Tape Programmer . He quotes from Coders at Work : At the end of the day, ship the
f*****g thing! It’s great to rewrite
your code and make it cleaner and by
the third time it’ll actually be
pretty. But that’s not the
point—you’re not here to write code;
you’re here to ship products. - Jamie Zawinski I am also reminded of Robert Martin's blog response: So. Be smart. Be clean. Be simple.
Ship! And keep a small roll of duct
tape at the ready, and don’t be afraid
to use it. - Uncle Bob If the code,a developer writes happens to be clean AND work (is deliverable), so be it, good for everyone. But if a developer is tinkering around trying to make clean and readable code at the expense of being able to deliver it timely, then that's bad. Make it work, use duct tape, and ship it. You can refactor it later and make it super gorgeous and efficient. Yes, it's good practice to write clean code, but never at the expense of being able to deliver. The benefit of delivering a duct-taped product on time far outweighs the benefits of clean code that was never finished and delivered. A good chunk of code I've come across isn't clean. Some are downright ugly. But they were all released and used in production. Some may say that it's unprofessional to write messy code. I disagree. The professional thing is to deliver code that works, whether it's clean or messy. The developer must do the best he/she can, given whatever time that was allocated before delivery. Then, go back to clean up-- that's professional. Hopefully, the code delivered isn't pure duct tape and is 'clean enough'. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/29170",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4522/"
]
} |
29,177 | So, we've all heard of The Programmers Bill of Rights and XP has a similar concept. It's a common complaint these days that we hear a lot about people's rights but not so much about their responsibilities, so what should be on the programmers bill of responsibilities. That is things that they should do, which they may find unpalatable, but that which separate programmers acting professionally and responsibly, from those who do not. I'm primarily interested in the unpalatable ones and the ones which tend not to happen. That is the ones that programmers tend to shirk and avoid, rather than the ones which 90% of programmers actually want to do (such as always refactor and use source control). So, what should be on the Programmers Bill of Responsibilities? | A programmer has the responsibility to push back poor requirements instead of blindly implementing them. This includes telling clients that what they want is more expensive than other options or has a particular set of risks. It also includes communicating bad news in a professional way - not screaming, calling people stupid, implying they are stupid or other childish behavior. If he pushes back, he should have a set of reasons (more than, "I don't like SQL Server and won't use it") and an alternative plan to present. However, the programmer also has the responsibility of accepting decisions and using tools or designs they may not like if their pushback was not accepted. If a report was requested in SSRS, delivering it in Crystal Reports (which the client may not have) is unacceptable. If a .net solution was required, delivering it in Haskell is unacceptable. If no one else on the team uses a tool or language that you want to use, it is unprofessional to use it if management doesn't agree that it is the best tool for the particular job. A programmer has the responsibility to test his work. (This shouldn't be the only test, but no professional programmer should send out code he has not tested.) This includes testing even the branches of the code you don't expect to hit very often. If you have a set of nested IFs, test all possible routes. A programmer has the responsibility to handle errors and exceptions gracefully and to write error messages that the user will see that are professional and neutral not jokes or insulting. A programmer has the responsibility to protect private data, protect the proprietary code he writes for the company and to protect the users from catastrophe (even self inflicted catastrophe) from their use of the application. A programmer has the responsibility to make sure his code is maintainable and is in source control. A programmer has the responsibility to coordinate with others to make sure his changes do not adversely affect what they are doing. A programmer has the responsibility to recommend the best choice for the client of tools or languages in the design phase not the tool/language he wants to play with and learn. A programmer has the responsibility to work with all the appropriate personnel for a project including the ones he does not like. It is not your job to like people, it is your job to work with them and be polite. A programmer has the responsibility to produce a product that does what was specified in a reasonable time-frame. If the time-frame is not going to be met, he or she has the responsibility to inform management of that as soon as it is known. A programmer has the responsibility to let project management know about impediments to getting the job done. 
They can't fix what they don't know about. A programmer has the responsibility to do the whole task not just the fun, interesting parts. Every job has some boring parts, they still need to be done. This includes things like timesheets and adding discussion items to project management software. It includes things like documentation, code review, etc. A programmer has the responsibility to learn the business domain he is supporting not just programming concepts. A programmer has the responsibility to keep his skills up-to-date. When a programmer messes up, he has the responsibility to do all in his power to fix the problems as soon as humanly possible. This may include bearing the bad news to management rather than trying to hide that you just deleted a critical table in the production database. A programmer has the same responsibilities of any other worker - to show up on time, to work the hours contracted, to request vacation time in advance, to answer phone and email messages (heck to read their emails), to fill in required forms for HR, etc. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/29177",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5095/"
]
} |
29,212 | Especially when writing new code from scratch in C, I find myself writing code for hours, even days without running the compiler for anything but an occasional syntax check. I tend to write bigger chunks of code carefully and test thoroughly only when I'm convinced that the code does what it's supposed to do by analysing the flow in my head. Don't get me wrong - I wouldn't write 1000 lines without testing at all (that would be gambling), but I would write a whole subroutine and test it (and fix it if necessary) after I think I'm finished. On the other side, I've seen mostly newbies that run & test their code after every line they enter in the editor and think that debuggers can be a substitute for carefulness and sanity. I consider this to be a lot of distraction once you've learned the language syntax. What do you think is the right balance between the two approaches? Of course the first one requires more experience, but does it affect productivity positively or negatively? Does the second one help you spot errors at a finer level? | Personally, I must work in small chunks because I am not smart enough to keep hours worth of coding in my biological L1 cache. Because of my limited capabilities, I write small, cohesive methods and design objects to have very loose coupling. More powerful tools and languages make it easier to code longer without building, but there is still a limit for me. My preference is to write a small piece, verify that it works as I expect. Then, in theory, I am free to forget about the details of that piece and treat it as a black box as much possible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/29212",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
29,268 | Quick question about TFS -
Do you guys check in bin/debug folders into TFS?
(if so, why?)
Since the content of those folders are dynamically generated is it better to leave them out so that the programmer will not run into read-only issues? | As a general rule, source control is best used for source only, and generated files are not source. There are exceptions. Sometimes (using Visual Studio) you might want to keep the .pdb file for reading minidumps. Sometimes you can't duplicate a toolchain, so that you can't necessarily recreate generated files accurately. In these cases, you're primarily interested in doing this for released software, not for every VCS change, and in any case these can easily reside in versioned folders. (These are typically binary files, anyway, and don't have comprehensible changes from one version to another, and don't benefit all that much from being in a VCS.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/29268",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2296/"
]
} |
29,280 | Is coding important to be good at computer science? Should one implement the algorithm to know it well ? I remember one cs professor's idiom that " I never code" | You won't really know the algorithm well until you code it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/29280",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11390/"
]
} |
29,433 | I'm currently doing a code review and one of the things I'm noticing are the number of exceptions where the exception message just seems to reiterate where the exception occurred. e.g. throw new Exception("BulletListControl: CreateChildControls failed."); All three items in this message I can work out from the rest of the exception. I know the class and method from the stack trace and I know it failed (because I've got an exception). It got me thinking about what message I put in exception messages. First I create an exception class, if one does not already exist, for the general reason (e.g. PropertyNotFoundException - the why ), and then when I throw it the message indicates what went wrong (e.g. "Unable to find property 'IDontExist' on Node 1234" - the what ). The where is in the StackTrace . The when may end up in the log (if applicable). The how is for the developer to work out (and fix) Do you have any other tips for throwing exceptions? Specifically with regard to the creating new types and the exception message. | I'll direct my answer more to what comes after an exception: what's it good for and how should software behave, what should your users do with the exception? A great technique I came across early in my career was to always report problems and errors in 3 parts: context, problem & solution. Using this dicipline changes error handling enormously and makes the software vastly better for the operators to use. Here's a few examples. Context: Saving connection pooling configuration changes to disk.
Problem: Write permission denied on file '/xxx/yyy'.
Solution: Grant write permission to the file. In this case, the operator knows exactly what to do and which file is affected. They also know that the connection pooling changes didn't take and should be repeated. Context: Sending email to '[email protected]' regarding 'Blah'.
Problem: SMTP connection refused by server 'mail.xyz.com'.
Solution: Contact the mail server administrator to report a service problem. The email will be sent later. You may want to tell '[email protected]' about this problem. I write server side systems and my operators are generally tech savvy first line support. I would write the messages differently for desktop software that have a different audience but include the same information. Several wonderful things happen if one uses this technique. The software developer is often best placed to know how to solve the problems in their own code so encoding solutions in this way as you write the code is of massive benefit to end users who are at a disadvantage finding solutions since they are often missing information about what exactly the software was doing. Anyone who has ever read an Oracle error message will know what I mean. The second wonderful thing that comes to mind is when you find yourself trying to describe a solution in your exception and you're writing "Check X and if A then B else C". This is a very clear and obvious sign that your exception is being checked in the wrong place. You the programmer have the capacity to compare things in code so "if" statements should be run in code, why involve the user in something that can be automated? Chances are it's from deeper in the code and someone has done the lazy thing and thrown IOException from any number of methods and caught potential errors from all of them in a block of calling code that cannot adequately describe what went wrong, what the specific context is and how to fix it. This encourages you to write finer grain errors, catch and handle them in the right place in your code so that you can articulate properly the steps the operator should take. At one company we had top notch operators who got to know the software really well and kept their own "run book" that augmented our error reporting and suggested solutions. To recognise this the software started including wiki links to the run book in exceptions so that a basic explanation was available as well as links to more advanced discussion and observations by the operators over time. If you've had the dicipline to try this technique, it becomes much more obvious what you should name your exceptions in code when creating your own. NonRecoverableConfigurationReadFailedException becomes a bit of shorthand for what you're about to describe more fully to the operator. I like being verbose and I think that will be easier for the next developer who touches my code to interpret. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/29433",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1083/"
]
} |
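One possible way to encode the context/problem/solution discipline described above, sketched in C++. The class name, its fields and the save_pool_config example are made up for illustration and are not from the original answer.

#include <stdexcept>
#include <string>

// Hypothetical exception type that forces the thrower to state all three parts.
class OperatorError : public std::runtime_error {
public:
    OperatorError(const std::string& context,
                  const std::string& problem,
                  const std::string& solution)
        : std::runtime_error("Context: " + context +
                             "\nProblem: " + problem +
                             "\nSolution: " + solution) {}
};

// Example use at the point where the failure is actually understood.
void save_pool_config(bool write_permission) {
    if (!write_permission) {
        throw OperatorError(
            "Saving connection pooling configuration changes to disk.",
            "Write permission denied on file '/xxx/yyy'.",
            "Grant write permission to the file and retry the save.");
    }
    // ... write the file ...
}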
29,513 | Its common knowledge in programming that reinventing the wheel is bad or evil . But why is that? I am not suggesting that it's good. I believe it to be wrong. However, I once read an article that said, if someone is doing something wrong (programming wise) explain to them why its wrong, if you can't, then maybe you should be asking yourself if it is really wrong. That leads me to this question: If I see someone is clearly reinventing the wheel by building their own method of something that is already built into the language/framework. First, for arguments sake, lets assume that their method is just as efficient as the built in method. Also the developer, aware of the built in method, prefers his own method. Why should he use the built in one over his own? | Depends.. As with everything, it's about context: It's Good when: Framework or library is too heavy, and you only require limited functionality. Rolling your own extremely light-weight version that suits your requirement is a better approach. When you want to understand and learn something complex, rolling your own makes sense. You have something different to offer, something others' implementations do not have. Maybe a new twist, new feature etc. It's Bad when: Functionality already exists and is stable and well known (popular). Your version adds nothing new. Your version introduces bugs or limitations (e.g. your version is not thread-safe). Your version is missing features. Your version has worse documentation. Your version is lacking unit tests compared to what it is replacing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/29513",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1785/"
]
} |
29,534 | Along with the other qualities, should a programmer need good debugging skills? If I have an applicant who was not able to find the error in the given program, but was able to solve all puzzles and programs, should I consider him for the job? EDIT: The puzzles are normal red, blue and red-blue balls like. The programs are like finding continuous k zeros in an array. The debugging program is something which fails because of a condition which should be >=, but instead is >. Everything is on paper. | Yes, it's very important. About that particular candidate, it is possible that s/he was not familiar enough with code-base x to debug it. A good problem solver should be able to debug, as all that is usually required is to have a very logical method/approach. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/29534",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8820/"
]
} |
29,614 | Programming always required to learn new concepts, paradigms, features and technologies and I always have been failed at first attempt to understand new concept what i encounter. I start to blame and humiliate myself without remember before how i understood new concept which i hadn't understand it before. I can hardly stop to tell myself "why i cant understand ? Am i stupid or idiot ? Yes, i am stuppiiddddd!!!" What your inner voice tells if you can not understand new concept after spend long time till been tired or hopeless ? How do you handle your self-esteem in such situations ? | Personally, everything is an analogy away. And if I don't understand something, it's probably because I haven't been shown the right concept to bridge me over to the Land of Understand. I usually keep scouring around for different tutorials and eventually one of them will take a different turn than the previous tutorials did that I didn't grok. Then I'll go back and read all of them and finally piece it together. And then rage why the other tutorials didn't present it the same way. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/29614",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3664/"
]
} |
29,657 | I came across something like this in an open-source project. Methods that modify instance attributes return a reference to the instance. What is the purpose of this construct? class Foo(object):
    def __init__(self):
        self.myattr = 0
    def bar(self):
        self.myattr += 1
        return self | It's to allow chaining. For example: var Car = function () {
    return {
        gas : 0,
        miles : 0,
        drive : function (d) {
            this.miles += d;
            this.gas -= d;
            return this;
        },
        fill : function (g) {
            this.gas += g;
            return this;
        },
    };
} Now you can say: var c = Car();
c.fill(100).drive(50);
c.miles => 50;
c.gas => 50; | {
"source": [
"https://softwareengineering.stackexchange.com/questions/29657",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8460/"
]
} |
29,692 | Always when the term RAII is used, people are actually talking about deconstruction instead of initialisation. I think I have a basic understanding what it might mean but I'm not quite sure. Also: is C++ the only RAII language? What about Java or C#/.NET? | Resource Acquisition Is Initialization means that objects should look after themselves as a complete package and not expect other code to tell an instance "hey by the way, you're going to be cleaned up soon - please tidy up now." It does usually mean there is something meaningful in the destructor. It also means you write a class specifically to manage resources, knowing that under certain hard-to-predict circumstances, like exceptions being thrown, you can count on destructors executing. Say you want to write some code where you're going to change the windows cursor to a wait (hourglass, donut-of-not-working, etc) cursor, do your stuff, and then change it back. And say also that "do your stuff" might throw an exception. The RAII way of doing that would be to make a class whose ctor set the cursor to wait, whose one "real" method did whatever it was you wanted done, and whose dtor set the cursor back. Resources (in this case the cursor state) are tied to the scope of an object. When you acquire the resource you initialize an object. You can count on the object getting destructed if exceptions are thrown, and that means you can count on getting to clean up the resource. Using RAII well means you don't need finally . Of course, it relies on deterministic destruction, which you can't have in Java. You can get a sort of deterministic destruction in C# and VB.NET with using . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/29692",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10199/"
]
} |
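A minimal C++ sketch of the pattern described above, using a file handle rather than the Windows cursor so it stays portable; the File class here is an illustration, not code from the post.

#include <cstdio>
#include <stdexcept>

// Acquire the resource in the constructor, release it in the destructor.
class File {
public:
    explicit File(const char* path, const char* mode)
        : handle_(std::fopen(path, mode)) {
        if (!handle_) throw std::runtime_error("could not open file");
    }
    ~File() { std::fclose(handle_); }  // runs even if an exception unwinds the stack

    File(const File&) = delete;        // the wrapper owns the handle exclusively
    File& operator=(const File&) = delete;

    std::FILE* get() const { return handle_; }

private:
    std::FILE* handle_;
};

void write_report() {
    File f("report.txt", "w");         // resource acquisition is initialization
    std::fputs("header\n", f.get());
    // If anything below throws, ~File() still closes the handle,
    // so no 'finally' block is needed.
    std::fputs("body\n", f.get());
}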
29,792 | Do you think it makes sense to enforce that every member of a team must use the same IDE? For instance all engineers that are already on the team use IDE X. Two new engineers come and want to use IDE Y instead because that's what they have been using for several years now. Do you have any experience with "mixed IDE" teams? If so what is it? | Provided the 'official' build system (as used by the Continuous Build servers) is the same for all, I don't see any reason why each member of the team could not choose the tools he wants... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/29792",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11344/"
]
} |
30,135 | One of my professors says "the syntax is the UI of a programming language", languages like Ruby have great readability and it's growing, but we see a lot of programmers productive with C\C++, so as programmers does it really matter that the syntax should be acceptable? I would love to know your opinion on that. Disclaimer: I'm not trying to start an argument. I thought this is a good topic of discussion. Update: This turns out to be a good topic. I'm glad you are all participating in it. | Yes it does.
If you're in doubt, take APL, or J, or Brainfuck, or even plain and simple Lisp or Forth, and try to understand any not entirely trivial program in it. Then compare to e.g. Python. Then compare the same Python (or Ruby, or even C#) to things like Cobol or VB6. I'm not trying to say that hairy syntax is bad and natural-language-like syntax is good in all circumstances. But obviously syntax does make a huge difference. All in all, everything you can write in the most beautiful programming language you can also write as a Turing machine program — but you usually don't want to, do you? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/30135",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10963/"
]
} |
30,210 | I heard a while back that there used to be a compiler that attempted to fix syntax errors by analyzing context and inferring what was intended. Does such a compiler really exist? Obviously it has little practical value, but would be very interesting to play with and learn from. | In some sense, the act of compiling is inferring what certain syntax is meant to do, and hence a syntax error is when the compiler isn't able to figure it out. You can add more "guessing" to have the compiler infer further things and be more flexible with the syntax, but it must do this inferring by a specific set of rules. And those rules then become a part of the language, and is no longer errors. So, no, there are no such compilers, really, because the question doesn't make sense. Guessing what syntax errors are meant to do according to some set of rules just becomes a part of the syntax. In that sense, there is a good example of a compiler that does this: Any C compiler. They will often just print out a warning of something that isn't like it should be, and then assume you meant X, and go on. This is in fact "guessing" of unclear code (although it's mostly not syntax per se), something that just as well could have stopped compilation with an error, and therefore qualify as an error. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/30210",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/231/"
]
} |
30,254 | These days, so many languages are garbage collected. It is even available for C++ by third parties. But C++ has RAII and smart pointers. So what's the point of using garbage collection? Is it doing something extra? And in other languages like C#, if all the references are treated as smart pointers(keeping RAII aside), by specification and by implementation, will there still be any need of garbage collectors? If no, then why is this not so? | So, what's the point of using garbage collection? I'm assuming you mean reference counted smart pointers and I'll note that they are a (rudimentary) form of garbage collection so I'll answer the question "what are the advantages of other forms of garbage collection over reference counted smart pointers" instead. Accuracy . Reference counting alone leaks cycles so reference counted smart pointers will leak memory in general unless other techniques are added to catch cycles. Once those techniques are added, reference counting's benefit of simplicity has vanished. Also, note that scope-based reference counting and tracing GCs collect values at different times, sometimes reference counting collects earlier and sometimes tracing GCs collect earlier. Throughput . Smart pointers are one of the least efficient forms of garbage collection, particularly in the context of multi-threaded applications when reference counts are bumped atomically. There are advanced reference counting techniques designed to alleviate this but tracing GCs are still the algorithm of choice in production environments. Latency . Typical smart pointer implementations allow destructors to avalanche, resulting in unbounded pause times. Other forms of garbage collection are much more incremental and can even be real time, e.g. Baker's treadmill. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/30254",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/963/"
]
} |
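The first point above (reference counting alone leaks cycles) can be demonstrated with standard C++ smart pointers; this is a sketch for illustration only.

#include <memory>

struct Node {
    std::shared_ptr<Node> next;   // strong reference keeps the peer alive
    // A tracing GC would reclaim an unreachable cycle; pure reference
    // counting cannot, because each node still holds a count for the other.
};

struct SafeNode {
    std::weak_ptr<SafeNode> next; // weak reference breaks the cycle by hand
};

int main() {
    {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;
        b->next = a;              // cycle: use_count stays at 2 for both
    }                             // a and b go out of scope, but neither
                                  // Node is ever destroyed: memory leaks

    {
        auto a = std::make_shared<SafeNode>();
        auto b = std::make_shared<SafeNode>();
        a->next = b;
        b->next = a;              // weak_ptr does not bump the strong count
    }                             // both SafeNodes are destroyed here
}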
30,255 | Context Working as a freelance developer, I often made websites completely based on XSLT. In other words, on every request, an XML file is generated, containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, if this menu is dynamic/configurable, the text to display in a specific area of the page, etc. Then XSL process (caches, etc.) it to HTML/XHTML page to send to the browser. It has a good point to make it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but which I prefer to other template engines because it's much more powerful than most of template engines, and because I know it better and like it. It is also possible, when need, to give an access to raw XML data on demand for an automated access, without the need to create separate APIs. Of course, it will fail completely on any medium-scale or large-scale website, since, even with good caching techniques, XSL still degrades overall website performance and requires more CPU serverside. Question Modern browsers have the ability to take an XML file and to transform it with an associated XSL file declared in XML like <?xml-stylesheet href="demo.xslt" type="text/xsl"?> . Firefox 3 can do it. Internet Explorer 8 can do it too. It means that it is possible to migrate XSL processing from the server to the client side for 50% of users (according on browser statistics on several websites where I may want to implement this). It means that those 50% of users will receive only the XML file at each request, thus reducing their and server's bandwidth (XML file being much shorter than its processed HTML analog), and reducing server's CPU usage. What are the drawbacks of this technique? I thought about several ones, but it doesn't apply in this situation: Difficult implementation and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult then the actual one. The only change to make is to add XSL file link to every XML, and to add a browser check. More IO and bandwidth usage, since the XSLT file will be downloaded by the browsers, instead of being cached by the server. I don't think it will be a problem, since XSLT file will be cached by the browsers (like images, or CSS, or JavaScript files are cached actually). Possibly some problems on client side, like maybe problems when saving a page in some browsers. Difficulty to debug code: it is impossible to obtain an HTML source the browser is actually using, since the only displayed source is the downloaded XML. On the other hand, I rarely go look at HTML code on client side, and in most cases, it is unusable directly (whitespace being removed). | Browsers can't progressively render XSLT This means that nothing else loads and nothing is displayed until all data and the whole stylesheet is loaded and processed. You're missing out on progressive rendering and prefetching of images, CSS & JS. Initial load is delayed by another request For small-ish files (<20kb) number of requests, not bandwidth, is the bottleneck for front-end performance, and most pages and stylesheets will fall into this category. If you have large pages, then it's even worse — see the first point. You probably aren't saving any bandwidth XSLT itself is quite verbose and might need to contain templates for the whole site and logic for all rare cases, not just the things used on the current page. 
You still have to include all data marked up in the main XML file you're sending, e.g. if you're sending a blog post, then there's no magic that XSLT can do to make it substantially smaller. If you're sending complex data, then it'll have lots of markup anyway. Caches are overrated Browser caches are not that great : 40-60% of Yahoo!’s users have an empty cache experience and ~20% of all page views are done with an empty cache. and on mobile, where latency makes extra requests most expensive, caches are even worse . Check your bounce rate — those are users who don't benefit from cached XSLT, and even pay extra price to download the stylesheet and wait for it to be processed. gzip is a reverse XSLT Most transformations done via XSLT come down to changing terse markup to more verbose one and adding repetition. But gzip is great at removing repetition/redundancy from files! You should be using gzip anyway (it's wasteful to send XML uncompressed). It's very likely that gzipped size of processed document will be about the same as gzipped size of unprocessed XML — but you won't have to send extra XSLT, and browsers will be able to start rendering as soon as first packets arrive. Clients might be slow Even assuming best case of loading from cache, XSLT processing on client-side is faster only if user's CPU is faster, and their XSLT engine is faster. On server-side you can do all kinds of optimisation tricks (e.g., cache processed fragments or even whole pages). You can use latest, fastest XSLT processor (browsers have only XSLT 1.0 and likely not very optimized). And your server probably has beefier CPU than many cheap office computers, phones, etc. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/30255",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
30,297 | In legacy code I occasionally see classes that are nothing but wrappers for data. something like: class Bottle {
int height;
int diameter;
Cap capType;
getters/setters, maybe a constructor
} My understanding of OO is that classes are structures for data and the methods of operating on that data. This seems to preclude objects of this type. To me they are nothing more than structs and kind of defeat the purpose of OO. I don't think it's necessarily evil, though it may be a code smell. Is there a case where such objects would be necessary? If this is used often, does it make the design suspect? | Definitely not evil and not a code smell in my mind. Data containers are a valid OO citizen. Sometimes you want to encapsulate related information together. It's a lot better to have a method like public void DoStuffWithBottle(Bottle b)
{
// do something that doesn't modify Bottle, so the method doesn't belong
// on that class
} than public void DoStuffWithBottle(int bottleHeight, int bottleDiameter, Cap capType)
{
} Using a class also allows you to add an additional parameter to Bottle without modifying every caller of DoStuffWithBottle. And you can subclass Bottle and further increase the readability and organization of your code, if needed. There are also plain data objects that can be returned as a result of a database query, for example. I believe the term for them in that case is "Data Transfer Object". In some languages there are other considerations as well. For example, in C# classes and structs behave differently, since structs are a value type and classes are reference types. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/30297",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6415/"
]
} |
30,355 | My last job evaluation included just one weak point: timeliness. I'm already aware of some things I can do to improve this but what I'm looking for are some more. Does anyone have tips or advice on what they do to increase the speed of their output without sacrificing its quality? How do you estimate timelines and stick to them? What do you do to get more done in shorter time periods? Any feedback is greatly appreciated, thanks, | Turn off the computer. Grab a pencil and some paper. Sketch out your design. Review it with your peers. Then write the code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/30355",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21011/"
]
} |
30,449 | I’ve been programming C# professionally for a bit over 4 years now. For the past 4 years I’ve worked for a few small/medium companies ranging from “web/ads agencies”, small industry specific software shops to a small startup. I've been mainly doing "business apps" that involves using high-level programming languages (garbage collected) and my overall experience was that all of the works I’ve done could have been more professional. A lot of the things were done incorrectly (in a rush) mainly due to cost factor that people always wanted something “now” and with the smallest amount of spendable money. I kept on thinking maybe if I could work for a bigger companies or a company that’s better suited for programmers, or somewhere that's got the money and time to really build something longer term and more maintainable I may have enjoyed more in my career. I’ve never had a “mentor” that guided me through my 4 years career. I am pretty much blog / google / self taught programmer other than my bachelor IT degree. I’ve also observed another issue that most so called “senior” programmer in “my working environment” are really not that senior skill wise. They are “senior” only because they’ve been a long time programmer, but the code they write or the decisions they make are absolutely rubbish! They don't want to learn, they don't want to be better they just want to get paid and do what they've told to do which make sense and most of us are like that. Maybe that’s why they are where they are now. But I don’t want to become like them I want to be better. I’ve run into a mental state that I no longer intend to be a programmer for my future career. I started to think maybe there are better things out there to work on. The more blogs I read, the more “best practices” I’ve tried the more I feel I am drifting away from “my reality”. But I am not a great programmer otherwise I don't think I am where I am now. I think 4-5 years is a stage that can be a step forward career wise or a step out of where you are. I just wanted to hear what other have to say about what I’ve mentioned above and whether you’ve experienced similar situation in your past programming career and how you dealt with it. Thanks. | You open a very interesting question. I wholeheartedly agree with you. I've made similar observations. I've been programming professionally for several years already and what I have observed is that the amount of good programmers out there, of great developers who love their work and can do it with quality and passion is pretty much close to zero. I probably met only one person who could teach me something. Most of what I know I have learned by myself, reading books and forums, asking in forums and googling for revelation thoughts. After a while I don't regret this much. The options to learn in a working environment can often be limited. You don't start things. You don't finish them. You don't design, don't improve, don't refactor, don't think about architecture, you just code and hack things together. It's how most of the shops work. Not only you don't learn anything, it's more likely that you will learn mostly wrong things how NOT to develop software. I've been continuously seeing scary things around me, all those anti-pattern you have heard of. What is worse, I'm forced to do them myself. I don't know how it happened, but I managed to somehow build an input barrier. I stay open, listen and if I see some potential for self-improvement I research and maybe adopt some technique or idea. 
But no BS can ever get through. I have worked in badly run projects for a long time, but I have not adopted any of those bad techniques for myself. I pretty much soon understood that if you wish satisfaction with programming, forget about job and have your own personal project. It's where you can apply all your love, passion and knowledge to do things right with the high quality level. You will learn a great deal of stuff, a myriad of things you would never have been exposed to and challenged with when hacking boring corporate staff. I only do my job for paycheck and get satisfaction with my own personal projects. One thing I truly don't understand is how this situation is possible nowadays. Software development has matured a lot. It has had good and bad experience. Many successful projects and a great deal of failed ones. There is experience with long-term projects and understanding what long-term effects one or the other organization will bring upon the project. There are numerous studies available and good books written. "Pragmatic Programmer", "Code Complete", "Mythical Man-Month", "Design of everyday things" and others. Why nobody but us, the programmers ever reads them? How it is possible that even after 20 years of working in IT most developers and managers never found a time to read one or the other methodology book. They are written for, but hardly read by, those who need this medication most. Regarding career perspectives. What I also have noticed in general on the job market for employees, is that employers out there increasingly lose interest in quality work (imagine they had it once) are shopping more and more for the cheapest work craft available. You find it hard to sell your knowledge, experience and understanding of the universe to anyone. It's not in demand. What is in demand is having your projects ruined by the juniors who have no experience and desire to do professional work. Cheap people are used and abused and then thrown out so that the next round begins. Projects are also outsourced to low-wage destination where they are done by people who apparently begin to learn programming just with your project. That's one thing I truly don't understand. I'm entertaining more and more the idea that I will drop employed programming work at some time in the future. I would very much like to work in my own start-up with my own project. If not that, I'm considering trying freelancing or probably changing the payed job nature. After all, I hardly learn anything during working hours and I don't get any satisfaction at all. I can do anything 9-5 and always have satisfaction with my own personal projects. I learn much from online communities. I receive here attention, support for my ideas and on occasions even recognition I could never get with my job and my work colleagues. Will see where I will be in the future. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/30449",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12107/"
]
} |
30,895 | While reviewing a co-worker's code, I came across some spelling mistakes in function names and also grammatical errors like doesUserHasPermission() instead of doesUserHavePermission() in function and variable names. Should I point these out to him or am I being too pedantic by noticing these? | Code with spelling and grammar errors is unmaintainable . People won't remember the bad grammar, so they'll try to call the function as it should have been written, and that's how bugs happen. You can't grep for something in the code if you don't know how it's spelled. Most people who make grammar/spellings do so inconsistently, so they'll introduce many bugs with mismatched naming. This is particularly problematic in languages that don't require variables to be explicitly declared before use, because you can introduce a new spelling and your code won't come to a grinding halt to let you know you screwed up. Correcting these problems is not pedantic, nor is it necessitated primarily by others' opinions of one's intelligence, literacy, etc (though that's a big side-effect); it is about writing quality, maintainable code . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/30895",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2861/"
]
} |
30,908 | What is the difference between Hash and Dictionary ? Coming from a scripting background, I feel that they are similar, but I wanted to find out the exact differences. Googling did not help me much. | Hash is an extremely poorly named data structure where the programmer has confused the interface with implementation ( and was too lazy to write the full name, i.e. HashTable instead resorting to an abbreviation, Hash ). Dictionary is the “correct” name of the interface (= the ADT ), i.e. an associative container that maps (usually unique) keys to (not necessarily unique) values. A hash table is one possible implementation of such a dictionary that provides quite good access characteristics (in terms of runtime) and is therefore often the default implementation. Such an implementation has two important properties: the keys have to be hashable and equality comparable . the entries appear in no particular order in the dictionary. (For a key to be hashable means that we can compute a numeric value from a key which is subsequently used as an index in an array.) There exist alternative implementations of the dictionary data structure that impose an ordering on the keys – this is often called a sorted dictionary (and is usually implemented in terms of a search tree, though other efficient implementations exist). To summarize: a dictionary is an ADT that maps keys to values. There are several possible implementations of this ADT, of which the hash table is one. Hash is a misnomer but in context it’s equivalent to a dictionary that is implemented in terms of a hash table. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/30908",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8067/"
]
} |
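Put in C++ terms as an illustration of the interface-versus-implementation point: std::map and std::unordered_map expose essentially the same dictionary operations, but the former is a sorted, tree-based implementation while the latter is a hash table, which is why only the latter requires a hashable key and gives no iteration order guarantee.

#include <iostream>
#include <map>
#include <string>
#include <unordered_map>

int main() {
    // Hash-table implementation: keys must be hashable, iteration order
    // is unspecified.
    std::unordered_map<std::string, int> hashed{{"pear", 3}, {"apple", 1}};

    // Tree-based (sorted) implementation: keys must be ordered with '<',
    // iteration visits them in sorted order.
    std::map<std::string, int> sorted{{"pear", 3}, {"apple", 1}};

    // Same dictionary operations on both.
    hashed["plum"] = 7;
    sorted["plum"] = 7;

    for (const auto& kv : sorted)           // prints apple, pear, plum
        std::cout << kv.first << ' ' << kv.second << '\n';
    std::cout << hashed.at("plum") << '\n'; // 7
}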
31,558 | We are bootstrapping a new team of very small size (say 2-5); my question is:
which type of version control works best for this kind of team, either centralized or distributed. | Distributed all the way, there is really no point for centralized anymore IMHO, especially if you are talking about team development. Another vote for Mercurial, no hassle to set up under Windows, and bitbucket.org has free repositories (that can be private) with unlimited space. If you are planning to work on an open source project, git and GitHub seem to be more adequate/popular. However, if you haven't ventured into DVCSs, I recommend you start with Mercurial and this awesome guide. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/31558",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8538/"
]
} |
31,567 | What kind of non-technical training course do you suggest for a programmer? Example could be public speaking course, presentation skill, English, business writing, or anything not related to programming or software engineering itself. | Anything related to communication, like public speaking, would be great. You will be considered a LOT more valuable as a programmer if you are able to communicate well with your team and the stakeholders of the software you build. A lack of communication skills will absolutely stunt your growth in this field. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/31567",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8486/"
]
} |
31,603 | Are there any particular incidents which are responsible for the low reputation Microsoft (and Bill Gates) has in the eyes of the open source community? Microsoft is clearly not the only proprietary company. Companies like Apple have done a lot worse when it comes to restrictions on software . Why does Microsoft get most of the hatred from the open source community? | I guess if there's any one "incident" then it was the so-called " Halloween Documents ", which were a series of memoranda that were leaked by a Microsoft employee to Eric S. Raymond in the late 90's, detailing Microsoft's desire to "disrupt the progress of open source software." It is worth mentioning a fact that highlights the aforementioned statement: that Microsoft often engages in negative (non-technical) campaign against its competitors. One of the greatest foul plays in Microsoft's history is paying someone to write a book claiming that Linux source code was stolen from Minix , in an attempt to make companies afraid to use Linux, so that it can sell its own products, in the basis that it was not legal to use stolen source code. Fortunatelly, Andrew Tanenbaum wrote an article to refute the accusation . While not so intensively, Microsoft still engages itself in practices like that, as one can see from the recent claim (in 2007) that Linux infringes Microsoft patents ( 1 and 2 ) or the more recent (2012) "Droid rage feud" on Twitter. A link for the specific tweet can be found here. While Microsoft's attitude has somewhat mellowed (compared to the past), many in the open source community still see Microsoft as a rather aggressive (and foul) competitor, particularly with respect to the negative campaigns and to the way they license their patented technologies (the " Open Specification Promise "). Now, whether that reputation is (still) justified is another question. Personally, I don't think Microsoft is as "evil" as some people would like you to think - certainly not compared to some other companies out there. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/31603",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8783/"
]
} |
32,385 | I may have to switch to Java for new project. I have very little knowledge about Java, because I've mainly studied and used C#, and I'm afraid of the differences between these two language/platform should likely to cause me many problems. Which are the pitfalls/gotchas I should care about? | Here are some important Java gotchas when coming from C#: In Java, switch cases can silently fall-through to the next, so make sure you always put break whenever appropriate. You also can't switch on String in Java. Generics are non-reified and parameterizable with reference types only. There is no List<int> , only a List<Integer> . Autoboxing hides the verbosity, but you can get NullPointerException when unboxing a null . Also, == and != on two boxed primitive types perform reference comparison. ... because == and != on two reference types (e.g. String ) are always reference comparison An int can be autoboxed to an Integer ; there is no autoboxing from int[] to Integer[] . Java's byte , short , int , long are signed only. Watch for unintended sign extension. No multidimensional arrays, only array of arrays in Java. Most sub* ranged query methods use inclusive lower bound and exclusive upper bound String.substring(int beginIndex, int endIndex) CharSequence.subSequence(int start, int end) List.subList(int fromIndex, int toIndex) SortedSet<E>.subSet(E fromElement, E toElement) SortedMap<K,V>.subMap(K fromKey, K toKey) See also Java Puzzlers: Traps, Pitfalls, and Corner Cases A fun but at the same time very educational read. The book also has many successors presentations available on the web, e.g: 2007 Google Tech Talk video presentation TS-5186: Return of the Puzzlers: Schlock and Awe TS-1188: The Continuing Adventures of Java Puzzlers: Tiger Traps TS-2707: Java Puzzlers, Episode VI: The PhantomReference Menace, Attack of the Clone, Revenge of the Shift Wikipedia/Comparison of Java and C Sharp Related questions On some topics listed above: James Gosling’s explanation of why Java’s byte is signed Java noob: generics over objects only? (yes, unfortunately) Switch Statement With Strings in Java? Are upper bounds of indexed ranges always assumed to be exclusive? Is it guaranteed that new Integer(i) == i in Java? (YES!) When comparing two Integers in Java (with == / != ) does auto-unboxing occur? (NO!) Why does int num = Integer.getInteger("123") throw NullPointerException ? (!!!) On general Java gotchas: Java - Common Gotchas What are the pitfalls of a Java noob? Most awkward/misleading method in Java Base API ? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32385",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11840/"
]
} |
32,425 | I've read lots of books for various programming languages, Java, Python, C, etc. I understand and know all of the basics of the languages and I understand algorithms and data structures. (Equivalent of say two years of computer science classes) BUT, I still can't figure how to write a program that does anything useful. All of the programming books show you how to write the language, but NOT how to use it! The programming examples are all very basic, like build a card catalog for a library or a simple game or use algorithms, etc. They dont't show you how to develop complex programs that actually do anything useful! I've looked at open-source programs on SourceForge , but they don't make much sense to me. There are hundreds of files in each program and thousands of lines of code. But how do I learn how to do this? There's nothing in any book I can buy on Amazon that will give me the tools to write any of these programs. How do you go from reading Introduction to Java or Programming Python, or C Programming Language, etc.. to actually being able to say, I have an idea for X program? Is this how I go about developing it? It seems like there is so much more involved in writing a program than you can learn in a book or from a class. I feel like there is something. How can I be put on the right track? | Building more complex programs comes with experience. When I first programmed I thought I was doing well if it was over 25 lines long (and I had to use the scroll bar) Now I write hundreds of lines a day for years on the same project application. You might find this page interesting "Teach Yourself Programming in Ten Years" http://norvig.com/21-days.html BTW: It is very hard to start a program. A writer might call it "writers block". Instead I suggest you start writing code and improve it. Don't be afraid to delete large sections which don't do what you need. Start again, this time you will write with a better idea of what you are doing. Start again and you will find you didn't need half the stuff you wrote last time. When an author writes a story, it takes a long time, a lot of writing and rewriting etc. lots of reviews and feedback and its only finished when it has to be published (released) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32425",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |