source_id (int64, 1 to 4.64M) | question (string, 0 to 28.4k chars) | response (string, 0 to 28.8k chars) | metadata (dict)
---|---|---|---|
16,689 | Today, my manager told me that I must work over time to make up for a lack of planning and project management. Instead of incentivizing this unpaid mandatory overtime, my manager has made it clear I have no choice in the matter. This isn't a single project. We currently have a dozen projects all on the go at one time, and I simply can't get them all done. Therefore, I must work overtime so we don't have to push deadlines. Is this a sign of an ignorant or disrespectful manager, or simply an inexperienced one? I'm in this position because of a lack of planning and project management (I think). How might I avoid this in the future? I'm no Project Manager, it isn't my strength. What are good ways to get an employee to work overtime if you can't directly pay them? Good incentives, etc. From what I hear, gaining your employee's respect is the single best way to get your employees to work over time, although you should never make a habit of it. | My manager told me that I must work over time to make up for a lack of planning and project management. Instead of incentivizing this unpaid mandatory overtime, my manager has made it clear I have no choice in the matter. This is a clear sign of a death march. I strongly recommend the book Death March . It will give you ideas of how to deal and cope with death marches as well as helping you decide if and when it is time to quit. Sadly, death marches are the norm in software development, and not quite the flaming emergency they are made out to be. An article written some years ago pointed out why other industries got rid of "crunch mode" (or "death marches") - they were the worst way to get work done. As a side question, what are good ways to get an employee to work overtime if you can't directly pay them? Again, I refer you to the book Death March. Some organizations (notably big-4/3/2/1 accounting and consulting firms) use a "Marine Corps" mentality: "Sleep is for sissies! There will be time to sleep when we are dead!" The movie 300 has some entertaining examples of this sort of mentality. There are other methods for motivating (or trying to) workers in death marches. If this is a one-time screw up by your mismanager, then probably the only thing to do is suck it up and get to work. If this happens all the time, then it is his/her/its incompetence at work and things need to change. A useful quote to remember comes from the movie Goldfinger: Once is happenstance. Twice is coincidence. The third time it's enemy action. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16689",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3382/"
]
} |
16,708 | I started seriously programming as a hobbyist, student and then intern about 4 years ago, and I've always done small projects on the side as a learning exercise. School's over now, though, and I spend my days at work as a software developer. I would still love to do projects on the side to learn about areas in computer science that I'm not exposed to at work, but I've noticed that after 8 hours of staring at an IDE it's far too tempting to veg out. Lately, any time I do get up the gumption to work on something for a few hours, it gets left by the wayside. Anyone have any advice for sticking with side projects when you spend most of your day coding? | One tip - make sure your hobby project has nothing whatsoever to do with your day job. If you use C++ at work, use something else in your hobby projects. This will help you avoid some of the burnout because you're at least switching to a different IDE and/or skill set. But, a hobby is a hobby... so don't fret it. It's supposed to be relaxing, not stressful. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16708",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5323/"
]
} |
16,760 | I must do a quick assessment of dozens of fresh students in very little time. I have a 30-minute meeting with each of them (over two or three days). I want to use most of the time to discuss non-technical aspects, so I plan to use 10 minutes for technical questions - the same questions for all of them. They are all hired already (they are students); what I need to know is the average level in order to prepare a training course. Ideally, the difficulty should be progressive in order to set a level for each. I will average the levels, and I'll also average the results for each question. If you had only 10 minutes and 10 questions to ask a candidate, what would be your top 10? | Three questions, elaborating on the end of Eric Lippert's answer here: Question 1: On a scale of 1 - 10, where do you rate yourself in (name the skill here)? They answer [n], a number between 1 and 10. Question 2: What could you learn from someone at level [n+1]? Question 3: What could someone at level [n-1] learn from you? The most important factor in your decision is to determine where a student (realistically) places themselves, and those three questions will help you to determine that quickly. It also helps identify people who might be compromised by the Dunning-Kruger effect (on either end), but that is another topic. If anyone can find the reference to this method on SO and edit this post to include a link, I would really appreciate it. Anyway, that should fall well under ten minutes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16760",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
16,769 | As I get it, having an error (even a typo like or missing ";") in your whiteboard code will often cost you some interview points. Avoiding that will inevitably make one proof-reading code again and again (losing time and possibly neural energy / concentration) or even using a simpler (and thus less effective) algorithm -- and both these ways are "costly" again! So, why not just fast write code as elegant and effective as you would having a (unit) testing framework at your disposal and then just normally test it it (just on the whiteboard)? Has anyone tried / seen this approach? Is the whole idea worthy? [this also applies to the pen-and-paper case of course] | I absolutely want you to test the whiteboard code I ask you to write. I want you to talk out loud while you write it, look it over, spot most of the syntax mistakes you made, and point out how it could be more efficient. In fact, that's kind of the point of doing it at the whiteboard. It's not a one-shot, write-it-all-out, uh-huh-you-get-70/100 kind of thing. It's a conversation, mediated by code and held at the whiteboard instead of across my desk. Here are some great ways to fail the "Whiteboard coding" test: refuse it don't ask a single clarifying question (language, platform, something about the requirements) AND don't tell me your assumptions about any of it AND make assumptions that are way off what I would have answered (eg: write it in Fortran, interpret "display" or "print" as "write to the event log", that sort of thing. I might allow it if you told me in advance those were your assumptions) ask me what language I want it in, receive an answer that is in the job description, and then write it in a different language because you're not comfortable in the language I asked for. (We're consultants here. I am testing for consultant behaviour as much as coding. Asking the client is only correct if the client actually has a choice. Controlling conversations with people who will pay you is hard. This is lesson 1. It's a mark against you on any topic, but for the specific "you're hiring an X programmer but I don't want to write in X for you" you now have two big black marks.) show me what an architecture astronaut you are by filling two whiteboards with interfaces, factory patterns, abstractions, injections, and tests when I wanted you to "print the numbers from one to 5". (you think I'm exaggerating but I had a guy who generalized my problem dramatically - sticking to the example above let's say instead of 1 to 5 his solution would do any arbitrary sequence of integers (got from where? I wondered) and was 5 times as long as anyone else's - and he forgot to actually call the function that did the work. Repeated prompting and suggesting that he walk through it as though he was the debugger did not lead to his noticing that the function was never called.) I always say "do you like that?" "can you improve that?" "walk me through that" and the like. Typically the missing semi colon gets spotted, or the off-by-one, in that conversation. If not, I usually mark it up to nerves. Other things you may not think matter at the whiteboard that matter to me: when you're done, can I still read it? Have you smudged, scribbled over, switched colours, drawn arrows, crossed out and generally left a mess that can't now be used? Or are you aware that whiteboards are erasable, pointed to lines of code in the air instead of circling/arrowing them, and left me something I could take a picture of and keep in the design file? 
how much did you ask me as you did it? Do you like to be left alone and not discuss your code, or do you see code as a collaborative thing? How did you respond when I asked you things while you were still writing it? did you sneer at the "easy" task or faint at the "hard" one? Were you rude about being asked to show you can code? Are you easily intimidated by a technical problem, or arrogant about your ability to come up with a good algorithm? are you working it out in your head, or remembering a solution you read somewhere? I can usually tell for the hard problems. did you plan ahead about where you started writing? Folks who run out of whiteboard usually start too low or write too big - I can tell they didn't know this was going to be 20 lines of code and so only left room for 5 - believe it or not this tiny detail is mirrored in bigger estimating tasks as well. did you look it over before you said you were done? Did I see you pointing or tapping your way through it and testing it yourself before I asked you to? When I prompted you, or asked you specific questions about it, did you look at it again, or just go from memory? Are you willing to consider that your first draft might not be complete? I strongly recommend practicing coding at the whiteboard. I always warn interviewees that they will be asked to do it. If you have access to an actual whiteboard then set yourself some simple problems and practice doing them there. It will help your performance and your confidence. Sorry I know I'm in TL;DR territory but here's the thing - coding at the whiteboard is about more than coding . It's a test of more than your grasp of syntax. There are a lot of behaviours of good programmers that are demonstrated in your response to this task. If you think it's only about coding you are missing the point. In other conversations about whiteboard testing, people tell me I may reject a good candidate with it. Honestly, that's a risk I'm willing to take. Every hiring round contains several people I could hire. Some people with great resumes, who are doing ok in the question-and-answer part of the interview, fall apart at the whiteboard and clearly cannot (with any amount of prompting) write simple code in the language they claim to know. I might have hired some of these. Any tool that prevents that is a tool I will continue to use. I have never ended up in a no-one to hire boat because all my candidates messed up at the whiteboard and I don't expect I ever will. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16769",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5933/"
]
} |
16,807 | I thought about it and could not come up with an example. Why would somebody want to catch an exception and do nothing about it? Can you give an example? Maybe it is just something that should never be done. | I do it all the time with things like conversion errors in D: import std.conv, std.stdio, std.exception;
void main(string[] args) {
enforce(args.length > 1, "Usage: foo.exe filename");
double[] nums;
// Process a text file with one number per line into an array of doubles,
// ignoring any malformed lines.
foreach(line; File(args[1]).byLine()) {
try {
nums ~= to!double(line);
    } catch(ConvException) {
// Ignore malformed lines.
}
}
// Do stuff with nums.
} That said, I think that all catch blocks should have something in them, even if that something is just a comment explaining why you are ignoring the exception. Edit: I also want to emphasize that, if you're going to do something like this, you should be careful to only catch the specific exception you want to ignore. Doing a plain old catch {} is almost always a bad thing. A short C++ sketch of the same idea follows this entry. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16807",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3262/"
]
} |
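A minimal C++ sketch of the same idea as the answer above, using a hypothetical file-of-numbers example: only the specific parse exceptions are swallowed, each otherwise-empty handler carries a comment, and anything unexpected still propagates.

#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>
#include <vector>

// Collect one number per line, silently skipping malformed lines.
std::vector<double> parseNumbers(std::istream& in) {
    std::vector<double> nums;
    std::string line;
    while (std::getline(in, line)) {
        try {
            nums.push_back(std::stod(line));
        } catch (const std::invalid_argument&) {
            // Malformed line: ignored on purpose.
        } catch (const std::out_of_range&) {
            // Value does not fit in a double: also ignored on purpose.
        }
    }
    return nums;
}

int main() {
    std::istringstream data("15\noops\n4.5\n");
    for (double d : parseNumbers(data))
        std::cout << d << '\n'; // prints 15 then 4.5
}

The narrow catch clauses are the point: a bare catch (...) here would also hide out-of-memory and logic errors that should not be ignored.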
16,867 | I'm in the throes of getting myself enrolled in school to get a CS degree. The school I am looking at actually offers both Java- and C++-based for the introductory software development courses (object-oriented programming, design patterns, that sort of thing). It is student's choice which track to follow, but there is only time to follow one. Knowing what you know now, if you had the choice, would you lay down your CS curriculum foundation in Java or C++? My current debate currently looks like this: A good friend (who has a PhD in AI) is touting Java as the better choice regardless of what I do, if only to open up more job opportunities later, though he might be biased since all of his work has been in Java (he loves it). I live in the Boston, MA, USA area and I see an equal amount of Java and C work. On the flip side, although I haven't entirely yet settled on what I want to do with the degree when I'm done, my preference would be to develop for the Mac, which I am doing now albeit in a limited capacity. To that end, I'm getting some limited exposure to C++ already, but I've had none with Java, and looking at my projects at my day job I don't see a need to use it anytime soon, "soon" measured by at least two years. I probably should note that I'm an adult going back to school after 20 years (I currently have no degree of any kind) so I'm looking to maximize the opportunity and time spent as best I can. I'm kind of leaning towards C++ but I'm still ambivalent, and some outside, objective advice would help here. Or I could just be thinking too hard about it. UPDATE: It turns out the language selection wasn't so clear cut as I originally surmised. While a couple of core courses focused on Java, some of the other core courses work in primarily C and Java, but also a few others thrown in for good measure. In fact, my rest of my semester is going to be in Objective-C after spending time in Java and Javascript. Last semester was C, Javascript, and PHP, plus a few others thrown in as assignments required. Since things were pretty much split down the middle overall, and I am still getting answers to this, I am now trying to work my curriculum such that I meet all of the requirements for the degree but to absorb as many languages as I can reasonably handle. So far, my grades have not suffered trying to do this. | I'd personally go with C++ as it will give you insights into how parts of Java work under the hood (Pointers for example). Moving to Java from C++ is fairly trivial, whereas moving the other way around is arguably more difficult. The truly difficult thing about the Java eco-system is it's vast number of frameworks, libraries etc - they're unlikely to cover all of that at University anyhow. At the end of the day it's not going to matter that much what language you choose, as long as you learn the principles. My JUG is going to kill me for endorsing C++ ;-) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16867",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6419/"
]
} |
16,908 | This is one of the things that I hate most when I see it in someone else's code. I know what it means and why some people do it this way ("what if I accidentally put '=' instead?"). For me it's very much like when a child goes down the stairs counting the steps out loud. Anyway, here are my arguments against it: It disrupts the natural flow of reading the program code. We, humans, say "if value is zero" and not "if zero is value". Modern compilers warn you when you have an assignment in your condition, or actually if your condition consists of just that assignment, which, yes, looks suspicious anyway. You shouldn't forget to put a double '=' when you are comparing values if you are a programmer. You may as well forget to put "!" when testing non-equality. | Ah, yes, "Yoda conditionals" ("If zero the value is, execute this code you must!"). I always point anyone who claims they're "better" at tools like lint(1). This particular problem has been solved since the late 70s. Most modern languages won't even compile an expression like if(x = 10), as they refuse to coerce the result of the assignment to a boolean. As others have said, it certainly isn't a problem, but it does provoke a bit of cognitive dissonance. A short sketch follows this entry. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16908",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5635/"
]
} |
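A small C++ sketch of the trade-off being debated, assuming a compiler that warns about assignment inside a condition (most mainstream compilers do at normal warning levels):

#include <iostream>

int main() {
    int value = 0;

    if (value == 0)   // natural order: reads the way we speak
        std::cout << "value is zero\n";

    if (0 == value)   // Yoda order: same meaning, operands reversed
        std::cout << "zero is value\n";

    // The mistake the Yoda form guards against:
    // if (value = 10) ...   // compiles, assigns 10, then tests it; compilers typically warn
    // if (10 = value) ...   // does not compile: a literal cannot be assigned to
}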
17,100 | The Single Responsibility Principle states that a class should do one and only one thing. Some cases are pretty clear cut. Others, though, are difficult because what looks like "one thing" when viewed at a given level of abstraction may be multiple things when viewed at a lower level. I also fear that if the Single Responsibility Principle is honored at the lower levels, excessively decoupled, verbose ravioli code, where more lines are spent creating tiny classes for everything and plumbing information around than actually solving the problem at hand, can result. How would you describe what "one thing" means? What are some concrete signs that a class really does more than "one thing"? | I really like the way Robert C. Martin (Uncle Bob) restates the Single Responsibility Principle (linked to PDF) : There should never be more than one reason for a class to change It's subtly different from the traditional "should do only one thing" definition, and I like this because it forces you to change the way you think about your class. Instead of thinking about "is this doing one thing?", you instead think about what can change and how those changes affect your class. So for example, if the database changes does your class need to change? What about if the output device changes (for example a screen, or a mobile device, or a printer)? If your class needs to change because of changes from many other directions, then that's a sign that your class has too many responsibilities. In the linked article, Uncle Bob concludes: The SRP is one of the simplest of the principles, and one of the hardest to get right. Conjoining responsibilities is something that we do naturally. Finding and separating those
responsibilities from one another is much of what software design is really about. A short sketch of this test follows this entry. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17100",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1468/"
]
} |
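A small, hypothetical C++ sketch of that "one reason to change" test: the formatting of a report and the storage of a report are separate directions of change, so they live in separate classes.

#include <fstream>
#include <string>

struct Report {
    std::string title;
    std::string body;
};

// Changes only if the text layout changes.
class ReportFormatter {
public:
    std::string toPlainText(const Report& r) const {
        return r.title + "\n\n" + r.body + "\n";
    }
};

// Changes only if the storage mechanism changes (file, database, ...).
class ReportWriter {
public:
    void save(const std::string& text, const std::string& path) const {
        std::ofstream out(path);
        out << text;
    }
};

int main() {
    Report r{"Weekly status", "All green."};
    ReportWriter{}.save(ReportFormatter{}.toPlainText(r), "report.txt");
}

A single class that did both jobs would have to change for either reason, which is the warning sign the answer describes.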
17,121 | Two examples spring to mind: One of the reasons that .Net programmers are encouraged to use .config files instead of the Windows Registry is that .config files are XML and therefore human-readable. Similarly, JSON is sometimes considered human-readable compared with a proprietary format. Are human-readable formats actually readable by humans? In the example of configuration data: The format doesn't change the underlying meaning of the information - in both cases, the data represents the same thing. Both the registry and the .config file are stored internally as a series of 0s and 1s. To that extent, the underlying representation is equally unreadable by humans. Both the registry and the .config file require a tool to read, format and display those 0s and 1s and convert them into a format that humans can read. In the case of configuration stored in the Windows Registry, this is a Registry Editor. In the case of XML it could be a text editor or XML reader. Either way, the tool makes the data readable, not the data format. So, what is the difference between human-readable data formats and non-human-readable formats? | Human readable basically means that if the content is displayed by a program that lacks direct, specific awareness of that file's format, there's at least a reasonable chance that a person can read and understand at least some of it. Your basic point about the lack of a clear line of delineation is absolutely correct though -- at one time I knew a guy who could diagnose problems with programs (mostly written in Fortran), often in five minutes or less, going only from an octal core dump, without looking at the source code at all. For most people, that format would hardly qualify as "human readable", but obviously he was an exception... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17121",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1928/"
]
} |
17,177 | Just out of curiosity, what's the difference between a small, medium and large size project? Is it measured by lines of code, or complexity, or what? I'm building a bartering system and so far have about 1000 lines of code for login/registration. Even though there's lots of LOC, I wouldn't consider it a big project because it's not that complex, though this is my first project so I'm not sure. How is it measured? | Roughly how I'd categorize things -- keep in mind this is more or less arbitrary. The "size" of the project is a composite of other factors like complexity, source lines of code, number of features/business value, etc. A very small product can deliver a large amount of value, etc. That being said: 2m+ sloc is a large to huge project. These are generally so complex that few if any people are 'fluent' in the entire system; rather, responsibility tends to be modularized along the structure of the code. These projects often deliver enormous business value and may be mission critical. They are also sometimes under a heavy strain of technical debt and other legacy concerns. 100k - 2m sloc is a medium-sized project. This is my middle ground: the project is complex enough to require some expert knowledge, and has likely accrued some degree of technical debt; it is likely also delivering some degree of business value. 10k - 100k is a small project, but still has enough complexity that you will want expert consideration; if you are open source, consider getting people you trust to review your commits. Anything less than 10k sloc is tiny, really. That doesn't mean it can't deliver any value at all, and many very interesting projects have a very tiny imprint (e.g. Camping, whose source is ~2 kb (!)). Non-experts can generally drive value concerns -- fix bugs and add features -- without having to know too much about the domain. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17177",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6208/"
]
} |
17,214 | There are lots of books about programming out there, and it seems Code Complete is pretty much at the top of most people's list of "must-read programming books", but what about The Art of Computer Programming by Donald Knuth? I'm a busy person, between work and a young family I don't have a ton of free time, so I have to be picky about how I use it. I'm wondering - has anybody here read 'TAOCP'? If so, is it worth making time to read or would some other book or more on-the-side programming like pet projects or contributing to open source be a better use of my time in terms of professional development? DISCLAIMER - For those of you who sport "Knuth is my homeboy" t-shirts, don't get me wrong - I want to read it, but I'm just wondering if it should be right at the top of my priority list or if something else should come first. | TAOCP is an utterly invaluable reference for understanding how the data structures and algorithms that we use every day work and why they work, but undertaking to read it cover-to-cover would be an extraordinary investment of your time. As one family man to another, spend the time with your kids. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17214",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5279/"
]
} |
17,254 | I'm reading Coders at Work by Peter Seibel, and many a time it has been mentioned that programmers who can't write generally make poor programmers - it's been claimed by Douglas Crockford, Joshua Bloch, Joe Armstrong, Dijkstra (and I've only read half the book). What's your view of this? Is an inability to express yourself in writing in a natural language such as English a hindrance of writing good code? | There's much more to programming than 'writing code'. A big part of being a successful programmer involves communication; Being able to connect with customers, understand their needs, translate them into the technical realm, express them in code, and then explain the result back to the customers. Programmers who have a hard time expressing themselves clearly in writing may not be able to communicate well in general, whereas those who have a good grasp of language and writing can generally translate those skills to the code they write. I think being unable to write well, and thus communicate well, will keep one from being a very good programmer. As Jason Fried and David Heinemeier Hansson (of 37signals) say in their book Rework: If you're trying to decide among a few people to fill a position, hire the best writer. Being a good writer is about more than writing. Clear writing is a sign of clear thinking. Great writers know how to communicate. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17254",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
17,305 | There has been a few remarks about white space already in discussion about curly braces placements. I myself tend to sprinkle my code with blank lines in an attempt to segregate things that go together in "logical" groups and hopefully make it easier for the next person to come by to read the code I just produced. In fact, I would say I structure my code like I write: I make paragraphs, no longer than a few lines (definitely shorter than 10), and try to make each paragraph self-contained. For example: in a class, I will group methods that go together, while separating them by a blank line from the next group. if I need to write a comment I'll usually put a blank line before the comment in a method, I make one paragraph per step of the process All in all, I rarely have more than 4/5 lines clustered together, meaning a very sparse code. I don't consider all this white space a waste because I actually use it to structure the code (as I use the indentation in fact), and therefore I feel it worth the screen estate it takes. For example: for (int i = 0; i < 10; ++i)
{
if (i % 3 == 0) continue;
array[i] += 2;
} I consider that the two statements have clearly distinct purposes and thus deserve to be separated to make it obvious. So, how do you actually use (or not) blank lines in code? | Always. Whitespace is crucial to clean, readable code. A blank line (or two) helps visually separate out logical blocks of code. For example, from Steve McConnell's Code Complete, Second Edition chapter on Layout and Style: Subjects scored 20 to 30 percent higher on a test of comprehension when programs had a two-to-four-spaces indentation scheme than they did when programs had no indentation at all. The same study found that it was important to neither under-emphasize nor over-emphasize a program’s logical structure. The lowest comprehension scores were achieved on programs that were not indented at all. The second lowest were achieved on programs that used six-space indentation. The study concluded that two-to-four-space indentation was optimal. Interestingly, many subjects in the experiment felt that the six-space indentation was easier to use than the smaller indentations, even though their scores were lower. That’s probably because six-space indentation looks pleasing. But regardless of how pretty it looks, six-space indentation turns out to be less readable. This is an example of a collision between aesthetic appeal and readability. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17305",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4853/"
]
} |
17,315 | As stated by the title, what difference do years of experience in a given language make between developers? For example, if one developer has had five years working with language A, and the other developer has had two years working with language B followed by three years working with language A, would there be a perceivable difference between them? | "It depends." Experience <> knowledge or understanding. Programmer 1 could be very good, a guru even, or they could be someone fumbling around with the language for the last 5 years. Programmer 2 could be someone who understands concepts independently of the language they're using. Or someone who found language B too difficult and hopes A is easier. Coding Horror's "The Years of Experience Myth" is worth reading. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17315",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2471/"
]
} |
17,341 | Many compilers have warning messages to warn programmers about potential runtime, logic and performance errors. Most of the time you quickly fix them, but what about unfixable warnings? How do you deal with them? Do you rewrite a portion of the code, rewrite it the "long, hackless way", or disable the warnings altogether? What should the best practice be? What if you are editing someone else's code and it has warnings? Here is a good example:
jQuery produces a lot of JavaScript warnings when a Mozilla-class browser is detected; why don't the jQuery developers fix them? If you contributed to jQuery, would you fix them? | Some warnings are usually safe to ignore, but if you do so then over time they'll multiply until the day comes when there are so many that you miss the one warning that really matters because it's hidden in the noise. Fix warnings immediately (which may include disabling individual rules if you feel they're never relevant for your context). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17341",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1325/"
]
} |
17,443 | In this question I asked whether being a bad writer hinders you from writing good code. Many of the answers started off with "it depends on what you mean by good code". It appears that the terms "good code" and "bad code" are very subjective. Since I have one view, it may be very different from others' views of them. So what does it mean to write "good code"? What is "good code"? | A good coder is like a good pool player. When you see a professional pool player, you at first might not be impressed: "Sure, they got all of the balls in, but they had only easy shots!" This is because, when a pool player is making her shot, she isn't thinking only about which ball will go into which pocket; she's also thinking about where the cue ball will end up. Setting up for the next shot takes tremendous skill and practice, but it also means that it looks easy. Now, bringing this metaphor to code, a good coder writes code that looks like it was easy and straightforward to do. Many of the examples by Brian Kernighan in his books follow this pattern. Part of the "trick" is coming up with a proper conceptualization of the problem and its solution. When we don't understand a problem well enough, we're more likely to over-complicate our solutions, and we will fail to see unifying ideas. With a proper conceptualization of the problem, you get everything else: readability, maintainability, efficiency, and correctness. Because the solution seems so straightforward, there will likely be fewer comments, because extra explanation is unnecessary. A good coder can also see the long-term vision of the product, and form their conceptualizations accordingly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17443",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
17,519 | I considered posting on Stack Overflow, but the question strikes me as being far too subjective since I can't think of a reasonable technical explanation for Microsoft's choice in this matter. But this question has bugged me for so long and the issue keeps coming up in one of my projects, and I have never actually seen an attempt at explaining this: OpenGL uses a right-handed coordinate system, where the +Z part of the world coordinate system extends toward the viewer. DirectX uses a left-handed system where the +Z part of the world coordinate extends into the screen, away from the viewer. I never used the Glide API , so I don't know how it worked, but from what I can gather, it uses a left-handed system as well. Is there a technical reason for this? And if not, is there some conceptual advantage to a particular handedness of a coordinate system? Why would one choose one over the other? | I know this is an old post, but I saw this post being referenced and dislike the chosen answer's tone. So I did a bit of investigation! DirectX is old . It was first released in 1995, when the world had much more than Nvidia and ATI , DirectX vs OpenGL. That's over 15 years, people. 3dfx Interactive's Glide (one of DirectX's competitors back in the day. OpenGL wasn't meant for gaming back then) used a left-handed coordinate system. POV-Ray and RenderMan (Pixar's rendering software), also use a left-handed coordinate system. DirectX 9+ can work with both coordinate systems. Both WPF and XNA (which work with DirectX under the scenes) use a right-handed coordinate system. From this, I can speculate about a couple things: Industry standards aren't as standard as people like. Direct3D was built in a time everyone did things their own way, and the developers probably didn't know better. Left-handedness is optional, but customary in the DirectX world. Since conventions die out hard, everyone thinks DirectX can only work with left-handedness. Microsoft eventually learned, and followed the standard in any new APIs they created. Therefore, my conclusion would be: When they had to choose, they didn't know of the standard, chose the 'other' system, and everyone else just went along for the ride. No shady business, just an unfortunate design decision that was carried along because backward compatibility is the name of Microsoft's game. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17519",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/83/"
]
} |
17,568 | Timesheets are something that I've never been fond of, but none the less something that is a requirement within my company. They don't bother me so much, but they seem to really grind some other people's gears. I suppose I have a few questions, and feedback would be great. Are you required to do timesheets, assuming you aren't a contractor? (That is understandable to me). What is the granularity of timesheets that you would be comfortable with or that you use? (ex: all entries must be under two hours). Would timesheets ever factor into your reasons for not accepting a job or leaving a current one? How has management within your organization justified timesheets if you aren't billing to a client? | As a manager yes I get the team to do timesheets. Here's why and a few notes on how they're implemented to, hopefully, minimise disruption: As a business much of our work is done on a time and materials basis. Without timesheets that obviously doesn't work. We have 10 clients and a range of different projects and products but we're not a big enough to devote people to clients or projects full time which means that we have to have some way of working out how long things took.
Even if this weren't true to manage a team you still need to understand what takes time and how much. Think that old app the mailroom guys use is taking more time to support than it's worth? What about when someone asks how much work went into feature X on the new website which doubled sales? Or when your developers say you should recruit someone else and you get asked to breakdown what they do to help justify it? Categories exist for all reasonable "non-work" including mentoring, general technical discussions, support, meetings and so on. Bug fixing - we record time against a whole project rather than bug by bug. This tends to make things a lot easier - spend the day fixing bugs, 7.5 hours bug fixing goes against the project and you're done. No need to try and work out how it was divided between the 13 bugs you fixed. When we implemented them I promised that no-one would be penalised / rewarded for what was on their timesheet so long as it was accurate. So there is no input into reviews based on profitability or utilisation or anything else. This means that there is no incentive to distort. By accurate I mean roughly. People really shouldn't have to spend too much time worrying about what happens when they make a coffee or go to the toilet. Basically if you make a note on a pad of each thing you worked on during the day, then at the end of the day roughly break it down across the hours you worked and that's it. If shouldn't take more than 5 minutes max. If I don't like what I see - for instance someone has spent too long on task X - the investigation is into what we can do to make X faster, rather than anything to do with the timesheet. Knowing how long you spent doing something is a great way of improving estimates. The anti-timesheet feeling among many programmers seems to come from two things - (1) badly implemented timesheets which take too long to complete, demand more information than is really needed and encourage lying and distortion so the information is worthless anyway, and (2) a feeling that every single thing that slightly inconveniences a developer should be done away with. The first one is fair but you should blame the implementation and the rules someone has attached, not the whole idea of timesheets which can be done in ways that don't have these issues. The second one is just unrealistic - there are many parties involved in projects, both inside and outside the company, each of whom have many demands on them. Yes we want to do everything we can to make programmers productive, but it has to be balanced with the needs of other parties. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17568",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7226/"
]
} |
17,606 | There are online services such as IKM that offer skills tests in many areas, including programming. Would you use these kind of tests when hiring for a senior developer position? What about just for objectively benchmarking candidates before calling them for an interview? Would you use it as a step after short-listing candidates after interviews? Is this approach more suitable in some situations compared to others? Have you personally used this kind of service or know someone who has? | To be blunt: No, No, No, No and No! Get the candidate in to do some coding with you, it's the only way you'll know how they think their way through problems and how they might fit into your team. As an aside I'd try to avoid recruiting via the CV lottery technique :-), instead find good people through word of mouth, conferences, technical community meetups etc. Avoids the sharky recruitment agents as well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17606",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3414/"
]
} |
17,700 | Off the top of my head, I can think of a handful of large sites which utilize the Microsoft stack: Microsoft.com, Dell, MySpace, PlentyOfFish, StackOverflow, Hotmail, Bing, WindowsLive. However, based on observation, nearly all of the top 500 sites seem to be running other platforms. What are the main reasons there's so little market penetration? Cost? Technology limitations? Does Microsoft cater to corporate / intranet environments more than public websites? I'm not looking for market share, but rather large-scale adoption of the MS stack. | I'll posit that it's because most of the "big websites" started out small. Google, Youtube, Facebook et al. were all at one time single-server sites that someone built as a hobby. They used LAMP-like stacks because: 1) they're cheap and the devs were poor, and often 2) because they were at a university, and university environments tend to favor OSS. After the sites started growing, the developers just stuck to what they knew. In the early years, there wouldn't be enough time or money to do a big system rewrite. When, and if, that ever became an option, why switch to an entirely different base? So I'm saying it's because that's just what they knew and had when they started. SO isn't any different, if I recall that story correctly. The SO founders knew the MS stack, and had access to the tools/licenses/etc to start using it, and so that's what they used! (I've also heard that they wanted to prove that the MS stack was just as good as LAMP for big sites, but that may be apocryphal.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17700",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/691/"
]
} |
17,729 | I am a programmer who just started working on a startup idea. At the moment I want to bring on board at least one programmer. This programmer should be a ninja - a 10x engineer. Since the early days are probably the most risky for a startup, I want to make sure I approach this problem the best I can. How do I find these people, and how do I convince them to come on board? I would love to hear from people who started their own companies and what their thoughts are about hiring. Update: I would like to get the ninja as a co-founder, so besides being a ninja (i.e. a great programmer with a computer science background) he/she has to have a healthy appetite for risk (for great programmers this is not a big deal because they can be hired anytime into mainstream jobs if the startup doesn't work). | Pay lots of money. If you can't do that, offer stock options and nice perks like free food and drink, a nice working environment with the latest equipment, and good benefits. Basically you have to give them something worthwhile; no one is interested in making you rich for their toil. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17729",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6975/"
]
} |
17,766 | I saw this asked in the SO Tavern, so I'm posting the question here. I thought it an interesting question. (Of course it doesn't belong on SO, but I think it's OK here.) Do you add periods (or, as the OP wrote, "full stops") in your code comments? To keep it relevant, why? | A full stop is for ending sentences, but if a comment consists of just one sentence surrounded by code, then a full stop is not necessary in my opinion. Sometimes I don't even capitalize the first letter. A detailed multiline comment, on the other hand, does need full punctuation. // This function returns an average of two integers. Note that it may
// return an irrelevant result if the sum of a and b exceeds the int
// boundaries.
int avg(int a, int b) // make it static maybe?
{
// A better algorithm is needed that never overflows
return (a + b) / 2;
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17766",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/67/"
]
} |
17,824 | What specific advantages and disadvantages of each way to working on a programming language grammar? Why/When should I roll my own? Why/When should I use a generator? | There are three options really, all three of them preferable in different situations. Option 1: parser generators, or 'you need to parse some language and you just want to get it working, dammit' Say, you're asked to build a parser for some ancient data format NOW. Or you need your parser to be fast. Or you need your parser to be easily maintainable. In these cases, you're probably best off using a parser generator. You don't have to fiddle around with the details, you don't have to get lots of complicated code to work properly, you just write out the grammar the input will adhere to, write some handling code and presto: instant parser. The advantages are clear: It's (usually) quite easy to write a specification, in particular if the input format isn't too weird (option 2 would be better if it is). You end up with a very easily maintainable piece of work that is easily understood: a grammar definition usually flows a lot more natural than code. The parsers generated by good parser generators are usually a lot faster than hand-written code. Hand-written code can be faster, but only if you know your stuff - this is why most widely used compilers use a hand-written recursive-descent parser. There's one thing you have to be careful of with parser-generators: the can sometimes reject your grammars. For an overview of the different types of parsers and how they can bite you, you may want to start here . Here you can find an overview of a lot of implementations and the types of grammars they accept. Option 2: hand-written parsers, or 'you want to build your own parser, and you care about being user-friendly' Parser generators are nice, but they aren't very user (the end-user, not you) friendly. You typically can't give good error messages, nor can you provide error recovery. Perhaps your language is very weird and parsers reject your grammar or you need more control than the generator gives you. In these cases, using a hand-written recursive-descent parser is probably the best. While getting it right may be complicated, you have complete control over your parser so you can do all kinds of nice stuff you can't do with parser generators, like error messages and even error recovery (try removing all the semicolons from a C# file: the C# compiler will complain, but will detect most other errors anyway regardless of the presence of semicolons). Hand-written parsers also usually perform better than generated ones, assuming the quality of the parser is high enough. On the other hand, if you don't manage to write a good parser - usually due to (a combination of) lack of experience, knowledge or design - then performance is usually slower. For lexers the opposite is true though: generally generated lexers use table lookups, making them faster than (most) hand-written ones. Education-wise, writing your own parser will teach you more than using a generator. You have to write more and more complicated code after all, plus you have to understand exactly how you parse a language. On the other hand, if you want to learn how to create your own language (so, get experience at language design), either option 1 or option 3 is preferable: if you're developing a language, it will probably change a lot, and option 1 and 3 give you an easier time with that. 
Option 3: hand written parser generators, or 'you're trying to learn a lot from this project and you wouldn't mind ending up with a nifty piece of code you can re-use a lot' This is the path I'm currently walking down: you write your own parser generator. While highly nontrivial, doing this will probably teach you the most. To give you an idea what doing a project like this involves I'll tell you about my own progress. The lexer generator I created my own lexer generator first. I usually design software starting with how the code will be used, so I thought about how I wanted to be able to use my code and wrote this piece of code (it's in C#): Lexer<CalculatorToken> calculatorLexer = new Lexer<CalculatorToken>(
new List<StringTokenPair>()
{ // This is just like a lex specification:
// regex token
new StringTokenPair("\\+", CalculatorToken.Plus),
new StringTokenPair("\\*", CalculatorToken.Times),
new StringTokenPair("(", CalculatorToken.LeftParenthesis),
new StringTokenPair(")", CalculatorToken.RightParenthesis),
new StringTokenPair("\\d+", CalculatorToken.Number),
});
foreach (CalculatorToken token in
calculatorLexer.GetLexer(new StringReader("15+4*10")))
{ // This will iterate over all tokens in the string.
Console.WriteLine(token.Value);
}
// Prints:
// 15
// +
// 4
// *
// 10 The input string-token pairs are converted into a corresponding recursive structure describing the regular expressions they represent using the ideas of an arithmetic stack. This is then converted into a NFA (nondeterministic finite automaton), which is in turn converted into a DFA (deterministic finite automaton). You can then match strings against the DFA. This way, you get a good idea how exactly lexers work. In addition, if you do it the right way the results from your lexer generator can be roughly as fast as professional implementations. You also don't lose any expressiveness compared to option 2, and not much expressiveness compared to option 1. I implemented my lexer generator in just over 1600 lines of code. This code makes the above work, but it still generates the lexer on the fly every time you start the program: I'm going to add code to write it to disk at some point. If you want to know how to write your own lexer, this is a good place to start. The parser generator You then write your parser generator. I refer to here again for an overview on the different kinds of parsers - as a rule of thumb, the more they can parse, the slower they are. Speed not being an issue for me, I chose to implement an Earley parser. Advanced implementations of an Earley parser have been shown to be about twice as slow as other parser types. In return for that speed hit, you get the ability to parse any kind of grammar, even ambiguous ones. This means you never need to worry about whether your parser has any left-recursion in it, or what a shift-reduce conflict is. You can also define grammars more easily using ambiguous grammars if it doesn't matter which parse tree is the result, such as that it doesn't matter whether you parse 1+2+3 as (1+2)+3 or as 1+(2+3). This is what a piece of code using my parser generator can look like: Lexer<CalculatorToken> calculatorLexer = new Lexer<CalculatorToken>(
new List<StringTokenPair>()
{
new StringTokenPair("\\+", CalculatorToken.Plus),
new StringTokenPair("\\*", CalculatorToken.Times),
new StringTokenPair("(", CalculatorToken.LeftParenthesis),
new StringTokenPair(")", CalculatorToken.RightParenthesis),
new StringTokenPair("\\d+", CalculatorToken.Number),
});
Grammar<IntWrapper, CalculatorToken> calculator
= new Grammar<IntWrapper, CalculatorToken>(calculatorLexer);
// Declaring the nonterminals.
INonTerminal<IntWrapper> expr = calculator.AddNonTerminal<IntWrapper>();
INonTerminal<IntWrapper> term = calculator.AddNonTerminal<IntWrapper>();
INonTerminal<IntWrapper> factor = calculator.AddNonTerminal<IntWrapper>();
// expr will be our head nonterminal.
calculator.SetAsMainNonTerminal(expr);
// expr: term | expr Plus term;
calculator.AddProduction(expr, term.GetDefault());
calculator.AddProduction(expr,
expr.GetDefault(),
CalculatorToken.Plus.GetDefault(),
term.AddCode(
(x, r) => { x.Result.Value += r.Value; return x; }
));
// term: factor | term Times factor;
calculator.AddProduction(term, factor.GetDefault());
calculator.AddProduction(term,
term.GetDefault(),
CalculatorToken.Times.GetDefault(),
factor.AddCode
(
(x, r) => { x.Result.Value *= r.Value; return x; }
));
// factor: LeftParenthesis expr RightParenthesis
// | Number;
calculator.AddProduction(factor,
CalculatorToken.LeftParenthesis.GetDefault(),
expr.GetDefault(),
CalculatorToken.RightParenthesis.GetDefault());
calculator.AddProduction(factor,
CalculatorToken.Number.AddCode
(
(x, s) => { x.Result = new IntWrapper(int.Parse(s));
return x; }
));
IntWrapper result = calculator.Parse("15+4*10");
// result == 55 (Note that IntWrapper is simply an Int32, except that C# requires it to be a class, hence I had to introduce a wrapper class) I hope you see that the code above is very powerful: any grammar you can come up with can be parsed. You can add arbitrary bits of code in the grammar capable of performing lots of tasks. If you manage to get this all working, you can re-use the resulting code to do a lot of tasks very easily: just imagine building a command-line interpreter using this piece of code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17824",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/389/"
]
} |
17,826 | I see that most of the good programmers have the habit of reading big books for learning about technology. What does it really take to read technical books, apart from the real interest on the technology? How can I improve my ability to read these books? | We have really BIG eyes. All kidding aside, I'm one of the people who finds reading to be very difficult. If I'm working my way through a very large book, I try to read early in the morning, when I first wake up, when my mind is free of distractions. I find that I'm able to get engrossed much easier at that time of day and I retain more. Then, there are books that are just so dry that they will be painful no matter the reading circumstances. I try to avoid them whenever possible, or find another book with the same information that is written in a different style. If reading a book is so painful that you can barely keep from putting it down, you are wasting your time because you probably won't retain much anyway. Still, I much prefer getting information in smaller doses. My 'big books' are mostly for reference and aren't intended to be read cover to cover, unless you have an amazing attention span. Additionally, though sort of digressing, I really enjoy it when people take time to write book reviews on their blog or personal web site. That helps me to find books that are best suited to me. So, if you love or hate a book, consider publishing a review. It will turn up to people who might be interested in whatever book you are discussing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17826",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7331/"
]
} |
17,843 | I read a few articles on the web to find out how Agile, XP, Scrum and pair programming are different from, or related to, each other, and I came away with the following: Scrum and XP are almost the same. XP has shorter release periods than Scrum. Pair programming is employed in both Agile and XP methodologies. But I was unable to identify how Agile is different from XP. Rather than just providing a URL, I would be happy to read about your experience and thoughts on this. | You are confusing the issue. Being agile means that you are following a bunch of values and practices from the agile manifesto. That's it. XP and Scrum are development processes that follow those values. Both are "just as agile". The big difference between Scrum and XP is that Scrum does not contain practices specifically for programming, whereas XP has lots of them (TDD, continuous integration, pair programming). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17843",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/156/"
]
} |
17,898 | In your own studies (on your own, or for a class) did you have an "ah ha" moment when you finally, really understood pointers? Do you have an explanation you use for beginner programmers that seems particularly effective? For example, when beginners first encounter pointers in C, they might just add &s and *s until it compiles (as I myself once did). Maybe it was a picture, or a really well-motivated example, that made pointers "click" for you or your student. What was it, and what did you try before that didn't seem to work? Were any topics prerequisites (e.g. structs, or arrays)? In other words, what was necessary to understand the meaning of &s and *s, when you could use them with confidence? Learning the syntax and terminology or the use cases isn't enough; at some point the idea needs to be internalized. Update: I really like the answers so far; please keep them coming. There are a lot of great perspectives here, but I think many are good explanations/slogans for ourselves after we've internalized the concept. I'm looking for the detailed contexts and circumstances when it dawned on you. For example: I only somewhat understood pointers syntactically in C. I heard two of my friends explaining pointers to another friend, who asked why a struct was passed with a pointer. The first friend talked about how it needed to be referenced and modified, but it was just a short comment from the other friend where it hit me: "It's also more efficient." Passing 4 bytes instead of 16 bytes was the final conceptual shift I needed. | Someone much wiser than I once said: The nun Wu Jincang asked the Sixth Patriarch Huineng, "I have studied the Mahaparinirvana sutra for many years, yet there are many areas I do not quite understand. Please enlighten me." The patriarch responded, "I am illiterate. Please read out the characters to me and perhaps I will be able to explain the meaning." Said the nun, "You cannot even recognize the characters. How are you able then to understand the meaning?" "Truth has nothing to do with words. Truth can be likened to the bright moon in the sky. Words, in this case, can be likened to a finger. The finger can point to the moon’s location. However, the finger is not the moon. To look at the moon, it is necessary to gaze beyond the finger, right?" | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17898",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6329/"
]
} |
17,976 | Basically, I want to learn lots of programming languages to become a great programmer. I know only a handful in depth and I was hoping someone could elaborate on how many classes or types of programming languages there are. Like how you would lump them together if you had to learn them in groups. Coming from a Java background, I'm familiar with static typing, but I know that in addition to dynamic typing there has to be such variety in available languages that I would love to see a categorical breakdown if possible. | It depends on how you want to classify languages. Fundamentally, languages can be broken down into two types: imperative languages in which you instruct the computer how to do a task, and declarative languages in which you tell the computer what to do. Declarative languages can further be broken down into functional languages, in which a program is constructed by composing functions, and logic programming languages, in which a program is constructed through a set of logical connections. Imperative languages read more like a list of steps for solving a problem, kind of like a recipe. Imperative languages include C, C++, and Java; functional languages include Haskell; logic programming languages include Prolog. Imperative languages are sometimes broken into two subgroups: procedural languages like C, and object-oriented languages. Object-oriented languages are a bit orthogonal to the groupings, though, as there are object-oriented functional languages (OCaml and Scala being examples). You can also group languages by typing: static and dynamic. Statically-typed languages are ones in which typing is checked (and usually enforced) prior to running the program (typically during a compile phase); dynamically-typed languages defer type checking to runtime. C, C++, and Java are statically-typed languages; Python, Ruby, JavaScript, and Objective-C are dynamically-typed languages. There are also untyped languages, which include the Forth programming language. You can also group languages by their typing discipline: weak typing, which supports implicit type conversions, and strong typing, which prohibits implicit type conversions. The lines between the two are a bit blurry: according to some definitions, C is a weakly-typed language, while others consider it to be strongly-typed. Typing discipline isn't really a useful way to group languages, anyway. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17976",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5904/"
]
} |
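Illustration (not part of the entry above): the imperative/declarative split the answer describes, shown as the same task written both ways in Java. The class and method names are invented for the sketch; summing a list is just a convenient stand-in task.

```java
import java.util.Arrays;
import java.util.List;

public class SumStyles {
    // Imperative style: spell out *how* to compute the result, step by step.
    static int sumImperative(List<Integer> numbers) {
        int total = 0;
        for (int n : numbers) {
            total += n;
        }
        return total;
    }

    // Declarative/functional style: say *what* is wanted (a reduction)
    // and leave the iteration mechanics to the library.
    static int sumDeclarative(List<Integer> numbers) {
        return numbers.stream().reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
        System.out.println(sumImperative(numbers));  // prints 15
        System.out.println(sumDeclarative(numbers)); // prints 15
    }
}
```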
17,995 | The specific example I have in mind involves the currently-alpha game Minecraft, but this is a general question that I think warrants some conversation. Minecraft is written in Java, and stack traces usually look like this (real example): java.lang.NullPointerException
at d.a(SourceFile:247)
at gi.b(SourceFile:92)
at bd.a(SourceFile:33)
at bn.a(SourceFile:69)
at bn.e(SourceFile:115)
at bn.d(SourceFile:103)
at net.minecraft.client.Minecraft.i(SourceFile:1007)
at net.minecraft.client.Minecraft.run(SourceFile:596)
at java.lang.Thread.run(Unknown Source) Obviously these are not the real package and method names that the developer uses when he writes. Since he is in an alpha stage, it seems that the developer should like to be able to make sense of his stack traces, especially if someone is able to provide one for a bug report. As this stands, it's mostly meaningless. What advantage could one possibly hope to gain by obfuscating his code like this that overcomes the drawbacks of more difficult bug identification? | We obfuscate our Java code too.... The advantage is that it makes it harder to reverse-engineer (if you are worried about someone stealing your code base and using it as a base to create a similar competing product, for example, etc). You can get the original stack trace back: there are obfuscation tools out there which create special reference files which you can use to run the obfuscated stack traces through, and it comes out with the original source stack trace. These are generated by the obfuscation process itself, so you can't get the original stack trace back unless you have your hands on the reference file that you used to obfuscate the code in the first place. This has no disadvantages really. :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/17995",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7371/"
]
} |
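Illustration (not part of the entry above, and deliberately simplified): how a "reference file" lets you translate an obfuscated stack trace back to readable names. The mapping entries and original names below are hypothetical; real obfuscators such as ProGuard ship their own mapping format and retrace tooling, so treat this only as a sketch of the principle.

```java
import java.util.HashMap;
import java.util.Map;

public class RetraceSketch {
    public static void main(String[] args) {
        // Hypothetical mapping produced at obfuscation time: obfuscated name -> original name.
        Map<String, String> mapping = new HashMap<>();
        mapping.put("d.a", "com.example.game.World.tick");      // made-up original name
        mapping.put("gi.b", "com.example.game.Entity.update");  // made-up original name

        String obfuscatedFrame = "at d.a(SourceFile:247)";

        // Translate the frame by swapping the obfuscated symbol for the original one.
        String readableFrame = obfuscatedFrame;
        for (Map.Entry<String, String> entry : mapping.entrySet()) {
            readableFrame = readableFrame.replace(
                    "at " + entry.getKey() + "(",
                    "at " + entry.getValue() + "(");
        }
        System.out.println(readableFrame); // at com.example.game.World.tick(SourceFile:247)
    }
}
```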
18,029 | I have recently started a job that has me working on an existing system. It requires tweaks and updates as well as new code. I have done several maintenance/feature-adding projects now, and several of them have ended up being significantly different from what was actually requested. So, I had to program the item several times to get it to where the requester wanted. Now, I don't mind reprogramming the feature if that's what needs to be done. However, I would like to decrease turnaround time on my projects. The bottleneck seems to be in the requester's perception of what needs to be done. Do you have any ideas on what I could do to figure out what the requester needs more quickly? | A few points of advice: Listen to Problems, Not Solutions. A lot of clients like to tell you how to solve their problems. Don't listen to them. You're the programmer, and it is your job to find solutions to problems. Instead listen to what problems clients are having, and figure out the best way to solve them. As others have said, clients don't really know what they want; sometimes you have to show it to them first. Ask Questions. When you are done asking questions, ask some more. Clients are rarely forthcoming with details, as they don't really think about it. The only way you are going to get the information you need is to pry it out of them. Get Things in Writing. Depending on the situation with the client, this can be really important later when they start complaining about how what you delivered "isn't what they asked for". And if nothing else, writing out detailed specifications can help you make sure you have all of the information you need, and help clear up ambiguities between you and the client. Communication is key. Don't just talk to the client, get info, knock out some code, and then not talk to them until it's done. Always keep in touch with the client. Ask questions throughout the process. Show them the progress you've made and get feedback. It'll make everyone's life easier in the long run. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/18029",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6415/"
]
} |
18,059 | How do you go about explaining refactoring (and technical debt) to a non-technical person (typically a PHB or customer)? ("What, it's going to cost me a month of your work with no visible difference?!") UPDATE Thanks for all the answers so far, I think this list will provide several useful analogies to which we can point the appropriate people (though editing out references to PHBs may be wise!) | When you have a big home theater and you add things, slowly but surely a big rat's nest forms in the back. If you are oftentimes replacing parts, sometimes it's worth straightening all that stuff out. Sure, if you do that, it was working before, and it's not going to work better than when you started, but when you have to mess with it again, things will be a lot easier. In any case it's probably best to make a similar comparison to some subject area the PHB or customer is already familiar with, i.e., cars or construction or something... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/18059",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4285/"
]
} |
18,288 | Both asserts and unit tests serve as documentation for a codebase, and a means of discovering bugs. The main differences are that asserts function as sanity checks and see real inputs, whereas unit tests run on specific simulated inputs and are tests against a single well-defined "right answer". What are the relative merits of using asserts vs. unit tests as the main means of verifying correctness? Which do you believe should be emphasized more heavily? | Asserts are useful for telling you about the internal state of the program . For example, that your data structures have a valid state, e.g., that a Time data structure won't hold the value of 25:61:61 . The conditions checked by asserts are: Preconditions, which assure that the caller keeps its contract, Postconditions, which assure that the callee keeps its contract, and Invariants, which assure that the data structure always holds some property after the function returns. An invariant is a condition that is a precondition and a postcondition. Unit tests are useful for telling you about the external behavior of the module . Your Stack may have a consistent state after the push() method is called, but if the size of the stack doesn't increase by three after it is called three times, then that is an error. (For example, the trivial case where the incorrect push() implementation only checks the asserts and exits.) Strictly speaking, the major difference between asserts and unit tests is that unit tests have test data (values to get the program to run), while asserts do not. That is, you can execute your unit tests automatically, while you cannot say the same for assertions. For the sake of this discussion I've assumed that you are talking about executing the program in the context of higher-order function tests (which execute the whole program, and do not drive modules like unit tests). If you are not talking about automated function tests as the means to "see real inputs", then clearly the value lies in automation, and thus the unit tests would win. If you are talking about this in the context of (automated) function tests, then see below. There can be some overlap in what is being tested. For example, a Stack 's postcondition may actually assert that the stack size increases by one. But there are limits to what can be performed in that assert: Should it also check that the top element is what was just added? For both, the goal is to increase quality. For unit testing, the goal is to find bugs. For assertions, the goal is to make debugging easier by observing invalid program states as soon as they occur. Note that neither technique verifies correctness. In fact, if you conduct unit testing with the goal to verify the program is correct, you will likely come up with uninteresting test that you know will work. It's a psychological effect: you'll do whatever it is to meet your goal. If your goal is to find bugs, your activities will reflect that. Both are important, and have their own purposes. [As a final note about assertions: To get the most value, you need to use them at all critical points in your program, and not a few key functions. Otherwise, the original source of the problem might have been masked and hard to detect without hours of debugging.] | {
"source": [
"https://softwareengineering.stackexchange.com/questions/18288",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1468/"
]
} |
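Illustration (not part of the entry above): the assertion-versus-unit-test distinction drawn in the answer, using the Stack example. The class is invented for the sketch; assertions need `java -ea` to be enabled, and the test assumes JUnit 4 on the classpath.

```java
import java.util.ArrayList;
import java.util.List;

class BoundedStack {
    private final List<Integer> items = new ArrayList<>();

    void push(int value) {
        items.add(value);
        // Internal sanity check (postcondition): the stack cannot be empty right after a push.
        assert !items.isEmpty() : "stack must not be empty after a push";
    }

    int size() {
        return items.size();
    }
}

// External behaviour is checked separately, with concrete test data (JUnit 4 style):
class BoundedStackTest {
    @org.junit.Test
    public void pushingThreeItemsGrowsSizeByThree() {
        BoundedStack stack = new BoundedStack();
        stack.push(1);
        stack.push(2);
        stack.push(3);
        org.junit.Assert.assertEquals(3, stack.size());
    }
}
```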
18,303 | How important is it to learn XML when JSON is able to do almost all that I need? Having said that, I use JSON mainly for AJAX requests and obtaining data from various APIs. I am a total newbie to web development and the reason I am asking this is that I want to know whether I should go ahead and buy a book on XML or whether I can just give it a pass. | You'll need to learn XML to get anywhere in the web world. It's what drives a lot of B2B communications and there are many standard XML formats describing important data. Just restricting yourself to JSON is hugely self-limiting. Yeah, you'll be chucking AJAX calls around but what happens when you need to communicate with a GeoServer? It'll be adhering to GIS standards and will be spurting out XML in WCS (Web Coverage Service), WMS (Web Map Service) and WFS (Web Feature Service) formats among others. If you don't know how to handle XML, you'll have some trouble with that. Of course, any marshaller (domain object to text format) worth its salt will be able to convert its objects to and from XML/JSON/YAML, so you could make the argument that so long as you can hide behind the marshaller you only have to deal with the domain objects. Web services provide WSDL exactly for this purpose. But sooner or later you'll need to read and understand the contents of your requests and responses and that will certainly require an understanding of XML. And let's not forget good ol' XHTML, the old web standard for HTML pages. It's XML. So, in short, learn XML - and keep JSON wherever you can 'cos it's lovely. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/18303",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5701/"
]
} |
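Illustration (not part of the entry above): the same small, made-up record expressed in XML and then in JSON, to show why the two formats are largely interchangeable for simple data even though many B2B and web-service standards mandate XML.

```xml
<!-- A made-up record as XML -->
<book>
  <title>Learning Web Development</title>
  <authors>
    <author>A. Author</author>
    <author>B. Author</author>
  </authors>
  <pages>320</pages>
</book>
```

```json
{
  "book": {
    "title": "Learning Web Development",
    "authors": ["A. Author", "B. Author"],
    "pages": 320
  }
}
```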
18,371 | The company I'm working for is looking to hire a senior developer with more experience than me, and they expect me to do the technical part of the interview. I've only been programming a few years and am not sure I have the knowledge needed to evaluate the coding skills of someone who has greater understanding/experience than I do. Can anyone recommend some technical interview questions to ask that are a good means for evaluating higher-level programming skills, but still be ones I can understand? I would say I'm past the jr. programmer level, but nowhere near senior. Most of what I've done is built small apps (web and desktop), some of them fairly complicated, but all of them have been meant to be used by no more then a handful of users. I feel I have a decent understanding of most programming concepts and am capable of learning/teaching myself just about anything, however I lack experience. As my boss is fond of telling me, "You don't know what you don't know". In particular, things we'd like the person we hire to have experience with (that I don't have) is: Multi-tier development, multi-user environment, large-scale application development, two-way messaging, shared sessions, and Multi-threading/BackgroundWorkers. UPDATE: In response to Thor's comment below, we hired someone a few months ago and I think it has been working out great. I am learning a lot, not just about coding but also about things like design patterns, software architecture, documentation, and how other larger programming teams get stuff done. Its not always easy having someone come in and point out better ways to do things you have done, but if you can swallow your pride and be willing to try out new things you can learn a lot. The interview process went better than I expected. I started asking questions about things I was familiar with, then asked some questions about some things I was struggling with. Whenever the interviewee said something I didn't understand, I'd ask them to explain it to me and then write it down so I could look it up later on. Overall, I felt I was able to get a pretty good idea of the applicant's skill level, intelligence, and what they'd be like to work with. | You can't. Instead, I would suggest you to come up in the interview with a list of problems you have today , and ask him how he would solve them . This a is very interesting method for the following two reasons: It is free consultancy . Even if you don't hire the guy, he may suggest nice solutions to your problems . If he comes with interesting solutions , he is a problem solver . The kind of guy you want to hire. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/18371",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1130/"
]
} |
18,406 | It has been six years since I have been coding. Coding into all kinds of things like ActionScript, JavaScript, Java, PHP, Ajax, XML HTML, ASP, etc. I have used arrays, maps, linked lists, sets, etc and wherever I worked people like me. But whenever I am interviewed it is very likely that people ask me question on hashes, trees, stacks, and queues. Some questions are on juggling some sorting algorithms. I don't know if I should really know them or should I stop calling myself a programmer. There is something in me which also tells me even if people who are asking all these questions select me, they will never be making me work of these things. Am I really required to know all this? | If all you know how to do is write glue code, you may call yourself a code monkey. Lots of glue code needs to be written and you can make a decent living as a code monkey. To call yourself a Real Programmer TM and be trusted when code needs to be written from scratch, you have to know algorithms, data structures, memory management, pointers, assembly language, etc. and understand how to use this knowledge to evaluate tradeoffs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/18406",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7507/"
]
} |
18,444 | There have been many questions with good answers about the role of a Software Architect (SA) on StackOverflow and Programmers SE . I am trying to ask a slightly more focused question than those. The very definition of a SA is broad so for the sake of this question let's define a SA as follows: A Software Architect guides the
overall design of a project, gets
involved with coding efforts, conducts
code reviews, and selects the
technologies to be used. In other words, I am not talking about managerial rest and vest at the crest (further rhyming words elided) types of SAs. If I were to pursue any type of SA position I don't want to be away from coding. I might sacrifice some time to interface with clients and Business Analysts etc., but I am still technically involved and I'm not just aware of what's going on through meetings. With these points in mind, what should a SA bring to the table? Should they come in with a mentality of "laying down the law" (so to speak) and enforcing the usage of certain tools to fit "their way," i.e., coding guidelines, source control, patterns, UML documentation, etc.? Or should they specify initial direction and strategy then be laid back and jump in as needed to correct the ship's direction? Depending on the organization this might not work. An SA who relies on TFS to enforce everything may struggle to implement their plan at an employer that only uses StarTeam. Similarly, an SA needs to be flexible depending on the stage of the project. If it's a fresh project they have more choices, whereas they might have less for existing projects. Here are some SA stories I have experienced as a way of sharing some background in hopes that answers to my questions might also shed some light on these issues: I've worked with an SA who code reviewed literally every single line of code of the team. The SA would do this for not just our project but other projects in the organization (imagine the time spent on this). At first it was useful to enforce certain standards, but later it became crippling. FxCop was how the SA would find issues. Don't get me wrong, it was a good way to teach junior developers and force them to think of the consequences of their chosen approach, but for senior developers it was seen as somewhat draconian. One particular SA was against the use of a certain library, claiming it was slow. This forced us to write tons of code to achieve things differently while the other library would've saved us a lot of time. Fast forward to the last month of the project and the clients were complaining about performance. The only solution was to change certain functionality to use the originally ignored approach despite early warnings from the devs. By that point a lot of code was thrown out and not reusable, leading to overtime and stress. Sadly the estimates used for the project were based on the old approach which my project was forbidden from using so it wasn't an appropriate indicator for estimation. I would hear the PM say "we've done this before," when in reality they had not since we were using a new library and the devs working on it were not the same devs used on the old project. The SA who would enforce the usage of DTOs, DOs, BOs, Service layers and so on for all projects. New devs had to learn this architecture and the SA adamantly enforced usage guidelines. Exceptions to usage guidelines were made when it was absolutely difficult to follow the guidelines. The SA was grounded in their approach. Classes for DTOs and all CRUD operations were generated via CodeSmith and database schemas were another similar ball of wax. However, having used this setup everywhere, the SA was not open to new technologies such as LINQ to SQL or Entity Framework. I am not using this post as a platform for venting. There were positive and negative aspects to my experiences with the SA stories mentioned above. My questions boil down to: What should an SA bring to the table? 
How can they strike a balance in their decision making? Should one approach an SA job (as defined earlier) with the mentality that they must enforce certain ground rules? Anything else to consider? Thanks! I'm sure these job tasks are easily extended to people who are senior devs or technical leads, so feel free to answer at that capacity as well. | A Systems Architect should: Specify the high-level architecture Participate in code reviews Approve technologies used Assist the developers in their coding effort Maintain and monitor the development schedule Produce SA artifacts, such as UML diagrams, Gantt charts and the like. SA's must know how to code, and should participate in some of the coding work, get their hands wet, so to speak. This keeps them in touch with the gestalt of the development effort. As Uncle Bob Martin once said , the Architect should do some of the coding himself, so that he can see the pain he is inflicting on others with his designs. The SA should have the last word on all design, technology and coding style decisions. But, like all managers, the job of the SA is to clear the path for his people so they can be productive. That means, for the most part, that the developers get to decide, at their level, how problems are to be solved. It means that the SA keeps the pointy-haired bosses out of the developers' cubicles. And it means that the SA pitches in to help, as needed. Like all human beings, SA's can and do make mistakes. The good ones learn from those mistakes, and become better SA's. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/18444",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7510/"
]
} |
18,454 | I've often written this sort of function in both formats, and I was wondering if one format is preferred over another, and why. public void SomeFunction(bool someCondition)
{
if (someCondition)
{
// Do Something
}
} or public void SomeFunction(bool someCondition)
{
if (!someCondition)
return;
// Do Something
} I usually code with the first one since that is the way my brain works while coding, although I think I prefer the 2nd one since it takes care of any error handling right away and I find it easier to read | I prefer the second style. Get invalid cases out of the way first, either simply exiting or raising exceptions as appropriate, put a blank line in there, then add the "real" body of the method. I find it easier to read. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/18454",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1130/"
]
} |
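Illustration (not part of the entry above): the early-return (guard clause) style pays off most once several preconditions stack up. The original snippets appear to be C#; this is a rough Java rendering with made-up method and parameter names.

```java
public class OrderService {
    // Guard-clause style: reject invalid input up front, then do the real work
    // without nesting every step inside another if-block.
    public void shipOrder(String orderId, boolean isPaid, int quantity) {
        if (orderId == null || orderId.isEmpty()) {
            return; // nothing to ship
        }
        if (!isPaid) {
            return; // don't ship unpaid orders
        }
        if (quantity <= 0) {
            return; // nothing in the order
        }

        // The "happy path" reads straight down, without extra indentation.
        System.out.println("Shipping " + quantity + " item(s) for order " + orderId);
    }
}
```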
18,585 | Often we as programmers see large organisations wasting huge sums of money on bloated and inefficient solutions to problems. This pains me greatly because I like organisations to benefit from best of breed solutions. However, my abilities as a programmer are limited when it comes to influencing the key decision makers and often my perspective on the matter is constrained to my own little technical world. So, my question is this. After encountering an egregious waste of money on some software and/or hardware that really got your goat, what did you do about it to get it fixed or were you doomed to bite the bullet and mutter forever under your breath? I'm interested in hearing your overall experiences and especially what lessons you learned about how to tackle this sort of thing in the future . Let's not name names, the experience of how to tackle the problem is more important than the actual offending product. | I've seen too many examples to name a favourite, but I've noticed a few general trends in my main field, web-development: Vanity Websites . These are websites that serve no useful purpose to anyone outside the small organisation that commissions them and are built around an obsessive compulsion with logos, photos of themselves and self-indulgent waffle. The worst part is these are usually public-sector funded and commissioned by people who have no clue about the web. (For instance, once had a NHS hospital trust who wanted to develop a mini-version of Facebook for their own staff intranet). Paid for is Best . The mindset that insists that
paid-for software must intrinsically be better than open-source. After all, it's paid for, right? I've seen so many clients insist on making stupid choices simply because they work in a culture that automatically discounts anything open-source as a matter of policy. Design by committee. This is where a huge group of people have a "brainstorm" and then try to incorporate every crack-pot idea there is into the design, inevitably resulting in an ill-thought-through mess that compromises on everything in favour of trying to please everyone (and by everyone they mean the committee making the decisions, not the people having to use the application). Consultants. This is where you pay a middle-man (who knows neither business practices nor software development) to get in the way and cream off money by protracting the development process with confusing techno-babble and business-speak. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/18585",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7167/"
]
} |
18,669 | The two predominant software-development methodologies are waterfall and agile. When discussing these two, there is often much focus on the particular practices that distinguish them (pair programming, TDD, etc. vs. functional spec, big up-front design, etc.). But the real differences are far deeper, in that these practices come from a philosophy. Waterfall says: Change is costly, so it should be minimized. Agile says: Change is inevitable, so make change cheap. My question is, regardless of what you think of TDD or functional specs, is the waterfall development methodology really viable? Does anyone really think that minimizing change in software is a viable option for those that desire to deliver valuable software? Or is the question really about what sort of practices work best in our situations to manage the inevitable change? | Of course waterfall is viable. It brought us to the moon! And it's an agile coach talking here! Unless you can clearly identify problems related to the way you manage your projects, there is no valid reason to change. As an alternative to the Agile and Waterfall methodologies, I will suggest YOUR methodology. Adapted to your specific business, your specific team, your products, your way of working, your company culture... It's why Scrum is called a simple framework instead of a methodology. Wanting to implement a methodology because someone on a blog you like talked about it is as stupid as letting problems go on without doing anything. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/18669",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2329/"
]
} |
18,679 | This is in continuation to my previous question where I asked is it necessary to learn algorithms and data structures. I feel yes it is. Now I work in an environment where I won't ever get the chance to learn it by experimenting or practically or in any assignment. What is the right approach like the right books, right kind of problems, right kind of resources that I can go through to give six months or a year or two to learn algorithms and data structures? And also mold my mind in a way that it can relate problems to data structures and algorithms. | Read. No, really, read. Read everything about algorithm and design you can possible find. There are phenomenal books out there. The Sedgewick algorithm books are good. The Algorithm Design Manual by Skiena is good as well. Together these books follow me on every bookshelf at every job I go to, along with The Mythical Man-Month. Then ask. Talk to people you respect. Ask them what decision points they had and why they made the decisions they did. The good ones will always be able to tell you "I chose to do X because it's better than A, B in these ways. I could have gone with C, but I felt this was a better choice because of this". Next, do. Build stuff. Build stuff that you'll never use. Build stuff that you'll never need. Go write a program that solves a Sudoku puzzle. Now go do it again. And again. Build it 5 completely different ways. Build a program that generates Sudoku puzzles and feed it into the solvers. Find which solver is fastest. And then... Find out why. The "what" is almost never important. I mean, yeah, it is critical to finishing the project at hand, but at the end if you know the "what" without knowing the "why", then you might as well never done it in the first place. You got a bullet point on your resume. Go get a cookie and congratulate yourself. The "why" is so much more important than the "what". And for the record Sudoku was an example. I spent a lot of free time going through that exercise with a ton of the logic puzzles on Kongregate and learned a lot on the way. http://www.amazon.com/Bundle-Algorithms-Parts-1-5-Fundamentals/dp/020172684X/ http://www.amazon.com/Algorithm-Design-Manual-Steven-Skiena/dp/1848000693/ http://www.amazon.com/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959/ | {
"source": [
"https://softwareengineering.stackexchange.com/questions/18679",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7507/"
]
} |
18,720 | I've recently had a discussion with a coworker about versioning web applications. I don't think you need it at all, and if you just want a sanity check to confirm your latest release is live, I think a date (YYMMDD) is probably good enough. Am I off base? Am I missing the point? Should I use web application version numbers | If you resell the same web application to multiple customers, you absolutely should version it. If it's a website that's only ever installed in one location, then the need is less dire, but it still likely couldn't hurt. You'd be less susceptible to timezone problems and such, too. Diagnosing an issue in a library version 1.0.2.25 is a lot nicer than hunting down the library build on November 3, 2010 11:15 a.m. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/18720",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/165/"
]
} |
18,838 | I currently code with C, C++, and Python. I'm wanting to pick up a functional programming language, and right now I'm leaning toward Haskell. I do NOT want to start a "Haskell vs Lisp" war here; what I want to know is this: if I learn Haskell primarily for exposure to functional programming, what benefits, if any, will I gain from later learning Lisp? | I suggest learning both, Haskell first, then Common Lisp. My experience with Haskell was that the static typing seemed to be a restricting annoyance at first, but once I got used to it, I noticed that most of my type errors had logic errors hiding behind them. When you get to this point, and the next milestone, which is learning to think in types and define your own types as a means of expressing your solution, you'll be ready for Common Lisp. With Common Lisp, you can add monads, currying, and everything you liked from Haskell, but you also get multiple inheritance like Frank Shearar mentioned, and generic functions with multiple dispatch, and an advanced exception handling system. So why not just learn Common Lisp first? Coming from a procedural and OOP background, my experience has been that I didn't really understand functional programming until I had to use it exclusively. Once functional programming is comfortable, you can add the rest of the tools that Common Lisp makes available, and use whatever tool is best at the task at hand. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/18838",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7629/"
]
} |
18,886 | Is Computer Science science, applied mathematics, engineering, art, philosophy? "Other"? To provide background, here is Steven Wartik's blog posting for Scientific American titled "I'm not a real scientist, and that's okay." The article covers some good topics for this question, but it leaves open more than it answers. If you can think of the discipline, how would computer science fit into its definition? Should the discipline for Computer Science be based on what programmers do, or what academics do? What kind of answers do you get from people who've seemed to think deeply about this? What reasons do they give? | There are two distinct IT disciplines: Computer Science is the disciplined study of computers and computation using the scientific method. Software Engineering is the discipline of designing and implementing software following proper engineering principles. The two overlap somewhat, but the distinction is really about desired outcomes of science versus engineering. The desired outcome of a scientific discipline is knowledge. The desired outcome of an engineering discipline is things that work. So to answer your question: Is Computer Science science, applied mathematics, engineering, art, philosophy? "Other"? Computer Science is Science... when done properly. However, like other disciplines CS has overlaps with Mathematics, Engineering, Physical Sciences, Social Sciences, Philosophy and so on. I would also add that what most programmers do is neither Computer Science nor Software Engineering. It is more like what a craftsman does. And sad to say, neither academic Computer Science nor the Software Engineering profession is as rigorous as older science and engineering disciplines. (There are fields of Computer Science that are traditionally rigorous; for example, the ones with a strong mathematical basis. But for many fields, it is simply too hard / expensive to do proper scientific studies on the questions that really matter.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/18886",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6329/"
]
} |
19,007 | A question that I have been asking myself and really confused which path to take. So I need your guys help as to the pros and cons of these 2 professions in today's world. I love web applications development as the Web is the best thing to happen in this age and nearly everyone gets by on the World Wide Web. And also tend to keep learning about new technologies and about web services. On the other hand I like software engineering also for the desktop applications as I have had experience with development small scale software in VB.Net, Java, C++, etc. Which path has more scope and better future? What's your view? | Strictly speaking, software engineering is about designing software systems correctly - regardless of what platform (web, desktop, mobile, etc) they live on - how various subsystems of the solution interact with each other and external systems, etc. I'd suggest getting some experience with desktop and enterprise applications, like web applications - they can be done just as badly for the novice and just as well for the expert. Being able to tell a prospective employer that you've got more extensive experience than 90% of the 'popularist' web dev crowd always helps! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/19007",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5987/"
]
} |
19,203 | I'm planning to do a talk on Dependency Injection and IoC Containers, and I'm looking for some good arguments for using them. What are the most important benefits of using this technique, and these tools? | Most important, for me, is making it easy to follow the Single Responsibility Principle. DI/IoC makes it simple for me to manage dependencies between objects. In turn, that makes it easier for me to break coherent functionality off into its own contract (interface). As a result, my code has been far more modularized since I learned of DI/IoC. Another result of this is that I can much more easily see my way through to a design that supports the Open-Closed Principle. This is one of the most confidence-inspiring techniques (second only to automated testing). I doubt I could espouse the virtues of the Open-Closed Principle enough. DI/IoC is one of the few things in my programming career that has been a "game changer." There is a huge gap in quality between code I wrote before & after learning DI/IoC. Let me emphasize that some more. HUGE improvement in code quality. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/19203",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6104/"
]
} |
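Illustration (not part of the entry above): a minimal constructor-injection sketch in Java showing the decoupling the answer describes. The interface and class names are invented; the wiring is done by hand here, which is exactly the step an IoC container would automate.

```java
// The contract the service depends on, rather than a concrete class.
interface MessageSender {
    void send(String to, String body);
}

class EmailSender implements MessageSender {
    @Override
    public void send(String to, String body) {
        System.out.println("Emailing " + to + ": " + body);
    }
}

class WelcomeService {
    private final MessageSender sender;

    // The dependency is injected; WelcomeService never constructs it itself,
    // so it can be tested with a fake MessageSender and swapped without edits here.
    WelcomeService(MessageSender sender) {
        this.sender = sender;
    }

    void welcome(String user) {
        sender.send(user, "Welcome aboard!");
    }
}

public class Wiring {
    public static void main(String[] args) {
        // Hand-rolled composition at the application edge; an IoC container automates this step.
        WelcomeService service = new WelcomeService(new EmailSender());
        service.welcome("alice@example.com");
    }
}
```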
19,225 | Java's checked exceptions have gotten some bad press over the years. A telling sign is that it's literally the only language in the world that has them (not even other JVM languages like Groovy and Scala). Prominent Java libraries like Spring and Hibernate also don't use them. I personally have found one use for them ( in business logic between layers), but otherwise I'm pretty anti-checked exceptions. Are there any other uses that I don't realize? | First of all, like any other programming paradigm you need to do it right for it to work well. For me the advantage of checked exceptions is that the authors of the Java runtime library ALREADY have decided for me what common problems I might reasonably be expected to be able to handle at the calling point (as opposed to a top-level catch-print-die block) and consider as early as possible how to handle these problems. I like checked exceptions because they make my code more robust by forcing me to think about error recovery as early as possible. To be more precise, to me this makes my code more robust as it forces me to consider strange corner cases very early in the process as opposed to saying "Oops, my code does not handle if the file doesn't exist yet" based on an error in production, which you then have to rework your code to handle. Adding error handling to existing code can be a non-trivial task - and hence expensive - when reaching maintenance as opposed to just doing it right from the start. It might be that the missing file is a fatal thing and should cause the program to crash in flames, but then you make that decision with } catch (FileNotFoundException e) {
throw new RuntimeException("Important file not present", e);
} This also shows a very important side effect. If you wrap an exception, you can add an explanation which goes in the stack-trace ! This is so extremely powerful because you can add information about e.g. the name of the file that was missing, or the parameters passed to this method or other diagnostic information, and that information is present right in the stack trace which frequently is the single thing you get when a program has crashed. People may say "we can just run this in the debugger to reproduce", but I have found that very frequently production errors cannot be reproduced later, and we cannot run debuggers in production except for very nasty cases where essentially your job is at stake. The more information in your stack trace, the better. Checked exceptions help me get that information in there, and early. EDIT: This goes for library designers as well. One library I use on a daily basis contains many, many checked exceptions which could have been designed much better making it less tedious to use. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/19225",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7610/"
]
} |
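Illustration (not part of the entry above): a slightly fuller version of the catch block quoted in the answer, wrapping the checked exception while adding the diagnostic detail (here, the file name) so it appears in the eventual stack trace. Class and file names are invented for the sketch.

```java
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;

public class ConfigLoader {
    public static InputStream openConfig(String fileName) {
        try {
            return new FileInputStream(fileName);
        } catch (FileNotFoundException e) {
            // Wrap the checked exception, keeping it as the cause and adding the
            // file name so the eventual stack trace says exactly which file was missing.
            throw new RuntimeException("Required config file not present: " + fileName, e);
        }
    }

    public static void main(String[] args) {
        // Throws with a descriptive message if the (made-up) file is absent.
        openConfig("settings/app.properties");
    }
}
```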
19,292 | We all have tasks that come up from time to time that we think we'd be better off scripting or automating than doing manually. Obviously some tools or languages are better for this than others - no-one (in their right mind) is doing a one-off job of cross-referencing a bunch of text lists their PM has just given them in assembler, for instance. What one tool or language would you recommend for the sort of general quick and dirty jobs you get asked to do where time (rather than elegance) is of the essence? Background: I'm a former programmer, now a development manager/PM, looking to learn a new language for fun. If I'm going to learn something for fun I'd like it to be useful and this sort of use case is the most likely to come up. | Python. The obvious answer (and with good reason) is Python. It's a solid language, available cross-platform. As it's dynamic, you can run it interactively, which is great for lashing stuff together, and it has a fairly large selection of libraries, so it's a general-purpose language that can be applied to most problems. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/19292",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5095/"
]
} |
19,573 | What are the worst false economies (that is ways of saving money that ultimately cost more than they save) prevalent in the software industry and how do you combat them? | Technical Debt ie "Just do it quickly, we'll refactor later". Firstly because I have yet to see someone engaging in this behaviour actually refactor later. Secondly because doing things the quick way, instead of the good way makes it harder to add future features or resolve future bugs so you end up losing time in the long run. Sadly, many still think it saves developer cycles to have them do something fast. I guess it's possible, but I have yet to see it in practice. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/19573",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5095/"
]
} |
19,649 | It's a common practice to place copyright notices, various legal disclaimers and sometimes even full license agreements in each source file of an open-source project. Is this really necessary for a (1) open-source project and (2) closed-source project? What are you trying to achieve or prevent by putting these notices in source files? I understand it's a legal question and I doubt we can get a fully competent answer here at programmers.SO (it's for programmers, isn't it?) What would also be interesting to hear is, when you put legal stuff in your source files, is it because "everyone does it" or you got legal advice? What was the reasoning? | Is this really necessary No. It's not legally required. (I am not a lawyer, but I've seen this stated by one.) If you've got a project where individual files might be taken out of context, it may be sensible - but it only requires a couple of lines, to say something like: This file is part of <project> which is released under <license>. See file <filename> or go to <url> for full license details. For anything else, you can simply put a LICENSE text file in the project root, and any relevant details/credits/etc in the README file - it's still copyrighted (automatically), so it's just a question of being clear license-wise in the readme file. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/19649",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5635/"
]
} |
19,783 | My day job is Java/web developer. I have been using Eclipse for ~5 years. I think it's excellent and I also use WebStorm for JavaScript and HTML/JSP. I do on occasion need to ssh into a server and mess around with config files; for this I use vi and it pains me. I have to get up a webpage listing the syntax/commands: press escape, then asterisk, turn around three times and the text will be entered two lines above your cursor. It's so unintuitive to me, and I imagine to anyone who grew up in the late eighties or nineties. Here are the main reasons I think Eclipse is brilliant (and I assume other IDEs), and why I do not switch to emacs and/or vim. Error highlighting with no need to recompile the project. Code assist. Refactoring. Opening call hierarchy/opening declaration. Fully integrated with source control. Debugger is included. Availability of 3rd-party plugins, e.g. FindBugs/Checkstyle. One of the arguments I hear is that with emacs/vim you can create your own plugins - well OK, but you can do that in Eclipse too. But you don't need to, as everything is already there! It's like saying buy this half-built car, you can build the rest yourself. Why are people using emacs/vim? Do people who use them actually work on complex object-oriented projects in large organisations? What are the reasons to switch to vim/emacs? How would my productivity increase if I did switch? | Use whatever tool fits your needs. Knowing VIM or Emacs is a good thing if you ever have to log in to a remote server and edit a config file or something similar. I know VIM reasonably well, but I wouldn't use it to develop in Java. That's what Eclipse, NetBeans etc. are made for. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/19783",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2297/"
]
} |
19,851 | This is happening in every team. For various reasons, conflicts arise in the team and they affect the overall motivation and productivity. What is your recommended approach to solving that common problem? Examples: one part of the team wants to implement dependency injection, the other part thinks it's a waste of time; some developers think the rest of the team is slowing down development (which explains why they are late on the schedule); personal incompatibilities between one or more developers; one developer refuses to talk to another one (without apparent reason). | I have had a team of 10 people for two years without a conflict (touch wood); I could be lucky or may be doing something right. The best way to handle conflict is never to let one exist for long. There are several core values that you can preach: team spirit; fairness in everything (compensation/rewards); being appreciative; giving recognition and responsibility; giving freedom; letting people know they are not greater than the team and that personal success means nothing if the team fails; attaching personally to people; never showing a carrot you don't intend to give; never hiring someone (no matter how good) who could ruin the team; communicating more often; etc. Appreciate whenever someone goes beyond the job. Give regular feedback on performance and set expectations, preferably monthly. Let people know when they behave like children. All of this takes guarded effort from someone. Software is pretty much a team game; individual brilliance is generally short-lived. If I go by your examples: We have decided to go with dependency injection. Period. We will see if it is the best way or not. If it is not, you get a chocolate :-) till then, cooperate and let's make this thing happen. If the rest of the team is slowing you down, you help them to make it faster; they are your teammates and you are the elder guy, so help them. I know you are good. Talk to both of them and tell them they are spoiling the environment. If nothing works, get rid of one of them or both of them. One thing I find very effective is to repeat "we are a good team", and to repeat "we are a team" to the lonely ones. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/19851",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
19,888 | This started as a "Note to Self," so please excuse me if the frustration is all too evident and the writing is less than stellar... Three major subjects I've had at the forefront of my mind lately: Motivation Learning (Curiosity) Doing (Making) I've been studying motivation and incentives for months now. It seems there are an infinite number of different motivations that people might have for doing things (I realize that sounds trite but bear with me). I've been really drawn to it because I'm desperate to find out why I do the things I do and why I don't do the things that I want to do but don't do . I'm in the midst of reading Paul Graham's excellent Hackers and Painters book. In it, he makes the case that hackers and painters are very similar because they are both "makers." Painters make paintings. Hackers make software. Painters don't necessarily need to understand the chemical composition of paint to make beautiful paintings. And hackers don't necessarily need to know 1's and 0's to make beautiful software. Graham then draws the distinction between disparate computer science fields: some people seem to be studying mathematics some people seem to be studying the computers themselves the hackers are making software. The difference is incredibly important. It seems the motivation for some is to make beautiful things. And the motivation for the others is to learn out of curiosity. Certain motivations seem obvious to me, but curiosity seems a bit less obvious. I would certainly consider myself as a curious person with a seemingly unquenchable thirst to learn just about everything I can. But this is exactly where the problem comes up. The thing that scares me so much is that I desperately want to make things. I desperately want to do things. I want to write a book. I want to paint a painting. I want to compose a song. I want to do things like travel. But the strangeness is that I also want to learn things. I want to learn to play guitar. I want to learn about art history. I want to learn more about philosophy and literature. The key seems to be the balance between learning and doing... between studying and making. While I'm not sure how much one should learn about a given thing before doing it, I know for certain that I find myself constantly on one side rather than the other. As it stands now (and as far as I can tell I've always been this way), I am a learner and not a doer. I've read great books. I've practiced guitar for years. I've spent countless hours studying programming. But I've written 0 books. I've composed 0 songs. I've coded 0 beautiful programs. I've painted 0 beautiful paintings. I've started 0 viable businesses. The scary part of all this is that there are probably countless unfinished works of art in the world. Is this my misanthropic revenge against society and culture to never produce or finish any of the works of art that I start? Perhaps the worst part (aside from this being my natural inclination), is the fact that I f***ing know better. I just finished books like "Getting Things Done" and "Making Ideas Happen." I've aggregated and synthesized countless words of wisdom on how to do things and how to make things. Imagine the horror of going through life without being able to do the things you want to do. If this is something you've struggled with (and hopefully overcome), please share. If not... perhaps some delicious pity would make me feel better. [UPDATE: Just wanted to send a quick thanks to everyone that shared their thoughts. 
I deliberately left the question somewhat open-ended in hopes of encouraging discussion and having others refashion the central problem around their similar experience, and I think it worked out great... there's a lot of amazing insight here to work with and it was really helpful. Thanks again.] | Probably the best question on P.SE I've read so far. I highly suggest you to have a look at current Seth Godin 's work on the subject. He calls it the Resistance produced by the Lizard Brain . What is the Lizard Brain? My explanation... The lizard brain is the primitive, limbic system that overrides everything else in our brain: it is fear, sex, hunger, etc. Especially fear. And for many of us, myself included, it's what prevents us from blogging more, writing a new preface, updating a book, crafting a new presentation, etc. Source: Shut Up Lizard Brain: I Am Not Procrastinating Today Read the chapter about it in the book Linchpin . You may be interested in The Dip too. Please document yourself on Procrastination too. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/19888",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7927/"
]
} |
19,896 | For the last couple of years, all of the serious projects I have worked on have been either web based, or had a non graphical user interface (services, command line scripts etc...). I can throw together a WinForms app or do some simple WPF when needed, but I've never really delved into some of the lower level API's like MFC or QT. I understand that this depends on the situation but in general is it still worth taking the time to learn desktop development well or are applications moving to the web and mobile devices at a pace which makes this knowledge less relevant? Also, do you expect developers you work with to have desktop gui expertise? | I'd say yes, it is. There's sort of a pendulum effect in program development. First everything ran directly on the computer. Then when the computer became powerful enough to run multiple programs, they got mainframes with dumb terminals. But dumb terminals really suck in terms of usability, so as soon as computers got powerful enough to put reasonable amounts of hardware inside a terminal-sized system, we got personal computers, and everything ran directly on the computer. Then they invented the World Wide Web, and we're back to a mainframe (server) and a dumb terminal (browser.) But dumb terminals still really suck in terms of usability, and people are starting to relearn the lessons of 30 years ago, and we're trending away from that again. A lot of the really hot development these days is for desktop (or mobile) apps that run locally, but are able to connect to the Internet for specific purposes to enhance their functionality. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/19896",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5323/"
]
} |
20,036 | I've been hearing a lot of enthusiasm about functional programming languages lately, with regards to Scala, Clojure, and F#. I have recently started studying Haskell, to learn the FP paradigm. I love it, it's really fun, and it fits my math background. But will it ever really matter? Obviously, it's hardly a new idea. Here are my questions: What has contributed to the recent FP enthusiasm? Is it merely boredom with OO, or has something changed to make FP more needed than before? Is this indicative of an FP future? Or is this a fad, like object-oriented databases? In other words, can anyone help me understand where this comes from and where it might be going? | One of the major innovations in FP that has resulted in the "explosion" of interest is monads. In January of 1992, Philip Wadler wrote a paper called The Essence of Functional Programming which introduced monads into functional programming as a way to deal with IO. The major problem with pure, lazy, functional programming languages was utility in dealing with IO. It's one member of the "Awkward Squad" in programming, because "laziness and side effects are, from a practical point of view, incompatible. If you want to use a lazy language, it pretty much has to be a purely functional language; if you want to use side effects, you had better use a strict language." (Reference) The issue with IO before monads was that maintaining purity was not possible for programs that were actually useful for anything. By IO, we mean anything that deals with changing state, including getting input and output from the user or environment. In pure functional programming, everything is immutable, to allow laziness and purity (free from side effects). How do monads solve the problem of IO? Well, without discussing monads too much, they basically take the "World" (the runtime environment) as input to the monad, and produce a new "World" as output, along with the result: type IO a = World -> (a, World). FP has therefore entered more and more into the mainstream, because the biggest problem, IO (among others), has been solved. Integration into existing OO languages has also been happening, as you know. LINQ is monads, for example, through and through. For more information, I recommend reading about monads and the papers referenced in my answer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20036",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2329/"
]
} |
20,080 | Over on stackoverflow, I see this issue crop up all the time: E_NOTICE ?== E_DEBUG, avoiding isset() and @ with more sophisticated error_handler How to set PHP not to check undefind index for $_GET when E_NOTICE is on? How to stop PHP from logging PHP Notice errors How do I turn off such PHP 5.3 Notices ? Even Pekka (who offers a lot of solid PHP advice) has bumped against the dreaded E_NOTICE monster and hoped for a better solution than using isset() : isset() and empty() make code ugly Personally, I use isset() and empty() in many places to manage the flow of my applications. For example: public function do_something($optional_parameter = NULL) {
if (!empty($optional_parameter)) {
// do optional stuff with the contents of $optional_parameter
}
// do mandatory stuff
} Even a simple snippet like this: if (!isset($_REQUEST['form_var'])) {
// something's missing, do something about it.
} seems very logical to me. It doesn't look like bloat, it looks like stable code. But a lot of developers fire up their applications with E_NOTICE's enabled, discover a lot of frustrating "uninitialized array index" notices, and then grimace at the prospect of checking for defined variables and "littering" their code with isset(). I assume other languages handle things differently. Speaking from experience, JavaScript isn't as polite as PHP. An undefined variable will typically halt the execution of the script. Also, (speaking from inexperience) I'm sure languages like C/C++ would simply refuse to compile. So, are PHP devs just lazy? (not talking about you, Pekka, I know you were refactoring an old application.) Or do other languages handle undefined variables more gracefully than requiring the programmer to first check if they are defined? (I know there are other E_NOTICE messages besides undefined variables, but those seem to be the ones that cause the most chagrin) Addendum From the answers so far, I'm not the only one who thinks isset() is not code bloat. So, I'm wondering now, are there issues with programmers in other languages that echo this one? Or is this solely a PHP culture issue? | I code to E_STRICT and nothing else. Using empty and isset checks does not make your code ugly, it makes your code more verbose. In my mind, what is the absolute worst thing that can happen from using them? I type a few more characters. Versus the consequences of not using them: at the very least, warnings. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20080",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5497/"
]
} |
20,128 | Which of the world's most important algorithms have contributed most to humankind in past decades? I thought this would be good general knowledge for a developer to know about. Update: If possible, please keep the answer to a specific programming algorithm. I would like to get a list of the most important ones, only one algorithm per answer. Please consider stating why the algorithm is significant and important... | Public/private key encryption is pretty darn important. Internet commerce would be nowhere near as ubiquitous without it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20128",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7083/"
]
} |
20,178 | I need a way to filter out resumes of folks who just copy-and-paste code then hope it works, and check it in if it does. All this happens without having an understanding (or care) to understand the rest of the code in the system. Sure I know that copying and pasting code is part of learning a new object, control, etc... but how can one tell if that accounts for 70% (or more) of their development career? I've come across some senior level guys perhaps whose skills are so outdated or irrelevant for the project, that all they do is google, copy-then-paste some code without thinking about the solution as a whole. As a result we have a mismash of JSON, AJAX, callbacks, ASMX, WCF and postbacks in the same project. It is clear there is no consistency or logic behind where each technology is being used. In the worst case, this type of developer creates security issues and vectors for attack. Question How would you recommend I filter out people who have a poor programming background? Can I do it at the resume level? If not, how do I do this during the interview. | I've come across some senior level
guys perhaps whose skills are so
outdated or irrelevant for the
project, that all they do is google,
copy-then-paste some code without
thinking about the solution as a
whole. As a result we have a mismash
of JSON, AJAX, callbacks, ASMX, WCF
and postbacks in the same project. It
is clear there is no consistency or
logic behind where each technology is
being used. I don't think the skills of your developers are the problem. Your problem lies elsewhere, perhaps a team leader or architect who doesn't have the self-confidence to "encourage" better coding disciplines, or a management team that doesn't understand the importance of managing technical debt, and doesn't give their developers the time and resources to do so. Does your company hold code reviews? Leadership may be the problem, not copy-paste developers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20178",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/175/"
]
} |
20,204 | I came across this article Work for Free that got me thinking. The goal of every employer is to gain
more value from workers than the firm
pays out in wages; otherwise, there is
no growth, no advance, and no
advantage for the employer.
Conversely, the goal of every employee
should be to contribute more to the
firm than he or she receives in wages,
and thereby provide a solid rationale
for receiving raises and advancement
in the firm. I don't need to tell you that the
refusenik didn't last long in this
job. In contrast, here is a story from last
week. My phone rang. It was the
employment division of a major
university. The man on the phone was
inquiring about the performance of a
person who did some site work on
Mises.org last year. I was able to
tell him about a remarkable young man
who swung into action during a crisis,
and how he worked three 19-hour days,
three days in a row, how he learned
new software with diligence, how he
kept his cool, how he navigated his
way with grace and expertise amidst
some 80 different third-party plug-ins
and databases, how he saw his way
around the inevitable problems, how he
assumed responsibility for the
results, and much more. What I didn't tell the interviewer was
that this person did all this without
asking for any payment. Did that fact
influence my report on his
performance? I'm not entirely sure,
but the interviewer probably sensed in
my voice my sense of awe toward what
this person had done for the Mises
Institute. The interviewer told me
that he had written down 15 different
questions to ask me but that I had
answered them all already in the
course of my monologue, and that he
was thrilled to hear all these
specifics. The person was offered the job. He had
done a very wise thing; he had earned
a devotee for life. The harder the economic times, the
more employers need to know what they
are getting when they hire someone.
The job applications pour in by the
buckets, all padded with degrees and
made to look as impressive as
possible. It's all just paper. What
matters today is what a person can do
for a firm. The resume becomes pro
forma but not decisive under these
conditions. But for a former boss or
manager to rave about you to a
potential employer? That's worth
everything. What do you think? Has anyone here worked for free? If so, has it benefited you in any way? Why should(nt) you work for free (presuming you have the money from other means to keep you going)? Can you share your experience? Me, I am taking a year out of college and haven't gotten a degree yet so that's probably why most of my job applications are getting ignored. So im thinking about working for free for the experience? | No. Never work for free for anyone but yourself. You'll get more out of good open-source credentials and personal projects, in the way of job-hunting, than you will out of working for some son of a b_ _ who thinks that your skills aren't worth paying for. Of course, if no one is willing to pay for your skills, you may need to find another career: software development is wide open right now, so (depending on where you live, of course) it should be possible to get a job where you actually get paid. Pro bono software development is theft. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20204",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6208/"
]
} |
20,206 | So I was in the middle of coding (in "The Zone", unfortunately for me) when I thought to myself: are my method/variable names too long? POP out of the Zone I go! So I came here to ask, are my method/variable names too long? You be the Judge! Bonus points to anyone who can figure out what I'm writing, although I'm sure a guru will figure it out fast! Anyway, here are some of my method and variable names. Methods: searchBlockedListForBlockedSquares() , isCurrentSquareNextToAtLeastOneBlockedSquare() , searchBlockedListForBlockedSquares() Variables: isNextToBlockedSquares I guess there was only one variable that seemed too long. | Your names seem okay to me in terms of length. However, the way they are named suggests that maybe some new classes are in order? For example, instead of searchBlockedListForBlockedSquares() you could have blockedList.getBlockedSquares(). Similarly, isCurrentSquareNextToAtLeastOneBlockedSquare() becomes currentSquare.isAdjacentToABlockedSquare().
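As a rough sketch of that suggestion (my own illustration; the class and member names are hypothetical and adapted to C# conventions from the answer's camelCase examples), moving the behaviour onto the list and the square shortens every call site: using System.Collections.Generic;
using System.Linq;
public class Square
{
    public bool IsBlocked { get; set; }
    public List<Square> Neighbours { get; } = new List<Square>();
    // Replaces isCurrentSquareNextToAtLeastOneBlockedSquare(): the square knows
    // its own neighbours, so the name no longer has to say "current square".
    public bool IsAdjacentToABlockedSquare() =>
        Neighbours.Any(n => n.IsBlocked);
}
public class BlockedList
{
    private readonly List<Square> squares = new List<Square>();
    public void Add(Square square) => squares.Add(square);
    // Replaces searchBlockedListForBlockedSquares(): the object already is the
    // blocked list, so "searchBlockedListFor..." collapses into a short verb phrase.
    public IEnumerable<Square> GetBlockedSquares() =>
        squares.Where(s => s.IsBlocked);
}
The call sites then read blockedList.GetBlockedSquares() and currentSquare.IsAdjacentToABlockedSquare(), which is roughly what the answer is suggesting. | {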
"source": [
"https://softwareengineering.stackexchange.com/questions/20206",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2638/"
]
} |
20,275 | In What would you choose for your project between .NET and Java at this point in time? I say that I would consider the "Will you always deploy to Windows?" the single most important technical decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is that "If we ever want to run on Linux/OS X/Whatever, we'll just run Mono" 1 , which is a very compelling argument on the surface, but I don't agree for several reasons. OpenJDK and all the vendor supplied JVM's have passed the official Sun TCK ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK. Mono trails the .NET releases. What .NET-level is currently fully supported? Does all GUI elements (WinForms?) work correctly in Mono? Businesses may not want to depend on Open Source frameworks as the official plan B. I am aware that with the new governance of Java by Oracle, the future is unsafe, but e.g. IBM provides JDK's for many platforms , including Linux. They are just not open sourced. So, under which circumstances is Mono a valid business strategy for .NET-applications? 1 Mark H summarized it as : "If the claim is that "I have a windows application written in .NET, it should run on mono" , then not, it's not a valid claim - but Mono has made efforts to make porting such applications simpler." | Sparkie's answer got it, let me complement a little. ".NET is cross platform" is too much of an ambiguous statement as both the framework and the world it was originally created for have changed and evolved. The short answer is: The underlying engine that powers .NET and its derivatives, the Common Language Infrastructure Standard, is cross-platform and as if you want to make your code go to multiple platforms, you need to plan on using the right APIs on the right platform to deliver the best experience on each platform. The CLI family has not tried the "Write Once, Run Anywhere" approach, as the differences from a phone to a mainframe are too big. Instead a universe of API and runtime features that are platform-specific has emerged to give developers the right tools to create great experiences in each platform. Think of it: programmers no longer target Windows PCs or Unix Servers. The world, now more than ever is surrounded by fascinating platforms from PCs, to gaming consoles, to powerful phones, to set-top boxes, to big servers and distributed clusters of machines. A one-size fits on all platform would merely feel bloated on tiny devices, and feel underpowered on large systems . The Microsoft's .NET Framework product is not cross platform, it only runs on Windows. There are variations of the .NET Framework from Microsoft that run on other systems like the Windows Phone 7, the XBox360 and browsers through Silverlight, but they are all slightly different profiles. Today you can target every major mainstream OS, phone, mobile device, embedded system and server with .NET-based technologies. Here is a list that shows which CLI implementation you would use in each case (this list is not comprehensive, but should cover 99% of the cases): x86 and x86-64 based PC computers: running Windows -> Typically you run .NET or Silverlight but you can also use full Mono here. 
running Linux, BSD or Solaris -> You run full Mono or Silverlight
running MacOS X -> You run full Mono or Silverlight
running Android -> You run the Mono/Android subset
ARM computers:
Running Windows Phone 7: you run Compact Framework 2010
Running Windows 6.5 and older: you run the old Compact Framework
Android devices: you run Mono/Android
PowerPC computers:
You run full Mono for full Linux, BSD or Unix operating systems
You run embedded Mono for PS3, Wii or other embedded systems. On XBox360, you run CompactFramework
S390, S390x, Itanium, SPARC computers:
You run full Mono
Other embedded operating systems:
You run .NET MicroFramework or Mono with the mobile profile.
Depending on your needs, the above might be enough or not. You will hardly get the same source code to run everywhere. For example, XNA code won't run on every desktop, while .NET Desktop software won't run on XNA or the phone. You typically need to make changes to your code to run in other profiles of the .NET Framework. Here are some of the profiles I am aware of:
.NET 4.0 Profile
Silverlight Profile
Windows Phone 7 Profile
XBox360 Profile
Mono core Profile - follows the .NET profile and is available on Linux, MacOS X, Solaris, Windows and BSD.
.NET Micro Framework
Mono on iPhone profile
Mono on Android Profile
Mono on PS3 Profile
Mono on Wii Profile
Moonlight profile (compatible with Silverlight)
Moonlight extended profile (Silverlight + full .NET 4 API access)
So each one of those profiles is actually slightly different, and this is not a bad thing. Each profile is designed to fit on its host platform and expose the APIs that make sense, and remove the ones that do not make sense. For instance, Silverlight's APIs to control the host browser do not make sense on the phone. And shaders in XNA make no sense on PC hardware that lacks the equivalent support for it. The sooner you realize that .NET is not a solution for isolating the developer from the underlying capabilities of the hardware and the native platform, the better off you will be. That being said, some APIs and stacks are available on multiple platforms; for example, ASP.NET can be used on Windows, on Linux, on Solaris, on MacOS X because those APIs exist both on .NET and Mono. ASP.NET is not available on some of Microsoft's supported platforms like XBox or Windows Phone 7 and is not supported either on other platforms that Mono supports like the Wii or the iPhone. The following information is only correct as of November 21st, and many things in the Mono world will likely change. The same principles can be applied to other stacks; a full list would require a proper table, which I have no idea of how to present here, but here is a list of technologies that might not be present on a particular platform. You can assume that anything not listed here is available (feel free to send me edits for things I missed):
Core Runtime Engine [everywhere]
Reflection.Emit Support [everywhere, except WP7, CF, Xbox, MonoTouch, PS3]
CPU SIMD support [Linux, BSD, Solaris, MacOS X; Soon PS3, MonoTouch and MonoDroid]
Continuations - Mono.Tasklets [Linux, BSD, Solaris, MacOS, PS3, Wii]
Assembly Unloading [Windows only]
VM Injection [Linux, BSD, MacOS X, Solaris]
DLR [Windows, Linux, MacOS X, Solaris, MonoDroid]
Generics [some limitations on PS3 and iPhone]
Languages
C# 4 [everywhere]
C# Compiler as a Service (Linux, MacOS, Solaris, BSD, Android)
IronRuby [everywhere, except WP7, CF, Xbox, MonoTouch, PS3]
IronPython [everywhere, except WP7, CF, Xbox, MonoTouch, PS3]
F# [everywhere, except WP7, CF, Xbox, MonoTouch, PS3]
Server Stacks
ASP.NET [Windows, Linux, MacOS, BSD, Solaris]
ADO.NET [everywhere]
LINQ to SQL [everywhere]
Entity Framework [everywhere]
Core XML stack [everywhere]
XML serialization [everywhere, except WP7, CF, Xbox]
LINQ to XML (everywhere)
System.Json [Silverlight, Linux, MacOS, MonoTouch, MonoDroid]
System.Messaging [Windows; on Linux, MacOS and Solaris requires RabbitMQ]
.NET 1 Enterprise Services [Windows only]
WCF [complete on Windows; small subset on Silverlight, Solaris, MacOS, Linux, MonoTouch, MonoDroid]
Windows Workflow [Windows only]
Cardspace identity [Windows only]
GUI stacks
Silverlight (Windows, Mac, Linux - with Moonlight)
WPF (Windows only)
Gtk# (Windows, Mac, Linux, BSD)
Windows.Forms (Windows, Mac, Linux, BSD)
MonoMac - Native Mac Integration (Mac only)
MonoTouch - Native iPhone Integration (iPhone/iPad only)
MonoDroid - Native Android Integration (Android only)
Media Center APIs - Windows only
Clutter (Windows and Linux)
Graphic Libraries
GDI+ (Windows, Linux, BSD, MacOS)
Quartz (MacOS X, iPhone, iPad)
Cairo (Windows, Linux, BSD, MacOS, iPhone, iPad, MacOS X, PS3, Wii)
Mono Libraries - Cross Platform, can be used in .NET but require manually building
C# 4 Compiler as a Service
Cecil - CIL Manipulation, workflow, instrumentation of CIL, Linkers
RelaxNG libraries
Mono.Data.* database providers
Full System.Xaml (for use in setups where .NET does not offer the stack)
MonoTouch means Mono running on iPhone; MonoDroid means Mono running on Android; PS3 and Wii ports only available to Sony and Nintendo qualified developers. I apologize for the lack of formality. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20275",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
20,369 | I'm planning on moving to NY in 6-12 months tops, so I'm forced to find a new job. When I'm planing to start my life in another city it's also probably a good time to think about career changes. I've found a lot of different opinions about PHP vs .Net vs Java and this is not the topic here. I don't want to start a new fight about which language is better. Knowing a programming language is not the most important thing for being a software developer. To be a really good developer you need to know OOP, design patterns, testing... and a language is just a tool to make things happen. So back to my question. I have mixed experience in IT - 1 year as an IT support guy (Windows administration and support), around 2 years of experience in embedded programming (VB.Net 2005) and for the last 2 years I'm working with PHP/MySQL. I have worked with Magento web shop, assisted in some projects in Symfony, modified few Drupal sites. My main concerns are the following: Do I continue to improve my skills in PHP e.g. to start learning some major PHP framework like Zend, Symfony maybe get some PHP certification. Or do I start learning .NET or Java. I'm more familiar to .NET so I'll probably choose it if choice falls between .NET and Java ( or you could convince me to choose Java :). Career-wise, I don't know what is the best choice. Learning a new framework and language is more time consuming then improving my existing skills in PHP. But with .NET you have a lot of possibilities (Windows 7 Phone development, Silverlight, WPF) and possibly bigger chances to find better jobs. PHP jobs are less well payed then .NET, at least, according to my research (correct me if I'm wrong). But if I start now with .NET I'm just a beginner and my salary will be low. I need at least 2+ years of experience in some language to even try to find some job that is paying higher than $50-60k in NY. My main goal in the next 2-3 years is to try to find a job in the $60-80k category. Don't get me wrong, I'm not just chasing money, but money is an important factor when you're trying to start a family. I'm 27 years old and I feel that there isn't a lot of room for wrong decisions regarding my career, so any advice will be very welcome. Update Thank you all for spending time to help me with my problem. All of the answers and comments have been very helpful. I have decided to stick with PHP but also to learn C# and Silverlight 4. We'll see where the life will take me. | What on earth does the choice of programming language have anything to do with your career? This question is like asking, "I have two choices for a place to work. Should I work at the one where the boss has a norwegian accent, or the one where the boss has a spanish accent?" There are much more important career considerations. Startup or established company? Product company or company where IT is a support function? Will you be learning new things or rehashing the old? 9 to 5 or "work any 80 hours you want?" Nice co-workers or mean co-workers? Smart co-workers or stupid co-workers? Suit and tie or t-shirt? This list could go on for hours. The choice of a programming language is just about as relevant to a programmer's career as the choice of whether to comb your hair to the left or to the right. It's all software development no matter what programming language dialect you happen to be speaking. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20369",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8093/"
]
} |
20,407 | In a post , Joel Spolsky mentioned that 5 digit Stack Overflow reputation can help you to earn a job paying $100k+. How much of that is real? Would anyone like to share their success in getting a high paid job by virtue of their reputations on Stack Exchange sites? I read somewhere that, a person got an interview offer from Google because a recruiter found his Stack Overflow reputation to be impressive. Anyone else with similar stories? | No the real answer: spend a few months earning a five digit Stack Overflow reputation, and you'll be getting job offers in the $100K+ range without an interview. There is no reason why a high reputation (or "score") on any site will get you a job at all. I have pointed this out before, you are more likely to get a job by maintaining open source projects, writing proficiently, leaving good impressions, and making personal connections within the community. Are these people good programers? Undoubtedly yes! Does that mean they are a good fit for your team? Absolutely not . Calling these people "superstars" may be completely correct, but that doesn't make them perfect. 1 What determines if you are a good fit? Interviews and connections. You can't replace meeting people face to face with a number. Having a high reputation can't hurt, but it isn't a magic bullet . 1: In no way do I mean to imply these people are bad programmers, I mean to emphasize the inability to instantly and wholly judge someone based on a number. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20407",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1560/"
]
} |
20,427 | I have a question that can be best answered here, given the vast experience some of you guys have! I am going to finish my bachelor's degree in CS and let's face it, I am just comfortable with C++ and Python. C++ - I have no experience to show for and I can't quote the C++ standard like some of the guys on SO do but yet I am comfortable with the language basics and the stuff that mostly matters. With Python, I have demonstrated work experience with a good company, so I can safely put that. I have never touched C, though I have been meaning to do it now. So I cannot write C on my resume because I have not done it ever. Sure I can finish K & R and get a sense of the language in a month, but I don't feel like writing it cause that would be being unfaithful to myself. So the big question is, are two languages on a a resume considered OK or that is usually a bad sign? Most resumes I have seen mention lots of languages and hence my question. Under the language section of my resume, I just mention: C++ and Python and that kinda looks empty! What are your views on this and what do you feel about such a situation? PS: I really don't want to write every single library or API I am familiar with. Or should I? | As long as you know how to think the problems through, it does not matter how many languages you are proficient in. But since you are proficient with C++, you could invest a few months time to gain some skill in C# or Java (or Ruby, for that matter). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20427",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8163/"
]
} |
20,466 | Should I break SQL queries in different lines? For example in the project I am working on, we have a query that is taking 1600 columns! 1600 + tab chars. I wrote queries like this: "SELECT bla , bla2 , bla FROM bla " .
"WHERE bla=333 AND bla=2" .
"ORDER BY nfdfsd ..."; But they demanded me to put them in one line and said that my style is bad formatting. Why it is bad practice? | For source control reasons, we have linebreaks after every where clause, or comma. So your above turns into SELECT bla
, bla2
, bla
FROM bla
WHERE bla=333
AND bla=2
ORDER BY nfdfsd
, asdlfk; (tabbing and alignment has no standard here, but commas are usually leading) Still, makes no performance difference. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20466",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5419/"
]
} |
20,542 | Getting into zone is a pleasurable and fruitful process. We produce good source code and we get lots of satisfaction from our work done while being in the zone. But, how does one get into the 'zone'? Do you follow a specific process? Apart from switching of email system, mobiles and other mundane non-productive applications, is there anything else that can be done? | Concentrate on what you need to do. Make the effort to actually start doing it. This can be one of the hardest things - to actively stop fluffing about. Don't have email open. Don't have Fakebook in another window. Don't have any StackExchange going. No forums. Only quiet. And then get on with it. It generally takes me (and pretty much everyone else I know) about 15-20 mins to get there. You can generally sustain "the zone" for about 2 hours, and generally only once per day - its mentally pretty tiring. If you are super-duper you might manage it twice in a day. After "the zone" the rest of your day is pretty much lightweight by comparison, you get things done but the burst of huge productivity is over. Oh - and getting out of the zone takes about 3 seconds - eg a phone call, or somebody sticking their head and saying: "Can I bother you for a moment" - to which the answer is: "yes, you already did". Bang. The zone is gone. Another 15-20 to get back. Amazing how many stupid s/w defects get introduced by getting knocked out of the zone. Amazing also how many people (esp managers) think that open plan is a really good way to develop quality software (where nobody EVER gets into the zone let alone stays there). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20542",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7478/"
]
} |
20,564 | There has been a lot of discussion around the excellent question " Will high reputation in Stack Overflow help to get a good job? ". I immediately agreed with JoshK that basically said " No " (I'll will explain why), but Joel chimed in with lot of convincing facts which made me upvote him as well. My question is what other skills (other than being a technical genius) do you require from a developer? To get the job, or to keep it. I believe being a genius is far from being enough. I have met many technical geniuses in various companies I have worked for that impressed me a lot, but sadly in lot of cases, they were simply fired after a few months or put in ivory towers (mainly because of internal mutiny from other developers). I've seen many in personal distress as a result of this which I understand. That's why I'm a big fan of non-technical questions for technical positions. I like to know how the candidate will interact with others (including non-technical employees), how much consideration he will have for the business, if he will work for the desired outcome, and so on. I would like to know what you require from your developers and WHY it's important (after all, you hire someone to write code, don't you? Why would you want him to be assertive?) . Ideally, I'd like you to come up with an example question you would ask during interviews in support of your answer. | Excellent communication skills. If your colleagues cannot read your brain directly, you will need to be able to tell them what you think. Preferrably both verbally and written. EDIT: A way to see them at interview time may be by asking them what their favorite framework for doing X is, and then say that they need to work on a project where X could be used, but it is a political decision to use technology Y (which is clearly older and has some limitations that X solves). If this ends up in an argument about why the political decision is wrong, you have a strong indication of this person not doing well with pragmatic decisions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20564",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
20,653 | This is a dilemma about which I have been thinking for quite a while. I'm a graduate student and my topics of interest are programming language design, code analysis, compilation, etc. So far, this field has been very interesting and rewarding for me, so I was thinking about finding a job in that field and continuing to specialize in it. I feel like it's a relatively solid field which won't "get out of style" anytime soon. I've always thought that in such complex fields it's better to be a real expert than just another guy who superficially understand what the experts are talking about. On the other hand, I feel that by specializing this way I really limit my future option. I have always been a strong believer in multidisciplinary approaches to problems. Maybe I should go search for a general programming job in which I could gain experience in other fields, as well as occasionally apply my favorite field for solving problems. Specializing in only one or two fields can prevent me from thinking outside the box and cause stagnation. I would really like to hear more opinions about this choice. The truth is I'm already leaning towards one of the choices, so basic psychology says nothing will change my mind, but I would still love to hear some feedback. | Specialise if you enjoy it As you are aware, if you specialise you are automatically incurring an opportunity cost in that you won't be immediably eligible for other technologies (e.g. Java programmers don't often immediately get accepted for compiler optimisation jobs). However, you have to balance this with your love of the complexity inherent in your chosen discipline. You say you want to be an expert - well go ahead and take the time to learn your chosen discipline. We as a community always need new experts. However, my advice is to follow the pragmatic programmer recommendation of "Learn a new language every year" . That way, while you're engaging in deep lexical analysis of algorithmic encoding, you can also be churning out a little iPhone app that interests you on the side. You never know, the cross pollenation of different paradigms may cause you some insight that will extend your specialisation into new areas. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20653",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8331/"
]
} |
20,832 | In every interview I have been in, I have been quizzed on mathematical analysis of complexity, including big-O notation. How relevant is big-O analysis to development in industry? How often do you really use it, and how necessary is it to have a honed mindset for the problem? | My question is, how relevant is this test to development in industry? A solid understanding of computational complexity theory (e.g. big O notation) is essential to design scalable algorithms, applications and systems. Since scalability is highly relevant to computing in industry, big O notation is too. How often do you reeeally use it, and how necessary is it to have a honed mindset for the problem? Depends what you mean by "reeeally use it". On the one hand, I never do formal proofs of computational complexity for the software I write. On the other hand, most days I have to deal with applications where scalability is a potential concern, and design decisions include selection of (for example) appropriate collection types based on their complexity characteristics. (I don't know whether it is possible to consistently implement scalable systems without a solid understanding of complexity theory. I would be inclined to think that it is not.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20832",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1388/"
]
} |
20,909 | Simple question, but I often hear these three terms defined with such ferocity, but which have been known to me to mean different things over the years. What are the "correct" definitions of "Procedures", "Methods", "Function", "Subroutines", etc? | I'm going with a different answer here: practically speaking, there's really no difference , with the slight exception that "method" usually refers to a subroutine associated with an object in OO languages. The terms "procedure, function, subroutine, subprogram, and method" all really mean the same thing: a callable sub-program within a larger program. But it's difficult to come up with a definition that captures all variant usages of these terms, because they are not used consistently across programming languages or paradigms. You might say a function returns a value. Well, the following C function doesn't return a value: void f() { return; } ...but I doubt you'd find anyone who would call it a procedure. Sure, in Pascal, procedures don't return values and functions return values, but that's merely a reflection of how Pascal was designed. In Fortran, a function returns a value, and a subroutine returns multiple values. Yet none of this really allows us to come up with a "universal" definition for these terms. In fact, the term "procedural programming" refers to a whole class of languages, including C, Fortran and Pascal, only one of which actually uses the term "procedure" to mean anything. So none of this is really consistent. The only exception is probably "method", which seems to be used almost entirely with OO languages, referring to a function that is associated with an object. Although, even this is not always consistent. C++, for example, usually uses the term "member function" rather than method, (even though the term "method" has crept into the C++ vernacular among programmers.) The point is, none of this is really consistent. It simply reflects the terminology employed by whatever languages are en vogue at the time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20909",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7500/"
]
} |
20,927 | Just as the title says, what is your favorite whiteboard interview problem, and why has it proven effective for you? Junior, senior, Java, C, Javascript, PHP, SQL, pseudo-code, etc. | I ask the candidate to design a solution to a problem I actually encountered in my day to day work. Doing so, I try to create a dialog between me and the candidate. I try to discuss about the design he is building as if I had never thought about the problem before. What I try to evaluate is whether we are able to understand each other, and whether we can talk about a technical problem without confusion. Concrete example (For a java desktop developper) Design an API to handle the navigation history of a web browser (previous page, next page, list the 10 previous pages), and that can be reusable in many parts of the application (here I give concrete examples in our app). Then, sketch up an implementation. I like this one, because it's simple enough, it's easy to illustrate, it can be solved step by step (add additional behaviors without breaking everything), it allows to talk about edge cases and error handling, and it also allows to talk about data structures. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20927",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2314/"
]
} |
20,950 | I find myself using my text editor of choice (vim, nano, gedit, pick your poison) much more often than any IDE as of late. After noticing my ide shortcuts getting dusty I started to think about this and wonder: what justifies use of an IDE for you opposed to a text editor ? For that matter what rationale would you have for not using an IDE and merely relying on an editor? | The I: integration . A good text editor may be nice for writing code, but most of your programming isn't spent writing; it's spent testing and debugging, and for that you want your text editor to integrate with your compiler and your debugger. That's the greatest strength of an IDE. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20950",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/136/"
]
} |
20,988 | In Python's tutorial one can read that Python's original implementation is in C; On the other hand, the Python implementation, written in C, (...) I'm very curious why was Python written in C and not C++? I'd like to know the reasoning behind this decision and the answer should be supported by historical references (and not opinion based). | From everything I've seen, it's a combination of practical and historical reasons. The (mostly) historical reason is that CPython 1.0 was released in 1989. At that time, C was just recently standardized. C++ was almost unknown and decidedly non-portable, because almost nobody had a C++ compiler. Although C++ is much more widespread and easily available today, it would still take a fair amount of work to rewrite CPython into the subset of C that's compatible with C++. By itself, that work would provide little or no real benefit. It's a bit like Joel's blog post about starting over and doing a complete rewrite being the worst mistake a software company can make. I'd counter that by pointing to Microsoft's conversion from the Windows 3.0 core to the Windows NT core, and Apple's conversion from MacOS 9 to Mac OS/X. Neither one killed the company -- but both were definitely large, expensive, long-term projects. Both also point to something that's crucial to success: maintaining both code bases for long enough that (most) users can switch to the new code base at their leisure, based on (at least perceived) benefits. For a development team the size of Python's, however, that kind of change is much more difficult. Even the change from Python 2 to 3 has taken quite a bit of work, and required a similar overlap. At least in that case, however, there are direct benefits to the changes, which rewriting into C++ (by itself) wouldn't (at least immediately) provide. Linus Torvalds's rant against C++ was brought up, so I'll mention that as well. Nothing I've seen from Guido indicates that he has that sort of strong, negative feelings toward C++. About the worst I've seen him say is that teaching C++ is often a disaster -- but he immediately went on to say that this is largely because the teachers didn't/don't know C++. I also think that while it's possible to convert a lot of C code to C++ with relative ease, that getting much real advantage from C++ requires not only quite a bit more rewriting than that, but also requires substantial re-education of most developers involved. Most well-written C++ is substantially different from well-written C to do the same things. It's not just a matter of changing malloc to new and printf to cout , by any stretch of the imagination. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/20988",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8455/"
]
} |
21,133 | Triggered by this thread, I (again) am thinking about finally using unit tests in my projects. A few posters there say something like "Tests are cool, if they are good tests". My question now: What are "good" tests? In my applications, the main part often is some kind of numerical analysis, depending on large amounts of observed data, and resulting in a fit function that can be used to model this data. I found it especially hard to construct tests for these methods, since the number of possible inputs and results is too large to just test every case, and the methods themselves are often quite longish and cannot easily be refactored without sacrificing performance. I am especially interested in "good" tests for this kind of method. | The Art of Unit Testing has the following to say about unit tests: A unit test should have the following properties:
It should be automated and repeatable.
It should be easy to implement.
Once it’s written, it should remain for future use.
Anyone should be able to run it.
It should run at the push of a button.
It should run quickly.
The book later adds that it should be fully automated, trustworthy, readable, and maintainable. I would strongly recommend reading this book if you haven't already. In my opinion, all these are very important, but the last three (trustworthy, readable, and maintainable) especially, since if your tests have these three properties then your code usually has them as well.
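Applied to the numerical-fit scenario in the question, a test with those properties might look like this (an MSTest sketch of my own; LinearFitter, Fit, Slope and Intercept are hypothetical names, not from the book or the question). Fixed, hand-picked input data keeps it repeatable, and a tolerance keeps the assertion trustworthy against floating-point rounding: using Microsoft.VisualStudio.TestTools.UnitTesting;
[TestClass]
public class LinearFitterTests
{
    [TestMethod]
    public void Fit_PointsOnKnownLine_RecoversSlopeAndIntercept()
    {
        // Arrange: points generated from y = 2x + 1, so the expected fit is known up front.
        double[] xs = { 0, 1, 2, 3, 4 };
        double[] ys = { 1, 3, 5, 7, 9 };
        var fitter = new LinearFitter();
        // Act
        var fit = fitter.Fit(xs, ys);
        // Assert: allow floating-point noise instead of demanding exact equality.
        Assert.AreEqual(2.0, fit.Slope, 1e-9);
        Assert.AreEqual(1.0, fit.Intercept, 1e-9);
    }
}
Anyone can run it at the push of a button, and a failure points at a specific, readable property of the fit rather than at the whole analysis. | {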
"source": [
"https://softwareengineering.stackexchange.com/questions/21133",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7769/"
]
} |
21,256 | Java is often found in academia. What is the reason behind that? | A few Universities have somebody who's sufficiently well known that many (if not most) decisions revolve around that person's likes, dislikes, opinions, taste, etc. Just for example, Texas A&M has Bjarne Stroustrup on staff; it probably comes as little surprise to anybody that their curriculum tends to emphasize C++. Most universities are a bit different though. First, decisions are often made much more for the benefit of the faculty than the students. The single biggest criterion in many cases is "which language requires the least effort on our part?" Most of them are also careful in their laziness -- they want not only the language with the fewest advanced concepts to learn, but also one that is (for example) the slowest to innovate, update, or embrace anything new. Second, most decisions are made by committee. This means the final decision is rarely (if ever) what anybody actually wanted -- it's just what the fewest members of the committee (especially those with the most influence) found particularly objectionable. It's like picking a flavor of ice cream. One really likes strawberry, but another is allergic to strawberries. Another really loves chocolate, but somebody else can't stand it. Yet another thinks rum raisin is great, but the other two worry that mentioning "rum" would be interpreted as encouraging alcohol abuse -- so they end up with vanilla, even though it's not what anybody really wanted. Finally, even though it usually runs directly contrary to most of what the previous two criteria would produce in isolation, they generally need (or at least want) to be seen as responsive to the needs of industry. Java is the intersection of these three: Every concept it embodies was well known by 1980. There's not much to know beyond the very most basic notion of what OOP is. It's the tasteless, odorless, non-toxic, biodegradable, politically correct choice. Nearly the only other language in history to have existed as long and (probably) innovated less is SQL. Even though they're hardly what you'd call fast-moving targets, COBOL and Fortran have still both innovated more than Java. It is widely used. When you get down to it, profs and PHBs have similar criteria. Note that I'm not really saying (for example) that there's nothing more to know about Java than the most basic notion of what OOP is -- only that that's all that's needed to do what passes for an acceptable job of teaching it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21256",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26782/"
]
} |
21,300 | To my knowledge, all modern imperative programming languages support recursion in the sense that a procedure can call itself. This was not always the case, but I cannot find any hard facts with a quick Google search. So my question is: Which languages did not support recursion right from the start and when was that support added? | I'm not sure COBOL does (it certainly didn't at one time), but I can't quite imagine anybody caring much either. Fortran has since Fortran 90, but requires that you use the recursive keyword to tell it that a subroutine is recursive. PL/I was pretty much the same -- recursion was supported, but you had to explicitly tell it what procedures were recursive. I doubt there are many more than that though. When you get down to it, prohibiting recursion was mostly something IBM did in their language designs, for the simple reason that IBM (360/370/3090/...) mainframes don't support a stack in hardware. When most languages came from IBM, they mostly prohibited recursion. Now that they all come from other places, recursion is always allowed (though I should add that a few other machines, notably the original Cray 1, didn't have hardware support for a stack either). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21300",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3684/"
]
} |
21,339 | I have been using the http:BL to block bad IP's from accessing my site. If a malicious IP (comment spammer) trys to hit the site I just exit the web script which implicitly returns a 200 OK response. Other responses I could return: 404 - Not found? If I return a 404 maybe the robots will think "this is a waste of time, lets move on to attack another site" which would reduce the load on the site (I currently get about 2 spam-hits per second). However I'm reluctant to return 404's on urls that, under normal circumstances, can be found. I'm not sure if spam robots can 'waste time'. i.e Why would a bot writer be bothered to code for 404's when they just blitz the web anyway? 401 Unauthorized? Blocking a bad IP is not quite the same as "resource requires user authentication 1) which has not yet been provided or 2) which has been provided but failed authorization tests" In general I feel that 'responding to the bad-bots according to proper http protocol' gives the bad guys the upper hand. In the sense that I play by the rules while they do not. On some days I feel like I should do something clever to divert these bot's away. On other days I just think that I should not take it personally and just ignore them. Accepting it as par for the course of running a web site. I dunno - what are your thoughts? How do you respond when you know its a bad IP? | If you want to play by the rules, 403 Forbidden , or 403.6 IP address rejected (IIS specific) would be the correct response. Giving a 200 response (and ignoring the comment) may just increase the load on the server, as the spam bot will presumably continue submitting spam on future occasions, unaware that it is having no effect. A 4XX response at least says "go away you need to check your facts" and is likely to diminish future attempts. In the unlikely event you have firewall access, then a block of blacklisted IP addresses at the firewall would minimize server load / make it appear that your server didn't exist to the spammer. I was going to suggest using a 302 Temporary Redirect to the spammer's own IP address - but this would probably have no effect as there would be no reason for the bot to follow the redirect. If dealing with manually submitted spam, making the spam only visible by the IP address that submitted it is a good tactic. The spammer goes away happy and contented (and does not vary his approach to work around your defences), and the other users never see the spam. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21339",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6720/"
]
} |
21,387 | Previously I was searching for a good TimeLine control for a WPF project. I found an answer here which directed me to this CodePlex project. Now I want to change the code to fit my culture's needs. But there are some mismatches! My question is: How do you interact with so many thousands of lines of code? EDIT: Any shortcuts would be great! | You add comments to the source code when you have understood it enough to be able to do so. Refactor these comments vigorously as you understand more and more. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21387",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4827/"
]
} |
21,403 | I always read about large scale transformation or integration project that are total or almost total disaster. Even if they somehow manage to succeed the cost and schedule blow out is enormous. What is the real reason behind large projects being more prone to failure. Can agile be used in these sort of projects or traditional approach is still the best. One example from Australia is the Queensland Payroll project where they changed test success criteria to deliver the project. See some more failed projects in this SO question ( on Wayback Machine ) Have you got any personal experience to share? | The main reason is an increase in scope , which the book "The Pragmatic Programmer" describes as: feature bloats creeping featurism requirement creep It is an aspect of the boiled-frog syndrome. The idea of the various "agile" method is to accelerate feedback and - hopefully - correct the evolution of the project in time. But the other reason is release management : if you aren't geared toward releasing the project (however imperfect it may be), chances are it will fail (because released too late, with too many buggy features, and harder to fix/update/upgrade). That does not mean you have to have a fixed release date, but that means you must be able at all time to build a running version of your program, in order to test/evaluate/release it. The blog post " Late projects are late one day at a time " contains many more examples: I know the ‘Getting Real’ thing to do would be to Flex the scope and keep the launch date fixed, but that doesn’t work if there is agreed upon functionality that cannot be completed in time. That’s why we don’t advocate specs or “agreed upon functionality.” That’s the root of the problem — saying you know everything about what you need and how its going to be implemented even before the first pixel is painted or line of code is written. When you predict a rigid future on a flexible present you’re in trouble. Rigid futures are among the most dangerous things. They don’t leave room for discovery, emergence, and mistakes that open new doors. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21403",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3414/"
]
} |
21,412 | I'm working in development for 4 years, and 3.5 in PHP - why I don't seem to be able to be selected in an interview. I want to know what special things the interviewer wants to see in candidates - for senior PHP developer roles. Interviewer asks me 10 questions and I'm able to answer only 5. Does selection depend on these things? It doesn't mean that I can't solve the problem, I can google the question, I can ask on forums. Why don't they understand that a man can't remember all the answers for each and every question? Especially programming ones. Please advise. | "Interviewer asks me 10 questions and I'm able to answer only 5. Does selection depend on these things? It doesn't mean that I can't solve the problem, I can google the question, I can ask on forums. Why don't they understand that a man can't remember all the answers for each and every question? Expecially programming ones." These things are very significant and will be a very significant part of the reason. Interviewers do understand that you can't know everything and generally tailor the questions to suit. Generally most questions an interviewer will ask will be the sorts of things they expect a candidate to be able to answer without access to the internet. Why do they expect this standard? A few reasons come to mind: You indicate that you're looking at senior developer roles. Senior
developers are by definition those who have a good level of
knowledge already and can help others out, not those who are
dependent on Google. A programmer who knows this stuff - as opposed to having to post it
on forums - is going to be far more productive than one who relies
on the internet. They're not having to wait for replies, understand
what's been posted and adapt it to their purpose, they're just
getting on and coding. They're obviously finding candidates who can answer these questions
and in that instance wouldn't you hire the guy who got 9 out of 10
over the guy who got 5 out of 10. If they were happy with someone bright who understands the basics
and Googles the rest, you can hire a junior developer for a lot less
money. Personally out of 10 questions for an intermediate or senior role normally I'd expect a candidate to be answering perhaps 8 well and having a fair idea at least one of the others. If you're not hitting that level then I suggest that you're probably applying for jobs a little above your current level and should adjust your expectations. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21412",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7702/"
]
} |
21,463 | When doing TDD and writing a unit test, how does one resist the urge to "cheat" when writing the first iteration of "implementation" code that you're testing? For example: Let's say I need to calculate the Factorial of a number. I start with a unit test (using MSTest) something like: [TestClass]
public class CalculateFactorialTests
{
[TestMethod]
public void CalculateFactorial_5_input_returns_120()
{
// Arrange
var myMath = new MyMath();
// Act
long output = myMath.CalculateFactorial(5);
// Assert
Assert.AreEqual(120, output);
}
} I run this code, and it fails since the CalculateFactorial method doesn't even exist. So, I now write the first iteration of the code to implement the method under test, writing the minimum code required to pass the test. The thing is, I'm continually tempted to write the following: public class MyMath
{
public long CalculateFactorial(long input)
{
return 120;
}
} This is, technically, correct in that it really is the minimum code required to make that specific test pass (go green), although it's clearly a "cheat" since it really doesn't even attempt to perform the function of calculating a factorial. Of course, now the refactoring part becomes an exercise in "writing the correct functionality" rather than a true refactoring of the implementation. Obviously, adding additional tests with different parameters will fail and force a refactoring, but you have to start with that one test. So, my question is, how do you get that balance between "writing the minimum code to pass the test" whilst still keeping it functional and in the spirit of what you're actually trying to achieve? | It's perfectly legit. Red, Green, Refactor. The first test passes. Add the second test, with a new input. Now quickly get to green, you could add an if-else, which works fine. It passes, but you are not done yet. The third part of Red, Green, Refactor is the most important. Refactor to remove duplication . You WILL have duplication in your code now. Two statements returning integers. And the only way to remove that duplication is to code the function correctly. I'm not saying don't write it correctly the first time. I'm just saying it's not cheating if you don't. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21463",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4351/"
]
} |
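To make the red-green-refactor sequence in the answer above concrete, here is a minimal sketch of the next step. It is written in Java with JUnit 4 rather than the C#/MSTest of the question, purely as an illustration; the class and method names mirror the question's hypothetical MyMath example and are not from any real codebase. A second test with a new input turns the hard-coded return value red, and the remove-duplication step is what pushes the implementation to the real computation:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculateFactorialTests {
    // Red: with "return 120;" still in place, this new test fails.
    @Test
    public void calculateFactorial_3_input_returns_6() {
        MyMath myMath = new MyMath();
        assertEquals(6, myMath.calculateFactorial(3));
    }
}

class MyMath {
    // Green could be reached with an if-else on the input, but that duplicates
    // knowledge across branches; the refactor step removes the duplication by
    // computing the factorial for real.
    public long calculateFactorial(long input) {
        long result = 1;
        for (long i = 2; i <= input; i++) {
            result *= i;
        }
        return result;
    }
}

Either way, the answer's point stands: the hard-coded return is a legitimate first green, and it is the remove-duplication discipline, not the first test, that forces the real implementation out.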
21,535 | Almost all developers who work for a large corporation find themselves on the wrong side of site blocking software. It can be both frustrating ( "Just let me read that damn blog!" ) and helpful ( "Woah! Dodged a bullet with that site" ). In your opinion, what is the right level of blocking to apply to developers and why? | No site blocking. If my projects are delivered on time and my productivity is not suffering, I don't see any reason to block anything (except - if you really must block something - well known spyware/malware sites). I don't really have anything else to add except that. We are professionals, not children. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21535",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7167/"
]
} |
21,575 | So your client asks you to write some code, so you do. He then changes the specs on you, as expected, and you diligently implement his new features like a good little lad. Except... the new features kind of conflict with the old features, so now your code is a mess. You really want to go back and fix it, but he keeps requesting new things and every time you finish cleaning something, it winds up a mess again. What do you do? Stop being an OCD maniac and just accept that your code is going to wind up a mess no matter what you do, and just keep tacking on features to this monstrosity? Save the cleaning for version 2? | Deliberately overestimate the time needed for your next features. Use that extra time to clean up. You'll never be able to justify maintenance, and the client needs it regardless, so give them the bitter medicine (slightly increased costs for the next features) so they can get better. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21575",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7390/"
]
} |
21,802 | Getters and setters are often criticized as being not proper OO. On the other hand, most OO code I've seen has extensive getters and setters. When are getters and setters justified? Do you try to avoid using them? Are they overused in general? If your favorite language has properties (mine does) then such things are also considered getters and setters for this question. They are the same thing from an OO methodology perspective. They just have nicer syntax. Sources for Getter/Setter Criticism (some taken from comments to give them better visibility): http://www.javaworld.com/javaworld/jw-09-2003/jw-0905-toolbox.html http://typicalprogrammer.com/?p=23 http://c2.com/cgi/wiki?AccessorsAreEvil http://www.darronschall.com/weblog/2005/03/no-brain-getter-and-setters.cfm http://www.adam-bien.com/roller/abien/entry/encapsulation_violation_with_getters_and http://www.yegor256.com/2014/09/16/getters-and-setters-are-evil.html To state the criticism simply: Getters and Setters allow you to manipulate the internal state of objects from outside of the object. This violates encapsulation. Only the object itself should care about its internal state. And an example Procedural version of code: struct Fridge
{
int cheese;
}
void go_shopping(Fridge fridge)
{
fridge.cheese += 5;
} Mutator version of code: class Fridge
{
int cheese;
void set_cheese(int _cheese) { cheese = _cheese; }
int get_cheese() { return cheese; }
}
void go_shopping(Fridge fridge)
{
fridge.set_cheese(fridge.get_cheese() + 5);
} The getters and setters made the code much more complicated without affording proper encapsulation. Because the internal state is accessible to other objects we don't gain a whole lot by adding these getters and setters. The question has been previously discussed on Stack Overflow: https://stackoverflow.com/questions/565095/java-are-getters-and-setters-evil https://stackoverflow.com/questions/996179 | Having getters and setters does not in itself break encapsulation. What does break encapsulation is automatically adding a getter and a setter for every data member (every field , in java lingo), without giving it any thought. While this is better than making all data members public, it is only a small step away. The point of encapsulation is not that you should not be able to know
or to change the object's state from outside the object, but that you
should have a reasonable policy for doing it. Some data members may be entirely internal to the object, and should
have neither getters nor setters. Some data members should be read-only, so they may need getters but
not setters. Some data members may need to be kept consistent with each other. In
such a case you would not provide a setter for each one, but a single
method for setting them at the same time, so that you can check the
values for consistency. Some data members may only need to be changed in a certain way, such
as incremented or decremented by a fixed amount. In this case, you
would provide an increment() and/or decrement() method, rather
than a setter. Yet others may actually need to be read-write, and would have both a getter
and a setter. Consider an example of a class Person . Let's say a person has a name, a social security number, and an age. Let's say that we do not allow people to ever change their names or social security numbers. However, the person's age should be incremented by 1 every year. In this case, you would provide a constructor that would initialize the name and the SSN to the given values, and which would initialize the age to 0. You would also provide a method incrementAge() , which would increase the age by 1. You would also provide getters for all three. No setters are required in this case. In this design you allow the state of the object to be inspected from outside the class, and you allow it to be changed from outside the class. However, you do not allow the state to be changed arbitrarily. There is a policy, which effectively states that the name and the SSN cannot be changed at all, and that the age can be incremented by 1 year at a time. Now let's say a person also has a salary. And people can change jobs at will, which means their salary will also change. To model this situation we have no other way but to provide a setSalary() method! Allowing the salary to be changed at will is a perfectly reasonable policy in this case. By the way, in your example, I would give the class Fridge the putCheese() and takeCheese() methods, instead of get_cheese() and set_cheese() . Then you would still have encapsulation. public class Fridge {
private List objects;
private Date warranty;
/** How the warranty is stored internally is a detail. */
public Fridge( Date warranty ) {
// The Fridge can set its internal warranty, but it is not re-exposed.
setWarranty( warranty );
}
/** Doesn't expose how the fridge knows it is empty. */
public boolean isEmpty() {
return getObjects().isEmpty();
}
/** When the fridge has no more room... */
  public boolean isFull() {
    // The original answer leaves this body out; a placeholder return keeps the
    // sketch compilable. A real implementation would check the remaining volume.
    return false;
  }
/** Answers whether the given object will fit. */
public boolean canStore( Object o ) {
boolean result = false;
// Clients may not ask how much room remains in the fridge.
if( o instanceof PhysicalObject ) {
PhysicalObject po = (PhysicalObject)o;
// How the fridge determines its remaining usable volume is a detail.
// How a physical object determines whether it fits within a specified
// volume is also a detail.
result = po.isEnclosedBy( getUsableVolume() );
}
return result;
}
/** Doesn't expose how the fridge knows its warranty has expired. */
public boolean isPastWarranty() {
return getWarranty().before( new Date() );
}
/** Doesn't expose how objects are stored in the fridge. */
public synchronized void store( Object o ) {
validateExpiration( o );
// Can the object fit?
if( canStore( o ) ) {
getObjects().add( o );
}
else {
      throw new FridgeFullException( o );
}
}
/** Doesn't expose how objects are removed from the fridge. */
public synchronized void remove( Object o ) {
if( !getObjects().contains( o ) ) {
throw new ObjectNotFoundException( o );
}
getObjects().remove( o );
validateExpiration( o );
}
/** Lazily initialized list, an implementation detail. */
private synchronized List getObjects() {
    if( this.objects == null ) { this.objects = new ArrayList(); }
    return this.objects;
}
/** How object expiration is determined is also a detail. */
private void validateExpiration( Object o ) {
// Objects can answer whether they have gone past a given
// expiration date. How each object "knows" it has expired
// is a detail. The Fridge might use a scanner and
// items might have embedded RFID chips. It's a detail hidden
// by proper encapsulation.
    if( o instanceof Expires && ((Expires)o).expiresBefore( new Date() ) ) {
throw new ExpiredObjectException( o );
}
}
/** This creates a copy of the warranty for immutability purposes. */
private void setWarranty( Date warranty ) {
assert warranty != null;
    this.warranty = new Date( warranty.getTime() );
  }
  /** Internal accessor used by isPastWarranty(); the date is never re-exposed. */
  private Date getWarranty() {
    return this.warranty;
  }
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21802",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1343/"
]
} |
21,870 | Is it possible to correctly call yourself (or your team) "Agile" if you don't do TDD (Test-Driven Development)? | Yes, yes, yes, a million times yes. Agile is a philosophy, TDD is a specific methodology. If I wanted to be really picky I could simply point out that there are quite a few variations of xDD - which their advocates will explain in depth are not TDD - but those are still substantially bound up with test first so that would be cheating. So let's say this - you can be agile without doing "test first" development (look at the way scrum works - nowhere in there are there specifics about how you write code). Look at a kanban board, look at all sorts of agile methodologies. Do you want unit tests? Of course you do, for all kinds of reasons - and you might well make an argument that you can't be agile without unit tests (although I suspect that you can be) - but you don't have to write them first to be agile. And finally, it's equally true that you could do Test First without being Agile, and there are strong arguments for doing test first regardless of your overall dev philosophy. It seems that others (with a more SOLID rep) have a similar opinion... http://www.twitter.com/unclebobmartin/status/208621409663070208 @unclebobmartin: http://t.co/huxAP5cS Though it's not impossible to do
Agile without TDD and OOD, it is difficult. Without TDD the iteration
rate of... (The link in the tweet is to the full answer on LinkedIn) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21870",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4351/"
]
} |
21,926 | The stereotypical view is that a programmer can't do graphics very well, from what I've read and seen. However, I love programming (preferably OOP, PHP, C++, Objective-C) and can't deny the fact I have a unique taste in web design and others have said I am doing well at it (CSS). I thought to myself "Hey, wait, I'm a programmer - how can I design well?". Question is: is it possible to be good at programming and designing? Does anyone here feel the same? For the record: actual images I have created have been called programmer art several times before by friends | Well, why not? Lots of people have multiple talents. But the amount of time that you devote to a particular skill does make a difference. Spending more time on one skill means you have to spend less time on another, and spending less time means being less competent. For my part, I have spent the vast majority of my time on coding, not design. As such, I am a pretty good programmer, but have stick-figure design skills (although I do believe I know good design when I see it). Good design means more than just looking pretty; it also means making an application that is intuitive and easy to use. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21926",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8682/"
]
} |
21,943 | We're all familiar with the Java package name convention of turning the domain name around. I.e. www.evilcorp.com would, by convention, choose to have their java packages com.evilcorp.stuff . Increasingly I'm getting fed up with this. As a commercial programmer, I encounter time and again that the software package name is completely irrelevant due to some rebrand, acquisition or similar. In the opensource world there are fewer name changes so there it makes sense. However it seems to me the shelf life of many pieces of (commercial/internal) software is much longer than that of the organisation making them. The problem is often made worse by software projects taking the marketing department's lead to use the name du jour they use to refer to a certain project. A name that will, without fail, change 3 months down the line to make the emperor's new clothes feel fresh and new. Because of this, I've mostly stopped using the reverse domain as package name. Granted, if this is done on a large scale, there's risk of name collisions, but surely this is mitigated by either using "unique" software names, avoiding generic words, or using the reverse domain for projects intended to be sold/released as libraries. Other thoughts? | I'm going to quote the advice Microsoft gives for namespaces (.NET's packages), which doesn't have the domain name convention. I think it's good advice for Java packages too, since I don't believe that a domain name represents a solid and stable identity. The general format for a namespace name is as follows: <Company>.(<Product>|<Technology>)[.<Feature>][.<Subnamespace>] For example, Microsoft.WindowsMobile.DirectX . Do prefix namespace names with a company name to prevent namespaces from different companies from having the same name and prefix. Do use a stable, version-independent product name at the second level of a namespace name. Do not use organizational hierarchies as the basis for names in namespace hierarchies, because group names within corporations tend to be short-lived. The namespace name is a long-lived and unchanging identifier. As organizations evolve, changes should not make the namespace name obsolete. If even your company name is unstable, you might want to just start with the product name. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21943",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8688/"
]
} |
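Translated to Java, the guideline quoted above amounts to starting the package name with a stable company or product identifier instead of a reversed domain. The names below are invented purely for illustration:

// <Company>.<Product>[.<Feature>] rather than com.<current-domain-name>.*
// "acme" and "reportengine" are hypothetical names, not a real organisation.
package acme.reportengine.export;

public final class PdfExporter {
    // If even the company name is unstable, start at the product level instead,
    // e.g. "package reportengine.export;".
}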
21,977 | Back in the "good ol' days," when we would copy shareware onto floppies for friends, we also used a fair bit of assembly. There was a common practice of "micro-optimization," where you would stare and stare at lines of assembly until you figured out a way to express it in one fewer instruction. There was even a saying, which was mathematically impossible, that " You can always remove one more instruction. " Given that changing runtime performance by small constant factors isn't a major issue for (most) programming today, are programmers transferring these micro-optimization efforts elsewhere? In other words, Can a best-practice be taken to an extreme state where it's no longer adding anything of value? And instead is wasting time? For example: Do programmers waste time generalizing private methods that are only called from one place? Is time wasted reducing test case data? Are programmers (still) overly concerned about reducing lines of code? There are two great examples of what I'm looking for below: (1) Spending time finding the right variable names, even renaming everything; and (2) Removing even minor and tenuous code duplication. Note that this is different from the question " What do you optimize for? ", because I'm asking what other programmers seem to maximize, with the stigma of these being "micro" optimizations, and thus not a productive use of time. | Code Formatting Don't get me wrong ,
code should be consistent &
readable ,
but some take it too far. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21977",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6329/"
]
} |
21,987 | I know there have been questions like What is your favorite editor/IDE? , but none of them have answered this question: Why spend the money on IntelliJ when Eclipse is free? I'm personally a big IntelliJ fan, but I haven't really tried Eclipse. I've used IntelliJ for projects that were Java, JSP, HTML/CSS, Javascript, PHP, and Actionscript, and the latest version, 9, has been excellent for all of them. Many coworkers in the past have told me that they believe Eclipse to be "pretty much the same" as IntelliJ, but, to counter that point, I've occasionally sat behind a developer using Eclipse who's seemed comparably inefficient (to accomplish roughly the same task), and I haven't experienced this with IntelliJ. They may be on par feature-by-feature but features can be ruined by a poor user experience, and I wonder if it's possible that IntelliJ is easier to pick up and discover time-saving features. For users who are already familiar with Eclipse, on top of the real cost of IntelliJ, there is also the cost of time spent learning the new app. Eclipse gets a lot of users who simply don't want to spend $250 on an IDE. If IntelliJ really could help my team be more productive, how could I sell it to them? For those users who've tried both, I'd be very interested in specific pros or cons either way. | I work with Intellij (9.0.4 Ultimate) and Eclipse (Helios) every day and Intellij beats Eclipse every time. How? Because Intellij indexes the world and everything just works intuitively. I can navigate around my code base much, much faster in Intellij. F3 (type definition) works on everything - Java, JavaScript, XML, XSD, Android, Spring contexts. Refactoring works everywhere and is totally reliable (I've had issues with Eclipse messing up my source in strange ways). CTRL+G (where used) works everywhere . CTRL+T (implementations) keeps track of the most common instances that I use and shows them first. Code completion and renaming suggestions are so clever that it's only when you go back to Eclipse that you realise how much it was doing for you. For example, consider reading a resource from the classpath by typing getResourceAsStream("/ at this point Intellij will be showing you a list of possible files that are currently available on the classpath and you can quickly drill down to the one you want. Eclipse - nope. The (out of the box) Spring plugin for Intellij is vastly superior to SpringIDE mainly due to their code inspections. If I've missed out classes or spelled something wrong then I'm getting a red block in the corner and red ink just where the problem lies. Eclipse - a bit, sort of. Overall, Intellij builds up a lot of knowledge about your application and then uses that knowledge to help you write better code, faster. Don't get me wrong, I love Eclipse to bits. For the price, there is no substitute and I recommend it to my clients in the absence of Intellij. But once I'd trialled Intellij, it paid for itself within a week so I bought it, and each of the major upgrades since. I've never looked back. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/21987",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2314/"
]
} |
22,183 | I have a thought that I tried asking at SO, but it didn't seem like the appropriate place. I think that source sites like Google Code, GitHub, SourceForge... have played a major role in the history of programming. However, I found that there is another bad thing about these kinds of sites, and that is that you may just "copy" code from almost anyone, not knowing if it is good (tested) source or not. This line of thought has led me to believe that source code websites tend to lead many developers (most likely inexperienced) to copy/paste massive amounts of code, which I find just wrong. I really don't know how to focus the question well, but the basic thought would be: Is this ok? Is Open Source contributing to that, or am I just seeing ghosts... Hope people get interested because I think this is an important theme. | Correlation doesn't imply causation. Developers copy/paste code they don't understand because they're bad developers. The availability of such code doesn't turn good developers bad. If there were no open source projects, there would still be forum posts with code snippets or programming books with examples. So we're back to my first paragraph: bad developers will find a way to be bad at writing code. The blame for copying and pasting code lies with the developers who do it, not with the source code repositories. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/22183",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8750/"
]
} |
22,416 | I am being approached with a job for writing embedded C on microcontrollers. At first I would have thought that embedded programming is too low on the software stack for me, but maybe I am thinking about it wrong. Normally I would have shrugged off an opportunity to write embedded code, as I don't consider myself an electrical engineer. Is this a bad assumption? Am I able to write interesting and useful software for embedded systems, or will I kick myself for dropping too low on the software stack? I went to school for computer science and really enjoyed writing a compiler, thinking about concurrent algorithms, designing data structures, and developing frameworks. However, I am currently employed as a web developer, which doesn't scream the interesting things I just described. (I currently deal with issues like: "this check box needs to be 4 pixels to the left" and "this date is formatted wrong".) I appreciate everyone's input. I know I have to make the decision for myself, I just would like some clarification on what it means to be an embedded programmer, and if it fits what I find to be interesting. | If you want to be good at working on embedded systems, then yes, you need to think like an EE some of the time. That is generally when you are writing code to interface with the various peripherals (serial busses like UART, SPI, I2C or USB), 8 and 16-bit timers, clock generators, and ADCs and DACs. "Datasheets" for microcontrollers often run into the hundreds of pages as they describe every bit of every register. It helps to be able to read a schematic so you can probe a board with an oscilloscope or logic analyzer. At other times, it is just writing software. But under tight constraints: often you won't have a formal OS or other framework, and you might have only a few KB of RAM, and maybe 64 KB of program memory. (These limits are assuming you are programming on smaller 8 or 16-bit micros; if you are working with embedded Linux on a 32-bit processor, you won't have the same memory constraints but you will still have to deal with any custom peripheral hardware that your Linux distro doesn't provide drivers for.) I have a background in both EE and CS so I enjoy both sides of the coin. I also do some web programming (mostly PHP), and desktop apps (C# and Delphi), but I've always enjoyed working on embedded projects the most. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/22416",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7426/"
]
} |
22,468 | What are some data structures that should be known by somebody involved in bioinformatics? I guess that everyone is expected to know about lists, hashes, balanced trees, etc, but I expect that there are domain specific data structures. Is there any book devoted to this subject? Thanks,
Lucian | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/22468",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1023/"
]
} |
22,642 | I have seen several times on this site posts that decry Java's implementation of generics. Now, I can honestly say that I have not had any issues with using them. However, I have not attempted to make a generic class myself. So, what are your issues with Java's generic support? | Java's generic implementation uses type erasure . This means that your strongly typed generic collections are actually of type Object at runtime. This has some performance considerations as it means primitive types must be boxed when added to a generic collection. Of course the benefits of compile time type correctness outweigh the general silliness of type erasure and obsessive focus on backwards compatibility. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/22642",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6415/"
]
} |
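A small, self-contained sketch of what erasure looks like in practice; both observations below follow directly from how Java generics are compiled:

import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> numbers = new ArrayList<>();

        // The type parameters are erased, so both lists share one runtime class.
        System.out.println(strings.getClass() == numbers.getClass()); // prints true

        // Primitives cannot be type arguments, so the int is boxed to an Integer.
        numbers.add(42);
    }
}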
22,769 | What language, in your opinion, allows the average programmer to output features with the least amount of hard-to-find bugs? This is of course, a very broad question, and I'm interested in very broad and general answers and wisdoms. Personally I find that I spend very little time looking for strange bugs in Java and C# programs, while C++ code has its distinct set of recurring bugs, and Python/similar has its own set of common and silly bugs that would be detected by the compiler in other languages. Also I find it hard to consider functional languages in this regard, because I've never seen a big and complex program written in entirely functional code. Your input please. Edit: Completely arbitrary clarification of hard-to-find bug: Takes more than 15 minutes to reproduce, or more than 1 hour to find cause of and fix. Forgive me if this is a duplicate, but I didn't find anything on this specific topic. | The more powerful the type system of the language, the more bugs will be caught at the compile time itself. The following figure compares some of the well known programming languages in terms of the power, simplicity, and safety of their type systems. [ Source ] *Factoring in the ability to use unsafe constructs. C# gets stuffed into the unsafe row because of the "unsafe" keyword and associated pointer machinery. But if you want to think of these as a kind of inline foreign function mechanism feel free to bump C# skyward. I've marked Haskell '98 as pure but GHC Haskell as not pure due to the unsafe* family of functions. If you disable unsafe* then jump GHC Haskell up accordingly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/22769",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8854/"
]
} |
22,809 | Why would someone use his own time to develop an open-source project for free and without compensation? | For small projects, reasons might be "hobby", "getting some experience", "fame", "joy" etc. but that's not how the big open source projects like Mozilla, OpenOffice, Linux work. Why did Sun buy StarDivision and make StarOffice an open source program (called OpenOffice.org)? Why does Mozilla create a top-notch browser and give it away as open source? Why are there people creating Linux, writing drivers and whatnot, and making it available to everyone for free? Why does Microsoft create opensource drivers for Linux so it can run better in MS's virtualisation? Because it makes some business sense for them. They make money that way, or at least plan to. In some cases, the dominance of MS's products, i.e. Windows, Office, Internet Explorer, was the reason to create a competing product, so it would be harder for MS to use their desktop dominance to conquer other domains, i.e. servers, internet services, too. This explains, to some extent, OpenOffice.org and Mozilla. In other cases, open source software is meant to drive sales of hardware, other software or services. Open Source drivers obviously help to sell hardware components to Linux users. RedHat sells support for their Linux distro, and they sell the fact that their Linux is genuine RedHat. Other products, e.g. Oracle, are certified for use on Redhat, but not on CentOS, even though it probably runs just-as-well. Server hardware is certified for Redhat, even though other linux distros probably run just-as-well. Big-money clients don't care about the price, they want the certificate. Some companies, e.g. Google, sponsor many open source projects, because it helps their business. They don't do it for altruism. They want a free internet, a pervasive internet, a widespread internet, where people use Google's services so Google generates revenue. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/22809",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7498/"
]
} |
22,819 | Suppose I have some code void some_block(void)
{
some_statement_1();
some_statement_2();
some_statement_3();
} Now I wish to out-comment statement #2, but here I never can find a way where it feels right. Usually I will do the following: void some_block(void)
{
some_statement_1();
// some_statement_2();
some_statement_3();
} But now we got what looks like extra indentation of statement #2, yet it is arguable whether the indentation is still "correct". Alternatively we could do: void some_block(void)
{
some_statement_1();
//some_statement_2();
some_statement_3();
} This looks a bit better, yet still somehow also wrong. However the following just looks misplaced: void some_block(void)
{
some_statement_1();
//some_statement_2();
some_statement_3();
} How do you outcomment code and keep "correct" indentation? Update It seems most people who have provided answers did not answer my question, but simply state that it is bad practice to leave dead code around. While I completely agree with that, that was not the point of the question. Right now I am facing a case of example where I have 5 helper functions, that manage 5 similar but slightly different structs for me. I use these helper functions to make my code easier to read. But, at the moment (and for the next month probably) one of the structs are not going to be used. And I cannot commit anything which causes compiler warnings, so I have to outcomment that part of the code and commit it like that. Also I forgot this last variant, which I think just looks plain wrong: void some_block(void)
{
some_statement_1();
// some_statement_2();
some_statement_3();
} | Format it so it sticks out and makes your code look so ugly that you'll be compelled to delete it. Comments are for comments and not for dealing with unnecessary code; that's the job of version control. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/22819",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3666/"
]
} |
23,021 | Would you refactor your app as you go, or focus on completing the app first? Refactoring will mean progress of the app will slow down. Completing the app will mean you get a possibly very hard to maintain app later on? The app is a personal project. I don't really know how to answer "What drives the functionality and design", but I guess it's to solve inefficiencies in current software out there. I like minimal, easy-to-use software too. So I am removing some features and adding some that I feel will help. | Make it work. Then make it fast. Finally, make it beautiful. If you have good separation between your code (presentation, business, and data layers) using interfaces, and it's not a monolithic design, then refactoring should not be that difficult. If you're having that much difficulty refactoring, that's probably a code-smell -- I suggest you look at the SOLID principles | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23021",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/526/"
]
} |
23,064 | As most people agree, encouraging developers to make fast code by giving them slow machines is not a good idea . But there's a point in that question. My dev machine is fast, and so I occasionally write code that's disturbingly inefficient, but that only becomes apparent when running it on other people's machines. What are some good ways to temporarily slow down a turbocharged dev machine? The notion of "speed" includes several factors, for example: CPU clock frequency. Amount of CPU cores. Amount of memory and processor cache. Speed of various buses. Disk I/O. GPU. etc. | Run your tests in a virtual machine with limited memory and only one core. The old machines people still may have now are mostly Pentium 4 era things. That's not that unrealistic - I'm using one myself right now. Single core performance on many current PCs normally isn't that much better, and can be worse. RAM performance is more important than CPU performance for many things anyway, and by limiting a little more harshly than for an old 1GB P4, you compensate for that a bit. Failing that, if you're willing to spend a bit, buy a netbook. Run the tests on that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23064",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2550/"
]
} |
23,075 | We've all been there: Your project failed or got cancelled. The code you spent days working on got rejected by your team. The design pattern you introduced to the team created chaos. Everyone ignores your ideas. My question is, what is the most productive way for a programmer to handle development-related failures such as these? | Your project failed. Software development is highly prone to project failures, and depending on the severity, this is best handled by management. Many projects have failed and many more will fail, so take notes! Learn why your project failed so you don't make the same mistakes next time. You learn much more from your failures than from your successes. What you have spent coding days on was rejected by your team. Save your work (for later). There are two possibilities: (a) It sucks, and the fact multiple people responded the same way is an indication of this; (b) It's truly genius work, but far ahead of what people are used to or can understand. People generally do not like what they do not understand. Perhaps it's better to show it when the time is right OR in a different place with a different "Culture". Nobody listens to your ideas in your company. It's probably a bad idea, OR the culture is not aligned with your thinking. Either move to a place that supports your culture or critically evaluate your idea again (objectively without your own bias) -> is my idea really that good? <- Kill thy ego. The design pattern you introduced with force in your team created a mess. Be honest, you tried your best but it did not turn out how you planned it. It may be better to start again or learn from the mistakes made in the design as a team and move forward. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23075",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
23,121 | How is a Ninja programmer defined? If it is just an experienced developer, then is there really no better way to describe that? Ninja sounds like a childish marketing ploy designed to attract developers with delusions of self-grandeur. Or am I missing something? In particular I want to know what qualities of a Ninja are desired in programmers that make the comparison valid (besides the coolness factor)? I did find this article which makes some good comparisons between Ninja characteristics and Agile development. However, I would like to hear from people who have used the word Ninja for hiring purposes and what their motivation was behind it. Update: There were several good points raised on both sides of the argument. I've tried to summarize these in my blog post . I've chosen JB's answer as the accepted one because it summarizes all the valid reasons for making the comparison. | To my mind there are a few parallels between a Ninja and a good programmer: Hidden weapons/unorthodox style - If you ever saw the G.I. Joe cartoon from the 1980s you may remember the character of Storm Shadow . He had a sword and some throwing stars but with just the right sequences of punches and kicks managed to take apart a tank in an episode. In a similar manner, programmers can get called in to do some work that requires them to pull out weapons you may not have thought they had. This is akin to the "Hacker" point that has been mentioned in a few places. Honor/respect - Granted that I've seen this more in the Western portrayals of a ninja such as G.I. Joe or Teenage Mutant Ninja Turtles, but there seemed to be this attitude of honoring one's enemy, possibly taken from "Art of War." Good programmers can respect that there may be better ones out there somewhere. Humility is also in this in a sense. Implied mastery - There may be this assumption that all ninjas are masters and thus have really honed their skills to know how to fight well. Good programmers may have similarly mastered some skills that are quite beneficial in getting the job done. Those are the few beyond the superficial, "Oh that sounds cool," implications as there had to be some meat for it to stick. While I'd agree that Samurai may be the more correct title, once things have been out for a while in an incorrect form one has to reconcile things here, as the TMNT aren't really what most would consider covert mercenaries, which is what Ninjas technically are. However, for those of us that grew up with cartoon depictions of Ninjas in this positive light this is what may stick for us and so there is also an element of nostalgia here too. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23121",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7263/"
]
} |
23,146 | I am having a disagreement with a client about the user authentication process for a system. The nub of it is that they want each user to have a globally unique password (i.e. no two users can have the same password). I have wheeled out all the obvious arguments against this: it's a security vulnerability, it confuses identification with authentication, it's pointless, etc., but they are insisting that there is nothing wrong with this approach. I have done various Google searches looking for authoritative (or semi-authoritative, or even just independent) opinions on this, but can't find any (mainly it's just such an obvious faux pas that it doesn't seem worth warning against, as far as I can tell). Can anybody point me towards any such independent opinion, please? EDIT: Thanks for all your answers, but I already understand the problems with this proposed approach/requirement, and can even explain them to the client, but the client won't accept them, hence my request for independent and/or authoritative sources. I'd also found the Daily WTF article, but it suffers from the problem that Jon Hopkins has pointed out - that this is such a self-evident WTF that it doesn't seem worth explaining why. And yes, the passwords are going to be salted and hashed. In which case global uniqueness might well be difficult to ensure, but that doesn't solve my problem - it just means that I have a requirement that the client won't budge on, that's not only ill-advised, but is also difficult to implement. And if I was in a position to say "I'm not budging on salting and hashing", then I'd be in a position to say "I'm not implementing globally unique passwords". Any pointers to independent and/or authoritative sources for why this is a bad idea still gratefully received... | Whenever a user tries to create a password that already exists, they receive feedback that some other user already uses that password - usually a violation of the privacy agreement. Next to that, usernames are much easier to guess (and if there is a forum, you could just find a lot of usernames there) and you're giving the user hints about ways to hack the website. There should be some page on the internet somewhere that describes the privacy agreement violation part; other than that it's just common sense: they'd basically be giving someone a key and a list of home addresses. EDIT: Not close to authoritative, but perhaps helpful after you explain to them what WTF means: http://thedailywtf.com/Articles/Really_Unique_Passwords.aspx | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23146",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2180/"
]
} |
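To illustrate the tension the question's EDIT already hints at - salted, hashed storage versus a global-uniqueness check - here is a hedged sketch in Java. The class and method names are invented for the example, and raw SHA-256 is used only to keep it short; a real system should use a dedicated password-hashing scheme such as bcrypt, scrypt, PBKDF2 or Argon2:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Base64;

public class PasswordStore {

    // Each user gets an independent random salt.
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // The stored value depends on both the salt and the password.
    public static String hash(String password, byte[] salt) throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.update(salt);
        byte[] hashed = digest.digest(password.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(hashed);
    }

    // Because salts differ per user, identical passwords produce different stored
    // hashes, so enforcing "no two users share a password" means either re-hashing
    // the candidate against every user's salt or keeping an unsalted digest of all
    // passwords - both of which undercut the point of salting in the first place.
}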
23,276 | What skills, knowledge and talents should a junior programmer aim to obtain in order to be qualified to become a team leader? | The best team leaders I've seen have all been dynamite programmers. But they've all had several other qualities, which are harder to define: wisdom, good judgment, good people skills (friendly and pleasant but not a pushover), dedication, commitment, and — most important of all — knowing how to give credit to others. Such people are natural leaders. The worst qualities you could cultivate are arrogance, always having to be right, always having to have the final say, being a glory hound, one-upmanship, having a huge, bruiseable ego, never admitting you're wrong, deflecting blame onto others while hogging all the credit for yourself, and, worst of all, competing with the people who work for you. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/23276",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4091/"
]
} |