source_id (int64, 1 to 4.64M) | question (string, 0 to 28.4k chars) | response (string, 0 to 28.8k chars) | metadata (dict)
---|---|---|---|
32,482 | So can an algorithm be patented? I saw this statement which made me think: "Everybody would abstain from patenting the improvements of contour dot algorithm for at least several years, say up till 2021. So that the developers of the Outliner project feel free to implement their ideas." - taken from this codeplex project. | Yes, legally they can be patented (in many, but not all, countries). Patents have been around for a very long time, and the idea is exactly as you describe: to protect your invention so that you have time to build it, market it and profit from it. Without patents, you might invent something and then someone with more resources and money could come along, build your invention, and by the time you were ready to sell it, they'd have already cornered the market. Many people believe that the same protections are not required for software, because - generally speaking - it does not take a lot of effort to "build it". When you're talking about real-world objects, you need to have a factory to manufacture it, you need machines, you need employees, you need a distribution network and so on. If you can't get those things, then you could license your patented idea to someone who did have those things, and they could do all of that extra stuff for you. But with software, anybody with a compiler and an internet connection can build and distribute the software, so there is less of a need to "protect" the invention to give you time to set up your distribution network and whatnot. Then there's also the problem that the people in the patent office are generally simply not qualified to determine whether a particular software invention is patentable or not, leaving it up to the courts to decide whether a patent was valid when the owner tries to assert their rights to it. That means if you're a small company and you "infringe" on an invalid patent, you likely don't have the resources to fight the patent anyway (even if it's invalid). But let's not go into that particular debate :-) I could go on for days... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32482",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7005/"
]
} |
32,490 | I am not sure if this is even possible, but I have watched a few videos with programming examples where it seems like the program is being written in some kind of command prompt rather than a nice graphical IDE. I'm just curious as to what might be going on in these videos. Is it possible to write a program without an IDE? Here are two examples: http://www.youtube.com/watch?v=hFSY9cWjO8o (@ 6 min) http://www.youtube.com/watch?v=tKTZoB2Vjuk (@ 5 min) Could anyone explain how this is done? Thank you all for the great feedback! | All you need to write a program is a text editor and a compiler (or an interpreter if you're writing in a non-compiled language). Code is usually just plain text. Really, you could write any program imaginable using Windows Notepad and a command-line C compiler. A lot of programmers don't even use IDEs. I personally used Gedit (a basic Linux text editor with syntax highlighting) for the longest time before I finally switched to Eclipse. In fact, I still use Gedit when I want to write a simple program. Sometimes I'll even just use nano if I want to whip up a quick script, because I'm too impatient to wait for an IDE to load. (A minimal sketch of this editor-plus-command-line workflow follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32490",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10922/"
]
} |
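To make the answer above concrete: code is just plain text, so a complete program can be written in any editor and run from a command prompt. The file below is a hypothetical example, not taken from the linked videos; Python is used purely for illustration, and the same workflow applies to compiled languages.

```python
# greet.py - written in any plain-text editor (Notepad, Gedit, nano, ...)
# and run from a command prompt with:  python greet.py
# For a compiled language the extra step is just e.g.:  gcc greet.c -o greet

def main():
    # Ask for a name and print a greeting - no IDE involved at any point.
    name = input("What is your name? ")
    print(f"Hello, {name}!")

if __name__ == "__main__":
    main()
```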
32,533 | In college, I was never interested in theory. I never read it. No matter how much I tried, I was unable to read material without knowing how it applied practically. For example, in my course on automata theory, my professor told me everything possibly related to the mathematical aspect of it, but not even once did he mention where it would be used practically. This is just an example. I managed to pass college and also interned with a company, where I did a project; thankfully they didn't bother about my grades, as they were above average. Now, I am interested in knowing which subjects a CS student must absolutely and positively be aware of - subjects that have relevance in the industry. This is because I have some free time on my hands, and it would help me to have a good understanding of them. What are your suggestions? Like, for one, algorithms is one subject. | Believe it or not, one of the things that turned out to be of critical importance to me in later life was Compiler Construction. Not the modern namby-pamby version using Lex and Yacc, that's for dummies. REAL compiler construction, where you write your own symbol scanner and parser from the ground up. This is something I thought I would never, ever, use again. But over the last 20 years that course has proven to be worth its weight in gold four times over. Every command processor I've had to write, every incoming-message scanner, every user dispatcher, every script interpreter, has used the principles from that course. Do it that way and life is sweet, clear and simple. AND I even gave all the info to a colleague who had not done it - he had to actually write a compiler for an abstract machine, which I might add has gone on to be very commercially successful. If I had to go up and thank a university course lecturer in any one subject, this would be it. Without that I would have got by, but my solutions would have been much, much uglier. (And before somebody jumps up and says "well, you could have used lex and yacc...", the answer is: perhaps - it depends a lot on the system. In some cases the programming languages were not C (e.g. PL/M and Ada), and in some cases no Lex or Yacc was readily available for the platform. Knowing the basics means a solution is at hand, instead of wringing your hands trying to figure out how to bend some tool to fit the problem.) (A toy hand-rolled scanner sketch follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32533",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8163/"
]
} |
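The answer above credits hand-written symbol scanners and parsers; as a rough, hypothetical sketch of what such a hand-rolled scanner looks like (the token kinds and the input string are invented for illustration and are not the course material or the commercial compiler described above):

```python
import re

# Toy hand-written scanner: turn a command string into (kind, value) tokens.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=()]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{kind}>{pattern})" for kind, pattern in TOKEN_SPEC))

def tokenize(text):
    for match in MASTER.finditer(text):
        if match.lastgroup != "SKIP":          # ignore whitespace
            yield match.lastgroup, match.group()

# A scanner like this feeds a recursive-descent parser - the combination the
# answer credits for command processors, message scanners, and interpreters.
print(list(tokenize("rate = base + 42")))
```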
32,578 | I know this subject is a bit controversial and there are a lot of various articles/opinions floating around the internet. Unfortunately, most of them assume the person doesn't know what the difference between NULL and an empty string is. So they tell stories about surprising results with joins/aggregates and generally give a slightly more advanced SQL lesson. By doing this, they absolutely miss the whole point and are therefore useless for me. So hopefully this question and all answers will move the subject a bit forward. Let's suppose I have a table with personal information (name, birth, etc.) where one of the columns is an email address with varchar type. We assume that for some reason some people might not want to provide an email address. When inserting such data (without email) into the table, there are two available choices: set the cell to NULL or set it to an empty string (''). Let's assume that I'm aware of all the technical implications of choosing one solution over another and I can create correct SQL queries for either scenario. The problem is that even though both values differ on the technical level, they are exactly the same on the logical level. After looking at NULL and '' I came to a single conclusion: I don't know the email address of the guy. Also, no matter how hard I tried, I was not able to send an e-mail using either NULL or an empty string, so apparently most SMTP servers out there agree with my logic. So I tend to use NULL where I don't know the value and consider the empty string a bad thing. After some intense discussions with colleagues I came up with two questions: am I right in assuming that using an empty string for an unknown value causes the database to "lie" about the facts? To be more precise: using SQL's idea of what is a value and what is not, I might come to the conclusion that we have an e-mail address, just by finding out it is not null. But then later on, when trying to send the e-mail, I'll come to the contradictory conclusion: no, we don't have an e-mail address; that @!#$ database must have been lying! Is there any logical scenario in which an empty string '' could be such a good carrier of important information (besides value and no value) that it would be troublesome/inefficient to store it any other way (like an additional column)? I've seen many posts claiming that sometimes it's good to use an empty string along with real values and NULLs, but so far I haven't seen a scenario that would be logical (in terms of SQL/DB design). P.S. Some people will be tempted to answer that it is just a matter of personal taste. I don't agree. To me it is a design decision with important consequences. So I'd like to see answers where the opinion is backed by some logical and/or technical reasons. | I would say that NULL is the correct choice for "no email address". There are many "invalid" email addresses, and "" (empty string) is just one. For example "foo" is not a valid email address, "a@b@c" is not valid and so on. So just because "" is not a valid email address is no reason to use it as the "no email address" value. I think you're right in saying that "" is not the correct way to say "I don't have a value for this column". "" is a value. An example of where "" might be a valid value, separate from NULL, could be a person's middle name. Not everyone has a middle name, so you need to differentiate between "no middle name" ("", the empty string) and "I don't know if this person has a middle name or not" (NULL). There are probably many other examples where an empty string is still a valid value for a column. (A small sketch of this distinction follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32578",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11457/"
]
} |
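A small sketch of the distinction discussed in the entry above, using Python's built-in sqlite3 module; the table and the middle-name convention are invented for illustration (NULL = unknown, '' = known to be absent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, middle_name TEXT)")
conn.executemany(
    "INSERT INTO person VALUES (?, ?)",
    [("Alice", "Marie"),   # known middle name
     ("Bob",   ""),        # known to have no middle name
     ("Carol", None)],     # unknown - stored as NULL
)

# SQL treats NULL and '' differently, which is the point of the answer.
unknown   = conn.execute("SELECT name FROM person WHERE middle_name IS NULL").fetchall()
no_middle = conn.execute("SELECT name FROM person WHERE middle_name = ''").fetchall()
print(unknown)    # [('Carol',)]
print(no_middle)  # [('Bob',)]
```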
32,581 | If you had a colleague who didn't understand the benefits of Separation of Concerns, or didn't understand it quite enough to apply it consistently in their daily work, how would you explain it to them? | Imagine you have a program which has been released. A customer comes along and offers to pay you for an enhancement to one of its features. In order to get the money, you will need to change your program to add the new feature. Some of the things that will influence your profit margin are: how much code you have to change, how easy it is to make the changes, how likely you are to break existing features that are being used by other customers, and how much of your existing model/architecture you can reuse. Separation of concerns helps you to get more positive answers to these questions. If all of the code for a particular behaviour of the application is separated out, then you will only have to change code directly associated with your new feature, which should be less code to change. If the behaviours you are interested in are neatly separated from the rest of the application, it is more likely you will be able to swap in a new implementation without having to fully understand or manipulate the rest of the program. It should also be easier to find out which code you need to change. Code that you do not have to change is less likely to break than code that you do change, so splitting up the concerns helps you to avoid breakage in unrelated features by preventing you from having to change code that they could call. If your features are mixed up together, you might change the behavior of one by accident while trying to change another one. If your architecture is agnostic to technical or business logic detail, then changes to implementation are less likely to require new architectural features. For example, if your main domain logic is database agnostic, then supporting a new database should be as easy as swapping in a new implementation of the persistence layer. (A minimal sketch of that last point follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32581",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10106/"
]
} |
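A minimal sketch of the last point in the answer above - domain logic that stays agnostic of the persistence detail. The class names and the Python rendering are invented for illustration:

```python
from abc import ABC, abstractmethod

class OrderStore(ABC):
    """The persistence concern, separated behind an interface."""
    @abstractmethod
    def save(self, order): ...

class SqlOrderStore(OrderStore):
    def save(self, order):
        print(f"pretend INSERT of {order!r} into a SQL database")

class InMemoryOrderStore(OrderStore):
    def __init__(self):
        self.orders = []
    def save(self, order):
        self.orders.append(order)

class OrderService:
    """The business concern: knows nothing about how orders are stored."""
    def __init__(self, store: OrderStore):
        self.store = store
    def place(self, order):
        # validation / pricing logic would live here
        self.store.save(order)

# Supporting a new store means swapping one constructor argument:
OrderService(SqlOrderStore()).place("widget x3")
OrderService(InMemoryOrderStore()).place("widget x3")
```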
32,593 | I have several iOS/iPhone apps that have been continually selling in small amounts in over 2 dozen different countries, even though the app UIs and all the store descriptions are only in English. In a few countries where English is not the official or native language, a few apps are selling far better than is proportionate for those country's population size compared with the U.S. So why Internationalize apps? What kind of increase, if any, in sales might a typical app see if it is Internationalized into given local languages? Which major languages might be likely to see the greatest improvement in app sales or downloads due to a localized app description? | Imagine you have a program which has been released. A customer comes along and offers to pay you for a enhancement to one of its features. In order to get the money, you will need to change your program to add the new feature. Some of the things that will influence what your profit margin is are: how much code you have to change how easy it is to make the changes how likely you are to break existing features that are being used by other customers how much you can reuse you existing model/architecture Separation of concerns helps you to get more positive answers to these questions. if all of the code for a particular behaviour of the application is separated out, then you will only have to change code directly associated with your new feature. Which should be less code to change. if the behaviours you are interested in are neatly separated from the rest of the application it is more likely you will be able to swap in a new implementation without having to fully understand or manipulate the rest of the program. It should also be easier to find out which code you need to change. Code that you do not have to change is less likely to break than code that you do change. So splitting up the concerns helps you to avoid breakage in unrelated features by preventing you from having to change code that they could call. If your features are mixed up together you might change the behavior of one by accident while trying to change another one. If your architecture is agnostic to technical or business logic detail then changes to implementation are less likely to require new architectural features. For example, if your main domain logic is database agnostic then supporting a new database should be as easy as swapping in a new implementation of the persistence layer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32593",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9742/"
]
} |
32,617 | Sometimes I have a function that should return true or false. But sometimes three possible values would make more sense. In some languages these cases would be handled with integers or with exceptions. For example, you want to check whether a user is over 18 years old, and you have a function like this. if(user.isAdult(country_code)){
    //Go On
}else{
    // Block access or do nothing
} But in some cases, depending on how your app is built, I could see a case where the birthday field is incomplete. Then this function should return something undetermined. switch(user.isAdult()){
    case true:
        // go on
        break;
    case undetermined:
        //Inform user birthday is incomplete
    case false:
        //Block access
} As I said, we can handle that with exceptions and ints, but I would find it quite sexy to have true, false, and undetermined embedded in the language instead of using some home-defined constants. | This can be handled with either enums, integers, symbols (e.g., Lisp, Ruby), nullable types (use null as the indeterminate state), option types (e.g., ML), or some similar construct - depending on your language. So while your example and rationale are sound, I can't see this being high on the priority list of language features to develop. (A short enum-based sketch follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32617",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12039/"
]
} |
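The answer above lists enums, nullable types, and option types as the usual ways to get a third "undetermined" state. A hypothetical Python sketch of two of those options (an enum for the result, None standing in for the missing birthday):

```python
from enum import Enum

class AdultStatus(Enum):
    YES = "yes"
    NO = "no"
    UNDETERMINED = "undetermined"   # e.g. the birthday field is incomplete

def is_adult(birth_year, min_age=18, current_year=2024):
    if birth_year is None:          # nullable-type style: None means "unknown"
        return AdultStatus.UNDETERMINED
    return AdultStatus.YES if current_year - birth_year >= min_age else AdultStatus.NO

status = is_adult(None)
if status is AdultStatus.UNDETERMINED:
    print("Please complete your birthday before continuing.")
elif status is AdultStatus.YES:
    print("Access granted.")
else:
    print("Access blocked.")
```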
32,618 | When do the SOLID principles become YAGNI? As programmers we make trade-offs all the time between complexity, maintainability, time to build and so forth. Amongst others, two of the smartest guidelines for making choices are, in my mind, the SOLID principles and YAGNI. If you don't need it, don't build it, and keep it clean. Now for example, when I watch the dimecast series on SOLID, I see it starts out as a fairly simple program and ends up as a pretty complex one (and yes, complexity is also in the eye of the beholder), but it still makes me wonder: when do SOLID principles turn into something you don't need? All SOLID principles are ways of working that enable us to make changes at a later stage. But what if the problem to solve is a pretty simple one and it's a throwaway application, then what? Or are the SOLID principles something that always apply? As asked in the comments: SOLID Principles YAGNI | It's always difficult to judge an approach based on a screencast, since the problems picked for demos are typically so small that applying principles like SOLID quickly makes it look like the solution is completely overengineered. I'd say SOLID principles are almost always useful. Once you become proficient with them, using them doesn't seem like something you have to deliberately think about. It just becomes natural. I've seen many throwaway one-off apps become more than that, so I'm now afraid of saying that I'm going to throw something away, because you just never know. The approach I usually take is that if I'm writing a simple app for a particular task, I'll sometimes forgo big-name principles in favour of a few lines of code that work. If I find that I'm coming back to that app to make further changes, I will take the time to make it SOLID (at least somewhat, since a 100% application of the principles is rarely feasible). In bigger apps, I start small and as the program evolves, I apply SOLID principles where possible. This way I don't attempt to design the whole thing upfront down to the last enum. That, to me, is the sweet spot where YAGNI and SOLID coexist. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32618",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10422/"
]
} |
32,674 | Is this some kind of Murphy's law? Maybe if I want to hire a very good programming ninja I should check out his website, as in "Show me your website and I will tell you how good you are." EDIT: Go to the Stack Overflow top users tab and you will see. | Because design is a specialization, just like programming: not everyone can do it. It takes years of training and experience to know how to recognize and implement good design. Most people are not true polymaths, and either do not have the time, inclination, or ability to master two specializations. Beyond that, most people don't have the resources to hire a professional designer to do their website. So, add that to the general programmer inclination to write one's own version of a website instead of using off-the-shelf tools, and you have a recipe for a lot of programmers creating websites that don't really look all that great. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32674",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7890/"
]
} |
32,727 | I'm reading Coders at Work, and in it there's a lot of talk about invariants. As far as I've understood it, an invariant is a condition which holds both before and after an expression. They're, among other things, useful in proving that a loop is correct, if I remember my Logic course correctly. Is my description correct, or have I missed something? Have you ever used them in your program? And if so, how did they benefit you? | In OOP, an invariant is a set of assertions that must always hold true during the life of an object for the program to be valid. It should hold true from the end of the constructor to the start of the destructor whenever the object is not currently executing a method that changes its state. An example of an invariant could be that exactly one of two member variables should be null. Or that if one has a given value, then the set of allowed values for the other is this or that... I sometimes use a member function of the object to check that the invariant holds. If this is not the case, an assert is raised. And the method is called at the start and exit of each method that changes the object (in C++, this is only one line...). (A small sketch of this entry-and-exit check follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32727",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
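The answer above checks the invariant at the start and exit of every state-changing method (a one-liner in C++). A hypothetical Python sketch of the same pattern, using the answer's example of "exactly one of two members is set"; the class and field names are invented:

```python
class Account:
    """Invariant: exactly one of (customer_id, guest_token) is set (non-None)."""

    def __init__(self, customer_id=None, guest_token="anonymous"):
        self.customer_id = customer_id
        self.guest_token = guest_token
        self._check_invariant()                 # holds from the end of the constructor

    def _check_invariant(self):
        assert (self.customer_id is None) != (self.guest_token is None), \
            "invariant violated: need exactly one of customer_id / guest_token"

    def promote_to_customer(self, customer_id):
        self._check_invariant()                 # holds on entry
        self.customer_id = customer_id
        self.guest_token = None
        self._check_invariant()                 # still holds on exit

account = Account()
account.promote_to_customer(42)
```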
32,754 | I'm reading The Daily WTF archives and especially those stories about IT-related companies which have a completely wrong approach of software development, the job of a developer, etc. Some stories are totally horrible: a company don't have a local network for security reasons, another one has a source control server which can only be accessed by the manager , etc. Add to it all those stories about managers who don't know anything about their work and make stupid decisions without listening to anybody. The thing is that I don't see how to know if you will be employed by such company during an interview. Of course, sometimes, an interviewer tells weird things which gives you an idea that something goes very wrong with the company (in my case, the last manager said I should work 100% of my time through Remote Desktop, connected to on an old and slooooow machine, because "it avoids several people to modify the same source code" ; maybe I should explain him what SVN is). But in most cases, you will be unable to get enough information during the interview to get the exact image of a company. So how to avoid being employed by this sort of companies? I thought about asking to see some documents like documentation guide or code style guidelines . The problem is that I live in France, and here, most of the companies don't have those documents at all, and in the rare cases where those documents exist, they are outdated, poorly written, never used, or do force you to make things that don't make any sense . I also thought about asking to see how programmers actually work . But seeing that they have dual screens or "late-modern-artsy-fartsy furnishings" doesn't mean that they don't have people making weird decisions, making it impossible to work there. Have you been in such situations? What have you tried? Have it worked? | Remember that interviews are a two-way street. Ask them open-ended questions that let you know they know what they're doing. And learn to "read between the lines" when evaluating their answers. For instance: How do you guys make sure the software you're writing doesn't suck? (rephrased to something more "appropriate" if you're boring) Good answer: "We use unit tests, have a QA department, and code reviews." It doesn't have to be this. Nor does the person you're interviewing need to have the same answer to this as I gave. You're mostly just looking to make sure that the company values the code it writes to some degree and isn't just going to push it out the door with reckless abandon. Bad answer: "Well, we've been meaning to make more of those 'unit test' things. We just haven't gotten around to it" Again, the focus is less on unit tests and more about the attitude the interviewer takes to the issue. Generally, "We know we need it, we just haven't done it" is a red flag. That means one of several possibilities: Your coworkers will be lazy. Management doesn't give time to use proper process. Your coworkers aren't smart enough to understand unit tests. None of these are good (but some are worse than others). Describe the process your company uses to add a feature (from deciding that the feature is needed to shipping it to the customer). Good Answer: "Business people decide that a feature is a good idea and consult the programmers to see how easily-implemented it is. The programmers and technical staff decide on an architecture and implement it. A release team then pushes it out into the wild." Bad Answer: "Business people tell the programmers what to do and they do it." 
As with the above, the answer itself isn't as important as the attitude. The good answer indicates that the business side and technical side work together to bring a product about. The bad answer indicates that management views programmers as overpaid typists. In summary , remember to ask the right questions during the interview. And remember that particular answers aren't as important as the attitude behind those answers. Lastly, don't hold back . Asking tough questions indicates that you're really interested in the job, and that you think you're good enough to be a bit picky about who will employ you. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32754",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
32,830 | I started to work at a company as an engineer a couple of months ago. It's a small company and what they basically do is answering service on phones. Now they are switching from normal phones to IP phones so that computers take more important place in the work. However, all the computers used by workers are equipped with pirated software, including their operating systems. Moreover, they didn't even buy one license to make copies for other computers. In other words, they did not spend any money for the software in office. I am not saying copying a licensed one is legit, but the situation is too much. There is one guy who installed the pirated software. He does not feel any sense of guilt and even justified when I asked about it. He is not even a specialist. He just searched on the internet to install pirated software. Our boss does not have any knowledge of computers, so he took the cheaper way. What do you think about this? Since I am still new to the company, I am not doing maintenance on those cracked computers. But I have to use those software daily. And later on I will be doing support, help desk kind of stuff. I really don't want to take responsibility for operating pirated software. From an aspect of developer and engineer, pirated software is not able to get legal support and it may work unexpectedly. So, I am thinking about changing jobs. Am I thinking too much? Should I wait until I have more credibility with the boss and try to change his policy? So far, the boss does not take any words from me. Any opinions are welcome. Thank you | You need to find another job. If this company finds it this easy to compromise ethical principles so blatantly when it comes to software, how can you trust them to do the right thing under any circumstances? What is going to happen when you are in a situation where you require honesty from them, or the capacity to follow ethical principles of any kind in their dealings with you? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32830",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12121/"
]
} |
32,840 | When ever I am stuck with a particular problem, I search for a solution in Google. And then I try to understand the code and tweak it according to my requirement. For example recently I had asked a question Reading xml document in firefox in stack overflow. Soufiane Hassou gave me a link to w3schools, where I found a example on parsing xml document, I understood how the example works, but I copied the code and tweaked it according to my requirement, since I don't like typing much. So does this make me a copy/paste programmer? How do you say if a person is a copy/paste programmer ? Thanks. | Looks like I'm going to be the lone dissenting view here. First of all, we need to distinguish between copy-and-paste programming and cargo cult programming , as it seems that several people are conflating the two. Copy and paste programming refers to the practice of copying and pasting the same code over and over again into different parts of a program, either verbatim or with only minor changes, instead of creating classes or subroutines or whatever higher-order code structures are offered by the language. Sometimes this is a symptom of a deficiency in the language/environment itself, but more often it is because the programmer does not understand (or value) abstraction . Clearly, you're not a copy-and-paste programmer unless this is what you're doing. But it's obvious from your description of your activities that this isn't the question you're asking. What you're actually referring to is called cargo cult programming . That is a set of anti-patterns, one among them being the wholesale copying of code from other parts of the program or from external sources, without really understanding how they work or if they're necessary at all. It may be true that many programmers do this to a very limited extent. For example, sometimes we need to work around a bug in the framework or the operating system, and we'll just copy some well-known workaround because we're not really interested in learning all about somebody else's bug that we can't possibly fix ourselves. But for a professional programmer, these cases should be few and far between. If you find yourself doing this frequently then it should be a red flag to you. And you've made this very unconvincing assertion: [...] then I try to understand the code and tweak it according to my requirement. Maybe this is true. Maybe. The problem is that every cargo-cult programmer will say this. The question is, do you really understand it? Truly? Do you understand it to a sufficient extent that you could write it from scratch if you had to, given sufficient time? Would the result be reasonably free of bugs? Or are you just making random changes to the code you copied until you hit the magic combination that does what you want? I'm not leveling accusations; I've never met you, I know nothing about you. But this is something you need to think about. A simple test is: Do you find yourself editing and recompiling very frequently when you are in the process of "tweaking" the code? Or do you read the code maybe once or twice, and immediately know what needs to be changed and how? If your changes/tweaks are working on the first or maybe second try, you probably have actually understood the code you've appropriated. If you seem to be experimenting a lot with the copied code, then I'm sorry to say you are a cargo-cult programmer. Keep that in mind for next time. Watch how you approach the problem. 
If you find yourself doing the coding equivalent of mashing buttons on a game pad, then you need to slow down, and try to learn a little more about the library or algorithm you are attempting to use, before going any further. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32840",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6005/"
]
} |
32,872 | There is plenty of material about designing software for object oriented programming. Do you know of any good materials for pure structural programming? I am working purely in C and while I can use some experiences from object oriented programming I found out, that I know few tricks about good structural design. | Looks like I'm going to be the lone dissenting view here. First of all, we need to distinguish between copy-and-paste programming and cargo cult programming , as it seems that several people are conflating the two. Copy and paste programming refers to the practice of copying and pasting the same code over and over again into different parts of a program, either verbatim or with only minor changes, instead of creating classes or subroutines or whatever higher-order code structures are offered by the language. Sometimes this is a symptom of a deficiency in the language/environment itself, but more often it is because the programmer does not understand (or value) abstraction . Clearly, you're not a copy-and-paste programmer unless this is what you're doing. But it's obvious from your description of your activities that this isn't the question you're asking. What you're actually referring to is called cargo cult programming . That is a set of anti-patterns, one among them being the wholesale copying of code from other parts of the program or from external sources, without really understanding how they work or if they're necessary at all. It may be true that many programmers do this to a very limited extent. For example, sometimes we need to work around a bug in the framework or the operating system, and we'll just copy some well-known workaround because we're not really interested in learning all about somebody else's bug that we can't possibly fix ourselves. But for a professional programmer, these cases should be few and far between. If you find yourself doing this frequently then it should be a red flag to you. And you've made this very unconvincing assertion: [...] then I try to understand the code and tweak it according to my requirement. Maybe this is true. Maybe. The problem is that every cargo-cult programmer will say this. The question is, do you really understand it? Truly? Do you understand it to a sufficient extent that you could write it from scratch if you had to, given sufficient time? Would the result be reasonably free of bugs? Or are you just making random changes to the code you copied until you hit the magic combination that does what you want? I'm not leveling accusations; I've never met you, I know nothing about you. But this is something you need to think about. A simple test is: Do you find yourself editing and recompiling very frequently when you are in the process of "tweaking" the code? Or do you read the code maybe once or twice, and immediately know what needs to be changed and how? If your changes/tweaks are working on the first or maybe second try, you probably have actually understood the code you've appropriated. If you seem to be experimenting a lot with the copied code, then I'm sorry to say you are a cargo-cult programmer. Keep that in mind for next time. Watch how you approach the problem. If you find yourself doing the coding equivalent of mashing buttons on a game pad, then you need to slow down, and try to learn a little more about the library or algorithm you are attempting to use, before going any further. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32872",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9753/"
]
} |
32,964 | As a programmer, I find myself in the dilemma where I want to make my program as abstract and as general as possible. Doing so usually would allow me to reuse my code and have a more general solution for a problem that might (or might not) come up again. Then this voice in my head says: just solve the problem, dummy, it's that easy! Why spend more time than you have to? We all have indeed faced this question where Abstraction is on your right shoulder and Solve-it-stupid sits on the left. Which to listen to and how often? What is your strategy for this? Should you abstract everything? | Which to listen to and how often? Never abstract until you must. In Java, for example, you must use interfaces. They're an abstraction. In Python you don't have interfaces, you have Duck Typing, and you don't need towering levels of abstraction. So you just don't. What is your strategy for this? Don't abstract until you've written it three times. Once is -- well -- once. Just solve it and move on. Twice is the indication that there may be a pattern here. Or there may not. It can just be coincidence. Thrice is the start of a pattern. Now it transcends coincidence. You can now abstract successfully. Should you abstract everything? No. Indeed, you should never abstract anything until you have absolute proof that you're doing the right kind of abstraction. Without the "Rule of Three Repetitions", you'll write stuff that's uselessly abstracted in a way that's unhelpful. "Doing so usually would allow me to reuse my code" - this is an assumption that's often false. Abstraction may not help at all. It can be done badly. Therefore, don't do it until you must. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/32964",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2638/"
]
} |
33,020 | "Premature optimization is the root of all evil" is something almost all of us have heard/read. What I am curious about is what kind of optimization is not premature, i.e. at every stage of software development (high-level design, detailed design, high-level implementation, detailed implementation, etc.), what is the extent of optimization we can consider without it crossing over to the dark side? | When you're basing it off of experience? Not evil. "Every time we've done X, we've suffered a brutal performance hit. Let's plan on either optimizing or avoiding X entirely this time." When it's relatively painless? Not evil. "Implementing this as either Foo or Bar will take just as much work, but in theory, Bar should be a lot more efficient. Let's Bar it." When you're avoiding crappy algorithms that will scale terribly? Not evil. "Our tech lead says our proposed path selection algorithm runs in factorial time; I'm not sure what that means, but she suggests we commit seppuku for even considering it. Let's consider something else." The evil comes from spending a whole lot of time and energy solving problems that you don't know actually exist. When the problems definitely exist, or when the phantom pseudo-problems may be solved cheaply, the evil goes away. Steve314 and Matthieu M. raise points in the comments that ought to be considered. Basically, some varieties of "painless" optimizations simply aren't worth it either because the trivial performance upgrade they offer isn't worth the code obfuscation, they're duplicating enhancements the compiler is already performing, or both. See the comments for some nice examples of too-clever-by-half non-improvements. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/33020",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8073/"
]
} |
33,076 | I'd consider myself pretty well versed in C#. It's my language of choice at the moment, and it's where basically all my professional experience lies. Still, I'm puzzled by the existence of the MonoDroid project. My understanding has always been that C# and Java are very close. Like, if you know one, you can learn the other really quickly. So, as I've considered developing my first Android app, I just assumed I would familiarize myself with Java enough to get started and then just sort of learn as I go. Wouldn't this make more sense than using MonoDroid, which is likely to be less feature-rich than the Java Android SDK, and requires learning its own API (albeit a .NET API) anyway? I just feel like it would be better to learn a new language (and an extremely popular one at that) and get some experience in it—when it's so close to what you already know anyway—rather than stick with a technology you're experienced with, without gaining any more valuable skills. Maybe I'm grossly misrepresenting the average potential MonoDroid user. Maybe it's more for people who are experienced in Java and .NET and just prefer .NET. Or maybe (in fact it's likely) there are other factors I just haven't considered. I'm just wondering, why would you use MonoDroid instead of just developing for Android using Java? | Any competent C# programmer should be able to quickly pick up enough Java to write an Android program, but that's not the point . It's a matter of code reuse. Think about six months from now, when your Android program is popular and your users are asking for a version for iPhone and Windows Phone 7. If you had used MonoDroid, you can reuse most of your application logic with MonoTouch (Mono for iOS) and the Windows Phone SDK. Now they want a web-based version, so you include the same class libraries in an ASP.Net project. Desktop versions? No problem, that same class library works with .Net under Windows or Mono on Linux and OS X. Other than C or C++, I can't think of any other languages that would let you reuse the same code on all of those targets. Edit to address some concerns in the comments: .Net and Mono will not let you write a complete program and use that same program everywhere. They will let you share some code, and like all cross-platform programming the amount of shared code depends on the type of programs you're writing and how well you separate the UI and hardware code from the application logic. However, if you write your Android app in Java, how much of that is reusable on iOS or Windows Phone? That's the point I was trying to make. I had existing C# libraries that were working on Mono for Android in much less time than it would have taken to reimplement them, even though I already knew Java . I have some code that is shared--unmodified--between a web site, desktop programs, and mobile apps on two different mobile platforms, thanks to Mono. I didn't mean to imply, even indirectly, that Mono was the perfect tool for every mobile development situation. It's a tradeoff, but I firmly believe that there are situations when Mono is a much better choice. Please see (and upvote!) Jason S's answer for another perspective. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/33076",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3045/"
]
} |
33,082 | First the background, during interviews in the past, many times I have been asked to design some or other variation of card game as programming puzzle, and I have tried to design it in OO way, but I have never been satisfied with my solutions. However it was not until recently that I realized that I had been approaching the problem from the wrong direction. Specifically I was trying to solve the problem by modeling individual card as an object. Problem with this is individual cards don't have any non-trivial intrinsic behavior and therefore are not suitable (or primary) candidate as objects. What is interesting and important about cards are rules and constraints, such as there could be only four suits , or only thirteen cards in each suit. Of course, then there are any number of rules for games. So my questions are Are there any idioms/constructs/patterns to program for rules & constraints. How many in 1 can be applied in conjunction with OO paradigm. | Any competent C# programmer should be able to quickly pick up enough Java to write an Android program, but that's not the point . It's a matter of code reuse. Think about six months from now, when your Android program is popular and your users are asking for a version for iPhone and Windows Phone 7. If you had used MonoDroid, you can reuse most of your application logic with MonoTouch (Mono for iOS) and the Windows Phone SDK. Now they want a web-based version, so you include the same class libraries in an ASP.Net project. Desktop versions? No problem, that same class library works with .Net under Windows or Mono on Linux and OS X. Other than C or C++, I can't think of any other languages that would let you reuse the same code on all of those targets. Edit to address some concerns in the comments: .Net and Mono will not let you write a complete program and use that same program everywhere. They will let you share some code, and like all cross-platform programming the amount of shared code depends on the type of programs you're writing and how well you separate the UI and hardware code from the application logic. However, if you write your Android app in Java, how much of that is reusable on iOS or Windows Phone? That's the point I was trying to make. I had existing C# libraries that were working on Mono for Android in much less time than it would have taken to reimplement them, even though I already knew Java . I have some code that is shared--unmodified--between a web site, desktop programs, and mobile apps on two different mobile platforms, thanks to Mono. I didn't mean to imply, even indirectly, that Mono was the perfect tool for every mobile development situation. It's a tradeoff, but I firmly believe that there are situations when Mono is a much better choice. Please see (and upvote!) Jason S's answer for another perspective. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/33082",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8073/"
]
} |
33,235 | I, for one, only add debug code (such as print statements) when I'm trying to locate a bug. And once I've found it, I remove the debug code (and add a test case which specifically tests for that bug). I feel that it clutters the real code and therefore has no place there unless I'm debugging. How do you do it? Do you leave the debug code in place, or remove it when it becomes obsolete (which may be difficult to judge)? | Debug print statements should be taken out; however, if you need to add them to debug a production problem, then it may be worth considering whether you have enough information being put into your logging framework. Information about parameters, error conditions and so on may well be useful later on when the next bug appears. Using a good logging framework that can have debug or tracing messages turned on dynamically can be very useful in the wild. (A minimal logging sketch follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/33235",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
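Following the answer above - leaning on a logging framework rather than throwaway print statements - here is a minimal sketch with Python's standard logging module; the function and messages are invented for illustration:

```python
import logging

logger = logging.getLogger("billing")

def charge(customer_id, amount):
    # Parameters and error conditions go to the logger, not to print().
    logger.debug("charge called: customer=%s amount=%s", customer_id, amount)
    if amount <= 0:
        logger.error("rejected non-positive amount %s for customer %s", amount, customer_id)
        return False
    return True

logging.basicConfig(level=logging.WARNING)  # production default: debug stays silent
charge(7, 0)

logger.setLevel(logging.DEBUG)              # turned on dynamically to investigate
charge(7, 0)
```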
33,378 | Disclaimer: for simplicity's sake, "brackets" will refer to brackets, braces, quotes, and parentheses over the course of this question. Carry on. When writing code, I usually type the beginning and end element first, and then go back and type the inner stuff. This gets to be a lot of backspacing, especially when doing something with many nested elements like: jQuery(function($){$('#element[input="file"]').hover(function(){$(this).fadeOut();});}); Is there a more efficient way of remembering how many brackets you've got open? Or a second example with quotes: <?php echo '<input value="'.$_POST['name'].'" />'; ?> | I start out like this {}, then usually fill them with something. Whenever you type {, type a corresponding } and stick it on a new line. The worst thing you have to do in that case is fix indentation prior to committing. Good syntax highlighters will often alert you to a problem, but not always. My preferred editor KATE, for instance, choked on a JSON-formatted 'printf style' variadic argument string. Don't trust bracket and paren highlighting! Always close what you open immediately after opening it and then fill in the gaps. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/33378",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12155/"
]
} |
33,379 | A few days ago a sales manager asked me that question. But at that moment I didn't know an answer he could understand. He isn't a programmer! At the moment I work on a product which is over 8 years old. Nobody thought about architecture or evolvability. I have a swamp of code in front of me every day which is not tested. Because of that, time estimates are very difficult for me. How can I describe that problem to a salesman? Not only my swamp-code problem, but in general! | Ask him how long it would take to find his way through a maze. Not any particular maze, or any particular size of maze - just "a maze". Programming is in some ways similar. You can't be sure how long it will take until you have fully explored the problems you'll need to solve. The only time you can be sure you've done that is when you already have the finished product. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/33379",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12271/"
]
} |
33,460 | I'm in the midst of an argument with some coworkers over whether team ownership of the entire codebase is better than individual ownership of components of it. I'm a huge proponent of assigning every member of the team a roughly equal share of the codebase. It lets people take pride in their creation, gives the bug screeners an obvious first place to assign incoming tickets, and helps to alleviate "broken window syndrome". It also concentrates knowledge of specific functionality with one (or two) team members making bug fixes much easier. Most of all, it puts the final say on major decisions with one person who has a lot of input instead of with a committee. I'm not advocating for requiring permission if somebody else wants to change your code; maybe have the code review always be to the owner, sure. Nor am I suggesting building knowledge silos: there should be nothing exclusive about this ownership. But when suggesting this to my coworkers, I got a ton of pushback, certainly much more than I expected. So I ask the community: what are your opinions on working with a team on a large codebase? Is there something I'm missing about vigilantly maintaining collective ownership? | I believe that Team Ownership is much more beneficial in the long term. You just need to look at the following 2 scenarios to understand why concentrating knowledge in minimum numbers of people is less than ideal: Team member meets unfortunate accident Team member meets better employment opportunity Naturally, the person/people who write particular sections will have greater knowledge of it, but don't give in to the temptation to make them the sole silo of knowledge. Silos will give you short term wins by long-term pains. Just because you don't have individual ownership of sections of code, since you wrote it, doesn't mean you will not still have the aspects of "pride in your creation", etc. I don't believe having team ownership will greatly diminish any personal feelings of ownership of code written. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/33460",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1576/"
]
} |
33,526 | So I just started freelancing in desktop/web development, and this client, who already accepted my work and paid me, keeps coming back to me each time he finds a bug, etc. I have found myself spending more time than I expected fixing things for free. Is this alright, or should I start charging a support fee? What is the best way to deal with fixes on supposedly accepted and completed work? | Part of your contract should describe acceptance tests, i.e. tests that the client will run and that your application needs to pass for the contract to be fulfilled. Anything not covered by these tests is the client's responsibility. Anything covered by them is yours. Because it is not possible (especially for a non-technical client) to foresee all possible issues, you should add to your contract a clause specifying a period during which you will fix any new issues as part of the contract. After that, you should offer only paid support. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/33526",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5793/"
]
} |
33,560 | I believe any programmer has several ideas that she/he considers as innovative or at least valuable. It may be an idea of a new product which will make this world better or a new development approach, etc. But a great idea must be implemented and promoted/advertised. This requires a lot of work (proofs of concept, prototypes, technology previews, etc.) and a lot of money (appropriate advertisement, marketing, etc.). So months later, the idea stays in our heads, but nothing else is done, because it's difficult, long and expensive, sometimes even impossible for a single developer. On the other hand, it would be painful to share our ideas, and see a medium-size company which has enough resources making something useful from it and having success and money. So what do you do with your ideas you can hardly implement or patent? Do you talk freely about them in discussion boards and with other developers? Do you keep them like a precious thing without never talking about them to anybody? If you keep your ideas, why are you doing so? Is it just because you hope that one day, you will be able to implement them and have a huge success, while you know very well by experience that it's an utopia? | Ideas left unimplemented might as well be predictions of what someone else is going to implement. I think that it is rare, especially in this day and age for only a single person to come up with an idea. If you thought of it, chances are that someone else has as well. Fortunately, I work in a company that treats its people very well. If I come up with something that I think would work, I meet very little resistance when it comes to gathering the resources that I need to implement it. I'm also compensated quite nicely if an idea does well. For those who don't have the resources to get off the drawing board, you have two choices. Try, as hard as you can to get your idea in front of people who can help you - or keep quiet about it and watch someone else implement it. I don't think the fear that ' someone is just going to steal my idea if I go to them for help ' is reasonable, in most cases. You have a 100% chance of never seeing your idea get off the ground if you stay quiet, and perhaps a 5 - 10% chance of getting ripped off if you don't. While you may think your idea is worth millions, it probably isn't. Don't overestimate how much someone might want to steal it. Given that, I think the choice is simple. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/33560",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
33,578 | The purpose of this question is not to assemble a laundry list of programming language features that you can't live without, or wish were in your main language of choice. The purpose of this question is to bring to light corners of language design most language designers might not think about. So, instead of thinking about language feature X, think a little more philosophically. One of my biases, and perhaps it might be controversial, is that the softer side of engineering--the whys and what-fors--is many times more important than the more concrete side. For example, Ruby was designed with a stated goal of improving developer happiness. While your opinions may be mixed on whether it delivered or not, the fact that this was a goal means that some of the choices in language design were influenced by that philosophy. Please do not post: Syntax flame wars. Let's face it, we have our preferences, and syntax is important as it pertains to language design. I just want to avoid epic battles of the nature of emacs vs. VI (which a great number of people these days know nothing about). "Any language that doesn't have feature X doesn't deserve to exist" type comments. There is at least one reason for all programming languages to exist--good or bad. Please do post: Philosophical ideas that language designers seem to miss. Technical concepts that seem to be poorly implemented more often than not. Please do provide an example of the pain it causes and, if you have any, ideas of how you would prefer it to function. Things you wish were in the platform's common library but seldom are. By the same token, things that usually are in a common library that you wish were not. Conceptual features such as built-in test/assertion/contract/error-handling support that you wish all programming languages would implement properly--and define properly. My hope is that this will be a fun and stimulating topic. Edit: Clarified what I mean by Syntax Flame Wars. I'm not trying to avoid all discussion of syntax, particularly because syntax is a fundamental part of programming language design. | Unicode support by default. In this day and age, programs are being developed to be used internationally, or under the assumption that they might be used internationally. Languages must provide support for those users' character sets, or programs written in them are rendered useless for that audience. (A small illustration follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/33578",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6509/"
]
} |
33,816 | Our company is looking for new programmers. And here comes the problem - there are many developers who look really great at the interview, seem to know the technology you need and have a good job background, but after two months of work, you find out that they are not able to work in a team, writing some code takes them a very long time, and moreover, the result is not as good as it should be. So, do you use any formalized tests (are there any?)? How do you recognize a good programmer - and a good person? Are there any simple 'good' questions that might reveal the future problems?
...or is it just about your 'feeling' about the person (ie., mainly your experience), and trying him/her out? Edit: According to Manoj's answer, here is the question related to the coding task at the job interview. | Get them to talk about what they're interested in. I have yet to meet a developer who is really passionate when talking about programming but can't actually code. They may well exist, of course - and your interview should check for competency as well - but passion is a good indicator in my experience. (Note that that's not the same as being able to "talk the talk" in terms of buzzwords.) Ask them what they don't like about their favourite language or platform. How would they fix things? What would they like to see in the next version? Do they have hobby projects? If they've got a blog, read it. Check their general online presence. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/33816",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12632/"
]
} |
33,851 | Recently one of our key engineers resigned. This engineer has co-authored a major component of our application. We are not hitting our truck number yet, but we're getting close :) Before the guy waltzes off, we want to take the actions necessary to recover from this loss as smoothly as possible and eventually 'grow' the rest of the team to competently cover the parts he authored. More about the context: the domain the component covers and the code are no rocket science, but still a lot of non-trivial stuff. Some team members can already cover a lot of this, but those have a lot on their plates. These are the actions that come to my mind: Improve tests and test coverage - especially for the non-trivial stuff, Update high-level documents, Document any 'funny stuff' the code does (we had to do some heavy duct-taping), Add / update code documentation - have everything with 'public' visibility documented. Finally the questions: What do you think are the actions to take in this situation? What have you done in such situations? What did or did not work well for you? | I would ask him to stop coding completely and devote his time to: coaching others; reviewing his comments in the code (to adapt them to the situation); and writing the document he would have written if he had known he would leave in a week. I would try to give priority to the modules he owned and get the other developers working with them. Remember: after he has left, it's too late to do coaching or ask him questions (in good conditions). When he leaves, take that opportunity to ask him how he thinks you could improve your organization. Former employees have nothing to lose, so their feedback is honest and therefore always valuable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/33851",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11344/"
]
} |
33,968 | I have been given the role to improve development in our company. The first thing I wanted to start was code reviews since that has never been done here before. There are 3 programmers in our company. I am a web programmer, my known languages are mainly PHP, ActionScript and JavaScript. The other 2 developers write internal applications in VB.net We have been doing code reviews for a couple weeks now. I find it hard to understand VB code. So when they say what its doing, for the most part I just have to take their word for it. If I do see something that looks wrong, I explain my opinion and I explain how I would address it in one of the languages I know. Sometimes my suggestions are welcomed but many times I am told things like "this is the best way of doing it in this language" or "that doesn't apply to this language" or similar things of that nature. This may be true, but without knowing the language I am not sure how to confirm or refute these claims. I know one possible solution would be to learn vb so I can do better code reviews. I really have no interest in learning vb (especially since I have a list of other technologies I am trying to learn for my own projects) and would like to keep this as a last resort but it is an option. Another idea that came to me is, they both have interest in C# and so do I. Its relative to them because its .net and relative to me because its more similar to the languages I know. Yet it is new to all of us. I thought about the benefits of us all collaborating on a pet C#.net project and reviewing each others code from that. I guess theres also the possibility hiring a consultant to come in and give us some code reviews. What would you recommend I do in this situation. | Your personal desires to learn other things should take a back seat to learning what you actually need right now for your job. Learn VB.net. You can effectively code review code you don't understand when you know the language it's in by asking lots of questions (usually that's a sign the code isn't well written if you know the language and can't figure out what its doing and why). But not understanding the code, the best you can do is get them to explain it to you and hope they will see any bugs through the process of explaining it. Not that I haven't found bugs in my own code in a review by doing just that, but it isn't the most effective way to code review. Code review is now part of your job, deal with it and learn what you need to learn to do it effectively. While you are learning, when they say well that isn't the way we do it in this language, make them show you a source that says it is a good technique to use. It's up to them to justify to you in a code review not the other way around. You'll also get better at the language once you start seeing those links. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/33968",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1785/"
]
} |
34,173 | Overall I've been programming for about 8 years now, and it seems to me that I'm relying more and more on open source libraries and snippets (damn you GitHub!) to "get the job done". I know that in time I could write my own implementation, but I like to focus on the overall design. Is this normal (in a non-corporate environment)? What could go wrong if my "programming" is nothing more than gluing different libraries together? I know about "don't reinvent the wheel", but what happens when you don't invent a single wheel anymore? | Using libraries instead of reinventing the wheel: Great! That's how everybody should do it. You don't get paid to do what's already done. Using snippets: As long as you understand what you copy-paste, and as long as you invest the time to make it all consistent (instead of a patchwork of different styles and approaches), there is nothing wrong with it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/34173",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2572/"
]
} |
34,200 | In our company, the developers want to use a proper bug tracking tool to manage issues in our application. Management, however, insists on using a shared spreadsheet (formerly a shared Excel file, now a spreadsheet in a web-based solution allowing concurrent access). Their argument is that the spreadsheet allows them to have a more high-level view of the state of the project, as they can see how many bugs are open at a quick glance. It also allows them to see who is working on each bug, and to get an estimate of the time required to close them all (as developers are required to fill in a time estimate for the bug they are working on). As you can understand, this is not really practical for the developers (bug tracking software was invented for a reason). So how can I advocate bug tracking software to ease the work of the developers? As a bonus, which software would you recommend that would still give management their feedback (number of bugs open, who is working on them, time estimates) with a high-level view? | So how can I advocate bug tracking software to ease the work of the developers? Given this statement: "the spreadsheet allows them to have a more high-level view of the state of the project, as they can see how many bugs are open at a quick glance", you need to be looking at systems that have reporting tools that effectively allow the creation of spreadsheets in "real time" (or as close to it as possible). When you find one of these, explain that having the developers use a "proper" system will mean that the data they're interested in will (hopefully) be more accurate and up to date (for example). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/34200",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9218/"
]
} |
34,456 | I searched and couldn't find any business reasons why git/mercurial/bazzr systems are better than centralized systems (subversion, perforce). If you were trying to sell a DVCS to a non-technical person what arguments would you provide for the DVCS increasing profit . I will shortly be pitching git to my manager, it will take some time converting out subversion repositories and some expense in buying smartgit licences. Edit I tried to make this question into a generic discussion on centralized vs decentralized, but inevitably it has turned into git vs subversion. Surely there are better centralized systems than subversion. | Hmm, having been a manager I have two immediate "knee-jerk" reactions to this: If you don't already have good reasons why are you pitching git other than to be trendy? Similarly, how is Subversion failing such that you need a replacement? I'm not, actually, being negative - I think there probably is a case to be made (dependent on circumstance) but if the case is simply that git is "better" than subversion then you don't really have one. You also need to be able to enumerate the disadvantages - you've already identified the overhead of migration and re-tooling - what else is a problem? e.g. What happens to your nice, central, backed up repository? How do you integrate with your continuous integration build server (if you don't have one, forget git and go sort that first). Oh security and tracking - SVN runs with proper logins and permissions. To my mind, the benefits are in flexibility, better merging, the ability to do local commits without breaking the build and so on. The disadvantages are in lack of control and that same flexibility. It may be that all you want to do is run git locally to your machine as a "better" subversion client (I'm looking at doing this using mercurial). Hmm, perhaps this whole answer is a comment really? You need to make your case here (in the question) for git over subversion (in your environment) in order to see if we can help you identify the business case. FWIW, I'm know that one can easily designate a specific instance of the repository to be the trunk/reference source and further that that is how one wires into one's build server - the difference being that with DVCS that's more of an administrative decision than something inherent in the architecture. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/34456",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5678/"
]
} |
34,463 | So I'm not doing any unit testing. But I've had an idea to make it more appropriate for my field of use. Yet it's not clear if something like this exists, and if, how it would possibly be called. Ordinary unit tests combine the test logic and the expected outcome. In essence the testing framework only checks for booleans (did this match, did the expected result result). To generalize, the test code itself references the audited functions, and also explicites the result values like so: unit::assert( test_me() == 17 ) What I'm looking for is a separation of concerns. The test itself should only contain the tested logic. The outcome and result data should be handled by the unit testing or assertion framework. As example: unit::probe( test_me() ) Here the probe actually doubles as collector in the first run, and afterwards as verification method. The expected 17 is not mentioned in the test code, but stored or managed elsewhere. How is this scheme called? Or how would you call it? I hope I can find some actual implementations with the proper terminology. Obviously such a pattern is unfit for TDD. It's strictly for regression testing. Also obviously, it cannot be used for all cases. Only the simpler test subjects can be analyzed that way, for anything else the ordinary unit test setup and assertion steps are required. And yes, this could be manually accomplished by crafting a ResultWhateverObject, but that would still require hardwiring that to the test logic. Also keep in mind that I'm inquiring for use with scripting languages, and not about Java. I'm aware that the xUnit pattern originates there, and why it's hence as elaborate as it is. Btw, I've discovered one test execution framework which allows for shortening simple test notations to: test_me(); // 17 While thus the result data is no longer coded in (it's a comment), that's still not a complete separation and of course would work only for scalar results. | Hmm, having been a manager I have two immediate "knee-jerk" reactions to this: If you don't already have good reasons why are you pitching git other than to be trendy? Similarly, how is Subversion failing such that you need a replacement? I'm not, actually, being negative - I think there probably is a case to be made (dependent on circumstance) but if the case is simply that git is "better" than subversion then you don't really have one. You also need to be able to enumerate the disadvantages - you've already identified the overhead of migration and re-tooling - what else is a problem? e.g. What happens to your nice, central, backed up repository? How do you integrate with your continuous integration build server (if you don't have one, forget git and go sort that first). Oh security and tracking - SVN runs with proper logins and permissions. To my mind, the benefits are in flexibility, better merging, the ability to do local commits without breaking the build and so on. The disadvantages are in lack of control and that same flexibility. It may be that all you want to do is run git locally to your machine as a "better" subversion client (I'm looking at doing this using mercurial). Hmm, perhaps this whole answer is a comment really? You need to make your case here (in the question) for git over subversion (in your environment) in order to see if we can help you identify the business case. 
FWIW, I know that one can easily designate a specific instance of the repository to be the trunk/reference source, and further that that is how one wires into one's build server - the difference being that with a DVCS that's more of an administrative decision than something inherent in the architecture. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/34463",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1245/"
]
} |
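For what it's worth, the scheme the question describes (a probe that records the observed value on its first run and verifies against that recording on later runs) is usually called snapshot, golden-master or approval testing. Below is a minimal sketch of the idea in C++, mirroring the unit:: notation used in the question; the names (unit::probe, the golden_ file prefix) are hypothetical and not taken from any real framework, and the same idea ports directly to scripting languages.

    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <sstream>
    #include <string>

    namespace unit {

    // Records the observed value on the first run; compares against it afterwards.
    template <typename T>
    bool probe(const T& observed, const std::string& id) {
        const std::string path = "golden_" + id + ".txt";

        std::ostringstream serialized;
        serialized << observed;

        std::ifstream stored(path);
        if (!stored) {
            std::ofstream out(path);   // first run: collect the expectation
            out << serialized.str();
            std::cout << id << ": recorded \"" << serialized.str() << "\"\n";
            return true;
        }

        // Later runs: verify against the recorded expectation.
        std::string expected((std::istreambuf_iterator<char>(stored)),
                             std::istreambuf_iterator<char>());
        const bool ok = (expected == serialized.str());
        if (ok) {
            std::cout << id << ": ok\n";
        } else {
            std::cout << id << ": mismatch, expected \"" << expected
                      << "\", got \"" << serialized.str() << "\"\n";
        }
        return ok;
    }

    }  // namespace unit

    int test_me() { return 17; }  // function under test

    int main() {
        // The test states only the logic; the expected 17 lives in the golden file.
        return unit::probe(test_me(), "test_me") ? 0 : 1;
    }

Deleting the golden file re-records the expectation, which is how tools in this family typically handle intentional behaviour changes.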
34,512 | The other day, a friend told me that in USA, they pronounce SQL like squel , not es-qu-el . I was surprised. I was wondering how "SELECT *" is read/pronounced while talking. select star? select asterisks? select all? | I live in the US and almost always hear it pronounced select star And normally I use sequel instead of es-qu-el because it has fewer syllables and seems easier to say | {
"source": [
"https://softwareengineering.stackexchange.com/questions/34512",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1517/"
]
} |
34,559 | This is something I was never taught. I have seen alot of different types of authoring styles. I code primarily in Java and Python. I was wondering if there was a standard authoring style or if everything is freestyle. Also if you answer would you mind attaching the style you use to author files that your create at home or at work. I usually just go @author garbagecollector
@company garbage inc. | Why would you? that's the job of the versioning system and "Blame" :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/34559",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2446/"
]
} |
34,577 | I was wondering how you guys start out if you need to design a multi-client project where multiple clients can interact with a server. In specific how do you go about dealing with different states and message handling, how do you start designing and considering all these cases? For example a video webchat application where it is possible that you call another client, while that client is already in a call, or is stuck in a modal dialog such that the calling dialog does not come through. | Why would you? that's the job of the versioning system and "Blame" :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/34577",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2662/"
]
} |
34,737 | From a reasonably common programming language, which do you find to be the most mind-bending? I have been listening to a lot of programming podcasts and taking some time to learn some new languages that are being considered upcoming, and important. I'm not necessarily talking about BrainFuck , but which language would you consider to be one that challenges the common programming paradigms? For me, I did some functional and logic (for example, Prolog ) programming in the 1990s, so I can't say that I find anything special there. I am far from being an expert in it, but even today the most mind-bending programming language for me is Perl . Not because "Hello World" is hard to implement, but rather there is so much lexical flexibility that some of the hardest solutions can be decomposed so poetically that I have to walk outside away from my terminal to clear my head. I'm not saying I'd likely sell a commercial software implementation, just that there is a distinct reason Perl is so (in)famous. Just look at the basic list of books on it. So, what is your mind-bending language that promotes your better programming and practices ? | APL This language is incredibly powerful and very terse, it will hurt your brain. For starters it's tricky to use without a custom keyboard, or at least a keyboard overlay to show up all the obscure symbols it uses. Then the language is of the vector/array-based paradigm and specialises towards complex linear algebra. The original version did not even have loop constructs, anything and everything done by chaining rather unusual array operators together. strip_tags() anybody? (borrowed from Wikipedia) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/34737",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2274/"
]
} |
34,843 | It seems that, in my experience, getting us engineers to accurately estimate and determine tasks to be completed is like pulling teeth. Rather than just giving a swag estimate of 2-3 weeks or 3-6 months... what is the simplest way to define software schedules so they are not so painful to define? For instance, customer A wants a feature by 02/01/2011. How do you schedule time to implement this feature knowing that other bug fixes may be needed along the way and take up additional engineering time? | If you are cranking out a project nearly identical to other projects you have done, using familiar tools and a familiar team, and you are given firm, written requirements, then it should be possible to make good estimate. These are the conditions that painters, carpet installers, landscapers, etc., regularly experience. But it is not a good fit for many (or most) software projects. We're often asked to estimate projects that use new tools, technologies, where requirements are changing,etc. This is more extrapolation into the unknown than interpolation over our past experiences. So it is natural that estimation will be more difficult. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/34843",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4459/"
]
} |
35,038 | Yesterday I had a terrible experience in an interview. The interviewer asked me about pure virtual functions.
I said it may or may not have a definition in the base class, but derived classes should provide a definition unless they also want to be an abstract class. But the interviewer kept asking, "Can a pure virtual function have a definition?!"...
I said yes. Again he said, "Pure?" I said yes, it is allowed; derived classes can explicitly call that function if they want that particular behavior. He sent me out.
I am sure that he doesn't know that a pure virtual function can have a definition. How do you deal with this kind of interviewer? After being asked the second time, should I have lied and said that it can't have a definition? :) Or should I stick to my answer and lose the job opportunity? | No. And you should thank your lucky stars that you got missed by that particular bullet. Working for people who refuse to admit that they might not know everything, and refuse to learn from others, is a VERY unpleasant experience. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35038",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12729/"
]
} |
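The claim made in the question is correct in standard C++: a pure virtual function may have a definition (it just has to be written outside the class body), the class remains abstract, and derived classes can call that definition explicitly. A small self-contained sketch:

    #include <iostream>

    class Base {
    public:
        virtual ~Base() = default;
        virtual void log() const = 0;   // pure virtual, yet defined below
    };

    // A pure virtual function's definition must be provided outside the class body.
    void Base::log() const { std::cout << "Base::log default behaviour\n"; }

    class Derived : public Base {
    public:
        void log() const override {
            Base::log();                // explicitly reuse the base definition
            std::cout << "Derived::log extras\n";
        }
    };

    int main() {
        Derived d;
        d.log();      // prints both lines
        // Base b;    // still ill-formed: Base remains abstract
    }

The classic practical case is a pure virtual destructor, which must be given a definition because derived destructors always invoke it.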
35,074 | I try to understand the benefits of distributed version control system (DVCS). I found Subversion Re-education and this article by Martin Fowler very useful. Mercurial and others DVCS promote a new way of working on code with changesets and local commits. It prevents from merging hell and other collaboration issues We are not affected by this as I practice continuous integration and working alone in a private branch is not an option, unless we are experimenting. We use a branch for every major version, in which we fix bugs merged from the trunk. Mercurial allows you to have lieutenants I understand this can be useful for very large projects like Linux, but I don't see the value in small and highly collaborative teams (5 to 7 people). Mercurial is faster, takes less disk space and full local copy allows faster logs & diffs operations. I'm not concerned by this either, as I didn't notice speed or space problems with SVN even with very large projects I'm working on. I'm seeking for your personal experiences and/or opinions from former SVN geeks. Especially regarding the changesets concept and overall performance boost you measured. UPDATE (12th Jan) : I'm now convinced that it worth a try. UPDATE (12th Jun) : I kissed Mercurial and I liked it. The taste of his cherry local commits. I kissed Mercurial just to try it. I hope my SVN Server don't mind it. It felt so wrong. It felt so right. Don't mean I'm in love tonight . FINAL UPDATE (29th Jul) : I had the privilege to review Eric Sink 's next book called Version Control by Example . He finished to convince me. I'll go for Mercurial. | Note: See "EDIT" for the answer to the current question First of all, read Subversion Re-education by Joel Spolsky. I think most of your questions will be answered there. Another recommendation, Linus Torvalds' talk on Git: http://www.youtube.com/watch?v=4XpnKHJAok8 . This other one might also answer most of your questions, and it is quite an entertaining one. BTW, something I find quite funny: even Brian Fitzpatrick & Ben Collins-Sussman, two of the original creators of subversion said in one google talk "sorry about that" referring to subversion being inferior to mercurial (and DVCSs in general). Now, IMO and in general, team dynamics develop more naturally with any DVCS, and an outstanding benefit is that you can commit offline because it implies the following things: You don't depend on a server and a connection, meaning faster times. Not being slave to places where you can get internet access (or a VPN) just to be able to commit. Everyone has a backup of everything (files, history), not just the server. Meaning anyone can become the server . You can commit compulsively if you need to without messing others' code . Commits are local. You don't step on each other's toes while committing. You don't break other's builds or environments just by committing. People without "commit access" can commit (because commiting in a DVCS does not imply uploading code), lowering barrier for contributions, you can decide to pull their changes or not as an integrator. It can reinforce natural communication since a DVCS makes this essential... in subversion what you have instead are commit races, which force communication, but by obstructing your work. Contributors can team up and handle their own merging, meaning less work for integrators in the end. Contributors can have their own branches without affecting others' (but being able to share them if necessary). 
About your points: Merging hell doesn't exist in DVCSland; doesn't need to be handled. See next point . In DVCSs, everyone represents a "branch", meaning there are merges everytime changes are pulled. Named branches are another thing. You can keep using continuous integration if you want. Not necessary IMHO though, why add complexity?, just keep your testing as part of your culture/policy. Mercurial is faster in some things, git is faster in other things. Not really up to DVCSs in general, but to their particular implementations AFAIK. Everyone will always have the full project, not only you. The distributed thing has to do with that you can commit/update locally, sharing/taking-from outside your computer is called pushing/pulling. Again, read Subversion Re-education. DVCSs are easier and more natural, but they are different, don't try to think that cvs/svn === base of all versioning. I was contributing some documentation to the Joomla project to help preaching a migration to DVCSs, and here I made some diagrams to illustrate centralized vs distributed. Centralized Distributed in general practice Distributed to the fullest You see in the diagram there is still a "centralized repository", and this is one of centralized versioning fans favourite arguments: "you are still being centralized", and nope, you are not, since the "centralized" repository is just repository you all agree on (e.g. an official github repo), but this can change at any time you need. Now, this is the typical workflow for open-source projects (e.g. a project with massive collaboration) using DVCSs: Bitbucket.org is somewhat of a github equivalent for mercurial, know that they have unlimited private repositories with unlimited space, if your team is smaller than five you can use it for free. The best way you can convince yourself of using a DVCS is trying out a DVCS, every experienced DVCS developer that has used svn/cvs will tell you that is worth it and that they don't know how they survived all their time without it. EDIT : To answer your second edit I can just reiterate that with a DVCS you have a different workflow, I'd advise you not to look for reasons not to try it because of best practices , it feels like when people argue that OOP is not necessary because they can get around complex design patterns with what they always do with paradigm XYZ; you can benefit anyways. Try it, you'll see how working in "a private branch" is actually a better option. One reason I can tell about why the last is true is because you lose the fear to commit , allowing you to commit at any time you see fit and work a more natural way. Regarding "merging hell", you say "unless we are experimenting", I say "even if you are experimenting + maintaing + working in revamped v2.0 at the same time ". As I was saying earlier, merging hell doesn't exist, because: Everytime you commit you generate an unnamed branch, and everytime your changes meet other persons' changes, a natural merge occurs. Because DVCSs gather more metadata for each commit, less conflicts occur during merging... so you could even call it an "intelligent merge". When you do bump into merge conflicts, this is what you can use: Also, project size doesn't matter, when I switched from subversion I actually was already seeing the benefits while working alone, everything just felt right. 
The changesets (not exactly a revision, but a specific set of changes for specific files you include a commit, isolated from the state of the codebase) let you visualize exactly what you meant by doing what you were doing to a specific group of files, not the whole codebase. Regarding how changesets work and the performance boost. I'll try to illustrate it with an example I like to give: the mootools project switch from svn illustrated in their github network graph . Before After What you are seeing is developers being able to focus on their own work while commiting, without the fear of breaking others' code, they worry about breaking others' code after pushing/pulling (DVCSs: first commit, then push/pull, then update) but since merging is smarter here, they often never do... even when there is a merge conflict (which is rare), you only spend 5 minutes or less fixing it. My recommendation to you is to look for someone that knows how to use mercurial/git and to tell him/her to explain it to you hands-on. By spending about half an hour with some friends in the command line while using mercurial with our desktops and bitbucket accounts showing them how to merge, even fabricating conflicts for them to see how to fix in a ridiculous ammount of time, I was able to show them the true power of a DVCS. Finally, I'd recommend you to use mercurial+bitbucket instead of git+github if you work with windows folks. Mercurial is also a tad more simple, but git is more powerfull for more complex repository management (e.g. git rebase ). Some additional recommended readings: Distributed Revision Control Systems: Git vs. Mercurial vs. SVN Git vs. Mercurial: Please Relax Distributed Version Control & Git [Part 1] & Distributed Version Control & Git [Part 2] Git/Mercurial vs. Subversion: Fight! DVCS vs Subversion smackdown, round 3 Why I Like Mercurial More Than Git ( part 1 , part 2 ) Contributing with Git: Reducing the frictions of Open Source collaboration with the Git VCS "How have the DVCSs worked for you so far?" - post on the Joomla! Framework Development mailing list asking for feedback after the experimental adoption of mercurial and git for the Joomla! Platform Project DVCS branching (with Mercurial) - A detailed and example-oriented introduction to what branching looks like in a DVCS, which is possibly the most important feature in these systems | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35074",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
35,208 | As a soon-to-be graduating high school senior in the U.S., I'm going to be facing a tough decision in a few months: which college should I go to? Will it be worth it to go to Cornell or Stanford or Carnegie Mellon (assuming I get in, of course) to get a big-name computer science degree, internships, and connections with professors, while taking on massive debt; or am I better off going to SUNY Binghamton (probably the best state school in New York) and still get a pretty decent education while saving myself from over a hundred-thousand dollars worth of debt? Yes, I know questions like this has been asked before (namely here and here ), but please bear with me because I haven't found an answer that fits my particular situation. I've read the two linked questions above in depth, but they haven't answered what I want to know: Yes, I understand that going to a big-name college can potentially get me connected with some wonderful professors and leaders in the field, but on average, how does that translate financially? I mean, will good connections pay off so well that I'd be easily getting rid of over a hundred-thousand dollars of debt? And how does the fact that I can get a fifth-years master's degree at Carnegie Mellon play into the equation? Will the higher degree right off the bat help me get a better-paying job just out of college, or will the extra year only put me further into debt? Not having to go to graduate school to get a comparable degree will, of course, be a great financial relief, but will getting it so early give it any greater worth? And if I go to SUNY Binghamton, which is far lesser-known than what I've considered (although if there are any alumni out there who want to share their experience, I would greatly appreciate it), would I be closing off doors that would potentially offset my short-term economic gain with long-term benefits? Essentially, is the short-term benefit overweighed by a potential long-term loss? The answers to these questions all tie in to my final college decision (again, permitting I make it to these schools), so I hope that asking the skilled and knowledgeable people of the field will help me make the right choice (if there is such a thing). Also, please note: I'm in a rather peculiar situation where I can't pay for college without taking out a bunch of loans, but will be getting little to no financial aid (likely federal or otherwise). I don't want to elaborate on this too much (so take it at face value), but this is mainly the reason I'm asking the question. Thanks a lot! It means a lot to me. Edit: Thanks to everyone for your wonderful responses! All thought-out and well-written, and I wish I had the time to write comments on all of them. Hopefully, I'll be able to when I get home from school and work later tonight... Edit 2: Wow! Unbelievable that I've gotten this many helpful responses in such a short amount of time! I haven't had the time to properly sit down and respond to many of these, but I really appreciate the effort, and I will do so tomorrow. Big thanks to everyone who posted! Edit 3: For those who are interested, I got into CMU, Cornell, and Binghamton, and decided on Binghamton. CMU and Cornell gave me no financial aid whatsoever, while Binghamton, being a state school costs less than $20,000 a year including room and board. 
When I got the admission letters, the decision was hard, but after visiting Binghamton and realizing just how good of a school it is (state schools are severely underrated in the United States; it's a terrible problem—for what it's worth, it turns out that Binghamton was even more selective than many of the private schools I applied to, not that that inherently means much, but just as a metric), I couldn't pass up. Besides, I visited on a terrible, rainy day, and was still impressed, so I knew it was the one. ;) Doing some actual financial analysis, I realized I would never be able to pay off the $60,000 a year required for CMU or Cornell, only making choosing Binghamton feel even better. While this question is specific to my case, I hope this can help someone else in my position. Edit 4: I've recently been made aware that students in a similar position have been stumbling across this question, and I wanted to give a short update. I'm incredibly happy here at Binghamton, and if I had to go through the college process all over again, I wouldn't have chosen any other school. I think most students tend to be happy regardless of where they go, but for me, Binghamton has been a great experience. What I want to tell students is this: I know that it's hard to judge schools without paying attention to their names and reputations — I doubt Carnegie Mellon will ever be considered a bad school for studying Computer Science — but don't ignore schools just because you've never heard of them! Don't be afraid to make practical choices. I know that Binghamton isn't a world-famous university, but from what I've seen, our curriculum is more rigorous, and lays down a better foundation than many top schools that I've heard of, and for much cheaper. We focus a lot on getting students internship, job, and research opportunities, and we have very strong connections to companies like Microsoft, Bloomberg, IBM, Lockheed Martin, and several others. We're a very practical school — we might not be famous, but if you come here, you're going to get an excellent foundation for your career, and almost certainly an internship or a job. Plus, we're small enough that students can get to know their professors well (I have several friends who are first-name basis with their professors), which certainly helps if they're interested in doing research, which we do a lot of at Binghamton. I didn't know any of this when I chose Binghamton, and it's one of those things you can only learn about a school once you go there — details like this won't be stressed on brochures and magazines, and you learn about it through experience. So, what I want to say is, don't go picking schools solely on whether you've heard of them or not, and consider state schools. The better ones, like Binghamton, are very good choices. Take this with a grain of salt: of course, if you're motivated, skilled, and hard-working, and if you push yourself to succeed, where you go doesn't matter as much because you'll be noticed wherever you are. I got very lucky with my internships and job opportunities, and my experiences have definitely been flukes, but if you are constantly improving your skills and apply yourself, you can further yourself much more than your school will ever be able to. | To tell you about my background, I went to a small, private liberal arts school and work at Google. So it's possible to land a good job without going too far into debt. But if I could make my college decision again, I would have gone with a big-name tech school. 
Big name schools offer attention It is difficult to land a good internship at Microsoft or Google if you went to school at some random school. While at big-name campuses Microsoft and Google recruiters will be begging for the best and the brightest. That isn't to say you can't get an internship/job right out of school at a top-tier company. It just means luck and nailing your interviews play a larger role. (Mostly because the interviewer isn't making any assumptions about your background.) Big name schools offer a better curriculum This is probably far more important to consider. People will tell you that you will learn the same things at less-reputable college. They are all wrong. Bigger name schools teach a lot more rigor and cover a lot more of the esoteric corner cases. (For example, requiring that students learn to implement their own hash table rather than just reading a paragraph in the book.) This makes a huge difference if you want to have a career where you are truly innovating. Anybody can write code, few people have the skills to change the world through software. If you are good, money doesn't matter I took out gobs and gobs of student loans. More than I would like to admit. But if you are working for Microsoft or Google, getting a 15-30,000 annual bonus isn't out of the ordinary. And if you are really, really good the sky literally is the limit. Re: Masters degree You shouldn't waste your time on this. If you studied hard from a top-tier undergraduate program, then you won't need it. (And the opportunity cost for the extra year or two isn't worth it.) However, if you are like me and didn't have a rigorous undergrad program I would highly recommend this option, as it will surely patch any holes in your undergraduate education. Even better, a lot of times your employer will pick up the tab depending on the program. Bottom line The bottom line is that you need to have an idea of what you are after. If you want to be a tech God in 10 years, then you should go to the best school you can get into. Internships and great jobs will pay for your loans. But if you aren't sure about that sort of lifestyle, you can play it more conservatively. This won't limit you in any way, but it will require a lot more effort on your part to land a great job after you graduate. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35208",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7228/"
]
} |
35,214 | I am currently using dev++, I am a complete beginner, (Freshman CS major) learning C++. I can get one of the newest versions of visual studio (2008 or 2009 i think) for free through my school. Not sure if it is worth the trouble of getting. thoughts? | If by "dev++" you mean this monstrosity , then drop it as fast as you can . There have been no updates to Dev-C++ in over six years, it's buggy, comes with a really ancient version of gcc and is not worth the cost of the download. Visual C++, on the other hand, is a world-class compiler and one of the best the IDEs available. That you can get it for free is great (even the Express Editions are light years ahead of Dev-C++) and I wouldn't hesitate. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35214",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10922/"
]
} |
35,293 | A few months ago my company found itself with its hands around a white-hot emergency of a project, and my entire team of six pulled basically a five week "crunch week". In the 48 hours before go-live, I worked 41 of them, two back to back all-nighters. Deep in the middle of that, I posted what has been my most successful question to date . During all that time there was never any talk of "failure". It was always "get it done, regardless of the pain." Now that the thing is over and we as an organization have had some time to sit back and take stock of what we learned, one question has occurred to me. I can't say I've ever taken part in a project that I'd say had "failed". Plenty that were late or over budget, some disastrously so, but I've always ended up delivering SOMETHING. Yet I hear about "failed IT projects" all the time. I'm wondering about people's experience with that. What were the parameters that defined "failure"? What was the context? In our case, we are a software shop with external clients. Does a project that's internal to a large corporation have more space to "fail"? When do you make that call? What happens when you do? I'm not at all convinced that doing what we did is a smart business move. It wasn't my call (I'm just a code monkey) but I'm wondering if it might have been better to cut our losses, say we're not delivering, and move on. I don't just say that due to the sting of the long hours--the company royally lost its shirt on the project, plus the intangible costs to the company in terms of employee morale and loyalty were large . Factor that against the PR hit of failing to deliver a high profile project like this one was... and I don't know what the right answer is. | The concept of failure is really a business related call. If a commercial project costs more than the money it brings in, that project would be considered a failure. If an open source project cannot build a community around the code to help maintain it and care for it, that open source project failed. I've been involved in projects were we delivered everything on time and within budget, but the business development team failed to get follow on work. From a business perspective the project failed, although what we delivered was well received and liked. In situations like yours, the company has to make some hard decisions. If they want the project to succeed, then they need to learn some lessons: Failure to plan appropriately will cause undue stress on your team, and ultimately lead to a failed project A stressed team will retaliate with high turnover--and eventually you won't be able to get good people to join the company. Emergencies happen, but find what caused the emergency and change your practices to avoid that emergency in the future. Any company that doesn't learn from its mistakes will repeat history quite often. I would take that as a sign that it is time to find another company. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35293",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15/"
]
} |
35,413 | I work in a department where no one has ever used source control before, including myself. I am trying to push the concept. I have spent a little while researching SVN and have learned some basics. I can create/update/checkout/commit from the command line and from Tortoise. I am starting to learn how to tag and branch, but I'm still quite confused about conflicts between branches and the trunk, etc. I am still learning, but I do not have a physical person who can show me anything. It's all from books/tutorials and trial and error. From what I have read online it seems like git is the better thing to know, but it's also more complicated. I don't want to overwhelm myself. Should I continue to master SVN before moving to git, or would I be wiser to just jump to git now? Are there pros and cons to both approaches? | No. Git is so radically different from SVN it won't help you. If anything you will be looking for "update" and "commit" commands and wondering why everything is different. Start with Git from the Bottom Up and go from there. Contrasting a standard centralized update / commit pattern versioning system with Git is like comparing a bus with mass transit. A bus will get you (and everyone else) from point A to point B using the exact same route. Mass transit will get you from point A to point B using whatever route you like. Distributed itself has disadvantages you should be aware of. It takes more work to keep everyone on the same page. It offers a huge increase in flexibility though. Revision hashes vs. numbers: Someone mentioned Git using SHA1 hashes while Hg uses numbers. First, you rarely (if ever) need to deal directly with the hashes in daily commits / pulls. You need to pull up the log and pull hashes if you are doing diffs or some of the more tricky rebasing items. With Git, everyone has the same hashes. Different repositories don't have different commit numbers, and everyone is on the same page. With Hg this is less so. In practice, if you have to pull up the commit hashes and work with them (either for comparing or traversing the code's timeline), it's easier to sync with other people when you have hashes instead of numbers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35413",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1785/"
]
} |
35,432 | According to what I read, the compiler is not obliged to substitute the function call of an inline function with its body, but will do so if it can. This got me thinking- why do we have the inline word if that is the case? Why not make all function inline functions by default and let the compiler figure out if it can substitute the calls with the function body or not? | inline is from C; it was not new to C++. There are C keywords ( register and inline ) that were designed to allow the programmer to assist in code optimization. These are generally ignored nowadays, since compilers can do better at register assignment and deciding when to inline functions (in fact, a compiler can either inline or not inline a function at different times). Code generation on modern processors is far more complicated than on the more deterministic ones common when Ritchie was inventing C. What the word means now, in C++, is that it can have multiple identical definitions, and needs to be defined in every translation unit that uses it. (In other words, you need to make sure it can be inlined.) You can have an inline function in a header with no problems, and member functions defined in a class definition are automatically effectively inline . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35432",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92/"
]
} |
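To make the "multiple identical definitions" point concrete, here is a hypothetical header-only sketch. Without the inline keyword, two .cpp files that both include this header would each emit a definition of clamp_to_byte and violate the one-definition rule at link time; with it, the duplicate definitions are permitted and folded into one.

    // widget_math.h (hypothetical example header)
    #ifndef WIDGET_MATH_H
    #define WIDGET_MATH_H

    // May be defined in every translation unit that includes this header.
    inline int clamp_to_byte(int v) {
        if (v < 0)   return 0;
        if (v > 255) return 255;
        return v;
    }

    struct Pixel {
        int value;
        // Member functions defined inside the class are implicitly inline.
        int clamped() const { return clamp_to_byte(value); }
    };

    #endif  // WIDGET_MATH_H

The member function needs no keyword at all, which is the other point the answer makes: definitions inside the class body are implicitly inline.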
35,471 | I like to keep my lines of code under 80 characters because: I don't have to do any horizontal scrolling; I know the line is probably too complicated if it exceeds this limit; and it prints out nicely on paper. Concerning the latter, I've met only a few who actually print out code to look at (I'm one of them). So how common is it to print out code? | I still very occasionally print out code - but only if it's a particularly knotty problem. It usually indicates that the code is too complicated and needs refactoring, so in the first instance having something to scribble on helps find and fix the problem and then it helps work out where the code should be split. In an ideal world of SOLID and DRY principles you should be able to see the whole of a method on a single screen. However, we don't work in an ideal world... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35471",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
I am a relatively young programmer. I am 23 and I have been programming professionally for about 5 years. Like most programmers, I started with C, learned some x86 assembly for fun, and then found C++, which turned out to be my greatest passion in the programming world. Programming with C and C++ forces you to learn platform-specific APIs, libs and frameworks, all of which require constant study and experimentation.
After some time I had to move on to Java and C#, as the demand in my region is basically for these languages. With these languages I entered the world of web development, and then I had to learn JavaScript.
Developing for the .NET Framework was exciting at first, but I constantly felt as if I was being tied up by Microsoft (and of course the .NET Framework was driving me away from Linux).
For desktop development I could do pretty much everything I did with .NET using C++ with Qt but for web development I had to look for an alternative. Quickly I found Django and then I proceeded to learn Python so I could use Django.
Nowadays I am learning iOS development with Objective-C. So far it was pretty much easy to learn all these languages (C++ trained me well) but I am worried that someday I won't be able to keep track of them all.
Just to clarify. The only languages I learned cause I had to were C# and Java. All of the others I learned for fun, because I love programming and learning new things. Also I like to keep my skills sharp on desktop, web and mobile development. My question is: How do you keep track of multiple programming languages? (I mean, keep track of changes to these languages and keep your skills sharp) and: Is there such a thing as enough programming languages? | Personally, I think "keeping track" of languages is a waste of time. It's always good to pick up new popular languages, but once you have a popular and well-established language like C++, Python, etc. under your belt, you shouldn't be worried. If you're a good programmer, the language is just a set of keywords. There's only so many substantial paradigms out there; maybe old dogs can't learn new tricks, but there really aren't many new tricks. If you're worried that your functional / object-oriented / event-driven / whatever language may not last, learn another paradigm; but don't fret too much over the exact language choice. And so what if you forget a keyword or two after you've been away from a language for awhile? That's why we have Google. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35525",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12824/"
]
} |
35,582 | We've just come across one of those situations which occasionally comes up when a developer goes off sick for a few days mid-project. There were a few questions about whether he'd committed the latest version of his code or whether there was something more recent on his local machine we should be looking at, and we had a delivery to a customer pending so we couldn't wait for him to return. One of the other developers logged on as him to see and found a mess of workspaces, many seemingly of the same projects, with timestamps that made it unclear which one was "current" (he was prototyping some bits on versions of the project other than his "core" one). Obviously this is a pain in the neck, however the alternative (which would seem to be strict standards for how each developer works on their own machine to ensure that any other developer can pick things up with a minimum of effort) is likely to break many developers personal work flows and lead to inefficiency on an individual level. I'm not talking about standards for checked-in code, or even general development standards, I'm talking about how a developer works locally, a domain generally considered (in my experience) to be almost entirely under the developers own control. So how do you handle situations like this? Are the one of those things that just happens and you have to deal with, the price you pay for developers being allowed to work in the way that best suits them? Or do you ask developers to adhere to standards in this area - use of specific directories, naming standards, notes on a wiki or whatever? And if so what do your standards cover, how strict are they, how do you police them and so on? Or is there another solution I'm missing? [Assume for the sake of argument that the developer can not be contacted to talk through what he was doing here - even if he could knowing and describing which workspace is which from memory isn't going to be simple and flawless and sometimes people genuinely can't be contacted and I'd like a solution which covers all eventualities.] Edit: I get that going through someone's workstation is bad form (though it's an interesting - and likely off-topic - question as to precisely why that is) and I'm certainly not looking at unlimited access. Think more along the lines of a standard where their code directories are set up with a read-only share - nothing can be changed, nothing else can be seen and so on. | " If it's not in source control, it doesn't exist. " This is one of the few things in our profession that I'm borderline dogmatic about. For the following reasons: Even though the workstation is company property, let's face it - there is a bit of an unwritten rule that a programmer's own workstation is his/her castle. I'm just uneasy with a workplace culture where anyone can routinely log onto it and go through it. Everybody has their own flow (as you said as well). Trying to force all developers to organise their own local workspaces a certain way may go against their particular way of working, and break their flow and make them less efficient. Stuff that isn't in source control is half-baked code. If it's fully baked code that's ready for release, it should be in source control. Which comes back again to the main point.... "If it's not in source control, it doesn't exist." One possible way to mitigate the issue of wanting to look at code on people's workstations is to foster a culture of regular checkins. 
I worked at a company once where - even though there was no official mandate to do so - it was seen as a sort of point of pride to always have everything checked in for the weekend. In maintenance and release candidate phases, CR items were deliberately very fine-grained to allow for small, cleanly visible changes, and regular checkins to keep track of them. Also, having everything checked in before you go on vacation was mandatory. TL;DR version: Rifling through people's workstations is bad form. Rather than trying to foster a culture of making it easy to go through people's workstations to find what we want, it's better practice to foster a culture of sensible source control use and regular checkins. Possibly even hyper-regular checkins, and fine-grained tasks when in critical phases of projects. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35582",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5095/"
]
} |
35,594 | Everyone says the same thing: "a real programmer knows how to handle real problems." But they forget how they learned this ability or where: it's not taught in schools. What can I do to improve my ability to tackle complex programming problems? What strategies have worked for you? Are there specific areas I should be focusing on, like algorithms or design patterns? | A few techniques that might or might not work: Look at existing solutions to common problems, e.g. design patterns. Maybe you will find something similar that at least partially resembles your problem. Search the web. Act as if the problem has already been solved, and trace back from there to the solution you need to build. For example, instead of designing the API for a class, just write the code that makes use of the class, with method calls as you would like them, and then implement that API. Do something else, e.g. surf the net or play solitaire, and wait for inspiration to happen. Think of the person you like most, and pretend you want to impress her with your problem-solving skills. What would be an extremely impressive solution? Check the problem for inherent contradictions or conflicting requirements, and state exactly what they are and what compromise could be made. Often, when such conflicts exist but you are not aware of them, you tend to discard one possible solution after another because you cannot perfectly satisfy all requirements. If you already have a possible solution, but it feels "dirty" (copy-paste, global variables, spaghetti code, etc.), use it anyway and make it better afterwards. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35594",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
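A small illustration of the "act as if the problem has already been solved" technique from the answer above, sketched in Python. The Report class and its method names are hypothetical, invented only to show the work-backwards flow: the calling code is written first, exactly as you would like to use it, and the class is then implemented to satisfy it.

# Step 1: pretend the solution exists and write the code you wish you could write.
def print_monthly_summary(sales):
    report = Report(title="Monthly sales")   # hypothetical class, designed from the caller's side
    report.add_rows(sales)                   # method names chosen purely for how they read here
    print(report.render())

# Step 2: now implement the API that the calling code demanded.
class Report:
    def __init__(self, title):
        self.title = title
        self.rows = []

    def add_rows(self, rows):
        self.rows.extend(rows)

    def render(self):
        lines = [self.title] + [f"{name}: {amount}" for name, amount in self.rows]
        return "\n".join(lines)

print_monthly_summary([("widgets", 120), ("gadgets", 75)])

Writing the caller first keeps the API honest: it ends up shaped by how it will actually be used rather than by how it was convenient to implement.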
35,639 | Just out of curiosity, I started wondering whether a language which doesn't allow comments would yield more readable code, as you would be forced to write self-commenting code. Then again, you could write code just as bad as before because you just don't care. But what's your opinion? | I think programmers would figure out another way to add comments... string aComment = "This is a kludge."; | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35639",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
35,755 | Are there any techniques in programming that you find to be overused (i.e., used far more than they should be) or abused, or used a bit for everything, even though they are not a really good solution to many of the problems people attempt to solve with them? It could be regular expressions, some kind of design pattern or maybe an algorithm, or something completely different. Maybe you think people abuse multiple inheritance, etc. | Comments in Code. I just wish university professors would understand that they don't need to teach their students to write 10 lines of comments saying the following code is a for loop, iterating from 1 to the number of x. I can see that in the code! Teach them to write self-documenting code first, then appropriate commenting on what cannot be adequately self-documenting second. The software world would be a better place. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35755",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12750/"
]
} |
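To make the point in the answer above concrete, here is a minimal Python sketch (the data and the loop are invented for illustration): the first version needs a comment that merely narrates the code, while the second makes that comment unnecessary by naming things properly and reserves comments for intent the code cannot express.

d = [("widgets", 120), ("gadgets", 75)]   # sample data for both versions

# Version 1: the comment restates what the code already says.
# Loop over the list and add up the second field of each tuple.
t = 0
for i in range(len(d)):
    t += d[i][1]

# Version 2: self-documenting names; the only comment left explains the "why".
def total_amount(line_items):
    # Business rule: refunds are already stored as negative amounts, so a plain sum is correct.
    return sum(amount for _name, amount in line_items)

assert t == total_amount(d) == 195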
35,819 | Should a software engineer also act as technical support? That is, should a company allow their engineers to wear both the software engineer and technical support hats. It seems that it would remove the ability to write software if much of an engineer's time is taken up by technical support. | This is a classic issue in companies that have a software development component in their work, whether they are software companies or not. I struggle with this all the time. Having Developers involved in Production Support Pros Fights "Development in a vacuum" syndrome . It's valuable to gain exposure to how users use the app. Until I finally saw this as a young developer, I didn't realize what a crappy UI developer I was. All I cared about was coding and not design, analysis or the user's perspective. Developers that are not as good as they think they are can be humbled (although there's no guarantee you'll get this benefit; some devs are truly oblivious, selfish, and stubborn). Developers will gain domain knowledge . This is critical if your developers are to eventually become better at identifying and filling in the gaps the business analysis phase (assuming there is any) misses. Good support is a marketing point. If you do it well, clients will come to appreciate it. And a well-rounded developer with communication skills and domain knowledge is capable of doing this well. However, I would still prefer that applications be of high enough quality that they don't need support. Superior quality is its own form of customer support (and also a marketing point). Cons Interruption factor . This is the number one fault of mixing project work and support work, bar none. Projects interfere with support, and support interferes with projects. Projects depend on estimates and milestone progress, support is unpredictable and can involve impromptu urgency. Projects are schedule-based, support is interruption-based. Not a happy combination, and very frustrating for developers to deal with. Not everyone is good at support . Someone that's less experienced with the app or business, or someone whose personality or communication skills are such that they are better shielded from user access might not work well in support. Inefficient use of resources . Frank Shearar noted in the comments that a developer doing trivial support can be more expensive than a level one support tech. In my experience, most developers don't like support. Having served on both the project and support sides, I can sympathize. When having to do both at the same time, the mitigating factor is often overtime, usually unpaid, to deal with support emergencies and still make project deadlines. Project managers love unpaid overtime because it means making dates without costing more money, but for the devs, it's just a big bowl of suck. However, I also believe that if developers did a better job creating reliable and intuitive systems, you'd have less support. So this creates a weird circular argument for mixing the two. What I think you should do if you have to do both is find ways to avoid making it simultaneous. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35819",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4459/"
]
} |
35,939 | I have heard of many web APIs, like those of Facebook, Twitter, etc., which help third parties access data and manipulate it. I would like to know how a web API works. What are the basics of a web API? If I want to create an API for my site, so that people can access or update it, what will I need to start with? | At its simplest, you create a set of GET/POST requests that anyone can call, and publish the information on the URLs, parameters and effects: GET requests for read-only tasks and POST requests for anything that will change data on the server. Add in an authentication system if needed and you have yourself a simple Web API. A Web API is just an interface to allow access to your system (such as a site) via standard HTTP request methods . The data itself is usually wrapped in some standard format (such as JSON or XML ) to make it easy to handle. Here is an example Web API for 'TextWise' | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35939",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10153/"
]
} |
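As a rough sketch of the answer above, here is a minimal read/write web API written with Flask (one of several Python micro-frameworks that would do; the route name and the in-memory "database" are invented for illustration): one GET endpoint for read-only access, one POST endpoint for changes, with JSON as the wrapping format. An authentication layer would sit on top of this in anything real.

from flask import Flask, jsonify, request

app = Flask(__name__)
items = []                                     # stand-in for a real data store

@app.route("/items", methods=["GET"])          # read-only: safe to call repeatedly
def list_items():
    return jsonify(items=items)

@app.route("/items", methods=["POST"])         # changes data on the server
def add_item():
    item = request.get_json()                  # the JSON body sent by the client
    items.append(item)
    return jsonify(item), 201

if __name__ == "__main__":
    app.run()   # clients can now GET or POST http://localhost:5000/items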
35,946 | Is it a code smell to call a public method from a private method of the same object instance? | No, that's not a bad smell. It might even be necessary; why do you suspect it to be wrong? A method, at an atomic level, is an independent entity that does a task. As long as it does that task, anyone who has access to it can call it to get the task done. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/35946",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12929/"
]
} |
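A tiny Python sketch of the situation in the question above (the class and method names are made up): a "private" helper reuses a public method of the same object, which is ordinary reuse rather than a smell. Python only has privacy by convention, so the leading underscore stands in for a real private modifier.

class Invoice:
    def __init__(self, line_items):
        self.line_items = line_items

    def total(self):                      # public API, also reused internally
        return sum(self.line_items)

    def _formatted_summary(self):         # "private" by convention (leading underscore)
        # Calling the public method avoids duplicating the totalling logic.
        return f"{len(self.line_items)} items, total {self.total()}"

# Called from outside only to demonstrate the result.
print(Invoice([10, 20, 5])._formatted_summary())   # 3 items, total 35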
36,175 | I have always seen the recommendation that we should first write unit tests and then start writing code. But I feel that going the other way is much more comfortable (for me) - write code and then the unit tests, because I feel we have much more clarity after we have written the actual code. If I write the code and then the tests, I may have to change my code a little bit to make it testable, even if I concentrate much on creating a testable design. On the other hand, if I write the tests and then the code, the tests will change pretty frequently as and when the code shapes up. As I see a lot of recommendations to start writing tests and then move on to coding, what are the disadvantages if I do it the other way - write code and then the unit tests? | Red is the answer. Red is what you get from TDD's red-green-refactor cycle that you can't get, test-last. First, write a failing test. Watch it fail. That's your red, and it's important. It says: I have this requirement and I know my code isn't satisfying it. So when you go to step 2 (green), you know, with just as much certainty, that your code now is satisfying that requirement. You know that you've changed your code base in such a way as to satisfy the requirement. Requirements (tests) developed after the code, based on the code, deprive you of that kind of certainty, that confidence. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/36175",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9208/"
]
} |
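A minimal red-green illustration of the answer above, using Python's unittest (the slugify function is an invented example): the test is written first and fails because the function does not exist yet; the implementation below it is then added only to make that test pass.

import unittest

# Step 1 (red): this test is written before slugify exists, so the first run fails.
class SlugifyTest(unittest.TestCase):
    def test_spaces_become_hyphens_and_case_is_lowered(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2 (green): the smallest implementation that satisfies the requirement above.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

if __name__ == "__main__":
    unittest.main()

The red run is the point: it proves the test can fail, so the later green run actually demonstrates that the new code satisfies the requirement.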
36,491 | When talking with colleagues about software design and development principles, I've noticed one of the most common sources for analogies is the construction industry. We build software and we consider the design and structure to be the architecture . One of the best ways to learn (or teach) is through analyzing analogies - what other analogies can be drawn from construction? (whether already in common use in software or not). Please provide a description, or your personal experience, regarding how the programming concept is similar to the construction concept. [Credit to Programming concepts taken from the arts and humanities for the idea] | That is where design patterns came from. The person that allegedly introduced the concept to the world was Christopher Alexander in his book "A Pattern Language: Towns, Buildings, Construction" in 1977 . From there, the Gang of Four (GoF) picked it up , and the rest is history. Even now, during lectures and in software development and architecture books, analogies between the construction world and the software development world keep prevailing. Some analogies and references I can think of or recall: For example, with changing requirements during the construction of a building, it would perhaps become more evident to the client how absurd this is, e.g.: "OH, and I want a garage instead of where the kitchen you just finished is". Temporary aids such as scaffolding (meaning in construction world | software development ) Clients cannot keep adding features without it costing them; lots of times they want stuff done for free, and sometimes we are dumb enough to accept; that just couldn't happen in the construction world (see requirements creep ). The roles in software development: the architect is central to the design of the solution; consultant and contractor can be interchangeable terms; the workers are the programmers. The client cannot provide accurate requirements in both cases. Budgets and time estimates are often wrong. The product cannot be really seen in its true form until the end . A building might have construction faults after being built, the same way software has bugs . If the product is badly done, sometimes it is preferable to demolish it and start over rather than fix it. Not knowing about the actual and real outcomes of poor-quality work, the client wants the cheapest solution . Open Source . I was just watching this talk from Doc Searls called " Why All Business Will Be Based on Open Source ", where he tells how the construction community shares techniques and general knowledge instead of patenting them, much like the open source community, even when some stuff in buildings contains proprietary products built in. Projects turn out better for everyone if the client is involved actively . (If more come to mind, I'll add them.) There are some who don't think the general analogy is correct; a recommended reading for this is The Software Construction Analogy is Broken . Also, there is a question about this on SO titled What's wrong with the analogy between software and building construction? . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/36491",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2314/"
]
} |
36,612 | Stock options don't make much sense, since the company's private. [It still does, if you are a facebook of sorts AND the regulatory system permits sites like secondmarket, but I digress.] I could think of some: Health benefits to parents and parents-in-laws Sponsoring a fuel-saving bike to drive to office Gift cards for occasions like completion of 1, 3, 5 years of service I really could do with more suggestions here. EDIT: Thanks everyone for the response. To summarize, here are the additional things my HR could do: Matching contribution to employee retirement fund provided the employee contributes Funding continuing education, professional courses etc. Company subscription to ACM, IEEE, Safari Books etc. Meal vouchers Membership to gyms Hosting a recreation room at office Spot bonuses Time off for code spikes in recognition of individual contribution Sabbaticals | Paid lessons for anything they want - programming, human languages, music, arts etc. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/36612",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8720/"
]
} |
36,731 | Perfectionism may be good and bad when programming. When and where do you draw the line when you are problem solving? When do you decide when a solution is overkill, too general or simply too futuristic? Please comment if the question is unclear. | KISS and YAGNI , especially YAGNI. Only engineer a solution for the things you know you're going to need soon. Don't engineer it for the things that might be needed in two years, because most likely you're going to need very different things and will have to re-engineer it anyway. The moment you start talking about "with this design at some point in the future we could do X, or even Y", instead of "this design allows us to do customer requirement Z in the next release", that's when you're getting into architecture astronomy. In response to the comments: KISS = Keep It Simple, Stupid = pretend you're a moron and have to understand the design YAGNI = You Ain't Gonna Need It = stop pretending you can predict the future in your design | {
"source": [
"https://softwareengineering.stackexchange.com/questions/36731",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7083/"
]
} |
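A small before/after sketch of YAGNI in Python (both functions are invented examples): the first version speculates about formats and options nobody has asked for, while the second implements only the requirement actually on the table and can be extended when a real need appears.

# Over-engineered: hooks for requirements nobody has yet ("at some point we could...").
def export_report(data, fmt="csv", encoding="utf-8", compress=False, plugins=None):
    raise NotImplementedError("half of these options will never be used")

# YAGNI: the current requirement is a CSV string, so that is all this does.
def export_report_csv(rows):
    return "\n".join(",".join(str(field) for field in row) for row in rows)

print(export_report_csv([("date", "total"), ("2011-01-31", 1250)]))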
36,757 | A current draft of the "National Strategy for Trusted Identities in Cyberspace" has been posted by the Department of Homeland Security . This question is not asking about privacy or constitutionality, but about how this act will impact developers' business models and development strategies. When the post was made I was reminded of Jeff's November blog post regarding an internet driver's license . Whether that is a perfect model or not, both approaches are attempting to handle a shared problem (of both developers and end users): How do we establish an online identity? The question I ask here is, with respect to the various burdens that would be imposed on developers and users, what are some of the major, foreseeable implementation issues that will arise from the current U.S. Government's proposed solution? For a quick primer on the setup, jump to page 12 for infrastructure components, here are two stand-outs: An Identity Provider (IDP) is responsible for the processes associated with enrolling a
subject, and establishing and maintaining the digital identity associated with an individual or NPE. These processes include identity vetting and proofing, as well as revocation, suspension, and recovery of the digital identity. The IDP is responsible for issuing a credential, the information object or device used during a transaction to provide evidence of the subject’s identity; it may also provide linkage to authority, roles, rights, privileges, and other attributes. The credential can be stored on an identity medium, which is a device or object (physical or virtual) used for storing one or more credentials, claims, or attributes related to a subject. Identity media are widely available in many formats, such as smart cards, security chips embedded in PCs, cell phones, software based certificates, and USB devices. Selection of the appropriate credential is implementation specific and dependent on the risk tolerance of the participating entities. Here are the first considered actionable components of the draft: Action 1: Designate a Federal Agency to Lead the Public/Private Sector Efforts Associated with Achieving the Goals of the Strategy Action 2: Develop a Shared, Comprehensive Public/Private Sector Implementation Plan Action 3:Accelerate the Expansion of Federal Services, Pilots, and Policies that Align with the Identity Ecosystem Action 4:Work Among the Public/Private Sectors to Implement Enhanced Privacy
Protections Action 5:Coordinate the Development and Refinement of Risk Models and Interoperability
Standards Action 6: Address the Liability Concerns of Service Providers and Individuals Action 7: Perform Outreach and Awareness Across all Stakeholders Action 8: Continue Collaborating in International Efforts Action 9: Identify Other Means to Drive Adoption of the Identity Ecosystem across the
Nation | KISS and YAGNI , especially YAGNI. Only engineer a solution for the things you know you're going to need soon. Don't engineer it for the things that might be needed in two years, because most likely you're going to need very different things and will have to re-engineer it anyway. The moment you start talking about "with this design at some point in the future we could do X, or even Y", instead of "this design allows us to do customer requirement Z in the next release", that's when you're getting into architecture astronomy. In response to the comments: KISS = Keep It Simple, Stupid = pretend you're a moron and have to understand the design YAGNI = You Ain't Gonna Need It = stop pretending you can predict the future in your design | {
"source": [
"https://softwareengineering.stackexchange.com/questions/36757",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13243/"
]
} |
36,810 | We're starting to use Story Points here for our Agile development, but I find it hard to explain and also can't find any definitive answer to what they are. The best thing I can do is point to other sites (like this one ) and give some vague generalization of what they are. I'm looking for a good explanation with some examples of use that would be helpful for others to use. Are there any good resources for explaining story points? | This may help as a starter: Mike Cohn on story points . But this one is far better: Agile Development Teams: Scope and Scale with Mike Cohn Mike's solution to software estimation metrics is simple and effective. A biological fact: the human brain is just unable to estimate time correctly, especially for anything longer than a few hours. This is greatly amplified by the amount of uncertainty in software development, psychological pressures from management (when you estimate, you commit...) and differences in skills within the team. However, we are pretty good at comparing stuff. We are quite accurate there. The idea is to take one reference user story , then give it an arbitrary number of (story) points , then other stories will get points relative to that reference. If your reference story is 100 points, and another story is three times bigger, then it will be 300 points. To convert story points into time for your planning, you have to know your velocity . To get an accurate velocity, you must run a few iterations and calculate how many points your team completed in a given amount of time. It works . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/36810",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13260/"
]
} |
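The arithmetic in the answer above, written out as a short Python sketch. The reference-story numbers follow the answer's own example; the velocity figure is an assumption invented for illustration.

reference_points = 100                    # the arbitrary size given to the reference story
backlog = {
    "reference story": 1.0,               # relative size compared with the reference
    "three times bigger": 3.0,
    "half as big": 0.5,
}
story_points = {name: ratio * reference_points for name, ratio in backlog.items()}

velocity = 150                            # assumed: points the team completed per iteration so far
total = sum(story_points.values())        # 450 points in this example
iterations_needed = total / velocity      # 3.0 iterations

print(story_points)
print(f"{total} points at velocity {velocity} -> about {iterations_needed:.1f} iterations")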
36,978 | I learned very early on that cutting & pasting somebody else's code takes longer in the long run than writing it yourself. In my opinion, unless you really understand it, cut & paste code will probably have issues which will be a nightmare to resolve. Don't get me wrong, I mean finding other people's code and learning from it is essential, but we don't just paste it into our app. We rewrite the concepts into our app. But I'm constantly hearing about people who cut & paste, and they talk about it like it's common practice. I also see comments by others which indicate it's common practice. So, do most programmers cut & paste code? | Two general cases: From one project to another: Most programmers cut and paste code in this capacity. They might find a previous project or something online and copy/paste it exactly, or copy/paste and make changes to it. I think this practice is typically fine. This is especially good when it is proven code. (Examples: some sort of utility object from a past project that worked well, or possibly code from a blog that needs few changes.) Where this can be bad is when you are copying code that you don't understand, or where the code is poor, or where there is a much better alternative solution than the code that you are pasting. Inside the same project: Copying and pasting in the same project is typically not a good idea. This is a bad smell: the code being copied should probably just be in a method/class somewhere and called repeatedly. There are some exceptions to this, but generally the programmer should be thinking: " Is there a way I can parameterize this code that I am copying? ". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/36978",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/165/"
]
} |
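A sketch of the "can I parameterize this?" question from the answer above, in Python (the validation rules are invented): instead of pasting the same block twice inside the project, the varying parts become parameters of one function.

# Before: the same logic pasted twice with small differences.
def validate_username(value):
    if not value or len(value) > 30:
        raise ValueError("username must be 1-30 characters")

def validate_city(value):
    if not value or len(value) > 50:
        raise ValueError("city must be 1-50 characters")

# After: one parameterized helper, called wherever the block used to be pasted.
def validate_length(value, field, max_len):
    if not value or len(value) > max_len:
        raise ValueError(f"{field} must be 1-{max_len} characters")

validate_length("Amsterdam", "city", 50)   # passes silently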
37,029 | What is the difference between a defect and a bug? | A bug is the result of a coding error. A defect is a deviation from the requirements. That is: a defect does not necessarily mean there is a bug in the code; it could be a function that was not implemented but was defined in the requirements of the software. From the Wikipedia page on software testing : Not all software defects are caused by coding errors. One common source of expensive defects is caused by requirement gaps, e.g., unrecognized requirements, that result in errors of omission by the program designer.[14] A common source of requirements gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/37029",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8843/"
]
} |
37,216 | Not fatigue as in 'I need sleep' but fatigue as in 'I just can't be bothered anymore', which usually sets in when you hit roadblocks in whatever project you're working on, generally the closer you get to deadlines. It can be in work projects or personal projects, but it's something I keep hitting more and more lately. I'll get an idea, get into working on it, have a few really good days and make progress, then just some niggly things will trip me up: I can't get things working the way I want, I've hit limitations in the framework, I've got problems I can't find documentation for, etc., and it just gets too frustrating. Or am I alone in this? | Procrastination. What you describe is probably procrastination . It's a very common phenomenon. Click on the link and read about the Temporal Motivation Theory . To beat procrastination, I make lists. When I feel I'm procrastinating, I open the list and force myself to work on the first item. After a few minutes, I get into the zone, or the flow as described by the psychologist Mihály Csíkszentmihályi . You should recognize yourself in his classic flow diagram of challenge versus skill level: | {
"source": [
"https://softwareengineering.stackexchange.com/questions/37216",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13371/"
]
} |
37,231 | I develop an Android application and in my app I use a library (jar) that I downloaded from the internet. This jar is open-source under the "GNU General Public License v2". I tried to read the text of the license but had difficulty understanding it. My question is: can I use this library, without changing anything in the jar, in a commercial application? I will be making a profit from selling my app which uses this GPL-ed .jar file. If possible, I would like to avoid converting my application to open-source. | The GPL requires you to release the source code for your distribution (in the case of the full GPL, but not the LGPL, which is much more common for libraries). But do not mix that up with the following concepts: You can still distribute the work for a fee, thus making a profit, as long as you ship the binary with the source code. You don't have to provide source code if you don't ship/distribute any binaries, as in the case
of SaaS (Software as a Service). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/37231",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13415/"
]
} |
37,249 | Possible Duplicate: When is Singleton appropriate? I am a new programmer (4 months into my first job) and have recently taken an interest in design patterns. One that I have used recently is the Singleton. However, looking at some comments on this thread , it gets some bad feedback; can somebody explain why? I have found it useful in some places, however I could probably have achieved the same without it, using a static class. | It has the same problems as a global variable. It gives you global access to mutable state. Anything that uses the Singleton is now tightly coupled to it. Try to avoid tight coupling. It makes the code hard to update. Hard to test. If your choice is between a Singleton and a global variable, then at least the Singleton provides lazy initialization. But usually it is best to avoid globally accessible mutable state and pass all dependencies to your code via parameters or in the constructor. This decouples your code from specific instances. Note: global access to constant state is not such a big deal. Here is a good talk about the subject done by Google Engineers: http://www.youtube.com/watch?v=-FRm3VPhseI The Singleton pattern looks like one of the easiest to implement (when you first see Design Patterns). It is relatively easy to do, but there are a couple of gotchas in its implementation (all of which are language specific). But the hardest part of learning about Singletons is learning when to use them (which should be rarely (most people would argue never, but I find it hard to convince myself that I could never use a Singleton (though the few times I have, most have been mistakes (but that is how we learn)))). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/37249",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10149/"
]
} |
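A small Python sketch of the alternative the answer above recommends (the classes are invented): rather than reaching for a globally accessible Singleton, the dependency is passed in through the constructor, so the collaborating code is not tied to one shared mutable instance and is easy to test with a fake.

class SmtpMailer:
    def send(self, to, body):
        print(f"sending to {to}: {body}")       # stand-in for real SMTP work

class SignupService:
    def __init__(self, mailer):                 # dependency injected, not fetched from a global
        self.mailer = mailer

    def register(self, email):
        self.mailer.send(email, "Welcome!")

# Production wiring creates one instance and passes it around explicitly...
service = SignupService(SmtpMailer())
service.register("alice@example.com")

# ...and tests can substitute a fake without touching any global state.
class FakeMailer:
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

test_service = SignupService(FakeMailer())
test_service.register("bob@example.com")
assert test_service.mailer.sent == [("bob@example.com", "Welcome!")]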
37,285 | What languages would you suggest a programmer to learn, not because they will have a lot of use of the language (but they may have), but because it will improve one's programming skill in general and let one think in a different (and possibly better) way? | for "normal" programming and algorithms: python, good to learn, easy to use, pretty to read. C++, teaches you what a computer REALLY is. for a thought changing experience: Haskell Prolog for destroying your mind and crossing the line between padawan and Jedi Master: Common LISP | {
"source": [
"https://softwareengineering.stackexchange.com/questions/37285",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12750/"
]
} |
37,307 | I've made a pretty good living as a self-taught programmer, but when I find that I discuss some low-level fundamental topics with my peers who have a CS degree, holes appear in my knowledge. I'm a big picture (architecture) guy, so for a long time this hasn't bothered me, but lately I've wondered if there is an approach I can take that will help me learn these fundamentals without going back to school? Are there books, websites or videos that you can recommend that would give me a ground-up perspective as opposed to a learn it as you need it mentality? | This ought to keep you busy for a couple of weeks: Electrical Engineering and Computer Science | MIT OpenCourseWare | Free Online Course Materials Course # Course Title
6.00SC Introduction to Computer Science and Programming (Spring 2011) Undergraduate
6.00 Introduction to Computer Science and Programming (Fall 2008)
6.01SC Introduction to Electrical Engineering and Computer Science I
6.001 Structure and Interpretation of Computer Programs
6.002 Circuits and Electronics
6.003 Signals and Systems
6.004 Computation Structures
6.005 Elements of Software Construction (Fall 2011)
6.005 Elements of Software Construction (Fall 2008)
6.006 Introduction to Algorithms (Fall 2011)
6.006 Introduction to Algorithms (Spring 2008)
6.007 Electromagnetic Energy: From Motors to Lasers
6.011 Introduction to Communication, Control, and Signal Processing (Spring 2010)
6.011 Introduction to Communication, Control, and Signal Processing (Spring 2004)
6.012 Microelectronic Devices and Circuits (Spring 2009)
6.012 Microelectronic Devices and Circuits (Fall 2009)
6.012 Microelectronic Devices and Circuits (Fall 2005)
6.013 Electromagnetics and Applications (Spring 2009)
6.013 Electromagnetics and Applications (Fall 2005)
6.021J Quantitative Physiology: Cells and Tissues (Fall 2004)
6.022J Quantitative Physiology: Organ Transport Systems
6.023J Fields, Forces and Flows in Biological Systems
6.024J Molecular, Cellular, and Tissue Biomechanics
6.025J Introduction to Bioengineering (BE.010J)
6.033 Computer System Engineering
6.034 Artificial Intelligence (Fall 2010)
6.034 Artificial Intelligence (Spring 2005)
6.035 Computer Language Engineering
6.035 Computer Language Engineering (SMA 5502)
6.041 Probabilistic Systems Analysis and Applied Probability (Fall 2010)
6.041 Probabilistic Systems Analysis and Applied Probability (Spring 2006)
6.042J Mathematics for Computer Science (Spring 2010)
6.042J Mathematics for Computer Science (Fall 2010)
6.042J Mathematics for Computer Science (Spring 2005)
6.042J Mathematics for Computer Science (Fall 2005)
6.045J Automata, Computability, and Complexity
6.046J Introduction to Algorithms (SMA 5503)
6.047 Computational Biology: Genomes, Networks, Evolution (Fall 2008)
6.050J Information and Entropy
6.055J The Art of Approximation in Science and Engineering
6.061 Introduction to Electric Power Systems (Spring 2011)
6.071J Introduction to Electronics, Signals, and Measurement
6.079 Introduction to Convex Optimization (Fall 2009)
6.07J Projects in Microscale Engineering for the Life Sciences
6.080 Great Ideas in Theoretical Computer Science (Spring 2008)
6.087 Practical Programming in C
6.088 Introduction to C Memory Management and C++ Object-Oriented Programming
6.089 Great Ideas in Theoretical Computer Science (Spring 2008)
6.090 Building Programming Experience: A Lead-In to 6.001
6.091 Hands-On Introduction to Electrical Engineering Lab Skills
6.092 Introduction to Programming in Java
6.092 Java Preparation for 6.170
6.092 Bioinformatics and Proteomics
6.094 Introduction to MATLAB
6.096 Introduction to C++
6.096 Algorithms for Computational Biology
6.097 Fundamentals of Photonics: Quantum Electronics (Spring 2006)
6.099 Street-Fighting Mathematics
6.101 Introductory Analog Electronics Laboratory
6.111 Introductory Digital Systems Laboratory (Spring 2006)
6.111 Introductory Digital Systems Laboratory (Fall 2002)
6.152J Micro/Nano Processing Technology
6.161 Modern Optics Project Laboratory (Fall 2005)
6.163 Strobe Project Laboratory
6.170 Laboratory in Software Engineering
6.171 Software Engineering for Web Applications
6.172 Performance Engineering of Software Systems
6.186 Mobile Autonomous Systems Laboratory
6.189 A Gentle Introduction to Programming Using Python (January IAP 2011)
6.189 A Gentle Introduction to Programming Using Python (January IAP 2008)
6.189 Multicore Programming Primer
6.207J Networks
6.270 Autonomous Robot Design Competition
6.338J Parallel Computing
6.370 Robocraft Programming Competition
6.431 Probabilistic Systems Analysis and Applied Probability (Fall 2010)
6.521J Quantitative Physiology: Cells and Tissues (Fall 2004)
6.637 Modern Optics Project Laboratory (Fall 2005)
6.701 Introduction to Nanoelectronics (Spring 2010)
6.801 Machine Vision (Fall 2004)
6.803 The Human Intelligence Enterprise (Spring 2006)
6.803 The Human Intelligence Enterprise (Spring 2002)
6.804J Computational Cognitive Science
6.805 Ethics and the Law on the Electronic Frontier (Fall 2005)
6.806 Ethics and the Law on the Electronic Frontier (Fall 2005)
6.813 User Interface Design and Implementation (Spring 2011)
6.814 Database Systems (Fall 2010)
6.830 Database Systems (Fall 2010)
6.831 User Interface Design and Implementation (Spring 2011)
6.837 Computer Graphics
6.857 Network and Computer Security
6.901 Inventions and Patents
6.911 Transcribing Prosodic Structure of Spoken Utterances with ToBI
6.912 Introduction to Copyright Law
6.930 Management in Engineering
6.974 Fundamentals of Photonics: Quantum Electronics (Spring 2006)
6.976 NextLab I: Designing Mobile Technologies for the Next Billion Users
6.S096 Introduction to C and C++
6.231 Dynamic Programming and Stochastic Control
6.241J Dynamic Systems and Control
6.243J Dynamics of Nonlinear Systems
6.245 Multivariable Control Systems
6.251J Introduction to Mathematical Programming
6.252J Nonlinear Programming (Spring 2004)
6.252J Nonlinear Programming (Spring 2003)
6.253 Convex Analysis and Optimization
6.254 Game Theory with Engineering Applications
6.255J Optimization Methods
6.262 Discrete Stochastic Processes
6.263J Data Communication Networks
6.264J Queues: Theory and Applications
6.281J Logistical and Transportation Planning Methods (Fall 2006)
6.281J Logistical and Transportation Planning Methods (Fall 2004)
6.301 Solid-State Circuits
6.302 Feedback Systems
6.331 Advanced Circuit Techniques
6.334 Power Electronics
6.336J Introduction to Numerical Simulation (SMA 5211)
6.337J Introduction to Numerical Methods
6.339J Numerical Methods for Partial Differential Equations (SMA 5212)
6.341 Discrete-Time Signal Processing
6.345 Automatic Speech Recognition
6.374 Analysis and Design of Digital Integrated Circuits
6.431 Probabilistic Systems Analysis and Applied Probability (Spring 2006)
6.432 Stochastic Processes, Detection, and Estimation
6.435 System Identification
6.436J Fundamentals of Probability
6.441 Information Theory
6.443J Quantum Information Science
6.450 Principles of Digital Communication I
6.450 Principles of Digital Communications I
6.451 Principles of Digital Communication II
6.452 Principles of Wireless Communications
6.453 Quantum Optical Communication
6.524J Molecular, Cellular and Tissue Biomechanics (BE.410J)
6.541J Speech Communication
6.542J Laboratory on the Physiology, Acoustics, and Perception of Speech
6.543J The Lexicon and Its Features
6.551J Acoustics of Speech and Hearing
6.555J Biomedical Signal and Image Processing
6.561J Fields, Forces, and Flows in Biological Systems (BE.430J)
6.581J Foundations of Algorithms and Computational Techniques in Systems Biology
6.630 Electromagnetics
6.632 Electromagnetic Wave Theory
6.635 Advanced Electromagnetism
6.637 Optical Signals, Devices, and Systems
6.641 Electromagnetic Fields, Forces, and Motion (Spring 2009)
6.641 Electromagnetic Fields, Forces, and Motion (Spring 2005)
6.642 Continuum Electromechanics
6.651J Introduction to Plasma Physics I (Fall 2006)
6.651J Introduction to Plasma Physics I (Fall 2003)
6.661 Receivers, Antennas, and Signals
6.685 Electric Machines
6.690 Introduction to Electric Power Systems (Spring 2011)
6.691 Seminar in Electric Power Systems
6.695 Engineering, Economics and Regulation of the Electric Power Sector (Spring 2010)
6.719 Introduction to Nanoelectronics (Spring 2010)
6.720J Integrated Microelectronic Devices
6.728 Applied Quantum and Statistical Physics
6.730 Physics for Solid-State Applications
6.763 Applied Superconductivity
6.772 Compound Semiconductor Devices
6.774 Physics of Microfabrication: Front End Processing
6.776 High Speed Communication Circuits
6.777J Design and Fabrication of Microelectromechanical Devices
6.780 Semiconductor Manufacturing
6.780J Control of Manufacturing Processes (SMA 6303)
6.781J Submicrometer and Nanometer Technology
6.821 Programming Languages
6.823 Computer System Architecture
6.824 Distributed Computer Systems Engineering
6.825 Techniques in Artificial Intelligence (SMA 5504)
6.826 Principles of Computer Systems
6.827 Multithreaded Parallelism: Languages and Compilers
6.828 Operating System Engineering
6.829 Computer Networks
6.832 Underactuated Robotics
6.833 The Human Intelligence Enterprise (Spring 2006)
6.833 The Human Intelligence Enterprise (Spring 2002)
6.834J Cognitive Robotics
6.838 Algorithms for Computer Animation
6.840J Theory of Computation
6.841J Advanced Complexity Theory
6.844 Computability Theory of and with Scheme
6.845 Quantum Complexity Theory
6.851 Advanced Data Structures
6.852J Distributed Algorithms
6.854J Advanced Algorithms (Fall 2008)
6.854J Advanced Algorithms (Fall 2005)
6.855J Network Optimization
6.856J Randomized Algorithms
6.859J Integer Programming and Combinatorial Optimization
6.863J Natural Language and the Computer Representation of Knowledge
6.864 Advanced Natural Language Processing
6.866 Machine Vision (Fall 2004)
6.867 Machine Learning
6.868J The Society of Mind
6.871 Knowledge-Based Applications Systems
6.872 Biomedical Computing
6.872J Medical Computing
6.873J Medical Decision Support (Fall 2005)
6.873J Medical Decision Support (Spring 2003)
6.874J Computational Functional Genomics
6.875 Cryptography and Cryptanalysis
6.876J Advanced Topics in Cryptography
6.877J Computational Evolutionary Biology (Fall 2005)
6.878 Computational Biology: Genomes, Networks, Evolution (Fall 2008)
6.881 Representation and Modeling for Image Analysis
6.883 Pervasive Human Centric Computing (SMA 5508)
6.883 Program Analysis
6.884 Complex Digital Systems
6.891 Computational Evolutionary Biology (Fall 2004)
6.892 Computational Models of Discourse
6.895 Essential Coding Theory
6.895 Theory of Parallel Systems (SMA 5509)
6.896 Theory of Parallel Hardware (SMA 5511)
6.897 Selected Topics in Cryptography
6.931 Development of Inventions and Creative Ideas
6.933J The Structure of Engineering Revolutions
6.938 Engineering Risk-Benefit Analysis
6.945 Adventures in Advanced Symbolic Programming
6.946J Classical Mechanics: A Computational Approach
6.971 Biomedical Devices Design Laboratory
6.972 Algebraic Techniques and Semidefinite Optimization
6.973 Communication System Design
6.973 Organic Optoelectronics
6.974 Engineering, Economics and Regulation of the Electric Power Sector (Spring 2010)
6.975 Introduction to Convex Optimization (Fall 2009)
6.976 High Speed Communication Circuits and Systems
6.977 Ultrafast Optics
6.977 Semiconductor Optoelectronics: Theory and Design
6.978J Communications and Information Policy
6.982J Teaching College-Level Science and Engineering (Fall 2012)
6.982J Teaching College-Level Science and Engineering (Spring 2009) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/37307",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
37,339 | I am in a position where I have to hire a programmer and have the choice of two candidates. The first has experience but doesn't have a passion for coding, and he says so; the second doesn't have the experience, but he has the passion, did well in the interview and is certified. We have the resources to train someone, but I really don't want to blow this process and hire someone who will be disappointing. Can anyone help me as to how to approach this situation? | Hire the inexperienced programmer with a passion for the craft. A passionate programmer will learn quickly, care about his work and enjoy doing it. I've worked with both types of programmers and I would always hire the passionate type over the experienced. People who don't care about their work eventually lead to problems in quality as well as in meeting deadlines. Since you explicitly state that you have the resources to train someone, this is a no-brainer. Hire the passionate programmer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/37339",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11161/"
]
} |
37,475 | It continues to astound me that, in this day and age, products that have years of use under their belt, built by teams of professionals, still fail to provide helpful error messages to the user. In some cases, the addition of just a little piece of extra information could save a user hours of trouble. A program that generates an error generated it for a reason. It has everything at its disposal to inform the user, as much as it can, why something failed. And yet it seems that providing information to aid the user is a low priority. I think this is a huge failing. One example is from SQL Server. When you try and restore a database that is in use, it quite rightly won't let you. SQL Server knows what processes and applications are accessing it. Why can't it include information about the process(es) that are using the database? I know not everyone passes an Application Name attribute on their connection string, but even a hint about the machine in question could be helpful. Another candidate, also SQL Server (and MySQL), is the lovely string or binary data would be truncated error message and equivalents. A lot of the time, a simple perusal of the SQL statement that was generated and the table shows which column is the culprit. This isn't always the case, and if the database engine picked up on the error, why can't it save us that time and just tell us which damned column it was? On this example, you could argue that there may be a performance hit to checking it and that this would impede the writer. Fine, I'll buy that. How about, once the database engine knows there is an error, it does a quick comparison after the fact between the values that were going to be stored and the column lengths, then displays that to the user? ASP.NET's horrid Table Adapters are also guilty. Queries can be executed and one can be given an error message saying that a constraint somewhere is being violated. Thanks for that. Time to compare my data model against the database, because the developers are too lazy to provide even a row number, or example data. (For the record, I'd never use this data-access method by choice , it's just a project I have inherited!) Whenever I throw an exception from my C# or C++ code, I provide everything I have at hand to the user. The decision has been made to throw it, so the more information I can give, the better. Why did my function throw an exception? What was passed in, and what was expected? It takes me just a little longer to put something meaningful in the body of an exception message. Hell, it does nothing but help me whilst I develop, because I know my code throws things that are meaningful. One could argue that complicated exception messages should not be displayed to the user. Whilst I disagree with that, it is an argument that can easily be appeased by having a different level of verbosity depending on your build. Even then, the users of ASP.NET and SQL Server are not your typical users, and would prefer something full of verbosity and yummy information because they can track down their problems faster. Why do developers think it is okay, in this day and age, to provide the bare minimum amount of information when an error occurs? It's 2011, guys, come on. | While there are plenty of examples of bad error messages that simply should not be, you must keep in mind that it isn't always possible to provide the information you'd like to see.
There is also the concern over exposing too much information, which may lead developers to error on the side of caution. (I promise you that pun was not intended, but I'm not removing it now that I've noticed it.) Sometimes, there's also the fact that a complex error message is useless to the end-user. Siding with information overload is not necessarily a good approach for error messages. (That may be best saved for logging.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/37475",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7693/"
]
} |
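A short Python sketch of the questioner's point about exception messages in the record above (the function and the length limit are invented): since the code already knows what was passed in and what it expected, putting that into the message costs one line and saves the next person a debugging session.

MAX_NAME_LENGTH = 30     # assumed column limit, mirroring the "data would be truncated" example

def set_customer_name(name):
    if len(name) > MAX_NAME_LENGTH:
        # Unhelpful alternative: raise ValueError("invalid name")
        raise ValueError(
            f"customer name is {len(name)} characters, but the column allows at most "
            f"{MAX_NAME_LENGTH}: {name!r}"
        )
    return name

try:
    set_customer_name("A" * 45)
except ValueError as err:
    print(err)   # tells you which value, which limit, and by how much it was exceeded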
37,548 | Whether as an attendee, a speaker, or a vendor, I wanted to know what the unspoken rules of etiquette are at software conferences, other than the blindingly obvious ones (like don't assault the winner of the iPad raffle because you didn't win). What are some of the rules that should be followed, even if you feel they don't need to be said? Please, one rule per answer, with the summary in bold leading the answer. Post multiple answers if you have multiple rules. | Treat Q&A sessions like you'd treat StackOverflow: don't be the guy who asks all the same questions over and over again, and make sure your questions are short, to the point, can be answered, wouldn't obviously violate the presenter's NDA, actually matter to 85% of the other attendees, and aren't some sort of subterfuge to insult or embarrass the presenter. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/37548",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/486/"
]
} |
37,600 | At a previous employment, a project manager (PM) wasn't satisfied with the delivery time of the code on a project I was on. I was told by my project lead that the PM was considering having me sign a contract to lock in the time estimates I gave for tasks and the delivery dates. The situation on the project was that we were working with new technologies, a new codebase, new coding standards, and very prone-to-change requirements. I was learning new things and applying them the best I could on requirements that kept on changing. The requirements throughout the iterations grew by 2-3 times, with my estimate-to-complete growing by roughly 5-8 times. The only things that didn't change were the estimates and delivery dates. Yes, I did end up missing most deadlines. And I was working on some very new technologies that no one else on the entire development team could really help out on, because they wouldn't be familiar with them. At least not easily. It seemed to me then that the PM wanted his numbers to add up-- and thus wanted me to sign a contract to "ensure" that I would always deliver working code on time. I suppose with a signed contract the PM could use it against me if I couldn't deliver on time. I believe what happened next was that other project managers and/or project leads defended me, and didn't let this happen. My question is, should this raise a red flag about the manager? Is it common practice for a manager to lock in the time estimates of a software developer with a signed contract? Or in this case, try to. Please note, I was a full-time employee, not an independent consultant. Update:
I want to add that I did give new estimates weekly, but it seems the original estimates and delivery dates were what the PM was fixated on. | My question is, should this raise a red flag about the manager? Yes . It means it is time for you to get your resume/CV up to date and start looking for a new job. Or it means that your manager is about to start playing some very nasty games with you. Is it common practice for a manager to lock-in time estimates of a software developer with a signed contract? I've never heard of this being applied to an employee. Time and effort estimation is always difficult. Especially since our profession is full of excessive optimism. There are some estimation systems that could help with estimates in the future, but they need collecting historical stats from yourself. One is PSP . Another is Function Points . Many developers like neither, and you'll find very strong opinions against both of them. The key difficulty in estimating time and effort is the lack of feedback in our estimation heuristics. One of the keys is to write down what you think the estimate is, and what parameters you used to estimate it. Then, based on what you actually get done, compare that with what you thought you'd do. And use that to modify your estimation parameters. In engineering, we call this " feedback ." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/37600",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14/"
]
} |
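A tiny sketch of the "feedback" idea from the answer above, in Python (the history numbers are invented sample data): by recording estimates next to actuals, you can compute your personal bias factor and apply it to the next raw estimate instead of hoping the profession's optimism goes away by itself.

history = [          # (estimated hours, actual hours) from past tasks -- assumed sample data
    (4, 7),
    (10, 18),
    (6, 9),
]

# Ratio of total actual to total estimated time: how optimistic past estimates were.
bias = sum(actual for _, actual in history) / sum(estimate for estimate, _ in history)

next_raw_estimate = 8
adjusted = next_raw_estimate * bias
print(f"historical bias factor: {bias:.2f}, adjusted estimate: {adjusted:.1f} hours")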
37,695 | When is Java a good choice for web development? Please do not say "When you have a development team that knows only Java." | Given the many available frameworks, the maturity of the platform etc., I'm tempted to say "almost always". So here are some reasons when you should not use Java: as a pure MS shop, you probably prefer to do it the .net way if you need the cheapest possible webhoster, you probably only have PHP as your choice if you want to do it as rapid as possible, Ruby on Rails, Grails or Django are probably better suited for your needs if your development team only knows XYZ, where XYZ != Java, you better use XYZ | {
"source": [
"https://softwareengineering.stackexchange.com/questions/37695",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/963/"
]
} |
38,002 | This came up in a discussion with a friend, and I found myself hard-pressed to think up an any good arguments. What benefits do weak typing confer? | The problem with this kind of discussion is simply that the terms "weak typing" and "strong typing" are undefined, unlike for example the terms "static typing", "dynamic typing", "explicit typing", "implicit typing", "duck typing", "structural typing" or "nominal typing". Heck, even the terms "manifest typing" and "latent typing", which are still open areas of research and discussion are probably better defined. So, until your friend provides a definition of the term "weak typing" that is stable enough to serve as the basis of a discussion, it doesn't even make sense to answer this question. Unfortunately, apart from Nick's answer , nobody of the answerers bothered to provide their definition either, and you can see the confusion that generates in some of the comments. It's hard to tell, since nobody actually provides their definitions, but I think I count at least three different ones, just on this very page. Some of the more commonly used definitions are (and yes, I know that pretty much none of them makes any sense, but those are the definitions I've seen people actually use): weak typing = unsafe typing / strong typing = safe typing weak typing = dynamic typing / strong typing = static typing weak typing = duck typing / strong typing = nominal typing weak typing = structural typing / strong typing = nominal typing weak typing = implicit typing / strong typing = explicit typing weak typing = latent typing / strong typing = manifest typing weak typing = no typing / strong typing = typing weak typing = implicit casts / strong typing = only explicit casts weak typing = implicit or explicit casts / strong typing = no casts at all weak typing = implicit conversions / strong typing = only explicit conversions weak typing = implicit or explicit conversions / strong typing = no conversions at all weak typing = interpretation / strong typing = compilation weak typing = slow / strong typing = fast weak typing = garbage collection / strong typing = manual memory management weak typing = manual memory management / strong typing = garbage collection … and many others The three definitions that seem to be used most widely, however, are weak typing = your stupid crappy programming language / strong typing = my super-awesome programming language weak typing = every other programming language / strong typing = the only programming language I ever bothered to learn (usually either Java, C# or C++; strangely, people who learn e.g. Haskell or Scheme as their first and only language don't seem to share this worldview) weak typing = every language I don't understand / strong typing = Java (substitute with C# or C++ at will) Unless everybody agrees on a definition of what "weak typing" even is , it doesn't even make sense to think about what its advantages might be. Advantages of what? Even worse, if there is no definition at all , then everybody can just shift their definitions to fit their arguments, and every discussion is pretty much guaranteed to devolve into a flamewar. I myself have personally changed my own definition several times over the years and have now reached the point where I don't even consider the terms useful any more. I also used to think that weak typing (in its various definitions) has a place in shell scripting, but whenever I have to solve the same problem in Bash and PowerShell, I am painfully reminded how wrong I was. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38002",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6/"
]
} |
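One concrete reason the labels in the answer above slide around, sketched in Python (these are real, observable Python behaviors, not assumptions): the same language can refuse one implicit conversion and silently perform another, so a definition like "weak typing = implicit casts" gives different verdicts depending on which example you pick.

# Python rejects this implicit conversion outright...
try:
    "1" + 1
except TypeError as err:
    print("str + int:", err)

# ...yet silently performs these ones.
print(1 + True)        # bool is coerced to int: prints 2
print(1 + 2.5)         # int is coerced to float: prints 3.5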
38,095 | I am currently 4 months into an internship, and when reviewing my code, my boss didn't like that I had kept a specific object local to a number of methods across a few separate classes within one assembly. He didn't like that I created a new object each time, and instead told me to create a single object that can be accessed from anywhere. I have therefore had to create it as a static object within a static class, and simply reference it from wherever I want to use it! How would you deal with this, as I have only been programming professionally for 4 months? | If one object is enough, creating a new object every time is a waste, and here your boss may be right. The problem is proper access to that object. A factory-like method with proper visibility which always returns that static object is the first solution that springs to mind. Others certainly exist. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38095",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10149/"
]
} |
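A rough Python sketch of the "factory-like method with proper visibility" the answer above suggests (the original discussion was about C#, and the configuration object here is invented): callers go through one accessor that lazily creates and then reuses a single instance, instead of each method constructing its own copy or reaching for a bare global.

class AppConfig:
    def __init__(self):
        self.settings = {"retries": 3}      # pretend this is expensive to build

_config = None                              # module-private cache, not touched directly by callers

def get_config():
    """Factory-like accessor: always hands back the same lazily created instance."""
    global _config
    if _config is None:
        _config = AppConfig()
    return _config

assert get_config() is get_config()         # every caller sees the one shared object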
38,145 | I was poking around the AdventureWorks database today and I noticed that a number of tables ( HumanResources.JobCandidate and Sales.Individual for example) have a column which stores XML data. What I would like to know is, what is the advantage of storing basically a database table row's worth of data in another table's column? Doesn't this make it difficult to query off of this information? Or is the assumption that the data won't need to be queried and just needs to be stored? | Because not all data needs to be stored relationally, and writing code to process data you've been passed as XML for relational storage is time-consuming (and very, very tedious). This is particularly true when a lot of XML data is coming from systems which are throwing out large generic responses. I've frequently seen situations where a message is received from another system and we don't care about 98% of what it contains. So we parse it to split out the 2% we do care about, store that relationally and then store the whole message in case we do need any of the remaining 98% later. And SQL Server gives you some OK-ish tools and syntax for working with XML in T-SQL, so it's not as if it's totally beyond practical reach for ad-hoc queries in the way it might be if you were storing, say, the contents of a CSV. And that excludes the possibility that what you actually want to store is XML (for instance for support and debug purposes)... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38145",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7428/"
]
} |
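The "split out the 2% you care about, keep the whole message anyway" pattern from the answer above, sketched in Java rather than T-SQL so the examples in this document stay in one language. The table name (inbound_messages), the element name (OrderId) and the absence of error handling are simplifications of my own, not details from the original answer.

import java.io.StringReader;
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class MessageArchiver {
    // Extract the small relational part, then store the full XML alongside it.
    public void store(Connection conn, String rawXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(rawXml)));

        // The "2% we care about": a single field pulled out for indexing and queries.
        String orderId = doc.getElementsByTagName("OrderId").item(0).getTextContent();

        // The other 98% is kept verbatim in an XML column in case it is needed later.
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO inbound_messages (order_id, raw_xml) VALUES (?, ?)")) {
            ps.setString(1, orderId);
            ps.setString(2, rawXml);
            ps.executeUpdate();
        }
    }
}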
38,225 | I started programming at the age of 6 on a Commodore 64. Now I'm 28, and I have to complete 4 courses from a first degree in Computer Science. I'm starting to get bored with writing code after all these years. I've taken a course in theoretical computer languages and spent 10 years doing C systems coding in the network security field, and I just don't get any stimulation from writing code. I can write code in C, C++, Python or whatever language you want, but I can't get excited about what I'm doing. I can't feel any challenge. I've written multi-threaded code, an HTTPS MITM proxy and a WSGI application without needing any specific algorithmic skill. I feel that all this new stuff is all the same, with simpler (or more) abstractions or automation in it, but it all sounds the same to me. Over and over again. Every programming language is Turing-computable, so coding feels like nothing more than replicating similar patterns over the same subset of partial functions. In my everyday work I'm feeling bored even with searching for bugs, or doing benchmarks on code, or fixing the problem with library X, Y or Z. I'm a very curious person. I always get stimulated by something. But I can't, even when I look at great code. I feel more comfortable with how things work. Is it time to make an advance in my career? Or to get into more challenging areas of Computer Science? Maybe get a more advanced degree in CS? I've started to read my first book on project management, "Peopleware", and I'm getting more interested in the software development lifecycle. What do you suggest I do? Perhaps try to get into Google or Microsoft or Apple like some friends of mine have done. Or perhaps take a more managerial career path. I have also tried to find a good book about communication and "people personality" to prepare me for the possibility of getting into management. Any suggestions? P.S: I have a lot of interests, I'm not depressed :)
I love the mountains and trekking, I take photos, I'm a sport climber, I love to swim and do sport in general, and sometimes I run. At the moment I'm reading a book about the history of my country (Italy) from antiquity to today, and I love trips (this summer I covered 4000 km to see a lot of places in Spain and climb there, all in only 3 weeks - not a holiday but a marathon; 24 km of trekking gave me a hernia injury). I love theatre and life in general. Thanks to all - thinking about all the answers gave me the opportunity to make my path clearer. To summarise the most voted-up answers: first of all, people voted up the need to make work a support for your life and not the only reason to be alive (this is not my case), because if your only reason for living is work you quickly end up in a really depressive situation. As Peopleware says, Vienna isn't waiting for you :) After this reminder, people suggest to:
- increase the technical complexity of what I'm currently working on, to raise the challenge and get less bored by it;
- change field of expertise to a non-technical one, by trying to become a manager or making some career advance in non-technical fields related to my work;
- change field of expertise to another kind of technical challenge - am I a systems programmer? then try developing applications for humans, so I can feel better seeing people use my software;
- make some advance in my computer science degree along the academic path.
For my purposes the right answer is to advance in computer science; I feel that programming is not the only path computer science offers, and I think I would feel better taking a path in computer science different from a software engineering career. | That was bound to happen. If your primary interest lies with code, it will drive you crazy, frustrated and depressed once in a while, some day permanently. Get interested in developing products and enjoy seeing people use them. That's the ultimate goal of writing the code, right? Code is merely a tool to get something bigger done. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38225",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7771/"
]
} |
38,265 | I keep seeing job postings as "Java Software Developer III" or "Software Developer II". Is there any official documentation that classifies these distinctions? | They are not supposed to be employer specific. Actually they come from the United States Bureau of Labor Statistics, which maintains a database of occupational descriptions. This database has a list of standardized job titles with fairly precise definitions for each one. In many professions, including computer programming, they have several bands based on level of expertise, years of experience, and/or management responsibility... these bands are designated with roman numerals, thus, COMPUTER PROGRAMMER I, COMPUTER PROGRAMMER II, etc. etc. The BLS descriptions are reasonably rigorous and precise so that they can compare apples to apples. A lot of people use these definitions when they want standard job descriptions and titles, including job listing sites, salary comparison engines like salary.com, and many human resource departments, especially at larger companies. The official definitions of the five levels of programmer are here . It's too long to quote here but it is by no means random or employer-specific. On the other hand, in the US, you'll generally find that the best places for programmers to work do not generally rely on US government job descriptions but instead create their own, more meaningful system. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38265",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13723/"
]
} |
38,280 | Agile software development is becoming a pretty fun buzzword these days. As a developer, I understand the pragmatic value of iterative development, but (most often) it isn't a developer's choice to embrace an Agile approach to software development. It's a top-down management choice! Whether it is Crystal, agile methods, DSDM, RUP, XP, Scrum, FDD, TDD, you name it - it's not a developer choice. For all the managers out there, what are the biggest reasons for choosing to do Agile development when (in my experience) most managers haven't even touched a piece of code in their lives? | Shifting requirements, faster delivery.
Agile is appealing because it gives the possibility of adapting to changing needs more quickly (or at all), and delivering those changes to the customer more quickly. This is why many companies fail when using Agile/Scrum: managers don't understand that with great power (of setting quicker release dates and changing requirements often) comes the responsibility of relying on developers for estimates. For agile to work, the manager has to be willing to then cut scope. They want the power of both. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38280",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8874/"
]
} |
38,321 | I'm a developer at a small company. Sometimes I'm offered extra cash for freelance work from previous employers or on odd jobs that I could do after hours without encroaching on my full-time job in any way. Unfortunately my manager has explicitly forbidden me from working on any side projects. The reason, he says, is that if I have any free time at all in my life, even on weekends, it should be spent working for his company. My argument is that my weekends are my time, so I should be able to do what I want. Secondly, I'd broaden my skills with a variety of different problems I wouldn't otherwise see, rather than just staring at the same project all year long. It would actually make me a more experienced programmer and help my full-time job. Everyone else seems to be doing freelance work on the side and making extra cash, but I don't want to rat them out. What other motivation could I use to help my boss see that it's not such a bad thing? | The reason, he says, is that if I have any free time at all in my life, even on weekends, it should be spent working for his company. Quit the company NOW! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38321",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13739/"
]
} |
38,324 | Suppose you were asked in an interview "How would you implement Google Search?"
How would you answer such a question? There might be resources out there that explain how some pieces in Google are implemented (BigTable, MapReduce, PageRank, ...), but that doesn't exactly fit in an interview. What overall architecture would you use, and how would you explain this in a 15-30 minute time span? I would start with explaining how to build a search engine that handles ~ 100k documents, then expand this via sharding to around 50M docs, then perhaps another architectural/technical leap. This is the 20,000-foot view. What I'd like is the details - how you would actually answer that in an interview. Which data structures would you use? What services/machines is your architecture composed of? What would a typical query latency be? What about failover / split brain issues? Etc... | Consider the meta-point: what is the interviewer looking for? A mammoth question like that isn't looking for you to waste your time in the nitty-gritty of implementing a PageRank-type algorithm or how to do distributed indexing. Instead, focus on the complete picture of what it would take. It sounds like you already know all of the big pieces (BigTable, PageRank, Map/Reduce). So the question is then, how do you actually wire them together? Here's my stab.
Phase 1: Indexing Infrastructure (spend 5 minutes explaining)
The first phase of implementing Google (or any search engine) is to build an indexer. This is the piece of software that crawls the corpus of data and produces the results in a data structure that is more efficient for doing reads. To implement this, consider two parts: a crawler and an indexer. The web crawler's job is to spider web page links and dump them into a set. The most important step here is to avoid getting caught in an infinite loop or on infinitely generated content. Place each of these links in one massive text file (for now). Second, the indexer will run as part of a Map/Reduce job. (Map a function to every item in the input, and then Reduce the results into a single 'thing'.) The indexer will take a single web link, retrieve the website, and convert it into an index file. (Discussed next.) The reduction step will simply be aggregating all of these index files into a single unit. (Rather than millions of loose files.) Since the indexing steps can be done in parallel, you can farm this Map/Reduce job across an arbitrarily-large data center.
Phase 2: Specifics of Indexing Algorithms (spend 10 minutes explaining)
Once you have stated how you will process web pages, the next part is explaining how you can compute meaningful results. The short answer here is 'a lot more Map/Reduces', but consider the sorts of things you can do:
- For each web site, count the number of incoming links. (More heavily linked-to pages should be 'better'.)
- For each web site, look at how the link was presented. (Links in an < h1 > or < b > should be more important than those buried in an < h3 >.)
- For each web site, look at the number of outbound links. (Nobody likes spammers.)
- For each web site, look at the types of words used. For example, 'hash' and 'table' probably mean the web site is related to Computer Science. 'hash' and 'brownies' on the other hand would imply the site was about something far different.
Unfortunately I don't know enough about the sorts of ways to analyze and process the data to be super helpful. But the general idea is scalable ways to analyze your data.
Phase 3: Serving Results (spend 10 minutes explaining)
The final phase is actually serving the results.
Hopefully you've shared some interesting insights into how to analyze web page data, but the question is how do you actually query it? Anecdotally, 10% of Google search queries each day have never been seen before. This means you cannot cache previous results. You cannot have a single 'lookup' from your web indexes, so which would you try? How would you look across different indexes? (Perhaps combining results -- perhaps keyword 'stackoverflow' came up highly in multiple indexes.) Also, how would you look it up anyways? What sorts of approaches can you use for reading data from massive amounts of information quickly? (Feel free to namedrop your favorite NoSQL database here and/or look into what Google's BigTable is all about.) Even if you have an awesome index that is highly accurate, you need a way to find data in it quickly. (E.g., find the rank number for 'stackoverflow.com' inside of a 200GB file.)
Random Issues (time remaining)
Once you have covered the 'bones' of your search engine, feel free to rat hole on any individual topic you are especially knowledgeable about:
- Performance of the website frontend
- Managing the data center for your Map/Reduce jobs
- A/B testing search engine improvements
- Integrating previous search volume / trends into indexing. (E.g., expecting frontend server loads to spike 9-5 and die off in the early AM.)
There's obviously more than 15 minutes of material to discuss here, but hopefully it is enough to get you started. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38324",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3880/"
]
} |
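To make Phases 1 and 2 of the answer above a little more concrete, here is a deliberately tiny, single-machine Java sketch of the map/reduce shape of an indexer: the "map" step turns each document into (term, docId) pairs and the "reduce" step groups them into an inverted index. This is my own illustration of the idea, not Google's implementation, and it ignores crawling, ranking and distribution entirely.

import java.util.*;

public class TinyIndexer {
    // "Map" step: emit (term, docId) pairs for one document.
    static List<Map.Entry<String, String>> map(String docId, String text) {
        List<Map.Entry<String, String>> pairs = new ArrayList<>();
        for (String token : text.toLowerCase().split("\\W+")) {
            if (!token.isEmpty()) {
                pairs.add(Map.entry(token, docId));
            }
        }
        return pairs;
    }

    // "Reduce" step: group the pairs by term into an inverted index.
    static Map<String, Set<String>> reduce(List<Map.Entry<String, String>> pairs) {
        Map<String, Set<String>> index = new TreeMap<>();
        for (Map.Entry<String, String> p : pairs) {
            index.computeIfAbsent(p.getKey(), k -> new TreeSet<>()).add(p.getValue());
        }
        return index;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, String>> all = new ArrayList<>();
        all.addAll(map("doc1", "Hash tables are a data structure"));
        all.addAll(map("doc2", "Hash brownies are something else entirely"));
        System.out.println(reduce(all)); // e.g. hash -> [doc1, doc2]
    }
}

In a real system the map calls would run in parallel across many machines and the reduce step would merge partial indexes, but the data flow has the same shape.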
38,393 | If I work as a Developer in one language (e.g. Java) and work my way up to Senior Developer, would that qualify me to be a Senior Developer for a position using another language (e.g. Ruby)? | The best way to answer this is to look at what the difference between a Developer and a Senior Developer actually is. Assuming that it's not just a time-served thing, generally I'd expect both Developers and Senior Developers to be able to:
- Write code competently in the languages required by the role
- Diagnose and fix bugs
- Write unit tests
- Follow standards and reasonable best practice (version control, documentation)
- Have a broad basic technical competence
- Act in a professional manner
In addition I'd expect a Senior Developer to:
- Mentor other members of staff in best practice
- Be an acknowledged reference point for at least some of the languages being used by the team
- Actively research and champion new areas of best practice
- Take technical ownership of more complex issues / areas of code and provide solid solutions
So, the question then becomes: do you fulfil the extended criteria for your second (or third or fourth) language? I'd suggest that so long as you're technically competent enough in the language you're moving to, then yes, as most of the Senior Developer stuff tends to be transferable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38393",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9185/"
]
} |
38,441 | I'm considering the use of GWT on a major in-house web app development project; namely, its major advantage in my eyes is the cross-compilation to JavaScript, which would (at least theoretically) help my team reduce the size of the tech stack by one. However, having been burnt before (like most devs), I would like to hear from programmers who did actually use it about any problems with GWT which would hamper, or limit, its use within a certain problem domain. What are the arguments against using GWT, and why? | I am in both a good and a bad position to answer this question - good, in that I've actually used it before, and bad, in that I was quite experienced with HTML/CSS/JavaScript prior to working with GWT. This left me maddened by using GWT in a way that other Java developers who don't really know DHTML may not have been. GWT does what it says - it abstracts JavaScript and to some degree HTML into Java. To many developers, this sounds brilliant. However, we know, as Jeff Atwood puts it, all abstractions are failed abstractions (worth a read if considering GWT). With GWT, this specifically introduces the following problems:
Using HTML in GWT sucks. As I said, it even abstracts away HTML to some degree. That sounds good to a Java developer. But it's not. HTML is a document markup format. If you wanted to create Java objects to define a document, you would not use document markup elements. It is maddeningly verbose. It is also not controlled enough. In HTML there is essentially one way to write <p>Hello how are <b>you</b>?</p>. In GWT, you have 3 child nodes (text, B, text) attached to a P node. You can either create the P first, or create the child nodes first. One of the child nodes might be the return result of a function. After a few months of development with many developers, trying to decipher what your HTML document looks like by tracing your GWT code is a headache-inducing process. In the end, the team decided that maybe using HTMLPanel for all HTML was the right way to go. Now, you've lost many of GWT's advantages of having elements readily available to Java code to bind easily for data.
Using CSS in GWT sucks. Because of the attachment to the HTML abstraction, the way you have to use CSS is also different. It might have improved since I last used GWT (about 9 months ago), but at the time, CSS support was a mess. Because of the way GWT makes you create HTML, you often have levels of nodes that you didn't know were injected (any CSS dev knows how this can dramatically affect rendering). There were too many ways to embed or link CSS, resulting in a confusing mess of namespaces. On top of that you had the sprite support, which again sounds nice, but actually mutated your CSS, and we had problems with it writing properties which we then had to explicitly overwrite later, or which in some cases thwarted our attempts to match our hand-coded CSS, so that we had to just redesign it in ways that GWT didn't screw up.
Union of problems, intersection of benefits
Any language is going to have its own set of problems and benefits. Whether you use it is a weighted formula based on those. When you have an abstraction, what you get is a union of all the problems, and an intersection of the benefits. JavaScript has its problems, and is commonly derided among server-side engineers, but it also has quite a few features that are helpful for rapid web development. Think closures, syntax shorthand, ad-hoc objects, all of the stuff done by jQuery (like DOM querying by CSS selector). Now forget about using it in GWT!
Separation of concerns
We all know that as the size of a project grows, having good separation of concerns is critical. One of the most important is the separation between display and processing. GWT made this really hard. Probably not impossible, but the team I was on never came up with a good solution, and even when we thought we had, we always had one leaking into the other.
Desktop != Web
As @Berin Loritsch posted in the comments, the model or mindset GWT is built for is living applications, where a program has a living display tightly coupled with a processing engine. This sounds good because that's what so many feel the web is lacking. But there are two problems: A) The web is built on HTTP and this is inherently different. As I mentioned above, the technologies built on HTTP - HTML, CSS, even resource-loading and caching (images, etc.) - have been built for that platform. B) Java developers who have been working on the web do not easily switch to this desktop-application mindset. Architecture in this world is an entirely different discipline. Flex developers would probably be more suited to GWT than Java web developers.
In conclusion...
GWT is capable of producing quick-and-dirty AJAX applications quite easily using just Java. If quick-and-dirty doesn't sound like what you want, don't use it. The company I was working for cared a lot about the end product and its sense of polish, both visual and interactive, to the user. For us front-end developers, this meant that we needed to control HTML, CSS, and JavaScript in ways that made using GWT like trying to play the piano with boxing gloves on. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38441",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5185/"
]
} |
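To illustrate the verbosity complaint in the answer above ("3 child nodes attached to a P node"), here is roughly what building <p>Hello how are <b>you</b>?</p> looks like with GWT's DOM classes, next to the HTMLPanel escape hatch the team eventually settled on. This is written from memory as a sketch; treat the exact GWT class and method names as approximate rather than authoritative.

import com.google.gwt.dom.client.Document;
import com.google.gwt.dom.client.Element;
import com.google.gwt.dom.client.ParagraphElement;
import com.google.gwt.user.client.ui.HTMLPanel;

public class HelloMarkup {
    // The verbose route: build the paragraph node by node.
    static ParagraphElement buildWithDom() {
        Document doc = Document.get();
        ParagraphElement p = doc.createPElement();
        p.appendChild(doc.createTextNode("Hello how are "));
        Element b = doc.createElement("b");
        b.setInnerText("you");
        p.appendChild(b);
        p.appendChild(doc.createTextNode("?"));
        return p;
    }

    // The escape hatch mentioned in the answer: hand the markup to HTMLPanel.
    static HTMLPanel buildWithHtmlPanel() {
        return new HTMLPanel("Hello how are <b>you</b>?");
    }
}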
38,597 | I think I have a good enough reputation on SO now. Well, it may not be that much compared to so many other users out there, but I am happy with mine. So, I was thinking of adding my profile link to my résumé - just the profile link, and not "I have this much reputation on SO". Those who haven't seen it can look at this question: Would you put your Stack Overflow profile link on your CV / Resume? What would this look like?
Forums/Blogs/Miscellaneous others: No blogging as yet but an active participant in Stack Overflow. My profile link - http://stackoverflow.com/users/userId/username
I am thinking of putting this section after the Project Details and Technical Expertise sections. Any tips/advice? Update MKO has made a very good point - do you really want a potential employer to be able to evaluate in detail everything you've ever written on SO? I thought of commenting but it would be too long - In my questions/answers I put a lot of statements like "AFAIK ...", "following are my assumptions so far ...", "am I correct to conclude that... ?", "I doubt if it is possible to ..." etc. when I am not sure about something, and I rarely get involved in fights with other users. However, I do argue on topics sometimes if I feel it is necessary and if I have a valid point. I do accept my mistakes and apologize for them. As we all know, nobody is perfect. I must have written many things which may be judged as wrong by a potential employer. But what if the same employer notices that I have improved the quality of my content by comparing old content with new? Isn't that great? I also try to go back to older questions/answers and put corrective comments etc. when I feel I was wrong or if I can improve my post. Of course there are many employers who want you (potential employees) to be correct each and every time. They immediately remove you from consideration when you say a single incorrect thing. I personally met such an interviewer a few months back. He didn't even care to listen to any good thing I had done after he found a single wrong thing. Now the question is: do you really care to work with such people? Or do you prefer people who give value to the fact that you are striving to improve every day? I personally prefer the latter. | Your participation in Stack Overflow (or indeed any Stack Exchange site) should come under your "interests". Yes, it is related to your work, but it's not your work (unless you happen to be employed by Stack Exchange). If you do decide to put your SO profile on your CV then it would be a good idea to make sure that:
- Your profile picture is set to a photo of you.
- You use your real name.
- Your profile bio is up-to-date and to the point.
- Your questions and answers are spell-checked and grammatically correct.
You are using SO as a tool to sell yourself, so you must be professional on the site. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38597",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4443/"
]
} |
38,647 | At my current job I have two projects to work on. The first is a very huge system and the second one is smaller, but it is also big (the first project has been in development for 12 years, the second for 4). At first I was working only on the first project and was trying to get used to it. Then I was moved to the second project and tried to get used to that, so my knowledge of the first project became hazy. Now I have to work on both projects at the same time. It's very hard for me because even though they both use Java, they use different frameworks, and the amount of code and business logic to understand is very big, so I really can't hold both projects in my head. Is this normal and should I just get used to it, even though my expertise becomes very shallow - something that wouldn't happen if I worked on only a single project? Or should I raise a concern, or maybe change employer? | I completely disagree when people say "yes, multi-tasking is normal". It's not normal! Not at all, it's very unnatural for a developer to multi-task on several projects (I'll explain more later on). On the other hand multi-tasking is very common among developers. This is definitely something you should get used to. So the real answer to your question is: how to multi-task? First of all, you shouldn't simply accept your fate because "you are such an excellent employee" and that means you need to take more tasks than you can handle. Not at all, you don't. Sometimes people are given multiple tasks because there's nobody else. Sometimes managers can't handle their work so they delegate, enforcing multi-tasking on their team because they can't handle their project schedule properly. So you should definitely try to determine if you're being asked to multi-task because it's part of your job or because other people are being incompetent. Either way, you can judge for yourself if that's acceptable or not. If you're not comfortable [with your job], there are other places you can go find work. [You, the developer, are the commodity. Employers know this and pray that you never realize it.] Now about multi-tasking, I disagree 100% when people say "yes, just switch back and forth and make sure you're doing the same amount on each project". Sorry, but that's very bad advice. First you must realize how your brain works when you're developing software (I know there are other tasks involved but let's focus on that one). You first need to get "wired", meaning you need to concentrate a lot and get your mind in a position where you have everything mapped in your head. All variable and method names, the workflow of your code, the object model, the threads going side by side, everything. It usually takes me 15, maybe 20, minutes to get "in the zone". When you get to that state you're really flying off and writing code like you are riding a bike. The moment you get interrupted you can lose it all. If the interruption is long enough (5, 10, maybe 30 minutes) you will lose that state of mind and will have to start all over. So multi-tasking is terrible because it forces you to leave "the zone" and move on to something else. If you are constantly switching that means you're not being productive, because every time you change to a new task/project you lose those 15-20 minutes getting in the zone again (not to mention it slowly melts your brain). It's like multi-threading: at some point the cost of switching the thread context every couple of cycles is too high, so the CPU ends up spending more time switching contexts than executing the real tasks.
I highly recommend reading an article from Joel Spolsky on this matter: http://www.joelonsoftware.com/articles/fog0000000022.html So my advice is: try learning how to (not) multi-task because it is indeed common. But also make sure you're comfortable doing it. Some people can take more time to concentrate and will suffer more than others when multi-tasking; and that's ok too. It's not because it's common that it should be considered normal. Joel put it well when he said: In fact, the real lesson from all this is that you should never let people work on more than one thing at once. Make sure they know what it is. Good managers see their responsibility as removing obstacles so that people can focus on one thing and really get it done. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38647",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
38,663 | I have heard Google uses Python, Java and C++. But what I don't know is how each of those programming languages is used. I mean, what are Python, Java and C++ used for at Google? Why would they use up to 3 programming languages when 1 language is enough? Does anybody know? | The correct answer "because different languages have different strengths" has already been stated. But let me provide some more detail on why:
C++
C++ has the main advantage of being the fastest of the lot. Not necessarily because Java and Python are slow, but because in C++ you have more control over how things get executed. For example, if you are writing a web services frontend that requires less than 30ms latency you can tune C++ code to achieve that performance. In a managed language like Java it is a bit harder to get the GC to cooperate perfectly. C++ is used for a lot of 'Google magic' such as BigTable, MapReduce, and search goo.
Java
For most standard applications, Java is a perfectly fine language. It features great tools, lots of existing libraries, and not a lot of time spent debugging. Java is used for a lot of bigger websites that would be much more difficult to maintain in a lower-level language like C or C++. For example, my understanding is that GMail is written in Java. Also, note that you can use Google's Web Toolkit to compile Java code into JavaScript. So that awesome webpage or widget you see might have begun life as a Java class.
Python
Python is a fantastic general purpose language, but doesn't offer as much fine-grained control as even Java. (For example, there are all sorts of crazy JVM arguments for things -- does Python offer similar configuration?) However, Python is perfectly suited for simple websites and applications that would otherwise be horrible shell scripts. For example, if you wanted to write a simple testcase to gather data from some sources, process them, and upload them to App Engine, Python would be a good choice. (If you needed to distribute that processing across 1,000 machines, however, perhaps you could use a different language...)
JavaScript
Obviously Google uses a lot of JavaScript. However, the type of JavaScript written at Google is different than what you see in the wild. Google has developed an optimizing JavaScript compiler that allows you to construct annotations in comments in exchange for better optimizations and static checking. See Google Closure.
Language Doesn't Matter (interop)
Another reason why Google doesn't use just one language is that it doesn't need to. There are facilities in almost every modern programming language to call into external libraries, libraries which may be written in a different language. (See SWIG.) Also, since App Engine runs the JVM you can run any language that compiles to Java bytecode. (At Google we primarily stick to the languages mentioned here, but this isn't a hard requirement.) If you want to use Clojure, Groovy, or Scala on App Engine, then as long as you include the right JAR files everything should just work.
Open-Source
Also, Google uses and contributes to a lot of open source where possible. These projects are usually written in one of the above languages, which requires that language to be 'used' at Google.
The bottom line is two things:
- Every programming language has its own strengths. Not to situationally take advantage of these strengths would be a shame.
- The availability of interop toolkits and compatible runtimes means that it is less painful to use multiple languages within the same runtime environment. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38663",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9298/"
]
} |
38,691 | I have heard a lot about Web Services and Web APIs; is there any difference between them or are they the same? | Web Services - that's a standard defined by the W3C, so they can be accessed semi-automatically or automatically (WSDL / UDDI). The whole thing is based on XML, so anyone can call it. And every aspect of the service is very well defined. There's a parameter description standard, a parameter passing standard, a response standard, a discovery standard, etc. etc. You could probably write a 2000-page book describing the standard. There are even some "additional" standards for doing "standard" things, like authentication. Despite all that, automatic invocation and discovery barely work, because clients are rather poor and you have no real guarantee that any service can be called from any client. Web API is typically done as HTTP/REST; nothing is defined, the output can be e.g. JSON/XML, the input can be XML/JSON or plain data. There are no standards for anything => no automatic calling and discovery. You can provide some description in a text file or PDF, you can return the data in Windows-1250 instead of Unicode, etc. For describing the standard you'd have a 2-page brochure with some simple info, and you'd define everything else. The web is switching towards Web API / REST. Web Services are really no better than Web APIs. They are very complicated to develop and they eat much more resources (bandwidth and RAM)... and because of all the data conversions (REQUEST->XML->DATA->RESPONSE->XML->VALIDATION->CONVERSION->DATA) they are very slow. E.g. in a Web API you can pack the data, send it compressed and un-compress + un-pack it on the client. In SOAP you could only compress the HTTP request. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38691",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10153/"
]
} |
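As a concrete illustration of the "nothing is defined, you just call it" side of the comparison above, here is a minimal Java client for a hypothetical JSON-over-HTTP Web API. The URL and response shape are invented for the example; the point is that there is no WSDL, no envelope and no generated stubs - just a verb, an Accept header and whatever bytes come back.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestClientSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; a real API documents its own URL and JSON shape.
        URL url = new URL("https://api.example.com/orders/42");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");

        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        // No standard response contract: you parse whatever the API chose to return.
        System.out.println(conn.getResponseCode() + " " + body);
    }
}

A SOAP-based web service would instead involve an XML envelope, a WSDL contract and usually generated client stubs, which is exactly the extra machinery the answer above is weighing against this kind of call.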
38,746 | When building a non-trivial application, is it best to focus on getting things working quickly, and taking shortcuts in the code like mixing model logic with your views, breaking encapsulation - typical code smells? Or, are you better off taking the time upfront to build more architecture, build it right, but running the risk that all this extra code might not be used since your design is quite fluid and you might have to throw it away if feedback causes you to go in a different direction? For context, I'm building a desktop application. I'm the only developer, and I'm doing this part-time since I have a day job. Now, for work, I try to do things the right way, schedule permitting. But for this project, which I expect will morph as I get feedback from people, I'm not sure that's the right approach. I spent several hours this week putting a textbook Model View Controller design in place to communicate changes in the model to the view. This is great in general, but I'm not sure if I need multiple views to display the data, and I know that I could have had things displayed more quickly without the additional architecture. With maybe 10-15 hours a week to spend on the project, I feel it will take ages to get something built that I can demo if I follow good software practices. I know that my users won't care that I used MVC internally; they just want something that solves their problem. But I've also been in the situation where you've incurred so much technical debt from shortcuts that the code is just incredibly difficult to maintain and add new features to. I'd love to hear how other people approach this kind of problem. | Build it well. Building it "fast" is a logical fallacy if you look at the big picture. It will prevent you from ever having it built well, and eventually you will become bogged down by bugs and fundamental architecture flaws that prevent refactoring or even make adding new features next to impossible. Building it well is actually the opposite. At first it may be slower, but eventually you will realize the efficiency gains from having taken the time to make the right choices up front. In addition, you will be able to adapt to future requirements more easily (refactoring if needed) and you will have a better end-product due, at the very least, to fewer bugs. In other words (unless this is a one-and-done contract), build it fast = build it slow, build it well = build it fast. Also, there is something important to realize about "building it well" and designing an architecture. You asked... ...but running the risk that all this extra code might not be used since your design is quite fluid and you might have to throw it away if feedback causes you to go in a different direction? That is not really a true risk from "spending architecture time". Architecture design should be organic. Don't spend time designing an architecture for any part until it's justified. Architecture should only evolve out of observed and confirmed patterns in your project. John Gall's law from Systemantics: A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38746",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5743/"
]
} |
38,749 | Reading this topic about the most over-hyped technologies, I noticed that SharePoint is almost universally reviled. My experience with SharePoint (especially the most recent versions) is that it accomplishes its core competencies smartly. Namely:
- Centralized document repository - get all those office documents out of email (with versioning)
- User-editable content creation for internal information dissemination - look, an HR site with current phone numbers and the vacation policy
- Project collaboration - a couple of clicks creates a site with a project's documents, task list, simple schedule, threaded discussion, and possibly a list of all project-related emails.
- Very basic business automation - when you fill out the vacation form, an email is sent to HR.
My experience is that SharePoint only gets really ugly when an organization tries to push it in a direction it isn't designed for. SharePoint is not a CRM, ERP, bug database or external website. SharePoint is flexible enough to serve in a pinch, but it is no replacement for a dedicated tool. (Microsoft is just as guilty of pushing SharePoint into domains where it doesn't belong.) If you use SharePoint for what it's designed for, it really does work. Thoughts? | I think it can be summed up in a comment I once heard about VB: "It makes the simple things very simple, and the hard things impossible." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38749",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5832/"
]
} |
38,874 | I have heard a few people say that one of the best ways to improve your coding ability is to read others' code and understand it. My question, as a relatively new programmer, is: where do I go to find good source code examples that are not too far over my head? | You can browse open source projects on repository sites like GitHub, CodePlex, Google Code, or Bitbucket. You'll find projects of different complexity levels, so you should be able to find something that both interests you and doesn't go over your head too much at first. Another option is Scott Hanselman's Weekly Source Code blog posts. I recommend starting with an established, active project to lower the odds of starting to read code that hasn't been through use and scrutiny yet. Ideally, find something that interests you and that you can use. Using the app will help you understand the source code.
Another benefit of choosing an open source project is that you may be able to contribute some fixes or features, which will help make reading through the code more interesting. Staring at a bunch of someone else's code can be intimidating, so start with the main function (or equivalent) and work your way through from there. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/38874",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9413/"
]
} |
39,146 | My company has retained an outside firm to develop an iPhone app for us. As the only internal developer with any knowledge of Objective-C, I've been assigned to develop the relevant APIs on our site, but also to do anything I can to make sure the whole thing comes together on time. Any suggestions for things I should do or things I should watch out for, particularly from those who've been down this road before? | I outsourced more than 300 IT projects of all sizes over the past 10 years, and I've been the outsourced developer myself. Here are the most common problems I encountered multiple times, and suggestions for avoiding them (I learned the hard way). Those mistakes cost me hundreds of thousands of dollars, so I hope you will save as much thanks to these suggestions, so I'm even and can rest in peace :)
- Require access to the repository. If not possible, request to be sent the full source every week for review. You don't want to discover at the end of the project that the code did not meet your quality standards, such as missing comments, documentation, poor coding practices, etc. Reviewing the work frequently will allow you to give feedback early in the development phase.
- Ensure that they signed appropriate NDA and IP assignment documents. That's one of the common mistakes. Things go bad when the company you outsourced the project to claims full ownership of their work. Or worse, they decide to use what you paid for for their own business. Ensure that a proper NDA and intellectual property rights assignment is signed.
- They often use custom frameworks or libraries that come without source code. Verify that this is acceptable to you. Sometimes the developers or the company you hired decide to use a custom framework or library they wrote. This may be a problem if you become so dependent on them that changing your developer is almost impossible. Sometimes the development shop will give you full rights on the code they wrote specifically for you, but not their libraries. That's just as problematic. Ensuring that you will have the possibility to continue your project without them is really important.
- Ensure that they use standards in the technology of choice. Even if they don't use specific custom libraries, you may face another problem: a specific way of coding that doesn't meet industry standards. In the worst case, you have to rewrite everything to make any maintenance possible without them.
- If the deadline is important, request penalties in case they miss it. This one is sometimes not specified explicitly. What happens if they miss the deadline? Let's say they face a serious internal problem preventing them from delivering on time. Will you have the budget to develop at another dev shop on short notice?
As a general rule, I would add that the specification is very important in this kind of work, so you have a lot of responsibility there. With time, I learned that it's preferable to propose a small project to a company first, to test it, and to reserve bigger projects for trusted providers you have worked with before. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/39146",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5068/"
]
} |
39,284 | I'm reading through the CNN article about the highest paying jobs in America. A software architect is listed as #1 . A software engineer listed as #9 . And a software developer (programmer) is listed at #35 . I think it's valid to replace computer scientist with programmer, right? Prior to this I always saw "Software Engineers" as being the title for experienced programmers and team leads. But where then does a "Software Architect" fit in and what exactly do they do? I read the CNN descriptions but they don't really satisfy me so I'm assuming I can get more thorough and experienced descriptions from the awesome user base here. Thanks in advance for any and all responses received. | Warning : Anecdotal evidence follows.... In my experience, at least here in the Australian market - the terms Programmer, Software Developer and Software Engineer are more or less interchangeable (I've held all three for doing the exact same actual work). The "Software Engineering Director" described in your CNN link is not the same as "Software Engineer". It is really more like a high level technical project manager role. This sort of person wouldn't actually program much, if at all. This role has little to do with your typical "Software Engineer" - which is often just a glorified title for an intermediate to senior programmer. The "Software Architect" described in the link essentially sounds like a high level team leader, the type who designs the overall structure of a software system and then probably farms off some of the grunt work to more junior programmers. This is the sort of person who heads a project and probably reports to the "Software Engineering Director" - in a large company where lots of teams write various products, especially where integration between them is required. TL;DR version : Software Engineer is often just a glorified title for "programmer", maybe indicating being somewhat senior. Software Architect probably roughly fits in with a high level team lead who has the responsibility of designing the overall architecture of the project. Software Engineering Director is a high level project manager, the type who possibly reports to the highest levels and probably doesn't touch the code at all. $0.02 | {
"source": [
"https://softwareengineering.stackexchange.com/questions/39284",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13732/"
]
} |
39,302 | I'm conducting technical interviews to fill a few .NET positions. Many of the people I interview really do know .NET pretty well, but I find at least 90% embellish their skillset anywhere between "a little" to "quite drastically". Sometimes they fabricate skills relevant to the position they're applying for, sometimes they don't. Most of the people I interview, even the most egregious liars, are not scam artists. They just want to stand out among the crowd, so they drop a few buzzwords on their resume like "JBoss", "LINQ", "web services", "Django" or whatever just to pad their skillset and stay competitive. (You might wonder if a person that lies about those skills is just bluffing their way through a technical interview. My interviews involve a lot of hands-on coding and problem-solving – people who attempt to bluff will bomb the hands-on coding portion in the first 3 minutes.) These are two open-ended questions, but it would really help me out when I make my recommendations to the hiring managers: Regarding interviewing etiquette, should I attempt to determine whether a person really possesses all of the skills they claim to have? Can I do this without making the candidate feel uncomfortable? Regarding the final decision, should I recommend candidates who are genuinely qualified for the positions they're applying for, even if they've fabricated portions of their skillset? | Should I attempt to determine whether a person really possesses all of
the skills they claim to have? Why? To determine if they're a big fat liar? Or to humiliate them? Or to prove your total technical superiority? Or to make a hiring decision? Be sure to distinguish between doing the right thing in hiring and being a jerk about nuances on someone's resume. Some people say "experienced" but you wish they said "exposed". Does that make them an evil liar? Or does that mean that their definition of experience isn't as rich, varied and deep as yours? If you suspect they're lying -- and it would be a bad hiring decision because of this -- remember your real goal. You're just making a hiring decision. If they're big fat liars, don't hire them. If you think they've "overstated" their experience, perhaps your use of the words is just as wrong as theirs. Does it matter? Do they have to be converted to your way of writing a resume? Or can you simply determine what they mean by the words they use? If you're not sure, probe their experience. You don't have to make someone uncomfortable to arrive at a meaningful, useful assessment of their skills. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/39302",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1467/"
]
} |
39,368 | TDD and unit testing seem to be all the rage at the moment. But is unit testing really that useful compared to other forms of automated testing? Intuitively I would guess that automated integration testing is way more useful than unit testing. In my experience most bugs seem to be in the interaction between modules, and not so much in the actual (usually limited) logic of each unit. Also, regressions often happened because of changing interfaces between modules (and changed pre- and post-conditions.) Am I misunderstanding something, or why is unit testing getting so much focus compared to integration testing? Is it simply because it is assumed that integration testing is something you have, and unit testing is the next thing we need to learn to apply as developers? Or maybe unit testing simply yields the highest gain compared to the complexity of automating it? What is your experience with automated unit testing, automated integration testing, and automated acceptance testing, and in your experience what has yielded the highest ROI? And why? If you had to pick just one form of testing to be automated on your next project, which would it be? Thanks in advance. | One important factor which makes unit tests extremely useful is fast feedback. Consider what happens when you have your app fully covered with integration/system/functional tests (which is already an ideal situation, far from reality in most development shops). These are often run by a dedicated testing team. You commit a change to the SCM repo, sometime (possibly days) later the testing guys get a new internal release and start testing it, they find a bug and file a bug report, and (in the ideal case) someone assigns the bug report back to you. This all may take days or even weeks. By this time you have already been working on other tasks, so you don't have the minute details of the code written earlier in your mind. Moreover, you typically don't even have any direct proof of where the bug actually is, so it takes considerable time to find and fix the bug. Whereas in unit testing (TDD) you write a test, you write some code to satisfy the test, the test still fails, you look at the code and typically you have an "oops" experience in a few seconds (like "oops, I forgot to check for that condition!"), then fix the bug immediately. This all happens in a matter of minutes. This is not to say that integration/system tests are not useful; they just serve different purposes. With well-written unit tests you can catch a large proportion of the bugs in code before they get to the integration phase, where it is already considerably more expensive to find and fix them. You are right that the integration tests are needed to catch those types of bugs which are difficult or impossible to catch with unit tests. However, in my experience those are the rarer kind; most of the bugs I have seen are caused by some simple or even trivial omission somewhere inside a method. Not to mention that unit testing also tests your interfaces for usability/safety etc., thus giving you vitally important feedback to improve your design and APIs. Which IMHO can considerably reduce the chances of module/subsystem integration bugs: the easier and cleaner an API is, the less the chance of misunderstanding or omission. What is your experience with automated unit testing, automated integration testing, and automated acceptance testing, and in your experience what has yielded the highest ROI? And why?
ROI depends on a lot of factors, probably the foremost of which is whether your project is greenfield or legacy. With greenfield development my advice (and experience so far) is to do unit testing TDD style from the beginning. I am confident that this is the most cost-effective method in this case. In a legacy project, however, building up sufficient unit testing coverage is a huge undertaking which will be very slow to yield benefits. It is more efficient to try to cover the most important functionality with system/functional tests via the UI if possible. (Desktop GUI apps may be difficult to test automatically via the GUI, although automated test support tools are gradually improving...) This gives you a coarse but effective safety net rapidly. Then you can start gradually building up unit tests around the most critical parts of the app. If you had to pick just one form of testing to be automated on your next project, which would it be?
That is a theoretical question and I find it pointless. All kinds of tests have their use in the toolbox of a good SW engineer, and all of these have scenarios where they are irreplaceable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/39368",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3666/"
]
} |
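A tiny JUnit 4 sketch of the fast-feedback loop described in the answer above: the test pins down the expected behaviour, the first run fails within seconds, and the "oops, I forgot to check for that condition" fix happens while the code is still fresh in your head. The class and method names are invented for the illustration.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DiscountCalculatorTest {

    // Hypothetical unit under test, kept inline to make the example self-contained.
    static class DiscountCalculator {
        int discountedPrice(int price, int percentOff) {
            if (percentOff < 0 || percentOff > 100) {
                // The "oops, I forgot to check for that condition!" fix typically
                // gets added seconds after the test below first fails.
                throw new IllegalArgumentException("percentOff out of range");
            }
            return price - (price * percentOff) / 100;
        }
    }

    @Test
    public void appliesPercentageDiscount() {
        assertEquals(90, new DiscountCalculator().discountedPrice(100, 10));
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsOutOfRangeDiscount() {
        new DiscountCalculator().discountedPrice(100, -5);
    }
}

Running this suite takes seconds, which is the whole contrast the answer draws with a days-long system-test feedback cycle.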
39,449 | A recent project I worked on was proven to be severely underestimated by the architect. The estimate was out by at least 500%. Unfortunately I was brought onto the project after the estimate had been signed off with the customer. As senior dev, I quickly realised that the functional and technical spec. contained some huge gaps and uncertanties. As a result I felt compelled to call an emergency meeting with the business and technical directors to let them know the reality. As first and foremost a developer, I found this a very stressful and difficult situation. The "business" accused IT of being incompetent and being the messenger I received a few "bullets". The customer threatened to cancel the account, however to date the project is still unfinished and I am no longer directly involved with it. The architect was a nice guy socially, but based on this episode was either simply incompetent or there were large sales/business pressures influencing his estimate. So, as programmers, what is your experience of this sort of situation and how would you advise dealing with it? | Long reply, but hey, I’ve got a summary on the end, so just skip to summary if you can’t be bothered reading the entire thing! As a developer I had to deal with the situation literally every other project, but it's not until I moved into project management that I learned how to deal with it effectively. For me dealing effectively is about two things: managing expectations and understanding how estimation works. Start with a premise that it is unethical to provide an estimate, commit to an estimate or give any other indication of estimate accuracy without being able to carry some due diligence first. Other people rely on your professional ability to predict an amount of work required, giving a false indication will hurt them and their business. But you have to give something, in real life you dragged into an impromptu meeting or a late project and your superior will probably make it clear they expect you to come up with some figure straight away or comment on the estimate they provided. This is where expectations management comes into play. Explain that it would be wrong of you to give any figure or any indication without understanding the problem and working the numbers out for yourself. Say that their figures might be quite correct, you just don’t know before you went through the estimation exercise yourself. And even though you might have a good picture of what is needed there and when, say that you still need some time to work the numbers out. There is only one estimate they might expect you to give: when you going to be able to provide an estimate. By all means do provide that figure. As a developer never take responsibility for (or give indication that can be interpreted as acceptance of) other people estimates without being able to review them first. As a project manager it is a totally different matter, because then you actually have some control over the estimation process: the way an estimate is derived and reviewed and you have to rely on other people to get the actual work done and you need to make sure they are committed. Never even comment on estimates without being able to do the due diligence. This is ethical. A lawyer or a doctor will make it absolutely clear they cannot give any advice unless a client (or patient) plays by their rules and goes through an assessment procedure first. You similarly have a right to satisfy your questions before giving professional opinion. 
The second part is about how estimation works. I suggest researching various approaches to doing estimates and how the estimation process works, including in industries other than software development (manufacturing, financial markets, construction). This will give you an idea of what can reasonably be expected from you by your boss or client and, strangely, will help you make more accurate predictions about the amount of work. It will improve your ability to defend your estimates, and you will need to defend the figures every time they differ from the ones provided by an architect or a salesperson. Normally, the way it works is that your estimate is first scanned for odd-looking or relatively large items. Hence be prepared to defend anything with a "non-standard" name. Also split larger tasks so that all tasks have the same order of magnitude; i.e. if most tasks take 2 days and one single task is 10 days, be prepared to get drilled. Be clear about what is included in each task; it's best to split dev and unit testing instead of just having dev and having someone assume that it includes documentation as well. Obviously this way you'll need to produce a fairly fine-grained estimate. Next comes the drilling. Since it is quite difficult to review a long work breakdown, your client or boss is likely to adopt a different strategy: concentrate on a random bit they might know something about and drill down until they either manage to discredit the entire estimate or are satisfied with your answers. The credibility of the entire estimate might depend on a random sample! Hence, once again, you need time to prepare it carefully: include only relevant bits, exclude any extras or move them to a "nice to haves" section, and think through how you are going to defend the figures. Obviously you can either be consistent in your approach, i.e. estimating on the basis of features, number of screens, etc., or have a mix of approaches, but in any case be prepared to defend why you selected a certain way of estimating. Also be prepared to explain why your figures differ from anyone else's attempt at predicting the amount of work required. Learn the obvious signs of weak estimates: filled with general run-of-the-mill tasks copied from a template (good estimates are specific to the task at hand); coarse-grained estimates (i.e. tasks longer than a couple of days); estimates done at an early stage of a project or by someone who might not have actual knowledge of the requirements or work involved; estimates compiled by people other than the actual doers; vague estimates (not clear what is included and, equally importantly, excluded); substantial differences in the order of magnitude of tasks. Practise evaluating other people's estimates and drilling into the figures without actual knowledge of the implementation detail. This will help you back your claim for some extra time when you are pressed to confirm someone else's estimate and have no hard evidence. To summarise: Do not commit to an awful estimate (or any estimate, for that matter) before you have had an opportunity to do due diligence. Make that clear at the outset; don't let anyone assume it is any other way or interpret your silence as a sign of agreement. Know how various estimation methods work, their practical application and merits, including those outside software development. Be prepared to defend your estimate. Learn how to evaluate other people's estimates so you don't have to commit yourself to vastly inaccurate figures. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/39449",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20615/"
]
} |
39,468 | It seems like it is nearly impossible to get close because you could run into any number of issues and things not anticipated at first. How close can we reasonably be expected to estimate? Our PM wants to be able to have things like Gantt charts and such, mapping out weeks at a time... So we say we can get these bugs done, and this is how long each will take, and the goal will be Friday, but things get thrown off and pushed into the next week, like every time! How are we supposed to guess the right time? | Our host Joel recommends evidence-based scheduling, which includes methods to account for inaccurate estimation, interruptions and distractions, and all the other usual suspects. The biggest-bang items: The person doing a given piece of work has final say on its estimate. No managers, leads or committees are allowed to overrule estimates, only to re-assign work to someone else. The more you break tasks down, the more reliable the final estimate will be. The effect is twofold: first, over- and under-estimate errors will tend to cancel each other out more, and second, to perform the breakdown you end up thinking about the work in more detail, improving the overall accuracy. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/39468",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/785/"
]
} |
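For the evidence-based scheduling answer above, a minimal sketch of the Monte Carlo idea behind it may help. All names and numbers here are hypothetical illustrations, not part of Joel's actual tooling: EBS records each developer's past velocity (estimate divided by actual) and then repeatedly resamples those velocities to turn the remaining estimates into a distribution of completion times rather than a single date.

```python
import random

# Hypothetical history: velocity = original estimate / actual time taken.
# Real EBS gathers these per developer from completed tasks.
historical_velocities = [0.9, 0.6, 1.1, 0.5, 0.8, 0.7, 1.0, 0.4]

# Remaining task estimates for the current schedule, in hours (made up).
remaining_estimates = [4, 8, 2, 16, 6]

def simulate_totals(estimates, velocities, rounds=10_000):
    """Divide each estimate by a randomly drawn past velocity, many times."""
    totals = []
    for _ in range(rounds):
        totals.append(sum(est / random.choice(velocities) for est in estimates))
    return sorted(totals)

totals = simulate_totals(remaining_estimates, historical_velocities)
for pct in (0.5, 0.8, 0.95):
    idx = min(int(pct * len(totals)), len(totals) - 1)
    print(f"{int(pct * 100)}% confidence: about {totals[idx]:.0f} hours remaining")
```

Reporting the 50%, 80% and 95% points of that distribution is what lets a PM draw a Gantt-style chart with honest error bars instead of a single promised Friday.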
39,515 | As I'm building applications I find myself constantly asking if this is the best way to perform or implement a certain piece of functionality. Often, I'll post questions on Stack Overflow or another forum desiring feedback, only to receive comments about how not to "put the cart before the horse" regarding performance. Do most programmers really not think about performance until the application is finished, or until performance is absolutely unacceptable? I mean, I understand that development environments differ from production environments and that you shouldn't completely rely on the results from your dev laptop... but there are practices and techniques that yield better performance than others. Is it bad practice to consider performance throughout the development process? Should I push these considerations off until performance actually is tanking? Update: Just to be clear, I'm talking about the situation where you are considering, or just about to work on, some piece of functionality. You know there are several ways to implement it, but you're not quite sure how well each implementation will scale. Also there might be several techniques you're not even familiar with. On a small scale any of the approaches would probably be adequate, but on a larger scale some will keep up and some won't. Often when I ask for opinions or guidance the response is: worry about it later... | Deferral of performance considerations is sometimes based on a misapplication of the phrase: Premature optimization is the root of
all evil. If you read the complete quote, what Knuth was trying to say is that micro-optimizations applied during development without profiling are generally inadvisable, because they lead to less maintainable code without necessarily achieving substantial performance benefits. But that doesn't mean you should not consider performance until the application is almost finished. If you do that, you may find that performance is inadequate, your design architecture doesn't support better performance, and you have to start over. There are a number of things you can do during development to achieve good performance, without esoteric (and premature) optimizations: Use a sensible, well thought-out architecture. Use data structures properly. Use technologies (libraries, frameworks) that perform adequately. If you do these things, you will find that any performance optimization that needs to occur will be confined to a small part of your code. Profiling will identify that code, and allow you to focus your performance improvements where they will do the most good, without sacrificing maintainability. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/39515",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14143/"
]
} |
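As a minimal illustration of the "use data structures properly" point in the answer above (a sketch with made-up names, not a prescription): picking the right structure up front is an ordinary design decision, not a premature optimization, because it keeps behaviour reasonable as the input grows without any profiling or clever tricks.

```python
# Membership tests against a list cost O(n) per lookup; against a set, roughly O(1).
banned_words = {"spam", "scam", "clickbait"}  # set chosen deliberately

def contains_banned(text: str) -> bool:
    """Return True if any word in the text is on the banned list."""
    return any(word in banned_words for word in text.lower().split())

print(contains_banned("This is definitely not SPAM"))  # True
print(contains_banned("Perfectly ordinary message"))   # False
```

The code reads exactly as it would with a list, so nothing has been sacrificed for maintainability; the structural choice simply removes a whole class of future performance work.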
39,535 | I read the Python documentation a lot and sometimes I am baffled by this notation: os.path.join( path1[, path2[, ...]] ) I somehow gather that [, path[,...]] is a list but I would like to know if I am reading it correctly. Bear with me, this is coming from a Java developer who is trying out Python. X) | The square brackets are not Python syntax; they are a documentation convention (borrowed from command-line usage notation) meaning "optional". So os.path.join(path1[, path2[, ...]]) reads as: path1 is required, and you may pass any number of further path components after it. The nesting simply indicates that each additional argument only makes sense if the one before it is present. It is not a list; the function accepts a variable number of positional arguments, which in modern Python documentation is written as os.path.join(path, *paths). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/39535",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14260/"
]
} |
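A quick demonstration of the optional-argument reading described above; output is shown for a POSIX system (Windows would join with backslashes):

```python
import os.path

# Each bracketed component in the documented signature is simply optional;
# in practice you can pass as many path pieces as you like.
print(os.path.join("usr"))                  # usr
print(os.path.join("usr", "local"))         # usr/local
print(os.path.join("usr", "local", "bin"))  # usr/local/bin
```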
39,541 | Does anyone know if a Software Engineer can become a certified Professional Engineer (PE for short)? I know that my buddies who are Mechanical, Electrical, or Civil Engineers were able to become PEs by taking an exam. Does such an exam exist in Software Engineering? | In April 2013, the Professional Engineer exam for Software Engineering was offered for the first time. The IEEE Computer Society, IEEE-USA, and National Council of Examiners for Engineering and Surveying (NCEES) partnered to develop an exam specifically for software engineers. State boards issue the exams and manage the requirements for taking and passing the exam. However, the exam will be offered for the last time in April 2019. In the news release, the NCEES cited a low candidate population: the examination was offered 5 times and only 81 candidates sat for it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/39541",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2281/"
]
} |
39,720 | What do recruiters expect you to answer when they ask, "Where do you see yourself in 5 years?" Something like: I want to improve my technical skills ... blah-blah-blah ... architect? Is it just to check your ambitions and that you are not going to leave in a month? | When interviewing someone, I started by cutting it down to 3 years, then started asking the general question of "Where do you see yourself beyond your initial role here?" I don't even remember the last time I asked that. There are so many more useful questions to ask. These days, in this field, 5 years is an eternity, and this is a completely outdated question. But old habits die slowly, and some firms go by the same old tired lists of questions they've been asking for years. Many interviewers are lazy and superficial. They're busy, and want to get the interview over with quickly. But if you're still going to be asked, just talk about your future goal, no matter how soon in the future, and use that. And be sincere. People can tell if your answer is just a canned response. Most initial answers I receive in an interview are canned responses, so I always dig deeper. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/39720",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10981/"
]
} |
39,742 | I have seen many times statements like- "Please make this feature a first class citizen in so and so language/platform". For example, it is said about enums in C#/.net. So, when is a feature considered a "First class citizen" in a programming language/platform? | Definition: an object is first-class when it can be stored in variables and data structures; can be passed as a parameter to a subroutine; can be returned as the result of a subroutine; can be constructed at runtime; and has intrinsic identity (independent of any given name). The term "object" is used loosely here, not necessarily referring to objects in object-oriented programming. The simplest scalar data types, such as integer and floating-point numbers, are nearly always first-class. http://en.wikipedia.org/wiki/First_class_object | {
"source": [
"https://softwareengineering.stackexchange.com/questions/39742",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/963/"
]
} |
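A small Python sketch of the definition quoted above, using functions as the first-class objects (names are illustrative only): they are stored in variables and data structures, passed as parameters, returned as results, and constructed at runtime.

```python
def shout(text):
    return text.upper() + "!"

# Stored in a variable and in a data structure.
handlers = {"loud": shout, "plain": lambda text: text}

def make_repeater(func, times):
    """Takes a function as a parameter and returns a new one built at runtime."""
    def repeat(text):
        return " ".join(func(text) for _ in range(times))
    return repeat

twice_loud = make_repeater(handlers["loud"], 2)
print(twice_loud("hello"))  # HELLO! HELLO!
```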
39,771 | In my current job, there are no coding guidelines. Everyone pretty much codes the way he wants. Which is fine, since the company is small. However, one new guy recently proposed to always use Hungarian Notation. Until now, some of us used some sort of Hungarian Notation and some of us didn't. You know, it's an engineering company, so coding styles do not really matter as long as the algorithms are sound. Personally, I feel that these little type abbreviations are kind of redundant. A well thought-out name usually delivers the same message. (Furthermore, most of our code has to run on some weirdo DSPs, where a concept like bool or float doesn't exist anyway.) So, how do you feel about Hungarian Notation? Do you use it? Why? | When most people say "Hungarian Notation" they're actually talking about "Systems Hungarian". Systems Hungarian is completely useless and should be avoided. There is no need to encode the type of the variable in its name. However, Systems Hungarian is actually a misunderstanding of the original, "real" Hungarian: Apps Hungarian. In Apps Hungarian, you don't encode the "type" in the variable name, you encode the "kind" of the variable. So not nWidth, but pxWidth or emWidth (for "width in pixels" or "width in ems" respectively). Not strName but sName or usName (for "safe name" or "unsafe name" respectively - useful when accepting input from users: unsafe strings). Personally, I don't usually bother with either, unless I'm doing something that explicitly converts the "kind" of a value (for example, I've used the "px" and "em" prefixes in the past because they make mistakes like pxArea = emWidth * emHeight obvious). See also Joel's article, "Making Wrong Code Look Wrong." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/39771",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3586/"
]
} |
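A hedged sketch of the Apps Hungarian idea from the answer above, transplanted into Python purely for illustration (the us/s prefixes follow the "unsafe/safe" convention described there; the function and variable names are hypothetical):

```python
import html

def render_comment(usComment: str) -> str:
    # usComment holds raw, "unsafe" text straight from the user.
    sComment = html.escape(usComment)  # sComment is now safe to emit as HTML.
    return "<p>" + sComment + "</p>"

# The payoff: wrong code looks wrong. A line that writes usComment directly
# into the page, e.g. "<p>" + usComment + "</p>", stands out on sight because
# an "us" value reaches the output without passing through an escape step.
print(render_comment("<script>alert('hi')</script>"))
```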