source_id (int64, values 1-4.64M) | question (string, lengths 0-28.4k) | response (string, lengths 0-28.8k) | metadata (dict)
---|---|---|---|
133,565 | I've been assigned a group project in my AP computer science class, and I am required to work with three other people. I've never talked to them before, I have no idea of their skill levels, and all I have are their email addresses. The assignment, summed up, is this: "As a team you will complete a minimum of three Modules to a Class...." I'm going to try to become "Team captain" because none of them have attempted to contact each other, but I am curious: how do I go about this? I've emailed them and asked whether there are any methods of communication they prefer over emailing each other, but once we actually start the project I'll have to figure out who is doing what. What should I do? How do I "take charge" and lead three people that I've never met? Here is an excerpt of the actual assignment: Therefore you will need to discuss the various roles each team member
will take in this project early in the week. You can communicate via
Pronto (or Blackboard IM), email, a wiki, a google group, blog or any
other method that you see fit. If a group member does not engage the
group by the end of the week, let your instructor know and they will
provide additional guidance. ... Also due at the end of a project will be a team evaluation in which you will rate each team member's contribution to the completion of this project along with a suggested grade. Edit: Many people suggested that I meet them in a coffee shop, or something like that. Only problem is, all of us are in different states. I also figured out one of them isn't allowed to use Facebook/Skype/Twitter, so I have to resort to messaging them over Yahoo Messenger and email. | The leader of this project will be the person who steps up and takes charge at the beginning. This applies to most things in life - not just software development. When everybody else is running around like chickens with no heads, the person who thinks things through, steps forward and says, "This is what we're going to do and this is how we're going to do it." is usually the person looked to as the leader for the rest of the project. Bear in mind that by doing this you are taking responsibility for the ultimate success or failure of the project. You want to lead this project? Here are a couple of things you can start doing right away to make a big impact. Use a project management tool like Trello, send invites to everybody, and start assigning parts of the project out to people. Generate a bug database and start adding tasks and bugs - again, just start assigning. Set up a version control repository and check in a good initial chunk of code that everyone can work from. Refuse to deal with any other form of code control. Offer to help people get going with development by showing them how to use the version control system and bug database. Send out weekly emails detailing the status of the project and the progress of the previous week. None of these steps is particularly hard or time-consuming, but they will be huge time savers down the road. Furthermore, it will get your team talking to each other, and get them used to seeing you in charge. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133565",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42045/"
]
} |
133,600 | Let's say you have an application that has a boolean field in its User table called Inactive . Is there anything inherently wrong with just storing false as null? If so, can you please explain what the downside would be? I discussed this with someone a few months ago and we both agreed it shouldn't matter as long as you do it consistently throughout the app/database. Recently, someone I know was emphatic that a "true" true or false should be used, but they didn't really give an explanation as to why. | Is there anything inherently wrong with just storing false as null? Yes. If so, can you please explain what the downside would be? NULL is not the same as False. By definition, comparisons (and logic) that involve NULL should return values of NULL (not False). However, SQL implementations can vary. TRUE AND NULL is NULL (not False). (TRUE AND NULL) OR FALSE is NULL (not False). http://en.wikipedia.org/wiki/Null_(SQL)#Three-valued_logic_.283VL.29 http://technet.microsoft.com/en-us/library/cc966426.aspx | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133600",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25221/"
]
} |
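To make the three-valued logic in the entry above concrete, here is a minimal C++ sketch (my own illustration, not SQL and not part of the answer) that models NULL as an "unknown" boolean; the TriBool alias and helper names are invented for this example.

```cpp
#include <iostream>
#include <optional>

// Three-valued AND/OR in the style of SQL, modeled with std::optional<bool>,
// where an empty optional stands for NULL ("unknown").
using TriBool = std::optional<bool>;

TriBool tri_and(TriBool a, TriBool b) {
    if (a.has_value() && !*a) return false;   // FALSE AND anything -> FALSE
    if (b.has_value() && !*b) return false;
    if (a.has_value() && b.has_value()) return true;
    return std::nullopt;                      // otherwise unknown
}

TriBool tri_or(TriBool a, TriBool b) {
    if (a.has_value() && *a) return true;     // TRUE OR anything -> TRUE
    if (b.has_value() && *b) return true;
    if (a.has_value() && b.has_value()) return false;
    return std::nullopt;
}

std::ostream& operator<<(std::ostream& os, TriBool v) {
    return os << (v.has_value() ? (*v ? "TRUE" : "FALSE") : "NULL");
}

int main() {
    TriBool t = true, n = std::nullopt, f = false;
    std::cout << "TRUE AND NULL = " << tri_and(t, n) << '\n';                  // NULL
    std::cout << "(TRUE AND NULL) OR FALSE = " << tri_or(tri_and(t, n), f) << '\n'; // NULL
}
```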
133,664 | I've been a professional programmer for many, many years (20 years) and now I've lost interest; at the moment I have trouble even knocking out a few hundred lines of simple stuff. It will take me 3-4 days rather than 30-40 minutes. Anyone have any tips on how to get your interest back? Since I was a kid I've pretty much been into programming/coding as long as I was awake. I used to finish work and hit the interwebs for new stuff until bed. Now I am lucky to make LOC counts at work seem reasonable. EDIT Thanks everyone - some great suggestions - a lot I didn't think of - although I'm not looking forward to the exercise, I probably need it. | Nobody here knows what will inspire you better than you do, but here are a few ideas: Switch projects. Programming is like writing -- it's a means to an end, not so much an end in itself. If you don't believe in the value of what you're doing, you won't feel that there's much point in doing it. So look for a different project at work that you can switch to, something you can get excited about. If it's not possible to switch completely, look for an additional project that you can spend at least some time on. Try learning something new. Challenge yourself by trying a project in your own time using a set of tools that are very different from what you've been doing for the last n years. You could try your hand at a functional programming language, or build a web site, or build a project based on Amazon's EC2 or Google's App Engine , or try developing an app for Android or iOS , or pick up an Arduino and try some embedded programming, or corner one of the mainframe guys at work and ask to learn a little COBOL, or... Take a vacation. When was the last time you took two or three consecutive weeks off? Consider making it a real vacation , where you unplug your computer and don't touch it for the duration. Don't even read about computers. Do some landscaping around your house, finish your basement, do an in-depth study of cupcakes , take up photography , spend some time with your family, travel... anything that doesn't involve typing. Sometimes you need a break from something before you can enjoy it again. Go back to school. Realize that you don't have to be a programmer forever, especially if you don't find it rewarding. This could be a good time to start thinking about what you want to do for your next act. Or, after a semester of learning about EBITDA and indirect labor costs, you might suddenly realize that you love, love, love programming after all. Start exercising . It can be hard for desk jockeys like us to believe, but many people swear that exercising gives you more energy, which might help you feel a little more motivated. It can also serve as either a mental break from work, or as a time to think deeply. Talk to your doctor. Diminished motivation could be a sign of depression or other illness. And even if that's not the problem, your doc might be able to suggest some changes that'll have you feeling a lot better. Make sure you understand how the work you're doing fits into the bigger picture. Sometimes it's hard to see the forest for the trees, and a talk with your manager about how your work impacts the bottom line could provide some of the motivation that'll help you get back on track. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133664",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40923/"
]
} |
133,688 | I understand that C++11's uniform initialization solves some syntactical ambiguity in the language, but in a lot of Bjarne Stroustrup's presentations (particularly those during the GoingNative 2012 talks), his examples primarily use this syntax now whenever he is constructing objects. Is it recommended now to use uniform initialization in all cases? What should the general approach be for this new feature as far as coding style goes and general usage? What are some reasons to not use it? Note that in my mind I'm thinking primarily of object construction as my use case, but if there are other scenarios to consider please let me know. | Coding style is ultimately subjective, and it is highly unlikely that substantial performance benefits will come from it. But here's what I would say that you gain from liberal use of uniform initialization: Minimizes Redundant Typenames Consider the following: vec3 GetValue()
{
return vec3(x, y, z);
} Why do I need to type vec3 twice? Is there a point to that? The compiler knows good and well what the function returns. Why can't I just say, "call the constructor of what I return with these values and return it?" With uniform initialization, I can: vec3 GetValue()
{
return {x, y, z};
} Everything works. Even better is for function arguments. Consider this: void DoSomething(const std::string &str);
DoSomething("A string."); That works without having to type a typename, because std::string knows how to build itself from a const char* implicitly. That's great. But what if that string came from, say, RapidXML or a Lua string? That is, let's say I actually know the length of the string up front. The std::string constructor that takes a const char* will have to compute the length of the string itself if I just pass a const char* . There is an overload that takes a length explicitly, though. But to use it, I'd have to do this: DoSomething(std::string(strValue, strLen)) . Why have the extra typename in there? The compiler knows what the type is. Just like with auto , we can avoid having extra typenames: DoSomething({strValue, strLen}); It just works. No typenames, no fuss, nothing. The compiler does its job, the code is shorter, and everyone's happy. Granted, there are arguments to be made that the first version ( DoSomething(std::string(strValue, strLen)) ) is more legible. That is, it's obvious what's going on and who's doing what. That is true, to an extent; understanding the uniform initialization-based code requires looking at the function prototype. This is the same reason why some say you should never pass parameters by non-const reference: so that you can see at the call site if a value is being modified. But the same could be said for auto ; knowing what you get from auto v = GetSomething(); requires looking at the definition of GetSomething . But that hasn't stopped auto from being used with near reckless abandon once you have access to it. Personally, I think it'll be fine once you get used to it. Especially with a good IDE. Never Get The Most Vexing Parse Here's some code. class Bar;
void Func()
{
int foo(Bar());
} Pop quiz: what is foo ? If you answered "a variable", you're wrong. It's actually the prototype of a function that takes as its parameter a function that returns a Bar , and the foo function's return value is an int. This is called C++'s "Most Vexing Parse" because it makes absolutely no sense to a human being. But the rules of C++ sadly require this: if it can possibly be interpreted as a function prototype, then it will be. The problem is Bar() ; that could be one of two things. It could be a type named Bar , which means that it is creating a temporary. Or it could be a function that takes no parameters and returns a Bar . Uniform initialization cannot be interpreted as a function prototype: class Bar;
void Func()
{
int foo{Bar{}};
} Bar{} always creates a temporary. int foo{...} always creates a variable. There are many cases where you want to use Typename() but simply can't because of C++'s parsing rules. With Typename{}, there is no ambiguity. Reasons Not To The only real power you give up is narrowing. You cannot initialize a smaller value with a larger one with uniform initialization. int val{5.2}; That will not compile. You can do that with old-fashioned initialization, but not uniform initialization. This was done in part to make initializer lists actually work. Otherwise, there would be a lot of ambiguous cases with regard to the types of initializer lists. Of course, some might argue that such code deserves not to compile. I personally happen to agree; narrowing is very dangerous and can lead to unpleasant behavior. It's probably best to catch those problems early on at the compiler stage. At the very least, narrowing suggests that someone isn't thinking too hard about the code. Notice that compilers will generally warn you about this sort of thing if your warning level is high. So really, all this does is make the warning into an enforced error. Some might say that you should be doing that anyway ;) There is one other reason not to: std::vector<int> v{100}; What does this do? It could create a vector<int> with one hundred default-constructed items. Or it could create a vector<int> with 1 item whose value is 100. Both are theoretically possible. In actuality, it does the latter. Why? Initializer lists use the same syntax as uniform initialization. So there have to be some rules to explain what to do in the case of ambiguity. The rule is pretty simple: if the compiler can use an initializer list constructor with a brace-initialized list, then it will. Since vector<int> has an initializer list constructor that takes initializer_list<int> , and {100} could be a valid initializer_list<int> , it therefore must be. In order to get the sizing constructor, you must use () instead of {} . Note that if this were a vector of something that wasn't convertible to an integer, this wouldn't happen. An initializer_list wouldn't fit the initializer list constructor of that vector type, and therefore the compiler would be free to pick from the other constructors. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133688",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31950/"
]
} |
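A small, self-contained example (my own sketch, not the answer author's code) demonstrating the two behaviours discussed in the entry above: the vector{100} versus vector(100) ambiguity and the rejection of narrowing conversions.

```cpp
#include <iostream>
#include <vector>

int main() {
    std::vector<int> a{100};   // list-initialization: one element whose value is 100
    std::vector<int> b(100);   // sizing constructor: one hundred zero-valued elements
    std::cout << a.size() << ' ' << b.size() << '\n';   // prints "1 100"

    int ok(5.2);       // old-style initialization: compiles (perhaps with a warning), truncates to 5
    // int bad{5.2};   // uniform initialization: ill-formed, the narrowing conversion is rejected
    std::cout << ok << '\n';   // prints "5"
}
```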
133,695 | I'm developing an application in Clojure (a Lisp), alone, within my team. It started as a small application - no problem. But as it gains features and grows in scope, it's becoming an important program, and I'm worried about maintenance. No one on my team knows Clojure or Lisp, nor is anyone interested in such languages. So, is it wrong to program in unpopular languages (for my own fun)? Should I use more popular languages (at least something like Python)? I'm sure that if I leave the team - not saying I'm leaving :) - no one would maintain the program. It would be thrown away and rewritten in another language. I really enjoy developing in Clojure, but I've come to realize it might not be right for my team. What do you think? I suspect many programmers who love unpopular languages have worried about a similar issue. | I feel your pain; I would love to do more coding in functional programming (Haskell looks so fun!). I feel like I have only just scratched the surface because I have yet to use it in a business context. I would strongly suggest against doing it, though. If you program in a language only you know, then only you will be able to support it. Unless you want to have to deal with every support issue (even when you have other deadlines/priorities), code in a language your team knows and can support. What happens when something breaks while you are on holiday? What happens if you want to get promoted? I would recommend you get at least one other team member on board with you. Show them some cool language features. With two people on board it becomes workable and you won't be loaded with all the support. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133695",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47003/"
]
} |
133,704 | In C#, with an inherited class hierarchy, when calling a method should we use the keywords base.methodname and this.methodname, irrespective of whether it is an overridden method or not? The code is likely to undergo changes in terms of logic, and maybe some IF-ELSE-like conditions may come in at a later date. At that point, the developer would be compelled to revisit each line of code and ensure that he/she makes the right choice of which method is being called - base.methodname() or this.methodname() - otherwise the .NET framework will call the default (I think it's base.methodname()) and the entire logic can go for a toss. | Using the base keyword is not a question of preference, it's a question of correctness. If you want to call the base class implementation in an overridden method, you have to use base . If you don't want to call that, you can't use base . The this keyword, on the other hand, is a question of preference in most cases. I think it only unnecessarily clutters the code, so I don't use it if I don't have to. Where it's useful is if you have a method parameter with the same name as a field (or property). I would use this in such a case. But if I'm writing new code, I tend to use naming conventions that avoid this problem. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133704",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47000/"
]
} |
133,768 | Every few years someone proposes tighter regulation for the software industry. This IEEE article has been getting some attention lately on the subject. If software engineers who write programs for systems that expose the public to physical or financial risk knew they would be tested on their competence, the thinking goes, it would reduce the flaws and failures in code—and maybe save a few lives in the bargain. I'm skeptical about the value and merit of this. To my mind it looks like a land grab by those that proposed it. The quote that clinches that for me is: The exam will test for basic knowledge, not mastery of subject matter because the big failures (e.g. THERAC-25 ) seem to be complex, subtle issues that "basic knowledge" would never be sufficient to prevent. Ignoring any local issues (such as existing protections of the title Engineer in some jurisdictions): The aims are noble - avoid the quacks/charlatans 1 and make that distinction more obvious to those that buy their software. Can tighter regulation of the software industry ever achieve its original goal? 1 Exactly as regulation of the medical profession was intended to do. | The view that software engineers can be pigeon-holed into the same classification as medical professionals or accountants is an ignorant view of the "problem" that they are trying to solve. Before I give my opinion on this, let's break down some of the arguments of Mr. Thornton, who is Vice Chair of the regulatory body proposing this legislation. “Just as practicing professionals such as doctors, accountants, and
nurses are licensed, so should software engineers,” Thornton says.
“The public needs to be able to rely on some sort of credential when
choosing a contractor to write software.” - Mitch Thornton, Vice Chair of the IEEE Licensure and Registration This sounds very reasonable on the surface. After all, there are other industries that might be considered "successfully regulated". By successfully regulated I mean that if you look a doctor up in the yellow pages you can be reasonably certain that he or she has had a thorough education at an accredited university and has passed a number of exams and completed a residency. Here are some "successfully regulated" industries (in terms of professional personnel). Lawyers Doctors Accountants Nuclear engineers Barbers ( not kidding ) After all, you don't want just anyone removing that tumor from your pancreas or working on the centrifuges of that nuclear power plant just outside of town. Why shouldn't similar restrictions exist for software engineers? Only those whose programs might “endanger the public health or safety,
security, property, or the economy will need to be tested,” This is a vague statement and open to liberal interpretation and application. I could make the argument that Apple Inc. or Facebook are integral parts of the American economy - do I need a special license from the government to write code for them now, since if I bring down the site with my incompetence I might damage the American economy? At my last job I accidentally shut down a grain elevator with a faulty cron job - possibly endangering food supply. I realize that I'm avoiding the actual intent of this proposal. The idea behind it is to ensure that the person writing the code for the Anti-Lock Braking System on your new Jetta is competent and properly licensed to write code for an Anti-Lock Braking System. On your Jetta. Here's the problem: software engineering in this day and age encompasses everything and you can't possibly test for every discipline. The business rules are too specific, and too varied from discipline to discipline. Our hypothetical engineer writing code for the ABS system on a Jetta might be writing something entirely different for the ABS system on an Elantra. Does he have to get re-certified? And what if you do test for all of these derivative disciplines? Suppose for a moment that every programmer who works on an e-commerce web site gets certified as an e-commerce capable programmer. So? Does this suddenly mean that these programmers and companies will actually do the necessary validation and build to PCI compliance? Even if they do - glitches will still happen. The exam will test for basic knowledge, not mastery of subject matter,
according to Mitch Thornton, vice chair of the IEEE Licensure and
Registration Committee. Here's the kicker. A lack of basic knowledge is never the problem. My anti-lock brakes didn't stop working because Chuck was struggling with the concepts of a control structure. They failed because there was a glitch where the ABS turned off due to an electrical short in the tail lights and power wasn't properly rerouted. Or something. The insulin pump software I wrote didn't stop working because I lacked basic skills; it stopped because there was a bug in how I measured the dispensing of insulin when my European teammate used the metric system and I didn't. That's something you can account for in development, but you could never test for with a certification . Here's what will happen if this "certification" goes into effect: the number of incidents will stay exactly the same. Why? Because it doesn't do anything to eliminate the actual problem of an ABS or insulin pump failing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133768",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6904/"
]
} |
133,876 | I have been taking Advanced Placement Computer Science for this past year in high school. It seems as though we are taught simply to memorize code and functions and not how to be resourceful and efficient in using documentation and the like. Practically, I imagine many (if not all) programming jobs would allow you to flip through documentation, review past code and the code of others, essentially doing what my teacher would consider "cheating." While I do agree core concepts are essential to memorize (in any subject matter), it seems superfluous and impractical to me to give a pen-and-paper exam for a CS class, especially when practically you would have a compiler, debugger, reference manuals, and the entire internet to refer to in any real-world work situation. Why is CS taught with a focus on the memorization of code and functions as opposed to teaching useful skills, including how to use and interpret documentation, sample code, the debugger and such? | In a high school class, you're at the most basic level of your path to mastery. Things that are covered in your class are the kind of things that a professional programmer is expected to know cold. In a lot of ways, this is akin to learning your "times tables". Of course you'll always be able to grab a calculator in a "real-world" setting, but this memorization not only increases your speed in more complex tasks, but also promotes a more thorough understanding of the basic principles. For example, you should know several sorting algorithms, how they're implemented, how they work, when they're best used, and when not to use them. This could always be looked up, but shouldn't have to be - any more than a mathematician should have to look up 6 times 8. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133876",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35862/"
]
} |
133,968 | I am new to programming, and at an interview I got a question on regular expressions; needless to say I couldn't answer. So I was wondering whether I should learn regular expressions. Are they a must for every programmer in every field, or only for some particular fields? Related questions: Why are regular expressions so morbidly attractive? When you should NOT use Regular Expressions? | Regular expressions are such an incredibly convenient tool, available across so many languages, that most developers will learn them sooner or later. For an interviewer, they are a nice way to probe experience during an interview. If you are interviewing someone claiming years of experience who doesn't understand them, you need to dig further. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133968",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19141/"
]
} |
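As a rough illustration of how little code a regular expression can replace, here is a minimal C++11 sketch (my own, not part of the answer; the text and pattern are invented). Essentially the same pattern works in Python, JavaScript, or grep.

```cpp
#include <iostream>
#include <regex>
#include <string>

int main() {
    // Pull a date in YYYY-MM-DD form out of free text.
    const std::string text = "Release scheduled for 2012-02-20, pending review.";
    const std::regex date_pattern{R"((\d{4})-(\d{2})-(\d{2}))"};

    std::smatch match;
    if (std::regex_search(text, match, date_pattern)) {
        std::cout << "year: "  << match[1] << ", month: " << match[2]
                  << ", day: " << match[3] << '\n';
    }
}
```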
133,993 | As a grad student, I find it more and more common for prestigious companies (like Google, Facebook, Microsoft, ...) to put algorithm questions in their tests and interviews. A few startups I applied to also asked about algorithms. I wonder whether fluency with algorithms is the most important thing for software developers at those companies. If the answer is yes, what are the best methods or resources for learning and practicing algorithms effectively? I can't seem to get interested in solving the seemingly over-complicated problems found in most textbooks or websites. Though I easily understand basic algorithms (like quicksort, bubble sort, ...), I find it immensely difficult to remember and reuse them later. Thanks. P/S: If you ask me what I like, it's building good software to solve users' problems innovatively. I suppose that does not necessarily mean the software has to be very complicated. | Algorithms are clear Here's the beautiful thing about algorithms: The problem space they deal with is well-defined, i.e. your requirements are not only actually known , but usually even formalized much like the metrics for the solution's quality. So if I tell you to come up with an algorithm, there isn't much potential for communication problems, and measuring your performance is a trivial task. At the same time your performance is a fairly good indicator for your ability to think logically. Algorithms are an efficient filter The current problem of the industry (and of education) is the poor average quality of graduates. This has been illustrated with the FizzBuzz test, which is: Write a program that will go through the numbers from 1 to 100 and
will print "fizz" if the number is divisible by 3, "buzz" if it is
divisible by 5 and the number itself if it is divisible by neither. Apparently, the majority of all Comp Sci graduates fail to solve this problem. Please note that this is an algorithmic question, although of course an embarrassingly simple one. Given this, if you get someone who can solve the kind of problems given in Google Code Jam or Project Euler, you're already enjoying the crème de la crème. Algorithms are a tiny part of software development The truth is, as soon as you work in the industry, you will not be using your algorithm skills more than 1% of the time. Before you even start writing code, you must first gather and analyze requirements. Then you must synthesize your design based on them. Then you must implement the design. Then you must evaluate the implementation against the original requirements, then iterate the requirements, then iterate the design, then iterate the implementation and so on. One of the requirements is sensible performance. If that requirement is not met, you must profile your implementation to track down the bottlenecks and then you can optimize it, which sometimes is a matter of straightforward micro-optimization (which is rather easy to do), but sometimes is a matter of using better algorithms (which is not always easily done afterwards). Therefore: Algorithms are critical The better your grasp of algorithms, the bigger the chance that you get it right the first time. Otherwise, you're not only likely to run into a problem that can only be solved by implementing a better algorithm, but you will also be unable to actually solve it. So while you almost never need this skill, it presents a single point of failure in your development methodology, and if you don't have the skill, you can only hope that the necessity never arises, or that someone else jumps in to fix it for you. What is really important is to get a feeling for computational complexity and how to keep it low, as I also explained in response to a similar question . Or to specialize in things where this simply isn't important, such as GUI development, but then again almost everybody hates it ... for a reason! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133993",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15020/"
]
} |
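For reference, a minimal C++ sketch of the FizzBuzz test quoted in the entry above (my own; the quoted statement leaves the divisible-by-both case open, so this follows the usual convention of printing "fizzbuzz" for it).

```cpp
#include <iostream>

int main() {
    for (int i = 1; i <= 100; ++i) {
        if (i % 15 == 0)       std::cout << "fizzbuzz\n"; // divisible by both 3 and 5
        else if (i % 3 == 0)   std::cout << "fizz\n";
        else if (i % 5 == 0)   std::cout << "buzz\n";
        else                   std::cout << i << '\n';
    }
}
```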
134,097 | I always read that composition is to be preferred over inheritance. A blog post on unlike kinds , for example, advocates using composition over inheritance, but I can't see how polymorphism is achieved. But I have a feeling that when people say prefer composition, they really mean prefer a combination of composition and interface implementation. How are you going to get polymorphism without inheritance? Here is a concrete example where I use inheritance. How would this be changed to use composition, and what would I gain? class Shape
{
std::string name;
public:
void getName();
virtual void draw()=0;
};
class Circle : public Shape
{
void draw(/*draw circle*/);
}; | Polymorphism does not necessarily imply inheritance. Often inheritance is used as an easy means to implement polymorphic behaviour, because it is convenient to classify similarly behaving objects as having an entirely common root structure and behaviour. Think of all those car and dog code examples you've seen over the years. But what about objects that aren't the same? Modelling a car and a planet would be very different, and yet both might want to implement Move() behaviour. Actually, you basically answered your own question when you said "But I have a feeling that when people say prefer composition, they really mean prefer a combination of composition and interface implementation." Common behaviour can be provided through interfaces and a behavioural composite. As to which is better, the answer is somewhat subjective, and really comes down to how you want your system to work, what makes sense both contextually and architecturally, and how easy it will be to test and maintain. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134097",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46003/"
]
} |
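One possible way (my sketch, not the answer author's) to rework the question's Shape example so that a polymorphic draw() comes from composing a pluggable behaviour rather than from a base class; the std::function member stands in for a small drawing interface.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Shape *has* a draw behaviour instead of *being* an abstract base class.
class Shape {
public:
    Shape(std::string name, std::function<void()> draw_behaviour)
        : name_(std::move(name)), draw_(std::move(draw_behaviour)) {}

    const std::string& name() const { return name_; }
    void draw() const { draw_(); }   // delegate to the composed behaviour

private:
    std::string name_;
    std::function<void()> draw_;
};

int main() {
    std::vector<Shape> shapes;
    shapes.emplace_back("circle", [] { std::cout << "drawing a circle\n"; });
    shapes.emplace_back("square", [] { std::cout << "drawing a square\n"; });

    for (const auto& s : shapes) s.draw();   // polymorphic behaviour, no inheritance
}
```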
134,103 | I work as an analyst for a financial institution, which, due to data sensitivity, will not store any data in the cloud. However, I'm having some success getting my team to use Git for code management. I was wondering whether there was any way to implement GitHub-like pull requests on our own server. The specific feature I'm interested in is the ability to submit a changeset for comments, without actually having it merged into a given branch. I like the workflow of (1) submit changes, (2) have changes reviewed and commented on, and (3) either accept the commit or reject it. Can this be implemented (even better, can this be easily implemented) on our own servers? | git request-pull anyone? Summarizes the changes between two commits to the standard output, and includes the given URL in the generated summary... This should do the trick... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134103",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9824/"
]
} |
134,118 | I have no idea what these are actually called, but I see them all the time. The Python implementation is something like: x += 5 as a shorthand notation for x = x + 5 . But why is this considered good practice? I've run across it in nearly every book or programming tutorial I've read for Python, C, R, and so on. I get that it's convenient, saving three keystrokes including spaces. But they always seem to trip me up when I'm reading code, and at least to my mind, make it less readable, not more. Am I missing some clear and obvious reason these are used all over the place? | It's not shorthand. The += symbol appeared in the C language in the 1970s, and - following the C idea of a "smart assembler" - corresponds to a clearly different machine instruction and addressing mode: Things like i=i+1 , i+=1 and ++i , although at an abstract level they produce the same effect, correspond at a low level to different ways of working of the processor. In particular, consider those three expressions, assuming the i variable resides at the memory address stored in a CPU register (let's name it D - think of it as a "pointer to int") and that the ALU of the processor takes a parameter and returns a result in an "accumulator" (let's call it A - think of it as an int). With these constraints (very common in all microprocessors from that period), the translation will most likely be ;i = i+1;
MOV A,(D); //Move in A the content of the memory whose address is in D
ADD A, 1; //The addition of an inlined constant
MOV (D) A; //Move the result back to i (this is the '=' of the expression)
;i+=1;
ADD (D),1; //Add an inlined constant to a memory address stored value
;++i;
INC (D); //Just "tick" a memory-located counter The first way of doing it is suboptimal, but it is more general when operating with variables instead of constants ( ADD A, B or ADD A, (D+x) ) or when translating more complex expressions (they all boil down to pushing low-priority operations on a stack, calling the high-priority one, popping and repeating until all the arguments have been eliminated). The second is more typical of a "state machine": we are no longer "evaluating an expression" but "operating on a value": we still use the ALU, but we avoid moving values around, since the result is allowed to replace the parameter. This kind of instruction cannot be used where more complicated expressions are required: i = 3*i + i-2 cannot be operated on in place, since i is required more than once. The third - even simpler - does not even involve the idea of "addition", but uses a more "primitive" (in the computational sense) circuit for a counter. The instruction is shorter, loads faster and executes immediately, since the combinatorial network required to retrofit a register into a counter is smaller, and hence faster, than that of a full adder. With contemporary compilers (referring to C, by now), with compiler optimization enabled, the correspondence can be swapped based on convenience, but there is still a conceptual difference in the semantics. x += 5 means: find the place identified by x; add 5 to it. But x = x + 5 means: evaluate x+5 (find the place identified by x, copy x into an accumulator, add 5 to the accumulator), then store the result in x (find the place identified by x, copy the accumulator to it). Of course, optimization is possible: if "finding x" has no side effects, the two "findings" can be done once (and x becomes an address stored in a pointer register), and the two copies can be elided if the ADD is applied to (X) instead of to the accumulator, thus making the optimized code coincide with the x += 5 one. But this can be done only if "finding x" has no side effects; otherwise *(x()) = *(x()) + 5; and *(x()) += 5; are semantically different, since the side effects of x() (assuming x() is a function doing weird things around and returning an int* ) will be produced twice or once, respectively. The equivalence between x = x + y and x += y is hence due to the particular case where += and = are applied to a direct l-value. To move to Python: it inherited the syntax from C, but since there is no translation/optimization BEFORE execution in interpreted languages, things are not necessarily so intimately related (since there is one less parsing step). However, an interpreter can refer to different execution routines for the three types of expression, taking advantage of different machine code depending on how the expression is formed and on the evaluation context. For whoever likes more detail... Every CPU has an ALU (arithmetic-logic unit) that is, in its very essence, a combinatorial network whose inputs and outputs are "plugged" into the registers and/or memory depending on the opcode of the instruction. Binary operations are typically implemented as "modify an accumulator register with an input taken somewhere", where somewhere can be
- inside the instruction flow itself (typical for a manifest constant: ADD A 5)
- inside another register (typical for expression computation with temporaries: e.g. ADD A B)
- inside the memory, at an address given by a register (typical of data fetching, e.g.: ADD A (H)) - H, in this case, works like a pointer being dereferenced. With this pseudocode, x += 5 is ADD (X) 5 while x = x+5 is MOVE A (X)
ADD A 5
MOVE (X) A That is, x+5 gives a temporary that is later assigned. x += 5 operates directly on x. The actual implementation depends on the real instruction set of the processor:
If there is no ADD (.) c opcode, the first form has to be compiled as the second: there is no way around it. If there is such an opcode and optimizations are enabled, the second expression, after eliminating the redundant moves and adjusting the register opcodes, becomes the first. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134118",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37340/"
]
} |
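A small runnable sketch (my own, with invented storage and x()) that makes the answer's *(x()) point observable: the compound assignment evaluates the "find the place" expression once, while the expanded form evaluates it twice.

```cpp
#include <iostream>

int storage[4] = {10, 20, 30, 40};
int calls = 0;

// A "find the place" function with a visible side effect.
int* x() {
    ++calls;
    return &storage[2];
}

int main() {
    *(x()) += 5;                     // x() evaluated once
    std::cout << calls << '\n';      // prints 1

    calls = 0;
    *(x()) = *(x()) + 5;             // x() evaluated twice
    std::cout << calls << '\n';      // prints 2

    std::cout << storage[2] << '\n'; // 30 + 5 + 5 = 40
}
```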
134,184 | I am not so familiar with databases and now I am trying to understand the indexing mechanism. From what I know, in an RDBMS, indexing on a column makes searching by that column faster. This is also true for triple stores, only there the indices assume you will search (for example) mostly by the subject, then by object and so on. I am not sure about RDBMSes, but in triple stores you can define more than one index, letting the store choose the best index for each query (hopefully I understood this right). Naturally, the following question appears: why shouldn't I add all the possible indexes to a triple store, and, extending to an RDBMS, why not make indexes on each column (assuming I am not too lazy)? | Because, essentially, an index is an extra table, where the primary key is the field you're indexing and the only content is the primary key of your main table. So every update has to be replicated in every index that uses the field you update. This is particularly noticeable on inserts. Imagine if every insert you did to a table had to be replicated on 20 other tables. It's going to be painfully slow. Note that this gets even worse with compound, clustered and full-text indexes, but I don't want to complicate the issue for you yet. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134184",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47233/"
]
} |
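A toy C++ model (my own illustration; the Row/Table names are invented) of the answer's point that an index is effectively an extra table: every insert must also update every index, so each additional index adds write cost.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Row { int id; std::string name; std::string city; };

struct Table {
    std::vector<Row> rows;                              // the table itself
    std::multimap<std::string, std::size_t> by_name;    // index on name
    std::multimap<std::string, std::size_t> by_city;    // index on city

    void insert(Row r) {
        rows.push_back(std::move(r));
        const std::size_t pos = rows.size() - 1;
        by_name.emplace(rows[pos].name, pos);   // extra work per index...
        by_city.emplace(rows[pos].city, pos);   // ...and it grows with every new index
    }
};

int main() {
    Table t;
    t.insert({1, "Ada", "London"});
    t.insert({2, "Grace", "Arlington"});
    auto it = t.by_name.find("Grace");          // fast lookup thanks to the index
    if (it != t.by_name.end()) std::cout << t.rows[it->second].city << '\n';
}
```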
134,223 | I would like to know whether it's good practice, or even valid, to use non-standard tags in an HTML page for certain custom purposes. For example: Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam
consequat, felis sit amet suscipit laoreet, nisi arcu accumsan arcu,
vel pulvinar odio magna suscipit mi. I want to highlight "consectetur adipiscing elit" as important and "nisi arcu accumsan arcu" as highlighted. So in the HTML I would put: Lorem ipsum dolor sit amet, <important>consectetur adipiscing
elit</important>. Nullam consequat, felis sit amet suscipit laoreet,
<highlighted>nisi arcu accumsan arcu</highlighted>, vel pulvinar odio
magna suscipit mi. and in the CSS: important {
background: red;
color: white;
}
highlighted {
background: yellow;
color: black;
} However, since these are not valid HTML tags, is this OK? | No, this is not a good practice. You should use existing semantic, meaningful tags -- perhaps <em> in this case -- and apply CSS styles to achieve your design requirements. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134223",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46698/"
]
} |
134,321 | Is it good practice to name the variable that a method returns result ? For instance: public Zorglub calculate() {
Zorglub result = [...]
[...]
return result;
} Or should I name it by its type? public Zorglub calculate() {
Zorglub zorglub = [...]
[...]
return zorglub;
} I have seen both in the wild; if I need to choose one, what reasons could make me prefer the former or the latter (or any better name)? I am mainly thinking about Java. | If this is a method variable, it really depends on readability. Seeing as you already have the type name in both the variable declaration and the method return type, you might as well use result - it is descriptive of the role of the variable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134321",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1480/"
]
} |
134,432 | Can someone provide me with a canonical answer on the differences between an Observer and a Mediator , and a summary of when you should use one pattern over the other? I am unsure of what kind of situation would require an Observer and what kind would require a Mediator. | In the original book that coined the terms Observer and Mediator, Design Patterns: Elements of Reusable Object-Oriented Software , it says that the Mediator pattern can be implemented by using the Observer pattern. However, it can also be implemented by having Colleagues (roughly equivalent to the Subjects of the Observer pattern) hold a reference to either a Mediator class or a Mediator interface. There are many cases when you would want to use the Observer pattern; the key is that an object should not know what other objects are observing its state. Mediator is a little more specific: it avoids having classes communicate directly, and instead routes communication through a mediator. This helps the Single Responsibility principle by allowing communication to be offloaded to a class that just handles that. A classic Mediator example is in a GUI, where the naive approach might lead to code on a button click event saying "if the Foo panel is disabled and the Bar panel has a label saying "Please enter date" then don't call the server, otherwise go ahead", whereas with the Mediator pattern it could say "I'm just a button and have no earthly business knowing about the Foo panel and the label on the Bar panel, so I'll just ask my mediator if calling the server is O.K. right now." Or, if it is implemented using the Observer pattern, the button would say "Hey, observers (which would include the mediator), my state changed (someone clicked me). Do something about it if you care". In my example that probably makes less sense, but sometimes it would, and the difference between Observer and Mediator would be more one of intent than a difference in the code itself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134432",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1130/"
]
} |
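A stripped-down C++ sketch of the GUI example in the answer above (class names and the rule are invented for illustration): the button never inspects other widgets, it only asks its mediator whether the action is currently allowed.

```cpp
#include <iostream>
#include <string>

class DialogMediator {
public:
    void set_state(bool panel_enabled, std::string label) {
        panel_enabled_ = panel_enabled;
        label_ = std::move(label);
    }
    bool ok_to_call_server() const {
        return panel_enabled_ && label_ != "Please enter date";
    }
private:
    bool panel_enabled_ = false;
    std::string label_;
};

class SubmitButton {
public:
    explicit SubmitButton(const DialogMediator& m) : mediator_(m) {}
    void click() const {
        if (mediator_.ok_to_call_server()) std::cout << "calling server\n";
        else                               std::cout << "doing nothing\n";
    }
private:
    const DialogMediator& mediator_;   // the only collaborator the button knows about
};

int main() {
    DialogMediator mediator;
    SubmitButton button(mediator);

    button.click();                          // doing nothing
    mediator.set_state(true, "2012-02-20");
    button.click();                          // calling server
}
```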
134,448 | I am working on a C++ project where I have a bunch of math functions that I initially wrote to use as part of a class. As I've been writing more code, though, I've realized I need these math functions everywhere. Where is the best place to put them? Let's say I have this: class A{
public:
int math_function1(int);
...
}; And when I write another class, I can't (or at least I don't know how to) use that math_function1 in that other class. Plus, I've realized some of these functions are not truly related to class A. They seemed to be at the beginning, but now I can see that they're just math functions. What is good practice in this situation? Right now I've been copy-pasting them into the new classes, which I'm sure is the worst practice. | C++ can have non-member functions just fine. If they do not belong to a class, don't put them in a class; just put them at global or other namespace scope: namespace special_math_functions //optional
{
int math_function1(int arg)
{
//definition
}
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134448",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44639/"
]
} |
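A single-file sketch (the function body and class B are invented for illustration) showing how any other class can then call the namespaced free function by its qualified name, which is the part the question above was stuck on.

```cpp
#include <iostream>

namespace special_math_functions {
    // Normally the declaration would live in a header and the definition in a .cpp file.
    int math_function1(int arg) { return arg * arg + 1; }   // body invented for illustration
}

// A completely unrelated class can simply call the free function by its
// qualified name; no inheritance, friendship or static class is needed.
class B {
public:
    int work(int x) const { return special_math_functions::math_function1(x) * 2; }
};

int main() {
    B b;
    std::cout << b.work(3) << '\n';   // prints 20
}
```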
134,540 | The question Where should I put functions that are not related to a class has sparked some debate over whether it makes sense in C++ to combine utility functions in a class or just have them exist as free functions in a namespace. I come from a C# background, where the latter option does not exist, and thus I naturally tend toward using static classes in the little C++ code I write. The highest-voted answer on that question, as well as several comments, however, says that free functions are to be preferred, even suggesting static classes are an anti-pattern. Why is that so in C++? At least on the surface, static methods on a class seem indistinguishable from free functions in a namespace. Why, then, the preference for the latter? Would things be different if the collection of utility functions needed some shared data, e.g. a cache one could store in a private static field? | I guess to answer that we should compare the intentions of both classes and namespaces. According to Wikipedia: Class In object-oriented programming, a class is a construct that is used as a blueprint to create instances of itself – referred to as class instances, class objects, instance objects or simply objects. A class defines constituent members which enable these class instances to have state and behavior. Data field members (member variables or instance variables) enable a class object to maintain state. Other kinds of members, especially methods, enable a class object's behavior. Class instances are of the type of the associated class. Namespace In general, a namespace is a container that provides context for the identifiers (names, or technical terms, or words) it holds, and allows the disambiguation of homonym identifiers residing in different namespaces. Now, what are you trying to achieve by putting the functions in a class (statically) or a namespace? I would wager that the definition of a namespace better describes your intention - all you want is a container for your functions. You don't need any of the features described in the class definition. Note that the first words of the class definition are " In object-oriented programming ", yet there is nothing object-oriented about a collection of functions. There are probably technical reasons as well, but as someone coming from Java and trying to get my head around the multi-paradigm language that is C++, the most obvious answer to me is: Because we don't need OO to achieve this. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134540",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40190/"
]
} |
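Regarding the question's follow-up about shared data such as a cache: a hedged sketch (my own, with an invented memoized Fibonacci) showing that a namespace plus an unnamed-namespace variable covers that case too, with no static class required.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

namespace fib_utils {
    namespace {                                          // unnamed namespace: internal linkage,
        std::unordered_map<int, std::uint64_t> cache;    // shared by the functions below
    }

    std::uint64_t fib(int n) {
        if (n < 2) return static_cast<std::uint64_t>(n);
        auto it = cache.find(n);
        if (it != cache.end()) return it->second;        // cache hit
        const std::uint64_t value = fib(n - 1) + fib(n - 2);
        cache[n] = value;
        return value;
    }
}

int main() {
    std::cout << fib_utils::fib(90) << '\n';   // fast thanks to the cached results
}
```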
134,551 | I'm going to write a class library for .NET which provide an implementation of arbitrary precision arithmetic for integer, rational and maybe complex numbers. What best known approaches should I become familiar with? I tried to start with Knuth's TAOCP Vol.2 (Seminumerical Algorithms, Chapter 4 – Arithmetic) but it's too complicated. At least I couldn't get the ideas in a relatively short period of time. | I guess to answer that we should compare the intentions of both classes and namespaces. According to Wikipedia: Class In object-oriented programming, a class is a construct that is used as a blueprint to create instances of itself – referred to as class instances, class objects, instance objects or simply objects. A class defines constituent members which enable these class instances to have state and behavior. Data field members (member variables or instance variables) enable a class object to maintain state. Other kinds of members, especially methods, enable a class object's behavior. Class instances are of the type of the associated class. Namespace In general, a namespace is a container that provides context for the identifiers (names, or technical terms, or words) it holds, and allows the disambiguation of homonym identifiers residing in different namespaces. Now, what are you trying to achieve by putting the functions in a class (statically) or a namespace? I would wager that the definition of a namespace better describes your intention - all you want is a container for your functions. You don't need any of the features described in the class definition. Note that the first words of the class definition are " In object-oriented programming ", yet there is nothing object-oriented about a collection of functions. There are probably technical reasons as well but as someone coming from Java and trying to get my head around the multi-paradigm language that is C++, the most obvious answer to me is: Because we don't need OO to achieve this. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134551",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47157/"
]
} |
134,633 | As I understand it, top-down design works by refining an abstract, high-level concept into smaller, concrete and comprehensible parts, until the smallest building block is defined. Bottom-up, on the other hand, defines low-level parts, then gradually builds up higher-level blocks until the whole system is formed. In practice, it is said to be best to combine the two methods: start with a high-level specification to fully specify the domain knowledge, its relationships and constraints. Once the problem is well understood, the smallest building blocks are created to build up the system. The process of: creating a requirements spec, creating a design spec (with diagrams), implementing, delivering, and repeating (in iterative development, rather than doing a whole chunk in each phase, we do a little bit of each repeatedly, and have daily meetings to adapt to the customer's changing requirements) looks perfectly normal to me (with specs as plans). It has its flaws, but that's why we have iterative development: instead of spending time in one phase, say requirements analysis, studying every possible thing in the domain knowledge that is subject to change (possibly daily), we do a little bit of analysis, a little bit of design and then implement it. Another way is that each iteration is a mini-waterfall, where analysis is done in a few days (or a week). The same applies to design. The rest of the time is spent on implementation. Is there something inherently wrong with the top-down approach in combination with iterative development? In his essay Programming Bottom Up , Paul Graham seems to encourage building from the bottom up completely, or programming from the bottom up, but not the requirements analysis/design phase: Experienced Lisp programmers divide up their programs differently. As
well as top-down design, they follow a principle which could be called
bottom-up design-- changing the language to suit the problem. As far as I can tell, what he means is that Lispers still perform top-down design, but program bottom-up. Is that true? Another point he wrote: It's worth emphasizing that bottom-up design doesn't mean just writing
the same program in a different order. When you work bottom-up, you
usually end up with a different program. Instead of a single,
monolithic program, you will get a larger language with more abstract
operators, and a smaller program written in it. Instead of a lintel,
you'll get an arch. Does this mean that in the course of writing a program in Lisp, you end up with a generic tool? | Top-down is a great way to describe things you know, or to re-build things that you've already built. Top-down's biggest problem is that quite often there simply is no "top". You will change your mind about what the system should do while developing the system and while exploring the domain. How can your starting point be something that you don't know (i.e. what you want the system to do)? A "local" top-down is a good thing... some thinking ahead of coding is clearly good. But thinking and planning too much is not, because what you are envisioning is not the real scenario (unless you've already been there before, i.e. if you are not building, but re-building). Global top-down when building new things is just nonsense. Bottom-up should be (globally) the approach unless you know 100% of the problem, need just the known solution to be coded, and don't care about looking for possible alternative solutions. The Lisp approach is distilled bottom-up. You not only build bottom-up, but you can also shape the bricks the way you need them to be. Nothing is fixed, freedom is total. Of course freedom takes responsibility, and you can make horrible things by misusing this power. But horrible code can be written in any language. Even in languages that are shaped as cages for the mind, designed with the hope that with those languages even monkeys could get good programs up and running (an idea so wrong on so many levels that it hurts even just thinking about it). Your example is about a web server. Now in 2012 this is a well-defined problem, you have specs to be followed. A web server is just an implementation problem.
Especially if you are aiming at writing a web server substantially identical to the gajillion other web servers that are out there, then nothing is really unclear, except some minutiae. Even your comment about RSA is still talking about a clearly defined problem, with formal specifications. With a well-defined problem, formal specifications and already known solutions, coding is just connecting the dots. Top-down is OK for that. This is project manager heaven. In many cases, however, there is no proven, well-known approach to be used to connect the dots. Actually, very often it is hard to say even what the dots are. Suppose for example you are asked to instruct an automatic cutting machine to align the parts to be cut to a printed material that does not perfectly conform to the theoretical repeating logo. You are given the parts and pictures of the material as taken by the machine. What is an alignment rule? You decide. What is a pattern, and how do you represent it? You decide. How do you align the parts? You decide. Can parts be "bent"? It depends, some not and some yes, but of course not too much. What do you do if the material is just too deformed for a part to cut it acceptably? You decide. Are all the material rolls identical? Of course not, but you cannot bug the user to adapt alignment rules for every roll... that would be impractical. What pictures are the cameras seeing? The material, whatever that may mean... it can be color, it can be black over black where just the light reflex makes the pattern evident. What does it mean to recognize a pattern? You decide. Now try to design the general structure of a solution for this problem and give a quote, in money and time. My bet is that even your system architecture... (yes, the architecture) will be wrong. Cost and time estimation will be random numbers. We implemented it and now it's a working system, but we changed our mind about the very shape of the system a great number of times. We added entire sub-systems that now cannot even be reached from the menus. We switched master/slave roles in protocols more than once. Probably we now have enough knowledge to attempt re-building it better. Other companies of course did solve the same problem... but unless you are in one of those companies, most probably your detailed top-down project will be a joke. We can design it top-down. You cannot, because you never did it before. You can probably solve the same problem too. Working bottom-up, however. Starting with what you know, learning what you don't, and adding up. New complex software systems are grown, not designed. Every now and then someone starts designing a big new complex ill-specified software system from scratch (note that with a big complex software project there are only three possibilities: a] the specification is fuzzy, b] the specification is wrong and self-contradictory, or c] both... and most often [c] is the case). These are the typical huge-company projects with thousands and thousands of hours thrown into PowerPoint slides and UML diagrams alone. They invariably fail completely after burning embarrassing amounts of resources... or in some very exceptional case they finally deliver an overpriced piece of software that implements only a tiny part of the initial specs. And that software invariably is deeply hated by users... not the kind of software you would buy, but the kind of software you use because you're forced to. Does this mean that I think you should think only about code? Of course not.
But in my opinion the construction should start from bottom (bricks, concrete code) and should go up... and your focus and attention to detail should in a sense "fade" as you are getting farther from what you have. Top-down is often presented as if you should put the same level of detail to the whole system at once: just keep it splitting every node until everything is just obvious... in reality modules, subsystem are "grown" from subroutines.
If you do not have a previous experience in the specific problem your top down design of a subsystem, module or library will be horrible. You can design a good library once you know what functions to put in, not the other way around. Many of the Lisp ideas are getting more popular (first class functions, closures, dynamic typing as default, garbage collection, metaprogramming, interactive development) but Lisp is still today (among the languages I know) quite unique in how easy is to shape code for what you need. Keyword parameters for example are already present, but if they were not present they could be added. I did it (including keyword verification at compile time) for a toy Lisp compiler I am experimenting with and it doesn't take much code. With C++ instead the most you can get is a bunch of C++ experts telling you that keyword parameters are not that useful, or an incredibly complex, broken, half backed template implementation that indeed is not that useful.
Are C++ classes first-class objects? No and there's nothing you can do about it. Can you have introspection at runtime or at compile time? No and there's nothing you can do about it. This language flexibility of Lisp is what makes it great for bottom-up building. You can build not only subroutines, but also the syntax and the semantic of the language. And in a sense Lisp itself is bottom-up. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134633",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18721/"
]
} |
134,711 | I solve many problems, mostly from Top Coder. I will get answers for many, but most times I end up with an inefficient solution. In real-world implementations - does it really matter that a solution to the problem is efficient? If so how I can improve it? | The best solution is the one that is (in order of increasing importance) efficient, maintainable and done . ^^^That is the only thing you really need to take from this answer.^^^ Efficiency is important . Maybe a little less so than it used to be due to our abundance of hardware, but Performance is a Feature . In a contest, efficiency is obviously important. You should know how to write efficient code. More importantly you should know the best practices that will yield efficient, good performing code without sacrificing on the timeliness or maintainability of an application. This is really where depth of experience with a platform and language returns many yields. More important though(in 95% of cases), is having a finished, maintainable solution. Without a finished product , it doesn't matter how efficient or maintainable the solution. If it takes you an extraordinary amount of time to track and fix a bug or add a new feature, it doesn't matter how efficient the solution. But, efficiency and performance are undoubtedly important, no matter what anybody might say. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134711",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19216/"
]
} |
134,716 | I am trying to add a couple of tests to a legacy C project. The project basically consists of a command line tool that prints something to stdout every time an event happens. Now, since writing unit tests would be pretty hard given the fact that the modules are pretty tight coupled, I am trying to write some functional tests in order to validate the current behaviour and then I'll go on splitting the modules so I can unit test them. Does it make sense to have the testing framework something like a few Python scripts (keep in mind that the project is pure C) that handles all the functional tests? Basically, Python should call my command line tool, fake its input and expect a valid output. | The best solution is the one that is (in order of increasing importance) efficient, maintainable and done . ^^^That is the only thing you really need to take from this answer.^^^ Efficiency is important . Maybe a little less so than it used to be due to our abundance of hardware, but Performance is a Feature . In a contest, efficiency is obviously important. You should know how to write efficient code. More importantly you should know the best practices that will yield efficient, good performing code without sacrificing on the timeliness or maintainability of an application. This is really where depth of experience with a platform and language returns many yields. More important though(in 95% of cases), is having a finished, maintainable solution. Without a finished product , it doesn't matter how efficient or maintainable the solution. If it takes you an extraordinary amount of time to track and fix a bug or add a new feature, it doesn't matter how efficient the solution. But, efficiency and performance are undoubtedly important, no matter what anybody might say. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134716",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
134,746 | I frequently see Simulation and Emulation in computer science. These two terms seem synonymous. Is there any difference between Simulation and Emulation ? | Yes, the concepts are different. Simulation A simulation is a system that behaves similar to something else, but is implemented in an entirely different way. It provides the basic behaviour of a system, but may not necessarily adhere to all of the rules of the system being simulated. It is there to give you an idea about how something works. Example Think of a flight simulator as an example. It looks and feels like you are flying an airplane, but you are completely disconnected from the reality of flying the plane, and you can bend or break those rules as you see fit. For example, fly an Airbus A380 upside down between London and Sydney without breaking it. Emulation An emulation is a system that behaves exactly like something else, and adheres to all of the rules of the system being emulated. It is effectively a complete replication of another system, right down to being binary compatible with the emulated system's inputs and outputs, but operating in a different environment to the environment of the original emulated system. The rules are fixed, and cannot be changed, or the system fails. Example The M.A.M.E. system is built around this very premise. All those old arcade systems that have been long forgotten, that were implemented almost entirely in hardware, or in the firmware of their hardware systems can be emulated right down to the original bugs and crashes that would occur when you reached the highest possible score. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134746",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19141/"
]
} |
134,754 | We have a product that has a few different editions. The differences are minor: different strings here and there, very little additional logic in one, very little difference in logic in the other. When the software is being developed, most changes need to be added to each edition; however, there are a few that don't and a few that needs to differ. Is it a valid use of branches if I have release-editionA and release-editionB (..etc) branches? Are there any gotchas? Good practices? Update: Thanks for the insight everyone, lots of good answers here. The general consensus seems to be that it is a bad idea to use branches for this purpose. For anyone wondering, my final solution to the problem is to externalize strings as configuration, and externalize the differing logic as plugins or scripts. | This depends on the magnitude of the change, but I wouldn't consider it good practice for the differences you described. Generally, you want a Git branch to be something that will be merged in the future or stored read-only for reference. Git branches that co-exist indefinitely mean work for everyone: Changes need to be propagated and merged, conflicts resolved, all the fun. If nothing else, every developer has to remember to push changes to five repositories instead of one. If you have small changes, the whole merging and branch-keeping effort seems overkill when compared to the problem. Use your preprocessor or build system to differentiate between versions. Does a simple #ifdef do the trick? Then don't solve problems with git, it's overkill. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134754",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3530/"
]
} |
134,855 | I used to think I knew what this was, until I really started thinking about it... "maintainable"... what exactly makes code maintainable? To me, if code must be maintainable that means we can expect to revisit the code to make some sort of change to it in the future. Code is always "changeable" no matter what state it is in. So does this mean code needs to be easy to change? However, if our code was scalable/extensible, there would be no need to directly change it, because it will "change" (adapt) for free and automatically. I've also seen code maintainability used interchangeably with coding standards. Using a consistent coding pattern or style, does this really make code more maintainable? More readable and more consistent? Sure, but how does this improve the maintainability? The more I try to think into this, the more confused I get. Anyone have a proper explanation? | Maintainability isn't a binary property, either being maintainable or not. It's a continuum. Roughly speaking, maintainability is inversely proportional to the amount of time it takes a developer to make a change and the risk that change will break something. Improving readability, coupling, or consistency all contribute to maintainability because it won't take as long to make any given change. Maintainability is easier to recognize by its absence, like when something you thought should take an hour ends up taking a week. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134855",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31950/"
]
} |
134,861 | Working on a problem that uses the percent change formula: percent change = 100 * [(new value - old value) / old value] How would I explain the difference if new value or old value = NULL , rather than 0 to someone who might not be a programmer? My boss is wondering why there is an empty string in the TextBox rather than a value, because we have the old value, but not the new value. | To explain to a boss the difference between "zero" and "null": "Zero" is a value. It is the unique, known quantity of zero, which is meaningful in arithmetic and other math. "Null" is a non-value. It is a "placeholder" for a data value that is not known or not specified. It is only meaningful in this context; mathematical operations cannot be performed on null (the result of any such operation is undefined, and therefore also generally represented as null). For example, as in the comments: "What is your yearly income?" is a question requiring a numeric answer. "0" is a perfectly valid answer for someone who does not work and has no investment income. If the user does not enter a value at all, they don't necessarily make no money; they just didn't want to tell your software how much (or little) they make. It's an unknown, not specified; therefore, to allow the software to continue, you specify the "null" placeholder for that data field within the software. That's technically valid from a data perspective; whether it's valid at the business level depends on whether an actual numeric value (even zero) is required in order to perform a mathematical operation (such as calculation of taxes, or comparison with thresholds determining benefits). In computers, virtually any operation on a variable containing null will result either in null or in an error condition, because since one of the variable's values is not known, the result of the expression cannot be known. The equivalent of performing math on null would be if I asked you "What's five plus the number I'm thinking of right now?". It's impossible for you to give a definite answer because you don't know the number I'm thinking of. An operation on zero, except for dividing by it, is usually valid and will return another known, unique value. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134861",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25945/"
]
} |
134,886 | At school we started learning C this year, despite the fact I'm way ahead of class, and I learned Java, C++ and C while the class is at the base of C. Anyhow, I've been documenting myself, reading books, articles, and I've asked my teacher why I should learn C, and she said it was the foundation of C++. When I first started programming I found C++ alot easier, I later on learned C. But In books you can see that C code works in C++ yet it doesn't go vice-versa. My question is pretty straightforward ~ Is it a good habit to use C expressions in C++? Let me give you an example: Should this code #include <stdio.h>
#include <iostream>
int main() {
int x;
scanf("%d", &x);
cout << "The number you entered is " << x << "And it's double is " << x*x;
return 0;
} Be more efficient or better in any way than this: #include <iostream>
int main() {
int x;
cin >> x;
cout << "The number you entered is " << x << "And it's double is " << x*x;
return 0;
} I have already done some easy documentation on this in some dusty old books, and from what I could find, using scanf instead of cout also flushes the stream or something like that, so I'm basically asking if it's better to use scanf and in what contexts. This also applies to file IO as I've always found FIle IO to be easier in C than in C++. This question goes for pretty much every general expression in C applied to C++. It's also notable that I'm using a modern compiler and nevertheless this should not matter as I'm asking if it's a good programming habit to use C expressions in C++ code. There are probably cons and pros of doing this, but I'm only looking for a yes/why, no/why type of answer. Also if there are any details I've left out post a comment. | No, it's a bad habit. When you do this for a living, you'll likely end up violating style guides that your team adheres to (or at least get whacked during code reviews). Yes it works, but if there's a c++ equivalent, use it. (e.g. try not to mix printfs with couts ) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/134886",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47546/"
]
} |
135,047 | I'm just now learning TDD. It's my understanding that private methods are untestable and shouldn't be worried about because the public API will provide enough information for verifying an object's integrity. I've understood OOP for a while. It's my understanding that private methods make objects more encapsulated, thus more resistant to change and errors. Thus, they should be used by default and only those methods that matter to clients should be made public. Well, it's possible for me to make an object that has only private methods and interacts with other objects by listening to their events. This would be very encapsulated, but completely untestable. Also, it's considered bad practice to add methods for the sake of testing. Does this mean TDD is at odds with encapsulation? What is the appropriate balance? I'm inclined to make most or all of my methods public now... | Prefer testing to the interface over testing on the implementation. It's my understanding that private methods are untestable This depends on your development environment, see below. [private methods] shouldn't be worried about because the public API will provide enough information for verifying an object's integrity. That's right, TDD focuses on testing the interface. Private methods are an implementation detail that could change during any re-factor cycle. It should be possible to re-factor without changing the interface or the black-box behaviour. In fact, that is part of the benefit of TDD, the ease with which you can generate the confidence that changes internal to a class will not affect users of that class. Well, it's possible for me to make an object that has only private methods and interacts with other objects by listening to their events. This would be very encapsulated, but completely untestable. Even if the class has no public methods, it's event handlers are it's public interface , and it its against that public interface that you can test. Since the events are the interface then it is the events that you will need to generate to test that object. Look into using mock objects as the glue for your test system. It should be possible to create a simple mock object that generates an event and picks up the resultant change of state (possible by another receiver mock object). Also, it's considered bad practice to add methods for the sake of testing. Absolutely, you should be very wary of exposing internal state. Does this mean TDD is at odds with encapsulation? What is the appropriate balance? Absolutely not. TDD shouldn't change the implementation of your classes other than to perhaps simplify them (by applying YAGNI from an earlier point). Best practice with TDD is identical to best practice without TDD, you just find out why sooner, because you are using the interface as you are developing it. I'm inclined to make most or all of my methods public now... This would be rather throwing the baby out with the bath water. You shouldn't need to make all methods public so that you can develop in a TDD way. See my notes below to see if your private methods really are untestable. A more detailed look at testing private methods If you absolutely must unit test some private behaviour of a class, depending on the language/environment, you may have three options: Put the tests in the class you want to test. Put the tests in another class/source file & expose the private methods you want to test as public methods. 
Use a testing environment that allows you to keep test and production code separate, yet still allow testing code access to private methods of the production code. Obviously the 3rd option is by far the best. 1) Put the tests in the class you want to test (not ideal) Storing test cases in the same class/source file as the production code under test is the simplest option. But without a lot of pre-processor directives or annotations you will end up with your test code bloating your production code unnecessarily, and depending on how you have structured your code, you may end up accidentally exposing internal implementation to users of that code. 2) Expose the private methods you want to test as public or protected methods (really not a good idea) As suggested this is very poor practice, destroys encapsulation and will expose internal implementation to users of the code. 3) Use a better testing environment (best option, if it is available) In the Eclipse world, 3. can be achieved by using fragments . In the C# world, we might use partial classes . Other languages/environments often have similar functionality, you just need to find it. Blindly assuming 1. or 2. are the only options would be likely to result in production software bloated with test code or nasty class interfaces that wash their dirty linen in public. *8') All in all - it is much better not to test against private implementation though. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135047",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11266/"
]
} |
135,170 | I understand that to measure a project or code, we can use The Joel Test , but is there any simple standard test (like The Joel Test) that is able to measure and filter how good a programmer is? My plan is to have this test as a quick filter first before going to a more detail test. | There is the programmer competency matrix . As with the Joel test it's just a vague guide. The only way to properly assess a programmer is to ask good programmers who have worked with them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135170",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30958/"
]
} |
135,218 | I have been learning writing test cases for BDD (Behavior Driven Development) using specflow. If I write comprehensive tests with BDD, is it necessary to write TDD (Test Driven Development) test separately? Is it necessary to write test cases for both TDD and BDD separately, or are they effectively the same thing? It seems to me that both are same, the only difference being that BDD test cases can be understood by non developers and testers. | The difference between BDD and TDD is that BDD begins with a B and TDD begins with a T. But seriously, the gotcha with TDD is that too many developers focused on the "How" when writing their unit tests, so they ended up with very brittle tests that did nothing more than confirm that the system does what it does. BDD provides a new vocabulary and thus focus for writing a unit test. Basically it is a feature driven approach to TDD. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135218",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47474/"
]
} |
135,311 | This is probably something everyone has to face during the development sooner or later. You have an existing code written by someone else, and you have to extend it to work under new requirements. Sometimes it's simple, but sometimes the modules have medium to high coupling and medium to low cohesion, so the moment you start touching anything, everything breaks. And you don't feel that it's fixed correctly when you get the new and old scenarios working again. One approach would be to write tests, but in reality, in all cases I've seen, that was pretty much impossible (reliance on GUI, missing specifications, threading, complex dependencies and hierarchies, deadlines, etc). So everything sort of falls back to good ol' cowboy coding approach. But I refuse to believe there is no other systematic way that would make everything easier. Does anyone know a better approach, or the name of the methodology that should be used in such cases? | First off, it gets a little wearing that everyone on this site thinks anything written by anyone else is rubbish. Understanding code is difficult, admittedly some poor programming practices make it more difficult, but, for any reasonably complex system understanding the internal structure and idioms used is going to be hard, even if its well written code . Systems routinely run for more than twenty years. Programming methodologies, best practices, design philosophies and fashions change every couple of years, and, programmers pick up the improved styles at different rates. So what would have been considered a state of the art and excellent example of code in 2007, looks old-fashioned and quirky today. As an exercise I suggest you dig out some code you wrote three years ago, I can almost guarantee you will cringe. So first of you need to suppress the initial WTF response. Tell yourself that the system has worked well enough and long enough for it to become your problem so there must be something good about it. Try to get a hang of the original coders style, the idioms used, study the weirder bits of code and see if they fall into a pattern. If the required changes are small then follow the original coding style, that way someone picking up the code after you only needs to get used to one set idiosyncrasies. If the required changes are large and the changes are concentrated in a few functions or modules, then, take the opportunity to refactor these modules and clean up the code. Above all do not re-factor working code which has nothing to do with the immediate change request. It takes too much time, it introduces bugs, and, you may inadvertently stamp on a business rule that has taken years to perfect. Your boss will hate you for being so slow to deliver small changes, and, your users will hate you for crashing a system that ran for years without problems. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135311",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8029/"
]
} |
135,383 | Possible Duplicate: Are debugging skills important to become a good programmer? I'm a young Java developer and I make a systematic use of the Netbeans debugger. In fact, I often develop my applications when I debug step by step in order to see immediately if my code works. I feel spending a lot of time programming this way because the use of debugger increase execution time and I often wait for my app to jump from a breakpoint to an other (so much that I've the time to ask this question). I never learned to use a debugger at school, but at work I've been told immediately to use this functionality. I started teaching myself to use it two years ago, and I've never been told any key tips about it. I'd like to know if there are some rules to follow in order to use the debugger efficiently. I'm also wondering if using the debugger is eventually a good practice? Or is it a loss of time and I've to stop now this bad habit? | It affects the efficiency in a very good way. I feel that it's used too seldom by developers. Not only is it good for debugging but it can also give you an insight in profiling. For example when line stepping you can feel "ahh" this line took a little too long time to execute and you get a feeling for where the bottlenecks in your app are. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135383",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/45322/"
]
} |
135,544 | The Top 10 programming languages, according to the TIOBE index seem to be heavily influenced by C: 1. Java The language derives much of its syntax from C and C++ but has a simpler object model and fewer low-level facilities. - wikipedia.org 2. C C is one of the most widely used programming languages of all time and there are very few computer architectures for which a C compiler does not exist. - wikipedia.org 3. C# During the development of the .NET Framework, the class libraries were originally written using a managed code compiler system called Simple Managed C (SMC). In January 1999, Anders Hejlsberg formed a team to build a new language at the time called Cool, which stood for "C-like Object Oriented Language". - wikipedia.org 4. C++ It was developed by Bjarne Stroustrup starting in 1979 at Bell Labs as an enhancement to the C language. - wikipedia.org 5. Objective-C Objective-C is a reflective, object-oriented programming language that adds Smalltalk-style messaging to the C programming language. - wikipedia.org 6. PHP He rewrote these scripts as C programming language Common Gateway Interface (CGI) binaries, extending them to add the ability to work with Web forms and to communicate with databases and called this implementation "Personal Home Page/Forms Interpreter" or PHP/FI. - wikipedia.org 8. Python Python was conceived in the late 1980s and its implementation was started in December 1989 by Guido van Rossum at CWI in the Netherlands as a successor to the ABC programming language (itself inspired by SETL) capable of exception handling and interfacing with the Amoeba operating system. - wikipedia.org ABC (programming language)
Its designers claim that ABC programs are typically around a quarter the size of the equivalent Pascal or C programs, and more readable. - wikipedia.org 9. Perl Perl borrows features from other programming languages including C, shell scripting (sh), AWK, and sed. - wikipedia.org 10. JavaScript JavaScript uses syntax influenced by that of C. - wikipedia.org It appears that most of them borrow their syntax from C and / or are heavily influenced in several other ways, at least in their beginnings. Why? | With the rise of UNIX in the 1970's, its standard systems programming language C quickly became the lingua franca of the programming world. For quite a while, C was practically mandatory for every programmer. As such, the fact that C has influenced almost every programming language that came after it in one way or another is hardly surprising, for two reasons: When designing a new language, it makes sense to base its syntax, where possible, on a popular existing language that can be assumed common knowledge. A new language is more likely to succeed if the learning curve is shallow, and a syntax that resembles an already known language is generally easier to learn (unless it behaves radically different despite the apparent similarities). So languages that borrow syntax from C generally gain traction more quickly than ones that don't. But other languages existed, and they still do, some of them even predating C - there's the LISP family (CL, Clojure and Scheme being the most popular modern dialects), the ML family (with several modern dialects), there's a whole army of BASIC dialects (VB.NET and VBA are modern implementations), there's Pascal and its relatives (Delphi being the best known one) and many 'oddball' languages that took influences from many other languages and invented a few things themselves; examples include Go, Python, Lua, Haskell (and its predecessor, Miranda), Prolog, and Erlang. While none of these languages (except Python) is in your top 10, many of them have a stable user base and an active community; they're certainly not going away. Also, it should be noted that the amount of C influence in these languages differs wildly, ranging from the almost 100% C compatible languages C++ and Objective-C, up to Python (which deliberately abandons many of C's syntax features). And that's only the syntax: in terms of semantics, most of the languages on that list don't have much in common with C. The overwhelming majority has memory management built into the language, and consequently, copy semantics, argument passing, etc., are very different. JavaScript, for example, has strong semantic influences from Scheme, while its syntax was designed to resemble Java (which, in turn bases its bits-and-pieces syntax on C, but not its semantics). Other differences (with the exception of C++ and Objective-C, which are mostly backwards-compatible with C) include error handling, scope rules, standard libraries, external code inclusion ( #include ), and the fact that many of these languages are 'virtualized', that is, they run on an interpreter, JIT compiler, or a virtual machine. Python, by the way, does have some C influence, but it is certainly not "based on" C. Both syntax and semantics differ quite radically from C, and this is by design. Python only borrows features from C where other alternatives are equally "good" (as per the "Zen of Python" - type import this in a python interpreter). As for the future of programming; predictions vary. 
The influence of C is not going away, but recent developments in hardware (multi-core machines becoming commonplace, powerful GPU's, the CPU ceasing to be the typical performance bottleneck, fast reliable network connections, etc.) call for radically different approaches to programming in general. Anyone who has ever written a multithreaded distributed application in an imperative language can tell that it's incredibly hard, while languages like Haskell have features that remove most of the typical problems and offer a more abstract and more structured approach to distributed, concurrent, and parallel processing (purity being an important concept in this context). Newer programming languages (e.g. C# or D) already include many features to support such an idiom. In any case, neither the strong impact C has made on programming, nor the existence of non-C-like languages is going away. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135544",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34553/"
]
} |
135,651 | I was looking over some old code that I wrote. It works, but it's not great code. I know more now than I did at the time, so I could improve it. It's not a current project, but it's current, working, production code. Do we have a responsibility to go back and improve code that we've written in the past, or is the correct attitude "if it ain't broke, don't fix it"? My company sponsored a code review class a few years ago and one of the major takeaways was that sometimes it's just good enough; move on. So, at some point should you just call it good enough and move on, or should you try to push projects to improve old code when you think it could be better? | Unless you will be making changes to this "old code" to fix bugs or add features, I wouldn't bother improving it just for the sake of it. If you do want to eventually improve it, make sure you have unit tests in place that will test the code before you start refactoring it. You can use the "old code" to learn better practices. If you do this, it's a part-time learning experience and you shouldn't expect that code to replace what's currently released or deployed by clients/customers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135651",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17275/"
]
} |
135,697 | I'm doing a course in college, where one of the labs is to perform buffer overflow exploits on code they give us. This ranges from simple exploits like changing the return address for a function on a stack to return to a different function, all the way up to code that changes a programs register/memory state but then returns to the function that you called, meaning that the function you called is completely oblivious to the exploit. I did some research into this, and these kinds of exploits are used pretty much everywhere even now, in things like running homebrew on the Wii , and the untethered jailbreak for iOS 4.3.1 My question is why is this problem so difficult to fix? It's obvious this is one major exploit used to hack hundreds of things, but seems like it would be pretty easy to fix by simply
truncating any input past the allowed length, and simply sanitizing all input that you take. EDIT: Another perspective that I'd like answers to consider - why do the creators of C not fix these issues by reimplementing the libraries? | They did fix the libraries. Any modern C standard library contains safer variants of strcpy , strcat , sprintf , and so on. On C99 systems - which is most Unixes - you will find these with names like strncat and snprintf , the "n" indicating that it takes an argument that's the size of a buffer or a maximum number of elements to copy. These functions can be used to handle many operations more securely, but in retrospect their usability is not great. For example some snprintf implementations don't guarantee the buffer is null-terminated. strncat takes a number of elements to copy, but many people mistakenly pass the size of the dest buffer. On Windows, one often finds the strcat_s , sprintf_s , the "_s" suffix indicating "safe". These too have found their way into the C standard library in C11, and provide more control over what happens in the event of an overflow (truncation vs. assert for example). Many vendors provide even more non-standard alternatives like asprintf in the GNU libc, which will allocate a buffer of the appropriate size automatically. The idea that you can "just fix C" is a misunderstanding. Fixing C is not the problem - and has already been done. The problem is fixing decades of C code written by ignorant, tired, or hurried programmers, or code that has been ported from contexts where security didn't matter to contexts where security does. No changes to the standard library can fix this code, although migration to newer compilers and standard libraries can often help identify the problems automatically. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135697",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/32693/"
]
} |
135,718 | We all have it, problems that prove difficult to fix and working out a fix through obscure code and bizarre unexpected functionality. Slowly, logically working your way through trying to find patterns, errors, mistakes. This process takes time and the issues are often not easily understood by the client. How does one answer when asked the question "When will it be done?", especially when the client may not understand the inherent complexities of software development? | You answer the question honestly. You tell them it's a difficult problem, the solution is not obvious, and you are not sure how long it will take to resolve. Promise to update them on your progress every [time frame], so they know you're working on it, and of course, actually send them the updates. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135718",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49857/"
]
} |
135,722 | For the sake of an exercice I want to aggregate price listings with prices coming from a series of web sites in a structured form (XML, JSON...). If I have control on both sides how can I go about making price updates the most efficient possible? Edit: To clarify, I'm looking for a more efficient approach then having a script or application pull in price lists in their entirety from all sources for updates. | You answer the question honestly. You tell them it's a difficult problem, the solution is not obvious, and you are not sure how long it will take to resolve. Promise to update them on your progress every [time frame], so they know you're working on it, and of course, actually send them the updates. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135722",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3170/"
]
} |
135,724 | I want to make sure I'm following industry standards and best practices with my first real crack at MVC. In this case, it's ASP.NET MVC, using C#. I will be using Entity Framework 4.1 for my model, with code-first objects (the database already exists), so there will be a DBContext object for retrieving data from the database. In the demos I've gone through on the asp.net website, controllers have data access code in them. This doesn't seem right to me, especially when following the DRY (don't repeat yourself) practices. For example, let's say I am writing a web application to be used in a public library, and I have a controller for creating, updating, and deleting books in a catalog. Several of the actions may take an ISBN and need want to return a "Book" object (note this is probably not 100% valid code): public class BookController : Controller
{
LibraryDBContext _db = new LibraryDBContext();
public ActionResult Details(String ISBNtoGet)
{
Book currentBook = _db.Books.Single(b => b.ISBN == ISBNtoGet);
return View(currentBook);
}
public ActionResult Edit(String ISBNtoGet)
{
Book currentBook = _db.Books.Single(b => b.ISBN == ISBNtoGet);
return View(currentBook);
}
} Instead, should I actually have a method in my db context object to return one Book? That seems like it is a better separation to me, and helps promote DRY, because I might need to get a Book object by ISBN somewhere else in my web application. public partial class LibraryDBContext: DBContext
{
public Book GetBookByISBN(String ISBNtoGet)
{
return Books.Single(b => b.ISBN == ISBNtoGet);
}
}
public class BookController : Controller
{
LibraryDBContext _db = new LibraryDBContext();
public ActionResult Details(String ISBNtoGet)
{
return View(_db.GetBookByISBN(ISBNtoGet));
}
public ActionResult Edit(ByVal ISBNtoGet as String)
{
return View(_db.GetBookByISBN(ISBNtoGet));
}
} Is this a valid set of rules to follow in the coding of my application? Or, I guess a more subjective question would be: "is this the right way to do it?" | Generally, you want your Controllers to do only a few things: Handle the incoming request Delegate the processing to some business object Pass the result of the business processing to the appropriate view for rendering There shouldn't be any data access or complex business logic in the controller. [In the simplest of apps, you can probably get away with basic data CRUD actions in your controller, but once you start adding in more than simple Get and Update calls, you are going to want to break out your processing into a separate class.] Your controllers will usually depend on a 'Service' to do the actual processing work. In your service class you may work directly with your data source (in your case, the DbContext), but once again, if you find yourself writing a lot of business rules in addition to the data access, you will probably want to separate your business logic from your data access. At that point, you will probably have a class that does nothing but the data access. Sometimes this is called a Repository, but it doesn't really matter what the name is. The point is that all of the code for getting data into and out of the database is in one place. For every MVC project I've worked on, I've always ended up with a structure like: Controller public class BookController : Controller
{
ILibraryService _libraryService;
public BookController(ILibraryService libraryService)
{
_libraryService = libraryService;
}
public ActionResult Details(String isbn)
{
Book currentBook = _libraryService.RetrieveBookByISBN(isbn);
return View(ConvertToBookViewModel(currentBook));
}
public ActionResult DoSomethingComplexWithBook(ComplexBookActionRequest request)
{
var responseViewModel = _libraryService.ProcessTheComplexStuff(request);
return View(responseViewModel);
}
} Business Service public class LibraryService : ILibraryService
{
IBookRepository _bookRepository;
ICustomerRepository _customerRepository;
public LibraryService(IBookRepository bookRepository,
ICustomerRepository _customerRepository )
{
_bookRepository = bookRepository;
_customerRepository = customerRepository;
}
public Book RetrieveBookByISBN(string isbn)
{
return _bookRepository.GetBookByISBN(isbn);
}
public ComplexBookActionResult ProcessTheComplexStuff(ComplexBookActionRequest request)
{
// Possibly some business logic here
Book book = _bookRepository.GetBookByISBN(request.Isbn);
Customer customer = _customerRepository.GetCustomerById(request.CustomerId);
// Probably more business logic here
_libraryRepository.Save(book);
return complexBusinessActionResult;
}
} Repository public class BookRepository : IBookRepository
{
LibraryDBContext _db = new LibraryDBContext();
public Book GetBookByIsbn(string isbn)
{
return _db.Books.Single(b => b.ISBN == isbn);
}
// And the rest of the data access
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135724",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44522/"
]
} |
135,737 | I'm a beginner in programming and I've been reading books, studying, reading articles, and whatnot. I'm getting great results since I've started learning programming, and when I was a beginner I used to think I knew everything about programming, but as I learned more I realized how difficult this field is (In fact all fields are difficult, but that's not the point). Nowadays, I've written functional software and I've learned the BASICS of 3 languages and I'm intermediate in just one language. When I look at advanced things like MYSQL, or OpenGL programming, or even visual studio C++ code it gives me headaches, and even when visualising HTML source code of many websites (Most source codes on websites, seen by google chrome seem very messy and unorganized) it makes me confused to the very limit of my brain. It all seems simple at first, yet when looking at these advanced things it just makes me wonder how can one learn so much. The question, in a nutshell, is if these things become clearer to a programmer as he advances in his career. Do complicated topics as the ones listed above (OpenGL, MySQL, advanced html sites) become easier to read, write and understand as you learn more, or does it just get more complicated as you go by? How can you combat this feeling that you're an ant in the programming world and this stuff is the foot about to squash you? | Short answer: no. Long answer: Reading code of other people becomes easier, yes. But only reading. As you gain experience and skills, your personal requirements as a developer grow. You don't want to just write code. You want to write beautiful code . You don't assume your code runs in ideal conditions . You start to think about all bad things which may happen when running your code, handle exceptions, think about hardware problems, network latency, and the problem grows as your skills grow. You don't read and write code in the only language you know. As a skillful developer, you know that to solve this specific problem you have right now, functional programming is a much better alternative , so you must now read and write code in functional programming language. You don't limit yourself to a small set of libraries you know. If you code in C#, you want to know and use the full power of many libraries of .NET Framework. You don't use notepad any longer. You need your powerful IDE , and you want to know how to unit test code, what are code metrics about, and what is the meaning of hundreds of options and windows your IDE can show to you. You don't want to modestly limit yourself to a basic set of tools the language gives you . In C#, you want to use generics, code contracts, reflection, event-driven development, functional aspects with LINQ, Reactive extensions, and a ton of other stuff you learned, all in a single project, if those things help you to write better code. You don't start writing code . You spend 80 to 90% of your time gathering requirements , creating the architecture of your application, writing unit tests, writing documentation, etc., and only 10 to 20% of your time writing actual code . You care about security . You know the legal issues which may arise with the data manipulated by your applications. You know what is ITIL . You know some ISO standards and you apply them daily in your work. Yes, you gain experience and skills, and it becomes easier to solve a given problem with all the knowledge and intellectual abilities you gained. 
But problems you must solve grow too, and you're just not excited at solving the problems of the level of those I've solved when you started programming. While gaining skills, you also gain an insight on the complexity of the software development, learn the aspects you couldn't even imagine when started to learn programming, and you want and need to apply all the stuff you learn daily. In short: The first day you start to learn to program, the task of listing all numbers from 1 to 100 divisible by two is very complex: you just learned how to make loops and display numbers on the screen, but you have no idea how to find if the number is divisible by two. Ten years later, the same exercise appears to be extremely simple. But also, ten years later, you're writing applications which must use transactions, are hosted on several servers and must handle session state properly between servers, and is storing bank account details of your customers, with all the resulting security and legal aspects. ... And you are wondering yourself "How could I possibly do that?" in the exact same way you did ten years ago when you had to display numbers to a screen with a loop. When everything becomes easy to you in a domain, it means that either you achieved perfection in this domain, or you just don't care any longer. Achieving perfection in a domain as vast as software development is impossible, no matter how smart you are. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135737",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47546/"
]
} |
135,914 | Why was the dependency injection pattern not incluided in the gang of four ? Did GOF pre-date widespread automated testing? Is dependency injection now considered a core pattern? | I was Editor of Software Development magazine when the Gang of Four book came out and I can say with total confidence that unit-testing was not a widespread practice in 1994, when Design Patterns was originally published. In 1994, C++ was the most commonly used object-oriented language, and most people programming it were coming from a C background. One of the "thinking in objects" things that people simply didn't have is the idea of hundreds or thousands of entry points into your program. You thought about the main() . If you worked on a large project, you might have a (usually quite elaborate) makefile to create a module-based program. But "unit-testing"? Starting a process, building the necessary memory context, executing it, and tearing it down, on a per method basis? That was very radical. Java made multiple-entry-point programming more obvious. By the time of the original Dot-Com boom, unit-testing was a well-known technique, but it was really JUnit (circa 2001?) that caused it to catch fire and become a universal practice. Although Strategy and the general concept of programming to an interface were part of GoF and the mid-90s zeitgeist, the idea of injection came quite late to the party (circa '03-'05?). Honestly, my gray hairs are still quite dubious about that aspect of DI ("Get off my lawn, you darn configuration files!"). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135914",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29270/"
]
} |
135,971 | Since learning (and loving) automated testing I have found myself using the dependency injection pattern in almost every project. Is it always appropriate to use this pattern when working with automated testing? Are there any situations were you should avoid using dependency injection? | Basically, dependency injection makes some (usually but not always valid) assumptions about the nature of your objects. If those are wrong, DI may not be the best solution: First, most basically, DI assumes that tight coupling of object implementations is ALWAYS bad . This is the essence of the Dependency Inversion Principle: "a dependency should never be made upon a concretion; only upon an abstraction". This closes the dependent object to change based on a change to the concrete implementation; a class depending upon ConsoleWriter specifically will need to change if output needs to go to a file instead, but if the class were dependent only on an IWriter exposing a Write() method, we can replace the ConsoleWriter currently being used with a FileWriter and our dependent class wouldn't know the difference (Liskhov Substitution Principle). However, a design can NEVER be closed to all types of change; if the design of the IWriter interface itself changes, to add a parameter to Write(), an extra code object (the IWriter interface) must now be changed, on top of the implementation object/method and its usage(s). If changes in the actual interface are more likely than changes to the implementation of said interface, loose coupling (and DI-ing loosely-coupled dependencies) can cause more problems than it solves. Second, and corollary, DI assumes that the dependent class is NEVER a good place to create a dependency . This goes to the Single Responsibility Principle; if you have code which creates a dependency and also uses it, then there are two reasons the dependent class may have to change (a change to the usage OR the implementation), violating SRP. However, again, adding layers of indirection for DI can be a solution to a problem that doesn't exist; if it is logical to encapsulate logic in a dependency, but that logic is the only such implementation of a dependency, then it is more painful to code the loosely-coupled resolution of the dependency (injection, service location, factory) than it would be to just use new and forget about it. Lastly, DI by its nature centralizes knowledge of all dependencies AND their implementations . This increases the number of references that the assembly which performs the injection must have, and in most cases does NOT reduce the number of references required by actual dependent classes' assemblies. SOMETHING, SOMEWHERE, must have knowledge of the dependent, the dependency interface, and the dependency implementation in order to "connect the dots" and satisfy that dependency. DI tends to place all that knowledge at a very high level, either in an IoC container, or in the code that creates "main" objects such as the main form or Controller which must hydrate (or provide factory methods for) the dependencies. This can put a lot of necessarily tightly-coupled code and a lot of assembly references at high levels of your app, which only needs this knowledge in order to "hide" it from the actual dependent classes (which from a very basic perspective is the best place to have this knowledge; where it's used). 
It also normally doesn't remove said references from lower down in code; a dependent must still reference the library containing the interface for its dependency, which is in one of three places: all in a single "Interfaces" assembly that becomes very application-centric, each one alongside the primary implementation(s), removing the advantage of not having to recompile dependents when dependencies change, or one or two apiece in highly-cohesive assemblies, which bloats the assembly count, dramatically increases "full build" times and decreases application performance. All of this, again to solve a problem in places where there may be none. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135971",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29270/"
]
} |
135,993 | Does anyone know if there is some kind of tool to put a number on technical debt of a code base, as a kind of code metric? If not, is anyone aware of an algorithm or set of heuristics for it? If neither of those things exists so far, I'd be interested in ideas for how to get started with such a thing. That is, how can I quantify the technical debt incurred by a method, a class, a namespace, an assembly, etc. I'm most interested in analyzing and assessing a C# code base, but please feel free to chime in for other languages as well, particularly if the concepts are language transcendent. | Technical debt is just an abstract idea that, somewhere along the lines of designing, building, testing, and maintaining a system, certain decisions were made such that the product has become more difficult to test and maintain. Having more technical debt means that it will become more difficult to continue to develop a system - you either need to cope with the technical debt and allocate more and more time for what would otherwise be simple tasks, or you need to invest resources (time and money) into reducing technical debt by refactoring the code, improving the tests, and so on. There are a number of metrics that might give you some indication as to the quality of the code: Code coverage. There are various tools that tell you what percentage of your functions, statements, and lines are covered by unit tests. You can also map system and acceptance tests back to requirements to determine the percentage of requirements covered by a system-level test. The appropriate coverage depends on the nature of the application. Coupling and cohesion . Code that exhibits low coupling and high cohesion is typically easier to read, understand, and test. There are code analysis tools that can report the amount of coupling and cohesion in a given system. Cyclomatic complexity is the number of unique paths through an application. It's typically counted at the method/function level. Cyclomatic complexity is related to the understandability and testability of a module. Not only do higher cyclomatic complexity values indicate that someone will have more trouble following the code, but the cyclomatic complexity also indicates the number of test cases required to achieve coverage. The various Halstead complexity measures provide insight into the readability of the code. These count the operators and operands to determine volume, difficulty, and effort. Often, these can indicate how difficult it will be for someone to pick up the code and understand it, often in instances such as a code review or a new developer to the code base. Amount of duplicate code. Duplicated code can indicate potential for refactoring to methods. Having duplicate code means that there are more lines for a bug to be introduced, and a higher likelihood that the same defects exist in multiple places. If the same business logic exists in multiple places, it becomes harder to update the system to account for changes. Often, static analysis tools will be able to alert you of potential problems. Of course, just because a tool indicates a problem doesn't mean there is a problem - it takes human judgement to determine if something could be problematic down the road. These metrics just give you warnings that it might be time to look at a system or module more closely. However, these attributes focus on the code. They don't readily indicate any technical debt in your system architecture or design that might relate to various quality attributes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/135993",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38402/"
]
} |
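To make the cyclomatic complexity metric above concrete, here is a minimal C# sketch; the shipping rules and the method itself are invented purely for illustration. Cyclomatic complexity is the number of independent decision points plus one, so this method scores 4, which also means at least four test cases are needed to cover every path. Static analysis tools for C# (for example Visual Studio's code metrics or NDepend) report this figure automatically.
public static decimal ShippingCost(decimal orderTotal, bool expressDelivery, bool remoteArea)
{
    decimal cost = 5.0m;           // made-up base rate
    if (orderTotal > 100m)         // decision point 1: free shipping over 100
        cost = 0m;
    if (expressDelivery)           // decision point 2
        cost += 12.5m;
    if (remoteArea)                // decision point 3
        cost += 7.5m;
    return cost;                   // 3 decision points + 1 = cyclomatic complexity of 4
}
Watching how this number (together with the duplication and coupling figures) trends from build to build is a simple, if partial, way to put a number on the debt the answer describes.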
136,037 | If you are a Senior developer with say 8+ years experience, how much time spent doing programming is reasonable for competency on average? Specifically working with some library or framework associated with the core language they have worked with. For example, 2 years of Spring or Hibernate for a java developer. I understand that this is based on the complexity of the toolkit, framework or library, plus the competence of the individual developer, however, I'm sure there are reasonable ranges based on time with a language (hence the reference to Senior) that a Senior developer should be able to get competency and be productive. The reason I ask the question is IT Managers should know this when writing requirements for a position and as a developer, I would like to have a better idea as to what criteria is involved for this type of experience = competency formula that has to exist. Even if it's a rule of thumb only. What prompted this question is that I've found recruiters, who generally don't have a technical background, are trying to place technical people and they just look at technology buzzwords on a resume and how much experience for each the dev has. The hiring manager oftentimes doesn't have a technical background and does the same thing with the resume and uses his architect or other technical expert to screen the candidate to make sure the resume matches the person in the interview. The problem here is the perception of "competency" in the mind of the hiring IT manager, whether or not the dev can really pick things up and run with it. They rely on some formula in their mind as to how much experience the dev has with a technology will work for what they want. | The way people know about the book Mythical Man Month - which talks about bogus myth of Man-month in project planning - i guess there is a clear need for another parallel book called "Mythical Man Years" to explain why number of years is a useless metrics to evaluate people. There are many aspects to explain this: 1. Number of years alone don't count: In a consulting/project-based IT company - there are often significant periods where staff are on the bench, or where staff are assigned on a project but don't do anything meaningful. Sometimes even if people do some constructive work, there are cases where they have 1 year experience multiplied 8 times rather than doing 8 different things over 8 years. Choose candidate, based on depth - not length of career 2. What you learn is more important. Following up, we are talking about how much she/he has learned from others so that now she can apply herself to other problems. There are many people who are so obedient and show successful results by following the boss's nod rather than
applying their own thinking. If such a candidate loses the support of good colleagues and bosses, they can become a liability - and many candidates don't realise this about themselves. Choose a candidate based on their original contribution rather than on the results and complexity of their projects. 3. Number of years is sometimes inversely proportional to how much you can still learn. This is not always true, but it is a real possibility: someone who has spent a long time on one subject may stop learning, and the details of yet another framework add little to someone who has already worked at a similar level of challenge and gained the corresponding wisdom. Given this, you can be better off with a relative newcomer than with a more experienced person who has stopped being curious. Choose a candidate who has the energy to learn rather than someone tired or who thinks she/he knows everything. 4. A new area of work requires new thinking. This implies that in more senior roles, specific details (how to solve problems with a specific tool) matter less, while insight into how to approach problems in general matters more. It takes wisdom rather than years of experience to foresee upcoming problems. Choose a candidate who has developed insight rather than someone who merely knows every term in the API! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/136037",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
136,079 | I'm writing an essay, and would like to have some empiric evidence, perhaps longitudinal data where the popularity of these technologies is compared over a period of some years. Are there any statistics that show the popularity of Git versus SVN? | To add to Jan's answer , Ohloh has been crawled (only) three times by the Internet Archive's Wayback Machine , but July 2011 is unreadable, so that gives three data sets including today (plus future edits): August 2010 Git: 26,485 repositories (11.3% of total) SVN: 25,336 repositories (10.8% of total) SvnSync: 117,326 repositories (50.0% of total) Note: Unlike later dates, August 2010 has separate values for Subversion and SvnSync (a Subversion read-only mirroring tool). It's fair to surmise the later Subversion figures also include the large SvnSync share. May 2011 Git: 116,224 repositories (35% of total) SVN: 145,917 repositories (44% of total) February 2012 Git: 124,000 repositories (26% of total) SVN: 265,883 repositories (57% of total) June 2012 Git: 134,459 repositories (27% of total) SVN: 267,499 repositories (54% of total) October 2013 Git: 238,648 repositories (38% of total) SVN: 291,920 repositories (46% of total) April 2014 Git: 247,103 repositories (37% of total) SVN: 324,895 repositories (48% of total) July 2016 Git: 274,605 repositories (39% of total) SVN: 326,540 repositories (46% of total) May 2018 Git: 397,653 repositories (51% of total) SVN: 325,684 repositories (41% of total) November 2018 Git: 600,724 repositories (61% of total) SVN: 325,733 repositories (33% of total) March 2019 Git: 842,966 repositories (69% of total) SVN: 324,589 repositories (26% of total) August 2019 Git: 913,378 repositories (70% of total) SVN: 324,629 repositories (25% of total) This appears to show that, of the open source repositories registered on Ohloh, there's been a huge growth in both Git and Subversion. Whereas they were about level in 2010, there were double the number of Subversion repositories in 2012 (...indexed by Ohloh), but Git has now easily taken the lead. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/136079",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46205/"
]
} |
136,085 | I'm wondering, is music notation language Turing-Complete ? My first thought is that there are loops in musical notation, but there is no way to write conditional branches, right? I'm not a musician, so perhaps someone can help fill in the gaps? | Yes, if you admit a few instructions for transposition—uncommon but not unknown. You can then interpret a piece as Choon , which is Turing-complete. The performer is the memory: they must remember the number of notes by which the piece is currently transposed, and all of the notes they have played thus far. Obviously it’s feasible only for a computer, or perhaps a savant. From the Choon manual: Transpositions There are three transposition instructions, up ( + ), down ( - ) and cancel ( . ). A transposition instruction transposes all subsequent notes played by the amount of the last note played. The cancel instruction ( . ) sets the transposition back to zero. Transpositions are cumulative, so the Choon code to transpose future notes up by 2 is b+ , and by 4 would be b++ . Also, the value used is the value of the previous note after transpositions have been applied, so b+b+ transposes future notes up by 6, not by 4. John Cage The John Cage instruction ( % ) causes a one note silence in the output stream. The transposition value of a John Cage is zero - %- and %+ are no-ops (except that a single silence is added to the output). Repeat Bars The Repeat Bars instructions ( ||: and :|| ) enclose a loop. The loop will execute the number of times indicated by the most recent note played before the ||: was encountered. A zero or negative value will mean Choon will immediately jump to start playing from the matching :|| . A John Cage means repeat forever - %||::|| is an infinite loop. Tuning Fork The Tuning Fork instruction ~ provides a way to break out of loops. If a tuning fork is encountered in a loop, and the last note played was a note of value A , then Choon will immediately jump to start playing from after the next :|| instruction. If there is no further :|| instruction (meaning ~ has been used outside any repeat bars), then the performance will immediately terminate. Markers Markers provide marvellous programming convenience. A marker is a lower case letter or word that remembers a point in the output stream. Referring to a marker (see below) will cause the note played after the Marker occurred to be played again. Note that transpositions will affect this newly played note. Where two or more markers occur sequentially, or a marker follows a play-from-marker instruction, they must be seperated by whitespace. Play From Output The Play From Output instruction ( = ) allows you to play again notes that have already been played in the output stream. You can refer to the notes by number - the 5th note played since the program began would be =5 , by relative number - the 3rd most recent note played would be =-3 or by marker - the note played after marker x would be =x . It is a common idiom to re-use a marker and immediately then refer to it, like this: x=x . This is akin to saying x=x+y in a conventional programming language (where y represents the currently effective transposition value). A John Cage is just a rest , a Tuning Fork is (roughly) dal segno, and a marker is a segno. I suppose the tuning fork could be played by an additional performer to whom the primary performer responds, but the principle is the same. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/136085",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1921/"
]
} |
136,133 | I'm quite proficient with Java, C/C++, JavaScript/jQuery and decently good at Objective-C. I'm quite productive with the languages and their corresponding frameworks too and do produce enterprise level systems (and also small scale ones) with sufficient ease all the while keeping code 'clean' and maintainable (yes, I can read my own code after six months :) Unless mandated by the platform (iPhone, iPad, etc.) or by the client/implementation organization, just "why" should I learn a new programming language? Just for "fun"? And do what with that fun if I'm not going to do anything worthwhile with it? A lot of my peers are ready to dive in to learn the "next new thing/language" and it's usually Python, Ruby or PHP (just naming a few popular ones). Now, just knowing the language by itself is futile IMHO. You also need to know the frameworks, learn their usage/APIs as well as 'good implementation practices', etc. So from an 'economic' sense, is there any benefit in learning a new programming language? If the language is learned in a quick and dirty fashion, it'll probably also be used for quick and dirty prototyping/implementation - but I don't see THAT as a justifiable investment of time/effort. So just WHY should I (or anyone for that matter) learn a new programming language other than "it's fun so let's try it out" - if the investment of time may not be worth it in the long run? | From The Pragmatic Programmer , Tip #8 "Invest Regularly in Your Knowledge Portfolio": Learn at least one new language every year. Different languages
solve the same problems in different ways. By learning several different
approaches, you can help broaden your thinking and avoid
getting stuck in a rut. Additionally, learning many languages is far
easier now, thanks to the wealth of freely available software on the
Internet. It's not about the next "new thing". It's about thinking in different ways, outside of your normal thought patterns. There is a saying, "When you're a hammer, everything looks like a nail". Maybe there is a better way to solve a problem using some other technology. If you don't explore, you may never know it was available. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/136133",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18748/"
]
} |
136,152 | Sometimes my QA team reports bugs, but neither I or them have any idea on how to reproduce them. This leads to very long and frustrating debugging sessions which sometimes do not even yield results. My software is tied heavily with proprietary hardware so bugs can come from many directions at once. Should I expect more from them than "your software crashed when I pressed a button" or should I figure myself what happened? EDIT: One of my coworker pointed out that we are probably all developers here so the results might suffer a little bias | QA should always try and make the bugs as easy for you to reproduce as possible and the bug description should contain the steps taken. However, if they can't easily reproduce the bugs, they should still get entered into the bug database with suitable title/headings and a full description of what they did to cause the bug. The bug description should clearly state that they can't reproduce the bug (perhaps with some comment along the lines of "tried it five times, it happened once"). This way, if someone else sees the same bug, they can add to the bug database with their findings and also you get as much information as possible which further down the line could be vital in saving you time tracking down the problem. Also, you get to filter the information - there might be a lot of bugs in different systems that you know are all linked to (eg) one area of the code - if QA don't report anything (as they can't reproduce them) then this information doesn't get to you. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/136152",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3250/"
]
} |
136,319 | In Visual Studio I can right-click on an interface and choose to Implement Interface, or Implement Interface Explicitly. public class Test : ITest
{
public string Id // Generated by Implement Interface
{
get { throw new NotImplementedException(); }
}
string ITest.Id // Generated by Implement Interface Explicitly
{
get { throw new NotImplementedException(); }
}
} The only difference I see between the two is that the Interface name is added to the interface properties and methods when they're created if you choose to Implement Interface Explicitly. I find it makes the code a bit more readable since I can see where that method/property comes from, however does this make any difference in how the class is used or compiled? And does it really matter if I implement my interfaces implicitly or explicitly? | Check out the top answer from Andrew Barrett for "implicit vs explicit interface implementation" on SO . Basically: Implicit: you access the interface methods and properties as if they were part of the class. Explicit: you can only access methods and properties when treating the class as the implemented interface. Code examples: Implicit: Test t = new Test();
t.Id; // OK
((ITest)t).Id; // OK Explicit: Test t = new Test();
t.Id; // Not OK
((ITest)t).Id; // OK In terms of "when" you have to implement an interface explicitly, it is when your class already has a method with the same signature as one of the methods of your interface, or when your class implements several interfaces that share methods with the same signatures but incompatible contracts. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/136319",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1130/"
]
} |
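To make that last case concrete - two interfaces that share a signature but mean different things - here is a minimal C# sketch; the interface and class names are invented for illustration. Explicit implementation is the only way to give each contract its own body:
interface IFileSource { string Read(); }    // contract: read the cached file
interface ISensorSource { string Read(); }  // contract: read the live sensor
class Device : IFileSource, ISensorSource
{
    string IFileSource.Read() { return "cached file contents"; }
    string ISensorSource.Read() { return "live sensor value"; }
}
// Usage: neither member is reachable implicitly, so the caller must say which contract it wants.
Device d = new Device();
// d.Read();                     // does not compile - no public Read() exists
string fromFile = ((IFileSource)d).Read();
string fromSensor = ((ISensorSource)d).Read();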
136,484 | I've always launched builds after each commit, but on this new project, the architects just asked me to change the frequency to "one build every 15 minutes", and I just can't understand why that would be a good reason vs "building on each commit". First off, some details : Objective-C (iOS 5) project 10 developpers each build actually takes ~1 min, and includes build and unit testing. the integration server is a Mac Mini, so computing power isn't really a problem here we use Jenkins with the XCode plugin My arguments were that if you build at each commit, you can see right now what went wrong, and correct directly your errors, without bothering the other devs too often. Plus our tester is less bothered by UT errors this way. His arguments were that devs will be flooded by "build error" mails (which is not completely true, as Jenkins can be configured to send a mail only for the first broken build), and that metrics can't be done properly if the frequency of builds is too high. So, what's your opinion on this ? | Fail fast is a good principle - the sooner you know the build is broken, the sooner the offending commit can be identified and the build fixed. Build on every commit is the right thing to do. Building every 15 minutes can be pointless if the project has a high volume of commits within such a timeframe - tracking down the bad commit would take longer and may be difficult to determine (one might also be in a situation where multiple commits have different things that break the build). There is also the possiblity that in quiet times (night time?) you end up rebuilding though no changes have been made. If the build breaks so often that it is a problem, the answer it to re-educate the team on the importance of not breaking the build and in techniques that ensure this does not happen (frequent fetches, checkin dance, compiling and running unit tests locally etc...). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/136484",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9001/"
]
} |
136,519 | While I learning C#, I found that, the C# supports operator overloading. I have problem with good example which: Make sense (ex. adding class named sheep and cow) Is not an example of concatenation of two strings Examples from Base Class Library are welcome. | The obvious examples of appropriate operator overloading are any classes which behave in the same way that numbers operate. So BigInt classes (as Jalayn suggests), complex numbers or matrix classes (as Superbest suggests) all have the same operations that ordinary numbers have so map really well onto mathematical operators, while time operations (as suggested by svick ) map nicely onto a subset of those operations. Slightly more abstractly, operators could be used when performing set like operations, so operator+ could be a union , operator- could be a complement etc. This does start to stretch the paradigm though, especially if you use the addition or multiply operator for an operation which isn't commutative , as you might expect them to be. C# itself has an excellent example of non-numeric operator overloading. It uses += and -= to add and subtract delegates , i.e. register and de-register them. This works well because the += and -= operators work as you would expect them to, and this result in much more concise code. For the purist, one of the problems with the string + operator is that it isn't commutative. "a"+"b" is not the same as "b"+"a" . We understand this exception for strings because it is so common, but how can we tell if using operator+ on other types will be commutative or not? Most people will assume that it is, unless the object is string-like , but you never really know what people will assume. As with strings, the foibles of matrices are pretty well known too. It is obvious that Matrix operator* (double, Matrix) is a scalar multiplication, whereas Matrix operator* (Matrix, Matrix) would be a matrix multiplication (i.e. a matrix of dot-product multiplications) for instance. Similarly the use of operators with delegates is so obviously far removed from maths that you are unlikely to make those mistakes. Incidentally, at the 2011 ACCU conference , Roger Orr & Steve Love presented a session on Some objects are more equal than others - a look at the many meanings of equality, value and identity . Their slides are downloadable , as is Richard Harris' Appendix about floating point equality . Summary: Be very careful with operator== , here be dragons! Operator overloading is a very powerful semantic technique, but it is easy to over-use. Ideally you should only use it in situations when it is very clear from context what the effect of an overloaded operator is. In many ways a.union(b) is clearer than a+b , and a*b is much more obscure than a.cartesianProduct(b) , especially since the result of a cartesian product would be a SetLike<Tuple<T,T>> rather than a SetLike<T> . The real problems with operator overloading come when a programmer assumes a class will behave in one way, but it actually behaves in another. This sort of semantic clash is what I'm suggesting it is important to try to avoid. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/136519",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28834/"
]
} |
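As a concrete example of the "behaves like a number" guideline above, here is a minimal C# value type; the struct is invented for illustration and is not taken from the Base Class Library. Addition is commutative and associative exactly as a reader would expect, and scalar multiplication is defined in both argument orders so that v * 2 and 2 * v both work:
public struct Vector2
{
    public readonly double X;
    public readonly double Y;
    public Vector2(double x, double y) { X = x; Y = y; }
    // Component-wise addition: a + b == b + a, just like ordinary numbers.
    public static Vector2 operator +(Vector2 a, Vector2 b)
    {
        return new Vector2(a.X + b.X, a.Y + b.Y);
    }
    // Scalar multiplication, provided in both orders to stay unsurprising.
    public static Vector2 operator *(Vector2 v, double s)
    {
        return new Vector2(v.X * s, v.Y * s);
    }
    public static Vector2 operator *(double s, Vector2 v)
    {
        return v * s;
    }
}
Overloading + for something non-arithmetic (say, enqueueing a job) would be exactly the kind of semantic clash the answer warns against.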
136,596 | Possible Duplicates: git / other VCS - how often to commit? How often should I/do you make commits? The usage of source control is very different from one developer to another and from project to another. Some commit very often; others can spend a whole day or several days without committing (especially when they work on the project alone or they know that other team members are working on very different part of the project). Examples Sometimes, I've seen extremely small commits, both in real life and in webcasts and other learning material. Some examples, mostly from real life, are: A commit which solves a bug #... or implements a feature #... by changing one line of code. IMHO, it's a perfectly valid case for a commit, especially if the bug tracking system is linked to the version control and is updated automatically according to the revisions. Even without this link, it's useful to track which commit solved what, independently of the number of changes required to solve a bug or implement a feature. A commit which changes a single configuration setting (given that in the context, configuration settings must be in source control). IMHO, this could be merged sometimes with another commit, unless the previous setting breaks the build or introduces a bug or can affect other developers (for example a connection string which changed after the test database server was migrated). A commit which corrects spelling of a word, for example in a string displayed to the user. IMHO, in most cases, this can be merged with another commit (unless, again, it breaks the build). The only case where it cannot be merged is when, if left, the wrong spelling can be propagated through code and would be too complicated or impossible to change later, as with HTTP referer header . A commit which adds a comment to a method (while the method was already explicit enough) or solves another minor style-related rule. Example: in .NET Framework, StyleCop requires to document every method, and the XMLDoc comment for a constructor (which is method too) must begin with: Initializes a new instance of the <Class name here> class. A commit can enforce this last rule, replacing a comment in legacy code: Creates a new vehicle with the specified number of wheels. by: Initializes a new instance of the Vehicle class, using the specified number of wheels. In other words, the revision has no meaning other than to conform the piece of code to the style standards used in the codebase. IMHO, this can be merged with another commit in every case (after all, style-related rules must be enforced at commits to reject the commits of the code which doesn't match them), unless there are several changes in several places. Questions Am I wrong on those points? Is there such a thing as a commit too small, or is a practice of committing very often a best practice? Does it worth it to commit too small changes, given that it would "pollute" the revision log and make it more difficult to find the relevant changes among tiny changes nobody cares about and which don't break or unbreak the build, nor affect other developers? | Wrong is too strong a word, because this is a matter of style/preference. However, my preference is apparently diametrically opposed to yours. I much prefer seeing coding style edits and small documentation changes done in their own commits. If I'm doing a code review, or trying to figure out where a bug was introduced, it's much easier if each commit doesn't touch on half a dozen random issues. 
In my world an ideal commit fixes one bug, or completes one feature. That way if the fix or new feature turns out to have some serious problem, backing it out is much easier: you don't have to try and untangle a grab-bag of changes unrelated to the problem. Now, if you have a batch of changes that are documentation/formatting issues only, feel free to bundle those together, and your commit comment should indicate "Formatting/documentation changes only". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/136596",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
136,629 | While developing (either features or bug fixes) I sometimes happen to discover bugs that are not directly related to what I'm working on. What should I do in that situation. Just fix it? Try to remember to fix it later? Write it down somewhere? Or enter it into the bug tracking system? I generally enter it into the bug tracking system and let the process play itself out (i.e. triaging, assigning, etc.). However I have hardly ever seen another developer enter a bug. (Why is that?) | If you discover a bug, I can't think of any good reason not to enter it into the bug tracking system, whether you fix it or not. That's what the bug tracking system is for, after all. In some cases it might make more sense to report it to a QA person who has more experience dealing with the system, but in any case the bug should be tracked. It's possible that there might be some reason, valid or not, that developers shouldn't be entering bugs. One possible reason might be that the bug tracking system is visible to outsiders, and having too many reported bugs looks bad. That's a very bad reason, which should be addressed in some other way that still allows bugs to be tracked. Ask your boss. (Of course if there's a bug in code that you're still working on, and it doesn't show up in anything that's been released, there's no need to track it in the system, though a TODO comment in the source code may be a good idea. To take an extreme case, "This code won't compile because I haven't yet typed the semicolon at the end of this line" is not a reportable bug.) As for why other developers don't enter bugs, you'll need to ask them. They probably should. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/136629",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4526/"
]
} |
136,900 | Is it a good practice to call a method that returns true or false values in an if statement? Something like this: private void VerifyAccount()
{
if (!ValidateCredentials(txtUser.Text, txtPassword.Text))
{
MessageBox.Show("Invalid user name or password");
}
}
private bool ValidateCredentials(string userName, string password)
{
string existingPassword = GetUserPassword(userName);
if (existingPassword == null)
return false;
var hasher = new Hasher { SaltSize = 16 };
bool passwordsMatch = hasher.CompareStringToHash(password, existingPassword);
return passwordsMatch;
} or is it better to store the result in a variable and then compare it in an if/else, like this: bool validate = ValidateCredentials(txtUser.Text, txtPassword.Text);
if(validate == false){
//Do something
} I am not only referring to .NET; I am asking about all programming languages - it just so happens that I used .NET as an example. | As with all these things, it depends. If you aren't going to use the result of your call to ValidateCredentials , then there's no need (other than for debugging purposes) to store the result in a local variable. However, if it makes the code more readable (and hence more maintainable) to have a variable, then go with that. The code isn't going to be measurably less efficient. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/136900",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48185/"
]
} |
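A small follow-up sketch to the answer above, reusing the question's ValidateCredentials method; the LogAttempt helper is invented for illustration. Calling the method directly inside the if reads fine when the result is used once, and a well-named local starts to pay for itself as soon as the result is reused:
// Result used once: the direct call is clear enough.
if (!ValidateCredentials(txtUser.Text, txtPassword.Text))
{
    MessageBox.Show("Invalid user name or password");
}
// Result used more than once: name it, then reuse it.
bool credentialsAreValid = ValidateCredentials(txtUser.Text, txtPassword.Text);
if (!credentialsAreValid)
{
    MessageBox.Show("Invalid user name or password");
}
LogAttempt(txtUser.Text, credentialsAreValid);  // hypothetical audit-logging helper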
136,908 | A comment on this question: Checking if a method returns false: assign result to temporary variable, or put method invocation directly in conditional? says that you should use !boolean instead of boolean == false when testing conditions. Why? To me boolean == false is much more natural in English and is more explicit. I apologise if this is just a matter of style, but I was wondering if there was some other reason for this preference of !boolean ? | When I see a line like if (!lateForMeeting()) , I read that as "If not late for meeting" , which is quite straight-forward to understand, as opposed to if (lateForMeeting() == false) which I'd read as "If the fact that I'm late for meeting is false" . They're identical in meaning, but the former is closer to how the equivalent English sentence would be constructed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/136908",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8633/"
]
} |
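One habit that makes the ! form read even more like the English sentence is naming boolean locals positively; the helper names in this C# sketch are invented for illustration:
bool lateForMeeting = IsLateForMeeting();  // hypothetical helper
if (!lateForMeeting)                       // reads: "if not late for meeting"
{
    GetAnotherCoffee();                    // hypothetical
}
if (lateForMeeting == false)               // same meaning, reads like a statement about a fact being false
{
    GetAnotherCoffee();
}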
136,942 | Just wondering (now that I've started with C++ which needs a compiler) why Python doesn't need a compiler? I just enter the code, save it as an exec, and run it. In C++ I have to make builds and all of that other fun stuff. | Python has a compiler! You just don't notice it because it runs automatically. You can tell it's there, though: look at the .pyc (or .pyo if you have the optimizer turned on) files that are generated for modules that you import . Also, it does not compile to the native machine's code. Instead, it compiles to a byte code that is used by a virtual machine. The virtual machine is itself a compiled program. This is very similar to how Java works; so similar, in fact, that there is a Python variant ( Jython ) that compiles to the Java Virtual Machine's byte code instead! There's also IronPython , which compiles to Microsoft's CLR (used by .NET). (The normal Python byte code compiler is sometimes called CPython to disambiguate it from these alternatives.) C++ needs to expose its compilation process because the language itself is incomplete; it does not specify everything the linker needs to know to build your program, nor can it specify compile options portably (some compilers let you use #pragma , but that's not standard). So you have to do the rest of the work with makefiles and possibly auto hell (autoconf/automake/libtool). This is really just a holdover from how C did it. And C did it that way because it made the compiler simple, which is one main reason it is so popular (anyone could crank out a simple C compiler in the 80's). Some things that can affect the compiler's or linker's operation but are not specified within C or C++'s syntax: dependency resolution external library requirements (including dependency order) optimizer level warning settings language specification version linker mappings (which section goes where in the final program) target architecture Some of these can be detected, but they can't be specified; e.g. I can detect which C++ is in use with __cplusplus , but I can't specify that C++98 is the one used for my code within the code itself; I have to pass it as a flag to the compiler in the Makefile, or make a setting in a dialog. While you might think that a "dependency resolution" system exists in the compiler, automatically generating dependency records, these records only say which header files a given source file uses. They cannot indicate what additional source code modules are required to link into an executable program, because there is no standard way in C or C++ to indicate that a given header file is the interface definition for another source code module as opposed to just a bunch of lines you want to show up in multiple places so you don't repeat yourself. There are traditions in file naming conventions, but these are not known or enforced by the compiler and linker. Several of these can be set using #pragma , but this is non-standard, and I was speaking of the standard. All of these things could be specified by a standard, but have not been in the interest of backward compatibility. The prevailing wisdom is that makefiles and IDEs aren't broke, so don't fix them. Python handles all this in the language. For example, import specifies an explicit module dependency, implies the dependency tree, and modules are not split into header and source files (i.e. interface and implementation). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/136942",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50929/"
]
} |
136,987 | I just started a diploma in software development. Right now we're starting out with basic Java and such (so right from the bottom you might say) - which is fine, I have no programming experience apart from knowing how to do "Hello World" in Java. I keep hearing that mathematics is pertinent to coding, but how is it so? What general examples would show how mathematics and programming go together, or are reliant on one another? I apologize of my question is vague, I'm barely starting to get a rough idea of the kind of world I'm stepping into as a code monkey student... | First off: I am a mathematician - a professional one (in that I get paid for doing maths). I am not a programmer. I do do some programming, but very definitely of the Cargo Cult variety (see first comment to https://tex.stackexchange.com/q/451/86 and my response) and nothing of the sort that would normally bring me to this site (indeed, I registered here to post this answer after seeing a link to it in the TeX chat room). The summary of my answer is: Mathematics is Programming . I recently got to teach a mathematics course to a non-mathematical group of students. They were the programming section. I thought this was fantastic! At last, I was going to be able to teach mathematics to people who already understood the basic ideas and who already had a rudimentary toolkit for doing maths. I was incredibly disappointed when I asked how many of them had actually written a program and got an answer somewhere between 0 and 1. Before I go on, I should clarify a few things. There are areas of mathematics that concern themselves directly with programming and are to do with evaluating algorithms and classifying languages and such-like. I'm not talking about those. There is also a program which is trying to translate all of mathematics into a formal language that can be evaluated by a computer. This is a bit closer to what I'm talking about, but even so to focus on that would miss the main part of what I'm trying to say. The mathematics that I do and the programming that I do are almost completely unrelated by topic. The connection between them is on a different level. Where I'd like to start is with the comment on the main question: If that is doing math, then all human activity is a form of math. If that's the case then the word math doesn't have a useful meaning, because it can't be used to distinguish one activity from another. Yes, that is doing maths. But "maths" is still a useful word because, as the song says, "It ain't what you do, it's the way that you do it.". I would say that I am doing maths when I am approaching something in a mathematical way . Sometimes, that is "hard core" mathematics: formulating definitions, proving theorems. Sometimes, it isn't. Sometimes, it's writing silly little programs so that my kids can learn their spelling words. This is what mathematics helps me with when I program: Abstraction This is probably the most important transferable skill from mathematics. By this, I mean the ability to strip away all the unnecessary stuff and focus on the important properties. Perspective If I could only choose one thing that all my students were to learn, this would be it: The ability to change ones point of view to suit the problem. We commonly treat this in linear algebra with change-of-basis formulae (that lead to horrendous matrices and horrendous complications), but it is much more applicable than that. 
At heart, it is the idea that just because something has been presented to you in one fashion, that doesn't have to be the way you work with it. This separates ones view of the thing itself from the way it has been presented. This can be extremely practical: it is all about making something useful or efficient . If I have a list of vectors and it is more efficient to store them as a list of x-coordinates and a list of y-coordinates, so be it . Form versus Function Leading on from the above; if a thing can be presented in many different ways then it is no longer fair to say that one particular presentation is the thing. To misquote that song again: "It ain't what you are it's what you do " that matters. I could go on, but those are the ones that spring to mind. Now, there are probably lots of (negative) reactions to what I've written so far. One will be "That's not maths, that's just good sense." (or bad sense) to which I refer to my remark above agreeing with the sentiment that "all human activity is a form of math". Another will be "That isn't the type of maths meant in the question.". This is almost certainly true and here I actually have a lot more sympathy with the person who said "At least I haven't touched the maths for 10 years,". He or she is wrong, of course, they have been doing maths for 10 years because whenever they wrote a program they were doing maths . They just didn't realise it. And here we get to the point about why I was delighted with the (sadly unrealised) possibility of teaching mathematics to students who were already programmers. I do actually use some "real maths" in my programs. I recently coded a fun 3D shape explorer which involved using some maths to figure out the projections and other transformations that I had to apply to my data. I was mildly amused to find myself actually coding quaternions! But of course, the maths that was involved was trivial compared to the maths that I do when I'm working. It was "back of envelope" stuff. That type of maths, then I agree with the sentiment that you pick it up when you need it, and if you need something more complicated than you can find on Wikipedia then you find a real mathematician to do it for you. However, in order that you can pick it up when you need it then you need to have learnt something . That thing might not be anything you ever actually use, but having learnt that something makes it all the easier to pick up what you do actually use later in life. So this is where I disagree with Coder: you do need to learn some mathematics if you are ever going to use any mathematics and you need to learn it from the mathematical side (which doesn't mean proving theorems, by the way). And so finally to the "Mathematics is Programming". You can learn all of these things from being a good programmer. And if you've learnt these things, you will find mathematics much easier because you will understand that when we talk about a vector in a vector space then it's just an instance of the class Vector which means that we can do all the things that Vector does to that instance: add, subtract, scale, and so forth. That's why I would love to teach mathematics to programmers. But, speaking as a mathematician, I would say that the first of these, "Abstraction", is easier to learn in mathematics than in programming because mathematics is the pursuit of abstraction. Whenever we see some behaviour our training is always to ask "What is it about that thing that makes it behave in that way? 
What if I took another thing that was similar, would it behave in the same way? How much of what that thing is would I have to lose for it to stop behaving like that?" (Taking this to the extreme leads to "centipede mathematics" - search for the term). But we don't do this with (just) "real world" objects (whatever they are), we do this with things that have already been abstracted. This has gone on long enough, so let me close with one of the classic mathematician jokes: A mathematician and a physicist both attended a seminar on some new model involving 24 dimensional space. Afterwards, they were discussing it and the physicist remarked: "That was really hard. I mean, how does one visualise 24-dimensional space?" to which the mathematician replied: "Oh, it's easy. Just visualise n-dimensional space and then set n = 24.". Added 2012-03-2 There were quite a few comments on this answer expressing a variety of views. These have now been deleted by a moderator on the understanding that I would try to take into incorporate them (or respond to them) in my answer. However, I'm not sure that I can. Reading those comments and the rest of what's on this page, I can only come to the conclusion that there is a huge misunderstanding as to what mathematics actually is. Moreover, I don't feel competent enough to explain it. Fortunately, someone has already linked to Lockhart's Lament so I'll defer the explanation to that. Whilst I might have put it differently (as I grew up in a scientific environment, I would have put more emphasis on the experimental nature of mathematics), I don't think I could put it better . I do still think I can add something. As well as the misunderstandings as to what mathematics is , there are also misunderstandings as to what "doing mathematics" means. I see two almost contradictory stances: Mathematics is about equations and formulas. So there's no need to study it because Wikipedia exists (this is almost the converse of Euler's apocryphal challenge to Diderot ). Mathematics is about theorems and definitions. So there's no need to study it as programs never prove anything (which is about as complete a fallacy as ... insert favourite fallacy here). Whilst the two stances contradict each other, they end up in the same place: there's no point in a programmer learning any mathematics - and most assuredly not from a mathematician! After all, what do they know about anything? Anything that a programmer really needs to know can be found in Wikipedia, or cribbed off someone else. Above, I described myself as a Cargo Cult Programmer. I bet most of you had a private giggle to yourself and thought, "Ah yes, I bet I know what your programs look like then.". You probably felt a bit smug and superior (though I'm sure you felt bad about feeling smug and superior). What I've described just above is Cargo Cult Mathematics. So when I say that you should learn a bit of mathematics to understand how mathematics works, I'm saying it for exactly the same reason as you might if you saw a bit of code that I'd written: "How much easier your life would be if you'd stop cut-and-pasting code from StackOverflow and learnt just a bit about how to do it properly.". The most important thing, though, is that you should learn it from mathematicians. Why so? Here's an analogy. The language that I'm most adept at is TeX. (Says it all, really!). Now, suppose I want to learn a bit more about TeX and it just so happens that Don Knuth is in town and has offered to give some tutorials on TeX. 
Or I could just read about it on Wikipedia. Or maybe it's Perl and Larry Wall, or C# (is that the right one?) and Jon Skeet. It may well be that these people are not the best teachers , but they sure make up for it in the amount that they know! And that's what mathematicians are. We're the people who write the actual language, who then write the libraries that you use. Of course, you don't have to know how to prove a theorem - you're not going to write a library! But if you know a bit about how we think, then it might help you understand why we wrote the library the way we did, and if you understand that it might help you make better use of it. There is a middle ground between looking up equations on Wikipedia and proving the Poincaré conjecture, just as - to refer to Lockhart's lament - there is a middle ground between "I don't really know much about art, but I know what I like" and being Monet, and between "Where's the 'ANY' key?" and being Don Knuth. If you are still in university then you have an amazing opportunity to learn from people who are experts in their area and who - for some reason - are willing to spend their time explaining it to you. The other point I wanted to expand on a bit was why as a programmer you should not be scared of learning a bit more mathematics. It's not the Deep Connections, nor the usefulness. It's that your ability to program a computer can directly help you learn mathematics. I just want to mention a few. Understanding variables. So many people get confused by simple statements like "Let n be a natural number ...". Or "Let epsilon > 0". There are places in mathematics where it's important to remember the scope of a variable. These are all commonplace in programming. Learn to translate a mathematical statement into a program and you'll find it much easier to keep track of what's what. The nature of proof. If you've ever written a test, or written a program to be used by someone else, then you understand the core of proofs. When you do that, you have to know that whatever the user puts in, you can deal with it (insert obligatory xkcd reference here). That's all a proof is! A demonstration that whatever the "user/universe" puts in, the statement will hold. Experimentalists will lean to the "If it works under normal circumstances, it's true" but programmers know that there is always that kid who will try Alt+G+Shift+ÅØÆ just to see what happens. DRY. Sorry to break this to you, but we invented it, not you. We've been "not repeating ourselves" for millennia. That's why I have a copy of Euclid's elements on my shelves and it's still useful . And there's more. If I knew a bit more about programming, I'd write a book called "Mathematics for Programmers" where the aim wasn't to teach "The mathematics that programmers should know" but "mathematics that everyone should know, but optimised for programmers". But I'll probably never know enough about programming to write it - unless someone offers to collaborate with me! I'll leave it there. Probably if I thought more, I'd change what I've written; hopefully I'd explain it better. In a months' time I might even disagree with parts of it. If anyone wishes to argue further, or comment otherwise, probably best not to do so in the comments here. You know where to find me . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/136987",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48536/"
]
} |
137,103 | I'm new at computer science and programming, and I was wondering, is there a difference between computer science and programming? and do you get to choose to study only one of them at the university, or both of them? | Computer science is the study of what computers [can] do; programming is the practice of making computers do things. Take a look at the courses/syllabi offered by universities you're interested in to find out whether the course is a CS course, a programming course, something else (for example Software Engineering) or even a combination of the above. Many courses advertised as "computer science" offer a significant programming component, which may be so that you can put the theoretical parts of the course into practice, or may be for their own sake so that you can learn the skill of making programs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137103",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48198/"
]
} |
137,123 | I was looking through the AvSol Coding Guidelines for C# and I agree with nearly everything, but I'm really curious to see what others think of one specific rule. AV1500 Methods should not exceed 7 statements A method that requires more
than 7 statements is doing too much, or has too many responsibilities.
It also requires the human mind to analyze the exact statements to
understand what the code is doing. Break it down in multiple small and
focused methods with self-explaining names. Do most of you follow this rule? Even if there's little to be saved from creating a new method (Your code is still DRY ) aside from greatly increasing readability? And is your number still as low as 7? I would tend more toward 10. I'm not saying I violate this rule all over the place--on the contrary, my methods are 95% small and focused but saying you should never violate this rule really blew me away. I really just want to know what everyone thinks of NEVER violating this rule (It's a '1' on the coding standard--meaning NEVER do this). But I think you'd have trouble finding a codebase that doesn't. | This is a "standards smell" to me. Whenever I see coding standards with specific limits in them, I worry. You almost always run into a case where a method needs to be bigger than the standard allows (whether it's line length/count, number of variables, number of exit points, etc). Standards should be more like guidelines, and allow sufficient leeway for exercising good judgement. Don't get me wrong, it's good to have standards, but they shouldn't become "micromanagement by proxy". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137123",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44003/"
]
} |
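As an illustration of treating the limit as a guideline rather than a hard rule, here is a C# method that hovers right around seven statements; the scenario (a standard amortised-loan payment) is only an example. It has one clear responsibility, and carving it up purely to satisfy a statement count would scatter a single idea across several methods without making it any easier to read:
public static double MonthlyPayment(double principal, double annualRate, int years)
{
    double monthlyRate = annualRate / 12.0;
    int months = years * 12;
    if (months <= 0)
        throw new ArgumentOutOfRangeException("years");
    if (monthlyRate == 0.0)
        return principal / months;                              // interest-free edge case
    double factor = Math.Pow(1.0 + monthlyRate, months);
    return principal * monthlyRate * factor / (factor - 1.0);   // standard annuity formula
}
When a method grows past the limit because it is doing two or three distinguishable jobs, that is the point at which splitting it genuinely helps.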
137,158 | In Java, which is more highly recommended, and why? Both types will throw exceptions, so in that regard handling them is the same. assert is slightly shorter, but I'm not sure how much that matters. public void doStuff(Object obj) {
assert obj != null;
...
} vs public void doStuff(Object obj) {
if (obj == null) {
throw new IllegalArgumentException("object was null");
}
...
} | BEWARE! Assertions are disabled at runtime unless you explicitly enable them when launching the JVM (the -ea flag). Java Assertions are not to be used on production code and should be restricted to private methods (see Exception vs Assertion ), since private methods are expected to be known and used only by the developers. Also assert will throw AssertionError which extends Error not Exception , and which normally indicates you have a very abnormal error (like "OutOfMemoryError" which is hard to recover from, isn't it?) that you are not expected to be able to handle. Remove the "enable assertions" flag, check with a debugger, and you'll see that you never reach the assert check... since that code is not executed (again, when "-ea" is not passed). It is better to use the second construction for public/protected methods, and if you want something that is done in one line of code, there is at least one way that I know of. I personally use the Spring Framework 's Assert class, which has a few methods for checking arguments and which throw "IllegalArgumentException" on failure. Basically, what you do is: Assert.notNull(obj, "object was null"); ... Which will in fact execute exactly the same code you wrote in your second example. There are a few other useful methods such as hasText , hasLength in there. I don't like writing more code than necessary, so I'm happy when I reduce the number of written lines by 2 (2 lines > 1 line) :-) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137158",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1767/"
]
} |
137,165 | I've been programming for a while, I've written some rudimentary programs, and I want to keep learning. I've reached that point where you just don't know what to learn next, and I'd like to ask a question for my own curiosity. The question, in a nutshell, is if you can combine multiple programming languages into 1 result? For example, can this code be possible? <html>
cout << "Hello world!";
</html> or import java.util.Scanner;
cout << "Insert a number from 1 to 10";
Scanner n = new Scanner(System.in);
System.out.println("The value you entered was " + n.nextLine()); This feels like a silly question but I can't possibly know if it's possible or not, so that's why I'm asking it. In this question I noticed he is using Python code in HTML code; if my above example is not possible, what did he do? | Your first example is sort of possible. Usually such things happen in PHP (and other related web-programming languages) like this: <HTML>
<?PHP
call_some_php_function(1,2,"a","b"); /* This may return nothing, a text string, or actual HTML markup code */
?>
</HTML> Some important points to note about this example: HTML is NOT a programming language, it is a markup language. The PHP and HTML are not executed/interpreted in the same place: PHP code is executed by a PHP interpreter running on the server and the result is "injected" into the surrounding HTML. Then that whole blob is sent to the client/browser which renders the complete HTML. Your second example looks like some sort of mash-up of C++ and Java. It's possible to have compiled modules written in different languages talk to each other, but to combine Java and C++ in the same source file would be extremely confusing and difficult: how would the compiler know which statements are Java and which are C++? I suppose in theory you could write a special compiler/pre-processor with "language" indicators such as: Java
{
import java.util.Scanner;
}
C++
{
cout << "Insert a number from 1 to 10";
}
Java
{
Scanner n = new Scanner(System.in); //Actually, this line *could* be a C++ line - it's hard for me to tell just by looking at it.
System.out.println("The value you entered was " + n.nextLine());
} But I'm honestly not sure you'd gain anything useful by doing this. Also, how would this hybrid language environment handle language features which are incompatible between the two? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137165",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47546/"
]
} |
137,290 | Look here: a typical holy war on tabs vs spaces . Now look here: elastic tabstops . All problems solved, and a bunch of very useful new behaviours added. Are elastic tabstops even mentioned in that tabs vs spaces discussion? Why not? Are there drawbacks to the elastic tabstop idea so serious that nobody has ever implemented them in a popular editor? EDIT : I apologise for putting too much emphasis on "why aren't they mentioned". That wasn't really what I intended; that question is possibly even off topic. What I really mean is, what are the biggest drawbacks of this that prevent wider adoption of an obviously beneficial idea? (in an ideal world where everything supports it already) (Turns out there's already a request on the Microsoft Connect for a Visual Studio implementation of elastic tabstops , and a request in Eclipse too. Plus there's a question asking about other editors that implement elastic tabstops ) | Holy Wars are Subjective Nick's elastic tabstops are an amazing concept that could help a lot of people agree on a workable solution, though I highly doubt it would entirely end this Holy War: it is, after all, also a matter of taste and many programmers will not move an inch from their position on this matter, even at the cost of compromise. So that would be a first reason. For instance, a lot of people on the "spaces" side will still dislike it as it requires an additional piece of logic in your software for a decent rendering (e.g. simply viewing a changeset in your SCM's webview). Implementation Issues But the most obvious reason is just its technical barrier to entry : it's a fundamentally different concept from what has been implemented for a number of years (if not decades) in IDEs and text editors. It would require to rewrite some of them to process lines in a fairly different fasion, which makes it difficult for older and bigger systems that have a higher chance of suffering of deep and tight coupling in their line processing code. It is, however, a lot easier to do when you start from scratch (think of Nick's demo or of Go 's tabwriter package). For a personal anecdote, I remember approaching the author a while back to ask if there was any emacs support in sight, and in this particular case he mentioned this as the reason for it not being trivial. He also asked for help from the community to help implement this feature and bring it to the masses. Do We Care Enough? A third reason, is that some developers are not that hung up on the matter and don't really care so much that they would go the extra mile to support the effort. In most cases, the spaces-vs-tabs conflict is not a business blocker, so there's not so much drive behind the issue. If you want it, you'll have to fight for it. Which is doable in open-source software. And if you change enough of these, closed-source ones will have to follow at the risk of losing to some of their userbase, if an ever so small part of it. So, if you want it, give Nick a hand. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137290",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3278/"
]
} |
137,385 | I'm learning new languages as I go along, I write code for very basic programs in multiple languages, and I go to classes. I've read books, articles, lessons, videos, you name it, however I can't seem to get the hang of certain things. For example I never understood pointers - what they are good at. (NOT PART OF THE QUESTION - retagging with "Pointers" is not required...) My question however, is not what pointers do, but instead how can I understand things like that? If, after reading a book or an article about a certain part of programming, and I don't understand, what do I do? Writing code in a certain feature of programming surely helps, however it doesn't actually help with understand that much. The theoretical part is important in understanding. | If you have troubles with understanding an abstract concept, just go one level down. If you cannot get the pointers in C - go down to the assembly level. Still having problems? Learn more about the hardware, all the way down to the logic gates. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137385",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47546/"
]
} |
137,501 | AFAIK, my class extends parent classes and implements interfaces. But I ran across a situation where I can't use implements SomeInterface: the declaration of a generic type parameter. For example: public interface CallsForGrow {...}
public class GrowingArrayList <T implements CallsForGrow> // BAD, won't work!
extends ArrayList<T> Here using implements is syntactically forbidden. At first I thought that using an interface inside <> was forbidden altogether, but no - it is possible, I only have to use extends instead of implements. As a result, I am "extending" an interface. This other example, however, works: public interface CallsForGrow {...}
public class GrowingArrayList <T extends CallsForGrow> // this works!
extends ArrayList<T> To me this seems like a syntactic inconsistency. But maybe I don't understand some of the finer points of Java 6? Are there other places where I should extend interfaces? Should the interface that I mean to extend have some special features? | In the case of generic type variables the compiler doesn't actually care if T is a class, an interface, an enum or an annotation. All it cares about is that it is a type with a given set of sub- and super-types. And there is no reason to complicate the syntax just because in one other place (where you actually implement an interface) the distinction between classes and interfaces is relevant (for example, if you implement an interface you need to implement all the methods it defines, but you don't need to if you extend a (non-abstract) class). Assuming for a moment that you'd have to write implements here, then you'd also need a separate syntax for enum values (instead of simply writing <E extends Enum<E>>) and annotations (which you can easily declare using <A extends Annotation>). The only place where you need to write implements is at the point where you actually implement the interface. At that point (and that point only) the difference is important, because you must implement the methods defined in the interface. For everyone else, it doesn't matter if any given A is a base class or an implemented interface of B: it's a super-type and that's all that matters. Also note that there's one other place where you use extends with interfaces: interface A {
}
interface B extends A {
} At this point implements would be wrong, because B does not implement A . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137501",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44104/"
]
} |
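A minimal Java sketch of the bounded type parameter discussed in the entry above. GrowingArrayList and CallsForGrow come from the question; the grow() method, the Plant class, the growAll() helper and the demo class are hypothetical additions, so treat this as an illustration rather than the asker's actual code.

import java.util.ArrayList;

interface CallsForGrow {
    void grow();
}

// The bound is written with "extends" even though CallsForGrow is an interface.
class GrowingArrayList<T extends CallsForGrow> extends ArrayList<T> {
    // Legal because every T is known to be some sub-type of CallsForGrow.
    void growAll() {
        for (T element : this) {
            element.grow();
        }
    }
}

class Plant implements CallsForGrow {   // "implements" appears only at the implementation site
    @Override
    public void grow() {
        System.out.println("growing");
    }
}

public class BoundedTypeDemo {
    public static void main(String[] args) {
        GrowingArrayList<Plant> garden = new GrowingArrayList<Plant>();
        garden.add(new Plant());
        garden.growAll();
    }
}

To the compiler the bound only states "T is a sub-type of CallsForGrow"; whether that super-type happens to be a class or an interface never matters at this point.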
137,508 | I'm selling software that lets users manipulate critical information. In my licence contract (drafted by a lawyer specialized in this field), I've got a standard clause reading: THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE. I've seen equivalents in almost every single software I've been using so far. So far, all of our clients accepted that without difficulty, but now one potential client is contesting it. Actually he's telling me something along the lines of : You mean that if I use your software to buy something at $10 from a
third party company, and there's a bug in your software (as you don't
guarantee there aren't any), and you transmit a $20 order, I shall pay
the $10 difference, not you? I'm a bit stuck between what seems like common sense on his part, and the fact that my insurance companies would probably not insure me for those kind of risks, and if you consider we're talking millions, not plain $, it doesn't feel comfortable being responsible for that kind of potential loss. So far, the best answer I've come up with is that everybody does it in the industry (Microsoft doesn't guarantee that the mail you've sent using Outlook won't be altered, e.g., by turning all the $10's into $20's ...) Any advice on how to handle this? (apart from doing our best to ship bug-free software, of course :p) | "If I gave you such a warranty, and I gave it to all my other customers too, probably I would be out of business very soon. All it takes is a single malicious customer who finds a bug and uses it to deliberately cause large fictive damages. Because of the complexity of software development, it's currently next to impossible to create software that doesn't contain any single bug. But even if my software was perfect, other components of the whole system, like the hardware, the operating system, the libraries I use, the database system etc. could still contain bugs, and they most likely do; so whenever something goes wrong, you would probably try to make me pay for the damages, because I'm the only one who gave you a warranty and the problem eventually shows up in my program, since this is the frontend you are working with. Defending against such claims is something I simply cannot afford." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137508",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3342/"
]
} |
137,520 | Why would there be any pressure if everyone knows what they are doing and the projects are accurately estimated? If there's pressure, or even high pressure, then it implies what they are currently doing is not working, why would any good programmer want to join a team like that? Are these kind of job posting failed at trying to show off or are they really just being honest? Or is there really some good reasons for having pressure? | I've always considered this code for "we're under-resourced and have unrealistically aggressive deadlines." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137520",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35901/"
]
} |
137,581 | I know I can throw exception from constructor in PHP but should I do it? For example, if a parameter's value is not as I expected it. Or should I defer throwing an exception till a method is invoked. What are advantages and disadvantages in both cases? | Why would you postpone throwing the exception? If you know that the object can't properly instantiate with the given parameters, then you should definitely throw an exception. Otherwise, somebody might test your object for null, which it won't be, and could assume everything went as expected. There are a lot of things that can be done to your object without calling a method on it: it could be added to a list, it could be compared, it could be sent as a parameter, etc etc etc. All of these are things that should not have happened, considering it is not a valid object. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137581",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46764/"
]
} |
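The answer above is language-agnostic, so here is a hedged sketch of the same fail-fast idea in Java rather than PHP; the Account class and its validation rules are invented for the example.

public class Account {
    private final String owner;
    private final double balance;

    public Account(String owner, double balance) {
        // Validate in the constructor so an invalid instance can never be observed.
        if (owner == null || owner.isEmpty()) {
            throw new IllegalArgumentException("owner must not be empty");
        }
        if (balance < 0) {
            throw new IllegalArgumentException("balance must not be negative");
        }
        this.owner = owner;
        this.balance = balance;
    }

    public String getOwner() {
        return owner;
    }

    public double getBalance() {
        return balance;
    }
}

Callers either receive a fully valid object or an exception at the point of construction; there is no window in which a half-initialized object can be added to a list, compared or passed along.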
137,611 | In my job all developers that resolve a bug have to add a new unit test that warns about this type of bugs (in the case it occours again). If a unit test is not possible (for example, a webpage design issue), then QA department has to create a test case to manually check it. The idea behind this is that if a defect has not been detected before the product release is because there isn't an appropriate unit test to detect it. So the developer has to add it. The question is: is this common in any software development methodology? This technique has a name? I would like to learn more about it, but I need some information to start with it. | This is quite common. We use this in our team. For Every production defect, the developer must add a note on the root cause of the problem , add a failing unit test and add a test impact analysis before the ticket can be pushed to dev state to check in the code. The failing unit test must pass before we can push the code to production. I don't think this has a specific name except that general "regression testing". This is very useful and we have started seeing a rise in the quality of the product after we started following this process. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137611",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48803/"
]
} |
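A small JUnit 4 sketch of the practice described above; the Invoice class, the defect number and the scenario are all hypothetical, and the production class is inlined only to keep the example self-contained.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class InvoiceRegressionTest {

    // Minimal stand-in for the real production class.
    static class Invoice {
        private double total;

        void addLine(int quantity, double unitPrice) {
            total += quantity * unitPrice;
        }

        double total() {
            return total;
        }
    }

    // Added together with the fix for (hypothetical) defect #4711, where an invoice
    // with no lines reported a bogus total. The test failed before the fix and now
    // pins the behaviour down so the bug cannot silently return.
    @Test
    public void emptyInvoiceHasZeroTotal() {
        assertEquals(0.0, new Invoice().total(), 0.0001);
    }

    @Test
    public void totalIsSumOfLines() {
        Invoice invoice = new Invoice();
        invoice.addLine(2, 1.50);
        invoice.addLine(1, 3.00);
        assertEquals(6.0, invoice.total(), 0.0001);
    }
}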
137,640 | I'm assuming there's a history to it, but why does the stack grow downward? It seems to me like buffer overflows would be a lot harder to exploit if the stack grew upward... | I believe it comes from the very early days of computing, when memory was very limited, and it was not wise to pre-allocate a large chunk of memory for exclusive use by the stack. So, by allocating heap memory from address zero upwards, and stack memory from the end of the memory downwards, you could have both the heap and the stack share the same area of memory. If you needed a bit more heap, you could be careful with your stack usage; if you needed more stack, you could try to free some heap memory. The result was, of course, mostly, spectacular crashes, as the stack would occasionally overwrite the heap and vice versa. Back in those days there were no interwebz, so there was no issue of buffer overrun exploitations. (Or at least to the extent that the interwebz existed, it was all within high security facilities of the united states department of defense, so the possibility of malicious data did not need to be given much thought.) After that, with most architectures it was all a matter of maintaining compatibility with previous versions of the same architecture. That's why upside-down stacks are still with us today. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137640",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11833/"
]
} |
137,715 | This is more of a style question, but it is something I am currently pondering for a project of mine. Assume that you're creating an application which is modeling a school. So there are entities like Student, School, etc. Now this is all fine and intuitive until you get down to Class, as (in most languages) Class is a reserved word. So, given that Class is a reserved keyword, what would you call such an entity that models a school class? | I'd pick another term because the word Class can be ambiguous. Does class refer to: The group of students taking a course? The whole course for this semester? The abstract course taught in multiple semesters? The group of students expected to graduate at a specific time? Some other classification of students, i.e. Advanced Class, Special Needs class. A specific lecture And that's without even leaving the academic realm. Chances are you can find a better word to fit whatever you are trying to model. Class will only lead to some confusion. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137715",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7123/"
]
} |
137,764 | What kind of skills determine a person that is capable of debugging code with ease? Some time ago my friend carried out an interview with a relatively good programmer. The programmer got hired. He could write good code, understood the frameworks and design patterns. The thing he was missing was - debugging skills. He could not debug at all and finding problems with his or someone else's code was huge pain for him. Since then we are thinking about how we can assess or estimate a person's debugging skills. So the first question is: what skills determine if a person can effectively debug software? And the second: how to test those skills during the interview? | If the first thing the person wants to do is look at the code and step through it with a debugger that person is not a great troubleshooter. If you don't already have a plan of action and you dive into the debugger blind you are basically Easter Egging. This is true for ANY kind of troubleshooting. In an interview situation a person that asks how the system operates and asks about history of the system would be somebody that might be a good troubleshooter. A person that thinks system first and mechanics second could be a good troubleshooter. This is true of any complex system. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137764",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48869/"
]
} |
137,941 | "Extract Till You Drop" is someting I've read in Uncle Bob's blog, meaning that a method should do one thing alone be good at it. What is that one thing? When should you stop extracting methods? Let's say I have a login class with the following methods: public string getAssistantsPassword()
public string getAdministratorsPassword() These retrieve the respective accounts password in the database. Then: public bool isLogInOk() The method compares the password that has been called or selected, and sees if the password the user provided is in the database. Is this an example of "Extract Till You Drop"? When will you know when you are doing too much of extracting? | You've extracted too much when your code itself is more clear than the method name . Keep in mind, when not sure, almost any programmer I've seen has been building much larger and monolithic methods than optimal, rather than the other way around. A method with just few lines of code is quite typical of well designed app/library. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137941",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48185/"
]
} |
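A hypothetical Java sketch of the rule of thumb above, loosely following the question's login example; none of these names come from the original code. The first extraction earns its keep, while the second is the point where the body is already clearer than any name you could wrap around it.

public class LoginService {

    // Worth extracting: the name summarises the few lines it hides.
    public boolean isLoginOk(String account, String enteredPassword) {
        String storedPassword = loadStoredPassword(account);
        return storedPassword != null && storedPassword.equals(enteredPassword);
    }

    // Extracted too far: "value != null" at the call site would read just as clearly.
    private boolean isNotNull(Object value) {
        return value != null;
    }

    private String loadStoredPassword(String account) {
        // Placeholder for the real database lookup.
        return "secret";
    }
}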
137,994 | I've seen it commonly repeated the object-oriented programming is based on modeling the real world but is it? It seems to me that is not true of anything outside of the business layer. My GUI classes/data access classes aren't modeling anything in the real world. Even in my business layer, I've got classes like observers, managers, factories, etc. which aren't real world objects. I try to design my classes to take advantage of things like encapsulation but is the real world encapsulated? While some objects I create are modeling real-world objects, would not pre-OOP code do the same? I doubt that OO was the first people to include concepts like Customer in their code bases. But OO is really about how to model things, and that method of modeling doesn't seem inspired by the real world to me. So: does object-oriented programming really model the real world? | No, not at all. However it's a methodology that does allow to create a nice abstraction to hold complex data structures along with some methods that act on the data structures. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137994",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1343/"
]
} |
137,999 | I frequently use a pattern where I using method chaining to setup an object, similar to a Builder or Prototype pattern, but not creating new objects with each method call, instead modifying the original object. Example: new Menu().withItem("Eggs").withItem("Hash Browns").withStyle("Diner"); Just wondering if there is a name for this pattern and whether it is considered an anti-pattern, because although it can read more fluently, it can lead to long method chains. | Fluent Interface I've always heard of this method being called a ' fluent interface ', as coined by Eric Evans (of Domain Driven Design fame) and Martin Fowler (of Agile Manifesto fame). The main drawbacks are the readability (which some folks love and some hate), and the fact that it can be harder to debug in some cases because the entire chain of actions may be considered a single statement when stepping through it. I certainly don't consider it an anti-pattern, although I've only used the technique a few times myself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/137999",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29399/"
]
} |
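A rough Java sketch of how the chaining in the question's Menu example is usually wired up: each with... method mutates the object and returns this. The fields and printing are guesses added for illustration.

import java.util.ArrayList;
import java.util.List;

public class Menu {
    private final List<String> items = new ArrayList<String>();
    private String style = "";

    // Returning this (rather than void or a new object) is what lets calls chain.
    public Menu withItem(String item) {
        items.add(item);
        return this;
    }

    public Menu withStyle(String style) {
        this.style = style;
        return this;
    }

    @Override
    public String toString() {
        return style + " menu: " + items;
    }

    public static void main(String[] args) {
        Menu menu = new Menu().withItem("Eggs").withItem("Hash Browns").withStyle("Diner");
        System.out.println(menu);  // Diner menu: [Eggs, Hash Browns]
    }
}

The debugging caveat from the answer shows up here too: the whole chain in main is one statement, so a debugger treats it as a single step unless you split the chain across intermediate variables.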
138,359 | My university has a module for software development with a real client. Some of my team members work in the computer lab all the time, which is an extremely noisy environment with lots of interruptions and distractions. There are about 30 people always talking. People always go on Facebook, YouTube, or tell jokes to each other in addition to doing "work". Some of my team members work 3 hours every day in this environment. I attend our weekly team meetings and use our online project management system extensively. I address all emails and have a chat client on busy, but I do get messages. I use online resources a lot when solving my problems. However, outside the team meeting and pair programming sessions, I do most of my work in a quiet environment where I can focus and concentrate and I block out all external as well as internal interruptions. I focus on my task 100%. I find that I'm about 10 times more productive this way than in the lab and I can get a lot of work done. The problem is that our tutors or "management" don't see me doing work in the lab. Thus, I do not appear to be working to them. Thus they think I do no team work. How can I convince them that I do team work because I have lots of communication with my team, but at the same time I like to work on my own? I would like to prove that, just because I work alone a lot and don't necessarily do all of my work in the lab, I'm still a productive member of the team. UPDATE Management told me that the problem was that I spend working in a team 60% of my time, and I work alone 40% of the time. They told me I should spend 99% of my time working with my team face to face in the lab, aka office. Some relevant comments of some answers, that some might miss: "Problem is I don't really need to ask my team anything, because I rather google it myself. And I trust Google a lot more than my supervisor or team members for some reason. Hence I often disagree with them. I'm not doing team work, because essentially I'm doing "web" work." " I trust information that I find online (i.e. SE) more than my supervisor's or team mates' expertise." UPDATE 2 I stopped doing any serious work at home, I just stay in the lab now and have fun with my team while also do some work occasionally. As the accepted answer shows, this isn't about me being productive but rather fulfilling my managers' and team mates' expectations. If I and my manager have different beliefs about how to do software development, it's the manager's that counts because they have the power to fail/fire me. In my opinion face to face meetings aren't as effective as online conversations, and working in a noisy lab isn't as productive as working in a quite environment. I think team mates shouldn't be able to interrupt me any time they want, while I'm in the middle of a task. I think one needs at least 50% of the time to do productive work alone and 50% of the time to synchronize with the team. Anyway, this is just my opinion, and that doesn't count in this case. | What people perceive is more important than the truth. In the real world, perception is reality . If they perceive you as something because of your behavior, that is what you represent and it is very hard to change this perception by trying to argue with the person. What they perceive is more important to them than reality , because it is a belief system to them at that point, and beliefs are pretty concrete and hard to change. 
They have an idea of what they want you to be doing, feel lucky someone told you and didn't just kick you to the curb with no explanation. That is what happens on the outside, you just get let go for some amorphous reason, it is easier and cleaner for most people to deal with. Remember, this involves someone forming a belief system about you, this is tricky territory! For good or bad, you will have to do what this person believes you should be doing! Nothing else will be effective, and trying to prove them wrong is a no-win situation for you. At many places, old school managers like to see "butts in chairs". Senior members that have proven themselves, don't have to worry about this as much, but that takes years to get to that point of trust with some people. This is changing very very slowly. And honestly, if you are working on a team and not in a position to be communicating with them all day long, you really aren't working on a team. You are on and team and working on your own which is perceived as being a loner and cowboy, neither is a team member personality. You can get a good set of ear plugs . Good closed back around the ear noise canceling headphones are a good investment if you like to listen to music. Personally I like to only hear the clicking of my Model M keyboard! But you need to learn how to work in a noisy distracting environment, or change your major. In the real world, cubes are the norm, and noisy environments are prevalent. This will be especially bad at the beginning of your career, you will be sat next to banks of copy machines, break rooms, in server rooms and next to sales and marketing people that talk about absolutely nothing all day long. Expect to work 8 - 10 hours a day in a noisy distracting environment in the real world. It isn't about demonstrating your productivity, it is about fulfilling their expectations. And in this case they expect you to be in the lab with your team mates. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/138359",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22695/"
]
} |
138,396 | This title is a little broad but I may need to give a little background before I can ask my question properly. I know that similar questions have been asked here already. But in my case I'm not asking if I should be mentoring someone or if the person is a good fit for being a software developer. That is not my place to judge. I have not been asked outright, but it is apparent that myself and other fellow senior developers are to mentor the new developers that start here. I have no problem with this whatsoever and, in many cases, it lends me a fresh perspective on things and I end up learning in the process. Also, I remember how beneficial it was in the beginning of my career when someone would take some time to teach me something. When I say "new developer" they could be anywhere from fresh out of college to having a year or two of experience. Recently we've had people start here who seem to have an attitude toward development/programming which is different from my own and hard for me to reconcile; they extract just enough information to get the task done but not really learn from it. I find myself going over and over the same issues with them. I understand that part of this could be a personality thing, but I feel it's my job to do my best and push them out of the nest while they're under my wing, so to speak. How can I impart just enough information so that they will learn but not give so much as to solve the problem for them? Or perhaps: What's the proper response to questions that are designed to take the path of least resistance and, in essence, force them to learn instead of take the easy way out? These questions are probably more general teaching questions and don't have that much to do specifically with software development. Note: I do not get a say in what tasks they are working on. Management doles the task out and it could be anything from a very simple bug fix to starting an entire application by themselves. While this is not ideal by any means and obviously presents its own gauntlet of challenges, I feel it's a topic best left for another question. So the best I can do is help them with the problem at hand and try to help them break it down into simpler problems and also check their commit logs and point out mistakes that they made. My main objectives are to: Help them out and give them the tools they need to start becoming more self-reliant. Steer them in the right direction and break bad development habits early on. Lessen the amount of time I spend with them (the personality type described above tends to need much more one-on-one time and does not do well over IM or email. While that's generally fine, I can't always stop what I'm working on, break my stride, and help them debug an error on a moments notice; I have my own projects that need to get done). | There was once a question around here that contained this kind of information, and the piece that stuck with me the most was don't touch their keyboard In short, tell your junior how to accomplish what they are trying to do, but do not do it for them. But in addition to that, here are some other tips: Encourage Google (or any other search tool). If you know the answer can be found online easily, tell them to look it up instead of telling them the answer. Ultimately you want to teach them how to teach themselves , and not have them become dependent on you. Make yourself available to answer questions. 
If you are ever not available or do not wish to be interrupted, make it clear to them that they should hold their questions until a specified time. Do code reviews regularly to tell them what they are doing right/wrong. Use this as an opportunity to point out best practices Begin early with best practices. It's better to take extra time to teach them the right way, than to try and change their methods later on. Get them started early with planning/documentating what they are going to be doing instead of letting them begin by writing code. Be open to learning from them. They probably spend more time than you learning, and it's possible that they learn something that you didn't know. Help them learn from their mistakes. There are going to be mistakes, so be sure you show them that mistakes are part of learning, and that they should use them as an opportunity to learn. (From RuneFS below) Instead of telling them how to do something, try to help them figure it out themself. This will help improve their ability to logically work through a problem, and increases their learning ability (From RuneFS below) Instead of telling them what they did wrong, tell them of ways they can improve it. Be sure to include why your way is better than theirs. This will boost their confidence instead of weaken it. Of course, if they aren't listening to you then don't be afraid to just tell them to do it the right way :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/138396",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31508/"
]
} |
138,479 | Code kata is a concept that proposes to hone programmer's skill by doing small problems many times trying to improve the code at each iteration. The name comes from an analogy to martial art kata where forms (aka kata) are practices done over and over leading to improvements. From the reaction I got to my last question on the topic, I wish to know what are the draw backs of this approach? | In principle, I don't see drawbacks in code kata. You try to accomplish the same task many times, with different approaches and different languages. But it is rather difficult to introduce it in a work place. You are usually expected to be proficient or reasonably productive. I am not saying it would not be useful (better to spend some time to improve the skills of a new hire than to pay for the not so good code he will write in the time to come) but it is rather difficult none the less. you must actuallu try to improve in some sense. Writing the same code in the same way one thousand times will not make you improve (rather it will dull you). You must understand your previous errors, what went wrong or not worked out as expected. This is the most important part. It is a form of self study, so you must study. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/138479",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36006/"
]
} |
138,521 | I am cautious of asking this question because it might appear overly fastidious. I just opened up JavaScript: The Definitive Guide, and it states of the first page of chapter 1 "JavaScript is a high-level, dynamic, untyped interpreted programming
language” So am I to take it that the interpreted part is a requirement in the language specification, or is it misleading to say that the language is an interpreted programming language when respecting the difference between a language and its many implementations? There are no static compilers for JavaScript apparently - https://stackoverflow.com/questions/1118138/is-there-a-native-machine-code-compiler-for-javascript so maybe it's just a reflection of this. | So am I to take it that the interpreted part is a requirement in the language specification, or is it misleading to say that the language is an interpreted programming language when respecting the difference between a language and its many implementations? EcmaScript language geeks often use the term "ES interpreter" to refer to an implementation of EcmaScript, but the spec does not use that term. The language overview in particular describes the language in interpreter-agnostic terms: ECMAScript is object-based: basic language and host facilities are provided by objects, and an ECMAScript program is a cluster of communicating objects. So EcmaScript assumes a "host environment" which is defined as a provider of object definitions including all those that allow I/O or any other links to the outside world, but does not require an interpreter. The semantics of statements and expressions in the language are defined in terms of completion specification which are trivially implemented in an interpreter, but the specification does not require that. 8.9 The Completion Specification Type The Completion type is used to explain the behaviour of statements ( break , continue , return and throw ) that perform nonlocal transfers of control. Values of the Completion type are triples of the form ( type , value , target ), where type is one of normal , break , continue , return , or throw , value is any ECMAScript language value or empty , and target is any ECMAScript identifier or empty . The term “abrupt completion” refers to any completion with a type other than normal . The non-local transfers of control can be converted to arrays of instructions with jumps allowing for native or byte-code compilation. "EcmaScript Engine" might be a better way to express the same idea. There are no static compilers for JavaScript apparently This is not true. The V8 "interpreter" compiles to native code internally, Rhino optionally compiles to Java bytecode internally, and various Mozilla interpreters ({Trace,Spider,Jager}Monkey) use a JIT compiler. V8 : V8 increases performance by compiling JavaScript to native machine code before executing it, versus executing bytecode or interpreting it. Rhino : public final void setOptimizationLevel(int optimizationLevel) Set the current optimization level.
The optimization level is expected to be an integer between -1 and 9. Any negative values will be interpreted as -1, and any values greater than 9 will be interpreted as 9. An optimization level of -1 indicates that interpretive mode will always be used. Levels 0 through 9 indicate that class files may be generated. Higher optimization levels trade off compile time performance for runtime performance. The optimizer level can't be set greater than -1 if the optimizer package doesn't exist at run time. TraceMonkey : TraceMonkey adds native‐code compilation to Mozilla’s JavaScript® engine (known as “SpiderMonkey”). It is based on a technique developed at UC Irvine called “trace trees”, and building on code and ideas shared with the Tamarin Tracing project. The net result is a massive speed increase both in the browser chrome and Web‐page content. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/138521",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49217/"
]
} |
138,561 | Why would I want to write a web app with lots of processing server-side? To me, writing the program client-side is a huge advantage because it takes away as much server load as possible because it only has to send data to the client with minimal processing. I see very little on writing web-apps besides writing it server-side and treating client-side as only a view . Why would I ever want to do this? The only advantage I see is that I can write in whatever language I want ( http://www.paulgraham.com/avg.html ). | There are two major issues. The first is easy--you usually don't know what sort of resources are available on the client side. If it requires 1.5GB to process something, can you really push that onto an unknown client browser (IE, Safari, Opera, Firefox, etc.) on an unknown client platform? Will the client appreciate his system dogging when you overwhelm it? The second is more architectural--what layers do you want to expose to the outside world? Most would agree that it's incredibly risky to expose your data layer. How about your service layer? Do you really want to deliver that logic out there? If you do, are you also exposing the entry points to your data layer? If you keep the service layer server side, then what's left? The UI, right? See reason 1 about for considerations on how much of that lives on the server and how much on the client. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/138561",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12979/"
]
} |
138,643 | There's a quote from a PyCon 2011 talk that goes: At least in our shop (Argonne National Laboratory) we have three
accepted languages for scientific computing. In this order they are
C/C++, Fortran in all its dialects, and Python. You’ll notice the
absolute and total lack of Ruby, Perl, Java. It was in the more general context of high-performance computing. Granted the quote is only from one shop, but another question about languages for HPC , also lists Python as one to learn (and not Ruby). Now, I can understand C/C++ and Fortran being used in that problem-space (and Perl/Java not being used). But I'm surprised that there would be a major difference in Python and Ruby use for HPC, given that they are fairly similar. (Note - I'm a fan of Python, but have nothing against Ruby). Is there some specific reason why the one language took off? Is it about the libraries available? Some specific language features? The community? Or maybe just historical contigency , and it could have gone the other way? | I'll expand on my comment. I think there are a few factors that influenced the use of Python in scientific computing, though I don't think there are any definitive historical points where you could say, "Yes, that is the reason why Python is used over Ruby/anything else" Early History Python and Ruby are of roughly the same age - according to Wikipedia, Python was officially first released in 1991, and Ruby in 1995. However, Python came to prominence earlier than Ruby did, as Google was already using Python and looking for Python developers at the turn of the millenium. Since it's not like we have a curated history of uses of programming languages and their influences on people who use them, I will theorize that this early adoption of Python by Google was a big motivator for people looking to expand beyond just using Matlab, C++, Fortran, Stata, Mathematica, etc. Namely, I mean that Google was using Python in a system where they had thousands of machines (think parallelization and scale) and constantly processing many millions of data points (again, scale). Event Confluence Scientific computing used to be done on specialty machines like SGIs and Crays (remember them?), and of course FORTRAN was (and still is) widely used due to its relative simplicity and because it could be optimized more easily. In the last decade or so, commodity hardware (meaning stuff you or I can afford without being millionaires) have taken over in the scientific and massive computing realm. Look at the current top 500 rankings - many of the top ranked 'super computers' in the world are built with normal Intel/AMD hardware. Python came in at a good time since, again, Google was promoting Python, and Google was using commodity hardware, and they had thousands of machines. Plus if you dig into some old scientific computing articles, they started to spring up around the 2000-era. Earlier Support Here's an article written for the Astronomical Data Analysis Software and Systems , written in 2000, suggesting Python as a language for scientific computing. The article has this quote about Python: Python is an interpreted object-oriented programming language that is starting to receive considerable attention in scientific applications (Python, 1999). This is because Python, and scripting languages in general, represent a next logical step for many scientific projects (Dubois 1994). First, Python provides an interpreted programming language that can be viewed as an extension of the simple command languages already used by scientific programs Second, Python is easily integrated with software written in other languages. As a result, it can serve as both a control language for driving existing programs as well as a glue language for combining different systems together. 
Finally, Python provides a large collection of third party modules, an established user base, and a variety of documentation in the form of books and online references. For this reason, one might view it as a highly polished and expanded version of what scientists often try to accomplish when writing their own command interpreters. So you can see that Python had already had traction dating back to the late 90s, due to it being functionally similar to the existing systems at the time, and because it was easy to integrate Python with things like C and the existing programs. Based on the contents of the article, Python was already in scientific use dating back to the 1995-1996 timeframe. Difference in Popularity Growth Ruby's popularity exploded alongside the rise of Ruby On Rails, which first came out in 2004. I was in college when I first really heard the buzz about Ruby, and that was around 2005-2006. django for Python was released around the same time frame (July 2005 according to Wiki), but the focus of the Ruby community seemed very heavily centered on promoting its usage in web applications. Python, on the other hand, already had libraries that fit scientific computing: NumPy - NumPy officially started in 2005, but the two libraries it was built on were released earlier: Numeric (1995), and Numarray (2001?) BioPython - biological computing library for python, dates back to 2001, at least SAGE - Math package with first public release in early 2005 And many more, though I don't know many of their time lines (aside from just browsing their download sites), but Python also has SciPy (built on NumPy, released in 2006), had bindings with R (the statistics language) in the early 2000s, got MatPlotLib, and also got a really powerful shell environment in ipython. ipython was first released in the early 2000s, and has had many features added to it that make it very nice for scientific computing, like integrated matplotlib graphing and being able to manage computational clusters . From above article: It is also worth noting a number other Python related scientific computing projects. The numeric Python extension adds fast array and matrix manipulation to Python (Dubois 1996), MMTK is Python-based toolkit for molecular modeling (Hinsen 1999), the Biopython project is developing Python-based tools for life-science research (Biopython 1999), and the Visualization Toolkit (VTK) is an advanced visualization package with Python bindings (VTK, 1999). In addition, ongoing projects in the Python community are developing extensions for image processing and plotting. Finally, work presented in (Greenfield, 2000) describes the use of Python in projects at the STScI. Good list of scientific and numeric packages for Python . So a lot of it is probably due to the early history, and the relative obscurity of Ruby until the 2000s, whereas Python had gained traction thanks to Google's evangelism. So if you were evaluating scripting languages in the period from 1995 - 2000, what were you really looking at? There was Perl, which was probably different enough syntactically that people didn't want to use it, and then there was Python, which had a clearer syntax and better readability. And yes, there is probably a lot of self-reinforcement - Python already has all these great, useful libraries for scientific computing, while Ruby has a minority voice advocating its use in science, and there are some libraries sprouting up, like SciRuby , but Python's tools have matured over the last decade. 
Ruby's community at large seems to be much more heavily interested in furthering Ruby as a web language, as that's what really made it well known, whereas Python started off on a different path, and later on became widely used as a web language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/138643",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3688/"
]
} |
138,706 | On a related question, it has been clarified why C++ is not compatible with C in many aspects. However C++ is still a "hybrid"* language. And unfortunately, many programmers still consider C++ as a "C with streams and built-in strings". That results in really bad written code, that it's neither C++ nor C. IMHO, it would be better if the language/compiler forced to some extent programmers to write more elegant code. So is there a rationale for keeping modern C++ (for instance C++0x and future versions) hybrid? * By hybrid I mean that it's up to the programmer to decide if he/she will use: standard strings and streams, classes, namespaces other than the default, etc. | Yes, there is a strong rationale: C++ code almost always has to call existing C code. The best we can do is make it easy to write good code. There is nothing a language designer can do to make it impossible to write bad code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/138706",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21030/"
]
} |
138,987 | I'm going to start my first real project in Ruby on Rails , and I'm forcing myself to write TDD tests. I don't see real advantages in writing tests, but since it seems very important, I'll try. Is it necessary to test every part of my application, including static pages? | TDD isn't about testing, it's about design. Writing the tests forces you to think about how the class is supposed to work and what kind of interface you need. The tests are a happy side effect that makes it easier to refactor later. So, with that in mind, what is the behavior of a static page and what is the interface? My first response would be "none" and "none". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/138987",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49405/"
]
} |
139,052 | So I ran into a Dictionary<int, int> today at work. This just seemed weird to me because I would have probably just used a List<int> instead. Is there a difference and would there be a use case where one structure would be preferred over the other? | You would use a Dictionary<int, int> if your indexes have a special meaning besides just positional placement. The immediate example that comes to mind is storing an id column and an int column in a database. For example, if you have a [person-id] column and a [personal-pin] column, then you might bring those into a Dictionary<int, int> . This way pinDict[person-id] gives you a PIN, but the index is meaningful and not just a position in a List<int> . But really, any time you have two related lists of integers, this could be an appropriate data structure. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139052",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28880/"
]
} |
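A short Java analogue of the point above, with Map<Integer, Integer> standing in for C#'s Dictionary<int, int>; the person-id and PIN values are invented.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PinLookup {
    public static void main(String[] args) {
        // The key is a meaningful identifier, not a position.
        Map<Integer, Integer> pinByPersonId = new HashMap<Integer, Integer>();
        pinByPersonId.put(90214, 1234);   // person-id -> personal PIN
        pinByPersonId.put(30307, 9876);
        System.out.println(pinByPersonId.get(30307));   // 9876, regardless of insertion order

        // A list only answers "what is at position i?", so the index itself carries no meaning.
        List<Integer> pins = new ArrayList<Integer>();
        pins.add(1234);
        pins.add(9876);
        System.out.println(pins.get(1));   // 9876, but only because of where it happened to be added
    }
}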
139,058 | Why do programmers even like the idea of open source? I am not talking about the creators of those projects, of course they get fame, but I am talking about the industry in general, why are we so fond of the open source concept when it brings so many bad impact to the industry? First, projects like wordpress and other CMS, they take away a lot of freelance jobs where clients want a blog or a simple website. Secondly, projects like Rails and other libraries and API's, they put a lot of programmers out of work, and make the demand for programmers smaller, because now with these open source API's, one programmer can do things that 10 programmers used to do. And finally, with open source software such as Notepad++, now people just feel funny when you ask them to buy software. So, the question is, why do we still like open source if it kind of making us poor? Probably, my life as a programmer would be harder, but at least I can make a living out of it. But now, it's more like machine replacing human, funny thing is, we are creating those "machines" that replace ourselves. Let's say, if you invented a tool, you don't have to share it, it will still help you and your company. Even without these open source tools, other programmers will live because they still have a job that makes money. | Why do we like commodity hardware? Intel and Dell stopped me charging for assembling my own computers and making my own PCBs. High level languages mean I can't bill for 2 weeks work for a simple printer function written in Assembly. And finally the internet means people can just ask questions for free and someone will answer them rather than having to pay me to write books and teach classes. I just spent a couple of days installing and learning scipy+numpy+skimage, which means that I managed to write an image processor in a day. That makes me more valuable to my company's shareholders than if I had spent weeks of work going through the maths of all the original papers and then coding everything in C++. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139058",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35901/"
]
} |
139,111 | There's this framework that I'm helping to design. There are some common tasks that should be done using some common components : Logging, Caching and raising events in particular. I am not sure if it's better to use dependency injection and introduce all of these components to each service (as properties for example) or should I have some kind of meta data placed over each method of my services and use interception to do these common tasks? Here's an example of both: Injection: public class MyService
{
public ILoggingService Logger { get; set; }
public IEventBroker EventBroker { get; set; }
public ICacheService Cache { get; set; }
public void DoSomething()
{
Logger.Log(myMessage);
EventBroker.Publish<EventType>();
Cache.Add(myObject);
}
} and here's the other version: Interception: public class MyService
{
[Log("My message")]
[PublishEvent(typeof(EventType))]
public void DoSomething()
{
}
} Here are my questions: Which solution is best for a complicated framework? If interception wins, what are my options to interact with internal values of a method (to use with cache service for example?)? Can I use other ways rather than attributes to implement this behavior? Or maybe there can be other solutions to solve the problem? | Cross-cutting Concerns like logging, caching etc. are not dependencies, so shouldn't be injected into services. However, while most people then seem to reach for a full interleaving AOP framework, there's a nice design pattern for this: Decorator . In the above example, let MyService implement the IMyService interface: public interface IMyService
{
void DoSomething();
}
public class MyService : IMyService
{
public void DoSomething()
{
// Implementation goes here...
}
} This keeps the MyService class completely free of Cross-cutting Concerns, thus following the Single Responsibility Principle (SRP). To apply logging, you can add a logging Decorator: public class MyLogger : IMyService
{
private readonly IMyService myService;
private readonly ILoggingService logger;
public MyLogger(IMyService myService, ILoggingService logger)
{
this.myService = myService;
this.logger = logger;
}
public void DoSomething()
{
this.myService.DoSomething();
this.logger.Log("something");
}
} You can implement caching, metering, eventing, etc. in the same way. Each Decorator does exactly one thing, so they also follow the SRP, and you can compose them in arbitrarily complex ways. E.g. var service = new MyLogger(
    new CachingService(
        new Cache(),
        new MyService()),
    new LoggingService()); | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139111",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49472/"
]
} |
139,171 | I'm working through the book "Head First Python" (it's my language to learn this year) and I got to a section where they argue about two code techniques: Checking First vs Exception handling. Here is a sample of the Python code: # Checking First
for eachLine in open("../../data/sketch.txt"):
if eachLine.find(":") != -1:
(role, lineSpoken) = eachLine.split(":",1)
print("role=%(role)s lineSpoken=%(lineSpoken)s" % locals())
# Exception handling
for eachLine in open("../../data/sketch.txt"):
try:
(role, lineSpoken) = eachLine.split(":",1)
print("role=%(role)s lineSpoken=%(lineSpoken)s" % locals())
except:
pass The first example deals directly with a problem in the .split function. The second one just lets the exception handler deal with it (and ignores the problem). They argue in the book to use exception handling instead of checking first. The argument is that the exception code will catch all errors, where checking first will only catch the things you think about (and you miss the corner cases). I have been taught to check first, so my intial instinct was to do that, but their idea is interesting. I had never thought of using the exception handling to deal with cases. Which of the two is the generally considered the better practice? | In .NET, it is common practice to avoid the overuse of Exceptions. One argument is performance: in .NET, throwing an exception is computationally expensive. Another reason to avoid their overuse is that it can be very difficult to read code that relies too much on them. Joel Spolsky's blog entry does a good job of describing the issue. At the heart of the argument is the following quote: The reasoning is that I consider exceptions to be no better than "goto's", considered harmful since the 1960s, in that they create an abrupt jump from one point of code to another. In fact they are significantly worse than goto's: 1. They are invisible in the source code . Looking at a block of code, including functions which may or may not throw exceptions, there is no way to see which exceptions might be thrown and from where. This means that even careful code inspection doesn't reveal potential bugs. 2. They create too many possible exit points for a function. To write correct code, you really have to think about every possible code path through your function. Every time you call a function that can raise an exception and don't catch it on the spot, you create opportunities for surprise bugs caused by functions that terminated abruptly, leaving data in an inconsistent state, or other code paths that you didn't think about. Personally, I throw exceptions when my code can't do what it is contracted to do. I tend to use try/catch when I'm about to deal with something outside of my process boundary, for instance a SOAP call, a database call, file IO, or a system call. Otherwise, I attempt to code defensively. It's not a hard and fast rule, but it is a general practice. Scott Hanselman also writes about exceptions in .NET here . In this article he describes several rules of thumb regarding exceptions. My favourite? You shouldn't throw exceptions for things that happen all the time. Then they'd be "ordinaries". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139171",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17275/"
]
} |
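The question's example is Python, but the trade-off above is easy to see in Java as well; this is a hypothetical sketch, not code from the book or from the answer. Note that the exception-handling variant catches only the specific exception it expects, and that the checking-first variant still misses a corner case (a value outside the int range), which is exactly the kind of gap the book warns about.

public class ParseStyles {

    // Checking first: treat a malformed value as an ordinary, expected case.
    static Integer parseCheckingFirst(String text) {
        if (text == null || !text.matches("-?\\d+")) {
            return null;
        }
        return Integer.valueOf(text);   // still throws for values outside the int range
    }

    // Exception handling: attempt the operation and handle failure afterwards,
    // with the narrowest possible catch instead of swallowing everything.
    static Integer parseWithException(String text) {
        try {
            return Integer.valueOf(text);
        } catch (NumberFormatException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseCheckingFirst("42"));      // 42
        System.out.println(parseCheckingFirst("forty"));   // null
        System.out.println(parseWithException("42"));      // 42
        System.out.println(parseWithException("forty"));   // null
    }
}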
139,321 | I'm working on a project solo and have to maintain my own code. Usually code review is done not by the code author, so the reviewer can look at the code with the fresh eyes — however, I don't have such luxury. What practices can I employ to more effectively review my own code? | First of all, make use of tools to check as much as you can. Tests (backed up with some reasonable code coverage) will give you some confidence of the correctness of the code. Static analysis tools can catch a lot of best practice things. There will always be issues that you need human eyes on to determine though and you will never do as good a job reviewing your own stuff as someone else, there are some things you can do to help however check tests exist and pass (possibly have a target test coverage, though you may need to break this in certain cases, but you should be able to justify why) check Static analysis passes (there will also be false negatives here but that is fine as long as you can justify why then its fine to suppress them) maintain a check list of further things to check in review (ideally add this as new static analysis rules if possible) make sure you check anything the SA can't check, e.g., are comments still valid, are things named appropriately (naming things is of course, one of the 2 hard problems known to computer science) if a fault is identified check if the cause is systemic and look at why it wasn't found in earlier tests or reviews This of course is useful when you are reviewing others code as well | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139321",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33609/"
]
} |
139,331 | I've been hiring several developers from different places around the world. It all goes well, but I see that some of them are abusing my polite overlooking too much lately. They are all hired for a full-day. However, I see that after 5 hours in a day, not much has been done. I am considering to develop a software that will take a screenshot of the computer screen every 1-5 minutes and upload it to my system. However, this is going to the extremes. How do people usually manage remote developers? | You hire honest programmers, and you (in consultation with them and possibly other honest programmers as a reality check) set reasonable goals in short time lines. If they don't meet the goals, fire them. If they do meet the goals, then it shouldn't matter to you if they play solitaire for 2 hours straight while they're clearing their minds and mulling over a problem. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139331",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27209/"
]
} |
139,372 | Say I have a web app that uses jQuery. Is it better practice to host the necessary javascript files on my own servers along with my website files, or to reference them on jQuery's CDN (example: http://code.jquery.com/jquery-1.7.1.min.js )? I can see pros for both sides: If it's on my servers, that's one less external dependency; if jQuery went down or changed their hosting structure or something like that, then my app breaks. But I feel like that won't happen often; there must be lots of small-time sites doing this, and the jQuery team will want to avoid breaking them. If it's on my servers, that's one less external reference that someone could call a security issue If it's referenced externally, then I don't have to worry about the bandwidth to serve the files (though I know it's not that much). If it's referenced externally and I'm deploying this web site to lots of servers that need to have their own copies of all the files, then it's one less file I have to remember to copy/update. | You should do both: Start with hosting from a CDN such as Google's because it will likely have a higher up-time than your own site and will be configured for the fastest response time. Additionally, anyone who has visited a page that links to the CDN will use their cached copy of the file, so they won't even have to re-download a copy, making the initial loading even faster. Then add a fallback reference to your own server in case the CDN happens to be down (not likely, but safe is safe). Fallbacks are relatively easy to understand, but need to be customized to suit the script being used: <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<script>
if (!window.jQuery) document.write('<script src="/path/to/jquery-ver.sion.min.js"><\/script>');
</script> Make sure you don't write </script> anywhere within a <script> element, as it will close the HTML element and cause the script to fail. The simple fix is to use a backslash as an escape: <\/script> . One more reason to do both: If you pick a popular CDN it's highly unlikely that it'll ever have any down-time, however in the far far future (~18 months from now given Moore's law ) when the hosting format changes, or the address is adjusted, or the network is placed behind a paywall, or anything else, it's possible that your link will no longer work as-is. If you use a fallback, then it'll give you a bit of time to adjust to any new format for hosting before having to go back through every website you've ever created and change the CDN links. another reason to do both: Recently I've been hit with a string of internet outages. I was able to keep working locally on projects where I'd linked local copies of script resources, and I quickly found that there were a number of projects that needed to have local copies linked. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139372",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14955/"
]
} |
139,450 | I've heard people lecture here and there on the internet that it's best practice to obscure public facing database ids in web applications. I suppose they mainly mean in forms and in urls, but I've never read anything more than a mouthful on the subject. EDIT : Of course, now that I ask this, I find some resources on the subject: https://stackoverflow.com/questions/2374538/obscuring-database-ids https://stackoverflow.com/questions/1895685/should-i-obscure-primary-key-values http://joshua.schachter.org/2007/01/autoincrement.html These links satisfied some of my curiosity, but the SO posts don't have many votes and aren't necessarily centered around the topic in this context so I'm not sure what to make of it, and some of them claim that the third link is bogus. I'll leave the rest of my post intact: I understand the differences between obscurity and security, as well as how the two can work together, but I can't imagine why this would be necessary. Is there any truth to this, is it just paranoia, or is it just totally bogus altogether? I can think of ways to do it, but of course it adds a lot of complexity to the application code. Under what circumstances would this be useful? If this is something people frequently do, how is it usually deployed? Hashing the identifiers? Something else? It seems like a lot of work for not much extra security. I'm not looking for real solutions, I just want to get an idea of how/why people would do this in the real world. Is this really considered a "best practice" or is it merely a micro-optimization of little value? NOTE : I think a few folks might have gotten the wrong idea: I'm not suggesting that difficult-to-guess ids would be the only security mechanism, obviously there would have to be the usual access checks. Let's assume those are in place, and that simply knowing the id or hashed id of a record is not enough to grant access. | I don't think it's that important, but there are some scenarios when it could matter depending on other decisions you make. For example, if you were to expose an order ID that's generated sequentially, and you had a social engineering attack with someone calling customer support and saying "hey, I just spent $2000 on a new computer and you guys sent me some other guy's order for a $15 cable, now I'm out $2000" you could spend a lot of time trying to vet the issue before you either conclude it's bogus or you send the faker a new computer. There are similar, less sophisticated, but embarrassing variations on the theme; if a bad guy increments an order ID in an emailed link to a receipt, and if no additional validations are made to verify that the person who clicked on the link has the right to view the order ID, suddenly you're unwittingly exposing private customer information to the wrong person. In such cases, if the numbers are non-sequential the exposure is slightly mitigated because guessing is less likely to yield interesting results. On the other hand, now you need a convenient way to reference an order ID in customer support interactions that won't result in long back-and-forth conversations with telephone-based customer interactions while your rep tries to distinguish between B, P, D and T in order number BPT2015D. I'd say it's a stretch to call this obfuscation a "best practice", but in certain scenarios, it can reduce the ease of exploiting another weakness in your validation or authorization code. On the other hand, it doesn't really matter whether someone knows you wrote blog post #1 or #2559.
If the ID isn't valuable information, even with additional knowledge, then the argument that obfuscating it is a best practice holds less weight. There's a second potential argument, which is that your database identifier may wed you to a particular database implementation (or instance), and when your company gets bought out or picks up a competitor and now you have to merge two legacy systems, or the CEO goes out drinking with the rep from DynoCoreBase and they decide that you will now move all your data to DynoCoreBase version 13h and it wants all the primary keys to be guids, and you have to create some sort of mapping layer to translate old IDs to new IDs so that old URLs don't break, but whether these scenarios matter to you depend far more on the nature of your business (and the customer involvement with those IDs) than on any general best practice. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139450",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22985/"
]
} |
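One way the "non-sequential public identifier" idea from the answer above is sometimes deployed is to wrap the real integer ID in a signed, opaque token for URLs. The sketch below uses only the Python standard library and a placeholder secret; it is an illustration of the technique, not a claim about any particular site, and it does not replace the access checks the question already assumes.

    import hashlib, hmac, base64, struct

    SECRET = b"server-side secret"  # placeholder; keep out of source control

    def encode_id(record_id: int) -> str:
        payload = struct.pack(">Q", record_id)
        tag = hmac.new(SECRET, payload, hashlib.sha256).digest()[:8]
        return base64.urlsafe_b64encode(payload + tag).decode().rstrip("=")

    def decode_id(token: str) -> int:
        raw = base64.urlsafe_b64decode(token + "=" * (-len(token) % 4))
        payload, tag = raw[:8], raw[8:]
        expected = hmac.new(SECRET, payload, hashlib.sha256).digest()[:8]
        if not hmac.compare_digest(tag, expected):
            raise ValueError("tampered or unknown token")
        return struct.unpack(">Q", payload)[0]

encode_id(1234) yields an opaque string for the receipt link; a guessed or incremented token fails the HMAC check in decode_id instead of exposing someone else's order.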
139,482 | Is there a reason that a semi-colon was chosen as a line terminator instead of a different symbol? I want to know the history behind this decision, and hope the answers will lead to insights that may influence future decisions. | In English the semicolon is used to separate items in a list of statements, for example She saw three men: Jamie, who came from New Zealand; John, the
milkman's son; and George, a gaunt kind of man. When programming you are separating a number of statements and using a full stop could be easily confused for a decimal point. Using the semicolon provides an easy to parse method of separating the individual program statements while remaining close to normal English punctuation. Edit to add In the early days when memory was expensive, processing slow, and the first programming languages were being devised, there was a need to split the program up into separate statements for processing. Some languages required that each statement was placed on a line so that the carriage return could act as the statement delimiter. Other languages allowed a more free format to the text layout and so required a specific delimiter character. This character was chosen to be the semicolon, most likely because of the similarity to its use in the English language (this has to be a supposition; I was not there at the time) and as it did not produce a conflict with the other punctuation marks and symbols that were required for mathematical or other syntactic purposes. Edit again The need for some terminator character goes back to the requirements for parsing the language text. The early compilers were written in assembly language or, in some cases, directly in hand crafted binary machine instructions. Having a special character that identified the end of the statement and delimited the chunk of text that is being processed makes the processing that much easier. As I said above, other languages have used the carriage return or brackets. The Algol, Pascal, Ada, BCPL, B, C, PL/M, and other families of languages happen to use the semicolon. As to which one was first to use this particular character, I do not go back far enough in history to remember. Its choice and adoption makes perfect sense as Its use mirrors the use in normal English punctuation. Other characters (e.g. the full stop) could be confusing as they already have a common use (a full stop is also used as a decimal point). A visible punctuation character allows free format code layout. Using a similar delimiter character in derivative or later languages builds upon the familiarity gained by all of the programmers that have used the earlier language. As a final remark, I think that there has been more time spent on these answers and comments than was spent in deciding to use the semicolon to end a statement when designing the first language that used it in this way. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139482",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49863/"
]
} |
139,528 | I am from a Java background and am new to JavaScript. I have noticed many JavaScript methods using single character parameter names, such as in the following example. doSomething(a,b,c) I don't like it, but a fellow JavaScript developer convinced me that this is done to reduce the file size, noting that JavaScript files have to be transferred to the browser. Then I found myself talking to another developer. He showed me the way that Firefox will truncate variable names to load the page faster. Is this a standard practice for web browsers? What are the best-practice naming conventions that should be followed when programming in JavaScript? Does identifier length matter, and if so, to what extent? | You will find that the developers themselves are not using short variable names. Whilst developing, they are using meaningful and detailed variable names. Then, in the build/release process, the code they've written is run through a minifier/obfuscator with the intention of minimizing the size of the file, as a best practice to speed up a website. This is an optional step if you care that much about performance. Most small websites don't do this. You, as a developer, should not care about the minification/obfuscation process; write your code so that it is readable, meaningful, well documented and well structured. Then if you care so much about performance (optional, don't forget!), introduce a minifier/obfuscator into your release process to minimize the code (remove white space, new lines, comments etc) and to obfuscate it (e.g. shorten variable names). A good article which explains obfuscation vs minification can be found here . Additionally, Desktop FireFox will not truncate variable names period . The truncation of variable names is there to speed up the page download. By the time FireFox gets the file, it has already been downloaded, therefore there is no need to do so. Your friend may run a plugin which is doing this; in which case, tell him to uninstall it, because it's useless. For completion, some (mobile) browsers have the option to use middle-man servers, which intercept the responses of resources you requested, and compress them for you (which could include the minification of JavaScript files). Note that the compression is done on the server (i.e. before you have downloaded the page), hence the potential benefit of downloading a smaller file, rather than in the browser once you have already downloaded the file (as suggested in the question). Such mobile browsers include Opera Mini, and newer versions of Google Chrome (on iOS at least; not sure about Android). For more info, see here . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139528",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39302/"
]
} |
139,536 | I make a lot of typing mistakes when I program. I have a good typing speed, but when I try to type even faster I end up making typos in the process. I want to type faster, but make fewer errors. Can I do something to improve the situation? | Make sure you use auto-completion/Intellisense wisely if you're using an IDE or (I suppose) some text editors. If you use them well, you only have to get the spelling correct once :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139536",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31560/"
]
} |
139,654 | I'm working through designing a RESTful API. We know we want to return JSON and XML for any given resource. I had been thinking we would do something like this: GET /api/something?param1=value1
Accept: application/xml (or application/json) However, someone tossed out using extensions for this, like so: GET /api/something.xml?param1=value1 (or /api/something.json?param1=value1) What are the tradeoffs with these approaches? Is it best to rely on the accept header when an extension isn't specified, but honor extensions when specified? Is there a drawback to that approach? | This, "However, philosophically - the first approach is the only approach.", and this "The proper official RESTful approach is to use Accept: header." are widely perceived to be the case, but are also absolutely incorrect . Here's a brief snippet from Roy Fielding (who defined REST)... "section 6.2.1 does not say that content negotiation should be
used all the time." cite That particular conversation is in the context of the 'Accept-Language:' header, but the same applies equally to the 'Accept:' header, as made clear later in his response... "I have no idea why people can't see
the second and third link on the top page http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm that point to the two PDF editions." What he means there is that there's no issue in using different endpoints for different representations of the same source data. (In this case one .html endpoint and two different .pdf endpoints.) Also in a similar discussion, this time regarding the virtues of using query parameters vs. using file extensions for different media types... "That's why I always prefer extensions.
Neither choice has anything to do with REST." cite Again, that's slightly different to Accept vs. filename extensions, but Fielding's stance is still clear. Answer - it really doesn't much matter.
The trade-offs between the two aren't very significant and both are acceptable styles. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139654",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36147/"
]
} |
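A tiny sketch of the "support both" conclusion above: honor an explicit extension when one is present, and fall back to the Accept header otherwise. This is framework-free Python with deliberately naive parsing (no quality values, no query strings), so treat it as an outline rather than a drop-in implementation.

    SUPPORTED = {"json": "application/json", "xml": "application/xml"}

    def pick_representation(path: str, accept_header: str = "") -> str:
        # 1. An extension in the URL wins: /api/something.xml
        ext = path.rsplit(".", 1)[-1].lower() if "." in path else ""
        if ext in SUPPORTED:
            return SUPPORTED[ext]
        # 2. Otherwise take the first supported type in the Accept header.
        for part in accept_header.split(","):
            media = part.split(";")[0].strip().lower()
            if media in SUPPORTED.values():
                return media
        # 3. Default representation when nothing matches.
        return "application/json"

    assert pick_representation("/api/something.xml") == "application/xml"
    assert pick_representation("/api/something", "application/xml") == "application/xml"
    assert pick_representation("/api/something") == "application/json"

Either style stays consistent with the Fielding quotes above: the extension endpoints and the negotiated endpoint are just different ways of naming the same representations.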
139,663 | I've read all of the posts I can find on this and I'm still not sure of the answer. I'd like to use a jQuery plugin on my website that is dual licensed under MIT and GPL. Does the dual license mean that as long as one or the other is satisfied I'm fine, or does it have to be both? I've read that a GPL javascript being loaded on someone's browser doesn't count as redistribution, so I wouldn't have to use the GPL license for the rest of my site (provide source code). Is this true? My partner thinks I should remove the copyright altogether and change the variable names, as this is going to be on a commercial website. Is this ethical? I respect the person who took the time to write the code. Should I contact them and see if the plugin is available under a commercial license? Thank you very much in advance for helping to clarify. As this is my first website, I figure it's better to ask these questions than take a shot in the dark. | 1) Does the dual license mean that as long as one or the other is satisfied I'm fine, or does it have to be both? Yes, one is enough.
Specifically, jQuery makes it explicit that you can use it even in a commercial environment. Why, then, is it also offered under the GPL? Because if someone wants to build an additional javascript library on top of jQuery, they can choose the GPL for their own work and distribute it further under the GPL to protect that freedom (which wouldn't be possible with MIT alone). 2) I've read that a GPL javascript being loaded on someone's browser
doesn't count as redistribution, so I wouldn't have to use the GPL
license for the rest of my site (provide source code). Is this true? As of GPLv3, a web page downloading the javascript is NOT considered distribution, because it is not in usable form. This is actually helpful for website owners, who can classify such usage of the project as their own use rather than distribution and hence don't have to open their source. There is a newer license, the GNU Affero GPL, that closes this gap; i.e. if a jQuery-like library were released under the Affero GPL, the website owner would have to release their own code as well! 3) My partner thinks I should remove the copyright altogether and
change the variable names, as this is going to be on a commercial
website. Is this ethical? I respect the person who took the time to
write the code. Should I contact them and see if the plugin is
available under a commercial license? You don't have to do this. The jQuery license specifically permits all uses (unless you are going to bomb a parliament somewhere), so it is not necessary. If there were a license restriction, the right response would be to not use the code at all rather than strip the credit due to the original author. And even setting ethics aside, legally it would be tough to escape the copyright by merely changing variable names. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139663",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49702/"
]
} |
139,666 | I very often find myself in situations where I should have opened a parentheses, but forgot and end up furiously tapping the left arrow key to get to the spot I should have put it, and doing the same to get back to where I was - that, or removing one hand from the keyboard to do it with the mouse. Either way, it's a stupid mistake on my part and a poor use of time going back to fix it. For example, if typing something like var x = 100 - var1 + var2; I might get to the end of the statement and realize I wanted to subtract the sum of var1 and var2 from 100 and not add var2 to the difference of 100 and var1. I can't really expect an IDE to prevent my mistakes, but I was thinking there could be a simple enough feature that would save time when they're made. Specifically, some kind of function that, after a closing parenthesis is added where there isn't an opening one, would start ghosting in an opening parenthesis at different statements and allow the user to switch between them. For example: Say you have the following statement: var x = oldY * oldX + newY / newX - left - right; If you put a closing parenthesis after right and pressed the shortcut, the IDE would do: var x = oldY * oldX + newY / newX - ( left - right); press left, and then: var x = oldY * oldX + newY / ( newX - left - right); ...then: var x = oldY * oldX + ( newY / newX - left - right); Anyway... Does this feature exist? If not, should it exist? What do senior programmers do when this happens? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139666",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35459/"
]
} |
139,672 | Recently I had a project in which directory listing was enabled, due to which some scripts could be seen by the outside world. I asked the website administrator and he said it's not his responsibility, it's the programmer's job. I also asked the programmer about this and he said that the script is actually a cron job, and according to him, he has to test the script on the webserver as there is a difference between the development and production environments, so he placed it there to test it. According to him, there was no direct link, as he assumed directory listing was disabled and taken care of by the administrator. Who is the right person to handle this? | It's the duty and responsibility of both. The basic problems with security tend to have something to do with comfort and overconfidence (e.g. security through obscurity). If you know a way to cover a security hole, do it; there's no such thing as "too much security". ...and it's not that hard. For example, the Joomla! project team places an empty index.html file in each directory to prevent directory listing, and frameworks like Symfony and Ruby on Rails have a single public directory to which you must link on the public end of the server account. Never ever be lazy when it comes to security. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139672",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49708/"
]
} |
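A hedged sketch of the Joomla-style mitigation mentioned in the answer above: drop an empty index.html into every directory under the public web root that lacks an index file, so a misconfigured server has nothing to list. The web-root path is a placeholder, and this complements rather than replaces disabling directory listing in the server configuration.

    import os

    WEB_ROOT = "/var/www/example/public"  # placeholder path

    def add_blank_indexes(root: str) -> None:
        for dirpath, _dirnames, filenames in os.walk(root):
            if "index.html" not in filenames and "index.php" not in filenames:
                placeholder = os.path.join(dirpath, "index.html")
                with open(placeholder, "w") as fh:
                    fh.write("")  # empty file; nothing to show if listing is on
                print("added", placeholder)

    if __name__ == "__main__":
        add_blank_indexes(WEB_ROOT)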
139,747 | My understanding of a primitive datatype is that it is a datatype provided by a language implicitly (others are user-defined classes), so different languages have different sets of datatypes which are considered primitive for that particular language. Is that right? And what is the difference between a "basic datatype" and a "built-in datatype"? Wikipedia says a primitive datatype is either of the two. PS - Why is the "string" type considered a primitive type in SNOBOL4 and not in Java? | It kind of depends on the language. For example, in languages like C and C++, you have a number of built-in scalar types - int , float , double , char , etc. These are "primitive" in the sense that they cannot be decomposed into simpler components. From these basic types you can define new types - pointer types, array types, struct types, union types, etc. Then you have a language like old-school Lisp, where everything is either an atom or a list . Again, by the above definition, an atom is "primitive" in the sense that it cannot be decomposed into something simpler. Edit As far as I'm concerned, the terms "primitive", "basic", and "built-in" are pretty much interchangeable. If you want to get really pedantic, though, you can distinguish between types that are "built-in" (those explicitly provided by the language definition) and types derived from the built-in types that are still "primitive" or "basic" in that they cannot be decomposed into simpler elements. C's typedef facility allows you to create new type names for existing types. Ada allows you to create new scalar types that have constraints on them. For example, you can derive a Latitude type from the built-in floating type, with the constraint that it can't take on values outside the range [-90.0, 90.0]. It's still a primitive or basic type in that it cannot be broken down into any simpler components, but since it's user-defined, it's not considered a "built-in" type. Again, these concepts are a little fuzzy, and it really depends on context. For example, the notion of a "built-in" type is meaningless for a typeless language like BLISS. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139747",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49744/"
]
} |
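To make the Ada Latitude example in the answer above concrete, here is a rough Python analogue: a user-defined type that still behaves like a scalar but adds a range constraint, so it is "primitive" in the cannot-be-decomposed sense without being "built-in". This is an illustration of the distinction, not a suggestion that Python and Ada treat types the same way.

    class Latitude(float):
        """A float constrained to the range [-90.0, 90.0]."""
        def __new__(cls, value: float) -> "Latitude":
            value = float(value)
            if not -90.0 <= value <= 90.0:
                raise ValueError("latitude must be within [-90.0, 90.0]")
            return super().__new__(cls, value)

    print(Latitude(47.6))     # behaves like a plain float
    # Latitude(123.4)         # would raise ValueError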
139,893 | I'm in a code shop of two. And while I understand that a bug tracker is useful where the number of programmers is greater or equal to one, I'm not so convinced that logging bugs, changes, and fixes is worth the time when they're trivial. When I find a simple bug, I understand it, fix it, and run it through some testing. And THEN I realized I need to go log it. I know in theory that bug logging should be done somewhere between finding the bug and fixing the bug, but if fixing it is faster than logging it, it seems like a drag. In larger code-shops, the boss pays attention to who is doing what and it's nice to know where others are mucking about. I find myself describing things that I've already fixed and then instantly closing them. I have doubts that anyone will ever look at this closed bug again. Is it time to trim the process fat? | You should log every change you make to your system. There's nothing wrong with logging it after the event - as long as you link the bug report to the change number. Then if anything ever goes wrong you can track back to the bug and find out why you made the change you did. In the vast majority of cases you are right and no one will look at these ever again, but in the 1 out of 100 when something does go wrong this information will be invaluable - especially if the problem only surfaces 6 months down the line. UPDATE Obviously if you are still developing a new feature and discover a bug in part of the feature you thought you'd finished it's not necessary to log it as a separate change. In these cases I'd log it against the item requesting the major feature. Once the system with the feature has been passed to QA or the customer, then it's necessary to do what I outlined above. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139893",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26034/"
]
} |
139,904 | I am a contract data analyst, so I bounce between jobs every 3-6 months, which I find to be a good fit for me, but it leads to some problems when it comes to coding. I mostly do statistics (I've asked a similar question on cross validated, but the answers there are not relevant here), but I have also found out that the business world loves excel and loves copying and pasting the same thing over and over again even more. This led me to learn how to write VBA scripts and then VB.NET programs to automate as many of these reports as I can. I am certain my programs are not the most elegant, but I put a good bit of effort into making sure they work under as many cases as I can test, I add in exceptions and try to code so the program can handle changes in the files that it processes, but there is a limit, if you remove a huge portion of the data, there is a good chance my program is going to trip up, which I accept will inevitably happen. Usually a pretty minor change in the code fixes the problem and I do try and comment my code and make it readable under the assumption that some other person will have to read it some day. My problem is that I generally get put on teams of folks with essentially no experience with programming (like VBA would be a huge stretch for anyone I work directly with). I am wondering what I should be doing as the person that wrote the code to do my best to keep it maintained. I have two approaches in mind (outlined next), but would be very happy to get any advice. Solution 1: Find the more tech savvy coworkers and run them through the programs and what basic changes can be made. Honestly automating excel is about as easy as it can get when it comes to programming, so I feel like I could teach someone the basics of maintaining it pretty quick. Solution 2: Get in touch with the IT department and show them what is going on and maybe they will be able to help. The problem here is that the IT department is constantly swamped (as I'm sure many of you know) and I feel like kind of a jerk for dumping more things on them. I do leave my personal email address with places and am willing to answer quick questions via email, but I view the need for more exhaustive maintenance as something of an inevitability and would like to make sure I do my due diligence to make sure it gets done. I imagine some combination of the two approaches outlined there, but is there any kind of heads up I should give IT? I feel like I would be annoyed if I started getting requests to fix a program that I had never seen from some random guy that is no longer there. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/139904",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49575/"
]
} |