source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
158,293 | I am sure there is a term for the following bit of refactoring, but I can't remember it and my Google-fu is failing me! The refactor moves if statements to where they are going to have most impact, for example changing this $test = someFunctionThatReturnsABool();
for($x = 0; $x < 10000; $x++) {
if ($test) {
echo $x;
}
} To this $test = someFunctionThatReturnsABool();
if ($test) {
for($x = 0; $x < 10000; $x++) {
echo $x;
}
} | This is loop-invariant code motion . A good compiler should do it on its own. ... loop-invariant code consists of statements or expressions (in an imperative programming language ) which can be moved outside the body of a loop without affecting the semantics of the program. Loop-invariant code motion (also called hoisting or scalar promotion ) is a compiler optimization which performs this movement automatically... If we consider the following code sample, two optimizations can be easily applied. for (int i = 0; i < n; i++) {
x = y + z;
a[i] = 6 * i + x * x;
} The calculation x = y + z and x * x can be moved outside the loop since within they are loop invariant — they do not change over the iterations of the loop— so the optimized code will be something like this: x = y + z;
t1 = x * x;
for (int i = 0; i < n; i++) {
a[i] = 6 * i + t1;
} This code can be optimized further... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/158293",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5010/"
]
} |
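To see the question's refactoring outside PHP, here is a minimal Python sketch of the same hoisting; the function name is a hypothetical stand-in that mirrors the example in the entry above.

def some_function_that_returns_a_bool():
    return True  # stand-in for a loop-invariant condition

# Before: the invariant test is re-evaluated on every iteration.
test = some_function_that_returns_a_bool()
for x in range(10000):
    if test:
        print(x)

# After: the test is hoisted, so the check happens exactly once.
test = some_function_that_returns_a_bool()
if test:
    for x in range(10000):
        print(x)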
158,534 | In some organization I worked for web applications are developed basing all the business logic in Database stored procedures. For example, use html for view and servlet as controllers to divert the client request to appropriate Database stored procedures. What are the advantages and disadvantages of this kind of designs? In my opinion if the business logic highly depends on the database than better follow this kind of design!!!!! | In theory, the pros and cons are as so: Pros: One place to contain all of the business logic Possibly faster applications because multiple SQL queries and such can be performed in one "round trip" to the database Trivial to make use of the stored procedures from multiple applications Cons: A DBA will be required for performance tuning All developers will have to be very well versed in your particular SQL dialect(T-SQL, Pl/SQL, etc) SQL code isn't as expressive and thus harder to write when covering higher level concepts that aren't really related to data A lot more unnecessary load on the database Now, practically, only a fool would have all business logic in the database. Very few developers will be able to create a consistent stored procedure interface that works easily across applications. Usually this is because certain assumptions are made of that calling application Same goes for documenting all of those stored procedures Database servers are generally bottlenecked enough as it is. Putting the unnecessary load on them just further narrows this bottleneck. Intricate load balancing and beefy hardware will be required for anything with a decent amount of traffic SQL is just barely a programming language. I once had the pleasure of maintaining a scripting engine written as a T-SQL stored procedure. It was slow, nearly impossible to understand, and took days to implement what would have been a trivial extension in most languages What happens when you have a client that needs their database to run a different SQL server? You'll basically have to start from scratch -- You're very tied to your database. Same goes for when Microsoft decides to deprecate a few functions you use a couple hundred times across your stored procedures Source control is extremely difficult to do properly with stored procedures, more so when you have a lot of them Databases are hard to keep in sync. What about when you have a conflict of some sort between 2 developers that are working in the database at the same time? They'll be overwriting each others code not really aware of it, depending on your "development database" setup The tools are definitely less easy to work with, no matter which database engine you use. So, to objectively answer the question. In most cases, stored procedures are only needed in some cases. For instance, if you have a report to generate where you need to do a lot of conditional processing across a couple of big tables, you wouldn't want your application to be making a couple hundred SQL queries to the database. You'd want to make a stored procedure so you didn't have the network lag overhead. And, you'll usually only want to make stored procedures otherwise for when clients want to run a custom-ish query across your database. Stored procedures and views can make this easily possible without your clients being a database whiz. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/158534",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38202/"
]
} |
158,603 | I was going through the source code of an open source framework, where I saw a variable "payload" mentioned many times. Any ideas what "payload" stands for? | The term 'payload' is used to distinguish between the 'interesting' information in a chunk of data or similar, and the overhead to support it. It is borrowed from transportation, where it refers to the part of the load that 'pays': for example, a tanker truck may carry 20 tons of oil, but the fully loaded vehicle weighs much more than that - there's the vehicle itself, the driver, fuel, the tank, etc. It costs money to move all these, but the customer only cares about (and pays for) the oil, hence, 'pay-load'. In programming, the most common usage of the term is in the context of message protocols, to differentiate the protocol overhead from the actual data. Take, for example, a JSON web service response that might look like this (formatted for readability): {
"status":"OK",
"data":
{
"message":"Hello, world!"
}
} In this example, the string Hello, world! is the payload, the part that the recipient is interested in; the rest, while vital information, is protocol overhead. Another notable use of the term is in malware. Malicious software usually has two objectives: spreading itself, and performing some kind of modification on the target system (delete files, compromise system security, call home, etc.). The spreading part is the overhead, while the code that does the actual evil-doing is the payload. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/158603",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50008/"
]
} |
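A small Python sketch of the distinction drawn in the answer above, using the same hypothetical JSON shape: the envelope is protocol overhead, the message is the payload.

import json

# Hypothetical response shaped like the example in the answer above.
raw = '{"status": "OK", "data": {"message": "Hello, world!"}}'

envelope = json.loads(raw)               # overhead + payload
payload = envelope["data"]["message"]    # the part the caller actually cares about

print(payload)  # Hello, world!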
158,640 | After watching National Geographic's MegaStructures series , I was surprised how fast large projects are completed. Once the preliminary work (design, specifications, etc.) is done on paper, the realization itself of huge projects takes just a few years or sometimes a few months . For example, Airbus A380 "formally launched on Dec. 19, 2000", and "in the Early March, 2005" , the aircraft was already tested. The same goes for huge oil tankers, skyscrapers, etc. Comparing this to the delays in the software industry, I can't help wondering why most IT projects are so slow, or more precisely, why they cannot be as fast and faultless, at the same scale, given enough people? Projects such as the Airbus A380 present both: Major unforeseen risks: while this is not the first aircraft built, it still pushes the limits of the technology and things which worked well for smaller airliners may not work for the larger one due to physical constraints; in the same way, new technologies are used which were not used before, because for example they were not available in 1969 when the Boeing 747 was certified. Risks related to human resources and management in general: people quitting in the middle of the project, inability to reach a person because she's on vacation, ordinary human errors, etc. With those risks, people still achieve projects like those large airliners in a very short period of time , and despite the delivery delays, those projects are still hugely successful and of a high quality. When it comes to software development, the projects are hardly as large and complicated as an airliner (both technically and in terms of management), and have slightly less unforeseen risks from the real world. Still, most IT projects are slow and late , and adding more developers to the project is not a solution (going from a team of ten developers to two thousand will sometimes make it possible to deliver the project faster, sometimes not, and sometimes will only harm the project and increase the risk of not finishing it at all). Those which are still delivered may often contain a lot of bugs, requiring consecutive service packs and regular updates (imagine "installing updates" on every Airbus A380 twice a week to patch the bugs in the original product to prevent the aircraft from crashing). How can such differences be explained? Is it due exclusively to the fact that the software development industry is too young to be able to manage thousands of people on a single project in order to deliver large scale, nearly faultless products very fast? | Ed Yourdon's Death March touches upon a number of these meta type questions. In general, the software industry lacks a lot of the following, which gets in the way of large projects. Standardization and work item breakdown. This has certainly gotten better, but the design constructs still aren't there to break out a big system. In some ways, software can't even agree on what's needed for a given project, much less being able to break things down into components. Aerospace, building construction, auto, etc.. all have very component-driven architectures with reasonably tight interfaces to allow fully parallel development. Software still allows too much bleed through in the corresponding areas. A large body of successful, similar projects. The A380 wasn't the first big airplane that Airbus built. 
There are a lot of large software applications out there, but many of them have suffered dramatically in some aspect or the other and wouldn't come close to being called "successful." A large body of designers and builders who have worked on a number of similar and successful projects. Related to the successful project issue, not having the human talent who has been there, done that makes things very difficult from a repeatability point of view. "Never" building the same thing twice. In many ways, an airplane is like any other airplane. It's got wings, engines, seats, etc.. Large software projects rarely repeat themselves. Each OS kernel is significantly different. Look at the disparity in file systems. And for that matter, how many truly unique OSs are there? The big ones become clones of a base item at some point. AIX , Solaris , HP-UX , ... herald back to AT&T System V . Windows has had an incredible amount of drag forward through each iteration. Linux variants generally all go back to the same core that Linus started. I bring it up, because the variants tend to propagate faster than the truly unique, proprietary OSs. Really bad project estimation. Since the repeatability factor is so low, it's difficult to project how large it will end up being and how long something will take to build. Given that project managers and Management can't put their hands on the code and actually see what is being done, unrealistic expectations regarding timelines get generated. QA / QC is not emphasized as heavily as it could or should be for larger projects. This goes back to having looser interfaces between components, and not having rigid specifications for how components should work. That looseness allows for unintended consequences and for bugs to creep in. Consistently measurable qualifications. Generally, people speak of the number of years they've worked in X language or in programming. Time in is being used as a substitute for caliber or quality of skill. As has been mentioned many times before, interviewing and finding good programming talent is hard. Part of the problem is that the definition of "good" remains very subjective. I don't mean to be all negative, and I think the software industry has made significant strides from where we've been. Forums like this and others have really helped promote conversation and discussion of design principles. There are organizations working to standardize on "baseline" knowledge for software engineers. There is certainly room for improvement, but I think the industry has come a long way in a reasonably short period of time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/158640",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
158,715 | Case : I'm working at a company, writing an application in Python that is handling a lot of data in arrays. I'm the only developer of this program at the moment, but it will probably be used/modified/extended in the future (1-3 years) by some other programmer, at this moment unknown to me. I will probably not be there directly to help then, but maybe give some support via email if I have time for it. So, as a developer who has learned functional programming (Haskell), I tend to solve, for example, filtering like this: filtered = filter(lambda item: included(item.time, dur), measures) The rest of the code is OO, it's just some small cases where I want to solve it like this, because it is much simpler and more beautiful according to me. Question : Is it OK today to write code like this? How does a developer that hasn't written/learned FP react to code like this? Is it readable? Modifiable? Should I write documentation like explaining to a child what the line does? # Filter out the items from measures for which included(item.time, dur) != True I have asked my boss, and he just says "FP is black magic, but if it works and is the most efficient solution, then it's OK to use it." What is your opinion on this? As a non-FP programmer, how do you react to the code? Is the code "googable" so you can understand what it does? I would love feedback on this. | Is it readable? For me: Yes, but I have come to understand, that the Python community often seems to consider list comprehensions a cleaner solution than using map() / filter() . In fact, GvR even considered dropping those functions altogether. Consider this: filtered = [item for item in measures if included(item.time, dur)] Further, this has the benefit that a list comprehension will always return a list. map() and filter() on the other hand will return an iterator in Python 3. Note: If you want to have an iterator instead, it's as simple as replacing [] with () : filtered = (item for item in measures if included(item.time, dur)) To be honest, I see little to no reason for using map() or filter() in Python. Is it Modifiable? Yes, certainly, however, there is one thing to make that easier: Make it a function, not a lambda. def is_included(item):
return included(item.time, dur)
filtered = filter(is_included, measures)
filtered = [item for item in measures if is_included(item)] If your condition becomes more complex, this will scale much easier, also, it allows you to reuse your check. (Note that you can create functions inside other functions, this can keep it closer to the place where it is used.) How does a developer that hasn't written/learned FP react on code like this? He googles for the Python documentation and knows how it works five minutes later. Otherwise, he shouldn't be programming in Python. map() and filter() are extremely simple. It's not like you're asking them to understand monads. That's why I don't think you need to write such comments. Use good variable and function names, then the code is almost self explanatory. You can't anticipate which language features a developer doesn't know. For all you know, the next developer might not know what a dictionary is. What we don't understand is usually not readable for us. Thus, you could argue it's no less readable than a list comprehension if you've never seen either of them before. But as Joshua mentioned in his comment, I too believe it's important to be consistent with what other developers use - at least if the alternative provides no substantial advantage. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/158715",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/60447/"
]
} |
158,716 | Do I understand correctly that Liskov Substitution Principle cannot be observed in languages where objects can inspect themselves, like what is usual in duck typed languages? For example, in Ruby, if a class B inherits from a class A , then for every object x of A , x.class is going to return A , but if x is an object of B , x.class is not going to return A . Here is a statement of LSP: Let q(x) be a property provable about objects x of type T . Then q(y) should be provable for objects y of type S where S is a subtype of T . So in Ruby, for example, class T; end
class S < T; end violate LSP in this form, as witnessed by the property q(x) = x.class.name == 'T' Addition. If the answer is "yes" (LSP incompatible with introspection), then my other question would be: is there some modified "weak" form of LSP which can possibly hold for a dynamic language, possibly under some additional conditions and with only special types of properties . Update. For reference, here is another formulation of LSP that I've found on the web: Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it. And another: If S is a declared subtype of T, objects of type S should behave as objects of type T are expected to behave, if they are treated as objects of type T. The last one is annotated with: Note that the LSP is all about expected behaviour of objects. One can only follow the LSP if one is clear about what the expected behaviour of objects is. This seems to be weaker than the original one, and might be possible to observe, but I would like to see it formalized, in particular explained who decides what the expected behavior is. Is then LSP not a property of a pair of classes in a programming language, but of a pair of classes together with a given set of properties, satisfied by the ancestor class? Practically, would this mean that to construct a subclass (descendant class) respecting LSP, all possible uses of the ancestor class have to be known? According to LSP, the ancestor class is supposed to be replaceable with any descendant class, right? Update. I have already accepted the answer, but i would like to add one more concrete example from Ruby to illustrate the question. In Ruby, each class is a module in the sense that Class class is a descendant of Module class. However: class C; end
C.is_a?(Module) # => true
C.class # => Class
Class.superclass # => Module
module M; end
M.class # => Module
o = Object.new
o.extend(M) # ok
o.extend(C) # => TypeError: wrong argument type Class (expected Module) | Here's the actual principle : Let q(x) be a property provable about objects x of type T . Then q(y) should be provable for objects y of type S where S is a subtype of T . And the excellent wikipedia summary: It states that, in a computer program, if S is a subtype of T, then objects of type T may be replaced with objects of type S (i.e., objects of type S may be substituted for objects of type T) without altering any of the desirable properties of that program (correctness, task performed, etc.). And some relevant quotes from the paper: What is needed is a stronger requirement that constrains the behavior of sub-types: properties that can be proved using the specification of an object’s presumed type should hold even though the object is actually a member of a subtype of that type... A type specification includes the following information: - The type’s name; - A description of the type's value space; - For each of the type's methods: --- Its name; --- Its signature (including signaled exceptions); --- Its behavior in terms of pre-conditions and post-conditions. So on to the question: Do I understand correctly that Liskov Substitution Principle cannot be observed in languages where objects can inspect themselves, like what is usual in duck typed languages? No. A.class returns a class. B.class returns a class. Since you can make the same call on the more specific type and get a compatible result, LSP holds. The issue is that with dynamic languages, you can still call things on the result expecting them to be there. But let's consider a statically, structural (duck) typed language. In this case, A.class would return a type with a constraint that it must be A or a subtype of A . This provides the static guarantee that any subtype of A must provide a method T.class whose result is a type that satisfies that constraint. This provides a stronger assertion that LSP holds in languages that support duck typing, and that any violation of LSP in something like Ruby occurs more due to normal dynamic misuse than a language design incompatibility. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/158716",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/60408/"
]
} |
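For readers more at home in Python than Ruby, the question's introspection-based property transcribes directly; this sketch only restates the observation made in the question, it does not settle the argument in the answer.

class T:
    pass

class S(T):
    pass

def q(x):
    # The introspective "property" from the question, transcribed literally.
    return type(x).__name__ == "T"

print(q(T()))  # True
print(q(S()))  # False: the property provable for T does not hold for the subtype,
               # which is exactly the tension the question describes.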
158,887 | I'm interacting with Git through GitHub for Windows , which is funny since I'll never push my repository to GitHub. I'm working on it alone and it's intended to be used only by me. I noticed that my commits are listed under "unsynced commits" and under "history" it says "no commits". Which brings me to the question, what will I achieve by pushing except my commits listed under "history"? | You are technically correct -- no real need to push if you aren't sharing the code with anyone. Then again, your laptop has a hard drive made by the lowest bidder. Your house could burn down before the hard drive fails. You might want to look at your code remotely. Or even share it with someone. Now, with Github, they require everything be public or you need to pay for private repositories. So if you want to keep it to yourself, you might want to check out bitbucket which will let you do git but also features free private repos. Another option would be to save your git repository somewhere that is backed up remotely. But there are few advantages doing this rather than just using a cloud SCM provider these days. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/158887",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9059/"
]
} |
158,908 | In Java: int count = (Integer) null; throws a java.lang.NullPointerException. Why doesn't this throw a Class Cast Exception for ease in programmer understanding? Why was this exception chosen over any other exception? | When executing your code, the Java runtime does the following: Cast null to an object of class Integer. Try to unbox the Integer object to an int by calling the method intValue() Calling a method on a null object throws a NullPointerException. In other words, null can be cast to Integer without a problem, but a null integer object cannot be converted to a value of type int. EDIT I had a related question a while ago at Stack Overflow, see here . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/158908",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/60556/"
]
} |
158,929 | I'm in a situation where part of my system has a dependency on another module in the same system, but the modules themselves need to remain independently deployable, where the parts they depend on would be filled in with another implementation. A module in this instance is the concept of a bunch of components that all contribute to servicing a business function. The system is also built in C#. So, where I depend on another module, the source module defines an interface describing the functions it needs the dependent party module to implement. The contract types (domain model objects) are also interfaces for the dependent module. Here is where it gets a bit hazy to me. The dependency inversion principle doesn't sit well in my head at this point. Both of those "modules" are of the same importance as each other. Which should define interfaces and force the other to reference it? I'm suspecting a 3rd project sitting between the modules that handles setting up the dependencies (probably a DI container). Should the entities in the modules be interfaced? They are simply bags of get/set (no DDD here). Can anyone offer any guidance here? Thanks in advance. Edit 1: The project structure: Module1.Interfaces
IModuleOneServices
Module1.Models
ModuleOneDataObject
Module1
Module1 implements IModuleOneServices
Module2.Interfaces
IModuleTwoServices
Module2.Models
ModuleTwoDataObject - has property of type ModuleOneDataObject
Module2
Module2 implements IModuleTwoServices
depends on IModuleOneServices Module2 needs to be deployable by itself, and remains compilable, and sometimes, run without a Module1 present at all. | When executing your code, the Java runtime does the following: Cast null to an object of class Integer. Try to unbox the Integer object to an int by calling the method intValue() Calling a method on a null object throws a NullPointerException. In other words, null can be cast to Integer without a problem, but a null integer object cannot be converted to a value of type int. EDIT I had a related question a while ago at Stack Overflow, see here . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/158929",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15177/"
]
} |
158,954 | Actually, I'm helping a small software shop with their Scrum implementation. Recently the Scrum Master reported to me that he has a problem: the team is working overtime to achieve the scope (the committed backlog), so they have an unreal velocity. My formal question(s) is/are: apart from discussing it at the retrospective meeting, do you think it is a good idea to implement some hard blocks to avoid overtime? If so, what techniques/tools do you suggest? Revision control system (SVN, Git, Hg, etc.) blocks by hours (8 to 5), workstation blocks by hours (8 to 5) or cumulative hours (up to 8 hrs/day), other(s)... Or, maybe, do not hard-block this kind of thing, but implement some "Penalty System" for unjustified extra hours? First: thanks, all, for your fast responses. @Baqueta (and others with similar questions): no, they are not being paid for extra hours. My first piece of advice to them was to review their estimates, because maybe they were underestimating.
This was my favorite advise: If they have an interest in working overtime, remove it. Development isn't something you can do for 60 hours a week and stay productive, and there are numerous studies out there that prove this. If overtime pay is the issue, get rid of it and improve their base pay so they're getting what they're worth. Also, I think that the root problem (for this team), is a combination of the following: The developers are being told what they must achieve in a sprint/aren't being consulted on what's achievable/are being ignored when they say there's too much work. The developers are consistently underestimating how much time tasks will take/how many units of work are involved in each task. Summary: I'll talk to the Team to review their estimates, and with the P.O. because I feel that they are not being consulted about the scope, as you mentioned. | Frankly, those "hard blocks" you mention in #2 are the worst idea I've heard in a long time. What happens if a top-priority bug is found at 4.45pm and the guy who has the ability to override your blocks is off sick? As for #3 - you're suggesting punishing people for doing their jobs . If the team are consistently working overtime to complete sprints, then either they have an interest in working overtime - e.g. overtime pay or getting overtime back as holidays - or they are committing to doing too much work in the sprints. If they have an interest in working overtime, remove it . Development isn't something you can do for 60 hours a week and stay productive, and there are numerous studies out there that prove this. If overtime pay is the issue, get rid of it and improve their base pay so they're getting what they're worth. If there is too much work going into the sprints, this is usually for one of three reasons: The developers are being told what they must achieve in a sprint/aren't being consulted on what's achievable/are being ignored when they say there's too much work. The developers are consistently underestimating how much time tasks will take/how many units of work are involved in each task. The developers keep being pulled onto tasks which aren't part of the scrum. If it's #1, you're doing it wrong . No two ways about it! If it's #2, the usual cause is inexperience - either with doing time estimates, or with the system being developed. A good way around this is to stop doing time estimates and start estimating 'units of work'. Use some abstract unit, just make sure it isn't real-time hours (ultimately you're still representing a time interval, but the abstraction is important!). You can then start calculating how many units of work actually get done in a week, then set up sprints using that data. If it's #3, you need to start factoring in those other tasks somehow. If it's consistent it should be easy to account for (see #2 above). If it's frequent but unpredictable it's much tricker to deal with. You'll want to take a look at why it's happening (e.g. serious bugs in 'live' code => is your testing thorough enough?) but if that can't be fixed then ultimately scrum might not be the right approach for you. My team recently switched over to Kanban for this very reason... Edit: Clarified my criticism of the ideas presented in the question. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/158954",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37367/"
]
} |
158,981 | We are building a business application (a laboratory management system, to be more precise) mostly for internal company use only. To make it easier for users to find the items they work on, we are implementing a list of most-used items. We had a little debate on which method would be better to implement: display the most recent vs. display the most used. My arguments on most-recent: It is a little bit easier to implement. I think this is worth mentioning because we are dealing with a business application which will be sold as a single copy, so this may directly affect the application price. Also, a simpler implementation means less code and may affect maintenance of the code. Counterargument: the difference in implementation difficulty in this case is too small. It is easier for users to guess what they will find in this list, so they know whether it is worth looking at the list at all. Items which were relevant yesterday and used a lot might not be relevant today, and in a recent-items list they quickly disappear. My arguments on most-used: Actually, it is quite easy to display a mixed version of recently-used and most-used by combining the last access date and the number of accesses, something like this: (today - lastaccess) * number_of_access. Counterargument: this requires fine-tuning. What arguments would you give for one or the other? | This might be downvoted for not answering your question directly, but the debate you had with your colleague was a waste of time. You should have spoken to three (mid-level, hands-on) lab technicians, given them the two options and asked them what they would do. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/158981",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19900/"
]
} |
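For concreteness, here is the asker's combined score written out in Python; it is only a transcription of the formula in the question, not a recommendation, and the dates are made up.

from datetime import date

# (today - lastaccess) * number_of_access, as sketched in the question.
# Purely illustrative; as the question notes, this needs fine-tuning
# (the raw product grows with age, so a real ranking would weight or
# invert the age term).
def combined_score(last_access: date, access_count: int, today: date) -> int:
    return (today - last_access).days * access_count

print(combined_score(date(2012, 8, 1), 40, date(2012, 8, 15)))  # 560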
159,079 | I recently started working with Unity3D and primarily scripting with C#. As I normally program in Java, the differences aren't too great but I still referred to a crash course just to make sure I am on the right track. However, my biggest curiosity with C# is that it capitalises the first letter of its method names (eg. Java: getPrime() C#: GetPrime() aka: Pascal Case?). Is there a good reason for this? I understand from the crash course page that I read that apparently it's convention for .Net and I have no way of ever changing it, but I am curious to hear why it was done like this as opposed to the normal (relative?) camel case that, say, Java uses. Note: I understand that languages have their own coding conventions (Python methods are all lower case which also applies in this question) but I've never really understood why it isn't formalised into a standard. | Naming conventions represent arbitrary choices of their publisher. There is nothing in the language itself to prohibit you from naming your methods the way you do in Java: as long as the first character is a letter/underscore, and all other characters are letters, digits, or underscores, C# is not going to complain. However, the class libraries that come with .NET follow a convention that Microsoft has adopted internally. Microsoft also published these guidelines , so that others may choose to adopt them for their own class libraries too. Although it is your choice to follow or to ignore Microsoft's guidelines, familiarization with your code by others may go faster if you follow the same naming guidelines. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159079",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25686/"
]
} |
159,096 | I sometimes end up having to write a method or property for a class library for which it is not exceptional to have no real answer, but a failure. Something cannot be determined, is not available, not found, not currently possible or there is no more data available. I think there are three possible solutions for such a relatively non-exceptional situation to indicate failure in C# 4: return a magic value that has no meaning otherwise (such as null and -1 ); throw an exception (e.g. KeyNotFoundException ); return false and provide the actual return value in an out parameter, (such as Dictionary<,>.TryGetValue ). So the questions are: in which non-exceptional situation should I throw an exception? And if I should not throw: when is returning a magic value perferred above implementing a Try* method with an out parameter ? (To me the out parameter seems dirty, and it is more work to use it properly.) I'm looking for factual answers, such as answers involving design guidelines (I don't know any about Try* methods), usability (as I ask this for a class library), consistency with the BCL, and readability. In the .NET Framework Base Class Library, all three methods are used: return a magic value that has no meaning otherwise: Collection<T>.IndexOf returns -1, StreamReader.Read returns -1, Math.Sqrt returns NaN, Hashtable.Item returns null; throw an exception: Dictionary<,>.Item throws KeyNotFoundException, Double.Parse throws FormatException; or return false and provide the actual return value in an out parameter: Dictionary<,>.TryGetValue , Double.TryParse . Note that as Hashtable was created in the time when there were no generics in C#, it uses object and can therefore return null as a magic value. But with generics, exceptions are used in Dictionary<,> , and initially it didn't have TryGetValue . Apparently insights change. Obviously, the Item - TryGetValue and Parse - TryParse duality is there for a reason, so I assume that throwing exceptions for non-exceptional failures is in C# 4 not done . However, the Try* methods didn't always exist, even when Dictionary<,>.Item existed. | I don't think that your examples are really equivalent. There are three distinct groups, each with it's own rationale for its behaviour. Magic value is a good option when there is an "until" condition such as StreamReader.Read or when there is a simple to use value that will never be a valid answer (-1 for IndexOf ). Throw exception when the semantics of the function is that the caller is sure that it will work. In this case a non-existing key or a bad double format is truly exceptional. Use an out parameter and return a bool if the semantics is to probe if the operation is possible or not. The examples you provide are perfectly clear for cases 2 and 3. For the magic values, it can be argued if this is a good design decision or not in all cases. The NaN returned by Math.Sqrt is a special case - it follows the floating point standard. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159096",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48260/"
]
} |
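The same three styles show up in Python's standard library, which may make the trade-off easier to see for readers outside .NET; this is only an analogy with hypothetical snippets, not a statement about the BCL guidelines.

text = "hello"
index = text.find("x")      # 1) magic value: -1 means "not found", much like IndexOf

d = {"a": 1}
try:
    value = d["b"]          # 2) exception: a missing key raises KeyError,
except KeyError:            #    like Dictionary<,>.Item throwing
    value = None

def try_parse_int(s):
    # 3) Try-style probe: Python has no out parameters, so the usual
    #    shape is to return a (success, value) pair instead.
    try:
        return True, int(s)
    except ValueError:
        return False, 0

ok, number = try_parse_int("42")   # (True, 42)
ok, number = try_parse_int("x")    # (False, 0)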
159,115 | My company is in the midst of a transition from waterfall-style development to Agile/Scrum. Among other things, we're told that the expectation is for us to have new working, testable (by QA) features at the end of each day. Most of our devs lose around 2 hours a day to meetings and other enterprisey overhead. This means that in any given 6-hour (at best) period, we have to design, write, unit-test, build, and deploy (with release notes) enough code to produce a complete feature for QA to play with. I understand that the build/deploy/release notes could be automated with a proper CI setup but we're not there yet. We also have a large offshore contingent writing our server-side code, and the 12-hour time difference makes this even more difficult. We attempt to task out stories into narrow, deep vertical slices in order to complete features end-to-end as fast as possible, but most days feel rather frantic and I often catch people taking stupid, fragile shortcuts to ensure QA has their build. This problem is compounded after a sprint has been in progress for a couple of days, when the inevitable defects start rolling in and have to fit into the same 6-hour window. Is this a normal pace for Agile teams? Even if we manage to implement a CI setup, I can't see how we'll be able to sustain this pace and still create quality software. Edit: There are several good answers here. It made me realize that what I was really asking is, should Agile teams deliver new features daily. I updated the title accordingly. | The crimes that are committed in the name of Agile these days make me sad. Lots of people are having a hard time making this transition. Agile Manifesto: "We value people and interactions over process and tools.". When the people are clearly hurting, the process is wrong. I don't want to tell you how to do it, but will share how I do it. In my teams, the important part is to avoid committing to a shared repo code that is broken in ways that will waste the rest of the team's time. In this sense only, I strive to 'deliver working code every day'. Don't break QA. Don't block other developers. Ideally I never check in any bugs. (ha ha). The implication is not that you have to commit something every day. The implication is that you should only commit good stuff, so that each day you can get a build of all the good stuff that anyone committed. This way the team keeps firing on all cylinders. In my teams QA is constant. I build commercial products, so the project is never over until the product is obsolete. QA Engineers test the features that are available to test. QA Engineers always have a backlog. There is never enough QA time to test or automate everything we would idealistically want. If developers need multiple days before merging in changes for a feature or fix, it's fine, encouraged if it helps them get the code right before risking our time. Developers can commit code to their private repo or branch without affecting the team or QA. Developers can run unit tests or regression automation on code built from a developer's repo or private branch. On particularly risky cases a QA Engineer will work with the developer to test before merging, to protect the team from delay. In this sense, I practice what your managers want. Almost every day for the last 12 years my development teams have had code that works in the shared repository. We're always almost ready to ship. Occasionally we do not achieve this but we don't worry too much about it. 
Sometimes it is intentional, to accommodate major tool changes or difficult merges. The Agile Manifesto, to me, sums up the best of the new thinking on development process that emerged in the 1990s. I'm pretty much a true believer in those principles, but the process details can vary. As I see it, the point of Agile is to adapt your process to your product's and clients' needs, not to be a slave to process. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159115",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11211/"
]
} |
159,193 | Let me preface this by saying this is not my code nor my coworkers' code. Years ago when our company was smaller, we had some projects we needed done that we did not have the capacity for, so they were outsourced. Now, I have nothing against outsourcing or contractors in general, but the codebase they produced is a mass of WTFs. That being said, it does (mostly) work, so I suppose it's in the top 10% of outsourced projects I've seen. As our company has grown, we've tried to take more of our development in house. This particular project landed in my lap so I've been going over it, cleaning it up, adding tests, etc etc. There's one pattern I see repeated a lot and it seems so mindblowingly awful that I wondered if maybe there is a reason and I just don't see it. The pattern is an object with no public methods or members, just a public constructor that does all the work of the object. For example, (the code is in Java, if that matters, but I hope this to be a more general question): public class Foo {
private int bar;
private String baz;
public Foo(File f) {
execute(f);
}
private void execute(File f) {
// FTP the file to some hardcoded location,
// or parse the file and commit to the database, or whatever
}
} If you're wondering, this type of code is often called in the following manner: for(File f : someListOfFiles) {
new Foo(f);
} Now, I was taught long ago that instantiated objects in a loop is generally a bad idea, and that constructors should do a minimum of work. Looking at this code it looks like it would be better to drop the constructor and make execute a public static method. I did ask the contractor why it was done this way, and the response I got was "We can change it if you want". Which was not really helpful. Anyway, is there ever a reason to do something like this, in any programming language, or is this just another submission to the Daily WTF? | Ok, going down the list: I was taught long ago that instantiated objects in a loop is generally a bad idea Not in any languages I've used. In C it is a good idea to declare your variables up front, but that is different from what you said. It might be slightly faster if you declare objects above the loop and reuse them, but there are plenty of languages where this increase in speed will be meaningless (and probably some compilers out there that do the optimization for you :) ). In general, if you need an object within a loop, create one. constructors should do a minimum of work Constructors should instantiate the fields of an object and do any other initialization necessary to make the object ready to use. This is generally means constructors are small, but there are scenarios where this would be a substantial amount of work. is there ever a reason to do something like this, in any programming language, or is this just another submission to the Daily WTF? It's a submission to the Daily WTF. Granted, there are worse things you can do with code. The problem is that the author has a vast misunderstanding of what classes are and how to use them. Specifically, here's what I see that is wrong with this code: Misuse of classes: The class is basically acting like a function. As you mentioned, it should either be replaced with a static function, or the function should just be implemented in the class that is calling it. Depends on what it does and where it is used. Performance overhead: Depending on the language, creating an object can be slower than calling a function. General confusion: It is generally confusing to the programmer how to use this code. Without seeing it used, no one would know how the author intended to use the code in question. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159193",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/60681/"
]
} |
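The answer's suggested fix, a plain function instead of a do-everything constructor, looks roughly like this; the sketch is in Python rather than Java for brevity, and every name in it is hypothetical.

# Expose the work as a function so callers are not forced to create
# throwaway objects purely for their side effects.
def process_file(path):
    """Stand-in for whatever Foo's constructor used to do (FTP, parse, commit...)."""
    print(f"processing {path}")

def process_all(paths):
    for path in paths:
        process_file(path)

process_all(["a.csv", "b.csv"])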
159,232 | I am new to databases and trying to understand the basic concepts. I have learned how to delete data in a database. But one of my friends told me that you should never delete data in a database. Rather, when it's no longer needed, it's better to simply mark it or flag it as 'not in use'. Is that true? If so, how would a big company like IBM handle their data for a hundred or more years? | As with all these things, the answer is "it depends". If the user is ever likely to want the data back, then your friend is right: you don't really delete, you just mark the record as "deleted". This way, when the user changes their mind, you can recover the data. However, if the deleted data is older than a certain period (a year, for example), you might decide to really delete it from the live tables but keep it in either an archive table or even just on backup, should the user ever want it back. In this way you can keep the amount of data (live and recently deleted) to a minimum. However, if the data is ephemeral or easily recreated, you may well decide to actually delete it. There is one class of data that you have to delete, and that's personal data that the user doesn't want you to hold any more. There may be local laws (e.g. in the EU) that make this a mandatory requirement (thanks Gavin). Equally, there may be rules that require you not to delete data, so before deciding anything, check with any regulatory authorities on what you need to do to comply with the law. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159232",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
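A minimal sketch of the "flag it, don't delete it" approach from the answer above, using an in-memory SQLite table; the schema and column names are made up for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE samples (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        deleted_at TEXT  -- NULL means the row is still live
    )
""")
conn.execute("INSERT INTO samples (name) VALUES ('culture 42')")

# "Delete" = mark the row; it stays recoverable until a later purge or archive step.
conn.execute(
    "UPDATE samples SET deleted_at = datetime('now') WHERE name = ?",
    ("culture 42",),
)

# Normal queries simply filter the flagged rows out.
live = conn.execute("SELECT id, name FROM samples WHERE deleted_at IS NULL").fetchall()
print(live)  # []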
159,238 | I recently converted an old application that was using XML files as the data store to use SQL instead. To avoid a lot of changes I basically created ActiveRecord style classes that inherited from the original business objects. For example SomeClassRecord :SomeClass
//ID Property
//Save method I then used this new class in place of the other one, because of polymorphism I didn't need to change any methods that took SomeClass as a parameter. Would this be considered 'Bad'? What would be a better alternative? | As with all these things the answer is "it depends". If the user is ever likely to want the data back then your friends are right - you don't really delete just mark the record as "deleted". This way when the user changes their mind you can recover the data. However, if the deleted data is more than a certain time period old (a year for example) you might decide to really delete it from the live tables but keep it in either an archive table or even just on back up should the user ever want it back. In this way you can keep the amount of data (live and recently deleted) to a minimum. However, if the data is ephemeral or easily recreated you may well decide to actually delete the data. There is one class of data that you have to delete - and that's personal data that the user doesn't want you to hold any more. There may be local laws (e.g. in the EU) that makes this a mandatory requirement (thanks Gavin ) Equally there may be rules that require you not to delete data, so before deciding anything check with any regulatory authorities on what you need to do to comply with the law. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159238",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41306/"
]
} |
159,267 | I'm asked to perform or sit in during many technical interviews. We ask logic questions and simple programming problems that the interviewee is expected to be able to solve on paper. (I would rather they have access to a keyboard, but that is a problem for another time.) Sometimes I sense that people do know how to approach a problem, but they are hung up by nervousness or some second-guessing of the question (they aren't intended to be trick questions). I've never heard my boss give any help or hints. He just thanks the interviewee for the response (no matter how good or bad it is) and moves on to the next question or problem. But I know something about the rabbit hole that defeat and nerves can lead you down, and how it disables your mind, and I can't help wondering if providing a little help now and then would ultimately help us end up with more capable programmers instead of more failed interviews. Should I provide hints and assistance for befuddled interviewees (and if so, how far should I go while still being fair to the more prepared candidates)? | When I was in a similar position, I would say to the interviewee: "Pretend I'm Google. If you need to search for something just say so." In one question interviewees needed to be able to figure out the volume of a cylinder, so I didn't mind if someone said, "I'd have to Google for the formula for the volume of a cylinder." I was interested in knowing if they knew how to attack the problem, not if they'd memorized formulas. For the job, they had to have a decent grasp of how to translate the real world into software, so it was an important concept. On the other hand I wasn't going to tell them they needed that formula. You are correct that nerves can be a problem, but I still expect people to be able to express their thought process, even if they're nervous. Simply not giving an answer was unacceptable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159267",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19033/"
]
} |
159,373 | A reoccurring theme on SE I've noticed in many questions is the ongoing argument that C++ is faster and/or more efficient than higher level languages like Java. The counter-argument is that modern JVM or CLR can be just as efficient thanks to JIT and so on for a growing number of tasks and that C++ is only ever more efficient if you know what you're doing and why doing things a certain way will merit performance increases. That's obvious and makes perfect sense. I'd like to know a basic explanation (if there is such a thing...) as to why and how certain tasks are faster in C++ than the JVM or CLR? Is it simply because C++ is compiled into machine code whereas the JVM or CLR still have the processing overhead of JIT compilation at run time? When I try to research the topic, all I find is the same arguments I've outlined above without any detailed information as to understanding exactly how C++ can be utilized for high-performance computing. | It's all about the memory (not the JIT). The JIT 'advantage over C' is mostly limited to optimizing out virtual or non-virtual calls through inlining, something that the CPU BTB is already working hard to do. In modern machines, accessing RAM is really slow (compared to anything the CPU does), which means applications that use the caches as much as possible (which is easier when less memory is used) can be up to a hundred times faster than those that don't. And there are many ways in which Java uses more memory than C++ and makes it harder to write applications that fully exploit the cache: There is a memory overhead of at least 8 bytes for each object, and the use of objects instead of primitives is required or preferred in many places (namely the standard collections). Strings consist of two objects and have an overhead of 38 bytes UTF-16 is used internally, which means that each ASCII character requires two bytes instead of one (the Oracle JVM recently introduced an optimizaion to avoid this for pure ASCII strings). There is no aggregate reference type (i.e. structs), and in turn, there are no arrays of aggregate reference types. A Java object, or array of Java objects, has very poor L1/L2 cache locality compared to C-structs and arrays. Java generics use type-erasure, which has poor cache locality compared to type-instantiation. Object allocation is opaque and has to be done separately for each object, so it is impossible for an application to deliberately lay out its data in a cache-friendly way and still treat it as structured data. Some other memory- but not cache-related factors: There is no stack allocation, so all non-primitive data you work with has to be on the heap and go through garbage collection (some recent JITs do stack allocation behind the scenes in certain cases). Because there are no aggregate reference types, there is no stack passing of aggregate reference types. (Think efficient passing of Vector arguments) Garbage collection can hurt L1/L2 cache contents, and GC stop-the-world pauses hurt interactivity. Converting between data types always requires copying; you cannot take a pointer to a bunch of bytes you got from a socket and interpret them as a float. 
Some of these things are tradeoffs (not having to do manual memory management is worth giving up a lot of performance for most people), some are probably the result of trying to keep Java simple, and some are design mistakes (though possibly only in hindsight, namely UTF-16 was a fixed length encoding when Java was created, which makes the decision to choose it a lot more understandable). It's worth noting that many of these tradeoffs are very different for Java/JVM than they are for C#/CIL. The .NET CIL has reference-type structs, stack allocation/passing, packed arrays of structs, and type-instantiated generics. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159373",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26979/"
]
} |
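The per-object overhead the answer describes is easy to demonstrate in Python, which boxes even more aggressively than Java; the byte counts below are implementation-dependent and only meant to illustrate the contrast between "an object per value" and packed primitive storage.

import sys
from array import array

boxed = [i for i in range(1000)]   # a list of int objects (pointers plus per-object headers)
packed = array("i", range(1000))   # contiguous machine ints

print(sys.getsizeof(boxed))   # size of the list's pointer array only, not the int objects
print(sys.getsizeof(42))      # roughly 28 bytes for one boxed int on 64-bit CPython
print(sys.getsizeof(packed))  # about 4 bytes per element plus a small header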
159,503 | I recently upgraded versions of pylint , a popular Python style-checker. It has gone ballistic throughout my code, pointing out places where I import modules in the same package, without specifying the full package path. The new error message is W0403. W0403: Relative import %r, should be %r Used when an import relative to the package directory is detected. Example For example, if my packages are structured like this: /cake
/__init__.py
/icing.py
/sponge.py
/drink and in the sponge package I write: import icing instead of import cake.icing I will get this error. While I understand that not all Pylint messages are of equal importance, and I am not afraid to dismiss them, I don't understand why such a practice is considered a poor idea. I was hoping someone could explain the pitfalls, so I could improve my coding style rather than (as I currently plan to do) turning off this apparently spurious warning. | The problem of import icing is that you don't know whether it's an absolute import or a relative import. icing could be a module in python's path, or a package in the current module. This is quite annoying when a local package has the same name as a python standard library package. You can do from __future__ import absolute_import which turns off implicit relative imports altogether. It is described, including with this justification about ambiguity, in PEP 328 . I believe Python 3 has implicit relative imports turned off completely. You still can do relative imports, but you have to do them explicitly, like this: from . import icing | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159503",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5146/"
]
} |
159,554 | I have a simple question, and I'm not even sure it has an answer but let's try.
I'm coding in C++ and using dependency injection to avoid global state. This works quite well, and I don't run into unexpected/undefined behaviour very often. However, I realise that, as my project grows, I'm writing a lot of code which I consider boilerplate. Worse: the fact that there is more boilerplate code than actual code sometimes makes it hard to understand. Nothing beats a good example, so let's go: I have a class called TimeFactory which creates Time objects. For more details (not sure it's relevant): Time objects are quite complex because the Time can have different formats, and conversion between them is neither linear nor straightforward. Each "Time" contains a Synchronizer to handle conversions, and to make sure they have the same, properly initialized, synchronizer, I use a TimeFactory. The TimeFactory has only one instance and is application-wide, so it would qualify as a singleton, but because it's mutable, I don't want to make it a singleton. In my app, a lot of classes need to create Time objects. Sometimes those classes are deeply nested. Let's say I have a class A which contains instances of class B, and so on up to class D. Class D needs to create Time objects. In my naive implementation, I pass the TimeFactory to the constructor of class A, which passes it to the constructor of class B, and so on until class D. Now, imagine I have a couple of classes like TimeFactory and a couple of class hierarchies like the one above: I lose all the flexibility and readability I'm supposed to get from using dependency injection. I'm starting to wonder if there isn't a major design flaw in my app ...
Or is this a necessary evil of using dependency injection? What do you think? | "In my app, a lot of classes need to create Time objects." It seems that your Time class is a very basic data type which should belong to the "general infrastructure" of your application. DI does not work well for such classes. Think about what it would mean if a class like string had to be injected into every part of the code which uses strings, and you needed to use a stringFactory as the only possibility of creating new strings: the readability of your program would decrease by an order of magnitude. So my suggestion: don't use DI for general data types like Time. Write unit tests for the Time class itself, and when it's done, use it everywhere in your program, just like the string class, a vector class or any other class of the standard lib. Use DI for components which should really be decoupled from each other. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159554",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28821/"
]
} |
159,637 | The Mars Curiosity rover has landed successfully, and one of the promo videos "7 minutes of terror" brags about there being 500,000 lines of code. It's a complicated problem, no doubt. But that is a lot of code, surely there was a pretty big programming effort behind it. Does anyone know anything about this project? I can only imagine it's some kind of embedded C. | It's running 2.5 million lines of C on a RAD750 processor manufactured by BAE . The JPL has a bit more information but I do suspect many of the details are not publicized. It does appear that the testing scripts were written in Python. The underlying operating system is Wind River's VxWorks RTOS . The RTOS in question can be programmed in C, C++, Ada or Java. However, only C and C++ are standard to the OS, Ada and Java are supported by extensions. Wind River supplies a tremendous amount of detail as to the hows and whys of VxWorks . The underlying chipset is almost absurdly robust . Its specs may not seem like much at first but it is allowed to have one and only one "bluescreen" every 15 years. Bear in mind, this is under bombardment from radiation that would kill a human many times over. In space, robustness wins out over speed. Of course, robustness like that comes at a cost. In this case, it's a cool $200,000 to $500,000. An Erlang programmer talks about the features of the computers and codebase on Curiosity. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159637",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13713/"
]
} |
159,691 | We are practicing collective code ownership. To my understanding this means that any developer can change any line of code to add functionality, to refactor, fix bugs or improve designs. But what about a complete rewriting of code from a developer who is still in the team? Should I ask him first? What is the best practice? | I think good communication is always the best practice. Talk to the developer and see if there's a reason why it's coded the way it is. It may be that they have been meaning to get back and refactor it for ages, it may be that they did it that way for a very good reason, or it may be that you both can learn something from the conversation. Going in and rewriting without prior communication is a recipe for ill will. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159691",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/56663/"
]
} |
159,754 | In podcast 73 , Joel Spolsky and Jeff Atwood discuss, among other subjects, "five things everyone should hate about their favorite programming language": If you’re happy with your current tool chain, then there’s no reason you need to switch. However, if you can’t list five things you hate about your favorite programming language, then I argue you don’t know it well enough yet to judge. It’s good to be aware of the alternatives, and have a healthy critical eye for whatever it is you’re using. Being curious, I asked this question to any candidate I interviewed. None of them were able to quote at least one thing they hate about C#¹. Why? What's so difficult in this question? It is because of the stressful context of the interview that this question is impossible to answer by the interviewees? Is there something about this question which makes it bad for an interview? Obviously, it doesn't mean that C# is perfect. I have myself a list of five things I hate about C#: The lack of variable number of types in generics (similar to params for arguments). Action<T> , Action<T1, T2> , Action<T1, T2, T3> , ⁞ Action<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16> Seriously?! The lack of support for units of measure, like in F#. The lack of read only properties. Writing a backing private readonly field every time I want a read only property is boring. The lack of properties with default values. And yes, I know that I can initialize them in the parameterless constructor and call it from all other constructors. But I don't want to. Multiple inheritance. Yes, it causes confusion and you don't need it in most cases. It's still useful in some (very rare) cases, and the confusion applies as well (and was solved in C#) to the class which inherits several interfaces which contain methods with the same name. I'm pretty sure that this list is far from being complete, and there are much more points to highlight, and especially much better ones than mine. ¹ A few people criticized some assemblies in .NET Framework or the lack of some libraries in the framework or criticized the CLR. This doesn't count, since the question was about the language itself, and while I could potentially accept an answer about something negative in the core of .NET Framework (for example something like the fact that there is no common interface for TryParse , so if you want to parse a string to several types, you have to repeat yourself for every type), an answer about JSON or WCF is completely off-topic. | If I would have to guess: Some programmers lack diverse language exposure. It's hard to see things wrong with the language when you don't know that better things exist. Some programmers are mere code monkeys. They barely analyze the problems in front of them, let alone something like how their programming language could be better. Few people are particularly critical. They see benefits and features, not shortcomings. It is hard for them to shift into that mode of thinking if the interview isn't going that way. At least around here, being overly critical is seen as a fatal personality flaw. Instead of being 'that insightful developer that is always looking for better ways of doing things' (like some areas I've lived), they are 'that asshole that hates everything'. Even people who can think of things they hate in the language might defer in an interview setting to seem less acerbic. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159754",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
159,804 | There are some problems which are easily solved by Algebraic Data Types, for example a List type can be very succinctly expressed as: data ConsList a = Empty | ConsCell a (ConsList a)
consmap f Empty = Empty
consmap f (ConsCell a b) = ConsCell (f a) (consmap f b)
l = ConsCell 1 (ConsCell 2 (ConsCell 3 Empty))
consmap (+1) l This particular example is in Haskell, but it would be similar in other languages with native support for Algebraic Data Types. It turns out that there is an obvious mapping to OO-style subtyping: the datatype becomes an abstract base class and every data constructor becomes a concrete subclass. Here's an example in Scala: sealed abstract class ConsList[+T] {
def map[U](f: T => U): ConsList[U]
}
object Empty extends ConsList[Nothing] {
override def map[U](f: Nothing => U) = this
}
final class ConsCell[T](first: T, rest: ConsList[T]) extends ConsList[T] {
override def map[U](f: T => U) = new ConsCell(f(first), rest.map(f))
}
val l = new ConsCell(1, new ConsCell(2, new ConsCell(3, Empty)))
l.map(1+) The only thing needed beyond naive subclassing is a way to seal classes, i.e. a way to make it impossible to add subclasses to a hierarchy. How would you approach this problem in a language like C# or Java? The two stumbling blocks I found when trying to use Algebraic Data Types in C# were: I couldn't figure out what the bottom type is called in C# (i.e. I couldn't figure out what to put into class Empty : ConsList< ??? > ) I couldn't figure out a way to seal ConsList so that no subclasses can be added to the hierarchy What would be the most idiomatic way to implement Algebraic Data Types in C# and/or Java? Or, if it isn't possible, what would be the idiomatic replacement? | There is an easy, but boilerplate heavy way to seal classes in Java. You put a private constructor in the base class then make subclasses inner classes of it. public abstract class List<A> {
// private constructor is uncallable by any subclasses except inner classes
private List() {
}
public static final class Nil<A> extends List<A> {
}
public static final class Cons<A> extends List<A> {
public final A head;
public final List<A> tail;
public Cons(A head, List<A> tail) {
this.head = head;
this.tail = tail;
}
}
} Tack on a visitor pattern for dispatch. My project jADT : Java Algebraic DataTypes generates all that boilerplate for you https://github.com/JamesIry/jADT | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159804",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1352/"
]
} |
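To make the answer above concrete in C#, which is one of the languages the question asks about, here is a sketch of the same private-constructor trick. It is illustrative only: the names are invented, it assumes C# 6 or later, and a Func-based Match method stands in for the visitor interface the answer mentions.

using System;

public abstract class ConsList<T>
{
    private ConsList() { } // only the nested classes below can derive from this type

    // Match stands in for the visitor: callers must handle both cases,
    // and no new case can be added outside this class.
    public abstract TOut Match<TOut>(Func<TOut> onEmpty, Func<T, ConsList<T>, TOut> onCell);

    public ConsList<TOut> Map<TOut>(Func<T, TOut> f) =>
        Match<ConsList<TOut>>(
            () => new ConsList<TOut>.Empty(),
            (head, tail) => new ConsList<TOut>.Cell(f(head), tail.Map(f)));

    public sealed class Empty : ConsList<T>
    {
        public override TOut Match<TOut>(Func<TOut> onEmpty, Func<T, ConsList<T>, TOut> onCell)
            => onEmpty();
    }

    public sealed class Cell : ConsList<T>
    {
        public Cell(T head, ConsList<T> tail) { Head = head; Tail = tail; }
        public T Head { get; }
        public ConsList<T> Tail { get; }
        public override TOut Match<TOut>(Func<TOut> onEmpty, Func<T, ConsList<T>, TOut> onCell)
            => onCell(Head, Tail);
    }
}

Usage looks like var xs = new ConsList<int>.Cell(1, new ConsList<int>.Cell(2, new ConsList<int>.Empty())); var ys = xs.Map(x => x + 1); - any subclass declared outside ConsList<T> fails to compile because it cannot reach the private constructor, which gives the "sealed hierarchy" the question asks for.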
159,813 | Isn't the whole point of an interface that multiple classes adhere to a set of rules and implementations? | Strictly speaking, no, you don't need the interface - YAGNI applies. That said, the time you'll spend creating the interface is minimal, especially if you have a handy code-generation tool doing most of the job for you. If you are uncertain whether you are going to need the interface or not, I'd say it's better to err towards supporting the definition of an interface. Furthermore, using an interface even for a single class will provide you with a mock implementation for unit tests, one that's not used in production (see the short sketch after this entry). Avner Shahar-Kashtan's answer expands on this point. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159813",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52940/"
]
} |
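A minimal sketch of the unit-testing benefit mentioned in the answer above. All names here (IClock, SystemClock, ReportScheduler) are invented for illustration; the point is only that the single production implementation can be swapped for a hand-written fake in a test, with no mocking library required.

using System;

public interface IClock
{
    DateTime UtcNow { get; }
}

// The one and only production implementation.
public sealed class SystemClock : IClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

public class ReportScheduler
{
    private readonly IClock _clock;
    public ReportScheduler(IClock clock) { _clock = clock; }

    public bool IsDue(DateTime nextRunUtc) => _clock.UtcNow >= nextRunUtc;
}

// Lives in the test project: a trivial fake.
public sealed class FixedClock : IClock
{
    public FixedClock(DateTime fixedUtc) { UtcNow = fixedUtc; }
    public DateTime UtcNow { get; }
}

A test can then pin "now" deterministically, for example new ReportScheduler(new FixedClock(new DateTime(2012, 8, 1))), instead of depending on the real clock.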
159,964 | I assume that my project is decoupled enough to allow for unit testing. But how big, exactly, in terms of classes and functions, does my project need to be to make unit testing worthwhile? We all make mistakes and no one's perfect, but I consider myself a decent programmer who can handle a small project's errors by stepping through the code. Or is unit testing a hard necessity no matter what size your project is? | Your project is big enough already. In my experience, one class and one function have been sufficient to consider the need for unit testing. class Simple {
boolean reallySimple() {
return true; // how do we make sure it doesn't change to false?
}
}
class SimpleTest {
void assertReallySimple() {
// ensure reallySimple return value isn't changed unexpectedly
UnitTestFramework.assertTrue(new Simple().reallySimple());
}
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159964",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52940/"
]
} |
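The test above is written against an imaginary UnitTestFramework to stay language-neutral. For comparison, here is roughly the same thing with a real .NET framework; this sketch assumes xUnit, purely as an example, and NUnit or MSTest would look almost identical.

using Xunit;

public class Simple
{
    public bool ReallySimple()
    {
        return true; // the test below makes sure this doesn't silently change to false
    }
}

public class SimpleTests
{
    [Fact]
    public void ReallySimple_ReturnsTrue()
    {
        Assert.True(new Simple().ReallySimple());
    }
}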
159,976 | I have been writing quite a lot of PHP for nearly two years. Now I am doing .NET (mainly c#) development. However, sometimes I go back and do some php. My main question is, is it wise for me to continue doing this or should I continue development in C#? Would this harm me in the long run (mind you my main goal is not to be a jack o all trades) or is it a good practice to be doing? | Using two languages at the same time is nothing. It’s not uncommon for programmers to use several different languages every day. Different tasks and different technologies require different languages. Just today, I’ve already used four or five different languages, and that’s interesting because I haven’d done any programming so far. All I’ve done is work on a presentation. As a good programmer it’s essentially required that you know your way around in several languages, and the only way of attaining (and then preserving) reasonable fluency is by using those languages. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159976",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52940/"
]
} |
159,994 | When providing a business logic method to get a domain entity, should the parameter accept an object or an ID? For example, should we do this: public Foo GetItem(int id) {} or this: public Foo GetItem(Foo foo) {} I believe in passing objects around, in their entirety, but what about this case where we're getting an object and we only know the ID? Should the caller create an empty Foo and set the ID, or should it just pass the ID to the method? Since the incoming Foo will be empty, except for the ID, I don't see the benefit of the caller having to create a Foo and set its ID when it could just send the ID to the GetItem() method. | Just the single field being used for the lookup. The caller doesn't have a Foo , it's trying to get one. Sure, you can make a temporary Foo with all other fields left blank, but that only works for trivial data structures. Most objects have invariants that would be violated by the mostly-empty-object approach, so avoid it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/159994",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29526/"
]
} |
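A short sketch of the two signatures from the question, using invented names and a fake lookup. The overload that takes a whole Foo is shown only to illustrate the objection in the answer: the caller has to build a hollow object, and everything except the Id is silently ignored.

public class Foo
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class FooRepository
{
    // Preferred: the lookup asks for exactly the value it needs.
    public Foo GetItem(int id)
    {
        // ... fetch by primary key; hard-coded here for illustration ...
        return new Foo { Id = id, Name = "loaded from storage" };
    }

    // The alternative from the question: a mostly-empty Foo used as a probe.
    public Foo GetItem(Foo probe)
    {
        return GetItem(probe.Id); // every other member of probe is ignored
    }
}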
160,018 | I've created a pretty simple templating framework and have default implementations for some of my interfaces used for passing around information. I store these in MyFramework.Default namespace However, I am unsure of where to place example implementations, such as a template that creates a concrete class. This template seems like it would be useful to people who use my framework, but I am not sure if it belongs in a separate assembly, or perhaps a namespace within the framework. If it's a namespace, is there any convention for its name? | Just the single field being used for the lookup. The caller doesn't have a Foo , it's trying to get one. Sure, you can make a temporary Foo with all other fields left blank, but that only works for trivial data structures. Most objects have invariants that would be violated by the mostly-empty-object approach, so avoid it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/160018",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39814/"
]
} |
160,185 | Let's say I start and develop some project under open-source license, and accept some community contributions. How shaky is the ground I stand on if I decide to take the project commercial and closed-source (or split license)? This question doesn't directly address the issue of a project with community contributions, which feels like different territory, at least as far as ethics are concerned. Legally, this might be iffy as well, because I'm not sure whether contributions fall under my copyright, or whether the contributor hold the copyright to the part of the project he added. Am I safe (ethically and legally) as long as I'm up front with the possibility that I may take the project commercial in the future? | In general, community contributors would retain their copyright to the code they contributed to the project. They license the contribution to you when they contribute the code. If you want to retain the possibility of changing the license terms in the future, you would generally need contributors to assign their copyright to you (either personally or a corporate entity you create to own the copyrights for this project) or the changed terms would need to be compatible with the new license terms. Of course, if you require this sort of copyright assignment paperwork before you can accept a contribution from the community, it is much less likely that the community will decide to contribute and you'll have to do a fair amount of work getting the legal forms in order before accepting each contribution. Plus, there is a strong chance that your project will get forked if and when you decide to change the license terms. It strikes me as unlikely that a new open source project is going to get a lot of contributions from the community under those circumstances. It would generally be easier if you licensed the product under the split license terms initially or if the initial license terms were compatible with a future closed-source product. Code that is under the BSD license, for example, can be incorporated into a commercial product at any time so if the project and contributions are under the BSD license, you could easily release a commercial version of the same product. Your intention (or option) to produce a commercial product, though, will likely decrease interest in making contributions to your project-- most open source developers are uninterested in making unpaid contributions to a commercial product. Of course, as with any legal issues, you'd want to talk with a lawyer rather than relying on a forum post before taking any sort of definitive action. You'll almost certainly want that lawyer to draft the copyright assignment document you'll need people to sign and you'll need to discuss your plans for the future with the lawyer to ensure that everything is set up correctly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/160185",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40350/"
]
} |
160,191 | Semi-open (or Half-Open, Half-Closed , Half-Bounded ) intervals ( [a,b) , where x belongs to the interval iff a <= x < b ) are pretty common on programming, as they have many convenient properties. Can anyone offer a rationale that explains why SQL's BETWEEN uses a closed interval ( [a,b] )? This is esp. inconvenient for dates. Why would you have BETWEEN behave like this? | I think inclusive BETWEEN is more intuitive (and apparently, so did the SQL designers) than a semi-open interval. For example, if I say "Pick a number between 1 and 10", most people will include the numbers 1 and 10. The open-ended interval is actually particularly confusing for non-developers because it's asymmetric. SQL is occasionally used by non-programmers to make simple queries, and semi-open semantics would have been much more confusing for them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/160191",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8082/"
]
} |
160,368 | I am an independent contractor and, as such, I interview 3-4 times a year for new gigs. I am in the midst of that cycle now and got turned down for an opportunity even though I felt like the interview went well. The same thing has happened to me a couple of times this year. Now, I am not a perfect guy and I don't expect to be a good fit for every organization. That said, my batting average is lower than usual so I politely asked my last interviewer for some constructive feedback, and he delivered! The main thing, according to the interviewer, was that I seemed to lean too much towards the use of abstractions (such as LINQ) rather than towards lower-level, organically grown algorithms. On the surface, this makes sense--in fact, it made the other rejections make sense too because I blabbed about LINQ in those interviews as well and it didn't seem that the interviewers knew much about LINQ (even though they were .NET guys). So now I am left with this question: If we are supposed to be "standing on the shoulders of giants" and using abstractions that are available to us (like LINQ), then why do some folks consider it so taboo? Doesn't it make sense to pull code "off the shelf" if it accomplishes the same goals without extra cost? It would seem to me that LINQ, even if it is an abstraction, is simply an abstraction of all the same algorithms one would write to accomplish exactly the same end. Only a performance test could tell you if your custom approach was better, but if something like LINQ met the requirements, why bother writing your own classes in the first place? I don't mean to focus on LINQ here. I am sure that the JAVA world has something comparable, I just would like to know why some folks get so uncomfortable with the idea of using an abstraction that they themselves did not write. UPDATE As Euphoric pointed out, there isn't anything comparable to LINQ in the Java world. So, if you are developing on the .NET stack, why not always try and make use of it? Is it possible that people just don't fully understand what it does? | I don't think it's the use of abstractions per se that's objectionable. There are two other possible explanations. One is that abstractions are all leaky at one time or another. If you give the impression, correct or not, that you don't understand the underlying fundamentals, that might reflect poorly in an interview. The other possible explanation is the fanboy effect. If you talk excitedly about LINQ, and repeatedly bring it up in an interview with a company who doesn't use it and has no current plans to do so, that gives the impression that you would be dissatisfied or even disgruntled working with older technologies. It can also give the impression that your enthusiasm for one product has blinded you to alternatives. If you truly think you would be happy in a non-LINQ shop, try asking about what they do use, and tailor your answers accordingly. Show them that while you prefer LINQ, you are competent using whatever tools are at hand. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/160368",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44497/"
]
} |
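To make the "abstraction versus hand-rolled code" trade-off concrete, here is an invented example (not from the original discussion) of the same query written both ways. Neither version is wrong; the LINQ form states the intent, the manual form spells out the mechanics, and which one a team prefers is largely what the interview disagreement was about.

using System.Collections.Generic;
using System.Linq;

public class Order
{
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public static class OrderQueries
{
    // LINQ: one declarative expression.
    public static List<string> BigSpenders(IEnumerable<Order> orders, decimal threshold) =>
        orders.Where(o => o.Total > threshold)
              .OrderByDescending(o => o.Total)
              .Select(o => o.CustomerName)
              .Distinct()
              .ToList();

    // The same logic spelled out with loops.
    public static List<string> BigSpendersManual(IEnumerable<Order> orders, decimal threshold)
    {
        var matching = new List<Order>();
        foreach (var o in orders)
            if (o.Total > threshold)
                matching.Add(o);

        matching.Sort((a, b) => b.Total.CompareTo(a.Total));

        var names = new List<string>();
        foreach (var o in matching)
            if (!names.Contains(o.CustomerName))
                names.Add(o.CustomerName);
        return names;
    }
}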
160,522 | When you're in Python or Javascript, you should always put binary operators at the end of the previous line, in order to prevent newlines from terminating your code prematurely; it helps you catch errors. But in C or C++, this isn't an issue, so I'm wondering: Is there any reason for me to prefer the second version to the first? return lots_of_text
+ 1; versus return lots_of_text +
1; (For example, does one of them help prevent other kinds of errors? Or is one of them considered more readable?) | As you can see from the answers, there is no consensus on this matter. Unless you work in a team, use what you are more comfortable with. I prefer inserting a newline before operators. Whenever I have to break lines, I usually put at most one term of the same "level" on a line: Newton's law of gravitation in Python: force = (
gravitational_constant
* mass_1
* mass_2
/ (distance * distance)
) Compare this to: force = (
gravitational_constant *
mass_1 *
mass_2 /
(distance * distance)
) I want to know, that I "divide by distance squared", I don't want to know, that "mass_2 gets divided", because that's not how I think of mathematical expressions. Further, I usually want to know first, what I am doing (operator), before I care about what I do things with (operands). Or consider this convoluted SQL statement: WHERE
a = 1
AND b = 2
AND c = 3
AND ( -- or put the OR on one line together with the AND
d = 3
OR e = 1)
AND x = 5 This allows me to see how the individual conditions are connected very easily, just by skimming from top to bottom without having to read every line until the end to find the operator as opposed to: WHERE
a = 1 AND
b = 2 AND
c = 3 AND
(
d = 3 OR
e = 1) AND
x = 5 I think about the former in terms of " X is true", then I amend that by saying: " AND this is also true" which feels more natural to me than the other way around. Further, I find the first much easier to parse visually. Or a PHP example: $text = "lorem ipsum"
. "dolor sit amet, "
. "consectetur adipisicing elit, "
. "sed do eiusmod tempor"; Again, I can just skim read vertically to see I'm simply concatenating text, because most of the time I feel that I do not actually care what is inside the strings/conditions. Of course, I would not apply this style unconditionally. If putting the newline after an operator seems to make more sense to me, I would do so, but I can't think of an example at the moment. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/160522",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11833/"
]
} |
160,602 | When debugging, I sometimes find that I make some changes and I am not 100% sure why those changes correct some bug in the program. Is it essential to understand every single detail about why some bugs were occurring and why certain changes eliminated those bugs? Or is it common among developers to sometimes get the program working without really knowing the details about why the fix worked? | I would say that it is essential to understand every single detail about why some bugs were occurring and why certain changes eliminated those bugs, and it is also common among developers to sometimes get the program working without really knowing the details about why the fix worked! The art of changing things until a bug disappears, without understanding what caused it or why the change fixed it, is often called "voodoo programming," and it's not a compliment. There is really no way you can possibly be confident that you have genuinely fixed a bug, as opposed to partially fixing it for the particular case you were investigating, if you don't understand what caused it. In the worst case, you haven't done anything at all except move the bug: I remember from first year computing at uni, when many students were learning C and pointers for the first time, pointer bugs would often stop manifesting when they changed things randomly, because the changes would rearrange data structures in memory enough to make the pointer bug stomp over a different bit of memory. Obviously that hasn't helped at all . But having said that, the commercial realities of programming are often such that satisfying the client that a bug is fixed is more important than satisfying yourself. I'd never recommend you declare something fixed if you had no idea what caused it, but if you can see that some code was problematic, and you reworked it, even if you're "not 100% sure" how that caused the specific bug to manifest, sometimes you just have to move on to the next bug before the client screams too loudly about your slow progress. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/160602",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28424/"
]
} |
160,675 | What do you call classes without methods? For example, class A
{
public string something;
public int a;
} Above is a class without any methods. Does this type of class have a special name? | Most of the time: an anti-pattern. Why? Because it facilitates procedural programming with "operator" classes and data structures. You separate data and behaviour, which isn't exactly good OOP. Often: a DTO (Data Transfer Object) - a read-only data structure meant to exchange data, derived from a business/domain object. Sometimes: just a data structure. Sometimes you simply have to have such structures to hold data that is plain and simple and has no operations on it. But then I wouldn't use public fields, but accessors (getters and setters) - see the short sketch after this entry. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/160675",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52343/"
]
} |
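A tiny sketch of the "DTO, not bare fields" point at the end of the answer above, with invented names: the shape of the data stays the same, but properties leave room for validation later and work with infrastructure (such as Windows Forms data binding) that looks for properties rather than fields.

// Bare public fields, as in the question.
public class PersonRecord
{
    public string LastName;
    public int Age;
}

// The same data as a DTO with auto-properties and no behaviour.
public class PersonDto
{
    public string LastName { get; set; }
    public int Age { get; set; }
}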
160,732 | More and more I'm seeing functions being declared like var foo = function() {
// things
}; Instead of how I had learned, like function foo() {
// things
} What's the difference? Better performance? Scope? Should I be using this method? | var foo = function() {} defines a variable that references an anonymous function. function foo() {} defines a named function foo . Either can be passed by name as function parameters and either can be instantiated if the intended use is for OOP. At the end of the day, which one you use is largely dictated by your specific use-case (Javascript is fun like that ;)). If you end up using the former, I would strongly suggest that you name the function: var foo = function MY_function() {} . This naming convention helps your debugger callstack not be useless. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/160732",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29187/"
]
} |
160,844 | I am a developer at a good company. I was given a task to accomplish within a week, but I finished it in 4 days; my boss, the client, and the other team members are all happy with my work - even I was! However, a thought suddenly occurred to me: "For this work I took some code from the internet and mixed it with my own programming, which got the result out faster. My worry is that I should have done the whole task on my own so that I would have a better understanding of it (even the part that was taken from the internet)." Can anyone tell me if I am ruining my programming career this way (I mean by using other people's code)? | Probably not. Getting parts of solutions online is not that uncommon. If you don't understand what you are copying and pasting from the internet, you will eventually run into trouble. If you take a little bit of time and effort to learn about the code you find and how to adapt it better to your own circumstances, that is fine! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/160844",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/61839/"
]
} |
160,892 | My resume is no longer relevant. It can no longer contain an adequate description of my technical abilities. One can get a much better sense of what I am capable of by looking at my GitHub repositories, my Stack Exchange profiles, and the various courses that I am taking at Udacity and Coursera. The problem is that I have no idea how to tell employers that those are the places to look if they want an accurate description of what I can do. Every time a recruiter contacts me I gently nudge them towards all the resources I just mentioned and I also provide a link to a publicly visible Google doc that contains my resume along with links to all those resources. Yet, they keep coming back asking for a more descriptive resume. How can I make it even more blatantly obvious that if somebody wants to hire me then they can save themselves a whole bunch of trouble by just clicking on a few links and browsing around? | Look at a resume as a distilled brochure that advertises highlights from your skills and experience. A combination of your github and SO profiles and a bunch of other online resources may be complete and accurate, but it isn't sorted or otherwise prepared for easy reading in any way. People who hire want you to tell them what you think distinguishes you from the rest, so your resume should be written so that you pass the first three seconds of eyeballing; if it doesn't, three seconds is all you get. Nobody can form any useful opinion about your skills in three seconds of looking at your github account page. If you have too much to fit on your resume, great - pick the absolute highlights, and refer to online resources for more. Aim for 'impressive', not 'exhaustive'. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/160892",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
160,922 | I would like to know if people have already asked back some technical questions to your interviewer. Saying something like "Well, you asked me a bunch of technical questions, and now, do you mind if I ask you some? If I am going to join your company, we will work closely together and I want to make sure my I respect the skills of my coworkers like they respect mine...". Doesn't it sound arrogant? I am curious to have your opinion or experience (as the interviewee or the interviewer, of course). P.S: throwaway account as I dont want that people who know me know I am preparing interviews. | "Well, you asked me a bunch of technical questions, and now, do you mind if I ask you some? If I am going to join your company, we will work closely together and I want to make sure my I respect the skills of my coworkers like they respect mine...". Doesn't it sound arrogant? Yes, it does. But there are other ways to get that information. Ask to look at the codebase. Say something like "I'd like to see the best and the worst part you can think of." Engage them in conversation, use lots of terms that lesser developers wouldn't understand. Mention bloggers and authors, see if you get a blank expression. Ask them about the technology stack and ask for reasons behind various decisions, compare the frameworks they use with the ones you've used. Sometimes you can impress as much by the type of questions you ask as the type of questions you answer. I once had a junior developer come in with an A4 sheet full of questions. They were the right questions to ask. They showed a deep (for a junior) knowledge of the right and wrong ways to do things. He was offered the job. I wouldn't ask someone to code up a method to display the Fibonacci sequence. But then I also wouldn't ask that as an interviewer. It's a waste of time and teaches me nothing about the person. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/160922",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/61891/"
]
} |
160,947 | I used to heavily rely on session variables in the past, but have recently found many of them to be unnecessary, using things like query string parameters instead. A colleague of mine refuses to use session variables. Is this a realistic goal and should session variables be avoided for any practical reasons? Can session variables be avoided completely (except for session cookies to allow logins) and would this result in better designs? Some of the reasons my colleague has for not using them: Untyped nature of session variables Session time-outs causing loss of state Global scope nature of session variables Load balancing servers losing sessions (.Net specific?) Application pools/servers restarting They are unnecessary | If you have a session variable in your application, ask yourself this: When I click the back button of my browser, what value do I
want my variable to have? If the answer is "the current value", session variables may be useful. An example would be a shopping cart: you don't expect things to be removed from the shopping cart as you go back through the history. It's always in its current state. If the answer is "a previous value", you should not be using session variables. Bad uses I have seen include passing a parameter between pages. If I click the back button to get back to a page, the page does not necessarily get the correct parameter. Also, if I open two tabs, how is my site going to behave then? Getting the back button behaviour right is by no means the be-all-and-end-all, but it helps you think about a web site as a stateless application. In general, I find appropriate uses of session variables to be few and far between. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/160947",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4722/"
]
} |
161,003 | I've been working on a software project mostly solo for over 5 years. It was a mess to begin with (I am the third or fourth developer to be working on it), and although it's less of a mess now it is still incredibly disorganized. The rate of progress in getting it under control is glacial and I'm starting to feel despondent over the state that it's in. How do I really start fixing it? Project specifics: It is a sales program written almost entirely in Visual Basic Classic (VB6) with a MySQL back end and a reporting engine written in C#. The C# reporting module is a joy to work on, it was only just written in the past couple years and before that all reports were done in Crystal Reports 9 (yes, we still have some reports that rely on it). The actual program itself, however, is a complete disaster. There are not quite 90k LOC total, and about 10k lines of comments (mostly not documentation, but old code that's been commented out). 158 form files, and 80 module files. I have no idea how many of those are actually used, because some features of the program are simply deprecated and (uh, sometimes) noted as such without having the associated code removed from the program. I would guess that only 50% of the code is in actual productive use. I am afraid of touching a lot of the code just because I am not sure if I'm breaking something that one obscure client relies on, it's happened on more occasions than I can count. It's like there are landmines strewn throughout the code. There is not really any structure to the project. It is not object oriented except in the few places I have had the patience to reform so far. If you need to get data on a form, you instantiate a database object, declare your query right there in the function, execute it and do what you will with the dataset. When I began working on the project there was no source control in use. I tried to encourage the other people I was working on to use it, but I was the new guy and my attempts to get people to use subversion all but failed. The company's lead developer finally caught a mercurial bug in the last couple years and he's made sure that all developers do use source control on all projects now, so at least that's some progress. I think if I was able to work on reforming the project full time I would be able to make decent progress and maybe even have an estimate for how long it would take me to fully makeover the project, but it is in active use and I am constantly being asked to put out fires, fix bugs, add features, etc. etc. So how do I start to really fix this project? Try to tool VB6 with another language? Try and rewrite the program in my spare time? Or is this completely hopeless? Update After this post I went back to the project with renewed zeal, but fell back into hopelessness within a few months after seeing such a slow rate of progress. I then repeated this cycle 2 or 3 more times over the next year or so. I have since moved on to a different job. Although after so many years of vb6, and only peripheral experience with other technologies the search was difficult and I faced many rejections along the way (around a dozen interviews over the course of a year). My advice to others in this situation is to consider leaving for this factor alone. Consider the damage you can do to your career by staying in a dead end position such as this. | Now that it's in source control, you can get rid of the commented out code. 
I started here in a similar situation (80KLOC VB6 app with no source control, no real structure, almost everything done in event handlers). In about 2 years on and off I've gotten more than half converted to C# (usually when significant new features are required). All new C# code has unit test coverage. It definitely takes way more time to convert to C# though. If you're not adding significant new modules, I wouldn't go down that route. One thing I did was create a rudimentary data access layer that auto-generated itself from the database. That at least caught problems where a table column name changed and I didn't find all the places in the code. Also, I've slowly moved the business logic into modules, out of the form event handlers. I, however, had the advantage that the application was only internal. Since I only had one site to deploy to, I could take some bigger risks than you could. If I made a mistake, it wasn't usually a big deal to fix it. Sounds like you don't have that luxury. I really think your best bet is to take the following approach: Learn every bit in excruciating detail. This makes it less likely you'll break something inadvertently. Refactor mercilessly to improve separation and structure, but don't try to shoehorn a new methodology like object-oriented programming in there, since VB6 just sucks at it. Treat it as a learning experience. How would you know a different way is better if you'd never seen the inferior way? Don't bother rewriting in a new language/framework/platform unless you have a major module to write. Even then, consider carefully. Remember, the goal of the program is to be a product that makes your company money. It doesn't have to be a flawless work of art (and I'm a perfectionist so it's really hard for me to admit that). Sometimes it's better to embrace pragmatism. There are supposedly a lot of COBOL programmers out there maintaining massive amounts of legacy code. I doubt they're all madly working away at rewriting it in some new language. :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161003",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/61915/"
]
} |
161,099 | I am always given the advice that developers need to stay up to date with the latest in technology - things like webrtc, updates on html5 and css3 and new js libraries, software methodologies like TDD, DDD, and BDD. The question is why ? Why do we need to constantly update ourselves? Can't we just stick with what we know and become better with it? | New technologies surface for a reason. Usually that reason is because they are more efficient or powerful at accomplishing a particular task. There is still value to be had in sticking with old technology for the sake of legacy systems, but when they eventually reach their end of life you'll be behind the game. Business reasons aside, constantly learning new technologies keeps you on your toes and will open your eyes to different ways of approaching tasks, even in old technologies and so on, so forth. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161099",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25953/"
]
} |
161,149 | Let's say you are given the following... List<Thing> theThings = fubar.Things.All(); If there were nothing to return, what would you expect fubar.Things.All() to return? Edit:
Thanks for the opinions. I'll wait a bit and accept the entry with the most ups. I agree with the responses so far, particularly those suggesting an empty collection. A vendor provided an API with several calls similar to the example above - a vendor who did $4.6 million in revenue via their API(s) last year, by the way. They do something I fundamentally disagree with: they throw an exception. | Of the two possibilities (i.e. returning null or returning an empty collection) I would pick returning an empty collection, because it lets the caller skip a check of the returned value. Instead of writing this List<Thing> theThings = fubar.Things.All();
if (theThings != null) {
for (Thing t : theThings) {
t.doSomething();
}
} they would be able to write this: List<Thing> theThings = fubar.Things.All();
for (Thing t : theThings) {
t.doSomething();
} This second code fragment is shorter and easier to read, because the nesting level is lower by one. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161149",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62010/"
]
} |
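A sketch of what the convention recommended above looks like on the implementing side; Thing and the repository are invented names. Enumerable.Empty<T>() returns a cached empty sequence, so the "nothing found" case costs no extra allocation.

using System.Collections.Generic;
using System.Linq;

public class Thing
{
    public void DoSomething() { }
}

public class ThingRepository
{
    private readonly Dictionary<int, List<Thing>> _byOwner = new Dictionary<int, List<Thing>>();

    // Always returns a sequence, never null, so callers can foreach directly.
    public IEnumerable<Thing> All(int ownerId)
    {
        List<Thing> found;
        return _byOwner.TryGetValue(ownerId, out found)
            ? (IEnumerable<Thing>)found
            : Enumerable.Empty<Thing>();
    }
}

A caller can then write foreach (var t in repo.All(42)) t.DoSomething(); with no null guard at all.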
161,222 | I submitted an application I wrote to some other architects for code review. One of them almost immediately wrote me back and said "Don't use static . You can't write automated tests with static classes and methods. static is to be avoided." I checked and fully 1/4 of my classes are marked static . I use static when I am not going to create an instance of a class because the class is a single global class used throughout the code. He went on to mention something involving mocking, IOC/DI techniques that can't be used with static code. He says it is unfortunate when 3rd party libraries are static because of their un-testability. Is this other architect correct? update: here is an example: APIManager - this class keeps dictionaries of 3rd party APIs I am calling along with the next allowed time. It enforces API usage limits that a lot of 3rd parties have in their terms of service. I use it anywhere I am calling a 3rd party service by calling Thread.Sleep(APIManager.GetWait("ProviderXYZ")); before making the call. Everything in here is thread safe and it works great with the TPL in C#. | He is too general about it. He is correct, it hinders testing. However, static classes and methods have their place and it really depends on the context. Without code examples you can't really tell. I use static when I am not going to create an instance of a class because the class is a single global class used throughout the code. This can be severe code smell. What are you using the classes for? To hold data? To access a database? Then he is correct. You should look in to dependency injection in that case, as such a static class is effectively an implicit singleton. If you are using them for extension methods or helpers that don't change the state and just operate on the parameters you provide, those usually are fine. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161222",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62038/"
]
} |
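An illustrative sketch of the reviewer's point applied to the APIManager example, with invented names and deliberately simplified throttling logic: keep a single shared instance if you like, but reach it through an interface so tests can substitute a stub instead of sleeping for real.

using System;
using System.Collections.Generic;

public interface IApiThrottle
{
    TimeSpan GetWait(string provider);
}

public class ApiThrottle : IApiThrottle
{
    private readonly object _sync = new object();
    private readonly Dictionary<string, DateTime> _nextAllowed = new Dictionary<string, DateTime>();
    private readonly TimeSpan _minInterval;

    public ApiThrottle(TimeSpan minInterval) { _minInterval = minInterval; }

    public TimeSpan GetWait(string provider)
    {
        lock (_sync)
        {
            var now = DateTime.UtcNow;
            DateTime next;
            var wait = _nextAllowed.TryGetValue(provider, out next) && next > now
                ? next - now
                : TimeSpan.Zero;
            _nextAllowed[provider] = now + wait + _minInterval;
            return wait;
        }
    }
}

// Production code asks the shared IApiThrottle instance for the wait before each call;
// a unit test passes a stub that always returns TimeSpan.Zero, so no real sleeping occurs.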
161,231 | Let's consider something like a GUI application where the main thread is updating the UI almost instantaneously, and some other thread is polling data over the network or doing something that is guaranteed to take 5-10 seconds to finish the job. I've received many different answers to this, but some people say that if it is a race condition of statistical impossibility, don't worry about it at all, while others have said that if there's even a 10^-53 % chance (I kid you not on the numbers, this is what I've heard) of some voodoo magic happening due to a race condition, always obtain/release locks on the thread that needs it. What are your thoughts? Is it good programming practice to handle race conditions in such statistically impossible situations, or would it be totally unnecessary or even counterproductive to add more lines of code and hinder readability? | If it is truly a 1 in 10^55 event, there would be no need to code for it. That would imply that if you did the operation 1 million times a second, you'd get one bug every 3 * 10^41 years which is, roughly, 10^31 times the age of the universe. If your application has an error only once in every trillion trillion billion ages of the universe, that's probably reliable enough. However, I would wager very heavily that the error is nowhere near that unlikely. If you can conceive of the error, it is almost certain that it will occur at least occasionally, thus making it worth coding correctly to begin with. Plus, if you code the threads correctly at the outset so that they obtain and release locks appropriately, the code is much more maintainable in the future. You don't have to worry, when you're making a change, that you have to re-analyze all the potential race conditions, re-compute their probabilities, and assure yourself that they won't recur. (A short locking sketch follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161231",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62049/"
]
} |
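A minimal sketch of the "just take the lock" advice for the scenario in the question (one UI thread, one polling thread); the names are invented, and a real WinForms or WPF app would additionally marshal UI updates back onto the UI thread.

using System.Collections.Generic;

public class LatestResults
{
    private readonly object _sync = new object();
    private List<string> _latest = new List<string>();

    // Called by the background polling thread every few seconds.
    public void Publish(List<string> fresh)
    {
        lock (_sync)
        {
            _latest = fresh;
        }
    }

    // Called by the UI thread; hands out a snapshot so the UI never
    // iterates a list the poller is about to replace.
    public IReadOnlyList<string> Snapshot()
    {
        lock (_sync)
        {
            return _latest.ToArray();
        }
    }
}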
161,265 | I developed an application for the iPhone and now I want it on the App Store. Many of my iOS geek friends told me to test it on an actual device, i.e. on an iPhone. So I wonder why it is necessary to test my iPhone app on an actual iPhone device when Apple provides a simulator that is nearly the same as my device. | You need to test the application on a real device to at least see how it behaves with: Real device hardware Real internet connection (including the use of a cell network vs WiFi) Your fingers instead of a mouse Performance with other apps running in the background The limitations of the iPhone, like CPU, disk capacity and memory ( A Simulator is not an Emulator ). Real context: is it easy to use your app on the train, or while walking down the street? How about in bright sunlight or in the rain? iOS developers, please continue this list. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161265",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62058/"
]
} |
161,283 | I often see projects (in Java projects and teams using Eclipse) that prefix function parameters with p . For example public void filter (Result pResult) ... I personally don't see any benefit in this, but would like to know what the reasoning is. The best explanation I've heard yet is that it is to distinguish the name of identical named fields.I have my issues with that explanation but I can understand the point. | The practices of adding meaningful prefixes to symbols, such as the well-publicized Hungarian Notation , date back to the times when IDEs did not exist or were too primitive. Today, when finding a point of declaration is a mouse click away, there is no point in spoiling the most precious part of the name, its first few letters, by assigning a common prefix. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161283",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44606/"
]
} |
161,293 | In a git environment, where we have modularized most projects, we're facing the one project per repository or multiple projects per repository design issue. Let's consider a modularized project: myProject/
+-- gui
+-- core
+-- api
+-- implA
+-- implB Today we're having one project per repository . It gives freedom to release individual components tag individual components But it's also cumbersome to branch components as often branching api requires equivalent branches in core , and perhaps other components. Given we want to release individual components can we still get the similar flexibility by utilizing a multiple projects per repository design. What experiences are there and how/why did you address these issues? | There are three major disadvantages to "one project per repository", the way you've described it above. These are less true if they are truly distinct projects, but from the sounds of it changes to one often require changes to another, which can really exacerbate these problems: It's harder to discover when bugs were introduced. Tools like git bisect become much more difficult to use when you fracture your repository into sub-repositories. It's possible, it's just not as easy, meaning bug-hunting in times of crisis is that much harder. Tracking the entire history of a feature is much more difficult. History traversing commands like git log just don't output history as meaningfully with fractured repository structures. You can get some useful output with submodules or subtrees, or through other scriptable methods, but it's just not the same as typing tig --grep=<caseID> or git log --grep=<caseID> and scanning all the commits you care about. Your history becomes harder to understand, which makes it less useful when you really need it. New developers spend more time learning the Version Control's structure before they can start coding. Every new job requires picking up procedures, but fracturing a project repository means they have to pick up the VC structure in addition the code's architecture. In my experience, this is particularly difficult for developers new to git who come from more traditional, centralized shops that use a single repository. In the end, it's an opportunity cost calculation. At one former employer, we had our primary application divided into 35 different sub-repositories. On top of them we used a complicated set of scripts to search history, make sure state (i.e. production vs. development branches) was the same across them, and deploy them individually or en masse. It was just too much; too much for us at least. The management overhead made our features less nimble, made deployments much harder, made teaching new devs take too much time, and by the end of it, we could barely recall why we fractured the repository in the first place. One beautiful spring day, I spent $10 for an afternoon of cluster compute time in EC2. I wove the repos back together with a couple dozen git filter-branch calls. We never looked back. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161293",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62034/"
]
} |
161,303 | Possible Duplicate: When are Getters and Setters Justified Why are public and private accessors considered good practice? In my time as developer I learned that properties can be very useful. I use properties to control read and write access or to add something like validation checks. But assuming that I have an auto property with a public getter and setter. public string LastName { get; set; } In that case I don't see any point in using properties over fields. I could also do following: public string LastName; Is there anything that speaks against a public field in this case?
Why should I use a property? | In general, yes, using public fields instead of properties is a bad practice. The .NET framework by and large assumes that you will use properties instead of public fields. For example, databinding looks up properties by name: tbLastName.DataBindings.Add("Text", person, "LastName"); // textbox binding Here are some things you can easily do with properties but not with fields: You can add validation (or any other code, within reason) to values in a property: private string lastName;
public string LastName
{
get { return lastName; }
set
{
if(string.IsNullOrEmpty(value))
throw new ArgumentException("LastName cannot be null or empty");
lastName = value;
}
} You can easily change accessibility levels for getters and setters: public string LastName { get; private set; } You can use them as part of an interface definition or an abstract class. If you start out with public fields and assume it'll be easy to change to properties later, you will likely run into trouble. If you have any code in other assemblies that depends on accessing your public fields, you will have to rebuild and redeploy all of them if you switch to property-based access. You might also have code that uses reflection to get at the values... that'd also need to be changed. And if you're really unlucky, those changes would have to be made in a part of the codebase you have no control over and you'll be stuck hacking around public fields. For more gory details, check out item 1 in the first chapter of Effective C# by Bill Wagner. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161303",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62059/"
]
} |
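One more concrete illustration of the "part of an interface definition" point above, with invented names: C# interfaces can declare properties but not fields, so a type whose state is exposed as public fields cannot satisfy this kind of contract without being rewritten.

public interface IPerson
{
    string LastName { get; set; } // an interface member can be a property, never a field
}

public class Person : IPerson
{
    public string LastName { get; set; }
}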
161,443 | While learning Haskell I have faced a lot of tutorials trying to explain what are monads and why monads are important in Haskell. Each of them used analogies so it would be easier to catch the meaning.
At the end of the day, I have end up with 3 differents view of what a monad is: View 1: Monad as a label Sometimes I think that a monad as a label to a specific type.
For example, a function of type: myfunction :: IO Int myfunction is a function that whenever is performed it will yield an Int value.
The type of the result is not Int but IO Int. So, IO is a label of the Int value warning the user to know that the Int value is the result of a process where a IO action has been made. Consequently, this Int value has been marked as value that came from a process with IO therefore this value is "dirty". Your process is not pure anymore. View 2: Monad as a private space where nasty things can happen. In a system where all the process are pure and strict sometimes you need to have side-effects. So, a monad is just a little space that is allow to you for doing nasty side-effects.
In this space you are allowed to escape the pure world, go to the impure one, run your process and then come back with a value. View 3: Monad as in category theory This is the view that I don't fully understand.
A monad is just a functor to the same category or a sub-category.
For example, you have the Int values and as a subcategory IO Int, that are the Int values generated after a IO process. Are these views correct? Which is more accurate? | Views #1 and #2 are incorrect in general. Any data-type of kind * -> * can work as a label, monads are much more than that. (With the exception of the IO monad) computations within a monad are not impure. They simply represent computations that we perceive as having side effects, but they're pure. Both these misunderstandings come from focusing on the IO monad, which is actually a bit special. I'll try to elaborate on #3 a bit, without getting into category theory if possible. Standard computations All computations in a functional programming language can be viewed as functions with a source type and a target type: f :: a -> b . If a function has more than one argument, we can convert it to an one-argument function by currying (see also Haskell wiki ). And if we have just a value x :: a (a function with 0 arguments), we can convert it into a function that takes an argument of the unit type : (\_ -> x) :: () -> a . We can build more complex programs form simpler ones by composing such functions using the . operator. For example, if we have f :: a -> b and g :: b -> c we get g . f :: a -> c . Note that this works for our converted values too: If we have x :: a and convert it into our representation, we get f . ((\_ -> x) :: () -> a) :: () -> b . This representation has some very important properties, namely: We have a very special function - the identity function id :: a -> a for each type a . It is an identity element with respect to . : f is equal both to f . id and to id . f . The function composition operator . is associative . Monadic computations Suppose we want to select and work with some special category of computations, whose result contains something more than just the single return value. We don't want to specify what "something more" means, we want to keep things as general as possible. The most general way to represent "something more" is representing it as a type function - a type m of kind * -> * (i.e. it converts one type to another). So for each category of computations we want to work with, we'll have some type function m :: * -> * . (In Haskell, m is [] , IO , Maybe , etc.) And the category will contains all functions of types a -> m b . Now we would like to work with the functions in such a category in the same way as in the basic case. We want to be able to compose these functions, we want the composition to be associative, and we want to have an identity. We need: To have an operator (let's call it <=< ) that composes functions f :: a -> m b and g :: b -> m c into something as g <=< f :: a -> m c . And, it must be associative. To have some identity function for each type, let's call it return . We also want that f <=< return is the same as f and the same as return <=< f . Any m :: * -> * for which we have such functions return and <=< is called a monad . It allows us to create complex computations from simpler ones, just as in the basic case, but now the types of return values are tranformed by m . (Actually, I slightly abused the term category here. In the category-theory sense we can call our construction a category only after we know it obeys these laws.) Monads in Haskell In Haskell (and other functional languages) we mostly work with values, not with functions of types () -> a . So instead of defining <=< for each monad, we define a function (>>=) :: m a -> (a -> m b) -> m b . 
Such an alternative definition is equivalent, we can express >>= using <=< and vice versa (try as an exercise, or see the sources ). The principle is less obvious now, but it remains the same: Our results are always of types m a and we compose functions of types a -> m b . For each monad we create, we must not forget to check that return and <=< have the properties we required: associativity and left/right identity. Expressed using return and >>= they are called the monad laws . An example - lists If we choose m to be [] , we get a category of functions of types a -> [b] . Such functions represent non-deterministic computations, whose results could be one or more values, but also no values. This gives rise to the so-called list monad . The composition of f :: a -> [b] and g :: b -> [c] works as follows: g <=< f :: a -> [c] means to compute all possible results of type [b] , apply g to each of them, and collect all the results in a single list. Expressed in Haskell return :: a -> [a]
return x = [x]
(<=<) :: (b -> [c]) -> (a -> [b]) -> (a -> [c])
g <=< f = concat . map g . f or using >>= (>>=) :: [a] -> (a -> [b]) -> [b]
x >>= f = concat (map f x) Note that in this example the return types were [a] so it was possible that they didn't contain any value of type a . Indeed, there is no such requirement for a monad that the return type should have such values. Some monads always have (like IO or State ), but some don't, like [] or Maybe . The IO monad As I mentioned, the IO monad is somewhat special. A value of type IO a means a value of type a constructed by interacting with the program's environment. So (unlike all the other monads), we cannot describe a value of type IO a using some pure construction. Here IO is simply a tag or a label that distinguishes computations that interact with the environment. This is (the only case) where the views #1 and #2 are correct. For the IO monad: Composition of f :: a -> IO b and g :: b -> IO c means: Compute f that interacts with the environment, and then compute g that uses the value and computes the result interacting with the environment. return just adds the IO "tag" to the value (we simply "compute" the result by keeping the environment intact). The monad laws (associativity, identity) are guaranteed by the compiler. Some notes: Since monadic computations always have the result type of m a , there is no way how to "escape" from the IO monad. The meaning is: Once a computation interacts with the environment, you cannot construct a computation from it that doesn't. When a functional programmer doesn't know how to make something in a pure way, (s)he can (as the last resort) program the task by some stateful computation within the IO monad. This is why IO is often called a programmer's sin bin . Notice that in an impure world (in the sense of functional programming) reading a value can change the environment too (like consume user's input). That's why functions like getChar must have a result type of IO something . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161443",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/32976/"
]
} |
161,488 | I've been thinking about that for quite a while actually. I am not a native English speaker myself but still, I have years of programming experience and I always asked myself this. Why is it named as Exception but not Error since they are errors? It could be PageNotFoundError instead of PageNotFoundException ! | They don't need to be errors at all. The fact that the page is not there may be just an interesting fact rather than an actual error. They seem to get used as errors almost all the time, I admit. But sometimes they're used to break out of loops, or let you know that a string is not a valid number. They can be used to hold and return vast amounts of useful data--as part of a fairly normal return. (Some languages are a bit slow with their exceptions, in that case throwing them frequently is a bad idea.) In theory anyway, an exception merely means "don't do a normal return, go up the call stack until you find someone interested in this." Even a null pointer exception might not mean much to you. You call someone else's code, and then catch a null pointer exception because you know it's apt to blow up, print a message saying whose fault it is, and carry on and get your job done. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161488",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3613/"
]
} |
161,526 | In programming we're often faced with a choice: cover each conceivable use case individually, or solve the general problem: Its obvious that solving the immediate problem is faster, however creating a generalized solution will save time in the future. How do I know when it's best to try and cover a finite list of cases, or make a generic system to cover all possibilities? | First, you pass the salt. Then you pass the pepper. Then you pass the grated parmesan cheese. At this point, you have enough experience to start developing a general condiment-passing system. It works in software projects in the same way: use special purpose systems that you develop as your learning steps to generalized ones, so when it is time to start your general-purpose system, you have a better confidence in what you are building, because you have several special-purpose systems under your belt. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161526",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42685/"
]
} |
161,568 | The IO monad in Haskell is often explained as a state monad where the state is the world. So a value of type IO a monad is viewed as something like worldState -> (a, worldState) . Some time ago I read an article (or a blog/mailing list post) that criticized this view and gave several reasons why it's not correct. But I cannot remember neither the article nor the reasons. Anybody knows? Edit: The article seems lost, so let's start gathering various arguments here. I'm starting a bounty to make things more interesting. Edit: The article I was looking for is Tackling the awkward squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell by Simon Peyton Jones. (Thanks to TacTics's answer.) | The problem with IO a = worldState -> (a, worldState) is that if this were true then we could prove that forever (putStrLn "Hello") :: IO a and undefined :: IO a are equal. Here is the proof courtesy of dolio (2010, irc): forever m
=
m >> forever m
=
fix (\r -> m >> r)
= {definition of >> for worldState -> (a, worldState)}
fix (\r -> \w -> r (snd $ m w)) Lemma: (\r w -> r (snd $ m w)) ⊥ = ⊥ (\r w -> r (snd $ m w)) ⊥
=
\w -> ⊥ (snd $ m w)
=
⊥ . snd . m
=
⊥ Therefore forever m = fix (\r -> \w -> r (snd $ m w)) = ⊥ In particular forever (putStrLn "Hello") = ⊥ and hence forever (putStrLn "Hello") and undefined are equivalent programs. However, clearly they are not supposed to be considered equivalent programs, in theory or in practice. Notice that this model is wrong even without invoking concurrency. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161568",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/61231/"
]
} |
161,704 | We have big enterprise projects they normally involve copying data from a source database to a destination database and then setting up a number of additional applications that sync this data etc. The last project contained 250,000 items (rows of data). The next project will only contain 4,000 items. Project managers / business people believe the project should be 1/10 the time to complete because its only a fraction of the size of the last project. What is a good analogy I can use to explain that writing code to transfer data from one system to another takes the same amount regardless of the number items - writing it for 1 item or for 100,000,000 will take roughly the same amount of time from a programming point of view. | Tell them it's like building a new four lane highway to a remote part of the country. Whether that road gets used by 100 cars a day or 1000 cars a day, the effort to create the road will be about the same. Granted, if it's going to support 1,000,000 cars a day you'll have to make the road a little more robust, but regardless, you're going to have to cut down the same trees, blast through the same mountains, level the same amount of dirt, and these activities are pretty much a fixed cost no matter how many cars use the road. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161704",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8898/"
]
} |
161,753 | What is a good way to name a method that checks if X needs to be done, and does X it if necessary? For example, how to name a method that updates a user list if new users have logged in? UpdateListIfNeeded seems too long, while simple UpdateList implies a possibly expensive and unnecessary operation is done every time. EnsureListUpdated is a variant as well. C# has a bool TryXXX(args, out result) pattern (e.g. int.TryParse(str, out num) ) to check if X is possible and do it, but that is subtly different. | I tend to use Ensure . It carries the meaning of making sure that something is taken care of, however that needs to be done. If it's already fine, just check it and we're done. Otherwise, do it. Either way, just ensure that it gets done . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161753",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2193/"
]
} |
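A minimal C# sketch of the Ensure naming pattern recommended in the answer to 161,753 above; the UserList class, the staleness flag and the refresh logic are hypothetical, invented only to show the check-then-act shape of an Ensure method.
using System.Collections.Generic;
public class UserList
{
    private List<string> users = new List<string>();
    private bool isStale = true;                  // hypothetical staleness flag
    // Callers read Users without caring whether a refresh actually happened.
    public IReadOnlyList<string> Users
    {
        get { EnsureListUpdated(); return users; }
    }
    // EnsureListUpdated: check whether a refresh is needed, and do it only then.
    private void EnsureListUpdated()
    {
        if (!isStale)
            return;                               // already up to date - nothing to do
        users = LoadLoggedInUsers();              // the expensive refresh runs only when required
        isStale = false;
    }
    private List<string> LoadLoggedInUsers()
    {
        // placeholder for the real lookup of currently logged-in users
        return new List<string>();
    }
}
The Ensure prefix carries exactly that contract: after the call, the list is up to date, whatever that took.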
161,794 | In Code Complete page 25, it's said that it's a good idea to be able to easily replace the regular user interface classes by a command line one. Knowing its advantages for testing, what about the problems it may bring? Will this extra work really pay off for web and mobile projects? What about small and medium projects; do the same rules apply?
What if it makes your design more complex? | Completely aside from testing, the obvious advantage to this approach is that it will make your project automatable and scriptable . If I'm able to send command-line commands to a program, I can write up a script to perform complicated tasks much more easily (and more reliably!) than I could create a macro to automate the same thing on a GUI. Whether or not that's actually worth doing, of course, depends entirely on whether or not you have a lot of users who would want to automate your program. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161794",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51350/"
]
} |
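To make the scriptability point of the answer to 161,794 above concrete, here is a small C# sketch, with all names hypothetical, of an entry point that routes both command-line arguments and the GUI through the same business-logic class, so a script can drive the application without any UI.
using System;
public static class Program
{
    public static void Main(string[] args)
    {
        var reports = new ReportService();        // business logic shared by both front ends
        if (args.Length > 0 && args[0] == "export")
        {
            // command-line mode: scriptable, no GUI involved
            Console.WriteLine(reports.Export(args.Length > 1 ? args[1] : "default"));
        }
        else
        {
            RunGui(reports);                      // the GUI calls the very same ReportService
        }
    }
    private static void RunGui(ReportService reports)
    {
        // placeholder for starting the graphical front end
    }
}
public class ReportService
{
    public string Export(string name)
    {
        return "exported report: " + name;        // placeholder for the real work
    }
}
The point of the design is that neither front end contains business logic; both are thin shells over ReportService, which is what makes automating or replacing them cheap.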
161,970 | For the sake of readability I often find myself defining temporary variables while calling functions, such as the following code var preventUndo = true;
doSomething(preventUndo); The shorter version of this would be, doSomething(true); But when I come back to the code I often wonder what true refers to. Is there a convention for this kind of conundrum? | Explaining Variables Your case is an example of the introduce explaining variable / extract variable refactoring. In short, an explaining variable is one which is not strictly necessary, but allows you to give a clear name to something, with the aim of increasing readability. Good quality code communicates intent to the reader; and as a professional developer readability and maintainability are your #1 goals. As such, the rule of thumb I would recommend is this: if your parameter's purpose is not immediately obvious, feel free to use a variable to give it a good name. I think this is a good practice in general (unless abused). Here's a quick, contrived example - consider: editButton.Enabled = (_grid.SelectedRow != null && ((Person)_grid.SelectedRow).Status == PersonStatus.Active); versus the slightly longer, but arguably clearer: bool personIsSelected = (_grid.SelectedRow != null);
bool selectedPersonIsEditable = (personIsSelected && ((Person)_grid.SelectedRow).Status == PersonStatus.Active);
editButton.Enabled = (personIsSelected && selectedPersonIsEditable); Boolean Parameters Your example actually highlights why booleans in APIs are often a bad idea - on the calling side, they do nothing to explain what's happening. Consider: ParseFolder(true, false); You'd have to look up what those parameters mean; if they were enums, it'd be a lot more clear: ParseFolder(ParseBehaviour.Recursive, CompatibilityOption.Strict); Edit: Added headings and swapped the order of the two main paragraphs, because too many people were focusing on the boolean parameters part (to be fair, it was the first paragraph originally). Also added an example to the first part. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/161970",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9667/"
]
} |
162,002 | I am an intern for a health company (unpaid), let's call it Company A and I noticed that they are using a lot of paper form for things that can be done on the computer. Excel files for things that shouldn't be in Excel. So I wanted to improve on my programming and figured that it was the best opportunity to do. I developed a couple of apps for their use. All these applications were outside company time. One application I did and they love and one of the director has a brother who has a health start up company. He wants me to give over my source code so his brother's company can further develop it and maybe sell it (I am out of the equation). I have no intention of handing over my code as I put a lot of time outside doing it but I also don't want to burn bridges with anyone in the company. I can't go to the director and tell him "I don't think so". I am fine with demoing to the brother how it works but the line is giving up my code. If they want to build something like it then they can go right ahead I have no problem with it. What is the correct way of approaching this and do they have the right to do this to me? Edit: there is no contract I never signed anything | IANAL . Contract does matter here. That's all I can say on that and I won't repeat the advice everyone else has given. The company may already own it and you have no say in the matter. Even a lawyer would tell you to hire a lawyer if you decide to simply say "No". So if that's your choice, hire a lawyer. I read and re-read this question until I figured out why you care about Burning Bridges. I'm guessing that your real concern is that, as an intern, you're not looking for a million dollars. Heck, you don't specify that you're a software development intern, which may mean that you didn't go into this intending to be a programmer. I think what you're really looking for is the company to show respect for your work. You may have even surprised yourself with how well this project turned out and maybe you want to move into a new career path as a software developer. This is something to be proud of. This project is a big deal, don't clutter this with needless crap by taking one thing your boss said and blowing it out of proportion. The Boss Perspective First and foremost, remember that your boss is human . He makes mistakes just like everyone else. Second, you also need to consider that your boss may not know anything about software. You showed him something cool, which he is thinking "Cool, my brother knows software; he could do something with this." He doesn't know it took you a lot of time. It's likely he might not even know that you're still an unpaid intern. How To Think About It First, I'm going to tell you that you can win this. Even if this company has you under the nastiest contract known to man, in the most anti-employee market on earth, and you have an extremely mean boss, this is still a winnable situation. You could quite easily get them to understand your worth here. The question is, how do you want them to demonstrate that they care about your work? Do you really want to get paid for this project? Based on the fact that you are willing to demo the software, I'm guessing you don't much care about the payment part, but more the fact that you worked really hard and want to be acknowledged for it. If you'd like a permanent position at the company in exchange, then make that evident. 
If you think you'd like to move into software development, maybe you could ask about being moved into the brother's company in a real, paying position. The bottom line is, before you can approach your boss about getting what you want, you have to truly decide what you want . How To Approach The Boss First thing's first, don't try to catch the boss while he's walking between meetings. Set up an appointment . You may only need five minutes, but five minutes is a lot of time when you're running a company. Talk to him and ask about times that work for him and put the appointment on his calendar if you can. With the appointment set up, you need to go in prepared . Have an outline of the points you want to make on a piece of paper with you so you don't get nervous and forget everything. Remember that more people list Public Speaking as their number one fear than Death. You will forget stuff, so write it down . Going in prepared also looks good to the boss. It shows him you're not trying to waste his time. Even the most anal of bosses relaxes a bit when he sees you've prepared what you're going to say. What To Say Nothing is more valuable to a manager than numbers. If you grab a BLS statistic or two, you're more likely to get his attention. If you can, calculate the potential value to his company in real dollar amounts and put it in front of him. Show him that you're worth something in terms of time, sales, or if you can, flat dollar amounts. Use of buzzwords only helps if your boss is into that sort of thing. It's important that you stay on point. Your boss doesn't care about how you wrote that program. He wouldn't understand it if you explained it. He just wants to hear about why you think it's worth something to him. Don't get personal. Like I said, the boss isn't going to speak your language. Don't spend time explaining some really challenging bug you overcame when writing the program. It's a black box to him. As far as he's concerned it cost you nothing to write it. Instead, throw numbers at him as to cost to you, in terms of time and money . Look at the things he's said in the past and try to associate it with the work you've done. If you've worked with the boss before, this can be easy. Associate the work you've done with specific complaints he's made in the past. Demonstrate how you're fixing problems he's encountered and why you're worth it. It Can Be Done I can tell you in all honesty this is doable. I've sold these sorts of ideas in the past. It's just a matter of doing it right. Think about the things he's thinking about and speak his language and you truly can get what you want out of this. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162002",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62460/"
]
} |
162,007 | I've got a bit of an argument at my workplace and I'm trying to figure out who is right, and what is the right thing to do. Context: An intranet web application that our customers use for accounting and other ERP stuff. I'm of the opinion that an error message presented to the user (when things crash) should include as much information as possible, including the stack trace. Of course, it has to start with a nice "An Error has occurred, please submit the below information to the developers" in large, friendly letters. My reasoning is that a screenshot of the crashed application will often be the only easily available source of information. Sure, you can try to get a hold of the client's systems administrator(s), attempt to explain where your log files are etc., but that will probably be slow and painful (talking to the client representatives mostly is). Also, having an immediate and full information is extremely useful in development where you don't have to go hunting through the log files to find what you need on every exception. But that could be solved with a configuration switch. Unfortunately there has been some kind of "Security audit" (no idea how they did that without the sources), and they complained about the full exception messages citing them as a security threat. Naturally, the clients (at least one that I know of) has taken this at face value and now demands that the messages be cleaned. I fail to see how a potential attacker could use a stack trace to figure anything out he couldn't have figured out before. Are there any examples, any documented proof of anyone ever doing that? I think that we should fight this foolish idea, but perhaps I'm the fool here, so... Who's right? | I tend to build an application log, either in DB or in file, and log all such information to that. You can then give the user an error number, which identifies which log item the error is related to, so you can get it back. This pattern is also useful as you can follow errors even if the users don't bother raising them with you, so you can get a better idea of where the problems are. If your site is installed in a client's environment and you can't reach it, you can get the on-site IT dept to send you some extract based on the error no. The other thing you could consider is having the system email details of errors to a mailbox that you have sight of, so you know when things are going wrong. Fundamentally having a system that spills its guts when something isn't right doesn't inspire confidence in non-technical users - it tends to scare them into thinking something is very wrong (eg how much of a BSOD do you understand, and how do you feel when one comes up)? On stacktrace: In .Net the stack trace will show the full trace right into the core MS sourced assemblies, and will reveal details about what technologies you're using, and possible versions as well. This gives intruders valuable info on possible weaknesses that could be exploited. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162007",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7279/"
]
} |
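A minimal C# sketch of the 'log the details, show the user an error number' approach described in the answer to 162,007 above; the GUID-based error id and the console logging are assumptions standing in for a real log table or file.
using System;
public static class ErrorReporter
{
    // Logs the full exception (stack trace included) where developers can read it
    // and returns a reference the user can quote to support, without exposing internals.
    public static string Report(Exception ex)
    {
        string errorId = Guid.NewGuid().ToString("N");
        // In a real application this line would write to a database table or log file.
        Console.Error.WriteLine("[{0}] {1}", errorId, ex);
        return "An unexpected error occurred. Please contact support and quote error number " + errorId + ".";
    }
}
A top-level exception handler would call ErrorReporter.Report(ex) and display only the returned message, keeping stack traces out of the user's screenshots.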
162,030 | Up to now I have always worked with imperative languages like Pascal, C, C++, Java in a production environment, so I have experience with debuggers for these languages (Turbo Pascal, Turbo C, GDB / DDD, Visual Studio, Eclipse). A debugger for an imperative language normally allows to Set break points on statements, and execute the program until the next break point is encountered. Execute the program one statement at a time, with the option of entering a function / method call or skipping over to the following statement. When program execution is paused, one can examine the current state of the program, i.e.: Inspect the contents of the stack to see the current nesting of function calls. Inspect local variables, actual function parameters, global variables, and so on. Since I have never worked with a functional programming language in a production environment, I was trying to figure out how a debugger for such a language would look like. For example: How does the debugger walk through the program if there are no statements like in an imperative language? Where does one set break points? In a language with lazy evaluation like Haskell, the sequence of function calls can be different from that of an eager language: will this make a big difference when trying to understand the call stack? What about variables? Since there is no state, I imagine that a debugger will only show the variable bindings for the current scope (?), i.e. there will be no local or global variables changing value as I step through the program. Summarizing, are there any general, common features of functional-language debuggers that clearly distinguish them from imperative-language debuggers? | I tend to build an application log, either in DB or in file, and log all such information to that. You can then give the user an error number, which identifies which log item the error is related to, so you can get it back. This pattern is also useful as you can follow errors even if the users don't bother raising them with you, so you can get a better idea of where the problems are. If your site is installed in a client's environment and you can't reach it, you can get the on-site IT dept to send you some extract based on the error no. The other thing you could consider is having the system email details of errors to a mailbox that you have sight of, so you know when things are going wrong. Fundamentally having a system that spills its guts when something isn't right doesn't inspire confidence in non-technical users - it tends to scare them into thinking something is very wrong (eg how much of a BSOD do you understand, and how do you feel when one comes up)? On stacktrace: In .Net the stack trace will show the full trace right into the core MS sourced assemblies, and will reveal details about what technologies you're using, and possible versions as well. This gives intruders valuable info on possible weaknesses that could be exploited. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162030",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29020/"
]
} |
162,073 | What is the process for leaving a company (or even a group/division) in terms of code support? Is it best to handle all questions? Do you give the remaining developers access to yourself as a future resource? If so, is there a way to not give full access? I've experienced first hand where answers about the general software arthitecture from the initial developer would be invaluable. I understand that if serious assistance is needed, than it becomes a typical case of employment negotiation as a support contract. However, should serious assistance be required, what steps can you make to ease that process of contacting you? I was thinking of doing something like making a (YOUR_NAME)_codesupport @ (YOUR_FAVORITE_EMAIL_CLIENT).com address. My Situation Specifics: I'm a co-op student, and as such bounce around companies on 4-month stints. This means introducing myself to a lot of new code bases, as well as leaving a fair share of orphaned code behind when I leave a company. I feel bad if I leave junk code around. | How do you support your code post employment end? You don't. That's why it's called the end . If they'd be surprised to see you walk through the door and start using their equipment a month after you left, you should be surprised to have them call you up and ask a bunch of questions a month after you left. Okay, more realistically, depending on the situation, you might offer to answer questions by phone or e-mail for a bit, especially if you a) would like to go back there, b) are friends with the people who work there, c) are still depending on them for a good review, d) feel pretty confident that the company won't abuse your goodwill gesture, and/or e) the company is willing to compensate you for any non-trivial additional support. This means introducing myself to a lot of new code bases That's a good skill to develop -- you'll need it. as well as leaving a fair share of orphaned code behind when I leave a company. Part of your job while you're still working there is to document what you've done, or at least make sure that some of the other people working there have a clear understanding of it. That's something that's in the company's interest, and they should make sure they have what you need while you're still there. I feel bad if I leave junk code around. Don't write junk code in the first place. If your previously-good code becomes junk (obsolete, no longer needed, etc.) before you leave, then clean it up before you leave. If your code isn't junk when you leave, then what happens to it afterward isn't something you should worry about. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162073",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39814/"
]
} |
162,145 | I'm just wondering if we should assign story points to bug fixing tasks or not. JIRA, our issues-tracking software, does not have story point field for Bug type issues (it's only for Story s and Epic s). Should we add the Bug issue type to the applicable issue types of the Story Points field? What are the pros and cons? Would it be suitable for Scrum? | Ideally, your software should be bug-free after each iteration, and fixing bugs should be part of each sprint, so the work required to fix bugs should be considered when assigning story points (i.e., a task that is more likely to produce bugs should have more story points assigned to it). In reality, however, bugs surface post-deployment all the time, no matter how rigid your testing; when that happens, removing the bug is just another change, a feature if you will. There is no fundamental difference between a bug report and a feature request in this context: in both cases, the application shows a certain behavior, and the user (or some other stakeholder) would like to see it changed. From a business perspective, bugfixes and features are also the same, really: either you do it (scenario B), or you don't (scenario A); both scenarios have costs and benefits attached, and a decent business person will just weigh them up and go with whatever earns them more profit (long-term or short-term depending on business strategy). So yes, by all means, assign story points to bugs. How else are you going to prioritize bugs vs. features, and bugs against bugs? You need some measure of development effort for both, and it better be comparable. The biggest problem with this is that bugfixes are often harder to estimate: 90% or more of the actual effort lies in finding the cause; once you have found it, you can come up with an accurate estimate, but it is almost impossible to judge how long the search will take. I've even seen a fair share of bugs where most of the time was spent just trying to reproduce the bug. On the other hand, depending on the nature of the bug, it is often possible to narrow down the search with some minimal research before making an estimate. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162145",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36726/"
]
} |
162,256 | In C and C++, it is very easy to write the following code with a serious error. char responseChar = getchar();
int confirmExit = 'y' == tolower(responseChar);
if (confirmExit = 1)
{
exit(0);
} The error is that the if statement should have been: if (confirmExit == 1) As coded, it will exit every time, because the assignment of the confirmExit variable occurs, then confirmExit is used as the result of the expression. Are there good ways to prevent this kind of error? | The best technique is to increase the warning level of your compiler.
It will then warn you about potential assignment in the if conditional. Make sure you compile your code with zero warnings (which you should be doing anyway). If you want to be pedantic then set your compiler to treat warnings as errors. Using Yoda conditionals (putting the constant on the left hand side) was another technique that was popular about a decade ago. But they make the code harder to read (and thus maintain because of the unnatural way they read (unless you are Yoda)) and provide no greater benefit than increasing the warning level (which also has extra benefits of more warnings). Warnings are really logical errors in the code and should be corrected. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162256",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/61659/"
]
} |
162,340 | In a production web application, my fellow programmers used StringBuffer everywhere. Now I am taking care of application development and corrections. After reading StringBuilder and StringBuffer I have decided to replace all the StringBuffer code with StringBuilder because we don't need thread safety in our data beans. For example: (In each data bean I can see the use of StringBuffer) @Override
public String toString() {
StringBuffer sb = new StringBuffer(); // to be replaced with StringBuilder
sb.append(" ABCD : ").append(abcd);
sb.append(", EFGH : ").append(efgh);
sb.append(", IJKL : ").append(ijkl);
return sb.toString();
} We create separate data beans for each session/request. A session is used by a single user; no other user can access it. Should I consider other points before migrating? If there is a single thread (no waiting threads/no new thread will be looking for the object lock), it performs equally with either StringBuffer or StringBuilder. I know in the case of StringBuffer, it takes time to take the object lock, but I want to know if there is any performance difference between them except the hold/release of the object lock. | The only difference between the two is the synchronization used in StringBuffer. The overhead of synchronization is not huge in the grand scheme of things, but it is significant relative to the StringBuilder methods that don't have it. The JVM is doing work that it wouldn't otherwise have to do--especially with only one thread, etc. If your code works and people aren't complaining about performance, I wouldn't worry about it. You aren't going to get a lot of bang for your buck. However, if you are writing new code, or are updating code that uses StringBuffer, I'd suggest converting them to StringBuilder at the same time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162340",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62100/"
]
} |
162,399 | I started building an app in 3 layers (DAL, BL, UI) [it mainly handles CRM, some sales reports and inventory]. A colleague told me that I must move to service layer pattern, that developers came to service pattern from their experience and it is the better approach to design most applications. He said it would be much easier to maintain the application in the future that way. Personally, I get the feeling that it's just making things more complex and I couldn't see much of a benefit from it that would justify that. This app does have an additional small partial ui that uses some (but only few) of the desktop application functions so I did find myself duplicating some code (but not much). Just because of some code duplication I wouldn't convert it to be service oriented, but he said I should use it anyway because in general it's a very good architecture, why programmers are so passionate about services?? I tried to google on it but I'm still confused and can't decide what to do. | Martin Fowler's book "Patterns of Enterprise Architecture" states: The easier question to answer is probably when not to use it. You probably don't need a Service Layer if your application's business logic will only have one kind of client - say, a user interface - and it's use case responses don't involve multiple transactional resources. [...] But as soon as you envision a second kind of client, or a second transactional resource in use case responses, it pays to design in a Service Layer from the beginning. The benefits a Service Layer provides is that it defines a common set of application operations available to different clients and coordinates the response in each operation. Where you have an application that has more than one kind of client that consumes its business logic and has complex use cases involving multiple transactional resources - it makes sense to include a Service Layer with managed transactions. With CRM, Sales and Inventory there will be a lot of CRUD-type use cases of which there is almost always a one-to-one correspondence with Service Layer operations. The responses to creation, update or deletion of a domain object should be coordinated and transacted atomically by Service Layer operations. Another benefit of having a Service Layer is that it can be designed for local or remote invocation, or both - and gives you the flexibility to do so. The pattern lays the foundation for encapsulated implementation of an application's business logic and invocation of that logic by various clients in a consistent manner. This means you also reduce/remove duplication of code, as your clients share the same common services. You can potentially reduce maintenance costs too - as when your business logic changes, you (generally) only need to change the service, and not each of the clients. In summary, it's good to use a Service Layer - more-so I think in your example you have provided as it sounds like you have multiple clients of business logic. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162399",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/60729/"
]
} |
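A minimal C# sketch, in the spirit of the answer to 162,399 above, of one Service Layer operation that both the desktop UI and the small partial UI could call; the repositories, the order domain and the use of TransactionScope are illustrative assumptions, not a prescription.
using System.Transactions;
public class OrderService
{
    private readonly IOrderRepository orders;
    private readonly IInventoryRepository inventory;
    public OrderService(IOrderRepository orders, IInventoryRepository inventory)
    {
        this.orders = orders;
        this.inventory = inventory;
    }
    // One coarse-grained application operation, shared by every client.
    public void PlaceOrder(int customerId, int productId, int quantity)
    {
        using (var scope = new TransactionScope())
        {
            inventory.Reserve(productId, quantity);          // touches one transactional resource
            orders.Create(customerId, productId, quantity);  // touches another
            scope.Complete();                                // commit both changes or neither
        }
    }
}
public interface IOrderRepository { void Create(int customerId, int productId, int quantity); }
public interface IInventoryRepository { void Reserve(int productId, int quantity); }
Each client stays a thin shell over OrderService, so the duplicated code mentioned in the question has a single home.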
162,432 | It seems to me it would be very useful to use Javascript for general server side scripting tasks as it has more or less the same features as Perl and Python. But AFAIK there are no generally available Javascript interpreters for the major machine architectures. I guess the other problem may be lack of libraries but surely these would come if the interpreters were there. Google's V8 maybe could be a starting point. Does anyone think we'll see this soon? | Node.js is exactly what you're asking for ... and more. In addition to being a JavaScript runtime it also provides APIs for common operations, such as file system access (JavaScript on the browser doesn't really need that) and network IO. It's marketed for building network application (and it's great at that!), but it's really a general purpose JavaScript runtime that you can use to build anything you want. Also, it is based on V8. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162432",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62741/"
]
} |
162,565 | Given that programmers are authors and write code to express abstract thoughts and concepts, and good code should be read by other programmers without difficulties and misunderstandings, should a programmer take writing lessons to write better code? Abstracting concepts and real world problems/entities is an important part of writing good code, and a good mastery of the language used for coding should allow the programmer to express his thoughts more easily, or in a better way. Besides, when trying to write or rewrite some code to make it better, much time can be spent in deciding the names for functions, variables or data structures. I think this could also help to avoid writing code with more than one meaning, often cause of misunderstanding between different programmers. Code should always express clearly its function unambiguously. | 1. Writing lessons? Not really. Writing source code is different enough from writing a book. While both pursue the same goals: being as unambiguous as possible and being easy to understand, they are doing it in a very different manner, and things a writer should learn are not the same as things a software developer should learn. Example 1: figures of speech Figures of speech are valuable when writing novels, poetry, etc., since they increase the expressiveness of the writing. What is the last time you've seen an oxymoron or a litotes in source code ? Would it help to have them, or would it rather be extremely harmful for any developer who will have to maintain such source code later? Example 2: vocabulary Rich vocabulary is highly appreciated in literature. Vocabulary of William Shakespeare, for example, is twenty thousand to twenty-five thousand words. Richer vocabulary makes it more interesting to read a novel or a poem. When you write source code, you expect it to be read by people who don't speak English very well . Showing how well you know English would be extremely harmful for your code. If you know a fancy word which means exactly what you need but you know that lots of people don't know the meaning of this word, you should rather find a less expressive synonym or a set of words which explain the meaning. A vocabulary of a few thousands of words is often largely enough for a given project. Note an important aspect: while Google Translate could be of a great help for a non-native speaker, there are two issues with any translator: A pair of languages don't necessarily have a 1:1 match between the words. Some words have either no translation in other languages whatsoever, or multiple words could translate into a single word in a foreign language. For instance, in Russian, there is a huge amount of words which target specific states of snow and cold weather, and translating them in French or Spanish is usually impossible without losing their specificity. A word sometimes has multiple meanings, and the meaning is deduced from the context. Google Translate, despite its high quality, is usually unable to indicate the meaning for any but the most basic situations. Example 3: expressions Expressions make prose richer as well. An author expects a reader to have a given amount of general culture, and uses this opportunity to make the text more expressive. Similarly to the previous example, such expressions could be very problematic when read by people who are not native speakers. But if general vocabulary can usually be translated, expressions are much more problematic. 
For instance, English is not my first language, and on daily basis, I encounter expressions, including here on StackExchange, that I don't know. I try to guess their meaning, and sometimes I'm right. But sometimes I'm wrong, and Googling those expressions doesn't help. A user in her/his comment reminded me of an example which made me suffer for a long time when I just started programming: PHP's needle and haystack . I was unaware of the corresponding figure of speech, so every time when I was reading the documentation, I was wondering what is this all about. Needless to say that C#'s sequence.Contains(element) or the excellent Python's element in sequence are a much better alternative. Well, at least, developers who don't know Hebrew had to suffer from PHP too , but this is a different story. Example 4: cultural references Cultural references. In literature, it is tempting to include elements from a given culture, and this too makes the book richer and sometimes more interesting to read. However, code is addressed to developers from all around the world. Therefore, what is an obvious reference for an Italian developer may not be that obvious for a Russian one, and what every Indian boy or girl knows may not necessarily be known by an American programmer. The same user who talked about the needle and the haystack gave an excellent example of such cultural reference too: the Grail. Who doesn't know what Grail is? Well, I mean, it's “Graal” in French, “Grial” in Spanish and... “Kutsal Kâse” in Turkish, but still. However, how much American or European developers know the medieval history of China or India? Why would anyone assume that every Chinese and Indian programmer has to know the Holy Grail reference? 2. Lessons to write expressive source code? Sure. Any developer should learn how to write expressive source code. Any developer should explain why the comment in: int j = i + 1; // Creating i and adding 1 to it. is bad, even aside the fact that it's totally wrong. Any developer should be able to understand the basic refactoring and how it helps making the source code more expressive. Any developer should remember that 20% of the time is spent developing code, and 80% of the time maintaining it. For some projects, it's more like 5% - 95%. etc. In essence, programming is close to technical documentation. Does a person who writes a spec sheet for a bolt need to take writing lessons? Not really. The same applies for developers. Anyone should write without making spelling mistakes in every word, and anyone should be able to communicate her ideas clearly enough. Aside that, I'm not sure how writing lessons would be more useful than, let's say, a course in Computer science or in IT security or whatsoever. The expressiveness of the source code can be learnt by other means. superM mentioned one of them in his answer : reading good code. I can mention a few others: Reading books like Beautiful Code or Code Complete, Asking for a more experienced developer to review your code, Understanding patterns and how and when to use them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162565",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/32325/"
]
} |
162,574 | Who decided (and based on what concepts) that switch construction (in many languages) has to use break in each statement? Why do we have to write something like this: switch(a)
{
case 1:
result = 'one';
break;
case 2:
result = 'two';
break;
default:
result = 'not determined';
break;
} (noticed this in PHP and JS; there are probably many other languages using this) If switch is an alternative of if , why we can't use the same construction as for if ? I.e.: switch(a)
{
case 1:
{
result = 'one';
}
case 2:
{
result = 'two';
}
default:
{
result = 'not determined';
}
} It is said that break prevents the execution of the block following the current one. But, does someone really run into the situation, where there was any need for execution of the current block and following ones? I didn't. For me, break is always there. In every block. In every code. | C was one of the first languages to have the switch statement in this form, and all other major languages inherited it from there, mostly choosing to keep the C semantics per default - they either didn't think of the advantages of changing it, or judged them less important than keeping the behaviour everyone was used to. As for why C was designed that way, it probably stems from the concept of C as "portable assembly". The switch statement is basically an abstraction of a branch table , and a branch table also has an implicit fall-through and requires an additional jump instruction to avoid it. So basically, the designers of C also chose to keep the assembler semantics per default. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162574",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57759/"
]
} |
162,614 | I am considering writing a new software product. Performance will be critical, so I am wary of using an interpreted or language or one that uses a emulation layer (read java). Which leads me to thinking of using C (or C++) however these are both rather long in the tooth. I haven't used either for a long time. I figured in the last 20 years someone has probably created something which is reasonably popular, nice to code in and compiled. What more modern alternatives are there to C for writing high performance compiled code? If C++ is a different beast than it was 15 years ago, I would consider it, I guess I had an assumption that it had some inherent problems. Parallelisation would be important, but probably not across multiple machines. | There is a language in development called The Rust Programming Language which pursues similar goals as C++ does, notably zero-cost abstractions and fine control over memory management. That said, it is perhaps the most notable upcoming candidate despite being still very young. Apart from Rust there really aren't any other popular alternatives which compile to native code. There's Delphi and D too, of course, but they aren't as fast, popular or used. Google's Go language could be a candidate, but it's still very young and aims for a bit different domain. However, do note that C#(assuming Microsoft platform) and Java might not be all that slow even though they run on top of a virtual machine; the just-in-time compilation of code can do some optimizations which traditional ahead-of-time compilers aren't capable of applying due to lack of information of the program state and environment. Frankly I would personally not consider C to be a candidate if C++ is an option, mainly because of the fact that modern C++ is safer, works at higher level of abstraction, is more expressive and has practically no performance loss over C(in some cases C++ is notably faster). Simply put, C++ provides everything that C provides and more. Most of the C functionality is considered to be "deprecated" and better, safer, faster and more intuitive alternatives are provided by the C++ standard library. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162614",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10624/"
]
} |
162,615 | I was reading Why do we have to use break in switch ? , and it led me to wonder why implicit fall-through is allowed in some languages (such as PHP and JavaScript), while there is no support (AFAIK) for explicit fall-through. It's not like a new keyword would need to be created, as continue would be perfectly appropriate, and would solve any issues of ambiguity for whether the author meant for a case to fall through. The currently supported form is: switch (s) {
case 1:
...
break;
case 2:
... //ambiguous, was break forgotten?
case 3:
...
break;
default:
...
break;
} Whereas it would make sense for it to be written as: switch (s) {
case 1:
...
break;
case 2:
...
continue; //unambiguous, the author was explicit
case 3:
...
break;
default:
...
break;
} For purposes of this question lets ignore the issue of whether or not fall-throughs are a good coding style. Are there any languages that exist that allow fall-through and have made it explicit? Are there any historical reasons that switch allows for implicit fall-through instead of explicit? | It's primarily historical, most languages just copied what C did. The reason that C did it that way is that the creators of C intended switch statements to be easy to optimize into a jump table. This is also the reason that C limits switch statements to integral values. In a jump table, the program will calculate what position to jump to based on the expression. The program will jump to that point and then continue executing from that point. If you want skip the rest of the table you have to include a jump to the end of the table. C uses explicit break statements so that there is a direct correspondence to this construct. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162615",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7865/"
]
} |
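As an aside to the historical answer to 162,615 above: at least one mainstream language does make fall-through explicit. C# rejects implicit fall-through out of a non-empty case section and requires an explicit goto case instead, roughly as in this small self-contained sketch.
public static class FallThroughDemo
{
    public static string Describe(int s)
    {
        string result = "";
        switch (s)
        {
            case 1:
                result += "one ";
                goto case 2;      // explicit, intentional fall-through into case 2
            case 2:
                result += "two";
                break;            // every non-empty section must end with break, goto, return, or throw
            default:
                result = "not determined";
                break;
        }
        return result;
    }
}
Describe(1) yields "one two" and Describe(2) yields "two", so the fall-through is visible in the code rather than implied by a missing break.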
162,631 | I'm a beginner and have only little knowledge in programming. Would it be good if I directly learn C++ from books which cover new C++11 or should I study through the old best C++ books? Should I have little knowledge about C++ before learning C++11? or I can start directly from there? Would it cause problem if I directly start from C++11? If no, then suggest some books on C++11. | There are a lot of usability enhancements that make C++11 more comprehensible to a beginner, especially one who has experience in other languages with those features. Other changes in C++11 are only of interest to advanced users, so you're likely to get overwhelmed if you pick up a book that is designed to mostly teach the differences. Make sure any book you get is designed for complete beginners to C++. That being said, you'll probably have to learn the old way eventually, as there is a lot of existing code out there, and even new C++11 code will contain the old way of doing things if the programmer so chooses. I write C++ for a living, and my company still hasn't even gotten around to evaluating C++11-compatible compilers, let alone using one in production. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162631",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62822/"
]
} |
162,643 | Clean Code suggests avoiding protected variables in the "Vertical Distance" section of the "Formatting" chapter: Concepts that are closely related should be kept vertically close to each other. Clearly this rule doesn't work for concepts that belong in separate files. But then closely related concepts should not be separated into different files unless you have a very good reason. Indeed, this is one of the reasons that protected variables should be avoided . What is the reasoning? | Protected variables should be avoided because: They tend to lead to YAGNI issues. Unless you have a descendant class that actually does stuff with the protected member, make it private. They tend to lead to LSP issues. Protected variables generally have some intrinsic invariance associated with them (or else they'd be public). Inheritors then need to maintain those properties, which people can screw up or willfully violate. They tend to violate OCP . If the base class makes too many assumptions about the protected member, or the inheritor is too flexible with the behavior of the class, it can lead to the base class' behavior being modified by that extension. They tend to lead to inheritance for extension rather than composition. This tends to lead to tighter coupling, more violations of SRP , more difficult testing, and a slew of other things that fall within the 'favor composition over inheritance' discussion. But as you see, all of these are 'tend to'. Sometimes a protected member is the most elegant solution. And protected functions tend to have fewer of these issues. But there are a number of things that cause them to be treated with care. With anything that requires that sort of care, people will make mistakes and in the programming world that means bugs and design problems. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162643",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51669/"
]
} |
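A small Java sketch (not from the original answer; the names are hypothetical) of the invariant problem described above: the base class quietly relies on its protected field staying non-negative, and any subclass can break that assumption without the compiler noticing.
public class Counter {
    protected int count;               // implicit invariant: count >= 0

    public void increment() {
        count++;
    }

    public int remaining(int limit) {
        return limit - count;          // assumes the invariant still holds
    }
}

class SloppyCounter extends Counter {
    public void reset() {
        count = -10;                   // compiles fine, but silently violates the
                                       // base class's unstated invariant
    }
}
Making count private and exposing a controlled mutator would keep the invariant enforceable in one place, which is the thrust of the answer's first two points.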
162,698 | There used to be very good reasons for keeping instruction / register names short. Those reasons no longer apply, but short cryptic names are still very common in low-level programming. Why is this? Is it just because old habits are hard to break, or are there better reasons? For example: Atmel ATMEGA32U2 (2010?): TIFR1 (instead of TimerCounter1InterruptFlag ), ICR1H (instead of InputCapture1High ), DDRB (instead of DataDirectionPortB ), etc. .NET CLR instruction set (2002): bge.s (instead of branch-if-greater-or-equal.short ), etc. Aren't the longer, non-cryptic names easier to work with? When answering and voting, please consider the following. Many of the possible explanations suggested here apply equally to high-level programming, and yet the consensus, by and large, is to use non-cryptic names consisting of a word or two (commonly understood acronyms excluded). Also, if your main argument is about physical space on a paper diagram , please consider that this absolutely does not apply to assembly language or CIL, plus I would appreciate if you show me a diagram where terse names fit but readable ones make the diagram worse. From personal experience at a fabless semiconductor company, readable names fit just fine, and result in more readable diagrams. What is the core thing that is different about low-level programming as opposed to high-level languages that makes the terse cryptic names desirable in low-level but not high-level programming? | The reason the software uses those names is because the datasheets use those names. Since code at that level is very difficult to understand without the datasheet anyway, making variable names you can't search is extremely unhelpful. That brings up the question of why datasheets use short names. That's probably because you often need to present the names in compact datasheet tables, where you don't have room for 25-character identifiers. Also, things like schematics, pin diagrams, and PCB silkscreens often are very cramped for space. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162698",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3278/"
]
} |
162,744 | I've been a web developer for almost 10 years and I've gotten into the habit of trying not to use JavaScript whenever possible. I'm not talking about building web apps here, but database driven websites. Is this a good/respected approach? | It's the instinct of most programmers to reduce all sorts of code. The less code, the fewer complexities, and the fewer points of possible error in said code. This rule applies to Javascript just as well as other languages. You're just upholding the tradition. Use Javascript as needed/desired within HTML pages... but there's no reason to use it when it's not actually needed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162744",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24988/"
]
} |
162,853 | Asking a job seeker to show some code is a fairly common practice for a software company. However, would it be acceptable for the candidate to ask the interviewer to show him a small piece of code that he thinks is well written? | I always ask to see some code, for several reasons: I want to know what I'm getting into. Of course no software firm is perfect, and I don't expect everyone to pump out marvels of elegance all the time (because neither do I), but if I ask for a company's very best code, and all they can show me is a sub-par spaghetti mess, I know I'm in for a miserable time, unwrapping hairballs and fighting the technical debt to get anything done. Looking at the best code a company can show establishes an upper limit of what kind of quality is possible there; even if it's unlikely that all their code looks like that, you still know it is something they strive for. Looking at code samples tells me a lot about a company's coding culture. Do they use documentation comments? Do they lean towards an Object-Oriented style, do they have Functional Programming tendencies? Are they conservative or progressive? Do they value consistent naming, proper formatting and indentation, and neat code in general? Is the code easy to follow? How do they structure their projects? How do they approach the important things - automated testing, error handling, etc.? How defensive is their coding style? Seeing their existing code will allow you to judge whether you can live up to their standards . The fact that a company is willing to share code samples alone is a good sign in principle. It means that they offer me, the applicant, some trust , since their codebase is one of their most valuable assets. It also means that they are not ashamed of their code, that they are confident that showing me the code will help interest me in working with them. If they won't show you any code samples, then that doesn't have to be a red flag, but it is wise to both ask why they won't share (quite likely, they simply can't for legal reasons), as well as explain why you want to see some. I don't think showing interest in their code is going to be seen as a negative sign, as long as you ask politely and positively. And then there are some more side effects: Companies, those that do agree to show you code, are unlikely to just send me a tarball of source files containing the latest version of their entire codebase, for obvious reason. If they show me any code, they will do so in the form of a little demonstration, which is great: it means I get to talk to one of my potential peers, it allows me to ask more questions about their coding culture, processes, and codebase, and ideally, it will help start a professional discussion in which I can both demonstrate skills and knowledge and learn more about the work environment. It also means that I get to look at the tools they use, which is also quite insightful - for example, if the project they show me relies heavily on a particular IDE, this means that everyone uses that, which can be good or bad. And finally, talking through a bit of code gives a good impression how well future professional communication might go. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/162853",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34327/"
]
} |
163,004 | The term will be used as a method name. The method is called when a part of the user interface is hidden (or removed), and it is used to reset values to default and dispose objects that will not be used any more. Possible names are: release, remove, dispose, clear etc. Which do you think is the most appropriate? | I use: initialize() and terminate() I find this pair the most appropriate: the names are hard to overlook in code, because both are long words (I don't use init) it's correct English (AFAIK) in my head, terminate avoids ambiguity. It doesn't match begin (which matches end), start (which matches stop), create (which matches destroy), setup (which matches unset), load (which matches unload) etc. Some people could find it a question of taste though. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163004",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57576/"
]
} |
163,090 | I am a student who recently joined a software development company as an intern. Back at the university, one of my professors used to say that we have to strive to achieve "Low coupling and high cohesion". I understand the meaning of low coupling. It means to keep the code of separate components separately, so that a change in one place does not break the code in another. But what is meant by high cohesion? If it means integrating the various pieces of the same component well with each other, I don't understand how that becomes advantageous. What is meant by high cohesion? Can an example be explained to understand its benefits? | One way of looking at cohesion in terms of OO is if the methods in the class are using any of the private attributes. Using metrics such as LCOM4 (Lack of Cohesive Methods), as pointed out by gnat in this answer here , you can identify classes that could be refactored. The reason you want to refactor methods or classes to be more cohesive is that it makes the code design simpler for others to use. Trust me; most tech leads and maintenance programmers will love you when you fix these issues. You can use tools in your build process such as Sonar to identify low cohesion in the code base. There are a couple of very common cases that I can think of where methods are low in "cohesiveness" : Case 1: Method is not related to the class at all Consider the following example: public class Food {
private int _foodValue = 10;
public void Eat() {
_foodValue -= 1;
}
public void Replenish() {
_foodValue += 1;
}
public void Discharge() {
Console.WriteLine("Nnngghhh!");
}
} One of the methods, Discharge() , lacks cohesion because it doesn't touch any of the class's private members. In this case there is only one private member: _foodValue . If it doesn't do anything with the class internals, then does it really belong there? The method could be moved to another class that could be named e.g. FoodDischarger . // Non-cohesive function extracted to another class, which can
// be potentially reused in other contexts
public FoodDischarger {
public void Discharge() {
Console.WriteLine("Nnngghhh!");
}
} If you're doing it in JavaScript, since functions are first-class objects, the discharge can be a free function: function Food() {
this._foodValue = 10;
}
Food.prototype.eat = function() {
this._foodValue -= 1;
};
Food.prototype.replenish = function() {
this._foodValue += 1;
};
// This
Food.prototype.discharge = function() {
console.log('Nnngghhh!');
};
// can easily be refactored to:
var discharge = function() {
console.log('Nnngghhh!');
};
// making it easily reusable without creating a class Case 2: Utility Class This is actually a common case that breaks cohesion. Everyone loves utility classes, but these usually indicate design flaws and most of the time makes the codebase trickier to maintain (because of the high dependency associated with utility classes). Consider the following classes: public class Food {
public int FoodValue { get; set; }
}
public static class FoodHelper {
public static void EatFood(Food food) {
food.FoodValue -= 1;
}
public static void ReplenishFood(Food food) {
food.FoodValue += 1;
}
} Here we can see that the utility class needs to access a property in the class Food . The methods in the utility class have no cohesion at all in this case because they need outside resources to do their work. In this case, wouldn't it be better to have the methods in the class they're working with (much like in the first case)? Case 2b: Hidden objects in Utility Classes There is another case of utility classes where there are unrealized domain objects. The first knee-jerk reaction a programmer has when programming string manipulation is to write a utility class for it. Like the one here that validates a couple of common string representations: public static class StringUtils {
public static bool ValidateZipCode(string zipcode) {
// validation logic
}
public static bool ValidatePhoneNumber(string phoneNumber) {
// validation logic
}
} What most don't realize here is that a zip code, a phone number, or any other string representation can be an object itself: public class ZipCode {
private string _zipCode;
public bool Validates() {
// validation logic for _zipCode
}
}
public class PhoneNumber {
private string _phoneNumber;
public bool Validates() {
// validation logic for _phoneNumber
}
} The notion that you shouldn't "handle strings" directly is detailed in this blogpost by @codemonkeyism , but it is closely related to cohesion because of the way programmers handle strings by putting the logic in utility classes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163090",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/63057/"
]
} |
163,096 | Usually there is an @author tag in API documentation (JavaDoc, PHPDoc etc). Is it acceptable to use this tag on every function you make in a commercial product? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163096",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/53462/"
]
} |
163,163 | In some code I'm reviewing, I'm seeing stuff that's the moral equivalent of the following: public class Foo
{
private Bar bar;
public void MethodA()
{
bar = new Bar();
bar.A();
bar = null;
}
public void MethodB()
{
bar = new Bar();
bar.B();
bar = null;
}
} The field bar here is logically a local variable, as its value is never intended to persist across method calls. However, since many of the methods in Foo need an object of type Bar , the original code author has just made a field of type Bar . This is obviously bad, right? Is there a name for this antipattern? | This is obviously bad, right? Yea. It makes the methods non-reentrant, which is a problem if they are called on the same instance recursively or in a multi-threaded context. It means that state from one call leaks to another call (if you forget to reinitialize). It makes the code hard to understand because you have to check for the above to be sure what the code is actually doing. (Contrast with using local variables.) It makes each Foo instance bigger than it needs to be. (And imagine doing this for N variables ...) Is there a name for this antipattern? IMO, this is does not deserve to be called an antipattern. It is just a bad coding habit / a misuse of Java constructs / crap code. It is the sort of thing that you might see in an undergraduate's code when the student has been skipping lectures and/or has no aptitude for programming. If you see it in production code, it is a sign that you need to do a lot more code reviews ... For the record, the correct way to write this is: public class Foo
{
public void methodA()
{
Bar bar = new Bar(); // Use a >>local<< variable!!
bar.a();
}
// Or more concisely (in this case) ...
public void methodB()
{
new Bar().b();
}
} Notice that I've also fixed the method and variable names to conform to the accepted style rules for Java identifiers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163163",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6916/"
]
} |
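To make the non-reentrancy point above concrete, here is a hypothetical Java sketch (class and method names invented for illustration) where the field-that-should-be-a-local is clobbered by a recursive call:
public class Report {
    private StringBuilder buffer;      // logically local state stored in a field

    public String render(int depth) {
        buffer = new StringBuilder();  // re-entering render() resets the shared field...
        buffer.append("level ").append(depth);
        if (depth > 0) {
            render(depth - 1);
        }
        return buffer.toString();      // ...so the outer call loses its own content
    }

    public static void main(String[] args) {
        System.out.println(new Report().render(2)); // prints "level 0", not "level 2"
    }
}
With buffer declared as a local variable inside render(), each call would get its own instance and the problem disappears; the same reasoning applies to concurrent calls on a shared instance.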
163,168 | As the title says, How important is Discrete Mathematics for a Computer Scientist? Background: I'm pursuing a Master's degree with a focus on fundamentals such as Algorithms, Complexity and Computability Theory and Programming Languages to get a good foundation for working in the field of Parallel Computing. Some more background: My university grants a lot of freedom in the choices of courses for my Master's degree. It's officially called "Software Engineering", but due to the broad range of electives, a different focus is possible. Interestingly, none of the electives is a lecture in Math! I'm thinking about doing a course about Discrete Mathematics that would take half a semester to complete successfully, even if I can't use it for my degree. So with this question I'm trying to find out if the effort is justifiable. | As a Computer Scientist looking to get a Master's degree with focus on "Algorithms, Complexity and Computability Theory and Programming Languages" I would say Discrete Mathematics is very important. Discrete math will help you with the "Algorithms, Complexity and Computability Theory" part of the focus more than with programming languages. The understanding of set theory, probability, and combinatorics will allow you to analyze algorithms. You will be able to successfully identify parameters and limitations of your algorithms and have the ability to realize how complex a problem/solution is. As far as the programming language, discrete math doesn't touch on how to actually program; but rather it can be used for software system design specification. I used "ZED" in university, and it was dealing with designing a system using set theory. I'm not sure what percentage of software systems are designed with set theory these days though. The last important concept to grab out of discrete math is boolean algebra. This is very useful not only for creating logical solutions, but it is very useful in programming too. Software can be made or broken simply on the boolean logic in it. Overall, discrete math is not a numbers class for the most part. It makes you use your brain in ways no other classes do. It is a logical thinking class and you must have patience if doing proofs/logic computations don't come easy to you. I've seen people change majors because they couldn't think "abstractly" enough to get through the course. In short, I would take the stance that discrete math is an important class to take for a Computer Scientist/Software Engineer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163168",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40931/"
]
} |
163,185 | Accidentally I've stumbled upon the following quote by Linus Torvalds: "Bad programmers worry about the code. Good programmers worry about
data structures and their relationships." I've thought about it for the last few days and I'm still confused (which is probably not a good sign), hence I wanted to discuss the following: What interpretation of this possible/makes sense? What can be applied/learned from it? | It might help to consider what Torvalds said right before that: git actually has a simple design, with stable and reasonably well-documented data structures. In fact, I'm a huge proponent of designing your code around the data, rather than the other way around, and I think it's one of the reasons git has been fairly successful […] I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important. What he is saying is that good data structures make the code very easy to design and maintain, whereas the best code can't make up for poor data structures. If you're wondering about the git example, a lot of version control systems change their data format relatively regularly in order to support new features. When you upgrade to get the new feature, you often have to run some sort of tool to convert the database as well. For example, when DVCS first became popular, a lot of people couldn't figure out what about the distributed model made merges so much cleaner than centralized version control. The answer is absolutely nothing, except distributed data structures had to be much better in order to have a hope of working at all. I believe centralized merge algorithms have since caught up, but it took quite a long time because their old data structures limited the kinds of algorithms they could use, and the new data structures broke a lot of existing code. In contrast, despite an explosion of features in git, its underlying data structures have barely changed at all. Worry about the data structures first, and your code will naturally be cleaner. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163185",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24132/"
]
} |
163,410 | I found this cool comparison table for integration servers on Wikipedia, but I am a little uncertain how to rank the tools vs. my needs and interests. The chart itself seems to have a lot of boxes marked unknown, so if you are comfortable updating it on Wikipedia, that could be great too. Are there a few top performing products so I can quickly narrow down to four or five options? Which products seems to have the largest user communities and most ongoing enhancements and integration with new tools? Are the open source offerings best, or are there high quality tools that can be a great deal for a single user at home? Will use of multiple systems (primary desktop, local only home network server, personal and work notebooks, multiple virtual machines spread across all) create problems and how can they be managed? | Don't worry about comparisons. Start with Jenkins ; it is hugely popular and extremely easy to use. Once you've used it a while you'll learn what features are important to you and what are not. My guess is, you'll end up sticking with Jenkins. I'm sure people will debate whether or not it's the best CI server. Don't listen to them because it doesn't matter. There are probably many that are every bit as good as Jenkins -- better in some ways, maybe not as good as others. It's not so important to pick the best one; the important thing is to pick one and start learning, and Jenkins is a very good one for that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163410",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/61659/"
]
} |
163,430 | I want to choose a version control system for my company. So far I know I have Git, Subversion and Mercurial. These days I see that Git is the most used, so I'm left wondering: would there be any specific reason to still use Subversion, or should I go directly to Git? | SVN is not dead at all. It is still in extremely wide use, and it's not going anywhere anytime soon. SVN is much simpler to use than distributed version control, especially if you're not actually running a distributed project that needs distributed version control. If you only have one central repository (which is all your company will need if they're still small enough to get by without source control so far), it's much simpler to use SVN to interact with it. For example, with SVN you can pull changes from the repository, or commit your local changes to it, with a single operation, whereas HG and Git require two or three steps to do the equivalent work. And with the recent revisions, SVN has fixed a lot of the performance issues that made people prefer HG and Git. It's significantly faster now than it was a couple years ago, and at this point, there's really no good reason to look at HG or Git for your project unless you actually need the advanced features of distributed version control. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163430",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/59309/"
]
} |
163,432 | The more functional programming I do, the more I feel like it adds an extra layer of abstraction that seems like how an onion's layer is- all encompassing of the previous layers. I don't know if this is true so going off the OOP principles I've worked with for years, can anyone explain how functional does or doesn't accurately depict any of them: Encapsulation, Abstraction, Inheritance, Polymorphism I think we can all say, yes it has encapsulation via tuples, or do tuples count technically as fact of "functional programming" or are they just a utility of the language? I know Haskell can meet the "interfaces" requirement, but again not certain if it's method is a fact of functional? I'm guessing that the fact that functors have a mathematical basis you could say those are a definite built in expectation of functional, perhaps? Please, detail how you think functional does or does not fulfill the 4 principles of OOP. Edit:
I understand the differences between the functional paradigm and object oriented paradigm just fine and realize there are plenty of multiparadigm languages these days which can do both. I am really just looking for definitions of how outright fp (think purist, like haskell) can do any of the 4 things listed, or why it cannot do any of them. i.e. "Encapsulation can be done with closures" (or if I am wrong in this belief, please state why). | Functional programming isn't a layer above OOP; it's a completely different paradigm. It's possible to do OOP in a functional style (F# was written for exactly this purpose), and on the other end of the spectrum you have stuff like Haskell, which explicitly rejects the principles of object orientation. You can do encapsulation and abstraction in any language advanced enough to support modules and functions. OO provides special mechanisms for encapsulation, but it's not something inherent to OO. The point of OO is the second pair you mentioned: inheritance and polymorphism. The concept is formally known as Liskov substitution, and you can't get it without language-level support for object-oriented programming. (Yes, it's possible to fake it in some cases, but you lose a lot of the advantages that OO brings to the table.) Functional programming doesn't focus on Liskov substitution. It focuses on increasing the level of abstraction, and on minimizing the use of mutable state and routines with "side effects", which is a term functional programmers like to use to make routines that actually do something (as opposed to simply calculating something) sound scary. But again, they're completely separate paradigms, that can be used together, or not, depending on the language and the skill of the programmer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163432",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35276/"
]
} |
163,489 | I wonder why java.util.ArrayList allows adding null . Is there any case where I would want to add null to an ArrayList ? I am asking this question because in a project we had a bug where some code was adding null to the ArrayList and it was hard to spot where the bug was. Obviously a NullPointerException was thrown but not until other code tried to access the element. The problem was how to locate the code that added the null object. It would have been easier if ArrayList threw an exception in the code where the element was being added. | This design decision appears mostly driven by naming. The name ArrayList suggests to the reader functionality similar to arrays - and it is natural for the Java Collections Framework designers to expect that the vast majority of API users will rely on it functioning similarly to arrays. This, in particular, involves the treatment of null elements. An API user knowing that the code below works OK: array[0] = null; // NPE won't happen here
would be quite surprised to find out that similar code for ArrayList would throw an NPE: arrayList.set(0, null); // NPE => WTF? Reasoning like the above is presented in the JCF tutorial, which stresses points that suggest a close similarity between ArrayList and plain arrays: ArrayList... offers constant-time positional access and is just plain fast ... If you wanted a List implementation disallowing nulls, it would better be called something like NonNullableArrayList , to avoid confusing API users. Side note: there is an auxiliary discussion in the comments below the original answer, along with additional considerations supporting the reasoning laid out here. | {
"https://softwareengineering.stackexchange.com/questions/163489",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34624/"
]
} |
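As a hedged illustration of the NonNullableArrayList idea mentioned in the answer (no such class exists in the JDK; this is only a sketch of a fail-fast variant that rejects null at insertion time, which is where the bug in the question would have been easy to locate):
import java.util.ArrayList;
import java.util.Objects;

public class NonNullableArrayList<E> extends ArrayList<E> {
    @Override
    public boolean add(E element) {
        return super.add(Objects.requireNonNull(element, "null elements are not allowed"));
    }

    @Override
    public void add(int index, E element) {
        super.add(index, Objects.requireNonNull(element, "null elements are not allowed"));
    }

    @Override
    public E set(int index, E element) {
        return super.set(index, Objects.requireNonNull(element, "null elements are not allowed"));
    }
}
Only the most common insertion points are overridden here; a production version would also need to cover addAll and the list iterator.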
163,509 | Assuming that: Your team is using centralized version control. You are working on a larger feature which will take several days to complete, and you won't be able to commit before that because it would break the build. Your team members commit something every day that might change some of files you're working on. Since this is centralized version control,
you will have to update your local checkout at some point:
at least once right before committing the new feature. If you update only once right before your commit, then there might be a lot of conflicts due to the many other changes by your teammates, which could be a world of pain to resolve all at once. Or, you could update often, and even if there are a few conflicts to resolve day by day, it should be easier to do, little by little. Would you stay it's always a good idea to update often? | Personally, I update my local versions daily. In the scenario you describe, I would go the extra mile by Creating a branch for the new, lengthy feature. Merge often from the mainline to this new branch. This way, You can check-in daily to preserve your code on the server You don't have to worry about breaking the build by checking-in. You can use the repository to undo some work or diff when necessary with earlier check-ins. You are certain to be working on the latest codebase and detect possible conflicting code changes early on. The drawbacks as I see them are Merging from main has to be done manually (or scripted) It takes more "administration" | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163509",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52610/"
]
} |
163,513 | I have been browsing through some webpages related to testing and found one dealing with the metrics of testing . It says: The severity level of a defect indicates the potential business impact
for the end user (business impact = effect on the end user x frequency
of occurrence). I do not think this is correct, or what am I missing? Usually it is the priority which is the result of such a calculation (severe bug that occurs rarely is still severe but does not have to be fixed immediately).
Also from this description, what is the difference between the effect on the end user and business impact? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163513",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/60327/"
]
} |
163,593 | One of the reasons why programmers prefer SVN over CVS is the former allows atomic commits ? What does this mean ? | It means that when you do a commit to the version control system either everything you want to commit goes in, OR nothing does. In CVS, when you try to commit it's possible for the commit to succeed on several files, then fail on several others (because they've changed). This leaves the repository in an unfortunate state because half of your commit isn't there, and it's likely that you've left things in a state where they won't compile or worse. Now you've got to hurry up and integrate whatever changes so that you can commit the other files before someone else needs to update and gets your broken set of changes. In SVN this won't happen - SVN will either commit everything you've changed, or it will fail the whole changeset. Thus, you'll never leave the repository in a broken state due to commit issues. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163593",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/60189/"
]
} |
163,631 | An old adage that many programmers stick to is "It takes a certain type of mind to learn programming, and not everyone can do it." Now I'm sure that we all have our own trove of anecdotal evidence, but has this been studied scientifically? | Yes, there's a pretty famous paper online designed to more or less determine "Who is cut out to be a programmer." A cognitive study of early learning of programming - Prof Richard Bornat, Dr. Ray Adams All teachers of programming find that their results display a 'double
hump'. It is as if there are two populations: those who can [program],
and those who cannot [program], each with its own independent bell
curve. Almost all research into programming teaching and learning have
concentrated on teaching: change the language, change the application
area, use an IDE and work on motivation. None of it works, and the
double hump persists. We have a test which picks out the population
that can program, before the course begins. We can pick apart the
double hump. You probably don't believe this, but you will after you
hear the talk. We don't know exactly how/why it works, but we have
some good theories. Here's a blog post by Jeff Atwood that interprets the results and puts some things into context. Despite the enormous changes which have taken place since electronic
computing was invented in the 1950s, some things remain stubbornly the
same. In particular, most people can't learn to program: between 30%
and 60% of every university computer science department's intake fail
the first programming course. Experienced teachers are weary but never
oblivious of this fact; brighteyed beginners who believe that the old
ones must have been doing it wrong learn the truth from bitter
experience; and so it has been for almost two generations, ever since
the subject began in the 1960s. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163631",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17485/"
]
} |
163,641 | My coworker and I have different opinions on the relationship between base classes and interfaces. I'm of the belief that a class should not implement an interface unless that class can be used when an implementation of the interface is required. In other words, I like to see code like this: interface IFooWorker { void Work(); }
abstract class BaseWorker {
... base class behaviors ...
public abstract void Work();
protected string CleanData(string data) { ... }
}
class DbWorker : BaseWorker, IFooWorker {
public override void Work() {
Repository.AddCleanData(base.CleanData(UI.GetDirtyData()));
}
} The DbWorker is what gets the IFooWorker interface, because it is an instantiatable implementation of the interface. It completely fulfills the contract. My coworker prefers the nearly identical: interface IFooWorker { void Work(); }
abstract class BaseWorker : IFooWorker {
... base class behaviors ...
public abstract void Work();
protected string CleanData(string data) { ... }
}
class DbWorker : BaseWorker {
public override void Work() {
Repository.AddCleanData(base.CleanData(UI.GetDirtyData()));
}
} Where the base class gets the interface, and by virtue of this all inheritors of the base class are of that interface as well. This bugs me but I can't come up with concrete reasons why, outside of "the base class cannot stand on its own as an implementation of the interface". What are the pros & cons of his method vs. mine, and why should one be used over another? | I'd have to agree with your coworker. In both examples you give, BaseWorker defines the abstract method Work(), which means that all subclasses are capable of meeting IFooWorker's contract. In this case, I think BaseWorker should implement the interface, and that implementation would be inherited by its subclasses. This will save you from having to explicitly indicate that each subclass is indeed an IFooWorker (the DRY principle). If you weren't defining Work() as a method of BaseWorker, or if IFooWorker had other methods that subclasses of BaseWorker wouldn't want or need, then (obviously) you'd have to indicate which subclasses actually implement IFooWorker. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163641",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17309/"
]
} |
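A Java transliteration of the approach the answer endorses (an illustrative sketch, not the original C#): the abstract base declares the interface once, supplies the shared helper, and every concrete subclass automatically is-a FooWorker.
interface FooWorker {
    void work();
}

abstract class BaseWorker implements FooWorker {
    // work() is left unimplemented here, so each concrete subclass must provide it
    protected String cleanData(String data) {
        return data.trim();
    }
}

class DbWorker extends BaseWorker {
    @Override
    public void work() {
        System.out.println("storing: " + cleanData("  dirty data  "));
    }
}
Because BaseWorker implements FooWorker, DbWorker can be passed wherever a FooWorker is required without repeating the declaration, which is the DRY benefit the answer describes.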
163,658 | Since programming languages initially only used lines of code executed sequentially, and it evolved into including functions which were one of the first levels of abstraction, and then classes and objects were created to abstract it even further; what is the next level of abstraction? What's even more abstract than classes or is there any yet? | I think you have some misconceptions about the history of computing. The first abstraction (in 1936) was, in fact, Alonzo Church's Lambda Calculus, which is the foundation for the concept of high-order functions and all of the functional languages that followed. It directly inspired Lisp (the second oldest high-level programming language, created in 1959), which in turn inspired everything from ML to Haskell and Clojure. The second abstraction was procedural programming. It came out of the von Neumann computer architectures where sequential programs were written, one instruction at a time. FORTRAN (the oldest high-level programming language, 1958) was the first high-level language to come out of the procedural paradigm. The third abstraction was probably actually declarative programming, first exemplified by Absys (1967), and then later Prolog (1972). It is the foundation of logic programming, where expressions are evaluated by matching a series of declarations or rules, rather than executing a series of instructions. The fourth abstraction then was object-oriented programming, which made its first appearance in Lisp programs in the 60's, but was later exemplified by Smalltalk in 1972. (Though there seems to be some debate as to whether the message-passing style of Smalltalk is the One True object-oriented abstraction. I'm not going to touch that.) All other abstractions, especially on the traditional von Neumann computer architecture, are variations on those four themes. I'm not convinced that there is another abstraction beyond those four that is not merely a variation or a combination of them. But an abstraction is, in essence, merely a way to model and describe an algorithm. You can describe algorithms as a series of discrete steps, as a set of rules which must be obeyed, as a set of mathematical functions, or as interacting objects. It's very hard to conceive of any other way to describe or model algorithms, and even if there is, I'm not convinced of its utility. There is, however, the quantum computing model. In quantum computing, new abstractions are necessary to model quantum algorithms. Being a neophyte in this area, I can't comment on it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163658",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/63418/"
]
} |
163,701 | It’s a well-known fact in software engineering that the cost of fixing a bug increases exponentially the later in development that bug is discovered. This is supported by data published in Code Complete and adapted in numerous other publications. However, it turns out that this data never existed . The data cited by Code Complete apparently does not show such a cost / development time correlation, and similar published tables only showed the correlation in some special cases and a flat curve in others (i.e. no increase in cost). Is there any independent data to corroborate or refute this? And if true (i.e. if there simply is no data to support this exponentially higher cost for late discovered bugs), how does this impact software development methodology? | Does software testing methodology rely on flawed data? Yes, demonstrably. Examining the Agile Cost of Change Curve shows that part of Kent Beck's work on XP (I'm not sure whether it was part of his motivation or his justification) was to "flatten the curve" of defect costs, based on knowledge of the "exponential" curve that lies behind the Code Complete table. So yes, work on at least one methodology - the one that did most to popularise test-first development - is at least in part based on flawed data. Is there any independent data to corroborate or refute this? Yes, there certainly is other data you can look to - the largest study I'm aware of is the analysis of defects done at Hughes Aircraft as part of their CMM evaluation program . The report from there shows how defect costs depended on phase for them, though the data in that report don't include variances so you need to be wary of drawing too many "this thing costs more than that thing" conclusions. You should also notice that, independent of methodology, there have been changes in tools and techniques between the 1980s and today that call the relevance of these data into question. So, assuming that we do still have a problem justifying these numbers: how does this impact software development methodology? The fact that we've been relying on numbers that can't be verified didn't stop people making progress based on anecdotes and experience: the same way that many master-apprentice trades are learned. I don't think there was a Journal of Evidence-Based Masonry during the middle ages, but a whole bunch of big, impressive and long-lasting buildings were nonetheless constructed with some observable amount of success. What it means is that we're mainly basing our practice on "what worked for me or the people I've met"; no bad thing, but perhaps not the most efficient way to improve a field of millions of people who provide the cornerstone of the current technological age. I find it disappointing that a so-called engineering discipline doesn't have a better foundation in empiricism, and I suspect (though clearly cannot prove) that we'd be able to make better, clearer progress at improving our techniques and methodologies were that foundation in place - just as clinical medicine appears to have been transformed by evidence-based practice.
That's based on some big assumptions though: that the proprietary nature of most software engineering practice does not prevent enough useful and relevant data being gathered; that conclusions drawn from these data are generally applicable (because software engineering is a skilled profession, personal variances in experience, ability and taste could affect such applicability); that software engineers "in the field" are able and motivated to make use of the results thus obtained; and that we actually know what questions we're supposed to be asking in the first place. This is obviously the biggest point here: when we talk about improving software engineering, what is it that we want to improve? What's the measurement? Does improving that measurement actually improve the outcome, or does it game the system? As an example, suppose the management at my company decided we were going to decrease the ratio between actual project cost and predicted project cost. I could just start multiplying all my cost estimates by a fudge factor and I'd achieve that "goal". Should it then become standard industry practice to fudge all estimates? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163701",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2366/"
]
} |
163,709 | I have a very simple working console application written in C++ linked with a light static library. It is just for testing purposes. Now that the coding part is done, I would like to know the process of actually deploying the program. I wrote a very basic CMakeLists.txt that create makefiles or VS projects to build the sources. I also have a program that calls the static library in order to make some google tests. To me, the deployment of this application goes like this: to developers : the src directory with the CMakeLists.txt file (multi-platform distribution) with a README.txt and an INSTALL.txt to users : the executable and a README.txt git repo : everything mentioned above plus the sources for testing and the gtest external lib A this point : considering the complexity of my application, am I doing it right? Is there any reference that would formalize this deployment process so I can get better and go further ? Say I would like to add dynamic libraries that can be updated, external libraries like boost : how should I package this to deploy it in a professional way? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163709",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57387/"
]
} |
163,793 | My employer (Not a Developer) thinks that CASE tools will help us improve our development process and documentation. I am not sure about that, we are a small team of 5 developers building mobile banking solutions for local clients. I think CASE tools will be a waste of time and money as they need to be purchased and we will need some time before we get used to them and be efficient working with them for modeling and stuff. Code generation is another issue, I really think that the CASE generated code won't be as good as code written by good developers. I think that if we stick with agile principles, design patterns, use TDD, and keep our code clean. we should be good. And as far as Analysis and Design, I think simple UML diagrams on whiteboard should do the trick. Documentation is good and important, but should be made as little as possible and we should not focus on Docs and forget the code. This is what i think. Am I correct? or should I listen to my employer and start researching for an appropriate CASE Tool? | The situation warrants an analytical approach to the decision. The bottom line will be "Does the CASE tool provide a value to the business?" Often, management will want developers to adopt a methodology or tool because they have heard good things about it, regardless of how well it fits into the current processes and culture of the organization. If your employer has asked you to look into CASE tools, as ChrisF points out, you should oblige (this is a workplace issue, not programming). Some factors that would affect the adoption of a CASE tool include: For which of your processes are there CASE tools available, An estimation of how many person-hours would be needed to adopt the new tool(s), How would the process(es) change with the adoption of the new tool(s), or What kind of positive (or negative) impact would be measurable from adopting the new tool(s) Think of this as an opportunity to upgrade your development environment and processes. It may be that your current processes are a perfect match for your organization's culture, but you owe it to your employer and your team to do the appropriate research. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163793",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48867/"
]
} |
163,913 | I've spent the last year as a one-man team developing a rich-client application (35,000+ LoC, for what it's worth). It's currently stable and in production. However, I know that my skills were rusty at the beginning of the project, so without a doubt there are major issues in the code. At this point, most of the issues are in architecture, structure, and interactions -- the easy problems, even architecture/design problems, have already been weeded out. Unfortunately, I've spent so much time with this project that I'm having a hard time thinking outside of it -- approaching it from a new perspective to see the flaws deeply buried or inherent in the design. How do I step outside my head and outside my code so I can get a fresh look and make it better? | Ways to approach this: Find someone familiar with the technology and business problem and talk it through. This may be hard in a single-person team but is generally the best option. Work on a different project for a while. This also may be difficult but even taking a week's break can give you a fresh perspective. Look at similar projects or products, such as open source products if any exist. Be careful not to copy code but they may have approached the idea completely differently. Learn a new language, library or framework. The techniques involved may give you insight how to approach the same problems you have differently. Read a good book/blog/magazine on design or the language/framework. I am not sure what level of skill you are at, but there are lots of alternatives in other answers on this site. If you have specific examples you want addressed, perhaps post them here. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163913",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/63563/"
]
} |
163,985 | It seems lazy evaluation of expressions can cause a programmer to lose control over the order in which their code is executed. I am having trouble understanding why this would be acceptable or desired by a programmer. How can this paradigm be used to build predictable software that works as intended, when we have no guarantee when and where an expression will be evaluated? | A lot of the answers are going into things like infinite lists and performance gains from unevaluated parts of the computation, but this is missing the larger motivation for laziness: modularity . The classic argument is laid out in the much-cited paper "Why Functional Programming Matters" (PDF link) by John Hughes. The key example in that paper (Section 5) is playing Tic-Tac-Toe using the alpha-beta search algorithm. The key point is (p. 9): [Lazy evaluation] makes it practical to modularize a program as a generator that constructs a large number of possible answers, and a selector that chooses the appropriate one. The Tic-Tac-Toe program can be written as a function that generates the whole game tree starting at a given position, and a separate function that consumes it. At runtime this does not intrinsically generate the whole game tree, only those subparts that the consumer actually needs. We can change the order and combination in which alternatives are produced by changing the consumer; no need to change the generator at all. In an eager language, you can't write it this way because you would probably spend too much time and memory generating the tree. So you end up either: Combining the generation and consumption into the same function; Writing a producer that works optimally only for certain consumers; Implementing your own version of laziness. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163985",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
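To make the generator/selector split described in the answer above concrete, here is a minimal C# sketch (the names and numbers are illustrative assumptions, not code from Hughes' paper). It relies on C# iterator laziness (yield return) rather than whole-language lazy evaluation, but the modularity payoff is the same: the generator describes all possible answers, and the selector decides how many are ever computed.

using System;
using System.Collections.Generic;
using System.Linq;

class LazyGeneratorSelector
{
    // Generator: a conceptually unbounded stream of candidate answers.
    // The loop body does not run until a consumer pulls the next element.
    static IEnumerable<int> Candidates()
    {
        for (int i = 0; ; i++)
        {
            Console.WriteLine($"producing candidate {i}");
            yield return i * i;
        }
    }

    static void Main()
    {
        // Selector: decides how much of the generator's output is needed.
        // Only candidates 0 through 8 are ever produced; the rest never exist.
        int firstSquareOverFifty = Candidates().First(square => square > 50);
        Console.WriteLine(firstSquareOverFifty); // prints 64
    }
}

Swapping in a different selector (say, Take(3).Sum()) changes how much work the generator performs without touching the generator itself, which is the modularity argument in miniature.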
163,993 | I've been asking a lot of people where to start learning Java web development. I already know core Java (threading, generics, collections, and a little JDBC experience) but I do not know JSPs and servlets. I did my fair share of development with several web-based applications using PHP for the server side and HTML, CSS, JavaScript, and HTML5 for the client side. Most people that I asked told me to jump right ahead to Hibernate, while some told me that I do not need to learn servlets and JSPs and should immediately study the Spring framework. Is this true? Do I not need to learn servlets and JSPs to learn Hibernate or Spring? All of their answers confused me and now I am completely lost as to what to learn or study. I feel that if I skipped learning JSPs and servlets I would miss a lot of important concepts
that will surely help me in the future. So the question: do I need a foundation in servlets and JSP to learn Spring, Hibernate, or any other Java web framework? | If you've got a good grasp of HTML, CSS, and JavaScript, you have a leg up on many people who end up doing web development. The concepts behind JSP are very similar to PHP. The quirks are different. A servlet is the name for a chunk of Java code that serves a request. That's it really. The whole original Struts framework was a single servlet. I would add Tomcat or Jetty to your list of technologies to learn. Tomcat is the original Java Servlet Container implementation and happens to also be a fully featured and rather popular web server. GlassFish is built on top of it. I've been using Jetty instead of Tomcat in my newer projects because it's simpler, more flexible, and faster. Jetty was designed to make web services as opposed to web apps. But a web app is just a web service that serves HTML in response to raw HTTP requests, so if you understand HTTP (which you can learn the important parts of in a few hours to a day), it's very easy to work with. You can make a little web site with Tomcat and JSP (tutorial here) or JSF knowing just what you know and spending a few hours going through tutorials. That way you can start where you are comfortable before stretching out. Then make a javax.servlet.http.HttpServlet that writes "<html><head><title>Hi</title></head><body><h1>Hello World</h1></body></html>" to the response object, list it in your Tomcat web.xml, and send it an HTTP request from a web browser. It's not rocket science. All Java web frameworks are variations on those two basic activities. If you go the Jetty route, it's even less structured. Check out this Hello Jetty example. If you're just going to make a blog or standard ecommerce site, I'd start with SquareSpace or Wordpress or something. You get so much off the shelf, there's no way to justify custom coding any of that anymore. The strength of Java for web applications is its reliability, maintainability, and performance. PHP or Ruby/Rails is simpler, but Java will scale as much as you want to go. I am not bowled over by any of the Java web frameworks. When you have a team of people working on a large web application, or you need to use Hibernate, then a framework like Spring really shines. Spring is the most popular. When you have some familiarity with servlets and JSP/JSF, then learn how Spring ties those together with a data model. If you are making a blog or a content management system, maybe you can get away with a NoSQL database. But I would argue that NoSQL databases are basically just a caching layer on a file system, rather than replacing relational databases. I think it's rare that a project that's a good fit for a NoSQL database is going to be appropriate to develop in Java. Things that still require custom, high-performance code (in Java, PHP, whatever) are probably going to have a relational/SQL database powering them. I would recommend you get a basic familiarity with SQL and JDBC (Java Database Connectivity) first. After you are comfortable with the world of Java objects and the world of relational databases and SQL, then you can learn Ebean / JPA (Java Persistence API) / ORM (Object to Relational Mapping), which connects the object world to the relational world. ORMs are tricky and weird. Most are eventually worth the struggle. Ebean is the simplest one I know. I'm more comfortable with it after 8 months than I am with Hibernate after 12 years. 
I know a lot of people who use Spring with Hibernate and they don't seem to have any trouble, or even be particularly aware of what Hibernate is or does, so I'd say if you're going to use Hibernate, do it through Spring. Maybe it's just because I've worked with it longer, but I've managed to completely stub out Hibernate with a couple of hash maps for testing, which is awesome (overview available on request). You have some of the most important skills already. Take the others one at a time and try not to get overwhelmed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/163993",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48185/"
]
} |
164,000 | Was reading through some articles on the advantages of creating Generic Repositories for a new app ( example ). The idea seems nice because it lets me use the same repository to do several things for several different entity types at once: IRepository repo = new EfRepository(); // Would normally pass through IOC into constructor
var c1 = new Country() { Name = "United States", CountryCode = "US" };
var c2 = new Country() { Name = "Canada", CountryCode = "CA" };
var c3 = new Country() { Name = "Mexico", CountryCode = "MX" };
var p1 = new Province() { Country = c1, Name = "Alabama", Abbreviation = "AL" };
var p2 = new Province() { Country = c1, Name = "Alaska", Abbreviation = "AK" };
var p3 = new Province() { Country = c2, Name = "Alberta", Abbreviation = "AB" };
repo.Add<Country>(c1);
repo.Add<Country>(c2);
repo.Add<Country>(c3);
repo.Add<Province>(p1);
repo.Add<Province>(p2);
repo.Add<Province>(p3);
repo.Save(); However, the rest of the implementation of the Repository has a heavy reliance on Linq: IQueryable<T> Query();
IList<T> Find(Expression<Func<T,bool>> predicate);
T Get(Expression<Func<T,bool>> predicate);
T First(Expression<Func<T,bool>> predicate);
//... and so on This repository pattern worked fantastically for Entity Framework, and pretty much offered a 1-to-1 mapping of the methods available on DbContext/DbSet. But given the slow uptake of Linq on other data access technologies outside of Entity Framework, what advantage does this provide over working directly with the DbContext? I attempted to write a PetaPoco version of the Repository, but PetaPoco doesn't support Linq Expressions, which makes creating a generic IRepository interface pretty much useless unless you only use it for the basic GetAll, GetById, Add, Update, Delete, and Save methods and utilize it as a base class. Then you have to create specific repositories with specialized methods to handle all the "where" clauses that I could previously pass in as a predicate. Is the Generic Repository pattern useful for anything outside of Entity Framework? If not, why would someone use it at all instead of working directly with Entity Framework? The original link doesn't reflect the pattern I was using in my sample code. Here is an ( updated link ). | A generic repository is useless (and IMHO also bad) even for Entity Framework. It doesn't bring any additional value over what is already provided by IDbSet<T> (which is, by the way, a generic repository). As you have already found, the argument that a generic repository can be reimplemented on top of another data access technology is pretty weak, because it can require writing your own Linq provider. The second common argument, about simplified unit testing, is also wrong, because mocking the repository / set with an in-memory data store replaces the Linq provider with another one that has different capabilities. The Linq-to-Entities provider supports only a subset of Linq features - it doesn't even support all the methods available on the IQueryable<T> interface. Sharing expression trees between the data access layer and the business logic layer prevents any faking of the data access layer - query logic must be separated. If you want a strong "generic" abstraction you must involve other patterns as well. In this case you need an abstract query language which can be translated by the repository into the specific query language supported by the data access layer in use. This is handled by the Specification pattern. Linq on IQueryable is a specification (but the translation requires a provider - or some custom visitor translating the expression tree into a query), but you can define your own simplified version and use it. For example, NHibernate uses the Criteria API. Still, the simplest way is to use a specific repository with specific methods. This way is the simplest to implement, simplest to test, and simplest to fake in unit tests because the query logic is completely hidden and separated behind the abstraction. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/164000",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/63600/"
]
} |
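To illustrate the "specific repository with specific methods" approach recommended in the answer above, here is a minimal C# sketch. It assumes an EF6-style DbContext with Countries and Provinces sets matching the entities from the question; the type and method names (GeoContext, ICountryRepository, GetByCode, and so on) are illustrative assumptions rather than a definitive design.

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public interface ICountryRepository
{
    Country GetByCode(string countryCode);
    IList<Country> GetAllWithProvinces();
    void Add(Country country);
    void Save();
}

// The query logic lives behind the interface, so tests can fake
// ICountryRepository with an in-memory implementation without ever
// touching a Linq provider.
public class EfCountryRepository : ICountryRepository
{
    private readonly GeoContext _context;

    public EfCountryRepository(GeoContext context)
    {
        _context = context;
    }

    public Country GetByCode(string countryCode)
    {
        return _context.Countries.Single(c => c.CountryCode == countryCode);
    }

    public IList<Country> GetAllWithProvinces()
    {
        return _context.Countries.Include(c => c.Provinces).ToList();
    }

    public void Add(Country country)
    {
        _context.Countries.Add(country);
    }

    public void Save()
    {
        _context.SaveChanges();
    }
}

// Hypothetical context and entities mirroring the question's sample code.
public class GeoContext : DbContext
{
    public DbSet<Country> Countries { get; set; }
    public DbSet<Province> Provinces { get; set; }
}

public class Country
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string CountryCode { get; set; }
    public virtual ICollection<Province> Provinces { get; set; }
}

public class Province
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Abbreviation { get; set; }
    public virtual Country Country { get; set; }
}

Because callers depend only on ICountryRepository, a PetaPoco-backed or purely in-memory implementation can supply the same methods without needing Linq expression support, which is exactly where the generic IRepository broke down in the question.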
164,017 | A few months ago, we started developing an app to control a piece of in-house-developed test equipment and record a set of measurements. It should have a simple UI, and would likely require threads due to the continuous recording that must take place. This application will be used for a few years, and shall be maintained by a number of computer science students during this period. Our boss graduated some 30 years ago (not to be taken as an offense; I have more than half that time on my back too) and has mandated that we develop this application in ANSI C. The rationale is that he is the only one that will be around the entire time, and therefore he must be able to understand what we are doing. He also ruled that we should use no abstract data types; he even gave us a list with the names of the global variables (sigh) he wants us to use. I actually tried that approach for a while, but it was really slowing me down to make sure that all pointer operations were safe and all strings had the correct size. Additionally, the number of lines of code that actually related to the problem at hand was only a small fraction of our code base. After a few days, I scrapped the entire thing and started anew using C#. Our boss has already seen the program running and he likes the way it works, but he doesn't know that it's written in another language. Next week the two of us will meet to go over the source code, so that he "will know how to maintain it". I am sort of scared, and I would like to hear from you guys what arguments I could use to support my decision. Cowardly yours, | Notice that "Please do it like this so I am sure I can maintain it" is actually a very good requirement - most programs spend much longer being maintained than being written, and keeping a solution in a known technology is usually a good idea. Just imagine if some new computer kid, when asked to write a C# application, wrote it in Haskell in two days and said "Hey, it works and I'm gone, bye", leaving the maintenance to you. Just imagine if some new computer kid, when asked to write an ANSI C application 15 years ago, had written it in Visual Basic 6 in two days and left it. Now you have to maintain it, and Windows 7 already starts complaining when the installation media is inserted. This might be a good opportunity to say - as Heinzi hinted in the comments - "this is a quick prototype written in C# which happens to look very much like C - shall we make it production ready, or reimplement it in ANSI C like you asked", and then have the discussion now. Having actual source code to look at is much better than "Hey, shouldn't we write our next application in Haskell because it's faster". In other words - you now have the opportunity to demonstrate that a new platform could be considered. Bring up that you wrote a prototype before the code review - this will help remove the impression that you are trying to sneak C# in under the radar. I would suggest that you also demonstrate that all existing code written in ANSI C can be used from within C#. Personally, I believe you will be told that the target remains ANSI C to stay on a single platform. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/164017",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/63610/"
]
} |
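On the suggestion above about demonstrating that existing ANSI C code can be used from within C#: a minimal P/Invoke sketch along these lines could back that argument up in the code review. The library name (measurement) and the exported function (read_measurement) are assumptions for illustration; the real instrument code would define its own exports.

using System;
using System.Runtime.InteropServices;

static class MeasurementInterop
{
    // Hypothetical export from the existing ANSI C code, e.g.
    //   double read_measurement(int channel);
    // compiled into measurement.dll (or libmeasurement.so on other platforms).
    [DllImport("measurement", CallingConvention = CallingConvention.Cdecl)]
    internal static extern double read_measurement(int channel);
}

class Program
{
    static void Main()
    {
        // The C# UI and threading code can call straight into the existing C routines.
        double value = MeasurementInterop.read_measurement(0);
        Console.WriteLine($"Channel 0 reads {value}");
    }
}

Showing the existing C routines being called unchanged from the new application is a far more persuasive argument than comparing the two languages in the abstract.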