source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
368,021 | according to Are there guidelines on how many parameters a function should accept? , a method should not have too many parameters. However, some answers suggest this issue can be solved by builder pattern: Builder b=new Builder();
b.setParm1("a");
b.setParm2("b");
.
.
.
Obj obj=b.createObj(); or encapsulate parameters in a single object. ObjectParam op=new ObjectParam();
op.param1="a";
op.param2="b";
.
.
.
obj.f(op); But I doubt that it solves the problem, because I think the methods above just align the parameters in a better way (i.e., vertically instead of horizontally); they don't change the fact that the task depends on too many parameters. And if I want the chain of parameters to look better, I can use a new line for each parameter, like this: https://softwareengineering.stackexchange.com/a/331680/248528 So my question is: is "too many parameters" a visual issue (hard to read a long single line of code), or a logical issue (the task by its nature depends on too many parameters and needs breaking down)? If it is more about a visual issue, does a new line for each parameter solve the issue? | It is first and foremost a logical issue (which often comes with visual issues, too). The wrong solution is to try to improve only the visual problem, by "encapsulating parameters in a single object [...] just aligning the parameters in a better way". Encapsulating parameters in an object does not mean putting five parameters in an arbitrary container with some meaningless name like ObjectParam. Instead, encapsulating a group of parameters in an object should create a new abstraction (or reuse an existing one): like encapsulating three parameters "X, Y, Z" in a parameter "position" of type Point3D, or encapsulating parameters "startDate, endDate" in an object DateInterval, or encapsulating parameters documentTitle, documentText, author in an object Document that groups these parameters together. If the method at stake has a lot of unrelated parameters and you cannot come up with a good grouping name, then it probably has too many parameters and too many responsibilities. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/368021",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/248528/"
]
} |
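To make the grouping idea in the answer above (368,021) concrete, here is a minimal C# sketch; the idea is language-agnostic, and the class, member, and method names (DateInterval, Document, Scheduler) are invented for this illustration rather than taken from the original post.

```csharp
using System;

// Before: void Schedule(string title, string author, DateTime start, DateTime end)
// After: the related values are grouped behind abstractions that can carry their own rules.
public sealed class DateInterval
{
    public DateTime Start { get; }
    public DateTime End { get; }

    public DateInterval(DateTime start, DateTime end)
    {
        if (end < start) throw new ArgumentException("End must not be earlier than start.");
        Start = start;
        End = end;
    }

    public TimeSpan Duration => End - Start;
}

public sealed class Document
{
    public string Title { get; }
    public string Author { get; }

    public Document(string title, string author)
    {
        Title = title ?? throw new ArgumentNullException(nameof(title));
        Author = author ?? throw new ArgumentNullException(nameof(author));
    }
}

public static class Scheduler
{
    // Two meaningful parameters instead of four loose ones.
    public static void Schedule(Document document, DateInterval interval) =>
        Console.WriteLine($"'{document.Title}' by {document.Author} is scheduled for {interval.Duration}.");
}
```

The point is not the smaller parameter count by itself, but that DateInterval and Document are abstractions that can carry their own validation and behaviour, unlike an arbitrary ObjectParam container.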
368,195 | This is a known pitfall for people who are getting their feet wet using LINQ: public class Program
{
public static void Main()
{
IEnumerable<Record> originalCollection = GenerateRecords(new[] {"Jesse"});
var newCollection = new List<Record>(originalCollection);
Console.WriteLine(ContainTheSameSingleObject(originalCollection, newCollection));
}
private static IEnumerable<Record> GenerateRecords(string[] listOfNames)
{
return listOfNames.Select(x => new Record(Guid.NewGuid(), x));
}
private static bool ContainTheSameSingleObject(IEnumerable<Record>
originalCollection, List<Record> newCollection)
{
return originalCollection.Count() == 1 && newCollection.Count() == 1 &&
originalCollection.Single().Id == newCollection.Single().Id;
}
private class Record
{
public Guid Id { get; }
public string SomeValue { get; }
public Record(Guid id, string someValue)
{
Id = id;
SomeValue = someValue;
}
}
} This will print "False", because for each name supplied to create the original collection, the select function keeps getting reevaluated, and the resulting Record object is created anew. To fix this, a simple call to ToList could be added at the end of GenerateRecords. What advantage did Microsoft hope to gain by implementing it this way? Why wouldn't the implementation simply cache the results in an internal array? One specific part of what's happening may be deferred execution, but that could still be implemented without this behavior. Once a given member of a collection returned by LINQ has been evaluated, what advantage is provided by not keeping an internal reference/copy, but instead recalculating the same result, as a default behavior? In situations where there is a particular need in the logic for the same member of a collection to be recalculated over and over, it seems like that could be specified through an optional parameter and that the default behavior could do otherwise. In addition, the speed advantage that is gained by deferred execution is ultimately offset by the time it takes to continually recalculate the same results. Finally, this is a confusing point for those who are new to LINQ, and it could ultimately lead to subtle bugs in anyone's program. What advantage is there to this, and why did Microsoft make this seemingly very deliberate decision? | What advantage was gained by implementing LINQ in a way that does not cache the results? Caching the results would simply not work for everybody. As long as you have tiny amounts of data, great. Good for you. But what if your data is larger than your RAM? It has nothing to do with LINQ, but with the IEnumerable<T> interface in general. It is the difference between File.ReadAllLines and File.ReadLines. One will read the whole file into RAM, and the other will give it to you line by line, so you can work with large files (as long as they have line-breaks). You can easily cache everything you want to cache by materializing your sequence, calling either .ToList() or .ToArray() on it. But those of us who do not want to cache it still have the chance not to. And on a related note: how do you cache the following? IEnumerable<int> AllTheZeroes()
{
while(true) yield return 0;
} You cannot. That's why IEnumerable<T> exists as it does. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/368195",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/100669/"
]
} |
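A minimal sketch of the fix mentioned in the answer above (368,195): materializing the query with ToList() (or ToArray()) runs the Select projection once, so the Guid values stay stable across enumerations. The names below loosely mirror the question's example but are simplified for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Record
{
    public Guid Id { get; }
    public Record(Guid id) { Id = id; }
}

class Program
{
    static void Main()
    {
        string[] names = { "Jesse" };

        // Deferred: the projection runs again on every enumeration,
        // so each pass produces records with fresh Guids.
        IEnumerable<Record> deferred = names.Select(n => new Record(Guid.NewGuid()));
        Console.WriteLine(deferred.Single().Id == deferred.Single().Id); // False

        // Materialized: the projection runs once; later enumerations
        // simply replay the stored results.
        List<Record> materialized = names.Select(n => new Record(Guid.NewGuid())).ToList();
        Console.WriteLine(materialized.Single().Id == materialized.Single().Id); // True
    }
}
```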
368,213 | I am working with a REST API which resides on a server that handles data for a multitude of IoT devices. My task is to query the server using the API to collect specific performance information about said devices. In one instance, I obtain a list of available devices and their corresponding identifiers, then later query the server for more details using those identifiers (GUIDs). The server is returning a 500 Internal Server Error for a query on one of those IDs. In my application, an exception is thrown and I don't see details about the error. If I examine the response more closely with Postman , I can see that the server returned JSON in the body which contains: errorMessage: "This ID does not exist" . Disregard the fact the server provided the ID to begin with -- that's a separate problem for the developer. Should a REST API return a 500 Internal Server Error to report that a query references an object that doesn't exist? To my thinking, the HTTP response codes should refer strictly to the status of the REST call, rather than to the internal mechanics of the API. I would expect a 200 OK with the response containing the error and description, which would be proprietary to the API in question. It occurs to me that there is a potential difference in expectation depending on how the REST call is structured. Consider these examples: http://example.com/restapi/deviceinfo?id=123 http://example.com/restapi/device/123/info In the first case, the device ID is passed as a GET variable. A 404 or 500 would indicate that the path ( /restapi/deviceinfo ) is either not found or resulted in a server error. In the second case, the device ID is part of the URL. I would be more understanding of a 404 Not Found , but still could argue based on which parts of the path are interpreted as variables versus endpoints. | I think a 404 response is the best semantic match here, because the resource you were trying to find (as represented by the URI used for the query) was not found. Returning an error payload in the body is reasonable, but not required. According to RFC 2616 , the definition of the 404 status code is: 10.4.5 404 Not Found The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent. The 410 (Gone) status code SHOULD be used if the server knows, through some internally configurable mechanism, that an old resource is permanently unavailable and has no forwarding address. This status code is commonly used when the server does not wish to reveal exactly why the request has been refused, or when no other response is applicable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/368213",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2117/"
]
} |
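For illustration of the recommendation in the answer above (368,213), here is roughly what "404 plus an explanatory body" could look like in an ASP.NET Core controller. The original question is framework-agnostic; the framework choice, the route, and the repository/record types here are assumptions made up for the sketch.

```csharp
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("restapi/device")]
public class DeviceInfoController : ControllerBase
{
    private readonly IDeviceRepository devices;   // hypothetical repository abstraction

    public DeviceInfoController(IDeviceRepository devices) => this.devices = devices;

    [HttpGet("{id}/info")]
    public IActionResult GetInfo(string id)
    {
        var device = devices.FindById(id);
        if (device == null)
        {
            // 404 with an explanatory payload, instead of a generic 500.
            return NotFound(new { errorMessage = $"Device '{id}' does not exist." });
        }
        return Ok(device);
    }
}

// Hypothetical abstractions used above.
public interface IDeviceRepository
{
    DeviceInfo FindById(string id);
}

public record DeviceInfo(string Id, string Name);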
368,414 | I am implementing a DelegateCommand , and when I was about to implement the constructor(s), I came up with the following two design choices: 1: Having multiple overloaded constructors public DelegateCommand(Action<T> execute) : this(execute, null) { }
public DelegateCommand(Action<T> execute, Func<T, bool> canExecute)
{
this.execute = execute;
this.canExecute = canExecute;
} 2: Having only one constructor with an optional parameter public DelegateCommand(Action<T> execute, Func<T, bool> canExecute = null)
{
this.execute = execute;
this.canExecute = canExecute;
} I don't know which one to use because I don't know what possible advantages / disadvantages come with either of the two proposed ways. Both can be called like this: var command = new DelegateCommand(this.myExecute);
var command2 = new DelegateCommand(this.myExecute, this.myCanExecute); Can someone please point me in the right direction and give feedback? | I prefer multiple constructors over default values and personally I don't like your two constructor example, it should be implemented differently. The reason for using multiple constructors is that the main one can just check if all parameters are not null and whether they are valid whereas other constructors can provide default values for the main one. In your examples however, there is no difference between them because even the secondary constructor passes a null as a default value and the primary constructor must know the default value too. I think it shouldn't. This means that it would be cleaner and better separated if implemented this way: public DelegateCommand(Action<T> execute) : this(execute, _ => true) { }
public DelegateCommand(Action<T> execute, Func<T, bool> canExecute)
{
this.execute = execute ?? throw new ArgumentNullException(..);
this.canExecute = canExecute ?? throw new ArgumentNullException(..);
} Notice the _ => true passed to the primary constructor, which now also checks all parameters for null and doesn't care about any defaults. The most important point, however, is extensibility. Multiple constructors are safer when there is a possibility that you will extend your code in the future. If you add more required parameters, and the optional ones must come at the end, you'll break all your current implementations. You can make the old constructor [Obsolete] and inform the users that it's going to be removed, giving them time to migrate to the new implementation without instantly breaking their code. On the other hand, making too many parameters optional would be confusing too, because if some of them are required in one scenario and optional in another you would need to study the documentation instead of just picking the right constructor by simply looking at its parameters. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/368414",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
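A small C# sketch of the extensibility point made in the answer above (368,414): with overloads, a later requirement can be added as a new primary constructor while the old signature survives, marked [Obsolete], for existing callers. The HideWhenDisabled requirement is invented for this example; it is not part of the original question.

```csharp
using System;

public class DelegateCommand<T>
{
    private readonly Action<T> execute;
    private readonly Func<T, bool> canExecute;

    public bool HideWhenDisabled { get; }   // hypothetical requirement added later

    public DelegateCommand(Action<T> execute) : this(execute, _ => true, false) { }

    // Old two-argument signature kept alive for existing callers, flagged for migration.
    [Obsolete("Use the constructor that also takes hideWhenDisabled.")]
    public DelegateCommand(Action<T> execute, Func<T, bool> canExecute)
        : this(execute, canExecute, false) { }

    // New primary constructor; every other constructor funnels into it.
    public DelegateCommand(Action<T> execute, Func<T, bool> canExecute, bool hideWhenDisabled)
    {
        this.execute = execute ?? throw new ArgumentNullException(nameof(execute));
        this.canExecute = canExecute ?? throw new ArgumentNullException(nameof(canExecute));
        HideWhenDisabled = hideWhenDisabled;
    }

    public bool CanExecute(T parameter) => canExecute(parameter);

    public void Execute(T parameter)
    {
        if (CanExecute(parameter)) execute(parameter);
    }
}
```

Had the class exposed only a single constructor with optional parameters, every positional call site would have had to be revisited when the new required argument arrived.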
368,423 | You have a class X and you write some unit tests that verify behaviour X1.
There's also class A which takes X as a dependency. When you write unit tests for A, you mock X. In other words, while unit testing A, you set (postulate) the behaviour of X's mock to be X1.
Time goes by, people do use your system, needs change, X evolves: you modify X to show behaviour X2. Obviously, unit tests for X will fail and you will need to adapt them. But what about A? Unit tests for A will not fail when X's behaviour is modified (due to the mocking of X). How do you detect that A's outcome will be different when run with the "real" (modified) X? I'm expecting answers along the lines of: "That's not the purpose of unit testing", but what value does unit testing have then? Does it really only tell you that when all tests pass, you haven't introduced a breaking change?
And when some class's behaviour changes (willingly or unwillingly), how can you detect (preferably in an automated way) all the consequences? Shouldn't we focus more on integration testing? | "When you write unit tests for A, you mock X." Do you? I don't, unless I absolutely have to. I have to if: X is slow, or X has side effects. If neither of these apply, then my unit tests of A will test X too. Doing anything else would be taking isolating tests to an illogical extreme. If you have parts of your code using mocks of other parts of your code, then I'd agree: what is the point of such unit tests? So don't do this. Let those tests use the real dependencies, as they form far more valuable tests that way. And if some folk get upset with you calling these tests "unit tests", then just call them "automated tests" and get on with writing good automated tests. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/368423",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28264/"
]
} |
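A small illustration of the approach in the answer above (368,423), written as an xUnit test; the test framework and the two classes are assumptions made for this sketch. Because the dependency is fast and side-effect-free, it is not mocked, and the test exercises A together with the real X.

```csharp
using Xunit;

// X: a fast, side-effect-free dependency.
public class PriceCalculator
{
    public decimal WithTax(decimal net) => net * 1.2m;
}

// A: depends on X.
public class InvoiceLine
{
    private readonly PriceCalculator calculator;
    public InvoiceLine(PriceCalculator calculator) => this.calculator = calculator;
    public decimal Total(decimal net, int quantity) => calculator.WithTax(net) * quantity;
}

public class InvoiceLineTests
{
    [Fact]
    public void Total_uses_the_real_calculator()
    {
        // No mock of PriceCalculator: the test covers A and X together,
        // so a behaviour change in X will make this test fail as well.
        var line = new InvoiceLine(new PriceCalculator());

        Assert.Equal(72m, line.Total(30m, 2));
    }
}
```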
368,463 | Agile question: does agile believe in getting things up and running the "quick and dirty" way - or does agile prefer building solidly from the ground up? Or is this not a methodology question, and more a question that you evaluate case by case? I'm technically "remaking" the foundation of the system, after I already built much of the structure itself... it's not a monumental amount of work... would agile have wanted me to spec out the entire flow first, analyze it, tweak it, and then build? I feel like in a way it's better this way... once I put up a messy system I see better how it needs to be done... on the other hand it isn't so organized... just curious what best development practice is in this regard. I believe this question is somewhat different from Agile and prototyping since I am not asking about prototyping and throwaway code; I am interested in agile for production-grade code. | The agile methodology is plan first. It's just not plan everything first. In fact you gather requirements, design, code, test, deploy, and present. You just do all that in less than a fortnight (give or take) on the tiniest little feature you can deploy and get feedback about. Then you do it all again, adding another feature or tweaking an old one. The key is to write code that accepts change so that when you finally do see "how the entire flow should go" you can change the code to do that. That way, when "the way the flow should go" (or whatever) changes yet again, it's not traumatic. You can't write quick and dirty. Quick and dirty gives you rigid code. Be fast by working small. Stay flexible by not spreading knowledge. Ideally any single feature change should impact only one place in the code. You can't spend tons of time doing nothing but planning either. You can plan, but you need to be able to change the plan. You need to quickly discover reasons to change the plan. When planning is going smoothly, with no surprises to learn from, that is when planning has gone on for too long. The planning and the coding have to happen close to each other. If you're learning, then the older the plan, the dumber it is. In the long term, you should plan to get smarter. Write flexible code. Then getting smarter doesn't lead to regret. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/368463",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/266115/"
]
} |
368,765 | I am writing tests for a project that consists of multiple submodules. Each test case that I have written runs independently of the others, and I clear all data between tests. Even though the tests run independently, I am considering enforcing an execution order, as some cases require more than one submodule. For example, a submodule is generating data, and another one is running queries on the data. If the submodule generating the data contains an error, the test for the query submodule will also fail, even if the submodule itself works fine. I cannot work with dummy data, as the main functionality I am testing is the connection to a black box remote server, which only gets the data from the first submodule. In this case, is it OK to enforce an execution order for the tests or is it bad practice? I feel like there is a smell in this setup, but I cannot find a better way around it. edit: the question is from How to structure tests where one test is another test's setup? as the "previous" test is not a setup, but tests the code which performs the setup. | I cannot work with dummy data, as the main functionality I am testing is the connection to a black box remote server, which only gets the data from the first submodule. This is the key part for me. You can talk about "unit tests" and them "running independently of each other", but they all sound like they are reliant on this remote server and reliant on the "first sub module". So everything sounds tightly coupled and dependent on external state. As such, you are in fact writing integration tests. Having those tests run in a specific order is quite normal, as they are highly dependent on external factors. An ordered test run, with the option of an early quit out of the test run if things go wrong, is perfectly acceptable for integration tests. But it would also be worth taking a fresh look at the structure of your app. Being able to mock out the first submodule and external server would then potentially allow you to write true unit tests for all the other submodules. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/368765",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/301883/"
]
} |
369,154 | I am junior developer among seniors and am struggling a lot with understanding their thinking, reasoning. I am reading Domain-Driven Design (DDD) and can't understand why we need to create so many classes. If we follow that method of designing software we end up with 20-30 classes which can be replaced with at most two files and 3-4 functions. Yes, this could be messy, but it's a lot more maintainable and readable. Anytime I want to see what some kind of EntityTransformationServiceImpl does, I need to follow lots of classes, interfaces, their function calls, constructors, their creation and so on. Simple math: 60 lines of dummy code vs 10 classes X 10 (let's say we have totally different such logics) = 600 lines of messy code vs. 100 classes + some more to wrap and manage them; do not forget to add dependency injection. Reading 600 lines of messy code = one day 100 classes = one week, still forget which one does what, when Everyone is saying it's easy to maintain, but for what? Every time you add new functionality, you add five more classes with factories, entities, services, and values. I feel like this kind of code moves a lot slower than messy code. Let's say, if you write 50K LOC messy code in one month, the DDD thing requires lots of reviews and changes (I do not mind tests in both cases). One simple addition can take week if not more. In one year, you write lots of messy code and even can rewrite it multiple times, but with DDD style, you still do not have enough features to compete with messy code. Please explain. Why do we need this DDD style and lots of patterns? UPD 1 : I received so many great answers, can you guys please add comment somewhere or edit your answer with the link for reading list (not sure from which to start, DDD, Design Patterns, UML, Code Complete, Refactoring, Pragmatic ,... so many good books), of course with sequence, so that I can also start understanding and become senior as some of you do. | This is an optimization problem A good engineer understands that an optimization problem is meaningless without a target. You can't just optimize, you have to optimize for something. For example, your compiler options include optimizing for speed and optimizing for code size; these are sometimes opposite goals. I like to tell my wife that my desk is optimized for adds. It's just a pile, and it's very easy to add stuff. My wife would prefer it if I optimized for retrieval, i.e. organized my stuff a bit so I can find things. This makes it harder, of course, to add. Software is the same way. You can certainly optimize for product creation-- generate a ton of monolithic code as quickly as possible, without worrying about organizing it. As you have already noticed, this can be very, very fast. The alternative is to optimize for maintenance-- make creation a touch more difficult, but make modifications easier or less risky. That is the purpose of structured code. I would suggest that a successful software product will be only created once but modified many, many times. Experienced engineers have seen unstructured code bases take on a life of their own and become products, growing in size and complexity, until even small changes are very difficult to make without introducing huge risk. If the code were structured, risk can be contained. That is why we go to all this trouble. Complexity comes from relations, not elements I notice in your analysis you are looking at quantities-- amount of code, number of classes, etc. 
While these are sort of interesting, the real impact comes from relations between elements, which explodes combinatorially. For example, if you have 10 functions and no idea which depends on which, you have 90 possible relations (dependencies) you have to worry about-- each of the ten functions might depend on any of the nine other functions, and 9 x 10 = 90. You might have no idea which functions modify which variables or how data gets passed around, so coders have a ton of things to worry about when solving any particular problem. In contrast, if you have 30 classes but they are arranged cleverly, they can have as few as 29 relations, e.g. if they are layered or arranged in a stack. How does this affect your team's throughput? Well, there are fewer dependencies, the problem is much more tractable; coders don't have to juggle a zillion things in their head whenever they make a change. So minimizing dependencies can be a huge boost to your ability to reason about a problem competently. That is why we divide things into classes or modules, and scope variables as tightly as possible, and use SOLID principles. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/369154",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/302623/"
]
} |
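The arithmetic in the answer above (369,154) generalizes as follows; this is just the answer's own 9 x 10 = 90 example written as a formula, not additional claims from the original post.

```latex
\[
  R_{\max} = n(n-1) \quad\text{(every element may depend on every other one)},
  \qquad
  R_{\text{layered}} = n - 1 \quad\text{(strictly layered or stack-like arrangement)}.
\]
% n = 10 unstructured functions give 10 * 9 = 90 possible dependencies, as in the answer;
% n = 30 cleverly layered classes need as few as 29.
```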
369,191 | I'm working on a larger solo project and right now, and I have several classes in which I do not see any reason to create an instance of. My dice class right now, for example, stores all of its data statically and all of its methods are static too. I don't need to initialize it because when I want to roll the dice and get a new value, I just use Dice.roll() . I have several similar classes that only have one main function like this and I'm about to start working on a sort of "controller" class that will be in charge of all of the events (like when a player moves, and what current turn it is) and I've found that I could follow the same idea for this class. I don't ever plan on creating multiple objects for these specific classes, so would it be a bad idea to make them fully static? I was wondering if this is considered "bad practice" when it comes to Java. From what I've seen, the community seems to be kind of split on this topic? Anyways, I would love some discussion on this and links to resources would be great too! | There is nothing wrong with static classes that are truly static . That is to say, there is no internal state to speak of that would cause the output of the methods to change. If Dice.roll() is simply returning a new random number from 1 to 6, it's not changing state. Granted, you may be sharing a Random instance, but I wouldn't consider that a change of state as by definition, the output is always going to be well, random. It is also thread-safe so there are no problems here. You'll often see final "Helper" or other utility classes which have a private constructor and static members. The private constructor contains no logic and serves only to prevent someone from instantiating the class. The final modifier only brings this idea home that this isn't a class you'd ever want to derive from. It is merely a utility class. If done properly, there should be no singleton or other class members which are not themselves static and final. So long as you follow these guidelines and you're not making singletons, there is absolutely nothing wrong with this. You mention a controller class, and this will almost certainly require state changes, so I would advise against using only static methods. You can rely heavily on a static utility class, but you cannot make it a static utility class. What is considered a change in state for a class? Well, lets exclude random numbers for a second, as they're nondeterministic by definition and therefore the return value changes often. A pure function is one which is deterministic, which is to say, for a given input, you will get one and exactly one output. You want static methods to be pure functions. In Java there are ways of tweaking behavior of static methods to hold state, but they're almost never good ideas. When you declare a method as static , the typical programmer will assume right off the bat that it is a pure function. Deviating from expected behavior is how you tend to create bugs in your program, generally speaking and should be avoided. A singleton is a class containing static methods about as opposite of "pure function" as you can be. A single static private member is kept internally to the class which is used to ensure there is exactly one instance. This is not best practice and can get you into trouble later for a number of reasons. To know what we're talking about, here is a simple example of a singleton: // DON'T DO THIS!
class Singleton {
private String name;
private static Singleton instance = null;
private Singleton(String name) {
this.name = name;
}
public static Singleton getInstance() {
if(instance == null) {
instance = new Singleton("George");
}
return instance;
}
public String getName() {
return name;
}
}
assert Singleton.getInstance().getName().equals("George");
"source": [
"https://softwareengineering.stackexchange.com/questions/369191",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/302679/"
]
} |
369,390 | I am just designing my application and I am not sure if I understand SOLID and OOP correctly. Classes should do 1 thing and do it well, but on the other hand they should represent real objects we work with. In my case I do a feature extraction on a dataset and then I do a machine learning analysis. I assume that I could create three classes: FeatureExtractor DataSet Analyser But the FeatureExtractor class doesn't represent anything; it does something, which makes it more of a routine than a class.
It will have just one function that will be used: extract_features() Is it correct to create classes that do not represent one thing but do one thing? EDIT: not sure if it matters, but I'm using Python. And if extract_features() looked like the following, would it be worth creating a special class to hold that method? def extract_features(df):
extr = PhrasesExtractor()
extr.build_vocabulary(df["Text"].tolist())
sent = SentimentAnalyser()
sent.load()
df = add_features(df, extr.features)
df = mark_features(df, extr.extract_features)
df = drop_infrequent_features(df)
df = another_processing1(df)
df = another_processing2(df)
df = another_processing3(df)
df = set_sentiment(df, sent.get_sentiment)
return df | "Classes should do 1 thing and do it well" Yes, that is generally a good approach. "but on the other hand they should represent real objects we work with" No, that is IMHO a common misunderstanding. A good beginner's approach to OOP is often "start with objects representing things from the real world", that is true. However, you should not stop with this! Classes can (and should) be used to structure your program in various ways. Modeling objects from the real world is one aspect of this, but not the only one. Creating modules or components for a specific task is another sensible use case for classes. A "feature extractor" is probably such a module, and even if it contains only one public method extract_features(), I would be astonished if it does not also contain a lot of private methods and maybe some shared state. So having a class FeatureExtractor will introduce a natural location for these private methods. Side note: in languages like Python, which support a separate module concept, one can also use a module FeatureExtractor for this, but in the context of this question, this is IMHO a negligible difference. Moreover, a "feature extractor" can be imagined as "a person or bot which extracts features". That is an abstract "thing", maybe not a thing you will find in the real world, but the name itself is a useful abstraction, which gives everyone a notion of what the responsibility of that class is. So I disagree that this class does not "represent anything". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/369390",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/303144/"
]
} |
369,566 | I'm testing that a function does what expected on a list. So I want to test f(null) -> null
f(empty) -> empty
f(list with one element) -> list with one element
f(list with 2+ elements) -> list with the same number of elements, doing what is expected. In order to do so, what is the best approach? Testing all the cases in the same (method) test, under the name "WorksAsExpected" Placing one test for each case, thus having "WorksAsExpectedWhenNull" "WorksAsExpectedWhenEmpty" "WorksAsExpectedWhenSingleElement" "WorksAsExpectedWhenMoreElements" Another choice I wasn't thinking of :-) | The simple rule of thumb I use for whether to perform a set of tests in one test case, or many, is: does it involve just one setup? So if I were testing that, for multiple elements, it both processed all of them and derived the correct result, I may have two or more asserts, but I only have to set up the list once. So one test case is fine. In your case though, I'd have to set up a null list, an empty list etc. That's multiple setups. So I'd definitely create multiple tests in this case. As others have mentioned, those "multiple tests" might be able to exist as a single parameterised test case; i.e. the same test case is run against a variety of setup data. The key to knowing whether this is a viable solution lies in the other parts of the test: "action" and "assert". If you can perform the same actions and asserts on each data set, then use this approach. If you find yourself adding if 's, for example, to run different code against different parts of that data, then this is not the solution. Use individual test cases in that latter case. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/369566",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/303477/"
]
} |
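To make the "one setup vs. many setups" rule in the answer above (369,566) concrete, here is a small xUnit sketch; xUnit, the Padder.Pad function, and the data are assumptions for illustration only. The null and empty inputs each need their own setup and assert, so they stay as separate tests, while the multi-element cases share one action and one set of asserts and so fit a single parameterised test.

```csharp
using System.Collections.Generic;
using System.Linq;
using Xunit;

public static class Padder
{
    // Hypothetical function under test: pads every string in the list to width 3.
    public static List<string> Pad(List<string> input) =>
        input?.Select(s => s.PadLeft(3)).ToList();
}

public class PadderTests
{
    [Fact]
    public void Null_input_returns_null()
    {
        Assert.Null(Padder.Pad(null));
    }

    [Fact]
    public void Empty_input_returns_empty_list()
    {
        Assert.Empty(Padder.Pad(new List<string>()));
    }

    [Theory]                 // same action and asserts, only the setup data varies
    [InlineData(1)]
    [InlineData(2)]
    [InlineData(5)]
    public void Element_count_is_preserved_and_items_are_padded(int count)
    {
        var input = Enumerable.Repeat("a", count).ToList();

        var result = Padder.Pad(input);

        Assert.Equal(count, result.Count);
        Assert.All(result, s => Assert.Equal("  a", s));
    }
}
```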
369,631 | I am trying to understand behind the curtain scenes of Javascript and kind of stuck in understanding the creation of built in objects, specially Object and Function and the relation between them. When I read that all the built in objects like Array, String etc are extension (inherited) from Object I assumed that Object is the first built in object that gets created and rest of the objects inherits from it. But it doesn't make sense when you come to know that Objects can only be created by functions but then functions are also nothing but objects of Function. It kind of started to sound like dilemma of hen and chicken. The other extremely confusing thing is, if I console.log(Function.prototype) it prints a function but when I print console.log(Object.prototype) it prints an object. Why is Function.prototype a function when it was meant to be an object? Also, according to Mozilla documentation every javascript function is extension of Function object but when you console.log(Function.prototype.constructor) it is again a function. Now how can you use something to create it self (Mind = blown). Last thing, Function.prototype is a function but I can access the constructor function using Function.prototype.constructor does that mean Function.prototype is a function which returns the prototype object | I am trying to understand behind the curtain scenes of Javascript and kind of stuck in understanding the creation of built in objects, specially Object and Function and the relation between them. It is complicated, it is easy to misunderstand, and a great many beginner Javascript books get it wrong, so do not trust everything you read. I was one of the implementers of Microsoft's JS engine in the 1990s and on the standardization committee, and I made a number of mistakes putting together this answer. (Though since I have not worked on this for over 15 years I can perhaps be forgiven.) It is tricky stuff. But once you understand prototype inheritance, it all makes sense. When I read that all the built in objects like Array, String etc are extension (inherited) from Object I assumed that Object is the first built in object that gets created and rest of the objects inherits from it. Start by throwing away everything you know about class-based inheritance. JS uses prototype based inheritance. Next, make sure you have a very clear definition in your head of what "inheritance" means. People used to OO languages like C# or Java or C++ think that inheritance means subtyping, but inheritance does not mean subtyping. Inheritance means that the members of one thing are also members of another thing . It does not necessarily mean that there is a subtyping relationship between those things! So many misunderstandings in type theory are the result of people not realizing that there is a difference. But it doesn't make sense when you come to know that Objects can only be created by functions but then functions are also nothing but objects of Function. This is simply false. Some objects are not created by calling new F for some function F . Some objects are created by the JS runtime out of nothing at all. There are eggs that were not laid by any chicken . They were just created by the runtime when it started up. Let's say what the rules are and maybe that will help. Every object instance has a prototype object. In some cases that prototype can be null . 
If you access a member on an object instance, and the object does not have that member, then the object defers to its prototype, or stops if the prototype is null. The prototype member of an object is typically not the prototype of the object. Rather, the prototype member of a function object F is the object that will become the prototype of the object created by new F() . In some implementations, instances get a __proto__ member that really does give their prototype. (This is now deprecated. Don't rely on it.) Function objects get a brand-new default object assigned to prototype when they are created. The prototype of a function object is, of course Function.prototype . Let's sum up. The prototype of Object is Function.prototype Object.prototype is the object prototype object. The prototype of Object.prototype is null The prototype of Function is Function.prototype -- this is one of the rare situations where Function.prototype is actually the prototype of Function ! Function.prototype is the function prototype object. The prototype of Function.prototype is Object.prototype Let's suppose we make a function Foo. The prototype of Foo is Function.prototype . Foo.prototype is the Foo prototype object. The prototype of Foo.prototype is Object.prototype . Let's suppose we say new Foo() The prototype of the new object is Foo.prototype Make sure that makes sense. Let's draw it. Ovals are object instances. Edges are either __proto__ meaning "the prototype of", or prototype meaning "the prototype property of". All the runtime has to do is create all those objects and assign their various properties accordingly. I'm sure you can see how that would be done. Now let's look at an example that tests your knowledge. function Car(){ }
var honda = new Car();
print(honda instanceof Car);
print(honda.constructor == Car); What does this print? Well, what does instanceof mean? honda instanceof Car means "is Car.prototype equal to any object on honda 's prototype chain?" Yes it is. honda 's prototype is Car.prototype , so we're done. This prints true. What about the second one? honda.constructor does not exist so we consult the prototype, which is Car.prototype . When the Car.prototype object was created it was automatically given a property constructor equal to Car , so this is true. Now what about this? var Animal = new Object();
function Reptile(){ }
Reptile.prototype = Animal;
var lizard = new Reptile();
print(lizard instanceof Reptile);
print(lizard.constructor == Reptile); What does this program print? Again, lizard instanceof Reptile means "is Reptile.prototype equal to any object on lizard 's prototype chain?" Yes it is. lizard 's prototype is Reptile.prototype , so we're done. This prints true. Now, what about print(lizard.constructor == Reptile); You might think that this also prints true, since lizard was constructed with new Reptile but you would be wrong. Reason it out. Does lizard have a constructor property? No. Therefore we look at the prototype. The prototype of lizard is Reptile.prototype , which is Animal . Does Animal have a constructor property? No. So we look at it's prototype. The prototype of Animal is Object.prototype , and Object.prototype.constructor is created by the runtime and equal to Object . So this prints false. We should have said Reptile.prototype.constructor = Reptile; at some point in there, but we did not remember to! Make sure that all makes sense to you. Draw some boxes and arrows if it's still confusing. The other extremely confusing thing is, if I console.log(Function.prototype) it prints a function but when I print console.log(Object.prototype) it prints an object. Why is Function.prototype a function when it was meant to be an object? The function prototype is defined as a function which, when called, returns undefined . We already know that Function.prototype is the Function prototype, oddly enough. So therefore Function.prototype() is legal, and when you do it, you get undefined back. So it's a function. The Object prototype does not have this property; it is not callable. It's just an object. when you console.log(Function.prototype.constructor) it is again a function. Function.prototype.constructor is just Function , obviously. And Function is a function. Now how can you use something to create it self (Mind = blown). You are over-thinking this . All that is required is that the runtime creates a bunch of objects when it starts up. Objects are just lookup tables that associate strings with objects. When the runtime starts up, all it has to do is create a few dozen blank objects, and then start assigning the prototype , __proto__ , constructor , and so on properties of each object until they make the graph that they need to make. It will be helpful if you take that diagram I gave you above and add constructor edges to it. You'll quickly see that this is a very simple object graph and that the runtime will have no problem creating it. A good exercise would be to do it yourself. Here, I'll start you off. We'll use my__proto__ to mean "the prototype object of" and myprototype to mean "the prototype property of". var myobjectprototype = new Object();
var myfunctionprototype = new Object();
myfunctionprototype.my__proto__ = myobjectprototype;
var myobject = new Object();
myobject.myprototype = myobjectprototype; And so on. Can you fill in the rest of the program to construct a set of objects that has the same topology as the "real" Javascript built-in objects? If you do so, you'll find it is extremely easy. Objects in JavaScript are just lookup tables that associate strings with other objects . That's it! There's no magic here. You're getting yourself tied in knots because you are imagining constraints that do not actually exist, like that every object had to be created by a constructor. Functions are just objects that have an additional capability: to be called. So go through your little simulation program and add an .mycallable property to every object that indicates whether it is callable or not. It's as simple as that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/369631",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42251/"
]
} |
369,642 | I see questions similar to this with regards to parameter names that match properties on the class, but I can't find anything regarding using a parameter name that is the same as the parameter type name except for casing in C#. It doesn't seem to be a violation that I can find, but is it considered bad practice? For example, I have the following method public Range PadRange(Range range) {} This method takes a range, and returns a new range that has had some padding applied. So, given the generic context, I can't think of a more descriptive name for the parameter. However, I'm reminded of a a tip I picked up when reading Code Complete about "psychological distance". It says Psychological distance can be defined as the ease in which two items can be differentiated...As you debug, be ready for the problems caused by insufficient psychological distance between similar variable names and between similar routine names. As you construct code, choose names with large differences so that you can avoid the problem. My method signature has a lot of "Range" going on, so it feels like it may be an issue with regards to this psychological distance. Now, I see many developers do the following public Range PadRange(Range myRange) {} I personally have a strong distaste for this convention. Adding a "my" prefix to variable names provides no additional context. I also see the following public Range PadRange(Range rangeToPad) {} I like this better than the "my" prefixing, but still don't care for it overall. It just feels overly verbose to me, and reads awkwardly as a variable name. To me, it's understood that range will be padded because of the method name. So with all this laid out, my gut is to go with the first signature. To me, it's clean. No need to force context when it's not needed. But am I doing myself or future developers a disservice with this convention? Am I violating a best practice? | Don't overthink this, Range range is fine. I use such kind of naming for more than 15 years in C#, and probably much longer in C++, and have never experienced any real drawbacks from it, quite the opposite. Of course, when you have different local variables in the same scope, all of the same type, it will probably help to invest some mental effort to distinguish them properly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/369642",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/154013/"
]
} |
369,770 | According to Wikipedia, functional programming languages, which are declarative, disallow side effects. Declarative programming in general attempts to minimize or eliminate side effects. Also, according to Wikipedia, a side effect is related to state changes. So, in that sense, functional programming languages actually eliminate side effects, since they keep no state. But, in addition, a side effect has another definition: a side effect has an observable interaction with its calling functions or the
outside world besides returning a value. For example, a particular
function might modify a global variable or static variable, modify one
of its arguments, raise an exception, write data to a display or file,
read data, or call other side-effecting functions. In that sense, functional programming languages actually allow side effects, since there are countless examples of functions affecting the outside world, calling other functions, raising exceptions, writing to files, etc. So, finally, do functional programming languages allow side effects or not? Or do I misunderstand what qualifies as a "side effect", such that imperative languages allow them and declarative ones don't? According to the above and what I understand, no language eliminates side effects, so either I am missing something about side effects, or the Wikipedia definition is incorrectly broad. | Functional programming includes many different techniques. Some techniques are fine with side effects. But one important aspect is equational reasoning: if I call a function on the same value, I always get the same result. So I can substitute a function call with the return value, and get equivalent behaviour. This makes it easier to reason about the program, especially when debugging. Should the function have side effects, this doesn't quite hold. The return value is not equivalent to the function call, because the return value doesn't contain the side effects. The solution is to stop using side effects and instead encode these effects in the return value. Different languages have different effect systems. E.g. Haskell uses monads to encode certain effects such as IO or State mutation. The C/C++/Rust languages have a type system that can disallow mutation of some values. In an imperative language, a print("foo") function will print something and return nothing. In a pure functional language like Haskell, a print function also takes an object representing the state of the outside world, and returns a new object representing the state after having performed this output. Something similar to newState = print "foo" oldState. I can create as many new states from the old state as I like. However, only one will ever be used by the main function. So I need to sequence the states from multiple actions by chaining the functions. To print foo bar, I might say something like print "bar" (print "foo" originalState). If an output state is not used, Haskell doesn't perform the actions leading up to that state, because it is a lazy language. Conversely, this laziness is only possible because all effects are explicitly encoded as return values. Note that Haskell is the only commonly used functional language that uses this route. Other functional languages, including the Lisp family, the ML family, and newer functional languages like Scala, discourage but still allow side effects; they could be called imperative-functional languages. Using side effects for I/O is probably fine. Often, I/O (other than logging) is only done at the outer boundary of your system. No external communication happens within your business logic. It is then possible to write the core of your software in a pure style, while still performing impure I/O in an outer shell. This also means that the core can be stateless. Statelessness has a number of practical advantages, such as easier reasoning and better scalability. This is very popular for web application backends. Any state is kept outside, in a shared database. This makes load balancing easy: I don't have to stick sessions to a specific server. What if I need more servers? Just add another, because it's using the same database. What if one server crashes? I can redo any pending requests on another server. Of course, there still is state: in the database.
But I've made it explicit and extracted it, and could use a pure functional approach internally if I want to. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/369770",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/303771/"
]
} |
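The "pure core, impure shell" idea from the answer above (369,770) can be illustrated outside Haskell as well. Here is a minimal C# sketch with an invented domain: the core function is pure (same input, same output, no I/O), and all side effects live in the outer layer.

```csharp
using System;
using System.Linq;

public static class PriceCore
{
    // Pure core: no I/O, no mutation of external state, trivially testable.
    public static decimal TotalWithDiscount(decimal[] itemPrices, decimal discountRate) =>
        itemPrices.Sum() * (1 - discountRate);
}

public static class Program
{
    // Impure shell: reads input and writes output, delegating the logic to the pure core.
    public static void Main()
    {
        Console.Write("Prices (comma separated): ");
        var prices = (Console.ReadLine() ?? "")
            .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(decimal.Parse)
            .ToArray();

        Console.WriteLine($"Total: {PriceCore.TotalWithDiscount(prices, 0.1m)}");
    }
}
```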
370,135 | I am a recent grad student aiming to start my Master's in Computer Science. I have come across multiple open source projects that really intrigue me and encourage me to contribute to them (CloudStack, OpenStack, moby, and Kubernetes to name a few). One thing I've found that the majority of them have in common is the use of multiple programming languages (like Java + Python + Go or Python + C++ + Ruby). I have already looked at this other question, which deals with how multiple programming languages are made to communicate with each other: How to have two different programmings with two different languages interact? I want to understand the requirement that prompts enterprises to use multiple programming languages. What requirement or type of requirement makes the software architect or project lead say, "I'm proposing we use language X for task 1 and language Y for task 2"? I can't seem to understand the reason why multiple programming languages are used in the same product or software. | I can't seem to understand the reason as to why multiple programming languages are used in the same product or software? It is quite simple: there is no single programming language suitable for all needs and goals. Read Michael L. Scott's book Programming Language Pragmatics Some programming languages favor expressiveness and declarativity (a lot of scripting languages, but also high-level programming languages like Agda , Prolog , Lisp , Haskell, Ocaml, ...). When the cost of development is important (human time and cost of developers), it is suitable to use them (even if the runtime performance is not optimal). Other programming languages favor run-time performance (many low-level languages, with usually compiled implementations, like C++, Rust, Go, C, assembler, also specialized languages like OpenCL ...); often their specification allows some undefined behavior . When the performance of the code matters, it is preferable to use these languages. Some external libraries are written in and for a particular language and ABI and calling conventions in mind. You may need to use that other language, and follow foreign function interface conventions, perhaps by writing some glue code . In practice, it is unlikely to have a programming language which is highly expressive (so improves the productivity of the developer, assuming a skilled enough developer team) and very performant at runtime. In practice, there is a trade-off between expressivity and performance. Note: however, there has been some slow progress in programming languages: Rust is more expressive than C or perhaps even C++ but its implementation is almost as performant, and probably will improve to generate equally fast executables. So you need to learn new programming languages during your professional life; however there is No Silver Bullet Notice that the cost of development is more and more significant today (that was not the case in the 1970s -at that time computers where very costly- or in some embedded applications -with large volume of product). The rule of thumb (very approximate) is that a skilled developer is able to write about 25 thousand lines of (debugged & documented) source code each year, and that does not depend much on the programming language used. A common approach is to embed some scripting language (or some domain specific language ) in a large application. 
This design idea (related to domain-specific language) has been used for decades (a good example is the Emacs source code editor , using Elisp for scripting since the 1980s). Then you'll use an easily embeddable interpreter (like Guile , Lua , Python , ...) inside a larger application. The decision to embed an interpreter inside a large application has to be done very early, and has strong architectural implications. You'll then use two languages: for low level stuff which has to run quickly, some low level language like C or C++; for high level scripts, the other DSL or scripting language. Notice also that a given software can run, within most current operating systems (including Linux, Windows, Android, MacOSX, Hurd, ...), in several cooperating processes using some kind of inter-process communication techniques. It can even run on several computers (or many of them), using distributed computing techniques (e.g. cloud computing , HPC, client server, web applications , etc...). In both cases, it is easy to use several programming languages (e.g. code each program running on one process or computer in its own programming language). Read Operating Systems: Three Easy Pieces for more. Also, foreign function interfaces (e.g. JNI ), ABI s, calling conventions , etc... facilitate mixing several languages in the same program (or executable ) - and you'll find code generators like SWIG to help. In some cases, you have to mix several programming languages: web applications need Javascript or Webassembly (the only languages running inside most web browsers) for the part running in the browser (there are frameworks generating these, e.g. ocsigen ). Kernel code need some stuff (e.g. the scheduler, or the low level handling of interrupts) to be partly written in assembler, because C or C++ cannot express what is needed there, RDBMS queries should use SQL, GPGPU s need computer kernels coded in OpenCL or CUDA managed by C or C++ host code, etc....
Some languages are designed to facilitate such a mixture (e.g. asm statements in C, code chunks in my late GCC MELT , etc...). In some cases, you use metaprogramming techniques: some parts of your large software project would have code (e.g. in C or C++) generated by other tools (perhaps project specific tools) from some ad-hoc formalization: parser generators (improperly called compiler-compilers) like bison or ANTLR come to mind, but also SWIG or RPCGEN. And notice that GCC has more than a dozen of specialized C++ code generators (one for every internal DSL inside GCC) inside it. See also this example. Notice that metabugs are hard to find. Read also about bootstrapping compilers , and about homoiconicity and reflection (it is worthwhile to learn Lisp , play with SBCL , and to read SICP ; look also into JIT-compiling libraries like GCCJIT ; in some large programs you might generate some code at runtime using them; be aware of Greenspun's tenth rule ). Look also into the Circuit Less Traveled talk at FOSDEM2018. Sometimes, you want to provide formal annotations of your code (e.g. to help provers, static analyzers, compilers), using some specialized annotation language (which might be viewed as some DSL). Look into ACSL with Frama-C to annotate C programs (safety-critical ones), or OpenMP pragmas for HPC. Caveat: writing such annotations can require a lot of skills and development time. BTW, this suggests that some skills about compilers and interpreters are useful for every developer (even without working inside compilers). So read the Dragon Book even if you don't work on compilers. If you code your own interpreter (or if you design your DSL), read also Lisp In Small Pieces . See also this & that & that & that answers of mine related to your question. Study also the source code of several large free software projects (on github or from your Linux distribution ) for inspiration and enlightenment. Also, some programming languages evolved by adding annotations (as pragmas or comments ) to existing languages. For examples, think of ACSL (a comment-extension to annotate C programs to enable their proofs by Frama-C ) or of OpenCL (a C dialect to program GPGPUs) or OpenMP or OpenACC #pragma s or Common Lisp type annotations . PS: there are also social or organizational or historical reasons to mix programming languages; I'm ignoring them here, but I know that in practice such reasons are dominant. Read also The Mythical Man Month | {
"source": [
"https://softwareengineering.stackexchange.com/questions/370135",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/304405/"
]
} |
370,180 | All the sites I go to looking for information on FAT16 just declaratively state that it cannot allocate more than 2 GB. OK. Great. I believe you. But how do you come to that conclusion (other than just testing it)? Is there some sort of formula one can use to determine how much a FAT16 system can hold? | FAT16 uses 16 bits to identify clusters. Thus there are a maximum of 65536 clusters before you run out of identifiers, and some identifiers are reserved for non-file uses. Each file occupies at least one cluster. Larger clusters increase the minimum allocation per file, increasing the overhead of small files. The cluster size then tells you the maximum identifiable volume. For 32KiB clusters, that is 32 * 1024 * 65536 B = 2GiB. You could increase the cluster size indefinitely, by increasing the size of your on-disk sectors, but you remain limited to the maximum number of files. You would also run into issues with software that assumed the default sector size (512B). At the point where there were physical volumes of ~2GiB readily available, processors and OSs were 32-bit, so moving to FAT32 was a sensible choice, allowing vastly more files in small clusters. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/370180",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/304493/"
]
} |
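The limit described in the answer above is just cluster-count times cluster-size arithmetic. A minimal C# sketch (not part of the original answer) that reproduces the 2 GiB figure and shows how the ceiling moves with the cluster size; it ignores the handful of reserved cluster identifiers the answer mentions:

using System;

class Fat16Limit
{
    static void Main()
    {
        const long maxClusters = 65536;               // 16-bit cluster identifiers (a few are reserved in practice)
        long[] clusterSizesKiB = { 2, 4, 8, 16, 32 };

        foreach (long kib in clusterSizesKiB)
        {
            long bytes = maxClusters * kib * 1024;    // clusters * bytes per cluster
            Console.WriteLine($"{kib,2} KiB clusters -> {bytes / (1024.0 * 1024 * 1024):0.##} GiB maximum volume");
        }
        // 32 KiB clusters: 65536 * 32 * 1024 B = 2 GiB, the commonly quoted FAT16 limit.
    }
}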
370,429 | Let's say I am trying to describe my code in a technical meeting. First, I set the boolean foobar to true and Second, I set the boolean foobar to false seems a bit wordy. If foobar was toggled, I could probably say, Third, I toggle foobar Through implication here, you know it's a boolean. So shouldn't I be able to say: Fourth, I Truthify foobar and Fifth, I Falsify foobar which will also, through implication, tell my listeners that we are dealing with a boolean variable? Is there proper terminology for this? Thanks. | Setting a value to true is setting it. Setting it to false is clearing it. Changing the current value is toggling it. You can also use "setting it to true" and "setting it to false", of course. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/370429",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/136084/"
]
} |
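In code, the three terms from the answer above map onto three one-line operations. A tiny C# illustration; the flag name is made up for the example:

using System;

class FlagTerminology
{
    static void Main()
    {
        bool foobar = false;   // hypothetical flag, just for illustration

        foobar = true;         // "setting" the flag
        Console.WriteLine($"after set:    {foobar}");

        foobar = false;        // "clearing" the flag
        Console.WriteLine($"after clear:  {foobar}");

        foobar = !foobar;      // "toggling" the flag: flips whatever the current value is
        Console.WriteLine($"after toggle: {foobar}");
    }
}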
370,451 | I am creating an API-structured web application, and in this application we have different layers which each do their own job. The first layer is the validation layer, which validates user input; if it passes validation, we move the request to the second layer (which is the access control layer), otherwise we return an error message. The second layer is access control, which checks whether the user has permission to perform the task it wants to perform; if the user has permission, it moves the request to the next layer, otherwise it returns an error message. The third layer is the controller layer, where we have the logic of the application. My question is: is it OK to have the validation layer before access control? What if a user is trying to perform a task which they don't have permission to perform, and we are sending back validation error messages? The user would be sending requests to an endpoint and talking to the validation layer, and only once the input passes validation would they see the message "You can't access this!" It feels strange to me, so is it fine like this, or what other options do I have for structuring this? | It depends on whether knowing the validity of some input for a task that you aren't permitted to do is a security leak. If it is, you really should do it the other way round. The only safe response to an unauthorised user is "access denied". If sometimes the response is "bad request" and other times "access denied", you are sending information to an unauthorised user. As an example, you could have a check in the validation of the "delete document" task that the named document exists. Someone with no permissions would be able to discern whether something exists by attempting to delete it, and comparing which error they receive back. A particularly determined attacker could enumerate all document names (under a certain length), to see which exist. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/370451",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/252239/"
]
} |
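One way to read the answer above in code: run the permission check first and collapse every failure on that path into the same generic response, so an unauthorised caller cannot use validation details to probe for what exists. A hedged C# sketch; all the types and method names below are invented for illustration and are not taken from the question:

// Minimal stand-in types, invented for the sketch.
public interface IAccessControl { bool MayDelete(string userId, string documentName); }
public interface IDocumentStore { bool Exists(string documentName); void Delete(string documentName); }

public enum ResultKind { Ok, AccessDenied, BadRequest, NotFound }

public sealed class DeleteDocumentEndpoint
{
    private readonly IAccessControl _accessControl;
    private readonly IDocumentStore _documents;

    public DeleteDocumentEndpoint(IAccessControl accessControl, IDocumentStore documents)
    {
        _accessControl = accessControl;
        _documents = documents;
    }

    public ResultKind Handle(string userId, string documentName)
    {
        // 1. Authorization first: an unauthorised caller gets the same generic
        //    answer whether or not the document exists, so nothing is leaked.
        if (!_accessControl.MayDelete(userId, documentName))
            return ResultKind.AccessDenied;

        // 2. Only now validate the request itself and report detailed errors.
        if (string.IsNullOrWhiteSpace(documentName))
            return ResultKind.BadRequest;

        if (!_documents.Exists(documentName))
            return ResultKind.NotFound;

        _documents.Delete(documentName);
        return ResultKind.Ok;
    }
}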
370,579 | In Java, as soon as an object no longer has any references, it becomes eligible for deletion, but the JVM decides when the object is actually deleted. To use Objective-C terminology, all Java references are inherently "strong". However, in Objective-C, if an object no longer has any strong references, the object is deleted immediately. Why isn't this the case in Java? | First of all, Java has weak references and another best-effort category called soft references. Weak vs. strong references is a completely separate issue from reference counting vs. garbage collection. Second, there are patterns in memory usage that can make garbage collection more efficient in time by sacrificing space. For example, newer objects are much more likely to be deleted than older objects. So if you wait a bit between sweeps, you can delete most of the new generation of memory, while moving the few survivors to longer-term storage. That longer term storage can be scanned much less frequently.
Immediate deletion via manual memory management or reference counting is much more prone to fragmentation. It's sort of like the difference between going grocery shopping once per paycheck, and going every day to get just enough food for one day. Your one large trip will take a lot longer than an individual small trip, but overall you end up saving time and probably money. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/370579",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/164475/"
]
} |
371,011 | As noted in the comments by @benjamin-gruenbaum, this is called the Boolean trap: Say I have a function like this UpdateRow(var item, bool externalCall); and in my controller, that value for externalCall will always be TRUE. What is the best way to call this function? I usually write UpdateRow(item, true); But I ask myself, should I declare a boolean, just to indicate what that 'true' value stands for? You can know that by looking at the declaration of the function, but it's obviously faster and clearer if you just saw something like bool externalCall = true;
UpdateRow(item, externalCall); PD: Not sure if this question really fits here, if it doesn't, where could I get more info about this? PD2: I didn't tag any language 'cause I thought it was a very generic problem. Anyway, I work with c# and the accepted answer works for c# | There isn't always a perfect solution, but you have many alternatives to choose from: Use named arguments , if available in your language. This works very well and has no particular drawbacks. In some languages, any argument can be passed as a named argument, e.g. updateRow(item, externalCall: true) ( C# ) or update_row(item, external_call=True) (Python). Your suggestion to use a separate variable is one way to simulate named arguments, but does not have the associated safety benefits (there's no guarantee that you used the correct variable name for that argument). Use distinct functions for your public interface, with better names. This is another way of simulating named parameters, by putting the paremeter values in the name. This is very readable, but leads to a lot of boilerplate for you, who is writing these functions. It also can't deal well with combinatorial explosion when there are multiple boolean arguments. A significant drawback is that clients can't set this value dynamically, but must use if/else to call the correct function. Use an enum . The problem with booleans is that they are called "true" and "false". So, instead introduce a type with better names (e.g. enum CallType { INTERNAL, EXTERNAL } ). As an added benefit, this increases the type safety of your program (if your language implements enums as distinct types). The drawback of enums is that they add a type to your publicly visible API. For purely internal functions, this doesn't matter and enums have no significant drawbacks. In languages without enums, short strings are sometimes used instead. This works, and may even be better than raw booleans, but is very susceptible to typos. The function should then immediately assert that the argument matches a set of possible values. None of these solutions has a prohibitive performance impact. Named parameters and enums can be resolved completely at compile time (for a compiled language). Using strings may involve a string comparison, but the cost of that is negligible for small string literals and most kinds of applications. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/371011",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/296241/"
]
} |
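A compact C# sketch of the first and third alternatives from the answer above, a named argument and an enum, using a made-up UpdateRow signature along the lines of the question:

using System;

public enum CallOrigin { Internal, External }   // replaces the bare bool with self-describing names

public static class RowUpdater
{
    // Variant 1: keep the bool, rely on a named argument at the call site.
    public static void UpdateRow(string item, bool externalCall)
        => Console.WriteLine($"{item}: externalCall={externalCall}");

    // Variant 2: replace the bool with an enum.
    public static void UpdateRow(string item, CallOrigin origin)
        => Console.WriteLine($"{item}: origin={origin}");

    public static void Main()
    {
        UpdateRow("row-42", externalCall: true);      // readable without opening the declaration
        UpdateRow("row-42", CallOrigin.External);     // readable and type-safe
    }
}

The enum variant also keeps the call site readable when the value comes from a variable, which is where the named-argument trick stops helping.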
371,140 | A friend of my family asked me for a bit of help as he learns to program (in the C language). As we were talking, he expressed frustration about having a hard time understanding the error messages his compiler (GCC) is giving him when he makes errors. He does not understand all the terms used, and sometimes it's their combination which is beyond his comprehension. He was asking me "How come the compiler documentation doesn't include longer explanations of the error messages?" - and I didn't have a good answer for him. I myself - as a more experienced programmer - am very rarely in this situation, but those rare occurrences do happen - some exotic error message I hadn't encountered before. I manage to get by with looking for the error message in a search engine, but apparently that doesn't always work for him - especially since the errors he encounters are more common and occur in multiple distinct cases, which he has trouble relating to his own. So, how should a novice programmer approach the challenge of understanding compiler error messages? Specifically, with the combination of C and GCC? | A few useful techniques: Turn on -Wall and -Werror . It might seem counterintuitive when you're struggling with deciphering error messages to create even more error messages, but the warnings are typically easier to understand and closer to the actual source of the problem, and ignoring them can lead to errors that are difficult to understand. Just try to fix the first error in the list. Often errors compound on each other, leading to later error messages not really being actual errors. Fix one and recompile. You'll get better at fixing multiple error messages when you gain more experience. Use the newest compiler version possible. C is an extremely stable language. Therefore, a huge part of the improvements in newer compilers isn't to add language features, but to improve the developer experience, including better error messages. Many widely-used linux distributions have very old versions of gcc by default. Program incrementally. Don't try to write a ton of code before compiling. Write the shortest amount possible that will still compile. If you've only changed one line since the last time it compiled cleanly, it's a lot easier to figure out which line contains the actual problem. Write unit tests. It makes you more confident to make clarifying refactoring changes when fixing compile errors. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/371140",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/63497/"
]
} |
371,303 | This might sound like a weird question, but in my department we are having trouble with the following situation: We are working here on a server application, which is growing larger and larger, to the point that we are considering splitting it into different parts (DLL files), dynamically loading them when needed and unloading them afterwards, in order to be able to handle the performance issues. But: the functions we are using are passing input and output parameters as STL objects, and as mentioned in a Stack Overflow answer, this is a very bad idea. (The post contains some ±solutions and hacks, but it all does not look very solid.) Obviously we could replace the input/output parameters by standard C++ types and create STL objects from those once inside the functions, but this might be causing performance drops. Is it OK to conclude that, in case you are considering building an application which might grow so large that one single PC can't handle it anymore, you must not use STL as a technology at all? More background about this question: There seem to be some misunderstandings about the question: the issue is the following: My application is using a huge amount of performance (CPU, memory) in order to complete its work, and I would like to split this work into different parts (as the program is already split into multiple functions); it's not that difficult to create some DLLs out of my application and put some of the functions in the export table of those DLLs. This would result in the following situation: +-----------+-----------+----
| Machine1 | Machine2 | ...
| App_Inst1 | App_Inst2 | ...
| | |
| DLL1.1 | DLL2.1 | ...
| DLL1.2 | DLL2.2 | ...
| DLL1.x | DLL2.x | ...
+-----------+-----------+---- App_Inst1 is the instance of the application, installed on Machine1, while App_Inst2 is the instance of the same application, installed on Machine2. DLL1.x is a DLL, installed on Machine1, while DLL2.x is a DLL, installed on Machine2. DLLx.1 covers exported function1. DLLx.2 covers exported function2. Now on Machine1 I'd like to execute function1 and function2. I know that this will overload Machine1, so I'd like to send a message to App_Inst2, asking that application instance to perform function2. The input/output parameters of function1 and function2 are STL (C++ Standard Type Library) objects, and regularly I might expect the customer to do updates of App_Inst1, App_Inst2, DLLx.y (but not all of them, the customer might upgrade Machine1 but not Machine2, or only upgrade the applications but not the DLLs or vice versa, ...). Obviously if the interface (input/output parameters) changes, then the customer is forced to do complete upgrades. However, as mentioned in the referred StackOverflow URL, a simple re-compilation of App_Inst1 or one of the DLLs might cause the whole system to fall apart, hence my original title of this post, dis-advising the usage of STL (C++ Standard Template Library) for large applications. I hope that hereby I've cleared out some questions/doubts. | This is a stone-cold classic X-Y problem. Your real problem is performance issues. However your question makes it clear that you've done no profiling or other evaluations of where the performance issues actually come from. Instead you're hoping that splitting your code into DLLs will magically solve the problem (which it won't, for the record), and now you're worried about one aspect of that non-solution. Instead, you need to solve the real problem. If you have multiple executables, check which one is causing the slow-down. While you're at it, make sure it actually is your program taking all the processing time, and not a badly-configured Ethernet driver or something like that. And after that, start profiling the various tasks in your code. The high-precision timer is your friend here. The classic solution is to monitor average and worst-case processing times for a chunk of code. When you've got data, you can work out how to deal with the problem, and then you can work out where to optimise. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/371303",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/250257/"
]
} |
371,317 | In some organisations, apparently, part of the software release process is to use unit testing, but at any point in time all unit tests must pass. E.g. there might be some screen which shows all unit tests passing in green - which is supposed to be good. Personally, I think this is not how it should be, for the following reasons: It promotes the idea that code should be perfect and no bugs should exist - which in the real world is surely impossible for a program of any size. It is a disincentive to think up unit tests that will fail. Or certainly come up with unit tests that would be tricky to fix. If at any point in time all unit tests pass, then there is no big picture of the state of the software at any point in time. There is no roadmap/goal. It deters writing unit tests up-front - before the implementation. I would even suggest that releasing software with failing unit tests is not necessarily bad. At least then you know that some aspect of the software has limitations. Am I missing something here? Why do organisations expect all unit tests to pass? Isn't this living in a dream world? And doesn't it actually deter a real understanding of code? | This question contains IMHO several misconceptions, but the main one I would like to focus on is that it does not differentiate between local development branches, trunk, staging or release branches. In a local dev branch, it is normal to have some failing unit tests at almost any time. In the trunk, it is only acceptable to some degree, but already a strong indicator to fix things ASAP. Note that failing unit tests in the trunk can disturb the rest of the team, since they require everyone to check whether his/her latest change was causing the failure. In a staging or release branch, failing tests are "red alert", showing something has gone utterly wrong with some changeset when it was merged from the trunk into the release branch. I would even suggest that releasing software with failing unit tests is not necessarily bad. Releasing software with some known bugs below a certain severity is not necessarily bad. However, these known glitches should not cause a failing unit test. Otherwise, after each unit test run, one will have to look into the 20 failed unit tests and check one-by-one if the failure was an acceptable one or not. This gets cumbersome, error-prone, and discards a huge part of the automation aspect of unit tests. If you really have tests for acceptable, known bugs, use your unit testing tool's disable/ignore feature (so they are not run by default, only on demand). Additionally, add a low-priority ticket to your issue tracker, so the problem won't get forgotten. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/371317",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38616/"
]
} |
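The "disable/ignore feature" the answer above mentions looks roughly like this in C#, assuming NUnit (other frameworks have an equivalent skip facility); the tests themselves are made-up placeholders. The attribute keeps the test out of the default run while the reason, and ideally a ticket number, stays right next to it:

using System;
using NUnit.Framework;

[TestFixture]
public class InvoiceRoundingTests
{
    [Test]
    public void Totals_are_rounded_half_away_from_zero()
    {
        Assert.That(Math.Round(10.005m, 2, MidpointRounding.AwayFromZero), Is.EqualTo(10.01m));
    }

    // A known, accepted low-severity glitch: excluded from the default run,
    // with the reason kept visible instead of letting the suite stay red.
    [Test]
    [Ignore("Known issue with negative totals, tracked as a low-priority ticket")]
    public void Negative_totals_are_rounded_the_same_way()
    {
        Assert.That(Math.Round(-10.005m, 2, MidpointRounding.AwayFromZero), Is.EqualTo(-10.01m));
    }
}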
371,440 | Here's one example: My web application contains draggable elements. When dragging an element, the browser produces a "ghost image". I want to remove the "ghost image" when dragging and I write a test for this behaviour. My problem is that I initially have no idea how to fix this bug and the only way I can write a test is after I have fixed it. In a simple function such as let sum = (a, b) => a - b , you can write a test as to why sum(1, 2) does not equal 3 before writing any code. In the case I am describing, I can't test, because I don't know what the verification is (I don't know what the assertion should be). A solution to the problem described is: let dataTransfer = e.dataTransfer
let canvas = document.createElement('canvas');
canvas.style.opacity = '0';
canvas.style.position = 'absolute';
canvas.style.top = '-1000px';
dataTransfer.effectAllowed = 'none';
document.body.appendChild(canvas);
dataTransfer.setDragImage(canvas, 0, 0); I could not have known that this was the solution. I could not even have written the test after finding the solution online, because the only way I could have known if it really worked was to add this code into my codebase and verify with the browser if it had the desired effect. The test had to be written after the code, which goes against TDD. What would be the TDD approach to this problem? Is writing the test before the code mandatory or optional? | If I understood you correctly, you cannot even write a reliable automated test for your "ghost image" example after you found a solution, since the only way of verifying the correct behaviour is to look at the screen and check if there is no ghost image any more. That gives me the impression your original headline asked the wrong question. The real question should be: how do you automatically test a certain behaviour of a graphical user interface? And the answer is - for several kinds of UI issues, you don't. Sure, one can try to automate making the UI show the problem somehow, and try to implement something like a screenshot comparison, but this is often error-prone, brittle and not cost-effective. Especially "test driving" UI design or UI improvements by automated tests written in advance is literally impossible. You "drive" UI design by making an improvement, showing the result to a human (yourself, some testers or a user) and asking for feedback. So accept the fact that TDD is not a silver bullet, and for some kinds of issues manual testing still makes more sense than automated tests. If you have a systematic testing process, maybe with some dedicated testers, the best thing you can do is to add the case to their test plan. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/371440",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/306565/"
]
} |
371,460 | I have a piece of code that can be represented as: public class ItemService {
public void DeleteItems(IEnumerable<Item> items)
{
// Save us from possible NullReferenceException below.
if(items == null)
return;
foreach(var item in items)
{
// For the purpose of this example, lets say I have to iterate over them.
// Go to database and delete them.
}
}
} Now I'm wondering if this is the right approach or should I throw exception. I can avoid exception, because returning would be the same as iterating over an empty collection, meaning, no important code is executed anyway, but on the other hand I'm possibly hiding problems somewhere in the code, because why would anyone want to call DeleteItems with null parameter? This may indicate that there is a problem somewhere else in the code. This is a problem I usually have with methods in services, because most of them do something and don't return a result, so if someone passes invalid information then there is nothing for the service to do, so it returns. | These are two different questions. Should you accept null ? That depends on your general policy about null in the code base. In my opinion, banning null everywhere except where explicitly documented is a very good practice, but it's even better practice to stick to the convention your code base already has. Should you accept the empty collection? In my opinion: YES, absolutely. It is much more effort to restrict all callers to non-empty collections than to do the mathematically right thing - even if it surprises some developers who are iffy with the concept of zero. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/371460",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/255682/"
]
} |
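Applied to the DeleteItems method from the question, the answer above translates into something like this hedged sketch: reject null loudly (if the code base follows a "no nulls" convention; under a lenient policy the original early return is also defensible), but treat an empty sequence as a perfectly normal input:

using System;
using System.Collections.Generic;

public class Item { public int Id { get; set; } }

public class ItemService
{
    public void DeleteItems(IEnumerable<Item> items)
    {
        // Null is a programming error under a "no nulls" convention: fail fast and loudly.
        if (items == null)
            throw new ArgumentNullException(nameof(items));

        // An empty collection simply means "delete zero items": the loop runs zero times.
        foreach (var item in items)
        {
            // delete the item (the database call is omitted in this sketch)
        }
    }
}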
371,545 | I'm developing an application with Python. I want to have a Boolean variable that represents whether something is a buy or a sell, but I'm not sure how I should name it. Here are my current ideas: isBuy isSell buy_sell sell_buy buy1_sell0 Actually I like the last one the most, although it's somehow the ugliest, because it tells you all you need to know about it with certainty. However, I thought I'd ask some more experienced people to see what the actual Python convention is for such situations. | Don't use a Boolean. Use an enum. E.g. TransactionType with instances Buy and Sell . That is unambiguous and far easier to understand. If you want to persist the data efficiently, the boolean can be a good solution as long as there are only two instances in the enum. However, your code need not be efficient at that level of detail (that's the interpreter's job); it needs to be very understandable. The enum achieves that goal far better. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/371545",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/306705/"
]
} |
371,671 | I have been tasked to increase code coverage of an existing Java project. I noticed that the code coverage tool ( EclEmma ) has highlighted some methods that are never called from anywhere. My initial reaction is not to write unit tests for these methods, but to highlight them to my line manager/team and ask why these functions are there to begin with. What would the best approach be? Write unit tests for them, or question why they're there? | Delete. Commit. Forget. Rationale: Dead code is dead. By its very description it has no purpose. It may have had a purpose at one point, but that is gone, and so the code should be gone. Version control ensures that in the (in my experience) rare unique event that someone comes around later looking for that code it can be retrieved. As a side effect, you instantly improve code coverage without doing anything (unless the dead code is tested, which is rarely the case). Caveat from comments: The original answer assumed that you had thoroughly verified that the code is, beyond doubt, dead. IDEs are fallible, and there are several ways in which code which looks dead might in fact be called. Bring in an expert unless you're absolutely sure. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/371671",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/257271/"
]
} |
371,722 | Dependency injection (DI) is a well known and fashionable pattern. Most of engineers know its advantages, like: Making isolation in unit testing possible/easy Explicitly defining dependencies of a class Facilitating good design ( single responsibility principle (SRP) for example) Enabling switching implementations quickly ( DbLogger instead of ConsoleLogger for example) I reckon there's industry wide consensus that DI is a good, useful pattern. There's not too much criticism at the moment. Disadvantages which are mentioned in the community are usually minor. Some of them: Increased number of classes Creation of unnecessary interfaces Currently we discuss architecture design with my colleague. He's quite conservative, but open minded. He likes to question things, which I consider good, because many people in IT just copy the newest trend, repeat the advantages and in general don't think too much - don't analyse too deep. The things I'd like to ask are: Should we use dependency injection when we have just one implementation? Should we ban creating new objects except language/framework ones? Is injecting a single implementation bad idea (let's say we have just one implementation so we don't want to create "empty" interface) if we don't plan to unit test a particular class? | First, I would like to separate the design approach from the concept of frameworks. Dependency injection at its simplest and most fundamental level is simply: A parent object provides all the dependencies required to the child object. That's it. Note, that nothing in that requires interfaces, frameworks, any style of injection, etc. To be fair I first learned about this pattern 20 years ago. It is not new. Due to more than 2 people having confusion over the term parent and child, in the context of dependency injection: The parent is the object that instantiates and configures the child object it uses The child is the component that is designed to be passively instantiated. I.e. it is designed to use whatever dependencies are provided by the parent, and does not instantiate it's own dependencies. Dependency injection is a pattern for object composition . Why interfaces? Interfaces are a contract. They exist to limit how tightly coupled two objects can be. Not every dependency needs an interface, but they help with writing modular code. When you add in the concept of unit testing, you may have two conceptual implementations for any given interface: the real object you want to use in your application, and the mocked or stubbed object you use for testing code that depends on the object. That alone can be justification enough for the interface. Why frameworks? Essentially initializing and providing dependencies to child objects can be daunting when there are a large number of them. Frameworks provide the following benefits: Autowiring dependencies to components Configuring the components with settings of some sort Automating the boiler plate code so you don't have to see it written in multiple locations. 
They also have the following disadvantages: The parent object is a "container", and not anything in your code It makes testing more complicated if you can't provide the dependencies directly in your test code It can slow down initialization as it resolves all the dependencies using reflection and many other tricks Runtime debugging can be more difficult, particularly if the container injects a proxy between the interface and the actual component that implements the interface (aspect oriented programming built in to Spring comes to mind). The container is a black box, and they aren't always built with any concept of facilitating the debugging process. All that said, there are trade-offs. For small projects where there aren't a lot of moving parts, and there's little reason to use a DI framework. However, for more complicated projects where there are certain components already made for you, the framework can be justified. What about [random article on the Internet]? What about it? Many times people can get overzealous and add a bunch of restrictions and berate you if you aren't doing things the "one true way". There isn't one true way. See if you can extract anything useful from the article and ignore the stuff you don't agree with. In short, think for yourself and try things out. Working with "old heads" Learn as much as you can. What you will find with a lot of developers that are working into their 70s is that they have learned not to be dogmatic about a lot of things. They have methods that they have worked with for decades that produce correct results. I've had the privilege of working with a few of these, and they can provide some brutally honest feedback that makes a lot of sense. And where they see value, they add those tools to their repertoire. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/371722",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/143750/"
]
} |
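A minimal C# sketch of the bare pattern the answer above starts from: the parent composes the child's dependencies, no framework involved. All names here are invented for the example:

using System;

public interface IMessageSender { void Send(string to, string body); }

// One real implementation; a test could substitute a fake that just records calls.
public class SmtpSender : IMessageSender
{
    public void Send(string to, string body) => Console.WriteLine($"SMTP -> {to}: {body}");
}

// The "child": it never news up its own dependency, it receives it.
public class WelcomeService
{
    private readonly IMessageSender _sender;

    public WelcomeService(IMessageSender sender) => _sender = sender;

    public void Welcome(string email) => _sender.Send(email, "Welcome aboard!");
}

public static class Program
{
    // The "parent" (composition root): instantiates and wires everything together.
    public static void Main()
    {
        IMessageSender sender = new SmtpSender();
        var service = new WelcomeService(sender);
        service.Welcome("[email protected]");
    }
}

A DI container would only replace the three wiring lines in Main, not the shape of WelcomeService, which is the distinction between the design approach and the framework that the answer draws.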
371,728 | I am beginning with unit testing in c#. Here is my converter: public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
{
int c = 0;
if (value != null)
{
var path = value.ToString();
for (int x = 0; x < path.Length; x++)
{
if (c == 2) { return path.Substring(x).Replace("%","/"); }
if (path[x] == '%') c++;
}
}
return value;
Here, if I pass a string "TestString" or any string without %, it's going to return the same string, right?
What should I pass as value? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/371728",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/306950/"
]
} |
371,966 | I really liked the concepts in the video The Principles of Clean Architecture by Uncle Bob Martin . But I feel like this pattern is like a combination of Abstract Factory and Builder patterns at its core. This is one way to write good programs but not the only way. Rails and React are 2 frameworks that come to mind that do not promote this kind of clean architecture. Rails expects your business logic to be in the models ( FatModels and SkinnyControllers ), and React inside your components. Both approaches tightly couple business logic and framework code. I don't find anything wrong in any of the 3 ways. It is a judgement call to pick any one. But in the video I feel he suggests that clean architecture should have a clear boundary between business logic and frameworks. Frameworks (web, android, etc.) should be plugins that plug in to the business logic. He even subtly mocks Rails in the video. So, is "Clean Architecture" by Bob Martin a rule of thumb for all architectures or is it just one of the options? | While the "Clean Architecture" is fine and has many advantages, it is important to remember that: The Clean Architecture is largely Robert C. Martin's re-branding and evolution of related approaches like the Onion Architecture by Jeffrey Palermo (2008) and the Hexagonal Architecture ("Ports and Adapters") by Alistair Cockburn and others (< 2008). Different problems have different requirements. The Clean Architecture and related approaches turn decoupling, flexibility, and dependency inversion up to eleven, but sacrifice simplicity. This is not always a good deal. The precursor to these architectures is the classic MVC pattern from Smalltalk. This disentangles the model from the user interface (controller and view), so that the model does not depend on the UI. There are many variations of MVC like MVP, MVVM, etc. More complex systems do not have just one user interface, but possibly multiple user interfaces. Many apps choose to offer a REST API that can be consumed by any UI, such as a web app or a mobile app. This isolates the business logic on the server from these UIs, so the server doesn't care which kind of app accesses it. Typically, the server still depends on backend services such as databases or third party providers. This is perfectly fine, and leads to a simple layered architecture. The Hexagonal Architecture goes further and stops making a distinction between frontend and backend. Any external system might be an input (data source) or an output. Our core system defines the necessary interfaces (ports), and we create adapters for any external systems. One classic approach for strong decoupling is a service oriented architecture (SOA), where all services publish events to and consume events from a shared message bus. A similar approach was later popularized by microservices. All of these approaches have advantages, such as making it easier to test the system in isolation (by replacing all external systems it interfaces with by mock implementations). They make it easier to provide multiple implementations for one kind of service (e.g. adding a second payment processor), or to swap out the implementation of a service (e.g. moving from an Oracle database to PostgreSQL, or by rewriting a Python service in Go). But these architectures are the Ferrari of architectures: expensive, and most people don't need them. The added flexibility of the Clean Architecture etc. comes at the cost of more complexity.
Many applications and especially CRUD webapps do not benefit from that. It makes sense to isolate things that might change, e.g. by using templates to generate HTML. It makes less sense to isolate things that are unlikely to change, e.g. the backing database. What is likely to change depends on the context and business needs. Frameworks make assumptions about what is going to change. E.g. React tends to assume that design and behaviour of a component change together, so it doesn't make sense to separate them. Few frameworks assume that you might want to change the framework. As such, frameworks do present an amount of lock-in. E.g. Rail's reliance on the (very opinionated!) Active Record pattern make it difficult to impossible to change your data access strategy to the (often superior) Repository pattern. If your expectations of change do not match the framework, using a different framework might be better. Many other web frameworks do not make any assumptions about data access. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/371966",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57805/"
]
} |
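For the ports-and-adapters idea mentioned in the answer above, here is a small hedged C# sketch (all names are invented for the example): the core owns the port as an interface expressed in its own vocabulary, and infrastructure code supplies adapters for it.

using System;
using System.Collections.Generic;

// Port: owned by the core/business code, not by any framework or database.
public interface IOrderRepository
{
    void Save(Order order);
    Order FindById(Guid id);
}

public class Order
{
    public Guid Id { get; } = Guid.NewGuid();
    public decimal Total { get; set; }
}

// Core use case: depends only on the port, never on a concrete database.
public class PlaceOrderUseCase
{
    private readonly IOrderRepository _orders;
    public PlaceOrderUseCase(IOrderRepository orders) => _orders = orders;

    public Guid Place(decimal total)
    {
        var order = new Order { Total = total };
        _orders.Save(order);
        return order.Id;
    }
}

// Adapter: lives at the edge; an in-memory version doubles as a test fake.
public class InMemoryOrderRepository : IOrderRepository
{
    private readonly Dictionary<Guid, Order> _store = new Dictionary<Guid, Order>();
    public void Save(Order order) => _store[order.Id] = order;
    public Order FindById(Guid id) => _store[id];
}

Swapping the in-memory adapter for, say, a PostgreSQL-backed one touches only the adapter, which is exactly the flexibility (and the extra ceremony) the answer describes.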
372,034 | I've recently started a C# programming job, but I've got quite a bit of background in Haskell. But I understand C# is an object-orientated language, I don't want to force a round peg into a square hole. I read the article Exception Throwing from Microsoft which states: DO NOT return error codes. But being used to Haskell, I've been using the C# data type OneOf , returning the result as the "right" value or the error (most often an Enumeration) as the "left" value. This is much like the convention of Either in Haskell. To me, this seems safer than exceptions. In C#, ignoring exceptions does not produce a compile error, and if they are not caught they just bubble up and crash your program. This is perhaps better than ignoring an error code and producing undefined behaviour, but crashing your client's software is still not a good thing, particularly when it's performing many other important business tasks in the background. With OneOf , one has to be quite explicit about unpacking it and handling the return value and the error codes. And if one doesn't know how to handle it at that stage in the call stack, it needs to be put into the return value of the current function, so callers know an error could result. But this doesn't seem to be the approach Microsoft suggests. Is using OneOf instead of exceptions for handling "ordinary" exceptions (like File Not Found etc) a reasonable approach or is it terrible practice? It's worth noting that I've heard that exceptions as control flow are considered a serious antipattern , so if the "exception" is something you would normally handle without ending the program, isn't that "control flow" in a way? I understand there's a bit of a grey area here. Note I'm not using OneOf for things like "Out Of Memory", conditions I don't expect to recover from will still throw exceptions. But I feel like quite reasonable issues, like user input that doesn't parse are essentially "control flow" and probably shouldn't throw exceptions. Subsequent thoughts: From this discussion what I'm taking away currently is as follows: If you expect the immediate caller to catch and handle the exception most of the time and continue its work, perhaps through another path, it probably should be part of the return type. Optional or OneOf can be useful here. If you expect the immediate caller to not catch the exception most of the time, throw an exception, to save the silliness of manually passing it up the stack. If you're not sure what the immediate caller is going to do, maybe provide both, like Parse and TryParse . | but crashing your client's software is still not a good thing It most certainly is a good thing. You want anything that leaves the system in an undefined state to stop the system because an undefined system can do nasty things like corrupt data, format the hard drive, and send the president threatening emails. If you cannot recover and put the system back into a defined state then crashing is the responsible thing to do. It's exactly why we build systems that crash rather than quietly tear themselves apart. Now sure, we all want a stable system that never crashes but we only really want that when the system stays in a defined predictable safe state. I've heard that exceptions as control flow are considered a serious antipattern That's absolutely true but it's often misunderstood. When they invented the exception system they were afraid they were breaking structured programming. 
Structured programming is why we have for , while , until , break , and continue when all we need, to do all of that, is goto . Dijkstra taught us that using goto informally (that is, jumping around wherever you like) makes reading code a nightmare. When they gave us the exception system they were afraid they were reinventing goto. So they told us not to "use it for flow control" hoping we'd understand. Unfortunately, many of us didn't. Strangely, we don't often abuse exceptions to create spaghetti code as we used to with goto. The advice itself seems to have caused more trouble. Fundamentally exceptions are about rejecting an assumption. When you ask that a file be saved you assume that the file can and will be saved. The exception you get when it can't might be about the name being illegal, the HD being full, or because a rat has gnawed through your data cable. You can handle all those errors differently, you can handle them all the same way, or you can let them halt the system. There is a happy path in your code where your assumptions must hold true. One way or another exceptions take you off that happy path. Strictly speaking, yeah that's a kind of "flow control" but that's not what they were warning you about. They were talking about nonsense like this : "Exceptions should be exceptional". This little tautology was born because the exception system designers need time to build stack traces. Compared to jumping around, this is slow. It eats CPU time. But if you're about to log and halt the system or at least halt the current time intensive processing before starting the next one then you have some time to kill. If people start using exceptions "for flow control" those assumptions about time all go out the window. So "Exceptions should be exceptional" was really given to us as a performance consideration. Far more important than that is not confusing us. How long did it take you to spot the infinite loop in the code above? DO NOT return error codes. ...is fine advice when you're in a code base that doesn't typically use error codes. Why? Because no one's going to remember to save the return value and check your error codes. It's still a fine convention when you're in C. OneOf You're using yet another convention. That's fine so long as you're setting the convention and not simply fighting another one. It's confusing to have two error conventions in the same code base. If somehow you've gotten rid of all code that uses the other convention then go ahead. I like the convention myself. One of the best explanations of it I found here * : But much as I like it I'm still not going to mix it with the other conventions. Pick one and stick with it. 1 1 : By which I mean don't make me think about more than one convention at the same time. Subsequent thoughts: From this discussion what I'm taking away currently is as follows: If you expect the immediate caller to catch and handle the exception most of the time and continue its work, perhaps through another path, it probably should be part of the return type. Optional or OneOf can be useful here. If you expect the immediate caller to not catch the exception most of the time, throw an exception, to save the silliness of manually passing it up the stack. If you're not sure what the immediate caller is going to do, maybe provide both, like Parse and TryParse. It's really not this simple. One of the fundamental things you need to understand is what a zero is. How many days are left in May? 0 (because it's not May. It's June already). 
Exceptions are a way to reject an assumption but they are not the only way. If you use exceptions to reject the assumption you leave the happy path. But if you chose values to send down the happy path that signal that things are not as simple as was assumed then you can stay on that path so long as it can deal with those values. Sometimes 0 is already used to mean something so you have to find another value to map your assumption rejecting idea on to. You may recognize this idea from its use in good old algebra . Monads can help with that but it doesn't always have to be a monad. For example 2 : IList<int> ParseAllTheInts(String s) { ... } Can you think of any good reason this must be designed so that it deliberately throws anything ever? Guess what you get when no int can be parsed? I don't even need to tell you. That's a sign of a good name. Sorry but TryParse is not my idea of a good name. We often avoid throwing an exception on getting nothing when the answer could be more than one thing at the same time but for some reason if the answer is either one thing or nothing we get obsessed with insisting that it give us one thing or throw: IList<Point> Intersection(Line a, Line b) { ... } Do parallel lines really need to cause an exception here? Is it really that bad if this list will never contain more than one point? Maybe semantically you just can't take that. If so, it's a pity. But Maybe Monads, that don't have an arbitrary size like List does, will make you feel better about it. Maybe<Point> Intersection(Line a, Line b) { ... } The Monads are little special purpose collections that are meant to be used in specific ways that avoid needing to test them. We're supposed to find ways of dealing with them regardless of what they contain. That way the happy path stays simple. If you crack open and test every Monad you touch you're using them wrong. I know, it's weird. But it's a new tool (well, to us). So give it some time. Hammers make more sense when you stop using them on screws. If you'll indulge me, I'd like to address this comment: How come none of the answers clarifies that the Either monad is not an error code, and nor is OneOf? They are fundamentally different, and the question consequently seems to be based on a misunderstanding. (Though in a modified form itβs still a valid question.) β Konrad Rudolph Jun 4 `18 at 14:08 This is absolutely true. Monads are much closer to collections than exceptions, flags, or error codes. They do make fine containers for such things when used wisely. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/372034",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57047/"
]
} |
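One way to make the answer's "reject the assumption inside the return type" idea concrete in C#, without committing to any particular library: a tiny hand-rolled result type (the OneOf NuGet package mentioned in the question offers a richer version of the same shape, but this sketch does not depend on it, and all the names below are invented for illustration).

using System;

public enum ParseError { Empty, NotANumber }

// A minimal Either/OneOf-style container: exactly one of value or error is present.
public readonly struct Result<T>
{
    private readonly T _value;
    private readonly ParseError _error;
    public bool IsSuccess { get; }

    private Result(T value, ParseError error, bool ok) { _value = value; _error = error; IsSuccess = ok; }
    public static Result<T> Ok(T value) => new Result<T>(value, default, true);
    public static Result<T> Fail(ParseError error) => new Result<T>(default, error, false);

    // The caller has to handle both cases; there is no way to quietly "forget" the error path.
    public TOut Match<TOut>(Func<T, TOut> onOk, Func<ParseError, TOut> onError)
        => IsSuccess ? onOk(_value) : onError(_error);
}

public static class Demo
{
    public static Result<int> ParseQuantity(string input)
    {
        if (string.IsNullOrWhiteSpace(input)) return Result<int>.Fail(ParseError.Empty);
        return int.TryParse(input, out var n) ? Result<int>.Ok(n) : Result<int>.Fail(ParseError.NotANumber);
    }

    public static void Main()
    {
        var message = ParseQuantity("42x").Match(
            onOk: n => $"Parsed {n}",
            onError: e => $"Could not parse: {e}");
        Console.WriteLine(message);
    }
}

This is the "stay on the happy path with a value that signals trouble" option; exceptions remain the right tool for the states the caller cannot reasonably be expected to handle.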
372,105 | "Premature optimization is the root of all evil" I think this we can all agree upon. And I try very hard to avoid doing that. But recently I have been wondering about the practice of passing parameters by const Reference instead of by Value . I have been taught / learned that non-trivial function arguments (i.e. most non-primitive types) should preferably be passed by const reference - quite a few books I've read recommend this as a "best practice". Still I cannot help but wonder: Modern compilers and new language features can work wonders, so the knowledge I have learned may very well be outdated, and I never actually bothered to profile if there are any performance differences between void fooByValue(SomeDataStruct data); and void fooByReference(const SomeDataStruct& data); Is the practice that I have learned - passing const references (by default for non-trivial types) - premature optimization? | "Premature optimisation" is not about using optimisations early . It is about optimising before the problem is understood, before the runtime is understood, and often making code less readable and less maintainable for dubious results. Using "const&" instead of passing an object by value is a well-understood optimisation, with well-understood effects on runtime, with practically no effort, and without any bad effects on readability and maintainability. It actually improves both, because it tells me that a call will not modify the object passed in. So adding "const&" right when you write the code is NOT PREMATURE. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/372105",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/291591/"
]
} |
372,211 | According to Should we avoid language features that C++ has but Java doesn't? , I know it is horrible to write C++ as if it were Java, mostly because it drops the beneficial features of the C++ language. But responding to point 2 in this question: C++ code with C++ specific features (e.g.: friend functions, multiple inheritance) can only be maintained or reviewed by C++ programmers, but if we just write C++ like Java (without C++ language specific features), the code can be maintained or reviewed by both C++ and Java programmers. It seems to me that the following questions support the statement above: Is it effective to review code in a language I don't know? Should Junior Programmers be involved as code reviewers in the projects of Senior Programmers? I think they seem to support this point: 1. It allows more people (Java programmers) to review my C++ code 2. It allows Java programmers to learn C++ more easily while reviewing my C++ code I mean, I'm not going to the extreme of dropping all C++ features, but I would prefer not to use a C++ feature if the substitute is not complex; for example, I would not write all C++ classes in the .h like a single .java file: class A{
private:
int b;
void methodC(){
//some other code
}
}; because separating .h and .cpp is easy for C++ newbies. But I would avoid C++ features if the C++ feature can easily be replaced by pseudo-code that works for both C++ and Java, for example: multiple inheritance, which can be replaced by composition + multiple interfaces. So my question is, is "letting more people (e.g.: Java programmers) review my code" a reason to "write C++ like Java" to some degree? | The whole statement "write C++ as Java" is total nonsense. You cannot write C++ as Java. If you tried, your program would leak, or crash, or both. C++ does not have garbage collection. C++ does not have proper exceptions. C++ has a very different approach to parametric polymorphism. And after all, it implies that all developers know Java . Also, you seem to be focusing too much on review. Will the other developer be able to find and fix the bug in the code? Basically, the reviews are just there to ensure that the reviewer, and supposedly, others in the team, will further be able to operate the code without its author. If you rephrase your question as "may some C++ features be avoided so that the developers at hand can understand and edit the code more easily", then the answer is definitely "yes, of course". Actually, there are many experienced C++ developers who think that some feature does not work correctly, and should be avoided. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/372211",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/248528/"
]
} |
372,444 | When I think about the pros and cons of a static library folder and a package manager I feel like the library folder is a better approach. Pros I see with a library folder: No need for an external tool to manage packages. No internet connection required to build. Faster build (no package checking). Simpler environment (less knowledge required). Pros I see with a package manager: Helps with complex dependency trees (and that can be managed downloading a dependency together with all its dependencies). Helps checking if there is a new version available. It seems the industry has decided to follow the package manager path for almost everything built today. So, what am I missing? | An important point missing from the other answers: Using a package manager means having a configuration that indicates which library versions you are using and makes sure that config information is actually correct. Knowing which libraries you use, and which version, is very important if you: need to update a library due to a critical bug / security hole; or just need to check whether an announced security hole affects you. In addition, when you actually do update, the package manager (usually) makes sure any transitive dependencies are updated as required. Whereas with a lib folder, you just have a bunch of (possibly binary, and possibly modified) files, and you'll have to guess where they came from and what version they are (or trust some README, which may or may not be correct). To address your other points: No need of external tool to manage packages. True, but a) as a software developer you need to install loads of tools anyway, so one more does not usually matter, and b) usually there are only one or a few package managers in any given field (Maven/Gradle for Java, npm for JS/TypeScript, etc), so it's not like you need to install dozens of them. No internet connection required to build. All packages managers I know work off-line, once they have downloaded the required dependencies (which can happen right after downloading the project itself). Faster build (no package checking). Probably true, but it seems unlikely the offline package checking will take a significant amount of time (it's just comparing some version numbers). An online check may take a while, but that can be turned off if desired (if it is even on by default - Maven for example never checks for updates for release versions). Simpler environments (less knowledge required). True, but as explained above, a lib folder also requires knowledge. Also, as explained above, you'll probably only work with a handful differen package managers, which you'll know already. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/372444",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35086/"
]
} |
372,692 | Our line-of-business software allows the user to save certain data as CSV . Since there are a lot of different formats (all called "CSV") in use in the wild, we are trying to decide what the "default format" should look like. Regarding line/field separators and escaping, there is a standard we can use: RFC 4180 . Regarding text encoding, UTF-8 seems to have emerged in the last decade as the "default text file format", so we will use that. The one question left open is: Should we add a BOM at the start or not? I have read multiple opinions and pros/cons on the use of BOMs in general, but is there an "official" recommendation or at least some kind of community consensus on the use of BOMs in CSV files? | Not for UTF-8 , but see the various caveats in the comments. It's unnecessary (UTF-8 has no byte order), unlike UTF-16/32, and not recommended in the Unicode standard . It's also quite rare to see UTF-8 with a BOM "in the wild", so unless you have a valid reason (e.g. as commented, you'll be working with software that expects the BOM) I'd recommend the BOM-less approach. Wikipedia mentions some mainly Microsoft software that forces and expects a BOM, but unless you're working with them, don't use it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/372692",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33843/"
]
} |
372,716 | Just got off a retro call where developers expressed concern around the integration of their stories into the master branch each sprint. The developers all code within their own branch and towards the end of the sprint they all merge into one master branch. Then, one developer (usually the same one) is left with the task of making sure everything has integrated well with other dev's code (Most of the changes are on the same page. For example, a data display story, data filtering story, and a SLA indicator). How can we reduce this burden and make it easier for our code to merge together? From my perspective, having the PO or SM prioritize the stories in a more efficient way so we don't have these sort of dependencies in the same sprint may solve some of the issues. How does everyone else tackle this? Or is this just part of the process? | If you are using Git, each developer would be pulling from the develop branch into their own feature branch so that they ensure they don't go too far from the current baseline. They can do that daily, so that tasks that take more than a couple days stay in sync and merge issues are resolved while they are still small. When the developer is done with their work, they create a pull request . When approved, that gets merged into the develop branch. The develop branch should always have working code, and be ready for release at any time. When you actually make a release, you merge develop into master and tag it. If you have a good Continuous Integration Server, then it will build each branch when changes are checked in--particularly for pull requests. Some build servers integrate with your Git server to auto-approve or disapprove a pull request if the build fails or the automated tests fail. This is another way to find potential integration bugs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/372716",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/308387/"
]
} |
372,795 | I understand that a each service in a microservice architecture should have its own database. However, by having its own database, does it actually mean simply having another database within the same database instance or literally having another database instance? By this, I don't mean sharing of databases, which is a no-no, but rather the database instance. For example, if I were using AWS and have 3 services, do I create 3 databases for each service on a single RDS instance or do I create 3 RDS instances each containing a database which is used independently by each of the 3 services? If using multiple databases on a single RDS instance is a better idea, will it defeat the purpose of having independent services because for: The RDS instance's resource will be shared amongst services. Will Service A which may have heavy database usage at a particular time impact Service B which uses a different database but on the same RDS instance? All services will be dependent on the database version on that RDS instance. | Assumed you have some services which can use the same kind of DB system and version, if you use different database or db instances is a decision you should not need to make at design time . Instead, you should be able to make the decision at deployment time , something you can simply configure. Design your services to be agnostic of the place where other services' databases are hosted . During operation, you can start with one instance, and if the system works fine, leave it that way. However, if you notice this does not scale well for your system, because different databases on one instance share too many resources, you have always the option to use different instances, if that helps. So a service does not violate the microservice architecture just because you let two of them share some resource - it violates it when sharing the resource becomes mandatory. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/372795",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26189/"
]
} |
372,860 | I just hired a junior programmer and he will be 2nd real programmer next to me. Our software is structured in git with the gitflow workflow. I normally develop and push straight on the development branch. When a commit is pushed it is directly picked up by TeamCity and Octopus and deployed on the test server under the dev.domain.com domain. This way the QA guys can test my changes. Now the junior guy won't have this luxury, he will need to make PR's to get on the development branch. He still needs to be able to push work to a test environment for QA to look at. So far I have come up with 2 soltions: 1: Create a single branch where he commits to, will automatically deploy to a dev2.domain.com site. Problem is that PR's are hard to do as he will continue to develop new code in this branch. 2: For every new feature have a script create automatically a TeamCity + Octopus configuration, deploy to a feature.domain.com site and tear it down again when the feature is merged. Advantage is that it's very easy to PR's because features are contained, downside is that you need to setup the test environment and it takes time + server resources. How do other teams manage this workflow? Edit: it looks like the point is not coming across. My code goes through the exact same steps as anyone elses code. But the question was how do I structure the test environment? If I make feature branches how do my manager and QA access them? Do I have to manually set them up in TeamCity and Octopus? That's why I push on develop, it gets deployed to our test environment that way. | I normally develop and push straight on the development branch. With all due respect, this is a problem. Nobody should be pushing directly onto develop, certainly not if you're using gitflow - everything should go onto a feature branch and then be merged onto develop once the feature is complete. What do you do if you're half way through developing a feature and priorities change so a different feature now needs to be completed first? Also, now you have another developer working on the project, you should get your code reviewed as well and you can't do that if you're going directly to develop. How do other teams manage this workflow? At some point along the line, you need to be testing develop (or another branch which contains your candidate releases) - otherwise you won't spot any interactions between your features. If you also have resources to perform some checks on individual branches before they get merged to develop, that's great, do that, but these should normally be quick checks on the changes for that feature rather than full regression type tests. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/372860",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24248/"
]
} |
372,953 | I'm proposing changes to a very poorly architected software project that suffers from a multitude of problems. At a high level the project utilizes Angular on the front-end and consumes various REST APIs; which is all great (I don't see the need to change our technology or tools). The problem is that the code base is disproportionately larger in the UI than the server-side APIs. Much of the business logic lives in the UI, with the REST APIs being simple CRUD database interfaces to the UI layer. For example, a POST to customer will create a customer record, while a PUT will modify that customer. Not much more, and not much less. However our business logic is more demanding than that. The general process of creating a customer does quite more than insert 1 database record. It will provision data in other necessary tables, perform certain validations and calculations, etc. I would prefer to make a single POST/PUT call that encapsulates all of this behavior, lightening the load of the consuming client. So my viewpoint is that this overarching orchestration should live on the server (where we have full control, logs, etc), not the UI, but one counterargument is that this approach would no longer be RESTful. So I am uncertain how to best describe this approach when my recommendation is to continue with the existing technology stack, but implement fundamental shifts in the locations where code belongs. | I am uncertain how to best describe this approach when my recommendation is to continue with the existing technology stack, but implement fundamental shifts in the locations where code belongs. Service oriented architecture . You are proposing to redesign your system so that your business rules and your data are in the same place. That's effectively the definition of a service ; see Udi Dahan's talk on Finding Service Boundaries . Sidebar: as noted by Eric, this has nothing to do with "REST". There is absolutely no reason that you can't put a REST API (which is to say, an API that satisfies the constraints of the REST architectural style ) in front of your service. But that may not be obvious to people who understand REST to mean a mapping of database operations to HTTP methods. It may, or may not, be worth investing in changing your audience's understanding of REST. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/372953",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/241657/"
]
} |
373,376 | In Clean Code the author gives an example of assertExpectedEqualsActual(expected, actual) vs assertEquals(expected, actual) with the former claimed to be more clear because it removes the need to remember where the arguments go and the potential misuse that comes from that. Yet, I've never seen an example of the former naming scheme in any code and see the latter all the time. Why don't coders adopt the former if it is, as the author asserts, clearer than the latter? | Because it is more to type and more to read The simplest reason is that people like to type less, and encoding that information means more typing. When reading it, every time I have to read the whole thing even if I am familiar with what the order of the arguments should be. Even if not familiar with the order of arguments... Many developers use IDEs IDEs often provide a mechanism for seeing the documentation for a given method by hovering or via a keyboard shortcut. Because of this, the names of the parameters are always at hand. Encoding the arguments introduces duplication and coupling The names of the parameters should already document what they are. By writing the names out in the method name, we are duplicating that information in the method signature as well. We also create a coupling between the method name and the parameters. Say expected and actual are confusing to our users. Going from assertEquals(expected, actual) to assertEquals(planned, real) doesn't require changing the client code using the function. Going from assertExpectedEqualsActual(expected, actual) to assertPlannedEqualsReal(planned, real) means a breaking change to the API. Or we don't change the method name, which quickly becomes confusing. Use types instead of ambiguous arguments The real issue is that we have ambiguous arguments that are easily switched because they are the same type. We can instead use our type system and our compiler to enforce the correct order: class Expected<T> {
private T value;
Expected(T value) { this.value = value; }
    static <T> Expected<T> is(T value) { return new Expected<T>(value); }
}
class Actual<T> {
private T value;
Actual(T value) { this.value = value; }
    static <T> Actual<T> is(T value) { return new Actual<T>(value); }
}
static <T> void assertEquals(Expected<T> expected, Actual<T> actual) { /* ... */ }
// How it is used
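// (With the typed wrappers, swapping the arguments - assertEquals(Actual.is(x), Expected.is(10)) - no longer compiles.)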
assertEquals(Expected.is(10), Actual.is(x)); This can then be enforced at the compiler level and guarantees that you cannot get them backwards. Approaching from a different angle, this is essentially what the Hamcrest library does for tests. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/373376",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/207092/"
]
} |
373,385 | Currently C is considered a low level language , but back in the 70's was it considered low level? Was the term even in use then? Many popular higher level languages didn't exist until the mid 80's and beyond so I'm curious if and how the nature of low level has changed over the years. | To answer the historical aspects of the question: The design philosophy is explained in The C Programming Language written by Brian Kernighan and C designer Dennis Ritchie, the "K&R" you may have heard of. The preface to the first edition says C is not a "very high level" language, nor a "big" one... and the introduction says C is a relatively "low level" language... C provides no operations to deal directly with composite objects such as character strings, sets, lists, or arrays. There are no operations that manipulate an entire array or string... The list goes on for a while before the text continues: Although the absence of some of these features may seem like a grave deficiency,... keeping the language down to modest size has real benefits. (I only have the second edition from 1988, but the comment below indicates that the quoted text is the same in the 1978 first edition.) So, yes, the terms "high level" and "low level" were in use back then, but C was designed to fall somewhere on the spectrum in between. It was possible to write code in C that was portable across hardware platforms, and that was the main criteria for whether a language was considered high level at the time. However, C lacked some features that were characteristic of high level languages, and this was a design decision in favor of simplicity. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/373385",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/273207/"
]
} |
373,387 | I am new to the graph data structure. Everywhere on Google it is said to be a list (or array) of linked lists.
My question is: can it not be represented as a list of lists (in Java, an ArrayList of ArrayLists) or a map of lists (in Java, a HashMap with a node as the key and an ArrayList of connected nodes as the value)? In all three mentioned approaches I see the time complexity: To find if two nodes are connected - O(v)
To find all connected nodes - O(v) Also space complexity will also be more or less same . So why Adjacency List is said to be list(or array ) of linked list not as list of list or map of list ? | To answer the historical aspects of the question: The design philosophy is explained in The C Programming Language written by Brian Kernighan and C designer Dennis Ritchie, the "K&R" you may have heard of. The preface to the first edition says C is not a "very high level" language, nor a "big" one... and the introduction says C is a relatively "low level" language... C provides no operations to deal directly with composite objects such as character strings, sets, lists, or arrays. There are no operations that manipulate an entire array or string... The list goes on for a while before the text continues: Although the absence of some of these features may seem like a grave deficiency,... keeping the language down to modest size has real benefits. (I only have the second edition from 1988, but the comment below indicates that the quoted text is the same in the 1978 first edition.) So, yes, the terms "high level" and "low level" were in use back then, but C was designed to fall somewhere on the spectrum in between. It was possible to write code in C that was portable across hardware platforms, and that was the main criteria for whether a language was considered high level at the time. However, C lacked some features that were characteristic of high level languages, and this was a design decision in favor of simplicity. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/373387",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/260829/"
]
} |
373,486 | So I don't know if this is good or bad code design so I thought I better ask. I frequently create methods that do data processing involving classes and I often do a lot of checks in the methods to make sure I don't get null references or other errors before hand. For a very basic example: // fields and properties
private Entity _someEntity;
public Entity SomeEntity => _someEntity;
public void AssignEntity(Entity entity){
_someEntity = entity;
}
public void SetName(string name)
{
if (_someEntity == null) return; //check to avoid null ref
_someEntity.Name = name;
label.SetText(_someEntity.Name);
} So as you can see I'm checking for null every time. But should the method not have this check? For example, should external code cleanse the data beforehand so the methods don't need to validate, like below: if(entity != null) // this makes the null checks redundant in the methods
{
Manager.AssignEntity(entity);
Manager.SetName("Test");
} In summary, should methods be "data validating" and then do their processing on the data, or should it be guaranteed before you call the method, and if you fail to validate prior to calling the method it should throw an error (or catch the error)? | The problem with your basic example isn't the null check, it's the silent fail . Null pointer/reference errors, more often than not, are programmer errors. Programmer errors are often best dealt with by failing immediately and loudly. You have three general ways to deal with the problem in this situation: Don't bother checking, and just let the runtime throw the nullpointer exception. Check, and throw an exception with a message better than the basic NPE message. A third solution is more work but is much more robust: Structure your class such that it's virtually impossible for _someEntity to ever be in an invalid state. One way to do that is to get rid of AssignEntity and require it as a parameter for instantiation. Other Dependency Injection techniques can be useful as well. In some situations, it makes sense to check all function arguments for validity before whatever work you're doing, and in others it make sense to tell the caller they are responsible for ensuring their inputs are valid, and not checking. Which end of the spectrum you're on is going to depend on your problem domain. The third option has a significant benefit in that you can, to an extent, have the compiler enforce that the caller does everything properly. If the third option is not an option, then in my opinion it really doesn't matter as long as it doesn't silently fail. If not checking for null causes the program to instantly explode, fine, but if it instead quietly corrupts the data to cause trouble down the road, then it's best to check and deal with it right then and there. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/373486",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/303640/"
]
} |
373,633 | I don't really write large projects. I'm not maintaining a huge database or dealing with millions of lines of code. My code is primarily "scripting" type stuff - things to test mathematical functions, or to simulate something - "scientific programming". The longest programs I've worked on up to this point are a couple hundred lines of code, and most of the programs I work on are around 150. My code is also crap. I realized this the other day as I was trying to find a file I wrote a while ago but that I probably overwrote and that I don't use version control, which is probably making a large number of you cringe in agony at my stupidity. The style of my code is convoluted and is filled with obsolete comments noting alternate ways to do something or with lines of code copied over. While the variable names always start out very nice and descriptive, as I add or change things as per e.g., something new that someone wants tested, code gets overlayed on top and overwritten and because I feel like this thing should be tested quickly now that I have a framework I start using crappy variable names and the file goes to pot. In the project I'm working on now, I'm in the phase where all of this is coming back to bite me in a big way. But the problem is (aside from using version control, and making a new file for each new iteration and recording all of it in a text file somewhere, which will probably help the situation dramatically) I don't really know how to proceed with improving my actual coding style. Is unit testing necessary for writing smaller pieces of code? How about OOP? What sorts of approaches are good for writing good, clean code quickly when doing "scientific programming" as opposed to working on larger projects? I ask these questions because often, the programming itself isn't super complex. It's more about the math or science that I'm testing or researching with the programming. E.g., is a class necessary when two variables and a function could probably take care of it? (Consider these are also generally situations where the program's speed is preferred to be on the faster end - when you're running 25,000,000+ time steps of a simulation, you kinda want it to be.) Perhaps this is too broad, and if so, I apologize, but looking at programming books, they often seem to be addressed at larger projects. My code doesn't need OOP, and it's already pretty darn short so it's not like "oh, but the file will be reduced by a thousand lines if we do that!" I want to know how to "start over" and program cleanly on these smaller, faster projects. I would be glad to provide more specific details, but the more general the advice, the more useful, I think. I am programming in Python 3. Someone suggested a duplicate. Let me make clear I'm not talking about ignoring standard programming standards outright. Clearly, there's a reason those standards exist. But on the other hand, does it really make sense to write code that is say OOP when some standard stuff could have been done, would have been much faster to write, and would have been a similar level of readability because of the shortness of the program? There's exceptions. Further, there's probably standards for scientific programming beyond just plain standards. I'm asking about those as well. This isn't about if normal coding standards should be ignored when writing scientific code, it's about writing clean scientific code! Update Just thought I'd add a "not-quite-one-week-later" sort of update. All of your advice was extremely helpful. 
I now am using version control - git, with git kraken for a graphical interface. It's very easy to use, and has cleaned up my files drastically - no more need for old files sticking around, or old versions of code commented out "just in case". I also installed pylint and ran it on all of my code. One file got a negative score initially; I'm not even sure how that was possible. My main file started at a score of ~1.83/10 and now is at ~9.1/10. All of the code now conforms fairly well to standards. I also ran over it with my own eyes updating variable names that had gone...uhm...awry, and looking for sections to refactor. In particular, I asked a recent question on this site on refactoring one of my main functions, and it now is a lot cleaner and a lot shorter: instead of a long, bloated, if/else filled function, it is now less than half the size and much easier to figure out what is going on. My next step is implementing "unit testing" of sorts. By which I mean a file that I can run on my main file which looks at all the functions in it with assert statements and try/excepts, which is probably not the best way of doing it, and results in a lot of duplicate code, but I'll keep reading and try to figure out how to do it better. I've also updated significantly the documentation I'd already written, and added supplementary files like an excel spreadsheet, the documentation, and an associated paper to the github repository. It kinda looks like a real programming project now. So...I guess this is all to say: thank you . | This is a pretty common problem for scientists. I've seen it a lot, and it always stems by the fact that programming is something you pick on the side as a tool to do your job. So your scripts are a mess. I'm going to go against common sense and say that, assuming you're programming alone, this is not so bad! You're never going to touch most of what you write ever again, so spending too much time to write pretty code instead of producing "value" (so the result of your script) isn't going to do much to you. However, there is going to be a time where you need to go back to something you did and see exactly how something was working. Additionally, if other scientists will need to review your code, it's really important for it to be as clear and concise as possible, so that everyone can understand it. Your main problem is going to be readability, so here's a few tips for improving: Variable names: Scientists love to use concise notations. All mathematical equations usually use single letters as variables, and I wouldn't be surprised to see lots and lots of very short variables in your code. This hurts readability a lot. When you'll go back to your code you're not going to remember what those y, i and x2 represent, and you'll spend a lot of time trying to figure it out. Try instead naming your variables explicitly, using names that represent exactly what they are. Split your code into functions: Now that you renamed all your variables, your equations look terrible, and are multiple lines long. Instead of leaving it in your main program, move that equation to a different function, and name it accordingly. Now instead of having a huge and messed up line of code, you'll have a short instructions telling you exactly what's going on and what equation you used. 
This improves both your main program, as you don't even have to look at the actual equation to know what you did, and the equation code itself, as in a separate function you can name your variables however you want, and go back to the more familiar single letters. On this line of thought, try to find out all the pieces of code that represent something, especially if that something is something you have to do multiple times in your code, and split them out into functions. You'll find out that your code will quickly become easier to read, and that you'll be able to use the same functions without writing more code. Icing on the cake, if those functions are needed in more of your programs you can just make a library for them, and you'll have them always available. Global variables: Back when I was a beginner, I thought this was a great way for passing around data I needed in many points of my program. Turns out there are many other ways to pass around stuff, and the only things global variables do is giving people headaches, since if you go to a random point of your program you'll never know when that value was last used or edited, and tracking it down will be a pain. Try to avoid them whenever possible. If your functions need to return or modify multiple values, either make a class with those values and pass them down as a parameter, or make the function return multiple values (with named tuples) and assign those values in the caller code. Version Control This doesn't directly improve readability, but helps you do all the above. Whenever you do some changes, commit to version control (a local Git repository will be fine enough), and if something doesn't work, look at what you changed or just roll back! This will make refactoring your code way easier, and will be a safety net if you accidentally break stuff. Keeping all this in mind will allow you to write clear and more effective code, and will also help you find possible mistakes faster, as you won't have to wade through gigantic functions and messy variables. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/373633",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/258006/"
]
} |
373,686 | I'm a developer in an agile team, and we try to use Scrum. So I'll put here a hypothetical problem to illustrate the situation. We have a very old app, using some messy and bad maintainability JQuery code. We also have parts of the app using React, and those parts are way easier to update/maintain. Besides that, the company goal is to make a client-single-page-app, on React, so using JQuery gets you further away from that. When we do the planning, we always go for the easy solution in terms of development time, so for instance if we are making a new dialog or something, we use the old JQuery because its quicker, and we say that we're go back later to tidy up and transform into React, but that rarely happens. We get the requirements of what we have to do from user stories (which are well done IMO, they are slim but they explain what we are doing, and why we are doing it). Sometimes, the requirements of new features are very slim, so for an example if a requirement says "make a dialog that loads tons of contents" but does not says to implement a loading feature, we, in most cases, won't implement it, even though we all know it would be better for the customers, with the reason that this could compromise our sprint goal (even though I personally believe it would not). The result is that our codebase is a big mess with very bad maintainability, and new features sometimes, are very small and take a full sprint to make (something that could be achieved in a single day in a good codebase) mainly because of this development strategy, just go fast, do the minimal. In this case, what are we doing wrong? Should we tackle the solutions in a more complete way so we are not aways writing bad code and rewriting code we just wrote last week? Or should we keep doing that just making sure all that code is being rewritten? What would be a good agile approach to this problem? | This has nothing to do with Agile or Scrum. The problem with "duct tape it now and we'll fix it later" is that later never comes and in the mean time you are accumulating a lot of technical debt . The first step to recovery is to recognize the problem and stop making it worse. For every new user story, the team should consider "what's the right way to code this?", not "what's the quickest way to hack this?" and plan the sprints accordingly. To clean up the existing problem, see the excellent answers to I've inherited 200K lines of spaghetti code β what now? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/373686",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/309750/"
]
} |
373,806 | Although this isn't mandatory in the C++ standard, it seems the way GCC for example, implements parent classes, including pure abstract ones, is by including a pointer to the v-table for that abstract class in every instantiation of the class in question. Naturally this bloats the size of every instance of this class by a pointer for every parent class it has. But I've noticed that many C# classes and structs have lots of parent interfaces, which are basically pure abstract classes. I would be surprised if every instance of say Decimal , was bloated with 6 pointers to all it's various interfaces. So if C# does do interfaces differently, how does it do them, at least in a typical implementation (I understand the standard itself may not define such an implementation)? And do any C++ implementations have a way of avoiding object size bloat when adding pure virtual parents to classes? | In C# and Java implementations, the objects typically have a single pointer to its class. This is possible because they are single-inheritance languages. The class structure then contains the vtable for the single-inheritance hierarchy. But calling interface methods has all the problems of multiple inheritance as well. This is typically solved by putting additional vtables for all implemented interfaces into the class structure. This saves space compared to typical virtual inheritance implementations in C++, but makes interface method dispatch more complicated β which can be partially compensated by caching. E.g. in the OpenJDK JVM, each class contains an array of vtables for all implemented interfaces (an interface vtable is called an itable ). When an interface method is called, this array is searched linearly for the itable of that interface, then the method can be dispatched through that itable. Caching is used so that each call site remembers the result of the method dispatch, so this search only has to be repeated when the concrete object type changes. Pseudocode for method dispatch: // Dispatch SomeInterface.method
Method const* resolve_method(
Object const* instance, Klass const* interface, uint itable_slot) {
Klass const* klass = instance->klass;
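    // the class structure keeps an itable for every interface the class implements (directly or indirectly)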
for (Itable const* itable : klass->itables()) {
if (itable->klass() == interface)
return itable[itable_slot];
}
throw ...; // class does not implement required interface
} (Compare the real code in the OpenJDK HotSpot interpreter or x86 compiler .) C# (or more precisely, the CLR) uses a related approach. However, here the itables don't contain pointers to the methods, but are slot maps: they point to entries in the main vtable of the class. As with Java, having to search for the correct itable is only the worst case scenario, and it is expected that caching at the call site can avoid this search nearly always. The CLR uses a technique called Virtual Stub Dispatch in order to patch the JIT-compiled machine code with different caching strategies. Pseudocode: Method const* resolve_method(
Object const* instance, Klass const* interface, uint interface_slot) {
Klass const* klass = instance->klass;
// Walk all base classes to find slot map
for (Klass const* base = klass; base != nullptr; base = base->base()) {
// I think the CLR actually uses hash tables instead of a linear search
for (SlotMap const* slot_map : base->slot_maps()) {
if (slot_map->klass() == interface) {
uint vtable_slot = slot_map[interface_slot];
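                // the slot map yields an index into the concrete class's main vtable, so the most derived override is dispatched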
return klass->vtable[vtable_slot];
}
}
}
throw ...; // class does not implement required interface
} The main difference to the OpenJDK-pseudocode is that in OpenJDK each class has an array of all directly or indirectly implemented interfaces, whereas the CLR only keeps an array of slot maps for interfaces that were directly implemented in that class. We therefore need to walk the inheritance hierarchy upwards until a slot map is found. For deep inheritance hierarchies this results in space savings. These are particularly relevant in CLR due to the way how generics are implemented: for a generic specialization, the class structure is copied and methods in the main vtable may be replaced by specializations. The slot maps continue to point at the correct vtable entries and can therefore be shared between all generic specializations of a class. As an ending note, there are more possibilities to implement interface dispatch. Instead of placing the vtable/itable pointer in the object or in the class structure, we can use fat pointers to the object, that are basically a (Object*, VTable*) pair. The drawback is that this doubles the size of pointers and that upcasts (from a concrete type to an interface type) are not free. But it's more flexible, has less indirection, and also means that interfaces can be implemented externally from a class. Related approaches are used by Go interfaces, Rust traits, and Haskell typeclasses. References and further reading: Interface dispatch . An expanded version of this answer, complete with diagrams and in-depth discussion. Wikipedia: Inline caching . Discusses caching approaches that can be used to avoid expensive method lookup. Typically not needed for vtable-based dispatch, but very desirable for more expensive dispatch mechanisms like the above interface dispatch strategies. OpenJDK Wiki (2013): Interface Calls . Discusses itables. Pobar, Neward (2009): SSCLI 2.0 Internals. Chapter 5 of the book discusses slot maps in great detail. Was never published but made available by the authors on their blogs . The PDF link has since moved. This book probably no longer reflects the current state of the CLR. CoreCLR (2006): Virtual Stub Dispatch . In: Book Of The Runtime. Discusses slot maps and caching to avoid expensive lookups. Kennedy, Syme (2001): Design and Implementation of Generics for the .NET Common Language Runtime . ( PDF link ). Discusses various approaches to implement generics. Generics interact with method dispatch because methods might be specialized so vtables might have to be rewritten. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/373806",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57047/"
]
} |
373,959 | I'm still inexperienced to write high quality code, so I read books addressing the issue such as Clean Code by Robert C. Martin, and keep checking code of well-known libraries to improve my skills. Although many open source libraries have been maintained for years, which means that it's very unlikely that they aren't on the right path, I found the code in many of them to be far from the principles addressed to write clean code β e.g methods containing hundreds of lines of code. So my question is: Are the principles of clean code too restricted, and we can do without them in many libraries like these? If not, how are huge libraries being maintained without considering many of these principles? I'll appreciate any brief clarification. I apologize if the question seems to be silly from a newbie guy. EDIT Check this example in Butterknife library β one of the most well know libraries in Android community. | The principles stated in "Clean Code" are not always generally agreed upon. Most of it is common sense, but some of the author's opinions are rather controversial and not shared by everybody. In particular, the preference for short methods is not agreed on by everybody. If the code in a longer method is not repeated elsewhere, extracting some of it to a separate method (so you get multiple shorter methods) increases overall complexity, since these methods are now visible for other methods which should not care about them. So it is a trade-off, not an objective improvement. The advice in the book is also (like all advice) geared towards a particular type of software: Enterprise applications. Other kinds of software like games or operating systems have different constraints than enterprise software, so different patterns and design principles are in play. The language is also a factor: Clean Code assumes Java or a similar language - if you use C or Lisp a lot of the advice does not apply. In short, the book is a single persons opinions about a particular class of software. It will not apply everywhere. As for open source projects, code quality ranges from abysmal to brilliant. After all, anyone can publish their code as open source. But if you look at a mature and successful open source project with multiple contributors, you can be fairly sure they have consciously settled on a style that works for them. If this style is in contradiction to some opinion or guideline, then (to put it bluntly) it is the guideline that is wrong or irrelevant, since working code trumps opinions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/373959",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/286627/"
]
} |
374,031 | Our service is in 5 cities right now. If someone tries to call our service API from any other city, we want to throw this error Service not available in your area . The question is, what is the appropriate http code would be for this error? 503: Service Unavailable 403: Forbidden or something else? | Any HTTP error code would be inappropriate. There is no error or problem of any sort from an HTTP perspective so it should be something in the 200 range. You politely inform some of your users that they will not be serviced by sending back a document that tells them so. And this all goes well. The user will not be able to use your application . That is a conscious decision made by your business logic, not a mishap. On the HTTP level everything is honky dory. Edit It looks like what we are looking at here is a clash of old school versus new school. When HTTP was designed, there were no web services, there was no SOAP, no JSON, no REST principles. As a protocol above TCP this was already considered (close to) application level and many high level status codes were defined. When the web started to be used for richer, high level services and a common means to transport "envelopes" was required, designers hi-jacked HTTP rather than defining a newer and cleaner protocol, just because HTTP was ubiquitous. So in a modern web service context, HTTP is indeed little more than a dumb transport layer and most of its codes may be considered not applicable or obsolete. Just picking one because it comes close to your application state and happens to be in that list that once meant something may seem harmless, but I think it would send a wrong message. You do not want HTTP to play that regulating role in a web service context. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/374031",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/198505/"
]
} |
374,074 | I was recently writing a small piece of code which would indicate in a human-friendly way how old an event is. For instance, it could indicate that the event happened "Three weeks ago" or "A month ago" or "Yesterday." The requirements were relatively clear and this was a perfect case for test driven development. I wrote the tests one by one, implementing the code to pass each test, and everything seemed to work perfectly. Until a bug appeared in production. Here's the relevant piece of code: now = datetime.datetime.utcnow()
today = now.date()
if event_date.date() == today:
return "Today"
yesterday = today - datetime.timedelta(1)
if event_date.date() == yesterday:
return "Yesterday"
delta = (now - event_date).days
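# note: delta counts whole days from "now", so it can be 1 even when event_date.date() is neither today nor yesterday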
if delta < 7:
return _number_to_text(delta) + " days ago"
if delta < 30:
weeks = math.floor(delta / 7)
if weeks == 1:
return "A week ago"
return _number_to_text(weeks) + " weeks ago"
if delta < 365:
    ... # Handle months and years in similar manner. The tests were checking the case of an event happening today, yesterday, four days ago, two weeks ago, a week ago, etc., and the code was built accordingly. What I missed is that an event can happen a day before yesterday, while being one day ago: for instance an event happening twenty-six hours ago would be one day ago, while not exactly yesterday if now is 1 a.m. More exactly, it's one point something, but since the delta is an integer, it will be just one. In this case, the application displays "One days ago," which is obviously unexpected and unhandled in the code. It can be fixed by adding: if delta == 1:
return "A day ago" just after computing the delta . While the only negative consequence of the bug is that I wasted half an hour wondering how this case could happen (and believing that it has to do with time zones, despite the uniform use of UTC in the code), its presence is troubling me. It indicates that: It's very easy to commit a logical mistake even in a such simple source code. Test driven development didn't help. Also worrisome is that I can't see how could such bugs be avoided. Aside thinking more before writing code, the only way I can think of is to add lots of asserts for the cases that I believe would never happen (like I believed that a day ago is necessarily yesterday), and then to loop through every second for the past ten years, checking for any assertion violation, which seems too complex. How could I avoid creating this bug in the first place? | Test driven development didn't help. It seems like it did help, its just that you didn't have a test for the "a day ago" scenario. Presumably, you added a test after this case was found; this is still TDD, in that when bugs are found you write a unit-test to detect the bug, then fix it. If you forget to write a test for a behavior, TDD has nothing to help you; you forget to write the test and therefore don't write the implementation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/374074",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
374,079 | We have one single internal website. Underneath this website (in IIS) we have two web applications: 1-Main web application 2-Web services (created using the visual studio web api project template) The webservices are called by the pages in the main application. They are also called by our database in some SQL CLR functions. They are not called by any other applications. A - Can these two web applications be merged into one? Meaning merged into one web application project in Visual Studio. B - If they can be merged into one, should they be? Arguments for no they should not be merged: 1- for the day we create another application that consumes the services 2- security Thanks | Test driven development didn't help. It seems like it did help, its just that you didn't have a test for the "a day ago" scenario. Presumably, you added a test after this case was found; this is still TDD, in that when bugs are found you write a unit-test to detect the bug, then fix it. If you forget to write a test for a behavior, TDD has nothing to help you; you forget to write the test and therefore don't write the implementation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/374079",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/310362/"
]
} |
374,137 | I'm a newbie coder. I find it troublesome to declare a variable in 1 function and not be able to access it in other functions. I have to make many of my variables global just to get my code to work. But a lot of people say that the global state is evil. I don't understand the purpose of having limited scope, anyway. Scope seems to encourage developers to declare same-name variables in different places. def func1():
string = 'hello world'
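    # this 'string' exists only inside func1; the 'string' in func2 below is a completely separate variable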
...
def func2():
string = 'hello universe'
... Without the need to distinguish variables in different functions, developers name their variables vague and less-meaningful names. Wouldn't it be better to have more specific variable names? def func1():
world_string = 'hello world'
...
def func2():
universe_string = 'hello universe'
... Why is limiting access to variables outside of the block that it's defined in, i.e. scope, a good thing to have in programming languages? As a newbie, I find it convenient to make every variable a global variable. Why is that bad in bigger projects? | Engineering is, abstractly, managing complexity. Software Engineering is, abstractly, managing complexity in software! Scope is a tool to help manage complexity, like any tool it can be used or abused, and sometimes can be overkill. Without scope, you would have to track the entire state of the program at all times when writing or modifying it. Without tools to manage complexity, the complexity will rise to the point of becoming overwhelming. N.b. scope is only one tool in the aresanel. As a newbie, I find it convenient to make every variable a global variable. Why is that bad in bigger projects? As the size of the project increases it will become harder and harder to keep all the state, names, etc. straight; you will make more errors. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/374137",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/184113/"
]
} |
375,436 | When a data structure (for example, a queue) is implemented using an OOP language, some members of the data structure need to be private (for example, the number of items in the queue). A queue can also be implemented in a procedural language using a struct and a set of functions that operate on the struct . However, in a procedural language you can't make the members of a struct private. Were the members of a data structure implemented in a procedural language left public, or was there some trick to make them private? | OOP did not invent encapsulation and is not synonymous with encapsulation. Many OOP languages do not have C++/Java style access modifiers. Many non-OOP languages have various techniques available to offer encapsulation. One classic approach for encapsulation is closures , as used in functional programming . This is significantly older than OOP but is in a way equivalent. E.g. in JavaScript we might create an object like this: function Adder(x) {
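    // x is captured only by the closure below; it never becomes a property of the object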
this.add = function add(y) {
return x + y;
}
}
var plus2 = new Adder(2);
plus2.add(7); //=> 9 The above plus2 object has no member that would allow direct access to x - it's entirely encapsulated. The add() method is a closure over the x variable. The C language supports some kinds of encapsulation through its header file mechanism, particularly the opaque pointer technique. In C, it is possible to declare a struct name without defining its members. At that point no variable of the type of that struct can be used, but we can use pointers to that struct freely (because the size of a struct pointer is known at compile time). For example, consider this header file: #ifndef ADDER_H
#define ADDER_H
typedef struct AdderImpl *Adder;
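/* AdderImpl is declared but never defined in this header, so client code cannot see or touch its members. */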
Adder Adder_new(int x);
void Adder_free(Adder self);
int Adder_add(Adder self, int y);
#endif We can now write code that uses this Adder interface, without having access to its fields, e.g.: Adder plus2 = Adder_new(2);
if (!plus2) abort();
printf("%d\n", Adder_add(plus2, 7)); /* => 9 */
Adder_free(plus2); And here would be the totally encapsulated implementation details: #include "adder.h"
struct AdderImpl { int x; };
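/* The struct layout is known only in this file; everything else sees just the opaque Adder pointer. */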
Adder Adder_new(int x) {
Adder self = malloc(sizeof *self);
if (!self) return NULL;
self->x = x;
return self;
}
void Adder_free(Adder self) {
free(self);
}
int Adder_add(Adder self, int y) {
return self->x + y;
} There is also the class of modular programming languages , which focuses on module-level interfaces. The ML language family incl. OCaml includes an interesting approach to modules called functors . OOP overshadowed and largely subsumed modular programming, yet many purported advantages of OOP are more about modularity than object orientation. There's also the observation that classes in OOP languages like C++ or Java are often not used for objects (in the sense of entities that resolve operations through late binding/dynamic dispatch) but merely for abstract data types (where we define a public interface that hides internal implementation details). The paper On Understanding Data Abstraction, Revisited (Cook, 2009) discusses this difference in more detail. But yes, many languages have no encapsulation mechanism whatsoever. In these languages, structure members are left public. At most, there would be a naming convention discouraging use. E.g. I think Pascal had no useful encapsulation mechanism. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/375436",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/274111/"
]
} |
375,704 | In my 10+ years of experience in the IT field, I have never used foreign keys in any of my projects and I never felt the need. I did work with professional databases that had foreign keys constraints. I am now at a position where we are building a new application/database and I am thinking should I use foreign keys or not? This is going to be a professional product. I will also consider implementing this in my existing projects if I get a satisfactory answer. This article on why to use foreign keys exactly addresses my concerns. It main crux is It maintains referential integrity (yes but can be maintained without it too) Easier Detective work (of course) Better Performance (I am not quite sure) My question is, should I use foreign keys or can I live without them. What are strong pros and cons from a developer who worked in such scenarios. Example: Now an important part of using foreign keys is added complexity that is added to design. For example, a simple delete may not work, or it may delete other records you are not aware of. Let's consider that scenario. I have a database with user and user_comments tables. create table user(
user_id int not null identity,
user_name varchar(50),
...
)
create table user_comment(
    comment_id int not null identity,
user_id int,
CONSTRAINT FK_USER_USERID FOREIGN KEY (user_id)
REFERENCES user (user_id)
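    -- every user_comment.user_id must match an existing user.user_id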
ON DELETE CASCADE
ON UPDATE CASCADE
) Here, if I delete a user, all his comments will automatically be deleted. I know I can change that behavior, but my question is, are Foreign keys worth using with their added complexity? What are pros and cons from SE.stackexchange users ? Am I potentially looking at some horror stories? Can someone comment on how it improves performance? | It maintains referential integrity (yes but can be maintained without it too) You are technically correct that if you're able to maintain referential integrity yourself, you don't need the constraint to exist. But by that same logic, you don't need fire insurance as long as your house doesn't burn down, and you don't need health insurance as long as you don't get sick. While technically correct, the underlying assertion than you can do it all flawlessly is simply a failure to recognize the possibility of you (or any other developer) making a mistake. Accidentally breaking referential integrity without foreign keys works without any issue. But later on, when you want to retrieve the data, it blows up in your face. Who set this data? When did they set it? Why did they set it to this value? These question become very hard to answer. Accidentally breaking referential integrity with foreign keys blows up in your face immediately . Who set this data? You did. When did they try to set it? Right now. Why did they set it to this value? Since you're doing it right now, you're logically the best source to know what it is you're trying to do. Troubleshooting the issue becomes so much easier when you're at the source of the problem already. Easier Detective work (of course) I assume you mean the thing I just described. Better Performance (I am not quite sure) Can someone comment how how it improves performance? Foreign keys don't improve performance, at least not directly. The performance gain is achieved by the use of indexes . It just so happens that PKs and FKs are automatically indexed because they are very frequently used for searching, making them prime targets for search optimization. In here, if I delete a user, all his comments will automatically be deleted. This is not inherent to a foreign key. This is inherent to setting ON DELETE CASCADE on the foreign key. Cascaded deletes are a nice-to-have feature but they are not the core use case of foreign keys. The core use case is maintaining referential integrity. My question is should I use foreign keys or can I live without it. What are strong pros and cons from a developer who worked in such scenarios. my question is, is Foreign keys worth using with its added complexity I'm not seeing the complexity you're talking about. If you claim to already be capable of handling referential integrity, that means that I should be able to sneakily put a FK on your FK-less column, and you would be unable to notice that I put an FK on your column. There is no complexity from having the FK. Setting up the FK is trivial. Yes, it requires an explicit SQL command, but the command is very copy/pastable: CONSTRAINT unique_name FOREIGN KEY fk_column_name REFERENCES pk_table (pk_column_name) While the lazy developer in me does wonder if naming a constraint is really necessary, the other information you need to add is logically always required to set up a relation between two columns. Other than the name, it's about as simple as it can be. The performance gain from having an index on the column is inherent to having an FK on the column. 
Setting an index without a FK is about as complex as setting a FK: CREATE INDEX unique_name ON fk_table_name (fk_column_name) So again, I'm not seeing the added complexity from actually using a foreign key. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/375704",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30160/"
]
} |
375,860 | I know you're not supposed to test private methods, and if it looks like you need to, there might be a class in there waiting to come out. But, I don't want to have a gazillion classes just so that I can test their public interfaces and I find that for many classes if I just test the public methods I end up having to mock a lot of dependencies and the unit tests are enormous and hard to follow. I much prefer mocking the private methods when testing the public ones, and mocking external dependencies when testing the private ones. Am I crazy? | You're partially right - you shouldn't directly test private methods. The private methods on a class should be invoked by one or more of the public methods (perhaps indirectly - a private method called by a public method may invoke other private methods). Therefore, when testing your public methods, you will test your private methods as well. If you have private methods that remain untested, either your test cases are insufficient or the private methods are unused and can be removed. If you are taking a white-box testing approach, you should consider the implementation details of your private methods when constructing unit tests around your public methods. If you are taking a black-box approach, you shouldn't be testing against any implementation details in either the public or private methods but against the expected behavior. Personally, I prefer a white-box approach to unit tests. I can craft tests to put the methods and classes under test into different states that cause interesting behavior in my public and private methods and then assert that the results are what I expect. So - don't mock your private methods. Use them to understand what you need to test in order to provide good coverage of the functionality that you provide. This is especially true at the unit test level. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/375860",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21152/"
]
} |
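To make the answer above concrete, here is a minimal C# sketch (class, method and test names are invented for illustration, and the tests assume an xUnit-style [Fact]/Assert API) showing a private helper being covered purely through the public method that calls it:
using Xunit;
// Hypothetical class: the private helper is reached only via the public API.
public class PriceCalculator
{
    public decimal TotalFor(int quantity, decimal unitPrice)
    {
        return ApplyBulkDiscount(quantity * unitPrice, quantity);
    }
    // Private helper: never tested directly, never mocked.
    private decimal ApplyBulkDiscount(decimal subtotal, int quantity)
    {
        return quantity >= 10 ? subtotal * 0.9m : subtotal;
    }
}
public class PriceCalculatorTests
{
    [Fact]
    public void TotalFor_AppliesDiscount_ForBulkOrders()
    {
        var calculator = new PriceCalculator();
        // Inputs chosen so the public call drives execution through the discount branch.
        Assert.Equal(90m, calculator.TotalFor(10, 10m));
    }
    [Fact]
    public void TotalFor_DoesNotDiscount_SmallOrders()
    {
        var calculator = new PriceCalculator();
        Assert.Equal(50m, calculator.TotalFor(5, 10m));
    }
}
If a private branch cannot be reached this way, that is the signal described in the answer: either the test cases are insufficient or the private code is dead.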
375,934 | I have a class with some default/shared functionality. I use abstract class for it: public interface ITypeNameMapper
{
string Map(TypeDefinition typeDefinition);
}
public abstract class TypeNameMapper : ITypeNameMapper
{
public virtual string Map(TypeDefinition typeDefinition)
{
if (typeDefinition is ClassDefinition classDefinition)
{
return Map(classDefinition);
}
...
throw new ArgumentOutOfRangeException(nameof(typeDefinition));
}
protected abstract string Map(ClassDefinition classDefinition);
} As you can see, I also have the interface ITypeNameMapper . Does it make sense to define this interface if I already have an abstract class TypeNameMapper or abstract class is just enough? TypeDefinition in this minimal example is abstract too. | Yes, because C# doesn't allow multiple inheritance except with interfaces. So if I have a class which is both a TypeNameMapper and SomethingelseMapper I can do: class MultiFunctionalClass : ITypeNameMapper, ISomethingelseMapper
{
private TypeNameMapper map1;
private SomethingelseMapper map2;
public string Map(TypeDefinition typeDefinition) { return map1.Map(typeDefinition); }
public string Map(OtherDef otherDef) { return map2.Map(otherDef); }
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/375934",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/269824/"
]
} |
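As a small follow-up to the answer above, a hedged sketch (the consumer and the stub are hypothetical, not from the original post) of why keeping ITypeNameMapper pays off even before multiple inheritance comes into play: callers can depend on the interface, and tests can supply a trivial stand-in instead of subclassing the abstract base:
// Consumer depends on the interface, not the abstract base class.
public class SignatureWriter
{
    private readonly ITypeNameMapper _mapper;
    public SignatureWriter(ITypeNameMapper mapper)
    {
        _mapper = mapper;
    }
    public string Describe(TypeDefinition definition)
    {
        return $"type: {_mapper.Map(definition)}";
    }
}
// A trivial stub for tests - no need to derive from TypeNameMapper
// or satisfy its protected abstract members.
public class FixedNameMapper : ITypeNameMapper
{
    public string Map(TypeDefinition typeDefinition) => "Foo";
}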
376,535 | RFC 2606 standard reserves the domain names example.org , example.net and example.com for the purpose of being used as examples in documentation. What is an equivalent for a phone number (including country code) that can be used as an example, e.g. for giving users an example in what format to input phone numbers? In the best case, it would be a dummy number designated by the relevant standards to be an example phone number, and which would not be attributed to any real subscriber. | The North American Numbering Plan reserves 555-01 numbers for fictitious purposes. If you want an example Seattle number, for example, +1 206 555 0100 - +1 206 555 0199 would do. In the United Kingdom, Ofcom, the regulator, has set aside numbers for this purpose . For example, if you want a Leeds number, +44 113 496 0000 - +44 113 496 0999 may be used. I'm sure other countries will have similar things, but I doubt there's one consistent rule across all countries. Australia lists ranges for premium, subscriber, toll free and local rate numbers. Ireland - look for "drama use", which currently lists 020 91X XXXX as the only fictional range. There's only one real standard for representing telephone numbers - E.164 - so from the perspective of storing a fictitious number, spaces don't matter - +44 113 496 0000 is the same number as +441134960000 . But from the perspective of rendering a number to a user , there isn't a global standard, and even within a country there isn't usually a standard. In the US no one would give out their number as +14255550123 , they'd use (425) 555-0123 , or 425-555-0123 , or 425 555 0123 . Within the UK, the 3-3-4 ( +44 113 496 0000 ) format is just one of them. Some numbers are 2-4-4 ( +44 20 7946 0000 ), and many numbers are a 4-6 pattern ( +44 1632 960999 ). See this Wikipedia article for more. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/376535",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/312335/"
]
} |
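For the "show the user an example input format" part of the question, a hedged C# sketch; the regex is only a rough approximation of the E.164 shape (a real validator such as libphonenumber handles the per-country rules), and the placeholder uses one of the reserved fictional UK numbers mentioned in the answer:
using System.Text.RegularExpressions;
public static class PhoneNumberExample
{
    // UK drama/fictional range number, safe to show in UI hints.
    public const string Placeholder = "+44 113 496 0000";
    // Rough E.164 shape check: '+', a leading non-zero digit, at most 15 digits in total.
    private static readonly Regex E164ish = new Regex(@"^\+[1-9]\d{1,14}$", RegexOptions.Compiled);
    public static bool LooksLikeE164(string input)
    {
        if (string.IsNullOrWhiteSpace(input)) return false;
        // Spaces and dashes are presentation only, so strip them before checking.
        var compact = input.Replace(" ", "").Replace("-", "");
        return E164ish.IsMatch(compact);
    }
}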
376,688 | This is the most popular way (it seems to me) of checking if a value is in an array: for (int x : array)
{
if (x == value)
return true;
}
return false; However, in a book I've read many years ago by, probably, Wirth or Dijkstra, it was said that this style is better (when compared to a while-loop with an exit inside): int i = 0;
while (i < array.length && array[i] != value)
i++;
return i < array.length; This way the additional exit condition becomes an explicit part of the loop invariant, there are no hidden conditions and exits inside the loop, everything is more obvious and more in a structured-programming way. I generally preferred this latter pattern whenever possible and used the for-loop to only iterate from a to b. And yet I cannot say that the first version is less clear. Maybe it is even clearer and easier to understand, at least for very beginners. So I'm still asking myself the question of which one is better? Maybe someone can give a good rationale in favor of one of the methods? Update: This is not a question of multiple function return points, lambdas or finding an element in an array per se. It's about how to write loops with more complex invariants than a single inequality. Update: OK, I see the point of people who answer and comment: I mixed-in the foreach loop here, which itself is already much more clear and readable than a while-loop. I should not have done that. But this is also an interesting question, so let's leave it as it is: foreach-loop and an extra condition inside, or a while-loop with an explicit loop invariant and a post-condition after. It seems that the foreach-loop with a condition and an exit/break is winning. I will create an additional question without the foreach-loop (for a linked list). | This is easy. Almost nothing matters more than clarity to the reader. The first variant I found incredibly simple and clear. The second 'improved' version, I had to read several times and make sure all the edge conditions were right. There is ZERO DOUBT which is better coding style (the first is much better). Now - what is CLEAR to people may vary from person to person. I'm not sure there are any objective standards for that (though posting to a forum like this and getting a variety of people's inputs can help). In this particular case, however, I can tell you why the first algorithm is more clear: I know what the C++ iterate over a container syntax looks like and does. I've internalized it. Someone UNFAMILIAR (it's new syntax) with that syntax might prefer the second variation. But once you know and understand that new syntax, it's a basic concept you can just use. With the loop iteration (second) approach, you have to carefully check that the user is CORRECTLY checking for all the edge conditions to loop over the entire array (e.g. less than instead of less-or-equal, same index used for test and for indexing etc). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/376688",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/305144/"
]
} |
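Because the question's snippets are Java-flavoured, here is a hedged C# transcription of both styles discussed above, plus the library call that often sidesteps the debate entirely:
using System;
public static class ArraySearch
{
    // Style 1: foreach with an early return.
    public static bool ContainsEarlyReturn(int[] array, int value)
    {
        foreach (int x in array)
        {
            if (x == value)
                return true;
        }
        return false;
    }
    // Style 2: while-loop whose condition carries the full invariant,
    // with a single exit point after the loop.
    public static bool ContainsExplicitInvariant(int[] array, int value)
    {
        int i = 0;
        while (i < array.Length && array[i] != value)
            i++;
        return i < array.Length;
    }
    // In practice: let the library spell out the intent.
    public static bool ContainsLibrary(int[] array, int value)
    {
        return Array.IndexOf(array, value) >= 0;
    }
}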
376,689 | I took over 5 hours in sprint planning for a week long sprint. That seems like too much. We discuss things in detail in sprint planning, as most of team members are not senior. If we don't it will lead to mistakes during implementation and redesign during sprint. How do we deal with this? How much detail should I discuss during planning to fit it to just 2 hours long per a week sprint? | You're right - 5 hours in Sprint Planning for a 1 week Sprint does seem like a long time. The Scrum Guide time-boxes Sprint Planning to 8 hours for 1 month Sprints and says that "for shorter Sprints, the event is usually shorter". If you consider the ratio, a good target may be 2 hours of Sprint Planning for a 1 week Sprint, but there's no fixed timebox. So, how can you address a long Sprint Planning? As a Scrum Master, I would take these following steps: First, I'd work with the Product Owner to make sure that the Product Backlog is properly ordered. It is essential to effective Backlog Refinement and Sprint Planning to make sure that the most important work and their dependencies are at the top of the Product Backlog so that way the Scrum Team can focus their energies on defining, refining, and preparing the right work. Second, I'd make sure that the team is spending sufficient time on Backlog Refinement. The Scrum Guide indicates that refinement activities generally take no more than 10% of a Development Team's capacity. As an example, a Development Team of 4 working a standard 40 hour week should plan on about 16 hours of Backlog Refinement. This may be done individually, in small groups, or as a team. I've found that having a planned Backlog Refinement session for the team and then breaking out to do any research or investigation or planning tends to work the best. Third, make sure that the team realizes that they don't need to get every detail right in Sprint Planning. The goal of Sprint Planning is to produce a plan to completing the Sprint Goals. Don't try to do big design up front at a Sprint Planning session. Understand how different work fits in, dependencies, and objectives and use time outside of the Sprint Planning sessions with the right people to do the design, implementation, and testing required to deliver the work. More steps may fall out of these, but this would be a good starting point. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/376689",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/200689/"
]
} |
376,746 | I use C++ on ESP-32. When registering a timer I have to do this: timer_args.callback = reinterpret_cast<esp_timer_cb_t>(&SoundMixer::soundCallback);
timer_args.arg = this; Here the timer calls soundCallback . And the same thing when registering a task: xTaskCreate(reinterpret_cast<TaskFunction_t>(&SoundProviderTask::taskProviderCode), "SProvTask", stackSize, this, 10, &taskHandle); So the method is started in a separated task. GCC always warns me about these conversions, but it works just as planned. Is it acceptable in production code? Is there a better way to do this? | A reinterpret_cast is always fishy unless you know exactly what you are doing. Here, your code happens to work only due to GCC's calling convention for C++ methods, but this smells heavily like undefined behaviour. In particular you should not assume that member functions are in any way compatible with normal function pointers. The usual approach would be to instead define a C-compatible function with the appropriate signature, which internally calls the C++ method. For example: extern "C" static void my_timer_callback(void* arg) {
static_cast<SoundMixer*>(arg)->soundCallback();
} This cast is fine because we are casting back from a void* to the type of the pointed-to object. Details: extern "C" specifies the language linkage of this function. Language linkage affects name mangling and the calling convention of the function. Member functions cannot have C language linkage. Language linkage is largely orthogonal to internal/external linkage. For a callback the function may be βprivateβ, i.e. have internal linkage. The C code never refers to the callback by name. The above code snippet specifies internal linkage through the static keyword (not a static method!). Alternatively, the function could have been placed into an anonymous namespace. I am not entirely sure about the interactions between extern "C" and static (internal linkage). E.g. [dcl.link] says that βAll function types, function names with external linkage, and variable names with external linkage have a language linkage.β I interpret this so that the type of my_timer_callback has C language linkage, but that its function name does not. A static_cast is appropriate here because we know the real type of the arg but cannot express it within the type system. In contrast a reinterpret_cast is appropriate when we want to reinterpret a bit pattern, e.g. a pointer to a numeric type. Functions are not ordinary objects, and member functions even less so. You may reinterpret-cast between function pointer types as long as the function is only invoked through its real type (and analogously for member function pointers). Whether you can cast function pointers to other types (e.g. object pointers or void pointers) is implementation-defined ( background ). On POSIX casts between function pointers and void* are allowed so that dlsym() can work. Other casts involving (member) function pointers are undefined. In particular, casts between member functions and function pointers are not possible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/376746",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/242401/"
]
} |
376,750 | I have a windows service and a remote IIS server that need to communicate without any user interaction. As I canβt guarantee the two machines will be on the same network; the service will connect to the serverβs public facing IP over the internet (so security is critical) and transmit encrypted binary back and forth. I am still trying to decide whether to use an ASP.NET restful service or WCF on the server side. I like the idea of WCF and it exposing its interface so that I can reference and use directly with POCOβs in client development; but itβs something pretty new to myself and setting it all up and getting the security right is a pain (especially if youβre dropping HTTP for net.tcp) ASP.NET is much easier to setup and run, but consuming the REST api from a http client object on the client side seems clunky. And as thereβll be no web pages; having HTTP requests and responses seems like redundant overhead. From someone who knows a little bit more about WCF, would this be a more appropriate use for it over ASP.NET? Or an I missing something here? | A reinterpret_cast is always fishy unless you know exactly what you are doing. Here, your code happens to work only due to GCC's calling convention for C++ methods, but this smells heavily like undefined behaviour. In particular you should not assume that member functions are in any way compatible with normal function pointers. The usual approach would be to instead define a C-compatible function with the appropriate signature, which internally calls the C++ method. For example: extern "C" static void my_timer_callback(void* arg) {
static_cast<SoundMixer*>(arg)->soundCallback();
} This cast is fine because we are casting back from a void* to the type of the pointed-to object. Details: extern "C" specifies the language linkage of this function. Language linkage affects name mangling and the calling convention of the function. Member functions cannot have C language linkage. Language linkage is largely orthogonal to internal/external linkage. For a callback the function may be βprivateβ, i.e. have internal linkage. The C code never refers to the callback by name. The above code snippet specifies internal linkage through the static keyword (not a static method!). Alternatively, the function could have been placed into an anonymous namespace. I am not entirely sure about the interactions between extern "C" and static (internal linkage). E.g. [dcl.link] says that βAll function types, function names with external linkage, and variable names with external linkage have a language linkage.β I interpret this so that the type of my_timer_callback has C language linkage, but that its function name does not. A static_cast is appropriate here because we know the real type of the arg but cannot express it within the type system. In contrast a reinterpret_cast is appropriate when we want to reinterpret a bit pattern, e.g. a pointer to a numeric type. Functions are not ordinary objects, and member functions even less so. You may reinterpret-cast between function pointer types as long as the function is only invoked through its real type (and analogously for member function pointers). Whether you can cast function pointers to other types (e.g. object pointers or void pointers) is implementation-defined ( background ). On POSIX casts between function pointers and void* are allowed so that dlsym() can work. Other casts involving (member) function pointers are undefined. In particular, casts between member functions and function pointers are not possible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/376750",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/312707/"
]
} |
376,845 | Consider this an "academic" question. I have been wondering about about avoiding NULLs from time to time and this is an example where I can't come up with a satisfactory solution. Let's assume I store measurements where on occasions the measurement is known to be impossible (or missing). I would like to store that "empty" value in a variable while avoiding NULL. Other times the value could be unknown. So, having the measurements for a certain time-frame, a query about a measurement within that time period could return 3 kinds of responses: The actual measurement at that time (for example, any numerical value including 0 ) A "missing"/"empty" value (i.e., a measurement was done, and the value is known to be empty at that point). An unknown value (i.e., no measurement has been done at that point. It could be empty, but it could also be any other value). Important Clarification: Assuming you had a function get_measurement() returning one of "empty", "unknown" and a value of type "integer". Having a numerical value implies that certain operations can be done on the return value (multiplication, division, ...) but using such operations on NULLs will crash the application if not caught. I would like to be able to write code, avoiding NULL checks, for example (pseudocode): >>> value = get_measurement() # returns `2`
>>> print(value * 2)
4
>>> value = get_measurement() # returns `Empty()`
>>> print(value * 2)
Empty()
>>> value = get_measurement() # returns `Unknown()`
>>> print(value * 2)
Unknown() Note that none of the print statements caused exceptions (as no NULLs were used). So the empty & unknown values would propagate as necessary and the check whether a value is actually "unknown" or "empty" could be delayed until really necessary (like storing/serialising the value somewhere). Side-Note: The reason I'd like to avoid NULLs, is primarily a brain-teaser. If I want to get stuff done I'm not opposed to using NULLs, but I found that avoiding them can make code a lot more robust in some cases. | The common way to do this, at least with functional languages is to use a discriminated union. This is then a value that is one of a valid int, a value that denotes "missing" or a value that denotes "unknown". In F#, it might look something like: type Measurement =
| Reading of value : int
| Missing
| Unknown of value : RawData A Measurement value will then be a Reading , with an int value, or a Missing , or an Unknown with the raw data as value (if required). However, if you aren't using a language that supports discriminated unions, or their equivalent, this pattern isn't likely of much use to you. So there, you could eg use a class with an enum field that denotes which of the three contains the correct data. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/376845",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29121/"
]
} |
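The closing suggestion of the answer above (a class with an enum field saying which of the three cases applies) might look roughly like this in C#; it is a hand-rolled sketch with illustrative names, not a drop-in library type:
using System;
public enum MeasurementKind { Reading, Missing, Unknown }
public sealed class Measurement
{
    public MeasurementKind Kind { get; }
    private readonly int _value;
    private Measurement(MeasurementKind kind, int value)
    {
        Kind = kind;
        _value = value;
    }
    public static Measurement Reading(int value) => new Measurement(MeasurementKind.Reading, value);
    public static readonly Measurement Missing = new Measurement(MeasurementKind.Missing, 0);
    public static readonly Measurement Unknown = new Measurement(MeasurementKind.Unknown, 0);
    // Operations propagate Missing/Unknown instead of throwing,
    // mirroring the behaviour sketched in the question.
    public Measurement Map(Func<int, int> f) =>
        Kind == MeasurementKind.Reading ? Reading(f(_value)) : this;
    public override string ToString() =>
        Kind == MeasurementKind.Reading ? _value.ToString() : Kind.ToString();
}
With this shape, Measurement.Reading(2).Map(v => v * 2) yields a reading of 4, while Missing and Unknown simply propagate, mirroring the pseudocode in the question.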
377,028 | I am the new developer - this is my first programming position. My issue is this: We use git - I cut a branch from our develop branch, then I start working on the minor task I've been assigned. It's very slow, because I'm inexperienced. By the time I'm ready to merge my branch back into develop the others have made so many changes that resolving the conflicts is overwhelming (it actually seems easier to scrap my work and start over on the task, which of course is not a sustainable solution). How do I overcome this? Is there a tactic I can use other than 'be better at coding'? I intend to bring this up with my supervisor next week. | I assume you're using git. If so, make use of git rebase -i (the -i means interactive). Make it a daily task (even more frequently, if necessary) to rebase your branch against the develop branch. This brings in the changes incrementally every day (at least) to keep your feature branch up-to-date. If there are conflicts during your daily rebase, you need to have a talk with your team about who's working on what. If you run it daily, you probably won't need the interactive part. Just let it do its thing. I'm a fairly experienced developer, and it still takes me quite a bit of time to get up to speed on a new project. In your case, it sounds like you have several people working on the same project simultaneously, so it's either a very large project, or a new project that's evolving quickly. In either case, don't be worried if it takes you several months to get into the flow. If I switch projects for 2 or 3 weeks and then switch back, it can take me a few hours (or even a day or two) to get fully "back into" a project that I wrote 100% on my own! In short, don't worry about being slow right now. The way to get better is to just keep practicing. Don't be afraid to ask other developers about aspects of the projects you don't understand. EDIT: Or use merge . That's also an option. So, the above would be: "make use of git rebase -i (the -i means interactive) or git merge ". As for which one to use, talk it over with the rest of your team. They may (or may not) have strong preferences either way. It's clear some folks do have strong preferences. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/377028",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/313167/"
]
} |
377,464 | Python added the async/await constructs in 3.5 in 2015. The Javascript community made steps towards it for a bazzillion years and finally added a very similar implementation to the draft in ES8 released in 2017 (From my understanding). Typescript also added async methods in 2015 in version 1.7 that to the untrained eye look exactly like js async methods. C# Added async methods in 2012 that look like all other implementations of async/await and were based on F#'s similarly behaving but different looking asynchronous workflows that were introduced in F# 2.0 in 2010. This is the earliest example I know of language built in asynchronous programming - C# with the async/await pair and F# with async flows. Are there earlier examples of the keywords being used in this context as language constructs (or library)? From my limited information it looks like everyone imitated the good parts of the C# implementation, but did C# copy it from someone else? | Haskell (2012) There is an async package for Haskell (2012) by Simon Marlow. In case you don't know, Simon Marlow is a lead developer of Haskell. Notes: Simon Marlow joined Microsoft research in 1998 and left in 2012. In 2013 Simon Marlow published the book "Parallel and Concurrent Programming in Haskell" . C# (2011) Accoding to an Anders Hejlsberg interview for Channel 9 about Asynchronous Programming async/await in C# takes inspiration on async worflows in F#. Microsoft released a version of C# with async/await for the first time in the Async CTP (2011). And were later officially released in C# 5 (2012). In case you don't know, Anders Hejlsberg is the lead architect of C#, and has also worked in other languages including TypeScript. F# (2007) According to Don Syme, on his blog (2007), F# async workflows take inspiration from the implementation of asynchronous monad for haskell. In particular Peng Li's paper (2007) and Koen Claessen's "A Poor Man's Concurrency Monad" paper (1999). The first version of F# to include "asynchronous workflows" is F# 1.9.2.7 (2007). In case you don't know, Don Syme is the lead architect of F#, among other things. Haskell (1999) Koen Claessen's paper is the older implementation of operations with a result and continuations I can find (which is the logic behind async/await ), dating to 1999. It implements concurrency by defining atomic operations, continuations and a round-robin scheduler. The monaid approach would be the motivation for the switch from message passing to awaiting results. Abstract form the paper: Without adding any primitives to the language, we define a concurrency monad transformer in Haskell. This allows us to add a limited form of concurrency to any existing monad. The atomic actions of the new monad are lifted actions of the underlying monad. Some extra operations, such as fork , to initiate new processes, are provided. We discuss the implementation, and use some examples to illustrate the usefulness of this construction. This was not part of an official release of the language. Prior work: Haskell Concurrent Haskell (1996) is an extension of Haskell, to which "A Poor Man's Concurrency Monad" is an alternative. Concurrent Haskell used software transactional memory and threads (fork). And the paper "Implicit and Explicit Parallel Programming in Haskell" (1993) by Mark P. Jones and Paul Hudak. This paper laid the groundwork for Koen Claessen's paper. The paper defines a fork function among other things. 
Prior work: ML In the paper "Implicit and Explicit Parallel Programming in Haskell" Mark and Paul analyze the properties of a fork function and the problem of side effects in concurrency, among other things. They reference the paper "A semantics for ML concurrency primitives" (1992) which picks a set concurrent primitives based on Concurrent ML and provides a proof that they preserve sequential execution properties. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/377464",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21598/"
]
} |
377,539 | I'm having a read at J.B. Rainsberger's blog post on integrated tests and wonder in which way an integration test is more harsh with our design? We write more integrated tests, which are bigger and donβt criticize our design as harshly as microtests do | Microtests can help lead to good design . By writing good small tests, you are deliberately testing a small amount of code and filling in its gaps with mock objects . This leads to low coupling (things aren't reliant on each other) and high cohesion (things that belong together stay together). That way, when you go back and make changes, it's easy to find what is responsible for what you're looking for and you're less likely to break things in making the change. This won't solve all your design but it can help. In this context, J.B. Rainsberger is noting that if you're having a difficult time writing a unit test, you likely have an issue with your design that is causing the difficulty, and thus the tests are criticizing the design implicitly. He posits that this is a good thing, because without the small tests help keeping your architecture in line it is easy to stray from good design patterns - which integrated tests will not capture. Update : as Rainsberger notes below, he did not intend microtests to be synonymous with unit tests. He's also provided a detailed answer that can give you deeper insight into exactly what he was communicating. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/377539",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/269192/"
]
} |
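One way to picture the "tests criticize the design" feedback loop described above, as a hedged C# sketch with invented names: the microtest stays tiny only because the collaborator is injected behind an interface, and that pressure toward injectable, narrow dependencies is the design criticism in action:
using System;
using Xunit;
public interface IClock
{
    DateTime UtcNow { get; }
}
public class GreetingService
{
    // The microtest below is easy only because the clock is injected;
    // a hard-coded DateTime.UtcNow would have made it awkward, and that
    // awkwardness is the design feedback the answer describes.
    private readonly IClock _clock;
    public GreetingService(IClock clock) => _clock = clock;
    public string Greet() => _clock.UtcNow.Hour < 12 ? "Good morning" : "Good afternoon";
}
public class FixedClock : IClock
{
    public DateTime UtcNow { get; set; }
}
public class GreetingServiceTests
{
    [Fact]
    public void Greets_InTheMorning()
    {
        var service = new GreetingService(new FixedClock { UtcNow = new DateTime(2018, 1, 1, 9, 0, 0) });
        Assert.Equal("Good morning", service.Greet());
    }
}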
377,540 | I'm developing an application that handles sales for multiple clients, when the client sells a particular item, that information should be send to the admin. The count of each item sold and the item names. I've done that by setting a simple server online and sending POST requests to that server with the appropriate data, I send the item name and the change in sale since last update. The problem I'm facing is if the request fails for whatever reason (server is down, or internet is unavailable, etc.) What is the best approach to such a problem? What I did is that I store the change locally on the client's machine. In my current implementation, I set a value in the registry with the item's name to the change. I try to send that change to the server and if that request is successfull, I remove that value from the registry. This works, but only informs the server when the client sells another of the same item. For example, the client sold three of Item A and two of Item B, the request was sent correctly so the server is up to date. Later, there was no internet access and the client sold two extra of Item B. Since the request failed, the client is left with "Item B: 2" in his registry. When the internet comes back, the server will not be informed of those two sales unless if the client sells another of Item B. This is extremely undesirable, but I have no idea how to fix it. To put it clearly: I want to log sales to the server, and if the internet is not accessible, I want to store the logging offline until the internet is back again, at which point the stored logs should be sent. How can I do that? | Microtests can help lead to good design . By writing good small tests, you are deliberately testing a small amount of code and filling in its gaps with mock objects . This leads to low coupling (things aren't reliant on each other) and high cohesion (things that belong together stay together). That way, when you go back and make changes, it's easy to find what is responsible for what you're looking for and you're less likely to break things in making the change. This won't solve all your design but it can help. In this context, J.B. Rainsberger is noting that if you're having a difficult time writing a unit test, you likely have an issue with your design that is causing the difficulty, and thus the tests are criticizing the design implicitly. He posits that this is a good thing, because without the small tests help keeping your architecture in line it is easy to stray from good design patterns - which integrated tests will not capture. Update : as Rainsberger notes below, he did not intend microtests to be synonymous with unit tests. He's also provided a detailed answer that can give you deeper insight into exactly what he was communicating. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/377540",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/313883/"
]
} |
377,927 | I recently interviewed at Amazon. During a coding session, the interviewer asked why I declared a variable in a method. I explained my process and he challenged me to solve the same problem with fewer variables. For example (this wasn't from the interview), I started with Method A then improved it to Method B, by removing int s . He was pleased and said this would reduce memory usage by this method. I understand the logic behind it, but my question is: When is it appropriate to use Method A vs. Method B, and vice versa? You can see that Method A is going to have higher memory usage, since int s is declared, but it only has to perform one calculation, i.e. a + b . On the other hand, Method B has lower memory usage, but has to perform two calculations, i.e. a + b twice. When do I use one technique over the other? Or, is one of the techniques always preferred over the other? What are things to consider when evaluating the two methods? Method A: private bool IsSumInRange(int a, int b)
{
int s = a + b;
if (s > 1000 || s < -1000) return false;
else return true;
} Method B: private bool IsSumInRange(int a, int b)
{
if (a + b > 1000 || a + b < -1000) return false;
else return true;
} | Instead of speculating about what may or may not happen, let's just look, shall we? I'll have to use C++ since I don't have a C# compiler handy (though see the C# example from VisualMelon ), but I'm sure the same principles apply regardless. We'll include the two alternatives you encountered in the interview. We'll also include a version that uses abs as suggested by some of the answers. #include <cstdlib>
bool IsSumInRangeWithVar(int a, int b)
{
int s = a + b;
if (s > 1000 || s < -1000) return false;
else return true;
}
bool IsSumInRangeWithoutVar(int a, int b)
{
if (a + b > 1000 || a + b < -1000) return false;
else return true;
}
bool IsSumInRangeSuperOptimized(int a, int b) {
return (abs(a + b) <= 1000);
} Now compile it with no optimization whatsoever: g++ -c -o test.o test.cpp Now we can see precisely what this generates: objdump -d test.o 0000000000000000 <_Z19IsSumInRangeWithVarii>:
0: 55 push %rbp # begin a call frame
1: 48 89 e5 mov %rsp,%rbp
4: 89 7d ec mov %edi,-0x14(%rbp) # save first argument (a) on stack
7: 89 75 e8 mov %esi,-0x18(%rbp) # save b on stack
a: 8b 55 ec mov -0x14(%rbp),%edx # load a and b into edx
d: 8b 45 e8 mov -0x18(%rbp),%eax # load b into eax
10: 01 d0 add %edx,%eax # add a and b
12: 89 45 fc mov %eax,-0x4(%rbp) # save result as s on stack
15: 81 7d fc e8 03 00 00 cmpl $0x3e8,-0x4(%rbp) # compare s to 1000
1c: 7f 09 jg 27 # jump to 27 if it's greater
1e: 81 7d fc 18 fc ff ff cmpl $0xfffffc18,-0x4(%rbp) # compare s to -1000
25: 7d 07 jge 2e # jump to 2e if it's greater or equal
27: b8 00 00 00 00 mov $0x0,%eax # put 0 (false) in eax, which will be the return value
2c: eb 05 jmp 33 <_Z19IsSumInRangeWithVarii+0x33>
2e: b8 01 00 00 00 mov $0x1,%eax # put 1 (true) in eax
33: 5d pop %rbp
34: c3 retq
0000000000000035 <_Z22IsSumInRangeWithoutVarii>:
35: 55 push %rbp
36: 48 89 e5 mov %rsp,%rbp
39: 89 7d fc mov %edi,-0x4(%rbp)
3c: 89 75 f8 mov %esi,-0x8(%rbp)
3f: 8b 55 fc mov -0x4(%rbp),%edx
42: 8b 45 f8 mov -0x8(%rbp),%eax # same as before
45: 01 d0 add %edx,%eax
# note: unlike other implementation, result is not saved
47: 3d e8 03 00 00 cmp $0x3e8,%eax # compare to 1000
4c: 7f 0f jg 5d <_Z22IsSumInRangeWithoutVarii+0x28>
4e: 8b 55 fc mov -0x4(%rbp),%edx # since s wasn't saved, load a and b from the stack again
51: 8b 45 f8 mov -0x8(%rbp),%eax
54: 01 d0 add %edx,%eax
56: 3d 18 fc ff ff cmp $0xfffffc18,%eax # compare to -1000
5b: 7d 07 jge 64 <_Z22IsSumInRangeWithoutVarii+0x2f>
5d: b8 00 00 00 00 mov $0x0,%eax
62: eb 05 jmp 69 <_Z22IsSumInRangeWithoutVarii+0x34>
64: b8 01 00 00 00 mov $0x1,%eax
69: 5d pop %rbp
6a: c3 retq
000000000000006b <_Z26IsSumInRangeSuperOptimizedii>:
6b: 55 push %rbp
6c: 48 89 e5 mov %rsp,%rbp
6f: 89 7d fc mov %edi,-0x4(%rbp)
72: 89 75 f8 mov %esi,-0x8(%rbp)
75: 8b 55 fc mov -0x4(%rbp),%edx
78: 8b 45 f8 mov -0x8(%rbp),%eax
7b: 01 d0 add %edx,%eax
7d: 3d 18 fc ff ff cmp $0xfffffc18,%eax
82: 7c 16 jl 9a <_Z26IsSumInRangeSuperOptimizedii+0x2f>
84: 8b 55 fc mov -0x4(%rbp),%edx
87: 8b 45 f8 mov -0x8(%rbp),%eax
8a: 01 d0 add %edx,%eax
8c: 3d e8 03 00 00 cmp $0x3e8,%eax
91: 7f 07 jg 9a <_Z26IsSumInRangeSuperOptimizedii+0x2f>
93: b8 01 00 00 00 mov $0x1,%eax
98: eb 05 jmp 9f <_Z26IsSumInRangeSuperOptimizedii+0x34>
9a: b8 00 00 00 00 mov $0x0,%eax
9f: 5d pop %rbp
a0: c3 retq We can see from the stack addresses (for example, the -0x4 in mov %edi,-0x4(%rbp) versus the -0x14 in mov %edi,-0x14(%rbp) ) that IsSumInRangeWithVar() uses 16 extra bytes on the stack. Because IsSumInRangeWithoutVar() allocates no space on the stack to store the intermediate value s it has to recalculate it, resulting in this implementation being 2 instructions longer. Funny, IsSumInRangeSuperOptimized() looks a lot like IsSumInRangeWithoutVar() , except it compares to -1000 first, and 1000 second. Now let's compile with only the most basic optimizations: g++ -O1 -c -o test.o test.cpp . The result: 0000000000000000 <_Z19IsSumInRangeWithVarii>:
0: 8d 84 37 e8 03 00 00 lea 0x3e8(%rdi,%rsi,1),%eax
7: 3d d0 07 00 00 cmp $0x7d0,%eax
c: 0f 96 c0 setbe %al
f: c3 retq
0000000000000010 <_Z22IsSumInRangeWithoutVarii>:
10: 8d 84 37 e8 03 00 00 lea 0x3e8(%rdi,%rsi,1),%eax
17: 3d d0 07 00 00 cmp $0x7d0,%eax
1c: 0f 96 c0 setbe %al
1f: c3 retq
0000000000000020 <_Z26IsSumInRangeSuperOptimizedii>:
20: 8d 84 37 e8 03 00 00 lea 0x3e8(%rdi,%rsi,1),%eax
27: 3d d0 07 00 00 cmp $0x7d0,%eax
2c: 0f 96 c0 setbe %al
2f: c3 retq Would you look at that: each variant is identical . The compiler is able to do something quite clever: abs(a + b) <= 1000 is equivalent to a + b + 1000 <= 2000 considering setbe does an unsigned comparison, so a negative number becomes a very large positive number. The lea instruction can actually perform all these additions in one instruction, and eliminate all the conditional branches. To answer your question, almost always the thing to optimize for is not memory or speed, but readability . Reading code is a lot harder than writing it, and reading code that's been mangled to "optimize" it is a lot harder than reading code that's been written to be clear. More often than not, these "optimizations" have negligible, or as in this case exactly zero actual impact on performance. Follow up question, what changes when this code is in an interpreted language instead of compiled? Then, does the optimization matter or does it have the same result? Let's measure! I've transcribed the examples to Python: def IsSumInRangeWithVar(a, b):
s = a + b
if s > 1000 or s < -1000:
return False
else:
return True
def IsSumInRangeWithoutVar(a, b):
if a + b > 1000 or a + b < -1000:
return False
else:
return True
def IsSumInRangeSuperOptimized(a, b):
return abs(a + b) <= 1000
from dis import dis
print('IsSumInRangeWithVar')
dis(IsSumInRangeWithVar)
print('\nIsSumInRangeWithoutVar')
dis(IsSumInRangeWithoutVar)
print('\nIsSumInRangeSuperOptimized')
dis(IsSumInRangeSuperOptimized)
print('\nBenchmarking')
import timeit
print('IsSumInRangeWithVar: %fs' % (min(timeit.repeat(lambda: IsSumInRangeWithVar(42, 42), repeat=50, number=100000)),))
print('IsSumInRangeWithoutVar: %fs' % (min(timeit.repeat(lambda: IsSumInRangeWithoutVar(42, 42), repeat=50, number=100000)),))
print('IsSumInRangeSuperOptimized: %fs' % (min(timeit.repeat(lambda: IsSumInRangeSuperOptimized(42, 42), repeat=50, number=100000)),)) Run with Python 3.5.2, this produces the output: IsSumInRangeWithVar
2 0 LOAD_FAST 0 (a)
3 LOAD_FAST 1 (b)
6 BINARY_ADD
7 STORE_FAST 2 (s)
3 10 LOAD_FAST 2 (s)
13 LOAD_CONST 1 (1000)
16 COMPARE_OP 4 (>)
19 POP_JUMP_IF_TRUE 34
22 LOAD_FAST 2 (s)
25 LOAD_CONST 4 (-1000)
28 COMPARE_OP 0 (<)
31 POP_JUMP_IF_FALSE 38
4 >> 34 LOAD_CONST 2 (False)
37 RETURN_VALUE
6 >> 38 LOAD_CONST 3 (True)
41 RETURN_VALUE
42 LOAD_CONST 0 (None)
45 RETURN_VALUE
IsSumInRangeWithoutVar
9 0 LOAD_FAST 0 (a)
3 LOAD_FAST 1 (b)
6 BINARY_ADD
7 LOAD_CONST 1 (1000)
10 COMPARE_OP 4 (>)
13 POP_JUMP_IF_TRUE 32
16 LOAD_FAST 0 (a)
19 LOAD_FAST 1 (b)
22 BINARY_ADD
23 LOAD_CONST 4 (-1000)
26 COMPARE_OP 0 (<)
29 POP_JUMP_IF_FALSE 36
10 >> 32 LOAD_CONST 2 (False)
35 RETURN_VALUE
12 >> 36 LOAD_CONST 3 (True)
39 RETURN_VALUE
40 LOAD_CONST 0 (None)
43 RETURN_VALUE
IsSumInRangeSuperOptimized
15 0 LOAD_GLOBAL 0 (abs)
3 LOAD_FAST 0 (a)
6 LOAD_FAST 1 (b)
9 BINARY_ADD
10 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
13 LOAD_CONST 1 (1000)
16 COMPARE_OP 1 (<=)
19 RETURN_VALUE
Benchmarking
IsSumInRangeWithVar: 0.019361s
IsSumInRangeWithoutVar: 0.020917s
IsSumInRangeSuperOptimized: 0.020171s Disassembly in Python isn't terribly interesting, since the bytecode "compiler" doesn't do much in the way of optimization. The performance of the three functions is nearly identical. We might be tempted to go with IsSumInRangeWithVar() due to it's marginal speed gain. Though I'll add as I was trying different parameters to timeit , sometimes IsSumInRangeSuperOptimized() came out fastest, so I suspect it may be external factors responsible for the difference, rather than any intrinsic advantage of any implementation. If this is really performance critical code, an interpreted language is simply a very poor choice. Running the same program with pypy, I get: IsSumInRangeWithVar: 0.000180s
IsSumInRangeWithoutVar: 0.001175s
IsSumInRangeSuperOptimized: 0.001306s Just using pypy, which uses JIT compilation to eliminate a lot of the interpreter overhead, has yielded a performance improvement of 1 or 2 orders of magnitude. I was quite shocked to see IsSumInRangeWithVar() is an order of magnitude faster than the others. So I changed the order of the benchmarks and ran again: IsSumInRangeSuperOptimized: 0.000191s
IsSumInRangeWithoutVar: 0.001174s
IsSumInRangeWithVar: 0.001265s So it seems it's not actually anything about the implementation that makes it fast, but rather the order in which I do the benchmarking! I'd love to dig in to this more deeply, because honestly I don't know why this happens. But I believe the point has been made: micro-optimizations like whether to declare an intermediate value as a variable or not are rarely relevant. With an interpreted language or highly optimized compiler, the first objective is still to write clear code. If further optimization might be required, benchmark . Remember that the best optimizations come not from the little details but the bigger algorithmic picture: pypy is going to be an order of magnitude faster for repeated evaluation of the same function than cpython because it uses faster algorithms (JIT compiler vs interpretation) to evaluate the program. And there's the coded algorithm to consider as well: a search through a B-tree will be faster than a linked list. After ensuring you're using the right tools and algorithms for the job, be prepared to dive deep into the details of the system. The results can be very surprising, even for experienced developers, and this is why you must have a benchmark to quantify the changes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/377927",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/314489/"
]
} |
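Since the answer above had to fall back to C++ for its measurements, here is a hedged C# transcription of the three variants for reference; the boundary uses <= so that a sum of exactly 1000 or -1000 counts as in range, matching Methods A and B, and as the answer stresses, any performance claim about them should be benchmarked rather than assumed:
using System;
public static class SumRange
{
    // Method A: intermediate variable.
    public static bool IsSumInRangeWithVar(int a, int b)
    {
        int s = a + b;
        if (s > 1000 || s < -1000) return false;
        else return true;
    }
    // Method B: the sum is written out twice.
    public static bool IsSumInRangeWithoutVar(int a, int b)
    {
        if (a + b > 1000 || a + b < -1000) return false;
        else return true;
    }
    // The abs-based variant; like the originals, it ignores int overflow.
    public static bool IsSumInRangeSuperOptimized(int a, int b)
    {
        return Math.Abs(a + b) <= 1000;
    }
}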
378,259 | To avoid magic numbers, we often hear that we should give a literal a meaningful name. Such as: //THIS CODE COMES FROM THE CLEAN CODE BOOK
for (int j = 0; j < 34; j++) {
s += (t[j] * 4) / 5;
}
-------------------- Change to --------------------
int realDaysPerIdealDay = 4;
const int WORK_DAYS_PER_WEEK = 5;
int sum = 0;
for (int j = 0; j < NUMBER_OF_TASKS; j++) {
int realTaskDays = taskEstimate[j] * realDaysPerIdealDay;
int realTaskWeeks = (realTaskDays / WORK_DAYS_PER_WEEK);
sum += realTaskWeeks;
} I have a dummy method like this. To explain: suppose I have a list of people to serve; by default we spend $5 to buy food only, but when we have more than one person we need to buy water as well as food, so we must spend more money, maybe $6. I'll change my code, so please focus on the literal 1, which is what my question is about. public int getMoneyByPersons(){
if(persons.size() == 1){
// TODO - return money for one person
} else {
// TODO - calculate and return money for people.
}
} When I asked my friends to review my code, one said giving a name for the value 1 would yield cleaner code, and the other said we don't need a constant name here because the value is meaningful by itself. So, my question is Should I give a name for the literal value 1? When is a value a magic number and when is it not? How can I distinguish context to choose the best solution? | No. In that example, 1 is perfectly meaningful. However, what if persons.size() is zero? Seems strange that persons.getMoney() works for 0 and 2 but not for 1. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/378259",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/256726/"
]
} |
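A hedged C# sketch of the point in the answer above: the literal 1 needs no name, but the zero-person case is worth spelling out (the $5/$6 figures come from the question; how they scale for groups is an assumption made here purely for illustration):
using System.Collections.Generic;
public static class Catering
{
    public static int GetMoneyByPersons(IReadOnlyCollection<string> persons)
    {
        // The literal 1 is self-explanatory; no named constant needed.
        if (persons.Count == 0)
            return 0;                    // the case the original branch ignores
        if (persons.Count == 1)
            return 5;                    // food only
        return 6 * persons.Count;        // food and water for a group (assumed per-person)
    }
}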
378,348 | My team is migrating from a monolithic ASP.NET application to .NET Core and Kubernetes. The code changes seem to be going as well as can be expected but where my team is encountering a lot of discord is around the database. We currently have a rather large SQL Server database that houses all of the data for our entire business. I'm proposing that we split the database in a similar way to splitting the code--catalog data in one (logical) database, inventory data in another, orders in another, etc--and each microservice would be the gatekeeper for its database. The implication here is that foreign keys that cross microservice boundaries would have to be removed and sprocs and views that reach across boundaries would be prohibited. All of the data models may or may not reside in the same physical database, but even if they do, they should not interact with each other directly. Orders might still reference catalog items by Id but the data integrity would not be strictly enforced at the database level and that data will have to be joined in code rather than in SQL. I see the loss of these as necessary trade offs in moving to microservices and getting the scalability benefits that come with. As long as we choose our seams wisely and develop around them then it should be OK. Other team members are adamant that everything must stay in the same monolithic database so everything can be ACID and have referential integrity preserved everywhere. This brings me to my question. First, is my stance on foreign key constraints and join plausible? If so, is anyone aware of any credible reading material I could offer to my colleagues? Their position is nearly religious and they don't seem like they will be swayed by anything short of Martin Fowler himself telling them they're wrong. | There is no clear solution because this depends entirely on your context - in particular, along which dimensions your system is supposed to scale and what your actual problems are. Is the database really your bottleneck? This (unfortunately rather lengthy) answer will read a bit like "microservices are bad, monoliths for life!", but that's not my intention. My point is that microservices and distributed databases can solve various problems, but not without having some issues of their own. In order to make a strong argument for your architecture, you must show that these issues do not apply or can be mitigated, and that this architecture is the best choice for your business needs. Distributed data is difficult. The same flexibility that enables better scaling is the flip side of weaker guarantees. Notably, distributed systems are much much harder to reason about. Atomic updates, transactions, consistency/referential integrity, and durability are extremely valuable and should not be waived rashly. There is little point in having data if it is incomplete, out of date, or outright wrong.
When you have ACID as a business requirement but are using database technology that cannot offer it out of the box (e.g. many NoSQL databases, or a DB-per-microservice architecture), then your application must fill the gap and provide those guarantees. This is not impossible to do, but tricky to get right. Very tricky. Especially in a distributed setting where there are multiple writers to each database. This difficulty translates to a high chance of bugs, possibly including dropped data, inconsistent data, and so on. For example, consider reading the Jepsen analyses of well-known distributed database systems , perhaps starting with the analysis of Cassandra . I don't understand half of that analysis, but the TL;DR is that distributed systems are so difficult that even industry-leading projects sometimes get them wrong, in ways that can seem obvious in hindsight. Distributed systems also imply a larger development effort. To a certain degree, there's a direct trade-off between development costs or dropping money on beefier hardware. Example: dangling references In practice, you should not look at computer science but at your business requirements to see whether and how ACID can be relaxed. E.g. many foreign-key relationships might not be as important as they seem. Consider a product-category n:m relationship. In a RDBMS we might use a foreign-key constraint so that only existing products and existing categories can be part of that relationship. What happens if we introduce separate product and category services, and a product or category is deleted? In this case, that might not be a big problem and we can write our application so that it filters out any products or categories that no longer exist. But there are tradeoffs! Note that this might require an application-level JOIN over multiple databases/microservices, which merely moves processing from the database server to your application. This increases total load and has to move extra data through the network. This can mess with pagination. E.g. you request the next 25 products from a category, and filter out unavailable products from that response. Now your application displays 23 products. In theory, a page with zero products would also be possible! You will want to occasionally run a script that cleans up dangling references, either after each relevant change or on regular intervals. Note that such scripts are fairly expensive because they have to request every product/category from the backing database/microservice to see whether it still exists. This should be obvious, but for clarity: do not reuse IDs. Autoincrement-style IDs may or may not be fine. GUIDs or hashes give you more flexibility, e.g. by being able to assign an ID before the item is inserted into a database. Example: concurrent orders Now instead consider a product-order relationship. What happens to an order if a product is deleted or changed? Ok, we can simply copy the relevant product data into the order entry to keep it available - trading disk space for simplicity. But what if the product's price changes or the product becomes unavailable just before an order for that product is made? In a distributed system, effects take time to propagate and the order will likely go through with outdated data. Again, how to approach this depends on your business requirements. Maybe the outdated order is acceptable, and you can later cancel the order if it cannot be fulfilled. But maybe that's not an option, e.g. for highly concurrent settings.
Consider 3000 people rushing to buy concert tickets within the first 10 seconds, and let's assume a change in availability will require 10ms to propagate. What is the probability of selling the last ticket to multiple people? Depends on how those collisions are handled, but using a Poisson distribution with λ = 3000 / (10s / 10ms) = 3 we get a P(k > 1) = 1 - P(k = 0) - P(k = 1) = 80% chance of collision per 10ms interval. Whether selling and later cancelling the majority of your orders is possible without committing fraud might lead to an interesting conversation with your legal department. Pragmatism means cherry-picking the best features. The good news is that you don't have to move to a distributed database model, if that's not required otherwise. No one will revoke your Microservice Club membership if you don't do microservices "properly", because there is no such club - and no one true way to build microservices. Pragmatism wins every time, so mix and match various approaches as they solve your problem. This could even mean microservices with a centralized database. Really, don't go through the pain of distributed databases if you don't have to. You can scale without microservices. Microservices have two major benefits: The organizational benefit that they can be developed and deployed independently by separate teams (which in turn requires the services to offer a stable interface). The operational benefit that each microservice can be scaled independently . If independent scaling is not required, microservices are a lot less attractive. A database server already is a kind of service which you can scale (somewhat) independently, e.g. by adding read replicas. You mention stored procedures. Reducing them might have such a large effect that any other scalability discussions are moot. And it's perfectly possible to have a scalable monolith which includes all services as libraries. You can then scale by launching more instances of the monolith, which of course requires each instance to be stateless. This tends to work well until the monolith is too large to be reasonably deployed, or if some services have special resource requirements so that you might want to scale them independently. The problem domains that involve extra resources might not involve a separate data model. Do you have a strong business case? You are aware of the business needs of your organization, and can therefore create an argument for a database-per-microservice architecture, based on an analysis: that a certain scale is required, and this architecture is the most cost-effective approach to obtain that scalability, taking into account the increased development effort for such a setup and alternative solutions; and that your business requirements allow relevant ACID guarantees to be relaxed, without leading to various problems like those discussed above. Conversely, if you are unable to demonstrate this, in particular if the current database design is able to support sufficient scale into the future (as your colleagues seem to believe), then you also have your answer. There's also a big YAGNI component to scalability. In the face of uncertainty, it is a strategic business decision on building for scalability now (lower total costs, but involves opportunity costs and may not be needed) versus deferring some work on scalability (higher total costs if needed, but you have a better idea of the actual scale). This is not primarily a technical decision. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/378348",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47432/"
]
} |
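To make the application-level JOIN and filtering trade-off from the answer above concrete, a hedged C# sketch with hypothetical service clients; note how easily a page can come back short once dangling references are silently dropped:
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
public class ProductView { public Guid Id { get; set; } public string Name { get; set; } }
public interface ICategoryClient { Task<IReadOnlyList<Guid>> GetProductIds(Guid categoryId, int page, int pageSize); }
public interface IProductClient { Task<ProductView> GetProduct(Guid id); } // returns null if the product is gone
public class CategoryBrowser
{
    private readonly ICategoryClient _categories;
    private readonly IProductClient _products;
    public CategoryBrowser(ICategoryClient categories, IProductClient products)
    {
        _categories = categories;
        _products = products;
    }
    public async Task<IReadOnlyList<ProductView>> GetProductsForCategory(Guid categoryId, int page, int pageSize)
    {
        // 1. The category service only knows product ids.
        IReadOnlyList<Guid> ids = await _categories.GetProductIds(categoryId, page, pageSize);
        // 2. The product service is asked for each one; some may no longer exist.
        var result = new List<ProductView>();
        foreach (Guid id in ids)
        {
            ProductView product = await _products.GetProduct(id);
            if (product != null) // dangling reference: silently dropped
                result.Add(product);
        }
        // The caller asked for pageSize items but may get fewer back -
        // the pagination wrinkle described in the answer.
        return result;
    }
}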
378,612 | When you create an extension method you can, of course, call it on null .But, unlike an instance method call, calling it on null doesn't have to throw a NullReferenceException -> you have to check and throw it manually. For the implementation of the Linq extension method Any() Microsoft decided that they should throw a ArgumentNullException ( https://github.com/dotnet/corefx/blob/master/src/System.Linq/src/System/Linq/AnyAll.cs ). It irks me to have to write if( myCollection != null && myCollection.Any() ) Am I wrong, as a client of this code, to expect that e.g. ((int[])null).Any() should return false ? | I have a bag with five potatoes in it. Are there .Any() potatoes in the bag? " Yes ," you say. <= true I take all of the potatoes out and eat them. Are there .Any() potatoes in the bag? " No ," you say. <= false I completely incinerate the bag in a fire. Are there .Any() potatoes in the bag now? " There is no bag ." <= ArgumentNullException | {
"source": [
"https://softwareengineering.stackexchange.com/questions/378612",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/315572/"
]
} |
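The same point in code form, as a hedged sketch: LINQ's Any() treats a null source as "there is no bag" and throws, and the "just return false" behaviour has to be opted into with a custom extension:
using System;
using System.Collections.Generic;
using System.Linq;
public static class SequenceExtensions
{
    // Opt-in "no bag means no potatoes" semantics.
    public static bool AnySafe<T>(this IEnumerable<T> source)
    {
        return source != null && source.Any();
    }
}
public static class Demo
{
    public static void Main()
    {
        int[] bag = null;
        Console.WriteLine(new[] { 1, 2, 3 }.Any());   // True
        Console.WriteLine(new int[0].Any());          // False
        Console.WriteLine(bag.AnySafe());             // False
        try { Console.WriteLine(bag.Any()); }
        catch (ArgumentNullException) { Console.WriteLine("ArgumentNullException - there is no bag"); }
    }
}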
378,670 | Having two classes: public class Parent
{
public int Id { get; set; }
public int ChildId { get; set; }
}
public class Child { ... } When assigning ChildId to Parent should I check first if it exists in the DB or wait for the DB to throw an exception? For example (using Entity Framework Core): NOTE these kinds of checks are ALL OVER THE INTERNET even on official Microsoft's docs: https://docs.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application#modify-the-department-controller but there is additional exception handling for SaveChanges also, note that the main intent of this check was to return friendly message and known HTTP status to the user of the API and not to completely ignore database exceptions. And the only place exception be thrown is inside SaveChanges or SaveChangesAsync call... so there won't be any exception when you call FindAsync or Any . So if child exists but was deleted before SaveChangesAsync then concurrency exception will be thrown. I did this due to a fact that foreign key violation exception will be much harder to format to display "Child with id {parent.ChildId} could not be found." public async Task<ActionResult<Parent>> CreateParent(Parent parent)
{
// is this code redundant?
// NOTE: it's probably better to use Any instead of FindAsync because FindAsync selects *, and Any selects 1
var child = await _db.Children.FindAsync(parent.ChildId);
if (child == null)
return NotFound($"Child with id {parent.ChildId} could not be found.");
_db.Parents.Add(parent);
await _db.SaveChangesAsync();
return parent;
} versus: public async Task<ActionResult<Parent>> CreateParent(Parent parent)
{
_db.Parents.Add(parent);
await _db.SaveChangesAsync(); // handle exception somewhere globally when child with the specified id doesn't exist...
return parent;
} The second example in Postgres will throw a 23503 foreign_key_violation error: https://www.postgresql.org/docs/9.4/static/errcodes-appendix.html The downside of handling exceptions this way in an ORM like EF is that it will work only with a specific database backend. If you ever wanted to switch to SQL Server or something else this will not work anymore because the error code will change. Not formatting the exception properly for the end-user could expose some things you don't want anybody but developers to see. Related: https://stackoverflow.com/questions/6171588/preventing-race-condition-of-if-exists-update-else-insert-in-entity-framework https://stackoverflow.com/questions/4189954/implementing-if-not-exists-insert-using-entity-framework-without-race-conditions https://stackoverflow.com/questions/308905/should-there-be-a-transaction-for-read-queries | Checking for uniqueness and then setting is an antipattern; it can always happen that the ID is inserted concurrently between checking time and writing time. Databases are equipped to deal with this problem through mechanisms like constraints and transactions; most programming languages aren't. Therefore, if you value data consistency, leave it to the expert (the database), i.e. do the insert and catch an exception if it occurs. A sketch of what catching that can look like with EF Core and Npgsql follows below. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/378670",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/269824/"
]
} |
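A rough sketch of the "do the insert and catch the exception" approach, wired to the Postgres error code (23503) the question cites. It reuses the question's controller shape; DbUpdateException, PostgresException and SqlState are EF Core/Npgsql types whose exact names and nesting can vary between versions, so treat this as an assumption to verify rather than a drop-in implementation.

public async Task<ActionResult<Parent>> CreateParent(Parent parent)
{
    _db.Parents.Add(parent);
    try
    {
        await _db.SaveChangesAsync();
    }
    catch (DbUpdateException ex)
        when ((ex.InnerException as PostgresException)?.SqlState == "23503")
    {
        // Foreign key violation: the referenced child row does not exist
        // (or was deleted concurrently). Translate it into a friendly 404.
        return NotFound($"Child with id {parent.ChildId} could not be found.");
    }
    return parent;
}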
378,682 | In C#, the DateTime property Month has a type of int (a 32-bit signed integer) yet its range will only ever be 1-12. What are the reasons the C# team chose int over a smaller numeric type such as byte (an 8-bit unsigned integer)? | int is used for almost all integer variables in .NET although often a smaller type would be enough. Also, unsigned types are almost never used although they could be. Some reasons: Signed and unsigned types as well as integer types of different sizes can be awkward when combining them ( + or < for example). The rules are not obvious. I'm an experienced developer and I could not tell you the full set of rules. I do not need to know. int is fast on all common architectures. Smaller types often result in conversions which can be slower. Performance is not an issue for 99% of typical code. No need to overthink this. Just use int everywhere. Readability is very good because the intention is clear. A byte would suggest binary data for example. (See comment by Flater.) It's a useful convention to use int . A small example of the first point follows below. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/378682",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/185177/"
]
} |
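A small illustration of the first bullet in the answer (the awkwardness of combining narrow types): C# arithmetic on byte operands is performed in int, so a narrow field forces casts without buying much. The variable names here are made up for illustration.

byte month = 1;
byte offset = 2;
// byte next = month + offset;        // does not compile: '+' promotes both operands to int
byte next = (byte)(month + offset);   // an explicit cast back is required
int sameSum = month + offset;         // with int there is nothing to think about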
378,686 | Context Currently I'm struggling with correctly modeling a small instruction set I want to send as JSON to an Android application to generate a list of UI parts. Right now it's layered as pretty much a single cascading tree of objects/conditions and meta-data. Small example: {
"instruction": {
"remarks": "",
"steps": [
{
"id": "1234",
"type": "text",
"text": "Welcome!"
},
{
"..": "..",
"type": "image",
"url": "example.org/image.png"
},
..
]
} But now my instruction set is growing with parts that can sometimes be ignored, not present and parts that need to be filled in. I also want to make the JSON easy to parse, so I think I probably need to make some sort of simple java classes out of it for a parsing library like GSON. Questions How should I model this? Should I stick to making a single large JSON document with embedded objects that then translate to separate simple java classes (maybe DTO) files? Or do I need to rethink my instruction set and make it in to several JSON documents (like objects) that include document references? I'm also a bit confused on how I then would translate those references (if I used them) in to a JSON field that can be read by a library like GSON back in to java code. The referencing and embedding documents concepts I mentioned are from MongoDB Data Model Design . Since I figured it would probably be easy to save my JSON in a NoSQL database like MongoDB. Is there maybe another technique I'd be better off using to describe my JSON data at this point such as JSON Schema's ? | int is used for almost all integer variables in .NET although often a smaller type would be enough. Also, unsigned types are almost never used although they could be. Some reasons: Signed and unsigned types as well as integer types of different size can be awkward when combining them ( + or < for example). The rules are not obvious. I'm an experienced developer and I could not tell you the full set of rules. I do not need to know. int is fast on all common architectures. Smaller types often result in conversions which can be slower. Performance is not an issue for 99% of typical code. No need to overthink this. Just use int everywhere. Readability is very good because the intention is clear. A byte would suggest binary data for example. (See comment by Flater.) It's a useful convention to use int . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/378686",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/315664/"
]
} |
378,931 | Is there any engineering reason why is it like that? I was wondering in the case of a RDBMS that it had something to do with performance, since a "YEAR" is more specific than a "MONTH", for instance: you only have one year 2000, but every year have "January", which would make it easier/faster to filter/sort something by year first, and that's why the year comes first. But I don't know if that really makes sense... Is there any reason at all? | This way, the dates can easily be sorted as strings using the default sorting rules (i.e. lexicographical sorting ). This is also why both month and day are specified using two digits (adding a leading zero if needed). In fact it is one of the date formats defined by ISO 8601 . That standard also defines a date-and-time format, 2015-03-27T15:26:40Z , which is also sortable as strings. However, YYYYMMDD has an added benefit of making it possible to easily (no substrings or character replacements involved) parse the string as an integer, and still use default ordering on integers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/378931",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/192346/"
]
} |
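A short illustration of the sorting property described in the answer; the sample dates are arbitrary.

var dates = new[] { "2015-03-27", "1999-12-31", "2015-01-02" };
Array.Sort(dates);   // plain string sort
// dates is now { "1999-12-31", "2015-01-02", "2015-03-27" }: lexicographic order
// coincides with chronological order because the fields run from most to least significant.
int compact = int.Parse("20150327");   // the YYYYMMDD variant also parses straight to a sortable integer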
379,095 | I like SOLID, and I try my best to use and apply it when I'm developing. But I can't help but feel as though the SOLID approach turns your code into 'framework' code - i.e. code you would design if you were creating a framework or library for other developers to use. I've generally practiced 2 modes of programming - creating more or less exactly what is asked via requirements and KISS (typical programming), or creating very generic and reusable logic, services, etc that provide the flexibility other developers may need (framework programming). If the user really just wants an application to do x and y things, does it make sense to follow SOLID and add in a whole bunch of entry points of abstraction, when you don't even know if that is even a valid problem to begin with? If you do add these entry points of abstraction, are you really fulfilling the user's requirements, or are you creating a framework on top of your existing framework and tech stack to make future additions easier? In which case are you serving the interests of the customer, or of the developer? This is something that seems common in the Java Enterprise world, where it feels as though you're designing your own framework on top of J2EE or Spring so that it's a better UX for the developer, instead of focusing on UX for the user? | Your observation is correct, the SOLID principles are IMHO made with reusable libraries or framework code in mind. When you just follow all of them blindly, without asking if it makes sense or not, you risk overgeneralizing and investing a lot more effort into your system than is probably necessary. This is a trade-off, and it needs some experience to make the right decisions about when to generalize and when not. A possible approach to this is to stick to the YAGNI principle - do not make your code SOLID "just in case" - or, to use your words: do not provide the flexibility other developers may need ; instead, provide the flexibility other developers actually need as soon as they need it , but not earlier. So whenever you have a function or class in your code and you are not sure if it could be reused, don't put it into your framework right now. Wait until you have an actual case for reusage and refactor to "SOLID enough for that case". Don't implement more configurability (following the OCP), or entry points of abstraction (using the DIP), into such a class than you really need for the actual reusage case. Add the next flexibility when the next requirement for reusage is actually there. Of course, this way of working will always require some amount of refactoring of the existing, working code base. That is why automatic tests are important here. So making your code SOLID enough right from the start to have it unit-testable is not a waste of time, and doing so does not contradict YAGNI. Automatic tests are a valid case for "code reusage", since the code at stake is used from production code as well as from tests. But keep in mind, just add the flexibility you actually need for making the tests work, no less, no more. This is actually old wisdom. Long ago, before the term SOLID got popular, someone told me that before we try to write reusable code, we should write usable code. And I still think this is a good recommendation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/379095",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/74674/"
]
} |
379,575 | I frequently work with very numeric / mathematical programs, where the exact result of a function is difficult to predict in advance. In trying to apply TDD with this kind of code, I often find writing the code under test significantly easier than writing unit tests for that code, because the only way I know to find the expected result is to apply the algorithm itself (whether in my head, on paper, or by the computer). This feels wrong, because I am effectively using the code under test to verify my unit tests, instead of the other way around. Are there known techniques for writing unit tests and applying TDD when the result of the code under test is difficult to predict? A (real) example of code with difficult to predict results: A function weightedTasksOnTime that, given an amount of work done per day workPerDay in range (0, 24], the current time initialTime > 0, and a list of tasks taskArray ; each with a time to complete property time > 0, due date due , and importance value importance ; returns a normalized value in range [0, 1] representing the importance of tasks that can be completed before their due date if each task if completed in the order given by taskArray , starting at initialTime . The algorithm to implement this function is relatively straightforward: iterate over tasks in taskArray . For each task, add time to initialTime . If the new time < due , add importance to an accumulator. Time is adjusted by inverse workPerDay. Before returning the accumulator, divide by sum of task importances to normalize. function weightedTasksOnTime(workPerDay, initialTime, taskArray) {
let simulatedTime = initialTime
let accumulator = 0;
for (task in taskArray) {
simulatedTime += task.time * (24 / workPerDay)
if (simulatedTime < task.due) {
accumulator += task.importance
}
}
return accumulator / totalImportance(taskArray)
} I believe the above problem can be simplified, while maintaining its core, by removing workPerDay and the normalization requirement, to give: function weightedTasksOnTime(initialTime, taskArray) {
let simulatedTime = initialTime
let accumulator = 0;
for (task in taskArray) {
simulatedTime += task.time
if (simulatedTime < task.due) {
accumulator += task.importance
}
}
return accumulator
} This question addresses situations where the code under test is not a re-implementation of an existing algorithm. If code is a re-implementation, it intrinsically has easy to predict results, because existing trusted implementations of the algorithm act as a natural test oracle. | There are two things you can test in difficult-to-test code. First, the degenerate cases. What happens if you have no elements in your task array, or only one, or two but one is past the due date, etc. Anything that is simpler than your real problem, but still reasonable to calculate manually. The second is the sanity checks. These are the checks you do where you don't know if an answer is right , but you definitely would know if it's wrong . These are things like time must move forward, values must be in a reasonable range, percentages must add up to 100, etc. Yes, this isn't as good as a full test, but you'd be surprised how often you mess up on the sanity checks and degenerate cases, that reveals a problem in your full algorithm. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/379575",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/317002/"
]
} |
379,879 | Are there cases where more verbose code (as in more logical statements) is cleaner than more concise code? | To answer that, let's take a real-world example that happened to me. In a C# library that I maintain, I had the following code: TResult IConsFuncMatcher<T, TResult>.Result() =>
TryCons(_enumerator) is var simpleMatchData && !simpleMatchData.head.HasValue
? _emptyValue.supplied
? _emptyValue.value
: throw new NoMatchException("No empty clause supplied");
: _recursiveConsTests.Any()
? CalculateRecursiveResult()
: CalculateSimpleResult(simpleMatchData); Discussing this with peers, the unanimous verdict was that the nested ternary expressions, coupled with the "clever" use of is var resulted in terse, but difficult to read code. So I refactored it to: TResult IConsFuncMatcher<T, TResult>.Result()
{
var simpleMatchData = TryCons(_enumerator);
if (!simpleMatchData.head.HasValue)
{
return _emptyValue.supplied
? _emptyValue.value
: throw new NoMatchException("No empty clause supplied");
}
return _recursiveConsTests.Any()
? CalculateRecursiveResult()
: CalculateSimpleResult(simpleMatchData);
} The original version contained just one compound expression with an implicit return . The new version now contains an explicit variable declaration, an if statement and two explicit returns . It contains more statements and more lines of code. Yet everyone I consulted considered it easier to read and reason, which are key aspects of "clean code". So the answer to your question is an emphatic "yes", more verbose can be cleaner than concise code and is thus a valid refactoring. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/379879",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6871/"
]
} |
379,996 | I'm no software engineer.
I'm a PhD student in the field of geoscience. Almost two years ago I started programming a piece of scientific software. I never used continuous integration (CI), mainly because at first I didn't know it existed and I was the only person working on this software. Now that the base of the software is running, other people are starting to get interested in it and want to contribute to the software. The plan is that other people at other universities will implement additions to the core software. (I'm scared they could introduce bugs). Additionally, the software got quite complex and became harder and harder to test, and I also plan to continue working on it. Because of these two reasons, I'm now more and more thinking about using CI.
Since I never had a software engineering education and nobody around me has ever heard of CI (we are scientists, not programmers), I find it hard to get started on my project. I have a couple of questions where I would like to get some advice: First of all, a short explanation of how the software works: The software is controlled by one .xml file containing all required settings. You start the software by simply passing the path to the .xml file as an input argument and it runs and creates a couple of files with the results. One single run can take ~ 30 seconds. It is scientific software. Almost all of the functions have multiple input parameters, whose types are mostly classes which are quite complex. I have multiple .txt files with big catalogs which are used to create instances of these classes. Now let's come to my questions: unit tests, integration tests, end-to-end tests? :
My software is now around 30,000 lines of code with hundreds of functions and ~80 classes.
It feels kind of strange to me to start writing unit tests for hundreds of functions which are already implemented.
So I thought about simply creating some test cases. Prepare 10-20 different .xml files and let the software run. I guess this is what is called end-to-end tests? I often read that you should not do this, but maybe it is ok as a start if you already have a working software? Or is it simply a dumb idea to try to add CI to an already working software. How do you write unit tests if the function parameters are difficult to create? assume I have a function double fun(vector<Class_A> a, vector<Class_B>) and usually, I would need to first read in multiple text files to create objects of type Class_A and Class_B . I thought about creating some dummy functions like Class_A create_dummy_object() without reading in the text files. I also thought about implementing some kind of serialization . (I do not plan to test the creation of the class objects since they only depend on multiple text files) How to write tests if results are highly variable? My software makes use of big monte-carlo simulations and works iteratively. Usually, you have ~1000 iterations and at every iteration, you are creating ~500-20.000 instances of objects based on monte-carlo simulations. If only one result of one iteration is a bit different the whole upcoming iterations are completely different. How do you deal with this situation? I guess this a big point against end-to-end tests, since the end result is highly variable? Any other advice with CI is highly appreciated. | Testing scientific software is difficult, both because of the complex subject matter and because of typical scientific development processes (aka. hack it until it works, which doesn't usually result in a testable design). This is a bit ironic considering that science should be reproducible. What changes compared to βnormalβ software is not whether tests are useful (yes!), but which kinds of test are appropriate. Handling randomness: all runs of your software MUST be reproducible. If you use Monte Carlo techniques, you must make it possible to provide a specific seed for the random number generator. It is easy to forget this e.g. when using C's rand() function which depends on global state. Ideally, a random number generator is passed as an explicit object through your functions. C++11's random standard library header makes this a lot easier. Instead of sharing random state across modules of the software, I've found it useful to create a second RNG which is seeded by a random number from the first RNG. Then, if the number of requests to the RNG by the other module changes, the sequence generated by the first RNG stays the same. Integration tests are perfectly fine. They are good at verifying that different parts of your software play together correctly, and for running concrete scenarios. As a minimum quality level βit doesn't crashβ can already be a good test result. For stronger results, you will also have to check the results against some baseline. However, these checks will have to be somewhat tolerant, e.g. account for rounding errors. It can also be helpful to compare summary statistics instead of full data rows. If checking against a baseline would be too fragile, check that the outputs are valid and satisfy some general properties. These can be general (βselected locations must be at least 2km apartβ) or scenario-specific, e.g. βa selected location must be within this areaβ. When running integration tests, it is a good idea to write a test runner as a separate program or script. 
This test runner performs necessary setup, runs the executable to be tested, checks any results, and cleans up afterwards. Unit test style checks can be quite difficult to insert into scientific software because the software has not been designed for that. In particular, unit tests get difficult when the system under test has many external dependencies/interactions. If the software is not purely object-oriented, it is not generally possible to mock/stub those dependencies. I've found it best to largely avoid unit tests for such software, except for pure math functions and utility functions. Even a few tests are better than no tests. Combined with the check βit has to compileβ that's already a good start into continuous integration. You can always come back and add more tests later. You can then prioritize areas of the code that are more likely to break, e.g. because they get more development activity. To see which parts of your code are not covered by unit tests, you can use code coverage tools. Manual testing: Especially for complex problem domains, you will not be able to test everything automatically. E.g. I'm currently working on a stochastic search problem. If I test that my software always produces the same result, I can't improve it without breaking the tests. Instead, I've made it easier to do manual tests: I run the software with a fixed seed and get a visualization of the result (depending on your preferences, R, Python/Pyplot, and Matlab all make it easy to get high-quality visualizations of your data sets). I can use this visualization to verify that things did not go terribly wrong. Similarly, tracing the progress of your software via logging output can be a viable manual testing technique, at least if I can select the type of events to be logged. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/379996",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/317634/"
]
} |
380,001 | I was reading this page and came across this sentence in the accepted answer: I don't like A directly knowing about B . But that's a DIP thing
not a POJO thing. What if you can't abstract out B so that A isn't aware of it? Suppose if I have the following Book class: public final class Book {
private final Author author;
// constructor and methods left out :D.
}
public final Author {
private final String firstName;
private final String lastName;
// constructor and methods left out :D.
} I'm aware of the fact that a book might have a list of authors, so it would be better if it was List<Author> authors , but I want to focus on a specific part of the above code sample. Some might point out that the Book class knows about the Author object. I don't see why Author would be an interface or abstract class. Question: Would you abstract out Author ? If so, how? | Testing scientific software is difficult, both because of the complex subject matter and because of typical scientific development processes (aka. hack it until it works, which doesn't usually result in a testable design). This is a bit ironic considering that science should be reproducible. What changes compared to βnormalβ software is not whether tests are useful (yes!), but which kinds of test are appropriate. Handling randomness: all runs of your software MUST be reproducible. If you use Monte Carlo techniques, you must make it possible to provide a specific seed for the random number generator. It is easy to forget this e.g. when using C's rand() function which depends on global state. Ideally, a random number generator is passed as an explicit object through your functions. C++11's random standard library header makes this a lot easier. Instead of sharing random state across modules of the software, I've found it useful to create a second RNG which is seeded by a random number from the first RNG. Then, if the number of requests to the RNG by the other module changes, the sequence generated by the first RNG stays the same. Integration tests are perfectly fine. They are good at verifying that different parts of your software play together correctly, and for running concrete scenarios. As a minimum quality level βit doesn't crashβ can already be a good test result. For stronger results, you will also have to check the results against some baseline. However, these checks will have to be somewhat tolerant, e.g. account for rounding errors. It can also be helpful to compare summary statistics instead of full data rows. If checking against a baseline would be too fragile, check that the outputs are valid and satisfy some general properties. These can be general (βselected locations must be at least 2km apartβ) or scenario-specific, e.g. βa selected location must be within this areaβ. When running integration tests, it is a good idea to write a test runner as a separate program or script. This test runner performs necessary setup, runs the executable to be tested, checks any results, and cleans up afterwards. Unit test style checks can be quite difficult to insert into scientific software because the software has not been designed for that. In particular, unit tests get difficult when the system under test has many external dependencies/interactions. If the software is not purely object-oriented, it is not generally possible to mock/stub those dependencies. I've found it best to largely avoid unit tests for such software, except for pure math functions and utility functions. Even a few tests are better than no tests. Combined with the check βit has to compileβ that's already a good start into continuous integration. You can always come back and add more tests later. You can then prioritize areas of the code that are more likely to break, e.g. because they get more development activity. To see which parts of your code are not covered by unit tests, you can use code coverage tools. Manual testing: Especially for complex problem domains, you will not be able to test everything automatically. E.g. 
I'm currently working on a stochastic search problem. If I test that my software always produces the same result, I can't improve it without breaking the tests. Instead, I've made it easier to do manual tests: I run the software with a fixed seed and get a visualization of the result (depending on your preferences, R, Python/Pyplot, and Matlab all make it easy to get high-quality visualizations of your data sets). I can use this visualization to verify that things did not go terribly wrong. Similarly, tracing the progress of your software via logging output can be a viable manual testing technique, at least if I can select the type of events to be logged. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/380001",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
380,021 | I am in a situation where I can use an open source JavaScript plugin to fulfill a task. But when I tried to use it, I found I had to redesign a lot of what I had already done, and it adds a certain complexity, in my humble opinion, to the project. Whereas I can achieve the same task with clean code I can craft myself, and without needing to change what I have done so far. Should you opt for a library anyway in this situation (for the sake of better quality code, for example)? | As an engineer, perhaps it is suitable to think of this as an optimization problem. Naturally we must have an optimization goal . A common one in this sort of situation would be to minimize Total Cost of Ownership . If you believe adding the third party component will save cost in the long run, you should use it. If you don't, you shouldn't. Make sure you consider the cost of ongoing maintenance (for example, when a new version of the O/S is released, or a security flaw is found, or some new W3C specification is released). For many trivial problems, it will be lower cost to grow your own, but for moderately complex problems outside your organization's core competency, it will often make sense to go third party. There are other goals to consider too (e.g. risk) but TCO is the big one. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/380021",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/214828/"
]
} |
380,034 | It's a sort of simple compression where you use one numeric variable to store many boolean / binary states, using doubling and the fact that every doubling number is 1 + the sum of all the previous ones. I'm sure it must be an old, well-known technique, I'd like to know what it's called to refer to it properly. I've done several searches on every way I can think of to describe it, but found nothing beyond some blog articles where the article authors seem to have figured this out themselves and don't know what to call it, either ( example 1 , example 2 ). For example, here's a very simple implementation intended to illustrate the concept: packStatesIntoNumber () {
let num = 0
if (this.stateA) num += 1
if (this.stateB) num += 2
if (this.stateC) num += 4
if (this.stateD) num += 8
if (this.stateE) num += 16
if (this.stateF) num += 32
return num
}
unpackStatesFromNumber (num) {
assert(num < 64)
this.stateF = num >= 32; if (this.stateF) num -= 32
this.stateE = num >= 16; if (this.stateE) num -= 16
this.stateD = num >= 8; if (this.stateD) num -= 8
this.stateC = num >= 4; if (this.stateC) num -= 4
this.stateB = num >= 2; if (this.stateB) num -= 2
this.stateA = num >= 1; if (this.stateA) num -= 1
} You could also use bitwise operators, base 2 number parsing, enums... There are many more efficient ways to implement it; I'm interested in the name of the approach more generally. | It's most commonly referred to as a bit field , and another term you'll often hear is bit masks , which are used to get or set individual bit values or the entire bit field at once. Many programming languages have auxiliary structures to help with this. As @BernhardHiller notes in the comments, C# has enums with flags ; Java has the EnumSet class. A short C# flags-enum example follows below. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/380034",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/95836/"
]
} |
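The C# construct the answer points to ("enums with flags"), sketched for the six states in the question; the member names mirror the question's fields and the values are the same powers of two, so they are illustrative rather than prescriptive.

[Flags]
public enum States
{
    None   = 0,
    StateA = 1 << 0,   // 1
    StateB = 1 << 1,   // 2
    StateC = 1 << 2,   // 4
    StateD = 1 << 3,   // 8
    StateE = 1 << 4,   // 16
    StateF = 1 << 5    // 32
}

// Packing and unpacking become bitwise operations on a single value:
States packed = States.StateA | States.StateC;   // numeric value 5
bool hasC = packed.HasFlag(States.StateC);       // true
packed &= ~States.StateB;                        // clears a bit (a no-op here, since B was not set)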
380,251 | In his book 'Clean Architecture', Uncle Bob says that the Presenter should put the data that it receives into something he calls the 'View Model'. Is this the same thing as the 'ViewModel' from the Model-View-ViewModel (MVVM) design pattern or is it a simple Data Transfer Object (DTO)? If it is not a simple DTO, how does it relate to the View?
Does the view get updates from it through an Observer relationship? My guess is that it is more like the ViewModel from MVVM, because in Chapter 23 of his book, Robert Martin says: [The Presenter's] job is to accept data from the application and format it for presentation so that the View can simply move it to the screen. For example, if the application wants a date displayed in a field, it will hand the Presenter a Date object. The Presenter will then format that data into an appropriate string and place it in a simple data structure called the View model, where the View can find it. This implies that the View is somehow connected to the ViewModel, as opposed to simply receiving it as a function argument, for example (as would be the case with a DTO). Another reason I think this is that, if you look at the image, the Presenter uses the View Model, but not the View.
Whereas the Presenter uses both the Output Boundary and the Output Data DTO. If it is neither a DTO nor the ViewModel from MVVM, please elaborate as to what it is. | Is this the same thing as the 'ViewModel' from the Model-View-ViewModel (MVVM) design pattern Nope. That would be this : That has cycles. Uncle Bob has been carefully avoiding cycles . Instead you have this: Which certainly doesn't have cycles. But it's leaving you wondering how the view knows about an update. We'll get to that in a moment. or is it a simple Data Transfer Object (DTO)? To quote Bob from the previous page: You can use basic structs or simple data transfer objects if you like. Or you can pack it into a hashmap, or construct it into an object. Clean Architecture p207 So, sure, if you like. But I strongly suspect what's really bugging you is this : This cute little abuse of UML contrasts the direction of source code dependency with the direction of flow of control. This is where the answer to your question can be found. In a using relationship: flow of control goes in the same direction the source code dependency does. In a implementing relationship: flow of control typically goes in the opposite direction the source code dependency does. Which means you're really looking at this: You should be able to see that the flow of control is never going to get from the Presenter to the View. How can that be? What does it mean? It means the view either has it's own thread (which is not that unusual) or (as @Euphoric points out) flow of control is coming into the view from something else not depicted here. If it's the same thread then the View will know when the View-Model is ready to be read. But if that's the case and the view is a GUI then it will have a hard time repainting the screen when the user moves it around while they wait for the DB. If the view has it's own thread then it has its own flow of control. That means to implement this the View will have to poll the View-Model to notice changes. Since the Presenter doesn't know the View exists and the View doesn't know the Presenter exists they can't call each other at all. They can't fling events at each other. All that can happen is the Presenter will write to the View-Model and the View will read the View-Model. Whenever it feels like it. According to this diagram the only thing the View and the Presenter share is knowledge of the View-Model. And it's just a data structure. So don't expect it to have any behavior. That might seem impossible but it can be made to work even if the View-Model is complex. One little incrementing update field is all the view would have to poll to detect a change. Now of course you can insist on using the observer pattern, or have some frameworky thing hide this issue from you but please understand that you don't have to. Here's a bit of fun I had illustrating the flow of control: Note that whenever you see the flow going against the directions I defined before, what you are seeing is a call returning. That trick won't help us get to the View. Well, unless we first return to whatever called the Controller. Or you could just change the design so that you can get to the view. That also fixes what looks like the start of a yo-yo problem with Data Access and it's Interface. The only other thing to learn here besides that is that the Use Case Interactor can pretty much call things in whatever order it wants as long as it calls the presenter last. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/380251",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/283363/"
]
} |
380,287 | Context: I am currently working on a small project in Python. I commonly structure my classes with some public methods that are documented but mainly deal with the high level concepts (what a user of the class should know and use), and a bunch of hidden (starting with underscore) methods which are in charge of the complex or low level processing. I know that tests are essential to give confidence in the code and to ensure that any later modification has not broken the previous behaviour. Problem: In order to build the higher level public methods on a trusted base, I generally test the private methods. I find it easier to find whether a code modification has introduced regressions and where. It means that those internal tests can breake on minor revisions and will need to be fixed/replaced But I also know that unit testing private method is at least a disputed concept or more often considered as bad practice. The reason being: only public behaviour should be tested ( ref. ) Question: I do care about following best practices and would like to understand: why is using unit tests on private/hidden methods bad (what is the risk)? what are the best practices when the public methods can use low level and/or complex processing ? Precisions: it is not a how to question. Python has not true concept of privacy and hidden methods are simply not listed but can be used when you know their name I have never been taught programming rules and patterns: my last classes are from the 80's... I have mainly learned languages by trial and failure and references on Internet (Stack Exchange being my favourite for years) | A couple of reasons: Typically when you're tempted to test a class's private method, it's a design smell (iceberg class, not enough reusable public components, etc). There's almost always some "larger" issue at play. You can test them through the public interface (which is how you want to test them, because that's how the client will call/use them). You can get a false sense of security by seeing the green light on all the passing tests for your private methods. It is much better/safer to test edge cases on your private functions through your public interface. You risk severe test duplication (tests that look/feel very similar) by testing private methods. This has major consequences when requirements change, as many more tests than necessary will break. It can also put you in a position where it is hard to refactor because of your test suite...which is the ultimate irony, because the test suite is there to help you safely redesign and refactor! A tip if you're still tempted to test the private parts (don't use it if it bothers you, and YMMV, but it has worked well for me in the past): Sometimes writing unit tests for private functions just to make sure they're working exactly how you think they are can be valuable (especially if you are new to a language). However, after you're sure they work, delete the tests, and always ensure that the public facing tests are solid and will catch if someone makes an egregious change to said private function. When to test private methods: Since this answer has gotten (somewhat) popular, I feel obligated to mention that a "best practice" is always just that: a "best practice". It doesn't mean you should do it dogmatically or blindly. If you think you should test your private methods and have a legitimate reason (like you're writing characterization tests for a legacy application), then test your private methods . 
Specific circumstances always trump any general rule or best practice. Just be aware of some of the things that can go wrong (see above). I have an answer that goes over this in detail on SO which I'll not repeat here: https://stackoverflow.com/questions/105007/should-i-test-private-methods-or-only-public-ones/47401015#47401015 | {
"source": [
"https://softwareengineering.stackexchange.com/questions/380287",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/242770/"
]
} |
380,344 | XSLT is a mature, widely accepted standard. It can be used in browsers (even in old IE) and on the server side (nginx has an XSLT module, which can be used from programming languages, of course). Its implementations are compiled and, therefore, should be much faster than Python or JS. The JS implementation Saxon JS can be used, at least, as a fallback. Jinja, Angular, Ruby's Slim, ASP and PHP templating are not even close. An XSL template can be easily validated in an IDE. How many IDEs can help with Jinja or Angular? It looks like it's a perfect idea to decompose UI and data with XSLT. Admittedly, implementations can give different results in some corner cases, but it's a problem only with templating on the client side. And it's same with HTML, CSS and everything else that is done on the client side. So, why not XSLT? | XSLT does not really have a useful role in the modern interactive web. The purpose of XSLT is to transform from one XML language into another - but you actually never need to do that in the first place. How powerful, fast and well supported a technology is is irrelevant if you don't have the problem which the technology is designed to solve. There are several reasons why the use case for XSLT has gone away: HTML has won. XSLT was supposed to be useful for transforming "rich text" content in some semantic markup format into HTML. But HTML is in itself a perfectly fine format, so why not just use that for the content in the first place and skip the transformation? CSS has become much more powerful. One of the promises of XSLT was that you could keep the source markup clean and semantic and then transform it into "presentational HTML" which worked cross browser and where you could rearrange elements and so on. But you don't really need presentational HTML these days, you can use semantic HTML and CSS can perform the necessary styling and layout. XML has not become the ubiquitous format for data. When fetching SQL data from a database it is much simpler to just directly merge it into a template, rather than first transform it into XML and then transform it via XSLT. And JSON has all but replaced XML for structured data on the client side. XSLT is designed for transforming a whole document at a time. But in modern interactive web pages, small snippets of data are downloaded piecemeal all the time and merged into the page. Data is just not so complex. For the majority of use cases, simpler template formats with placeholders and repeaters solve the task fine. XSLT is more much more powerful, but you rarely need that extra power, and it has a steep cost in complexity and uglyness. XSLT grew out of publishing where you could have a one-way process from one structured source format to multiple publishing format like print, PDF and static web pages. Most web sites does not fit this use case. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/380344",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/120877/"
]
} |
380,347 | If I define a variable of a certain type (which, as far as I know, just allocates data for the content of the variable), how does it keep track of which type of variable it is? | Variables (or more generally: βobjectsβ in the sense of C) do not store their type at runtime. As far as machine code is concerned, there is only untyped memory. Instead, the operations on this data interpret the data as a specific type (e.g. as a float or as a pointer). The types are only used by the compiler. For example, we might have a struct or class struct Foo { int x; float y; }; and a variable Foo f {} . How can a field access auto result = f.y; be compiled? The compiler knows that f is an object of type Foo and knows the layout of Foo -objects. Depending on platform-specific details, this might be compiled as βTake the pointer to the start of f , add 4 bytes, then load 4 bytes and interpret this data as a float.β In many machine code instruction sets (incl. x86-64) there are different processor instructions for loading floats or ints. One example where the C++ type system cannot keep track of the type for us is an union like union Bar { int as_int; float as_float; } . An union contains up to one object of various types. If we store an object in an union, this is the union's active type. We must only try to get that type back out of the union, anything else would be undefined behavior. Either we βknowβ while programming what the active type is, or we can create a tagged union where we store a type tag (usually an enum) separately. This is a common technique in C, but because we have to keep the union and the type tag in sync this is fairly error prone. A void* pointer is similar to an union but can only hold pointer objects, except function pointers. C++ offers two better mechanisms to deal with objects of unknown types: We can use object-oriented techniques to perform type erasure (only interact with the object through virtual methods so that we don't need to know the actual type), or we can use std::variant , a kind of type-safe union. There is one case where C++ does store the type of an object: if the class of the object has any virtual methods (a βpolymorphic typeβ, aka. interface). The target of a virtual method call is unknown at compile time and is resolved at run time based on the dynamic type of the object (βdynamic dispatchβ). Most compilers implement this by storing a virtual function table (βvtableβ) at the start of the object. The vtable can also be used to get the type of the object at runtime. We can then draw a distinction between the compile-time known static type of an expression, and the dynamic type of an object at runtime. C++ allows us to inspect the dynamic type of an object with the typeid() operator which gives us a std::type_info object. Either the compiler knows the type of the object at compile time, or the compiler has stored the necessary type information inside the object and can retrieve it at runtime. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/380347",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/318238/"
]
} |
380,350 | Do I need to create a different use case diagram for my mobile app and for my web app? I have a mobile and a web app but mobile users cannot access the web application. Only the admins can do that. | Variables (or more generally: βobjectsβ in the sense of C) do not store their type at runtime. As far as machine code is concerned, there is only untyped memory. Instead, the operations on this data interpret the data as a specific type (e.g. as a float or as a pointer). The types are only used by the compiler. For example, we might have a struct or class struct Foo { int x; float y; }; and a variable Foo f {} . How can a field access auto result = f.y; be compiled? The compiler knows that f is an object of type Foo and knows the layout of Foo -objects. Depending on platform-specific details, this might be compiled as βTake the pointer to the start of f , add 4 bytes, then load 4 bytes and interpret this data as a float.β In many machine code instruction sets (incl. x86-64) there are different processor instructions for loading floats or ints. One example where the C++ type system cannot keep track of the type for us is an union like union Bar { int as_int; float as_float; } . An union contains up to one object of various types. If we store an object in an union, this is the union's active type. We must only try to get that type back out of the union, anything else would be undefined behavior. Either we βknowβ while programming what the active type is, or we can create a tagged union where we store a type tag (usually an enum) separately. This is a common technique in C, but because we have to keep the union and the type tag in sync this is fairly error prone. A void* pointer is similar to an union but can only hold pointer objects, except function pointers. C++ offers two better mechanisms to deal with objects of unknown types: We can use object-oriented techniques to perform type erasure (only interact with the object through virtual methods so that we don't need to know the actual type), or we can use std::variant , a kind of type-safe union. There is one case where C++ does store the type of an object: if the class of the object has any virtual methods (a βpolymorphic typeβ, aka. interface). The target of a virtual method call is unknown at compile time and is resolved at run time based on the dynamic type of the object (βdynamic dispatchβ). Most compilers implement this by storing a virtual function table (βvtableβ) at the start of the object. The vtable can also be used to get the type of the object at runtime. We can then draw a distinction between the compile-time known static type of an expression, and the dynamic type of an object at runtime. C++ allows us to inspect the dynamic type of an object with the typeid() operator which gives us a std::type_info object. Either the compiler knows the type of the object at compile time, or the compiler has stored the necessary type information inside the object and can retrieve it at runtime. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/380350",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/318241/"
]
} |
380,397 | I have seen many implementations of the Builder pattern (mainly in Java). All of them have an entity class (let's say a Person class), and a builder class PersonBuilder . The builder "stacks" a variety of fields and returns a new Person with the arguments passed. Why do we explicitly need a builder class, instead of putting all the builder methods in the Person class itself? For example: class Person {
private String name;
private Integer age;
public Person() {
}
Person withName(String name) {
this.name = name;
return this;
}
Person withAge(int age) {
this.age = age;
return this;
}
} I can simply say Person john = new Person().withName("John"); Why the need for a PersonBuilder class? The only benefit I see is that we can declare the Person fields as final , thus ensuring immutability. | Why use/provide a builder class: To make immutable objects - the benefit you've identified already. Useful if the construction takes multiple steps. FWIW, immutability should be seen as a significant tool in our quest to write maintainable and bug-free programs. If the runtime representation of the final (possibly immutable) object is optimized for reading and/or space usage, but not for update. String and StringBuilder are good examples here. Repeatedly concatenating strings is not very efficient, so the StringBuilder uses a different internal representation that is good for appending - but not as good on space usage, and not as good for reading and using as the regular String class. To clearly separate constructed objects from objects under construction. This approach requires a clear transition from under-construction to constructed. For the consumer, there is no way to confuse an under-construction object with a constructed object: the type system will enforce this. That means sometimes we can use this approach to "fall into the pit of success", as it were, and, when making an abstraction for others (or ourselves) to use (like an API or a layer), this can be a very good thing. A minimal sketch of this separation follows below. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/380397",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/289455/"
]
} |
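A minimal sketch of the "under construction vs. constructed" separation described in the answer, written in C# with names mirroring the question's Person; it is illustrative rather than a canonical implementation.

public sealed class Person
{
    public string Name { get; }
    public int? Age { get; }

    private Person(string name, int? age) { Name = name; Age = age; }

    public sealed class Builder
    {
        private string _name;
        private int? _age;

        public Builder WithName(string name) { _name = name; return this; }
        public Builder WithAge(int age)      { _age = age;   return this; }

        // The only way to obtain a Person is Build(), so the type system keeps
        // half-built state (the Builder) distinct from the immutable result.
        public Person Build() => new Person(_name, _age);
    }
}

// Usage: Person john = new Person.Builder().WithName("John").Build();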
380,434 | At the end of a 2 week sprint and a task has a code review, in the review we discover a function that works, is readable, but it's quite long and has a few code smells. Easy refactor job. Otherwise the task fits the definition of done. We have two choices. Fail the code review, so that the ticket doesn't close in this sprint, and we take a little hit on morale, because we cannot pass off the ticket. The refactor is a small piece of work, and would get done in the next sprint (or even before it starts) as a tiny, half point story. My question is: are there any inherent problems or considerations with raising a ticket off of the back of a review, instead of failing it? The resources I can find and have read detail code reviews as 100% or nothing, usually, but I find that is usually not realistic. | are there any inherent problems or considerations with raising a ticket off of the back of a review, instead of failing it? Not inherently. For example, the implementation of the current change may have unearthed a problem which was already there, but wasn't known/apparent until now. Failing the ticket would be unfair as you'd fail it for something unrelated to the actually described task. in the review we discover a function However, I surmise that the function here is something that was added by the current change. In this case, the ticket should be failed as the code did not pass the smell test. Where would you draw the line, if not where you've already drawn it? You clearly don't think this code is sufficiently clean to stay in the codebase in its current form; so why would you then consider giving the ticket a pass? Fail the code review, so that the ticket doesn't close in this sprint, and we take a little hit on morale, because we cannot pass off the ticket. It seems to me like you're indirectly arguing that you are trying to give this ticket a pass to benefit team morale, rather than benefit the quality of the codebase. If that is the case, then you've got your priorities mixed. The standard of clean code should not be altered simply because it makes the team happier. The correctness and cleanliness of code does not hinge on the team's mood. The refactor is a small piece of work, and would get done in the next sprint (or even before it starts) as a tiny, half point story. If the implementation of the original ticket caused the code smell, then it should be addressed in the original ticket. You should only be creating a new ticket if the code smell cannot be directly attributed to the original ticket (for example, a "straw that broke the camel's back" scenario). The resources I can find and have read detail code reviews as 100% or nothing, usually, but I find that is usually not realistic. Pass/fail is inherently a binary state , which is inherently all or nothing. What you're referring to here, I think, is more that you interpret code reviews as requiring perfect code or otherwise failing it, and that is not the case. The code shouldn't be immaculate, it should simply comply with the reasonable standard of cleanliness that your team/company employs. Adherence to that standard is a binary choice: it adheres (pass) or it doesn't (fail). Based on your description of the issue, it's clear that you don't think that this adheres to the expected code standard, and thus it should not be passed for ulterior reasons such as team morale. Otherwise the task fits the definition of done. 
If "it gets the job done" were the best benchmark for code quality, then we wouldn't have had to invent the principle of clean code and good practice to begin with - the compiler and unit testing would already be our automated review process and you wouldn't need code reviews or style arguments. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/380434",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/241017/"
]
} |
380,579 | Suppose your team writes a software system that is (quite surprisingly) running fine. One day one of the engineers mistakenly runs some SQL queries that change some of the DB data, then forgets about it. After some time you discover the corrupted/erroneous data and everyone scratches their heads as to which part of the code caused this and why, to no avail. Meanwhile the project manager insists that we find the part of the code that caused it. How do you deal with this? | It is obvious no project manager will invest an infinite amount of time into such a problem. They want to prevent the same situation happening again. To achieve this goal, even if one cannot find the root cause of such a failure, it is often possible to take some measures to Detect such failures earlier in case they reoccur Make it less likely the same failure will happen again Make the system more robust against the specific kind of inconsistency For example, more detailed logging, more finegrained error handling, or immediate error signaling could help to prevent the same error striking again, or to find the root cause. If your system allows adding database triggers, maybe it is possible to add a trigger which forbids the inconsistency being introduced in the first place. Think of what the appropriate kind of action might be in your situation, and suggest this to the team; I am sure your project manager will be pleased. One day one of the engineers mistakenly runs some SQL queries that change some of the DB data, then forgets about it. As mentioned by others, it is also a good idea to forbid such a procedure (if you have influence on how the system is operated). No one should be allowed to run undocumented ad-hoc queries which change database content. If there is a need for such a query, make sure there is a policy to store the query together with its execution date, the name of the person who executed it, and the reason why it was used, in a documented place. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/380579",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/108346/"
]
} |
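To make the "detect such failures earlier" suggestion above concrete, here is a minimal, hypothetical Java sketch of a scheduled consistency check — the table, column, invariant and connection details are all invented placeholders, not something from the original answer: import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.logging.Logger;
public class ConsistencyCheck {
    private static final Logger LOG = Logger.getLogger(ConsistencyCheck.class.getName());
    // Hypothetical invariant: an order total must never be negative.
    private static final String CHECK_SQL = "SELECT COUNT(*) FROM orders WHERE total < 0";
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/shop", "reader", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(CHECK_SQL)) {
            rs.next();
            int violations = rs.getInt(1);
            if (violations > 0) {
                // Signal the problem immediately instead of discovering the corruption weeks later.
                LOG.severe(violations + " rows violate the 'non-negative total' invariant");
            }
        }
    }
} Run from a scheduler, such a check reports the inconsistency the day it appears rather than when someone stumbles over it.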
380,597 | I have learned a significant amount of coding, however, it's always been in a scientific environment (not computer science), completely self-taught without anyone to guide me in the right direction. Thus, my coding journey has been ... messy. I've noticed now that whenever I build some type of program, by the end, I realize how I could have done it far more elegantly, far more efficiently, and in a way that is far more flexible and easy to manage going forward. In some circumstances, I've actually gone back and rebuilt things from the ground up, but usually this is not practically feasible. While most of my programs so far have been relatively small, it seems quite unwieldy to completely rewrite large programs every time you create something. I'm just wondering, is this a normal experience? If not, how do you prevent this from happening? I've tried planning things in advance, but I can't seem to really foresee everything until I start hammering out some code. | This is a very common experience Most people I interact with, and I myself as well, feel like this. From what I can tell one reason for this is that you learn more about the domain and the tools you use as you write your code, which leads you to recognize many opportunities for improvement after you've already written your program. The other reason is that you might have an idea in your head about the ideal clean code solution and then the real world and its messy limitations get in your way, forcing you to write imperfect work-arounds and hacks that may leave you dissatisfied. "Everyone has a plan until they get punched in the face." What to do about it To some degree you will have to learn to accept that your code will never be perfect. One thing that helped me with that is the mindset of "If I hate the code I wrote a month ago, it means I have learned and become a better programmer". A way to alleviate the issue is to constantly be on the lookout for potential improvements as you work and refactor continuously. Make sure to hit a good balance between refactoring and adding new features / fixing bugs. This won't help with big design issues, but it will generally leave you with a more polished code base you can be proud of. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/380597",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/310103/"
]
} |
380,901 | for example, suppose I need to play different sounds according to "grade": file list: fairSound.mp3
goodSound.mp3
excellentSound.mp3 code: showResult(grade){
if(grade==0 || grade==1){
SoundUtility.play(fairSound);
}else if(grade==2 || grade==3){
SoundUtility.play(goodSound);
}else if(grade==4){
SoundUtility.play(excellentSound);
}
} but I don't want any branching in this case, so my question is: is it a good pattern or an anti-pattern to duplicate the sounds: file list: sound0.mp3 (copy from fairSound.mp3)
sound1.mp3 (copy from fairSound.mp3)
sound2.mp3 (copy from goodSound.mp3)
sound3.mp3 (copy from goodSound.mp3)
sound4.mp3 (copy from excellentSound.mp3) so that I don't need to write any branching in the program in this case: showResult(grade){
SoundUtility.play(sound[grade]);
} ? Is it violating the DRY principle? Note: I also don't want to put something like: var arr=[fairSound.mp3,fairSound.mp3,goodSound.mp3,goodSound.mp3,excellentSound.mp3]; into my code, because when I want to change which sound grade 0 plays, I still need to modify the code | If changing the sound files for each single grade without modifying the code is the requirement here, I would externalize the configuration (mapping). Create entries in your configuration mechanism (config files or database) which contain the sound file names for each grade. Instead of duplicating the sound files, I would rather list the same file name multiple times. Example for an .ini file: [resultSounds]
0=fairSound.mp3
1=fairSound.mp3
2=goodSound.mp3
3=goodSound.mp3
4=excellentSound.mp3 Read your configuration file into some data structure (like an array or a hash map) and use that in your method to find the file to play. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/380901",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/248528/"
]
} |
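A minimal Java sketch of the externalized mapping described in the answer above, using a standard properties file in place of the .ini; the file names come from the question and the play call is a stand-in, so treat all names as illustrative: import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
public class ResultSoundPlayer {
    private final Properties soundsByGrade = new Properties();
    public ResultSoundPlayer(String configPath) throws IOException {
        // resultSounds.properties contains lines such as "0=fairSound.mp3"
        try (FileInputStream in = new FileInputStream(configPath)) {
            soundsByGrade.load(in);
        }
    }
    public void showResult(int grade) {
        String file = soundsByGrade.getProperty(String.valueOf(grade));
        if (file == null) {
            throw new IllegalArgumentException("No sound configured for grade " + grade);
        }
        System.out.println("Playing " + file); // stand-in for SoundUtility.play(file)
    }
} Changing which sound a grade plays is now an edit to the configuration file, with no branching and no code change.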
381,047 | I'm a web developer of a small, local SaaS web application. It currently has about a half-dozen clients. As I continue to design the application, it's become increasingly harder for me to convince myself to commit any time to the project, which has happened in the beginning phase. Having grown attached to the project and the code I've already written, I'm scared that all additional work I commit will be overturned in the near future, when the app turns out to not scale well as the business grows. As a university student applying for internships, I've had employers question my choice in not using any web frameworks during interviews, which has only caused me to further doubt my previous work. I simply don't know any web frameworks, and don't know which one to start using. I've landed an internship as a full-stack developer in January, where I'll begin to learn front-end frameworks, but the pressure to finish the app is mounting, and I'm considering scrapping the app completely and starting over, which is something I've done before. The app currently is built in PHP and jQuery (for AJAX calls) and uses MySQL for its database. Any thoughts on how I can overcome this mental block, and to ensure my app will be scalable? Thanks in advance. | Perfect is the enemy of good. Or put another way, don't worry about it today. If your app does what it needs to do, then it's fine. It's not a bad thing to rewrite parts of software further down the line; by that point you 1) know more clearly what you're trying to build and 2) know which bits are actually the bottleneck. You could spend an enormous amount of time writing an app which would scale to a million users, but it wouldn't be any better for your current six customers than what you've got today . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/381047",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/319488/"
]
} |
381,407 | I am having a hard time looking for resources on why I should use dependency injection . Most of the resources that I see explain that it just passes an instance of an object to another instance of an object, but why? Is this just for cleaner architecture/code, or does it affect performance as a whole? Why should I do the following? class Profile {
public function deactivateProfile(Setting $setting)
{
$setting->isActive = false;
}
} Instead of the following? class Profile {
public function deactivateProfile()
{
$setting = new Setting();
$setting->isActive = false;
}
} | The advantage is that without dependency injection, your Profile class (1) needs to know how to create a Settings object (which violates the Single Responsibility Principle), and (2) always creates its Settings object the same way (which creates a tight coupling between the two). But with dependency injection, the logic for creating Settings objects lives somewhere else, and it's easy to use different kinds of Settings objects. This may seem (or even be) irrelevant in this particular case, but imagine if we're not talking about a Settings object, but a DataStore object, which might have different implementations, one that stores data in files and another that stores it in a database. And for automated tests you want a mock implementation as well. Now you really don't want the Profile class to hardcode which one it uses - and even more importantly, you really, really don't want the Profile class to know about filesystem paths, DB connections and passwords, so the creation of DataStore objects has to happen somewhere else. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/381407",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/316517/"
]
} |
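The answer above describes the benefit in terms of a DataStore with several implementations; here is a small Java sketch of that idea (the question's code is PHP, but the principle is identical, and all names below are illustrative, not from the original post): interface DataStore {
    void save(String key, String value);
}
class FileDataStore implements DataStore {
    @Override
    public void save(String key, String value) {
        // Filesystem paths live here, not in Profile.
        System.out.println("file <- " + key + "=" + value);
    }
}
class InMemoryDataStore implements DataStore {
    @Override
    public void save(String key, String value) {
        // Convenient stand-in for automated tests.
        System.out.println("memory <- " + key + "=" + value);
    }
}
class Profile {
    private final DataStore store;
    // The dependency is injected; Profile never calls new FileDataStore() itself.
    Profile(DataStore store) {
        this.store = store;
    }
    void deactivate() {
        store.save("isActive", "false");
    }
}
public class Demo {
    public static void main(String[] args) {
        new Profile(new FileDataStore()).deactivate();      // production wiring
        new Profile(new InMemoryDataStore()).deactivate();  // test wiring
    }
} The caller decides which implementation gets injected — file-backed, database-backed, or a test fake — while Profile only ever sees the interface.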
381,460 | I've been trying to design a database to go with a project concept and ran into what seems like a hotly debated issue. I've read a few articles and some Stack Overflow answers that state it's never (or almost never) okay to store a list of IDs or the like in a field -- all data should be relational, etc. The problem I'm running into, though, is that I'm trying to make a task assigner. People will create tasks, assign them to multiple people, and it will save to the database. Of course, if I save these tasks individually in "Person", I'll have to have dozens of dummy "TaskID" columns and micro-manage them because there can be 0 to 100 tasks assigned to one person, say. Then again, if I save the tasks in a "Tasks" table, I'll have to have dozens of dummy "PersonID" columns and micro-manage them -- same problem as before. For a problem like this, is it okay to save a list of IDs taking one form or another or am I just not thinking of another way this is achievable without breaking principles? | The key word and key concept you need to investigate is database normalization . What you would do, rather than adding info about the assignments to the person or tasks tables, is add a new table with that assignment info, with relevant relationships. For example, you have the following tables: Persons:
+----+----------+
| ID | Name     |
+====+==========+
| 1  | Alfred   |
| 2  | Jebediah |
| 3  | Jacob    |
| 4  | Ezekiel  |
+----+----------+
Tasks:
+----+-------------------+
| ID | Name              |
+====+===================+
| 1  | Feed the Chickens |
| 2  | Plow              |
| 3  | Milking Cows      |
| 4  | Raise a barn      |
+----+-------------------+
You would then create a third table with Assignments. This table would model the relationship between the people and the tasks:
+----+----------+--------+
| ID | PersonId | TaskId |
+====+==========+========+
| 1  | 1        | 3      |
| 2  | 3        | 2      |
| 3  | 2        | 1      |
| 4  | 1        | 4      |
+----+----------+--------+
We would then have a Foreign Key constraint, such that the database will enforce that the PersonId and TaskIds have to be valid IDs for those foreign items. For the first row, we can see PersonId is 1, so Alfred is assigned to TaskId 3, Milking Cows. What you should be able to see here is that you could have as few or as many assignments per task or per person as you want. In this example, Ezekiel isn't assigned any tasks, and Alfred is assigned 2. If you have one task with 100 people, doing SELECT PersonId from Assignments WHERE TaskId=<whatever>; will yield 100 rows, with a variety of different Persons assigned. You can WHERE on the PersonId to find all of the tasks assigned to that person. If you want to return queries replacing the Ids with the Names and the tasks, then you get to learn how to JOIN tables. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/381460",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/320147/"
]
} |
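The answer above closes with "you get to learn how to JOIN tables"; here is a hedged JDBC sketch of that query, reusing the answer's table and column names (the connection details are placeholders): import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
public class AssignmentReport {
    public static void main(String[] args) throws Exception {
        String sql = "SELECT p.Name AS person, t.Name AS task "
                   + "FROM Assignments a "
                   + "JOIN Persons p ON p.ID = a.PersonId "
                   + "JOIN Tasks t ON t.ID = a.TaskId";
        try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/farm", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                // Prints lines such as "Alfred -> Milking Cows"
                System.out.println(rs.getString("person") + " -> " + rs.getString("task"));
            }
        }
    }
}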
381,628 | I am modeling a database that should be used as generic non-functional requisite for all services of the startup company, like persons, users, services and commercial data like coupons, signature packages, etc. I am thinking about the gender model. In these modern days and with different laws across countries about subjective identity, should I take that into consideration and model my Person entity with more than just the male and female options? Options are: undefined, not-answered, other, transgender... or any other industry standard that I am unaware of... Or does this offend LGBT people by saying that they are not truly male or female? | First consider why you need to collect this data. Do not collect it if it is unnecessary. For example: You would like to address the individuals properly. Then, simply ask for their preferred form of address/honorific/title, such as βMr.β or βMx.β. This should be a free-form field, not dropdown list. There are more possible forms of address than can be enumerated, especially if you consider clerical, academic, or military forms. There is not necessarily any relation between gender and honorific. You would prefer to analyze behavior by gender. Then you will likely not be interested in genders other than male/female. In that case, offer three choices: female, male, or other/prefer not to say. The last could be a free-form text box that can be left empty. Note that collecting and using this kind of data may be subject to privacy laws, so make sure the collection and analysis is legal. E.g. under the GDPR you may have to acquire the user's informed consent first, but that only applies if you or potential users are in the EU. You need to process this data for a specific legal or medical reason. Then, do not guess which information may be needed but find out your actual requirements. E.g. in some jurisdictions only two genders are legally recognized, but the legal gender might be irrelevant in software used for a sexual health clinic. You may want to keep your software portable and future-proof, so do not assume that there is a fixed enumeration of genders. Make it possible to update this list e.g. by updating a config file. You may also want to assume the possibility that there is no fixed list. In a database, a VARCHAR field may be appropriate. Because of these differences in purpose and context, it is difficult to build an universal model for the βgenderβ concept. Note: the English word βgenderβ describes a social role or identity, whereas the term βsexβ describes biological aspects such as physiology or genetics. Neither of these concepts is unambiguous. Again, it depends on the context as to which of these concepts is relevant (if at all) and which values are βallowedβ in that context. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/381628",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/58513/"
]
} |
381,711 | I heard that you should avoid leading newlines when using printf . So that instead of printf("\nHello World!") you should use printf("Hello World!\n") In this particular example above it does not make sense, since the output would be different, but consider this: printf("Initializing");
init();
printf("\nProcessing");
process_data();
printf("\nExiting"); compared to: printf("Initializing\n");
init();
printf("Processing\n");
process_data();
printf("Exiting"); I cannot see any benefit with trailing newlines, except that it looks better. Is there any other reason? EDIT: I'll address the close votes here and now. I don't think this belong to Stack overflow, because this question is mainly about design. I would also say that although it may be opinions to this matter, Kilian Foth's answer and cmaster's answer proves that there are indeed very objective benefits with one approach. | A fair amount of terminal I/O is line-buffered , so by ending a message with \n you can be certain that it will be displayed in a timely manner. With a leading \n the message may or may not be displayed at once. Often, this would mean that each step displays the progress message of the previous step, which causes no end of confusion and wasted time when you try to understand a program's behaviour. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/381711",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/283695/"
]
} |
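The buffering concern in the answer above is not specific to C's stdio. A small Java sketch of the same general idea (names and delays are invented, and Java's System.out is used here only to illustrate the concept): when a progress message does not end the line, flush explicitly so it appears before the slow step starts. public class Progress {
    public static void main(String[] args) throws InterruptedException {
        System.out.print("Initializing... ");
        System.out.flush();          // make the message visible now, not after the work
        slowStep();
        System.out.println("done");  // the trailing newline ends the line
        System.out.print("Processing... ");
        System.out.flush();
        slowStep();
        System.out.println("done");
    }
    private static void slowStep() throws InterruptedException {
        Thread.sleep(1000);          // stand-in for real work
    }
}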
381,763 | I am maintaining a public API and have to deprecate a method. Is there a general rule on how many months/years/versions before the deletion I should deprecate a method? | At minimum, you should keep deprecated methods in one version before removing them which seems kind of obvious when I write it out. I don't think there is a maximum time but if you never actually remove them, deprecation becomes a little pointless. Major version releases are a good time to remove deprecated methods. Minor releases should typically not contain breaking changes. As cHao has noted in the comments, deprecation doesn't necessarily imply that there will be an eventual removal so if you plan to remove things after deprecation, you should explicitly note that and provide some guidance on the timeline. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/381763",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21538/"
]
} |
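In Java, the advice above — explicitly note the planned removal and give a timeline — is usually attached to the declaration itself. A minimal, invented illustration (the since/forRemoval elements require Java 9+; the class and version numbers are hypothetical): public class GeometryUtils {
    /**
     * @deprecated since 2.3, use {@link #area(Shape)} instead; scheduled for removal in 3.0.
     */
    @Deprecated(since = "2.3", forRemoval = true)
    public static double computeArea(Shape shape) {
        return area(shape); // keep the old entry point working until the removal release
    }
    public static double area(Shape shape) {
        return shape.area();
    }
    public interface Shape {
        double area();
    }
} Callers get a compiler warning now and a clear statement of when the method will disappear.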
382,026 | Is it enough for methods to be distinguished just by argument name (not type) or is it better to name it more explicitly? For example T Find<T>(int id) vs T FindById<T>(int id) . Is there any good reason to name it more explicitly (i.e. adding ById ) vs keeping just the argument name? One reason I can think of is when signatures of the methods are the same but they have a different meaning. FindByFirstName(string name) and FindByLastName(string name) | Sure there is a good reason to name it more explicitly. It's not primarily the method definition that should be self-explanatory, but the method use . And while findById(string id) and find(string id) are both self-explanatory, there is a huge difference between findById("BOB") and find("BOB") . In the former case you know that the random literal is, in fact, an Id. In the latter case you're not sure - it might actually be a given name or something else entirely. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/382026",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/269824/"
]
} |
382,069 | I am minding my own business at home and my wife comes to me and says Honey.. Can you print all the Day Light Savings around the world for 2018 in the console? I need to check something. And I am super happy because that was what I had been waiting for my whole life with my Java experience and come up with: import java.time.*;
import java.util.Set;
class App {
void dayLightSavings() {
Set<String> availableZoneIds = ZoneId.getAvailableZoneIds();
availableZoneIds.forEach(
zoneId -> {
LocalDateTime dateTime = LocalDateTime.of(
LocalDate.of(2018, 1, 1),
LocalTime.of(0, 0, 0)
);
ZonedDateTime now = ZonedDateTime.of(dateTime, ZoneId.of(zoneId));
while (2018 == now.getYear()) {
int hour = now.getHour();
now = now.plusHours(1);
if (now.getHour() == hour) {
System.out.println(now);
}
}
}
);
}
} But then she says she was just testing whether I was an ethically-trained software engineer, and tells me it looks like I am not, since (taken from here ).. It should be noted that no ethically-trained software engineer would
ever consent to write a DestroyBaghdad procedure. Basic professional
ethics would instead require him to write a DestroyCity procedure, to
which Baghdad could be given as a parameter. And I am like, fine, ok, you got me.. Pass any year you like, here you go: import java.time.*;
import java.util.Set;
class App {
void dayLightSavings(int year) {
Set<String> availableZoneIds = ZoneId.getAvailableZoneIds();
availableZoneIds.forEach(
zoneId -> {
LocalDateTime dateTime = LocalDateTime.of(
LocalDate.of(year, 1, 1),
LocalTime.of(0, 0, 0)
);
ZonedDateTime now = ZonedDateTime.of(dateTime, ZoneId.of(zoneId));
while (year == now.getYear()) {
// rest is same.. But how do I know how much (and what) to parameterize? After all, she might say.. she wants to pass a custom string formatter, maybe she does not like the format I am already printing in: 2018-10-28T02:00+01:00[Arctic/Longyearbyen] void dayLightSavings(int year, DateTimeFormatter dtf) she is interested in only certain month periods void dayLightSavings(int year, DateTimeFormatter dtf, int monthStart, int monthEnd) she is interested in certain hour periods void dayLightSavings(int year, DateTimeFormatter dtf, int monthStart, int monthEnd, int hourStart, int hourend) If you are looking for a concrete question: If destroyCity(City city) is better than destroyBaghdad() , is takeActionOnCity(Action action, City city) even better? Why / why not? After all, I can first call it with Action.DESTROY then Action.REBUILD , isn't it? But taking actions on cities is not enough for me, how about takeActionOnGeographicArea(Action action, GeographicalArea GeographicalArea) ? After all, I do not want to call: takeActionOnCity(Action.DESTORY, City.BAGHDAD); then takeActionOnCity(Action.DESTORY, City.ERBIL); and so on when I can do: takeActionOnGeographicArea(Action.DESTORY, Country.IRAQ); p.s. I only built my question around the quote I mentioned, I have nothing against any country, religion, race or whatsoever in the world. I am just trying to make a point. | It's turtles all the way down. Or abstractions in this case. Good practice coding is something that can be infinitely applied, and at some point you're abstracting for the sake of abstracting, which means you've taken it too far. Finding that line is not something that's easy to put into a rule of thumb, as it very much depends on your environment. For example, we've had customers who were known to ask for simple applications first but then ask for expansions. We've also had customers that ask what they want and generally never come back to us for an expansion. Your approach will vary per customer. For the first customer, it will pay to pre-emptively abstract the code because you're reasonably certain that you'll need to revisit this code in the future. For the second customer, you may not want to invest that extra effort if you're expecting them to not want to expand the application at any point (note: this doesn't mean that you don't follow any good practice, but simply that you avoiding doing any more than is currently necessary. How do I know which features to implement? The reason I mention the above is because you've already fallen in this trap: But how do I know how much (and what) to parameterize? After all, she might say . "She might say" is not a current business requirement. It's a guess at a future business requirement. As a general rule, do not base yourself on guesses, only develop what's currently required. However, context applies here. I don't know your wife. Maybe you accurately gauged that she will in fact want this. But you should still confirm with the customer that this is indeed what they want, because otherwise you're going to spend time developing a feature that you're never going to end up using. How do I know which architecture to implement? This is trickier. The customer doesn't care about the internal code, so you can't ask them if they need it. Their opinion on the matter is mostly irrelevant. However, you can still confirm the necessity of doing so by asking the right questions to the customer. 
Instead of asking about the architecture, ask them about their expectations of future development or expansions to the codebase. You can also ask if the current goal has a deadline, because you may not be able to implement your fancy architecture in the timeframe necessary. How do I know when to abstract my code further? I don't know where I read it (if anyone knows, let me know and I'll give credit), but a good rule of thumb is that developers should count like a caveman: one, two many . XKCD #764 In other words, when a certain algorithm/pattern is being used for a third time, it should be abstracted so that it is reusable (= usable many times). Just to be clear, I'm not implying that you shouldn't write reusable code when there's only two instances of the algorithm being used. Of course you can abstract that as well, but the rule should be that for three instances you must abstract. Again, this factors in your expectations. If you already know that you need three or more instances, of course you can immediately abstract. But if you only guess that you might want to implement it more times, the correctness of implementing the abstraction fully relies on the correctness of your guess. If you guessed correctly, you saved yourself some time. If you guessed wrongly, you wasted some of your time and effort and possibly compromised your architecture to implement something you end up not needing. If destroyCity(City city) is better than destroyBaghdad() , is takeActionOnCity(Action action, City city) even better? Why / why not? That very much depends on multiple things: Are there multiple actions that can be taken on any city? Can these actions be used interchangeably? Because if the "destroy" and "rebuild" actions have completely different executions, then there's no point in merging the two in a single takeActionOnCity method. Also be aware that if you recursively abstract this, you're going to end up with a method that's so abstract that it's nothing more than a container to run another method in, which means you've made your method irrelevant and meaningless. If your entire takeActionOnCity(Action action, City city) method body ends up being nothing more than action.TakeOn(city); , you should wonder if the takeActionOnCity method truly has a purpose or isn't just an extra layer that adds nothing of value. But taking actions on cities is not enough for me, how about takeActionOnGeographicArea(Action action, GeographicalArea GeographicalArea) ? The same question pops up here: Do you have a use case for geographical regions? Is the execution of an action on a city and a region the same? Can any action be taken on any region/city? If you can definitively answer "yes" to all three, then an abstraction is warranted. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/382069",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/123316/"
]
} |
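The answer above warns about a method whose whole body is action.TakeOn(city); a tiny hypothetical Java sketch makes that point visible (none of these types come from the original post): interface Action {
    void takeOn(City city);
}
class City {
    final String name;
    City(String name) { this.name = name; }
}
class Planner {
    // If this is all the method does, the wrapper adds a layer without adding value:
    // callers could just write action.takeOn(city) themselves.
    void takeActionOnCity(Action action, City city) {
        action.takeOn(city);
    }
} At that point the abstraction exists only for its own sake, which is the line the answer says not to cross.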
382,070 | The question: Is it considered better practice to use derived types/subtypes or to use conditions and exception-handling to restrict the acceptable inputs for a subprogram in Ada? I understand that in Ada2012 you can use contracts, but what about for previous versions like Ada95? The context: I want to make a series of procedures which can read/write specific fields in a packet (represented by a private unsigned byte array) to be broadcasted from an embedded device to an ASIC. A lot of the fields are different sizes and don't line up with the byte boundaries, so using standard types would allow a program using these procedures to enter illegal values if unchecked. Extra notes: Outside of manipulating the packet contents via these subprograms, these types would not be used elsewhere. Also, if it makes a difference, this code would be used in a safety critical system. | It's turtles all the way down. Or abstractions in this case. Good practice coding is something that can be infinitely applied, and at some point you're abstracting for the sake of abstracting, which means you've taken it too far. Finding that line is not something that's easy to put into a rule of thumb, as it very much depends on your environment. For example, we've had customers who were known to ask for simple applications first but then ask for expansions. We've also had customers that ask what they want and generally never come back to us for an expansion. Your approach will vary per customer. For the first customer, it will pay to pre-emptively abstract the code because you're reasonably certain that you'll need to revisit this code in the future. For the second customer, you may not want to invest that extra effort if you're expecting them to not want to expand the application at any point (note: this doesn't mean that you don't follow any good practice, but simply that you avoiding doing any more than is currently necessary. How do I know which features to implement? The reason I mention the above is because you've already fallen in this trap: But how do I know how much (and what) to parameterize? After all, she might say . "She might say" is not a current business requirement. It's a guess at a future business requirement. As a general rule, do not base yourself on guesses, only develop what's currently required. However, context applies here. I don't know your wife. Maybe you accurately gauged that she will in fact want this. But you should still confirm with the customer that this is indeed what they want, because otherwise you're going to spend time developing a feature that you're never going to end up using. How do I know which architecture to implement? This is trickier. The customer doesn't care about the internal code, so you can't ask them if they need it. Their opinion on the matter is mostly irrelevant. However, you can still confirm the necessity of doing so by asking the right questions to the customer. Instead of asking about the architecture, ask them about their expectations of future development or expansions to the codebase. You can also ask if the current goal has a deadline, because you may not be able to implement your fancy architecture in the timeframe necessary. How do I know when to abstract my code further? I don't know where I read it (if anyone knows, let me know and I'll give credit), but a good rule of thumb is that developers should count like a caveman: one, two many . 
XKCD #764 In other words, when a certain algorithm/pattern is being used for a third time, it should be abstracted so that it is reusable (= usable many times). Just to be clear, I'm not implying that you shouldn't write reusable code when there's only two instances of the algorithm being used. Of course you can abstract that as well, but the rule should be that for three instances you must abstract. Again, this factors in your expectations. If you already know that you need three or more instances, of course you can immediately abstract. But if you only guess that you might want to implement it more times, the correctness of implementing the abstraction fully relies on the correctness of your guess. If you guessed correctly, you saved yourself some time. If you guessed wrongly, you wasted some of your time and effort and possibly compromised your architecture to implement something you end up not needing. If destroyCity(City city) is better than destroyBaghdad() , is takeActionOnCity(Action action, City city) even better? Why / why not? That very much depends on multiple things: Are there multiple actions that can be taken on any city? Can these actions be used interchangeably? Because if the "destroy" and "rebuild" actions have completely different executions, then there's no point in merging the two in a single takeActionOnCity method. Also be aware that if you recursively abstract this, you're going to end up with a method that's so abstract that it's nothing more than a container to run another method in, which means you've made your method irrelevant and meaningless. If your entire takeActionOnCity(Action action, City city) method body ends up being nothing more than action.TakeOn(city); , you should wonder if the takeActionOnCity method truly has a purpose or isn't just an extra layer that adds nothing of value. But taking actions on cities is not enough for me, how about takeActionOnGeographicArea(Action action, GeographicalArea GeographicalArea) ? The same question pops up here: Do you have a use case for geographical regions? Is the execution of an action on a city and a region the same? Can any action be taken on any region/city? If you can definitively answer "yes" to all three, then an abstraction is warranted. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/382070",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/314093/"
]
} |
382,087 | As I understand, the point of unit tests is to test units of code in isolation . This means, that: They should not break by any unrelated code change elsewhere in the codebase. Only one unit test should break by a bug in the tested unit, as opposed to integration tests (which may break in heaps). All of this implies, that every outside dependency of a tested unit, should be mocked out. And I mean all the outside dependencies , not only the "outside layers" such as networking, filesystem, database, etc.. This leads to a logical conclusion, that virtually every unit test needs to mock . On the other hand, a quick Google search about mocking reveals tons of articles that claim that "mocking is a code smell" and should mostly (though not completely) be avoided. Now, to the question(s). How should unit tests be written properly? Where exactly does the line between them and the integration tests lie? Update 1 Please consider the following pseudo code: class Person {
constructor(calculator) {}
calculate(a, b) {
const sum = this.calculator.add(a, b);
// do some other stuff with the `sum`
}
} Can a test that tests the Person.calculate method without mocking the Calculator dependency (given, that the Calculator is a lightweight class that does not access "the outside world") be considered a unit test? | the point of unit tests is to test units of code in isolation. Martin Fowler on Unit Test Unit testing is often talked about in software development, and is a term that I've been familiar with during my whole time writing programs. Like most software development terminology, however, it's very ill-defined, and I see confusion can often occur when people think that it's more tightly defined than it actually is. What Kent Beck wrote in Test Driven Development, By Example I call them "unit tests", but they don't match the accepted definition of unit tests very well Any given claim of "the point of unit tests is" will depend heavily on what definition of "unit test" is being considered. If your perspective is that your program is composed of many small units that depend on one another, and if you constrain yourself to a style that tests each unit in isolation, then a lot of test doubles is an inevitable conclusion. The conflicting advice that you see comes from people operating under a different set of assumptions. For example, if you are writing tests to support developers during the process of refactoring, and splitting one unit into two is a refactoring that should be supported, then something needs to give. Maybe this kind of test needs a different name (ex: "component test" was used by Boris Beizer)? Or maybe we need a different understanding of "unit". You may want to compare: Ian Cooper's TDD: Where Did It All Go Wrong JBRainsberger's Integrated Tests are a Scam Can a test that tests the Person.calculate method without mocking the Calculator dependency (given, that the Calculator is a lightweight class that does not access "the outside world") be considered a unit test? I think that's the wrong question to ask; it's again an argument about labels , when I believe what we actually care about are properties . When I'm introducing changes to the code, I don't care about isolation of tests -- I already know that "the mistake" is somewhere in my current stack of unverified edits. If I run the tests frequently, then I limit the depth of that stack, and finding the mistake is trivial (in the extreme case, the tests are run after every edit -- the max depth of the stack is one). But running the tests isn't the goal -- it's an interruption -- so there is value in reducing the impact of the interruption. One way of reducing the interruption is to ensure that the tests are fast ( Gary Bernhardt suggests 300ms , but I haven't figured out how to do that in my circumstances). If invoking Calculator::add doesn't significantly increase the time required to run the test (or any of the other important properties for this use case), then I wouldn't bother using a test double -- it doesn't provide benefits that outweigh the costs. Notice the two assumptions here: a human being as part of the cost evaluation, and the short stack of unverified changes in the benefit evaluation. In circumstances where those conditions do not hold, the value of "isolation" changes quite a bit. See also Hot Lava , by Harry Percival. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/382087",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/220149/"
]
} |
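To make the closing point of the answer above concrete — when the real Calculator is cheap, fast and deterministic, a test double buys nothing — here is a JUnit 5 sketch that mirrors the question's pseudo code in Java; the doubling step stands in for "do some other stuff with the sum" and is purely illustrative: import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}
class Person {
    private final Calculator calculator;
    Person(Calculator calculator) {
        this.calculator = calculator;
    }
    int calculate(int a, int b) {
        int sum = calculator.add(a, b);
        return sum * 2; // stand-in for "do some other stuff with the sum"
    }
}
class PersonTest {
    @Test
    void calculateDoublesTheSum() {
        // The real Calculator is used directly; no mock is needed here.
        Person person = new Person(new Calculator());
        assertEquals(10, person.calculate(2, 3));
    }
}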