Dataset columns: source_id (int64, 1 – 4.64M), question (string, 0 – 28.4k characters), response (string, 0 – 28.8k characters), metadata (dict).
358,020
Imagine an API to identify whether a person has selected their spirit animal. They can only have zero or one spirit animals. Currently, /person/{id}/selectedSpiritAnimal returns HTTP 200 and {selectedAnimal: mole} when they have selected an animal, but returns HTTP 404 when they have no selection. This makes my spirit animal unhappy, as we're representing a valid domain concern - having not yet selected a spirit animal - as an HTTP error. Plus, as a business - erm, Spirit-Animal-Hampers-R-Us - we want to know when someone has no selection so we can prompt them. What's a better response here: HTTP 200 and {selectedAnimal: null}, or, even more explicitly, HTTP 200 and {selectedAnimal: null, spiritAnimalSelected: false}? Or is it better to return a 404? Since, much like "this image has not yet been uploaded" would be a 404 when viewing an image online, "this person has not selected a spirit animal" might be a 404.

This question has been proposed as a duplicate, but that question addresses an otherwise valid URL being requested when the application has been configured to not allow the change that URL represents. Here, by contrast, I'm looking at how one represents a resource where the absence of the resource is meaningful. I.e. it is valid for the client to request the URL, and the response is "you have successfully requested the resource which represents an absence of a thing". So this isn't 'business logic', but rather a circumstance where the absence of a thing has meaning (it may be, as many of my colleagues are arguing, that 404 is still correct), but I'm not sure how to map that to the spec.

Very difficult to pick an answer. I've changed my mind multiple times over the conversation here and the one ongoing at work. The thing that settles it for me here is that the spec says that a 4xx is when the client has erred. In this instance the client has been told to expect a response from the selectedSpiritAnimal URL, so it has not erred. The consensus amongst my colleagues is that this is a symptom of bad API design. It would probably be better if we simply requested /person/{id} and that returned a set of link relations for the person; then if you aren't given the /selectedSpiritAnimal link (when a person has no selection) but you call it anyway, a 404 makes sense. Or implement partial responses and let /person/{id} return a fuller document unless the client requests a subset of the data.
HTTP 4xx codes are probably not the right choice for this scenario. You state that having zero spirit animals is a valid state, and the API route person/{id}/selectedSpiritAnimal will account for whether person id does or does not have one. HTTP 4xx responses are reserved for the situation where the client has done something incorrect in the request (see w3's archive of the original spec). But the client is making a valid request, whether or not person id has a spirit animal. So I lean toward the second solution: a properly formatted JSON body in the response and an HTTP 2xx code. Now, if you get such a request and it turns out person id does not exist, a 4xx code makes more sense.
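As a sketch of what this recommends (the field names are taken from the question; the exact response shape is an assumption, not part of the original answer), the two states might look like this. HTTP 200 when a selection exists:

```json
{ "selectedAnimal": "mole", "spiritAnimalSelected": true }
```

HTTP 200 when no selection has been made yet (a valid domain state, not an error):

```json
{ "selectedAnimal": null, "spiritAnimalSelected": false }
```

A 404 would then be reserved for the case where person {id} itself does not exist.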
{ "source": [ "https://softwareengineering.stackexchange.com/questions/358020", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/91614/" ] }
358,068
Why are "bit masks" called like this? I know that they are mainly used for bitwise operations and the usage of bit masks is more efficient than the usage of separate variables. However my question is why and when were bit masks invented? Were they used since early computing? Are there any other type of "masks" besides bit masks in the IT domain?
A mask (of the facial variety) is something that covers up some parts of your face and lets other parts show through. The terminology is used by analogy in computing: a bitmask covers up (filters out) some bits in a bitset and allows others to pass. Are there any other type of "masks" besides bit masks in the IT domain? Just off the top of my head, masks are used frequently in image processing. It's a similar concept: you create a black-and-white image that shows the shape of what to mask off and what to let through.
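To make the analogy concrete, here is a minimal, hypothetical Java sketch (the flag names are invented for illustration) showing how a mask "covers up" some bits and lets others show through:

```java
public class BitMaskDemo {
    // Hypothetical permission flags, one bit each
    static final int READ    = 0b0001;
    static final int WRITE   = 0b0010;
    static final int EXECUTE = 0b0100;

    public static void main(String[] args) {
        int permissions = READ | WRITE;                  // set two bits

        // The mask lets only the WRITE bit "show through"
        boolean canWrite = (permissions & WRITE) != 0;   // true
        boolean canExec  = (permissions & EXECUTE) != 0; // false

        // Inverting the mask clears (masks off) the WRITE bit
        int withoutWrite = permissions & ~WRITE;         // only READ remains

        System.out.println(canWrite + " " + canExec + " " + withoutWrite);
    }
}
```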
{ "source": [ "https://softwareengineering.stackexchange.com/questions/358068", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/223002/" ] }
358,085
I'm currently developing a website that requires direct access to the client computer, so I decided to split this project into three main parts. There are some requirements that have to be met:

- The website has to use SSL everywhere, because sensitive user data is exchanged with the API.
- The connection between the website and the client application has to be over localhost, because a lot of data is exchanged that shouldn't be handled by an external server.

Currently I'm using WebSockets to provide the connection between the website and the client application. This results in the problem that the WebSocket connection has to be SSL-secured too, because an unsecured connection is rejected. Therefore I'm installing a self-signed localhost certificate on the client machine to be able to connect the website to the client application. This is really dirty and not a preferable solution, so I'm searching for a new approach. Sadly, I cannot start a WebSocket server out of Angular; that would resolve the problem with the certificate. I'm curious how Battlelog does this. According to this answer, the plugin uses a Windows named pipe. This sounds interesting, but I'm unsure whether I could use one in this scenario. Is there a way to solve this problem?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/358085", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/284280/" ] }
358,160
If you have an enum with values only (no methods as one could do in Java), and this enum is part of the business definition of the system, should one write unit tests for it? I was thinking that they should be written, even if they could seem simple and redundant I consider that what concerns the business specification should be made explicitly written in a test, whether it be with unit/integration/ui/etc. tests or by using the type system of the language as a testing method. Since the values that an enum (e.g. in Java) must have, from the point of view of the business, cannot be tested using the type system I think there should be a unit test for that. This question isn't similar to this one since it doesn't address the same problem as mine. In that question there is a business function (savePeople) and the person is inquiring about the internal implementation (forEach). In there, there's a middle business layer (the function save people) encapsulating the language construct (forEach). Here the language construct (enum) is the one used to specify the behavior from a business standpoint. In this case the implementation detail coincides with the "true nature" of the data, that is: a set (in the mathematical sense) of values. You could arguably use an immutable set, but the same values should still be present there. If you use an array the same thing must be done to test the business logic. I think the conundrum here is the fact that the language construct coincides very well with the nature of the data. I am not sure if I've explained myself correctly
If you have an enum with values only (no methods as one could do in Java), and this enum is part of the business definition of the system, should one write unit tests for it? No, they are just state. Fundamentally, the fact that you are using an enum is an implementation detail ; that's the sort of thing that you might want to be able to refactor into a different design. Testing enums for completeness is analogous to testing that all of the representable integers are present. Testing the behaviors that the enumerations support, however, is a good idea. In other words, if you start from a passing test suite, and comment out any single enum value, then at least one test should fail (compilation errors being considered failures).
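A minimal, hypothetical Java/JUnit sketch of "testing the behaviours the enumeration supports" rather than the enum values themselves (the enum, the business rule and the test are invented for illustration):

```java
enum ShippingMethod { STANDARD, EXPRESS, OVERNIGHT }

class ShippingCalculator {
    // Business behaviour that depends on every enum value
    int deliveryDays(ShippingMethod method) {
        switch (method) {
            case STANDARD:  return 5;
            case EXPRESS:   return 2;
            case OVERNIGHT: return 1;
            default: throw new IllegalArgumentException("Unknown method: " + method);
        }
    }
}

class ShippingCalculatorTest {
    private final ShippingCalculator calculator = new ShippingCalculator();

    @org.junit.jupiter.api.Test
    void overnightShippingArrivesNextDay() {
        // Removing OVERNIGHT from the enum makes this test fail to compile,
        // which is exactly the kind of failure the answer describes.
        org.junit.jupiter.api.Assertions.assertEquals(1, calculator.deliveryDays(ShippingMethod.OVERNIGHT));
    }
}
```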
{ "source": [ "https://softwareengineering.stackexchange.com/questions/358160", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/284400/" ] }
358,164
This is a WPF application using MVVM design that connects to Services (business logic layer) which manage Models, with the exception that some property sets of the model are bound to some views directly. This means the model properties are updated without involving the ViewModel in some cases, and this view scheme cannot be changed because of specific integration limitations. The requirement is that in many cases a property change in the model involves some business logic to be performed in the Service, which in turn updates other properties of the model and updates (CRUD) other models in the service too. The two approaches that I thought of:

1. The model has a Service reference. The property-set call performs the service operation. But I do not think it is a proper design strategy to do so; what are the thoughts on this?

```csharp
class Model
{
    public int Property1
    {
        get => _property1;
        set
        {
            _property1 = value;
            Service.PerformOperation();
            OnPropertyChanged();
        }
    }
}
```

2. The Service subscribes to events of the Models.

```csharp
class Service
{
    private void ModelPropertyChanged(object sender, EventArgs e)
    {
        if (e.PropertyName == "Property1")
        {
            PerformOperation();
        }
    }
}
```

What is the better design strategy recommended in this type of scenario?

=========================

Based on the answer [@Robert Harvey]: introduce a Service-Model Worker between Service and Model and set Model properties through the Service-Model Worker. This ensures loose coupling and keeps the Model less Anemic.

```csharp
class Model
{
    public Model(IServiceWorker serviceWorker) { }

    public int Property1
    {
        get => _property1;
        set
        {
            _serviceWorker.SetProperty(
                nameof(Property1),
                value,
                () => { _property1 = value; });
        }
    }
}

class ServiceWorker : IServiceWorker
{
    public void SetProperty(string propertyName, object value, Action setPropertyInternal)
    {
        switch (propertyName)
        {
            case nameof(_model.Property1):
            {
                // Preview property change (validation and other cancellable checks)
                IModelValidator.Validate(propertyName, value);

                // Before property changed
                Service.PerformOperation_BeforePropertyChanged();

                // Set property
                setPropertyInternal();

                // On property change
                Service.PerformOperation();
                _model.RaisePropertyChanged(propertyName);
            }
            break;
        }
    }
}
```
{ "source": [ "https://softwareengineering.stackexchange.com/questions/358164", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/284405/" ] }
358,243
I'll describe an example: I start making an API for a baking shop. The API will allow people to search its catalogue for baking products, such as home-made minty chocolate chip cookies, using api.examplebakery.com/search?q=..... . Someone uses this to look for a product named "pineapple-banana flavoured cookies" and will obviously not find any results. Should this be returned as an error? The search did not fail; the API searched and successfully concluded that no such cookies could be found. The API should not return 404, because the API endpoint was indeed found.
When there are results, the output is a (JSON, based on your comment) list. For queries with no results, the output should be exactly the same. The list simply has 0 items in it. So if your response is normally this:

```json
{
    "results": [
        { "name": "Pancakes", .... },
        { "name": "French Fries", .... }
    ]
}
```

Then for a query with 0 results, it should be this:

```json
{
    "results": []
}
```

If you also include metadata about how many "pages" of results there are, links to those "pages", etc., then I would suggest saying there is 1 "page". The HTTP status should be the same as when there are results - 200 OK. 204 No Content may also seem to be an option, but it is not, because you are in fact returning "content" - the empty list. If you feel an empty list does not count as "content", what if you then amend the response to offer spelling suggestions? The core of the response will still be an empty list, but now there is even more "content". For more useful information about HTTP status codes, jpmc26 has an answer worth reading.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/358243", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/284482/" ] }
358,501
Let's say I start developing a role game with characters that attack other characters and that kind of stuff. Applying TDD, I make some test cases to test the logic inside the Character.receiveAttack(Int) method. Something like this:

```kotlin
@Test
fun healthIsReducedWhenCharacterIsAttacked() {
    val c = Character(100)   // arg is the health
    c.receiveAttack(50)      // arg is the suffered attack damage
    assertThat(c.health, is(50));
}
```

Say I have 10 methods testing the receiveAttack method. Now I add a method Character.attack(Character) (which calls receiveAttack), and after some TDD cycles testing it, I make a decision: Character.receiveAttack(Int) should be private. What happens to the previous 10 test cases? Should I delete them? Should I keep the method public (I don't think so)? This question is not about how to test private methods, but how to deal with them after a redesign when applying TDD.
In TDD, the tests serve as executable documentation of your design. Your design changed, so obviously, your documentation must, too! Note that, in TDD, the only way in which the attack method could have appeared, is as the result of making a failing test pass. Which means, attack is being tested by some other test. Which means that indirectly receiveAttack is covered by attack 's tests. Ideally, any change to receiveAttack should break at least one of attack 's tests. And if it doesn't, then there is functionality in receiveAttack that is no longer needed and should no longer exist! So, since receiveAttack is already tested through attack , it doesn't matter whether or not you keep your tests. If your testing framework makes it easy to test private methods, and if you decide to test private methods, then you can keep them. But you can also delete them without losing test coverage and confidence.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/358501", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/145346/" ] }
358,598
I was recently watching a great Computerphile video on passwords in which Mike Pound brags of his company's supercomputer having 4 graphics cards (Titan X's, to be exact). As a numerical simulation enthusiast, I dream of building a desktop solely for simulation work. Why does Mike Pound measure his computer's computational ability by its graphics cards and not its processors? If I were building a computer, which item should I care about more?
Mike Pound obviously values the computational ability of the graphics cards higher than that of the CPUs. Why? A graphics card is basically made up of MANY simplified processors which all run in parallel. For some simulation work, a lot of the computation can be easily parallelised and processed in parallel on the thousands of cores available in the graphics cards, reducing the total processing time.

"Which item should I care about more?" It really depends on the workload you care about, and on how that workload can be (or is) parallelised for use on a graphics card. If your workload is an embarrassingly parallel set of simple computations, and the software is written to take advantage of available graphics cards, then more graphics cards will have a far greater performance impact than more CPUs (dollar for dollar).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/358598", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/284960/" ] }
358,611
Encapsulation tells me to make all or almost all fields private and to expose them through getters/setters. But now libraries such as Lombok appear which allow us to expose all private fields with one short annotation, @Data. It will create getters, setters and setting constructors for all private fields. Could somebody explain to me what the point is of hiding all fields as private and then exposing all of them through some extra technology? Why do we not simply use public fields then? I feel we have walked a long and hard way only to return to the starting point. Yes, there are other technologies that work through getters and setters, and we cannot use them through simple public fields. But those technologies appeared only because we had those numerous properties - private fields behind public getters/setters. If we had not had the properties, these technologies would have developed another way and would support public fields. Everything would be simple, and we wouldn't have any need for Lombok now. What is the point of this whole cycle? And does encapsulation really make any sense now in real-life programming?
If you expose all your attributes with getters/setters, you are gaining just a data structure of the kind still used in C or any other procedural language. It is not encapsulation, and Lombok just makes working with procedural code less painful. Getters/setters are as bad as plain public fields; there is really no difference. And a data structure is not an object. If you start creating an object by writing its interface, you will never add getters/setters to the interface. Exposing your attributes leads to spaghetti procedural code where the manipulation of data happens outside the object and is spread all over the codebase. Now you are dealing with data and with manipulations of data instead of talking to objects. With getters/setters you get data-driven procedural programming, where manipulation is done in a straight imperative way: get data, do something, set data. In OOP, encapsulation is a big deal when done the right way. You should encapsulate state and implementation details so that the object has full control over them. The logic will be focused inside the object and will not be spread all over the codebase. And yes - encapsulation is still essential in programming, as the code will be more maintainable.

EDITS

After seeing the discussions going on, I want to add several things:

- It doesn't matter how many of your attributes you expose through getters/setters or how carefully you do it. Being more selective will not make your code OOP with encapsulation. Every attribute you expose will lead to some procedure working with that naked data in an imperative, procedural way. Being more selective only spreads your code a little more slowly; that doesn't change the core.
- Yes, at boundaries within the system you get naked data from other systems or a database. But this data is just another encapsulation point.
- Objects should be reliable. The whole idea of objects is being responsible, so that you don't need to give straight, imperative orders. Instead you ask an object to do what it does well, through its contract. You safely delegate the acting part to the object. Objects encapsulate state and implementation details.

So, returning to the question of why we should do this, consider this simple example:

```java
public class Document {
    private String title;

    public String getTitle() {
        return title;
    }
}

public class SomeDocumentServiceOrHandler {
    public void printDocument(Document document) {
        System.out.println("Title is " + document.getTitle());
    }
}
```

Here Document exposes an internal detail through a getter, and we have external procedural code in the printDocument function which works with that detail outside of the object. Why is this bad? Because now you just have C-style code. Yes, it is structured, but what's really the difference? You can structure C functions in different files and with names, and those so-called layers do exactly that. The service class is just a bunch of procedures that work with data. That code is less maintainable and has many drawbacks.

```java
public interface Printable {
    void print();
}

public final class PrintableDocument implements Printable {
    private final String title;

    public PrintableDocument(String title) {
        this.title = title;
    }

    @Override
    public void print() {
        System.out.println("Title is " + title);
    }
}
```

Compare with this one. Now we have a contract, and the implementation details of this contract are hidden inside the object. Now you can truly test that class, and that class encapsulates some data. How it works with that data is the object's concern.
In order to talk to the object you now need to ask it to print itself. That's encapsulation, and that is an object. You will gain the full power of dependency injection, mocking, testing, single responsibilities and a whole bunch of other OOP benefits.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/358611", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/44104/" ] }
358,786
This doc states the following: If you happen to modify the public API of Angular, API golden files must be updated using... Also this commit has the following heading: fix: public API golden files #16414 I'm wondering what is usually referred to as "golden files". I've googled around and it seems that this phrase is commonly used.
A "golden file" is the expected output of some test (usually automated), stored as a separate file rather than as a string literal inside the test code. So when the test is executed, it will read in the file and compare it to the output produced by the system under test. It's not really a very common expression; I have not heard it in 15 years of professional programming, even though I have used such files many times.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/358786", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/108594/" ] }
358,834
This is an example of what I want to do via code. I know you can use jump point search to easily get from the green node to the red node with no problems, or even A*. But how do you calculate this with warps?

In the image, you can see that it only takes 8 moves to get from the green node to the red node when taking the blue path. The blue path instantly moves your position from one purple node to the next. The space in the middle that costs 2 moves is a point between two warp zones that you must move across. It is clearly faster to take the blue path, since you only need to move (roughly) half as far as the yellow path, but how do I do this programmatically?

For the purpose of solving this problem, let's assume that there are multiple purple "warps" around the graph that you are able to use, AND that we know exactly where each purple point warps to and where the points are on the graph. Some purple warps are bi-directional and some are not, meaning you can sometimes only enter a warp from one side and cannot go back after warping.

I've thought about the solution, and only concluded that I could solve the problem by checking the distance to each warp point (excluding the uni-directional points), the difference between those points, and the points close to them. The program would have to figure out somehow that it is more beneficial to take the second warp instead of walking from the first jump. So, instead of moving 6 spots, then warping, then moving the remaining 8 steps on foot (which is also faster than not using warps at all), it would take the 6 moves, then the two moves to the second warp.

EDIT: I realized the blue path will actually take 12 moves instead of 8, but the question remains the same.
Most path finding algorithms are defined in terms of graphs, not in terms of grids. In a graph, a connection between two otherwise distant nodes is not really a problem.

However, you have to take care with your heuristics. With wormholes, the minimum distance between two nodes is no longer the Euclidean distance, and the distance does not satisfy the triangle inequality. Such heuristics are inadmissible for A*. You therefore cannot use A* easily.

Of course, path finding algorithms like Dijkstra that do not use a heuristic will still work. This is more like a breadth-first search and will select your wormholes without extra effort. However, Dijkstra will visit more nodes than A* with a good heuristic. (Dijkstra is equivalent to A* with heuristic(x) = 0.)

I think A* will work if you use a heuristic that treats all outgoing wormholes as a wormhole directly to the target: the heuristic may underestimate the distance, but must never overestimate it. I.e. the heuristic would be:

```python
def wormhole_heuristic(x):
    return min(euclidean_distance(x, g) for g in [goal, wormholes...])
```

For a very accurate heuristic, you can (recursively) add the distance from the wormhole endpoint to the goal or next wormhole. I.e. as a pre-calculation you could perform path finding on the (totally connected) subgraph containing all wormholes and the goal, where the distance between two nodes is their Euclidean distance. This may be beneficial if the number of wormholes is far less than the number of reachable cells on your grid. The new heuristic would be:

```python
def wormhole_heuristic(x):
    direct = euclidean_distance(x, goal)
    via_wormhole = min(euclidean_distance(x, w) + wormhole_path_distance(w, goal)
                       for w in wormholes)
    return min(direct, via_wormhole)
```

As @Caleth points out in the comments, this is all very tunable, and we can improve the first wormhole heuristic without doing a full path finding through the wormhole network by adding the distance between the last wormhole exit and the goal. Because we don't know which wormhole exit will be used last and we must not overestimate, we have to assume the exit closest to the goal:

```python
def wormhole_heuristic(x):
    direct = euclidean_distance(x, goal)
    to_next_wormhole = min(euclidean_distance(x, w) for w in wormholes)
    from_last_wormhole = min(euclidean_distance(w.exit, goal) for w in wormholes)
    via_wormhole = to_next_wormhole + from_last_wormhole
    return min(direct, via_wormhole)
```
{ "source": [ "https://softwareengineering.stackexchange.com/questions/358834", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/285335/" ] }
358,847
A while back I wrote this answer to a question on how to avoid having a getter and setter for every mutable variable. At the time, I had only a hard-to-verbalize gut feeling that this is a Bad Idea, but OP was explicitly asking how to do it. I searched here on why this might be a problem, and found this question , whose answers seem to take it as a given that using reflection unless absolutely necessary is bad practice. So, can someone verbalize why it is a given that reflection should be avoided, specifically in the case of the generic getter and setter?
Downsides of reflection in general

Reflection is harder to understand than straight-line code.

In my experience, reflection is an "expert-level" feature in Java. I would argue that most programmers never use reflection actively (i.e. consuming libraries that use reflection doesn't count). That makes code using it harder to understand for these programmers.

Reflection code is inaccessible to static analysis

Suppose I have a getter getFoo in my class and I want to rename it to getBar. If I use no reflection, I can just search the code base for getFoo and will find every place that uses the getter so I can update it, and even if I miss one, the compiler will complain. But if the place that uses the getter is something like callGetter("Foo") and callGetter does getClass().getMethod("get"+name).invoke(this), then the above method won't find it, and the compiler won't complain. Only when the code is actually executed will you get a NoSuchMethodException. And imagine the pain you're in if that exception (which is tracked) is swallowed by callGetter because "it's only used with hard-coded strings, it can't actually happen". (Nobody would do that, someone might argue? Except that the OP did exactly that in his SO answer. If the field is renamed, users of the generic setter would never notice, except for the extremely obscure bug of the setter silently doing nothing. Users of the getter might, if they're lucky, notice the console output of the ignored exception.)

Reflection code is not type-checked by the compiler

This is basically a big sub-point of the above. Reflection code is all about Object. Types are checked at runtime. Errors are discovered by unit tests, but only if you have coverage. ("It's just a getter, I don't need to test it.") Basically, you lose the advantage using Java over Python gained you in the first place.

Reflection code is unavailable for optimization

Maybe not in theory, but in practice, you won't find a JVM that inlines or creates an inline cache for Method.invoke. Normal method calls are available for such optimizations. That makes them a lot faster.

Reflection code is just slow in general

The dynamic method lookup and type checking necessary for reflection code is slower than normal method calls. If you turn that cheap one-line getter into a reflection beast, you might (I have not measured this) be looking at several orders of magnitude of slowdown.

Downside of generic getter/setter specifically

That's just a bad idea, because your class now has no encapsulation anymore. Every field it has is accessible. You might as well make them all public.
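As a small, hypothetical Java sketch of the "inaccessible to static analysis" point (the class and method names are invented for illustration): a rename or refactoring tool will update the direct call, but not the string inside the reflective one.

```java
import java.lang.reflect.Method;

class Person {
    private String foo = "hello";

    public String getFoo() {   // renaming this to getBar() silently breaks the reflective call below
        return foo;
    }
}

class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        Person p = new Person();

        // Direct call: checked by the compiler, found by refactoring tools
        String direct = p.getFoo();

        // Reflective call: the "Foo" string is invisible to the compiler;
        // after a rename this only fails at runtime with NoSuchMethodException
        Method m = p.getClass().getMethod("get" + "Foo");
        String reflective = (String) m.invoke(p);

        System.out.println(direct + " " + reflective);
    }
}
```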
{ "source": [ "https://softwareengineering.stackexchange.com/questions/358847", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/285358/" ] }
358,873
Pretty straightforward fundamental, albeit naive, question: Would having 4 states per "bit" rather than 2 mean twice the storage space? In case that isn't clear, I mean as in if every "storage structure", rather than only representing 2 values, (base 2 : 0, 1), could represent 4 values (base 4 : 0, 1, 2, 3).
The word you are looking for is not "bit" but "symbol." "Symbol" is the word used to describe the process of mapping hardware signals (such as voltages or magnetic patterns) into logical bits. If a symbol may have 4 states, it can encode 2 bits worth of information.

Of course, we aren't saying anything about the resource usage of the symbol in that argument. If you're sending symbols along a wire as voltages, the different symbols look more and more similar as you increase the number of states per symbol. If I have a 0-5V wire and 2 states per symbol (1 bit), my two states are 0V and 5V, with 5V between each symbol. If I have the same wire but encode 4 states per symbol (2 bits), my states are 0V, 1.66V, 3.33V and 5V. That's 1.66V between each symbol. It's now easier for noise to corrupt my signal. There is a law relating these, known as Shannon's Law, which relates the bandwidth (in bits) to the rate of errors that occur due to noise on the line. It turns out that there's a limit to how many bits you can cram across a wire. Using more symbols leads to more errors, requiring more error correction.

We do use this technique in real life. Digital television uses QAM-64, with 64 states (and thus 6 bits per symbol). Ethernet uses 4 voltage levels, so 2 bits per symbol.

Edit: I used bit transmission rates rather than storage because it's more common to see symbols with more states in transmission, so I could make the story more clear. If one wishes to specifically look at storage and storage alone, one could look at Multi-Level Cells in flash memory, as Someone Somewhere mentioned in the comments. Such memory uses the exact same approach, storing 3 bits as 8 different charge levels of a capacitor (or more!).
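The relationship the answer relies on can be stated compactly (this is standard information theory, not something from the original post): a symbol with $S$ distinguishable states carries

$$\text{bits per symbol} = \log_2 S$$

so 4 states give $\log_2 4 = 2$ bits, 64 states (QAM-64) give 6 bits, and $2^n$ charge levels in a flash cell give $n$ bits.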
{ "source": [ "https://softwareengineering.stackexchange.com/questions/358873", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/100972/" ] }
358,883
I have made a custom DialogBox that accepts a key in the constructor then sets itself up based on the key. It functions as a box for adding editing or deleting objects depending on which key is passed in the constructor. For example, if the ADD key is passed then there are TextBoxes to allow editing of parameters for submission. If the EDIT key is passed then some of those TextBoxes are replaced with Labels , and some of the text and functionality of the Buttons are modified. Is this bad design practice? Is it better to have three separate DialogBoxes for each function?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/358883", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13417/" ] }
359,014
The company I work for now doesn't implement continuous delivery yet. We still deploy the project manually to server, file by file. Which is best practice: to manually deploy one project artifact for each deployment or keep doing the file-by-file deployment?
Which is best practice? to manually deploy one project artifact each deployment or keep doing the file by file deployment? Neither. Best Practice is to automate your deployment, completely and exclusively. That means nobody gets to put anything onto a server manually. "To summarize the summary of the summary: People are a Problem." (Douglas Adams) People make mistakes. If one of the files that you forget to copy across is a shared "library" that's been extensively changed, you can bring the whole Production site crashing down.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/359014", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/248300/" ] }
359,027
I've been learning some C++, and often have to return large objects from functions that are created within the function. I know there's the pass by reference, return a pointer, and return a reference type solutions, but I've also read that C++ compilers (and the C++ standard) allow for return value optimization, which avoids copying these large objects through memory, thereby saving the time and memory of all of that. Now, I feel that the syntax is much clearer when the object is explicitly returned by value, and the compiler will generally employ the RVO and make the process more efficient. Is it bad practice to rely on this optimization? It makes the code clearer and more readable for the user, which is extremely important, but should I be wary of assuming the compiler will catch the RVO opportunity? Is this a micro-optimization, or something I should keep in mind when designing my code?
Employ the least astonishment principle . Is it you and only ever you who is going to use this code, and are you sure the same you in 3 years is not going to be surprised by what you do? Then go ahead. In all other cases, use the standard way; otherwise, you and your colleagues are going to run into hard to find bugs. For example, my colleague was complaining about my code causing errors. Turns out, he had turned off short-circuit Boolean evaluation in his compiler settings. I nearly slapped him.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/359027", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/262488/" ] }
359,194
I would like to understand why is redis called an in memory database when it provides persistence similar to databases like MySQL or Postgres: http://oldblog.antirez.com/post/redis-persistence-demystified.html https://redis.io/topics/persistence
Redis is an in-memory database because it keeps the whole data set in memory, and answers all queries from memory. Because RAM is faster than disks, this means Redis always has very fast reads. The drawback is that the maximum size of the data set is limited by the available RAM. Redis has various options to save the data to permanent storage. This permanent representation can then be used to rebuild the in-memory state of a Redis instance. However, this representation is not indexed and cannot be used to answer queries directly from disk. This is in stark contrast to databases like Postgres. They always keep the whole data set including indices on disk in a format that allows random access. Queries can be answered directly from the on-disk data. The database may load caches or indices into memory as an optimization, but that is not fundamentally necessary: the database can handle more data than fits into RAM. A larger difference between Redis and SQL databases is how they deal with writes, i.e. what durability guarantees they provide. There are a lot of tunable parameters here, so it's not correct to say “an SQL database is always more durable than a Redis database”. However, Redis usually commits data to permanent storage on a periodic basis, whereas Postgres will usually commit before each transaction is marked as complete. This means Postgres is slower because it commits more frequently, but Redis usually has a time window where data loss may occur even when the client was told that their update was handled successfully. This data loss may or may not be an acceptable tradeoff in a given use case. What kind of data set always fits into RAM, is a good match for a key–value datamodel, and doesn't need durability? A cache for some other data source. Redis is very good at being fast. SQL databases like Postgres are better at dealing with large data sets and providing ACID guarantees .
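As a concrete illustration of the "tunable parameters" mentioned above, here is a sketch of the relevant redis.conf directives (the specific values are examples chosen for illustration, not recommendations from the answer):

```conf
# Snapshotting (RDB): write the data set to disk if at least 1 key
# changed in the last 900 seconds
save 900 1

# Append-only file (AOF): log every write command
appendonly yes

# Flush the AOF to disk once per second; a crash can lose up to ~1s of writes
appendfsync everysec
```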
{ "source": [ "https://softwareengineering.stackexchange.com/questions/359194", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23899/" ] }
359,320
I am trying to adhere to the Single Responsibility Principle (SRP) as much as possible and have got used to a certain pattern (for the SRP on methods) heavily relying on delegates. I'd like to know if this approach is sound or if there are any severe issues with it.

For example, to check input to a constructor, I could introduce the following method (the Stream input is arbitrary; it could be anything):

```csharp
private void CheckInput(Stream stream)
{
    if (stream == null)
    {
        throw new ArgumentNullException();
    }
    if (!stream.CanWrite)
    {
        throw new ArgumentException();
    }
}
```

This method (arguably) does more than one thing:

1. Check the inputs
2. Throw different exceptions

To adhere to the SRP, I therefore changed the logic to:

```csharp
private void CheckInput(Stream stream, params (Predicate<Stream> predicate, Action action)[] inputCheckers)
{
    foreach (var inputChecker in inputCheckers)
    {
        if (inputChecker.predicate(stream))
        {
            inputChecker.action();
        }
    }
}
```

which supposedly only does one thing (does it?): check the input. For the actual checking of the inputs and throwing of the exceptions I have introduced methods like:

```csharp
bool StreamIsNull(Stream s)
{
    return s == null;
}

bool StreamIsReadonly(Stream s)
{
    return !s.CanWrite;
}

void Throw<TException>() where TException : Exception, new()
{
    throw new TException();
}
```

and can call CheckInput like:

```csharp
CheckInput(stream,
    (this.StreamIsNull, this.Throw<ArgumentNullException>),
    (this.StreamIsReadonly, this.Throw<ArgumentException>));
```

Is this any better than the first option at all, or do I introduce unnecessary complexity? Is there any way I can still improve this pattern, if it's viable at all?
SRP is perhaps the most misunderstood software principle.

A software application is built from modules, which are built from modules, which are built from... At the bottom, a single function such as CheckInput will only contain a tiny bit of logic, but as you go upward, each successive module encapsulates more and more logic, and this is normal.

SRP is not about doing a single atomic action. It's about having a single responsibility, even if that responsibility requires multiple actions... and ultimately it's about maintenance and testability: it promotes encapsulation (avoiding God Objects), it promotes separation of concerns (avoiding rippling changes through the whole codebase), and it helps testability by narrowing the scope of responsibilities.

The fact that CheckInput is implemented with two checks and raises two different exceptions is irrelevant to some extent. CheckInput has a narrow responsibility: ensuring that the input complies with the requirements. Yes, there are multiple requirements, but this does not mean there are multiple responsibilities. Yes, you could split the checks, but how would that help? At some point the checks must be listed in some way.

Let's compare:

```csharp
Constructor(Stream stream)
{
    CheckInput(stream);
    // ...
}
```

versus:

```csharp
Constructor(Stream stream)
{
    CheckInput(stream,
        (this.StreamIsNull, this.Throw<ArgumentNullException>),
        (this.StreamIsReadonly, this.Throw<ArgumentException>));
    // ...
}
```

Now CheckInput does less... but its caller does more! You have shifted the list of requirements from CheckInput, where they are encapsulated, to Constructor, where they are visible. Is it a good change? It depends:

- If CheckInput is only called there, it's debatable: on the one hand it makes the requirements visible, on the other hand it clutters the code.
- If CheckInput is called multiple times with the same requirements, then it violates DRY and you have an encapsulation issue.

It's important to realize that a single responsibility may imply a lot of work. The "brain" of a self-driving car has a single responsibility: driving the car to its destination. It is a single responsibility, but it requires coordinating a ton of sensors and actuators, taking lots of decisions, and it even has possibly conflicting requirements¹... however, it's all encapsulated, so the client doesn't care.

¹ safety of the passengers, safety of others, respect of regulations, ...
{ "source": [ "https://softwareengineering.stackexchange.com/questions/359320", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/143358/" ] }
359,408
What is the advantage of returning a pointer to a structure as opposed to returning the whole structure in the return statement of the function? I am talking about functions like fopen and other low level functions but probably there are higher level functions that return pointers to structures as well. I believe that this is more of a design choice rather than just a programming question and I am curious to know more about the advantages and disadvantages of the two methods. One of the reasons I thought that is would be an advantage to return a pointer to a structure is to be able to tell more easily if the function failed by returning NULL pointer. Returning a full structure that is NULL would be harder I suppose or less efficient. Is this a valid reason?
There are several practical reasons why functions like fopen return pointers to, instead of instances of, struct types:

1. You want to hide the representation of the struct type from the user;
2. You're allocating an object dynamically;
3. You're referring to a single instance of an object via multiple references.

In the case of types like FILE *, it's because you don't want to expose details of the type's representation to the user - a FILE * object serves as an opaque handle, and you just pass that handle to various I/O routines (and while FILE is often implemented as a struct type, it doesn't have to be). So, you can expose an incomplete struct type in a header somewhere:

```c
typedef struct __some_internal_stream_implementation FILE;
```

While you cannot declare an instance of an incomplete type, you can declare a pointer to it. So I can create a FILE * and assign to it through fopen, freopen, etc., but I can't directly manipulate the object it points to.

It's also likely that the fopen function is allocating a FILE object dynamically, using malloc or similar. In that case, it makes sense to return a pointer.

Finally, it's possible you're storing some kind of state in a struct object, and you need to make that state available in several different places. If you returned instances of the struct type, those instances would be separate objects in memory from each other, and they would eventually get out of sync. By returning a pointer to a single object, everyone refers to the same object.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/359408", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/223002/" ] }
359,558
I'm currently refactoring a large subsystem with a multi layered architecture, and I'm struggling to design an effective error logging \ handling strategy. Let's say that my architecture consists of the following three layers: Public Interface (I.E an MVC Controller) Domain Layer Data Access Layer My source of confusion is where I should implement the error logging \ handling: The easiest solution would be to implement the logging at the top level (I.E the Public Interface \ MVC Controller). However this feels wrong because it means bubbling up the exception through the different layers, and then logging it; rather than logging the exception at it's source. Logging the exception at it's source is obviously the best solution because I have the most information. My problem with this is that I can't catch every exception at source without catching ALL exceptions, and in the domain / public interface layer, this will lead to catching exceptions that have already been caught, logged and re-thrown by the layer below. Another possible strategy is a mix of #1 and #2; whereby I catch specific exceptions at the layer they are most likely to be thrown (I.E Catching, logging and re-throwing SqlExceptions in the Data Access Layer) and then log any further uncaught exceptions at the top level. However this would also require me to catch and relog every exception at the top level, because I can't distinguish between errors that have already been logged \ handled vs those that have not. Now, obviously this is an issue in most software applications, so there must be a standard solution to this problem that results in exceptions being caught at source, and logged once; however I just can't see how to do this myself. Note, the title of this question is very similar to ' Logging exceptions in a multi tier application" ', however the answers in that post are lacking detail and are not sufficient to answer my question.
To your questions:

"The easiest solution would be to implement the logging at the top level."

Having the exception bubble up to the top level is an absolutely correct and plausible approach. None of the higher-layer methods tries to continue some process after the failure, which typically can't succeed, and a well-equipped exception contains all the information necessary for logging. Doing nothing about exceptions helps you keep your code clean and focused on the main task instead of the failures.

"Logging the exception at its source is obviously the best solution because I have the most information."

That's half correct. Yes, most useful information is available there. But I'd recommend putting all of it into the exception object (if it isn't already there) instead of immediately logging it. If you log at a low level, you still need to throw an exception up to tell your callers that you didn't complete your job. This ends up in multiple logs of the same event.

Exceptions

My main guideline is to catch and log exceptions at the top level only, and all the layers below should make sure that all necessary failure information gets transported up to the top level. Within a single-process application, e.g. in Java, this mostly means not to try/catch or log at all outside the top level.

Sometimes you want some context information included in the exception log that's not available in the original exception, e.g. the SQL statement and parameters that were executed when the exception was thrown. Then you can catch the original exception and re-throw a new one containing the original one plus the context. Of course, real life sometimes interferes: in Java, sometimes you have to catch an exception and wrap it into a different exception type just to obey some fixed method signature. But if you re-throw an exception, make sure the re-thrown one contains all information needed for later logging.

If you are crossing an inter-process border, you often technically can't transfer the full exception object including the stack trace, and of course the connection might get lost. So here's a point where a service should log exceptions and then try its best to transmit as much of the failure information as possible across the line to its client. The service must make sure the client gets a failure notice, either by receiving a failure response or by running into a timeout in case of a broken connection. This will typically result in the same failure being logged twice: once inside the service (with more detail) and once at the client's top level.

Logging

I'm adding some sentences about logging in general, not only exception logging. Besides exceptional situations, you want important activities of your application to get recorded in the log as well. So, use a logging framework. Be careful about log levels (reading logs where debug information and serious errors are not flagged differently is a pain!). Typical log levels are:

- ERROR: Some function failed irrecoverably. That doesn't necessarily mean your whole program crashed, but some task couldn't be completed. Typically, you have an exception object describing the failure.
- WARNING: Something strange happened, but it didn't cause any task to fail (strange configuration detected, temporary connection breakdown causing some retries, etc.).
- INFO: You want to communicate some significant program action to the local system administrator (starting some service with its configuration and software version, importing data files into the database, users logging into the system, you get the idea...).
- DEBUG: Things you as the developer want to see when you are debugging some issue (but you'll never know in advance what you really need in case of this or that specific bug - if you could foresee it, you'd fix the bug).

One thing that's always useful is to log activities on external interfaces.

In production, set the log level to INFO. The results should be useful to a system administrator so he knows what's going on. Expect him to call you for assistance or bug-fixing for every ERROR in the log and half of the WARNINGs. Enable the DEBUG level only during real debugging sessions. Group the log entries into appropriate categories (e.g. by the fully-qualified class name of the code that generates the entry), allowing you to switch on debug logs for specific parts of your program.
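A small, self-contained Java sketch of the guideline above (the class names, the DataAccessException type and the stubbed query are invented for illustration): the lower layer adds context and re-throws, and only the top level logs.

```java
import java.sql.SQLException;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LayeredLoggingSketch {
    private static final Logger LOG = Logger.getLogger(LayeredLoggingSketch.class.getName());

    // Hypothetical wrapper exception carrying lower-layer context up the stack
    static class DataAccessException extends RuntimeException {
        DataAccessException(String message, Throwable cause) { super(message, cause); }
    }

    // Data access layer: add context (the SQL and parameters) and re-throw - no logging here
    static List<String> findOrders(String customerId) {
        String sql = "SELECT * FROM orders WHERE customer_id = ?";
        try {
            return runQuery(sql, customerId);
        } catch (SQLException e) {
            throw new DataAccessException("Query failed: " + sql + " [customerId=" + customerId + "]", e);
        }
    }

    // Stand-in for a real database call
    static List<String> runQuery(String sql, String param) throws SQLException {
        throw new SQLException("connection refused");
    }

    // Top level: the only place where the exception is caught and logged
    public static void main(String[] args) {
        try {
            findOrders("42");
        } catch (DataAccessException e) {
            LOG.log(Level.SEVERE, "Failed to load orders", e); // full chain logged exactly once
        }
    }
}
```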
{ "source": [ "https://softwareengineering.stackexchange.com/questions/359558", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/143983/" ] }
359,592
I see this term a lot in the context of software architecture ("domain-model", "domain-driven-design" etc.). I have googled it, but I get tons of different definitions. So what is it really?
The domain is the real-world context in which you're attempting to solve a problem using software. Each domain comes with expertise, vocabulary and tools that are part of that domain. A specific example of a domain could be something like "the automated machining of intricate parts using a high-speed rotating cutter." The software and hardware system that accomplishes this is called a CNC mill . Another example of a domain is the accounting department at a corporation. Further Reading Bounded Context by Martin Fowler
{ "source": [ "https://softwareengineering.stackexchange.com/questions/359592", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/200887/" ] }
359,667
I'm building a WPF application which implements the following features:

- Take user input and read data from databases
- Perform some calculations on it
- Show it to the user in multiple types of views and write changes back to the database

Proposed architecture: Database -> Entity Framework -> Repository -> Business Logic -> Data Service -> ViewModel

Reasons to use this architecture: multiple scenarios are present in the application (multiple views) and multiple databases. Hence, I'm willing to use a repository in the middle for abstraction. One caveat is that the context will be long-lived if the repository is implemented. To overcome this, is it okay to create a context and dispose of it in a using() block in each of the CRUD methods? Feel free to suggest alternate approaches.
Use one DbContext object per data access or transaction.

DbContext is a lightweight object; it is designed to be used once per business transaction. Making your DbContext a Singleton and reusing it throughout the application can cause other problems, like concurrency and memory-leak issues. DbContext essentially implements a Unit of Work; treat it accordingly.

Don't dispose DbContext objects.

Although DbContext implements IDisposable, you shouldn't manually dispose it, nor should you wrap it in a using statement. DbContext manages its own lifetime; when your data access request is completed, DbContext will automatically close the database connection for you. To understand why this is the case, consider what happens when you run a LINQ statement on an entity collection from a DbContext. If you return a lazy-loading IQueryable from your data access method, you stand up a pipeline that isn't actually executed until the client compels some data from it (by calling FirstOrDefault(), ToList() or iterating over it).

Further Reading

- Do I always have to call Dispose() on my DbContext objects?
- Why you shouldn't use Singleton DataContexts in Entity Framework
- Returning IEnumerable<T> vs. IQueryable<T>
- Should Repositories return IQueryable?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/359667", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/285747/" ] }
359,983
A bit of context: Earlier today I had to update some SQL code that another colleague of mine provided, and since it’s a pretty large script, it’s stored as a separate file (which is then read and executed at runtime). While doing this I accidentally reintroduced two bugs we had a few months back, namely: For whatever reason the ASCII file was encoded in UTF-16 (the colleague emailed me the file, which might have caused it). The script was missing initial SET statements (required due to some driver things on production, but not on a clean install locally). After debugging this for about an hour (again) I decided to write some unit tests to ensure this would never happen again (and include a quick way to fix it in the assertion message to provide an easy fix for future developers). However when I pushed this code another colleague (who is also our team lead) walks up to me and told me I shouldn't make these things again because: "These things don't belong in unit tests" "Unit tests should only be used to check the flow of your code" I’m pretty conflicted now since I still think what I’m doing isn’t wrong, as this bug wouldn’t be reintroduced in the future, however this colleague works as a senior and at the end of the day gets to decide what we spend our time on. What should I do? Am I wrong for doing it this way? Is it considered bad practice?
Most likely the tests you wrote are closer to integration or regression tests than unit tests. While the line can be very fuzzy and sometimes devolves into pedantry over what is or is not a unit test, I would go back to your colleague and ask where the tests you wrote should live, since they do add value by ensuring the correctness of the code. I would not focus too much on what is or isn't a unit test, and realize that even if it's an integration test, there could still be value in the test.
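If it helps to make that concrete, such a regression test could look roughly like the following JUnit sketch (the file location, the BOM check and the required SET prefix are assumptions, not details from the question):

    import static org.junit.Assert.*;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import org.junit.Test;

    public class SqlScriptRegressionTest {
        private static final Path SCRIPT = Paths.get("scripts/import.sql"); // assumed location

        @Test
        public void scriptIsNotUtf16Encoded() throws Exception {
            byte[] bytes = Files.readAllBytes(SCRIPT);
            boolean hasUtf16Bom = bytes.length >= 2
                    && ((bytes[0] == (byte) 0xFF && bytes[1] == (byte) 0xFE)
                     || (bytes[0] == (byte) 0xFE && bytes[1] == (byte) 0xFF));
            assertFalse("Re-save scripts/import.sql as UTF-8 without a BOM", hasUtf16Bom);
        }

        @Test
        public void scriptStartsWithRequiredSetStatements() throws Exception {
            String text = new String(Files.readAllBytes(SCRIPT), StandardCharsets.UTF_8);
            assertTrue("Add the SET statements required by the production driver",
                    text.trim().toUpperCase().startsWith("SET "));
        }
    }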
{ "source": [ "https://softwareengineering.stackexchange.com/questions/359983", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/223194/" ] }
360,016
I'm fetching a set of tuples from the database, and putting it into a map. The database query is costly. There is no obvious natural ordering of the elements in the map, but insertion order matters nonetheless. Sorting the map would be a heavy operation, so I want to avoid doing that, given that the query result is already sorted the way I want it. Therefore, I just store the query result into a LinkedHashMap , and return the map from a DAO method: public LinkedHashMap<Key, Value> fetchData() I have a method processData that should do some processing on the map - modifying some values, adding some new key/values. It is defined as public void processData(LinkedHashMap<Key, Value> data) {...} However, several linters (Sonar etc) complain that The type of 'data' should be an interface such as 'Map' rather than the implementation "LinkedHashMap" ( squid S1319 ). So basically it is saying that I should have public void processData(Map<Key, Value> data) {...} But I want the method signature to say that map order matters - it matters to the algorithm in processData - so that my method is not passed just any random map. I don't want to use SortedMap , because it (from the javadoc of java.util.SortedMap ) "is ordered according to the natural ordering of its keys, or by a Comparator typically provided at sorted map creation time." My keys don't have a natural ordering , and creating a Comparator to do nothing seems verbose. And I would still want it to be a map, to take advantage of put to avoid duplicate keys etc. If not, data could have been a List<Map.Entry<Key, Value>> . So how do I say that my method wants a map that is already sorted ? Sadly, there is no java.util.LinkedMap interface, or I would have used that.
So use LinkedHashMap. Yes, you should use Map over a specific implementation whenever possible, and yes, this is best practice. That said, this is an oddly specific situation where the implementation of Map actually matters. This won't be true for 99.9% of cases in your code when you use Map, and yet here you are, in this 0.1% situation. Sonar can't know this, and so Sonar simply tells you to avoid using the specific implementation because that would be correct in most cases. I would argue that if you can make a case for using a specific implementation, don't try to put lipstick on a pig. You need a LinkedHashMap, not a Map. This said, if you are new to programming and stumble upon this answer, don't think this allows you to go against best practice, because it doesn't. But when replacing one implementation for another isn't acceptable, the only thing you can do is use that specific implementation, and Sonar be damned.
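A minimal sketch of the two signatures side by side (Key and Value stand in for the question's own types; the class names are invented):

    import java.util.LinkedHashMap;

    class Key {}
    class Value {}

    class DataDao {
        // The insertion order produced by the (already sorted) query is part of the
        // contract here, so the concrete type is deliberately exposed.
        LinkedHashMap<Key, Value> fetchData() {
            LinkedHashMap<Key, Value> result = new LinkedHashMap<>();
            // ... fill from the query result, preserving its order ...
            return result;
        }
    }

    class DataProcessor {
        // Accepting LinkedHashMap documents that the algorithm relies on insertion order;
        // a plain Map parameter would silently accept a HashMap with arbitrary iteration order.
        void processData(LinkedHashMap<Key, Value> data) {
            data.forEach((key, value) -> {
                // ... order-dependent processing ...
            });
        }
    }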
{ "source": [ "https://softwareengineering.stackexchange.com/questions/360016", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/52573/" ] }
360,123
In C++, is it bad practice to create blocks of code inside some function, such as the following: bool f() { { double test = 0; test = // some other variable outside this function, for example. if (test == // some value) return true; } { double test = 0; test = // some variable outside this function, different from the last one. if (test == // some value) return true; } return false; } The point of doing this would be to use the same variable name "test" multiple times, for the same type of procedure. In my actual project, I have multiple variables and am performing multiple tests. I don't really want to keep creating new variables with different names for each of the tests, considering how similar the tests are. Is it bad practice to insert blocks of code so that the variables go out of scope after each test, and then I can use their names again? Or should I seek another solution? It should be noted that I considered using the same set of variables for all my tests (and just setting them all to 0 after each test was over), but I was under the impression this may be bad practice.
Blocks are perfectly reasonable if you're using them to scope some resource. Files, network connections, memory allocations, database transactions, whatever. In those cases, the block is actually part of the logical structure of the code: you spawn a resource, it exists for some period of time, and then it goes away at a designated time. But if all you're doing is scoping a name , then I would say that they are bad practice. Generally speaking, of course; special circumstances can apply. For example, if this function were generated by some code generation system, testing framework, or the like, then blocks for the sake of name scoping is a reasonable thing. But you'd be talking about code written for the purpose of a machine, not a human. If a human is writing code where they need to reuse names within the same function, I would say that those blocks probably need to be separate functions. Especially if those names are being used with different types and/or meaning within those blocks.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/360123", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/287384/" ] }
360,385
I've always used public methods, and I was recently advised by one of my friends to always avoid defining methods as public as far as possible. Though I have never worked in any commercial company, I have never "really" understood the concept of why public methods are not "safe to use at times". I have worked a lot with C#, and please don't get me wrong: I have used other access types such as internal, private and protected, and I know how they "work", but I never really understood why you should define methods with those access levels, especially in production.
The methods, properties and constructors (i.e. members of a class) that you define using a public accessor determine the surface area that the users of your class will be allowed to interact with. Any method that you don't want to be part of this surface area should not be made public . Reasons why some members might not be public: They are implementation details; they implement behavior needed by the public members, but are not meant to be called from the outside directly (i.e. private ). They are meant to be accessed in a derived class, but not called directly. (i.e. protected ). Think of it in terms of your house. You don't let the tax man come into your house, find your checkbook and write himself a check. Rather, the tax man sends you a bill, you review the bill, write a check and send it to him. The sending of the bill and the check are both public acts. The acts of reviewing the bill and writing the check are implementation details performed in the privacy of your own home that the tax man doesn't need to know anything about. Further Reading Encapsulation
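The idea is the same in Java as in C#; here is a hypothetical sketch of the tax example, where only the interactions the outside world needs are public:

    public class Household {
        private long bankBalanceInCents = 500_000;

        // Public surface area: the tax office may present a bill and gets a payment back.
        public long receiveTaxBill(long amountInCents) {
            if (!isPlausible(amountInCents)) {
                throw new IllegalArgumentException("Disputed bill");
            }
            return writeCheck(amountInCents);
        }

        // Implementation details, invisible from the outside.
        private boolean isPlausible(long amountInCents) {
            return amountInCents >= 0 && amountInCents <= bankBalanceInCents;
        }

        private long writeCheck(long amountInCents) {
            bankBalanceInCents -= amountInCents;
            return amountInCents;
        }
    }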
{ "source": [ "https://softwareengineering.stackexchange.com/questions/360385", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/281027/" ] }
360,522
In the article: Why POCO , there is this sentence: Maciej Sobczak puts it well: “I just don’t like when somebody gives me half of the language and tells me that it’s for my own protection”. I don't understand what he means, even though C# is owned by Microsoft & Java is owned by Oracle , it doesn't mean they hold half of the language, does it? I didn't find any evidence to prove that sentence, and I'm really curious about this. And even more curious about the 'for my own protection' part.
Sobczak isn't talking about corporate ownership. The "half" language that he is missing is all those things that you can't do in many modern languages, even though as a well-educated computer expert he knows they could be made possible: inherit from as many classes as you like. Assign any object to any other without type constraints. Control allocation and freeing resources manually instead of trusting the compiler and run-time to do it for him. The thing is, all those restrictions were put into programming languages for a reason. We did have languages that allowed all this. Over time we found that the average programmer is better off with a certain amount of restrictions and hand-holding, because the potential of making really bad errors is just too great to be worth the additional power and expressivity. (Obviously, this sometimes annoys programmers who wouldn't really need that much hand-holding. Their complaints are sometimes legitimate. But people are notoriously bad at assessing their own skills, and many who think they don't need the safeguards do, in fact, very much need them. It isn't always easy to tell apart actual superior intellects who feel held back by restrictions in high-level languages from average coders who just think that complaining will make them look superior, or who don't know any better.)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/360522", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/226532/" ] }
360,525
I had an interesting discussion today with another developer about how to approach a class with a method that accepts a string and outputs a string. Imagine something like the following, which is completely made up for the purpose of example: public string GetStringPart(string input) { //Some input validation which is removed for clarity if(input.Length > 5) return input.Substring(0,1); if(input.Substring(0,1) == "B") return input.Substring(0,3); return string.Empty; } A function which has some logic based on its string input is added to a project using DI and has a DI Container in place. Would you add this new class with an interface and inject it where needed, or would you make it a static class? What are the pros and cons of each? Why would you (or not) want to make this something used with constructor injection rather than just accessed when required anywhere?
There is no reason why this needs to be injected. This is just a function: it has no dependencies, so just call it. It can even be static if you want, as it looks to be pure. One can write unit tests against this with no difficulty. If it is used in other classes, unit tests can still be written. There is no need to abstract away functions with no dependencies; it's overkill. If this becomes more complex, then maybe passing an interface into a constructor or method is warranted. But I wouldn't go down that road unless I had complex GetStringPart logic based on location, etc.
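A rough Java paraphrase of the same point (the original example is C#, and the rules below are only an approximation of it): a pure function with no dependencies can be a static method and still be trivially unit tested:

    public final class StringParts {
        private StringParts() {}

        // Pure function: the output depends only on the input, so there is nothing to inject.
        public static String getStringPart(String input) {
            if (input == null || input.isEmpty()) {
                return "";
            }
            if (input.length() > 5) {
                return input.substring(0, 1);
            }
            if (input.startsWith("B")) {
                return input.substring(0, Math.min(3, input.length()));
            }
            return "";
        }
    }

    // A plain JUnit test, no mocks or container needed:
    class StringPartsTest {
        @org.junit.Test
        public void longInputReturnsFirstCharacter() {
            org.junit.Assert.assertEquals("a", StringParts.getStringPart("abcdef"));
        }
    }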
{ "source": [ "https://softwareengineering.stackexchange.com/questions/360525", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/93050/" ] }
360,965
The specific example in mind is a list of filenames and their sizes. I can't decide whether each item in the list should be of the form {"filename": "blabla", "size": 123} , or just ("blabla", 123) . A dictionary seems more logical to me because to access the size, for example, file["size"] is more explanatory than file[1] ... but I don't really know for sure. Thoughts?
I would use a namedtuple : from collections import namedtuple Filesize = namedtuple('Filesize', 'filename size') file = Filesize(filename="blabla", size=123) Now you can use file.size and file.filename in your program, which is IMHO the most readable form. Note namedtuple creates immutable objects like tuples, and they are more lightweight than dictionaries, as described here .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/360965", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/239550/" ] }
360,979
I had an unusual, brief conversation with a very senior architect about dynamic and static languages. He said that company data shows that there is evidence for higher productivity when static languages are used. Note, it's a large company with a long history. To my (and others') surprise, the metric he used was lines of code added. He quickly dismissed objections regarding the metric, saying that within the same company, with similar culture, line of business and with enough data, differences (as to unique situations and capabilities of individuals) blend enough so that the SLOC metric is useful for comparing the productivity of tools and languages. While I don't think that this claim is backed by rigorous statistical analysis, is there some evidence in the industry that would support this line of thinking?
The argument of the senior architect could mean two things. It may mean that an average developer in the company produces more lines of code when using static languages than when using dynamic ones. For instance, if fifteen developers work with Java for six months, they will write 100 KLOC, and if the same fifteen developers work with Python for six months, they will write only 50 KLOC. There is no correlation between LOC and productivity here. What if it takes four times more lines of code in Java to produce the same feature as in Python? If that is true, using Python would result in twice the productivity, based on the KLOC metrics above. He may also mean that an average developer in the company produces fewer lines of code when using static languages than when using dynamic ones: fifteen developers would write 100 KLOC in Java in six months, or 200 KLOC in Python. While fewer lines of code are usually better (less code to write, to read and to maintain), it's still unclear how many features the Java developers produced compared to the Python ones. Maybe they wrote half the lines of code compared to the Python developers, but also produced half the number of features? In both cases, LOC is not a valuable metric, because the same feature wouldn't translate into the same number of lines of code in different languages. Some languages tend to be more verbose; others—more compact. While in some cases compactness is valuable, there is no general rule for that. An extreme example would be the Brainfuck language, which is extremely compact, but which is not popular for its readability. Comparing even similar languages could be tricky: for instance, when it comes to curly braces, Java follows K&R style, while in C#, the opening curly brace is on its own line in most cases when following the official style, which leads to an artificial increase of LOCs for C#. And what happens when one compares a procedural language with an object-oriented one, or with a functional language? Instead of using an error-prone metric, the senior architect could rely on a group of metrics which do measure productivity when used together: the number of features developed per month, the number of bugs introduced in the code base and the time spent solving those bugs, the evolution of the technical debt, etc. This comparison could be tricky at the beginning, since one has to take into account the unfamiliarity of the team with the new language. Once the team becomes familiar enough with it, the choice should be based on the stable metrics, as well as, for the most part, on the preference of the members of the team themselves. LOC has value in some narrow situations. For instance, it could give a hint about the size of the project and parts of the project (and on average it correlates with function points, while often being easier to measure), or it could indicate the methods and classes which may need further attention because of their large size. However, LOC should be used with care, since it is misused too often by persons who imagine some correlation between unrelated things. The most disastrous usage of LOC in human terms was, in the past, the attempt to measure the productivity of an individual developer based on the LOCs written per month.
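As a made-up illustration of why the same feature does not translate into the same number of lines, both methods below do exactly the same work:

    import java.util.List;

    class Totals {
        // One statement, one "line of code":
        static int sumOfPositives(List<Integer> values) {
            return values.stream().filter(v -> v > 0).mapToInt(Integer::intValue).sum();
        }

        // The same feature written in a more verbose style: several times the LOC,
        // zero additional productivity.
        static int sumOfPositivesVerbose(List<Integer> values) {
            int total = 0;
            for (Integer value : values) {
                if (value > 0) {
                    total = total + value;
                }
            }
            return total;
        }
    }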
{ "source": [ "https://softwareengineering.stackexchange.com/questions/360979", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/210415/" ] }
361,112
I am a bit confused by the MSDN C# documentation, which states that & and | are logical operators and that && and || are conditional operators. I keep calling &&, || and ! logical operators, so am I wrong?
I am a bit confused by the MSDN C# documentation which states that & and | are logical operators and that && and || are conditional operators. I keep calling && , || and ! logical operators, so I am wrong? No; you're correct. There are numerous small, mostly unimportant nomenclature errors in the MSDN documentation; I tried to get as many of them out as I could, but in cases where it is not egregiously wrong and misleading, it's not always a wise use of time. Go to the specification if you want a definitive statement about the name of a C# feature. So: the relevant authority is the C# specification, which states in section 7.11: The & , ^ , and | operators are called the logical operators. It then goes on to further break down the built-in logical operators into integer, enumeration, Boolean and nullable-Boolean logical operators. There are also user-defined logical operators; see the spec for details. In section 7.12 we have The && and || operators are called the conditional logical operators. They are also called the “short-circuiting” logical operators. So all of them are logical operators. Some of them are conditional logical operators . What makes the conditional logical operators conditional ? One might make a specious guess that it is because they are typically used in conditional statements ( if ) or conditional expressions ( ? : ). The real reason is given by the specification: The && and || operators are conditional versions of the & and | operators: The operation x && y corresponds to the operation x & y , except that y is evaluated only if x is not false. The operation x || y corresponds to the operation x | y , except that y is evaluated only if x is not true. The conditional logical operators are thus named because the right hand operand is evaluated conditionally depending on the value of the left hand operand. We can see this more vividly by noting that the conditional logical operators are just "syntactic sugars" for conditional expressions . x && y is simply a more pleasant way to write x ? y : false , and x || y is simply a more pleasant way to write x ? true : y . The conditional logical expressions are actually conditional expressions. There is also a user-defined form of the conditional logical operator, and it is a little tricky. See the specification for details. Further reading, if this subject interests you: https://ericlippert.com/2015/11/02/when-would-you-use-on-a-bool/ https://ericlippert.com/2012/03/26/null-is-not-false-part-one/ https://ericlippert.com/2012/04/12/null-is-not-false-part-two/ https://ericlippert.com/2012/04/19/null-is-not-false-part-three/
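For readers more familiar with Java: the same distinction exists there for boolean operands, and a small made-up demonstration shows the "conditional" part in action:

    public class ShortCircuitDemo {
        static boolean touched;

        static boolean rightHandSide() {
            touched = true;
            return true;
        }

        public static void main(String[] args) {
            touched = false;
            boolean a = false & rightHandSide();   // logical AND: both sides evaluated
            System.out.println(touched);           // prints true

            touched = false;
            boolean b = false && rightHandSide();  // conditional AND: right side skipped
            System.out.println(touched);           // prints false
        }
    }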
{ "source": [ "https://softwareengineering.stackexchange.com/questions/361112", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/60327/" ] }
361,395
I'd like to use Youtube as an example: they use IDs in the form of PEckzwggd78 . Why don't they use simple integers? Or imgur.com - they also use IDs such as 9b6tMZS for images and galleries. Not sequential integers. Why don't they use integers (particularly sequential ones)? In what cases is it a wise decision to use such string IDs instead of integers?
Youtube can't use sequential IDs for two reasons: Its databases are almost certainly distributed, making sequential numbering complicated. It has a privacy option "Unlisted videos": those that don't show up in the search results, but are available if you know the ID. Therefore, the video IDs should be reasonably random and unpredictable. Whether the ID is represented by digits only, or by a combination of letters and digits, is irrelevant: there is a trivial mapping from one representation to another.
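As a sketch of what "reasonably random and unpredictable" can mean in practice (this is not YouTube's actual scheme), one common approach is to generate random bytes and URL-encode them:

    import java.security.SecureRandom;
    import java.util.Base64;

    public class OpaqueIds {
        private static final SecureRandom RANDOM = new SecureRandom();

        // 8 random bytes -> 11 URL-safe characters, i.e. "PEckzwggd78"-style strings.
        public static String newId() {
            byte[] bytes = new byte[8];
            RANDOM.nextBytes(bytes);
            return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        }

        public static void main(String[] args) {
            System.out.println(newId()); // e.g. "q3ZlYx1u9dI"
        }
    }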
{ "source": [ "https://softwareengineering.stackexchange.com/questions/361395", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/289445/" ] }
361,458
If something can be generated, then that thing is data, not code. Given that, isn't this whole idea of source code generation a misunderstanding? That is, if there is a code generator for something, then why not make that something a proper function which can receive the required parameters and do the right action that the "would generated" code would have done? If it is being done for performance reasons, then that sounds like a shortcoming of the compiler. If it is being done to bridge two languages, then that sounds like a lack of interface library. Am I missing something here? I know that code is data as well. What I don't understand is, why generate source code ? Why not make it into a function which can accept parameters and act on them?
Is source code generation an anti pattern? Technically, if we generate code, it is not source even if it is text that is readable by humans. Source Code is original code, generated by a human or other true intelligence, not mechanically translated and not immediately reproducible from (true) source (directly or indirectly). If something can be generated, then that thing is data, not code. I would say everything is data anyway. Even source code. Especially source code! Source code is just data in a language designed to accomplish programming tasks. This data is to be translated, interpreted, compiled, generated as needed into other forms — of data — some of which happen to be executable. The processor executes instructions out of memory. The same memory that is used for data. Before the processor executes instructions, the program is loaded into memory as data. So, everything is data, even code. Given that [generated code is data], isn't this whole idea of code generation a misunderstanding? It is perfectly fine to have multiple steps in compilation, one of which can be intermediate code generation as text. That is, if there is a code generator for something, then why not make that something a proper function which can receive the required parameters and do the right action that the "would generated" code would have done? That's one way, but there are others. The output of code generation is text, which is something designed to be used by a human. Not all text forms are intended for human consumption. In particular, generated code (as text) is typically intended for compiler consumption, not human consumption. Source code is considered the original: the master — what we edit & develop; what we archive using source code control. Generated code, even when human-readable text, is typically regenerated from the original source code. Generated code, generally speaking, doesn't have to be under source control since it is regenerated during the build.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/361458", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/158474/" ] }
361,517
I'm writing a program in Java where at one point I need to load a password for my keystore. Just for fun, I tried to keep the scope of my password variable as small as possible by doing this: //Some code .... KeyManagerFactory keyManager = KeyManagerFactory.getInstance("SunX509"); KeyStore keyStore = KeyStore.getInstance("JKS"); { char[] password = getPassword(); keyStore.load(new FileInputStream(keyStoreLocation), password); keyManager.init(keyStore, password); } ... //Some more code Now, I know in this instance that's kinda dumb. There are a bunch of other things I could've done, most of them actually better (I could've not used a variable at all). However, I was curious if there's a case where doing this wasn't so dumb. The only other thing I can think of is if you wanted to reuse common variable names like count or temp, but good naming conventions and short methods make it unlikely that would be useful. Is there a case where using blocks only to reduce variable scope makes sense?
First, speaking to the underlying mechanics: In C++ scope == lifetime b/c destructors are invoked on the exit from the scope. Further, an important distinction is that in C/C++ we can declare local objects. In the runtime for C/C++ the compiler will generally allocate a stack frame for the method that is as large as may be needed, in advance, rather than allocating more stack space on entry to each scope (that declares variables). So, the compiler is collapsing or flattening the scopes. The C/C++ compiler may reuse stack storage space for locals that don't conflict in usage lifetime (it generally will use analysis of the actual code to determine this rather than scope, b/c that is more accurate than scope!). I mention C/C++ b/c Java's syntax, i.e. curly braces and scoping, is at least in part derived from that family. And also because C++ came up in question comments. By contrast, it is not possible to have local objects in Java: all objects are heap objects, and all locals/formals are either reference variables or primitive types (btw, the same is true for statics). Further, in Java scope and lifetime are not exactly equated: being in/out of scope is mostly a compile-time concept going to accessibility and name conflicts; nothing really happens in Java on scope exit with regard to cleanup of variables. Java's garbage collection determines (the ending point of the) lifetime of objects. Also Java's bytecode mechanism (the output of the Java compiler) tends to promote user variables declared within limited scopes to the top level of the method, because there is no scope feature at the bytecode level (this is similar to C/C++ handling of the stack). At best the compiler could reuse local variable slots, but (unlike C/C++) only if their type is the same. (Though, to be sure the underlying JIT compiler in the runtime could reuse the same stack storage space for two differently typed (and different slotted) locals, with sufficient analysis of the bytecode.) Speaking to the programmer advantage, I would tend to agree with others that a (private) method is a better construct even if used only once. Still, there is nothing really wrong with using it to deconflict names. I have done this in rare circumstances when making individual methods is a burden. For example, when writing an interpreter using a large switch statement within a loop, I might (depending on factors) introduce a separate block for each case to keep the individual cases more separate from each other, instead of making each case a different method (each of which is only invoked once). (Note that as code in blocks, the individual cases still have direct access to the enclosing loop's control flow ("continue;" applies to the loop, and a labeled "break;" can exit it), whereas methods would require returning booleans and the caller's use of conditionals to gain access to these control flow statements.)
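A stripped-down Java sketch of that last situation (the interpreter and its opcodes are invented): each case gets its own block, which is exactly what lets the temporary name be reused, while continue still refers to the enclosing loop:

    public class TinyInterpreter {
        public static int run(int[] program) {
            int accumulator = 0;
            for (int op : program) {
                switch (op) {
                    case 0: {                 // NOP: continue refers to the enclosing loop
                        continue;
                    }
                    case 1: {                 // each case gets its own block,
                        int operand = 10;     // so the temporary name "operand"
                        accumulator += operand;
                        break;                // leaves the switch, as usual
                    }
                    case 2: {
                        int operand = accumulator * 2;  // ...can be declared again here
                        accumulator = operand;
                        break;
                    }
                    default:
                        return accumulator;   // unknown op: stop interpreting
                }
            }
            return accumulator;
        }
    }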
{ "source": [ "https://softwareengineering.stackexchange.com/questions/361517", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/239446/" ] }
361,605
Say I have an entity that has a "type" attribute. There could be 20+ possible types. Now I'm asked to implement something that would allow changing the type from A->B, which is the only use case. So should I implement something that allows arbitrary changes of type as long as they are valid types? Or should I ONLY allow it to change from A->B per the requirement and reject any other type change such as B->A or A->C? I can see the pros and cons from both sides, where a generic solution would mean less work in case a similar requirement comes up in the future, but it would also mean more chance of going wrong (though we 100% control the caller at this point). A specific solution is less error prone, but requires more work in the future if a similar requirement arises. I keep hearing that a good developer should try to anticipate change and design the system so that it's easy to extend in the future, which sounds like the generic solution is the way to go? Edit: Adding more details to my not-so-specific example: The "generic" solution in this case requires less work than the "specific" solution, as the specific solution requires validation on the old type as well as the new type, while the generic solution only needs to validate the new type.
My rule of thumb: the first time you encounter the problem, only solve the specific problem (this is the YAGNI principle) the second time you run into the same problem, consider generalizing the first case, if it's not a lot of work once you have three specific cases where you'd be able to use the generalized version, then you should start really planning the generalized version -- by now, you should understand the problem well enough to actually be able to generalize it. Of course, this is a guideline and not a hard-and-fast rule: the real answer is to use your best judgement, on a case by case basis.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/361605", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/289728/" ] }
361,841
I am working in a governmental institution. The technology being used here and the methods for developing software are quite old-fashioned. They have tons of storage space but no appropriate space to keep and maintain the applications that are used to automate most of the work here. The institution would not allow me to use SCM software like Git or SVN. What would be the best approach to keep code quality and be able to add new features to the apps later on? How can I remember the changes I have made to the code without breaking it? EDIT: I forgot to mention, they have network drives for each of the computers, and somehow these network drives make or save backups periodically. However, if I don't create my own plan that allows me to save my work and add new features without breaking the existing code, there is no big advantage over an SCM solution. EDIT: Since many people suggested portable Git, I have to add more info. I tried installing VisualSVN Server, but it failed because I don't have admin privileges to install it. I also tried downloading the regular Git shell, but the firewall or the network settings didn't allow me to access the Git download page. I even tried sending portable Git to my email, which is Gmail. Google detected the exe file in the package, and it too didn't allow me to download the portable Git version on my work computer. Another thing I have to mention: the network policy applied to the computers throughout the institution does not allow using USB storage devices. You can use the USB ports to charge a smartphone or power some gadgets like small speakers. Also, as some people mentioned, there are computers on which not even Internet access is allowed.
You can loosely replicate the role source control plays with three simple tools: Back-up software (Commits/Check-ins) Folders (Branches) Performing a directory merge between two directories using a tool like KDiff3 (Merging branches) Basically your workflow becomes: Create a new folder (new branch) Copy files to the new folder (new branch) from an existing folder (existing branch) Make a back-up of that folder (finish creating the new branch) Do some work Make a back-up of the new folder (commit) Do a directory merge from one folder to another (merge) Do another back-up in the other folder (commit the merge) The more monolithic source control systems, like SVN or TFS, basically do this for you behind the scenes. Now, the reality is this is like a Bus Company telling its drivers that they can't drive buses that have a battery, forcing the drivers to push the bus down hill and then pop the clutch to start the bus... this is terrible and indicates that the current management doesn't know anything about running a bus garage. My condolences. But at least you can get the bus started.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/361841", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/151166/" ] }
362,071
I'm getting into Clean Architecture and lifting my Android level from MVC to MVP, introducing DI with Dagger 2, reactivity with RxJava 2, and of course Java 8. In MVP clean architecture there is a layer between the entities (in data stores) and the presenters that should access them. This layer is the "Use Case". A use case is ideally an interface that implements ONE operation on ONE entity. I also know that Clean Architecture "is screaming", in the sense that its projects are highly readable due to the high number of classes in them. Now, in my project, I have something like 6 different entities, and of course, each entity repository has at least 4 methods (usually get, add, delete, update) to access them, so 6 * 4 = 24. If I have understood Clean Architecture correctly so far, I will have 24 use cases. This is a lot of classes compared to just 6 controllers in MVC. Do I really have to make 24 use cases? I would really appreciate a clarification from someone who has already used it successfully. Thanks, Jack
Do I really have to make 24 use cases? Only if everything you write is CRUD. Picture the standard Clean Architecture diagram of concentric circles: Your assertion is that you will have six different entities, and 4 methods (Create, Read, Update and Delete) for each entity. But that is only true in the yellow circle in the middle of the diagram (the Entities layer). It is pointless to create 24 methods in the Use Cases layer that merely pass through CRUD calls to the Entities layer. A Use Case is not "Add a Customer Record." A Use Case is more along the lines of "Sell an item to a customer" (which involves Customer, Product, and Inventory entities) or "Print an invoice" (which involves the same entities, in addition to Invoice Header and Invoice Line Items). When you create Use Cases, you should be thinking about business transactions, not CRUD methods. Further Reading Aggregate - a cluster of domain objects that can be treated as a single unit
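A hypothetical Java sketch of the difference (all names are invented, and the collaborator types are reduced to the minimum needed to compile): the use case models one business transaction and coordinates several entities instead of mirroring a repository's CRUD methods:

    interface Customer {}
    interface Product {}
    interface CustomerRepository { Customer findById(long id); }
    interface ProductRepository { Product findById(long id); }
    interface InventoryRepository { void reserve(long productId, int quantity); }
    class Receipt {
        Receipt(Customer customer, Product product, int quantity) {}
    }

    // The use case "Sell an item to a customer" touches several entities at once.
    public class SellItemToCustomer {
        private final CustomerRepository customers;
        private final ProductRepository products;
        private final InventoryRepository inventory;

        public SellItemToCustomer(CustomerRepository customers, ProductRepository products,
                                  InventoryRepository inventory) {
            this.customers = customers;
            this.products = products;
            this.inventory = inventory;
        }

        public Receipt execute(long customerId, long productId, int quantity) {
            Customer customer = customers.findById(customerId);
            Product product = products.findById(productId);
            inventory.reserve(productId, quantity);           // the business rule lives here,
            return new Receipt(customer, product, quantity);  // not in 24 pass-through CRUD classes
        }
    }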
{ "source": [ "https://softwareengineering.stackexchange.com/questions/362071", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/282375/" ] }
362,179
C#'s is operator and Java's instanceof operator allow you to branch on the interface (or, more uncouthly, its base class) an object instance has implemented. Is it appropriate to use this feature for high-level branching based on the capabilities an interface provides? Or should a base class provide boolean variables as an interface to describe the capabilities an object has? Example: if (account is IResetsPassword) ((IResetsPassword)account).ResetPassword(); else Print("Not allowed to reset password with this account type!"); vs. if (account.CanResetPassword) ((IResetsPassword)account).ResetPassword(); else Print("Not allowed to reset password with this account type!"); Are there any pitfalls to using interface implementation for capability identification? This example was just that, an example. I am wondering about a more general application.
Should an object's capabilities be identified exclusively by the interfaces it implements? An object's capabilities should not be identified at all. The client using an object shouldn't be required to know anything about how it works. The client should only know things it can tell the object to do. What the object does, once it's been told, is not the client's problem. So rather than if (account is IResetsPassword) ((IResetsPassword)account).ResetPassword(); else Print("Not allowed to reset password with this account type!"); or if (account.CanResetPassword) ((IResetsPassword)account).ResetPassword(); else Print("Not allowed to reset password with this account type!"); consider account.ResetPassword(); or account.ResetPassword(authority); because account already knows if this will work. Why ask it? Just tell it what you want and let it do whatever it's going to do. Imagine this done in such a way that the client doesn't care if it worked or not, because that's something else's problem. The client's job was only to make the attempt. It's done that now and it's got other things to deal with. This style has many names, but the name I like the most is tell, don't ask. It's very tempting when writing the client to think you have to keep track of everything and so pull towards you everything you think you need to know. When you do that, you turn objects inside out. Value your ignorance. Push the details away and let the objects deal with them. The less you know, the better.
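A rough Java version of that "tell, don't ask" shape (the account types are invented for illustration):

    // The account decides for itself what resetting means.
    interface Account {
        void resetPassword();
    }

    class StandardAccount implements Account {
        @Override
        public void resetPassword() {
            // ... generate a token, send a mail, etc.
        }
    }

    class ReadOnlyAccount implements Account {
        @Override
        public void resetPassword() {
            // Deliberately does nothing (or notifies an administrator);
            // the caller never needs to ask "can you?" first.
        }
    }

    class PasswordController {
        void handleResetRequest(Account account) {
            account.resetPassword();   // no instanceof, no capability flags
        }
    }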
{ "source": [ "https://softwareengineering.stackexchange.com/questions/362179", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/173910/" ] }
362,286
When I review the changes in a pull request, I sometimes stumble upon a comment with a "TODO" note, which may be there for different reasons, in our case mostly because: the solution used to solve a problem can be improved but would require a significantly bigger time investment, so the author chose a quicker solution and left a comment that a better option is potentially available; or there is temporary code to work around an existing bug that should be fixed soon. Knowing that TODOs generally stay in the codebase for the lifetime of the codebase, how should I react to them in a pull request? How can I politely request to avoid them, or, if they are really justified, how can I make sure the author of the PR follows up on them later?
When you say that they "generally stay in the codebase for the lifetime of the codebase" in your team/department/organization, consider the following: Write it down in your DoD that TODO , FIXME , or similar tags should be avoided. Use a static code analysis tool such as SonarQube to automatically mark the build unstable. Temporarily allow them if, and only if, there is a corresponding ticket in your issue tracker. Then, the code may look like TODO [ID-123] Description ... As mentioned in my comment , the last statement probably only makes sense in an environment that doesn't let tickets rot (e.g. if you follow a zero-bug policy ). Personally, I think TODO s are sometimes reasonable, but one should not use them excessively. Taken from Robert C. Martin's "Clean Code: A Handbook of Agile Software Craftsmanship" (p. 59): TODO s are jobs that the programmer thinks should be done, but for some reason can't do at the moment. It might be a reminder to delete a deprecated feature or a plea for someone else to look at a problem. It might be a request for someone else to think of a better name or a reminder to make a change that is dependent on a planned event. Whatever else a TODO might be, it is not an excuse to leave bad code in the system.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/362286", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/88474/" ] }
362,316
The highest rated answer to this question about the Liskov Substitution Principle takes pains to distinguish between the terms subtype and subclass . It also makes the point that some languages conflate the two, whereas others do not. For the object-oriented languages that I am most familiar with (Python, C++), "type" and "class" are synonymous concepts. In terms of C++, what would it mean to have a distinction between subtype and subclass? Say, for example, that Foo is a subclass, but not a subtype, of FooBase . If foo is an instance of Foo , would this line: FooBase* fbPoint = &foo; no longer be valid?
Subtyping is a form of type polymorphism in which a subtype is a datatype that is related to another datatype (the supertype) by some notion of substitutability, meaning that program elements, typically subroutines or functions, written to operate on elements of the supertype can also operate on elements of the subtype. If S is a subtype of T , the subtyping relation is often written S <: T , to mean that any term of type S can be safely used in a context where a term of type T is expected. The precise semantics of subtyping crucially depends on the particulars of what "safely used in a context where" means in a given programming language. Subclassing should not be confused with subtyping. In general, subtyping establishes an is-a relationship, whereas subclassing only reuses implementation and establishes a syntactic relationship, not necessarily a semantic relationship (inheritance does not ensure behavioral subtyping). To distinguish these concepts, subtyping is also known as interface inheritance , whereas subclassing is known as implementation inheritance or code inheritance. References Subtyping Inheritance
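As a concrete illustration (a classic textbook example, not taken from the question's Foo/FooBase), here is a Java subclass that reuses implementation but is not a behavioural subtype:

    // Rectangle's implied contract: setWidth changes only the width.
    class Rectangle {
        protected int width, height;
        public void setWidth(int w)  { this.width = w; }
        public void setHeight(int h) { this.height = h; }
        public int area()            { return width * height; }
    }

    // Square reuses the implementation (subclassing) but breaks the contract,
    // so it is not a behavioural subtype of Rectangle.
    class Square extends Rectangle {
        @Override public void setWidth(int w)  { this.width = w; this.height = w; }
        @Override public void setHeight(int h) { this.width = h; this.height = h; }
    }

    class Client {
        static int stretch(Rectangle r) {
            r.setWidth(5);
            r.setHeight(2);
            return r.area();   // 10 for a Rectangle, 4 for a Square: substitution fails
        }
    }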
{ "source": [ "https://softwareengineering.stackexchange.com/questions/362316", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/102004/" ] }
362,491
For example: while performing functional testing of a form in a web application, we will test the fields by entering different kinds of random input values. In general, we as users of the web application do not actually enter random values into fields. So what is the use of incorporating all those test cases, which may or may not lead to bugs, when the probability of these kinds of issues appearing in production is so much lower? Note: the above example is only a sample case; such issues may occur in any kind of feature/module. I am asking this question only to know whether there are any standard practices to follow, or whether it totally depends on the product, domain and all other factors.
You might not enter random values into the fields of a web application, but there certainly are people out there who do just that. Some people enter random values by accident and others do it intentionally, trying to break the application. In both cases, you don't want the application to crash or exhibit other unwanted behavior. For the first type of user, you don't want that because it gives them a bad experience and might turn them away. The second type of user usually doesn't have honorable intentions, and you don't want to let them gain access to information that they shouldn't be able to access, or allow them to deny genuine users access to your services. Standard practice for testing is to verify not only that the good-weather case works, but also that unusual edge cases are explored, to find potential problems and to have confidence that attackers can't easily gain access to your system. If your application already crashes with random input, you don't want to know what an attacker can do with specially crafted input.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/362491", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/291128/" ] }
362,610
I have had a discussion with a coworker about breaking a return statement and the statement that calculates the return value into two lines. For example private string GetFormattedValue() { var formattedString = format != null ? string.Format(format, value) : value.ToString(); return formattedString; } instead of private string GetFormattedValue() { return format != null ? string.Format(format, value) : value.ToString(); } Code-wise I don't really see any value in the first variant. For me, the latter is clearer, particularly for methods that are this short. His argument, however, was that the former variant is easier to debug - which is quite a small merit, since Visual Studio allows us a very detailed inspection of the statements when the execution is stopped at a breakpoint. My question is whether it's still a valid point to write less clear code just to make debugging slightly easier. Are there any further arguments for the variant with the split calculation and return statement?
Introducing explaining variables is a well-known refactoring which can sometimes help to make complicated expressions more readable. However, in the case shown, the additional variable does not "explain" anything which is not already clear from the surrounding method name, and the statement gets even longer, so it becomes (slightly) less readable. Moreover, newer versions of the Visual Studio debugger can show the return value of a function in most cases without introducing a superfluous variable (but beware, there are some caveats; have a look at this older SO post and the different answers). So in this specific case, I agree with you; however, there are other cases where an explaining variable can indeed improve code quality.
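For contrast, a small made-up Java example of the kind of expression where an explaining variable does earn its keep:

    interface Order    { double total(); }
    interface Customer { int yearsActive(); boolean isBlocked(); }

    class DiscountPolicy {
        // Without explaining variables the whole condition is hard to read at a glance:
        // return order.total() > 100 && customer.yearsActive() >= 2 && !customer.isBlocked();

        static boolean qualifiesForDiscount(Order order, Customer customer) {
            boolean bigEnoughOrder = order.total() > 100;
            boolean loyalCustomer  = customer.yearsActive() >= 2 && !customer.isBlocked();
            return bigEnoughOrder && loyalCustomer;
        }
    }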
{ "source": [ "https://softwareengineering.stackexchange.com/questions/362610", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/143358/" ] }
362,746
I recently had my final exam for a software engineering course in my master's program, and one of the questions on the exam was the following: Unit Testing is considered: a. White-box Testing b. Black-box Testing c. Either In my 7 years of software development experience, unit testing has always taken a white box approach. The tester has always had full knowledge of the implementation of the unit while writing the tests. Black box testing always came later in the form of integration, system, and acceptance testing. However, the correct answer on the exam (according to the professor) is that unit testing can be either white or black box testing. I have done some research, and it seems that in many cases "black box unit testing" is used to describe a test-first approach where the unit tests are written before the code is. However, in my opinion this is still white box testing. While the implementation does not yet exist, whoever is writing the test generally has a pretty good idea about how the source code is going to be implemented. Can someone please explain to me how black box unit testing works (if it truly is a thing) and how it differs from white box unit testing?
Your professor is right: unit testing can be either black-box or white-box. The difference is less about what the tester knows and more about how you generate test cases. With black box testing, you only look at the interface and (if it exists) the specification for a component. When a function has a signature int foo(int a, int b) , then I can immediately generate a few test cases just by testing interesting integers: zero, one, minus one, multi-digit numbers, INT_MAX, INT_MAX - 1 and so on. Black box tests are great because they are independent of the implementation. But they might also miss important cases. With a white-box test, I look at the implementation, i.e. the source code, and generate test cases from that. For example, I might want to achieve 100% path coverage for a function. I then choose input values so that all paths are taken. White-box tests are great because they can exhaustively exercise a piece of code, with far more confidence than black-box tests. But they might only be testing implementation details, not actually important behaviour. In some cases, they are clearly a waste of time. Since a white-box test is derived from the implementation, it can only be written afterwards. A black-box test is derived from the design/interface/specification, and can therefore be written before or after the implementation. TDD is neither clearly black-box nor white-box. Since all behaviour is first expressed by a test and then the minimal code for that behaviour is implemented, TDD results in test cases similar to those of a white box test. But when we look at the information flow, TDD tests are not derived from the source code, but from external requirements. Therefore, TDD is more black-box-like.
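To make the black-box idea concrete, here is a hypothetical JUnit sketch for the int foo(int a, int b) example, assuming purely for the sake of the example that the specification says foo returns the sum of its arguments:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class FooBlackBoxTest {
        // The unit under test; in a real project this would live elsewhere.
        static int foo(int a, int b) { return a + b; }

        // Cases chosen from the signature and specification alone: zeros, sign changes,
        // boundaries - no peeking at the implementation.
        @Test public void zeros()          { assertEquals(0, foo(0, 0)); }
        @Test public void cancellation()   { assertEquals(0, foo(1, -1)); }
        @Test public void upperBoundary()  { assertEquals(Integer.MAX_VALUE, foo(Integer.MAX_VALUE, 0)); }
        @Test public void negativeValues() { assertEquals(-3, foo(-1, -2)); }
    }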
{ "source": [ "https://softwareengineering.stackexchange.com/questions/362746", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/291493/" ] }
362,755
I have two stories (I know they are missing the benefit part) As a Credit Management User, I can view the current and previous payroll differences for Offices. As a Credit Management User, I can receive an email containing a PDF of the current and previous payroll differences for Offices. The two are related in that they would have the same Query / Filter criteria. The only difference is that in the "View" story, the results are displayed to the User and in "Email" story, the results are written to a PDF that is emailed to the User. I am struggling with the separation of the common aspects of these two stories or if I should even do so. For example, they will both have the same query, what they do with the results is different. Should I separate the query out into another story that is purely technical? The creation of the PDF and sending of the email should be done offline, should that become a technical story? I could see breaking those two stories down into 2 functional stories and 2 technical stories. As the System, I can calculate the differences in the current and previous payroll for Offices. As a Credit Management User, I can view the differences in the current and previous payroll for Offices. As the System, I can create a PDF document of the differences in the current and previous payroll for Offices. As a Credit Management User, I can request to receive an email containing a PDF of the differences in the current and previous payroll for Offices. The problem I keep coming back to is that the 4 stories are not independent and do not "slice the cake". So I am not quite sure how to deal with these two.
User Stories are not system specifications or functional requirements. Rather, they are the beginning of a conversation that can lead to such specifications or requirements. Accordingly, I would expect there to be overlap in the system implementation. User Stories are not meant to describe such functional overlap or to eliminate it. The purpose of User Stories is to capture functional expectations from a user's point of view, not to describe implementation details.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/362755", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/291508/" ] }
362,766
I'm a beginner in DB. I have studied some basic examples, but now I'm studying more complex cases that involve many tables related to each other. I'm studying how to model a DB for a conference system, but there is a detail that seems difficult to model. The detail is that a conference can have multiple registration types and each registration type can have user-defined fields. For example, the conference organizer can create a conference with a registration type "General" that needs the fields "name, email", which have to be filled in by the user doing the registration for each participant being registered. He can create another registration type "VIP" with the fields "name, email, phone", which likewise have to be filled in for each registered participant. Do you know how to add this context to the diagram (fulfilling the normal forms)? Diagram (without that user-defined fields part)
User Stories are not system specifications or functional requirements. Rather, they are the beginning of a conversation that can lead to such specifications or requirements. Accordingly, I would expect there to be overlap in the system implementation. User Stories are not meant to describe such functional overlap or to eliminate it. The purpose of User Stories is to capture functional expectations from a user's point of view, not to describe implementation details.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/362766", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/291316/" ] }
362,775
(JAVA) I have about 50 getters, all returning the same three things: text, tag, type. Text and tag are String and type is int. The purpose is forms that use repetitive fields along with form-specific fields. I am having trouble simplifying and reusing code. public class DataTransferClass { private DataTransferClass(){} public class FirstName { public String getText() { return "First Name:"; } public String getTag(){ return "First"; } public int getType(){ return android.text.InputType.TYPE_TEXT_VARIATION_PERSON_NAME; } } public class LastName { public String getText() { return "Last Name:"; } public String getTag(){ return "Last"; } public int getType(){ return android.text.InputType.TYPE_TEXT_VARIATION_PERSON_NAME; } } ... The answer I'm looking for is the best way to manage data returns; since I'm returning different primitive types, returning an array does not work very well. I understand that I can convert the integer into a String and return an array, but then I must convert the String back into an integer when needed. All fifty getters return the same primitive types. Three parallel arrays with fifty elements each are nearly impossible to maintain, update or modify.
User Stories are not system specifications or functional requirements. Rather, they are the beginning of a conversation that can lead to such specifications or requirements. Accordingly, I would expect there to be overlap in the system implementation. User Stories are not meant to describe such functional overlap or to eliminate it. The purpose of User Stories is to capture functional expectations from a user's point of view, not to describe implementation details.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/362775", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/291538/" ] }
362,900
I often come across methods/functions which have an additional boolean parameter which controls whether an exception is thrown on failure, or null is returned. There are already discussions about which of those is the better choice in which case, so let's not focus on this here. See e.g. Return magic value, throw exception or return false on failure? Let us instead assume that there is a good reason why we want to support both ways. Personally I think such a method should rather be split in two: One that throws an exception on failure, the other that returns null on failure. So, which is better? A: One method with $exception_on_failure parameter. /** * @param int $id * @param bool $exception_on_failure * * @return Item|null * The item, or null if not found and $exception_on_failure is false. * @throws NoSuchItemException * Thrown if item not found, and $exception_on_failure is true. */ function loadItem(int $id, bool $exception_on_failure): ?Item; B: Two distinct methods. /** * @param int $id * * @return Item|null * The item, or null if not found. */ function loadItemOrNull(int $id): ?Item; /** * @param int $id * * @return Item * The item, if found (exception otherwise). * * @throws NoSuchItemException * Thrown if item not found. */ function loadItem(int $id): Item; EDIT: C: Something else? A lot of people have suggested other options, or claim that both A and B are flawed. Such suggestions or opinions are welcome, relevant and useful. A complete answer can contain such extra information, but will also address the main question of whether a parameter to change the signature/behavior is a good idea. Notes In case someone is wondering: The examples are in PHP. But I think the question applies across languages as long as they are somewhat similar to PHP or Java.
You're correct: two methods are much better for that, for several reasons: In Java, the signature of the method which potentially throws an exception will include this exception; the other method won't. It makes it particularly clear what to expect from the one and the other. In languages such as C#, where the signature of the method tells nothing about the exceptions, the public methods should still be documented, and such documentation includes the exceptions. Documenting a single method would not be easy. Your example is perfect: the comments in the second piece of code look much clearer, and I would even shorten “The item, if found (exception otherwise).” down to “The item.”—the presence of a potential exception and the description you gave it are self-explanatory. In the case of a single method, there are few cases where you would like to toggle the value of the boolean parameter at runtime, and if you do, it would mean that the caller will have to handle both cases (a null response and the exception), making the code much more difficult than it needs to be. Since the choice is made not at runtime, but when writing the code, two methods make perfect sense. Some frameworks, such as the .NET Framework, have established conventions for this situation, and they solve it with two methods, just like you suggested. The only difference is that they use a pattern explained by Ewan in his answer, so int.Parse(string): int throws an exception, while int.TryParse(string, out int): bool doesn't. This naming convention is very strong in the .NET community and should be followed whenever the code matches the situation you describe.
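To make the two-method convention concrete in Java (the question's examples are PHP; these signatures are only an illustrative sketch reusing the Item and NoSuchItemException names from the question):

import java.util.Optional;

public interface ItemRepository {

    // Variant that signals "not found" through the return type.
    Optional<Item> findItem(int id);

    // Variant that treats "not found" as an error.
    Item loadItem(int id) throws NoSuchItemException;
}

The caller picks the variant that matches its situation at the call site, instead of passing a flag and then having to handle both a null result and an exception.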
{ "source": [ "https://softwareengineering.stackexchange.com/questions/362900", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/113434/" ] }
363,076
This is a question about how to work in teams. Recently I worked on my first larger (~80 classes, Java) programming project with a team of 6 people, though only 4 of us were continuously working on the code. We distributed the work to be done early on, and at some point I needed to call a method that was not yet implemented by one of my co-programmers. What is the recommended way to deal with this? Options I saw, though I don't really like any of them: Writing myself a //TODO and revisiting this line of code later to check if the method has been implemented in the meantime. Asking the corresponding team member to implement that now. Throwing a custom RuntimeException with a clear description of what is not yet implemented. (At least we don't have to search for a long time to find out what is missing.) Adding the needed method to their class and writing them a //TODO in the method body, possibly also sending them a quick message about that change. (Now it's not my problem anymore, but this can cause annoying merge conflicts if they were working on this method in the meantime.) Defining abstract classes or interfaces for everything before actually writing the code that does the work. (This didn't work too well because these interfaces were often changed.)
Ask for stubs. Or write them yourself. Either way, you and your coworkers need to agree on the interfaces and how they're intended to be used. That agreement needs to be relatively solidified so you can develop against stubs -- not to mention, so you can create your own mocks for your unit testing...
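A minimal Java sketch of what such an agreed-upon stub might look like (the interface and class names are made up for illustration); it compiles against the agreed interface and fails loudly if it is called before the real implementation lands:

public class PaymentServiceStub implements PaymentService {

    @Override
    public Receipt charge(Account account, long amountInCents) {
        // TODO: agreed interface only; real implementation pending from the responsible team member
        throw new UnsupportedOperationException("PaymentService.charge() is not implemented yet");
    }
}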
{ "source": [ "https://softwareengineering.stackexchange.com/questions/363076", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/248311/" ] }
363,169
C++17 introduces the [[nodiscard]] attribute, which allows programmers to mark functions in a way that the compiler produces a warning if the returned object is discarded by a caller; the same attribute can be added to an entire class type. I've read about the motivation for this feature in the original proposal , and I know that C++20 will add the attribute to standard functions like std::vector::empty , whose names do not convey an unambiguous meaning regarding the return value. It's a cool and a useful feature. In fact, it almost seems too useful. Everywhere I read about [[nodiscard]] , people discuss it as if you'd just add it to a select few functions or types and forget about the rest. But why should a non-discardable value be a special case, especially when writing new code? Isn't a discarded return value typically a bug or at least a waste of resources? And isn't one of the design principles of C++ itself that the compiler should catch as many errors as possible? If so, then why not add [[nodiscard]] in your own, non-legacy code to almost every single non- void function and almost every single class type? I've tried to do that in my own code, and it works fine, except it's so terribly verbose that it starts to feel like Java. It would seem much more natural to make compilers warn about discarded return values by default except for the few other cases where you mark your intention [*] . As I've seen zero discussions about this possibility in standard proposals, blog entries, Stack Overflow questions or anywhere else on the internet, I must be missing something. Why would such mechanics not make sense in new C++ code? Is verbosity the only reason not to use [[nodiscard]] almost everywhere? [*] In theory, you may have something like a [[maydiscard]] attribute instead, which could also be retroactively added to functions like printf in standard-library implementations.
In new code that need not be compatible with older standards, do use that attribute wherever it's sensible. But for C++, [[nodiscard]] makes a bad default. You suggest: It would seem much more natural to make compilers warn about discarded return values by default except for the few other cases where you mark your intention. That would suddenly cause existing, correct code to emit lots of warnings. While such a change could technically be considered to be backwards-compatible since any existing code still compiles successfully, that would be a huge change of semantics in practice. Design decisions for an existing, mature language with a very large code base are necessarily different from design decisions for a completely new language. If this were a new language, then warning by default would be sensible. For example, the Nim language requires unneeded values to be discarded explicitly – this would be similar to wrapping every expression statement in C++ with a cast (void)(...) . A [[nodiscard]] attribute is most useful in two cases: if a function has no effects beyond returning a certain result, i.e. is pure. If the result is not used, the call is certainly useless. On the other hand, discarding the result would not be incorrect. if the return value must be checked, e.g. for a C-like interface that returns error codes instead of throwing. This is the primary use case. For idiomatic C++, that's going to be quite rare. These two cases leave a huge space of impure functions that do return a value, where such a warning would be misleading. For example: Consider a queue data type with a .pop() method that removes an element and returns a copy of the removed element. Such a method is often convenient. However, there are some cases where we only want to remove the element, without getting a copy. That is perfectly legitimate, and a warning would not be helpful. A different design (such as std::vector ) splits these responsibilities, but that has other tradeoffs. Note that in some cases, a copy has to be made anyway so thanks to RVO returning the copy would be free. Consider fluent interfaces, where each operation returns the object so that further operations can be performed. In C++, the most common example is the stream insertion operator << . It would be extremely cumbersome to add a [[nodiscard]] attribute to each and every << overload. These examples demonstrate that making idiomatic C++ code compile without warnings under a “C++17 with nodiscard by default” language would be quite tedious. Note that your shiny new C++17 code (where you can use these attributes) may still be compiled together with libraries that target older C++ standards. This backwards compatibility is crucial for the C/C++ ecosystem. So making nodiscard the default would result in many warnings for typical, idiomatic use cases – warnings which you cannot fix without far-reaching changes to the library's source code. Arguably, the problem here isn't the change of semantics but that the features of each C++ standard apply on a per-compilation-unit scope and not on a per-file scope. If/when some future C++ standard moves away from header files, such a change would be more realistic.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/363169", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/292114/" ] }
363,184
Every day I see SO questions with some variation on this theme (C#-ey pseudocode) method MyButtonClickHandler(){ string sql = "SELECT * FROM table WHERE column = " + myTextBox_Text sqlcommand cmd = new sqlcommand(sql, someConnectionString) datareader rdr = mycommand.ExecuteAndGetReader() while(rdr.HasResultRows) string name = rdr.GetString(1) //column 1 is the name ... } I regard this construct/methodology as pretty low level: Many times over tens of years, people have built some SQL-generating abstraction layer flavoured more like a high level language which makes the SQL part de-facto low level I can't really think of any way lower than writing an SQL statement/getting a reader, to get data out of an SQL based database like SQL Server or Oracle (other than directly reading the bytes of the data store file SQL Server uses) It's the sort of code that breaks out of type safety, compile time sense checking and cleaning up after yourself properly; it's the sort of code that ought be done to some formulaic pattern and hence really should be written by other software, not humans I routinely find this particular construct in some high level workspace, let's say something like ASP.NET MVC or Web Forms; in and of themselves a much more abstract way to get the server to generate some HTML and send it to the client browser. Here's a high and low level example of what I mean: //MVC, rendering a dropdown list (some enumerable collection has been put in the page's data model): @Html.DropDownList("EnumerableCollectionNameHere") //WebForms, rendering a dropdown list (plus some back end code to populate the Items collection it uses): <asp:DropDownList id="someList" runat="server"/> //One possible low level way of getting a drop down list into the user's browser (C#): httpResponseStream.Write("<select id='someID' name='someList'>"); foreach(KeyValuePair itm in somecollection) httpResponseStream.Write("<option value='"+itm.Key+">"+itm.Value+"</option>"); ... We stopped doing the lattermost a long time ago, and for good reasons I'm sure can be imagined - but it's a low level way that still technically can be made to work today; form some raw HTML in code, string concat, whatever, and send it down the socket to the client. It feels to me every bit as low level as the SQL/reader/getstring I thus find myself looking at something that puzzles me; the question poser seems to be asking a basic question and I thus assume he's a beginner. He is clearly using many high level framework(s) to implement his app and probably in most areas of the workspace you could say "that could be done in a lower level way" Why, then, is there such a prevalence of doing the data access bit in the lowest level way possible? A way that leads to obscure bugs, no strong typing, security issues and injection attacks abound? A way that effectively embeds a whole other foreign programming language directly in the app being created? Why do beginners still write SQL? And then the other question from the title: There usually appears on such questions a litany of comments and answers to the effect of "use parameterized SQL" and I've always wondered why the pro's aren't going one step further, and recommending to use some higher level framework for these simplest of tasks ( select * from person where name = 'smith' ..) that must surely make up 99% of all queries run True, an abstraction layer doesn't always cut it (sometimes C++ is just too darn slow and the game engine writer starts optimising in assembler..) 
but by and large, we don't find questions every day on SO where a block of (e.g. C++) is interspersed with (machine code), yet the inclusion of SQL in high level language X is routine. In all those cases where the first comment out of our fingertips is "use parameters!", why are we not routinely advocating use of some higher level abstraction that provides type safety, eliminates SQL injection and so on? Note: following a few complaints that the original question was poorly worded/too long/lacked focus, I've reworked a lot of the original content to focus on what I came to realise was the core question I had. Some very good answers below may still reference the original wording, and I apologise if my edits have left some answers with orphan commentary.
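For readers wondering what the usual "use parameterized SQL" comment amounts to, here is a minimal Java/JDBC sketch (the question's snippet is pseudocode, so the table, column and variable names here are illustrative; an open java.sql.Connection is assumed and SQLException handling is omitted):

String sql = "SELECT name FROM person WHERE name = ?";
try (PreparedStatement stmt = connection.prepareStatement(sql)) {
    stmt.setString(1, searchText); // the value is bound as data, never concatenated into the SQL text
    try (ResultSet rs = stmt.executeQuery()) {
        while (rs.next()) {
            String name = rs.getString("name");
            // ... use the row
        }
    }
}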
Simple. Because there is no way you would really understand how to use a database if you're only interacting with it through the ORM. This has already happened in practice to many beginner programmers out there. They have never written a single line of SQL, because their favorite ORM does it for them, and, surprise, surprise, they could even write applications that do work. Until they get a task of optimizing an app which is terribly slow when accessing data, and since they have no clue about execution plans or database normalization, they're screwed. Even worse, by not understanding SQL and relational databases, they won't understand how to use the ORM properly. The most basic mistake of someone who doesn't get what is happening under the hood: fetch the whole table, and only then filter it to find a single element.¹ Why not? It works pretty well when there are a dozen rows in the table. Moreover, every ORM I have used is a leaky abstraction. Take indexes: to have decent performance for anything but tiny tables, you need them. ORMs with a code-first approach allow you to define which columns (should I say “properties”) should have indexes. But this simply makes the ORM leak the underlying database it tries to hide in the first place. I mean, when I manipulate sequences, I don't have indexes. In terms of data structures, I have dictionaries and lookups. Indexes? I don't know what that thing is. I've seen one example where a code-first approach resulted in a clear, well-designed database schema. When the team showed their code, it appeared that the code was filled with adjustments: it's not the ORM which figured out this great schema, but the people who pushed the ORM to (and beyond) its limits. Same for the queries: 99% of the time, they are ugly to look at, but do their job well. And then, one time out of one hundred, the ORM is doing some weird stuff,² ending with a request which takes minutes instead of milliseconds. If you know your job, you can either give some hints to the ORM so that it finds a better way to do the thing, or just write the SQL query yourself. If you don't... As for “we've come a million miles from the low level of concatenating HTML together and writing it to a socket”: the comparison is unfair. SQL is already an abstraction. For a business app, writing a table row to disk instead of using SQL is the same as, for a web app, manipulating sockets instead of using a web server. Similarly, ORM over SQL could be compared to using WordPress over plain PHP. There are cases where WordPress makes perfect sense. There are cases where it doesn't. As for “to send an email in c# 9 out of 10 cats will use System.Net.Mail; there's probably one oddball who just loves writing an SMTP conversation into a socket, of course”: when you need to send an e-mail, you'll use a library provided with your favorite framework. But what would happen if you need to send, say, a few million e-mails in a very short amount of time? Exactly, you need to know lots of things about protocols, spam filters, etc. Chances are, as an ordinary developer, you don't know all this stuff, and you'll delegate the task to a specialized company. With databases, things are different. If you delegate your work to an external company or a consultant every time you need to do something which requires more than basic ORM skills, you're in big trouble, because for nearly every project which grows over time, you'll need database skills, and you'll need them repeatedly.
E-mails, that's one thing; but if my app uses a relational database, I'd better have at least a basic understanding of SQL and databases. ¹ It happens a lot in the .NET world. The developer starts by writing something like this: Products.Single(...). At runtime, Entity Framework complains that it cannot deal with whatever it is within the (...), which happens for anything but the most basic .NET methods that EF knows how to translate to SQL. Either the developer knows what happens under the hood and tries to change the predicate; if not, he does this: Products.ToList().Single(...). ² A general term for that is “object-relational impedance mismatch.” Humans do tend to cope quite well with it when they know SQL well enough; ORMs—not so well.
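The mistake described in footnote ¹ is not specific to .NET; a hypothetical Java sketch of the same trap, using Spring-Data-style repository methods purely for illustration (productRepository, Product and sku are assumed names):

// Pulls every row into memory and filters in Java; any database index on sku is never used.
Optional<Product> slow = productRepository.findAll().stream()
        .filter(p -> p.getSku().equals(sku))
        .findFirst();

// Lets the database do the filtering: one indexed lookup instead of a full table scan.
Optional<Product> fast = productRepository.findBySku(sku);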
{ "source": [ "https://softwareengineering.stackexchange.com/questions/363184", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/261738/" ] }
363,307
Whenever I need division, for example in condition checking, I would like to refactor the division expression into a multiplication, for example: Original version: if(newValue / oldValue >= SOME_CONSTANT) New version: if(newValue >= oldValue * SOME_CONSTANT) Because I think it can avoid: division by zero, and overflow when oldValue is very small. Is that right? Is there a problem with this habit?
Two common cases to consider: Integer arithmetic Obviously if you are using integer arithmetic (which truncates) you will get a different result. Here's a small example in C#: public static void TestIntegerArithmetic() { int newValue = 101; int oldValue = 10; int SOME_CONSTANT = 10; if(newValue / oldValue > SOME_CONSTANT) { Console.WriteLine("First comparison says it's bigger."); } else { Console.WriteLine("First comparison says it's not bigger."); } if(newValue > oldValue * SOME_CONSTANT) { Console.WriteLine("Second comparison says it's bigger."); } else { Console.WriteLine("Second comparison says it's not bigger."); } } Output: First comparison says it's not bigger. Second comparison says it's bigger. Floating point arithmetic Aside from the fact that division can yield a different result when it divides by zero (it generates an exception, whereas multiplication does not), it can also result in slightly different rounding errors and a different outcome. Simple example in C#: public static void TestFloatingPoint() { double newValue = 1; double oldValue = 3; double SOME_CONSTANT = 0.33333333333333335; if(newValue / oldValue >= SOME_CONSTANT) { Console.WriteLine("First comparison says it's bigger."); } else { Console.WriteLine("First comparison says it's not bigger."); } if(newValue >= oldValue * SOME_CONSTANT) { Console.WriteLine("Second comparison says it's bigger."); } else { Console.WriteLine("Second comparison says it's not bigger."); } } Output: First comparison says it's not bigger. Second comparison says it's bigger. In case you don't believe me, here is a Fiddle which you can execute and see for yourself. Other languages may be different; bear in mind, however, that C#, like many languages, implements an IEEE standard (IEEE 754) floating point library, so you should get the same results in other standardized run times. Conclusion If you are working greenfield , you are probably OK. If you are working on legacy code, and the application is a financial or other sensitive application that performs arithmetic and is required to provide consistent results, be very cautious when changing around operations. If you must, be sure that you have unit tests that will detect any subtle changes in the arithmetic. If you are just doing things like counting elements in an array or other general computational functions, you will probably be OK. I am not sure the multiplication method makes your code any clearer, though. If you are implementing an algorithm to a specification, I would not change anything at all, not just because of the problem of rounding errors, but so that developers can review the code and map each expression back to the specification to ensure there are no implementation flaws.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/363307", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/248528/" ] }
363,318
Let's say I have the logic below. How do I write that in functional programming? public int doSomeCalc(int[] array) { int answer = 0; if(array!=null) { for(int e: array) { answer += e; if(answer == 10) break; if(answer == 150) answer += 100; } } return answer; } The examples in most blogs and articles I see just explain the simple case of one straightforward math function, say 'Sum'. But I have logic similar to the above written in Java and would like to migrate it to functional code in Clojure. If we can't do the above in FP, then the material promoting FP doesn't state this limitation explicitly. I know that the above code is totally imperative. It was not written with the forethought of migrating it to FP in the future.
The closest equivalent to looping over an array in most functional languages is a fold function, i.e. a function that calls a user-specified function for each value of the array, passing an accumulated value along the chain. In many functional languages, fold is augmented by a variety of additional functions that provide extra features, including the option to stop early when some condition arises. In lazy languages (e.g. Haskell), stopping early can be achieved simply by not evaluating any further along the list, which will cause additional values to never be generated. Therefore, translating your example to Haskell, I would write it as: doSomeCalc :: [Int] -> Int doSomeCalc values = foldr1 combine values where combine v1 v2 | v1 == 10 = v1 | v1 == 150 = v1 + 100 + v2 | otherwise = v1 + v2 Breaking this down line by line in case you're not familiar with Haskell's syntax, this works like this: doSomeCalc :: [Int] -> Int Defines the type of the function, accepting a list of ints and returning a single int. doSomeCalc values = foldr1 combine values The main body of the function: given argument values , return foldr1 called with arguments combine (which we'll define below) and values . foldr1 is a variant of the fold primitive that starts with the accumulator set to the first value of the list (hence the 1 in the function name), then combines it using the user specified function from left to right (which is usually called a right fold , hence the r in the function name). So foldr1 f [1,2,3] is equivalent to f 1 (f 2 3) (or f(1,f(2,3)) in more conventional C-like syntax). where combine v1 v2 | v1 == 10 = v1 Defining the combine local function: it receives two arguments, v1 and v2 . When v1 is 10, it just returns v1 . In this case, v2 is never evaluated , so the loop stops here. | v1 == 150 = v1 + 100 + v2 Alternatively, when v1 is 150, adds an extra 100 to it, and adds v2. | otherwise = v1 + v2 And, if neither of those conditions is true, just adds v1 to v2. Now, this solution is somewhat specific to Haskell, because the fact that a right fold terminates if the combining function doesn't evaluate its second argument is caused by Haskell's lazy evaluation strategy. I don't know Clojure, but I believe it uses strict evaluation, so I would expect it to have a fold function in its standard library that includes specific support for early termination. This is often called foldWhile , foldUntil or similar. A quick look at the Clojure library documentation suggests that it is a little different from most functional languages in naming, and that fold isn't what you're looking for (it's a more advanced mechanism aimed at enabling parallel computation) but reduce is the more direct equivalent. Early termination occurs if the reduced function is called within your combining function. I'm not 100% sure I understand the syntax, but I suspect what you're looking for is something like this: (reduce (fn [v1 v2] (if (= v1 10) (reduced v1) (+ v1 v2 (if (= v1 150) 100 0)))) array) NB: both translations, Haskell and Clojure, are not quite right for this specific code; but they convey the general gist of it -- see discussion in the comments below for specific problems with these examples.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/363318", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/292326/" ] }
363,397
During one of my lectures today about Unity, we discussed updating our player position by checking every frame if the user has a button pushed down. Someone said this was inefficient and we should use an event listener instead. My question is, regardless of the programming language or situation it is applied in, how does an event listener work? My intuition would assume that the event listener constantly checks if the event has been fired, meaning that, in my scenario, it would be no different from checking every frame if the event has been fired. Based on the discussion in class, it seems that an event listener works in a different way. How does an event listener work?
Unlike the polling example you provided (where the button is checked every frame), an event listener does not check if the button is pushed at all. Instead, it gets called when the button is pushed. Perhaps the term "event listener" is throwing you. This term suggests that the "listener" is actively doing something to listen, when in fact, it's not doing anything at all. The "listener" is merely a function or method that is subscribed to the event. When the event fires, the listener method ("event handler") gets called. The benefit of the event pattern is that there's no cost until the button is actually pushed. The event can be handled this way without being monitored because it originates from what we call a "hardware interrupt," which briefly preempts the running code to fire the event. Some UI and game frameworks use something called a "message loop," which queues events for execution at some later (usually short) time period, but you still need a hardware interrupt to get that event into the message loop in the first place.
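A stripped-down Java sketch of the subscription idea (not Unity's actual API; the class and method names are invented). Note that nothing here checks the button every frame: registering a listener just stores it, and the handler runs only when the framework fires the event:

import java.util.ArrayList;
import java.util.List;

class Button {
    private final List<Runnable> clickListeners = new ArrayList<>();

    // "Listening" is only registration; no work happens here.
    void addClickListener(Runnable listener) {
        clickListeners.add(listener);
    }

    // Invoked by the framework when the input system reports a click.
    void fireClick() {
        for (Runnable listener : clickListeners) {
            listener.run();
        }
    }
}

A caller would subscribe once, e.g. button.addClickListener(() -> System.out.println("clicked")), and then do nothing at all until fireClick() runs.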
{ "source": [ "https://softwareengineering.stackexchange.com/questions/363397", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/286828/" ] }
363,517
Compared to about 10 years ago, I have noted a shift towards frameworks using a style of routing that decouples the URL path from the filesystem. This is typically accomplished with the help of a front-controller pattern. Namely, whereas before the URL path was mapped directly to the file system and therefore reflected exact files and folders on disk, nowadays the actual URL paths are programmed to be directed to specific classes via configuration, and as such no longer reflect the file system folder and file structure. Question How and why did this become commonplace? How and why was it decided that it's "better", to the point where the once-commonplace direct-to-file approach was effectively abandoned? Other Answers There is a similar answer here that goes a bit into the concept of a route and some benefits and drawbacks: With PHP frameworks, why is the "route" concept used? But it does not address historical change aspects, or how or why this change gradually happened, to where any new projects nowadays are pretty much using this new routing style and direct-to-file is outdated or abandoned. Also, most of those benefits and drawbacks mentioned do not appear to be significant enough to warrant such a global change. The only benefit that I can see driving this change is perhaps hiding the file/folder system from the end user, and also the lack of ?param=value&param2=value, which makes URLs look a tad cleaner. But were those the sole reasons for the change? And if yes, why were those the reasons behind it? Examples: I am most familiar with PHP frameworks, and many popular modern frameworks use this decoupled routing approach. To make it work you set up URL rewriting in Apache or a similar web server, so that web application functionality is typically no longer triggered via a direct-to-file URL path. Zend Expressive https://docs.zendframework.com/zend-expressive/features/router/aura/ https://docs.zendframework.com/zend-expressive/features/router/fast-route/ https://docs.zendframework.com/zend-expressive/features/router/zf2/ Zend Framework https://docs.zendframework.com/zend-mvc/routing/ Laravel https://laravel.com/docs/5.5/routing CakePHP https://book.cakephp.org/3.0/en/development/routing.html
In its most basic form, a website serves static files. Mapping the URL path to a file path is the most obvious choice; essentially, it's a read-only FTP site. Then people wanted to change the content of the page with some scripting. The easiest way is to embed a scripting language into the page and run it through an interpreter. Again, given the already existing path -> file path routing, this was simple enough. But really, you are running that file as an argument to the interpreter now. You have to identify when the request is for a static file and when it's for something that you need to interpret. Once you start to use more advanced compiled languages, you are even more divorced from the file location. Plus, your web server is already caching static files and doing all sorts of optimizations, which means hitting the file system is the exception rather than the rule. At this point, the old URL-to-file-system-path link is more of a hindrance than a help. But I think the real sea change came when users wanted to get rid of the file extension from the path. Getting myPage.asp or myPage.php was something that confused 'normal' people and interfered with SEO. Because the user sees the path, it has become part of the UI of the web, and as such, it needs to be completely free of any technical limitations. We have lost the 'www' and virtually everything is a '.com'. Multiple URLs will point to the same page. If I make more money with mydomain.com/sale vs www.mydomain.co.uk/products/sale.aspx, then I don't want any technical limitations to stand in my way.
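The decoupling described above boils down to a small lookup table owned by a front controller. A sketch of the idea in Java (the question concerns PHP frameworks, so this is only an illustration; Request and Response are assumed placeholder types, and Response.notFound() is a hypothetical factory method):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class Router {
    private final Map<String, Function<Request, Response>> routes = new HashMap<>();

    // Routes are configuration, not file locations on disk.
    void register(String path, Function<Request, Response> handler) {
        routes.put(path, handler);
    }

    Response dispatch(String path, Request request) {
        Function<Request, Response> handler = routes.get(path);
        return handler != null ? handler.apply(request) : Response.notFound();
    }
}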
{ "source": [ "https://softwareengineering.stackexchange.com/questions/363517", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/119333/" ] }
363,608
Let's say I have a function which sorts a database in O(n^2) time. I want to go about refactoring it so it runs in O(n log(n)) time, and in doing so I will change the fundamental way the operation runs, while keeping the return values and inputs equivalent. What do I call this refactoring activity? "Speeding-up-ifying" doesn't seem quite right, since you can make an algorithm go faster without changing the big-O speed at which it executes. "Simplifying" also doesn't seem right. What do I call this activity? Update: The best answer I could find is reducing the asymptotic time complexity.
It is typically called "performance optimization" , but I would not call it "refactoring", since this term typically refers to changes in code which don't change its visible behaviour . And a change in Big-O is definitely something which I would call a visible change. in doing so I will change the fundamental way the operation runs In this case, your optimization is a rewrite of that function. Not every optimization, even if it changes "Big-O", is necessarily a rewrite, sometimes only small changes are necessary to achieve such an improvement, but even then, I am reluctant to use the term "refactoring" for this, because it tends to give a wrong impression about the nature of the change. EDIT: I checked Fowler's refactoring list , and among this ~100 named refactorings, the very last one is called "Substitute Algorithm" . So if we take this as canonical reference, there is a small, grey area where an optimization of the described form might be called a special kind of refactoring (but IMHO not a typical one). Note also, Fowler's goal with all refactorings was always to improve the design with focus on maintainability and evolvability of existing code without rewriting it, and clearly not performance optimization.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/363608", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/163825/" ] }
363,655
During the development phase, there are certain variables which need to be fixed within the same run, but may need to be modified over time. For example, a boolean to signal debug mode, so we do things in the program we normally wouldn't. Is it bad style to keep these values in a constant, i.e. final static int CONSTANT = 0 in Java? I know that a constant stays the same during run time, but is it also supposed to stay the same during the whole development, except for unplanned changes, of course? I searched for similar questions, but did not find anything that matched mine exactly.
Anything in your source code, including const-declared global constants, might be subject to change with a new release of your software. The keywords const (or final in Java) are there to signal to the compiler that this variable will not change while this instance of the program is running. Nothing more. If you want to send messages to the next maintainer, use a comment in the source; that's what comments are there for. // DO NOT CHANGE without consulting with the legal department! // Get written consent form from them before release! public const int LegalLimitInSeconds = ... is a way better way to communicate with your future self.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/363655", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/283636/" ] }
363,704
So I have to do a project for about 10 days. About the work, let's just say I'm going to develop a website with a front-end and a few interfaces between internal services. Now I have to use a project method, and I'm thinking of the Scrum method. But since I'm only one person, I'm asking if it is possible to implement the Scrum method for this project. My idea is that I take the roles of the Product Owner, Development and Scrum Master, and based on that, I would "do" the project. So to list my question(s): Is this still considered "Scrum"? Is there any other project method I could use for this? (Or) Should I build my own project method based on Scrum/Agile methodology?
In this case I would simplify to Kanban. Kanban simply has a backlog that you work off, so there is no need to organize work into sprints. It's best not to over-complicate things. Considering this is a stretch of work that would be only one sprint long, and the staff is very limited, I think it matches the Kanban way more than Scrum.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/363704", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/292880/" ] }
363,739
According to Is it wrong to use a boolean parameter to determine behavior?, I know the importance of avoiding boolean parameters to determine behaviour, e.g.: original version: public void setState(boolean flag){ if(flag){ a(); }else{ b(); } c(); } new version: public void setStateTrue(){ a(); c(); } public void setStateFalse(){ b(); c(); } But what about the case where the boolean parameter is used to determine values instead of behaviours? E.g.: public void setHint(boolean isHintOn){ this.layer1.visible=isHintOn; this.layer2.visible=!isHintOn; this.layer3.visible=isHintOn; } I'm trying to eliminate the isHintOn flag and create 2 separate functions: public void setHintOn(){ this.layer1.visible=true; this.layer2.visible=false; this.layer3.visible=true; } public void setHintOff(){ this.layer1.visible=false; this.layer2.visible=true; this.layer3.visible=false; } but the modified version seems less maintainable because: it has more code than the original version; it cannot clearly show that the visibility of layer2 is opposite to the hint option; and when a new layer (e.g. layer4) is added, I need to add this.layer4.visible=false; and this.layer4.visible=true; into setHintOn() and setHintOff() separately. So my question is, if the boolean parameter is used to determine values only, but not behaviours (e.g. no if-else on that parameter), is it still recommended to eliminate that boolean parameter?
API design should focus on what is most useable for a client of the API, from the calling side. For example, if this new API requires the caller to regularly write code like this if(flag) foo.setStateTrue(); else foo.setStateFalse(); then it should be obvious that avoiding the parameter is worse than having an API which allows the caller to write foo.setState(flag); The former version just produces an issue which then has to be solved at the calling side (and probably more than once). That neither increases readability nor maintainability. The implementation side, however, should not dictate what the public API looks like. If a function like setHint with a parameter needs less code in its implementation, but an API in terms of setHintOn / setHintOff looks easier to use for a client, one can implement it this way: private void setHint(boolean isHintOn){ this.layer1.visible=isHintOn; this.layer2.visible=!isHintOn; this.layer3.visible=isHintOn; } public void setHintOn(){ setHint(true); } public void setHintOff(){ setHint(false); } So though the public API has no boolean parameter, there is no duplicate logic here, and only one place to change when a new requirement (like in the example of the question) arrives. This also works the other way round: if the setState method from above needs to switch between two different pieces of code, those pieces of code can be refactored into two different private methods. So IMHO it does not make sense to search for a criterion for deciding between "one parameter/one method" and "zero parameters/two methods" by looking at the internals. Look, however, at the way you would like to see the API in the role of a consumer of it. If in doubt, try using "test-driven development" (TDD), which will force you to think about the public API and how to use it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/363739", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/248528/" ] }
363,743
I have found many posts about how to organize work on the develop branch and up to the release branch. Or even how to work without the develop branch. The trend of the "develop" branch going away, GitFlow branching strategy, but without the develop branch, To branch or not to branch? ... But I have trouble with how to organize work before the develop branch. If we create a new branch for every task (ticket), and push it into the common develop branch, it works OK for smaller projects. But I have seen larger projects where a more complicated scheme was used: for connected tasks, some medium-level branches were created, and ticket branches were later pushed not to the develop branch, but to the appropriate medium-level branch. And I understand why: if the project is so complicated that more than one person works on the same theme, these developers start to have problems with changes made by others on the develop branch while they are waiting for reactions and approvals on their pull requests, and while they work on the appropriate fixes they have to resolve newly appearing conflicts again and again. I thought that a medium-level branch could be temporarily locked, so that all participants could push changes into it in turn, which would practically prevent the conflicts. But I have too little experience in organizing large repositories and I am not sure at all. In the description of the GitFlow strategy, http://datasift.github.io/gitflow/IntroducingGitFlow.html, the drawings show this very two-level scheme. But this part and its practical use are not explained there at all. The question is: is the scheme of two levels of task branches a necessary and sufficient solution for the problem? And how should it be used to be that solution? Edit: I am not talking about branches created for a long time for some departments. I understand that they are ineffective. Imagine that we both have to implement some functionality that touches the same few classes. Do we need to resolve our code conflicts on the develop branch? And if we do it on the common task branch, then we have exactly what I am speaking about: separate local branches and a common thematic branch in the repository.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/363743", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/44104/" ] }
363,807
I'm designing an application which will in an early stage collect data A, B, and C from clients, but later on will instead collect data A, B, and D. A, B, C, and D are closely related and right now exist as columns of a single PostgreSQL database table T. Once C is no longer needed, I want to remove its references from my application (I use the Django ORM), but I want to keep the data that was already entered. What is the best way to do so? I've thought of creating a new table for ABD, but that might cause issues with any rows referencing table T. I could just leave column C alone and remove references to it in the code, allowing the existing data to survive. Is there a better option I'm not seeing? Some extra details: The number of rows will not be big, most likely 1-2 per user. This is a mass-market application, but by the time I switch from C to D, the userbase will not be very large yet. C and D will likely not be collected at the same time, although that is a possibility. C and D likely represent multiple columns each, not just one each.
If you want to keep the data, then it's not obsolete. Just leave it where it is. It's fine if some class mapped to a table doesn't map every column.
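The question uses the Django ORM, but the idea in the answer is easy to show in any ORM; a hypothetical Java/JPA sketch where the table keeps its legacy column while the entity simply stops mapping it (entity and column names are illustrative):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Submission {
    @Id
    private long id;

    @Column(name = "a")
    private String a;

    @Column(name = "b")
    private String b;

    // The table still has column "c" holding the historical data,
    // but nothing requires the entity to map it once the code stops using it.
    @Column(name = "d")
    private String d;
}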
{ "source": [ "https://softwareengineering.stackexchange.com/questions/363807", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/282909/" ] }
363,874
I'm trying to follow Uncle Bob's clean code suggestions and specifically to keep methods short. I find myself unable to shorten this logic though: if (checkCondition()) {addAlert(1);} else if (checkCondition2()) {addAlert(2);} else if (checkCondition3()) {addAlert(3);} else if (checkCondition4()) {addAlert(4);} I cannot remove the elses and thus separate the whole thing into smaller bits, because the "else" in the "else if" helps performance - evaluating those conditions is expensive, and if I can avoid evaluating the conditions below because one of the first ones is true, I want to avoid them. Even semantically speaking, evaluating the next condition if the previous was met does not make sense from the business point of view. Edit: This question was identified as a possible duplicate of Elegant ways to handle if(if else) else. I believe this is a different question (you can see that also by comparing the answers to those questions). My question is about checking for the first accepting condition in order to end quickly. The linked question is about requiring all conditions to be accepting in order to do something. (This is better seen in this answer to that question: https://softwareengineering.stackexchange.com/a/122625/96955 )
The important measurement is the complexity of the code, not its absolute size. Assuming that the different conditions are really just single function calls, and that the actions are not more complex than what you've shown, I'd say there's nothing wrong with the code. It is already as simple as it can be. Any attempt to further "simplify" will indeed complicate things. Of course, you can replace the else keyword with a return as others have suggested, but that's just a matter of style, not a change in complexity whatsoever. Aside: My general advice would be never to get religious about any rule for clean code: Most of the coding advice you see on the internet is good if it's applied in a fitting context, but radically applying that same advice everywhere may win you an entry in the IOCCC. The trick is always to strike a balance that allows human beings to easily reason about your code. Use too big methods, and you are screwed. Use too small functions, and you are screwed. Avoid ternary expressions, and you are screwed. Use ternary expressions everywhere, and you are screwed. Realize that there are places that call for one-line functions, and places that call for 50-line functions (yes, they exist!). Realize that there are places that call for if() statements, and that there are places that call for the ?: operator. Use the full arsenal that's at your disposal, and try to always use the most fitting tool you can find. And remember, don't get religious about this advice either.
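For completeness, the "replace the else keyword with a return" variant mentioned above would look like this (the wrapping method name is made up; behaviour and short-circuiting are unchanged, since later conditions are still never evaluated once one matches):

void addFirstMatchingAlert() {
    if (checkCondition())  { addAlert(1); return; }
    if (checkCondition2()) { addAlert(2); return; }
    if (checkCondition3()) { addAlert(3); return; }
    if (checkCondition4()) { addAlert(4); }
}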
{ "source": [ "https://softwareengineering.stackexchange.com/questions/363874", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/96955/" ] }
364,051
First off, I am aware that many questions have been asked about VCS as a solo developer, but they are often too broad. This concerns only branching, and still it has been marked as a duplicate... The supposed duplicate is, again, marked as another duplicate of another question that is too broad and doesn't concern branching specifically. That's what makes my question unique. What are the advantages, if any, of using branching as a solo developer? I've often seen it recommended even in a solo-dev context, but beyond using a 'master' trunk for development and branching off for working, release-ready code, I don't see how I could harness the power of branching (for example, to compartmentalize new features) without over-complicating the whole development process.
The advantages are mostly the same as for groups of developers. By using an always release-ready master branch, and feature branches for developing new features, you can always release off the master. Find an important bug while working on a feature? Switch branch, fix, release, switch back and continue developing. Or maybe this is a hobby project and you just like being able to work a bit on this feature and a bit of that, as the mood strikes you. You're basically emulating a multiple-person team by time-slicing. The implicit branching that DVCSs do on clones means that formal branches on the authoritative repository are less about coordinating people and more about coordinating development directions, and even a single person can do multiple of those.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/364051", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/293466/" ] }
364,086
Edit added 2+ years later I "checked" the @dandavis answer because it answers my original question , giving reasons to prefer const foo . However, I am completely convinced by the @Wayne Bloss answer that function foo() is generally superior. Original Question here For example, in this Redux video , the instructor always uses syntax like const counter = (state=0, action) => { ... function body here } where I would just use the "traditional" function counter(state=0, action) { ... function body here } Which is actually shorter and, IMO, clearer. It's easier to scan the fairly even and structured left edge of the page for the word "function" than scan the raggedy right edge for a small "=>". Other than this , and trying to be objective, not opinion, is there some useful difference or advantage to the newfangled syntax?
Function statements (named functions, the 2nd syntax shown) are hoisted to the top of the full lexical scope, even those inside arbitrary blocks and control blocks, like if statements. Using const (like let) to declare a variable gives it block scope, stops the full hoisting (hoisting to the mere block), and ensures it cannot be re-declared. When concatenating scripts together, or using some other package-building tools, function hoisting can break conflicting scripts in ways that are difficult to debug as it fails silently. A re-declared const will throw an exception before the program can run, so it's much easier to debug.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/364086", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/90992/" ] }
364,093
I'm using an internal library that was designed to mimic a proposed C++ library , and sometime in the past few years I see its interface changed from using std::string to string_view . So I dutifully change my code, to conform to the new interface. Unfortunately, what I have to pass in is a std::string parameter, and something that is a std::string return value. So my code changed from something like this: void one_time_setup(const std::string & p1, int p2) { api_class api; api.setup (p1, special_number_to_string(p2)); } to void one_time_setup(const std::string & p1, int p2) { api_class api; const std::string p2_storage(special_number_to_string(p2)); api.setup (string_view(&p1[0], p1.size()), string_view(&p2_storage[0], p2_storage.size())); } I really don't see what this change bought me as the API client, other than more code (to possibly screw up). The API call is less safe (due to the API no longer owning the storage for its parameters), probably saved my program 0 work (due to move optimizations compilers can do now), and even if it did save work, that would only be a couple of allocations that will not and would never be done after startup or in a big loop somewhere. Not for this API. However, this approach seems to follow advice I see elsewhere, for example this answer : As an aside, since C++17 you should avoid passing a const std::string& in favor of a std::string_view: I find that advice surprising, as it seems to be advocating universally replacing a relatively safe object with a less safe one (basically a glorified pointer and length), primarily for purposes of optimization. So when should string_view be used, and when should it not?
Does the functionality taking the value need to take ownership of the string? If so, use std::string (non-const, non-ref). This option gives you the choice to explicitly move in a value as well if you know that it won't ever be used again in the calling context. Does the functionality just read the string? If so, use std::string_view (const, non-ref); this is because string_view can handle std::string and char* easily without issue and without making a copy. This should replace all const std::string& parameters. Ultimately you should never need to call the std::string_view constructor like you are. std::string has a conversion operator that handles the conversion automatically.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/364093", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/127619/" ] }
364,145
The default behavior of assert in C++ is to do nothing in release builds. I presume this is done for performance reasons and maybe to prevent users from seeing nasty error messages. However, I'd argue that those situations where an assert would have fired but was disabled are even more troublesome, because the application will then probably crash in an even worse way down the line since some invariant was broken. Additionally, the performance argument for me only counts when it is a measurable problem. Most asserts in my code aren't much more complex than assert(ptr != nullptr); which will have a small impact on most code. This leads me to the question: Should assertions (meaning the concept, not the specific implementation) be active in release builds? Why (not)? Please note that this question is not about how to enable asserts in release builds (like #undef _NDEBUG or using a self-defined assert implementation). Furthermore, it is not about enabling asserts in third-party/standard library code but in code controlled by me.
The classic assert is a tool from the old standard C library, not from C++. It is still available in C++, at least for reasons of backwards compatibility. I have no precise timeline of the C standard libs at hand, but I am pretty sure assert was available shortly after the time when K&R C came to life (around 1978). In classic C, for writing robust programs, adding NULL pointer tests and array bounds checking needs to be done way more frequently than in C++. The bulk of NULL pointer tests can be avoided by using references and/or smart pointers instead of pointers, and by using std::vector, array bounds checking is often unnecessary. Moreover, the performance hit in 1980 was definitely much more important than today. So I think that is very likely the reason why "assert" was designed to be active only in debug builds by default. Moreover, for real error handling in production code, a function which just tests some condition or invariant, and crashes the program if the condition is not fulfilled, is in most cases not flexible enough. For debugging that is probably ok, since the one who runs the program and observes the error typically has a debugger at hand to analyse what happens. For production code, however, a sensible solution needs to be a function or mechanism which: tests some condition (and stops execution at the scope where the condition fails); provides a clear error message in case the condition does not hold; allows the outer scope to take the error message and output it to a specific communication channel (this channel might be something like stderr, a standard log file, a message box in a GUI program, a general error-handling callback, a network-enabled error channel, or whatever fits best to the particular piece of software); and allows the outer scope, on a per-case basis, to decide if the program should end gracefully, or if it should continue. (Of course, there are also situations where ending the program immediately in case of an unfulfilled condition is the only sensible option, but in such cases, it should happen in a release build as well, not just in a debug build.) Since the classic assert does not provide these features, it is not a good fit for a release build, assuming the release build is what one deploys to production. Now you can ask why there is no such function or mechanism in the C standard lib which provides this kind of flexibility. Actually, in C++, there is a standard mechanism which has all these features (and more), and you know it: it is called exceptions. In C, however, it is hard to implement a good, general-purpose standard mechanism for error handling with all the mentioned features because of the lack of exceptions as part of the programming language. So most C programs have their own error handling mechanisms with return codes, or "goto"s, or "long jumps", or a mixture of that. These are often pragmatic solutions which fit the particular kind of program, but are not "general purpose enough" to fit into the C standard library.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/364145", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/112442/" ] }
364,211
We have recently moved to Java 8. Now, I see applications flooded with Optional objects. Before Java 8 (Style 1) Employee employee = employeeServive.getEmployee(); if(employee!=null){ System.out.println(employee.getId()); } After Java 8 (Style 2) Optional<Employee> employeeOptional = Optional.ofNullable(employeeService.getEmployee()); if(employeeOptional.isPresent()){ Employee employee = employeeOptional.get(); System.out.println(employee.getId()); } I see no added value of Optional<Employee> employeeOptional = employeeService.getEmployee(); when the service itself returns optional: Coming from a Java 6 background, I see Style 1 as more clear and with fewer lines of code. Is there any real advantage I am missing here? Consolidated from understanding from all answers and further research at blog
Style 2 isn't going Java 8 enough to see the full benefit. You don't want the if ... use at all. See Oracle's examples . Taking their advice, we get: Style 3 // Changed EmployeeServive to return an optional, no more nulls! Optional<Employee> employee = employeeServive.getEmployee(); employee.ifPresent(e -> System.out.println(e.getId())); Or a more lengthy snippet Optional<Employee> employee = employeeServive.getEmployee(); // Sometimes an Employee has forgotten to write an up-to-date timesheet Optional<Timesheet> timesheet = employee.flatMap(Employee::askForCurrentTimesheet); // We don't want to do the heavyweight action of creating a new estimate if it will just be discarded client.bill(timesheet.orElseGet(EstimatedTimesheet::new));
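For what it's worth, here is a minimal, self-contained Java sketch of Style 3 that compiles and runs on its own; the Employee and EmployeeService classes below are simplified stand-ins for the asker's types, not their real code:

    import java.util.Optional;

    class Employee {
        private final int id;
        Employee(int id) { this.id = id; }
        int getId() { return id; }
    }

    class EmployeeService {
        // Returns an Optional instead of null, so callers never need a null check
        Optional<Employee> getEmployee() {
            return Optional.of(new Employee(42));
        }
    }

    public class Style3Demo {
        public static void main(String[] args) {
            EmployeeService employeeService = new EmployeeService();

            // Style 3: no isPresent()/get() pair, no if/else
            employeeService.getEmployee()
                           .map(Employee::getId)            // transform only if a value is present
                           .ifPresent(System.out::println); // print only if a value is present

            // Providing a fallback instead of branching
            int id = employeeService.getEmployee()
                                    .map(Employee::getId)
                                    .orElse(-1);             // default when no employee is present
            System.out.println(id);
        }
    }

The point is that the presence check, the transformation and the fallback are all expressed on the Optional itself, so no isPresent()/get() pair is needed.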
{ "source": [ "https://softwareengineering.stackexchange.com/questions/364211", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/260829/" ] }
364,295
When to use else in conditions? 1) a) long int multiplyNumbers(int n) { if (n >= 1) { return n*multiplyNumbers(n-1); } else { return 1; } } or b) long int multiplyNumbers(int n) { if (n >= 1) { return n*multiplyNumbers(n-1); } return 1; } 2) a) int max(int num1, int num2) { int result; if (num1 > num2) { result = num1; } else { result = num2; } return result; } or b) int max(int num1, int num2) { if (num1 > num2) { return num1; } else { return num2; } } or c) int max(int num1, int num2) { if (num1 > num2) { return num1; } return num2; } Is there a rule about when to use else? Do the if with else statements take more memory? On the one hand, they are more readable, but on the other hand, too much nesting is poor reading. If I throw exceptions, then it is better not to use else, but if these are ordinary conditional operations like in my examples?
I can tell you which one I would use, and for which reasons, however I think there will be as many opinions as community members here... 1) Neither. I would write: long int multiplyNumbers(int n) { if (n < 1) { return 1; } return n*multiplyNumbers(n-1); } This is because I like the idea of "early exit" in the special cases, and this reminds me of that scenario. The general rule is: if you need to handle a special case or a general case, handle the special case first. Presumably the special case (e.g. error or a boundary condition) will be handled quickly and the rest of the code is still not too far from the 'if' condition so that you cannot skip a few lines of code to look into it. It also makes reading the general case handling easier, because you need to just scroll down the function body. In other words, instead of: void foo(int bar) { if (!special_case_1) { if (!special_case_2) { return handle_general_case(); } else { return handle_special_case_2(); } } else { return handle_special_case_1(); } } I am advocating writing this: int foo(int bar) { if (special_case_1) { return handle_special_case_1(); } if (special_case_2) { return handle_special_case_2(); } return handle_general_case(); } You will notice that, if it is feasible, I prefer not to introduce an additional variable 'result' here, mostly because then I don't need to care whether it is initialised or not. (If you introduce it, then either you initialise it at the top, only to be overwritten in each branch, or you don't, and then you may forget to initialise it in some branch. Plus, even if you initialise it in every branch, your static code analyser may not see this and may complain for no reason.) 2) I would use (b) (with declaration of 'result' thrown away). This is because in this case I cannot say that either num1 < num2 or num1 >= num2 is a special case. They are equal to my eyes and thus (b) is preferred over (c). (I've already explained why, in this case, I wouldn't introduce an additional variable, so (a) is in my eyes inferior to (b).)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/364295", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/293813/" ] }
364,310
My office uses Git and SourceTree for our version control. This came about because when I joined there was zero version control and SourceTree was the only system I had ever used. I am not an expert by any means, but I am the most experienced out of my coworkers so I am the de facto expert responsible for teaching everyone to use Git properly and fix any mistakes they are making. I am making a tutorial document that goes through Git and SourceTree and explains every step of the process. In the Pull process, the SourceTree dialogue allows you to select the option "Commit merged changes immediately". I understand what this does and why it is useful. What I don't understand is why anyone would not want to use this feature. Could someone explain why you would ever not want to have your merged changes committed automatically? I am trying to understand the reasoning so I can explain the feature's usefulness better and get an idea of what pitfalls to look out for in the future.
I would not want to use this feature. The fact that there were no conflicts means pretty much that the changes being merged in my branch are roughly not in the same lines of code as the ones I've made. It does not mean that those changes are compatible with my changes. It does not mean that the code will compile, or that the code will work, or that the tests will pass. In other words, by using this option I potentially end up with a spurious commit of code that may not be in a good state and which requires a new commit to fix. As I am doing this work anyways, and as I should never push this spurious commit upstream, not even by mistake (Goodness forbid, someone may then merge that into some other branch!), I see no reason to create this commit in the first place.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/364310", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/199550/" ] }
364,418
I have a piece of code where I iterate a map until a certain condition is true and then later on use that condition to do some more stuff. Example: Map<BigInteger, List<String>> map = handler.getMap(); if(map != null && !map.isEmpty()) { for (Map.Entry<BigInteger, List<String>> entry : map.entrySet()) { fillUpList(); if(list.size() > limit) { limitFlag = true; break; } } } else { logger.info("\n>>>>> \n\t 6.1 NO entries to iterate over (for given FC and target) \n"); } if(!limitFlag) // Continue only if limitFlag is not set { // Do something } I feel setting a flag and then using that to do more stuff is a code smell. Am I right? How could I remove this?
There's nothing wrong with using a Boolean value for its intended purpose: to record a binary distinction. If I were told to refactor this code, I'd probably put the loop into a method of its own so that the assignment + break turns into a return ; then you don't even need a variable, you can simply say if(fill_list_from_map()) { ...
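To make the suggested refactoring concrete, here is a hedged Java sketch; the original snippet doesn't show how fillUpList(), list and limit are defined, so the method below uses assumed stand-ins (it simply copies the entry values into the list) rather than the real logic:

    import java.math.BigInteger;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    public class LimitCheckDemo {

        // The loop lives in its own method; "set flag and break" becomes "return true"
        static boolean fillListUpToLimit(Map<BigInteger, List<String>> map,
                                         List<String> list,
                                         int limit) {
            if (map == null || map.isEmpty()) {
                return false; // nothing to iterate over
            }
            for (Map.Entry<BigInteger, List<String>> entry : map.entrySet()) {
                list.addAll(entry.getValue()); // stand-in for the original fillUpList()
                if (list.size() > limit) {
                    return true; // limit reached, the caller decides what to do
                }
            }
            return false;
        }

        public static void main(String[] args) {
            Map<BigInteger, List<String>> map =
                    Map.of(BigInteger.ONE, List.of("a", "b"),
                           BigInteger.TWO, List.of("c"));
            List<String> list = new ArrayList<>();

            if (!fillListUpToLimit(map, list, 10)) {
                // Continue only if the limit was not reached -- no flag variable needed
                System.out.println("Continuing with " + list.size() + " items");
            }
        }
    }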
{ "source": [ "https://softwareengineering.stackexchange.com/questions/364418", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/173849/" ] }
364,601
As a C++ developer I'm quite used to C++ header files, and find it beneficial to have some kind of forced "documentation" inside the code. I usually have a bad time when I have to read some C# code because of that: I don't have that sort of mental map of the class I'm working with. Let's assume that as a software engineer I'm designing a program's framework. Would it be too crazy to define every class as an abstract unimplemented class, similarly to what we would do with C++ headers, and let developers implement it? I'm guessing there may be some reasons why someone could find this to be a terrible solution but I'm not sure why. What would one have to consider for a solution like this?
The reason this was done in C++ had to do with making compilers faster and easier to implement. This was not a design to make programming easier. The purpose of the header files was to enable the compiler to do a super quick first pass to know all the expected function names and allocate memory locations for them so that they can be referenced when called in cpp files, even if the class defining them had not been parsed yet. Trying to replicate a consequence of old hardware limitations in a modern development environment is not recommended! Defining an interface or abstract class for every class will lower your productivity; what else could you have done with that time ? Also, other developers will not follow this convention. In fact, other developers might delete your abstract classes. If I find an interface in code that meets both of these criteria, I delete it and refactor it out of code: 1. Does not conform to the interface segregation principle 2. Only has one class that inherits from it The other thing is, there are tools included with Visual Studio that do what you aim to accomplish automatically: The Class View The Object Browser In Solution Explorer you can click on triangles to expand classes to see their functions, parameters and return types. There are three little drop down menus below the file tabs, the rightmost one lists all the members of the current class. Give one of the above a try before devoting time to replicating C++ header files in C#. Furthermore, there are technical reasons not to do this... it will make your final binary larger than it needs to be. I will repeat this comment by Mark Benningfield : Declarations in C++ header files don't end up being part of the generated binary. They are there for the compiler and linker. These C# abstract classes would be part of the generated code, to no benefit whatsoever. Also, mentioned by Robert Harvey , technically, the closest equivalent of a header in C# would be an interface, not an abstract class.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/364601", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/294243/" ] }
364,649
A typical piece of advice before any production deployment is to back up the DB first. This way, if the new update has some issue that can lead to potential data loss or logical data corruption, you still have a backup to compare against and correct old records. However, this works well only while the DB size is in the range of a few GBs. Once the DB size is huge, backups take a long time to complete. What are some best practices that should be followed in such situations, so as to avoid logical data corruption because of logical issues in a code deployment?
As someone who regularly dealt with updating production database for customers for our software upgrades, I tell you that the best way to minimize errors is to make updates as straightforward as possible. If you can perform a change to all records rather than specific records, it is preferable. In other words, if you're given a list of ids of records which need their state changed, you should be asking yourself why the update is being done in the context of the program. It may be that of the 10 records you need to update, the table only has 10 elements. Therefore you should be asking yourself if conceptually all you're doing is updating the state of all records. If you can insert, it is preferable. The act of adding a record is self-contained. By this I mean there is only one side effect of adding a record, and that is the existence of a record that didn't exist prior. Therefore unless you're adding a record which shouldn't be there, there should be no issues. If you can avoid deletion, it is preferable. If you're performing a deletion, you're removing data which would otherwise be unrecoverable without a backup. If possible, try to organize the data in such a way that you can disable records by changing its state rather than physically deleting the record. The excess of data can be put in a partition or it can be removed entirely in a later moment once you're sure there are no problems. Have a consistent update policy. If you need to update a record, one of several things can happen: Your record doesn't exist. Your record exists but it has already been changed. Your record exists and requires the change. You need to have a policy to determine the course of action should something not go as planned. For simplicity sake, you should be consistent across the board and apply this policy in any situation of this type, not just for specific tables. This makes it easier to be able to recover data later. Generally, my policy is to write the script in such a way as to be able to re-execute it later. Should the script fail, it is nice to know you can make the proper adjustments and re-execute, however you're free to pick your own policy that suits you best. Backups This by no means excuses you from performing a backup prior to performing any update in a production environment! Though even with a backup, I consider it a failure to have to use the backup. Losing data cannot be a possibility even in the worst-case scenario. Conclusion You're not always going to be able to have it your way. The table schema is not likely going to be determined by you, and as such it means the types of updates you can expect to perform will be both complicated and risky. Though if you have any say-so in the matter, it helps to keep these points in mind as they make any updates straightforward and without significant risk. Good luck!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/364649", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23888/" ] }
364,654
Say I have an interface Interface, which only contains getters for various fields. This interface has multiple implementations (say Foo and Bar), each of which adds various fields. All these implementations are immutable. Assume I have an instance interface of Interface, and I want to create a copy of this instance which changes one of the fields exposed by Interface. How can I achieve this without casting interface to a subtype? I cannot use a Builder class for Interface because this is an abstract interface. I believe this problem is a fairly generic OOP problem, but here is what the Java code for it would look like: public interface Interface { public int getVersion(); } public class Foo implements Interface { private final int version; private final String owner; public Foo(int version, String owner) { this.version = version; this.owner = owner; } public int getVersion() { return version; } public String getOwner() { return owner; } public String toString() { StringBuilder builder = new StringBuilder(owner); builder.append(" v"); builder.append(version); return builder.toString(); } } public class Bar implements Interface { // similar to Foo } and the situation where I want to copy an existing instance: Interface instance = new Foo(1, "some value"); assertEquals("some value v1", instance.toString()); Interface copy = // create the copy based on instance but updating the version to 2 assertEquals("some value v2", copy.toString());
{ "source": [ "https://softwareengineering.stackexchange.com/questions/364654", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/294326/" ] }
364,725
The SRP states that a class (module) should have only one reason to change. The "duties" of an Interactor in Bob Martin's clean architecture are, per use case: receive requests/inputs from a controller; orchestrate domain entities to fulfil the requests; and prepare the output data. Does this imply three reasons to change? (ie whenever inputs change or domain functionality is expanded or extra output fields are added.) If necessary, what would be a good strategy to resolve this? (eg, CQRS?) My current approach is to make a use-case Interactor module with three classes, one per each concern, and a fourth Facade/ Mediator class for orchestration and clients interfacing. However, doesn't this push SRP violation up onto the module level? As pointed by @Robert Harvey, the term "duties" was used rather sloppily. The actual design issue has been the large changes to the interactor needed both when the domain changed, and the OutputData fields/formats changed (less so with input). Aren't these two distinct reasons for change? As I realised from @Filip Milovanović and @guillaume31, SRP is not violated, esp. with three separate classes in the interactor module. Also, at the module level, the "Common Closure Principle" is perhaps more appropriate than the SRP. The CCP ("Gather into components ... classes that change for the same reasons and at the same times.") might suggest to separate the interactor classes. (But then the classes corresponding to the same use case would be spread out between locations.) Thanks to the answers and comments, these trade-offs have become much clearer to me.
The "duties" of an Interactor in Bob Martin's clean architecture are, per use case: receive requests/inputs from a controller; orchestrate domain entities to fulfil the requests; and prepare the output data. Does this imply three reasons to change? You're confusing duties with responsibilities. More specifically, you're confusing "should have only one responsibility" with "should do only one thing." The responsibility of an interactor is to "interact." The responsibility of a data access class is to access data. It doesn't have four responsibilities because it creates, reads, updates and deletes; it has four duties. If you're a short-order cook, your responsibility is to make meals. You don't split your duties into separate employees. You don't have one employee that cracks the egg, another employee that turns it over, and a third that puts it on the plate. You perform all three.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/364725", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/294434/" ] }
364,918
A common pattern for locating a bug follows this script: Observe weirdness, for example, no output or a hanging program. Locate relevant message in log or program output, for example, "Could not find Foo". (The following is only relevant if this is the path taken to locate the bug. If a stack trace or other debugging information is readily available that’s another story.) Locate code where the message is printed. Debug the code between the first place Foo enters (or should enter) the picture and where the message is printed. That third step is where the debugging process often grinds to a halt because there are many places in the code where "Could not find Foo" (or a templated string Could not find {name} ) is printed. In fact, several times a spelling mistake helped me find the actual location much faster than I otherwise would - it made the message unique across the entire system and often across the world, resulting in a relevant search engine hit immediately. The obvious conclusion from this is that we should use globally unique message IDs in the code, hard coding it as part of the message string, and possibly verifying that there’s only one occurrence of each ID in the code base. In terms of maintainability, what does this community think are the most important pros and cons of this approach, and how would you implement this or otherwise ensure that implementing it never becomes necessary (assuming that the software will always have bugs)?
Imagine you have a trivial utility function that is used in hundreds of places in your code: decimal Inverse(decimal input) { return 1 / input; } If we were to do as you suggest, we might write decimal Inverse(decimal input) { try { return 1 / input; } catch(Exception ex) { log.Write("Error 27349262 occurred."); } } An error that could occur is if the input were zero; this would result in a divide by zero exception. So let's say you see 27349262 in your output or your logs. Where do you look to find the code that passed the zero value? Remember, the function-- with its unique ID-- is used in hundreds of places. So you while you may know that division by zero occurred, you have no idea whose 0 it is. Seems to me if you're going to bother logging the message IDs, you may as well log the stack trace. If the verbosity of the stack trace is what bothers you, you don't have to dump it as a string the way the runtime gives it to you. You can customize it. For example, if you wanted an abbreviated stack trace going only to n levels, you could write something like this (if you use c#): static class ExtensionMethods { public static string LimitedStackTrace(this Exception input, int layers) { return string.Join ( ">", new StackTrace(input) .GetFrames() .Take(layers) .Select ( f => f.GetMethod() ) .Select ( m => string.Format ( "{0}.{1}", m.DeclaringType, m.Name ) ) .Reverse() ); } } And use it like this: public class Haystack { public static void Needle() { throw new Exception("ZOMG WHERE DID I GO WRONG???!"); } private static void Test() { Needle(); } public static void Main() { try { Test(); } catch(System.Exception e) { //Get 3 levels of stack trace Console.WriteLine ( "Error '{0}' at {1}", e.Message, e.LimitedStackTrace(3) ); } } } Output: Error 'ZOMG WHERE DID I GO WRONG???!' at Haystack.Main>Haystack.Test>Haystack.Needle Maybe easier than maintaining message IDs, and more flexible. Steal my code from DotNetFiddle
{ "source": [ "https://softwareengineering.stackexchange.com/questions/364918", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13162/" ] }
365,008
For example, I want to show a list of buttons from 0, 0.5, ... 5, increasing in steps of 0.5. I use a for loop to do that, and give the button at STANDARD_LINE a different color: var MAX=5.0; var DIFF=0.5; var STANDARD_LINE=1.5; for(var i=0;i<=MAX;i=i+DIFF){ button.text=i+''; if(i==STANDARD_LINE){ button.color='red'; } } In this case there should be no rounding errors, as each value is exact in IEEE 754. But I'm struggling with whether I should change it to avoid the floating point equality comparison: var MAX=10; var STANDARD_LINE=3; for(var i=0;i<=MAX;i++){ button.text=i/2.0+''; if(i==STANDARD_LINE){ button.color='red'; } } On one hand, the original code is simpler and more straightforward to me. But there is one thing I'm considering: does i==STANDARD_LINE mislead junior teammates? Does it hide the fact that floating point numbers may have rounding errors? After reading comments from this post: https://stackoverflow.com/questions/33646148/is-hardcode-float-precise-if-it-can-be-represented-by-binary-format-in-ieee-754 it seems many developers don't know that some float numbers are exact. Should I avoid float number equality comparisons even if it is valid in my case? Or am I overthinking this?
I would always avoid successive floating-point operations unless the model I'm computing requires them. Floating-point arithmetic is unintuitive to most and a major source of errors. And telling the cases in which it causes errors from those where it doesn't is an even more subtle distinction! Therefore, using floats as loop counters is a defect waiting to happen and would require at the very least a fat background comment explaining why it's okay to use 0.5 here, and that this depends on the specific numeric value. At that point, rewriting the code to avoid float counters will probably be the more readable option. And readability is next to correctness in the hierarchy of professional requirements.
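As an illustration of the integer-counter rewrite recommended above — the question's code is JavaScript, but the idea is language-independent, so here is a small Java sketch with assumed names mirroring the question:

    public class ButtonSteps {
        public static void main(String[] args) {
            final int steps = 10;            // 0.0 to 5.0 in steps of 0.5 -> 11 values
            final int standardLineIndex = 3; // index of the 1.5 button

            for (int i = 0; i <= steps; i++) {
                double value = i * 0.5;      // derived for display only, never compared
                String label = Double.toString(value);
                boolean highlight = (i == standardLineIndex); // exact integer comparison
                System.out.println(label + (highlight ? "  <-- red" : ""));
            }
        }
    }

The loop counter and the comparison stay in integer arithmetic; the floating-point value is only ever derived for display.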
{ "source": [ "https://softwareengineering.stackexchange.com/questions/365008", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/248528/" ] }
365,119
I have read that using "new" in a constructor (for any objects other than simple value ones) is bad practice, as it makes unit testing impossible (since those collaborators then need to be created too and cannot be mocked). As I am not really experienced in unit testing, I am trying to gather some rules that I will learn first. Also, is this a rule that is generally valid, regardless of the language used?
There are always exceptions, and I take issue with the 'always' in the title, but yes, this guideline is generally valid, and also applies outside of the constructor as well. Using new in a constructor violates the D in SOLID (dependency inversion principle). It makes your code hard to test because unit testing is all about isolation; it is hard to isolate class if it has concrete references. It is not just about unit testing though. What if I want to point a repository to two different databases at once? The ability to pass in my own context allows me to instantiate two different repositories pointing to different locations. Not using new in the constructor makes your code more flexible. This also applies to languages that may use constructs other than new for object initialization. However, clearly, you need to use good judgment. There are plenty of times when it is fine to use new , or where it would be better not to, but you will not have negative consequences. At some point somewhere, new has to be called. Just be very careful about calling new inside a class that a lot of other classes depend on. Doing something such as initializing an empty private collection in your constructor is fine, and injecting it would be absurd. The more references a class has to it, the more careful you should be not to call new from within it.
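A small Java sketch of the "two repositories, two databases" point; the Repository/DbContext types below are hypothetical and only illustrate constructor injection versus calling new inside the constructor:

    // Hypothetical types, just to contrast "new inside" with constructor injection.
    interface DbContext {
        String name();
    }

    class SqlContext implements DbContext {
        private final String connectionString;
        SqlContext(String connectionString) { this.connectionString = connectionString; }
        public String name() { return connectionString; }
    }

    class Repository {
        private final DbContext context;

        // The dependency is passed in; the class never decides which database it talks to
        Repository(DbContext context) {
            this.context = context;
        }

        void save(String record) {
            System.out.println("Saving '" + record + "' via " + context.name());
        }
    }

    public class InjectionDemo {
        public static void main(String[] args) {
            // Two repositories pointing at different databases -- impossible if the
            // constructor hard-coded `new SqlContext("prod")` internally.
            Repository primary = new Repository(new SqlContext("jdbc:primary"));
            Repository reporting = new Repository(new SqlContext("jdbc:reporting"));
            primary.save("order-1");
            reporting.save("order-1");
            // In a test you could pass a fake DbContext instead; no need to intercept `new`.
        }
    }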
{ "source": [ "https://softwareengineering.stackexchange.com/questions/365119", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/294256/" ] }
365,321
Writing a User object in Swift, though my question relates to any strongly typed language. A User can have a bunch of links (FacebookProfile, InstagramProfile, etc). A few questions around this. Is it good practice to wrap links in their own object? struct User { var firstName: string var lastName: string var email: string var links: Links } struct Links { var facebook: string var instagram: string var twitter: string } Or should they be loose? I know technically both ways are fine, but wondering if there is a recommended approach, in general--especially for readability. struct User { var firstName: string var lastName: string var email: string var facebookLink: string var twitterLink: string var instagramLink: string } In a scenario like this, should links be a collection/list? I figured it should not be a list because there is a fixed number of link options available, and not a growing number. Is my thinking right? Is it good practice to place my networking methods inside the User object, like getUsers, getUser, updateUser? I know these could be subjective, but I am trying to understand what the best practice around similar situations is. Would appreciate any pointers.
You will either need zero of a thing, one of a thing, or an arbitrary number of the thing. I'm shocked that your design predicts that the number of needed links will always be three and that you know what their names are forever. What I'm preaching is called the zero one infinity rule . It might not be obvious at first but it's what tells me your design is not future tolerant. I'd feel differently if there was something about these links that was special to them. Some special thing that I do different when accessing a facebook link that I don't do for a twitter link. That would make them different things. But that isn't indicated in this code. That's why I'm not bothered by the email string. I know that is used in a different way then the links. So until I see a reason to use these links differently I'm on the side of a link collection.
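Since the question says it applies to any strongly typed language, here is one possible sketch of the "links as a collection" approach in Java rather than Swift; the field and method names are illustrative, not prescriptive:

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Optional;

    // Links kept as a collection: adding a new network later needs no change to User's shape.
    class User {
        private final String firstName;
        private final String lastName;
        private final String email;
        private final Map<String, String> links = new LinkedHashMap<>(); // network name -> URL

        User(String firstName, String lastName, String email) {
            this.firstName = firstName;
            this.lastName = lastName;
            this.email = email;
        }

        String displayName() {
            return firstName + " " + lastName + " <" + email + ">";
        }

        void addLink(String network, String url) {
            links.put(network, url);
        }

        Optional<String> link(String network) {
            return Optional.ofNullable(links.get(network));
        }
    }

    public class LinksDemo {
        public static void main(String[] args) {
            User user = new User("Ada", "Lovelace", "ada@example.com");
            user.addLink("facebook", "https://facebook.com/ada");
            // A network the original three-field design never predicted:
            user.addLink("mastodon", "https://example.social/@ada");

            System.out.println(user.displayName());
            user.link("facebook").ifPresent(System.out::println);
        }
    }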
{ "source": [ "https://softwareengineering.stackexchange.com/questions/365321", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/146330/" ] }
365,339
As an experienced software developer, I have learned to avoid magic strings. My problem is that it is such a long time since I have used them, I've forgotten most of the reasons why. As a result, I'm having trouble explaining why they're a problem to my less experienced colleagues. What objective reasons are there for avoiding them? What problems do they cause?
In a language that compiles, a magic string's value is not checked at compile time . If the string must match a particular pattern, you have to run the program to guarantee it fits that pattern. If you used something such as an enum, the value is at least valid at compile-time, even if it might be the wrong value. If a magic string is being written in multiple places you have to change all of them without any safety (such as compile-time error). This can be countered by only declaring it in one place and reusing the variable, though. Typos can become serious bugs. If you have a function: func(string foo) { if (foo == "bar") { // do something } } and someone accidentally types: func("barr"); This is worse the rarer or more complex the string is, especially if you have programmers that are unfamiliar with the project's native language. Magic strings are rarely self-documenting. If you see one string, that tells you nothing of what else the string could / should be. You will probably have to look into the implementation to be sure you've picked the right string. That sort of implementation is leaky , needing either external documentation or access to the code to understand what should be written, especially since it has to be character-perfect (as in point 3). Short of "find string" functions in IDEs, there are a small number of tools that support the pattern. You may coincidentally use the same magic string in two places, when really they are different things, so if you did a Find & Replace, and changed both, one of them could break while the other worked.
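A short Java sketch of the typo point above — the names and the enum are made up purely for illustration:

    public class MagicStringDemo {

        // With a plain string, "barr" compiles fine and silently does nothing.
        static void funcWithString(String foo) {
            if (foo.equals("bar")) {
                System.out.println("doing the bar thing");
            }
        }

        // With an enum, an invalid value is a compile error, and the set of
        // legal values is self-documenting.
        enum Mode { BAR, BAZ }

        static void funcWithEnum(Mode mode) {
            if (mode == Mode.BAR) {
                System.out.println("doing the bar thing");
            }
        }

        public static void main(String[] args) {
            funcWithString("barr");     // typo: compiles, fails silently at runtime
            funcWithEnum(Mode.BAR);     // Mode.BARR would not even compile
        }
    }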
{ "source": [ "https://softwareengineering.stackexchange.com/questions/365339", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1928/" ] }
365,412
Is there an operator equivalent of nor ? For example, my favorite color is neither green nor blue. And the code would be equivalent to: // example one if (color!="green" && color!="blue") { } // example two if (x nor y) { // x is false and y is false }
No, there is no nor operator in any high level mainstream programming language. Why ? Mainly because it is difficult to read: it requires the mental combination of several operators ( " and not ", or in a more literary style : " further negative ", " each untrue " ) it implies an implicit not on the first operand, but the reader only understand this afterwards it is different from human languages, which use an explicit negation on the first operand, such as " neither x nor y ", " nor x nor y ". So a reader might confuse (x nor y) with (x and not y) instead of ((not x) and (not y)) some readers are confused with the apparent or semantic which doesn't apply But it's so common in hardware... nor is an elementary hardware gate that can be used to make all the other logical gates. So one could argue that all the other logical operators are combinations and nor is the simplest elementary logical operator. However, what's true for the hardware is not necessarily true to the humans. And despite it's popularity at hardware level, some mainstream CPUs do not even offer a NOR in their assembler instruction set (e.g. x86 ). Alternatives Readability matters. And sometimes it can be improved by other means. Use of existing operators For example: if x not in [1,2] // use of 'in' or 'not in' operator instead of x!=1 and x!=2 Ordering of conditions if x==1 or x==2 action A else action B instead of if x!=1 and x!=2 action B else action A Use of until loop Some languages also offer loop instructions that allow to express conditions either with while or with until , letting you choose the more "positive" way. These instructions are for example until c do ... in ruby , do until c ... in vb , or repeat ... until c in pascal and its descendants. For example: Until (x==1 or x==2) do ... is equivalent to: While (x!=1 and x!=2) ... Make a function Now if you still prefer the nor syntax, you could define a function, but only if you don't expect a shortcut to happen: If ( nor(x,y) ) // attention, x and y will always be evaluated ... There is a readability advantage of the function over the operator, because the reader immediately understands that the negation applies to all arguments. In some languages you can define a function with a variable number of arguments.
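For illustration, a small Java sketch of the three spellings discussed above (plain and/not, a negated or, and a nor() helper); note, as mentioned, that the helper evaluates both operands before the call, so there is no short-circuiting:

    public class NorDemo {

        // Both operands are evaluated before the call -- no short-circuiting,
        // exactly the caveat mentioned above.
        static boolean nor(boolean x, boolean y) {
            return !x && !y;
        }

        public static void main(String[] args) {
            String color = "red";

            // Idiomatic spelling of "neither green nor blue":
            if (!color.equals("green") && !color.equals("blue")) {
                System.out.println("favorite color is neither green nor blue");
            }

            // Equivalent, sometimes more readable form:
            if (!(color.equals("green") || color.equals("blue"))) {
                System.out.println("same check, written as a negated or");
            }

            // Function form:
            if (nor(color.equals("green"), color.equals("blue"))) {
                System.out.println("same check, via a nor() helper");
            }
        }
    }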
{ "source": [ "https://softwareengineering.stackexchange.com/questions/365412", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/48061/" ] }
365,427
I can see several posts where handling exceptions at a central location or at a process boundary has been emphasized as a good practice, rather than littering every code block with try/catch. I strongly believe that most of us understand the importance of it; however, I see people still ending up with the catch-log-rethrow anti-pattern, mainly because, to ease troubleshooting of any exception, they want to log more context-specific information (for example, the method parameters passed), and the way to do that is to wrap the method in try/catch/log/rethrow. public static bool DoOperation(int num1, int num2) { try { /* do some work with num1 and num2 */ } catch (Exception ex) { logger.log($"error occurred while number 1 = {num1} and number 2 = {num2}"); throw; } } Is there a right way to achieve this while still maintaining good exception handling practice? I have heard of AOP frameworks like PostSharp for this, but would like to know if there is any downside or major performance cost associated with these AOP frameworks. Thanks!
The problem isn't the local catch block, the problem is the log and rethrow . Either handle the exception or wrap it with a new exception that adds additional context and throw that. Otherwise you will run into several duplicate log entries for the same exception. The idea here is to enhance the ability to debug your application. Example #1: Handle it try { doSomething(); } catch (Exception e) { log.Info("Couldn't do something", e); doSomethingElse(); } If you handle the exception, you can easily downgrade the importance of the exception log entry and there is no reason to percolate that exception up the chain. It's already dealt with. Handling an exception can include informing users that a problem happened, logging the event, or simply ignoring it. NOTE: if you intentionally ignore an exception I recommend providing a comment in the empty catch clause that clearly indicates why. This lets future maintainers know that it was not a mistake or lazy programming. Example: try { context.DrawLine(x1,y1, x2,y2); } catch (OutOfMemoryException) { // WinForms throws OutOfMemory if the figure you are attempting to // draw takes up less than one pixel (true story) } Example #2: Add additional context and throw try { doSomething(line); } catch (Exception e) { throw new MyApplicationException(filename, line, e); } Adding additional context (like the line number and file name in parsing code) can help enhance the ability to debug input files--assuming the problem is there. This is kind of a special case, so re-wrapping an exception in an "ApplicationException" just to rebrand it doesn't help you debug. Make sure you add additional information. Example #3: Don't do anything with the exception try { doSomething(); } finally { // cleanup resources but let the exception percolate } In this final case, you just allow the exception to leave without touching it. The exception handler at the outermost layer can handle the logging. The finally clause is used to make sure any resources needed by your method are cleaned up, but this is not the place to log that the exception was thrown.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/365427", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/293148/" ] }
365,530
While reviewing some code, I noticed the opportunity to change it to use generics. The (obfuscated) code looks like: public void DoAllTheThings(Type typeOfTarget, object[] possibleTargets) { var someProperty = typeOfTarget.GetProperty(possibleTargets[0]); ... } This code could be replaced by generics, like so: public void DoAllTheThings<T>(object[] possibleTargets[0]) { var someProperty = type(T).getProperty(possibleTargets[0]); ... } In researching the benefits and shortcomings of this approach I found a term called generic abuse . See: Protecting the uninitiated (developer) from generics https://stackoverflow.com/questions/28203199/is-this-an-abuse-of-generics https://codereview.stackexchange.com/q/60695 My question comes in two parts: Are there any benefits to moving to generics like this? (Performance? Readability?) What is Generics Abuse? And is using a generic every time there is a type parameter an abuse ?
When generics are appropriately applied, they remove code instead of just rearranging it. Primarily, the code that generics are best at removing is typecasts, reflection, and dynamic typing. Therefore generics abuse could be loosely defined as creating generic code without significant reduction in typecasting, reflection, or dynamic typing compared to a non-generic implementation. In regard to your example, I would expect an appropriate use of generics to change the object[] to a T[] or similar, and avoid Type or type altogether. That might require significant refactoring elsewhere, but if using generics is appropriate in this case, it should end up simpler overall when you're done.
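The question is C#, but the same point can be sketched in Java: generics earn their keep when they remove casts and reflection, not when they merely rearrange them. The methods below are hypothetical and unrelated to the poster's actual API:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Function;

    public class GenericsDemo {

        // Reflection-flavored version: the caller passes untyped objects and the
        // method has to inspect them -- adding a type parameter here would only
        // rearrange the code, not remove anything.
        static List<String> describeUntyped(Object[] items) {
            List<String> out = new ArrayList<>();
            for (Object item : items) {
                out.add(item.getClass().getSimpleName() + ": " + item);
            }
            return out;
        }

        // Generic version: T[] plus a typed accessor removes the reflection and
        // casting, which is the kind of reduction that justifies using generics.
        static <T> List<String> describeTyped(T[] items, Function<T, String> property) {
            List<String> out = new ArrayList<>();
            for (T item : items) {
                out.add(property.apply(item));
            }
            return out;
        }

        public static void main(String[] args) {
            Integer[] numbers = {1, 2, 3};
            System.out.println(describeUntyped(numbers));
            System.out.println(describeTyped(numbers, n -> "number " + n));
        }
    }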
{ "source": [ "https://softwareengineering.stackexchange.com/questions/365530", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/104909/" ] }
365,658
I'm currently working for a company that uses VSTS for managing git code. Microsoft's "recommended" way of merging a branch is to do a "squash merge", meaning that all commits for that branch get squashed into one new commit incorporating all of the changes. The trouble is, what if I do some changes in one branch for one backlog item, then immediately want to start doing changes in another branch for another backlog item, and those changes depend on the first branch's set of changes? I can create a branch for that backlog item and base it on the first branch. So far, so good. However, when it comes time to create a pull request for me second branch, the first branch has already been merged into master and because it's been done as a squash merge, git flags up a bunch of conflicts. This is because git doesn't see the original commits that the second branch was based off of, it just sees the one big squash merge and so in order to merge the second branch in to master it tries to replay all the first branch's commits in top of the squash merge, causing lots of conflicts. So my question is, is there any way to get around this (other than just never basing one feature branch off another, which limits my workflow) or does squash merging just break git's merging algorithm?
With Git, commits are immutable and form a directed acyclic graph. Squashing does not combine commits. Instead, it records a new commit with the changes from multiple other commits. Rebasing is similar, but doesn't combine commits. Recording a new commit with the same changes as an existing commit is called history rewriting. But as existing commits are immutable, this should be understood as "writing an alternative history." Merging tries to combine the changes of two commits' histories (branches) starting from a common ancestor commit. So let's look at your history:

                                 F   feature2
                                /
               1---2---3---4---5   feature1 (old)
              /
-o---o---o---A---o---o---S   master

A is the common ancestor, 1–5 the original feature branch, F the new feature branch, and S the squashed commit that contains the same changes as 1–5. As you can see, the common ancestor of F and S is A. As far as git is concerned, there is no relation between S and 1–5. So merging master with S on one side and feature2 with 1–5 on the other will conflict. Resolving these conflicts is not difficult, but it's unnecessary, tedious work. Because of these constraints, there are two approaches for dealing with merging/squashing: Either you use history rewriting, in which case you will get multiple commits that represent the same changes. You would then rebase the second feature branch onto the squashed commit:

                                 F   feature2 (old)
                                /
               1---2---3---4---5   feature1 (old)
              /
-o---o---o---A---o---o---S   master
                          \
                           F'  feature2

Or you don't use history rewriting, in which case you might get extra merge commits:

                                 F   feature2
                                /
               1---2---3---4---5   feature1 (old)
              /                 \
-o---o---o---A---o---o-----------M   master

When feature2 and master are merged, the common ancestor will be commit 5. In both cases you will have some merging effort. This effort doesn't depend very much on which of the above two strategies you choose. But make sure that branches are short-lived, to limit how far they can drift from the master branch, and that you regularly merge master into your feature branch, or rebase the feature branch on master to keep the branches in sync. When working in a team, it is helpful to coordinate who is currently working on what. This helps to keep the number of features under development small, and can reduce the number of merge conflicts.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/365658", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/125671/" ] }
365,772
For example, suppose I have a class for other classes to extend: public class LoginPage { public String userId; public String session; public boolean checkSessionValid() { /* ... */ } } and some subclasses: public class HomePage extends LoginPage { } public class EditInfoPage extends LoginPage { } In fact, the subclasses do not have any methods to override, and I would not access a HomePage in a generic way, i.e. I would not do something like: for (int i = 0; i < loginPages.length; i++) { loginPages[i].doSomething(); } I just want to reuse the login page. But according to https://stackoverflow.com/a/53354 , I should prefer composition here, because I don't need the LoginPage interface, so I don't use inheritance here: public class HomePage { public LoginPage loginPage; } public class EditInfoPage { public LoginPage loginPage; } But the problem is that in this new version the line public LoginPage loginPage; is duplicated every time a new class is added. And if LoginPage needs a setter and getter, more code needs to be copied: public LoginPage loginPage; private LoginPage getLoginPage() { return this.loginPage; } private void setLoginPage(LoginPage loginPage) { this.loginPage = loginPage; } So my question is: is "composition over inheritance" violating the "DRY principle"?
Er wait you're concerned that repeating public LoginPage loginPage; in two places violates DRY? By that logic int x; can now only ever exist in one object in the entire code base. Bleh. DRY is a good thing to keep in mind but come on. Besides ... extends LoginPage is getting duplicated in your alternative so even being anal about DRY wont make sense of this. Valid DRY concerns tend to focus on identical behavior needed in multiple places being defined in multiple places such that a need to change this behavior ends up requiring a change in multiple places. Make decisions in one place and you will only need to change them in one place. It doesn't mean only one object can ever hold a reference to your LoginPage. DRY shouldn't be followed blindly. If you're duplicating because copy and paste is easier than thinking up a good method or class name then you're probably in the wrong. But if you want to put the same code in a different place because that different place is subject to a different responsibility and is likely to need to change independently then it's probably wise to relax your enforcement of DRY and let this identical behavior have a different identity. It's the same kind of thinking that goes into forbidding magic numbers. DRY isn't just about what the code looks like. It's about not spreading the details of an idea around with mindless repetition forcing maintainers to fix things using mindless repetition. It's when you try to tell yourself that the mindless repetition is just your convention that things are headed in a real bad way. What I think you are really trying to complain about is called boilerplate code. Yes, using composition rather than inheritance demands boilerplate code. Nothing gets exposed for free, you have to write code that exposes it. With that boilerplate comes state flexibility, the ability to narrow the exposed interface, to give things different names that are appropriate for their level of abstraction, good ol indirection, and you're using what you're composed of from the outside, not the inside, so you're facing it's normal interface. But yes, it's a lot of extra keyboard typing. As long as I can prevent the yo-yo problem of bouncing up and down an inheritance stack as I read the code it's worth it. Now it's not that I refuse to ever use inheritance. One of my favorite uses is to give exceptions new names: public class MyLoginPageWasNull extends NullPointerException{}
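To make the "boilerplate buys you a narrowed interface" point concrete, here is a minimal Java sketch that fleshes out the question's classes; the method bodies are invented just so the example compiles and runs:

    // Fleshing out the question's classes minimally, to show what the
    // composition route actually costs and buys.
    class LoginPage {
        private final String userId;
        private String session;

        LoginPage(String userId) { this.userId = userId; }

        boolean checkSessionValid() {
            return session != null && !session.isEmpty();
        }

        void setSession(String session) { this.session = session; }
    }

    class HomePage {
        private final LoginPage loginPage; // repeated per class: wiring, not a repeated decision

        HomePage(LoginPage loginPage) { this.loginPage = loginPage; }

        // Narrowed, renamed interface: HomePage decides what it exposes,
        // instead of inheriting every public member of LoginPage.
        boolean userIsLoggedIn() {
            return loginPage.checkSessionValid();
        }
    }

    public class CompositionDemo {
        public static void main(String[] args) {
            LoginPage login = new LoginPage("user-42");
            login.setSession("abc123");
            HomePage home = new HomePage(login);
            System.out.println(home.userIsLoggedIn());
        }
    }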
{ "source": [ "https://softwareengineering.stackexchange.com/questions/365772", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/248528/" ] }
365,829
OK, so the title is a little clickbaity but seriously I've been on a tell, don't ask (TDA) kick for a while. I like how it encourages methods to be used as messages in true object-oriented fashion. But this has a nagging problem that has been rattling about in my head. I have come to suspect that well-written code can follow OO principles and functional principles at the same time. I'm trying to reconcile these ideas and the big sticking point that I've landed on is return . A pure function has two qualities: Calling it repeatedly with the same inputs always gives the same result. This implies that it is immutable. Its state is set only once. It produces no side effects. The only change caused by calling it is producing the result. So, how does one go about being purely functional if you've sworn off using return as your way of communicating results? The tell, don't ask idea works by using what some would consider a side effect. When I deal with an object I don't ask it about its internal state. I tell it what I need to be done and it uses its internal state to figure out what to do with what I've told it to do. Once I tell it, I don't ask what it did. I just expect it to have done something about what it was told to do. I think of Tell, Don't Ask as more than just a different name for encapsulation. When I use return I have no idea what called me. I can't speak it's protocol, I have to force it to deal with my protocol. Which in many cases gets expressed as the internal state. Even if what is exposed isn't exactly state it's usually just some calculation performed on state and input args. Having an interface to respond through affords the chance to massage the results into something more meaningful than internal state or calculations. That is message passing . See this example . Way back in the day, when disk drives actually had disks in them, I was taught how annoying people consider functions that have out parameters. void swap(int *first, int *second) seemed so handy but we were encouraged to write functions that returned the results. So I took this to heart on faith and started following it. But now I see people building architectures where objects let how they were constructed control where they send their results. Here's an example implementation . Injecting the output port object seems a bit like the out parameter idea all over again. But that's how tell-don't-ask objects tell other objects what they've done. When I first learned about side effects I thought of it like the output parameter. We were being told not to surprise people by having some of the work happen in a surprising way, that is, by not following the return result convention. Now sure, I know there's a pile of parallel asynchronous threading issues that side effects muck about with but return is really just a convention that has you leave the result pushed on the stack so whatever called you can pop it off later. That's all it really is. What I'm really trying to ask: Is return the only way to avoid all that side effect misery and get thread safety without locks, etc. Or can I follow tell, don't ask in a purely functional way?
If a function doesn't have any side effects and it doesn't return anything, then the function is useless. It is as simple as that. But I guess you can use some cheats if you want to follow the letter of the rules and ignore the underlying reasoning. For example using an out parameter is strictly speaking not using a return. But it still does precisely the same as a return, just in a more convoluted way. So if you believe return is bad for a reason , then using an out parameter is clearly bad for the same underlying reasons. You can use more convoluted cheats. E.g. Haskell is famous for the IO monad trick where you can have side effects in practice, but still not strictly speaking have side effects from a theoretical viewpoint. Continuation-passing style is another trick, which well let you avoid returns at the price of turning your code into spaghetti. The bottom line is, absent silly tricks, the two principles of side-effect free functions and "no returns" are simply not compatible. Furthermore I will point out both of them are really bad principles (dogmas really) in the first place, but that is a different discussion. Rules like "tell, don't ask" or "no side effects" cannot be applied universally. You always have to consider the context. A program with no side effects is literally useless. Even pure functional languages acknowledge that. Rather they strive to separate the pure parts of the code from the ones with side-effects. The point of the State or IO monads in Haskell is not that you avoid side effects - because you can't - but that the presence of side effects is explicitly indicated by the function signature. The tell-dont-ask rule applies to a different kind of architecture - the style where objects in the program are independent "actors" communicating with each other. Each actor is basically autonomous and encapsulated. You can send it a message and it decides how to react to it, but you cannot examine the internal state of the actor from the outside. This means you cannot tell if a message changes the internal state of the actor/object. State and side effects are hidden by design .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/365829", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/131624/" ] }
366,058
Context In Clean Code , page 35, it says This implies that the blocks within if statements, else statements, while statements, and so on should be one line long. Probably that line should be a function call. Not only does this keep the enclosing function small, but it also adds documentary value because the function called within the block can have a nicely descriptive name. I completely concur, that makes a lot of sense. Later on, on page 40, it says about function arguments The ideal number of arguments for a function is zero (niladic). Next comes one (monadic), followed closely by two (dyadic). Three arguments (triadic) should be avoided where possible. More than three (polyadic) requires very special justification—and then shouldn’t be used anyway. Arguments are hard. They take a lot of conceptual power. I completely concur, that makes a lot of sense. Issue However, rather often I find myself creating a list from another list and I will have to live with one of two evils. Either I use two lines in the block , one for creating the thing, one for adding it to the result: public List<Flurp> CreateFlurps(List<BadaBoom> badaBooms) { List<Flurp> flurps = new List<Flurp>(); foreach (BadaBoom badaBoom in badaBooms) { Flurp flurp = CreateFlurp(badaBoom); flurps.Add(flurp); } return flurps; } Or I add an argument to the function for the list where the thing will be added to, making it "one argument worse". public List<Flurp> CreateFlurps(List<BadaBoom> badaBooms) { List<Flurp> flurps = new List<Flurp>(); foreach (BadaBoom badaBoom in badaBooms) { CreateFlurpInList(badaBoom, flurps); } return flurps; } Question Are there (dis-)advantages I am not seeing, which make one of them preferable in general? Or are there such advantages in certain situations; in that case, what should I look for when making a decision?
These guidelines are a compass, not a map. They point you in a sensible direction. But they can't really tell you in absolute terms which solution is “best”. At some point, you need to stop walking in the direction your compass is pointing, because you have arrived at your destination. Clean Code encourages you to divide your code into very small, obvious blocks. That is a generally good direction. But when taken to the extreme (as a literal interpretation of the quoted advice suggests), then you will have subdivided your code into uselessly small pieces. Nothing really does anything, everything just delegates. This is essentially another kind of code obfuscation. It is your job to balance “smaller is better” against “too small is useless”. Ask yourself which solution is simpler. For me, that is clearly the first solution as it obviously assembles a list. This is a well-understood idiom. It is possible to understand that code without having to look at yet another function. If it's possible to do better, it's by noting that “transform all elements from a list to another list” is a common pattern that can often be abstracted away, by using a functional map() operation. In C#, I think it's called Select. Something like this: public List<Flurp> CreateFlurps(List<BadaBoom> badaBooms) { return badaBooms.Select(badaBoom => CreateFlurp(badaBoom)).ToList(); }
{ "source": [ "https://softwareengineering.stackexchange.com/questions/366058", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/226041/" ] }
366,196
Developers create scripts to help in their work. For example, to run Maven with certain parameters, to kill unneeded background tasks that crop up in development, or to connect to a certain server. The scripts are not core build scripts nor are they used in our Continuous Integration server. What is the best way to manage them? To put them into a directory (maybe /scripts ) and check them into Git? To maintain them separately in some file server? The argument for treating them as source code is that they are source and can change. The argument for not doing it is that they are just auxiliary tools and that not all developers need any given script (e.g. Linux-specific scripts where some developers work on Windows).
Developer scripts also belong in version control, because these scripts usually depend on items that are themselves in version control, e.g. file paths. If these scripts are versioned they should also work for all developers, which avoids every developer writing his own set of scripts - a maintenance hell. In addition, bugfixes or improvements to these scripts are automatically rolled out to every developer via version control.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/366196", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14493/" ] }
366,235
My scenario is as follows. I am designing a system designed to receive data from various types of sensors, and convert and then persist it to be used by various front-end and analytics services later. I'm trying to design every service to be as independent as possible, but I'm having some trouble. The team has decided on a DTO we would like to use. The outward-facing services (sensor data recipients) will receive the data in their own unique way, then convert it to a JSON object (the DTO) and send it off to the Message Broker. Consumers of the messages will then know exactly how to read the sensor data messages. The problem is that I'm using the same DTO in a few different services. An update has to be implemented in multiple locations. Obviously, we've designed it in such a way that a few extra or missing fields in the DTO here and there are not much of an issue until the services have been updated, but it's still bugging me and makes me feel like I'm making a mistake. It could easily turn into a headache. Am I going about architecting the system wrong? If not, what are some ways around this, or at least to ease my worries?
My advice? Do not share these DTOs among the applications in any kind of library. Or at least don't do this right now. I know, it seems very counter-intuitive. You are duplicating code, right? But this is not a business rule, so you can be more flexible. The service that sends the DTO needs to be rigid in its message contract, like a REST API. The service can't change the DTO in a way that could break the other services that are already consuming the information from the DTO. When a new field is added to the DTO, you only update the other services that consume this DTO if they need the new field. Otherwise, forget it. Using JSON as the content type, you have the flexibility to create and send new attributes without breaking the code of the services that don't map these new fields in their current versions of the DTO. But if this situation is really bothering you, you can follow the Rule of Three: There are two "Rules of Three" in reuse: (a) It is three times as difficult to build reusable components as single use components, and (b) a reusable component should be tried out in three different applications before it will be sufficiently general to accept into a reuse library. So, try to wait a bit more before sharing this DTO among the services.
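To make the tolerant-reader idea concrete, here is a hedged C# sketch (the DTO shape and field names are invented; the answer does not prescribe any particular serializer). With System.Text.Json, JSON properties that the consumer's DTO does not declare are ignored by default, so a producer can add fields without breaking older consumers:

```csharp
using System;
using System.Text.Json;

// The consumer's (older) view of the sensor message.
public class SensorReadingDto
{
    public string SensorId { get; set; }
    public double Value { get; set; }
}

public static class TolerantReaderDemo
{
    public static void Main()
    {
        // A newer producer added "unit"; this consumer simply ignores it.
        string json = "{\"sensorId\":\"s-42\",\"value\":21.5,\"unit\":\"C\"}";

        var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
        SensorReadingDto reading = JsonSerializer.Deserialize<SensorReadingDto>(json, options);

        Console.WriteLine($"{reading.SensorId}: {reading.Value}");
    }
}
```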
{ "source": [ "https://softwareengineering.stackexchange.com/questions/366235", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/296624/" ] }
366,251
I am writing a GUI application. I want to implement the following structure: a project tree with nodes of different types and behavior (i.e. when right-clicking or selecting, there can be different menu options), and an editor window, which can be dynamically split vertically and horizontally to add more edit areas. Each edit area corresponds to one node of the project tree. I am using C++/Qt, but the problem is in the design rather than in programming languages and libraries. Currently, to implement the project tree I created an abstract tree node interface which contains a link to the parent and to the children. For each node of a specific type I make a new class. Because I am using Qt, I have a model, which acts as an intermediate object between the view and the actual tree. It seems correct that information from the visual representation can't leak into this tree. I have the following problems with the implementation: Is it right to use this tree as a holder of all my data? Can I use it to hold information about the objects I'm editing, or should I use an external holder for all data, but keep a link to it from my nodes? In the case of holding information about objects, information about the visual representation is already passed to the objects (they know the structure of the tree). In the case of a link to an external holder, I can't understand how to create such a link and support dynamic creation/deletion of nodes. When I click on a view item, I want somehow to make the data inside the node open for editing in the selected edit area. I can't figure out how to do it without dynamic_cast'ing to the specific type of node and passing that internal information to the currently selected edit area. Another bad-looking approach which came to my mind is to introduce a virtual function in the node, but the node can't implement the logic of this function, because it relates to the visual representation. Example of the tree: Models are entities which can be edited by their own vector graphics editor; Materials are table-like key-value properties, where each key can be attached to some primitives in the graphics editor.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/366251", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
366,355
My coding style for nested function calls is the following: var result_h1 = H1(b1); var result_h2 = H2(b2); var result_g1 = G1(result_h1, result_h2); var result_g2 = G2(c1); var a = F(result_g1, result_g2); I have recently changed to a department where the following coding style is very much in use: var a = F(G1(H1(b1), H2(b2)), G2(c1)); The result of my way of coding is that, in case of a crashing function, Visual Studio can open the corresponding dump and indicate the line where the problem occurs (I'm especially concerned about access violations). I fear that, in case of a crash due to the same problem programmed in the one-liner style, I won't be able to know which function has caused the crash. On the other hand, the more processing you put on a line, the more logic you get on one page, which enhances readability. Is my fear correct or am I missing something, and in general, which is preferred in a commercial environment? Readability or maintainability? I don't know if it's relevant, but we are working in C++ (STL) / C#.
If you felt compelled to expand a one liner like a = F(G1(H1(b1), H2(b2)), G2(c1)); I wouldn't blame you. That's not only hard to read, it's hard to debug. Why? It's dense Some debuggers will only highlight the whole thing at once It's free of descriptive names If you expand it with intermediate results you get var result_h1 = H1(b1); var result_h2 = H2(b2); var result_g1 = G1(result_h1, result_h2); var result_g2 = G2(c1); var a = F(result_g1, result_g2); and it's still hard to read. Why? It solves two of the problems and introduces a fourth: It's dense Some debuggers will only highlight the whole thing at once It's free of descriptive names It's cluttered with non-descriptive names If you expand it with names that add new, good, semantic meaning, even better! A good name helps me understand. var temperature = H1(b1); var humidity = H2(b2); var precipitation = G1(temperature, humidity); var dewPoint = G2(c1); var forecast = F(precipitation, dewPoint); Now at least this tells a story. It fixes the problems and is clearly better than anything else offered here but it requires you to come up with the names. If you do it with meaningless names like result_this and result_that because you simply can't think of good names then I'd really prefer you spare us the meaningless name clutter and expand it using some good old whitespace: int a = F( G1( H1(b1), H2(b2) ), G2(c1) ) ; It's just as readable, if not more so, than the one with the meaningless result names (not that these function names are that great). It's dense Some debuggers will only highlight the whole thing at once It's free of descriptive names It's cluttered with non-descriptive names When you can't think of good names, that's as good as it gets. For some reason debuggers love new lines so you should find that debugging this isn't difficult. If that's not enough, imagine G2() was called in more than one place and then this happened: Exception in thread "main" java.lang.NullPointerException at composition.Example.G2(Example.java:34) at composition.Example.main(Example.java:18) I think it's nice that since each G2() call would be on its own line, this style takes you directly to the offending call in main. So please don't use problems 1 and 2 as an excuse to stick us with problem 4. Use good names when you can think of them. Avoid meaningless names when you can't. Lightness Races in Orbit's comment correctly points out that these functions are artificial and have dead poor names themselves. So here's an example of applying this style to some code from the wild: var user = db.t_ST_User.Where(_user => string.Compare(domain, _user.domainName.Trim(), StringComparison.OrdinalIgnoreCase) == 0) .Where(_user => string.Compare(samAccountName, _user.samAccountName.Trim(), StringComparison.OrdinalIgnoreCase) == 0).Where(_user => _user.deleted == false) .FirstOrDefault(); I hate looking at that stream of noise, even when word wrapping isn't needed. Here's how it looks under this style: var user = db .t_ST_User .Where( _user => string.Compare( domain, _user.domainName.Trim(), StringComparison.OrdinalIgnoreCase ) == 0 ) .Where( _user => string.Compare( samAccountName, _user.samAccountName.Trim(), StringComparison.OrdinalIgnoreCase ) == 0 ) .Where(_user => _user.deleted == false) .FirstOrDefault() ; As you can see, I've found this style works well with the functional code that's moving into the object oriented space. If you can come up with good names to do that in intermediate style then more power to you. Until then I'm using this.
But in any case, please, find some way to avoid meaningless result names. They make my eyes hurt.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/366355", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/250257/" ] }
366,572
I've been reading/watching a lot of Robert C. Martin content. I've come across him saying SQL is unnecessary because of solid state drives. When I search other sources to back this up I get a bunch of random articles describing the difference in SQL performance between hard drives and solid state drives (which is related but not what I'm trying to research). Ultimately, I do not understand what he's trying to get at. Is he saying replace SQL with No-SQL technologies? Is he saying store data in files in a file system? Or does he just want people to stop using SQL/Relational Databases because of SQLi attacks? I fear I'm missing the point he's trying to make. I will provide some links here so you can read straight from his mind: Bobby Tables Clean Architecture Lecture First, he states that SQL should be removed from the system entirely. The solution. The only solution. Is to eliminate SQL from the system entirely. If there is no SQL engine, then there can be no SQLi attacks. And although he talks about replacing SQL with an API, I do NOT think he means putting SQL behind an API because of that previous quote and what he says earlier in the article. Frameworks don’t handle the issue;... Side note: In saying SQL, I'm pretty sure Robert means most relational databases. Maybe not all but most. In any case, most people are using SQL anyways. So... If SQL is not being used to persist data, then what are we supposed to use? Before answering that, I should also note: Robert emphasizes that solid state drives should change the tools that we use to persist data. Søren D. Ptæus's answer points this out. I must also respond to the "but data integrity" group. Upon some further research, Robert says we should use transactional databases like Datomic. Then CRUD turns into CR (create and read) and SQL transactions go away altogether. Data integrity is of course important. I can't find a question that encompasses all of this. I guess I'm looking for alternatives that match Robert's guidelines. Datomic is one but is that it? What other options match these guidelines? And do they actually work better with solid state drives?
Bob Martin's opinion is just that; one man's opinion. A programmer is expected to understand the system he is writing well enough to exercise reasonable care about its security and performance. That means that, if you're talking to a SQL database, you do what the Bobby Tables website says to do: you sanitize your input data. It means that you put your SQL database on a machine that promises adequate performance. There are very well-known and well-understood ways to do these things, and while they don't guarantee absolute security or ideal performance, neither does anything else. The assertion that we don't need SQL anymore because we now have SSDs is just specious. SQL wasn't invented because high-speed hard drives didn't exist yet; it was invented because we needed an industry-standard way to express data-retrieval concepts. Relational database systems have many other qualities besides speed and security that make them ideal for business operations; in particular, ACID. Data integrity is at least as important as speed or security, and if you don't have it, then what's the point of securing bad data or retrieving it as quickly as possible? Before you take one man's hysteria as gospel, I suggest you learn about application and system security and performance on their own terms, not by reading random Internet articles. There's much more to security, performance and robust system design than simply "avoid this technology." We don't ban kitchen knives because a few hapless individuals manage to accidentally cut their fingers with them.
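As a concrete form of the "well-known and well-understood ways" of handling untrusted input, the usual technique is parameterization rather than string concatenation. A minimal C# sketch (the connection is assumed to be open, and the table and column names are made up for illustration):

```csharp
using Microsoft.Data.SqlClient;   // or System.Data.SqlClient on older stacks

public static class CustomerLookup
{
    public static string FindName(SqlConnection connection, int customerId)
    {
        // The user-supplied value is bound as a parameter, so it never becomes
        // part of the SQL text and cannot change the shape of the statement.
        using var command = new SqlCommand(
            "SELECT Name FROM Customers WHERE Id = @id", connection);
        command.Parameters.AddWithValue("@id", customerId);

        return command.ExecuteScalar() as string;
    }
}
```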
{ "source": [ "https://softwareengineering.stackexchange.com/questions/366572", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/261944/" ] }
366,592
Downcasting means casting from a base class (or interface) to a subclass or leaf class. An example of a downcast might be if you cast from System.Object to some other type. Downcasting is unpopular, maybe a code smell: Object Oriented doctrine is to prefer, for example, defining and calling virtual or abstract methods instead of downcasting. What, if any, are good and proper use cases for downcasting? That is, in what circumstance(s) is it appropriate to write code that downcasts? If your answer is "none", then why is downcasting supported by the language?
Downcasting is unpopular, maybe a code smell I disagree. Downcasting is extremely popular ; a huge number of real-world programs contain one or more downcasts. And it is not maybe a code smell. It is definitely a code smell. That's why the downcasting operation is required to be manifest in the text of the program . It's so that you can more easily notice the smell and spend code review attention on it. in what circumstance[s] is it appropriate to write code which downcasts? In any circumstance where: you have a 100% correct knowledge of a fact about the runtime type of an expression that is more specific than the compile-time type of the expression, and you need to take advantage of that fact in order to use a capability of the object not available on the compile-time type, and it is a better use of time and effort to write the cast than it is to refactor the program to eliminate either of the first two points. If you can cheaply refactor a program so that either the runtime type can be deduced by the compiler, or to refactor the program so that you don't need the capability of the more derived type, then do so. Downcasts were added to the language for those circumstances where it is hard and expensive to thus refactor the program. why is downcasting supported by the language? C# was invented by pragmatic programmers who have jobs to do, for pragmatic programmers who have jobs to do. The C# designers are not OO purists. And the C# type system is not perfect; by design it underestimates the restrictions that can be placed on the runtime type of a variable. Also, downcasting is very safe in C#. We have a strong guarantee that the downcast will be verified at runtime and if it cannot be verified then the program does the right thing and crashes. This is wonderful; it means that if your 100% correct understanding of type semantics turns out to be 99.99% correct, then your program works 99.99% of the time and crashes the rest of the time, instead of behaving unpredictably and corrupting user data 0.01% of the time. EXERCISE: There is at least one way to produce a downcast in C# without using an explicit cast operator. Can you think of any such scenarios? Since these are also potential code smells, what design factors do you think went into the design of a feature that could produce a downcast crash without having a manifest cast in the code?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/366592", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/19237/" ] }
366,655
Sometimes I need for loops which need a break like this: for(int i=0;i<array.length;i++){ //some other code if(condition){ break; } } I feel uncomfortable with writing if(condition){ break; } because it consumes 3 lines of code. And I found the loop can be rewritten as: ↓ for(int i=0;i<array.length && !condition;i++){ //some other code } So my question is, is it good practice to move the condition into the loop condition to reduce lines of code if possible?
Those two examples you gave are not functionally equivalent. In the original, the condition test is done after the "some other code" section, whereas in the modified version, it is done first, before the loop body runs. Code should never be rewritten with the sole purpose of reducing the number of lines. Obviously it's a nice bonus when it works out that way, but it should never be done at the expense of readability or correctness.
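A minimal C# sketch of the difference (the flag value is chosen purely to expose it): when the condition is already true as the loop starts, the original form still runs "some other code" once before hitting the break, while the rewritten form never enters the body at all.

```csharp
using System;

public static class LoopEquivalenceDemo
{
    public static void Main()
    {
        int[] array = { 1, 2, 3 };
        bool condition = true;   // already true before either loop starts

        // Original style: the body runs once before the break is reached.
        for (int i = 0; i < array.Length; i++)
        {
            Console.WriteLine($"original body ran for i={i}");
            if (condition)
            {
                break;
            }
        }

        // Rewritten style: the condition is part of the loop test, so the body
        // is never entered.
        for (int i = 0; i < array.Length && !condition; i++)
        {
            Console.WriteLine($"rewritten body ran for i={i}");
        }
    }
}
```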
{ "source": [ "https://softwareengineering.stackexchange.com/questions/366655", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/248528/" ] }
366,938
I employ a lot of meta-programming to avoid repetitive tasks and build safer-to-use abstractions. I recently moved to a new job where I am working in a larger team and this worries some of my colleagues, because they do not comprehend it. I always try to leverage the full potential of the language, but some (not all) of my colleagues perceive that as a risk (some welcome the approach). I agree it is a problem to write code that nobody else on the team can comprehend. On the other hand we are all professional C++ developers and I think we should aspire to a higher standard than writing C with classes. My question is, who is right, what should I do? Clarification: Trying to leverage the full potential of the language, does not mean I throw TMP at every problem. C++ is a toolbox and to me C++ proficiency is about being able to use all the tools from the box and about picking the right one for a particular job.
Metaprogramming is OK. What you are trying to do is not OK. I use metaprogramming all the time in my job. It's a powerful tool which can be used to do a lot of things in a more readable and maintainable way. It's also one of the harder to comprehend styles of programming out there, so it really needs to earn its keep. I like it when I can reduce 1000 lines of code to 50, but I try to limit it as such. The issue is not metaprogramming, but this: On the other hand we are all professional C++ developers and I think we should aspire to a higher standard than writing C with classes. This is where you get in trouble. You have an opinion. It's fine to have an opinion that metaprogramming is good. It's fine to have an opinion that we should all aspire to be better C++ developers. It is not fine to compel both your colleagues and future hires who will have to maintain the code you wrote to agree with your opinion. That is your boss's job. Your boss is the one who should be concerned with making sure that your code is maintainable in the long run. They (hopefully) have much more business experience, because believe me when I say it's a business decision, not an ideological decision. It is fine to want to metaprogram. It is fine to want to teach others to metaprogram. But understand that it's also fine for others to choose not to learn to metaprogram, and that will be true until you are in a position of power. (and, as an industry secret: when you finally are a lead developer, in a position of power, you're not in a position of power at all. There's someone controlling the purse strings who is in power). If you want to encourage them to be okay with metaprogramming, start small. Start with a single enable_if that makes the API easier to read. Then comment the daylights out of it. Then maybe find one case where a template metafunction turns 10 large repetitive classes into 1 class with 10 little helpers. Comment the heck out of it. Get feedback. Find what people think about it. Be fine if they tell you not to do it. Find a niche where metaprogramming earns its keep so thoroughly that your colleagues all (begrudgingly) agree that it was the right tool for the job. As a short story, I wrote a beautiful library once, using extensive metaprogramming. It did exactly what we needed at the time, when no other approach could get remotely close. It actually changed the direction of the application I was writing in. But it was metaprogramming. Only one or two other people in my entire company could read it. Later, my colleague took another stab at the problem. Instead of leveraging metaprogramming to precisely do what was needed, he worked with leadership to relax the constraints that had been put on the problem such that metaprogramming was not needed. Perhaps more accurately, metaprogramming was less needed. He was able to confine it to what metaprogramming does best. The resulting library is now in a position to be used in a remarkably wide market, and that's certainly in no small part due to the fact that the new code can be maintained by a far wider range of developers. I'm proud of paving the way with metaprogramming, but it's my colleague's code which is going to reach the wider audience, and there are good reasons for that.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/366938", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/297670/" ] }
367,094
Junior developer here. I am currently working alone on a web application for a big client of my company. I started last month. The client wants at least 25% comments in each of its software projects. I checked the code of previous applications and here are my observations: each file begins with a comment block (package, date last updated, name of my company & copyright) all variables are commented with their names // nameOfCustomer public String nameOfCustomer all getters and setters are commented very few useful comments It seems like developers just place as many comments as they can to hit that 25% threshold, regardless of quality and usefulness. My company tells me that "we do it as the client wants it". I did not speak directly with the client about this. Here are my arguments so far: useless lines to read and to write (waste of time) comments are sometimes not updated (source of confusion) developers are less likely to use or trust genuinely useful comments What is your advice on this subject? How should I handle the situation?
All the other answers and comments here really threw me for a loop, because they are so counter to my first reaction and so counter to the attitude I've witnessed in my coworkers. So I'd like to describe an alternate approach, if only for the sake of being the dissenting voice . The guiding principle of this answer is, "Delight the customer". Delighting the customer does not just mean meeting their expectations; it means understanding their requests so deeply that you can interpret what they say in the way they mean it, and delivering above and beyond what they ask for. Other answers appear to be guided by the principle of malicious compliance instead, which I find abhorrent; and besides is questionable business practice as it's a bad way to get repeat customers. To me, when I hear the client say, "I want 25% comments", that is the beginning of a dialog. For me it is clear that the implication here is "I want a lot of descriptive text, so that newcomers to this codebase can get up and running quickly", not "I want you to add randomness in a certain syntactic category" as other answers appear to be taking it. And I would take that request seriously, and intend to write a lot of descriptive, helpful comments, guiding a newcomer to the structure of the code, pointing out surprising engineering decisions and outlining the reasoning that went into them, and giving high-level English descriptions of complicated code sections (even if they don't have any surprises). This intention and understanding is the starting point of the discussion -- that is before we even start talking. To me the implication of the request is so clear that it doesn't even need that clarification; but if to you it is unclear you should of course check in with them! Okay, so where does the dialog go if that's the starting point? The next part of the dialog goes like this: I would expect this to be a serious additional effort, possibly in a second phase of the project, that is above and beyond the production of the tool they care about working. It may be several minutes of discussion to discuss this process and why it is additional work, but I'm going to omit it here because as a professional programmer I expect you know how hard it is to make good comments. "A serious additional effort" means we may need a longer time budget and a greater monetary budget; or we may need to reduce the feature budget; or we may need to compromise on comment quality and quantity. This part is going to be a bit of a negotiation. But in my opinion, you should be very up-front about the costs of doing this extra work, and make sure that it is such an important feature to the client that they are willing to take on these costs. And if they are -- great! You get extra time and money, and they get high-quality comments. Everybody wins. And if it turns out that the commenting feature is not so important to them that they're willing to lose the ability to flurgle widgets or willing to let the deadline slip to late Granuary, 20x6, then everybody wins again: they get the product they want, and you don't have to spend the extra effort it takes to create high quality comments. Here is where I think the dialog should not go: Don't threaten the client with low-quality comments. Let them help you choose the level of effort they want expended and that they are willing to pay for. Don't promise them 25% comments and then inform them that you intend to deliver on this promise by autogenerating randomness after the project is built. Don't hide your plans. 
Don't promise them 25% comments, and then autogenerate randomness without telling them that's what you're going to do. When they notice (not if), you both lose big-time: they are unhappy with the product they got, and you get negative word-of-mouth. Don't try to convince them they don't want comments. They clearly want comments. Discuss tradeoffs of various approaches: yes! Discuss alternative ways of making the codebase newcomer friendly: yes! Tell them they don't know what they want: eh, no. You want to work with them to get them what they want; so understand what that is and figure out how best to deliver that to them in a budget they approve of, prioritizing the features they care about most if the resources they have are insufficient. Don't make excuses about how hard comments are to write. Writing code is hard; debugging code is hard; writing comments is hard. If it was easy, they wouldn't be hiring you. Just skip the complaints and get straight to the point they care about, namely how the extra effort required affects them.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/367094", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/297877/" ] }
367,116
There is a common argument about multiple variable initialisation in a one-liner, which goes: Consider for example int i, j = 1; which might lead some people to mistakenly believe both variables are being initialized. We could argue that someone should know the syntax of his language well enough not to be mistaken about that. Another argument could be that as developers we learn so many languages, we can mix up the specifics of different languages. However, for that very specific case I'm wondering the following: does there even exist a language where the very syntax i, j = 1 initializes both variables? If not, then that argument doesn't apply.
I think not, but that's not the point. The point is that i, j = 0 is very easily mistaken for i = j = 0 , which does initialize both. Clarity is the most important requirement on source code next to correctness, and the fact that this question even arises proves that the clarity is suboptimal.
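A small C# illustration (C# chosen arbitrarily; Java behaves the same way for local variables): the comma form declares both variables but initializes only the last one, while the chained assignment it is mistaken for really does set both.

```csharp
public static class InitialisationDemo
{
    public static void Main()
    {
        int i, j = 0;                    // declares i and j, but only j gets a value
        System.Console.WriteLine(j);

        // System.Console.WriteLine(i);  // compile-time error in C# (and in Java):
        //                               // use of an unassigned local variable

        int c, d;
        c = d = 0;                       // the chained form that "i, j = 0" is mistaken for
        System.Console.WriteLine(c + d); // 0
    }
}
```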
{ "source": [ "https://softwareengineering.stackexchange.com/questions/367116", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/216452/" ] }
367,191
Reading "Code Complete 2" in a Quality of Requirements paragraph I found this: Are acceptable trade-offs between competing attributes specified — for example, between robustness and correctness? (the above is an item in a large checklist for checking the quality of the requirements) So, I found a lot of definitions of Robustness and Correctness on the web, in academic books, etc. For example: In the "Object Oriented Software Construction, 2nd Edition, Bertrand Meyer, Prentice-Hall, 1997" book: Correctness: The degree to which a system is free from [defects] in its specification, design, and implementation. Robustness: The degree to which a system continues to function in the presence of invalid inputs or stressful environmental conditions. Despite this, it's not clear why and in which situations these two might be in conflict. My question is: why are these two attributes in competition?
There are many situations in which these two might be in conflict. For instance, robustness can involve resilience under heavy load. If an approximate (i.e., incorrect) response to a request can be computed much faster than an exact (correct) response, it's important to know whether the system should deliver an approximate result or fail to deliver altogether.
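A hedged C# sketch of the shape of that trade-off (the service, the load threshold, and the fallback are all invented for illustration):

```csharp
public class RecommendationService
{
    // Hypothetical measure of how loaded the system currently is (0.0 .. 1.0).
    public double CurrentLoad { get; set; }

    public string GetRecommendation(string userId)
    {
        // Robustness-first: under heavy load, return a cheap approximation instead
        // of timing out or failing. Correctness-first would always compute the exact
        // answer, even at the cost of failing under load. The requirements have to
        // state which trade-off is acceptable.
        if (CurrentLoad > 0.9)
        {
            return "popular-item";                    // approximate, possibly "incorrect"
        }
        return ComputeExactRecommendation(userId);    // correct, but expensive
    }

    private static string ComputeExactRecommendation(string userId)
    {
        // Stand-in for an expensive, exact computation.
        return $"personalized-item-for-{userId}";
    }
}
```

A caller just constructs the service and calls GetRecommendation; the useful part is that the fallback branch makes the requirement trade-off explicit and reviewable.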
{ "source": [ "https://softwareengineering.stackexchange.com/questions/367191", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/197718/" ] }
367,256
I want to document my code such that there is minimal need to read and browse the code again months later. I know that there are different types of documentation (in the source code and outside of it, sequence diagrams, and so on). I just want to know an efficient way to document my code so that, when I come back to it a few months later, I spend less time reading the code and understanding the code flow.
IMO the best documentation is the documentation you don't actually need. I also hate writing documentation and comments. With that being said: Pick readable, self-describing names. Don't use n, but instead numberOfItemsFound, for example. Don't shy away from storing parts of a calculation in a constant variable rather than pushing everything into one line. Move partial tasks from branches into their own (inline) functions if you're reusing them or if the parent function becomes long and tedious to follow. Be more verbose, and only favor optimization over readability where it's really required.
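A before/after sketch of that naming advice in C# (the domain and names are hypothetical, not from the question); the second version needs no comment because the names and the intermediate variable carry the explanation:

```csharp
using System.Collections.Generic;

public class Measurement
{
    public double Value { get; set; }
}

public static class ReportBuilder
{
    // Before: terse names force the reader to reconstruct the intent.
    public static int Cnt(List<Measurement> xs, double t)
    {
        int n = 0;
        foreach (var x in xs) { if (x.Value > t) n++; }
        return n;
    }

    // After: self-describing names document the intent, no comment required.
    public static int CountMeasurementsOverThreshold(
        List<Measurement> measurements, double alertThreshold)
    {
        int numberOfItemsFound = 0;
        foreach (var measurement in measurements)
        {
            bool exceedsThreshold = measurement.Value > alertThreshold;
            if (exceedsThreshold)
            {
                numberOfItemsFound++;
            }
        }
        return numberOfItemsFound;
    }
}
```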
{ "source": [ "https://softwareengineering.stackexchange.com/questions/367256", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/150675/" ] }
367,264
In my team we have been cleaning up a lot of old stuff in a big monolithic project (whole classes, methods, etc.). During those cleaning tasks I was wondering if there is a kind of annotation or library fancier than the usual @Deprecated. This @FancyDeprecated should prevent the build of the project from succeeding if you haven't cleaned up old unused code after a particular date has passed. I have been searching the Internet and didn't find anything that has the capabilities described below: it should be an annotation, or something similar, to place in the code you intend to delete before a particular date; before that date the code will compile and everything will work normally; after that date the code will not compile and it will give you a message warning you about the problem. I think I am searching for a unicorn... Is there any similar technology for any programming language? As a plan B I am thinking of the possibility of doing the magic with some unit tests of the code that is intended to be removed, tests that start to fail at the "deadline". What do you think about this? Any better idea?
This would constitute a feature known as a time bomb. DON'T CREATE TIME BOMBS. Code, no matter how well you structure and document it, will turn into an ill-understood near-mythical black box if it lives beyond a certain age. The last thing anyone in the future needs is yet another strange failure mode that catches them totally by surprise, at the worst possible time, and without an obvious remedy. There is absolutely no excuse for intentionally producing such a problem. Look at it this way: if you're organized and aware enough of your code base that you care about obsolescence and follow through on it, then you don't need a mechanism within the code to remind you. If you're not, chances are that you are also not up to date on other aspects of the code base, and will probably be unable to respond to the alarm promptly and correctly. In other words, time bombs serve no good purpose for anyone. Just Say No!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/367264", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/252332/" ] }
367,344
I'm a junior software developer and I was wondering when the best time would be to optimize software for better performance (speed). Assuming the software is not extremely large and complex to manage, is it better to spend more time at the beginning optimizing it, or should I just develop software that executes all functionality correctly and then proceed to optimize it for better performance?
The number one thing should always and forever be readability. If it's slow but readable, I can fix it. If it's broken but readable, I can fix it. If it's unreadable, I have to ask someone else what this was even supposed to do. It is remarkable how performant your code can be when you were only focused on being readable. So much so I generally ignore performance until given a reason to care. That shouldn't be taken to mean I don't care about speed. I do. I've just found that there are very few problems whose solutions actually are faster when made hard to read. Only two things take me out of this mode: When I see a chance at a full blown big O improvement, even then only when n is big enough that anyone would care. When I have tests that show real performance problems. Even with decades of experience I still trust the tests more than my math. And I'm good at math. In any case, avoid analysis paralysis by making yourself think you shouldn't try a solution because it might not be the fastest. Your code will actually benefit if you try multiple solutions because making the changes will force you to use a design that makes it easy to change. A flexible code base can be made faster later where it really needs it. Choose flexible over speed and you can choose the speed you need.
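One hedged C# example of the kind of full-blown big-O improvement worth taking even before any profiling (the types and data are invented): replacing a linear scan inside a loop with a set lookup keeps the code just as readable while changing O(n*m) into roughly O(n+m).

```csharp
using System.Collections.Generic;

public static class OrderMatcher
{
    // O(n * m): perfectly readable, but scales badly once both lists grow.
    public static List<string> ActiveOrderIdsSlow(List<string> orderIds, List<string> activeIds)
    {
        var result = new List<string>();
        foreach (var id in orderIds)
        {
            if (activeIds.Contains(id))   // linear scan on every iteration
            {
                result.Add(id);
            }
        }
        return result;
    }

    // Roughly O(n + m): same readability, far better scaling.
    public static List<string> ActiveOrderIdsFast(List<string> orderIds, List<string> activeIds)
    {
        var active = new HashSet<string>(activeIds);
        var result = new List<string>();
        foreach (var id in orderIds)
        {
            if (active.Contains(id))      // hash lookup, effectively constant time
            {
                result.Add(id);
            }
        }
        return result;
    }
}
```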
{ "source": [ "https://softwareengineering.stackexchange.com/questions/367344", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/298234/" ] }
367,965
I started writing some unit tests for my current project. I don't really have experience with it though. I first want to completely "get it", so I am currently using neither my IoC framework nor a mocking library. I was wondering if there is anything wrong with providing null arguments to objects' constructors in unit tests. Let me provide some example code: public class CarRadio {...} public class Motor { public void SetSpeed(float speed){...} } public class Car { public Car(CarRadio carRadio, Motor motor){...} } public class SpeedLimit { public bool IsViolatedBy(Car car){...} } Yet Another Car Code Example(TM), reduced to only the parts important to the question. I now wrote a test something like this: public class SpeedLimitTest { public void TestSpeedLimit() { Motor motor = new Motor(); motor.SetSpeed(10f); Car car = new Car(null, motor); SpeedLimit speedLimit = new SpeedLimit(); Assert.IsTrue(speedLimit.IsViolatedBy(car)); } } The test runs fine. SpeedLimit needs a Car with a Motor in order to do its thing. It is not interested in a CarRadio at all, so I provided null for that. I am wondering if an object providing correct functionality without being fully constructed is a violation of SRP or a code smell. I have this nagging feeling that it does, but speedLimit.IsViolatedBy(motor) doesn't feel right either - a speed limit is violated by a car, not a motor. Maybe I just need a different perspective for unit tests vs. working code, because the whole intention is to test only a part of the whole. Is constructing objects with null in unit tests a code smell?
In the case of the example above, it is reasonable that a Car can exist without a CarRadio . In which case, I'd say that not only is it acceptable to pass in a null CarRadio sometimes, I'd say that it's obligatory. Your tests need to ensure that the implementation of the Car class is robust, and does not throw null pointer exceptions when no CarRadio is present. However, let's take a different example - let's consider a SteeringWheel . Let's assume that a Car has to have a SteeringWheel , but the speed test doesn't really care about it. In this case, I wouldn't pass a null SteeringWheel as this is then pushing the Car class into places where it isn't designed to go. In this case, you'd be better off creating some sort of DefaultSteeringWheel which (to continue the metaphor) is locked in a straight line.
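A hedged C# sketch of that distinction, extending the question's example with the answer's SteeringWheel metaphor (the interface, the default implementation, and the speed limit value are all invented here): pass null for the genuinely optional collaborator, and a harmless default for the mandatory one.

```csharp
public class CarRadio { }

public interface ISteeringWheel
{
    double Angle { get; }
}

// A "locked straight ahead" stand-in for tests that don't care about steering.
public class DefaultSteeringWheel : ISteeringWheel
{
    public double Angle => 0.0;
}

public class Motor
{
    public float Speed { get; private set; }
    public void SetSpeed(float speed) => Speed = speed;
}

public class Car
{
    private readonly CarRadio radio;          // optional: may legitimately be null
    private readonly ISteeringWheel wheel;    // mandatory
    private readonly Motor motor;             // mandatory

    public Car(CarRadio radio, ISteeringWheel wheel, Motor motor)
    {
        this.radio = radio;
        this.wheel = wheel ?? throw new System.ArgumentNullException(nameof(wheel));
        this.motor = motor ?? throw new System.ArgumentNullException(nameof(motor));
    }

    public float Speed => motor.Speed;
}

public class SpeedLimit
{
    private readonly float limit = 5f;
    public bool IsViolatedBy(Car car) => car.Speed > limit;
}

public static class SpeedLimitTest
{
    public static void Main()
    {
        var motor = new Motor();
        motor.SetSpeed(10f);

        // null CarRadio: fine, because a radio is genuinely optional.
        // DefaultSteeringWheel: a cheap stand-in for a collaborator the Car requires.
        var car = new Car(null, new DefaultSteeringWheel(), motor);

        System.Console.WriteLine(new SpeedLimit().IsViolatedBy(car));   // True
    }
}
```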
{ "source": [ "https://softwareengineering.stackexchange.com/questions/367965", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/226041/" ] }