source_id | question | response | metadata
---|---|---|---|
401,797 | In my job, I am tasked with the responsibility of improving code quality. To meet this responsibility I often pair program with developers and conduct sessions on design principles and design patterns. I was surprised by a developer who said he has to violate either the Single Responsibility Principle or YAGNI. The Single Responsibility Principle states that every class or module should have one reason to change. To restate it for clarity: a module should be responsible to one, and only one, actor. In my understanding, the reason we follow the Single Responsibility Principle is this: if a class is responsible to two actors, and one actor drives a change in that class, there is a possibility of unintentionally changing the requirements of the other actor. YAGNI is an acronym for "You ain't gonna need it". This extreme programming principle says to do the simplest thing that could possibly work. I often leaned on YAGNI to make my code simpler by removing needless modularization. Conflict between SRP and YAGNI: We had a workflow implemented in the system, and there was a need to collect data about the usage of the workflow. We needed to find out the percentage distribution of parameters used in the workflow. I explained that the data collection has to be in another class (and we could possibly use the observer or decorator pattern) and not in the class where the workflow is implemented, as the two requirements are driven by different actors in the system. The workflow serves the end user and the data collection serves product management. My colleague wanted to log the parameter directly from the class where the workflow is implemented. He told me this is the simplest thing to do to get it working. I am nearly certain I have to follow SRP in this scenario. And I am following SRP (implementing in 2 classes) because if there is a change in the workflow, the probability of accidentally modifying the data collection is low, and when there is a change in the data collection, the probability of accidentally modifying the workflow is also low. But when I explain about the possible change in the workflow or the data collection, he tells me "You ain't gonna need it". Any suggestion on how this could be explained? | YAGNI means to avoid investing effort into code changes for hypothetical requirements which may arrive later, and instead focus on the requirements one has now. But this is not restricted to functional requirements - as long as one does not create "use-once-and-then-throw-away" software, there is always the non-functional requirement of keeping code readable, understandable and evolvable. And separating responsibilities is one of the major tools to achieve that goal. Hence I interpret the YAGNI principle in such a situation as a recommendation for not separating responsibilities before the benefits of the SRP become visible, which is actually as soon as the code becomes convoluted. And that usually happens very quickly when one tries to implement different business requirements in one class. I would tolerate it if I had to make two or three small extensions to the "class where the workflow is implemented" to add this logging requirement. But then my pain level would probably be reached, and I would start thinking "heck, can we refactor this logging mechanism, or at least the data collection, out of the workflow class to a better place?".
So instead of telling your devs "create a separate data collection class right from the start, just in case" (which makes your request prone to the YAGNI counter-argument), tell them "refactor to a separate data collection class as soon as different responsibilities become apparent and it helps to make the code clearer". That should be the justification one needs to apply the SRP not for some unknown requirement in the future, but for the requirement of keeping the code understandable now. Of course, the threshold at which code is perceived as convoluted, and at which the SRP could be applied to fix this, may vary from one dev to another, but that is a balance your team has to find, and one where only code reviews can help. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/401797",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/334066/"
]
} |
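As an illustration of the refactoring the answer above recommends, here is a minimal C# sketch that separates workflow execution from usage-data collection with a simple event. All names (OrderWorkflow, UsageStatistics, StepExecuted) are hypothetical stand-ins for the original poster's code, not a definitive design.

```csharp
using System;
using System.Collections.Generic;

// The workflow serves the end user; it only announces what happened.
public class OrderWorkflow
{
    public event Action<string> StepExecuted;

    public void Execute(string parameter)
    {
        // ... actual workflow logic would go here ...
        StepExecuted?.Invoke(parameter); // notify observers without knowing who listens
    }
}

// The data collection serves product management; it changes independently.
public class UsageStatistics
{
    private readonly Dictionary<string, int> _counts = new Dictionary<string, int>();

    public void Subscribe(OrderWorkflow workflow) =>
        workflow.StepExecuted += p => _counts[p] = _counts.GetValueOrDefault(p) + 1;

    public IReadOnlyDictionary<string, int> Counts => _counts;
}
```

With this split, a change driven by product management touches only UsageStatistics, while the workflow class keeps a single reason to change.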
401,907 | Some modern languages (e.g. Swift, Dart) do not support the protected access modifier keyword. Swift is a protocol-oriented language, but I've heard that Dart is a completely object-oriented language. Why don't these modern languages support protected ? Do you only need private and public for complete object-oriented programming? I think it's convenient to have a protected access modifier keyword when there are some data or interfaces that I want to pass from the parent class to the child class. Why do some modern languages not support protected ? | It depends on what you mean by "required". Access modifiers are not a necessity. You could replace every access modifier with public and most applications will work just like they did when you used varied access modifiers, proving the point that the compiler's main goal (outputting a working application) is not directly dependent on access modifiers. As Delioth mentioned in the comments, both Javascript and Python are capable of OOP yet have no concept of access modifiers; proving the point that OOP does not require access modifiers. However, access modifiers very much matter from a developer's perspective if you're interested in avoiding mistakes. Lack of access restrictions leads to developers accessing dependencies directly that they shouldn't (e.g. circumventing a validation/authorization layer), and this is going to lead to bugs, which leads to time and effort spent. In conclusion, access modifiers are not required for the compiler, but they are mostly considered a very-nice-to-have for good practice. Such guidelines "require" developers to exercise diligent access control - even if the compiler doesn't need it. Why some modern languages remove the protected ? There is no universally applicable answer to that question, other than "because that's what the language designers decided to do". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/401907",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/352152/"
]
} |
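To make the trade-off in the answer above concrete, here is a small C# sketch (all type and member names are hypothetical) of the parent-to-child sharing the question describes. In a language without protected, Header would have to be either public (leaking it to every caller) or private (forcing duplication or a different design such as composition).

```csharp
using System;

public abstract class Report
{
    // Visible to subclasses, hidden from outside callers.
    protected string Header => $"Generated {DateTime.UtcNow:u}";

    public abstract string Render();
}

public class CsvReport : Report
{
    // A child class can build on the parent's protected member...
    public override string Render() => $"# {Header}\nvalue1,value2";
}

public static class Program
{
    public static void Main()
    {
        var report = new CsvReport();
        Console.WriteLine(report.Render());
        // ...but client code cannot: accessing report.Header here would not compile.
    }
}
```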
402,032 | According to Understanding "programming to an interface", as I understand it, I should depend on abstract classes only. However, in some cases, for example Student : public class Student {
private String name;
private int age;
} to modify it so that it depends on abstract classes only (where MyIString may be a new abstract class that wraps String ): public class Student {
private MyIString name;
private java.lang.Number age;
} I think the modified one is more complex. And a more "real" example, say Address : public class Address {
private ZipCode zipcode;
} for which I need only one type of ZipCode , but if I modify it as: public class Address {
private IZipCode zipcode;
} where IZipCode is an interface, then I think it may mislead other teammates into thinking there would be other types of ZipCodes . I think the cases above become more complex and less maintainable if a class is only allowed to use abstract members. So my question is, should I still follow "programming to an interface, not an implementation" if the "followed" version becomes more complex (in my view)? | Programming to an interface means that you should focus on what the code does, not how it is actually implemented. See Telastyn's answer to Understanding “programming to an interface” . Interface classes help to enforce this guideline, but this does not mean that you should never use concrete classes. I really like the zip code example: In 1963, the United States Postal Service introduced zip codes that consist of five digits. You may now get the (bad) idea that an integer is a sufficient representation and use it throughout your code. In 1983, a new zip code format was introduced. It uses 5 digits, a hyphen, and 4 other digits. Suddenly, zip codes are not integers anymore and you need to change all places that use integers for zip codes to something else, for example, to strings. This refactoring could have been avoided if a designated ZipCode class had been used. Then, only the internal representation of the ZipCode class would have needed to change, and its usages would have stayed the same. Using a ZipCode class is already programming to an interface : instead of programming against some concrete implementation ("integer" / "string"), you program against some abstraction ("zip code"). If necessary, you can add another level of abstraction: you may need to support different postal code formats for different countries. For example, you could add an interface IPostalCode with implementations like PostalCodeUS (which is just the ZipCode from above), PostalCodeIreland , PostalCodeNetherlands etc. However, too many levels of abstraction can also make the code more complex and harder to reason about, and I think it depends on your application's requirements whether you want to use some ZipCode class or some IPostalCode interface. Maybe a ZipCode that internally uses a (Unicode) string is good enough for all countries, and country-specific stuff, for example validation, can be handled outside the class in some PostalCodeValidator / IPostalCodeValidator . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/402032",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/351912/"
]
} |
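The answer's point that a dedicated ZipCode type is already "programming to an interface" can be sketched in a few lines. This is a hypothetical C# rendering (the question used Java), and the validation shown is only a placeholder: callers only ever see "a zip code", so the internal representation can change without touching call sites.

```csharp
using System;

// A small value type whose internal representation can change
// without affecting any of its callers.
public readonly struct ZipCode
{
    private readonly string _value; // could have been an int before 1983-style codes

    public ZipCode(string value)
    {
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("Zip code must not be empty.", nameof(value));
        _value = value.Trim();
    }

    public override string ToString() => _value;
}

public class Address
{
    public ZipCode ZipCode { get; }
    public Address(ZipCode zipCode) => ZipCode = zipCode;
}
```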
402,235 | I have a class that represents a building. This building has a floor plan, which has bounds. The way I have it set up is like this: public struct Bounds {} // AABB bounding box stuff
//Floor contains bounds and mesh data to update textures etc
//internal since only building should have direct access to it no one else
internal class Floor {
private Bounds bounds; // private only floor has access to
}
//a building that has a floor (among other stats)
public class Building{ // the object that has a floor
Floor floor;
} These objects have their own unique reasons to exist, as they do different things. However, there is one situation where I want to get a point local to the building. In this situation I am essentially doing: Building.GetLocalPoint(worldPoint); This then has: public Vector3 GetLocalPoint(Vector3 worldPoint){
return floor.GetLocalPoint(worldPoint);
} Which leads to this function in my Floor object: internal Vector3 GetLocalPoint(Vector3 worldPoint){
return bounds.GetLocalPoint(worldPoint);
} And then of course the bounds object actually does the math required. As you can see, these functions are pretty redundant, as they just pass on to another function lower down. This doesn't feel smart to me - it smells like bad code that's going to bite me in the butt somewhere down the line with a code mess. Alternatively I could write my code like below, but then I have to expose more to the public, which I kind of don't want to do: building.floor.bounds.GetLocalPoint(worldPoint); This also starts to get a bit silly when you have many nested objects: it leads to large rabbit holes to get to a given function, and you may end up forgetting where it is - which also smells like bad code design. What is the correct way to design all this? | Never forget the Law of Demeter : The Law of Demeter (LoD) or principle of least knowledge is a design guideline for developing software, particularly object-oriented programs. In its general form, the LoD is a specific case of loose coupling. The guideline was proposed by Ian Holland at Northeastern University towards the end of 1987, and can be succinctly summarized in each of the following ways:[1] Each unit should have only limited knowledge about other units: only units "closely" related to the current unit. Each unit should only talk to its friends; don't talk to strangers. Only talk to your immediate friends. The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents), in accordance with the principle of "information hiding". It may be viewed as a corollary to the principle of least privilege, which dictates that a module possess only the information and resources necessary for its legitimate purpose. building.floor.bounds.GetLocalPoint(worldPoint); This code violates the LoD. Your current consumer is somehow required to know: That the building has a floor That the floor has bounds That the bounds have a GetLocalPoint method But in reality, your consumer should only be handling the building , not anything inside of the building (it shouldn't be handling the subcomponents directly). If any of these underlying classes change structurally, you're suddenly also required to change this consumer, even though it may be several levels up from the class you actually changed. This starts infringing on the separation of layers you have, as a change affects multiple layers (more than just its direct neighbors). public Vector3 GetLocalPoint(Vector3 worldPoint){
return floor.GetLocalPoint(worldPoint);
} Suppose you introduce a second type of building, one without a floor. I can't think of a real-world example, but I'm trying to show a generalized use case, so let's assume that EtherealBuilding is such a case. Because you have the building.GetLocalPoint method, you are able to change its workings without the consumer of your building being aware of it, e.g.: public class EtherealBuilding : Building {
public Vector3 GetLocalPoint(Vector3 worldPoint){
return universe.CenterPoint; // Just a random example
}
} What's making this harder to understand is that there is no clear use case for a building without a floor. I don't know your domain and I can't make a judgment call on if/how that would occur. But development guidelines are generalized approaches that forgo specific contextual applications. If we change the context, the example becomes clearer: // Violating LOD
bool isAlive = player.heart.IsBeating();
// But what if the player is a robot?
public class HumanPlayer : Player {
public bool IsAlive() {
return this.heart.IsBeating();
}
}
public class RobotPlayer : Player {
public bool IsAlive() {
return this.IsSwitchedOn();
}
}
// This code works for both human and robot players, and thus wouldn't need to be changed when new (sub)types of players are developed.
bool isAlive = player.IsAlive(); Which proves the point why the method on the Player class (or any of its derived classes) has a purpose, even if its current implementation is trivial . Sidenote For the sake of example, I've skirted a few tangential discussions, such as how to approach inheritance. These are not the focus of the answer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/402235",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/303640/"
]
} |
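As a small footnote to the answer above, the "redundant" delegation the question worries about can stay very cheap in C#. The sketch below uses expression-bodied members and System.Numerics.Vector3; the original code looks like Unity, so treat these types, and the placeholder math in Bounds, as stand-ins rather than the poster's actual implementation.

```csharp
using System.Numerics;

public class Building
{
    private readonly Floor _floor = new Floor();

    // The pass-through stays a one-liner, and callers never learn
    // that a Floor or a Bounds exists behind it.
    public Vector3 GetLocalPoint(Vector3 worldPoint) => _floor.GetLocalPoint(worldPoint);
}

internal class Floor
{
    private readonly Bounds _bounds = new Bounds();

    internal Vector3 GetLocalPoint(Vector3 worldPoint) => _bounds.GetLocalPoint(worldPoint);
}

internal class Bounds
{
    // Placeholder math: a real implementation would transform into the bounds' local space.
    private readonly Vector3 _origin = Vector3.Zero;

    internal Vector3 GetLocalPoint(Vector3 worldPoint) => worldPoint - _origin;
}
```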
402,250 | You'll often see that JavaScript is actually being transported over the web with all the useless stuff that doesn't need to be there -- comments (particularly those containing licenses), indentation characters ( '\t' , '\n' ), etc. Given enough time, it could end up wasting terabytes of data worldwide!
Would a JavaScript bytecode cause another bigger problem, or has nobody thought of this yet? | Why is JavaScript not compiled to bytecode before sending over the network? Background: I was on the ECMAScript technical committee in the late 1990s and one of the implementers of Microsoft's JScript engine. Let me begin by saying what I always say when faced with a "why not?" question: language designers are not required to give good reasons why they did not spend hundreds of millions of other people's dollars on a feature that someone happens to like . Rather, the person pitching the feature is required to give good reasons why that's the best way to spend that time, effort and money. You've made an argument with no numbers attached to it that bytecode would be a cost savings in terms of bandwidth. I would encourage you to work up some actual numbers, and compare that to the costs of creating yet another language; those costs are significant. Remember in your analysis that "implementation" is one of the smallest costs. Also in your analysis include who saves the money vs who spends the money, and you will find that the people spending the money are not the ones saving it; incentives matter. That said, this is one of the more reasonable "why not?" questions because it is a feature we considered and rejected for reasons. We considered such a scheme, both within Microsoft and at the TC level; since JScript was already implemented as compiling to a well-designed, principled bytecode language, it would have been straightforward for us to propose it as a standard and we considered doing so. We decided not to, for a variety of reasons including: Holy goodness it was hard enough to standardize JavaScript . Everyone and their dog would have an opinion about what the ideal characteristics of a bytecode language were, and it would be multiple years of bikeshedding. No one really wanted to go there. It was an expensive solution with no associated costly problem. There's no reason to suppose that a bytecode language would be more efficient in either size or speed. JavaScript already minifies reasonably well and is highly compressible. It would have created an enormous amount of work for browser providers, who were already vexed by the expense of producing an efficient, compliant JS implementation. Creating a secure JS implementation that resists attacks by bad actors is hard enough; should we double the surface area available to attack? Probably not. Standards are an impediment to innovation. If we discovered that a small change to our bytecode language would make a big difference in some previously-unforeseen or previously-unimportant user scenario, we were free to make that change. If it was a standard, we would not be free to create that user benefit. But that analysis presupposes that the reason to do the feature at all is performance. Interestingly enough, the customer requests that motivated considering this feature back in the 1990s were not primarily about performance. Why not? The 1990s was a very different time for JS than today; scripts were mostly tiny. The notion that there would someday be frameworks with hundreds of thousands of lines was not even close to being on our radar. Downloading and parsing JS was a tiny fraction of the time spent downloading and parsing HTML. Nor was the motivation the extension to other languages, though that was of interest to Microsoft as we had VBScript running in the browser as well, which used a very similar bytecode language. 
(Being developed by the same team and compiled out of the same sources and all.) Rather, the primary customer scenario for motivating bytecode in the browser was to make the code harder to read, understand, decompile, reverse-engineer and tamper with. That a bytecode language is hardly any additional work to understand for any attacker with reasonable resources was a major point against doing this work; we did not want to create a false sense of security. Basically there were lots of expenses and precious few benefits, so it did not get done. Something must have changed between 1998 and 2015 that made WebAssembly have a reasonable price-to-benefit ratio; what those factors are, I do not know. You'd have to ask an expert on WebAssembly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/402250",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/352665/"
]
} |
402,258 | I have an aggregate root named "Project". It has some basic attributes like name, duration, and documents. I have another aggregate root named "Task". It also has some basic attributes like name, duration, documents, and some more. We have a scenario where a user can add more than 200 tasks to a project. Keeping the list of tasks as one of the attributes of Project seems unlikely to work because of memory consumption. However, we have another scenario where, when a user adds a task and provides a duration, we need to find the minimum and maximum dates from the list of tasks and assign that duration to the project. We can't use eventual consistency here, because we want the updates to be immediately reflected. I was thinking of the approach below: public class TaskService {
public void addTaskToProject(TaskCommand command)
{
Project project = this.projectRepo.getProject(command.projectId);
Task task = Task.create(command);
if(task.startDate < project.startDate)
project.startDate = task.startDate;
//Multiple aggregate updates
taskRepo.save(task);
projectRepo.save(project);
}
} Another approach: public class TaskService {
public void addTaskToProject(TaskCommand command)
{
Project project = this.projectRepo.getProject(command.projectId);
Task task = Task.create(command);
task.assignDuration(command.duration, project);
//Multiple aggregate updates
taskRepo.save(task);
projectRepo.save(project);
}
}
public class Task
{
Date startDate;
Date endDate;
public void assignDuration(Duration duration, Project project)
{
this.startDate = duration.startDate;
this.endDate = duration.endDate;
if(this.startDate < project.startDate)
project.changeStartDate(this.startDate);
if(this.endDate > project.endDate)
project.changeEndDate(this.endDate);
} Reasons for not having tasks list as another attribute of Project is memory and there are chances that multiple people can add tasks to same project. | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/402258",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/350577/"
]
} |
402,542 | In code dealing with hash tables, I often find the constant 0x9e3779b9 or sometimes 0x9e3779b1. For example hash = n * 0x9e3779b1 >>> 24 Why is this particular value used? | 0x9e3779b9 is the integral part of the Golden Ratio's fractional part 0.61803398875… (sqrt(5)-1)/2, multiplied by 2^32. Hence, if φ = (sqrt(5)+1)/2 = 1.61803398875 is the Golden Ratio, the hash function calculates the fractional part of n * φ, which has nice scattering properties. To convince yourself, just create a scatter plot of (n, n*c-FLOOR(n*c)) in your favourite spreadsheet, replacing c with φ, e, π, etc. Some interesting real-life issues when getting it wrong are described in https://lkml.org/lkml/2016/4/29/838 . This method is often referred to as "Golden Ratio Hashing", or "Fibonacci Hashing" and was popularised by Donald Knuth (The Art of Computer Programming: Volume 3: Sorting and Searching). In number theoretical terms, it mostly boils down to the Steinhaus Conjecture ( https://en.wikipedia.org/wiki/Three-gap_theorem ) and the recursive symmetry of the fractional parts of the multiples of the Golden Ratio φ. Occasionally, you may also see 0x9e3779b1 , which is the prime closest to 0x9e3779b9 (and appears to be a bit of "cargo cult" as this is not a modular hash). Similarly, 0x9e3779b97f4a7c15 and 0x9e3779b97f4a7c55 are the 64 bit equivalents of these numbers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/402542",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/353174/"
]
} |
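To see the scattering property the answer above describes, here is a small C# sketch of Fibonacci (golden-ratio) hashing. The constant and the right shift mirror the question's `n * 0x9e3779b1 >>> 24` example; the 8-bit (256-slot) table size is just an illustrative choice.

```csharp
using System;

public static class FibonacciHash
{
    private const uint GoldenGamma = 0x9e3779b9; // ~ 2^32 * (sqrt(5) - 1) / 2

    // Multiply, let the product wrap modulo 2^32, then keep the top 'bits' bits.
    public static uint Hash(uint n, int bits) => (n * GoldenGamma) >> (32 - bits);

    public static void Main()
    {
        // Consecutive keys land far apart in a 256-slot table.
        for (uint n = 1; n <= 5; n++)
            Console.WriteLine($"{n} -> {Hash(n, 8)}");
    }
}
```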
402,840 | We're a small/medium sized company with a dozen or so software developers, developing our own in-house software for in-house use. Given that there's so few of us and there's just so much work to be done, for most part each developer handles a separate part of the system and doesn't share their work much with other developers. Each has their "domain" so to say. Occasionally however the domains overlap and we need to collaborate; and also the arrangement means that it's hard to replace people and when something goes wrong we must be there to fix things because nobody else can do it (at least not quickly). So this arrangement is both nice (we each have full creative control) and not nice (we're basically forced to be on call 24/7, although in practice it's a bit more relaxed than that). Recently we tried a little "workshop" among ourselves to promote some better coding standards, namely, unit tests. (Yup, we're one of the people not doing those yet...) During the 2h meeting we aimed to create a small sample program with unit tests, just to get a feel for doing it. It was a fun 2 hours, and we did manage to produce a little bit of code in the end, however an interesting issue became painfully obvious: having lived so much in isolation for so long, each of us has basically our own coding style. Now, I'm not talking about tabs vs spaces, camel case vs snake case or some such other cosmetic difference. I'm talking about principles of arranging code. How to name things. What folders and namespaces to place them in. Do I split this code into 3 classes or just one? 5 tiny files or 1 gigantic one? Abstract it away with interfaces and factories, or call it directly? Getters and setters or naked fields? Etc. At times, writing the absolutely trivial program nearly devolved into a shouting match, although we were thankfully able to retain our cool in the end and no feelings got hurt. So this got me wondering - how do you normalize coding style among multiple seasoned developers with each their own strong preferences? The different styles certainly are bothersome when we do need to interact with each other's codes, not to mention confusing for any newcomers. And when some piece of code gets handed from one person's domain to another's, there's always the strong desire to rewrite it to match your own ways. First of all, are there any rules for how to lay out your code? Any standards? So far I've only seen the cosmetic stuff about spaces and cases. Basically how to format your code once it's written (in your head at least). But are there any guides about how to write your code, how to arrange it and how to name it? Where and how to split it in pieces and how to make the pieces interact? If there isn't a standard and we need to create our own, how do you go about doing that when everyone has a strong opinion of what is right and what is wrong? Now, mind you, we're all seasoned developers here; we realize that none of our approaches is inherently better or worse than any other - just that each one of them has certain strengths and certain weaknesses. But we also have a strong opinion about which strengths and which weaknesses matter the most. So how do you decide on The Right Way™ and how do you make sure everyone sticks to it without hurting any feelings (too much)? One way I've heard of is to select a Glorious Leader who then forces his preferred style onto others (via code reviews and meetings and whatever), but... 
you need a really good Glorious Leader who is indeed head and shoulders above others. What if you don't have one, and we're really all equals here? | Have a coding standard. If the shop you're going to work for already has one in use, that's the one you follow. Avoid coding standards that are dozens of pages long; it's not that complicated. Instead, look at code you like on Github, and follow that style. Find the well-established idioms for your particular programming language (camelCase, etc.), and use them. Let the IDE and code style tools do most of the work for you. For example, Visual Studio already has most of the important rules in place. Don't like how a piece of code is formatted? Ask Visual Studio to reformat it for you. Problem solved. Hire software developers that know what they are doing. They'll write good code without having to slavishly follow a style guide. Have code reviews. Seek consensus based on function and readability.
Don't like the consensus? One person is the tie-breaker; their decision stands. Don't strive for perfection; you'll never get it (for many, many reasons that are outside the scope of this question). Instead, strive for improvement. Try not to waste a lot of time and money on things that don't matter. In particular, don't rewrite code that already works and is already well-tested, just to satisfy your own sensibilities about what that code should look like. Learn how to write functions and methods that do one thing and do it well. If you do that, your naming should take care of itself (the name is a short verb phrase that describes what the function does). If you're doing OO, learn how to write small classes that have one responsibility /point of modification. If you do that, your naming should take care of itself (the name is a short noun phrase that describes what the class is). Learn how to write code that is easily testable . If you do that, the unit tests will take care of themselves. Pick your battles. Does that small, obscure proclivity that one of the developers exhibits really matter? Avoid religions and dogma. Finally, remember that the code is not yours. Don't be sentimental about it. You should "own" it and make changes as required (so, in that sense, it is yours), but it's there to serve a purpose. Being overly attached to the code only gets in the way of that by preventing objective analysis of the pros and cons of large-scale changes or decisions about the structure of the code and making you object to compromise. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/402840",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7279/"
]
} |
403,018 | I am still a software engineer in training, but I have heard the following adage multiple times: Good coders don't have to deal with merge conflicts. Whenever I work on a project with others, I'm losing an annoying amount of time and effort on resolving merge conflicts. So, I began wondering how true that adage is. I am aware of techniques like GitFlow and agile which are able to make developers efficiently split up their work, commit more often, and maintain multiple branches of code, so merge conflicts are a lot less likely. But surely they still happen and can potentially wreak havoc? In other words, how much time and effort do seasoned developers lose to resolving merge conflicts (when they could be working on their projects)? Are available techniques able to really negate any effects of merge conflicts? | I think it's a little disingenuous to say that good developers never have merge conflicts, but they can surely reduce the number of times it happens. It's also very important to remember that software development is a team activity. Actions of other members of the teams can also increase or decrease the likelihood of merge conflicts. First, it's important to understand the common ways that merge conflicts happen: Two people make different changes to the same line of code Someone decides to reformat the entire file so that every line is changed Deleting a file and then replacing it (that's a tree conflict) Code is deleted and added at the same time by two different people (not uncommon with Visual Studio .vsproj files) Development teams have come up with ways to avoid these situations: Ensure that each team member is working in different parts of the code (i.e. handled in task assignment) Code standards that dictate whether you use spaces or tabs, naming standards etc. so that whole-sale code reformatting is not necessary GitFlow so that everyone is working on their own branch, and can resolve conflicts at the Pull Request stage (all downstream merges work wonderfully) Committing often and routinely merging from develop to ensure your feature branch is never too far out of date Making sure your features are small enough they can be done in 1-3 days. With changes like that, you can greatly minimize the likelihood of conflicts. You'll never be able to completely eliminate them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/403018",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/353840/"
]
} |
403,065 | In the C++ complex library, the method norm() of a complex number actually returns the square of what I have learned is usually called the "Norm". Reference: std::norm For example, std::norm() of (3,4) is 25. To me, this looks very confusing: why did people implement something that does not correspond to the "usual" use of the name? | This is not a C++ library issue but a question of mathematical terminology. In mathematics, a norm can mean different things: What you call norm is the Euclidian norm, which is the distance to the origin. In C++ it's abs() . This naming convention has the advantage of being consistent for complex and for real numbers (the origin in the latter case being 0.0). What the C++ library calls norm() corresponds to the field norm from complex numbers to real numbers. It's also known as absolute square . Post Scriptum: the early design of the C++ complex number library dates back to 1984, before templates did exist. In the article (link on this page ), Rose & Stroustrup explain that norm() was intended for comparing magnitudes faster, but at the same time was more subject to overflows. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/403065",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/353920/"
]
} |
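The distinction the answer above draws can be written out explicitly in standard notation: for a complex number z = a + bi, std::abs corresponds to the Euclidean norm and std::norm to the field norm (absolute square), which matches the (3,4) example from the question.

```latex
% Euclidean norm (std::abs) versus field norm (std::norm)
\[
  |z| = \sqrt{a^2 + b^2}, \qquad
  N(z) = z\,\bar{z} = a^2 + b^2, \qquad
  \text{so for } z = 3 + 4i:\; |z| = 5,\; N(z) = 25.
\]
```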
403,098 | While assisting a student with a university project, we worked on a Java exercise provided by the university which defined a class for an address with the fields: number
street
city
zipcode And it specified that the equals logic should return true if the number and zip code match. I was once taught that the equals method should only do an exact comparison between the objects (after checking the pointer), which makes some sense to me, but contradicts the task they were given. I can see why you would want to override the logic so that you can use things like list.contains() with your partial matching, but I'm wondering if this is considered kosher, and if not, why not? | Defining Equality For Two Objects Equality can be arbitrarily defined for any two objects. There is no strict rule that forbids someone from defining it any way they want. However, equality is often defined when it is meaningful for the domain rules of what is being implemented. It is expected to follow the equivalence relation contract : It is reflexive : for any non-null reference value x, x.equals(x)
should return true. It is symmetric : for any non-null reference values
x and y, x.equals(y) should return true if and only if y.equals(x)
returns true. It is transitive : for any non-null reference values x,
y, and z, if x.equals(y) returns true and y.equals(z) returns true,
then x.equals(z) should return true. It is consistent : for any
non-null reference values x and y, multiple invocations of x.equals(y)
consistently return true or consistently return false, provided no
information used in equals comparisons on the objects is modified. For
any non-null reference value x, x.equals(null) should return false. In your example, perhaps there is no need to distinguish two addresses that have the same zipcode and number as being different. There are domains that are perfectly reasonable to expect the following code to work: Address a1 = new Address("123","000000-0","Street Name","City Name");
Address a2 = new Address("123","000000-0","Str33t N4me","C1ty N4me");
assert a1.equals(a2); This can be useful, as you mentioned, for when you do not care about them being different objects - you only care about the values they hold. Perhaps zipcode + street number are enough for you to identify the correct address and the remaining information is "extra", and you don't want that extra information to affect your equality logic. This could be a perfectly good modeling for a software. Just make sure there is some documentation or unit tests to ensure this behavior and that the public API reflects this use. Do Not Forget About hashCode() One additional detail relevant for implementation is the fact that many languages heavily use the concept of hash code . Those languages, java including, usually assume the following proposition: If x.equals(y) then x.hashCode() and y.hashCode() are the same. From the same link as before: Note that it is generally necessary to override the hashCode method whenever this method (equals) is overridden, so as to maintain the general contract for the hashCode method, which states that equal objects must have equal hash codes. Note that having the same hashCode does not mean that two objects are equal! In that sense, when one implements equality, one should also implement a hashCode() that follow the property mentioned above. This hashCode() is used by data structures for efficiency and guaranteeing upper bounds on the complexity of their operations. Coming up with a good hash code function is hard and an entire topic on itself. Ideally the hashCode of two different objects should be different or have an even distribution among instance occurrences. But keep in mind that the following simple implementation still fulfills the equality property, even though it is not a "good" hash function: public int hashCode() {
return 0;
} A more common way of implementing hash code is to use the hash codes of the fields that define your equality and make a binary operation on them. In your example, zipcode and street number. It is often done like: public int hashCode() {
return this.zipCode.hashCode() ^ this.streetNumber.hashCode();
} When Ambiguous, Choose Clarity Here is where I make a distinction about what one should expect regarding equality. Different people have different expectations regarding equality and if you are looking to follow the Principle of Least Astonishment you can consider other options to better describe your design. Which of those should be considered equal? Address a1 = new Address("123","000000-0","Street Name","City Name");
Address a2 = new Address("123","000000-0","Str33t N4me","C1ty N4me");
assert a1.equals(a2); // Are typos the same address? Address a1 = new Address("123","000000-0","John Street","SpringField");
Address a2 = new Address("123","000000-0","John St.","SpringField");
assert a1.equals(a2); // Are abbreviations the same address? Vector3 v1 = new Vector3(1.0f, 1.0f, 1.0f);
Vector3 v2 = new Vector3(1.0f, 1.0f, 1.0f);
assert v1.equals(v2); // Should two vectors that have the same values be the same? Vector3 v1 = new Vector3(1.00000001f, 1.0f, 1.0f);
Vector3 v2 = new Vector3(1.0f, 1.0f, 1.0f);
assert v1.equals(v2); // What is the error tolerance? A case could be made for each one of those being true or false. When in doubt, one can define a different relation that is clearer in the context of the domain. For instance, you could define isSameLocation(Address a) : Address a1 = new Address("123","000000-0","John Street","SpringField");
Address a2 = new Address("123","000000-0","John St.","SpringField");
System.out.print(a1.equals(a2)); // false;
System.out.print(a1.isSameLocation(a2)); // true; Or in the case of Vectors, isInRangeOf(Vector v, float range) : Vector3 v1 = new Vector3(1.000001f, 1.0f, 1.0f);
Vector3 v2 = new Vector3(1.0f, 1.0f, 1.0f);
System.out.print(v1.equals(v2)); // false;
System.out.print(v1.isInRangeOf(v2, 0.01f)); // true; This way, you better describe your design intent for equality, and you avoid breaking future readers' expectations regarding what your code actually does. (You can just take a look at all the slightly different answers to see how people's expectations vary regarding the equality relation in your example.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/403098",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/213932/"
]
} |
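The answer's Java examples carry over directly to other languages. Purely as an illustration, here is a C# analogue (a hypothetical Address type, not the student's assignment) that limits equality to zip code and street number and keeps GetHashCode consistent with Equals, which is the same contract the answer describes for hashCode().

```csharp
using System;

// Value equality on (zip code, street number) only; hash code stays consistent with it.
public sealed class Address : IEquatable<Address>
{
    public string ZipCode { get; }
    public string StreetNumber { get; }
    public string Street { get; }   // ignored for equality
    public string City { get; }     // ignored for equality

    public Address(string zipCode, string streetNumber, string street, string city)
    {
        ZipCode = zipCode;
        StreetNumber = streetNumber;
        Street = street;
        City = city;
    }

    public bool Equals(Address other) =>
        other is not null && ZipCode == other.ZipCode && StreetNumber == other.StreetNumber;

    public override bool Equals(object obj) => Equals(obj as Address);

    public override int GetHashCode() => HashCode.Combine(ZipCode, StreetNumber);
}
```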
403,150 | Recently, a person here asked a basic question about how to compute in Python all permutations of elements from a list. As for most questions asked by students, I haven't provided the actual source code in my answer, but rather explained how to approach the problem, which essentially ended up in a basic presentation of test driven development. Thinking about the problem itself, I remembered all the similar problems I did have to solve when I was myself studying programming in university. We were never taught about test driven development, nor any comparable method, but nevertheless were constantly asked to write algorithms which sort elements from a list, or solve Towers of Hanoi puzzle, etc. Nor do all the students who come on SoftwareEngineering.SE seem to know how TDD (Test Driven Development) would help them solving the problems they are given: if they were, they would have formulated their questions very differently. The fact is, without TDD, all those problems are indeed quite challenging. I consider myself a professional developer, but I would still spend some time writing a list-ordering algorithm without TDD, if asked, and I would be mostly clueless if I were asked to compute all the permutations of the elements in a list, still without using TDD. With TDD, on the other hand, I solved the permutations problem in a matter of minutes when writing the answer to the student. What is the reason TDD is either not taught at all or at least very late in universities? In other words, would it be problematic to explain the TDD approach before asking students to write list sorting algorithms and similar stuff? Following the remarks in two of the questions: In your question, you appear to present TDD as a "problem solving device" [...] In the past, TDD has been presented as a "solution discovery mechanism," but even Bob Martin (the principal advocate of TDD) concedes that you must bring a significant amount of prior knowledge to the technique. and especially: I'm curious why you think TDD makes a tricky algorithm problem with a well-defined spec easier. I find it necessary to explain a bit more what is, in my opinion, so magical about TDD when it comes to solving problems. In high school and in the university, I didn't have any specific techniques to solve problems, and this applied to both programming and mathematics. In retrospective, I suppose that one of such techniques is to review the current/last lesson/lecture and seek relation with the exercice. If the lesson is about integrals, there are chances the problem the teacher asked to solve requires to use integrals. If the lecture was about recursion, there are chances that the puzzle given to the students could be solved using recursion. I'm also sure there are well formalized approaches to solving problems in mathematics, and those approaches can be applied to programming as well; however, I never learned any. This means that in practice, my approach was simply to poke the problem around, trying to guess how should it be solved. Back then, if I was given the challenge of generating permutations of elements from a list, I wouldn't start with an empty list as input, but rather an illustrative example, such as [4, 9, 2] , trying to figure out why are there six possible permutations, and how can I generate them through code. From there, I need a lot of thinking to find a possible way to solve the problem. This is essentially what the author in the original question did, ending up using random . 
Similarly, when I was a student, no other students of my age would start with [] : all would immediately rush to the case with two or three elements, and then remain stuck for half an hour, sometimes ending up with code which doesn't compile or doesn't run. The TDD approach, for me, appears to be counter-intuitive. I mean, it works very well, but I would have never figured out myself, before reading a few articles about TDD, that (1) I should start with the simplest case, (2) write the test before writing code, and (3) never rush, trying to fulfil several scenarios in code. Looking at how beginner programmers think, I have an impression that I'm not the only one finding it counter-intuitive. It may be, I believe, more intuitive for programmers who have a good understanding of a functional language. I suppose that in Haskell, for example, it would be natural to handle the permutations problem by considering first a case of an empty list, then a case of a list with one element, and then a case of a list of multiple elements. In languages where recursive algorithms are possible, but not as natural as in Haskell, such approach is however much less natural, unless, of course, one practices TDD. | I am a part-time programming teacher at a local community college. The first course that is taught at this college is Java Programming and Algorithms. This is a course that starts with basic loops and conditions, and ends with inheritance, polymorphism and an introduction to collections. All in one semester, to students who have never written a line of code before, an activity that is completely exotic to most of them. I was invited once to a curriculum review board. The board identified a number of problems with the college's CS curriculum: Too many programming languages taught. No courses about the Software Development Life Cycle. No database courses. Difficulty in getting credits to transfer to state and local universities, partly because these schools can't agree on a uniform definition for the terms "Computer Science," "Information Technology," and "Software Engineering." My advice to them? Add a two-semester capstone class to the curriculum where students can write a full-stack application, a course that would cover the entire software development life cycle from gathering requirements to deployment. This would make them hireable at local employers (at the apprentice level). So where does TDD fit into all of this? I honestly don't know. In your question, you appear to present TDD as a "problem solving device;" I see TDD mostly as a way to improve the design and testability of code. In the past, TDD has been presented as a "solution discovery mechanism," but even Bob Martin (the principal advocate of TDD) concedes that you must bring a significant amount of prior knowledge to the technique. In other words, you still have to know how to solve problems in code first. TDD just nudges you in the right general direction relative to software design specifics. That makes it an upper-level course, not a lower-level one. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/403150",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
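As an illustration of the incremental progression the question describes (start from the empty list, then one element, then the general case), here is a hedged C# sketch of the permutations exercise. The checks in Main stand in for the unit tests a TDD session would write one at a time; the method name Of and the overall shape are just one possible solution, not the approach the original poster used.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Permutations
{
    // Grown in small steps: empty list first, then one element, then the recursive case.
    public static IEnumerable<List<T>> Of<T>(IReadOnlyList<T> items)
    {
        if (items.Count == 0)
        {
            yield return new List<T>(); // exactly one permutation: the empty one
            yield break;
        }
        for (int i = 0; i < items.Count; i++)
        {
            var rest = items.Where((_, index) => index != i).ToList();
            foreach (var tail in Of(rest))
            {
                tail.Insert(0, items[i]); // prepend the chosen element to each sub-permutation
                yield return tail;
            }
        }
    }

    public static void Main()
    {
        // The checks that would drive each step, written as plain assertions.
        Console.WriteLine(Of(new int[0]).Count() == 1);          // []
        Console.WriteLine(Of(new[] { 4 }).Count() == 1);         // [4]
        Console.WriteLine(Of(new[] { 4, 9, 2 }).Count() == 6);   // all six orderings
    }
}
```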
403,228 | I am making a full-stack web application for a professor. At his request, the passwords and usernames are generated programmatically, and they cannot be changed or reset by the students. (If you forget your password, you ask the professor, who can look it up.) Does this tightly-controlled system eliminate the need to do all of the normal best practices with regard to storing passwords in a database? In case it's relevant, the app does not contain any association or identifiers between the student's identifying information (name, gender, etc.) and their username and password. EDIT 1: Thank you for the many responses. This is very helpful to me as I didn't take the traditional route to this career and have some holes in my knowledge that probably seem fundamental to you. Here are a few points of clarification: I am a freelancer on my first-ever freelancing project, and the client/customer is a professor. I am not his student, and this is not an assignment. My task is to replace an existing application that is very old and for which the source code is lost. The application is used in a class taught by several professors in different schools in the US. Much of the content is just static, like a textbook. However you can also take some questionnaires/instruments developed by the professors to get insight into the topic of the course relative to a real-world example of your choosing (i.e., the information you supply is not about yourself or other people). My original goals in having username/passwords was to identify users so that I could enforce expiration of access to the content, and also control permissions. Permissions matter because in addition to students, there is the concept of administrator users (who have a dashboard where they can view lists of username/passwords they have created, and create more) and a super-admin (who in addition to what admins can do, can create other admin users). The primary reason I started down the path of username and login is because that is what the old app had. But, the old app stored (lots!) of student information. The professor in charge now does not want to go afoul of FERPA laws so he has changed the requirements there. A professor can create another username/password set and add it to the current class if needed. But they aren't forced to currently. (They can just give out the original password.) This was the professor's decision when I asked him if he wanted a "reset password" button on the list. | This is a really good example of insecure authentication, justified on the basis that if the site is compromised it is not possible to identify the person. If that's the case, why do we even need a username? just give each student a secret access code. Here are some of the flaws: Scale of breach - The entire site will become compromised by someone obtaining the cleartext username/password list. This list must have some identifying information on it so that the professor can give the username / password to the right student. Now you have the application containing data about or specific to that student and a identity file that firstly, tells you who each username is for, and then gives you the password to log into that site so you can see the data for that user. Authentication - The key point of authentication is to ensure that you are authenticating the identity of the person making the webpage request. Username / password is only valid because only the real person knows the secret password. 
In this professor's scenario, this is not the case, as at least two people know every username and password. Compromised Credentials - what happens when someone's credentials become compromised? How do they immediately revoke them? How do they get new ones? Can the professor create a new password for one student, or does the professor need to repeat the initial process and create new passwords for everyone? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/403228",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/241967/"
]
} |
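None of this removes the usual storage rules the question asks about: even passwords that are generated programmatically and looked up by the professor should be stored salted and hashed, not in plaintext, so a leaked table does not hand out working credentials. Below is a minimal C# sketch using PBKDF2; the salt size, iteration count, and output length are illustrative choices, not a tuned recommendation for this particular application.

```csharp
using System;
using System.Security.Cryptography;

public static class PasswordStore
{
    // Derive a salted hash from the password; store only the salt and hash.
    public static (byte[] Salt, byte[] Hash) HashPassword(string password)
    {
        byte[] salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);

        using var kdf = new Rfc2898DeriveBytes(password, salt, 100_000, HashAlgorithmName.SHA256);
        return (salt, kdf.GetBytes(32));
    }

    // Re-derive with the stored salt and compare in constant time.
    public static bool Verify(string password, byte[] salt, byte[] expectedHash)
    {
        using var kdf = new Rfc2898DeriveBytes(password, salt, 100_000, HashAlgorithmName.SHA256);
        return CryptographicOperations.FixedTimeEquals(kdf.GetBytes(32), expectedHash);
    }
}
```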
403,318 | I am confused because in quite a few places I've already read that the so-called 'boneheaded' exceptions (ones that result from bugs in code) are not supposed to be caught. Instead, they must be allowed to crash the application: Vexing exceptions , by Eric Lippert A comment under Eliding Async and Await , by Stephen Cleary Answer below Is it a good practice to use self-defined exception? , by Draco18s no longer trusts SE At least two of the three above people are established authorities. I am surprised. Especially for some (important!) use cases, like server side code, I simply can't see why is catching such an exception suboptimal and why the application must be allowed to crash. As far as I'm aware, the typical solution in such a case is to catch the exception, return HTTP 500 to the client, have an automatic system that sends an emergency e-mail to the development team so that they can fix the problem ASAP - but do not crash the application (one request must fail, there's nothing we can do here, but why take the whole service down and make everyone else unable to use our website? Downtime is costly!). Am I incorrect? Why am I asking - I'm perpetually trying to finish a hobby project, which is a browser based game in .net core. As far as I'm aware, in many cases the framework does for me out of the box the precise thing Eric Lippert and Stephen Cleary are recommending against! - that is, if handling a request throws, the framework automatically catches the exception and prevents the server from crashing. In a few places, however, the framework does not do this. In such places, I am wrapping my own code with try {...} catch {...} to catch all possible 'boneheaded' exceptions. One of such places, AFAIK, is background tasks. For example, I am now implementing a background ban clearing service that is supposed to clear all expired temporary bans every few minutes. Here, I'm even using a few layers of all-catching try blocks: try // prevent server from crashing if boneheaded exception occurs here
{
var expiredBans = GetExpiredBans();
foreach(var ban in expiredBans)
{
try // If removing one ban fails, eg because of a boneheaded problem,
{ // still try to remove other bans
RemoveBan(ban);
}
catch
{
}
}
}
catch
{
} (Yes, my catch blocks are empty right now - I am aware that ignoring these exceptions is unacceptable, adding some logging is perpetually on my TODO list) Having read the articles I linked to above, I can no longer continue doing this without some serious doubt... Am I not shooting myself in the foot? Why / Why not? If and why should boneheaded exceptions never be caught? | Silent But Deadly When writing enterprise software, you will eventually learn an essential truth: the worst bug in the world is not one that causes your program to crash. The worst bug in the world is one which causes your program to silently produce a wrong answer that goes unnoticed but eventually produces a massive negative effect (with severe financial implications for your employer). Thus, error messages and crashes are A Good Thing TM , because they indicate that your program detected a problem . Amazing Grace Now, this seems to conflict with another enterprise virtue, which is "degrade gracefully". Blowing up and not returning any response at all hardly looks like "graceful degradation". And this is why many folks will try very hard to return some response, if they can. Indeed, this is why many frameworks, like Spring, will catch all top-level exceptions and wrap them with a 500 response, as you describe. In general, I think this is OK. After all, most exceptions don't really require a restart of the entire app server if you can just kill/restart a server thread. A sane framework will be careful to not catch Java Errors , like OutOfMemory , for obvious reasons. But there is one more point to consider: once you get beyond a single server, you will likely have a load balancer in front of your service. And when the LB times out or gets a closed connection, it will generally return a 500 to its client. Thus, the LB will often transform your "server crash" into a client 5xx automatically! Best of both worlds. Worst Case In your scenario, what is the worst that can happen if you don't catch the exceptions? Your answer: "Well, my game server dies, and nobody can play!!!" But that's not the worst case. The worst case is, everyone is playing your game, but griefers are ruining it. Players file a bug report and tell you that bans aren't working, but you look at the logs and everything looks fine. Or, legitimate players are getting banned by griefers, and instead of being able to rejoin in a timely manner, the bans are lasting indefinitely, because your server happily ignores failures. The worst thing isn't your game crashing. It's your player trust crashing. Good luck trying to reset that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/403318",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/212639/"
]
} |
403,483 | Currently I am working on a school project written in C#. Some teammates just started in C# and some are already familiar with C#. Today I had a discussion on whether to use syntactic sugar like this in our code: private SomeClass _someClass;
public SomeClass SomeClass
{
set => _someClass = value;
}
// Equivalent to:
private SomeClass _someClass;
public SomeClass SomeClass
{
set
{
_someClass = value;
}
} Or this: random ??= new Random();
// Equivalent to:
if (random == null)
random = new Random(); Reasons we discussed for not using syntactic sugar like this were: It is hard to read in general For someone coming from another language e.g. Java, it is harder to see what is going on in the code. Are these valid reasons? Does this happen in other languages? Are there some measures to decide what is the "cleaner" way of coding? | I disagree with "It is hard to read in general", especially with the "in general" part. These language features may be hard to read for beginners when they see them the first time, but they were actually added to the language to make code more concise. So after one gets used to them (which should not last longer than using them half a dozen times) they should make the code more readable, not less. For someone coming from another language e.g. Java, it is harder to see what is going on in the code. Yes, but is your goal to program Java in C#, or to program C#? When you decide to use a language, you will be better off learning the idioms of the language, especially the simple ones. When you work with real-world programs, you will encounter these idioms frequently and will have to deal with them, whether you like them or not. Let me finally add, the ultimate measure for the readability of your code is what your peer reviewer tells you. And whenever I am in the role of a reviewer who stumbles upon a simple language idiom which is new to me, I usually take it as an occasion to learn something new, not as an occasion to tell the other devs what they should not use because I don't want to learn it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/403483",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/264451/"
]
} |
403,673 | I sometimes end up with services encapsulating the responsibility of doing some sort of business process for which there are several possible outputs. Typically one of those outputs is success and the others represent the possible failures of the process itself. To make the idea concrete, consider the following interfaces and classes: interface IOperationResult
{
}
class Success : IOperationResult
{
public int Result { get; }
public Success(int result) => Result = result;
}
class ApiFailure : IOperationResult
{
public HttpStatusCode StatusCode { get; }
public ApiFailure(HttpStatusCode statusCode) => StatusCode = statusCode;
}
class ValidationFailure : IOperationResult
{
public ReadOnlyCollection<string> Errors { get; }
public ValidationFailure(IEnumerable<string> errors)
{
if (errors == null)
throw new ArgumentNullException(nameof(errors));
this.Errors = new List<string>(errors).AsReadOnly();
}
}
interface IService
{
IOperationResult DoWork(string someFancyParam);
} The classes consuming the IService abstraction are required to process the returned IOperationResult instance. The straightforward way to do so is writing a plain old switch statement and decide what to do in each case: switch (result)
{
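// every consumer has to repeat this type-to-action mapping, and each new IOperationResult implementation means updating every such switch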
case Success success:
Console.WriteLine($"Success with result {success.Result}");
break;
case ApiFailure apiFailure:
Console.WriteLine($"Api failure with status code {apiFailure.StatusCode}");
break;
case ValidationFailure validationFailure:
Console.WriteLine(
$"Validation failure with the following errors: {string.Join(", ", validationFailure.Errors)}"
);
break;
default:
throw new NotSupportedException($"Unknown type of operation result {result.GetType().Name}");
} Writing this type of code in different points of the codebase quickly generates a mess, because this basically violates the open closed principle. Each time the implementation of IService gets modified by introducing a new implementation of IOperationResult there are several switch statements that must be modified too. The developer implementing the new feature must be aware of their existence, unless there are well written tests which can automatically detect the missing modifications in the points where the code switches over IOperationResult instances. Maybe the switch statement can be avoided at all. This is easy to do when IService is used for one specific purpose. As an example, when I write ASP.NET core MVC controllers in order to keep action methods simple and lean I inject a service in the controller and delegate to it all the processing logic. This way the action method only cares about handling the HTTP request, validating the parameters and returning an HTTP response to the caller. In this scenario the switch statement can be avoided from the beginning by using polymorphism. The trick is modifying IOperationResult this way: interface IOperationResult
{
IActionResult ToActionResult();
} The action method simply calls ToActionResult on the IOperationResult instance and returns the result. In some cases the IService abstraction must be used by different callers and we need to give them the freedom to decide what to do with the operation result. One possible solution is defining one higher order function, let's call it processor for simplicity, having the responsibility of processing a given instance of IOperationResult . It's something like this: static class Processors
{
static T Process<T>(
IOperationResult operationResult,
Func<Success, T> successProcessor,
Func<ApiFailure, T> apiFailureProcessor,
Func<ValidationFailure, T> validationFailureProcessor) =>
operationResult switch
{
Success success => successProcessor(success),
ApiFailure apiFailure => apiFailureProcessor(apiFailure),
ValidationFailure validationFailure => validationFailureProcessor(validationFailure),
_ => throw new ArgumentException($"Unknown type of operation result: {operationResult.GetType().Name}")
};
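// Illustrative usage sketch (an assumption, not part of the original code): because one handler
// must be supplied per outcome, adding a new IOperationResult implementation changes the
// signature of Process and turns every call site into a compile-time error until the new case is handled.
static string Describe(IOperationResult result) =>
Process(
result,
success => $"Success with result {success.Result}",
apiFailure => $"Api failure with status code {apiFailure.StatusCode}",
validationFailure => $"Validation failure: {string.Join(", ", validationFailure.Errors)}");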
} The advantages here are the following: (1) there is only one point where the switch statement is done; (2) each time a new implementation of IOperationResult is defined there is only one point that needs to be modified (doing so, the signature of the Process function gets modified too); (3) the modification done at the previous point produces several compile time errors where the Process function gets called. These errors must be fixed, but we can trust the compiler being able to find all the points to be modified. A more object oriented alternative is modifying the definition of IOperationResult by adding one method per each intended usage of the operation result, so that the switch statement can be avoided once more and the only thing to do is actually writing a new implementation of the interface. This is an example in the hypothesis that there are two different consumers of IService : interface IOperationResult
{
string ToEmailMessage(); // used by the email sender service
ICommand ToCommand(); // used by the command sender service
} Any thoughts ? Are there other or better alternatives ? | The problem The purpose of having these result classes derive from the same interface is so that the interface becomes what the consumer knows and works with. The consumer doesn't care about the specific implementing classes. However, your interface doesn't contain anything. You're using it as a marker interface. If I, as a consumer, receive an IOperationResult object, what can I do with it? Nothing . Because the IOperationResult interface defines no contract whatsoever. That defeats the purpose of having your result classes share the same interface. You've essentially enforced them all to comply to the same contract, but put an empty contract in place so it's literally impossible to not comply with it. Your first solution Your Processors class is effectively trying to reinvent the event delegation wheel. You're defining specific event handlers (the Func objects) that handle every outcome. But this still violates OCP for the exact same reason. You still have the switch which needs to be expanded whenever a new result type is developed. All consumers will still need to add a new handler for a new result type. It's still the same problem. You've just obfuscated it with some additional complexity. Your second solution You've found a new way to violate OCP. Now, instead of having to expand the code every time a result type is developed, you're having to expand the interface whenever a new consumer is developed. It's the same problem all over again. On top of that, now your core logic needs to somehow know how each consumer wants its own result to be handled, and your core logic is going to have to pre-chew the result exactly how every particular customer wants it. This will lead to sheer insanity in your core logic that now has to account for the handling of its own outcome (based on every individual consumer's needs); which means you're violating SRP on top of OCP. My proposed solution Overall, it seems like you've quite understood the core issue of OCP, as you've not avoided it in any of the solutions you've claimed were solutions. How you develop your interface correctly depends on what you want to do with it. Based on your usage example in the switch case, it seems you're primarily interested in two things: whether it was a success, and a possible message to inform the consumer further. Taking that, your interface becomes straightforward: public interface IOperationResult
{
bool IsSuccess { get; }
string Message { get; }
} And your implementations become straightforward: public class Success : IOperationResult
{
public bool IsSuccess => true;
public string Message { get; set; }
}
public class ApiFailure : IOperationResult
{
public bool IsSuccess => false;
public string Message { get; set; }
} However, in the interest of not overengineering, there is a simpler approach here. Based on your current needs, you don't really need separate classes. The only thing that's different is going to be the values contained in the result object, not the structure of the result object itself. It's a lot cleaner here to do away with the interface and simply use a straightforward DTO class. What you now call derivations (success, API failure, ...) can be expressed as static methods, which you essentially use as "constructors with a name", like so: public class OperationResult
{
public bool IsSuccess { get; private set; }
public string Message { get; private set; }
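// read-only to consumers: the private setters mean only the named factory methods below can populate these values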
//This ensures you can only instantiate an object via the static methods
private OperationResult() {}
public static OperationResult Success()
{
return new OperationResult()
{
IsSuccess = true,
Message = String.Empty
};
}
public static OperationResult ValidationFailure(string message)
{
return new OperationResult()
{
IsSuccess = false,
Message = message
};
}
} Which you can then use wherever you need it: if( a == b )
return OperationResult.Success();
else
return OperationResult.ValidationFailure("a does not equal b"); In this example, I made it so that a message would only be given for a failure. That's just an example of how you can force certain outcomes to have certain values, which enables you to set your resulting values exactly how you want them in every case. For the situation you described in the question, the above class suffices. As a general rule, don't overengineer simple things. It's just going to cost time and effort now, and in the future when you have to maintain it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/403673",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/310179/"
]
} |
403,911 | According to my experience, Wikipedia and prior answers , a scripting language is vague category of languages which are high-level (no manual memory management) and interpreted. Popular examples are Python, Ruby, Perl and Tcl. Some scripting languages can be " embedded ". For example: Lua is frequently embedded in video game applications. TCL is embedded in the fossil version control system It is sometimes said that Lua is more easily embedded than Python or that JavaScript is difficult to embed, because the size of the interpreter . Similarly, Wren is "intended for embedding in applications". What factors make a language embeddable? Is it solely the size and speed of the base interpreter or do other factors come into play? | Embedding a language (I'll avoid characterizing it as "scripting") means that the following has been done: The interpreter and runtime are running in the same process as the host application Enough of the standard types and the standard library are also available from within that runtime Most times, the application has its own library available to the host application The first bullet is literally the definition of embedding. The main reason to embed a language into an application is to provide an easy means of extending the functionality of the application. Reasons include: Creating macros to perform complex steps repeatably as fast as possible (e.g. Photoshop, Gimp) Programming game elements by less technical people (many games have some level of embedded language to create mods, characters, etc.) So the big question is then, what factors simplify embedding? Complexity of the interpreter and/or runtime environment (simpler is easier) Size of the standard library (smaller is easier) Layers of indirection (fewer are better, Typescript recompiles down to JavaScript like C++ used to recompile down to C, there is no native Typescript environment) Compatibility of underlying architecture (several languages are implemented on the Java runtime or .Net runtime, which makes it easier to embed due to the similarity of the underlying environment) Bottom line is that it is possible to embed a wide range of languages into another application. In some cases, the hard work has already been done for you and you simply need to include the language into your app. For example, IronPython is built on .Net and Jython is built on Java allowing you to easily embed Python into applications built on those platforms. As far as how robust or complete the implementation is, you will get mixed results. Some projects are more mature than others. Some languages are just easier to implement (there is a reason why LISP was one of the first embedded languages). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/403911",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/98711/"
]
} |
404,105 | I made the following diagram to show a typical separation of concerns as typically taught - Here ClassA indirectly uses ClassB via the ISomeInterface , of course ensuring it doesn't know ClassB exists, only the methods within the interface, which ClassB implements. All the information I can find on this separation of concerns ends here, and nowhere can I find out how classA can actually use the interface without coupling itself to ClassB . You can't of course instantiate nothing but an interface, an interface has no implementation or functionality itself. So how does ClassA use the interface? There are only two ways currently that come to mind - 1) ClassA does the following: ISomeInterface obj = new ClassB(); Here we can make sure we're not calling any members of ClassB directly, only interface members. The problem though is that ClassB has leaked through to ClassA via the instantiation. 2) ClassA relies upon the interface only, delegating this responsibility of passing the classB object elsewhere, via having the following constructor: class ClassA {
ISomeInterface obj;
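// ClassA holds only the abstraction; whatever constructs ClassA decides which concrete implementation to pass in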
ClassA(ISomeInterface obj) {
this.obj = obj;
}
} This of course completely decouples ClassA from ClassB , however all it does is "pass the buck" elsewhere since someone, somewhere , must instantiate an implementation of ISomeInterface (such as ClassB ) and pass it as an object to ClassA . All the tutorials and explanations I can find leave out this last crucial detail. Who exactly is responsible for doing this last crucial thing? It has to happen somewhere. And is the thing that does this now coupling itself to both ClassA & ClassB ? | Coupling ClassA relies upon the interface only, delegating this responsibility of passing the classB object elsewhere This is the idea. If you are separating ClassA from ClassB by the use of an interface ISomeInterface ... ensuring ( ClassA ) doesn't know ClassB Then you do not want ClassA to instantiate ClassB . Instead it must recieve an object typed by the interface (for example, it can be received in the constructor). How does this help encapsulation? You do not get encapsulation for free. However, given that ClassA will only use ClassB via its interface, you should define interfaces which are sufficient for that use. Ideally you would also remove any part of the interface that is not used. Then you will know exactly what ClassB needs to expose. You still need to design a good interface. however all it does is "pass the buck" elsewhere since someone, somewhere, must instantiate an implementation of ISomeInterface (such as ClassB ) and pass it as an object to ClassA . Yes. It has to happen somewhere. And is the thing that does this now coupling itself to both ClassA & ClassB ? Yes. Coupling is unavoidable. You are not trying to make it zero. You are trying to keep it low. Think of it as a cost you have to pay... too much coupling means that maintenance is hard. However, maintenance will never be a zero effort task. Factories I will continue on the assumption that we do dependency injection on the constructor. You want to have a single source of truth of how to get a ClassA object. That is, you do not want to call the constructor passing an object of ClassB every time. You want to encapsulate that and reuse it. If in the future you need to replace ClassB by ClassC , then there will be a single place where to change that. If there is some logic that picks ClassB or ClassC , there is a single place where that logic will be. If the instances need to be pooled, you can do that there too. Thus, you will have a piece of code that has as responsibility to give you an object of ClassA bound with the correct implementation of ISomeInterface . And that is the only responsibility of that code. We could refer to that piece of code as a factory. Note : Creating a factory for each class should not be a goal. You only need the ones you need. Depending on what you are doing, it could make sense that in some places you want ClassA created with ClassB and in others you want ClassA created with ClassC . In fact, this could depend on external input. For another example, let us say we have a video game, where there are some characters. Sometimes they are controlled by the local user, sometimes they are controlled by a player connected through the network, sometimes they are controlled by AI. In this case, we would have three valid implementations, and they could all co-exist. For another example, it could be that we have a video game that can save progress to local storage, or to a cloud service that allows you to continue playing on a different machine. 
That is, we have two implementations to save progress, and which one we use depends on user input. Where to put factories? Yes, you can apply complex patterns to handle your factories. In fact, I'm saying the factory is a piece of code to keep it vague. However, let me point this out: If you are making a library... unless your goal is to make a library dedicated to factory patterns, let the developer of the application decide what factory patterns to use, by not providing any. Exactly where you want to put your factory code will depend on what kind of application you have. For simple console and desktop projects, you can do all the dependency injecting in main . If you are doing something more complex (a network server, for example), you might want some factory solution. Usually, you want to use it after deciding what code will handle input, and after the dependency injection is done, the code does not need to know what factory solution you used. By factory solution I mean service locators, IoC containers… factory methods, abstract factories… all that stuff. Whatever patterns you pick. If you pick a library that does it… guess what? It is a dependency! Muahahaha. So, yeah, after the dependency injection is done, you do not want any code to be aware of it. Architecture Abstracting things away is good™. In fact, it happens naturally by the single responsibility principle. By the way, I think you mean “single responsibility principle” instead of separation of concerns. I believe that when they tell you that a layer can only interact with lower layers via an interface, they not abstract interface types, but an interface as in API. However, we could be mixing all the layer talk with the concept of abstracting external systems… The software engineer needs to make an executive decision about the scope of the project, which decides what is part of the system and what is external. Draw a line. An architectural line. On the left of the line you code that deals with external systems. Which includes writing adapters which abstract the external system. That code must be interchangeable, allowing you to easily swap it in the future if needs be. Note : I am not suggesting to wrap every dependency in adapters. Only external systems. So, no, I'm saying that you need to make an adapter for every library you use, nor the ridiculous idea of isolating from the runtime. While it is true that those would have their own reasons to change, you follow the single responsibility principle and that is enough. This is about external systems as in IO. Also, I'm saying that you decide what is external. Therefore, you do not want the code on the right to depend on code on the left (which could have to change because of external reasons). However, the code on the left can depend on code on the right (which is under your control). Thus, you will have the adapters (which are on the left) implement an interface (that exists on the right), and the rest of the code on the right of the architectural line will depend on the interface only… However, somebody has to instantiate the adapter. Where does that code go? It goes to the left of the architectural line. Because, remember, the code to the right cannot depend on the code on the left. Thus the code that instantiates the adapter (which is to the left), must be to the left. Now, surprise: the operating system is an external system. main is a method that handles an event from the operating system (namely the start of the program). 
And since the operating system is external, main goes on the left of the line, and thus it can instantiate the adapters, and it can call code on the right. Well, actually it would call the factory. Similarly, if you have a form and event handlers the event handlers are on the left. Or if you have web server, the router is on the left. You want that code ( main , event handlers, router) to call the factory, inject the dependencies, and let the execution flow to the right. On the flip side. Your code on the right might want to call the external systems. Remember that this is possible via the interfaces. I want to point out that there is another way to do this. Since the flow of execution starts on the left, it is often possible to let the right return, then the left can execute more code. What I’m describing here? The idea of the architectural line comes from Clean Code. I am taking inspiration from the idea of “functional core, imperative shell”: pure functions can only call pure functions. However impure functions can call pure and impure functions (pure to the right, impure to the left). Bonus chatter: You might be aware that async code has a tendency to propagate. Because async methods can call synchronous methods and await other async methods. However synchronous cannot await async methods. However, you can use the architectural line to keep it on the left (synchronous to the right, async to the left). Why do I say left and right instead of talking of higher and lower layers? Because I would treat both the database and the user interface as external systems. However, classic tier architecture will say that the database is a lower layer and the UI is a higher layer. Also, the communication between internal layers does not have to be held to the same standards. Providing a concrete facade or allowing a “higher” layer to instantiate types of the lower one is acceptable inside the system. Although, sometimes you want to draw more lines. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/404105",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/355487/"
]
} |
404,116 | This is kind of similar to the Two Generals' Problem , but not quite. I think there is a name for it, but I just can't remember it right now. I am working on my website's payment flow. Scenario Alice wants to pay Bob for a service. Bob has quoted her US$10. Alice clicks pay. Whilst Alice's request is flying through the ether, Bob edits his quote. He now wants US$20. Bob's request finishes before Alice's has reached the server. Alice's request reaches the server and her payment is authorized for US$20 instead of US$10. Alice is unhappy. Whilst the chances of this are very low in practice, it is a possible scenario. Sometimes requests can hang due to network issues, etc... Possible Mitigations I don't think this problem is solvable. But we can do things to mitigate it. This is not exactly an idempotency issue, so I don't think the answer is "idempotency token". Option 1 Let's define: t_0 as the time Alice click pay. t_edit as the time Bob's edit request succeeds t_1 as the time Alice's request reaches the server Since we cannot know t_0 unless we send it as part of the request data, and because we cannot trust what the client sends, we will ignore t_0 . At the time Alice's request arrives in the server, we check: if t_1 - t_edit < 1 minute: return "409 Conflict" (or some other code) Would this approach work? 1 minute is an arbitrary choice, and it doesn't solve the problem entirely. If Alice's request takes 1 minute or more to reach the server, the issue persists. This must be an extremely common problem to deal with, right? | Alice wants to pay Bob for a service. Bob has quoted her $10. Give this quote a unique token. Alice clicks pay. When this response is send to the server, it must go with the token of what is being paid. This also allows you to discard duplicate payments. Whilst Alice's request is flying through the ether, Bob edits his quote. He now wants $20. Bob's request finishes before Alice's has reached the server. That has a new different token※. The server must invalidate the old one. Alice's request reaches the server and her payment is authorized for $20 instead of $10. No, it isn't. Alice token does not match. ※: The server must send Alice the new quote, with the new token. And alice must click pay again. For user experience, you can also add a timeout. That prevents the token to be used right away. This timeout can either be only client side or networked. The purpose is to give some time to the user to notice the change. This must be an extremely common problem to deal with, no? Many online video games that allow players to trade face this problem. A simple 5 seconds timeout can save support a lot of headaches. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/404116",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/147328/"
]
} |
404,195 | I am in a predicament. I am working in a repository that makes a lot of database changes within its functions. The function I am dealing with returns responseIds (but from a transformation, not from any database action). However, as a side effect, it adds an object containing those responseIds to the database. So should I name it: getResponseIds : This highlights the return values. It is a very functional way of thinking, but obviously if I have function postToDB it makes no sense to use getStatusOfPost addResponseIdToDB : This highlights the side effect, although I truly think that many of my functions just operate on the database (and tend not to return anything) getAndAddResponseIdsToDB : Very informative, but very long. What are the pros and cons on the suggestions above? Or can you make a better suggestion yourself? | A function name that contains and is at the wrong level of abstraction . I lean towards addResponseIdToDB() because otherwise the ‘side effect’ is a complete surprise. However: responseIds = addResponseIdToDB(); doesn’t leave anyone surprised. The command query responsibility segregation principle argues that this should not be the only way to get the responseId object. There should also be a query that does not change the DB that can get this object. Contrary to Bertrand Meyer , I don't believe this means the add has to return void. Just means an equivalent pure query should exist and be easy to find so the DB doesn't get needlessly abused by use of state changing queries. Given that getResponseIds() should exist and not talk to the database, the best name for a method that does both is actually addToDB(getResponseId()) . But that's just if you want to get all functional composition about it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/404195",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/355774/"
]
} |
404,250 | Over the years, I have seen quite a few questions on this site along the lines of "can I invent my own HTTP response codes"? Generally asked by those who are developing both the server and client. The responses tend to go towards sticking with standard codes. If I stick with standard HTTP status code numbers , is there any technical reason not to use custom text in order to differentiate between, let's say, multiple 501 responses? To reiterate, no client except mine will ever see these values, which are returned by AJAX in repose to authenticated requests. | While you may control all the clients and servers, you also has a third end you need to be aware of: the intermediates. The intermediates are web servers, proxies, caches, web application firewall, load balancer, CDN, and other intermediate systems that processes the HTTP message that may stand between your clients and servers. Even if you use end to end encryption and control all the intermediaries used, it's generally easier to integrate new intermediaries into the system when you stick to the standards. With that said, unless configured otherwise, standard compliant intermediaries normally should only use status code when deciding how to process the message and should have ignored the reason phrase, so in most cases it should be safe to customise the reason phrase. The reason phrase should be reserved to only carry human readable reason that shouldn't affect how the message should be processed. Some references from RFC 7231 (Section 6.1) (emphasis mine): The status codes listed below are defined in this specification, Section 4 of [RFC7232], Section 4 of [RFC7233], and Section 3 of [RFC7235]. The reason phrases listed here are only recommendations -- they can be replaced by local equivalents without affecting the protocol. And from RFC 7230 (Section 3.1.2) (emphasis mine): The reason-phrase element exists for the sole purpose of providing a textual description associated with the numeric status code, mostly out of deference to earlier Internet application protocols that were more frequently used with interactive text clients. A client SHOULD ignore the reason-phrase content. So what should you do instead if standard HTTP status codes aren't sufficient for you? Use the status code. HTTP Status code is designed to be extensible. Refer to RFC 7231 Section 6 on how to extend HTTP status code in a way that would remain backwards compatible with clients and intermediaries that doesn't understand the extended status code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/404250",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/979/"
]
} |
404,344 | Context: I'm an embedded dev with only 2 years of solid experience. I'm the sole technical employee of a startup of 4 people. We have an MVP of our product out and are getting ready to develop the next iteration of it. The original MVP was developed by a partnered contractor team, with one older embedded dev doing everything software. I joined the company too late to have any input into the design of the MVP. The product is a gateway-type device: embedded Linux, messages come in one way, some limited intelligence happens inside, they come out on the other side. The 'problem' : Everything in the system seems to be chucked into a single SQLite database. Processed and unprocessed messages live in the same table (with one field used to indicate which one they are), provisioning related things live another table, even the logging and debug is done by writing to yet another table in the database. The system is written largely in python but the biggest part of it is a massive class full of wrappers for complex SQL which seem to do the bulk of the data manipulation. All of this makes me uncomfortable, especially the messages part as it looks like a classic case of "database as a queue" design anti-pattern. That being said I find it difficult to articulate exactly why this is wrong to my non-dev boss, other than vague and nebulous mentions of maintainability, difficulty in introducing changes because of lack of modularisation, as well as of lack of clarity in how data flows and is being processed. It doesn't help that the contractor has "authority of years of experience" over my judgment. Am I justified in feeling uncomfortable about this design choice? I mean, the thing works in principle. I know that the desire to refactor can be pretty strong, irrational and should not always be acted upon given business constraints. But then... I kinda feel like starting mostly from scratch only ripping out useful bits of the first MVP would be cleaner and take less time than working with the system created so far. But is it just my younger-person-hot-headedness? | Your first problem is that you think of the design as "wrong." That's really not the right way to consider things. Rather, different designs make different design trade-offs. Any design has pros and cons that have weighed against each other. If there is a problem with the design it's not that is "wrong" but rather that it makes a poor choice of trade-offs. Don't think of it or explain it as, the design is wrong, rather think in terms of how a better trade-off might exist. The fact that you are using SQLite and an embedded Linux system makes database as queue far less problematic then it would be in other circumstances. Database-as-queue has two major drawbacks: Database operations are fairly expensive, reducing the performance. New messages have to be detected by repeatedly querying the database to check for them. However, SQLite operations are much less expensive than a typical database, so the performance loss is minimal. SQLite also supports a data changed notification that allows the program to detect changes to the database without repeatedly querying the database. SQLite gives you a lot of functionality for free. Basically, you define your tables, and SQLite takes care of persisting them to disk, loading them back when you restart, ensuring that the file does not get corrupted, giving you powerful querying tools, allowing complex operations on the data. 
Furthermore, SQLite gives you all of this without configuring other applications, it's just a library your code is using. If you don't use SQLite for all of this stuff, you'll have to implement that functionality yourself. In my assessment, using SQLite in this way is a pretty sensible choice. Your response comes across to me as somebody who hasn't used databases enough to know their power. Of course, there may be other considerations I don't know that change that, or it could be the system is simply poorly constructed in other ways. But if you do want to take your concerns to your boss: Make a good faith effort to work with the system as it stands first, that will make it more plausible when you make the case. Also, you'll have a much better idea of what does and doesn't work. And maybe you'll learn to love SQLite. Be able to identify precise things your boss cares about: better performance, more features, etc. in making the case. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/404344",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/303762/"
]
} |
404,369 | Encapsulation In object-oriented programming (OOP), encapsulation refers to the
bundling of data with the methods that operate on that data, or the
restricting of direct access to some of an object's components. 1 Encapsulation is used to hide the values or state of a structured data
object inside a class, preventing unauthorized parties' direct access
to them. Wikipedia - Encapsulation (Computer Programming) Immutability In object-oriented and functional programming, an immutable object (unchangeable object) is an object whose state cannot be modified after it is created. Wikipedia - Immutable object If you can guarantee immutability, do you need to think about encapsulation? I have seen these concepts being used in explaining ideas in object-oriented programming (OOP) and functional programming (FP). I tried to investigate on the topics of encapsulation, immutability and their relation to one another. I couldn't find a post that explicitly asked if encapsulation is guaranteed if you have immutability. Please correct me if I have misunderstood anything on the topic of encapsulation or immutability. I wish to understand these concepts better. Also, direct me to any other posts that have been done on the topic which answers the question above. | The question Casting your question to real life: Is it okay for your doctor to post your private medical records publicly to Facebook, provided no one (other than you) is able to change it? Is it okay for me to let strangers in your house, provided they can't steal or damage anything? It's asking the same thing. The core assumption of your question is that the only concern with exposing data is that it can be changed. As per your reference material: Encapsulation is used to hide the values or state of a structured data object inside a class, preventing unauthorized parties' direct access to them. The ability to change values or state is definitely the biggest concern, but it's not the only concern. "Direct access" entails more than just write access. Read access can be a source of weakness as well. A simple example here is that you are generally advised to not show stacktraces to an end user. Not just because errors shouldn't occur, but because stacktraces sometimes reveal specific implementations or libraries, which leads to an attacker knowing about the internal structure of your system. The exception stacktrace is readonly, but it can be of use to those who wish to attack your system. Edit Due to confusion mentioned in the comments, the examples given here are not intended to suggest that encapsulation is used for data protection (such as medical records). This part of the answer so far has only addressed the core assertion that your question is built upon, i.e. that read access without write access is not harmful; which I believe to be incorrect, hence the simplified counterexamples. Encapsulation as a safety guard Additionally, in order to prevent write access, you would need to have immutability all the way to the bottom. Take this example: public class Level1
{
public string MyValue { get; set; }
}
public class Level2 // immutable
{
public readonly Level1 _level1;
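// readonly only stops the reference from being replaced; the Level1 object it points to can still be mutated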
public Level2(Level1 level1) { _level1 = level1; }
}
public class Level3 // immutable
{
public readonly Level2 _level2;
public Level3(Level2 level2) { _level2 = level2; }
} We've let Level2 and Level3 expose their readonly fields, which is doing what your question is asserting to be safe: read access, no write access. And yet, as a consumer of a Level3 object, I can do this: // fetch the object - this is allowed behavior
var myLevel3 = ...;
// but this wasn't the intention!
myLevel3.Level2.Level1.MyValue = "SECRET HACK ATTACK!"; This code compiles and runs perfectly fine. Because read access on a field (e.g. myLevel3.Level2 ) gives you access to an object ( Level2 ) which in turn exposes read access to another object ( Level1 ), which in turn exposes read and write access to its MyValue property. And this is the danger of brazenly making everything immutably public. Any mistake will be visible and become an open door for unwanted behavior. By needlessly exposing some things that could easily have been hidden, you have opened them up to scrutiny and abuse of weakness if any exists. Edit Caleth mentioned that a class is not immutable if it exposes something that itself is not immutable. I think that this is a semantical argument. Level2 's properties are readonly, which ostensibly makes it immutable. To be fair, if the law of Demeter had been followed in my example, the issue wouldn't have been as glaring since Level2 wouldn't expose direct access to Level1 (but that precludes the issue I was trying to highlight); but the point of the matter is that it's a fool's errand to try and ensure the immutability of an entire codebase. If someone makes one adjustment in a single class (that a lot of other classes depend on in some way), that could lead to an entire assembly worth of classes becoming mutable without anyone noticing it. This issue can be argued to be a cause of a lack of encapsulation or not following the law of Demeter. Both contribute to the issue. But regardless of what you attribute it to the fact remains that the this is unmistakably a problem in the codebase. Encapsulation for clean code But that's not all you use encapsulation for. Suppose my application wants to know the time, so I make a Calendar which tells me the date. Currently, I read this date as a string from a file (let's assume there is a good reason for this). public class Calendar
{
public readonly string fileContent; // e.g. "2020-01-28"
public DateTime Date => DateTime.Parse(fileContent);
public Calendar()
{
fileContent = File.ReadAllText("C:\\Temp\\calendar.txt");
}
} fileContent should have been an encapsulated field, but I've opened it up because of your suggestion. Let's see where that takes us. Our developers have been using this calendar. Let's look at Bob's library and John's library: public class BobsLibrary
{
// ...
public void WriteToFile(string content)
{
var filename = _calendar.fileContent + ".txt"; // timestamp in filename
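// reads the raw fileContent implementation detail instead of the Date property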
var filePath = $"C:\\Temp\\{filename}";
File.WriteAllText(filePath, content);
}
} Bob has used Calendar.fileContent , the field that should've been encapsulated, but wasn't. But his code works and the field was public after all, so there's no issue right now. public class JohnsLibrary
{
// ...
public void WriteToFile(string content)
{
var filename = _calendar.Date.ToString("yyyy-MM-dd") + ".txt"; // timestamp in filename
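// relies only on the public Date contract, not on how Calendar obtains the date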
var filePath = $"C:\\Temp\\{filename}";
File.WriteAllText(filePath, content);
}
} John has used Calendar.Date , the property that should always be exposed. At first glance, you'd think John is doing unnecessary work by converting the string to a DateTime and back to a string . But his code does work, so no issue is raised. Today, we have learned something that will save us a lot of money: you can get the current date from the internet! We no longer have to hire an intern to update our calendar file every midnight. Let's change our Calendar class accordingly: public class Calendar
{
public DateTime Date { get; }
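// the public contract (Date) is unchanged, so consumers that depended only on it keep working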
public Calendar()
{
Date = GetDateFromTheInternet("http://www.whatistodaysdate.com");
}
} Bob's code has broken! He no longer has access to the fileContent , since we're no longer parsing our date from a string. John's code, however, has kept working and does not need to be updated. John used Date , the intended public contract for the calendar. John did not build his code to rely on implementation details (i.e. the fileContent from which we parsed the date in the past), and therefore his code can effortlessly handle changes to the implementation. This is why encapsulation matters. It allows you to disconnect your consumers (Bob, John), from your implementation (the calendar file) by having an intermediary interface (the DateTime Date ). As long as the intermediary interface is untouched, you can change the implementation without affecting the consumers . My example is a bit simplified, you'd more likely use an interface here and swap out the concrete class that implements the interface for another class that implements the same interface. But the issue I pointed out remains the same. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/404369",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/356081/"
]
} |
404,376 | We are automating the testing on an Web ERP solution (Dynamics) through a tool (RSAT, which uses selenium) provided by the developer of the ERP (Microsoft). The RSAT has a list of instructions to do some actions on the pages and it takes the values to use from an excel file. The RSAT can be used with command lines. So at first we started using a PowerShell script and Azure DevOps to launch the automated tests right after the code packages has been deployed to the testing environment. It was a few dozen lines long and it was fine. Then we started switching the values in the parameter excel file with other values to cover more tests values with the same test case. It added a few hundred lines to the script. Then we generated a file in which we compiled all the results of the tests, We added some logs, We sent the result file by mail, We queried and rolled back the database right after finishing the tests, Well, my problem is that PowerShell script is growing a lot (we actually have multiple scripts now with script 1 calling script 2 when it ends and chaining all the actions) and we still have many features and ideas to add. My question is: at which point should we say stop, PowerShell is not meant to do this, for the sake of
maintainability and stability we should switch to [Python/C#/...] (Maybe I'm totally wrong and using multiples chained PowerShell scripts is actually good practice, especially when you use Azure DevOps) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/404376",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/298320/"
]
} |
404,567 | Let's say there is a member SomeMethod in an interface ISomeInterface as follows: public interface ISomeInterface
{
int SomeMethod(string a);
} For the purposes of my program, all consumers of ISomeInterface act upon the assumption that the returned int is greater than 5. Three ways come to mind to solve this - 1) For every object that consumes ISomeInterface , they assert that the returned int > 5. 2) For every object that implements ISomeInterface , they assert that the int they're about to return is > 5. Both the above two solutions are cumbersome since they require the developer to remember to do this on every single implementation or consumption of ISomeInterface . Furthermore this is relying upon the implementation of the interface which isn't good. 3) The only way I can think to do this practically is to have a wrapper that also implements ISomeInterface , and returns the underlying implementation as follows: public class SomeWrapper : ISomeInterface
{
private ISomeInterface obj;
SomeWrapper(ISomeInterface obj)
{
this.obj = obj;
}
public int SomeMethod(string a)
{
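// Delegate to the wrapped implementation, then enforce the expected post-condition on its result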
int ret = obj.SomeMethod(a);
if (!(ret > 5))
throw new Exception("ret <= 5");
else
return ret;
}
} The problem now, though, is that we're again relying on an implementation detail of ISomeInterface via what the SomeWrapper class does, although with the benefit that we've confined it to a single location. Is this the best way to ensure an interface is implemented in the expected manner, or is there a better alternative? I understand interfaces may not be designed for this, but then what is the best practice for using an object under the assumption that it behaves in a certain way, beyond what I can convey in its interface member signatures, without needing to add assertions every time it's instantiated? An interface seems like a good concept, if only I could also specify additional things or restrictions it's supposed to implement. | Instead of returning an int , return a value object that has the validation hard-coded. This is a case of primitive obsession and its fix. // should be class, not struct as a struct can be created without calling a constructor
public class ValidNumber
{
public int Number { get; }
public ValidNumber(int number)
{
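// The invariant is checked at construction time, so an invalid instance can never exist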
if (number <= 5)
throw new ArgumentOutOfRangeException("Number must be greater than 5.");
Number = number;
}
}
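For the implementation below to compile, the interface itself would also need to change so that SomeMethod returns the value object instead of a raw int; a minimal sketch of the updated interface (my addition, implied by the answer rather than shown in it): public interface ISomeInterface
{
ValidNumber SomeMethod(string a);
}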
public class Implementation : ISomeInterface
{
public ValidNumber SomeMethod(string a)
{
return new ValidNumber(int.Parse(a));
}
} This way, the validation error would happen inside the implementation, so it should show up when developer tests this implementation. Having the method return a specific object makes it obvious that there might be more to it than just returning a plain value. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/404567",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/355487/"
]
} |
404,661 | I've a habit I just mechanically do without even thinking too much about it. Whenever a constructor is waiting for some parameters, I consider this a public information that should be available by the calling code later on, if desired. For example: public class FooRepository : IFooRepository
{
public FooRepository(IDbConnection dbConnection)
{
DbConnection = dbConnection ?? throw new ArgumentNullException(nameof(dbConnection));
}
public IDbConnection DbConnection { get; }
} The calling code which instantiated a FooRepository object is passing an IDbConnection object and therefore has the right to access this information later on but can't modify it anymore (no set on the DbConnection property) The dbConnection parameter could be passed explicitly or by dependency injection, it doesn't matter. The FooRepository shouldn't be aware of such details. However, yesterday when doing peer programming with a coworker, he told me that any class I write should expose just the minimum useful information.
He said developers shouldn't be able to analyse and mess with the internal state of an object. I don't quite agree with him. In order not to waste too much time, I don't want to spend a few minutes on each parameter deciding whether it would be a good idea to expose it or not.
In my opinion, there are some use cases we simply can't think of when we first write a new class. Whether or not the class will finally be included in a Nuget package doesn't really matter. I just don't want to limit users of my class from accessing information they explicitly passed when instantiating the object, or that could easily be retrieved from the dependency injection framework. Could someone explain to me what is considered good practice here? Should I really think about whether each parameter makes sense to expose? Or is there a design pattern I can just instinctively apply without wasting too much time? Any resources on the subject are welcome. | The calling code which instantiated a FooRepository object is passing an IDbConnection object and therefore has the right to access this information later on. This is not true when you're dealing with things like the factory pattern, where the instantiator of the object is not the handler of the object. Factory patterns quite often exist specifically because the object's construction is an implementation detail that should be abstracted away. This applies to more cases than just the factory pattern. Essentially, it applies to any object that gets passed around at least once. but can't modify it anymore (no set on the DbConnection property). This isn't true for reference types. It's true that you can't change which object is being referenced, but you can still alter its content. For example: public class Foo
{
public string Name { get; set; }
}
public class Baz
{
public Foo Foo { get; } // allegedly: "can't modify it anymore"
public Baz(Foo foo)
{
this.Foo = foo;
}
}
var myFoo = new Foo() { Name = "Hello" };
var myBaz = new Baz(myFoo); As per your claim, myBaz.Foo can no longer be modified. Yet this code is perfectly legal: myBaz.Foo.Name = "a completely different name"; And that's still a risk you take. he told me that any class I write, it should expose just the minimum useful information. I don't want to think few minutes for each parameter to determine if this would be a good idea to expose it or not. These two don't quite follow. It doesn't require you to think about it, it requires you to default to private instead of public like you currently do. Unless there is a valid reason to expose it, don't. This is an oversimplification as there are cases where you shouldn't start out on private (e.g. DTO properties), but if you're still struggling with evaluating this, it's already better to default to private instead of public . In my opinion, there are some use cases we simply can't think of when we first write a new class. In my opinion, this is indicative of not quite understanding the class' responsibility and how it fits in the existing codebase. In fact, that's sort of what you state in the question: you don't want to think about it . But you really should. For your example, what would ever be the purpose of a repository exposing its database connection? I can't think of any answer here that does not immediately violate good practice rules, can you? Exposing the database connection is not part of the repository's purpose, which is all about providing access to a persistent data store. In part, this is a matter of experience which will come over time. Every time you have to change the access modifier on an existing property/method is a time to learn why the previous choice was not the right one. Do it enough and you will improve at judging public contracts on the first design. In my opinion, there are some use cases we simply can't think of when we first write a new class. Don't forget OCP : "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification" . If you are inherently accounting for needing to change the internals of classes as time passes, you're taking a stance orthogonal to OCP. That's not to say internals can't be changed when e.g. bugs are found or breaking changes are implemented; but it does mean you should try to avoid it as best as you can. Changing existing (often central) logic is a most common source of bugs, especially the crippling ones. Whether or not the class will finally be included in a Nuget package doesn't really matter. It really does matter. If your library is only being used in the same solution file, you can change things very quickly to your needs and can confirm it's working with a simple build. But Nuget compounds the issue. If you change the contracts of your classes exposed in yout Nuget package, that means that every Nuget consumer will have to deal with breaking changes. From personal experience, the issue is further compounded by Nuget servers not keeping a record of who has consumed your Nuget package, which makes it hard to figure out who all your consumers are and warn them ahead of time that breaking changes are about to be released. Had you defaulted to making things private , and then selectively expose them, there would be less of a problem here. Adding to the contract without changing the existing parts does not break existing code. 
Removing things from the contract, which is what would happen if you default to public , would always be liable to breaking code that depends on the thing you're now removing from the contract. Should I really think for each parameter if it makes sense to expose it? Yes. But it's not as complicated as you're making it out to be. Understanding what a certain class needs to expose or not is something you need to think about once per class. What is this class' purpose? How do I want this class to be used by its consumers? After that, all properties/methods that you develop can easily be matched to the class' purpose, which is not a new evaluation but simply applying the decision you already made. Or is there a design pattern I can just instinctively apply without wasting too much time? If you were using interfaces on all your classes and using interface-based dependency injection, it would really help you in understanding how to separate a class' contract (things in the interface) from its implementation (things not in the interface). Take for example: public interface ISodaVendingMachine
{
Soda GetDrink();
}
public class RegularVendingMachine : ISodaVendingMachine
{
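// Implementation detail: this machine dispenses from a physical stock of drinks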
private Drinks drinks;
public RegularVendingMachine(Drinks drinks)
{
this.drinks = drinks;
}
public Soda GetDrink()
{
return this.drinks.TakeOne();
}
}
public class ConjuringVendingMachine : ISodaVendingMachine
{
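// Implementation detail: this machine conjures a drink with the philosopher's stone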
private PhilosophersStone philosophersStone;
public ConjuringVendingMachine(PhilosophersStone philosophersStone)
{
this.philosophersStone = philosophersStone;
}
public Soda GetDrink()
{
return philosophersStone.PerformIncantation("Drinkum givum");
}
} The internals of each vending machine are up to them. It doesn't matter how they have access to and dispense a drink to the consumer. To the consumer, that's an irrelevant implementation detail. The customer doesn't want to know how the sausage gets made. What matters for the public contracts is that they dispense a drink to the consumer, and thus the ISodaVendingMachine interface is built specifically for that purpose. Notice how the interface doesn't care about anything other than what it was designed to ensure. When you have that interface, you can already see that anything in your class that isn't part of that interface should most likely be private as it is an implementation detail, not a contract. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/404661",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/250661/"
]
} |
404,737 | I'm trying to clean up some of my code using some best practices, and read that comments are almost always a bad idea for future maintainability. I've managed to get rid of most of them by refactoring to methods with good names. However, there are several comments sprinkled around that explain why a single line of code needs to exist. I'm having trouble figuring out a way to get rid of these. They often follow this kind of structure: // Needs to start disabled to avoid artifacts the first frame. Will enable after frame complete.
lineRenderer.enabled = false; Then later on in the code I enable it. I've thought about extracting it into a one-line method called StartLineRendererDisabledToAvoidArtifactsFirstFrame() but this doesn't seem all that clean to me. Are there any best practices for dealing with such one-liners? Many of them explain the existence of code that on the surface looks superfluous but then actually has an important function. I want to guard against future deletion. Sidenote: I've already run into some scenarios where refactoring/renaming has made these comments be out-of-date or in the wrong spot etc. so I definitely see why removing them would be useful. Related but different question here: "Comment everything the right way" and "Instead of writing comments, write more readable code." - Both valid strategies? EDIT BASED ON ANSWERS AND COMMENTS BELOW Here's my takeaway from all the great discussion below. Comments are fine if there's not an easy way to refactor/rename for more clarity. But they should be used sparingly. Removing comments is generally a good thing. But some comments need to exist to help future readers understand the code without having to dig too deep. But they should explain the WHY, not the HOW. If the code is there for a particular reason that is very important or fixes a bug, it should probably also have a corresponding unit test anyway. Commit messages can help track why and where and how the commented code came to be. | and read that comments are almost always a bad idea for future maintainability And now you are reading that the above is total and absolute BULL____ . Use comments. Put in as many as you think are necessary. A lot of people think that comments are used to just describe what the code is doing. Basically narrating the steps. That itself isn't usually very helpful. But comments can also describe the backstory about why the code is doing what its doing. And that can be critical. Case in point - I was recently addressing a bug in a network streaming application. There was already a bug fix in place that took care of part of the problem. Needless to say I was very glad the previous developer who had worked on the problem had left copious comments in the code describing why it was doing things in its particular order, and why that should not be changed. "This should be done after X because...". In your example, in my opinion, that comment is perfectly fine and should be left alone. It describes why the renderer is disabled so that future devs looking at the code won't think it is unnecessary and remove it. Can that information be stored in the JIRA ticket or commit comments? Sure. It can also be stored right in the code that I'm looking at so I won't miss it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/404737",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/157621/"
]
} |
405,038 | When sending a request to another module and expecting a result, it seems to me there are two ways of dealing with the 'non-happy paths'. Throw an exception Return a result object that wraps different results (such as value and error) I would say the first one seems better in general. It keeps the code clean and readable. If you expect the result to be correct, just throw an exception to handle happy path divergence. But what when you have no clue what the result will be? For example calling a module that validates a lottery ticket. The happy path would be that you won, but it probably won't be. (As pointed out by @Ben Cottrell in the comments, "not winning" is also the happy path, maybe not for the end user, though) Would it be better to consider that the happy path is getting a result from the LotteryTicketValidator and just handle exceptions for when the ticket could not be processed? Another one could be user authentication when logging in. Can we assume that the user entered the correct credentials and throw an exception when the credentials are invalid, or should we expect to get some sort of LoginResult object? | You have to distinguish between return values and errors. A return value is one of many possible outcomes of a computation. An error is an unexpected situation which needs to be reported to the caller. A module may indicate that an error occurred with a special return value or it throws an exception because an error was not expected. That errors occur should be an exception , that's why we call them exceptions. If a module validates lottery tickets, the outcome may be: you have won you have not won an error occurred (e.g. the ticket is invalid) In case of an error, the return value is neither "won" nor "not won", since no meaningful statement can be made when e.g. the lottery ticket is not valid. Addendum One might argue that invalid tickets are a common case and not an error. Then the outcome of the ticket validation will be: you have won you have not won the ticket is invalid an error occurred (e.g. no connection to the lottery server) It all depends on what cases you are planning to support and what are unexpected situations where you do not implement logic other than to report an error. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/405038",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/116570/"
]
} |
405,244 | Suppose I'm creating a game played on a 2D coordinate grid. The game has 3 types of enemies which all move in different ways: Drunkard : moves using type 1 movement. Mummy : moves using type 1 movement, except when it's near the main character, in which case it will use type 2 movement. Ninja : moves using type 3 movement. Here are the ideas I've come up with in organizing the class hierarchy: Proposal 1 A single base class where each enemy is derived from: abstract class Enemy:
show() // Called each game tick
update() // Called each game tick
abstract move() // Called in update
class Drunkard extends Enemy:
move() // Type 1 movement
class Mummy extends Enemy:
move() // Type 1 + type 2 movement
class Ninja extends Enemy:
move() // Type 3 movement Problems: Violates DRY since code isn't shared between Drunkard and Mummy . Proposal 2 Same as proposal 1 but Enemy does more: abstract class Enemy:
show() // Called each game tick
update() // Called each game tick
move() // Tries alternateMove, if unsuccessful, perform type 1 movement
abstract alternateMove() // Returns a boolean
class Drunkard extends Enemy:
alternateMove(): return False
class Mummy extends Enemy:
alternateMove() // Type 2 movement if in range, otherwise return false
class Ninja extends Enemy:
alternateMove() // Type 3 movement and return true Problems: Ninja really only has one move, so it doesn't really have an "alternate move." Thus, Enemy is a subpar representation of all enemies. Proposal 3 Extending proposal 2 with a MovementPlanEnemy . abstract class Enemy:
show() // Called each game tick
update() // Called each game tick
abstract move() // Called in update
class MovementPlanEnemy:
move() // Type 1 movement
abstract alternateMove()
class Drunkard extends MovementPlanEnemy:
alternateMove() // Return false
class Mummy extends MovementPlanEnemy:
alternateMove() // Tries type 2 movement
class Ninja extends Enemy:
move() // Type 3 movement Problems: Ugly and possibly over-engineered. Question Proposal 1 is simple but has a lower level of abstraction. Proposal 3 is complex but has a higher level of abstraction. I understand the whole thing about "composition over inheritance" and how it can solve this whole mess. However, I have to implement this for a school project which requires us to use inheritance. So given this restriction, what would be the best way to organize this class hierarchy? Is this just an example of why inheritance is inherently bad? I guess since my restriction is that I have to use inheritance, I'm really asking the broader question: in general, when is it appropriate to introduce a new layer of abstraction at the cost of complicating the program architecture? | I've built a 2D roguelike from pretty much scratch, and after lots of experimentation, I used an entirely different approach. Essentially an entity component architecture. Each game object is an Entity , and an Entity has many attributes which control how it responds to stimuli from the player and the environment. One of these components in my game is a Movable component (other examples are Burnable , Harmable , etc, my GitHub has the full list): class Entity
movable
harmable
burnable
freezable
... Different types of enemies are distinguished by injecting different basic components at object creation time. So something like: drunkard = Entity(
movable=SometimesRandomMovable(),
harmable=BasicHarmable(),
burnable=MonsterBurnable(),
freezable=LoseATurnFreezable()
...
) and ninja = Entity(
movable=QuickMovable(),
harmable=WeakHarmable(),
burnable=MonsterBurnable(),
freezable=NotFreezable()
...
) Each component stores a reference to its owner Entity for information like position. The components know how to receive messages from the game world, process them, then generate more messages for the results. These messages land in a global queue, and there is a main loop each turn which pops messages off the queue, processes them, then pushes any resulting messages back onto the queue. So, for example, a Movable component does not actually edit the position attributes of the owning entity; it generates a message to the game engine that they should be changed, along with the position the owner should be moved to. There's essentially no class hierarchy for the basic game entities, and I did not find myself missing it. Behavior is distinguished entirely by what components an entity has. This works for every entity in the game world: player, enemy, or object. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/405244",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/357570/"
]
} |
405,268 | I have a C header that is generated from a CSV file and a Python script. The C header mainly contains a list of #define constants. I want to be able to detect manual changes to this header during compilation (which tends to happen frequently in this early phase of development), and have the compiler display a warning to indicate to the developer to update the CSV file and regenerate the header. If I were to go about doing this, I would have the python script generate some kind of metadata about the file itself, perhaps a hash, and then the compiler would somehow check this hash and compare to what's in the file. But I'm not sure what's the best way to go about it. Does GCC have any facilities I can use for this kind of thing? | I think you are approaching this problem from the wrong angle. Better let the generator place a clear and visible comment at the beginning of the C header file like // This file is autogenerated, don't change it manually,
// any manual changes will get lost after next regeneration. then make generating the C file from the CSV file part of the build process (which you describe in the make file). If someone ignores the comment at the beginning - bad luck, they will surely not do this a second time after loosing a few hours work. Some additional recommendations from the commenters below (thanks to all contributers): add a note to the generated comment which tool generated this file from which source make the generated file read-only (and make sure the team does not use an IDE which ignores the read-only flag) In case the header file contains parts which have to be maintained manually from time to time, then move them to a second file which is included from the generated one, so you have a clear separation between files which are autogenerated and files which are manually edited. By this separation there should be no reason to apply "manual changes to this header", if this is an "early phase of development" or not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/405268",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124792/"
]
} |
405,308 | Coupling is defined as the knowledge one object has about another one, which describes how dependent they are. The more dependent, the worse, since changes in one would impact in the second. High coupling is bad, low coupling good. There are different coupling types . Let's assume we talk about call coupling. In Java, when A creates an object B and calls one of its methods it's said to be tightly coupled. But if we create an interface IB, used from A where B implements IB it's said to be loosely coupled. I don't see why since one change in the interface would have an impact in A and B. And one change in B would have an impact in IB and A. They still seem to be call coupled. The same rationale applies to the Facade GoF design pattern. It's said it promotes low coupling, since we put an intermediary between a subsystem and the client code. In this case, it looks like we transferred the problem from the client code to the Facade. Since a change in the subsystem would have an impact on the Facade instead of the client code. The client code is no longer coupled to the subsystem but the Facade is. I don't see how coupling is reduced. This has been asked in: How is loose coupling achieved using interfaces in Java when an implementation class is mandatory and bound to interface contract? But the answers are not specific enough. First, since encapsulation is also about treating objects as black boxes, the first answer does not specify any gain by using interfaces compared to regular classes (= tight coupling, in case it's all about black boxes). Therefore the answer is invalid. What interfaces provide is decoupling between the interface and the implementation when multiple implementations exist. But doesn't solve anything related to call coupling. May be the link provided above should add a new category called "implementation coupling". Regardless of this, the solution is still call coupled. In the second answer it mentions data coupling, but as far as I know the issue is about call coupling. The rest of the answers are irrelevant. Regarding "Design Patterns - Understanding Facade Pattern", I understand the Facade pattern. I'm only asking about the coupling reduced by the pattern. Which based on my reasoning is not reduced but transferred. This subject has been treated but no proper answer has been given. | High coupling is bad, low coupling good. I won't put it so black and white. Some coupling is necessary. Plus some roundabout ways to get rid of coupling can introduce too much overhead. In particular for applications that need to respond in real time. It is trade-offs. Yes, you want to avoid high coupling. However, if introducing patterns in an attempt to lower coupling is preventing you from meeting your requirements first (which might include response time, time budgets, etc…) then it is not worth it. In Java, when A creates an object B and calls one of its methods it's said to be tightly coupled. But if we create an interface IB, used from A where B implements IB it's said to be loosely coupled. I don't see why since one change in the interface would have an impact in A and B. And one change in B would have an impact in IB and A. They still seem to be call coupled. Yes, they are still call coupled. Well, depends on how you define the metric. However, interfaces are not the right tool if you want to deal with that. Regardless, with the interfaces, the classes would be loosely coupled in that A is not coupled directly to B. You could have other implementations of the interface. 
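To make that concrete, here is a minimal sketch using the A, B and IB names from the question (the doWork method is just a placeholder of mine): A is written against IB only, so B can be replaced by another implementation, or by a test double, without A changing. interface IB
{
void doWork();
}
class B implements IB
{
public void doWork() { /* one implementation */ }
}
class AnotherB implements IB
{
public void doWork() { /* a different implementation */ }
}
class A
{
private final IB b;
A(IB b) { this.b = b; } // A only knows about the interface
void run() { b.doWork(); } // behaves the same with B, AnotherB, or a mock
}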
A common anti-pattern is making an interface that matches what the consumed class offers. You should make an interface that matches what the consumer class needs. See interface segregation . The interface becomes a requirement for consumed class. If you conceptualize the interface that way, the interface would change only when it is necesary for its consumers. When changing A, you can decide to change the interface or use a new one. If we decide to change the interface when changing A, the change would propagate from A to B. Instead of it propagating from B to A. However, we do not have to decide to change the interface, or we can introduce an adapter that implement the interface and wraps B. That is, we have opportunities to stop the change from propagating. That is what we want. And that is what loose coupling (which is still coupling) buys us. We design to have more options, not less. Again, that is not solving call coupling. The same rationale applies to the Facade GoF design pattern. It's said it promotes low coupling, since we put an intermediary between a subsystem and the client code. In this case, it looks like we transferred the problem from the client code to the Facade. Since a change in the subsystem would have an impact on the Facade instead of the client code. The client code is no longer coupled to the subsystem but the Facade is. The facade hides whatever you do behind it. It is similar to how an object encapsulates its state. The facade encapsulates a (sub)system. The consumer code only needs to talk to the facade and it is unaware of the details behind it. Of course, it is still coupled. And yes, you have moved the problem to the Facade. However, thanks to the Facade, the consumer code does not have to change because of the changes of what is behind the facade. But the answers are not specific enough. First, since encapsulation is also about treating objects as black boxes, the first answer does not specify any gain by using interfaces compared to regular classes (= tight coupling, in case it's all about black boxes) If you use the class directly, then whatever the consumer code needs must be implemented by the class. If the consumer code uses an interface instead, then it does not have to be implemented by any particular class. You can change the class without the consumer code being aware of it. The consumer code has less knowledge, thus it is less coupled. What interfaces provide is decoupling between the interface and the implementation when multiple implementations exist. But doesn't solve anything related to call coupling. Correct, this is not about call coupling. You are the one narrowing the discussion to call coupling between two classes and an interface. And then wondering why they provide nothing. Interfaces are not the right tool to deal with call coupling. Instead you want an event driven architecture, a consumer subscriber pattern, or similar. That way, there might not even be an implementation on the other side. Of course, some infrastructure might be required, if not provided by the language and runtime. Oh, this is Java, yeah, some infrastructure required. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/405308",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/357636/"
]
} |
405,555 | Say that A is working on a branch based off master and B merges changes into the master branch, which introduces merge conflicts between A's branch and master. Whose responsibility is it to fix merge conflicts? I am not intending to be petty, so in other words - is it more productive in general if A fixes these conflicts or B? | Person A is the one who decides when to incorporate new changes from master, so Person A will perform the merge. Person A should certainly attempt to resolve merge conflicts on their own, but if any questions arise then both Person A and Person B should sit together and resolve the conflicts together . Remember that you work on a team. Teammates should help one another without resorting to finger-pointing or saying "it's your job, not mine." So to answer your question, neither person is solely responsible for resolving merge conflicts. You both are. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/405555",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/313167/"
]
} |
405,567 | Our company is trying to find a good generic way to have Many-to-One data for an entity. For example, a user might have 1 primary email, but many other emails also attached to their account. So we have a users table (1 row maps to 1 user): | id | handle | primary_email | is_verified | first_name | last_name |
|--------|----------|---------------|-------------|------------|-----------|
| (int) | (string) | (string) | (boolean) | (string) | (string) | but then we may want to store multiple emails for the same user, so we have another table, let's called it "users_map", where many rows map to 1 user: | id | user_id | key | value |
|--------|---------|----------|--------|
| (int) | (uuid) | (string) | (json) | so for example if there were multiple emails for the same user, we would do something like this: | id | user_id | key | value |
|----|---------|-------|------------------|
| 1 | 1 | email | "[email protected]" |
| 2 | 1 | email | "[email protected]" |
| 3 | 1 | email | "[email protected]" |
| 4 | 2 | email | "[email protected]" |
| 5 | 2 | email | "[email protected]" | so my question is - is there a better way to do this other than using JSON for the value column? If not - is there a way to enforce a schema on the JSON somehow? Last question - from my brief research the inverse table design is called an "unpivot" table - but if there is a better name for it please let me know. The potential advantage of a generic table by user? if you shard by user, each shard has only 2 tables instead of 5 or 10? | Person A is the one who decides when to incorporate new changes from master, so Person A will perform the merge. Person A should certainly attempt to resolve merge conflicts on their own, but if any questions arise then both Person A and Person B should sit together and resolve the conflicts together . Remember that you work on a team. Teammates should help one another without resorting to finger-pointing or saying "it's your job, not mine." So to answer your question, neither person is solely responsible for resolving merge conflicts. You both are. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/405567",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/101210/"
]
} |
405,749 | Disclaimer: I don't expect zero tech debt. In this post, technical debt problem refers to severity that has been causing negative impact, say productivity. Recently I was thinking to build a tool to automatically generate tech debt report from issue tracker - introduction rate vs cleanup rate over the time. Apart from the total, there'll also be numbers broken down by project team and by manager, so that managers could easily get insight on current tech debt level, without delving into issue tracker and details (such tool might already exists, I need to research to avoid reinventing wheel). Motivation wise , tech debts have been snowballing for years. Whenever developers increase project estimate to include tech debt clean up, more often they will be asked to remove those numbers from estimate, so refactoring/clean up works usually ends up indefinitely postponed. I hope the periodic report will help to improve tech debt management issue. However, on second thought , I wonder will increasing visibility of tech debt level really helps to raise priority. Generally, is tech debt issue an org culture issue or just lack of tool/insight? I supposed there's no universal answer, I wonder which is the more common cause. What's your experience? --- Update 2/28 Clarification : I believe most management are intelligent enough to realise there's impact, especially after teammates reported pain in terms of project productivity . My gut feeling is that, they don't have a concrete picture about how serious problem is. My idea is to help management to gain clearer picture, via two steps: Have techdebts logged, and have their impact tracked. (there are challenges, but that's beyond scope of this question) Have a report for introduction rate vs cleanup rate (there could be further breakdown by high/low impact). My curiosity comes from, will these efforts help or are they just waste of time, generally speaking (not specific within my org) - hence the question what's your experience . If it's org culture issue, then most likely these efforts won't help much. | Anecdotally I am a consultant developer. I have been hired on several occasions specifically to "fix the development issues". Some customers are aware of issues in their development process, whereas others only see it as a bunch of bugs that need to be fixed without looking at the cause of the bugs (i.e. bad coding practices). In my experience, one company that asked for help in fixing the development process was actually interested in taking steps to improve the development process. In other companies, their interest existed up until it required action from their side (e.g. reprimanding a developer who actively rolls back refactoring or improvements, or actually giving me access to the tools needed to set up a CI/CD pipeline). Based on my experience, bad practice starts off as a developer deficiency. Not a willful one, but rather a matter of either inexperience or general corner-cutting attitude. Whatever the cause, these developers will show quick results due to not taking the time for due diligence such as testing, reviewing or refactoring. Management will notice those quick results, and will over time come to expect this efficiency. They don't handle the fallout from bad practice (i.e. bugs) directly, but they do benefit from the shorter development times. At this point, it becomes a feedback loop. Management communicates expected (quick) deadline. Developers are forced to cut corners to achieve it. The codebase degrades. 
The initial quick release turns into a maintenance cycle of unclear and erratic bugs, regressions, and general lack of readability. In order to cope, while keeping up with the continuing demand for quick results, developers are forced to cut corners in their bugfixes. The cycle continues, while the quality and performance of the codebase is eroded, and also the good practice skills of developers erode and start being regarded as "needlessly" time consuming. If some developers stick to good practice and others don't, management will judge them based on how quickly they deliver - without observing the bugs or the causes of the bugs. The good practice developers are deincentivized, the bad practice developers are incentivized. Over time, due to positive/negative feedback from management, the bad practice developers take on a more leading role than the good practice developers, and the bad practice becomes the law of the land. Speaking from the experience of a company whose main workforce was external consultants, the good practice devs simply leave or become disenfranchised bad practice devs. The (initial) bad practice devs stick around. This perpetuates the imbalance of the bad practice devs having seniority over the good practice devs. At this point, bad practice has become an endemic company culture. It is reinforced from all sides (including the sales department in case of dev companies), and any good practice suggestion that pops up is often drowned out by the popular support for bad practice, combined with management's intolerance for longer deadlines. This devolution is something I've observed with at least three different companies. The same events and general work climate pervaded through all three companies. The monkeys and the ladder Whenever I talk about detrimental company culture, which often manifests as a "this is how we've always done it" attitude, I am reminded of the parable of the monkeys and the ladder. Suppose I had turned off the shower around the time of picture 4. The monkeys could have gone up that ladder without any repercussions, but their "company culture" prevented it from happening, based on what is now an outdated idea (since the shower is no longer active). This parable touches exactly on the erosion of good practice that takes place. Popular but misguided support for bad practice inhibits anyone who tries to make a change for the better by introducing good practice. The issue isn't with social checks and balances. The same principle is used in other companies to keep up the good practice and squash any bad practice suggestions. The issue is with the blind acceptance of "things are done this way" without ever being able to re-evaluate. When it reaches that stage, the behavior is a company culture. Answering your questions Generally, is tech debt issue an org culture issue or just lack of tool/insight? It depends what stage of the process you are on. In the beginning, it's a lack of insight and/or tooling. But when combined with management that looks only at results and not ongoing issues and thus wrongly (possibly unknowingly) incentivizes the bad practice, it becomes a feedback loop and over time turns into company culture. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/405749",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/358438/"
]
} |
405,973 | I have been studying Clean Architecture (CA) by Robert C. Martin and have found it quite useful in promoting architectural standards for large applications. Through implementation of a case study, I have a bit of experience of how it can help build applications that are more flexible, robust and scalable. Finally I have also come into grips with its potential shortcomings (many of which are outlined in this excellent response). My question, though, is how Clean Architecture relates to Domain Driven Design (DDD) by Eric Evans. While not quite as familiar with DDD, I have noticed many similarities between DDD and CA. So here are my questions: Are there any differences between CA and DDD (other than their naming scheme)? Should they be used in tandem, drawing insight from both, or should one be used over the other? From research, the only thing I was able to find on this was that CA "uses higher level of abstraction on the business objects" sourced from here . | You are correct that both focus on separating the domain code from the application and infrastructure code. But that is where the similarities end. In Clean/Hexagonal/Onion (or CHO in short) architecture, the goal of this decoupling is testability and modularity with intended effect being that the "core" of our software can be reasoned about in isolation from rest of the world. In DDD, the main goal is to establish common language with the business experts. The separation of the domain from rest of the application code is just a side effect of this main goal. It also has some say about the design of classes as entities and aggregates, but that is only within the domain itself. It has nothing to say about design outside the domain code. In practice, you might find that you use both at the same time. You use CHO architecture to design the whole structure of the system, with the "domain core" being isolated in it's separate modules. And then you use DDD to design this domain core in collaboration with domain experts and possibly by using DDD concepts like Entites and Aggregates. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/405973",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/358776/"
]
} |
406,079 | Doing a code review, I ran into this assertion in a unit test: assertThatThrownBy(() -> shoppingCartService.payForCart(command))
.isInstanceOfSatisfying(PaymentException.class,
exception -> assertThat(exception.getMessage()).isEqualTo(
"Cannot pay for ids [" + item.getId() +"], status is not WAITING")); I feel that testing the literal text of an Exception is bad practice, but I'm unable to convince the author. (Note: this is not localized text; it's a String literal in the code with the item ID as a parameter.) My reasoning: If we decided that, e.g. the item's ID was a critical piece of information, it might make sense to test that the message contains the ID. But in this case, it would be better to include the ID as a field of the Exception class. If we had some sort of external system that was automatically reading the logs and looking for certain strings (we don't), then yes, this test would be justified What's important in the message is that it clearly communicates what the problem is. But there must be hundreds of ways of constructing such a message, and no testing library can tell us if a plain text human language String is "clear" or "useful". Thus, this amounts to, e.g. writing unit tests for translations - it's pointless because the best you can do boils down to duplicating your messages file in the tests. What's the best practice here, and why ? | The main point of testing the exception message content is to make sure the right exception is thrown. There may be multiple reasons for which payForCart() throws a PaymentException . So it's not about the exact wording, but about whether it says "Cannot pay for ids A9dr6L, status is not WAITING" and not "Cannot pay for ids A9dr6L, balance insufficient". The alternative would be to have a different exception class for every single throw statement. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/406079",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38424/"
]
} |
406,179 | I'm currently working with a very large system and have been asked to add an additional parameter to a method that's called from over 200 different places directly. The method signature looks something like this: public static bool SendMessageAndLog(long id, long userId, string message, string cc="", params Attachment[] attachments)
{ ... } I need to be able to log the id of the event this message is associated with. I'm kinda stuck between 2 solutions: Creating a new method that does exactly the same thing but takes the event ID as well, stripping the old method and making it call the new method Adding an optional parameter for the event id and going through and using named parameters for the 200 calls which seems like a massive pain Are there any more other potential solutions to this? What would be the best practice in this case keeping in mind that I can't refactor this too much. | Create a new method with the additional parameter and move all the code from the original method in it, making it use the additional parameter the way you are supposed to. Leave the original method (without the additional parameter) which would just call the new method with some default value of the additional parameter, basically making it a wrapper method. This, of course, works only if there is a default value of the additional parameter. If there is not, but the value differs from call to call, then you have to bite the bullet and just refactor the whole thing, which should not be too hard, because once you break the signature by adding the additional parameter, the compiler will tell you where else you need to fix it. In your particular case, if you have the message-event id mapping which does not change for the duration of the program execution , you can just create the map as an internal attribute of the class containing the method, and the first line of your method would be to read the map and get the ID of the event based on the message. You then use it the way you want to. That way, you do not need to change the method signature. It should not even be such a performance hit, because I do not expect that the map will have too many elements. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/406179",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/96091/"
]
} |
406,283 | I want to code a little program that takes in head tracking data and moves a 3D object accordingly on the screen. To achieve this I found a software called opentrack that has a C++ API. The problem is that any game dev environments I know / have a way to access use C# as the language. I'm very confortable with C# and used to be comfortable with C++ and C a while back, and could easily get back into it if a solution required it. This is a silly little personal project, but one I'm passionate about and would love to solve, so any help in resolving this would be appreciated. Thanks! UPDATE: Wow, that's an amazing amount and quality of responses, I would like to deeply thank everybody who contributed! | There are various ways to call native code from c# P/Invoke - allows c-style method calls directly from c# code. If the API does not expose c-style (i.e. extern C) functions you will need a wrapper that does this. If the API uses any kind of objects this will probably be an painful approach. C++/CLI - This allows you to use .Net types in a c++ project. So you would create a wrapper c++ project that interfaces with the opentrack API, and is called from your regular c# code. This looks like a nice guide on how to do this. . An advantage of this is that it allows you to write wrappers around objects to provide a object oriented API. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/406283",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/359326/"
]
} |
406,591 | While going over a java book I came across this phrase: Different JVMs can run threads in profoundly different ways. While it's completely understandable to me that code can behave differently depending on the underlying JVM implementation, it does bring up the question. Why are there multiple different implementations of JVM in the first place? Why might I, as a developer be dissatisfied with the official JVM implementation that Oracle provides and decide to build up a different one? | Why might I, as a developer be dissatisfied with the official JVM implementation that Oracle provides and decide to build up a different one? Which one? Oracle has at least three different official JVM implementations! A couple of reasons why one might develop a JVM implementation are: Platform support : you want to run Java on a platform for which Oracle does not provide a JVM. That is the main reason for the existence of IBM J9, for example. Resource usage : you want to run Java on a device that doesn't have enough resources to run Oracle HotSpot. That's the reason for the existence of Oracle Squawk and Oracle KVM (the "K" stands for "Kilobyte", indicating that this JVM is designed to run on machines with only a few kilobytes of RAM – try that with HotSpot!), and many, many, many others. Performance : Oracle HotSpot isn't fast enough / scalable enough / predictable enough for you. This is the reason for the existence of Azul Zing or WebSphere with the Metronome GC. Licensing : maybe you don't like Oracle's licensing policy. That was the reason for the existence of Apache Harmony, and the various projects that made up GNU's Java implementation efforts (GCJ, Classpath). Competition : Monocultures are bad. Competition sparks innovation. Execution modes : Maybe you prefer Ahead-of-time compilation? That's the reason for the existence of Excelsior JET. Research : there are many research JVMs, such as the Jikes RVM or Oracle Maxine. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/406591",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/295696/"
]
} |
407,008 | A friend of mine is working in a 200-employee company. The company's business has nothing to do with IT, but they do have an IT department to work, among others, on their website, used by the customers. The website started with a core idea that programmers have to test the application themselves using automated testing. However, it quickly started to be problematic, as programmers were spending too much time writing functional tests with Selenium (and later Cypress.io ) trying to deal with either complicated interactions, such as drag and drop or file uploads, or trying to figure out why the tests randomly fail. For a while, more than 25% of the time was spent on those tests; moreover, most programmers were pissed off by those tests, as they wanted to produce actual value, not try to figure out why the tests would randomly fail. Two years ago, it was decided to pay a company from Bulgaria to do the functional, interface-level tests manually. Things went well, as such testing was pretty inexpensive. Overall, programmers were delivering features faster, with fewer regressions, and everyone was happy. However, over time, programmers started to be overconfident. They would write fewer integration or even unit tests, and would sometimes mark features as done without even actually checked if they work in a browser: since testers will catch the mistakes, why bother? This creates two problems: (1) it takes more time to solve the issues when they are discovered by testers a few days ago (compared to when they are discovered within minutes by programmers themselves) and (2) the overall cost of the outsourced testers grows constantly. Recently, the team lead tries to change this behavior by: Measuring, per person, how many tickets are reopened by the testers (and sharing the results to the whole team). Giving congratulation to the persons who performed the best, i.e. those who have the least tickets being reopened. Spend time pair programming with those who performed the worst, trying to understand why are they so reluctant to test their code, and showing them that it's not that difficult. Explaining that it's much faster to solve a problem right now, than to wait for several days until the feature gets tested. Explaining that testers do system tests only, and the lack of unit tests make it difficult to pinpoint the exact location of the problem. However, it doesn't work: The metrics are not always relevant. One may work on an unclear or complex ticket which gets reopened several times by the testers because of the edge cases, and a colleague may meanwhile work on a ticket which is so straightforward that there is absolutely no chance to introduce any regression. Programmers are reluctant to test code, because (1) they find it just boring, and because (2) if they don't test code, it looks like they deliver the feature faster. They also don't see why fixing a problem days after developing a feature would be a problem. They understand the theory, but they don't feel it in practice. Also, they believe that even if it would take a bit longer, it's still cheaper for the company to pay inexpensive outsourced testers rather than spend programmers' time on tests. Telling them repeatedly that this is not the case has no effect. As for system vs. unit testing, programmers reply that they don't spend that much time finding the exact location of a problem reported by a tester anyway (which seems to be actually true). What else can be done to encourage programmers to stop overly rely on testers? 
| It seems to me there is a contradiction in policy here. On the one hand, the firm has outsourced testing because it consumed programmers' time excessively, and could be done more cheaply by others. Now, they complain that the programmers are relying on the testers, and should be doing more testing themselves up front. I can understand from a management point of view that there is perceived to be a happy medium, but in reality the programmers are not engaging in a close analysis, on a case-by-case basis, of how much testing they do themselves and how much they outsource. To attempt to do so would consume too much time and intellectual effort, and likely without producing accurate results. How would a programmer go about estimating how many bugs a particular piece of code has, and then weighing up the economic benefit of spending his own time searching for them versus letting the testers search for them? It's an absurdity. Instead programmers are following rules of thumb. Previously the rule was to test extensively. Now the rule is to save precious programmer time, get more code out the door, and leave testing to testers (who are thought to be ten-a-penny). It's no answer to seek a happy medium, because in practice what will happen is that the anal-retentives will return to spending 25% of their time testing, and the cowboys will continue throwing low-quality code out the door, and personality traits like conscientiousness and attention to detail (or lack thereof) will predominate over the judgment. If management try to harass both types to get them to conform more closely to an average which is perceived to be economically ideal, both will probably just end up feeling harassed. I would also remark in passing, that the 25% of time which was spent testing to begin with, does not strike me as excessive. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/407008",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
408,232 | We work in scrum teams with a product owner who is responsible for the backlog and prioritisation of that backlog. Recently the topic of un-ticketed work came up: developers for one of the applications are doing un-ticketed work that they regard as important. Typically this is tech debt, but it can also be things like migrating to a better library etc. The argument from the developers was that these are generally small things and that in an agile team they should be able to exercise their judgement and fit them in around sprint work. E.g. while waiting for the CI system to build and deploy, they could tidy up some code. Raising a ticket would take longer than actually doing the work. The work being done is tested via automated tests, and so there is no additional burden on the QA members of the team. The argument against this is that the developers are effectively saying their opinion on what work is a priority is more important than any other stakeholder's, and that they are not going to go through the PO so the work can be compared against other items in the backlog. There is also a case to be made that if a developer has spare time then it would be more productive for them to be elaborating upcoming stories. The state of stories coming into the sprint has been raised at retros before, and so more elaboration can only help this. There is also a concern that the self-policing of what size falls into this category may start to stretch and result in even more time being spent on un-ticketed work. I can see both sides of the argument to an extent, but should all work, no matter how small, be ticketed and go through sprint planning rather than being done ad hoc by developers? | If you work in a company that doesn't place any value on paying down technical debt, you may have no choice but to do unticketed work. Stakeholders are generally not qualified to make decisions about this kind of work. Include unticketed work as part of your ticket estimation process.
"source": [
"https://softwareengineering.stackexchange.com/questions/408232",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/174177/"
]
} |
408,246 | Imagine two classes: class Person {
MarriedPerson marry(Person other) {...}
} class MarriedPerson extends Person {
} The idea is that a Person is automatically "morphed" to type MarriedPerson when the method marry is called on it. For this reason you would need to create an instance of MarriedPerson and transfer the data from the initial Person object. The Person object would still exist, which is bad. Another way would be the application of a design pattern ... What design pattern would fit the problem? Are there better ways to model this relationship? (Off-topic) Are there Java features to define a custom downcast? This is not homework. I just got curious after commenting here . | I know that this is meant to be a fairly contrived example to demonstrate the idea at play. But I would say that as a rule of thumb, you want to exhaust every possible option before relying on Subtype polymorphism. Using composition is a better solution here just based on the domain. Marriage does not define a "type" of person any more than race, wealth, interests, etc. So using subtype polymorphism to model this case isn't appropriate. On a more technical level, most languages only support single-inheritance. So if you did have classes for MarriedPerson , PacificIslanderPerson , and WealthyPerson , you would have no way to "compose" these together to describe a wealthy married Pacific-Islander. Instead you would use simple composition within your Person class. public class Person {
public MarriageStatus marriageStatus;
public Race race;
public Wealth wealth;
} Here, MarriageStatus , Race , and Wealth can all be single-responsibility, and probably pretty simple. An example of MarriageStatus might be: public class MarriageStatus {
public Datetime anniversary;
public Person husband;
public Person wife;
// TODO: In the future the stakeholder would like to support polyamory
// public List<Person> spouses;
} If you're using a programming language like Haskell or Rust with traits (typeclasses in Haskell parlance), then you can make Person automatically act like a MarriedPerson from the function's perspective. In more traditional OOP languages, your business logic would simply work on MarriageStatus , Race , and Wealth objects. They would only accept a Person when interplay between those three composed properties is needed. This way you've designed yourself out of the recursive relationship and all the pitfalls of it. I apologize if I've entirely missed the point of your question. Specifically, you say The Person object would still exist, which is bad. I don't think that's necessarily true. If you return a MarriedPerson object and no references to the original "Person" are around, the garbage collector will just come along and remove the old "Person" object. I might be misunderstanding you though. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/408246",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/360199/"
]
} |
408,739 | In this Docker beginner video it's explained that different stacks may depend on different libraries and dependencies and that this can be handled with Docker. However, I don't get what the difference is between a library and a dependency. As I see it, a library is a collection of code/packages and a dependency is a library that the database/webserver/tool depends on. So is there any difference? Or is saying "a database relies on specific libraries and dependencies" the same as "a database relies on specific libraries" ? | Libraries and dependencies are like persons and relatives : one is just an entity (something), the other is a relational entity. I am a person. My niece is also a person. But to her, I'm a relative . You cannot simply be a relative by nature; you're always a relative of someone else . Similarly, a code library becomes a dependency only when another project uses it , and then it's a dependency of that project and not of another. Even though a code library is invented specifically for other projects to use, it's not a dependency until this actually happens. A tiny sketch of this distinction is shown below. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/408739",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/224677/"
]
} |
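A tiny sketch of the distinction, in TypeScript, using the real npm package lodash purely as an illustration (any other package would do; the import style assumes a typical esModuleInterop setup):
// lodash is a library in its own right, whether or not anyone uses it.
// The moment this project imports it, lodash also becomes a dependency of this project.
import _ from "lodash";

console.log(_.chunk([1, 2, 3, 4], 2)); // [ [ 1, 2 ], [ 3, 4 ] ]
To another project that never imports it, lodash is still a library, but not one of its dependencies.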
409,455 | Don't be afraid to make a name long. A long descriptive name is better
than a short enigmatic name. A long descriptive name is better than a
long descriptive comment. Robert C. Martin Did I understand Clean Code right?
You put all the information that you would otherwise put into a comment into the class/method/... name.
Wouldn't that lead to long names like: PageReloaderForPagesDisplayingVectorGraphicsThatAreUsedInTheEditorComments
PageReloaderForPagesDisplayingVectorGraphicsThatAreUsedInTheEditorDescriptions | Yes, you understand Clean Code right, but your examples are quite a bit over the top. Here is what you start with: PageReloaderForPagesDisplayingVectorGraphicsThatAreUsedInTheEditorComments
PageReloaderForPagesDisplayingVectorGraphicsThatAreUsedInTheEditorDescriptions In your system you probably don't have many kinds of reloaders, you probably only have page reloaders, so the first occurrence of "Page" is redundant. This leaves you with: ReloaderForPagesDisplayingVectorGraphicsThatAreUsedInTheEditorComments
ReloaderForPagesDisplayingVectorGraphicsThatAreUsedInTheEditorDescriptions And since a reloader always reloads pages, the second occurrence of "Pages" is redundant, too. This leaves you with: ReloaderForDisplayingVectorGraphicsThatAreUsedInTheEditorComments
ReloaderForDisplayingVectorGraphicsThatAreUsedInTheEditorDescriptions Pages are always displaying stuff, so the Displaying part is redundant, too. This leaves you with: ReloaderForVectorGraphicsThatAreUsedInTheEditorComments
ReloaderForVectorGraphicsThatAreUsedInTheEditorDescriptions In English, constructs like this-that-is-used-in-that can be reworded as that-this. For example, coloring-that-is-used-for-food is food-coloring. Applying this rule to replace "Vector Graphics That Are Used In X" with "X Vector Graphics" leaves you with: ReloaderForEditorCommentsVectorGraphics
ReloaderForEditorDescriptionsVectorGraphics Also in English, constructs like this-for-that can be reworded as that-this. For example, bottle-for-water can be reworded as water-bottle. Applying this rule to change "Reloader For X" to "X Reloader" leaves you with: EditorCommentsVectorGraphicsReloader
EditorDescriptionsVectorGraphicsReloader And then of course there may be other shortcuts you can apply, depending on your particular problem domain. For example, when you speak of 'vector' in your system, it may be fairly clear that you are speaking of 'vector graphics', so this would leave you with: EditorCommentsVectorReloader
EditorDescriptionsVectorReloader ... and I think that these are some pretty good realistically long names. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/409455",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/364477/"
]
} |
409,773 | I often read definitions for Polymorphism such as the following: Polymorphism is the ability to have objects of different types
understanding the same message But the above definition also applies even if we don't use polymorphism; for example, if we have an object of type Circle with a method draw() , and another object of type Rectangle with a method draw() , we can do: circle1.draw();
rectangle1.draw(); So circle1 and rectangle1 understood the same message draw() without using polymorphism! Am I missing something? | In your example, you don't really show the same message; you show two different messages that happen to have the same name. Polymorphism requires that the sender of a message can send it without knowing the exact recipient. Without seeing evidence that the caller can do something like shape.draw() without knowing whether shape contains a circle or a rectangle, you may or may not have actual polymorphism. They could be as unrelated as circle.draw() and weapon.draw() . They don't necessarily have to both implement the same nominal interface. The language could support structural typing or compile-time templating and it would still be called polymorphism. As long as the caller doesn't care who the callee is. A minimal sketch of such a call site is shown below. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/409773",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/280327/"
]
} |
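To make the "caller doesn't care" point concrete, here is a minimal sketch in TypeScript; the Shape interface, the render function and the concrete classes are hypothetical names invented for this example, not taken from the question. TypeScript happens to check this structurally, so the implements clauses are optional; what matters is that the call site only knows about Shape.
interface Shape {
  draw(): void;
}
class Circle implements Shape {
  constructor(private radius: number) {}
  draw(): void { console.log(`circle with radius ${this.radius}`); }
}
class Rectangle implements Shape {
  constructor(private width: number, private height: number) {}
  draw(): void { console.log(`rectangle ${this.width} x ${this.height}`); }
}
// The caller sends the message draw() without knowing the exact recipient:
function render(shapes: Shape[]): void {
  for (const shape of shapes) {
    shape.draw(); // dispatched to Circle.draw or Rectangle.draw at runtime
  }
}
render([new Circle(1), new Rectangle(2, 3)]);
Written side by side as in the question, circle1.draw() and rectangle1.draw() are just two unrelated calls; shape.draw() inside render is one message with many possible receivers, which is the polymorphism the answer describes.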
409,910 | I'm trying to get a review for my lists of pros/cons about how to structure commits that came out of a discussion at my work. Here's the scenario: I have to add feature X to a legacy code base. The current code base has something I can't mock, making unit testing feature X impossible. I can refactor to make unit testing possible, but it results in a very large code change touching many other non-test classes that have little in common with feature X. My company has the following strictly enforced rules: Each and every commit must be stand-alone (compiles, passes tests, etc.) We have automation that makes it impossible to merge until these have proven to pass. Only fast-forward merges are allowed (no branches, no merge commits, our origin repository only has a single master branch and it is a perfectly straight line). So the question is how to structure the commits for these 3 things (refactoring, feature X, and test for feature X). My colleague referred me to this other article but it doesn't seem to tackle the refactoring part. (I agree that without the refactoring, source and test should be in one commit.)
The article talks about "breaking git bisect" and "making sure every commit compiles/passes" but our strict rules already cover that.
The main other argument they give is "logically related code kept together" which seems a bit too philosophical for me. I see 3 ways to proceed. I'm hoping that you can either a) add to it or b) comment on why one of the existing pros/cons is not important and should be removed from the list. method 1 (one commit): includes feature X, test for feature X, and refactoring pros: "Logically related code kept together" (Not sure this is actually a "reason". I would probably argue all 3 methods do this, but some may argue otherwise. However, no one can argue against it here). If you cherry-pick / revert without merge conflict, it will probably always compile & pass tests There is never code not covered by test cons: Harder to code review. (Why is all this refactoring done here despite not being related to feature X?) You cannot cherry-pick without the refactoring. (You have to bring along the refactoring, increasing the chance of merge conflicts and time spent) method 2 (two commits): one includes feature X, then two includes refactoring and test for feature X pros: Easier to code review both. (Refactoring done only for the sake of testing is kept with the test it is associated with) You can cherry-pick just the feature. (e.g. for experiments or adding the feature to old releases) If you decide to revert the feature, you can keep the (hopefully) better structured code that came from the refactoring (However, the revert will not be "pure". See cons below) cons: There will be a commit without test coverage (even though it's added immediately after, philosophically bad?) Having a commit without test coverage makes automated coverage enforcement hard/impossible for every commit (e.g. you need y% coverage to merge) If you cherry-pick only the test, it will fail. Adds load to people wanting to do a revert. (They need to either know to revert both commits or remove the test as part of the feature revert, making the revert not "pure") method 3 (two commits): one includes refactoring, two includes feature X and test for feature X pros: Easier to code review the second commit. (Refactoring done only for the sake of testing is kept out of the feature commit) If you cherry-pick / revert either without merge conflict, it should compile & pass tests There is never code not covered by test (both philosophically good and also easier for automated coverage enforcement) cons: Harder to code review the first commit. (If the only value of the refactoring is for the tests, and the tests are in a future commit, you need to go back and forth between the two to understand why it was done and if it could have been done better.) Arguably the worst of the 3 for "logically related code kept together" (but probably not that important???) So based on all this, I'm leaning towards 3. Having the automated test coverage is a big win (and it is what started me down this rabbit hole in the first place). But maybe one of you has pros/cons I missed? Or maybe there's a 4th option? | When working on existing code, it's common that you need to refactor the code before you can implement your feature. This is the mantra from Kent Beck: "Make the change easy (warning: this may be hard), then make the easy change". To do so, I usually recommend doing frequent little commits. Take baby steps. Refactor progressively: each refactoring doesn't change the way the code works, but how it's implemented. It's not "hard to review" since both implementations are equally valid. But the new implementation will make it easier for the change to be made.
Finally, write the test and make it pass. It should be relatively short and to the point. That also makes the commit easier to read. Therefore I'd go for the 3rd option too . Maybe I'd even have multiple refactoring commits. Or I'd squash them into one before pushing that for review, so there's only one. Or maybe I'd do a first PR that's only refactoring, then a second that's only the feature. It really depends on how much refactoring is needed (keep your PRs short) and your team conventions! If the only value of the refactoring is for the tests, and the tests are in a future commit, you need to go back and forth between the two to understand why it was done and if it could have been done better. To solve this problem, you need to get your team comfortable with this approach: refactor first, then implement the feature. I'd suggest you discuss it with your colleagues and try that out. I'd also recommend you try to practice "over-committing" to get into the habit of doing smaller commits . It's a useful skill to have when code is tricky, so it's a great exercise to do when code is not! In any case, I think you're having a healthy discussion with your colleagues. No doubt you'll find what works for your team! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/409910",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/365315/"
]
} |
410,248 | I'm working on a PHP web application that depends on a few 3rd-party services. These services are well documented and provided by fairly large organisations. I feel paranoid when working with responses from these APIs, which leads me to write validation code that checks that the responses match the structure and data types specified in the documentation. This mainly comes from the fact that it's out of my control, and if I blindly trust that the data will be correct and it's not (maybe someone changes the JSON structure by accident), it could lead to unexpected behaviour in my application. My question is, do you think this is overkill? How does everyone else handle this situation? | Absolutely. For starters, you never know that somebody hasn't hacked into your connection and the reply you receive doesn't come from the API at all. And some time in the last two weeks I think Facebook changed an API without notice, which caused lots of iOS apps to crash. If the apps had verified the reply, the call would simply have failed, without crashing the app. (A very nice case I heard of why validation is needed: a server provided information about goods a customer could buy. For dresses, they included the U.K. dress size as an integer, usually 36 to 52. Except for one dress, the size was a string “40-42”. Without validation that could easily be a crash.) A minimal sketch of such a boundary check is shown below. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/410248",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/87243/"
]
} |
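As a rough illustration of that kind of boundary check, here is a minimal sketch in TypeScript (the question's own code would be PHP; the ApiProduct shape and its field names, including dressSize, are made up for this example):
interface ApiProduct {
  id: number;
  name: string;
  dressSize: number;
}
// Narrow an untrusted response instead of blindly trusting it.
function parseApiProduct(raw: unknown): ApiProduct {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("response is not an object");
  }
  const r = raw as Record<string, unknown>;
  const id = r.id;
  const name = r.name;
  const dressSize = r.dressSize;
  if (typeof id !== "number" || typeof name !== "string" || typeof dressSize !== "number") {
    throw new Error("response does not match the documented structure");
  }
  return { id, name, dressSize };
}
With a check like this, a size of "40-42" (a string where an integer was documented) fails loudly at the edge of the application instead of crashing somewhere deep inside it.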
410,482 | I found this has also happened in my team, although he may have exaggerated the situation a little bit. Scrum is a way to take a below average or poor developer and turn them
into an average developer. It's also great at taking great developers
and turning them into average developers. Everyone just wants to take something easy off the board that you can
get done in a day so you have something to report in tomorrow's daily
scrum. It's just everyone trying to pick the low hanging fruit.
There's no incentive to be smart and to take time to think about
solutions, if nothing is moving across what are you even doing? You're
letting the team down! The velocity is falling! I think if you have hard problems to solve you solve them by giving
them to smart people then leaving them alone. You don't
constantly harass them every day demanding to know what they did
yesterday and what they plan to do today. With daily updates where is
the incentive for the smart people to work on the hard problems? They
now have the same incentive as the junior developer; find the easiest
tickets to move across the board. Sometimes I will want to just be alone and think about a solution for
a few days. If I do that though I'd have nothing to say at the scrum.
So instead I'll pick the user story where the colour on a front end
was the wrong shade of green or a spelling mistake! See, I knocked out
2 stories in one day, before lunch! Go me! ... I don't fully agree with his words. E.g., I agree with what one comment said: it's not that they (managers) don't trust them, it's that they don't get things done without constant supervision. When a great developer becomes an average developer there are always multiple reasons, but I do find the daily scrum could be one of the reasons. So how do I prevent this side-effect of scrum meetings? I also realize it is easier said than done, but I would like to see how others see this problem. ----- update ----- After reading all the answers I have got so far, I realize I need to add some information to make my question more relevant. But before I get into that, I want to repeat the words Martin Maat gave in his answer: "The mere fact that so many people feel the need to say something about it is an indicator of the frustration Scrum causes." I totally agree! When I first asked the question I already expected some answers would be " oh, you don't do scrum right! " Some corrections I want to make to my original question are: I used the word " great developer " and I probably should just have said a decent/good developer, because I have seen that sidetrack answers. Besides, throughout my career I have never worked with great developers, so I shouldn't have used that in the first place. What I meant was that I see from time to time that scrum has made a good developer perform less well. Some answers focus on the sentence "it's that they don't get things done without constant supervision" and believed that was a micromanaging issue. But this was not my case, e.g. I don't micromanage. The problem I experienced (again from time to time) is that good/tech-savvy developers are not necessarily business-savvy ones. Sometimes they will focus too much on perfecting their technical solution without realizing we have a product to deliver in the end. Other times it is a cross-functional feature that needs coordination, especially since each team may have its own priority/schedule. That is why they need supervision. But I guess I shouldn't just have copied the words " constant supervision " from the original post, and should not have used constant in the first place. But again, if someone argues that "great developers" and "great teams" don't do that, I have no counterargument. One answer said "the daily scrum somehow turned into a competition who has completed the most tickets". I never experienced that. A mature team does not do that (a mature team is not necessarily a great team though). Has anyone experienced that? For those who suggested I read the agile manifesto, my counterargument is this long book review I wrote in 2008 (12 years ago) for the book "The Enterprise and Scrum (Developer Best Practices)" by scrum cofounder Ken Schwaber. I listed my review here, not to show off, but to show (1) I believe I have done scrum long enough to see its strengths and weaknesses, and (2) I know what agile is about. | Don't let Scrum become the process which overwhelms everything else My friends and I, who are part of Scrum teams, are not fans of it. The reason is that in being the one process which has a dedicated process manager, it usually bends and breaks every other process to it and becomes this overarching process where you do nothing consistently except Scrum rituals and making those Scrum rituals seem successful. The problems with Scrum are: The sprint (the two week part) comes first.
Because there is someone at the end of the two weeks asking about whether we got what we committed to done, getting tickets to "done" gets prioritized. It means that corners get cut to get the tickets finished. My team doesn't unit test or code review as the sprint ends. On my friend's team, he has thrown in if statements to deal with bugs QA found rather than actually finding the root cause of errors to get the tickets done. This two-week focus can lead to the infinite defects methodology . Obviously in Scrum it needs to pass the product owner, but unless they obsess over edge cases, a lot easily slips through and the developer is not encouraged to consider that very deeply. Scrum and infinite defects can be good friends because the infinite defects approach lets velocity be artificially high as long as bugs are found after the sprint and therefore counted as new work. You could have an ever higher velocity by constantly generating new bugs. Everyone can see your productivity day by day and it becomes an easy evaluation metric. Having a public task board means everyone can see how quickly you are doing stuff, including management. There are key people in my organization who consider me very productive mostly because of how quickly I have moved tickets relative to other people. When developers are judged on that basis, a lousy implementation that passes QA and a well-tested, well-architected implementation are equivalent. That is where stars can be reduced to seeming average as that board + velocity becomes how developers are judged and becomes what they focus on. Teams do not actually self organize usefully in many cases. Scrum goes on about "self-organizing teams." I wrote another answer about this. . In summary, team members are going to do things the way they prefer/are incentivized to do and unless that adds up to a useful team, lots of things do not get done and team members just keep marching on over the mess. Teams might self organize if they all have the same goal and incentives. The problem is, that is rarely true. One guy wants a promotion. Another is studying for a degree on the side. A third is upskilling to go to another company. Another just doesn't want to have arguments so agrees to anything and lets the codebase become a mess. A lot of good design requires the developers to sit down and hash out how a thing should work. In Scrum, you need to clear tickets and there is no real check on the quality of the work as "done" or "not done" is decided by a usually non-technical project owner. That incentivizes going into a void and focusing on outputting code. Tickets/user stories rapidly become architecture. Because the developers are independently working away on each ticket sequentially, the architecture rapidly begins to mirror the tickets. The tickets are typically 1-2 sentence user stories. Ticket driven architecture rapidly gets messy simply because more code gets piled on as required. The high level of developer independence means each developer takes different approaches. Consider sorting of an object. You can do that in the frontend in JS, in the backend in Java, or in the SQL itself and if you are time-constrained, you will choose whichever method you can most easily implement. While it is not necessarily wrong, it makes debugging a heck of a lot harder as more places need to be checked. Standup is effectively "update management" . The notion that standup is for developers is absurd. 
Does anyone actually wait until 9AM to report a problem or are they going to just ask in the group chat immediately? In practice, it is someone higher up the food chain keeping tabs on how fast things are moving so they can ask about it later in the day. Great developers are usually defined by their ability to develop robust code. Unless the product owner is technical, Scrum massively devalues that as the product owner isn't evaluating the code. It is feature driven and "it runs" is a functional standard for the provided user stories. Great developers are usually defined by their ability to write code which has value both now and in the future. Scrum projects think of everything in two week periods. There is no future. Great developers are usually defined as those who can solve tough problems. Scrum encourages picking work that can easily be done and rapidly churned out at a steady pace. A tough problem is a developer being slow on getting the tickets done. Great developers are often sought out for advice and for second opinions. But any time doing that is less time spent churning out tickets, so their velocity falls. Even if you get a situation where you are not formally judged on the points completed (which will not happen if management is mostly interacting during Scrum rituals as that is all they have to see regarding progress), people are still going to compete for attention and rewards. To resolve this, I would eliminate individual velocity scores and the presence of management at standup (otherwise developers are strongly incentivized to always have good news), and I would tell management that the second they praise a dev or give them a raise based on ticket volume, they radically change behavior. Ideally, the product owner would also not be a direct manager and thus someone the devs are incentivized to look good for during sprint review and sprint planning. The problem is, you are fighting the nature of Scrum as it primarily cares about velocity. What gets measured is what gets focused on, and what Scrum measures is speed of output, with the output judged from the user side only by the product owner. That metric does not value many behaviors associated with great developers.
"source": [
"https://softwareengineering.stackexchange.com/questions/410482",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/217053/"
]
} |
410,595 | This question may sound strange to you, but I am learning C++ all by myself. I have nobody whom I could ask for mentoring and I would be very glad for some advice. I have recently started programming in C++ (about 3 - 4 intensive months with about 6 - 8 daily hours). My background is Java and I have done some bigger projects in Java with over 10k LOC (which is big for a university student like me). I used C++ mainly for implementing algorithms and for visualization, but I aim for bigger software projects as well. The only libraries I have used are Catch2, OpenCV, and a little Boost. The strange thing about my programming style is that I have never used pointers in my journey; it is not like I don't know how to use pointers, but I just never found a moment where I thought a pointer would be useful. When I have to store primitive data, I prefer std::vector over an array. When I need to use an object of a class, I prefer to create the object on the stack and pass it by reference; no new/delete, no smart pointers. The reason why I ask this (strange) question is that I feel like I am missing a big area of C++ programming. Could you share with me your experience and maybe give me some tips? | It may be a good idea to avoid pointers, using object copies and references whenever possible, like you do. Continue this practice. However there are a certain number of things that can go awfully wrong if you're not extremely careful: Objects "on the stack" (the C++ terminology nowadays for this storage class is "auto") must stay valid when you use their reference. This generally works well if you pass a reference to a function. But returning a reference back is doomed to fail: the object will be destroyed immediately after the return and using the reference is then UB. The same kind of issues occur when you inject a reference into an object: it's a ticking bomb. You cannot use polymorphism with containers. So if you never use pointers, but have vectors of classes with virtual function members, your code might not work as you think because of slicing . These are extremely nasty bugs and are a common mistake when new to C++ with a Java background. There are also some very common OO design patterns that are not possible without pointers, such as the factory method pattern. Avoiding pointers should not be an end per se. If you're working with visualisation, I guess that polymorphism may be your friend. And here pointers can unlock the situation. The good news is that smart pointers nowadays can safely manage the memory for you. So yes, your practice may very well work. But it might contain some unnoticed bugs. And sooner or later you'll miss very useful features. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/410595",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/312915/"
]
} |
410,724 | AFAIK, Option type will have runtime overhead, while nullable types won't, because Option type is an enum (consuming memory). Why not just mark optional references as optional? Then the compiler can follow code execution and find wherever it can no longer be null. Edit : I see I was misunderstood. I understand and agree with the advantages of avoiding null pointers. I'm not talking about arbitrary pointers that accept null . I'm only asking why not use compile-time metadata, like C# 8's nullable reference types and TypeScript with strict null checks, where default pointers can't be null and there's a special syntax (mostly ? ) to indicate a pointer that can accept null. Edit 2 : Also, Some is strange, in my opinion. Implicit conversion would be better. But that's a language feature and not relevant. | The purpose of Null Tracking in general (of which Nullable Types are only one of many different forms) is to somehow regain a modicum of safety (and sanity) in languages that have null references. If you have the chance to eliminate null references altogether, that is a much better solution since the problems that null references cause simply will not exist in the first place. Sir Tony Hoare has famously said that he considers inventing the Null Reference his "Billion Dollar Mistake", which is actually a quite conservative estimate of the total costs that null references have caused until today. If even the person who invented them considers them a mistake, why would you willingly put them in a language? C# has them because, well, they probably didn't know any better, and now they can't get rid of them because of backwards-compatibility. TypeScript has them because its semantics are based on ECMAScript's, which has them. The real beauty of an Option type, though, is that it is isomorphic to a collection that can only hold zero or one elements. Dealing with collections is one of the most important parts of programming, and thus every language in the world has powerful collections libraries. And you can apply all of the work that has gone into collections also to Option types. For example, if you want to execute an action with an option, you don't need to check whether it is defined! Every collection library on the planet has a way of iterating over a collection and executing an action for each element. Now, what does "executing an action for each element" mean for an Option ? Well, if there is no element, then no action is executed. And if there is one element, then the action is executed once with that element. In other words, foreach acts exactly like a NULL check! You can just blindly do mightExistOrMightNot.foreach(println) and it will print out the value contained in the Option if it exists and do nothing if it doesn't exist. The same applies when you want to perform a computation with the value. Every collections library on the planet has a way of iterating over a collection and transforming each element. Again, for an Option "transforming each element" translates to "transform the value or do nothing". So you can just do val squared: Option[Int] = mightExistOrMightNot.map(x => x * x) Also, collections libraries have ways to flatten nested collections. Imagine you have a long chain of references, each of which could be NULL , and you wanted to access the last reference in that chain.
With nested Option s, you just write longListOfReferences.flatten And if you want to get a value out of an Option , then you can simply write mightExistOrMightNot.getOrElse(42) and you will either get the value inside the option if it exists, or a default value of your choosing if it doesn't. The only reason, really, for you to explicitly check for the existence of an Option is if you want to do something completely different in case the value is missing. It turns out that Option is actually even more than "just" a collection. It is a monad . Languages like C#, Scala, and Haskell have built in syntax sugar for working with monads, and they have powerful libraries for working with monads. I will not go into details about what it means to be a monad, but e.g. one of the advantages is that there are some specific mathematical laws and properties associated with monads, and one can exploit those properties. The fact that Java's Optional is not implemented as a monad, not even as a collection, is a significant design flaw, and I think is partially to blame for people not understanding the advantages of Option s, simply because some of those advantages cannot be realized with Java's Optional . There is also a more philosophical reason for choosing an Option type over NULL references. We can call this "language democracy". There is a major difference between those two: NULL references are a language feature whereas Option is a library type . Everybody can write a library type, but only the language designer can write a language feature. That means that if for my code, I need to handle the absence of values in a slightly different manner, I can write a MyOption . But I cannot write a MYNULL reference without changing the language semantics and thus the compiler (or, for a language like C, C++, Java, Go, ECMAScript, Python, Ruby, PHP with multiple implementations, every single compiler and interpreter that exists, has existed, and will ever exist ). The more the language designer moves out of the language into libraries, the more the programmers can tailor the language (really, the library) to their needs. Also, the more the language designer moves out of the language into libraries, the more the compiler writers are forced to make library code fast. If a compiler writer figures out some clever trick to make NULL references fast, that doesn't help our hypothetical programmer who has written their own abstraction. But if a compiler writer figures out some clever trick to make Option fast, it is highly likely the same trick will also apply to MyOption (and Try , Either , Result , and possibly even every collection). Take Scala, for example. Unfortunately, because it is designed to interoperate and integrate deeply with the host environment (the Java platform, the ECMAScript platform, there is also an abandoned CLI implementation), it has null references and exceptions. But, it also has the Option type which replaces the former and Try which replaces the latter. And Try first appeared in a library of helpers released by Twitter. It was only later added to the standard library. Such innovation is much harder to do with language features. I can write my own Scala Option type, and I don't need to change the compiler for it: sealed trait Option[+A] extends IterableOnce[A]:
override def iterator: Iterator[A]
override def knownSize: Int
def isEmpty: Boolean
def getOrElse[B >: A](default: => B): B
def foreach[U](f: A => U): Unit
def map[B](f: A => B): Option[B]
// … and so on
final case class Some[+A](value: A) extends Option[A]:
override def iterator = collection.Iterator.single(value)
override val isEmpty = false
override val knownSize = 1
override def getOrElse[B >: A](default: => B) = value
override def foreach[U](f: A => U) = f(value)
override def map[B](f: A => B) = Some(f(value))
// … and so on
case object None extends Option[Nothing]:
override def iterator = collection.Iterator.empty
override val isEmpty = true
override val knownSize = 0
override def getOrElse[B](default: => B) = default
override def foreach[U](f: Nothing => U) = ()
override def map[B](f: Nothing => B) = None
// … and so on
@main def test = Some(23).foreach(println) Try it out here . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/410724",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/366741/"
]
} |
411,021 | I'm told that software is everywhere and therefore used in other domains. My question is: if you're a software engineer working on software for lawyers or software for biologists, when do you actually get the time to learn about the other domain you're impacting? How can you make software for lawyers if you're not familiar with the jargon? UPDATE : I see comparisons made with journalists. I think that journalism is not a good example. Often the journalist writes on a topic he/she does not understand and it comes off as superficial (sometimes even wrong). Software is much more complex. | Software is a knowledge-intensive area. And a big part of the software engineer's work is to extract the domain knowledge from the users and domain experts, abstract it, and transform it into implementable data structures and algorithms. For example, the best introduction I ever got to legal principles and law was not from a lawyer or a law professor (I followed some courses), but from an AI researcher who worked on modelling legal concepts for an expert system (sorry, this was 30 years ago, and rule-based expert systems seemed very promising at that time). His explanations were so crystal clear and logical... So learning about the domain is part of the job and not something that you would do overnight outside working hours. All you need is an open mind, and fearless questioning. Moreover, your knowledge will develop iteratively and incrementally exactly like the software you write (since the software embodies this knowledge): learning about requirements enables you to model, design and implement something, to experiment with it, to exchange with users, and to improve it again and again. But caution: you also need to remain modest: it's not because you are able to design a flight system that you can hope to replace the pilot and fly on your own ("don't try this at home") ;-) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/411021",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/332856/"
]
} |
411,082 | I have the following application: This application receives an API call (HTTP), does some internal work that usually includes reading or writing to a database that has 5 different tables, generates an XML file and then sends it to another system via a REST interface. This has around 5K lines of code. I'm now studying the microservice architecture and want to see how my application would/should look if I were using it. After reading a lot of stuff, I came to two different solutions: In the first one, each microservice only serves one or a very few HTTP requests. This is good because if one microservice is down, only a small part of my previous big app is down, but the rest is still working. However, there is a lot of code duplication between the microservices. For example, each of them has more or less the same code to generate an XML template or send requests to the southbound system. I know it can be overcome by using a shared library, but then each microservice needs to use the same programming language. In the second one, I split the parts of my app into smaller services. Here, I avoid code duplication and each service can use a different programming language, but if a single microservice is down, everything stops working. Moreover, this looks like a monolithic architecture, because I'm just separating the layers with HTTP requests instead of a class/function interface. It seems maybe a bit easier to understand and maintain, but it doesn't really give anything more. I would like to have your opinion on my personal use case, as I really struggle to understand how an application in a microservice world should be designed without a concrete example. | The objective of microservices is to provide independently deployable, loosely coupled, lean services . This means that you shall be able to change one of the microservices however you want, and can deploy it in production without changing the others. Your scenario 1 indeed breaks the monolith down into smaller pieces. But these are not independently deployable: However, there is a lot of code duplication between each microservice. (...) I know it can be overcome by using a shared library. If you change some functionality in the shared code, you are no longer sure that it's interoperable with the other service. And if the shared library is changed, you're no longer sure that the other service still compiles and could be patched for an emergency issue within minutes. Your focus seems to be reliability and continuity of the microservices that you have extracted from your legacy monolith. But the common code weakens the independent continuity of services: if you have, for example, a vulnerability in your shared library, both services could easily be disrupted by the same hacker. Or suffer from the same nasty UB bug. The reliability achieved with microservices is based on true independence on the functional axis. The focus is also more on scalability (see the scale cube ): microservices can offer horizontal duplication (i.e. several service instances for the same API - if one breaks down, the clones continue to work), not to speak of partitioning. For scenario 2, you could consider implementing fault tolerance via horizontal scalability, i.e. having several instances of the same service running on different nodes. It's a tough change since the services have to find each other dynamically. But then, if one of the service instances is down in the chain, the other services upstream can find another working instance; a minimal sketch of such a failover call is shown below.
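A minimal sketch of that idea in TypeScript, with made-up instance URLs and no real service-discovery library, using the standard fetch API available in browsers and recent Node versions:
// Hypothetical list of registered instances of the same downstream service.
const instances = [
  "http://billing-1.internal:8080",
  "http://billing-2.internal:8080",
  "http://billing-3.internal:8080",
];
// Try each instance in turn; the caller keeps working as long as one clone is up.
async function callWithFailover(path: string): Promise<Response> {
  let lastError: unknown = new Error("no instances configured");
  for (const base of instances) {
    try {
      const response = await fetch(`${base}${path}`);
      if (response.ok) {
        return response;
      }
      lastError = new Error(`instance ${base} answered ${response.status}`);
    } catch (error) {
      lastError = error; // this instance is down, try the next one
    }
  }
  throw lastError;
}
In a real system the instance list would come from a service registry or a load balancer rather than a hard-coded array, but the principle is the same: the upstream caller is not tied to a single clone.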
A trick for achieving this is to move from synchronous communication to asynchronous communication via event queues or a message broker. It's difficult, based on the few elements you provide, to confirm whether approach 2 is the best one. Perhaps another decomposition strategy could result in even lower coupling. I could for example think of this one, which combines both of your approaches: migrating your common XML generator features into a "Service Proxy" (the name is arbitrary: it seemed to be a relay between your new microservices and other applications), splitting the core of the monolith into highly cohesive but loosely coupled capabilities , and hiding the details of the split from the outside world behind an API gateway : | {
"source": [
"https://softwareengineering.stackexchange.com/questions/411082",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/367475/"
]
} |
411,106 | I'm a junior engineer, but I've worked at two companies now. In both cases I've found that my colleagues and I are assigned tasks with no thought put into the design before we are told to do them. Typically I spend about 90% of my time writing down on paper how I'm going to do the task, then the code sort of writes itself in maybe a day, sometimes two if I run into difficulties with an API. Is there something strange about this picture? Is the design supposed to be the work? I don't really begrudge it, I quite like the designing of things. But it seems to me it would be more productive to determine the design of everything to be done together as a team, and then everyone can go away and do it pretty quickly; this would put everyone on the same wavelength, and so there would be fewer instances of interfaces not being as you'd like. I feel that once the design is done, the coding is essentially a series of 'write a function which takes in X and does Y', which anyone can easily do except perhaps in the case of some complex algorithm being required. I am thinking of asking my team if we could try designing things up front, before assigning the work. But I am not sure if they will be up for this, or if it's a ridiculous idea and that's why I've not seen anyone do it. | If you are assigned a task that has no design done for you then doing the design is part of your task. This is not unusual. Design may have been done at some level but now the task needs its own design. Now please don’t take this as license to spend weeks alone doodling UML. You should create enough design so that you can see and communicate your plan. Nothing more. You should also be prepared for the plan to fall apart. When you code and test you learn. Don’t rob yourself of that time to learn by making perfect diagrams no one asked for. Done well, a design ensures you're pointed in the right direction before you put tons of effort into something no one wants. It's a check on how well you understood your task. Don't defeat this by putting too much effort into the design. Do just enough to show your thinking. You know the design process is working when you’re discovering mistakes, disproving assumptions, and discovering work. If all you’re doing is making pretty pictures then it’s time to move on. A great way to do that is to show your design to the team. Communicate well and they’ll help you find mistakes, bad assumptions, and hidden work. Listen, learn, and when needed, try again. It’s like making popcorn. When the popping stops it’s time to move on. Presenting a design can involve the whole team but devising one is usually one or two people working on some small bit of territory that’s been carved out for them. Be sure to learn how to communicate with the other territories and you’ll find you don’t need the whole team for every detail. Design is important regardless of the task or the team's development methodology. Even in agile. Even in chaos. It may be a screen mockup. It may be pseudo code. It may be stating your task in your own words. But, like every other development step, it must be useful, or it’s a waste of time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/411106",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/357939/"
]
} |
411,428 | Everyone creates bugs, including me and my teammates. When bugs are pointed out to them, they're friendly and try to fix the bug. But their fix is 'wrong' and just creates a more subtle bug. Usually this takes the form of them thinking that the bug is an edge case, and they put some special check in. Sometimes they will add an extra boolean parameter to something that gets passed around. Repeat this several times over and there are 5 booleans getting passed around. The most irritating thing is when they put things in completely the wrong place because they don't know how to modify some generic thing to handle their use case. So they just avoid calling the generic code entirely and write lots of their own stuff. This creates absolute spaghetti code, and they end up duplicating a lot of code to handle what they perceive as special cases. I'm a junior developer, the same as them, and we have no-one experienced on our team. It's all juniors. To make matters significantly worse, everyone always just merges into master. Usually the reason I find bugs is that I look at a recent commit to master, and I can see in a few seconds that they've introduced a bug. What can I do? Showing them the bug and pointing out that their fix isn't right until they get it right under my guidance: It feels patronising; I'm not really superior to my peers, all of us are new. Maybe I have picked things up faster than them and I know what they're doing is wrong. It is time consuming. Sometimes they make such a mess that I feel like I have to tear it all down and do it better, which obviously is delicate. It makes me want to just fix the bug myself as soon as I see it, rather than report it to them. But I know this is wrong. Both because it's insulting and also because it makes more work for me. | Test your code Have actual regression testing in place, so that when a bug is introduced, the programmer finds it right away. This doesn't mean that you should just write the actual tests. Instead, it means that you need to work on the culture of the team to understand that without regression testing, the project is doomed. I wrote tests, my teammates then change the tests so that they pass while being incorrect, at one point someone actually disabled the tests. This, clearly, is a sign that the culture is all wrong. It might be that the culture is wrong inside your team only, or it may be that it's wrong at the management level as well—for instance, someone may have disabled the tests because he was told to do so by the management, after complaining that he can't urgently deploy a bugfix to production because the tests are red. In all cases, there is work to do to have a solid build pipeline, and to have a team where everybody understands that a change cannot be deployed to production without testing code for regressions. Do code reviews By doing systematic code reviews, you can eventually get rid of: This creates absolute spaghetti code, and they end up duplicating a lot of code to handle what they perceive as special cases. No code should go to production untested and unreviewed when written by a junior programmer. Don't hire junior programmers if you don't know how to work with them Imagine that you're an owner of a hospital, and you decide to hire only interns. No doctors with twenty years of experience. Nothing like that. They are costly, after all, and any intern should be able to do exactly the same thing, right? Junior programmers will build crap. Not because they are bad persons, but because: They don't have enough experience.
They qualify themselves as programmers, i.e. they write code. This is not the most important part of the project, and a project which is composed of programmers only will fail. In order to improve, they need help from more experienced software engineers. Your management decided that the project you are working on is not important for the company. They actually decided it, because they hired junior programmers to save money. What they didn't know is that not only will the project fail, but it will cost a lot of money, much more than if they had hired a few experienced software engineers. From there, you have to decide what to do. Either you can convince your management to stop being stupid. Or you can't. If you can't, you may either continue working on a doomed project, while trying to keep your head low (when the management finds that the project is failing, they will search for culprits, and if you take initiatives, you will be the culprit), or you can ask to be moved to a project which is more important for the company. Working with junior programmers requires skills and practice. If you have nobody with those skills, hiring junior programmers is as stupid as hiring only interns in a hospital and hoping everything will be just fine. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/411428",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/368068/"
]
} |
411,585 | As a programmer I have found my code frequently elicits the reaction "I don't understand". Whenever I get this response I try my best to explain my code patiently, and not make anyone feel afraid to ask questions. I am pretty sure I've got the second part right, people are certainly not afraid to ask questions about my code! However I have good reason to believe my explanations are not effective. I routinely have hour-long discussions trying to explain my code, and on many occasions the conversations have ended with my coworker saying they still don't understand, but they have somewhere else to be (lunch, or home, or a meeting, etc). I believe this is a problem with my code, as I cannot recall the last time someone else's code has taken an hour of explanation to understand. As well, I rarely see my coworkers spending anywhere near as much time explaining their code to each other. Specifically, when posed with the question "I don't understand your code", what are some strategies I can use to explain my code? I have previously employed the following follow-up questions, and I am looking for better, or at least more, follow-up questions: What part specifically seems to be confusing? Sometimes this works, but often the answer is "the whole thing". I have been in meetings with 5 other programmers, where all of the programmers agreed they didn't understand my code, but none of them could give any specific parts which were confusing. Are you familiar with pattern "X"? I have tried to learn the names of the coding patterns I tend to use. I will bring up these names, such as "the visitor pattern", and ask them if they are familiar with this pattern. If they are familiar with it I try to show them how my code is an implementation of that pattern. This seems to stop them from immediately asking more questions, but invariably we seem to come back to the same code, and so I am afraid that while they fully understand the pattern, the connection between the pattern and my code is not obvious. What are some solutions to problem "X"? Sometimes I try to get them to actively engage with solving the general problem, hoping that if they explain how they would solve it, I can show them the parallels between their solution and mine. This works; however, oftentimes the problem is a bit too complicated to just solve in your head, and so they can't quickly describe how they would solve it. ADDITIONAL INFORMATION: The code I work on most frequently is framework/architectural code, often legacy code which no one presently with the company is familiar with. My team is very busy, and while they are patient, they honestly don't have the time to help me work through legacy code. As a result my approach has been to fully understand it myself, and then try to explain it to my team during team meetings. They will be interfacing with it though, and they interface with the existing code on a daily basis. An example of this type of code would be our log pipeline, which takes browser errors, server errors, service errors, http logs, javascript logs, web logs and correctly joins time with session information, going through a few steps before the data eventually ends up in splunk. It isn't exactly complicated, but it also isn't exactly trivial, as the servers need to handle tens of millions of logs per day, without any significant impact on server performance (our servers are already more expensive than my yearly salary). CODE SAMPLES (Please excuse the text dump.
I tried to keep it short, but code samples seem like the best way to demonstrate my problem). I put together a code sample of one piece of code that seemed to confuse my teammates the most. I no longer work at the company, so it isn't the exact code, and the exact code was scrapped anyway (it confused everyone, so we all agreed no one should use it). A bit of background: our company was beginning a major rewrite, converting from a legacy framework to React/Typescript/Redux. We were regretting using Redux, but due to our browser support restrictions we were unable to use Mobx. As a result we were using Redux poorly, trying to make it work like Mobx, or KnockoutJS. The majority of our reducers simply set state, with the caller knowing exactly what they wanted to set (not how Redux actions/reducers should work). However, due to time constraints, we simply could not switch frameworks, and had to make Redux work. That was at least 3-4 years ago, and I would be surprised if the team was still using Redux now. (I've linked to the Typescript playground for my code, as it is a bit long for a question) An example of existing code can be found here: original code I am opposed to this style, as although it is clear, it requires changing 4 pieces of code (spread across 3 different files) to add a variable. The steps to adding a new variable are: update the state definition, add a new action, add to the actions union, and add a reducer handler. I made a builder class (a term I may not be using correctly; basically it is like yargs, https://www.npmjs.com/package/yargs , where you make a series of chained function calls to create a more complex object) that makes it possible to only add properties in one place, while preserving the types of everything. (This was before Typescript mapped types, which provide alternatives to the builder approach). A recreation of my proposed code can be found here: changed code | Personally, I have encountered multiple variants of code-that-is-hard-to-understand, and each needs a different way of coping with it: Simply messy. Formatting is off, variable names are not clear, convoluted control structures, methods with multiple responsibilities. => Learn cleanliness, use a formatter. Patternitis: applying patterns for each and every aspect of the code. Instead of a simple "if", apply a subclass strategy with two visitors derived from an abstract visitor, which is created by two factories which are derived from an abstract factory, which is selected via a strategy... (search for "FizzBuzz enterprise edition" on the net) => Understand the basis of patterns. A pattern is not a standard way of solving things, it is a name that you tack on a solution that you found. You don't go from a problem to "which pattern might I apply here", you go from a problem to a solution, and then say "oh look, my solution follows the form of a visitor pattern, it has a name!" Abstraction mess: instead of having a simple class that does a thing, have two to five abstract base classes, which results in a control structure where a simple function call goes through abstract and overridden methods in subclasses all over the place. => YAGNI. Embrace this piece of extreme programming. There is no "I might need this in the future, so I split it off now". Misunderstood clean code: "good code needs no comments", and then writing code that is not self-explanatory, but without any comments b/c "it's good". => These are the hardest to crack. If anyone knows a solution, I'd love to hear suggestions myself.
Mathematician's code: looks like a proof on a whiteboard. No variable name longer than a single character, no comments, no explanation. => Teach the mathematician the values of software development, and hand them a copy of "clean code". What many junior programmers don't understand at first is that the greatest value in software is SIMPLICITY. Don't try to be clever, don't try to optimize runtime (at least, not until you actually find a concrete problem), don't add an extra abstraction because you might need it in the future. Always do the simplest thing that solves the problem at hand. No more. No less. Seemingly, the part about the "misunderstood clean code" needs some clarification. I never meant to tell anyone personally that good code needs no comments. The remark comes from the following situation, which I often encountered with some ex-colleagues: Programmer A: I have written cool code, I understand it. As I have read the book "clean code", I know that comments are not necessary for self-explanatory code, therefore I do not comment. Programmer B: I don't understand a single line of what you have written. Programmer A: Then you are not smart enough to understand good code. The problem here is that Programmer A does not see his own mistake, but blames it on a lack of understanding on B's side. As this is his understanding, he'll probably never change his ways, and continue to write mumbo-jumbo which only he understands, and refuse to comment it, as he sees it as plainly self-explanatory. (Unfortunately, nobody else shares that view.) Regarding your code samples: I am not really proficient in TypeScript, so frankly, I don't exactly understand the finer points of what you have done there. (Which probably already points to the first problem.) What I can see from a first glance and a few line counts: You have replaced 40 lines of perfectly readable code (heck, even I can understand that) with roughly 60 lines of hard-to-understand code. The resulting change in usage is probably something along the lines of: // old
let v = userReducer(x, y);
// new
let v = new ReducerFactory().addStringProp("x").addStringProp("y").createReducer(); So, the question is "why?". Let us assume that you have taken half a workday to do the concept, the implementation, and the testing. Let us further assume,
that one developer day costs $1000. It is quite well known that code that must be maintained has a much higher cost of ownership than the price of initial development. From
experience, a good guess is times ten for simple code, and times twenty for complicated code (which I apply here.) Therefore, you have taken $500 * 20 = $10000 of company money to create which business value? That the creation of a given object is somewhat
"more elegant" in your personal view? Sorry, as I see it, you do not need arguments to explain what you have done. You need education and experience in software architecture, where you learn
to put value on the right things in business. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/411585",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/368290/"
]
} |
411,757 | I've been trying my hand at building apps with Flutter and Dart . I noticed in my apps that if someone decompiled my app they could access a whole lot of things I didn't want them to access. For example, if I am calling my database to set the users 'active' status to False when they cancel their plan they could just comment out that bit of code and they get access to the entire app again despite having cancelled their plan. Since this is my first app, my backend is Firebase . The app handles everything and calls Firestore when it needs to read or write data. Is this something to really worry about? If so should I be using something like Firebase Cloud Functions? Should I be creating a proper backend? If so what would its structure be? Would my app just be a client for the backend? | I used to be a full-time binary reverse engineer, and I still spend about 80% of my time reverse-engineering software (legally). There are some good answers here already, but I wanted to add a few touches. Legal Aspect I'm not a lawyer. But as far as I'm concerned (and many others too), reverse-engineering doesn't really become legally enforceable until you've done something with the knowledge. Think about this situation: Say I'm a reverse engineer and I download your app. I disconnect my "lab" machine from the network. Now, I decompile, disassemble, and debug your app, taking detailed notes of how it works. After doing all of this, I wipe out my lab machine and it never sees the network. Then, I do nothing with that knowledge, because it's a weekend hobby and I just enjoy decompiling things. It's debatable whether or not this is illegal and more importantly, it's unenforceable. There is no way you, your lawyer, or anyone else will ever know that I did this unless I'm already suspected of copyright violations, patent violations, or some other crime. Even if you sued me, what would you sue me for? I never published, distributed, advertised, told anyone, or did any kind of monetary damage to your business whatsoever. What would your "damages" be? For this reason, the vast majority of the time (view that EFF page that was linked in a comment earlier), real prosecution stems from cause of some (usually major) perceived loss by the software development firm or copyright/patent holder. The trick is that a reverse engineer may actually use some of the knowledge that he/she/they learned from your app code, and do things that will be hard for you to detect or prove. If a reverse engineer copied your code word-for-word, and then sold it in another app, this would be easier to detect. However, if they write code that does the same thing but is structured entirely different, this would be difficult to detect or prove, etc... Learn about who would target your app and why? What are the type of people who would like to reverse engineer your app? Why? What would they get out of it? Are they hobbyists who enjoy your app and could potentially even be helping your business by fostering a community of hacker enthusiasts? Are they business competitors? If so, who? What is their motive? How much would they gain? These questions are all very important to ask because at the end of the day, the more time you invest in locking down your code, the more costly it is to you and the more costly it is to the adversary to reverse engineer. 
You must find the sweet spot between spending some time on application hardening to the point where it makes most technical people not want to bother spending time trying to thwart your app's defenses. Five Suggestions Create a so-called "Threat Model." This is where you sit down and think about your application's modules and components, and do research on which areas would most likely be compromised and how. You map these out, many times in a diagram, and then use that threat model to address them as best as you can in the implementation. Perhaps you model out 10 threats, but decide that only 3 are most likely, and address those 3 in the code or architecture. Adopt an architecture which trusts the client application as little as possible. While the device owner can always view the app's code and the network traffic, they cannot always access the server. There are certain things you can store on the server, such as sensitive API keys, that cannot be accessed by the attacker. Look into "AWS Secrets Manager," or "HashiCorp Vault," for example. For every client module, ask yourself "Would it be ok if an attacker could see the inner workings of this?" "Why not?" and make necessary adjustments. Apply obfuscation if your threat model requires it. With obfuscation, the sky is the limit. The reality is, it is an effective protection mechanism in many cases . I hear people bashing on obfuscation a lot. They say things like Obfuscation will never stop a determined attacker because it can always be reversed, the CPU needs to see the code, and so on. The reality is, as a reverse engineer, if whatever you've done has made cracking into your app take 2-3 weeks instead of an hour (or even 3 hours instead of 5 minutes), I'm only cracking into your app if I really really want something. Most peoples' apps are frankly not that popular or interesting. Sectors which need to take extra measures would include financial, government, video game anti-hacking/anti-cheat, and so on... Furthermore, the above argument is nonsensical. Cryptography doesn't stop people from getting your data, it just slows them... Yet you're viewing this page right now over TLS. Most door locks are easily picked by a skilled lockpicker in seconds, people can be shot through bullet-proof vests, and people sometimes die in a car accident when wearing a seatbelt... So should we not lock doors, wear vests, and wear our seatbelts? No, that would be silly, as these devices reduce the likelihood of a problem , just like obfuscation, symbol stripping, developing a more secure architecture, using a Secrets Manager service to store your API secrets, and other hardening techniques that help prevent reverse engineering. Say I'm a competitor and I want to learn how to make an app like yours. I'm going to the app store and searching for similar apps. I find 10 and download them all. I do a string search through each. 7 of them turn up nothing useful, and 3 I find unstripped symbols, credentials, or other hints... Which apps do you think I'm going to be copying? The 3. You don't want to be those 3. Scan your source code for sensitive strings such as API secrets, sensitive keys, admin passwords, database passwords, email addresses, AWS keys, and so on. I usually search for words like "secret", "password", "passphrase", ".com", "http" using a tool called ripgrep . There will be false positives, but you may be surprised at what you find. 
There are automated tools which help accomplish this, such as truffleHog After you build your application, run the strings utility or a similar utility on it. View the output both manually and using a text search like ripgrep or grep . You'll be surprised at what you find. Know about deobfuscators and look for them Lastly, know that various obfuscators out there have deobfuscators and "unpackers." One such example is de4dot , which deobfuscates about 20 different C#/.NET obfuscator outputs. So, if your idea of protecting something sensitive is just using a commodity obfuscator, there's a high chance that there is also a deobfuscator or other folks online who are discussing deobfuscating it, and it would be useful for you to research them before you decide to use an obfuscator. Why bother obfuscating when I can open de4dot and deobfuscate your entire program in 2 seconds by searching "[insert language here] deobfuscator?" On the other hand, if your team uses some custom obfuscation techniques, it may actually be harder for your adversaries because they would need a deeper understanding of deobfuscation and obfuscation techniques aside from searching the web for deobfuscators and just running one real quick. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/411757",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/368571/"
]
} |
411,877 | Reading the Google C# style guide I came across this: Generators vs containers Use your best judgement, bearing in mind:
Generator code is often less readable than filling in a container.
Generator code can be more performant if the results are going to be
processed lazily, e.g. when not all the results are needed. Generator
code that is directly turned into a container via ToList() will be
less performant than filling in a container directly. Generator code
that is called multiple times will be considerably slower than
iterating over a container multiple times. I had little trouble understanding most of the guide, but here I simply don't know what they are talking about! What is "generator code"? | This code is a generator (Microsoft documentation refers to these as Iterator Methods; see also yield (C# Reference)): public IEnumerable<string> GetHelloWorld()
{
yield return "Hello";
yield return "World";
} It is a method that generates an iterator enumerable. They are evaluated lazily, as you probably are aware. That is mentioned in the guideline: Generator code can be more performant if the results are going to be processed lazily, e.g. when not all the results are needed On the other hand, we can fill a container and return it: public IEnumerable<string> GetHelloWorld()
{
var list = new List<string>();
list.Add("Hello");
list.Add("World");
return list;
} Not the best way to write that (we could have used a Collection Initializer , for example), but you get the idea. This code is eager. A container is just any collection or similar type that contains items. That is why they tell you this: Generator code that is directly turned into a container via ToList() will be less performant than filling in a container directly. I talk a little more about which one to use in my answer to yield return vs without yield return . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/411877",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/329278/"
]
} |
411,882 | I am looking for a way to design a system that can provide linear, incremental invoice number counting across a scalable system. At this time, I have four pools of two servers (two pools for Europe and two pools for America -> eight servers in total). This system handles sports subscriptions and can generate invoices at two moments: When the customer subscribes for the first time, throughout the day, especially before a match or a sports event. When the subscription reaches its end date and renews, we run a batch that seeks all subscriptions to renew at the rate/parallelization we want. All these subscriptions are distributed through an AMQP broker and federated exchanges, e.g.: pool1 handles all subscriptions coming from the load balancer (lb1) and puts them into a queue (queue_pool1); later, the queue is drained by the servers of pool1 and the subscriptions are handled in pool1, except when queue_pool1 is full, in which case the excess messages go through the federated exchange that feeds pool2's queue (queue_pool2). At this time, we use a linear counter based on a timestamp plus microseconds, converted from base 10 to base 36; when we generate it, we put it in a shared memcached, and a check is performed against this memcached before the generated invoice number is used. But for legal purposes we have to change it to an incremental counter. We can use a dedicated counter with a prefixed indicator like I1-NNNNNNN, I2-NNNNNNNN, etc., but there must be no holes between invoice numbers. (Best would be a unique counter for all servers.) When we run renewals, the rate is around 20/40 subscriptions in parallel. Depending on our payment partner's API, we are at 10-20 renewals per second. We can control the rate. When we have a big event, we are at around 5 to 10 subscriptions per second. We can control the rate, but if the rate is too slow, customers will experience delays in their service activation. Each pool handles around 1000 requests per second for all incoming traffic. | This code is a generator (Microsoft documentation refers to these as Iterator Methods; see also yield (C# Reference)): public IEnumerable<string> GetHelloWorld()
{
yield return "Hello";
yield return "World";
} It is a method that generates an iterator enumerable. They are evaluated lazily, as you probably are aware. That is mentioned in the guideline: Generator code can be more performant if the results are going to be processed lazily, e.g. when not all the results are needed On the other hand, we can fill a container and return it: public IEnumerable<string> GetHelloWorld()
{
var list = new List<string>();
list.Add("Hello");
list.Add("World");
return list;
} Not the best way to write that (we could have used a Collection Initializer , for example), but you get the idea. This code is eager. A container is just any collection or similar type that contains items. That is why they tell you this: Generator code that is directly turned into a container via ToList() will be less performant than filling in a container directly. I talk a little more about which one to use in my answer to yield return vs without yield return . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/411882",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/162307/"
]
} |
411,914 | When I'm designing a system for a program, I often make misjudgements that will prevent the design from either working, being maintainable, easy to use or all the above. This means I will usually have to iterate several times, refactoring my code until I achieve this. Recently, I've been trying to think about this as I design, but it has been really difficult and mentally exhausting. The feeling is similar to playing a game of chess and trying to remember all the moves you can make. I'm always coming up with problems with my design as I'm thinking about it and I lose my train of thought. Sometimes I branch off so far, I forget the original problem I was trying to solve! So my question is, is the iterative process natural? Is me trying to 1 shot a design a bad idea? Are there ways to make this process more bearable? | Iterating through multiple versions of a design is a great thing to do! It is rare to create a design that has all the good properties at the first try. As software engineers, we should be humble and accept that we will make mistakes or overlook things. It is arrogant to think that you can create good design at your first try. But as you say, it can be exhausting to work on same piece of code for a prolonged period of time. But there might be practices and disciplines that make it more bearable. Test automation, preferably TDD This is the one discipline that enables us to actually change the design. By having a solid and reliable suite of automated tests, the design can be changed drastically without fear of breaking existing functionality. It is that fear which is most exhausting. Doing TDD also makes it more likely that you create working and 'good enough' design at your first try. This design then requires only small improvements to push it into greatness. Refactoring Instead of focusing on changing the whole design, focus on small problems and fix those. Fixing many small problems will result in big changes in the overall design. Making small changes is less mentally exhausting as you get feedback about your design sooner and you can stagger your attention between multiple designs, slowly improving all of them. Good vs. Perfect The saying 'Perfect is the enemy of good.' comes to mind here. Knowing when to stop trying to improve the design is a learned skill. If the design is being used and changed, then you will have lots of small opportunities to improve the design, so you don't have to invest all that time in the beginning. As long as you follow the Boy Scouts rule of 'Always leave code cleaner than you found it.', then the design will improve over time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/411914",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/227134/"
]
} |
411,932 | This question is based on this separate question on stack overflow. I have a very low-level structure meant to compactly save presets on flash memory. For simplicity I am going to use stored_record as a substitute, obviously the actual type has a /lot/ more fields. struct stored_record
{
int16_t foo;
}; Now I also need to export/import this via the wire and I am using protobuf for this.
So I need to convert from the protobuf format to the compressed representation and back.
I also have a loader submodule which should be able to load both the packed format and the protobuf one. For this reason I want to make a unified interface for both of them which is going to handle the conversions from the high-level values to each format. Let's define record_interface for this task: class record_interface
{
public:
virtual ~record_interface() = default;
virtual void set_foo(BoundedFloat) = 0;
virtual BoundedFloat get_foo() const = 0;
}; So far so good. Now submodules may accept references to either mutable record_interfaces or const record_interfaces and act accordingly. This is a very simple and clean mental model. Now we have two choices. One is that we write a record-like simple implementation of the above interface and we convert to it and back between the other representations. While simple this has the disadvantage that it's possible to forget to convert one of the fields. If we just have setters and getters, it's not easy to guarantee that we don't forget anything. The other is to make our various representations implement the above interface. This forces us to implement conversions for all fields and we can even define a generic copy constructor by using it (eg by matching getters with setters). One issue lies in that stored_record above must be trivially copyable so it can't implement the interface directly. One therefore needs to write a wrapper class: class record_wrapper : public record_interface
{
public:
DISALLOW_COPY_AND_ASSIGN(record_wrapper);
record_wrapper(stored_record & wrapped)
: wrapped_(wrapped) {}
void set_foo(BoundedFloat value) final { wrapped_.foo = convert_to_int16(value); }
BoundedFloat get_foo() const final { return convert_from_int16(wrapped_.foo); }
private:
stored_record & wrapped_;
}; However this wrapper can only be used to wrap mutable references, it's not possible to use const references with the above. There does not seem to be a very clean solution to this problem. We can't store const references conditionally since we wouldn't be able to implement the non-const member functions of the interface. One idea touched in the original thread is to make a safe create_wrapper function that const_cast s away const-ness but forces the resulting wrapper to be const: template <class T>
class const_wrapper
{
public:
template <class... Args>
const_wrapper(Args&& ... args)
: wrapped_(std::forward<Args>(args)...)
{
}
T const & get() const { return wrapped_; }
operator T const & () const { return wrapped_; }
T const * operator->() const { return &wrapped_; }
private:
T wrapped_;
};
record_wrapper make_wrapper(stored_record & wrapped) {return {wrapped}; }
const_wrapper<record_wrapper> make_wrapper(stored_record const & wrapped) { return {const_cast<stored_record &>(wrapped)}; } This works properly and allows us to use the original interface without problems. But it feels like a bit of a hack. The other option is to split the interfaces into a Reader/Writer pair (code is taken from here ) class const_record_interface
{
public:
virtual ~const_record_interface() = default;
virtual BoundedFloat get_foo() const = 0;
};
class mutable_record_interface : public const_record_interface
{
public:
virtual void set_foo(BoundedFloat) = 0;
};
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
class const_record_wrapper : public const_record_interface
{
public:
const_record_wrapper(const stored_record &wrapped) : wrapped_{wrapped} {}
BoundedFloat get_foo() const final { return convert_from_int16(wrapped_.foo); }
private:
const stored_record &wrapped_;
};
const_record_wrapper
make_wrapper(const stored_record &wrapped)
{
return {wrapped};
}
class mutable_record_wrapper : public mutable_record_interface
{
public:
mutable_record_wrapper(stored_record &wrapped) : wrapped_{wrapped} {}
auto as_const() const { return make_wrapper(this->wrapped_); }
void set_foo(BoundedFloat value) final { wrapped_.foo=convert_to_int16(value); }
BoundedFloat get_foo() const final { return as_const().get_foo(); }
private:
stored_record &wrapped_;
};
mutable_record_wrapper
make_wrapper(stored_record &wrapped)
{
return {wrapped};
} This is more sane but requires more boilerplate. It also makes const-ness on the reader/writers irrelevant as mutable vs const reference on the reader does not make a difference while const on the writer is useless. I would really appreciate any feedback regarding this matter since the design space is very large and I am not sure which is the better way forward. EDIT: After thinking about it I came to realize the biggest reason that I am skeptical of the the split interface solution and that is that it forces us to redesign the whole interface hierarchy. Most user defined types could directly implement the original interface without problems. The interface's methods are also very coupled together and we wouldn't originally think of splitting them. The only reason the original interface does not work in our case is because we can't directly implement the interface on the stored_record type so we have to deal with contness of the wrapper. With the first solution we don't need to change our mental model just to fit with the edge case, with the second we have to redesign everything. So the first solution makes design easier but is not very idiomatic while the second one is very idiomatic but forces us to change our current design's model. So in the end I think I would like to compare the two interfaces as standalone models and probably go with that. So it (most likely) boils down to: User and writer ergonomics - Mutable/Const interfaces vs single interface with const and non-const methods. | Iterating through multiple versions of a design is a great thing to do! It is rare to create a design that has all the good properties at the first try. As software engineers, we should be humble and accept that we will make mistakes or overlook things. It is arrogant to think that you can create good design at your first try. But as you say, it can be exhausting to work on same piece of code for a prolonged period of time. But there might be practices and disciplines that make it more bearable. Test automation, preferably TDD This is the one discipline that enables us to actually change the design. By having a solid and reliable suite of automated tests, the design can be changed drastically without fear of breaking existing functionality. It is that fear which is most exhausting. Doing TDD also makes it more likely that you create working and 'good enough' design at your first try. This design then requires only small improvements to push it into greatness. Refactoring Instead of focusing on changing the whole design, focus on small problems and fix those. Fixing many small problems will result in big changes in the overall design. Making small changes is less mentally exhausting as you get feedback about your design sooner and you can stagger your attention between multiple designs, slowly improving all of them. Good vs. Perfect The saying 'Perfect is the enemy of good.' comes to mind here. Knowing when to stop trying to improve the design is a learned skill. If the design is being used and changed, then you will have lots of small opportunities to improve the design, so you don't have to invest all that time in the beginning. As long as you follow the Boy Scouts rule of 'Always leave code cleaner than you found it.', then the design will improve over time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/411932",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/320237/"
]
} |
412,124 | I regularly review the technical debt tickets from my backlog, to prioritize them and remove those which are no longer relevant (fixed by some other development, obsolete...) Among those with high priority, we take 2 or 3 in each sprint, and this way our codebase is healthy for the moment. The problem is that all those tickets that are still relevant but have not been prioritized, represent a big part of the backlog (50%), and my PO insists that they should be removed, the same way that he deletes regular Story tickets that he knows won't be prioritized in the next semester, in order to have a "lean and healthy backlog". I acknowledge the fact that with our current "tech debt velocity", we won't be able to take most of them in the next semester, but it frightens me to delete tickets that are pointing to spots in our code that may rot if not fixed, lending further developments more difficult (well, you all know the point of tech debt and why it is important). So my question is: should I prune the tech debt tickets with lower priority? | You are considering deleting the records of genuine problems with the codebase because the product owner wants a shorter backlog? For me, the only reason to delete (close) an item in the backlog is because you decide it will never be implemented, not because it won't be implemented for a while. Also, in an agile environment, priorities may change quickly and the backlog can be re-ordered. If you have trimmed the list only to what you can do in the near future, you lose the ability to bring lower priority items up the list. Maybe you should re-assess the tech debt issues if they represent such a large proportion of the backlog; you might be able to close a proportion of them as "won't do". I am not sure what is concerning the PO to be honest. A healthy backlog contains a mix of items with lower priorities naturally floating down at the bottom. If it's really a problem, just filter the backlog or even create a second list (still logically a single backlog but split into 2 lists for mangeability). Maybe the PO is already doing something like this; I doubt he is actually deleting stories on the basis that they are not part of the current commitment? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/412124",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/206728/"
]
} |
412,164 | I have been using TDD when developing some of my side projects and have been loving it. The issue, however, is that stubbing classes for unit tests is a pain and makes you afraid of refactoring. I started researching and I see that there is a group of people that advocates for TDD without mocking--the classicists, if I am not mistaken. However, how would I go about writing unit tests for a piece of code that uses one or more dependencies? For instance, if I am testing a UserService class that needs UserRepository (talks to the database) and UserValidator (validates the user), then the only way would be... to stub them? Otherwise, if I use a real UserRepository and UserValidator , wouldn't that be an integration test and also defeat the purpose of testing only the behavior of UserService ? Should I be writing only integration tests when there is dependency, and unit tests for pieces of code without any dependency? And if so, how would I test the behavior of UserService ? ("If UserRepository returns null, then UserService should return false", etc.) Thank you. | This answer consists of two separate views on the same issue, as this isn't a "right vs wrong" scenario, but rather a broad spectrum where you can approach it the way it's most appropriate for your scenario. Also note that I'm not focusing on the distinction between a fake, mock and stub. That's a test implementation detail unrelated to the purpose of your testing strategy. My company's view Otherwise, if I use a real UserRepository and UserValidator, wouldn't that be an integration test and also defeat the purpose of testing only the behavior of UserService? I want to answer this from the point of view of the company I currently work at. This isn't actually something I agree with, but I understand their reasoning. They don't unit test single classes, instead they test single layers . I call that an integration test, but to be honest it's somewhere in the middle, since it still mocks/stubs classes, just not all of a class' dependencies. For example, if UserService (BLL) has a GetUsers method, which: Checks with the UserAuthorizationService (BLL) if the current user is allowed to fetch lists of users. The UserAuthorizationService (BLL) in turn depends on the AuthorizationRepository (DAL) to find the configured rights for this user. Fetches the users from the UserRepository (DAL) Check with the UserPrivacyService (BLL) if some of these users have asked to not be included in search results - if they have, they will be filtered out The UserPrivacyService (BLL) in turn depends on the PrivacyRepository (DAL) to find out if a user asked for privacy This is just a basic example. When unit testing the BLL, my company builds its tests in a way that all (BLL) objects are real and all others (DAL in this case) are mocked/stubbed. During a test, they set up particular data states as mocks, and then expect the entirety of the BLL (all references/depended BLL classes, at least) to work together in returning the correct result. I didn't quite agree with this, so I asked around to figure out how they came to that conclusion. There were a few understandable bullet points to that decision: The problem domain of the application is liable to constant business refactoring, where the business layer itself may subdivide into more niche classes without changing the public contract. 
By not testing every BLL class individually, tests need to be rewritten much less often since a test doesn't need to know the exact dependency graph of the class it's testing. Access logic is very pervasive over the domain, but its implementation and structure changes with the modern times. By not having to rewrite tests whenever the access logic changes, the company intends to lower the threshold for developers being open to innovating the access logic. No one wants to take on a rewrite of >25000 tests. Setting up a mocked situation is quite complex (cognitively), and it's easier for developers to understand how to set the data state (which is just an event store) instead of mocking all manner of complex BLL dependencies who essentially just extract information from that data store in their own unique way. Since the interface between the BLL classes is so specific, you often don't need to know exactly which BLL class failed, since the odds are reasonably big that the contract between the failed class and its dependency (or vice versa) is part of the problem that needs to be adjusted. Almost always, the BLL call stack needs to be investigated in its entirety as some responsibilities may shift due to uncovered bugs (cfr the first bullet point). I wanted to add this viewpoint because this company is quite large, and in my opinion is one of the healthiest development environments I've encountered (and as a consultant, I've encountered many). While I still dislike the lack of true unit testing, I do also see that there are few to no problems arising from doing this kind of "layer integration" test for the business logic. I can't delve into the specifics of what kind of software this company writes but suffice it to say that they work a field that is rife with arbitrarily decided business logic (from customers) who are unwilling to change their arbitrary rules even when proven to be wrong. My company's codebase accommodates a shared code library between tenanted endpoints with wildly different business rules. In other words, this is a high pressure, high stakes environment, and the test suite holds up as well as any "true unit test" suite that I've encountered. One thing to mention though: the testing fixture of the mocked data store is quite big and bulky. It's actually quite comfortable to use but it's custom built so it took some time to get it up and running. This complicated fixture only started paying dividends when the domain grew large enough that custom-defining stubs/mocks for each individual class unit test would cost more effort than having one admittedly giant but reusable fixture with all mocked data stores in it. My view Should I be writing only integration tests when there is dependency, and unit tests for pieces of code without any dependency? That's not what separate unit and integration tests. A simple example is this: Can Timmy throw a ball when he has one? Can Tommy catch a ball when it approaches him? These are unit tests. They test a single class' ability to perform a task in the way you expect it to be performed. Can Timmy throw a ball to Tommy and have him catch it? This is an integration test. It focuses on the interaction between several classes and catches any issues that happen between these classes (in the interaction), not in them. So why would we do both? Let's look at the alternatives: If you only do integration tests , then a test failure doesn't really tell you much. Suppose our test tells use that Timmy can't throw a ball at Tommy and have him catch it. 
There are many possible reasons for that: Timmy's arms are broken. (= Timmy is defective) Tommy's arms are broken. (= Tommy is defective) The ball cannot travel in a throwing arc, e.g. because it is not inflated. (= Timmy and Tommy are fine but a third dependency is broken) But the test doesn't help you narrow your search down. Therefore, you're still going to have to go on a bug hunt in multiple classes, and you need to keep track of the interaction between them to understand what is going on and what might be going wrong. This is still better than not having any tests, but it's not as helpful as it could be. Suppose we only had unit tests, then these defective classes would've been pointed out to us. For each of the listed reasons, a unit test of that defective class would've raised a flag during your test run, giving you the precise information on which class is failing to do its job properly. This narrows down your bug hunt significantly. You only have to look in one class, and you don't even care about their interaction with other classes since the faulty class already can't satisfy its own public contract. However, I've been a bit sneaky here. I've only mentioned ways in which the integration test can fail that can be answered better by a unit test. There are also other possible failures that a unit test could never catch: Timmy refuses to throw a ball at Tommy because he (quote) "hates his stupid face". Timmy can (and is willing to) throw balls at anyone else. Timmy is in Australia, Tommy is in Canada (= Timmy and Tommy and the ball are fine, but their relative distance is the problem). We're in the middle of a hurricane (= temporary environmental "outage" similar to a network failure) In all of these situations, Timmy, Tommy and the ball are all individually operational. Timmy could be the best pitcher in the world, Tommy could be the best catcher. But the environment they find themselves in is causing issues. If we don't have an integration test, we would never catch these issues until we'd encounter them in production, which is the antithesis of TDD. But without a unit test, we wouldn't have been able to distinguish individual component failures from environmental failures, which leaves us guessing as to what is actually going wrong. So we come to the final conclusion: Unit tests uncover issues that render a specific component defective. Integration tests uncover issues with individually operational components that fail to work together in a particular composition. Integration tests can usually catch all of the unit test failures, but they cannot accurately pinpoint the failure, which significantly detracts from the developer's quality of life. When an integration test fails but all dependent unit tests pass, you know that it's an environmental issue. And if so, how would I test the behavior of UserService? ("If UserRepository returns null, then UserService should return false") Be very careful of being overly specific. "returning null" is an implementation detail. Suppose your repository were a networked microservice, then you'd be getting a 404 response, not null. What matters is that the user doesn't exist in the repository. How the repository communicates that non-existence to you (null, exception, 404, result class) is irrelevant to describing the purpose of your test.
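To make that concrete, here is a minimal sketch of such a test in TypeScript (assuming a Jest-style test runner; the UserService and UserRepository shapes below are invented for illustration, not taken from your codebase):
interface User { id: string; name: string; }
interface UserRepository {
// How "not found" is signalled (undefined, null, exception, 404...) is an implementation detail.
findById(id: string): User | undefined;
}
class UserService {
constructor(private readonly repository: UserRepository) {}
// Returns false when the user does not exist in the repository.
isActive(id: string): boolean {
const user = this.repository.findById(id);
return user !== undefined; // real logic would also inspect the user's status here
}
}
test('returns false when the user does not exist', () => {
// Stubbed repository: the test describes the behaviour ("user does not exist"), not the mechanism.
const emptyRepository: UserRepository = { findById: () => undefined };
const service = new UserService(emptyRepository);
expect(service.isActive('unknown-id')).toBe(false);
});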
Of course, when you mock your repository, you're going to have to implement its mocked behavior, which requires you to know exactly how to do it (null, exception, 404, result class) but that doesn't mean that the test's purpose needs to contain that implementation detail as well. In general, you really need to separate the contract from the implementation, and the same principle applies to describing your test versus implementing it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/412164",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/362062/"
]
} |
412,366 | I'm trying to fully understand the visitor pattern. What I've learnt so far (correct me if I'm wrong) is: It's about adding operations to classes, without modifying the source code of those classes. Or put another way, to bend the OOP approach to have functions and data structures separated. It's a common misunderstanding that it has to do with hierarchies of objects (although it can be very useful in that case). I think I get it, but there is a thing that looks unnecessary to me, and that's the accept method in the classes "to be visited". Let's set up a small example in Java. First the class hierarchy to be enriched with operations, but it's not to be modified: interface Animal {
void accept(AnimalVisitor visitor);
}
class Dog implements Animal {
public void accept(AnimalVisitor visitor) {
visitor.visitDog(this);
}
}
class Cat implements Animal {
public void accept(AnimalVisitor visitor) {
visitor.visitCat(this);
}
} Then the visitor interface and a dummy implementation of that interface, representing an operation to make some sound. interface AnimalVisitor {
// These methods could be just called "visit" and rely on overloading,
void visitDog(Dog dog);
void visitCat(Cat cat);
}
class MakeSoundVisitor implements AnimalVisitor {
public void visitDog(Dog dog) {
// In a real case you'd obviously do something with the dog object
System.out.println("bark! bark bark!!");
}
public void visitCat(Cat cat) {
System.out.println("meow meeeoooww!!");
}
} And then a usage of all of this would be: var makeSoundVisitor = new MakeSoundVisitor();
var cat = new Cat();
var dog = new Dog();
cat.accept(makeSoundVisitor);
dog.accept(makeSoundVisitor); But I really don't see the point of that accept call. If you've got the visitor and the objects to be visited, why not just pass these objects directly to the visitor and avoid the indirection? You could even get rid of the accept method on the Animal interface. Something like this: var makeSoundVisitor = new MakeSoundVisitor();
var cat = new Cat();
var dog = new Dog();
makeSoundVisitor.visitCat(cat);
makeSoundVisitor.visitDog(dog); Sources: Crafting Interpreters: The visitor pattern Wikipedia Head First Design Patterns | In your simple example, you know exactly the real type of the object on which you invoke the visitor and can therefore choose the right visitor method yourself: makeSoundVisitor.visitCat(cat); // You know that cat is a Cat
makeSoundVisitor.visitDog(dog); // You know that dog is a Dog But what if you don't know the type of the object? For example: Animal pet = getRandomAnimal(); How would you now invoke your simplified visitor without the accept() method? You'd probably need to find out the real type of pet first, and then call visitDog() or visitCat() with a downcast. This is all very cumbersome and error-prone. With the classical visitor pattern, it's just the beauty of polymorphism that accept() allows: pet.accept(makeSoundVisitor); The underlying technique of double dispatch is worth knowing outside the visitor context. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/412366",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/72960/"
]
} |
412,515 | I'm designing an e-commerce application and I'm concerned about users occasionally experiencing this: A user orders a product and is redirected to the payment processor. While the user is paying, another user orders the product and it's now out of stock. The user completes payment, but the order can't be created because the product is out of stock. This can be avoided by reserving the product before attempting payment. But there's some complexity in doing so - if the payment fails, products need to be un-reserved, and a timeout is needed in case the user never completes the payment process. So my question: is it worth implementing the reserve process? Or is this scenario rare enough to just not worry about it and resolve things manually if it does happen? The answers probably depends on exactly what's being sold. However, I don't know this - I'm developing generic software to be used by all sorts of vendors. I could make it configurable, but still need a sensible default. | You have two excellent answers (I've upvoted both). But they each address only a part of the problem, and this is why I feel obliged to come with a third answer Your challenge is hybrid : It's a business problem, exactly as Rik D pointed out: ultimately, the business is responsible for deciding how to deal with customers. It's a user experience problem, exactly as 1201ProgramAlarm pointed out: and good IT products should promote good user experience. But it's also an IT capability problem: if you want to reserve real-time, your online ordering system must be seamlessly connected to a back-end stock management system (1201ProgramAlarm clearly made that point). Personally, I know a lot of small businesses that still struggle with such an endeavour (inaccuracy mentioned by Rik is only one of the potential symptoms in that regard). It is a hybrid problem because it cannot be solved by just looking at one side of the medal. Business cannot come out with a good requirement if they don't know what's possible. Business experience in a shop is very different from the online experience in the browser, so some digitally native insights are also needed here. Both business and IT considerations have to be analysed together . Of course, ultimately, business people should have the final world, but after this dialogue. In your special case, you do not have business people . Whatever choice you will make might, you'll lose: If you can't reserve, some will prefer a platform that can; If you oblige to reserve, small businesses will prefer less constraining solution. So, in conclusion, if you can: offer both approaches as a configuration option (to enlarge your market) propose reservation by default (because that's what market leaders do, and your customer all dream of being one day a market leader ;-) ) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/412515",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/108835/"
]
} |
412,596 | Not just almost all, but all modern CPUs have multiple cores, yet multithreading isn't really that common. Why have these cores, then? To execute several sequential programs at the same time? Well, when calculations are complex (rendering, compiling), the program is definitely made to take advantage of multiple cores. But for other tasks, is a single core enough?
I understand that multi-threading is hard to implement and has drawbacks if the number of threads is less than expected. But not using these idle cores seems so irrational. | The proliferation of multi-core CPUs is predominantly driven by supply, not by demand. You're right that many programmers don't bother decomposing their systems so that they can profit from multiple threads of execution. Therefore the customer sees a benefit mainly from OS multiprogramming rather than program multi-thread execution. The reason CPU vendors create more and more cores is that the traditional way of increasing performance - increasing clock speed - has run into fundamental limitations of physics (both quantum effects and thermal problems). To keep producing chips that can be credibly sold as offering more compute power than last year's chips, they put more and more independent cores into them, trusting that OS multiprogramming and increasing use of multi-threading will catch up and yield actual rather than just nominal gains. But make no mistake, both designing and exploiting multi-core CPUs are a lot harder than just running the same code faster. Both programmers and chip manufacturers would very much like to just keep increasing the speed of their chips; the trend towards parallelization is largely a matter of necessity, not preference. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/412596",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/369927/"
]
} |
412,605 | Intro: There are multiple ways to test code: unit tests / e2e / manual testing / ... I'm developing a project whose implementation details change very quickly (and sometimes the core functions as well). Some of our microservices talk to each other directly while others communicate using events, e.g. via Kafka. Problem: When I create an e2e test (for the backend image only), before each test I build the Docker image of my microservice and run it again for each test. I find it really hard to set up this kind of e2e test for a microservice that directly talks to other microservices (sending GET/POST/... requests). As a result, I also build/pull the other images and run them before each test as well. But it's not that easy, because you can end up implementing a version of docker-compose in your test infrastructure. I would like to minimize the amount of errors that can come from other services and test a specific microservice. Possible solution: Changing the microservices architecture. Whenever it is possible, a microservice will communicate with others using events. So in the tests, we only need to set up Kafka and the microservice that we are trying to test. I only thought of this solution from a testing perspective and not from "what is best"; for example, it's faster to communicate without Kafka. Question: What are the pros and cons of my proposal? From your experience, is it maintainable? | The proliferation of multi-core CPUs is predominantly driven by supply, not by demand. You're right that many programmers don't bother decomposing their systems so that they can profit from multiple threads of execution. Therefore the customer sees a benefit mainly from OS multiprogramming rather than program multi-thread execution. The reason CPU vendors create more and more cores is that the traditional way of increasing performance - increasing clock speed - has run into fundamental limitations of physics (both quantum effects and thermal problems). To keep producing chips that can be credibly sold as offering more compute power than last year's chips, they put more and more independent cores into them, trusting that OS multiprogramming and increasing use of multi-threading will catch up and yield actual rather than just nominal gains. But make no mistake, both designing and exploiting multi-core CPUs are a lot harder than just running the same code faster. Both programmers and chip manufacturers would very much like to just keep increasing the speed of their chips; the trend towards parallelization is largely a matter of necessity, not preference. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/412605",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/197581/"
]
} |
412,985 | In a project that aspires from the onset to be maintainable across a revolving team of developers, what difference would it make to use literate programming against thorough commenting guidelines? The latter would imply: classes with explicit purposes of what they do, why they are there, with examples, non-cryptic error codes, variables with inline explanations, a style guide that forces developers to use plain English, full sentences, eschew abbreviations and so on. Add to it that an IDE could be able to collapse the details or you could just extract the docs. Could it be that literate programming was a solution to problem that was tackled meanwhile through other means? Could it be that back then, when literate programming was created, some languages/tools wouldn't allow for simple mechanisms like these? | Literate programming is the nice idea that you can write your code together with an explanation or walkthrough of that code. Importantly, you are not constrained by the syntax of the underlying programming language but can structure your literate program in any way to want. (Literate programming involves chunks of code embedded into text, not comments into code.) There are three huge problems with literate programming: it takes a lot of effort, there is little tooling, and changes become more difficult. Documentation always requires effort. Literate programming requires less effort than maintaining separate documentation of comparable quality. However, this amount of effort is still unwarranted for most kinds of code. A lot of code is not interesting and requires little discussion, it's mostly just delegating stuff to some framework. The kind of tricky logic that benefits from literate programming is comparatively rare. While there are various tools for literate programming (including Knuth's original WEB, and decent support in the Haskell ecosystem), they all suck. The next-best thing I've come across is org-mode, but that requires the use of Emacs. The problem is that programming is more than typing letters, it's also debugging and navigating code, which benefits greatly from an IDE-style experience. Auto-complete is non-negotiable! Literate programming tools also tend to require non-standard build processes, or mess up line numbers in error messages – not acceptable. If a tool makes your code easier to understand but harder to debug, that's not necessarily a good choice. Related to this is the issue that changes to literately programmed software become more difficult. When you refactor code, you also have to restructure the document. But while you have a compiler or linter to ensure that your code continues to make sense, there's no guarantee that you haven't disrupted the structure of the document. Literate programming is writing and programming to equal parts. So while full-blown literate programming does not seem to have a place in modern software development, it is still possible to reap some of the benefits. Consider in particular that literate programming is now over 35 years old, so a lot has happened in the meanwhile. Extracting a function with a useful name has many of the same benefits of a chunk of code in literate programming. It's arguably even better because variable names get their separate scope. Most programming languages allow functions to be defined in an arbitrary order, which also allows you to structure the source code within a file in a sensible manner. Literate programming can be used to describe the “why” of a code in a human-readable manner. 
A somewhat related idea is to express requirements for your program in a format that is both human- and machine-readable, e.g. as suggested by BDD. This forms a kind of executable specification. Some markup languages have the ability to pull code snippets from your source code. This lets the code be code and lets you construct a narrative around these snippets, without having to duplicate, copy, or update the code. Unfortunately, the popular Markdown has no built-in mechanism for that (but RST, AsciiDoc, and Latex+listings do). This is possibly the best current alternative for creating literate programming-style documents. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/412985",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/109856/"
]
} |
413,102 | Let's say you are engineering chat room software. let client = new Client();
let room = rooms.FindRoom();
room.addClient(client); This room (parent) now has a client (child). client.on('message', (event) => {
// With the above code, room must be found
let room = rooms.FindClientsRoom(client);
if(room){
room.handleMessage(event);
}
}); Or we have a child that knows about its parent let client = new Client();
let room = rooms.FindRoom();
room.addClient(client);
client.setRoom(room);
client.on('message', (event) => {
let room = client.getRoom();
if(room){
room.handleMessage(event);
}
}); This is incredibly fast compared to looking for a client within 1000s of rooms. But is there something wrong with the design pattern? In any system, such as XML, do child nodes know about their parents? Should they? | Your question is basically the same as "should linked list items have a reference to the previous item?", and the answer is the same: it depends. There are use cases when a singly-linked list is correct, and there are use cases when a doubly-linked list is correct. The important point here is that it is the use case that matters, not the format: XML does not specify whether child nodes know about their parents, that is a matter of the design of the parser which reads the XML and creates in-memory data structures for it. Some parsers will do this, other ones won't. If you need a quick lookup from your child nodes to the parent nodes, then put it in. If you don't, Keep It Simple S... and don't have them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/413102",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/368477/"
]
} |
413,149 | This comes from a debate with my colleague about my use of null as an object's initial state. type Value = Node | null
const [value, setValue] = React.useState<Value>(null)
function test(v: Value) {
if (value === null) // Do something
else // Do something
} I'm curious if null is a mistake in this case. What alternatives are there to represent an object that is not (yet) initialized?
And how is this solved in languages that don't support null pointers? Note: This question is similar to If nulls are evil, what should be used when a value can be meaningfully absent? - the difference is that that question is about cases where a value is truly absent/not relevant - whereas here the question is about a value where initialization has been delayed, because the value will be produced/retrieved later. | The problem isn't null itself. It's the implicit nullability of all object references, as is the case in Java, Python, Ruby (and previously C#, although that picture is changing), where every type T really means T | Null . Suppose you had a system that takes such a nullable reference at its entry point, does some null checking, and wants to forward on to another function to do the "meat" of the work. That other function has no possible way to express "a non-null T" in Java (absent annotations), Python or Ruby. The interface can't encode its expectation for a non- null value, thus the compiler can't stop you from passing null . And if you do, fun things happen. The "solution" (or "a solution", I should say) is to make your language's references non-nullable by default (equivalently, not having any null references at all, and introducing them at the library level using an Optional / Option / Maybe monad). Nullability is opt-in, and explicit. This is the case in Swift, Rust, Kotlin, TypeScript, and now C#. That way, you clearly distinguish T? vs T . I would not recommend doing what you did, which is to obscure the nullability behind a type alias. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/413149",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/232631/"
]
} |
414,435 | It is often useful to document when (i.e. in which versions) a feature was added, marked as deprecated, etc. For example: Function FooBar(x, y, z) Foos x with y and bars them with z . (Parameter z was added in version 1.2) I'm wondering when and how those notices are best added, and I can think of two alternatives: When the underlying code is changed When the release is made Alternative 1 has the benefit that it keeps code and documentation in sync. However, it requires me to know the next version number in advance. In my experience, that is often not possible, because the feature may be delayed (even though the code is already there) and hence not end up in the originally targeted version. Alternative 2 avoids that issue, but complicates the release process because you have to go over your change log and update the documentation accordingly. | This depends on how you plan which features go into which release. For example, if any feature that is merged during a certain timeframe will make it into the 2.4 release, then you can use that version number directly. If you do not know the next version, it would still be reasonable to update the docs immediately, precisely because it's best to keep code and docs in sync as much as possible. Instead of a fixed version number, use a placeholder, e.g. in Sphinx I write .. versionadded:: NEXT . You could create a check for your release QA process that all placeholders have been resolved. Searching for a placeholder is easier and quicker than trying to remember which documentation has to be rewritten. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/414435",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/233918/"
]
} |
414,466 | I have been studying the best practices for a code review, and I was wondering what to do in the following scenario: During a code review, I see potential improvements, but decide that they are outside of the scope of the pull request (PR). Should I ask the reviewee to do the refactor in that same PR, or should I defer it to a future PR since it is technically out of scope? I think that all PRs should strive to improve the overall quality of the code, as this tends to make the whole project better. Is that a wrong thought? Should I be more conscious about narrowing the scope of my code reviews? | There are several relevant trade-offs here: Review complexity. If a branch has more than one functional change commit or more than one refactoring commit it becomes time-consuming to review the result, since now each commit has to be reviewed separately. Risk. Any refactoring, no matter how well the code is tested, has some non-zero risk of breaking things. Making a separate branch with a refactoring allows splitting that risk from the more obvious risk of the functional change. Relevance. Is the suggested refactoring a natural consequence of the functional change? This may be for example breaking up a class hierarchy because the inheritance is no longer natural . If so it might be appropriate to do it in the same commit as the functional change, as per the red-green-refactor TDD cycle. In general, if the refactoring is really outside the scope of the branch I would recommend making it a separate branch. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/414466",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/372410/"
]
} |
414,472 | For simplicity, assume my application logs only dictionaries. I want to add a step to Python logging for my application to prevent logging any dictionary with the key password , i.e., def clean_log(blob):
if 'password' in blob:
blob['password'] = 'REDACTED'
return blob One thing I could do is put clean_log in its own file clean_log.py , import that in all my other files that call the logger, then add it into the function call, e.g., import logging
import clean_log
LOGGER = logging.getLogger()
def process(event):
LOGGER.info(clean_log.clean_log(event))
return event Is there a nicer way to do this? It would be cool if I could overwrite getLogger somehow so that anytime logging.getLogger is called in the source code, it could return a modified logger that just knows to clean_logs first. For example import logging
import clean_log
class MyLogger(logging.Logger):
def info(self, blob):
return super().info(clean_log.clean_log(blob)) Is there a way to always just get this logger in the source code from something like getLogger , using handlers or filters or something? Its not totally clear to me if this is a good idea, but I thought it would be an educational experience to try to find some kind of optimal/Pythonic way to do this. I can't be the first one to want to do this. | There are several relevant trade-offs here: Review complexity. If a branch has more than one functional change commit or more than one refactoring commit it becomes time-consuming to review the result, since now each commit has to be reviewed separately. Risk. Any refactoring, no matter how well the code is tested, has some non-zero risk of breaking things. Making a separate branch with a refactoring allows splitting that risk from the more obvious risk of the functional change. Relevance. Is the suggested refactoring a natural consequence of the functional change? This may be for example breaking up a class hierarchy because the inheritance is no longer natural . If so it might be appropriate to do it in the same commit as the functional change, as per the red-green-refactor TDD cycle. In general, if the refactoring is really outside the scope of the branch I would recommend making it a separate branch. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/414472",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/329623/"
]
} |
414,490 | I'm building HTTP API with DDD principles. The end goal is that the server runs on some chosen language X but the clients can use what ever language the software supports. Service has state, but HTTP is stateless. First without language: storage = new MyStorage(params) // Can be anything from FS based to SQL, etc
service = new TODO(storage)
server = new build(service)
server.run() build class is something like: class build
server: HTTPBase
func build(service)
server.POST('/add',
func(req, resp)
what = req.Get("param")
res = service.DoSomething(what)
if res == "Y"
resp.Write("Everything went fine")
return
resp.Write("Oh noes!")
)
return server Let's add a language (the wrong way with global state): defaultLanguage = "eng"
storage = new MyStorage(defaultLanguage, params)
service = new TODO(defaultLanguage, storage)
server = new build(service) // Oh noes!
server.run()
class build
service
server: HTTPBase
func getLanguageFromURL(req, resp)
self.service.SetLanguage(req.GetFromURL('$language'))
func build(service)
self.service = service // Oh noes!
server.BaseRoute('/$language')
server.Middleware(self.getLanguageFromURL)
server.POST('/add', // This is now /$language/add
func(req, resp)
what = req.Get("param")
// Because we're setting service's language in the middleware
// this becomes a mess because some other user using some other
// language might just done something and results to client then
// getting the result in wrong language.
// (= This changes service's global state, and explodes)
res = self.service.DoSomething(what)
if res == "Y"
resp.Write(translate("Everything went fine"))
return
resp.Write(translate("Oh noes!"))
)
return server So where to go? Always spawn new service in middleware? defaultLanguage = "eng"
server = new build(defaultLanguage)
server.run()
class build
service
defaultLanguage
server: HTTPBase
func serviceMW(req, resp)
storage = new MyStorage(self.defaultLanguage, params)
service = new TODO(self.defaultLanguage, storage)
service.setLanguage(req.GetFromURL('$language'))
req.Context['service'] = service
func build(defaultLanguage)
self.defaultLanguage = defaultLanguage
server.BaseRoute('/$language')
server.Middleware(serviceMW)
server.POST('/add', // This is now /$language/add
func(req, resp)
// Now we have one time only service which is destroyed after request is complete
service = req.Context.Get('service')
what = req.Get("param")
res = service.DoSomething(what)
if res.X == "Y"
resp.Write(translate("Everything went fine"))
return
resp.Write(translate("Oh noes!"))
)
return server This is nicely isolated but starting the service might be really slow and eat lot of resources depending what it does. Clone/copy the service in middleware? defaultLanguage = "eng"
storage = new MyStorage(defaultLanguage, params)
service = new TODO(defaultLanguage, storage)
server = new build(service)
server.run()
class build
service // main instance which gets cloned for each request
server: HTTPBase
func serviceMW(req, resp)
// Create new instance
service = self.service.clone()
service.setLanguage(req.GetFromURL('$language'))
req.Context['service'] = service
func build(service)
// Create new instance
self.service = service.clone()
server.BaseRoute('/$language')
server.Middleware(serviceMW)
server.POST('/add', // This is now /$language/add
func(req, resp)
what = req.Get("param")
// Cloned in middleware and destroyed after this request
service = req.Context.Get('service')
res = service.DoSomething(what)
if res == "Y"
resp.Write(translate("Everything went fine"))
return
resp.Write(translate("Oh noes!"))
)
return server Adds isolation and removes possible startup slowdowns and resource hogs as they're done beforehand. Run a pool (array) of services for each language? class build
service[language] // service["eng"], service["fin"], .. There's a lot of languages so it increases resource consumption. There could be also a mechanism that spawns service for a specific language when it's requested for the first time. Also service's that have been idle for N hours/minutes could be removed. Add language parameter to service's and repository's functions? All Service.DoSomething(param1) becomes Service.DoSomething(language, param1) . Services/repositories never return only strings but some sort of translatable objects? So you have something like: result = service.DoSomething(param1)
actualResult = result.Translate(myChosenLanguage) What is the DDD way of handling localization? Am I missing any options? | There are several relevant trade-offs here: Review complexity. If a branch has more than one functional change commit or more than one refactoring commit it becomes time-consuming to review the result, since now each commit has to be reviewed separately. Risk. Any refactoring, no matter how well the code is tested, has some non-zero risk of breaking things. Making a separate branch with a refactoring allows splitting that risk from the more obvious risk of the functional change. Relevance. Is the suggested refactoring a natural consequence of the functional change? This may be for example breaking up a class hierarchy because the inheritance is no longer natural . If so it might be appropriate to do it in the same commit as the functional change, as per the red-green-refactor TDD cycle. In general, if the refactoring is really outside the scope of the branch I would recommend making it a separate branch. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/414490",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81198/"
]
} |
414,493 | I work in a field where lots of code is written, but hardly ever tested. This is because we are foremost scientists who try to solve problems with code. The few coding courses we had focused on the basics, and many have never heard of git, unit testing, or clean code after graduating. Many haven't even heard of those during their PhD... Maybe it's better now, but 5-10 years ago we did not have any mandatory courses that covered those areas. Often the software solves differential equations numerically. In many cases PDEs with many feedbacks going on. Think of weather predictions, chemical reactions, atmospheric models and so on. So now my questions: would you trust the results of complex software with many hundreds or thousands of functions without a single unit test? If there are tests then they are rather high level, like checking if the results stay the same with the same input or if the results of a very simple case fit an analytical solution. Even if you know that the numerical solution of the equation is sound, based on a years-old publication, would you trust the model to make predictions?
Would you trust it if it can cause billions in damage or even loss of life? On a side note, often these models are compared against each other with the same simplified inputs. | A few aspects I would like to touch on. I work in a field where lots of code is written, but hardly ever tested. This is because we are foremost scientists who try to solve problems with code. I think this is common in science. And I think it's only partly due to lack of courses or motivation. I think the main reason is that a lot of scientific code is more prototyping than application development. A lot of it is used for a few analyses and abandoned. It's small, so you can test by hand. One of the main benefits of unit tests is for long-term maintenance and refactoring. If your code won't be maintained long, and you won't refactor it, it's reasonable to prioritize unit tests less. But a part of the software is reused a lot (unfortunately not usually clear beforehand). And then... Would you trust it if it can cause billions in damage or even loss of life? At this point we've left 'prototyping' and entered application development. I'd assume the code is maintained a long time by multiple people. It'll likely be refactored if it keeps growing. It has probably long ago stopped being possible to test everything by hand for most changes. And, of course, risk tolerance would be much lower if the possible damage is greater. Unit tests become much more valuable due to all that. I think it pays to follow better software engineering principles like unit testing at this point, and honestly a while before this point. Often the software solves differential equations numerically. In many cases PDEs with many feedbacks going on. I think the more important quality is scale (lifetime, collaboration, change frequency, complexity...), not so much whether there are scientific models. But I'll say that such things are actually quite easy to test automatically (whether or not you'd still call it a 'unit' test). No UI or external dependencies to be mocked. The more examples and edge cases are covered, the more one would trust it. It probably takes some scientific insight into how 'well behaved' the model is, and knowledge of the risks, to know how much is enough. Often these models are compared against each other with the same simplified inputs. That would actually give me quite a bit of confidence. I think it's a good method of validation and bug detection. It doesn't help much with localizing problems though - you might not even know which of the models is wrong, let alone what is wrong with it. Unit tests could help with that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/414493",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/288214/"
]
} |
414,658 | While learning about OOP, I have found that the term "represent" is used a lot in OOP tutorials. For example I may find a statement like this: "a car object represents a real life car" (of course an object can represent anything, not just real life entities). My question is, what does the term "represent" mean in this case, does it mean the following: We can't actually put a real life car inside of computer memory!! but
we can put in memory some data (variables) that describe a real life
car (for example: color , speed , etc.), and we can also put some functionality
(methods) that describe the functionality of a real life car (for example: drive() , stop() ,
etc.), and these combined variables and methods are a car object. And then we can "imagine" or "pretend" that the car object in memory
is actually a real life car, so for example when we do car_object.drive() ,
we can "imagine" or "pretend" that there is an actual car that is
being driven (even though in reality what is happening is that some
variables in memory are being manipulated, and not an actual car is
being driven!!). Am I correct in my understanding? | This question is not specific to software engineering: it applies to all disciplines working with information. In 1929, the Belgian surrealist painter René Magritte explained it very intuitively in a masterpiece of art called the treachery of images : the painting shows a pipe on a uniform background, and a caption in French " This is not a pipe ". It looks totally absurd, because you see a pipe, so why shouldn’t it be a pipe? This is because it’s not a real pipe. If he’d expressed it positively, he would have written " This is a representation of a pipe ". You explained it appropriately for OOP: a representation of a Car is not a car; you can’t use the Car in memory to drive home. In The Sims your avatar (your representation) could use it to go to the representation of your home. By the way, even in the game, The Car representation in memory (properties about the car, its state, and a 3D model) is different from the visual representation of the car representation on the screen (2D picture made with shapes and colors). But there’s more behind it. The information in memory is just a set of bits . We decide what it represents. Take for example a simple byte 0b1000001 . The same byte value could represent 65 if we want it to be an integer, A if we want it to be an ASCII character, a RES control code if we want to use it as EBCDIC character or even a set { garden, terrace } if we decide that it’s a bit encoding of a set where the 7th bit corresponds to a terrace and the first bit a garden. In memory, there are only bits. The representation is the mapping we do to give them some kind of meaning. For an OOP object, that mapping is done between the values in memory and the state of the object and the methods that make its behavior. How is, of course, language specific (examples: C++ , Java ). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/414658",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/280581/"
]
} |
414,753 | If all accounts have 2FA for a given product, is there any reason why the 2FA box couldn't be on the primary login screen? Is it bad practice to request the 2FA code along with username and password on the same screen? Other than 2FA being optional on some products, are there any other reasons why 2FA should show up after successful login? | I think you're misinterpreting what actually happens. It's not doing the second factor (SMS code, authenticator app) after login is successful, but simply after one factor (password) has been verified. The state between the two authentication methods is still not logged in. Your question, then, might be "why not send all factors at once" instead of doing a multi-phase approach. There can be several reasons: Cost. Sending an SMS code costs money. If you send it out immediately with the password prompt, you'll end up sending many codes for nothing. It can be used as an attack against you by ramping up your service costs. Hassle. If I get a 2FA notification in my Authenticator app any time a bored hacker tries randomly brute forcing my password, I'll quickly learn to ignore it. Save it for those attackers who actually have my password. Security. By having my login prompt ask for both password and authentication code, I'm giving attackers information about my security settings (e.g. which users have 2FA enabled) they might not have had, and which they can use to focus on more vulnerable accounts. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/414753",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/372891/"
]
} |
414,900 | The question is: interface Animal {
void eat();
}
class Lion implements Animal{
public void eat(){
//do something
}
}
class Test {
public static void main(String[] args) {
Animal lion = new Lion();
lion.eat();
lion.eat();
lion.eat();
}
} The requirement is to calculate how many times the eat method is called without modifying the interface and the class itself. One way is to extend the Lion class and get the results, but for every object extending the class we will have to create such classes. Is there a more optimized way to do this? Publish Subscribe is one way, but we don't have permission to modify the interface or the Lion class itself. | You could use the Decorator Pattern to add additional responsibilities to an Animal without subclassing. public interface Animal {
void eat();
}
public class Lion implements Animal {
public void eat() {
// do something
}
}
/* In the original Decorator pattern,
the decorator is an abstract class,
but for the sake of brevity,
in this example it's a concrete class. */
public class AnimalWithEatCountDecorator implements Animal {
private Animal animalWeWantToCountEats;
private int eatCount=0;
public AnimalWithEatCountDecorator(Animal animal) {
this.animalWeWantToCountEats= animal;
}
public void eat(){
this.animalWeWantToCountEats.eat();
this.eatCount++;
}
public int getEatCount() {
return this.eatCount;
}
}
public class Test {
public static void main(String[] args) {
AnimalWithEatCountDecorator lion = new AnimalWithEatCountDecorator(new Lion());
lion.eat();
lion.eat();
lion.eat();
System.out.println(lion.getEatCount());
}
} UPDATE If we want to be more faithful to the Decorator Pattern we can not use the getEatCount() getter at all, and instead inject a Counter object in the constructor. public interface Counter {
public void increment();
public int getCount();
}
/* I will omit the trivial implementation of Counter */
public class AnimalWithEatCountDecorator implements Animal {
private Animal animalWeWantToCountEats;
private Counter counterThingy;
public AnimalWithEatCountDecorator(Animal animal, Counter counterThingy) {
this.animalWeWantToCountEats= animal;
this.counterThingy=counterThingy;
}
public void eat(){
this.animalWeWantToCountEats.eat();
this.counterThingy.increment();
}
}
public class Test {
public static void main(String[] args) {
Counter counterThingy = new CounterThingy();
AnimalWithEatCountDecorator lion =
new AnimalWithEatCountDecorator(new Lion(), counterThingy);
lion.eat();
lion.eat();
lion.eat();
System.out.println(counterThingy.getCount());
}
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/414900",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/338887/"
]
} |
414,994 | If I have a function that never returns (it contains an infinite loop) how do I declare/communicate that it will never return to the user of the function?
The user does not see the source code, only the function signature and the documentation. So the user will know to call my function in a new thread else his program will be blocked forever and the code below my function will never get executed. Example function: public void Foo()
{
while (true)
{
_player.Play("scary_sound.wav");
Thread.Sleep(TimeSpan.FromMinutes(5));
}
} Should I declare that as part of the documentation? /// <summary>Plays scary sounds!</summary>
/// <remarks>This function never returns. Call it on a new thread.</remarks>
public void Foo() Or name it with a "Run" prefix and "Loop" suffix or something? public void RunFooLoop() What ways are there to declare/communicate this? Any common practices? Any examples? | This is one of those cases where infinite loops are so predominantly a bad idea that there's no real support/convention around using them. Disclaimer My experience is with C# (as your code example seems to be), but I would think that this is language-agnostic, unless there are languages which automatically wrap method calls in separate threads. Based on the code you posted, what you've created here is a honeytrap. You can call the method any time you like, but you can never leave ( I couldn't resist ) What you have here is the equivalent of asking your local government what signs you need to put up in your garden to tell people that you buried mines in your garden. The answer is that you shouldn't be burying mines in your garden. The only practical use case for an infinite loop is for an application whose runtime process will be killed by the user (e.g. ping 8.8.8.8 -t is such an example). This is most commonly encountered in always-on services that are expected to run indefinitely. However, playing a sound isn't something that you want your application to hang on. It's something you want to have happen concurrently . Therefore, you should design your code to work with it. In essence, you should make a class that behaves like a media player. Maybe for you it's enough if this media player only has a start button with no way of ever turning it off again. I'll leave that up to you to decide. But the play button click event is not an infinite loop. The sound should keep playing, but the click event should return . This function never returns. Call it on a new thread. Instead of pushing that responsibility to your consumer, just develop a class that, when called, creates this thread and starts playing the sound. This isn't a matter of "how do I tell my consumer that he should do this". It's significantly easier to do it for them. It promotes code reuse, lowers the chance of bugs and unexpected behavior, and the documentation becomes easier to write as well. Also, as an aside, you may want to look into using async/tasks instead of threads. It helps keep down the number of active threads. But that's a different discussion. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/414994",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/122716/"
]
} |
415,197 | A container having 2 or more elements could be called a "plural container". A container with no elements could be called an "empty container". But what is the terminology for a container whose current size is exactly 1 element? I would use the term "unary" but that refers to operators that take exactly 1 argument. The reason I would like a term for this is that I am writing a boolean method to check if two data structures are both of size exactly one and I want a good name for the method. Currently I have: def length_is_1(self, x, y):
return len(x) == 1 and len(y) == 1 | Do not come up with a new word. Your name is perfectly fine: It is unambiguous and specific and consequently leaves no doubt to the reader what you are talking about. By contrast, singleton , unary , 1-tuple or any other term borrowed from mathematics or software engineering carries with it a baggage of preconceptions which are confusing. A list with a single element in it is emphatically not a singleton. It has nothing to do with unary operators, and a list is clearly not a tuple in C++. It is a list with one element in it, not more, not less. That is sometimes a perceived downside of a simple approach in programming: It seems unsophisticated, and hence of a lesser value. Look at Duff's device! Marvel at the ingenuity of boost.lambda! But then listen to Jim Radigan, who leads the VC++ compiler team: One of the other things that happen when we go to check code into the compiler is we do peer code review. So if you survive that, it’s probably ok, it’s not too complex. But if you try to check in meta-programming constructs with 4-5 different include files and virtual methods that wind up taking you places you can’t see unless you’re in a debugger – no one is going to let you check that in. That your peer is able to understand your code right away because it is plain and simple and calls things what they are is not a sign that you didn't realize your full programming potential. It is a sign of excellence. Do not look for a Latin word. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/415197",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44599/"
]
} |
415,274 | I recently finished a course on advanced algorithms, and another on complexity & computability theory, and in the past few days my mind has been somewhat preoccupied by this question. Why don't we just use a different algorithm based on the size of the input? I'm asking this question because I've never seen this done in practice or heard of it, and I'm also simply curious about the answer. I also tried looking it up on StackExchange and Google with various queries but couldn't come up with anything remotely related to my question. I'll take the example of sorting algorithms, as they're quite common and there are so many, with different properties and runtime complexities. Say I have three algorithms, SortA , SortB and SortC . SortA is incredibly efficient on inputs of size <= 100 but becomes very slow on inputs that are any bigger; SortB is more efficient on inputs of length > 100 than SortA but falls off quickly after a size of 1000. Finally, SortC isn't very fast on inputs of size < 1000, but is faster than SortA and SortB on very large inputs. Why shouldn't/couldn't I make a function like this (written in pseudo-C#-ish code for simplicity)? Or why isn't it done in practice? int[] Sort(int[] numbers) {
if (numbers.Length <= 100) {
return SortA(numbers);
}
else if (numbers.Length <= 1000) {
return SortB(numbers);
}
else {
return SortC(numbers);
}
} I'm assuming some of the potential reasons are that it's more code to write, more potential bugs since there's more code, it's not necessarily easy to find the exact breakpoints at which some algorithm becomes faster than another, or it might take a lot of time to do so (i.e. running performance tests on various input sizes for every algorithm), the breakpoints could only be on small or medium-sized input, meaning there won't be a significant performance increase that is worth doing the additional implementation work, it just isn't worth it in general, and is only used in applications where performance is crucial (similar to how some numerical algorithms use a different method to solve a problem based on the properties of a matrix, like symmetry, tridiagonality,...), input size isn't the only factor on an algorithm's performance. I'm familiar with Landau/Big O notation, so feel free to use it in your answers. | Why don't we just use a different algorithm based on the size of the input? We do. Hybrid algorithms are used all the time. Why shouldn't/couldn't I make a function like this (written in pseudo-C#-ish code for simplicity)? Or why isn't it done in practice? That is quite literally how most real-world implementations of sorting algorithms look like. E.g. quick sort has quite a high overhead, so every real-world quick sort implementation switches to insertion sort for the simple cases at the lower levels of the recursion tree. Instead of switching algorithms at the leaves of the recursion, you can also simply stop sorting altogether at some pre-defined partition size, and then run insertion sort once on the "almost-sorted" result of the "aborted quick sort". This may be more efficient, because instead of having many tiny insertion sorts, you have one longer one, so you don't constantly switch between quick sort and insertion sort in the instruction cache. Merge sort is also often combined with insertion sort . For example, for cache efficiency, you might want to switch to an in-place insertion sort as soon as the partitions are small enough to fully fit into the cache. One of the most-widely used sorting algorithms is Timsort , which was implemented for CPython in 2002 by Tim Peters, and has since been adopted by (among others) Oracle JRE (and many others, e.g. IBM J9) as Arrays.sort for reference types, Android, V8, Swift, and GNU Octave. It is a hybrid insertion sort and merge sort, It tries to find "runs" of already sorted elements and merges those; if it can't find any runs, it will create them by partially sorting the list with insertion sort. Considering that it is used in some of the most widely-used implementations of some of the most widely-used languages, i.e. in Android and Swift (in other words, on pretty much every smartphone and tablet) and also in Java (in other words on pretty much every desktop and a large number of servers) and V8 (i.e. in Chrome and Node.js) and CPython, we can quite confidently say that there is probably not a single person on the planet who has not used it in some form. I don't know about you, but I wouldn't call that "not done in practice", in fact, it doesn't get any more practical than running on almost every computer in the world. it's not necessarily easy to find the exact breakpoints at which some algorithm becomes faster than another, or it might take a lot of time to do so (i.e. running performance tests on various input sizes for every algorithm) Introsort solves this by being, as the name implies, introspective . 
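As a minimal illustration of that quick-sort-plus-insertion-sort idea, here is a sketch in Python (this is not any library's actual implementation, and the cutoff of 16 is an arbitrary assumption rather than a tuned value):
def insertion_sort(a, lo, hi):
    # Sort a[lo..hi] in place; cheap for tiny, nearly-sorted partitions.
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, lo=0, hi=None, cutoff=16):
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= cutoff:
        insertion_sort(a, lo, hi)   # small partition: switch algorithms
        return
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:                   # partition around the pivot value
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i, j = i + 1, j - 1
    hybrid_quicksort(a, lo, j, cutoff)
    hybrid_quicksort(a, i, hi, cutoff)

data = [5, 3, 8, 1, 9, 2, 7]
hybrid_quicksort(data)
print(data)   # -> [1, 2, 3, 5, 7, 8, 9]
Back to introsort: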
It starts off as a quick sort, but it watches itself while it executes, and when the recursion exceeds a certain depth, it switches to heap sort. Regardless of whether it switches to heap sort in between or stays at quick sort, for very small arrays, it then switches to insertion sort. Introsort is used in several C and C++ standard library implementations, in .NET, and with Shellsort instead of insertion sort as the final algorithm in Go. As we have seen above, Timsort has a really clever take on this problem: if the input data doesn't fit its assumptions, it simply makes it fit by partially sorting it first! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/415274",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/337882/"
]
} |
415,413 | I've got some process in Go. Here's an example counting lines in text, though the question is meant to be far more general than this particular example: func lineCount(s string) int {
count := 0
for _, c := range s {
if c == '\n' {
count++
}
}
return count
} Alright, not bad, but it's too slow, so let's make it concurrent: func newLine(r rune, c chan<- struct{}, wg *sync.WaitGroup) {
if r == '\n' {
c <- struct{}{}
}
wg.Done()
}
func sumLines(c <-chan struct{}, result chan<- int) {
count := 0
for range c {
count++
}
result <- count
}
func lineCount(s string) int {
c := make(chan struct{})
var wg sync.WaitGroup
for _, r := range s {
wg.Add(1)
go newLine(r, c, &wg)
}
result := make(chan int)
go sumLines(c, result)
wg.Wait()
close(c)
return <-result
} Better, because now we're using all our cores, but let's be honest, one goroutine per letter is probably overkill, and we're likely adding a lot of overhead between the horrendous number of goroutines and the locking/unlocking of the wait group. Let's do better: func newLine(s string, c chan<- int, wg *sync.WaitGroup) {
count := 0
for _, r := range s {
if r == '\n' {
count++
}
}
c <- count
wg.Done()
}
func sumLines(c <-chan int, result chan<- int) {
count := 0
for miniCount := range c {
count += miniCount
}
result <- count
}
func lineCount(s string) int {
c := make(chan int)
var wg sync.WaitGroup
for i := 0; i < len(s)/MAGIC_NUMBER; i++ {
wg.Add(1)
go newLine(s[i*MAGIC_NUMBER : (i+1)*MAGIC_NUMBER], c, &wg)
}
result := make(chan int)
go sumLines(c, result)
wg.Wait()
close(c)
return <-result
} So now we're dividing up our string evenly (except the last part) into goroutines. I've got 8 cores, so do I ever have a reason to set MAGIC_NUMBER to greater than 8? Again, while I'm writing this question with the example of counting lines in text, the question is really directed at any situation where the problem can be sliced and diced any number of ways, and it's really up to the programmer to decide how many slices to go for. | The canonical time when you use far, far more processes than cores is when your processes aren't CPU bound. If your processes are I/O bound (either disk or more likely network), then you can absolutely and sensibly have a huge number of processes per core, because the processes are sleeping most of the time anyway. Unsurprisingly enough, this is how any modern web server works. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/415413",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/191781/"
]
} |
415,478 | Okay, I was being interviewed at a company and the interviewer asked me a recursion problem. It was an online interview, so, he had set up the problem statement and a function signature on CodeSandbox (an online code editor/collaboration tool). I was supposed to fill-up the function body. He had only one parameter in the function signature. I added another parameter just to keep track of the result. He said I shouldn't add another parameter(I was providing a default value to the additional parameter), as it changes the function signature. Now, in my opinion, if you are adding an optional parameter to the signature, it wouldn't make any difference. Let me take a simple example to make it more clear to you: Problem: Check if the input is a palindrome. Solution 1: function isPalindrome(input, index = 0){
const isAMatch = input[index] === input[input.length - 1 - index]
if (index === Math.floor((input.length - 1) / 2)) {
return isAMatch
}
if (isAMatch) {
return isPalindrome(input, ++index)
}
return isAMatch
} In the solution above, I added an optional parameter: index to keep track of the index to be matched. The question here is that if it's reasonable to add this optional parameter? Solution 2: function isPalindrome(str){
if(str.length === 1) return true;
if(str.length === 2) return str[0] === str[1];
if(str[0] === str.slice(-1)) return isPalindrome(str.slice(1,-1))
return false;
} In this solution, we aren't using any additional parameters. Now I'm repeating the question again, would Solution 1 be considered as an invalid solution? | Well I like the index solution simply because it doesn't require creating multiple sub strings on the heap. The problem with interview questions is they're mostly "guess what I'm thinking" games. So while you and I might be fully objectively right about which is the better solution the point is to show that you can work with the interviewer to either get them to see that or figure out what will make them happy even if it is stupid. But to answer your exact question, no. Solution 1 is still valid. If challenged about the signature all you had to do was call _isPalindrome(input, index) from isPalindrome(input) . No one said you couldn't define a new function. You are still using recursion. But can you get the interviewer to see that? Being right is a small consolation if you don't get the job. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/415478",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/102664/"
]
} |
415,538 | YAGNI might tell us, that in the below implementation the generic version is not needed, as long as the function is only used once. But to me personally, it seems, the generic version is more readable because I'm not distracted by all the possibilities the special class has, but are not used. The generic version is exposing less complexity to the algorithm. (What it actually does is not that important for this example.) Specialized version: enum class Thing {
A, B, C, D, E
// Many member functions here
}
enum class Category {
Foo, Bar
// Many member functions here
}
val thingCategories = mapOf(
Thing.A to Category.Foo,
Thing.B to Category.Bar,
Thing.C to Category.Foo
)
fun countUniqueThingCategories(xs: Iterable<Thing>): Int {
val (mapped, nonMapped) = xs.partition { it in thingCategories }
return mapped.map { thingCategories[it] }.distinct().count() + nonMapped.distinct().count()
}
fun main() {
val things = listOf(Thing.A, Thing.C, Thing.D, Thing.B, Thing.A, Thing.E)
println(countUniqueThingCategories(things))
} Generic version: enum class Thing {
A, B, C, D, E
// Many member functions here
}
enum class Category {
Foo, Bar
// Many member functions here
}
val thingCategories = mapOf(
Thing.A to Category.Foo,
Thing.B to Category.Bar,
Thing.C to Category.Foo
)
fun <T, U> countUniqueWithMapping(xs: Iterable<T>, mapping: Map<T, U>): Int {
val (mapped, nonMapped) = xs.partition { it in mapping }
return mapped.map { mapping[it] }.distinct().count() + nonMapped.distinct().count()
}
fun countUniqueThingCategories(xs: Iterable<Thing>) = countUniqueWithMapping(xs, thingCategories)
fun main() {
val things = listOf(Thing.A, Thing.C, Thing.D, Thing.B, Thing.A, Thing.E)
println(countUniqueWithMapping(things, thingCategories))
} Which version would you prefer to find when maintaining a project? | There are definitely cases where solving a more general problem than required makes code easier to read, to reason about and to maintain. The simplest example I can think of is when code deals with input data consisting of four or five similar attributes, and the processing code gets duplicated up to 5 times because the responsible developer is too inexperienced or too lazy to refactor the existing code to the usage of an array and a loop. So solving the more general problem of processing "N" pieces of data, though only 5 are required, is definitely a good idea. But let's talk about "generics": a generic approach sometimes bears the potential to split a complex function into a less complex one together with some separated data type. If that helps to move some of the complexity of the data type out of the function, this can indeed improve readability and maintainability, even if the function and the data type are not used elsewhere. That said, I fail to see in the specific example of the question why the generic version of the function should fall into this category. Maybe it is because the example is in Kotlin (which I have never worked with), maybe it is because it is too artificial and contrived. But I really don't see "less complexity to the algorithm" in the generic version. And for the non-generic version, I cannot say that it "distracts me by all the possibilities the special class" . So find a better example, and I may vote for the generic version. But not this time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/415538",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/104636/"
]
} |
415,631 | To clarify the question, here is my context (or something very similar). I have an interface, that I call IDataSource . The implementing classes contain information to retrieve data. So I have multiple classes implementing it, let's say FileSource , DatabaseSource and WebServiceSource . So far so good. Now, depending on the version of the program, I will have some treatment to do to these sources' data, like decrypting, reading the first few bytes (signature), and anything you can imagine. What I wanted to do was having a coordinating class, that will take IDataSource objects, check their actual type ( FileSource and such) and perform the appropriate data treatment before doing stuff with the final data. For example, I would have a FileSource that requires no treatment, and an EncryptedFileSource that requires decrypting, but the classes themselves are completely equivalent (exact same properties) in a vacuum. So, the question becomes, is it OK to have EncryptedFileSource inherit FileSource , which implements IDataSource , and in my coordinating class, check if the IDataSource object's type is FileSource or EncryptedFileSource , and perform the decrypting in the second case? Or is there a better way to do it? | No . Emphatic no . Unless I misunderstood you, the question is to subclass for a different behavior, but actually not have the behavior itself. Instead an outside actor checks the exact type and does the needed behavior. That is not ok. If you do have an EncrypedFileSource you have to have the decryption/encryption in that class. If your design doesn't allow that, you'll have to change your design. Also, inheritance is the wrong tool here even if the behavior was there. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/415631",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/130012/"
]
} |
415,697 | I understand the concept of cloud computing, but I'm curious why the term has become so exhausted the past several years. Servers have been around for a long time, and I fail to see how this is any different from before the term "cloud computing" was in fashion. There are many more vps services and more systems and complexity, but is "cloud computing" mainly just a marketing term? | The distinguishing feature of "cloud computing" is indeed the way that it is marketed, in particular, the way that it is priced . Another synonym for "cloud computing" that I personally prefer is "utility computing", and that term describes best what it is all about: it is priced and used like any other utility, water, gas, electricity. You only pay for what you use, when you use it, you don't have to configure anything, you don't have to rent anything, you don't have to prepay anything. You are automatically billed monthly based on your very fine-grained actual usage. It really is like a utility: if you want to wash your hands, you open the tap a little bit and a bit of water comes out. If you are filling your pool, you open the tap more and more water comes out. You don't have to prepay the water, you don't have to call the water company and ask them to send you water, you don't have to arrange anything. You just open the tap, and there is instant water. Utility computing resources are the same way. This is different from anything we had before. We had rented servers in data centers, but we had to pay those whether we used them or not. Even in the (very short and unsuccessful) era of Application Service Providers (ASPs, anybody remember those), you generally had a monthly or yearly plan. There were mainframe sharing systems where you were billed by the CPU second, but those weren't as instantaneous as utility computing resources, you generally had to pre-arrange some stuff. And in the field of economics, the sub-field that deals with how to assign prices to products, and how to bring those products to the market is called "marketing", so you are almost right: "cloud computing" is mainly a marketing term, but I would very much object to the word "just" in your sentence: is "cloud computing" mainly just a marketing term? Because the marketing aspect of cloud computing is precisely what makes it different from everything that came before, and what made it so disruptive. There are other parts of the "utility" metaphor that are also applicable to utility computing, such as the fact that you don't need to care where your water comes from and how it gets to your tap, you just turn on the tap, and water comes out. The water could come from a tank, a reservoir, a lake, a river, a well. The electricity could be generated by wind, solar, geothermal, coal, nuclear, it could come directly from a plant owned by your provider or by a plant owned by a different provider who then sold the energy to your provider, etc. This is where the "cloud" term comes in. It comes from system diagrams, where the network was always drawn simply as a "magical cloud" that does everything, and you don't really need to concern yourself with how it works. That is the metaphor that "cloud computing" is meant to invoke. The cloud is just this thing that is always there, always works, and you don't need to worry about it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/415697",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/374470/"
]
} |
415,775 | Imagine a project is assigned to a team, deadline is estimated as 8 months. After 6 months it becomes apparent the project will most certainly not be complete on time(e.g a law changes or a hidden monumental hurdle is discovered, the lead dev gets hit by a bus, etc.). But the project is important (e.g. lose an important client on failure or have to pay reparations). One solution we all agree is horrible is adding more developers , especially new to the company. They will need at least a month to get up to speed and occupy the rest of the team during that time. One solution we all agree is awesome is prevention. But such situations do happen. What is a reasonable solution in such a situation for the manager of the team, provided they have plenty of leverage for additional people, funding, client negotiation etc? | We have historically seen over and over again that there are two working and two non-working ways of combining the two fundamental constraints on software releases: dates and features. Fixed date, flexible features, aka "release what's ready": you release at a pre-determined date, but you only release what is working. This is a model that is successfully used by Ubuntu, Windows, Linux, and many others. Fixed features, flexible date, aka "release when ready" or "It's done when it's done": you determine the set of features beforehand, and then you simply work until the features are finished. Some Open Source projects work this way. Fixed date and features. Flexible date and features. #1 and #2 have been shown to work well in many different projects. For example, both Ubuntu and Windows are released with a fixed 6-month cadence with whatever features are ready in time for the release. If you make the cadence fast enough, even if a feature misses the release, customers don't have to wait a very long time for the next release. Linux actually uses an interesting staging of the two: as soon as there is a new release, there is a fixed-time "merge window" of two weeks, during which new features are added. When this merge window closes, the set of merged features up to that point is fixed, and a "stabilization period" starts, during which the fixed set of features is stabilized, any bugs fixed, etc. This process takes as long as it takes, there is no deadline. When everything is stable, a new release is made, and the process starts anew. It turns out that this actually leads to a fairly stable release cadence of 6-8 weeks, but the point is that this cadence is not enforced, it emerges naturally. Note that this does not invalidate my assertion that #3 doesn't work: Linux development does not fix dates and features. They do #1, then make a cutoff point and switch over to #2. #3 is always a big problem, especially with a larger feature list and longer timeframes. It is pretty much impossible to predict the future (many have tried), so your estimates are almost always off. Either you have finished all the features and are sitting around bored twiddling your thumbs, or, more likely, you bump up against the deadline and frantically try to finish all the features in a hellish death march. It does work if you keep the feature list and timeframe short enough. E.g. this is essentially what a Sprint is in Agile Methodologies: a fixed set of features in a fixed timeframe. However, the timeframes are reasonably short (typically a Sprint is one week or two), and it is ensured that there is rapid and immediate feedback and adjustment. 
You generally have a Sprint Retrospective after every Sprint, where you gather all the problems and successes of the Sprint and incorporate what you have learned into the next Sprint. And of course there is a Sprint Planning Meeting where the team discusses the next Sprint with the customer and agrees on a set of features to be implemented during that week. Weekly (or two-weekly) Sprint Retrospectives are still not fast enough feedback, though, so there is also a Daily Standup Meeting with essentially the same goals as the Sprint Retrospective, except being able to react even faster: check whether the previous day's goals were met, and if they weren't, figure out what the problem was and fix it. (Note, I wrote "what" the problem was, not "who"!) It is also very important that every Sprint ends with the release of a working product, so that the customer can immediately start using the new features, play around with them, get a feel for them, and give feedback for the next Sprint what is good, what isn't, what should be changed, etc. #4 almost always leads to never-ending releases with feature creep. Debian 3 and Windows Longhorn were famous examples that interestingly happened around the same time. Neither of the two had a fixed release date, and neither of the two had a fixed set of features. Longhorn took 5 years, Debian 3.1 took 3. In both cases, what happened was that they didn't want to cut features because the long release meant that people would have to wait even longer for the features to appear in the next release. But because of not cutting features the release date slipped even further, so they added even more features because otherwise users would have to wait even longer, but that made the release date slip, and so on and so forth. An even more famous example might be ECMAScript 4. So, what can you actually do in your situation? Well, you are currently in situation #3, and that simply does not work. You have to turn your situation #3 either into a #1 or a #2 by either relaxing the release date or dropping features. There simply is nothing else you can do. The damage was done 6 months ago, and it cannot be magically fixed. You are in the situation where the amount of features cannot be delivered in the amount of time, and one of the two has to give. IFF you can manage to move the release, then you might have the chance to grow the team, but the thing is that once you get 5-10 members, you really won't get any faster. You'd then have to break this into two or more projects, each with its own feature set, release date, and team, but then you also have to coordinate those, and define stable interfaces between both the projects and the software deliverables. Note that in terms of culpability, the three scenarios presented in the question are very different: If the applicable law changes, then it is perfectly possible to deliver the agree-upon features at the agreed-upon time. It's just that the agreed-upon features are useless for the customer. (Another good reason to be Agile.) In this case, it is actually in the customer's interest to re-negotiate the project, because if you just stuck to the agreed contract, they would have to pay for a completely useless result. So, this is essentially either a completely new project or a requirements change for the existing project, and both mean new prices and new timelines. If the lead developer gets hit by a bus, the culpability is squarely on the project manager. 
Making sure that the bus factor is > 1 is pretty much a core responsibility of the PM. Practices that can improve the bus factor are for example Collective Code Ownership, Pair Programming, Promiscuous Pairing, Mob Programming, Code Reviews. The "monumental hurdle" is a bit squishy. The question doesn't really define what kind of hurdle it is. If it turns out that the supplier massively underestimated the complexity, then it's obviously their fault. This can be mitigated by Spiking or Prototyping, for example. However, regardless of who screwed up, we are still in the same place: we have an agreed set of features that cannot be delivered in the agreed time, so there is absolutely no way around the fact that one of the two has to give . There simply is no "non-horrible" solution. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/415775",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54268/"
]
} |
416,049 | I think the answer to the first part of my question is, "yes" -- no point in making objects immutable in a single-threaded application (or I guess in a multi-threaded application if that particular object will not be shared among threads). If I am wrong, please explain why immutable objects are useful in a single-threaded app. The second part is, if you are sharing an immutable object, what is it useful for if it can't be changed? Why is it even being shared? If it is "changed" so that a new and changed version of the object is created, say a shared counter, how is this shared object used if a change makes a new object -- how does another thread access the new version? More broadly, how are immutable objects handled in a multi-threaded application -- I guess how a shared but immutable counter would be incremented between multiple threads. I have searched extensively and I think all I read were discussions that involved some kind of locking or atomic operations which, I think, could be performed on a mutable object just as well. And finally, if immutable shared counter is useful, are the machinations any simpler, is debugging any simpler, than just locking access to a mutable shared counter? | No, immutable objects are quite useful in general. The first and most basic reason is that concurrency in a system doesn't require a multi-threaded application. Making say... a row in a database immutable provides a lot of benefit for change tracking, collision avoidance, syncing, and backups. And while less valuable than in concurrent scenarios, immutable objects tend to be easier to use and debug because you know the state of the object across the lifetime of your app and you know that some function isn't misbehaving and mutating it on you. Also, any sort of exceptional state will show up immediately on object creation rather than during some mutation later on during processing. Those tend to be easier to identify, and happen in places where it is easier to recover or abort cleanly. The second part is, if you are sharing an immutable object, what is it useful for if it can't be changed? Why is it even being shared? The most obvious example is a configuration. You don't want to change it at runtime, but it's often needed by different parts of your code. Something like the current user. You don't want to change it, but will want to share it with different modules. If it is "changed" so that a new and changed version of the object is created, say a shared counter, how is this shared object used if a change makes a new object -- how does another thread access the new version? So the biggest thing with immutable objects (in most languages) is that writes of the object are atomic. Uninterruptable. Say you want to change a few fields on a mutable object. The thread changes one, then another, then another. Any other thread can read the object in-between each of those steps. It'll see a half-changed object. But if you want to change a few fields on an immutable object it's different. The thread makes a new object, changing the three fields it wants to change. Then it overwrites the shared reference in one, uninterruptable step. Any other thread can grab a reference to the object and know that it won't change. If it grabs the reference before the other thread does its write, it might get the old object but it can never get a half-changed object. For a counter it doesn't much matter. 
Incrementing an int will be just as uninterruptable as assigning a reference to a new int (though that might not apply if you need counters bigger than an int, depending on your language, compiler, target CPU, etc.). Locks though are very costly in most languages/platforms so programmers will avoid them when it is safe to do so. (For more info consider this question , which is adjacent to this one) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/416049",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/355659/"
]
} |
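A minimal Java sketch of the point above about replacing a shared immutable object in one uninterruptible step; the ServerConfig and ConfigHolder names are invented for illustration, and AtomicReference is just one way to publish the reference. A reader sees either the old config or the new one, never a half-changed mix:

import java.util.concurrent.atomic.AtomicReference;

// Immutable value: once constructed it can never be half-updated.
final class ServerConfig {
    final String host;
    final int port;
    final int timeoutMillis;

    ServerConfig(String host, int port, int timeoutMillis) {
        this.host = host;
        this.port = port;
        this.timeoutMillis = timeoutMillis;
    }

    // "Changing" two fields produces a brand-new object.
    ServerConfig withEndpoint(String newHost, int newPort) {
        return new ServerConfig(newHost, newPort, this.timeoutMillis);
    }
}

public class ConfigHolder {
    // Shared reference; assigning it is atomic, so readers see either the
    // old config or the new one, never a mix of the two.
    private static final AtomicReference<ServerConfig> CURRENT =
            new AtomicReference<>(new ServerConfig("localhost", 8080, 500));

    static void reconfigure(String host, int port) {
        ServerConfig updated = CURRENT.get().withEndpoint(host, port);
        CURRENT.set(updated); // single, uninterruptible publication
    }

    static ServerConfig snapshot() {
        return CURRENT.get(); // safe to read without any lock
    }

    public static void main(String[] args) {
        ServerConfig before = snapshot();
        reconfigure("example.org", 443);
        ServerConfig after = snapshot();
        System.out.println(before.host + ":" + before.port); // localhost:8080
        System.out.println(after.host + ":" + after.port);   // example.org:443
    }
}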
416,076 | If immutability is "good" and yet you can in effect change the value in an Integer or String variable (never mind that you get a new reference -- the value has changed) what good is it that Integer and String are immutable? If Integer were mutable, what sort of bugs would be harder to find (etc.) than in the case that Integer is immutable? Or with String? | never mind that you get a new reference No! Do mind that fact - it is the key in understanding the point of immutable objects. -- the value has changed No, it hasn't. You have a different object with a different value in this place in the code . But any other part of the code which had a reference to the original object still has that reference to that object with the original value. Immutability is good because it prevents you from making a change to an object and having that change affect a completely different part of the code which wasn't written with the possibility in mind that the objects it operates on could be changed somewhere else (very little code is really written to cope with that). This is particularly useful with multithreaded code (where a change done by a different thread could happen in between the operations of a single line of code), but even single-threaded code is much easier to understand when methods you call can't change the objects you pass into them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/416076",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/355659/"
]
} |
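A small Java illustration of the answer above, using hypothetical variable names: reassigning a String variable points it at a new object, while every other reference keeps seeing the original, unchanged value; a mutable object behaves differently.

import java.util.ArrayList;
import java.util.List;

public class ImmutabilityDemo {
    public static void main(String[] args) {
        String name = "Alice";
        List<String> greetings = new ArrayList<>();
        greetings.add(name); // the list now refers to the "Alice" object

        // This does NOT change "Alice"; it builds a new String and points
        // the local variable at it.
        name = name + " Smith";

        System.out.println(name);             // Alice Smith
        System.out.println(greetings.get(0)); // still Alice: the other reference is untouched

        // Contrast with a mutable object: every holder of the reference sees the change.
        StringBuilder mutable = new StringBuilder("Alice");
        List<StringBuilder> holders = new ArrayList<>();
        holders.add(mutable);
        mutable.append(" Smith"); // mutates the one shared object in place
        System.out.println(holders.get(0)); // Alice Smith, changed behind the list's back
    }
}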
416,242 | I have been in two software product houses for three years in a row. The first is a small company maintaining a fairly small management system with a monolithic legacy code base (almost twenty years). Tightly coupled code is everywhere without sufficient unit test coverage. However, the management usually does not want developers to refactor the legacy code. The second is a fairly big company maintaining a big domain-specific system with a huge monolithic Java legacy code base (over ten years). The layered architecture indeed decoupled the infrastructure from the business logic. However, in their business layer, there are also some giant classes with more than 3 thousand lines of code. Developers still continuously inject more and more code into those legacy classes. Developers are allowed to refactor their own fairly new code about adding new features, but are warned not to refactor these giant spaghetti classes, either. Experienced senior developers say that changes or refactoring on those classes might be disastrous due to the lack of regression tests. However, personally I have read practical books about clean code and refactoring. Most of the books strongly recommend developers to refactor actively. But why in real world companies are against this? So I would like to collect answers from very experienced developers. Why do these two companies I was in prefer to keep the super legacy code unrefactored? Isn't this disastrous? | It‘s a question of risk management: Refactoring a system always creates the risk of breaking something that worked before. The larger the system, the higher its complexity, and the higher the risk of breaking something. With spaghetti-code (or any other poorly structured code) the real structure of the code remains fuzzy, and the dependencies might be hidden. Any change in one place could easily have impacts anywhere else. This increases the risks of breaking something to the highest level. With TDD, or any other technique guaranteeing a comprehensive set of test cases, you can quickly verify that refactored parts (and dependent parts) still work. Of course, this is effective only with the help of proper encapsulation. Unfortunately, tests are often missing for legacy code, or their coverage or depth is insufficient. In other words, with large legacy spaghetti code bases, refactoring creates a high risk of breaking something that worked before, and the impact of this risk cannot be reduced with automated tests. The significant risk of refactoring simply outweighs refactoring benefits in this case. Additional remark: An alternative approach at a lower risk is: don't touch the running system, but implement new and replaced features with state of the art testable code and clear boundaries . This more evolutionary approach is not always feasible, but it can provide significant short term and long term benefits. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/416242",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/320860/"
]
} |
416,309 | I'm a beginner at collaborative development and have just started learning about merge conflicts.
And I have this question. Is it possible for a developer to deliberately postpone merging because he doesn't want to be the one resolving a potential conflict? If this isn’t the solution, what are the strategies that work? | Oh god yes. I broke the build my first time. Made me so gun shy I was hiding versions in folders. Of course delaying my check-ins just made things worse. I was in hell until I figured out what I needed. I needed a safe place to play. I created my own toy project so that I could deliberately cause merge conflicts. Learned how to fix them the hard way. Soon people were asking me to help them fix their problems. All because I took the time to play with a toy. Check in often. It will keep what needs fixing small. But take the time to learn your tools so you can see trouble coming. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/416309",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/375564/"
]
} |
416,361 | While investigating Wikipedia article on Qantas Flight 72 I've found " Potential trigger types " section that says (emphasis mine): A number of potential trigger types were investigated, including software bugs , software corruption , hardware faults, electromagnetic interference and the secondary high-energy particles generated by cosmic rays. I wonder if there is any distinctive and ultimate difference between "bug" and "corruption" (if yes, then what is the difference) or is this just an article's wording and nothing else? | Software corruption is the contrary of software integrity. It's the same thing as data corruption , except that the data is the software code. It can affect: the software binary stored in memory: binary codes of software instructions are altered for example because of physical interference (“please switch off electronic devices during take-off and landing”), hardware defects (memory chip), malicious activities (e.g. row hammer vulnerability), or software bugs (i.e. as a consequence of a buffer overflow). the software binary before it is loaded in memory, i.e. the executable file stored in a file system (e.g. SSD memory, hard disk, ...) or transiting via the network (e.g. loaded from a remote file server). the software source code before the executable is produced: the source code is data like any other, that can be corrupted in the same situation as any data. A typical example is when a software company’s source code repository gets hacked (accidental cases generally prevent compilation and have a very limited impact) Note that “software corruption” may be used ambiguously to mean corruption caused by the software instead of corruption of the software itself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/416361",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57759/"
]
} |
416,371 | I have a Kubernetes instance from DigitalOcean that has 2 worker nodes and 1 load balancer. Now I want to create a MySQL managed database cluster for the app that will run on that Kubernetes. Question is: Does DigitalOcean offer enough resources for a scalable database cluster? The plan that I'm looking at starts at $60 for 4GB memory, 2 vCPUs running MySQL 8 . Besides this i can one-click add more read-only nodes with similar resources. How capable is a master with those resources? How can i do the math on resource consumption in order to find out the actual amount of resources that i need for my database cluster? P.S: the app is going to be written in Laravel 8 running on PHP 7.4.8 | Software corruption is the contrary of software integrity. It's the same thing as data corruption , except that the data is the software code. It can affect: the software binary stored in memory: binary codes of software instructions are altered for example because of physical interference (“please switch off electronic devices during take-off and landing”), hardware defects (memory chip), malicious activities (e.g. row hammer vulnerability), or software bugs (i.e. as a consequence of a buffer overflow). the software binary before it is loaded in memory, i.e. the executable file stored in a file system (e.g. SSD memory, hard disk, ...) or transiting via the network (e.g. loaded from a remote file server). the software source code before the executable is produced: the source code is data like any other, that can be corrupted in the same situation as any data. A typical example is when a software company’s source code repository gets hacked (accidental cases generally prevent compilation and have a very limited impact) Note that “software corruption” may be used ambiguously to mean corruption caused by the software instead of corruption of the software itself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/416371",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/326935/"
]
} |
416,386 | I’m still really new to learning to program. Just learning the syntax for a few programming languages at the moment. The courses I viewed for C# and Java touched only very briefly on getters & setters and it still didn’t make an awful lot of sense to me. So I researched them further and found that the whole subject of getters & setters is a lot more debated than I could believe. For every argument & example both for and against getters & setters there was a counter argument, saying that it was wrong, the code was a bad example etc. and I still didn’t know who to agree with or even fully grasp the jargon behind it. So I’m really not sure what to go for at the moment. So I’m going to post my take on what getters & setters really are given what I’ve read and if there’s anything I’ve got wrong then please tell me. Then I have a few more questions that I haven’t found any answers for anywhere. My take on getters & setters FOR: Getters and setters are methods declared within a class and applied to fields within said class, and control what other classes can access and modify these fields. They’re used on fields which need to be accessed from outside their class, but at the same time cannot let anything have access other than what needs access. When programmers talk about “if you make a field public then ANYONE can have access to them” by anyone they’re not necessarily talking about hackers or the user of your finished program. They’re talking about any other programmers who are working on the same program (or yourself) creating something which accidentally creates side effects which modify that class. Then you not being able to work out what is causing that change. As well as controlling access they can do other things such as validate any input to the field before it is added (Mike Dane aka Giraffeacademy has used the example several times that a movie can only have 3 age ratings, so adding into the setter a method which checks the rating of a movie before it is added is a good side effect of a setter). In the same example, it can also improve the maintainability of your program, for example if a fourth rating is added, you can modify any objects where the new rating applies to just by adding it to the setter, rather than going through each movie individually. AGAINST Many programmers nowadays are strongly against the use of getters and setters. They argue that it ruins the encapsulation of your objects and makes code noisy and unreadable. Also that they encourage programmers to think of objects as data structures. I’ve seen some arguments that they are now somewhat obsolete as some programming languages such as Python, which doesn’t need to use getters & setters. Also some IDE’s make it easy to see where side effects are taking place. The main argument I’ve seen against them is that they’re just poor design. Code and classes should be clean and organised in a way that fields are only accessed outside the class when it absolutely needs to be, and getters and setters should only be used as a last resort. An idea I saw recently is that you should start off making every field you create private. Then find everything which needs to be accessed outside the class and if there’s no other way around it, add a getter and/or setter to it. The final, neutral argument I’ve seen is that you shouldn’t be adding things to your code that you don’t understand or don’t know if you’re going to need or not. That I absolutely agree with. 
But obviously I need to know if they’re going to be useful to me or not when I actually create something. So if I’ve got anything wrong then please let me know, and I still have a few questions: What parts of your program would you actually use getters and setters on? The examples I’ve seen online use classes such as ‘Dog/Ball’ and ‘Person’ which really aren’t much use to me. I’m thinking you would use them if you had a class for ‘Accounts’ and a different class for ‘Settings’.. the settings class would need to access the Account’s user name if the username requested to change it.. right? Going with that example, if getters and setters are created to prevent someone from being able to change a variable through a side effect, what kind of code could actually change a user’s name as a side effect accidentally? Surely the only kind of areas in your program that could modify an accounts username are setUsername, displayUsername and changeUsername, and nothing else would ever need to go near those variables. Given the debate I’ve found surrounding getters & setters why do courses and tutorials touch so briefly on them only just teaching you the syntax for them, and not arguing the cases for and against or even providing actual real world examples? (See note before about dog/ball). Are they too biased of an opinion? Or am I just looking into one topic way too much? As I said I’m still so new to programming, so I’m probably either too inexperienced or thinking about it way too much. I just want to be absolutely clear on what I’m adding/not adding to my programs before I release them to the world. Any help much appreciated. Have a good day. | I’m still really new to learning to program. Just learning the syntax for a few programming languages at the moment. And that is actually the problem here - you approach this way too much from a syntactical point of view. What you need to learn first is solving problems with programs. When the problems get larger, the programs will get larger and require more structure. That will bring you to a point where you need data structures and data abstractions functional abstractions functions which operate on specific data structures, so the data structures might become classes, and the functions become member functions of that classes. At that point, you will have to learn about how to design the public API of a class to create a good abstraction, for reaching a certain goal. You will also start to learn why making members "private" by default is a really good idea. If you work in some context of a real problem, you will know the precise requirements for your classes and their API, so you can decide which parts / functions / properties can stay private, and which not. That will be the point in time where you may notice a very frequent requirement for classes: getting external access to some state information of the object (either read access - for which a getter is fine, or also write access, for which an additional setter will be required). Such state often corresponds to the content of a private member variable. However, that is not mandatory, getters and setters can also deliver (or change) values/state which are not stored directly in a member variable, but can be derived or calculated indirectly. So in short: you do not decide about using or not using getters and setters by a pro/con list. Getters and setters are just tools which help you to solve some problems (specifically the problems you listed in your FOR section). 
One decides about their usage depending on if the actual problem requires them, and if they fit well to the kind of abstraction one wants to build by a class. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/416386",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/375683/"
]
} |
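One way to make the answer's point concrete in Java: a getter is part of the class's public API and does not have to mirror a stored field. The Account class below is invented for the example; callers keep calling getBalanceInCents() even though the value is derived rather than stored.

import java.util.ArrayList;
import java.util.List;

public class Account {
    private final List<Long> postedAmountsInCents = new ArrayList<>();

    public void post(long amountInCents) {
        postedAmountsInCents.add(amountInCents);
    }

    // The getter delivers state without exposing how it is represented.
    // Today the balance is computed from the posting history; tomorrow it
    // could be cached in a field, and no caller would have to change.
    public long getBalanceInCents() {
        long sum = 0;
        for (long amount : postedAmountsInCents) {
            sum += amount;
        }
        return sum;
    }
}

Whether to add a setter for the balance is exactly the kind of per-class API decision the answer describes: it depends on whether any real requirement lets outside code change that state directly.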
416,567 | Suppose we have two branches A and B which have been forked from master. Both branches A and B make some changes and implement some unit tests. They pass all current and new tests, then are merged back into master. For simplicity, there are no merge conflicts. Is it guaranteed that the resulting code on master will also pass the unit tests? The reason I ask is that I often see GitHub unit tests run automatically once a pull request is made. If they pass, then the code may be merged into master. However, I think master could still end up failing tests if two pull requests break each other. I would have thought
a better solution would be: When a pull request is made, run the unit tests to catch anything egregious. Have conversations, code reviews etc... Once the pull request is ready to be merged, do a test merge into master , run the unit tests, if all succeeds, commit the merge. So you never actually commit broken code into master. | No. As a counter example, consider branch A adds a unit test that uses reflection to check for a misspelling in an enum. And branch B adds a misspelling. Both pass because a misspelling doesn’t fail a build, in A the test doesn’t fail because everything is spelled right, and in B there isn’t a test to check it. There won’t be any merge conflicts because the enum and its unit test will be in separate areas. But the test will fail once the merge is complete. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/416567",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/376024/"
]
} |
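The counterexample in this answer can be written out in a few lines of Java (the enum and names are invented here): branch A adds the reflection-style spelling check, branch B adds the misspelled constant, both are green on their own, and only the conflict-free merge of the two fails.

import java.util.Set;

public class SemanticConflictDemo {

    // Branch B adds the last constant. It compiles, and branch B has no spelling test.
    enum OrderStatus {
        CREATED,
        SHIPPED,
        RECIEVED // misspelling introduced on branch B
    }

    // Branch A adds this check. It passes on branch A, where the misspelled
    // constant does not exist yet.
    static void checkSpelling() {
        Set<String> allowed = Set.of("CREATED", "SHIPPED", "RECEIVED", "CANCELLED");
        for (OrderStatus status : OrderStatus.values()) {
            if (!allowed.contains(status.name())) {
                throw new AssertionError("Misspelled enum constant: " + status.name());
            }
        }
    }

    public static void main(String[] args) {
        // This file shows the merged state, so the check throws AssertionError here;
        // on either branch in isolation it would pass.
        checkSpelling();
        System.out.println("All enum constants are spelled correctly");
    }
}

This is why many teams also run the test suite on the merge result (or use a merge queue) before the commit lands on master, which is essentially the workflow sketched in the question.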
416,574 | Given a repository which contains two different applications A and B (e.g. bootloader and RTOS), is it ok to copy source code from A to B in order to avoid dependencies ( include 's, adding A source files to the B compilation) between them, so they stay completely independent both at build-time and runtime? Note: In addition, let's suppose that the logic to be copied from A is private (that is, it's only meant to be used by certain internal functions in A ) | It is acceptable if the copied code can change independently from the original code. If you are copying code and every future change has to be maintained in two different code bases, you could better create a shared library. Then both applications have a dependency on the library, but not on each other. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/416574",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/304203/"
]
} |
417,896 | I have read that in OOP, we think of objects as "sending messages to each other", for example if we did car1.stop() , we say that "we sent the message stop() to the car1 object". But what benefit do we get by thinking of objects as " sending messages to each other? " What I mean is, let's say that we thought of car1.stop() as "calling the method stop() on the car1 object." What's wrong with thinking of it like this? | It avoids micromanaging. If I tell you to stop in an OO way I haven't called your stop procedure, or your stop function, or your stop method. When I send that stop message I've raised a stop event. One that you are free to handle or not. You don't even have to respond. Now sure, you might use a stop method to handle that, but that's your problem. This avoids micromanaging because I don't have to deal with how you respond to being told to stop. I don't have to think, "OK I told him to stop, now what's he going to do? If he ignores me then I'll do this, if he has a problem then I'll do that, if he stops then I'll do this next thing". No, that's micromanaging. If anything needs to be told what happened when you got told to stop it's better to let you decide who to tell. It gives me fewer things to think about. It gives you more freedom to control your stop response. This keeps a very low form of coupling between objects. Lower even than typical 1 functional programming. Functional programming does composition beautifully. Pure functions make reasoning simple. But it locks you down to sending the response back to the caller. It has nowhere else to go. That couples caller to callee. Messages, however, can go where they've been configured to go without worrying what becomes of them. It's not as straightforward but it's another detail avoided. Another benefit is minimizing data movement. Functional programming has been called "data in, data out". OOP wraps data in a "message in, message out" system. The messages can be very lightweight compared to the data. I'm contrasting OOP with Functional here but that shouldn't be taken to mean you exclusively use one or the other. Many of functional programmings principles can be used while using OOP. Prefer immutable objects. Be disciplined with side effects. Etc. OOP messaging is a powerful way to model. It inherently respects encapsulation. I don't look inside you. I don't ask you about your privates. I tell you what I want done and you decide what, if anything, to do about it. Once I tell you, I don't have to hover over you and manage what you do. I just let you do it. Whatever it is. If I ever need to know more I'm sure someone will tell me. Messaging is sometimes implemented by using methods as the messages but that's just one way to do it. It could be text messages, packets, tweets, emails, etc. The methods are not what makes it OOP. It's how you use them. Here's the rub. Just because you're using an “OOP language” that has methods doesn't mean every method is a genuine OOP message. No language perfectly enforces this. Your programming team has to enforce this. Depending on the design, a method may conform to requirements of a OOP message. If you're lucky your core packages will follow this well. I've never worked on a project where OOP was 100% enforced or a functional project where everything was pure. But the better projects will find some way to at least signal clearly where the ideals are followed and where they have been compromised. This is important because it impacts the readability of the code. 
It's good to quickly know if you're looking at a true OOP message, a pure function, or some other monster. Joel has blessed us with this awesome comment: 1. Regarding Functional Programming only returning to the caller, I would suggest looking into the technique of continuation-passing style combined with tail-call optimization. "When you're done here, talk to this other guy. I will show myself out." – Joel Harmon. This is all true. But if the caller is saying "talk to this other guy", the caller is still dealing with knowing where to send the response. To put functional programming's coupling on par with OOP (which configures output ports in constructors), pass "this other guy" into the enclosing scope of a closure. That way the caller neither knows nor cares where the result goes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/417896",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/274111/"
]
} |
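A sketch of the messaging style described above, in Java with invented names: the sender tells car1 to stop and does not hover over the result; where the response goes (if anywhere) was configured up front, in the constructor, in the spirit of the answer's point about configured output ports.

// The receiver decides how to handle the "stop" message and whom to tell.
interface StopListener {
    void stopped(String who);
}

class Car {
    private final String name;
    private final StopListener listener; // output port wired in the constructor

    Car(String name, StopListener listener) {
        this.name = name;
        this.listener = listener;
    }

    void stop() {
        // ... apply brakes, shift to park, etc.; the details stay hidden
        listener.stopped(name); // the Car decides who hears about it, not the sender
    }
}

public class MessagingDemo {
    public static void main(String[] args) {
        Car car1 = new Car("car1", who -> System.out.println(who + " reports: stopped"));
        car1.stop(); // send the message and move on; no follow-up checks, no micromanaging
    }
}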
418,182 | I am writing numerical calculation software using .NET C#, which needs to be blazingly fast. There is a lot of fractional math. So using decimal type is pretty much out of the question, given its poor speed relative to using double . But of course double has its problems testing for equality, with floating point rounding issues. My options seem to be subclassing double and overriding == , < and > ; versus creating extension methods for double equivalent to these. My tendency is to go with the latter - less code to change and maybe it will be less confusing to others reading the code later? Is there another option? What are other good reasons to choose one over the other? | "double has its problems testing for equality". No, that is not true. "double" does not have such problems. Equality testing for double values is well defined and usually works as it should (which may sometimes not be what several programmers expect, of course). Truth is: programmers have often problems with testing for equality correctly in numerical software. You cannot simply fix this by using another data type, or by providing some standard equality comparers with some standard precision for equality up-front. Though such approaches may be part of a solution, you first and foremost need to make sure the programmers in your team know how to do floating point comparisons correctly. Before reading the rest of my answer, please have a look into " What Every Computer Scientist Should Know About Floating-Point Arithmetic" . Now. No excuses. So, since you read this paper, you now have learned that there are several alternatives on how comparisons can be done when using floating point numbers, and one has to pick the correct one for the specific case. For example, it may be necessary to take absolute or relative errors into account, to analyse the required precision for each individual comparison/quantity, or to take the specific operations and algorithms into account which will be used in the numerical software you are designing. Another thing which might be necessary is to adapt the scaling of some quantities, or other measures to keep rounding errors under control. To find out what one really needs, I would recommend starting to implement some of the algorithms and determine precisely which kind of floating point comparisons are required there. When comparisons of the same kind occur more than two or three times, then it is time to refactor them into a reusable library (maybe using extension methods, which is a useful way in C# whenever it comes to adding some reuseable methods to an existing type one cannot change). It should be clear now why overloading an operator like == is not useful, since there is only one such operator per type, with no additional parameters like a precision . Don't try this up-front until you have already written several of such numerical programs! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/418182",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/378027/"
]
} |
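The question is about C#, where helpers like these would naturally be extension methods on double; the comparison logic itself is language-independent, so here is a sketch in Java. The tolerances are explicit parameters because, as the answer stresses, the right precision has to be chosen per comparison rather than baked in once for the whole program.

public final class DoubleCompare {

    private DoubleCompare() {}

    // Combined absolute/relative tolerance test. The absolute test matters
    // near zero; the relative test scales with the magnitude of the inputs.
    public static boolean nearlyEqual(double a, double b,
                                      double absTolerance, double relTolerance) {
        if (a == b) { // handles exact matches and equal infinities
            return true;
        }
        double diff = Math.abs(a - b);
        if (diff <= absTolerance) {
            return true;
        }
        double largest = Math.max(Math.abs(a), Math.abs(b));
        return diff <= largest * relTolerance;
    }

    public static void main(String[] args) {
        double sum = 0.1 + 0.2;
        System.out.println(sum == 0.3);                         // false
        System.out.println(nearlyEqual(sum, 0.3, 1e-12, 1e-9)); // true
    }
}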
418,326 | Say that I have two functions that are essentially identical, where one validates its arguments while the other doesn't. The rationale: sometimes you want to be safe and sometimes you want to go fast. What's your preferred naming convention to distinguish the two? For example: list_err_t list_push_a(list_t *ref, list_t *item) {
if ((ref != NULL) && (item != NULL)) {
list_push_b(ref, item);
return LIST_ERR_NONE;
} else {
return LIST_ERR_BAD_ARG;
}
}
void list_push_b(list_t *ref, list_t *item) {
item->next = ref->next;
ref->next = item;
} What would you name list_push_a and list_push_b ? I think list_push_safe and list_push_fast is a bit wordy -- one of them should just be list_push . (And note that I'm not asking about CamelCase vs snake_case etc...) addenda... There have been some great answers already. I should have mentioned up front that the programming environment in question is low-level embedded devices, where speed is important and resources are scant. For example, raising exceptions is not an option... | Your typical caller will expect functions to be safe, i.e. to inform about failure in an orderly fashion instead of crashing or giving funny results. You chose to use the traditional C style of error return values for that purpose, and that's probably okay in your situation (generally, I'd prefer exceptions, but I don't know if that's usable for you). I'd go for calling the safe version list_push() , and the "fast" version list_push_unsafe() or list_push_no_checks() , as the most important fact about the "fast" one isn't its speed, but its unsafe behaviour. And of course clearly describe the difference in the function documentation. I'd explicitly recommend against using the wording "fast" for the second one, as that doesn't convey a hint about the additional risk. Then the typical caller will see that there are two implementations, a slow one and a fast one, and will of course choose the "better" one: the fast one. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/418326",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/126197/"
]
} |
418,446 | I've read a lot about the dependency inversion principle, but I still can't apply it to my case. I just don't know when I should apply it and when not. I'm writing a simple application in Java to generate invoices. For now, I have basic classes: Client, Product, InvoiceLine and Invoice. Should these classes communicate through interfaces? For instance, I have a method in Product for getting the name of the product: public String getName() {
return name;
} And I use this method in class Invoice public void addLineToInvoice(Product product, int quantity) {
rows.add(new InvoiceLine(rows.size(), product.getName(), quantity, product.getPrice()));
} Now, should I create an interface for Product ? Or is it unnecessary? | (Disclaimer: I understand this question as "applying the Dependency Inversion Principle by injecting objects through interfaces into other object's methods", a.k.a "Dependency Injection", in short, DI.) Programs were written in the past with no Dependency Injection or DIP at all, so the literal answer to your question is obviously "no, using DI or the DIP is not necessary" . So first you need to understand why you are going to use DI, what's your goal with it? A standard "use case" for applying DI is "simpler unit testing". Refering to your example, DI could make sense under the following conditions you want to unit test addLineToInvoice , and creating a valid Product object is a very complex process, which you do not want to become part of the unit test (imagine the only way to get a valid Product object is to pull it from a database, for example) In such a situation, making addLineToInvoice accept an object of type IProduct and providing a MockProduct implementation which can be instantiated simpler than a Product object could be a viable solution. But in case a Product can be easily created in-memory by some standard constructor, this would be heavily overdesigned. DI or the DIP are not an end in itself, they are a means to an end. Use them accordingly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/418446",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/378463/"
]
} |
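A sketch of the situation the answer describes in which the interface pays off, assuming a real Product can only be loaded from a database. Everything below is simplified for the example (rows are plain strings instead of InvoiceLine objects), and IProduct/MockProduct follow the names used in the answer.

import java.util.ArrayList;
import java.util.List;

// Introduced only because constructing a real Product is assumed to be expensive.
interface IProduct {
    String getName();
    double getPrice();
}

class Invoice {
    private final List<String> rows = new ArrayList<>();

    void addLineToInvoice(IProduct product, int quantity) {
        rows.add(rows.size() + ": " + product.getName()
                + " x" + quantity + " @ " + product.getPrice());
    }

    List<String> getRows() {
        return rows;
    }
}

// Hand-written test double: cheap to create, no database needed.
class MockProduct implements IProduct {
    public String getName() { return "test product"; }
    public double getPrice() { return 9.99; }
}

public class InvoiceTest {
    public static void main(String[] args) {
        Invoice invoice = new Invoice();
        invoice.addLineToInvoice(new MockProduct(), 3);
        if (invoice.getRows().size() != 1) {
            throw new AssertionError("expected exactly one invoice line");
        }
        System.out.println(invoice.getRows().get(0));
    }
}

If a valid Product can be created in memory with a plain constructor, the answer's advice applies: skip the interface and test against Product directly.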
418,451 | I am trying to create an object model for a user and a chatroom. I'm stuck on where to place certain functionality when the objects collaborate. At the moment all the functionality for the User is inside the User class, a snippet of its methods are: User.JoinChatRoom() User.WriteChatRoomMessage() User.Authenticate() User.JoinGroup() I recognize that is this is a "God Object"/"Blob"" and instead we could model this as seperate objects ChatRoom, User and Group with the methods: User.Authenticate() ChatRoom.AddPlayer(User u) ChatRoom.WriteMessage(String msg) Group.AddPlayer(User u) But I am confused about this refactor since, the way I understand object methods is that they perform an operation on the object. Therefore you can command the user to write to a chatroom, command the user to join a group, etc. But with the "cleaner" second model this doesn't seem to fit, there is no explicit JoinChatRoom() method. How do I design and think about what methods should be attached to an object? | A user is someone who is registered and able to use the system. A chat room is a place people can chat. What happens when a user joins a chat room? What is that thing that represents a user who has joined a chat room? That is the abstraction you are missing. Other answers are hinting at this. You could say a user participates in a chat. You need a class that represents a user participating in a chat room. We can call it ChatRoomParticipant . What does participating in a chat require? A User and a ChatRoom . var participant = new ChatRoomParticipant(chatRoom, user);
participant.SendMessage("Hey, everyone. I'm new here!");
participant.SendImage(File.Open(@"C:\Cat Pictures\Fluffy playing with catnip.jpg"));
participant.SendMessage("Oops. Gotta go. Someone's at my door.");
participant.LeaveChatRoom(); Now you are commanding an object to do something. Send a message. Upload a (cat) picture. Leave the chat. Sometimes you need objects to collaborate, and it is the collaboration of those objects that needs to be modeled in its own class. Don't get too hung up on "classes must be things." You'll miss opportunities like this where the best OO design is to model the collaboration of two or more objects. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/418451",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/362602/"
]
} |