source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
382,377 | We have a bug process that is currently being worked on. We have 3 levels of bug: P1 bug: Bugs that prevent users from working. They must be solved on the spot. P2 bug: Bugs that are impacting but users can work. P3 bug: Bugs that are not impacting and where users can work. P1 is mandatory and must be dealt with on the spot. But for P2 and P3, we judge on a case by case basis. With the 3 levels that we have, the team has the tendency to work on more pressing new developments asked for by the customers, instead of dealing with P2 and P3, which are treated as almost non-urgent. Questions are the following: Should I add another level of priority, like having a P4? Should I also assign them targets for dealing with non-urgent tickets, like: this week, when not assigned a coding task, you should treat at least 1 P2? Currently, we do not have objectives like I raised above, but my concern is that giving them such objectives can be brutal. The thing that is certain is that I need to talk to them about the objectives; the team likes to be involved in discussions, especially when we are setting objectives. Update: This question was proposed to me as a similar one. However it is not similar, at all. My question is how to have people deal with bugs, without imposing a strict agenda, and yet have them resolved. So no, the suggested question does not help me. Still, thank you. | Generally you have two axes for bugs: gravity and frequency. So obviously something grave and frequent is of the highest priority. However, something that's serious but happens rarely should be weighed roughly the same as something that's not serious but happens often. So supposing you rate gravity from 1 to 3 and frequency from 1 to 3, the types of bugs you should probably be fixing are those which cross the diagonal defined by gravity 1, frequency 3, and gravity 3, frequency 1. A blocking error or an error which could create potential damage to client information would always be a gravity 3. Similarly, an error with gravity 1 is not likely going to be noticed by the user or has low priority. If you aren't sure here, 2 is probably a safe number to assign. An error the user sees each and every time the program is launched is going to have a frequency of 3. An error with frequency 1 is going to be something which happens rarely if at all. Again, if you aren't sure, 2 is probably a safe number to assign. It's mostly subjective what constitutes a bug with gravity 3 or a bug with frequency 3, but use your common sense. A bug with gravity 1, frequency 2 is perhaps a misspelled label. A bug with gravity 2, frequency 1, might be proper error handling when the database connection is down. Again, this is just a rough idea, but the idea is to give emphasis to what should be the focus for bug fixing, as a sort of triage. Clearly it is not possible to eliminate all bugs, blocking or otherwise, but at least with this methodology you can safely say that the bugs which remain are not too pressing or too frequent. If you solely fixed bugs which are blocking errors, then the high-frequency errors would be ignored and users would notice that you didn't fix these bugs. Also, for convenience, you may find you prefer to provide letter grades for gravity or frequency, so you can say that a bug is a B3 error, and both the gravity and the frequency are clear. Good luck! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/382377",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/147213/"
]
} |
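A minimal C# sketch of the gravity/frequency triage described in the answer above. The enum names, the "sum of ratings >= 4" reading of the diagonal, and the letter-grade mapping (A for gravity 3) are assumptions added for illustration; only the 1-3 scales and the B3-style grade come from the answer.

```csharp
using System;

public enum Gravity { Low = 1, Medium = 2, High = 3 }
public enum Frequency { Rare = 1, Occasional = 2, Constant = 3 }

public static class BugTriage
{
    // "Crosses the diagonal": (gravity 1, frequency 3), (2, 2), (3, 1) and anything above,
    // i.e. the sum of the two ratings is at least 4.
    public static bool ShouldFix(Gravity gravity, Frequency frequency) =>
        (int)gravity + (int)frequency >= 4;

    // Letter grade for gravity (A = gravity 3) followed by the frequency digit, e.g. "B3".
    public static string Grade(Gravity gravity, Frequency frequency) =>
        $"{(char)('A' + 3 - (int)gravity)}{(int)frequency}";
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(BugTriage.ShouldFix(Gravity.Low, Frequency.Constant));   // True: not serious, but constant
        Console.WriteLine(BugTriage.ShouldFix(Gravity.High, Frequency.Rare));      // True: serious, even though rare
        Console.WriteLine(BugTriage.ShouldFix(Gravity.Low, Frequency.Occasional)); // False: below the diagonal
        Console.WriteLine(BugTriage.Grade(Gravity.Medium, Frequency.Constant));    // "B3"
    }
}
```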
382,389 | I'm developing a set of classes designed to communicate with external APIs, and I'm running into trouble with how to properly structure everything for proper loose coupling and unit testing. Currently, each API we need to talk to has a distinct class, which implements an interface a bit like this: public interface IApiIntegration
{
Task<string> SearchApi (List<string> searchValues);
Task<string> GetFromApi(string idToGet);
Task<bool> PostToApi(PostObject api);
} Each API class inherits from a base abstract class which implements this interface. That class also contains a number of helper functions which are only relevant to handling data coming to and from APIs. Beneath the public PostToApi method of each class there are also a bunch of helper functions to build the object to be posted. These are often quite complicated, and could really do with testing. However, they're specific to the class in question and are thus private. Inside every public function on IApiIntegration there is also, of course, a call to an external API. For example, it might look something like: public override async Task<string> GetFromApi(string id)
{
string result = "";
string path = $"{integration.RootUrl}items/{id}?username={integration.Username}&key={integration.Password}";
// client is a static instance of HttpClient
HttpResponseMessage response = await client.GetAsync(path);
if (response.IsSuccessStatusCode)
{
result = await response.Content.ReadAsStringAsync();
}
return result;
} This leaves me with two problems: 1) It feels right that the helper methods in the base class and the individual classes should be open to unit testing, but also that they should be protected/private. Something, therefore, is clearly wrong with the structure. 2) It's obviously wrong to be testing external APIs so I need somehow to bypass or mock out those dependencies. But that's not possible in this structure. How can I refactor and restructure this to ensure everything is open for unit tests? | Generally you have two axes for bugs: gravity and frequency. So obviously something grave and frequent is of the highest priority. However, something that's serious but happens rarely should be weighed roughly at the same as something that's not serious but happens often. So supposing you rate gravity from 1 to 3 and frequency from 1 to 3, the types of bugs you should probably be fixing are those which cross the diagonal defined by gravity 1, frequency 3, and gravity 3, frequency 1. A blocking error or an error which could create potential damage to client information would always be a gravity 3. Similarly, an error with gravity 1 is not likely going to noticed by the user or has low priority. If you aren't sure here, 2 is probably a safe number to assign. An error the user sees each and every time the program is launched is going to have a frequency of 3. An error with frequency 1 is going to be something which happens rarely if at all. Again, if you aren't sure, 2 is probably a safe number to assign. It's mostly subjective on what constitutes a bug with gravity 3 or a bug with frequency 3, but use your common sense. A bug with gravity 1, frequency 2 is perhaps a misspelled label. A bug with gravity 2, frequency 1, might be proper error handling when database connection is down. Again, this is just a rough idea, but the idea is to give emphasis on what should be the focus for bug fixing as a sort of triage. Clearly it is not possible to eliminate all bugs, blocking or otherwise, though at least with this methodology, you can safely say that bugs are not too pressing or too frequent. If you solely fixed bugs which are blocking errors, then the high frequency errors will be ignored and users will notice that you didn't fix these bugs. Also, for convenience, you may find you prefer to provide letter grades for gravity or frequency, so you can say that a bug is a B3 error, and it is clear both the gravity and the frequency. Good luck! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/382389",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22742/"
]
} |
382,409 | I have Customer , Order , and Product Aggregate Roots... When the order is created, it takes in a Customer and a List<Sales.Items> . It looks something like: public class Order
{
public static Order Create(Customer customer, List<Sales.Item> items)
{
// Creates the order
return newOrder;
}
} Then using CQRS I've created OrderCreateHandler which looks something like: public class OrderCreateCommandHandler : IRequestHandler<OrderCreateCommand>
{
public OrderCreateCommandHandler(ECommerceContext db)
{
_db = db;
}
public async Task<Unit> Handle(OrderCreateCommand request, CancellationToken cancellationToken)
{
var customerResult = // Q) Is it okay to execute a CustomerQuery here?
var customer = new Customer(customerResult.Id, customerResult.FirstName, customerResult.MiddleName,
customerResult.LastName, customerResult.StreetAddress, customerResult.City, customerResult.State, "United States",
customerResult.ZipCode);
// blah....
order = Order.Create(customer, products);
_db.Orders.Add(order);
}
} My question is in the command handler, is it okay to perform queries to get the data to build the aggregate roots I need to then pass? I don't ever store the aggregate root itself (just a reference if I need), but I don't want to pass in Ids and primitives everywhere, I want to follow SOLID OO and pass actual objects. Is this a violation of CQRS? | Let's start with a short review of the problem-space here. The fundamental benefit of adopting a CQRS pattern is to solve/simplify your problem domain by reducing the interleaving and leakage that begins to occur when utilizing the same model for your write-side as your read-side. Often, the tension that arises serves as a detriment to both. CQRS seeks to relieve this tension by separating (decoupling logically and possibly physically) the write-side and read-side of your system. With the above in mind it should be clear that neither your commands nor queries should be coupled to a logical entity from the other "side". Given the above, we can now formulate a direct answer to your question: Yes, you can query your data store within a command handler provided the query is issued against your command model. Because your OrderCreateCommandHandler is part of the command model of your application, we want to avoid coupling it to any part of your read model. It's unclear whether or not this is case given the example code (although the name CustomerQuery does raise some suspicions). More important than the answer above though is that... there is something else that feels fishy about the example you have provided. Can you feel that too? What I see here is quite a bit of coupling. Your handler is retrieving a CustomerResult (VO?), then breaking down all of it's data into another entity's constructor ( Customer ), then passing the Customer to a factory method of yet another entity. We have quite a bit of "asking" happening here. That is, we are passing around a lot of data in way that creates coupling. Furthermore, the command handler doesn't "read" in a very declarative fashion (which is what we want to strive for). What I mean is that it's kind of hard to "see" what's happening in your method because there is so much plumbing getting in the way. I think we can come up with a more cohesive/declarative solution. Given that the general "flow" of a command handler can be broken down into three simple steps: Retrieve all data (domain model) necessary to carryout the use-case Coordinate the data to fulfill the use-case Persist the data Let us see if we can come up with a simpler solution: buyer = buyers.Find( cmd.CustomerId );
buyer.PlaceOrder( cmd.Products );
buyers.Save( buyer ); Ah ha! Much cleaner (3 simple steps). More importantly though, not only does the code above achieve your same goal, it does so without creating many dependencies between disparate objects as wells as functioning in a more declarative and encapsulated manner (we aren't "newing" anything or calling any factory methods)! Let's break this down piece by piece so we can understand "why" the above may be a better solution. buyer = buyers.Find( cmd.CustomerId ); The first thing I've done is introduce a new concept: Buyer . In so doing this, I am partitioning your data vertically according to behavior. Let's let your Customer entity have responsibility for maintaining Customer information ( FirstName , LastName , Email , etc.), and allow a Buyer to be responsible for making purchases. Because some Customer information needs to be recorded when a purchase is made, we will hydrate a Buyer with a "snapshot" of that data (and possibly other data). buyer.PlaceOrder( cmd.Products ); Next we coordinate the purchase. The above method is where a new Order is created. An Order doesn't just appear out of nowhere right? Something must place it, so we model accordingly. What does this achieve? Well, the Buyer.PlaceOrder method provides a place in your domain to throw BuyerNotInGoodStanding , OrderExceedsBuyerSpendingLimit , or RepeatOrderDetected exceptions. By only creating an Order in the context of it's placement, we can enforce how an Order can come about. In your example, either your application-layer command handler or your Order factory method would have to be made responsible for enforcing each invariant. Neither is a good place for checking business rules. Additionally we now have a place to raise our OrderPlaced event (which will be necessary to keep your payment context decoupled), and also we can simplify your Order entity as it now only needs a scalar buyerId to keep reference to it's owner. buyers.Save( buyer ); Pretty self-explanatory. A Buyer now contains all of the information you need to persist both an Order and a "snapshot" of Customer data. How you organize that data internally and take it apart for persistence is up to you (hint: A Buyer needn't be persisted at all, for example. Just the Order it contains). EDIT The example solution (if we can call it that) that I posted is one meant to get the "gears turning", and doesn't necessarily represent the best-possible solution to the problem at hand. That is, your problem. It is totally possible (even likely) that introducing the concept of a Buyer aggregate is over-engineering given that there had been no mention of any sort of rules regarding how an Order can be placed. For example: customer = customers.Find( cmd.CustomerId );
order = customer.PlaceOrder( cmd.Products ); // raise OrderPlaced
orders.Save( order ); may be a totally valid approach! Just be sure to include all of the necessary information in the CustomerInformationSlip (your "snapshot") attached to the Order to allow it to enforce any invariant controlling how it can be modified. For example: order.ChangeShippingAddress( cmd.Address ); // raise ShippingAddressChanged The above may throw an OrderExceedsCustomerShippingDistance if each Customer has their own rules regarding how far you will ship to them given their account tier. Let the rules dictate the design! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/382409",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/274856/"
]
} |
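A compressed C# sketch of the Buyer idea from the answer above. Only the Find / PlaceOrder / Save flow and the exception name come from the answer; the spending-limit invariant, the record shapes, and the repository interface are invented so that the sketch is self-contained.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record Product(Guid Id, decimal Price);
public record Order(Guid Id, Guid BuyerId, IReadOnlyCollection<Product> Products);

public class OrderExceedsBuyerSpendingLimitException : Exception { }

// Aggregate responsible for making purchases, hydrated with a snapshot of customer data.
public class Buyer
{
    public Guid CustomerId { get; }
    public decimal SpendingLimit { get; }            // invented invariant, for illustration only
    public List<Order> PlacedOrders { get; } = new();

    public Buyer(Guid customerId, decimal spendingLimit)
    {
        CustomerId = customerId;
        SpendingLimit = spendingLimit;
    }

    public Order PlaceOrder(IReadOnlyCollection<Product> products)
    {
        if (products.Sum(p => p.Price) > SpendingLimit)
            throw new OrderExceedsBuyerSpendingLimitException();

        var order = new Order(Guid.NewGuid(), CustomerId, products);
        PlacedOrders.Add(order);
        // This is also where an OrderPlaced domain event would be raised.
        return order;
    }
}

public interface IBuyerRepository
{
    Buyer Find(Guid customerId);
    void Save(Buyer buyer);
}

// The command handler then reduces to the three declarative steps from the answer.
public class PlaceOrderHandler
{
    private readonly IBuyerRepository _buyers;
    public PlaceOrderHandler(IBuyerRepository buyers) => _buyers = buyers;

    public void Handle(Guid customerId, IReadOnlyCollection<Product> products)
    {
        var buyer = _buyers.Find(customerId);  // 1. retrieve the model
        buyer.PlaceOrder(products);            // 2. coordinate the use-case
        _buyers.Save(buyer);                   // 3. persist
    }
}
```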
382,486 | This question concerns the C# language, but I expect it to cover other languages such as Java or TypeScript. Microsoft recommends best practices on using asynchronous calls in .NET. Among these recommendations, let's pick two: change the signature of the async methods so that they return Task or Task<> (in TypeScript, that'd be a Promise<>) change the names of the async methods to end with xxxAsync() Now, when replacing a low-level, synchronous component by an async one, this impacts the full stack of the application. Since async/await has a positive impact only if used "all the way up", it means the signature and method names of every layer in the application must be changed. A good architecture often involves placing abstractions between each layers, such that replacing low-level components by others is unseen by the upper-level components. In C#, abstractions take the form of interfaces. If we introduce a new, low-level, async component, each interface in the call stack needs to be either modified or replaced by a new interface. The way a problem is solved (async or sync) in an implementing class is not hidden (abstracted) to the callers anymore. The callers have to know if it's sync or async. Aren't async/await best practices contradicting with "good architecture" principles? Does it mean that each interface (say IEnumerable, IDataAccessLayer) needs their async counterpart (IAsyncEnumerable, IAsyncDataAccessLayer) such that they can be replaced in the stack when switching to async dependencies? If we push the problem a little further, wouldn't it be simpler to assume every method to be async (to return a Task<> or Promise<>), and for the methods to synchronize the async calls when they're not actually async? Is this something to be expected from the future programming languages? | What Color Is Your Function? You may be interested in Bob Nystrom's What Color Is Your Function 1 . In this article, he describes a fictional language where: Each function has a color: blue or red. A red function may call either blue or red functions, no issue. A blue function may only call blue functions. While fictitious, this happens quite regularly in programming languages: In C++, a "const" method may only call other "const" methods on this . In Haskell, a non-IO function may only call non-IO functions. In C#, a sync function may only call sync functions 2 . As you have realized, because of these rules, red functions tend to spread around the code base. You insert one, and little by little it colonizes the whole code base. 1 Bob Nystrom, apart from blogging, is also part of the Dart team and has written this little Crafting Interpreters serie; highly recommended for any programming language/compiler afficionado. 2 Not quite true, as you may call an async function and block until it returns, but... Language Limitation This is, essentially, a language/run-time limitation. Language with M:N threading, for example, such as Erlang and Go, do not have async functions: each function is potentially async and its "fiber" will simply be suspended, swapped out, and swapped back in when it's ready again. C# went with a 1:1 threading model, and therefore decided to surface synchronicity in the language to avoid accidentally blocking threads. In the presence of language limitations, coding guidelines have to adapt. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/382486",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/321810/"
]
} |
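A short C# illustration of the "red function" spreading that the answer describes; the repository and service names are made up.

```csharp
using System.Threading.Tasks;

public interface ICustomerRepository
{
    // Once the lowest layer becomes async...
    Task<string> GetNameAsync(int id);
}

public class CustomerService
{
    private readonly ICustomerRepository _repository;
    public CustomerService(ICustomerRepository repository) => _repository = repository;

    // ...every caller that awaits it must itself return Task ("turn red"),
    public async Task<string> GetGreetingAsync(int id)
    {
        var name = await _repository.GetNameAsync(id);
        return $"Hello, {name}";
    }

    // ...or block on it - the footnote's "not quite true" escape hatch - which
    // ties up a thread and can deadlock under some synchronization contexts.
    public string GetGreeting(int id) =>
        _repository.GetNameAsync(id).GetAwaiter().GetResult();
}
```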
382,722 | Here's what I mean: class MyClass {
public:
int arr1[100];
int arr2[100];
int len = 100;
void add(int* x1, int* x2, int size) {
for (int i = 0; i < size; i++) {
x1[i] += x2[i];
}
}
};
int main() {
MyClass myInstance;
// Fill the arrays...
myInstance.add(myInstance.arr1, myInstance.arr2, myInstance.len);
} add can already access all of the variables that it needs, since it's a class method, so is this a bad idea? Are there reasons why I should or should not do this? | There are many things with the class that I would do differently, but to answer the direct question, my answer would be yes, it is a bad idea. My main reason for this is that you have no control over what is passed to the add function. Sure, you hope it is one of the member arrays, but what happens if someone passes in a different array that has a smaller size than 100, or you pass in a length greater than 100? What happens is that you have created the possibility of a buffer overrun. And that is a bad thing all around. To answer some more (to me) obvious questions: You are mixing C style arrays with C++. I am no C++ guru, but I do know that C++ has better (safer) ways of handling arrays. If the class already has the member variables, why do you need to pass them in? This is more of an architectural question. Other people with more C++ experience (I stopped using it 10 or 15 years ago) may have more eloquent ways of explaining the issues, and will probably come up with more issues as well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/382722",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/322138/"
]
} |
382,882 | I found an inheritance tree in our (rather large) code base that goes something like this: public class NamedEntity
{
public int Id { get; set; }
public string Name { get; set; }
}
public class OrderDateInfo : NamedEntity { } From what I could gather, this is primarily used to bind stuff on front-end. For me, this makes sense as it gives a concrete name to the class, instead of relying on the generic NamedEntity . On the other hand, there is a number of such classes that simply have no additional properties. Are there any downsides to this approach? | This is something that I use to prevent polymorphism from being used. Say you have 15 different classes that have NamedEntity as a base class somewhere in their inheritance chain and you are writing a new method that is only applicable to OrderDateInfo . You "could" just write the signature as void MyMethodThatShouldOnlyTakeOrderDateInfos(NamedEntity foo) And hope and pray no one abuses the type system to shove a FooBazNamedEntity in there. Or you "could" just write void MyMethod(OrderDateInfo foo) . Now that is enforced by the compiler. Simple, elegant and doesn't rely on people not making mistakes. Also, as @candied_orange pointed out , exceptions are a great case of this. Very rarely (and I mean very, very, very rarely) do you ever want to catch everything with catch (Exception e) . More likely you want to catch a SqlException or a FileNotFoundException or a custom exception for your application. Those classes often times don't provide any more data or functionality than the base Exception class, but they allow you to differentiate what they represent without having to inspect them and check a type field or search for specific text. Overall, it's a trick to get the type system to allow you to use a narrower set of types than you could if you used a base class. I mean, you could define all your variables and arguments as having the type Object , but that would just make your job harder, wouldn't it? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/382882",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/69757/"
]
} |
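A compact C# rendering of the answer's two points: a marker subclass that narrows a parameter type, and a data-free custom exception. The method and exception names are illustrative.

```csharp
using System;

public class NamedEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// No extra members, but a distinct type the compiler can enforce.
public class OrderDateInfo : NamedEntity { }
public class FooBazNamedEntity : NamedEntity { }

public static class Formatter
{
    // Only OrderDateInfo compiles here; a FooBazNamedEntity is rejected at
    // compile time instead of sneaking in through a NamedEntity parameter.
    public static string Describe(OrderDateInfo info) => $"{info.Id}: {info.Name}";
}

// Same trick with exceptions: no new data, but catch blocks can stay selective.
public class OrderImportException : Exception
{
    public OrderImportException(string message) : base(message) { }
}
```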
382,892 | I'm confused as to how to properly interact with my Postgres database throughout the typical user experience on my web app. I'm seeking clarification on the most efficient method of communicating with the database, without over-communicating (spamming). Currently I use org.apache.commons.dbcp.BasicDataSource to connect to my database whenever needed, and I always close the connection when finished. It makes total sense to me when I'm doing a one-time query. But what if the query can be performed multiple times at the discretion of the user? Here are 2 specific occurrences in my app: If a user types in their credentials and then clicks "Login" on the login page, the database is queried to validate the credentials. What if the user clicks the button 100 times? This means they could potentially spam the button, causing multiple queries to be sent to my database. We could impose a manual limit (like 5 clicks per minute) but where do we draw the line? Once logged in, their user profile must be filled and they can 'Save' the changes after any field changes are triggered. So it would be very easy to maliciously spam the save button after a new character is typed. I understand a new connection is not created each time getConnection() is called but I don't understand if the inner mechanics of BasicDataSource handle this potential spamming. Once a user clicks 'Save' it's important that the changes are accessible to all other users. For example, User A could click 'Make Visible to Other Users' and then click 'Save'. User B should now be able to find User A in our app. Do you recommend I use Connection Pool, Hibernate, Redis, cache2K or some other tool/framework? Or is it sufficient to query the database each and every time since pool will optimize it on the back-end? Thanks so much. I'm using Java 8 + Vaadin 8. | This is something that I use to prevent polymorphism from being used. Say you have 15 different classes that have NamedEntity as a base class somewhere in their inheritance chain and you are writing a new method that is only applicable to OrderDateInfo . You "could" just write the signature as void MyMethodThatShouldOnlyTakeOrderDateInfos(NamedEntity foo) And hope and pray no one abuses the type system to shove a FooBazNamedEntity in there. Or you "could" just write void MyMethod(OrderDateInfo foo) . Now that is enforced by the compiler. Simple, elegant and doesn't rely on people not making mistakes. Also, as @candied_orange pointed out , exceptions are a great case of this. Very rarely (and I mean very, very, very rarely) do you ever want to catch everything with catch (Exception e) . More likely you want to catch a SqlException or a FileNotFoundException or a custom exception for your application. Those classes often times don't provide any more data or functionality than the base Exception class, but they allow you to differentiate what they represent without having to inspect them and check a type field or search for specific text. Overall, it's a trick to get the type system to allow you to use a narrower set of types than you could if you used a base class. I mean, you could define all your variables and arguments as having the type Object , but that would just make your job harder, wouldn't it? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/382892",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/245993/"
]
} |
383,002 | For example, in my project, I often found some head of for-loop appears many times, eg: for(let i=0;i<SharedData.students.length;i++){
SharedData.students[i].something=.....
}
if(isReset){
for(let i=0;i<SharedData.students.length;i++){
SharedData.students[i].reset();
}
}
.
.
. which the task that inside and outside for-loop are totally different, but it commonly needs for(let i=0;i<SharedData.students.length;i++) . So my question is, is copying and pasting for(let i=0;i<SharedData.students.length;i++) violating DRY principle? | The Don't Repeat Yourself (DRY) principle is easy to mindlessly over apply. Keep in mind that the real sin isn't using copy and paste. It's spreading a design decision around in a way that makes it difficult to change that decision. If what you really have is two decisions that just happen to look the same at the moment then everything is fine. You'd be doing damage if you forced the two decisions to be expressed in the same place. By leaving them as separate, as you have now, you're allowing the two loops to vary independently. If you rewrote them as Robert Harvey suggests: for(let i=0;i<SharedData.students.length;i++){
SharedData.students[i].something=.....
if(isReset){
SharedData.students[i].reset();
}
} then you'd lose the ability to easily make them vary independently (say by having one skip the first element, for whatever reason). This idea can be hard to grasp so let me say it another way: int x = 100;
int y = 100; Here is a "violation" of DRY that most people wouldn't think twice about. Why? Because we know that even though y is a redundant copy of x it might not always be. It has it's own meaning. We don't want to lose that meaning just because it happens to have the same value as x right now. So please when you think about DRY think less about copy and paste and more about what you're making easy to change. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/383002",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/248528/"
]
} |
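A C# sketch of the answer's point about decisions that merely look alike today (the question's code is JavaScript, but the shape is the same; the Student members here are invented).

```csharp
using System.Collections.Generic;

public class Student
{
    public bool Flagged { get; set; }
    public void Reset() { Flagged = false; }
}

public static class SharedDataExample
{
    // Two loops that happen to share a header today...
    public static void MarkAll(List<Student> students)
    {
        for (int i = 0; i < students.Count; i++)
            students[i].Flagged = true;
    }

    // ...stay free to diverge tomorrow, e.g. skipping the first element,
    // precisely because they were never merged into one loop.
    public static void ResetAllButFirst(List<Student> students)
    {
        for (int i = 1; i < students.Count; i++)
            students[i].Reset();
    }
}
```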
384,100 | When looking at a new codebase I like to start from a bottom-up approach. Where I comprehend one file and then move up to the next abstraction. But often times I find myself forgetting what the lower-level abstraction is doing. So I'll be at this point where I find myself in an almost endless loop of going back to the files that I've previously fully comprehended, and then trying to relearn them; whilst trying to juggle numerous other abstractions that connect to each other in my head. Is there a better strategy for dealing with this situation? Should I just forget about the lower-level details and take them as a given? But even then, many times a previous understanding of the lower-level abstraction is needed to understand what the current abstraction is doing. | Programming concretely is the impulse to pull details towards you so you can nail them all down in one place. We all start this way and it's hard to let go. Programming abstractly is most definitely "forgetting about the lower-level details". Sometimes even high level details. You push details away and let something else deal with them. The sneaky thing is you've been doing this all along. Do you really understand what all happens between print "Hello world" and it showing up on your screen? The number one thing to demand as you struggle to let go of these details is good names. A good name ensures you will not be surprised when you look inside. This is why you weren't surprised that print put something on your screen and didn't really care how. foo "Hello world" would have been a different story. Also, levels of abstraction should be consistent. If you're at a level that is about calculating pi you shouldn't also be worried about how to display pi. That detail has leaked into an abstraction where it doesn't belong. Lower, higher, or sideways, details, that aren't about the one thing I'm thinking about in this one place, can either go away altogether or at least hide behind a good name. So if you're really struggling bouncing from file to file I'll lay odds someone has stuck you with bad names or leaky abstractions. I fix this by reading with my fingers. Once I have decent tests around this mess I tease responsibilities apart, give them clear names that avoid surprises, and show it to someone else to make sure I'm not living in a fantasy world. Apparently I'm not alone when it comes to working this way: Whenever I work on unfamiliar code I start extracting methods. When I do this, I look for chunks of code that I can name - then I extract. Even if I end up inlining the methods I’ve extracted later, at least I have a way of temporarily hiding detail so that I can see the overall structure. Michael Feathers - Orange Code | {
"source": [
"https://softwareengineering.stackexchange.com/questions/384100",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/323657/"
]
} |
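A small C# illustration of the method extraction Feathers describes above: the top-level method reads as named steps, so the lower-level detail can be ignored until it matters. The invoice domain and the flat tax rate are invented.

```csharp
using System.Collections.Generic;
using System.Linq;

public record LineItem(decimal Price, int Quantity);

public class InvoicePrinter
{
    // After extraction, the overall structure is visible at a glance.
    public string Print(IReadOnlyList<LineItem> items)
    {
        var subtotal = Subtotal(items);
        var tax = Tax(subtotal);
        return FormatTotal(subtotal + tax);
    }

    private static decimal Subtotal(IEnumerable<LineItem> items) =>
        items.Sum(i => i.Price * i.Quantity);

    private static decimal Tax(decimal subtotal) => subtotal * 0.2m; // assumed flat rate

    private static string FormatTotal(decimal total) => $"Total: {total:0.00}";
}
```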
384,363 | My team has developed a new service layer in our application. They created a bunch of services that implement their interfaces (E.g., ICustomerService , IUserService , etc). That's pretty good so far. Here is where things get a bit strange: We have a class called "CoreService", which looks like this: // ICoreService interface implements the interfaces of
// all services in the system 100+
public class CoreService : ICoreService
{
// I don't like these lazy instance variables. I think they are pointless
private readonly Lazy<CustomerService> _customerService;
private readonly Lazy<UserService> _userService;
public CoreService()
{
// These violate the Dependency inversion principle.
// It also news up its dependencies, which is bad.
_customerService = new Lazy<CustomerService>(() => new CustomerService());
_userService = new Lazy<UserService>(() => new UserService());
// And so forth
}
#region ICustomerService
public long? GetCustomerCount()
{
return _customerService.Value.GetCustomerCount();
}
#endregion
#region IUserService
public User GetUser(int userId, int customerId)
{
return _userService.Value.GetUser(userId, customerId);
}
#endregion
// ...
// And 100 other regions for all services
} The team's reasoning is that controllers in the consuming application can easily instantiate CoreService and use its services, and it wouldn't cause any performance problems since everything is "Lazy". I tried to explain that this is a bad design because: We are violating the Dependency Inversion Principle by lazy instantiating every single dependency and their dependencies. As a result of #1, We are eliminating the testability of our services. We can no longer the mock dependency of our services and inject them for unit testing. CoreService just seems like a "God Object" anti-pattern to me. We shouldn't even instantiate anything in our controllers. We should just inject the required dependencies of the controllers into it. (E.g., if the CustomerController requires five different services, just inject them via the constructor!) Is my argument valid? Are there any other violations of best practices that I am missing here? Any input would be highly appreciated here. Edit: I updated the title since this question is getting marked as a duplicate. This service is not necessarily a God object, it's actually a "Passthrough" or a Facade service. My apologies for the mistake. | This is not a God object . It seems like it is because there is so much here, but, in a way, it's doing nothing at all. There is no behavior code here. This isn't an omnipotent God that does everything. It just finds everything. It's less a true object at all and more of a data structure. This pattern has a more proper name: Service Locator . It strongly contrasts with the Dependency Injection fowler pattern. Your testablility complaint is valid but the problem is bigger than that. The main principle being violated here is the Interface Segregation principle. Here's why: If I come at this thing with an intent to refactor away the Dependency Inversion problem I'd just stop hard coding CoreService . I'd replace it with a parameter that must be passed in. Can you see why that doesn't really help? On top of it being annoying to see a CoreService parameter passed into everything, this doesn't actually fix the dependency problem because seeing that an object needs CoreService tells me zero about it's real needs because CoreService provides access to anything and everything. We externalize dependencies to make needs clear. If we follow the Interface Segregation principle we see that an object that, right now, needs CoreService actually needs maybe 5 out of the 100 things it provides. What those 5 things need is a good descriptive name. Now when I see that name I know what this thing needs. I can go find ways to provide those 5 things. There might be 2 or 42 ways to provide those 5 things. One of those ways might be through a test, but this idea is about a lot more than testing. Testing just makes this problem painfully obvious. To provide that name, and avoid a constructor with 5 parameters, you can introduce a parameter object refactoring.com , c2.com provided those 5 things represent one coherent idea with, please, a good name. (It may be 2 ideas that need 2 names are hiding in those 5 things. If so fine, break 'em up and name 'em). You might be tempted to reuse this parameter object if you find another object that needs it. But let's say that object needs something else as well. So you add that something else to the parameter object. Even though the first object doesn't need this something else. Well we're now on our way to making a Service Locator again. You stop this by not accepting things you don't need. 
That's what the Interface Segregation principle is all about. Another pattern hiding in here seems to be Singleton c2.com . Dependency Injection has a wonderful alternative to that. Just build what you need in main once and pass references to what you built to everything that needs it. Once you have built your graph of long lived objects call one method on one of them to start the whole thing ticking. Just try not to go nuts . It's OK to break this up with factories and other creational patterns so long as you keep creation and behavior code separate . To convince others you're going to have to start showing how this problem can be fixed. Make some things that express their needs and write code that satisfies those needs. A simple factory with a good name and it's own file to live in often gets this done. If you're stuck with the need to find the rest of their stuff let the factory deal with that problem. Now your object, including its class file, is blissfully unaware of their nonsense and won't care when you get around to fixing the rest of it. It also news up its dependencies, which is bad. That's a little simplistic. It's bad to "new up dependencies" inside the client object when you don't provide a way to override them. You seem to be using C#. C# has named arguments ! Which means you can satisfy a dependency with an optional argument. Just be sure you have a good default value for it. Good default values should be chosen wisely. Don't use ones that surprise people. Also, as @JimmyJames points out, brainlessly lazy loading everything is far from ideal. It doesn't actually speed things up. It only changes when you pay for it. It can make when you pay for it unpredictable and it can make configuration problems difficult to diagnose. It's essentially the caching problem. Widely recognized as one of the hardest problems in computer science (after giving things good names). So please don't use it thoughtlessly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/384363",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/234480/"
]
} |
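A C# sketch of the direction the answer points to: depend on a small, named interface instead of CoreService, and compose the graph once in a composition root. The interface, controller, and in-memory implementation are illustrative, not the poster's real types.

```csharp
using System.Collections.Generic;

public class User { public int Id { get; set; } }

// A narrow, named contract: exactly what this controller needs, nothing more.
public interface ICustomerDirectory
{
    long? GetCustomerCount();
    User GetUser(int userId, int customerId);
}

public class CustomerDashboardController
{
    private readonly ICustomerDirectory _customers;

    // The constructor now states the real dependency instead of "everything".
    public CustomerDashboardController(ICustomerDirectory customers) => _customers = customers;

    public string Summary(int userId, int customerId)
    {
        var user = _customers.GetUser(userId, customerId);
        return $"User {user.Id} of {_customers.GetCustomerCount()} customers";
    }
}

public class InMemoryCustomerDirectory : ICustomerDirectory
{
    private readonly Dictionary<int, User> _users = new() { [1] = new User { Id = 1 } };
    public long? GetCustomerCount() => _users.Count;
    public User GetUser(int userId, int customerId) => _users[userId];
}

// Composition root: build the graph once and hand references to whatever needs them.
public static class CompositionRoot
{
    public static CustomerDashboardController Build()
    {
        ICustomerDirectory directory = new InMemoryCustomerDirectory(); // or the real services
        return new CustomerDashboardController(directory);
    }
}
```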
384,509 | Please see the code below; it tests to see if a person with Gender of female is eligible for offer1: [Fact]
public void ReturnsFalseWhenGivenAPersonWithAGenderOfFemale()
{
var personId = Guid.NewGuid();
var gender = "F";
var person = new Person(personId, gender);
var id = Guid.NewGuid();
var offer1 = new Offer1(id,"Offer1");
Assert.False(offer1.IsEligible(person));
} This unit test succeeds. However, it will fail if 'Offer1' is offered to females in future. Is it acceptable to say - if the business logic surrounding offer 1 changes then the unit test must change. Please note that in some cases (for some offers) the business logic is changed in the database like this: update Offers set Gender='M' where offer=1; and in some cases in the domain model like this: if (Gender=Gender.Male)
{
//do something
} Please also note that in some cases the domain logic behind offers changes regularly and in some cases it does not. | This is not brittle in the usual sense. A unit test is considered brittle if it breaks due to implementation changes which does not affect the behavior under test. But if the business logic itself changes, then a test of this logic is supposed to break. That said, if the business logic indeed changes often, perhaps it is not appropriate to hardcode the expectations into the unit tests. Instead you could test if the configurations in the database affects the offers as expected. The name of the test Returns False When Given A Person With A Gender Of Female does not describe a business rule. A business rule would be something like Offers Applicable to M should not be applied to persons of gender F . So you could write a test that confirms that if an offer is defined as only applicable to type M persons, then a type F person will not be indicated as eligible for it. This test will ensure the logic works even if the configuration of the specific offers change. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/384509",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/65549/"
]
} |
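An xUnit-style sketch of the reframed test the answer suggests: the rule under test is "an offer restricted to gender M is not applied to a gender F person", with the offer's applicable gender treated as configuration rather than hardcoded knowledge about Offer1. The Offer constructor taking an applicable-gender argument is an assumption.

```csharp
using System;
using Xunit;

public class Person
{
    public Guid Id { get; }
    public string Gender { get; }
    public Person(Guid id, string gender) { Id = id; Gender = gender; }
}

public class Offer
{
    public Guid Id { get; }
    public string Name { get; }
    public string ApplicableGender { get; }   // assumed to come from the Offers configuration

    public Offer(Guid id, string name, string applicableGender)
    {
        Id = id; Name = name; ApplicableGender = applicableGender;
    }

    public bool IsEligible(Person person) => person.Gender == ApplicableGender;
}

public class OfferEligibilityTests
{
    [Fact]
    public void OffersRestrictedToMalesAreNotAppliedToFemales()
    {
        var offer = new Offer(Guid.NewGuid(), "AnyOffer", applicableGender: "M");
        var person = new Person(Guid.NewGuid(), gender: "F");

        Assert.False(offer.IsEligible(person));
    }
}
```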
384,861 | I have recently graduated from university and started work as a programmer. I don't find it that hard to solve "technical" issues or do debugging with things that I would say have 1 solution. But there seems to be a class of problems that don't have one obvious solution -- things like software architecture. These things befuddle me and cause me great distress. I spend hours and hours trying to decide how to "architect" my programs and systems. For example - do I split this logic up into 1 or 2 classes, how do I name the classes, should I make this private or public, etc. These kinds of questions take up so much of my time, and it greatly frustrates me. I just want to create the program - architecture be damned. How can I get through the architecture phase more quickly and onto the coding and debugging phase which I enjoy? | Perfect is the enemy of good. That said, you should not cut corners. Software design will have longer lasting impact, and save you (and your peers) tons of time and effort in the future. It will take longer to get right. Most of the time spent programming isn't hammering on a keyboard, but by a whiteboard figuring out how to solve a problem. But you also shouldn't worry about perfection. If two designs fight to a stalemate, it means they're likely about the same goodness. Just go with one. It's not as though you can't change things once you figure out the flaws in that design. (And hopefully it will also help out once you find out that there's not just one way to debug/solve technical issues.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/384861",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/278692/"
]
} |
384,980 | I currently have two derived classes, A and B , that both have a field in common and I'm trying to determine if it should go up into the base class. It is never referenced from the base class, and say if at some point down the road another class is derived, C , that doesn't have a _field1 , then wouldn't the principle of "least privilege" (or something) be violated if it was? public abstract class Base
{
// Should _field1 be brought up to Base?
//protected int Field1 { get; set; }
}
public class A : Base
{
private int _field1;
}
public class B : Base
{
private int _field1;
}
public class C : Base
{
// Doesn't have/reference _field1
} | It all depends upon the exact problem you're trying to solve. Consider a concrete example: your abstract base class is Vehicle and you currently have the concrete implementations Bicycle and Car . You're considering moving numberOfWheels from Bicycle and Car to vehicle. Should you do this? No! Because not all vehicles have wheels. You can already tell that if you try to add a Boat class then it's going to break. Now, if your abstract base class was WheeledVehicle then it's logical to have the numberOfWheels member variable in there. You need to apply the same logic to your problem, because as you can see, it's not a simple yes or no answer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/384980",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/100503/"
]
} |
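The Vehicle example from the answer spelled out as C#; the specific members (TopSpeedKph, the wheel counts) are assumptions.

```csharp
public abstract class Vehicle
{
    public abstract decimal TopSpeedKph { get; }   // something every vehicle really has
}

// The wheel count lives only where it is guaranteed to make sense.
public abstract class WheeledVehicle : Vehicle
{
    public abstract int NumberOfWheels { get; }
}

public class Bicycle : WheeledVehicle
{
    public override int NumberOfWheels => 2;
    public override decimal TopSpeedKph => 40m;
}

public class Car : WheeledVehicle
{
    public override int NumberOfWheels => 4;
    public override decimal TopSpeedKph => 180m;
}

// Adding Boat later breaks nothing: it never had wheels to begin with.
public class Boat : Vehicle
{
    public override decimal TopSpeedKph => 60m;
}
```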
385,149 | I'm a junior developer that is given the ability to help shape my team's processes if I can justify the change, and if it helps the team get work done. This is new for me as my past companies more or less had rigidly defined processes that came from management. My team is fairly small and somewhat new (<3 years old). They lack: a well defined software development/work management framework (like
scrum) strong product ownership well defined roles ( e.g. business staff will do manual testing) regular standup meetings a consolidated issue tracking process (we have a tool, the process is still being developed) a unit, system, regression, or manual testing suite or list documentation on business logic and processes a knowledge base to document internal and customer facing tips And the list goes on. Management is open to the implementation of improvements so long as the value is justified and it helps the most important work (namely the development) get done. The underlying assumption however is that you have to take ownership in the implementation, as no one is going to do it for you. And it goes without saying some of the above projects are non-trivial, without a doubt time consuming, and are clearly not development work. Is it worth a (junior) developer's effort to try and push for the above as time goes on? Or is it best to "stay in your lane" and focus on the development, and leave the bulk of the process definition, and optimization to management? | Good answers so far, but they don't cover all the bases. In my experience, many people fresh out of college have fantastic theoretical knowledge - far better than me or many other seniors with decades building software for a living. BUT, and that's a big BUT, that knowledge isn't grounded in any practical scenario. In the real world, a lot of that theory falls flat, or at the very least has to be taken with a massive grain of salt as it's found in practice to not work that well in a real world scenario. Case in point: An application I worked on a long time ago was designed by a brilliant OO theoretician, architected to follow OO principles and theory to the T, with lots of patterns applied everywhere. It was a fantastic piece of software design. Sadly, this resulted in production and maintenance nightmare. The code base was so large and complex that places were impossible to change; Not because it was especially brittle but because it was so complex, nobody dared touch it in fear of what would happen (the original architect/designer had been a contractor who'd long since left). It also performed very poorly, precisely because of the multi-layered structure of patterns, and class libraries that design required. For example, clicking a button on a screen to make a single call to the database would result in several hundred object instantiations and method calls - all in the name of ensuring loose coupling and things like that. This architect had been a university professor with several books about the topic to his name. He'd never worked a day as a programmer on a commercial project. People with practical experience building software would have realised what a monstrosity that design would inevitably lead to and taken a more pragmatic approach, leading to a system that's easier to maintain and performed better as well. The same thing can apply to many other things you encounter as a fresh graduate, or indeed a new employee in any company. Don't assume that because your theoretical base tells you something is wrong or sub-optimal that there's not a very good reason for it to be done that way. Even now, with over 20 years experience in the field, I'm wary of criticising the way things are done in companies I go to work with. I'll mention in passing that I noticed things are different than in my experience being the most optimal, but not in a belligerent way. 
This often leads to interesting conversations as to why those things are as they are. Maybe changes will happen and maybe not, depending on whether the value of changing things is smaller than the cost. Don't be afraid to suggest things may be done better, but always make sure that you don't come across as the know-it-all snot-nosed kid but rather as a coworker who's trying and willing to not just learn but also help improve processes for the betterment of the company, not just theoretical correctness. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/385149",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
385,497 | Lets say you are coding a function that takes input from an external API MyAPI . That external API MyAPI has a contract that states it will return a string or a number . Is it recommended to guard against things like null , undefined , boolean , etc. even though it's not part of the API of MyAPI ? In particular, since you have no control over that API you cannot make the guarantee through something like static type analysis so it's better to be safe than sorry? I'm thinking in relation to the Robustness Principle . | You should never trust the inputs to your software, regardless of source. Not only validating the types is important, but also ranges of input and the business logic as well. Per a comment, this is well described by OWASP Failing to do so will at best leave you with garbage data that you have to later clean up, but at worst you'll leave an opportunity for malicious exploits if that upstream service gets compromised in some fashion (q.v. the Target hack). The range of problems in between includes getting your application in an unrecoverable state. From the comments I can see that perhaps my answer could use a bit of expansion. By "never trust the inputs", I simply mean that you can't assume that you'll always receive valid and trustworthy information from upstream or downstream systems, and therefore you should always sanitize that input to the best of your ability, or reject it. One argument surfaced in the comments I'll address by way of example. While yes, you have to trust your OS to some degree, it's not unreasonable to, for example, reject the results of a random number generator if you ask it for a number between 1 and 10 and it responds with "bob". Similarly, in the case of the OP, you should definitely ensure your application is only accepting valid input from the upstream service. What you do when it's not OK is up to you, and depends a great deal on the actual business function that you're trying to accomplish, but minimally you'd log it for later debugging and otherwise ensure that your application doesn't go into an unrecoverable or insecure state. While you can never know every possible input someone/something might give you, you certainly can limit what's allowable based on the business requirements and do some form of input whitelisting based on that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/385497",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/285284/"
]
} |
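A hedged C# sketch of validating the upstream value from the question (contractually "a string or a number"): reject null, unexpected types, and out-of-range values before using them. The length and range limits are placeholders, and what you do on failure (log, reject, return an error result) is a business decision.

```csharp
using System;

public static class UpstreamInputValidator
{
    // Returns a cleaned value or throws; callers decide how to handle the failure.
    public static string Sanitize(object value)
    {
        switch (value)
        {
            case string s when s.Length > 0 && s.Length <= 256:   // placeholder limit
                return s;
            case int n when n >= 0 && n <= 1_000_000:             // placeholder range
                return n.ToString();
            case null:
                throw new ArgumentNullException(nameof(value), "Upstream sent null despite its contract.");
            default:
                throw new ArgumentException($"Unexpected upstream value: {value}", nameof(value));
        }
    }
}
```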
385,511 | I have an object that acts as nothing more complicated than a data store for a collection of items. I do this because it lets me bind the data to a single object, which I can store in the Unity (game engine) editor and assign to a bunch of other objects so they all operate on the same list of data. I'm not really sure what to call the class though: class NameNeeded<T> : // Unity stuff
{
public List<T> items { get; }
} I can't inherit this object from anything but a special Unity object, so I can't mask it and pretend that it's a collection itself. There's some other bookkeeping methods, but it's basically a collection container. If I treat it like a regular collection, I end up with this... class Lobby
{
NameNeeded<User> users;
void DoSomething()
{
users.items.Whatever();
}
} ... which I find unattractive from the double plural implying users is a collection itself. | You should never trust the inputs to your software, regardless of source. Not only validating the types is important, but also ranges of input and the business logic as well. Per a comment, this is well described by OWASP Failing to do so will at best leave you with garbage data that you have to later clean up, but at worst you'll leave an opportunity for malicious exploits if that upstream service gets compromised in some fashion (q.v. the Target hack). The range of problems in between includes getting your application in an unrecoverable state. From the comments I can see that perhaps my answer could use a bit of expansion. By "never trust the inputs", I simply mean that you can't assume that you'll always receive valid and trustworthy information from upstream or downstream systems, and therefore you should always sanitize that input to the best of your ability, or reject it. One argument surfaced in the comments I'll address by way of example. While yes, you have to trust your OS to some degree, it's not unreasonable to, for example, reject the results of a random number generator if you ask it for a number between 1 and 10 and it responds with "bob". Similarly, in the case of the OP, you should definitely ensure your application is only accepting valid input from the upstream service. What you do when it's not OK is up to you, and depends a great deal on the actual business function that you're trying to accomplish, but minimally you'd log it for later debugging and otherwise ensure that your application doesn't go into an unrecoverable or insecure state. While you can never know every possible input someone/something might give you, you certainly can limit what's allowable based on the business requirements and do some form of input whitelisting based on that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/385511",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/314083/"
]
} |
385,623 | I would like to be able to debug building a binary builder. Right now I am basically printing out the input data to the binary parser, and then going deep into the code and printing out the mapping of the input to the output, then taking the output mapping (integers) and using that to locate the corresponding integer in the binary. Pretty clunky, and requires that I modify the source code deeply to get at the mapping between input and output. It seems like you could view the binary in different variants (in my case I'd like to view it in 8-bit chunks as decimal numbers, because that's pretty close to the input). Actually, some numbers are 16 bit, some 8, some 32, etc. So maybe there would be a way to view the binary with each of these different numbers highlighted in memory in some way. The only way I could see that being possible is if you actually build a visualizer specific to the actual binary format/layout. So it knows where in the sequence the 32 bit numbers should be, and where the 8 bit numbers should be, etc. This is a lot of work and kind of tricky in some situations. So wondering if there's a general way to do it. I am also wondering what the general way of debugging this type of thing currently is, so maybe I can get some ideas on what to try from that. | For ad-hoc checks, just use a standard hexdump and learn to eyeball it. If you want to tool up for a proper investigation, I usually write a separate decoder in something like Python - ideally this will be driven directly from a message spec document or IDL, and be as automated as possible (so there's no chance of manually introducing the same bug in both decoders). Lastly, don't forget you should be writing unit tests for your decoder, using known-correct canned input. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/385623",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/73722/"
]
} |
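The answer recommends a separate decoder plus unit tests on known-correct canned input; it mentions Python, but the same idea is sketched here in C# for consistency with the rest of this page. The three-field layout (an 8-bit version, a 16-bit count, a 32-bit id, little-endian) is entirely hypothetical.

```csharp
using System;
using System.IO;

public record DecodedHeader(byte Version, ushort Count, uint Id);

public static class HeaderDecoder
{
    // Reads the hypothetical layout: 1 byte, then 2 bytes, then 4 bytes, little-endian.
    public static DecodedHeader Decode(byte[] payload)
    {
        using var reader = new BinaryReader(new MemoryStream(payload));
        return new DecodedHeader(reader.ReadByte(), reader.ReadUInt16(), reader.ReadUInt32());
    }
}

public static class HeaderDecoderTests
{
    // Known-correct canned input: version 2, count 0x0103 (259), id 0x00000010 (16).
    public static void DecodesCannedInput()
    {
        var canned = new byte[] { 0x02, 0x03, 0x01, 0x10, 0x00, 0x00, 0x00 };
        var decoded = HeaderDecoder.Decode(canned);
        if (decoded != new DecodedHeader(2, 259, 16))
            throw new Exception("decoder regressed");
    }
}
```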
385,783 | I am recently studying computer science and I was introduced into boolean algebra . It seems that boolean algebra is used to simplify logic gates in hardware in order to make the circuit design minimal and thus cheaper. Is there any similar way that you can use it to reduce the number of code lines in your software in higher level languages like C++, C# or any other language? | You can use boolean algebra for many things in programming. It is a basic calculation technique like adding, subtracting or multiplying numbers, a multi-purpose tool, not just a tool for reducing the number of code lines in a program. Note it is not a tool for just simplifying logic gates in hardware as well. However, it can sometimes be used for such cases (as well as for the opposite, or for completely different purposes). For example, if your program contains an overly complicated boolean expression or sequence of conditionals, boolean algebra might help you to simplify the expression and the surrounding code. But that does not necessarily lead to less lines of code. In fact, sometimes complex one-line boolean code snippets get more maintainable when you split them up into several lines of code, and boolean algebra can help you to do this correctly . So IMHO your question is like "can I use a pocket calculator to find the shortest route when traveling from A to B" ? Sure you can, when you take a map with distance information for individual roads and use the calculator to add them up, and then pick the route with the smallest sum. But you could also use the calculator for finding longer routes, or for calculating completely different things. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/385783",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/99479/"
]
} |
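A small illustration of the point above (the User type and the access rule are invented): De Morgan's laws let you replace a negated compound condition with an equivalent form whose clauses can be read, named and tested individually.

public class User
{
    public bool IsLoggedIn { get; set; }
    public bool HasPermission { get; set; }
}

public static class AccessCheck
{
    // Original form: "not (logged in and has permission)".
    public static bool ShouldDeny(User user)
    {
        return !(user.IsLoggedIn && user.HasPermission);
    }

    // Equivalent by De Morgan's law: "not logged in, or lacks permission".
    // Same truth table, but each clause now carries its own name.
    public static bool ShouldDenyRewritten(User user)
    {
        bool notLoggedIn = !user.IsLoggedIn;
        bool lacksPermission = !user.HasPermission;
        return notLoggedIn || lacksPermission;
    }
}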
385,819 | Why is argv declared as "a pointer to pointer to the first index of the array", rather than just being "a pointer to the first index of array" ( char* argv )? Why is the notion of "pointer to pointer" required here? | Argv is basically a two-level structure: the argument itself--what's actually passed as an argument to main--contains the address of an array of pointers. Each of those points to some place in memory containing the text of the corresponding argument that was passed on the command line. Then, at the end of that array there's guaranteed to be a null pointer. Note that the actual storage for the individual arguments is at least potentially allocated separately from each other, so their addresses in memory might be arranged fairly randomly (but depending on how things happen to be written, they could also be in a single contiguous block of memory--you simply don't know and shouldn't care). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/385819",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/326297/"
]
} |
385,901 | I have a switch structure that has several cases to handle. The switch operates over an enum which poses the issue of duplicate code through combined values: // All possible combinations of One - Eight.
public enum ExampleEnum {
One,
Two, TwoOne,
Three, ThreeOne, ThreeTwo, ThreeOneTwo,
Four, FourOne, FourTwo, FourThree, FourOneTwo, FourOneThree,
FourTwoThree, FourOneTwoThree
// ETC.
} Currently the switch structure handles each value separately: // All possible combinations of One - Eight.
switch (enumValue) {
case One: DrawOne; break;
case Two: DrawTwo; break;
case TwoOne:
DrawOne;
DrawTwo;
break;
case Three: DrawThree; break;
...
} You get the idea there. I currently have this broken down into a stacked if structure to handle combinations with a single line instead: // All possible combinations of One - Eight.
if (One || TwoOne || ThreeOne || ThreeOneTwo)
DrawOne;
if (Two || TwoOne || ThreeTwo || ThreeOneTwo)
DrawTwo;
if (Three || ThreeOne || ThreeTwo || ThreeOneTwo)
DrawThree; This poses the issue of incredibly long logical evaluations that are confusing to read and difficult to maintain. After refactoring this out I began to think about alternatives and thought of the idea of a switch structure with fall-through between cases. I have to use a goto in that case since C# doesn't allow fall-through. However, it does prevent the incredibly long logic chains even though it jumps around in the switch structure, and it still brings in code duplication. switch (enumVal) {
case ThreeOneTwo: DrawThree; goto case TwoOne;
case ThreeTwo: DrawThree; goto case Two;
case ThreeOne: DrawThree; goto default;
case TwoOne: DrawTwo; goto default;
case Two: DrawTwo; break;
default: DrawOne; break;
} This still isn't a clean enough solution and there is a stigma associated with the goto keyword that I would like to avoid. I'm sure there has to be a better way to clean this up. My question: Is there a better way to handle this specific case without affecting readability and maintainability? | I find the code hard to read with the goto statements. I would recommend structuring your enum differently. For example, if your enum were a bitfield where each bit represented one of the choices, it could look like this: [Flags]
public enum ExampleEnum {
One = 0b0001,
Two = 0b0010,
Three = 0b0100
}; The Flags attribute signals that the enum is intended to be used as a bit field; the values themselves are chosen as distinct bits so they don't overlap. The code that calls this code could set the appropriate bits. You could then do something like this to make it clear what's happening: if (myEnum.HasFlag(ExampleEnum.One))
{
CallOne();
}
if (myEnum.HasFlag(ExampleEnum.Two))
{
CallTwo();
}
if (myEnum.HasFlag(ExampleEnum.Three))
{
CallThree();
} This requires the code that sets up myEnum to set the bitfields properly and marked with the Flags attribute. But you can do that by changing the values of the enums in your example to: [Flags]
public enum ExampleEnum {
One = 0b0001,
Two = 0b0010,
Three = 0b0100,
OneAndTwo = One | Two,
OneAndThree = One | Three,
TwoAndThree = Two | Three
}; When you write a number in the form 0bxxxx , you're specifying it in binary. So you can see that we set bit 1, 2, or 3 (well, technically 0, 1, or 2, but you get the idea). You can also name combinations by using a bitwise OR if the combinations might be frequently set together. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/385901",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/319749/"
]
} |
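A short usage sketch for the accepted approach above, reusing the ExampleEnum from the answer; the Draw methods are stand-ins for whatever the original switch called:

public static class Drawing
{
    // Stand-ins for the original DrawOne/DrawTwo/DrawThree calls.
    static void DrawOne() { }
    static void DrawTwo() { }
    static void DrawThree() { }

    public static void Draw(ExampleEnum toDraw)
    {
        // Each flag is tested on its own, so every combination is covered
        // without enumerating the combinations in a switch.
        if (toDraw.HasFlag(ExampleEnum.One)) DrawOne();
        if (toDraw.HasFlag(ExampleEnum.Two)) DrawTwo();
        if (toDraw.HasFlag(ExampleEnum.Three)) DrawThree();
    }
}

// Callers compose whatever they need; the named value OneAndThree
// is exactly the same bits as One | Three:
// Drawing.Draw(ExampleEnum.One | ExampleEnum.Three);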
385,926 | I'm reading the Scrum - A Pocket Guide by Gunther Verheyen and it says: The Chaos report of 2011 by the Standish Group marks a turning point. Extensive research was done in comparing traditional projects with projects that used Agile methods. The report shows that an Agile approach to software development results in a much higher yield, even against the old expectations that software must be delivered on time, on budget and with all the promised scope. The report shows that the Agile projects were three times as successful, and there were three times fewer failed Agile projects compared with traditional projects. So I have an argument with one of my colleagues who says that for some projects (like medicine/military where the requirements don't change), Agile (and, particularly, Scrum) is overhead with all of the meetings etc and it's more logical to use waterfall, for example. My point of view is that Scrum should be adopted in such projects because it will make the process more transparent and increase the productivity of a team. I also think that Scrum events won't take much time if it's not needed because we don't need to sit the whole 8 hours in Sprint Planning for 1 month sprint. We can spare 5 minutes just to be sure that we are all on the same page and start working. So, will Scrum create additional overhead for a project where requirements don't change? | I believe that it's a faulty assumption to say that there are projects where the requirements don't change. Having worked in both the defense industry and the pharmaceutical industry making software, I can tell you that once software ends up in the hands of subject matter experts (either internal or external), there is feedback. Sometimes, this feedback is on the way the requirement was satisfied and in other cases it's actually on the requirements themselves being wrong or incomplete. Agility is about reducing that feedback cycle and getting working software into someone's hands faster, getting that feedback, and deciding what the next step should be to make sure that what is delivered adds value when the customer decides to accept the software. Even in realms like embedded systems with custom hardware (like you may find in domains like aerospace, automotive, or medical devices), delivering thin slices of functionality quickly to integrate and prototype with can help make sure that the software and hardware system is going to work as intended and in a way that will help the end user. The reduction in the length of the feedback cycle is a huge factor in risk reduction. From the project management perspective, if you fund a project for 2-4 weeks and get regular visibility into progress, that assures you that you are on track. By being able to deliver thin slices of functionality, you incrementally work toward the target state and can begin to forecast when you will get there. If time becomes a constraint, you can descope the lower value functions since the work done first should either be a high value function or an enabler for a high value function. At any point, you can decide if it's worth continuing to fund the effort or go in a different direction and stop a project before it's too late. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/385926",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/326472/"
]
} |
386,042 | According to When is primitive obsession not a code smell? , I should create a ZipCode object to represent a zip code instead of a String object. However, in my experience, I prefer to see public class Address{
public String zipCode;
} instead of public class Address{
public ZipCode zipCode;
} because I think the latter one requires me to move to the ZipCode class to understand the program. And I believe I need to move between many classes to see the definition if every primitive data field were replaced by a class, which feels like suffering from the yo-yo problem (an anti-pattern). So I would like to move the ZipCode methods into a new class, for example: Old: public class ZipCode{
public boolean validate(String zipCode){
}
} New: public class ZipCodeHelper{
public static boolean validate(String zipCode){
}
} so that only the one who needs to validate the zip code would depend on the ZipCodeHelper class. And I found another "benefit" of keeping the primitive obsession: it keeps the class looks like its serialized form, if any, for example: an address table with string column zipCode. My question is, is "avoiding the yo-yo problem" (move between class definitions) a valid reason to allow the "primitive obsession"? | The assumption is that you don't need to yo-yo to the ZipCode class to understand the Address class. If ZipCode is well-designed it should be obvious what it does just by reading the Address class. Programs are not read end-to-end - typically programs are far too complex to make this possible. You cannot keep all the code in a program in your mind at the same time. So we use abstractions and encapsulations to "chunk" the program into meaningful units, so you can look at one part of the program (say the Address class) without having to read all code it depends on. For example I'm sure you don't yo-yo into reading the source code for String every time you encounter String in code. Renaming the class from ZipCode to ZipCodeHelper suggest there now is two separate concepts: a zip code and a zip code helper. So twice as complex. And now the type system cannot help you distinguish between an arbitrary string and a valid zip code since they have the same type. This is where "obsession" is appropriate: You are suggesting a more complex and less safe alternative just because you want to avoid a simple wrapper type around a primitive. Using a primitive is IMHO justified in the cases where there is no validation or other logic depending on this particular type. But as soon as you add any logic, it is much simpler if this logic is encapsulated with the type. As for serialization I think it sounds like a limitation in the framework you are using. Surely you should be able to serialize a ZipCode to a string or map it to a column in a database. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/386042",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/248528/"
]
} |
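A minimal sketch of the wrapper type the answer above argues for; the five-digit rule is a placeholder, not a claim about real postal codes:

using System;
using System.Linq;

// A zip code that cannot exist in an invalid state: the check runs once,
// in the constructor, instead of being repeated wherever a raw string shows up.
public sealed class ZipCode
{
    public string Value { get; }

    public ZipCode(string value)
    {
        if (value == null || value.Length != 5 || !value.All(char.IsDigit))
            throw new ArgumentException("Not a valid zip code.", nameof(value));
        Value = value;
    }

    // Serializes back to a plain string, so mapping to a text column stays trivial.
    public override string ToString() => Value;
}

public class Address
{
    public ZipCode ZipCode { get; set; }   // the type itself now documents the constraint
}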
386,194 | When compiling C code and looking at assembly, it all has the stack grow backwards like this: _main:
pushq %rbp
movl $5, -4(%rbp)
popq %rbp
ret -4(%rbp) - does this mean the base pointer or the stack pointer are actually moving down the memory addresses instead of going up? Why is that? I changed $5, -4(%rbp) to $5, +4(%rbp) , compiled and ran the code and there were no errors. So why do we have to still go backwards on the memory stack? | Does this mean the base pointer or the stack pointer are actually moving down the memory addresses instead of going up? Why is that? Yes, the push instructions decrement the stack pointer and write to the stack, while the pop do the reverse, read from the stack and increment the stack pointer. This is somewhat historical in that for machines with limited memory, the stack was placed high and grown downwards, while the heap was placed low and grown upwards. There is only one gap of "free memory" — between the heap & stack, and this gap is shared, either one can grow into the gap as individually needed. Thus, the program only runs out of memory when the stack and heap collide leaving no free memory. If the stack and heap both grow in the same direction, then there are two gaps, and the stack cannot really grow into the heap's gap (the vice versa is also problematic). Originally, processors had no dedicated stack handling instructions. However, as stack support was added to the hardware, it took on this pattern of growing downward, and processors still follow this pattern today. One could argue that on a 64-bit machine there is sufficient address space to allow multiple gaps — and as evidence, multiple gaps are necessarily the case when a process has multiple threads. Though this is not sufficient motivation to change things around, since with multiple gap systems, the growth direction is arguably arbitrary, so tradition/compatibility tips the scale. You'd have to change the CPU stack handling instructions in order to change the direction of the stack, or else give up on use of the dedicated pushing & popping instructions (e.g. push , pop , call , ret , others). Note that the MIPS instruction set architecture does not have dedicated push & pop , so it is practical to grow the stack in either direction — you still might want a one-gap memory layout for a single thread process, but could grow the stack upwards and the heap downwards. If you did that, however, some C varargs code might require adjustment in source or in under-the-hood parameter passing. (In fact, since there is no dedicated stack handling on MIPS, we could use pre or post increment or pre or post decrement for pushing onto the stack as long as we used the exact reverse for popping off the stack, and also assuming that the operating system respects the chosen stack usage model. Indeed, in some embedded systems and some educational systems, the MIPS stack is grown upwards.) We refer to multi-byte items by the lowest address among them — i.e. by their first byte aka the beginning. Another advantage of growing the stack downward is that, after pushing, the stack pointer refers to the item recently pushed onto the stack, no matter its size. Growing the stack in the reverse direction means pointing to the logical end of the last item pushed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/386194",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/326949/"
]
} |
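A toy simulation of the convention described above (ordinary C#, not processor code): an array stands in for memory, and "push" moves the pointer toward lower indices exactly as the hardware push instruction does.

public class DownwardStack
{
    private readonly long[] memory = new long[16];
    private int sp = 16;                  // one past the highest slot, i.e. an empty stack

    public void Push(long value)
    {
        sp--;                             // decrement first...
        memory[sp] = value;               // ...then store: sp now points at the newest item
    }

    public long Pop()
    {
        long value = memory[sp];          // read the newest item...
        sp++;                             // ...then move back toward the top
        return value;
    }
}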
386,266 | There's a debate going on in our team at the moment as to whether modifying code design to allow unit testing is a code smell, or to what extent it can be done without being a code smell. This has come about because we're only just starting to put practices in place that are present in just about every other software dev company. Specifically, we will have a Web API service that will be very thin. Its main responsibility will be marshalling web requests/responses and calling an underlying API that contains the business logic. One example is that we plan on creating a factory that will return an authentication method type. We have no need for it to inherit an interface as we don't anticipate it ever being anything other than the concrete type it will be. However, to unit test the Web API service we will need to mock this factory. This essentially means we either design the Web API controller class to accept DI (through its constructor or setter), which means we're designing part of the controller just to allow DI and implementing an interface we don't otherwise need, or we use a third party framework like Ninject to avoid having to design the controller in this way, but we'll still have to create an interface. Some on the team seem reluctant to design code just for the sake of testing. It seems to me that there has to be some compromise if you hope to unit test, but I'm unsure how allay their concerns. Just to be clear, this is a brand new project, so it's not really about modifying code to enable unit testing; it's about designing the code we're going to write to be unit testable. | Reluctance to modify code for the sake of testing shows that a developer hasn't understood the role of tests, and by implication, their own role in the organization. The software business revolves around delivering a code base that creates business value. We have found, through long and bitter experience, that we cannot create such code bases of nontrivial size without testing. Therefore, test suites are an integral part of the business. Many coders pay lip service to this principle but subconsciously never accept it. It is easy to understand why this is; the awareness that our own mental capability is not infinite, and is in fact, surprisingly limited when confronted with the enormous complexity of a modern code base, is unwelcome and easily suppressed or rationalized away. The fact that test code is not delivered to the customer makes it easy to believe that it is a second-class citizen and non-essential compared to the "essential" business code. And the idea of adding testing code to the business code seems doubly offensive to many. The trouble with justifying this practice has to do with the fact that the entire picture of how value is created in a software business is often only understood by higher-ups in the company hierarchy, but these people don't have the detailed technical understanding of the coding workflow that is required to understand why testing can't be gotten rid of. Therefore they are too often pacified by practitioners who assure them that testing may be a good idea in general, but "we are elite programmers who don't need crutches like that", or that "we don't have time for that right now", etc. etc. The fact that business success is a numbers game and that avoiding technical debt, assuring quality etc. shows its value only in the longer term means that they are often quite sincere in that belief. 
Long story short: making code testable is an essential part of the development process, no different than in other fields (many microchips are designed with a substantial proportion of elements only for testing purposes), but it's very easy to overlook the very good reasons for that. Don't fall into that trap. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/386266",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/146235/"
]
} |
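A minimal sketch of the seam being debated above; the interface, factory and controller names are invented, and the "mock" is a hand-written fake rather than a framework such as Ninject or Moq:

// The only "design for testing" required: depend on an interface and accept it
// in the constructor instead of newing up the concrete factory inside the controller.
public interface IAuthMethodFactory
{
    string GetAuthMethod(string userName);
}

public class LoginController
{
    private readonly IAuthMethodFactory factory;

    public LoginController(IAuthMethodFactory factory)
    {
        this.factory = factory;
    }

    public string Describe(string userName)
    {
        return "User " + userName + " authenticates via " + factory.GetAuthMethod(userName);
    }
}

// In a unit test, a throwaway fake stands in for the real factory,
// so the controller can be exercised without any real authentication service.
public class FakeAuthMethodFactory : IAuthMethodFactory
{
    public string GetAuthMethod(string userName) { return "password"; }
}

// var controller = new LoginController(new FakeAuthMethodFactory());
// controller.Describe("alice") now returns a predictable value.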
386,296 | I was thinking why are there (in all programming languages I have learned, such as C++, Java, Python) standard libraries like stdlib, instead of having similar "functions" being a primitive of the language itself. | This is simply to keep the language itself as simple as possible. You need to distinguish between a feature of the language, such as a type of loop or ways to pass parameters to functions and so on, and common functionality that most applications need. Libraries are functions that may be useful to many programmers so they are created as reusable code that can be shared. The standard libraries are designed to be very common functions that programmers typically need. This way the programming language is immediately useful to a wider range of programmers. The libraries can be updated and extended without changing the core features of the language itself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/386296",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
386,570 | I've heard the phrase being thrown arround and to me the arguments sound completely insane (sorry if I'm strawmaning here, Its not my intention), generally it goes something along the lines of: You don't want to create an abstraction before you know what the general case is, otherwise (1) you might be putting things in your abstractions that don't belong, or (2) omitting things of importance. (1) To me this sounds like the programmer isn't being pragmatic enough, they have made assumptions that things would exist in the final program that doesnt, so they are working with to low of a level of abstraction, the problem isn't premature abstraction, it's premature concretion. (2) Omitting things of importance is one thing, it's entirely possible something is omitted from the spec that later turns out to be important, the solution to this isn't to come up with your own concretion and waste resources when you find out you guessed wrong, it's to get more information from the client. We should always be working from abstractions down to concretions as this is the most pragmatic way of doing things, and not the other way around. If we don't do so then we risk misunderstanding clients and creating things that need to be changed, but if we only build the abstractions the clients have defined in their own language we never hit this risk (at least nowhere near as likely as taking a shot in the dark with some concretion), yes it's possible clients change their minds about the details, but the abstractions they used to originally communicate what they want tend to still be valid. Here is an example, lets say a client wishes you to create an item bagging robot: public abstract class BaggingRobot() {
private Collection<Item> items;
public abstract void bag(Item item);
} We are building something from the abstractions the client used without going into more detail with things we don't know. This is extremely flexible. I've seen this being called "premature abstraction" when in reality it would be more premature to assume how the bagging was implemented; let's say after discussing with the client they want more than one item to be bagged at once. In order to update my class all I need to do is change the signature, but for someone who started bottom up that might involve a large system overhaul. There is no such thing as premature abstraction, only premature concretion. What is wrong with this statement? Where are the flaws in my reasoning? Thanks. | At least in my opinion, premature abstraction is fairly common, and was especially so early in the history of OOP. At least from what I saw, the major problem that arose was that people read through the typical examples of object oriented hierarchies. They got told a lot about making everything ready to deal with future changes that might arise (even though there was no particularly good reason to believe they would). Another theme common to many articles for a while was things like the platypus, which defies simple rules about "mammals are all like this" or "birds are all like that." As a result, we ended up with code that really only needed to deal with, say, records of employees, but was carefully written to be ready if you ever hired an arachnid or maybe a crustacean. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/386570",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/327654/"
]
} |
386,599 | I have a method that creates a data file after talking to a digital board: CreateDataFile(IFileAccess boardFileAccess, IMeasurer boardMeasurer) Here boardFileAccess and boardMeasurer are the same instance of a Board object that implements both IFileAccess and IMeasurer . IMeasurer is used in this case for a single method that will set one pin on the board active to make a simple measurement. The data from this measurement is then stored locally on the board using IFileAccess . Board is located in a separate project. I've come to the conclusion that CreateDataFile is doing one thing by making a quick measurement and then storing the data, and doing both in the same method is more intuitive for someone else using this code then having to make a measurement and write to a file as separate method calls. To me, it seems awkward to pass the same object to a method twice. I've considered making a local interface IDataFileCreator that will extend IFileAccess and IMeasurer and then have an implementation containing a Board instance that will just call the required Board methods. Considering that the same board object would always be used for measurement and file writing, is it a bad practice to pass the same object to a method twice? If so, is using a local interface and implementation an appropriate solution? | No, this is perfectly fine. It merely means that the API is over-engineered with regards to your current application . But that doesn't prove that there will never a use case in which the data source and the measurer are different. The point of an API is to offer the application programmer possibilities, not all of which will be used. You should not artificially restrict what API users can do unless it complicates the API so that the net understandability goes down. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/386599",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/307087/"
]
} |
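A sketch of the situation in the question above, with placeholder members; the point is simply that nothing stops the same instance from being passed for both parameters:

using System;

public interface IFileAccess { void Write(string path, byte[] data); }
public interface IMeasurer   { double Measure(); }

// One physical device happens to provide both capabilities.
public class Board : IFileAccess, IMeasurer
{
    public void Write(string path, byte[] data) { /* store on the board */ }
    public double Measure() { return 42.0; /* placeholder reading */ }
}

public static class DataFiles
{
    // The method sees two roles; it neither knows nor cares
    // whether they happen to be backed by the same object.
    public static void CreateDataFile(IFileAccess storage, IMeasurer measurer)
    {
        double reading = measurer.Measure();
        storage.Write("data.bin", BitConverter.GetBytes(reading));
    }
}

// var board = new Board();
// DataFiles.CreateDataFile(board, board);   // same instance twice -- perfectly fine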
386,702 | I recently started working at a place with some much older developers (around 50+ years old). They have worked on critical applications dealing with aviation where the system could not go down. As a result the older programmer tends to code this way. He tends to put a boolean in the objects to indicate if an exception should be thrown or not. Example public class AreaCalculator
{
AreaCalculator(bool shouldThrowExceptions) { ... }
CalculateArea(int x, int y)
{
if(x < 0 || y < 0)
{
if(shouldThrowExceptions)
throwException;
else
return 0;
}
}
} (In our project the method can fail because we are trying to use a network device that can not be present at the time. The area example is just an example of the exception flag) To me this seems like a code smell. Writing unit tests becomes slightly more complex since you have to test for the exception flag each time. Also, if something goes wrong, wouldn't you want to know right away? Shouldn't it be the caller's responsibility to determine how to continue? His logic/reasoning is that our program needs to do 1 thing, show data to user. Any other exception that doesn't stop us from doing so should be ignored. I agree they shouldn't be ignored, but should bubble up and be handled by the appropriate person, and not have to deal with flags for that. Is this a good way of handling exceptions? Edit : Just to give more context over the design decision, I suspect that it is because if this component fails, the program can still operate and do its main task. Thus we wouldn't want to throw an exception (and not handle it?) and have it take down the program when for the user its working fine Edit 2 : To give even more context, in our case the method is called to reset a network card. The issue arises when the network card is disconnected and reconnected, it is assigned a different ip address, thus Reset will throw an exception because we would be trying to reset the hardware with the old ip. | The problem with this approach is that while exceptions never get thrown (and thus, the application never crashes due to uncaught exceptions), the results returned are not necessarily correct, and the user may never know that there is a problem with the data (or what that problem is and how to correct it). In order for the results to be correct and meaningful, the calling method has to check the result for special numbers - i.e., specific return values used to denote problems that came up while executing the method. Negative (or zero) numbers being returned for positive-definite quantities (like area) are a prime example of this in older code. If the calling method doesn't know (or forgets!) to check for these special numbers, though, processing can continue without ever realizing a mistake. Data then gets displayed to the user showing an area of 0, which the user knows is incorrect, but they have no indication of what went wrong, where, or why. They then wonder if any of the other values are wrong... If the exception was thrown, processing would stop, the error would (ideally) be logged and the user may be notified in some way. The user can then fix whatever is wrong and try again. Proper exception handling (and testing!) will ensure that critical applications do not crash or otherwise wind up in an invalid state. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/386702",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/269955/"
]
} |
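One conventional alternative to the constructor flag discussed above is the Try pattern: one method always throws on bad input, and a sibling reports failure through its return value, so the caller decides how serious the failure is. A sketch using the same area example (names invented):

using System;

public static class AreaCalculator
{
    // Throwing version: invalid input is always treated as an error here.
    public static int CalculateArea(int x, int y)
    {
        if (x < 0 || y < 0)
            throw new ArgumentOutOfRangeException(x < 0 ? nameof(x) : nameof(y),
                "Dimensions must be non-negative.");
        return x * y;
    }

    // Non-throwing version: the caller opts in to handling failure inline,
    // instead of the callee guessing via a flag passed at construction time.
    public static bool TryCalculateArea(int x, int y, out int area)
    {
        if (x < 0 || y < 0)
        {
            area = 0;
            return false;
        }
        area = x * y;
        return true;
    }
}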
387,135 | So I am asking this after reading the following: Why shouldn't I use the repository pattern with Entity Framework? . It seems there is a large split of people who say yay and those that say nay. What seems to be missing from some of the answers are concrete examples, whether that be code or good reasoning or whatever. The issue is I keep reading people responding saying "well EF is already abstraction". Well that's great, and that's probably true, but then how would you use it without the repository pattern? For those that say otherwise, why would you say otherwise, what personally have you run into that made is necessary? | To get this out of the way, I am a big proponent of Entity Framework, but it does come with some drawbacks that you need to be aware of. I also apologize for the long answer, but this is a very hot topic with many opinions and many required considerations. For small application, a lot of these considerations don't matter, but for enterprise-grade applications they do matter a lot. Part of what makes the EF discussion such a hot topic is that it leads to a chain of events, where each solution introduces a new problem (which sometimes only applies in more advanced cases). If I just gave you the final (or should I say current) answer, you'd think that I was omitting several other solutions, so I think it's relevant to walk you through the solutions and how they are not the final solution to the problem. Repositories how would you use it without the repository pattern? The short answer to that is that (simple) repositories are an anti-pattern* to Entity Framework. EF provides a context, which essentially provides access to the whole database. You can e.g. fire a query that returns all Country entities with their Province entities already filled in, with each province's City entities already filled in. In short, it enables to you execute multiple-entity-type queries (this is a phrase I coined myself in order to explain the difference with repositories). Repositories, at least the basic implementation thereof, tend to take a "one entity type per repository" approach. If you want to get a list of all countries with all their provinces and all of the province's cities, you'll have to separately talk to the CountryRepository , ProviceRepository and CityRepository . In short, repositories limit you to only being able to execute single-entity-type queries. For the same example, you would have to launch 3 separate database queries in order to get all countries and their provinces and their cities. And don't get me wrong, I like repositories. I like having the neat little boxes so you can separate your storage of different domain objects, which e.g. would allow you to get the countries from your database but the provinces from a remote API and the cities for a second remote API. But this separation of entity types into their own private boxes very much clashes with the benefit of having relational databases, where part of the benefit is that you can launch a single query that can take related entities into account (for filtering, sorting or returning). You might rightly respond that "a repository can still return more than one entity type" . And you would be correct. But if you have a query which returns both Foo and Bar entities, where do you place it? In the FooRepository ? In the BarRepository ? 
There may be examples where the choice is easy, but there are also examples where the choice is hard and multiple developers may have different categorization methods and thus the codebase becomes inconsistent and the true purpose of the "one entity type per repository" approach will be thrown out the window. *When I say repositories are an anti-pattern, that is not a global statement, but rather than they specifically counteract the purpose of EF. Without EF or a similar solution, repositories are not an anti-pattern. Query objects Query objects are the only real way to get around the "one entity type per repository" approach. The shortest way I can describe what a query object is, is that you should think of it as a "one method repository". Repositories suffer from having to deal with multiple types of entities, and the more methods a repository has, the more distinct entity types it's likely going to be handling. By separating each repository method into a query object of its own, you've simply removed the contradictory suggestion that "this repository only handles one type", and instead are suggesting that "this query object runs this particular query, regardless of which entity types it needs to use". You can still use repositories at the same time, and you are then able to enforce that repositories will never handle more than their designated entity type. If a query makes use of more than one entity type (e.g. Country and Province ), then it belongs in its own private query object (e.g. CountriesAndTheirProvincesQuery ). If a query only focuses on one entity type (e.g. Country ), then it belongs to that entity type's repository (e.g. CountryRepository ). On a technical level, query objects work exactly like repositories do. The only difference is that you separate the logic differently by no longer trying to pretend that your multi-entity-type queries belong to a single-entity-type repository. Repositories 2 There is a second problem pertaining to repositories. As they are separate classes, they do not depend on each other. This usually also means that each repository will use their own EF context (I'm omitting dependency injection here as it sidetracks the focus of the answer). Suppose you are doing an import, which adds countries and cities to the database. However, you want transactional safety, meaning that when any failure is encountered, then nothing should be saved to the database. But when you have to deal with two repositories that each have their own context, how can you knowingly call SaveChanges() on one context before knowing that the other context's SaveChanges() succeeded? You're going to have to guess, and you're going to be stuck manually undoing the first context's commit when the second context's commit ends up failing. By separating the repositories, you've removed their ability to have a shared context, which you need in times where you're dealing with transactions that operate on more than one entity type at the same time. Unit of work In any sufficiently large codebase or domain where I've used repositories and EF, I've ended up implementing a unit of work to at least somewhat counter the problem of transactional safety. Very simply put, a unit of work is a collection of all repositories, it forces the repositories to share the same context, and it allows for the developer to directly commit/rollback the context for all repositories at the same time. A simple example: public class UnitOfWork : IDisposable
{
public readonly FooRepository FooRepository;
public readonly BarRepository BarRepository;
public readonly BazRepository BazRepository;
private readonly MyContext _context;
public UnitOfWork()
{
_context = new MyContext();
this.FooRepository = new FooRepository(_context);
this.BarRepository = new BarRepository(_context);
this.BazRepository = new BazRepository(_context);
}
public void Commit()
{
_context.SaveChanges();
}
public void Dispose()
{
_context.Dispose();
}
} And a simple usage example: using (var uow = new UnitOfWork())
{
uow.FooRepository.Add(myFoo);
uow.BarRepository.Update(myBar);
uow.BazRepository.Delete(myBaz);
uow.Commit();
} And now we have transactional safety. Either all three objects are handled in the database, or none of them are. But Entity Framework is a framework! (personal note) Maybe you've noticed, maybe you haven't, but you should see strong similarities to EF's DbContext and the UnitOfWork I just created. They are essentially the same thing. They represent a single transaction to the database, and offer access to collections of all available entity types: public class UnitOfWork
{
public readonly FooRepository FooRepository;
public readonly BarRepository BarRepository;
public readonly BazRepository BazRepository;
public void Commit() { }
}
public class MyContext : DbContext
{
public Set<Foo> Foos { get; private set; }
public Set<Bar> Bars { get; private set; }
public Set<Baz> Bazs { get; private set; }
public int SaveChanges() { }
} EF's DbContext satifies the definition of what a unit of work is : A Unit of Work keeps track of everything you do during a business transaction that can affect the database. When you're done, it figures out everything that needs to be done to alter the database as a result of your work. So why do we do this? Well, simply put, because developers always try to abstract dependencies. We don't want the business layer to directly depend on EF. This is the exact same reason why you've been creating repositories in the first place: so that your business logic doesn't directly use EF. But what's the point of it all? Why do we use EF, then anti-patterned repositories, and then an anti-anti-patterned unit of work to make it all workable? This costs so much effort. We have to manually write search filters instead of being able to innately rely on EF's ability to parse (pretty much) any lambda method we throw at it. Why are we going through all this effort instead just to use EF in the way it's already intended to work out of the box? And I have to admit that I've had this question for a long time but I find little support for my opinion. If you allow me to soapbox for a moment; my opinion on the matter is that this is why EF is called Entity Framework and not Entity Library. The difference between frameworks and libraries is often semantial and up for debate, but I think an agreeable line can be drawn as explained here : A library performs specific, well-defined operations. A framework is a skeleton where the application defines the "meat" of the operation by filling out the skeleton. The skeleton still has code to link up the parts but the most important work is done by the application. This description of a framework fits with EF to a tee. It pretty much does the whole DB interaction for us, but it requires us to extend DbContext with the entities (and model configuration) that we expect EF to use. We abstract dependencies (libraries) because we can, and because the benefit of doing so (swappability) far outweighs the drawback (effort required to implement the abstraction). But frameworks, the skeleton of a system, are not easily replaced because they cannot be easily abstracted. The effort is much greater than the likelihood of needing to replace the dependency, and thus it's no longer worth the effort to do so. I think that in order to cut out a lot of boilerplating code, it would be beneficial to consider EF as a framework that we build the application around and cannot easily move away from (the same way we can for a library). This means that we can do away with the repositories and the unit of work altogether, as their only purpose is to give access to the features EF already has; and instead use EF directly and accept that its usage is an architectural choice that we do not implement with the intention of easily moving away from it. This means we could cut out the repositories and unit of work, and instead have our business logic deal with the context directly. Notice how the business logic code hardly changes: // OLD
using (var uow = new UnitOfWork())
{
uow.FooRepository.Add(myFoo);
uow.BarRepository.Update(myBar);
uow.BazRepository.Delete(myBaz);
uow.Commit();
}
// NEW
using (var db = new MyContext())
{
db.Foos.Add(myFoo);
db.Bars.Update(myBar);
db.Bazs.Delete(myBaz);
db.SaveChanges();
} The issue is I keep reading people responding saying "well EF is already abstraction". Well that's great, and that's probably true, but then how would you use it without the repository pattern? By using EF directly and no longer trying to abstract it behind a self-developed wall of repositories (and possibly a unit of work). For those that say otherwise, why would you say otherwise, what personally have you run into that made is necessary? The answer is sort of a recapitulation of my experience with EF over the last 6 to 7 years. Basic repositories by themselves introduce more problems than they solve. There are advanced solutions that solve the problems introduced by basic repositories; but you do eventually reach a point where you start wondering if it's not better to simply choose to not use repositories so you don't have to spend the effort to get them to play nicely with EF. Can they be made to play nicely with EF? Sure thing. Is it worth the effort to create all that abstraction? That very much depends on the likelihood of you moving away from EF (or using a datastore that's incompatible with EF). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/387135",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/327284/"
]
} |
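The answer above describes query objects without showing one; here is a minimal sketch of what one might look like, assuming MyContext also exposes Countries with Provinces navigation properties and that Province has a Name (all of which are assumptions, not part of the original answer):

using System.Collections.Generic;
using System.Linq;

// One query, one class: it may touch as many entity types as the question requires,
// without pretending to belong to any single-entity repository.
public class CountriesHavingProvinceQuery
{
    private readonly string provinceName;

    public CountriesHavingProvinceQuery(string provinceName)
    {
        this.provinceName = provinceName;
    }

    public List<Country> Execute(MyContext db)
    {
        return db.Countries
                 .Where(c => c.Provinces.Any(p => p.Name == provinceName))
                 .ToList();
    }
}

// Usage, alongside the plain DbContext the answer ends up recommending:
// using (var db = new MyContext())
// {
//     var matches = new CountriesHavingProvinceQuery("Ontario").Execute(db);
// }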
387,482 | I am on a team of ~20 people that is part of a very large organization. We have been using the same codebase for ~5 years and have released multiple derivative projects from it. I've had some 1:1 conversations with team members and most of them are also aware of the various code smells and don't like them, but refactoring rarely happens because there is always something more important to do. I've also asked management if we could have some more deliberate planning or cleanup for the codebase, but I was basically told no. Everyone on the team programs in their own way and there isn't any coordination. I am not asking for refactoring techniques here; I'm asking what are some ways I can help create a culture of writing for maintainability rather than pure functionality. Note: I am not a manager, so I can't just dictate things. | Do not ask management for permission to refactor. It's none of their business. You might as well be asking permission to sharpen a pencil. Management doesn't understand refactoring. It's not a business need. Management shouldn't need to understand it. It's not their job. It's yours. Refactoring is a tool you use to satisfy managements needs. Don't ask management to think about this. I'd be very concerned if my mechanic asked me what brand of wrench to use on my car. Concerned, not about the tool, but about the mechanic. What you need to do is to lead this team to good practices. Don't ask management to dictate that. Even if they try they'll just make a mess of it. You need to build trust with the team, show them the benefits and the costs, and accept that change doesn't happen instantly. Sometimes a cross cutting concern sweeps through many classes and needs refactoring. Sometimes you just can't justify doing it now because now you just need this one little thing changed. It hurts. You have to live with working in crap code because you just don't have the time. So it keeps taking longer and longer. Abe Lincoln is often quoted as saying, "Give me six hours to chop down a tree and I will spend the first four sharpening the ax." When dealing with management don't argue for sharpening the ax. Argue for the six hours. Don't expect to get credit for sharpening the ax. Don't do it for management at all. Do it so you only have to spend two hours fighting the damn tree. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/387482",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/329190/"
]
} |
387,483 | Every now and then someone reaches out to me and asks me to define Dependency Injection in a conceptual way and explain the real pros and cons of using DI in software design. I confess that I have some difficulty explaining the concepts of DI. Every time, I end up telling them the history of the single responsibility principle, composition over inheritance, etc. Can anyone help me explain the best way to describe DI to developers? | Dependency Injection is a horrible name (IMO) [1] for a rather straightforward concept. Here's an example: You have a method (or class with methods) that does X (e.g. retrieve data from a database). As part of doing X, said method creates and manages an internal resource (e.g. a DbContext ). This internal resource is what's called a dependency. You remove the creating and managing of the resource (i.e. the DbContext ) from the method and make it the caller's responsibility to provide this resource (as a method parameter or upon instantiation of the class). You are now doing dependency injection. [1]: I come from a lower-level background and it took me months to sit down and learn dependency injection because the name implies it'd be something much more complicated, like DLL Injection. The fact that Visual Studio (and we developers in general) refers to the .NET libraries (DLLs, or assemblies) that a project depends upon as dependencies does not help at all. There is even such a thing as the Dependency Walker (depends.exe). [Edit] I figured some demo code would come in handy for some, so here's one (in C#). Without dependency injection: public class Repository : IDisposable
{
protected DbContext Context { get; }
public Repository()
{
Context = new DbContext("name=MyEntities");
}
public void Dispose()
{
Context.Dispose();
}
} Your consumer would then do something like: using ( var repository = new Repository() )
{
// work
} The same class implemented with the dependency injection pattern would be like this: public class RepositoryWithDI
{
protected DbContext Context { get; }
public RepositoryWithDI(DbContext context)
{
Context = context;
}
} It's now the caller's responsability to instantiate a DbContext and pass (errm, inject ) it to your class: using ( var context = new DbContext("name=MyEntities") )
{
var repository = new RepositoryWithDI(context);
// work
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/387483",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/329191/"
]
} |
387,720 | For example, suppose I have a class, Member , which has a lastChangePasswordTime: class Member{
.
.
.
constructor(){
this.lastChangePasswordTime=null;
}
} whose lastChangePasswordTime can be meaningfully absent, because some members may never change their passwords. But according to "If nulls are evil, what should be used when a value can be meaningfully absent?" and https://softwareengineering.stackexchange.com/a/12836/248528 , I shouldn't use null to represent a meaningfully absent value. So I try to add a Boolean flag: class Member{
.
.
.
constructor(){
this.isPasswordChanged=false;
this.lastChangePasswordTime=null;
}
} But I think it is quite redundant because: When isPasswordChanged is false, lastChangePasswordTime must be null, and checking lastChangePasswordTime==null is almost identical to checking isPasswordChanged is false, so I prefer to check lastChangePasswordTime==null directly. When changing the logic here, I may forget to update both fields. Note: when a user changes passwords, I would record the time like this: this.lastChangePasswordTime=Date.now(); Is the additional Boolean field better than a null reference here? | I don't see why, if you have a meaningfully absent value, null should not be used if you are deliberate and careful about it. If your goal is to surround the nullable value to prevent accidentally referencing it, I would suggest creating the isPasswordChanged value as a function or property that returns the result of a null check, for example: class Member {
DateTime lastChangePasswordTime = null;
bool isPasswordChanged() { return lastChangePasswordTime != null; }
} In my opinion, doing it this way: Gives better code readability than a null-check would, which might lose context. Removes the need for you having to actually worry about maintaining the isPasswordChanged value that you mention. The way that you persist the data (presumably in a database) would be responsible for ensuring that the nulls are preserved. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/387720",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/248528/"
]
} |
387,990 | (Assuming a single-threaded environment) A function that fulfills this criterion is: bool MyClass::is_initialized = false;
void MyClass::lazy_initialize()
{
if (!is_initialized)
{
initialize(); //Should not be called multiple times
is_initialized = true;
}
} In essence, I can call this function multiple times and not worry about it initializing MyClass multiple times. A function that does not fulfill this criterion might be: Foo* MyClass::ptr = NULL;
void initialize()
{
ptr = new Foo();
} Calling initialize() multiple times will cause a memory leak. Motivation: It would be nice to have a single concise word to describe this behavior, so that functions that are expected to meet this criterion can be duly commented (especially useful when describing interface functions that are expected to be overridden). | This type of function / operation is called Idempotent. Idempotence (UK: /ˌɪdɛmˈpoʊtəns/,[1] US: /ˌaɪdəm-/)[2] is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application. In mathematics, this means that if f is idempotent, f(f(x)) = f(x), which is the same as saying f ∘ f = f. In computer science, this means that if f(x); is idempotent, f(x); is the same as f(x); f(x); . Note: These meanings seem different, but under the denotational semantics of state, the word "idempotent" actually has the same exact meaning in both mathematics and computer science. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/387990",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/266029/"
]
} |
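To make the idempotence property above concrete, here is a minimal C# sketch (all names are hypothetical; the question itself used C++) of a non-idempotent initializer next to an idempotent one:
public class LazyResource
{
    private static object resource;
    private static bool isInitialized;

    // Not idempotent: every call replaces the resource, so calling it twice leaves
    // different state behind than calling it once (the C++ version in the question
    // additionally leaks the first allocation).
    public static void InitializeNaive()
    {
        resource = new object();
    }

    // Idempotent: calling it once or a hundred times has the same net effect.
    public static void InitializeOnce()
    {
        if (!isInitialized)
        {
            resource = new object();
            isInitialized = true;
        }
    }
}
A caller can invoke InitializeOnce() defensively from several code paths without having to track whether initialization already happened.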
388,002 | I want to make a social game for Android. I am using a NoSQL-based database (MongoDB) and I am using NodeJs. I am using the Android-Volley library to make POST and GET requests. But I am stuck on something: I need to see friends of friends, or send game requests or friend requests from one user to another. However, I am still struggling with how to design the database model. Firstly, {
"user1":
{
"friendsList": [ {"user2" : {id:"2", ...}}]
},
"user2":{
"friendsList": [ {"user1" : {id:"1", ...}}]
}
} Or Second Approach, {
"user1" : {
"friendsList": [
{
"user2" : {
"friendsList":[ {"user1" : { ... } } ]
},
...
]
}
} So basically, what I am asking is: should I include the whole "user" object in the list, or should I keep only the id numbers? If I keep only the id numbers, should I make another request for the given id numbers in order to show the profile etc.? I want to cut down on requests (I think I would need to make another request for a given id number), which is why I need your help. Thank you for your time. | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/388002",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/330103/"
]
} |
388,034 | Context: I recently found out about Semantic Versioning , and am trying to determine how to best use it practically for my own projects. Given that semver takes major changes, minor changes, and patches into account for versioning, when should a commit not be tagged with an updated version? It seems to me that every change would fit into one of these categories, and so every change should be versioned, but when I look at various popular projects on GitHub this doesn't seem to be the way things are done (just looking at the fact that large projects have tens of thousands of commits, with only hundreds of tags). | SemVer concerns versioning releases , not commits . If your version control model happens to require that every commit to master be a release, then yes, every commit will need to be tagged according to the degree of the change. Generally, though, projects develop a mostly stable product on master and tag the releases they deem worthy of support. When they do so, they will tag according to their versioning scheme, which doesn't necessarily have to be SemVer in particular. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/388034",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/238401/"
]
} |
388,052 | The codebase I'm working on frequently uses instance variables to share data between various trivial methods. The original developer is adamant that this adheres to the best practices stated in the Clean Code book by Uncle Bob/Robert Martin: "The first rule of functions is that they should be small." and "The ideal number of arguments for a function is
zero (niladic). (...) Arguments are hard. They take a lot of conceptual power." An example: public class SomeBusinessProcess {
@Inject private Router router;
@Inject private ServiceClient serviceClient;
@Inject private CryptoService cryptoService;
private byte[] encodedData;
private EncryptionInfo encryptionInfo;
private EncryptedObject payloadOfResponse;
private URI destinationURI;
public EncryptedResponse process(EncryptedRequest encryptedRequest) {
checkNotNull(encryptedRequest);
getEncodedData(encryptedRequest);
getEncryptionInfo();
getDestinationURI();
passRequestToServiceClient();
return cryptoService.encryptResponse(payloadOfResponse);
}
private void getEncodedData(EncryptedRequest encryptedRequest) {
encodedData = cryptoService.decryptRequest(encryptedRequest, byte[].class);
}
private void getEncryptionInfo() {
encryptionInfo = cryptoService.getEncryptionInfoForDefaultClient();
}
private void getDestinationURI() {
destinationURI = router.getDestination().getUri();
}
private void passRequestToServiceClient() {
payloadOfResponse = serviceClient.handle(destinationURI, encodedData, encryptionInfo);
}
} I would refactor that into the following using local variables: public class SomeBusinessProcess {
@Inject private Router router;
@Inject private ServiceClient serviceClient;
@Inject private CryptoService cryptoService;
public EncryptedResponse process(EncryptedRequest encryptedRequest) {
checkNotNull(encryptedRequest);
byte[] encodedData = cryptoService.decryptRequest(encryptedRequest, byte[].class);
EncryptionInfo encryptionInfo = cryptoService.getEncryptionInfoForDefaultClient();
URI destinationURI = router.getDestination().getUri();
EncryptedObject payloadOfResponse = serviceClient.handle(destinationURI, encodedData,
encryptionInfo);
return cryptoService.encryptResponse(payloadOfResponse);
}
} This is shorter, it eliminates the implicit data coupling between the various trivial methods and it limits the variable scopes to the minimum required. Yet despite these benefits I still cannot seem to convince the original developer that this refactoring is warranted, as it appears to contradict the practices of Uncle Bob mentioned above. Hence my questions:
What is the objective, scientific rationale to favor local variables over instance variables? I can't quite seem to put my finger on it. My intuition tells me that hidden couplings are bad and that a narrow scope is better than a broad one. But what is the science to back this up? And conversely, are there any downsides to this refactoring that I have possibly overlooked? | What is the objective, scientific rationale to favor local variables over instance variables? Scope isn't a binary state, it's a gradient. You can rank these from largest to smallest: Global > Class > Local (method) > Local (code block, e.g. if, for, ...) Edit: what I call a "class scope" is what you mean by "instance variable". To my knowledge, those are synonymous, but I'm a C# dev, not a Java dev. For the sake of brevity, I've lumped all statics into the global category since statics are not the topic of the question. The smaller the scope, the better. The rationale is that variables should live in the smallest scope possible . There are many benefits to this: It forces you to think about the current class' responsibility and helps you stick to SRP. It enables you to not have to avoid global naming conflicts, e.g. if two or more classes have a Name property, you're not forced to prefix them like FooName , BarName , ... Thus keeping your variable names as clean and terse as possible. It declutters the code by limiting the available variables (e.g. for Intellisense) to those that are contextually relevant. It enables some form of access control so your data can't be manipulated by some actor you don't know about (e.g. a different class developed by a colleague). It makes the code more readable as you ensure that the declaration of these variables tries to stay as close to the actual usage of these variables as is possible. Wantonly declaring variables in an overly wide scope is often indicative of a developer who doesn't quite grasp OOP or how to implement it. Seeing overly widely scoped variables acts as a red flag that there's probably something going wrong with the OOP approach (either with the developer in general or the codebase in specific). (Comment by Kevin) Using locals forces you to do things in the right order. In the original (class variable) code, you could wrongly move passRequestToServiceClient() to the top of the method, and it would still compile. With locals, you could only make that mistake if you passed an uninitialized variable, which is hopefully obvious enough that you don't actually do it. Yet despite these benefits I still cannot seem to convince the original developer that this refactoring is warranted, as it appears to contradict the practices of Uncle Bob mentioned above. And conversely, are there any downsides to this refactoring that I have possibly overlooked? The issue here is that your argument for local variables is valid, but you've also made additional changes which are not correct and cause your suggested fix to fail the smell test. While I understand your "no class variable" suggestion and there's merit to it, you've actually also removed the methods themselves, and that's a whole different ballgame. The methods should have stayed, and instead you should've altered them to return their value rather than store it in a class variable: private byte[] getEncodedData() {
return cryptoService.decryptRequest(encryptedRequest, byte[].class);
}
private EncryptionInfo getEncryptionInfo() {
return cryptoService.getEncryptionInfoForDefaultClient();
}
// and so on... I do agree with what you've done in the process method, but you should've been calling the private submethods rather than executing their bodies directly. public EncryptedResponse process(EncryptedRequest encryptedRequest) {
checkNotNull(encryptedRequest);
byte[] encodedData = getEncodedData();
EncryptionInfo encryptionInfo = getEncryptionInfo();
//and so on...
return cryptoService.encryptResponse(payloadOfResponse);
} You'd want that extra layer of abstraction, especially when you run into methods that need to be reused several times. Even if you don't currently reuse your methods , it's still a matter of good practice to already create submethods where relevant, even if only to aid code readability. Regardless of the local variable argument, I immediately noticed that your suggested fix is considerably less readable than the original. I do concede that wanton use of class variables also detracts from code readability, but not at first sight compared to you having stacked all the logic in a single (now long-winded) method. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/388052",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/330152/"
]
} |
388,435 | According to the accepted answer on " Rationale to prefer local variables over instance variables? ", variables should live in the smallest scope possible. Simplify the problem into my interpretation, it means we should refactor this kind of code: public class Main {
private A a;
private B b;
public ABResult getResult() {
getA();
getB();
return ABFactory.mix(a, b);
}
private void getA() {
a = SomeFactory.getA();
}
private void getB() {
b = SomeFactory.getB();
}
} into something like this: public class Main {
public ABResult getResult() {
A a = getA();
B b = getB();
return ABFactory.mix(a, b);
}
private A getA() {
return SomeFactory.getA();
}
private B getB() {
return SomeFactory.getB();
}
} but according to the "spirit" of "variables should live in the smallest scope possible", doesn't "never have variables" have an even smaller scope than "have variables"? So I think the version above should be refactored: public class Main {
public ABResult getResult() {
return ABFactory.mix(getA(), getB());
}
private A getA() {
return SomeFactory.getA();
}
private B getB() {
return SomeFactory.getB();
}
} so that getResult() doesn't have any local variables at all. Is that true? | No. There are several reasons why: Variables with meaningful names can make code easier to comprehend. Breaking up complex formulas into smaller steps can make the code easier to read. Caching. Holding references to objects so that they can be used more than once. And so on. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/388435",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/248528/"
]
} |
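As a small illustration of the first bullet points in the answer above, a hedged C# sketch (Report and LoadReport are invented names) of what a named intermediate variable buys you:
public class Report
{
    public bool IsValid { get; set; }
    public decimal Total { get; set; }
}

public static class ReportExample
{
    private static Report LoadReport() => new Report { IsValid = true, Total = 42m };

    // No locals: LoadReport() runs twice, and the duplicated expression hides that cost.
    public static decimal GetTotalInlined()
    {
        return LoadReport().IsValid ? LoadReport().Total : 0m;
    }

    // Named local: one call, one meaningful name, one value to inspect in a debugger or reuse later.
    public static decimal GetTotalWithLocal()
    {
        Report report = LoadReport();
        return report.IsValid ? report.Total : 0m;
    }
}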
388,461 | Another way of asking this is; why do programs tend to be monolithic? I am thinking of something like an animation package like Maya, which people use for various different workflows. If the animation and modelling capabilities were split into their own separate application and developed separately, with files being passed between them, would they not be easier to maintain? | Yes. Generally two smaller less complex applications are much easier to maintain than a single large one. However, you get a new type of bug when the applications all work together to achieve a goal. In order to get them to work together they have to exchange messages and this orchestration can go wrong in various ways, even though every application might function perfectly. Having a million tiny applications has its own special problems. A monolithic application is really the default option you end up with when you add more and more features to a single application. It's the easiest approach when you consider each feature on its own. It's only once it has grown large that you can look at the whole and say "you know what, this would work better if we separated out X and Y". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/388461",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/330747/"
]
} |
388,802 | Moderator note This question has already had seventeen answers posted to it. Before you post a new answer, please read the existing answers and make sure your viewpoint isn't already adequately covered. I've been following some of the practices recommended in Robert Martin's "Clean Code" book, especially the ones that apply to the type of software I work with and the ones that make sense to me (I don't follow it as dogma). One side effect I've noticed, however, is that the "clean" code I write, is more code than if I didn't follow some practices. The specific practices that lead to this are: Encapsulating conditionals So instead of if(contact.email != null && contact.emails.contains('@') I could write a small method like this private Boolean isEmailValid(String email){...} Replacing an inline comment with another private method, so that the method name describes itself rather than having an inline comment on top of it A class should only have one reason to change And a few others. The point being, that what could be a method of 30 lines, ends up being a class, because of the tiny methods that replace comments and encapsulate conditionals, etc. When you realize you have so many methods, then it "makes sense" to put all the functionality into one class, when really it should've been a method. I'm aware that any practice taken to the extreme can be harmful. The concrete question I'm looking an answer for is: Is this an acceptable byproduct of writing clean code? If so, what are some arguments I can use to justify the fact that more LOC have been written? The organization is not concerned specifically about more LOC, but more LOC can result in very big classes (that again, could be replaced with a long method without a bunch of use-once helper functions for readability sake). When you see a class that is big enough, it gives the impression that the class is busy enough, and that its responsibility has been concluded. You could, therefore, end up creating more classes to achieve other pieces of functionality. The result is then a lot of classes, all doing "one thing" with the aid of many small helper methods. THIS is the specific concern...those classes could be a single class that still achieves "one thing", without the aid of many small methods. It could be a single class with maybe 3 or 4 methods and some comments. | ... we are a very small team supporting a relatively large and undocumented code base (that we inherited), so some developers/managers see value in writing less code to get things done so that we have less code to maintain These folk have correctly identified something: they want the code to be easier to maintain. Where they've gone wrong though is assuming that the less code there is, the easier it is to maintain. For code to be easy to maintain, then it needs to be easy to change. By far the easiest way to achieve easy-to-change code is to have a full set of automated tests for it that will fail if your change is a breaking one. Tests are code, so writing those tests is going to swell your code base. And that is a good thing. Secondly, in order to work out what needs changing, you code needs to be both easy to read and easy to reason about. Very terse code, shrunk in size just to keep the line count down is very unlikely to be easy to read. There's obviously a compromise to be struck as longer code will take longer to read. But if it's quicker to understand, then it's worth it. If it doesn't offer that benefit, then that verbosity stops being a benefit. 
But if longer code improves readability then again this is a good thing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/388802",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/331234/"
]
} |
388,825 | My team is currently making some changes to our solution structure. Before the change we basically had a single solution file with about 40 different projects. Most of these projects are libraries that are consumed by a handful of applications. To improve the load and build times, we decided to split the solution into several smaller solutions. We started with 2 of the biggest libraries, let's call them Core and DAL. Both got their own solution with a single project. We created 2 build pipelines for both projects: one for pull-requests and one for building a NuGet package after a merge to the main branch. We have a policy that each PR should have 2 reviews and a successful build/test result in its pipeline. Now let's say we have a hypothetical bug in the DAL (v1.0.0). Previously we would fix the bug in the library, run some applications locally (which would call the updated DLL) and, if it worked, commit the change, start a PR which would kick off the PR pipeline and pretty much be done with it. But in the new scenario, in order to test the change, the developer has to publish a local NuGet package, let's say 1.0.1-pre. Then he updates the package in one or more applications and, if everything is fine, he publishes a local 1.0.1 package, updates the consuming applications and makes a single PR for the bug-fix and the version increase in the consuming applications. But here's the kicker: the PR pipeline for those consuming applications will fail, because the new package is not yet published to our feed. In order to publish the package, the PR has to be completed, but that's not possible because the changes to the library are in the same PR as the version increases to the apps. We're at the point of instructing all devs to not include changes to packages with other changes in the same PR (even if they're very related) because of this issue, but as a last resort I'm asking here if there's a better approach. | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/388825",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/320517/"
]
} |
389,380 | I'm currently working on a bigger project which unfortunately has some files where software quality guidelines where not always followed. This includes big files (read 2000-4000 lines) which clearly contain multiple distinct functionalities. Now I want to refactor these big files into multiple small ones. The issue is, since they are so big, multiple people (me included) on different branches are working on these files. So I can't really branch from develop and refactor, since merging these refactorings with other peoples' changes will become difficult. We could of course require everyone to merge back to develop, "freeze" the files (i.e. don't allow anyone to edit them anymore), refactor, and then "unfreeze". But this is not really good either, since this would require everyone to basically stop their work on these files until refactoring is done. So is there a way to refactor, don't require anyone else to stop working (for to long) or merge back their feature branches to develop? | You have correctly understood that this is not so much a technical as a social problem: if you want to avoid excessive merge conflicts, the team needs to collaborate in a way that avoids these conflicts. This is part of a larger issue with Git, in that branching is very easy but merging can still take a lot of effort. Development teams tend to launch a lot of branches and are then surprised that merging them is difficult, possibly because they are trying to emulate the Git Flow without understanding its context. The general rule to fast and easy merges is to prevent big differences from accumulating, in particular that feature branches should be very short lived (hours or days, not months). A development team that is able to rapidly integrate their changes will see fewer merge conflicts. If some code isn't yet production ready, it might be possible to integrate it but deactivate it through a feature flag. As soon as the code has been integrated into your master branch, it becomes accessible to the kind of refactoring you are trying to do. That might be too much for your immediate problem. But it may be feasible to ask colleagues to merge their changes that impact this file until the end of the week so that you can perform the refactoring. If they wait longer, they'll have to deal with the merge conflicts themselves. That's not impossible, it's just avoidable work. You may also want to prevent breaking large swaths of dependent code and only make API-compatible changes. For example, if you want to extract some functionality into a separate module: Extract the functionality into a separate module. Change the old functions to forward their calls to the new API. Over time, port dependent code to the new API. Finally, you can delete the old functions. (Repeat for the next bunch of functionality) This multi-step process can avoid many merge conflicts. In particular, there will only be conflicts if someone else is also changing the functionality you extracted. The cost of this approach is that it's much slower than changing everything at once, and that you temporarily have two duplicate APIs. This isn't so bad until something urgent interrupts this refactoring, the duplication is forgotten or deprioritized, and you end up with a bunch of tech debt. But in the end, any solution will require you to coordinate with your team. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/389380",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/282088/"
]
} |
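A hedged C# sketch of the forwarding step described in the answer above (all names are invented): the extracted module holds the real implementation, while the old entry point becomes a thin delegating wrapper until every caller has been ported:
using System;

// Step 1: the extracted functionality lives in its own, smaller class.
public static class PriceCalculator
{
    public static decimal Calculate(decimal net, decimal taxRate) => net * (1 + taxRate);
}

// Step 2: the old method on the big legacy class only forwards, so existing callers keep
// compiling and merge conflicts stay small.
public static class LegacyOrderService
{
    [Obsolete("Use PriceCalculator.Calculate instead; this forwarder disappears once all callers are ported.")]
    public static decimal CalculatePrice(decimal net, decimal taxRate) =>
        PriceCalculator.Calculate(net, taxRate);
}
Steps 3 and 4 are then mechanical: port callers a few at a time in short-lived branches, and delete the forwarder once a search shows no remaining usages.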
389,445 | I have not found many resources about this: I was wondering if it's possible/a good idea to be able to write asynchronous code in a synchronous way. For example, here is some JavaScript code which retrieves the number of users stored in a database (an asynchronous operation): getNbOfUsers(function (nbOfUsers) { console.log(nbOfUsers) }); It would be nice to be able to write something like this: const nbOfUsers = getNbOfUsers();
console.log(nbOfUsers); And so the compiler would automatically take care of waiting for the response and then execute console.log. It will always wait for the asynchronous operations to complete before the results have to be used anywhere else. We would make so much less use of callbacks, promises, async/await or whatever, and would never have to worry whether the result of an operation is available immediately or not. Errors would still be manageable (did nbOfUsers get an integer or an error?) using try/catch, or something like optionals as in the Swift language. Is it possible? It may be a terrible idea/a utopia... I don't know. | Async/await is exactly that automated management that you propose, albeit with two extra keywords. Why are they important? Aside from backwards compatibility? Without explicit points where a coroutine may be suspended and resumed, we would need a type system to detect where an awaitable value must be awaited. Many programming languages do not have such a type system. By making awaiting a value explicit, we can also pass awaitable values around as first-class objects: promises. This can be super useful when writing higher-order code. Async code has very deep effects on the execution model of a language, similar to the absence or presence of exceptions in the language. In particular, an async function can only be awaited by async functions. This affects all calling functions! But what if we change a function from non-async to async at the end of this dependency chain? This would be a backwards-incompatible change … unless all functions are async and every function call is awaited by default. And that is highly undesirable because it has very bad performance implications. You wouldn't be able to simply return cheap values. Every function call would become a lot more expensive. Async is great, but some kind of implicit async won't work in reality. Pure functional languages like Haskell have a bit of an escape hatch because execution order is largely unspecified and unobservable. Or phrased differently: any specific order of operations must be explicitly encoded. That can be rather cumbersome for real-world programs, especially those I/O-heavy programs for which async code is a very good fit. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/389445",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/332219/"
]
} |
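A short C# illustration of the two points in the answer above — explicit suspension points and awaitables as first-class values (the method bodies are placeholders):
using System;
using System.Threading.Tasks;

public static class UserStats
{
    private static async Task<int> GetNbOfUsersAsync()
    {
        await Task.Delay(100);   // stands in for a database round-trip
        return 42;
    }

    public static async Task PrintReportAsync()
    {
        // 'await' marks the exact point where this method may be suspended and resumed.
        int nbOfUsers = await GetNbOfUsersAsync();
        Console.WriteLine(nbOfUsers);

        // Because the awaitable is a first-class value, it can be stored, passed around, or
        // combined before anything is awaited, something an implicit "await everything"
        // model could not express.
        Task<int> first = GetNbOfUsersAsync();
        Task<int> second = GetNbOfUsersAsync();
        int[] both = await Task.WhenAll(first, second);
        Console.WriteLine(both[0] + both[1]);
    }
}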
389,503 | Let's say (just for the sake of example) I have three classes that implement IShape. One is a Square with a constructor of Square(int length). Second is a Triangle with a constructor of Triangle(int base, int height). Third is a Circle with a constructor of Circle(double radius). Considering all the classes share the same interface, my mind goes to the factory pattern as a creational pattern to use. But, the factory method would be awkward as it must provide parameters for these various constructors - for instance: IShape CreateShape(int length, int base, int height, double radius)
{
...
return new Circle(radius);
...
return new Triangle(base, height);
...
return new Square(length);
} This factory method seems quite awkward. Is this where an abstract factory or some other design pattern comes into play as a superior approach? | You have a solution looking for a problem, that is why you run into trouble. A factory method is not an end in itself, it is a means to an end. So you need to start identifying the problem you want to solve first , which means you need a use case for constructing those objects, providing you with the necessary context. Like: you have an external data source like a file stream or database with object descriptions you want a factory to create IShape objects from this data source (so having one and only one place in code to modify in case the list of shapes gets extended) In the "file stream" context, for example, a CreateShape factory method could probably get a string as a parameter, containing one object description (maybe some CSV string, a JSON string or an XML snippet), and the requirement would be to parse that string to create the right object: IShape CreateShape(string shapeDescription)
{
switch(getShapeType(shapeDescription))
{
case "Circle":
radius=parseRadius(shapeDescription);
return new Circle(radius);
case "Triangle":
base=parseBase(shapeDescription);
height=parseHeight(shapeDescription);
return new Triangle(base, height);
...
} Now the parameter list of this method does not look quite so awkward any more, I guess? Other potential use cases: shapes are created based on user inputs: the factory gets part of the user input data as a parameter creating shapes based on some dynamic business logic You also need to take other, non-functional requirements into account: do you want your factory to assist in decoupling from that external data source? For example, for unit testing? Then make it not just a method, make it a class with an interface, which can be mocked out. do you want the factory itself to be a reusable component, following the Open/Closed principle, where the code does not have to be touched even when new shapes should be added? Then you need to build it in a more generic way, either using reflection, generics, the Prototype pattern , or the Strategy pattern . And yes, for certain use cases you will probably need no factory method at all. So in short: clarify your requirements first . If you don't know the context for using the factory method, you don't need it yet. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/389503",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/84907/"
]
} |
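To sketch the answer's closing suggestion of a more generic, Open/Closed-friendly factory, a hedged C# example built around a registration dictionary (the shape classes and parsing helpers are placeholders, not the asker's real code):
using System;
using System.Collections.Generic;

public interface IShape { }
public class Circle : IShape { public Circle(double radius) { } }
public class Square : IShape { public Square(int length) { } }

public static class ShapeFactory
{
    // Each shape type contributes its own parser; CreateShape itself never needs to change.
    private static readonly Dictionary<string, Func<string, IShape>> Creators =
        new Dictionary<string, Func<string, IShape>>
        {
            ["Circle"] = description => new Circle(double.Parse(description)),
            ["Square"] = description => new Square(int.Parse(description)),
        };

    // New shapes (e.g. Triangle) can be registered from outside without editing this class.
    public static void Register(string shapeType, Func<string, IShape> creator) =>
        Creators[shapeType] = creator;

    public static IShape CreateShape(string shapeType, string description) =>
        Creators[shapeType](description);
}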
389,638 | I have a web application. I don't believe the technology is important. The structure is an N-tier application, shown in the image on the left. There are 3 layers. UI (MVC pattern), Business Logic Layer (BLL) and Data Access Layer (DAL) The problem I have is my BLL is massive as it has the logic and paths through the application events call. A typical flow through the application could be: Event fired in UI, traverse to a method in the BLL, perform logic (possibly in multiple parts of the BLL), eventually to the DAL, back to the BLL (where likely more logic) and then return some value to the UI. The BLL in this example is very busy and I'm thinking how to split this out. I also have the logic and the objects combined which I don't like. The version on the right is my effort. The Logic is still how the application flows between UI and DAL, but there are likely no properties... Only methods (the majority of classes in this layer could possibly be static as they don't store any state). The Poco layer is where classes exist which do have properties (such as a Person class where there would be name, age, height etc). These would have nothing to do with the flow of the application, they only store state. The flow could be: Even triggered from UI and passes some data to the UI layer controller (MVC). This translates the raw data and converts it into the poco model. The poco model is then passed into the Logic layer (which was the BLL) and eventually to the command query layer, potentially manipulated on the way. The Command query layer converts the POCO to a database object (which are nearly the same thing, but one is designed for persistence, the other for the front end). The item is stored and a database object is returned to the Command Query layer. It is then converted into a POCO, where it returns to the Logic layer, potentially processed further and then finally, back to the UI The Shared logic and interfaces is where we may have persistent data, such as MaxNumberOf_X and TotalAllowed_X and all the interfaces. Both the shared logic/interfaces and DAL are the "base" of the architecture. These know nothing about the outside world. Everything knows about poco other than the shared logic/interfaces and DAL. The flow is still very similar to the first example, but it's made each layer more responsible for 1 thing (be it state, flow or anything else)... but am I breaking OOP with this approach? An example to demo the Logic and Poco could be: public class LogicClass
{
private ICommandQueryObject cmdQuery;
public PocoA Method1(PocoB pocoB)
{
return cmdQuery.Save(pocoB);
}
/*This has no state objects, only ways to communicate with other
layers such as the cmdQuery. Everything else is just function
calls to allow flow via the program */
public PocoA Method2(PocoB pocoB)
{
pocoB.UpdateState("world");
return Method1(pocoB);
}
}
public struct PocoX
{
public string DataA {get;set;}
public int DataB {get;set;}
public int DataC {get;set;}
/*This simply returns something that is part of this class.
Everything is self-contained to this class. It doesn't call
trying to directly communicate with databases etc*/
public int GetValue()
{
return DataB * DataC;
}
/*This simply sets something that is part of this class.
Everything is self-contained to this class.
It doesn't call trying to directly communicate with databases etc*/
public void UpdateState(string input)
{
DataA += input;
}
} | Yes, you are very likely breaking core OOP concepts. However don't feel bad, people do this all the time, it doesn't mean that your architecture is "wrong". I would say it is probably less maintainable than a proper OO design, but this is rather subjective and not your question anyway. ( Here is an article of mine criticizing the n-tier architecture in general). Reasoning : The most basic concept of OOP is that data and logic form a single unit (an object). Although this is a very simplistic and mechanical statement, even so, it is not really followed in your design (if I understand you correctly). You are quite clearly separating most of the data from most of the logic. Having stateless (static-like) methods for example is called "procedures", and are generally antithetic to OOP. There are of course always exceptions, but this design violates these things as a rule. Again, I would like to stress "violates OOP" != "wrong", so this is not necessarily a value judgement. It all depends on your architecture constraints, maintainability use-cases, requirements, etc. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/389638",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/119594/"
]
} |
389,972 | The team I'm in creates components that can be used by the company's partners to integrate with our platform. As such, I agree we should take extreme care when introducing (third-party) dependencies. Currently we have no third-party dependencies and we have to stay on the lowest API level of the framework. Some examples: We are forced to stay on the lowest API level of the framework (.NET Standard). The reasoning behind this is that a new platform could one day arrive that only supports that very low API level. We have implemented our own components for (de)serializing JSON and are in the process of doing the same for JWT. This is available on a higher level of the framework API. We have implemented a wrapper around the HTTP framework of the standard library, because we don't want to take a dependency on the HTTP implementation of the standard library. All of the code for mapping to/from XML is written "by hand", again for the same reason. I feel we are taking it too far. I'm wondering how to deal with this since this I think this greatly impacts our velocity. | ... We are forced to stay on the lowest API level of the framework (.NET Standard) … This to me highlights the fact that, not only are you potentially restricting yourselves too much, you may also be heading for a nasty fall with your approach. .NET Standard is not, and never will be " the lowest API level of the framework ". The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight. Depending on which version of .NET Standard you are targeting, you can end up with a very rich set of APIs that are compatible with .NET Framework, .NET Core , Mono , and Xamarin . And there are many third-party libraries that are .NET Standard compatible that will therefore work on all these platforms. Then there is .NET Standard 2.1, likely to be released in the Autumn of 2019. It will be supported by .NET Core, Mono and Xamarin. It will not be supported by any version of the .NET Framework , at least for the foreseeable future, and quite likely always. So in the near future, far from being " the lowest API level of the framework ", .NET Standard will supersede the framework and have APIs that aren't supported by the latter. So be very careful with " The reasoning behind this is that a new platform could one day arrive that only supports that very low API level " as it's quite likely that new platforms will in fact support a higher level API than the old framework does. Then there's the issue of third-party libraries. JSON.NET for example is compatible with .NET Standard. Any library compatible with .NET Standard is guaranteed - API-wise - to work with all .NET implementations that are compatible with that version of .NET Standard. So you achieve no additional compatibility by not using it and creating your JSON library. You simply create more work for yourselves and incur unnecessary costs for your company. So yes, you definitely are taking this too far in my view. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/389972",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/315635/"
]
} |
389,981 | I am diving into domain-driven design (DDD), and as I go deeper into it there are some things that I don't get. As I understand it, a main point is to split the Domain Logic (Business Logic) from the Infrastructure (DB, File System, etc.). What I am wondering is: what happens when I have very complex queries, like a Material Resource Calculation Query? In that kind of query you work with heavy set operations, the kind of thing that SQL was designed for. Doing those calculations inside the Domain Layer, working with a lot of sets there, is like throwing away the SQL technology. Doing these calculations in the infrastructure can't happen either, because the DDD pattern allows for changes in the infrastructure without changing the Domain Layer, and knowing that MongoDB doesn't have the same capabilities as e.g. SQL Server, that can't happen. Is that a pitfall of the DDD pattern? | These days, you are likely to see reads (queries) handled differently than writes (commands). In a system with a complicated query, the query itself is unlikely to pass through the domain model (which is primarily responsible for maintaining the consistency of writes). You are absolutely right that we should render unto SQL that which is SQL. So we'll design a data model optimized around the reads, and a query of that data model will usually take a code path that does not include the domain model (with the possible exception of some input validation -- ensuring that parameters in the query are reasonable). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/389981",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/333103/"
]
} |
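A minimal sketch of the read path the answer above describes, assuming plain ADO.NET and an invented schema; the query talks to SQL directly and returns a read model shaped for the screen, never touching the domain entities:
using System.Collections.Generic;
using System.Data.SqlClient;

// A read model shaped for the report, not a domain entity.
public class MaterialRequirementRow
{
    public string MaterialCode { get; set; }
    public decimal RequiredQuantity { get; set; }
}

public class MaterialRequirementQuery
{
    private readonly string _connectionString;
    public MaterialRequirementQuery(string connectionString) => _connectionString = connectionString;

    public List<MaterialRequirementRow> ForProject(int projectId)
    {
        // The heavy set-based work stays in SQL; only a validated parameter crosses the boundary.
        const string sql = @"select m.Code, sum(b.Quantity * o.Amount)
                             from BomLines b
                             join Materials m on m.Id = b.MaterialId
                             join Orders o on o.Id = b.OrderId
                             where o.ProjectId = @projectId
                             group by m.Code";

        var rows = new List<MaterialRequirementRow>();
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@projectId", projectId);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    rows.Add(new MaterialRequirementRow
                    {
                        MaterialCode = reader.GetString(0),
                        RequiredQuantity = reader.GetDecimal(1)
                    });
                }
            }
        }
        return rows;
    }
}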
390,192 | This question is about how many bits are required to store a range. Or put another way, for a given number of bits, what is the maximum range that can be stored and how? Imagine we want to store a sub-range within the range 0-255. So for example, 45-74. We can store the example above as two unsigned bytes, but it strikes me that there must be some redundancy of information there. We know that the second value is larger than the first, so in the case that the first value is large, fewer bits are required for the second value, and in the case that the second value is large, fewer bits are required for the first. I suspect that any compression technique would yield a marginal result, so it may be a better question to ask "what is the maximum range that could be stored in one byte?". This should be larger than what is achievable by storing the two numbers separately. Are there any standard algorithms for doing this kind of thing? | Just count the number of possible ranges. There are 256 ranges with lower bound 0 (0-0, 0-1, ... 0-254, 0-255), 255 ranges with lower bound 1, ... and finally 1 range with lower bound 255 (255-255). So the total number is (256 + 255 + ... + 1) = 257 * 128 = 32,896. As this is slightly higher than 2 15 = 32,768, you'll still need at least 16 bits (2 bytes) to store this information. In general, for numbers from 0 up to n-1, the number of possible ranges is n*(n+1)/2. This is less than 256 if n is 22 or less: n = 22 gives 22*23/2 = 253 possibilities. So one byte suffices for sub-ranges of 0-21 . Another way to look at the problem is the following: storing a pair of integers in the range 0 to n-1 is almost the same as storing a subrange of 0-(n-1) plus a single bit which determines if the first number is lower or higher than the second one. (The difference comes from the case when both integers are equal, but this chance becomes increasingly smaller as n grows larger.) That's why you can only save about a single bit with this technique, and probably the main reason why it is rarely used. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/390192",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/180470/"
]
} |
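The counting in the answer above is easy to check mechanically; a throwaway C# verification (nothing here beyond the arithmetic the answer already states):
using System;

public static class RangeCount
{
    public static void Main()
    {
        // Brute-force count of ranges low..high inside 0..255.
        int count = 0;
        for (int low = 0; low <= 255; low++)
            for (int high = low; high <= 255; high++)
                count++;
        Console.WriteLine(count);   // 32896 = 256 * 257 / 2, just over 2^15

        // Largest n with n * (n + 1) / 2 <= 256: sub-ranges of 0..(n-1) fit in one byte.
        int n = 1;
        while ((n + 1) * (n + 2) / 2 <= 256) n++;
        Console.WriteLine(n);       // 22, matching the answer
    }
}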
390,266 | Some base points: Python method calls are "expensive" due to its interpreted nature . In theory, if your code is simple enough, breaking down Python code has negative impact besides readability and reuse ( which is a big gain for developers, not so much for users ). The single responsibility principle (SRP) keeps code readable, is easier to test and maintain. The project has a special kind of background where we want readable code, tests, and time performance. For instance, code like this which invokes several methods (x4) is slower than the following one which is just one. from operator import add
class Vector:
    def __init__(self, list_of_3):
        self.coordinates = list_of_3
    def move(self, movement):
        self.coordinates = list(map(add, self.coordinates, movement))
        return self.coordinates
    def revert(self):
        self.coordinates = self.coordinates[::-1]
        return self.coordinates
    def get_coordinates(self):
        return self.coordinates
## Operation with one vector
vec3 = Vector([1,2,3])
vec3.move([1,1,1])
vec3.revert()
vec3.get_coordinates() In comparison to this: from operator import add
def move_and_revert_and_return(vector,movement):
    return list(map(add, vector, movement))[::-1]
move_and_revert_and_return([1,2,3],[1,1,1]) If I am to parallelize something such as that, it is pretty objective I lose performance. Mind that is just an example; my project has several mini routines with math such as that - While it is much easier to work with, our profilers are disliking it. How and where do we embrace the SRP without compromising performance in Python, as its inherent implementation directly impacts it? Are there workarounds, like some sort of pre-processor that puts things in-line for release? Or is Python simply poor at handling code breakdown altogether? | Many potential performance concerns are not really a problem in practice. The issue you raise may be one of them. In the vernacular, we call worrying about those problems without proof that they are actual problems premature optimization. If you are writing a front-end for a web service, your performance is not going to be significantly affected by function calls, because the cost of sending data over a network far exceeds the time it takes to make a method call. If you are writing a tight loop that refreshes a video screen sixty times a second, then it might matter. But at that point, I claim you have larger problems if you're trying to use Python to do that, a job for which Python is probably not well-suited. As always, the way you find out is to measure. Run a performance profiler or some timers over your code. See if it's a real problem in practice. The Single Responsibility Principle is not a law or mandate; it is a guideline or principle. Software design is always about trade-offs; there are no absolutes. It is not uncommon to trade off readability and/or maintainability for speed, so you may have to sacrifice SRP on the altar of performance. But don't make that tradeoff unless you know you have a performance problem. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/390266",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/327923/"
]
} |
390,755 | I understand that directly instantiating dependencies inside a class is considered bad practice. This makes sense, as doing so tightly couples everything, which in turn makes testing very hard. Almost all the frameworks I've come across seem to favour dependency injection with a container over using service locators. Both of them seem to achieve the same thing by allowing the programmer to specify what object should be returned when a class requires a dependency. What's the difference between the two? Why would I choose one over the other? | When the object itself is responsible for requesting its dependencies, as opposed to accepting them through a constructor, it's hiding some essential information. It's only mildly better than the very tightly-coupled case of using new to instantiate its dependencies. It reduces coupling because you can in fact change the dependencies it gets, but it still has a dependency it can't shake: the service locator. That becomes the thing that everything is dependent on. A container that supplies dependencies through constructor arguments gives the most clarity. We see right up front that an object needs both an AccountRepository and a PasswordStrengthEvaluator. When using a service locator, that information is less immediately apparent. You'd see right away a case where an object has, oh, 17 dependencies, and say to yourself, "Hmm, that seems like a lot. What's going on in there?" Calls to a service locator can be spread around the various methods, and hide behind conditional logic, and you might not realize you have created a "God class" -- one that does everything. Maybe that class could be refactored into 3 smaller classes that are more focused, and hence more testable. Now consider testing. If an object uses a service locator to get its dependencies, your test framework will also need a service locator. In a test, you'll configure the service locator to supply the dependencies to the object under test -- maybe a FakeAccountRepository and a VeryForgivingPasswordStrengthEvaluator, and then run the test. But that's more work than specifying dependencies in the object's constructor. And your test framework also becomes dependent on the service locator. It's another thing you have to configure in every test, which makes writing tests less attractive. Look up "Service Locator is an Anti-Pattern" for Mark Seemann's article about it. If you're in the .Net world, get his book. It's very good. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/390755",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/334425/"
]
} |
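A compact C# sketch of the contrast drawn in the answer above, reusing its AccountRepository / PasswordStrengthEvaluator examples (the interfaces and the locator are illustrative stand-ins, not a specific framework):
public interface IAccountRepository { }
public interface IPasswordStrengthEvaluator { }

// Constructor injection: the dependencies are visible in the signature,
// and a test can pass fakes without any shared infrastructure.
public class RegistrationService
{
    private readonly IAccountRepository _accounts;
    private readonly IPasswordStrengthEvaluator _passwords;

    public RegistrationService(IAccountRepository accounts, IPasswordStrengthEvaluator passwords)
    {
        _accounts = accounts;
        _passwords = passwords;
    }
}

// Service locator: the same two dependencies exist, but they hide inside the methods,
// and every consumer (including every test) now also depends on the locator itself.
public static class ServiceLocator
{
    public static T Resolve<T>() => default(T);   // stand-in for a real container lookup
}

public class RegistrationServiceWithLocator
{
    public void Register()
    {
        var accounts = ServiceLocator.Resolve<IAccountRepository>();
        var passwords = ServiceLocator.Resolve<IPasswordStrengthEvaluator>();
        // ...
    }
}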
391,228 | Everyone knows that new developers write long functions. As you progress, you get better at breaking your code into smaller pieces and experience teaches you the value of doing so. Enter SQL. Yes, the SQL way of thinking about code is different from the procedural way of thinking about code, but this principle seems just as applicable. Let’s say I have a query that takes the form: select * from subQuery1 inner join subQuerry2 left join subquerry3 left join join subQuery4 Using some IDs or dates etc. Those subqueries are complex themselves and may contain subqueries of their own. In no other programming context would I think that the logic for complex subqueries 1-4 belongs in line with my parent query that joins them all. It seems so straightforward that those subqueries should be defined as views, just like they would be functions if I were writing procedural code. So why isn’t that common practice? Why do people so often write these long monolithic SQL queries? Why doesn’t SQL encourage extensive view usage just like procedural programming encourages extensive function usage. (In many enterprise environments, creating views isn’t even something that’s easily done. There are requests and approvals required. Imagine if other types of programmers had to submit a request each time they created a function!) I’ve thought of three possible answers: This is already common and I’m working with inexperienced people Experienced programmers don’t write complex SQL because they prefer to solve hard data processing problems with procedural code Something else | I think the main problem is that not all databases support Common Table Expressions. My employer uses DB/2 for a great many things. The latest versions of it support CTEs, such that I'm able to do things like: with custs as (
select acct# as accountNumber, cfname as firstName, clname as lastName
from wrdCsts
where -- various criteria
)
, accounts as (
select acct# as accountNumber, crBal as currentBalance
from crzyAcctTbl
)
select firstName, lastName, currentBalance
from custs
inner join accounts on custs.accountNumber = accounts.accountNumber The result is that we can have heavily abbreviated table / field names and I'm essentially creating temp views, with more legible names, which I can then use. Sure, the query gets longer. But the result is that I can write something which is pretty clearly separated (using CTEs the way you'd use functions to get DRY) and end up with code that's quite legible. And because I'm able to break out my subqueries, and have one subquery reference another, it's not all "inline." I have, on occasion, written one CTE, then had four other CTEs all reference it, then had the main query union the results of those last four. This can be done with: DB/2 PostGreSQL Oracle MS SQL Server MySQL (latest version; still kinda new) probably others But it goes a LONG way toward making the code cleaner, more legible, more DRY. I've developed a "standard library" of CTEs that I can plug-in to various queries, getting me off to a flying start on my new query. Some of them are starting to be embraced by other devs in my organization, too. In time, it may make sense to turn some of these into views, such that this "standard library" is available without needing to copy / paste. But my CTEs end up getting tweaked, ever so slightly, for various needs that I've not been able to have a single CTE get used SO WIDELY, without mods, that it might be worth creating a view. It would seem that part of your gripe is "why don't I know about CTEs?" or "why doesn't my DB support CTEs?" As for updates ... yeah, you can use CTEs but, in my experience, you have to use them inside the set clause AND in the where clause. It would be nice if you could define one or more ahead of the whole update statement and then just have the "main query" parts in the set / where clauses but it doesn't work that way. And there's no avoiding obscure table / field names on the table you're updating. You can use CTEs for deletes. It may take multiple CTEs to determine the PK / FK values for records you want to delete from that table. Again, you can't avoid obscure table / field names on the table you're modifying. Insomuch as you can do a select into an insert, you can use CTEs for inserts. As always, you may be dealing with obscure table / field names on the table you're modifying. SQL does NOT let you create the equivalent of a domain object, wrapping a table, with getters / setters. For that, you will need to use an ORM of some kind, along with a more procedural / OO programming language. I've written things of this nature in Java / Hibernate. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/391228",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/150257/"
]
} |
391,428 | I am not looking for an opinion about semantics but simply for a case where having getters sensibly used is an actual impediment. Maybe it throws me into a never-ending spiral of relying on them, maybe the alternative is cleaner and handles getters automatically, etc. Something concrete. I've heard all the arguments, I've heard that they're bad because they force you into treating objects as data sources, that they violate an object's "pure state" of "don't give out too much but be prepared to accept a lot". But absolutely no sensible reason for why a getData is a bad thing, in fact, a few people argued that it's a lot about semantics, getters as fine per-se, but just don't name them getX , to me, this is at least funny. What is one thing, without opinions, that will break if I use getters sensibly and for data that clearly the object's integrity doesn't break if it puts it out? Of course that allowing a getter for a string that's used to encrypt something is beyond dumb, but I'm talking about data that your system needs to function. Maybe your data is pulled through a Provider from the object, but, still, the object still needs to allow the Provider to do a $provider[$object]->getData , there's no way around it. Why I'm asking: To me, getters, when used sensibly and on data that is treated as "safe" are god-sent, 99% of my getters are used to identify the object, as in, I ask, through code Object, what is your name? Object, what is your identifier? , anyone working with an object should know these things about an object, because nearly everything about programming is identity and who else knows better what it is than the object itself? So I fail to see any real issues unless you're a purist. I've looked at all the StackOverflow questions about "why getters / setters" are bad and though I agree that setters are really bad in 99% of the cases, getters don't have to be treated the same just because they rhyme. A setter will compromise your object's identity and make it very hard to debug who's changing the data, but a getter is doing nothing. | You can't write good code without getters. The reason why isn't because getters don't break encapsulation, they do. It isn't because getters don't tempt people to not bother following OOP which would have them put methods with the data they act on. They do. No, you need getters because of boundaries. The ideas of encapsulation and keeping methods together with the data they act on simply don't work when you run into a boundary that keeps you from moving a method and so forces you to move data. It's really that simple. If you use getters when there is no boundary you end up having no real objects. Everything starts to tend to the procedural. Which works as well as it ever did. True OOP isn't something you can spread everywhere. It only works within those boundaries. Those boundaries aren't razor thin. They have code in them. That code can't be OOP. It can't be functional either. No this code has our ideals stripped from it so it can deal with harsh reality. Michael Feathers called this code fascia after that white connective tissue that holds sections of an orange together. This is a wonderful way to think about it. It explains why it's ok to have both kinds of code in the same code base. Without this perspective many new programmers cling to their ideals hard, then have their hearts broken and give up on these ideals when they hit their first boundary. The ideals only work in their proper place. 
Don't give up on them just because they don't work everywhere. Use them where they work. That place is the juicy part that the fascia protects. A simple example of a boundary is a collection. This holds something and has no idea what it is. How could a collection designer possibly move the behavioral functionality of the held object into the collection when they have no idea what it's going to be holding? You can't. You're up against a boundary. Which is why collections have getters. Now if you did know, you could move that behavior, and avoid moving state. When you do know, you should. You just don't always know. Some people just call this being pragmatic. And it is. But it's nice to know why we have to be pragmatic. You've expressed that you don't want to hear semantic arguments and seem to be advocating putting "sensible getters" everywhere. You're asking for this idea to be challenged. I think I can show the idea has problems with the way you've framed it. But I also think I know where you're coming from because I've been there. If you want getters everywhere look at Python. There is no private keyword. Yet Python does OOP just fine. How? They use a semantic trick. They name anything meant to be private with a leading underscore. You're even allowed to read from it provided you take responsibility for doing so. "We're all adults here", they often say. So what's the difference between that and just putting getters on everything in Java or C#? Sorry but it's semantics. Python's underscore convention clearly signals to you that you're poking around behind the employees-only door. Slap getters on everything and you lose that signal. With reflection you could have stripped off the private anyway and still not have lost the semantic signal. There simply isn't a structural argument to be made here. So what we're left with is the job of deciding where to hang the "employees only" sign. What should be considered private? You call that "sensible getters". As I've said, the best justification for a getter is a boundary that forces us away from our ideals. That shouldn't result in getters on everything. When it does result in a getter you should consider moving the behavior further into the juicy bit where you can protect it. This separation has given rise to a few terms. A Data Transfer Object, or DTO, holds no behavior. The only methods are getters and sometimes setters, sometimes a constructor. This name is unfortunate because it's not a true object at all. The getters and setters are really just debugging code that give you a place to set a breakpoint. If it wasn't for that need they'd just be a pile of public fields. In C++ we used to call them structs. The only difference they had from a C++ class was they defaulted to public. DTOs are nice because you can throw them over a boundary wall and keep your other methods safely in a nice juicy behavior object. A true object. With no getters to violate its encapsulation. My behavior objects may eat DTOs by using them as Parameter Objects. Sometimes I have to make a defensive copy to prevent shared mutable state. I don't spread mutable DTOs around inside the juicy part within the boundary. I encapsulate them. I hide them. And when I finally run into a new boundary I spin up a new DTO and throw it over the wall, thus making it someone else's problem. But you want to provide getters that express identity. Well congrats, you've found a boundary. Entities have an identity that goes beyond their reference. That is, beyond their memory address.
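Here is a minimal sketch of that kind of entity, one that hands out its identity but keeps its decisions to itself. The class and member names are assumed for illustration; they are not from the question:
public final class Account {
    private final long id;            // identity that outlives any particular object reference
    private long balanceCents;        // state the object keeps to itself

    public Account(long id, long openingBalanceCents) {
        this.id = id;
        this.balanceCents = openingBalanceCents;
    }

    public long getId() { return id; }           // a getter that expresses identity

    public void deposit(long amountCents) {      // behavior lives with the data it acts on
        balanceCents += amountCents;
    }

    public boolean canCover(long amountCents) {  // answer questions rather than expose raw state
        return balanceCents >= amountCents;
    }
}
Note there is no getBalanceCents() here; code that needs a decision about the balance asks the Account to make it.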
So that identity has to be stored somewhere. And something has to be able to refer to this thing by its identity. A getter that expresses identity is perfectly reasonable. A pile of code that uses that getter to make decisions that the Entity could have made itself is not. In the end it's not the existence of getters that is wrong. They are far better than public fields. What's bad is when they are used to pretend you're being Object Oriented when you're not. Getters are good. Being Object Oriented is good. Getters are not Object Oriented. Use getters to carve out a safe place to be Object Oriented. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/391428",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/311101/"
]
} |
391,654 | MethodA calls MethodB, which in turn calls MethodC. There is NO exception handling in MethodB or MethodC. But there is exception handling in MethodA. In MethodC an exception occurs. Now, that exception is bubbling up to MethodA, which handles it appropriately. What is wrong with this? In my mind, at some point a caller will execute MethodB or MethodC, and when exceptions do occur in those methods, what will be gained from handling exceptions inside those methods, which essentially is just a try/catch/finally block, instead of just letting them bubble up to the caller? The statement or consensus around exception handling is to throw when execution cannot continue due to just that - an exception. I get that. But why not catch the exception further up the chain instead of having try/catch blocks all the way down? I understand it when you need to free up resources. That's a different matter entirely. | As a general principle, don't catch exceptions unless you know what to do with them. If MethodC throws an exception, but MethodB has no useful way to handle it, then it should allow the exception to propagate up to MethodA. The only reasons why a method should have a catch and rethrow mechanism are: You want to convert one exception to a different one that is more meaningful to the caller above. You want to add extra information to the exception. You need a catch clause to clean up resources that would be leaked without one. Otherwise, catching exceptions at the wrong level tends to result in code that silently fails without providing any useful feedback to the calling code (and ultimately the user of the software). The alternative of catching an exception and then immediately rethrowing it is pointless. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/391654",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/333082/"
]
} |
391,804 | When you use tools like jsdocs , it generates static HTML files and its styles in your codebase based on the comments in your code. Should these files be checked into the Git repository or should they be ignored with .gitignore? | Absent any specific need, any file that can be built, recreated, constructed, or generated from build tools using other files checked into version control should not be checked in. When the file is needed, it can be (re)built from the other sources (and normally would be as some aspect of the build process). So those files should be ignored with .gitignore. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/391804",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38608/"
]
} |
391,987 | I recently came across an MSDN article about branching and merging and SCM: Branching and Merging Primer - Chris Birmele . In the article they say 'big bang merge' is a merging antipattern: Big Bang Merge — deferring branch merging to the end of the
development effort and attempting to merge all branches
simultaneously. I realized that this is very similar to what my company is doing with all of the development branches that are produced. I work at a very small company with one person acting as the final review + trunk merge authority. We have 5 developers (including me), each of us will be assigned a separate task/bug/project and we will each branch off the current trunk (subversion) and then perform the development work in our branch, test the results, write documentation if necessary, perform a peer review and feedback loop with the other developers, and then submit the branch for review + merge on our project management software. My boss, the sole authority on the trunk repository, will actually defer all of the reviews of branches until a single point in time where he will perform reviews on as much as he can, some branches will be thrown back for enhancements/fixes, some branches will merge right into trunk, some branches will be thrown back because of conflicts, etc. It's not uncommon for us to have 10-20 active branches sitting in the final review queue to be merged into trunk. We also frequently have to resolve conflicts in the final review and merge stage because two branches were created off the same trunk but modified the same piece of code. Usually we avoid this by just rebranching off trunk and re-applying our changes and resolving the conflicts then submitting the new branch for review (poor mans rebase). Some direct questions I have are: Are we exhibiting the very anti-pattern that was described as the 'big bang merge'? Are some of the problems we're seeing a result of this merge process? How can we improve this merge process without increasing the bottleneck on my boss? Edit: I doubt my boss will loosen his grip on the trunk repository, or allow other devs to merge to trunk. Not sure what his reasons for that are but I don't really plan on bringing the topic up because it's been brought up before and shot down rather quickly. I think they just don't trust us, which doesn't make sense because everything is tracked anyway. Any other insight into this situation would be appreciated. | Some suggestions: There is nothing wrong in having a lot of feature or bugfix branches as long as the changes done in each branch are small enough you can still handle the resulting merge conflicts in an effective manner. That should be your criterion if your way of working is ok, not some MSDN article. Whenever a branch is merged into trunk, the trunk should be merged into all open development branches ASAP. This would allow all people in the team to resolve merge conflicts in parallel in their own branch and so take some burden from the gatekeeper of the trunk. This would work way better if the gatekeeper would not wait until 10 branches are "ready for merging into trunk" - resolving merge conflicts from the last trunk integrations always needs some time for the team, so it is probably better to work in interwoven time intervals - one integration by the gatekeeper, one re-merge by the team, next integration by the gatekeeper, next re-merge by the team, and so on. To keep branches small, it might help to split larger features into several smaller tasks and develop each of those tasks in a branch of its own. If the feature is not production ready until all subtasks are implemented, hide it from production behind a feature toggle until all subtasks are completed. 
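To make the earlier point about re-merging trunk into every open branch concrete, here is a rough sketch of that step with Subversion; the working-copy path is assumed, not taken from the question:
cd ~/work/my-feature-branch        # working copy of one development branch
svn update                         # bring the working copy up to date first
svn merge ^/trunk                  # pull the latest trunk changes into the branch
# resolve any conflicts, build and run the tests, then
svn commit -m "Sync latest trunk into the feature branch"
Each developer doing this in their own branch is what spreads the conflict-resolution work across the team instead of piling it all on the gatekeeper.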
Sooner or later you will encounter refactoring tasks which affect many files in the code base - these have a high risk of causing a lot of merge conflicts with many branches. Those can be handled best by communicating them clearly in the team, and by making sure to handle them exactly as I wrote above: by integrating them first into all dev branches before reintegration, and by splitting them up into smaller sub-refactorings. For your current team size, having a single gatekeeper may still work. But if your team grows in size, there is no way around having a second gatekeeper (or more). Note I am not suggesting allowing everyone to merge into trunk, but that does not mean only your boss is capable of doing this. There are probably one or two senior devs who could be candidates for doing the gatekeeper's job, too. And even for your current team's size, a second gatekeeper could make it easier for your team to integrate into the trunk more often and earlier, or when your boss is not available. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/391987",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/324023/"
]
} |
391,999 | Why do languages demand catch blocks when they aren't needed? The compiler or parser complains with this code: try {
const utils = require("applicationutils");
} But it is OK with this code: try {
const utils = require("applicationutils");
} catch(e) {} I don't need the catch block. I'm using JavaScript if it matters. Update Example Code: // setting defaults ahead of try - no need for a catch block
var setting = 10;
var myRegEx = "/123/g";
var supportsRegEx2 = false;
try {
const utils = require("applicationutils");
setting = 20;
myRegEx = "/123/gm";
supportsRegEx2 = true;
} The long story I'm working in an browser like environment where new API's are introduced frequently. Some API's are silently introduced, no documentation but available. If I want to use a new API I can set a minimum-version flag in my manifest. But if I set a minimum then this excludes anyone before this version. I've received emails from users who have various reasons they are unable to upgrade; some are using previous versions simply because they haven't updated and others because of office politics. I've known businesses who the last time I've checked are still using IE6. A few times I've seen system requirements increased that excludes previous generation hardware. I could have found when an API was introduced and check against a version number or I could try to include the class so later I could check a supports flag or check if the class is not null. Since setting a minimum version would exclude a segment of the audience this way I could support the users who have not updated while still providing users who have updated access to the features using newer APIs. Approach 1: const system = require("system");
try {
const foo = require("foo");
}
function performSomeAction() {
if (supportsFoo) {
foo.bar();
}
} Approach 2: const system = require("system");
var supportsFoo = false;
try {
const foo = require("foo");
supportsFoo = true;
}
function start() {
if (foo) {
// do something
}
} In my cases I can't see a catch block being necessary. Semantics: For my specific case: Try to import a class using require() and
set a constant or variable to that class / api
If an error is thrown skip any other code in the try block and continue
If no error the constant or variable will not be null
In the constructor check for not-null and enable features for use Per a comment below here is test code in JS environment: var x = function() {
try {
console.log("hello")
throw new Error();
console.log("world");
}
// catch(e) {}
console.log("After try");
}
// VM373:6 Uncaught SyntaxError: Missing catch or finally after try ANOTHER USE CASE (5 days later): FWIW in CSS there is the idea of a progressive enhancement . Because of the way CSS styles are defined, styles can be defined multiple times and styles of the same name that are added last overwrite values set before it. So you can have this list of styles like so: body {
color: red;
color: blue;
} The color will be blue because it is defined last. That's perfectly valid in CSS. It's not right or wrong it's valid. So this comes in handy when you want to support progressive features without breaking support for earlier browsers: .slideshow {
display: flex;
display: grid;
} In the style declaration above the browser will use a grid display if it is supported and if not it will use flex display. There is no error thrown for using an incorrect value. That's the same as: var element = document.getElementById("label");
element.style.setPropertyValue("flex");
try {
// if the style is not supported the browser retains the value flex
element.style.setPropertyValue("display", "grid");
} catch(e){ /* no catch is needed */ } You could also write the CSS as: .slideshow {
display: grid;
}
@supports (display: grid) {
.slideshow {
display: grid;
}
} The code for that would be: var element = document.getElementById("label");
element.style.setPropertyValue("display", "flex");
if (CSS.supports("display", "grid")) {
element.style.setPropertyValue("display", "grid");
} In both cases you are defining a variable, attempting to set test / set it to a new value that it may not support. The first approach is recommended for greatest backwards compatibility: .slideshow {
display:flex;
display:grid;
} Granted, when setting styles that are not compatible the browser will retain the previous valid values. I'm banking on this knowledge or this information to determine that a catch block is not necessary. This isn't my use case btw. My use case is in the "long story" section. | I don't need the catch block. But you do need to catch . The behavior of your code with a catch block is to catch any exception, and then forget that it happened. So any exception that tries to pass through will stop, and your code will basically pretend that the try block executed successfully. So you want a naked try block to act like it catches an exception and ignores it. Here's the thing: a default case is meant to be a common case, one that is useful by many users and not error prone. if doesn't typically require an else because there are many cases where you have nothing to do. I know nothing about why you want to drop exceptions on the floor and pretend they didn't happen. I'm willing to accept that you have some good justification for doing so. But the fact is, in the general case , it's not a good idea. Most programmers don't want to do it, and there are good arguments to say that it is generally unwise to do this sort of thing. A good language will let you do something unwise. But a good language will not let you do something unwise by accident . Since the behavior you want is generally unwise, languages tend to make you explicitly request it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/391999",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48061/"
]
} |
392,205 | As the sole developer in a startup, I had the luxury of being able to make a lot of decisions in the architecture and frameworks of our application. Fast forward 4 years and an acquisition later, I have a team of 5 and a lot of times it feels like the wild west. People making whatever design decision pleases them: integer and enums for DB types in one place and string another, this framework for a problem and then a different framework for the same problem elsewhere, etc. How do I go about enforcing consistency? It feels important to me but my team members seem to subscribe to the "if it works, it works" methodology. I guess a big part of my question is: is it unrealistic of me to expect standards like this? I struggle with the idea of coming across as a dictator that stifles creativity but doing whatever they want seems to not be scalable. | What makes you so special? My CPU says it works and I want to go home. Why are you bothering me? You can deal with this attitude by forcing everyone to issue pull requests. But now the deadlines are looming. Bad code presses on the gates of your pristine castle and you finally give in to the pressure. Or you win only to find everyone leaves and no one uses your pristine castle. There are plenty of tools that help with this issue. Source control, code reviews, coding standards, etc. but the heart and soul of the problem is your subjective opinions about what is best have to be seen as relevant. For that you have to earn and maintain their respect. Do that and this is much easier. Fail to do that and no tool or practice will save you. The best way to do that is communicate early. Don't tell me "we don't use strings for our DB types in this shop" 6 months after I settled on the idea. Telling me it's been buried in the documentation for 2 years is no justification for letting me do that. For whatever reason you have things you care about. If you care about them and have a point get those things communicated clearly before, during, and immediately after the coding of every module. Code stalking is a wonderful practice. Invest in whatever tools and practices you need so you can review code within minutes of it being written. Pair program and the tool is simply a guest chair. Why? Every second that passes after I write code exponentially increases the cost to change it. That's because my memory of the code has a half life. I start forgetting it the moment my bladder demands a break. Reduce the things you care about to their underlying principles. Rather than hit me with a list of 101 rules to follow, give me the 10 principles that they violate so I can figure out what rule 102 should be on my own. Empower me to impose my own vision by helping me see yours and we'll get along great. is it unrealistic of me to expect standards like this? I struggle with the idea of coming across as a dictator that stifles creativity but doing whatever they want seems to not be scalable. Then don't dictate! Make this a positive experience. This isn't some new age hippy nonsense. It's basic psychology. You are trying to modify human behavior. Random and positive is the most reinforcing (just ask Las Vegas). If you go negative you have to be consistent with your reinforcement. That's an unobtainable pain. Be positive as you spread the wisdom and you can be casual about it. I know where you're coming from because I've been there. You had control and now it's gone. You want it back. Well get over it. Now you have a team. They don't need to be controlled. 
What they need is leadership. What you need isn't control. It's influence. It works better and is a lot less work. Master that and relax. This should be fun. Do it right and you can go on vacation and this will still work. How? By not just being a leader but by getting the others to be leaders as well. Once you've instilled your vision in the team they can work while you're gone simply by imitating what you've been doing. Mentor the newbies and encourage them to step up and influence others as well. I know it's hard. We didn't go into this profession because we're good at dealing with people. We communicate best with code. That's fine. Just do it quick and often. Show me why yours is better. Listen if I say it's not. Do this while I'm still thinking about it. I love to code. There are few people on the planet that I can talk to about it. Be one of them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/392205",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/317313/"
]
} |
392,723 | I was reading an article from Microsoft regarding Widening Conversions and Option Strict On when I got to the part The following conversions may lose precision: Integer to Single Long to Single or Double Decimal to Single or Double However, these conversions do not lose information or magnitude. .. but according to another article regarding data types , Integer type can store from -2.147.483.648 to 2.147.483.647 and Single type can store from 1,401298E-45 to 3,4028235E+38 for positive numbers, and -3,4028235E+38 to - 1,401298E-45 for negative numbers .. so Single can store much more numbers than Integer. I couldn't understand in what situation such conversion from Integer to Single may lose precision. Could someone explain, please? | Single can store much more numbers than Integer No, it can't. Both Single and Integer are 32 Bit, which means that both can store the exact same amount of numbers, namely 2 32 = 4294967296 distinct numbers. Since the range of Single is clearly larger than that, it is immediately obvious (because of the Pigeonhole Principle ) that it cannot possibly represent all numbers within that range. And since the range of Integer is exactly the same size as the maximum amount of numbers that both Integer and Single can represent, but Single can also represent numbers outside of that range, it is clear that it cannot possibly represent all numbers inside the range of Integer . If there are some numbers of Integer that cannot be represented in Single , converting from Integer to Single must be capable of losing information. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/392723",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/337725/"
]
} |
393,065 | For example: var duckBehaviors = new Duckbehavior();
duckBehaviors.quackBehavior = new Quack();
duckBehaviors.flyBehavior = new FlyWithWings();
Duck mallardDuck = new Duck(DuckTypes.MallardDuck, duckBehaviors) As the Duck class contains all the behaviors(abstract), creating a new class MallardDuck (which extends Duck ) does not seem to be required. Reference: Head First Design Pattern, Chapter 1. | Sure, but we call that composition and delegation . The Strategy Pattern and Dependency Injection might seem structurally similar but their intents are different. The Strategy Pattern allows runtime modification of behavior under the same interface. I could tell a mallard duck to fly and watch it fly-with-wings. Then swap it out for a jet pilot duck and watch it fly with Delta airlines. Doing that while the program is running is a Strategy Pattern thing. Dependency Injection is a technique to avoid hard coding dependencies so they can change independently without requiring clients to be modified when they change. Clients simply express their needs without knowing how they will be met. Thus how they are met is decided elsewhere (typically in main). You don't need two ducks to make use of this technique. Just something that uses a duck without knowing or caring which duck. Something that doesn't build the duck or go looking for it but is perfectly happy to use whatever duck you hand it. If I have a concrete duck class I can have it implement it's fly behavior. I could even have it switch behaviors from fly-with-wings to fly-with-Delta based on a state variable. That variable could be a boolean, an int, or it could be a FlyBehavior that has a fly method that does whatever flying style without me having to test it with an if. Now I can change flying styles without changing duck types. Now Mallards can become pilots. This is composition and delegation . The duck is composed of a FlyBehavior and it can delegate flying requests to it. You can replace all your duck behaviors at once this way, or hold something for each behavior, or any combination in between. This gives you all the same powers that inheritance has except one. Inheritance lets you express what Duck methods you're overriding in the Duck subtypes. Composition and delegation requires the Duck to explicitly delegate to subtypes from the start. This is far more flexible but it involves more keyboard typing and Duck has to know it's happening. However, many people believe that inheritance has to be explicitly designed for from the beginning. And that if it hasn't been, that you should mark your classes as sealed/final to disallow inheritance. If you take that view then inheritance really has no advantage over composition and delegation. Because then either way you have to either design for extensibility from the start or be willing to tear things down later. Tearing things down is actually a popular option. Just be aware that there are cases where it's a problem. If you've independently deployed libraries or modules of code that you don't intend to update with the next release you can end up stuck dealing with versions of classes that know nothing about what you're up to now. While being willing to tear things down later can free you from over designing there is something very powerful about being able to design something that uses a duck without having to know what the duck will actually do when used. That not knowing is powerful stuff. It lets you stop thinking about ducks for awhile and think about the rest of your code. "Can we" and "should we" are different questions. Favor Composition over Inheritance doesn't say never use inheritance. 
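For reference, a rough sketch of the composition-and-delegation version; the interface and method names here are assumed, not quoted from the book:
interface FlyBehavior {
    void fly();
}

class FlyWithWings implements FlyBehavior {
    public void fly() { System.out.println("Flying with wings"); }
}

class Duck {
    private FlyBehavior flyBehavior;                 // composed behavior, swappable at runtime

    Duck(FlyBehavior flyBehavior) { this.flyBehavior = flyBehavior; }

    void setFlyBehavior(FlyBehavior flyBehavior) {   // the Strategy part: change behavior while running
        this.flyBehavior = flyBehavior;
    }

    void performFly() { flyBehavior.fly(); }         // the delegation part: forward the request
}
One Duck class covers every duck; changing how one flies is just handing it a different FlyBehavior.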
There are still cases where inheritance makes the most sense. I'll show you my favorite example : public class LoginFailure : System.ApplicationException {} Inheritance lets you create exceptions with more specific, descriptive names in only one line. Try doing that with composition and you'll get a mess. Also, there is no risk of the inheritance yo-yo problem because there is no data or methods here to reuse and encourage inheritance chaining. All this adds is a good name. Never underestimate the value of a good name. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/393065",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/296142/"
]
} |
393,425 | I was thinking about software development and writing unit tests. I got the following idea: Let's assume we have pairs of developers. Each pair is responsible for a part of the code. One from the pair implements a feature (writing code) and the second writes unit tests for it. Tests are written after the code. In my idea they help each other, but work rather separately. Ideally they would work on two similar-sized features and then exchange for test preparation. I think that this idea has some upsides: tests are written by someone who can see more of the implementation, work should go a little faster than pair programming (two features at the same time), both tests and code have a responsible person, code is tested by at least two people, and maybe searching for errors in code written by the person who is testing your code would give special motivation for writing better code and avoiding cutting corners. Maybe it is also a good idea to add another developer for code review between code and test development. What are the downsides of this idea? Is it already described as some unknown-to-me methodology and used in software development? PS. I'm not a professional project manager, but I know something about project development processes and know a few of the most popular methodologies - but this idea doesn't sound familiar to me. | The general approach of using pairs to split the effort of writing production code and writing its associated unit tests is not uncommon. I've even personally paired in this way before with decent success. However, a strict line between the person writing production code and the person writing test code may not necessarily yield results. When I used a similar approach, the pair starts by talking and getting a shared understanding of the problem. If you're using TDD, then you may start with some basic tests first. If you aren't using TDD, perhaps you'll start with the method definition. From here, both members of the pair work on both the production code and test code, with one person focusing on each aspect, but talking about ways to improve the production code as well as the test code behind it. I don't see the advantage of giving each pair two features. What you'd end up with is something that resembles TDD for some features and something that doesn't for other features. You lose focus. You don't get the benefits of real-time peer review. You don't get any of the major benefits of pairing. The practice of pair programming is not about speed, but quality. So trying to use a modified technique driven by going faster goes against its nature. By building higher quality software via parallel code review and test development, you end up saving time downstream since there are at least two people with knowledge of each change and you are eliminating (or reducing) waiting cycles for peer review and test. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/393425",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/338865/"
]
} |
393,732 | I do not understand the macro concept well. What is a macro? I do not understand how is it different from function? Both function and macro contain a block of code. So how does macro and function differ? | Unfortunately, there are multiple different uses of the term "macro" in programming. In the Lisp family of languages, and languages inspired by them, as well as many modern functional or functional-inspired languages like Scala and Haskell, as well as some imperative languages like Boo, a macro is a piece of code that runs at compile time (or at least before runtime for implementations without a compiler) and can transform the Abstract Syntax Tree (or whatever the equivalent in the particular language is, e.g. in Lisp, it would be the S-Expressions) into something else during compilation. For example, in many Scheme implementations, for is a macro which expands into multiple calls to the body. In statically-typed languages, macros are often type-safe, i.e. they cannot produce code that is not well-typed. In the C family of languages, macros are more like textual substitution. Which also means they can produce code that is not well-typed, or even not syntactically legal. In macro-assemblers, "macros" refer to "virtual instructions", i.e. instructions that the CPU does not support natively but that are useful, and so the assembler allows you to use those instructions and will expand them into multiple instructions that the CPU understands. In application scripting, a "macro" refers to a series of actions that the user can "record" and "play back". All of those are in some sense kinds of executable code, which means they can in some sense be viewed as functions. However, in the case of Lisp macros, for example, their input and output are program fragments. In the case of C, their input and output are tokens. The first three also have the very important distinction that they are executed at compile time . In fact, C preprocessor macros, as the name implies, are actually executed before the code even reaches the compiler . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/393732",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/334821/"
]
} |
393,870 | Let's assume I wrote an extension method in C# for byte arrays which encodes them into hex strings, as follows: public static class Extensions
{
public static string ToHex(this byte[] binary)
{
const string chars = "0123456789abcdef";
var resultBuilder = new StringBuilder();
foreach(var b in binary)
{
resultBuilder.Append(chars[(b >> 4) & 0xf]).Append(chars[b & 0xf]);
}
return resultBuilder.ToString();
}
} I could test the method above using NUnit as follows: [Test]
public void TestToHex_Works()
{
var bytes = new byte[] { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef };
Assert.AreEqual("0123456789abcdef", bytes.ToHex());
} If I use the Extensions.ToHex inside my project, let's assume in Foo.Do method as follows: public class Foo
{
public bool Do(byte[] payload)
{
var data = "ES=" + payload.ToHex() + "ff";
// ...
return data.Length > 5;
}
// ...
} Then all tests of Foo.Do will depend on the success of TestToHex_Works. Using free functions in C++ the outcome will be the same: tests that test methods that use free functions will depend on the success of free function tests. How can I handle such situations? Can I somehow resolve these test dependencies? Is there a better way to test the code snippets above? | Then all tests of Foo.Do will depend on the success of TestToHex_Works. Yes. That's why you have tests for ToHex. If those tests pass, the function meets the spec defined in those tests. So Foo.Do can safely call it and not worry about it. It's covered already. You could add an interface, make the method an instance method and inject it into Foo. Then you could mock ToHex. But now you have to write a mock, which may function differently. So you'll need an "integration" test to bring the two together to ensure the parts really work together. What has that achieved other than making things more complex? The idea that unit tests should test parts of your code in isolation from other parts is a fallacy. The "unit" in a unit test is an isolated unit of execution. If two tests can be run simultaneously without affecting each other, then they run in isolation and so are unit tests. Static functions that are fast, do not have a complex set up and have no side effects, such as your example, are therefore fine to use directly in unit tests. If you have code that is slow, complex to set up or has side effects, then mocks are useful. They should be avoided elsewhere though. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/393870",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/289095/"
]
} |
394,287 | Throughout the course of programming, you will end up with some comments that explain code and some comments that are commented-out code: // A concise description
const a = Boolean(obj);
//b = false; Is there a good method to quickly parse which is which? I've played around with using 3 / 's and /** */ for descriptive comments. I've also used a VSCode plugin to highlight //TODO: and //FIXME: | There is a very simple solution to this: remove the commented-out code. Really, there are only two good reasons to comment out code: to test something/make a fix, or to save code you might use later. If you're testing or fixing something, remove the commented out code as soon as you're done with the test or fix. If you're saving code you might use later, make it first-class code and put it somewhere such as a library where it can be put to good use. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/394287",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/340281/"
]
} |
394,399 | Over the last few years, we have been slowly making the switch over to progressively better written code, a few baby steps at a time. We are finally starting to make the switch over to something that at least resembles SOLID, but we're not quite there yet. Since making the switch, one of the biggest complaints from developers is that they can't stand peer reviewing and traversing dozens and dozens of files where previously every task only required the developer touching 5-10 files. Prior to starting to make the switch, our architecture was organized pretty much like the following (granted, with one or two orders of magnitude more files): Solution
- Business
-- AccountLogic
-- DocumentLogic
-- UsersLogic
- Entities (Database entities)
- Models (Domain Models)
- Repositories
-- AccountRepo
-- DocumentRepo
-- UserRepo
- ViewModels
-- AccountViewModel
-- DocumentViewModel
-- UserViewModel
- UI File wise, everything was incredibly linear and compact. There was obviously a lot of code-duplication, tight-coupling, and headaches, however, everyone could traverse it and figure it out. Complete novices, people who had never so much as opened Visual Studio, could figure it out in just a few weeks. The lack of overall file complexity makes it relatively straightforward for novice developers and new hires to start contributing without too much ramp up time as well. But this is pretty much where any benefits of the code style go out the window. I wholeheartedly endorse every attempt we make to better our codebase, but it is very common to get some push-back from the rest of the team on massive paradigm shifts like this. A couple of the biggest sticking points currently are: Unit Tests Class Count Peer Review Complexity Unit tests have been an incredibly hard sell to the team as they all believe they're a waste of time and that they're able to handle-test their code much quicker as a whole than each piece individually. Using unit tests as an endorsement for SOLID has mostly been futile and has mostly become a joke at this point. Class count is probably the single biggest hurdle to overcome. Tasks that used to take 5-10 files can now take 70-100! While each of these files serve a distinct purpose, the sheer volume of files can be overwhelming. The response from the team has mostly been groans and head scratching. Previously a task may have required one or two repositories, a model or two, a logic layer, and a controller method. Now, to build a simple file saving application you have a class to check if the file already exists, a class to write the metadata, a class to abstract away DateTime.Now so you can inject times for unit testing, interfaces for every file containing logic, files to contain unit tests for each class out there, and one or more files to add everything to your DI container. For small to medium size applications, SOLID is a super easy sell. Everyone sees the benefit and ease of maintainability. However, they're just not seeing a good value proposition for SOLID on very large scale applications. So I'm trying to find ways to improve organization and management to get us past the growing pains. I figured I'd give a bit stronger of an example of the file volume based on a recently completed task. I was given a task to implement some functionality in one of our newer microservices to receive a file sync request. When the request is received, the service performs a series of lookups and checks, and finally saves the document to a network drive, as well as 2 separate database tables. To save the document to the network drive, I needed a few specific classes: - IBasePathProvider
-- string GetBasePath() // returns the network path to store files
-- string GetPatientFolderName() // gets the name of the folder where patient files are stored
- BasePathProvider // provides an implementation of IBasePathProvider
- BasePathProviderTests // ensures we're getting what we expect
- IUniqueFilenameProvider
-- string GetFilename(string path, string fileType);
- UniqueFilenameProvider // performs some filesystem lookups to get a unique filename
- UniqueFilenameProviderTests
- INewGuidProvider // allows me to inject guids to simulate collisions during unit tests
-- Guid NewGuid()
- NewGuidProvider
- NewGuidProviderTests
- IFileExtensionCombiner // requests may come in a variety of ways, need to ensure extensions are properly appended.
- FileExtensionCombiner
- FileExtensionCombinerTests
- IPatientFileWriter
-- Task SaveFileAsync(string path, byte[] file, string fileType)
-- Task SaveFileAsync(FilePushRequest request)
- PatientFileWriter
- PatientFileWriterTests So that's a total of 15 classes (excluding POCOs and scaffolding) to perform a fairly straightforward save. This number ballooned significantly when I needed to create POCOs to represent entities in a few systems, built a few repos to communicate with third party systems that are incompatible with our other ORMs, and built logic methods to handle the intricacies of certain operations. | Now, to build a simple file saving application you have a class to check if the file already exists, a class to write the metadata, a class to abstract away DateTime.Now so you can inject times for unit testing, interfaces for every file containing logic, files to contain unit tests for each class out there, and one or more files to add everything to your DI container. I think you have misunderstood the idea of a single responsibility. A class's single responsibility might be "save a file". To do that, it then may break that responsibility down into a method that checks whether a file exists, a method that writes metadata etc. Each those methods then has a single responsibility, which is part of the class's overall responsibility. A class to abstract away DateTime.Now sounds good. But you only need one of those and it could be wrapped up with other environment features into a single class with the responsibility for abstracting environmental features. Again a single responsibility with multiple sub-responsibilities. You do not need "interfaces for every file containing logic", you need interfaces for classes that have side-effects, e.g. those classes that read/write to files or databases; and even then, they are only needed for the public parts of that functionality. So for example in AccountRepo , you might not need any interfaces, you might only need an interface for the actual database access which is injected into that repo. Unit tests have been an incredibly hard sell to the team as they all believe they're a waste of time and that they're able to handle-test their code much quicker as a whole than each piece individually. Using unit tests as an endorsement for SOLID has mostly been futile and has mostly become a joke at this point. This suggests that you have misunderstood unit tests too. The "unit" of a unit test is not a unit of code. What even is a unit of code? A class? A method? A variable? A single machine instruction? No, the "unit" refers to a unit of isolation, i.e. code that can execute in isolation from other parts of the code. A simple test of whether an automated test is a unit test or not is whether you can run it in parallel with all your other unit tests without affecting its result. There's a couple more rules of thumb around unit tests, but that is your key measure. So if parts of your code can indeed be tested as a whole without affecting other parts, then do that. Always be pragmatic and remember everything is a compromise. The more you adhere to DRY, the more tightly coupled you code must become. The more you introduce abstractions, the easier the code is to test, but the harder it is to understand. Avoid ideology and find a good balance between the ideal and keeping it simple. There lies the sweet spot of maximum efficiency both for development and maintenance. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/394399",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/204600/"
]
} |
394,551 | Nowadays everybody wants to be agile. In every team I worked with, the shape of agile was different. Some things are common - like daily stand-ups or planning, but other parts vary significantly. In my current team there's one detail which I find disturbing. It's lack of functional requirements. Not only there's no written form of expectations but also in the tasks it's rather vaguely defined what needs to be done. The project goal is to rewrite of the old system using new technologies. Old system doesn't have any reasonable documentation as well. For sure up to date one doesn't exist. Business owners' description of requirements is - let's do it in new implementation the same way as old. It seems reasonable but it's not. Old system is kind of spaghetti code and extracting business requirements from it is costly. It seems that the situation affects planning in a negative way. For sure it's prone to mistakes and bugs in new implementation (omitting some details). Therefore I'm thinking - is it truly agile to have no business requirements in case of rewriting old system? | Whether or not lacking functional requirements is agile, it is a recipe for disaster. You cannot rebuild a system when you do not know how that system works. You need to tell the business owner that you have no idea how the old system works. Your best option is to work with your business owner or a few experienced users to understand the business processes at play, and develop your own acceptance tests. If you can work with some end users you might get more feedback about how the old system works. Failing that, you'll need to do some exploratory testing in a non production environment to gather your own requirements. Many times when a business owner says "make it work like the old one" they are constrained on time, and are not able to help you out like a business owner should. You need the expertise of some seasoned users, or a whole lot of manual testing on your part to understand how the old system works. Inform the business owner that a lack of requirements and understanding of the old system will greatly increase the time it takes to rebuild it — around triple the time or greater. Faced with the increased timeline and cost, the business owner will either give you the expertise required to gather requirements faster, or decide the rewrite is not economically feasible at this time. You'll get one of the following: Proper requirements and a faster development cycle Time to gather requirements and rebuild the software A new project that won't end up being a black mark on your career | {
"source": [
"https://softwareengineering.stackexchange.com/questions/394551",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/143750/"
]
} |
395,021 | Say multiple branches are being developed, A and B , as well as a incremental "bug fix" branch C . Now C is already "finished" and merged into master. A and B are still in development and will not be fixed before (maybe) another bug fix branch is merged into master. Is it a good idea to merge C as soon as possible in the new feature branches? So that the new features stay as close to master as possible? Or is it better to let the new feature be developed in their own "world" only merging into master once they are finished? There will be conflicts anyhow, so time needs to be spent on fixing those. | The longer a branch lives, the more it is able to diverge from the main branch and the messier and more complicated the resulting merge will be when it's finally finished. Ten small conflicts are easier to resolve than 1 massive conflict, and may actually prevent developers from duplicating or wasting effort. Given that, you should merge master into A and B regularly; once a day is a pretty common recommendation, though if you have a lot of activity on your branches you may wish to merge multiple times a day. In addition to making conflict resolution easier, you specifically mention C is a bugfix branch. As a developer, I'd want my branch to have all of the latest bugfixes, to ensure I'm not repeating behavior that led to a bug, or writing tests based on erroneous data. There will be conflicts anyhow, so time needs to be spent on fixing those. If you know there will be conflicts, you may wish to adopt a different branching strategy. Keep multiple changes to the same file(s) on the same branch whenever possible, and you reduce or eliminate the number of conflicts. Refactor stories so that they are completely independent as much as possible, and rework branches to possibly cover multiple stories (branch, feature, and story are not always interchangeable). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/395021",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43635/"
]
} |
395,031 | I would like to introduce permissions based access control in my Single Page Application (SPA) front-end which authenticates the user with token based authentication (JWT). Permission Requirement: In my SPA, each required (html) element is mapped to a permission and depending on the availability of the user permission, the element is shown or hidden. Multiple elements can be mapped to the same permission. Number of permissions: ~100 The problem I need to solve is: How to efficiently pass permissions that control view and access of
specific front-end elements from backend to the SPA. I am thinking about two possible approaches with different options on how to implement this: Approach 1 It seems that in almost all guides and examples on permission based authentication, the permissions are included within the jwt token: User logs in the web app The user is authenticated and the server returns a jwt token to the SPA. Option A The jwt token will contain one claim per permission . Option B The jwt token will contain one claim that will have as a value all user permissions comma separated or structured . The SPA parses the jwt token and gets the permissions. Approach 2 The above solution does not sound efficient from a network traffic perspective so here is the second approach: User logs in the web app The user is authenticated and the server returns a jwt token to the SPA. As soon as the jwt is retrieved successfully, the client requests the permissions of the user in a separate request. Once the permissions are retrieved, they are cached in the browser session. Questions: Are JWT claims well suited for passing users permissions? Wouldn't 100 claims be a large size to be passed around in a token? Do you see any issues with the second approach except from the drawback of having to validate the cache if the user permissions change? | The longer a branch lives, the more it is able to diverge from the main branch and the messier and more complicated the resulting merge will be when it's finally finished. Ten small conflicts are easier to resolve than 1 massive conflict, and may actually prevent developers from duplicating or wasting effort. Given that, you should merge master into A and B regularly; once a day is a pretty common recommendation, though if you have a lot of activity on your branches you may wish to merge multiple times a day. In addition to making conflict resolution easier, you specifically mention C is a bugfix branch. As a developer, I'd want my branch to have all of the latest bugfixes, to ensure I'm not repeating behavior that led to a bug, or writing tests based on erroneous data. There will be conflicts anyhow, so time needs to be spent on fixing those. If you know there will be conflicts, you may wish to adopt a different branching strategy. Keep multiple changes to the same file(s) on the same branch whenever possible, and you reduce or eliminate the number of conflicts. Refactor stories so that they are completely independent as much as possible, and rework branches to possibly cover multiple stories (branch, feature, and story are not always interchangeable). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/395031",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/255754/"
]
} |
395,306 | I'm trying to practice TDD by using it to develop something simple, like a Bit Vector. I happen to be using Swift, but this is a language-agnostic question. My BitVector is a struct that stores a single UInt64, and presents an API over it that lets you treat it like a collection. The details don't matter much, but it's pretty simple. The high 57 bits are storage bits, and the lower 6 bits are "count" bits, which tell you how many of the storage bits actually store a contained value. So far, I have a handful of very simple capabilities: An initializer that constructs empty bit vectors A count property of type Int An isEmpty property of type Bool An equality operator ( == ). NB: this is a value-equality operator akin to Object.equals() in Java, not a reference equality operator like == in Java. I'm running into a bunch of cyclical dependencies: The unit test that tests my initializer needs to verify that the newly constructed BitVector is empty. It can do so in one of 3 ways: Check bv.count == 0 Check bv.isEmpty == true Check that bv == knownEmptyBitVector Method 1 relies on count, method 2 relies on isEmpty (which itself relies on count, so there's no point using it), method 3 relies on ==. In any case, I can't test my initializer in isolation. The test for count needs to operate on something, which inevitably tests my initializer(s) The implementation of isEmpty relies on count The implementation of == relies on count . I was able to partly solve this problem by introducing a private API that constructs a BitVector from an existing bit pattern (as a UInt64). This allowed me to initialize values without testing any other initializers, so that I could "bootstrap" my way up. For my unit tests to truly be unit tests, I find myself doing a bunch of hacks, which complicate my prod and test code substantially. How exactly do you get around these sorts of issues? | You're worrying about implementation details too much. It doesn't matter that in your current implementation isEmpty relies on count (or whatever other relationships you might have): all you should be caring about is the public interface. For example, you can have three tests: That a newly initialized object has count == 0 . That a newly initialized object has isEmpty == true That a newly initialized object equals the known empty object. These are all valid tests, and become especially important if you ever decide to refactor the internals of your class so that isEmpty has a different implementation that doesn't rely on count - so long as your tests all still pass, you know you haven't regressed anything. Similar stuff applies to your other points - remember to test the public interface, not your internal implementation. You may find TDD useful here, as you'd then be writing the tests you need for isEmpty before you'd written any implementation for it at all. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/395306",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/109689/"
]
} |
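To make the answer above concrete, the three suggested tests could look roughly like the sketch below. It is written in Python rather than the asker's Swift, purely as an illustration, and the BitVector class here is a hypothetical stand-in for the real type; only the shape of the tests matters, since each one exercises nothing but the public interface.
import unittest

class BitVector:
    # Hypothetical stand-in for the asker's type, just enough to make the tests runnable.
    def __init__(self):
        self._bits = 0  # low 6 bits hold the count, as described in the question

    @property
    def count(self):
        return self._bits & 0x3F

    @property
    def is_empty(self):
        return self.count == 0

    def __eq__(self, other):
        return isinstance(other, BitVector) and self._bits == other._bits

KNOWN_EMPTY_BIT_VECTOR = BitVector()

class NewBitVectorTests(unittest.TestCase):
    # Each test checks the public interface only, never the internal relationships.
    def test_new_vector_has_count_zero(self):
        self.assertEqual(BitVector().count, 0)

    def test_new_vector_is_empty(self):
        self.assertTrue(BitVector().is_empty)

    def test_new_vector_equals_known_empty_vector(self):
        self.assertEqual(BitVector(), KNOWN_EMPTY_BIT_VECTOR)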
395,419 | The Single responsibility principle is defined on wikipedia as The single responsibility principle is a computer programming principle that states that every module, class, or function should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class If a class should only have a single responsibility, how can it have more than 1 method? Wouldn't each method have a different responsibility, which would then mean that the class would have more than 1 responsibility. Every example I've seen demonstrating the single responsibility principle uses an example class that only has one method. It might help to see an example or to have an explanation of a class with multiple methods that can still be considered to have one responsibility. | The key here is scope , or, if you prefer, granularity . A part of functionality represented by a class can be further separated into parts of functionality, each part being a method. Here's an example. Imagine you need to create a CSV from a sequence. If you want to be compliant with RFC 4180, it would take quite some time to implement the algorithm and handle all the edge cases. Doing it in a single method would result in a code which won't be particularly readable, and especially, the method would do several things at once. Therefore, you will split it into several methods; for instance, one of them may be in charge of generating the header, i.e. the very first line of the CSV, while another method would convert a value of any type to its string representation suited for CSV format, while another one would determine if a value needs to be enclosed into double quotes. Those methods have their own responsibility. The method which checks whether there is a need to add double quotes or not has its own, and the method which generates the header has one. This is SRP applied to methods . Now, all those methods have one goal in common, that is, take a sequence, and generate the CSV. This is the single responsibility of the class . Pablo H commented: Nice example, but I feel it still doesn't answer why SRP allows a class to have more than one public method. Indeed. The CSV example I gave has ideally one public method and all other methods are private. A better example would be of a queue, implemented by a Queue class. This class would contain, basically, two methods: push (also called enqueue ), and pop (also called dequeue ). The responsibility of Queue.push is to add an object to the queue's tail. The responsibility of Queue.pop is to remove an object from the queue's head, and handle the case where the queue is empty. The responsibility of Queue class is to provide a queue logic. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/395419",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/182925/"
]
} |
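A minimal sketch of the queue example from the answer above, written in Python with illustrative names: each method has its own narrow responsibility, while the class as a whole keeps the single responsibility of providing queue logic.
from collections import deque

class Queue:
    """Single responsibility of the class: provide FIFO queue logic."""

    def __init__(self):
        self._items = deque()

    def push(self, item):
        # Responsibility of push: add an object to the queue's tail.
        self._items.append(item)

    def pop(self):
        # Responsibility of pop: remove and return the object at the head,
        # handling the case where the queue is empty.
        if not self._items:
            raise IndexError("pop from an empty queue")
        return self._items.popleft()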
395,784 | I'm working on a very large research-led open-source project, with a bunch of other regular contributors. Because the project is now quite big, a consortium (composed of two full-time employees and a few members) is in charge of maintaining the project, the continuous integration (CI), etc. They just don't have time for integration of external contributions though. The project is composed of a "core" framework of about half-a-million-or-so lines of code, a bunch of "plugins" that are maintained by the consortium, and several external plugins, most of which we aren't even aware of. Currently, our CI builds the core and the maintained plugins. One of the big issues we face is that most contributors (and especially the occasional ones) aren't building 90% of the maintained plugins, so when they propose refactoring changes in the core (which these days happens on a quite regular basis), they check that the code compiles on their machine before making a pull request on GitHub. The code works, they're happy, and then the CI finishes building and the problems start: compilation failed in a consortium-maintained plugin that the contributor did not build on his/her machine. That plugin might have dependencies on third-party libraries, such as CUDA for instance, and the user does not want to, does not know how to, or simply can't for hardware reasons, compile that broken plugin. So then either the PR stays ad aeternam in the limbo of never-to-be-merged PRs, or the contributor greps the renamed variable in the source of the broken plugin, changes the code, pushes on his/her branch, waits for the CI to finish compiling, usually gets more errors, and reiterates the process until CI is happy, or one of the two already-overbooked permanents in the consortium gives a hand and tries to fix the PR on their machine. None of those options are viable, but we just don't know how to do it differently. Have you ever been confronted with a similar situation in your projects? And if so, how did you handle this problem? Is there a solution I'm not seeing here? | CI-driven development is fine! This is a lot better than not running tests and including broken code! However, there are a couple of things to make this easier on everyone involved: Set expectations: Have contribution documentation that explains that CI often finds additional issues, and that these will have to be fixed before a merge. Perhaps explain that smallish, local changes are more likely to work well – so splitting a large change into multiple PRs can be sensible. Encourage local testing: Make it easy to set up a test environment for your system. A script that verifies that all dependencies have been installed? A Docker container that's ready to go? A virtual machine image? Does your test runner have mechanisms that allow more important tests to be prioritized? Explain how to use CI for themselves: Part of the frustration is that this feedback only comes after submitting a PR. If the contributors set up CI for their own repositories, they'll get earlier feedback – and produce fewer CI notifications for other people. Resolve all PRs, either way: If something cannot be merged because it is broken, and if there's no progress towards getting the problems fixed, just close it. These abandoned open PRs just clutter up everything, and any feedback is better than just ignoring the issue. It is possible to phrase this very nicely, and make it clear that of course you'd be happy to merge when the problems are fixed. 
(see also: The Art of Closing by Jessie Frazelle , Best Practices for Maintainers: Learning to say no ) Also consider making these abandoned PRs discoverable so that someone else can pick them up. This may even be a good task for new contributors, if the remaining issues are more mechanical and don't need deep familiarity with the system. For the long-term perspective, that changes seem to break unrelated functionality so often could mean that your current design is a bit problematic. For example, do the plugin interfaces properly encapsulate the internals of your core? C++ makes it easy to accidentally leak implementation details, but also makes it possible to create strong abstractions that are very difficult to misuse. You can't change this over night, but you can shepherd the long-term evolution of the software towards a less fragile architecture. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/395784",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/342904/"
]
} |
396,151 | In his DDD book, Evans promotes the idea of a layered architecture, and in particular that the business logic should be confined to the domain layer and separated from UI/persistence/other concerns. He also introduces the Repository pattern as a means to abstract access to, and persistent storage of, Entities and Value Objects. To me, the following remained unclear: Which layer do Repositories belong to: the Domain Layer, the Persistence Layer, or something in the middle? (It seems that if it were below the Domain Layer it would violate the Layered Architecture principle, because it depends on a domain object which it stores.) Can Entities, Value Objects or Domain Services call Repositories? Should Repositories be abstracted from storage technology (which would be implied if they belong to domain layer) or can they leverage those storage technologies? Can Repositories contain business logic? Have you applied these constraints in practice, and what was the effect on the quality of your project? (I am mostly interested in the DDD perspective.) | Repositories and their placement in the code structure are a matter of intense debate in DDD circles. It is also a matter of preference, and often a decision taken based on the specific abilities of your framework and ORM. The issue is also muddied when you consider other design philosophies like Clean Architecture, which advocate using an abstract repository in the domain layer while providing concrete implementations in the infrastructure layer. But here's what I have discovered, and what has worked for me, after trying out different permutations/combinations. From a DDD perspective, Repositories sit between Application Services and Domain Objects. Domain Objects encapsulate behavior and contain the bulk of business logic, enforcing invariants at the aggregate level. Application services receive calls from UI/API/Controllers/Channels (external facing), initialize repositories to load aggregates (if needed), invoke the domain model for the necessary changes, then use the repositories again to persist aggregates. To your questions: Which layer do Repositories belong to: the Domain Layer, the Persistence Layer, or something in the middle? I would say there are three distinct layers in DDD applications - the inner domain layer, the outer application layer, and the external world (which includes the API/UI). The domain layer contains aggregates, entities, value objects, domain services, and domain events. These elements are only dependent on each other and actively avoid dependencies on any outer layers. The application layer contains Application Services, Repositories, Message Brokers, and whatever else you need to make your application practically possible. This layer is where most of the persistence, authorization, pub-sub processing, etc. happens. The Application Layer depends on and knows about the domain layer elements, but follows DDD guidelines when accessing the domain model. For example, it will only invoke methods on Aggregates and Domain Services. The outermost layer includes API Controllers, serializers, authentication, logging, etc. - whatever is not related to business logic or your domain, but very much part of your application. Can Entities, Value Objects or Domain Services call Repositories? No. The domain layer should preferably remain agnostic to repositories. Application services should take on the responsibility of transactions and repository interactions. 
Should Repositories be abstracted from storage technology (which would be implied if they belong to domain layer) or can they leverage those storage technologies? Repositories lean towards the domain side, meaning they contain methods that are meaningful from a domain point of view (like GetAdults() or GetMinors() ). But the concrete implementation can be done in a couple of ways: You could use an abstract repository to declare the necessary methods, and then create concrete implementations for different databases. The database implementation can be chosen at the beginning of application startup, based on your configurations. Note that even in this case, Domain layer has nothing to do with repositories Repositories could act like wrappers and make use of underlying DAO objects (one per table/document) that implement the actual logic for interacting with the database. The DAO objects are initialized usually with dependency injection if your framework/language supports it, or they could be initialized manually based on active configuration. Can Repositories contain business logic? Repositories represent domain concepts, with meaningful method names, but seldom contain any business logic. They encapsulate the database query and give it a conceptual name that usually is derived directly from the ubiquitous language. It is so much better to have a method called GetAdults() instead of .filter(age > 21) . Have you applied these constraints in practice and what was the effect on the quality of your project? If you restrict yourself to using repositories only in application services, and control transactions at one place (usually with Unit of Work pattern), Repositories are pretty easy to work with. In my past projects, I have found it extremely useful to restrict all database interaction to repositories instead of sprinkling lifecycle methods in the domain layer. When I called lifecycle methods (like save , update , etc.) from the aggregate layer, I found it to be extremely complex and difficult to reliably control ACID transactions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/396151",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7763/"
]
} |
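As a rough illustration of the layering described in the answer above (sketched in Python with hypothetical names, not taken from any particular framework): the repository interface carries domain-meaningful method names, while the application service owns loading the aggregate, invoking its behaviour, and persisting it.
from abc import ABC, abstractmethod

class CustomerRepository(ABC):
    """Domain-meaningful interface; storage details stay out of the domain layer."""

    @abstractmethod
    def get(self, customer_id):
        ...

    @abstractmethod
    def get_adults(self):
        ...

    @abstractmethod
    def save(self, customer):
        ...

class PlaceOrderService:
    """Application service: loads the aggregate, invokes domain behaviour, persists it."""

    def __init__(self, repository):
        # A concrete CustomerRepository implementation is injected at startup,
        # chosen from configuration, so nothing in the domain layer sees the database.
        self._repository = repository

    def execute(self, customer_id, order):
        customer = self._repository.get(customer_id)
        customer.place_order(order)      # business logic stays on the aggregate
        self._repository.save(customer)  # persistence is handled in this layer only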
396,567 | I find myself often looking up questions online, and many solutions include dictionaries. However, whenever I try to implement them, I get this horrible reek in my code. For example every time I want to use a value: int x;
if (dict.TryGetValue("key", out x)) {
DoSomethingWith(x);
} That's 4 lines of code to essentially do the following: DoSomethingWith(dict["key"]) I've heard that using the out keyword is an anti pattern because it makes functions mutate their parameters. Also, I find myself often needing a "reversed" dictionary, where I flip the keys and values. Similarly, I often would like to iterate through the items in a dictionary and find myself converting keys or values to lists etc to do this better. I feel like there's almost always a better, more elegant way of using dictionaries, But I'm at a loss. | Dictionaries (C# or otherwise) are simply a container where you look up a value based on a key. In many languages it's more correctly identified as a Map with the most common implementation being a HashMap. The problem to consider is what happens when a key does not exist. Some languages behave by returning null or nil or some other equivalent value. Silently defaulting to a value instead of informing you that a value does not exist. For better or for worse, the C# library designers came up with an idiom to deal with the behavior. They reasoned that the default behavior for looking up a value that does not exist is to throw an exception. If you want to avoid exceptions, then you can use the Try variant. It's the same approach they use for parsing strings into integers or date/time objects. Essentially, the impact is like this: T count = int.Parse("12T45"); // throws exception
if (int.TryParse("12T45", out count))
{
// Does not throw exception
} And that carried forward to the dictionary, whose indexer delegates to Get(index) : var myvalue = dict["12345"]; // throws exception
myvalue = dict.Get("12345"); // throws exception
if (dict.TryGet("12345", out myvalue))
{
// Does not throw exception
} This is simply the way the language is designed. Should out variables be discouraged? C# isn't the first language to have them, and they have their purpose in specific situations. If you are trying to build a highly concurrent system, then you cannot use out variables at the concurrency boundaries. In many ways, if there is an idiom that is espoused by the language and core library providers, I try to adopt those idioms in my APIs. That makes the API feel more consistent and at home in that language. So a method written in Ruby isn't going to look like a method written in C#, C, or Python. They each have a preferred way of building code, and working with that helps the users of your API learn it more quickly. Are Maps in General an Anti-pattern? They have their purpose, but many times they may be the wrong solution for the purpose you have. Particularly if you have a bi-directional mapping you need. There are many containers and ways of organizing data. There are many approaches that you can use, and sometimes you need to think a bit before you pick that container. If you have a very short list of bi-directional mapping values, then you might only need a list of tuples. Or a list of structs, where you can just as easily find the first match on either side of the mapping. Think of the problem domain, and pick the most appropriate tool for the job. If there isn't one, then create it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/396567",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/157621/"
]
} |
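Regarding the asker's "reversed dictionary" need, here is a small sketch, transposed to Python only to keep the examples in this document in one language; it assumes the values are hashable and unique, otherwise later entries silently overwrite earlier ones.
def invert(mapping):
    # Build the "reversed" dictionary: values become keys and keys become values.
    return {value: key for key, value in mapping.items()}

codes = {"a": 1, "b": 2}
names = invert(codes)       # {1: "a", 2: "b"}
missing = codes.get("z")    # returns None instead of raising, Python's lenient lookup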
396,585 | What is the meaning of the words asynchronous and synchronous in computer science? If you google the meaning of the words you will get the following: Asynchronous: not existing or occurring at the same time . Synchronous: existing or occurring at the same time . But it seems like they are used to convey the opposite meaning in programming or computer science: HTML async attribute means that the script will be executed as soon as it is downloaded even if the HTML is still parsing or downloading, which means both processes, the script and the HTML, exist and occur at the same time to me. Are these terms used to convey the opposite meaning in computer science or am I missing the point? | I would like to give you an answer which is directly related to those definitions you found. When one task T1 starts a second task T2, it can happen in the following manner: Synchronous: existing or occurring at the same time. So T2 is guaranteed to be started and executed inside the time slice of T1 . T1 "waits" for the ending of T2 and can continue processing afterwards. In this sense, T1 and T2 occur "at the same time" (not "in parallel", but in a contiguous time interval). Asynchronous: not existing or occurring at the same time. So the execution time of T2 is now unrelated to T1. It may be executed in parallel, it may happen one second, one minute or several hours later, and T2 may still run when T1 has ended (so to process a result of T2, a new task T3 may be required). In this sense, T1 and T2 are not "occuring at the same time (interval)". Of course, I agree, the literal definitions appear to be ambiguous when seeing that asynchronous operations nowadays are often used for creating parallel executions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/396585",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/344291/"
]
} |
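A tiny Python illustration of the distinction drawn above (the function names are made up): in the synchronous case T2 runs inside T1's own time interval, while in the asynchronous case T1 merely starts T2 and carries on, so T2 may still be running after T1 has returned.
import threading
import time

def t2():
    time.sleep(1)
    print("T2 finished")

def t1_synchronous():
    t2()  # T1 waits here; T2 occurs within T1's time slice
    print("T1 continues only after T2 has ended")

def t1_asynchronous():
    threading.Thread(target=t2).start()  # T2's execution time is now unrelated to T1
    print("T1 continues immediately; T2 may finish much later")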
396,663 | Recently I've been trying to explain pointers in a visual way, as flashcards. Question 001: This is the drawing of a location in computer memory. Is it
true that its address is 0x23452 ? Why? Answer: Yes, because 0x23452 describes where the computer can find this location. Question 002: Is it true that the character b is stored inside the memory location 0x23452 ? Why? Answer: No, because the character a is actually stored inside it. Question 003: Is it true that a pointer is stored inside the memory location 0x23452 ? Why? Answer: Yes, because the address of memory location 0x34501 is stored inside it. Question 004: Is it true that a pointer is stored inside the memory location 0x23452 ? Why? Answer: Yes, because the address of another memory location is stored inside it. Now for the part that has got me worried. A software engineer explained pointers to me like this: A pointer is a variable whose value is the memory address of another variable. Based on the four flashcards I've shown you all, I'd define pointers in a slightly different way: A pointer is a memory location whose value is the memory address of another memory location. Is it safe to say that a variable is the same thing as a memory location? If not, then who's right? What's the difference between a variable and a memory location? | A variable is a logical construct that goes to the intent of an algorithm, whereas a memory location is a physical construct that describes the operation of a computer. Generally speaking, in order to execute a program there is (compiler generated) mapping between the logical notion of a variable and the storage of the computer. (Even in assembly language we have a notion of (logical) variables going to algorithm and intent, and (physical) memory locations, though they are more conflated in assembly.) A variable is a high(er) level concept. A variable represents either an unknown (as in mathematics, or programming assignment) or a place-holder that can be substituted with a value (as in programming: parameters). A memory location is a low(er) level concept. A memory location can be used to store a value, sometimes, to store the value of a variable. However, a CPU register is another way to store the value of some variable(s). CPU registers are also low(er) level storage locations, but they are not memory locations as they do not have addresses, just names. In some sense, a variable is a mechanism of abstraction for expressing intent of the program, whereas a memory location is a physical entity of the processing environment that provides storage & retrieval. Question 003: Is it true that a pointer is stored inside the memory location 0x23452? Why? We cannot say fore sure. Just because there is a value there that would work as an address, doesn't mean it is that address, it could be the integer (decimal) 144466, instead. We cannot make assumptions on the interpretation of values merely based on how they appear numerically. Question 004: Is it true that a pointer is stored inside the memory location 0x23452? Why? This is indeed an odd question. They expect some assumptions based on the boxes, however, let's note that the addresses increase by 1 for each box. In any modern computer, that would mean that each box can hold a byte — byte addressability has been the norm for decades now. However a byte is only 8-bits and can range from 0 to 255 (for unsigned values); yet they show a much larger value stored within one of these addresses, so very suspicious. (This could work if this were a word addressed machine, but it doesn't say that, and, few machines today are, though some educational machines are so.) 
Based on the four flashcards I've shown you all, I'd define pointers in a slightly different way: A pointer is a memory location whose value is the memory address of another memory location. While there are situations where this thinking is correct, you are mixing metaphors here. The notion of a variable goes to the algorithm and its intent — there is no need to assume all variables have memory locations. Some variables (especially arrays) have memory locations because memory locations support addressing (whereas CPU registers can only be named not indexed). For execution, there is a logical mapping between variables & statements and processor memory locations & processor instruction sequences. A variable whose value never changes (e.g. a constant) does not even necessarily require a memory location, since the value can be reproduced at will (e.g. as needed for code sequences generated by the compiler). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/396663",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/344386/"
]
} |
396,959 | Back in the 2000s a colleague of mine told me that it is an anti-pattern to make public methods virtual or abstract. For example, he considered a class like this not well designed: public abstract class PublicAbstractOrVirtual
{
public abstract void Method1(string argument);
public virtual void Method2(string argument)
{
if (argument == null) throw new ArgumentNullException(nameof(argument));
// default implementation
}
} He stated that the developer of a derived class that implements Method1 and overrides Method2 has to repeat the argument validation, and that in case the developer of the base class decides to add something around the customizable part of Method1 or Method2 later, he cannot do it. Instead my colleague proposed this approach: public abstract class ProtectedAbstractOrVirtual
{
public void Method1(string argument)
{
if (argument == null) throw new ArgumentNullException(nameof(argument));
this.Method1Core(argument);
}
public void Method2(string argument)
{
if (argument == null) throw new ArgumentNullException(nameof(argument));
this.Method2Core(argument);
}
protected abstract void Method1Core(string argument);
protected virtual void Method2Core(string argument)
{
// default implementation
}
} He told me making public methods (or properties) virtual or abstract is as bad as making fields public. By wrapping fields into properties one can intercept any access to those fields later, if needed. The same applies to public virtual/abstract members: wrapping them the way shown in the ProtectedAbstractOrVirtual class allows the base class developer to intercept any calls that go to the virtual/abstract methods. But I don't see this as a design guideline. Even Microsoft doesn't follow it: just have a look at the Stream class to verify this. What do you think of that guideline? Does it make any sense, or do you think it's overcomplicating the API? | Saying that it is an anti-pattern to make public methods virtual or abstract because of
the developer of a derived class that implements Method1 and overrides Method2 has to repeat the argument validation is mixing up cause and effect. It makes the assumption that every overrideable method requires a non-customizable argument validation. But it is quite the other way round: If one wants to design a method in a way it provides some fixed argument validations in all derivations of the class (or - more general - a customizable and a non-customizable part), then it makes sense to make the entry point non-virtual, and instead provide a virtual or abstract method for the customizable part which is called internally. But there are lots of examples where it makes perfect sense to have a public virtual method, since there is no fixed non-customizable part: look at standard methods like ToString or Equals or GetHashCode - would it improve the design of the object class to have these not public and virtual at the same time? I don't think so. Or, in terms of your own code: when the code in the base class finally and intentionally looks like this public void Method1(string argument)
{
// nothing to validate here, all strings including null allowed
this.Method1Core(argument);
} having this separation between Method1 and Method1Core only complicates things for no apparent reason. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/396959",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/344770/"
]
} |
397,096 | I'm trying to teach myself how to calculate Big-O notation for an arbitrary function. I found this function in a textbook. The book asserts that the function is O(n²). It gives an explanation as to why this is, but I'm struggling to follow. I wonder if someone might be able to show me the math behind why this is so. Fundamentally, I understand that it is something less than O(n³), but I couldn't independently land on O(n²). Suppose we are given three sequences of numbers, A, B, and C. We will
assume that no individual sequence contains duplicate values, but that
there may be some numbers that are in two or three of the sequences.
The three-way set disjointness problem is to determine if the
intersection of the three sequences is empty, namely, that there is no
element x such that x ∈ A, x ∈ B, and x ∈ C. Incidentally, this is not a homework problem for me -- that ship has sailed years ago : ), just me trying to get smarter. def disjoint(A, B, C):
"""Return True if there is no element common to all three lists."""
for a in A:
for b in B:
if a == b: # only check C if we found match from A and B
for c in C:
if a == c: # (and thus a == b == c)
return False # we found a common value
return True # if we reach this, sets are disjoint [Edit]
According to the textbook: In the improved version, it is not simply that we save time if we get lucky. We claim that the worst-case running time for disjoint is O(n²). The book's explanation, which I struggle to follow, is this: To account for the overall running time, we examine the time spent executing each line of code. The management of the for loop over A requires O(n) time. The management of the for loop over B accounts for a total of O(n²) time, since that loop is executed n different times. The test a == b is evaluated O(n²) times. The rest of the time spent depends upon how many matching (a,b) pairs exist. As we have noted, there are at most n such pairs, and so the management of the loop over C, and the commands within the body of that loop, use at most O(n²) time. The total time spent is O(n²). (And to give proper credit ...) The book is:
Data Structures and Algorithms in Python by Michael T. Goodrich et al., Wiley Publishing, pg. 135 [Edit] A justification: below is the code before optimization: def disjoint1(A, B, C):
"""Return True if there is no element common to all three lists."""
for a in A:
for b in B:
for c in C:
if a == b == c:
return False # we found a common value
return True # if we reach this, sets are disjoint In the above, you can clearly see that this is O(n³), because each loop must run to its fullest. The book would assert that in the simplified example (given first), the third loop is only a complexity of O(n²), so the complexity equation goes as k + O(n²) + O(n²) which ultimately yields O(n²). While I cannot prove this is the case (thus the question), the reader can agree that the complexity of the simplified algorithm is at least less than the original. [Edit] And to prove that the simplified version is quadratic: if __name__ == '__main__':
for c in [100, 200, 300, 400, 500]:
l1, l2, l3 = get_random(c), get_random(c), get_random(c)
start = time.time()
disjoint1(l1, l2, l3)
print(time.time() - start)
start = time.time()
disjoint2(l1, l2, l3)
print(time.time() - start) Yields: 0.02684807777404785
0.00019478797912597656
0.19134306907653809
0.0007600784301757812
0.6405444145202637
0.0018095970153808594
1.4873297214508057
0.003167390823364258
2.953308343887329
0.004908084869384766 Since the second difference is equal, the simplified function is indeed quadratic: [Edit] And yet even further proof: If I assume worst case (A = B != C), if __name__ == '__main__':
for c in [10, 20, 30, 40, 50]:
l1, l2, l3 = range(0, c), range(0,c), range(5*c, 6*c)
its1 = disjoint1(l1, l2, l3)
its2 = disjoint2(l1, l2, l3)
print(f"iterations1 = {its1}")
print(f"iterations2 = {its2}")
disjoint2(l1, l2, l3) yields: iterations1 = 1000
iterations2 = 100
iterations1 = 8000
iterations2 = 400
iterations1 = 27000
iterations2 = 900
iterations1 = 64000
iterations2 = 1600
iterations1 = 125000
iterations2 = 2500 Using the second difference test, the worst case result is exactly quadratic. | The book is indeed correct, and it provides a good argument. Note that timings are not a reliable indicator of algorithmic complexity. The timings might only consider a special data distribution, or the test cases might be too small: algorithmic complexity only describes how resource usage or runtime scales beyond some suitably large input size. The book makes the argument that complexity is O(n²) because the if a == b branch is entered at most n times. This is non-obvious because the loops are still written as nested. It is more obvious if we extract it: def disjoint(A, B, C):
AB = (a
for a in A
for b in B
if a == b)
ABC = (a
for a in AB
for c in C
if a == c)
for a in ABC:
return False
return True This variant uses generators to represent intermediate results. In the generator AB , we will have at most n elements (because of the guarantee that input lists won't contain duplicates), and producing the generator takes O(n²) complexity. Producing the generator ABC involves a loop over the generator AB of length n and over C of length n , so that its algorithmic complexity is O(n²) as well. These operations are not nested but happen independently, so that the total complexity is O(n² + n²) = O(n²). Because pairs of input lists can be checked sequentially, it follows that determining whether any number of lists are disjoint can be done in O(n²) time. This analysis is imprecise because it assumes that all lists have the same length. We can say more precisely that AB has at most length min(|A|, |B|) and producing it has complexity O(|A|•|B|). Producing ABC has complexity O(min(|A|, |B|)•|C|). Total complexity then depends how the input lists are ordered. With |A| ≤ |B| ≤ |C| we get total worst-case complexity of O(|A|•|C|). Note that efficiency wins are possible if the input containers allow for fast membership tests rather than having to iterate over all elements. This could be the case when they are sorted so that a binary search can be done, or when they are hash sets. Without explicit nested loops, this would look like: for a in A:
if a in B: # might implicitly loop
if a in C: # might implicitly loop
return False
return True or in the generator-based version: AB = (a for a in A if a in B)
ABC = (a for a in AB if a in C)
for a in ABC:
return False
return True | {
"source": [
"https://softwareengineering.stackexchange.com/questions/397096",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/291622/"
]
} |
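To make the closing remark of the answer above concrete (hash sets allow fast membership tests instead of iterating), the same disjointness check can be written with Python's built-in set type; the name disjoint_sets is just illustrative. This version runs in O(|A| + |B| + |C|) expected time.
def disjoint_sets(A, B, C):
    """Return True if there is no element common to all three sequences."""
    common_ab = set(A) & set(B)        # expected O(|A| + |B|)
    return not (common_ab & set(C))    # expected O(|common_ab| + |C|)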
398,253 | We’re working on a new service – this service will potentially be called directly from applications on user devices. These applications will be developed and supported by multiple development teams from all over the organisation, all depending on the data we provide. We’re keen to identify which applications are sending which requests, so that we can identify usage patterns and developers responsible. (For the avoidance of doubt, user authentication is handled separately.) Our solution is to require API keys, one per application – then we have contact details for the development team. We don’t want getting the API keys to be a source of friction, but we’re concerned that developers will share them to colleagues in other teams, meaning we can no longer identify traffic for just one application. How can we incentivise developers not to share API keys internally? | In order to share those keys between teams, the teams need to talk to each other, agree to share, then share them. This takes time. So if a team can request API keys from you more quickly and more easily, there's no incentive to share. And the easiest way for them to request those keys is for you to pre-empt them. Assuming you know all the other teams that will need API keys, create them and share them before making the service available to them. There's one other incentive that you can offer: debugging support. Those teams will want your help when things don't quite work properly when they integrate their work with your service. Those API keys allow you to track their specific requests and thus to assist in debugging what's going wrong. So sell that as the reason for the keys, rather than " identify usage patterns and developers responsible ", which sounds like you are spying on their activities. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/398253",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/338400/"
]
} |
398,436 | Are stored procedures considered bad practice in a microservice architecture? Here are my thoughts: most books on microservices recommend one database per microservice.
Stored procedures typically work on a monolithic database. again most microservice architecture books state that they should be autonomous and loosely coupled. Using stored procedures written, say specifically in Oracle, tightly couples the microservice to that technology. most microservice architecture books (that I have read) recommend that microservices should be business oriented (designed using domain-driven design (DDD)). By moving business logic into stored procedures in the database this is no longer the case. Any thoughts on this? | There is nothing that explicitly forbids or argues against using stored procedures with microservices. Disclaimer: I don't like stored procedures from a developer's POV, but that is not related to microservices in any way. Stored procedures typically work on a monolith database. I think you're succumbing to a logical fallacy. Stored procedures are on the decline nowadays. Most stored procedures that are still in use are from an older codebase that's been kept around. Back then, monolithic databases were also much more prevalent compared to when microservices have become popular. Stored procs and monolithic databases both occur in old codebases, which is why you see them together more often. But that's not a causal link. You don't use stored procs because you have a monololithic database. You don't have a monolithic database because you use stored procs. most books on microservices recommend one database per microservice. There is no technical reason why these smaller databases cannot have stored procedures. As I mentioned, I don't like stored procs. I find them cumbersome and resistant to future maintenance. I do think that spreading sprocs over many small databases further exacerbates the issues that I already don't like. But that doesn't mean it can't be done. again most microservice architecture books state that they should be autonomous and loosely coupled. Using stored procedures written say specifically in Oracle, tightly couples the microservice to that technology. On the other side, the same argument can be made for whatever ORM your microservice uses. Not every ORM will support every database either. Coupling (specifically its tightness) is a relative concept. It's a matter of being as loose as you can reasonably be. Sprocs do suffer from tight coupling in general regardless of microservices. I would advise against sprocs in general, but not particularly because you're using microservices. It's the same argument as before: I don't think sprocs are the way to go (in general), but that might just be my bias, and it's not related to microservices. most msa books (that I have read) recommend that microservices should be business oriented (designed using ddd). By moving business logic into stored procedures in the database this is no longer the case. This has always been my main gripe about sprocs: business logic in the database. Even when not the intention, it tends to somehow always end up that way. But again, that gripe exists regardless of whether you use microservices or not. The only reason it looks like a bigger issue is because microservices push you to modernize your entire architecture, and sprocs are not that favored anymore in modern architectures. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/398436",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/346711/"
]
} |
398,637 | I am a developer working on a new mobile app for Android and iOS with a big backend component. We have been in three sprints of this project, and we use Scrum with all of its ceremonies (refinement, planning, dailies, retrospectives, etc.). In two of the sprints the team had to work (unpaid) overtime and weekends, because management were very alarmed we wouldn't complete the sprint commitment on time. Everyone worked hard, but some external dependencies and optimistic estimations made us struggle to accomplish all the sprint stories. In my experience having a small percentage of stories not completed during some sprints is somewhat normal, and they can be tackled in the next one.
But our project manager says it is our fault as we made the estimation ourselves, so we should complete all the items in the sprint. Is this an acceptable/common variation of Scrum I am not aware of? How do you suggest that I should act on this? | A few things stand out to me. The idea that management has that the team commits to a set of work is inconsistent with the latest versions of the Scrum Guide. The word "commit" or "commitment" is only used twice in the most recent (November 2017) version of the Scrum Guide - once when listing the Scrum Values and once to indicate that "people personally commit to achieving the goals of the Scrum Team". The idea of goals is important to Scrum. Not only do organizations and teams have broad goals, but in Scrum, each Sprint has a Sprint Goal that is defined at Sprint Planning as a collaboration between the Product Owner and the Development Team. The Sprint Goal is met by implementing items from the Product Backlog, but it is not simply "finish this body of work" and it often doesn't represent the complete Sprint Backlog. That is, you should be able to achieve the Sprint Goal without completing every single Product Backlog Item selected for the Sprint or every single item in the Sprint Backlog. My current thinking is that the work needed to accomplish the Sprint Goal should be somewhere around 60-70% of your team's capacity, however you account for capacity. Different organizations will be different, though, but that's likely to be a good starting point. Working overtime and weekends is also an anti-Agile Software Development practice. One of the underlying principles is that all stakeholders of an effort are able to work a sustainable pace. Long days and weekends, even if they were paid, is not sustainable for a team. At this point, there are a few next steps. The team's Scrum Master should be educating management on the values and principles of Scrum and Agile Software Development (such as "commitment" and "sustainable pace"). The team should work on its ability to forecast work and negotiate scope with the Product Owner. The team should also identify and work toward resolving or preventing the impediments that led to this situation (eliminating or reducing the impact of external dependencies). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/398637",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/347041/"
]
} |
398,703 | This Stack Overflow post lists a fairly comprehensive set of situations that the C/C++ language specification declares to be 'undefined behaviour'. However, I want to understand why other modern languages, like C# or Java, don't have the concept of 'undefined behavior'. Does it mean the compiler designers can control all possible scenarios (C# and Java) or not (C and C++)? | Basically because the designers of Java and similar languages didn't want undefined behavior in their language. This was a trade-off - allowing undefined behavior has the potential to improve performance, but the language designers prioritized safety and predictability higher. For example, if you allocate an array in C, the data is unspecified. In Java, all bytes must be initialized to 0 (or some other specified value). This means the runtime must pass over the array (an O(n) operation), while C can perform the allocation in an instant. So C will always be faster for such operations. If the code using the array is going to populate it anyway before reading, this is basically wasted effort for Java. But in the case where the code reads first, you get predictable results in Java but unpredictable results in C. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/398703",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/309242/"
]
} |
398,828 | We have an API function that breaks down a total amount into monthly amounts based on given start and end dates. // JavaScript
function convertToMonths(timePeriod) {
// ... returns the given time period converted to months
}
function getPaymentBreakdown(total, startDate, endDate) {
const numMonths = convertToMonths(endDate - startDate);
return {
numMonths,
monthlyPayment: total / numMonths,
};
} Recently, a consumer for this API wanted to specify the date range in other ways: 1) by providing the number of months instead of the end date, or 2) by providing the monthly payment and calculating the end date. In response to this, the API team changed the function to the following: // JavaScript
function addMonths(date, numMonths) {
// ... returns a new date numMonths after date
}
function getPaymentBreakdown(
total,
startDate,
endDate /* optional */,
numMonths /* optional */,
monthlyPayment /* optional */,
) {
let innerNumMonths;
if (monthlyPayment) {
innerNumMonths = total / monthlyPayment;
} else if (numMonths) {
innerNumMonths = numMonths;
} else {
innerNumMonths = convertToMonths(endDate - startDate);
}
return {
numMonths: innerNumMonths,
monthlyPayment: total / innerNumMonths,
endDate: addMonths(startDate, innerNumMonths),
};
} I feel this change complicates the API. Now the caller needs to worry about the heuristics hidden with the function's implementation in determining which parameters take preference in being used to calculate the date range (i.e. by order of priority monthlyPayment , numMonths , endDate ). If a caller doesn't pay attention to the function signature, they might send multiple of the optional parameters and get confused as to why endDate is being ignored. We do specify this behavior in the function documentation. Additionally I feel it sets a bad precedent and adds responsibilities to the API that it should not concern itself with (i.e. violating SRP). Suppose additional consumers want the function to support more use cases, such as calculating total from the numMonths and monthlyPayment parameters. This function will become more and more complicated over time. My preference is to keep the function as it was and instead require the caller to calculate endDate themselves. However, I may be wrong and was wondering if the changes they made were an acceptable way to design an API function. Alternatively, is there a common pattern for handling scenarios like this? We could provide additional higher-order functions in our API that wrap the original function, but this bloats the API. Maybe we could add an additional flag parameter specifying which approach to use inside of the function. | Seeing the implementation, it appears to me what you really require here is 3 different functions instead of one: The original one: function getPaymentBreakdown(total, startDate, endDate) The one providing the number of months instead of the end date: function getPaymentBreakdownByNoOfMonths(total, startDate, noOfMonths) and the one providing the monthly payment and calculating the end date: function getPaymentBreakdownByMonthlyPayment(total, startDate, monthlyPayment) Now, there are no optional parameters any more, and it should be pretty clear which function is called how and for which purpose. As mentioned in the comments, in a strictly typed language, one could also utilize function overloading, distinguishing the 3 different functions not necessarily by their name, but by their signature, in case this does not obfuscate their purpose. Note the different functions don't mean you have to duplicate any logic - internally, if these functions share a common algorithm, it should be refactored to a "private" function. is there a common pattern for handling scenarios like this I don't think there is a "pattern" (in the sense of the GoF design patterns) which describes good API design. Using self-describing names, functions with fewer parameters, functions with orthogonal (=independent) parameters, are just basic principles of creating readable, maintainable and evolvable code. Not every good idea in programming is necessarily a "design pattern". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/398828",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44207/"
]
} |
399,096 | For binary operators we have both bitwise and logical operators: & bitwise AND
| bitwise OR
&& logical AND
|| logical OR NOT (a unary operator) behaves differently though. There is ~ for bitwise and ! for logical. I recognize NOT is a unary operation as opposed to AND and OR but I cannot think of a reason why the designers chose to deviate from the principle that single is bitwise and double is logical here, and went for a different character instead. I guess you could read it wrong, like a double bitwise operation that would always return the operand value. But that does not seem a real problem to me. Is there a reason I am missing? | Strangely, the history of C-style programming language doesn’t start with C. Dennis Ritchie explains well the challenges of C’s birth in this article . When reading it, it becomes obvious that C inherited a part of its language design from its predecessor BCPL , and especially the operators. The section “Neonatal C” of the aforementioned article explains how BCPL’s & and | were enriched with two new operators && and || . The reasons were: different priority was required due to its use in combination with == different evaluation logic: left-to-right evaluation with short-circuit (i.e when a is false in a&&b , b is not evaluated). Interestingly, this doubling does not create any ambiguity for the reader: a && b will not be misinterpreted as a(&(&b)) . From a parsing point of view, there is no ambiguity either: &b could make sense if b were an lvalue, but it would be a pointer whereas the bitwise & would require an integer operand, so the logical AND would be the only reasonable choice. BCPL already used ~ for bitwise negation. So from a point of view of consistency, it could have been doubled to give a ~~ to give it its logical meaning. Unfortunately this would have been extremely ambiguous since ~ is a unary operator: ~~b could also mean ~(~b)) . This is why another symbol had to be chosen for the missing negation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/399096",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/209665/"
]
} |
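The short-circuit evaluation that motivated && in the answer above is still observable in C-family descendants; a small C# illustration (the example itself is an assumption, not from the original post):

public static class ShortCircuitDemo
{
    public static bool HasItems(int[] xs)
    {
        // && stops as soon as the left operand is false, so xs.Length is never read for null input.
        return xs != null && xs.Length > 0;
    }

    public static bool HasItemsEager(int[] xs)
    {
        // & on booleans evaluates both operands, so this throws a NullReferenceException when xs is null.
        return xs != null & xs.Length > 0;
    }
}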
399,335 | As per Wikipedia: A compiled language is a programming language whose implementations are typically compilers (translators that generate machine code from source code). And an interpreted language is a type of programming language for which most of its implementations execute instructions directly and freely, without previously compiling a program into machine-language instructions. Hence the following is clear. C, C++ and few other similar languages - Compiled Language Shell script, Perl, Ruby and some more - Interpreted Language However, there is a 3rd kind of language as well. Languages like C# and Java which use both a compiler and a JIT while running. So my question is, is there a separate name for such languages or they can be categorized to either of the above types? An explanatory answer would be more helpful? EDIT: From Wikipedia and this SO post: Interpreted vs Compiled it is evident that there are 2 well-defined language implementation types. But my question is about the fact that is it sufficient to have 2 categories, can all languages be fit within these 2 or is there a 3rd one? | The answer to your question: Can every language be categorized as either compiled or interpreted? Is "No", but not for the reason you think it is. The reason is not that there is a third missing category, the reason is that the categorization itself is nonsensical . There is no such thing as a "compiled language" or an "interpreted language". Those terms are not even wrong, they are nonsensical. Programming languages are sets of abstract mathematical rules, definitions, and restrictions. Programming languages aren't compiled or interpreted. Programming languages just are . [Credit goes to Shriram Krishnamurthi who said this in an interview on Channel 9 years ago (at about 51:37-52:20). ] In fact, a programming language can perfectly exist without having any interpreter or compiler! For example, Konrad Zuse 's Plankalkül which he designed in the 1930s was never implemented during his lifetime. You could still write programs in it, you could analyze those programs, reason about them, prove properties about them … you just couldn't execute them. (Well, actually, even that is wrong: you can of course run them in your head or with pen and paper.) Compilation and interpretation are traits of the compiler or interpreter (duh!), not the programming language. Compilation and interpretation live on a different level of abstraction than programming languages: a programming language is an abstract concept, a specification, a piece of paper. A compiler or interpreter is a concrete piece of software (or hardware) that implements that specification. If English were a typed language, the terms "compiled language" and "interpreted language" would be type errors. [Again, credit to Shriram Krishnamurthi.] Every programming language can be implemented by a compiler. Every programming language can be implemented by an interpreter. Many modern mainstream programming languages have both interpreted and compiled implementations. Many modern mainstream high-performance programming language implementations have both compilers and interpreters. There are interpreters for C and for C++. On the other hand, every single current major mainstream implementation of ECMAScript, PHP, Python, Ruby, and Lua has a compiler. The original version of Google's V8 ECMAScript engine was a pure native machine code compiler. 
(They went through several different designs, and the current version does have an interpreter, but for many years, it didn't have one.) XRuby and Ruby.NET were purely compiled Ruby implementations. IronRuby started out as a purely compiled Ruby implementation, then added an interpreter later in order to improve performance. Opal is a purely compiled Ruby implementation. Some people might say that the terms "interpreted language" or "compiled language" make sense to apply to programming languages that can only be implemented by an interpreter or by a compiler. But, no such programming language exists. Every programming language can be implemented by an interpreter and by a compiler. For example, you can automatically and mechanically derive a compiler from an interpreter using the Second Futamura Projection. It was first described by Prof. Yoshihiko Futamura in his 1971 paper Partial Evaluation of Computation Process – An approach to a Compiler-Compiler (Japanese) , an English version of which was republished 28 years later. It uses Partial Evaluation , by partially evaluating the partial evaluator itself with respect to the interpreter, thus yielding a compiler. But even without such complex highly-academic transformations, you can create something that is functionally indistinguishable from compilation in a much simpler way: just bundle together the interpreter with the program to be interpreted into a single executable. Another possibility is the idea of a "meta-JIT". (This is related in spirit to the Futamura Projections.) This is e.g. used in the RPython framework for implementing programming languages. In RPython, you write an interpreter for your language, and then the RPython framework will JIT-compile your interpreter while it is interpreting the program, thus producing a specialized compiled version of the interpreter which can only interpret that one single program – which is again indistinguishable from compiling that program. So, in some sense, RPython dynamically generates JIT compilers from interpreters. The other way around, you can wrap a compiler into a wrapper that first compiles the program and then directly executes it, making this wrapped compiler indistinguishable from an interpreter. This is, in fact, how the Scala REPL, the C♯ REPL (both in Mono and .NET), the Clojure REPL, the interactive GHC REPL, and many other REPLs are implemented. They simply take one line / one statement / one expression, compile it, immediately run it, and print the result. This mode of interacting with the compiler is so indistinguishable from an interpreter, that some people actually use the existence of a REPL for the programming language as the defining characteristic of what it means to be an "interpreted programming language". Note, however, that you can't run a program without an interpreter. A compiler simply translates a program from one language to another. But that's it. Now you have the same program, just in a different language. The only way to actually get a result of the program is to interpret it. Sometimes, the language is an extremely simple binary machine language, and the interpreter is actually hard-coded in silicone (and we call it a "CPU"), but that's still interpretation. Some people say that you can call a programming language "interpreted" if the majority of its implementations are interpreters. Well, let's just look at a very popular programming language: ECMAScript. 
There are a number of production-ready, widely-used, high-performance mainstream implementations of ECMAScript, and every single one of them includes at least one compiler, some even multiple compilers. So, according to this definition, ECMAScript is clearly a compiled language. You might also be interested in this answer of mine, which explains the differences and the different means of combining interpreters, JIT compilers and AOT compilers and this answer dealing with the differences between an AOT compiler and a JIT compiler . It is possible to categorize language implementations to some degree. In general, we have the distinction between compilers and interpreters (if the interpreter interprets a language that is not meant for humans, it is also often called a virtual machine ) Within the group of compilers, we have the temporal distinction when the compiler is run: Just-In-Time compilers run while the program is executing Ahead-Of-Time compilers run before the program starts And then we have implementations which combine interpreters and compilers, or combine multiple compilers, or (much more rare) multiple interpreters. Some typical combinations are mixed-mode execution engines which combine an interpreter and a JIT compiler that both process the same program at the same time (examples: Oracle HotSpot JVM, IBM J9 JVM) multi-phase [I invented that term; I don't know of a widely-used one] execution engines, where the first phase is a compiler that compiles the program to a language more suitable for the next phase, and then a second phase which processes that language. (There could be more phases, but two is typical.) As you can probably guess, the second phase can again use different implementation strategies: an interpreter: this is a typical implementation strategy. Often, the language that is interpreted is some form of bytecode that is optimized for "interpretability". Examples: CPython, YARV (pre-2.6) , Zend Engine a compiler, which makes this a combination of two compilers. Typically, the first compiler translates the language into some form of bytecode that is optimized for "compilability" and the second compiler is an optimizing compiler that is specific to the target platform a mixed-mode VM. Examples: YARV post-2.6, Rubinius, SpiderMonkey, SquirrelFish Extreme, Chakra But, there are still others. Some implementations use two compilers instead of a compiler and an interpreter to get the same benefits as a mixed-mode engines (e.g. the first few years, V8 worked this way). RPython combines a bytecode interpreter and a JIT, but the JIT does not compile the user program, it compiles the bytecode interpreter while it interprets the user program! The reason for this is that RPython is a framework for implementing languages, and in this way, a language implementor only has to write the bytecode interpreter and gets a JIT for free. (The most well-known user of RPython is of course PyPy .) The Truffle framework interprets a language-agnostic AST, but at the same time it specializes itself to the specific AST, which is kind-of like compilation but also kind-of not. The end result is that Truffle can execute the code extremely fast, without knowing too much about the language-specifics. (Truffle is also a generic framework for language implementations.) Because the AST is language-agnostic, you can mix and match multiple languages in the same program, and Truffle is able to perform optimizations across languages, such as inlining a Ruby method into an ECMAScript function etc. 
Macros and eval are sometimes cited as features that cannot possibly be compiled. But that is wrong. There are two simple ways of compiling macros and eval . (Note that for the purpose of compilation, macros and eval are somewhat dual to each other, and can be handled using similar means.) Using an interpreter: for macros, you embed an interpreter into the compiler. For eval , you embed an interpreter into the compiled program or into the runtime support libraries. Using a compiler: for macros, you compile the macro first, then embed the compiled macro into your compiler and compile the program using this "extended" compiler. For eval , you embed a compiler into the compiled program or into the runtime support libraries. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/399335",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/309242/"
]
} |
399,424 | What is best practice when an unhandled exception occurs in a desktop application? I was thinking about showing a message to the user, so that he can contact support. I would recommend that the user restart the application, but not force it. Similar to what is discussed here: ux.stackexchange.com - What's the best way to handle unexpected application errors? The project is a .NET WPF application, so the described proposal could look like this (Note that this is a simplified example. Probably it would make sense to hide the exception details until the user clicks on "Show Details" and provide some functionality to easily report the error): public partial class App : Application
{
public App()
{
DispatcherUnhandledException += OnDispatcherUnhandledException;
}
private void OnDispatcherUnhandledException(object sender, DispatcherUnhandledExceptionEventArgs e)
{
LogError(e.Exception);
MessageBoxResult result = MessageBox.Show(
$"Please help us fix it and contact [email protected]. Exception details: {e.Exception}" +
"We recommend to restart the application. " +
"Do you want to stop the application now? (Warning: Unsaved data gets lost).",
"Unexpected error occured.", MessageBoxButton.YesNo);
// Setting 'Handled' to 'true' will prevent the application from terminating.
e.Handled = result == MessageBoxResult.No;
}
private void LogError(Exception ex)
{
// Log to a log file...
}
} In the implementation (Commands of ViewModels or event handlers for external events) I would then only catch the specific exogenous exceptions and let all other exceptions (boneheaded and unknown exceptions) bubble up to the "Last resort handler" described above. For a definition of boneheaded and exogenous exceptions have a look at: Eric Lippert - Vexing exceptions Does it make sense to let the user decide if the application should be terminated? When the application is terminated, then you for sure have no inconsistent state... On the other hand the user may lose unsaved data or is not able to stop any started external process anymore until the application is restarted. Or does the decision whether to terminate the application on unhandled exceptions depend on the type of application you are writing? Is it just a trade-off between "robustness" vs. "correctness" as described in Code Complete, Second Edition? To give you some context about what kind of application we are talking about:
The application is mainly used to control chemical lab instruments and show the measured results to the user. To do so the WPF application communicates with some services (local and remote services). The WPF application does not communicate directly with the instruments. | You have to expect your program to terminate for more reasons than just an unhandled exception anyway, like a power failure, or a different background process which crashes the whole system. Therefore I would recommend terminating and restarting the application, but with some measures to mitigate the consequences of such a restart and minimize the possible data loss . Start with analysing the following points: How much data can actually get lost in case of a program termination? How severe is such a loss really for the user? Can the lost data be reconstructed in less than 5 minutes, or are we talking about losing a day's work? How much effort is it to implement some "intermediate backup" strategy? Don't rule this out because "the user would have to enter a change reason" on a regular save operation, as you wrote in a comment. Better think of something like a temporary file or state, which may be reloaded after a program crash automatically. Many types of productivity software do this (for example MS Office and LibreOffice both have an "autosave" feature and crash recovery). In case data was wrong or corrupted, can the user see this easily (maybe after a restart of the program)? If yes, you may offer an option to let the user save the data (with some small chance it is corrupted), then force a restart, reload it and let the user check if the data looks fine. Make sure not to overwrite the last version that was saved regularly (instead write to a temporary location/file) to avoid corrupting the old version. Whether such an "intermediate backup" strategy is a sensible option depends ultimately on the application and its architecture, and on the nature and structure of the data involved. But if the user will lose less than 10 minutes of work, and such a crash happens once a week or even more seldom, I would probably not invest too much thought into this. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/399424",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/236277/"
]
} |
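A rough sketch of the "intermediate backup" idea from the answer above, as a periodic autosave to a temporary file. The interval, file location, and the snapshot delegate are assumptions for illustration; the real application would plug in its own state serialization.

using System;
using System.IO;
using System.Windows.Threading;

public sealed class AutoSaver : IDisposable
{
    private readonly DispatcherTimer _timer = new DispatcherTimer { Interval = TimeSpan.FromMinutes(5) };
    private readonly Func<string> _snapshot;      // returns the current document state as text
    private readonly string _recoveryPath = Path.Combine(Path.GetTempPath(), "MyApp.recovery.tmp");

    public AutoSaver(Func<string> snapshot)
    {
        _snapshot = snapshot;
        _timer.Tick += (sender, e) => Save();
        _timer.Start();
    }

    private void Save()
    {
        // Write to a staging file first, then copy over, so the previous recovery file is never left corrupted.
        string staging = _recoveryPath + ".new";
        File.WriteAllText(staging, _snapshot());
        File.Copy(staging, _recoveryPath, overwrite: true);
    }

    public bool TryRecover(out string state)
    {
        state = File.Exists(_recoveryPath) ? File.ReadAllText(_recoveryPath) : null;
        return state != null;
    }

    public void Dispose() => _timer.Stop();
}

On startup, TryRecover can offer the user the last autosaved state after a crash, which keeps the "terminate and restart" policy cheap for the user.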
399,765 | In our company we recently had a discussion about which part of an application is responsible for the (re)arrangement of the items of a list. The list is a visual representation of steps to be done in production. Backend provides it in the "normal" order (1., 2., 3., 4.). In the frontend at the very top should be the 4th element, below is the 3rd, below the 2nd...
Since it's a website, the frontend has to render from top to bottom, so that's the wrong order for the website. It needs the elements to be 4., 3., 2., 1. in order to render them the right way. These are 10 items at most, which are all shown each time, so no paging or filters are needed. Basically, the frontend needs the items in the reverse of the order the backend returns them in.
My colleague, a frontend dev, says it's the job of the backend. In my opinion (as a backend dev), it's part of the frontend. His points: the frontend is "untouchable"; if there are changes in the presentation, we shouldn't have to change anything in the frontend; the frontend says what it needs and the backend has to adapt; the frontend is dumb - there should be no logic in it. My points: it's a matter of representation; the backend should not have to worry about it at all. It doesn't care (or know) in which order the frontend needs the items. Wrong responsibility: I don't want to work on my car's engine if I want to change tires, so if there are changes in the UI (i.e. another arrangement), the changes should happen in the UI. I don't want to be specific to one UI. If we choose to add another frontend (which we are probably not going to do) and it's not gonna show the list in the same way, do we have to provide another method for that? I guess you get the point of our views. It's not a matter of implementation since it's probably one line of code. It is an architectural question since we're planning a new part of the system. Maybe we just need another layer in between. But on which side, frontend or backend? Last Edit:
Thank you all for your answers; those were waaaay more detailed (and more numerous) than I expected. For those who are interested, here is the way we did it.
Although the majority recommended putting it in the backend, we will put the logic in the frontend.
Why? Mainly because I described it too loosely, and answers were based on facts and requirements which I described too late. By definition this list of max. 10 elements is ordered in the right order. And since the frontend needs it in another order only because it is easier to render it on the web this way, it's purely a presentation thing.
Don't get me wrong, all answers were very useful and we took a lot of information from them. I guess overengineering was the keyword we needed to realize that we're never gonna need any filters, paging or anything else on this API. But hell, we learned way more than we expected! | Any decent back-end API should receive parameters that allow some customization or filtering of the results it gives back to the client. For something that returns a list, it's usually common to provide: parameters for ordering (ASC or DESC) based on some field criteria; parameters for pagination; parameters for the search query (i.e. filtering the list to request only a subset of records). It's not always a rule, but often you find all of these three items combined. Having a back-end API return just a list of results and then saying "it's the front-end's problem what it does with it" is most of the time a bad idea. It causes the UI to become more complicated, to do something that the back-end can do very easily and without impact on performance. Let's switch the discussion for one second and think about the use case of having the list paginated inside the UI. Are you going to force the front-end to retrieve the entire list, keep it in memory and do the pagination on the client side? What if someone decides to request the full list each time the user clicks on the next page? What I'm saying is that it's a trade-off. Some things need to be done on the back-end, some on the front-end. It's not "all on back-end" or "all on front-end". Discuss it between yourselves and agree on what the API should return and what parameters it should receive. Basically, create an API that's flexible, not one that's rigid. And if you don't mind me being completely honest, this isn't an architectural question. Architecture is what's important . Ordering of items on the screen is not important, thus not architecture. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/399765",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/333658/"
]
} |
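A small C# sketch of the kind of flexible endpoint the answer above argues for, with sort direction and paging handled by the back-end (the Step type, parameter names and defaults are assumptions):

using System.Collections.Generic;
using System.Linq;

public sealed class Step
{
    public int Position { get; set; }
    public string Description { get; set; }
}

public sealed class StepsService
{
    private readonly IReadOnlyList<Step> _steps;   // normally loaded from the database

    public StepsService(IReadOnlyList<Step> steps) => _steps = steps;

    // e.g. exposed as GET /steps?sort=desc&page=1&pageSize=10
    public IEnumerable<Step> Get(string sort = "asc", int page = 1, int pageSize = 10)
    {
        var ordered = sort == "desc"
            ? _steps.OrderByDescending(s => s.Position)
            : _steps.OrderBy(s => s.Position);

        return ordered.Skip((page - 1) * pageSize).Take(pageSize);
    }
}

The frontend that wants 4., 3., 2., 1. simply asks for sort=desc; a future client that wants the natural order asks for sort=asc, and neither needs any reordering logic of its own.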
399,941 | I'm trying to build a C# application that sends notifications from the server to its users (a mobile app).
Since each user can have its notifications collection changed by any thread, I have to lock those collections whenever somebody tries to read/add/delete a notification. The problem I think that I might face if there are a lot of users (hopefully millions ;)) logged in at the same time is that I'll have to keep a separate lock for each collection, but a process can hold only a limited number of handles and I'll need more locks than I'm allowed to have. Is this a real problem or am I worried for nothing?
Is there a better solution for this? | Using many locks may cause some problems: The number of locks a process may request is often limited by the OS. Your system may, indeed, run out of locks. Note: this depends on the kind of lock. The number of interprocess locks such as Mutex and all other synchronisation primitives derived from WaitHandle is limited by the OS. The Compare-And-Swap cache line locking operations (in .net provided via the class Interlocked ) are CPU instructions and can be used without limitations. Critical sections (provided by the keyword lock in C# and the Monitor class in .net) are probably only limited by available memory, as may be ReaderWriterLockSlim and SemaphoreSlim (as added via comment by Greeble31). Requesting access to a lock costs time; and requesting access to a lock used by another thread causes the OS to block the thread and switch to another one, which also costs time. With too many locks and threads your process might end up spending most of its time doing the bookkeeping for the locking, instead of doing computations that your users desire. If your process is not disciplined about what locks it tries to acquire and in what order, you may end up with deadlocks. Alternatives: Use an event sourcing architecture (possibly with a read-only data projection (see CQRS)). An event broker (either custom or off the shelf) can handle the locking. Use lock-free algorithms and data structures, combined with lightweight compare-and-swap cache line locking instructions. Do most of the work outside the lock and swap pointers to new list heads/tree nodes. See for example Ctries by Aleksander Prokopec. Use the (row-based?) locking facilities of the database to protect notifications from concurrent updates (including the read/unread status of the notification) (as added via comment by Bart van Ingen Schenau). Each notification can be a row in a database table. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/399941",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/153399/"
]
} |
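A minimal sketch of sidestepping one heavyweight lock per user, along the lines of the lock-free data structures mentioned in the answer above (the string payload and type names are placeholders):

using System.Collections.Concurrent;

public sealed class NotificationStore
{
    // One queue per user. These collections rely on interlocked operations and lightweight
    // monitors internally, not OS synchronisation handles such as Mutex, so they do not
    // run into per-process handle limits.
    private readonly ConcurrentDictionary<string, ConcurrentQueue<string>> _byUser =
        new ConcurrentDictionary<string, ConcurrentQueue<string>>();

    public void Add(string userId, string notification)
        => _byUser.GetOrAdd(userId, _ => new ConcurrentQueue<string>()).Enqueue(notification);

    public bool TryTakeNext(string userId, out string notification)
    {
        notification = null;
        return _byUser.TryGetValue(userId, out var queue) && queue.TryDequeue(out notification);
    }
}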
400,077 | I work for a publishing company and we are making interactive software that accompanies our books. The problem is that many clients complain that the antivirus keeps deleting parts of the software, especially the .exe files. Which is the best way to avoid this? By digitally signing the software? (I don't know if that's the correct term, or maybe it's called licensing). Are there companies who provide such a thing? | By running that same anti-virus software in your testing environment. Make it part of your test procedure: "Software not deleted by antivirus." (In my experience: some packers, which compress your executable, will make your executable get flagged.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/400077",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/349453/"
]
} |
400,183 | The DRY principle sometimes forces the programmers to write complex, hard-to-maintain functions/classes. Code like this has a tendency to become more complex and harder to maintain over time. Violating the KISS principle . For example, when multiple functions needs to do something similar. The usual DRY solution is to write a function that takes different parameters to allow for the slight variations in usage. The upside is obvious, DRY = one place to make changes, etc. The downside and the reason it's violating KISS is because functions like these have a tendency to become more and more complex with more and more parameters over time. In the end, the programmers will be very afraid to make any changes to such functions or they will cause bugs in other use cases of the function. Personally I think it makes sense to violate DRY principle to make it follow KISS principle. I would rather have 10 super simple functions that are similar than to have one super complex function. I would rather do something tedious, but easy (make the same change or similar change in 10 places), than make a very scary/difficult change in one place. Obviously the ideal way is to make it as KISS as possible without violating DRY. But sometimes it seems impossible. One question that comes up is "how often does this code change?" implying that if it changes often, then it's more relevant to make it DRY. I disagree, because changing this one complex DRY function often will make it grow in complexity and become even worse over time. So basically, I think, in general, KISS > DRY. What do you think? In which cases do you think DRY should always win over KISS, and vice versa? What things do you consider in making the decision? How do you avoid the situation? | KISS is subjective. DRY is easy to over apply. Both have good ideas behind them but both are easy to abuse. The key is balance. KISS is really in the eye of your team. You don't know what KISS is. Your team does. Show your work to them and see if they think it's simple. You are a poor judge of this because you already know how it works. Find out how hard your code is for others to read. DRY is not about how your code looks. You can't spot real DRY issues by searching for identical code. A real DRY issue could be that you're solving the same problem with completely different looking code in a different place. You don't violate DRY when you use identical code to solve a different problem in a different place. Why? Because different problems can change independently. Now one needs to change and the other doesn't. Make design decisions in one place. Don't spread a decision around. But don't fold every decision that happens to look the same right now into the same place. It's ok to have both x and y even when they both are set to 1. With this perspective I don't ever put KISS or DRY over the other. I don't see nearly the tension between them. I guard against abuse of either. These are both important principles but neither is a silver bullet. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/400183",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/349618/"
]
} |
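A tiny made-up C# illustration of the answer's point that identical-looking code is not automatically a DRY violation, because it can encode two independent decisions:

// Both limits happen to be 50 today, but they are different decisions: one is a UI layout
// constraint, the other a database column width. Folding them into a single constant would
// couple two things that can change independently.
public static class UiLimits
{
    public const int MaxDisplayNameLength = 50;
}

public static class StorageLimits
{
    public const int DisplayNameColumnWidth = 50;
}

If the column is later widened, only StorageLimits changes and the UI keeps its own rule - no scary shared function with a growing parameter list.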
400,269 | The theory is that access modifiers improve code safety because they support encapsulation of internal state. When doing OOP, every language I've used implements some kind of access restriction. I like some access models better than others. I am on a team of Java developers. Our projects spend time in code reviews considering the access modifiers, their appropriateness, and the use of things like @VisibleForTesting (a Java annotation). Our projects also occasionally spend time de-finalizing or de-privatizing something in a 3rd party library if a source-code change is not feasible. I went looking for the research that shows how the use of access modifiers affects defect density or occurrences of run-time errors. I cannot find any studies on it. Maybe my Google-Fu is weak. What is the evidence that access modifiers actually provide the benefits we assume they do? Where are the studies that quantify the problems with how access modifiers are used? | Let me give you a real world example of when access modifiers "mattered" that I ran into personally: Our software is primarily python, and one way that python differs from most other OO languages is that there are no explicit access modifiers. Instead, it is convention to prefix methods and attributes that should be private with an underscore. One day, a developer was working on a particular feature, and could not make it work with the interface of the object he was working with. But he noticed that if he worked with a particular attribute that was marked private, he could do what he wanted to do. So he did it, checked it in, and (unfortunately) it slipped past code review, and into the master branch. Fast forward two years. That developer had moved on. We updated to a newer version of an underlying library. Code that had been reliably suddenly stopped working. This resulted in lots of debugging and back-and-forth messages with another team in a different time zone. Eventually we figured out the issue: the developers who owned that underlying object changed the way it worked in a very subtle way. Subtle enough that no exceptions were thrown, no other errors occurred. The library just became flaky. This happened because the developers of that library had no clue that they were doing anything that would cause any troubles to anyone. They were changing something internal, not the interface. So after the fact we did what should have been done originally: we asked the library developers to add a public method that solved our problem rather than mucking about with the internals of their objects. So that's what access modifiers prevent. They ensure that the separation of interface and implementation is clear. It lets users know exactly what they can do with the class safely and lets developers of the class change internals without breaking user's software. You could do this all with convention, not force, as python shows, but even where it's just convention, having that public/private separation is a great boon toward maintainability. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/400269",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43536/"
]
} |
400,442 | I am a C developer for an embedded system. YouTube has recently started recommending "C++ for embedded systems" talks. Having watched some of them, they pique my interest, but none of them answer the question they leave me with. These talks (especially Modern C++ in Embedded Systems by Michael Caisse) advocate for a development process whereby, instead of: writing and edit code debugging it to confirm it works (or, more likely, debugging it to see what's wrong and where to go from here) repeat until working ...one should avoid the debugger completely, trusting that the choice of language and good practice makes bugs less likely, which then eliminates the need for the debugger. But as someone who writes firmware for a microcontroller that controls analogue circuitry, many of my problems are found when hardware shows unexpected behaviour and I find I can only investigate this behaviour (especially the timing of events) by throwing breakpoints all over my code and waiting to see events happen out of order, or not happen at all. This will then either reveal a mis-configured register, or unexpected behaviour by one of the microcontroller's peripherals, which was not obvious from the device manual and necessitates a small code re-design. These talks have my attention, but I cannot see how these techniques that are supposed to help people like me, actually help me with hardware issues. Can abstractions and good code practice (which I'm all for) eliminate the need for the debugger (something I see as necessary for addressing hardware bugs)? | I think you are misrepresenting the message of the "Modern C++ in Embedded Systems" video. The point is that there are people in the embedded world that write code and then test it by running the code in the debugger to verify that it does what they think it does. He argues that a better alternative is to use abstractions so that the compiler can verify that certain assumptions about the code hold. This method still allows to use the debugger to find bugs, especially hardware problems. You should just not use the debugger to understand code, it should be understandable and correct by writing it that way. The advantage of using higher abstractions to validate assumptions is that there are certain types of bugs, e.g. having a function f(int mode, int value) which is called as f(value, mode) , that can be completely avoided. Michael Caisse argues that using the right tools, e.g. strong types in C++, alleviates this and should therefore be used. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/400442",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/350060/"
]
} |
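The f(value, mode) mix-up mentioned in the answer can be turned into a compile-time error with thin wrapper types. The talk is about C++, but a C# analogue shows the idea; the type names and register addresses are invented for illustration:

public readonly struct Mode { public readonly byte Value; public Mode(byte value) { Value = value; } }
public readonly struct Gain { public readonly byte Value; public Gain(byte value) { Value = value; } }

public static class Dac
{
    // Configure(new Gain(3), new Mode(1)) no longer compiles, so swapped arguments
    // are caught before anyone reaches for the debugger.
    public static void Configure(Mode mode, Gain gain)
    {
        WriteRegister(0x01, mode.Value);
        WriteRegister(0x02, gain.Value);
    }

    private static void WriteRegister(byte address, byte value) { /* hardware access goes here */ }
}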
400,574 | I am currently working on a problem with a chicken-or-egg situation.
Basically, I am designing a solution which goes like this: World is a collection of countries; Each Country has a name, flag and president (all are strings); There will be relations between countries, like country A is friend or supporter of country B. So there will be many countries which are either friends of other countries or the leader countries; There will be some conditions for whether a country is friend or not, based on the number of trades between the countries.
If country X exports more to say countries P,Q,R,S then X is the leader country and P,Q,R,S are its friends, and so on for other countries. At any point of time I may need to handle the request like: who is the ultimate leader country (which has max supporters or friends etc),
or who are supporters of the ultimate leader country, or who are supporters of any given leader country, etc. The trick here is that the basic properties of friends and leaders are the same (every Country has a name, flag and president). I want a good skeleton structure to address the basic service requirements of this problem statement. But I need a design able to handle any future extensions, such as successors, descendants, duration of ruling, family of presidents, etc. I am confused about which approach to follow: should I go with Approach 1 or with Approach 2? Approach 1: Country is part of World class World
{
class country *country_list; // not taking an array, so that I need not limit the list
}; Approach 2: Country defined first and World inherits from it This looks odd, because Country is a small entity and World is a big entity. And then, what would be the contents of World? Again, it's a list of countries. class World : Inherit class country
{
// not sure what should be the content here
// can getcounty();
// add_country_to_world();
//
};
// not sure whether to make this abstract or normal.
// no duplicate countries are allowed.
class country
{
string countryname;
string flag;
string president;
}; Then I would like to make a mapper which contains a key (made of the country name) and whose values are its details, like whether the country is a friend or a leader and its number of friends, so that I can directly check who the ultimate leader is. | I wouldn't use any form of inheritance for World and Country. The World is not a Country, and a Country is not the World. Instead, the World is a "bag" which contains many Countries. By "bag" I mean some kind of container class of your choosing (linked list, set, dictionary/map, or whatever). Pick whichever container type allows you to most efficiently find Countries. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/400574",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/350272/"
]
} |
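A minimal C# sketch of the composition the answer above recommends: the World has Countries rather than being one, keyed by name so lookups are cheap (the exact container choice and members are assumptions):

using System.Collections.Generic;

public sealed class Country
{
    public string Name { get; set; }
    public string Flag { get; set; }
    public string President { get; set; }
}

public sealed class World
{
    // The World *has* Countries; it is not a Country, so no inheritance is involved.
    private readonly Dictionary<string, Country> _countries = new Dictionary<string, Country>();

    public void AddCountry(Country country) => _countries.Add(country.Name, country);   // throws on duplicates

    public Country GetCountry(string name)
        => _countries.TryGetValue(name, out var country) ? country : null;
}

Friend/leader relations can then live in a separate mapping from country name to relation details, as the question already suggests, without World and Country inheriting from each other.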
400,745 | I am adding unit tests to some existing code which initially was not tested at all. I have set up my CI system to fail builds where the code coverage percentage is reduced compared to previous build - mainly to set a path of continuing improvement. I then encountered an unexpected and annoying (although mathematically correct) situation that can be reduced to the following: Before refactor: Method 1: 100 lines of code, 10 covered --> 10% coverage Method 2: 20 lines of code, 15 covered --> 75% coverage Total: 25 / 120 --> ~20% coverage After refactor: Method 1: 100 lines of code, 10 covered --> 10% coverage (untouched) Method 2: 5 lines of code, 5 covered --> 100% coverage Total: 15 / 105 --> ~14% coverage So even though IMO the situation was improved, my CI system obviously does not agree. Admittedly, this is a very esoteric problem, and would probably disappear when the bulk of the code will be covered better, but I would appreciate insights and ideas (or tools/configurations) that might allow me to keep enforcing an "improvement" path for coverage with my CI system. The environment is Java, Jenkins, JaCoCo. | The problem I see here is that you have made the code coverage a trigger for build failure . I do believe that code coverage should be something that is routinely reviewed, but as you have experienced, you can have temporary reductions in your pursuit of higher code coverage. In general, build failures should be predictable . The following make good build failure events: Code will not compile Tests will not run Tests fail Post build packaging fails (i.e. can't make containers, etc.) Packages cannot be published to repositories All of these are pass/fail, quantitative measures, they work (binary value 1) or they don't (binary value 0). Code quality should be monitored because they are qualitative . NOTE: percentages are a qualitative measure even though they have a number associated with it. The following are qualitative measures: Cyclomatic complexity (a number associated with a qualitative concept, i.e. the number has a qualitative meaning) Maintainability index (a number associated with a qualitative concept, i.e. the number has a qualitative meaning) Percentage covered lines/branches/methods (qualitative summary of quantitative results) Static analysis results (no numbers involved) As with any trend, when you look over past releases you can find momentary reductions while the overall trend improves or stays the same (100% remains 100%). If the long term trend is toward less tested or maintainable code then you have a team problem you need to address. If the long term trend is toward higher quality code based on your measurements, then the team is doing their jobs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/400745",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/326954/"
]
} |
400,932 | Pseudo code and comments to explain: // first select companies to process from db
foreach (company) {
// select the employees
foreach (employee of company) {
// select items they can access
foreach (item) {
// do some calculation and save I believe this will be O(n^3) time complexity, please correct me if I am wrong - Big O has always given me a headache. My question is if you introduce Parallel processing at the first level, what does it become? What if we introduce a second parallel.foreach() for the second iteration as well? Edit:
It was suggested that What is O(...) and how do I calculate it? would address my question, however I am more interested in whether Parallel.ForEach impacts time complexity so I believe the question is sufficiently different to stand on its own. | Time complexity is not about how long an algorithm takes to solve, it is about how much longer it takes as the input grows . That is, even the most complex (in terms of time complexity) algorithms can be solved very fast for adequately small inputs! Your proposed code will not be O ( n ^3) complexity, rather O ( n * m * k ), because, in all likelihood, you don't have equal numbers of companies, employees per company and items per employee (i.e. n companies, m employees per company and k items per employee). Yes, you can think of each nested loop like a multiplication in terms of Big-O notation. For each outer item i out of k , you must perform a multiple of, say, m actions. So for each additional outer item, it's another m actions. Note however, that if you have other things going on inside your loops (e.g. early stopping, etc.), time complexity may change. Because of 1, Parallel.ForEach will not change the time complexity. However, it will certainly lower the overall time. A simplified way to think of this is as a constant factor applied to whatever is inside the parentheses, e.g. think of O ( n * m * k ) as an alias for t ( n , m , k ) = a *( n * m * k ) ( t meaning actual time). Increasing or decreasing the constant a will obviously have an impact on the total running time of the algorithm. So, using Parallel.ForEach will have more of an effect on that constant, but the time complexity will not change (i.e. the time growth in input will increase approximately as it would without Parallel.ForEach ). On top of that, Parallel.ForEach depends on the actual hardware that is available; therefore, it is case-specific. As a result, in certain cases, you may end up delaying your run-time instead of speeding it up. In simple words, if all companies have the same number of employees and each employee the same number of items , then tripling the number of companies will triple the total running time (approximately, of course, and always given that you don't do any "tricks" inside the loops, which may depend on the company), with or without Parallel.ForEach , and this is what Big-O notation expresses. *Once again, note that point 4 above is a simplification , just for the purpose of illustration. Things are much more complicated and, in reality, run-time analysis involves multiple tests against various input sizes and distributions. Update To provide a more direct answer to the question, let's reference a parallel vector model of computation . The step complexity is a theoretical measure that assumes using the ideal case of a processing device with infinite processors. This is done to "abstract" the complication of restricting the length of an input vector, in order to focus on the "parallelizability" of a problem. As a result, if a process can be broken down into steps which can be executed in parallel, the step complexity is the time complexity of each such step. Of course, a problem may have more than one way to be split into parallel steps. Minimizing this step complexity is important on its own. The point is that, in the real world (see here for some further analysis): T_p ≤ T_∞ + T_1 / p , where T_∞ is the step complexity, T_1 is the total work, N > p and p is the count of processors you are considering.
Considering N = ∞, to relate to the aforementioned cases, this reads something along the lines: the total running time for p processors is bounded by the step complexity, plus the total work (sometimes called work complexity , i.e. the total work if the algorithm ran sequentially in 1 processor) divided by the number of processors. What this means (well, in very simple words ) is that you get, at most, a p-fold improvement, minus something that depends on how well you designed your algorithm (of course, this is an upper bound, in the real-world, mixed results are observed, which some times appear to conflict with theory ). The important thing here is that the p-fold improvement dominates the complexity as p gets smaller (i.e. as reality gets more relevant). The take-home point, however, is that this "p-fold improvement" scales based on the number of processors and not on the size of the input ! If you are a general-purpose algorithm designer, you usually have little control on p . What you have control over is typically T_1 , i.e. you can employ algorithms that are as efficient as possible sequentially , to begin with. Optimizing an algorithm for parallel execution is a very complicated task and, unless the requirements and use case are explicitly such, and adequately backed up, so that you absolutely must sit down and thoroughly analyze your algorithm, the most you can expect to get from Parallel.ForEach is something slightly worse than this p-fold improvement (and often, quite a bit of trouble debugging). Note that when p = 1, it is not unexpected to get increased run-time. Additional Comments Some additional points, in order to address some of the comments, as the original example is probably overly simplified. Big-O notation is a mathematical "artifact" . The use of the definition in the context of computational complexity requires a certain degree of "abuse", in order to be useful. For example, if f ( x ) = x^3 + 5x, then f ( x ) = O ( x ^3), but f ( x ) = O ( x ^15) is just as correct from a mathematical perspective. Even the use of the equals sign is not strictly OK. Context is important . That means that the function f and the variable x can mean anything. Big-O simply expresses an asymptotic upper bound. In most contexts, the tightest known bound is typically used, because it conveys the most useful information. But Big-O stops right there. Further interpretation is based on what function f means in the desired context. Analyzing the same algorithm can end up producing different expressions of complexity. It all depends on the definition of the problem size . For example, consider square Matrix addition . It is equally valid to say that the time complexity is O ( N ) or O ( n ^2). The key is that these statements consider a different size measure for the problem. If one cares about total elements N , then yes, square matrix addition scales no worse than linearly (this is precisely what Big O states) with respect to total elements N . In an obscurely equivalent manner, square matrix addition scales no worse than quadratically with respect to row (or column) count n . 3x the rows (and columns, as the matrices are square) ends up taking <= 9x the time, since total elements are now (3n)x(3n) = 9(n^2). How is this relevant to the question? Well, one of the comments indicates that to arrive at an O ( n * m * k ) complexity for the OP's code, certain important assumptions have to be made. 
In order to clarify some intricate points, consider the following scenarios: Non-parallel computation with n indicating total number of companies, m indicating total number of employees per each and every company, and k items per each and every employee, with no employee belonging to more than 1 company and no item belonging to more than 1 employee. What matters in this scenario, is that the computation involves n iterations, of m iterations, of k iterations (as in a 3-dimensional matrix) and the time complexity is O ( n * m * k ). Alternatively, if q is the overall total number of items, the time complexity is O ( q ). Facts of life, such as employees working in more than one company or items belonging to multiple employees do change the complexity. Also, in real-world input, there is no m variable, or k variable because these represent per-company/employee counts, not overall counts. Considering more realistic conditions: Employees can only work in 1 company but items are definitely accessible from more than 1 employee (i.e. considering domain implications). In that case, things are slightly more complicated. Consider a total of n companies, a total of m employees (of all companies) and a total of k items (from all companies). In the worst case, all employees can access all items at all companies. In that case, m_i * k_i iterations would have to be performed for each company i (considering m_i representing the total number of employees and k_i representing total items for company i. Now, regardless of how many the companies are , m * k = (m_1 + m_2 + ... m_n) * (k_1 + k_2 + ... k_n) >= m_1 * k_1 + m_2 * k_2 + ... + m_n * k_n. In short, as the number of companies, n , grows, the worst case is that each employee accesses all items of their company . As a result, under the given assumptions, the time complexity is O ( m * k ), i.e. the triple loop would not scale worse than an equivalent double loop running over all employees and all items. In "domain words", your algorithm will always run faster than the equivalent "flattened down" case (i.e. if all your employees and items belonged to 1 company). An important note for the last point, above, is that O( m * k ) is just one asymptotic upper bound. It may not be the tightest known , i.e. an actual programmed solution may scale far better than that. A tighter bound may exist, but, of course, determining it would require further analysis of all conditions involved in the algorithmic procedure, alternative calculation paths etc. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/400932",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/228245/"
]
} |
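A hedged C# sketch of parallelising only the outer loop from the question; this changes the constant factor discussed in the answer above, not the O(n * m * k) growth. The Company/Employee/Item types and ProcessAndSave are placeholders:

using System.Collections.Generic;
using System.Threading.Tasks;

public sealed class Company  { public List<Employee> Employees { get; } = new List<Employee>(); }
public sealed class Employee { public List<Item> Items { get; } = new List<Item>(); }
public sealed class Item { }

public static class Calculations
{
    public static void RunAll(IEnumerable<Company> companies)
    {
        // Parallelise only the outer level; nesting a second Parallel.ForEach rarely helps,
        // because the outer loop already keeps the available cores busy.
        Parallel.ForEach(companies, company =>
        {
            foreach (var employee in company.Employees)
                foreach (var item in employee.Items)
                    ProcessAndSave(company, employee, item);   // must be safe to run concurrently across companies
        });
    }

    private static void ProcessAndSave(Company company, Employee employee, Item item) { /* calculation and save */ }
}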
400,935 | While refactoring my code using Test Driven Development (TDD), should I keep making new test cases for the new refactored code I am writing? This question is based on the following TDD steps: Write just enough of a test for code to fail Write just enough code for the test to pass Refactor My doubt is in the refactor step. Should new unit test cases be written for refactored code? To illustrate that, I will give a simplified example: Suppose I am making an RPG and I am making an HPContainer system that should do the following: Allow the player to lose HP. HP should not go below zero. To answer that, I write the following tests: [Test]
public void LoseHP_LosesHP_DecreasesCurrentHPByThatAmount()
{
int initialHP = 100;
HPContainer hpContainer = new HPContainer(initialHP);
hpContainer.Lose(5);
int currentHP = hpContainer.Current();
Assert.AreEqual(95, currentHP);
} [Test]
public void LoseHP_LosesMoreThanCurrentHP_CurrentHPIsZero()
{
int initialHP = 100;
HPContainer hpContainer = new HPContainer(initialHP);
hpContainer.Lose(200);
int currentHP = hpContainer.Current();
Assert.AreEqual(0, currentHP);
} To satisfy the requirements, I implement the following code: public class HPContainer
{
private int currentHP = 0;
public HPContainer(int initialHP)
{
this.currentHP = initialHP;
}
public int Current()
{
return this.currentHP;
}
public void Lose(int value)
{
this.currentHP -= value;
if (this.currentHP < 0)
this.currentHP = 0;
}
} Good! The tests are passing. We did our job! Now let's say the code grows and I want to refactor that code, and I decide that adding a Clamper class as following is a good solution. public static class Clamper
{
public static int ClampToNonNegative(int value)
{
if(value < 0)
return 0;
return value;
}
} And as a result, changing the HPContainer class: public class HPContainer
{
private int currentHP = 0;
public HPContainer(int initialHP)
{
this.currentHP = initialHP;
}
public int Current()
{
return this.currentHP;
}
public void Lose(int value)
{
this.currentHP = Clamper.ClampToNonNegative(this.currentHP - value);
}
} The tests still pass, so we are sure we did not introduce a regression in our code. But my question is: Should unit tests be added to the class Clamper ? I see two opposing arguments: Yes, tests should be added because we need to cover Clamper from regression. It will ensure that if Clamper ever needs to be changed that we can do that safely with test coverage. No, Clamper is not part of the business logic, and is already covered by the test cases of HPContainer. Adding tests to it will only make unnecessary clutter and slow future refactoring. What is the correct reasoning, following the TDD principles and good practices? | Testing before and after In TDD, should I add unit tests to refactored code? "refactored code" implies you are adding the tests after you've refactored. This is missing the point of testing your changes. TDD very much relies on testing before and after implementing/refactoring/fixing code. If you can prove that the unit test outcomes are the same before and after your refactoring, you've proven that the refactoring did not change the behavior. If your tests went from failing (before) to passing (after), you've proven that your implementations/fixes have solved the issue at hand. You shouldn't be adding your unit tests after you refactor, but rather before (assuming these tests are warranted of course). Refactoring means unchanged behavior Should new unit test cases be written for refactored code? The very definition of refactoring is to change the code without changing its behavior. Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior . As unit tests are written specifically to test the behavior, it doesn't make sense for you to require additional unit tests after refactoring. If these new tests are relevant, then they were already relevant before the refactoring. If these new tests are not relevant, then they are obviously not needed. If these new tests were not relevant, but are now, then your refactoring must invariably have changed the behavior, which means you've done more than just refactoring. Refactoring can inherently never lead to needing additional unit tests that were not needed before. Adding tests needs to happen sometimes That being said, if there were tests that you should have had from the beginning but you had forgotten it until now, of course you can add them. Don't take my answer to mean that you can't add tests just because you had forgotten to write them before. Similarly, sometimes you forget to cover a case and it only becomes apparent after you've encountered a bug. It's good practice to then write a new test that now checks for this problem case. Unit testing other things Should unit tests be added to the class Clamper? It seems to me that Clamper should be an internal class, as it is a hidden dependency of your HPContainer . The consumer of your HPContainer class doesn't know that Clamper exists, and doesn't need to know that. Unit tests only focus on external (public) behavior to consumers. As Clamper should be internal , it requires no unit tests. If Clamper is in another assembly altogether, then it does need unit testing as it is public. But your question makes it unclear if this is relevant. Sidenote I'm not going to go into a whole IoC sermon here. Some hidden dependencies are acceptable when they are pure (i.e. stateless) and don't need to be mocked - e.g. 
no one is really enforcing that .NET's Math class be injected, and your Clamper is functionally no different from Math . I'm sure that others will disagree and take the "inject everything" approach. I'm not disagreeing that it can be done, but it's not the focus of this answer as it's not pertinent to the posted question, in my opinion. Clamping? I don't think the clamping method is all that necessary to begin with. public static int ClampToNonNegative(int value)
{
if(value < 0)
return 0;
return value;
} What you've written here is a more limited version of the existing Math.Max() method. Every usage: this.currentHP = Clamper.ClampToNonNegative(this.currentHP - value); can be replaced by Math.Max : this.currentHP = Math.Max(this.currentHP - value, 0); If your method is nothing but a wrapper around a single existing method, it becomes pointless to have it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/400935",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/349267/"
]
} |
400,953 | Following this post https://stackoverflow.com/questions/21554977/should-services-always-return-dtos-or-can-they-also-return-domain-models and best practices in Software Arch suggestions by Martin Fowler https://martinfowler.com/eaaCatalog/serviceLayer.html#:~:text=A%20Service%20Layer%20defines%20an,the%20implementation%20of%20its%20operations A Service Layer defines an application's boundary [Cockburn PloP] and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic I have a problem doing so consider the following: UserService {
UserDto findUser();
} UserService should be fine if used in the controller, where I just need data, so a DTO is sufficient. But here is the problem: if I use this service in another service, say CustomerService, I need the model itself (the User object), since the model should be managed by some persistence context. For example: CustomerService {
void addCustomer() {
Customer customer = new Customer();
User user = userService.findUser(xxx); // BAM compilation fails since findUser returns UserDto not User
customer.setUser(user);
}
} What would be best practice here? Should I create two copies of the findUser method with two different return types, or two copies of the UserService class, one for controller use and the other for service or core package use? Or should I consider the proxy pattern? | A really simple rule of thumb is that you should only use a DTO object when you need to ... transfer data. That means use a DTO at the boundaries in your web API, or when you are sending the object on a message bus. Internally, just use your domain objects. The only reason a DTO should exist is the limitations of the (de)serialization layers. Those need simple objects without complex logic. Recommendation: Use standard models in your services. Convert the model to a DTO in the controller (i.e. in your web application) just prior to serialization. If you use asynchronous messaging, convert the model to a DTO just before pushing the DTO onto the message queue. This allows you to use your services as you desire, and save the DTOs for when they are actually required. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/400953",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92003/"
]
} |
401,126 | I have a coding-style-related question. In my example it is Java, but I think this can be a generic question regarding languages that are changing rapidly. I'm working on a Java code base which was mainly written targeting Java 8. The code was migrated to Java 12. By migration I mean the code can run on JDK 12, so the necessary library changes/additions were done. However, the new language features haven't been used yet (e.g. the var keyword). My question is: what should the approach be to introducing new language elements, from a readability point of view? How important is consistency? I have a few possible approaches in mind: (1) New language features should be used only in new classes. (2) New language features can be used in newly added parts of existing classes (e.g. a new method), even if the rest of the class is not updated. (3) When new language features are added to new parts of existing classes, the rest of the class should be updated too. I know this question is a bit hard to answer (as coding style questions are in general), but still I hope there will be some conclusion. | When in Rome, do as the Romans do. It's nice if an entire code base follows one consistent style. However, that will trap its style in the past. If you're enchanted by some newfangled style, the worst thing you can do is scatter it randomly among code left in the old style. This is why, when you repaint a house a different color, you don't just do it where the paint is damaged. You do it room by room. It's better to make the change within some boundary. Those reading the code should find it easy to predict what style they will find in each place. Changing the style of working code is a refactoring. Don't refactor without tests. And don't refactor everything at once. Do one room, step back, and ask others what they think. When you can't do all that, stick with the old style. Only use new language features that blend well with the old style.
"source": [
"https://softwareengineering.stackexchange.com/questions/401126",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/320642/"
]
} |
401,137 | I am working on an application with multiple threads (using Qt, C++). One of these threads is designed to execute a batch of operations, like reading/writing from/to files as well as creating new ones. Sometimes, while this worker thread is active, I need some sort of confirmation from the user for operations like overwriting a file; I was thinking about doing this by emitting a signal from the worker and waiting for a response from the UI thread, which will show a popup and then give back the result, but I think this is an anti-pattern for asynchronous programming. Are there any good, safe patterns to manage a problem like this? | When in Rome, do as the Romans do. It's nice if an entire code base follows one consistent style. However, that will trap its style in the past. If you're enchanted by some newfangled style, the worst thing you can do is scatter it randomly among code left in the old style. This is why, when you repaint a house a different color, you don't just do it where the paint is damaged. You do it room by room. It's better to make the change within some boundary. Those reading the code should find it easy to predict what style they will find in each place. Changing the style of working code is a refactoring. Don't refactor without tests. And don't refactor everything at once. Do one room, step back, and ask others what they think. When you can't do all that, stick with the old style. Only use new language features that blend well with the old style.
"source": [
"https://softwareengineering.stackexchange.com/questions/401137",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/351073/"
]
} |
401,415 | I have been told by a fellow C programmer to write large applications in several different .c and .h files, and then compile them together. They say it will run faster. Does a multi-file application run faster than a single-file one? If so, what makes it run faster? Also, what other benefits are there to multi-file programming? On which platform(s) do multi-file C programs affect performance? Will a multi-file Windows application run faster than a single-file one? Will a multi-file macOS application run faster than a single-file one? Will a multi-file Ubuntu application run faster than a single-file one? | There are a lot of technical reasons behind using multiple files when writing large complex systems. All of them are meaningless in the face of the best reason to use multiple files: Readability. When I write code that resides in one file, I'm presenting what you need to understand to follow how this part of the system works. Every detail not in this file is abstracted away, represented with a good name that should ensure you can still understand what is happening here without poking your nose into the other files. If I've failed to do that, I've written crappy code and you should call me out for it. In cases like that, multiple files rarely do you any good. Without that consideration I can write the whole program in one file. The CPU won't care. It will just make humans miserable when they try to read it. The traditional technical reason is to separate code into independently deployable units that can change without having to redeploy the whole system. There are cases where that's very important, such as when your software is burned on many chips and you don't want to throw away all the chips just because one needs to change. It's also true that being independently deployable allows compiles to go faster, since you only have to recompile what changed. Even in those cases, though, I'd still argue that the biggest benefit is creating a boundary that limits what you expect your readers to hold in their head at any one time. TL;DR If multi-file programs annoy you because you have to keep looking in multiple files to understand them, you're simply looking at poorly abstracted code with bad names. That shouldn't be what it feels like. Each file should tell one story from one perspective.
"source": [
"https://softwareengineering.stackexchange.com/questions/401415",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
401,594 | Some (link 1, link 2) programming languages allow spaces in their identifiers (e.g. variables, procedures), but most of them don't; instead, programmers usually use camel case, snake case, and other ways to separate words in names. To support spaces or even other Unicode characters, some programming languages
allow encapsulating the name with a certain character to delimit its start and end. Is it a bad idea to allow spaces, or is it just commonly not allowed for historical reasons (when there were more limitations than now, or because it was simply decided not to be worth implementing)? The question is more about the main pros and cons of implementing it in newly created programming languages. Related pages: link 1, link 2. | Consider the following: var [Example Number] = 5;
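// hypothetical syntax: brackets mark where a space-containing identifier starts and ends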
[Example Number] = [Example Number] + 5;
print([Example Number]);
int[] [Examples Array] = new int[25];
[Examples Array][[Example Number]] = [Example Number]; Compare it with the more traditional example: var ExampleNumber = 5;
ExampleNumber = ExampleNumber + 5;
print(ExampleNumber);
int[] ExamplesArray = new int[25];
ExamplesArray[ExampleNumber] = ExampleNumber; I'm pretty sure you noticed that the strain for your brain to read the second example was much lower. If you allow whitespace in an identifier, you'll need to put in some other language element to mark the start and the end of a word. Those delimiters force the brain to do some extra parsing and, depending on which one you pick, create a whole new set of ambiguity issues for the human brain. If you don't use delimiters, and instead try to infer which identifier is meant from context alone, you invite another kind of can of worms: var Example = 5;
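// hypothetical syntax with no delimiters: "Example Number" below is a single identifier, indistinguishable at a glance from Example + Number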
var Number = 10;
var Example Number = Example + Number;
int[] Examples Array = new int[25];
Examples Array[Example Number] = Example Number;
Example Number = Example Number + Example + Number;
print text(Example Number); Perfectly doable. A total pain for your brain's pattern matching. Those examples are painful to read not only because of the choice of the words I'm picking, but also because your brain takes some extra time to identify where each identifier begins and ends. Consider the more regular format, once again: var Example = 5;
var Number = 10;
var ExampleNumber = Example + Number;
int[] ExamplesArray = new int[25];
ExamplesArray[ExampleNumber] = ExampleNumber;
ExampleNumber = ExampleNumber + Example + Number;
printText(ExampleNumber); Do you notice something? The names of the variables are still terrible, but the strain to read them went way down. That happens because your brain now has a natural anchor to identify the beginning and the end of every word, enabling you to abstract away that part of your thinking. You don't need to worry about that context anymore - you see a break in the text, and you know a new identifier is coming. When reading code, your brain doesn't read the words so much as it matches them against what you have in your mind right now. You don't really stop to read "ExampleWord". You see the overall shape of the thing, ExxxxxxWxxd, match it with whatever you have stashed in your mental heap, and then go ahead reading. That's why it is easy to miss mistakes like "ExampleWord = ExapmleWord" - your brain isn't really reading it. You're just matching up similar stuff. Once more, consider the following: Example Word += Example Word + 1; Now imagine yourself trying to debug that code. Imagine how many times you'll miss that extra space in "Example Word". A misplaced letter is already hard as fork to detect at first glance; an extra space is an order of magnitude worse. In the end, it is hard to say that allowing whitespace would make the text more readable. I find it difficult to believe that the added hassle of extra delimiters and the extra overhead on my brain would make this type of functionality worth using if the language I'm working with had it. Personally, I consider it bad design - not because of the hassle for the compiler, interpreter, or whatever, but because my brain trips on those spaces, thinking that a new identifier is about to begin when it is not. In a sense, our brains suffer the same problems as our processors when it comes to branch prediction. So please, be kind to our trains of thought. Don't put whitespace in your identifiers. I completely forgot to mention that a language I use every single day accepts spaces in identifiers - SQL! That doesn't mean it is a good idea to use them, however. Most people I know agree it's a Bad Idea to shove spaces into your identifiers - to the point that it's sometimes a forgotten feature of the language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/401594",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/351728/"
]
} |
401,617 | Many times I want to define an interface with some methods that maintain a behavior relationship between them. However, I feel that many times this relationship is implicit. With that in mind, I asked myself: Is there any way to enforce a behavior relationship between interface methods? I thought about defining this behavior via inheritance (by defining a common implementation). But since C# does not allow multiple inheritance, I believe that many times an interface would be more advisable and that inheritance is not flexible enough. For example: public interface IComponent
{
void Enable();
void Disable();
bool IsEnabled();
} For this interface, I wanted the following relationship to be fulfilled: If Enable() is called, IsEnabled() should return true . If Disable() is called, IsEnabled() should return false . In that example, the behavior constraint that I would want to enforce is: When implementing Enable() , the implementer should ensure that IsEnabled() returns true When implementing Disable() , the implementer should ensure that IsEnabled() returns false Is there a way to enforce this implementation constraint? Or, the fact that I am thinking about enforcing this kind of constraint is itself a sign that there is a flaw in the design? | Well, first of all, let's tweak your interface a bit. public interface IComponent
{
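// same operations as the original interface, but IsEnabled() is now a read-only property instead of a method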
void Enable();
void Disable();
bool IsEnabled { get; }
} Now then. What could potentially go wrong here? For example, could an exception be thrown in the Enable() or Disable() methods? What state would IsEnabled be in then? Even if you use Code Contracts, I don't see how IsEnabled can be correlated to the use of your Enable or Disable methods unless those methods are guaranteed to succeed. IsEnabled should represent the actual state your object is in, not some hypothetical state. That said, all you really need is public interface IComponent
{
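// a single read/write property replaces Enable(), Disable() and IsEnabled(): set it to enable the component, clear it to disable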
bool IsEnabled { get; set; }
} Clear it, and the component disables itself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/401617",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/349267/"
]
} |