source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
280,305 | I've heard time and time again that in object-oriented programming, you should try to split objects that 'do too much' into multiple classes, to avoid the "God Object" problem. This seems like fine advice for a project that has plenty of room to expand, but in our project, our packages are already loaded down with too many objects - some that are very bare-bones - while we also have the problem of very large objects that do too much. Is it a better idea, for code sanitation, to split our larger objects that do too much work into smaller objects? Or is there a limit to the amount of good it can do? | It seems to me that the fundamental principle to apply here would be the Single Responsibility Principle . Does each class have a single, clearly articulated, well-bounded responsibility? Note that I don't mean "does each class do one thing." For example, a repository "Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects." But it still might have several methods that accomplish parts of this overall responsibility. If you find that a clearly-articulated, well-bounded responsibility that should be contained in a single class with multiple methods is instead being split over many smaller classes, then your classes are getting too small. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/280305",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/123958/"
]
} |
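A minimal Java sketch of the repository idea quoted in the answer above: one clearly bounded responsibility (mediating access to domain objects) expressed through several methods in a single class. The `OrderRepository` and `Order` names are illustrative assumptions, not part of the original post.

```java
import java.util.List;
import java.util.Optional;

// One responsibility (access to Order domain objects), several methods.
interface OrderRepository {
    Optional<Order> findById(long id);
    List<Order> findByCustomer(long customerId);
    void save(Order order);
    void delete(Order order);
}

// Hypothetical domain type referenced by the repository.
record Order(long id, long customerId) {}
```

Splitting these four methods into four tiny classes would fragment a single well-bounded responsibility, which is exactly the over-splitting the answer warns against.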
280,408 | I'm wondering if I should defend against a method call's return value by validating that they meet my expectations even if I know that the method I'm calling will meet such expectations. GIVEN User getUser(Int id)
{
User temp = new User(id);
temp.setName("John");
return temp;
} SHOULD I DO void myMethod()
{
User user = getUser(1234);
System.out.println(user.getName());
} OR void myMethod()
{
User user = getUser(1234);
// Validating
Preconditions.checkNotNull(user, "User can not be null.");
Preconditions.checkNotNull(user.getName(), "User's name can not be null.");
System.out.println(user.getName());
} I'm asking this at the conceptual level. If I know the inner workings of the method I'm calling (either because I wrote it or I inspected it), and the logic of the possible values it returns meets my preconditions, is it "better" or "more appropriate" to skip the validation, or should I still defend against wrong values before proceeding with the method I'm currently implementing, even if it should always pass? My conclusion from all answers (feel free to come to your own): Assert when The method has been shown to misbehave in the past The method is from an untrusted source The method is used from other places, and does not explicitly state its post-conditions Do not assert when: The method lives closely to yours (see chosen answer for details) The method explicitly defines its contract with something like proper doc, type safety, a unit test, or a post-condition check Performance is critical (in which case, a debug mode assert could work as a hybrid approach) | That depends on how likely getUser and myMethod are to change, and more importantly, how likely they are to change independently of each other. If you somehow know for certain that getUser will never, ever, ever change in the future, then yes it's a waste of time validating it, as much as it is to waste time validating that i has a value of 3 immediately after an i = 3; statement. In reality, you don't know that. But there are some things you do know: Do these two functions "live together"? In other words, do they have the same people maintaining them, are they part of the same file, and thus are they likely to stay "in sync" with each other on their own? In this case it's probably overkill to add validation checks, since that merely means more code that has to change (and potentially spawn bugs) every time the two functions change or get refactored into a different number of functions. Is the getUser function part of a documented API with a specific contract, and myMethod merely a client of said API in another codebase? If so, you can read that documentation to find out whether you should be validating return values (or pre-validating input parameters!) or if it really is safe to blindly follow the happy path. If the documentation does not make this clear, ask the maintainer to fix that. Finally, if this particular function has suddenly and unexpectedly changed its behavior in the past, in a way that broke your code, you have every right to be paranoid about it. Bugs tend to cluster. Note that all of the above applies even if you are the original author of both functions. We don't know if these two functions are expected to "live together" for the rest of their lives, or if they'll slowly drift apart into separate modules, or if you have somehow shot yourself in the foot with a bug in older versions of getUser. But you can probably make a pretty decent guess. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/280408",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/89630/"
]
} |
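The question's conclusion mentions a debug-mode assert as a hybrid approach. A small, self-contained Java sketch of that idea, assuming hypothetical `UserService` and `User` types in place of the question's `getUser` example:

```java
// Hypothetical stand-ins for the question's getUser example.
interface UserService { User getUser(int id); }

record User(int id, String name) {
    String getName() { return name; }
}

class UserPrinter {
    void printUser(UserService service) {
        User user = service.getUser(1234);
        // Hybrid approach: `assert` costs nothing in production (assertions are
        // disabled unless the JVM is started with -ea), but fails fast while
        // developing and testing.
        assert user != null : "getUser must not return null";
        assert user.getName() != null : "user name must not be null";
        System.out.println(user.getName());
    }
}
```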
280,415 | I asked this question a while ago - the answers were really helpful, and as I read them and the questions that were linked - I also saw this , and the first answer I think really addresses what I thought was the essence of the more powerful type system. I was trying to really understand the pseudo-implementation of the functor that the author gives in his example - and I wondered if anyone can give me a slightly simpler explanation of this part of his answer. Directly quoted from the answer - it's these bits that I don't quite get. for this block of code. interface Functor<A> {
Functor<B> map(Function<A, B> f);
} The type system doesn't allow us to express the invariant that the
map method always returns the same Functor subclass as the receiver. Therefore, there's no statically type-safe manner to invoke a
non-Functor method on the result of map. Is a simple way of looking at this that in a concrete implementation of Functor , you can declare what you want to use as A , but not what you want to use as B, or even the B is the same on both sides of the map function? More generall, my slightly simplistic interpretation of this is that single-kinded types really mean that it limits how far you can "reach" or "specify" the contract you are trying to specify with your types. With higher kinded types, you get much more flexibility on how you specify your types, i.e you can constrain and bind your functions more specifically that you can with simple java generics. ... or I really it could be that I just don't understand what a Type Constructor is or why it's useful! | That depends on how likely getUser and myMethod are to change, and more importantly, how likely they are to change independently of each other . If you somehow know for certain that getUser will never, ever, ever change in the future, then yes it's a waste of time validating it, as much as it is to waste time validating that i has a value of 3 immediately after an i = 3; statement. In reality, you don't know that. But there are some things you do know: Do these two functions "live together"? In other words, do they have the same people maintaining them, are they part of the same file, and thus are they likely to stay "in sync" with each other on their own? In this case it's probably overkill to add validation checks, since that merely means more code that has to change (and potentially spawn bugs) every time the two functions change or get refactored into a different number of functions. Is the getUser function is part of a documented API with a specific contract, and myMethod merely a client of said API in another codebase? If so, you can read that documentation to find out whether you should be validating return values (or pre-validating input parameters!) or if it really is safe to blindly follow the happy path. If the documentation does not make this clear, ask the maintainer to fix that. Finally, if this particular function has suddenly and unexpectedly changed its behavior in the past, in a way that broke your code, you have every right to be paranoid about it. Bugs tend to cluster. Note that all of the above applies even if you are the original author of both functions. We don't know if these two functions are expected to "live together" for the rest of their lives, or if they'll slowly drift apart into separate modules, or if you have somehow shot yourself in the foot with a bug in older versions of getUser. But you can probably make a pretty decent guess. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/280415",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/80091/"
]
} |
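The interface quoted in the question does not compile as written, because `B` is never declared. A compilable Java rendering, with an illustrative `Box` subtype (my assumption, not from the linked answer), makes the quoted limitation concrete: each subtype has to repeat the narrowing by hand, because `Functor` itself cannot say "map returns the receiver's own subclass", which is the invariant higher-kinded types let you state once.

```java
import java.util.function.Function;

// Compilable version of the quoted interface: B must be a method-level type
// parameter, and the most the type can promise is "some Functor<B>".
interface Functor<A> {
    <B> Functor<B> map(Function<A, B> f);
}

// Illustrative subtype: a covariant override narrows the return type, but this
// narrowing cannot be expressed once in Functor for all implementations.
interface Box<A> extends Functor<A> {
    @Override
    <B> Box<B> map(Function<A, B> f);

    A get();
}
```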
280,513 | I have a base class, Base . It has two subclasses, Sub1 and Sub2 . Each subclass has some additional methods. For example, Sub1 has Sandwich makeASandwich(Ingredients... ingredients) , and Sub2 has boolean contactAliens(Frequency onFrequency) . Since these methods take different parameters and do entirely different things, they're completely incompatible, and I can't just use polymorphism to solve this problem. Base provides most of the functionality, and I have a large collection of Base objects. However, all Base objects are either a Sub1 or a Sub2 , and sometimes I need to know which they are. It seems like a bad idea to do the following: for (Base base : bases) {
if (base instanceof Sub1) {
((Sub1) base).makeASandwich(getRandomIngredients());
// ... etc.
} else { // must be Sub2
((Sub2) base).contactAliens(getFrequency());
// ... etc.
}
} So I came up with a strategy to avoid this without casting. Base now has these methods: boolean isSub1();
Sub1 asSub1();
Sub2 asSub2(); And of course, Sub1 implements these methods as boolean isSub1() { return true; }
Sub1 asSub1() { return this; }
Sub2 asSub2() { throw new IllegalStateException(); } And Sub2 implements them in the opposite way. Unfortunately, now Sub1 and Sub2 have these methods in their own API. So I can do this, for example, on Sub1. /** no need to use this if object is known to be Sub1 */
@Deprecated
boolean isSub1() { return true; }
/** no need to use this if object is known to be Sub1 */
@Deprecated
Sub1 asSub1() { return this; }
/** no need to use this if object is known to be Sub1 */
@Deprecated
Sub2 asSub2() { throw new IllegalStateException(); } This way, if the object is known to be only a Base, these methods are un-deprecated, and can be used to "cast" the object to a different type so I can invoke the subclass's methods on it. This seems elegant to me in a way, but on the other hand, I'm kind of abusing Deprecated annotations as a way to "remove" methods from a class. Since a Sub1 instance really is a Base, it does make sense to use inheritance rather than encapsulation. Is what I'm doing good? Is there a better way to solve this problem? | From my perspective: your design is wrong. Translated to natural language, you are saying the following: Given we have animals, there are cats and fish. animals have properties which are common to cats and fish. But that's not enough: there are some properties which differentiate cat from fish, therefore you need to subclass. Now you have the problem that you forgot to model movement.
Okay. That's relatively easy: for(Animal a : animals){
if (a instanceof Fish) ((Fish) a).swim();
if (a instanceof Cat) ((Cat) a).walk();
} But that is a wrong design. The correct way would be: for(Animal a : animals){
a.move();
} Where move would be shared behavior implemented differently by each animal. Since these methods take different parameters and do entirely different things, they're completely incompatible, and I can't just use polymorphism to solve this problem. This means: your design is broken. My recommendation: refactor Base , Sub1 and Sub2 . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/280513",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/110560/"
]
} |
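The answer stops at "refactor Base, Sub1 and Sub2". One common direction for such a refactoring, when the operations genuinely differ per subclass, is a visitor; the sketch below is an assumption on my part (not something the answer prescribes), with the question's method parameters omitted for brevity.

```java
// Double dispatch replaces the instanceof chain and the asSub1()/asSub2() helpers.
interface BaseVisitor {
    void visit(Sub1 sub1);
    void visit(Sub2 sub2);
}

abstract class Base {
    abstract void accept(BaseVisitor visitor);
}

class Sub1 extends Base {
    void makeASandwich() { /* ... */ }
    @Override void accept(BaseVisitor visitor) { visitor.visit(this); }
}

class Sub2 extends Base {
    void contactAliens() { /* ... */ }
    @Override void accept(BaseVisitor visitor) { visitor.visit(this); }
}

// Usage: each Base picks the right branch itself, without casting.
class Dispatcher implements BaseVisitor {
    public void visit(Sub1 sub1) { sub1.makeASandwich(); }
    public void visit(Sub2 sub2) { sub2.contactAliens(); }
}
```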
280,548 | Life, the work day, and personal projects don't always give us the opportunity to commit code at a logical completion (functionality subset programmed, bug fix completely patched, etc.). Sometimes we need to stop working half way through a new method, or through a half-baked fix. Think... The work day ends, or it is time to shut down for the evening. I feel guilty in doing a commit at this point. Mostly because my commit message is some daft comment saying "Started to do blah " or "Did this partially but need to complete". Committing without a nice logical stop feels sloppy, mostly for commit log and historical purposes. But at the same time I don't like to "disconnect" from the code or call it quits with an uncommitted working directory. Maybe unfounded, but also for the sense of not having a dvcs backup at this point in case of a hardware failure. What is the right approach here? When you are at a forced programming stopping point but not a logical stopping point in the code, should you commit, or wait until that logical containment is done to commit? Does the latter contradict the "commit often" mantra? It just seems history and diffing this could become difficult to understand after the fact. | Committing code is cheap in git. You have several options: Commit and amend later $ git commit --all -m "WIP: half-implemented hack" ... time passes ... $ # back to work
$ git commit --all --amend -m "Nice logical atomic commit" Use git stash Same as above, but using a stash: Commands are a little shorter to write A stash by nature looks like work in progress (stashes are named "WIP on master" by default). $ git stash # must go home now ... later ... $ git stash pop # back to work Note that your work will temporarily be reverted, which could be confusing if you forget that you worked on something. It helps to work with gitk or magit (emacs) in order to see stashes. Commit and rebase You can split your work in different units and/or experiments. When you are done, do a git rebase --interactive ( -i ) to edit individual commits (the command expects a base commit as an argument). Then, you can push your changes upstream (or not; it helps to clean regularly). Why commit? You should commit, because even though failures are minimal, the cost of committing is so low that you don't have to take a risk. As noted in the comment, at any point you can choose to have a local branch to keep things organized, if it helps. Basically, anything you do locally is good, as long as you don't push temporary commits. That being said, you can push your half-worked changes to another server, provided you only do it in a personal, private branch. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/280548",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/123656/"
]
} |
280,648 | I'm confused about why we care about different representations for positive and negative zero. I vaguely recall reading claims that having a negative zero representation is extremely important in programming that involves complex numbers. I've never had the opportunity to write code involving complex numbers, so I'm a little baffled about why this would be the case. Wikipedia's article on the concept isn't especially helpful; it only makes vague claims about signed zero making certain mathematical operations simpler in floating point, if I understand correctly. This answer lists a couple of functions that behave differently, and perhaps something could be inferred from the examples if you're familiar with how they might be used. (Although, the particular example of the complex square roots looks flat out wrong , since the two numbers are mathematically equivalent, unless I have a misunderstanding.) But I have been unable to find a clear statement of the kind of trouble you would get into if it wasn't there. The more mathematical resources I've been able to find state that there is no distinguishing between the two from a mathematical perspective, and the Wikipedia article seems to suggest that this is rarely seen outside of computing aside from describing limits. So why is a negative zero valuable in computing? I'm sure I'm just missing something. | You need to keep in mind that in FPU arithmetics, 0 doesn't necessarily has to mean exactly zero, but also value too small to be represented using given datatype, e.g. a = -1 / 1000000000000000000.0 a is too small to be represented correctly by float (32 bit), so it is "rounded" to -0. Now, let's say our computation continues: b = 1 / a Because a is float, it will result in -infinity which is quite far from the correct answer of -1000000000000000000.0 Now let's compute b if there's no -0 (so a is rounded to +0): b = 1 / +0
b = +infinity The result is wrong again because of rounding, but now it is "more wrong" - not only numerically, but more importantly because of different sign (result of computation is +infinity, correct result is -1000000000000000000.0). You could still say that it doesn't really matter as both are wrong. The important thing is that there are a lot of numerical applications where the most important result of the computation is the sign - e.g. when deciding whether to turn left or right at the crossroad using some machine learning algorithm, you can interpret positive value => turn left, negative value => turn right, actual "magnitude" of the value is just "confidence coefficient". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/280648",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92517/"
]
} |
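The behaviour described in the answer can be reproduced directly. A runnable Java sketch using `double` (the answer's example uses `float` and different magnitudes, so the constants here are my own):

```java
public class SignedZeroDemo {
    public static void main(String[] args) {
        double a = -1e-300 / 1e300;       // magnitude too small: underflows to -0.0
        System.out.println(a);            // -0.0
        System.out.println(1.0 / a);      // -Infinity  (sign of the tiny value kept)
        System.out.println(1.0 / 0.0);    // Infinity   (what +0 would have given)
        System.out.println(-0.0 == 0.0);  // true       (== still treats them as equal)
    }
}
```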
280,765 | There are three software projects: A, B and C. A is published to anyone and is licensed under GPL. B extends A, is published too, but has no license information or is mistakenly licensed under LGPL. Basically it violates the license of A by not being GPL. Source code of B is still available. C extends B. Can C be published under GPL? Motivation would be "A is GPL, any derivative must be GPL too, so B is GPL and C can be GPL too". | First off, B is in violation of the GPL on A. But that's not exactly your concern and is irrelevant to the question here (who knows, maybe B got a LGPL license from A on their code so that it may be released under LGPL?). The question is "Can you build a GPL piece of software based on LGPL code?" The answer to this is simply "yes". The LGPL is less restrictive than the GPL (thus why B is in violation of the license on A unless other provisions were made), but also allows it to be brought back into a GPL project fairly easily. From the LGPL license: Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from a header file that is part of the Library. You may convey such object code under terms of your choice, provided that, if the incorporated material is not limited to numerical parameters, data structure layouts and accessors, or small macros, inline functions and templates (ten or fewer lines in length), you do both of the following: a) Give prominent notice with each copy of the object code that the Library is used in it and that the Library and its use are covered by this License. b) Accompany the object code with a copy of the GNU GPL and this license document. Its part of the license. You can easily build a GPL software based on LGPL code. There are some version differences that you'll have to pay attention to to make sure that the code is licensed in the correct way, under the correct version of the GPL. In the event that there is no license information presented, you do not have the right to extend upon it. B should not have been distributed, but its contributions are not licensed under an open source license. This may have been an internal project that got published or some other event. It is not presented under a license that is compatible with extending with the GPL. Consider the situation that a company, using GPL software internally (acceptable - not a violation), mistakingly made their repo public. In this case, it is quite possible that the project C is in violation of copyright infringement itself (the material that B added that is not licensed under the GPL as it should not have been distributed in the first place). One cannot force a license on someone else's source. It is either in compliance with the license, or in violation of it. If it is in violation of it, then as spelled out in the license: You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). A violation of the GPL does not mean that the material is under GPL, but rather that it can't be distributed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/280765",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/176578/"
]
} |
280,776 | I've been working in MVC web frameworks since its they started getting popular with RoR and ASP.NET MVC. I have always been careful to never put "business logic" on my controllers since that couples the framework with the logic. These days, like many of you, I'm not really using the MVC part of my framework, since I'm not returning "views" and framework artifacts. Instead, I'm just using routes to controller actions to build API's that return xml or json. In the past couple years, I've been trying to move closer to CQRS, so my business logic ends up being packaged in commands and handlers with a dispatcher to connect the two. The "real" business logic is actually in my entities, but my command handlers drive the operations. I have also started allowing multiple services to expose different bounded contexts (micro-services). Then I started thinking "outside the box" (outside for me, at least). Why do I bother with dispatcher, commands, and command handlers? Now that my framework is only a networking layer over my bounded context, why not let my controllers be my command handlers? Why not let my dispatcher be the routing system in my framework? Why not let the content of the HTTP request be my command? | First off, B is in violation of the GPL on A. But that's not exactly your concern and is irrelevant to the question here (who knows, maybe B got a LGPL license from A on their code so that it may be released under LGPL?). The question is "Can you build a GPL piece of software based on LGPL code?" The answer to this is simply "yes". The LGPL is less restrictive than the GPL (thus why B is in violation of the license on A unless other provisions were made), but also allows it to be brought back into a GPL project fairly easily. From the LGPL license: Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from a header file that is part of the Library. You may convey such object code under terms of your choice, provided that, if the incorporated material is not limited to numerical parameters, data structure layouts and accessors, or small macros, inline functions and templates (ten or fewer lines in length), you do both of the following: a) Give prominent notice with each copy of the object code that the Library is used in it and that the Library and its use are covered by this License. b) Accompany the object code with a copy of the GNU GPL and this license document. Its part of the license. You can easily build a GPL software based on LGPL code. There are some version differences that you'll have to pay attention to to make sure that the code is licensed in the correct way, under the correct version of the GPL. In the event that there is no license information presented, you do not have the right to extend upon it. B should not have been distributed, but its contributions are not licensed under an open source license. This may have been an internal project that got published or some other event. It is not presented under a license that is compatible with extending with the GPL. Consider the situation that a company, using GPL software internally (acceptable - not a violation), mistakingly made their repo public. In this case, it is quite possible that the project C is in violation of copyright infringement itself (the material that B added that is not licensed under the GPL as it should not have been distributed in the first place). One cannot force a license on someone else's source. It is either in compliance with the license, or in violation of it. If it is in violation of it, then as spelled out in the license: You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). A violation of the GPL does not mean that the material is under GPL, but rather that it can't be distributed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/280776",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/66363/"
]
} |
281,788 | One of the measures of maintainable software is its acceptance of changes later on by developers other than the one(s) who wrote it. But what characteristics does software have when it has been in production for awhile and the client wants a new developer to add a feature or fix a bug? What activities, attitudes, techniques, and/or practices contribute to software being "maintainable" 6 months later? | First off, B is in violation of the GPL on A. But that's not exactly your concern and is irrelevant to the question here (who knows, maybe B got a LGPL license from A on their code so that it may be released under LGPL?). The question is "Can you build a GPL piece of software based on LGPL code?" The answer to this is simply "yes". The LGPL is less restrictive than the GPL (thus why B is in violation of the license on A unless other provisions were made), but also allows it to be brought back into a GPL project fairly easily. From the LGPL license: Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from a header file that is part of the Library. You may convey such object code under terms of your choice, provided that, if the incorporated material is not limited to numerical parameters, data structure layouts and accessors, or small macros, inline functions and templates (ten or fewer lines in length), you do both of the following: a) Give prominent notice with each copy of the object code that the Library is used in it and that the Library and its use are covered by this License. b) Accompany the object code with a copy of the GNU GPL and this license document. Its part of the license. You can easily build a GPL software based on LGPL code. There are some version differences that you'll have to pay attention to to make sure that the code is licensed in the correct way, under the correct version of the GPL. In the event that there is no license information presented, you do not have the right to extend upon it. B should not have been distributed, but its contributions are not licensed under an open source license. This may have been an internal project that got published or some other event. It is not presented under a license that is compatible with extending with the GPL. Consider the situation that a company, using GPL software internally (acceptable - not a violation), mistakingly made their repo public. In this case, it is quite possible that the project C is in violation of copyright infringement itself (the material that B added that is not licensed under the GPL as it should not have been distributed in the first place). One cannot force a license on someone else's source. It is either in compliance with the license, or in violation of it. If it is in violation of it, then as spelled out in the license: You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). A violation of the GPL does not mean that the material is under GPL, but rather that it can't be distributed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/281788",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/66363/"
]
} |
281,827 | Recently I got into a problem with the readability of my code. I had a function that did an operation and returned a string representing the ID of this operation for future reference (a bit like OpenFile in Windows returning a handle). The user would use this ID later to start the operation and to monitor its finish. The ID had to be a random string because of interoperability concerns. This created a method that had a very unclear signature like so: public string CreateNewThing() This makes the intent of the return type unclear. I thought to wrap this string in another type that makes its meaning clearer like thus: public OperationIdentifier CreateNewThing() The type will contain only the string and be used whenever this string is used. It's obvious that the advantage of this way of operation is more type safety and clearer intent, but it also creates a lot more code and code that isn't very idiomatic. On the one hand I like the added safety, but it also creates a lot of clutter. Do you think it is a good practice to wrap simple types in a class for safety reasons? | Primitives, such as string or int , have no meaning in a business domain. A direct consequence of this is that you may mistakenly use an URL when a product ID is expected, or use quantity when expecting price . This is also why Object Calisthenics challenge features primitives wrapping as one of its rules: Rule 3: Wrap all primitives and Strings In the Java language, int is a primitive, not a real object, so it obeys different rules than objects. It is used with a syntax that isn’t object-oriented. More importantly, an int on its own is just a scalar, so it has no meaning. When a method takes an int as a parameter, the method name needs to do all of the work of expressing the intent. If the same method takes an Hour as a parameter, it’s much easier to see what’s going on. The same document explains that there is an additional benefit: Small objects like Hour or Money also give us an obvious place to put behavior that would otherwise have been littered around other classes . Indeed, when primitives are used, it's usually extremely difficult to track the exact location of the code related to those types, often leading to severe code duplication . If there is Price: Money class, it is natural to find the range checking inside. If, instead, a int (worse, a double ) is used to store product prices, who should validate the range? The product? The rebate? The cart? Finally, a third benefit not mentioned in the document is the ability to change relatively easy the underlying type. If today my ProductId has short as its underlying type and later I need to use int instead, chances are the code to change will not span the entire code base. The drawback—and the same argument applies to every rule of Object Calisthenics exercise—is that if quickly becomes too overwhelming to create a class for everything . If Product contains ProductPrice which inherits from PositivePrice which inherits from Price which in turn inherits from Money , this is not clean architecture, but rather a complete mess where in order to find a single thing, a maintainer should open a few dozen files every time. Another point to consider is the cost (in terms of lines of code) of creating additional classes. 
If the wrappers are immutable (as they should be, usually), it means that, if we take C#, you have to have, within the wrapper at least: The property getter, Its backing field, A constructor which assigns the value to the backing field, A custom ToString() , XML documentation comments (which makes a lot of lines), A Equals and a GetHashCode overrides (also a lot of LOC). and eventually, when relevant: A DebuggerDisplay attribute, An override of == and != operators, Eventually an overload of the implicit conversion operator to seamlessly convert to and from the encapsulated type, Code contracts (including the invariant, which is a rather long method, with its three attributes), Several converters which will be used during XML serialization, JSON serialization or storing/loading a value to/from a database. A hundred LOC for a simple wrapper makes it quite prohibitive, which is also why you may be completely sure of the long-term profitability of such wrapper. The notion of scope explained by Thomas Junk is particularly relevant here. Writing a hundred LOCs to represent a ProductId used all over your code base looks quite useful. Writing a class of this size for a piece of code which makes three lines within a single method is much more questionable. Conclusion: Do wrap primitives in classes which have a meaning in a business domain of the application when (1) it helps reducing mistakes, (2) reduces the risk of code duplication or (3) helps changing the underlying type later. Don't wrap automatically every primitive you find in your code: there are many cases where using string or int is perfectly fine. In practice, in public string CreateNewThing() , returning an instance of ThingId class instead of string might help, but you may also: Return an instance of Id<string> class, that is an object of generic type indicating that the underlying type is a string. You have your benefit of readability, without the drawback of having to maintain a lot of types. Return an instance of Thing class. If the user only needs the ID, this can easily be done with: var thing = this.CreateNewThing();
var id = thing.Id; | {
"source": [
"https://softwareengineering.stackexchange.com/questions/281827",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21598/"
]
} |
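The answer's line-count estimate is for C#. As a Java-flavoured sketch of the same conclusion (names reused from the question; this is an illustration, not the poster's code), a `record` supplies the constructor, accessor, `equals`/`hashCode` and `toString` in a single declaration, which lowers the cost of wrapping an ID:

```java
// Wrapper with validation in the compact constructor.
record OperationIdentifier(String value) {
    OperationIdentifier {
        if (value == null || value.isBlank()) {
            throw new IllegalArgumentException("operation id must not be blank");
        }
    }
}

class OperationService {
    // The signature now states its intent: callers get an identifier, not "a string".
    OperationIdentifier createNewThing() {
        return new OperationIdentifier(java.util.UUID.randomUUID().toString());
    }
}
```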
281,882 | I recently was wondering when to use C over C++, and vice versa? Fortunately someone already beat me to it and although it took a while, I was able to digest all the answers and comments to that question. However one item in that post keeps being addressed again and again, without any kind of example, verification or explanation: "C code is good for when you want to have multiple language bindings for your library" That's a paraphrase. I should note that several people point out that multiple language bindings are possible in C++ (via some extern functioning), but nevertheless, if you read that post in its entirety, it's pretty obvious that C is ideal for portability/language binding. My question is: why? Can someone please provide concrete reasons why writing libraries in C allows for easier bindings and/or portability in other languages? | C has a much, much simpler interface, and its rules for converting a source code interface into a binary interface are straightforward enough that generating external interfaces to bind to is done in a well-established manner. C++, on the other hand, has an incredibly complicated interface, and the rules for ABI binding are not standardized at all, neither formally nor in practice. This means that pretty much any compiler for any language for any platform can bind against an external C interface and know exactly what to expect, but for a C++ interface, it's essentially impossible because the rules change depending on which compiler, which version, and which platform the C++ code was built with. In C, there's no standard binary language implementation rules,
either, but it's an order of magnitude simpler and in practice
compilers use the same rules. Another reason making C++ code hard to
debug is the above-mentioned complicated grammar, since debuggers
frequently can't deal with many language features (place breakpoints
in templates, parse pointer casting commands in data display windows,
etc.). The lack of a standard ABI (application binary interface) has another
consequence - it makes shipping C++ interfaces to other teams /
customers impractical since the user code won't work unless it's
compiled with the same tools and build options. We've already seen
another source of this problem - the instability of binary interfaces
due to the lack of compile time encapsulation. -- Defective C++ | {
"source": [
"https://softwareengineering.stackexchange.com/questions/281882",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/154753/"
]
} |
281,913 | For instance, the System.IO.Path.Combine method in .NET has the following overloads: Combine(params String[])
Combine(String, String)
Combine(String, String, String)
Combine(String, String, String, String) What is the point of the last three? The first one would cover them all, as if you look closely, it uses the params keyword. The argument of backwards compatibility would only cover the Combine(String, String) variant, as it was the only version until .NET 4. | The main reason is for performance. The "unlimited arguments" syntactical sugar is actually an array of Strings. If you are only passing one string, why create an array with only one string? Especially if ~90% of the invocations of this method will be with 3 or fewer arguments, there is no need for the heavier weight array object. It's a little lighter in memory and takes a little less processing time because you don't need a loop in order to define the method. If you have three strings, you just code for three strings. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/281913",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57807/"
]
} |
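`Path.Combine` is .NET, but the JDK uses the same overloading pattern for the same reason; `List.of` and `EnumSet.of`, for example, provide fixed-arity overloads before falling back to varargs. An illustrative Java sketch (the `Paths2` class and its naive separator handling are made up for brevity):

```java
final class Paths2 {
    static String combine(String a, String b) {
        return a + "/" + b;                    // common case: no array allocated
    }
    static String combine(String a, String b, String c) {
        return combine(combine(a, b), c);
    }
    static String combine(String... parts) {   // general case: one array per call
        return String.join("/", parts);
    }
    private Paths2() {}
}
```

Overload resolution prefers the fixed-arity methods, so `combine("a", "b")` never allocates the varargs array.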
281,979 | I have a big object: class BigObject{
public int Id {get;set;}
public string FieldA {get;set;}
// ...
public string FieldZ {get;set;}
} and a specialized, DTO-like object: class SmallObject{
public int Id {get;set;}
public EnumType Type {get;set;}
public string FieldC {get;set;}
public string FieldN {get;set;}
} I personally find a concept of explicitly casting BigObject into SmallObject - knowing that it is a one-way, data-losing operation - very intuitive and readable: var small = (SmallObject) bigOne;
passSmallObjectToSomeone(small); It is implemented using explicit operator: public static explicit operator SmallObject(BigObject big){
return new SmallObject{
Id = big.Id,
FieldC = big.FieldC,
FieldN = big.FieldN,
Type = EnumType.BigObjectSpecific
};
} Now, I could create a SmallObjectFactory class with FromBigObject(BigObject big) method, that would do the same thing, add it to dependency injection and call it when needed... but to me it seems even more overcomplicated and unnecessary. PS I'm not sure if this is relevant, but there will be OtherBigObject that will also be able to be converted into SmallObject , setting different EnumType . | It is... Not great. I've worked with code that did this clever trick and it led to confusion. After all, you would expect to be able to just assign the BigObject into a SmallObject variable if the objects are related enough to cast them. It doesn't work though - you get compiler errors if you try since as far as the type system is concerned, they're unrelated. It is also mildly distasteful for the casting operator to make new objects. I would recommend a .ToSmallObject() method instead. It is clearer about what is actually going on and about as verbose. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/281979",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/109857/"
]
} |
283,168 | I want to know why the .compareTo() is in the Comparable interface while a method like .equals is in the Object class. To me, it seems arbitrary why a method like .compareTo() is not in the Object class already. To use .compareTo() , you implement the Comparable interface and implement the .compareTo() method for your purposes. For the .equals() method, you simply override the method in your class, since all classes inherit from the Object class. My question is why is a method like .compareTo() in an interface that you implement rather than in a class like Object ? Likewise, why is the .equals() method in the class Object and not in some interface to be implemented? | Not all objects can be compared, but all objects can be checked for equality. If nothing else, one can see if two objects exist at the same location in memory (reference equality). What does it mean to compareTo() on two Thread objects? How is one thread "greater than" another? How do you compare two ArrayList<T> s? The Object contract applies to all Java classes. If even one class cannot be compared to other instances of its own class, then Object cannot require it to be part of the interface. Joshua Bloch uses the key words "natural ordering" when explaining why a class might want to implement Comparable . Not every class has a natural ordering as I mentioned in my examples above, so not every class should implement Comparable nor should Object have the compareTo method. ...the compareTo method is not declared in Object . ... It is
similar in character to Object 's equals method, except that it
permits order comparisons in addition to simple equality comparisons,
and it is generic. By implementing Comparable , a class indicates
that its instances have a natural ordering . Effective Java, Second Edition : Joshua Bloch. Item 12, Page 62. Ellipses remove references to other chapters and code examples. For cases where you do want to impose an ordering on a non- Comparable class that does not have a natural ordering, you can always supply a Comparator instance to help sort it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/283168",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/179129/"
]
} |
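A small runnable Java illustration of the answer's closing point: `Thread` has no natural ordering and does not implement `Comparable`, but a caller can still impose an ordering for one particular purpose by supplying a `Comparator`.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ThreadOrderingDemo {
    public static void main(String[] args) {
        List<Thread> threads = new ArrayList<>(
                List.of(new Thread("worker-2"), new Thread("worker-1")));
        // The ordering lives with this use site, not with Thread itself.
        threads.sort(Comparator.comparing(Thread::getName));
        threads.forEach(t -> System.out.println(t.getName())); // worker-1, worker-2
    }
}
```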
283,194 | This issue is most apparent when you have different implementations of an interface, and for the purposes of a particular collection you only care about the interface-level view of the objects. For example, suppose you had an interface like this: public interface Person {
int getId();
} The usual way to implement hashcode() and equals() in implementing classes would have code like this in the equals method: if (getClass() != other.getClass()) {
return false;
} This causes problems when you mix implementations of Person in a HashMap . If the HashMap only cares about the interface-level view of Person , then it could end up with duplicates that differ only in their implementing classes. You could make this case work by using the same liberal equals() method for all the implementations, but then you run the risk of equals() doing the wrong thing in a different context (such as comparing two Person s that are backed by database records with version numbers). My intuition tells me that equality should be defined per collection instead of per class. When using collections that rely on ordering, you can use a custom Comparator to pick the right ordering in each context. There is no analogue for hash-based collections. Why is this? Just to clarify, this question is distinct from " Why is .compareTo() in an interface while .equals() is in a class in Java? " because it deals with the implementation of collections. compareTo() and equals() / hashcode() both suffer from the problem of universality when using collections: you can't pick different comparison functions for different collections. So for the purposes of this question, the inheritance hierarchy of an object doesn't matter at all; all that matters is whether the comparison function is defined per-object or per-collection. | This design is sometimes known as "Universal Equality", it is the belief that whether two things are equal or not is a universal property. What's more, equality is a property of two objects, but in OO, you always call a method on one single object , and that object gets to solely decide how to handle that method call. So, in a design like Java's, where equality is a property of one of the two objects being compared, it isn't even possible to guarantee some basic properties of equality such as symmetry ( a == b ⇔ b == a ), because in the first case, the method is being called on a and in the second case it is being called on b and due to the basic principles of OO, it is solely a 's decision (in the first case) or b 's decision (in the second case) whether or not it considers itself equal to the other one. The only way to gain symmetry is to have the two objects cooperate, but if they don't… tough luck. One solution would be to make equality not a property of one object, but either a property of two objects, or a property of a third object. That latter option also solves the problem of universal equality, because if you make equality a property of a third "context" object, then you can imagine having different EqualityComparer objects for different contexts. This is the design chosen for Haskell, for example, with the Eq typeclass. It is also the design chosen by some third-party Scala libraries (ScalaZ, for example), but not the Scala core or standard library, which uses universal equality for compatibility with the underlying host platform. It is, interestingly, also the design chosen with Java's Comparable / Comparator interfaces. The designers of Java clearly were aware of the problem, but for some reason only solved it for ordering, but not for equality (or hashing). So, as to the question why is there a Comparator interface but no Hasher and Equator ? the answer is "I don't know". Clearly, the designers of Java were aware of the problem, as evidenced by the existence of Comparator , but they obviously didn't think it a problem for equality and hashing. Other languages and libraries make different choices. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/283194",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/83385/"
]
} |
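Java never gained the `Equator`/`Hasher` interfaces the answer mentions. A sketch of how per-collection equality can be emulated anyway, by wrapping keys so the wrapper, not the element, supplies `equals()` and `hashCode()`; the `Keyed` type is purely illustrative and not part of the JDK.

```java
import java.util.function.BiPredicate;
import java.util.function.ToIntFunction;

final class Keyed<T> {
    private final T value;
    private final BiPredicate<T, T> equality;
    private final ToIntFunction<T> hasher;

    Keyed(T value, BiPredicate<T, T> equality, ToIntFunction<T> hasher) {
        this.value = value;
        this.equality = equality;
        this.hasher = hasher;
    }

    T value() { return value; }

    @Override
    @SuppressWarnings("unchecked")
    public boolean equals(Object other) {
        return other instanceof Keyed<?> k && equality.test(value, (T) k.value);
    }

    @Override
    public int hashCode() {
        return hasher.applyAsInt(value);
    }
}
```

A `HashMap<Keyed<Person>, V>` whose keys are all wrapped with the same predicate (say, comparing only `getId()`) and a matching hash function then behaves as if the map itself carried the equality rule.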
283,200 | If someone opens an issue on GitHub but more information to reproduce the error is asked and never given, what's the normal procedure? Example . Here the author states that the "nav breaks". While I believe it is fixed, I would like word from the author to make sure we were talking about the same thing. But sometimes the issue's reporter just disappears. Is it a good/common practice to set a expiration date for abandoned issues? Something like these conditions: A question is raised on the issue to be able to debug it. Over 2-6 months have passed since the last unanswered question/comment from the dev team. Bug cannot be reproduced at the time of closing it (for any reason, maybe they could never be reproduced). A warning is issued 2 weeks before closing it. What do projects normally do? I couldn't find anything on Google. Also, how would I document this? Does a simple note in the README.md detailing the points above and a comment in the issue explaining why it's being closed suffice? Note: it's different from this question since the bug might still be relevant (or not), however there's lack of information. | This is a dilemma: you cannot close the issue as "fixed", because you don't actually know if it was fixed, or at least even if some issue was fixed, you don't actually know whether this was the issue the reporter was talking about. On the other hand, you don't want to leave an issue that might have been fixed open, especially if you won't ever be able to close it because you'll never get confirmation. So, you should close it, but probably not as "fixed". You could invent a custom close reason "maybefixed" or "unconfirmedfix" if you want to be positive or "reportervanished" if you don't. You could also just say "could not reproduce", and wait for the same bug to pop up for a more responsive reporter. However, you should not expend resources on a bug for which you will never know whether it was actually fixed or not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/283200",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/71839/"
]
} |
283,316 | I'm working in a project which deals with physical devices, and I've been confused as how to properly name some classes in this project. Considering the actual devices (sensors and receivers) are one thing, and their representation in software is another, I am thinking about naming some classes with the "Info" suffix name pattern. For example, while a Sensor would be a class to represent the actual sensor (when it is actually connected to some working device), SensorInfo would be used to represent only the characteristics of such sensor. For example, upon file save, I would serialize a SensorInfo to the file header, instead of serializing a Sensor , which sort of wouldn't even make sense. But now I am confused, because there is a middleground on objects' lifecycle where I cannot decide if I should use one or another, or how to get one from another, or even whether both variants should actually be collapsed to only one class. Also, the all too common example Employee class obviously is just a representation of the real person, but nobody would suggest to name the class EmployeeInfo instead, as far as I know. The language I am working with is .NET, and this naming pattern seems to be common throughout the framework, for exemple with these classes: Directory and DirectoryInfo classes; File and FileInfo classes; ConnectionInfo class (with no correspondent Connection class); DeviceInfo class (with no correspondent Device class); So my question is: is there a common rationale about using this naming pattern? Are there cases where it makes sense to have pairs of names ( Thing and ThingInfo ) and other cases where there should only exist the ThingInfo class, or the Thing class, without its counterpart? | I think "info" is a misnomer. Objects have state and actions: "info" is just another name for "state" which is already baked into OOP. What are you really trying to model here? You need an object that represents the hardware in software so other code can use it. That is easy to say but as you found out, there is more to it than that. "Representing hardware" is surprisingly broad. An object that does that has several concerns: Low-level device communication, whether it be talking to the USB interface, a serial port, TCP/IP, or proprietary connection. Managing state. Is the device turned on? Ready to talk to software? Busy? Handling events. The device produced data: now we need to generate events to pass to other classes that are interested. Certain devices such as sensors will have fewer concerns than say a printer/scanner/fax multifunction device. A sensor likely just produces a bit stream, while a complex device may have complex protocols and interactions. Anyway, back to your specific question, there are several ways to do this depending on your specific requirements as well as the complexity of the hardware interaction. Here is an example of how I would design the class hierarchy for a temperature sensor: ITemperatureSource: interface that represents anything that can produce temperature data: a sensor, could even be a file wrapper or hard-coded data (think: mock testing). Acme4680Sensor: ACME model 4680 sensor (great for detecting when the Roadrunner is nearby). This may implement multiple interfaces: perhaps this sensor detects both temperature and humidity. This object contains program-level state such as "is the sensor connected?" and "what was the last reading?" Acme4680SensorComm: used solely for communicating with the physical device. It does not maintain much state. 
It is used for sending and receiving messages. It has a C# method for each of the messages the hardware understands. HardwareManager: used for getting devices. This is essentially a factory that caches instances: there should only be one instance of a device object for each hardware device. It has to be smart enough to know that if thread A requests the ACME temperature sensor and thread B requests the ACME humidity sensor, these are actually the same object and should be returned to both threads. At the top level you will have interfaces for each hardware type. They describe actions your C# code would take on the devices, using C# data types (not e.g. byte arrays which the raw device driver might use). At the same level you have an enumeration class with one instance for each hardware type. Temperature sensor might be one type, humidity sensor another. One level below this are the actual classes that implement those interfaces: they represent one device similar the Acme4680Sensor I described above. Any particular class may implement multiple interfaces if the device can perform multiple functions. Each device class has its own private Comm (communication) class that handles the low-level task of talking to the hardware. Outside of the hardware module, the only layer that is visible is the interfaces/enum plus the HardwareManager. The HardwareManager class is the factory abstraction that handles the instantiation of device classes, caching instances (you really do not want two device classes talking to the same hardware device), etc. A class that needs a particular type of sensor asks the HardwareManager to get the device for the particular enum, which it then figures out if it is already instantiated, if not how to create it and initialize it, etc. The goal here is to decouple business logic from low-level hardware logic. When you are writing code that prints sensor data to the screen, that code should not care what type of sensor you have if and only if this decoupling is in place which centers on those hardware interfaces. Note: there are associations between the HardwareManager and each device class that I did not draw because the diagram would have turned into arrow soup. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/283316",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35959/"
]
} |
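A compressed Java sketch of the hierarchy the answer describes (the answer assumes C#/.NET, so this is only an analogue; the method names and stub bodies are my own assumptions, not a real driver API):

```java
interface TemperatureSource {
    double readCelsius();
}

// Device class: program-level state, delegating the wire protocol to a comm object.
class Acme4680Sensor implements TemperatureSource {
    private final Acme4680SensorComm comm = new Acme4680SensorComm();
    private boolean connected;

    void connect() { connected = comm.open(); }

    @Override
    public double readCelsius() {
        if (!connected) throw new IllegalStateException("sensor not connected");
        return comm.requestTemperature();
    }
}

// Comm class: low-level message exchange only, no business state.
class Acme4680SensorComm {
    boolean open() { return true; }               // stub for the real transport
    double requestTemperature() { return 21.5; }  // stub reading
}

// Factory/cache so all callers share one instance per physical device.
class HardwareManager {
    private Acme4680Sensor sensor;

    synchronized TemperatureSource temperatureSource() {
        if (sensor == null) {
            sensor = new Acme4680Sensor();
            sensor.connect();
        }
        return sensor;
    }
}
```

Business code depends only on `TemperatureSource` and the `HardwareManager`, which is the decoupling the answer aims for.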
283,762 | I've seen the discussion at this question regarding how a class that implements an interface would be instantiated. In my case, I'm writing a very small program in Java that uses an instance of TreeMap, and according to everyone's opinion there, it should be instantiated like: Map<X> map = new TreeMap<X>(); In my program, I'm calling the function map.pollFirstEntry(), which is not declared in the Map interface (and a couple of others that are present in the Map interface too). I've managed to do this by casting to a TreeMap<X> everywhere I call this method like: someEntry = ((TreeMap<X>) map).pollFirstEntry(); I understand the advantages of the initialization guidelines as described above for large programs, however for a very small program where this object would not be passed to other methods, I would think it is unnecessary. Still, I'm writing this sample code as part of a job application, and I don't want my code to look bad or cluttered. What would be the most elegant solution? EDIT: I would like to point out that I'm more interested in the broad good coding practices instead of the application of the specific function TreeMap. As some of the answers have already pointed out (and I've marked as answered the first one to do so), the highest abstraction level possible should be used, without losing functionality. | "Programming to an interface" does not mean "use the most abstracted version possible". In that case everyone would just use Object. What it means is that you should define your program against the lowest possible abstraction without losing functionality. If you require a TreeMap then you will need to define a contract using a TreeMap. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/283762",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/179830/"
]
} |
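A small Java illustration of that point (the key/value types and data here are invented): declare the variable as the least abstract type that still carries the operation you need, and the cast disappears. As a side note, TreeMap also implements the NavigableMap interface, which does declare pollFirstEntry(), so that interface is one possible "lowest abstraction that keeps the functionality" for this particular call.

import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class PollFirstExample {
    public static void main(String[] args) {
        // Declared as TreeMap, per the answer: no cast needed at the call site.
        TreeMap<Integer, String> byId = new TreeMap<>();
        byId.put(42, "first");
        Map.Entry<Integer, String> entry = byId.pollFirstEntry();
        System.out.println(entry);

        // Declared against NavigableMap: still an interface, but pollFirstEntry() is available.
        NavigableMap<Integer, String> nav = new TreeMap<>();
        nav.put(7, "only");
        System.out.println(nav.pollFirstEntry());
    }
}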
283,833 | I've found myself in a tough spot as of late. Been working on a game with a programming buddy for nearly 8 months now. We both started off as newcomers to programming around August of last year, he is a 2nd year CS student, I'm an IT support tech by trade and am a self-taught programmer with multitudes of books and online subscriptions. The issue that I've been constantly seeing is that as we write out a chunk of code, it will often be a bit hacked together, have many failings, and if it's a brand new concept to one of us, full of naive solutions. This is fine, we are learning, I expect both of our code to be a bit hacked together on the firs or second pass. The problem presents itself when it comes to actually fixing and refactoring those hacked together behaviors. My partner will hold onto his freshly cobbled together behavior, blatantly refusing to see any error the moment it starts to work. Claiming near perfection from a piece of structure I can't even try to use even if it had comments and appropriately named methods & fields. No matter how hard I try I just cannot get him to see the glaringly obvious flaws that will prevent any further changes or expansion of the behavior without completely breaking it and everything it's so tightly coupled to they might as well be in the same class. Hacked solutions perpetually stay hacked, poorly thought out designs stay the way they where when first conceived and tested. I spend as much time babysitting new code as I do writing it myself, I'm a loss of what to do. My partner lost it tonight, and made it clear that no matter what, no matter the benchmark, no matter the common practice, no matter the irrefutable proof, his code will stay the way he first made it. Even if entire books have been written on why you want to avoid doing something, he will refuse to acknowledge their validity claiming it's just someones opinion. I have a vested interest in our project, however I am not sure if I can continue to work with my partner. I seem to have three options open to me. Stop caring about the codebase functioning past the point of compiling, and just deal with trying to maintain and parse behavior that's barely limping along. Hoping that once things start to seriously break he will see that and try to do more than just put a bandaid over the fundamentally flawed design. Keep up the endless arguments over issues that have been figured out a decade ago by other much more capable individuals. Stop programming on this project, abandoning nearly 10,000 lines of my code and countless hours slaving over design and try and find a new project on my own. What approach can I take to determine if it is worth continuing on with this project with this person? Or what factors should influence my decision? We have written a lot of code and I do not want to give this up unless necessary. | This may be a cultural thing. In some cultures, admitting that you made a mistake is unheard of, and asking someone to admit to making a mistake is about the rudest thing you can do. If it is that situation, run away. In my experience with very smart people, if you tell them that something they are doing is less than perfect, they will either (1) give you a correct and easy to understand reason why what they are doing is actually right, (2) tell you that they know it is wrong, but they have no time to fix it because of priorities, or (3) thank you for pointing it out and fix it. Refusing to learn is about the worst attribute a software developer could have. Leave him to it. 
Life is too short and precious to waste your time on him. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/283833",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/171304/"
]
} |
283,863 | I have just measured a large chunk of PHP code (1153 lines) using PHPMD ( http://phpmd.org/ ) and it tells me the code has an NPath complexity of 16244818757303403077832757824. That looks like a crazily big number to me, suggesting that perhaps PHPMD has broken in some way. Is it even possible for a piece of code written by humans to have such a high NPath complexity? The cyclomatic complexity is 351. Two possibly important details - This was procedural code, mixed in with HTML, and PHPMD will only measure object-oriented code. To get around this, I wrapped the whole file in a class with a single function - this is representative of how it's used. The file consists of a series of nested switch statements, and inside those there are lots of if..else statements - so it's certainly pretty complicated. Edit I want to clarify that I'm not questioning whether PHPMD is lying to me. I know that the code is an awful mess, I just wonder if it's possible for any code to be really that bad. It seems like the answer is yes, it's very possible. | This is entirely possible. Let's assume we have 35 switch-case constructs of 10 cases each, which would give us a rough cyclomatic complexity of 350 when each switch occurs one after the other. The first switch gives us 10 paths. The second switch gives us another independent 10 paths, so that we have 10·10 paths until here. With the third switch, we get 10·10·10=10³ paths, and so on until we get 10 35 paths in total! This is even higher than your result of 1.6·10 28 paths, which is probably due to a different branching factor, and due to nested control flow statements which reduce the number of paths through your code. As a worst case scenario for a given cyclomatic complexity c, we can have a maximum of 2 c acyclic paths through the code (here: 2 351 = 4.6·10 105 ). The tool's judgement is clear: the code you are dealing with is a convoluted, untestable, and unmaintainable mess. Consider splitting it into smaller, independent functions, and abstracting away repetition. E.g. you could separate HTML generation from the main logic of your PHP script. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/283863",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6974/"
]
} |
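The question's code is PHP, but the multiplication effect the answer describes is language-independent. Here is a deliberately tiny Java sketch (an invented example, not the questioner's code) showing why independent branching constructs placed one after another multiply the path count rather than add to it.

public class NPathDemo {
    // Two independent 3-way switches in sequence give 3 * 3 = 9 acyclic paths through this
    // method. Scale the same pattern up to 35 ten-way switches and you get 10^35 paths.
    static String describe(int a, int b) {
        String first;
        switch (a) {            // 3 possible outcomes
            case 0:  first = "zero"; break;
            case 1:  first = "one"; break;
            default: first = "many";
        }
        String second;
        switch (b) {            // 3 more outcomes, independent of the first switch
            case 0:  second = "zero"; break;
            case 1:  second = "one"; break;
            default: second = "many";
        }
        return first + "/" + second;
    }

    public static void main(String[] args) {
        System.out.println(describe(0, 5));   // prints zero/many
    }
}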
283,872 | On an old, large project with technical debt how can you reliably estimate or measure the benefit of refactoring code? For example, say you have some components within a software stack solution written in an older language, and some later components written in a newer language. New features and bug fixes are constantly being added to this solution by a development team. The developers suggest that replacing the existing old language components with the new version would be 'a good thing' However, no extra features would be added by this work and it will cost x dev-days. A sales person suggests adding 'a great new feature'. the company measures the value of his suggestion by using a formula. ie. It will earn the company F million pounds, over t years at a cost of x dev-days work How can the company cost up the refactoring proposal in a meaningful way? and under what circumstances is it likely to be the most profitable option for the company? | It will earn the company F million pounds, over t years at a cost of x dev-days work Which is ignoring maintenance costs, support costs, the cost of sales/marketing, and makes a whole lot of assumptions about how the feature will be taken in the marketplace. But whatever; your question is clear enough about what you're looking for: How can I make a business case for refactoring? The main thing to realize is that time equals money. You can go straight to "5 developers 2 weeks = 80 hours * 5 developers * $50/hr -> $20,000" and that makes sense to business people. Smart business people will note that those 5 developers are being paid either way, with which you counter that it's not spending/saving the $20,000 - it's using the $20,000 in the most profitable manner. Anyways, on with the list. Efficiency - Presumably, you can get more stuff done with C# than VB6. Better tooling, better libraries, whatever. Newer technologies tend to be better at things than older ones. If you can do stuff in less time, that means you're saving the company money. Overhead - Coding isn't the only cost of software. Consider what happens when Windows 2020 comes out. How much time and effort are you going to spend getting that VB6 app working on it? How much more time and effort is that than a C# app? That is saving your company money. Quality - Presumably, you can get stuff done with higher quality in C# than VB6 (or with a cleaner architecture, or whatever else your refactoring target is). Higher quality means less bugs. Less bugs means less cost to fix those bugs, less customer support costs to field those issues, less customer loss due to quality issues, increased sales due to a reputation for quality... all dollars for your company. HR Savings - Let's face facts: nobody wants to work with that shitty VB6 app. That means more people will leave the company leading to time and money spent to replace them. That means it takes more time and money to hire new employees. Worst of all, it means you'll have developers who are totally fine committing career suicide working with that VB6 app. Those developers in turn hasten the death spiral of quality issues. Cutting turnover and hiring times saves your company money. Keeping complacent, crappy developers away saves your company. Morale - Similarly, the programmers that are already there hate working in VB6. They'll do it, and they might even do it well. But it sucks. Maybe not for everyone, but certainly for some. That means more time spent browsing the web to recoup motivation. It means longer lunches. 
It means less getting stuff done while your programmers take longer to recover from the sucky work. Capability - This is less applicable with technology driven refactors, but applies to architecture sort of refactors. Certain code problems actively prevent you from providing cool feature X to make tons of money. Maybe you can't scale. Maybe you can't get at the data to do cool informatics. Maybe the code is such a rat's nest that it is prohibitively difficult to actually do the work. Whatever. Sometimes you're at that point, sometimes you're driving straight towards that point. It's hard to translate to business-speak, but if you can the "this problem is preventing us from taking advantage of opportunities X, Y and Z" can be powerful. All in all, it comes down to "this stuff will help us do our jobs better; if we do our jobs better, we can make/save you more money". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/283872",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/177980/"
]
} |
284,139 | I understand the motive behind the principle of least knowledge, but I find some disadvantages if I try to apply it in my design. One of the examples of this principle (actually of how not to use it), which I found in the book Head First Design Patterns, specifies that it is wrong, in terms of this principle, to call a method on objects that were returned from calling other methods. But it seems that sometimes such a capability is really needed. For example:
I have several classes: video-capture class, encoder class, streamer class, and they all use some basic other class, VideoFrame, and since they interact with each other, they can do for example something like this: streamer class code ...
frame = encoder->WaitEncoderFrame();
frame->DoOrGetSomething();
.... As you can see, this principle is not applied here. Can this principle be applied here, or is it that this principle cannot always be applied in a design like this? | The principle you are talking of (better known as Law of Demeter ) for functions can be applied by adding another helper method to your streamer class like {
frame = encoder->WaitEncoderFrame();
DoOrGetSomethingForFrame(frame);
...
}
void DoOrGetSomethingForFrame(Frame *frame)
{
frame->DoOrGetSomething();
} Now, each function only "talks to friends", not to "friends of friends". IMHO it is a rough guideline which can help to create methods which follow the single responsibility principle more strictly. In a simple case like the one above, it is largely a matter of opinion whether this is really worth the hassle, whether the resulting code is really "cleaner", or whether it will just expand your code formally without any notable gain. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/284139",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/176293/"
]
} |
284,215 | I'm having something of a hard time with designing classes in an OO way. I've read that objects expose their behavior, not their data; therefore, rather than using getter/setters to modify data, the methods of a given class should be "verbs" or actions operating on the object. For example, in an 'Account' object, we would have the methods Withdraw() and Deposit() rather than setAmount() etc. See: Why getter and setter methods are evil . So for example, given a Customer class that keeps a lot of information about the customer, e.g. Name, DOB, Tel, Address etc., how would one avoid getter/setters for getting and setting all those attributes? What 'Behavior' type method can one write to populate all that data? | As stated in quite a few answers and comments, DTOs are appropriate and useful in some situations, especially in transferring data across boundaries (e.g. serializing to JSON to send through a web service). For the rest of this answer, I'll more or less ignore that and talk about domain classes, and how they can be designed to minimize (if not eliminate) getters and setters, and still be useful in a large project. I also won't talk about why to remove getters or setters, or when to do so, because those are questions of their own. As an example, imagine that your project is a board game like Chess or Battleship. You might have various ways of representing this in a presentation layer (console app, web service, GUI, etc.), but you also have a core domain. One class you might have is Coordinate , representing a position on the board. The "evil" way to write it would be: public class Coordinate
{
public int X {get; set;}
public int Y {get; set;}
} (I'm going to be writing code examples in C# rather than Java, for brevity and because I'm more familiar with it. Hopefully that's not a problem. The concepts are the same and the translation should be simple.) Removing Setters: Immutability While public getters and setters are both potentially problematic, setters are the far more "evil" of the two. They're also usually the easier to eliminate. The process is a simple one- set the value from within the constructor. Any methods which previously mutated the object should instead return a new result. So: public class Coordinate
{
public int X {get; private set;}
public int Y {get; private set;}
public Coordinate(int x, int y)
{
X = x;
Y = y;
}
} Note that this doesn't protect against other methods in the class mutating X and Y. To be more strictly immutable, you could use readonly ( final in Java). But either way- whether you make your properties truly immutable or just prevent direct public mutation through setters- it does the trick of removing your public setters. In the vast majority of situations, this works just fine. Removing Getters, Part 1: Designing for Behavior The above is all well and good for setters, but in terms of getters, we actually shot ourselves in the foot before even starting. Our process was to think of what a coordinate is- the data it represents- and create a class around that. Instead, we should have started with what behavior we need from a coordinate. This process, by the way, is aided by TDD, where we only extract classes like this once we have a need for them, so we start with the desired behavior and work from there. So let's say that the first place you found yourself needing a Coordinate was for collision detection: you wanted to check if two pieces occupy the same space on the board. Here's the "evil" way (constructors omitted for brevity): public class Piece
{
public Coordinate Position {get; private set;}
}
public class Coordinate
{
public int X {get; private set;}
public int Y {get; private set;}
}
//...And then, inside some class
public bool DoPiecesCollide(Piece one, Piece two)
{
return one.Position.X == two.Position.X && one.Position.Y == two.Position.Y;
} And here's the good way: public class Piece
{
private Coordinate _position;
public bool CollidesWith(Piece other)
{
return _position.Equals(other._position);
}
}
public class Coordinate
{
private readonly int _x;
private readonly int _y;
public bool Equals(Coordinate other)
{
return _x == other._x && _y == other._y;
}
} ( IEquatable implementation abbreviated for simplicity). By designing for behavior rather than modelling data, we've managed to remove our getters. Note this is also relevant to your example. You may be using an ORM, or display customer information on a website or something, in which case some kind of Customer DTO would probably make sense. But just because your system includes customers and they are represented in the data model does not automatically mean you should have a Customer class in your domain. Maybe as you design for behavior, one will emerge, but if you want to avoid getters, don't create one pre-emptively. Removing Getters, Part 2: External Behaviour So the above is a good start, but sooner or later you will probably run into a situation where you have behavior which is associated with a class, which in some way depends on the class's state, but which doesn't belong on the class. This sort of behavior is what typically lives in the service layer of your application. Taking our Coordinate example, eventually you'll want to represent your game to the user, and that might mean drawing to the screen. You might, for example, have a UI project which uses Vector2 to represent a point on the screen. But it would be inappropriate for the Coordinate class to take charge of converting from a coordinate to a point on the screen- that would be bringing all sorts of presentation concerns into your core domain. Unfortunately this type of situation is inherent in OO design. The first option , which is very commonly chosen, is just expose the damn getters and say to hell with it. This has the advantage of simplicity. But since we're talking about avoiding getters, let's say for argument's sake we reject this one and see what other options there are. A second option is to add some kind of .ToDTO() method on your class. This- or similar- may well be needed anyway, for example when you want to save the game you need to capture pretty much all of your state. But the difference between doing this for your services and just accessing the getter directly is more or less aesthetic. It still has just as much "evil" to it. A third option - which I've seen advocated by Zoran Horvat in a couple of Pluralsight videos- is to use a modified version of the visitor pattern. This is a pretty unusual use and variation of the pattern and I think people's mileage will vary massively on whether it's adding complexity for no real gain or whether it's a nice compromise for the situation. The idea is essentially to use the standard visitor pattern, but have the Visit methods take the state they need as parameters, instead of the class they're visiting. Examples can be found here . For our problem, a solution using this pattern would be: public class Coordinate
{
private readonly int _x;
private readonly int _y;
public T Transform<T>(IPositionTransformer<T> transformer)
{
return transformer.Transform(_x,_y);
}
}
public interface IPositionTransformer<T>
{
T Transform(int x, int y);
}
//This one lives in the presentation layer
public class CoordinateToVectorTransformer : IPositionTransformer<Vector2>
{
private readonly float _tileWidth;
private readonly float _tileHeight;
private readonly Vector2 _topLeft;
public Vector2 Transform(int x, int y)
{
return _topLeft + new Vector2(_tileWidth*x, _tileHeight*y);
}
} As you can probably tell, _x and _y aren't really encapsulated any more. We could extract them by creating an IPositionTransformer<Tuple<int,int>> which just returns them directly. Depending on taste, you may feel this makes the entire exercise pointless. However, with public getters, it's very easy to do things the wrong way, just pulling data out directly and using it in violation of Tell, Don't Ask . Whereas using this pattern it's actually simpler to do it the right way: when you want to create behaviour, you'll automatically start by creating a type associated with it. Violations of TDA will be very obviously smelly and probably require working around a simpler, better solution. In practice, these points make it much easier to do it the right, OO, way than the "evil" way that getters encourage. Finally , even if it isn't initially obvious, there may in fact be ways to expose enough of what you need as behavior to avoid needing to expose state. For example, using our previous version of Coordinate whose only public member is Equals() (in practice it would need a full IEquatable implementation), you could write the following class in your presentation layer: public class CoordinateToVectorTransformer
{
private Dictionary<Coordinate,Vector2> _coordinatePositions = new Dictionary<Coordinate,Vector2>();
public CoordinateToVectorTransformer(int boardWidth, int boardHeight)
{
for(int x=0; x<boardWidth; x++)
{
for(int y=0; y<boardHeight; y++)
{
_coordinatePositions[new Coordinate(x,y)] = GetPosition(x,y);
}
}
}
private static Vector2 GetPosition(int x, int y)
{
//Some implementation goes here...
}
public Vector2 Transform(Coordinate coordinate)
{
return _coordinatePositions[coordinate];
}
} It turns out, perhaps surprisingly, that all the behavior we really needed from a coordinate to achieve our goal was equality checking! Of course, this solution is tailored to this problem, and makes assumptions about acceptable memory usage/performance. It's just an example that fits this particular problem domain, rather than a blueprint for a general solution. And again, opinions will vary on whether in practice this is needless complexity. In some cases, no such solution like this might exist, or it might be prohibitively weird or complex, in which case you can revert to the above three. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/284215",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/180329/"
]
} |
284,280 | Stay with me here. When I say the code "does nothing", I mean something like the following function: void factorial(int n)
{
int result = 1;
for(int i = 1; i <= n; ++i)
{
result *= i;
}
} Sure, this function creates a local variable and computes a value, but it doesn't do anything to advance the program; there is no way to make use of the result of the computation, and it has no side-effects. Keep in mind that this is a contrived example for the purposes of the question, but the code I refer to may be more complicated. When I encounter such code, I often spend some extra time and go through it several times, because I worry that I might be missing something. I could just take the high road, or fix the bug and be done with it, but I worry that this slowdown would reflect badly on me and my performance. So, should I take the high road, and assume the team understands, or should I communicate that the code doesn't do much? If the latter is the better option, how do I do that without being condescending or disrespectful? | The easiest way is to ask from a point of not understanding and ask their opinion. It's no good to say "your code sucks", it's quite another to say "erm, I saw this and I really can't understand what it's doing, or why it's doing it. What do you think?" or similar - you could say what your concern is and ask them to explain it to you, or ask them just if you are right. This way, you bring them into being part of the solution and typically will realize the problem and fix it themselves (or you can offer to do it for them). I'm sure you've written code like that yourself, you just don't see it until someone points it out and you have a d'oh moment. Treat such code not as a huge, glaring error but as just another slight cock up that we all make now and then and you'll be good. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/284280",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/180413/"
]
} |
284,306 | I'm a relatively new developer, fresh from college. While in college and during subsequent job-seeking, I realized that there were a lot of "modern" software development methodologies that my education was lacking: unit testing, logging, database normalization, agile development (vs. generic agile concepts), coding style guides, refactoring, code reviews, no standardized documentation methods (or even requirements), etc. Overall, I didn't see this is a problem. I expected my first job to embrace all of these ideas and to teach them to me on the job. Then I got my first job (full stack web development) at a big corporation and I realized that we do none of these things. In fact I, the least experienced on the team, am the one who is spearheading attempts to bring my team up to speed with "modern" programming techniques - as I worry that not doing so is professional suicide down the road. First I began with logging software (log4J), but then I quickly moved on to writing my own styleguide, then abandoning it for the Google styleguide - and then I realized that our Java web development used hand-written front controllers, so I pushed for our adoption of Spring - but then I realized we had no unit tests, either, but I was already learning Spring... and as you can see, it becomes overwhelming all too quickly, especially when paired with normal development work. Furthermore, it is difficult for me to become "expert" enough in these methodologies to teach anyone else in them without devoting too much time to a single one of them, let alone them all. Of all these techniques, which I see as "expected" in today's software development world, how do I integrate them into a team as a new player without overwhelming both myself and the team? How can I influence my team to become more agile? is related, but I'm not an Agile developer like the asker here, and I'm looking at a much broader set of methodologies than Agile. | It sounds to me like you are putting the cart before the horse. What is the major problem your team is facing and which technologies would help fix it? For example, if there are lots of bugs, particularly regression-type bugs, then unit testing may be a starting point. If your team is lacking time, perhaps a framework may help (medium to long term). If people have difficulty reading each others' code, styling may be useful. Remember that the purpose of the business you work for is to make money, not to make code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/284306",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/161111/"
]
} |
284,415 | Something that I've known for a while but never considered is that in most languages it is possible to give priority to operators in an if statement based on their order. I often use this as a way to prevent null reference exceptions, e.g.: if (smartphone != null && smartphone.GetSignal() > 50)
{
// Do stuff
} In this case the code will result in first checking that the object is not null and then using this object knowing that it exists. The language is clever because it knows that if the the first statement if false then there is no point even evaluating the second statement and so a null reference exception is never thrown. This works the same for and and or operators. This can be useful in other situations too such as checking that an index is in the bounds of an array and such a technique can be performed in various languages, to my knowledge: Java, C#, C++, Python and Matlab. My question is: Is this sort of code a representation of bad practice? Does this bad practice arise from some hidden technical issue (i.e. this could eventually result in an error) or does it lead to a readability issue for other programmers? Could it be confusing? | No, this is not bad practice. Relying on short-circuiting of conditionals is a widely accepted, useful technique--as long as you are using a language that guarantees this behavior (which includes the vast majority of modern languages). Your code example is quite clear and, indeed, that is often the best way to write it. Alternatives (such as nested if statements) would be messier, more complicated, and therefore harder to understand. A couple of caveats: Use common sense regarding complexity and readability The following is not such a good idea: if ((text_string != null && replace(text_string,"$")) || (text_string = get_new_string()) != null) Like many "clever" ways of reducing the lines in your code, over-reliance on this technique can result in hard-to-understand, error-prone code. If you need to handle errors, there is often a better way Your example is fine as long as it is a normal program state for smartphone to be null. If, however, this is an error, you should incorporate smarter error handling. Avoid repeated checks of the same condition As Neil points out, many repeated checks of the same variable in different conditionals are most likely evidence of a design flaw. If you have ten different statements sprinkled through your code that should not be run if smartphone is null , consider whether there is a better way to handle this than checking the variable each time. This isn't really about short-circuiting specifically; it's a more general problem of repetitive code. However, it is worth mentioning, because it is quite common to see code that has many repeated statements like your example statement. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/284415",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/177908/"
]
} |
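As a short Java sketch of the ideas above (the Smartphone class and its method values are invented for illustration): the short-circuited condition next to its nested-if equivalent, and a guard clause as one way to avoid repeating the same null check in front of many statements.

class Smartphone {
    int getSignal()  { return 72; }
    int getBattery() { return 58; }
}

class SignalReport {
    // Relies on short-circuiting: the right-hand side runs only when smartphone != null.
    static void printIfStrong(Smartphone smartphone) {
        if (smartphone != null && smartphone.getSignal() > 50) {
            System.out.println("Good signal");
        }
    }

    // Behaves identically, but gets messier as more conditions pile up.
    static void printIfStrongNested(Smartphone smartphone) {
        if (smartphone != null) {
            if (smartphone.getSignal() > 50) {
                System.out.println("Good signal");
            }
        }
    }

    // When many statements must not run for a null smartphone, one guard clause beats
    // repeating the same check in front of each of them.
    static void printStatus(Smartphone smartphone) {
        if (smartphone == null) {
            return; // or log / throw, depending on whether a null here is an error
        }
        System.out.println("Signal: " + smartphone.getSignal());
        System.out.println("Battery: " + smartphone.getBattery());
    }
}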
284,530 | I've been browsing SQL dumps of some famous CMSes, including Drupal 7, Wordpress (some quite old version), and some custom application based on Python. All of these dumps contained data with string flags instead of integer ones. For example, a post's status was represented as published , closed , or inherit rather than 1 , 2 , or 3 . I have quite limited experience in the design of databases and I have never gone past simple SQL, but I was always taught that I should use numeric/integer flags for data like this. It's obvious that tinyint consumes much less space in a database than, for example, varchar(9) . So what am I missing? Isn't this a waste of data storage and a form of data redundancy? Wouldn't browsing, searching and indexing be a bit faster if these columns used integers instead of strings? | Yes, storing strings instead of numbers can use more space. The reason that high-profile platforms are doing it anyway is that they think the benefits of that solution are greater than the cost. What are the benefits? You can easily read a database dump and understand what it's about without memorizing the enum tables, and even semi-official GUIs might simply use the values themselves rather than transform the record they get. (This is a basic form of disk space/processing time tradeoff.) What about the cost? Data storage capacity hasn't been the bottleneck in CMS for a long time, since disks have gotten so large and so cheap. Programmer time, on the other hand, usually becomes more expensive - so anything that trades development effort for disk space is also a good thing, from a business perspective. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/284530",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57759/"
]
} |
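The same readability-versus-size tradeoff shows up when persisting an enum from application code. As an illustrative sketch only (a JPA-style mapping is assumed here, and the Post and PostStatus names are made up from the question's example values), storing the enum by name keeps raw dumps readable, while storing the ordinal gives the smaller integer column:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.EnumType;
import javax.persistence.Enumerated;
import javax.persistence.Id;

enum PostStatus { PUBLISHED, CLOSED, INHERIT }

@Entity
class Post {
    @Id
    private Long id;

    // EnumType.STRING stores "PUBLISHED" (readable dumps, safe if constants are reordered);
    // EnumType.ORDINAL would store 0 instead (smaller column, opaque dumps).
    @Enumerated(EnumType.STRING)
    @Column(length = 9)
    private PostStatus status = PostStatus.PUBLISHED;
}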
284,561 | I believe I have searched many times about virtual destructors; most results mention the purpose of virtual destructors, and why you need virtual destructors. Also I think in most cases destructors need to be virtual. Then the question is: Why doesn't C++ set all destructors virtual by default? Or, put as other questions: When do I NOT need to use virtual destructors? In which cases should I NOT use virtual destructors? What is the cost of using virtual destructors if I use them even when they are not needed? | If you add a virtual destructor to a class: (1) in most (all?) current C++ implementations, every object instance of that class needs to store a pointer to the virtual dispatch table for the runtime type, and that virtual dispatch table itself is added to the executable image; (2) the address of the virtual dispatch table is not necessarily valid across processes, which can prevent safely sharing such objects in shared memory; (3) having an embedded virtual pointer frustrates creating a class with memory layout matching some known input or output format (for example, so a Price_Tick* could be aimed directly at suitably aligned memory in an incoming UDP packet and used to parse/access or alter the data, or placement- new ing such a class to write data into an outgoing packet); (4) the destructor calls themselves may - under certain conditions - have to be dispatched virtually and therefore out-of-line, whereas non-virtual destructors might be inlined or optimised away if trivial or irrelevant to the caller. The "not designed to be inherited from" argument wouldn't be a practical reason for not always having a virtual destructor if it weren't also worse in a practical way as explained above; but given it is worse, that's a major criterion for when to pay the cost: default to having a virtual destructor if your class is meant to be used as a base class . That's not always necessary, but it ensures the classes in the hierarchy can be used more freely without accidental undefined behaviour if a derived class destructor is invoked using a base class pointer or reference. "in most cases destructors need to be virtual" Not so... many classes have no such need. There are so many examples of where it's unnecessary it feels silly to enumerate them, but just look through your Standard Library or say boost and you'll see there's a large majority of classes that don't have virtual destructors. In boost 1.53 I count 72 virtual destructors out of 494. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/284561",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/196142/"
]
} |
284,815 | Say we want to provide an abstraction of an "account" in a bank. Here's one approach, using a function object in Python: def account():
"""Return a dispatch dictionary representing a bank account.
>>> a = account()
>>> a['deposit'](100)
100
>>> a['withdraw'](90)
10
>>> a['withdraw'](90)
'Insufficient funds'
>>> a['balance']
10
"""
def withdraw(amount):
if amount > dispatch['balance']:
return 'Insufficient funds'
dispatch['balance'] -= amount
return dispatch['balance']
def deposit(amount):
dispatch['balance'] += amount
return dispatch['balance']
dispatch = {'balance': 0,
'withdraw': withdraw,
'deposit': deposit}
return dispatch Here's another approach using type abstraction (i.e., class keyword in Python): class Account(object):
"""A bank account has a balance and an account holder.
>>> a = Account('John')
>>> a.deposit(100)
100
>>> a.withdraw(90)
10
>>> a.withdraw(90)
'Insufficient funds'
>>> a.balance
10
"""
def __init__(self, account_holder):
self.balance = 0
self.holder = account_holder
def deposit(self, amount):
"""Add amount to balance."""
self.balance = self.balance + amount
return self.balance
def withdraw(self, amount):
"""Subtract amount from balance if funds are available."""
if amount > self.balance:
return 'Insufficient funds'
self.balance = self.balance - amount
return self.balance My teacher started the topic "Object oriented programming" by introducing the class keyword, and showing us these bullet points: Object-oriented programming A method for organizing modular programs: Abstraction barriers Message passing Bundling together information and related behavior Do you think the first approach would suffice to satisfy the above definition? If yes, why do we need the class keyword to do object-oriented programming? | Congratulations! You rediscovered the well known fact that object orientation can be done without specific programming language support. It is basically the same way objects are introduced in Scheme in this classic text book . Note that Scheme does not have a class keyword or some kind of equivalent, and objects can be created without having even classes. However, the object orientated paradigm was so successful that lots of languages - and Python is no exception - provide built-in support for it. This is simply to make it easier for developers to use the paradigm and to provide a standard form of object orientation for that language. It is essentially the same reason why lots of languages provide a for loop, though it could be emulated using a while loop with just one or two additional lines of code - simply ease of use . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/284815",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/131582/"
]
} |
284,903 | Currently, I used to create a new branch each time I have to add a new feature to my application. When my feature is finished and functional, I merge it with the master branch. But later, when I need to update this feature (like an improvement) is it better to create a new branch or do I need to rebase the previous with the master, do the update then merge again? For example, I have branch called modeling-member in a Ruby on Rails application. Later, I need to add some attributes to the member model (which was created in this branch). What should I do? Rebase this branch with the master, update the model and merge it again or simply create a new branch? | Create a new branch, because: A brand new branch is less likely to have merge conflicts when you're done and want to merge it into master. Few things are more error-prone than fixing merge conflicts. The feature may have gone through several changes and updates since its original implementation, making the original branch totally obsolete. The only way to bring it up to date is to merge master into the feature branch...and at that point you're just branching off master in a needlessly complicated way. If only for simplicity's sake, it's usually a good idea to have the same workflow for updates, bug fixes and new features. That applies to branching, code reviews, bug tracker usage, and pretty much everything else. The difference between updating an existing feature, adding a new feature and fixing a bug is often subjective anyway. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/284903",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/134886/"
]
} |
284,905 | Can you tell me the value of using PHP encoder (ioncube, phpshield) with currently present service like decry.pt ( http://www.decry.pt/ ) that can easily decode source codes. I have tried decry.pt's free demo. Just drag & drop an encoded source and it will return the decoded one. It is so easy. It seems the value of encoder can be easily be canceled. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/284905",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/181322/"
]
} |
284,937 | As I understand it, implicit conversions can cause errors. But that doesn't make sense -- shouldn't normal conversions also cause errors, then? Why not have len(100) work by the language interpreting (or compiling) it as len(str(100)) especially since that's the only way (I know of) for it to work. The language knows what the error is, why not fix it? For this example I used Python, though I feel that for something this small it's basically universal. | For what it's worth, len(str(100)) , len(chr(100)) and len(hex(100)) are all different. str is not the only way to make it work, since there's more than one different conversion in Python from an integer to a string. One of them of course is the most common, but it doesn't necessarily go without saying that's the one you meant. An implicit conversion literally means, "it goes without saying". One of the practical problems with implicit conversions is that it isn't always obvious which conversion will be applied where there are several possibilities, and this results in readers making errors interpreting the code because they fail to figure out the correct implicit conversion. Everyone always says that what they intended is the "obvious interpretation". It's obvious to them because it's what they meant. It might not be obvious to someone else. This is why (most of the time) Python prefers explicit to implicit, it prefers not to risk it. The main case where Python does do type coercion is in arithmetic. It allows 1 + 1.0 because the alternative would be too annoying to live with, but it doesn't allow 1 + "1" because it thinks you should have to specify whether you mean int("1") , float("1") , ord("1") , str(1) + "1" , or something else. It also doesn't allow (1,2,3) + [4,5,6] , even though it could define rules to choose a result type, just like it defines rules to choose the result type of 1 + 1.0 . Other languages disagree and do have lots of implicit conversions. The more they include, the less obvious they become. Try memorising the rules from the C standard for "integer promotions" and "usual arithmetic conversions" before breakfast! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/284937",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/181372/"
]
} |
285,097 | I'm a software developer. There is a team of testers who follow and run test cases written by the analyst, but also perform exploratory testing. It seems like the testers have been competing to see who opens more bugs, and I've noticed that the quality of bug reports has decreased. Instead of testing functionality and reporting bugs related to the operation of the software, the testers have been submitting bugs about screen enhancements, usability, or stupid bugs. Is this good for the project? If not, how can I (as a software developer), try to change the thinking and attitudes of the team of testers? Another problem is because the deadline is estimated and cannot change, so as the deadline nears, the testers will be scrambling to finish their test cases, and this will cause the quality of the tests decrease. This will cause legitimate bugs to be in the final product received by the client. OBS: This competition is not a practice of the company! It is a competition between only the testers organized by them, and without any prizes. | I do not think it's good that they make a contest out of finding the most bugs. While it is true that their job is to find bugs, their job is not "find the most bugs". Their goal isn't to find the most, their goal is to help improve the quality of the software. Rewarding them for finding more bugs is about the same as rewarding a programmer for writing the most lines of code, rather than the highest quality code. Turning it into a game gives them an incentive to focus on finding many shallow bugs, rather than finding the most critical bugs. As you mention in your edit, this is exactly what is happening in your organization. One could argue that any bug they find is fair game, and that all bugs need to be discovered. However, given that your team likely has limited resources, would you rather have a tester focus several hours or days probing deeply into your system trying to find really big bugs, or spend several hours or days skipping through the app looking for typographical errors and small errors in the alignment of objects on a page? If the company really wants to make a game out of it, give the developers the power to add points to a bug. "stupid bugs" get negative points, hard to find bugs with well written reports get multiple points. This then moves the incentive from "find the most" to "be the best at doing your job". However , this isn't recommended either, because a programmer and QA analyst could work together to artificially pad their numbers. Bottom line: don't make a game out of finding bugs. Find ways in your organization to reward good work and leave it at that. Gamification rewards people for reaching a goal. You don't want a QA analyst to have the goal of "find the most bugs", you want their goal to be "improve the quality of the software". Those two goals are not the same. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285097",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/121459/"
]
} |
285,255 | I've been wrestling with a problem in a Java project about circular references. I'm trying to model a real-world situation in which it seems the objects in question are interdependent and need to know about each other. The project is a generic model of playing a board game. The basic classes are non-specific, but are extended to deal with specifics of chess, backgammon and other games. I coded this up as an applet 11 years ago with half a dozen different games, but the problem is that it's full of circular references. I implemented it back then by stuffing all the intertwined classes in a single source file, but I get the idea that that's bad form in Java. Now I want to implement a similar thing as an Android app, and I want to do things properly. The classes are: RuleBook: an object that can be interrogated for such things as the
initial layout of the Board, other initial game State information
like who moves first, the Moves that are available, what happens to
the game State after a proposed Move, and an evaluation of a current
or proposed board position. Board: a simple representation of a game board, which can be
instructed to reflect a Move. MoveList: a list of Moves. This is dual-purpose: a choice of moves
available at a given point, or a list of moves that have been made in
the game. It could be split into two near-identical classes, but
that's not relevant to the question I'm asking and may complicate it
further. Move: a single move. It includes everything about the move as a list
of atoms: pick up a piece from here, put it down there, remove a
captured piece from there. State: the full state information of a game in progress. Not only
the Board position, but a MoveList, and other state information such
as who is to move now. In chess one would record whether the king
and rooks of each player have been moved. Circular references abound, for example: the RuleBook needs to know about the game State to determine what moves are available at a given time, but the game State needs to query the RuleBook for the initial starting layout and for what side effects accompany a move once it is made (e.g. who moves next). I tried organising the new set of classes hierarchically, with RuleBook at the top as it needs to know about everything. But this results in having to move lots of methods into the RuleBook class (such as making a move) making it monolithic and not particularly representative of what a RuleBook should be. So what's the proper way to organise this? Should I turn RuleBook into BigClassThatDoesAlmostEverythingInTheGame to avoid circular references, abandoning the attempt to model the real-world game accurately? Or should I stick with the interdependent classes and coax the compiler into compiling them somehow, retaining my real-world model? Or is there some obvious valid structure I'm missing? Thanks for any help you can give! | I've been wrestling with a problem in a Java project about circular references. Java's garbage collector doesn't rely on reference counting techniques. Circular references do not cause any kind of problem in Java. Time spent eliminating perfectly natural circular references in Java is time wasted. I coded this up [...] but the problem is that it's full of circular references. I implemented it back then by stuffing all the intertwined classes in a single source file , [...] Not necessary. If you just compile all the source files at once (e.g., javac *.java ), the compiler will resolve all forward references without problems. Or should I stick with the interdependent classes and coax the compiler into compiling them somehow, [...] Yes. Application classes are expected to be interdependent. Compiling all Java source files that belong to the same package at once isn't a clever hack, it's precisely the way Java is supposed to work. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285255",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/181753/"
]
} |
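To make the compilation point above concrete, here is a minimal pair of mutually referencing classes (only the RuleBook/State names are borrowed from the question; the fields and methods are invented). Put them in RuleBook.java and GameState.java and a single javac *.java run compiles both, forward references and all:

// RuleBook.java
public class RuleBook {
    private GameState state;                  // refers to a class defined in another file

    public void attach(GameState state) { this.state = state; }

    public boolean hasMoves() { return state != null && !state.isFinished(); }
}

// GameState.java
public class GameState {
    private final RuleBook rules;             // ...and the reference back the other way

    public GameState(RuleBook rules) { this.rules = rules; }

    public boolean isFinished() { return false; }
}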
285,273 | I always wonder if there is any benefit of publishing programming/code related articles/tips either on ones own website or coding sites? I think it has a benefit that a programmer can have a record of his/her learning and refer them later, other than that? As far as I can imagine, forum participation like stackoverflow offers a place for mutual learning and you may get business also, but what will be returned if I write solution for "How to show toggle drawer on Android activity"... etc. Such writing will only attract junior coders who immediately need solutions for their current problems, they will read it and move on, I doubt this will be ever noticed by business person or senior developers. But lot of programmers do that why? if keeping record of leanings is a reason there can be number of other ways. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285273",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/169750/"
]
} |
285,333 | I started working through an online course on iOS development in the new language from Apple, Swift. The instructor made a point that raised this question in my mind. He said something to the effect of: There's no need to worry about memory management because it's all
taken care of for you by reference counting, and this means there's no
need to worry about understanding how garbage collection works. When I heard this I thought to myself, "why would anyone use garbage collection when you can simply use reference counting?" So how do the two approaches compare? | To understand how the two approaches compare we need to first examine how they work and the weaknesses of each. Automatic Reference counting or ARC, is a form of garbage collection in which objects are deallocated once there are no more references to them, i.e. no other variable refers to the object in particular. Each object, under ARC, contains a reference counter, stored as an extra field in memory, which is incremented every time you set a variable to that object (i.e. a new reference to the object is created), and is decremented every time you set a reference to the object to nil/null, or a reference goes out of scope (i.e. it is deleted when the stack unwinds), once the reference counter goes down to zero, the object takes care of deleting itself, calling the destructor and freeing the allocated memory. This approach has a significant weakness, as we shall see below. "There's no need to worry about memory management because it's all taken care of for you by reference counting," that's actually a misconception you still do need to take care to avoid certain conditions, namely circular references, in order for ARC to function correctly. A circular reference is when an object A holds a strong reference to an object B, which itself holds a strong reference to the same object A, in this situation neither object is going to be deallocated because in order for A to be deallocated its reference counter must be decremented to zero, but at least one of those references is object B, for object B to be deallocated, its reference counter must also be decremented to 0, but at least one of those references is object A, can you see the problem? ARC solves this by allowing the programmer to give compiler hints about how different object references should be treated, there are two types of references: strong references and weak references. Strong references are, as I mentioned above, a type of reference which prolongs the life of the referenced object (increments its reference counter), weak references are a type of reference which does not prolong the life of an object (that is, it does not increment the object's reference counter), but that would mean the referenced object could get deallocated and you'd would be left with an invalid reference pointing to junk memory. In order for this situation to be avoided, the weak reference is set to a safe value (e.g. nil in Objective-C) once the object is deallocated, thus the object has an extra responsibility of keeping track of all weak references and setting them to a safe value once it deletes itself. Weak references are usually used in a child-parent object relation, the parent holds a strong reference to all it's child objects, whereas the child objects hold a weak reference to the parent, the rationale being that in most cases if you no longer care about the parent object, you most likely no longer care about the child objects either. Tracing garbage collection (i.e. what is most often referred to as simply garbage collection) involves keeping a list of all root objects (i.e. those stored in global variables, the local variables of the main procedure, etc) and tracing which objects are reachable (marking each object encountered) from those root objects. 
Once the garbage collector has gone through all the objects referenced by the root objects, the GC now goes through every allocated object, if it is marked as reachable it stays in memory, if it is not marked as reachable it is deallocated, this is known as the mark-and-sweep algorithm. This has the advantage of not suffering from the circular reference problem as: if neither the mutually referenced object A and object B are referenced by any other object reachable from the root objects, neither object A nor object B are marked as reachable and are both deallocated. Tracing garbage collectors run in certain intervals pausing all threads, which can lead to inconsistent performance (sporadic pauses). The algorithm described here is a very basic description, modern GC's are usually much more advanced using an object generation system, tri-color sets etc, and also perform other tasks such as defragmentation of the program's memory space by moving the objects to a contiguous storage space, this is the reason why GC'ed languages such as C# and Java do not allow pointers. One significant weakness of tracing garbage collectors is that class destructors are no longer deterministic, that is the programmer cannot tell when an object is going to be garbage collected in-fact GC'ed languages do not even allow the programmer to specify a class destructor, thus classes can no longer be used to encapsulate the management of resources such as file handles, database connections, etc. The responsibility is left on the programmer to close open files, database connections manually, hence why languages such as Java have a finally keyword (in the try,catch block) to make sure the cleanup code is always executed before the stack unwinds, whereas in C++ (no GC) such resources are handled by a wrapper object (allocated on the stack) which acquires the resource in the constructor and releases it in the destructor, which is always called as the object is removed from the stack. As for performance, both have performance penalties. Automatic reference counting delivers a more consistent performance, no pauses, but slows down your application as a whole as every assignment of an object to a variable, every deallocation of an object, etc, will need an associated incrementation/decrementation of the reference counter, and taking care of reassigning the weak references and calling each destructor of each object being deallocated. GC does not have the performance penalty of ARC when dealing with object references; however, it incurs pauses while it is collecting garbage (rendering unusable for real-time processing systems) and requires a large memory space in order for it to function effectively such that it is not forced to run, thus pausing execution, too often. As you can see both have their own advantages and disadvantages, there is no clear cut ARC is better or GC is better, both are compromises. PS: ARC also becomes problematic when objects are shared across multiple threads requiring atomic incrementation/decrementation of the reference counter, which itself presents a whole new array of complexities and problems. This should answer your question as to "why would anyone use garbage collection". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285333",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20108/"
]
} |
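To make the retain-cycle discussion in the answer above concrete, here is a minimal, self-contained sketch of the counting mechanics written in Java. Java itself uses a tracing collector, so the RefCounted class and its retain/release methods are invented for illustration only, not a real API.
class RefCounted {
    private final String name;
    private int refCount = 0;          // the per-object counter ARC maintains
    RefCounted other;                  // a "strong" reference to another object

    RefCounted(String name) { this.name = name; }

    void retain() { refCount++; }      // a new strong reference was created
    void release() {                   // a strong reference went away
        if (--refCount == 0) {
            System.out.println(name + " deallocated");
            if (other != null) other.release();   // drop our own strong reference
        }
    }
}

public class RetainCycleDemo {
    public static void main(String[] args) {
        RefCounted a = new RefCounted("A");
        RefCounted b = new RefCounted("B");
        a.retain();                    // local variable a references A
        b.retain();                    // local variable b references B
        a.other = b; b.retain();       // A holds a strong reference to B
        b.other = a; a.retain();       // B holds a strong reference to A: a cycle
        a.release();                   // the locals go out of scope...
        b.release();
        // Neither counter ever reaches zero, so nothing is printed:
        // the cycle leaks, exactly as described above.
    }
}
A weak back reference is simply one that skips the retain() call, which is what lets the parent's counter still reach zero.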
285,334 | I wrote a simple program in java to create and maintain Dynamic Arrays: public class DynamicArrays {
private Integer[] input = new Integer[1];
private Integer length = 0;
private Integer capacity = 1;
/**
* Big O Analysis of add:
* 1+1+addBulk
* 2+O(n)
* So it seems add is an O(n) operation!
* But add is only called when capacity is full and how many times can that happen?
* Let's say there are 8 elements added.
* so capacity will become full after 1st, 2nd, 4th additions. so its is Log(n) times?
*/
public void add(Integer i) {
input[length] = i;
length++;
if(capacity <= length)
addBulk();
}
/**
* Big O: O(n)
*/
public void addBulk() {
Integer[] newInput = new Integer[2*capacity];
for(int i=0;i<input.length;i++)
newInput[i] = input[i];
input = newInput;
capacity = capacity*2;
}
} Now I wonder what is the time complexity of the add operation? If addBulk() is not called, it is O(1). Else it is O(n) because addBulk() copies all the elements. However addBulk is called log(n) times of the total input. So is the complexity O(n*log(n)) ? I also read somewhere that the amortized complexity of dynamic arrays addition is O(2*n), hence O(n). I couldn't relate to that point from the code. Can you please clarify? | Look at the total work for a whole sequence of adds instead of the worst case of one call. Over n consecutive adds, addBulk runs only when the length reaches 1, 2, 4, 8, ..., and each run copies only as many elements as the array holds at that moment, so all the copies together cost 1 + 2 + 4 + ... which is always fewer than 2n element moves. Together with the n plain writes that is O(n) work in total, roughly the 2n that the amortized analysis refers to; that is where the "O(2*n), hence O(n)" figure comes from. Spreading O(n) total work over n calls gives an amortized cost of O(1) per add. Multiplying the worst single-call cost O(n) by the log(n) number of resizes overestimates the work, because it assumes every resize copies n elements when almost all of them copy far fewer. (A small counting sketch follows below.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285334",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/181859/"
]
} |
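As a concrete check of the doubling argument, here is a small counting sketch in Java that mirrors the question's addBulk logic and tallies how many element copies n adds actually cause; the copies counter is added purely for illustration.
public class AmortizedCountDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        int capacity = 1, length = 0;
        long copies = 0;
        Integer[] input = new Integer[capacity];
        for (int v = 0; v < n; v++) {
            input[length++] = v;
            if (capacity <= length) {                  // grow: copy everything once
                Integer[] bigger = new Integer[2 * capacity];
                for (int i = 0; i < input.length; i++) {
                    bigger[i] = input[i];
                    copies++;
                }
                input = bigger;
                capacity *= 2;
            }
        }
        // Prints a number below 2 * n: the total copying is O(n), so each
        // add costs O(1) amortized even though a single resize is O(n).
        System.out.println(copies + " copies for " + n + " adds");
    }
}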
285,433 | Previously, I've only used Object Oriented Programming languages (C++, Ruby, Python, PHP), and am now learning C. I'm finding it difficult to figure out the proper way to do things in a language with no concept of an 'Object'. I realize that it's possible to use OOP paradigms in C, but I'd like to learn the C-idiomatic way. When solving a programming problem, the first thing I do is to imagine an object that will solve the problem. What steps do I replace this with, when using a non-OOP Imperative Programming paradigm? | A C program is a collection of functions. A function is a collection of statements. You can encapsulate data with a struct . That's it. How did you write a class? That's pretty much how you write a .C file. Granted, you don't get things like method polymorphism and inheritance, but you can simulate those with different function names and composition anyway. To pave the way, study Functional Programming. It's really quite amazing what you can do without classes, and some things actually work better without the overhead of classes. Further Reading Object-Orientation in ANSI C | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285433",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/181961/"
]
} |
285,527 | So, in my efforts to write a program to conjugate verbs (algorithmically, not through a dataset) for French, I've come across a slight problem. The algorithm to conjugate the verbs is actually fairly simple for the 17-or-so cases of verbs, and runs on a particular pattern for each case; thus, the conjugation suffixes for these 17 classes are static and will (very likely) not change any time soon. For example: // Verbs #1 : (model: "chanter")
terminations = {
ind_imp: ["ais", "ais", "ait", "ions", "iez", "aient"],
ind_pre: ["e", "es", "e", "ons", "ez", "ent"],
ind_fut: ["erai", "eras", "era", "erons", "erez", "eront"],
participle: ["é", "ant"]
}; These are inflectional suffixes for the most common class of verb in French. There are other classes of verbs (irregulars), whose conjugations will also very likely remain static for the next century or two. Since they're irregular, their complete conjugations must be statically included, because they can't reliably be conjugated from a pattern (there are also only [by my count] 32 irregulars). For example: // "être":
forms = {
ind_imp: ["étais", "étais", "était", "étions", "étiez", "étaient"],
ind_pre: ["suis", "es", "est", "sommes", "êtes", "sont"],
ind_fut: ["serai", "seras", "sera", "serons", "serez", "seront"],
participle: ["été", "étant"]
}; I could put all this into XML or even JSON and deserialize it when it needs to be used, but is there a point? These strings are part of natural language, which does change, but at a slow rate. My concern is that by doing things the "right" way and deserializing some data source, I've not only complicated the problem which doesn't need to be complicated, but I've also completely back-tracked on the whole goal of the algorithmic approach: to not use a data source! In C#, I could just create a class under namespace Verb.Conjugation (e.g. class Irregular ) to house these strings in an enumerated type or something, instead of stuffing them into XML and creating a class IrregularVerbDeserializer . So the question: is it appropriate to hard-code strings that are very unlikely to change during the lifetime of an application? Of course I can't guarantee 100% that they won't change, but the risk vs cost is almost trivial to weigh in my eyes - hardcoding is the better idea here. Edit : The proposed duplicate asks how to store a large number of static strings , while my question is when should I hard-code these static strings . | is it appropriate to hard-code strings that are very unlikely to change during the lifetime of an application? Of course I can't guarantee 100% that they won't change, but the risk vs cost is almost trivial to weigh in my eyes - hardcoding is the better idea here It looks to me that you answered your own question. One of the biggest challenges we face is to separate out the stuff that's likely to change from the stuff that won't change. Some people go nuts and dump absolutely everything they can into a config file. Others go to the other extreme and require a recompile for even the most obvious changes. I'd go with the easiest approach to implement until I found a compelling reason to make it more complicated. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285527",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/109112/"
]
} |
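For comparison, this is roughly what the 'just hard-code it' option recommended above could look like in Java; the question's snippets are JavaScript/C#, so the class and constant names below are my own illustration rather than part of any existing codebase.
// Endings for the most common French verb class (the "chanter" model),
// hard-coded because they change at the pace of the language itself.
public final class RegularErVerbEndings {
    public static final String[] IND_IMP = {"ais", "ais", "ait", "ions", "iez", "aient"};
    public static final String[] IND_PRE = {"e", "es", "e", "ons", "ez", "ent"};
    public static final String[] IND_FUT = {"erai", "eras", "era", "erons", "erez", "eront"};
    public static final String[] PARTICIPLE = {"é", "ant"};

    private RegularErVerbEndings() {}   // constants only, no instances

    public static void main(String[] args) {
        String stem = "chant";
        System.out.println(stem + IND_PRE[3]);   // "chantons" (nous chantons)
    }
}
If the data ever did need to change without a recompile, moving these arrays into a resource file later is a small, mechanical refactoring.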
285,540 | Often times at work we opt to create views in the database to expose the data that we want to work with instead of building some monster query in our code. Being somewhat new to this field my assumptions are that the view is just another query that is added onto whatever query I pass at run-time. Sometimes we are returning hundreds of thousands of results, even millions sometimes and this can take some times. So, my question is does a view affect the query time and/or performance of the query? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285540",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/182092/"
]
} |
285,552 | Is there any standard on what traits/categories to give to tests? Here is an example: using Xunit;
public class XUnitTest1
{
[Fact]
[Trait("Category", "CI")]
public void TestMethodX1() { ... }
[Fact]
[Trait("Category", "CI")]
public void TestMethodX2() { ... }
[Fact]
[Trait("Category", "CI")]
[Trait("Runningtime", "Short")]
[Trait("Priority", "2")]
[Trait("Owner", "Terje")]
public void TestMethodX3()
{
var sut = new SomeClasses.VerySimpleMath();
int result = sut.Add(2, 3);
Assert.Equal(result, 5);
}
} I guess that this is mostly used to reduce the time of executing when some are long but is there any standard? Should one bother starting giving traits/categories right from the start or only when tests are taking longer to run or when the need arises? What kind or traits/categories do people usually end up with? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285552",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39464/"
]
} |
285,677 | Why don't Windows/Linux use relational databases ( RDBMS )? I know they use file systems to store all data but don't you think it is more efficient to use databases like we use in web sites/web apps? Please elaborate on the use of a file system over a database for storage. This is not a duplicate of When should use of database be preferred over parsing data from a text file? I am talking in terms of only operating system contexts, and that question is generalized. | Today, most database management systems (e.g. PostGreSQL , MongoDB , etc...) internally keep their data inside OS files (in the past, some DBMSs used raw disk partitions directly). On recent computers still using spinning hard disks , the disk is so slow - relative to the CPU or the RAM - that adding a few software layers is not relevant. SSD technology might change that a bit, and some file systems are optimized for SSDs. Files are present in most OSes in general for historical and social reasons (in particular, C compilers and most tools - editors, linkers - want files, so there is a chicken and egg issue), and because there are a lot of very good file system implementations. BTW, some essential system facilities can use databases. For example on Linux PAM can be configured to use information in databases (but this is rarely done in practice). Also, some mail servers may store some or most of their data in databases (e.g. Exim ). Files are slightly lower abstractions than databases, so they can be easier to implement (as the file systems & VFS layer in the Linux kernel) and faster to use. In particular, the operations on files are much more restricted than those on databases. In fact, you could see files or file systems as some very restricted databases! You could design an operating system without any files , but with some other orthogonal persistence machinery (e.g. having every process be persistent, then you don't care much explicitly about storage, since the OS is managing persistent resources). This has been done in several academic operating systems (1) (and also in the Smalltalk and Lisp machines of the 1980s, somehow in the IBM System i , a.k.a. AS/400 , and in some toy projects linked from osdev ), but when you design your OS this way you cannot leverage on many existing tools (e.g. you also need to make your compiler and your user interface from scratch, and that is a lot of work). Notice that microkernel operating systems might not need files provided by kernel layers since the file systems are just application servers (e.g. Hurd translators running in userland). Look also at the unikernel approach in today's MirageOS Linux (and probably Windows, which got most of its inspiration from VMS & Unix ) need files to work. At the very least, the init program (the first program started by the kernel) must be an executable stored in a file (often /sbin/init , but it could be systemd these days), and (nearly) all other programs are started with execve(2) syscall so must be stored in a file. However, FUSE enables you to give file-like semantics to non-file things. Notice also that on Linux (and perhaps even Windows, which I don't know and never used) sqlite is a library managing some SQL database in a files and providing an API for that. It is widely known that Android (a Linux variant) uses a lot of sqlite files (but it still does have a POSIX-like file system). Read also about application checkpointing (which, on many current OSes, is implemented to write the process state in files). 
Pushed to the extreme, that approach does not need to manually write application files (but only to persist the entire process state using the checkpointing machinery). Actually, the interesting question is why do current operating systems still use files, and the answer is legacy, and economic and cultural reasons (sadly, most programming languages and libraries today still want files). Note 1: persistent academic OSes include Lisaac & Grasshopper , but these academic projects seem to be inactive. Look also into http://tunes.org/ ; it is inactive, but has gotten lots of discussions around such topics. Note 2: the notion of file has widely changed over time (look at this answer about my first programming experiences): the first MSDOS on 1980s IBM PCs (no directories!), the VMS -on 1978 Vaxen - (had both fixed-record files and sequential files, with a primitive versioning system), the 1970s mainframes ( IBM/370 with OS/VS2 MVS ) had a very different notion of files and file systems (in particular because at their time the ratio of hard disk access time to core memory access time was a few thousand - so at that time disk ran relatively faster than today, even if today's disks are absolutely faster than in the previous century, today the CPU / disk speed ratio is about a million; but we now have SSDs). Also, files are less (or even not) useful when the memory is persistent (as on CAB500 magnetic drum, 1960s; or future computers using MRAM ) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285677",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/127193/"
]
} |
285,787 | I'm having some discussions with my new colleagues regarding commenting. We both like Clean Code , and I'm perfectly fine with the fact that inline code comments should be avoided and that class and methods names should be used to express what they do. However, I'm a big fan of adding small class summaries that tries to explain the purpose of the class and what is actually represents, primarily so that its easy to maintain the single responsibility principle pattern. I'm also used to adding one-line summaries to methods that explains what the method is supposed to do. A typical example is the simple method public Product GetById(int productId) {...} I'm adding the following method summary /// <summary>
/// Retrieves a product by its id, returns null if no product was found.
/// </summary I believe that the fact that the method returns null should be documented. A developer that wants to call a method should not have to open up my code in order to see if the method returns null or throws an exception. Sometimes it's part of an interface, so the developer doesn't even know which underlying code is running? However, my colleagues think that these kinds of comments are " code smell " and that "comments are always failures" ( Robert C. Martin ). Is there a way to express and communicate these types of knowledge without adding comments? Since I'm a big fan of Robert C. Martin, I'm getting a bit confused. Are summaries the same as comments and therefore always failures? This is not a question about in-line comments. | As others have said, there's a difference between API-documenting comments and in-line comments. From my perspective, the main difference is that an in-line comment is read alongside the code , whereas a documentation comment is read alongside the signature of whatever you're commenting. Given this, we can apply the same DRY principle. Is the comment saying the same thing as the signature? Let's look at your example: Retrieves a product by its id This part just repeats what we already see from the name GetById plus the return type Product . It also raises the question what the difference between "getting" and "retrieving" is, and what bearing code vs. comment has on that distinction. So it's needless and slightly confusing. If anything, it's getting in the way of the actually useful, second part of the comment: returns null if no product was found. Ah! That's something we definitely can't know for sure just from the signature, and provides useful information. Now take this a step further. When people talk about comments as code smells, the question isn't whether the code as it is needs a comment, but whether the comment indicates that the code could be written better, to express the information in the comment. That's what "code smell" means- it doesn't mean "don't do this!", it means "if you're doing this, it could be a sign there's a problem". So if your colleagues tell you this comment about null is a code smell, you should simply ask them: "Okay, how should I express this then?" If they have a feasible answer, you've learned something. If not, it'll probably kill their complaints dead. Regarding this specific case, generally the null issue is well known to be a difficult one. There's a reason code bases are littered with guard clauses, why null checks are a popular precondition for code contracts, why the existence of null has been called a "billion-dollar mistake". There aren't that many viable options. One popular one, though, found in C# is the Try... convention: public bool TryGetById(int productId, out Product product); In other languages, it may be idiomatic to use a type (often called something like Optional or Maybe ) to indicate a result that may or may not be there: public Optional<Product> GetById(int productId); So in a way, this anti-comment stance has gotten us somewhere: we've at least thought about whether this comment represents a smell, and what alternatives might exist for us. Whether we should actually prefer these over the original signature is a whole other debate, but we at least have options for expressing through code rather than comments what happens when no product is found. 
You should discuss with your colleagues which of these options they think is better and why, and hopefully help move on beyond blanket dogmatic statements about comments. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285787",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/159952/"
]
} |
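The same 'say it in the signature' idea maps directly onto Java's Optional. The repository below is a hypothetical sketch with invented names, not code from the original post:
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class Product {
    final int id;
    final String name;
    Product(int id, String name) { this.id = id; this.name = name; }
}

class ProductRepository {
    private final Map<Integer, Product> storage = new HashMap<>();
    ProductRepository() { storage.put(1, new Product(1, "Widget")); }

    // The return type documents the "not found" case; no comment needed.
    Optional<Product> findById(int id) {
        return Optional.ofNullable(storage.get(id));
    }
}

public class OptionalReturnDemo {
    public static void main(String[] args) {
        ProductRepository repo = new ProductRepository();
        String name = repo.findById(42)
                          .map(p -> p.name)
                          .orElse("no such product");
        System.out.println(name);   // "no such product"
    }
}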
285,860 | One of the founding principles of the Agile Manifesto is Agile processes promote sustainable development. The sponsors,
developers, and users should be able to maintain a constant pace
indefinitely. Scrum teams use the term sprint to refer to a work cycle (also known as an iteration). However this doesn't make sense to me. According to Google a sprint is: run at full speed over a short distance. In other words it's not sustainable. Why do Scrum teams use the word sprint ? It appears to me to conflict one of the basic principles of Agile. | In other words it's not sustainable. Right. You don't run a sprint for months at a time in most Agile (well functioning ones, I'm sure some "we wanted buzzwords so we're an Agile waterfall shop" do), you have short sprints, followed by new planning/retros/etc. That's the point. Why do Scrum teams use the word "Sprint"? It appears to me to conflict one of the basic principals of Agile. The basic principles of Agile are relatively broad, but the main point is to not run a "marathon" that's planned initially (ie waterfall), but to break it into very short pieces. Hence, "sprint." As for where the term came from within Agile, the SCRUM Development Process seminal work used the term. I suspect no one has changed it since. For those of you curious about length, from that work: A Sprint is a set of development activities conducted over a pre-defined period, usually
one to four weeks. The interval is based on product complexity, risk assessment, and
degree of oversight desired. Sprint speed and intensity are driven by the selected duration of the Sprint. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285860",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29899/"
]
} |
285,881 | Plain object keys must be strings, whereas a Map can have keys of any type. But I have little use for this in practice. In nearly all cases, I find myself using strings as keys anyway. And presumably new Map() is slower than {} . So is there any other reason why it might be better to use a Map instead of a plain object? | There are some reasons why I prefer using Map s over plain objects ( {} ) for storing runtime data (caches, etc): The .size property lets me know how many entries exist in this Map; The various utility methods - .clear() , .forEach() , etc; They provide me iterators by default! Every other case, like passing function arguments, storing configurations and etc, are all written using plain objects. Also, remember: Don't try to optimize your code too early. Don't waste your time doing benchmarks of plain object vs Maps unless your project is suffering performance problems. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285881",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30385/"
]
} |
285,885 | I've got a package.json that's expecting a SPDX-approved license acronym, but I can't find one that means 'proprietary commercial license, all rights reserved'. Is there one for non-FOSS, where I want to specify that I want to allow no reuse? | As of npm 3.10 you have to use UNLICENSED : { "license": "UNLICENSED"} or { "license": "SEE LICENSE IN <filename>"} The value of license must be either one of the options above or the identifier for the license from this list of SPDX licenses . Any other value is not valid. The following is no longer valid for current versions of npm. For npm versions before 3.10 you may use: { "license" : "LicenseRef-LICENSE" } Then include a LICENSE file at the top level of the package. It could be as short as: (c) Copyright 2015 person or company, all rights reserved. But you might want to be more explicit about what is not allowed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285885",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81537/"
]
} |
285,888 | There are a few number of tools out there ( Excelsior JET , etc.) that claim to transform Java app's into native executables ( *.exe ). However it is my understanding that these tools are really just creating native wrappers that invoke/execute java from a shell or command-line. If that understanding is incorrect, I don't see how it could be. If a running JVM ( java process) is essentially a high performance interpreter, loading bytecode from Java classfiles on the fly, then I don't see how a Java app (a collection of bytecode files that serve as input to a JVM) could ever be truly converted into an executable. This is because the JVM process is already a native executable that takes sets of bytecode files as input. To merge those bytecode files and the JVM process into a single, unified native executable doesn't seem possible without completely rewriting the JVM and de-railing from the JVM specification. So I ask: how do these tools actually "transform" Java class files into a native executable, or do they? | All programs have a runtime environment. We tend to forget this, but its there. Standard lib for C that wraps system calls to the operating system. Objective-C has its runtime that wraps all of its message passing. With Java, the runtime is the JVM. Most of the Java implementations that people are familiar with are similar to the HotSpot JVM which is a byte code interpreter and JIT compiler. This doesn't have to be the only implementation. There is absolutely nothing saying you can't build a standard lib-esque runtime for Java and compile the code to native machine code and run that within the runtime that handles calls for new objects into mallocs and file access into system calls on the machine. And thats what the Ahead Of Time (AOT rather than JIT) compiler does. Call that runtime what you will... you could call it a JVM implementation (and it does follow the JVM specification) or a runtime environment or standard lib for Java. Its there and it does essentially the same thing. It could be done either by reimplementing javac to target the native machine (that's kind of what GCJ did). Or it could be done with translating the byte code generated by javac into machine (or byte) code for another machine - that's what Android does. Based on Wikipedia that's what Excelsior JET does too ("The compiler transforms the portable Java byte code into optimized executables for the desired hardware and operating system (OS)"), and the same is true for RoboVM . There are additional complications with Java that means this is very hard to do as an exclusive approach. Dynamic loading of classes ( class.forName() ) or proxied objects require dynamics that AOT compilers do not easily provide and so their respective JVMs must also include either a JIT compiler (Excelsior JET) or an interpreter (GCJ) to handle classes that couldn't be precompiled into native. Remember, the JVM is a specification , with many implementations . The C standard library is also a specification with many different implementations. With Java8, a fair bit of work has been done on AOT compilation. At best, one can only summarize AOT in general within the confines of textbox. However, in the JVM Language Summit for 2015 (August of 2015), there was a presentation: Java Goes AOT (youtube video). This video is 40 minutes long and goes into many of the deeper technical aspects and performance benchmarks. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285888",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/154753/"
]
} |
285,941 | After reading many posts explaining closures here I'm still missing a key concept: Why write a closure? What specific task would a programmer be performing that might be best served by a closure? Examples of closures in Swift are accesses of an NSUrl and using the reverse geocoder. Here is one such example. Unfortunately, those courses just present the closure; they do not explain why the code solution is written as a closure. An example of a real world programming problem that might trigger my brain to say, "aha, I should write a closure for this", would be more informative than a theoretical discussion. There is no shortage of theoretical discussions available on this site. | By way of explanation, I'm going to borrow some code from this excellent blog post about closures . It's JavaScript, but that's the language most blog posts that talk about closures use, because closures are so important in JavaScript. Let's say you wanted to render an array as an HTML table. You could do it like this: function renderArrayAsHtmlTable (array) {
var table = "<table>";
for (var idx in array) {
var object = array[idx];
table += "<tr><td>" + object + "</td></tr>";
}
table += "</table>";
return table;
} But you're at the mercy of JavaScript as to how each element in the array will be rendered. If you wanted to control the rendering, you could do this: function renderArrayAsHtmlTable (array, renderer) {
var table = "<table>";
for (var idx in array) {
var object = array[idx];
table += "<tr><td>" + renderer(object) + "</td></tr>";
}
table += "</table>";
return table;
} And now you can just pass a function that returns the rendering you want. What if you wanted to display a running total in each Table Row? You would need a variable to track that total, wouldn't you? A closure allows you to write a renderer function that closes over the running total variable, and allows you to write a renderer that can keep track of the running total: function intTableWithTotals (intArray) {
var total = 0;
var renderInt = function (i) {
total += i;
return "Int: " + i + ", running total: " + total;
};
return renderArrayAsHtmlTable(intArray, renderInt);
} The magic that is happening here is that renderInt retains access to the total variable, even though renderInt is repeatedly called and exits. In a more traditionally object-oriented language than JavaScript, you could write a class that contains this total variable, and pass that around instead of creating a closure. But a closure is a much more powerful, clean and elegant way of doing it. Further Reading MDN on Closures How do JavaScript closures work? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/285941",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/182561/"
]
} |
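The answer's closing remark, that an object-oriented language could use a small class holding the total instead of a closure, might look like this in Java; the interface choice and names are mine, offered as a sketch rather than a canonical translation.
import java.util.List;
import java.util.function.Function;

public class RunningTotalDemo {

    // The object does the closure's job: the running total lives in a field
    // instead of being captured from an enclosing scope.
    static class RunningTotalRenderer implements Function<Integer, String> {
        private int total = 0;

        @Override
        public String apply(Integer i) {
            total += i;
            return "Int: " + i + ", running total: " + total;
        }
    }

    static String renderAsTable(List<Integer> values, Function<Integer, String> renderer) {
        StringBuilder table = new StringBuilder("<table>");
        for (Integer v : values) {
            table.append("<tr><td>").append(renderer.apply(v)).append("</td></tr>");
        }
        return table.append("</table>").toString();
    }

    public static void main(String[] args) {
        System.out.println(renderAsTable(List.of(1, 2, 3), new RunningTotalRenderer()));
    }
}
Same behaviour as the JavaScript version, only with the state made explicit as a field, which is exactly the trade-off the answer describes.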
286,008 | I've just discovered a huge hole in my Java knowledge. I knew that Java passes parameters by value. I thought I understand that and whenever I needed to edit the field object of a class, I create a field. For example, private int mNumber;
void main(){
mNumber = 1;
}
void otherMethod(){
mNumber = 3;
} This would change the field object, everyone's happy. I was debugging and had to review the classes inside Android SDK and I noticed this piece of code public static String dumpCursorToString(Cursor cursor) {
StringBuilder sb = new StringBuilder();
dumpCursor(cursor, sb);
return sb.toString();
}
public static void dumpCursor(Cursor cursor, StringBuilder sb) {
sb.append(">>>>> Dumping cursor " + cursor + "\n");
if (cursor != null) {
int startPos = cursor.getPosition();
cursor.moveToPosition(-1);
while (cursor.moveToNext()) {
dumpCurrentRow(cursor, sb);
}
cursor.moveToPosition(startPos);
}
sb.append("<<<<<\n");
} The are changing the local variable sb by passing it as a parameter to another method. In my case, sb would be a field and I would change it from another method. How is this possible? Is a reference (pointer) passed as a parameter so we are ALWAYS changing the source value (just like we pass it by reference in other languages)? Was my approach with field unnecessary all this time? So my approach could be easily changed to void main(){
int number = 1;
otherMethod(number)
}
void otherMethod(int number){
number = 3;
} Don't be angry at me if this sound like a beginner question. It may actually be, but I have just discovered a huge hole in my Java knowledge. | The statement that "java is always pass-by-value" is technically correct, but it can be very misleading, because as you have just witnessed, when you pass an object to a function, the function can modify the contents of the object, so it appears that the object has been passed by reference , and not by value. So, what is happening? Pointers. That's what's happening. One of the major selling points of Java was that "it is a safe language because it is free of pointers". This statement is true in the sense that you cannot do most of the unsafe things like pointer arithmetic that have historically been the cause of many a monstrous bug in languages like C and C++. However, this statement has also hindered the understanding of java for many a novice programmer. You see, java is actually full of pointers; Novice programmers only begin to make sense out of the language once they realize that in java everything which is not a primitive is in fact a pointer; pointers are everywhere; you cannot declare an object statically, or on the stack, or as a element of an array as you can do in C++; the only way to declare and use an object is by pointer; even arrays of objects are in fact nothing but arrays of pointers to objects; everything that looks like an object in java is never really an object, it is always a pointer to an object. Java makes a clear distinction between primitives and objects. Primitives, ( int , boolean , float , etc.,) also known as value types, are always passed by value, and they behave precisely as you would expect according to the "java is always pass-by-value" maxim. Objects, on the other hand, also known as reference types, are always accessed by reference, that is, via a pointer, so their behavior is a bit trickier. When you call a method passing an object to it, java does not really pass the object itself; it passes a pointer to it. This pointer is passed by value, so the maxim "java is always pass-by-value" still holds from a strictly technical perspective. If you set this pointer to null within the method, the pointer held by the caller is unaffected, so obviously, what you have inside the method is a copy of the pointer. But if you modify the object pointed by the pointer, the caller will always see these modifications, which means that both the caller and the callee are holding references to the exact same object. So, the statement "java is always pass-by-value" is true from a strictly technical, but ultimately not very useful point of view, according to which objects are never really passed anywhere, only pointers to the objects are passed. In reality, when you write code, when you think what the code does, it is very useful to think of objects as being passed to functions, and since java passes pointers (by value) under the hood, the objects themselves always appear to be passed by reference. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/286008",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21538/"
]
} |
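A short, runnable demonstration of the behaviour described above: the reference is copied into the method, so reassigning the parameter changes nothing for the caller, while mutating the object it points to is visible everywhere.
import java.util.ArrayList;
import java.util.List;

public class PassByValueDemo {
    static void mutate(List<String> list) {
        list.add("added inside method");   // caller sees this: same object
    }

    static void reassign(List<String> list) {
        list = new ArrayList<>();          // only the local copy of the reference changes
        list.add("lost");
    }

    static void increment(int number) {
        number = number + 1;               // primitives are plain copies
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        mutate(names);
        reassign(names);
        int n = 1;
        increment(n);
        System.out.println(names);   // [added inside method]
        System.out.println(n);       // 1
    }
}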
286,226 | I know that pointers hold addresses. I know that pointers' types are "generally" known based on the "type" of data they point to. But, pointers are still variables and the addresses they hold must have a data "type". According to my info, the addresses are in hexadecimal format. But, I still do not know what "type" of data is this hexadecimal. (Note that I know what a hexadecimal is, but when you say 10CBA20 , for example, is this string of characters? integers? what? When I want to access the address and manipulate it .. itself, I need to know its type. This is why I am asking.) | The type of a pointer variable is .. pointer. The operations you're formally allowed to do in C are to compare it (to other pointers, or the special NULL / zero value), to add or subtract integers, or to cast it to other pointers. Once you accept undefined behaviour , you can look at what the value actually is. It will usually be a machine word, the same kind of thing as an integer, and can usually be losslessly cast to and from an integer type. (Quite a lot of Windows code does this by hiding pointers in DWORD or HANDLE typedefs). There are some architectures where pointers are not simple because memory is not flat. DOS/8086 'near' and 'far'; PIC's different memory and code spaces. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/286226",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/182941/"
]
} |
286,252 | Java SE 8 comes with a new mechanism for dates, introducing LocalDate , LocalTime and LocalDateTime classes to represent instants of time. To manipulate such instants, a set of methods are given: LocalDate.plusDays(...) , LocalDate.minusDays(...) and so on. I've always thought that good practice was naming methods after verbs describing their purpose, as methods are, actually, operations to be executed, something that will perform an action. Just to mention, if you consider classes like StringBuilder , for instance, methods' names are append , insert , delete ... This is why to me it doesn't sound right naming a method plusDays instead of sumDays , minusDays instead of subtractDays . It's just me finding it very annoying? What do you think? The only reason I can think of is that dates are immutable objects, so by calling plusDays you're not adding days to the original object but creating a new one with new properties, but that's very very subtle. | The only reason I can think of is that dates are immutable objects, so by calling plusDays you're not adding days to the original object but creating a new one with new properties, but that's very vary subtle. This is exactly the reason. Imagine you had some kind of api for manipulating ranges of dates for scheduling purposes. It might expose methods letting you make a statement like: var workdaySchedule = initialSchedule.withoutWeekends(); This reads very similarly to the English statement: "The workday schedule is the initial schedule without weekends". It doesn't imply changing the initial schedule, it implies the work schedule being a different, new thing. Now instead imagine it was named: var workdaySchedule = initialSchedule.removeWeekends(); This is confusing. Is initial schedule being modified? It certainly sounds like it, because it sounds like we're removing weekends from it. But then why are we assigning it to a new variable? Although these two naming schemes are very similar, this one is much less clearly evocative of what's happening. This would be more appropriate if removeWeekends did change the initial schedule, and returned void- in which case withoutWeekends would be the confusing option. This is essentially a declarative vs. imperative distinction. Are we declaring that the workdaySchedule is a particular thing, or are we carrying out a list of imperative instructions (like "remove") to make that particular thing? Usually, imperative naming makes more sense when you're mutating values, and declarative makes more sense with immutable values, as the above example demonstrates. In your case, you have exactly the same thing. If I saw: tomorrow.plusDays , I wouldn't imagine that tomorrow was being mutated, whereas tomorrow.addDays , I'd think it might be. This is somewhat subtle- but not necessarily in a bad way. Without having to think about it too hard, this naming naturally sets your thinking along the right lines in terms of whether or not you're mutating. To make this distinction between these imperative and declaritive styles clearer: "add" (and "remove") are verbs , whereas "plus" (and "without") are prepositions . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/286252",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/182970/"
]
} |
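A quick illustration of why the preposition reads correctly: plusDays never touches the receiver, it hands back a new LocalDate.
import java.time.LocalDate;

public class PlusDaysDemo {
    public static void main(String[] args) {
        LocalDate today = LocalDate.of(2015, 6, 15);
        LocalDate nextWeek = today.plusDays(7);   // declares a new value
        System.out.println(today);      // 2015-06-15 (unchanged)
        System.out.println(nextWeek);   // 2015-06-22
    }
}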
286,293 | Assume we have resources like this, book:
type: object
properties:
author: {type: string}
isbn: {type: string}
title: {type: string}
books:
type: array
items: book So, when someone makes a GET on the books resource, we would be returning the following [{"author": "Dan Brown", "isbn": "123456", "title": "Digital Fortress"},
{"author": "JK Rowling", "isbn": "234567", "title": "Harry Potter and the Chamber of Secrets"}] I heard from someone at work that the recommended REST practice is to always return responses as JSON objects, which would mean that our schema for books would look like this, books:
type: object
properties:
list:
type: array
items: book So, now, response would look like this, {
"list": [{"author": "Dan Brown", "isbn": "123456", "title": "Digital Fortress"},
{"author": "JK Rowling", "isbn": "234567", "title": "Harry Potter and the Chamber of Secrets"}]
} Which of these is the best REST practice? | In practice, the second option is the best practice. The reason for this is that you cannot extend the resource at all when you just return an array. For example: If you need to add a count of all records you are already done with the array-only approach. If that happens in one list api then you want to keep it consistent so make all an object then your api becomes more consistent and easier to use for developers. For example: Let's say a developer writes generic code to use your api to show list and detail pages. He does not want to build an exception, because sometimes it's an array, and sometimes it's an object with a list property. This answer in total has nothing to do with principles about rest, HATEOAS and other protocols but just being real about the data you need to send to the client. If you decide to follow for example HATEOAS then, of course, stick to their standards (which are also objects btw). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/286293",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/183031/"
]
} |
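A sketch of the extensibility point in Java (the classes and field names are illustrative, not a prescribed schema): the wrapper can grow a count or paging link later without breaking clients that only read list.
import java.util.List;

public class BookListResponse {
    public List<Book> list;
    public int count;        // added later; an array-only response has nowhere to put this
    public String next;      // e.g. a link to the next page, also added later

    public static class Book {
        public String author, isbn, title;
        public Book(String author, String isbn, String title) {
            this.author = author; this.isbn = isbn; this.title = title;
        }
    }

    public static void main(String[] args) {
        BookListResponse r = new BookListResponse();
        r.list = List.of(new Book("Dan Brown", "123456", "Digital Fortress"));
        r.count = r.list.size();
        System.out.println(r.count + " book(s), first: " + r.list.get(0).title);
    }
}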
286,490 | I have heard that it is a good practice to write functions that do not receive anything as a parameter like this: int func(void); But I hear that the right way to express that is like this: int func(); What is the difference between these two function declarations in both C and C++? | C and C++ are different in this respect. C 2011 Online Standard 6.7.6.3 Function declarators (including prototypes) ... 10 The special case of an unnamed parameter of type void as the only item in the list
specifies that the function has no parameters. ... 14 An identifier list declares only the identifiers of the parameters of the function. An empty
list in a function declarator that is part of a definition of that function specifies that the
function has no parameters. The empty list in a function declarator that is not part of a
definition of that function specifies that no information about the number or types of the
parameters is supplied. 145) In short, an empty parameter list in a function declaration indicates that the function takes an unspecified number of parameters, while an empty parameter list in a function definition indicates that the function takes no parameters. T foo( void ); // declaration, foo takes no parameters
T bar(); // declaration, bar takes an *unspecified* number of parameters
T foo( void ) { ... } // definition, foo takes no parameters
T bar() { ... } // definition, bar takes no parameters As far as C is concerned, you should never use an empty identifier list in a function declaration or definition. If a function is not meant to take any parameters, specify that by using void in the parameter list. Online C++ standard 8.3.5 Functions [dcl.fct] ... 4 The parameter-declaration-clause determines the arguments that can be specified, and their processing, when
the function is called. [ Note: the parameter-declaration-clause is used to convert the arguments specified
on the function call; see 5.2.2. — end note ] If the parameter-declaration-clause is empty, the function takes
no arguments. A parameter list consisting of a single unnamed parameter of non-dependent type void is
equivalent to an empty parameter list. Except for this special case, a parameter shall not have type cv void .
If the parameter-declaration-clause terminates with an ellipsis or a function parameter pack (14.5.3), the
number of arguments shall be equal to or greater than the number of parameters that do not have a default
argument and are not function parameter packs. Where syntactically correct and where “...” is not part of
an abstract-declarator , “, ...” is synonymous with “...”. [ Example: the declaration int printf(const char*, ...); declares a function that can be called with varying numbers and types of arguments. printf("hello world");
printf("a=%d b=%d", a, b); However, the first argument must be of a type that can be converted to a const char* — end example ]
[ Note: The standard header <cstdarg> contains a mechanism for accessing arguments passed using the
ellipsis (see 5.2.2 and 18.10). — end note ] In the case of C++, an empty parameter list in either a declaration or a definition indicates that the function takes no arguments, and is equivalent to using a parameter list of void . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/286490",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/183279/"
]
} |
286,552 | Why not have the compiler take a program like this: function a(b) { return b^2 };
function c(b) { return a(b) + 5 }; and convert it into a program like this: function c(b) { return b^2 + 5 }; thereby eliminating the computer's need to remember c(b)'s return address? I suppose the increased hard disk space and RAM needed to store the program and support its compilation (respectively) is the reason why we use call stacks. Is that correct? | This is called "inlining" and many compilers do this as an optimization strategy in cases where it makes sense. In your particular example, this optimization would save both space and execution time. But if the function was called in multiple places in the program (not uncommon!), it would increase code size, so the strategy becomes more dubious. (And of course if a function called itself directly or indirectly it would be impossible to inline, since then the code would become infinite in size.) And obviously it is only possible for "private" functions. Functions which are exposed for external callers cannot be optimized away, at least not in languages with dynamic linking. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/286552",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/164475/"
]
} |
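A hand-inlined version of the question's example, written in Java for concreteness; modern JIT compilers apply the same substitution automatically to small, frequently called methods.
public class InlineDemo {
    static int square(int b) { return b * b; }

    static int cNormal(int b)  { return square(b) + 5; }   // extra call and return
    static int cInlined(int b) { return b * b + 5; }        // body substituted by hand

    public static void main(String[] args) {
        System.out.println(cNormal(3));    // 14
        System.out.println(cInlined(3));   // 14, same result without the call frame
    }
}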
286,600 | If you go to E.g: http://www.forbes.com/sites/adrianbridgwater/2015/06/12/why-technology-has-to-be-continuous/ OR http://www.zdnet.com/article/if-you-want-those-cool-ios-9-features-its-time-to-buy-a-new-ipad/ When you get to the bottom of the Page, a New News story loads and the URL in my Internet Browser changes to the URL of this Next News Story.
So I was wondering how a webpage can almost instantly load the next webpage with almost negligible delay between pages. Are they for example Pre-downloading the Webpage of the Next News story, and then loading the webpage really fast? | The short answer is that the page's client-side Javascript code detects when you get "too close" to the bottom of the page, and asks the server for more data when that happens. Without getting too technical, they are not reloading the entire web page. Instead the Javascript code on that page is requesting more data from the server, then when it receives the new data, it adds that data to the current page. The parts of the page that do not need to change remain completely unchanged. The most modern way of doing this is to use the HTML5 history-modification features , which appears to be what those sites are using. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/286600",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/183386/"
]
} |
286,712 | Does compilation that produces an interim bytecode (like with Java), rather than going "all the way" to machine code, generally involve less complexity (and thus likely take less time)? | Yes, compiling to Java bytecode is easier than compiling to machine code. This is partially because there's only one format to target (as Mandrill mentions, though this only reduces compiler complexity, not compile time), partly because the JVM is a much simpler machine and more convenient to program than real CPUs — as it's been designed in tandem with the Java language, most Java operations map to exactly one bytecode operation in a very simple way. Another very important reason is that practically no optimization takes place. Almost all efficiency concerns are left to the JIT compiler (or to the JVM as a whole), so the entire middle end of normal compilers disappears. It can basically walk through the AST once and generate ready-made bytecode sequences for each node. There is some "administrative overhead" of generating method tables, constant pools, etc. but that's nothing compared to the complexities of, say, LLVM. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/286712",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/118925/"
]
} |
286,788 | What is faster performance wise? Creating a REST API and having your web app use the REST API to do all interactions with your database OR querying your database directly (i.e. using whatever typical object your language uses to query a database such as JDBC for Java)? The way I see it with REST: You make an object in your code to call the REST method Call http method Code inside your REST API queries the database Database returns some data REST API code packs up the data into Json and sends it to your client Client receives Json/XML response Map response to an object in your code On the other hand, querying a database directly: You make an object with query string to query the database Database returns some data Map response to an object in your code So wouldn't this mean that using a REST API would be slower? Maybe it depends on the type of database (SQL vs NoSQL)? | When you add complexity the code will run slower. Introducing a REST service if it's not required will slow the execution down as the system is doing more. Abstracting the database is good practice. If you're worried about speed you could look into caching the data in memory so that the database doesn't need to be touched to handle the request. Before optimizing performance though I'd look into what problem you're trying to solve and the architecture you're using, I'm struggling to think of a situation where the database options would be direct access vs REST. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/286788",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/182345/"
]
} |
286,996 | I have the following homework question: Implement the stack methods push(x) and pop() using two queues. This seems odd to me because: A Stack is a (LIFO) queue I don't see why you would need two queues to implement it I searched around: GeeksForGeeks StackOverflow and found a couple solutions. This is what I ended up with: public class Stack<T> {
LinkedList<T> q1 = new LinkedList<T>();
LinkedList<T> q2 = new LinkedList<T>();
public void push(T t) {
q1.addFirst(t);
}
public T pop() {
if (q1.isEmpty()) {
throw new RuntimeException(
"Can't pop from an empty stack!");
}
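        // move every element except the most recently pushed one into q2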
while(q1.size() > 1) {
q2.addFirst( q1.removeLast() );
}
T popped = q1.pop();
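        // swap the references so q1 is always the queue holding the remaining elements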
LinkedList<T> tempQ = q1;
q1 = q2;
q2 = tempQ;
return popped;
}
} But I don't understand what the advantage is over using a single queue; the two queue version seems pointlessly complicated. Say we choose for pushes to be the more efficient of the 2 (as I did above), push would remain the same, and pop would simply require iterating to the last element, and returning it. In both cases, the push would be O(1) , and the pop would be O(n) ; but the single queue version would be drastically simpler. It should only require a single for loop. Am I missing something? Any insight here would be appreciated. | There is no advantage: this is a purely academic exercise. A very long time ago when I was a freshman in college I had a similar exercise 1 . The goal was to teach students how to use object-oriented programming to implement algorithms instead of writing iterative solutions using for loops with loop counters. Instead, combine and reuse existing data structures to achieve your goals. You will never use this code in the Real World TM . What you need to take away from this exercise is how to "think outside the box" and reuse code. Please note that you should be using the java.util.Queue interface in your code instead of using the implementation directly: Queue<T> q1 = new LinkedList<T>();
Queue<T> q2 = new LinkedList<T>(); This allows you to use other Queue implementations if desired, as well as hiding two methods on LinkedList that might get around the spirit of the Queue interface. This includes get(int) and pop() (while your code compiles, there is a logic error in there given the constraints of your assignment. Declaring your variables as Queue instead of LinkedList will reveal it). Related reading: Understanding “programming to an interface” and Why are interfaces useful? 1 I still remember: the exercise was to reverse a Stack using only methods on the Stack interface and no utility methods in java.util.Collections or other "static only" utility classes. The correct solution involves using other data structures as temporary holding objects: you have to know the different data structures, their properties, and how to combine them to do it. Stumped most of my CS101 class who had never programmed before. 2 The methods are still there, but you cannot access them without type casts or reflection. So it is not easy to use those non-queue methods.
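For illustration only, here is one shape a Queue-only version of the exercise can take (the class name is made up, and it relies solely on offer, poll, size and isEmpty from java.util.Queue):
import java.util.LinkedList;
import java.util.Queue;

public class QueueBackedStack<T> {
    private Queue<T> q1 = new LinkedList<T>();
    private Queue<T> q2 = new LinkedList<T>();

    public void push(T t) {
        q1.offer(t);                 // enqueue at the tail, O(1)
    }

    public T pop() {
        if (q1.isEmpty()) {
            throw new RuntimeException("Can't pop from an empty stack!");
        }
        while (q1.size() > 1) {
            q2.offer(q1.poll());     // drain all but the last element into q2
        }
        T popped = q1.poll();        // the element that was pushed last
        Queue<T> temp = q1;          // swap so q1 always holds the elements
        q1 = q2;
        q2 = temp;
        return popped;
    }
}
Push stays O(1) and pop stays O(n), exactly as in the question; the only difference is that nothing outside the Queue contract is used.
| {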
"source": [
"https://softwareengineering.stackexchange.com/questions/286996",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/139925/"
]
} |
287,039 | I'm a newbie to working in software development and I read a lot about how cool unit tests are. Now, I've made my first steps into a project where I'm working with a number of equally inexperienced programmers, which is why we have produced a bit of spaghetti code. I'm trying to learn how to use testing and other techniques to improve the code quality, but one of my newbie co-workers says that tests make things more difficult. Apparently, he has done internships in teams where unit tests were used. He argued that tests were constantly in his way when he tried to implement a new feature. The tests would fail after he had changed the code. So he had to adapt the tests, which of course increased his workload. But that doesn't make sense to me. I thought tests were supposed to make things easier. So, I suspect that he either didn't implement the features correctly or that the unit tests were badly done. So, I'm wondering: How can I write unit tests so that they don't fail just because a new feature was implemented? | By making them test just the thing they're about, and not lots of unrelated properties that are true now but might change later. Some examples from my experience. Often systems are supposed to send notifications to their users when certain things happen. With a proper test harness, it's easy to mock email messages and verify that they go out, get to the correct recipient and say what they're supposed to say. However, it's not a good idea to assert simply "When this event happens, this user receives that exact message text". Such a test would fail whenever the I18N texts are revised. It's much better to assert "The message contains the user's new password / The link to the announced resource / the user's name and a greeting", so that the test keeps working and only breaks when the routine does, in fact, not do its stated job. Similarly, when you test auto-generated IDs for something, never assume that whatever value is generated at the moment will always be generated. Even when your test harness doesn't change, the implementation of that feature might change so that the outcome changes while still fulfilling its contract. Again, you don't want to assert "The first user receives ID AAA", but rather "The first user receives an ID composed of letters, and the second user receives an ID also composed of letters and distinct from the first one". In general, beware of testing things that aren't in the contract for the thing you're testing. Understanding what is essentially true about the behaviour of a unit and what is only accidentally true is the key to writing minimal covering tests - and it is also extremely helpful for understanding the system and maintaining it successfully. This is one way in which test-driven development improves outcomes even when not catching bugs.
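A small JUnit 4-style sketch of the difference (the Mailer class and the message wording are made up for the example):
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class WelcomeMailTest {

    // Tiny stand-in for the real notification code (hypothetical).
    static class Mailer {
        String welcomeMessageFor(String user, String password) {
            return "Hello " + user + ", your new password is " + password + ".";
        }
    }

    private final Mailer mailer = new Mailer();

    // Robust: asserts only what the contract promises (the essentials are present),
    // so rewording or translating the template does not break the test.
    @Test
    public void messageContainsTheEssentials() {
        String body = mailer.welcomeMessageFor("alice", "s3cret");
        assertTrue(body.contains("alice"));
        assertTrue(body.contains("s3cret"));
    }

    // A brittle alternative would be
    //     assertEquals("Hello alice, your new password is s3cret.", body);
    // which fails as soon as the wording, punctuation or language changes,
    // even though the unit still does its stated job.
}
| {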
"source": [
"https://softwareengineering.stackexchange.com/questions/287039",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/166569/"
]
} |
287,085 | I came across the following conditional in a program that I have taken over from another developer: if (obj.Performance <= LOW_PERFORMANCE)
{
obj.NeedsChange = true;
}
else
{
obj.NeedsChange = false;
} I believe this code is redundant and ugly, so I changed it to what I thought was a simple boolean assignment based on a comparison: obj.NeedsChange = obj.Performance <= LOW_PERFORMANCE; Upon seeing this, someone reviewing my code commented that although my change is functionally correct, it might confuse someone else looking at it. He believes that using a ternary operator makes this assignment more clear, whereas I don't like the addition of more redundant code: obj.NeedsChange = (obj.Performance <= LOW_PERFORMANCE) ? true : false; His reasoning is that doing something in the most concise way is not worth it, if it causes another developer to have to stop and puzzle out exactly what you've done. The real question here is which of these three methods of assigning a value to the boolean obj.NeedsChange is the most clear and the most maintainable? | I prefer 2, but I might go for a small adjustment to it: obj.NeedsChange = ( obj.Performance <= LOW_PERFORMANCE ); To me the parentheses makes the line easier to parse and makes it clear at a glance that you are assigning the result of a comparison, and not performing a double assignment. I'm not sure why that is (as off-hand I can't think of a language where parentheses would actually prevent a double assignment), but if you must satisfy your reviewer then perhaps this will be an acceptable compromise. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/287085",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/183903/"
]
} |
287,342 | As per the below diagram, except for interface Iterable, all the remaining constructs (interface / class / abstract class) sit in the same package, java.util. Why does Iterable sit in the java.lang package? Note: The intention is to understand the packaging aspect of Java programming. | As explained in its javadoc, the purpose of Iterable is to support a particular language syntax: implementing this interface allows an object to be the target of the "foreach" statement. As such, it belongs to the lang package, which "Provides classes that are fundamental to the design of the Java programming language." The other classes in the diagram belong to the JCF and hence are in the util package, which "Contains the collections framework..."
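A short sketch of that relationship (class names here are illustrative): the for-each statement works on any Iterable, and the second loop is roughly what the compiler turns it into.
import java.util.Arrays;
import java.util.Iterator;

class Digits implements Iterable<Integer> {
    private final Integer[] values = { 1, 2, 3 };

    @Override
    public Iterator<Integer> iterator() {
        return Arrays.asList(values).iterator();
    }
}

class ForEachDemo {
    public static void main(String[] args) {
        Digits digits = new Digits();

        // The language-level "foreach" statement accepts any Iterable...
        for (int d : digits) {
            System.out.println(d);
        }

        // ...because the compiler translates it into roughly this:
        for (Iterator<Integer> it = digits.iterator(); it.hasNext(); ) {
            int d = it.next();
            System.out.println(d);
        }
    }
}
Because the compiler itself depends on this type, it lives with the other language-level types in java.lang rather than with the collections in java.util.
| {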
"source": [
"https://softwareengineering.stackexchange.com/questions/287342",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/131582/"
]
} |
287,461 | Fair warning, I'm new to functional programming so I may hold many bad assumptions. I've been learning about algebraic types. Many functional languages seem to have them, and they are fairly useful in conjunction with pattern matching. However, what problem do they actually solve? I can implement a seemingly (sort-of) algebraic type in C# like this: public abstract class Option { }
public class None : Option { }
public class Some<T> : Option
{
public T Value { get; set; }
}
var result = GetSomeValue();
if(result is None)
{
}
else
{
} But I think most would agree this is a bastardization of object-oriented programming, and you shouldn't ever do it. So does functional programming just add a cleaner syntax that makes this style of programming seem less gross? What else am I missing? | Classes with interfaces and inheritance present an open world: Anyone can add a new kind of data. For a given interface, there may be classes implementing it all over the world, in different files, in different projects, at different companies. They make it easy to add cases to the data structures, but because the implementations of the interface are decentralized, it is hard to add a new method to the interface. Once an interface is public, it is basically frozen. Nobody knows all the possible implementations. Algebraic data types are the dual to that: they are closed. All the cases of the data are listed in one place, and operations not only can list the variants exhaustively, they are encouraged to do so. Consequently, writing a new function operating on an algebraic data type is trivial: Just write the damn function. In return, adding new cases is complicated because you need to go over basically the entire code base and extend every match. Similar to the situation with interfaces, in the Rust standard library, adding a new variant is a breaking change (for public types). These are two sides of the expression problem. Algebraic data types are an incomplete solution to them, but so is OOP. Both have advantages depending on how many cases of data there are, how often those cases change, and how frequently the operations are extended or changed. (Which is why many modern languages provide both, or something similar, or go straight for more powerful and more complicated mechanisms that try to subsume both approaches.)
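For comparison, here is a sketch of the same Option idea as a closed hierarchy in Java 21+ (sealed interfaces, records and an exhaustive switch); the names simply mirror the C# sketch above.
sealed interface Option<T> permits Some, None {}
record Some<T>(T value) implements Option<T> {}
record None<T>() implements Option<T> {}

class OptionDemo {
    static <T> String describe(Option<T> option) {
        // The switch must be exhaustive: adding a new case to Option breaks
        // this (and every other) match at compile time, not at runtime.
        return switch (option) {
            case Some<T> some -> "some: " + some.value();
            case None<T> none -> "none";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Some<>("hello")));
        System.out.println(describe(new None<String>()));
    }
}
That compile-time exhaustiveness check is exactly the closed-world trade-off described above: trivial to add operations, deliberate work to add cases.
| {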
"source": [
"https://softwareengineering.stackexchange.com/questions/287461",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/66745/"
]
} |
287,468 | Given the following problem (a slightly simplified description of trading in the computer game Escape Velocity: Nova (system map)): Given a set of (solar) systems. Each system is connected by hyperspace travel route to one or more other systems; each route connects two systems. Making a hyperspace jump has a cost in dollars (covering fuel costs). The cost is the same no matter which jump you're making. Each system has a trading center with various commodities (food, metal, equipment, etc.) available for trade at a given price. The price varies between systems. To keep it simple we'll say you can only hold a single unit of a single commodity at any given time. A trade route is a series of hyperspace jumps that start and end in the same system (a loop), buying and selling commodities along the way. The profit of a trade route is defined as profit made by buying and selling commodities along the way minus the total cost of the hyperspace jumps made. I want to be able to answer questions like the following: What's the most profitable trade route? What's the most profitable trade route starting and ending in a specific system? What's the most profitable trade route involving fewer than X jumps? What's the most profitable trade route involving fewer than X jumps starting and ending in a specific system? I plan to model this as a digraph: Each system is a vertex. For each hyperspace travel route add two edges (one in each direction) connecting the two vertexes with a weight equal to the cost of making a hyperspace jump. For each system for each commodity available in that system for each system where the price of the commodity is lower than in the current system (we'll never make a profit if the cost at the destination is higher so we omit those edges) add an edge from the current system to that system with weight of (number of jumps to reach that system) * (cost per jump) - (difference in price of the commodity) (i.e. profit to be made by buying in the current system and selling in the destination system) I believe the technical description of what I'm after is (for question 1): given the above digraph what's the negative cycle with the lowest average cost per edge. I realize I can use a breadth-first search to answer questions 3 and 4 as long as the maximum number of jumps is small enough that I don't run out of memory, but I suspect it won't be practical to answer 1 or 2 this way. I've read through the Solving Problems by Searching chapter in AI: A Modern Approach but as far as I can tell the algorithms there require positive edge weights. A bit of Googling found me the Bellman–Ford algorithm and while that supports negative edge weights my understanding is that it doesn't give accurate costs when negative cycles are involved. Are there any algorithms I can use to solve this problem more efficiently? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/287468",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/184371/"
]
} |
287,816 | public class MyClass
{
public object Prop1 { get; set; }
public object Prop2 { get; set; }
public object Prop3 { get; set; }
} Suppose I have an object myObject of MyClass and I need to reset its properties, is it better to create a new object or reassign each property? Assume I don't have any additional use with the old instance. myObject = new MyClass(); or myObject.Prop1 = null;
myObject.Prop2 = null;
myObject.Prop3 = null; | Instantiating a new object is always better: you then have one place to initialise the properties (the constructor) and can easily update it. Imagine you add a new property to the class: you would rather update the constructor than add a new method that also re-initialises all properties. Now, there are cases where you might want to re-use an object, for example when a property is very expensive to re-initialise and you'd want to keep it. This would be a more specialist case, however, and you'd have special methods to re-initialise all the other properties. You'd still want to create a new object sometimes even for this situation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/287816",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/184809/"
]
} |
287,819 | I had a heated discussion today about our MVC application. We have a website written in MVC ( ASP.NET ), and it usually follows the pattern of do something in the view -> hit the controller -> controller builds a model (calls a Manager that gets the data, builds the model in the controller method itself) -> model goes to view -> rinse & repeat. He said that our code was too tightly coupled. For example, if we wanted a desktop application as well, we would not be able to use our existing code. The solution and best practice he said is to build an API, and then build your website on top of your API, and then building a desktop application, mobile app, etc. is very simple. This seems like a bad idea to me for various reasons. Anyway I can't seem to find anything by googling that might discuss this practice. Does anyone have any information about pros, cons, why you should, why you shouldn't or further reading? Some reasons I think it's a bad idea: It's way too abstract to run your backend off an API. You're trying to make it too flexible which will make it an unmanagable mess. All the stuff built into MVC seems useless, like roles and authentication. For example, [Authorize] attributes and security; you will have to roll your own. All your API calls will require security information attached, and you will have to develop a token system and whatnot. You will have to write complete API calls for every single function your program will ever do. Pretty much every method you want to implement will need to be ran off an API. A Get/Update/Delete for every user, plus a variant for each other operation eg update user name, add user to a group, etc. etc. and each one would be a distinct API call. You lose all kinds of tools like interfaces and abstract classes when it comes to APIs. Stuff like WCF has very tenuous support for interfaces. You have a method that creates a user, or performs some task. If you want to create 50 users, you can just call it 50 times. When you decide to do this method as an API your local webserver can named-pipes connect to it and no problem - your desktop client can hit it too, but suddenly your bulk user creation will involve hammering the API over the Internet 50 times which isn't good. So you have to create a bulk method, but really you're just creating it for desktop clients. This way, you end up having to a) modify your API based on what's integrating with it, and you can't just directly integrate with it, b) do a lot more work to create an extra function. YAGNI . Unless you're specifically planning to write two identically functioning applications, one web and one Windows application for example, it is a huge amount of extra development work. Debugging is much harder when you can't step through end-to-end. Lots of independent operations that will require lots of back and forth, for example some code might get the current user, check the user is in the administrator role, get the company the user belongs to, get a list of other members, send them all an email. That would require a lot of API calls, or writing a bespoke method the specific task you want, where that bespoke method's only benefit would be speed yet the downside would be it would be inflexible. Probably some more reasons these are just off the top of my head. It just seems to me like unless you really need two identical applications, then it's really not worth it. 
I've never seen an ASP.NET application built like this either, you'd have to write two separate applications (the API and your code) and version control them both as well (if your user page gets a new field, you'd have to update the API and your consuming code simultaneously to ensure no ill-effects or put lots of extra work into keeping it robust). Edit: Some great responses, really starting to get a good idea of what this all means now. So to expand on my question, how would you structure an MVC app to follow this API structure? For example, you have a website that displays info about a user. Under MVC, you have: View - (CS)HTML page that displays a UserViewModel
Controller - Calls GetUser() and creates a UserViewModel that it passes to the view
Manager class (sort of your API) that has a GetUser method. The controller does GetUser() but you want a desktop app too. This means your GetUser needs to be exposed via some kind of API. You might want a TCP connection, either WCF, or perhaps Remoting. You also want a mobile app which will be RESTful since persistent connections are flaky. So would you then write an API for each one, a WCF web service that has a method GetUser() and the code just does return new UserManager().GetUser() ? And an mvc 4 web api method that does the same thing? While continuing to call GetUser directly in your MVC controller method? Or would you choose the solution that would work for all three (web api REST service) and build everything on that, so all three apps make API calls (the mvc ones, to the local machine). And is this just a theoretical perfect scenario? I can see large overheads in developing this way, especially if you have to develop in a way that will let you do operations in a RESTful manner. I think some of this has been covered in the replies. Edit 2: After reading more stuff, I've put a comment below that I think might explain it. The question is a bit of a trick question I think. Should you write your back-end as an API had me confused thinking there should be a single webservice that everything (mvc app, desktop app, mobile app) calls to do stuff. The conclusion I have come to is that what you should really do is make sure your business logic layer is correctly decoupled. Looking at my code, I do this already - the controller will call GetUser() on a manager, then create a view model from it to render with a View. So really, the business logic layer is an API. If you want to call it from a desktop app though, you'll need to write something like a WCF service to facilitate calling it. Even just having a WCF method called GetUser() that contains the code return MyBusinessLayer.GetUser() would be sufficient. So the API is the business logic, and WCF / web api etc. are just titbits of code to let external applications in to call it. So there is some overhead, in that you have to wrap your business logic layer in different APIs depending on what you need, and you will have to write an API method for each operation you want your other apps to do, plus you will need to sort out a way to do authentication, but for the most part it's the same. Stick your business logic in a separate project (class library), and you will probably have no issue! Hopefully this interpretation is correct. Thanks for all the discussion/comments it has generated. | Yes you should. It not only makes your back end re-usable but allows for more security and better design. If you write your backend as part of a single system, you're making a monolithic design that's never easy to extend, replace or enhance. One area where this is popular at the moment is in Microservices . Where the backend is split into many little (or even large) services that each provide an API that the client system consumes. If you imagine using many 3rd party sources of data in your application you realise you might be doing this already. One other benefit is that the construction and maintenance of each service can be handed off to a different team, they can add features to it that do not affect any other team producing product. Only when they are done and release their service do you them start to add features to your product to consume them. 
This can make development much smoother (though potentially slower overall, you would tend to get better quality and understandable) Edit: OK I see your problem. You think of the API as a remote library. It's not. Think of the service as more of a data providing service. You call the service to get data and then perform operations on that data locally. To determine if a user is logged on you would call " GetUser " and then look at the 'logged on' value, for example. ( YMMV with that example, of course). Your example for bulk user creation is just making excuses - there is no difference here, whatever you could have done in a monolithic system can still be done in a service architecture (e.g. you would have passed an array of users to bulk-create, or a single one to create. You can still do exactly the same with services). MVC is already based around the concept of isolated services, only the MVC frameworks bundle them into a single project. That doesn't mean you lose anything except the bundled helpers that your framework is giving you. Use a different framework and you'll have to use different helpers. Or, in this case, rolling your own (or adding them directly using a library). Debugging is easy too - you can thoroughly test the API in isolation so you don't need to debug into it (and you can debug end-to-end, Visual Studio can attach to several processes simultaneously). Things like extra work implementing security is a good thing. Currently, if you bundle all the code into your website, if a hacker gains access to it, they also gain access to everything, DB included. If you split it into an API the hacker can do very little with your code unless they also hack the API layer too - which will be incredibly difficult for them (ever wondered how attackers gain vast lists of all website's users or cc details? It's because they hacked the OS or the web server and it had a direct connection to the DB where they could run " select * from users " with ease). I'll say that I have seen many web sites (and client-server applications) written like this. When I worked in the financial services industry, nobody would ever write a website all-in-one, partly as it's too much of a security risk, and partly because much development is pretty GUIs over stable (i.e. legacy) back-end data processing systems. It's easy to expose the DP system as a website using a service style architecture. 2nd Edit: Some links on the subject (for the OP): Note that when talking about these in context of a website, the web server should be considered the presentation layer, because it is the client that calls the other tiers, and also because it constructs the UI views that are sent to the browser for rendering. It's a big subject, and there are many ways to design your application - data-centric or domain-centric (I typically consider domain centric to be 'purer', but YMMV ), but it all comes down to sticking a logic tier in between your client and your DB. It's a little like MVC if you consider the middle, API, tier to be equivalent to your Model, only the model is not a simple wrapper for the DB, it's richer and can do much more (e.g. aggregate data from 2 data sources, post-process the data to fit the API, cache the data, etc.): Multitier architecture Building an N-Tier Application in .NET Youtube tutorial - see the related links too, inc one to implement in C# Pluralsight training course (free trial or paid) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/287819",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13850/"
]
} |
288,165 | How do I design a subclass whose method contradicts its superclass? Let's say we have this example: # Example 1
class Scissor
def right_handed?
true
end
end
class LeftHandedScissor < Scissor
def right_handed?
false
end
end Let's assume that the inheritance is necessary in this example e.g. that there might be other methods aside from right_handed? that are being inherited. So a LeftHandedScissor is a Scissor. Its method right_handed? matches the signature and return value type of its superclass's method. However, the subclass method contradicts the return value of the superclass method. This bothers me because it looks like a logical contradiction: All scissors are right-handed. A left-handed scissor is a scissor. Therefore, a left-handed scissor is right-handed. I considered introducing an AbstractScissor to avoid this contradiction: # Example 2
class AbstractScissor
def right_handed?
raise NotImplementedError
end
end
class RightHandedScissor < AbstractScissor
def right_handed?
true
end
end
class LeftHandedScissor < AbstractScissor
def right_handed?
false
end
end My colleague says the downside to this solution is that it's more verbose. Aside from adding an extra class, it requires us to call scissors "right-handed scissors", which no one does in the real world. This leads me to my last solution: # Example 3
class AbstractScissor
def right_handed?
raise NotImplementedError
end
end
class Scissor < AbstractScissor
def right_handed?
true
end
end
class LeftHandedScissor < AbstractScissor
def right_handed?
false
end
end It's nearly the same as example 2 but I just renamed RightHandedScissor as Scissor. There's a chance that someone might assume LeftHandedScissor is a subclass of Scissor based solely on the class names, but at least the code makes it obvious that that's not the case. Any other ideas? | There is nothing wrong with the design shown in the question. While one could also introduce abstract Scissor with two concrete subclasses, and maybe more overall clarity, it's also common to do it like shown (especially when the hierarchy is a result of years of incremental development, with Scissor being around for much longer than the concept of handedness). " Okay, I guess the lesson here is that you don't make assumptions about the subclasses based solely on the superclass? " You do make such assumptions based on the contracts (method signatures) of the base class, but not based on method implementations. In this case the contract says that right_handed is a Boolean method which can be true or false, so it can be either. You should ignore the fact that the base class implementation always returns true, especially if you allowed to derive from the base class by not freezing it. The base class implementation is then just a default, and the method right_handed exists exactly because the scissor could also be left handed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/288165",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/185220/"
]
} |
288,205 | For most of my programming career, I've used the "build/compile/run" command in whatever IDE I'm working with to produce a runnable program. This is one button, pretty easy. As I learn more about different languages and frameworks, though, I see more and more talk of "build scripts" (ANT, Maven, Gradle, etc.) to get a project running. My understanding of these is that they are instructions to the compiler/linker/magical-program-maker that specify configuration details - minutiae. I remember writing makefiles back in school, but I didn't see any special advantage then (we only used them when writing in a Unix terminal, where an IDE with its handy "build" button wasn't present). Beyond that, I've seen other questions on here that discuss how build scripts can do more than just create your program - they can run unit tests as well as secure resources regardless of the host machine. I can't shake the feeling that build scripts are important to understand as a developer, but I'd like a meaningful explanation; why should I use/write build scripts? Responsibilities of Build Script and Build Server discusses the role it plays within a broader scope. I'm looking for specific advantages offered by a build script versus an IDE's "build/run" command or similarly simple methods. | Automation. When you are developing, only in the simplest projects will the default "build" button do everything you need it to do; you may need to create WS out of APIs, generate docs, link with external resources, deploy the changes to a server, etc. Some IDEs allow you to customize the build process by adding extra steps or builders, but that only means that you are generating your build script through the IDE's conveniences. But developing a system is not only writing code. There are multiple steps involved. An IDE-independent script can be executed automatically, meaning that: When you commit a change to version control, a new build can be launched automatically by the server. It will ensure that you have not forgotten to commit anything needed for the build. Similarly, after the build is done, tests can automatically be run to see if you broke something. Now the rest of the organization (QA, sysadmins) has a built product that a) is perfectly reproducible just from version control, and b) is common to all of them. Even when I have been working as a one-man team I have used scripts for that purpose; when I had developed the fix I would commit to SVN, export the SVN back into another directory and use the build script to generate the solution, which would then go to Preproduction systems and later to Production. If a few weeks later (with my local codebase already changed) someone complained of a bug, I would know exactly which SVN revision I would have to check out in order to properly debug the system. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/288205",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/161111/"
]
} |
288,231 | I understand the structure of binary trees and how to traverse them. However, I am struggling to see their actual uses and purposes in programs and programming. When I think about 'real life' examples of hierarchical data, they almost certainly have more than 2 children. For example, in a family tree, a mother may often have more than two children. Are 'binary trees' really only useful to store linearly related data due to the faster processing times over arrays and lists? Alternatively, do they serve a specific purpose in storing hierarchical data? If so, what examples are there of the application of binary trees? What data is such that a node has at most 2 children? | No, binary trees are not for storing hierarchical data in the sense you're thinking of. The primary use case for n-ary trees, where n is a fixed number, is fast search capability, not a semantic hierarchy. Remember the old game where one person thinks of a number between 1 and 100, and the other has to guess it in as few guesses as possible, and if you guess wrong the person thinking of the number has to tell you if you're too high or too low? It gets boring after a while because you quickly figure out that you should always start at 50, then go to 25 or 75, and keep dividing the range to be searched in half with each new guess after that, and eventually you can guess any number in at most 7 guesses, guaranteed. It may not make for a fun game, but that property is what makes binary (and other n-ary) trees useful: you can use them to search a very large data set in a very small amount of time.
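The guessing game translated into code — a minimal, unbalanced binary search tree sketch (class names are illustrative) where every comparison discards the half of the tree that cannot contain the value:
class IntTree {
    private Node root;

    private static final class Node {
        final int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    void insert(int value) {
        root = insert(root, value);
    }

    private static Node insert(Node node, int value) {
        if (node == null) return new Node(value);
        if (value < node.value) node.left = insert(node.left, value);
        else if (value > node.value) node.right = insert(node.right, value);
        return node;
    }

    boolean contains(int value) {
        Node current = root;
        while (current != null) {
            if (value == current.value) return true;
            // smaller values live in the left subtree, larger ones in the right,
            // so each step rules out half of the remaining candidates
            current = (value < current.value) ? current.left : current.right;
        }
        return false;
    }
}
With a reasonably balanced tree each step halves the search space, so a million entries need only about 20 comparisons.
| {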
"source": [
"https://softwareengineering.stackexchange.com/questions/288231",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/142752/"
]
} |
288,376 | I'm designing a REST API for a project where users are always on one of several "plans" - each plan defines some resource limits, such as the max number of users an account may have or the max number of data they may upload. Once one of these limits is reached, users can upgrade their plans (essentially pay up) to get more resources. I want to return a special status code indicating a situation where the action cannot be performed due to account resource limits, and upgrading the plan will resolve this - for example if a user uses 100% of their storage capacity and try to upload an additional file, they will get this response. The candidates are, IMHO: 403 Forbidden - however, I would like to distinguish between this case and other cases where the user simply lacks the permission to perform this action. 401 Unauthorized - not a good idea, we're using this for authentication related problems. 402 Payment Required - makes kind of sense but I'm worried about using a non-standard yet reserved status code Something even less standard like 423 Locked as its unlikely we'll use it for anything else in the future Another option is to go with something very standard such as 403 but indicate the specifics of the error in the response body. I'm wondering which approach you believe would (a) work best in the long run and (b) would stick more nicely to RESTful principles. | I think 403 is the only reasonable response, though 405 Method Not Allowed or 409 Conflict might be acceptable, I don't think either are as good as 403 which states: The server understood the request, but is refusing to fulfill it.
Authorization will not help and the request SHOULD NOT be repeated.
If the request method was not HEAD and the server wishes to make
public why the request has not been fulfilled, it SHOULD describe the
reason for the refusal in the entity. If you return a 403 error, you can include some information on why the resource was denied - invalid permission is only the most common case, and an exceeded limit isn't much different - you don't have permission because your limit was exceeded.
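One possible shape for such a response, sketched with a JAX-RS-style API (assuming that stack; the error-body fields and the upgrade URL are invented for illustration):
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

class PlanLimitResponses {

    static Response planLimitExceeded(String limit) {
        // Same 403 status as any other permission failure, but the body
        // explains why and hints that upgrading the plan will resolve it.
        String body = "{"
                + "\"error\": \"plan_limit_exceeded\","
                + "\"limit\": \"" + limit + "\","
                + "\"upgrade_url\": \"/billing/upgrade\""
                + "}";
        return Response.status(Response.Status.FORBIDDEN)
                       .type(MediaType.APPLICATION_JSON)
                       .entity(body)
                       .build();
    }
}
Clients that only care about the status code still see a plain 403, while clients that understand the body can offer the upgrade path.
| {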
"source": [
"https://softwareengineering.stackexchange.com/questions/288376",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/185564/"
]
} |
288,405 | I'm attempting to get into the habit of writing unit tests regularly with my code, but I've read that first it's important to write testable code . This question touches on SOLID principles of writing testable code, but I want to know if those design principles are beneficial (or at least not harmful) without planning on writing tests at all. To clarify - I understand the importance of writing tests; this is not a question on their usefulness. To illustrate my confusion, in the piece that inspired this question, the writer gives an example of a function that checks the current time, and returns some value depending on the time. The author points to this as bad code because it produces the data (the time) it uses internally, thus making it difficult to test. To me, though, it seems like overkill to pass in the time as an argument. At some point the value needs to be initialized, and why not closest to consumption? Plus, the purpose of the method in my mind is to return some value based on the current time , by making it a parameter you imply that this purpose can/should be changed. This, and other questions, lead me to wonder if testable code was synonymous with "better" code. Is writing testable code still good practice even in the absence of tests? Is testable code actually more stable? has been suggested as a duplicate. However, that question is about the "stability" of the code, but I am asking more broadly about whether the code is superior for other reasons as well, such as readability, performance, coupling, and so forth. | In regard to the common definition of unit tests, I'd say no. I've seen simple code made convoluted because of the need to twist it to suit the testing framework (eg. interfaces and IoC everywhere making things difficult to follow through layers of interface calls and data that should be obvious passed in by magic). Given the choice between code that is easy to understand or code that is easy to unit test, I go with the maintainable code every time. This doesn't mean not to test, but to fit the tools to suit you, not the other way round. There are other ways to test (but difficult-to-understand code is always bad code). For example, you can create unit tests that are less granular (eg. Martin Fowler 's attitude that a unit is generally a class, not a method), or you can hit your program with automated integration tests instead. Such may not be as pretty as your testing framework lights up with green ticks, but we're after tested code, not the gamification of the process, right? You can make your code easy to maintain and still be good for unit tests by defining good interfaces between them and then writing tests that exercise the public interface of the component; or you could get a better test framework (one that replaces functions at runtime to mock them, rather than requiring the code to be compiled with mocks in place). A better unit test framework lets you replace the system GetCurrentTime() functionality with your own, at runtime, so you don't need to introduce artificial wrappers to this just to suit the test tool. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/288405",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/161111/"
]
} |
288,715 | The main two arguments against overriding Object.finalize() is that: You don't get to decide when it's called. It may not get called at all. If I understand this correctly, I don't think those are good enough reasons to hate Object.finalize() so much. It is up to the VM implementation and the GC to determine when the right time to deallocate an object is, not the developer. Why is it important to decide when Object.finalize() gets called? Normally, and correct me if I'm wrong, the only time Object.finalize() wouldn't get called is when the application got terminated before the GC got a chance to run. However, the object got deallocated anyway when the application's process got terminated with it. So Object.finalize() didn't get called because it was not needed to be called. Why would the developer care? Every time I'm using objects that I have to manually close (like file handles and connections), I get very frustrated. I have to be constantly checking if an object has an implementation of close() , and I'm sure I have missed a few calls to it at some points in the past. Why isn't it just simpler and safer to just leave it to the VM and GC to dispose of these objects by putting the close() implementation in Object.finalize() ? | In my experience, there is one and only one reason for overriding Object.finalize() , but it is a very good reason : To place error logging code in finalize() which notifies you if you ever
forget to invoke close() . Static analysis can only catch omissions in trivial usage scenarios, and the compiler warnings mentioned in another answer have such a simplistic view of things that you actually have to disable them in order to get anything non-trivial done. (I have far more warnings enabled than any other programmer that I know of or ever heard of, but I don't have stupid warnings enabled.) Finalization might seem to be a good mechanism for making sure that resources do not go undisposed, but most people see it in a completely wrong way: they think of it as an alternate fallback mechanism, a "second chance" safeguard which will automagically save the day by disposing of the resources that they forgot. This is dead wrong . There must be only one way of doing any given thing: either you always close everything, or finalization always closes everything. But since finalization is unreliable, finalization cannot be it. So, there is this scheme which I call Mandatory Disposal , and it stipulates that the programmer is responsible for always explicitly closing everything which implements Closeable or AutoCloseable . (The try-with-resources statement still counts as explicit closing.) Of course, the programmer may forget, so that's where finalization comes into play, but not as a magic fairy which will magically make things right in the end: If finalization discovers that close() was not invoked, it does not attempt to invoke it, precisely because there will (with mathematical certainty) be hordes of n00b programmers who will rely on it to do the job that they were too lazy or too absent minded to do. So, with mandatory disposal, when finalization discovers that close() was not invoked, it logs a bright red error message, telling the programmer with big fat all-capital letters to fix his s-- er, his stuff. As an additional benefit, rumor has it that "the JVM will ignore a trivial finalize() method (e.g. one which just returns without doing anything, like the one defined in the Object class)", so with mandatory disposal you can avoid all finalization overhead in your entire system ( see alip's answer for information on how terrible this overhead is) by coding your finalize() method like this: @Override
protected void finalize() throws Throwable
{
if( Global.DEBUG && !closed )
{
Log.Error( "FORGOT TO CLOSE THIS!" );
}
//super.finalize(); see alip's comment on why this should not be invoked.
} The idea behind this is that Global.DEBUG is a static final variable whose value is known at compilation time, so if it is false then the compiler will not emit any code at all for the entire if statement, which will make this a trivial (empty) finalizer, which in turn means that your class will be treated as if it does not have a finalizer. (In C# this would be done with a nice #if DEBUG block, but what can we do, this is java, where we pay apparent simplicity in the code with additional overhead in the brain.) More about Mandatory Disposal, with additional discussion about disposing of resources in dot Net, here: michael.gr: Mandatory disposal vs. the "Dispose-disposing" abomination | {
"source": [
"https://softwareengineering.stackexchange.com/questions/288715",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/138956/"
]
} |
288,848 | In the last three years that I have worked as developer, I have seen a lot of examples where people use a switch statement to set the path (both in back-end and front-end) for a URL. Below is an example of this: Back-end example (C#): public static string getHost(EnvironmentEnum environment){
var path = String.Empty;
switch (environment)
{
case EnvironmentEnum.dev:
path = "http://localhost:55793/";
break;
case EnvironmentEnum.uat:
path = "http://dev.yourpath.com/";
break;
case EnvironmentEnum.production:
path = "http://yourpath.com/";
break;
}
return path;
} Front-end example (JavaScript): (function () {
if (window.location.host.indexOf("localhost") !== -1) {
window.serviceUrl = "http://localhost:57939/";
}
else if (window.location.host.indexOf("qa") !== -1) {
window.serviceUrl = "http://dev.yourpath.com/";
}
else {
window.serviceUrl = "http://yourpath.com/";
}
})(); It has been discussed whether it is a good or bad practice, and I think it is a bad practice, because we must avoid this kind of code and set a proper configuration. But to be honest I really don't know the proper answer, why it is not recommended, or what the correct way to implement this is. Can someone explain the pros and cons of the above practice? | Code that works for you and is easy to maintain is by definition "good". You should never change things just for the sake of obeying someone's idea of "good practice" if that person cannot point out what the problem with your code is. In this case, the most obvious problem is that resources are hard-coded into your application - even if they're selected dynamically, they're still hard-coded. This means that you cannot change these resources without recompiling/redeploying your application. With an external configuration file, you'd only have to change that file and restart/reload your application. Whether or not that is a problem depends on what you do with it. In a Javascript framework that is automatically redistributed with every request anyway, it is no problem at all - the changed value will propagate to every user the next time they use the application. With an on-premises deployment in a compiled language in an inaccessible location it is a very big problem indeed. Reinstalling the application might take a long time, cost a lot of money or have to be done at night to preserve availability. Whether or not hard-coded values are a problem depends on whether your situation is more like the first example or the second.
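As a minimal sketch of the configuration-file alternative mentioned above (file name, key and default are illustrative), using only java.util.Properties:
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

class ServiceConfig {

    static String serviceUrl() throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("app.properties")) {
            props.load(in);   // app.properties differs per environment,
                              // e.g. service.url=http://localhost:55793/
        }
        return props.getProperty("service.url", "http://localhost:55793/");
    }
}
Changing the target host then means editing one file and restarting or reloading, not recompiling and redeploying the application.
| {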
"source": [
"https://softwareengineering.stackexchange.com/questions/288848",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/172687/"
]
} |
288,935 | I know this is a very basic question but I can't seem to find the answer with Google. What is the difference between a hotfix and a bugfix? | The term hotfix is generally used when a client has found an issue within the current release of the product and cannot wait for it to be fixed until the next big release. Hence a hotfix issue is created to fix it, and it is released as part of an update to the current release, usually called a Cumulative Update (CU). CUs are nothing but a bunch of hotfixes together. Bugfix - We usually use this term when an issue is found internally during the development and testing phase. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/288935",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/134708/"
]
} |
288,989 | I am part of a consultant team implementing a new solution for a customer. I am
responsible for the majority of code reviews on the client-side codebase (React and javascript). I have noticed that some team members use unique coding patterns to a point that I could pick a file at random at tell who was the author from the style alone. Example 1 (one-off inline functions) React.createClass({
render: function () {
var someFunc = function () {
...
return someValue;
};
return <div>{someFunc()}</div>
}
}); The author argues that by assigning a meaningful name to someFunc the code will be easier to read. I believe that by inlining the function and adding a comment instead the same effect could be achieved. Example 2 (unbound functions) function renderSomePart(props, state) {
return (
<div>
<p>{props.myProp}</p>
<p>{state.myState}</p>
</div>
);
}
React.createClass({
render: function () {
return <div>{renderSomePart(this.props, this.state)}</div>;
}
}); This is how we usually do it (avoids having to pass state and props): React.createClass({
renderSomePart: function () {
return (
<div>
<p>{this.props.myProp}</p>
<p>{this.state.myState}</p>
</div>
);
},
render: function () {
return <div>{this.renderSomePart()}</div>;
}
}); While these coding patterns are technically correct they are not consistent with the rest of the codebase nor with the style and patterns that Facebook (the author of React) hints at in tutorials and examples. We need to keep a fast pace in order to deliver on time and I don't want to burden the team unnecessarily. At the same time we need to be on a reasonable quality level. I am trying to imagine myself as the customers' maintenance developer faced with inconsistencies like these (every component might require you to understand another way of doing the same thing). Question: What is the value as perceived by the customer and its maintenance developers of a consistent code base vs. allowing inconsistencies like these to remain and potentially spread? | Code Transfer Advantage Following patterns provided by a library, React in your case, means that the product you deliver will be easily picked up and maintained by other developers who are also familiar with React. Potential Backward Compatibility Issues Some libraries would have a new major version out, and backward compatibility might be compromised if your patterns are significantly different, thus slowing/halting your future upgrade. I am not sure how React would deal with new releases, but I have seen this happen before. New Members on The Team Start Being Productive Quicker If you follow what is provided by the author, you are more likely hiring talented developers using your framework and start them off quicker with your system rather than teaching new patterns. Potential Undiscovered Issues There might be issues in the future that you haven't discovered yet with your approach, that are solved by the author's approach. That being said, innovation is always a risk, if you strongly feel that your approach is better and it works for your team, go for it! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/288989",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/172542/"
]
} |
288,990 | Does an object have to represent an entity? By an entity I mean something like a Product , Motor , a ParkingLot etc, a physical, or even a clear-cut non-physical conceptual object -- something that is well defined, with some core data clearly belonging to the object, and some functions/methods that clearly operate on the core data. For example, I can have an object of a Demon , an entity in itself, an imaginary one perhaps and not physical but an entity nevertheless Can an object be just a collection of methods , a common set of procedures that tie in with a common goal? Example: can a class be called MotorOperations or MotorActions , where there is no entity, but methods inside the class do things like getMotorDataFromHTMLForm() getMotorManufacturers() selectMotorFromUserRequirements($requirements) canMotorCanHandleOperatingConditions($conditions) computePowerConsumptionForMotor($id) A class is typically defined as data central to the object + operations on data. So for a Motor there may be some motor variables relating to motor specifications, and there can be operations that combine those data to produce something. In my case it is more like I have a class with operations on data + data that is passed through the class, there is no data centric to "Motor Operations", other than temporary pass-through-the-class data. Question Can classes represent entity-less objects?
If not, why they are bad/incomplete/non-OOP-centric? Are there ways they need to be changed/improved conceptually to be in line with OOP? | No , an object does not have to represent an entity. In fact, I would argue that when you stop thinking about objects as physical entities is when you finally get the benefits that OOP promises. This isn't the best example, but the Coffee Maker design is probably where the light started to come on for me. Objects are about messages. They're about responsibilities. They're not about Cars, Users, or Orders. I know we teach OO this way, but it becomes apparent after a few tries how fundamentally frustrating it is to figure out where things go when you try to do MVC, MVVM or MVWhatever. Either your models become ridiculously bloated or your controllers do. For navigability, it's great to know that anything that touches Vehicles is in the Vehicle.ext file, but when your application is about Vehicles, you inevitably end up with 3000 lines of spaghetti in that file. When you have a new message to send, you have at least one new object, and perhaps a pair of them. So in your question about a bundle of methods, I would argue that you're potentially talking about a bundle of messages. And each one could be its own object, with it's own job to do. And that's ok. It will become apparent as you split things apart which things really, really need to be together. And you put them together. But you don't immediately drop every method in a vaguely appropriate drawer for convenience sake if you want to enjoy OO. Let's talk about bags of functions An object can be just a collection of methods and still be OO, but my "rules" are pretty strict. The collection should have a single responsibility, and that responsibility can't be as generic as "Does stuff to motors". I might do such a thing as a service-layer facade, but I'm acutely aware that I'm being lazy for navigability/discovery reasons, not because I'm trying to write OO code. All the methods should be at a consistent layer of abstraction. If one method retrieves Motor objects and another returns Horsepower, that's probably too far apart. The object should work on the same "kind" of data. This object does stuff to motors(start/stop), this one does things with crank lengths, this one handles ignition sequencing, this one takes an html form. This data could conceivably be fields on the object and it would seem cohesive. I generally build objects of this sort when I'm doing transforms, composition, or just don't want to worry about mutability. I find focusing on object responsibilities leads me towards cohesion. There has to be some cohesion to be an object, but there doesn't need to be any fields nor very much behavior for it to be an object. If I was building a system that needed those 5 motor methods, I would start out with 5 different objects that do those things. As I found commonality, I would either start to merge things together or use common "helper" objects. That moves me into open/closed concerns - how can I extract this bit of functionality so I never have to modify that particular file again but still use it where needed? Objects are about messages Fields barely matter to an object - getting and setting registers doesn't change the world outside the program. Collaborating with other objects gets the work done. However, the strength of OO is that we can create abstractions so we don't have to think about all the individual details at once. 
Abstractions that leak or don't make sense are problematic, so we think deeply (too much, maybe) about creating objects that match our mental models. Key question: Why do these two objects need to talk to each other? Think of the object as an organ in a person - it has a default purpose and only changes behavior when it receives a specific message that it cares about. Imagine a scenario where you're in the crosswalk and a car is coming fast. As the brain object, I detect a stressor. I tell the hypothalamus to send corticotrophin-releasing hormone. The pituitary gland gets that message and releases adrenal corticotrophic hormone. The adrenal glands get that message and create adrenaline. When the muscle object gets that adrenaline message it contracts. When the heart get the same message, it beats faster. There's a whole chain of players involved in starting the complex behavior of sprinting across the street and it's the messages that matter. The brain object knows how to get the hypothalamus to send out the alert, but it doesn't know the chain of objects that will eventually make the behavior happen. Likewise the heart has no idea where adrenaline comes from, it just knows to do something different when that shows up. So in this ( simplified ) example, the adrenal gland object only needs to know how to take ACTH and make adrenaline. It doesn't need any fields to do that, yet it still seems like an object to me. Now if our application is designed only to sprint across the street, I may not need the pituitary gland and the adrenal gland objects. Or I only need a pituitary gland object that only does a small part of what we might conceptually see as the "pituitary gland model". These concepts all exist as conceptual entities, but it's software and we can make the AdrenalineSender or MuscleContractor or whatever and not worry much about the "incompleteness" of our model. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/288990",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/119333/"
]
} |
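A minimal C# sketch of the "objects are about messages and responsibilities" point made in the answer above. All type and member names here (AdrenalineSender, IMuscle, IHeart) are invented for illustration and do not come from the original post.
public interface IMuscle { void Contract(); }
public interface IHeart { void BeatFaster(); }
// An "entity-less" object: no fields modelling a real-world thing, just one
// clearly bounded responsibility - relaying the adrenaline message to whoever cares.
public sealed class AdrenalineSender
{
    private readonly IMuscle muscle;
    private readonly IHeart heart;
    public AdrenalineSender(IMuscle muscle, IHeart heart)
    {
        this.muscle = muscle;
        this.heart = heart;
    }
    // The sender does not know why the stressor happened or what the
    // receivers do with the message - it only forwards it.
    public void Send()
    {
        muscle.Contract();
        heart.BeatFaster();
    }
}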
289,290 | I am creating an object model for a device that has multiple channels. The nouns used between the client and me are Channel and ChannelSet. ("Set" isn't semantically accurate, because it's ordered and a proper set isn't. But that's a problem for a different time.) I'm using C#. Here is a usage example of ChannelSet: // load a 5-channel ChannelSet
ChannelSet channels = ChannelSetFactory.FromFile("some_5_channel_set.json");
Console.Write(channels.Count);
// -> 5
foreach (Channel channel in channels) {
Console.Write(channel.Average);
Console.Write(", ");
}
// -> 0.3, 0.3, 0.9, 0.1, 0.2 All is dandy. However, the clients are not programmers and they absolutely will be confused by zero indexing - the first channel is channel 1 to them. But, for the sake of consistency with C#, I want to keep the ChannelSet indexed from zero . This is sure to cause some disconnects between my dev team and the clients when they interact. But worse, any inconsistency in how this is handled within the codebase is a potential problem. For example, here's a UI screen where the end user ( who thinks in terms of 1 indexing ) is editing channel 13: That Save button is eventually going to result in some code. If ChannelSet is 1 indexed: channels.GetChannel(13).SomeProperty = newValue; // notice: 13 or this if it's zero indexed: channels.GetChannel(12).SomeProperty = newValue; // notice: 12 I'm not really sure how to handle this. I feel like it's good practice to keep an ordered, integer-indexed list of things (the ChannelSet ) consistent with all of the other array and list interfaces in the C# universe (by zero indexing ChannelSet ). But then every piece of code between the UI and the backend will need a translation (subtract by 1), and we all know how insidious and common off-by-one errors already are. So, has a decision like this ever bitten you? Should I zero index or one index? | It feels like you're conflating the Identifier for the Channel with its position within a ChannelSet . The following is my visualisation of how your code/comments would look at the moment : public sealed class ChannelSet
{
private Channel[] channels;
/// <summary>Retrieves the specified channel</summary>
/// <param name="channelId">The id of the channel to return</param>
Channel GetChannel(int channelId)
{
return channels[channelId-1];
}
} It does feel like you've decided that because Channel s within a ChannelSet are identified by numbers that have an upper and lower bound they must be indexes and therefore as it's C#, 0 based. If the natural way to refer to each of the channels is by a number between 1 and X, refer to them by a number between 1 and X. Don't try and force them into being indexes. If you really want to provide a way to access them by 0 based index (what benefit does this give your end user, or developers that consume the code?) then implement an Indexer : public sealed class ChannelSet
{
private Channel[] channels;
/// <summary>Retrieves the specified channel</summary>
/// <param name="channelId">The id of the channel to return</param>
public Channel GetChannel(int channelId)
{
return channels[channelId-1];
}
/// <summary>Return the channel at the specified index</summary>
public Channel this[int index]
{
get { return channels[index]; }
}
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/289290",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/168619/"
]
} |
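A short usage sketch of the distinction drawn in the answer above, assuming the ChannelSet class is completed with both the GetChannel method and the indexer shown there (the factory call is taken from the question; everything else is illustrative).
ChannelSet channels = ChannelSetFactory.FromFile("some_5_channel_set.json");
Channel byId    = channels.GetChannel(13);  // 1-based identifier, the number the user sees
Channel byIndex = channels[12];             // 0-based position, the way C# collections work
// Both expressions refer to the same channel. UI code talks in identifiers,
// iteration code talks in indexes, and the "- 1" lives in exactly one place.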
289,413 | When my coworker thinks that there is no need for a test on his PC, he makes changes, commits and then pushes. Then he tests on the production server and realizes that he made a mistake. It happens about once a week. Just now I saw that he made 3 commits and pushes, with deployment to the production server, within 5 minutes. I have told him a few times that this is not the way good work is done. I don't want to be rude to him again; he is at the same level as me in the company and he has worked here longer than I have. I want this behavior to be punished in some way, or at least made as unpleasant as possible. Before I started, the company was deploying using antique methods, such as FTP, and there was no version control. I forced them/us to use Git, Bitbucket, Dploy.io, and HipChat. The deployment is not automatic; someone has to log in to Dploy.io and press the deploy button. Now, how can I force them not to test on the production server? Could something like a HipChat bot sense that there are repeated edits on the same line and send a notice to the programmer? | You need a proper Quality Assurance (QA) process. In a professional software development team, you don't push from development right to production. You have at least three separate environments: development, staging and production. When you think you have got something working in your development environment, you push to staging first, where each commit is tested by the QA team, and only if that test is successful does it get pushed to production. Ideally, development, testing and pushing to production are done by separate people. This can be ensured by configuring your build automation system so that developers can only deploy from development to staging and the QA team can only deploy from staging to production. If you can't persuade management to hire someone to do your QA, then maybe one of you can play that role for the other. I have never worked with Dploy.io, but some build automation systems can be configured so that a user can deploy both from development to staging and from staging to production, but not do both for the same build, so a second person is always required (but make sure you have some backup people for times when one of you is absent). Another option is to have your support staff do the QA. This might seem like additional work for them, but it also makes sure that they are aware of any changes to the application, which can save them some work in the long run. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/289413",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2410/"
]
} |
289,448 | Recently in my company it has been suggested that one developer (and only one) should focus on one feature. That would mean something like setting the developer apart from the normal team routine, releasing them from some other responsibilities (meetings and such), and this person would be the only one responsible for the feature, technology-wise. For the record, we use SCRUM within SAFe, and we have four full-time developers per team, sharing QA and product owners between our two teams (Android and iOS). While I agree that this would increase productivity in the short term, I have the feeling (and I think I learned this at university) that it is a bad practice for many reasons: Code review loses value. Minimal knowledge sharing. Risk increment. Loss of team flexibility. Am I right, or is it not a bad practice at all? | In my 20 years of experience, it is better to rotate code ownership amongst designers, or at least have a pair of owners. Single feature ownership has the following issues, several of which you mentioned:
- it tends to pigeonhole designers and limit their growth opportunities
- it puts all the eggs in one basket, so if someone is hit by a bus or quits, there can be a hole in knowledge
- a single person may not see an issue in the code, and without a peer owner code reviews are far less effective
- it is hard to maintain code consistency and readability if everyone is working on code using their own style - while this can be worked around with style guidelines, subtleties can creep in, especially when using convention over configuration where people are relying on default behavior
- developers can tend to become protective and defensive of their code if they own it, which can inhibit the evolution of the code - if several people own it, this tendency is reduced | {
"source": [
"https://softwareengineering.stackexchange.com/questions/289448",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/129713/"
]
} |
289,691 | So I'm making a method to create a salutation line based on two people from a database. There are four parameters: the two names (name1 and name2) and the two genders (gender1 and gender2). For every gender combination I have a somewhat different output. For example, if gender1 is M (man) and gender2 is also M, the output should be something like: Dear Sir name1 and Sir name2, At this time, my switch looks like this: switch(gender1){
case 'M':
switch(gender2){
case 'M': printf("Dear Sir %s and Sir %s", name1, name2); break;
case 'W': printf("Dear Sir %s and Madame %s", name1, name2); break;
case 'R': ...
}
break;
case 'W':
switch(gender2){
case 'M': printf("Dear Madame %s and Sir %s", name1, name2); break;
case 'W': printf("Dear Madame %s and Madame %s", name1, name2); break;
case 'R': ...
}
break;
case ...etc.
} Note that I have multiple gender options, like 'R' for "Dear Relation" and some more that I do not have the time to translate. How can I reduce this double switch statement? Putting the second switch in a method is not an option because there is also a case where both of the names are the same and then the output should be combined like: "Dear Sir and Madame name1," | Add the title to the parameters of the printf: char* title1;
switch(gender1){
case 'M':
title1 = "Sir";
break;
case 'W':
title1 = "Madam";
break;
case ...etc.
}
char* title2;
switch(gender2){
case 'M':
title2 = "Sir";
break;
case 'W':
title2 = "Madam";
break;
case ...etc.
}
printf("Dear %s %s and %s %s", title1, name1, title2, name2); you can extract the switch to its own function for re-usability and compactness. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/289691",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/171501/"
]
} |
289,746 | We developed a product (prototype) P_OLD in language X and we are now rewriting it from scratch as P_NEW in language Y. Since P_NEW and P_OLD are the same product: Should P_NEW just be a branch of P_OLD, or should it be its own repository? What is the usual way to handle such big changes from a version control perspective? | You almost certainly want a new repository. The purpose of the repository is: to track history and changes so you can compare them easily, and to manage branches and merges rather than just emailing patch files around and applying them to working directories manually. If you're totally rewriting a project from scratch then there is no point putting the rewrite in the same repository. You won't be able to apply patches written in the old language to your rewrite. Switching repos won't make the history in the old repo go away, and if you switch you won't have any weird interim stages where you have two languages kicking around in your repo. The only reason I would even consider keeping the repository when changing languages would be if a) the languages are so similar that code can often be copy-pasted from one to the other without making any changes, or b) you have a project in which the majority of the functional content in version control is something like templates in a templating language you're retaining, and the language of the core which you're changing can be translated line-for-line into another language (and even then only if you know you're going to need to keep iterating the templates during the migration). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/289746",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/186488/"
]
} |
290,078 | What term can I use to describe something with O(N log N) complexity? For example:
O(1): Constant
O(log N): Logarithmic
O(N): Linear
O(N log N): ??????
O(N²): Quadratic
O(N³): Cubic | "N log N" is as good as you're going to get, and should be well understood by professional programmers. You can't expect there to be a single word to describe every complexity class that exists. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/290078",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/155587/"
]
} |
290,180 | I'm trying to make a sort of game where I have a grid of 20x20 and I display a player (P), a target (T) and three enemies (X). All of these have an X and a Y coordinate which are assigned using rand(). The problem is that if I try to add more points to the game (refills for energy, etc.) they overlap with one or more of the other points, because the range is small (1 to 20 inclusive). These are my variables and how I'm assigning them values:
(the COORD is a struct with just an X and a Y) const int gridSize = 20;
COORD player;
COORD target;
COORD enemy1;
COORD enemy2;
COORD enemy3;
//generate player
srand ( time ( NULL ) );
spawn(&player);
//generate target
spawn(&target);
//generate enemies
spawn(&enemy1);
spawn(&enemy2);
spawn(&enemy3);
void spawn(COORD *point)
{
//allot X and Y coordinate to a point
point->X = randNum();
point->Y = randNum();
}
int randNum()
{
//generate a random number between 1 and gridSize
return (rand() % gridSize) + 1;
} I want to add more things to the game but the probability of overlap increases when I do that. Is there any way to fix this? | While the users who complain about rand() and recommend better RNGs are right about the quality of the random numbers, they are also missing the bigger picture. Duplicates in streams of random numbers cannot be avoided, they are a fact of life. This is the lesson of the birthday problem . On a grid of 20 * 20 = 400 possible spawn positions, a duplicate spawn point is to be expected (50% probability) even when spawning only 24 entities. With 50 entities (still only 12.5% of the whole grid), the probability of a duplicate is over 95%. You have to deal with collisions. Sometimes you can draw all samples at once, then you can use a shuffle algorithm to draw n guaranteed-distinct items. You just need to generate the list of all possibilities. If the full list of possibilities is too large to store, you can generate spawn positions one at a time as you do now (just with a better RNG) and simply re-generate when a collision occurs. Even though having some collisions is likely, many collisions in a row are exponentially unlikely even if most of the grid is populated. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/290180",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/187835/"
]
} |
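The two strategies from the answer above (shuffle every cell once, or re-roll on collision), sketched in C# rather than the question's C; the class and method names are invented for illustration.
using System;
using System.Collections.Generic;
class SpawnSketch
{
    const int GridSize = 20;
    static readonly Random Rng = new Random();
    // Strategy 1: list every cell exactly once, shuffle, then take the first N as spawn points.
    static List<(int X, int Y)> ShuffledCells()
    {
        var cells = new List<(int X, int Y)>();
        for (int x = 1; x <= GridSize; x++)
            for (int y = 1; y <= GridSize; y++)
                cells.Add((x, y));
        for (int i = cells.Count - 1; i > 0; i--)   // Fisher-Yates shuffle
        {
            int j = Rng.Next(i + 1);
            (cells[i], cells[j]) = (cells[j], cells[i]);
        }
        return cells;
    }
    // Strategy 2: re-roll on collision, for when the full list is too large to keep around.
    static (int X, int Y) NextFreeCell(HashSet<(int X, int Y)> used)
    {
        while (true)
        {
            var cell = (X: Rng.Next(1, GridSize + 1), Y: Rng.Next(1, GridSize + 1));
            if (used.Add(cell)) return cell;        // Add returns false when the cell is already taken
        }
    }
}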
290,232 | The company I work at is initializing all of their data structures through an initialize function like so: //the structure
typedef struct{
int a,b,c;
} Foo;
//the initialize function
void InitializeFoo(Foo* const foo){
foo->a = x; //derived here based on other data
foo->b = y; //derived here based on other data
foo->c = z; //derived here based on other data
}
//initializing the structure
Foo foo;
InitializeFoo(&foo); I've gotten some push back trying to initialize my structs like this: //the structure
typedef struct{
int a,b,c;
} Foo;
//the initialize function
Foo ConstructFoo(int a, int b, int c){
Foo foo;
foo.a = a; //part of parameter input (inputs derived outside of function)
foo.b = b; //part of parameter input (inputs derived outside of function)
foo.c = c; //part of parameter input (inputs derived outside of function)
return foo;
}
//initialize (or construct) the structure
Foo foo = ConstructFoo(x,y,z); Is there an advantage to one over the other? Which one should I do, and how would I justify it as the better practice? | In the 2nd approach you will never have a half-initialised Foo. Putting all the construction in one place also seems the more sensible and obvious choice. But... the 1st way isn't so bad, and is often used in many areas (there's even a similar discussion about the best way to dependency-inject: property injection like your 1st way, or constructor injection like the 2nd). Neither is wrong. So if neither is wrong and the rest of the company uses approach #1, then you should fit in with the existing codebase and not try to mess it up by introducing a new pattern. This is really the most important factor at play here: play nice with your new friends and don't try to be that special snowflake who does things differently. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/290232",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/63566/"
]
} |
290,351 | Empty interfaces are generally considered bad practice, as far as I can tell - especially where things like attributes are supported by the language. However, is an interface considered 'empty' if it inherits from other interfaces? interface I1 { ... }
interface I2 { ... } //unrelated to I1
interface I3
: I1, I2
{
// empty body
} Anything that implements I3 will need to implement I1 and I2 , and objects from different classes that inherit I3 can then be used interchangeably (see below), so is it right to call I3 empty ? If so what would be a better way to architecture this? // with I3 interface
class A : I3 { ... }
class B : I3 { ... }
class Test {
void foo() {
I3 something = new A();
something = new B();
something.SomeI1Function();
something.SomeI2Function();
}
}
// without I3 interface
class A : I1, I2 { ... }
class B : I1, I2 { ... }
class Test {
void foo() {
I1 something = new A();
something = new B();
something.SomeI1Function();
something.SomeI2Function(); // we can't do this without casting first
}
} | Anything that implements I3 will need to implement I1 and I2, and objects from different classes that inherit I3 can then be used interchangeably (see below), so is it right to call I3 empty? If so what would be a better way to architecture this? "Bundling" I1 and I2 into I3 offers a nice (?) shortcut, or some sort of syntax sugar for wherever you'd like both to be implemented. The problem with this approach is that you can't stop other developers from explicitly implementing I1 and I2 side by side, but it doesn't work both ways - while : I3 is equivalent of : I1, I2 , : I1, I2 is not an equivalent of : I3 . Even if they're functionally identical, a class that supports both I1 and I2 will not be detected as supporting I3 . This means that you're offering two ways of accomplishing the same thing - "manual" and "sweet" (with your syntax sugar). And it can have a bad impact on code consistency. Offering 2 different ways to accomplish the same thing here, there, and in another place results in 8 possible combinations already :) void foo() {
I1 something = new A();
something = new B();
something.SomeI1Function();
something.SomeI2Function(); // we can't do this without casting first
} OK, you can't. But perhaps it's actually a good sign? If I1 and I2 imply different responsibilities (otherwise there would be no need to separate them in the first place), then maybe foo() is wrong to try and make something perform two different responsibilities in one go? In other words, it could be that foo() itself violates the principle of Single Responsibility, and the fact that it requires casting and runtime type checks to pull it off is a red flag alarming us about it. EDIT: If you insisted on ensuring that foo takes something that implements I1 and I2 , you could pass them as separate parameters: void foo(I1 i1, I2 i2) And they could be the same object: foo(something, something); What does foo care if they're the same instance or not? But if it does care (eg. because they're not stateless), or you just think this looks ugly, one could use generics instead: void Foo<T>(T something) where T: I1, I2 Now generic constraints take care of the desired effect while not polluting the outside world with anything. This is idiomatic C#, based on compile-time checks, and it feels more right overall. I see I3 : I2, I1 somewhat as an effort to emulate union types in a language that doesn't support it out of the box (C# or Java don't, whereas union types exist in functional languages). Trying to emulate a construct that's not really supported by the language is often clunky. Similarly, in C# you can use extension methods and interfaces if you want to emulate mixins . Possible? Yes. Good idea? I'm not sure. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/290351",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/100925/"
]
} |
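A compilable rendering of the generic-constraint alternative suggested at the end of the answer above; the interface and member names echo the question, the rest is illustrative.
interface I1 { void SomeI1Function(); }
interface I2 { void SomeI2Function(); }
class A : I1, I2
{
    public void SomeI1Function() { }
    public void SomeI2Function() { }
}
class Test
{
    // The constraint proves at compile time that 'something' supports both
    // responsibilities - no empty marker interface and no casting required.
    void Foo<T>(T something) where T : I1, I2
    {
        something.SomeI1Function();
        something.SomeI2Function();
    }
    void Caller() => Foo(new A());
}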
290,372 | I have a class with two readonly int fields. They are exposed as properties: public class Thing
{
private readonly int _foo, _bar;
/// <summary> I AM IMMUTABLE. </summary>
public Thing(int foo, int bar)
{
_foo = foo;
_bar = bar;
}
public int Foo { get { return _foo; } set { } }
public int Bar { get { return _bar; } set { } }
} However, that means that the following is perfectly legal code: Thing iThoughtThisWasMutable = new Thing(1, 42);
iThoughtThisWasMutable.Foo = 99; // <-- Poor, mistaken developer.
// He surely has bugs now. :-( The bugs that come from assuming that would work are sure to be insidious. Sure, the mistaken developer should have read the docs. But that doesn't change the fact that no compile- or run-time error warned him about the problem. How should the Thing class be changed so that devs are less likely to make the above mistake? Throw an exception? Use a getter method instead of a property? | Why make that code legal? Take out the set { } if it does nothing. This is how you define a read only public property: public int Foo { get { return _foo; } } | {
"source": [
"https://softwareengineering.stackexchange.com/questions/290372",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/168619/"
]
} |
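For completeness, a sketch of the same class using C# 6 getter-only auto-properties, which express the read-only intent of the answer above with less ceremony; the member names are taken from the question.
public class Thing
{
    // Getter-only auto-properties can be assigned only in the constructor,
    // so 'thing.Foo = 99;' is a compile-time error for callers.
    public int Foo { get; }
    public int Bar { get; }
    public Thing(int foo, int bar)
    {
        Foo = foo;
        Bar = bar;
    }
}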
290,482 | I am writing a library which deals a lot with sub-sequences of ordered containers. So, for example, I have a container (1,2,3,4,5,6) and a user wants to access (3,4,5). I am providing the subsequence by a pair of iterators, pointing to its first and last element respectively, i.e. 3 and 5. Since the library is written in C++ and AFAIK the std convention is to have the last iterator point beyond the last element, I am wondering whether what I am doing is good practice or whether I should return a pair of iterators pointing to the first and beyond-last element respectively, i.e. 3 and 6? Also, from a programming perspective, it complicates things when using std functionality; for example, to count the number of elements I have to do: int elementCnt = std::distance(startIt, endIt) + 1; | Follow the standard - the end is the iterator past the one you want. This allows you to use all the standard algorithms and containers without problem. It also means your users will be able to write the code they always have (e.g. for (x=startIt; x != endIt; x++)) and this will work as expected. If you change this behaviour and set the last iterator to the last element, all that goes out of the window and you might as well use a different nomenclature than iterators, as you're effectively changing the way everyone will expect them to work. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/290482",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/186488/"
]
} |
290,566 | I'd like to avoid having the cookie banner on my websites where possible. Could I store session IDs in localStorage to bypass implementing the banner? | The cookie law is not actually about cookies (and it's not actually called the cookie law). It's about tracking users and about storing and sharing that information with third parties. Cookies are just the most popular method of tracking users. If you don't want to show the "cookie warning", then just don't track users beyond the session and don't share traffic data with third parties. The actual directive . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/290566",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/188312/"
]
} |
290,917 | One of the major issues that I have seen occur in a system with microservices is the way transactions work when they span over different services. Within our own architecture, we have been using distributed transactions to resolve this, but they come with their own issues. Especially deadlocks have been a pain so far. Another option seems to be some kind of custom-made transaction manager, which knows the flows within your system, and will take care of the rollbacks for you as a background process spanning over your entire system (so it will tell the other services to rollback and if they're down, notify them later on). Is there another, accepted option? Both of these seem to have their disadvantages. The first one could cause deadlocks and a bunch of other issues, the second one could result in data inconsistency. Are there better options? | The usual approach is to isolate those microservices as much as possible - treat them as single units. Then transactions can be developed in context of the service as a whole (ie not part of usual DB transactions, though you can still have DB transactions internal to the service). Think how transactions occur and what kind make sense for your services then, you can implement a rollback mechanism that un-does the original operation, or a 2-phase commit system that reserves the original operation until told to commit for real. Of course both these systems mean you're implementing your own, but then you're already implementing your microservices. Financial services do this kind of thing all the time - if I want to move money from my bank to your bank, there is no single transaction like you'd have in a DB. You don't know what systems either bank is running, so must effectively treat each like your microservices. In this case, my bank would move my money from my account to a holding account and then tell your bank they have some money, if that send fails, my bank will refund my account with the money they tried to send. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/290917",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/185326/"
]
} |
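A rough C# sketch of the reserve/commit/refund flow described in the answer above for the bank-transfer example. Every type and method name here (IAccountService, IReceivingBankClient, TransferCoordinator) is hypothetical and only illustrates the shape of a compensating transaction.
using System;
public interface IAccountService
{
    Guid Reserve(string accountId, decimal amount);   // move the money into a holding account
    void CommitReservation(Guid reservationId);
    void RefundReservation(Guid reservationId);
}
public interface IReceivingBankClient
{
    bool Deposit(string targetAccount, decimal amount);
}
public sealed class TransferCoordinator
{
    private readonly IAccountService accounts;
    private readonly IReceivingBankClient otherBank;
    public TransferCoordinator(IAccountService accounts, IReceivingBankClient otherBank)
    {
        this.accounts = accounts;
        this.otherBank = otherBank;
    }
    public bool Transfer(string fromAccount, string toAccount, decimal amount)
    {
        Guid reservation = accounts.Reserve(fromAccount, amount);
        if (otherBank.Deposit(toAccount, amount))
        {
            accounts.CommitReservation(reservation);   // both sides succeeded
            return true;
        }
        accounts.RefundReservation(reservation);       // compensating action instead of a DB rollback
        return false;
    }
}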
290,922 | Imagine a scenario with two different microservices: one handles Authentication within the service, the other takes care of User Management. They both have a concept of a User, and will talk about Users through calls to each other. Where would the domain model of a "User" belong, though? Would they both have a different representation of what a User is at the database level? What about when we have a UserDTO to be used in API calls - would they both have one for their respective APIs? What is the generally accepted solution for this kind of architectural issue? | In a microservices architecture, each service is completely independent of the others and must hide the details of its internal implementation. If you share the model, you are coupling the microservices and losing one of the greatest advantages: each team can develop its microservice without restrictions and without needing to know how the other microservices evolve. Remember that you can even use different languages in each one; this would be difficult if you start to couple microservices. If they are too closely related, maybe they are really one service, as @soru says. Related questions: Why is it so bad to read data from a database “owned” by a different microservice How do you handle shared concepts in a microservice architecture? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/290922",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/185326/"
]
} |
290,929 | I have an algorithm which creates a collection of objects. These objects are mutable during creation, since they start out with very little, but then they are populated with data in different places within the algorithm. After the algorithm is completed, the objects should never be changed - however they are consumed by other parts of the software. In these scenarios, is it considered good practice to have two versions of the class, as described below? The mutable one is created by the algorithm; then, on completion of the algorithm, the data is copied into immutable objects which are returned. | You can perhaps use the builder pattern . It uses a separate 'builder' object whose purpose is to collect the necessary data; when all the data is collected, it creates the actual object. The created object can be immutable (a minimal sketch of this follows below). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/290929",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124966/"
]
} |
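A minimal C# sketch of the builder suggestion in the answer above (this is the sketch referred to there); the Report/ReportBuilder names are invented for illustration.
using System.Collections.Generic;
public sealed class Report
{
    public string Title { get; }
    public IReadOnlyList<string> Rows { get; }
    // Only the builder constructs it, and nothing can change it afterwards.
    internal Report(string title, IReadOnlyList<string> rows)
    {
        Title = title;
        Rows = rows;
    }
}
public sealed class ReportBuilder
{
    private string title = "";
    private readonly List<string> rows = new List<string>();
    public ReportBuilder WithTitle(string value) { title = value; return this; }
    public ReportBuilder AddRow(string row)      { rows.Add(row); return this; }
    // The algorithm mutates the builder freely while it runs, then hands out an immutable result.
    public Report Build() => new Report(title, rows.AsReadOnly());
}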
290,955 | I was assigned to maintain an application written some time ago by more skilled developers. I came across this piece of code: public Configuration retrieveUserMailConfiguration(Long id) throws MailException {
try {
return translate(mailManagementService.retrieveUserMailConfiguration(id));
} catch (Exception e) {
rethrow(e);
}
throw new RuntimeException("cannot reach here");
} I'm curious whether throwing RuntimeException("cannot reach here") is justified. I'm probably missing something obvious, knowing that this piece of code comes from a more seasoned colleague. EDIT:
Here is the rethrow body that some answers referred to; I had deemed it not important to this question. private void rethrow(Exception e) throws MailException {
if (e instanceof InvalidDataException) {
InvalidDataException ex = (InvalidDataException) e;
rethrow(ex);
}
if (e instanceof EntityAlreadyExistsException) {
EntityAlreadyExistsException ex = (EntityAlreadyExistsException) e;
rethrow(ex);
}
if (e instanceof EntityNotFoundException) {
EntityNotFoundException ex = (EntityNotFoundException) e;
rethrow(ex);
}
if (e instanceof NoPermissionException) {
NoPermissionException ex = (NoPermissionException) e;
rethrow(ex);
}
if (e instanceof ServiceUnavailableException) {
ServiceUnavailableException ex = (ServiceUnavailableException) e;
rethrow(ex);
}
LOG.error("internal error, original exception", e);
throw new MailUnexpectedException();
}
private void rethrow(ServiceUnavailableException e) throws
MailServiceUnavailableException {
throw new MailServiceUnavailableException();
}
private void rethrow(NoPermissionException e) throws PersonNotAuthorizedException {
throw new PersonNotAuthorizedException();
}
private void rethrow(InvalidDataException e) throws
MailInvalidIdException, MailLoginNotAvailableException,
MailInvalidLoginException, MailInvalidPasswordException,
MailInvalidEmailException {
switch (e.getDetail()) {
case ID_INVALID:
throw new MailInvalidIdException();
case LOGIN_INVALID:
throw new MailInvalidLoginException();
case LOGIN_NOT_ALLOWED:
throw new MailLoginNotAvailableException();
case PASSWORD_INVALID:
throw new MailInvalidPasswordException();
case EMAIL_INVALID:
throw new MailInvalidEmailException();
}
}
private void rethrow(EntityAlreadyExistsException e)
throws MailLoginNotAvailableException, MailEmailAddressAlreadyForwardedToException {
switch (e.getDetail()) {
case LOGIN_ALREADY_TAKEN:
throw new MailLoginNotAvailableException();
case EMAIL_ADDRESS_ALREADY_FORWARDED_TO:
throw new MailEmailAddressAlreadyForwardedToException();
}
}
private void rethrow(EntityNotFoundException e) throws
MailAccountNotCreatedException,
MailAliasNotCreatedException {
switch (e.getDetail()) {
case ACCOUNT_NOT_FOUND:
throw new MailAccountNotCreatedException();
case ALIAS_NOT_FOUND:
throw new MailAliasNotCreatedException();
}
} | This rethrow(e); function violates the principle which says that under normal circumstances, a function will return, while under exceptional circumstances, a function will throw an exception. This function violates this principle by throwing an exception under normal circumstances. That's the source of all of the confusion. The compiler assumes that this function will return under normal circumstances, so as far as the compiler can tell, execution may reach the end of the retrieveUserMailConfiguration function, at which point it is an error not to have a return statement. The RuntimeException thrown there is supposed to alleviate this concern of the compiler, but it is a rather clunky way of doing it. Another way of preventing the function must return a value error is to add a return null; //to keep the compiler happy statement, but that's equally clunky in my opinion. So, personally, I would replace this: rethrow(e); with this: report(e);
throw e; or, better yet (as coredump suggested), with this: throw reportAndTransform(e); Thus, the flow of control would be made obvious to the compiler, so your final throw new RuntimeException("cannot reach here"); would become not only redundant, but actually not even compilable, since it would be flagged by the compiler as unreachable code. That's the most elegant and also the simplest way of getting out of this ugly situation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/290955",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/188802/"
]
} |
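The "throw reportAndTransform(e)" shape from the answer above, shown as a small self-contained C# sketch; the original discussion is Java, but the compiler's reachability reasoning is the same, and all names here are illustrative.
using System;
public static class Errors
{
    // Build (and log) the translated exception; the caller does the actual throwing.
    public static Exception ReportAndTransform(Exception cause)
    {
        Console.Error.WriteLine(cause);
        return new InvalidOperationException("operation failed", cause);
    }
}
public class Service
{
    public string Load(int id)
    {
        try
        {
            return LoadFromBackend(id);
        }
        catch (Exception e)
        {
            // Because 'throw' sits at the call site, the compiler sees that this branch
            // never falls through - no trailing "cannot reach here" statement is needed.
            throw Errors.ReportAndTransform(e);
        }
    }
    private string LoadFromBackend(int id) => id.ToString();
}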
291,200 | I have a base class with a fair amount of "meta programming" to give it the flexibility/abstraction it needs to be rather generic. I do have a lot of subclasses using the common methods in the base class, and I have behavior oriented unit tests covering all of the cases in each subclass. Is it ok to skip testing the base class? | To verify if you have enough tests or not, you can check your code coverage and your branch coverage induced by the tests (maybe by using a coverage tool, maybe manually by reviewing the code paths or by using a debugger). If you come to the conclusion the tests for the subclasses give you a high enough coverage for your base classes code, then adding further tests obviously won't bring you much benefit. On the other hand, if there are code paths you can only test by adding specific tests using the base class directly, then you should go this route. Another possible reason for "testing your base class directly" is that you want to test a specific function of that class "in isolation". Sometimes it can be easier to design test cases directly for a specific method, instead of only testing that method indirectly by calling the methods of your subclasses which use that method. Note that when you have a generic base class for which the typical usage scenario is to derive a subclass, your base class is probably abstract. So for testing such a class you need to make a derivation anyway. For this situation, testing "the base class directly" could mean to add a special derivation just for testing purposes, of course. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/291200",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/188952/"
]
} |
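A sketch of the "derivation just for testing purposes" mentioned at the end of the answer above, written in C# with xUnit-style assertions; the class names and tax-rate example are invented.
using Xunit;
public abstract class PriceCalculatorBase
{
    // Common logic that every subclass reuses.
    public decimal WithTax(decimal net) => net * (1 + TaxRate);
    protected abstract decimal TaxRate { get; }
}
// Minimal derivation that exists only so the base logic can be exercised in isolation.
internal sealed class TestablePriceCalculator : PriceCalculatorBase
{
    protected override decimal TaxRate => 0.10m;
}
public class PriceCalculatorBaseTests
{
    [Fact]
    public void WithTax_applies_the_subclass_rate_to_the_net_price()
    {
        var calculator = new TestablePriceCalculator();
        Assert.Equal(110m, calculator.WithTax(100m));
    }
}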
291,230 | I have been researching Interpreters/Compilers, then I stumbled across JIT-Compilation - specifically Google Chrome's V8 Javascript Engine. My questions are - How can it be faster than standard Interpretation? Why wasn't JIT-Compilation used in the first place? My Current Understanding Every Javascript Program starts out as source code , then, regardless of the method of execution, is ultimately is translated to machine code . Both JIT-Compilation and Interpretation must follow this path , so how can JIT-Compilation be faster (also because JIT is time-constrained, unlike AOT-Compilation) ? It seems that JIT-Compilation is a relatively old innovation , based off of Wikipedia's JIT-Compilation Article . "The earliest published JIT compiler is generally attributed to work on LISP by McCarthy in 1960 ." "Smalltalk (c. 1983 ) pioneered new aspects of JIT compilations. For example, translation to machine code was done on demand, and the result was cached for later use. When memory became scarce, the system would delete some of this code and regenerate it when it was needed again." So why was Javascript Interpreted to begin with ? I'm very confused, and I've done a lot of research on this, but I haven't found satisfactory answers. So clear, concise answers would be appreciated. And if additional explanation about Interpreters, JIT-Compilers, etc. needs to be brought in, that's appreciated as well. | The short answer is that JIT has longer initialization times, but is a lot faster in the long run, and JavaScript wasn't originally intended for the long run. In the 90s, typical JavaScript on a web site would amount to one or two functions in the header, and a handful of code embedded directly in onclick properties and the like. It would typically get run right when the user was expecting a huge page load delay anyway. Think extremely basic form validation or tiny math utilities like mortgage interest calculators. Interpreting as needed was a lot simpler and provided perfectly adequate performance for the use cases of the day. If you wanted something with long-run performance, you used flash or a java applet. Google maps in 2004 was one of the first killer apps for heavy JavaScript use. It was eye-opening to the possibilities of JavaScript, but also highlighted its performance problems. Google spent some time trying to encourage browsers to improve their JavaScript performance, then eventually decided competition would be the best motivator, and would also give them the best seat at the browser-standards table. Chrome and V8 were released in 2008 as a result. Now, 11 years after Google Maps came on the scene, we have new developers who don't remember JavaScript ever being considered inadequate for that sort of task. Say you have a function animateDraggedMap . It might take 500 ms to interpret it, and 700 ms to JIT compile it. However, after JIT compilation, it might take only 100 ms to actually run. If it's the 90s and you're only calling a function once then reloading the page, JIT is not worth it at all. If it's today and you're calling animateDraggedMap hundreds or thousands of times, that extra 200 ms at initialization is nothing, and it can be done behind the scenes before the user even tries to drag the map. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/291230",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/186734/"
]
} |
291,489 | Suppose that I am developing a relatively large project. I have already documented all my classes and functions with Doxygen, however, I had an idea to put a "programmer's notes" on each source code file. The idea behind this is to explain in layman's terms how a specific class works (and not only why as most comments do). In other words, to give fellow programmers an other-view of how a class works. For example: /*
* PROGRAMMER'S NOTES:
*
* As stated in the documentation, the GamepadManager class
* reads joystick input using SDL and 'parses' SDL events to
* Qt signals.
*
* Most of the code here is about goofing around the joystick mappings.
* We want to avoid having different joystick behaviours between
* operating systems to have a more integrated user experience, since
* we don't want team members to have a bad surprise while
* driving their robots with different laptops.
*
* Unfortunately, we cannot use SDL's GamepadAPI because the robots
* are interested in getting the button/axes numbers, not the "A" or
* "X" button.
*
* To get around this issue, we created an INI file for the most common
* controllers that maps each joystick button/axis to the "standard"
* buttons and axes used by most teams.
*
* We chose to use INI files because we can safely use QSettings
* to read its values and we don't have to worry about having to use
* third-party tools to read other formats.
*/ Would this be a good way to make a large project easier for new programmers/contributors to understand how it works? Aside from maintaining a consistent coding style and 'standard' directory organization, are there any 'standards' or recommendations for these cases? | This is awesome. I wish more software developers took the time and effort to do this. It: States in plain English what the class does (i.e. it's responsibility), Provides useful supplementary information about the code without repeating verbatim what the code already says, Outlines some of the design decisions and why they were made, and Highlights some of the gotchas that might befall the next person reading your code. Alas, many programmers fall into the camp of "if code is written properly, it shouldn't have to be documented." Not true. There are many implied relationships between code classes, methods, modules and other artifacts that are not obvious from just reading the code itself. An experienced coder can carefully craft a design having clear, easily-understandable architecture that is obvious without documentation. But how many programs like that have you actually seen? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/291489",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/87708/"
]
} |
291,628 | Suppose we have a method foo(String bar) that only operates on strings that meet certain criteria; for example, it must be lowercase, must not be empty or contain only whitespace, and must match the pattern [a-z0-9-_./@]+ . The documentation for the method states these criteria. Should the method reject any and all deviations from these criteria, or should it be more forgiving about some of them? For example, if the initial method is public void foo(String bar) {
if (bar == null) {
throw new IllegalArgumentException("bar must not be null");
}
if (!bar.matches(BAR_PATTERN_STRING)) {
throw new IllegalArgumentException("bar must match pattern: " + BAR_PATTERN_STRING);
}
this.bar = bar;
} And the second forgiving method is public void foo(String bar) {
if (bar == null) {
throw new IllegalArgumentException("bar must not be null");
}
if (!bar.matches(BAR_PATTERN_STRING)) {
bar = bar.toLowerCase().trim().replaceAll(" ", "_");
if (!bar.matches(BAR_PATTERN_STRING)) {
throw new IllegalArgumentException("bar must match pattern: " + BAR_PATTERN_STRING);
}
}
this.bar = bar;
} Should the documentation be changed to state that it will be transformed and set to the transformed value if possible, or should the method be kept as simple as possible and reject any and all deviations? In this case, bar could be set by the user of an application. The primary use-case for this would be users accessing objects from a repository by a specific string identifier. Each object in the repository should have a unique string to identify it. These repositories could store the objects in various ways (sql server, json, xml, binary, etc) and so I tried to identify the lowest common denominator that would match most naming conventions. | Your method should do what it says it does. This prevents bugs, both from use and from maintainers changing behavior later. It saves time because maintainers don't need to spend as much time figuring out what is going on. That said, if the defined logic isn't user friendly, it should perhaps be improved. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/291628",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/161992/"
]
} |
291,950 | When a C program is running, the data is stored on the heap or the stack. The values are stored in RAM addresses. But what about the type indicators (e.g., int or char )? Are they also stored? Consider the following code: char a = 'A';
int x = 4; I read that A and 4 are stored in RAM addresses here. But what about a and x ? Most confusingly, how does the execution know that a is a char and x is an int? I mean, is the int and char mentioned somewhere in RAM? Let's say a value is stored somewhere in RAM as 10011001; if I am the program which executes the code, how will I know whether this 10011001 is a char or an int ? What I don't understand is how the computer knows, when it reads a variable's value from an address such as 10001, whether it is an int or char . Imagine I click on a program called anyprog.exe . Immediately the code starts executing. Does this executable file include information on whether the variables stored are of the type int or char ? | To address the question you've posted in several comments(which I think you should edit into your post): What I don't understand is how does the computer know lets when it reads a variable's value from and address such as 10001 if is an int or char. Imagine I click on a program called anyprog.exe. Immediately the code starts executing. Does this exe file include information about if the variables are stored as in or char? So lets put some code to it. Let's say you write: int x = 4; And let's assume that it gets stored in RAM: 0x00010004: 0x00000004 The first part being the address, the second part being the value. When your program(which executes as machine code) runs, all it sees at 0x00010004 is the value 0x000000004 . It doesn't 'know' the type of this data, and it doesn't know how it is 'supposed' to be used. So, how does your program figure out the right thing to do? Consider this code: int x = 4;
x = x + 5; We have a read and a write here. When your program reads x from memory, it finds 0x00000004 there. And your program knows to add 0x00000005 to it. And the reason your program 'knows' this is a valid operation, is because the compiler ensures that the operation is valid through type-safety. Your compiler has already verified that you can add 4 and 5 together. So when your binary code runs(the exe), it doesn't have to make that verification. It just executes each step blindly, assuming everything is OK(bad things happen when they are in fact, not OK). Another way to think of it is like this. I give you this information: 0x00000004: 0x12345678 Same format as before - address on the left, value on the right. What type is the value? At this point, you know just as much information about that value as your computer does when it's executing code. If I told you to add 12743 to that value, you could do it. You have no idea what the repercussions of that operation will be on the whole system, but adding two numbers is something you're really good at, so you could do it. Does that make the value an int ? Not necessarily - All you see is two 32-bit values and the addition operator. Perhaps some of the confusion is then getting the data back out. If we have: char A = 'a'; How does the computer know to display a in the console? Well, there are a lot of steps to that. The first is to go to A s location in memory and read it: 0x00000004: 0x00000061 The hex value for a in ASCII is 0x61, so the above might be something you'd see in memory. So now our machine code knows the integer value. How does it know to turn the integer value into a character to display it? Simply put, the compiler made sure to put in all of the necessary steps to make that transition. But your computer itself(or the program/exe) has no idea what the type of that data is. That 32-bit value could be anything - int , char , half of a double , a pointer, part of an array, part of a string , part of an instruction, etc. Here's a brief interaction your program (exe) might have with the computer/operating system. Program: I want to start up. I need 20 MB of memory. Operating System: finds 20 free MB of memory that aren't in use and hands them over (The important note is that this could return any 20 free MB of memory, they don't even have to be contiguous. At this point, the program can now operate within the memory it has without talking to the OS) Program: I'm going to assume that the first spot in memory is a 32-bit integer variable x . (The compiler makes sure that accesses to other variables will never touch this spot in memory. There's nothing on the system that says the first byte is variable x , or that variable x is an integer. An analogy: you have a bag. You tell people that you will only put yellow colored balls in this bag. When someone later pulls something out of the bag, then it would be shocking that they would pull out something blue or a cube - something has gone horribly wrong. The same goes for computers: your program is now assuming the first memory spot is variable x and that it is an integer. If something else is ever written over this byte of memory or it's assumed to be something else - something horrible has happened. The compiler ensures these kinds of things don't happen) Program: I will now write 2 to the first four bytes where I'm assuming x is at. Program: I want to add 5 to x . 
Reads the value of X into a temporary register Adds 5 to the temporary register Stores the value of the temporary register back into the first byte, which is still assumed to be x . Program: I'm going to assume the next available byte is the char variable y . Program: I will write a to variable y . A library is used to find the byte value for a The byte is written to the address the program is assuming is y . Program: I want to display the contents of y Reads the value in the second memory spot Uses a library to convert from the byte to a character Uses graphics libraries to alter the console screen(setting pixels from black to white, scrolling one line, etc) (And it goes on from here) What you're probably getting hung up on is - what happens when the first spot in memory is no longer x ? or the second is no longer y ? What happens when someone reads x as a char or y as a pointer? In short, bad things happen. Some of these things have well-defined behavior, and some have undefined behavior. Undefined behavior is exactly that - anything can happen, from nothing at all, to crashing the program or the operating system. Even well-defined behavior can be malicious. If I can change x to a pointer to my program, and get your program to use it as a pointer, then I can get your program to start executing my program - which is exactly what hackers do. The compiler is there to help make sure we don't use int x as a string , and things of that nature. The machine code itself is not aware of types, and it will only do what the instructions tell it to do. There is also a large amount of information that's discovered at run-time: which bytes of memory is the program allowed to use? Does x start at the first byte or the 12th? But you can imagine how horrible it would be to actually write programs like this(and you can, in the assembly language). You start off by 'declaring' your variables - you tell yourself that byte 1 is x , byte 2 is y , and as you write each line of code, loading and storing registers, you (as a human) have to remember which one is x and which one is y , because the system has no idea. And you (as a human) have to remember what types x and y are, because again - the system has no idea. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/291950",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/189932/"
]
} |
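A small C# sketch of the "same bytes, different interpretation" point from the answer above, using BitConverter; the byte values are arbitrary examples and the output assumes a little-endian machine.
using System;
class BytesHaveNoType
{
    static void Main()
    {
        byte[] memory = { 0x61, 0x00, 0x00, 0x00 };      // four bytes sitting somewhere in RAM
        // Nothing in the bytes says what they are - the reader decides how to interpret them.
        int asInt   = BitConverter.ToInt32(memory, 0);   // 97
        char asChar = BitConverter.ToChar(memory, 0);    // 'a' (0x0061 in UTF-16)
        Console.WriteLine($"{asInt} / {asChar}");
    }
}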
293,288 | I'm building a web application using the MVC pattern. Following this kind of architecture, we can see that all the methods used to interact with the database are implemented in the model. But what happens if I have to call a service exposed by others on the web? For example, I would like to access the Facebook API in order to get all the followers of my page - so where do I put these methods? Obviously the view is not a good idea because this module is dedicated to presentation, the controller should not be used to retrieve data, and the model is usually dedicated only to interaction with the database. So, can you give me some hints about that? And please, can you tell me if I'm making any mistakes about the MVC architecture? | The model is not limited to interaction with the database; the model is responsible for getting and manipulating data. So, to your view and controller, it should make no difference whether the data comes from a database or from a web service or is even totally random; therefore you should do it in the model. MVC is a presentation pattern that only separates the different representation layers. It does not mean that the model has to be a uniform mess of spaghetti code. Your model itself can be layered as well, but the controller should not know where the data comes from. A public method in your model can be structured like this (pseudo-code), which can be called by your controller: public MyDataClass getData(int id) {
WebServiceData wsData = WebService->getData(id);
DatabaseData dbData = ORM->getData(id);
return new MyDataClass(wsData, dbData);
} WebService and ORM may need to be instances of interfaces that can be replaced by mocks via dependency injection, but your controllers and views do not have to change for testing purposes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/293288",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/147758/"
]
} |
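The answer's pseudo-code rendered as a hedged C# sketch: the model method aggregates data behind injected interfaces so the controller never knows whether it came from a database or a remote API. All type names (IFacebookClient, IPageRepository, PageStatisticsModel) are invented for illustration.
public interface IFacebookClient { int GetFollowerCount(string pageId); }
public interface IPageRepository { string GetPageId(int localId); }
// Part of the model layer: it combines data sources; where the data comes from is an internal detail.
public sealed class PageStatisticsModel
{
    private readonly IFacebookClient facebook;
    private readonly IPageRepository repository;
    public PageStatisticsModel(IFacebookClient facebook, IPageRepository repository)
    {
        this.facebook = facebook;
        this.repository = repository;
    }
    public int GetFollowerCount(int localId)
    {
        string pageId = repository.GetPageId(localId);
        return facebook.GetFollowerCount(pageId);
    }
}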
293,435 | SQL injection is a very serious security issue, in large part because it's so easy to get it wrong: the obvious, intuitive way to build a query incorporating user input leaves you vulnerable, and the Right Way to mitigate it requires you to know about parameterized queries and SQL injection first. Seems to me that the obvious way to fix this would be to shut down the obvious (but wrong) option: fix the database engine so that any query received that uses hard-coded values in its WHERE clause instead of parameters returns a nice, descriptive error message instructing you to use parameters instead. This would obviously need to have an opt-out option so that stuff like ad-hoc queries from administrative tools will still run easily, but it should be enabled by default. Having this would shut down SQL injection cold, almost overnight, but as far as I know, no RDBMS actually does this. Is there any good reason why not? | There are too many cases where using a literal is the right approach. From a performance standpoint, there are times that you want literals in your queries. Imagine I have a bug tracker where once it gets big enough to worry about performance I expect that 70% of the bugs in the system will be "closed", 20% will be "open", 5% will be "active" and 5% will be in some other status. I may reasonably want to have the query that returns all active bugs to be SELECT *
FROM bug
WHERE status = 'active' rather than passing the status as a bind variable. I want a different query plan depending on the value passed in for status -- I'd want to do a table scan to return the closed bugs and an index scan on the status column to return the active bugs. Now, different databases and different versions have different approaches to (more or less successfully) allowing the same query to use a different query plan depending on the value of the bind variable. But that tends to introduce a decent amount of complexity that needs to be managed to balance out the decision of whether to bother re-parsing a query or whether to reuse an existing plan for a new bind variable value. For a developer, it may make sense to deal with this complexity. Or it may make sense to force a different path when I have more information about what my data is going to look like than the optimizer does. From a code complexity standpoint, there are also plenty of times when it makes perfect sense to have literals in SQL statements. For example, if you have a zip_code column that has a 5-character zip code and sometimes has an additional 4 digits, it makes perfect sense to do something like SELECT substr( zip_code, 1, 5 ) zip,
substr( zip_code, 7, 4 ) plus_four rather than passing in 4 separate parameters for the numeric values. These aren't things that will ever change, so making them bind variables only serves to make the code more difficult to read and to create the risk that someone will bind the parameters in the wrong order and end up with a bug. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/293435",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/935/"
]
} |
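The trade-off in the answer above can be sketched in JDBC terms: values that come from users are bound as parameters (the injection-safe path the question wants to enforce), while a constant the developer chose deliberately may stay a literal so the optimizer can plan for that specific, skewed value. The bug table and its columns are assumptions for illustration, and resource handling is elided for brevity.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class BugQueries {
    // User input is always bound, never concatenated into the SQL text.
    ResultSet bugsReportedBy(Connection conn, String reporter) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM bug WHERE reporter = ?");
        ps.setString(1, reporter);
        return ps.executeQuery();
    }

    // A deliberate literal: 'active' is rare, so keeping it in the SQL text
    // lets the optimizer choose an index scan instead of reusing a plan
    // tuned for the common 'closed' value.
    ResultSet activeBugs(Connection conn) throws SQLException {
        return conn.prepareStatement(
                "SELECT * FROM bug WHERE status = 'active'").executeQuery();
    }
}

A blanket "no literals in WHERE clauses" rule in the engine would reject the second query even though the literal is exactly what the author intended.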
293,525 | I decided to write a singly-linked list, and had the plan going in to make the internal linked node structure immutable. I ran into a snag though. Say I have the following linked nodes (from previous add operations): 1 -> 2 -> 3 -> 4 and say I want to append a 5 . To do this, since node 4 is immutable, I need to create a new copy of 4 , but replace its next field with a new node containing a 5 . The problem is now that 3 is referencing the old 4 , the one without the appended 5 . Now I need to copy 3 and replace its next field to reference the copy of 4 , but now 2 is referencing the old 3 ... Or in other words, to do an append, the entire list seems to need to be copied. My questions: Is my thinking correct? Is there any way to do an append without copying the entire structure? Apparently "Effective Java" contains the recommendation: Classes should be immutable unless there's a very good reason to make them mutable... Is this a good case for mutability? I don't think this is a duplicate of the suggested answer since I'm not talking about the list itself; that obviously has to be mutable to conform to the interface (without doing something like keeping the new list internally and retrieving it via a getter. On second thought though, even that would require some mutation; it would just be kept to a minimum). I'm talking about whether or not the internals of the list must be immutable. | With lists in functional languages, you nearly always work with a head and a tail, the first element and the remainder of the list. Prepending is much more common because, as you surmised, appending requires copying the entire list (or resorting to other lazy data structures that don't precisely resemble a linked list). In imperative languages, appending is much more common because it tends to feel more natural semantically, and you don't care about invalidating references to previous versions of the list. As an example of why prepending doesn't require copying the entire list, consider that you have: 2 -> 3 -> 4 Prepending a 1 gives you: 1 -> 2 -> 3 -> 4 But note that it doesn't matter if someone else is still holding a reference to 2 as the head of their list, because the list is immutable and the links only go one way. There's no way to tell the 1 is even there if you only have a reference to 2 . Now, if you appended a 5 onto either list, you'd have to make a copy of the entire list, because otherwise it would appear on the other list as well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/293525",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/139925/"
]
} |
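A small Java sketch of the structural sharing the linked-list answer above describes; Node is a hypothetical helper, not the poster's actual class. Prepending reuses every existing node, while appending has to rebuild the whole spine, which is exactly the copying the question ran into.

// Immutable singly-linked node: both fields are final, null marks the end.
final class Node<T> {
    final T value;
    final Node<T> next;

    Node(T value, Node<T> next) {
        this.value = value;
        this.next = next;
    }

    // Prepend: O(1), the old list is shared untouched as the new tail.
    static <T> Node<T> prepend(T value, Node<T> list) {
        return new Node<>(value, list);
    }

    // Append: O(n), every node on the path to the tail must be copied.
    static <T> Node<T> append(Node<T> list, T value) {
        if (list == null) {
            return new Node<>(value, null);
        }
        return new Node<>(list.value, append(list.next, value));
    }
}

With old = 2 -> 3 -> 4, Node.prepend(1, old) yields 1 -> 2 -> 3 -> 4 while old stays valid and shares all three existing nodes; Node.append(old, 5) instead returns a completely fresh chain and leaves old untouched.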
293,851 | I've been reading over and over that functional languages are ideal (or at least very often useful) for parallelism. Why is this? What core concepts and paradigms are typically employed, and which specific problems do they solve? On an abstract level, for example, I can see how immutability might be useful in preventing race conditions or other problems stemming from resource competition, but I can't put it any more specifically than that. Please note, however, that my question is broader in scope than just immutability -- a good answer will provide examples of several relevant concepts. | The main reason is that referential transparency (and even more so laziness) abstracts over the execution order. This makes it trivial to parallelize evaluation. For example, if a , b , and || are all referentially transparent, then it doesn't matter whether, in a || b , a gets evaluated first, b gets evaluated first, or b doesn't get evaluated at all (because a was evaluated to true ). In a || a it doesn't matter if a gets evaluated once or twice (or, heck, even 5 times … which wouldn't make sense, but doesn't matter nonetheless). So, if it doesn't matter in which order they are evaluated and it doesn't matter whether they are evaluated needlessly, then you can simply evaluate every sub-expression in parallel. So, we could evaluate a and b in parallel, and then || could wait for either one of the two threads to finish, look at what it returned, and if it returned true , it could even cancel the other one and immediately return true . Every sub-expression can be evaluated in parallel. Trivially. Note, however, that this is not a magic bullet. Some experimental early versions of GHC did this, and it was a disaster: there was just too much potential parallelism. Even a simple program could spawn hundreds, thousands, or millions of threads, and for the overwhelming majority of sub-expressions, spawning the thread takes much longer than just evaluating the expression in the first place. With so many threads, context switching time completely dominates any useful computation. You could say that functional programming turns the problem on its head: typically, the problem is how to break apart a serial program into just the right size of parallel "chunks", whereas with functional programming, the problem is how to group together parallel sub-programs into serial "chunks". The way GHC does it today is that you can manually annotate two subexpressions to be evaluated in parallel. This is actually similar to how you would do it in an imperative language as well, by putting the two expressions into separate threads. But there is an important difference: adding this annotation can never change the result of the program! It can make it faster, it can make it slower, it can make it use more memory, but it cannot change its result. This makes it way easier to experiment with parallelism to find just the right amount of parallelism and the right size of chunks. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/293851",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/61277/"
]
} |
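The property the parallelism answer above stresses — that a parallel annotation may change speed but never the result — relies on the subexpressions being pure. The same idea can be imitated even in Java as long as the computations share no mutable state; the sketch below is an assumption-laden illustration (expensiveCheck is a made-up stand-in for real work), not how GHC's annotations actually work, and it omits the short-circuit cancellation the answer mentions.

import java.util.concurrent.CompletableFuture;

class PureParallel {
    // A pure function: the result depends only on the argument, so
    // evaluation order, duplication, or parallelism cannot change it.
    static boolean expensiveCheck(long n) {
        return Long.bitCount(n) % 2 == 0;
    }

    public static void main(String[] args) {
        CompletableFuture<Boolean> a =
                CompletableFuture.supplyAsync(() -> expensiveCheck(17));
        CompletableFuture<Boolean> b =
                CompletableFuture.supplyAsync(() -> expensiveCheck(42));

        // Same answer as the sequential expensiveCheck(17) || expensiveCheck(42);
        // running the two halves in parallel affects only the timing.
        boolean result = a.thenCombine(b, (x, y) -> x || y).join();
        System.out.println(result);
    }
}

If the two lambdas touched shared mutable state, the parallel and sequential versions could disagree; referential transparency is what rules that out and makes the transformation always safe.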